2020 was such an interesting year; it was certainly not the one I was waiting for. Due to several issues related to the pandemic, I unofficially took leave from thesis work and had the chance to meet a lot of people virtually, collaborate, learn and grow. Although so many of us were stuck at home, open-science-driven events like Neuromatch and BrainHack created opportunities to connect with colleagues and peers. This turned out to be hugely impactful for me and other people like me: students and early career researchers based in countries with limited resources.
And to think, it all started with a tweet….
Figure 1. Neuromatch 1.0 call for people (image credit: https://twitter.com/KordingLab/status/1239986383550365696).
The Neuromatch 1.0 virtual (un)conference was held from 30th to 31st March 2020. The organizers were so open and encouraging in setting up this collaborative experiment that I couldn't just sit back and wait for the event! I quickly volunteered to help with scheduling, reaching out to speakers before their talks, testing the setup and monitoring the chat during the talks. Even though it was entirely virtual, I learned a lot both academically and socially. It was amazing to witness such vibrant online community interaction. That time also coincided with my coming out and getting comfortable with my chosen name and pronouns. It felt so affirming to be accepted for who I am within the academic community. It was beyond my imagination!
But Neuromatch was just getting started. I was delighted to help organize the second Neuromatch conference, Neuromatch 2.0, held from 25th May to 3rd June 2020. It had more than 3000 registrants from all over the world. Although it used the same platform and structure as the first Neuromatch, we saw even more talks, posters, presentations and debates. This increased community interaction was encouraging, and showed how it was still possible to bring researchers together even without international travel.
Figure 2. Neuromatch 2.0 map of registrants all over the world. (image credit: https://twitter.com/KordingLab/status/1264998609411604480)
After my experiences at Neuromatch, I was overjoyed to discover another large-scale, explicitly collaborative event within the OHBM community: this year’s OHBM Brainhack.
The OHBM Brainhack 2020 was my first Brainhack experience and one of my biggest encounters with the open science community after the Neuromatch conferences. It was truly phenomenal to see enthusiastic hackers, creative projects and inspirational training sessions, and to witness open science research being facilitated virtually all around the world. It was an amazing privilege to serve the community as an event host. In addition to comprehensive guidelines, dedicated help channels and project-specific coding spaces for hacking, a live help desk in the gather.town virtual space was dedicated to guidance and socializing. It is no exaggeration to say that the OHBM Brainhack 2020 provided attendees with a life-like event in the comfort of their homes. As the first virtual OHBM Brainhack, it was a stellar start. For future events, I believe there will be even more diverse representation and more opportunities for generally underrepresented neuroscience researchers all around the globe.
After the OHBM Brainhack I shifted my focus to the Neuromatch Academy summer school (NMA for short). It was a 3-week worldwide summer school and, although entirely online, a tightly structured experience for 1757 interactive students and 191 teaching assistants (TAs). It was not only an intense course but also a huge mental challenge for students, TAs and organizers. It was incredible that such a comprehensive course could happen at low or no cost for students. Diversity, inclusion and equity were core values of NMA that I championed in my involvement. During the preparation period, I helped increase diversity and inclusion and enhance the student experience via extracurricular activities such as weekly karaoke sessions and daily yoga. During NMA itself, I helped fellow TAs in my time zone master the material so they could teach effectively, solved day-to-day problems with the material, and served as the TA of a pod (a group of students in my time zone).
From the interactions among NMA volunteers and TAs, I learned that establishing an inclusive community is key to better, more effective science, learning, teaching and academia.
One of the interesting things I was involved in was meeting several artists to commission neuron doodles that would enrich the virtual experience. I greatly enjoyed explaining the summer school, and basic yet detailed concepts of the neuron, to people without any STEM background. It is hard to find the words for my experience of witnessing an artist's creative process. Seeing the sparks of human creativity turn into an art piece was also very inspirational.
Figure 3. Neuromatch Academy, Mozilla Hub virtual space, student avatars interacting with neuron doodles (image credits: 1st image https://twitter.com/phant0msp1k3/status/1286102782890536960, 2nd image https://twitter.com/neurograce/status/1283047247299706880).
During my time as a TA, I realized that the best way of learning is teaching. I also witnessed that providing a safe space is the most important thing we can do to help students flourish. NMA's Code of Conduct and automatic violation reporting system effectively helped us preserve that safety.
Having gone from Neuromatch, to OHBM, to NMA, I was happy to return to Neuromatch for the Neuromatch 3.0 conference. Although largely the same format as Neuromatch 2.0, it again expanded significantly in size and scope. I believe it was one of the most inclusive neuroscience conferences ever: with six themed parallel tracks and main stage events like lectures, panels, discussions and open affinity group sessions (Black, queer, first generation), it gave the ambience of a big in-person conference. I served as a backend person, mostly by clicking the necessary buttons, ensuring that every speaker's talk ran as smoothly as possible. It was wonderful to see some of the submitted NMA projects turned into complete papers and submissions to NMC; it made me feel proud as an NMA TA.
All of these experiences and the vibrant community of Brainhack encouraged me to organize (almost single-handedly) and lead Brainhack Ankara from December 1st to 3rd, the first ever Brainhack in the Middle East! Taking part in both the Neuromatch event series and the OHBM Brainhack encouraged me to spread open science concepts and run a local event with the help of the global Brainhack community. Since the local community was new to Brainhack, it made sense to focus on learning the basics of open science tools and explaining why and how they can enhance the way we do science. With an overall attendance of about 30 people, it was a comparatively small event. But I learned that to build a vibrant big community, starting small is important.
For me, 2020 was a year of meeting the neuroscience community virtually, and of seeing how open science helps researchers, how a culture of collaborative science fosters better practices, and how vital it is to embrace diversity in research. I learned that listening to, understanding and providing for people's needs is necessary to maintain a collaborative culture and to ensure inclusion in the best possible way. To act on this goal, I recently joined the OHBM Open Science Special Interest Group as an inclusivity officer for the organisation of the 2021 Open Science Room and OHBM Brainhack. In this role, I hope to provide a safe space for people to flourish, open space for underrepresented groups, and encourage initiatives that will enrich the community.
By Rachael Stickland & Nils Muhlert
Professor Helen Mayberg is a pioneer of neuroimaging and neurostimulation for depression. As a behavioral neurologist she has helped to identify the brain circuits implicated in mood disorders, and then developed and refined effective treatments based on deep brain stimulation. She is a member of the National Academy of Medicine, the American Academy of Arts and Sciences and the National Academy of Inventors. A founding member of OHBM, she spoke with us about her work, her experiences of seeing impact statements become reality, and holding on to the ‘OHBM train’.
Nils Muhlert (NM): I'm joined today by Professor Helen Mayberg, professor of Neurology at Mount Sinai, as part of the OHBM oral history initiative. First, can you tell us how and why you became interested in neuroimaging?
Helen Mayberg (HM): I was a neurology resident in the early 80s. Imaging was in its infancy. In medical school, in the late 70s, we had our first CT scanner. MRI was relatively new during my residency training at Columbia, and it was an important diagnostic tool. I planned to train in behavioral neurology in Boston with Norman Geschwind. But in my last year of residency, he passed away suddenly, so I needed a change of plan.
My change in direction to imaging as a focus for further training was the result of a lucky coincidence. One of the first year neurosurgery residents on my team had just come to New York from Baltimore. He had written one of the first papers characterizing opiate receptor subtypes in the brain, and told me about this new research imaging method being developed at Hopkins where you could image chemistry in living people using positron emission tomography. I had always been interested in neurochemical abnormalities in psychiatric disorders but there was no way to study that directly in humans. Despite my intense interest in severe mental illnesses, I didn’t find the training in psychiatry to be a good fit for me, so I pursued my interest in behavior via neurology training. It wasn’t a perfect fit, but neurology seemed a better choice for clinical training than psychiatry, where I just never felt comfortable with their vernacular. While it was still a stretch to understand chemical mechanisms of behavioral disorders, it did provide a foundation for thinking about structure-function relationships in the brain, an approach that really wasn't applied yet to psychiatric syndromes. So suddenly, here was this new scanner that might provide a way to do what I wanted: assess regional chemical abnormalities in patients with mental illness.
I started as a research fellow at Hopkins in the Nuclear Medicine Department, learning PET scanning in 1985. I was learning basic methods to map and model various neuroreceptor systems - mostly opiate receptors, but with some projects involving dopamine and importantly serotonin systems. The lab I was a part of did little to no behavioral mapping studies; it was a dedicated chemical neuroimaging lab. There was very little work on blood flow or glucose metabolism except as ancillary maps for receptor studies. At the same time, in LA, John Mazziotta, and Mike Phelps were working with glucose metabolism to map abnormalities in a variety of neurological disorders. At Wash-U, Marcus Raichle, Peter Fox, Mark Mintun and their colleagues were developing methods for behavioral mapping using cerebral blood flow. Nora Volkow was using various methods at the Brookhaven Labs. There was a relatively small group of teams with PET scanners worldwide that developed specific niches of expertise using this technology.
Whilst working within a chemical mapping group, I was reading about other methods and as my work evolved it was clear that my questions required tools that I didn’t have access to. Because there were so few groups, we were a very small and interactive community. Before OHBM, this type of research was discussed at the Cerebral Blood Flow and Metabolism meetings. The neurologists, like John Mazziotta, Marcus Raichle and Richard Frackowiak, would go to the Neurology annual meeting, and we’d have our own imaging sessions there. You’d also see different imagers at various meetings--Society for Neuroscience, American College of Neuropsychopharmacology, Society for Nuclear Medicine. The PET community was tight and continued to grow, but it was relatively insular, as there were few research PET scanners since you needed a cyclotron.
There was a point where the questions I wanted to ask about depression required methods that were not the focus of our lab. Peter Fox was setting up a new research imaging center in San Antonio; I had met him at a Cerebral Blood Flow meeting and heard what he was up to. It was a method that I didn't know but that would provide a set of tools to take my depression research in a new and potentially interesting direction.
In 1991, Peter invited me to be a founding member of this fledgling new center, so I moved from Baltimore to San Antonio in Texas. I was part of his team, setting up a research lab as part of the brain imaging group. We were very small, and focused on PET scanning but I was no longer doing any neuroreceptor mapping studies. By this point, I'd moved to studying depression exclusively. I worked as a clinical neurologist but did my research with collaborators in psychiatry using imaging.
In the early to mid nineties, The Research Imaging Center (RIC) was host to the original Human Brain Mapping workshops. As part of these workshops, there were ongoing discussions of how to develop common platforms to share data. The RIC team started compiling spreadsheets of coordinates of brain activation findings from the literature. These were the early days of imaging meta-analyses, done by hand. While the work was slightly peripheral to my own studies, I couldn't help but become involved. So I received an education in the world of mapping beyond my own area and saw a style of thinking that was way ahead of its time. We now take so much for granted with our contemporary approach to big data and data sharing. It was laborious work in those days.
NM: It sounds like a very exciting time - certainly within that lab.
HM: It’s funny. For every scientist you can never know when you're in the middle of something important, if anything you're doing will have long-term traction. We delude ourselves, write grants, make statements about the potential impact and how significant we think our work is. In fact, it's only when looking backwards that we can actually see how it all evolved. It's hard sometimes to be reminded that during my time in medical school, CT scanning was new. We studied dead brains, we had anatomical atlases, we had white matter maps from studies in nonhuman primates that we used to mentalize how brain regions were connected to each other. We mentalized a connectome in our head, by piecing together these various studies. We didn’t yet have access to multimodal mapping. I don’t think I could have even conceptualized such methods.
What was a paradigm shift of imaging, was that you could directly test your hypotheses rather than simply make interpretative inferences from pathology or animal models of behavior. Not only did technology allow visualization of the brain in action, but with time the choice of methods greatly expanded. My neurology professor in medical school said: “pick a topic, not a method.” At that time I was learning PET scanning. He said, “you'll reach a point where your current methodology no longer allows you to answer your question. So you’ll learn new methods and tailor your questions as methods evolve.” As a clinician and not a technical or methods developer, that was incredibly important advice - don’t just learn a method for method’s sake, but learn a method in service to your clinical question. That's been my approach since then. So what I know the most about, PET scanning, is something I don't even do much anymore. But the use of imaging as my experimental method has never changed, I have just learned to adopt new imaging tools to best address my next depression study. One can now pick and choose, and with that range of choices you can really go deep to answer your own questions.
NM: On that note, one highlight of your career is that you’ve helped identify the role of Brodmann area 25 in basic drives affected in people with depression. When did you start focusing on this area and how did that come about? Was it as a result of these meta-analyses?
HM: We didn’t go looking for it. We just followed the data and there it was. But it wasn’t on our radar with our early studies, which started with examining post-synaptic dopamine and serotonin receptors in post-stroke depression patients. These studies were complemented by studies of opiate receptor changes following electroconvulsive therapy as a model of epilepsy, and by studies of resting state abnormalities in basal ganglia disorder patients with and without depression. Our goal was to test the hypothesis that, regardless of etiology, there was a common set of regions affected in patients with depression. We were working to define a depression circuit. As a common pattern of limbic-cortical abnormalities emerged, we felt it was reasonable to move to study primary depression, where clinical heterogeneity was well described. I presented that set of findings in 1989 at one of the Cerebral Blood Flow & Metabolism meetings. We did the next natural experiment: how the abnormalities change with treatment.
For the most part, we found what everybody else was finding in depression, low metabolism in the frontal lobes. When we treated people, the frontal activity normalized--it increased. By using the statistical methods Peter Fox had brought to San Antonio from Wash U-- change distribution analysis-- combined with new computer algorithms and higher resolution scans, we could further examine the whole brain instead of predefined regions of interest. I remember analyzing a specific set of data of depressed patients studied before and after successful antidepressant treatment. I was looking at the statistical change maps and I figured there must be some sort of misregistration artifact. I kept looking at the pictures. It was midnight and, all of a sudden, I realized the ventral parts of the brain were showing decreased activity while dorsal parts of the brain were showing increases. I squinted my eyes and looked at what I was seeing: there were brain regions that weren't abnormal at baseline that showed decreasing activity as people got better. When I looked up the brain regions, I found we were in the subgenual cingulate. I had my Talairach atlas, and I'm looking up with the ruler where I am - it was all done by hand, and I thought ‘what the hell is this BA25?’
Actually, the Talairach atlas I had misidentified it. So people thought for many years that it wasn't really in Area 25, and I say look, I'm just following Talairach. I went to see who else had seen anything in this region. I found changes in this region in a study by Jose Pardo on mood induction. We tried looking at correlations, to see which part of this multi-node network went with which symptoms of depression, and I couldn't separate out the mood from the attention symptoms with the data I had. Peter suggested “Let's do a blood flow scan following a mood induction.” The intention was that if we induced a negative mood in healthy volunteers we would determine if you could dissociate the presumed limbic-emotional regions from the cognitive cortical regions in this presumed depression network. To our surprise, mood induction did indeed reveal limbic activations and they were in Area 25, but it also decreased activity in the prefrontal cortex--the same regions identified in the depressed patients. Area 25 and the prefrontal cortex were inversely correlated with each other in both experiments--depression recovery mapped over six weeks and sad mood induction over 2 minutes, with the magnitude of behavioral changes correlating with both regions. In essence, our hypothesis was just wrong; we couldn’t induce change solely in limbic regions by focusing on mood. The two systems, limbic and cortical, could not be separated. Obviously a simplistic notion if viewed through today’s use of graph theory and dynamical modeling approaches to time series data. But at that time, these simple experiments using blood flow and glucose metabolism PET gave us one of our most important insights--these regions were yoked and worked as a synchronized limbic-cortical circuit to mediate the interaction of mood and cognition.
This pattern of reciprocal changes involving midline and lateral cortex regions was new. Today, we would look at this pattern and immediately see the default mode and executive networks displaying their typical anti-correlation with each other. But then, in the mid 90s, that concept was just developing. We looked at it thinking ‘what are these regions and what do they do?’
Area 25 had very little written about it. It was described in the animal literature as a visceral motor outflow area and not necessarily a mood area. You could even find references to its homologue in lizards, as it's a very old, highly conserved, part of the brain. I would get into fights with rodent anatomists by asking about the rodent equivalent: “is it infralimbic? Is it prelimbic?” Lots of opinions as to whether or not it is even a good idea to attempt to match rodent and human prefrontal cortices, if one is really interested in studying depression, a uniquely human clinical construct. That was sort of a turning point; if I wanted answers I needed to really learn to read the tract tracing studies done in nonhuman primates and learn the connections between regions by looking at combinations of anterograde and retrograde studies. Little did I know that I was laying the foundation for future work that would rely on maps of structural connectivity defined using DTI.
NM: And then you later moved into intervention studies, where you used targeted deep brain stimulation (DBS) of region BA25 to see how it affects symptoms. What was that like - setting that up and seeing the results from those first studies?
HM: In all honesty, I became an interventionist almost by accident. I wasn’t a trialist; I merely used treatments as probes to better understand depression and treatment mechanisms. I spent the first 20 years basically trying to prove that depression was a circuit disorder, first by identifying the nodes, and then the connections, and making inferences about causal relationships using changes with various kinds of treatments. There came a point in Toronto where findings in Area 25 were so consistent across all of our treatment studies that we hypothesized that if you didn't downregulate this region, then people didn't get better. It seemed to be really at the center of the antidepressant treatment response.
The idea to target Area 25 with DBS for treatment resistant patients was highly influenced by the neurosurgical literature and the evolution of ablation to DBS for Parkinson’s disease. The leading theory about DBS mechanisms at the time posited that high frequency stimulation resulted in a local depolarization block. We had consistently demonstrated that effective antidepressants decreased or blocked activity in Area 25, and that if you couldn’t block it you didn’t get better. We followed that logic to hypothesize that if you can't talk or drug or shock it down, maybe you could block it with targeted stimulation delivered very precisely at this node in the network.
Everything I knew about connectivity (even though at that point, there were no tractography tools available to us, so implied connectivity) was that if you downregulated a region such as BA25, maybe you would also get disinhibition of regions it was connected to. The DBS technology at this point, in 2002 or so, was well established and readily available. I had a surgeon that was willing to test my hypothesis. It was actually very much an imaging-driven idea. If I hadn't been doing imaging, would I have even thought about it? Probably not, but I was in the right place at the right time. We had the maps that pointed to a putative DBS target for treatment resistant depression, a surgeon with extensive experience with DBS for Parkinson’s disease and a team of investigators willing to learn about DBS and manage this group of extremely ill patients with this novel intervention. It was in some ways a natural next step for our ongoing studies. So that's why we did it - because we could. But the logic was basically built on that first mood induction depression recovery finding.
NM: It's as we discussed before - the impact statement becoming true over time, where you think ‘what areas are involved?’, then ‘what can you do about it?’ And here you've got a great example.
You were involved in the creation of OHBM. What was your role?
HM: Well, mainly I was involved because I showed up. I attended the first meeting in Paris, which was a natural extension of the Cerebral Blood Flow meetings I had been attending since starting my post-doc in PET imaging. With time, I became an officer; I was elected secretary in 2000, and served from 2000 to 2003. It's interesting that many of the originators of OHBM were clinician-scientists. Several of the key drivers - Mazziotta, Fox, and Evans all in North America - had a grant together, and joined forces with many key thought leaders and teams worldwide to make it happen. A shared vision. Again, being in San Antonio with Peter, I had a front row seat to the evolution of the organization. Timing and opportunity are a common theme here.
When I think back, how could anybody not participate? It was happening all around me. So you get on the train and hold on and see where the journey takes you. We all had a ringside seat, and saw an idea grow and mature. Like any diverse scientific community, building an infrastructure that requires not just expertise but buy-in and cooperativity is challenging. But like any democracy, there was a lot of trial and error and compromise seeing what worked, what the community, the stakeholders wanted; it evolved by taking great ideas and giving them space to evolve and mature. What was great was it was very inclusive - methodologists, clinicians, statisticians, engineers, all topics, all scan types, multimodal approaches, new science, courses, and great opportunities for networking. The multinational and multidisciplinary collaboration that established OHBM has continued to define it and foster its unique position among imaging meetings.
NM: And what have you found most rewarding about your experience in holding on to that train with OHBM over the years?
HM: Well, I've had the opportunity to collaborate with people world-wide and adopt a multimodal imaging approach to our team’s clinically-oriented research questions. Maximizing use of novel technologies is at the core of our work--with critical reliance on state-of-the-art engineering and statistics. OHBM is where I can always count on seeing the newest technical and analytic advances and where discussion is scholarly and collegial. Our own work is quite iterative, so it’s useful to see a new method used by others before jumping in ourselves. OHBM provides an important sounding board for our ideas and I have always found the meetings personally and scientifically rewarding.
OHBM has evolved beyond anything any of us could have imagined. Technological advances have been the critical catalyst but applications of the technologies have been important drivers. Perhaps I am biased, but imaging in one way or another has been at the center of many of the advances in neuroscience over the last 50 years.
NM: And what do you see as the most promising things that are coming out now?
HM: Like with anything, progress is not linear. Sometimes it seems like it's three steps forward, and then one step backward or sometimes even sideways. I'm reminded of one of the first imaging meetings I attended prior to OHBM where we would sit and listen to thought leaders debate the advantages of their particular methodology. It was a curious sort of testosterone storm of statistical one-upmanship. It was as though one method had to defeat all others.
It has been fascinating to be part of our maturation as a field. Where our focus is on matching methods and technologies to a specific category of question rather than assuming one size fits all. How could it be otherwise? That's the natural evolution of any field… the first thing is you don't believe it, the second thing is it's obvious, then it evolves to be much richer because everybody starts to dig into working out the details.
Right now, I think we're going through a stage where there's so much data that we don't know how to parse it. We're at a time where doing experiments that people care about is expensive and hard. Early on, the focus was on ensuring the methods were valid and reliable. Like any broad field, people have different interests. I am grateful to know that there are people pushing the limits of the technologies and those using it to understand basic principles of brain function; the big data consortiums with multimodal data archives for general use are priceless resources for the community. As a depression researcher, I want to exploit the technology; ‘If I take on learning a new method, I need to decide if it's worth it.’ Then, ‘How is it going to help us test our next set of hypotheses?’ I don't think it's just because you're a physician that you want to do that. Everything is hard and time consuming - how do you have meaning in the way you spend your time, scientifically?
We're back to a plateau where we're “fighting” about ‘what's the right way to do it? Do we believe anything we know? Is it all an artifact? How do we replicate?’. We're learning that the brain is very adaptive. And when we think we control an experiment, we don't control it as well as we thought we did.
We're looking for big signals. We've gone from working with deforming the brain into a common space (which was to increase signal to noise and you didn't believe it until you saw it robustly across subjects) to trying to understand inter-individual variability (if you don't understand the individual, you know nothing). You can get dizzy, realizing these are natural evolutions. Everybody's right, just not at the same time.
This is the beauty of the OHBM culture: you can develop tools to answer the question in the way you want to. So for me, as a very specific example, we guessed based on a blob on a PET scan, where Area 25 was, stimulated with an invasive implant in that approximate spot, and made people better. It worked. And then we’ve spent the next 15 years trying to figure out what exactly we did, how to do it better, and why it worked.
Imaging has remained a key method towards these goals. For instance, Kisueng Choi in the lab had identified the critical white matter bundles that mediate the DBS treatment effects and developed tractography methods to reliably define the optimal surgical target in any individual patient. I love and use the data from the human connectome to test ideas, but at some point, we need to make a decision about an individual patient's brain. Where is the spot? Can it be visualized reliably? Can we hit it with millimeter precision? I’ve got to make a map for the surgeon to put the electrode where we say with a high level of accuracy. And then be able to show prospectively, that what we wanted to do, is in fact what we did.
I am envious, as we all are, of the amazing advances in circuit mapping techniques using cell-specific labeling, such as optogenetics and CLARITY. While I can learn a lot from these exquisitely detailed maps in rodents and more recently non-human primates, I also need reliable lower resolution methods because I don't have the luxury of single cell stimulation. We inject a large amount of current into a pretty sizable brain area that contains a mix of many cell types and passing fibers. It remains a real mystery how such nonspecific stimulation works, but it does. Obviously, more advanced methods will evolve. But for now, we work with what we have. That said, we’re always looking to see what new methods people are developing. When I was a kid, I used to hang out with my uncle, who was a biochemist and nuclear medicine physician in the pre-PET era. He used low resolution detectors to measure radioactively tagged chemicals in the brain, often injected through the ventricle during surgery and scanned later. The images were horrible, like looking at a fuzzy bowl of soup. You could measure changes in brain concentration of various compounds, but without the spatial detail. Still, it was really amazing we could do it at all. Now we're working to improve on 0.8 mm isotropic voxels, looking to push the envelope further. All of this change in less than 50 years; I am just sorry my uncle missed all of this. He would have loved it.
NM: I could almost end there but one last question about your personal experiences attending OHBM. Are there any moments that stand out for you?
HM: There are so many. To hear the giants of imaging give the Talairach lecture is always a thrill. But I think, like any meeting, it's the camaraderie, the openness of students who read your papers and want to get your opinion on their work; to meet both old and new heroes and to maintain relationships with colleagues over 30 years; to be able to sit down and talk or just hang out. I always enjoyed the grandeur of the big lectures and the rigor of the science as well. But it has always been the relaxed atmosphere that catalyzes new ideas and new collaborations. I can remember meeting the Oxford team -- Paul Matthews and Heidi Johansen-Berg -- and that short chat changed the course of my fledgling tractography work. I remember making sure to arrange meetings around World Cup matches; resting up for the dance party night; figuring out the train to Sendai. Just so many big and small wonderful memories.
NM: Professor Mayberg, I'd like to thank you very much for joining us. It was a fascinating insight into your experiences with OHBM.
HM: Well, thanks so much for including me. It's really an honor.
Ilona Lipp (Lead editor):
Wearing a mask on a beach in Connecticut in December (at 15°C), I feel this pretty much sums up 2020, a year ridden by both the pandemic and climate change!
In the coming year I look forward to more enlightening interviews as well as exploring new avenues for lay media blog posts with members of the Communication Committee.
Nils Muhlert (ComCom Chair):
This year has taught many of us a lot about ourselves. Personally, I found out that I'm terrible at baking sourdough bread. Alongside all the awfulness, there have been some real highlights. Inviting new people to join the blog team and seeing their first contributions is definitely up there. As is passing the baton of blog team lead and ComCom chair to Ilona Lipp; the OnDemand tutorials she's been leading have become a great resource for those wanting expert-led introductions to the many flavours of MRI. I hugely enjoyed interviewing some of the original founders of OHBM, including John Mazziotta and Helen Mayberg. More to come next year! Looking ahead, the prospect of an effective vaccine and a gradual return to an upgraded normality are certainly beacons of hope. With a bit of luck I'll see many of you again at OHBM2022 in Glasgow, if not virtually at OHBM2021. Have a good holiday all, and I hope you come back rested and recharged.
By Valentina Borghesani, Elvisha Dhamala, Niall Duncan, Marie-Eve Hoeppli, and Michele Veldsman, on behalf of the SEA-SIG
This month, OHBM announced the formation of a new Special Interest Group that will tackle sustainability and environmental issues around brain imaging.
Here, we talk with the Sustainability & Environment Action (SEA) SIG Chair Charlotte Rae to hear more about what the new SIG will seek to achieve.
Why do we need a new Sustainability & Environment SIG?
Awareness of the environmental impact of human activity has never been higher, and there is now strong international consensus that we urgently need rapid action to tackle multiple crises, including dangerous climate change and irreversible ecosystem degradation. Neuroimaging research activity plays a part in these crises - from liquid helium extracted through fossil fuel production, to the energy usage of big data. We all have a responsibility - especially as professional scientists - to address these issues and move towards a sustainable future.
We have set up the new SIG so that we can have a community conversation around how to enact the changes that are required. For example, we plan to do some work around measuring and assessing the environmental impact of a neuroimaging workflow, from data acquisition to data analysis and even publication. One back-of-the-envelope calculation puts the carbon footprint of a single MRI scan session at 160 kg, and we know that server activity has a big impact - especially resource-hungry approaches such as machine learning. Once we've quantified the size of the problem, we aim to provide a set of guidelines and recommendations for sustainable neuroimaging practices.
We are also really keen to work together with Council, the Executive Office, and colleagues across our community to decarbonise the annual meeting. There is growing recognition that 3000 of us flying across the globe annually isn't compatible with a safe future on this planet: one transatlantic return trip generates nearly 2 tonnes of carbon dioxide. That's roughly an individual's entire annual 'carbon budget' if we're going to limit warming to the 1.5°C target set by the Paris Climate Agreement. We need to develop positive and practical alternatives that the whole of our neuroimaging community can get on board with, whether that's 'hub-and-spoke' models, where you meet colleagues locally on your own continent, supporting hybrid in-person and online interactions, or reducing meeting frequency.
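The arithmetic behind these figures is simple enough to sketch. The numbers below are the rough, illustrative estimates quoted above (not measured values), and the variable names are invented for this example:

```python
# Back-of-envelope estimate of the annual meeting's travel footprint,
# using the approximate figures quoted in this post (all illustrative).

TRIP_TONNES = 2.0        # tonnes CO2 per transatlantic return flight (rough)
ATTENDEES = 3000         # approximate in-person OHBM attendance
PERSONAL_BUDGET = 2.0    # rough annual per-person carbon budget for 1.5 C

# Total meeting travel footprint, in tonnes of CO2.
meeting_footprint = TRIP_TONNES * ATTENDEES

# How many whole annual personal carbon budgets that footprint consumes.
budgets_consumed = meeting_footprint / PERSONAL_BUDGET

print(f"Meeting travel: ~{meeting_footprint:.0f} t CO2")
print(f"Equivalent to ~{budgets_consumed:.0f} annual personal carbon budgets")
```

Even with generous error bars on each input, the conclusion holds: a single in-person meeting can consume thousands of personal carbon budgets, which is why hub-and-spoke and hybrid formats are on the table.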
We have a lot of work to do! But our sister SIGs have shown that with international collaboration across our brain imaging community, we can achieve rapid change. The Open Science SIG has changed the way we think about open neuroimaging. The Diversity and Inclusivity Committee, set up in 2016, now has a dedicated symposium slot at every annual meeting. As has already happened for open science and inclusivity, we can aspire to drive rapid uptake of sustainability awareness and action amongst our community too.
How can OHBM members get involved?
We plan to hold regular open SIG ‘community meetings’ where any OHBM member can share their thoughts on what our priority actions should be for the SIG to take forward. This might be decarbonising the annual meeting - such as building on the 2020 and 2021 digital meetings to ensure we don’t simply return to 3000 members creating a huge travel footprint every June post-COVID. Or tackling the question of big data - how can we run our analyses sustainably when server manufacture has a huge ecological impact, and energy to perform computations often still comes from fossil fuels?
Once we know our priorities for action, we want to establish SEA-SIG working groups so that we don't just 'talk the talk' about what the problems are, but 'walk the walk' to figure out what changes need to happen. Ultimately, we want to be able to produce some guidance as to how neuroimagers can go about greening our research practices. We need OHBM members with expertise across MRI physics, computing, and analysis practices to get involved!
It's also crucial that we have lots of input from early career researchers. Our current generation of trainees are going to have to live with the consequences of dangerous climate change for much of their lives - it is already happening, and is only going to get worse. We hope we can amplify the voices of ECRs, who we know often feel very strongly that rapid urgent action is necessary, but who are not always heeded by those in power.
If you would like to get involved with any of our activities or receive updates about what we’ve been doing then contact us at firstname.lastname@example.org.
You are also most welcome to come to our first community meeting, on Tuesday, 15th December via Zoom (with two sessions: 09.00 UTC and 18.00 UTC, to accommodate colleagues in different timezones). We will outline what the climate crisis and ecological emergency mean for us as neuroimagers, before we collaborate in small groups to determine priority aims for the SIG to pursue. Register to attend here: https://forms.gle/vVF3ydnJCyArobdj6
We are also looking for colleagues to join our Committee, in the posts of Webmaster, and Social Media officer. Please contact us on email@example.com if you are interested in taking on either of these roles.
Finally, you can follow us on Twitter, @OhbmEnvironment.
We hope to see you at a SEA-SIG community meeting soon!
By: Rosanna Olsen, Amanpreet Badhwar, Valentina Borghesani, Lee Jollans, Hajer Nakua, Laura Marzetti, Nils Muhlert, Pradeep Reddy Raamana, Tilak Ratnanather, and Lucina Uddin on behalf of the OHBM Diversity & Inclusivity Committee
In June 2020, OHBM made a statement condemning the murders of George Floyd, Breonna Taylor, and Ahmaud Arbery as well as ongoing actions of police brutality against Black Americans and underrepresented minorities around the world. During the conversations surrounding these events, there was a public recognition of the lack of support for Black and minority communities. We realized that at OHBM we have not done enough to support underrepresented minorities in science, and that we need to take concrete actions to make our organization a welcome and safe environment that educates and supports each and every member of our group.
To achieve this goal, we need to gain a better understanding of the experience of OHBM members and their sense of belonging within the organization. Hence, the Diversity and Inclusivity Committee (DIC), with support from the OHBM Council, will conduct a survey to learn how welcome and comfortable members feel within the organization, at the Annual Meeting, and at other satellite events. This survey will also allow for anonymous reports of any experiences of discrimination based on race/ethnicity, gender, sexual orientation, religion, disability, or affiliation with any other marginalized group.
The DIC has developed an anonymous two-part survey: the “Survey of Member Views on Inclusivity at OHBM.” The first part of this survey will collect crucial information from OHBM membership and will eventually become a permanent resource for anonymous feedback for all of our activities. Survey responses will identify areas of concern, flag problems, and identify actions that OHBM can then work to improve. The second part of this survey will collect demographics and other identification characteristics of our membership. If you do not want your answers to this part of the survey linked to the first part, there will be a place to indicate this in the survey itself.
The Survey of Member Views on Inclusivity at OHBM will be sent to OHBM members in December 2020. The survey will take around 10 minutes to complete, and your input will be incredibly valuable, as we aim for a complete picture of our membership's unique experiences. A high response rate will provide us with a more representative picture of our diverse attributes and needs, which will provide a better basis for improving our organization. The survey will provide an opportunity for members to give feedback regarding both what *is* and what *is not* working at OHBM, and what you think should be done to make OHBM more inclusive, for everyone. We also welcome any suggestions on how to improve our survey for subsequent data collection efforts.
by Claude Bajada
The GDPR is a new(ish) piece of European Union legislation that regulates the processing of personal data whenever the person processing or controlling the data is in the EU, even if the actual processing occurs outside of the EU. It also sometimes regulates the processing of personal data of people who are in the EU, even when those doing the processing are outside of the EU.
How does this affect neuroimaging? We sit down with neuroimaging expert and Open Brain Consent co-author Dr Cyril Pernet (CP) and Technology law expert Dr Mireille Caruana (MC) to discuss the implications of this law on our work.
This article alternates between the terms "participants" and "data subjects", since "data subject" is the term used in the GDPR; for the purposes of this article, you can treat them as equivalent.
What follows is a summary of our conversation, edited for conciseness and clarity.
Who are our experts?
CP: I do a lot of method development in neuroimaging and in a clinical context. Data sharing is something that I have always been happy to work towards. Data sharing is like code sharing, we need it for good science. With the advent of the GDPR, we've got some extra constraints on what to share and how to share.
In the clinical context, the typical thing is to say is: “Oh, you know, we have patients’ data, therefore, privacy issues,” and people don't even try to share. This really annoys me because there are ways we can do it. It doesn't have to be completely open on the web so that everybody can download it. I've been working on all sorts of open science related projects and the Open Brain Consent is part of that line of work.
MC: I am the head of the Media, Communications and Technology Law Department within the Faculty of Laws at the University of Malta, and my research has, since before the GDPR, focused on privacy and data protection issues. I would not contradict you that the GDPR is a relatively new law that has, from the start, been subject to a lot of uncertainty and difficulty in implementation and application. It is well worth working our way through the legislation to seek correct interpretations of it.
Why is it important to discuss GDPR across disciplines?
CP: We are scientists; when we read the GDPR text, we don't understand the implications. We do not know how judges will interpret the law. This means that we need lawyers to guide us on how to interpret what is written there.
MC: The problem is that in many instances there aren’t clear answers. In fact, while a lawyer may give legal advice, it may eventually be contradicted by a court. Nevertheless, scientists should behave as diligently and carefully as possible. If the perception of the GDPR ends up restricting research or not allowing researchers to do their work, that's a problem. It shouldn't be that way. But achieving this balance is very difficult.
Anonymous data are not governed by the GDPR. Do you think there's anything within neuroimaging that can be considered anonymous?
CP: In my opinion, one of the key points in the GDPR that is relevant to neuroimaging is that neuroimagers are able to single out individuals from datasets, which makes the data identifiable. And I'm not just talking about brain structure data; I'm also talking about EEG data, MEG data, and so on. With connectome matrices and a few tasks you can single out individuals, so we can consider any imaging data identifiable. Others disagree with me and argue that singling out is not strictly identifiability, while I contend the opposite, because the GDPR indicates that singling out is a prerequisite to identification.
This is a key difference between North American legislation and the GDPR. While North American legislation differentiates between anonymised data, pseudonymised data and identifiable data, the GDPR only distinguishes between anonymised data and identifiable data. Pseudonymisation is just a process; data can go through that process without changing their status as identifiable. That is, we can remove the face, the ID, and so on, but brain imaging data remain identifiable, in that we can potentially distinguish between individuals and, even without the metadata, link those data to someone by name.
Can we have an example?
CP: Imagine, for instance, that we have two independent datasets consisting of connectome matrices and tasks. There may be individuals who have been participating in each of those datasets. So we can now think about linking them and studies have indeed shown that it is possible to say that the same individual belongs to both datasets, because of the way connectomes look. Not only can we single out people within datasets, but we can also link datasets, and possibly by adding associated metadata we are getting even closer to identifying that person in the real world.
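The cross-dataset linkage CP describes can be illustrated with a toy "fingerprinting" analysis on synthetic data. This is a minimal sketch, not the method of any specific study: the subject counts, edge counts, and noise model below are all invented for illustration. The principle - each subject's connectome correlates far more strongly with their own data from another session than with anyone else's - is what makes singling out possible:

```python
# A minimal sketch of connectome "fingerprinting": matching individuals
# across two datasets by correlating their vectorised connectome matrices.
# All data here are synthetic; real analyses use functional connectivity
# matrices estimated from separate scan sessions.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_edges = 20, 500

# Each subject has a stable "trait" connectome plus session-specific noise.
traits = rng.normal(size=(n_subjects, n_edges))
session_a = traits + 0.3 * rng.normal(size=(n_subjects, n_edges))
session_b = traits + 0.3 * rng.normal(size=(n_subjects, n_edges))

# Cross-correlate every session-A subject with every session-B subject.
# np.corrcoef stacks the rows, so the cross block is the top-right corner.
corr = np.corrcoef(session_a, session_b)[:n_subjects, n_subjects:]

# Identify each session-A subject as the best-correlated session-B subject.
predicted = corr.argmax(axis=1)
accuracy = (predicted == np.arange(n_subjects)).mean()
print(f"Identification accuracy: {accuracy:.0%}")
```

With even modest session noise, identification accuracy is typically near-perfect, which is exactly why CP argues that connectome data should be treated as identifiable rather than anonymous.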
Are there any proposed solutions for this problem?
CP: The solutions we have come up with are detailed in the Open Brain Consent and involve two consent forms, as well as a data user agreement for data collected in the EU. One form is consent for the study itself; the other is consent for people to share their data. The way you can legally share the data is through a data user agreement, not through a licence, which means we 'control' who has access and, to a lesser extent, what can be done with the data. That control can be implemented by having people register to use specific datasets. The Netherlands, for instance, has a good system because every researcher is registered in a database. If you log into the system of a particular institute, they know who you are and which institution you are affiliated with, and you can just download data even if you're not part of the data-holding institute. This is possible because they can identify you, and you can sign the data user agreement with a simple click.
A user agreement also helps researchers share data outside of the European Union. The GDPR refers to this as "standard contractual clauses." This allows you to get to a point where non-EU researchers can download the data and become the data controller. With the data user agreement, the downloader agrees to the terms of the GDPR. This way you can share data anywhere in the world, even outside the EU. But you cannot just put your data up on OpenNeuro. This is important since OpenNeuro's servers reside in the US, and the US is special because it is not considered a "safe country" by the EU. Institutions can sign an agreement with the EU to become a safe repository, but that also means OpenNeuro would have to change its infrastructure to support data user agreements.
Where does consent come into all of this? Could I just get consent from my participant to share all of my data in the US, and the rest of the world?
MC: In the GDPR, sharing or transferring data is considered to be a type of processing. Let's forget about how the original data were collected and focus on the sharing of these data. In this case, you should still have a legal basis for processing in the GDPR. I am also assuming that they're sensitive personal data, since I am assuming that they say something about an individual’s health status.
Article 9 of the GDPR has a legal basis specifically for research data processing. So perhaps you don't need to rely on consent to share data because there is another legal basis which speaks about the necessity for scientific research. However, this legal basis is somewhat unclear in its application because it speaks about individual member states laying down a law that provides appropriate safeguards.
With regard to data transfers to a third country such as the US, chapter 5 of the GDPR concerns transfers of personal data to third countries or international organisations. According to Article 45, transfer of personal data to a third country may take place where the EU Commission has decided that the third country, or one or more specified sectors within that third country, ensures an adequate level of protection. Such a transfer does not require any specific authorisation. In the absence of an adequacy decision, a controller or processor may transfer personal data to a third country only if the controller or processor has provided appropriate safeguards, and on condition that enforceable data subject rights and effective legal remedies for data subjects are available.
Under Article 49, in the absence of an adequacy decision, or of appropriate safeguards, a transfer or a set of transfers of personal data to a third country may take place only on one of a set of stated conditions, which include that “the data subject has explicitly consented to the proposed transfer, after having been informed of the possible risks of such transfers for the data subject due to the absence of an adequacy decision and appropriate safeguards”.
How do we deal with requests for deletion of data?
MC: Article 17, GDPR sub article 2 states that “Where the controller has made the personal data public and is obliged pursuant to paragraph 1 to erase the personal data, the controller, taking account of available technology and the cost of implementation, shall take reasonable steps, including technical measures, to inform controllers which are processing the personal data that the data subject has requested the erasure by such controllers of any links to, or copy or replication of, those personal data.” It talks about reasonable steps that would, by way of good practice, mean a record of people who accessed the data and contacting them to inform them about the request.
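By way of illustration only - this is a hypothetical sketch, not part of the GDPR text or the Open Brain Consent, and all names in it are invented - the "reasonable steps" record MC describes could be as simple as a download log that lets a data controller list everyone who must be informed of an erasure request:

```python
# Hypothetical sketch of an access record supporting Article 17(2):
# log every download of a dataset so that, on an erasure request,
# all known recipients can be contacted and informed.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AccessLog:
    records: list = field(default_factory=list)

    def log_download(self, dataset_id: str, user_email: str) -> None:
        # Record who downloaded which dataset, and when.
        self.records.append({
            "dataset": dataset_id,
            "user": user_email,
            "time": datetime.now(timezone.utc).isoformat(),
        })

    def recipients(self, dataset_id: str) -> set:
        # Everyone to notify about an erasure request for this dataset.
        return {r["user"] for r in self.records if r["dataset"] == dataset_id}

log = AccessLog()
log.log_download("ds-001", "lab-a@example.org")
log.log_download("ds-001", "lab-b@example.org")
log.log_download("ds-002", "lab-a@example.org")
print(sorted(log.recipients("ds-001")))
```

A real repository would persist this log and tie it to the identities established by the data user agreement, but the core obligation - knowing who holds copies - is just careful record-keeping.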
How long can we store data for?
CP: You are required to set a time frame within which you must review the need for continued storage of the data. However, as long as the data remain necessary, they can be kept indefinitely.
Is it true that under the GDPR, legally, you're not allowed to reuse your own data in your own lab to answer different questions than what it was originally collected for?
MC: The GDPR speaks about purpose limitation (“personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes”) and ‘specific’ consent (“‘consent’ of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes…”). So ideally, I think even ethically, your research participants should understand how you're going to use their personal data; but no, research is treated in a particular manner under the GDPR. Research is not considered to be incompatible with the original purpose for data collection (“further processing for ... scientific ... research purposes ... shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes”).
Furthermore, recital 33 of the GDPR clarifies “It is often not possible to fully identify the purpose of personal data processing for scientific research purposes at the time of data collection. Therefore, data subjects should be allowed to give their consent to certain areas of scientific research when in keeping with recognised ethical standards for scientific research. Data subjects should have the opportunity to give their consent only to certain areas of research or parts of research projects to the extent allowed by the intended purpose.” So, legally, you may be covered, even though the debate surrounding so-called ‘broad consent’ is not conclusive (cf. for example the Article 29 Working Party’s Guidelines on consent under Regulation 2016/679).
CP: In my opinion, the purpose "research" is not specific enough. But if you say the purpose is "memory", that's too specific, because then you could not even use a T1w image to create a template. So we came up with a compromise. If you look at the Open Brain Consent GDPR edition, our solution is to state the purpose of the specific study, but also that the data may be used for future research projects in the field of medicine and cognitive neuroscience, which strikes a balance.
MC: Article 5 (1) (b) of the GDPR states that “personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for ... scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes”
This gives researchers quite a bit of flexibility. This is maybe one area where law and ethics overlap. The debate I have come across within research on genetic data, when dealing with biobanks, is that people speak of dynamic consent and want to use it to obtain more granular consent for specific projects. The thinking behind this is that certain people might object morally to particular research. So of course, you are being more respectful to the data subject if you don't use the data in ways they would not approve of. Specific, granular consent is in line with the spirit of the GDPR, but I don't think that the GDPR excludes such broader consent for scientific purposes.
The GDPR refers to data minimisation. How do we guarantee that we don't collect data that are unnecessary?
CP: This is something that we also struggled with. On one hand, you want to be able to collect participants' data; typically, in my lab, we go through a bunch of health questionnaires - handedness, medical history, language, and so on - because, of course, you can then reuse these data in a larger dataset. You've got 100 different studies, but for each participant you have the same six common questions, and you can do a nice big analysis. You could possibly connect these studies and perform richer analyses. What is the balance? We know that this may be the only way to aggregate enough data from multiple studies to then have a study powerful enough to look at the effect of some type of medication.
MC: Unfortunately, I think that this is an outstanding difficulty or problem because as a researcher you may not know exactly what you're looking for; for example, what analysing the patterns may reveal. It is a known tension in the GDPR that may also go against the purpose specification principle. So I think it's a tension that is real. I would however always emphasise in such contexts that the sole purpose for processing these data is in fact scientific research, that there may be uncertainty associated with research, but that there is also an important public good to be gained from such research that affects the balance to be achieved between the different competing interests, including the privacy and data protection rights of the data subjects.
What are the next steps?
CP: I think the next steps are twofold. One is for neuroimagers to engage with their own institutional repositories. We need to work with them and with data protection officers to come up with solutions for sharing data. You need to explain what systems need to be in place and how to implement them. We do have power because we do receive money from funders who often actively ask us to share the data. And it is the university’s job to provide us with the tools to be successful in funding applications and to comply with funders.
The other aspect is more ambitious. There are systems that work under the architecture of any repository to index them, so that for instance, every university in Europe could very well have their information connected. But this would necessitate that all universities cooperate with each other. It's more like a dream.
I am also very keen on making sure that everyone reading this interview knows about the efforts of the Open Brain Consent project. I would like to highlight all of the hard work put in by many, in particular Chris Gorgolewski and Yaroslav Halchenko, who started the project; Stephan Heunis and Peer Herholz, who organized work on this during the Organisation for Human Brain Mapping (OHBM) hackathon; and all the people who helped by sharing their consent forms, experience, and proposed translations (now available in 12 languages), thanks to COST Association support (GliMR2). Note that we are keen on having more people involved, in particular gathering and sharing more information about how these issues are dealt with in countries from the Global South, which are currently under-represented.
You can find more details on the Open Brain Consent website.
Now is the time to submit your nominations for 2021 OHBM Awards. To inspire you, we are highlighting some of the outstanding winners from this year’s meeting.
This year's annual meeting was unique in many ways. Uncertainty about whether the meeting would happen was followed by a remarkably fast reorganization to hold the meeting online with a complex time schedule. One event that was not missing from the program was the traditional award ceremony, which recognized the work of individuals who have changed the scientific landscape of human brain mapping.
Inspired by their nomination letters, we honor OHBM 2020 award winners and their achievements:
2020 Early Career Investigator Award Winner: Danielle Bassett
Danielle Bassett received her PhD in physics in 2009 and, after only 10 years, is now a full professor at the University of Pennsylvania and has published over 240 peer-reviewed articles. Her top-cited paper on small-world brain networks has received over 1,800 citations. In addition to the OHBM Early Career Investigator Award, she has also received the Erdős–Rényi Prize in Network Science, a National Science Foundation CAREER award and a MacArthur "genius" fellowship, amongst others.
Danielle Bassett’s laboratory, the Complex Systems Group at the University of Pennsylvania, combines theory and tools from bioengineering, physics, electrical engineering, neurology, psychiatry and philosophy. Her team’s translational, interdisciplinary research has enabled them to explore the human thought process through investigations of how we learn and how this is underpinned by the flexibility of brain network dynamics. Her interdisciplinary approach applies new physically-informed metrics and null models for spatially embedded systems to look at networks at different scales (from cellular to systems) in order to inform clinical medicine and societal interventions. Danielle and her lab also contribute to software packaging and science outreach events.
Danielle gave the opening lecture at OHBM2019 and participated in the mentoring symposium organized by the OHBM student/postdoc special interest group.
From musical notes to neural nodes, you can learn more about Danielle Bassett’s career and aspirations at ScienceMag.
2020 Education in Neuroimaging Award Winner: Robert Savoy
Educational programs are a key part of the success of OHBM. Before the OHBM meeting in 1996 in Boston, Dr. Robert L. Savoy organized an educational workshop on fMRI attended by 600 of the 900 meeting attendees. The success of this inaugural course showed the high demand for educational programs. These have continued annually with the still highly-attended workshops alongside each OHBM meeting. Robert’s very first course was offered at the MGH NMR Center in October of 1994, and it was envisioned that the market for an introductory fMRI course would soon be exhausted. In contrast, the continual advances in fMRI and the general excitement associated with the technology meant that it reached an ever-expanding range of disciplines, increasing the pool of interested students. As the field has grown, so too have Robert’s educational offerings. Since 2007, Robert has organized an annual two-week Multi-Modality course; this has in turn generated another short course on connectivity. Robert is a rare scientist who devotes almost all of his efforts to education. His courses have had a profound impact on the career trajectory of many of our colleagues, including many active and leading members of the functional imaging community around the globe.
A large fraction of the leaders in the field have attended his course – receiving their first instruction on fMRI and neuroimaging there. Peter Bandettini, Ph.D., Director of Functional Magnetic Resonance Imaging Core Facility (FMRIF) collected the following quotes:
2020 Mentor Award Winner: F. Xavier Castellanos
In her nomination letter, Lucina Q. Uddin describes Francisco Xavier Castellanos as "a winner with great mentoring values, guiding his lab members to become independent thinkers and scientists. He is a tireless mentor and teacher. He proposes clear goals with a defined timeline and expectations along the way, and he predicts correctly. He shows a clear vision of a career path and the best opportunities for new lab members. He is able to teach the art of 'grantsmanship', one that every scientist must master. Xavier is always there for his trainees, current and past. Trainees can always count on Xavier to submit a letter of recommendation at a moment's notice, which is a great aid when applying for fellowships, grants, and positions as opportunities arise. He is always happy to comply with letter requests, no matter how frequent. He also remains, at every career transition, a sounding board, providing clear-headed, rational and thoughtful advice."
Lucina mentioned one particular anecdote that represents her experience of being supervised by F. Xavier Castellanos: “One particularly salient example of Xavier’s unconditional and enthusiastic support for his trainees comes to mind. One day, in a conversation with Bharat Biswal, we were tossing around the idea of trying to collect neuroimaging data from a split-brain patient in order to test a theory we had about functional connectivity. Without a fuss, Xavier funded the trip for me and a colleague to fly across the country, collect data from this unique patient, and spend months analyzing the data (though this project was unrelated to any of his grants at that time). This spontaneous trip led to a number of interesting case studies (Nomi et al. 2019, Uddin et al. 2008), and remains one of my favorite Xavier memories. The fact that he has always been enthusiastically supportive of whimsical projects has made science fun over the years.”
What particularly distinguishes Xavier from other senior successful scientists is his generosity, intellect and personality. He has clearly had a positive impact on a number of young scientists. Indeed, it is worth noting that three of his previous mentees (Lucina Uddin, Mike Milham, and Daniel Margulies) have received the OHBM Early Career Award. Another example of the way in which Xavier exemplifies the values of collegiality and building community through acknowledgment and recognition is his authorship practice. He never hesitates to add junior scientists as co-authors on manuscripts, and readily gives up senior authorship positions to his trainees, always practicing the maxim of giving credit where credit is due.
Xavier was a proponent of open science before open science was a thing. His lab was one of the earliest to get involved in grass-roots data sharing initiatives such as the Autism Brain Imaging Data Exchange, the ADHD-200 International Neuroimaging Data-sharing Initiative, and the Enhanced Nathan Kline Institute – Rockland Sample. He acknowledges that so much data is collected, and so many people are needed to analyze it, that he favors giving others the opportunity to use their expertise without worrying about authorship, credit or restrictions. This kind of radical data sharing has inspired countless researchers worldwide, who are beginning to follow a similar philosophy. Xavier’s lab and its pioneering radical data sharing initiatives set the stage for the current climate of open science and collaboration that permeates the field today.
2020 Replication Award Winner: Andre Altmann
Andre received the replication award for his paper titled ‘A comprehensive analysis of methods for assessing polygenic burden on Alzheimer’s disease pathology and risk beyond APOE’ (Altmann et al., Brain Communications, 16th December):
In this paper, Andre Altmann and colleagues attempted to replicate results from “Polygenic hazard score, amyloid deposition and Alzheimer’s neurodegeneration”, published in early 2019 by Tan et al. The original paper proposed a link between a polygenic hazard score (PHS) and amyloid deposition (from amyloid PET) beyond APOE.
Andre Altmann and colleagues proposed to account for APOE4 status (carrier or not) instead of APOE4 burden (number of copies), and went on to show that their analysis better accounted for APOE4 than the initial study. Using subjects from the same database (ADNI), Altmann et al. were not able to replicate the results from Tan et al. (2019).
APOE4 is the strongest common genetic risk factor for sporadic late-onset Alzheimer’s disease and is known to be associated with amyloid deposition in the brain. It is therefore important to disentangle the effect of APOE4 from the polygenic hazard score, to avoid correlations of no interest in the results: the APOE4 effect alone would explain part of the previously observed strong link between the proposed polygenic hazard score (PHS) and amyloid deposition. This reanalysis questions the conclusion from Tan et al. (2019) that PHS influences longitudinal cognitive decline beyond APOE4.
Altmann et al. adjusted their linear mixed effects model, and the replication study shows that small differences in modeling decisions can have a dramatic impact on the results.
This study also corrects a result that could have had a large impact on the field, as the PHS could otherwise have been used in follow-up studies in the Alzheimer’s disease community without proper initial support.
2020 Open Science Award Winner: Michael P. Milham
Mike P. Milham’s efforts in open data, open resources and collaborations are numerous. They impact both clinical and basic science neuroimaging communities. In just a decade, starting with the aggregation and publication of the 1000 Functional Connectomes Project (FCP-1000), Mike has:
The above initiatives have had a substantial impact on the neuroscientific community both in terms of immediate/direct (e.g., publications) and sustained/indirect impact (e.g., cultural change).
Mike has been the driving and inspirational force of a host of important open science initiatives that have helped change the landscape of human and non-human primate neuroimaging.
Once again, we congratulate all the OHBM 2020 winners and nominees. We wish them a great year of science.
We hope we have inspired you to look around you and consider which of your mentors, colleagues, trainees, friends and neuroimaging heroes might be an appropriate candidate for one of the 2021 OHBM Awards. The OHBM website has all the details regarding eligibility and the required information for each of the award categories; just select the award by name and there you will find the link to the submission webform. The nominating process is reasonably easy, all online and waiting for your submission. Remember, our ability to inclusively honor members of our diverse community depends directly on you nominating deserving candidates!
Written by: Claude Bajada, Fakhereh Movahedian Attar, Ilona Lipp
Expert reviewers: Adina Wagner, Cyril Pernet
Newbie editors: Yana Dimech, Renzo Torrecuso
This post is about good neuroimaging practices. ‘Practices’ relates to all aspects of conducting research. By ‘good’, we mean beneficial to the field and neuroimaging community - but you’ll see that most of these practices also benefit the individual researcher. Here, we collected a number of tools, tips and tricks to do neuroimaging in the ‘best’ way possible. We aim to provide an overview and answer some questions you may have asked yourself about reproducibility and good neuroimaging practices. As usual, we refer to OHBM On-Demand videos from the educational sessions of previous annual meetings. OHBM has both a special interest group (SIG) for Open Science as well as a Best Practices Committee, where leading brain mappers promote and help implement Open Science and good practices in data analysis and sharing. Both the Open Science SIG and the Best Practices Committee regularly create invaluable resources, such as the annual Hackathon workshops, and the COBIDAS Best Practices in MRI and M/EEG data analysis papers.
Isn’t the main issue in our field reproducibility? Or the lack of it? Should I care about my science being reproducible?
Those are loaded questions. We think we just might not answer them, because you are luring us into a trap that begins with seemingly innocent questions and then rabbits down into an unending burrow. There are so many terms to wade through that the novice neuroscientist can easily get lost in this bog!
In his video, Cyril Pernet clarifies the often-used terms ‘repeatability’ and ‘reproducibility’ (from min. 1:07). First, ‘repeatability’ means “simply” that redoing the same analysis with the same data should yield a result identical to the original analysis - which is not as trivial as it seems, since the software version and the operating system are variables that can affect the output of your imaging analysis. That, however, is only step one. In his video, David Kennedy (from min. 3:54) highlights that ‘reproducibility’ is really a spectrum: we could use the exact same data with a nominally similar analysis; nominally similar data with the exact same analysis; or nominally similar data with a nominally similar analysis. This way we can test the sensitivity and stability of our experiment.
Cyril explaining the different levels of reproducibility.
But this leads back to your question. Scientific findings should generalise. They should first be valid (repeatable) but should also be robust to various permutations of the data and analyses used. There is a great video by Kirstie Whitaker on YouTube that tackles these issues.
The reproducibility crisis is often associated with the field of psychology; is there anything different in the field of human brain mapping?
Ok, so here we are talking about reproducibility in the broader sense, not just robustness to permutations. We will assume that researchers have already ensured that their analysis is re-executable.
In 2005, John Ioannidis published a landmark article with the eye-watering title of “Why Most Published Research Findings Are False.” If you are interested in understanding why many scientific articles are not reproducible, we strongly recommend reading this article; it is an easy and insightful read. Notice that this article does not even refer specifically to psychology or to neuroimaging. This problem is general to, at least, the wider “medically-related” field.
The article points out that effect sizes in these fields tend to be low and that sample sizes are frequently smaller than what would be needed to detect such small effects. In neuroimaging, many steps, much expertise (and often money) are involved in acquiring good data. As a result, our sample sizes tend to be small. Indeed, it was not too long ago that most neuroimaging articles were published on samples of approximately 20 participants. In 2020, studies with several hundred, up to a couple of thousand, participants are becoming more common, but these require a massive investment in resources and tight collaboration between sites.
In his video, Cyril provides an overview of cognitive biases that can contribute to the limited reproducibility of neuroscientific research (from min. 7:18). He also explains how the analytical flexibility in neuroimaging research (such as in fMRI analyses) adds an additional level of complexity (from min. 15:59). While papers with hot stories and “positive results” find it much easier to land in very high impact journals, the drawbacks of this trend are slowly starting to be recognized. Neuroimaging scientific societies are becoming aware of the importance of reproducible research and are incentivising such work: OHBM has a yearly Replication Award, won this year by Andre Altmann. Initiatives such as DORA, the Declaration on Research Assessment, also aim to find ways of evaluating research and researchers that go beyond journal impact factors.
Pia Rotshtein discussing the conflict of interest between good science and researcher’s careers.
So what can we do to make neuroimaging research more reproducible?
Well, some things are harder to deal with than others. Running neuroimaging studies is time-consuming and expensive, and there is very little that can be done about that, at least in the short to medium term. One thing we can do is work towards using robust and valid measures from neuroimaging data. In his video, Xi-Nian explains how the validity of our measures depends on reliability (from min. 5:40). He introduces reliability indices (the intraclass correlation coefficient) and gives an example of how they can inform the extent to which inter-subject variability (which is often what we are interested in, e.g. when investigating different groups of people or brain-behaviour correlations) exceeds intra-subject variability (which in these cases is unwanted variability in repeated measurements, often caused by measurement noise). He reminds us of this paper pointing out that brain-behaviour correlations are “puzzlingly high”, given the reliability of our cognitive measures and of our imaging measures. From min. 16:20 he goes through a variety of imaging measures and their reliability, and introduces CoRR (min. 21:30), the Consortium for Reliability and Reproducibility. The prerequisite for reliable imaging measures is, of course, sufficient data quality.
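To make the intraclass correlation coefficient concrete, here is a minimal sketch (our own, not taken from the video) that computes the two-way random-effects, single-measure ICC(2,1) of Shrout and Fleiss for a subjects-by-sessions matrix. The function name and the toy scores are illustrative:

```python
import numpy as np

def icc_2_1(data):
    """Two-way random-effects, single-measure ICC(2,1), after Shrout & Fleiss.

    data: (n_subjects, k_sessions) array of repeated measurements.
    """
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-session means

    # Sums of squares for subjects (rows), sessions (columns) and error
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Four subjects scanned twice: between-subject differences dominate the
# tiny session effect, so the measure is highly reliable (ICC close to 1).
scores = np.array([[1.0, 1.1], [2.0, 2.1], [3.0, 3.1], [4.0, 4.1]])
print(round(icc_2_1(scores), 3))
```

If the scores of a subject varied as much across sessions as between subjects, the ICC would drop towards zero - exactly the situation in which brain-behaviour correlations become hard to trust.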
How do I ensure that my data exhibits sufficient quality?
Quality assurance (QA) and quality control (QC) procedures are put forward to ensure and verify the quality of neuroimaging data, respectively. Although somewhat intertwined, QA and QC are slightly different. QA is process-oriented and aims to boost our confidence in the data via routine system checks, whereas QC is product-oriented and deals with verifying the quality of the final product in the pipeline. In his video, Pradeep Raamana briefly introduces QA and QC and outlines the different QC steps involved in the acquisition of neuroimaging data (from min. 3:47). Visualising and checking your neuroimaging data at all processing stages is absolutely essential. The most important yet basic tool you need is therefore an image viewer that allows simultaneous visualization of the three image planes, and of course, you as the observer! For more specialized QC, Pradeep presents a list of some of the available neuroimaging QC tools per neuroimaging modality here, where he also presents use-cases of some of the tools.
In order to conduct QC successfully, one needs to be aware of the various common types and sources of artifacts encountered in neuroimaging data. Importantly, we need to keep in mind that QA and QC must be tailored separately to the specific nature of each neuroimaging modality.
In the videos of the ‘Taking Control of Your Neuroimaging Data’ session, some of these procedures are presented. Pradeep introduces common sources of artifacts in anatomical MRI (min. 8:14) and presents some tips and tricks for detecting artifacts in T1-weighted images (min. 19:08). Then, Martina Callaghan presents key metrics to perform scanner QA for functional MRI, emphasising the need to look for subtleties (min. 3:53). Here, the key is to establish whether the system fluctuations inherent in the acquisition procedure and hardware are sufficiently low to allow detection of BOLD-related signal changes in task-based and resting-state functional MRI. Martina Callaghan then presents some of the online (i.e. real-time) QC procedures for functional MRI (min. 17:17).
Esther Kuehn then takes over and introduces artifacts in high resolution functional MRI acquired at high field strength, with particular emphasis on cortical layer imaging applications, and presents some available means of artifact correction (from the beginning). In her video, Joset Etzel introduces a different aspect of QC for neuroimaging data - dataset QC - and talks about the importance of checklists and standard operating procedures (SOPs).
Dataset QC aims to verify whether a valid dataset (i.e. one that has already passed the various data QC steps) is also usable by different people at different times in different places - intuitive data organisation alone is not sufficient. Finally, in his video, Alexander Leemans introduces common artifacts in diffusion MRI and presents strategies for checking data quality, common errors in this checking, and ways of correcting artifacts.
I’ve got so much data, how do I organise it?
Lots of neuroimaging data are acquired all over the world, and the resulting datasets are organized in different ways according to the personal preferences of the users or the labs. With Open Data - that is, publicly accessible data - picking up momentum, there is a growing need for standardization of neuroimaging datasets, so that they are easy to use across a wide community of neuroscientists. The Brain Imaging Data Structure (BIDS) initiative aims to standardize neuroimaging data structures in order to make them interoperable under the FAIR data principles. In this tutorial, the BIDS data structure is introduced as a practical means for achieving FAIR data. A number of BIDS resources, repositories and simple BIDS specifications are also given to get you started (min. 27:27), followed by a hands-on session on how to create and validate a basic BIDS dataset (min. 34:57). Also check out the TrainTrack session on BIDS of this year’s virtual meeting by Sam Nastase!
Jeffrey going through the benefits of the brain imaging data structure (BIDS).
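To give a flavour of what BIDS looks like on disk, a minimal dataset for one subject with an anatomical scan and a resting-state functional run might be laid out as follows (the study, subject and task names are illustrative):

```
my_study/
├── dataset_description.json
├── participants.tsv
└── sub-01/
    ├── anat/
    │   └── sub-01_T1w.nii.gz
    └── func/
        ├── sub-01_task-rest_bold.nii.gz
        └── sub-01_task-rest_bold.json
```

The key idea is that filenames encode entities (subject, task, modality) in a fixed order, so any BIDS-aware tool - or colleague - can find its way around your data without asking you.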
Once you have nicely organised your data, they are also easier for other people to use. To make neuroimaging more reproducible overall, we can also ensure that data do not get lost and forgotten - in short, that our data are Findable, Accessible, Interoperable and Reusable (or FAIR; see the educational course on FAIR data from min. 1:52 by Maryann Martone and Jeffrey Grethe).
This way, your science will be more robust, transparent and verifiable.
The problem is that making research FAIR as an afterthought is really tough. Indeed, generating or curating good quality data that abides by FAIR principles requires some forethought (FAIR workshop min. 12:36). Not only do a lot of steps and expertise go into acquiring good quality data, but your data need to be in a format and in a place that makes those data easy to use for your present self, your future self and for someone who is not yourself!
Useful platforms for sharing statistical maps from your study are NeuroVault and Neurosynth. In his video, Chris Gorgolewski goes through the advantages that uploading your map has for you, such as options for fancy visualisations of your maps (min. 4:37), cognitive decoding of your maps (min. 5:25), searching for similar maps in papers (min. 6:25), and gene decoding (min. 7:04).
How can I make sure that my analysis workflow can be reproduced by others?
If you want all aspects of your study to be documented and reproducible, then this of course also includes your analysis. The BIDS structure can help with setting up a reproducible workflow, but it is not sufficient. It also needs to be clear which processing steps have happened, which analyses were done, with which software and which parameters, etc. There are a lot of tools out there to help you and the Center for Reproducible Neuroimaging Computation initiative (ReproNim) has held an extensive course at the 2018 annual meeting about this (and a whole Webinar series on best practices for neuroimaging, if you are interested).
Starting with the “computational basis”, Yaroslav Halchenko gives an introduction into the Linux shell, including the importance of environment variables (from min. 12:50) to ensure you are running the right version of software, how to use shell history (from min. 23:40) to check whether you indeed ran the right commands, and how to write shell scripts (min. 29:30). He also shows how Neurodebian can be used to search and download software (min. 41:21).
Most people have probably heard the name Git before. (Did you know the official definition is “stupid content tracker”?) Yaroslav explains the Git philosophy in 2 minutes (min. 58:01) and shows the most important commands (min. 52:50). While Git is useful for keeping track of your scripts and for obtaining and providing code, a tool called DataLad (min. 1:03:17) can be used to do the same for datasets. A hands-on session on this is provided in the Workflows for neuroimaging session from min. 47:20, and how this can be combined with specific statistical analyses is explained from min. 1:52:08.
Other tools to help you make sure you use consistent software within a study are containers and virtual machines. Dorota Jarecka gives a good overview of why these are very useful in research (from min. 7:39) and even guides you through some exercises (from min. 15:45). Jean-Baptiste Poline gives a short intro to Jupyter notebooks to demonstrate your code to others (from min. 2:43:51).
This year’s OHBM Hackathon also has a session on Git by Steffen & Saskia Bollmann, one on good coding practices with Matlab by Agah Karakuzu, one on DataLad by Adina Wagner and one on containers by Tom Shaw and Steffen Bollmann.
You said that replicability also refers to other people being able to get the same outcome as my study, but if they test different participants, this is out of my control, right?
This is a good point - it is somewhat out of your control, but there are ways in which you can help. First, being very transparent about what you did to your data will allow others to adopt methods as similar as possible to yours. As Celia Greenwood explains (from min. 2:24:01), the final statistical measure that one tries to replicate involves a lot more than just the statistical test: it includes all the steps before - the processing, exclusion of outliers, etc. - which sometimes makes it hard even to work out what the null hypothesis is. She states that reproducibility in the statistical sense is about the final inference you make, so it is tied to the p-value. And this of course depends on your sample size and, to some extent, chance. In a demonstration (from min. 2:34:24) she shows that if you draw different samples from the same population, there is huge variability in the p-values and effect sizes that you get across samples (even with sample sizes of N > 100), purely as a result of random sampling.
Celia illustrates the effect of random sampling on estimated effect sizes.
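This phenomenon is easy to reproduce yourself. The sketch below (our own simulation, not Celia’s actual demonstration) repeatedly draws two groups of 100 participants from populations that differ by a small true effect and collects the two-sample t-test p-values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d = 0.3          # true (small) standardized effect in the population
n = 100          # participants per group

pvals = []
for _ in range(200):
    # Draw a fresh sample from the same two populations each time
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(d, 1.0, n)
    pvals.append(stats.ttest_ind(a, b).pvalue)
pvals = np.array(pvals)

print(f"min p = {pvals.min():.4f}, max p = {pvals.max():.2f}")
print(f"fraction significant at 0.05: {np.mean(pvals < 0.05):.2f}")
```

Even though the underlying populations never change, you should see p-values ranging from highly significant to clearly non-significant across the 200 simulated “studies” - random sampling alone produces the spread.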
Is this why “most published research findings are false?”
Are you insisting on going back to things we have already discussed?! I suppose it is fair to say that there is more to it. A measure called the “positive predictive value” is the probability of the alternative hypothesis being true given a significant test result. In his video, Jean-Baptiste (from min. 2:47:14) uses a Jupyter notebook to explain the Bayesian math behind this value and shows that it depends on the power of your study as well as the prior odds of the alternative hypothesis being true over the null hypothesis. So the lower the power of your study, the less likely the alternative hypothesis (usually what you are interested in) is to be true, even when you have a significant result. And most neuroscience studies do not have much power, as shown by Katherine Button.
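The arithmetic behind this is compact enough to write down. Following the formula popularized by Ioannidis (2005), the probability that a significant finding reflects a true effect is power·R / (power·R + α), where R is the prior odds that the tested hypothesis is true. A minimal sketch (the function name is ours):

```python
def positive_predictive_value(power, alpha, prior_odds):
    """Probability H1 is true given a significant result (Ioannidis, 2005).

    prior_odds: odds that a tested hypothesis is true before the study,
    e.g. 0.25 means one true hypothesis for every four false ones.
    """
    return (power * prior_odds) / (power * prior_odds + alpha)

# Well-powered study, 1:4 prior odds: 80% of significant findings are true
print(positive_predictive_value(power=0.8, alpha=0.05, prior_odds=0.25))

# Underpowered study, same prior odds: only half of them are true
print(positive_predictive_value(power=0.2, alpha=0.05, prior_odds=0.25))
```

Note how halving and halving again the power, with everything else fixed, drags a significant result from strong evidence down to a coin flip.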
Well, you may say now, how do I know what my power will be? And is there even a point in doing my experiment, or will it just produce another - false - research finding?!
Good question. Doing power analysis for neuroimaging studies is not straightforward, but luckily, packages such as fmripower and neuropower have been developed to at least give an educated guess of what your power might be. As Jeanette Mumford explains in her video (from min. 4:53), doing a power analysis has many benefits. She also gives some tips on how to assess other people’s power analyses (from min. 7:08) and what to consider when estimating effect sizes based on the literature (from min. 9:18). Jeanette also explains why the difficulty of doing power analysis increases with the complexity of the model (from min. 11:59).
Jeanette talking about the power of different statistical models.
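For intuition about the numbers involved, here is a rough normal-approximation power calculation for a two-sided, two-sample t-test - a back-of-the-envelope sketch of our own, not a substitute for dedicated tools like fmripower or neuropower, and the function name is illustrative:

```python
from scipy.stats import norm

def approx_power_two_sample(d, n_per_group, alpha=0.05):
    """Normal-approximation power for a two-sided, two-sample t-test.

    d: Cohen's d (standardized mean difference). The t distribution is
    approximated by a normal, which is reasonable for moderate n; the
    negligible contribution of the opposite tail is ignored.
    """
    z_crit = norm.ppf(1 - alpha / 2)
    noncentrality = d * (n_per_group / 2) ** 0.5
    return norm.cdf(noncentrality - z_crit)

# A medium effect (d = 0.5) with 64 participants per group gives roughly
# 80% power; with 20 per group, power falls well below one half.
print(round(approx_power_two_sample(0.5, 64), 2))
print(round(approx_power_two_sample(0.5, 20), 2))
```

Played with interactively, this makes the earlier point tangible: the small samples that were long typical in neuroimaging leave most realistic effects badly underpowered.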
What else can I do to ensure best practices in neuroimaging?
Thorough reporting of what you have done in your data acquisition and analysis is always a good idea. Guidelines have been created by the Committee on Best Practices in Data Analysis and Sharing (COBIDAS; also see Tonya White’s video for the idea behind COBIDAS) for MRI and M/EEG.
Various tools are available for testing your code. Also, if you publish your code on sites such as GitHub, other researchers can try it out and help develop it further.
Preregistration and registered reports are becoming more and more popular for neuroimaging, meaning that more and more journals accept and encourage them. In her video, Pia Rotshtein explains the philosophy behind and principles of registered reports (from min. 11:06) and shows some examples (from min. 22:55).
Tonya telling us about the Committee on Best Practices in Data Analysis and Sharing.
If I get into all these things, will I still have time to do research?
That is why there are 36 hours in every day! Seriously though, this is all part of doing research! Often, however, efforts on good practices in neuroimaging are not publishable by themselves and have not been well respected. There are good reasons and incentives to follow Open Science practices as an individual researcher (for examples see this summary), and with the new OHBM initiative Aperture (see video and website), room is being created for unconventional research objects (such as software and documentation).
If this all still seems overwhelming and time-consuming, don’t worry. Most of the tools presented here have been developed to save you time and resources in the long run while making your research more sustainable. Think about the time one would spend re-acquiring a dataset because of a previously unnoticed problem with the scanner, trying to make sense of unintuitively organised data, or trying to find a mistake in long, badly structured code. Putting some of these preventative measures in place does not seem like such a big investment anymore.
If you’re hooked, stay tuned. The numerous emerging Open Science initiatives keep coming up with new ideas and tools for making research as a whole more reproducible and trustworthy, and for helping us brain mappers conduct neuroimaging research in more robust and applicable ways.
Guest post by Hiromasa Takemura
International diversity is essential for organizations like OHBM. Through my experiences attending recent OHBM Annual Meetings, I have found myself asking why so few researchers from Japan have visible roles. To find out whether this was indeed the case, and possibly why, I worked with the OHBM Executive Staff, Diversity & Inclusivity Committee, and Communications Committee to analyse membership and attendance data from the annual meetings. By collecting and analysing this demographic data we can gain insight into why some countries (in this case Japan given my background, but the findings may extend to others) may be underrepresented at OHBM.
Japan is the 11th most populous country in the world, with an estimated population of 126 million (m) people in 2020. For comparison, Mexico has the most similar population, with 128m people, and Germany, Europe’s most populous country, has 83m. Japan has, over the years, substantially contributed to the OHBM community: for instance, the 2002 Annual Meeting was held in Sendai, Japan, and Dr. Kang Cheng, a pioneer of high-resolution fMRI studies at a founding lab of RIKEN’s Brain Science Institute, is heavily involved in the organization of OHBM meetings.
To get a picture of recent involvement of researchers from Japan, we examined data summarizing attendance and presentations at the OHBM Annual Meeting between 2017-2019 (Table 1). We defined Japanese members as those affiliated with Japanese institutions. Using this definition we found that Japanese members comprised 3.6%, 5.4% and 3.9% of all attendees for 2017, 2018, and 2019 respectively, with the fluctuation reflecting the location of the annual meeting (OHBM 2018 was held in Singapore). We found a lower proportion of abstracts submitted by Japanese members: 2.6%, 3.6%, and 3.6% of the total number of abstracts for each of these years.
We then examined the proportion of Japanese members giving oral presentations. These numbers included both regular oral sessions and symposia. The proportion was 1.7%, 3.0%, and 0.9% for 2017, 2018, and 2019 respectively. The low number at the 2019 Annual Meeting was striking, given the proportion of attendees and abstract submissions.
To determine potential contributors to these statistics, we examined the number of Japanese members who selected “talk preferred” at abstract submission but were not accepted for talk presentations. Surprisingly, these numbers were very small: 3, 1, and 2 for 2017, 2018, and 2019, respectively. A major reason for the underrepresentation of Japanese members at the OHBM meeting therefore appeared to be a reluctance to present data in the form of a talk. It is true that certain types of presentations work better as posters than talks, but we wanted to find out why so few researchers from Japan opted for oral presentations - a choice that causes the community to miss opportunities to highlight and benefit from their work.
Why do Japanese members hesitate to give talks at the OHBM annual meeting?
To find out, we surveyed 86 Japanese scientists working in human brain mapping (Figure 1). First, we asked whether they would choose oral or poster presentations at domestic conferences: 58% answered “oral”. Then we asked whether they prefer oral or poster presentations at international conferences. In this case, only 35% answered “oral”. The trend to favor posters in international conferences was common across both junior and senior scientists.
Next, we asked why they would opt for a poster presentation (Figure 2). For domestic conferences, researchers chose poster presentations when the topics were specialized, or the data wasn’t ready to present to a broad audience. For international conferences, 32.6% of respondents were dissuaded due to the challenge of presenting in English. Indeed, for Japanese researchers the most common deterrent for oral presentations at international conferences like OHBM was the language barrier.
Figure 2. Survey on the reasons for choosing a poster presentation for a domestic (left) and an international conference (right). Multiple choices were allowed for this question. While domestic and international conferences share common reasons, respondents raised the difficulty of presenting in English as a reason to prefer poster presentations at international conferences.
The challenge of presenting in English is not unique to Japanese members of OHBM. Instead, this case study serves to demonstrate the extent to which language barriers can limit scientific communication. It is, therefore, worth considering ways to organize an international conference that help enable non-native English speakers.
There are several actions we can take as a community. First, we could promote and encourage junior Japanese members (and other non-native English speakers) to apply for oral presentations, symposia proposals and educational courses. My own experience speaking at the 2019 Annual Meeting greatly increased my enthusiasm and experience of the conference (see photo below).
Second, as an international community, we can promote a friendly, open-minded environment for scientific presentations and debates across members, irrespective of their English proficiency. I appreciate that OHBM has made a clear Code of Conduct prohibiting harassment based on the accent of speakers. Since I believe that OHBM members are mutually respectful, I hope that non-native English speakers feel able to discuss their scientific work and ask questions during annual meetings.
Third, we can devise conference formats that reduce language barriers. OHBM 2020 was a virtual event, which allowed members to communicate using live chat features that are much less affected by spoken language proficiency. OHBM 2021 will now also be virtual, so we have time to consider further digital features to aid communication. Looking forward to the return of physical conferences, we can use features like mobile apps to ask questions, as we did at OHBM 2019. There may be no single solution, but we can carry technologies tested in virtual formats into future physical conferences to encourage broader active participation in OHBM meetings, ensuring that new scientific advances are communicated widely and not hindered by the lingua franca.
Finally, it is worth restating that these issues are likely not specific to Japanese members. I hope that by shining a light on the challenges faced by my local community, we can increase accessibility for OHBM members from a variety of non-English-speaking countries around the world.
Addendum (from the Diversity & Inclusivity Committee)
To examine the breadth of underrepresentation, the Diversity & Inclusivity Committee examined the geographical distribution of speakers at OHBM 2020. We calculated the number of speakers (at regular oral sessions and symposia) as a proportion of current OHBM members (see figure below).
Our findings paint a complex picture: most Asian countries are certainly underrepresented, but researchers from central European countries, including non-native English speakers, are well represented. However, the Romance and West Germanic languages of these latter countries share typological features with English, and so are considered by the Foreign Service Institute to be easier for an English speaker to learn. In contrast, Japanese, Arabic, Cantonese and Mandarin are considered ‘exceptionally difficult’ for English speakers to learn, and vice versa.
Other factors likely influence whether researchers submit abstracts as oral presentations. For example, Spain and Mexico, despite their Romance language, had relatively few speakers. Historical ties to OHBM from individual labs and other economic, local, and macro-cultural factors are likely at play. By considering what causes barriers - language or otherwise - and exploring how we can break them down, we can promote a culture of greater diversity and inclusivity at OHBM.
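For readers curious how such a representation metric is computed, the committee's per-country calculation can be sketched in a few lines. This is a minimal illustration only: the country counts below are invented for demonstration and are not OHBM data.

```python
# Hypothetical sketch of the committee's calculation: speakers from each
# country expressed as a proportion of current OHBM members from that country.
# All counts below are invented for illustration only.

speakers = {"Japan": 2, "Germany": 12, "USA": 40}
members = {"Japan": 180, "Germany": 240, "USA": 900}

def speaker_proportion(speakers, members):
    """Return speakers as a fraction of members, per country (skipping empty rosters)."""
    return {country: speakers.get(country, 0) / n
            for country, n in members.items() if n > 0}

proportions = speaker_proportion(speakers, members)
# List countries from least to most represented.
for country, p in sorted(proportions.items(), key=lambda kv: kv[1]):
    print(f"{country}: {p:.1%}")
```

Normalizing by membership, rather than comparing raw speaker counts, is what lets small and large member communities be compared on the same footing.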
By Elizabeth DuPre
The OHBM 2020 Annual Meeting was one of many firsts in a year that upended work, travel and schooling routines for researchers across the globe. The move to an all-online event reflected the severity of the COVID-19 pandemic. As many of us had been out-of-office or away from our university campuses for months before the Annual Meeting, the chance to connect with the broader human brain mapping community became especially important.
Traditionally, the Annual Meeting offers a chance to interact formally and informally with other researchers, making both scientific and interpersonal connections. Replicating these spontaneous conversations was perhaps the biggest challenge for this year’s meeting. First, there was the issue of timing. With OHBM members participating from their home countries, one member’s afternoon in North America would be the middle of the night for another member in Asia. The meeting was therefore set on a rotating schedule, with day-blocks favoring Asia-Pacific, European and African, or North and South American working hours.
Once the timing was set, the second hurdle was developing a virtual space for interactions. Large online platforms—like those necessary to run a conference for thousands of members—often lean towards structured, lecture-style environments rather than organic interactions and impromptu discussions. From the available infrastructure options, OHBM Council decided in April to adopt the 6connex platform. Council’s intention was to allow time for all presenters, committees, and special interest group (SIG) members to adapt their content; however, the time pressures of the COVID-19 pandemic meant that many were still unclear how the new platform would work in practice come June.
Expectations were thus high for the 6connex platform—possibly higher than could reasonably be met. The platform did well in delivering pre-recorded content, such as the excellent selection of keynote lectures, symposia and oral sessions, but the space for spontaneous interaction was woefully lacking. As one example, many members noted the challenges of using the chat feature, such as when 1000+ attendees simultaneously participated in a single-threaded chat room. This lack of functionality created particular frustration around poster presentations, where presenters and attendees were unclear how to contact one another or how to provide on-the-spot poster walk-throughs.
OHBM members enjoying one of the poster sessions on the GallOP platform.
Although the official platform did not provide an outlet for interaction, it did create a galvanizing effect for the community to create such a space. Attendees, such as Yaroslav Halchenko, Soichi Hayashi, and many others, came together to openly develop the OHBM 2020 GallOP (Gallery of Open Presentations) platform. GallOP provided an easy interface to search for poster authors, titles, or keywords, creating more chances for researchers to find relevant work. But perhaps most importantly, it created individual video conferencing rooms for each poster, allowing attendees and presenters to directly interact during presentation time slots or to leave one another notes outside of official meeting times. Although GallOP was only created after the first poster presentation time, the community response was enthusiastic, and it was quickly accepted and shared by the OHBM leadership and incorporated into the official platform.
Interactions in the Open Science Room (OSR) Gather.Town, a virtual space where OHBM members could gather throughout the conference.
This spirit of creativity and connection swept through OHBM 2020 and was perhaps the defining feature of the conference. Other important community-driven initiatives included the BrainWeb poster viewer and the first-ever virtual OHBM Club Night, both of which created online spaces that mimicked many of the social features of an in-person meeting, albeit with fewer spilled drinks. All of these initiatives were linked together by the Open Science Room (OSR); this central hub seemed to catalyse interaction across the conference, hosting emergent discussions on everything from software containerization, to correcting for confounding, to the structure of the virtual conference itself. In a year in which our idea of community has been redefined by political, social, and cultural reckonings, this space to converse with other brain mappers about the important issues of our science, both in terms of research topics and lived experience, proved a highlight of the conference for many attendees.
Alongside these experiences, the official OHBM program also gave attendees the chance to consider the direction of our field. As always, the OHBM Talairach lecturer, Glass Brain awardee, keynote and symposium speakers provided an inspiring vision of the future of our society and the work we can do together. The OHBM 2020 Hall of Fame celebrated individuals who uphold many of the values important to the membership (e.g. education, replication, open science, mentoring), as well as this year’s award-winning abstracts. Uniquely, the community-driven efforts of this year’s event also provided a glimpse into just how important grassroots efforts are to the structure and functioning of our academic society. As a result of this work, the SIG chairs were invited to sit in on Council meetings to increase interaction between official and grassroots initiatives. This is an exciting next chapter for OHBM leadership, and it suggests that we will continue to see more innovation in the years to come.
Although the 2020 Annual Meeting was our first all-virtual event, it is clear that its lessons will shape the structure of OHBM moving forward. We now know that the OHBM 2021 annual meeting will also happen virtually; this decision was made in advance such that all community members have more time to prepare. These preparations include creating a dedicated ‘Technology Task Force’ to translate the lessons learned in the 2020 meeting into next year’s experience. Altogether, it’s clear from the 2020 meeting that the OHBM community is vibrant, responsive and collaborative. We look forward to seeing how these attributes can be further advanced in coming years, starting with the 2021 Annual Meeting!
By Tzipi Horowitz & Nils Muhlert
Institutions throughout the world have had to adapt to the COVID-19 pandemic. Many scanning centres shut their doors during lockdown and have had to reopen gradually, and carefully. We surveyed several labs from around the world to find out the challenges they’ve experienced and, in a few cases, the opportunities afforded.
UK - Matt Wall (Head of MRI applications, Invicro, Hammersmith Hospital, Imperial College London)
Challenges: Everything shut down rapidly at the start of lockdown. In March, two big commercial scanning projects had to stop immediately. One had been running for some time, the other had just started. We had a lot of clinical people working with us - some very good medics. They spent a lot of time developing risk assessments and procedures. So we ended up restarting in late June. I tweeted about it at the time, and was contacted by people in other universities, asking how we managed it - so we shared our findings from the risk assessment process.
Our actions were consistent with the government guidelines: 2m social distancing and everyone wearing masks where possible, with full PPE (masks, visors, gloves and disposable aprons) where closer contact was necessary, particularly when getting people in and out of the scanners. As part of our risk checklist we also asked all researchers and participants to confirm that neither they nor anyone in their household had experienced flu-like symptoms, and we used an infrared thermometer gun to check each individual’s temperature as they entered the building. We ended up being ‘open for business’ before many parts of the university were.
We’re not back to capacity yet, particularly for PET. There we develop and use unusual PET tracers; we have a cyclotron onsite to make carbon-11 tracers, which have a short half-life. For this you need a large team: people involved in the chemistry, quality assurance, and the PET technicians. Many people were on the furlough scheme, so it was slow to start up again, and is only now back up and running. Overall, we’re probably at about 60% of what we used to do.
Experience: There was a sense of anxiety to start with, but that resolved fairly quickly. Our collaborators, endocrinologists at Imperial College London, were working on COVID-positive wards. They assured us that we were using similar procedures to those used when working with people known to be COVID-positive. That gave us a lot of confidence that we were doing the right thing. I was actually glad to get back to scanning. My research assistant, who runs the scanning, now goes in most days, and I’m in probably 2 days a week. My dog at home, who people may be familiar with on Twitter, also calmed down during lockdown: no more chewed slippers.
Opportunities: We found in our studies that it was suddenly a lot easier to recruit people. Once lockdown was easing, there were still many people off work. For one study we needed to recruit a particular population and since people were essentially bored they were much more willing to participate. Also the scanner was almost completely free. We steamed through recruitment for that study in record time!
México - Eduardo Garza-Villareal (Lab Leader of the Computational and Translational Neuropsychiatry Lab, National MRI Laboratory, Institute of Neurobiology, Universidad Nacional Autónoma de México, Querétaro)
Challenges: There cannot be more than 2 people in the lab at the same time. Hence students are struggling to finish experiments. I mainly work with animals at the moment, and again, having very few staff is problematic. We were importing a rat strain from the US and in the end they stopped exporting animals and we couldn't finish that project. One of my students has to perform animal surgeries in another lab; she couldn't access the lab for one month. Another problem is student training, which is now impossible because you cannot be directly in contact with the students. Effectively, I'll see a work setback of 1 year or so. With TMS we do have human subjects but we are closed until probably January 2021, which is a big setback.
Adaptations: We are working remotely which has helped, however we would not be able to change population or projects now. We have also started to use public datasets for new research.
Because I have children, my time is now more divided than ever, with less time to think about research than before.
Canada - Rick Hoge (MRI Program Director, McConnell Brain Imaging Centre, Montreal Neurological Institute) & Julien Cohen-Adad (Functional Neuroimaging Unit, Polytechnique Montreal)
Challenges: All scanning stopped in March, except for clinical trials where imaging was a critical endpoint to evaluate treatment efficacy. Phantom and animal imaging activities gradually resumed in May. Human imaging restarted gradually in August. Many groups are eager to resume their studies and start new studies this fall.
Adaptations: The core MRI staff and users have focused on remote work, including data analysis, hardware development, literature review, attending conferences, etc. Zoom and other platforms are used to maintain group interaction. For on-site experimental work, protocols previously developed for infection control have been adapted and are now used in animal neuroimaging research.
Gradually, we have adapted to the new normal, although scanning volumes are considerably lower than they were prior to Feb 2020. The first groups to return to scanning were mainly those doing methods development and neuroimaging in animal models (as noted above, clinical trials work continued throughout the different phases of the pandemic).
New ways of working: The pandemic forced us to become more efficient at working remotely, using videoconferencing tools, telephone, and mobile messaging. We were already using the Slack collaboration tool, but it has become even more valuable since the pandemic began.
Remote monitoring of MRI cryogenics was a particular concern during the complete shutdown, and we implemented several layers of security to make sure we could respond immediately to events like a quench or chiller failure.
Germany - Sofie Valk (Research Group Leader, Cognitive Neurogenetics, Max Planck Institute, Leipzig)
Challenges: My lab officially started March 1st, so two major changes occurred at the same time: I now have to run and support a lab of my own, and get used to working under COVID-19 restrictions. Recruiting has been different. Given that I am interested in computational neuroanatomy, genes, and evolution, it is not strictly essential to acquire my own data at the moment to answer the questions that we have, and I had not planned the timeline of my lab-start this way either. My current research takes advantage of open data, so my lab and I use Human Connectome Project data, as well as the enhanced NKI sample, the Brain Genomics Superstruct Project (GSP), ABIDE, the PRIMatE Data Exchange, and UK Biobank. Now I am reading up on how start-ups manage remote and home-office working, and how to build a team that can work that way.
Adaptations: Remote work was a bit challenging at first, as I have two kids under 5 and daycare was closed. My partner and I had a shift system of one person 7am-1pm and the other 1pm-7pm and then work in the evening again. This kind of worked, but also resulted in little time for me, for sleep, or for my relationship. Often, I worked on my phone during my times with the kids, which I felt bad about. You can hardly call it the best of both worlds. For setting up the lab, the Max Planck Society supported funding for a researcher while this person was still abroad, which was very helpful. As the lab is oriented towards using open data, this meant we could just ‘start’. In the long run, we do want to acquire a good dataset for my lab to test more specific hypotheses, but there is no rush at the moment.
I try to be flexible and make the best out of the situation, and support the people that want to work with me. At the same time, the home office and lab Slack sometimes make it feel as though work is always ongoing, whether I am in the office, playing with the kids, changing diapers, or in the sandbox. It is a novel challenge to also have ‘off’ time and to learn that, even if communication is always possible, most things can wait if needed. I am very fortunate with my team, because everybody is independent and mindful of each other.
Now in Germany, the situation has relaxed. Daycare is open, and most days I work at the institute, even though meetings are online. What has changed in me is that I now try to think of positive ways in which my lab can adapt, such as better collaboration, open science, database management.
Opportunities: I hope that the remote set-up can lead to sustainable changes. For example, collaborations and knowledge transfer with other labs not in the same building or hallway are easier now with Zoom talks. Also, it is nice to be able to give a presentation far away without going there. Although flying to and from Montreal and Cologne with an infant (I was still breastfeeding at the time, and this seemed the best solution) made me feel like an international DJ last year, it was of course very tiring and bad for the environment. Now, I can just Zoom in from the kitchen table. Also, the lack of sleep and some challenges with work-life balance in the home office have made me reconsider how I plan my time, and be more disciplined about taking time off, going for a run, and prioritizing sleep a bit more. In the end it is a marathon and not a sprint.
USA - Fumiko Hoeft (Prof of Psychological Sciences, Mathematics, Neuroscience and Psychiatry, Brain Center Director, University of Connecticut, Storrs Connecticut)
Challenges: Since mid-March, in-person research operations unrelated to COVID have been shut down in the US; only COVID-related and remote research was allowed (as at most other US institutions). Staff and students were asked to work remotely. Some behavioral-only research moved to remote platforms.
We conducted a survey beginning in March to better understand the effect of COVID on our US colleagues. We found that only 37% of imaging research centres remained open for a reduced ‘business as usual’:
Adaptations: Where possible, we moved to remote behavioral data collection but paused all imaging projects. Research, including human neuroimaging research, restarted in late May.
Behavioral testing remains remote whenever possible. For instance, we received NSF/Tremaine Foundation funding to start COVID-related research: a project in K-Gr2 students examining the effect of adaptive computerized reading instruction on children’s learning at home.
Israel - Yaniv Assaf (Prof of Neuroscience, Head of the Alfredo Federico Strauss Center, Department of Neurobiology, Faculty of Life Sciences and Sagol School of Neuroscience, Tel Aviv University)
Challenges: Israel was in lockdown during March-April 2020, and studies in humans were shut down. At the end of lockdown (May), we were allowed to have only 5 individuals in the lab. As we have 12 team members in the group, with four paid employees, only one student was allowed to attend the lab each day. Teaching remotely using Zoom or similar software has some disadvantages, including the lack of interaction with the students.
During lockdown, the imaging center was closed. Scanning eventually restarted in May with several restrictions dictated by the Ministry of Health: we had to sterilize the scanner and the suite, and build in a 30-minute gap between scans.
Adaptations: We shifted to one-on-one and lab meetings using remote meeting applications. On a personal note, I had more quiet time to myself, which allowed me to complete some of my ongoing work.
Opportunities: I started working with team-management software that assists with planning my team’s assignments. From a research perspective, we had an incredible opportunity to scan individuals who had been scanned prior to the pandemic (before February 2020) and again after the lockdown was lifted. Individuals were willing (and probably had the time and availability!) to come in to be scanned. We asked whether there are neurobiological correlates of COVID-related changes in behavior, such as social avoidance and reduced interaction. Strikingly, we found that the post-lockdown group showed an increase in amygdala volume. The pandemic thus offered an opportunity to test the brain structural correlates of the behavioral stress that everyone was experiencing.
In summary, labs across the world have been forced to adapt, finding new ways of working remotely and safe ways of scanning during the COVID-19 pandemic. There is clear regional variation in how imaging labs in different countries have responded, reflecting local infection rates and government policies. Many have increased their use of open datasets, and there are certainly improved opportunities to present virtually at a range of locations. We’re clearly not out of the woods yet, as Europe and North America currently see a resurgence of COVID-19 infection rates. But this time, we are at least somewhat prepared for whatever 2021 brings.
By Nabin Koirala
In advance of the scheduled launch of “Aperture”, the upcoming journal from the Organization for Human Brain Mapping (OHBM), we wanted to get up close with the journal’s first Editor-in-Chief, Tonya White. Tonya is currently an Associate Professor in the Department of Radiology and Nuclear Medicine and the Department of Child and Adolescent Psychiatry at Erasmus Medical Center in Rotterdam, the Netherlands. We discussed her personal journey in science and her vision for the journal.
Nabin Koirala (NK): Thank you so much for making time for this interview. To start, could you please introduce yourself to general readers who may not be scientists?
Tonya White (TW): That's always an interesting question because I have a number of different hats. I could say that I'm a developmental neuroscientist, a child and adolescent psychiatrist, a pediatrician or an electrical engineer. But what I've been mostly doing currently is what's called pediatric population neuroscience, which is actually the intersection between epidemiology and developmental neuroscience. The neuroimaging program I came to set up in Rotterdam is a large population-based study of child development. We’re currently collecting and evaluating more than 8000 MRI scans from children at three time points. Through the so called “Generation R Study” nearly 10,000 mothers who were pregnant between 2002 and 2006 were approached to participate in the study and the imaging is nested into a multifaceted epidemiologic study looking at many different aspects of child development.
NK: Wow, that's a large sample. So, what do you actually aim to find out from this huge cohort of 8000 scans?
TW: One of the things that’s exciting about population imaging studies is that they have a direct translation into public health. You can look at exposures during pregnancy, such as mothers who smoke, don’t take enough folate, or use cannabis, and what happens in the brain because of these exposures. Another aspect is that most studies using clinical populations look at the extremes of a distribution: say, those who have a clinical diagnosis versus an exceptionally healthy control group. With population-based studies, you can also study individuals with subclinical symptoms, i.e., those who don’t meet an actual clinical diagnosis but have some symptoms. Then you can ask: does the underlying neurobiology follow a continuum parallel with the clinical symptoms? If you have more symptoms, do you have more differences in the brain? Or are these differences related to a dose effect?
Lately, I've been really quite interested in the role of stochastic processes in development. We talk a lot about nature and nurture, but we don't often talk about chance. If you go back to the early 20th century, you see a whole dispute between Niels Bohr and Albert Einstein on the role of deterministic processes. So, I think we should also start a conversation of stochastic processes within the field of neuroscience.
NK: Thinking of Nature and Nurture, how much genetic influence do you think there is in brain development?
TW: I believe it's largely driven by genes. I think genes have a huge effect in terms of brain development and the other factors like the environment and stochastic processes have more subtle but real and measurable effects. As an example, in several of our studies, what we've looked into is the polygenic risk for different psychiatric disorders; we’ve shown several genes contributing to psychopathology.
NK: This is fascinating, but I have to admit I am even more curious about your journey in science. How did you end up doing neuroscience, having started as an electrical engineer?
TW: Oh, that’s a long but I guess a good story. Anyway, I’ll try to shorten it as much as I can. My undergraduate degree was in electrical engineering with a focus in biomedical engineering. I started engineering because I was good at math and my father, who was an engineer, encouraged my older brother, myself, and my younger sister to go into engineering. During my bachelor’s, about half the people in my engineering class were looking into jobs related to defense. But I was rather interested in medicine and the human side of things. So, during my graduate years, I started looking for options, and my interest at first was in bioengineering and working with prosthetic devices. This might also have to do with the fact that I really liked my summer job working on the Utah artificial arm during my undergraduate years. So, while continuing my graduate program in engineering at the University of Illinois, I began medical school. By my third year of medical school, I would have said that I was planning to go into pediatrics. However, during my fourth year of medical school, I had a chance to spend a month at Brown University, rotating through child psychiatry at Bradley Hospital. After that, I became very excited about child and adolescent psychiatry. I went back to my Dean and requested that he re-write his letter for me to apply to the so-called triple board program, which involved pediatrics, psychiatry and child psychiatry. It’s a natural step to become interested in the brain through psychiatry.
Because of my clinical experience during my residency, I became interested in autism. I went to the University of Iowa for a neuroimaging postdoctoral research fellowship with Nancy Andreasen on schizophrenia research with the goal of also being able to work with Joe Piven on neuroimaging in autism spectrum disorders. But shortly after I moved, Joe moved to North Carolina, so I then focused on schizophrenia and moved to the University of Minnesota to set up a youth psychosis program. I spent about four days a week at the Center for Magnetic Resonance Research at the University of Minnesota. Then in 2009 I had the unique opportunity to set up the neuroimaging program in Rotterdam.
NK: Wow! that’s quite a story. So, having three different degrees, how much do you think this vast intersection of knowledge has helped you in your academic career?
TW: I can say engineering has been extremely helpful, particularly for the intersection between neuroimaging and medicine. To give an example, there are lots of new methods that are proposed regularly. Most of them are cool, and sometimes neuroscientists apply them without actually thinking about the assumptions from a neuroscience perspective. So, the engineering degree has given me the ability to understand what's involved in the assumptions of these methods, to have a different overarching view to better apply them in addressing my research questions. Even though I do think that having in-depth knowledge in multiple domains is very helpful, I also like the kind of hybrid courses currently springing up in universities, like medical technology, computational neuroscience etc. I believe these courses create bridges between medicine and different aspects of technology.
NK: Maybe one last question regarding your academic journey so far. You were a faculty member in Minnesota for around eight years. What made you switch to this position in another continent?
TW: I would say a combination of things. A key factor was the Generation R Study, which I thought was fascinating; being able to infuse neuroimaging into that study seemed like a great opportunity. Among other factors, there was a lack of NIH funding under George W. Bush: NIMH funding rates during his period went down to as low as 3-5%. I also had a great experience living in Europe for five years of my childhood, with my parents in Germany. So I think it was more that the opportunity in Rotterdam had a pull, as I did very much enjoy being at the University of Minnesota and being close to family.
NK: So let’s focus on Aperture. Could you tell us a bit more about its formation, the concept behind it, and what this new journal brings to the sea of others? What’s different about it?
TW: I was the secretary on the OHBM Council when it was first introduced as a possibility by, I believe, Martin Lindquist. He said “wouldn't it be nice if we had a mechanism to publish these educational articles (the COBIDAS document) that are good for the OHBM community and the wider neuroimaging scientific community?” JB Poline then picked up on that and, together with Peter Bandettini and a group that now form the Aperture Oversight Committee, became the driving force behind Aperture.
As to what sets Aperture apart, there are a couple of different aspects. One is the promotion of open science: not only will the journal itself be open access, but so will what we call research objects, like scripts, data sets, educational materials, etc. Even educational material that you would normally see at one of the OHBM educational courses could be embedded in something like a Jupyter notebook. Second is the low publication cost. Given the very high open-access publication fees at many journals, these costs can be prohibitive for researchers in low- and middle-income countries, making it more and more difficult to publish research from those countries. I think we’re all in this research mission together, trying to understand the brain in the hope that people’s lives will be improved, so an initiative in that direction is definitely one of the primary goals of Aperture.
NK: Great. Coming from one of those low-income countries, I can totally relate to what you are saying. Based on an OHBM survey on Aperture, a large proportion of participants said that they would be most interested in publishing code or tutorials. How do you plan to review and manage that, given the updates such articles need?
TW: Science is dynamic; it will change. So we are more focused on the initial review process than on eventual updates. Major updates could be submitted as updated research objects to Aperture and given a new DOI. We’ve discussed several ideas on how to review something like code or a script. Even though there is not yet a concrete plan, one thing I’ve been thinking is that it might be good to have teams who get together to check the scripts, run the code, and see whether it does what it says it does and what issues arise during testing. We will start with some test cases within our editorial team to try some of these ideas and see how it goes. But the goal is that eventually these types of research objects will be published and receive a DOI.
NK: That sounds fantastic. One of the active debates going on in science these days concerns diversity and inclusion. Do you have plans to enhance that in Aperture?
TW: Definitely. Step one, I will be looking at diversity within our editorial board, making sure that we have a diverse group covering different domains. It’s important to consider diversity not only in terms of gender, race, ethnicity, sexual orientation, gender identity, disability, language, etc., but also diversity in methodologies and approaches to science. Diversity is perhaps one of the greatest gifts we currently have in academia, so I will definitely try to promote it in every aspect possible.
NK: As an editor-in-chief have you envisioned a certain direction for the Journal in the next five years or so?
TW: One of the advantages of being under the umbrella of OHBM is that we can publish only research objects of high quality. It might not be straightforward at the beginning, given we don’t yet have an impact factor or indexing in PubMed. We're working together to provide a venue for people to submit types of research objects that may not fit in other journals. The general goal would be to establish Aperture as a quality journal that is not for profit and becomes a platform for diverse articles relevant to the field of neuroimaging.
NK: Have you thought about a particular model of review process? Would the reviewers be open after acceptance like in Frontiers? Would the process be double-blinded?
TW: We've discussed this and there are many advantages and disadvantages to both processes. For example, someone junior in the field who is reviewing a research object from someone more senior may not feel comfortable having their name published. So, for now the idea is to make open review optional, giving reviewers the ability to opt out. Regarding the double-blinded process, we have not yet decided upon that.
NK: Before we conclude I would like to ask some personal questions which I am pretty sure are interesting for many of us. Let’s start with your hobbies?
TW: Actually, this afternoon I am planning to go climbing at a climbing hall. But apart from that, probably the thing I most like to do (in the winter) is to ski. I'm a member of a cross-country ski group. Here we go roller skiing in the Netherlands. I also enjoy biking and most things outdoors like canoeing, hiking etc., which I do a lot with my family. I’d rather listen to audiobooks than read. Because I read so much for my work, sitting and reading books feels more like work than free time, so I just listen to audiobooks while doing other things. Actually, listening to books on tape in Dutch has really helped to improve my Dutch language skills.
NK: Are you already a fluent speaker?
TW: Most of my interactions with my Dutch colleagues are in Dutch, but I am nowhere near perfect. Still a lot to learn.
NK: Having a chance of working both in the US and Europe, do you feel that there are some differences in terms of research culture across the continents?
TW: Most of the important things are very much the same, but there are things that are quite different. For example, most of the research in the US is done by the PIs (Principal Investigators) and research assistants, at least on the medical school side of things. But here in Europe most of the research is driven by PhD students. Also in Europe during summer the population of the research unit decreases by 75%, as most people go on vacation for extended periods of time and they're expected to do so. You are interviewing me in the summer [laughs]. I like the quietness of the summer to get work done and then I go on ski vacations in the winter.
NK: Being an expert in the Brain, I am very curious to know: if you were to be a brain region, which one would it be?
TW: What a great question! I think probably the hippocampus because it integrates multimodal information. And that's kind of sitting between multiple disciplines. That's kind of what I do. I actually believe more in connected regions rather than one specific brain region, but I guess I can't call connectivity a region, can I? [Laughs]. Probably another favorite would be an action potential. Potential for action related to Aperture. [Laughs]
NK: Three key ideas for early career researchers?
TW: 1. Think outside the box. Because you should always remember that if we, as senior researchers, had already figured it out, you wouldn't have a position. So if we continue doing things the same way that we're doing, it's possible that we miss something. So make sure that you think outside the box.
2. Persistence. As an example I would say, as a researcher you need to get used to rejection. When your first grant gets rejected and another grant after that, the ones who persist are the ones who are going to be successful.
3. Play well in the sandbox. We're all in this together to hopefully make life and this society better for all of us. So we should all remind ourselves of the real reason why we do science.
NK: Great suggestions. Thanks again for taking time and making it such a great interview.
Learn more about Aperture here.
John Mazziotta is Professor of neurology, CEO of UCLA Health, and vice chancellor of UCLA health sciences. He was also a founding member of the OHBM. He co-authored the first book on whole-body, cross-sectional anatomy using CT. He’s been involved in the first PET studies in normal subjects and with patients with epilepsy and Huntington’s disease. He was the principal investigator of the ICBM brain atlas, a key tool for brain normalisation. We interviewed him as part of our OHBM Oral History series, to find out about the early days of PET, (f)MRI and the inception of OHBM.
Nils Muhlert (NM): Thank you very much, Professor Mazziotta, for joining us today. I'd like to start by asking you about your background: Why and how did you become interested in neuroimaging?
John Mazziotta (JM): Well, I wanted to be an architect. That didn't work out because I spent a lot of time in Manhattan with architects when I was an undergraduate, and they didn't seem very happy. I liked science and went into a lab where I was doing early molecular biology and that was interesting but very isolating. I thought, “Well, I'll go to medical school.” I hated medical school, memorizing bones and things of this sort. Ultimately, I met a neuroscientist in the medical school. The school also had a very active biophysics department and was building the first CT scanner that could image outside of the head. This is now mid-1970s. I got involved in that project and we physically built that machine, soldering wires. We had a functional scanner that worked anywhere in the body.
I decided I would become a neurologist, moved to Los Angeles and UCLA and immediately met the group that had moved from Washington University in St. Louis. They had been involved in the development of PET and all worlds connected, so I got involved in research with PET and then MRI.
NM: Should I ask who the neuroscientist was, that you met during your med school years?
JM: The neuroscientist in my medical school years was a woman named Betty Hamilton. And ironically, Peter Fox and I were in the same medical school class, and there were other neuroimagers in medical school with us. It was an interesting coincidence. The seeds were set there.
NM: And what was it about neuroscience and neuroimaging that really attracted you?
JM: Clinically, it was the approach to the patient, the gathering of the data, the deductive logic of coming to a diagnosis and then having confirmatory tools like imaging that could show you structural and/or functional abnormalities, confirming whether that diagnosis was accurate or not.
NM: And what do you see happening with neuroimaging in the US nowadays?
JM: Obviously there's been a big shift to MR-based strategies rather than PET. But now we're starting to see a resurgence of PET to study patients with neurodegenerative and psychiatric diseases. In the mid-1980s I was confident that psychiatric disease would be completely understood by PET because they were chemical disorders. You could give somebody chemicals and they would become psychotic or delusional or hallucinate. And you could take a patient who had mental health issues and give them medications and they would get better: it was a chemical issue, no structural changes. But after 40 years of scientists, very diligently, pounding away and making ligands and imaging patients, we still have not provided, to my mind, actionable insights in mental illness through imaging.
NM: And do you think there will be breakthroughs over the next 20 to 30 years? Is there anything you'd peg your hopes on?
JM: I’m just as confident now as I was then [laughs]. It's a tough problem and a very expensive problem. When developing a new ligand it might be useful if, as a community, we encouraged key places to be the factories of ligand development. It's so expensive, and like CERN or other high energy physics projects, talented scientists could go there, do the work and then return to their home institutions. Once proven, the recipes for the ligands could be distributed widely.
NM: That's an excellent idea, to really pool the resources and expertise. Going back to your own work, what research or other contributions are you most proud of?
JM: Always a humbling question. My research with PET began with studying normal individuals. We studied the visual system. Our first papers were on visual responses to different types of stimuli, sensory deprivation, auditory stimulation, and a variety of states in normal subjects. That was very exciting. Every time we did a scan, it was a new day. You never knew what you were going to see. It was an exciting time.
On the clinical side, I was involved in the first PET studies of patients with epilepsy, the first studies of patients during seizures, Huntington's disease and depression. Combining genetics with imaging, in the Huntington study, was a good example of developing probabilistic approaches to individuals who are at risk, then testing them genetically and seeing the outcome and looking at the scan. Those were also exciting times, in our collaborations with what was then the Hammersmith group in London; ultimately with Karl Friston, Richard Frackowiak and the others, Terry Jones in that group, and our colleagues in UCLA with Mike Phelps and Henry Huang, myself and some of the fellows that I had, Roger Woods and Scott Grafton. We carried out a lot of the early work on blood flow measurements with that combined group.
NM: And many neuroimagers will know of the ICBM brain atlas. You had a pivotal role in the development of that. How did it come about?
JM: That was a painful part of history. We were all struggling with how to normalize data, in our own labs, among individuals within a modality and then across modalities, and then ultimately, to pool data from multiple different laboratories. And it was clear that this was an enormous problem, and it was unlikely that one laboratory on its own would solve it.
I invited to Los Angeles, where I was then working with Arthur Toga, a group of the individuals around the country and around the world that were doing this kind of work and were frustrated by the fact that the problem was a difficult one. We all worked together for a couple of days: Peter Fox, Alan Evans, the people I mentioned, some individuals from Europe and Asia. We were all natural born enemies. We're all vying for the same funding dollars, all doing a lot of similar work. There was a lot of posturing and opening remarks. In the end, we emerged from those two days saying the only way to really solve this problem is to do it together. And if we did it together, we'll actually get it done and would emerge with something that makes sense.
Later, we teamed up with other groups, particularly the group from Jülich, who did all the amazing work on the histology. It was a big program, Arthur Toga perfected the sectioning of human heads; that data went to Germany, to Karl Zilles and Katrin Amunts and their talented teams and went on from there. It's still going on today with the BigBrain project and the collaborations between McGill and Jülich. So that was another satisfying addition to the contributions by the group. That group continues to meet, always in Hawaii, always in the first week of November; this year will be the first exception in something like 26 years.
NM: It's a great example of fruitful collaboration, not just between national institutions, but across the globe. And you were also involved in the creation of OHBM. What was that like? And what did you imagine OHBM would be like?
JM: It was less about what it could be, but what it would eliminate for us. We were at a meeting in San Antonio that Peter Fox hosted. I was there, Arthur Toga, Alan Evans, Bernard Mazoyer. We were bemoaning the fact that we were all going to all these meetings every year, the cerebral blood flow and metabolism meeting, this meeting, that meeting, meetings about MRI, meetings about PET, meetings about everything. It didn't make any sense because most of those meetings had nothing to do with what we were interested in, which was trying to map the human brain.
Sitting around in a little conference room, the idea emerged: "Well, why don't we just have a meeting about what we're interested in and not have to go to all these other meetings?" So everybody said, "Yeah, that'd be great. But it's going to be such a pain to do and who's going to front the money." But people were compelled to do it just because they were so frustrated with the current situation. And when we thought about how much money was wasted sending fellows and students and everybody else to all these other meetings that were low yield, we decided we'd roll the dice. Bernard decided he would really roll the dice and put a deposit on a center in Paris. And the rest is history.
NM: And how did you think it would evolve over the years?
JM: Well, we didn't know if it would work at all! So initially, it was a matter of trying to stabilize the finances to the point where we could at least be confident that if we advertised the next year, there would be enough funding to make it happen. The more grandiose among us envisioned something more like what the reality is now, that there would be books and journals and a subdiscipline of neuroscience that was basically doing this. We also felt that it would be important to have educational components: that a graduate student in psychology might not necessarily be exposed to the physics of the machines that were generating the raw data, or that a mathematician who was doing modeling wouldn't necessarily understand neurophysiology and neuroanatomy. And so once it was clear that we could sustain the meeting, then the next part was to make it something that had value not only as an information exchange but also as an opportunity to provide training to the field.
NM: And thinking of your own involvement with OHBM. What have you found most rewarding about that?
JM: Walking into the rooms with the posters or the lectures and seeing all those people, and the energy of the students and the fellows and people who are seeing their careers and their professional motivation, joined by colleagues who were like minded in the same place. That's very satisfying to see. From five people sitting in a room to thousands of people who are all thinking about these problems and trying to make progress.
NM: Definitely! Are there any experiences you've had attending OHBM that really stand out?
JM: One of the things that we did for a while was Richard Frackowiak and I would summarize the whole meeting at the last session in 30 minutes. So we had to somehow boil down 1000 presentations into 30 minutes. That was always challenging and fun. And I have to say it does focus your attention on the content of the meeting, rather than just cruising around and talking to people and reading a few posters. For the first 10 years, we had an “L & L party” with people from the Los Angeles & London labs. We would jointly fund that party, it was one night of the meeting, and we'd bring the entire lab to these parties. We had them on boats, nightclubs, bars - they were all over the place. That's evolved into the social events that are on the different nights of the meeting.
NM: So that's been there since the very start?
JM: The first one.
NM: Brilliant. And have you seen any changes in how the meetings run or different angles coming into it that perhaps weren't there at the start?
JM: Well, when things are small, they're easy, everybody's in one room. And as things grow, they become more specialized and subcategorized. Then you have to pick and choose which things you want to attend or not. That's a natural evolution of any process like this. But I appreciate the fact that the named lectures and the other components have been maintained as unifying parts of the meeting. Attendees really look forward to those sessions and everybody's there.
NM: And a final question then. Not an easy one to end with! But, what do you see as the future for neuroimaging?
JM: [long pause] Notice the long pause [laughs].
The future is bright. But the pressure will be there to deliver, on the clinical side, truly valid biomarkers from imaging. We don't have any of those yet. If you really look hard and ask the question — if we have this imaging result, then this is the diagnosis — those are few and far between in functional imaging. That will be the question that will be put out there: can you do that, and if not, then why should we fund it? So that's one.
The second issue is the one I mentioned earlier: insights into purely functional and chemical illnesses like mental illness. And that's a big lift, and an important one. And a similar one will be neurodegenerative diseases.
Then the most profound and the most interesting question is: how does the brain work? I would envision in the future, that through techniques we don't know about today or in some parts, extrapolations of some of the physics of MRI, that we'll get to the point of actually being able to image neural conduction, synaptic activity. It wouldn't be microscopic, but in large ensembles. With improved temporal resolution, we'll be able to understand the choreography of signaling in the brain. Once that level is achieved and massive data can be managed in four dimensions, then the insights will come more rapidly.
NM: So there's bridging of scales.
JM: Yes both spatially and temporally.
NM: Fantastic. Professor Mazziotta, thank you so much for taking the time to speak to us. It's been a pleasure hearing about your history with OHBM. Thank you very much.
JM: Thank you.
Authors: Katie Williams, Ilona Lipp, Mark Mikkelsen
Infographic: Roselyne Chauvin
Expert editors: James Kolasinski, Paul Mullins
Newbie editors: Curtiss Chapman, Yana Dimech
The noninvasive imaging tools that we Human Brain Mappers apply are most often used to research brain structure and function. Neurotransmitter systems are something that we are aware of and take into account when coming up with hypotheses or interpreting our findings, but rarely make the direct subject of our investigation. Most of us have probably heard of GABA (gamma-aminobutyric acid) as the principal inhibitory neurotransmitter that is used by many interneurons. That we can also measure GABA in vivo with MR spectroscopy (MRS) is maybe less widely known. While this biomedical imaging tool opens many doors for neuroscience, measurement of GABA using MRS is not broadly used yet, possibly because special sequences and analysis methods are needed. At the OHBM Annual Meeting in 2019, for the first time, an educational session on GABA MRS was held. This post summarizes what was taught about the most important things you need to know if you’re considering GABA MRS for your research.
Why should we care about GABA?
As GABA is an inhibitory neurotransmitter, an intuitive way to think about it is that it can regulate neuronal firing, allowing the establishment of complex neural circuits and ensuring that the brain does not become “overactive”. This intuition is in line with the fact that drugs that act on the GABAergic system are traditionally used to treat anxiety and pain. In her video, Caroline Rae (from the beginning) emphasizes that when considering GABA, one should also consider glutamate, the excitatory neurotransmitter. She explains how GABA and glutamate are actively coupled at the synapse (min. 10:55). The ability of GABA to regulate glutamatergic firing makes it a neurotransmitter that is likely involved in many biological processes, one of them being brain plasticity, or the brain’s ability to structurally react to new situations (as explained by Charlotte Stagg from min. 1:55).
Charlotte explaining the role of GABA in brain plasticity
How can we measure metabolite concentrations with MRS?
To understand the physical principles that give rise to MRS, it is helpful to take a few steps back to the basics. Robin de Graaf succinctly reviews (from min. 2:30) how nuclear magnetic resonance (NMR) in its essence is all about separation and detection of frequencies. In conventional MRI, we create a signal by interacting with the resonance frequencies of the nuclei of interest – most typically the hydrogen proton, at around 127 MHz at 3T and 298 MHz at 7T. MRS differs from typical MRI because it relies on something called the chemical shift effect. What does chemical shift mean? Depending on the chemical composition of a given molecule, the constituent protons experience different electronic shielding effects, resulting in slight differences in their resonance frequencies, which translates into many peaks appearing in an MR spectrum rather than a single clean signal peak at the free molecule’s resonance frequency. This is called chemical shift, because the biochemistry and environment of the molecule lead to a slight shift of its signal in the frequency domain. So, if one were to effectively “zoom in” on the hydrogen proton signal at 298 MHz, for example, we would see that the signal is actually composed of many smaller peaks in the range of a few hundred Hz surrounding this frequency in the MR spectrum. This signal reflects all MR-visible hydrogen-containing molecules in the sample. Since the water signal (coming from the hydrogen protons in the water molecule) is so much stronger in intensity than these other peaks, frequency-selective water suppression pulses are integrated into MRS pulse sequences to help reveal the less intense peaks that we are interested in. After showing us a zoomed-in shot of several peaks (screenshot below), Robin explains (from min. 4:00) how electronic shielding and chemical shift lead to consistent, exact locations of the peaks of different chemicals – or metabolites – in the MR spectrum.
As resonance frequency depends on the field strength, Robin goes on to describe how moving away from frequency-based units to a parts-per-million (ppm) scale allows metabolite measurements to be more easily compared across field strengths (from min. 6:52).
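The field-independence of the ppm scale is simple arithmetic: a chemical shift in ppm is the frequency offset in Hz divided by the spectrometer frequency in MHz. As a minimal sketch (the resonance frequencies and shift values below are approximate textbook numbers, not taken from the talk):

```python
# Sketch of the Hz <-> ppm conversion used in MRS. The spectrometer
# frequencies (~127.7 MHz at 3T, ~298.1 MHz at 7T) and chemical shifts
# (water ~4.7 ppm, NAA ~2.01 ppm) are approximate illustrative values.

def hz_to_ppm(offset_hz: float, spectrometer_freq_mhz: float) -> float:
    """Chemical shift in ppm = frequency offset (Hz) / spectrometer frequency (MHz)."""
    return offset_hz / spectrometer_freq_mhz

def ppm_to_hz(shift_ppm: float, spectrometer_freq_mhz: float) -> float:
    return shift_ppm * spectrometer_freq_mhz

# The water-NAA separation is a fixed 2.69 ppm at any field strength,
# but the corresponding frequency separation grows with the field:
sep_ppm = 4.7 - 2.01
print(round(ppm_to_hz(sep_ppm, 127.7)))  # ~344 Hz at 3T
print(round(ppm_to_hz(sep_ppm, 298.1)))  # ~802 Hz at 7T
```

This is why spectra acquired at different field strengths line up when plotted in ppm: the peak positions are constant, while the absolute frequency spread (and hence the spectral resolution) increases with field strength.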
Robin showing us resonance frequency spectra of different nuclei
Ok, what now?
When combined with a pulse sequence with spatial localization, such as MEGA-PRESS or MEGA-(s)LASER (which Robin describes later, min. 17:56), a metabolite spectrum can be acquired from a volume of interest in the brain. The chemical shift effect holds true for all MR-detectable nuclei and, as such, for a long list of metabolites composed of those nuclei, including GABA. For this reason, many challenges that we face in measuring GABA concentrations apply universally in MRS. Clever use of relaxation properties and nuclear coupling effects give us a few solutions, however.
Why is it challenging to measure GABA concentration with MRS?
If specific metabolites like GABA can be measured with MRS, why are we not using it in every neuroimaging study? To be completely forthcoming, there are a number of challenges in conducting successful MRS measurements. Luckily, there are some options to deal with each of them. In spectroscopy, the signals we detect are very weak, so we have to run several hundred repeated acquisitions to obtain an acceptable averaged spectrum for quantification. Another way to boost SNR is to acquire spectra from larger voxels. Choosing an extra-large voxel size (by MRI standards) for higher SNR, however, is not an ideal solution because of heterogeneous tissue compositions in a voxel, and GABA concentration varies across different tissues. Ashley Harris explains that it is important to correct your measure for its tissue composition (from min. 8:53), because of known differences in GABA concentration in gray and white matter.
Given the low SNR of metabolite signals, it has long been common to use single-voxel MRS acquisitions. This is one reason spectroscopy is not always categorized together with conventional MRI as a true brain imaging technique. However, using specialized pulse sequences, it is possible to acquire data from more than one region of interest using dual-voxel MRS, for example, which Muhammad Saleh describes in his video (min. 20:40). It is worth mentioning here that MRS imaging (MRSI, i.e., spatially resolved MRS) approaches do exist, in which multiple voxels are acquired from a volume; for example, 3D MRSI can reach whole-brain coverage with a 14 × 14 × 12 voxel matrix size and a 200 × 200 × 170 mm field of view (2.89 mL nominal voxel resolution), and technological advances to improve them are continually occurring.
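The nominal MRSI voxel volume quoted above follows directly from dividing the field of view by the matrix size along each axis. A minimal sketch of that arithmetic:

```python
def nominal_voxel_volume_ml(fov_mm, matrix):
    """Nominal MRSI voxel volume in mL, given the field of view (mm)
    and acquisition matrix along each of the three axes."""
    vol_mm3 = 1.0
    for fov, n in zip(fov_mm, matrix):
        vol_mm3 *= fov / n  # voxel edge length along this axis, in mm
    return vol_mm3 / 1000.0  # 1 mL = 1000 mm^3

# The whole-brain 3D MRSI example from the text:
print(round(nominal_voxel_volume_ml((200, 200, 170), (14, 14, 12)), 2))  # 2.89
```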
What is the problem with spectral overlap?
What might be considered the biggest challenge for accurate metabolite measurements is spectral overlap. Given that so many biologically relevant molecules contain hydrogen protons, many with similar hydrogen structures, their signals will overlap, making it hard to get an accurate quantification of individual peaks that we care about, as Robin describes (from min. 9:30) in his video. If we cannot isolate the GABA peaks, then we cannot quantify them easily!
In addition to the signals of identifiable metabolite peaks like creatine and glutamate that overlap with the GABA peaks in the spectrum, an underlying assortment of broad peaks originating from macromolecules is present. (Here, macromolecules refer to a host of large molecules, including proteins, that differ from smaller molecular structures such as GABA.) The macromolecule (MM) signal is a biologically generic signal detected by in vivo MRS that usually consists of about ten peaks spread across the acquired spectrum. The MM signal can be attenuated using several acquisition solutions, which we describe below. However, it is important to note that the MM signals cannot be 100% removed, and their contribution is always present, to some extent, in a GABA measurement. There are several different options for approaching spectral overlap, including moving to a higher field, like 7T, which improves the spectral resolution, meaning that the peaks are more spread out, and reduces the amount of overlap that occurs between them (an expensive solution, Robin notes, min. 10:50). Another possibility is to take advantage of the T1 and T2 relaxation differences of different metabolites and use inversion recovery and spin-echo sequences in your experiments (as Robin describes, min. 11:46).
What can we do about this spectral overlap problem?
By far the most popular method for dealing with spectral overlap, and the most discussed technique for GABA quantification in the educational session, is the spectral editing approach: The same physical principles of nuclear interactions that make tiny changes to local magnetic environments and allow us to accomplish chemical shift imaging (i.e., to obtain spectra) offer a solution to spectral overlap. Nuclei that are chemically bonded to the same molecule, and thus generate multiple peaks for that molecule, are scalar-coupled, which, in quantum mechanics terms, means manipulation of one signal of a molecule also modulates the other signals of the same molecule. This phenomenon can be used to selectively manipulate overlapping signals and acquire the signal of interest. The figure above shows that GABA is composed of three major signals that are scalar-coupled to each other, and that glutamate and creatine have peaks overlapping in some locations. From min. 13:11 in his talk, Robin explains scalar coupling and how frequency-selective inversion pulses can be used during acquisition to modulate the signal of scalar-coupled molecules, but not the uncoupled ones. This is known as “editing” an MR spectrum. Using this technique, one can perform paired experiments, one with and one without the frequency-selective editing pulses, to recover the signal of the metabolite of interest. This technique, known as J-difference editing, is a powerful MRS technique used for measuring GABA in the brain. It should be noted that while scalar coupling helps us to more specifically acquire our signal of interest, co-editing always occurs, and attention should be paid to which molecules are being inverted. Robin describes a simple pulse sequence for a full J-difference editing experiment, using GABA as an example (from min. 17:56), while Muhammad Saleh speaks extensively in his video about special GABA editing sequences and ways to speed up editing experiments to increase the information extracted from the data acquired. And that brings us back to the topic of challenges in measuring GABA: applying solutions to acquire good spectra significantly increases scan duration, giving rise to more temporal instabilities in the signal, specifically frequency offsets. Frequency offsets are shifts in the main magnetic field that most often occur either because of heating/cooling of the gradient hardware elements in the scanner or bulk participant head motion. In her talk, around 24 minutes into the video, Ashley discusses this problem and how sometimes it can be fixed retrospectively through frequency alignment.
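The ON-minus-OFF logic of J-difference editing can be made concrete with a deliberately simplified numerical sketch. This is a toy model with Gaussian peaks, not a real MEGA-PRESS simulation; the point is only that the overlapping peak, identical in both sub-experiments, cancels in the difference, while the edited signal survives:

```python
import numpy as np

ppm = np.linspace(2.0, 4.0, 2000)  # chemical shift axis, in ppm

def peak(center_ppm, amplitude, width_ppm=0.03):
    """Toy Gaussian lineshape on the ppm axis."""
    return amplitude * np.exp(-((ppm - center_ppm) ** 2) / (2 * width_ppm ** 2))

creatine = peak(3.0, 10.0)  # large overlapping peak, untouched by the editing pulse
gaba = peak(3.01, 1.0)      # small GABA signal hidden underneath it

# Toy editing: the coupled GABA signal flips sign between the edit-ON and
# edit-OFF sub-experiments, while creatine is identical in both.
spectrum_on = creatine + gaba
spectrum_off = creatine - gaba

difference = spectrum_on - spectrum_off  # creatine cancels exactly
# The residual is twice the GABA signal, now free of creatine overlap.
```

In a real acquisition the cancellation is never this clean: subtraction noise, co-edited macromolecules, and any frequency drift between the ON and OFF sub-spectra all leave residuals, which is why the frequency stability discussed above matters so much for edited MRS.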
Robin explaining the J-difference strategy of measuring GABA
What do I need to consider when setting up a GABA-edited MRS acquisition?
There are some essential questions to answer when setting up a GABA-edited MRS experiment. Of course, the first is where in the brain you want to measure GABA. For hypothesis-driven studies, this will be determined either by the functional neuroanatomy of the aspect of brain function being studied or by the regions implicated in the neuropathology/pathogenesis of a particular brain disorder/disease. It is worth reiterating that, given the low SNR of the GABA signal, the size of the volume of interest will be on the order of tens of cubic centimeters. Thus, one will need to be aware of the limitations on the specificity of where in the brain GABA will be measured. For GABA editing, voxels tend to be around 27 mL in volume (e.g., 3 × 3 × 3 cm) to attain reasonable SNR. In the MRS literature, the voxel size is often reported in volume, as this is the relevant factor for SNR.
Oh, such large voxels!? Can I not just go to 7T and get a better spatial resolution?
If it is possible (and desired) to perform MRS experiments at ultra-high field (>3T), the benefits of a higher field strength alleviate some of the challenges of MRS acquisitions. Aside from the associated increase in inherent SNR of metabolite signals, and the already mentioned increase in spectral resolution (the separation of peaks in the spectrum), high field measurements allow improved selectivity of editing pulses. These advantages of ultra-high field MRS make it more feasible to detect GABA without using editing. Nevertheless, editing at 3T remains the most commonly used approach for measuring GABA that you will encounter in the literature.
Another consideration for increasing SNR is scan duration. In edited MRS, each acquisition is repeated (usually several hundred times) in order to perform signal averaging to improve the SNR of the detected metabolite signals. As Ashley Harris explains in her presentation (from min. 19:43), the question of how many averages are needed (i.e., how many times to repeat the measurement in one scan acquisition, and thus how long to scan) will depend on voxel size, the scientific question being asked, and the region in which you are scanning. Some regions like the occipital lobe provide good SNR and therefore allow you to scan for shorter periods. In contrast, other regions like the temporal lobe are more challenging to acquire high-quality data in and necessitate collecting relatively more averages.
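The trade-off behind "how many averages" follows from the fact that SNR grows with the square root of the number of averages. A back-of-the-envelope sketch (all SNR values and the repetition time below are made-up illustrative numbers, not recommendations):

```python
import math

def averages_needed(target_snr, single_average_snr):
    """SNR scales with sqrt(N) under signal averaging,
    so N = (target SNR / single-average SNR) ** 2."""
    return math.ceil((target_snr / single_average_snr) ** 2)

def scan_minutes(n_averages, tr_seconds):
    """Approximate scan duration: one repetition time per average."""
    return n_averages * tr_seconds / 60.0

# Hypothetical numbers: halving the per-average SNR (a smaller voxel or a
# harder region like the temporal lobe) quadruples the required averages,
# and therefore the scan time.
n = averages_needed(target_snr=10, single_average_snr=0.5)
print(n, scan_minutes(n, tr_seconds=2.0))  # 400 averages, ~13.3 minutes
```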
Anything else I need to think about?
It is also worth considering the order in which you run your different MRI/S acquisitions in a given scan protocol. When conducting a study, it is quite likely you will be acquiring a variety of scans, such as fMRI, diffusion MRI, and MRS. Sequences that involve rapid switching of gradients (e.g., EPI and DWI) will lead to heating and subsequent cooling of the scanner’s hardware elements. This causes shifts (or drift) in the B0 field (and thus its frequency) that can have a considerably detrimental effect on edited MRS acquisitions, which require frequency stability to ensure the narrowband frequency-selective editing pulses perform as intended. Performing MRS acquisitions before any scans that make use of high gradient duty cycles can help lessen the impact of frequency drift on acquisition performance. Also, the use of prospective and retrospective frequency alignment methods can mitigate the detrimental effects of frequency drift on spectra. Ashley also talks about this in her presentation (from min. 22:30).
How do I know whether the quality of my spectra is good enough?
Several signal artifacts can lead to poor-quality MRS data. An excellent place to start is by reading this paper, which describes in detail the kinds of artifacts one would see in corrupted MRS data. A full description of artifacts is beyond the scope of this blog post. Still, one thing in particular that can significantly degrade the quality of your spectra is participant motion. The comparatively longer scan times of edited MRS acquisitions, unfortunately, provide more opportunities for a participant to move and worsen spectral quality. Some simple steps can be taken to prevent motion artifacts: emphasize to participants the importance of remaining as still as they reasonably can when they hear the scans running, and acquire structural/fast localizer images and MRS data consecutively so that voxels are placed as accurately as possible given the participant’s current head position.
The act (art) of rating the quality of MR spectra can be challenging to those new to MRS. Since MRS is methodologically distinct in several important ways from MRI, quality analysis may be less intuitive to new users who are more familiar with the latter technique. Typically, a good approach to quality analysis (when possible) is to consult a colleague (internally or externally) who has experience with MRS. A review of 2–3 pilot datasets can go a long way to establishing the predicted quality of MRS data for a proposed study. When an investigation is underway, it is highly beneficial to review data as they are collected. Continual reviews of data can prevent situations where a series of datasets have been acquired with significant artifacts that would lead to their removal from further analysis, which could potentially seriously undermine the success of a study.
Ok, so now I have a spectrum, but how do I quantify GABA?
Once you have acquired some GABA-edited MRS data, the next step is to quantify GABA from them. Several software packages are available that can quantify GABA from edited MRS data. These include Gannet, jMRUI, LCModel, TARQUIN and, most recently, FSL. Each has its own strengths, weaknesses and learning curve, but each will allow you to derive a quantified measurement of GABA from your MRS data. The GABA signal is quantified either in the time domain or (more commonly) the frequency domain, where either the amplitude or the area of the GABA signal is used to determine the concentration (as concentration is proportional to signal amplitude or area). While a description of each package is beyond the scope of this blog, readers are advised to read the following papers for further information:
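As a highly simplified illustration of area-based quantification (this is not how Gannet, LCModel, or the other packages actually work — real fitting handles baselines, lineshapes, and metabolite models far more carefully; the function name, window, and peak parameters below are our own choices), one can integrate the edited 3 ppm peak over a frequency window after subtracting a crude linear baseline:

```python
import numpy as np

def peak_area(ppm, spectrum, window=(2.8, 3.2)):
    """Integrate a peak over a ppm window after subtracting a straight-line
    baseline drawn between the window edges (illustration only)."""
    lo, hi = window
    mask = (ppm >= lo) & (ppm <= hi)
    x, y = ppm[mask], spectrum[mask]
    baseline = np.interp(x, [x[0], x[-1]], [y[0], y[-1]])
    z = y - baseline
    # Trapezoidal rule, written out to avoid NumPy version differences.
    return float(np.sum(0.5 * (z[1:] + z[:-1]) * np.diff(x)))

# Synthetic 'edited spectrum': one Gaussian at 3.0 ppm with a known area.
ppm = np.linspace(0.0, 5.0, 2048)
sigma = 0.05
spectrum = np.exp(-0.5 * ((ppm - 3.0) / sigma) ** 2)
true_area = sigma * np.sqrt(2.0 * np.pi)  # analytic area of a unit Gaussian

print(peak_area(ppm, spectrum), true_area)  # the two should closely agree
```

Because concentration is proportional to this area, what ultimately gets reported are ratios of such areas (e.g., GABA+ relative to creatine or water); for real data, use the dedicated packages listed above rather than anything this naive.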
Edden RAE, Puts NAJ, Harris AD, Barker PB, Evans CJ. Gannet: A batch-processing tool for the quantitative analysis of gamma-aminobutyric acid-edited MR spectroscopy spectra. J Magn Reson Imaging. 2014;40(6):1445-1452
Provencher SW. Estimation of metabolite concentrations from localized in vivo proton NMR spectra. Magn Reson Med. 1993;30(6):672-679
Stefan D, Cesare F Di, Andrasescu A, et al. Quantitation of magnetic resonance spectroscopy signals: the jMRUI software package. Meas Sci Technol. 2009;20(10):104035
Wilson M, Reynolds G, Kauppinen RA, Arvanitis TN, Peet AC. A constrained least-squares approach to the automated quantitation of in vivo 1H magnetic resonance spectroscopy data. Magn Reson Med. 2011;65(1):1-12
Ok, so now I have a GABA concentration measure. But what does this measure mean?
There are two things to consider. First is the specificity of the GABA signal we are detecting. As Ashley describes (from min. 3:25), the GABA signal is contaminated by a co-edited MM signal that leads to the observed GABA signal at 3 ppm being ~50% MM. For this reason, it is conventional in the field to define edited GABA measurements acquired by standard editing as “GABA+” measurements, to acknowledge the MM contribution. The majority of studies in the literature applying GABA-edited MRS will have acquired measurements of this sort. It is, therefore, important to recognize this limitation when setting up an experiment (it may impact the interpretation of your findings). Alternatively, one may implement an MM suppression technique that removes the MM signal underlying the GABA+ signal so that measurements are a “purer” measure of GABA. However, this comes at the cost of reduced SNR (from min. 5:00).
Ashley warning us that we are not only measuring GABA
The second thing to consider is that MRS measurements of GABA are not direct measurements of neuronal inhibition. In her video, Caroline describes that there are actually four types of inhibition (from min. 8:43), several metabolic pathways for GABA, and multiple GABA receptor types (from min. 11:49). Based on the intuitive way of thinking about GABA as an inhibitory neurotransmitter, one may expect to find negative relationships between GABA levels and brain activity, such as measured with fMRI (also see this paper for guidance on how to form and test hypotheses about the relationship between neurochemistry and activity). However, when it comes to the energy expenditure and metabolism that underlie functional imaging measures, such as BOLD signal changes, things are not so simple (see this paper showing that a relationship between GABA and BOLD is not easy to find). Caroline explains how excitatory and inhibitory activity together can either increase or decrease energy metabolism, depending on the context (min. 16:36); moreover, GABA can directly modulate blood flow (from min. 22:50). Therefore, the interpretation of GABA levels measured with MRS is far from straightforward. Caroline (from min. 23:20) points out that the measures reflect neurotransmitter and metabolic pools, that they are dependent on brain energy and activity, and that they could reflect tonic inhibition. As head motion, different types of medication, and the tissue composition of your voxel can all have an impact on the outcome measure, Nicolaas (from min. 27:03) recommends considering these confounding factors in your analysis and data interpretation. Additionally, the menstrual cycle and time of day have been found to potentially influence MRS-measured GABA concentration.
Does the uncertainty of interpretation not mean that it is pointless to do GABA MRS?
We understand that the difficulty in interpretation may be off-putting. But at this point, we want to remind you that most of the imaging measures we look at are indirect. Think about BOLD as a measure of neural activity, for example. These indirect measures are still useful for inferring something about a clinical condition and, in combination with other methods, for developing a more holistic picture of what is going on. In his video, Nicolaas gives a number of examples of how GABA MRS has been used in clinical research, such as in neurodevelopmental disorders (from min. 7:46), depression (min. 9:44), personality disorders (min. 11:34) and schizophrenia (min. 13:40). Due to its role in learning and plasticity, GABA MRS has also been used in healthy populations. In her video, Charlotte Stagg provides some examples of how GABA, measured by MRS, changes in perceptual learning (from min. 10:00), overlearning (from min. 16:43), and learning how to juggle, as an example of long-term learning (from min. 20:06).
Ok, so if I do want to start using GABA MRS in my research, how can I learn more?
A good place to start is to read these overview/consensus papers:
Bogner W, Hangel G, Esmaeili M, Andronesi OC. 1D-spectral editing and 2D multispectral in vivo 1H-MRS and 1H-MRSI - Methods and applications. Anal Biochem. 2017;529:48-64
Harris AD, Saleh MG, Edden RAE. Edited 1H magnetic resonance spectroscopy in vivo: Methods and metabolites. Magn Reson Med. 2017;77(4):1377-1389
Mullins PG, McGonigle DJ, O’Gorman RL, et al. Current practice in the use of MEGA-PRESS spectroscopy for the detection of GABA. Neuroimage. 2014;86:43-52
Another excellent resource is Robin’s book (which can be downloaded if your institution has access): In Vivo NMR Spectroscopy: Principles and Techniques
Finally, the MRS community has recently begun assembling a curated collection of resources for data acquisition and analysis in the form of MRSHub. The forum is a great place to pose questions that can be answered by experts.
by Nikola Stikov
As we are getting ready to announce the 2020 OHBM Replication Award winner, here is a brief flashback to 2019 and our interview with Richard Dinga from the Department of Psychiatry at the Amsterdam University Medical Centers in the Netherlands. Richard led the effort to replicate a study published in Nature Medicine in 2017 about the relationship between resting state connectivity and the neurophysiological subtypes of depression.
In the lead up to the OHBM Annual Meeting, I had the pleasure of speaking to one of the keynote speakers, Dr. Biyu He, an Assistant Professor at New York University. Dr. He has made many valuable contributions to the field of neuroscience, combining diverse imaging methods and analytical techniques to tackle big questions relating to perceptual processing, spontaneous activity and consciousness in the human brain.
Rachael Stickland (RS): Thanks again for joining me. It's nice to - virtually - meet you.
Biyu He (BH): Pleasure to meet you as well.
RS: I'm getting used to having many video calls every day now. I'm sure you are as well. How have recent months been for you, adapting to working remotely and only connecting to most people virtually?
BH: It's been okay. I miss the face to face interactions with people. But I think we've been very adaptive in my lab. As you know, in human brain imaging, we do a lot of data analysis. So we have been working on reading, writing and data analysis. And I think we've been able to weather the strange situation we live in pretty well.
RS: You're currently based at New York University (NYU) as an Assistant Professor in the Departments of Neurology, Neuroscience & Physiology and Radiology. Do you mind telling me about your research path and your route into science?
BH: Sure. I was a biology major in college, and really liked maths and physics when I was young. I wasn't sure what I was going to do in college initially but once I found neuroscience I was immediately hooked. It is just so absolutely fascinating. I felt like I couldn't ever be bored again. And it's also one of the most interdisciplinary fields in science. It's challenging and fascinating and very, very intellectually engaging. I did my PhD in neuroscience at Washington University in St. Louis. From there, I was looking for postdoc positions at the end of my PhD and unexpectedly got offered two positions to set up my own lab. One at the National Institutes of Health (NIH) and one at the University of Konstanz in Germany. I decided to go to the NIH and spent about five and a half years there. It was a wonderful time — I learnt new techniques, made new friends, found new mentors, and started a new line of research, which is what I'll be talking about in my [OHBM] keynote talk. Then, I moved to NYU a few years ago.
RS: You mentioned how neuroscience is very interdisciplinary. That might be why it’s hard to explain what we do! If a non-scientist asked you what your research is about, and also why it's important, what would you say?
BH: Broadly speaking, I’m trying to understand how the human brain generates conscious awareness and conscious experiences. And how neural mechanisms underlying conscious awareness differ from, and interact with, unconscious processing. From decades of research in psychology, we know that sensory input impinging on the brain can be processed by the brain consciously, giving rise to all the experiences that we enjoy, but also unconsciously. So things that you don't consciously perceive can nevertheless influence your behavior. We don't really know what neural mechanism gives rise to conscious experience and how that differs from unconscious processing. Understanding the neural underpinnings of these processes and their differences is very important for a lot of clinically and societally important questions. For example, we'll be able to better treat disorders of consciousness, including minimally conscious states and vegetative states, as well as many clinical conditions with disordered perceptual awareness, such as hallucinations in schizophrenia, tunnel vision in autism. These are cases where you have disturbed conscious perception. In addition to applications in the clinical and societal domains, addressing this question also satisfies a fundamental human curiosity that is ‘Who are we? Why are we sentient beings? How are we different from robots?’
RS: That’s fascinating. I think scientists and nonscientists alike find the topic of consciousness very interesting. So do you think that fMRI has a key role in helping us understand consciousness?
BH: Absolutely. It's the best method for non-invasively measuring whole brain activity and finding out where in the brain some type of information is. In my mind, it is especially powerful when we combine fMRI with other techniques with higher temporal resolution, like MEG, ECoG or EEG. In human brain imaging, we have a lot of complementary techniques that are very powerful and can give us a view of whole brain activity or large-scale brain network activity, which you could say some of the more traditional animal research techniques haven't been able to get at. But, obviously, there's a lot of push to do large-scale simultaneous recording of many neurons and across many brain areas in animal models now as well.
RS: So your own research combines many of these techniques you just mentioned - invasive and non-invasive methods of studying the brain, including many different human neuroimaging methods. What are the main challenges with integrating such diverse methods, in terms of the experiments themselves but also in the interpretation of findings?
BH: Probably the main challenge is to grasp a lot of literature that's grounded in different techniques, because, when I was a PhD student, I realized that for the same question there is parallel literature, depending on if you use fMRI or EEG/MEG and then the insights are different. The questions and the debates people care about are also different. Each technique is like a window into the brain with its own vantage point. So if you only look through that one window, your field of view is somewhat limited. When combined, the knowledge and the insights from multiple techniques to understand the same biological question can provide a much broader view and you can get at the mechanisms better. Ultimately, we want to understand the mechanisms of how something works in a computational sense: how do neural circuits do the information transformation that allows certain perception and cognition to happen. And for that reason, simply mapping where or when would not be sufficient. We need to combine the insights from these different angles to build a full answer that addresses the mechanisms.
RS: Yeah, that makes sense. So, non-neuroscientists may be surprised just how much our prior knowledge and experience can shape how we perceive something in the present moment, and your research has advanced the scientific understanding on this topic. Related to that, what scientific finding have you found most surprising in your career? Has there been something that particularly surprised you about the brain?
BH: What you just mentioned was a finding that was actually very surprising to me. Me and my lab, when we made the discovery, we actually literally scratched our heads for several months before things started to make sense. You're absolutely right that past experiences and prior knowledge have a profound impact on perception. And it's very interesting because there are certain clinical disorders, including schizophrenia, autism, PTSD, where we know that this process is abnormal. There has been a lot of behavioral and neuroscience research done on this topic. What was really surprising in our findings was the spatial extent of the prior knowledge's impact on perceptually relevant processing across the brain. It used to be thought that visual perception, for example, is basically solved by visual regions. But what we found was that when you go to the really higher-order regions in the brain, even the so-called default network (that is the most remote from sensory input and the apex of the cortical hierarchy) they are involved in this process of prior knowledge guiding visual perception. It's not just that their activation magnitude changes, but their activation pattern changes as well. The voxel-wise activity pattern in those regions reflected the content of prior knowledge and the content of perception. So, that was very surprising. I think, in retrospect, it made sense because this process of prior knowledge guiding perception really requires many different brain networks to work together, from those processing sensory input to those mediating memories. We are still working on the exact mechanisms involved in this. But in the broader picture, it suggests that in real world vision, real world perception, where past experiences continually guide our perception, much more of the brain might be involved than we initially thought.
RS: Your research has brought new insights into the best ways to measure, categorize and model brain activity. Moving forward, what do you think are the most important questions that need addressing, or the most important technological advances, in order to progress understanding in your field?
BH: I have two thoughts here — one is broader than the other. The first one is that we need to integrate resting state approaches and task-evoked approaches. There's a huge amount of insight that has been learned, and to be learned, from both approaches. But each approach alone obviously won't be able to resolve how the brain works. I think we have made a lot of progress with both of those approaches, but exactly how we integrate the insights and their analysis methods, that is something that has a lot of room to be developed in the coming years. For example, related to my research topic, conscious perception: I don't think a system without spontaneous activity will have conscious perception; I think it will solve perceptual tasks, but it will not have perceptual awareness. Currently, we have a wonderful, beautiful field of knowledge based on resting-state studies but there is a gap between these insights and what we know about the neural mechanisms underlying perception and cognition. I think at the junction between those two fields, there is a lot of progress to be made.
And the second is something that I alluded to earlier (I think this is where the field is already going), which is to go beyond the mapping of where and when to get at the computational mechanisms. And there are many different ways of getting at the mechanisms — it probably requires leveraging multi-facetted analysis techniques to understand exactly the computational mechanisms as embodied in neural circuits and networks that underlie perception and cognition.
RS: What was the best piece of scientific or career advice you've received? What has helped you to get to the position you are in, carrying out brilliant research?
BH: Thank you. Something that comes to mind is when I was doing my PhD, my PhD advisor, Marcus Raichle, often told us that “Science must be done for its own sake, for any other harvest is uncertain.” It is important to enjoy the science you do. If not, you probably should do something different. That advice has propelled us to pursue questions we are passionate about.
RS: Your OHBM keynote talk is titled “From Resting State to Conscious Perception”. Can you give us a teaser or sneak preview of some of the interesting topics you will cover?
BH: It’s kind of a personal journey of how my scientific career has evolved, and how my work continues to make connections between these two areas. As you can see, from what I alluded to earlier, I think understanding the neural basis of conscious perception requires us to take into account the role of spontaneous brain activity and past experiences that persist through the resting brain. I've been to OHBM almost every year since I was a student, so it's very gratifying for me to be able to tell this personal journey through the different scientific questions I've investigated.
RS: Well, that's great. I look forward to tuning in and hearing it online.
Lee Jollans and the OHBM Diversity and Inclusivity Committee.
Edited by AmanPreet Badhwar
At the 2020 virtual meeting, OHBM will, for the second time, host a Diversity Round Table. This year the round table will feature discussions on the intersection between Neuroscience and the Lesbian, Gay, Bisexual, Transgender, and Queer (LGBTQ+) community. The four speakers will outline the specific challenges LGBTQ+ individuals face working in STEM (Jon Freeman), insights into the possible developmental bases of sexuality and gender (Doug VanderLaan), the current body of research into transgender identity (and its limitations), and the challenges and considerations that are crucial for carrying out good sex and gender research (Grace Huckins and Jonathan Vanhoecke).
Jon Freeman, New York University (top left), Doug VanderLaan, University of Toronto (top right), Grace Huckins, Stanford University (bottom left), and Jonathan Vanhoecke, Humboldt University (bottom right)
While studies suggest that the percentage of students interested in pursuing a doctorate is significantly higher among LGBTQ+ students (Greathouse et al., 2018), LGBTQ+ individuals have been shown in numerous studies to face unique challenges in STEM. Although specific data about Neuroscience and related fields is lacking (which is part of the problem), LGBTQ+ people are less represented in STEM fields than statistically expected, more frequently encounter non-supportive environments, and leave STEM fields at a high rate (Freeman, 2018). Moreover, one study suggests that more than 40% of LGBTQ+ people in STEM are not open about their LGBTQ+ identity with colleagues (Yoder & Mattheis, 2016). In his talk “LGBTQ Challenges in STEM: The Need for Data and Policy change”, Jon Freeman will outline how bias, harmful stereotypes, and unwelcoming environments can result in LGBTQ+ scientists leaving STEM, and will propose steps and policy changes we can implement to counteract these effects.
With a disproportionately low percentage of LGBTQ+ researchers, and rigid and outdated norms used to assess sex, gender, and sexuality, research about LGBTQ+ individuals has historically suffered from flawed data collection, and oversimplified, inaccurate, or outright harmful framing of research findings. In her talk “Trans Neuroscience: Stuck in 1995”, Grace Huckins will explain how studies examining the brains of transgender individuals are stuck in an outdated paradigm and why it is so crucial that this paradigm change.
Gender and sexuality are complex and interconnected, and attempting to examine them in isolation ignores the lived experiences of LGBTQ+ individuals. Cultural perceptions of masculinity and femininity, and social visibility and acceptance, affect not only how LGBTQ+ people are treated and perceived, but also how research is conducted in different cultural contexts. Doug VanderLaan will describe findings from a neuroimaging study of LGBTQ+ individuals in Thailand, highlighting clues as to the relationship between early brain development, gender and sexuality in his talk “Sexual Orientation and Gender Identity Development: Insights from Thai gay men and sao praphet song”.
Research about marginalized groups by necessity always has a societal dimension – not only regarding the different experience of the world which marginalized individuals encounter, but also regarding the implications that findings might have for policy, stereotypes, and lived experience across society. How can we disentangle ‘otherness’ from sociobiological variety? How can we distinguish brain effects from effects of sociological background? In their talk, “The gap between neuroimaging of gender and gender studies of the brain: New perspectives from transgender research”, Jonathan Vanhoecke will outline how brain research in the transgender community provokes sociological questions about sex and gender in other neuroscience fields.
We hope you’ll join us for this topical and thought-provoking roundtable, and we look forward to an interesting discussion!
The Diversity and Inclusivity Committee focuses on a different topic for their symposium each year. Topic and speaker suggestions for upcoming meetings are welcome.
In preparation for OHBM 2020, we talked to Dr Tomas Paus, who will be giving a keynote lecture on Friday, June 26th. Dr. Paus is Director of the Population Neuroscience & Developmental Neuroimaging Program at the Holland Bloorview Kids Rehabilitation Hospital, and Professor of Psychology and Psychiatry at the University of Toronto.
Roselyne Chauvin (RC): Thank you for taking the time to chat with us. In your talk you will be speaking about “population neuroscience and the growing brain.” There are a few ongoing longitudinal big data initiatives, such as ABCD or Generation R. Those projects are now starting to think about the current pandemic situation. On one side, the situation is affecting everyone without discrimination; on the other, government responses create different experiences (from full to partial lockdown, to no restrictions), and of course, individuals show different stress responses. How do you think this might affect longitudinal datasets? And what are the questions that will need to be investigated out of this situation with regard to psychiatry and genetics?
Tomas Paus (TP): You can look at COVID as a natural disaster. There are studies where natural disasters have been used in the past as pseudo-experimental designs, i.e., to study the effects of a perturbation, because in most of our observational studies, we can really only look at associations between x and y and so cannot infer causality. In most cases, we don't know anything about the directionality of those relationships. But natural disasters provide an opportunity to study before and after and try to attribute the observed changes to those events.
A key component in the context of brain development and psychiatric disorders is social distancing and what has happened with social relationships. For children in particular there are two elements that I think really stand out. One is homeschooling, which, depending on the country, may last for several months. I don't know how it is in the Netherlands, but in Canada, it will last for at least three more months, if not more. And then the other element is the family, so it depends on what's happening at home. Unfortunately, in some cases, that means bigger exposure to adversity, adversity as bad as family violence. So there the stressor may be huge for some children.
Studies that have acquired detailed phenotypes, whether it's behaviour or brain phenotype on children before the event are in a unique position to go back when it will be possible and study the change in behaviour or in the brain. Generation R is certainly one such cohort, ABCD is another one. There are others. Even birth cohorts that may not be at the most relevant age from the perspective of child development but able to study the relationship between exposure to COVID-19 and events related to the disease and health in general. Of course, UK Biobank is the biggest one of all, right?
Now, one more thing in terms of children. Unfortunately, we do know that the most vulnerable segment of the population in terms of mortality are older people. And so there will be an increase in the number of grandparents dying. That is again, of course, a highly stressful life event and that will, one way or another, affect those children. Finally, we know already that at the level of mortality, COVID-19 is more frequent in disadvantaged populations, mostly in the context of socio-economic position. So there may also be an interaction between the pre-COVID conditions of those children and COVID-related stress.
RC: You’ve been involved in many different types of big data projects, from the acquisition and study of local communities like the SYS (Saguenay Youth Study; ~1000 adolescents and their parents, from the genetic founder population of the Saguenay-Lac-Saint-Jean region of Quebec, Canada) to the ENIGMA consortium (Enhancing NeuroImaging Genetics through Meta-Analysis; a worldwide collaboration with more than 40 countries involved). How have you found carrying out these projects, and what advice would you give for those wanting to carry out these big data projects?
TP: It's a very good question and makes me reflect on my own path from the Saguenay to now. Over time I have increasingly become involved with collaborative work in the context of ENIGMA, and CHARGE, the other consortium that we work very closely with. I started this Saguenay study with my wife, Dr. Pausova, and others almost 20 years ago. That gives our team a lot of hands-on experience in carrying out big data projects. We learned what it takes to set up a cohort, to set up the protocol, to carry out quality assurance. All those different steps, on a relatively small scale. Even though 20 years ago, 1000 individuals was a fairly large scale for us. But I think that hands-on experience with a cohort is very, very important once you enter collaborations with others, and also once you start using data that had been produced by others. Of course, in a consortium, you share that experience and that's a currency.
In the CHARGE Consortium we have weekly conference calls. It's amazing how much you learn during one hour given there are between 20 to 40 people on the call. In one hour, we pick a topic, usually a study that is being carried out, and it's being discussed from the beginning to the end. You benefit, of course, from the expertise of people who have done many of those studies before. And you benefit from informal expertise that is very hard to get from reading the paper. In the same way that I can share my 20 years of experience with the Saguenay study with this group, every member of CHARGE group shares her or his experience back. So that's a huge plus.
In these consortia, it's not only about accessing data, you're really sharing knowledge; not only expertise in designing studies and acquiring data, but you’re also learning about the latest in genetics, epidemiology and statistics. So you’re keeping up-to-date with developments across many different fields. That's a huge benefit of working within a consortium.
The last point is about the diversity of the group. The group is diverse not only in terms of the disciplines, but also cultural backgrounds: it includes researchers from different countries, different educational systems. So for us, it means that there is a diversity of perspectives and I think that that's what you want. If you want to create new knowledge, you don't want everyone to have exactly the same background; you want to see things in many different ways and from many different perspectives.
RC: That also makes me think of sharing experience and trying to find the best way to maintain high quality. I mean, there are many initiatives to standardise scientific practices, for example using the BIDS format to organise data - that type of knowledge came from a consortium. Do you think we could extract some guidelines to help big database initiatives?
TP: I'm not sure about that. I mean, there is a whole science of data harmonization - there are experts who work on that. I'm somewhat sceptical about coming up with guidelines or toolboxes to be imposed on investigators when they are starting a new study. I think that there is a danger there. Yes, it would then be easier to harmonize across cohorts, but there is a danger that it would stifle innovation and new discoveries. If everyone is doing everything the same way, then where is the novelty? Where is the potential for new knowledge?
What I've seen is that, basically, it's a democracy: scientists and trainees vote indirectly by adopting certain tools more often than others, and then all of a sudden one tool emerges as the most commonly used. FreeSurfer is an example of that, right? There are different ways to extract information about cortical thickness and surface area, but I must say that in the majority of studies FreeSurfer became the main tool that everyone uses. So harmonisation has emerged in a natural way.
RC: In a similar vein, neuroimaging has faced a reproducibility crisis, just like genetics did before it. There is increasing recognition that studies need larger sample sizes to produce more representative and reproducible findings. OHBM sessions have reflected these improvements in how we work: creating best practices for methods, and promoting transparency via open publications, code, and data. The OHBM Open Science Room grows every year, and now there is the announcement of Aperture, OHBM's publication platform. What has been your experience, and how have you changed your practice? What advice do you give your lab members or early career researchers to improve the quality of their science?
TP: Well, that's a difficult one. I think that the starting point is critical thinking, and that's what I'm trying to convey to my students. We need to question conclusions, to question reliability. That's maybe one of the reasons that, even though we do use functional imaging, I put more emphasis on multimodal imaging of brain structure, because we know that structural imaging has higher reliability. Even though I started with PET blood-flow activation studies, I moved into that field from my interest in brain-behaviour relationships. The relatively low test-retest reliability of functional measures, and of behaviour in general, made me shift my focus to features that are easier to measure, such as the structural properties of the brain. That's probably one of the reasons why I changed my way of doing science towards those large numbers. Test-retest reliability becomes crucial if you are interested in a trait: if you are doing genetics, or running epidemiological studies where you are interested in influences of the environment, you need a measure with the quality of a trait. That is, if I measure a trait today, and I measure it again two weeks later, I get more or less the same number. That's really crucial. I started by saying that one has to be critical, and I think that that's the simplest advice.
Another key to quality of science is replication. In functional imaging studies, for example, split the sample, analyse the data in one half and then see whether you find the same thing in the other half. Don't trust p-values - that would be my other advice. P-values will not guarantee reproducibility; replication will.
RC: So you said you started with PET and then moved more towards structural MRI. Now that we are on the advice side, what do you think will be the next big topic in neuroimaging? Would you advise a young neuroscientist to follow the trend or look for their own niche? If you had to start something new, what would you go for?
TP: I wasn't really thinking too deeply about what I wanted to do in five years. I went with the flow and was always driven by curiosity, by novelty, by something unexplored. Often I was critical of a finding that I didn't believe, and that triggered a line of thought: "I don't believe it's this way. Let's prove that it's the other way - and what do I need to prove it?"
I do like to combine different levels of analysis. That's partly because of my initial educational background in medicine, human physiology, anatomy, etc, combined with deep interest in behaviour and psychiatric disorders. So you have both the systems level and molecular level, and integrating across systems, across levels, and I think it did work for me.
If I were to do it again, I would probably again try to get a broad education that gives me at least some understanding of the different levels, rather than one very deep understanding of a particular approach, like the details of DNA structure. That just doesn't work for me, but it may work for someone else.
RC: Multidisciplinarity is at the core of cognitive science.
TP: It wasn't like that when I was starting! The fact that I got that broad education really prepared me for that interdisciplinarity and for working in large teams. When I was starting, the labs were small and there was little data sharing, even in genetics, and particularly in the genetics of Mendelian traits. There was fierce competition between people in terms of discovering disease genes, so people did not share. They competed with each other, and that is a dramatic change over the past 30 years - possibly the biggest change I've seen in science and the social aspects of science.
Now, even with the amount of sharing, there is always competition. Competition is good; we need it. But the competition doesn't interfere, as it did in the past, with generating data or with access to data, because open science puts everyone on an equal playing field. So now it's not about someone having access to these data and blocking us from having access. That's not the case any longer. You really have to share data in some form.
RC: Yes. The evolution of the field is towards being open, being collaborative and getting experience from those that know how to acquire data and those that have strong expertise in methods.
TP: Also, when you look at institutions that support this kind of approach, they are successful - institutions that support open science and develop platforms for data sharing, for example by getting different bioinformatics databases to communicate with each other. An example in genetics is the Broad Institute of MIT and Harvard.
RC: What are the findings that you are most proud of?
TP: There are two different types of things that I am proud of. I told you that I like innovation. I like doing things in a new way. In that context, I'm proud of two innovations. One is when we put together brain imaging and brain stimulation, our combined studies with transcranial magnetic stimulation and PET. Technically it was quite a challenge and I think we did it the right way. That approach eventually did not take off on a large scale. But I think in the mid 90s, when I worked on it, it was really exciting to be putting together TMS and PET in the way that we did. I'm definitely proud of that aspect.
Then, I think about what I'm doing now in terms of the combination of epidemiology, genetics, and neuroscience. I'm glad that I was able to put it together into that framework and I wrote a little book about it. I'm happy about it.
In terms of findings, I think two, for me, stand out. One goes back to the late 80s, to my PhD when I noticed some very interesting deficits associated with lesions of the anterior cingulate and then I followed up those findings with my first PET studies in Montreal. I came up with some discoveries about the function of the anterior cingulate cortex and its role in the interface between intention and motor control. Those early studies I still like.
The second finding is more recent and relates to what we have done in teenagers: the observation that testosterone has something to do with the radial growth of the axon - basically, the thickness of the axon - in particular in male adolescents, and how this may relate to axonal transport. That is a slight shift away from myelin and toward the axon, and I think it's important. We are pursuing that finding. I think that it's the axonal transport element that becomes very important for function. I personally believe that the link between axonal diameter and axonal transport will inform new studies of individuals, and also of mental illness. So that's the second finding that made a difference in my research.
RC: Are you going to talk about that during your OHBM lecture? Can you give us a sneak peek?
TP: I will talk mostly about big data and some findings from our work in the context of ENIGMA and CHARGE consortium, relating to the developing brain. This will illustrate the power of big data. But I will start with a bit of history on how we got where we are now and how important observations are, going back to my mentor Brenda Milner.
RC: Thank you for your time and for chatting with me!
TP: Thank you, it was really enjoyable.
RC: I am really looking forward to your lecture. This year is going to be a different format, as OHBM is happening online. So I hope this teaser will attract a lot of digital attendees and that everyone will enjoy your lecture and the meeting safely from home.
By Nils Muhlert
Professor Michael Fox is a neurologist at Harvard Medical School and director of the Lab for Brain Network Imaging and Modulation. His research into brain network imaging to define targets for brain stimulation holds considerable promise for new and improved treatments for a wide range of neurological and neuropsychiatric conditions. Here we found out how his academic career started through a chance meeting with Mark Raichle, about his plans for clinical translation of network neuroimaging, and his advice for early career researchers:
Nils Muhlert (NM): Thanks for meeting with us. We'll start by finding out about your background. How did you become interested in neuroimaging?
Michael Fox (MF): Good question. I didn't start off life as a neuroimager. I was an electrical engineer as an undergrad and then went to Washington University in St. Louis for my combined MD and PhD. I wanted to do something at the intersection of engineering and medicine. My interest in neuroimaging came when I was walking through the neuroimaging facility at Washington University in St. Louis, on the way to a meeting. I saw a poster hanging there in the hall by Mark Raichle looking at brain imaging and the default mode network. I stopped, and I read the poster and thought, wow, that's fascinating. I had no idea who Mark Raichle was, but I subsequently knocked on his door and said, "Hey, I'm Mike - I just read a poster out here that I think is really interesting." And that's how I got interested in neuroimaging.
NM: And how have you found the challenges of balancing your clinical work with your academic work?
MF: It's a challenge! There are always time constraints. On the side of getting out papers and getting grants, clinician-scientists have to compete with full-time scientists. And on the side of taking care of patients, our clinical care has to be up to the same standard as that of full-time clinicians. It's like you're doing two jobs at once, and you have to be really good at both of them.
But with that challenge comes enormous opportunity. I wouldn't be doing both clinical work and research if I didn't feel that each was valuable, and that one inherently informed the other. I don't feel like I'd know what the relevant research questions are to ask or to go after if I weren't seeing patients. Similarly, I wouldn't know how to take care of my patients as best as I could if I weren't up to date on what the research is telling us about how to think about the brain.
NM: A lot of your work uses network neuroscience to understand how lesions in different locations in the brain can lead to similar symptoms. Can you tell us about this lesion-network mapping, how it works and how it can translate into the clinic?
MF: You asked me earlier: "how does research inform clinical care and clinical care inform research?". Well, this entire field came from a patient. Aaron Boes, who was a fellow of mine at the time, saw a patient who walked into the clinic with acute-onset visual hallucinations. Radiology acquired a brain scan of that patient and found a focal lesion in the medial thalamus. Aaron was fascinated by this patient. Why is it that a lesion in this particular location could result in these very impressive, rapid-onset, severe visual hallucinations?
Aaron did what any good neurologist would do: he went through the literature and found other similar cases of patients with brain lesions that caused acute-onset visual hallucinations. He mapped out where all of these lesion locations were, and then was left scratching his head.
All these different cases, causing symptoms very similar to what his patient had, were in different locations across the brain. That's when he had his critical insight: when I'm trying to understand this patient's symptoms and I map out all the locations of brain damage, they don't line up. They don't intersect at a single brain region.
Aaron literally came and knocked on my door and said, "Mike, I hear you do some kind of brain connectivity thing; could that brain connectivity stuff help us understand how all these lesions in different locations are causing the same symptom?"
Aaron's insight, which was in retrospect really brilliant, was that you can take a map of brain connectivity, overlay the lesions on a brain network and test the hypothesis that lesions causing the same symptom map to a single connected brain network rather than a single brain region. He was right for visual hallucinations. And subsequently, I think he's been right for every other neurological or psychiatric symptom that we've tried to investigate.
It's not really a new idea. Neurologists have known for a long time that symptoms probably map to brain networks or brain circuits. But before we had a wiring diagram, it was very hard to test that hypothesis or to figure out what the network or circuit was in a data-driven manner.
NM: How does it work in practice?
MF: In practice, you derive the network for each lesion location. So when you have a lesion that causes a certain symptom, you map it onto a brain atlas. You then turn to a connectome database and say, "Okay, I know where the lesion location is, but what I think is relevant for symptoms is everything that lesion location is connected to." So you turn the lesion into a lesion network, and you do that for every single lesion that you're interested in. Now, every lesion is going to be connected to hundreds of different brain areas, right? But if you take 40 lesions that all cause the same symptom, each one of those 40 lesions is a very different brain network or different set of connections. But the one thing that those 40 lesions share should be the connections that are relevant to the one symptom. And that's how you're able to then pull out the circuit that's relevant for that symptom shared by those 40 lesions.
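As a toy illustration of the overlap logic described above (not the lab's actual pipeline), the sketch below simulates 40 lesion networks drawn against a hypothetical 200-region atlas. The region count, connection probability and the planted "symptom circuit" are all invented for the example; the stand-in `lesion_network` function replaces the real step of looking up a lesion's connectivity in a normative connectome:

```python
import random

random.seed(0)
N_REGIONS = 200                   # parcels in a hypothetical brain atlas
N_LESIONS = 40                    # lesions that all cause the same symptom
SYMPTOM_CIRCUIT = {10, 11, 12}    # toy ground-truth circuit we hope to recover

# Hypothetical stand-in for a connectome lookup: each lesion is connected
# to many scattered regions, but every symptom-causing lesion also hits
# the shared circuit.
def lesion_network():
    net = {r for r in range(N_REGIONS) if random.random() < 0.3}
    return net | SYMPTOM_CIRCUIT

networks = [lesion_network() for _ in range(N_LESIONS)]

# Regions connected to (nearly) every lesion form the candidate circuit.
counts = {r: sum(r in net for net in networks) for r in range(N_REGIONS)}
candidate = sorted(r for r, c in counts.items() if c >= 0.95 * N_LESIONS)
print(candidate)  # regions shared by >=95% of lesion networks
```

Each individual network here touches dozens of regions, mirroring the point in the interview that any one lesion is connected to hundreds of areas; only the intersection across all lesions isolates the symptom-relevant circuit.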
NM: That's great. So this is a great example of how open science, through the human connectome project, has the potential to influence clinical practice...
MF: Very, very much so! I often feel a lot of gratitude for the field of neuroimaging as a whole and all the people out there that work so hard to build these connectome databases. If we didn't have things like Randy Buckner's Brain Genomics Superstruct Project, which is the connectome that I use for most of my work, if we didn't have the Wash U connectome, if we didn't have the MGH DTI connectome, then we wouldn't have the wiring diagram that allows me to do all the work that I do. So I'm very grateful to neuroimaging and grateful to these large-scale projects that gave us these wiring diagrams. I'm just a user of this amazing resource that other people built.
NM: That's great to hear. Right now it's tricky to carry out clinical research projects so I imagine these large open databases are being well used. One topic that people have debated, particularly over the last couple of years, is clinical applications of fMRI. Your work seems to allow that - using functional brain networks to identify the targets for deep brain stimulation. How did you find the process of convincing people of the suitability of that approach?
MF: You're getting right to the heart of it. My PhD was focused on neuroimaging, and so when I moved into the clinic and, in my residency, focused on trying to help people with brain problems, there was a disconnect. The field of functional neuroimaging does not have a lot of success stories. The idea was: "Hey, if we can see the brain at work and identify areas that light up, if we can see the brain's connectivity, if we can look at the anatomical connectivity based on things like diffusion mapping, then all this will lead to better clinical care, better diagnosis, better outcomes, better treatments." We don't have a lot of successes to hang our hat on. Even preoperative mapping with functional MRI is only used by a handful of centers. There's still debate as to how valuable it actually is. And that's probably our number one success story of clinical translation of functional neuroimaging.
So I've spent a lot of time thinking through why that is. One reason might be that we're on the right path but we need larger cohort sizes, better scanners, the next great imaging technique to show us something in the brain that we couldn't see before.
The other possibility is that we're approaching how we use neuroimaging to improve clinical care in the wrong way. I don't know the answer, but there are a couple of shifts that I've made in how I use neuroimaging and how I think about it. One big shift has been away from correlational imaging and towards causal mapping of human brain function. What I mean by this is that if you want to understand where a symptom lives in the human brain, neuroimagers have typically approached that by taking a bunch of patients with that symptom and identifying neuroimaging correlates of that symptom, which might be atrophy, PET metabolic patterns, resting-state connectivity changes, and so on. But the problem is that, in the end, that's just a correlate, not a therapeutic target. It doesn't tell you whether that neuroimaging correlate is causing the problem, compensating for the problem, or just a risk factor for the problem. We've started focusing on brain lesions and brain stimulation sites as a way to get at this causality. The idea is that causal mapping of symptoms and brain function might be a more direct path to a treatment target.
The other big shift that I've made is a move away from focusing on single subject neuroimaging data to group neuroimaging data like the connectome. It's almost like I'm going in the opposite direction of where a lot of brilliant people are going: they're focusing on the individual and getting massive amounts of data on each single subject. That research is very valuable and might get us where we need to go with the clinical utility of single subject imaging data. In the meantime, as they improve the methods and technologies for single subject imaging, what I found is that the group connectome is already ready to be applied clinically. It's robust and reproducible and the wiring diagram is the wiring diagram of the average human brain.
NM: So we have very high hopes for your work targeting sites for stimulation to reduce symptoms in patients.
MF: Well, I don't want to overstate the success of my approach either. What we have right now is a lot of retrospective observations. So when we administer transcranial magnetic stimulation, for example, to try and reduce people's depression, what we see reproducibly is that people that are stimulated at a certain brain circuit or a certain site that's connected to a certain circuit, those are the people that are getting better. That is a reproducible, retrospective observation to explain why some people are getting better and some people are not. What we haven't done is taken the next step, where we change our clinical practice and directly target that circuit to improve clinical outcomes. We're just now reaching that precipice, the point where we're convinced that the retrospective observation is real and reproducible. But now we've got to actually prospectively apply it and find out if we can improve clinical outcomes, but we haven't done that yet.
NM: So what would you say are the most exciting things that your lab is working on now?
MF: I'd say, twofold. One is I'm very, very excited that we're reaching the point where we can take some of these retrospective observations and actually prospectively test them clinically. Now, those are bigger grants and take a lot more money. But I believe those resources are going to be coming. So I'm very excited to find out whether we can prospectively confirm our results and make treatment better.
The other is focusing on symptoms that are in huge need of better treatment. We recently submitted a paper, for example, on lesions that get rid of addiction (for a similar paper see here [NM]) and what brain circuit do those lesions map to? Does that identify a therapeutic target for addiction that can help constrain ongoing trials trying to make addiction treatment better?
In the field of depression, we've worked on brain lesions associated with depression, TMS sites that are associated with depression relief, and then some deep brain stimulation data that either can relieve depression or cause depression. What happens when you link up all three sources of causal information? Does it all converge on a single circuit target for depression across all these different modalities?
On the science side, we're even working on lesions that manipulate measures of spirituality or religiosity. Is there a human brain circuit that we can link to spirituality in a causal way? And is that a therapeutic target?
We're having a lot of fun these days, looking at very interesting questions both from the scientific side of things and social side of things, but also going towards the greatest therapeutic need. And then going towards clinical validation of all these observations that we're coming up with.
NM: Finally, what is your advice for early career researchers and those who are interested in network neuroscience? What would you say is a good training pathway for them?
MF: One piece of advice is follow your passion. If you're passionate about a particular brain problem or symptom or imaging technique, or brain circuit, follow that passion because your work is going to be better if you're following something that you're passionate about, not just what your advisor is passionate about.
Two, look at where the herd is going, and then intentionally go in a different direction. If everybody believes that the next big advancement is this imaging technique or application, then go the direction they're not going. Because there's plenty of people that are already doing what the herd is doing. That's why the herd is going there. It's an obvious need and a lot of smart people will fill that need. Go the opposite direction, find a way that people are not thinking about it. And that's where you feel like you add value to science, above and beyond what the community can generate. Think about it differently.
The last piece of advice is one I always tell my students. In my particular lab, we're focused on clinical translation and clinical application. So whenever my students come to me with brilliant ideas (and they come to me with brilliant ideas), I try and play it out. I say, "Okay, let's say you're right, let's say that the experiment works out or that you're able to map it. Where does that go? What do we do with that information?" Oftentimes, what you realize when you play that out is that the experiment, although it might be interesting, has no pathway towards clinical translation. There's no way that you can turn that information into a better treatment or a therapeutic target. Now, not everybody's interested in clinical translation and identifying therapeutic targets, but in my lab, thinking ahead three steps, we want to know 'Where does your research go? What do you do with that result? And how does that result translate into something important and meaningful - in my case, for taking care of patients?' Again, it's a different way of approaching things than maybe in other neuroimaging labs.
NM: That's great advice. Professor Fox, thank you very much for your time today. We really look forward to your talk.
MF: Thank you so much for your interest.
By the OHBM Diversity and Inclusivity Committee (and endorsed by OHBM Council)
We share the deep sadness, outrage, and frustration that many around the world have felt in reaction to the murders of George Floyd, Breonna Taylor, and Ahmaud Arbery, and too many other innocent Black people over the years. As an international organization that strives to represent a diverse and vibrant global community of researchers studying the human brain, OHBM itself has struggled over its 25 years to incorporate initiatives and policies that reflect our values of inclusivity, tolerance, and respect.
The events of these past weeks are a grim reminder that words alone are not enough to combat the systemic racism that plagues societies across the world, and we recognize that we have not done enough to support Black, Indigenous, and People of Color. To this, we also add other groups, such as people with disabilities, for whom, as an organization, we may not have provided sufficient support. The past few days have been a period of inward contemplation for us. The events of last week enraged us, and our first urge was to publicly denounce them. Cautious voices advised us against issuing a statement that is not followed by concrete actions. They were right. Since then, we on the Diversity and Inclusivity Committee have had detailed discussions about how we can meaningfully contribute to the conversation and truly make a difference, making our organization a welcoming, safe environment that educates and supports each and every member of our community.
In the spirit of openness, we share with you a non-exhaustive list of concrete actions OHBM plans to undertake over the coming months to affirm our commitment to creating and maintaining a supportive environment for all OHBM members, especially those who are historically underrepresented and marginalized. Our goal is to particularly support the Black community within OHBM, increase its representation, and address anti-Black racism. We understand that we have not done enough for this community yet, but we would like to change that in the near future.
The OHBM Council, the Diversity and Inclusivity Committee, and the Program Committee will work together with various other OHBM special interest groups and committees to implement the following:
By the OHBM Communication Committee
By now you've heard that the OHBM Annual Meeting will be virtual! The 26th Annual Meeting of the Organization for Human Brain Mapping is happening from June 23 to July 3, Saturdays and Sundays excluded, and will take place entirely online.
This is new for many of us so we’ve put together a short Q&A. Here we address a number of questions you may have, and provide a taste of what you can expect from this unique OHBM Annual Meeting experience.
What can I expect from a virtual OHBM?
To start, this is not just going to be a massive Skype or Zoom meeting. After searching through many options for virtual meeting applications, OHBM Council decided on a ‘real-feel’ conference provider with previous experience hosting neuroscience conferences and other very large meeting events. This conference may not feature wafts of espresso from the cafés of Rome, or access to the delicious hawker markets of Singapore, but it will have almost everything else that you expect from a conference location: a lobby with signposts to navigate your way around, auditoriums for talks, a poster hall, an exhibit hall, engagement lounges for networking, the art exhibits, the Open Science Room and even a help desk.
What about the different time zones, will I need to get up at 3am to not miss my favourite speaker?
OHBM is an international society (as can be seen in the distribution of our members in the map below) and, looking at the Annual Meeting schedule and time zones, there are relatively few overlapping “humane” work hours in each day for all continents. To account for this, all sessions will be available 24/7 once presented, and the schedule for this year’s Annual Meeting has been carefully crafted by the Program Committee to allow fair access to the live Q&A portions for all participants. The content has been spread out across two weeks, and the start time for sessions will vary. For three days, sessions will alternate between three major time zones as follows: 1) New York (North/South America), 2) London (Europe) and 3) Hong Kong (Asia/Australia). In addition, live sessions will happen for only a few hours a day - so no need to spend long stretches glued to your computer screen! See the most up-to-date meeting schedule here.
A look on the bright side of meeting virtually:
No jet lag! With the OHBM Annual Meeting going virtual, there are no long plane rides, no cramped seating and no battles for middle arm rests. Instead of having three espresso shots and struggling to stay awake during a keynote lecture, you can attend after a good night’s sleep or even outside while getting some vitamin D and fresh air.
See everything! No choosing between talks in parallel sessions and running from the slightly overrun talk 1 in session A to talk 2 in session B. You can swap between auditoriums with just a mouse click and you will always have a front seat for each presentation!
No need to find pet sitters! Nor somebody to water your plants! And if you have older kids at home, there will be links to activities to get them engaged in neuroscience, such as printable brain hats and colouring sheets: give them insight into what it is you actually do.
A much reduced carbon footprint! As a community, we will produce fewer emissions from air, rail and car travel, and also less onsite paper, plastic and food waste. And no need to argue with the airline staff about whether you can take your poster on board!
Home-made food, no queueing for the toilet and predictable Wifi connection at all times!
Still, internet connections are sometimes unstable, so what happens if the keynote speaker drops out during his or her talk?
Mindful of this, almost all sessions will be pre-recorded, but during the allocated time slots for the “live program”, the sessions will be chaired and the speakers will be available for Q&A. Pre-recording with professional audio visual support means that there will be a minimum of glitches. Plus, this means that almost everything in the meeting will be available for viewing on the Communiqué platform for four months after the meeting (using your registration) and, in time, via the OnDemand system (for OHBM members).
For a list of keynote lectures, symposia and oral sessions see the meeting website.
What I always loved most about OHBM are the poster sessions and interacting with people at their posters. How do I do this online?
One of the joys of attending OHBM meetings is pouring a coffee (or beverage of your choice) and ambling through the poster hall. So it’s a relief that there will still be poster sessions, albeit in a virtual poster hall. There are stand-by times as usual. For this you’ll have live chat functions, so you can respond to questions in real time (or leave questions outside the stand-by times for the presenter to respond to later). Even better, you can ‘stand’ in front of the same poster for as much time as you want without fear of blocking somebody else’s view. Contact options at each poster allow you to ask questions to the presenter and a virtual poster reception lets you interact with presenters and other poster hall attendees.
Some things you may want to think about when preparing your poster
The only restriction is that your poster has to be in PDF format, so be creative! But keep in mind that your audience may view it on their tablet or laptop, so make the layout easy to read. You might want to include links for more information, larger figures or your preprint paper on the work. Why not record yourself presenting the poster and add a link to the video in your poster? There are some great recording options (e.g. using Zoom); you could even ask somebody to be a pretend audience and ask tricky questions about your work!
Since this year you cannot attract people to your poster using funny dances or by handing out chocolates, advertise your poster or poster video on Twitter using #OHBM2020Posters!
A main aspect of OHBM is socialising. Are there options to do that?
Yes, it’s always Happy Hour somewhere! There will be Happy Hours/Coffee Hours taking place at various times in the schedule to accommodate many different time zones. We are currently working on solutions to make these Happy Hours as interactive as possible by having chat and video options available.
I usually don’t get enough sleep during OHBM due to the packed program and all the social networking. I don’t think I can handle being online so much!
We all know that being hunched over a desk for long periods is bad for our eyes, our backs and our concentration. As stated above, the annual meeting has been split from four full days into eight half days, spread over two weeks. In addition, the meeting will now run only on weekdays to minimise disruption to other activities, and all material will be available throughout the duration of the meeting (and beyond, as described above).
What about the educational courses?
The educational courses are always a particular draw, and extremely useful for both early career researchers and seasoned PIs alike. This year we have some great offerings on deep learning in neuroimaging, advanced functional and structural imaging of the cerebellum, EEG data acquisition and pre-processing, and many others (full list here). These educational sessions will now run after the annual meeting beginning July 13, when you can begin watching the pre-recorded lectures. Later that week there’ll be interaction times when you can ask questions of the educational speakers.
You can also prepare for some of these educational courses by reading through our ‘OnDemand’ tutorial series of blogposts, on resting-state fMRI, diffusion MRI, machine learning and anatomy in neuroimaging.
Will there be virtual brain art this year?
Definitely! Do not miss NeuroDiversity, the exclusively online 2020 OHBM Brain-Art Exhibit & Competition brought to you by the OHBM Brain-Art SIG.
NeuroDiversity is being developed along three axes. Axis 1 aims to give underrepresented groups in neuroscience a voice. Axis 2 will showcase art pieces by neurodiverse populations - for whom art can be a means of communication, an instrument for therapy, or a source of solace and pleasure. Axis 3 is designed to highlight the geographic, ethnic and cultural richness within the OHBM community - the Brain-Art SIG will put together a ‘brain collage’ from postcards provided by OHBM members. Check out our artworks and videos, chat with artists, and engage in our art-guided meditation session.
If you would like to showcase your art at the conference, then our annual Brain-Art competition is now open for submissions! We are accepting pieces in the following categories: 2D art (e.g., digital images such as drawings, photos, paintings); 3D art (e.g., sculptures & installations); Failed attempt/bug/artifact; and a Special category on Neurodiversity & Hope. For the Special category we encourage all OHBM members to download one of the provided brain postcard outlines and fill it with a pattern/image that they like and feel represented by. Submit your art before Friday, June 20, 2020, 11:59 PM CDT.
You’ll be able to see this Brain-Art and use our online family-friendly brain-art activities throughout the annual meeting. You can also engage with a TrainTrack session on brain visualization at the OHBM Hackathon. In addition, we’ll be announcing our competition winners at the virtual Student/Postdoc SIG and Neuro Bureau Networking Social. So yes, definitely lots of virtual brain art this year.
What about a virtual Open Science Room?
The Open Science Special Interest Group (OS-SIG)’s Open Science Room (OSR) will be hosted on the same interactive virtual platform as the OHBM meeting, and also broadcast live on a zero-cost registration platform to ensure global accessibility. The OSR will provide opportunities for networking and informal discussion alongside the formal hosting of nearly 40 talks, including keynotes, lightning talks and software demonstrations. OHBM members will also have 24h access to ‘Open Research Advisors’ in the main exhibition hall, who will be on hand to answer all your open research practice questions and signpost where necessary.
The Open Science Room content will be repeated 3 times over 24 hours, at times suitable for individual members of our global community. For the first time, we are also actively engaging with the community to help us build the OSR, so we can deliver a professional and accessible program which works for everyone. Interested volunteers can still sign up to contribute here. As one of our community volunteers has said, “The OSR is the place to witness the practice of open science in action”, and we can’t wait for you to be a part of it. A call for OSR talks is also open (schedule space permitting) until 1st June; please consider contributing by submitting your talk abstract via our website as soon as you can!
The OHBM Brainhack - the collaborative hackathon organized by the OS-SIG - will be held online from June 16th-18th. For the first time, the OHBM Brainhack will be run as a global online event organized around 3 hubs ('Africa, Middle East and Europe', 'Americas' and 'Asia and Pacific') that will foster collaborations across countries while making it possible for participants worldwide to attend during working hours. Registration is now open for an unprecedented capacity of 500 attendees. This year we are putting special care into building a welcoming environment for those who have never attended a hackathon before. We will provide educational TrainTrack sessions tailored for beginners and opportunities to directly apply new skills by joining a hackathon project.
What other events organised by the SIGs and committees can I expect?
For the family-friendly activities planned by the Diversity & Inclusivity Committee see our recent blog post. The Diversity & Inclusivity Committee is also organizing the second Diversity Roundtable on the topic of Neuroscience and the LGBTQ community. The four speakers of this year's roundtable will elaborate on challenges faced by LGBTQ scientists, and will familiarize the audience with research (and lack thereof) on LGBTQ individuals, with a focus on how increasing awareness around issues faced by this community can impact academic careers.
The Student/Postdoc SIG is planning its annual symposium, themed Success in Academia: A road paved with failures, with three sub-themes: (1) sharing and normalizing experiences of failure; (2) how to be a good mentor; (3) how to handle your own failures. A series of workshops is also being planned (e.g., career transitions, coping with COVID-19 and trauma, work-life balance, working with industry from academia); stay tuned!
OHBM’s new open access publishing platform – Aperture – is set to launch in June! Aperture will host an informational booth during the OHBM 2020 Annual Meeting where you can learn more about the platform, the submission and review processes, and meet the Journal Manager and members of the Aperture Oversight Committee (AOC). You are also invited to participate in an Aperture round table discussion during the Annual Meeting, where you can have your questions answered. In the meantime, if you have questions, please contact Kay Vanda, Aperture Journal Manager, at firstname.lastname@example.org or visit the Aperture website.
Will all of this influence future meetings?
Going completely online will clearly take a little of the magic away from this year’s Annual Meeting. But the silver lining to that cloud is that this 2.0 version of the event addresses a number of recent concerns brought up by OHBM members: It makes attendance much easier for those less able to leave their home countries (for instance due to visa issues, dependents or mobility restrictions). It doesn’t require sometimes expensive travel and accommodation budgets, and reduces our carbon footprints. It also allows the use of innovative interactive elements that may not have been easy to implement at the physical conference. If you haven’t registered yet, you can do that now here.
The current situation has forced many scientific organisations to ramp up remote attendance options. In doing so, it has fostered innovative solutions that can improve online user experiences. In future years, these options will be tried and tested, making them easy to apply to supplement our physical meetings.
Overall, this year’s Annual Meeting is certainly going to be different. It will however remain consistent in that it will provide a thorough update on the latest findings, current trends and promising avenues of brain mapping research. It will provide learning opportunities for those wanting to train up in new skills. It will also provide opportunities for networking and socialising that may be sorely missing during early summer. The staff and Committees at OHBM have worked to ensure each part of the Annual Meeting is thought of and included in the virtual version.
And for those not yet convinced about whether a virtual meeting will offer the same communal or educational experience, there will be ways to increase the realism of the event in your workspace. Brew some coffee before the poster session; you don’t even need to drink it, just get the smell wafting through the house. Set your Zoom or Skype background to the streets of Montreal. Plan to “attend” with some of your friends or colleagues at the same time or the same talks/posters. You can even make your own event shirt using the OHBM 2020 logo, or design your own version! Or just show up and chat to new and old friends. We look forward to welcoming everyone to the OHBM 2020 Virtual Annual Meeting and hope to “see” you there.
Of course we all know that the brain functions as a network, but modelling it as such is not straightforward. One person working hard to make this possible is Alex Fornito. He is a professor at Monash University and one of the leading forces in MRI-based network neuroscience. As he is also one of this year’s virtual meeting’s keynote speakers, I had the pleasure of inviting Alex to a virtual meeting of our own to ask him about his scientific life.
Ilona Lipp (IL): Thanks for joining me during these crazy times. Apart from OHBM going virtual, what else has changed in your scientific life in the last few weeks?
Alex Fornito (AF): Yes, these are unusual times. Probably the biggest change in my life has been the intimate relationship that I've developed with Zoom! But, seriously, I feel fortunate that not much has actually changed for me from a professional or scientific standpoint. A lot of the work that we do in the lab focuses on data analysis and modelling, which is reasonably straightforward to do from home. We have two young kids at home and so our regular rhythm has been disrupted because we're juggling homeschooling and work. But at the same time, it's nice to spend more time with the family, and see and help the kids learn. Relative to the disruptions that other people have had to deal with, I think I have been very lucky. I guess the main challenge is really in trying to maintain a sense of connectivity, communication, and cohesion within the lab. But I'm very fortunate to work with a fantastic team of people that make that really easy.
IL: A main component of your research has been on connectomics, developing metrics to describe the whole brain as a network and applying them to psychiatric diseases, such as schizophrenia. How did you end up in this research niche?
AF: Well, that’s a bit of a long story! I did my PhD in a psychiatric lab focused on structural brain imaging where I was working on mapping cortical thickness changes in psychotic disorders. At that time, surface-based approaches were only beginning to be applied to MRI. And so this was a very exciting new way of looking at structural brain changes. But in my spare time I read a lot of fMRI work. At the time, classic voxel-wise activation mapping was the main approach that was being used. This work was really giving us a lot of insights about how brain regions respond to different tasks, but I always felt like it provided an incomplete picture, because we know that the brain is essentially an interconnected network. I was a really big fan of those early seminal papers on connectivity by people like Olaf Sporns and Rolf Kötter, Klaas Stephan, Karl Friston, and Randy McIntosh. But the applications of network methods to imaging data were limited.
Then, as I was nearing the end of my PhD, I came across a paper by Sophie Achard and Ed Bullmore, which was one of the first that generated these whole brain maps of connectivity using fMRI data. I remember a figure in that paper that had a tangled graph showing how different brain regions connected to each other, and I just thought to myself, perhaps naively at the time, “that looks more like how the brain works! I want to learn how to do that!” And so I got in contact with Ed and he was gracious enough to host me for a postdoc fellowship. It was really great timing to be there, as Ed was developing his Networks Group and I was able to work with and learn from some really great people like Danielle Bassett, David Meunier, Manfred Kitzbichler and Aaron Alexander-Bloch. That experience really ended up shaping the trajectory that I took.
IL: Can you explain how looking at the brain’s connectome with MRI can help us gain a better understanding of psychiatric diseases and develop hypotheses about disease mechanisms and treatment options?
AF: I guess for me, it's a simple chain of logic. If we think about the brain as a network, then an important first step is to map and understand how different parts of the brain connect with each other. That's not to say that generating a map of connectivity on its own is sufficient; we also need to understand the dynamics that unfold on connectomes. But I do think that generating such a map is a necessary and important first step.
The first wave of connectomics studies that we've seen have been really useful for mapping where connectivity differences are between a given patient group and healthy controls. Now, in general, in psychiatry we do need to be a bit smarter about the way we define our clinical phenotypes in the first place, but we are starting to build a picture of how different brain systems are disrupted in different disorders. So now as we move into the next phase, the challenge is going to be to use this knowledge to generate new mechanistic insights and develop new treatment strategies. We're starting to see some success already, such as with the development of connectivity-guided brain stimulation protocols for mood disorders.
An advantage of the connectomic approach is that it can be coupled with biophysical models of brain dynamics, like neural mass models, which allow us to generate whole brain simulations of neural activity. This is an area that is still very much a work in progress, but, in principle, these models will allow us to test different mechanistic explanations for different disorders by tuning model parameters and seeing if the model can reproduce the activity changes seen in a given patient group. I think there's a lot of promise in this regard.
IL: You have been studying schizophrenia a lot. Why is the connectome particularly interesting in this disease?
AF: That's a good question. So the name itself, schizophrenia, implies a splitting or a breakdown of the mind's thought processes. An obvious neurobiological hypothesis, then, would be that this disorder emerges or arises from a disruption in the way different parts of the brain communicate with each other. This is not a new idea; it was first suggested by Carl Wernicke over 100 years ago.
Personally, I think there's a natural alignment between this idea and the phenomenology of the disorder, given that it really does seem to involve a breakdown in the brain's ability to think coherently and in an organised way. We now have the tools available to really interrogate these connectivity disruptions across the entire brain. You can see how, as these approaches have developed, the thinking in the field has changed. When I was doing my PhD, most of the literature focused on the role of individual brain regions like the dorsolateral prefrontal cortex or the striatum or hippocampus; and now we see a greater emphasis on trying to understand how all these regions interact in a connected system. The hope is that these network-based understandings provide a more accurate description of what's actually happening in the disease.
But this doesn’t just apply to schizophrenia. We know from a lot of imaging and lesion studies that there's no single causal lesion for psychotic illness, which then leads to the idea that there is something happening at the level of interconnected circuits. This tends to be a recurring theme in a lot of psychiatric disorders. We now know that most of them can't be explained by focal damage in any single part of the brain. It is possible that at least some of these disorders might have a focal onset of pathology in one part of the brain that then spreads to affect other areas over time. Other disorders might have a truly multi-focal origin. It’s also possible that many psychiatric disorders are the result of subtle neurodevelopmental changes in brain wiring. But these are still open questions.
IL: The microstructure and gene expression of cortical regions seems to play a large role in determining inter-cortical connections. Can you tell us a bit about the recently trending transcriptomic brain atlases and why and how you have been using them in your research?
AF: Well, I don't want to speak for other people, but I feel like if you spend enough time doing brain imaging, you eventually get to a point where you start to question what it is that you're actually measuring. And I mainly work with MRI, which is a fantastic tool, but it often provides indirect measures of the underlying physiological processes that we're interested in. This poses two major problems. The first is that it can be difficult to disentangle neural contributions from other contributions to the signal, including different noise sources, and that can make it difficult to interpret our findings. The second problem is that, even if we can rule out measurement noise, we often don't know what the underlying molecular mechanisms are that are driving our results. For me, the gene expression atlases provide an opportunity to try and move beyond just using the imaging data to develop some hypotheses about those underlying mechanisms.
That's not to say that the expression data are some kind of gold standard. In our lab, we've concentrated a lot on trying to understand some of the issues associated with gene expression data and developing workflows for how they can best be integrated with imaging measures. But I do think that if we put those issues aside and we do get a correlation between an imaging measure and the expression profile of a gene or a set of genes, then we can limit the range of possible explanations and identify candidate mechanisms that we can then pursue in further work. So it's really a way of moving beyond just mapping so that we can say: “Of all the possible molecular mechanisms that could explain what I see in this map, the expression data now allows me to narrow my search down to this set of mechanisms or pathways."
In our own work, we've looked at how gene expression profiles relate to brain connectivity. Other groups have done some really interesting work looking at transcriptional correlates of the effects of normal development or different types of disease. I find this work interesting because it does allow us to move past simply mapping where changes are occurring to start developing some plausible hypotheses about the specific molecules or pathways that might be involved.
IL: Recently, the usefulness of relating variants of candidate genes to brain and behavioural phenotypes in the context of psychiatric disease has been heavily questioned. Could you tell us a bit about where this debate is coming from? What do you think are the consequences and alternatives for researchers trying to understand the genetic underpinnings of individual differences in brain structure and function?
AF: More or less two decades ago, the main way to identify risk variants for disease was through linkage analysis. This required people to recruit extended pedigrees and it worked well for Mendelian traits, but a lot of psychiatric disorders are not Mendelian. So researchers started to hypothesize which specific genes might be involved based on what they knew about the physiology of those disorders. And then the idea was to identify a specific variant in that gene that was known to be functional and to examine how that variant relates to some imaging or behavioural measure.
After a little while people started to question the plausibility of that approach, for a few reasons. One is that the prior probability of correctly choosing a causal variant is quite low if you think about all the possible genes and variants that could contribute to complex phenotypes like schizophrenia. And we also know relatively little about the molecular mechanisms of what might be causing variation in these phenotypes. We then started to see a number of well powered studies fail to replicate earlier findings that had been published in smaller samples.
The field's solution to this false-positive problem with candidate gene associations was to shift towards conducting large-scale genome-wide association studies, or GWAS, where the idea is that you compare allele frequencies between, say, a patient and a control group, at hundreds of thousands or even millions of markers scattered throughout the genome. And given that you're doing so many tests and that you're often doing a Bonferroni correction over a million comparisons, you need huge sample sizes to be able to identify anything as being significant with a decent degree of power. So the sample size is generally in the order of tens of thousands of people.
We've had a wave of these studies now, and they've been really important in showing us that psychiatric disorders, and even brain imaging phenotypes, have a complex genetic basis. The effects of any individual variant, at least if we're talking about common variants in the population, are pretty small, at around 1%. The upshot of these developments is that if someone's interested in identifying genetic variants related to a phenotype, they probably need to conduct a GWAS as a first step. The ENIGMA consortium has really led the charge in this space with respect to imaging phenotypes, and I'm sure we'll start to see more of this kind of work as large open datasets like UK Biobank become increasingly available and used widely.
Personally, I view these analyses as a first step to identify which variants are related to a phenotype. But then the next step is to identify the biological effects of those variants and imaging can be helpful in addressing this goal. There are also some other really nice resources, such as data made available by the GTEx and PsychENCODE consortia, which allow people to identify which variants impact gene expression in the brain. These can be combined with data from gene expression atlases to develop a more comprehensive picture of the relationship between genes and brain. This approach aligns with the strategy we've been using in our own lab. We've tried to combine these and other sources of information to try to understand how genes influence brain connectivity.
IL: Your research combines expertise from various disciplines, including brain imaging methodology, modelling, psychiatry, genetics etc. What are the challenges when doing so and what recommendations do you have for people who want to pursue highly interdisciplinary research?
AF: It's a good question. I think the interdisciplinary nature of my research probably stems from my inability to focus on a single topic! But my personal view is that the mapping between brain and behaviour is so complex, and our measurements are so imprecise, that any single approach on its own is not going to be sufficient to really tackle interesting neuroscientific questions in a comprehensive way. So the main thing that motivates and excites me about science is the opportunity to learn new ideas and get exposed to completely different ways of thinking. And so I guess I just like to explore my interests and see where they lead me.
I feel that the main challenge is that you always feel like a novice. Each new area or new field has its own jargon and concepts and methods and conventions, and these can take time to learn. And so I guess the best advice I could give would be to learn to be comfortable with the discomfort of not being an expert and having to start from scratch. Something that can really help with that is to team up with people who are experts in the domain and to learn as much as you can from them. Try to cultivate a good working, respectful relationship and to not be afraid to ask dumb questions, which I happen to do a lot of. Especially in the beginning of a collaboration with someone from a different discipline, you might be speaking completely different languages and it can take some patience and time to navigate those differences. But personally I find that the end result is always worth it.
IL: We previously talked about how important a healthy work-life balance is to stay productive. Leading your own research group, how do you encourage the people in your team to sometimes work less?
AF: I think the most important thing that someone can do is to set aside some time each day to try and do something pleasurable that is unrelated to work. That could be playing a sport or a musical instrument or doing gardening or sketch art or stamp collecting or whatever. For me, daily exercise is really important, but for others, it could be something completely different.
The first thing I always suggest is to create that time each day. But everyone is different, and some people struggle with that. So ultimately, I'll let people decide what's going to work for them. But sometimes, when I suggest it, people think, ‘Oh, my God, I can't do that. It's impossible. How can I spare an hour a day?’. But you never realise that you're able to do it unless you actually do it. I always think of a line from the movie The Matrix, where one of the characters says ‘You cannot ever have time if you do not make time’. And I think that's very true. Once you create that headspace, it allows you to think a bit more rationally about how you're using your time effectively. And even more broadly, where you want to go with your career, what are the things you want to focus on. Creating that healthy space can help you get a bit more perspective.
IL: Somebody once told me that one should have a 10 year career plan ready. What are your plans for the next ten years?
AF: I guess it is challenging to develop a detailed 10 year plan, but I do think it is good to have long-term goals and a 10-year horizon can act as an anchor for more detailed shorter-term plans. I like to work in five-year increments.
If you press me, I'd have to say that there are two big questions that I want to focus on over the next 10 years. The first question is: why is the brain connected the way it is? We know that connectivity between brain regions is not random, so what are the underlying principles that govern how different parts of the brain connect to each other? Are these principles instantiated through genetic factors or other mechanisms? Do these principles or wiring rules have any bearing on our understanding of mental illness? So that's one area.
The other is a little more clinical and is really focused on whether we can develop an empirically-guided alternative to the DSM (Diagnostic and Statistical Manual of Mental Disorders). Thinking about questions like: how can we best draw the line between mental illness and health? What's the underlying latent structure of psychopathology? If we had an alternative to the DSM, would it allow us to generate better insights into the biology of mental illness? These are the two big picture areas that I'm interested in, and which will frame my work over the next 10 years.
IL: Last but not least, do you want to give us a little teaser about your OHBM virtual keynote lecture?
AF: I'll be talking about work we've been doing in the lab, trying to understand brain network hubs. Hubs are highly connected parts of the brain, and it's thought that they play a really important role in promoting integrated brain function. In our lab, we've been focused on trying to understand why they get wired in the way they are, so I'll talk about work we have been doing on how to map and describe properties of brain network hubs, some of the insights that we've gained from generative models of network wiring, and what these models reveal in terms of what can and can't explain hub connectivity. I'll talk about some more recent work we've been doing focused on the genetics of hub connectivity, and I’ll present some data that suggests that there really is something quite unique and special about hubs at the level of genes. This is work that we've done across mouse, human and C. elegans, and it's trying to bring together imaging, genetics, and modelling, so hopefully there will be something in there for everyone!
IL: Thanks a lot, I am looking forward to seeing your keynote lecture!
by Athina Tzovara, Julia Kam, Valentina Borghesani, AmanPreet Badhwar
‘If you never did you should. These things are fun and fun is good’ - Dr. Seuss
Live Review for Kids
The OHBM Diversity and Inclusivity Committee is exploring an exciting and new direction this year: we will be engaging kids in the scientific review process! We asked five prominent scientists in the field of brain mapping and neuroscience to write a short article explaining their research to kids. The articles are written for Frontiers for Young Minds (https://kids.frontiersin.org/), a journal dedicated to young readers aged 8-15. Once written, the articles are assigned to five young reviewers, who will work together with a neuroscientist mentor to critique the articles and prepare questions for the scientists.
During the virtual OHBM meeting, the five scientists will give a short presentation on their article. Following this, young reviewers will have 5-10 mins to grill the scientists with questions, based on the review they prepared with their mentors. Finally, the panel of young reviewers will decide whether to accept the paper or not. Thanks to the enthusiasm of our scientists, we have gathered five engaging topics: Caitlin Mills will talk about the neural correlates of boredom, Tonya White will explain why some kids are more easily frustrated than others, Fady Girgis will walk us through the use of brain surgery to treat epileptic seizures, AmanPreet Badhwar will present on biomarkers of Alzheimer’s disease, while Christoph Korn and Gabriela Rosenblau will talk about social learning in adolescence. After the virtual meeting, all accepted papers will be published online in a joint OHBM-themed release of Frontiers for Young Minds. All articles will then be combined in the form of an e-book, with the common theme of brain mapping.
Our goal with this activity is to communicate science to kids, develop their critical thinking ability, and nurture their inherent curiosity towards understanding the functions and anatomy of the human brain. Join the live review online and witness how Caitlin, Tonya, Fady, Aman, Gabriela and Christoph will handle the grilling by these critical kid reviewers!
Online family-friendly brain-art activities - A collaboration with the OHBM Brain-Art SIG
At last year's meeting in Rome, a drawing corner was provided where children in attendance could freely craft. Their art pieces were progressively displayed on a panel within the BrainArt exhibit (see figures below). For this special virtual edition of OHBM, we will be providing links to online resources that kids can explore and enjoy at home such as printable activities (e.g., build your own brainhat) and coloring sheets (e.g., online coloring books).
Example of the kids drawing corner at OHBM 2019 in Rome
Kids and their caregivers can also dive into the exciting world of TEAM REMARKABLZ - diversity- and inclusion-aware science superheroes (https://www.theremarkablz.com/). Free educational resources include printable colouring pages, science experiments, and topics that combine science and art (https://www.theremarkablz.com/freeresources).
Some examples of Science Superheroes waiting to welcome kids of all ages!
Images courtesy of www.theremarkablz.com
The Diversity and Inclusivity Committee is committed to supporting all families within the OHBM Community. We strive to offer activities and spaces (however virtual) where parents and kids can develop a shared passion for science: no matter their age, language, or educational system.
This year, we hope to engage them through our renewed collaboration with the Brain-Art SIG and the new live scientific review format, but we wish for these initiatives to keep growing and diversifying, just like the Community they serve. To this end, we welcome comments and suggestions to ensure that any future OHBM annual meeting, whether online or in-person, will be a great experience for grown-ups as well as kids!
by Ekaterina Dobryakova
In preparation for this year’s Annual Meeting, we spoke to one of the keynote speakers, Dr. Claudia Buss. Claudia is an Associate Professor at the University of California, Irvine and a Professor of Medical Psychology at the Charité University Medicine in Berlin. In a virtual meeting, I sat down with Dr. Buss to discuss her captivating research in the field of developmental programming and newborn infant neuroimaging.
Ekaterina Dobryakova (ED): Dr. Buss, thank you for dedicating your time to this interview. Before we get into more specific questions, I was wondering whether you would mind sharing a bit about how you came to do the work that you're doing, and what got you to follow this passion in research.
Claudia Buss (CB): Since I started studying psychology, I have always been very interested in the interface of the mind and the brain. Specifically, in how stress can affect the brain, and then, consequently, health and disease, specifically psychiatric disease. My mentor during my doctoral training was Dirk Hellhammer, who unfortunately recently passed away. He was really the one who stimulated my interest in stress biology and fetal programming of health and disease. He also taught me that if you want to understand the origins for disease susceptibility, you have to go back to the very early period of life, when an individual develops. The brain is highly plastic at that time and therefore can integrate information about the environment during development. Dirk introduced me to the concept of developmental programming.
The origins of basically all common complex disorders, including psychiatric disorders, can be traced back to very early life, when the susceptibility for these disorders is laid down. I'm particularly interested in the developing brain, because there is no other organ that develops over such a protracted period of time. More specifically, I am interested in how cues about maternal stress during and before pregnancy, and even during mothers’ own development, can affect the developing fetal brain. I investigate which biological signals provide information to the fetus about maternal stress.
ED: This is fascinating. What are the challenges of this research area, given that you want to study individuals in their prenatal stage?
CB: Gaining information about the fetal origins of risk for psychiatric disorders in humans is best achieved by prospective longitudinal studies. These start during pregnancy and, ideally, follow up the offspring during critical developmental periods from fetal to infant to child to adolescence, with serial measures of brain, cognitive, and affective development. This ideally requires large study samples, which we usually don't have. However, our studies to date have provided a first proof of principle that there are associations between variation in the prenatal environment and alterations in the neonatal brain. At this point, postnatal events will have had minimal influence. Longer-term changes and risks for psychiatric disorders have to be studied in large samples. Therefore, multicenter studies such as the HEALthy Brain and Child Development Study and the Lifespan Baby Connectome Study are extremely valuable.
Further, to gain information about prenatal origins of susceptibility for psychiatric disorders, it is crucial to record many aspects of the prenatal environment and then serially assess the brain during the period of most rapid development. This is especially important during the first two years of life.
It is very important to have a neonatal baseline measure and then characterize the developmental trajectory from this point. Because we acquire MRI scans from neonates and young infants during natural sleep (so we never sedate the children), scan acquisition is extremely laborious and requires very, very committed and patient staff, as well as the cooperation of parents, because sometimes children take a long time to fall asleep and scans need to be repeated. We often discuss this amongst our collaborators, that study sites are only successful if there are people who are committed to this being done. It needs to be a priority because it's so laborious.
In terms of other challenges, the developing and immature brain is very different from an adult brain. Common data processing tools that have been developed and optimized for adult scans cannot be used. I'm very fortunate to collaborate with leading experts in the field, specifically Damien Fair from OHSU and Martin Styner from UNC, who have greatly contributed to developing methods that address these specific challenges.
Of course, another challenge common to all observational human studies is that inferences about causality cannot be drawn. This is why animal models that allow experimental manipulation are a crucial complement to human observational studies on fetal programming.
ED: It’s obvious that there are a lot of layers to your research. Longitudinal studies by themselves present a lot of challenges, even without the added complexity of scanning such a young pediatric population and the accompanying technological hurdles. You already touched upon this before, but when you first started in this area of research, what was the most inspiring or motivating scientific work that sparked your interest, aside from the inspiration you got from your mentors?
CB: The field received a lot of attention when the first papers on fetal programming of health and disease came out from David Barker’s lab. They showed that there were associations between lower birth weight and risk for cardiovascular disease in later life. Then, more and more epidemiological studies found associations between adverse birth outcomes, such as lower birth weight and shorter length of gestation, and basically all common complex disorders, including psychiatric disorders.
The idea was that it's not the low birth weight per se that's increasing the risk for later disease. Lower birth weight was found to be an indicator of an adverse prenatal environment which affects later disease susceptibility. So people started thinking: what kind of environmental conditions (i.e., nutrition, smoking, stress) during fetal life can program the organism in a way to predispose that individual for later disease? Through which pathways do these risk factors affect the developing fetus?
Another study that was really interesting to me was a study by Gilbertson et al. published in Nature Neuroscience in 2002. In a very elegant study design they found that a smaller hippocampal volume is a risk factor for developing Post Traumatic Stress Disorder (PTSD) after combat trauma exposure. Before that, it wasn't really clear whether the smaller hippocampal volume resulted from the trauma and due to the neurotoxicity of stress, or whether these patients had a smaller hippocampal volume to begin with, predisposing them to developing PTSD after trauma exposure. This study actually showed that it was smaller hippocampal volume predisposing them to PTSD. This made me want to find out what might be the origins of smaller hippocampal volumes.
I was pretty sure that genetics wouldn't explain hippocampal volumes, but that it rather would be an interaction with the environment and environmental factors, especially in early life and especially during critical periods of brain development. The Gilbertson paper made me want to study what might lead to these neuro-phenotypes that then increase vulnerability and susceptibility for psychiatric disorders.
ED: So, given these environmental factors during fetal development, are you seeing smaller hippocampal volumes later in life in pediatric populations?
CB: Yes, there are associations between adverse birth outcomes, such as lower birth weight and shorter length of gestation, and smaller hippocampal volume. Also, something we haven't published yet but are just about to, is an association between maternal stress during pregnancy and smaller hippocampal volumes in newborns. Other groups have shown this as well. There's quite a bit of work now on alterations of the limbic system in association with prenatal stress.
ED: Stress is a hot topic now. When you're talking about stress, how do you define it in your research? Is it more chronic stress or a particular type of stress? One can say that even exercise is a stress to your body, for example.
CB: That's a very good question. I mainly define stress as an increase in stress-sensitive biological markers, specifically endocrine markers like cortisol or immune markers like proinflammatory cytokines, but also metabolic markers. What we know is that there are many different stressors, like stress at work, anxiety, depressive symptoms, death and sickness of someone close, lack of social support. All these factors have the potential to alter maternal biology. And the fetus needs to receive a biological cue to be able to adapt its development. The fetus doesn't care whether the mother is stressed because her boss is stressing her out, because she has conflicts with her partner, or because someone is sick. The fetus cannot interpret that; the fetus only gets biological cues through the placenta. Thus, stress of the mother has to translate into a biological signal, so that the fetus can respond to it.
Whether stress-associated biological changes occur depends on many aspects of the maternal constitution, such as maternal genetic makeup, social support, and coping strategies; all of these will determine whether the stress that the mother experiences actually translates into a biological signal that the fetus can then receive. This is why I would refrain from calling certain stressors more harmful than others. There are some colleagues who think there's some evidence for that, but I don't think there is strong evidence. What is pretty clear is that acute stress and alterations in biological mediators of stress are very unlikely to alter fetal development. So it would have to be chronic stress exposure and chronic elevations of these stress mediators. I would even go as far as saying that maternal acute stress, from time to time, is good because the fetus gets exposed to certain variation in stress-sensitive biological mediators, which prepares it for extrauterine life.
Even when talking about the long-term neurodevelopmental consequences of chronic stress, you may view these as harmful because they increase risk for mental health disorders but you could also look at them from an evolutionary perspective and consider them adaptive because the changes may increase chances of survival in a more stressful environment (for example altered neural circuitry that supports high vigilance may on the one hand increase risk for anxiety disorders but may also serve an important purpose in a dangerous environment).
ED: Is there any research where you follow the kids who were exposed to chronic stress and how they fare later in life, even when they do develop psychiatric disorders?
CB: The studies that we have done mainly characterized newborn neuro-phenotypes based on MRI studies in association with different types of stressors. We have looked at elevated cortisol concentrations, and then also inflammation during pregnancy. There, we do see associations with newborn brain anatomy and also connectivity, and these neuro-phenotypes predict behavior at the age of two years. We have followed up these kids again at five years. But it's a rather small sample size; our sample size wasn't that big to begin with. We started out with about 120 mother-child pairs, where we had complete data during pregnancy of three time points during pregnancy and then the newborns. So by the age of five, the sample size was quite small: around 70. But in early childhood we also see associations between chronic systemic maternal inflammation during pregnancy and neuro-cognitive function, for example. We are now participating in the ECHO initiative, which may better allow answering questions about prenatal origins of psychiatric disorders because ECHO integrates many prospective longitudinal US studies into a common research protocol to answer questions related to developmental programming of health and disease.
ED: This is so interesting, and again, shows how multilayered your research is. What would you say are the most pressing methodological issues in your field of research?
CB: Improving processing pipelines for neonatal MRI data is a pressing issue. As I said, my collaborators are at the forefront of working on this and great advances have already been made in the context of the developing Human Connectome Project and the Baby Connectome Project. When I started this work 10 years ago there were very few groups who actually did newborn infant neuroimaging. Now, more and more people have become interested in this field and there are big consortia focusing on MRI-based characterization of early brain development. So a lot of progress has been made compared to 10 years ago. But I think there's still a lot of work ahead of us to be at a comparable state as we are for adult image processing.
Then, I think there is still room for optimizing acquisition protocols based on recent experiences. Weighing resolution and signal-to-noise ratio against scan time is important, as we always have to be very cognizant that newborns usually sleep pretty well for about 40 minutes and then start waking up. In my experience, if you can keep the protocols below 40 or 45 minutes, you're good. Thus, my recommendation is to stay below 45 minutes, and if you want to scan different modalities, you really have to weigh what is important to you.
I think what is also a pressing issue is to harmonize measures of the prenatal environment because, as you are pointing out, what people mean when they refer to stress and the way they define it may differ a lot. Further, evaluating the quality of biological assays is very important. I'm not sure that this receives enough attention. I think there is this notion that biological measurements are more objective and valid than psychological self-report measures. However, there are problems and inaccuracies associated with biological assays as well, which deserve attention to obtain reliable and comparable study results. Right now, there is a lot of heterogeneity in the way prenatal stress is being defined in studies of its neurodevelopmental consequences, and it's therefore hard to say whether one study specifically replicates what another has found, as there are all these nuances in how the prenatal environment was characterized and which aspects of the offspring brain were examined. That really complicates the picture, and I think there we can definitely improve.
ED: Absolutely. Hopefully such initiatives like OHBM’s Replication Award will start the wave of researchers trying to replicate previous findings and remove the barriers for replication studies. Another topic I would like to talk about is mentorship. What do you think are the most important things to do as a mentor?
CB: I feel that continuous communication with and support for my students, as well as regular meetings, are crucial. Only if you're constantly communicating can you monitor progress and detect barriers to progress. I also think it's crucial to foster intrinsic motivation for students’ work, because I feel it is important to burn for what you are doing and what you're researching. Only then can you conduct these laborious studies, stay on top of things, and stay motivated. I think one way of keeping students motivated is a very early introduction to international experts in the field, having research exchanges, visiting international laboratories and conferences, and being able to present one’s own work to peers.
What I also learned from my mentor, Pathik Wadhwa at UCI, is that having a well-grounded conceptual framework in the context of which research questions are being developed is very crucial. This is why I like to begin the training of my students by developing concept and perspectives papers in the respective fields that they are working on. I think that helps them get established in the field and provides a good basis and conceptual framework for doing their own empirical studies and developing their own research questions.
ED: Indeed, intrinsic motivation is very important. To round out our conversation, I was wondering whether you have thoughts about the OHBM conference going virtual in 2020. What are the silver linings of the current situation, which will hopefully improve soon?
CB: Well, with the OHBM conference going virtual, potentially even more people can attend because it will be more accessible. It doesn’t require traveling, which is good from an environmental perspective and also saves time. I think it could also be an advantage to take the time to listen in more detail to presentations that you're really interested in, and to be able to go back and listen to them again. What we will be missing, of course, are the Q&A sessions after the talks and the interactions with peers during coffee breaks, which are also really important at conferences. But I would assume that if there are specific questions, the speakers can be contacted, so I don't see too much of a downside. We just have to make the best of the situation that we're in. And I think having a virtual meeting is definitely better than not having a meeting and not hearing about the advancements in our field.
ED: Absolutely. The conference should still be very interesting and engaging. Thank you so much for your time.
Authors: Claude Bajada, Nils Muhlert, Ilona Lipp
Infographic: Roselyne Chauvin
Expert editors: Alfred Anwander, Jurgen Gatt
Newbie editor: Caroline Jantzen
Neuroanatomy is one of the most exciting topics in neuroscience! Some readers may disagree, but for now, humor us and read along. With the help of this On-Demand post, we will convince you not only that anatomy is a useful endeavour but that it is one where much beauty is found.
Our journey starts with the fundamental notion that the structure and the function of objects are tightly coupled; sometimes in ways that are not obvious. Understanding the complexity of the brain’s structure, hopefully, allows researchers to build more accurate models of brain function.
Neuroscience has, however, become such a transdisciplinary subject that it is not unexpected to meet a top scientist who has never seen a cadaveric brain. Indeed, while most neuroscientists have acquired a basic understanding of brain anatomy, learning about the main gyri, sulci and nuclei, few remember the function or location of the mysterious substantia innominata or the periaqueductal grey.
If you are one such scientist, fear not: you are not alone. Neuroanatomy is a vast and somewhat arcane subject, steeped in history. As such, for the sake of our sanity and yours, in this singular blogpost we decided to restrict our dealings to the topics most commonly tackled within the neuroimaging community. This is an overview of the major landmarks and structures that one can expect to see on an MRI scan, and of the ongoing conundrum of how to subdivide the brain into useful (sub)regions for further analysis - parcellation (see the OHBM How-To Machine Learning post on performing data-driven parcellation of MRI data).
Why is understanding anatomy important?
Getting from MRI DICOM files to a statistical map of significant clusters that shows differences between two experimental groups requires no anatomical knowledge whatsoever. But coming up with sensible brain-based hypotheses, interpreting findings, and communicating them to your fellow colleagues relies on a common language in the field: anatomy.
In his video, David Van Essen (from min. 0:50) details the basic features of human brains: we have two hemispheres, each with a cortical surface area of about 1000 cm2, and a highly convoluted cortex that is 2-4 mm thick. While providing an overview of the developmental mechanisms leading to the adult brain as we know it, he points out the huge individual variability in brain anatomy, which requires us to apply flexible approaches in neuroimaging studies. These approaches include advanced image registration and the use of atlases, but also functional localisers, individual tractography, individual parcellations and, last but not least, knowing our anatomy.
A knowledge of anatomy can make your life as a neuroscientist a lot easier. For example, it can aid placement of volumes of interest in MR spectroscopy, it can allow you to evaluate the output of automated segmentation pipelines, and, of course, help you quickly identify patterns of activity before receiving confirmation from atlases. Interested in doing some fancy high field, layer-specific, fMRI (see Noam Harel’s video)? Anatomical knowledge is vital! Also, recent evidence suggests that it can help increase the reproducibility of findings between labs, by improving accuracy and reducing noise when carrying out tractography in diffusion MRI. Understanding anatomy has clear implications for neuroimagers!
Nomenclature, Etymology and Orientation in the Brain
One of the first battlegrounds for the new student of anatomy is understanding the cryptic vocabulary of experts. Throughout the videos presented in the OHBM on-demand anatomy courses, various experts refer to different parts of the brain on the assumption that we all share a similar understanding of the language. To bring everyone to the same page, we will review the major terms that are crucial to understand before diving into any neuroimaging study.
Anatomy uses many words borrowed, butchered, and stolen from Latin and Greek. Cerebrum is Latin for “that which is carried toward the head.” Cephalon is ancient Greek for head. Encephalon is the substance that is found inside the head, the cauliflower-resembling organ we now know as the brain. Any word encountered that has elements of these words refers to the brain. For example, the word cerebellum is the diminutive of cerebrum. The diencephalon (across the brain) is made up of the thalami (chambers), hypothalamus (below the thalamus) and epithalamus (above the thalamus). Knowing the etymology of the words makes remembering these ludicrously named structures easier. Readers are referred to this amusing article for more.
Orientation is also often done in Latin. In their talks, Svenja and Julian Caspers regularly use two different ways of describing orientation in the brain: the terms superior, anterior, inferior and posterior (SAIP) versus the terms dorsal, ventral, rostral, and caudal (DVRC). While elite neuroanatomists would have no difficulty using these terms, for the neophyte they can be confusing. They actually refer to two completely independent coordinate systems. The SAIP approach is a real-world orientation system. The terms themselves are intuitive, and most people need little more explanation than the terms themselves. DVRC is another story! This is an orientation system that depends on the organism itself; the terms relate to parts of the body, once again, in Latin. Dorsal means towards the back, ventral is towards the belly, rostral is towards the beak (or nose), while caudal is toward the tail. Our bipedal nature makes this orientation system unintuitive - at the level of the spine, brainstem and cerebellum, rostral means towards the top of our body (upwards towards the nose), but in the cerebrum, rostral means towards the front of our body (forwards towards the nose). If it is still murky in your heads, we would advise dipping into an introductory neuroanatomy textbook for some pretty pictures of the two orientation systems.
When you hear the term medial, this means “towards the middle of the brain”, whilst lateral indicates “towards the sides”.
The frames of reference in neuroanatomy change for the cerebrum compared to the cerebellum, brainstem and spinal cord. For quadrupeds, like sheep, no such change is seen. Walking on two feet is great for reaching things, but not so great for keeping neuroanatomy simple!
Another important set of terms relates to the ways one can slice a brain, which is generally what we do when viewing MR images. Axial, or horizontal, slices allow you to scroll through the brain from top to bottom along the axis of the brain. Sagittal slices, named after the Latin word for arrow (think of the way an archer holds their bow and arrow), run from left to right. Finally, coronal sections provide a view of the brain as if it were cut through by a hairband-shaped crown (Latin corona) from ear to ear, with slices moving from anterior to posterior.
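For readers who work with volumetric data directly, the three slicing planes map neatly onto array indexing. Below is a minimal, hypothetical numpy sketch: the array shape and axis assignments are made up for illustration, and it assumes the volume is stored in RAS+ orientation (axis 0 = left-right, axis 1 = posterior-anterior, axis 2 = inferior-superior), which is not guaranteed for any given file - always check your image header first.

```python
import numpy as np

# Hypothetical 3D volume, assumed to be in RAS+ orientation:
# axis 0 = left -> right, axis 1 = posterior -> anterior,
# axis 2 = inferior -> superior.
vol = np.zeros((181, 217, 181))

# Sagittal slice: fix the left-right axis (a view "from the side").
sagittal = vol[90, :, :]

# Coronal slice: fix the posterior-anterior axis (the "crown" cut).
coronal = vol[:, 108, :]

# Axial (horizontal) slice: fix the inferior-superior axis.
axial = vol[:, :, 90]

print(sagittal.shape, coronal.shape, axial.shape)
```

In practice, libraries such as nibabel expose the image affine, which lets you confirm which array axis corresponds to which anatomical direction before slicing.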
(Yawn) Thank you for that primer, but when I look at an MRI scan, I still feel completely lost!! How do I get my head around it?
MRI scans are tricky: they are often viewed as two-dimensional slices and, depending on the cut and on the individual, it can still be very difficult to orient yourself, especially when the person was not lying straight in the scanner. As Svenja states in her video (min. 1:20), there are some general organising gross anatomical features that remain relatively consistent across individuals. These are called landmarks.
So, what are the major landmarks in the brain?
Some aspects of anatomy are pretty clear. Every healthy human brain has a cerebellum, brainstem and cerebrum... and a substantia innominata, of course. It has ventricles filled with cerebrospinal fluid, white matter that looks homogeneous on a conventional MRI scan, easily spottable subcortical gray matter nuclei, and cortical gray matter. Looking at a whole human brain from the outside shows a cortical folding pattern with specific sulcal and gyral structures, which have been labelled and can be used to orient yourself around the occipital, temporal, parietal and frontal lobes. In his video, Julian explains the main sulcal and gyral landmarks and how to find them in structural MR images. For example, if you spot an ‘M’ and ‘U’ in a lateral sagittal section, you have most likely found the central sulcus and precentral sulcus (from min. 11:36). An omega sign in a superior axial section indicates that you have found the motor cortex, while the cingulate sulcus appears as a bracket sign (from min. 14:40). The Figure below summarizes Julian’s guide to landmark-spotting strategies.
How to find the major landmarks in the brain on MRI scans (compilation of Julian Casper’s slides)
These landmarks are definitely useful, but they seem quite vague for reporting the spatial locations of my findings!
A wise anatomy professor once complimented one of the authors of this piece (CJB) by stating that he was the owner of a “gelatinous mass of a brain.” Despite the gyral formations, the brain does indeed look like one amorphous clump of jelly.
Notwithstanding the repeated news headlines claiming that neuroscientists have found the region of the brain responsible for X, it is notoriously difficult to consistently define brain regions across different individuals. While the macrostructure of the brain, such as the main sulcal and gyral pattern, is useful for orienting yourself on a whole brain or MRI scan, it does not necessarily relate very well to underlying brain function, which might be more closely tied to the neuronal microstructure of a cortical area. To decide how to define brain areas based on cell anatomy, we first need to think about which neural features to use in order to separate them (also see this paper for cortical parcellations).
Academic journal articles often refer to specific brain regions such as Brodmann area 17. Is this parcellation based on neural features?
Yes! During the late 19th and early 20th century, anatomists started discovering that, while the cortex looks fairly homogeneous to the naked eye, it consists of various layers that differ in their cell type, cell composition, and function. Anatomists such as Cecile and Oskar Vogt, Constantin von Economo, and Korbinian Brodmann spent their time observing microscopic features of the cerebral cortex and classifying it according to similarity. These areas have become known as parcels. Undoubtedly the most famous parcellation scheme is Korbinian Brodmann’s 1909 atlas.
In her video, Nicola Palomero-Gallagher shows some of the main historical cortical parcellations. She points out that the parcellation you get depends largely on how boundaries are defined. She then explains how more quantitative and objective approaches are used not only for finding parcellations in individual post-mortem brains, but also for generating population maps that reflect individual variability in the boundaries of the areas. Cytoarchitecture is not the only feature that has been used to parcellate the brain. At the same time that Brodmann was using cytoarchitecture, the Vogts tended to use myeloarchitecture to define regions. In his video, Matt Glasser explains how myelin-sensitive MRI contrasts can be used to study cortical myeloarchitecture and how that helps align cortical surfaces across individuals. But there is no reason to simply stop there - why not use the distribution of neurotransmitter receptors to delineate brain areas?
Nicola explaining the cytoarchitectonic profiles of the primary and secondary visual cortices.
In fact, while the cyto- and myeloarchitecture of the cortex tells us something about the type of processing happening in a cortical region, neuroreceptor density can also tell us quite a bit about different neural functions and how they become impaired in disease states. However, as Karl Zilles explains in his video, while cortical regions differ in their receptor fingerprints, there does not seem to be a clear relationship between the parcellations based on cyto- and myeloarchitectonics and those based on neurotransmitter receptor maps.
Karl showing neurotransmitter receptor profiles of different cortical regions.
How can I link my imaging research to the histological parcellation of the brain?
Functionally, the cortex is often divided based on the order of information processing: primary sensory areas are the first ones to receive sensory information, secondary areas do further processing, and association areas integrate information from different sensory modalities. Some functional units clearly match the microstructural organisation of the brain. For this reason, Brodmann’s atlas is often used to report the location of activation in functional imaging studies.
Particularly high correspondence between function and microstructure has been reported for primary areas. In her video, Katrin Amunts explains how to identify the primary areas cytoarchitectonically, including the primary motor cortex (from min. 3:00), the primary auditory cortex (from min. 8:20), the primary visual cortex (from min. 9:20), and Broca’s area (from min. 12:37). However, how well these cytoarchitectonically distinct areas can be localised based on the anatomical landmarks visible on conventional MRI scans differs between regions. In her video, Nicola Palomero-Gallagher provides examples of brain regions where the gyrification patterns nicely coincide with the microstructure and where they do not. Practical examples of how to make use of the correspondence in the visual system are given by Kalanit Grill-Spector in her video. She explains the anatomical localisation and microstructural features of place-selective regions within the so-called collateral sulcus (from min. 5:30) and face-selective regions in the so-called mid-fusiform sulcus (from min. 09:53). (For a good overview of the visual system also see Rainer Goebel’s video (from min. 3:12)). Primary areas are also characterized by a specific mesoscopic organisation called topography (explained in Daniel Margulies’s video from min. 1:10). How high resolution fMRI can be used to study such organisation is explained by Rainer with examples of retinotopic mapping (min. 6:30).
To spatially localise results from your imaging study relative to parcellations based on the underlying cortical microstructure, findings from detailed post-mortem characterisation have to be transformed into atlases usable for in vivo imaging. In his video, Simon Eickhoff explains how probabilistic cytoarchitectonic mapping based on large-scale histology can aid with the spatial identification of MRI findings (from min. 7:28). He also goes into detail on how to practically answer the question “Where is my blob?” (from min. 14:00).
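The core of answering “Where is my blob?” with a probabilistic atlas is converting an activation peak’s MNI coordinate into voxel indices via the atlas affine, then reading off the area probability at that location. Below is a minimal sketch with a made-up probability map and affine; a real analysis would load an actual probabilistic atlas (e.g. with a neuroimaging library) rather than the toy array used here.

```python
import numpy as np

# Toy stand-in for a probabilistic cytoarchitectonic map: a 3-D volume whose
# values give the fraction of post-mortem brains in which a voxel belonged
# to a given area. The affine maps voxel indices to MNI coordinates (mm).
affine = np.array([
    [2., 0., 0., -90.],   # 2 mm isotropic voxels, MNI-style origin
    [0., 2., 0., -126.],
    [0., 0., 2., -72.],
    [0., 0., 0., 1.],
])
prob_map = np.zeros((91, 109, 91))
prob_map[45, 60, 40] = 0.8  # pretend 80% of brains had the area at this voxel

def mni_to_voxel(affine, mni_coord):
    """Convert an MNI coordinate (mm) to voxel indices via the inverse affine."""
    homogeneous = np.append(mni_coord, 1.0)
    voxel = np.linalg.inv(affine) @ homogeneous
    return np.round(voxel[:3]).astype(int)

# "Where is my blob?" -- look up the area probability at an activation peak.
peak_mni = [0.0, -6.0, 8.0]
i, j, k = (int(v) for v in mni_to_voxel(affine, peak_mni))
print(f"Voxel ({i}, {j}, {k}): area probability {prob_map[i, j, k]:.0%}")
# prints: Voxel (45, 60, 40): area probability 80%
```

Real atlases report probabilities for many areas at once, so the same lookup is typically repeated across maps to find the most likely area for a peak.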
But how many brain regions are there now and how should I define them?
How many regions there are depends on how you parcellate the brain (indeed experts often can’t agree on how many lobes there are! To limbic, or not to limbic?). To aid the localisation of findings and the definitions of regions of interest, brain atlases have been created. These atlases represent parcellations of a representative template brain, made to help you define your regions. When using these atlases (described in more detail here), we need to understand where they come from and what their limitations are, to decide which is the best atlas for our purpose.
The different available atlases are based on various features of the brain (e.g. see Paula Croxson’s video on parcellation based on histological and microstructural features or Danilo Bzdok’s video on functional parcellations), and as you will find out there is no simple way of defining ‘brain regions’. There is also no reason to restrict oneself to a single feature of interest. Multimodal parcellations are becoming more popular! In her video, Paula Croxson explains that a robust parcellation of the brain has various advantages, such as helping with the localisation of function and with understanding individual variation.
In some contexts, parcellations into individual brain regions may not even be the way to go. For example, higher cognitive functions rely on large-scale networks and a complicated interplay of different regions. Functional connectivity analyses are often used to tap into these networks (also see the On Demand post about that). In his talk, Daniel explains how some local changes in functional connectivity even correspond to cytoarchitectonic boundaries (from min. 7:14).
Finally, of course, there is no cortical hegemony in the brain, even though reading the neuroimaging literature might suggest there is. Hence, all the concepts and approaches discussed for cortical parcellations also apply to subcortical parcellation.
We have discussed the cortex and the grey matter, can you tell me something interesting about white matter anatomy?
Of course! Like everything else in anatomy, speaking about white matter deserves a little time travel to the nineteenth century. This is the era in which all (or most) of the “great tracts” were first described. We say described rather than discovered because there is nothing intrinsic to a tract that requires it to be so!
This century (and parts of the previous) is home to German and French giants such as Johann Christian Reil (did you know that the insula is sometimes referred to as the “Island of Reil”? Fittingly, “insula” is the Latin word for island), who first described the arcuate and uncinate fasciculi, Karl Friedrich Burdach, who identified the inferior longitudinal fasciculus, as well as Joseph and Augusta Dejerine and Heinrich Sachs, who all contributed to the confusing and contorted white matter lexicon that we all currently know and love. A good resource on white matter tracts is this paper.
In his video, Marco Catani gives an excellent introduction to the different types of white matter tracts that you may encounter during your research (from min. 1:30). He explains that ascending and descending projection fibres connect subcortical with cortical regions, that commissural fibres connect the left and right hemispheres, and that association fibres serve feedforward and feedback connections. He also goes into detail about how to evaluate the anatomical plausibility of diffusion MRI tractography, which is currently the only approach that we have to investigate white matter non-invasively (see OHBM How-To Diffusion MRI). If you are specifically interested in the white matter tracts of the occipital lobe, Svenja’s talk guides you through this area of the brain. She goes into detail on projection fibres (from min. 1:30), such as the optic radiation; association fibres (from min. 9:27), such as the inferior longitudinal fasciculus, the inferior fronto-occipital fasciculus and the superior longitudinal fasciculus; intralobar association fibres (from min. 16:15); and transcallosal fibres (from min. 21:00).
Can we assign functions to white matter?
No and yes. Tracts are not primarily thought of as processing regions, so naively labelling tracts with specific functions may be misguided. However, knowing which tracts tend to be associated with certain functions (particularly their disruption secondary to damage, a disconnection) is crucial for any neurosurgeon’s work. In his video, Hugues Duffau explains this beautifully. He describes the fascinating procedure behind intraoperative direct electrical stimulation of white matter (from min. 3:00) and the way that this is used in neurosurgical mapping. Also check out this paper on disconnections and dysfunctions.
Where is anatomy research heading with MRI developments?
While anatomy seems like an old thing, there is still a lot we haven’t agreed on. With developments in high resolution imaging (described in Noam’s video), we have new ways of understanding how the brain is structured. Noam gives an overview of how moving to high field strength allows us to obtain images with higher resolution and to study functional anatomy more sensitively. Focussing on the visual system, Rainer gives examples of specific developments in ultra-high-resolution functional imaging that allow us to study the mesoscopic functional organisation of neurons in vivo.
Rainer Goebel on the latest developments and aims of high resolution MRI.
While many functional imaging findings have been superseded as the resolution and complexity of processing improve over the decades, neuroanatomical findings tend to be less dependent on the specific technique. As Marcel Mesulam said in an interview with the OHBM Blog: “The beauty about neuroanatomy is that it changes over millions of years. So once you discover something, it’s true for a few million years. And I have made some discoveries in neuroanatomy that were published maybe 30 to 35 years ago and are as true today as they were then.”