By Nils Muhlert

Bruce Rosen is a physicist and radiologist who, for the past 30 years, has been instrumental in the introduction and development of functional MRI. Bruce currently serves as the director of the Athinoula A. Martinos Center for Biomedical Imaging at the Massachusetts General Hospital. Here, we found out about the exciting early stages of putting together a team to discover and develop the principles of fMRI, and about helping to found OHBM in the process.
By Ilona Lipp

Every year, the OHBM gives out an Education in NeuroImaging Award, acknowledging significant contributions to education and training in our field. This year's award went to Jeanette Mumford. Jeanette is a well-known fMRI stats guru who spreads her knowledge not only through her published papers but also through YouTube and Facebook, and in her Handbook of Functional MRI Data Analysis. Many of you may also have tried her fMRI power analysis tools. I had the pleasure of meeting Jeanette in Rome and interrogating her about how she became such an enthusiastic educator, and about her views on neuroimaging research.

Ilona Lipp (IL): Your YouTube videos must have contributed to you winning this award. Your channel has over 2600 subscribers - and without any cat videos! What was the motivation to start youtubing about brain stats?
Jeanette Mumford (JM): I had just moved to the University of Wisconsin–Madison and I loved my new job, but one thing I had done at the University of Texas was teach an fMRI analysis course, and I kind of missed it. At the same time that I was missing teaching this class, I was doing yoga on YouTube with 'Yoga with Adriene'. She has all these free classes, and specifically every year she has a 30-day yoga challenge where you do yoga every day. I thought 'Oh, this is really cool for yoga - well, I could do free fMRI statistics classes online'. So I decided to put the two together, take my semester-long course, break it up into digestible little 10-to-20-minute videos, and roll that out at the end of the summer so people starting graduate school could get a head start on fMRI analysis.

IL: When someone asks me how to get started with their fMRI analysis, I say: "Go watch Jeanette's videos!" So tell me, how did you learn all your brain stats back in the day, without having such YouTube tutorials at hand?

JM: Luckily, I had an amazing graduate advisor. That was Tom Nichols, so I obviously had to read all the fMRI statistics papers, which was tough, if you ever read… oh, I'm not gonna say that. But if I got stuck on those, he'd help me out. He's a really good teacher as well; I watched him teach fMRI statistics at the University of Michigan summer course. Most of my fMRI stats knowledge I learned through him, and then more through taking the FSL summer course at the University of Michigan. I also started following the mailing list for FSL, which tends to be the software package I use. A great way to learn for anyone starting out is just to follow the emails and read them every once in a while - you learn a lot of new things.

IL: You trained as a biostatistician. Most people in our field, and here at OHBM, are not statisticians, but psychologists, biologists, physicists, physicians, engineers etc. How do you approach teaching something like stats in a comprehensible way to people coming from such different backgrounds?

JM: I actually made a conscious choice when I was graduating to do a postdoc that wasn't in biostatistics, because our field is all these people from diverse backgrounds working together. I wanted to work with somebody who was either a biomedical engineer or a psychologist, and I ended up with Russ Poldrack at UCLA. Working with him and his lab, I learned how to explain things so that people from other fields understand, and I of course learned a lot from them in turn. Russ' lab is especially good at giving feedback, so you learn quickly - and you learn how to teach.

IL: Have you developed strategies, and do you have any teaching advice?

JM: Yeah, definitely to be okay with making mistakes and being okay with not knowing the answers to all the questions. I remember the first time I taught fMRI statistics at the NITP summer course: some of the questions I hadn't heard before and I hadn't really thought about the answers. So a lot of times I had to just say 'I don't know'. And after saying 'I don't know' a few times, I thought 'oh my God'. But then one of the other speakers came up to me afterwards and said that it's refreshing to hear someone simply say they don't know. As opposed to another strategy I see, which is to answer a related but different question to the one that was asked, and just use a lot of words and confuse the person who asked.
I don't know if it's necessarily done intentionally, but I think it's fine to say 'I don't know, but I'll look into it for you'. I do that a lot even now. Another mistake I've made in teaching is dwelling on an error once I've made one. I don't know whether it's something women do more than men. But I'd think 'Oh, I can't believe I did that', and I'd say that, and say 'sorry', and keep apologizing. After my first year of teaching at Wisconsin - a different class - I was reading the course comments and thought 'Oh, it's that thing I got hung up on'. You're basically feeding them the information to criticize you with: if you're criticizing yourself, then they're going to do the same. So now, if I make a mistake, I correct it - obviously, I'm not going to pretend I didn't make a mistake, because I'm human - so, be human, correct the mistake and move on. 'Oh, I made a mistake. Here's the right answer, we're moving on now'. They're not going to be as devastated as you are about having just said something totally wrong. Also, if you've been teaching something for a while, add something new that you don't know and try to teach it, because it makes things more exciting. And if you're not excited to teach something, don't teach it, because you're going to give a boring lecture. If you're bored, everyone else will be bored. So I always add new material to my courses every year.

IL: This is a controversial question: doing stats properly - if that is even possible - is already complicated, brain stats are even harder to grasp, and even the experts can fail to reach a consensus about issues like false positive rates. Would it be better to have a few specially qualified people who just do analysis and have the time to keep up to date with best practices, rather than educating the masses on how to do their own MRI analysis?

JM: So when they build cars, they have an assembly line, right? They have a bunch of people who each do one specific task, and at the end you get a car. It works really well for cars, but I think it's a horrible idea for scientific papers. If you have one person doing the data collection and a different person doing the data analysis, there's a gap in the communication. Things may have gone wrong during data collection that matter for the analysis, and if somebody else again interprets the analysis results and writes the paper, I just don't think that helps move our science forward. So I actually prefer teaching folks. First of all, most neuroscience people are really smart; their statistics background is pretty strong compared to other graduate students. I think they can pick things up pretty quickly, they can learn enough to do a good job, and then they can do all the analyses themselves and the papers are more likely to be coherent. So, yeah, I'd rather train 10 people to do data analysis than do 10 data analyses myself.

IL: Anders Eklund's PNAS paper three years ago created quite a panic in our field, suggesting a vast number of false positives in neuroimaging research. What is your personal standpoint on this, and what recommendations do you have for minimizing the risk?

JM: You know, the paper - for better or worse - made a pretty big splash. I think a lot of great things came out of it: software is better, people have a better understanding, they paid attention. Unfortunately, some other people used it to try to tear down our field a little bit. A lot of people now ask 'Do I always have to run permutation tests? Because they take so long'.
I mean, for most of what we do, you can set them running on a cluster and it doesn't take that long. I think permutation tests are great and a lot of people I work with use them. But I'm still okay with the parametric approaches, as long as you use a higher cluster-forming threshold, because - that was in the paper as well - Type I error control was better as long as the cluster-forming threshold wasn't low. The default in FSL is now that higher threshold, so that's one of the improvements.

IL: Registered reports are slowly taking off. One concern I have heard several times now is that they are hard to do with brain imaging studies, because they require a thorough power analysis. You have been working on power analysis for fMRI. Do you think there are reliable ways to estimate power for fMRI studies?

JM: Actually, my stance on requiring power analyses has changed a lot over the years. I used to think everybody needs to do them, but then I realized that if people are forced to do them, they're just going to make up numbers - and they're already kind of made up anyway. We're using pilot data that might be really noisy and not correct, and there are other issues with power analyses, of course. But in terms of registered reports, I think they are a little more lenient with that. I don't want people to cheap out on the power analysis, because then others aren't going to take it seriously. And you get a lot of benefit out of doing it anyway, because you think about your hypotheses more clearly, connecting those with what the data are going to look like, and connecting the data with what the models are going to look like. Usually when I'm helping a PI with a grant it can take two or three meetings to make that transition from their idea to the hypothesis test we're actually going to run. And when I read protocols that people have written, you can always tell when they've actually done a decent power analysis, because the methods section is better, even if the power analysis itself is a little hand-wavy. But I think we can go with our guts too: if you have 30 subjects and you're looking at a correlation between BOLD activation and behavioral measures, it's not enough.

IL: So far, we have mostly been talking about brain stats. Apart from the statistical challenges, what do you find most exciting about brain imaging?

JM: I think how the community has changed over the years. I've been coming to this conference almost every year since 2003, and it's so different now: all the changes with the attention that's been given to diversity, all the new special interest groups, and I feel like there's a lot more support for postdocs and graduate students. Improving the community, the attention to open science, people working together more. And I'm trying to get more into machine learning - that'll be my next thing on the YouTube channel. People keep asking, and I hadn't thought of a good collection of papers to cover that would go together, but I think I've got some now.

Many thanks, Jeanette, and congratulations on the award!

By Bin Lu and Niall Duncan
Recent years have seen a number of important themes come to the attention of the global neuroimaging community. The robustness of findings reported in the literature has been questioned as people focus more on reproducibility and other statistical issues. At the same time, more attention is being paid to the variability between individuals, not least as efforts to develop diagnostic tools for different brain diseases advance. Databases of imaging data from very large samples have come to the fore as one way of tackling these issues and have already led to some striking results.

Researchers working in China are leading a number of these large-scale initiatives. In all, several thousand participants have been scanned to acquire various MRI image types. These have been used to produce resources that are openly available to all. Here, we provide a brief overview of some of these resources to bring them to the attention of the community and to let people know what is available to work with now, and what will be coming out in the near future.

Investigating changes in the brain across the lifespan is a difficult endeavour, but one that will help us understand how these changes affect us in health and disease. Large datasets are particularly useful in this context, as they can capture the variability in developmental trajectory seen across the population. Understanding the brain in later life is a particularly prominent question in countries, such as China, that have rapidly aging populations. The Southwest University Adult Lifespan Dataset (SALD) includes data from 494 individuals spanning an age range of 19 to 80 years. Each person has a T1-weighted anatomical image and a resting-state functional scan, along with rich phenotypic information available for download. This represents the largest raw data resource currently available involving participants living in China.

Two other large aging- and development-related initiatives are currently ongoing. The Beijing Aging Brain Rejuvenation Initiative (BABRI) project has been running for over a decade and has so far obtained multimodal imaging data from several thousand people over 50 years of age in the Beijing area. Each person also completes a battery of neuropsychological tests and various psychological questionnaires. The project, run by Beijing Normal University, aims to scan a total of 5000+ people. The Colour Nest Project, run by the Chinese Academy of Sciences Institute of Psychology, is a longitudinal MRI project with participants aged between 6 and 84 years, and aims to scan up to 1200 people three times each between 2016 and 2022.

Scanning the same people repeatedly also makes it possible to ask how reliable MRI measurements are. Testing this sort of measurement reliability is the aim of the Southwest University Longitudinal Imaging Multimodal (SLIM) dataset, a test-retest resource obtained from 241 young participants. Each person was scanned three times over a three-and-a-half-year period, with each session including anatomical, diffusion-weighted, and resting-state fMRI scans. It is also the aim of the global Consortium for Reliability and Reproducibility (CoRR), to which researchers based in China have been contributing and which has been partly led out of the Chinese Academy of Sciences. This dataset includes a large number of anatomical, diffusion-weighted, rs-fMRI, and cerebral blood flow images from centres in China and around the world.

Hosting MRI data can be expensive and complicated due to the large amount of storage space required, especially as one gets to subject counts in the thousands.
The R-fMRI Maps Project, run out of the Institute of Psychology at the Chinese Academy of Sciences, seeks to reduce this problem by hosting the final indices calculated on resting-state data, rather than the data itself. Standardised pipelines are applied to the data by researchers to produce these indices, and the relatively small resulting files can then be easily uploaded, along with other data such as demographics or cognitive test scores. This approach also has the advantage of reducing some of the privacy concerns associated with publicly sharing raw data. One of the sets of indices hosted at the R-fMRI Maps Project is the REST-meta-MDD dataset. This represents one of the largest major depressive disorder (MDD) patient and control resources in the world, with 2428 participants included (1300 patients) from sites all over China. The same processing pipeline was applied to all the participants and the resulting indices were then uploaded to the central server. This resource is likely to be of great use in efforts to understand the variability contained within the MDD diagnosis.

Finally, the standard brain templates used in most neuroimaging analyses are made from one person or from small samples of people of European descent. There may be morphological differences between these templates and many of the people living in China that could affect the results of analyses. To address this problem, the Chinese2020 project obtained anatomical images from 1000 people in China and Hong Kong to create a brain template for the majority population in that region. The template is freely available for use, as is a conversion between it and MNI space.

As can be seen, there are many exciting projects going on in China, generating large amounts of data that are (or will be) available for researchers to investigate. These datasets are targeted at some of the main questions neuroimagers are currently focused on and have the potential to greatly advance our understanding of, amongst other things, brain development, aging, and psychiatric disorders.

Israel is a small country, approximately 400 km long from north to south and 25 km wide at its narrowest point. Despite its small size, Israel is home to six large universities and this year hosted the 1st Israeli Human Brain Mapping conference. This inaugural conference aimed to bring together neuroimaging researchers from each of these universities to share ideas and methods. The conference unites those working on a number of different modalities - as was shown by the diversity of the over 70 talks and posters, with research using MRI, fNIRS, MEG, EEG and brain stimulation, studying populations across the lifespan. The conference covered a wide array of computational tools for analyzing neuroimaging data (deep learning algorithms, multivariate pattern analysis, variability quenching, etc.), unique sequences for structural mapping, and applications of these methods to clinical and healthy populations. Researchers presented studies on the therapeutic effects of TMS - for example, to reduce alcoholism symptoms - as well as of other brain stimulation techniques such as tDCS, multi-unit electrodes, and deep TMS.
As a preview of the OHBM conference in Rome, a special session was dedicated to the developing brain. This session focused on functional MRI studies of reading and screen exposure in children. The researchers discussed the changes in the use of visual and language-related regions as children are exposed to reading (Dr Bitan), the critical developmental changes in the neural circuits supporting memory (Dr Ofen), and the "competition" for these neural networks when children are exposed to screens (Dr Horowitz-Kraus). The session also highlighted the importance of mother-child joint attention for social and emotional development, and the effect this interaction has on babies' neural activity coherence patterns during rest (Dr Frenkel).

These topics were expanded upon in the Neurobehavioral Basis of Development session at OHBM in Rome, chaired by Beatriz Luna. In this session, Nico Dosenbach demonstrated exciting new fMRI analysis techniques that can estimate functional connections within and between neural networks at the single-subject level in children. Using this technique, he was able to reveal, in individual children, several networks previously seen at the group level, including the cingulo-opercular and fronto-parietal networks. His talk was followed by fascinating presentations by Drs Satterthwaite and Luna on combining behavior with structural (diffusion) and neurochemical (spectroscopy) neuroimaging data in relation to mental health and development. This, combined with a large sample of data (the ABCD database, Damien Fair), left the audience with the feeling that this is just the tip of the iceberg.

Other intriguing topics presented at the Israeli conference included several unique methods applied to structural neuroimaging data: from differentiating the six layers of the cortex (cortical layering) and using an MR sequence that provides the caliber of axons in humans, presented by Dr Yaniv Assaf and his students, to quantitative T1 mapping, presented by Dr Mezer. Some of these methods were extended in a discussion of structural plasticity in the session in Rome chaired by Dr Monika Schonauer, which focused on changes in diffusion-weighted measures (Dr Brodt), plasticity of diffusion-weighted measures in relation to motor learning (Drs Maggiore and Johansen-Berg), and the dynamics of the connectome (Dr Assaf). The topics of developmental neuroimaging and innovative structural neuroimaging methods were merged in a fascinating keynote given in Rome by Dr Armin Raznahan, discussing sex-related differences in structural neuroimaging data (anatomical T1 data) in children.

Israel, a leader in applied research and industry development, is also known as the "start-up nation". With several developments related to brain stimulation and machine learning applications to human brain mapping, a strong hub of human brain mappers working across populations, ages, and techniques may cross-fertilize researchers in academia and industry. "These annual meetings, which will continue to take place before the official OHBM conference, offer a unique opportunity for students and researchers with a variety of specialties focusing on the human brain to interact, collaborate and comment on each other's work," says Dr Porat.
For a small geographical area with many stimulating brains, the chance this conference offered to bring those brains together - to make more than the sum of their parts - was most welcome, and we look forward to more exciting developments in human brain mapping in Israel. For more information see https://elsc.huji.ac.il/events/718

By Johannes Algermissen, James Bartlett, Remi Gau, Stephan Heunis, Eduard Klapwijk, Matan Mazor, Mariella Paul, Antonio Schettino, David Mehler

The neuroimaging field has recently seen a substantial surge in new initiatives that aim to make research practices more robust and transparent. At our annual OHBM meetings you will likely have come across the Open Science room. While many aspects fall under the umbrella term Open Science, in this post we focus on research practices that aim to make science more replicable and reproducible. These include non-peer-reviewed study preregistration and peer-reviewed registered reports, which reward researchers' study plans with in-principle acceptance before data collection, but also code- and data-sharing tools such as NeuroVault and OpenNeuro.

As neuroimagers, we work closely with and learn from other disciplines, including psychology. One place where a lot of grassroots development has come to fruition in recent years is the annual meeting of the Society for the Improvement of Psychological Science (SIPS). SIPS breaks with the traditional conference format and focuses on practical work, peer projects and solving concrete problems in groups. The SIPS experience can feel a bit like a playground for research-practice geeks: participants sit in the driver's seat and can pick from a variety of so-called unconferences, where they pitch and debate ideas to reform research practices; hackathons, where everyone can contribute their "bits" and thoughts; and workshops, where you can catch up on learning to use the latest R packages or Bayesian analysis.

In this vibrant setting we embarked, as a group of enthusiastic neuroimagers, on an expedition to intermingle with other open science crowds. We wanted to find out how study preregistration and registered reports could be tailored more towards neuroimaging studies. Prepared with a list of challenges that we had learned about through our informal survey, we were determined to provide more clarity around adequate statistical power in our field, and strove ultimately to come up with a user-friendly template for the preregistration of neuroimaging studies. We completed some initial steps at the hackathon and in its immediate aftermath, with a focus on tools that help researchers preregister their studies. Here, we summarize our group projects and provide you with some (interim) outcomes.

Collection of preregistrations and registered reports in neuroimaging

Preregistration and registered reports are ways to state in advance what your hypothesis is and how you are planning to run and analyze the study. They are meant as tools to prevent researchers' own cognitive biases (e.g., hindsight bias or confirmation bias) from hijacking their investigation. They are not meant to stifle exploration, but to make very explicit which parts of a study were confirmatory and which were exploratory (see http://cos.io/prereg/ and http://cos.io/rr/ for more details). Preregistration protocols have been around for a while for clinical trials, but they have only started to appear on the radar of psychology researchers in the past few years. The uptake seems to have been much slower in research involving (f)MRI, EEG, or MEG.
Apart from the large amount of methodological and analytical detail needed to preregister neuroimaging studies, one reason may be the lack of examples of what a preregistration in those fields could look like. The M/EEG and fMRI preregistrations and registered reports that are scattered across the internet are also hard to find. Therefore, during the hackathon, we started a list of all the openly available neuroimaging preregistrations and registered reports. This resulted in a spreadsheet, accompanied by keywords to make it easier to select the entries you are interested in. This document is still a work in progress and we welcome contributions to this potentially ever-growing list, especially if we missed one of your own preregistrations! Simply use this form to add an entry. We hope that such an easily accessible list of preregistrations will inspire many more neuroscientists to preregister their studies and will help to establish best practices.

BrainPower: resources for power analysis in neuroimaging

The planning phase of every empirical neuroimaging research project should consider sample size and statistical power: How big is the effect that I am interested in? How likely am I to observe it given the resources (number of participants, number of trials) at my disposal? Power analysis should provide clarity on these questions. It might appear relatively easy for simple designs with one-dimensional behavioural variables, especially with the help of programs such as G*Power and standard effect size measures such as Cohen's d. However, the high-dimensional nature of neuroimaging data and designs (processing three-dimensional data over time with mass-univariate and multivariate approaches) requires additional steps, e.g., cluster correction, to prevent false-positive inference. And our understanding of "effects" based on these data and methods is not necessarily as intuitive: how strong should the level of activation be, or how large should the cluster be? One important approach to power analysis is simulation: when taking resting-state data and adding an activation of a certain size and extent, can I reliably find the effect? This approach has been facilitated by advances in computational power and new software in recent years, allowing researchers to have full control over the ground truth.
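To give a flavour of the logic, here is a deliberately stripped-down, simulation-based power estimate for a single region of interest - a sketch of the general idea, not one of the tools from our collection (one-sample t-test, Gaussian between-subject noise, no mass-univariate correction):

```python
import numpy as np
from scipy import stats

def simulated_power(effect_size, n_subjects, n_sims=10_000, alpha=0.05, seed=0):
    """Monte Carlo power: fraction of simulated studies with p < alpha.

    effect_size is Cohen's d, i.e. the mean contrast estimate in units of
    its between-subject standard deviation (fixed to 1 here).
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        # One simulated study: draw one ROI contrast estimate per subject.
        sample = rng.normal(loc=effect_size, scale=1.0, size=n_subjects)
        _, p = stats.ttest_1samp(sample, popmean=0.0)
        hits += int(p < alpha)
    return hits / n_sims

for n in (15, 30, 60):
    print(n, simulated_power(effect_size=0.5, n_subjects=n))
```

Whole-brain power analyses follow the same recipe, except that the "study" simulated in the inner loop is a full mass-univariate analysis, including cluster correction - which is exactly why dedicated tools and realistic (e.g., resting-state-based) noise are needed.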
An alternative approach to estimating effect sizes is to rely on past literature (which may provide biased estimates) or to re-use existing, or even open, datasets. For both approaches, experts have created primers, tools, and software. Unfortunately, their use may not always seem intuitive. Further, researchers might have a hard time recognizing which tool best suits their specific needs. We thus collated a variety of such tools, compared the different approaches, and described their use ("how to") for empirical researchers. Overall, we gathered a collection of these resources. The list is openly available and still growing in content. The immediate future goal is to expand the resources with tutorials and worked examples of conducting power analyses on real and simulated fMRI data. We then plan to formalise these resources into a website. We invite and welcome any and all contributions from the community!

A new way to calibrate the smallest effect size of interest (SESOI) for neuroimaging, using an fMRI example

Adequate sample size planning is crucial to making good use of resources and drawing valid inferences from imaging data. One-size-fits-all recommended sample sizes are slowly being replaced by power analysis procedures that are based on effect sizes that seem reasonable. In the more common approach, effect sizes are estimated from available data or previous studies. However, this approach does not guarantee the ability to detect a meaningful effect size. An alternative approach is to power studies sufficiently to detect the smallest effect size of interest (SESOI), thereby increasing the chance of finding an effect that is meaningful for the research question (e.g., for practical or theoretical reasons). Also, in the event of a non-significant (i.e., "null") finding, this approach increases the chance of rejecting negligible effect sizes, rendering "null findings" more informative. Hence, while this approach is more rigorous, it often requires larger samples, especially when studying higher-order cognitive functions, where group effect sizes are known to be small. On the other hand, running too many participants also comes at a cost: scanner time is an expensive resource of limited availability. Identifying a procedure that can balance this trade-off would thus be desirable and could help researchers implement a sampling plan that is based on a SESOI.

We thus started with the following thought experiment: in an attempt to optimize sample sizes for specific experiments and statistical tests, one can capitalize on the fact that neuroimaging data are rich and afford numerous statistical tests that are statistically orthogonal. It is safe to assume that some sources of noise are shared between contrasts within a participant (for example, a participant who moves a lot in the scanner will have noisier parameter estimates), and that other sources are shared between participants within the same lab (for example, the quality of the scanner). Based on these two points, we envision a dynamic procedure for sample size specification that is sensitive to the noise in the specific sample of participants. Implementing such a procedure seems fairly simple: data acquisition stops exactly when a group-level contrast that is orthogonal to the ones of interest reaches a pre-specified significance level in a pre-specified region of interest.
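In pseudo-Python, that procedure might look something like the sketch below - our own illustrative reading of the thought experiment, not a finished method. Here `next_orthogonal_beta` is a hypothetical stand-in for scanning one more participant and extracting their parameter estimate for the orthogonal calibration contrast in the pre-specified ROI:

```python
from scipy import stats

def calibrated_sample_size(next_orthogonal_beta, alpha=0.001,
                           n_min=10, n_max=100):
    """Add participants until the ORTHOGONAL calibration contrast
    (not the contrast of interest) is significant in the chosen ROI."""
    betas = []
    for n in range(1, n_max + 1):
        betas.append(next_orthogonal_beta())  # scan one more participant
        if n < n_min:
            continue  # don't test on tiny samples
        _, p = stats.ttest_1samp(betas, popmean=0.0)
        if p < alpha:
            return n  # noise is low enough: stop acquisition here
    return n_max      # give up at the pre-specified maximum
```

Because the stopping rule only ever looks at a contrast that is orthogonal to the hypotheses of interest, repeatedly peeking at it should not inflate the Type I error of the tests you actually care about.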
A preregistration template for EEG

Analyzing neuroimaging data involves a myriad of decisions that researchers often consider only after data collection. When preregistering a neuroimaging study, thinking of every detail of the analysis can be challenging, especially because the currently available preregistration templates are generic and do not ask for the technical details and specifications that matter for EEG experiments. For example, preprocessing EEG data involves many decisions - including resampling, filtering, and artefact rejection - that can have a profound impact on the results. As part of the hackathon, we started to create a preregistration template for EEG studies that highlights such decisions during preprocessing and statistical analysis. For instance, the user is reminded to describe the electrode type and brand, data import, resampling, filtering, epoching, artefact detection/rejection/correction procedures, baseline correction, and averaging; a typical chain of such steps is sketched below. The current version of the template is a text document based on the standard OSF preregistration form, to which we added specific questions about preprocessing and analysis steps for event-related potentials (ERPs). This EEG preregistration template is an ongoing project. If you have worked with EEG data or preregistrations before, your input would be highly appreciated! Ultimately, we aim to include the finished template on the OSF list of preregistration forms and to extend the template to other analyses of EEG data (e.g., time-frequency analyses).
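To see why the template asks for so much detail, here is a minimal ERP preprocessing chain in MNE-Python. This is a hedged sketch: the file name, event coding and every threshold below are invented, and each one is exactly the kind of choice the template asks you to pin down in advance.

```python
import mne

# Hypothetical BrainVision recording; any format readable by MNE works.
raw = mne.io.read_raw("sub-01_task-oddball_eeg.vhdr", preload=True)

raw.filter(l_freq=0.1, h_freq=40.0)  # band-pass filter: pre-specify cutoffs
raw.resample(250)                    # resampling rate

events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(
    raw, events, event_id=event_id,
    tmin=-0.2, tmax=0.8,             # epoch window around each event
    baseline=(None, 0),              # baseline-correction interval
    reject=dict(eeg=100e-6),         # amplitude-based artefact rejection
    preload=True,
)

evoked = epochs.average()            # the ERP: average across epochs
```

Change any one of these numbers and the resulting ERP can change, which is precisely why each of them belongs in the preregistration.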
To wrap up: SIPS certainly provides a great opportunity for neuroimagers to intermingle with others and contribute to projects related to scientific practices in an open, inclusive, and dynamic environment. Anyone can pitch a session ad hoc for the next day, and the outcome of each project is openly documented on the OSF. This ensures that projects like ours on preregistration and neuroimaging can develop and live a happy after-conference life.

By Claude Bajada & Ilona Lipp
Infographics: Roselyne Chauvin
Expert editors: Tommy Boshkovski, Nikola Stikov
Newbie editors: Alina Serbanescu, Adriana Oliveira, Andreia Meseiro

Introduction

For the budding cerebronaut, the term diffusion MRI evokes images of fancy red, green or blue fibres coursing across the brain; pretty enough to find their way onto a music album cover or to serve as the standard stock image for anyone giving a public lecture about the brain. While the pictures are appealing, the terminology associated with diffusion MRI is often confusing and hard to disentangle. Any PhD student about to embark on a diffusion MRI project has had to grapple with a sea of acronyms such as DTI, HARDI, FA, RD, ADC, CHARMED and many more! If you have ever been frustrated by these terms and how they relate to each other, this "how-to" post is for you.

What is diffusion MRI?

Keeping things simple, diffusion MRI refers to the collection of magnetic resonance imaging data that is sensitive to the direction of water diffusing in a tissue. Let us imagine that the brain were the fishbowl that Christian Beaulieu shows in his video (see below), devoid of any tissue. Any single water molecule in this "fishbowl brain" will, depending on the temperature of the water in the bowl, vibrate and move in a seemingly random fashion, colliding with neighboring molecules. This motion is called Brownian motion, and it will only be restricted at the walls of the bowl. However, as Christian shows so beautifully in his video (min. 2:20), the brain is not like a fishbowl (not even remotely - luckily!), but full of neurons and other cellular structures that act as potential barriers to water diffusion; hence water molecules can be used as a microscopic probe. If we can measure the average rate of displacement of water in all directions in every single brain voxel (a volume element, in the same way that a pixel is a picture element), then we have a measured profile of water diffusion in each voxel of the brain. To measure "displacement of water in all directions" we need to take many diffusion-weighted images. Diffusion MR data consist of these diffusion-weighted images plus non-diffusion-weighted images. A single diffusion-weighted image can be thought of as an MRI volume that is sensitive to the diffusion of water along one single direction. If you are wondering why we would want to do this … stick around!

Ok, now we know the basics, but how do we do it in practice?

In her video, Jennifer Campbell introduces the concepts of hindered and restricted diffusion in biological tissue and how they relate to the most basic diffusion MRI-based measure, the apparent diffusion coefficient (min. 2:50). She then explains how the generation of diffusion-weighted contrast in an MRI machine requires the application of a pair of equal and effectively opposite magnetic field gradients in a particular direction. These gradients disrupt the phase of proton spins in water molecules and, if there is random diffusion along the gradient direction, lead to signal loss compared to when no such gradient pair is present (from min. 4:25). In his video, Zoltan explains how this random movement differs from bulk flow (from min. 7:20) and how the diffusion-induced signal loss can be used to estimate diffusion constants (from min. 11:00). How strong the signal loss is depends on the diffusion weighting applied in your sequence, which is often parameterized by the so-called b-value. You will learn what the b-value means and what the famous signal equation looks like in Zoltan's video (from min. 34:00). Importantly, as Els Fieremans explains in her video (from min. 3:00), diffusion weighting can be manipulated either by changing the strength of the diffusion gradients or by altering the diffusion times, which is important for microstructural imaging (from min. 24:15; also see Christian Beaulieu's video from min. 23:45), a concept that we will discuss later. You would also always acquire some non-diffusion-weighted volumes (often called b0 images). A rule of thumb is that these should be about 10% of your volumes. If you spread them out across your acquisition, they can later help you correct for potential signal intensity drifts across your scan (as explained in Alexander Leemans' video from min. 14:25). Jennifer explains that if you acquire diffusion weighting along many directions in a voxel, you get a diffusion-weighted signal profile, which depends on the underlying fibre orientations (around min. 13:50). The more directions we have, the higher the angular resolution at which we sample (around the centre of the sphere). In fact, scans with over 60(ish) directions are called High Angular Resolution Diffusion Imaging (HARDI). Ideally, the directions you sample should be spaced out evenly around the sphere and optimized using an electrostatic algorithm (this and other principles of diffusion acquisition are explained here). Luckily, most of the time you do not have to optimize these yourself; there are various standard gradient sets around that you can use in your HARDI acquisition. The diffusion MRI vocabulary is large, and this also holds true for diffusion sequences. In her video, Jennifer clarifies often-used terms related to the most frequently used diffusion sequence, Stejskal-Tanner, such as little and big delta, effective diffusion time, and all those funny letters b, k, n, and q (from min. 12:00). She also introduces other diffusion sequences designed to allow longer or shorter effective diffusion times, to reduce artifacts, or to facilitate the quantification of compartment-specific or microscopic anisotropy (from min. 22:00).
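For reference, the "famous signal equation" Zoltan derives has a compact standard form for the idealised Stejskal-Tanner experiment with rectangular gradient pulses:

```latex
S = S_0 \, e^{-bD},
\qquad
b = \gamma^2 G^2 \delta^2 \left( \Delta - \tfrac{\delta}{3} \right)
```

Here S and S0 are the signals with and without diffusion weighting, D is the (apparent) diffusion coefficient along the gradient direction, γ is the proton gyromagnetic ratio, G and δ are the amplitude and duration of each diffusion gradient ("little delta"), and Δ is the time between the onsets of the two gradients ("big delta"). Stronger, longer or more widely spaced gradients all raise the b-value, and hence the diffusion weighting.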
With all the options available, how do you choose how to acquire your data?

Jennifer gives tips on how to design your protocol, depending on the aim of your experiment (from min. 16:00): this could be simply estimating the apparent diffusion coefficient (ADC), reconstructing the tensor, inferring multiple fibre orientations, or applying specific microstructural models such as Neurite Orientation Dispersion and Density Imaging (NODDI), the Composite Hindered and Restricted Model of Diffusion (CHARMED) or AxCaliber. Zoltan gives some additional insight into optimising your acquisition, e.g. through triggering (from min. 40:30), and shows how to use a water bottle to test how linear your gradients are (from min. 19:45).

Ok, now we have some data, what's next?

The first step that is always recommended with any kind of imaging data is to look at your images in different planes (axial, sagittal, coronal). In his video, Alex shows us pretty much everything that could be wrong with diffusion data, before and after processing, including movement artifacts (from min. 2:00), distortions induced by the eddy currents that arise from fast gradient switching in diffusion sequences (from min. 19:15), distortions in the phase-encoding direction (from min. 19:50), vibration artifacts (from min. 4:50), plausible-looking signals that are actually caused by Gibbs ringing (from min. 10:30), calculations done with the wrong diffusion gradient directions (from min. 3:15), unsuitable models having been applied (from min. 7:10), and "artifacts" that are not artifacts at all but genuinely unusual brains (from min. 9:10). Luckily, most of these artifacts can be corrected with the vast array of processing techniques that have been developed. Alex explains how to correct for signal intensity drifts (from min. 14:25) and why you famously have to rotate your b-matrix when doing subject motion correction (from min. 17:20). He illustrates the effects of eddy current correction (from min. 19:15) and corrections for EPI deformations (from min. 19:50), and encourages the use of robust diffusion model estimation (from min. 22:15). If you have checked all these things and your data or results still look odd, don't worry: there are at least 25 pitfalls in the analysis of diffusion MRI, and there is a variety of software packages that allow you to implement all sorts of algorithms and tricks.

Everyone keeps saying "diffusion tensor imaging" - what on earth is a tensor anyway?

This is a great question, and one that diffusion MRI newbies (who do not come from a maths or physics background) will almost certainly ask. It is also one of those questions that seemingly has one of the most unsatisfyingly unintuitive answers ever to be found. Google's dictionary defines a tensor as "a mathematical object analogous to but more general than a vector, represented by an array of components that are functions of the coordinates of a space." But what does that have to do with the spheres and ellipsoids that we often see in diffusion tensor imaging? It may be more helpful to focus less on the idea of a tensor and more on the idea of a local diffusion model. We want a model that adequately describes the diffusion of water (or, in some models, the fibre structure) in a given voxel and can explain the image intensities in our diffusion-weighted images. One of the easiest ways of doing this is to assume that the diffusion of water can be described by a Gaussian distribution of displacements.
At a particular diffusion time, that Gaussian can be pictured as a sphere (if diffusion is similar in all directions) or an ellipsoid (if diffusion is preferential along one direction). In any case, the idea we have just described is called a local model - with diffusion tensor imaging, our local model is the ellipsoid. It is a model because it is a simplified way of describing the data that can be used (as we will see later) to extend our knowledge. It is local because the model is fit in every voxel.

I still don't know where the tensor comes into play!

The tensor is a mathematical formulation of this simple local model. If you are super keen, keep on reading; otherwise, swiftly move to the next section. Since we are thinking in terms of spheres and ellipsoids, imagine that you can populate every voxel in an MRI volume with a unit sphere. This is a start, but it is not the best model of diffusion in every voxel. If we think of the corpus callosum, we would have to apply tension to the ball to stretch it into an ellipsoid. You can think of the diffusion tensor as the transformation matrix, estimated from the data, that warps each sphere into an ellipsoid describing the direction and magnitude of diffusion. So the tensor is not the ellipsoid itself; it is the set of instructions needed to warp a sphere into an ellipsoid. How to turn this intuition into a usable formulation is explained in Zoltan's video. He explains how Fick's law of diffusion relates to the diffusion constant we want to measure along one direction (from min. 12:50) and how to extend this concept to the 3D scenario (from min. 17:30). He nicely illustrates the tensor and explains the meaning of the on- and off-diagonal values (from min. 18:30).
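Written out (in standard notation, not specific to Zoltan's slides), the diffusion tensor is just a symmetric 3×3 matrix that slots into the single-direction signal equation from above:

```latex
D =
\begin{pmatrix}
D_{xx} & D_{xy} & D_{xz} \\
D_{xy} & D_{yy} & D_{yz} \\
D_{xz} & D_{yz} & D_{zz}
\end{pmatrix},
\qquad
S = S_0 \, e^{-b \, \mathbf{g}^{\top} D \, \mathbf{g}}
```

Symmetry leaves six free parameters, which is why you need diffusion weighting along at least six non-collinear directions (plus a b0 image) to estimate the tensor. Its eigenvectors give the axes of the ellipsoid, its eigenvalues λ1 ≥ λ2 ≥ λ3 give the diffusivities along those axes, and g is the unit gradient direction.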
Is DTI just another way of saying diffusion MRI?

No. In fact, the terms diffusion MRI, diffusion-weighted MRI, and diffusion tensor imaging (DTI) all refer to subtly different things (also see Christian Beaulieu's video from min. 16:45). Diffusion MRI is a very general phrase that can refer to any MRI sensitive to diffusion processes. Diffusion-weighted MRI refers to MRI sequences that are specifically designed to be sensitive to diffusion. Diffusion tensor imaging aims specifically to reconstruct the "tensor local model".

Do we always use the tensor as a local model?

No. The diffusion ellipsoid (or tensor) is one of the simplest models we can use when doing diffusion MRI, but it is definitely not the only one. In his video, Flavio Dell'Acqua explains that the limitation of the ellipsoid "model" is that it can only adequately capture the diffusion profile that results if our voxel is occupied by a single fibre population (when people say fibre population, they mean fibres travelling the same way, usually because they belong to the same white matter "tract"). However, an estimated 70-90% of voxels contain more than one fibre population (from min. 3:25). Therefore, more advanced approaches try to capture the orientation distribution of fibres in each voxel, either by looking at the diffusion orientation distribution function (ODF) or by applying spherical deconvolution to get down to the fibre orientation distribution (Flavio walks us through the various ways to do that, from min. 10:00). In fact, almost every diffusion software package implements its own local model. For example, the popular software package FSL, besides having the option to fit a tensor, can model each voxel in terms of "ball and sticks", where the "free water" in the voxel is modelled by an isotropic sphere and every "white matter fibre" is modelled by a "stick" - a super-skinny ellipsoid. This is why such a model is sometimes referred to as a "multi-tensor" model. Other software packages, such as MRtrix and StarTrack, use spherical deconvolution (explained in this review paper). The type and complexity of the local model you choose depends on your application. We will later describe the two main applications of diffusion MRI: tractography and microstructural imaging.

What do I do after I have successfully estimated my local model?

Now that we have a local model in every voxel, we reach a fork in our pipeline. So far, the explanation has been linear; now there are multiple routes to go down, and some of them interact further down the line, so writing about them becomes tricky! The two main prongs of the fork are "tracing" diffusion within white matter through tractography, to reconstruct anatomical pathways, and extracting scalar maps from the local model, which can potentially be given biological meaning related to brain microstructure.

Let us first go down the tractography route: what can diffusion MRI teach us about white matter anatomy?

Diffusion MRI is most frequently considered in the context of white matter (WM). This is because WM consists of bundled fibre tracts that strongly shape the anisotropy and directionality of diffusion. The measured diffusion profiles allow us to reconstruct and delineate WM pathways in individual brains. This lets us investigate an individual's connective anatomy, rather than having to rely on expert- or group-based delineations of white matter pathways. For a nice review of the applications and challenges of tractography, see here. To do tractography - the process of "tracking" the WM pathways - we need voxel-wise estimates of fibre orientation. How to get these is explained in Flavio's video (he also illustrates the limitations of tensor-based tracking at min. 24:30). Based on the peaks in the estimated diffusion or fibre orientation distributions, so-called streamlines are generated. In his video, Maxime Descoteaux explains how this is achieved and the various parameters that need to be set, such as how streamlines are seeded and how large each step size is (from min. 3:00); a bare-bones version of this tracking loop is sketched below.
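To make the seeding and step-size vocabulary concrete, here is a deliberately minimal deterministic tracker - our own toy sketch, where `principal_dir` is a hypothetical stand-in for whatever your local model provides, and real packages add probabilistic sampling, bidirectional seeding, interpolation and anatomical constraints:

```python
import numpy as np

def track(seed, principal_dir, step=0.5, max_steps=1000, max_angle=60.0):
    """Follow the main fibre direction from a seed point (Euler steps).

    principal_dir(point) must return the unit vector of the strongest
    fibre orientation at that location, or None outside the tracking mask.
    """
    points = [np.asarray(seed, dtype=float)]
    previous = None
    for _ in range(max_steps):
        d = principal_dir(points[-1])
        if d is None:
            break                      # left the mask: terminate streamline
        if previous is not None:
            if np.dot(d, previous) < 0:
                d = -d                 # fibre orientations have no sign
            angle = np.degrees(np.arccos(np.clip(np.dot(d, previous), -1, 1)))
            if angle > max_angle:
                break                  # curvature stopping criterion
        previous = d
        points.append(points[-1] + step * d)  # one step along the fibre
    return np.array(points)
```

Every choice in this loop - seed placement, step size, curvature threshold, mask - is one of the parameters Maxime discusses, and each can change the resulting tractogram.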
An important point is the ability to resolve crossing fibres while not being tricked by kissing fibres (Flavio, min. 4:50), and to establish constraints that ensure the results are reliable (at min. 22:50, Flavio gives an example of what happens when we don't have enough constraints). The set of all streamlines is often called a "tractogram". Maxime introduces the methods of deterministic and probabilistic tractography (from min. 6:15) and shows how different software packages differ with regard to handling uncertainty (from min. 9:00). He also introduces the recent concepts of anatomically constrained tractography (from min. 13:00) and microstructure-informed tractography (from min. 25:00), which aim to make results more anatomically accurate. Tractography results strongly depend on the methods used (and there are a lot of options - see this paper for a nice overview), something that was recently demonstrated in a large-scale tractography challenge in which a large number of groups applied their methods to the same dataset. In this and a previous challenge, it turned out that a major problem in tractography is the large number of false-positive tracts produced by currently available algorithms.

Understanding WM anatomy helps with performing your own tractography and validating the results. In his video, Marco Catani explains the different types of fibres in the brain (from min. 1:30) and provides some basic rules that help in evaluating tractography results, such as that projection fibres do not cross the midline and that association fibres do not enter core WM (from min. 8:30). He also emphasizes that the probability in probabilistic tractography, which relates to the certainty of the local fibre orientation estimates used to generate a streamline, is not an indication of anatomical certainty (from min. 12:20), and he discusses the difficulties of validating fibre tracts post-mortem (from min. 13:30).

Since tractography allows us to study how cortical regions are connected through WM pathways, diffusion imaging has frequently been used to try to parcellate the cortex. In his video, Alfred Anwander points out that some parcellations, such as the Brodmann parcellation or gyrification, are not useful for predicting the connectivity of cortical regions (from min. 4:00). He shows how probabilistic tractography seeded in individual voxels can be used to find parcels with similar "tractograms" (from min. 6:00). He also discusses the challenge of finding a parcel size that balances interindividual variability against parcel specificity (from min. 3:00), and the problem that in some areas there is a smooth rather than a sharp change in connectivity profile (from min. 11:00). In his video, Michel Thiebaut de Schotten explains how diffusion-based parcellation combines connectivity matrices with clustering methods, and he describes the methodological challenges of tractography-based parcellation - one being that the greater the distance between two points, the fewer streamlines can be found (from min. 12:00).

We will now backtrack and discuss what diffusion MRI can teach us about microstructure

In her video, Els explains how diffusion MRI can be used as an in vivo microscope for probing the brain's microstructure. This means that, even though we are looking at large voxels, there is information in the diffusion data that tells us about the tissue composition of those voxels. More complex local models (than, e.g., the tensor) can be used to obtain more biologically meaningful information. Els emphasizes the difference between mathematical representations of the diffusion signal and biophysical models (from min. 10:00). The most widely used mathematical representation of the diffusion signal (loss) is the tensor, as already discussed above. There are various parameters we can extract from it that tell us something about the microstructure, such as mean diffusivity (MD), axial diffusivity (AD), radial diffusivity (RD) and fractional anisotropy (FA; Els' video from min. 4:40). FA is probably the most widely used microstructural parameter in application studies and quantifies the extent of diffusion anisotropy in a voxel. What this means in terms of biology is discussed by Christian in his video; he concludes that anisotropy is mostly driven by the axonal membranes, with myelin having a comparatively smaller impact. FA is known to be sensitive to various tissue properties, so interpreting individual differences or changes over time can be challenging.
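For concreteness, all four scalars follow directly from the tensor's eigenvalues λ1 ≥ λ2 ≥ λ3. These are the standard definitions, sketched here in plain numpy rather than in the idiom of any particular package:

```python
import numpy as np

def tensor_scalars(l1, l2, l3):
    """MD, AD, RD and FA from the tensor eigenvalues (descending order)."""
    md = (l1 + l2 + l3) / 3.0   # mean diffusivity: average over all axes
    ad = l1                     # axial diffusivity: along the main fibre axis
    rd = (l2 + l3) / 2.0        # radial diffusivity: perpendicular to it
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))
    return md, ad, rd, fa

# A strongly anisotropic voxel (diffusivities in mm^2/s): FA comes out ~0.87.
print(tensor_scalars(1.7e-3, 0.2e-3, 0.2e-3))
```

FA runs from 0 (perfectly isotropic, a sphere) to 1 (diffusion along a single axis), which is why a single FA value can reflect many different underlying tissue changes.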
As diffusion is influenced by many things at once, we can apply biophysical models to gain more specificity in our measures (see this review paper on diffusion-based microstructural models). Commonly used examples are NODDI (neurite orientation dispersion and density imaging), CHARMED (the composite hindered and restricted model of diffusion), and many, many more. In these models, the geometry of the brain tissue is simplified, for example by assuming a limited number of compartments that differ in their diffusion behaviour. The most common distinction is between the intra-axonal compartment - modelled as impermeable cylinders or sticks - and the extra-axonal space. In her video, Els (from min. 11:50) explains the different parameters that can be derived from such models, such as the proportion of tissue belonging to intra-axonal versus extra-axonal space. To estimate such parameters, biophysical models mathematically formulate how each aspect of the microstructure affects the diffusion signal, and then work out the microstructure most likely to have given rise to the observed signal. Be aware that the more complicated the model and the more parameters are estimated, the more data you need and the less robust your fit will be. Els explains why this is challenging and illustrates how such nonlinear models can be ill-posed (from min. 14:45). For this reason, constraints need to be imposed; in the famous NODDI model, for example, the compartment diffusivities are fixed a priori (from min. 16:50). When using biophysical models, it is always a good idea to understand what the constraints and assumptions are, so that you can decide, for your specific application, whether they are reasonable or problematic (see this paper for an example).
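To give a flavour of what such a model looks like, here is arguably the simplest two-compartment example - essentially the ball-and-stick idea mentioned earlier, written in the notation of the signal equation above (a textbook form, not the exact parameterisation of any one package):

```latex
\frac{S(b,\mathbf{g})}{S_0}
= \underbrace{(1 - f)\, e^{-bd}}_{\text{extra-axonal ``ball''}}
+ \underbrace{f\, e^{-bd\,(\mathbf{g}^{\top}\mathbf{v})^2}}_{\text{intra-axonal ``stick''}}
```

Fitting the volume fraction f, the diffusivity d and the fibre orientation v in every voxel then yields microstructural parameter maps, with f playing the role of an intra-axonal (stick) fraction.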
Now that I have my microstructural parameter maps, what do I do?

Ultimately, we want to apply all these methods to learn something about the brain. Often, we want to compare different groups of people, find brain-behaviour correlations, or understand brain plasticity. In his video, Anton Beer walks you through the various methods for group analysis of microstructural maps, including region-of-interest (ROI)-based (from min. 6:25) and whole-brain approaches (from min. 11:00). One frequently used statistical method for diffusion data is tract-based spatial statistics (TBSS), which is not actually based on tractography (as the name might imply). How TBSS uses FA maps to obtain a skeleton is explained by Anton (from min. 15:35). As he and Alex (from min. 30:45) point out, one limitation to keep in mind is that, because of alignment errors and because the analysis is limited to local maxima of FA, there is no guarantee that anatomical structures overlap across individuals (for other methodological considerations related to TBSS, check out this paper). Anton also explains how your tractography results can be useful for defining ROIs (from min. 8:20) and for surface-based analysis (from min. 19:15). An example of how a set of complementary tract-specific microstructural measures can be used to study brain development is given in Jason Yeatman's video. Recent developments even allow fibre-population-specific metrics (see Flavio's video from min. 27:30).

What do I do if I am completely confused or super motivated to do my own diffusion MRI study?

Diffusion MRI is a large, complicated topic and there are a lot of things to get your head around. Luckily, there are various resources beyond this blog post that can help you with that. If you feel like watching more educational videos, check out our friends at ISMRM, with their diffusion without equations post and their interactive course on diffusion, which also includes quizzes to test yourself. If you are interested in the history of diffusion MRI, from its earliest days to the latest developments, check out the presentations from the A Spin Thro' the History of Restricted Diffusion MR workshop. If you prefer reading, a number of textbooks cover a wide range of aspects, such as Diffusion MRI: From Quantitative Measurement to In-vivo Neuroanatomy; Diffusion MRI: Theory, Methods and Applications; and Introduction to Diffusion Tensor Imaging and Higher Order Models. Last but not least, if you have any questions that came up while reading this post - questions about things you have never really understood, or that you have always wanted to ask the "experts" in the field - please send them to us (via email or tweet). We are aiming to write a follow-up post to give you the answers.

By Cyril Pernet, Dora Hermes, Chris Holdgraf
We are happy to announce that the Brain Imaging Data Structure (BIDS) now supports all of the major electrophysiology modalities in human neuroscience. This means that EEG, MEG, and iEEG researchers can all store their data in a BIDS-compliant manner, making these datasets more shareable, understandable, and re-usable. This post describes the BIDS standard in general and the community around it, as well as the recent changes that have brought support for electrophysiology.

The Brain Imaging Data Structure: BIDS

BIDS is a standard that specifies how to organize data into folders, how to name files, and how to document metadata (i.e. information about the data). It does this using community standards and dictionaries, enabling efficient communication and collaboration between data users. Details about BIDS can be found at http://bids.neuroimaging.io/.
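As a flavour of what this looks like on disk, here is a minimal, hypothetical EEG dataset laid out the BIDS way: folder and file names encode subject, modality and task, while JSON and tab-separated (TSV) sidecar files carry the metadata.

```
my_dataset/
├── dataset_description.json          # name, authors, BIDS version
├── participants.tsv                  # one row of demographics per subject
└── sub-01/
    └── eeg/
        ├── sub-01_task-rest_eeg.edf
        ├── sub-01_task-rest_eeg.json      # acquisition metadata
        ├── sub-01_task-rest_channels.tsv  # per-channel names, types, units
        └── sub-01_task-rest_events.tsv    # event onsets, durations, types
```

Because the layout and names are predictable, tools can discover and validate the data automatically - which is what makes BIDS datasets easy to share and re-use.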
By Ekaterina Dobryakova

Brain mapping techniques are a key tool for understanding the pathophysiology underlying neurological and psychiatric conditions. Here, we interviewed leading clinically-focussed neuroimagers to find out about the state of the art in applications of MRI techniques in people with multiple sclerosis (MS). Actress Selma Blair recently discussed her personal and very emotional struggle with MS with the world, shining a spotlight on this disorder. According to recent estimates, up to 1 million adults in the United States alone have a diagnosis of MS, a neurodegenerative inflammatory disease that diffusely affects the central nervous system.
While MS cannot be diagnosed using neuroimaging alone, magnetic resonance imaging (MRI) tools are widely used by clinicians who treat individuals with MS and by researchers who study aspects of MS progression, symptoms, and rehabilitation. The MRI approaches used to study MS range from the more ‘standard’, long-standing techniques to new ones that are still under development. Neuroimaging research contributes a great deal to understanding various aspects of MS, from cognitive impairment and brain plasticity to changes not only in the brain but also in the spinal cord.

![]() By Ning-Xuan Chen

The 3rd Annual Event of Chinese Young Scholars for OHBM was held on June 11th, during the 2019 OHBM Annual Meeting in Rome, building on the success of the two previous meetings in Vancouver and Singapore. The theme for this year’s event was “China Roots, Global Impact!” Around 100 young scholars from universities around the world participated. ![]() The event aimed to bring together Chinese researchers with diverse backgrounds from the OHBM community to communicate, discuss, and collaborate on cutting-edge neuroscience research topics and methods. This year, Professor Chao-Gan Yan introduced the event and set out its focus: enhancing collaboration between Chinese and international imaging scholars, to the benefit of the global brain imaging community. ![]() Professor Russell A. Poldrack from Stanford University gave the first talk, entitled “How can Chinese scientists contribute to open, transparent and reproducible science?”. He noted that China is becoming a neuroimaging powerhouse and that he already collaborates extensively with Chinese researchers, but argued that studies in China should pay more attention to reproducibility. He proposed two ways to improve it: pre-registration and the use of reproducible analysis tools (e.g. BIDS apps). Finally, he made pertinent suggestions to Chinese researchers: “Chinese science has great potential but the incentives currently are misaligned with reproducible research practices; the only way to fix this is to lead by example and show that one can succeed in science, while working towards best practices.” ![]() The second talk was from Professor Simon B. Eickhoff of the Institute for Systems Neuroscience, Heinrich Heine University Düsseldorf, and the Institute of Neuroscience and Medicine (INM-7), Research Center Jülich. His talk was “My (international) journey”. He shared his experiences collaborating with Chinese researchers, discussed the development of his Anatomy Toolbox, and explained how that experience could help others developing new toolboxes. Professor Eickhoff encouraged us to be “open-minded, helpful and productive”. Collaboration can help improve visibility, and software can open doors but still needs commitment. He regarded science as a multi-shot interactive game rather than a zero-sum game. Finally, he suggested embracing the idiosyncrasies of scientific systems. ![]() The third talk, given by Professor Yu-Feng Zang from Hangzhou Normal University, was entitled “Clinical considerations about resting-state fMRI”. Professor Zang first described his own research experiences and then turned to the current state of resting-state research: there are now many analytical methods and papers on resting-state and task fMRI, but too few meta-analytic papers.
He also argued that statistical thresholds are too stringent and effect sizes too small, so we need clearer hypotheses and should pay less attention to p-values. Professor Zang offered a few suggestions for clinical studies: recruiting at multiple centers to reduce sampling bias, using new analytical methods to increase effect sizes, sharing raw data (or at least t-maps) from people with neurological and psychiatric conditions (e.g., ADHD-200, ABIDE), and pursuing source localization and treatment. ![]() The last talk, by Professor Yan-Chao Bi from Beijing Normal University, was entitled “What’s special about doing HBM research in China? Some personal thoughts”. She began by introducing her background in the field of language, and then explained how we can extrapolate from the specifics of learning Chinese languages to universal principles. She also discussed practical matters, such as collecting data from special populations in China in collaboration with clinicians. Finally, Professor Bi pointed out that China’s unique cultural background and many great colleagues allow Chinese researchers to pursue a variety of research avenues; on the other hand, China has fewer peers in each sub-field and there is strong pressure to publish. She concluded that the best way to resolve these dilemmas is to stay in touch with the world. ![]() After the keynote talks, Professor F. Xavier Castellanos from New York University School of Medicine, Professor Jia-Hong Gao from Peking University, and Professor Tian-Zi Jiang from the Institute of Automation, Chinese Academy of Sciences, joined as guest speakers for a panel session. Professor Chao-Gan Yan moderated the discussion and introduced the topic “How to improve the global impact of domestic researchers in China”. Each senior researcher shared their insights. Professor Tian-Zi Jiang emphasized that the international environment offers a variety of opportunities for Chinese domestic researchers, who can build their global influence by publishing articles and seeking out international collaborations. At the same time, he noted that China is undergoing rapid development and already has a strong international presence, providing many opportunities for international cooperation; he therefore encouraged Chinese scholars to return to China to seek opportunities.
Professor Jia-Hong Gao shared his opinion that the best way to promote international communication is to publish high-impact articles, which lead to more opportunities for oral presentations and thus greater visibility. Furthermore, China today provides increasing funds to encourage researcher exchanges abroad. Professor F. Xavier Castellanos pointed out that young scholars need to identify their own fields, communicate with an open attitude, and keep in contact with other scholars. When attending a conference, they should go to the poster area, talk with others, and be open to criticism. Professor Simon B. Eickhoff further emphasized the importance of participating in the poster sessions. He added that when communicating with peers, researchers should focus on the big picture rather than on the specific details of the research. Professor Yu-Feng Zang suggested that significant progress can be achieved by doing research abroad for at least one year: research skills can be greatly improved by gaining exposure to other research areas. Professor Yan-Chao Bi stressed that young scholars should be open to different opinions and need not be afraid of criticism from others - just be brave enough to communicate, and you will make progress. Towards the end of the panel session, a young scholar asked: “As a student, how can I communicate with other people on academic issues at an international conference?” Professor Castellanos responded that he likes to talk with young scholars and has a strong desire to help others; for instance, he was prepared to stay after the event to discuss research with attendees. Professor Eickhoff suggested talking to at least 10 poster presenters during the meeting to practice presentation and communication skills. At the end, the audience thanked the speakers for their informative presentations and discussions with hearty rounds of applause. After the meeting, we enjoyed a group dinner and more informal discussions on both science and life as a scientist.

Organizing Committee of the Annual Event of Chinese Young Scholars for OHBM:
Chao-Gan Yan, Institute of Psychology, Chinese Academy of Sciences
Ling-Zhong Fan, Institute of Automation, Chinese Academy of Sciences
Xiang-Zhen Kong, Max Planck Institute for Psycholinguistics, Nijmegen
Hai-Yang Geng, University of Groningen, The Netherlands
Wei Cheng, Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University
Ning-Xuan Chen, Institute of Psychology, Chinese Academy of Sciences

By Roselyne Chauvin

After the success of the first Australian chapter meeting and the announcement of the OHBM publishing platform Aperture in 2018, the OHBM communications committee took the opportunity to meet Michael Breakspear in Rome to learn more about brain mapping developments in Australia, as well as the progress of The OHBM Publishing Initiative Committee (TOPIC). ![]() Roselyne Chauvin (RC): Hi. Thank you for giving me a bit of time in your busy schedule during OHBM. Michael Breakspear (MB): Pleasure! RC: So it's the 25th anniversary of OHBM. What has been your experience with the meeting? MB: Well, I first went to OHBM in 2001 when it was in New York. So actually, I don't go back 25 years, I go back 18 years. And I think I've been to nearly all of them, except maybe the one in Florence, because my twins were born that day. So I couldn't come to Italy.
RC: It was a good excuse at the time - and now we are back in Italy. MB: Yeah. So it's a wonderful meeting. I mean, it's an opportunity to present your work, to hear cutting-edge work from others. I think one of the advantages of OHBM is that people are prepared to present work that's ongoing, rather than work that's finished. And they'll be brave enough to present work that they haven't submitted yet. So you have that feel of work in progress, as well as work that's been completed. Obviously, it's also an opportunity to meet colleagues and network, and see people that you might not see for the rest of the year. RC: ...and I also know that the first Australian chapter meeting was held this year. MB: Yes, we had the Australian chapter. We formed that last year. I guess it all started at OHBM - we see each other there and, obviously, we're meeting with our international colleagues. We decided that we would be meeting as a community in Australia once a year, building strength in numbers in Australia, and collaborating more. So we had a meeting in Melbourne. I think that was last year, 2018. And we're meeting in October this year in Newcastle. There were over 120 people at the last meeting, and many of the initiatives that you see at OHBM - diversity initiatives, open science initiatives - were really at the forefront of the meeting that Dan Lubman and Alex Fornito organized in Melbourne, too. That was very exciting, with lots of presentations from mid-career researchers. RC: There is a push to collaborate more between centers; I heard that there are new initiatives along these lines in Australia, with more investment in equipment. MB: Yeah, I think in the last five years we've had a lot of investment in equipment. So we have two 7Ts, we have lots of 3T Prismas, we have PET-MR, we have all the vendors represented there. We have MEG, with a new MEG center down in Melbourne [see details below]. What we need now is more investment in personnel: human resources, postdocs' career development, people working across centers with the skills that allow state-of-the-art imaging analysis. And that's what OHBM in Australia is trying to achieve. So that if we have implemented a human connectome protocol in Brisbane, then we can roll it out in Melbourne and Sydney. And of course, it comes with particular pipelines that we put on GitHub, and then we can share those as well. And I think those are the two main things in Australia at the moment: working collaboratively around acquisition and analysis. And also, well, in Australia we fortunately have a fantastic health system. So we're looking at large longitudinal studies, particularly in dementia. RC: And so there's a push for collaboration and investment in people; should people at OHBM look toward Australia to find jobs? MB: Yeah, we have lots of positions being advertised at the moment. But we're also lobbying state and federal government to make sure that any capital investment is matched with investment in salary and personnel. We're creating data hubs and imaging analysis hubs that can straddle different institutes, and working with clinicians so that clinical studies are using state-of-the-art imaging. Because there's historically been a gap between what the brain mapping community is doing and what the clinical community is using on the scanner to do their studies. So we're trying to close that gap. RC: I also wanted to talk about your editorial activity. I know that you are part of the TOPIC initiative in OHBM. Can you tell us more about that?
And what is your vision for the future of the publishing system? MB: Well, I'm working with the TOPIC collective - we might say it's a fairly loosely knit group of people - to try and create, with new technology, the ability to curate, review, and publish a diverse variety of content in an open way. In addition to traditional papers and review papers, we are looking into publishing code, data and tutorials, and Nikola Stikov has been working on transparency in publishing, so you can look at the source code and interact and experiment with the figures. We want to allow traditional review, if authors and reviewers would like that, but also to encourage different forms of open and post-publication peer review. You see this already in a journal like eLife. I've been fortunate to be an editor at eLife and, of course, I'm editor-in-chief of NeuroImage. So we're seeing different models. Now we're working together to help the open science community at OHBM make these ideas a practical reality. RC: Do you think OHBM can help in promoting those new systems and maybe experiment with them? MB: It's absolutely crucial that the OHBM community come on board. First of all, just to step back a little bit, I was treasurer on Council. And with all the other council members, we, in communication with the membership, decided to move OHBM towards a society, and then add a new communications arm, the communications committee, along with a lot of other activities around OHBM in addition to the annual meeting. And the publication platform is part of that. So first of all, there is an investment in starting up the platform. We want OHBM members to be submitting content and to be reviewing. And we're looking at merit-based systems for enthusiastic reviewers to move into editorial roles. And also a move to an open-source platform. We don't see it as a competition with the big journals; we see it as an alternative. But it needs to be a viable alternative where people are putting their best work and are confident that it's also contributing to their career progression. RC: We are still at the stage of exploring and experimenting with those different alternatives. MB: Yes, definitely. So this is making it a reality. RC: Yeah, exactly. It's impressive how you balance your research and editorial activities. How do you split your time? MB: Well, my passion is always for doing research, doing experiments. I get a lot of pleasure out of mentoring my early and mid-career researchers, and several of them are now faculty - that's one of the best things that can happen to somebody, to see their postdocs become independent. But you know, I've always been passionate about staying involved in the curation of the data. I came into the field as a computational neuroscientist, so I do a lot of programming, and occasionally even a bit of actual maths. I think if I got to the point where I wasn't doing that anymore, I'd be very frustrated as a researcher. RC: You would lose track of what's going on in the field. MB: Yeah. In terms of the editorial work and the other work, the good thing is that you're working with colleagues, and at OHBM we're friends - we go out to dinner and we have lots of discussions. There's a sense of community there with people at the same career stage, as I'm not a mid-career researcher anymore. You're not doing it as a solo activity. It's also a collaboration with people outside of science.
For example, publishing in a traditional journal is a collaboration with commercial publishers and people who have been trained in law and ethics, dealing with dilemmas around co-authorship, plagiarism, and so on. But my passion will always remain in doing the basic computational and data analysis. RC: It matters a lot to be able to get the balance right. It was a pleasure to talk to you and to find out more about activities in Australia. Thank you very much for your time. MB: Okay. Cheers. Thank you.

For more information: In Australia there is a National Imaging Facility through which imaging infrastructure is funded (https://anif.org.au/). That website hosts an extensive list of all the current imaging facilities in Australia. Australian imaging is organized around a fantastic new investment in infrastructure, including new equipment and upgrades to old equipment, across the entire country. The funding is from NCRIS (National Collaborative Research Infrastructure Strategy) and is on the order of tens of millions over 5 years, with substantial co-investments from partnering institutions and state governments. Some of the highlights from this are:
There are also substantial upgrades across the country, including to the MEG at Swinburne, human 3T at Florey and Swinburne, human 7T at both the University of Queensland and the University of Melbourne, and preclinical 9.4T and 11.7T at UWA and WSU, respectively.

By Ekaterina Dobryakova

Tianzi Jiang is the Director of the Brainnetome Center at the Institute of Automation, Chinese Academy of Sciences, in Beijing, China, and a Professor of Neuroimaging and Brainnetome at the Queensland Brain Institute, University of Queensland, Australia. The Brainnetome Center attempts to take into consideration the social and environmental effects on the brains of individuals with psychiatric and neurological disorders while examining structural and functional brain characteristics in a multimodal fashion. The Brainnetome atlas, one of the major projects of the Brainnetome Center, currently contains 246 brain regions and allows examination of anatomical connectivity-based parcellations of various brain areas. Here we briefly found out about Professor Jiang's background and his diverse interests centered on the Brainnetome. ![]() Ekaterina Dobryakova (ED): What motivated you to start a career in brain mapping? Tianzi Jiang (TJ): All of my degrees were in mathematics. After my PhD, I started postdoctoral training in computer vision and image processing. I was, and remain, very interested in how we can use mathematics to understand the brain's organization. That motivated me to start my career in brain mapping twenty years ago, when I worked at the Max Planck Institute for Human Cognitive and Brain Sciences. ED: You are involved in diverse areas of research, from mental illness to macaque MRI. Will your keynote presentation highlight all areas of your research or focus on a specific direction that your lab is pursuing? TJ: I run a research center called the Brainnetome Center. In this center, we have four key research themes: the basic theory of the Brainnetome, modeling and simulation of the Brainnetome, imaging instrument development, and translational medicine. I am really involved in diverse areas. However, in my keynote lecture, I will focus on the Human Brainnetome Atlas: what it is, how we can construct it with anatomical connectivity profiles, how we can verify its reliability with different methods, and how we can apply it to address a variety of issues in neuroscience and clinical practice. I will also give a perspective on this field. ED: You are involved in a major "Brainnetome" project. What is the Brainnetome - and what do you hope to achieve with the data? TJ: The Brainnetome (Brain-net-ome) is a new -ome. It takes the brain network as its basic research unit, which consists of two components: nodes and connections. It can be studied at different scales, from a neuron to a brain region. We hope to achieve the following five goals: (1) the topological structure of brain networks (Brainnetome Atlas); (2) the dynamics and characteristics of brain networks; (3) network manifestations of functions and malfunctions of the brain; (4) the genetic basis of brain networks; and (5) simulating and modeling brain networks on supercomputing facilities. For more information on the Brainnetome, please refer to my paper entitled "Brainnetome: A new -ome to understand the brain and its disorders", NeuroImage, 80 (2013): 263-272. ED: What do you consider to be your greatest scientific achievements?
TJ: My greatest scientific achievements are contributions to the understanding of the brain's structural and functional organization, and its dysfunctions, using neuroimaging - especially the development of the Human Brainnetome Atlas (HBA). The HBA includes 246 subregions, with anatomical and functional connectivity profiles for each subregion. The atlas is an in vivo map. The fine-grained functional brain subregions and their associated anatomical/functional connection patterns can help researchers and clinicians describe the large-scale architecture of the human brain and accurately localize targeted brain regions for therapeutics. The HBA was among the Top 10 Breakthroughs in Science and Technology of China in 2016, jointly selected by the Fellows of the Chinese Academy of Sciences and the Chinese Academy of Engineering. It was also ranked among the Top 10 Breakthroughs in Medicine of China in 2016. Recently, the HBA was selected as one of the 40 Milestone Achievements of the Chinese Academy of Sciences of the last 40 years (1978-2018). ED: This is a two-part question: What are the most important things you do as a mentor with your team/lab? And is this advice based on your own experience during your training, or on things you wish you could have benefited from as a trainee? TJ: I think the most important things I have done as a mentor are to identify promising directions that can be worked on for several decades, and to set up the facilities for them. Then we need bright trainees and collaborators to work together. Great goals need a great team to achieve them.

By Jessica Turner
This year’s OHBM Talairach awardee, Professor Riitta Hari, has had a momentous impact on magnetoencephalography (MEG) research. A professor emerita, Academician of Science, and member of the US National Academy of Sciences, she has led the Brain Research Unit of the Low Temperature Laboratory at Aalto University in Finland since 1982. Her work has been critical in understanding how MEG sheds light on brain activity, and how that dynamic activity contributes to cognitive functions, including action observation. Here, Jessica Turner found out about Riitta’s background, her current work with artists, and the remarkable, undistorted window into the brain offered by MEG.

By Ilona Lipp
While there is nothing I would rather research than the brain, I dare say that brain imaging does not always feel like the most rewarding field of science to be in. A single study takes months - more often years - to plan and conduct, the methods can be very expensive and are constantly under development, and the results, no matter how interesting, most often seem like a tiny puzzle piece that (with a lot of luck) will have a (modest) impact in the (distant) future. Coming from this perspective, it was very refreshing for me to talk to somebody whose imaging research is as applied as it can possibly be: Gil Rabinovici. Gil is a professor of neurology, specialized in memory and aging. Using PET imaging with pathology-specific tracers, he not only investigates mechanisms behind neurodegeneration, but also assesses the clinical applicability and utility of his imaging methods. He recently launched a study on amyloid PET in more than 18,000 people all across the US. Gil will be one of the keynote speakers in Rome and I had the pleasure of finding out a bit about him and his research ahead of time. Ilona Lipp (IL): Your research focuses on brain imaging in the context of neurodegeneration and dementia. What are these things called amyloids? Gil Rabinovici (GR): Amyloids are biochemical structures, protein aggregates that form fibrillar beta-pleated sheets. While most protein aggregates associated with neurodegenerative diseases are amyloid fibrils, when people talk about amyloid they often refer to aggregates of the beta-amyloid polypeptide, first described as “miliary foci” by Alois Alzheimer in his seminal case report. In Alzheimer’s disease you have aggregated tau protein in the neurofibrillary tangles and beta-amyloid in plaques. But amyloid is a very general term and there are various other kinds of amyloid that can cause diseases in the brain. IL: How did imaging amyloids become one of your main research topics? GR: I am a clinical neurologist and I realized early on that I am most interested in brain-behaviour relationships and in higher cognitive processes. So my research focuses on diseases that affect the core processes for us humans, such as memory. A problem with those diseases is that we often cannot offer a definite diagnosis. This is because, for more than 100 years, diagnoses were based purely on symptoms. Brain tissue is not like other tissue, where you could just do a biopsy for additional information. But the diagnosis that is given during somebody’s life only has 70% accuracy when compared to the diagnosis they would get based on autopsy after death. This is a major barrier to developing effective treatments! And diagnoses are even less accurate early in the disease, when treatments would often be most beneficial. I was a third-year resident when I read a paper about the first imaging agent for amyloid, a PET tracer named Pittsburgh Compound-B (PiB). This paper was the most exciting thing I had ever read, because it had the potential to completely transform clinical practice. At that time, I was about to start a fellowship and I asked my mentor Bruce Miller whether we could also do something with that. A year later we started a collaboration with Bill Jagust, a PET expert at the University of California, Berkeley. We recruited patients in our memory clinic and referred them to Bill for scans, and that is really how my research career started. IL: Since then, you have studied whether imaging data indeed helps to improve clinical diagnosis and prognosis.
Are the results so far promising? Where are the methodological bottlenecks? GR: My research follows two major themes. One is to use molecular imaging, combined with structural and functional MRI, to understand disease mechanisms by studying how disease processes evolve during life in a longitudinal fashion. This is now feasible with the advances in imaging techniques, especially PET, where it has been validated that we are really capturing the biology. The other theme is to think about how these advanced imaging techniques can be translated from the lab into the clinic, to improve the diagnosis and care of patients. One bottleneck for my research is that the diseases I study progress slowly. The earliest biological changes often happen 10 years or more before the first symptoms appear. Following people from their earliest changes onwards requires very long longitudinal studies, which are practically and logistically challenging. An even bigger bottleneck, in terms of clinical applications, is that our diagnostic abilities currently exceed our treatment abilities. So how can we translate expensive scans into clinical practice when the diagnostic information they provide does not immediately lead to successful treatment? IL: So what is the idea behind IDEAS then? GR: Even though three amyloid PET tracers have been approved by the FDA for clinical use, in the US these scans are currently not covered by health insurance, and they cost several thousand dollars, so most patients would not be able to afford them privately. However, Medicare, the US health care program for people over 65, does cover scans if they are part of research studies that test their clinical utility. IDEAS (Imaging Dementia - Evidence for Amyloid Scanning) is a national study, funded by government, industry partners, and non-profit organizations. The aim is to assess two things: whether having PET scans changes short-term patient management, and whether it improves patient outcomes. We are testing 18,000 patients with cognitive impairment of unclear origin across 600 sites and 350 PET facilities in the US. So far, we have been able to demonstrate that this additional diagnostic information has a very large effect on short-term management: it changed management in over 60% of cases, twice as much as we expected. And the effects are not necessarily in the form of pharmaceutical treatment, but also small changes in patients' lives to avoid medical events that could result from their diagnosed condition. IL: Having worked on such multi-center studies, how many knowledge gaps do you think individual brain imaging labs can still fill - or should we all be pushing towards multi-center and consortium data? GR: I am not sure I recommend managing 600 sites; that was really an exercise in losing control (laughs). IDEAS is quite exceptional, but other multi-site studies do also have a role. Individual labs are still important: they play a role in pushing the methodology and innovating techniques. It can take years between the development of a new technique and its application. For example, it took years to get a structural imaging sequence ready that is suitable for multi-site studies. So individual labs push this innovation, while multi-site studies generate big data that can be used, for example, for machine learning to detect relationships and patterns that our brains would not be able to see. But I still think we can learn from small studies with clean data and specific hypotheses.
IL: And what direction is your own research going to take in the next few years? GR: I would like to keep pushing the research on disease mechanisms, also using more tau imaging with PET. I am also more and more interested in what we cannot yet see. While there has been a lot of progress, we still have a very limited set of tools to study brain aging and disease. I am hoping to develop new tracers that also allow imaging of other proteins, neuroinflammation, synaptic density, and so on. IL: Your research requires various areas of expertise: the imaging itself, the image processing, the clinical side, and then the statistical analysis. Can you tell us your secret for successfully conducting interdisciplinary research? GR: I have the great fortune of working with a wonderful team with complementary skills, and with great collaborators. To do clinical research, you really need multi-disciplinarity; it is team science, both within and across centers. I also think that the idea that one modality is enough to study a disease lies in the past. We need all the tools we can get to understand a disease on a more holistic level. IL: If a junior scientist came up to you and asked: “Gil, one day I also want to be an OHBM keynote speaker, how do I do it?”, what advice would you give them? GR: Find something you are passionate about and follow your heart. Science is a calling, not a 9-to-5 job. I mean, you can do it as a job, but you would not be successful or happy. Also, find something that is meaningful - for example, for me as a physician, doing research with very direct implications for my patients. Another piece of advice I would give is to say yes more often than no to new opportunities. It is important to focus, but equally important to expand your horizons. If you are too focused on a narrow topic, you may miss the opportunity to lead your research in a promising new direction. Also, choose great mentors. Great not as in famous people who have won a lot of prizes, but people you can work with and who care about your development. I find that the role of mentors is often underestimated. I have been really lucky that way and could not have done it without such great mentors. IL: Last but not least, can you give us a little teaser about your OHBM talk, without spoiling too much? GR: I can say this much: it will be better than the last episode of Game of Thrones (laughs). I will give some background on Alzheimer’s disease, explain the biology, and describe how imaging techniques in longitudinal studies have changed our approach to understanding diagnosis, treatment and care. IL: I am very much looking forward to this, thank you. GR: See you in Rome!

Trainees represent more than half of all OHBM members (63% at OHBM 2018 in Singapore). In Singapore, the Student and Postdoc Special Interest Group (SP-SIG) organised the 'Secrets behind Success' Career Development Symposium and Lunch with Mentors event, where we heard very heartfelt and honest talks from Lucina Uddin and Tom Nichols, who talked about switching from academia to industry, and back again. The SP-SIG also hosted the Monday Night Social/Open Science Gala at 1-Altitude, with a breath-taking view of Marina Bay, during the OHBM conference in Singapore. Beyond the annual conference, the SP-SIG provides ongoing support for trainees via its online international mentoring programme. This programme pairs researchers across the world, providing new links between OHBM members at different stages of their careers.
Open Science Gala | ‘Secrets behind Success' Career Development Symposium and Lunch with Mentors event ![]()

We sat down with Ayaka Ando, the current (2018-2019) Chair of the SP-SIG, and discussed what the SIG has been up to and its plans for the new year. Chiara Caldinelli (CC): Hi Ayaka, tell us a bit about yourself. Ayaka Ando (AA): My name is Ayaka and I am a postdoc at the Department of Child and Adolescent Psychiatry at Heidelberg University in Germany, working on multimodal imaging. I am particularly focussed on the development of psychiatric disorders in children. CC: Tell us about the upcoming Student and Postdoc SIG activities in Rome 2019. AA: We are definitely growing and expanding our activities in Rome! Of course we have our annual Career Development and Mentoring Symposium, where Dr Melanie Stefan and Dr Danielle Bassett will discuss their own journeys towards success in academia. As in Singapore, the symposium will be followed by our catered ‘Lunch with Mentors’ event, where trainees will get to sit down and informally chat about career development with both new and established PIs, as well as industry experts (learn more about our Lunch with Mentors event). We are also very excited about the Monday Night Social/Open Science Gala, which will be hosted at the Vyta Casa Del Cinema in the beautiful gardens of Villa Borghese. We are also delighted to announce our new venture! We will be hosting a dedicated Student and Postdoc SIG space during this year’s annual meeting. One room will be a dedicated space for mentors and mentees of the International Online Mentoring Programme (or any mentors/mentees who would like to catch up) to meet. In another room, we will be hosting a number of equality- and career development-oriented workshops. In the wake of the #MeToo movement, we have invited a special international guest from New York, Emma Kaywin, who is a conflict mediator and sexual health educator. She’ll be advising trainees on managing their interpersonal relationships with one another and with their superiors. Additionally, we will host a career-switching workshop for trainees considering transitioning from academia to industry. We are very excited about the new SP-SIG space and welcome anyone who is interested to join the workshops. Stay tuned for details! CC: Why do you think a special interest group at OHBM representing trainees is needed? AA: Trainees represent more than half of OHBM members, so it’s very important that there is a group to represent their needs. The SP-SIG’s goal is to create a community and a platform to support the development of trainees. That’s why we host the International Online Mentoring Programme, as well as organise events geared towards career development and networking. We are really passionate about supporting, empowering and creating a positive and encouraging atmosphere for trainees who are not only pursuing careers in academia, but also thinking about transitioning into industry. CC: In today’s world of increasingly difficult funding situations, trainees are actively seeking non-academic career opportunities. What do you think are the biggest hurdles OHBM trainees face in doing so? AA: I think the biggest challenge may be that a lot of us have purely academic experience within our careers, and we don’t know what skills are needed, what industry employers are looking for, or what being in industry really means. So we don’t know exactly what it is like to be in industry, and only a few of us do industry internships.
For example, Tom Nichols said in last year’s “Secrets Behind Success” symposium that while he worked in industry, he noticed a very fast turnover of staff. People who’ve never had experience in industry may not be aware of this. I think academia is quite a particular environment, quite different from industry. It’s therefore really important to inform ourselves as much as possible. To help with this, we are trying to organize our mentoring programme in cooperation with people working in industry, so that trainees have the opportunity to talk to people outside of academia. CC: What are your plans to address these issues in Rome 2019? Are there some initiatives that you would like to highlight? AA: Yes, definitely! We went around the industry exhibitors throughout the conference in Singapore to recruit more industry people to be involved in the International Online Mentoring Programme. We also managed to have three industry mentor tables at the Lunch with Mentors event at OHBM 2018 in Singapore. Looking at the statistics from our online mentoring programme on mentees’ long-term goals and the advice they are seeking from mentors, we noticed an increase in the percentage of people interested in industry positions. Therefore, we are trying our best to recruit more industry mentors. We are also organising a career-switching workshop at the SP-SIG Space in Rome this year, so I hope that this will be helpful to trainees. CC: How diverse is the participation in the many SIG initiatives? And what can be done to improve this? AA: We already have a good gender distribution within our committee, as well as in our mentoring programme, which we are really proud of. But we still need to work on geographic diversity - we are very Europe- and US-heavy. We have a smaller portion from Australasia, and an even smaller representation from South America, the Middle East and Africa. It would be great to get more people involved from these underrepresented regions. What would be really nice would be to talk to people from around the world who will be at the conference and see what their needs are and how we can get them more involved. Also, this year we have a wider representation within the SP-SIG organising members, so I hope this will help with participation too! CC: What is the one thing that you have learnt from your tenure in the SIG? And how has this helped you in other facets of your life? AA: I think one of the most important things about being on a committee is to be a team player. That is definitely important! I think researchers should generally be good at this, because you have to collaborate - you have lab members and you need to work together. But one key aspect of being a team player that I have particularly realised in this committee is that you should have faith in the people on your team: most people are very, very capable and I can trust them to do the things that need to be done in their own way. I guess the sort of chair I aspire to be is someone who oversees and encourages, rather than micromanages. For example, you are the social coordinator, Chiara, so I want to give you space to do it your own way, and I trust that you will pull off something really awesome in Rome too! That’s what I’ve learnt from Michele, our previous chair: she was like that, and I think she was an excellent leader. She trusted us to do a good job. I also feel that everyone on this committee is very fluid with their positions.
It has never been the case that ‘I am the treasurer, I only count the money’. I think everyone is very cooperative and works together very effectively to organise and host each event and programme we have today. I also think that promoting positivity, encouragement and transparent communication is very important, and key to naturally motivating people and creating an open, collaborative environment. So working in a team, trusting the team members and bringing positive energy are the main things that I try to incorporate into my life.

By David Mehler

As we impatiently wait for the upcoming OHBM meeting in Rome, we wanted to take a moment to reflect on the awesome experiences of previous years. One aspect of OHBM that makes it so exciting, and with which many will agree, is that it lives through its vibrant community. Early Career Researchers (ECRs) play a big role in this and certainly have a lot to say and share with the community. At the last two meetings, in Vancouver (2017) and Singapore (2018), we reached out to some of them to find out about their motivation to become neuroimagers and their vision for the field they study.

Being an ECR neuroscientist, what gets you up in the morning?
Méadhbh Brosnan, Monash University, Australia and Trinity College Dublin, Ireland
I’m really passionate about understanding how we can preserve high levels of cognitive function and help prevent dementia in our ageing population. In particular, I am intrigued by the potential for the prefrontal cortex to support adaptive compensatory processes in the face of age-related neuropathology. There are many aspects of this research that I enjoy, but the day-to-day variety is particularly great! We now have so many different techniques to look at the brain, and it’s a continuous challenge to keep up to date with emerging advances. I really appreciate working with people from a variety of backgrounds, and it’s a pleasure to have a great bunch of colleagues and mentors all over the world. Although, admittedly, coffee was my immediate response to this question, working with older adults is something I really love. I’ve met some extraordinary individuals through our research whose positivity and resilience, often in the face of challenging conditions, have been really inspiring!

Martin Hebart, National Institute of Mental Health, U.S.A.
When I studied psychology, I took a course on cognitive neuropsychology, during which we were allowed to interact with stroke patients. I was fascinated by the effects of visuospatial neglect, where a patient may be able to perceive the world around them but can only attend to the right side of objects. The phenomena we observed were so unintuitive, but so telling about the function of the brain. What really fascinates me today is how our visual processing is affected by top-down influences, and more generally how these influences shape our conscious perception of the world. My interest in developing and assessing data analysis methods derives from a desire to attain the best tools to address those questions.

James Kolasinski, Cardiff University, U.K.
I’m sure it’s a common answer, but my motivation really comes from my love of problem solving and a desire to understand how things work. The times in my working week when I’m most satisfied are when I crack a particularly challenging task and gain some new insight, however incremental it might be in terms of the overall question. On a more practical note: maintaining motivation is really important to me as an ECR. I find organisation is the key here: trying to keep the big picture in mind, but maintaining focus on the constituent parts of my research to ensure I get things done. It’s easy to be overwhelmed by the magnitude and scale of the projects we all take on: being organised maintains my progress and keeps me just as motivated as does my excitement about the big questions.

Michael Lührs, Maastricht University, The Netherlands
Usually the alarm clock, followed by a good cup of coffee. Although, as a neuroscientist, it is clear to me that enjoying your first cup of coffee later would be a more beneficial use of caffeine. In general, I’m really motivated by the fact that there are still so many open questions in neuroscience and that we have the possibility to gain new insights into how the human brain works and how everything is connected.

Natalia Bielczyk, University of Nijmegen, The Netherlands
For me these days, it is more about neuroscientists than neuroscience itself: at some point, I realised how much needs to be done to improve mentoring programmes within the neuroscientific community, and to help researchers successfully shift between academia and industry.
Researchers just deserve a better starting package when sailing away into the free job market. This does not mean that science is not exciting for me anymore; I still read and review research papers and have a few manuscripts in the pipeline myself. But I also feel that my life is slowly changing at the moment; I recognise that there are a lot of smart people in my research domain (connectomic research) and it will do well without my involvement, while some other areas definitely still need every pair of hands. It would be nice to be asked exactly the same question in a year or two, in order to see some personal development :)

Martin Grund, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Fixing the bug, or figuring out why there is a bug. Before my PhD, I did not expect that I would be able to tell someone how the settings of a parallel port can be controlled. Furthermore, we lose a lot of research potential due to the way academia works. I would like to unleash more of this potential by making the quiet ones heard and bringing the knowledge that is not headline-grabbing to the attention of the media and policy makers. On top of this, I am still fascinated by my research questions. How does the brain create these impressive subjective sensory experiences? How do humans make decisions based on very limited information? Particularly, being active in science policy, I am often surprised by the percepts that people can have.

What’s your vision for the field you are working in?

Méadhbh Brosnan, Monash University, Australia and Trinity College Dublin, Ireland
Investigating the neural basis by which lifestyle factors can cultivate resilience against neuropathology is an avenue of research which is, in my view, just as important as the development of interventions (e.g. pharmacology, brain stimulation, neurofeedback, and cognitive training) to remediate clinical deficits. With regard to ageing, there have been repeated and compelling observations that cognitively stimulating activities, such as professional occupations, education, and leisure activities, provide a neuroprotective buffer against cognitive decline in the face of neurodegenerative conditions such as Alzheimer’s disease. Researching the neural underpinnings of these sorts of effects is not only of scientific interest for understanding how environmental factors induce plasticity, but is also informative for public health interventions, thereby providing an opportunity for direct translational societal benefits. It is my hope that ambitious, large-scale longitudinal neuroimaging studies, encompassing several broad disciplines including psychology, computer science, neurology, public health, and neuroscience, will be carried out across the globe. This would greatly aid our understanding of how lifestyle changes might be implemented to optimally harness brain plasticity and improve neuro-cognitive health.

Martin Hebart, National Institute of Mental Health, U.S.A.
I believe our thinking about the brain is still strongly embedded in the idea of isolated cognitive processes acting on representations that are derived from sensory input or from memory. This idea might work for basic sensory processing, but I am not sure whether in the long run it will allow us to understand higher cognition. There is a lot of variability in higher cognitive processing, and the results that reproduce across trials or subjects may only provide restricted access to the processes we are trying to understand.
I hope that in the long run we will focus more strongly on the computational principles that govern our brain, and that we will perceive the brain more as a neural network that tries to optimize our interaction with the world. Luckily, there are a lot of great developments in computational neuroscience, so I'm confident that this vision will take shape in the near future.

James Kolasinski, Cardiff University, U.K.
I have recently relocated to the Cardiff University Brain Research Imaging Centre (CUBRIC), where we are very fortunate to house a really impressive array of methods and expertise. Housing 3T, 7T and Connectome MRI systems in the same building as MEG, brain stimulation, and EEG is really a dream come true for me as a neuroscientist. I guess my vision for the field is a more multimodal approach to the work we all do. I am increasingly thinking about the science I want to do in terms of a range of complementary methods, rather than focusing on a single method where I have prior expertise. I’d like to see an increasing focus on this multimodal approach across the field more broadly. I’m very fortunate to be pursuing multimodal work at CUBRIC!

Michael Lührs, Maastricht University, The Netherlands
My vision is to develop and establish a new form of treatment for various psychiatric diseases without the use of medication, and thus without its side effects.

Natalia Bielczyk, University of Nijmegen, The Netherlands
What concerns me the most is the job market for ECRs: we all know it is harsh, and depending on the discipline, only 2-15% of PhDs have a chance of securing a faculty position. What is rarely mentioned, though, is that the odds of becoming successful are equally low in any area of the free job market: 75-90% of start-ups fail, 90% of traders lose money, and only 2% of published books pay for themselves. In every area, free market competition is brutal. In my view it is thus absolutely necessary to teach ECRs how to develop core skills and become well connected and self-aware, as this will have a crucial impact on them regardless of whether or not they decide to stay in academia. Another problem is that the mechanisms to bridge labor between academia and industry are seriously underdeveloped at the moment. For this reason, I have set up a private foundation, Stichting Solaris Onderzoek en Ontwikkeling, in Nijmegen, the Netherlands. We are just starting to research this subject and develop the first solutions. If you are interested and planning to attend OHBM 2019 in Rome, I would be very happy to grab a coffee and chat!

Martin Grund, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
One of the founding reasons for OHBM was to integrate results across brain imaging methods. However, there are still large gaps between the modalities - from intracranial EEG via MRI to NIRS. Each community sells its method at its best. Fair enough. But hey, all methods have their pros and cons. We should work out which one best serves the question at hand and collectively join forces to merge the modalities. This is inherently linked to the publication system. We should find ways beyond A4 PDFs to share what we tried and learnt. We need a system that integrates data across experimental paradigms and imaging methods. Particularly for perception research, we need a language that allows us to merge results across sensory modalities instead of picturing them as silos.

Thanks to all interviewees for sharing their careful thoughts and vision with us.
I personally feel spoiled with food for thought now. It reminds me how inspiring and motivating #ECRchatting is: learning why people are passionate about what they do, and where they see challenges and opportunities for ECR progression. OHBM is committed to supporting ECRs on their way, and it’s worth checking out the activities run by the Special Interest Group (SIG) for Students and Post-docs. If you would like to share your story with us, reach out at the upcoming OHBM 2019 meeting in Rome!

By Christienne Gonzales Damatac & Roselyne Chauvin

Roshan Cools is a Professor of Cognitive Neuropsychiatry and PI at the Donders Institute for Brain, Cognition and Behaviour. Following a PhD with Trevor Robbins at Cambridge University, she completed a postdoc with Mark D’Esposito before returning to Cambridge and eventually moving back to the Netherlands to start her own lab. Her work has earned multiple prestigious awards, including recognition from the James S. McDonnell Foundation and the Royal Netherlands Academy of Arts & Sciences. Here, we found out about her work on the effects of dopamine and serotonin on the brain and cognition, and how she pushes for open science practices in her lab. Roselyne Chauvin (RCh): You’re studying the chemistry of the adaptive mind, motivation and cognitive control. The aim of this interview is to help people see whether they would need an adaptive mind to attend your lecture. I would like to start with the basics. Imagine you met a random person on the street. How would you describe your research? Roshan Cools (RCo): Well, first, thanks for giving me this opportunity. What would I say? Well, I would say something like: imagine you had to listen to me give a lecture or interview, or listen to this interview for the next half hour, but you forgot to turn off your phone and it's constantly beeping, and there are Facebook messages and tweets and Mattermost messages, or whatever. The willpower that you need to continue to listen to this interview, or to my lecture—the cognitive control that you need—that's what we study, and we are really good at that. It’s associated with a part of the brain that’s really well developed, but we fail to exert control, to exert willpower, all the time. Why is that? What limits human cognition? That's really the overarching question of our research program. RCh: Indeed, it’s really applicable to society then. So what do you consider to be your greatest achievement? RCo: Maybe I can just follow up on what I was saying about our overarching question, which is about what makes us fail to pay attention so often. One reason for that, I believe, is that exerting control all the time is a bad thing. Our brain basically decides whether it’s good or bad to exert control and then makes a decision. We examine this by looking at the large ascending neuromodulatory systems, like dopamine, but also noradrenaline, that are critical for these types of abilities. We do that in the lab using a combination of techniques: pharmacology, but also fMRI and chemical PET, where we measure these neuromodulators directly in the brain—so dopamine PET in particular. I guess there are a few things that I could say I am proud of. What we do is look at the effects of drugs that change these neuromodulators, like dopamine and serotonin—so dopaminergic drugs—and what we’ve found is that these effects are extremely variable.
And the whole program so far has been focused on trying to elucidate the factors that determine whether or not you will benefit from these drugs, the so-called cognitive-enhancing drugs. And what we find is that the effects of these drugs depend on the baseline state of the system. If you have low levels of dopamine, you get better, but if you have high levels of dopamine, you get worse. The effects of these dopaminergic drugs, which are often used as smart pills—like Ritalin, for example, for ADHD, but also in academia actually, and in schools—depend very much on the baseline state and baseline levels of dopamine. The other thing we found is that the effects depend on where in the brain these chemicals act. Many people study the neurophysiological signature of the cells that produce dopamine or noradrenaline with electrophysiology, for example, but what we find is that the effect of these neuromodulators depends on where in the brain they act. In the prefrontal cortex, for example, dopamine has a very different effect than in the striatum. So if we want to understand what a drug that acts on this system does to human cognition, we have to take a number of factors into account. Bit of a long answer to your question. RCh: It’s perfect! I guess then you are working in the lab toward understanding whether those baseline measures interact with the drugs depending on the area of the brain? RCo: Yes, exactly. Concretely, we’re asking very large groups of subjects to come to the lab. We measure their baseline level of dopamine with PET and then we ask them to undergo an MRI scan twice: once after intake of a placebo pill, and once after intake of, for example, a dopaminergic drug. The most commonly used drug is methylphenidate, also known as Ritalin, so we use that in the lab too. We assess whether the effect of, in this particular case, Ritalin depends on how much dopamine you have in your brain as measured with PET, and we see that that is the case. RCh: It’s really interesting because, indeed, it can be applied to society quite easily. How do you imagine the translation of your research? RCo: I think the first, larger implication of the work is a pretty fundamental one. It’s a better understanding of the mechanisms—neurochemical mechanisms—of motivational cognitive control, and then ultimately also a better understanding of how we might maximally exploit our human mental capital. And that has, possibly in the longer run, some implications for education. I guess that would be the first domain: How do we promote cognitive control? How do we promote creativity? This balance between focus and flexibility is very important. And I guess the second domain is [in] the clinic. Most concretely, we’re working on building a proxy model of dopamine synthesis capacity, consisting mostly of behavioural predictors but also physiological predictors, like spontaneous eyeblink rate, perhaps, and seeing how we can optimally combine all these predictors to provide a pragmatic and practical tool that can be used to predict how someone will respond to a dopaminergic drug. Because so far, there’s been a whole load of studies, including some of my own, suggesting that, for example, dopamine synthesis capacity is correlated with working memory capacity. And, indeed, we see that dopamine drug effects depend on working memory capacity. Of course, working memory capacity is much easier to measure in the lab or in the clinic than a PET scan of dopamine synthesis capacity.
RCh: Your research is multimodal, integrating PET and fMRI to study the effects of drugs. Is this what you're going to present in your OHBM keynote lecture, or do you have extra things that you want to talk about? RCo: I think the general point that I will make is that the human brain faces a number of these computational trade-offs, like the trade-off between flexibility and stability, or between labour and leisure, and we need the ability to dynamically regulate these trade-offs depending on the constantly changing demands of the environment. And I'll make the point that the large ascending neuromodulators, like dopamine, are really perfectly suited to dynamically regulate these trade-offs. I'll illustrate that by highlighting a number of general principles of chemical neuromodulation, like this baseline-dependency principle. I might refer—I'm not sure yet—to the motivational opponency principle—that's another observation which I won't elaborate on now. But the general point is [that] I will talk about these multimodal projects that we're doing, in which we combine PET, pharmacology, and fMRI to work towards a better prediction model of dopaminergic drug effects. RCh: OHBM is creating a number of different special interest groups to talk about trends in science, such as open science, or equality & diversity. Do those topics apply to your daily work life and what you promote in your lab? RCo: Yes, we are certainly quite active in those areas. The diversity issue comes relatively naturally if we talk about gender diversity, at least. Simply by being me—I must admit that I'm not very active apart from just being me—and I've noticed that just by being me, I attract other women in science, so I think I fulfill a role there. We talk about it. RCh: It's important. RCo: Yes. For open science, reproducible science, yes. Like many other labs, we've also taken a number of steps and defined standard operating procedures. When new people arrive in the lab, we highlight those to them, and they include that all studies we do are now pre-registered on the Open Science Framework. RCh: Not everyone is doing that. RCo: Yes, we're trying to promote version control. We have a lab Git account and we try to explicitly check each other's code for analyses. It's not the funnest thing for everyone, but for most projects now, I ask a lab member to rerun the analysis of another student because we want to make sure that the analysis is reproducible. We promote the use of interactive notebooks, for example. I must admit that a lot of this was influenced very much by one of my postdocs, Bram Zandbelt, who is very active in this field; he had a great influence on us as a lab. He's also teaching in our local Donders reproducibility course. RCh: What would you advise new researchers to do for their career? RCo: I think the most important thing for a scientist, but also for anyone, is to constantly ask yourself which things you can control and which you cannot control. And then accept the things that you can't control, but act on the things that you can.
Now for a new researcher, a new person entering the field, the things that they can control include the selection of a mentor, and I think that's very important. By the way, I'm following the advice here from Jay McClelland. I just listened to his wonderful interview on the Brain Inspired Podcast by Paul Middlebrooks and I thought that was wonderful advice, and it was: Find the right mentor who you can bounce ideas off. But the other point is that the key is to find a project—to define a project for yourself that you find is representative of a general, larger question, but that is still tractable. I think those two points are very important. But the starting point—the first one that I said—is to constantly ask yourself: What are the things that you can control? And what are the things that you can't control? Because I've noticed many people spend quite a lot of time and energy on things that are outside of their control. And that's, you know, a missed opportunity in a sense. RCh: Yes, for sure. Well, thank you very much for your time and I'm really looking forward to your lecture and learning more about cognitive control and [the] dopamine system. RCo: Thanks, it was fun. I enjoyed it!
By Claude Bajada OHBM is a community of neuroscientists interested in neural cartography. It draws upon the traditions of 19th century neural mappers such as the Vogts, Brodmann and von Economo. While the spirit of the society is still rooted in the biological brain, the conference itself is multidisciplinary. Although still a place for biologists, anatomists, physicians and surgeons, thanks to the development of Magnetic Resonance Imaging the field has become increasingly computational. Thomas Yeo is an assistant professor at the National University of Singapore, where he leads the Computational Brain Imaging Group. His lab develops machine learning algorithms for MRI data. His work is well known to brain imagers who are familiar with the "Yeo" brain networks. Ahead of his keynote lecture, I met Thomas and learned how he made the switch from engineering to neuroscience, what led him to working on the topics he is now well known for, and what the exciting new topics in his field are. Claude Bajada (CB): From studying electrical engineering and computer science to getting into neuroscience, what was the path? Or perhaps, draw the graph.
Thomas Yeo (TY): To tell you the truth, my path was actually quite random. There was no sudden epiphany, no single life-changing event that led me down this path. As a kid, I was generally interested in the brain, but I was also interested in mathematics and physics. The best way to describe my path is that things just sort of happened. When I was deciding on PhD supervisors, I was debating between computer vision and medical vision. I ended up pursuing medical vision mainly because my PhD supervisor (Polina Golland) expressed the most interest in me joining her lab. At the time, I did not want to work with fMRI because it seemed too difficult. Consequently, I ended up working with both Polina and Bruce Fischl during my PhD, developing machine learning algorithms for registering and segmenting brain data on cortical surfaces. When my PhD was nearing its end, I was looking for a postdoc position, but I also wanted to try something new. I could either move closer towards image acquisition (MR physics) or towards the "end users" (neuroscientists/clinicians). Bruce mentioned that Randy Buckner was putting together a big dataset. At that time, there was not as much data sharing and there were not as many large datasets as today, so the plan was for me to join Randy's lab, learn some neuroscience and analyze some large datasets. From then on, I was hooked on neuroscience research, but with a computational bent because of my PhD training. CB: As someone with a solid STEM background, what are your experiences interacting with clinicians, psychologists and other health professionals? TY: Interactions with clinicians, neuroscientists, and psychologists are extremely important to what I do. I have found that the problems neuroscientists think are important are often quite different from what I was interested in as a PhD student. Back then, I thought I was developing algorithms that were very helpful to neuroscientists/clinicians/psychologists. But after joining Randy's lab, I realized that my algorithms were often not immediately relevant to what neuroscientists need. In engineering/computer science, there is pressure to develop novel, beautiful, fast algorithms. However, at the Martinos Center, where there is a very nice big computing cluster, speed is often not a pressing issue. Most neuroscientists also do not care about novelty or how elegant an algorithm is. They care about whether an algorithm can help to answer their question or help their patients. They don't really care whether the algorithm involves lots of equations or just simple correlations. In fact, they prefer a simple algorithm to a complex algorithm unless I can demonstrate the complexity is really necessary. So working with neuroscientists has really changed how I think about problems. On a day-to-day basis, I like to think about what interesting neuroscience problems can be formulated as machine learning problems. For example, around 2012, I became intrigued by Russ Poldrack's 2006 paper on reverse inference. He had this beautiful figure showing that tasks recruit unobserved cognitive processes, which can then be observed with brain imaging, behavioral and other kinds of data. I realized that the figure could be mathematically expressed as a hierarchical generative model. I then applied this model to real data to estimate the unobserved cognitive processes and discover new insights into brain organization. Throughout this project, I received a lot of input from quite a number of neuroscientists, who brought their own unique expertise and insights to the project. In fact, I met Simon Eickhoff and Maxwell Bertolero because of this project and we have since collaborated on many more projects. Later on, I realized that the same class of hierarchical generative model can be applied to understanding disease nosology: in this case, the model would encode the idea that different disorders or disorder subtypes share multiple disease processes, which can then be observed with brain imaging and behavioral data. This has in turn led to projects on disorder nosology with quite a number of folks. Thus, one project led to new collaborations, which led to even more collaborations.
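The hierarchical generative model Thomas describes links tasks, unobserved cognitive processes, and observed brain data. As a loose analogy only (not his actual model), an off-the-shelf topic model has the same shape: tasks play the role of documents, brain regions the role of words, and the latent topics stand in for the unobserved cognitive processes. A minimal sketch on synthetic counts:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Hypothetical corpus: rows = task experiments, columns = brain regions,
# entries = counts (stand-ins for activation foci falling in each region).
n_tasks, n_regions, n_processes = 60, 100, 5
task_by_region = rng.poisson(1.0, size=(n_tasks, n_regions))

# Tasks act as "documents" and regions as "words"; the latent "topics"
# play the role of the unobserved cognitive processes linking the two.
lda = LatentDirichletAllocation(n_components=n_processes, random_state=0)
task_by_process = lda.fit_transform(task_by_region)  # tasks x processes
process_by_region = lda.components_                  # processes x regions

print(task_by_process.shape, process_by_region.shape)  # (60, 5) (5, 100)
```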
CB: Your name is now almost synonymous with the 7 and 17 resting state "Yeo" networks. How did that work come about and how did it influence your subsequent career? TY: As I was saying before, I ended up joining Randy's lab as a postdoc because he was amassing a large dataset in collaboration with a large number of PIs in the Boston area. At that time, there was already a lot of work showing that resting-state fMRI can be used to extract different networks. Given that my PhD advisor (Bruce Fischl) is one of the creators of FreeSurfer, I ended up re-processing Randy's data and projecting it onto the surface coordinate system to visualize the data. I then used a clustering technique developed by a fellow PhD student (Danial Lashkari) of my other PhD advisor (Polina Golland) to parcellate the cortex. Frankly speaking, most of the networks we found were already known in the literature, so to this date, I am not 100% sure why this paper became such a hit. Perhaps it was the large number of subjects. Or perhaps the surface coordinate system allowed us to see some very exquisite topography that was less obvious in the volume. For example, we showed the existence of multiple parallel, distributed and interdigitated association networks. Or perhaps it was the comprehensiveness of the paper – 40 pages long. I like to joke that it's my second thesis. Without a doubt, the paper has been incredibly helpful for my career. I have a few students who continue to push the frontier on this topic. Our work probably gets a disproportionate amount of attention, so my lab continues to benefit from the original paper. In some sense, I was very lucky. The technical aspects of the 2011/2012 papers (e.g., surface processing, clustering) were possible because of my PhD training. And I arrived in Randy's lab at exactly the right time. If I had come a year earlier, the data would not have been ready. If I had come a year later, the impact of the work might have been overshadowed by similar papers (e.g., Jonathan Power's work), which would then have been published well before ours. I was lucky to have worked with super talented people in Randy's lab, including Fenna Krienen (who was co-first author on the paper), Hesheng Liu, Jorge Sepulcre and of course Randy!
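The clustering step behind that parcellation can be approximated in a few lines: each vertex gets a functional-connectivity profile, profiles are normalized, and similar profiles are grouped into networks. A synthetic-data sketch, where k-means is a rough stand-in for the clustering actually used (reportedly a mixture of von Mises-Fisher distributions):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical input: one functional-connectivity profile per cortical
# vertex (correlations of that vertex's time series with reference regions).
n_vertices, n_refs = 5000, 200
profiles = rng.standard_normal((n_vertices, n_refs))

# Unit-normalizing each profile puts them on a hypersphere, so k-means
# roughly mimics clustering with a mixture of von Mises-Fisher directions.
profiles /= np.linalg.norm(profiles, axis=1, keepdims=True)

networks = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(profiles)
print(np.bincount(networks))  # number of vertices assigned to each network
```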
CB: What would you say is the most exciting topic in computational brain imaging at the moment? TY: Given the large quantity of public data out there, I think this is an exciting time for human neuroscience. This is especially the case for computational scientists like me. I have found the big data to be very helpful for developing algorithms and applying them to discover new insights into the brain. Given the large public investments in these datasets, I am also thinking a lot about how we can use these big data for useful applications, e.g., helping patients. Consequently, I have become less interested in problems, such as classifying controls versus schizophrenia, which are useful for benchmarking algorithms, but not really useful for clinicians per se. There are definitely machine learning problems with real clinical value, e.g., predicting the best treatment in depression, but there's not that much big public data on that (although I can't really complain since I am just a data leech). Furthermore, the vast majority of machine learning algorithms only allow us to find associations. So no matter how "deep" the algorithms are, we are just finding glorified correlations, even if it's out-of-sample prediction! Do these big data only allow us to find associations or can we gain mechanistic insights into the brain? On this front, I think biophysical modeling and causal modeling are potentially promising and exciting. CB: You played an integral role in COBIDAS. What was the motivation for that and what influence do you think it has had? TY: Well, I wouldn't say I played an integral role. I was one of many folks who contributed to the report. It was really Tom Nichols who had the unenviable task of "herding cats"! The OHBM Council initiated COBIDAS to develop recommendations and consensus on best neuroimaging practice. But it soon became clear that "neuroimaging" would cover too many things, so we ended up focusing on MRI. An EEG/MEG COBIDAS is now spearheaded by Aina Puce and Cyril Pernet. Unfortunately, in my opinion, the COBIDAS report has not been as influential as I hoped. We recommended a checklist of items that researchers should consider and report, but I think it's safe to say that the vast majority of papers (including from my lab) do not really do so. I am speculating here, but one reason might be that many researchers do not know sufficient details of their preprocessing pipelines or analysis algorithms to actually complete the checklist. The checklists are also very long, so researchers might balk at the work of filling them in. I think the best way for this to happen is to try to automate the process. I can imagine some software that keeps track of the preprocessing/analysis one performs on the data. These metadata can then be shared. I believe Tom Nichols and others might be working on this. This could be promising. In the case of my lab, we mostly perform analyses of open datasets and we often develop our own algorithms. Unfortunately, I do not believe that there is a checklist long enough to completely specify an algorithm without access to the original code. Thus, my lab is more focused on sharing our code. Even then, replication is of course not easy. While we work on open datasets, many of them (e.g., UK Biobank) might not allow us to re-distribute the data, so replicating our results is not straightforward. If you explore our GitHub (https://github.com/ThomasYeoLab/CBIG), you will see that our wrapper scripts often reference data on our server. But we have tried to make the code user-friendly, so hopefully users can easily apply our code to their own data. CB: What can we expect in the future from the Yeo lab? TY: We have some exciting new individual-specific brain parcellation work coming out! We are also working on using machine learning and GPUs to invert neural mass models; right now, these biophysical models mostly require hand-tuning of critical parameters. Finally, we are also working on using machine learning to understand disease nosology.
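The automated provenance tracking Thomas imagines could look, very roughly, like a decorator that logs every processing step as it runs. A minimal sketch of the idea only, with hypothetical step names (not an existing package, nor necessarily what Tom Nichols and colleagues are building):

```python
import functools
import json
import time

PROVENANCE = []  # accumulating record of every processing step applied

def tracked(step):
    """Decorator logging a step's name, keyword parameters, and timestamp
    (positional arguments are omitted in this sketch)."""
    @functools.wraps(step)
    def wrapper(*args, **kwargs):
        PROVENANCE.append({"step": step.__name__,
                           "params": {k: repr(v) for k, v in kwargs.items()},
                           "time": time.strftime("%Y-%m-%dT%H:%M:%S")})
        return step(*args, **kwargs)
    return wrapper

@tracked
def smooth(data, fwhm=6):               # hypothetical preprocessing step
    return data                         # (actual smoothing omitted)

@tracked
def regress_motion(data, n_params=24):  # hypothetical nuisance regression
    return data

data = regress_motion(smooth(None, fwhm=6), n_params=24)
print(json.dumps(PROVENANCE, indent=2))  # metadata to share with the results
```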
GENES, ENVIRONMENT, THE DEVELOPING BRAIN, AND EVERYTHING IN BETWEEN
By Tzipi Horowitz-Kraus One of the most interesting questions when researching the developing brain is the level of impact of defined genetic and environmental factors. Dr Armin Raznahan, a neuroscientist and child psychiatrist who serves as Chief of the Developmental Neurogenomics Unit at the National Institute of Mental Health (NIMH), examines patterns of brain development in health and in groups with known neurogenetic disorders. His unique blending of basic and clinical neuroscience may help to identify risk pathways towards common psychiatric presentations, in addition to the insights it provides regarding the specific rare developmental disorder subtypes his clinical research protocols are focused on. I had the honor of interviewing Dr Raznahan, a keynote speaker at the upcoming OHBM 2019 conference, to find out more about his work. Tzipi Horowitz-Kraus (THK): What is developmental neurogenomics, and what motivated you to go into this area of research? Armin Raznahan (AR): I see Developmental Neurogenomics as a discipline that is concerned with brain development, and emphasises the role of genetic factors in patterning brain structure and function over development. Usage of the term Developmental Neurogenomics has increased in recent years, and for us, there is an additional emphasis within what I've just described on thinking about how genetic influences on the developing brain can contribute to psychiatric disorders. Coming from the perspective of my initial training as a child psychiatrist, there is that clinical element to what I do as well as the basic science questions about the spatiotemporal patterning of the brain over development, and how genetic variation can contribute to that.
By Roselyne Chauvin Recently, a Brain-Art Special Interest Group (SIG) was created within OHBM. This SIG will officially manage the Brain-Art competition and exhibits that have been organized for several years by the Neuro Bureau. Each year the Brain-Art competition receives numerous submissions; the winners are then announced during the Student and Postdoc SIG and Neuro Bureau collaborative social evening at the OHBM annual meeting. Since the first exhibition in 2011, Brain-Art exhibitions have always been a great success. I was really happy to learn about the creation of the Brain-Art SIG and curious about the aims and development plans of its board. With an official group dedicated to Brain-Art, art might start to take a bigger place on the OHBM scene. I've always valued the interaction between Art and Science. It's an amazing way to reach out to the general public and to scientific peers, and thus to promote scientific content. Every type of art can be used as a vessel to talk about science, such as music, dance, theatre, literature or painting. It can come directly from researchers or from their collaboration with artists. Programs such as Artist-in-Residence or Artists-in-Labs promote that artist-scientist interaction by inviting artists to stay a few months in a lab in order to learn and find inspiration for their art. The other way around works as well: researchers might see their work from another angle by doing art or interacting with artists, revealing new perspectives.
As the Brain-Art Competition 2019 has just opened, let's discover a bit more about this Brain-Art SIG and review the past editions of the competition and exhibits. You might even find inspiration along the way to participate in this year's edition of the competition. To get to know the Brain-Art SIG's mission, I asked the board about their personal experiences with brain art and how this SIG came about.
Brain-Art SIG vision for OHBM: Interview with the board
Roselyne Chauvin (RC): So what is the view of the Brain-Art SIG's officials on art and science interactions and, personally, what motivated you to join the board of this SIG? Alain Dagher, Chair (AD): My answer is roughly half-way between "Because Daniel M asked me" and "because art and neuroscience both often seek to explain the human experience." Valentina Borghesani, Secretary (VB): I agree with Alain: human beings are way too complex to be tackled only from a scientific perspective; one needs to embrace diverse forms of expression/investigation. Personally, while being totally devoid of any artistic talent whatsoever, I love being involved in the scientific community and do…the leg work! One day I saw on the Brainhack Slack workspace that they were looking for volunteers for this to-be-established SIG and… First lesson learned: things do escalate quickly around these folks! Aman Badhwar, Chair-Elect (AB): As far back as I can remember, I have been fascinated by art, and started painting as a child. In my view, science and art both seek to observe, record and explain the world around us, just using different means. Both have their theoretical frameworks, evolving techniques, and schools of thought. Above all, both scientists and artists need to be creative and insightful in order to make meaningful contributions to their respective fields. In one direction, I use painting as a means of communicating ideas from my scientific work to the public using the more visceral, emotional language of art. In the other direction, when grappling with a thorny scientific problem, the distinct focus required while painting frees my subconscious mind from conceptual boundaries and dogmatic ideas, and allows me to return to my scientific work with fresh eyes. Some people have told me that one cannot be a scientist and an artist at the same time, and that it is necessary to choose. Personally, I find that art synergizes with my academic endeavours, and provides me with a clarity that is sometimes hard to find in the barrage of scientific data. I was first told about the Neuro Bureau and its OHBM art competition by Pierre Bellec. It was 2014, and I was having my first solo art exhibit at CRIUGM, University of Montreal. The next thing I knew, I was having this intense conversation with this "highly energetic, mile-a-minute person" (well, compared to me, as I internalize my energy, and Pierre, I would say, is the opposite), who was convinced that I needed to submit my art to the Neuro Bureau art competition. I did not know who Pierre was and had never been to OHBM, but the art was submitted, and a couple of months later, an again very enthusiastic Pierre informed me that I had won one of the categories. Life has a funny way of working sometimes, because the next year (2015), I found myself being a postdoc in Pierre B's lab, going to my first OHBM meeting, and being intensely involved in the Neuro Bureau/OHBM art activities. RC: Your Brain-Art SIG page states that you aim to:
Outside of the exhibit and competition, what other tasks would you like to start in order to reach these goals? I know that there are always multiple BrainHack projects related to new data-visualization tools. Would you consider proposing a special BrainHack on data visualization mixing scientists and artists? Or will we see the start of graphical abstracts for OHBM? AD: I think that is a great idea. At the prosaic level, better data visualization can improve communication of scientific results, and ease the work of reviewers (making your paper more likely to be published). But just as we need to emphasize the aesthetic side of brain imaging visualization, we also need to incorporate concepts of openness and reproducibility, i.e. make sure the data-to-image generation process is transparent. VB: Improving our current scientific visualization practices is clearly one of the expected, let's say, side effects. Graphical abstracts and cross-disciplinary hackathons sound like great ways to enrich our SIG activities! However, I would like to point out that our concept of Art cannot be reduced to the visual arts, as we clearly stress with this year's award categories. Only by embracing the heterogeneity of tools and perspectives the Arts can offer us will we appreciate the full potential of this dialogue. Neuroscience can definitely exploit this diversity when it comes to outreach, both within our community (interactive graphical abstracts? performative poster presentations? Why not!) and outside it (e.g., reaching the general public by honoring the different ways information can be assimilated and digested). But the benefit of integrating more insights from the Arts will also be seen in how it unleashes scientists' creativity and divergent thinking. It's not only about finding new ways of showing our results, but also about exploring new points of view. RC: For now, you have a board that is composed of:
RC: I guess you might need more manpower to carry on with these tasks. Are you looking for more people in the SIG?
AD: Yes. We always need people to help set up the exhibition and assist during the conference. We're also always happy to hear new ideas that take the project in new directions. RC: I think the transfer of Brain-Art activities from the Neuro Bureau to this Brain-Art SIG is a great initiative to bring more attention to the exhibit and to be able to communicate specifically about art. What motivated the decision to create the Brain-Art SIG, and when did you take it? VB: Over the years, the community of OHBM members interested in art-related initiatives kept growing. Giving it structure within an official SIG seemed like the best option to support its evolution. One key aspect is that the SIG promotes an open and transparent process allowing every OHBM member to contribute, e.g., joining our Slack community, following our activities online, volunteering to help, or joining as one of the SIG officials.
Retrospective of Brain-Art Competitions and Exhibits
The Neuro Bureau has run the Brain-Art competition every year since 2011. Anyone can participate in the competition by submitting art pieces to specific categories. There is no limit on the number of submissions per person. The constant categories are:
In addition, one or two special topics are proposed every year, reflecting trending topics in the field of neuroimaging:
Among the main and most consistent contributors over the years, we can find several submissions from Katja Heuer, AmanPreet Badhwar, Roberto Toro, Michel Thiebaut de Schotten, Benedicte Batrancourt and Lucina Uddin. The Brain-Art exhibit is a more recent addition. The first exhibition, called "Crossing fibers: A retrospectroscopic view", was shown at OHBM 2015 in Hawaii and later displayed in Germany (Honolulu in 2015, then Berlin and Leipzig). This exhibit featured the best art from the Brain-Art competition and, to support the initiative, people could buy posters of their favorite piece. At OHBM 2017, a new exhibit presented the latest art from the Brain-Art competition (read more about the OHBM 2017 exhibit) alongside an art piece called Dream Sessions. Created by Nathalie Regard and Roberto Toro, this dream log of 101 nights was not only a piece of art but also a tool to study the EEG recordings made during these nights. At OHBM 2018, together with the best pieces from the Brain-Art competition, the exhibition featured a local Singaporean artist, Shubigi Rao, inspired by her knowledge of neuroscience. Conference attendees were able to discover mesmerizing representations of creatures with complex nervous systems. The Brain-Art SIG is currently working on setting up the OHBM 2019 exhibit, entitled "Ars Cerebri: Creativity stemming from, and at the service of, neuroscience." Inspired by the ancient Muses, this year's exhibition will feature pieces covering multiple domains of the Arts, sharing one common denominator: they are the fruits of the creativity that stems from, or is inspired by, neuroscientific research. Whether established or emerging, different artists and scientists will contribute their personal and unique works produced under the Muses' power of inspiration. Static as well as dynamic pieces will be exhibited during the main conference (June 9-13, 2019) in the heart of the Auditorium. In addition, a special evening event showcasing live performances will be held on Monday the 10th. The SIG has just opened the Brain-Art Competition 2019, and this year we see a renewal of the categories, with an emphasis on different types of art and more dimensions. A major novelty is the set of dedicated categories for text and live performances. This year's exhibit will go beyond the visual arts. The categories are:
You can submit your art pieces/illustrations/representations before 11:59 PM CDT on Wednesday, May 29th, 2019. To stay updated and participate in the Brain-Art SIG's activities, join their Slack workspace! Twitter handle: https://twitter.com/OHBM_BrainArt Slack workspace: ohbmbrainart.slack.com
Peter Fox is a Professor of Neurology and has been director of the Research Imaging Institute at the University of Texas Health Science Center, San Antonio, since 1991. He's a co-founder of the journal Human Brain Mapping (with Jack Lancaster), a founding member of the International Consortium for Brain Mapping, and has consistently been listed as one of the top 100 most cited neuroscientists since 2004. Peter Fox has played an integral role in the founding and development of OHBM, serving as Chair in 2004-05. We found out about his major academic achievements and experiences with OHBM.
About that time, articles started appearing in journals and being covered in Scientific American about what the Danes were doing with single photon studies. They started off with language studies, identifying that during language listening, there was a lot going on in the frontal lobes, and that the right hemisphere was involved, two points that nobody had anticipated. At that point, I knew that I wanted to study people, and I figured the only real way to do that was to go to medical school and become a clinical neuroscientist. Then I could do this kind of work. What do you see happening with neuroimaging in the US these days? PF: In the area that I'm most involved in, and the sort of grants that I review, what I'm seeing as a strong trend, pushed both by the investigators and by the funding agencies, is using neuroimaging as a demonstration of the neurobiology of treatments. And the expectation is that if you're going to test a new treatment, if you're going to do a clinical trial in a neurological disorder or psychiatric disorder, you won't be funded unless you can establish the neurobiological mechanism. Imaging is the way to do that. So it's moved from being really basic science to clinical neuroscience, and the interface between treatments and theory. I think that's a really powerful and appropriate role, and a way of moving neuroimaging into demonstrably helping humankind at large. So I think that's a very important and powerful direction that the field is going in. What research or other contributions are you most proud of in your career? PF: Two areas that I was very pleased to have been involved in were both at about the same time. One is doing the original studies demonstrating that blood flow and metabolism are uncoupled, or are engaged in a very complicated relationship. Those observations gave rise pretty much immediately to the development of functional MRI, and in particular, the prediction of the BOLD signal. The people who described it, predicted it, and cited the work that we had just published, said, "If Fox and Raichle are correct then that would predict this, and we should get a signal like this", and that was correct. Now BOLD fMRI has become the dominant technique for brain mapping. And so the lineage there is really quite clear. And so I'd say that was a lot of fun. Another area that I've been really pleased with, in terms of how well early ideas evolved and were adopted, is introducing standardized coordinates. When I started doing mapping studies, right away, I was unsatisfied with the ability to say where we were. And I looked around, and there weren't many examples, but people were mostly naming things by gyri. That, to me, seemed not enough. So I spent time looking for alternatives and came across Talairach's 1967 atlas and some papers referencing that. So we developed a way of putting the images I was acquiring into Talairach space, and published that method, and encouraged people to adopt it. So ultimately, that has become the standard. And so really, everything is published in standardized coordinates, originally Talairach coordinates, but now the 1988 Talairach and MNI, and there's various versions, but still, they are translatable from one to another. And so the format that we all publish and analyze our data in, I had the opportunity to introduce, and so that's real fun, I enjoyed that. You played a part in the creation of OHBM. What was that like? And what did you imagine OHBM would be like?
PF: When I was just starting out in San Antonio, I'd been working on the BrainMap database for a few years. We were trying to develop a data sharing mechanism that used standardized coordinates, to give people a way of sharing their results, if not their original data. I received funding for quite a few years to bring people to San Antonio. I focused on bringing people who were having the most influence on methods development. We had two days of methods talks, always in the same organization. There was a session on algorithms for data analysis; Karl Friston always ran that as a half-day session. And there was a session on databases that I ran. There was a session on spatial normalization, and [Jack] Lancaster ran that, and there was a session on merging different imaging modalities. We did that year after year, and after about the third year, this was a meeting of about 200 people. I had grants to bring people, and the people who kept coming year after year said we should open this up to a bigger community. I said, that's fine with me. And they said, you do it, Peter, but I thought 'No, I like doing meetings this size. I'll do one later, but I don't want to do the first big meeting.' Bernard Mazoyer and Per Roland said they'd do it, but they wanted guaranteed support because they didn't know how to do it. And so everybody there, [John] Mazziotta and [Karl] Friston and [Richard] Frackowiak and Leslie Ungerleider, all said, we'll bring our labs. So with that kind of agreement, Bernard and Per went forward and did it. And then that just kicked it off. And it's rolled since then. What have you found most rewarding about your involvement with OHBM? PF: I've been to many different meetings. I think it's a very widely held opinion that the standard of science at OHBM is the best of any meeting that I've ever been to. It's very sophisticated, and has very high expectations. Clinical meetings are not this good by a long shot, they are not. Another thing is the inclusivity of this meeting; it's been that way from the start. We have mathematicians and statisticians and physicists and engineers, psychologists, psychiatrists and neurologists; everybody comes together. That's very unusual. The third thing that I think is really fun about this meeting is its international scope. And the gender balance. Many clinical meetings, many clinical studies, are not very gender balanced. They're male predominant, a lot of them, particularly in imaging. So for instance Radiology is 80% male. And at this OHBM meeting in Singapore it's pretty much 50:50, male to female. That's very unusual. So I think there's a lot of unusual things about this meeting. What memory stands out when you think about your experiences with OHBM? PF: I remember being the council chair in Toronto and that was amazing. That was really a lot of fun. And I definitely remember we got sponsored to host a party. Just the council chair hosted a party; that was a spectacular party. So we've been to a lot of really outstanding social events involved with OHBM. What changes have you seen in OHBM over the years? PF: I'm as impressed with what doesn't change as what does change. Certainly, the recent changes have been that there's much more social media outreach, and proactive engagement of young people, trying to attract people into this field and making a very complicated field as approachable as it can be. High marks for that - that's really an excellent initiative.
And honestly, I think OHBM is doing that better than any other organization that I've been exposed to. It's very proactive, it's very positive. But, to me, equally impressive is that the overall concept of the meeting, the organization of the meeting, how the program committee approaches the meeting, just the style of the meeting, was created early. If you went to the first OHBM and you went to the current OHBM you would see strong similarities. The intention of the meeting and what is attempted, what's being done, is giving you the most cutting edge applications, the most cutting edge methods, trying to span cognitive and clinical neuroscience; that was present from day one. And that really creates an outstanding feel, flavor and content for this meeting. And so I'm just as happy with what has persisted as with what has evolved. So what do you see as the future for neuroimaging? PF: I mentioned earlier that I think neuroimaging has a huge role in treatment development. I expect that to continue. Another direction that I really think the field will have to push on, and be open and kind of aggressive in bringing people in, is pushing down into the basic neurobiological mechanisms underlying the imaging signals that we have. That means collaborating with people working in animal models, and working with techniques that are more invasive than ours, such as optical techniques. And we need to really encourage scientists working at that level to bring their work to OHBM. So I think those are the directions that are important for us to go in. Professor Fox, it's been great. Thank you very much. PF: Yeah. Thank you.
By Shruti Vij & Nils Muhlert Functional MRI has been in use for over 25 years. Despite providing us with a breadth of methods developments and exciting findings about how the brain works, there has been a dearth of clinical applications. The OHBM Alpine Chapter has been keenly focussed on ways in which we can translate fMRI and other neuroimaging modalities to the clinic. Founded in 2014, the Alpine Chapter has provided a forum for like-minded brain mappers, both basic scientists and clinicians, throughout Austria, Switzerland, Germany and neighbouring countries to discuss new methods and new projects, and to collaborate on programs of research. Here, Shruti Vij spoke to the past and current Chairs, Roland Beisteiner and Christoph Stippich respectively, to find out how the Chapter has developed and its directions for growth.
by Aina Puce & Bernard Mazoyer, OHBM Program Committee
In the late 1980s, neuroimagers were a ragged band of multi-disciplinary researchers with no real home. In pursuit of their scientific interests, they attended meetings covering radiology, nuclear medicine, neurophysiology, engineering, image processing and computer science. Starting in 1992, a small group of internationally well-known neuroimagers attended a series of eight annual BrainMap Workshops in San Antonio devoted to promoting the development of standard space as an analysis and reporting standard, with discussions also related to the development of open-access neuroimaging archives. These meetings were organized by Peter Fox [USA] and funded by NIH [USA] R13 awards. After one such meeting in 1994, the crying need for a home of their own was the central issue discussed around a table of 25 scientists who became the driving force behind what would become OHBM. At the meeting, Dr. Bernard Mazoyer [France] volunteered to host the launch of such an international conference, with a second meeting in Boston, USA to be held in 1996 and organized by Jack Belliveau and Bruce Rosen. The rest is history. Mazoyer and colleagues Per Roland [Sweden] and Rudiger Seitz [Germany] hosted the meeting in Paris, France in June 1995. Incredibly, 820 attendees came to the first meeting – greatly exceeding the organizers' expectations! The meeting consisted of talks and poster sessions. The inaugural Talairach keynote lecture was given by Dr Jean Talairach – the French neurosurgeon who pioneered the use of a standardized stereotactic grid system for neurosurgery.
OHBM officially became an Organization in 1997 with ratified by-laws and the potential to elect office bearers [OHBM Council, OHBM Program Committee]. Indeed, many of the first OHBM Council Chairs were scientists who had participated in the original BrainMap Workshops. Over the past 25 years, the OHBM has taken on multiple new responsibilities, effectively functioning as a Society while retaining its original name. Therefore, it finally became a Society in 2018 – ratified by the OHBM membership at the annual meeting in Singapore – allowing the official sanctioning of year-round activities of ‘Chapters’ in different international communities.
In the mid-1990s, the neuroimaging zeitgeist was such that Positron Emission Tomography [PET] was an established neuroimaging modality, with activation studies of cerebral blood flow and glucose metabolism being performed in both humans and animals. The requirement of a nearby cyclotron meant that PET was largely confined to the largest institutions with clinical and/or research imaging centers. The 1995 Paris neuroimaging meeting was actually a satellite meeting for the Brain PET meeting in Cologne. At the time, only a few groups were performing functional magnetic resonance imaging [fMRI] studies. Analysis software was rudimentary – the first generation of Statistical Parametric Mapping [SPM] software for PET data analysis was available – with the first methods papers being published by Karl Friston in 1990/1991 [see https://www.fil.ion.ucl.ac.uk/spm/doc/history.html]. Software packages for fMRI were being developed, e.g., work on Analysis of Functional NeuroImages [AFNI] by Bob Cox at the Medical College of Wisconsin began in 1994 [see https://afni.nimh.nih.gov/afni_history], and SPM for fMRI came about from a number of attempts at implementing data analysis from Friston's group in 1995. Magnetoencephalography [MEG] and electroencephalography [EEG] were already established neurophysiological methods in the mid-1990s, with their own specialized smaller scientific meetings. High-density MEG/EEG recordings were still not that common. Most of the book of 404 abstracts for the Paris meeting was devoted to brain activation studies, with 27% devoted to fMRI methods, 6% to the nature of the BOLD response, and 9% to MEG-EEG. The OHBM has been a hub for the neuroimaging community, gradually incorporating additional MRI-based methods such as the quantification of grey matter and white matter and the formulation of anatomical atlases. Efforts to encourage the involvement of more basic and clinical researchers performing MEG and EEG studies are also being made. Right from the outset, OHBM has recognized the importance of having an educational program [initially organized by Peter Bandettini from 1998-2000], with weekend education sessions being added as early as 1998, and morning education sessions commencing in 2000 for OHBM in San Antonio. In 2000, Peter Fox obtained a 5-year NIH R13 grant whose $25,000/year proceeds were devoted to 25 travel awards for OHBM trainees, based on the abstracts with the highest peer-review scores. This grant was extremely helpful in kickstarting engagement from new scientists just starting out in functional neuroimaging and launched the OHBM Trainee Travel Award Program. In 2005, Peter Fox succeeded in obtaining a renewal of this 5-year grant with an increased budget of $50,000/year. After 10 years of NIH travel awards to the tune of $750,000 and increasing attendances at OHBM meetings, OHBM had enough financial reserve to continue the travel award program and the NIH grant was allowed to lapse. Additionally, the neuroimaging journals NeuroImage and Human Brain Mapping were spawned for this community. NeuroImage was an existing Elsevier journal that was transformed to be a forum for [mainly human] PET and fMRI studies by Editors Art Toga, Richard Frackowiak, and John Mazziotta [1995], whereas Human Brain Mapping was started de novo by Peter Fox for Wiley [1993]. Both Human Brain Mapping and NeuroImage were the source of OHBM abstract books for the first few years. Additional journals for neuroimaging and related disciplines have been added since those times, e.g.,
Brain Connectivity [Christopher Pawela & Bharat Biswal] and Brain Structure and Function [Laszlo Zaborszky & Karl Zilles]. All of these senior scientists have been active in the OHBM community. Indeed, Editors for all of these journals continue to come largely from the OHBM community. In addition to journal-based activity, early efforts to standardize data formats and data sharing were occurring at the time. For example, in the early '90s, workshops for the International Consortium for Brain Mapping [beginning in 1992 and co-ordinated by John Mazziotta] and for the European Computerized Human Brain Database [beginning in 1994 and co-ordinated by Per Roland] were run in addition to the San Antonio BrainMap Workshops. A set of awards recognizes the achievements of OHBM members. An award devoted to recognizing excellence in early career neuroimagers began as the Wiley Young Investigator Award [first awarded to Karl Friston in 1996]; in 2016, it became the OHBM Early Career Investigator Award. Other OHBM awards include the Education in Neuroimaging Award [first awarded to JB Poline in 2013] and the Replication Award [first awarded to Wouter Boekel in 2017]. In 2014, OHBM awarded the Glass Brain Award – created to recognize the lifetime achievements of scientists in the field of human neuroimaging – to Karl Zilles. From 2005, OHBM has also been very fortunate to have the Editors-in-Chief of the journals Human Brain Mapping and NeuroImage announce their Editor's Choice Award for the best paper in their respective journals at the opening ceremony of each OHBM meeting. OHBM is a Society that is known to be inclusive and to change with the times. Its Council and Scientific Program Committee have existed from the early years [1997]. In response to current issues, committees such as a Diversity & Gender Committee, a Communications Committee, and the OHBM Publishing Initiatives Committee, among others, have been more recently constituted. The Communications Committee has its hands full improving the OHBM website – providing an 'on demand' education program [2014] consisting of resources such as videoed lectures from previous meetings and educational materials, and running a blog [2015], among other things. OHBM is also an inclusive Society as indicated by its Code of Conduct Statement [see https://www.humanbrainmapping.org/i4a/pages/index.cfm?pageid=3912 ]. Three special interest groups [SIGs] devoted to Students & Post-docs, Open Science and Brain-Art are also now part of OHBM. As OHBM has grown, a professional secretariat soon became necessary, which has helped to preserve institutional knowledge and to increase professionalism. Initially, in the early 2000s, Lori Anderson and her team [from a US-based company called L&L] fulfilled that role. Nowadays these greatly expanded functions are fulfilled by the OHBM Executive Office, based in Minneapolis, USA. Over the years the OHBM Annual Scientific Meeting has alternated between the European, Asian and North American continents, with occasional detours to places such as Australia. Attendee numbers have steadily grown over the years – first surpassing 3000 in 2005 when the meeting was held in Florence, Italy. Indeed, the 25th anniversary meeting in Rome, Italy promises to be a bumper year – with over 3700 abstract submissions and attendee numbers expected to be around 4000!
This year’s meeting will be an exciting one – not only for the new science being presented, but also for the nostalgic look back at the previous 25 years of meetings being prepared by members of OHBMs Scientific Advisory Board – individuals who have been part of the history of OHBM. We look forward to seeing you at OHBM in Rome on June 9-13, 2019! By Nils Muhlert Resting-state fMRI has seen increasing attention over the last decade. The majority of these studies have focussed on static resting state networks, often considering the spatial topography or extent of components. A number of researchers are however considering how these networks change over time - dynamic changes - and what these temporal shifts in networks tell us about cognition and behaviour. Catie Chang, an assistant professor of computer science and electrical engineering at Vanderbilt University, has focussed on this question since her PhD - with her work exploiting signal analysis techniques to understand what drives and affects these dynamic changes in fMRI signals and networks. As our first keynote interview for OHBM 2019, we found out about how Catie honed her craft, what we can gain from investigating these signals, and her experiences of life as a new PI. Nils Muhlert (NM): I'm here today with Catie Chang, one of our keynote speakers at OHBM2019. Thanks Catie for joining us.
Catie Chang (CC): Thank you so much. NM: First, can you tell us a bit about your background? What turned your work towards functional connectivity? CC: My first experience in a human neuroimaging lab was at Stanford, working with Vinod Menon and Michael Greicius. They were pioneering ideas about brain networks, dynamics, resting state connectivity, and applications to neurological and neuropsychiatric disorders back around 2005, 2006 and earlier. I was very influenced by their perspectives and found this an interesting and exciting research field. That got me considering many ideas about brain connectivity, about resting state. Then, I went to work with Gary Glover for my PhD in the Radiological Sciences lab at Stanford, and the emphasis in that lab and environment was on the physiology and the physics of imaging signals. This led me to questions like, what is the physiological basis of the signal changes that we're measuring? Can we better acquire these signals, improve our analysis and post-processing? And can we combine signals from different modalities to improve our interpretations? So I really found my home at the intersection of these different worlds. NM: Did you spend a lot of time looking at noise? Trying to work out where the signal was actually coming from? CC: I started out looking at, I guess, what we kind of consider noise, which is the influence of systemic physiology on fMRI signals. So when you take a deep breath, for example, this induces a very large BOLD signal change. But we were interested in it not only from the perspective of how this introduces noise into our signals, but also how it can introduce new information into the data. The first research question I started working on with Gary was, how can we use the fact that there's this large, systemic influence on the BOLD signal to calibrate for hemodynamic timing differences between different regions that may not be related to underlying neural activity? Can we use a breath-holding task, and if we find timing delays across the brain in the breath-holding BOLD response, can this help us pinpoint fMRI timing differences between brain regions that are not neural in origin, but may be more vascular or hemodynamic in origin? Throughout, my work has been looking at two sides of the same coin - noise on the one side and trying to clean up the data, but on the other side, looking at the discarded component, which is often very valuable for a different purpose. And if we can disentangle these influences on the signal, then we have the power to use those components in different ways. NM: You also mentioned a few people there. So Mike Greicius, who we've interviewed before for the blog - he came across as thoughtful. Do you think that's influenced how you supervise your own students, now that you're building your own lab at Vanderbilt? CC: Yeah, Mike was a really influential mentor to me. He was always giving me great advice about, for instance, not getting too caught up in certain details, instead seeing the big picture and the more interesting questions. To be honest, I am very detail oriented, so I keep this advice in mind when I mentor students. I try to be very involved in the details, but on the other hand, I also try to step back and say, are we providing an important message? Is this research going in the right direction? And having many complementary mentoring styles throughout my work from Mike Greicius, Gary Glover, and Jeff Duyn and David Leopold, who I worked with as a postdoc -- they've all shown me very different but very valuable perspectives.
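The breath-hold calibration Catie describes comes down to estimating, voxel by voxel, how much the BOLD response lags a shared reference time course, for example by finding the temporal shift that maximizes their correlation. A toy sketch of that lag estimation on synthetic data (an illustration of the general idea, not her actual pipeline):

```python
import numpy as np

def lag_seconds(voxel_ts, reference_ts, tr=2.0, max_lag=5):
    """Estimate a voxel's delay (in seconds) relative to a reference
    time course, e.g. the global breath-hold response, by finding the
    shift that maximizes their correlation."""
    lags = range(-max_lag, max_lag + 1)
    corrs = [np.corrcoef(np.roll(reference_ts, k), voxel_ts)[0, 1]
             for k in lags]
    return lags[int(np.argmax(corrs))] * tr

# Synthetic demo: a voxel whose response lags the reference by 2 TRs.
t = np.arange(200)
reference = np.sin(2 * np.pi * t / 40)        # idealized breath-hold cycle
noise = 0.3 * np.random.default_rng(0).standard_normal(200)
voxel = np.roll(reference, 2) + noise

print(lag_seconds(voxel, reference))  # 4.0 (two TRs of 2.0 s each)
```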
NM: How have you found that process - moving to becoming a PI, having your own lab? CC: I really love it. There are so many interesting things that come with starting your own lab: working with my students, and the collaborators here at Vanderbilt; that's one of the best parts of being here. They're just brilliant, great collaborators, great colleagues. But it's also been very busy. So I'm not even one year into this new position, and the past year has been a blur. Many new things to get used to in this environment. For example, I've taken on some teaching. And I've discovered I really like teaching. NM: Don't let anyone know - they'll pull you in for loads of it! One thing that's come up in your work is the idea that you could use the dynamics of functional connectivity as a biomarker for cognitive and clinical studies, and clinical trials. Do you think this is feasible over the next 5-10 years? Are there steps being made towards that? How's the validation process going? CC: I think that looking at dynamics is very promising for studying cognitive and clinical questions. The idea here is: can we get more information from the signal if we open up the dimension of time, and aspects of the signal that may change over time? This notion opens the possibility that we can look at features of the data that reflect state changes and cognitive processes that may be really relevant markers of different disorders. But there are still many challenges that we have to address at the same time as we do this exploratory research. It's hard to go from having a hypothesis about brain dynamics to knowing exactly what metrics and features of the signal we should isolate to test these questions. We (as a field) are also working out how we carry out the statistical testing, for example, to see if "dynamics" is really the core element that's disrupted in a given disorder, or if, perhaps, some of those apparent signal dynamics are just an offshoot of some other, simpler phenomena. We're at least starting to dig into that. There's a lot of exciting progress being made by many research groups and it's interesting to see where that will go. We also face a lot of challenges in the dynamics world, because fMRI has a low signal-to-noise ratio, with many different things that can cause fluctuations within a voxel other than neural activity. And so trying to interpret and clearly link the phenomena that we observe to a conclusion about brain function is challenging. NM: So what would you say you're most proud of in your career? What kind of work, would you say, stands out? CC: Whatever I can do that's helpful to researchers, I feel proud of. And so when people ask for code to isolate physiological signals, for example, then I'm really happy I can share that. My deep interest is trying to understand signals and mine them for information. So I'm really excited about the work that goes toward resolving particular influences on the fMRI signal. For instance, a subject's level of alertness is one factor that can change fMRI signals, but on the other hand, it's also something very interesting we can study in itself. Our recent work examines how we can detect natural changes in alertness from fMRI spatiotemporal dynamics, which I also find to be a fascinating direction.
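"Opening up the dimension of time" is, in its simplest form, often operationalized as sliding-window functional connectivity: recompute the correlation between two regions inside a window that slides along the scan. A toy sketch on synthetic time series (one simple approach among many, not a description of the Chang lab's methods):

```python
import numpy as np

def sliding_window_fc(ts_a, ts_b, window=30, step=5):
    """Correlation between two regional time series inside sliding
    windows: one simple way to look at time-varying connectivity."""
    starts = range(0, len(ts_a) - window + 1, step)
    return np.array([np.corrcoef(ts_a[s:s + window], ts_b[s:s + window])[0, 1]
                     for s in starts])

rng = np.random.default_rng(0)
shared = rng.standard_normal(600)             # synthetic common fluctuation
region_a = shared + rng.standard_normal(600)  # two regions sharing a signal
region_b = shared + rng.standard_normal(600)

fc_over_time = sliding_window_fc(region_a, region_b)
print(fc_over_time[:10].round(2))  # coupling strength varies across windows
```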
NM: So this is maybe going back a little bit over what you've already said. But one issue that some people have been struggling with is the underlying physiological basis of resting state functional connectivity networks. And people are starting to look at whether there are particularly high densities of neurotransmitter receptors within the hubs of these networks that might aid coordination of this activity. Do you think we're moving closer towards having that understanding of how these networks emerge? CC: I think we're moving closer. I mean, it's hard for me to say how close we are -- but for example, many researchers are combining fMRI with other techniques to perturb neural activity in specific ways and understand how that impacts resting state networks, which I believe is an important direction. I think that a bridge between non-invasive human imaging and more invasive animal or patient studies is really helping to provide that link. In animals, of course, there are many more flexible manipulations that we can do to try to understand the precise impact of activating or inhibiting certain brain regions on large scale connectivity. And so that'll be really important to bridge these types of research. NM: What can we expect from your lab over the coming years, then? CC: One direction is that at Vanderbilt, there's close collaboration between engineering and the medical center. So I'm really excited about the collaborations that we're forming with the medical school as well as the imaging center here. And so I've started to work together with Vicky Morgan and Dario Englot here, for example, forming ideas for how we can use fMRI methods to understand epilepsy. They've been carrying out this type of research for a long time, and I'm really excited to be collaborating with them. Another area that we're trying to push is carrying out multimodal studies to understand changes in alertness, and how that relates to changes that we see in fMRI signals. We're developing ways of performing more detailed characterization of the effects of these kinds of state changes on fMRI data. We're collecting, I guess, "mega scans", where we have fMRI together with EEG, eye tracking, cardiac and respiratory signals, and behavioral measures. So my subjects may not like me very much [laughs], and now we have so much data, all these different data types; how do we combine them? But the more information that we have, the more that we can start to piece together the puzzle of what moment-to-moment fMRI signal changes reflect, and the signatures of specific ongoing neural and physiological processes. We're asking whether we can better capture and understand that. If we can figure out ways to integrate these external measurements, which are all complementary measures of humans and what state they're in, then it'll be really exciting. NM: And so lastly, can you give us some insight into what you'll be discussing in your keynote lecture? CC: The main theme is along the lines of what we've been talking about - the more that we can understand and disentangle the sources of signal or network fluctuations, the more rich and clear information we can extract from our data. That can lead us to more sensitive biomarkers, and sensitive measures and inferences of neural activity from fMRI. And when we combine fMRI with other modalities, such as EEG, then it can help us draw that information out of the data. NM: That sounds very comprehensive!
CC: I'm going to make it more specific - and I have this terrible habit of changing my talks the night before, so who knows [laughs].

NM: Thanks for joining us here, and we look forward to your talk!

By Shruti Vij & Nils Muhlert

Peter Bandettini has been a key figure in neuroimaging for over 25 years. His career started in earnest with a PhD working with James S. Hyde and R. Scott Hinks in Wisconsin, where he pioneered the development of functional MRI. Now at the NIH, Peter's work has examined the sources of functional contrast and noise in BOLD, the temporal variability of resting-state fMRI and, more recently, layer-dependent activity in fMRI. We found out about his history working alongside other founding members of OHBM, his advice for early career researchers and the unique challenges of working at the National Institutes of Health.

Shruti Vij (SV): I would like to start by asking you about your background, and why and how you became interested in neuroimaging.

Peter Bandettini (PB): I started out as an undergraduate in physics at Marquette University. I was interested in the brain the entire time there. Even from high school, I remember reading a famous Scientific American article showing the first functional CT and PET scans. That was inspiring. I was always interested in the brain. At the same time, I wanted to study a "hard" science like physics, and I thought that the integration between the two would be useful. Although I wasn't quite sure what I wanted to do - I could have been an engineer, and I was potentially interested in medicine - I decided to go to grad school in a biophysics department, and luckily, it led me to brain function research.

SV: Great! What do you see happening with neuroimaging in your country? What kinds of research and what kinds of advances?

PB: I think that the BRAIN Initiative is the latest big thing in the United States. In Europe, there's the Human Brain Project, and I think together they are really trying to step back and understand the brain at a more fundamental, more mechanistic level. Right now, the exciting thing is that both programs are focusing on methods. There's methods development that is leading into more brain modeling, and then there's the development of a more integrated structure for sharing data. In fMRI there's a lot of work on things like data pooling, machine learning, and extracting information about individual subjects as opposed to group studies - actually trying to get more clinical traction from the data.

SV: Awesome. What research or other contributions are you most proud of in your career?

PB: I'm really proud of a lot of things, many of them from really good collaborations. I was lucky enough to be in the right place at the right time at the biophysics department at the Medical College of Wisconsin, working with Eric Wong, who was developing the hardware for echo planar imaging. That was the right place at the right time to get going quickly as a graduate student in helping to start functional MRI. I was a grad student, and I submitted my first first-authored paper ever - on fMRI - which happened by luck to be the first paper ever published on fMRI (by a week) - to MRM as a communication. They published it quickly, and so I'm very proud of that. That said, our group was, by most accounts, the third group that successfully performed fMRI - behind MGH and Minnesota. We might have been the first group to perform fMRI of motor cortex activation - as shown in this paper.
I was also part of pushing the initial use of correlation analysis for fMRI data. That was our second paper. I've been told that that paper contained the very first use of the term "fMRI". It's important to emphasize that neither of those would have been possible without the incredibly rich environment of colleagues and resources. In particular, Eric Wong, a fellow grad student, was most fundamentally important. I'm very proud of pushing the concept of embedded contrast in fMRI data - like multi-echo or simultaneous spin-echo and gradient-echo - and pushing the temporal and spatial resolution. But right now, I'm proud of being able to lead and direct a group. They do all the work; I now feel more like an enabler of everything from multivariate assessment, to pattern effect assessment, to resting state. I'm very proud of helping my grad students achieve great things as well. Recently I've been going to very high resolution and looking at layer-dependent fMRI, so you can actually start to untangle input and output connections from layer fMRI activation. So I'm proud of being able to integrate everything from the acquisition side with physics, to the basic neuroscience, and also the data analysis - to try to bring it all together.

SV: Great! You played a part in the creation of OHBM. What was that like?

PB: That was really exciting. It's interesting that it first started out even before OHBM: Peter Fox had a regular meeting, and I think a number of people came together and thought, this can be bigger. It was exciting to be part of that process at the very beginning. I remember trying to get everything organized and trying to figure out: "okay, so we're going to have a council and we're going to have a program committee, and this is what we're going to do." Another memory is back in 2001, in Brighton, when we hired a meeting management company called L&L, which we still use today (under a different name). It was a big decision then, and we were like, yeah, I think we should go with them. It's amazing what impact it had. Also, back when the meeting was in San Antonio, one of the early OHBMs, around 2000, I remember sitting in council, and we had the idea of having a separate day for education courses, and having education courses in the morning as parallel sessions. That was very exciting. I was the chair of the education committee at the time, and back then I had to organize every single parallel morning session for two years - that was really challenging! Ed Bullmore took over after that. So there are a lot of good memories of that. I knew OHBM would grow. At the time, it seemed to be just another meeting - I was young and had limited perspective on these things. At the same time, we all knew there was always something special about it. OHBM seemed united around the methods: as opposed to just another cognitive neuroscience or neuroscience meeting, or ISMRM, it's about the brain imaging methods. And that was effective for bringing the community together. There are many people who've attended every single year, and after about five years, everyone ended up knowing everyone else. So it's become this really large, nice, extended sort of family, where we all kind of know each other and know what everybody does. And that's a good feeling.

SV: And what did you imagine it would be like?

PB: I thought it would likely go more towards the neuroscience direction. I didn't think it would go in the direction of the methods continually improving.
I always thought the methods would get better, but now they're getting qualitatively much better, and they're becoming integrated. And I didn't imagine it would maintain the same cohesion. Before, it was small and cohesive; it has somehow grown, but that cohesiveness has scaled with the times as well, which I think is really unique. So I didn't imagine that. I also didn't quite imagine that it would be as respected. It was always a grassroots movement of a meeting, but now it's a really respected meeting, and people look at it as the main meeting they go to. I think that respect and that reputation is still growing. So that's been a nice thing that I didn't really completely expect.

SV: What have you found the most rewarding in your involvement with OHBM?

PB: The most rewarding thing has been that we really did get to invent it from the ground up. That was rewarding - to be able to figure things out as you go along. But it's also rewarding that it's been a catalyst for so many people: making science more than just doing the science, asking the questions, presenting your paper. Instead, it was about having a real community, knowing the people involved in the field and looking forward to going to the meeting, not just to give a talk and exchange information, but to get a better appreciation of what's going on in other people's groups and actually catch up with old friends. That I find really satisfying.

SV: What advice would you give to young investigators?

PB: I think people need to be a little more bold, because even with the established literature, there's a lot of room for complementary information. And I wouldn't be afraid to have data that contradicts existing results, because everything is relatively new, and we're still trying to figure out what's going on and how to interpret things. Another thing is to try to always think in an integrated way. Never be afraid of not being an expert. I'm coming from a physics background, and many of the physicists I work with aren't just physicists. I think that the people who really become successful are those who are not afraid to think outside their domain and gain confidence there. Right now, I feel more like a neuroscientist than a physicist, even though my background, training and PhD were in physics departments. It's interesting how the tendency is to lock in and say, I'm a physicist - but for the last 20 years, I've been doing more neuroscience, processing and physiology, so I try to think of myself more in those domains as I expand out. Never limit yourself. That would be another main piece of advice.

SV: Until recently you were the editor-in-chief of NeuroImage. How was that experience? And what would you tell trainees who are looking into going into editorial jobs?

PB: When I started this work, I wasn't thinking of getting into an editorial job. I said yes to everything. I think that we're all really lucky to be in a situation like this, where there are so many opportunities. So I said, "Yes!" And I never took that for granted - I still try not to take it for granted. I said yes to reviewing papers, and I would always do the best I could reviewing them. Then my number of reviews caught the editors' attention. They said, "Oh, he says yes to all our papers, and he really likes to review papers - let's make him an editor." I enjoyed that. And as you get older, it's probably good to say no, to manage your time. But I still haven't figured that out, to be honest.
So what I would tell people is: when I agree to review papers, I look at it less like "oh, it's a duty I'm doing". Instead, I feel like I would read the paper anyway, so I might as well review it. So my advice to people is that getting into editorial work takes a certain mindset; it takes time, and it takes a certain amount of confidence in making decisions about papers. But you learn what a good paper is and what a bad paper is. You learn all the processes of sending back feedback and doing reviews, and what a good review actually means. That's helpful, and it helps you write your own papers. The whole editing process has helped me so much in terms of my own writing. And it broadened my horizons a lot, especially being editor-in-chief. I got 10 to 20 papers a day that I had to assess quickly and then send out to the senior editors. That gives you a very broad and very up-to-date perspective on the field, because you're seeing the latest things - it's a good four to six months before they're published. So that's what you gain. As for advice to people: I would just say "yes" to reviewing as many papers as possible. I think it's also important to not just find flaws in papers. Finding flaws is good, but it's too easy to dismiss a paper as bad just because it has this, this and this flaw. I would recommend that people be more accepting. The goal is to help the author get published, not to stop them. As much as possible, a reviewer should look at themselves as helping the process along if the paper is above a threshold, as opposed to trying to stop it from being published.

SV: Over all these years, has it been hard to maintain a work-life balance?

PB: Yeah! [laughs] There are certain things, though - like I have to go running every day or do 30 minutes of aerobic exercise, but that's like an addiction anyway, so that part of the balance is easy to maintain. And I'm really lucky to have a great job at the NIH, where I can turn it off when I need to. But to have a good work-life balance, you do need to develop a certain amount of discipline. It's so easy to just have your work spill over. When I was a graduate student, I had no balance - I lost track of what day of the week it was; I'd be working all night, working whenever. But now I have a family, three boys and a wife, and sometimes there's a need to compromise on both ends. You know, even going to a meeting like this means a week and a half away, and all kinds of craziness is happening at home. But as long as you have a good schedule and compartmentalize enough, then it's good. I think that's the key: to be disciplined and to do certain things no matter what. I know I'm going to go running, and to try to get a certain amount of sleep. And I know that I'm going to try to spend this much time with my family. I don't meet these goals all the time, but I really try to work when I'm at work and focus on my family when I'm at home. The more you can compartmentalize in a disciplined way, the easier it is to achieve this balance. But that said, I fail all the time with this.

SV: So you've been at the NIH for a while now. Are there specific challenges at the NIH that other people would not be aware of?

PB: The NIH is a unique place. It's so good in the sense that all the researchers there have a certain budget and don't have to write grants - they have incredible resources.
At the same time, you're working for the government, which is different in the sense that there are certain rules that apply to government employees that I'm still learning - even after 20 years. That limits you in certain things. If you're a PI in the extramural world, you can write grants and be an entrepreneur - you can build an empire, depending on how successful you are at getting grants. You can collaborate with industry, you could have your startup company, whatever. At the NIH, you're more confined. It frees you to do other things, but there are conflict-of-interest rules - like for the flight here, I had to pick a flight on a government contract carrier that I didn't have a frequent flyer program on, or whatever. You have to, because you're working for the government. And you have your budget. As for the way you get approval: you're assessed scientifically every four years, but it's not like you have a grant review, so there's less uncertainty. It's potentially easy to coast if you want to, but luckily there's a good enough environment, with really motivated researchers and people. I've been so well supported at the NIH; all the good stuff outweighs the quirky government stuff.

SV: Thank you so much, Peter, for this oral history. I'm sure everybody will really enjoy hearing about it. Thank you.

PB: Well, thank you.

By Claude Bajada, Simon M. Hofmann and Ilona Lipp
Edited by: Thomas Yeo and Lisa Nickerson

Machine learning, deep learning and artificial intelligence are terms that currently appear everywhere: in the media, in job adverts… and at neuroimaging conferences. Machine learning is often portrayed as a mystical black box that will either solve all our problems in the future or replace us in our jobs. In this blog post, we discuss what the term machine learning actually means, what methods it encompasses, and how these methods can be applied to brain imaging analysis. In doing so, we refer to the OHBM OnDemand material, which contains some great videos explaining machine learning methodology, and we provide examples of how it has been used in a variety of applications. If you are curious about machine learning tools, but are not really sure whether you want to jump on the bandwagon, then we hope that this post is right for you and will help you get started.

What is machine learning?

Machine learning is a broad term that goes beyond deep learning and incorporates many other methods that are discussed in this post. Many of these methods you may already be familiar with from classical statistics, such as linear regression. While machine learning 'is built on the foundations of statistics and has absorbed much of its philosophy and many of its techniques over the years' (Max Welling, 2015), a main focus of machine learning lies in generalization, i.e. finding patterns in data by training a computational model such that it can predict unseen data of the same or similar nature. Here, a balance needs to be found between "overfitting" (fitting a model that is too complex and will only work well for the data used to train it) and "underfitting" (fitting a model that is too simple and works poorly even on the data used to train it), aiming at high sensitivity and specificity when the model is applied to new cases. This is generally attempted by splitting the data into various sets: training model parameters on one set, choosing the best model by evaluating it on another set, and testing it on yet another set.
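To make that splitting logic concrete, here is a minimal sketch in Python with scikit-learn. The data are synthetic, and every name and number (the feature count, the candidate regularisation strengths, the split sizes) is our own illustrative choice rather than anything prescribed by the talks or papers discussed here:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # 200 "subjects", 10 imaging-derived features
y = 2.0 * X[:, 0] + rng.normal(size=200)   # a continuous score to predict

# Hold out a final test set first, then split the remainder into train/validation.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.25, random_state=0)

# Model selection: pick the regularisation strength that does best on the validation set.
alphas = [0.01, 1.0, 100.0]
best_alpha = max(alphas, key=lambda a: Ridge(alpha=a).fit(X_train, y_train).score(X_val, y_val))

# Only now touch the test set, once, to estimate performance on truly unseen data.
final_model = Ridge(alpha=best_alpha).fit(X_rest, y_rest)
print(f"chosen alpha: {best_alpha}, test R^2: {final_model.score(X_test, y_test):.2f}")
```

The essential point is that the test set is consulted exactly once, after all modelling decisions have been made; this is what keeps the performance estimate honest about generalization.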
Sometimes, the term "statistical learning" is used for machine learning methods that have their foundation in statistics. An Introduction to Statistical Learning and The Elements of Statistical Learning are two great textbooks introducing some important concepts.

What about Artificial Intelligence? Is that something different?

Artificial Intelligence is a high-level, conceptual term that describes the effort to emulate "natural intelligence" in machines. Machine learning can be thought of as one set of statistical tools that can make machines more "intelligent".

What types of methods are there?

The most popular machine learning techniques applied to neuroimaging can be split into two general groups: supervised and unsupervised learning. Supervised learning requires labelled data (i.e., data that have been labelled, classified, or categorized), whilst unsupervised methods detect patterns in unlabelled data. Machine learning methods also differ with regard to their complexity: both types of approach range from fairly simple linear models to much more complicated, non-linear algorithms, and the more complex the models get, the more computational power is required.

Even though machine learning has been around for a long time, it has experienced a recent boom. In his OHBM OnDemand video, Vince Calhoun (6:30 min) explains why: it is not only because more and more data are available, but also because there has been an immense improvement in computational power (note that training neural networks can sometimes still take weeks!) and because better algorithms have been developed and implemented in open-source tools. Below we discuss some important methods from both the supervised and the unsupervised families, as well as some approaches particular to neuroimaging (such as multi-voxel pattern analysis).

Supervised Learning

As described by Christophe Phillips in the OHBM 2017 course on pattern recognition (4:34 min), the idea of supervised learning is to train a machine to find a mapping between observed data, such as your fMRI images, and an explanatory variable, which could be a disease label or a cognitive score. We can then take new, unlabelled subjects and predict their disease label or cognitive score. Christophe further explains (7:10 min) that supervised machine learning problems can be subdivided into discrete (classification) predictions and continuous (regression) predictions. Either way, supervised learning (10:50 min) ultimately relies on a mapping function between input and target variables, the specification of the shape of this function, and the optimization of its parameters. The following are some commonly used algorithms.

Linear Regression

Most of you are familiar with linear regression as a classical statistical technique. However, this old staple has refashioned itself as a supervised learning technique. We can think of regression as a predictive technique that uses one (or many) features in order to predict a response as one continuous value (7:35 min). The main difference between using regression as a standard statistical tool and as a machine learning tool is that in machine learning we test the predictive power of the linear model on unseen data that did not contribute to the training of the model.

Logistic Regression

The idea behind logistic regression is, at its root, exactly the same as linear regression. The only difference is the function that is fitted to the data.
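Before unpacking that difference, a minimal scikit-learn sketch may help make the parallel concrete. The data are simulated and all variable names and coefficients are ours, chosen purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                           # e.g. five imaging-derived features
score = X @ np.array([1.0, -0.5, 0.0, 0.3, 0.0]) + rng.normal(size=300)
group = (score > 0).astype(int)                         # a binary label derived from the score

X_tr, X_te, s_tr, s_te, g_tr, g_te = train_test_split(X, score, group, random_state=1)

# Linear regression: predict a continuous response for unseen cases.
lin = LinearRegression().fit(X_tr, s_tr)
print("linear regression, R^2 on unseen data:", round(lin.score(X_te, s_te), 2))

# Logistic regression: the same linear combination of features, passed through
# the bounded S-shaped logistic function, yields a class probability.
log = LogisticRegression().fit(X_tr, g_tr)
print("logistic regression, accuracy on unseen data:", round(log.score(X_te, g_te), 2))
print("P(class 1) for the first test case:", round(log.predict_proba(X_te[:1])[0, 1], 2))
```

Note that, in both cases, the models are judged only on data that played no part in fitting them - the machine learning twist on these classical tools.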
While in linear regression we fit a line (or some generalisation of it in n dimensions, e.g. a plane or a hyperplane), in logistic regression we fit a logistic function. The logistic function is the "S-shaped" curve that often pops up in the biological sciences. It has the very nice property of being bounded (often these boundaries are set to 0 and 1) and hence can be used to express a probability. By applying a cut-off, usually half way, we can use logistic regression to categorise our sample, for example into patients and controls.

Support Vector Machines (SVMs)

Support vector machines (SVMs) are a type of classification algorithm, where the aim is to draw a decision (or classification) boundary between sets of data points so as to maximize the "separation" (or margin) between the sets. While this sounds fairly straightforward, it is often the case that the data points are not easily separable by a line or plane - for instance, if the points of one class form a circle embedded within a circle of points from the other class. Kernel SVMs use "kernels" to transform the data into an alternative space in which it may become much easier to separate the two classes (a short sketch of exactly this situation appears at the end of this section). Christophe describes kernels and SVMs (from 17:00 min) in his introductory lecture. There are additional parameters, such as the regularisation parameter, gamma, and the margin, which determine how well the boundary separates the training data. For a more general discussion of SVMs, this Medium post does a good job of explaining the basics.

Deep Learning

Deep learning is one of the most talked-about classes of machine learning algorithms and the one that most excites the public's mind. Despite all the hype, deep learning models are often treated as a black box, since their input-output mapping is both analytically and intuitively hard to grasp. In Vince Calhoun's OHBM educational lecture on deep learning approaches applied to neuroimaging, he explains that the foundation of deep learning lies in artificial neural networks. In fact, despite experiencing a boom in popularity in recent years, neural network modelling dates back to the 1950s, when there was a lot of interest in creating a mathematical model of a biological neuron (this paper by Hassabis et al. (2017) provides a stimulating discussion of the relationship between neuroscience and artificial intelligence). This neuronal model became known as a perceptron. The most basic type of network is the multilayer perceptron (MLP), with artificial neurons (perceptrons) organised in hierarchical layers. The input to the network is propagated layer by layer, first through an activation function in each node, and then through connections (weights) to the successive layer. The "deep" part of deep learning refers to the use of multiple hidden layers, i.e. the layers between the input and output of the network; in recent years, computational advances have allowed the training of deeper and deeper networks. Some types of neural networks that Vince describes are restricted Boltzmann machines (7:10 min), deep belief networks (8:20 min), convolutional neural networks (16:35 min) and others. As with other supervised learning algorithms, deep learning needs a training set and a test set. Furthermore, the more layers you have, the more (labelled) data and computational resources you usually need.
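Tying these last two sections together, here is a brief sketch using scikit-learn's toy two-circles generator - exactly the concentric-circles situation mentioned above. The specific kernel width, layer sizes and iteration counts are arbitrary choices of ours, not values from any of the lectures:

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Two classes forming concentric circles: not separable by any straight line.
X, y = make_circles(n_samples=400, factor=0.4, noise=0.08, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

models = {
    "linear SVM": SVC(kernel="linear"),                  # expected to hover near chance
    "RBF-kernel SVM": SVC(kernel="rbf", gamma=2.0),      # kernel trick: implicit non-linear space
    "MLP (2 hidden layers)": MLPClassifier(hidden_layer_sizes=(16, 16),
                                           max_iter=2000, random_state=2),
}
for name, model in models.items():
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy {acc:.2f}")
```

On this toy problem the linear SVM stays near chance, while the kernel SVM and the small multilayer perceptron both recover the circular boundary - two different routes to non-linearity.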
In fact, deep learning increased in popularity once computing power increased to the point that deep networks were feasible - particularly with the availability of graphics processing units (GPUs), hardware chips originally developed for the accelerated processing of digital video and graphics rendering (3:10 min).

Multi-voxel pattern analysis (MVPA): A common application of ML in brain imaging?

In the classical analysis of structural and functional MRI, i.e. the application of a general linear model (GLM), each voxel is considered separately. Thanks to its linear equations, the approach is mathematically neat and tractable; however, this "massively univariate" approach disregards the interdependencies between multiple voxels (see Robert Cox's talk about fMRI analysis methods at 4:16 min, and Mike Pratt's talk at 0:35 min). In light of dynamic brain processes that engage entire networks of the brain, the assumption that single voxels are independent is controversial. To address this issue, a more recent class of statistical models, known as multi-voxel pattern analysis (MVPA), has been introduced to account for the joint contribution, or "combinatorial code", of multiple voxels across the brain to the phenomenon of interest (see Janaina Mourão-Miranda's talk at 6:08 min). That is, MVPA describes a class of pattern-recognition techniques, which are presented in Mike Pratt's talk on MVPA (3:33 min) and in a session devoted to MVPA at OHBM 2017 (the corresponding videos can be found here). MVPA draws on algorithmic strategies commonly used in machine learning. First, the data are split into a training set and a test set. Then the classifier of choice (e.g., an SVM) is trained on the former to discriminate the multi-voxel patterns corresponding to the experimental conditions, and validated on the latter (a toy sketch follows at the end of this section). Validation is done by using the trained model to predict the conditions in the test set based on the multi-voxel input, which is often referred to as decoding (see Bertrand Thirion from 5:38 min, and Mike Pratt's talk at 8:04 min). In decoding, we try to predict the representational content of multi-scale neural processes - such as percepts or cognitive states, mostly induced by experimental conditions (Pratt's talk at 11:55 min). Classifiers can be linear or non-linear in nature, each having their own limitations. Linear classifiers (e.g., linear discriminant analysis, LDA) are considered easier to train and to interpret; however, their sensitivity depends on the individual contribution of each voxel to the observed pattern (see Jo Etzel's talk at 18:00 min). Non-linear classifiers (e.g., artificial neural networks; see Vince Calhoun's talk), on the other hand, are able to find more complex relationships between patterns of voxels, but require training on large datasets. The term MVPA was coined by Norman, Polyn, Detre, and Haxby (2006), who introduced it within the framework of fMRI analysis. However, under a broader definition of the term, most of the methods that MVPA encompasses are not restricted to fMRI and can equally be applied to structural imaging (e.g., Zhang et al., 2018, or Cole et al., 2017; and see James Cole's talk at OHBM 2017).

Unsupervised Learning

In supervised learning, in addition to the input data (for example, fMRI images), we also need the 'ground-truth' output, which may be labels (e.g. healthy vs condition) or scores (some sort of cognitive or behavioural score).
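As a toy version of such a decoding analysis - with simulated "voxel" patterns standing in for real fMRI data, a linear SVM as the classifier of choice, and all sizes and effect strengths invented for illustration - one might write:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_trials, n_voxels = 120, 500
labels = np.repeat([0, 1], n_trials // 2)          # two conditions, e.g. faces vs houses
patterns = rng.normal(size=(n_trials, n_voxels))   # simulated multi-voxel patterns
patterns[labels == 1, :20] += 0.4                  # weak signal spread across 20 voxels

# Decoding: train a linear SVM on some trials, predict the condition of held-out trials.
decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(decoder, patterns, labels,
                           cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=3))
print("decoding accuracy per fold:", accuracy.round(2))
```

Note that everything in this pipeline presupposes reliable condition labels for the trials.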
However, frequently we either do not have appropriate labels, or the labels that we do have are unreliable - for example, in psychiatric imaging, as explained by Valeria Kebets in her video. In such cases, unsupervised machine learning methods open new doors.

Clustering

In the neuroimaging community, the unsupervised machine learning technique of clustering is best known from its application to estimating brain parcellations. Brain parcellation is not a new problem, and neither is it one that necessarily involves machine learning. All neuroimagers have heard of the neuroanatomist Korbinian Brodmann, who at the beginning of the 20th century labeled brain regions according to their cytoarchitecture -- the original brain mapping! As Simon Eickhoff explains in last year's keynote, cytoarchitecture is not the only feature by which to parcellate the brain; there are others, such as receptor architecture, cortical myelin structure, and connectivity structure. Unsupervised clustering methods are ideal when we want to automatically group brain regions according to such features of interest. The simplest, and probably most widely used, technique available is k-means clustering. In neuroimaging, this is done by creating a feature vector per voxel in a region of interest - containing, for example, structural or functional connectivity information. These voxels can then be thought of as points in an n-dimensional feature space, and the k-means algorithm attempts to group them so as to maximize within-group similarity, iteratively assigning each point to its nearest cluster centre and updating the centres (a bare-bones sketch appears at the end of this section). Unfortunately, k-means clustering requires a priori knowledge of the number of groupings (k) one is interested in (although there are some iterative techniques for trying to establish a suitable k). Other approaches to clustering, such as hierarchical clustering or spectral clustering, share the same basic idea of splitting up the data (in this case brain voxels) into discrete groups, or parcels, but have slightly different assumptions or tricks. For example, hierarchical clustering assumes that the data have a hierarchical structure, so you could split the brain into two groups, each of which can be split into another two groups, until you reach the level of individual voxels - or you could start from individual voxels and work your way up. Spectral clustering, on the other hand, has an additional step (the spectral transformation), which allows you to disregard weak similarities. Sarah Genon, in her educational course lecture, describes how to perform such analyses using diffusion MRI data.

Laplacian eigenmaps / diffusion embedding

Sometimes you may not be interested in grouping voxels into a fixed number of parcels, but rather in exploring the relationships among voxels in a region of interest based on a feature of interest. In his educational talk, Daniel Margulies describes techniques that can be used to investigate the connectopies, or connectivity maps, of the brain. The initial approach is similar to the one described above: you create a feature vector for every voxel in the brain. These features are then compared to each other using a measure of similarity to create a similarity, or affinity, matrix. This matrix is then decomposed, and new vectors are obtained which describe the principal gradients of similarity across a region of interest, or indeed the whole brain. Daniel's keynote describes how these types of analysis can be used to elucidate topographic principles of macroscale cortical connectivity.
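Returning to the clustering idea for a moment: a bare-bones version of the k-means parcellation step described above, with simulated connectivity profiles standing in for real voxel features (the hidden group structure and all sizes are invented for illustration), might look like this:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_voxels, n_targets = 1000, 50                     # 50 connectivity targets per voxel
hidden_group = rng.integers(0, 3, size=n_voxels)   # three "true" parcels, unknown to us
centres = rng.normal(size=(3, n_targets))
profiles = centres[hidden_group] + rng.normal(scale=0.5, size=(n_voxels, n_targets))

# k-means groups voxels with similar connectivity profiles; note that k (here 3)
# must be chosen a priori.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=4).fit(profiles)
print("voxels per recovered parcel:", np.bincount(kmeans.labels_))
```

In real parcellation studies the profiles would of course come from diffusion or functional connectivity data, and the choice of k is itself a research question.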
Associative models

Associative models, such as partial least squares (PLS) and canonical correlation analysis (CCA), are not exactly supervised or unsupervised. In supervised learning we generally have a multivariate input (e.g. brain images) and a univariate output (labels); in unsupervised learning, we only have one set of multivariate input data, such as the connectivity information used for brain parcellation. In PLS or CCA, we want to discover relationships (associations) between two sets of multivariate inputs (e.g., brain images and behavioural/clinical scores). As Janaina Mourão-Miranda explains in her video (2:25 min), psychiatric conditions often have unreliable labels. To deal with this, she uses associative models (e.g., PLS) to find a linear combination of neuroimaging predictors that is most strongly associated with a linear combination of multivariate clinical and behavioural data. This provides a data-driven way to generate summary labels that may shed new light on clinical conditions. It is possible to do significance testing on associative models to make inferences: Valeria Kebets describes (11:20 min) how to perform permutation tests to determine which components are significant, whether components are expressed differently across groups and, finally, which variables drive the extracted components. In her video, Janaina also goes into the details of how her group applies a multiple hold-out validation framework in partial least squares analysis (16:50 min).

What do I need to consider when using machine learning tools for brain imaging analysis?

As explored in the previous paragraphs, machine learning techniques open many doors for brain imaging. They can help make predictions that depend on complex interactions, help find patterns in our data that we were previously unaware of, and automate time-consuming manual tasks such as segmentation (e.g. see Pim Moeskop's video). However, there are also pitfalls to consider.

First, the more complex and powerful machine learning techniques really do need large datasets. In his video, Andrew Doyle (25:30 min) discusses how neuroimaging applications differ from classical image-processing problems: brain imaging data are usually very large and high-dimensional, while sample sizes are comparatively small. For some applications (e.g. image segmentation or MVPA), smaller sample sizes may not be a big issue, but for others (such as patient classification) they may. A recent publication by Arbabshirani et al. (2018) explores why making individual predictions from brain imaging data is challenging, and another paper by Varoquaux (2018) focuses on the challenges of model cross-validation with small sample sizes. Of course, the noisier the data, the more data points are needed, and brain imaging data are renowned for being noisy. Additionally, if no reliable labels can be provided, even the best supervised learning algorithms will not succeed.

Another problem, particularly with the more complex methods such as deep learning, is the challenge of assessing how biologically meaningful the resulting models are. Recent efforts have gone into better understanding and evaluating what is actually happening in the deep layers (e.g. watch Alex Binder's video).
However, the resulting models may not teach us anything about biological or pathological mechanisms, and they may actually represent biases that exist in our training data, limiting their generalisability to other data. For example, this year's Replication Award went to a study that showed the lack of generalisability of some published models. Until these issues are fully resolved by the community, the best we can do as individual researchers is to understand the algorithms we are using and their limitations. That way we can choose the most suitable techniques, apply them rigorously on suitable sample sizes, and avoid overfitting. Luckily, there is a wealth of online resources on machine learning techniques, including textbooks (e.g. Bishop, 2006), Andrew Ng's famous Coursera courses on machine learning and deep learning, and online blogs and forums. Numerous papers from the MRI community provide overviews of machine learning tools for neuroimaging, or more specific examples, such as how machine learning is shaping cognitive neuroimaging, and how to use machine learning classifiers for fMRI data. OHBM's OnDemand has an extensive archive of videos from education courses and talks on machine learning applications for neuroimaging, which we've included in this article, and we also expect many exciting new educational and symposium talks on the use of machine learning techniques in brain imaging at this year's OHBM in Rome - so watch out for those, too!