By Elizabeth DuPre
The Open Science Special Interest Group (SIG) is a relatively new organization within OHBM; however, it is responsible for several increasingly popular community initiatives including the hackathon and the open science room. As the Open Science SIG assumes new leadership this month, I sat down with the incoming chair, Kirstie Whitaker, to hear about her hopes for the upcoming year.
Elizabeth DuPre (ED): Today I’m here with Kirstie Whitaker, Chair of the OHBM Open Science SIG. Kirstie, can you first tell us about yourself?
Kirstie Whitaker (KW): I’m a research fellow at the Alan Turing Institute – the UK’s national research institute for data science and artificial intelligence. There’s a lot of research going on there, but one of the projects I work on is trying to incentivise reproducible research across all of data science. I’m a neuroscientist by training: I did my PhD at UC Berkeley, followed by a postdoc in the Department of Psychiatry at Cambridge. I then had a one-year fellowship with the Mozilla Science Lab before I transitioned to working at the Turing Institute.
ED: It sounds like you’ve seen many aspects of neuroscience and data science, both in academia as well as in industry through your fellowship with Mozilla. Those can all lend very different perspectives on the thing we’re both passionate about: open science. Can you tell us your thoughts about open science following from those experiences?
KW: Open science, as you’ve said, can mean different things to different people. You can imagine our friends in the library sciences are extremely passionate about open access. We should all be passionate about open access and being able to read our colleagues’ work. There’s also a lot of work going on at OHBM using open data. That’s making science more efficient and allowing us to answer more interesting questions with different types of techniques – by harnessing different people’s data and sharing that with our colleagues.
There’s another aspect which is pretty prominent in neuroscience, with huge influence around the world, which is open source code. I write some analyses and importantly I allow other people to use it – so in that sense it’s similar to open data – but they’re also able to see it and interrogate it. So instead of building a black box we’re building tools that you can look inside.
There’s also an additional angle of making sure that science is open to all people. This includes citizen science – and one of our hackathon organisers this year is Anisha Keshavan, who’s one of the coolest and most exciting citizen science people that I’ve ever worked with – which means breaking out of the ivory tower, and allowing everyone who’s interested in helping us understand the brain to productively take part.
It also means making sure that there are scientific career paths for people with diverse experiences and opinions. That means we allow women to succeed as well as men. We ensure that people from different cultural backgrounds, different races, different countries who speak different languages, are all given a fair shot at expressing their goals, and completing the analyses that they want to do.
So for me, open science is just doing science, and doing science well. But my particular passion is to ensure we are being diverse and inclusive.
ED: Over this past year you’ve served as chair elect while I’ve been secretary elect – and we’ve gotten to see the leadership do some amazing things. Anisha was the co-organiser for our hackathon. And this was the first time that the hackathon has sold out – so it was really exciting to see all the enthusiasm that the open science events are generating. We also had Felix Hoffstaedter organizing the open science room at the annual meeting, where we even decided we needed a bigger space.
And of course our current chair Chris Gorgolewski and secretary Matteo Visconti di Oleggio Castello have done a great job about communicating to the community what we’re so excited about. Given all this, now that you’re taking over as chair where would you like to take the SIG?
KW: I know, it’s such a brilliant and terrible problem to have sold out the hackathon! The other person we should mention is Greg Kiar who co-organised the hackathon. He liaised with ethnographic researchers who specifically do research on hackathons to create a survey that asked attendees what they gained from the event, how they felt it accommodated more junior members, and importantly, how these events could be improved in the future. I’m so glad Greg conducted that survey – before we closed out the room on Saturday we all had 30 minutes to fill it in – and we’ll see the fruits of that survey in next year’s hackathon. He gave a brief overview and one of the biggest themes was people being so excited and grateful that there were so many skills available – and that there were so many different levels of people there.
I think that the event selling out reflected that excitement. But selling out means we’ll have to confront some issues; in particular, we’re going to have to figure out if we want to keep the hackathon small and intimate or let everyone who wants to come attend. One of the big sells of a small event is that you can easily make some connections with individual people who can share their expertise with you or point you in the right direction. Once you get larger you effectively start building OHBM [laughs]. I mean, we’re the hackathon, we’re not trying to take over the entire conference, so we’ll have some interesting challenges about how we include everyone.
My goal is to think about culture change, and making sure we give credit to early career researchers that are doing excellent work that supports others. Historically, the incentive structure in academia has encouraged very sharp elbows and an attitude of “To get to the top I’ve got to be number one. I’ve got to be uniquely better than everyone else.” One thing that really impressed me at this year’s OHBM conference was a presentation by JB Poline where he talked about the work that the community has brought together for a publishing platform where you don’t just publish traditional papers, but you might also publish code, data or tutorials. These are things that we all know are very useful, but that aren’t fully recognised. I’d love to see early career researchers get a bit more credit for that sort of thing.
I also think that the wider community should take back that spirit of the hackathon – the feeling in the open science room of these really helpful conversations and try and take that out into the OHBM community all year long. We have a Slack channel where you can get in touch with people, by pinging questions out. But I think it would be really interesting to see if we can solicit ideas from our community and actually get our members involved. It doesn’t have to be the SIG that puts on an event – it could be that we help our members make the connections (and we perhaps help out with a little funding).
One of the initiatives [Elizabeth] and I have been doing is the demo calls. There, we reach out to people and I sit on YouTube live and I ask people about their experiences with open source and their projects, and how others can get involved. Maybe those demo calls are useful and we can take them forward and keep them going. But maybe there are better ideas and that’s what I’d love to explore – how we can generate more ideas and bring them to light.
ED: I’m really excited to see where that goes. That leads into our recent round of elections…
KW: Yes! Traditionally there were just two members of the committee and they’ve done a lot of work. Thank you to the previous leadership of the OHBM hackathon and the Open Science Room and the brain hack and everything - all the people who have run so many of these initiatives. It was a lot of work! I was very happy that we created quite a few more positions to bring more people in that were passionate and wanted to help nurture the open science community. For example, this year we realised that we didn’t have a treasurer position, and keeping track of all this money and paying for these things was a lot of work, so we’re introducing a new role to cover this need.
We’ve talked about my vision and my passion for open science. But one of the things that is so fun, and frightening, about open science and diversity is that you have to eat your own dog food; that is, to practice what you preach. The success of open science in general and the SIG in particular relies on bringing in new people, new points of view, and I’m looking forward to it.
ED: Yes, I’m looking forward to seeing everything that happens and our new initiatives. Thanks so much!
After our conversation took place, we concluded the most recent round of elections. We’re now excited to announce the new leadership joining the Open Science SIG:
Greg Kiar - Treasurer
Camille Maumet - Chair elect
Ana Van Gulick - Secretary elect
Sara Kimmich - Treasurer elect
Roberto Toro and Katja Heuer - Hackathon co-chairs
Tim van Mourik - Open science room organizer
Cameron Craddock - Council liaison
Look for a follow-up post where we find out more about their pathways into open science!
Danilo Bzdok heads the section for “Social and Affective Neurosciences” at the Department of Psychiatry, Psychotherapy and Psychosomatics at RWTH Aachen University in Germany. Using his dual background in neuroscience and data science, Danilo tries to reframe psychological questions as statistical-learning questions to generate new insights. His work on social cognition and psychiatry has led to innovative data-led perspectives on how humans navigate the social world and its neural substrates. In 2017, he was designated “Rising Star” by the Association for Psychological Science (APS) in the USA. He is also a self-proclaimed potato chips gourmet and excessive consumer of especially electronic and classical music.
My first encounter with Danilo was unilateral, over the pages of Nature Methods’ Points of Significance section, where he published several introductory pieces on machine learning. His way of boiling down a complex topic into an accessible explanation was also at the heart of our next meeting. At ICM in Paris, he gave an institute lecture about the relation between mainstream statistics and emerging pattern-learning techniques in brain-imaging neuroscience. This led to a longer discussion afterwards, this time face to face... And this discussion is revealed here, where Danilo gives his views on big data, the changes in how we answer questions with data in everyday science, and some speculations on the future of neuroscience.
Tal Seidel-Malkinson (TSM): First, can you tell us about your career path. You started in medicine, then moved into basic research. What made you shift?
Danilo Bzdok (DB): It actually started when I was in middle-school – my first intellectual passion was in programming and computer science. I really liked composing logic using computer code. At roughly 15 I was fluent in half a dozen programming languages, such as 32-bit assembler, Pascal, and C++. But I felt early on that I was also intrigued by various other, completely different, things like philosophy, foreign languages, social sciences and neuroscience.
At that time, being mostly focused on natural sciences felt like somewhat of a limitation to me. One thing I really liked about the way of thinking in philosophy, and still appreciate very much, is the close interplay between logic and language. I was however not fully convinced this was a very pragmatic career choice. At least in Germany, a degree in philosophy is not always something that keeps many doors open for the next steps in life. That’s why I eventually decided to study a conservative area that would give me a solid foundation. Medicine seemed a safe choice: it provided that kind of general education and keeps a lot of options open. You go through an intense learning experience that shapes your work ethic. I wanted to move towards becoming active in scientific research, determined to go into brain science in particular. I therefore spent my early University years concentrating on neuroscience and psychiatry.
In the middle of my studies I wanted to get involved with research as soon as possible. This led me to work with Simon Eickhoff at the Institute of Neuroscience & Medicine at the Research Centre Juelich, who was an incredible mentor to me, and I also reached out to the department of psychiatry at the RWTH Aachen. I was lucky enough to be funded by the German Research Foundation (DFG) and to be part of an international research and training group (IRTG1328 on “Schizophrenia and autism”) with UPENN, USA. This particular department of psychiatry at RWTH Aachen University turned out to be active in brain-imaging research. Due to a series of lucky coincidences, I had the opportunity to go through an authentic research experience.
During the second half of Medical School I spent less and less time attending lectures, and instead tried to min-max the exams. Towards the end of my medical studies I was barely studying anymore. It then felt like a smooth transition into being a full-time researcher. At that point, I wasn’t ready to commit another >5 years to clinical specialization in psychiatry, which takes ~50-60 hours of your time per week and leaves less time for research.
I also learned a lot during a fantastic research stay working with Peter Fox and Angela Laird in San Antonio, Texas, USA, and I launched several collaboration projects with social cognition enthusiasts here in Germany, including Kai Vogeley, Leonhard Schilbach, and Denis Engemann. Together we conducted a series of neuroimaging studies on whether or not there are brain regions that may be uniquely devoted to social-affective processing – a direction of research which later pushed me to pursue ever more general systems neuroscience questions. In 2013, I had become convinced that whether human-specific neural systems exist – particularly ones that might be devoted to human social interaction – was at its heart a methodological and statistical question. Whether or not scientists can go beyond the cognitive terms that we have used for decades in social and affective neuroscience, such as “theory of mind”, “affect”, or “empathy”, is a question that can be more readily wrestled with using certain data-analysis toolkits than others.
TSM: It’s clear that neuroscience nowadays increasingly requires an interdisciplinary set of skills. In your unique path you have acquired a broad set of skills from your degrees in medicine and maths and your PhDs in neuroscience and in computer science. How did you choose this path and, given this can’t be common training, what do you think early career researchers should focus on?
DB: I went through a journey of sometimes unconnected interests. It wasn’t always a conscious choice at a particular point. Essentially, I just went through several bouts of intense interest, getting absorbed in specific topics. That’s why, in retrospect, I am happy I somehow made it all the way through medicine. Despite changing areas of interest, at least I have an official degree that could help me give something back to society.
For years, I was not really sure how to cultivate and usefully combine my skills in language, logic and algorithms. When neuroscience later turned out to be such a vibrant interdisciplinary field, it was quite a relief to me. I found an opportunity to combine several different, what I like to call, thought styles. In neuroscience you can interface between diverging thought styles and approaches, and really get something out of it. That’s perhaps why I have a weakness for fuzzy topics like higher-order cognition, what domain-general function the TPJ may subserve, and what the “dark matter” of brain physiology - the default-mode network - may tell us about the nature of the human species. Several of these topics have a decent amount of soft-scienciness, at least to me – I then try to be principled and get at these research questions with algorithmic approaches that “let the data dominate”.
One thing that appears obvious to me in my activities as a supervisor, mentor and speaker: the data science revolution will depend on better quantitative literacy in the next generation of ambitious neuroscientists. We live in an increasingly quantified world. There are ever more quantifiable aspects of how we live and what we do, in normal life as well as when things go awry. There is a rapidly increasing opportunity to use algorithmic and computational tools to generate quantitative insight and reach rigorous conclusions from the increasing amount of data at our hands.
Such modern regimes of data-analysis may look disturbingly different from the traditional goals of statistics and how statistics is taught at the university for many empirical sciences. In the data-rich setting, some traditional methods may have difficulty approximating the truth. That’s why I tried to structure my scientific education not only towards a solid neuroanatomical and neurophysiological understanding, in which I was much influenced by Karl Zilles and Katrin Amunts, but also a sense of probabilistic reasoning and quantitative methodology, in which I was much influenced by Bertrand Thirion, Gaël Varoquaux, and Olivier Grisel.
As almost every PI will tell you, most of their students will ultimately not end up in academia. I therefore believe that, at a more pragmatic level, getting an education with a solid data-analysis component can avoid pigeonholing PhD students or postdocs into a career as a scientist, and offers a broader portfolio of options for finding jobs in industry and government after leaving academia.
TSM: Big data is a new opportunity for neuroscience, but equally it’s a new challenge. How do you see this development?
DB: In general, many scientific disciplines show a tendency to diversify into ever more specialized subdisciplines over time. So just because there are new opportunities doesn’t mean that the more established ways to conduct research and older techniques are rendered obsolete. Meticulously designed, hypothesis-guided experiments in carefully recruited participant samples will most likely remain the workhorse to generate new insight in neuroscience. What appears to be happening right now, is that we are extending the repertoire of questions that can be asked and are quantifiable.
Let me give one particular example. The increasing availability and quality of brain measurements will soon allow learning description systems of mental operations in health directly from the data themselves – a cognitive taxonomy directly extracted from brain measurements, and nomenclatures of disturbed thinking in mental disease. Such goals are likely to require combinations of massive amounts of richly annotated brain data and innovative pattern-learning approaches.
TSM: There’s a tendency towards moving from group analyses to predicting outcomes for individual participants, are our current tools reliable enough for that?
DB: Broadly, I can see two distinct and promising trends – on the one hand, scientists bring a small number of subjects into the lab several times and acquire hours of brain scanning, which allows accessing a finer granularity of neural processes at the level of densely sampled single individuals. There are several well-known labs that now seriously go in this direction with a lot of success...
TSM: Do we need so much data on individuals because of variability of cognition or the SNR of fMRI?
DB: There are several aspects at play. Often, resting-state scans are still just 5-10 minutes. I think that may not be enough to robustly describe *all* aspects of neural activity changes in the brain that investigators may find interesting. This is the first trend: one pocket of the brain-imaging community now tries to go ever deeper in terms of subject specificity. It nicely complements the dominant agenda of conducting statistical tests on differences between pairs of experimental conditions or participant groups.
The other way I see to go beyond binary comparisons is progress towards population-scale neuroscience. There is an increasing tendency for extensive data collections with hundreds or thousands of indicators – demographic, neuropsychological and health-related items – from as many individuals as possible. Such population neuroscience approaches will probably shed new light on variability patterns of brain biology, across distinct brain-imaging modalities, and bring into contact previously unconnected research streams. These projects try to acquire as much information as possible about as many people as possible. The approach avoids strict a-priori choices as to the type of person or disease category to be distinguished and studied. One hopes that coherent clusters of individuals emerge in massive data. That again is a completely different perspective. This is a good setting, for example, to discover, quantify, and ultimately predict subclinical phenotypes in people - individuals who deviate from the normative population in some coherent way, without being “dysfunctional” in society.
It is my impression that both highly-sampled single participants and richly phenotyped participant populations are two exciting upcoming directions that hold a lot of promise. Both these research agendas can probably complement and inform experimental studies of ~30 people with well-chosen hypotheses and dedicated experimental designs.
From a more statistical perspective, there is an orthogonal aspect. For the majority of the 20th century, researchers in biomedicine acquired and analyzed “long data”, with fewer variables than individuals. Today, neuroscientists increasingly need to tackle “wide data” – some call it “fat data” – with sometimes a much greater number of variables than individuals. Having extensive “found” or observational data from general-purpose databases is where machine-learning algorithms and data science come into play. These more recently emerged statistical tools offer new strategies to search through abundant yet messy data. It is an exciting future perspective to integrate both – the highly sampled subjects and population neuroscience.
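A toy sketch (synthetic data, not any specific neuroimaging pipeline) of why “wide data” strains classical estimation, and how a standard machine-learning remedy such as ridge regularization copes:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 500  # "wide data": far more variables (p) than individuals (n)

X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0  # only a handful of truly informative variables
y = X @ beta + 0.1 * rng.standard_normal(n)

# Ordinary least squares needs to invert X'X (a p x p matrix), but its
# rank is at most n < p, so the classical solution is not unique.
rank = np.linalg.matrix_rank(X.T @ X)

# Ridge regularization restores a unique, stable solution:
# beta_hat = (X'X + lam * I)^{-1} X'y
lam = 1.0
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

Even though the informative coefficients are heavily shrunk in this regime, they still stand out on average from the uninformative ones – which is the practical point of bringing regularized tools to wide data.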
TSM: As you said both approaches require collecting, logging and archiving big datasets – this requires a lot of resources. Do you think this might increase the gap between well-funded and less well-funded labs?
DB: That’s a bit political, I’ll try to give a neutral answer. When you look at the Human Connectome Project (HCP) – there was a lot of excitement when it established itself as a trusted reference dataset for the brain-imaging community. That allowed new methodological approaches to be compared against each other in a more principled fashion. Yet, looking at the many thousand imaging neuroscientists on the planet, how many of those have really published a paper with the data from the HCP project? Actually, not that many.
Many of the existing HCP publications appear to often be methods-focused papers. I’m not saying that’s not interesting. But I think many scientists would perhaps have expected more discoveries on brain structure and function based on this unique data resource. One reason why this is surprising to me is that many of the classical software libraries still scaled fairly well to the HCP 500 release; you just had to wait a bit longer for the results. Even with the full 1,200 subjects you could still scale to the higher sample size using essentially identical software and analysis pipelines that were already set up in the lab.
We now have the UK Biobank Imaging, CamCAN, ENIGMA, and many other rich datasets. Given that HCP data were not primarily used by labs to answer cognitive neuroscience or neurobiological questions on brain connectivity, I expect that there will probably be an even bigger gap between the majority of imaging neuroscientists and those people who capitalize on the new generation of complex datasets. There will be even fewer labs that have a vested interest in and a daily exposure to methodological techniques needed to leverage these burgeoning data repositories.
TSM: This transition to big data requires a change in our methodologies and ways of thinking. How do you think this cultural shift should be achieved?
DB: Let’s go back to the two larger trends we discussed before – using densely sampled participants and population neuroscience to understand the healthy and diseased brain. Big-data methodologies are likely to play an important role in gaining this insight. We’ll need a shift in our everyday data-analysis practices and how we design and run our labs. We’ll need more computational savoir-faire and more people from STEM backgrounds. But that’s not enough. There also needs to be a more organic and fluid conversation between analysts and the PIs who have these people on their payrolls. More exchange in both directions will help us to negotiate between the research questions and optimal algorithmic methods.
A big issue, for instance, already is and will increasingly become the “big-data brain drain”: many people with quantitative aptitude and a proven data-analysis skill-set are highly sought after and may be aggressively headhunted by companies for several times higher salaries than what we in academia can offer. For instance, one of my students with a background in physics recently got recruited by McKinsey Analytics in London.
To tackle some of the ambitious questions we mentioned, we’ll also need better infrastructure than many universities today offer us neuroscientists. We simply need more money for this expensive computational architecture and its sustained maintenance. Now, some people may ask why we don’t just use cloud computing. And sure, Amazon AWS and other cloud-based solutions are attractive options. But it’s worth considering two problems: first, you have data-privacy issues wherever you have personal data from individuals. In many research institutions, researchers may not be allowed to upload detailed information about individuals to servers in a different country. Second, there is a bureaucratic problem: you cannot easily estimate in advance how much money you need for your particular cloud-computing jobs. Many finance departments, however, allocate money on a per-year basis, at least at German universities.
Last but not least, there’s the educational issue: how should we train young neuroscientists? It’s not clear how in this already very interdisciplinary teaching schedule, with theory of neuroscience, molecular biology, anatomy, physiology, classical statistics, genetics, brain diseases, and so forth, we could add multi-core processing, high performance programming, and so on. There are so many things that a 21st century neuroscientist is expected to absorb. It’s not clear where you’ll find people with such a multi-faceted mind who can be incentivized to, and are able to, embrace this breadth.
TSM: So perhaps we need to be collaborative? It’s perhaps not realistic to expect single people to have all these skills.
DB: It’s probably not realistic, but still, we will need some of these “glue people”. It’s not clear to me where we should expect them to come from. That’s why my feeling is that the shape and form of scientific education may play an increasingly important role in neuroscience.
TSM: Big data has been seen by some as a solution to the replication crisis – and another approach has been to use meta-analysis. You’ve recently published a meta-analysis on theory of mind. What did you learn from this, and what should we be careful about when applying meta-analysis?
DB: Several decades ago, the social sciences went through a crisis similar to the replication crisis we are experiencing now. Many people weren’t sure how to go forward, as there was a lot of uncertainty about how robust and valuable the abstract constructs were that these empirical scientists were studying. An important contribution to justifying these mental and social constructs came from quantitative meta-analysis.
Quantitative meta-analysis is a very useful tool to identify convergence across isolated findings and thus solidify scientific areas – especially if you know you will be facing small effects and a lot of noise, which is true for the social and psychological sciences, and probably not wrong for brain imaging. So you can either shift to a different area of research with more tractable problems or adapt to the situation we have, where meta-analysis is one key solution to cope with the idiosyncrasies of a broad range of studies. It will unavoidably mask some subtle effects from single experiments. But you can see through the noise – distinguish the forest from the trees.
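To make the pooling idea concrete, here is a minimal fixed-effect inverse-variance meta-analysis on invented effect sizes (the numbers are purely illustrative, not from any real study): noisier studies get less weight, and the pooled estimate ends up more precise than any single study.

```python
import numpy as np

# Toy per-study effect sizes and their standard errors (invented numbers).
effects = np.array([0.35, 0.10, 0.55, 0.28, 0.15])
ses = np.array([0.20, 0.25, 0.30, 0.15, 0.22])

# Inverse-variance weighting: each study contributes in proportion to
# its precision (1 / SE^2), so noisy studies are down-weighted.
weights = 1.0 / ses**2
pooled = np.sum(weights * effects) / np.sum(weights)

# The pooled standard error is smaller than any individual study's SE --
# this is the "seeing through the noise" that meta-analysis buys you.
pooled_se = np.sqrt(1.0 / np.sum(weights))
```

A random-effects model would additionally account for true between-study heterogeneity; this fixed-effect version is just the simplest illustration of convergence across noisy findings.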
TSM: Presumably it also helps to collaborate with multi-centre studies.
DB: Sure. Many young students getting into neuroscience may perhaps still envision the lonely genius who is knowledgeable about so many areas of neuroscience. The biggest steps forward may come from *teams*. Sets of people who learned to genuinely work together; not despite but because they are drastically different in their knowledge and thought styles. If they succeed in aligning their thinking and efforts towards a common goal in neuroscience research, non-linear progress probably becomes much more likely.
In terms of data-collection, it’s worth comparing brain-imaging to genetics or genomics. Several trends in imaging neuroscience today may have been preceded in a similar form already 5-10 years ago in genomic research. There, many data collection collaborations were foundational and helped the research community to see through the noise more clearly. Imaging neuroscience is becoming larger and more international with increasing numbers of labs, so there is greater potential for people to work together. Intense and bidirectional collaboration between drastically different disciplines may be a prerequisite to render some of the ambitious questions actionable that we had the pleasure to discuss today. It also means you need people skills, on top of everything else!
TSM: I want to thank you for the nice chat – and it’s definitely an exciting, interesting era in neuroscience!
By Danka Jandric, Jeanette Mumford & Ilona Lipp
Planning a resting state study and analysing resting state data can feel overwhelming. There seems to be an endless number of options regarding all stages of the experiment. Decisions need to be made about how to acquire data in an optimal way, what preprocessing and noise correction pipelines to employ and how to extract the most meaningful metrics. Many strategies have been published and are available in software packages. However, there seems to be little consensus about what works best and, even more importantly, about how to judge whether something “works” or not. The choice of method often depends on the specifics of the data and the research question being addressed, but can equally often seem arbitrary. To help guide you through this jungle of rs-fMRI, we walk you through all stages of a resting state experiment. We do this by addressing questions that researchers are likely to have… or should have! While we do not provide definitive answers to these questions, we try to point out the most important considerations, outline some of the available methods, and offer some valuable video resources from recent OHBM education courses, to help you make informed decisions.
What do I need to consider when planning my experiment?
Running a rs-fMRI experiment seems easy enough. Technically, all you need is to put your participant in the scanner, tell them to rest and run a standard BOLD sequence. However, it may be worth thinking about your analysis strategy beforehand, so that once you start analysing your data you do not suddenly wish you had…
How do I know my data quality is good?
One of the most common questions asked when evaluating data is how to tell whether the data are “good” or not. The answer, regardless of the type of data, is to actually look at it. Although this is relatively easy with behavioral data, when faced with hundreds of thousands of time series for a single subject it is less clear how to do so. Luckily, Jonathan Power has not only developed tools we can use with our own data, but also takes us through data inspection in his educational talk from 2017, “How to assess fMRI noise and data quality”.
How do I improve my data quality?
fMRI data are noisy and this is not going to change any time soon, so we have to deal with it somehow. Acceptance and hoping for the best is a strategy, but one that can cause problems later in your analysis. If there is a lot of noise relative to the signal of interest, individual subjects’ resting state networks will not look clean, and the power to detect group-level effects may be low, so you might not find anything interesting in your group-level analysis. Just as importantly, if there are systematic differences in noise sources between the cohorts you are studying, then seemingly interesting effects can simply be a result of group differences in noise, such as head motion. If you ignore the noise problem, you might spend days writing a paper with a game-changing title, only to be hit by reality when a reviewer asks you to quantify group differences in your noise. Better to be aware of, and account for, noise from the start, right? But this is easier said than done…
What causes noise in rs-fMRI data?
Resting state analysis generally deals with correlations between voxel time courses. If a noise source affects several voxels in similar ways, it can induce temporal correlations that are independent of neural co-fluctuations. The aim of noise correction is therefore to remove as much noise-related variance from the BOLD signal as possible. To work out the best noise correction strategy, we first have to know what the sources of noise in BOLD time series are.
In his video, Cesar Caballero Gaudes gives a comprehensive overview of the most common sources of noise, such as head motion (from minute 05:11), respiratory and cardiac variation (from minute 05:53), and hardware (from minute 11:11), and their effects on the data. Cesar also gives an overview of some of the denoising strategies that are available to tackle different types of noise.
How can I correct for noise when I have information about the noise sources? The nuisance regression approach:
One denoising approach is to record information about some of the potential noise sources during the scan, such as physiological recordings or head motion parameters. These can then be used to figure out to what extent our BOLD time series can be explained by the noise sources, by including nuisance regressors in a general linear model. Generally, we probably all agree that the more high-quality information we have on what happened during our scan, the better. One may also think that the more nuisance regressors we employ to regress out from our BOLD time series, the better our clean-up… but is that so? In her video, Molly Bright gives us deeper insight into the nuisance regression approach to clean up noise.
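As a minimal sketch of this idea (in Python with NumPy, using random arrays as stand-ins for real BOLD data and nuisance recordings), nuisance regression amounts to fitting a general linear model and keeping the residuals:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tr, n_vox, n_nuis = 200, 500, 8           # timepoints, voxels, nuisance regressors

bold = rng.standard_normal((n_tr, n_vox))       # BOLD data (timepoints x voxels)
nuisance = rng.standard_normal((n_tr, n_nuis))  # e.g. motion parameters, physio traces

# Fit a GLM with an intercept plus the nuisance regressors, then keep
# the residuals as the "cleaned" time series.
X = np.column_stack([np.ones(n_tr), nuisance])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
cleaned = bold - X @ beta
```

The residuals are, by construction, orthogonal to the nuisance regressors; whether they still contain the signal you care about is exactly the question Molly Bright raises below.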
In some smart simulation analyses (from minute 12:30), Molly shows that simply adding as many nuisance regressors as possible may not be the best strategy, as we may accidentally remove a lot of signal. Also, we need to be careful about time-lagging our regressors in order to account for the delay between a physiological change and the BOLD response. Molly explains why trying to identify that delay using the rs-fMRI data can be tricky, and why adding a breath-hold at the end of your acquisition may be a good idea (from minute 20:16).
Molly also demonstrates that very commonly applied preprocessing steps, such as bandpass filtering, can have effects on our data that we might not have predicted (from minute 16:30). While introducing a few strategies to make the nuisance regression approach for noise corrections more valid – such as prewhitening (from minute 12:00) - she stresses the fact that there is not one optimal strategy and that it is very difficult to tell whether noise removal “has worked”. The take-home message here is probably that as a field, we need to work towards a better understanding of the BOLD profiles of different noise sources. Additionally, integrated strategies are needed to deal with the complicated interplay between different noise sources, such as between head motion and physiological noise.
How can I correct for noise when I do not have information about the noise sources? The ICA approach:
While the success of nuisance regression depends on having good quality nuisance regressors in the first place, data-driven approaches are available that can be applied to any dataset, the most common strategy being independent component analysis (ICA). ICA for noise removal is based on the separation of the BOLD time courses into spatial components, and classifying each component into signal vs. noise. This is typically done on a subject-by-subject basis. The time courses of the noise components can then be regressed out or accounted for during further analyses.
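A rough sketch of ICA-based denoising, using scikit-learn’s FastICA on random data as a stand-in for real BOLD time series (the component indices labelled as “noise” here are purely hypothetical; in practice that classification is the hard part, as discussed next):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_tr, n_vox = 150, 400
data = rng.standard_normal((n_tr, n_vox))   # (timepoints x voxels)

# Decompose into 10 components: voxels are treated as samples, so
# fit_transform returns spatial maps and mixing_ their time courses.
ica = FastICA(n_components=10, random_state=0, max_iter=500)
maps = ica.fit_transform(data.T)            # (voxels x components) spatial maps
timecourses = ica.mixing_                   # (timepoints x components)

# Suppose components 3 and 7 were classified as noise (a purely
# hypothetical labelling): regress their time courses out of the data.
noise_tc = timecourses[:, [3, 7]]
X = np.column_stack([np.ones(n_tr), noise_tc])
beta, *_ = np.linalg.lstsq(X, data, rcond=None)
cleaned = data - X @ beta
```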
Ludovica Griffanti gives a comprehensive introduction to ICA for noise removal and highlights the difficulty that often lies in the signal vs noise classification that is performed by “experts”. Whilst semi-automated and automated approaches are under development in order to make this classification more objective, Ludovica makes the strong point that ultimately these algorithms or at least their validation are based on “gold-standard” manually labelled data. While there is no clear consensus yet on what signal and noise components look like, Ludovica provides us with some guidance and rules that can help with classification and are a first step towards this consensus.
How can multi echo data help with noise correction?
The vast majority of BOLD data has been acquired with a single echo time, optimised to the average T2* of grey matter. However, if you have not yet started your experiment, you might want to acquire data at several echo times. Prantik Kundu explains why: BOLD and non-BOLD signals have different sensitivities to echo time, so information about the actual signal decay can help distinguish signal of interest from noise (from minute 05:10).
Prantik provides a few beautiful examples on how multi-echo fMRI data can be combined with ICA-based approaches for noise clean-up, calculating parameters that objectively inform about how similar the components’ behaviour is to BOLD vs non-BOLD related signal (from 11:43). In the grand scheme of things, the multiple echo times used are still quite short, so acquiring this extra information would not necessarily increase your total acquisition time. On a side note, even data from one additional short echo time can provide information about some noise sources, as described in a study by Bright and Murphy (2013). Be aware that certain noise sources, such as slow physiological changes yield ‘BOLD-like’ noise (which we can treat as noise or as signal of interest, depending on our perspective), as they interact with the cerebrovascular system. Multi-echo data does not help with correcting for this type of noise.
Why go through all that pain? Can I not just do a simple global signal regression for noise correction?
A cheap and easy (and still very widely used) way of performing ‘noise correction’ is global signal regression. Here, the average signal across the whole brain (or all grey matter voxels, or all cortical voxels) is calculated and regressed out of each voxel’s time series, under the assumption that the global signal mostly reflects combined noise from various sources. The advantage of this approach is that it can remove artifacts that are hard to eliminate with other noise correction methods. However, global signal regression is highly controversial in the field; the main criticisms are that the global signal has neuronal contributions, and that regressing it out shifts the correlation coefficients and induces negative functional connectivity. In her video, Molly Bright briefly touches on this (from minute 24:43) and refers to a recent ‘consensus paper’. An alternative to regressing out the global signal is to use signals from white matter or CSF, as briefly described in Cesar’s video (from minute 20:00). If you are interested, also see his recent paper.
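A sketch of global signal regression on synthetic data (NumPy, with random arrays standing in for real BOLD time series):

```python
import numpy as np

rng = np.random.default_rng(2)
bold = rng.standard_normal((200, 500))      # (timepoints x voxels)

# Global signal = mean time course across all voxels (or a tissue mask).
gs = bold.mean(axis=1)

# Regress it (plus an intercept) out of every voxel's time series.
X = np.column_stack([np.ones(len(gs)), gs])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
cleaned = bold - X @ beta
```

After this step the mean signal across voxels is exactly zero at every timepoint, which is one way of seeing where the induced-anticorrelation criticism comes from: voxel correlations are shifted downward around that zero mean.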
How should rs-fMRI data be preprocessed?
Resting state fMRI data can largely be preprocessed in the same way as data from a task-based fMRI acquisition (for a refresher on steps we recommend the slides from the educational course from OHBM 2016). As Molly pointed out, some of the “standard” preprocessing steps, such as bandpass filtering, can have unexpected effects on rs-fMRI data. As rs-fMRI data does not have strong task-driven signal changes, it is generally more susceptible to noise and probably to anything we do to the data, so be wary of that.
As described above, there are strategies for tackling noise, such as physiological artifacts, in the preprocessing pipeline. Some good pointers, including Cesar Caballero Gaudes’s video on denoising, have been outlined in the previous section. In addition, in 2016 Rasmus Birn, an expert on the influence of physiological noise on the BOLD signal, gave a thorough overview of physiological noise and approaches to remove it.
How can I analyse the data to find meaningful resting state networks?
Once your data are preprocessed and denoised and you are confident they are in good shape, you will want to get on with the exciting part – identifying resting state networks. Done properly, resting state analysis can reveal large-scale networks in the ‘brain at rest.’ These networks are defined by correlated temporal patterns across spatially distinct regions: each network’s time course is distinct from those of other resting state networks, but consistent across its own regions.
The aim of rs-fMRI analysis approaches is to use the time courses of brain regions to decompose the brain into resting state networks. Several techniques exist, with the two most common being seed-based correlation analysis (SCA) and independent component analysis (ICA).
In his video, Carl Hacker gives a nice overview of both SCA and ICA. He introduces the two methods (from minute 1:12) and identifies the main differences between the approaches (from minute 4:15). Carl also discusses how to identify RSNs from seed-based mapping (from minute 6:25), and how the brain can be parcellated using ICA (from minute 13:35). While SCA uses the time series of an a priori selected seed region in order to identify whole brain functional connectivity maps of that region, ICA decomposes data from the whole brain into the time courses and spatial maps of the resting state signals, called independent components (ICs). SCA is a useful method to answer questions about the functional connectivity of one specific region. However, the drawback is that it only informs about connectivity of this region. On the other hand, the numerous ICs that you get from ICA are defined as a collection of regions which have maximal spatial independence but co-varying time courses, thus showing networks across the whole brain that have synchronous BOLD fluctuations when the brain is not performing a task.
In healthy subjects, SCA and ICA have been shown to produce moderately corresponding functional connectivity information, and the choice between them is likely to be guided by the specific research question. Note that the focus of Carl’s video is parcellation of the brain. However, many concepts and principles also apply to other types of analyses. Read more about these two methods in Cole et al. (2010) and Smith et al. (2013).
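A minimal SCA sketch in NumPy, with random data standing in for real BOLD time series and an arbitrary, hypothetical seed region:

```python
import numpy as np

rng = np.random.default_rng(3)
bold = rng.standard_normal((200, 500))      # (timepoints x voxels)
seed_voxels = [10, 11, 12]                  # hypothetical seed-region indices

# Seed time course = mean over the seed region's voxels.
seed_tc = bold[:, seed_voxels].mean(axis=1)

# Correlate the seed with every voxel and Fisher z-transform, giving a
# whole-brain connectivity map for this seed.
z = (bold - bold.mean(0)) / bold.std(0)
s = (seed_tc - seed_tc.mean()) / seed_tc.std()
r = z.T @ s / len(s)                        # one Pearson r per voxel
fisher_z = np.arctanh(np.clip(r, -0.999999, 0.999999))
```

The Fisher transform is the usual step before group-level statistics, since it makes the correlation values approximately normally distributed.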
How do I interpret ICA components?
If you have run ICA on your resting state data, your next task will be to interpret the output. The output consists of a number of spatial maps showing regions with spatial independence but co-varying time courses, called independent components (ICs). How many ICs you get depends on the parameters you set when running the ICA, but it is typically a few dozen.
The first step when interpreting the ICs is to determine whether they are signal or noise. Because ICA is data-driven, it does not ‘filter out’ noise, but it can separate neural signal from non-neural signal, i.e. noise, so it is important to classify the components correctly as either signal or noise.
So how do I distinguish between signal and noise in extracted ICs?
In her video, Ludovica Griffanti discusses how RSNs and noise can be distinguished. She provides an overview of component classification approaches, including manual and automatic classification approaches (from minute 2:58). Importantly, Ludovica describes the characteristics of signal and noise components and gives examples of both (from minute 5:20). Ludovica’s key message is that the aim of classification is to retain as much signal as possible, so if you are unsure if a component is signal or noise, keep it in as signal. She also makes the point (from minute 19:00) that a number of factors relating to participants, MR acquisition and preprocessing affect IC characteristics and discusses these briefly. The classification approach discussed in Ludovica’s video is very similar when classifying ICA outputs from single-subject data and group level ICA, but there are differences. For an outline of these and for a more thorough discussion of manual classification of ICA components, please see Griffanti et al. (2017).
How do I identify RSNs from ICs classified as signal?
There are a few approaches to determining which networks signal components correspond to. Some ICA toolboxes include spatial templates that can be compared to the ICs, but perhaps the most common approach is manual labelling based on known anatomy, since the spatial patterns and time courses of many common resting state networks (RSNs) have been described (e.g. for labelling RSNs from group-level data, see Beckmann et al. (2005) and De Luca et al. (2006)).
A further option for IC classification is the use of automated RSN classification techniques. In his video, Abraham Snyder gives an overview of how machine learning can be used to classify RSNs based on pattern recognition (minutes 28:50-33:00).
What is this thing called dual regression?
ICA is typically done with group data and produces spatial maps that reflect the group average functional connectivity. However, the individual variability of IC topography is often useful, for example to make comparisons between groups of individuals. A process called back-reconstruction is therefore used to obtain the individuals’ time courses for the ICs obtained from the group-level ICA, which are then correlated with each voxel to obtain subject-specific spatial maps. Dual regression is one available back-reconstruction method. In his video, Carl Hacker gives a brief overview of how it works (from minute 19:38).
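The two regression stages can be sketched as follows (NumPy, with random arrays standing in for the group-ICA maps and one subject’s data):

```python
import numpy as np

rng = np.random.default_rng(4)
n_tr, n_vox, n_comp = 150, 400, 10
group_maps = rng.standard_normal((n_vox, n_comp))   # group-ICA spatial maps
subj_bold = rng.standard_normal((n_tr, n_vox))      # one subject's data

# Stage 1: regress the group spatial maps against the subject's data
# to obtain subject-specific time courses for every component.
tcs, *_ = np.linalg.lstsq(group_maps, subj_bold.T, rcond=None)
subj_timecourses = tcs.T                            # (timepoints x components)

# Stage 2: regress those time courses against each voxel's time series
# to obtain subject-specific spatial maps.
m, *_ = np.linalg.lstsq(subj_timecourses, subj_bold, rcond=None)
subj_maps = m.T                                     # (voxels x components)
```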
If you are interested, Erhardt et al. (2011), describe the principles of several back-reconstruction methods, including dual regression.
What metrics can I extract from the rs-fMRI analyses?
Local activity metrics:
Even before running a network analysis on the rs-fMRI data, such as SCA or ICA (see above), two useful metrics can be derived from the data: ALFF and ReHo.
Amplitude of Low Frequency Fluctuations (ALFF) measures the magnitude of low frequency oscillations (0.01-0.1 Hz) in the BOLD signal. The fractional ALFF, a complementary metric, measures the contribution that the low frequency band makes to the whole recorded frequency range. Both metrics quantify the amplitude of brain activity in specific regions. However, these measures are difficult to interpret. Fractional ALFF has been shown to depend on the vascularisation of the brain, much like the resting-state fluctuation amplitude (RSFA), a closely related measure that is available from any rs-fMRI scan but has often been interpreted differently. Physiological mechanisms in rs-fMRI, including vascular effects, are still not fully understood, which makes the exact interpretation of measures linked to cerebrovascular characteristics more difficult.
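A simple sketch of ALFF and fractional ALFF for a single voxel (NumPy FFT on a random time series; the TR and band limits are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
tr = 2.0                                   # repetition time in seconds (illustrative)
ts = rng.standard_normal(240)              # one voxel's time series

# Amplitude spectrum via the FFT; ALFF is taken here as the summed
# amplitude in the 0.01-0.1 Hz band, fALFF as its fraction of the
# amplitude over the whole (non-DC) spectrum.
freqs = np.fft.rfftfreq(len(ts), d=tr)
amp = np.abs(np.fft.rfft(ts))
band = (freqs >= 0.01) & (freqs <= 0.1)

alff = amp[band].sum()
falff = alff / amp[freqs > 0].sum()
```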
The other common rs-fMRI metric is that of regional homogeneity, or ReHo. ReHo is a voxel-based measure of regional brain activity, based on the similarity of the time-series of a given voxel and its nearest neighbours. It quantifies the homogeneity of adjacent regions, to provide information about the coherence of neural activity of a specific spatial region.
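ReHo is usually computed as Kendall’s coefficient of concordance (Kendall’s W) over a voxel and its neighbours; a small sketch, with synthetic neighbourhoods standing in for real voxel cubes:

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(timeseries):
    """Kendall's coefficient of concordance across k time series.

    timeseries: (k, n) array; for ReHo, a voxel plus its nearest
    neighbours (e.g. a 27-voxel cube around the voxel).
    """
    k, n = timeseries.shape
    ranks = np.apply_along_axis(rankdata, 1, timeseries)  # rank each series over time
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (k ** 2 * (n ** 3 - n))

rng = np.random.default_rng(6)
coherent = np.tile(rng.standard_normal(100), (27, 1))  # identical neighbours
incoherent = rng.standard_normal((27, 100))            # unrelated neighbours

w_hi = kendalls_w(coherent)     # 1.0: fully homogeneous neighbourhood
w_lo = kendalls_w(incoherent)   # near 0: no regional homogeneity
```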
Thus, both ALFF and ReHo give information about regional neural activity and have been shown to have high values in, for example, the default mode network regions during rest, indicating that they can point to the regions that play central roles in resting state networks. Because they provide information about regional neural activity at rest, both ALFF and ReHo can be used to determine an ROI for SCA.
Functional network metrics:
However, ALFF and ReHo are metrics of local neural activity, and are thus limited in their ability to provide information about large resting state networks. Network analyses therefore tend to focus on functional connectivity measures.
SCA and ICA, discussed above, both offer measures of functional connectivity within the brain. Both calculate the correlation of time series between voxels in the brain to produce spatial maps of Z-scores for each voxel. These scores reflect how well the time series of each voxel is correlated with the time series of other voxels and are a measure of functional connectivity. In SCA, the Z-scores reflect the correlation of each voxel with the average time course of the seed voxel, while in ICA the Z-scores reflect the correlation of each voxel with the average time series of the respective IC. Dual regression can be run with both SCA and ICA to enable the investigation of individual and group level differences of functional connectivity.
A good overview of the metrics described above is provided in Lv et al. (2018).
A more recent metric derived from rs-fMRI data is that of functional homotopy. Functional homotopy shows the synchrony of spontaneous neural activity between geometrically corresponding, i.e. homotopic, regions in the two hemispheres. It provides a measure of connectivity between corresponding interhemispheric regions, and can be used to determine regional versus hemispheric information processing.
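On synthetic data, homotopic connectivity can be sketched as the voxelwise correlation between a hemisphere and its mirrored counterpart (the “left” and “right” arrays below are random stand-ins, with a built-in shared signal so the correlations are non-trivial):

```python
import numpy as np

rng = np.random.default_rng(7)
n_tr, n_vox = 180, 20
left = rng.standard_normal((n_tr, n_vox))                # "left-hemisphere" voxels
right = 0.5 * left + rng.standard_normal((n_tr, n_vox))  # mirrored counterparts

# Homotopic connectivity: correlate each voxel with its mirror-image
# counterpart in the opposite hemisphere.
zl = (left - left.mean(0)) / left.std(0)
zr = (right - right.mean(0)) / right.std(0)
vmhc = (zl * zr).mean(axis=0)    # one correlation per homotopic voxel pair
```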
Chao-Gan Yan asks whether these different measures of resting state functional connectivity show unique variance, and discusses the concordance among some of these metrics and also global connectivity (a graph theory measure, please see the next section), by drawing on work from his research group.
It is important to remember that most measures of resting state functional connectivity are based on correlational analyses and thus do not tell us anything about how regions of the brain influence the activity of other regions. It is possible to model the relationships underlying observed patterns of functional connectivity in order to draw inferences about such neural influences, in an approach called effective connectivity, which can be estimated with Dynamic Causal Modelling. In his video, Karl Friston describes how we can use effective connectivity to infer causality from observed connectivity (minutes 0:57 to 23:07).
How can graph theory be applied to resting state data?
More advanced metrics can be derived from rs-fMRI data using graph theoretical analysis approaches. Graph theory is a mathematical method for mapping all the brain’s connections by depicting them as a graph consisting of nodes and edges. When graph theory is applied to rs-fMRI data, the nodes are often large-scale brain regions, and the edges represent the functional connectivity between them. The great advantage of graph theory over other measures of functional connectivity is that it offers a way to quantify the properties of large, complex networks.
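A toy example of this pipeline (NumPy plus NetworkX, with random “regional” time courses and an arbitrary edge cutoff; node count and threshold are illustrative choices):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(8)
ts = rng.standard_normal((200, 12))   # (timepoints x regions) after parcellation

# Nodes = parcellated regions; edges = functional connectivity, here
# the Pearson correlation between regional time courses.
fc = np.corrcoef(ts.T)
np.fill_diagonal(fc, 0)

# Keep only positive correlations above an arbitrary cutoff as edges.
adj = np.where(fc > 0.1, fc, 0.0)
g = nx.from_numpy_array(adj)

# One example graph metric: weighted average clustering coefficient.
clustering = nx.average_clustering(g, weight="weight")
```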
Alex Fornito gives an excellent introduction to graph theory in his video. He discusses the rationale for using graph theory (minutes 0:55 to 3:39), before going on to give a history of graph theory (minutes 3:39 - 11:54). Then, Alex describes how network models can be created and shown as graphs (minutes 11:54 to 16:53), with a focus on defining nodes and edges. He describes how edges can be defined using fMRI data, including the potential problem of relying on the time series correlations that underpin functional connectivity (minutes 19:16 - 24:43). Finally, the construction of the graph is described (minutes 24:43 - 28:55).
Alex Fornito discusses several approaches to defining the nodes of a network. One of these is parcellation of the brain. The brain can be parcellated from rs-fMRI data through either SCA or ICA, as described by Carl Hacker.
Once a functional connectivity matrix has been created, either from brain parcellation or the components obtained from ICA, there are two options for deriving metrics. The first is to simply compare the functional connectivity matrices between two or more groups of participants. This approach can provide useful information about how the variable of interest, such as a disease, affects the connectivity between or within resting state networks, and has been used to characterise functional connectivity in diseases such as schizophrenia and autism. The other option is to create a graph from the functional connectivity matrix and study it with graph theory.
However, because functional connectivity matrices show correlations between the time series of defined brain regions, either approach is potentially susceptible to spurious or weak connections, for instance due to noise. One way to address this is to apply a threshold that removes the connections that fall below that threshold. Andrew Zalesky gives an introduction to network thresholding and an overview of how it is performed between 0:00 and 16:40 minutes of his video. He also provides an overview of the type of measures that can be extracted from brain graphs, with a focus on comparisons of edge strength (minutes 16:40 to 19:36).
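A proportional (density-based) threshold of the kind Andrew Zalesky describes can be sketched as follows (NumPy on random data; the 10% density is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(9)
ts = rng.standard_normal((200, 30))
fc = np.corrcoef(ts.T)
np.fill_diagonal(fc, 0)

# Proportional threshold: keep only the strongest 10% of connections
# (by absolute value) so weak, possibly spurious correlations do not
# enter the graph.
density = 0.10
upper = fc[np.triu_indices_from(fc, k=1)]
cutoff = np.quantile(np.abs(upper), 1 - density)
thresholded = np.where(np.abs(fc) >= cutoff, fc, 0.0)
```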
Some regions of the brain are more strongly connected with others, and tend to be considered network hubs. Metrics related to network hubs are among the most commonly used in graph theoretical analysis. Martijn van den Heuvel discusses network hubs and the metrics associated with them (from about 1:30 minutes).
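Degree and betweenness centrality, two of the hub metrics Martijn van den Heuvel discusses, can be computed with NetworkX on a toy graph (random data; the binarisation cutoff is arbitrary):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(10)
fc = np.corrcoef(rng.standard_normal((200, 15)).T)
np.fill_diagonal(fc, 0)
g = nx.from_numpy_array(np.where(fc > 0.05, 1.0, 0.0))

# Two common hub metrics: degree (how many connections a node has)
# and betweenness centrality (how often a node lies on shortest paths).
degree = dict(g.degree())
betweenness = nx.betweenness_centrality(g)
hub = max(degree, key=degree.get)   # candidate hub: highest-degree node
```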
An extensive list of graph theory metrics and what they tell us about neural networks can be found in Rubinov and Sporns (2010).
For those interested, there is a small collection of videos on graph theory from last year’s presentations at the OHBM conference, including those discussed in this post.
What do the resting state networks actually show?
How do you interpret findings from your resting state analysis? Well, first, it is important to consider the biological function of the correlated temporal patterns. Unfortunately, it is not as simple as defining it as ‘activity during rest.’ RSNs are collections of brain regions that have synchronous BOLD fluctuations, but the source of the signal has not been unequivocally established. While there is strong evidence to suggest that the signal is neural, there is still ongoing debate about the extent to which it may be influenced by non-neuronal noise, such as respiratory and cardiac oscillations. However, the fact that rs-fMRI analysis results have been reproduced even when applying conservative physiological corrections across both individual subjects and groups points to a largely neural basis of the rs-fMRI signal.
So what does the functional connectivity mean? In purely methodological terms it is the statistical correlation of two time series. It has been suggested that such correlations have arisen as a result of neural populations that are active together to perform a task and have therefore ‘wired’ together. The rs-fMRI signal reflects their spontaneous neural activity in the absence of a specific task. There may be direct anatomic connections between networks derived from rs-fMRI analyses, or another joint source of the signal. This is currently not well understood, and rs-fMRI findings should be interpreted with caution.
A short but good outline of the origin of the rs-fMRI signal is provided in van den Heuvel et al. (2010).
The Annual Event of Chinese Young Scholars for Human Brain Mapping was held on June 19th, during the 2018 OHBM Annual Meeting in Singapore. This was the second annual event, and continued the success from the inaugural meeting in Vancouver. The theme for this year’s event was “The Road to Independence”. Around 200 young scholars from universities around the world participated.
The annual event is committed to bringing together young Chinese researchers from a wide variety of backgrounds to share and discuss their professional expertise and career experiences, as well as any challenges they may have faced. This offered a platform for young researchers to build collaborations on cutting-edge neuroscience topics and methods, and also to learn from senior researchers on the route to a successful scientific career.
This year’s schedule commenced with a brief review of the annual event by one of the organisers: Professor Chaogan Yan. Then, Professor Yan introduced the two guest speakers: Professor Jiahong Gao (Director of the MRI Research Center of Peking University, Chair Elect of OHBM), and Professor Xinian Zuo (Director of the MRI Research Center, Institute of Psychology, Chinese Academy of Sciences, Program Committee Chair elect of OHBM).
Professor Jiahong Gao gave the first talk, entitled “Brain Imaging in China: Opportunities and Challenges”. He summarized the rapid development of human brain mapping research in China and shared, with humour, his vision for the future directions of the field. Taking the latest advances in magnetoencephalography development in his lab as an example, Professor Gao discussed the challenges and opportunities we face in brain imaging, and encouraged young scientists to seize those opportunities and bravely climb the scientific peaks.
The second speaker was Professor Xinian Zuo from the Institute of Psychology at the Chinese Academy of Sciences. In his talk titled “From Mathematics to Brain Sciences”, Professor Zuo shared his own career experiences, from a PhD in mathematics to becoming an outstanding independent researcher in human brain science. He particularly emphasized the importance of reliability and reproducibility in brain imaging studies, and briefly introduced several ongoing projects by his team, including the Chinese Color Nest Project and the Traveler Project.
After the two keynote talks, Professor Juan (Hellen) Zhou from Duke-NUS Medical School, and Professor Ning Liu from the Institute of Biophysics at the Chinese Academy of Sciences joined the guest speakers for a panel session. Professor Chaogan Yan moderated the discussion, and introduced several topics under this year’s focus “The Road to Independence”, including relationships with tutors, necessity of career planning and recovery from failures. Each senior researcher shared their insights on these questions.
Professor Jiahong Gao provided advice on these topics based on his own experiences. He pointed out that the degree of independence a young scholar has largely depends on their mentor’s style. Professor Gao encouraged young scholars to develop their skills with support from their mentors, and to prepare themselves to become independent researchers. Young scholars should set ambitious goals, make plans to achieve them, and learn from their sustained efforts.
Professor Xinian Zuo shared his insights based on his personal experience of switching from mathematics to neuroimaging, and echoed Professor Gao in advising that young scholars should seek support from their mentors and develop the ability to do independent research within their mentors’ projects. He also shared “failure” stories about manuscript writing and paper submission from his very early projects, concluding that failure is not terrible: one should learn from these experiences and aim to improve.
Professor Juan (Hellen) Zhou shared her personal study experiences, and emphasized the importance of independence, as well as hard work and persistence in order to become a successful researcher. She provided the example of her public speaking training during her PhD, emphasizing the critical role of hard work for acquiring professional skills. She also advised that one could obtain power and motivation from setbacks, and should move forward towards one’s ultimate goal.
Professor Ning Liu offered her thoughts based on how she works with her own students. She pointed out that unstructured ‘light-touch’ supervision is not suitable for all students, and suggested tailoring supervision strategies to each student. She also discussed the particular difficulties associated with animal studies, and encouraged young scholars to adapt actively to difficulties or potential failures and to stay positive about their goals.
Professor Chaogan Yan talked about his personal “failure” when attempting to switch from neuroimaging studies using fMRI to animal studies, and how he subsequently adjusted his research direction back to human neuroimaging. He pointed out that it could be a big challenge to move to completely new fields for a PhD or postdoc. But he believed that it may still be worth trying, especially if you are keen on the new questions and are still young. Even if there was a high chance of failure, one could learn valuable lessons from these unforgettable experiences.
Towards the end of the panel session, Professor Jiahong Gao provided his answers to the questions from audience on how to get international impact as local scholars in mainland China and how to publish papers in high-impact journals. He encouraged young scholars to perform high-level studies in the field, and to actively communicate research results with international researchers and journal editors. He mentioned that “the point is not that we cannot publish high-impact papers, instead it’s that we have not yet achieved high-impact research results.” He continued, “we should cherish our time, and work hard, to pursue critical questions in the field. Only in this way, can we achieve influential results, and publish papers in high-impact journals, which will lead others to recognize our research capability.”
At the end, the audience thanked the speakers for their informative presentations and discussions with hearty rounds of applause. We took group pictures to conclude this inspiring and memorable event. After the meeting, we enjoyed a group dinner and more informal discussions on both science and life as a scientist.
Organizing Committee of the Annual Event of Chinese Young Scholars for Human Brain Mapping:
Chao-Gan Yan, Institute of Psychology, Chinese Academy of Sciences
Yuan Zhou, Institute of Psychology, Chinese Academy of Sciences
Rui-Bin Zhang, Department of Psychology, The University of Hong Kong
Xiang-Zhen Kong, MPI für Psycholinguistik
Chun-Yu Liu, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University
Xiao Chen, Institute of Psychology, Chinese Academy of Sciences
Heidi Johansen-Berg interviews Charlie Stagg about GABA-MRS, neurostimulation and medical and scientific careers
By Nils Muhlert
In much of biomedical science the questions dictate the methods. This often means we have to draw on knowledge from different disciplines, or combine data from different modalities to converge on a likely solution. In a first for the OHBM blog we asked a senior PI to interview a more junior PI within their institution. This was always going to lead to interesting discussions - but when Heidi Johansen-Berg, director of the new Wellcome Centre for Integrative Neuroimaging in Oxford, agreed to interview the multi-modal brain mapper Charlie Stagg, it became clear that we would be acquiring a full spectrum of insight into scientific and career-related issues.
What follows is a wide-ranging discussion on moving from medicine to pure research, combining information about neurotransmitters from MR spectroscopy with neurostimulation techniques and the potential benefits of mapping from preclinical to clinical imaging.
Heidi Johansen-Berg (HJB): Charlie, you initially trained as a medic, but then decided to go down the pure science route – what persuaded you to take on a life of research rather than being in the clinic?
Charlie Stagg (CS): Yes – I did medicine as an undergraduate in the UK, at Bristol University. As part of that, I had the chance to do an extra year halfway through, at which point I did an undergrad degree in physiology and fell in love with the subject. I’ve always been very interested in the brain and wanted to be a neurologist, but through my undergrad research and in my clinical years I realised that I wanted to do something more interventional, and that there weren’t many treatments for chronic stroke recovery – something I was interested in at the time.
It’s very common in the UK for clinicians in training to do a PhD some years after their medical training, so I came to Oxford to do that. I fell in love with it and didn’t want to stop! I’ve been an academic ever since.
Quite a lot of our work is clinical research; I work with a number of clinicians, and I think the training is useful in dealing with the medical side and dealing with patients. But research is way too much fun to go back to doing just clinical work.
HJB: So you’ve not regretted that decision – you’ve never wished you could do a bit more clinical work?
CS: No. There are certainly days where you wonder what you’re doing – but overall, no!
HJB: I think we all get that [laughs]. In terms of your research, a lot of it has focussed on the role of GABA in behaviour. As we know, this neurotransmitter is receiving increasing attention, particularly as the methods to measure it have improved over the years. What most excites you about GABA, what have been the recent breakthrough findings in the role of GABA in healthy and disordered brain function?
CS: It’s a really exciting question – as there are a lot of recent papers on this. When I started there wasn’t much work on it – it was a real niche subject. I used to have to start all of my talks by explaining what spectroscopy was. I remember people coming up at the end of talks saying “I didn’t know you could do that, that’s amazing.” And now, I never have that – and that’s fantastic.
Much of our work has been on primary motor areas and motor systems in particular. That’s been very interesting from my point of view. But there’s been a lot of recent work taking it out of the motor system that’s been really exciting.
A paper by James Kolasinski (who’s now at CUBRIC in Cardiff) took a really simple hypothesis from the animal literature: that local inhibition and cortical organisation should relate to behaviour. He really beautifully showed that it did, in absolutely the way we’d expect. It’s a really nice, elegant study, as this hadn’t been measured in humans before – but it showed that what we’re measuring with GABA has at least some relevance to what we see in animals.
You then get people starting to answer interesting questions, not only about immediate behaviour within that region but how it relates to networks. For instance, there’s a paper on overlearning from Watanabe’s lab where they showed increases in GABA as memories stabilise. That’s really exciting, as we’d been previously looking at decreases in GABA as we learn. If our understanding of physiology is correct then we should see increases as the memories stabilise – so that was really exciting to see the first demonstration of that in humans.
All of this work has been done in primary regions – such as primary sensory regions or the motor system – and people like Helen Barron in Tim Behrens’ lab have done some really cool things, asking really abstract questions. You imagine that lateral inhibition is really important for somatotopy or retinotopy, but linking it to memory and more advanced cognitive processes is really cool. It raises a lot of questions about what is going on.
HJB: And I guess that’s where the ability to ask these questions in humans really does make a difference. One of the limitations of MRS has perhaps been its lack of precision, compared to manipulating GABA in animal models. A potential criticism is: why do it imprecisely in humans when you can do it precisely and specifically in animals? Presumably you feel that doing it in humans makes a difference?
CS: Absolutely – it’s something that I think about a lot. It is an indirect method and we can use multi-modal approaches to triangulate what we’re seeing, but it’s still not as direct as doing invasive recordings and never will be. So we need a good reason for doing it in humans – and the complicated cognitive work that Helen Barron’s doing is absolutely one of those reasons.
I’d also argue that human hand control, and particularly the relearning of that in the timescales after stroke, is very difficult to model in animals. The use of the hand is incredibly complex, the separation between primary motor and somatosensory cortex is pretty unusual in primates. We’re beginning to believe that the primary motor cortex works quite differently in terms of the physiology compared to the primary somatosensory cortex. That distinction is important, so there aren’t ready animal models that recapitulate all of that. There are certainly arguments for carrying this work out in humans.
HJB: Relatedly, the ability to measure neurotransmitter levels using MRS has been around for decades but has never really had the popularity of other structural and functional techniques. What do you think has held it back – and why do you think this may be changing in recent years?
CS: It is true. I do remember people finding out that we could measure neurotransmitter levels and wondering why we don’t do it more often. It’s really hard – particularly at lower field strengths. The signal we’re dealing with is 10,000 times smaller than the water signal we use for fMRI, so we’re dealing with much poorer signal-to-noise. So trying to get measures of chemicals at the millimolar range even at 3T is challenging and takes a long time. The advent of 7T being more widely available has massively boosted that SNR and made it much more achievable to get reliable measures within a sensible time frame of a few minutes.
If you look at our early work on the 3T, it took 20-25 minutes to get a sensible measurement. This is possible, but actually quite difficult in the context of studying learning. It limits the questions you can ask and makes it more difficult for the patients. This is perhaps why there has been less work in neurological and psychiatric groups – the timescales are just not clinically feasible. But suddenly, because the timeframes are shorter with ultra-high field MRI, you no longer need a big team of skilled physicists before you can do this. You still need a good physicist, quite a lot of time, and a 7T, but it’s becoming much more feasible to take these approaches and use them in a similar way that you would with fMRI.
HJB: One thing that strikes me as a bit unusual with MRS on the analysis side is that other MR modalities have seen massive amounts of research into signal processing – but in MRS there seem to be few, relatively simple, approaches. But the field of signal processing has changed massively over decades. The analysis of MRS doesn’t seem to have changed a lot in that time. For something like the OHBM community you’d think there would be a wealth of talent of people who could develop much more sensitive measures for extracting useful information from noisy spectra – why hasn’t that happened?
CS: I think it’s chicken and egg. There haven’t been that many people using it, there hasn’t been that drive and the simple spectra that you get, the edited spectra, are reasonably easy to analyse with simple approaches: you just fit a Gaussian. Now people are doing interesting things, more complicated things at 7T; we’re dealing with much more complicated signals. There are gold standard methods out there but people are developing their own.
To some extent, some of the questions that are important for fMRI just aren’t important for MRS. You usually get just a single voxel, so all the issues around clustering and thresholding, all those issues are less important, but yes, it’s a field that’s wide open right now.
HJB: Yeah, you’re on the lookout for talent! There are now multi-voxel MRS methods, so there will be a spatial component. People are starting to acquire functional MRS over time. It’s getting a lot more multi-dimensional than it has been. It’s certainly an exciting topic.
CS: It is! One of the things we’re getting excited about is work with Uzay Emir, who’s set up some fantastic sequences that we’re playing with and getting to grips with. One of them is a combined fMRI-fMRS. So for each TR you get BOLD signal and a single voxel decent spectra. That gives us the temporal resolution that we’ve never had before, so we’re beginning to ask new questions. It feels very much, from talking to people who were there, that’s it’s like the beginning of fMRI. We have this new thing and no-one’s quite sure what to do with it. We’re amazingly lucky to be situated here in Oxford, where the FSL guys are sitting next door and we can just go and talk and work with them. But there’s a lot of room for improvement. If you’re interested, then there’s an entire career there.
HJB: And how have you found the challenge of fusing information from MRS and non-invasive brain stimulation techniques like TMS and tDCS? What do you think are the opportunities in bringing those approaches together?
CS: Well, my first ever study on the MR scanner was combining the two, as a first-year PhD student. I didn’t realise that this was difficult.
HJB: It was cutting-edge!
CS: Yeah, it was the thing to do, so I went and did it! But it is technically challenging. The reason that I just went and did it as a first year student is a massive testament to the support that we have. The physicists and radiologists are amazing at setting up sometimes weird bits of kit – and there’s a load of expertise around on the brain stim side. Doing it is one thing and we’ve now worked through it enough to be confident and happy with it. But interpreting the results is a whole different question! That’s ongoing and lots of people are becoming interested in it.
HJB: It’s quite an exciting possibility – one of the big limitations of brain imaging is that we’re stuck with correlations. You put someone in the scanner and see what lights up and you can correlate activity with behaviour but you never get causal inference. So to be able to perturb the system and observe the consequences does certainly add to the toolset. So it’s powerful for asking causal questions.
CS: Yes, I think so. But there are obvious caveats to what we can assume about the specificity – and we use tDCS rather than TMS for lots of good reasons, but it does have slightly more questions about the spatial specificity. With all the techniques there are questions if you’re thinking about physiology. Quite a lot of our work a while ago was trying to ascertain whether tDCS, and TMS to an extent, affect the brain in a similar way to the naturally occurring changes when we learn something. Are we engaging the same systems and doing the same thing to induce plasticity – or are we doing something completely different? It looks like there are very similar mechanisms involved, which makes sense but that’s a key assumption that we’re making. We’re still doing it and I still think it’s a really important thing to do.
Looking at the OHBM symposium this year, it’s really encouraging to see how much brain stimulation work is being presented – how many novel techniques are presented. There’s also a satellite event, so clearly people are beginning to realise that BOLD is brilliant and can tell us many things but, as you say, it’s very correlational and doesn’t tell us that much about the physiology. Once we understand the regions and networks that are important we can then go in and look much more specifically at given nodes within that using MRS and using stimulation to get a feel of what’s happening.
HJB: And it feels like the brain stimulation field is evolving, getting more sophisticated and more nuanced. There has been scepticism about some of those techniques because of things like variability or lack of replication, and I think there’s an acknowledgement that these effects are very variable and that this needs to be taken into account – but it could actually be incredibly interesting; there might be interesting reasons for that variability. Trying to improve our experiments, capture that variability at the individual subject level, and understand it could really increase the use of those techniques for studying healthy brains. It could also help explain the cases where you get the completely opposite response for the same stimulation. For some people that’s a reason to shy away from the technique altogether, but others are asking “why is that?”. It does seem that there are interesting reasons for the variability, which could include genetics or brain geometry, that could help us understand the responses.
CS: Yes, and it has been an interesting time over the last few years in the brain stimulation literature. We hope we’re coming to a conclusion where we’re saying that it is variable but that could be interesting. As with any technique we need to be able to use it properly – we need to control it properly, carry out double-blinded, placebo-controlled trials.
HJB: It’s just like the early days of brain imaging, there were a lot of non-perfect (shall we say) imaging studies in those days but that doesn’t mean that imaging is a flawed technique, you just have to do the experiments right.
CS: And it’s like 10 years ago, when people were tweeting about the dead salmon – now there are other things coming up. The field is in a very similar place.
HJB: Here in Oxford, as you know we have core-funding from the Wellcome Trust to create this new centre for neuroimaging, the Wellcome Centre for Integrative Neuroimaging (WIN), building on the success of places like FMRIB and OHBA over the years. With that, we’ll have access to new facilities – ultra high-field MR but also new facilities for animal MRI. What kind of things are you most excited about, once you have all this equipment at your disposal?
CS: The WIN is really exciting for many reasons. One of the big differences it has made already is in people – having a lot more physicists around to develop these techniques, getting the brain stimulation working in all of the scanners, and also getting the sequences to work reliably and trouble-shooting. Having physics support to do that, which is provided by the Wellcome Trust, is just amazing. Oxford is also wonderful and has a huge number of very, very good people working on similar things. But it’s spatially spread out across the site. While that’s still true of the WIN, having one centre has begun to get people to talk to each other in ways that they haven’t before. There are people here that I’ve never spoken to – so it’s been good to get together and see what we could do.
One of the things I’ve been interested in for a while is that, while human MRS is important, there’s no doubt that it’s an indirect measure, and there are some key questions about what it is that we’re picking up in terms of the underlying physiology. It would be fantastic to look at that, if we can do very specific interventions. Having a small bore animal scanner which allows us to do similar things to what we do in humans – the same sequences, as well as complicated behaviour and genetics – is just very exciting.
HJB: Yes, that’s what I’m particularly excited about – being able to use imaging as a bridge between lab-based neuroscience and the things we do with patients. All of us using neuroimaging have been frustrated at the lack of specificity in what’s going on in physiological terms. Having the techniques that allow us to bridge that gap actually allows us to carry out the manipulations in animal models, but then relate those signals to what we see in humans. That’s something that I think is really exciting.
CS: We’ve been working very specifically with Jerome Sallet on ultrasound modulation and have a grant to develop it in humans. He’s doing it in primates. We’ve been working really hard on that, and it’s been fantastic to work with him to see what it actually looks like – what behaviour changes he’s getting, what imaging changes he’s getting. And it’s starting to inform what we can expect to see. That’s concrete evidence that’s already coming out.
HJB: Yes, bringing together people across those species boundaries – and starting to train junior people to carry out cross-species experiments. People like Helen Barron are individual scientists who are doing fMRI, but then understanding those signals in terms of electrophysiology, and using optogenetic manipulations. You then have the macaque work from people like Jerome and Rogier Mars. Bringing them together with people with imaging analysis expertise, like Mark Jenkinson and Saad Jbabdi, will help us build tools to move seamlessly from rodent space to macaque space to human space, making it much easier for people to cross those species boundaries.
To finish up, what advice would you have to early career researchers who are about to start their careers in brain imaging or multi-modal brain mapping?
CS: I think it’s difficult – and it’s a case of do what I say and not what I do [laughs].
It’s all about the questions. You have to work out what the question is that you find exciting and interesting – I made a joke about it earlier but it is hard and you have days when you’re wondering why you’re doing it and it has to be something that really excites you.
HJB: Yes, it has to be something you care about and want to know the answers to.
CS: That you really, really want to know the answers to!
And then you need to work out what techniques you need to be able to answer that. For me, that did and does involve multi-modal neuroimaging. We’re starting to use MEG to look at brain oscillations, which I think is the key mechanism linking inhibition, as measured with GABA MRS, to the functional connectivity changes we see in plasticity. I think oscillations are really important, probably the route through which that happens. Working out that this was what we needed to do was the key step. Working, again, with excellent people within Mark Woolrich’s group has certainly helped.
So you need to work out the question, what techniques you need to be able to answer it, and then make sure you’re somewhere that can support that. One of the downsides of doing the multi-modal work that my group does (we do MRS, MRI, brain stimulation, MEG) is that you end up as a jack-of-all-trades and not an expert in any given one of them. Working somewhere that has genuine experts in all of those areas, and where they’re happy to help, is really important – and it’s one of the reasons I’m here: we have all that expertise covered and we can do those tricky experiments.
HJB: Yes, that two-way interaction where you have impressive experts developing methods, which can then inspire researchers to ask new questions. But you need them there as well to know the pressing questions – whether those are clinical questions or neuroscience questions. That can hopefully steer method development to answer particular questions. I think it’s always tempting for some of us to get wooed by a particularly cool method or new analysis approach and lose sight of why we’re doing it, or what question we’re trying to answer. So I think your point that the question needs to come first, particularly for those of us in the more neuroscience side, is really important.
CS: Yes I think it may be different for the methods-development people.
HJB: Yes, but even then you need to keep in mind what question this method answers that can’t already be answered. Not just what cool engineering or mathematical principles can I implement. Really, what’s the point of it? It’s important for people to keep that in mind and use that as a way of prioritising and steering your work. Ideally you get a combination of something that uses cool cutting-edge computer science, but also allows people to do something that they couldn’t do before.
CS: And my other standard advice for early career researchers is to move between labs – though I’m very conscious that neither of us did! We did travel strategically, though: I spent a little time in Florida and at UCL, and you spent six months in Montreal.
HJB: [laughs] Yes, I completely agree, the standard advice is to move around and see various different labs, travel the world. That’s absolutely something that benefits people’s career development but for me personally it wasn’t the right thing to do at different stages of my career. So I’ve pretty much been here in Oxford throughout. I’ve tried to get that experience through collaborating very widely locally here in Oxford and elsewhere in the UK and further afield. We can get inspiration and avoid going stale in the same location through collaboration and meeting people in that way. So if you can’t move round, don’t worry too much about it.
(or, How I became an advocate of Open Science in 5 days.)
It’s the final day of OHBM 2018, and I’m tired. I’m also excited, enthusiastic and exhilarated. I’ve had my first real taste of the Open Science community, and I want more! What follows is an honest account of how I went from feelings of scepticism and ambivalence about Open Science, to a flag waving advocate. My hope is that you might read my story and find it in some way relatable. Then maybe, just maybe, you’ll join me in the Open Science Room next year.
When I registered for OHBM, I intended to sign up for the “Hackathon”. I wanted to improve my python programming skills and hoped I might make a few buddies along the way. Alas, the Hackathon was sold out, and if I wanted to know more I’d have to “make do” with attending the one-hour introduction session: Brain Hacking 101. “But do stop by the Open Science Room while you’re here!” the organiser said. Open Science wasn’t a priority for me, so I wasn’t sure if I’d find the time.
I started the conference with some educational sessions. People kept mentioning their “GitHub” pages, but I didn’t take much notice. Vince Calhoun presented some work using dynamic functional connectivity (dFC), and I felt the rumblings of a new project brewing. I knew I could apply this method to some data I had sitting in a drawer and it might turn out to be interesting. And then there was a link to his GitHub page. I scribbled in my notebook: “dFC toolbox available, with documentation and examples!” The toolbox was written in MATLAB (phew!), and was an extension of something I was already familiar with.
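(For readers who, like me, were new to the method: the core idea behind sliding-window dFC can be sketched in a few lines. This is a hedged, generic illustration of the technique, not code from the MATLAB toolbox mentioned above; the window length and step size are arbitrary choices for the example.)

```python
import numpy as np

def sliding_window_dfc(ts, window=30, step=5):
    """Sliding-window dynamic functional connectivity sketch.

    ts: (timepoints, regions) array of regional time series.
    Returns a list of (regions, regions) correlation matrices,
    one per window, instead of a single whole-scan correlation.
    """
    mats = []
    for start in range(0, ts.shape[0] - window + 1, step):
        # correlate regions within this window only
        mats.append(np.corrcoef(ts[start:start + window].T))
    return mats

# Toy data: 200 timepoints, 10 regions
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 10))
mats = sliding_window_dfc(ts)
# each entry in mats is a 10x10 windowed correlation matrix
```

Real toolboxes add tapering, regularisation, and clustering of the windowed matrices into connectivity "states", but the windowed correlation above is the common starting point.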
Next I headed to Brain Hack 101, to fill the lunch-hour void. There was an unassuming-looking guy standing at the front, and he was clearly the super-programmer sort. This was Greg Kiar. He was patient with the attendees and our entry-level questions, and in one hour he explained a few terms that I had heard being thrown around but not really understood:
BIDS (Brain Imaging Data Structure) got triple underlined in my notebook. I had been looking for a way to better structure the masses of data I’d inherited, and here was a fully-fledged and well organised system which I could use. “One less job for me!”, I thought. Clearly a lot of effort had gone into defining BIDS, and who was I to try and reinvent the wheel? Someone else has already done a fantastic job in creating the system and implementation, and they were giving it to me for free!
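To give a flavour of what that system looks like: BIDS encodes key-value "entities" directly in file names, so a scan's subject, session, and task can be parsed programmatically. The sketch below is a simplified, unofficial illustration of the functional-scan naming pattern only; the real specification and validator cover many more entities and datatypes.

```python
import re

# Simplified BIDS functional filename pattern:
# sub-<label>[_ses-<label>]_task-<label>[_run-<index>]_bold.nii[.gz]
BIDS_FUNC = re.compile(
    r"sub-(?P<sub>[a-zA-Z0-9]+)"
    r"(?:_ses-(?P<ses>[a-zA-Z0-9]+))?"
    r"_task-(?P<task>[a-zA-Z0-9]+)"
    r"(?:_run-(?P<run>[0-9]+))?"
    r"_bold\.nii(?:\.gz)?$"
)

def parse_func_filename(name):
    """Return the entity labels from a BIDS-style functional filename, or None."""
    m = BIDS_FUNC.match(name)
    return m.groupdict() if m else None

entities = parse_func_filename("sub-01_ses-pre_task-rest_run-1_bold.nii.gz")
# -> {'sub': '01', 'ses': 'pre', 'task': 'rest', 'run': '1'}
```

Because every lab following BIDS names files this way, tools like fmriprep can discover a dataset's structure automatically, which is exactly what makes "one less job for me" possible.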
fmriprep got triple underlined as well. This was a tool for doing some kick-ass preprocessing, made by the masters and shared freely with a ton of documentation and support. I was hoping to come away from the meeting prepared to build a tool to improve the efficiency of preprocessing the data collected by our group. In fmriprep, I’d found another fully-fledged tool to do exactly what I was looking for, and do it significantly better than I could with the time and resources available to me.
At the end of the session I was a bit more confident in my understanding of what was available and how I could make use of it. More importantly, I had ticked two of my “OHBM Outcome” boxes in discovering BIDS and fmriprep. I thanked Greg after the session, specifically because he had suggested I come here and I had found it really useful. He seemed grateful for the feedback.
The OHBM meeting continued and I found myself in the Open Science room again and again. This was the place where I could find out about real, tangible “things” which could elevate my research. They also had bean bags and power sockets, and there was an atmosphere of generosity which was very pleasant when you’re surrounded by some of the most eminent people in your field.
The real turning point was attending a session described as “Discussion: How to improve Multi Echo denoising”. I expected a panel, with me taking notes. What I got was an invitation to join a (physical) circle of 8-10 very impressive people as they attempted to make something (TEDANA) better, by working together in a friendly, communicative and organised way. The session started, as these circle things do, with each of us saying who we were and what we did with multi echo. Most people seemed to know each other already, at least by association, but the Chairs of the sociocratic circle (Elizabeth DuPre and Dan Handwerker) made a point of ensuring everyone was on the same page and that there were no assumptions. I was clearly the least technically experienced person in the room, but they were nice and attempted to actively engage me in the discussion. Susan Bookheimer had told us that women in science need to be outspoken, so I made an effort to speak out. To my surprise, my opinion was welcomed. I learned that people like them really do need to hear from people like me; I assisted in the development of this tool through virtue of my inexperience.
My original motivation for attending the session was not altruistic: “if the tools are going to develop I want to be early on the curve to adopt them”. I had intended to be a passive recipient of their talent and hard work. At some point it became clear that this wasn’t going to be the case. I was skilfully guided into volunteering my services in improving the documentation of the tool, but I wasn’t too aggrieved about the extra workload ‘imposed’ upon me. I was probably going to read the “how to” guides anyway, to get TEDANA working with my data, so all I’d have to do is ‘track changes’ and make a few suggestions. This seemed a pretty reasonable exchange, but they weren’t taking my commitment lightly. They wrote my name down next to the action on the Google Doc which was being typed up for all to see. I have no doubt, however, that if I had kept my head down the group wouldn’t have pushed me to be involved. In truth, I wanted to be a member of this group, but I didn’t feel like I had anything to offer.
When the allotted time for discussion was over, I approached Kirstie Whitaker to thank her for her contributions in different sessions, and for saying out loud things which I was too nervous to say. She spoke with strength, clarity and precision. She seemed to embody all the things I want to see in the field, and I wanted her to know I was appreciative. Turns out we have some shared research interests and she was keen to work with me. She treated me with respect and encouragement, and seemed willing for me to benefit from her experience.
I was starting to notice a theme: The people in the Open Science Room were really nice. They were kind, considerate, generous, and welcoming. They were also really skilled scientists. The kind of top-of-your-game people who anyone would be honoured to work with and learn from. They genuinely cared about empowering others to foster more efficient, more effective and higher quality research. They cared about non-science things that are important to me, like gender equality, privilege and dubious political decisions. I wanted to work with them, and they wanted to work with me.
Though my initial motivations for engaging with Open Science were purely selfish, I came to appreciate that if I go a little bit further and contribute back to the system, we can all do more effective research. Ultimately, this is good for advancing our understanding. I am reminded that the reason I love this job is not just because of the creative freedom and intellectual stimulation it provides, nor the opportunity to go to cool places and meet awesome people. It all boils down to helping those individuals whose brains work differently, or maybe not quite as they should, and trying to help them exist happily in this confusing and sometimes dangerous world. I have come away with the firm belief that Open Science is Social Justice, and that is something I want to be a part of. I am also reaffirmed of the good that we can do as a community, if we are kind and generous with each other.
Before this week, I was not a fan of Open Science. It felt like a bucket of extra work in my already stretched schedule. It also played on my insecurities as an early career researcher, to meet the demands of publishing and the fear that I may be exposed for being no good (I believe that’s called Imposter Syndrome). What I’ve come to learn is that I’ve already benefited massively from Open Science by using a long list of tools generated through many hours of skill, labour and dedication (see below for a list which grows longer the more I think about it). All of these have been free to me and required little more than a citation, if anything.
After this whirlwind of a conference I am now a strong advocate of Open Science, and I think you should be too. You don’t have to take my word for it, just come to the Open Science room at the OHBM Annual Meeting in Rome next year and see how you get on. They have bean bags and power sockets, and you might even have some emotions which help you do research.
Ways I have personally benefited from Open Science without being aware of it or appreciating it:
3) OBART (NITRIC’s Online Brain Atlas Reconciliation Tool)
4) Chris Rorden’s mricron, mricro, dcm2nii, and fmri simulator
5) The 1,000,000 times when I’ve searched the internet for “how do I do x in MATLAB/Linux/bash?”
By Elizabeth DuPre and Kirstie Whitaker
This month we continued our Open Science Demo Call series with a focus on this year’s OHBM annual meeting in Singapore, and the many ways that OHBM members promote open science at the meeting. We heard from Michele Veldsman, Ayaka Ando, and Aki Nikolaidis from the student and postdoc special interest group; Greg Kiar and Anisha Keshevan from the open science special interest group; and Nils Muhlert from the communications committee.
We first heard from Aki about the Monday Night Social, featuring beautiful views of the Singapore skyline and an announcement of the winners of this year’s Brain Art competition. All OHBM members are encouraged to attend! Ayaka gave us a sneak peek into this year’s career development symposium, 'The Secrets Behind Success', with presentations from Tom Nichols and Lucina Uddin. Michele told us about the mentorship program, which seeks to support professional growth at all career levels, from masters students to principal investigators. Although mentors and mentees meet at the OHBM annual meeting, you don’t have to be attending to get involved! Aki, Ayaka, and Michele encourage everyone interested in any of these initiatives to stop by one of their events or reach out on twitter!
Greg and Anisha then told us about this year’s OHBM BrainHack and TrainTrack. Formerly known as the OHBM hackathon, this is a collaborative event which brings together researchers and scientists from across the neuroimaging community to work together on projects, learn new skills, and engage with the community. While registration for the BrainHack and TrainTrack is sold out, the Open Science Room at OHBM will be hosting demos, workshops, and an open working space throughout the annual meeting. Greg and Anisha encourage everybody to come and find out about the great initiatives and individuals in our open science community!
Nils introduced us to the amazing work the communications committee is doing to spotlight open science at the meeting and throughout the year. They were involved in revamping the OHBM website and maintain a blog highlighting initiatives within the OHBM community (including these demo calls!), as well as educational materials such as the OnDemand courses. Nils pointed out that this provides a great way to stay involved year-round (e.g. workshops on analysing diffusion MRI), even for those who cannot attend the annual meeting. The communications committee also helps promote the excellent brain mapping work carried out around the world, and provides a unique window into the stories of many successful neuroscientists through their keynote interview series. Nils encourages anyone interested in contributing to the blog to sign up!
We’ll be taking a break next month for the annual meeting, but look forward to starting back up at the end of the summer! If you’d like to nominate yourself or someone else to be featured on these monthly calls, please add their information at this GitHub issue, or email the host of the calls, Kirstie Whitaker, at email@example.com. You can also join the OSSIG Google group to receive reminders each month.
By Nikola Stikov and Jean-Baptiste Poline
(in consultation with the OHBM Publishing Initiative Committee)
The current academic publishing norms impose many constraints on how and what we publish without fully embracing the new web-enabled dynamics. The emergence of the internet as the de facto publication medium, and the availability of open source technologies for handling the hosting and peer review process, have made it possible for organizations such as OHBM to experiment with innovative publishing platforms and to host high-quality research objects while promoting reproducible and open science.
With Aperture, OHBM plans to open up to a more diverse approach in communicating academic research, bringing transparency and interactivity to the publishing process. While initially our focus will be on reviews, tutorials and educational materials, we foresee using this format to incorporate computational notebooks, interactive plots, software, data, and post-publication peer-review to create living, reusable and reproducible research objects. We hope to have a beta version of Aperture in time for the Rome meeting in 2019.
Coko has extensive experience developing open source publishing components, some of which are used by eLife and other open-access publishers. Their framework could give the Open Science SIG and the broader OHBM community the opportunity to participate in the construction of Aperture. We look forward to establishing even more collaborations with like-minded partners.
Most importantly, we want to hear from our members! For that purpose, we invite you all to attend the Publishing Round Table, to be held at the annual meeting in Singapore on Monday June 18 at noon (Room: 324-326). Please join us for what we hope will be a fruitful discussion about the future of Aperture!
By AmanPreet Badhwar
“In a forest of a hundred thousand trees, no two leaves are alike. And no two journeys along the same path are alike.” ― Paulo Coelho
My first OHBM annual meeting experience was in 2015. I did not know many researchers in the field, having just started my postdoc in human brain imaging. On top of that, I was attending OHBM 2015 without my postdoc supervisor in tow (who knew the community well), and I worried about not finding my place in the human brain mapping community. Luckily, I had met Daniel Margulies a few months prior to OHBM 2015. Not only did he make it a point to introduce me to the community at that annual meeting, but I also found myself happily involved in the many grassroots initiatives of the Neuro Bureau, ranging from brainhacks to sci-art exhibits to open science initiatives. Fast-forward to today: I have developed my own unique voice in the OHBM community, in large part due to guidance from Daniel and his free-spirited compatriots during those formative moments. I have had the opportunity to collaborate with Daniel on several projects, both scientific and sci-artistic, and recently had the pleasure of interviewing him at the inaugural BrainHack School 2018 in Montreal.
AmanPreet Badhwar (AB): How would you describe your research to a random person on the street?
Daniel Margulies (DM): When explaining my research to a random person on the street, I usually gesture to my head to illustrate that I study the brain. If there is time for further elaboration, I explain that I study how areas are spatially arranged and connected to one another using MRI, and the consequences of this layout for the possibilities and constraints of cognition.
AB: What projects are you currently working on? Could you comment on some of the breakthroughs and bottlenecks you have encountered?
DM: I’ve recently moved my lab from Leipzig to Paris, which has provided a refreshing opportunity to set new research priorities and establish new collaborations. We recently identified a gradient in cortical organization that spans from primary cortical areas to the regions of the default-mode network, so my current projects extend this observation by exploring its consequences for cognition, carrying out cross-species comparative studies, and examining how the gradient can be divided into zones of cross-modal integration.
AB: Can you tell me a bit about your career path?
DM: I studied humanities in undergrad, but ended up in neuroscience through a chance encounter years ago at a bus stop in New York that resulted in an invitation to join Xavier Castellanos’s lab at New York University as a research assistant. I was soon introduced to neuroimaging data analysis by Mike Milham and imparted with a love for neuroanatomy by Michael Petrides. A similar twist of fate landed me in Berlin a few years later as a graduate student with Arno Villringer. I was very fortunate to have mentors who were immeasurably supportive and offered me opportunities to pursue my various interests. This all came together when I started my own lab in 2012 at the Max Planck Institute in Leipzig.
AB: What is your take on multimodal research? How have you integrated this within your own research project?
DM: The complexity of various fields in neuroimaging today has resulted in a level of specialization that makes it challenging to take a wider perspective. I believe one of the major challenges we face is in thinking across different methods and vocabularies to construct unified models that underlie these diverse, and at times divergent measures. As my core project aims to understand some basic principles of how features of the cerebral cortex are spatially arranged, perspectives from multiple modalities are central towards achieving that goal. We make use of the macaque monkey tract-tracing literature, high-resolution MRI, meta-analytic and task-based approaches… So much data is openly available these days that conducting multimodal studies is really becoming more the norm than the exception.
AB: If “like connects to like” in the brain, then tell us a bit about what makes the brain work as a unit?
DM: “Like connects to like” is a principle that has been introduced to describe preferential long-range connections between cortical areas that have similar degrees of laminar differentiation. It’s pithy and captures an elegant multimodal phenomenon of cortical organization. Nevertheless, various other principles are also critical to cortical organization, such as extensive connectivity between neighboring areas. Although there is a substantial focus in brain mapping of the differences and discrete boundaries between areas and large-scale systems, one challenge that your question illustrates is to also consider how these distinctions are integrated into a functional whole. There is little doubt that the brain is highly interconnected — a factor that is important to remember when delineating various subdivisions.
AB: You are a Neuro Bureau member. Could you tell me a bit about the Neuro Bureau?
DM: I started the Neuro Bureau with Cameron Craddock back in 2009 or so. When we first got going, all we had was the name, which we felt at the time was a solid enough starting point to merit a purpose. We developed the Neuro Bureau into a cross-institutional and cross-disciplinary support group for early career researchers with the aim of providing the neuroimaging community with projects and initiatives that weren’t traditionally credited. This includes the Open Science Gala at OHBM, the brain-art competition and exhibition, and the preprocessed data initiatives. The idea was to infuse our community with new perspectives, to render it more accessible to other disciplines, and to make it in some ways more playful. Towards those goals we also encouraged a spirit of open scientific practice, which grew into Brainhack a few years later. Early on I received the advice to help create the research community I wanted to be a part of — the Neuro Bureau is our way of doing just that.
AB: Could you comment on the Neuro Bureau’s role in mentoring trainees?
DM: I’ve never really thought of the Neuro Bureau as a mentoring-oriented organization. Mentoring implies a mentor and mentee, and the Neuro Bureau has always had more the spirit of a tree house, along with all the big ideas, camaraderie, shoe-string operations, and mischievousness that tree houses tend to have. Good mentorship is so critical when joining the neuroimaging community, but so is finding your own group of peers — a kind of research family. For us, the Neuro Bureau provides a space to try out new ideas, seek support when faced with the various challenges of research, and to feel that we have a place of our own in the wider community.
AB: Thank you Daniel for taking the time to sit down for this interview. Looking forward to your keynote at OHBM 2018.
By Shruti Vij
“I have always loved the idea of not being what people expect me to be!” - Dita Von Teese.
There isn’t another stalwart in neuroscience whom this quote better describes. Be it her expertise in developmental cognitive neuroscience, her championing of novel techniques such as concurrent PET-MR scanning, her vocal demeanor or her punk rock persona! Bea Luna’s research persona conveys an overwhelming sense of success in being not only innovative but also purposeful in her dogged pursuit of making one of the toughest periods in life - adolescence - better understood. She has hundreds of articles and many prestigious grants and awards to her credit, in addition to being the President of the Flux Society. With such lofty achievements, one might expect her to be a sombre intellectual, but one is easily surprised by her bubbly and inviting personality! This article probably does not fully convey my personal excitement at being able to interview such an amazing role model, and to have so many take-aways from our conversation was definitely a bonus! What’s more, as you read on, you will discover that being you, in your own way, is what leads to success - something the world of academia needs to stop and think about in its rush to achieve the next big thing!
Shruti Vij (SV): The research focus of your lab at Pittsburgh is neurocognitive development. In particular, you are interested in brain maturation in adolescence. What motivated you to pursue this direction?
Bea Luna (BL): I was a crazy risk-taking teenager having fun and questioning everything, which probably contributed to being kicked out of school! But I remember being very aware that this was a really unique time of feeling free and invincible and that it was finite. I was very much into philosophy – thinking about what the mind means, what the brain means. It was during this time that I found out brain function could be measured with PET, and would fantasize about one day being able to use this to understand human consciousness.
As a grad student, I studied visual and attentional development longitudinally in premature infants who, due to their immature lungs, can have hypoxic ischaemic events in the brain. I was surprised and fascinated that it didn’t matter if half the brain was missing, or if there was a little dot of hypoxic ischaemia: it did not predict outcome in visual acuity and attentional processes. I thought, wow, how can that be? In adults it would be obvious what the outcome was likely to be. You could also think about diaschisis – where a small region is injured but this compromises its connectivity to a whole bunch of other regions. So it became clear that what mattered, especially with regard to development, was the integrity of the functional brain, beyond structure: what the brain does with what it has as it is specializing. fMRI was just emerging and I was fascinated with the possibility of using this approach to look at development, a case I made in my dissertation. However, I was discouraged from using it in pediatric populations, and I thought, ‘oh yeah, watch me’.
I did my postdoc in the Psychiatry department with John Sweeney where he saw that my developmental expertise could help us probe the prevalent theories that mental illness emerged in adolescence and neurobiological maturation could hold important clues. This was a perfect fit between my interest in developmental plasticity and brain functional integrity. From then on I just didn’t stop.
SV: And the field of cognitive neuroscience is thankful for that! What are some key questions that you think are going to be the big drivers in developmental cognitive neuroscience in the coming years and how is your research contributing to these questions?
BL: We are developing as a field ourselves. There have been great advances in mapping the regions and networks that show changes with age and their links to specific components of cognition. One of the areas where I have been a spokesperson is the need to now understand the neural mechanisms underlying the developmental changes that we see with neuroimaging. Seeing pretty brain pictures is no longer enough. We need to speak to our colleagues who use animal models and those who are doing post-mortem work to build comprehensive models of development. This is how we can take the next step and make our work translational.
For example, we have an amazing project, which I’ll be talking about in my keynote, where we use a molecular MR machine that acquires MR and PET simultaneously. It’s very hypothesis driven and it considers different lines of evidence that suggest hyper-processing of the dopamine system in adolescence. Intuitively this makes sense but it’s very complex. I’ve literally stood up in front of big audiences and said “Hi, I’m Bea Luna. I’m a developmental cognitive neuroscientist and I have overgeneralized the concept of dopamine function.” It’s a very complex system with pre- and post-synaptic processes and multiple types of receptors, all of which could have their own developmental trajectory.
This is one of the ways that we are trying to understand mechanisms. I’ll also be talking about tissue iron as a proxy of dopamine. It’s difficult (though not impossible) to get approval to test pediatric populations with PET through the IRB, so we’re finding this proxy for dopamine processing with MR-derived tissue iron. We are finding striking associations between tissue iron and PET markers of dopamine processing, and seeing how these change with age. You can see that talk for the punchline! I’ll also be mentioning a future direction with a new project that’s just started, where we’re using spectroscopy at 7T with very complex acquisitions to look at changes in GABA and glutamate alongside the tissue iron proxy for dopamine. These three neurotransmitters are essential to understanding plasticity. Animal models show molecular evidence for critical period opening and closing through puberty in association cortices, which motivates this new approach to understand critical period (vs. sensitive period) plasticity in association cortices in humans in vivo during adolescence. I find this tremendously exciting!
I come up with these crazy questions that compel me to probe them regardless of the complex technology they may require. Was I a PET expert? No. But I really want to understand what’s going on with dopamine. So I go to my colleagues and say ‘elucidate me, tell me how to use PET to answer my questions.’ It’s the same with MRS: ‘tell me how I can use this technology in the best way possible to answer my mechanistic questions’. I say, ‘come play with us’, and let’s use this fancy approach to answer some very cool questions. This is how we end up collaborating with people in other departments at Pitt and with other institutions such as MGH, and start discussions with Columbia and Stony Brook to use new approaches. We also have some even newer projects looking at single-cell work in monkeys to further understand the actual neural basis of cognitive development.
SV: What do you think are the novel technological advances that will assist in uncovering brain maturation?
BL: Certainly PET/MR and MRSI have reached a level that can be applied to developmental questions. But there are tremendous advances in the analysis arena, including computational modelling, machine learning, and advanced statistics, that can push the envelope as to what we can answer. We have been bringing those to the fore to make sense of these molecular mechanisms, but also advancing what we can do with longitudinal neuroimaging data and resting state fMRI, and how to become informed by reinforcement learning. Resting state in particular is critically advancing in how we can control for head motion, a huge problem in developmental studies. Diffusion-weighted imaging has now advanced to a level that allows us to apply not just tensor models, but orientation approaches that afford us greater insight into the maturation of white matter connectivity, making inferences beyond just myelination. For example, we know that glia dynamically influence myelination. Let’s not forget about the importance of behavioral assessments, which, at the end of the day, we need to merge with our brain data to assess its relevance. Finally, Big Data approaches have had a huge impact on the power we now have to answer questions. We share our data as well as use others’ Big Data to replicate our findings. In sum, multimodal approaches to inform mechanisms, advanced analyses, and big data are where the field is showing great advancement.
SV: In today’s discussion on diversity and inclusion, what are your thoughts and how do you address these issues in your lab?
BL: First, I’ll tell you what my soapbox is. I was on the advisory council for the director of the NIH – this was in 2012. They’d just started to really speak about diversity. I was different to other minorities on the council. Understandably, they wanted to see enhancement and so on, but I thought ‘no, I don’t need your help because I’m a woman or I’m Hispanic. When I give a talk at SFN or OHBM I don’t want people to think “oh my god, look at the woman talking – and she’s Hispanic too.”’ I said, you know what, Francis, white men have done a great job, and keep advancing things. That’s great, and we’re grateful – but I’m in a different place and am bringing insights that will never come to that closed club. So, actually, “you’re welcome!” I’ve been on a lot of high-level committees where I do feel in the minority, as a woman. I don’t blame the men; I have a husband, I have a son. I don’t think they mean ill – but I do notice that I have to prove myself, and it can take a while before I am finally listened to and my ideas are deemed critically helpful. White men are accepted much more quickly. Again, I don’t think this is intentional at all, and I get great satisfaction when I have won them over. I also don’t mind doing the extra work; it keeps me on my toes in science and keeps me humble. It is what it is, but I do see change. I see that men are really aware and trying to do better. Then again, my “dopamine” personality may be playing a role beyond typical diversity, ha ha.
SV: Your trainees speak very highly of you and the work environment that you have created in your lab. Is this purposeful? What are the things that you personally make an effort on to make things better and easier for your trainees?
BL: First of all, awesome! I am careful about selecting smart people that will fit into our dynamic cool atmosphere. Some may think “party lab” but in fact we are more about working hard and pushing each other in a respectful but humorous manner. Everyone in my lab loves what they’re doing and I don’t have to look over their shoulders. They’re self-driven, and that really works. I give them their space and I’ve been really lucky. Every grad student I’ve had has been amazing.
At the top of my list of favorite things is the one-on-one with my mentees. I talk to these young, super-brilliant people. They may not be familiar with these big questions that I have. So I have a discussion and there’s some theoretical aspects that take a while to understand – but I tell them not to worry. I’m planting seeds in their brain that will later grow. I tell them that they will deliberate and they will come up with the logical next step, which I am not interested in. Instead, I want to hear the other idea. I want to hear the one they’re embarrassed to tell me because it’s so outrageous, the one that makes you giggle. For me that’s great, because it’s usually in the context of bigger questions and leads to a bigger step forward. For them, it turns out that it grabs their passion, and then it’s their thing. They’re not following tightly in my footsteps. We go back-and-forth and think outside of the box with no limits. As a philosophy double major I loved to just boldly think beyond the obvious and now I can do this within my scientific questions with my brilliant students, who are so frikkin smart. But there’s a lot of laughter – sometimes too much!
There are a lot of bonds made in the lab – we’ve even had marriages. They are all discussing new methods and approaches and I love hearing how they all interact and help one another just for the joy of collaborating. I am careful not to bring in difficult people so as not to mess with our cool vibe.
SV: You are also the president of Flux, in addition to being an actively involved academic stalwart. How do you manage all this in the same 24 hours everyone gets?
BL: Great question – I want to deliver an important message here. I think I can work intensely for a concentrated amount of time, but I try not to get into the office before 9, as I like my thinking space in the early morning before I go to the lab. Granted, I’ll typically work until 7 or 7.30, but on the weekends I really try to do minimal work. I think that’s super important. You need that space, doing other things, seeing your friends. Some of my friends have no idea what neuroscience is – a lot of them are artists – and I think that gives you space to make the connections. If you’re always looking at the trees you sometimes struggle to see the forest, the bigger questions.
There are times when there are lots of deadlines, and it can be stressful. I’m in a medical school, so you write grants; that’s what you do. Now I have an endowed chair so things are a bit easier, but I always maintain a couple of R01s plus other collaborations and foundation grants. It used to be very stressful to know that you had to get a grant or sink. But for the last two grants, I thought, enough: I’m going to love the grant-writing process. I now embrace and laugh at the innovative directions that I am willing to go in. My first grants would take a couple of rounds to get funded, with comments that the ideas were awesome, but how could this be done – prove it! And we would! Now I have a track record and they get through more smoothly (knock on wood). I definitely never want to get a critique that everything is fine but “yawn”. When I have to write grants I get everyone involved, and it’s a party, with discussions going on everywhere and every whiteboard filled with ideas, and it’s intense and fun and we laugh a lot. I get home mentally exhausted but satisfied, and my husband has a martini ready and feeds me. I don’t engage those who will be leaving the lab soon, though, since they will not be around – and I feel that they feel left out, because they want to play too.
By the way, Flux is not an acronym, although everyone always capitalizes it. I made a great effort to not have an acronym. I chose the word Flux to always remind developmental cognitive neuroscientists that we are studying a dynamic process and we need to capture what is in flux. I am so proud of the Society and the amazing conference that Brad Schlaggar, Silvia Bunge, Bruce McCandliss and I made. We dreamed this up at a conference when we were all postdocs, drinking at a pier late at night, toasting to how one day we would bring developmental cognitive neuroscience into its own! By the way, that was another of those moments when people were saying ‘no way, that is too hard, don’t do it’ – I didn’t even wince. Now, as developmental cognitive neuroscientists, we have a home, and we bring people in to help unite us in advancing the field, from David Van Essen, Steve Petersen, Russ Poldrack, Michael Posner, John Gabrieli and on and on. We have intense days of hardcore science and then… party hard, ending with crazy – and I am not kidding, crazy – karaoke, with people sweating and dancing, from students to bigwigs. The idea being to break the science class system, for students to see that we are all in it together, and for them to start forming bonds with each other that will end up in collaborations for great science.
SV: OHBM is largely made up of trainees at many different levels. They look up to people like you and would like your advice. What other advice would you like communicated to our large trainee audience?
BL: Whenever I find myself providing advice for people – and I see this a lot where people have to make choices, such as what lab to go to, what job, what country etc - some things that I think nobody ever tells them is to consider the whole package, to not put their personal life at the end of the list. For example, leaving the person they love to pursue what they think is the only path to career success. If you’re passionate about your science, you can make it work in many places. I personally chose to be where my husband and I both loved. That said, pursue the questions that you are passionate about, don’t compromise on that because this work is hard and can be stressful. But if you’re pursuing the questions that you are passionate about then you can deal with the annoyances. If you’re just going to do things because they’re convenient or they’re going to look good, then it’s not going to make you happy. For me the secret of success is to do what you really want to do, not what you think you should be doing.
Speaking of moving for work, I’m presently looking for postdocs, grad students, RAs to join the lab. If you’re into innovative thinking, great questions, and working with an awesome group doing new approaches then contact me, we’re actively looking for people. Feel free to contact me or my lab manager for further details!
SV: Thanks again for taking the time to share these wonderful pearls of wisdom with our audience! We look forward to your keynote next month!
The interview finished and as a trainee struggling to understand my personal growth within academia, I came away with a fresh look and a new mentor I felt I could seek out for advice! It also made me more confident that academics like Bea are what we are all fighting to be!
Martijn van den Heuvel heads the Dutch Connectome Lab, part of the Complex Traits Genetics Lab at the VU University in Amsterdam. The goal of his research is to understand the association between brain complexity and brain function in health and disease. We had the pleasure to interview Martijn and find out more about his career, and also get a sneak preview of his keynote lecture at OHBM 2018 in Singapore.
Tommy Boshkovski (TB): Can you tell us a bit about your background and your lab?
Martijn van den Heuvel (MvdH): I finished my undergrad and master’s in artificial intelligence, and my Ph.D. in medical science, at the University of Utrecht in the Netherlands. After finishing my Ph.D. I got a faculty position, and then my team and I recently moved to Amsterdam, to the Center for Neurogenomics and Cognitive Research. There I built my lab on connectomics; we are quite a group of connectome enthusiasts, some of them Ph.D. students and some of them now postdocs. The group is really multidisciplinary. We have a biologist, a physicist, mathematicians, psychologists, and engineers, and that's really my approach. I always try to make the group as interdisciplinary as possible, preferably with people who can do something other than what we can do.
TB: What motivated you to start a career in Connectomics?
MvdH: I’m not sure if anyone has an intrinsic motivation to start a career in connectomics. I actually started to apply network science to brain imaging by accident. I have a background in artificial intelligence, and I was trained in machine learning and neural networks, so to me it looked like a very obvious thing to do. I started to play around with small-world network analysis, and then I got in contact with the big people in the field, like Kees Stam and Olaf Sporns, and basically that is how I rolled into the field.
TB: You are one of the keynote speakers at the upcoming OHBM annual meeting. Could you tell us how you got started with OHBM?
MvdH: I think OHBM was the first meeting I ever attended, back when I was a master’s student. I was honored that my professor let me go (I think the meeting was in Budapest that year). It was awesome to be around all those people who were working on similar things. But I always had to choose between OHBM, ISMRM and SFN, and later in my career I had the luxury of sometimes going to all three, and I really like that mix. The nice thing about OHBM is that it is very application-driven, so there are many people using the tools, but there is also (in more recent years) a growing interest in developing new tools in the field of network science and connectomics. I am part of a joint effort with Andrew Zalesky and Alex Fornito organizing an (almost) annual educational symposium on graph theory and network science at the meeting, which is great.
TB: What can we expect from your plenary lecture?
MvdH: I’m going to tell quite a bit about the connectome. First, I’m looking forward to providing a bird’s-eye view of the connectome field. I want to spend some time on “why are we doing this?” – so that’s the first part. Ideally, I want to put this into the framework of “if the brain is wired the way it is, does this also confer certain vulnerabilities to brain disorders?” One of the ways that we are looking into this is by means of comparative connectomics. We have become more and more interested in looking at connectome features shared across species. If these features have evolved through evolution and are present in many different species, then they might form the very core of the brain. I think it is really important to understand these ground rules of the brain, because only then can we start understanding how changes in the ground rules may lead to various different types of brain disorders. This will make up the second part of my talk, where I won’t focus on just one disorder but, rather, talk about a wide range of brain disorders. Most studies are single-disorder studies (i.e. we found property X or property Y to be involved in schizophrenia), but I think the field is ready now to start looking into more deep-rooted questions. I’m asking which of these effects are specific, and which of them are common across brain disorders. It is really important to understand the multidimensional aspects and multi-disease effects of connectomics.
TB: Which recent developments in the field excite you the most?
MvdH: There are quite a number of them, but what I really like is that the field is moving a little bit away from just studying the connectome and is starting to combine it with other types of measurements. The connectome field is 10-15 years old, but it’s pretty nice to see that people already take it for granted. I am particularly excited by, for example, taking gene expression data and combining it with connectome data, or taking cell biology data, like cytoarchitectonics, and incorporating it into the connectome model. I think these developments are pretty cool to see from a multidimensional neuroscience perspective. What I also find exciting is that there is a lot of effort going into improving the connectome field. Recent statistics papers show that it might be relatively easy to make a network, but that studying it in a very meticulous way, with good case-control studies, might actually be more difficult than we thought. So there have been great improvements in the last couple of years in new statistical methods that really adopt the network perspective. Finally, there are so many efforts going into mapping connectomes of a wide range of species across different resolutions. A big part of the connectomics community is outside the MRI community, and I really hope that these communities will start to mix.
TB: Where do you see connectomics in the next 5-10 years?
MvdH: Connectomics is a young field, so looking 5 or 10 years into the future – or even 5 or 10 days – is difficult. I do think that there will be more room for combining different types of data in the connectome model. We’re going to see more and more papers that combine EEG with fMRI, or diffusion with functional measurements, or even beyond that, such as combining genetics with imaging, because the connectome alone is not going to give us the final answer. I hope that we are going to start using in-vivo connectomics in more clinical applications. When we start to merge the machine learning and big data fields with in-vivo neuroimaging connectomics, there is room for very exciting discoveries: for example, some sort of personalized connectomics, where we could use connectome imaging to make predictions about disease outcome, medication response, etc., and I think that could be feasible. It is going to be a rough road, and I’m not sure whether we are going to do this in 5 or 10 years, but I think it’s possible to use a clinical connectome approach at an individual level – something like precision connectomics.
TB: What are your other interests besides connectomics?
MvdH: I’m Dutch, so obviously I cycle, just like 17 million of my fellow countrymen. I’m also quite interested in tech, like new gadgets or new developments in the machine learning and big data fields. Besides that, I’m a dad. I have two kids, who are of course the two nicest connectomes out there.
Professor Edward Bullmore has had careers in the clinic, academia and industry. He is the head of the Department of Psychiatry at the University of Cambridge, the director of the Wolfson Brain Imaging Centre and the head of a neuroimmunology research group at GlaxoSmithKline. His academic interests range from the clinical to the mathematical. He is perhaps best known for his work on analysing brain networks using the framework of graph theory, while his current interest, described in his latest book, “The Inflamed Mind”, is neuroimmunology.
The understanding of psychiatric disorders is the thread that connects all of Professor Bullmore’s diverse interests. The following interview probes his past experiences and asks his advice for budding young scientists attending the OHBM 2018 annual meeting in Singapore.
Claude Bajada (CB): As a clinician, I found that the approach to thinking taught during a clinical course is very different to what is expected from a researcher. What are your thoughts on the differences between clinical medicine and medical research? And what enticed you to make the shift from clinical medicine to academia?
Ed Bullmore (EB): I agree the mindsets of a clinician and a biomedical scientist are often somewhat different. As a clinician you’re taught to convey a sense of calm certainty, or to reduce a complex situation to a much simpler diagnostic formulation or treatment recommendation. And at least when I was at medical school, in the 1980s, questioning the scientific basis for clinical wisdom was not always welcomed by senior physicians or surgeons! A scientific training, by contrast, is an education in learning to doubt or challenge everything, especially your own most treasured hypothesis or most precious results. There is certainly a tension between the reassuring bedside manner of a clinician and the oceanic scepticism of a scientist. Another very important difference between the two cultures is the status of numbers. Medicine was almost entirely non-quantitative when I was going through medical school; whereas in neuroscience and neuroimaging, mathematics is increasingly central. I think medical schools, at least in the UK, still need to do more to make doctors more mathematically competent and confident - and to provide proper career paths for non-medical scientists bringing their expertise from physics, maths and engineering into contact with the number-crunching challenges of modern biomedicine.
I switched to scientific training halfway through my clinical training in psychiatry. I was motivated by the idea that psychiatry was still at a relatively early stage of scientific development compared to other areas of medicine and I couldn't imagine being satisfied with a career solely dedicated to clinical practice in an area which I thought was very likely to see radical change. It wasn't a difficult decision for me in principle. But if I had been clinically specialised as a cardiologist, or some other area where the science base was already more evolved, it might have been a more debatable move. For people specialising in surgery or radiology, the number of training hours that must be dedicated to learning operational procedures is much greater than in psychiatry, and the financial rewards for focusing exclusively on clinical practice can be much greater than in psychiatry, so the decision to spend 3-4 years on a PhD is a much tougher choice. For any research-minded young doctors who might be reading this, I can say psychiatry is a highly recommended career move!
CB: When I hear the name Ed Bullmore, my semantic association goes: “Bullmore, Sporns, Graph Theory.” Were you always interested in the mathematical aspects of research? What first got you interested in Network Analysis? And how did your, now famous, collaboration with Olaf Sporns begin?
EB: My first research enthusiasm (aged 30) was fractal geometry, which I found intuitively very appealing as a way of quantifying the complexity of biological structures and processes, like MRI scans and EEG signals. However, my old-school medical education had left me completely unequipped with any quantitative skills. I was fortunate to find an excellent mentor, Prof Michael Brammer, at the Institute of Psychiatry in London, and applied to the Wellcome Trust for funding to do a PhD. I was interviewed at the Trust by Sir Stanley Peart, in 1992, who listened politely to what I had to say about fractals and then told me “of course what you’ll really find yourself working on is brain connectivity, isn’t it?”. I agreed with him immediately although that thought had not previously crossed my mind.
Karl Friston’s pioneering work on brain connectivity was very influential, and I also learnt a lot from Barry Horwitz, and through them I began to hear about Olaf Sporns. I admired the paper on complexity he wrote with Giulio Tononi and Gerald Edelman (PNAS 1994) but I didn't meet Olaf until we both attended the second Brain Connectivity Workshop, organized by Rolf Kotter in Dusseldorf in 2002. I liked his talk, about using graph theory to simulate computational networks that maximised the neural complexity measure from the PNAS paper; and several others at that meeting also opened my mind to the new physics of complex networks that was following from the seminal “small world” paper by Watts and Strogatz (Nature 1998) and the “scale-free” paper on network hubs by Barabasi (Science 1999). A few years later, in 2005, I met Olaf again at the Brain Connectivity Workshop in Boca Raton, where I presented some of the first results of using graph theory to measure topological complexity of human brain networks from resting state fMRI. That’s when we started talking more seriously about collaboration, which led to our first co-authored paper, a review of complex brain networks (Nature Reviews Neuroscience 2009) that has since been cited more than 5000 times.
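The small-world property Bullmore describes – high clustering combined with short path lengths, as formalised by Watts and Strogatz – can be illustrated with a toy experiment. The sketch below is not the analysis pipeline from any of the papers mentioned; it is a minimal, standard-library-only illustration (all function names are my own) of building a ring lattice, rewiring a fraction of its edges, and measuring clustering and mean path length:

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Each node connects to its k nearest neighbours on a ring (k even)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, seed=0):
    """Watts-Strogatz step: move each edge to a random target with prob p."""
    rng = random.Random(seed)
    n = len(adj)
    for i in range(n):
        for j in sorted(adj[i]):          # snapshot; adj[i] mutates below
            if j > i and rng.random() < p:
                choices = [t for t in range(n) if t != i and t not in adj[i]]
                if choices:
                    new = rng.choice(choices)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def clustering(adj):
    """Mean fraction of a node's neighbour pairs that are themselves linked."""
    total = 0.0
    for i, nbrs in adj.items():
        nbrs = list(nbrs)
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for a in range(d) for b in range(a + 1, d)
                    if nbrs[b] in adj[nbrs[a]])
        total += 2.0 * links / (d * (d - 1))
    return total / len(adj)

def mean_path_length(adj):
    """Average shortest path length, via breadth-first search from every node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

lattice = ring_lattice(200, 8)
sw = rewire(ring_lattice(200, 8), p=0.1)
print(clustering(lattice), mean_path_length(lattice))
print(clustering(sw), mean_path_length(sw))
```

With a rewiring probability of 0.1, the graph keeps most of the lattice’s high clustering while its average path length collapses – the small-world signature discussed in the interview.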
CB: Your recent publications reveal a broad interest in psychiatric research, focusing on everything from developing methods to questions about the effects of drugs on the brain and much more. What are you working on at the moment? And what would you say is your current main interest?
EB: I am still working on brain network analysis or connectomics but since 2013 I have also become increasingly interested in the relationship between the mind, the brain and the immune system. The reason for this shift of focus goes back to my starting point as a psychiatrist. After 20 years of research, I couldn't help noticing that, although the field of neuroimaging and brain connectivity had grown tremendously, its real-life impact on mental healthcare was zero. By then I was edging into my mid-50s and I felt impatient to do something that might actually make a positive difference to the experience of people with depression and other mental health disorders in my lifetime. For various reasons, the strategy that appealed to me most was to pursue the idea that inflammatory responses of the immune system could cause depressive symptoms and, therefore, that anti-inflammatory interventions could provide a new therapeutic approach to depression.
I am not sure how many OHBM members will be acquainted with the immune system; I’m guessing not many. I knew a bit about it from my medical training in the 1980s but I was utterly dazzled when I took another look at immunology in 2013. We think neuroscience and neuroimaging have moved fast in the last 20 years, and they have, but scientific progress in immunology has been at least as rapid and its therapeutic impact has been much greater. The area I am working on is the interface between immunology, neuroscience and psychiatry – it’s called neuro-immunology or immuno-psychiatry and it’s growing rapidly. There are important questions for neuroimaging in this area: for example, how can we use MRI or PET to measure brain inflammation, especially microglial activation, more specifically and sensitively?
I have just published a book – called “The Inflamed Mind” – which summarises some of the background science for a general audience – and there are some short movies on YouTube which introduce the book in a brief and accessible way (here and here).
CB: What would you say to students, particularly medical students, who would like to start their research career? In particular, what would you say to them if they are interested in technical subjects but feel that they “come from the wrong background”?
EB: I always encourage students to recognise and pursue the interest that motivates them most deeply, almost regardless of any other consideration, because completing a PhD is a challenging process and you need to be highly motivated by your project if you’re going to get through it successfully. For medical students, there are some additional considerations, at least in the UK. There are basically two windows – you can do a PhD intercalated with your medical school training, so you graduate as MB/PhD. This works well if you are someone who knows what they want to do in research early on and if you have excellent time management skills. The other window is after completion of core specialist training – usually in general medicine or psychiatry – when there is an opportunity to take time out of the clinical training process to do a PhD, typically funded by a fellowship award from the Wellcome Trust or MRC. That is the route I took because I wasn't clear what kind of research I wanted to do until I was in my early thirties and had started specialist training in psychiatry. For UK medical students and recently qualified doctors, it is highly advantageous to get appointed to Academic Foundation Year (AFY) or Academic Clinical Fellow (ACF) posts because this will allow you to complete medical training and also spend a useful amount of time developing research interests and preparing a competitive application for a PhD training fellowship.
I think medical students with an interest in technical matters, like coding or statistics, should be encouraged. The world of biomedical science will increasingly need people who are both well-informed about the background biological and medical sciences and have the technical skills to handle big, complex datasets. So getting trained in both technical and biomedical skills can prepare you for an exciting career as a relatively rare and highly employable person! However, it is tough to learn technical skills from a low base and at the same time as keeping up with clinical training requirements. I think it is important to have an excellent mentor and also, in my opinion, to focus your technical learning priorities on solving the scientific problems that you are most motivated to address. A masters course in bioinformatics or image analysis could be a useful training step for some people but personally I found it easiest and most rewarding to learn technical skills when I could see immediately how they would help me answer the specific research questions I was interested in at the time.
CB: You also work in industry. Do you see that as another career change, or was the move to industry a natural progression? What are the differences between working in academia and working in industry?
EB: I started working half-time for GlaxoSmithKline in 2005. My original motivation was that I wanted to contribute to development of new treatments for mental health disorders and, much though I love(d) connectomics, I couldn't see that neuroimaging research in an academic setting was likely to have much impact on mental health practice in real-life.
I have really enjoyed the experience, for the last 12+ years, of working in two organizations with two rather different cultures. I have found it stimulating, refreshing, and I have learnt a lot that I would not have learnt if I had followed the more conventional path of staying fully embedded in academia.
Industry has offered me the chance to think and work broadly, across a wide spectrum of medicine and therapeutics, whereas the life of an academic tends to become progressively narrower and deeper in focus. Industry culture is also strong in terms of team-working and strategic planning, and the standards of statistical analysis and data management are high. In contrast, a tenured academic enjoys an extraordinary degree of intellectual freedom and the opportunity to work with highly talented younger people, as students or early career researchers. There are pros and cons to both organizational cultures. I would encourage people to keep an open mind about any opportunities that might arise to work in the private sector. It can be very exciting and, at least in the UK, there are increasing efforts to make it easy for people to move back-and-forth between industry and academia over the course of a career.
CB: OHBM 2018 will be held in Singapore in June, and is likely to be the first conference experience for many PhD and MD students. Such large events can sometimes be overwhelming. Can you remember your first big conference? And what advice would you give to newcomers?
EB: One of my first big conferences was actually the first OHBM meeting in Paris in 1995. I thought it was electrifying to be in the same room as many people whose papers I had been reading for years but had never seen or met before. However, OHBM has got bigger since then and the scale can be intimidating. I would encourage newcomers to attend the educational program before the main meeting starts. The OHBM educational program has gone from strength to strength and is one of the best possible places to pick up on the state of the art in neuroimaging methods. It is also a friendly atmosphere and a great opportunity to ask questions, introduce yourself to speakers, and connect with others who share your interests and are at a similar level of training. Once the main program starts, I would be sure to attend any smaller, early morning symposia that are focused on topics of personal interest. I would look through the poster schedule and make a point of visiting posters presented by people whose work you admire or you’d like to get to know. I would enjoy the social program for its own sake and also as another opportunity to get talking to the people you want to meet. Wear your name badge and consider using a business card so other people can easily remember your name. The key thing is to meet people and not to spend all your time sitting in the main hall passively listening to talks, or back at your hotel reading the abstracts! I am naturally quite shy so I don't find this particularly easy advice to follow myself. But I have discovered that if you have the courage to step up to someone, with a smile and a handshake, and say something like: “Dr X, I just wanted to introduce myself because I really liked your paper/talk/poster on Y…” then almost always you will find that Dr X is very open to starting a conversation.
CB: Finally, please be honest, are you reviewer 1 or 2?
EB: I hope I am not too often the legendary third reviewer who has a problem with the paper that nobody else recognises but can nevertheless be awkward enough to knock a good paper out of contention! My only advice for dealing with peer review is to remind yourself that it almost always improves the ultimate quality of the work to go through peer review, however uncomfortable it may be at the time, and it is an integral part of the scientific process to do so. I think you will generally have an easier ride if you respect the position your reviewer is coming from and try to deal with their points as constructively and clearly as possible. I usually recommend making changes to the text or supplementary material rather than writing long tracts in the rebuttal letter that do not change the paper itself. And take opportunities to be a reviewer yourself so you learn what kind of issues you should try to pre-empt when writing your own papers or responding to peer review.
By Jean Chen
Dr. Wilder Penfield once said that “the brain holds within its humming mechanism secrets that will determine the future of the human race.” And yet, most of us would agree that the brain remains the least understood organ. How do we start to understand how the brain works? The approach of Prof Gustavo Deco, one of our OHBM2018 keynote speakers, is to try to build one.
In 2001, Gustavo was awarded the Siemens "Inventor of the Year" prize for his contributions to statistical learning, models of visual perception, and fMRI-based diagnosis of neuropsychiatric diseases. He has published 4 books, more than 258 journal publications and 34 book chapters. He has also filed 52 patents in Europe, the USA, Canada and Japan. He was awarded an Advanced ERC grant in 2012 and he is a member of the Human Brain Project (EU flagship).
Jean Chen (JC): As far as I know, you completed your PhD in atomic physics. How did you enter the field of neuroscience? How did these two fields come together for you?
Gustavo Deco (GD): When I got my first PhD, in Physics, in 1987, I thought that I would dedicate my research career to this field. However, after a postdoc at the University of Bordeaux in France and a two-year (1988 to 1990) Alexander von Humboldt Foundation postdoc at the University of Giessen in Germany, I found my focus shifting. I was absolutely fascinated by neuroscience and neuropsychology and decided to change direction. Very broadly, I was drawn to these fields by the simple question of how the brain works: I wanted to understand how the brain processes information. I was, and am, convinced that a good training in physics, especially in theoretical physics, is a real advantage for investigating the brain. For example, in my research I have used many tools from physics, such as statistical physics, nonlinear dynamics, etc. I went to Munich and began working for Siemens in their research center. It was there that I started my career in neuroscience. At Siemens, I created one of the first Computational Neuroscience groups in Germany. In 1997, I received a PhD in Computer Science from the Technical University of Munich (Dr. rer. nat. habil.). In 2001, I received a PhD in Psychology (Dr. phil.) from Ludwig-Maximilian-University of Munich.
JC: Your interests are broad, and you have made important contributions to computational neuroscience, neuropsychology and psycholinguistics, to name a few. How would you describe the importance of mathematics to neuroscience and psychology research, in the present and future?
GD: We cannot build models of the brain without math. We cannot model cognitive processes without math. To sum it up, I'm absolutely convinced of the necessity of mathematics for expressing, in a quantitative and systematic way, the laws that regulate the functioning of the brain. The main reason, or intuition, is that we are dealing with a huge, complex, nonlinearly coupled and stochastic system, involving billions of neurons and synapses. It is impossible to intuitively "speak" or "describe" such systems (even a simple system of two feedback-coupled neurons is difficult!), but we can understand and study them by explicitly writing down and investigating the equations – the math – describing the brain. If we renounce that, we do only phenomenology… and we know what we can expect from that… nothing.
JC: I also understand that your most cited research focuses on computational modeling of spontaneous neural activity, the foundation of resting-state networks, and this work is incorporated into the Virtual Brain Project. What is the Virtual Brain Project, and how did it get started?
GD: Yes, I have been very active in modeling the whole brain (now not only spontaneous activity but also tasks and different brain states, like sleep and anesthesia). Implementing those models in a public, easy-to-use platform is fundamental for making them available to the community, and especially to interested researchers without a strong computational background (e.g. clinical researchers). The Virtual Brain Project was a fabulous initiative that started thanks to the McDonnell Foundation and a team drawn from many enthusiastic labs. The initiative is led by Randy McIntosh (Toronto), with strong contributions from the labs of Giulio Tononi, Michael Breakspear, Olaf Sporns, Viktor Jirsa, my lab and many others.
JC: What is the next step or the main challenge in improving the ability of your computational models to predict biology and behaviour in brain diseases?
GD: Neuroscience, especially computational neuroscience, is a new field, and now is the most exciting time for it. There is everything to discover! We have many of the elements required to create the first theories of computational neuroscience. I'm very interested in whole-brain dynamics and modeling. Neuroimaging has opened an unprecedented window on human brain activity, raising great expectations for novel mechanistic insights into brain function in health and disease. Unfortunately, the largely correlational findings have not yet delivered the anticipated outcomes. In contrast, a computational framework allows causal manipulation of models of multimodal neuroimaging data, opening the way to the characterisation of biomarkers of disease subgroups and a better understanding of the underlying mechanisms. Furthermore, adding a coupled neuromodulator system using receptor binding data will pave the way for novel methods of rational drug discovery in silico.
I think the next challenge is to go from correlational neuroimaging studies to what we call, together with Morten Kringelbach (Oxford), causal neuroimaging. So in my view, the challenges are: 1) to develop and refine our novel framework for Causal Whole-brain Neuroimaging Modelling, using sophisticated whole-brain dynamical models of multimodal neuroimaging data that can be manipulated off-line in silico to accurately describe the causal mechanisms underlying human brain activity; 2) to apply the framework to the diagnosis of neuropsychiatric diseases, and to the design and monitoring of therapies. In particular, one can use the model to exhaustively stimulate a realistic, subject-specific, fitted whole-brain model in order to detect which type and locus of stimulation is most effective in re-establishing healthy dynamics of the whole brain.
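Deco’s actual models couple regional neural dynamics through empirically measured structural connectomes. As a purely illustrative stand-in for the idea of fitting a dynamical model and then manipulating a parameter off-line in silico, here is a minimal globally coupled Kuramoto model (the function name, parameter values and seeds are invented for this sketch); sweeping the global coupling moves the system between incoherent and synchronised regimes:

```python
import math
import random

def simulate(coupling, n=20, steps=2000, dt=0.01, seed=1):
    """Euler-integrate a globally coupled Kuramoto model and return the
    time-averaged order parameter r (0 = incoherent, 1 = synchronised)."""
    rng = random.Random(seed)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]  # initial phases
    omega = [rng.gauss(1.0, 0.1) for _ in range(n)]          # natural frequencies
    r_sum, samples = 0.0, 0
    for step in range(steps):
        # Mean-field coupling: each node is pulled toward the population phase.
        cx = sum(math.cos(t) for t in theta) / n
        sx = sum(math.sin(t) for t in theta) / n
        r, psi = math.hypot(cx, sx), math.atan2(sx, cx)
        theta = [t + dt * (w + coupling * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
        if step > steps // 2:  # average r after discarding the transient
            r_sum += r
            samples += 1
    return r_sum / samples

# "Manipulating the model in silico": sweep the coupling parameter and watch
# the system cross from an incoherent to a synchronised regime.
weak = simulate(coupling=0.05)
strong = simulate(coupling=2.0)
print(weak, strong)
```

In the same spirit, a whole-brain model fitted to a patient’s data can be perturbed off-line – changing coupling, adding simulated stimulation at one region – to ask which manipulation best restores a target dynamical regime, without touching the patient.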
JC: What are the main projects that your lab is focusing on currently?
GD: The main projects we are working on are the Human Brain Project, many other EU team projects, and a large project from Germany together with the Max Planck Institute in Leipzig (collaborator: Angela Friederici), among many others… The main challenge I see is extending whole-brain models beyond just the resting state, as I described above.
JC: Can you provide a few pieces of advice for junior scientists in our field?
GD: As I said before, our field is a relatively new field, and now is, in my view, the most exciting time for the field. Junior scientists should study what they want. Don't be influenced by anyone. They should really investigate what motivates them. At this stage in their career, when they are learning how to be good scientists, it is an exciting time and they should take full advantage of it and study what really interests them.
But I'm convinced, and so I tell my students, that the 21st century is the century of Neuroscience and Genetics (but especially the former). I left physics and Quantum Mechanics. Although those fields were extremely interesting, challenging and mathematically sophisticated, all the main elements and basic concepts were already developed at the beginning of the 20th century. I always felt jealous of the scientists that were working during those times … Schrodinger, Pauli, Bohr, amongst others, they developed everything!!! I tell my students that I really felt a kind of “romantic nostalgia” for that time. When I switched to Neuroscience, I felt (and still feel) that we are now reliving those same exciting years. We do not have theories, but we have millions of interesting questions and the experimental technology for accessing the right data… So, our task is incredibly important, namely to develop a theory of the brain… I would recommend all the junior researchers to work on that!
“In order to be a mentor, and an effective one, one must care. You must care. You don’t have to know how many square miles are in Idaho, you don’t need to know what is the chemical makeup of chemistry, or of blood or water. Know what you know and care about the person.” — Maya Angelou
The online mentorship program is an ongoing initiative launched by the OHBM Student and Postdoc Special Interest Group in early 2017. In this international initiative, mentors and mentees from around the globe are matched on the basis of their experience and expectations. The mentor supports the mentee’s growth by providing advice on topics such as - but not limited to - academic development, grant writing, and work-life balance. What is unique about this program is that every member of the OHBM community can be mentored and can also be a mentor. For example, the program has early career principal investigators (PIs) who seek mentoring by more established PIs, as well as senior PhD students who mentor trainees just starting out. As a rule of thumb, the program maintains at least 3 years of “experience difference” between mentors and mentees, with mentor-mentee pairs often being close in career stage. Currently, there are 424 participants in the program. In this blogpost, we compare statistics from two successive rounds (Round 1, 2017 and Round 2, 2018) of the mentorship program: 252 participants signed up in Round 1, and an additional 172 participants signed up in Round 2.
Relative to Round 1, the geographical distribution of brain mappers joining the mentorship program in Round 2 remained largely unchanged, with two notable exceptions: a gain in members from the Middle East, and a drop in new members from South America.
The distribution of participants with respect to career stage was similar in both rounds, with PhD candidates being the most prevalent.
Round 2 saw a decrease in the fraction of mentees who declared an interest in starting a lab, relative to mentees who were either undecided or planning to move to industry. This effect might be associated with the steadily decreasing proportion of faculty positions relative to the number of PhD graduates, a strong trend in academia since the 1980s.
In line with the above observation, Round 2 of the mentorship program saw a drop in mentees’ demand for advice related to starting a lab, and a small increase in demand for advice on transitioning into industry.
Looking at the summary statistics of all participants in Rounds 1 and 2 from the USA and Canada, Europe, Australia and Asia, the outlook on mentorship was similar globally (Figure 6).
In both rounds, mentors declared similar areas of expertise, mostly related to building a research career. This included taking career opportunities, finding postdoc jobs, developing relationships with coworkers and general advice on career development. Only a handful of mentors indicated expertise in coaching mentees on transitioning to industry.
In summary, participants were gender balanced, and while geographically they hailed from around the globe, the vast majority were from North America and Europe. Over 25% of participants in the program were willing to take on a double role (i.e. both as a mentor and a mentee), indicating a willingness to give back to the OHBM community. While the program saw an increase in requests for mentoring on non-academic career options (e.g. transition to industry), this was not matched by an increase in mentoring capacity in these areas. We would thus like to reach out to mentors with experience in industry and entrepreneurship to join the mentoring initiative. Overall, the expectations and competencies declared by participants around the globe were similar, indicating that an online mentorship platform is both necessary and useful for the OHBM community.
Note: In addition to the online mentorship program, the OHBM Student and Postdoc Special Interest Group will be holding its second “Annual Mentoring and Career Development Symposium” at the annual OHBM meeting this year. Hope to see many mentors and mentees at the event on Tuesday, June 19th!
By Nils Muhlert
Professor Leah Somerville is an associate professor of psychology and director of the Affective Neuroscience and Development Lab at Harvard University. She was recently awarded the Early Career Award by the Social & Affective Neuroscience Society. Here we find out more about her academic career path, and what we can expect from her keynote speech at OHBM2018 in Singapore.
Nils Muhlert (NM): First, can you tell us about your career path – how did you get into neuroimaging?
Leah Somerville (LS): I started working on brain imaging research as an undergraduate at the University of Wisconsin. I was working in a couple of different brain imaging labs, right when the first research-dedicated scanners arrived at the university. I was one of the first people to have the opportunity to run experiments on them – along with a team, of course, of other researchers in the labs I was working in.
I got that little thrill moment of seeing a person’s brain image pop up on the screen. Maybe others have had a similar experience. I still have that feeling every once in a while; it hasn’t completely gone away! I find neuroimaging so fascinating and powerful. From there I tried to orient my training towards continuing my brain imaging research, and in particular, fMRI-based research. I’ve studied emotion and anxiety-related processes. I’ve also studied motivation and cognitive control. Now in my lab we’re focused on understanding how those processes change with ongoing brain development through adolescence.
NM: What would you say is so special about adolescence in the context of human development?
LS: There’s a lot I could say here - I’ll try to keep it short! Adolescence is a time of life that, on the surface level, is associated with a number of important challenges that individuals are facing, sometimes for the very first time.
Adolescents are faced with independent choices about how to act, who to affiliate with, and what kinds of goals they would like to hold for themselves. At the same time, there are increasing demands on their self-control. They’re becoming more and more self-guided in the way that they interact with the world. We can sometimes think of them as novice independent people who are still developing the toolkit that can support mature independent actions.
We find that ongoing brain development facilitates a number of great achievements at this time of life. But it also places a number of constraints on the way in which adolescents might optimize their behavior in certain situations. We’re very interested in understanding the interplay in that – thinking about adolescence as a very adaptive and useful time of life but also one that differs from adults in a number of important ways.
One insight that has fascinated me is looking at brain development measures and asking “when does a person become fully mature?” It may seem like an easy question or one that could be measured using a single modality. In fact, the answer you get really differs when it comes to brain structure or function or network properties. It’s especially surprising that on certain measures – including measures of white matter – the developmental changes continue to play out throughout the twenties and perhaps even through the thirties. So one thing that’s interesting, as an extension of that, is thinking about how we decide when a person is mature from a societal standpoint.
NM: In your work you also discuss socioaffective circuitry – how do changes in that circuit map on to the behaviors we see in adolescence? And what have you found out about that over the last decade?
LS: In our lab we tackle this from different angles – so I’ll tell you about one in particular that I’ll be talking about at OHBM.
We’re very interested in the intersection between motivation and cognitive control. That is, the degree to which motivational cues in the environment – potential rewards and punishments for example – can shape the way in which a person is able to optimize their cognitive control in a given context.
We’re interested in the shift across development, in which individuals across the ages can recognize situations that hold different motivational values. They might want to perform better in certain conditions than in others – either to avoid punishment or to obtain rewards. All of the detection and assignment of values seems to be very consistent in early development. But the degree to which we can take that information and use it to guide our goal-directed actions in the moment, seems to be continuing to develop well throughout adolescence.
One arm of our work is in trying to understand how the dynamic interactions in cortico-striatal circuitry (including the dorsal and ventral striatum and lateral prefrontal cortex) coordinate and give rise to motivation-guided cognition. This is something that we’ve seen play out and continue to change and refine well throughout adolescence and into early adulthood. This is one area of work that we’re excited about.
Another area we’re interested in is adolescent attunement to their social environment. This is a time of life that’s associated with dramatic changes in daily life; individuals are forging new independent relationships for the first time and there’s a lot of volatility in adolescent relationships. They are falling out of favour with one another more frequently than adults would be, giving them lots of opportunities to get feedback about how they’re doing socially. Another arm of our work is therefore to understand how adolescents learn from feedback and use positive and negative social feedback as learning cues to inform how they should feel about themselves in a given situation and how they should feel about other people.
We’ve seen in a couple of studies that when adolescents are on the receiving end of negative social feedback they tend to take that as a very strong cue to influence how they feel about themselves. This would result, for example, in a reduction in the momentary feelings of self-worth or self-esteem. Adults actually show a bias in the opposite direction. They have different strategies in place that allow them to offload or buffer themselves from negative feedback and maintain a positive self-concept, even in the face of very opposite social information. We’re really interested in understanding how learning processes – again subserved by striatal-based systems – might be biased towards learning from negative or positive information in the social domain at different points of life.
NM: And how does this system seem to change from early to late teenage years, or even people's early twenties?
LS: Well, we carried out a study of individuals from age 10 to 25, and found that there is a period from early to mid-adolescence, perhaps from 12 to 15, during which negative feedback has a strong negative impact on self-views. Individuals of college age, by contrast, already seem to have a lot of strategies in place to buffer themselves from negative feedback. So this is one time period when a few years of age makes a large difference in terms of how these cues are incorporated into learning about themselves and other people.
NM: Thinking about how social media might tap into this, and perhaps exacerbate the concerns that adolescents have: as social media has become a more integral part of their everyday lives, has this had negative and positive consequences?
LS: Great question and one that I don’t have a scientific answer for but I’m happy to speculate!
This is a very hot issue now – thinking about how developmental stage might shape the influence of these kinds of media processes. It’s only in very recent generations that people have taken up a lot of social interaction online. This is something that has not yet been subjected to empirical study.
There is a lot of speculation that perhaps social media is detrimental to adolescent development. Adolescents themselves are quite happy at having the option to socialize over the phone and over the internet. They say it helps them maintain strong social bonds, it gives them lots of information. They can stay attuned to the goings on of all of their friends more easily.
There is also the potential for social media to have certain negative and perhaps unintended consequences. One suggested by our work is that social media is almost designed to elicit and deliver feedback – getting friended, getting a thumbs-up, or the absence of a like or of a response from somebody. That absence can easily be interpreted as negative feedback.
The way we see it is that there can be very positive interaction on social media but there’s also the potential for a higher frequency of negative feedback, or the absence of positive feedback being interpreted as negative feedback. We’ve shown that negative feedback has a very potent influence on adolescent self-views, so that very high frequency of receiving negative feedback online could have a more detrimental effect during adolescence than other ages.
Developmental scientists have often had concerns about the effects of new technology influencing self-views. When I was a kid this would have come up with video games – suddenly people would have a Nintendo in their house, there was a wave of concern about that. At this point we just don’t know enough to have a definitive evidence-based account about whether social media is a good or bad thing for adolescence.
NM: Turning to your other work, what would you say are the scientific achievements that you’re most proud of during your career?
LS: I’m not sure if I’d call this a scientific achievement but I’m most proud of having had the opportunity to run my own lab.
I never thought I’d be a PI. It has been one of the most challenging and rewarding things I have ever done. I feel proud and gain a lot of reward from it, particularly when I interact with my trainees. They conduct great work, are great people and are becoming great mentors in their own right! It makes science very fun to do in our group. Fostering an atmosphere that makes science fun and exciting and collaborative is something I’m very proud of, and is down to the efforts of my whole lab.
NM: And reflecting the quality of your mentoring, you were awarded the Everett Mendelsohn Excellence in Mentoring Award. When you look back at your own career, which people offered you good advice, and how has that affected how you interact with your own trainees?
LS: I’ve been very fortunate to have had a number of wonderful mentors throughout my training. They’ve helped me bridge gaps into the next steps of my career – giving me advice, and sometimes tough love when I needed it! This includes my graduate mentor and my postdoctoral mentor, BJ Casey. I would point to BJ in particular – she was a big part of me discovering this very strong interest in developmental neuroscience, particularly after trying out a number of different topics of study. That one fit for me in very large part because of her support and mentoring.
It’s important to mention that at first I didn’t realize that every trainee needs something different from a mentor. You need a lot of flexible thinking when you’re mentoring to understand what each person needs at different points in time. This of course evolves at different points of training. They might start by needing more hands-on help and more topically-focussed advising. But watching a person beginning to strive for independence and allowing for independence is something that I work hard to detect and accommodate.
When I became a PI I didn’t realize that I would still benefit from mentoring myself. I still have mentors who guide me and I don’t think anyone is ever quite finished in needing mentoring, advice and guidance. I have a number of colleagues – both peer-age going through similar career stages, as well as more senior mentors – who are still helping to guide me. I am very appreciative of that.
NM: And finally, your OHBM 2018 talk – can you give us a sneak preview? Which gems from your research career have you decided to focus on?
LS: Well I’m very excited about being invited to speak at OHBM and having the chance to go to Singapore. I’ll be talking about two main themes. The first is adolescence as a phase of the lifespan associated with ongoing and dynamic brain development, in particular the development of functional brain connectivity.
I’ll also focus specifically on the interactions between motivation and cognition as a test bed for thinking about how ongoing brain development leads to important shifts in behavior. In doing that, I’ll present some specialized studies conducted in my lab at Harvard, as well as some broader projects that we’re currently working on.
Most notably, we’re one of the groups carrying out the Human Connectome Project in Development – a large-scale, ‘big data’-style project that will ultimately collect brain imaging data on over 1,300 5–21-year-olds. This is an ongoing study that we are about half-way through collecting data for. It’s partly longitudinal and partly cross-sectional, and it’s designed to help us understand both the fundamental patterns of brain connectivity that are changing at the basic neuroscience level, and the implications of those connectivity changes for behaviors including motivated behavior and cognitive control.
So I’ll be discussing how we approach these problems from a broad, big-data standpoint and how this can complement the more specialized work that we’re doing.
NM: We’re definitely looking forward to that – many thanks for taking the time to speak to us and we’re looking forward to your talk in Singapore.
By Elizabeth DuPre and Kirstie Whitaker
This month we continued our Open Science Demo Call series by speaking to Ariel Rokem, Dora Hermes, and Tammy Vanderwal about three initiatives they’re involved with that promote openness in neuroimaging research.
Ariel introduced us to NiPy (short for NeuroImaging in Python), a large community of practice supporting the use of Python for neuroimaging. He explained that NiPy exists within the broader SciPy (Scientific Computing in Python) community, which unites many individuals who use Python in their scientific analyses. As open communities, Ariel pointed out, anyone is welcome to use the NiPy and SciPy software as well as to participate in their development. If you’re interested in hearing more, he encourages you to check out the NiPy mailing list or the annual SciPy conference!
Dora told us about the iEEG BIDS extension proposal, a proposed extension to the BIDS standard for structuring human intracranial electroencephalography (iEEG) data. She explained that current challenges with iEEG data sharing include the large variability in both electrode locations and data formats across sites. The proposed extension will create a standardized structure for storing iEEG data and metadata, allowing for novel, multimodal analyses via integration of iEEG with MRI, MEG and EEG. To contribute to the development of the iEEG BIDS extension, Dora encourages checking out the current draft or commenting on the BIDS mailing list.
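To give a concrete sense of what such a standardized structure looks like, here is a minimal sketch of a BIDS-style folder layout for an iEEG recording, created with shell commands. The dataset name and filenames below are hypothetical examples for illustration only; the exact required files and naming rules are defined by the extension proposal itself.

```shell
# A minimal, hypothetical sketch of a BIDS-style dataset containing an
# iEEG recording. Filenames are illustrative assumptions, not the
# final specification.
mkdir -p my_ieeg_study/sub-01/ieeg
touch my_ieeg_study/dataset_description.json                          # dataset-level metadata
touch my_ieeg_study/participants.tsv                                  # one row per subject
touch my_ieeg_study/sub-01/ieeg/sub-01_task-rest_ieeg.edf             # raw iEEG recording
touch my_ieeg_study/sub-01/ieeg/sub-01_task-rest_ieeg.json            # acquisition metadata
touch my_ieeg_study/sub-01/ieeg/sub-01_task-rest_channels.tsv         # per-channel information
touch my_ieeg_study/sub-01/ieeg/sub-01_electrodes.tsv                 # electrode positions
```

Keeping data and metadata in predictable locations like this is what allows the same analysis tools to run unchanged across datasets collected at different sites.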
By Valeria Kebets, Csaba Orban, Thomas Yeo on behalf of the OHBM 2018 Local Organizing Committee (LOC)
As we’re swiftly approaching June, we thought we would follow up our previous blogpost with 10 practical tips to help you make the most of OHBM 2018 in Singapore.
1. CLIMATE: Singapore has a hot and humid tropical climate. The air temperature remains in the mid-twenties (~75°F) even at night, so don’t be surprised if you break a sweat after only a 10 minute walk. For daytime walks, sunscreen is recommended as the UV index can reach extreme levels. Buildings tend to be heavily air-conditioned, so you may also want to pack a sweater for the conference. Also note that the weather is unpredictable, and heavy thunderstorms can develop in just a few minutes, so your weather app is unlikely to be helpful.
2. FOOD: If you want to grab a quick bite during the conference, there are many cafes and restaurants in the same building (listed here). There are also plenty of dining options at walking distance from the conference centre such as Gluttons Bay, Chijmes and Bussorah street. Shoppers will be glad to know that most stores in the city are open until 10pm, including on Sundays.
3. TRANSPORTATION: The best way to take advantage of Singapore’s public transportation system is by purchasing an ez-link card (same concept as Oyster card in London). Ez-link cards are sold at the airport and at most MRT (subway) stations for a $5 (3.75 USD) deposit. Ez-link works on all buses, MRT lines, and can also be used to pay in some stores, e.g. 7-11s, and some taxis. Pro tip: Remember to tap out with your card when alighting buses to avoid getting charged the maximum fare.
4. MAPS: Google Maps or Citymapper are great for figuring out the best combination of MRT/bus/walking to get anywhere on the island, including expected travel times, when to alight buses (stops are not announced), and the fastest way to exit MRT stations. Follow this link for directions to the conference centre.
5. TAXI/RIDESHARE: All Singapore taxis operate based on metered fare. There is no Uber, but Grab provides a similar service. There are separate pick-up points for metered and Grab taxis at Changi Airport. Pro tip: If you want to keep costs low avoid the Chrysler Cab (black taxis) in the airport taxi queue.
6. GRATUITY: Tipping is generally not expected in Singapore. Most restaurants automatically add a 10% service charge and a 7% Goods and Services Tax on the bill.
7. PAYMENT METHODS: Most places in Singapore will accept credit card payment (VISA/Mastercard, though usually not AmEx). However, do keep some cash for dining in hawker centres. ATMs are widely available in the city and airport.
8. LIQUOR TAX: Singapore imposes an excise duty on all liquor, so expect to pay between $10 - $14 (~ $9 USD) for a small bottle of beer in restaurants or bars. Pro tip: Duty free stores inside the airport terminal are exempted from the liquor tax.
9. MEDICATION RESTRICTIONS: Singapore has a strictly enforced zero-tolerance policy with respect to possession of illicit substances. Note that certain prescribed psychotropic medications (e.g. sleeping or anti-anxiety medications) may require you to apply for a license at least 10 days before your arrival. You can read more about this here.
10. RELATED EVENTS: Be sure to check out the satellite events before and after OHBM. The events kick-off with PRNI (June 12-14), OHBM Hackathon (June 14-16; Sold out) and BrainStim (June 15-16). The Chinese Young Scholars Meeting takes place June 19. There are also three post-conference workshops organized by the local brain imaging community: Multimodal Neuroimaging for Mental Disorders (organized by yours truly; June 22), Brain Connects (June 22) and Nonstandard Brain Image Analysis (June 22-23). Attendance is free but make sure to register early--while there are still seats!
If you haven’t already, we highly recommend checking out the brain in SINc website for more in-depth information on food, sights and attractions in Singapore, curated by the Local Organizing Committee.
We look forward to welcoming you next month in the Lion City!
By Jean Chen
As part of the OHBM International Outreach effort, we found out about the experiences of Iranian trainees. Many of us in brain imaging have met and worked with Iranian trainees, who outnumber trainees from most other Middle Eastern countries. Through the trainees’ stories, we get a snapshot of the circumstances behind their decisions to leave Iran as well as their aspirations abroad. In this post, we speak to current and former trainees, including:
Jean Chen (JC): How much exposure to brain-mapping research did you have as undergraduate students in Iran?
Aras Kayvanrad (AK): I did not have much exposure to brain mapping research as an undergrad student. I completed my undergrad more than 10 years ago and at the time there was little brain-mapping research in the country. However, things have changed now and there are several research groups working in the area of brain-mapping. There are more opportunities for undergrad students to learn about brain mapping research through talks, workshops, seminars, etc.
Sana Nezhad (SN): During my undergrad in Electrical Engineering we had a course called " the Application of Electronics in Medicine". It was in that course that I received my first academic exposure to brain-mapping research, which actually motivated me to do a Masters in Bioelectronic Engineering in the University of Tehran. There we had one year of coursework, which exposed me to the use of EEG, MRI and CT for brain mapping. I also got to know about quantitative methods of analysing the data we acquire using these methods. For the second year of my Masters I was required to complete a research project on MRI data acquisition, and although my project was focused on body imaging, I had classmates doing fMRI and MRI projects on the brain. As a result of group meetings, I learned about their research.
Mahdi Khajehim (MK): My personal exposure to brain mapping only started when I took the “introduction to biomedical engineering” course as an undergraduate student and for the first time became familiar with methods like MRI and fMRI. However, I think as a result of multiple government-supported programs and increased interest in brain mapping, this pattern has already started to change. Nowadays, undergraduate students in Iran have much better opportunities to get familiar with this field through talks, workshops and summer schools, such as the Iranian Summer School of Cognitive Neuroscience, hosted by many different universities and institutions.
Arman Eshaghi (AE): During my undergraduate studies (Tehran University of Medical Sciences), I worked on at least two different projects in which we used advanced neuroimaging methods (DTI and fMRI) for patients with multiple sclerosis and neuromyelitis optica. My work was conducted with Professor Mohammadali Sahraian at the Sina Multiple Sclerosis Research Centre, which is affiliated with the Tehran University of Medical Sciences. I was also in active collaboration with UCL Institute of Neurology in London working with Prof. Olga Ciccarelli.
Mostafa Berangi (MB): During my undergraduate studies in Electrical Engineering, I took some courses in Biomedical Engineering, and they really interested me. As I became familiar with the multiple aspects of Biomedical Engineering, I was particularly interested in the field of MRI. That is the main reason for my decision to pursue brain imaging for my graduate degree.
JC: How would you describe the Iranian brain-mapping landscape? Are there major research programs or meetings that you were aware of as an Iranian student?
AK: Not as a student -- as I mentioned, at the time I did my undergrad there was not much brain mapping research going on. However, the growth of brain-mapping research has accelerated in recent years, and several research bodies have recently been established that provide financial and/or technical support to researchers in this area, which can potentially further facilitate and expand brain mapping research in the country. Most notably, the Cognitive Sciences and Technologies Council (CSTC) provides funding for brain-mapping research through a variety of research grants. Moreover, the recently established National Brain Mapping Laboratory (NBML), equipped with state-of-the-art scanners, has further paved the way for brain mapping research in the country.
SN: There are several brain mapping groups specializing in advanced quantitative analysis of brain-imaging data generated through different modalities. I get the sense that in Iran there is a shortage of data-acquisition access due to limited resources; however, most active research groups overcome this problem through collaborations with universities abroad. For example, I collaborated with a cancer centre based in the UK and received half of my data from there. This lack of imaging resources drives the research towards quantification methods rather than data acquisition approaches.
MK: From my perspective, the Iranian brain-mapping field has already started to grow at a promising pace. Thanks to increased government support through funding agencies like the Cognitive Sciences and Technologies Council (CSTC) and greater availability of the infrastructure that is an essential part of this field, there is now a rising interest in doing research in brain mapping. Moreover, newly established institutions like the National Brain Mapping Laboratory (NBML), in conjunction with older ones like the School of Cognitive Science, are playing a crucial role in expanding the field among Iranian researchers, and I personally benefited a lot from attending educational events hosted by these institutions. All of this makes me expect an even better future for this field in Iran.
AE: There have been active institutes working on animal neuroimaging (in addition to human) located in Tehran that are affiliated with top Iranian universities, including the Institute for Fundamental Physics and the Institute for Cognitive Science Studies. There are new centres such as the National Brain Mapping Laboratory, which did not exist when I left Iran in 2014. There are also groups working inside university hospitals including the Neuroimaging and Analysis Group. Therefore, in my opinion Iran can have a bright future in science and in particular neuroimaging in the Middle East.
MB: In Iran, the best students choose to go to Sharif University, University of Tehran, Amirkabir University of Technology, Iran University of Science and Technology, Khaje Nasir University and Shahid Beheshti Medical University (in that order). In terms of the field of brain mapping, from my perspective, the University of Tehran and Amirkabir University are the top institutions. I feel that these institutions have the largest and strongest faculties, and this quality is important for graduate students.
JC: For those of you abroad, what was your main motive for leaving Iran to pursue further training? For those in Iran, do you have plans to leave Iran for additional studies?
AK: I left Iran after my undergraduate degree. The reason was quite simple – I left Iran to expand my horizons in a new environment doing cutting-edge research.
SN: My main motive was to have the opportunity to get more involved in MRI acquisition research, which is more feasible here in the UK. Also, I cannot rule out being adventurous and wanting to experience a different cultural environment!
MK: On one hand, some aspects of brain-mapping research remain mostly untouched in Iran, and those happen to be the areas I was most interested in; as such, leaving made sense because there was not much relevant expertise or experience available in Iran. On the other hand, in my opinion, another thing still missing is international collaboration, which helps to accelerate the development and increase the quality of brain mapping in Iran. These two factors were my main motivations to go abroad for PhD study.
AE: My main intention for leaving Iran was to expand my skill base in using larger databases, and in particular my quantitative skills. Moreover, working in a place such as the UCL Institute of Neurology, which is home to many renowned neurologists and neuroscientists, has enabled me to form more ambitious research plans with access to a wide range of patient populations.
MB: I would like to study in a foreign country, but it comes down to a personal decision, so I have not yet made up my mind. Certainly, most of our students would like to study abroad, and many of my labmates have left to pursue their PhDs. Our professors do not try to retain us. They actually encourage us to explore our options.
JC: How would you describe the career prospects of a highly trained neuroimaging researcher in Iran?
AK: With more groups working on neuroimaging and the availability of research funding and imaging facilities, the prospects seem very promising. In Iran, many talented students are interested in engineering in general, and medical imaging in particular, which is an invaluable asset to principal investigators in these fields. Nevertheless, in spite of the recent progress, access to funding and imaging facilities is still very limited. Moreover, there is limited collaborative research between individual groups and between institutions. I hope the establishment of the new national research bodies, such as the NBML, will lead to collaborative research initiatives between research groups and institutions across the country.
SN: I would think a researcher with a good international network, particularly with countries with a strong neuroimaging landscape, can expect a promising future.
MK: I think for such an individual the available positions are in academia or government-funded research institutions; in the private sector, there is only a limited range of options. I imagine there would be several suitable faculty or research positions available in the capital, Tehran, but not so much in the rest of the country. For private research-based companies to grow and create more jobs in this field, there is still a lot that needs to be done.
AE: Compared to the developed world, there are very limited funding opportunities in a developing country like Iran. As a result, many students may prefer to leave the country to expand their skill base. However, despite these limitations, there has been an upward trajectory, as is evident from the construction of new neuroimaging centres and the availability of graduate (PhD)-level university programmes dedicated to neuroimaging.
MB: Medical Imaging is still a very new field in Iran, and frankly there are not that many jobs in this field, especially for PhD graduates.
Postamble (JC): As in any research community, trainees are the future of Iranian brain-mapping research. The trainees that you met here are some of the brightest among Iranian students; they are expanding their horizons voraciously and have ambitious future plans. Irrespective of their current locations, these trainees show their love of their home country and are obviously excited by the recent developments in the Iranian research arena. I have come to learn that > 40% of Electrical Engineering students as well as > 50% of Medical Physics students at the University of Tehran are women, numbers that exceed those of most western programs. We look forward to the transition of these trainees into independent scientists.
Although there is great need for brain-mapping expertise, there are currently few positions in Iranian universities for trainees, even those with highly prestigious foreign training. In this regard, I have come to learn that the government has established paid postdoctoral fellowships (up to 2 years) for those returning to Iran and in search of faculty positions. In parallel, there are government programs that encourage highly qualified individuals to return to Iran to establish tech companies, through both cash rewards (up to $40,000 USD) and low-interest (close to 0%) loans. These mechanisms will likely create jobs for future trainees in brain imaging.
We wish these trainees the best, and hope the OHBM community will be able to enhance outreach to those working and living in Iran as well.
By Thomas Yeo and Nicole Kuek
Professor Simon B. Eickhoff is the director of the Institute for Systems Neuroscience at the Heinrich-Heine University Düsseldorf and the director of the Institute of Neuroscience and Medicine (INM-7) at the Research Center Jülich. Simon is a leading cartographer of the human brain, and his team utilizes a wide range of methods to map the organizational principles of the human brain. We had the opportunity to chat with Simon ahead of his keynote lecture at the upcoming 2018 OHBM Annual Meeting in Singapore.
Thomas Yeo (TY): Today we have Prof Simon Eickhoff here, a keynote speaker at OHBM 2018. Simon, thanks for doing this. How would you describe your research to a random person on the street?
Simon Eickhoff (SE): I would say that I’m interested in how the brain is organized, how it varies between people, and how this variability relates to things like cognitive capacities. Then ultimately, I want to contribute to developing new tools for diagnosing and treating neurological and psychiatric disorders.
TY: That’s a rich set of activities – how did you end up on this research path?
SE: More or less by accident. I studied medicine in Aachen, and late at night at a party in my hometown, I met a friend from school who had started studying in Düsseldorf. He told me about brain research there and I thought it sounded quite interesting. So I called Karl Zilles’ secretary, met him, and was really fascinated. I started my doctoral studies there and never really managed to leave.
TY: Your lab is involved in several projects – but what is the most exciting thing you’re working on now?
SE: There are two things we are doing right now that I’m really excited about. One is brain-phenotype relationships. Can we actually infer complex phenotypes from brain imaging data? Can we predict personality traits or cognitive performance? And the key aspect here, given that there’s quite a lot of work on this already, is: can we predict it in an interpretable fashion? What we need is good predictive performance, while also learning something about brain organization. This is one of the aspects that I really want to push, as it’s not highlighted enough in current discussions. It’s one thing to be on the data-driven side, and to get good compressions, good predictions. This is, without any question, awesome. But in the end, we also want to learn something about the brain – how the brain is organized. There’s a lot of work going on in our lab that really tries to combine the more data-driven work from a computer science perspective with the more traditional neuroanatomical view.
The second part is more related to brain mapping. You can describe the brain through a lot of different features. So for each point in the brain you can ask: what are the structural properties, what is the trajectory as we age, how is it disturbed or changed in people with Parkinson’s or schizophrenia? But also, what are the functional connectivity profiles at rest and during task, what are the structural connectivity profiles, and so on? You can use each of these features to map the brain and to delineate brain areas. But how does all of that work together? That’s the critical question, and cracking this kind of topographical code is something that we can hopefully get closer to. And it’s pretty exciting!
TY: Moving forward – what do you hope your research will accomplish in the next 5-10 years?
SE: Well if we manage to do the things we just mentioned, I’ll be quite happy! The predictive modeling, but also brain mapping, understanding organization and topographical complexity of the brain. That’s going to be fantastic!
Looking ahead, I’m not sure if it will be done in five years, but it will be really exciting if we can go outside of the academic field, outside of doing research to just get the next papers, and to mature enough to actually bring our research into clinical practice. Five years – I’m skeptical if we’ll get there, but over 10 years, I’d be more confident. If my team will be able to contribute to it, then that would be fantastic. And we’re working towards that goal.
TY: What do you think is the biggest obstacle right now?
SE: Towards clinical application? In the end there’s so much flexibility in the analysis of imaging data. We always hope that we live in a perfect world, where you acquire data, and then you do one single analysis, which is a priori planned, and that result gets published. But I’m not entirely sure that most labs do that.
The thing is, if you really want to go and measure yourself, say by the standards of clinical trials, that would need to happen. You’d have to have your analysis plan ready, deposit it, acquire the data, carry out one single analysis, and report the outcome. Then if you want to have something that’s clinically useful, it needs to have a really high accuracy and predictive value. What I really value is the current push towards more methodological rigour. I’m really happy to see that it’s becoming slightly easier to publish null results, and that new methods are not judged solely by “better” performance.
TY: So you’re saying that p-hacking is a problem, but I guess it’s a bit unclear to me if that’s the main problem. Even with the high quality Human Connectome Project data, the predictive accuracy is not that amazing.
SE: Right, we still have a lot more work to do. And since you’ve mentioned the HCP dataset, this is never what you’d get from a clinical setting, where the data is acquired in a short time by a technician who is less invested than, e.g., a PhD student. And also the patient may not be as motivated as a research subject to lie still, comply with instructions, etc.
Most likely, what will happen is the field will be split into different domains. One that is very high resolution, very intense sampling, and a lot of valuable data for each individual. That will be great for understanding brain organization.
But there will also be the other side, which will deliberately say “I want to use low quality, clinical data.” These adjectives may describe the same thing, though the latter just sounds better [laughs]. We are using standard clinical quality data and we know the data is bad, but we also know that we need to find something that works on such data if we want to make an impact beyond research as an academic discipline. So we have to be up for the challenge! One thing we are doing a lot now is to deliberately make our lives difficult, by combining data across many different sites, different scanners, different populations, different continents. That way, the dataset is diverse – often even bad – but we’re happy to accept some drop in performance, because we know it is genuine performance. We have to improve it, but at least we’re not tricking ourselves into believing that we’re doing extremely good predictions that don’t hold up in real life.
TY: We are at Whistler now and just had an exciting workshop held by Todd Constable. I’ve noticed that the talks from more senior professors seem to cover a lot of papers, whereas someone more junior (like me) will talk about two papers. Is this what I should be aiming for?
SE: I think it’s a matter of personal style. But maybe you are right, and personal style changes with age. You really have two choices when you’re giving a talk. You either take a rather high-altitude view and present an overarching picture, or you dive deeper into something and go into a lot more detail.
One of the explanations for your observation is that when you are younger, you just have fewer papers you really want to talk about. When you get to a certain stage, you have a lot of papers that you can talk about, so you need to make a decision to go deep or go broad. It really has to fit your style and what you’re comfortable with.
For me, usually I want to give an overview, as we’re making great efforts to put puzzles together. We have different studies that may not be particularly related to each other, but you can see the crosstalk, and you can see the connections that I’m so excited about. That’s why I talk about so many different things.
TY: I’m on twitter and see that you (@INM7_ISN) have strong perspectives on open science and the replication crisis. Do you want to comment on that?
SE: Yes – I’m a big chimera when it comes to that. On the one hand, I’m a big proponent of open science. Most of the work we’re doing depends on shared datasets. In fact, long before the term ‘open science’ became popular, I was sharing my software. Back in 2004, I was developing the SPM Anatomy Toolbox. That was just open MATLAB code. I was still an undergraduate at the time.
So I’m a big fan of open science, but what worries me at times is the tone of the debate. Sometimes there’s a patronizing aspect to it, a moral argument of “you have to” and “how can you not”. I think that in order for open science to grow, we need to take concerns seriously. Perhaps by virtue of being around the open science and computer science environments, but also around the very traditional German medical environment, I can see that there are two sides to the argument.
Basically, in the German medical environment, I don’t think any person that would review your grants or would hire you cares too much about whether your dataset is open, or whether your software has been released. There’s a more traditional focus on publications with a lot of focus on impact factor, and grant money. This is a completely different world.
An example – if you’re a software developer then your product, your outcome or claim to fame, is the thing (a software tool, a repository, any other resource) that you distribute freely on the internet. If you are someone who spent years collecting data on a rather rare disease, recruited patients, talked a lot to them, followed them up clinically, evaluated them repeatedly and put them in the scanner, then this data is a resource, an extremely valuable resource. And it comes with the assumption that this data will allow you to get enough out of it to carry you to the next step of your career. If you are then being told “well, you are unethical and doing something terribly wrong by not immediately sharing the data freely after the first paper”, this is not putting open science in a good light.
I think open science will succeed and it will be a tremendous accelerator of knowledge. But in order for that to really happen we need to take people’s concerns seriously. There will definitely be a development at different speeds, with things moving faster in some fields than others. It’s not the case that those fields that move faster can look down upon those that are not as fast. I don’t think anyone is opposed to open science from a personal conviction, but it’s more about needs and rewards and we need to take these views seriously.
Long-term, there has to be a better incentive structure. At the moment, we are conservative about it in the German medical system – one of our main criteria for hiring is cumulative impact factor, the sum of the journal impact factors of all your papers. H-index, citations and so on do come into play, but the fact that cumulative impact factor is a major evaluation criterion shows you that different fields still evaluate contributions differently. At some point, open science practices will need to be rewarded not just morally, but also practically by selection committees. That will take quite a bit of time, though.
TY: So how do we change the minds of selection committees?
SE: Well, we just mentioned that we are getting older [laughs]. In some ways there is another generation before us, those who are really not used to it. This will perhaps change over time. Then at some point we have to find criteria for quantifying open science. You can show things on the internet to a review committee – e.g. you have 500 or so MATLAB scripts that you are sharing – but will that give you a job? If committee members are from another field, they might not get the value, so there needs to be some way of quantifying these contributions objectively. Then a committee made up of psychiatrists, dentists, or structural biologists (and they are often that diverse) can refer to some numbers that give an assessment of your open science practices. That would be a big step forward.
TY: Have you heard of the idea that once you come up with a set of numbers, they will be gamed?
SE: Sure, but this will always happen. We would like a perfect world where all decisions about hiring or promotions are made by people who are experts in your field, who spend several hours scrutinizing your CV or 10 most important papers for context, and who read related literature for comparison. But this just won’t happen. Maybe I’m part cynic, part realist, but most people are overloaded with committee duties, so you need some easy summary of a person. Yes, this will be gamed – there’s no way around that – and we can only hope that each committee includes someone who can point out the gaming and who spends time getting to know your work more deeply. But usually you have to convey the importance of your work to people who are not familiar with your publications or your topics.
TY: Thank you so much for this interview!
We look forward to attending Simon’s exciting keynote on Monday June 18, 2018.
GUEST POST BY CHRIS CHAMBERS
The biomedical sciences are facing a rising tide of concerns about transparency and reproducibility. Among the chief concerns are inadequate sample sizes, lack of sufficient detail in published method sections to enable replication, lack of direct replication itself (and notable failures when attempted), selective reporting of statistical analyses in order to generate desirable outcomes, suppression of negative results, lack of sharing of materials and data, and the presentation of exploratory outcomes as though they were hypothesis-driven. Collectively these problems threaten the reliability of biomedical science, theory generation, and the ability for basic science to be translated into clinical applications and other settings.
Human neuroimaging in many ways represents a perfect storm of these weaknesses, exacerbated by the fact that two of the main techniques, MRI and MEG, are extremely expensive compared with adjacent fields. Researchers using these methods face tremendous pressure to produce clear, positive, publishable results, usually in small samples.
Until recently such issues were rarely discussed openly, perhaps for fear that it would bring a relatively embryonic discipline into disrepute and collapse funding opportunities. But they have been simmering below the surface for a long time. Years before irreproducibility was headline news, at one imaging centre where I worked we noticed that we were running out of data storage faster than we were acquiring new data. After some detective work we learned why. Researchers were repeatedly analysing and reanalysing the same datasets, and with every reanalysis they were inadvertently duplicating huge quantities of raw data. The incident was illuminating about normative research practices.
When I raise this scenario with colleagues, their typical response is “Well, duplication of raw data is a silly mistake but most fMRI research is exploratory and exploration is vital for science”. This is true, of course. There is a huge amount to gain from performing reanalysis of existing, complex datasets. But the key, then, is whether such exploratory research is documented transparently as exploration. In an exploratory field, and especially one that often relies on inferential statistics, shouldn’t publications faithfully report all analyses that were attempted before settling on the ones that drove the conclusions? And does this happen in fMRI? Of course it doesn’t. Pick up a copy of any neuroimaging or cognitive neuroscience journal and you’ll find article after article purporting to test hypotheses using complex analyses, each of which is presented as though it was planned in advance. Given the pressures on researchers to produce clean results and frame them as the outcomes of hypothesis testing, it comes as no surprise that virtually no two published fMRI studies report the same analysis pipeline.
There are many solutions to this quagmire, including greater sharing of data, materials and code, and I also believe one major piece of the puzzle is preregistration of hypotheses and analysis plans. Many in the neuroimaging community are skeptical of preregistration and what it might say about our scientific approach, which sits uncomfortably between confirmatory and exploratory modes and relies on massive investment to remain afloat. When your typical experiment involves hundreds of analytic decisions, each of which can be considered “legal” yet produce slightly different outcomes, there is a natural tendency to fear that pre-specification of any particular route through the garden of forking paths will lead to unpublishable, possibly confusing findings. We thus feel pressured to apply the “human element” to bring order to chaos. Researchers will routinely spend months poring over their data and analyses using sophisticated statistical methods but almost none appreciate the risks of their own biases in interpreting one outcome among hundreds or thousands of possibilities.
This is why I have pushed hard for neuroimaging journals to offer Registered Reports (RRs). The RR format eliminates the fear of preregistration producing unpublishable results because, for RRs, the editorial decision is made before the results are known. Detailed study protocols are reviewed before researchers commence the research, and following detailed review of the theory and methods, the highest quality submissions are accepted for later publication regardless of how the results turn out. Researchers can also report additional exploratory (unregistered) analyses, clearly flagged as exploratory, and are encouraged to include preliminary experiments to validate a proposed analysis pipeline.
This week sees the launch of Registered Reports as a new article option at NeuroImage as part of a two-year pilot initiative, co-edited by me, Birte Forstmann (University of Amsterdam), Rob Leech (Kings College London), Jeanette Mumford (University of Wisconsin-Madison), Kevin Murphy (Cardiff University) and Pia Rotshtein (University of Birmingham). In addition to the usual features of Registered Reports, we are also inviting proposals for secondary analyses of existing but unobserved datasets, innovative approaches using Bayesian adaptive optimisation to combine the strengths of exploratory and confirmatory science, and review/perspectives articles on the potential costs and benefits of preregistration in neuroimaging research. Submissions are invited in any area of human neuroimaging and readers can find detailed author guidelines here.
Preregistration in neuroimaging is a high stakes intervention. The combination of high analytic flexibility combined with high risk of bias and high expense of data generation means that it has the potential to yield perhaps the greatest scientific benefits of any field to which it has been applied so far. But because of this methodological complexity, preregistration also brings some of the greatest challenges.
One such challenge is power analysis. Many of the 103 journals that currently offer RRs require high prospective power to detect the smallest effect of theoretical interest (e.g. 0.9 at Cortex, 0.95 at Nature Human Behaviour), but we know that MRI in particular is underpowered to detect theoretically plausible effect sizes, and we also know that many researchers do not have the resources to fund large studies. At one level this problem can be solved by consortia projects. Initiatives such as the Psychological Science Accelerator, Study Swap and the ENIGMA neuroimaging consortium are blazing a trail to facilitate more definitive team-oriented science. However, the main benefit of RRs lies not in the support of big science but in the elimination of publication bias and selective reporting. Therefore, to make the format as accessible as possible to the neuroimaging community, the RR format at NeuroImage will not set a minimum required statistical power or sample size. Instead we will simply require authors to justify the sample size they are proposing.
A bigger question is whether preregistration in neuroimaging is even feasible. To what extent will researchers feel able to prespecify their analysis pipelines in advance? For a RR, if an exact pipeline cannot be prespecified then researchers will be given the opportunity to prespecify data-dependent contingencies (e.g. if the data look like this, then we will apply this filter, etc.). They may also propose a blinded analysis strategy or an adaptive design in which some decisions will be post hoc, but actively protected from bias. Can such approaches succeed? I believe they can but for me the most fascinating outcome of this particular RR launch will be to discover how a community of talented analysts tackles this challenge.
Chris Chambers is a professor of cognitive neuroscience at the Cardiff University Brain Research Imaging Centre and guest section editor for Registered Reports at NeuroImage
Since the first meeting of the Organization for Human Brain Mapping (OHBM) over twenty years ago in Paris, the Organization has evolved from a primarily European and North American organization, to an international organization that draws members from over 50 countries worldwide (Figure 1).
However, the European and North American leadership and educational roles within the organization have been slower to undergo a similar evolution. This is perhaps most noticeable in the geographic distribution of Council, which, apart from very sparse representation from Australia and Cuba, has consisted primarily of Europeans and North Americans (Figure 2).
The characteristics found in Council are also seen in the chosen educational courses (Figure 3), while the symposia show slightly greater diversity (Figure 4).
The most striking omission from leadership and educational roles concerns our colleagues in Asia. China, Korea, Japan, Singapore, and Taiwan together make up over 15% of meeting attendees and poster presentations (Figure 5) and a similar proportion (17%) of the OHBM membership, yet to date have no representation on Council*. The goal of the Diversity and Gender Committee (DGC) is to work with Council and the Nominations Committee to foster equity in representation both within the OHBM membership and meeting attendance.
*Note: there has been one Council member originally from China; however, they are currently US-based and so were designated as representing the US. In addition, a former Council member also held a joint position in China, but was designated as representing Latin America.
How are we doing with Gender?
With three of the five most recent Council members being female, the gender distribution on Council is 12 males and 3 females. While this tripled the number of women on Council compared to one year earlier, it still falls below the gender distribution within the OHBM membership.
The gender distribution of attendees presenting posters is 50% male, 40% female, and 10% who provided no answer. Whether these 10% represent gender fluidity or allies for gender fluidity within OHBM is not known.
While the gender distribution for poster presentation is more balanced, there is a higher proportion of males for the educational courses and symposia.
Approaches to Foster Equity
There has been much productive discussion within the Diversity and Gender Committee regarding how to foster equitable representation within OHBM. There were a number of options that we discussed, including having ‘electoral votes’ for Council members to, in a sense, ‘force’ the leadership roles to match the membership demographics. However, we are a democracy, and the primary approach that we have adopted is to provide education (in the form of data) for our members and allow our members to vote. We therefore encourage all members to consider the above data and consider potential biases when voting for your OHBM leadership.
A member of the DGC also sits on the Nominations Committee, with the goal to keep diversity in mind during the decisions surrounding the nominations. Importantly, the Council, including the chairs and members of the Nominations Committee, are motivated to see equity in representation within leadership roles in OHBM. They have attended the DGC Meeting in Vancouver and echoed their support for the Committee’s work. This support is crucial!
Microaggressions and Bias
The DGC has been charged to address inequities in gender and geography; however, we have also heard whispers of both macro- and micro-aggressions within the context of the OHBM meetings. OHBM is all about science and integrity in both science and behavior. Attendees should be able to attend the meetings free from any form of bias related to gender, ethnicity, sexual orientation, gender identity or disability. If such events occur, whether overt or covert, they should be reported to the DGC, which will then work within the OHBM leadership to assess the situation and, if indicated, to intervene. The DGC is currently working on the specifics of best practices for intervening in cases where it is warranted.
For some time now, intolerance at the political level has been propagated throughout the world. However, we as a scientific community subscribe to inclusivity from all cultures and nationalities, and value diversity. In this light, we would like to highlight some of the challenges faced by some of our international colleagues, some of their biggest achievements despite these challenges, as well as provide a platform to voice their opinions and concerns on scientific inclusion.
There are parts of the world that are far from our minds when considering brain-mapping research, and Iran is certainly one of them. The last few decades have seen a massive exodus of highly trained individuals from Iran. As a result, this secluded country has produced a great number of researchers who now work and live abroad. In fact, many of us working in neuroimaging share frequent interactions with Iranian researchers and trainees, and these interactions have provided a glimpse into the state of science and education in Iran. I have come to understand that some of the top research-intensive universities in Iran in the field of brain mapping include Shahid Beheshti University, the University of Tehran, and the Institute for Research in Fundamental Sciences. When it comes to neuroimaging research, the University of Tehran, Shahid Beheshti University and AmirKabir University figure prominently.
Researchers who work in Iran, however, see not only the challenges but also tremendous potential in Iranian research. On the heels of the Persian New Year, we caught up with two Iranian imaging scientists who wish to share their distinct views and experiences with the OHBM international community.
Part 1: Dr. Mojtaba Zarei
Jean Chen (JC): Where did you receive your training, and what inspired you to study brain imaging?
Mojtaba Zarei (MZ): I was inspired to study brain mapping by my 3rd-year high-school teacher and then by the work of Frank Duffy in the early years of medical school. I completed my MD at Shiraz University of Medical Sciences in 1990, focusing on brain electrical activity mapping. Afterwards, I moved to King’s College London for my PhD in cortical electrophysiology, mapping the sensorimotor cortex of the rat after embryonic neural transplantation. In 1996, I resumed my practice in Clinical Medicine and Neurology, first in London, then in Cambridge, Oxford and Birmingham (UK). In 1999, I restarted my research in cognitive neurology under Prof. John Hodges and later in Chicago with Prof. Marsel Mesulam. In 2002, I became a postdoc in the Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB) under Prof. Paul Matthews. I went on to become Senior Clinical Fellow at FMRIB in 2006. As part of this, I established the Imaging in Neurodegeneration Group in Oxford, which was later continued by colleagues. Following that, I moved to the University of Nottingham in 2012.
JC: Given your foreign training experiences, what inspired you to move back to Iran?
MZ: Iranians commonly maintain strong family ties even after moving abroad. I moved back to Iran during a time when the government was prepared to invest heavily in neuroimaging research. In 2013, I was invited to return by the Iranian Ministry of Health to establish the National Brain Mapping Centre. This negotiation included an equipment grant of $10,000,000 USD for the centre from the Office of the Vice-President for Science and Technology. I was appointed Full Professor at Shahid Beheshti University, Senior Adviser to the Ministry of Health, and Director of the National Brain Mapping Centre, based at Shahid Beheshti University of Medical Science. Within the Ministry of Health, I designed and implemented a national Clinician-Scientist Program for the first time in Iran. I was also instrumental in founding the National Institute for Medical Research Development (NIMAD), modeled on the Medical Research Council in the UK. This organization is now the main independent governmental grant-awarding body, with seven scientific committees.
JC: How would you describe the brain-mapping landscape in Iran? In terms of major infrastructure, labs, programs, universities involved in brain-mapping research?
MZ: The major labs are mostly located in the capital, Tehran. The major players in neuroimaging research include the University of Tehran, Shahid Beheshti University and AmirKabir University of Technology. There is a 3T GE MRI at Imam Khomeini Hospital that is shared by researchers and clinicians. There are also two research-dedicated 3T Siemens MR scanners, one at the Institute for Research in Fundamental Sciences, and the other at the National Brain Mapping Lab. There are also 1.5T Siemens Avanto systems in Iran that can be used for research, but the most active one is at Kermanshah University of Medical Sciences.
JC: Are there formal national or regional-wide meetings or organizations devoted to brain mapping?
MZ: Indeed there are. Since 2014 I have been responsible for organizing the annual Iranian Human Brain Mapping Congress, involving an international audience with eminent scientists as speakers. In addition, in 2005, I invited my former colleagues from the UK, including Heidi Johansen-Berg, Matthew Rushworth and Christian Beckmann, to teach at the first Brain Mapping Workshop in Iran. There is also the Iranian Society for Cognitive Science and Technology, of which I am the current president. Furthermore, at the moment, our institution runs the only regular, long-term, hands-on brain-mapping teaching program in the country.
JC: What are the biggest challenges facing Iranian brain-mapping researchers that you would like the OHBM to be aware of?
MZ: The obvious challenge is that due to travel restrictions, Iranian researchers are not always able to attend OHBM meetings. Perhaps with developments in web platforms, this difficulty could be somewhat circumvented. Within the country however, given the limited resources, funding is not necessarily distributed in the most productive way, and there has yet to be an effective plan to utilize the infrastructure that is in place. On top of that, competition for research funding is politicized, and I fear that the requirement for political connections may be hindering research and the development of a younger generation of researchers. Any international mechanism (financial or otherwise) to directly support young and enthusiastic scientists would be welcome.
JC: Does the Iranian education system foster research and encourage young people to enter research? For example, are there scholarships available to help students enter research?
MZ: Yes, there is a lot of encouragement, but it translates into little financial support. Most MSc or PhD students do not get paid during their studies, which makes life difficult for them during these years. Postdoc positions (12-18 months) have increased in the last few years, particularly for those who obtained their PhD abroad. There are a lot of workshops, but these are often aimed at raising income.
JC: Are there government funding bodies to fund research? If so, how difficult is it to obtain funding, albeit it limited?
MZ: There are a number of grant awarding bodies that provide funding for brain mapping research, including the National Institute for Medical Research Development (NIMAD), National Science Foundation, and the Cognitive Science and Technology Council.
JC: How did you build up your lab in Iran?
MZ: When I returned to Iran, I got official permission from the Ministry of Health to establish a Brain Mapping Centre at the Tehran University of Medical Sciences. I then received additional permission from the Ministry of Health to establish the National Brain Mapping Centre at Shahid Beheshti University of Medical Sciences. However, after 2 years, with government changes, our funding was stopped. Less than 2 years ago, I obtained permission from the Ministry of Science and Technology to establish the Institute of Medical Science and Technology; our researchers and labs are located in this Institute. We established international collaborations with the University of Southern Denmark, the University of Pennsylvania, the University of Nantes, and the University of Southern California. The latter is where the ENIGMA Sleep project is based, and we are now leading the ENIGMA Sleep Group. More collaborations are being developed, and funding for these projects is often obtained from international bodies.
JC: What are the career prospects for your graduate students and perhaps for other foreign-trained Iranian brain-mapping researchers hoping to return to Iran?
MZ: Not much in Iran at the moment; many will leave the country for PhD positions and postdoc training. Some are recruited for teaching and research in Iranian universities. I have written a curriculum for training PhD students specifically in brain mapping, but it has to be approved by the Ministry of Education before I can actually start the program. However, there are numerous upcoming opportunities for scientists who have been trained in the best western programs.
Part 2: Dr. Abbas Nasiraei Moghaddam
On a later occasion, I had the pleasure to speak with Dr. Abbas Nasiraei Moghaddam. Dr. Moghaddam is Associate Professor in Biomedical Engineering at Amirkabir University of Technology in Tehran, one of the top universities in Iran and a frontrunner in neuroimaging research. Dr. Moghaddam is one of the premier MRI physicists in Iran, and for the past 8 years, has been director of the Advanced Medical Imaging Lab at Amirkabir University. For most of that time, he has also been affiliated with the School of Cognitive Sciences at the Institute for Research in Fundamental Sciences (IPM).
Jean Chen (JC): I understand that you are the founder of the Iranian Chapter of the ISMRM (International Society for Magnetic Resonance in Medicine). Where did you receive your training, and what inspired you to study brain imaging?
Abbas Moghaddam (AM): I received my BSc in Electrical Engineering in 1995 from the University of Tehran, and completed my MSc at the same university, under the guidance of Dr. Hamid Soltanian-Zadeh. Dr. Soltanian-Zadeh was the first person to teach MRI Physics in Iran (21 years ago), and he initiated me into the field of brain imaging. Afterwards, I went on to work at Washington University in St. Louis for two years (in cardiac imaging) before starting my PhD at the California Institute of Technology (Caltech). This was followed by a few years of postdoctoral training at the University of California, Los Angeles (UCLA).
JC: Given your foreign training experiences, what inspired you to move back to Iran?
AM: Iran is my home, where my parents, siblings and roots are. Prior to returning, I was in the US for a total of seven years, but for fear of travel restrictions, I did not visit Iran even once. It made me realize that I did not want to be away from my home for so long again. However, I retained a part-time appointment at the University of Southern California to allow me to facilitate my collaborations with my American colleagues.
JC: How would you describe the brain-mapping landscape in Iran? In terms of major infrastructure, labs, programs, universities involved in brain-mapping research?
AM: In Iran, most of the MRI systems are for clinical use. There is only one research-dedicated scanner (Siemens Prisma 3 Tesla), which is at the National Brain Mapping Lab (NBML). It is sited at the University of Tehran, which is where I first got into medical imaging. The NBML is not affiliated with any institution, but provides access to all researchers in Iran. The IPM system (Siemens Trio 3 Tesla) was purchased for the IPM, but due to regulations from the Ministry of Health, it was initially sited at the Imam Khomeini Hospital in Tehran. After 4 years, it was recently moved to the IPM, and is now essentially dedicated to research. As a result, we now have a unique opportunity to do MRI research at the IPM. The School of Cognitive Sciences at the IPM was directed by Dr. Hossein Esteky for over 15 years. Dr. Esteky is a world-renowned vision scientist who first drew the world’s attention to neuroscience research in Iran with his publication in Nature.
Currently, the research landscape in Iran is rapidly changing, allowing us to develop new areas of research. Amirkabir University is Iran’s leader in MRI Physics research, and its School of Biomedical Engineering is one of the oldest in the world (25 years old). Here at the IPM, we have about 40 students doing research in cognitive science. When I was at UCLA, I did sequence programming, but I did not have access to it when I first returned to Iran. Now we are in the process of negotiating a research agreement with Siemens that would allow us to do sequence development here as well. This is an exciting time.
JC: What are the biggest strengths and challenges facing Iranian brain-mapping researchers that you would like the OHBM to be aware of?
AM: We have excellent human resources. The students are well trained and eager for knowledge. Often, my students will have scored near the top of the Iranian University Entrance Exams. However, for many years, neuroimaging research in Iran was heavily focused on image processing, perhaps due to our lack of research-dedicated imaging infrastructure. We have labs that publish heavily on image processing algorithms. But without co-developing neuroscience and imaging physics, such a research program would lose its competitive edge. This is perhaps our biggest challenge. For the past 18 months, the newly established NBML has been providing access to imaging facilities, including MRI, EEG, TMS and fNIRS, but researchers in Iran are still trying to adapt to a culture of doing their own data acquisition.
Of course, Iranian researchers suffer from travel restrictions. For example, we are glad that this year’s meeting of the ISMRM is in Europe (Paris). Had it been in the US, we would not have been able to attend. I am thankful that my international collaborations have allowed me to get around such challenges. Science should have no boundaries.
JC: I understand that research funding for brain imaging is limited in Iran. In this climate, how difficult is it for you to obtain funding?
AM: The funding levels are certainly nowhere near the levels in the developed world. However, nearly everyone I know has funding, and no one has had stress due to lack of funding. This is in strong contrast with my colleagues in the US. One thing that is not well understood by the west is that in Iran, research is not nearly as costly. Students do not typically receive stipends, and scanning is fully subsidized, therefore we only need funding for traveling, publishing and so on. This makes it possible to conduct relatively big studies with little funding. Having said that, there are multiple types of grants that we need to apply for. For instance, traveling is covered by a different type of grant from regular research expenditure. The system is actually much more relaxed than in the west.
JC: How did you build up your lab in Iran?
AM: Biomedical Engineering has attracted a lot of interest from students in recent years, and I have had many applicants. When I interview students, I emphasize that I do research in Imaging and not in Image Processing. They are still getting used to the concept, but they are drawn by the success of my previous students. In addition, I set high standards for my students and do not hesitate to reject applicants who do not meet the requirements. In my institution, we have also set up joint-degree programs with foreign institutions in the UK and Australia. I would really like to expand this field of research in Iran, but that too will take time.
JC: What types of research questions are you interested in?
AM: I am interested in developing both functional and quantitative MRI sequences to improve brain imaging. In terms of fMRI, we are interested in improving the neural specificity of the imaging technique as well as developing brain-connectivity processing methods. In quantitative MRI, we are developing new imaging techniques for T1 and T2 mapping.
I first learned MR Physics at the University of Tehran, when I worked with Dr. Hamid Soltanian-Zadeh; this continues to be a big focus for my research. In the US, my research was in cardiac imaging, but when I moved back to Iran and started my affiliation with the IPM (at the recommendation of Dr. Soltanian-Zadeh), I started to do brain-imaging research. One of my recently graduated PhD students worked on developing a new MRI sequence. As we do not yet have a research agreement with Siemens, he did this work in collaboration with the group of Dr. David Norris in the Netherlands, and spent 15 months in the Norris lab. This resulted in a patent and 2 articles, and it was the first thesis on MR Physics in Iran. I have another student working on structural and functional brain connectivity. She worked with Patric Hagmann in Switzerland. This is mainly on image processing and neuroscience.
JC: Finally, what are the career prospects for your graduate students and perhaps for other foreign-trained Iranian brain-mapping researchers hoping to return to Iran?
AM: As I mentioned, we are hungry for MRI expertise, but the job situation in Iran is very uncertain. Brain Imaging is still a young field, and we certainly need more researchers to help us build up the programs. Meanwhile, I do encourage my students to see other places and gain other experiences. Many of my students have gone on to study in labs abroad, including Germany, the Netherlands and Canada.
Postamble (JC): As Dr. Moghaddam said, science should have no boundaries. What may seem to be challenges are also potential opportunities. Iranian scientists are as passionate about their research as we are in the rest of the world. They are defying great odds to build up a research program and to provide their young generation with new opportunities. Also, although female scientists currently account for less than 10% of all users of brain-mapping facilities, the increasing dominance of female trainees at the graduate level will likely change this. In an upcoming installment about Iranian trainees, you will also hear the thoughts of early career researchers from Iran and around the world.
By Elizabeth DuPre and Kirstie Whitaker
This month we continued our Open Science Demo Call series by speaking to Tim van Mourik, Eleftherios Garyfallidis, and Malin Sandström about the communities they’re building and supporting to make everyone’s lives easier through better open source software tools.
After a few technical difficulties (Kirstie’s laptop inexplicably deleted the “broadcast” button so we were all chatting to each other without being able to include our viewers in the conversation!) Tim introduced Porcupine. Porcupine is a tool to visually program your analysis. By dragging and dropping modules that represent functions in your analysis, you can quickly build an insightful analysis and then Porcupine will provide the code that you or others need to run on your own data. All code and documentation is openly available at the project’s GitHub repository, and this is where you can also give any feedback or suggestions. Alternatively you can find Tim in the BrainHack Slack team (click here if you need an invitation to join) or via email at firstname.lastname@example.org.
Eleftherios told us about DIPY, a global, community-supported, software project for computational neuroanatomy, focusing mainly on structural and diffusion MRI. DIPY implements a broad range of algorithms for denoising, registration, reconstruction, microstructure, tracking, clustering, visualization, and statistical analysis of MRI data. You can get involved and help the DIPY team in many different ways, but Eleftherios particularly encouraged OHBM members to test their data with the DIPY algorithms and provide feedback on any challenges they have running the code. You can ask questions in the team’s live chatroom or send an e-mail to email@example.com.
Linking very nicely to Eleftherios’ call for student applicants to work on the DIPY team’s suggested projects was Malin Sandström, INCF’s community manager who manages the organization’s Google Summer of Code (GSoC) program. GSoC allows students to be financed with stipends for their work on open source software over the summer. Open source organizations in the project contribute project ideas and mentors. INCF is participating as a GSoC mentoring organization for the 8th year in a row, with mentors from the worldwide INCF community and a wide range of neuro-tool projects.
You can browse the INCF project list to learn more about the summer plans. If you were too late to take part this year, we encourage you to keep an eye on the INCF GSoC projects page for updates on future rounds. If you have a project idea you would like to mentor with INCF for next year, get in touch at firstname.lastname@example.org by 1st December 2018.
Our next call will be on Thursday April 26th at 7pm BST (check your local time zone). If you’d like to nominate yourself or someone else to be featured on these monthly calls, please add their information at this github issue, or email the host of the calls Kirstie Whitaker at email@example.com. You can also join the OSSIG google group to receive reminders each month.
by Souad El Bassam and Nikola Stikov
OHBM has members throughout the world. We used last year's meeting as an opportunity to interview some of them to find out about the international reach of OHBM.
In our Spanish language video, you can learn about LABMAN and the way developing countries try to keep up with the growing cost of brain mapping research. Maria Bobes, the president of LABMAN, speaks to Manuel Hinojosa about the importance of involving more Latin American researchers in brain mapping and the crucial role of LABMAN in raising awareness of the challenges facing researchers in this area of research in Latin America.
Our Dutch video features Emma Sprooten from Donders Institute for Brain, Cognition and Behaviour and Raissa Schiller from Erasmus MC – Sophia Children’s hospital, junior researchers who are interested in cognitive and behavioural research. They briefly talk about their impression of the conference before moving on to speak about Raissa’s PhD research on cognitive impairment in children who were critically ill as newborns.
Finally, the Balkan video features researchers from Macedonia (Nikola Stikov), Bulgaria (Kalina Christoff and Bogdan Draganski), Serbia (Bratislav Misic), Bosnia (Branislava Curcic-Blake) and Croatia (Lana Vasung) trying to communicate with each other in their respective languages. Among the topics discussed are work-life balance and the many reasons for attending (and skipping) the OHBM conference.
Our international outreach does not stop here. We have videos in 7 more languages, including Catalan, Czech, Greek, Hebrew, Portuguese, Slovak and Mandarin. If anybody reading this wants to help with the transcription, we will be very happy to add these videos to our YouTube channel. If interested, please get in touch with Nikola Stikov (firstname.lastname@example.org). Let's show our international community that the language of OHBM is universal!
P.S. A big "thank you" to Job van den Hurk, Amaia Benitez and Olivera Evrova for the transcription and translation of the videos.
Permutation methods are a class of statistical tests that, under minimal assumptions, can provide exact control of false positives (i.e., type I error). The central assumption is simply that of exchangeability, that is, swapping data points keeps the data just as likely as the original. With the increasing availability of inexpensive large-scale computational resources and openly shared, large datasets, permutation methods are becoming popular in neuroimaging due to their flexibility, and because they raise fewer concerns about yielding nominal error rates than parametric tests, which rely on assumptions and/or approximations that may be difficult to meet in real data. This becomes even more important in the presence of multiple testing, in that assumptions may not be satisfied for each and every test, and the correlation across tests may be difficult to account for. However, even exchangeability can be violated in the presence of dependence among observations, and it may not always be clear what to permute. The aim of this blog post is to emphasize the relevance of linking the null hypothesis and the dependence structure within the data to what should be shuffled in a permutation test. We provide a few practical examples, and offer some glimpses of the theory along the way.
Example 1: Permutation mechanics
Let’s begin by reviewing the mechanics of a permutation test. Consider a comparison between two groups, for example whether hippocampal volume is different between subjects with Alzheimer’s disease (AD) and demographically matched cognitively normal controls (that is, a group with similar age, sex, education level, etc). If we assume that in both groups the hippocampal volumes are independent samples from a Gaussian distribution, a classical parametric two-sample t-test can be used to test for a difference between means of the two groups. However, this distributional assumption may not be true, and departures from this assumption can potentially lead to incorrect conclusions. In these circumstances, permutation tests perform better than parametric tests by providing a valid statistical test with much weaker assumptions. Specifically, under the null hypothesis that the hippocampal volume has no actual difference between AD cases and controls, the group membership (or the label of case and control) becomes arbitrary, that is, any subject from one group might as well have been from the other.
While it may seem implausible that this would be the case for patients and controls, in fact this is what we are testing: all else being equal (that is, exchangeable), any difference found must relate to the means, which is what we are interested in. In fact, a classical parametric two-sample test (with equal variance) not only makes the same assumption, but further assumes that patients and controls come from the same Gaussian distribution. Permutation tests do not require Gaussianity; it suffices that the data are merely exchangeable. Exchangeability further relaxes another important assumption of parametric tests: independence. Data that are not independent may still be exchangeable, either globally or under certain restrictions, as presented in more detail in Example 3 below.
With exchangeability, we compute the t statistic under each permutation, and produce the permutation distribution of the statistic under the null. The permutation distribution is the empirical cumulative distribution function (cdf) obtained from the data themselves, as opposed to from some idealized distribution, as is the case with parametric tests. The observed test statistic can be considered a random sample from the permutation distribution because it is equally likely to have arisen from any case-control re-labeling given the null hypothesis.
The p-value is the probability of finding a test statistic for the group comparison at least as high as the one observed, provided that there is no actual difference (i.e., the null hypothesis is true). So, the p-value can be calculated by randomly permuting the group labels many times, each time recalculating the test statistic; at the end of the process, we check how often a statistic as high as or higher than the original (before any shuffling had been applied) was observed, and divide that by the number of permutations performed. Figure 1 shows an example in which there are three subjects in each group; before any permutation is done, the test statistic is t = +0.7361. After exhaustively computing all 20 possible permutations, we see that 4 of these (including the non-permuted) are higher than or equal to +0.7361. Thus, the p-value is 4/20 = 0.20. If we had decided beforehand that our significance level would be 0.05, we would say that the result of this test is not significant, that is, there is no significant difference in hippocampal volume between AD patients and controls.
Figure 1: Consider the hippocampal volume measured in 6 subjects, three with Alzheimer’s disease, and three cognitively normal controls. The values measured are shown in the boxes (ranging between 3498 and 3588), controls in blue, AD patients in green. The test statistic for a difference Controls > AD is t = +0.7361. If there is no actual difference between the two groups, then the group assignment can be randomly permuted. For each such permutation, a new test statistic is calculated. In this example, four t statistics (shown in red) computed after random permutations of the group assignments, out of the 20 performed, were equal to or larger than the observed, non-permuted statistic. The p-value is therefore 4/20 = 0.20.
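To make the mechanics concrete, here is a minimal Python sketch of the exhaustive test just described. The six volumes below are hypothetical placeholders, not the exact values from Figure 1, so the resulting t statistic and p-value will differ from those in the figure; the procedure, however, is the same: enumerate all 20 relabelings and count those yielding a statistic at least as high as the observed one.

```python
# Exhaustive two-sample permutation test (sketch; hypothetical data).
import math
from itertools import combinations

def t_stat(a, b):
    """Two-sample t statistic (equal variance), mean(a) - mean(b)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

controls = [3588, 3551, 3509]   # hypothetical volumes
patients = [3561, 3520, 3498]   # hypothetical volumes

data = controls + patients
observed = t_stat(controls, patients)

# Enumerate all C(6, 3) = 20 assignments of three subjects to "controls";
# count how many give a statistic >= the observed one (identity included).
count = 0
n_perm = 0
for idx in combinations(range(len(data)), len(controls)):
    a = [data[i] for i in idx]
    b = [data[i] for i in range(len(data)) if i not in idx]
    n_perm += 1
    if t_stat(a, b) >= observed:
        count += 1

p_value = count / n_perm
print(f"t = {observed:+.4f}, p = {count}/{n_perm} = {p_value:.2f}")
```

Because the unpermuted labeling is always included in the count, the smallest possible p-value here is 1/20, which is why exhaustive tests with tiny samples can never reach very small significance levels.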
Example 2: Permutation in the presence of nuisance variables
Suppose in Example 1 that there were other variables that could potentially explain some of the variability seen in hippocampal volume. Some of these variables could even be associated with diagnosis itself. For example, it may be the case that, in this particular study, AD patients were older than cognitively normal controls. To account for these nuisance variables, we can formulate the problem as a multiple regression, in which hippocampal volume is the dependent variable, whereas the case-control status, along with other potential nuisance variables, are the independent variables. We would then test whether the regression coefficient corresponding to the case-control label is significantly different from zero. Now it is less clear what should be permuted. If we permute just the group labels, what should we do with the other variables in the model? It turns out that various approaches have been considered in the literature.
Systematic evaluations show that, among a host of permutation and regression strategies, the method attributed to Freedman and Lane provides accurate false positive control in the presence of nuisance variables and is robust to extreme outliers in the data. In the Freedman-Lane method, we regress out all nuisance variables from the hippocampal volume measurements to obtain the residuals of this nuisance-only model, and use the permuted residuals as the new dependent variable in the multiple regression, from which we construct the permutation distribution for the test statistic (i.e., the regression coefficient of interest). Intuitively, once the nuisance has been regressed out, what remains should be indistinguishable between AD patients and controls if the null hypothesis is true, and thus, can be permuted.
We note that whichever regression and permutation strategy is adopted, it is crucial that what is permuted is what would render the subjects different were the alternative hypothesis true. It is not relevant to permute aspects of the dataset that would not be affected should the null hypothesis be false, that is, should an effect actually exist. This is important because, when an experiment becomes complex (e.g., with multiple factors, levels, nuisance variables, and/or multiple response variables), it can be easy to permute aspects of the data that are not informative with respect to the null hypothesis. One should not lose sight of what is being tested, and permute the data accordingly.
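As a rough illustration of the Freedman-Lane procedure described above, the sketch below uses simulated (not real) data: age is regressed out of the volumes, the residuals of that nuisance-only model are permuted and used as the new dependent variable in the full regression, and the permutation distribution of the group coefficient is accumulated. Variable names and the simulated effect sizes are illustrative assumptions.

```python
# Freedman-Lane permutation scheme (sketch; simulated data).
import numpy as np

rng = np.random.default_rng(0)

n = 40
group = np.repeat([0.0, 1.0], n // 2)             # case-control label
age = rng.normal(70, 5, n)                        # nuisance variable
volume = 3500 - 15 * age + rng.normal(0, 50, n)   # no true group effect

# Full design: intercept, group (of interest), age (nuisance).
X = np.column_stack([np.ones(n), group, age])
Z = X[:, [0, 2]]                                  # nuisance-only design

def coef_of_interest(y, design):
    """Least-squares coefficient for the group column (index 1)."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

# Residuals of the nuisance-only model: what remains after removing age.
gamma, *_ = np.linalg.lstsq(Z, volume, rcond=None)
resid = volume - Z @ gamma

observed = coef_of_interest(volume, X)

n_perm = 1000
count = 1                       # the unpermuted statistic is included
for _ in range(n_perm - 1):
    y_star = rng.permutation(resid)   # permute the nuisance residuals
    if abs(coef_of_interest(y_star, X)) >= abs(observed):
        count += 1

p_value = count / n_perm
print(f"group coefficient = {observed:.2f}, p = {p_value:.3f}")
```

Note the logic mirrors the intuition in the text: once age has been regressed out, the residuals should be indistinguishable between groups under the null, and are therefore the quantity that gets shuffled.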
Example 3: Permutation in the presence of dependent observations
Data are not always freely exchangeable. It may be the case, for example, that there are repeated measurements from the same subjects among the observations. Or maybe some or all subjects are twins, siblings, or otherwise relatives. Cases such as these restrict the possibilities for permutations, but even so, permutation tests continue to be possible. They proceed in a similar manner as in the examples above, but care needs to be taken when selecting the permutations that are allowed. Exchangeability as defined above — that is, permuting the data keeps them just as likely as originally observed — must be preserved. More technically, it means that the joint distribution of all the data points must remain unchanged under the null. For example, in a twin study, one could permute the subjects within twin pairs, and pairs of twins could be permuted as a whole, but one sibling should never be mixed with the sibling from a different family; see an example in Figure 2. These restrictions, unfortunately, tend to reduce power compared to the analyses in which all subjects are independent and freely exchangeable. However, all other benefits of permutation tests are kept.
Figure 2: Observations that are not independent restrict the possible rearrangements of the data. In this figure, each white circle represents an observation (e.g., a measurement from a subject), the blue (+) or red (−) dots indicate whether the branches that originate at that dot are or are not exchangeable, respectively, and therefore indicate observations that can be permuted with each other. On the left, 10 unrelated subjects who are freely exchangeable. On the right, 18 subjects, some of which were recruited along with their siblings (FS), and/or with their monozygotic (MZ) or dizygotic (DZ) twin. Siblings must be kept together in every rearrangement of the data, which needs to be performed in blocks; subjects within a sibship can be permuted; some families may have both twins and non-twins, which requires nested blocks. (Figure licensed under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/)
Consider a longitudinal extension of the AD patients vs. controls example, in which two measurements are obtained from each subject, one before and another after an intervention is applied. As per above, the measurements must stay together within subject. However, depending on what is being tested, we may permute the data only within-subject, or only the subjects as a whole while keeping the order of intra-subject measurements unaltered, or do both things simultaneously. Within-subject effects (that is, the effect of treatment) would require that permutations happen within-subject, whereas between-subject effects would require permutations of the subjects as a whole. Interactions in a mixed design (within and between-subject effects) could benefit from both types of permutation. Crucially, what needs to be permuted is what would be equal should the null hypothesis hold, and that would differ should the alternative hypothesis be actually true.
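For the within-subject effect in this longitudinal example, a restricted permutation can be sketched as follows (with simulated data): swapping the before and after measurements within a subject is equivalent to flipping the sign of that subject's paired difference, which is what the code below exploits. The sample size and the simulated treatment effect are illustrative assumptions.

```python
# Restricted (within-subject) permutation for a before/after design
# (sketch; simulated data).
import numpy as np

rng = np.random.default_rng(42)

n_sub = 12
before = rng.normal(3500, 60, n_sub)
after = before + 25 + rng.normal(0, 30, n_sub)  # simulated treatment effect

observed = np.mean(after - before)

n_perm = 5000
count = 1                          # the unpermuted statistic is included
for _ in range(n_perm - 1):
    # For each subject, keep or swap the (before, after) pair. Measurements
    # never cross subjects, so only within-subject exchangeability is used.
    flips = rng.choice([-1.0, 1.0], n_sub)
    stat = np.mean(flips * (after - before))
    if stat >= observed:
        count += 1

p_value = count / n_perm
print(f"mean change = {observed:.1f}, p = {p_value:.4f}")
```

Testing a between-subject effect in the same design would instead permute subjects as whole units, keeping each subject's two measurements together and in their original order.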
Example 4: Comparison between models
Now suppose that, in our AD example, in addition to hippocampal volume, we have also measured the amygdala volume for each subject, and are interested in investigating whether hippocampal volume is a better biomarker of AD than amygdala volume (for example, in terms of standardized mean difference between cases and controls as measured by the Cohen’s d statistic). It is tempting to permute the case-control label, but this strategy turns out to be wrong as it completely breaks the associations between the hippocampal/amygdala volume and disease status, which should be retained under the null hypothesis. In fact, in this example, it is unclear what to permute. As a second example, if we want to test whether the mean of hippocampal volume in AD cases is significantly different from a fixed value (e.g., the typical size of hippocampus in normal aging subjects), it can be seen that there is nothing to permute. In these circumstances where a permutation test is difficult to apply, we need to resort to other methods such as the bootstrap for statistical inference.
The bootstrap is an established data-based simulation method, which is often used to assign measures of accuracy, such as standard error, bias, and confidence intervals, to a statistical estimate. It essentially uses the observed data to define an empirical distribution that estimates the unknown underlying data-generation mechanism, and then generates bootstrap samples and bootstrap replications of the statistic of interest using the empirical distribution, from which measures of accuracy can be calculated.
The bootstrap can be applied to virtually any statistic and in a wide variety of situations. For example, by sampling cases and controls with replacement independently, we can calculate the standard error or construct confidence intervals for the Cohen’s d statistic for hippocampal and amygdala volume, respectively, as well as for the difference of the two Cohen’s d. Given the strong connection between confidence intervals and hypothesis testing, a p-value can also be produced indicating whether the difference in Cohen’s d is significantly different from zero. In fact, bootstrap can be applied to hypothesis testing, including the questions described in Examples 1-3. However, unlike the permutation p-value, which is exact, the bootstrap significance is only approximate and thus less accurate.
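As an illustration of this idea, the sketch below (with simulated data) resamples cases and controls independently, keeping each subject's hippocampal and amygdala measurements together, and builds a percentile bootstrap confidence interval for the difference of the two Cohen's d values. The sample sizes and distribution parameters are illustrative assumptions.

```python
# Bootstrap CI for a difference of two Cohen's d (sketch; simulated data).
import numpy as np

rng = np.random.default_rng(1)

n = 50
hip_ad, hip_ctl = rng.normal(3300, 300, n), rng.normal(3500, 300, n)
amy_ad, amy_ctl = rng.normal(1500, 200, n), rng.normal(1550, 200, n)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

observed = cohens_d(hip_ctl, hip_ad) - cohens_d(amy_ctl, amy_ad)

boot = np.empty(2000)
for i in range(boot.size):
    # Resample subjects with replacement, independently within each group;
    # the same subject indices are used for both brain regions.
    i_ad = rng.integers(0, n, n)
    i_ctl = rng.integers(0, n, n)
    boot[i] = (cohens_d(hip_ctl[i_ctl], hip_ad[i_ad])
               - cohens_d(amy_ctl[i_ctl], amy_ad[i_ad]))

# 95% percentile bootstrap confidence interval.
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"d(hip) - d(amy) = {observed:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A confidence interval excluding zero would suggest the two effect sizes differ; as noted above, such bootstrap-based significance is approximate rather than exact.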
Therefore, permutation is a natural and favorable choice when the null/alternative hypothesis is well defined and what to permute is clear. Bootstrap is useful when the primary goal is to quantify the accuracy of an estimate or when a permutation test is not available in a hypothesis test (e.g., nothing to permute). That said, we also caution that bootstrap relies on an accurate empirical estimation of the true underlying probability distribution. Thus the sampling procedure requires careful consideration in order to respect the data generation mechanism in the presence of complex data structures. For example, block bootstrap is often used to replicate correlations within the data, while variants of the wild bootstrap are used to capture heteroscedasticity in the sample.
Practical advice: It's easy to get started with permutation methods in brain imaging. Most software packages have some sort of permutation test implemented. AFNI's 3dttest++ now uses permutation by default for cluster inference with the -ClustSim option; BrainVoyager has a randomisation plugin (permutation tests are sometimes called randomisation tests); Freesurfer can do permutation with mri_glmfit-sim; FSL has its randomise tool; and SPM has the SnPM toolbox. Finally, PALM is a standalone tool for permutation that works with different types of input data and has various advanced features.