Many fledgling neuroscientists who are eager to dive deep into the statistical analysis of functional MRI data know of Martin Lindquist. Martin is a professor in the Department of Biostatistics at Johns Hopkins University. His popular “Principles of fMRI” Massive Open Online Course (MOOC) and associated book have reached an audience of more than 80,000 students worldwide.
Our interview with Martin follows his path through various academic disciplines, eventually leading him towards educating the current generation of neuroimagers and winning the 2018 OHBM Education in Neuroimaging Award.
Claude Bajada (CB): Prof. Lindquist, you're a statistician by training but according to your CV your first academic foray was in the “History of Ideas” at Stockholm University. Can you tell us a bit about that?
Martin Lindquist (ML): When I was a high school student, I didn't really know what I wanted to do with my life and what direction to take. At that time Sweden had compulsory military service and since I was a bit younger than my peers, I had to wait. So I decided to have a go at some history and philosophy. I’ve always enjoyed it, and still do, but I realised that it wasn’t the career for me.
CB: How did you link that to statistics? Was that a smooth progression?
ML: Well, after my military service, I applied to the Royal Institute of Technology’s Statistics program - it was really more like an engineering program, a STEM, science-and-technology type of thing. So at the time I still didn’t know that I was going to work in statistics, but I was now focused on science and technology.
CB: Was this like a master’s degree?
ML: Yes, it was a master’s.
CB: How did you get into neuroscience?
ML: I actually did my master’s thesis on neural spiking. I then went to do a PhD at Rutgers University in the US. My PhD was in statistics doing fMRI-type analysis.
CB: Neuroimaging is a multidisciplinary field, there are psychologists, physicians, statisticians. Would you consider yourself a neuroscientist or a statistician if someone had to ask? Or a historian?
ML: Not a historian (laughs). I like to consider myself a little bit of everything. I think if I was pressed, I’d say I was a statistician because that’s what my degree is in. I have a PhD in statistics, I’m in a biostatistics department and the colleagues that I interact with on a daily basis are statisticians, so it feels like my home. But at the same time I’m excited about applying statistics to neuroscience. It just doesn’t feel as honest to say that I’m a neuroscientist because I don’t have that training and background. So I’d probably say I’m a statistician, but I like the idea of being a little bit of everything.
CB: We are conducting this interview because you won the 2018 OHBM Education in Neuroimaging Award. Last year you interviewed the previous winner Mark Cohen. I’m curious, how does it feel to be on the other side this year?
ML: Awesome. It was great to help interview Mark last year, he really deserved the award. And it is very humbling to have been chosen to win the award myself.
CB: You have taught a lot of statistics to budding neuroscientists and have also changed fields from history, which is not a STEM subject, to engineering. How difficult do you think it is to understand the needs of students who don’t necessarily have a STEM background but come into neuroscience?
ML: I think, like any educator, you have to remember what it felt like to not know anything - what it felt like to enter a discipline or to learn about a new subject. It can be hard. I like trying to break information down into the smallest possible components that I can. Then I try to rebuild things from these components, and that seems to work with people. Sometimes people get lost, but at least they have something to hold on to. My goal is that everyone gets something out of it. My strategy is to make content as small and manageable as possible, then sort of grow it, and you know, if you understand 80% of it, that’s ok! Everyone will take a different amount home.
CB: You’re talking about modular structures, these little chunks that people can build on, and I suppose in that sense MOOCs are really great because they do come in little bite-sized pieces.
ML: This is true.
CB: I remember I took your Coursera course while doing my PhD and I found MOOCs amazing for filling little gaps in my knowledge. How did you get into teaching MOOCs, and do you think that they will be the future of education?
ML: That’s actually a pretty interesting question. My department, the Department of Biostatistics at Johns Hopkins University, was one of the pioneers in MOOCs. A few of my colleagues created a data science program. It’s a set of 10 courses; they started about 3 or 4 years ago and they’ve had 5 million students - it’s pretty amazing! Those were the first real MOOC blockbusters, and people saw what was happening and how exciting it was, so a lot of people in the department started playing around with MOOCs. We had some expertise and we kept saying “let’s try this.”
At one point, our department was running more MOOCs than most universities - in fact, had it been a university, our department would have been ranked fourth! At this point we probably had 50 MOOC classes. Eventually we hired a videographer from a local arts school; she helped us with a lot of the editing and we streamlined the process. It became the culture of the department. So I guess I was at the right place at the right time! I mean, had I been somewhere else, it might not have been considered possible, because it is a lot of work.
CB: How do you gauge what your students are finding hard to grapple with? And, beyond that, what are the things that they find hardest?
ML: Remember your earlier question, “how do you teach people who may not be so STEMie”? If you are teaching a workshop, you see it on their faces. If you are looking at and interacting with your audience you can see when people are puzzled, you can see when they are nodding their heads, and you can feed off that. That really helped me figure out what worked and what didn’t work. With the MOOCs this was really not clear, so I tried to use some of the knowledge from previous workshops. On Coursera you had student feedback, but it just didn’t have that same personal touch, so it was unclear exactly what needed to be tweaked. That was a little difficult.
But that’s the interesting story. In my experience, and the experience of all of my colleagues who also teach MOOCs, the thing that people find hardest is the thing that you feel you know best. So for example, I know quite a bit about linear models and so I think that I can explain the GLM really well, but maybe I’m not as good at explaining pulse sequences for fMRI. But it seemed to be the opposite: they found my explanations of pulse sequences more understandable than the material that I was an expert in.
CB: Do you think it may be because once you do attain expert knowledge you kind of forget what it’s like to not have it?
ML: Absolutely. I think that’s what’s so interesting because I thought “Oh, I’m really good at that, why don’t they understand it?” Then you have to take a step back and realise it’s probably because I’m too deep in the weeds and I’ve sort of lost track … you have to remember what it’s like. And sometimes if you think about something all the time then it’s hard to remember that.
CB: One more question about your most recent work: you just published a paper about how to properly power fMRI studies. Researchers are increasingly aware of the lack of power, but of course including more participants always comes with extra costs. What do you think are the implications of this?
ML: Often it will depend on the research question. In other fields like genomics, they need pretty big sample sizes and people banded together in consortia and so on. So possibly for certain questions we need to do similar things. But at the same time there are many big databases coming out, like the UK Biobank and the HCP. Being at Hopkins, you see that there is a tension, as there are also people who are more interested in single-subject analysis. So I’m fascinated by the question of whether we can use these bigger databases to inform small samples or single-subject analysis, and I think that’s going to be important.
CB: We talk about increasing the number of participants all the time, and coming from a small institution myself, I worry this may hinder small institutions that want to contribute. However, there are now many open datasets; could these be the solution?
ML: Sure, there are all these big open datasets, but they are not acquired for any targeted purposes. Then you have these smaller studies that have a very specific hypothesis, and I think you need both. And figuring out a way to get both is going to be a very important question moving forward.
CB: What final advice would you have for budding neuroscientists?
ML: Be curious, look outside the box, be willing to do crazy things and fail, and have fun!
OHBM plans to create a new publishing platform, Aperture, to host high-quality research objects while promoting reproducible and open science. With Aperture, OHBM plans to open up to a more diverse approach in communicating academic research, bringing transparency and interactivity to the publishing process. We want to hear from you, the OHBM community, about what you would like to see in such a publishing platform.
Please complete the survey by clicking here.
The OHBM Publishing Initiative Committee (TOPIC) will be introducing a new journal, called Aperture. The roadmap above is tentative; it was presented to council in 2017 to illustrate the various aspects of the project led by the committee. Credit: Agah Karakuzu
For more information about the processes behind publications, read through this explainer by Michael Breakspear.
By Elizabeth DuPre; Edited by Aman Badhwar
What exactly is “open science”? As open science has become increasingly central to discussions of scientific practice, publishing, and policy, it’s become harder to provide a precise definition that encompasses all of its aims. The ubiquitous nature of open science is at once its greatest strength and deepest weakness: it’s broadly useful, but difficult to distill into a clear set of values or prescriptions. It’s been said that one way to get a better sense of a movement is to talk to its supporters, so I turned to some of the newest advocates for open science within OHBM: the newly elected members of the Open Science Special Interest Group (OSSIG) committee. I asked for their thoughts on what open science means, how they got involved in promoting open science initiatives, and why they’re so passionate about increasing its reach within our community.
Camille Maumet, research scientist at Inria and Chair-Elect, confirmed the current state of affairs: “open science is not a monolith.” Indeed, the backgrounds of newly elected OSSIG members support this idea. With training ranging from art, to physics, to cognitive science, to software engineering, these are a diverse group of individuals with multiple perspectives and skills. A far cry, it seems, from the myth of open scientists as fitting a single mold.
Their reasons for first getting involved in open science initiatives aligned with each OSSIG member’s background and seemed to echo the three overarching aims of open science in practice, publishing, and policy. For Tim van Mourik, an fMRI methods scientist and Open Science Room Chair, it was a realization that open science could address many of his concerns about the practice of science and the methods commonly used in functional neuroimaging. “In the wake of the reproducibility crisis I started to get a more complete picture of the situation and learned about publication bias, analysis flexibility, and publish-or-perish factors,” he said. “When I got a null result rejected from a journal despite better methods and more subjects than previously published positive results, I became even more determined to try and change the system.”
For Ana Van Gulick, a library faculty member at Carnegie Mellon University and Secretary-Elect, it was the open publishing of data and code that motivated her to fully embrace open science. “I wanted to keep track of all of the emerging open source tools”, she said, “to help students and faculty maximize transparency, efficiency, and reproducibility in their workflows.” She pointed out the importance of open information given “how fast these tools and software are being developed.” Accessible information on social media sites like “Twitter is great for learning about new developments,” Ana added, “and so are preprints.”
For Chair-Elect Camille, it was the idea of fundamentally re-imagining science policy and restructuring the incentives for how we as scientists work together. “Open science brings us closer to collaborative research,” she pointed out, “where we can share our results earlier, capitalize on each other’s experience and design research together.”
Despite their varied pathways into open science, all the committee members I spoke to echoed the same idea for why they’ve stayed involved: the community. Sara Kimmich, graduate student and Treasurer-Elect, put it this way: “This may sound corny, but I’m still impressed with how supportive the community around open science is. It's filled with people who are genuinely interested in seeing the best science get done, and they'll go out of their way to help you on your own path.” This sense that the open science community directly improved their science and careers was echoed by Katja Heuer, PhD candidate at Max Planck and Hackathon Co-Chair. “For my very first paper, I found collaborators on Twitter that I have never met,” she said. “Through these collaborations, I received additional data that we’ve made available to the entire community, and I can incorporate all feedback into the final journal version of the paper – how fantastic!”
Now that they’re advocates for open science, they also share a similar set of concerns and hopes for the future. Greg Kiar, PhD candidate at McGill University and current Treasurer, pointed to the difficulty of pushing for change in the current system. “Established labs and institutes often have practices or procedures for data collection and tool development that have been streamlined and relied on to be the engine of their scientific achievements for years,” he said. “Interrupting existing solutions that are closed for equivalent open ones is a tough sell: it can be a lot of work, and the gains are not immediate.” Treasurer-Elect Sara echoed this idea, and said she often feels as though she’s “waiting on a cultural climate shift in the larger scientific community to fully incorporate open science frameworks into existing institutions and our educational systems”. Despite this, everyone seems optimistic about the possibility of change.
After speaking with the newest members of the OSSIG, I feel as though we may not have a single definition of open science, but we have a new generation of scientists working together for broad and lasting change. Roberto Toro, group leader at Institut Pasteur and Hackathon Co-Chair, summed up this vision for the future as realizing that “the difference between Science and Open Science is wrong. The real difference we should make is between Siloed Science and Science. Siloed Science describes a type of science where the evidence and methods supporting research cannot be fully evaluated and discussed. Now, what we call Open Science, that’s just Science, there is no need for extra adjectives.”
By Niall Duncan
The rich scientific program enjoyed each year at the OHBM conference is the product of persistent hard work by the program committee. They take the raw material of the abstracts and proposals submitted by scientists all over the world and craft it into the finely polished end result that we all see. That means deciding which symposia get the green light, which abstracts become oral presentations and which posters only, and which researchers will be given the distinction of presenting their work in a keynote address.
This year the committee was chaired by Prof. Guillén Fernandez of the Donders Institute. We met with him to find out how the process went this year, to learn about his scientific path, and to hear his thoughts about the brain and how we study it.
Niall Duncan (ND): Professor Fernandez, welcome. You’re the program chair this year. In that process were there any particular unexpected challenges that came up? Any surprises?
Guillén Fernández (GF): There’s a pretty regular operation that you do every year. You get all the abstracts, the proposals for the keynotes, and so forth, and then you get together the program committee who meet in person and by teleconference. Then we just put together a nice program that fits the interesting topics together while considering some factors of diversity of gender and geography. Sometimes there is a surprise like a keynote is not available, so then you have to look for another one - that sometimes makes the balancing out in the end difficult. We were also interested to get certain topics that are currently of particular interest, large cohort studies, for example, into the program. It was all done quite smoothly.
ND: You started out as a medical doctor and then made the switch to what we could call basic science. Why did you make that switch?
GF: I actually started doing science while in med school, and that continued throughout my residency as a neurologist. At that time I initially did electrophysiology, then later also neuroimaging. It was hard to see how I could use these methods in my clinical practice - there was a gap in understanding. That was something that interested me so I worked on it and then you’re automatically away from clinically applied science, from science that is useful for clinical application.
A second point was that I liked clinical work and scientific research but I saw that it was difficult to do both at a good level. To be a good clinician and a good scientist is just difficult at the same time. Some people are able but I thought it was a stretch for me. I wanted to avoid being a kind of mediocre clinician and a good scientist, or vice versa, so then I decided to go for science only. Then there was this position at the Donders which I got and so the decision was made.
ND: And the rest was history... So, starting out as a physician, and then moving into brain science only, do you think that background has shaped the way that you think about the brain and how to study it?
GF: Yes, I think the disadvantage is that at med school and in residency you are not that well trained in carrying out science. I think there is a deficit which you have to compensate for. But, on the other hand, as a clinician you have a very good overview of all kinds of things. You are quite pragmatic in your approach, and I think you can sometimes more easily see the relevance of things. In interactions with colleagues who are too theoretical, too conceptual, I sometimes have the idea that it’s easier for physicians to be pragmatic in some respects.
ND: A lot of your research has focussed on stress – both current stress and developmental stress. What was it that led you into that area of study?
GF: I worked on, and my work is still quite focussed on, memory. I’m interested in states where memory formation - establishing a new memory trace, or retrieving that or stabilisation of it - is either impaired or improved. Stressful states are quite unique in the sense that they improve memory formation and subsequent stabilisation, but impair retrieval. And that’s a nice approach.
The second point is that I think neuroscience, and in particular neuroimaging, can bring something to understanding mental disorders, and I’m quite interested in why and how traumatic experiences are so well remembered that the memories become maladaptive to the individual. That is something about mental disorders that I am, in the long term, interested in understanding more about. I’m trained as a neurologist but now my research might be more relevant for psychiatry. That’s something that I developed and is the reason why I research the effect of acute stress on memory formation and retrieval.
Developmentally, the human brain - the brain in general - is a very plastic organ and therefore is shaped by the experiences one has. These might make the brain later on more susceptible to, for example, negative memories. This memory bias and how that develops over the lifetime is something I am interested in.
ND: You’ve published many great papers but do you have a favourite paper or research project?
GF: I think that with our stress work there are some quite different studies that fit together nicely. There we have developed a model - that we also described in a review paper - that I think is particularly nice because we manipulated cortisol, we manipulated norepinephrine, and even with the genetic studies it all fitted together nicely. That makes the model quite nice and changed the perception, in my view, of the effect of cortisol in the brain. It’s usually just the bad boy but if I understand the more recent literature well then it appears to be that it is quite helpful in the acute state to get back to a normal state. So that’s more a dampening and normalisation function of cortisol, which is a different view. If you look into the literature twenty years ago then it’s always the bad boy. In the chronic state it probably still is, but in the acute not. That’s the most interesting.
ND: Similarly, you’ve taken what could be called emotions and applied it to what some people might call a cognitive function in memory. Do I understand that correctly, and if that interaction between emotions and cognition is correct what do you think that tells us about how the brain works?
GF: Sometimes I have trouble distinguishing between what an emotion is and what a cognition is. Sometimes they might be more or less the same. I think that there are states in the brain, for example acute stress, arousal, or threat perception, that affect a whole set of cognitive processes. We have to understand that a bit better. There are the second by second cognitive processes that are going on, the computations, and there’s more slowly modulated states that go rather in minutes, and sometimes also more rapidly. Trying to understand the interaction between these is something we are not doing often. We are usually lucky, we are happy, that we can kind of get something done on just the cognition, or just on the state, and I think we have to look more into that interaction. These states have different timescales and different spatial distributions. In neuromodulatory terms they are processes from norepinephrine, or serotonin or dopamine, that have an effect all over the brain. We have to capture these slowly modulated states in the brain and how they affect specific processes.
ND: And finally, if you were the program chair in five years’ time, which topics do you think will be the most exciting for everyone?
GF: Predictions are difficult! I can at least express my hope, whether it will be fulfilled in five years I don’t know. I hope that we will have bigger systematic studies, on the one hand. Not only them as I hope we will also keep the small hypothesis testing experiments, but at the same time we should have larger systematic studies that go after more complex interactions between the different cognitive levels, or emotions and cognitions, the different brain states, in a more systematic way. I think that will be there.
I think that we will still see new methods for analysis. We are already getting to see machine learning and artificial intelligence used for data analysis. I think it will help us with more complex patterns that we are currently having difficulties to grasp. And, probably not in five years but hopefully soon, we will get useful biomarkers from neuroimaging in mental disorders, so that they are really informative for diagnostics, for treatment selection or prediction. These I hope for in five years.
ND: Here’s hoping! Dr. Fernandez, thank you very much for your time!
By Claude Bajada, Emiliano Ricciardi, Pietro Pietrini and the Rome LOC
As you might know, the 25th OHBM Congress will come back to Italy, and this time we will be in Rome. The capital and largest city of Italy, Rome is one of the most visited cities in the world and is famous for its extensive, rich history. Delegates will travel from all corners of the world to visit the Eternal City for a week of neural cartography.
The 25th anniversary meeting will feature the most up to date research in the field of neuroimaging, using multimodal data and cutting edge analysis techniques with an increasingly strong focus upon machine learning and ‘big data’ approaches. OHBM also proudly promotes an increasingly open science environment.
The conference caters for all levels of researchers. This includes educational sessions for PhD students, postdocs and early career researchers, as well as the annual OHBM Hackathon, now a staple event that welcomes both new and established open science enthusiasts.
Given its long tradition in neuroscience, neurophysiology and psychology, Italy is well qualified to host such an important gathering of scientists from every corner of the globe. Indeed, it was the Italian physiologist Angelo Mosso’s conclusion that brain circulation changes selectively with neural activity that forms the basis of the powerful methodologies we now employ to explore the neural correlates of mental function. Today, Italy has a rapidly expanding neuroimaging community distributed across the whole country, and the 2019 Local Organizing Committee gathers together ‘brain mappers’ from the major Italian research centers, covering all methodological approaches of neuroimaging.
Rome was called “the Eternal City” by the ancient Romans, first of all because they believed that no matter what happened in the rest of the world, the city of Rome would always remain standing, and also because when the Roman Empire was new, Rome was already very old! Rome's history spans over two and a half thousand years. During this time it transformed from a small Latin village into the center of a vast empire, through the rise of Catholicism and the Italian Renaissance, to become the capital of today's Italy.
The historic center of the city is a UNESCO World Heritage Site with wonderful palaces, thousand-year-old churches, romantic ruins, opulent monuments, ornate statues and graceful fountains. Rome has a rich historical heritage and cosmopolitan atmosphere, making it one of Europe's and the world's most famous, influential and beautiful capitals.
Today, Rome has a growing nightlife scene and is also seen as a shopping haven, being regarded as one of the fashion capitals of the world. Modern Rome is captivating with its heady mix of haunting ruins, awe-inspiring art and a vibrant street life.
There are so many things to do and places to visit that your week in Rome will be intense!
Ancient Rome aficionados cannot miss the great Colosseum, the Circus Maximus and the Roman Forum. Those who would like to discover Baroque Rome have to visit Piazza Navona with its great fountains and the world-famous Fontana di Trevi.
You cannot leave Rome without visiting the Vatican City with its majestic museums, Saint Peter’s Basilica and the Sistine Chapel.
And what about the beautiful gardens of Villa Borghese? A great opportunity to switch off and take a stroll, ending your visit with a tour of the Borghese Gallery!
Then you can spend a great time shopping in the city center: go to Via del Corso for the major brands, to Via Condotti for the luxury brands and to Via del Boschetto for the independent boutiques. And it goes without saying, no shopping trip in Rome is complete without experiencing one of the city's weekly markets!
Italians, and Romans, often boast that their food is the greatest in the world; from the best-known and most iconic Italian foods, such as pizza or ice cream, to more local dishes such as “pasta all’amatriciana”, “supplì” or “maritozzi con la panna”… are you curious? We will not tell you what they are, because you have to come and taste them to discover how great real Roman food can be!
Reaching us is very easy. The Leonardo Da Vinci international airport operates daily flights to over 300 destinations throughout the world. The airport is also well connected to Rome's city center. There is the Leonardo Express, a train exclusively for airport passengers to/from Rome Termini railway station, leaving every 15 minutes with a journey time of 32 minutes.
The conference will be held at the Auditorium Parco della Musica, a large multi-functional arts complex designed by the renowned Italian architect Renzo Piano and located in the heart of Rome.
We encourage you to submit your abstracts as soon as possible (the deadline is 11:59pm EST on Wednesday, December 19). And what more can we say except… see you in Rome!!
For even more information, visit Rome’s official tourism website: http://www.turismoroma.it/?lang=en
By Ekaterina Dobryakova
Shubigi Rao, the Singapore-based artist whose works were presented at OHBM 2018, grew up surrounded by science. As a child, she owned and was fascinated by rare books from the 17th-20th centuries that explored science and natural history. Neuroscience has always mesmerized her, something she shares with brain mappers. Now Shubigi is a self-taught neuroscientist, with a neuroscience theory under her belt and art installations that often depict primordial ocean creatures with a complex central nervous system that are also reminiscent of sprouting dendrites and stained neurons.
We reached out to Shubigi Rao to get a behind-the-scenes look at her artistic thought processes:
Ekaterina Dobryakova (ED): You use many different mediums in your artwork and installations. Do you have a favorite technique and art form?
Shubigi Rao (SR): This is a great question - the reason I have employed diverse media is because for me the idea or concept is paramount, and if necessary I will teach myself a new medium or discipline if the idea demands it. This has been a lot of fun, but challenging sometimes when working with deadlines, as I don't have the luxury of getting lost in the wonders of a new form or field of knowledge. Since my current 10-year project involves the study of cultural destruction - its history and also why our species has a hostile relationship with knowledge - I've re-trained myself as a solo film-maker, and have been travelling around the world to document sites, events, people and oral histories. In terms of artistic medium I've also loved drawing (such a primal impulse and one that predates verbal/written language) and printmaking, especially intaglio and etching, but my current love is definitely film-making. I've written a fair bit about the relationships between these media, and I enjoy reading the neuroscience behind the drawing impulse etc.
ED: What was the most fascinating thing that you have learned during your studies of neuroscience?
SR: Almost everything is fascinating to me - from the very first human articulation to know the working of the brain, to studies on sea-slugs. I even find the politics behind the institutionalization of R&D and corporatization of research, and the problematics of it all, to be very urgent and important issues. I'm endlessly fascinated by current work in language acquisition in infants (and in other animals as well), and interspecies communication. To answer your question with a single example, I suppose it would be my first encounter with neuroscience, when, as a young adult, I wanted to understand how we see, especially how our brain processes visual information and can make 'sense' of abstract art, for instance. I still remember my sense of amazement at the decoding from V1 to the inferotemporal cortices (I was reading Hubel and Wiesel, I think). Also, Cajal's studies, of course, appealed to me greatly (as I grew up reading books on natural history and the science of the natural world, sometimes from outdated books of 18th-19th century naturalists and scientists), and I devoured his work and biographies.
ED: When I look at the works presented at OHBM 2018, I see ocean inhabitants such as the octopus and the jellyfish but these works also make me think about brain cells. What was your inspiration to create the works showcased at OHBM 2018?
SR: I've been particularly interested in interspecies communication, and also the way anthropomorphism occurs in popular retellings of scientific breakthroughs. The octopus is of course a subject of much current study and interest for its unique neuroanatomy. I've also been enjoying how it has been reimagined in popular imagination - all the way from Victor Hugo's infamous 'devilfish' in Toilers of the Sea, which created an indelible image of the octopus as monstrous, to its appearance at the famed Great Exhibition at the Crystal Palace (London) in 1851. Our human imaginations (so essential to the artist) also make us invariably anthropomorphic and often unable to extrapolate from observed animal behavior without affixing human attributes. It's also why the life cycle of the 'immortal' Turritopsis dohrnii has so seized public imagination. Invertebrates have often been classed as lower life forms, yet their neurological systems are amazing - the surprisingly complex nerve nets of siphonophores, their radial symmetry.
I make my work ambiguous and open, to allow the brains of the viewer to fill in the blanks, rather than passively look at an image. I hope the viewer will enter it, get lost in it, reimagine or re-contextualize it. For OHBM, I mixed fact and fiction. I was inspired by the way the human brain attempts to understand 'alien' or radically different intelligences and neuroanatomy, and the way we confabulate those gaps. This is also because of my lifelong study of how we look at nature.
I grew up in a forest - my parents left the city and took us to live in the jungles of northern India, where we learned how to 'read' interspecies communication between prey species, for instance, so we knew when a predator was on the move by the types of alarm calls of birds, monkeys, even insects. We developed a very intuitive appreciation for the lowliest of creatures, often disregarded in conservation efforts, for instance.
It's only recently that the cause of bees has been taken up, yet one has only to read Karl von Frisch's brilliant work from 1973 on bee communication to see the incredibly complex nature of their dance and the way that communication can only occur because of a social agreement on its codes. So, the social aspects of information processing are what I unconsciously and intuitively imbibed growing up in the wild.
All these elements feed back into the way I process disparate information, make connections, and interpret - which is what eventually led to my seeking out neuroscientific studies as a youngster, despite being disallowed from studying science at a higher level because of my gender.
ED: One can say that, just like artists, researchers have to start with an idea, an inspiration, that subsequently culminates in the writing of a publication or a work of art. Do you get ‘writer’s block’?
SR: Yes, I do, sometimes, but once I start I don't stop. My writer's block often comes from the sheer enormity of the subject and its associated bodies of knowledge: I am paralyzed, unable to decide where to begin. Of course, like most people, once I start it's off to the races, and I work in a fever of hyper-focus to the exclusion of everything else, often forgetting to eat. I recently finished 65,000 words in under 10 days, after being paralyzed with indecision for 7 months. So, a very uneconomical way of working!
ED: Many thanks Shubigi!
By Elizabeth DuPre
The Open Science Special Interest Group (SIG) is a relatively new organization within OHBM; however, it is responsible for several increasingly popular community initiatives including the hackathon and the open science room. As the Open Science SIG assumes new leadership this month, I sat down with the incoming chair, Kirstie Whitaker, to hear about her hopes for the upcoming year.
Elizabeth DuPre (ED): Today I’m here with Kirstie Whitaker, Chair of the OHBM Open Science SIG. Kirstie, can you first tell us about yourself?
Kirstie Whitaker (KW): I’m a research fellow at the Alan Turing Institute – the UK’s national research institute for data science and artificial intelligence. There’s a lot of research going on there, but one of the projects I work on is trying to incentivise reproducible research across all of data science. I’m a neuroscientist by training, and I did my PhD at UC Berkeley, followed by a postdoc in Cambridge at the Department of Psychiatry. I then had a one-year fellowship with the Mozilla Science Lab before I transitioned to working at the Turing Institute.
ED: It sounds like you’ve seen many aspects of neuroscience and data science, both in academia as well as in industry through your fellowship with Mozilla. Those can all lend very different perspectives on the thing we’re both passionate about: open science. Can you tell us your thoughts about open science following from those experiences?
KW: Open science, as you’ve said, can mean different things to different people. You can imagine our friends in the library sciences are extremely passionate about open access. We should all be passionate about open access and being able to read our colleagues’ work. There’s also a lot of work going on at OHBM using open data. That’s making science more efficient and allowing us to answer more interesting questions with different types of techniques – by harnessing different peoples’ data and sharing that with our colleagues.
There’s another aspect which is pretty prominent in neuroscience, with huge influence around the world, which is open source code. I write some analysis code and, importantly, I allow other people to use it – so in that sense it’s similar to open data – but they’re also able to see it and interrogate it. So instead of building a black box we’re building tools that you can look inside.
There’s also an additional angle of making sure that science is open to all people. This includes citizen science – and one of our hackathon organisers this year is Anisha Keshavan, who’s one of the coolest and most exciting citizen science people that I’ve ever worked with – which means breaking out of the ivory tower, and allowing everyone who’s interested in helping us understand the brain to productively take part.
It also means making sure that there are scientific career paths for people with diverse experiences and opinions. That means we allow women to succeed as well as men. We ensure that people from different cultural backgrounds, different races, different countries who speak different languages, are all given a fair shot at expressing their goals, and completing the analyses that they want to do.
So for me, open science is just doing science, and doing science well. But my particular passion is to ensure we are being diverse and inclusive.
ED: Over this past year you’ve served as chair elect while I’ve been secretary elect – and we’ve gotten to see the leadership do some amazing things. Anisha was the co-organiser for our hackathon. And this was the first time that the hackathon has sold out – so it was really exciting to see all the enthusiasm that the open science events are generating. We also had Felix Hoffstaedter organizing the open science room at the annual meeting, where we even decided we needed a bigger space.
And of course our current chair Chris Gorgolewski and secretary Matteo Visconti di Oleggio Castello have done a great job about communicating to the community what we’re so excited about. Given all this, now that you’re taking over as chair where would you like to take the SIG?
KW: I know, it’s such a brilliant and terrible problem to have sold out the hackathon! The other person we should mention is Greg Kiar, who co-organised the hackathon. He liaised with ethnographic researchers who specifically do research on hackathons to create a survey that asked attendees what they gained from the event, how they felt it accommodated more junior members, and importantly, how these events could be improved in the future. I’m so glad Greg conducted that survey - before we closed out the room on Saturday we all had 30 minutes to fill in the survey and answer its questions - and we’ll see the fruits of that survey in next year’s hackathon. He gave a brief overview, and one of the biggest themes was people being so excited and grateful that there were so many skills available – and that there were so many different levels of people there.
I think that the event selling out reflected that excitement. But selling out means we’ll have to confront some issues; in particular, we’re going to have to figure out if we want to keep the hackathon small and intimate or let everyone who wants to come attend. One of the big sells of a small event is that you can easily make some connections with individual people who can share their expertise with you or point you in the right direction. Once you get larger you effectively start building OHBM [laughs]. I mean, we’re the hackathon, we’re not trying to take over the entire conference, so we’ll have some interesting challenges about how we include everyone.
My goal is to think about culture change, and making sure we give credit to early career researchers who are doing excellent work that supports others. Historically, the incentive structure in academia has encouraged very sharp elbows: “To get to the top I’ve got to be number one. I’ve got to be uniquely better than everyone else.” One thing that really impressed me at this year’s OHBM conference was a presentation by JB Poline where he talked about the work that the community has brought together for a publishing platform where you don’t just publish traditional papers, but you might also publish code, data or tutorials. These are things that we all know are very useful, but that aren’t fully recognised. I’d love to see early career researchers get a bit more credit for that sort of thing.
I also think that the wider community should take back that spirit of the hackathon – the feeling in the open science room of these really helpful conversations and try and take that out into the OHBM community all year long. We have a Slack channel where you can get in touch with people, by pinging questions out. But I think it would be really interesting to see if we can solicit ideas from our community and actually get our members involved. It doesn’t have to be the SIG that puts on an event – it could be that we help our members make the connections (and we perhaps help out with a little funding).
One of the initiatives [Elizabeth] and I have been doing is the demo calls. There, we reach out to people and I sit on YouTube live and I ask people about their experiences with open source and their projects, and how others can get involved. Maybe those demo calls are useful and we can take them forward and keep them going. But maybe there are better ideas and that’s what I’d love to explore – how we can generate more ideas and bring them to light.
ED: I’m really excited to see where that goes. That leads into our recent round of elections…
KW: Yes! Traditionally there were just two members of the committee, and they’ve done a lot of work. Thank you to the previous leadership of the OHBM hackathon and the Open Science Room and the brainhack and everything - all the people who have run so many of these initiatives. It was a lot of work! I was very happy that we created quite a few more positions to bring in more people who were passionate and wanted to help nurture the open science community. For example, this year we realised that we didn’t have a treasurer position, and keeping track of all this money and paying for these things was a lot of work, so we’re introducing a new role to cover this need.
We’ve talked about my vision and my passion for open science. But one of the things that is so fun, and frightening, about open science and diversity is that you have to eat your own dog food; that is, to practice what you preach. The success of open science in general and the SIG in particular relies on bringing in new people, new points of view, and I’m looking forward to it.
ED: Yes, I’m looking forward to seeing everything that happens and our new initiatives. Thanks so much!
After our conversation took place, we concluded the most recent round of elections. We’re now excited to announce the new leadership joining the Open Science SIG:
Greg Kiar - Treasurer
Camille Maumet - Chair elect
Ana Van Gulick - Secretary elect
Sara Kimmich - Treasurer elect
Roberto Toro and Katja Heuer - Hackathon co-chairs
Tim van Mourik - Open science room organizer
Cameron Craddock - Council liaison
Look for a follow-up post where we find out more about their pathways into open science!
Danilo Bzdok heads the section for “Social and Affective Neurosciences” at the Department of Psychiatry, Psychotherapy and Psychosomatics at RWTH Aachen University in Germany. Using his dual background in neuroscience and data science, Danilo tries to reframe psychological questions as statistical-learning questions to generate new insights. His work on social cognition and psychiatry has led to innovative data-led perspectives on how humans navigate the social world and its neural substrates. In 2017, he was designated a “Rising Star” by the Association for Psychological Science (APS) in the USA. He is also a self-proclaimed potato chip gourmet and an excessive consumer of music, especially electronic and classical.
My first encounter with Danilo was unilateral, over the pages of Nature Methods’ Points of Significance section, where he published several introductory pieces on machine learning. His way of boiling down a complex topic into an accessible explanation was also at the heart of our next meeting. At ICM in Paris, he gave an institute lecture about the relation between mainstream statistics and emerging pattern-learning techniques in brain-imaging neuroscience. This led to a longer discussion afterwards, this time face to face... And this discussion is revealed here, where Danilo gives his views on big data, the changes in how we answer questions with data in everyday science, and some speculations on the future of neuroscience.
Tal Seidel-Malkinson (TSM): First, can you tell us about your career path. You started in medicine, then moved into basic research. What made you shift?
Danilo Bzdok (DB): It actually started when I was in middle school – my first intellectual passion was programming and computer science. I really liked composing logic using computer code. At roughly 15 I was fluent in half a dozen programming languages, such as 32-bit assembler, Pascal, and C++. But I felt early on that I was also intrigued by various other, completely different, things like philosophy, foreign languages, social sciences and neuroscience.
At that time, being mostly focused on the natural sciences felt like somewhat of a limitation to me. One thing I really liked about the way of thinking in philosophy, and still appreciate very much, is the close interplay between logic and language. I was however not fully convinced this was a very pragmatic career choice. At least in Germany, a degree in philosophy is not always something that keeps many doors open for the next steps in life. That’s why I eventually decided to study a conservative area that would give me a solid foundation. Medicine seemed to be a safe choice: it provides such a general education and also gives you a lot of options. You go through an intense learning experience that shapes your work ethic. I wanted to become active in scientific research, determined to move into brain science in particular. I therefore spent my early university years concentrating on neuroscience and psychiatry.
In the middle of my studies I wanted to get involved with research as soon as possible. This led me to work with Simon Eickhoff at the Institute of Neuroscience & Medicine at the Research Centre Juelich, who was an incredible mentor to me, and I also reached out to the department of psychiatry at the RWTH Aachen. I was lucky enough to be funded by the German Research Foundation (DFG) and to be part of an international research and training group (IRTG1328 on “Schizophrenia and autism”) with UPENN, USA. This particular department of psychiatry at RWTH Aachen University turned out to be active in brain-imaging research. Due to a series of lucky coincidences, I had the opportunity to go through an authentic research experience.
During the second half of Medical School I spent less and less time attending lectures, and instead tried to min-max the exams. Towards the end of my medical studies I was barely studying anymore. It then felt like a smooth transition into being a full-time researcher. At that point, I wasn’t ready to commit another >5 years to clinical specialization in psychiatry, which takes ~50-60 hours of your time per week and leaves less time for research.
I also learned a lot during a fantastic research stay working with Peter Fox and Angela Laird in San Antonio, Texas, USA, and I launched several collaboration projects with social cognition enthusiasts here in Germany, including Kai Vogeley, Leonhard Schilbach, and Denis Engemann. Together we conducted a series of neuroimaging studies on whether or not there are brain regions that may be uniquely devoted to social-affective processing - a direction of research which later pushed me to pursue ever more general systems neuroscience questions. By 2013, I had become convinced that whether human-specific neural systems exist -- particularly ones that might be devoted to human social interaction -- was at its heart a methodological and statistical question. Whether or not scientists can go beyond the cognitive terms that we have been using for decades in social and affective neuroscience, such as “theory of mind”, “affect”, or “empathy”, is a question that can be more readily wrestled with using certain data-analysis toolkits than others.
TSM: It’s clear that neuroscience nowadays increasingly requires an interdisciplinary set of skills. In your unique path you have acquired a broad set of skills from your degrees in medicine and maths and your PhDs in neuroscience and in computer science. How did you choose this path and, given this can’t be common training, what do you think early career researchers should focus on?
DB: I went through a journey of sometimes unconnected interests. It wasn’t always a conscious choice at a particular point. Essentially, I just went through several bouts of intense interest, getting absorbed in specific topics. That’s why, in retrospect, I am happy I somehow made it all the way through medicine. Despite changing areas of interest, at least I have an official degree that could help me give something back to society.
For years, I was not really sure how to cultivate and usefully combine my skills in language, logic and algorithms. When neuroscience later turned out to be such a vibrant interdisciplinary field, it was quite a relief to me. I found an opportunity to combine several different, what I like to call, thought styles. In neuroscience you can interface between diverging thought styles and approaches, and really get something out of it. That’s perhaps why I have a weakness for fuzzy topics like higher-order cognition, what domain-general function the TPJ may subserve, and what the “dark matter” of brain physiology - the default-mode network - may tell us about the nature of the human species. Several of these topics have a decent amount of soft-scienciness, at least to me – I then try to be principled and get at these research questions with algorithmic approaches that “let the data dominate”.
One thing that appears obvious to me in my activities as a supervisor, mentor and speaker: the data science revolution will depend on better quantitative literacy of the next generation of ambitious neuroscientists. We live in an increasingly quantified world. There are more quantifiable aspects about how we live and what we do; in normal life as well as when things go awry. There is a rapidly increasing opportunity to use algorithmic and computational tools, to generate quantitative insight and reach rigorous conclusions from the increasing amount of data at our hands.
Such modern regimes of data analysis may look disturbingly different from the traditional goals of statistics and how statistics is taught at university in many empirical sciences. In the data-rich setting, some traditional methods may have difficulty approximating the truth. That’s why I tried to structure my scientific education not only towards a solid neuroanatomical and neurophysiological understanding, in which I was much influenced by Karl Zilles and Katrin Amunts, but also towards a sense of probabilistic reasoning and quantitative methodology, in which I was much influenced by Bertrand Thirion, Gaël Varoquaux, and Olivier Grisel.
As almost every PI will tell you, most of their students will ultimately not end up in academia. I therefore believe that, at a more pragmatic level, getting an education with a solid data-analysis component can avoid pigeonholing PhD students or Post-Docs for a career as a scientist, and offer a broader portfolio of options to find jobs in industry and government after leaving academia.
TSM: Big data is a new opportunity for neuroscience, but equally it’s a new challenge. How do you see this development?
DB: In general, many scientific disciplines show a tendency to diversify into ever more specialized subdisciplines over time. So just because there are new opportunities doesn’t mean that the more established ways to conduct research and older techniques are rendered obsolete. Meticulously designed, hypothesis-guided experiments in carefully recruited participant samples will most likely remain the workhorse for generating new insight in neuroscience. What appears to be happening right now is that we are extending the repertoire of questions that can be asked and quantified.
Let me give one particular example. The increasing availability and quality of brain measurements will soon allow learning description systems of mental operations in health directly from the data themselves - a cognitive taxonomy directly extracted from brain measurements, and nomenclatures of disturbed thinking in mental disease. Such goals are likely to require combinations of massive amounts of richly annotated brain data and innovative pattern-learning approaches.
TSM: There’s a tendency towards moving from group analyses to predicting outcomes for individual participants, are our current tools reliable enough for that?
DB: Broadly, I can see two distinct and promising trends – on the one hand, scientists bring in a small number of subjects into the lab several times and acquire hours of brain scanning, which allows accessing a finer granularity of neural processes at the level of densely sampled single individuals. There are several well-known labs that now seriously go into this direction with a lot of success...
TSM: Do we need so much data on individuals because of variability of cognition or the SNR of fMRI?
DB: There are several aspects at play. Often, resting-state scans are still just 5-10 minutes. I think that may not be enough to robustly describe *all* aspects of neural activity changes in the brain that investigators may find interesting. This is the first trend: one pocket of the brain-imaging community now tries to go ever deeper in terms of subject specificity. It nicely complements the dominant agenda of conducting statistical tests on differences between pairs of experimental conditions or participant groups.
The completely different way to go beyond binary comparisons that I see is progress towards population-scale neuroscience. There is an increasing tendency towards extensive data collections with hundreds and thousands of indicators - demographic, neuropsychological and health-related items - from a maximum number of individuals. Such population neuroscience approaches will probably shed new light on variability patterns of brain biology, across distinct brain-imaging modalities, and bring into contact previously unconnected research streams. These people try to acquire as much information as possible that characterizes as many people as possible. The approach avoids strict a-priori choices as to the type of person or disease category to be distinguished and studied. One hopes that coherent clusters of individuals emerge in massive data. That again is a completely different perspective. This is a good setting, for example, to discover, quantify, and ultimately predict subclinical phenotypes in people - individuals who deviate from the normative population in some coherent way, without being “dysfunctional” in society.
It is my impression that both highly-sampled single participants and richly phenotyped participant populations are two exciting upcoming directions that hold a lot of promise. Both these research agendas can probably complement and inform experimental studies of ~30 people with well-chosen hypotheses and dedicated experimental designs.
From a more statistical perspective, there is an orthogonal aspect. For the majority of the 20th century, researchers in biomedicine acquired and analyzed “long data”, with fewer variables than individuals. Today neuroscientists increasingly need to tackle “wide data”, some call it “fat data”, with sometimes a much greater number of variables than individuals. Having extensive “found” or observational data from general-purpose databases is where machine-learning algorithms and data science come into play. These more recently emerged statistical tools offer new strategies to search through abundant yet messy data. It is an exciting future perspective to integrate both – the highly sampled subjects and population neuroscience.
TSM: As you said both approaches require collecting, logging and archiving big datasets – this requires a lot of resources. Do you think this might increase the gap between well-funded and less well-funded labs?
DB: That’s a bit political, I’ll try to give a neutral answer. When you look at the Human Connectome Project (HCP) – there was a lot of excitement when it established itself as a trusted reference dataset for the brain-imaging community. That allowed new methodological approaches to be compared against each other in a more principled fashion. Yet, looking at the many thousand imaging neuroscientists on the planet, how many of those have really published a paper with the data from the HCP project? Actually, not that many.
Many of the existing HCP publications appear to often be methods-focused papers. I’m not saying that’s not interesting. But I think many scientists would perhaps have expected more discoveries on brain structure and function based on this unique data resource. One reason why this is surprising to me is that many of the classical software libraries still scaled fairly well to the HCP 500 release; just having to wait a bit longer for the results. Even with the full 1,200 subjects you could still scale to the higher sample size using essentially identical software and analysis pipelines that were already set-up in the lab.
We now have the UK Biobank Imaging, CamCAN, ENIGMA, and many other rich datasets. Given that HCP data were not primarily used by labs to answer cognitive neuroscience or neurobiological questions on brain connectivity, I expect that there will probably be an even bigger gap between the majority of imaging neuroscientists and those people who capitalize on the new generation of complex datasets. There will be even fewer labs that have a vested interest in and a daily exposure to methodological techniques needed to leverage these burgeoning data repositories.
TSM: This transition to big data requires a change in our methodologies and ways of thinking. How do you think this cultural shift should be achieved?
DB: Let’s go back to the two larger trends we discussed before – using densely sampled participants and population neuroscience to understand the healthy and diseased brain. Big-data methodologies are likely to play an important role in gaining this insight. We’ll need a shift in our everyday data-analysis practices and how we design and run our labs. We’ll need more computational savoir-faire and more people from STEM backgrounds. But that’s not enough. There also needs to be a more organic and fluid conversation between analysts and the PIs who have these people on their payrolls. More exchange in both directions will help us to negotiate between the research questions and optimal algorithmic methods.
A big issue, for instance, already is and will increasingly become the “big-data brain drain”: many people with quantitative aptitude and a proven data-analysis skill-set are highly sought after and may be aggressively headhunted by companies for several times higher salaries than what we in academia can offer. For instance, one of my students with a background in physics recently got recruited by McKinsey Analytics in London.
To tackle some of the ambitious questions we mentioned, we’ll also need better infrastructure than many universities today offer us neuroscientists. We simply need more money for this expensive computational architecture and its sustained maintenance. Now, some people may ask why we don’t just use cloud computing. And sure, Amazon AWS and other cloud-based solutions are attractive options. But it’s worth considering two problems: first, you have data-privacy issues when you have personal data from individuals. In many research institutions, researchers may not be allowed to upload detailed information about individuals to servers in a different country. Second, there is a bureaucratic problem: you cannot easily estimate in advance how much money you need for your particular cloud-computing jobs. Many finance departments, however, allocate money on a per-year basis, at least at German universities.
Last but not least, there’s the educational issue: how should we train young neuroscientists? It’s not clear how in this already very interdisciplinary teaching schedule, with theory of neuroscience, molecular biology, anatomy, physiology, classical statistics, genetics, brain diseases, and so forth, we could add multi-core processing, high performance programming, and so on. There are so many things that a 21st century neuroscientist is expected to absorb. It’s not clear where you’ll find people with such a multi-faceted mind who can be incentivized to, and are able to, embrace this breadth.
TSM: So perhaps we need to be collaborative? It’s perhaps not realistic to expect single people to have all these skills.
DB: It’s probably not realistic, but still, we will need some of these “glue people”. It’s not clear to me where we should expect them to come from. That’s why my feeling is that the shape and form of scientific education may play an increasingly important role in neuroscience.
TSM: Big data has been seen by some as a solution to the replication crisis – and another approach has been to use meta-analysis. You’ve recently published a meta-analysis on theory of mind. What did you learn from this, and what should we be careful about when applying meta-analysis?
DB: Several decades ago, the social sciences went through a crisis similar to the replication crisis we are experiencing now. Many people weren’t sure how to go forward, as there was a lot of uncertainty about how robust and valuable the abstract constructs were that these empirical scientists were studying. An important contribution to providing justification for these mental and social constructs came from quantitative meta-analysis.
Quantitative meta-analysis is a very useful tool to identify convergence across isolated findings and thus solidify scientific areas, especially if you know you will be facing small effects and a lot of noise - which is true for the social and psychological sciences, and probably not wrong for brain imaging. So you can either shift to a different area of research with more tractable problems, or adapt to the situation that we have, where meta-analysis is one key solution to cope with the idiosyncrasies of a broad range of studies. It will unavoidably mask some subtle effects from single experiments. But you can see through the noise – distinguish the forest from the trees.
TSM: Presumably it also helps to collaborate with multi-centre studies.
DB: Sure. Many young students getting into neuroscience may perhaps still envision the lonely genius who is knowledgeable about so many areas of neuroscience. The biggest steps forward may come from *teams*. Sets of people who learned to genuinely work together; not despite but because they are drastically different in their knowledge and thought styles. If they succeed in aligning their thinking and efforts towards a common goal in neuroscience research, non-linear progress probably becomes much more likely.
In terms of data-collection, it’s worth comparing brain-imaging to genetics or genomics. Several trends in imaging neuroscience today may have been preceded in a similar form already 5-10 years ago in genomic research. There, many data collection collaborations were foundational and helped the research community to see through the noise more clearly. Imaging neuroscience is becoming larger and more international with increasing numbers of labs, so there is greater potential for people to work together. Intense and bidirectional collaboration between drastically different disciplines may be a prerequisite to render some of the ambitious questions actionable that we had the pleasure to discuss today. It also means you need people skills, on top of everything else!
TSM: I want to thank you for the nice chat – and it’s definitely an exciting, interesting era in neuroscience!
By Danka Jandric, Jeanette Mumford & Ilona Lipp
Planning a resting state functional MRI (rs-fMRI) study and analysing the resulting data can feel overwhelming. There seems to be an endless number of options at every stage of the experiment. Decisions need to be made about how to acquire data in an optimal way, what preprocessing and noise correction pipelines to employ, and how to extract the most meaningful metrics. Many strategies have been published and are available in software packages. However, there seems to be little consensus about what works best and, even more importantly, about how to judge whether something “works” or not. The choice of method often depends on the specifics of the data and the research question being addressed, but can equally often seem arbitrary. To help guide you through this rs-fMRI jungle, we walk you through all stages of a resting state experiment. We do this by addressing questions that researchers are likely to have… or should have! While we do not provide definitive answers to these questions, we try to point out the most important considerations, outline some of the available methods, and offer some valuable video resources from recent OHBM education courses, to help you make informed decisions.
What do I need to consider when planning my experiment?
Running a rs-fMRI experiment seems easy enough. Technically, all you need is to put your participant in the scanner, tell them to rest and run a standard BOLD sequence. However, it may be worth thinking about your analysis strategy beforehand, so that once you start analysing your data you do not suddenly wish you had…
How do I know my data quality is good?
One of the most common questions asked when evaluating data is how to tell if the data are “good” or not. The answer to this question, regardless of the data, is to actually look at your data. Although this task is somewhat easy with behavioral data, when faced with hundreds of thousands of time series for a single subject, it is less clear how we can do this. Luckily Jonathan Power has not only developed tools we can use with our own data but also takes us through data inspection in his educational talk from 2017, “How to assess fMRI noise and data quality”.
How do I improve my data quality?
fMRI data are noisy and this is not going to change any time soon, so we have to deal with it somehow. Acceptance and hoping for the best is a strategy, but one that can lead to problems further down the line. If there is a lot of noise compared to the signal of interest, individual subjects’ resting state networks will not look clean, and the power to detect group-level effects may be low, so you might not find anything interesting in your group-level analysis. Just as importantly, if there are systematic differences in noise sources between the cohorts you are studying, then seemingly interesting effects can simply be a result of group differences in noise, such as head motion. If you ignore the noise problem, you might spend days writing a paper with a game-changing title, only to be hit by reality when a reviewer asks you to quantify group differences in your noise. Better to be aware of, and account for, noise from the start, right? But this is easier said than done…
What causes noise in rs-fMRI data?
Resting state analysis generally deals with correlations between voxel time courses. If a noise source affects several voxels in similar ways, it can induce temporal correlations that are independent of neural co-fluctuations. The aim of noise correction is therefore to remove as much noise-related variance from the BOLD signal as possible. To work out the best possible noise correction strategy, we first have to know what the sources of noise in BOLD time series are.
In his video, Cesar Caballero Gaudes gives a comprehensive overview of the most common sources of noise, such as head motion (from minute 05:11), respiratory and cardiac variation (from minute 05:53), and hardware (from minute 11:11), and their effects on the data. Cesar also gives an overview of some of the denoising strategies that are available to tackle different types of noise.
How can I correct for noise when I have information about the noise sources? The nuisance regression approach:
One denoising approach is to record information about some of the potential noise sources during the scan, such as physiological recordings or head motion parameters. These can then be used to figure out to what extent our BOLD time series can be explained by the noise sources, by including nuisance regressors in a general linear model. Generally, we probably all agree that the more high-quality information we have on what happened during our scan, the better. One may also think that the more nuisance regressors we employ to regress out from our BOLD time series, the better our clean-up… but is that so? In her video, Molly Bright gives us deeper insight into the nuisance regression approach to clean up noise.
In some smart simulation analyses (from minute 12:30), Molly shows that simply adding as many nuisance regressors as possible may not be the best strategy, as we may accidentally remove a lot of signal. Also, we need to be careful about time-lagging our regressors in order to account for the delay between a physiological change and the BOLD response. Molly explains why trying to identify that delay using the rs-fMRI data can be tricky, and why adding a breath-hold at the end of your acquisition may be a good idea (from minute 20:16).
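To make the nuisance regression idea concrete, here is a minimal numpy sketch (a hypothetical helper, not any particular package’s implementation) that fits all voxel time series against a nuisance design in one least-squares step and keeps the residuals as the "cleaned" data. Time-lagged regressors, as Molly discusses, would simply be appended as shifted columns of the design matrix.

```python
import numpy as np

def regress_out_nuisance(bold, nuisance):
    """Remove nuisance-explained variance from voxel time series via OLS.

    bold:     (n_timepoints, n_voxels) BOLD time series
    nuisance: (n_timepoints, n_regressors), e.g. 6 motion parameters
    """
    # Include an intercept so each voxel's mean is modelled explicitly
    X = np.column_stack([np.ones(nuisance.shape[0]), nuisance])
    # Fit all voxels at once: beta minimises ||X @ beta - bold||^2
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return bold - X @ beta  # residuals = "cleaned" data
```

By construction the residuals are orthogonal to every nuisance regressor, which is exactly why over-generous designs can also strip out signal of interest.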
Molly also demonstrates that very commonly applied preprocessing steps, such as bandpass filtering, can have effects on our data that we might not have predicted (from minute 16:30). While introducing a few strategies to make the nuisance regression approach for noise corrections more valid – such as prewhitening (from minute 12:00) - she stresses the fact that there is not one optimal strategy and that it is very difficult to tell whether noise removal “has worked”. The take-home message here is probably that as a field, we need to work towards a better understanding of the BOLD profiles of different noise sources. Additionally, integrated strategies are needed to deal with the complicated interplay between different noise sources, such as between head motion and physiological noise.
How can I correct for noise when I do not have information about the noise sources? The ICA approach:
While the success of nuisance regression depends on having good quality nuisance regressors in the first place, data-driven approaches are available that can be applied to any dataset, the most common strategy being independent component analysis (ICA). ICA for noise removal is based on the separation of the BOLD time courses into spatial components, and classifying each component into signal vs. noise. This is typically done on a subject-by-subject basis. The time courses of the noise components can then be regressed out or accounted for during further analyses.
Ludovica Griffanti gives a comprehensive introduction to ICA for noise removal and highlights the difficulty that often lies in the signal vs noise classification that is performed by “experts”. Whilst semi-automated and automated approaches are under development in order to make this classification more objective, Ludovica makes the strong point that ultimately these algorithms or at least their validation are based on “gold-standard” manually labelled data. While there is no clear consensus yet on what signal and noise components look like, Ludovica provides us with some guidance and rules that can help with classification and are a first step towards this consensus.
How can multi echo data help with noise correction?
The vast majority of BOLD data has been acquired with a single echo time, optimised to the average T2* of grey matter. However, if you have not yet started your experiment, you might want to consider acquiring data at several echo times. Prantik Kundu explains why: BOLD and non-BOLD signals have different sensitivities to echo time, so having information about the actual decay can help distinguish signal of interest from noise (from minute 05:10).
Prantik provides a few beautiful examples on how multi-echo fMRI data can be combined with ICA-based approaches for noise clean-up, calculating parameters that objectively inform about how similar the components’ behaviour is to BOLD vs non-BOLD related signal (from 11:43). In the grand scheme of things, the multiple echo times used are still quite short, so acquiring this extra information would not necessarily increase your total acquisition time. On a side note, even data from one additional short echo time can provide information about some noise sources, as described in a study by Bright and Murphy (2013). Be aware that certain noise sources, such as slow physiological changes yield ‘BOLD-like’ noise (which we can treat as noise or as signal of interest, depending on our perspective), as they interact with the cerebrovascular system. Multi-echo data does not help with correcting for this type of noise.
Why go through all that pain? Can I not just do a simple global signal regression for noise correction?
A cheap and easy (and still very widely used) way of performing ‘noise correction’ is global signal regression. Here, the average signal across the whole brain (or all gray matter voxels, or all cortical voxels) is calculated and regressed out of each voxel time series, under the assumption that the global signal mostly reflects combined noise from various sources. The advantage of this approach is that it can remove artifacts that are hard to get rid of with other noise correction methods. However, global signal regression is highly controversial in the field, with the main points of criticism being that the global signal has neuronal contributions and that global signal regression shifts the correlation coefficients and induces negative functional connectivity. In her video, Molly Bright briefly touches on this (from minute 24:43), and refers to a recent 'consensus paper'. An alternative to regressing out the global signal is to use the signal from white matter or CSF, as briefly described in Cesar’s video (from minute 20:00). If you are interested, also see his recent paper.
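The mechanics of global signal regression are simple enough to sketch in a few lines of numpy (a hypothetical helper for illustration, not a toolbox implementation):

```python
import numpy as np

def global_signal_regression(bold):
    """Regress the whole-brain mean time course out of every voxel.

    bold: (n_timepoints, n_voxels) array, e.g. all grey matter voxels.
    Returns residuals after removing intercept + global signal per voxel.
    """
    gs = bold.mean(axis=1)                       # the global signal
    X = np.column_stack([np.ones_like(gs), gs])  # intercept + global signal
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    return bold - X @ beta
```

After this step every voxel’s residual time series is uncorrelated with the original global signal, which is precisely why the distribution of correlation values shifts and apparent negative connectivity can appear.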
How should rs-fMRI data be preprocessed?
Resting state fMRI data can largely be preprocessed in the same way as data from a task-based fMRI acquisition (for a refresher on steps we recommend the slides from the educational course from OHBM 2016). As Molly pointed out, some of the “standard” preprocessing steps, such as bandpass filtering, can have unexpected effects on rs-fMRI data. As rs-fMRI data does not have strong task-driven signal changes, it is generally more susceptible to noise and probably to anything we do to the data, so be wary of that.
As described above, there are strategies for tackling noise, such as physiological artifacts, in the preprocessing pipeline. Some good pointers, including Cesar Caballero Gaudes’s video on denoising, have been outlined in the previous section. In addition, in 2016 Rasmus Birn, an expert on the influence of physiological noise on the BOLD signal, gave a thorough overview of physiological noise and approaches to remove it.
How can I analyse the data to find meaningful resting state networks?
Once your data is preprocessed, denoised and you are confident that it is in good shape, you will want to get on with the exciting part – identifying resting state networks. When done properly, resting state analysis can show us large-scale networks in the ‘brain at rest.’ What defines these networks are correlated temporal patterns across spatially distributed regions: each network has a time course that is consistent across its regions but distinct from that of other resting state networks.
The aim of rs-fMRI analyses approaches is to use the time courses of brain regions to decompose the brain into resting state networks. Several techniques exist, with the two most common being seed-based correlation analysis (SCA) and independent component analysis (ICA).
In his video, Carl Hacker gives a nice overview of both SCA and ICA. He introduces the two methods (from minute 1:12) and identifies the main differences between the approaches (from minute 4:15). Carl also discusses how to identify RSNs from seed-based mapping (from minute 6:25), and how the brain can be parcellated using ICA (from minute 13:35). While SCA uses the time series of an a priori selected seed region in order to identify whole brain functional connectivity maps of that region, ICA decomposes data from the whole brain into the time courses and spatial maps of the resting state signals, called independent components (ICs). SCA is a useful method to answer questions about the functional connectivity of one specific region. However, the drawback is that it only informs about connectivity of this region. On the other hand, the numerous ICs that you get from ICA are defined as a collection of regions which have maximal spatial independence but co-varying time courses, thus showing networks across the whole brain that have synchronous BOLD fluctuations when the brain is not performing a task.
In healthy subjects, SCA and ICA have been shown to produce moderately corresponding functional connectivity information, and the choice between them is likely to be guided by the specific research question. Note that the focus of Carl’s video is parcellation of the brain. However, many concepts and principles also apply to other types of analyses. Read more about these two methods in Cole et al. (2010) and Smith et al. (2013).
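At its core, seed-based correlation analysis boils down to correlating one seed time course with every voxel. A minimal numpy sketch (a hypothetical helper; real pipelines operate on denoised 4D images via a masker):

```python
import numpy as np

def seed_correlation_map(bold, seed_ts):
    """Pearson correlation of a seed time course with every voxel.

    bold:    (n_timepoints, n_voxels) cleaned BOLD time series
    seed_ts: (n_timepoints,), e.g. the mean time course of an
             a priori seed region
    """
    b = bold - bold.mean(axis=0)
    s = seed_ts - seed_ts.mean()
    r = (b * s[:, None]).sum(axis=0) / (
        np.sqrt((b ** 2).sum(axis=0)) * np.sqrt((s ** 2).sum())
    )
    return r, np.arctanh(r)  # r and Fisher z (handier for group statistics)
```

The Fisher z-transform is commonly applied before group-level statistics because it makes the sampling distribution of the correlations approximately normal.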
How do I interpret ICA components?
If you have run ICA on your resting state data, your next task will be to interpret the output: a number of spatial maps showing regions with spatial independence but co-varying time courses, called independent components (ICs). How many ICs you get depends on the parameters you set when running the ICA, but it is typically a few dozen.
The first step when interpreting the ICs is to determine whether they are signal or noise. Because ICA is data-driven, it does not ‘filter out’ noise, but it can separate neural signal from non-neural signal, i.e. noise, so it is important to classify the components correctly as either signal or noise.
So how do I distinguish between signal and noise in extracted ICs?
In her video, Ludovica Griffanti discusses how RSNs and noise can be distinguished. She provides an overview of component classification approaches, including manual and automatic classification approaches (from minute 2:58). Importantly, Ludovica describes the characteristics of signal and noise components and gives examples of both (from minute 5:20). Ludovica’s key message is that the aim of classification is to retain as much signal as possible, so if you are unsure if a component is signal or noise, keep it in as signal. She also makes the point (from minute 19:00) that a number of factors relating to participants, MR acquisition and preprocessing affect IC characteristics and discusses these briefly. The classification approach discussed in Ludovica’s video is very similar when classifying ICA outputs from single-subject data and group level ICA, but there are differences. For an outline of these and for a more thorough discussion of manual classification of ICA components, please see Griffanti et al. (2017).
How do I identify RSNs from ICs classified as signal?
There are a few approaches to determining which networks signal components correspond to. Some ICA toolboxes provide spatial templates that can be compared to the ICs. But perhaps the most common approach is manual labelling based on known anatomy. The spatial patterns and time courses of many common resting state networks (RSNs) have been described (e.g. for labelling RSNs from group-level data see Beckmann et al. (2005) and De Luca et al. (2006)).
A further option for IC classification is the use of automated RSN classification techniques. In his video, Abraham Snyder gives an overview of how machine learning can be used to classify RSNs based on pattern recognition (minutes 28:50-33:00).
What is this thing called dual regression?
ICA is typically done with group data and produces spatial maps that reflect the group-average functional connectivity. However, individual variability in IC topography is often of interest, for example when making comparisons between groups. A process called back-reconstruction is therefore used to obtain each individual’s time courses for the ICs obtained from the group-level ICA, which are then correlated with each voxel to obtain subject-specific spatial maps. Dual regression is one available back-reconstruction method. In his video, Carl Hacker gives a brief overview of how it works (from minute 19:38).
If you are interested, Erhardt et al. (2011), describe the principles of several back-reconstruction methods, including dual regression.
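The two regression stages of dual regression are compact enough to sketch directly. This toy version uses plain least squares and omits the variance normalisation that some implementations optionally apply between the stages:

```python
import numpy as np

def dual_regression(bold, group_maps):
    """Back-reconstruct subject time courses and maps from group ICA maps.

    bold:       (n_timepoints, n_voxels) one subject's cleaned data
    group_maps: (n_components, n_voxels) group-level spatial maps
    Stage 1 (spatial regression):  group maps -> subject time courses
    Stage 2 (temporal regression): time courses -> subject spatial maps
    """
    ts, *_ = np.linalg.lstsq(group_maps.T, bold.T, rcond=None)
    ts = ts.T                                   # (n_timepoints, n_components)
    maps, *_ = np.linalg.lstsq(ts, bold, rcond=None)
    return ts, maps                             # maps: (n_components, n_voxels)
```

Because all components enter each regression jointly, shared variance between networks is partitioned rather than double-counted, which is one advantage of dual regression over correlating each map separately.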
What metrics can I extract from the rs-fMRI analyses?
Local activity metrics:
Even before running a network analysis on the rsfMRI data, such as SCA or ICA (see above), two useful metrics can be derived from the data, ALFF and ReHo.
Amplitude of Low Frequency Fluctuations (ALFF) measures the magnitude of low frequency oscillations (0.01-0.1 Hz) in the BOLD signal within a region. The fractional ALFF (fALFF), a complementary metric, measures the contribution this low frequency band makes to the whole recorded frequency range. Both metrics quantify the amplitude of brain activity in specific regions, but their interpretation is difficult. Fractional ALFF has been shown to depend on the vascularisation of the brain, much like the resting-state fluctuation amplitude (RSFA), a closely related measure that is available from any rs-fMRI scan but has often been interpreted differently. Physiological mechanisms in rs-fMRI, including vascular effects, are still not fully understood, which makes the interpretation of measures linked to cerebrovascular characteristics particularly difficult.
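To make these definitions concrete, here is a minimal numpy sketch of ALFF and fALFF based on raw FFT amplitudes (a hypothetical helper; published pipelines add detrending, filtering, and normalisation steps):

```python
import numpy as np

def alff_falff(bold, tr, low=0.01, high=0.1):
    """ALFF and fractional ALFF from voxel time series.

    bold: (n_timepoints, n_voxels); tr: repetition time in seconds.
    ALFF  = summed spectral amplitude in the low-frequency band.
    fALFF = that amplitude divided by the amplitude over all frequencies.
    """
    n = bold.shape[0]
    freqs = np.fft.rfftfreq(n, d=tr)
    amp = np.abs(np.fft.rfft(bold - bold.mean(axis=0), axis=0))
    band = (freqs >= low) & (freqs <= high)
    alff = amp[band].sum(axis=0)
    falff = alff / amp[1:].sum(axis=0)  # skip the DC term
    return alff, falff
```

A voxel dominated by slow fluctuations would have fALFF near 1, while one dominated by high-frequency (e.g. physiological) fluctuations would have fALFF near 0.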
The other common rs-fMRI metric is that of regional homogeneity, or ReHo. ReHo is a voxel-based measure of regional brain activity, based on the similarity of the time-series of a given voxel and its nearest neighbours. It quantifies the homogeneity of adjacent regions, to provide information about the coherence of neural activity of a specific spatial region.
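ReHo is usually computed as Kendall’s coefficient of concordance (KCC) over a small voxel neighbourhood. A numpy sketch for a single neighbourhood (a hypothetical helper; in practice this is evaluated for every voxel in the brain):

```python
import numpy as np

def reho_kcc(neighborhood):
    """Kendall's coefficient of concordance for one voxel neighbourhood.

    neighborhood: (n_timepoints, m_voxels) time series of a voxel plus its
    neighbours (e.g. m = 27 for a 3x3x3 cube). Returns W in [0, 1]; W = 1
    means all neighbours rank the timepoints identically.
    """
    n, m = neighborhood.shape
    # Rank each voxel's values over time (double argsort; assumes no ties,
    # which effectively holds for continuous fMRI data)
    ranks = neighborhood.argsort(axis=0).argsort(axis=0) + 1
    R = ranks.sum(axis=1)                  # summed rank at each timepoint
    S = ((R - R.mean()) ** 2).sum()
    return 12 * S / (m ** 2 * (n ** 3 - n))
```

Because it is rank-based, this measure is insensitive to the amplitude of each voxel’s signal and captures only how similarly the neighbouring time courses rise and fall.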
Thus, both ALFF and ReHo give information about regional neural activity and have been shown to have high values in, for example, the default mode network regions during rest, indicating that they can point to the regions that play central roles in resting state networks. Because they provide information about regional neural activity at rest, both ALFF and ReHo can be used to determine an ROI for SCA.
Functional network metrics:
However, ALFF and ReHo are metrics of local neural activity, and are thus limited in their ability to provide information about large resting state networks. Network analyses therefore tend to focus on functional connectivity measures.
SCA and ICA, discussed above, both offer measures of functional connectivity within the brain. Both calculate the correlation of time series between voxels in the brain to produce spatial maps of Z-scores for each voxel. These scores reflect how well the time series of each voxel correlates with the time series of other voxels and are a measure of functional connectivity. In SCA, the Z-scores reflect the correlation of each voxel with the average time course of the seed region, while in ICA the Z-scores reflect the correlation of each voxel with the average time series of the respective IC. Dual regression can be run with both SCA and ICA to enable the investigation of individual and group level differences in functional connectivity.
A good overview of the metrics described above is provided in Lv et al. (2018).
A more recent metric derived from rs-fMRI data is that of functional homotopy. Functional homotopy shows the synchrony of spontaneous neural activity between geometrically corresponding, i.e. homotopic, regions in the two hemispheres. It provides a measure of connectivity between corresponding interhemispheric regions, and can be used to determine regional versus hemispheric information processing.
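Given time series from mirrored voxel pairs, functional homotopy reduces to a per-pair correlation. A short numpy sketch (a hypothetical helper, assuming the data have already been registered to a left-right symmetric template so that columns correspond across hemispheres):

```python
import numpy as np

def homotopic_connectivity(left_ts, right_ts):
    """Correlate each left-hemisphere voxel with its right-hemisphere mirror.

    left_ts, right_ts: (n_timepoints, n_voxels); column i of right_ts is the
    mirrored counterpart of column i of left_ts. Returns one Pearson r per
    homotopic voxel pair.
    """
    l = left_ts - left_ts.mean(axis=0)
    r = right_ts - right_ts.mean(axis=0)
    return (l * r).sum(axis=0) / (
        np.sqrt((l ** 2).sum(axis=0)) * np.sqrt((r ** 2).sum(axis=0))
    )
```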
Chao-Gan Yan asks whether these different measures of resting state functional connectivity show unique variance, and discusses the concordance among some of these metrics and also global connectivity (a graph theory measure, please see the next section), by drawing on work from his research group.
It is important to remember that most measures of resting state functional connectivity are based on correlational analyses and thus do not tell us anything about how regions of the brain influence the activity of other regions. It is possible to model the relationships between observed patterns of functional connectivity in order to draw inferences about such neural influences, an approach called effective connectivity, which can be estimated with, for example, Dynamic Causal Modelling. In his video, Karl Friston describes how we can use effective connectivity to infer causality from observed connectivity (minutes 0:57 to 23:07).
How can graph theory be applied to resting state data?
More advanced metrics can be derived from rs-fMRI data using graph theoretical analysis approaches. Graph theory is a mathematical method for mapping all the brain’s connections by depicting them as a graph consisting of nodes and edges. When graph theory is applied to rs-fMRI data, the nodes are often large-scale brain regions, and the edges represent the functional connectivity between them. The great advantage of graph theory over other measures of functional connectivity is that it offers a way to quantify the properties of large, complex networks.
Alex Fornito gives an excellent introduction to graph theory in his video. He discusses the rationale for using graph theory (minutes 0:55 to 3:39), before going on to give a history of graph theory (minutes 3:39 - 11:54). Then, Alex describes how network models can be created and shown as graphs (minutes 11:54 to 16:53), with a focus on defining nodes and edges. He describes how edges can be defined using fMRI data, including the potential problem of relying on the time series correlations that underpin functional connectivity (minutes 19:16 - 24:43). Finally, the construction of the graph is described (minutes 24:43 - 28:55).
Alex Fornito discusses several approaches to defining the nodes of a network. One of these is parcellation of the brain. The brain can be parcellated from rs-fMRI data through either SCA or ICA, as described by Carl Hacker.
Once a functional connectivity matrix has been created, either from brain parcellation or the components obtained from ICA, there are two options for deriving metrics. The first is to simply compare the functional connectivity matrices between two or more groups of participants. This approach can provide useful information about how the variable of interest, such as a disease, affects the connectivity between or within resting state networks, and has been used to characterise functional connectivity in diseases such as schizophrenia and autism. The other option is to create a graph from the functional connectivity matrix and study it with graph theory.
However, because functional connectivity matrices show correlations between the time series of defined brain regions, either approach is potentially susceptible to spurious or weak connections, for instance due to noise. One way to address this is to apply a threshold that removes the connections that fall below that threshold. Andrew Zalesky gives an introduction to network thresholding and an overview of how it is performed between 0:00 and 16:40 minutes of his video. He also provides an overview of the type of measures that can be extracted from brain graphs, with a focus on comparisons of edge strength (minutes 16:40 to 19:36).
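As an illustration of proportional (fixed-density) thresholding, here is a hypothetical numpy helper that binarizes a precomputed connectivity matrix and returns node degree, one of the simplest hub-related graph measures:

```python
import numpy as np

def threshold_graph(fc, density=0.1):
    """Keep only the strongest edges of a connectivity matrix.

    fc: (n_nodes, n_nodes) symmetric correlation matrix. Retains the top
    `density` fraction of unique edges, returning a binary adjacency
    matrix and each node's degree.
    """
    n = fc.shape[0]
    iu = np.triu_indices(n, k=1)          # unique edges, diagonal excluded
    weights = fc[iu]
    k = int(density * weights.size)       # number of edges to keep
    cut = np.sort(weights)[-k]            # weight of the k-th strongest edge
    adj = np.zeros_like(fc, dtype=int)
    keep = weights >= cut
    adj[iu[0][keep], iu[1][keep]] = 1
    adj = adj + adj.T                     # symmetrize
    return adj, adj.sum(axis=0)           # adjacency matrix, node degrees
```

Fixing the edge density (rather than an absolute correlation cut-off) keeps the number of edges constant across subjects, which matters because many graph metrics depend strongly on how dense the graph is.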
Some regions of the brain are more strongly connected with others, and tend to be considered network hubs. Metrics related to network hubs are among the most commonly used in graph theoretical analysis. Martijn van den Heuvel discusses network hubs and the metrics associated with them (from about 1:30 minutes).
An extensive list of graph theory metrics and what they tell us about neural networks can be found in Rubinov and Sporns (2010).
For those interested, there is a small collection of videos on graph theory from last year’s presentations at the OHBM conference, including those discussed in this post.
What do the resting state networks actually show?
How do you interpret findings from your resting state analysis? Well, first, it is important to consider the biological function of the correlated temporal patterns. Unfortunately, it is not as simple as defining it as ‘activity during rest.’ RSNs are collections of brain regions that have synchronous BOLD fluctuations, but the source of the signal has not been unequivocally established. While there is strong evidence to suggest that the signal is neural, there is still ongoing debate about the extent to which it may be influenced by non-neuronal noise, such as respiratory and cardiac oscillations. However, the fact that rs-fMRI analysis results have been reproduced even when applying conservative physiological corrections across both individual subjects and groups points to a largely neural basis of the rs-fMRI signal.
So what does the functional connectivity mean? In purely methodological terms it is the statistical correlation of two time series. It has been suggested that such correlations have arisen as a result of neural populations that are active together to perform a task and have therefore ‘wired’ together. The rs-fMRI signal reflects their spontaneous neural activity in the absence of a specific task. There may be direct anatomic connections between networks derived from rs-fMRI analyses, or another joint source of the signal. This is currently not well understood, and rs-fMRI findings should be interpreted with caution.
A short, but good, outline of the origin of the rsfMRI signal is provided in van den Heuvel et al. (2010).
The Annual Event of Chinese Young Scholars for Human Brain Mapping was held on June 19th, during the 2018 OHBM Annual Meeting in Singapore. This was the second annual event, and continued the success from the inaugural meeting in Vancouver. The theme for this year’s event was “The Road to Independence”. Around 200 young scholars from universities around the world participated.
The annual event is committed to bringing together young Chinese researchers from a wide variety of backgrounds to share and discuss their professional expertise and career experiences, as well as any challenges they may have faced. This offered a platform for young researchers to build collaborations on cutting-edge neuroscience topics and methods, and also to learn from senior researchers on the route to a successful scientific career.
This year’s schedule commenced with a brief review of the annual event by one of the organisers: Professor Chaogan Yan. Then, Professor Yan introduced the two guest speakers: Professor Jiahong Gao (Director of the MRI Research Center of Peking University, Chair Elect of OHBM), and Professor Xinian Zuo (Director of the MRI Research Center, Institute of Psychology, Chinese Academy of Sciences, Program Committee Chair elect of OHBM).
Professor Jiahong Gao gave the first talk, entitled “Brain Imaging in China: Opportunities and Challenges”. He summarized the rapid development of human brain mapping research in China, and shared his vision of future directions for the field in a humorous way. Taking the latest advances in magnetoencephalography development in his lab as an example, Professor Gao discussed the challenges and opportunities we face in brain imaging, and encouraged young scientists to seize the opportunities and bravely climb to the scientific peak.
The second speaker was Professor Xinian Zuo from the Institute of Psychology at the Chinese Academy of Sciences. In his talk titled “From Mathematics to Brain Sciences”, Professor Zuo shared his own career experiences, from a PhD in mathematics to becoming an outstanding independent researcher in human brain science. He particularly emphasized the importance of reliability and reproducibility in brain imaging studies, and briefly introduced several ongoing projects by his team, including the Chinese Color Nest Project and the Traveler Project.
After the two keynote talks, Professor Juan (Hellen) Zhou from Duke-NUS Medical School, and Professor Ning Liu from the Institute of Biophysics at the Chinese Academy of Sciences joined the guest speakers for a panel session. Professor Chaogan Yan moderated the discussion, and introduced several topics under this year’s focus “The Road to Independence”, including relationships with tutors, necessity of career planning and recovery from failures. Each senior researcher shared their insights on these questions.
Professor Jiahong Gao provided advice on these topics based on his own experiences. He pointed out that the extent of independence of a young scholar largely depends on the mentors’ style. Professor Gao encouraged young scholars to develop their skills with support from mentors, and to prepare themselves to become independent researchers. Young scholars should set spiritual goals, make plans to achieve them, and learn lessons from their consistent efforts.
Professor Xinian Zuo shared his insights based on his personal experience of switching from mathematics to neuroimaging, and echoed Professor Gao in suggesting that young scholars would do better to seek support from their mentors and develop the ability for independent research within their mentors’ projects. He also shared his “failure” stories about manuscript writing and paper submission during his very early projects. He concluded that failure is not terrible, and that one should learn lessons from what one has experienced, and aim to improve from them.
Professor Juan (Hellen) Zhou shared her personal study experiences, and emphasized the importance of independence, as well as hard work and persistence in order to become a successful researcher. She provided the example of her public speaking training during her PhD, emphasizing the critical role of hard work for acquiring professional skills. She also advised that one could obtain power and motivation from setbacks, and should move forward towards one’s ultimate goal.
Professor Ning Liu offered her thoughts based on how she works with her own students. She pointed out that unstructured, ‘light-touch’ supervision does not suit all students, and suggested tailoring supervision strategies to each individual. She also discussed the particular difficulties of animal studies, and encouraged young scholars to adapt actively to difficulties or potential failures, and to stay positive about their goals.
Professor Chaogan Yan talked about his own “failure” when attempting to switch from human fMRI studies to animal studies, and how he subsequently adjusted his research direction back to human neuroimaging. He pointed out that moving to a completely new field can be a big challenge for a PhD student or postdoc, but he believed it may still be worth trying, especially if you are keen on the new questions and are still young. Even with a high chance of failure, one can learn valuable lessons from these unforgettable experiences.
Towards the end of the panel session, Professor Jiahong Gao answered questions from the audience on how scholars based in mainland China can achieve international impact and publish papers in high-impact journals. He encouraged young scholars to perform high-level studies and to actively communicate their results to international researchers and journal editors. He remarked, “the point is not that we cannot publish high-impact papers; it’s that we have not yet achieved high-impact research results.” He continued, “we should cherish our time and work hard to pursue the critical questions in the field. Only in this way can we achieve influential results and publish papers in high-impact journals, which will lead others to recognize our research capability.”
At the end, the audience thanked the speakers for their informative presentations and discussions with hearty rounds of applause. We took group pictures to conclude this inspiring and memorable event. After the meeting, we enjoyed a group dinner and more informal discussions on both science and life as a scientist.
Organizing Committee of the Annual Event of Chinese Young Scholars for Human Brain Mapping:
Chao-Gan Yan, Institute of Psychology, Chinese Academy of Sciences
Yuan Zhou, Institute of Psychology, Chinese Academy of Sciences
Rui-Bin Zhang, Department of Psychology, The University of Hong Kong
Xiang-Zhen Kong, MPI für Psycholinguistik
Chun-Yu Liu, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University
Xiao Chen, Institute of Psychology, Chinese Academy of Sciences
Heidi Johansen-Berg interviews Charlie Stagg about GABA-MRS, neurostimulation and medical and scientific careers
By Nils Muhlert
In much of biomedical science the questions dictate the methods. This often means we have to draw on knowledge from different disciplines, or combine data from different modalities to converge on a likely solution. In a first for the OHBM blog we asked a senior PI to interview a more junior PI within their institution. This was always going to lead to interesting discussions - but when Heidi Johansen-Berg, director of the new Wellcome Centre for Integrative Neuroimaging in Oxford, agreed to interview the multi-modal brain mapper Charlie Stagg, it became clear that we would be acquiring a full spectrum of insight into scientific and career-related issues.
What follows is a wide-ranging discussion on moving from medicine to pure research, combining information about neurotransmitters from MR spectroscopy with neurostimulation techniques and the potential benefits of mapping from preclinical to clinical imaging.
Heidi Johansen-Berg (HJB): Charlie, you initially trained as a medic, but then decided to go down the pure science route – what persuaded you to take on a life of research rather than being in the clinic?
Charlie Stagg (CS): Yes – I did medicine as an undergraduate in the UK. As part of that, at Bristol University, I had the chance to do an extra year halfway through, at which point I did an undergrad degree in physiology and fell in love with the subject. I’ve always been very interested in the brain and wanted to be a neurologist; then, through my undergrad research and my clinical years, I realised that I wanted to do something more interventional, and that there weren’t many treatments for chronic stroke recovery – something I was interested in at the time.
It’s very common in the UK for clinicians in training to do a PhD some years after their medical training, so I came to Oxford to do that. I fell in love with it and didn’t want to stop! I’ve been an academic ever since.
Quite a lot of our work is clinical research; I work with a number of clinicians, and I think the training is useful in dealing with the medical side and with patients. But research is way too much fun to go back to doing just clinical work.
HJB: So you’ve not regretted that decision – you’ve not wished you could do a bit more?
CS: No. There are certainly days where you wonder what you’re doing – but overall, no!
HJB: I think we all get that [laughs]. In terms of your research, a lot of it has focussed on the role of GABA in behaviour. As we know, this neurotransmitter is receiving increasing attention, particularly as the methods to measure it have improved over the years. What most excites you about GABA, what have been the recent breakthrough findings in the role of GABA in healthy and disordered brain function?
CS: It’s a really exciting question – there are a lot of recent papers on this. When I started there wasn’t much work on it – it was a real niche subject. I used to have to start all of my talks by explaining what spectroscopy was. I remember people coming up at the end of talks saying “I didn’t know you could do that, that’s amazing.” And now, I never have that – and that’s fantastic.
Much of our work has been on primary motor areas and motor systems in particular. That’s been very interesting from my point of view. But there’s been a lot of recent work taking it out of the motor system that’s been really exciting.
A paper by James Kolasinski (who’s now at CUBRIC in Cardiff) took a really simple hypothesis from the animal literature – that local inhibition and cortical organisation should relate to behaviour – and really beautifully showed that it did, in absolutely the way we’d expect. That’s a really nice, elegant study: it hadn’t been measured in humans before, but it showed that what we’re measuring with GABA has at least some relevance to what we see in animals.
You then get people starting to answer interesting questions, not only about immediate behaviour within that region but also about how it relates to networks. For instance, there’s a paper on overlearning from Watanabe’s lab where they showed increases in GABA as memories stabilise. That’s really exciting, as we’d previously been looking at decreases in GABA as we learn. If our understanding of the physiology is correct then we should see increases as the memories stabilise – so it was really exciting to see the first demonstration of that in humans.
All of this work has been done in primary regions – such as primary sensory regions or the motor system – and people like Helen Barron in Tim Behrens’ lab have done some really cool things, asking really abstract questions. You imagine that lateral inhibition is really important for somatotopy or retinotopy, but linking it to memory and more advanced cognitive processes is really cool. It raises a lot of questions about what is going on.
HJB: And I guess that’s where the ability to ask these questions in humans really does make a difference. One of the limitations of MRS has perhaps been the lack of precision compared to manipulating GABA in animal models, and that has led to the criticism: why do it imprecisely in humans when you can do it precisely and specifically in animals? Presumably you feel that doing it in humans makes a difference?
CS: Absolutely – it’s something that I think about a lot. It is an indirect method and we can use multi-modal approaches to triangulate what we’re seeing, but it’s still not as direct as doing invasive recordings and never will be. So we need a good reason for doing it in humans – and the complicated cognitive work that Helen Barron’s doing is absolutely one of those reasons.
I’d also argue that human hand control, and particularly the relearning of that in the timescales after stroke, is very difficult to model in animals. The use of the hand is incredibly complex, the separation between primary motor and somatosensory cortex is pretty unusual in primates. We’re beginning to believe that the primary motor cortex works quite differently in terms of the physiology compared to the primary somatosensory cortex. That distinction is important, so there aren’t ready animal models that recapitulate all of that. There are certainly arguments for carrying this work out in humans.
HJB: Relatedly, the ability to measure neurotransmitter levels using MRS has been around for decades but has never really had the popularity of other structural and functional techniques. What do you think has held it back – and why do you think this may be changing in recent years?
CS: It is true. I do remember people finding out that we could measure neurotransmitter levels and wondering why we don’t do it more often. It’s really hard – particularly at lower field strengths. The signal we’re dealing with is 10,000 times smaller than the water signal we use for fMRI, so we’re dealing with much poorer signal-to-noise. So trying to get measures of chemicals at the millimolar range even at 3T is challenging and takes a long time. The advent of 7T being more widely available has massively boosted that SNR and made it much more achievable to get reliable measures within a sensible time frame of a few minutes.
If you look at our early work on the 3T, it took 20-25 minutes to get a sensible measurement. This is possible, but actually quite difficult in the context of studying learning. It limits the questions you can ask and makes it more difficult for the patients. This is perhaps why there has been less work in neurological and psychiatric groups – the timescales are just not clinically feasible. But suddenly, because the timeframes are shorter with ultra-high field MRI you now don’t need to have a big team of skilled physicists before you can do this. You still need a good physicist and quite a lot of time and a 7T but it’s becoming much more feasible to take these approaches and use them in a similar way that you would with fMRI.
HJB: One thing that strikes me as a bit unusual with MRS on the analysis side is that other MR modalities have seen massive amounts of research into signal processing, but in MRS there seem to be only a few relatively simple approaches. The field of signal processing has changed massively over the decades, yet the analysis of MRS doesn’t seem to have changed much in that time. In a community like OHBM you’d think there would be a wealth of talent – people who could develop much more sensitive measures for extracting useful information from noisy spectra. Why hasn’t that happened?
CS: I think it’s chicken and egg. There haven’t been that many people using it, so there hasn’t been that drive, and the simple spectra that you get – the edited spectra – are reasonably easy to analyse with simple approaches: you just fit a Gaussian. Now people are doing interesting, more complicated things at 7T, so we’re dealing with much more complicated signals. There are gold-standard methods out there, but people are developing their own.
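The “just fit a Gaussian” approach Stagg describes can be sketched in a few lines. This is a minimal illustration on synthetic data, not any particular MRS package: the peak position, linewidth, and noise level below are invented for the demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amplitude, center, width, offset):
    """A single Gaussian peak on a flat baseline."""
    return amplitude * np.exp(-((x - center) ** 2) / (2 * width ** 2)) + offset

# Synthetic edited spectrum: a GABA-like peak near 3.0 ppm plus noise
# (all parameter values are illustrative).
rng = np.random.default_rng(0)
ppm = np.linspace(2.5, 3.5, 200)
true_params = (1.0, 3.0, 0.05, 0.1)  # amplitude, center (ppm), width, baseline
spectrum = gaussian(ppm, *true_params) + rng.normal(0, 0.02, ppm.size)

# Fit the model; the area under the fitted peak is the usual proxy
# for metabolite concentration, rather than the peak height.
popt, _ = curve_fit(gaussian, ppm, spectrum, p0=(0.5, 3.0, 0.1, 0.0))
peak_area = popt[0] * abs(popt[2]) * np.sqrt(2 * np.pi)
print(f"fitted center: {popt[1]:.3f} ppm, area: {peak_area:.3f}")
```

Real analyses add complications (phasing, baseline modelling, overlapping resonances), which is exactly where the more sophisticated methods discussed above come in.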
To some extent, some of the questions that are important for fMRI just aren’t important for MRS. You usually get just a single voxel, so all the issues around clustering and thresholding, all those issues are less important, but yes, it’s a field that’s wide open right now.
HJB: Yeah, you’re on the lookout for talent! There are now multi-voxel MRS methods, so there will be a spatial component, and people are starting to acquire functional MRS over time. It’s getting a lot more multi-dimensional than it has been. It’s certainly an exciting topic.
CS: It is! One of the things we’re getting excited about is work with Uzay Emir, who’s set up some fantastic sequences that we’re playing with and getting to grips with. One of them is a combined fMRI-fMRS sequence, so for each TR you get the BOLD signal and a decent single-voxel spectrum. That gives us a temporal resolution we’ve never had before, so we’re beginning to ask new questions. From talking to people who were there, it feels very much like the beginning of fMRI: we have this new thing and no-one’s quite sure what to do with it. We’re amazingly lucky to be situated here in Oxford, where the FSL guys are sitting next door and we can just go and talk and work with them. But there’s a lot of room for improvement. If you’re interested, then there’s an entire career there.
HJB: And how have you found the challenge of fusing information from MRS and non-invasive brain stimulation, techniques like TMS, tDCS. What do you think are the opportunities in bringing those approaches together?
CS: Well, my first ever study on the MR, as a first-year PhD student, combined the two. I didn’t realise that this was difficult.
HJB: It was cutting-edge!
CS: Yeah, it was the thing to do, so I went and did it! But it is technically challenging. The reason that I just went and did it as a first year student is a massive testament to the support that we have. The physicists and radiologists are amazing at setting up sometimes weird bits of kit – and there’s a load of expertise around on the brain stim side. Doing it is one thing and we’ve now worked through it enough to be confident and happy with it. But interpreting the results is a whole different question! That’s ongoing and lots of people are becoming interested in it.
HJB: It’s quite an exciting possibility – one of the big limitations of brain imaging is that we’re stuck with correlations. You put someone in the scanner and see what lights up and you can correlate activity with behaviour but you never get causal inference. So to be able to perturb the system and observe the consequences does certainly add to the toolset. So it’s powerful for asking causal questions.
CS: Yes, I think so. But there are obvious caveats to what we can assume about the specificity – and we use tDCS rather than TMS for lots of good reasons, but it does have slightly more questions about the spatial specificity. With all the techniques there are questions if you’re thinking about physiology. Quite a lot of our work a while ago was trying to ascertain whether tDCS, and TMS to an extent, affect the brain in a similar way to the naturally occurring changes when we learn something. Are we engaging the same systems and doing the same thing to induce plasticity – or are we doing something completely different? It looks like there are very similar mechanisms involved, which makes sense but that’s a key assumption that we’re making. We’re still doing it and I still think it’s a really important thing to do.
Looking at the OHBM symposium this year, it’s really encouraging to see how much brain stimulation work is being presented – how many novel techniques are presented. There’s also a satellite event, so clearly people are beginning to realise that BOLD is brilliant and can tell us many things but, as you say, it’s very correlational and doesn’t tell us that much about the physiology. Once we understand the regions and networks that are important we can then go in and look much more specifically at given nodes within that using MRS and using stimulation to get a feel of what’s happening.
HJB: And it feels like the brain stimulation field is evolving, getting more sophisticated and more nuanced. There has been scepticism about some of those techniques because of things like variability or lack of replication, and I think there’s an acknowledgement that these effects are very variable and that this needs to be taken into account. But it could actually be incredibly interesting; there might be interesting reasons for that variability. Trying to improve our experiments, capture that variability at the individual-subject level, and understand it could really increase the use of those techniques for studying healthy brains. It could also help explain the cases where you get the completely opposite response to the same stimulation. For some people that’s a reason to shy away from the technique altogether, but others are asking “why is that?”. It does seem that there are interesting reasons for the variability – which could include genetics or brain geometry – that could help us understand the responses.
CS: Yes, and it has been an interesting time over the last few years in the brain stimulation literature. We hope we’re coming to a conclusion where we’re saying that it is variable but that could be interesting. As with any technique we need to be able to use it properly – we need to control it properly, carry out double-blinded, placebo-controlled trials.
HJB: It’s just like the early days of brain imaging, there were a lot of non-perfect (shall we say) imaging studies in those days but that doesn’t mean that imaging is a flawed technique, you just have to do the experiments right.
CS: And it’s like 10 years ago, when people were tweeting about the dead salmon – now other things are coming up. The field is very similar.
HJB: Here in Oxford, as you know we have core-funding from the Wellcome Trust to create this new centre for neuroimaging, the Wellcome Centre for Integrative Neuroimaging (WIN), building on the success of places like FMRIB and OHBA over the years. With that, we’ll have access to new facilities – ultra high-field MR but also new facilities for animal MRI. What kind of things are you most excited about, once you have all this equipment at your disposal?
CS: The WIN is really exciting for many reasons. One of the big differences it has made already is in people – having a lot more physicists around to develop these techniques, getting the brain stimulation working in all of the scanners and also to get the sequences to work and be reliable and trouble-shooting. Having physics-support to do that, which is provided by the Wellcome Trust, is just amazing. Oxford is also wonderful and has a huge number of very, very good people working on similar things. But it’s spatially spread out across the site. While that’s still true of the WIN, having one centre has begun to get people to talk to each other in ways that they haven’t before. There are people here that I’ve never spoken to – so it’s been good to get together and see what we could do.
One of the things I’ve been interested in for a while is that, while human MRS is important, there’s no doubt that it’s an indirect measure, and there are some key questions about what it is that we’re picking up in terms of the underlying physiology. It would be fantastic to look at that, if we can do very specific interventions. Having a small bore animal scanner which allows us to do similar things to what we do in humans – the same sequences, as well as complicated behaviour and genetics – is just very exciting.
HJB: Yes, that’s what I’m particularly excited about – being able to use imaging as a bridge between lab-based neuroscience and the things we do with patients. All of us using neuroimaging have been frustrated at the lack of specificity in what’s going on in physiological terms. Having the techniques that allow us to bridge that gap actually allows us to carry out the manipulations in animal models, but then relate those signals to what we see in humans. That’s something that I think is really exciting.
CS: We’ve been working very specifically with Jerome Sallet on ultrasound modulation and have a grant to develop it in humans. He’s doing it in primates. We’ve been working really hard on that, and it’s been fantastic to work with him to see what it actually looks like – what behaviour changes he’s getting, what imaging changes he’s getting. And it’s starting to inform what we can expect to see. That’s concrete evidence that’s already coming out.
HJB: Yes, bringing together people across those species boundaries – and starting to train junior people to carry out cross-species experiments. People like Helen Barron: individual scientists who are doing fMRI, but then understanding those signals in terms of electrophysiology and using optogenetic manipulations. You then have the macaque work from people like Jerome and Rogier Mars, and bringing them together with people with imaging analysis expertise, like Mark Jenkinson and Saad Jbabdi, will help us build tools to move seamlessly from rodent space to macaque space to human space, making it much easier for people to cross those species boundaries.
To finish up, what advice would you have to early career researchers who are about to start their careers in brain imaging or multi-modal brain mapping?
CS: I think it’s difficult – and it’s a case of do what I say and not what I do [laughs].
It’s all about the questions. You have to work out what the question is that you find exciting and interesting – I made a joke about it earlier but it is hard and you have days when you’re wondering why you’re doing it and it has to be something that really excites you.
HJB: Yes, it has to be something you care about and want to know the answers to.
CS: That you really, really want to know the answers to!
And then you need to work out what techniques you need to be able to answer that. For me, that did and does involve multi-modal neuroimaging. We’re starting to use MEG to look at brain oscillations, which I think are the key mechanism linking inhibition – the GABA MRS measures – to the functional connectivity changes we see in plasticity. I think oscillations are really important, probably the route through which that happens, and working out that this is what we needed to do was the key step. Working, again, with excellent people in Mark Woolrich’s group has certainly helped.
So you need to work out the question and what techniques you need to answer it, and then make sure you’re somewhere that can support that. One of the downsides of doing the multi-modal work that my group does (we do MRS, MRI, brain stimulation, MEG) is that you end up a jack-of-all-trades and not an expert in any one of them. Working somewhere with genuine experts in all of those areas, who are happy to help, is really important, and it’s one of the reasons I’m here: we have all that expertise covered, so we can do those tricky experiments.
HJB: Yes, that two-way interaction where you have impressive experts developing methods, which can then inspire researchers to ask new questions. But you need them there as well to know the pressing questions – whether those are clinical questions or neuroscience questions. That can hopefully steer method development to answer particular questions. I think it’s always tempting for some of us to get wooed by a particularly cool method or new analysis approach and lose sight of why we’re doing it, or what question we’re trying to answer. So I think your point that the question needs to come first, particularly for those of us in the more neuroscience side, is really important.
CS: Yes I think it may be different for the methods-development people.
HJB: Yes, but even then you need to keep in mind what question this method answers that can’t already be answered. Not just what cool engineering or mathematical principles can I implement. Really, what’s the point of it? It’s important for people to keep that in mind and use that as a way of prioritising and steering your work. Ideally you get a combination of something that uses cool cutting-edge computer science, but also allows people to do something that they couldn’t do before.
CS: And my other standard piece of advice for early career researchers is to move between labs – though I’m very conscious that neither of us did! We did travel strategically: I spent a little time in Florida and at UCL, and you spent six months in Montreal.
HJB: [laughs] Yes, I completely agree, the standard advice is to move around and see various different labs, travel the world. That’s absolutely something that benefits people’s career development but for me personally it wasn’t the right thing to do at different stages of my career. So I’ve pretty much been here in Oxford throughout. I’ve tried to get that experience through collaborating very widely locally here in Oxford and elsewhere in the UK and further afield. We can get inspiration and avoid going stale in the same location through collaboration and meeting people in that way. So if you can’t move round, don’t worry too much about it.
(or, How I became an advocate of Open Science in 5 days.)
It’s the final day of OHBM 2018, and I’m tired. I’m also excited, enthusiastic and exhilarated. I’ve had my first real taste of the Open Science community, and I want more! What follows is an honest account of how I went from feelings of scepticism and ambivalence about Open Science, to a flag waving advocate. My hope is that you might read my story and find it in some way relatable. Then maybe, just maybe, you’ll join me in the Open Science Room next year.
When I registered for OHBM, I intended to sign up for the “Hackathon”. I wanted to improve my python programming skills and hoped I might make a few buddies along the way. Alas, the Hackathon was sold out, and if I wanted to know more I’d have to “make do” with attending the one-hour introduction session: Brain Hacking 101. “But do stop by the Open Science Room while you’re here!” the organiser said. Open Science wasn’t a priority for me, so I wasn’t sure if I’d find the time.
I started the conference with some educational sessions. People kept mentioning their “GitHub” pages, but I didn’t take much notice. Vince Calhoun presented some work using dynamic functional connectivity (dFC), and I felt the rumblings of a new project brewing. I knew I could apply this method to some data I had sitting in a drawer and it might turn out to be interesting. And then there was a link to his GitHub page. I scribbled in my notebook: “dFC toolbox available, with documentation and examples!” The toolbox was written in MATLAB (phew!), and was an extension of something I was already familiar with.
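For readers wondering what a dynamic functional connectivity analysis actually involves, sliding-window correlation is one common approach (the toolbox mentioned above will differ in its details; the function name and toy data here are purely illustrative):

```python
import numpy as np

def sliding_window_fc(timeseries, window_length, step=1):
    """Dynamic functional connectivity via sliding-window correlation.

    timeseries: array of shape (n_timepoints, n_regions).
    Returns an array of shape (n_windows, n_regions, n_regions),
    one correlation matrix per window.
    """
    n_timepoints, _ = timeseries.shape
    starts = range(0, n_timepoints - window_length + 1, step)
    return np.stack([
        np.corrcoef(timeseries[s:s + window_length].T) for s in starts
    ])

# Toy example: 3 regions, 100 timepoints of random signal.
rng = np.random.default_rng(42)
ts = rng.normal(size=(100, 3))
fc = sliding_window_fc(ts, window_length=30, step=5)
print(fc.shape)  # (15, 3, 3)
```

How the sequence of windowed matrices is then summarised (e.g. clustering them into recurring connectivity states) is where methods, and toolboxes, diverge.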
Next I headed to Brain Hack 101 to fill the lunch-hour void. There was an unassuming-looking guy standing at the front, clearly the super-programmer sort. This was Greg Kiar. He was patient with the attendees and our entry-level questions, and in one hour he explained a few terms that I had heard thrown around but not really understood:
BIDS (Brain Imaging Data Structure) got triple underlined in my notebook. I had been looking for a way to better structure the masses of data I’d inherited, and here was a fully-fledged and well organised system which I could use. “One less job for me!”, I thought. Clearly a lot of effort had gone into defining BIDS, and who was I to try and reinvent the wheel? Someone else has already done a fantastic job in creating the system and implementation, and they were giving it to me for free!
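For the curious, the system boils down to a predictable folder hierarchy plus key-value file names defined in the BIDS specification. This sketch builds a minimal example layout in a temporary directory; the study, session, and task names are invented for illustration:

```python
# A minimal sketch of a BIDS-style layout: subject / session / modality
# folders, with standardised key-value file names. Names are illustrative.
import json
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp()) / "my_study"
func = root / "sub-01" / "ses-01" / "func"
func.mkdir(parents=True)

# A top-level dataset description file is required by the standard.
(root / "dataset_description.json").write_text(
    json.dumps({"Name": "my_study", "BIDSVersion": "1.1.1"})
)

# Imaging files encode subject, session, and task in the file name itself.
bold = func / "sub-01_ses-01_task-rest_bold.nii.gz"
bold.touch()
print(bold.relative_to(root))
```

Because the layout is machine-readable, tools can discover all subjects and tasks automatically, which is exactly what makes pipelines like the one described next possible.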
fmriprep got triple underlined as well. This was a tool for doing some kick-ass preprocessing, made by the masters and shared freely with a ton of documentation and support. I was hoping to come away from the meeting prepared to build a tool to improve the efficiency of preprocessing the data collected by our group. In fmriprep, I’d found another fully-fledged tool to do exactly what I was looking for, and do it significantly better than I could with the time and resources available to me.
At the end of the session I was a bit more confident in my understanding of what was available and how I could make use of it. More importantly, I had ticked two of my “OHBM Outcome” boxes in discovering BIDS and fmriprep. I thanked Greg after the session, specifically because he had suggested I come and I had found it really useful. He seemed grateful for the feedback.
The OHBM meeting continued and I found myself in the Open Science Room again and again. This was the place where I could find out about real, tangible “things” which could elevate my research. They also had bean bags and power sockets, and there was an atmosphere of generosity, which is very pleasant when you’re surrounded by some of the most eminent people in your field.
The real turning point was attending a session described as “Discussion: How to improve Multi Echo denoising”. I expected a panel, with me taking notes. What I got was an invitation to join a (physical) circle of 8-10 very impressive people as they attempted to make something (TEDANA) better, by working together in a friendly, communicative and organised way. The session started, as these circle things do, with each of us saying who we were and what we did with multi echo. Most people seemed to know each other already, at least by association, but the Chairs of the sociocratic circle (Elizabeth DuPre and Dan Handwerker) made a point of ensuring everyone was on the same page and that there were no assumptions. I was clearly the least technically experienced person in the room, but they were nice and actively tried to engage me in the discussion. Susan Bookheimer had told us that women in science need to be outspoken, so I made an effort to speak out. To my surprise, my opinion was welcomed. I learned that people like them really do need to hear from people like me; I assisted in the development of this tool by virtue of my inexperience.
My original motivation for attending the session was not altruistic: “if the tools are going to develop, I want to be early on the curve to adopt them”. I had intended to be a passive recipient of their talent and hard work. At some point it became clear that this wasn’t going to be the case. I was skilfully guided into volunteering my services in improving the documentation of the tool, but I wasn’t too aggrieved about the extra workload ‘imposed’ upon me. I was probably going to read the “how to” guides anyway, to get TEDANA working with my data, so all I’d have to do is ‘track changes’ and make a few suggestions. This seemed a pretty reasonable exchange, but they weren’t taking my commitment lightly. They wrote my name down next to the action on the Google Doc which was being typed up for all to see. I have no doubt, however, that if I had kept my head down the group wouldn’t have pushed me to be involved. In truth, I wanted to be a member of this group, but I didn’t feel like I had anything to offer.
When the allotted time for discussion was over, I approached Kirstie Whitaker to thank her for her contributions in different sessions, and for saying out loud things which I was too nervous to say. She spoke with strength, clarity and precision. She seemed to embody all the things I want to see in the field, and I wanted her to know I was appreciative. Turns out we have some shared research interests and she was keen to work with me. She treated me with respect and encouragement, and seemed willing for me to benefit from her experience.
I was starting to notice a theme: The people in the Open Science Room were really nice. They were kind, considerate, generous, and welcoming. They were also really skilled scientists. The kind of top-of-your-game people who anyone would be honoured to work with and learn from. They genuinely cared about empowering others to foster more efficient, more effective and higher quality research. They cared about non-science things that are important to me, like gender equality, privilege and dubious political decisions. I wanted to work with them, and they wanted to work with me.
Though my initial motivations for engaging with Open Science were purely selfish, I came to appreciate that if I go a little bit further and contribute back to the system, we can all do more effective research. Ultimately, this is good for advancing our understanding. I am reminded that the reason I love this job is not just the creative freedom and intellectual stimulation it provides, nor the opportunity to go to cool places and meet awesome people. It all boils down to helping those individuals whose brains work differently, or maybe not quite as they should, and trying to help them exist happily in this confusing and sometimes dangerous world. I have come away with the firm belief that Open Science is Social Justice, and that is something I want to be a part of. I am also reassured of the good that we can do as a community, if we are kind and generous with each other.
Before this week, I was not a fan of Open Science. It felt like a bucket of extra work in my already stretched schedule. It also played on my insecurities as an early career researcher: the pressure to meet the demands of publishing and the fear that I may be exposed as being no good (I believe that's called Imposter Syndrome). What I've come to learn is that I've already benefited massively from Open Science by using a long list of tools generated through many hours of skill, labour and dedication (see below for a list which grows longer the more I think about it). All of these have been free to me and required little more than a citation, if anything.
After this whirlwind of a conference I am now a strong advocate of Open Science, and I think you should be too. You don’t have to take my word for it, just come to the Open Science room at the OHBM Annual Meeting in Rome next year and see how you get on. They have bean bags and power sockets, and you might even have some emotions which help you do research.
Ways I have personally benefited from Open Science, without being aware of it or appreciating it:
3) OBART (NITRC's Online Brain Atlas Reconciliation Tool)
4) Chris Rorden's MRIcron, MRIcro, dcm2nii, and fMRI simulator
5) The 1,000,000 times when I’ve searched the internet for “how do I do x in MATLAB/Linux/bash?”
By Elizabeth DuPre and Kirstie Whitaker
This month we continued our Open Science Demo Call series with a focus on this year’s OHBM annual meeting in Singapore, and the many ways that OHBM members promote open science at the meeting. We heard from Michele Veldsman, Ayaka Ando, and Aki Nikolaidis from the student and postdoc special interest group; Greg Kiar and Anisha Keshevan from the open science special interest group; and Nils Muhlert from the communications committee.
We first heard from Aki about the Monday Night Social, featuring beautiful views of the Singapore skyline and an announcement of the winners of this year's Brain Art competition. All OHBM members are encouraged to attend! Ayaka gave us a sneak peek into this year's career development symposium, 'The Secrets Behind Success', with presentations from Tom Nichols and Lucina Uddin. Michele told us about the mentorship program, which seeks to support professional growth at all career levels, from masters students to principal investigators. Although mentors and mentees meet at the OHBM annual meeting, you don't have to be attending to get involved! Aki, Ayaka, and Michele encourage everyone interested in any of these initiatives to stop by one of their events or reach out on Twitter!
Greg and Anisha then told us about this year’s OHBM BrainHack and TrainTrack. Formerly known as the OHBM hackathon, this is a collaborative event which brings together researchers and scientists from across the neuroimaging community to work together on projects, learn new skills, and engage with the community. While registration for the BrainHack and TrainTrack is sold out, the Open Science Room at OHBM will be hosting demos, workshops, and an open working space throughout the annual meeting. Greg and Anisha encourage everybody to come and find out about the great initiatives and individuals in our open science community!
Nils introduced us to the amazing work the communications committee is doing to spotlight open science at the meeting and throughout the year. They were involved in revamping the OHBM website and maintain a blog highlighting initiatives within the OHBM community (including these demo calls!), as well as educational materials such as the OnDemand courses. Nils pointed out that this provides a great way to stay involved year-round (e.g. workshops on analysing diffusion MRI), even for those who cannot attend the annual meeting. The communications committee also helps promote the excellent brain mapping work carried out around the world, and provides a unique window into the stories of many successful neuroscientists through their keynote interview series. Nils encourages anyone interested in contributing to the blog to sign up!
We'll be taking a break next month for the annual meeting, but look forward to starting back up at the end of the summer! If you'd like to nominate yourself or someone else to be featured on these monthly calls, please add their information at this GitHub issue, or email the host of the calls, Kirstie Whitaker, at firstname.lastname@example.org. You can also join the OSSIG Google group to receive reminders each month.
By Nikola Stikov and Jean-Baptiste Poline
(In consultation with the OHBM Publishing Initiative Committee)
The current academic publishing norms impose many constraints on how and what we publish without fully embracing the new web-enabled dynamics. The emergence of the internet as the de facto publication medium, and the availability of open source technologies for handling the hosting and peer review process, have made it possible for organizations such as OHBM to experiment with innovative publishing platforms and to host high-quality research objects while promoting reproducible and open science.
With Aperture, OHBM plans to open up to a more diverse approach in communicating academic research, bringing transparency and interactivity to the publishing process. While initially our focus will be on reviews, tutorials and educational materials, we foresee using this format to incorporate computational notebooks, interactive plots, software, data, and post-publication peer-review to create living, reusable and reproducible research objects. We hope to have a beta version of Aperture in time for the Rome meeting in 2019.
Coko has extensive experience developing open source publishing components, some of which are used by eLife and other open-access publishers. Their framework could give the Open Science SIG and the broader OHBM community the opportunity to participate in the construction of Aperture. We look forward to establishing even more collaborations with like-minded partners.
Most importantly, we want to hear from our members! For that purpose, we invite you all to attend the Publishing Round Table, to be held at the annual meeting in Singapore on Monday June 18 at noon (Room: 324-326). Please join us for what we hope will be a fruitful discussion about the future of Aperture!
By AmanPreet Badhwar
“In a forest of a hundred thousand trees, no two leaves are alike. And no two journeys along the same path are alike.” ― Paulo Coelho
See Daniel Margulies' keynote speech here:
My first OHBM annual meeting experience was in 2015. I did not know many researchers in the field, having just started my postdoc in human brain imaging. On top of that, I was attending OHBM 2015 without my postdoc supervisor in tow (who knew the community well), and I worried about not finding my place in the human brain mapping community. Luckily, I had met Daniel Margulies a few months prior to OHBM 2015. Not only did he make it a point to introduce me to the community at this particular annual meeting, but I also found myself happily involved in the many grassroots initiatives of the Neuro Bureau, ranging from brainhacks to sci-art exhibits to open science initiatives. Fast-forward to today: I have developed my own unique voice in the OHBM community, and it is in large part due to guidance from Daniel and his free-spirited compatriots during those formative moments in time. I have had the opportunity to collaborate with Daniel on several projects, both scientific and sci-artistic, and recently had the pleasure of interviewing him at the inaugural BrainHack School 2018 in Montreal.
AmanPreet Badhwar (AB): How would you describe your research to a random person on the street?
Daniel Margulies (DM): When explaining my research to a random person on the street, I usually gesture to my head to illustrate that I study the brain. If there is time for further elaboration, I explain that I study how areas are spatially arranged and connected to one another using MRI, and the consequences of this layout for the possibilities and constraints of cognition.
AB: What projects are you currently working on? Could you comment on some of the breakthroughs and bottlenecks you have encountered?
DM: I've recently moved my lab from Leipzig to Paris, which has provided a refreshing opportunity to set new research priorities and establish new collaborations. We recently identified a gradient in cortical organization that spans from primary cortical areas to the regions of the default-mode network, so my current projects extend this observation to explore its consequences for cognition, to conduct cross-species comparative studies, and to examine how the gradient can be divided into zones of cross-modal integration.
AB: Can you tell me a bit about your career path?
DM: I studied humanities in undergrad, but ended up in neuroscience through a chance encounter years ago at a bus stop in New York that resulted in an invitation to join Xavier Castellanos's lab at New York University as a research assistant. I was soon introduced to neuroimaging data analysis by Mike Milham and imparted with a love for neuroanatomy by Michael Petrides. A similar twist of fate landed me in Berlin a few years later as a graduate student with Arno Villringer. I was very fortunate to have mentors who were immeasurably supportive and offered me opportunities to pursue my various interests. This all came together when I started my own lab in 2012 at the Max Planck Institute in Leipzig.
AB: What is your take on multimodal research? How have you integrated this within your own research project?
DM: The complexity of various fields in neuroimaging today has resulted in a level of specialization that makes it challenging to take a wider perspective. I believe one of the major challenges we face is in thinking across different methods and vocabularies to construct unified models that underlie these diverse, and at times divergent measures. As my core project aims to understand some basic principles of how features of the cerebral cortex are spatially arranged, perspectives from multiple modalities are central towards achieving that goal. We make use of the macaque monkey tract-tracing literature, high-resolution MRI, meta-analytic and task-based approaches… So much data is openly available these days that conducting multimodal studies is really becoming more the norm than the exception.
AB: If “like connects to like” in the brain, then tell us a bit about what makes the brain work as a unit?
DM: “Like connects to like” is a principle that has been introduced to describe preferential long-range connections between cortical areas that have similar degrees of laminar differentiation. It’s pithy and captures an elegant multimodal phenomenon of cortical organization. Nevertheless, various other principles are also critical to cortical organization, such as extensive connectivity between neighboring areas. Although there is a substantial focus in brain mapping of the differences and discrete boundaries between areas and large-scale systems, one challenge that your question illustrates is to also consider how these distinctions are integrated into a functional whole. There is little doubt that the brain is highly interconnected — a factor that is important to remember when delineating various subdivisions.
AB: You are a Neuro Bureau member. Could you tell me a bit about the Neuro Bureau?
DM: I started the Neuro Bureau with Cameron Craddock back in 2009 or so. When we first got going, all we had was the name, which we felt at the time was a solid enough starting point to merit a purpose. We developed the Neuro Bureau into a cross-institutional and cross-disciplinary support group for early career researchers with the aim of providing the neuroimaging community with projects and initiatives that weren’t traditionally credited. This includes the Open Science Gala at OHBM, the brain-art competition and exhibition, and the preprocessed data initiatives. The idea was to infuse our community with new perspectives, to render it more accessible to other disciplines, and to make it in some ways more playful. Towards those goals we also encouraged a spirit of open scientific practice, which grew into Brainhack a few years later. Early on I received the advice to help create the research community I wanted to be a part of — the Neuro Bureau is our way of doing just that.
AB: Could you comment on the Neuro Bureau’s role in mentoring trainees?
DM: I’ve never really thought of the Neuro Bureau as a mentoring-oriented organization. Mentoring implies a mentor and mentee, and the Neuro Bureau has always had more the spirit of a tree house, along with all the big ideas, camaraderie, shoe-string operations, and mischievousness that tree houses tend to have. Good mentorship is so critical when joining the neuroimaging community, but so is finding your own group of peers — a kind of research family. For us, the Neuro Bureau provides a space to try out new ideas, seek support when faced with the various challenges of research, and to feel that we have a place of our own in the wider community.
AB: Thank you Daniel for taking the time to sit down for this interview. Looking forward to your keynote at OHBM 2018.
By Shruti Vij
“I have always loved the idea of not being what people expect me to be!” - Dita Von Teese.
See Bea Luna's keynote OHBM2018 lecture here:
There is no other stalwart in neuroscience whom this quote better describes. Be it her expertise in developmental cognitive neuroscience, her championing of novel techniques such as concurrent PET-MR scanning, her vocal demeanor or her punk rock persona! Bea Luna's research persona lends an overwhelming sense of success in being not only innovative but also purposeful in her dogged pursuit of making one of the toughest periods in life - adolescence - better understood. She has to her credit hundreds of articles and many prestigious grants and awards, in addition to being the President of the Flux Society. With such lofty achievements, one might expect her to be a sombre intellectual, but one is easily surprised by her bubbly and inviting personality! This article probably cannot convey my personal excitement at being able to interview such an amazing role model, and having so many take-aways from our conversation was definitely a bonus! What's more, as you read on, you will discover that being you, in your own way, is what leads you to success! Something that the world of academia needs to stop and think about while rushing to achieve the next big thing!
Shruti Vij (SV): The research focus of your lab at Pittsburgh is neurocognitive development. In particular, you are interested in brain maturation in adolescence. What motivated you to pursue this direction?
Bea Luna (BL): I was a crazy risk-taking teenager having fun and questioning everything, which probably contributed to being kicked out of school! But I remember being very aware that this was a really unique time of feeling free and invincible and that it was finite. I was very much into philosophy – thinking about what the mind means, what the brain means. It was during this time that I found out brain function could be measured with PET, and would fantasize about one day being able to use this to understand human consciousness.
As a grad student, I studied visual and attentional development longitudinally in premature infants who, due to their immature lungs, can have hypoxic ischaemic events in the brain. I was surprised and fascinated that it didn't matter if half the brain was missing, or if there was a little dot of hypoxic ischaemia: it did not predict outcome in visual acuity and attentional processes. I thought, wow, how can that be? In adults it would be obvious what the outcome was likely to be. You could also think about diaschisis, where a small region could be injured but compromise its connectivity to a whole bunch of other regions. So it became clear that what mattered, especially with regards to development, was the integrity of the functional brain, beyond structure: what the brain does with what it has as it is specializing. fMRI was just emerging and I was fascinated with the possibility of using this approach to look at development, which became the conclusion of my dissertation. However, I was discouraged from using it in pediatric populations, and I thought, 'oh yeah, watch me'.
I did my postdoc in the Psychiatry department with John Sweeney, who saw that my developmental expertise could help us probe the prevalent theories that mental illness emerges in adolescence and that neurobiological maturation could hold important clues. This was a perfect fit between my interest in developmental plasticity and brain functional integrity. From then on I just didn't stop.
SV: And the field of cognitive neuroscience is thankful for that! What are some key questions that you think are going to be the big drivers in developmental cognitive neuroscience in the coming years and how is your research contributing to these questions?
BL: We are developing as a field ourselves. There have been great advances in mapping the regions and networks that show changes with age and their links to specific components of cognition. One of the areas where I have been a spokesperson is the need to now understand the neural mechanisms underlying the developmental changes that we see with neuroimaging. Seeing pretty brain pictures is no longer enough. We need to speak to our colleagues who use animal models, and those who are doing post-mortem work, to build comprehensive models of development. This is how we can take the next step and make our work translational.
For example, we have an amazing project, which I'll be talking about in my keynote, where we use a molecular MR machine that acquires MR and PET simultaneously. It's very hypothesis driven and it considers different lines of evidence that suggest hyper-processing of the dopamine system in adolescence. Intuitively this makes sense, but it's very complex. I've literally stood up in front of big audiences and said "Hi, I'm Bea Luna. I'm a developmental cognitive neuroscientist and I have overgeneralized the concept of dopamine function." It's a very complex system with pre- and post-synaptic processes and multiple types of receptors, all of which could have their own developmental trajectory.
This is one of the ways that we are trying to understand mechanisms. I'll also be talking about tissue iron as a proxy for dopamine. It's difficult (though not impossible) to get IRB approval to test pediatric populations with PET, so we're finding this proxy for dopamine processing with MR-derived tissue iron. We are finding striking associations between tissue iron and PET markers of dopamine processing, and seeing how these change with age. You can see that talk for the punchline! I'll also be mentioning a future direction with a new project that's just started, where we're using spectroscopy at 7T with very complex acquisitions to look at changes in GABA and glutamate alongside the tissue iron proxy for dopamine. These three neurotransmitters are essential to understanding plasticity. Animal models show molecular evidence for critical period opening and closing through puberty in association cortices, which motivates this new approach to understand critical period (vs. sensitive period) plasticity in association cortices in humans in vivo during adolescence. I find this tremendously exciting!
I come up with these crazy questions that compel me to probe them regardless of the complex technology it may require. Was I a PET expert? No. But I really want to understand what's going on with dopamine. So, I go to my colleagues and say 'elucidate me, tell me how to use PET to answer my questions.' It's the same with MRS: 'tell me how I can use this technology in the best way possible to answer my mechanistic questions'. I say, 'come play with us' and let's use this fancy approach to answer some very cool questions. This is how we end up collaborating with people in other departments at Pitt and with other universities such as MGH, and starting discussions with Columbia and Stony Brook to use new approaches. We also have some even newer projects looking at single-cell work in monkeys to further understand the actual neural basis of cognitive development.
SV: What do you think are the novel technological advances that will assist in uncovering brain maturation?
BL: Certainly PET/MR and MRSI have reached a level that can be applied to developmental questions. But there are tremendous advances in the analysis area, including computational modelling, machine learning, and advanced statistics, that can push the envelope as to what we can answer. We have been bringing these to the fore to make sense of these molecular mechanisms, but also advancing what we can do with longitudinal neuroimaging data and resting state fMRI, and how to become informed by reinforcement learning. Resting state in particular is critically advancing in how we can control for head motion, a huge problem in developmental studies. Diffusion-weighted imaging has now advanced to a level that allows us to apply not just tensor models, but orientation approaches that afford us greater insight into the maturation of white matter connectivity, making inferences beyond just myelination. For example, we know that glia dynamically influence myelination. Let's not forget about the importance of behavioral assessments, which at the end of the day we need to merge with our brain data to assess its relevance. Finally, Big Data approaches have had a huge impact on the power we now have to answer questions. We share our data as well as use others' Big Data to replicate our findings. In sum, multimodal approaches to inform mechanisms, advanced analyses, and big data are where the field is showing great advancement.
SV: In today’s discussion on diversity and inclusion, what are your thoughts and how do you address these issues in your lab?
BL: First, I'll tell you what my soapbox is. I was on the advisory council for the director of the NIH - this was in 2012. They'd just started to really speak about diversity. I was different from the other minorities on the council. Understandably, they wanted to see enhancement and so on, but I thought 'no, I don't need your help because I'm a woman or I'm Hispanic. When I give a talk at SfN or OHBM I don't want people to think "oh my god, look at the woman talking - and she's Hispanic too."' I said, you know what, Francis, white men have done a great job, and keep advancing things. That's great, and we're grateful - but I'm in a different place and am bringing insights that will never come to that closed club. So, actually, "you're welcome!" I've been on a lot of high level committees where I do feel in the minority, as a woman. I don't blame the men; I have a husband, I have a son. I don't think they mean ill - but I do notice that I have to prove myself, and it can take a while before I am finally listened to and my ideas are deemed critically helpful. White men are accepted much quicker. Again, I don't think this is intentional at all, and I get great satisfaction when I have won them over. I also don't mind doing the extra work; it keeps me on my toes in science and keeps me humble. It is what it is, but I do see change. I see that men are really aware and trying to do better. Then again, my "dopamine" personality may be playing a role beyond typical diversity, ha ha.
SV: Your trainees speak very highly of you and the work environment that you have created in your lab. Is this purposeful? What are the things that you personally make an effort on to make things better and easier for your trainees?
BL: First of all, awesome! I am careful about selecting smart people that will fit into our dynamic cool atmosphere. Some may think “party lab” but in fact we are more about working hard and pushing each other in a respectful but humorous manner. Everyone in my lab loves what they’re doing and I don’t have to look over their shoulders. They’re self-driven, and that really works. I give them their space and I’ve been really lucky. Every grad student I’ve had has been amazing.
At the top of my list of favorite things is the one-on-one with my mentees. I talk to these young, super-brilliant people. They may not be familiar with these big questions that I have. So I have a discussion and there’s some theoretical aspects that take a while to understand – but I tell them not to worry. I’m planting seeds in their brain that will later grow. I tell them that they will deliberate and they will come up with the logical next step, which I am not interested in. Instead, I want to hear the other idea. I want to hear the one they’re embarrassed to tell me because it’s so outrageous, the one that makes you giggle. For me that’s great, because it’s usually in the context of bigger questions and leads to a bigger step forward. For them, it turns out that it grabs their passion, and then it’s their thing. They’re not following tightly in my footsteps. We go back-and-forth and think outside of the box with no limits. As a philosophy double major I loved to just boldly think beyond the obvious and now I can do this within my scientific questions with my brilliant students, who are so frikkin smart. But there’s a lot of laughter – sometimes too much!
There are a lot of bonds made in the lab – we’ve even had marriages. They are all discussing new methods and approaches and I love hearing how they all interact and help one another just for the joy of collaborating. I am careful not to bring in difficult people so as not to mess with our cool vibe.
SV: You are also the president of FLUX in addition to being an actively involved academic stalwart. How do you manage all this in the same 24 hours everyone gets?
BL: Great question - I want to deliver an important message here. I think I can work intensely for a concentrated amount of time - but I try not to get into the office before 9, as I like my thinking space in the early morning before I go to the lab. Granted, I'll typically work until 7 or 7.30, but on the weekends I really try to do minimal work. I think that's super important. You need that space, doing other things, seeing your friends. Some of my friends have no idea what neuroscience is - a lot of them are artists - and I think that gives you space to make the connections. If you're always looking at the trees you sometimes struggle to see the forest, the bigger questions.
There are times when there are lots of deadlines, and it can be stressful. I'm in a medical school, so you write grants; that's what you do. Now I have an endowed chair, so things are a bit easier, but I always maintain a couple of R01s plus other collaborations and foundation grants. It used to be very stressful to know that you had to get a grant or sink. But for the last two grants, I thought: enough, I'm going to love the grant writing process. I now embrace and laugh at the innovative directions that I am willing to go. My first grants would take a couple of rounds to get funded, with comments that the ideas were awesome but how could this be done - prove it! And we would! Now I have a track record and they get through more smoothly (knock on wood). I definitely never want to get a critique that everything is fine but "yawn". When I have to write grants I get everyone involved, and it's a party with discussions going on everywhere and every whiteboard filled with ideas, and it's intense and fun and we laugh a lot. I get home mentally exhausted but satisfied, and my husband has a martini ready and feeds me. I don't engage those who will be leaving the lab soon, though, since they will not be around - and I can tell they feel left out, because they want to play too.
By the way, Flux is not an acronym, although everyone always capitalizes it. I made a great effort to not have an acronym. I chose the word Flux to always remind developmental cognitive neuroscientists that we are studying a dynamic process and we need to capture what is in flux. I am so proud of the Society and the amazing conference that Brad Schlaggar, Silvia Bunge, Bruce McCandliss, and I made. We dreamed this up at a conference when we were all postdocs, drinking at a pier late at night, toasting to how one day we would bring developmental cognitive neuroscience into its own! By the way, that was another of those moments when people were saying "no way, that is too hard, don't do it" - I didn't even wince. Now, as developmental cognitive neuroscientists, we have a home and we bring people in to help unite us in advancing the field, from David Van Essen, Steve Petersen, Russ Poldrack, Michael Posner, John Gabrieli and on and on. We have intense days of hardcore science and then... party hard, ending with crazy - and I am not kidding, crazy - karaoke, with people sweating and dancing, from students to bigwigs. The idea is to break the science class system, for students to see that we are all in it together, and for them to start forming bonds with each other that will end up in collaborations for great science.
SV: OHBM is largely made up of trainees at many different levels. They look up to people like you and would like your advice. What other advice would you like communicated to our large trainee audience?
BL: Whenever I find myself providing advice for people - and I see this a lot where people have to make choices, such as what lab to go to, what job, what country, etc. - something that I think nobody ever tells them is to consider the whole package, and not to put their personal life at the end of the list - for example, leaving the person they love to pursue what they think is the only path to career success. If you're passionate about your science, you can make it work in many places. I personally chose to be where my husband and I both loved. That said, pursue the questions that you are passionate about; don't compromise on that, because this work is hard and can be stressful. If you're pursuing the questions that you are passionate about, then you can deal with the annoyances. If you're just going to do things because they're convenient or they're going to look good, then it's not going to make you happy. For me the secret of success is to do what you really want to do, not what you think you should be doing.
Speaking of moving for work, I'm presently looking for postdocs, grad students, and RAs to join the lab. If you're into innovative thinking, great questions, and working with an awesome group on new approaches, then contact me - we're actively looking for people. Feel free to contact me or my lab manager for further details!
SV: Thanks again for taking the time to share these wonderful pearls of wisdom with our audience! We look forward to your keynote next month!
The interview finished and as a trainee struggling to understand my personal growth within academia, I came away with a fresh look and a new mentor I felt I could seek out for advice! It also made me more confident that academics like Bea are what we are all fighting to be!
See Martijn van den Heuvel's keynote OHBM2018 lecture here:
Martijn van den Heuvel heads the Dutch Connectome Lab, part of the Complex Traits Genetics Lab at the VU University in Amsterdam. The goal of his research is to understand the association between brain complexity and brain function in health and disease. We had the pleasure of interviewing Martijn to find out more about his career, and to get a sneak preview of his keynote lecture at OHBM 2018 in Singapore.
Tommy Boshkovski (TB): Can you tell us a bit about your background and your lab?
Martijn van den Heuvel (MvdH): I finished my undergrad and master’s in artificial intelligence, and my Ph.D. in medical science, at the University of Utrecht in the Netherlands. After finishing my Ph.D. I got a faculty position, and then my team and I recently moved to Amsterdam, to the Center for Neurogenomics and Cognitive Research. There I built my lab on connectomics; we are quite a group of connectome enthusiasts, some of them Ph.D. students and some of them now postdocs. The group is really multidisciplinary. We have a biologist, a physicist, mathematicians, psychologists, and engineers, and that's really my approach: I always try to make the group as interdisciplinary as possible, preferably with people who can do something other than what we can do.
TB: What motivated you to start a career in Connectomics?
MvdH: I’m not sure if anyone has intrinsic motivations to start a career in connectomics. The reason why I started to apply network science to brain imaging was actually by accident. I have a background in artificial intelligence, and I was trained in machine learning and neural networks, so to me that looked like a very obvious thing to do. I started to play around with small world network analysis, and then I got in contact with the big people in the field like Kees Stam and Olaf Sporns, and basically, that is how I rolled into the field.
TB: You are one of the keynote speakers at the upcoming OHBM annual meeting. Could you tell us how you got started with OHBM?
MvdH: I think OHBM was the first meeting I ever attended, back when I was a master’s student. I was honored that my professor let me go (I think the meeting was in Budapest that year). It was awesome to be around all those people who were working on similar things. But I always had to choose between OHBM, ISMRM and SfN, and later in my career I had the luxury of sometimes going to all three; I really like that mix. The nice thing about OHBM is that it is very application-driven, so there are many people using the tools, but there is also (in more recent years) a growing interest in developing new tools in the field of network science and connectomics. I am part of a joint effort with Andrew Zalesky and Alex Fornito, organizing an (almost) annual educational symposium on graph theory and network science at the meeting, which is great.
TB: What can we expect from your plenary lecture?
MvdH: I’m going to tell quite a bit about the connectome. First, I’m looking forward to providing a bird’s-eye view of the connectome field. I want to spend some time on “why are we doing this?” – so that’s the first part. Ideally, I want to put this into the framework of “if the brain is wired like it is, does this also confer certain vulnerabilities to brain disorders?” One of the ways that we are looking into this is by means of comparative connectomics. We have become more and more interested in looking at connectome features shared across species. If these features have been conserved through evolution and are present in many different species, then they might form the very core of the brain. I think it is really important to understand these ground rules of the brain, because only then can we start understanding how changes in the ground rules may lead to different types of brain disorders. This will make up the second part of my talk, where I won’t focus on just one disorder but, rather, talk about a wide range of brain disorders. Most studies are single-disorder studies (i.e. we found property X or property Y to be involved in schizophrenia), but I think the field is ready now to start looking into more deep-rooted questions. I’m asking which of these effects are specific, and which are common across brain disorders. It is really important to understand the multidimensional aspects and multi-disease effects of connectomics.
TB: Which recent developments in the field excite you the most?
MvdH: There are quite a number of them, but what I really like is that the field is moving a little bit away from just studying the connectome and is starting to combine it with other types of measurements. The connectome field is 10-15 years old, but it’s pretty nice to see that people already take it for granted. I am particularly excited by, for example, taking gene expression data and combining it with connectome data, or taking cell biology data, like cytoarchitectonics, and incorporating it into the connectome model. I think these developments are pretty cool to see from a multidimensional neuroscience perspective. What I also find exciting is that there is a lot of effort going into improving the connectome field. Recent statistics papers show that while it might be relatively easy to make a network, studying it in a very meticulous way, with good case-control studies, might actually be more difficult than we initially thought. So there have been great improvements in the last couple of years in new statistical methods that really adopt the network perspective. Finally, there are so many efforts going into mapping the connectomes of a wide range of species at different resolutions. A big part of the connectomics community is outside the MRI community, and I really hope that these communities will start to mix.
TB: Where do you see connectomics in the next 5-10 years?
MvdH: Connectomics is a young field, so looking 5 or 10 years into the future, or even 5 or 10 days, is difficult. I do think that there will be more room for combining different types of data in the connectome model. We’re going to see more and more papers that combine EEG with fMRI, or diffusion with functional measurements, or even go beyond that, combining genetics with imaging, because the connectome alone is not going to give us the final answer. I hope that we are going to start using in vivo connectomics in more clinical applications. When we start to merge the machine learning and big data fields with in vivo neuroimaging connectomics, there is room for very exciting discoveries: for example, some sort of personalized connectomics where we could use connectome imaging to make predictions about disease outcome, medication response, etc., and I think that could be feasible. It is going to be a rough road, and I’m not sure whether we are going to do this in 5 or 10 years, but I think it’s possible to use a clinical connectome approach at the individual level, like precision connectomics.
TB: What are your other interests besides connectomics?
MvdH: I’m Dutch, so obviously I cycle, just like 17 million of my fellow countrymen. I’m also quite interested in tech, like new gadgets or new developments in the machine learning and big data fields. Besides that, I’m a dad. I have two kids, who are of course the two nicest connectomes out there.
Professor Edward Bullmore has had careers in the clinic, academia and industry. He is the head of the Department of Psychiatry at the University of Cambridge, the director of the Wolfson Brain Imaging Centre and the head of a neuroimmunology research group at GlaxoSmithKline. His academic interests range from the clinical to the mathematical. He is perhaps best known for his work on analysing brain networks using the framework of graph theory, while his current interest, described in his latest book, “The Inflamed Mind”, is neuroimmunology.
The understanding of psychiatric disorders is the thread that connects all of professor Bullmore’s diverse interests. The following interview probes into his past experiences and asks his advice for budding young scientists attending the OHBM 2018 annual meeting in Singapore.
Claude Bajada (CB): As a clinician, I found that the approach to thinking taught during a clinical course is very different to what is expected from a researcher. What are your thoughts on the differences between clinical medicine and medical research? And what enticed you to make the shift from clinical medicine to academia?
Ed Bullmore (EB): I agree the mindsets of a clinician and a biomedical scientist are often somewhat different. As a clinician you’re taught to convey a sense of calm certainty, or to reduce a complex situation to a much simpler diagnostic formulation or treatment recommendation. And at least when I was at medical school, in the 1980s, questioning the scientific basis for clinical wisdom was not always welcomed by senior physicians or surgeons! A scientific training, by contrast, is an education in learning to doubt or challenge everything, especially your own most treasured hypothesis or most precious results. There is certainly a tension between the reassuring bedside manner of a clinician and the oceanic scepticism of a scientist. Another very important difference between the two cultures is the status of numbers. Medicine was almost entirely non-quantitative when I was going through medical school; whereas in neuroscience and neuroimaging, mathematics is increasingly central. I think medical schools, at least in the UK, still need to do more to make doctors more mathematically competent and confident - and to provide proper career paths for non-medical scientists bringing their expertise from physics, maths and engineering into contact with the number-crunching challenges of modern biomedicine.
I switched to scientific training halfway through my clinical training in psychiatry. I was motivated by the idea that psychiatry was still at a relatively early stage of scientific development compared to other areas of medicine and I couldn't imagine being satisfied with a career solely dedicated to clinical practice in an area which I thought was very likely to see radical change. It wasn't a difficult decision for me in principle. But if I had been clinically specialised as a cardiologist, or some other area where the science base was already more evolved, it might have been a more debatable move. For people specialising in surgery or radiology, the number of training hours that must be dedicated to learning operational procedures is much greater than in psychiatry, and the financial rewards for focusing exclusively on clinical practice can be much greater than in psychiatry, so the decision to spend 3-4 years on a PhD is a much tougher choice. For any research-minded young doctors who might be reading this, I can say psychiatry is a highly recommended career move!
CB: When I hear the name Ed Bullmore, my semantic association goes: “Bullmore, Sporns, Graph Theory.” Were you always interested in the mathematical aspects of research? What first got you interested in Network Analysis? And how did your, now famous, collaboration with Olaf Sporns begin?
EB: My first research enthusiasm (aged 30) was fractal geometry, which I found intuitively very appealing as a way of quantifying the complexity of biological structures and processes, like MRI scans and EEG signals. However, my old-school medical education had left me completely unequipped with any quantitative skills. I was fortunate to find an excellent mentor, Prof Michael Brammer, at the Institute of Psychiatry in London, and applied to the Wellcome Trust for funding to do a PhD. I was interviewed at the Trust by Sir Stanley Peart, in 1992, who listened politely to what I had to say about fractals and then told me “of course what you’ll really find yourself working on is brain connectivity, isn’t it?”. I agreed with him immediately although that thought had not previously crossed my mind.
Karl Friston’s pioneering work on brain connectivity was very influential, and I also learnt a lot from Barry Horwitz, and through them I began to hear about Olaf Sporns. I admired the paper on complexity he wrote with Giulio Tononi and Gerald Edelman (PNAS 1994) but I didn't meet Olaf until we both attended the second Brain Connectivity Workshop, organized by Rolf Kotter in Dusseldorf in 2002. I liked his talk, about using graph theory to simulate computational networks that maximised the neural complexity measure from the PNAS paper; and several others at that meeting also opened my mind to the new physics of complex networks that was following from the seminal “small world” paper by Watts and Strogatz (Nature 1998) and the “scale-free” paper on network hubs by Barabasi (Science 1999). A few years later, in 2005, I met Olaf again at the Brain Connectivity Workshop in Boca Raton, where I presented some of the first results of using graph theory to measure topological complexity of human brain networks from resting state fMRI. That’s when we started talking more seriously about collaboration, which led to our first co-authored paper, a review of complex brain networks (Nature Reviews Neuroscience 2009) that has since been cited more than 5000 times.
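For readers curious about the “small world” idea mentioned above, the two measures involved, local clustering and characteristic path length, can be sketched in a few lines of plain Python. This is an illustrative toy only, with hypothetical parameters: a real analysis would start from a thresholded fMRI connectivity matrix rather than a simulated Watts-Strogatz-style graph.

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Ring lattice: each node links to its k nearest neighbors (k even)."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    """Watts-Strogatz-style rewiring: move each edge with probability p."""
    n = len(adj)
    for i in range(n):
        for j in list(adj[i]):          # copy, since we mutate the sets
            if j > i and rng.random() < p:
                new = rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def avg_clustering(adj):
    """Mean fraction of each node's neighbor pairs that are connected."""
    total = 0.0
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean BFS distance over all reachable node pairs."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(42)
G = rewire(ring_lattice(60, 6), 0.1, rng)
print(avg_clustering(G), avg_path_length(G))
```

With light rewiring, clustering stays close to the lattice value while the average path length drops sharply: that combination of high clustering and short paths is exactly the small-world signature Watts and Strogatz described and that Bullmore and colleagues later measured in brain networks.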
CB: Your recent publications reveal a broad interest in psychiatric research, focusing on everything from developing methods to questions about the effects of drugs on the brain and much more. What are you working on at the moment? And what would you say is your current main interest?
EB: I am still working on brain network analysis or connectomics but since 2013 I have also become increasingly interested in the relationship between the mind, the brain and the immune system. The reason for this shift of focus goes back to my starting point as a psychiatrist. After 20 years of research, I couldn't help noticing that, although the field of neuroimaging and brain connectivity had grown tremendously, its real-life impact on mental healthcare was zero. By then I was edging into my mid-50s and I felt impatient to do something that might actually make a positive difference to the experience of people with depression and other mental health disorders in my lifetime. For various reasons, the strategy that appealed to me most was to pursue the idea that inflammatory responses of the immune system could cause depressive symptoms and, therefore, that anti-inflammatory interventions could provide a new therapeutic approach to depression.
I am not sure how many OHBM members will be acquainted with the immune system; I’m guessing not many. I knew a bit about it from my medical training in the 1980s but I was utterly dazzled when I took another look at immunology in 2013. We think neuroscience and neuroimaging has moved fast in the last 20 years, and it has, but scientific progress in immunology has been at least as rapid and its therapeutic impact has been much greater. The area I am working on is the interface between immunology, neuroscience and psychiatry – it’s called neuro-immunology or immuno-psychiatry and it’s growing rapidly. There are important questions for neuroimaging in this area: for example, how can we use MRI or PET to measure brain inflammation, especially microglial activation, more specifically and sensitively?
I have just published a book – called “The Inflamed Mind” – which summarises some of the background science for a general audience – and there are some short movies on YouTube which introduce the book in a brief and accessible way (here and here).
CB: What would you say to students, particularly medical students, who would like to start their research career? Particularly, what would you say to them if they were interested in technical subject matters but feel that they “come from the wrong background”.
EB: I always encourage students to recognise and pursue the interest that motivates them most deeply, almost regardless of any other consideration, because completing a PhD is a challenging process and you need to be highly motivated by your project if you’re going to get through it successfully. For medical students, there are some additional considerations, at least in the UK. There are basically two windows – you can do a PhD intercalated with your medical school training, so you graduate as MB/PhD. This works well if you are someone who knows what they want to do in research early on and if you have excellent time management skills. The other window is after completion of core specialist training – usually in general medicine or psychiatry – when there is an opportunity to take time out of the clinical training process to do a PhD, typically funded by a fellowship award from the Wellcome Trust or MRC. That is the route I took because I wasn't clear what kind of research I wanted to do until I was in my early thirties and had started specialist training in psychiatry. For UK medical students and recently qualified doctors, it is highly advantageous to be appointed to Academic Foundation Year (AFY) or Academic Clinical Fellow (ACF) posts, because these allow you to complete medical training while also spending a useful amount of time developing research interests and preparing a competitive application for a PhD training fellowship.
I think medical students with an interest in technical matters, like coding or statistics, should be encouraged. The world of biomedical science will increasingly need people who are both well-informed about the background biological and medical sciences and have the technical skills to handle big, complex datasets. So getting trained in both technical and biomedical skills can prepare you for an exciting career as a relatively rare and highly employable person! However, it is tough to learn technical skills from a low base and at the same time as keeping up with clinical training requirements. I think it is important to have an excellent mentor and also, in my opinion, to focus your technical learning priorities on solving the scientific problems that you are most motivated to address. A masters course in bioinformatics or image analysis could be a useful training step for some people but personally I found it easiest and most rewarding to learn technical skills when I could see immediately how they would help me answer the specific research questions I was interested in at the time.
CB: You also work for industry, do you see that as another career change or was the move to industry a natural progression? What are the difference between working in academia and working in industry?
EB: I started working half-time for GlaxoSmithKline in 2005. My original motivation was that I wanted to contribute to development of new treatments for mental health disorders and, much though I love(d) connectomics, I couldn't see that neuroimaging research in an academic setting was likely to have much impact on mental health practice in real-life.
I have really enjoyed the experience, for the last 12+ years, of working in two organizations with two rather different cultures. I have found it stimulating, refreshing, and I have learnt a lot that I would not have learnt if I had followed the more conventional path of staying fully embedded in academia.
Industry has offered me the chance to think and work broadly, across a wide spectrum of medicine and therapeutics, whereas the life of an academic tends to become progressively narrower and deeper in focus. Industry culture is also strong in terms of team-working and strategic planning, and the standards of statistical analysis and data management are high. In contrast, a tenured academic enjoys an extraordinary degree of intellectual freedom and the opportunity to work with highly talented younger people, as students or early career researchers. There are pros and cons to both organizational cultures. I would encourage people to keep an open mind about any opportunities that might arise to work in the private sector. It can be very exciting and, at least in the UK, there are increasing efforts to make it easy for people to move back-and-forth between industry and academia over the course of a career.
CB: OHBM 2018 will be held in Singapore in June, and is likely to be the first conference experience for many PhD and MD students. Such large events can sometimes be overwhelming. Can you remember your first big conference? And what advice would you give to newcomers?
EB: One of my first big conferences was actually the first OHBM meeting in Paris in 1995. I thought it was electrifying to be in the same room as many people whose papers I had been reading for years but had never seen or met before. However, OHBM has got bigger since then and the scale can be intimidating. I would encourage newcomers to attend the educational program before the main meeting starts. The OHBM educational program has gone from strength to strength and is one of the best possible places to pick up on the state of the art in neuroimaging methods. It is also a friendly atmosphere and a great opportunity to ask questions, introduce yourself to speakers, and connect with others who share your interests and are at a similar level of training. Once the main program starts, I would be sure to attend any smaller, early morning symposia that are focused on topics of personal interest. I would look through the poster schedule and make a point of visiting posters presented by people whose work you admire or you’d like to get to know. I would enjoy the social program for its own sake and also as another opportunity to get talking to the people you want to meet. Wear your name badge and consider using a business card so other people can easily remember your name. The key thing is to meet people and not to spend all your time sitting in the main hall passively listening to talks, or back at your hotel reading the abstracts! I am naturally quite shy so I don't find this particularly easy advice to follow myself. But I have discovered that if you have the courage to step up to someone, with a smile and a handshake, and say something like: “Dr X, I just wanted to introduce myself because I really liked your paper/talk/poster on Y…” then almost always you will find that Dr X is very open to starting a conversation.
CB: Finally, please be honest, are you reviewer 1 or 2?
EB: I hope I am not too often the legendary third reviewer who has a problem with the paper that nobody else recognises but can nevertheless be awkward enough to knock a good paper out of contention! My only advice for dealing with peer review is to remind yourself that it almost always improves the ultimate quality of the work to go through peer review, however uncomfortable it may be at the time, and it is an integral part of the scientific process to do so. I think you will generally have an easier ride if you respect the position your reviewer is coming from and try to deal with their points as constructively and clearly as possible. I usually recommend making changes to the text or supplementary material rather than writing long tracts in the rebuttal letter that do not change the paper itself. And take opportunities to be a reviewer yourself so you learn what kind of issues you should try to pre-empt when writing your own papers or responding to peer review.
By Jean Chen
See Gustavo Deco's keynote OHBM2018 lecture here:
Dr. Wilder Penfield once said that “the brain holds within its humming mechanism secrets that will determine the future of the human race.” And yet, most of us would agree that the brain remains the least understood organ. How do we start to understand how the brain works? The approach of Prof Gustavo Deco, one of our OHBM 2018 keynote speakers, is to try to build one.
In 2001, Gustavo was awarded the Siemens "Inventor of the Year" prize for his contributions to statistical learning, models of visual perception, and fMRI-based diagnosis of neuropsychiatric diseases. He has published 4 books, 258 journal publications and 34 book chapters, and has filed 52 patents in Europe, the USA, Canada and Japan. He was awarded an ERC Advanced Grant in 2012 and he is a member of the Human Brain Project (an EU flagship).
Jean Chen (JC): As far as I know, you completed your PhD in atomic physics. How did you enter the field of neuroscience? How did these two fields come together for you?
Gustavo Deco (GD): When I got my first PhD, in Physics, in 1987, I thought that I would dedicate my research career to this field. However, after a postdoc at the University of Bordeaux in France and a two-year (1988 to 1990) postdoc from the Alexander von Humboldt Foundation at the University of Giessen in Germany, I found my focus shifting. I was absolutely fascinated by neuroscience and neuropsychology and decided to change direction. Very broadly, I was drawn to these fields by the simple question of how the brain works: I wanted to understand how the brain processes information. I was, and still am, convinced that a good grounding in physics, especially theoretical physics, is a great advantage for investigating the brain. For example, in my research I have used many tools from physics, such as statistical physics, nonlinear dynamics, etc. I went to Munich and began working at Siemens' research center, and it was there that I started my career in neuroscience. At Siemens, I created one of the first computational neuroscience groups in Germany. In 1997, I received a PhD in Computer Science from the Technical University of Munich (Dr. rer. nat. habil.). In 2001, I received a PhD in Psychology (Dr. phil.) from Ludwig-Maximilian-University of Munich.
JC: Your interests are broad, and you have made important contributions to computational neuroscience, neuropsychology and psycholinguistics, to name a few. How would you describe the importance of mathematics to neuroscience and psychology research, in the present and future?
GD: We cannot build models of the brain without math. We cannot model cognitive processes without math. To sum it up, I'm absolutely convinced that mathematics is necessary for expressing, in a quantitative and systematic way, the laws that regulate the functioning of the brain. The main reason, or intuition, is that we are dealing with a huge, complex system of billions of neurons and synapses that are coupled, stochastic and nonlinear. It is impossible to intuitively "speak" or "describe" such systems (even a simple system of two feedback-coupled neurons is difficult!), but we can understand and study them by explicitly expressing and investigating the equations, the math, describing the brain. If we renounce that, we do only phenomenology… and we know what we can expect from that… nothing.
JC: I also understand that your most cited research focuses on computational modeling of spontaneous neural activity, the foundation of resting-state networks, and this work is incorporated into the Virtual Brain Project. What is the Virtual Brain Project, and how did it get started?
GD: Yes, I was very active in modeling the whole brain (now not only spontaneous activity but also tasks and different brain states, like sleep and anesthesia). The implementation of those models in a public, easy-to-use platform is fundamental for making the models available to the community, and especially to interested researchers without a strong computational background (e.g. clinical researchers). The Virtual Brain Project was a fabulous initiative that started thanks to the McDonnell Foundation and teams from many enthusiastic labs. The initiative is led by Randy McIntosh (Toronto), with strong contributions from the labs of Giulio Tononi, Michael Breakspear, Olaf Sporns, Viktor Jirsa, my lab and many others.
JC: What is the next step or the main challenge in improving the ability of your computational models to predict biology and behaviour in brain diseases?
GD: Neuroscience, especially computational neuroscience, is a new field, and now is the most exciting time for it. There is everything to discover! We have many of the required elements to create the first theories of computational neuroscience. I'm very interested in whole-brain dynamics and modeling. Neuroimaging has opened an unprecedented window on human brain activity, raising great expectations that novel mechanistic insights into brain function in health and disease will emerge. Unfortunately, the largely correlational findings have not yet delivered the anticipated outcomes. In contrast, a computational framework will allow for causal manipulation of models of multimodal neuroimaging data, opening up the characterisation of biomarkers of disease subgroups and a better understanding of underlying mechanisms. Furthermore, adding a coupled neuromodulator system using receptor binding data will pave the way for novel methods of rational drug discovery in silico.
I think the next challenge is to go from correlational neuroimaging studies to what, together with Morten Kringelbach (Oxford), we call causal neuroimaging. So in my view, the challenges are: 1) to develop and refine our novel framework for causal whole-brain neuroimaging modelling, using sophisticated whole-brain dynamical models of multimodal neuroimaging data that can be manipulated offline, in silico, to accurately describe the causal mechanisms underlying human brain activity; 2) to apply the framework to the diagnosis of neuropsychiatric diseases, and to the design and monitoring of therapies. In particular, one can exhaustively stimulate a realistic, subject-specific fitted whole-brain model in order to detect which type and locus of stimulation is most effective for re-establishing healthy whole-brain dynamics.
JC: What are the main projects that your lab is focusing on currently?
GD: The main projects we are working on are the Human Brain Project, many other EU team projects, and a large project from Germany together with the Max Planck Institute in Leipzig (collaborator: Angela Friederici), among others… The main issue that I see is to extend whole-brain models beyond just the resting state, as I described above.
JC: Can you provide a few pieces of advice for junior scientists in our field?
GD: As I said before, ours is a relatively new field, and now is, in my view, the most exciting time for it. Junior scientists should study what they want and not be influenced by anyone. They should really investigate what motivates them. At this stage in their career, when they are learning how to be good scientists, it is an exciting time and they should take full advantage of it and study what really interests them.
But I'm convinced, and so I tell my students, that the 21st century is the century of Neuroscience and Genetics (but especially the former). I left physics and quantum mechanics because, although those fields were extremely interesting, challenging and mathematically sophisticated, all the main elements and basic concepts had already been developed by the beginning of the 20th century. I always felt jealous of the scientists who were working during those times… Schrödinger, Pauli, Bohr, amongst others, they developed everything! I tell my students that I really felt a kind of “romantic nostalgia” for that time. When I switched to neuroscience, I felt (and still feel) that we are now reliving those same exciting years. We do not have theories, but we have millions of interesting questions and the experimental technology for accessing the right data… So, our task is incredibly important, namely to develop a theory of the brain… I would recommend that all junior researchers work on that!
“In order to be a mentor, and an effective one, one must care. You must care. You don’t have to know how many square miles are in Idaho, you don’t need to know what is the chemical makeup of chemistry, or of blood or water. Know what you know and care about the person.” — Maya Angelou
The online mentorship program is an ongoing initiative launched by the OHBM Student and Postdoc Special Interest Group in early 2017. In this international initiative, mentors and mentees from around the globe are matched on the basis of their experience and expectations. The mentor supports the mentee’s growth by providing advice on topics such as - but not limited to - academic development, grant writing, and work-life balance. What is unique about this program is that every member of the OHBM community can be mentored and can also be a mentor. For example, the program has early career principal investigators (PIs) who seek mentoring by more established PIs, as well as senior PhD students who mentor trainees just starting out. As a rule of thumb, the program maintains at least 3 years of “experience difference” between mentors and mentees, with mentor-mentee pairs often being close in career stage. Currently, there are 424 participants in the program. In this blogpost, we compare statistics from two successive rounds (Round 1, 2017 and Round 2, 2018) of the mentorship program: 252 participants signed up in Round 1, and an additional 172 participants signed up in Round 2.
Relative to Round 1, the geographical distribution of brain mappers joining the mentorship program in Round 2 remained largely unchanged, with two notable exceptions: a gain in members from the Middle East and a drop in new members from South America.
Distribution of participants with respect to career stage was similar in both rounds, with PhD candidates being the most prevalent.
Round 2 saw a decrease in the fraction of mentees who declared an interest in starting a lab, relative to mentees who were either undecided or planning to move to industry. This effect might be associated with the steadily decreasing proportion of faculty jobs relative to PhD graduates, a strong trend in academia since the 1980s.
In line with the above observation, Round 2 of the mentorship program saw a drop in the demand for advice related to starting a lab, and a small increase in demand from mentees for advice related to transitioning into industry.
Looking at summary statistics for all participants in Rounds 1 and 2 from the USA and Canada, Europe, Australia and Asia, outlooks on mentorship were found to be similar globally (Figure 6).
In both rounds, mentors declared similar areas of expertise, mostly related to building a research career. This included taking career opportunities, finding postdoc jobs, developing relationships with coworkers and general advice on career development. Only a handful of mentors indicated expertise in coaching mentees on transitioning to industry.
In summary, participants were gender balanced, and while geographically they hailed from around the globe, the vast majority were from North America and Europe. Over 25% of participants in the program were willing to take on a double role (i.e. both as a mentor and a mentee), thus indicating a willingness to give back to the OHBM community. While the program saw an increase in requests for mentoring on non-academic career options (e.g. transition to industry), this was not matched by an increase in mentoring capacity in these areas. We would thus like to reach out to mentors with experience in industry and entrepreneurship to join the mentoring initiative. Overall, the expectations and competencies declared by participants around the globe were similar, thereby indicating that an online mentorship platform is necessary and useful for the OHBM community.
Note: In addition to the online mentorship program, the OHBM Student and Postdoc Special Interest Group will be holding its second “Annual Mentoring and Career Development Symposium” at the annual OHBM meeting this year. Hope to see many mentors and mentees at the event on Tuesday, June 19th!
By Nils Muhlert
Leah Somerville is an associate professor of psychology and director of the Affective Neuroscience and Development Lab at Harvard University. She was recently awarded the Early Career Award by the Social & Affective Neuroscience Society. Here we find out more about her academic career path, and what we can expect from her keynote speech at OHBM 2018 in Singapore.
Nils Muhlert (NM): First, can you tell us about your career path – how did you get into neuroimaging?
Leah Somerville (LS): I started working on brain imaging research as an undergraduate at the University of Wisconsin. I was working in a couple of different brain imaging labs, right when the first research-dedicated scanners arrived at the university. I was one of the first people to have the opportunity to run experiments on them – along with a team, of course, of other researchers in the labs I was working in.
I got that little thrill moment of seeing a person’s brain image pop up on the screen. Maybe others have had a similar experience. I still have that feeling every once in a while, it hasn’t completely gone away! I find neuroimaging so fascinating and powerful. From there I tried to orient my training towards continuing my brain imaging research, and in particular, fMRI-based research. I’ve studied emotion and anxiety-related processes. I’ve also studied motivation and cognitive control. Now in my lab we’re focused on understanding how those processes change with ongoing brain development through adolescence.
NM: What would you say is so special about adolescence in the context of human development?
LS: There’s a lot I could say here - I’ll try to keep it short! Adolescence is a time of life that, on the surface level, is associated with a number of important challenges that individuals are facing, sometimes for the very first time.
Adolescents are faced with independent choices about how to act, who to affiliate with, and what kinds of goals they would like to hold for themselves. At the same time there are increasing demands on their self-control. They’re becoming more and more self-guided in the way that they’re interacting with the world. We can sometimes think of them as novice independent people who are still developing the toolkit that can support mature independent actions.
We find that ongoing brain development facilitates a number of great achievements at this time of life. But it also places a number of constraints on the way in which adolescents might optimize their behavior in certain situations. We’re very interested in understanding the interplay there – thinking about adolescence as a very adaptive and useful time of life, but also one in which adolescents differ from adults in a number of important ways.
One insight that has fascinated me is looking at brain development measures and asking “when does a person become fully mature?” It may seem like an easy question or one that could be measured using a single modality. In fact, the answer you get really differs when it comes to brain structure or function or network properties. It’s especially surprising that on certain measures – including measures of white matter – the developmental changes continue to play out throughout the twenties and perhaps even through the thirties. So one thing that’s interesting, as an extension of that, is thinking about how we decide when a person is mature from a societal standpoint.
NM: In your work you also discuss socioaffective circuitry – how do changes in that circuit map on to the behaviors we see in adolescence? And what have you found out about that over the last decade?
LS: In our lab we tackle this from different angles – so I’ll let you know about one in particular that I’ll be talking about in OHBM.
We’re very interested in the intersection between motivation and cognitive control. That is, the degree to which motivational cues in the environment – potential rewards and punishments for example – can shape the way in which a person is able to optimize their cognitive control in a given context.
We’re interested in the shift across development, in which individuals across the ages can recognize situations that hold different motivational values. They might want to perform better in certain conditions than in others – either to avoid punishment or to obtain rewards. All of the detection and assignment of values seems to be very consistent in early development. But the degree to which we can take that information and use it to guide our goal-directed actions in the moment, seems to be continuing to develop well throughout adolescence.
One arm of our work is in trying to understand how the dynamic interactions in cortico-striatal circuitry (including the dorsal and ventral striatum and lateral prefrontal cortex) coordinate and give rise to motivation-guided cognition. This is something that we’ve seen play out and continue to change and refine well throughout adolescence and into early adulthood. This is one area of work that we’re excited about.
Another area we’re interested in is adolescent attunement to their social environment. This is a time of life that’s associated with dramatic changes in daily life; individuals are forging new independent relationships for the first time and there’s a lot of volatility in adolescent relationships. They are falling out of favour with one another more frequently than adults would be, giving them lots of opportunities to get feedback about how they’re doing socially. Another arm of our work is therefore to understand how adolescents learn from feedback and use positive and negative social feedback as learning cues to inform how they should feel about themselves in a given situation and how they should feel about other people.
We’ve seen in a couple of studies that when adolescents are on the receiving end of negative social feedback they tend to take that as a very strong cue to influence how they feel about themselves. This would result, for example, in a reduction in the momentary feelings of self-worth or self-esteem. Adults actually show a bias in the opposite direction. They have different strategies in place that allow them to offload or buffer themselves from negative feedback and maintain a positive self-concept, even in the face of very opposite social information. We’re really interested in understanding how learning processes – again subserved by striatal-based systems – might be biased towards learning from negative or positive information in the social domain at different points of life.
NM: And how does this system seem to change from early to late teenage years, or even people's early twenties?
LS: Well, we carried out a study of individuals from age 10 to 25, and found that there is a period from early to mid-adolescence, perhaps from 12 to 15, during which negative feedback had a strong negative impact on self-views. Individuals of college age, in contrast, seem to have a lot of strategies in place already to buffer themselves from negative feedback. So this is one time period when a few years of age makes a large difference in terms of how these cues are incorporated into learning about themselves and other people.
NM: Thinking about how social media might tap into this, and perhaps exacerbate the concerns that adolescents have: as social media has become a more integral part of their everyday lives, has this had negative and positive consequences?
LS: Great question and one that I don’t have a scientific answer for but I’m happy to speculate!
This is a very hot issue now – thinking about how developmental stage might manifest the influences of these kinds of media processes differently. It’s only in very recent generations that people have taken up a lot of social interactions online. This is something that has not yet been subjected to empirical study.
There is a lot of speculation that perhaps social media is detrimental to adolescent development. Adolescents themselves are quite happy at having the option to socialize over the phone and over the internet. They say it helps them maintain strong social bonds, it gives them lots of information. They can stay attuned to the goings on of all of their friends more easily.
There is also the potential for social media to have certain negative and perhaps unintended consequences. One suggested by our work is that social media is almost designed to elicit and deliver feedback to people – getting friended, getting a thumbs-up, or the absence of a like or a lack of response from somebody. The latter can easily be interpreted as negative feedback by people on social media.
The way we see it is that there can be very positive interaction on social media but there’s also the potential for a higher frequency of negative feedback, or the absence of positive feedback being interpreted as negative feedback. We’ve shown that negative feedback has a very potent influence on adolescent self-views, so that very high frequency of receiving negative feedback online could have a more detrimental effect during adolescence than other ages.
Developmental scientists have often had concerns about the effects of new technology influencing self-views. When I was a kid this would have come up with video games – suddenly people would have a Nintendo in their house, there was a wave of concern about that. At this point we just don’t know enough to have a definitive evidence-based account about whether social media is a good or bad thing for adolescence.
NM: Turning to your other work, what would you say are the scientific achievements that you’re most proud of during your career?
LS: I’m not sure if I’d call this a scientific achievement but I’m most proud of having had the opportunity to run my own lab.
I never thought I’d be a PI. It has been one of the most challenging and rewarding things I have ever done. I feel proud and gain a lot of reward from it, particularly when I interact with my trainees. They conduct great work, are great people and are becoming great mentors in their own right! It makes science very fun to do in our group. Fostering an atmosphere that makes science fun and exciting and collaborative is something I’m very proud of, and is down to the efforts of my whole lab.
NM: And to reflect the quality of your mentoring, you were awarded the Everett Mendelsohn Excellence in Mentoring Award. When you look back at your own career, which people could you point to that offered you good advice, and how has that affected how you interact with your own trainees?
LS: I’ve been very fortunate to have had a number of wonderful mentors throughout my training. They’ve helped me bridge gaps into the next steps of my career – giving me advice, and sometimes tough love when I needed it! This includes my graduate mentor and my postdoctoral mentor, BJ Casey. I would point out BJ in particular – she was a big part of me discovering this very strong interest in developmental neuroscience, particularly after trialling out a number of different topics of study. That one fit for me in very large part because of the support in mentoring from her.
It’s important to mention that at first I didn’t realize that every trainee needs something different from a mentor. You need a lot of flexible thinking when you’re mentoring to understand what each person needs at different points in time. This of course evolves at different points of training. They might start by needing more hands-on help and more topically-focussed advising. But watching a person beginning to strive for independence and allowing for independence is something that I work hard to detect and accommodate.
When I became a PI I didn’t realize that I would still benefit from mentoring myself. I still have mentors who guide me and I don’t think anyone is ever quite finished in needing mentoring, advice and guidance. I have a number of colleagues – both peer-age going through similar career stages, as well as more senior mentors – who are still helping to guide me. I am very appreciative of that.
NM: And finally, your OHBM 2018 talk – can you give us a sneak preview? Which gems from your research career have you decided to focus on?
LS: Well I’m very excited about being invited to speak at OHBM and having the chance to go to Singapore. I’ll be talking about two main themes: adolescence as a phase of the lifetime associated with ongoing and dynamic brain development, in particular in development of functional brain connectivity.
I’ll also specifically focus on understanding the interactions between motivation and cognition as a test bed for thinking about how ongoing brain development leads to important shifts in behavior. In doing that I’ll present some specialized studies conducted in my lab at Harvard, as well as some broader projects that we’re currently working on.
Most notably, we’re one of the groups carrying out the Human Connectome Project in Development – a large-scale ‘big data’ style project – that will ultimately collect brain imaging data on over 1,300 5-to-21-year-olds. This is an ongoing study that we are about half-way through collecting data for. It’s partly longitudinal and partly cross-sectional, and it’s designed to help us really understand both the fundamental patterns of brain connectivity that are changing at the basic neuroscience level, and the implications of those connectivity changes for behaviours including motivated behavior and cognitive control.
So I’ll be discussing how we approach these problems from a broad, big-data standpoint and how this can complement the more specialized work that we’re doing.
NM: We’re definitely looking forward to that – many thanks for taking the time to speak to us and we’re looking forward to your talk in Singapore.
By Elizabeth DuPre and Kirstie Whitaker
This month we continued our Open Science Demo Call series by speaking to Ariel Rokem, Dora Hermes, and Tammy Vanderwal about three initiatives they’re involved with that promote openness in neuroimaging research.
Ariel introduced us to NiPy – short for NeuroImaging in Python – a large community of practice supporting the use of Python for neuroimaging. He explained that NiPy exists within the broader SciPy (Scientific Computing in Python) community, which unites many individuals who use Python in their scientific analyses. As these are open communities, Ariel pointed out that anyone is welcome to use the NiPy and SciPy software as well as to participate in their development. If you’re interested in hearing more, he encourages you to check out the NiPy mailing list or the annual SciPy conference!
Dora told us about the iEEG BIDS extension proposal, a proposed extension to the BIDS standard for structuring human intracranial electroencephalography (iEEG) data. She explained that current challenges with iEEG data sharing include large variability in both electrode locations and data formats across sites. The proposed extension will create a standardized structure to store iEEG data and metadata, allowing for novel, multi-modal analyses via integration of iEEG with MRI, MEG and EEG. To contribute to the development of the iEEG BIDS extension, Dora encourages checking out the current draft or commenting on the BIDS mailing list.
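For readers unfamiliar with BIDS, the sketch below illustrates the general flavour of BIDS-style naming with a hypothetical iEEG example. The function, entity values, and file extension here are illustrative assumptions on our part; the exact iEEG entities and suffixes are defined in the extension proposal itself.

```python
import os

def ieeg_bids_path(sub, ses, task, ext=".edf"):
    """Build a hypothetical BIDS-style relative path for an iEEG recording.

    Illustrative only: the authoritative entities and suffixes for iEEG
    are specified in the BIDS extension proposal, not here.
    """
    # BIDS filenames chain key-value entities (sub-, ses-, task-) with a suffix.
    filename = f"sub-{sub}_ses-{ses}_task-{task}_ieeg{ext}"
    # Data live under a per-subject, per-session, per-modality folder tree.
    return os.path.join(f"sub-{sub}", f"ses-{ses}", "ieeg", filename)

# On POSIX systems this prints:
# sub-01/ses-01/ieeg/sub-01_ses-01_task-rest_ieeg.edf
print(ieeg_bids_path("01", "01", "rest"))
```

The point of such a predictable tree is that generic tools can discover recordings by walking the directory structure, which is part of what makes the multi-modal integration with MRI, MEG and EEG mentioned above tractable.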
By Valeria Kebets, Csaba Orban, Thomas Yeo on behalf of the OHBM 2018 Local Organizing Committee (LOC)
As we’re swiftly approaching June, we thought we would follow-up our previous blogpost with 10 practical tips to help you make the most of OHBM 2018 in Singapore.
1. CLIMATE: Singapore has a hot and humid tropical climate. The air temperature remains in the mid-twenties (~75°F) even at night, so don’t be surprised if you break a sweat after only a 10-minute walk. For daytime walks, sunscreen is recommended as the UV index can reach extreme levels. Buildings tend to be heavily air-conditioned, so you may also want to pack a sweater for the conference. Also note that the weather is unpredictable, and heavy thunderstorms can develop in just a few minutes, so your weather app is unlikely to be helpful.
2. FOOD: If you want to grab a quick bite during the conference, there are many cafes and restaurants in the same building (listed here). There are also plenty of dining options at walking distance from the conference centre such as Gluttons Bay, Chijmes and Bussorah street. Shoppers will be glad to know that most stores in the city are open until 10pm, including on Sundays.
3. TRANSPORTATION: The best way to take advantage of Singapore’s public transportation system is by purchasing an ez-link card (same concept as Oyster card in London). Ez-link cards are sold at the airport and at most MRT (subway) stations for a $5 (3.75 USD) deposit. Ez-link works on all buses, MRT lines, and can also be used to pay in some stores, e.g. 7-11s, and some taxis. Pro tip: Remember to tap out with your card when alighting buses to avoid getting charged the maximum fare.
4. MAPS: Google Maps or Citymapper are great for figuring out the best combination of MRT/bus/walking to get anywhere on the island, including expected travel times, when to alight buses (stops are not announced), and the fastest way to exit MRT stations. Follow this link for directions to the conference centre.
5. TAXI/RIDESHARE: All Singapore taxis operate on metered fares. There is no Uber, but Grab provides a similar service. There are separate pick-up points for metered and Grab taxis at Changi Airport. Pro tip: If you want to keep costs low, avoid the Chrysler cabs (black taxis) in the airport taxi queue.
6. GRATUITY: Tipping is generally not expected in Singapore. Most restaurants automatically add a 10% service charge and a 7% Goods and Services Tax on the bill.
7. PAYMENT METHODS: Most places in Singapore will accept credit card payment (VISA/Mastercard, though usually not AmEx). However, do keep some cash for dining in hawker centres. ATMs are widely available in the city and airport.
8. LIQUOR TAX: Singapore imposes an excise duty on all liquor, so expect to pay between $10 and $14 (~$9 USD) for a small bottle of beer in restaurants or bars. Pro tip: Duty free stores inside the airport terminal are exempted from the liquor tax.
9. MEDICATION RESTRICTIONS: Singapore strictly enforces a zero-tolerance policy on possession of illicit substances. Note that certain prescribed psychotropic medications (e.g. sleeping or anti-anxiety medication) may require you to apply for a license at least 10 days before your arrival. You can read more about this here.
10. RELATED EVENTS: Be sure to check out the satellite events before and after OHBM. The events kick off with PRNI (June 12-14), the OHBM Hackathon (June 14-16; sold out) and BrainStim (June 15-16). The Chinese Young Scholars Meeting takes place June 19. There are also three post-conference workshops organized by the local brain imaging community: Multimodal Neuroimaging for Mental Disorders (organized by yours truly; June 22), Brain Connects (June 22) and Nonstandard Brain Image Analysis (June 22-23). Attendance is free but make sure to register early--while there are still seats!
If you haven’t already, we highly recommend checking out the brain in SINc website for more in-depth information on food, sights & attractions in Singapore, curated by the Local Organizing Committee.
We look forward to welcoming you next month in the Lion City!
By Jean Chen
As part of the OHBM International Outreach effort, we found out about the experiences of Iranian trainees. Many of us in brain imaging have met and worked with Iranian trainees, who outnumber trainees from most other Middle Eastern countries. By hearing the trainees’ stories, we get a snapshot of the circumstances behind their decisions to leave Iran as well as their aspirations in foreign lands. In this post, we speak to current and former trainees, including:
Jean Chen (JC): How much exposure to brain-mapping research did you have as undergraduate students in Iran?
Aras Kayvanrad (AK): I did not have much exposure to brain mapping research as an undergrad student. I completed my undergrad more than 10 years ago and at the time there was little brain-mapping research in the country. However, things have changed now and there are several research groups working in the area of brain-mapping. There are more opportunities for undergrad students to learn about brain mapping research through talks, workshops, seminars, etc.
Sana Nezhad (SN): During my undergrad in Electrical Engineering we had a course called "The Application of Electronics in Medicine". It was in that course that I received my first academic exposure to brain-mapping research, which actually motivated me to do a Masters in Bioelectronic Engineering at the University of Tehran. There we had one year of coursework, which exposed me to the use of EEG, MRI and CT for brain mapping. I also got to know about quantitative methods of analysing the data we acquire using these methods. For the second year of my Masters I was required to complete a research project on MRI data acquisition, and although my project was focused on body imaging, I had classmates doing fMRI and MRI projects on the brain. As a result of group meetings, I learned about their research.
Mahdi Khajehim (MK): My personal exposure to brain-mapping only started when I took the “Introduction to Biomedical Engineering” course as an undergraduate student and for the first time became familiar with methods like MRI and fMRI. However, I think as a result of multiple government-supported programs and increased interest in brain-mapping, this pattern has already started to change. Nowadays, undergraduate students in Iran have much better opportunities to get familiar with this field through talks, workshops and summer schools, such as the Iranian Summer School of Cognitive Neuroscience. These are hosted by many different universities and institutions.
Arman Eshaghi (AE): During my undergraduate studies (Tehran University of Medical Sciences), I worked on at least two different projects in which we used advanced neuroimaging methods (DTI and fMRI) for patients with multiple sclerosis and neuromyelitis optica. My work was conducted with Professor Mohammadali Sahraian at the Sina Multiple Sclerosis Research Centre, which is affiliated with the Tehran University of Medical Sciences. I was also in active collaboration with UCL Institute of Neurology in London working with Prof. Olga Ciccarelli.
Mostafa Berangi (MB): During my undergraduate studies in Electrical Engineering, I took some courses in Biomedical Engineering, and they really interested me. As I became familiar with the multiple aspects of Biomedical Engineering, I was particularly interested in the field of MRI. That is the main reason for my decision to pursue brain imaging for my graduate degree.
JC: How would you describe the Iranian brain-mapping landscape? Are there major research programs or meetings that you were aware of as an Iranian student?
AK: Not as a student -- as I mentioned at the time I did my undergrad, there was not much brain mapping research going on. However, the growth of brain-mapping research has accelerated in recent years, and several research bodies have been established recently providing financial and/or technical support to researchers in this area, which can potentially further facilitate and expand brain mapping research in the country. Most notably, the Cognitive Sciences and Technologies Council (COGC) provides funding for brain-mapping research through a variety of research grants. Moreover, the recently-established National Brain Mapping Laboratory (NBML), equipped with state-of-the-art scanners, has further paved the way for brain mapping research in the country.
SN: There are several brain mapping groups specializing in advanced quantitative analysis of brain-imaging data generated through different modalities. I get the sense that in Iran there is a shortage of data-acquisition access due to limited resources; however, most active research groups overcome this problem through collaborations with universities abroad. For example, I collaborated with a cancer centre based in the UK and received half of my data from there. This lack of imaging resources drives the research towards quantification methods rather than data acquisition approaches.
MK: From my perspective, the Iranian brain-mapping field has already started to grow at a promising pace. Thanks to increased government support through funding agencies like the Cognitive Sciences and Technologies Council (CSTC) and greater availability of the infrastructure that is an essential part of this field, there is now rising interest in doing research in brain-mapping. Moreover, some newly established institutions like the National Brain Mapping Laboratory (NBML), in conjunction with some older ones like the School of Cognitive Science, are also playing a crucial role in expanding the field among Iranian researchers, and I personally benefited a lot from attending educational events hosted by these institutions. It all adds up to an even better future for this field in Iran.
AE: There have been active institutes working on animal neuroimaging (in addition to human) located in Tehran that are affiliated with top Iranian universities, including the Institute for Fundamental Physics and the Institute for Cognitive Science Studies. There are new centres such as the National Brain Mapping Laboratory, which did not exist when I left Iran in 2014. There are also groups working inside university hospitals including the Neuroimaging and Analysis Group. Therefore, in my opinion Iran can have a bright future in science and in particular neuroimaging in the Middle East.
MB: In Iran, the best students choose to go to Sharif University, University of Tehran, Amirkabir University of Technology, Iran University of Science and Technology, Khaje Nasir University and Shahid Beheshti Medical University (in that order). In terms of the field of brain mapping, from my perspective, the University of Tehran and Amirkabir University are the top institutions. I feel that these institutions have the largest and strongest faculties, and this quality is important for graduate students.
JC: For those of you abroad, what was your main motive for leaving Iran to pursue further training? For those in Iran, do you have plans to leave Iran for additional studies?
AK: I left Iran after my undergraduate degree. The reason was quite simple --- I left Iran to expand my horizons in a new environment doing cutting-edge research.
SN: My main motive was to have the opportunity to get more involved in MRI acquisition research, which is more feasible here in the UK. Also, I cannot rule out being adventurous and wanting to experience a different cultural environment!
MK: On one hand, some aspects of brain-mapping research remain mostly untouched in Iran, and those happen to be the areas I was most interested in; as such, leaving Iran made sense, since there was not much expertise or experience available there. On the other hand, in my opinion, another thing still missing in Iran is international collaboration, which helps accelerate the development and increase the quality of brain-mapping research there. These two factors were my main motivations for going abroad for PhD study.
AE: My main intention for leaving Iran was to expand my skill base in using larger databases, and in particular my quantitative skills. Moreover, working in a place such as the UCL Institute of Neurology, which is home to many renowned neurologists and neuroscientists, has enabled me to form more ambitious research plans with access to a wide range of patient populations.
MB: I would like to study in a foreign country, but it comes down to a personal decision, so I have not yet made up my mind. Certainly, most of our students would like to study abroad, and many of my labmates have left to pursue their PhDs. Our professors do not try to retain us. They actually encourage us to explore our options.
JC: How would you describe the career prospects of a highly trained neuroimaging researcher in Iran?
AK: With more groups working on neuroimaging and the availability of research funding and imaging facilities, the prospects seem very promising. In Iran, many talented students are interested in engineering in general, and medical imaging in particular, which is an invaluable asset to principal investigators in these fields. Nevertheless, in spite of the recent progress, access to funding and imaging facilities is still very limited. Moreover, there is limited collaborative research between individual groups and between institutions. I hope the establishment of new national research bodies, such as the NBML, will lead to collaborative research initiatives between research groups and institutions across the country.
SN: I would think a researcher with a good international network, particularly with countries with a strong neuroimaging landscape, can expect a promising future.
MK: I think for such an individual the available positions would be in academia or government-funded research institutions; in the private sector, there is only a limited range of options. I imagine there would be several suitable faculty or research positions available in the capital city (Tehran), but not so many in the rest of the country. For private research-based companies to grow and create more jobs in this field, there is still a lot that needs to be done.
AE: Compared to the developed world, there are very limited funding opportunities in a developing country like Iran. As a result, many students may prefer to leave the country to expand their skill base. However, despite these limitations, there has been an upward trajectory, as is evident from the construction of new neuroimaging centres and the availability of graduate (PhD) level university programmes dedicated to neuroimaging.
MB: Medical Imaging is still a very new field in Iran, and frankly there are not that many jobs in this field, especially for PhD graduates.
Postamble (JC): As in any research community, trainees are the future of Iranian brain-mapping research. The trainees that you met here are some of the brightest among Iranian students; they are expanding their horizons voraciously and have ambitious future plans. Irrespective of their current locations, these trainees show their love of their home country and are obviously excited by the recent developments in the Iranian research arena. I have come to learn that > 40% of Electrical Engineering students as well as > 50% of Medical Physics students at the University of Tehran are women, numbers that exceed those of most western programs. We look forward to the transition of these trainees into independent scientists.
Although there is great need for brain-mapping expertise, there are currently few positions in Iranian universities for trainees, even those with often highly prestigious foreign training. In this regard, I have come to learn that the government has established paid postdoctoral fellowships (up to 2 years) for those returning to Iran and in search of faculty positions. In parallel, there are government programs that encourage highly-qualified individuals to return to Iran to establish tech companies, through both cash rewards (up to $40,000 USD) and low-interest (close to 0%) loans. These mechanisms will likely create jobs for future trainees in brain imaging.
We wish these trainees the best, and hope the OHBM community will be able to enhance outreach to those working and living in Iran as well.
BY THOMAS YEO, NICOLE KUEK
Professor Simon B. Eickhoff is the Director of the Institute for Systems Neuroscience at the Heinrich-Heine University Düsseldorf and the director of the Institute of Neuroscience and Medicine (INM-7) at the Research Center Jülich. Simon is a leading cartographer of the human brain, and his team utilizes a wide range of methods to map the organizational principles of the human brain. We had the opportunity to chat with Simon before his keynote lecture at the upcoming 2018 OHBM Annual Meeting in Singapore.
Thomas Yeo (TY): Today we have Prof Simon Eickhoff here, a keynote speaker at OHBM 2018. Simon, thanks for doing this. How would you describe your research to a random person on the street?
Simon Eickhoff (SE): I would say that I’m interested in how the brain is organized, how it varies between people, and how this variability relates to things like cognitive capacities. Then ultimately, I want to contribute to developing new tools for diagnosing and treating neurological and psychiatric disorders.
TY: That’s a rich set of activities – how did you end up on this research path?
SE: More or less by accident. I studied medicine in Aachen, and late at night at a party in my hometown, I met a friend from school who had started studying in Dusseldorf. He told me about brain research there and I thought it sounded quite interesting. So I called Karl Zilles’ secretary, met him, and was really fascinated. I started my doctoral studies there and never really managed to leave.
TY: Your lab is involved in several projects – but what is the most exciting thing you’re working on now?
SE: There are two things we are doing right now that I’m really excited about. One is brain-phenotype relationships. Can we actually infer complex phenotypes from brain imaging data? Can we predict personality traits or cognitive performance? And the key aspect here is --- given that there’s quite a lot of work on this already --- can we predict it in an interpretable fashion? What we need is good predictive performance, while also learning something about brain organization. This is one of the aspects that I really want to push, as it’s not highlighted enough in current discussions. It’s one thing to be on the data-driven side, and to get good compressions, good predictions. This is, without any question, awesome. But in the end, we also want to learn something about the brain – how the brain is organized. There’s a lot of work going on in our lab that really tries to combine the more data-driven work from a computer science perspective, with the more traditional neuroanatomical view.
The second part is more related to brain mapping. You can describe the brain through a lot of different features. So for each point in the brain you can say, what are the structural properties, what is the trajectory as we age, how is it disturbed or changed in people with Parkinson’s or schizophrenia. But also, what are the functional connectivity profiles at rest and during task, what are the structural connectivity profiles, and so on. You can use each of these features to map the brain and to delineate brain areas. But how does all of that work together? That’s the critical question, and cracking this kind of topographical code, that’s something that we can hopefully get closer to. And it’s pretty exciting!
TY: Moving forward – what do you hope your research will accomplish in the next 5-10 years?
SE: Well if we manage to do the things we just mentioned, I’ll be quite happy! The predictive modeling, but also brain mapping, understanding organization and topographical complexity of the brain. That’s going to be fantastic!
Looking ahead, I’m not sure if it will be done in five years, but it will be really exciting if we can go outside of the academic field, outside of doing research to just get the next papers, and to mature enough to actually bring our research into clinical practice. Five years – I’m skeptical if we’ll get there, but over 10 years, I’d be more confident. If my team will be able to contribute to it, then that would be fantastic. And we’re working towards that goal.
TY: What do you think is the biggest obstacle right now?
SE: Towards clinical application? In the end there’s so much flexibility in the analysis of imaging data. We always hope that we live in a perfect world, where you acquire data, and then you do one single analysis, which is a priori planned, and that result gets published. But I’m not entirely sure that most labs do that.
The thing is, if you really want to go and measure yourself, say by the standards of clinical trials, that would need to happen. You’d have to have your analysis plan ready, deposit it, acquire the data, carry out one single analysis and report the outcome. Then if you want to have something that’s clinically useful, it needs to have a really high accuracy and predictive value. What I really value is the current push towards more methodological rigour. I’m really happy to see that it’s becoming slightly easier to publish null results, and new methods are not just judged by “better” performance.
TY: So you’re saying that p-hacking is a problem, but I guess it’s a bit unclear to me if that’s the main problem. Even with the high quality Human Connectome Project data, the predictive accuracy is not that amazing.
SE: Right, we still have a lot more work to do. And since you’ve mentioned the HCP dataset, this is never what you’d get from a clinical setting, where the data is acquired in a short time by a technician who is less invested than, e.g., a PhD student. And also the patient may not be as motivated as a research subject to lie still, comply with instructions, etc.
Most likely, what will happen is the field will be split into different domains. One that is very high resolution, very intense sampling, and a lot of valuable data for each individual. That will be great for understanding brain organization.
But there will also be the other side, that will deliberately say “I want to use low quality, clinical data.” These two adjectives may describe the same thing, though the latter just sounds better [laughs]. We are using standard clinical quality data and we know the data is bad, but we also know that we need to find something that works on such data if we want to make an impact beyond research as an academic discipline. So we have to be up for the challenge! One thing we are doing a lot now is to deliberately make our lives difficult, by combining data across many different sites, different scanners, different populations, different continents. That way, the dataset is diverse - often even bad - but we’re happy with some drop in performance, because we know this is genuine performance. We have to improve it, but at least we’re not tricking ourselves into believing that we’re doing extremely good predictions that don’t hold up in real life.
TY: We are at Whistler now and just had an exciting workshop held by Todd Constable. I’ve noticed that the talks from more senior professors seem to cover a lot of papers, whereas someone more junior (like me) will talk about two papers. Is this what I should be aiming for?
SE: I think it’s a matter of personal style. But maybe you are right, and personal style changes with age. You really have two choices when you’re giving a talk. You either take a rather high-level view and present an overarching picture. Or you are going to dive deeper into something and go into a lot more detail.
One of the explanations for your observation is that when you are younger, you just have fewer papers you really want to talk about. When you get to a certain stage, you have a lot of papers that you can talk about, so you need to make a decision to go deep or go broad. It really has to fit your style and what you’re comfortable with.
For me, usually I want to give an overview, as we’re making great efforts to put puzzles together. We have different studies that may not be particularly related to each other, but you can see the crosstalk, and you can see the connections that I’m so excited about. That’s why I talk about so many different things.
TY: I’m on twitter and see that you (@INM7_ISN) have strong perspectives on open science and the replication crisis. Do you want to comment on that?
SE: Yes – I’m a big chimera when it comes to that. On the one hand, I’m a big proponent of open science. Most of the work we’re doing depends on shared datasets. In fact, long before the term ‘open science’ became popular, I was sharing my software. Back in 2004, I was developing the SPM Anatomy Toolbox. That was just open MATLAB code. I was still an undergraduate at the time.
So I’m a big fan of open science, but what worries me at times are certain tones of the debate. Sometimes there’s a patronizing aspect to it, a moral argument of “you have to” and “how can you not”. I think that in order for open science to grow, we need to take concerns seriously. Perhaps by virtue of being around both the open-science and computer-science environments and the very traditional German medical environment, I can see that there are two sides to the argument.
Basically, in the German medical environment, I don’t think any person that would review your grants or would hire you cares too much about whether your dataset is open, or whether your software has been released. There’s a more traditional focus on publications with a lot of focus on impact factor, and grant money. This is a completely different world.
An example – if you’re a software developer then your product, your outcome or claim to fame, is the thing (a software tool, a repository, any other resource) that you distribute freely on the internet. If you are someone who spent years collecting data on a rather rare disease, recruited patients, talked a lot to them, followed them up clinically, evaluated them repeatedly and put them in the scanner, then this data is a resource, an extremely valuable resource. And it comes with the assumption that this data will allow you to get enough out of it to carry you to the next step of your career. If you are then being told “well, you are unethical and doing something terribly wrong by not immediately sharing the data freely after the first paper”, this is not putting open science in a good light.
I think open science will succeed and it will be a tremendous accelerator of knowledge. But in order for that to really happen we need to take people’s concerns seriously. There will definitely be a development at different speeds, with things moving faster in some fields than others. It’s not the case that those fields that move faster can look down upon those that are not as fast. I don’t think anyone is opposed to open science from a personal conviction, but it’s more about needs and rewards and we need to take these views seriously.
Long-term - there has to be a better incentive structure. At the moment, we are conservative about it in the German medical system – one of our main criteria for hiring is a cumulative impact factor – the sum of the journal impact factors of all your papers. H-index, citations and so on do come into play, but the fact that cumulative impact is a major evaluation criterion shows you that different fields still evaluate contributions differently. At some point, open science practices will need to be rewarded not just morally, but also practically by selection committees. That will take quite a bit of time, though.
TY: So how do we change the minds of selection committees?
SE: Well, we just mentioned that we are getting older [laughs]. In some ways there is another generation before us, those who are really not used to it. This will perhaps change over time. Then at some point we have to find criteria for quantifying open science. You can show things on the internet to a review committee – e.g. you have 500 or so matlab scripts that you are sharing – but will that give you a job? If committee members are from another field, they might not get the value, so there needs to be some way of quantifying these contributions objectively. Then a committee made up of psychiatrists, dentists, or structural biologists (and they are often that diverse) can refer to some numbers that give an assessment of your open science practices. That would be a big step forward.
TY: Have you heard of the idea that once you come up with a set of numbers, they will be gamed?
SE: Sure, but this will always happen. We would like a perfect world where all decisions about hiring or promotions are made by people who are experts in your field, who spend several hours scrutinizing your CV or your 10 most important papers for context, and who read the related literature for comparison. But this just won’t happen. Maybe I’m part cynic, part realist, but most people are overloaded with committee duties, so you need some easy summary of a person. Yes, this will be gamed – there’s no way around that – and we hope there is someone on each committee who can point out the gaming aspect and spends the time to know your work more deeply. But usually you have to convey the importance of your work to people who are not familiar with your publications or your topics.
TY: Thank you so much for this interview!
We look forward to attending Simon’s exciting keynote on Monday June 18, 2018.