BY NIKOLA STIKOV
In May 2016, OHBM announced the Open Science Special Interest Group (SIG). One of the SIG founders, Cameron Craddock, wrote an informative blog post about the mission of the SIG and its potential. In the post, Cameron illustrated the benefits and distinctions of open science by drawing upon the free beer vs. free speech analogy. The OHBM blog team felt that ‘beer vs. speech’ is jargon that needs explaining. Twitter thought otherwise. This made us aware that open science voices are sometimes difficult to hear outside their own echo chamber, especially in the noisy world of brain mapping. Cameron removed the reference to speech/beer from his feature, and we agreed to pick up the conversation with Samir Das and Pierre Bellec, two free speech and beer enthusiasts from Montreal.
Nikola Stikov: Can you please explain the difference between ‘free as in speech’ and ‘free as in beer’?
Pierre Bellec: The “free beer” versus “free speech” analogy comes from the open-source software community. Free as in beer, or “gratis”, means you don’t need to pay to use the software. Free as in speech, or “libre”, means you can freely reuse the software in new projects without direct approval from the authors. Free software is generally both gratis and libre.
Samir Das: The “Free” concept is not limited to software. More recently, we have focused on ideas such as Open Science. We are embarking on a new mission at the Montreal Neurological Institute (MNI) to build open science, but defining what 'open science' is can be tricky. The MNI is the first institute to go “Open”. What this means is that the institute won’t worry about patenting ideas and techniques, and will make acquired data freely available to the scientific community at large.
NS: So is ‘free as in speech’ always better than ‘free as in beer’?
PB: For software, people don’t care that much that it is free as in beer. At the end of the day, if you really want to use a product, you will find a way. The fact that a piece of software is free as in speech, though, has turned out to be incredibly powerful for innovation. Android is based on Linux, a prominent open-source project. Tesla’s Autopilot is also based on Linux. You watch a video on the plane? Linux. Robots going to Mars? Linux. Linux is so robust and so flexible, it blows away anything that a private company could produce.
SD: Free doesn’t mean you can’t profit from it. Some people make a lot of money, even though many people still consider it a volunteer service.
PB: Exactly, in free software, people work together on projects that are difficult to do alone. But you can still add a layer that is unique, and you can sell the product as a whole. Apple built its OS from Unix; they did not reinvent the wheel.
NS: Does sharing apply not only to software but also to data?
PB: Yes. A paper is not a very reusable unit, it is hard to build on it. There are details missing in your typical manuscript, plus we are an experimental field, and if you don’t have access to the data, there’s not much you can do with [the paper]. So ‘free as in speech’ in the context of science means that instead of sharing just papers, we should also share reusable units. Those units could be code, data, tools, workflows… I believe that hiring and promotion committees should consider all of these units when evaluating somebody’s work.
SD: By doing this we will reduce redundancy, waste, cost, because we will have more data available, and governments will spend less money. Even from a self-serving point of view, there is evidence to suggest that if you go open, you might get more collaborators, more citations, more funding, and ways to make money without violating open-science concepts. Finally, this makes it possible for other communities to use the same data in ways that [our] community could never even imagine, so that is very important.
NS: Tal Yarkoni just published a paper about the next generation platform for science publishing, in which, on top of open-access and data sharing, he recommended preprint archiving and Reddit-like peer review. Do you agree with these recommendations?
PB: Open review is exciting, but I have only limited hands-on experience with it. I recently published my name as a reviewer of an opinion piece in Frontiers, then I uploaded my review on Publons.com. Publons is a free website where you can see my entire review history. I definitely enjoyed that process, it is useful to document what generally happens “under the hood”. What I haven’t done yet is take an hour to write a summary of a paper where I wasn’t a reviewer. I want to try that out in the future.
SD: This is the future for sure, but I am not entirely sure about every nuance and the exact details of the outcome. I don’t have a strong opinion about post-publication peer-review, but if that is possible, I suppose it is a good thing. More transparency can help with the current reproducibility crisis in research. However, when it comes to preprint archiving, I feel like there is something to be said about due process. When we collect data for a study, sometimes it doesn’t make sense to release it immediately; we are not done yet. Little embargos so you can finish your planned work might be in order. I am for a reasonable amount of process.
NS: So when should the sharing happen?
PB: I fully agree with Samir, I don’t think it is realistic today to tell people that everybody should share their data as soon as it is collected. Because you are going to scare people. At the end of the day, I believe in most cases embargos are not useful, and that ten years down the road few people will still use them.
SD: If [the data] is organized while you are collecting it, with proper standards, then it won’t be so much work to share it in a few years. One problem is that a lot of this work is currently done by contract researchers that are not faculty, and there is no long-term career path for them in academia. You get a grant for a couple of years, and then everybody scatters, or they go to industry. Universities need to shape up and do more. The current model is extremely wasteful and contrary to the mission of science - it takes a fair bit of time to train people, and losing the great amount of knowledge acquired is particularly harmful to the research ecosystem.
Pierre Bellec is a professor of computer science at the University of Montreal and CRIUGM, where he develops fMRI connectivity biomarkers for Alzheimer's disease. He currently chairs the OHBM Open Science Special Interest Group, and is also involved in organizing the OHBM Hackathon.
NS: A word or two about the Open Science SIG activities. Pierre, you are one of the organizers of the NeuroBureau hackathons. What do you do there?
PB: At the beginning [of a hackathon], people pitch projects, little teams are formed, and then people sit down and work. The idea is to take those 5 minutes during conferences, when you meet somebody, you have a coffee, you have an exciting idea but you need to go back to the talks. So the idea is to take that little chunk of time and stretch it to the length of the conference.
NS: Do you need to know how to code to be at a hackathon?
PB: That is a common misconception. The hackathons come from the tech community initially, so people associate them with coding, but we try to gather a different kind of community, and we see all kinds of people coming to the hackathon and having a great time.
Samir Das is the Software Manager for the McGill Centre of Integrative Neuroscience, and system architect for the LORIS database. His goal is to facilitate technological solutions towards difficult data management and processing problems in neuroscience and beyond.
NS: Samir, what is your role at OHBM?
SD: So, I wear a lot of different hats in my life, but at OHBM, I consider myself Pierre Bellec’s sidekick. The point of it is that we are all trying to further a common goal, to do things like open science and data sharing.
NS: And as part of that you organize parties.
SD: I know it sounds weird to say that parties are part of the open science mission, but communication and collaboration [are facilitated by] social events, whether at a hackathon level, or at a big party. It is amazing how much stuff can be solved over a beer.
NS: The next meeting is in Vancouver, have you already planned the venue?
SD: I haven’t thought that far ahead, but I already have an idea of how it could be. I am picturing a beach... I feel like that will facilitate even more science. :)
Thanks to Sarabeth Fox for video recording.
Nikola Stikov: So I'm here with Kirstie Whitaker, a post-doc at University of Cambridge and she agreed to talk to us about her experiences with OHBM. How long have you been coming to this conference?
Kirstie Whitaker: This is only my third conference, but my first one was in 2009. So I haven't been able to travel to all the amazing places that OHBM has been over the years, but it's lovely to be here in Geneva.
NS: Wonderful. You're very active with the Hackathon, so can you tell us a little bit about your personal experience?
KW: I came up with a project that I thought would be meaningful and I pitched it at the beginning and I got teammates that came and joined me and they just kicked it out of the park. It became so much better than I ever thought that it could be when I came up with the idea. So it was wonderfully inspiring, it was great to meet the people that I slightly hero-worshipped and brand new people, and it sort of flowed out into the conference.
NS: I know you're very passionate about diversity issues within the society. So have you seen any progress and do you have any suggestions about what should be done to bring more diversity at our meetings?
KW: I think that the keynotes were really beautifully gender-balanced. We had three women and four men, which is great. It's lovely to see [them] and all seven of them were excellent. I think I was a little bit disappointed that the prizes all went to white men. I felt like that was maybe not the greatest message that could be given. But what was really lovely was the number of people that stood up at the Town Hall which we had the last night of the conference and mentioned this. So the fact that people are aware of it and people are thinking about it brings it to the fore. I think it holds it in the mind of not just the committee, but also the people who are voting for everyone. I think what was called for in the Town Hall and which I would love to see going forward is more people nominating women, people of color, and people who've had non-traditional career paths, bringing forward these bright stars, to nominate them so they can be celebrated next year.
While women may be underrepresented in Council this year, women scientists Drs. Lara Boyd and Doris Doudet are the Chair and Co-Chair, respectively, of the Local Organizing Committee for the 2017 Vancouver meeting, and AmanPreet Badhwar is the Co-Chair of the OHBM Student and Post-doc Special Interest Group (SIG).
Randy Gollub: We're here today to talk a little bit about your thoughts and your vision for what you would like to see the SIG do in the coming year.
AmanPreet Badhwar: Our mandate as the Student and Post-doc SIG is to provide opportunities for networking for trainees, both within the trainee group as well as with other young or senior scientists. To date, our flagship event has been the OHBM Monday Night Social, which we co-organized with the NeuroBureau.
RG: And have you a vision for how things might grow and develop in the future, about how OHBM can help you and your organization?
AB: Definitely. We're thinking of expanding beyond the Monday night social, because it is only one day of the year. We'd like the SIG to be more involved throughout the year. One of the things I want to organize for next year's OHBM is a symposium to help trainees transition into the next phase of their career, and I'm especially referring to the transition into a post-doc, because that's really the hardest one. A symposium on that topic would be very helpful. The other idea that I've been discussing with OHBM is to have, during the meeting, a room dedicated to mentoring, where for a certain period of the day, perhaps an hour or so, we have a rotating group of scientists, either young researchers or more established researchers, whom the students can have conversations with and get some tips on how to move forward with their careers.
Check out the two videos to hear more about how these young women scientists are getting involved in the OHBM and how they are encouraging their colleagues and peers to become more engaged.
The OHBM has taken very seriously the call from members to make enhancing diversity an important goal for the society. In response, the OHBM leadership has recently created the Diversity and Gender Task Force, led by Co-Chairs Tonya White and Angela Laird, to address issues of gender and minority representation. The goal of this task force is to increase awareness of these issues and identify ways that women and underrepresented scientists can be promoted at the OHBM to ensure balanced representation. If you are interested in volunteering for this task force, please complete the application form before October 21. Interested individuals must be current members of the OHBM. You can renew your membership at www.humanbrainmapping.org. All submissions will be reviewed, with selections made by the Chairs of the Diversity and Gender Task Force.
In the meantime, the call for proposals for Educational Courses and Symposia for the 2017 Vancouver meeting was recently announced. I urge all of our OHBM community to make a special effort to include a balanced number of women scientists of all ages in their proposals!
BY THE KOREAN SOCIETY FOR HUMAN BRAIN MAPPING
The function and anatomy of the human brain are at the basis of debates about the inner workings of the human mind and body. Before the arrival of brain imaging technology, ethical dilemmas hindered neuroscientists who wished to conduct scientific studies on humans. Fortunately, neuroimaging techniques such as MRI, PET and SPECT have opened a new chapter in brain mapping. With the opening of “A New Window into the Human Brain”, as Victor H. Fischer argued in 1962, researchers have been able to investigate not only human brain physiology and connectivity, but also functions such as emotion and cognition, as well as numerous mental health disorders.
To keep pace with this emerging field of research, South Korea started its first society of brain imaging researchers, the Korean Society for Human Brain Mapping, or KHBM, in 2002. Given that modern human brain mapping utilizes cutting edge information technology (IT), the rapid development of the IT industry in Korea facilitated the early development of the KHBM. The Korean government promoted research and development in the IT industry early on in order to increase Korea’s share of the international information and communication technology market. This timely advance allowed for a positive feedback loop, in which the investment strategy in a variety of IT fields enhanced prompt industrial growth. In 2013, the Korean IT industry alone represented 30.9% of manufactured industrial products, as measured by the Gross Domestic Product (GDP), in comparison to 13.6% in 1997. At the same time, the biotechnology industry increased its share of the GDP by a factor of 12.7, from ₩0.59 trillion in 1997 to ₩7.51 trillion in 2013. Such statistics clearly illustrate the rise in the importance of medical technology in Korea.
The KHBM encompasses virtually all active Korean human brain researchers, including medical doctors, medical engineers, psychologists, and more, in order to encourage the study of brain dysfunctions, including those specific to Koreans. In addition, the KHBM aims to broaden the scope of brain studies by fostering information sharing among experts, while promoting improvements in brain mapping technology. For instance, one of the earliest topics of discussion at the KHBM conference in 2004 was the production of a standard Korean human brain map.
Recent topics covered at the KHBM conferences span a wide range of issues. For example, some researchers reported on medical issues, such as the localization of lesions involved in neuropsychiatric disorders using brain imaging technology and the effective use of statistical probabilistic anatomical maps. Other researchers focused on technology-related issues, including the effective use and differences among PET, MEG, and fMRI when investigating a variety of neurological disorders, and on the creation of an artificial cognitive system, based on the identified sensory regions of the brain. These studies are made possible by employing brain imaging technology to visualize the functional connectivity of the brain in vivo.
The members of the KHBM emphasize the necessity of new development and expansion of technology-based medical engineering expertise to improve the precision of medical apparatuses. This common goal of young neuroscientists and clinical researchers in South Korea motivates the theme of OHBM 2018, “Mapping the Interactions.” The theme not only embodies systematic efforts to create connections and develop mutual goals among researchers who study electrophysiology, metabolism, brain function and anatomy, but also epitomizes the determination of KHBM to promote interactions between theoretical research and clinical applications, between academia and the public, and between developed and developing nations.
The 2018 OHBM meeting will take place at the COEX Convention Center, located at the heart of Seoul. Seoul is well-known for its mixture of traditional and contemporary Korean culture. One of the most renowned examples of this fusion is Insa-dong, where artists display their creative works in an environment surrounded by traditional architecture. This vibrant city will provide a backdrop for the creative energy of the OHBM as it brings together researchers from all over the world.
The Korean Human Brain Mapping community cordially invites you to take part in the OHBM 2018 meeting at Seoul, to engage with other neuroscientists, to form connections, and to share and discuss our knowledge and passion for human brain research.
BY PANTHEA HEYDARI
Figuring out the right methods to conduct fMRI analyses can be a full-time job. Most recently, I spent countless hours trying to determine the best way to do a Region of Interest (ROI) analysis on my imaging data. Usually, the scene plays out like this: Me sitting in an uncomfortable office chair, eyes glued to the screen, fingers tapping, sometimes late at night, but mostly in the early AM hours (I’m more of a morning person) and always with piping hot coffee. I’m clicking on side links, opening tabs upon tabs and at some point my hair goes into a bun as I start frantically flipping through past notebooks, looking for notes that describe my subject’s stroke lesions. I am contemplating using ROI masks to remove the stroke lesions in order to understand the remaining brain activity. Errr, maybe I should clarify this image (and minimize those distracting Reddit and YouTube windows).
I’m on the hunt for tutorials about fMRI ROI analysis and there’s a lot to learn.
My particular thesis project investigates how individuals with stroke engage the Action Observation Network (AON) during observation, execution, and imitation of hand and foot actions. I’ve mostly been running whole brain analyses but today, I want to look at specific activity patterns in the pre-motor cortex, inferior frontal gyrus, and the posterior parietal lobule.
Since I’m not sure which package I want to use for my analysis, I start out on the NITRC home page. I need to consider that my subjects have lesions and use this as a keyword to narrow my search. Through the Resources feature of NITRC, I compare packages such as Mango, MarsBaR, and FSL. Which software is better suited for my types of analysis? Which is better supported? Which matches my particular needs?
NITRC-R offers me the ability to compare software and determine which package is best able to serve my needs.
Mango
· supports a variety of data formats and operating systems
· features fairly intuitive ROI editing and surface rendering
· allows for manual drawing of ROIs so I (or my minions…errr, I mean undergrads) can be specific about which voxels to select, but this may also increase rater variability
MarsBaR
· is an added toolbox for SPM with an external homepage that features great step-by-step tutorials
· has easy to read instructions on how to define a functional ROI, extract the data, and run analysis through the Matlab console
· allows me to pre-define my ROIs or look at which voxels are active before selecting that as part of my ROI
FSL
· is a popular tool for looking at functional MRI results
· is great for tractography and relatively easy to use
· allows me to draw or load ROIs, which can be exported into other packages, if desired
For these packages, NITRC shows me user ratings, support availability, development status, and documentation. I am concerned about reproducibility of results and how often these packages are used by other researchers. I am also interested in the availability of forums for my questions. For analysis, my top priority is being able to draw the ROIs myself and edit them if the automatic size is not what I prefer. These are the factors I use in making my decision.
NITRC has consolidated information about resources and various software packages into a user-friendly and easily accessible site, so that I can compare packages in terms of priorities for my research. After assessing this information, as well as having open discussions with lab-mates about our project’s focus, I decide to go with FSL. I’ve been using FSL for prior work, so this makes sense in terms of logistics, but I also appreciate FSL’s ROI extraction tools and ability to control my regions. Lucky for me, NITRC has linked to sites for downloads and external resources of my winning choice. Easy enough, right?! All of this leaves me with much more free time. . .to spend online.
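For readers curious what the ROI step itself boils down to once the software question is settled, here is a minimal Python sketch using nibabel and numpy. It is only an illustration of the idea (not the FSL workflow I ended up with), and all file names are hypothetical: it computes the mean statistic inside an ROI while excluding lesioned voxels.

```python
import nibabel as nib
import numpy as np

# Hypothetical inputs: a subject-level statistic map, an anatomical ROI mask
# (say, premotor cortex), and a manually drawn lesion mask, all already
# resampled to the same space and voxel grid.
stat = nib.load("sub-01_zstat1.nii.gz").get_fdata()
roi = nib.load("premotor_mask.nii.gz").get_fdata() > 0
lesion = nib.load("sub-01_lesion_mask.nii.gz").get_fdata() > 0

# Keep only ROI voxels that fall outside the lesion.
valid = roi & ~lesion
print("ROI voxels:", roi.sum(), "of which usable:", valid.sum())
print("Mean statistic in spared ROI tissue:", stat[valid].mean())
```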
Interested in using NITRC for your analysis needs? Or do you have additional questions about Best Practices in Data Analysis and Sharing (COBIDAS)? Take a look! As always, feedback and comments are welcome and encouraged!
BY EKATERINA DOBRYAKOVA
Today, we're talking to David Poeppel, finding out about the differences between animal and human communication systems, his role in developing models of language processing, and in peeling away the 'speechyness' of speech. Dr. Poeppel is a Professor of Psychology at New York University and Director of the Department of Neuroscience at the Max Planck Institute in Frankfurt, Germany. As one of the OHBM 2016 keynote speakers, Dr. Poeppel discussed how research in the neurobiology of language has developed over the last 20 years. We took the opportunity to find out more about his research on speech perception and psychophysics.
Ekaterina Dobryakova: To start, I wondered why you decided to study language and, specifically, speech perception?
David Poeppel: Well, it's easy to want to study language because it's what makes us “us”. It’s the most convenient and compelling feature that we have. If we didn't have that, we couldn’t have this conversation. There are endless vitriolic debates about how similarly animals communicate with each other, so it's deeply fascinating. How can we understand anything? How can we talk? If you ever take a class in linguistics, it’s like opening the curtains – you suddenly understand something you didn't think could be studied by science, because if you take a language class, let's say, in middle school or high school, they drown you in minutiae in a very boring way and they try to take all the joy out. We've all taken second language French or Spanish in high school--oh my God, you know, what a nightmare--but if you suddenly think of it as a problem for science, it becomes a completely different kind of thing.
ED: You mentioned animal communication. How would you say our human communication through language is similar to non-language communication between animals?
DP: That is a good, hard, deep question for which we have no good answer. Animals are at the periphery, using the apparatus we have, the input systems and the output systems. We can learn a lot from studying animal communication systems. But there are certain attributes of human language that are just quite different, and for which we have no compelling animal model. So, some of the things we study are birdsongs or gestural communication. And those are wonderful important additions to our knowledge. But there are certain attributes of being a speaker of the language as a human that are unusual and that includes the peculiarly structured and complicated vocabulary.
ED: What is one of the most important projects that you are focusing on in your lab and how is it relevant to society?
DP: My laboratory focuses on basic science. That is, basic questions about the organization of perceptual systems, the auditory system and how we process language. We don't build products, diagnoses, therapeutic interventions. But of course the long term goal is that the insights we bring will help in all of these issues: for group communication, diagnosis, stroke, disease or rehabilitation, developmental disorders, you name it. So some of the work we've done has some very interesting and practical implications for developmental speech disorders. Max Planck has a good one-liner: ‘The application has to be preceded by basic insight’. I think this is very right: to really try to understand the system and its parts, and how they interact before you build the thing. You know, you want me to test the stuff before I give you a pill.
ED: What would you say is your most surprising finding?
DP: The most surprising finding is that we ever find anything! That's why we're at this OHBM 2016 meeting. One of the remarkable things of the last 20 or 30 years is the astonishing technical development, the devices that we all use. This is quite a remarkable achievement. I remember when I was a graduate student and I read the first papers on imaging, some of which were actually on language, I was both simultaneously excited and appalled. And it really stimulated me and made me very passionate about the stuff. But notice that until 20 years ago we were not able to do any of the things we're now addressing. It was inconceivable. Let's say there was the age of fascination, the age of growth. Now, I think, we're beginning to enter the age of maturity, where we can celebrate the amazing tools we have and the techniques and analytic approaches we have, and we can start to feel a bit critical about our own research.
So here are two things that I've worked on a lot in my lab, both of which, I naively hope, have some value: One is really questions about the structural organization of the brain. So with one of my close colleagues, Greg Hickok from UC Irvine, for many years we've developed a functional and anatomical model, pretty widely known as the dual stream model of language processing, which admittedly we stole straight from the visual system. We know a lot about the visual system and its anatomical and physiological foundations, and some of those ideas struck us as potentially useful for the language system.
The second thing is the work that I've been focusing on for a number of years on neurophysiology and there, primarily, I’ve been using magnetoencephalography, MEG. It’s the technique I’ve begun to obsess about over the last few years, primarily because perceptual processing and language processing are super fast.
ED: What is your biggest dream in research?
DP: Maybe your dreams get more modest as you cross into the precarious years of middle age. Of the many dreams I have for my labs, one that I'm particularly obsessing about right now is: What does it mean to store your words? Words are sort of at the intersection of everything. There are a hundred thousand things stored in your memory in a way that they can be listened to, spoken, read, signed. That means that the encoding of that is extremely complicated and subtle.
ED: What do you think is the coolest finding in the neuroimaging of language?
DP: I'd have to say all of my own [laughs]. I'll tell you one that I care about right now and think is cool: there continues to be a long debate about specialization in the brain. Let's take speech perception. Speech perception is the transformation of acoustic information (so, something hits your ear) into some code that's relatively abstract and that interfaces with the linguistic system. Now, for 50 years there's been a very vigorous debate about whether that is a special mechanism or a generic mechanism. So, is speech perception merely a species of hearing, or is it actually something that deals with particular functions? And so, last year, in a very elaborate series of fMRI experiments in my lab, we tackled this question one more time, trying to really nail down the extent to which there is specialization, in part building on our dual stream model. We predicted that some very specific chunks of cortex, particularly aspects of the superior temporal gyrus and superior temporal sulcus, are not just generic auditory analyzers but are specialized for speech. That doesn't mean that the entire area is specialized, but it means that there are populations of cells there that really deal with that kind of signal. So we went to great effort to make stimuli that could selectively peel away the 'speechyness' of the signal, and to see whether you can find an area in your head that has the right sensitivity for speech and the right specificity. That was the kind of project that was not just one experiment but ten. But at the end, it is pretty satisfying, because you feel like you really nailed it. I think we can say pretty conclusively that there is a high degree of specialization in a particular area of the brain. That's kind of cool because it shows that you can take a pretty innovative technique like functional MRI and answer a classical question.
ED: Many thanks!
Prof Poeppel’s keynote speech on ‘New Directions in the Neurobiology of Language’ will soon be available to view on the OHBM OnDemand portal. Keep checking for this and other great talks from OHBM 2016.
Thanks to Sarabeth Fox for video recording.
BY EKATERINA DOBRYAKOVA
The Organization for Human Brain Mapping hosted an exciting lecture on June 30th, 2016 with Dr. Anissa Abi-Dargham who presented her work pertaining to the topography of dopamine alteration in schizophrenia through the use of PET imaging.
Anissa Abi-Dargham is a Professor of Psychiatry and Radiology at CUMC, Columbia University, and New York State Psychiatric Institute, where she directs the Division of Translational Imaging. Dr. Abi-Dargham is a pioneer in PET neuroimaging, beginning her research in the field of dopamine transmission in schizophrenia in the 1990s. Her research has resulted in seminal findings that explain the complex alterations of dopamine transmission in schizophrenia and the impact these alterations have on clinical symptoms, cognition and response to treatment.
Ekaterina Dobryakova: What motivated you to go into your particular area of research?
Anissa Abi-Dargham: I went to medical school to become a psychiatrist and study psychosis. Out of all brain disorders psychosis seemed the most extreme to me and the most devastating on people’s lives. Brain imaging, especially molecular imaging, seemed to be a useful approach to get one step closer to the underlying “brain biology” that relates to psychosis. When I started there was much interest in dopamine in schizophrenia so I did some of these studies with my colleagues and one finding led to the next question and next study. Now we have greater understanding of the complexities of this system in schizophrenia.
ED: If you weren't talking to brain mappers or scientists, how would you describe your most proud scientific accomplishment?
AA: It must be the study where I estimated amounts of dopamine (a transmitter in the brain used between nerve cells to transmit signals) and compared patients with schizophrenia and controls. To do so I had to remove most if not all dopamine in the brain (by giving a treatment that stops new synthesis or production of dopamine for 48 hours, this is called a depleted state) and then compared the baseline scan (before depletion) to the depleted scan to derive how much dopamine was removed, thus inferring how much dopamine was present at baseline.
ED: If you were speaking to a non-scientist, how would you describe your research and what you do for a living?
AA: The brain is a complicated and intricate super computer that remains like a black box. It is difficult to understand its normal functioning, much less its dysfunction in brain disorders. I use imaging techniques to get at some of these questions.
ED: What do you think are the most pressing issues in neuroimaging for your area of interest? For the field in general?
AA: Technology, funding, collaborations: we need to develop better tools that are safe to use and non-invasive, so we can image the complexity of the brain. We need participants in research, and multi-site collaborations to have enough power to address the variability across human subjects, and funding to do all of that.
ED: What do you think is the future of neuroimaging for basic research? For translational research and application?
AA: Imaging can serve as an ideal translational tool to understand and link the effects of genes onto cells, circuits and behavior. Animal models can provide an illustration of genes’ effects which can then be searched for in humans. So imaging can bridge all these levels of investigation across species.
ED: When you first started out, what was the most inspiring/motivating paper you read? How about the same question, but in the last 5 years?
AA: Tough question, because there were / are too many. An inspiring one was that of Surmeier and Gerfen summarizing the circuitry involving the direct and indirect pathways in the striatum; I often read it and re-read it.
In the last 5 years I would say some of the main papers marking major advances in the field, for example the genetics findings in schizophrenia published in Nature 2014. But others too, it is really hard to pick just one…
ED: What should the non-expert be wary of when reading about brain mapping articles in the lay press?
AA: Since imaging is very technical it is difficult for the non-expert to judge if a study is methodologically sound. That is the biggest issue. Another is small sample sizes and hyper inflated results. In general findings need to be replicated for one to start to believe them.
BY: OHBM BLOG TEAM
At the recent OHBM Annual Meeting in Geneva, we encountered an interesting variety of ways that people both chronicled their personal experience at the meeting and engaged with the material of the sessions they attended. Many people vigorously took notes, others live-tweeted or posted photographs of slides and presenters online, but one of the most distinctive examples we discovered was a meeting attendee who was live-sketching session speakers and posting the sketches to Instagram.
The live-sketching artist, Roselyne Chauvin, is a PhD Candidate and founder of Cogni' Junior and lives in the Netherlands. When asked about how she started doing this type of drawing, Chauvin replied "Well, I do a lot of popularization of science for children (cognijunior.org) so a lot of non-scientists/teachers are following me on social media. I thought live-sketching the conference was a nice way to show that researcher meetings are not boring and, in general, to fill the gap between the general public and us. In the meantime, I discovered that it helps me to keep focus between all those amazing talks and to remember more of them. It's also saying to scientists that can't make it and want to catch some glimpses of it via twitter: you are not forgotten. Not everyone can fly to Geneva, right?"
The images below are from keynote lectures, educational courses, oral sessions and symposia. Think you know some of these speakers? Recognize material you learned in an OHBM session? See how many sketches you can identify.
OHBM - What is it that interests you about neuroimaging?
William Seeley (WS) - Neuroimaging has the potential to address three key issues in neurodegeneration research. First, brain imaging can tell us when and where neurodegeneration begins in living patients. This critical information provides the “treasure maps” we can use to guide our search for the cellular-molecular mechanisms of disease within the right neuroanatomical context. Second, functional imaging can help us understand changes in network physiology underlying patient symptoms. Finally, the dawn of “connectomic” imaging has allowed us to test competing models of network-based disease progression.
OHBM - What difficulties have you faced balancing a research and clinical career? What benefits has it brought?
WS - My clinical life frames and motivates everything we do in research, and I continue to be impressed by how much clinical science can teach us about healthy brain organization. For me, the major challenges relate to time and not having enough of it to do everything I would like to do in my career. Overwhelmingly, though, my life as a clinician has added great meaning to my life as a researcher.
OHBM - What draws you to OHBM and how does it differ from other similar, large conferences?
WS - I’ve been coming to OHBM for about 10 years and I keep coming back because of the enthusiasm of the membership for this field. It feels like a cohesive group of people - they each bring a different perspective and a different set of tools. It’s also a good place to learn about those tools – the very front line of methodological advances is reported here first. That always makes it exciting.
OHBM – In your keynote talk you laid out a number of different variants of the dementias, then focused on frontotemporal dementia. The dementias vary based on both behavior and symptoms. Could networks and the connectivity between networks aid differential diagnoses? Do you envision scanning patients in order to distinguish between different types of dementias?
WS - Neuroimaging of brain structure and function can help us refine our assessment of a patient’s clinical syndrome. In my talk you heard me discuss syndromic diagnoses and pathological diagnoses as distinct and separate concepts. Structural and functional imaging can help with syndrome refinement, but when it comes to the underlying neuropathological cause of that syndrome, I think those strategies are going to fall just a bit short. Take the example of behavioural variant frontotemporal dementia (bvFTD): it has 15 different neuropathological causes, and I doubt we could use neuroimaging alone to decide which of those 15 underlying histopathologies is the actual cause of a patient’s bvFTD. It’s more likely that we’ll need a molecular technique, whether that’s biomarker analysis from spinal fluid or molecular imaging using PET scanning, to decide which of those various underlying histopathologies is the cause. Alternatively we’ll use some kind of a merger, where the structural and functional imaging refines the syndrome to the point where the differential diagnosis gets shorter. Then we use molecular imaging to nail the final diagnosis.
OHBM - Some of your recent research centres on selective vulnerability. Can you tell us what this is, and why it might be relevant to many neurological conditions?
WS - All neurological diseases are selective in some way. In neurodegenerative disease, we can see that progression occurs in a selective manner that is governed by network connections. Where (in which cell type), how (in what manner), and most importantly why a disease begins where it does remains far more mysterious, but may be a key to developing early-stage treatment or prevention.
OHBM – What would it take to get to the point where we have screening for different vulnerabilities? Would that goal be a priority in the absence of effective neuroprotective recommendations – or are effective recommendations available?
WS – That’s already the reality in Alzheimer’s dementia. Alzheimer’s is a common disease. You can screen a healthy older population for amyloid-beta deposition using molecular imaging and then triage patients for experimental treatment trials based on that result. To imagine doing that for some of the less common dementias, such as frontotemporal dementia, is a little more daunting because of the lower population prevalence. FTD has a population prevalence of about 1 in 5000 in those aged over 45, so it would have to be either a very inexpensive test or a very powerful therapy to justify that kind of screening program.
OHBM – It’s been proposed that the salience network may switch activity between the default mode and the central executive networks – and impaired switching has been hypothesized to play a role in a number of psychiatric disorders. Do you see that playing a role in the frontotemporal dementias?
WS – I don’t think we really know the answer, as I don’t think there’s been a study yet that went straight after the switching concept. From a phenomenological standpoint the patients are pretty poor switchers. Sometimes they get stuck in ruts and perseverate on the same behavioral response over and over. Other times they fail to switch in other ways when switching would be helpful. Sometimes they switch too much, where they’re distractedly moving from task to task as opposed to finishing. I do think that behavioral switching is a deficit – whether that correlates with network switching is an open question and I think it’ll be an important one to address at some stage.
OHBM - What do you see as the next major goals of neuroimaging in dementia research?
WS - Neuroimaging can play a critical role by providing short-term interval biomarkers of disease progress for use in early-stage drug development. To accomplish this goal may require that we develop better models to predict progression, and then use those models as a way of assessing whether a drug has had a meaningful impact.
OHBM: Thank you Prof Seeley!
Prof Seeley's keynote talk on 'Network-based neurodegeneration' will soon be available to view on the OHBM OnDemand portal. Keep checking for this and other great talks from OHBM 2016.
Thanks to Sarabeth Fox for video recording.
BY: JEANETTE MUMFORD, CYRIL PERNET, THOMAS YEO, LISA NICKERSON, NILS MUHLERT, NIKOLA STIKOV, RANDY GOLLUB, & OHBM COMMUNICATIONS COMMITTEE (IN CONSULTATION WITH THOMAS NICHOLS)
In recent weeks a lot of attention has been given to the paper “Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates”, by Eklund, Nichols and Knutsson, in the Proceedings of the National Academy of Sciences. This work highlights an important concern; however, some of the media attention has been based on a misunderstanding and an ‘inflated’ interpretation of the results. Specifically, too much weight has been given to the numbers “40,000 impacted studies” and “70% false positives”, an unfortunate side effect of reducing a study rich in information to a few soundbites. We respect the views of this paper and the effort put forth by the authors who, like the leadership of OHBM, understand there is a growing concern for validity and reproducibility in our field. The purpose of this post is to put these numbers in context and clarify how these findings impact our view of past and future fMRI results.
In task-based fMRI studies we are often interested in looking for systematic differences between experimental conditions or cognitive states across upwards of 100,000 voxels in the brain. It is widely known that this large number of statistical tests, typically one per voxel, requires correction for multiplicity. The most common approaches focus on control of the family-wise error (FWE), which is the probability that a given study will produce any false positives, and the two standard ways to control the FWE are voxel-wise and cluster-wise thresholding. Voxel-wise thresholding draws conclusions about specific voxels, while cluster-wise thresholding allows one to conclude whether a group (or cluster) of adjacent voxels shows an effect based on a feature, most often its size (e.g. only groups of voxels bigger than size N are significant). Eklund et al. consider both voxel-wise and cluster-wise FWE control in an exercise that tests whether the thresholding methods, and their implementation in various software packages, control the FWE as advertised. The innovation in this work is that they used resting-state fMRI data rather than computer-generated simulation data to estimate noise (see below for more on this); they analyzed this resting-state data as if it were actually task fMRI data.
Eklund et al. find voxel-wise results are always correct, i.e. control FWE below a requested 5% level, and are thus safe; we won't discuss these further. They also find that, depending on the exact methods and tools used, cluster-wise results can be invalid, i.e. have FWE in excess of the traditionally accepted 5% level. Understanding the specifics of when these methods are invalid is the focus of the article.
Figure 1. Cartoon example of how cluster-based thresholding works. The orange line represents the uncorrected, voxelwise p-values over a row of voxels (space). First, the cluster-defining threshold, CDT, is used to define the clusters, which are indicated by the boxes on the x-axis. Second, using the cluster size as the statistic, a threshold of cluster size k is used to assess the two clusters, concluding only the red cluster is large enough to be significant.
A cartoon example of the cluster-wise based strategy is illustrated in Figure 1. First, a primary threshold is required to define clusters (in Eklund et al. this is called a cluster-defining threshold, CDT). The CDT is typically based on the uncorrected voxelwise p-values. SPM and FSL use random field theory to obtain FWE-corrected p-values, which requires an estimate of the spatial smoothness of the image that is being thresholded, typically a map of t-statistics that quantifies the effect size at each voxel. AFNI uses a simulation-based procedure that also relies on a smoothness estimate. In contrast, another choice is to use a permutation approach, which is based on randomly permuting data labels to generate a null distribution for cluster size that is used to compute a p-value. The approaches in the 3 widely used fMRI data analysis packages, SPM, FSL and AFNI, are variations of parametric methods, and are based on specific assumptions about the data, while the permutation method is nonparametric and requires minimal assumptions.
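To make the two-step procedure in Figure 1 concrete, here is a minimal Python sketch of cluster-extent thresholding. It is only a sketch: the cluster-size cutoff k is taken as an input, whereas in a real analysis it must come from random field theory, simulation or permutation, which is exactly the step the paper evaluates.

```python
import numpy as np
from scipy import ndimage

def cluster_threshold(p_map, cdt=0.001, k=50):
    """Two-step cluster-extent thresholding (illustration only).

    p_map : 3D array of uncorrected voxelwise p-values
    cdt   : cluster-defining threshold (e.g. 0.001)
    k     : minimum cluster size; in practice this must be derived from
            random field theory, simulation or permutation, not picked
            ad hoc (e.g. 10 voxels).
    """
    # Step 1: apply the cluster-defining threshold.
    supra = p_map < cdt
    # Step 2: label connected components and keep only the large clusters.
    labels, n_clusters = ndimage.label(supra)
    sizes = ndimage.sum(supra, labels, index=np.arange(1, n_clusters + 1))
    keep = np.isin(labels, np.flatnonzero(sizes >= k) + 1)
    return keep  # boolean mask of voxels in significant clusters

# Toy usage with pure-noise p-values; nothing should survive a sensible k.
p_map = np.random.uniform(size=(40, 48, 40))
sig = cluster_threshold(p_map, cdt=0.001, k=50)
print(sig.sum(), "voxels in surviving clusters")
```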
What is unique about this work?
This paper is an example of a simulation study, an evaluation of a method based on ‘made up’ data. The reason simulations are used is because quantifying FWE can only be done if the ground truth is known. Specifically, we must ensure there is no signal in the data. A simulation is most useful when the simulated data reflect what we would find in real data as closely as possible. This has been a limitation of previous studies, which generated synthetic data with software and used this synthetic data to test the performance of the analysis algorithms (Friston et al. (1994) and Hayasaka and Nichols (2003) are examples). This work uses a large pool of real human resting state fMRI data as a source of null data, or data that do not contain any task-related signal. Fitting a model of a task to the data should not find any activation. The advantage of using actual fMRI data is that the spatial and temporal structure of the noise is real, in contrast to previous simulation studies that used computer-generated null data. In the simulations in Eklund et al., random samples of subjects from the resting state data set are taken, and these samples are analyzed with a fake task design. The subject-specific task activation estimates are then entered into either a 1-sample test (to test the hypothesis that there is an effect of this task in this group) or a 2-sample test between two groups of subjects (to test the hypothesis that the effect of the task differs between the groups). Each result is assessed in the usual way, looking for FWE-corrected p-values that fall below p=0.05, and the occurrence of significant clusters (cluster-wise approach) is recorded. The authors repeat this a total of 1000 times and the FWE is computed as the number of simulated studies with any false positives divided by 1000. In theory, using p=0.05 should result in a FWE of 5%.
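Schematically, the simulation loop looks like the sketch below. For brevity it uses synthetic Gaussian noise in place of the real resting-state maps that Eklund et al. analysed, and a fixed cluster-extent cutoff in place of the package-specific thresholds being evaluated, so it only illustrates the logic of estimating the FWE empirically.

```python
import numpy as np
from scipy import stats, ndimage

rng = np.random.default_rng(0)
n_sims, n_subj, shape = 1000, 20, (20, 24, 20)
cdt, k = 0.001, 50              # cluster-defining threshold and extent cutoff
false_positives = 0

for _ in range(n_sims):
    # Null "task activation" maps, one per subject; pure noise here.
    # Eklund et al. instead analysed real resting-state data with a fake
    # task design, so their noise had realistic spatial/temporal structure.
    betas = rng.standard_normal((n_subj,) + shape)
    t, p = stats.ttest_1samp(betas, 0.0, axis=0)     # one-sample group test
    labels, n = ndimage.label(p < cdt)
    if n and ndimage.sum(p < cdt, labels, np.arange(1, n + 1)).max() >= k:
        false_positives += 1    # this simulated study reported something

print("Empirical FWE:", false_positives / n_sims)  # compare with the nominal 5%
```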
Brief Summary: Four study designs, two blocked and two event related, were studied across multiple degrees of spatial smoothing, different cluster-forming thresholds and different software packages. Specifically SPM, FLAME 1 from FSL, OLS from FSL, 3dttest from AFNI, 3dMEMA from AFNI, and a permutation-based approach implemented using the BROCCOLI software were studied. The main result, highlighted in the first figure of the paper, shows that when using a parametric approach, a cluster defining threshold of p=0.01 leads to poor control of FWE (FWE from approximately 4-50%). However, FWE control is improved when a cluster defining threshold of p=0.001 is used instead, regardless of software package used (FWE ranges from approximately 0-25%). The more conservative nonparametric approach controls FWE regardless of cluster defining threshold in most cases, although elevated FWE were observed for the one-sample t-test in some cases due to skewed data. The second result, which is the source of the 70% FWE that has appeared in many other blog posts, occurs when simply using a cluster size of 10 as an ad-hoc inference procedure. In this case, a cluster defining threshold of p=0.001 was used and clusters with 10 or greater voxels are identified as significant. The high FWE of this approach indicates that it should not be thought of as controlling FWE. More details and the explanation of why FLAME1 appears conservative in both of these results are in the next section. The general conclusion is that when using cluster-based thresholding, a cluster-defining threshold of p=0.001 has better control of FWE than p=0.01 for SPM, FSL and AFNI. The nonparametric-based approach has better controlled FWE in the scenarios tested here.
AFNI problem identified. The results presented in this manuscript include the use of a pre May 2015 version of AFNI, specifically the 3dClustSim function used to implement the parametric FWE control. One of the discoveries made during this project was the smoothness estimate used in this older version of 3dClustSim had a flaw that increased the FWE. This was fixed by the AFNI developers in versions after May 2015. Although the new version reduces FWE, it is still inflated above the target of 5%; the p=0.01 and p=0.001 cluster defining thresholds’ FWE with 3dClustSim changed from 31.0% to 27.1% and 11.5% to 8.6%, respectively.
Is FLAME1 superior? Some results appear to support the claim that FLAME1 option in FSL has better FWE control, even in the ad-hoc case, but this is due to a known problem where FLAME1 sometimes overestimates the variance. To clarify, FLAME1 differentially weights the contribution of each subject according to the subject-specific mixed effects variance, which is a sum of within- and between-subject variances. The result is that more variable subjects contribute less to the statistic estimate. In comparison, the OLS option in FSL treats all subjects equally (also true for SPM, AFNI’s 3dttest and permutation tests). When the true between-subject variance is small, FLAME1 overestimates it, causing an increase in p-values, which reduces the FWE. When the true between subject variance is not close to 0, FLAME1 results in a more accurate estimate of the variance but the FWE can then be inflated with results similar to FSL’s OLS. The resting state data have a low true between-subject variance, leading to lower FWE than we might see with task data where systematic differences in task performance might indeed yield the predicted large between-subject differences. This is supported by a secondary simulation using task fMRI data with randomly assigned groups that found FLAME1 to have error rates comparable to FSL’s OLS. Overall, this implies that the FWE will be controlled if the true between-subject variance is small and will be elevated similarly to OLS if the variance is larger than 0.
Why do parametric methods fail? The assumptions of random field theory include that the spatial smoothness must be constant across the brain and the spatial autocorrelation follows a squared exponential distribution. The spatial autocorrelation distribution was not found to follow the squared exponential very well, instead the accuracy of the distribution varied according to distance. Simply put, if voxels were close together, there was a stronger agreement between the empirical and theoretical spatial correlation, but the two do not match for voxels that are far apart from each other. This explains why results improve for more stringent cluster forming thresholds, since clusters are smaller, hence the voxels involved are closer and the assumptions are more closely met.
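For readers who want to eyeball this assumption on their own data, the sketch below computes a crude empirical spatial autocorrelation along one axis of a residual map, together with the squared-exponential (Gaussian) curve that the parametric methods assume. The correlation length is left as a free parameter rather than a principled smoothness estimate, so this is a visual check, not a replacement for the paper's analysis.

```python
import numpy as np

def empirical_spatial_acf(vol, max_lag=15, axis=0):
    """Average spatial autocorrelation of a (mean-centred) 3D map as a
    function of lag in voxels along one axis; crude, but enough to
    compare the empirical shape with a theoretical curve."""
    v = vol - vol.mean()
    var = np.mean(v * v)
    acf = []
    for lag in range(max_lag + 1):
        a = np.take(v, range(v.shape[axis] - lag), axis=axis)
        b = np.take(v, range(lag, v.shape[axis]), axis=axis)
        acf.append(np.mean(a * b) / var)
    return np.array(acf)

def squared_exponential(lags, corr_len):
    """Squared-exponential model: rho(d) = exp(-d^2 / (2 * corr_len^2))."""
    return np.exp(-np.asarray(lags) ** 2 / (2 * corr_len ** 2))

# Toy usage; on real residual maps the empirical curve tends to sit above
# the squared exponential at long distances (the heavier tail noted above).
vol = np.random.standard_normal((40, 48, 40))
print(empirical_spatial_acf(vol, max_lag=5))
print(squared_exponential(range(6), corr_len=2.0))
```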
Should we all panic and give up on fMRI? Are all 40,000 fMRI studies of the past worthless? Of course not.
The blog post by Tom Nichols refines this estimate to a more reasonable number of studies from the past that may be impacted: closer to 3,500. (Note: PNAS has accepted an Erratum from the authors that revises the sentences that led to the sensationalized press articles.) The study shows that (a) FWE control does not work properly in the parametric tests using an ad hoc threshold of 10 voxels; (b) FWE is often controlled by permutation-based testing; (c) cluster inference for SPM, FSL, and AFNI using a cluster-defining threshold of 0.01 is likely problematic; (d) although improvements would be expected if a cluster forming threshold of 0.001 was used, FWE is still not controlled at the nominal level of 5% under all conditions.
How shall we proceed to analyze fMRI data?
Both parametric and nonparametric-based inference have pros and cons and work well when their assumptions are met. Prior work has highlighted the assumptions of the parametric cluster-based thresholding approach, including using a small p-value based cluster defining threshold (see Friston et al. (1994) and Hayasaka and Nichols (2003) for examples). Although it was clear the threshold needed to be low, without knowing the true spatial covariance structure, it wasn’t necessarily clear how low for real fMRI data. Since the Eklund et al. work used real fMRI data in the simulations we now know that p=0.01 is not low enough and p=0.001 is a better option. Generally, the permutation test has fewer assumptions and tends to have better FWE control, but Eklund et al. did find some cases with the 1-sample t-test where the nonparametric approach had elevated FWE, due to skew in the data. Permutation-based options can be implemented on any NIfTI file using SnPM in SPM, randomise in FSL, PALM (also affiliated with FSL), Eklund’s BROCCOLI package and mri_glmfit-sim in FreeSurfer.
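To illustrate what these permutation tools do under the hood for a one-sample test, here is a minimal sign-flipping sketch that builds a null distribution of the maximum cluster size. It is a simplification of what randomise, SnPM, PALM and BROCCOLI implement far more carefully, and it inherits the symmetry assumption that can be violated by skewed data, as noted above.

```python
import numpy as np
from scipy import stats, ndimage

def max_cluster_size(betas, cdt=0.001):
    """Largest supra-threshold cluster in a one-sample t-test across subjects."""
    t, p = stats.ttest_1samp(betas, 0.0, axis=0)
    labels, n = ndimage.label(p < cdt)
    if n == 0:
        return 0
    return int(ndimage.sum(p < cdt, labels, np.arange(1, n + 1)).max())

def sign_flip_cluster_threshold(betas, cdt=0.001, n_perm=1000, alpha=0.05, seed=0):
    """Permutation (sign-flipping) null distribution of the maximum cluster size.

    Under the null hypothesis of no group effect, each subject's contrast map
    is assumed symmetric around zero, so flipping its sign is a valid
    relabelling. The (1 - alpha) quantile of the resulting max-cluster-size
    distribution is the FWE-corrected cluster-extent threshold.
    """
    rng = np.random.default_rng(seed)
    null_max = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=betas.shape[0])
        null_max[i] = max_cluster_size(signs[:, None, None, None] * betas, cdt)
    return np.quantile(null_max, 1 - alpha)
```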
Importantly, AFNI users should update their versions to ensure use of either the repaired 3dClustSim (after May 2015), or the new 3dFWHMx function which uses a more accurate spatial smoothness estimate and will improve FWER control. Also, using the ad hoc cluster size of 10 voxels has the largest FWE and is not recommended as a method for controlling FWE.
The work of Eklund et al. supplies important information to those who choose to control the multiple comparison problem according to the FWE. In future work, researchers intending to use FWE correction can make better choices to ensure the true level of FWE is closer to the target FWE. Although some previously published studies may not have used as stringent FWE control as they had intended, the results can still be interpreted, but with more caution. Multiple comparison correction is just one element of neuroimaging practice, and there are countless choices in the design, acquisition, analysis and interpretation of any study. We encourage everyone to consult the OHBM Committee on Best Practice in Data Analysis and Sharing (COBIDAS) report on MRI, and review the detailed checklists for every stage of a study. The report is available directly on the OHBM website http://www.humanbrainmapping.org/COBIDASreport and on bioRxiv.
BY CYRIL PERNET
During the annual OHBM meeting in Geneva I had fun making word clouds from the Twitter feed of the hashtag #OHBM2016. Attendees could see the word clouds in between every presentation, and I think it made the welcome screen look pretty cool (you can find them on the @OHBM_members channel and on the OHBM Facebook page). In case you thought some information was missing, that is simply because it was either not that frequently discussed on Twitter or http://www.wordclouds.com/ did not show it (not all words appear, depending on design and size). There was no censoring, and you can blame me if something was not to your liking.
The exciting stuff
The most discussed lectures were those of Tim Behrens and Fernando Lopes da Silva, closely followed by the talk from Gael Varoquaux. The main topics that engaged attendees were connectivity analyses, machine learning, power analyses, BIDS and yes, Brexit.
Thomas Yeo aptly summarized the current state of connectivity analyses on the OHBM blog so no need to talk more about it.
Machine learning is increasingly used in neuroimaging applications these days, and Gael Varoquaux's talk, 'Cross-validation to assess decoder performance: the good, the bad, and the ugly', drew lots of comments. In this talk he shows that leave-one-out strategies are biased and that N-fold cross-validation, provided we keep the data structure, works much better.
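As a toy illustration of that contrast, the scikit-learn sketch below compares leave-one-out, plain K-fold and a grouped K-fold that keeps all trials from the same subject in the same fold. The data are random, so only the API and the splitting logic are the point, not the scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (LeaveOneOut, KFold, GroupKFold,
                                     cross_val_score)

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 200))       # e.g. 80 trials x 200 voxels
y = rng.integers(0, 2, size=80)          # two conditions
subject = np.repeat(np.arange(10), 8)    # 10 subjects, 8 trials each

clf = LogisticRegression(max_iter=1000)

loo = cross_val_score(clf, X, y, cv=LeaveOneOut())            # high-variance estimate
kfold = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))
# Keeping the data structure: trials from one subject never end up in both
# the training and the test fold.
grouped = cross_val_score(clf, X, y, cv=GroupKFold(5), groups=subject)

print(loo.mean(), kfold.mean(), grouped.mean())
```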
I am really glad that use of power analyses is now at the forefront of neuroimagers’ discussions. During Sunday’s reproducibility workshop I discussed and presented two of the main tools used to carry out power analyses on full maps: fMRIpower and neuropower. These tools were then presented by Jeanette Mumford (creator of fMRIpower) and Joke Durnez (creator of neuropower) in the Open Science SIG room.
Another favorite topic of mine: data sharing. BIDS, or to give it its full name - the Brain Imaging Data Structure - is driven by Chris Gorgolewski, and describes how to structure and store your data in an easily shareable way. It provides advice on how to name files and how to create simple metadata text files (tsv and json). Using BIDS doesn’t require programming knowledge, and it substantially improves data sharing by allowing machines to read data easily.
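For readers who have not yet seen a BIDS dataset, a minimal layout looks roughly like this (subject, task and file names are made up for illustration):

```
my_study/
  dataset_description.json
  participants.tsv
  sub-01/
    anat/
      sub-01_T1w.nii.gz
    func/
      sub-01_task-motor_bold.nii.gz
      sub-01_task-motor_bold.json
      sub-01_task-motor_events.tsv
  sub-02/
    ...
```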
A paper from Anders Eklund et al. about failure to control the family-wise error rate (FWER) using cluster size was recently published in PNAS and elicited many comments, not just online but also from the floor. The paper suggests that cluster size correction may significantly inflate false-positives, addresses the extremely important issue of controlling FWER, and is a must read along with the comment from Glass Brain awardee Karl Friston and Guillaume Flandin.
From our community, gender imbalance and diversity was frequently discussed and added to the #GenderAvenger hashtag. It was often commented that committee members and awardees were predominantly white males from wealthy countries. The Organization is well aware of this and has actively sought to reflect the geographic diversity of our membership as well as to balance the number of male and female session and keynote speakers. Council takes this feedback, of the need to do more, seriously and is actively at work to further address these issues, and push for all aspects of OHBM to become as diverse as the members it represents.
Top 5 twitter users
During the conference, the OHBM and OHBM_SciNews accounts retweeted posts from, or mentioning, other Twitter users. Thanks to the top 5: @kirstie_j, @NKriegeskorte, @pierre_vanmedge, @ChrisFiloG, @ten_photos.
Note: A version of this post previously appeared on Cyril Pernet's personal blog: