BY NIKOLA STIKOV
One of the newest initiatives of the OHBM is the establishment of a replication award to highlight the Organization’s commitment to reproducibility and transparency in neuroimaging research. The OHBM Replication Award will recognize the best replication study of the past year. The 2017 award is generously supported by the Laura and John Arnold Foundation.
Continuing the open science coverage on this blog, I interviewed Chris Gorgolewski of the Center for Reproducible Neuroscience at Stanford University to discuss the rules and implications of this new initiative.
Nikola Stikov (NS): First of all, what is a replication study?
Chris Gorgolewski (CG): A replication study is a repetition of a published study procedure with minor changes to variables assumed not to be important for the measured phenomena (this depends on the experiment, but could include demographics, scanner model, visual stimuli delivery system, analysis strategy, etc.). Replication studies usually (but not always) have a larger sample size than the original study for appropriate statistical power, and are performed by a different team than the original study (but planning of a replication study can benefit from involvement of the original researchers). Even though minor changes between the original study and its replication are inevitable, they should be minimized as much as possible.
NS: What about methodological replications? Could a study applying different data processing streams to the same data (in contrast to acquiring new data) be eligible for the award?
CG: Yes, such studies should be considered a form of replication and will be eligible for the award. Since the relative importance of methodological versus traditional replications varies considerably, the impact of such submissions will have to be evaluated by the judges on a case-by-case basis.
NS: What are the criteria used to choose the best paper?
CG: Each paper will be evaluated along two dimensions: the quality of the replication attempt and the importance of the evaluated finding. Several factors can improve the quality of a replication study: preregistration (especially if the registration was first evaluated by the researchers who designed the original study), sample size (and thus statistical power), transparency (publication of code and data), and lack of conflicts of interest. The importance of the evaluated finding rests on the degree to which it answers an interesting and important question. For example, findings that form the basis of a whole new branch of neuroimaging, challenge existing models of cognition, or underpin policy changes in mental health care should be considered more important and worth replicating. Admittedly, the second criterion is very subjective, but we are confident that the jury will do a good job evaluating all of the submissions.
NS: So does every replication need to be preregistered and fully open?
CG: Not necessarily. We wouldn’t discredit studies that chose not to be fully transparent (and not share code or data), or that did not preregister their methods. After all, even a non-preregistered replication attempt with closed code and data is a valuable contribution to scientific knowledge. Having said that, if I were presented with two identically powered replication studies, one of which was preregistered and shared its data while the other was not and did not, I would personally have greater trust in the more transparent of the two.
NS: You mentioned “replication attempt”. Are failed replications also eligible for the award?
CG: Absolutely yes! Replication studies are meant to accumulate knowledge, and both null and statistically significant results contribute to our understanding of a given phenomenon. For example, a well-powered failed replication challenging an important study can be very valuable in preventing the field from pursuing a “dead end”.
NS: Are researchers allowed to nominate their own paper or does someone else have to do it?
CG: Self-nominations are perfectly fine.
NS: How about old replication studies, are they eligible?
CG: Yes. For this year’s first edition (2016), there are no time restrictions in terms of recency. This might change in the following years (limiting the award just to papers published in the previous year).
NS: Is there enough time to submit for people who have just found out about the award? Getting reviews and resubmitting revisions of a replication paper will take at least half a year.
CG: Preprints that have not yet undergone formal peer review are perfectly acceptable submissions for the replication award, so you don’t need to wait until your paper is accepted. Furthermore, the submission deadline has been pushed back to the 22nd of February 2017.
NS: Can scientists reuse old data collected in their lab to perform a replication study?
CG: Of course! In fact I expect most labs are sitting on a wealth of replication data that was never published. All it takes to be eligible for the OHBM replication award is to write it up as a preprint and apply.
NS: You said that for the award preprints are sufficient, but which journals are likely to accept such a study for publication?
CG: PLOS, Frontiers and Nature Scientific Reports seem like good bets, as they do not use “impact” as a criterion for acceptance. NeuroImage: Clinical should also be happy to accept replication studies, given that it made an explicit editorial call for them. Cortex supports a Registered Reports article type, which guarantees publication of your results independent of the outcome of the experiment, provided the journal first accepts your preregistration report. This mechanism might be very useful for replications (since writing a preregistration plan for a replication is easier than for a standard study). There are probably more journals happy to publish replications - you just need to try!
NS: How was the idea for the OHBM Replication Award conceived?
CG: It was proposed by Russell Poldrack, Jean-Baptiste Poline, David Kennedy, Thomas Nichols and myself.
NS: What is the process to nominate a paper for the award?
CG: Just send a link to the paper/preprint you are nominating together with a short paragraph justifying your nomination to email@example.com.
NS: Chris, thank you so much for answering so many questions about this new award. We look forward to seeing the impact of recognizing reproducible results in neuroimaging research!
You can find more information about the OHBM Replication Award here.
The Communications Committee of the Organization for Human Brain Mapping is beginning its second year and is looking for additional members. This is a great opportunity to become part of a vibrant and thriving committee that produces posts for the OHBM blog and articles for HuffPost Science, conducts video and email interviews with top brain researchers, and uses social media to communicate that work to the brain mapping community.
The formation of this Committee was approved by the OHBM Council in 2015 with the primary goal of increasing the visibility and impact of members’ work within the OHBM community and extending it to a broader audience. The Communications Committee is now seeking a few additional volunteers for a three-year term. If you have experience in writing, copy editing, social media, video, graphic design or website maintenance, we hope you’ll consider becoming part of the Communications Committee. OHBM seeks a diverse membership from a wide range of geographic locations and experience levels, and encourages women and minorities to apply.
We welcome you to participate in this very important OHBM initiative. If interested, please complete the Call for Volunteers online form no later than Monday, November 28. To apply you must be a current member of OHBM (visit www.humanbrainmapping.org to renew your membership or become a member). Submitted applications will be presented to the Communications Committee leadership for consideration and selection.
If you have any questions, please contact Stephanie McGuire, Communications Manager at firstname.lastname@example.org.
Lancet Neurology called DMCBH “the future of neuroscience” and celebrated this new era of patient care and scientific discovery in its October 2014 issue. Leveraging the expertise of over 150 faculty members in brain research at UBC, SFU and UVic (including 28 Canada Research Chairs, 6 BC Leadership Chairs, 1 Canada Excellence Research Chair, and 7 donor-funded professorships in neuroscience) and the personalized, high-quality care provided by VCH, DMCBH is a provincial resource for clinical care for over 20,000 patients and their families per year.
We hope you’ll plan to attend the 2017 OHBM Annual Meeting for the valuable educational programs, keynotes and networking opportunities, but there will also be many opportunities to visit fantastic dining establishments, get out into nature for outdoor activities, learn about the history and culture of Canada and enjoy vibrant nightlife and entertainment options.
Stay tuned for more information about the student-run BrainMeOut initiative, which will make a return appearance in Vancouver, following its successful debut in Geneva.
BY EKATERINA DOBRYAKOVA
Excerpt from OHBM Communications/Media Team article on Huff Post Science:
At the end of June, I found myself running through the streets of Geneva with two other brain mappers, all three of us sweaty from trying to catch the bus. Even though I live in New Jersey and am used to muggy weather in the summer, I couldn’t help but notice how humid it was. We nearly missed the bus that would take us to the World Health Organization (WHO) to talk about how the Organization for Human Brain Mapping (OHBM) and WHO can work together to improve international public health through brain research. Thankfully, we made it on board and were able to get on with the important work of the day.
On July 1st, 2016, I joined a diverse group of behavioral neurologists, radiologists, psychiatrists, neuroscientists and public health professionals from around the world gathered in that building for a joint meeting between the WHO and OHBM. The WHO building opened its doors in 1966 and carries the stamp of time. Interestingly, we were all there to discuss something that could not have even been imagined in 1966 - applications of brain research to matters of public health. Read more.
BY NILS MUHLERT
What makes a successful international conference? Getting field-leading researchers to describe their work is of course key, but setting the stage (including hiring the venue, organising transport and arranging evening events) is equally important. As part of our OHBM 2016 insight series, we’ve provided views and highlights from those at the front of the stage - its keynote speakers (including Tim Behrens, Daniel Wolpert, Anissa Abi-Dargham and Nora Volkow) and special interest groups. Here, we look behind the curtain at the local organising team, those whose hard work fools you into thinking that organising an event on this scale is simple. No mean feat when you’re hosting 3,168 participants in one of the world’s most expensive countries!
The local organising committee (LOC) in Geneva was chaired by Christoph Michel, Professor of Neuroscience in the University of Geneva and a longtime attendee of OHBM. The LOC was greatly enhanced by the endeavours of a small group of local post-docs who, concerned that Geneva’s high costs might discourage those with tighter travel budgets, formed their own local organizing team, named BrainMeOut, to mitigate that problem. Their efforts provided students, postdocs and early career researchers with easy access to tasty, well-priced food and a chance to enjoy events hosted by this local BrainMeOut team: a varied mix of city tours, swing concerts, networking evenings and open air ping-pong contests (where – to my misfortune - my quiet German colleague revealed her former life as a Tischtennis-Bundesliga player). We speak to Christoph Michel and to Raphaël Thézé, co-director of the BrainMeOut events:
OHBM: I’m here with Dr. Christoph Michel, professor at the University of Geneva, and also chair of the OHBM local organising committee. Christoph, tell us about your experiences with OHBM.
Christoph Michel: I’ve been coming to OHBM since the beginning, its first meeting in Paris. I haven’t made it to all of them, but to most of them. And I’ve always wanted to host it here in Geneva, because I think it is a great opportunity to mark Geneva on the map of the neuroimaging community.
OHBM: What are your impressions from the meeting?
CM: It was fantastic – a real success. Most things ran smoothly. The executive office of OHBM has a lot of experience, which made hosting it easy to do. There were of course some challenges, mainly relating to hosting the conference slightly outside the city but, overall, I’d say it went OK. And we’ve had a lot of highlights, both scientifically and socially. I think the local neuroimaging community, particularly the younger generation, benefitted greatly from the meeting - be it through presenting their work, making contacts, showing the available research opportunities in Geneva, presenting the Masters and PhD programs, and so on.
OHBM: Anything you’re particularly proud of?
CM: We helped set up a symposium and meeting between the OHBM and the World Health Organisation. Making this contact possible was one of my main goals, since they’re based in Geneva. We organized a workshop at WHO after the meeting - it was extremely interesting and led to many ideas for future collaborations between the two organizations. It was great to see that the leaders of all international human brain projects participated and shared their ideas of how human brain research and the OHBM can contribute to public global health.
OHBM: And one last question – where would you like the next OHBM meeting to be held? We have a couple lined up but what would be your dream location?
CM: I think that it should dare to go once to South America, to increase the involvement of the South American neuroimagers.
OHBM: I second that! Thank you Christoph for joining us.
OHBM: How did BrainMeOut come about – who were the organisers, and how did they get in contact with the OHBM committee?
Brain Me Out: The name BrainMeOut – BMO for insiders – is actually inspired by the song “Take Me Out” by Franz Ferdinand, and the intention behind it is plain. The concept itself is the work of three neuroimaging-focussed graduate students from the University of Geneva. At first, Christoph Michel reached out to us to join the local organizing committee. He knew we had some experience with event organization in Geneva and that we had participated in multiple national and international meetings. He gave us the mission to make this OHBM meeting great. We knew from experience that the key to a successful meeting was human contact and networking opportunities, and we knew that Geneva was not an easy city to get around for the occasional visitor. So we devised a plan, BrainMeOut, where we would do most of the work upstream and create several opportunities for participants to get together. We asked ourselves what kind of social experience we would want and expect from an international conference; mostly it was about getting to know the city without getting lost, connecting easily with fellow researchers from around the world, having a good time at night with labmates and making new acquaintances without having to think about it.
OHBM: Part of BrainMeOut’s success was the variety of events hosted throughout the OHBM meeting – which were your favourite events from this, and why?
BMO: The HeadQuarter (HQ) was definitely a hit. It acted as a node connecting the various activities and offering a regular, welcoming yet very lively meeting point through the week. It did most of the work to connect people. I was particularly fond of the photobooth on Tuesday night, which really broke the ice and allowed participants to go home with a memory of the evening.
OHBM: How did you find the experience of organising and hosting BrainMeOut? Did you get to meet any useful contacts through this?
BMO: Organizing BMO was thrilling. We had a lot of planning to do, we sought funding on our own, we managed big budgets, gathered a team and designed a communication strategy. We certainly learned a lot from that experience. Contact-wise, we met with the OHBM central committee, worked alongside the OHBM communication team and certainly developed a strong network in Geneva. One downside is that during the meeting itself we were generally too busy to actually make contact with other participants. Fortunately, we had a great team of volunteers to help us! It was like throwing a party with our friends, and we had a lot of fun doing it.
OHBM: What advice would you give someone who wanted to organise a similar event at future meetings?
BMO: Not long after the conference, one of the participants emailed us to say “it was like having a personal travel agency…” and that’s what future committees should keep in mind while organizing BMO. From the start, it has to be managed by local brain imagers, familiar with the host city and able to deal with the planning and booking. An extended funding campaign is also critical to offer a greater diversity of activities and to keep expenses (e.g. drinks and food) as low as possible for OHBM attendees. In terms of activities, we are convinced that the key to success is, on the one hand, having a clear and informative website and an information booth at the conference venue, and, on the other, holding a central HQ connecting the activities through the week. With more time, or more resources, we would probably have focused on offering more and even crazier group activities to encourage total strangers to bond and maybe later share their science around a drink at the HQ.
OHBM: Thanks Raphaël for your insight, the BMO team’s hard work, and a great set of events!
Please remember that the abstract deadline for OHBM 2017 is slightly earlier this year, on Thursday the 15th of December. See you in Vancouver for more science, socialising and BrainMeOut activities!
BY NIKOLA STIKOV
In May 2016, OHBM announced the Open Science Special Interest Group (SIG). One of the SIG founders, Cameron Craddock, wrote an informative blog post about the mission of SIG and its potential. In the post Cameron illustrated the benefits and distinctions of open science by drawing upon the free beer vs. free speech analogy. The OHBM blog team felt that ‘beer vs. speech’ is jargon that needs explaining. Twitter thought otherwise. This made us aware that the open science voices are sometimes difficult to hear outside of their own echo chamber, especially in the noisy world of brain mapping. Cameron removed the reference to speech/beer from his feature, and we agreed to pick up the conversation with Samir Das and Pierre Bellec, two free speech and beer enthusiasts from Montreal.
Nikola Stikov: Can you please explain the difference between ‘free as in speech’ and ‘free as in beer’?
Pierre Bellec: The analogy between “free beer” and “free speech” comes from the open-source software community. Free as in beer, or “gratis”, means you don’t need to pay to use the software. Free as in speech, or “libre”, means you can freely reuse the software in new projects without direct approval from the authors. Free software is generally both gratis and libre.
Samir Das: The “Free” concept is not limited to software. More recently, we have focused on ideas such as Open Science. We are embarking on a new mission at the Montreal Neurological Institute (MNI) to build open science, but defining what 'open science' is can be tricky. The MNI is the first institute to go “Open”. What this means is that the institute won’t worry about patenting ideas and techniques, and will make acquired data freely available to the scientific community at large.
NS: So is ‘free as in speech’ always better than ‘free as in beer’?
PB: For software, people don’t care that much that it is free as in beer. At the end of the day, if you really want to use a product, you will find a way. The fact that software is free as in speech, though, has turned out to be incredibly powerful for innovation. Android is based on Linux, a prominent open-source project. Tesla’s autodrive is also based on Linux. You watch a video on the plane? Linux. Robots going to Mars? Linux. Linux is so robust and so flexible, it blows away anything that a private company could produce.
SD: Free doesn’t mean you can’t profit from it. Some people make a lot of money from free software, even though many people still consider it a volunteer service.
PB: Exactly. In free software, people work together on projects that are difficult to do alone. But you can still add a layer that is unique, and you can sell the product as a whole. Apple built its OS from Unix; they did not reinvent the wheel.
NS: Does sharing apply not only to software but also to data?
PB: Yes. A paper is not a very reusable unit, it is hard to build on it. There are details missing in your typical manuscript, plus we are an experimental field, and if you don’t have access to the data, there’s not much you can do with [the paper]. So ‘free as in speech’ in the context of science means that instead of sharing just papers, we should also share reusable units. Those units could be code, data, tools, workflows… I believe that hiring and promotion committees should consider all of these units when evaluating somebody’s work.
SD: By doing this we will reduce redundancy, waste, cost, because we will have more data available, and governments will spend less money. Even from a self-serving point of view, there is evidence to suggest that if you go open, you might get more collaborators, more citations, more funding, and ways to make money without violating open-science concepts. Finally, this makes it possible for other communities to use the same data in ways that [our] community could never even imagine, so that is very important.
NS: Tal Yarkoni published a paper about the next generation platform for science publishing, in which, on top of open-access and data sharing, he recommended preprint archiving and Reddit-like peer review. Do you agree with these recommendations?
PB: Open review is exciting, but I have only limited hands-on experience with it. I recently published my name as a reviewer of an opinion piece in Frontiers, then I uploaded my review on Publons.com. Publons is a free website where you can see my entire review history. I definitely enjoyed that process, it is useful to document what generally happens “under the hood”. What I haven’t done yet is take an hour to write a summary of a paper where I wasn’t a reviewer. I want to try that out in the future.
SD: This is the future for sure, but I am not entirely sure about every nuance and the exact details of the outcome. I don’t have a strong opinion about post-publication peer review, but if that is possible, I suppose it is a good thing. More transparency can help with the current reproducibility crisis in research. However, when it comes to preprint archiving, I feel like there is something to be said about due process. When we collect data for a study, sometimes it doesn’t make sense to release it immediately; we are not done yet. Short embargoes so you can finish your planned work might be in order. I am for a reasonable amount of process.
NS: So when should the sharing happen?
PB: I fully agree with Samir, I don’t think it is realistic today to tell people that everybody should share their data as soon as it is collected. Because you are going to scare people. At the end of the day, I believe in most cases embargos are not useful, and that ten years down the road few people will still use them.
SD: If [the data] is organized while you are collecting it, with proper standards, then it won’t be so much work to share it in a few years. One problem is that a lot of this work is currently done by contract researchers that are not faculty, and there is no long-term career path for them in academia. You get a grant for a couple of years, and then everybody scatters, or they go to industry. Universities need to shape up and do more. The current model is extremely wasteful and contrary to the mission of science - it takes a fair bit of time to train people, and losing the great amount of knowledge acquired is particularly harmful to the research ecosystem.
Pierre Bellec is a professor of computer science at the University of Montreal and CRIUGM, where he develops fMRI connectivity biomarkers for Alzheimer's disease. He currently chairs the OHBM Open Science Special Interest Group, and is also involved in organizing the OHBM Hackathon.
Nikola: A word or two about the Open Science SIG activities. Pierre, you are one of the organizers of the NeuroBureau hackathons, what do you do there?
PB: At the beginning [of a hackathon], people pitch projects, little teams are formed, and then people sit down and work. The idea is to take those 5 minutes during conferences, when you meet somebody, you have a coffee, you have an exciting idea but you need to go back to the talks. So the idea is to take that little chunk of time and stretch it to the length of the conference.
NS: Do you need to know how to code to be at a hackathon?
PB: That is a common misconception. The hackathons come from the tech community initially, so people associate them with coding, but we try to gather a different kind of community, and we see all kinds of people coming to the hackathon and having a great time.
Samir Das is the Software Manager for the McGill Centre of Integrative Neuroscience, and system architect for the LORIS database. His goal is to facilitate technological solutions towards difficult data management and processing problems in neuroscience and beyond.
NS: Samir, what is your role at OHBM?
SD: So, I wear a lot of different hats in my life, but at OHBM, I consider myself Pierre Bellec’s sidekick. The point of it is that we are all trying to further a common goal, to do things like open science and data sharing.
NS: And as part of that you organize parties.
SD: I know it sounds weird to say that parties are part of the open science mission, but communication and collaboration [are facilitated by] social events, whether at a hackathon level, or at a big party. It is amazing how much stuff can be solved over a beer.
NS: The next meeting is in Vancouver, have you already planned the venue?
SD: I haven’t thought that far ahead, but I already have an idea of how it could be. I am picturing a beach... I feel like that will facilitate even more science. :)
Thanks to Sarabeth Fox for video recording.
Nikola Stikov: So I'm here with Kirstie Whitaker, a post-doc at the University of Cambridge, and she agreed to talk to us about her experiences with OHBM. How long have you been coming to this conference?
Kirstie Whitaker: This is only my third conference, but my first one was in 2009. So I haven't been able to travel to all the amazing places that OHBM has been over the years, but it's lovely to be here in Geneva.
NS: Wonderful. You're very active with the Hackathon, so can you tell us a little bit about your personal experience?
KW: I came up with a project that I thought would be meaningful and I pitched it at the beginning and I got teammates that came and joined me and they just knocked it out of the park. It became so much better than I ever thought that it could be when I came up with the idea. So it was wonderfully inspiring, it was great to meet the people that I slightly hero-worshipped and brand new people, and it sort of flowed out into the conference.
NS: I know you're very passionate about diversity issues within the society. So have you seen any progress and do you have any suggestions about what should be done to bring more diversity at our meetings?
KW: I think that the keynotes were really beautifully gender-balanced. We had three women and four men, which is great. It's lovely to see [them] and all seven of them were excellent. I think I was a little bit disappointed that the prizes all went to white men. I felt like that was maybe not the greatest message that could be given. But what was really lovely was the number of people that stood up at the Town Hall which we had the last night of the conference and mentioned this. So the fact that people are aware of it and people are thinking about it brings it to the fore. I think it holds it in the mind of not just the committee, but also the people who are voting for everyone. I think what was called for in the Town Hall and which I would love to see going forward is more people nominating women, people of color, and people who've had non-traditional career paths, bringing forward these bright stars, to nominate them so they can be celebrated next year.
While women may be underrepresented in Council this year, women scientists Drs. Lara Boyd and Doris Doudet are the Chair and Co-Chair, respectively, of the 2017 Local Vancouver Organizing Committee for the 2017 meeting, and AmanPreet Badhwar is the Co-Chair of the OHBM Student and Post-doc Special Interest Group (SIG).
Randy Gollub: We're here today to talk a little bit about your thoughts and visions for what you would like to see the SIG do in the coming year.
AmanPreet Badhwar: Our mandate as the Student and Post-doc SIG is to provide opportunities for networking for trainees, both within the trainee group as well as with other young or senior scientists. To date, our flagship event has been the OHBM Monday Night Social, which we co-organized with the NeuroBureau.
RG: And have you a vision for how things might grow and develop in the future, about how OHBM can help you and your organization?
AB: Definitely. We're thinking of expanding to more than just the Monday night social because it is only one day of the year. We'd like the SIG to be more involved throughout the year. One of the things I do want to organize for next year's OHBM is a symposium to help trainees transition into the next phase of their career, and I'm especially referring to the post-doc stage because that's really the hardest transition. A symposium on that topic would be very helpful. The other idea that I've been discussing with OHBM is to have, during the meeting, a room dedicated to mentoring, where, for a certain period of the day, perhaps an hour or so, we have a rotating group of scientists, either young researchers or more established researchers, who the students can have conversations with and get some tips on how to move forward with their careers.
Check out the two videos to hear more about how these young women scientists are getting involved in the OHBM and how they are encouraging their colleagues and peers to become more engaged.
The OHBM has taken very seriously the call from members to make enhancing diversity an important goal for the society. In response, the OHBM leadership has recently created the Diversity/Gender Task Force, led by Co-Chairs Tonya White and Angela Laird, to address issues of gender and minority representation. The goal of this task force is to increase awareness of these issues and identify ways that women and underrepresented scientists can be promoted at the OHBM to ensure balanced representation. If you are interested in volunteering for this task force, please complete the application form before October 21. Interested individuals must be current members of the OHBM. You can renew your membership at www.humanbrainmapping.org. All submissions will be reviewed, with Task Force selections made by the Chairs of the Diversity and Gender Task Force.
In the meantime, the call for proposals for Educational Courses and Symposia for the 2017 Vancouver meeting was recently announced. I urge all of our OHBM community to make a special effort to include a balanced number of women scientists of all ages in their proposals!
BY THE KOREAN SOCIETY FOR HUMAN BRAIN MAPPING
The function and anatomy of the human brain are at the heart of debates about the inner workings of the human mind and body. Before the arrival of brain imaging technology, ethical dilemmas hindered neuroscientists who wished to conduct scientific studies on humans. Fortunately, neuroimaging techniques such as MRI, PET and SPECT have opened a new chapter in brain mapping. With the opening of what Victor H. Fischer in 1962 called “A New Window into the Human Brain”, researchers have been able to investigate not only human brain physiology and connectivity, but also brain function, such as emotion and cognition, as well as numerous mental health disorders.
To keep pace with this emerging field of research, South Korea started its first society of brain imaging researchers, the Korean Society for Human Brain Mapping, or KHBM, in 2002. Given that modern human brain mapping utilizes cutting edge information technology (IT), the rapid development of the IT industry in Korea facilitated the early development of the KHBM. The Korean government promoted research and development in the IT industry early on in order to increase Korea’s share of the international information and communication technology market. This timely advance created a positive feedback loop, in which investment across a variety of IT fields spurred rapid industrial growth. In 2013, the Korean IT industry alone represented 30.9% of manufactured industrial products, as measured by the Gross Domestic Product (GDP), compared to 13.6% in 1997. At the same time, the biotechnology industry increased its share of the GDP by a factor of 12.7, from ₩0.59 trillion in 1997 to ₩7.51 trillion in 2013. Such statistics clearly illustrate the rise in the importance of medical technology in Korea.
The KHBM encompasses virtually all active Korean human brain researchers, including medical doctors, medical engineers, psychologists, and more, in order to encourage the study of brain dysfunctions, including those specific to Koreans. In addition, the KHBM aims to broaden the scope of brain studies by fostering information sharing among experts, while promoting improvements in brain mapping technology. For instance, one of the earliest topics of discussion at the KHBM conference in 2004 was the production of a standard Korean human brain map.
Recent topics covered at the KHBM conferences span a wide range of issues. For example, some researchers reported on medical issues, such as the localization of lesions involved in neuropsychiatric disorders using brain imaging technology and the effective use of statistical probabilistic anatomical maps. Other researchers focused on technology-related issues, including the effective use and differences among PET, MEG, and fMRI when investigating a variety of neurological disorders, and on the creation of an artificial cognitive system, based on the identified sensory regions of the brain. These studies are made possible by employing brain imaging technology to visualize the functional connectivity of the brain in vivo.
The members of the KHBM emphasize the need to develop and expand technology-based medical engineering expertise to improve the precision of medical apparatuses. This common goal of young neuroscientists and clinical researchers in South Korea motivates the theme of OHBM 2018, “Mapping the Interactions.” The theme not only embodies systematic efforts to create connections and develop mutual goals among researchers who study electrophysiology, metabolism, brain function and anatomy, but also epitomizes the determination of KHBM to promote interactions between theoretical research and clinical applications, between academia and the public, and between developed and developing nations.
The 2018 OHBM meeting will take place at the COEX Convention Center, located at the heart of Seoul. Seoul is well-known for its mixture of traditional and contemporary Korean culture. One of the most renowned examples of this fusion is Insa-dong, where artists display their creative works in an environment surrounded by traditional architecture. This vibrant city will provide a backdrop for the creative energy of the OHBM as it brings together researchers from all over the world.
The Korean Human Brain Mapping community cordially invites you to take part in the OHBM 2018 meeting in Seoul, to engage with other neuroscientists, to form connections, and to share and discuss our knowledge and passion for human brain research.
BY PANTHEA HEYDARI
Figuring out the right methods to conduct fMRI analyses can be a full time job. Most recently, I spent countless hours trying to determine the best way to do a Region of Interest (ROI) analysis on my imaging data. Usually, the scene plays out like this: Me sitting in an uncomfortable office chair, eyes glued to the screen, fingers tapping, sometimes late at night, but mostly in the early AM hours (I’m more of a morning person) and always with piping hot coffee. I’m clicking on side links, opening tabs upon tabs, and at some point my hair goes into a bun as I start frantically flipping through past notebooks, looking for notes that describe my subject’s stroke lesions. I am contemplating using ROI masks to remove the stroke lesions in order to understand the remaining brain activity. Errr, maybe I should clarify this image (and minimize those distracting Reddit and YouTube windows).
I’m on the hunt for tutorials about fMRI ROI analysis and there’s a lot to learn.
My particular thesis project investigates how individuals with stroke engage the Action Observation Network (AON) during observation, execution, and imitation of hand and foot actions. I’ve mostly been running whole brain analyses but today, I want to look at specific activity patterns in the pre-motor cortex, inferior frontal gyrus, and the posterior parietal lobule.
Since I’m not sure which package I want to use for my analysis, I start out on the NITRC home page. I need to consider that my subjects have lesions and use this as a keyword to narrow my search. Through the Resources feature of NITRC, I compare packages such as Mango, MarsBaR, and FSL. Which software is better suited for my types of analysis? Which is better supported? Which matches my particular needs?
NITRC-R offers me the ability to compare software and determine which package is best able to serve my needs.
Mango:
· supports a variety of data formats and operating systems
· features fairly intuitive ROI editing and surface rendering
· allows for manual drawing of ROIs so I (or my minions…errr, I mean undergrads) can be specific about which voxels to select, but this may also increase rater variability
MarsBaR:
· is an added toolbox for SPM with an external homepage that features great step-by-step tutorials
· has easy-to-read instructions on how to define a functional ROI, extract the data, and run analysis through the Matlab console
· allows me to pre-define my ROIs or look at which voxels are active before selecting that as part of my ROI
FSL:
· is a popular tool for looking at functional MRI results
· is great for tractography and relatively easy to use
· allows me to draw or load ROIs, which can be exported into other packages, if desired
For these packages, NITRC shows me user ratings, support availability, development status, and documentation. I am concerned about reproducibility of results and how often these packages are used by other researchers. I am also interested in the availability of forums for my questions. For analysis, my top priority is being able to draw the ROIs myself and edit them if the automatic size is not what I prefer. These are factors I use in making my decision.
NITRC has consolidated information about resources and various software packages into a user-friendly and easily accessible site, so that I can compare packages in terms of priorities for my research. After assessing this information, as well as having open discussions with lab-mates about our project’s focus, I decide to go with FSL. I’ve been using FSL for prior work, so this makes sense in terms of logistics, but I also appreciate FSL’s ROI extraction tools and ability to control my regions. Lucky for me, NITRC has linked to sites for downloads and external resources of my winning choice. Easy enough, right?! All of this leaves me with much more free time. . .to spend online.
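Mechanically, the kind of ROI analysis described above boils down to averaging the BOLD signal over the voxels inside a binary mask, optionally after excluding lesioned voxels. Here is a minimal numpy sketch; the array shapes and masks are hypothetical stand-ins, and in practice the data and masks would be loaded from NIfTI files (e.g. via nibabel) and drawn or defined in a package such as FSL:

```python
# Minimal sketch of mask-based ROI extraction on toy data.
# All shapes and mask locations are hypothetical illustration choices.
import numpy as np

rng = np.random.default_rng(0)
bold = rng.standard_normal((10, 10, 10, 50))   # toy 4-D BOLD: x, y, z, time

roi = np.zeros((10, 10, 10), dtype=bool)       # hypothetical ROI mask
roi[3:6, 3:6, 3:6] = True                      # a 3 x 3 x 3 cube of voxels

lesion = np.zeros_like(roi)                    # hypothetical lesion mask
lesion[4, 4, 4] = True

usable = roi & ~lesion                         # exclude lesioned voxels
timecourse = bold[usable].mean(axis=0)         # mean ROI signal per volume

print(usable.sum(), "voxels;", timecourse.shape[0], "time points")
# 26 voxels; 50 time points
```

The boolean 3-D mask indexes the 4-D array directly, yielding one row per in-mask voxel, so the average over rows gives the ROI's mean time series.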
Interested in using NITRC for your analysis needs? Or, have additional questions on Best Practices in Data Analysis and Sharing (COBIDAS)? Take a look! As always, feedback and comments welcome and encouraged!
BY EKATERINA DOBRYAKOVA
Today, we're talking to David Poeppel, finding out about the differences between animal and human communication systems, his role in developing models of language processing, and in peeling away the 'speechyness' of speech. Dr. Poeppel is a Professor of Psychology at New York University and Director of the Department of Neuroscience at the Max Planck Institute in Frankfurt, Germany. As one of the OHBM 2016 keynote speakers, Dr. Poeppel discussed how research in the neurobiology of language has developed over the last 20 years. We took the opportunity to find out more about his research on speech perception and psychophysics.
Ekaterina Dobryakova: To start, I wondered why you decided to study language and, specifically, speech perception?
David Poeppel: Well, it's easy to want to study language because it's what makes us “us”. It’s the most convenient and compelling feature that we have. If we didn't have that, we couldn’t have this conversation. There are endless vitriolic debates about how similarly animals communicate with each other, so it's deeply fascinating. How can we understand anything? How can we talk? If you ever take a class in linguistics, it’s like opening the curtains – you suddenly understand something you didn't think could be studied by sciences, because if you take a language class, let's say, in middle school or high school, they drown you with minutiae in a very boring way and they try to take all the joy out. We've all taken second language French or Spanish in high school--oh my God, you know, what a nightmare--but if you suddenly think of it as a problem for sciences, it becomes a completely different kind of thing.
ED: You mentioned animal communication. How would you say our human communication through language is similar to non-language communication between animals?
DP: That is a good, hard, deep question for which we have no good answer. Animals are at the periphery, using the apparatus we have, the input systems and the output systems. We can learn a lot from studying animal communication systems. But there are certain attributes of human language that are just quite different, and for which we have no compelling animal model. So, some of the things we study are birdsongs or gestural communication. And those are wonderful important additions to our knowledge. But there are certain attributes of being a speaker of the language as a human that are unusual and that includes the peculiarly structured and complicated vocabulary.
ED: What is one of the most important projects that you are focusing on in your lab and how is it relevant to society?
DP: My laboratory focuses on basic science. That is, basic questions about the organization of perceptual systems, the auditory system and how we process language. We don't build products, diagnoses, therapeutic interventions. But of course the long term goal is that the insights we bring will help in all of these issues: for group communication, diagnosis, stroke, disease or rehabilitation, developmental disorders, you name it. So some of the work we've done has some very interesting and practical implications for developmental speech disorders. Max Planck has a good one-liner: ‘The application has to be preceded by basic insight’. I think this is very right: to really try to understand the system and its parts, and how they interact before you build the thing. You know, you want me to test the stuff before I give you a pill.
ED: What would you say is your most surprising finding?
DP: The most surprising finding is that we ever find anything! That's why we're at this OHBM 2016 meeting. One of the remarkable things of the last 20 or 30 years is the astonishing technical development, the devices that we all use. This is quite a remarkable achievement. I remember when I was a graduate student and I read the first papers on imaging, some of which were actually on language, I was both simultaneously excited and appalled. And it really stimulated me and made me very passionate about the stuff. But notice that until 20 years ago we were not able to do any of the things we're now addressing. It was inconceivable. Let's say there was the age of fascination, the age of growth. Now, I think, we're beginning to enter the age of maturity, where we can celebrate the amazing tools we have and the techniques and analytic approaches we have, and we can start to feel a bit critical about our own research.
So here are two things that I've worked on a lot in my lab, both of which, I naively hope, have some value: One is really questions about the structural organization of the brain. So with one of my close colleagues, Greg Hickok from Irvine, for many years we've developed a functional and anatomical model, pretty widely known as the dual stream model of language processing, which admittedly we stole straight from the visual system. We know a lot about the visual system and its anatomical physiological foundations, and some of those ideas struck us as potentially useful for the language system.
The second thing is the work that I've been focusing on for a number of years on neurophysiology and there, primarily, I’ve been using magnetoencephalography, MEG. It’s the technique I’ve begun to obsess about the last few years, primarily because perceptual processing and language processing are super fast.
ED: What is your biggest dream in research?
DP: Maybe your dreams get more modest as you cross into the precarious years of middle age. Of the many dreams I have for my labs, one that I'm particularly obsessing about right now is: What does it mean to store your words? Those are sort of at the intersection of everything. There are a hundred thousand things stored in your memory in a way that they can be listened to, spoken, read, signed. That means that the encoding of that is extremely complicated and subtle.
ED: What do you think is the coolest finding in the neuroimaging of language?
DP: I'd have to say all of my own [laughs]. I'll tell you one that I care about right now and I think is cool: there continues to be a long debate about specialization in the brain. Let's take speech perception. Speech perception is the transformation of acoustic information (so something hits your ear) into some code that's relatively abstract and that interfaces with linguistics. Now, for 50 years there's been a very vigorous debate about whether that is a special mechanism or a generic mechanism. So, is speech perception merely a species of hearing, or is it actually something that deals with particular functions? And so, last year, in a very elaborate series of fMRI experiments in my lab, we tackled this question one more time, trying to really nail down the extent to which there is specialization, in part, building on our dual stream model. We predicted that some very specific chunks of cortex, particularly, aspects of the superior temporal gyrus and superior temporal sulcus are not just generic auditory analyzers but are specialized for speech. That doesn't mean that entire area is specialized, but that means that there are populations of cells there that really deal with that kind of signal. So we went to great effort to make stimuli which could selectively peel away the speech-yness of the signal and see whether you can find an area in your head that has the right sensitivity for speech and the right specificity. That was the kind of experiment that was not just one experiment but 10. But at the end, it is pretty satisfying, because you feel like you really nailed it. I think we can say pretty conclusively that there is a high degree of specialization in a particular area of the brain. That's kind of cool because it shows that you can take a pretty innovative technique like functional MRI and answer a classical question.
ED: Many thanks!
Prof Poeppel’s keynote speech on ‘New Directions in the Neurobiology of Language’ will soon be available to view on the OHBM OnDemand portal. Keep checking for this and other great talks from OHBM 2016.
Thanks to Sarabeth Fox for video recording.
BY EKATERINA DOBRYAKOVA
The Organization for Human Brain Mapping hosted an exciting lecture on June 30th, 2016 with Dr. Anissa Abi-Dargham who presented her work pertaining to the topography of dopamine alteration in schizophrenia through the use of PET imaging.
Anissa Abi-Dargham is a Professor of Psychiatry and Radiology at CUMC, Columbia University, and New York State Psychiatric Institute, where she directs the Division of Translational Imaging. Dr. Abi-Dargham is a pioneer in PET neuroimaging, beginning her research in the field of dopamine transmission in schizophrenia in the 1990s. Her research has resulted in seminal findings that explain the complex alterations of dopamine transmission in schizophrenia and the impact these alterations have on clinical symptoms, cognition and response to treatment.
Ekaterina Dobryakova: What motivated you to go into your particular area of research?
Anissa Abi-Dargham: I went to medical school to become a psychiatrist and study psychosis. Out of all brain disorders psychosis seemed the most extreme to me and the most devastating on people’s lives. Brain imaging, especially molecular imaging, seemed to be a useful approach to get one step closer to the underlying “brain biology” that relates to psychosis. When I started there was much interest in dopamine in schizophrenia so I did some of these studies with my colleagues and one finding led to the next question and next study. Now we have greater understanding of the complexities of this system in schizophrenia.
ED: If you weren't talking to brain mappers or scientists, how would you describe your most proud scientific accomplishment?
AA: It must be the study where I estimated amounts of dopamine (a transmitter in the brain used between nerve cells to transmit signals) and compared patients with schizophrenia and controls. To do so I had to remove most if not all dopamine in the brain (by giving a treatment that stops new synthesis or production of dopamine for 48 hours; this is called a depleted state) and then compared the baseline scan (before depletion) to the depleted scan to derive how much dopamine was removed, thus inferring how much dopamine was present at baseline.
ED: If you were speaking to a non-scientist, how would you describe your research and what you do for a living?
AA: The brain is a complicated and intricate super computer that remains like a black box. It is difficult to understand its normal functioning, much less its dysfunction in brain disorders. I use imaging techniques to get at some of these questions.
ED: What do you think are the most pressing issues in neuroimaging for your area of interest? For the field in general?
AA: Technology, funding, collaborations: we need to develop better tools that are safe to use and non-invasive, so we can image the complexity of the brain. We need participants in research and multi-site collaborations to have enough power to address the variability across human subjects, and funding to do all that.
ED: What do you think is the future of neuroimaging for basic research? For translational research and application?
AA: Imaging can serve as an ideal translational tool to understand and link the effects of genes onto cells, circuits, and behavior. Animal models can provide an illustration of genes’ effects, which can then be searched for in humans. So imaging can bridge these multiple levels of investigation across species.
ED: When you first started out, what was the most inspiring/motivating paper you read? How about the same question, but in the last 5 years?
AA: Tough question because there were / are too many. An inspiring one was that of Surmeier and Gerfen summarizing the circuitry involving direct and indirect pathways in the striatum; I often read and re-read it.
In the last 5 years I would say some of the main papers marking major advances in the field, for example the genetics findings in schizophrenia published in Nature 2014. But others too, it is really hard to pick just one…
ED: What should the non-expert be wary of when reading about brain mapping articles in the lay press?
AA: Since imaging is very technical it is difficult for the non-expert to judge if a study is methodologically sound. That is the biggest issue. Another is small sample sizes and inflated results. In general, findings need to be replicated for one to start to believe them.
BY: OHBM BLOG TEAM
At the recent OHBM Annual Meeting in Geneva, we encountered an interesting variety of ways that people both chronicled their personal experience at the meeting and engaged with the material of sessions attended. Many people vigorously took notes, others live-tweeted or posted photographs of slides and presenters online, but one of the most distinctive examples we discovered was a meeting attendee who was live-sketching session speakers and posting them to Instagram.
The live-sketching artist, Roselyne Chauvin, is a PhD candidate based in the Netherlands and the founder of Cogni' Junior. When asked about how she started doing this type of drawing, Chauvin replied "Well, I do a lot of popularization of science for children (cognijunior.org) so a lot of non-scientists/teachers are following me on social media. I thought live-sketching the conference was a nice way to show that researcher meetings are not boring and, in general, to fill the gap between the general public and us. In the meantime, I discovered that it helps me to keep focus between all those amazing talks and to remember more of them. It's also saying to scientists that can't make it and want to catch some glimpses of it via twitter: you are not forgotten. Not everyone can fly to Geneva, right?"
The images below are from keynote lectures, educational courses, oral sessions and symposia. Think you know some of these speakers? Recognize material you learned in an OHBM session? See how many sketches you can identify.
OHBM - What is it that interests you about neuroimaging?
William Seeley (WS) - Neuroimaging has the potential to address three key issues in neurodegeneration research. First, brain imaging can tell us when and where neurodegeneration begins in living patients. This critical information provides the “treasure maps” we can use to guide our search for the cellular-molecular mechanisms of disease within the right neuroanatomical context. Second, functional imaging can help us understand changes in network physiology underlying patient symptoms. Finally, the dawn of “connectomic” imaging has allowed us to test competing models of network-based disease progression.
OHBM - What difficulties have you faced balancing a research and clinical career? What benefits has it brought?
WS - My clinical life frames and motivates everything we do in research, and I continue to be impressed by how much clinical science can teach us about healthy brain organization. For me, the major challenges relate to time and not having enough of it to do everything I would like to do in my career. Overwhelmingly, though, my life as a clinician has added great meaning to my life as a researcher.
OHBM - What draws you to OHBM and how does it differ from other similar, large conferences?
WS - I’ve been coming to OHBM for about 10 years and I keep coming back because of the enthusiasm of the membership for this field. It feels like a cohesive group of people - they each bring a different perspective and a different set of tools. It’s also a good place to learn about those tools – the very front line of methodological advances is reported here first. That always makes it exciting.
OHBM – In your keynote talk you laid out a number of different variants of the dementias, then focused on frontotemporal dementia. The dementias vary based on both behavior and symptoms. Could networks and the connectivity between networks aid differential diagnoses? Do you envision scanning patients in order to distinguish between different types of dementias?
WS - Neuroimaging of brain structure and function can help us refine our assessment of a patient’s clinical syndrome. In my talk you heard me discuss syndromic diagnoses and pathological diagnoses as distinct and separate concepts. Structural and functional imaging can help with syndrome refinement, but when it comes to the underlying neuropathological cause of that syndrome, I think those strategies are going to fall just a bit short. Take the example of behavioural variant frontotemporal dementia (bvFTD): it has 15 different neuropathological causes. I doubt we could use neuroimaging alone to decide which of those 15 underlying histopathologies is the actual cause of a patient’s bvFTD. It’s more likely that we’ll need a molecular technique, whether that’s biomarker analysis from spinal fluid or molecular imaging using PET scanning, to decide which of those various underlying histopathologies is the cause. Alternatively we’ll use some kind of a merger, where the structural and functional imaging refines the syndrome to the point where the differential diagnosis gets shorter. Then we use molecular imaging to nail the final diagnosis.
OHBM - Some of your recent research centres on selective vulnerability. Can you tell us what this is, and why it might be relevant to many neurological conditions?
WS - All neurological diseases are selective in some way. In neurodegenerative disease, we can see that progression occurs in a selective manner that is governed by network connections. Where (in which cell type), how (in what manner), and most importantly why a disease begins where it does remains far more mysterious, but may be a key to developing early-stage treatment or prevention.
OHBM – What would it take to get to the point where we have screening for different vulnerabilities? Would that goal be a priority in the absence of effective neuroprotective recommendations – or are effective recommendations available?
WS – That’s already the reality in Alzheimer’s dementia. Alzheimer’s is a common disease. You can screen a healthy older population for amyloid-beta deposition using molecular imaging and then triage patients for experimental treatment trials based on that result. To imagine doing that for some of the less common dementias, such as frontotemporal dementia, is a little more daunting because of the lower population prevalence. FTD has a population prevalence of about 1 in 5000 in those aged over 45, so it would have to be either a very inexpensive test or a very powerful therapy to justify that kind of screening program.
OHBM – It’s been proposed that the salience network may switch activity between the default mode and the central executive networks – and impaired switching has been hypothesized to play a role in a number of psychiatric disorders. Do you see that playing a role in the frontotemporal dementias?
WS – I don’t think we really know the answer, as I don’t think there’s been a study yet that went straight after the switching concept. From a phenomenological standpoint the patients are pretty poor switchers. Sometimes they get stuck in ruts and perseverate on the same behavioral response over and over. Other times they fail to switch in other ways when switching would be helpful. Sometimes they switch too much, where they’re distractedly moving from task to task as opposed to finishing. I do think that behavioral switching is a deficit – whether that correlates with network switching is an open question and I think it’ll be an important one to address at some stage.
OHBM - What do you see as the next major goals of neuroimaging in dementia research?
WS - Neuroimaging can play a critical role by providing short-term interval biomarkers of disease progress for use in early-stage drug development. To accomplish this goal may require that we develop better models to predict progression, and then use those models as a way of assessing whether a drug has had a meaningful impact.
OHBM: Thank you Prof Seeley!
Prof Seeley's keynote talk on 'Network-based neurodegeneration' will soon be available to view on the OHBM OnDemand portal. Keep checking for this and other great talks from OHBM 2016.
Thanks to Sarabeth Fox for video recording.
BY: JEANETTE MUMFORD, CYRIL PERNET, THOMAS YEO, LISA NICKERSON, NILS MUHLERT, NIKOLA STIKOV, RANDY GOLLUB, & OHBM COMMUNICATIONS COMMITTEE (IN CONSULTATION WITH THOMAS NICHOLS)
In recent weeks a lot of attention has been given to the paper “Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates”, by Eklund, Nichols and Knutsson, in the Proceedings of the National Academy of Sciences. This work highlights an important concern; however, some of the media attention has been based on a misunderstanding and an ‘inflated’ interpretation of the results. Specifically, too much weight has been given to the numbers “40,000 impacted studies” and “70% false positives”, an unfortunate side effect of reducing a study rich in information to a few soundbites. We respect the views of this paper and the effort put forth by the authors who, like the leadership of OHBM, understand there is a growing concern for validity and reproducibility in our field. The purpose of this post is to put these numbers in context and clarify how these findings impact our view of past and future fMRI results.
In task-based fMRI studies we are often interested in looking for systematic differences between experimental conditions or cognitive states across upwards of 100,000 voxels in the brain. It is widely known that this large number of statistical tests, typically one per voxel, requires correction for multiplicity. The most common approaches focus on control of the family-wise error (FWE), which is the probability that a given study will produce any false positives. The most common approaches for FWE control are voxel-wise and cluster-wise thresholding. Voxel-wise thresholding draws conclusions about specific voxels, while cluster-wise thresholding allows one to conclude whether a group (or cluster) of adjacent voxels shows an effect based on a feature, most often its size (e.g. only groups of voxels bigger than size N are significant). Eklund et al. consider both voxel-wise and cluster-wise FWE control in an exercise that tests whether the thresholding methods and their implementation by various software packages control the FWE as advertised. The innovation in this work is that they used resting-state fMRI data rather than computer-generated simulation data to estimate noise (see below for more on this); they analyzed this resting-state data as if it were actually task fMRI data.
Eklund et al. find that voxel-wise results are always valid, i.e. they control the FWE below the requested 5% level, and are thus safe; we won't discuss these further. They also find that, depending on the exact methods and tools used, cluster-wise results can be invalid, i.e. have an FWE in excess of the traditionally accepted 5% level. Understanding the specifics of when these methods are invalid is the focus of the article.
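To see concretely why multiplicity correction matters, here is a toy sketch (ours, not from the paper) that estimates the FWE on pure-noise data. With 1,000 independent "voxels" tested uncorrected at p < 0.05, virtually every simulated study reports at least one false positive, while a simple Bonferroni correction brings the FWE back to roughly 5%. All sizes and the random seed are arbitrary illustration choices:

```python
# Toy estimate of the family-wise error (FWE) rate on pure-noise data.
# 1,000 "voxels" and 1,000 simulated "studies" are arbitrary choices.
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
n_voxels, n_studies, alpha = 1000, 1000, 0.05

def one_sided_p(z):
    # Upper-tail p-value for a standard-normal z score
    return 0.5 * erfc(z / sqrt(2))

uncorrected = bonferroni = 0
for _ in range(n_studies):
    z = rng.standard_normal(n_voxels)            # null data: no true effect
    p = np.array([one_sided_p(v) for v in z])
    if (p < alpha).any():                        # any voxel "significant"?
        uncorrected += 1
    if (p < alpha / n_voxels).any():             # Bonferroni-corrected test
        bonferroni += 1

print("FWE, uncorrected:", uncorrected / n_studies)   # close to 1.0
print("FWE, Bonferroni: ", bonferroni / n_studies)    # close to 0.05
```

This is exactly the sense in which a valid procedure "controls the FWE below the requested 5% level": across many null studies, only about 5% should report anything at all.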
Figure 1. Cartoon example of how cluster-based thresholding works. The orange line represents the uncorrected, voxelwise p-values over a row of voxels (space). First, the cluster-defining threshold, CDT, is used to define the clusters, which are indicated by the boxes on the x-axis. Second, using the cluster size as the statistic, a threshold of cluster size k is used to assess the two clusters, concluding only the red cluster is large enough to be significant.
A cartoon example of the cluster-wise strategy is illustrated in Figure 1. First, a primary threshold is required to define clusters (in Eklund et al. this is called a cluster-defining threshold, CDT). The CDT is typically based on the uncorrected voxelwise p-values. SPM and FSL use random field theory to obtain FWE-corrected p-values, which requires an estimate of the spatial smoothness of the image being thresholded, typically a map of t-statistics that quantifies the effect size at each voxel. AFNI uses a simulation-based procedure that also relies on a smoothness estimate. In contrast, another choice is a permutation approach, which randomly permutes data labels to generate a null distribution for cluster size, from which a p-value is computed. The approaches in the three widely used fMRI data analysis packages, SPM, FSL and AFNI, are variations of parametric methods based on specific assumptions about the data, while the permutation method is nonparametric and requires minimal assumptions.
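The two-step logic of Figure 1 can be sketched in a few lines of Python for a single row of voxels. This is a toy illustration only: the function name and the fixed cluster-extent cutoff are ours, whereas real packages derive that cutoff from random field theory, simulation, or permutation.

```python
def cluster_threshold(p_values, cdt=0.01, min_cluster_size=5):
    """Toy 1D cluster-wise thresholding.

    Step 1: keep voxels whose uncorrected p-value passes the
            cluster-defining threshold (CDT).
    Step 2: group surviving adjacent voxels into clusters and keep
            only clusters at least `min_cluster_size` voxels long.
    Returns (start, end) index pairs for the surviving clusters.
    """
    clusters, start = [], None
    for i, p in enumerate(p_values):
        if p < cdt and start is None:
            start = i                      # a cluster begins
        elif p >= cdt and start is not None:
            clusters.append((start, i))    # the cluster ends
            start = None
    if start is not None:
        clusters.append((start, len(p_values)))
    # Step 2: apply the cluster-extent threshold
    return [(s, e) for s, e in clusters if e - s >= min_cluster_size]

row = [0.5, 0.001, 0.002, 0.3, 0.001, 0.001, 0.001, 0.001, 0.001, 0.4]
print(cluster_threshold(row, cdt=0.01, min_cluster_size=3))  # [(4, 9)]
```

As in Figure 1, the two-voxel cluster passes the CDT but fails the extent threshold, while the five-voxel cluster survives both steps.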
What is unique about this work?
This paper is an example of a simulation study, an evaluation of a method based on ‘made up’ data. Simulations are used because the FWE can only be quantified when the ground truth is known; specifically, we must ensure there is no signal in the data. A simulation is most useful when the simulated data reflect real data as closely as possible. This has been a limitation of previous studies, which generated synthetic data with software and used these synthetic data to test the performance of the analysis algorithms (Friston et al. (1994) and Hayasaka and Nichols (2003) are examples). This work uses a large pool of real human resting-state fMRI data as a source of null data, i.e. data that do not contain any task-related signal, so fitting a task model to them should not find any activation. The advantage of using actual fMRI data is that the spatial and temporal structure of the noise is real, in contrast to previous simulation studies that used computer-generated null data. In the simulations in Eklund et al., random samples of subjects from the resting-state data set are taken, and these samples are analyzed with a fake task design. The subject-specific task activation estimates are then entered into either a 1-sample test (to test the hypothesis that there is an effect of this task in this group) or a 2-sample test between two groups of subjects (to test the hypothesis that the effect of the task differs between the groups). Each result is assessed in the usual way, looking for FWE-corrected p-values that fall below p=0.05, and the occurrence of significant clusters (cluster-wise approach) is recorded. The authors repeat this a total of 1000 times, and the FWE is computed as the number of simulated studies with any false positives divided by 1000. In theory, using p=0.05 should result in an FWE of 5%.
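The logic of this Monte Carlo procedure can be sketched as follows. This is a toy sketch only: uniform p-values stand in for null fMRI analyses, and a Bonferroni correction stands in for the cluster-level correction; all names and numbers are illustrative, not taken from the paper.

```python
import random

def simulate_fwe(n_studies=1000, n_tests=50, alpha=0.05, seed=0):
    """Monte Carlo estimate of the family-wise error rate.

    Each simulated 'study' tests n_tests null hypotheses (uniform
    p-values, i.e. no true signal anywhere, mimicking resting-state
    data analyzed with a fake task design). A study counts as a
    false-positive study if ANY test survives the corrected
    threshold; the FWE estimate is the fraction of such studies.
    """
    rng = random.Random(seed)
    corrected = alpha / n_tests  # Bonferroni: a valid FWE correction
    failures = 0
    for _ in range(n_studies):
        p_values = [rng.random() for _ in range(n_tests)]
        if any(p < corrected for p in p_values):
            failures += 1
    return failures / n_studies

# With a valid correction, the estimate should hover near 5%:
print(simulate_fwe())
```

Eklund et al. apply exactly this counting logic, but each "study" is a full group fMRI analysis of resampled resting-state subjects; a method is valid if the resulting fraction stays near the nominal 5%.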
Brief Summary: Four study designs, two blocked and two event-related, were studied across multiple degrees of spatial smoothing, different cluster-defining thresholds and different software packages: specifically, SPM, FLAME 1 from FSL, OLS from FSL, 3dttest from AFNI, 3dMEMA from AFNI, and a permutation-based approach implemented in the BROCCOLI software. The main result, highlighted in the first figure of the paper, shows that when using a parametric approach, a cluster-defining threshold of p=0.01 leads to poor control of FWE (FWE from approximately 4-50%). However, FWE control is improved when a cluster-defining threshold of p=0.001 is used instead, regardless of the software package used (FWE ranges from approximately 0-25%). The more conservative nonparametric approach controls FWE regardless of cluster-defining threshold in most cases, although elevated FWEs were observed for the one-sample t-test in some cases due to skewed data. The second result, which is the source of the 70% FWE that has appeared in many other blog posts, occurs when simply using a cluster size of 10 as an ad-hoc inference procedure. In this case, a cluster-defining threshold of p=0.001 was used and clusters of 10 or more voxels were declared significant. The high FWE of this approach indicates that it should not be thought of as controlling FWE. More details, and the explanation of why FLAME1 appears conservative in both of these results, are in the next section. The general conclusion is that when using cluster-based thresholding, a cluster-defining threshold of p=0.001 gives better control of FWE than p=0.01 for SPM, FSL and AFNI, and the nonparametric approach had better-controlled FWE in the scenarios tested here.
AFNI problem identified. The results presented in this manuscript include the use of a pre-May-2015 version of AFNI, specifically the 3dClustSim function used to implement the parametric FWE control. One of the discoveries made during this project was that the smoothness estimate used in this older version of 3dClustSim had a flaw that increased the FWE. This was fixed by the AFNI developers in versions after May 2015. Although the new version reduces the FWE, it is still inflated above the target of 5%: with the fix, the FWE for the p=0.01 and p=0.001 cluster-defining thresholds with 3dClustSim changed from 31.0% to 27.1% and from 11.5% to 8.6%, respectively.
Is FLAME1 superior? Some results appear to support the claim that the FLAME1 option in FSL has better FWE control, even in the ad-hoc case, but this is due to a known problem where FLAME1 sometimes overestimates the variance. To clarify, FLAME1 differentially weights the contribution of each subject according to the subject-specific mixed-effects variance, which is a sum of within- and between-subject variances. The result is that more variable subjects contribute less to the statistic estimate. In comparison, the OLS option in FSL treats all subjects equally (as do SPM, AFNI’s 3dttest and permutation tests). When the true between-subject variance is small, FLAME1 overestimates it, inflating p-values and thereby reducing the FWE. When the true between-subject variance is not close to 0, FLAME1 estimates the variance more accurately, but the FWE can then be inflated, with results similar to FSL’s OLS. The resting-state data have a low true between-subject variance, leading to a lower FWE than we might see with task data, where systematic differences in task performance might indeed yield large between-subject differences. This is supported by a secondary simulation using task fMRI data with randomly assigned groups, which found FLAME1 to have error rates comparable to FSL’s OLS. Overall, this implies that the FWE will be controlled if the true between-subject variance is small and will be elevated, similarly to OLS, if the variance is larger than 0.
Why do parametric methods fail? The assumptions of random field theory include that the spatial smoothness is constant across the brain and that the spatial autocorrelation follows a squared exponential (Gaussian) form. The empirical spatial autocorrelation was found not to follow the squared exponential form very well; instead, its accuracy varied with distance. Simply put, for voxels close together there was strong agreement between the empirical and theoretical spatial correlation, but the two do not match for voxels that are far apart. This explains why results improve for more stringent cluster-forming thresholds: clusters are smaller, hence the voxels involved are closer together and the assumptions are more closely met.
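For reference, the squared exponential form assumed by random field theory can be written down directly. This is a sketch under our own assumptions: the FWHM value is arbitrary, and we use the standard FWHM-to-sigma conversion for a Gaussian.

```python
import math

def squared_exponential_acf(d, fwhm=8.0):
    """Spatial autocorrelation assumed by random field theory:
    rho(d) = exp(-d**2 / (2 * sigma**2)),
    with sigma derived from the smoothness (FWHM) of the data."""
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-d ** 2 / (2.0 * sigma ** 2))

# The assumed correlation dies off very quickly with distance (in mm):
for d in (0, 4, 8, 16):
    print(d, round(squared_exponential_acf(d), 4))
```

Eklund et al. found that real fMRI data have heavier-tailed long-range correlations than this rapid decay predicts, which is why large clusters formed at liberal thresholds such as p=0.01 occur more often by chance than the theory expects.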
Should we all panic and give up on fMRI? Are all 40,000 fMRI studies of the past worthless? Of course not.
The blog post by Tom Nichols refines this estimate to a more reasonable number of past studies that may be affected: closer to 3,500. (Note: PNAS has accepted an Erratum from the authors that revises the sentences that led to the sensationalized press articles.) The study shows that (a) FWE control does not work properly in the parametric tests using an ad hoc threshold of 10 voxels; (b) FWE is often controlled by permutation-based testing; (c) cluster inference for SPM, FSL, and AFNI using a cluster-defining threshold of 0.01 is likely problematic; and (d) although improvements would be expected if a cluster-forming threshold of 0.001 were used, FWE is still not controlled at the nominal level of 5% under all conditions.
How shall we proceed to analyze fMRI data?
Both parametric and nonparametric inference have pros and cons, and each works well when its assumptions are met. Prior work has highlighted the assumptions of the parametric cluster-based thresholding approach, including the need for a stringent (small p-value) cluster-defining threshold (see Friston et al. (1994) and Hayasaka and Nichols (2003) for examples). Although it was clear the threshold needed to be low, without knowing the true spatial covariance structure it wasn’t clear how low for real fMRI data. Since the Eklund et al. work used real fMRI data in its simulations, we now know that p=0.01 is not low enough and that p=0.001 is a better option. Generally, the permutation test has fewer assumptions and tends to have better FWE control, but Eklund et al. did find some cases with the 1-sample t-test where the nonparametric approach had an elevated FWE, due to skew in the data. Permutation-based options can be run on any NIfTI file using SnPM in SPM, randomise in FSL, PALM (also affiliated with FSL), Eklund’s BROCCOLI package, and mri_glmfit-sim in FreeSurfer.
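The core idea behind these permutation tools can be sketched for the one-sample case. This is a toy sketch under our own assumptions: for brevity it records the maximum mean across voxels, whereas the packages above record the maximum cluster size or a related statistic, and the function names are ours.

```python
import random

def sign_flip_max_null(subject_maps, n_perm=1000, seed=0):
    """Permutation null distribution for a one-sample group test.

    For a one-sample test, the exchangeability assumption is symmetry
    about zero, so each permutation randomly flips the sign of every
    subject's contrast map before averaging across subjects. The
    maximum statistic across voxels is recorded for each permutation;
    the 95th percentile of these maxima is an FWE-corrected threshold.
    """
    rng = random.Random(seed)
    n_sub, n_vox = len(subject_maps), len(subject_maps[0])
    maxima = []
    for _ in range(n_perm):
        signs = [rng.choice((-1, 1)) for _ in range(n_sub)]
        mean_map = [
            sum(s * sub[v] for s, sub in zip(signs, subject_maps)) / n_sub
            for v in range(n_vox)
        ]
        maxima.append(max(mean_map))  # max over space controls FWE
    maxima.sort()
    return maxima[int(0.95 * n_perm)]

# Usage with made-up data: 8 subjects, 20 'voxels' of pure noise.
rng = random.Random(42)
maps = [[rng.gauss(0, 1) for _ in range(20)] for _ in range(8)]
print(sign_flip_max_null(maps, n_perm=500))
```

Because the threshold is derived from the data’s own null distribution rather than from a smoothness model, no squared exponential assumption is needed, which is why these methods controlled FWE in most of the scenarios Eklund et al. tested.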
Importantly, AFNI users should update their installations to ensure use of either the repaired 3dClustSim (post-May 2015) or the new 3dFWHMx function, which uses a more accurate spatial smoothness estimate and will improve FWE control. Also, the ad hoc cluster size of 10 voxels produced the largest FWE and is not recommended as a method for controlling FWE.
The work of Eklund et al. supplies important information to those who choose to address the multiple comparison problem by controlling the FWE. In future work, researchers intending to use FWE correction can make better choices to ensure that the true FWE is closer to the nominal level. Although some previously published studies may not have controlled the FWE as stringently as intended, their results can still be interpreted, with appropriate caution. Multiple comparison correction is just one element of neuroimaging practice, and there are countless choices in the design, acquisition, analysis and interpretation of any study. We encourage everyone to consult the OHBM Committee on Best Practice in Data Analysis and Sharing (COBIDAS) report on MRI, and review the detailed checklists for every stage of a study. The report is available directly on the OHBM website http://www.humanbrainmapping.org/COBIDASreport and on bioRxiv.
BY CYRIL PERNET
During the annual OHBM meeting in Geneva I had fun making word clouds from the Twitter feed of the hashtag #OHBM2016. Attendees could see the word clouds between presentations, and I think they made the welcome screen look pretty cool (you can find them on the @OHBM_members channel and on the OHBM Facebook page). In case you thought some information was missing, that is simply because it was either not discussed that frequently on Twitter, or http://www.wordclouds.com/ did not show it (not all words appear, depending on design and size). There was no censoring, and you can blame me if something was not to your liking.
The exciting stuff
The most discussed lectures were those of Tim Behrens and Fernando Lopes da Silva, closely followed by the talk from Gael Varoquaux. The main topics that engaged attendees were connectivity analyses, machine learning, power analyses, BIDS and yes, Brexit.
Thomas Yeo aptly summarized the current state of connectivity analyses on the OHBM blog so no need to talk more about it.
Machine learning is increasingly used in neuroimaging applications these days, and Gael Varoquaux received lots of comments about his talk, 'Cross-validation to assess decoder performance: the good, the bad, and the ugly'. In this talk he showed that leave-one-out strategies are biased and that N-fold cross-validation, provided we respect the data structure, works much better.
I am really glad that use of power analyses is now at the forefront of neuroimagers’ discussions. During Sunday’s reproducibility workshop I discussed and presented two of the main tools used to carry out power analyses on full maps: fMRIpower and neuropower. These tools were then presented by Jeanette Mumford (creator of fMRIpower) and Joke Durnez (creator of neuropower) in the Open Science SIG room.
Another favorite topic of mine: data sharing. BIDS, or to give it its full name, the Brain Imaging Data Structure, is driven by Chris Gorgolewski and describes how to structure and store your data in an easily shareable way. It provides advice on how to name files and how to create simple metadata text files (TSV and JSON). Using BIDS doesn’t require programming knowledge, and it substantially improves data sharing by allowing machines to read data easily.
A paper from Anders Eklund et al. about failure to control the family-wise error rate (FWER) using cluster size was recently published in PNAS and elicited many comments, not just online but also from the floor. The paper suggests that cluster size correction may significantly inflate false-positives, addresses the extremely important issue of controlling FWER, and is a must read along with the comment from Glass Brain awardee Karl Friston and Guillaume Flandin.
From our community, gender imbalance and diversity were frequently discussed, often under the #GenderAvenger hashtag. It was often commented that committee members and awardees were predominantly white males from wealthy countries. The Organization is well aware of this and has actively sought to reflect the geographic diversity of our membership as well as to balance the number of male and female session and keynote speakers. Council takes seriously this feedback that more needs to be done, and is actively working to further address these issues and to push for all aspects of OHBM to become as diverse as the members it represents.
Top 5 twitter users
During the conference, the OHBM and OHBM_SciNews accounts retweeted posts from, or mentioning, these Twitter users. Thanks to the top 5: @kirstie_j, @NKriegeskorte, @pierre_vanmedge, @ChrisFiloG, @ten_photos.
Note: A version of this post previously appeared on Cyril Pernet's personal blog:
BY KEVIN WEINER
New OHBM Communications Committee article on HuffPost Science:
"As a scientific conference approaches, I always think back to my first science fair: I stood next to my homemade presentation of graphs and tables glued to a poster board positioned next to hundreds of others made by fellow elementary school students. An overweight Paisan from rural New Jersey, I was more looking forward to my post-presentation reward of cannoli from my favorite bakery in South Philly than I was to standing for hours on end answering questions from the judges.
Conferences these days are pretty similar to the science fairs from decades ago, just on a grander scale with better technology. For example, every year the Organization for Human Brain Mapping (OHBM) brings together thousands of scientists who work with brain imaging data from around the world to share hot off the press findings that they just published or are preparing to publish. This year, 4,391 presentations will be on hand in Geneva for our annual conference." Read more
We asked Dr. Volkow a few questions to start the conversation before her Keynote Lecture:
What motivated you to go into your particular area of research?
Nora Volkow: I went into addiction research because I wanted to understand what happens to the brain of a person that can no longer control his/her desires even though they want to do it. In other words, investigating the neurobiological underpinnings of the balance between neuronal networks involved in desires/emotions and those involved in self-regulation and how drugs influence them in addiction.
If you weren't talking to brain mappers or scientists, how would you describe your most proud scientific accomplishment?
NV: Providing insight that gave us an explanation of why people who are addicted to drugs cannot stop the strong urges to take the drugs even when this comes at the expense of very catastrophic consequences.
If you were speaking to a non-scientist, how would you describe your research and what you do for a living?
NV: I would say that, as a brain researcher, I explore how the brain works; and as a researcher studying the effects of drugs, I investigate the mechanisms within our brain that motivate our actions and that underlie free will.
What do you think are the most pressing issues in neuroimaging for your area of interest? For the field in general?
NV: For the addiction field, I would say, developing an imaging biomarker for substance use disorders that can help predict clinical outcomes, guide tailored interventions for a given patient and help in the development of new therapeutics for substance use disorder, including strategies to help the brain recover. For the field in general, I would say developing magnetic resonance imaging (MRI) with higher temporal and spatial resolution and with higher sensitivity.
What do you think is the future of neuroimaging for basic research? For translational research and application?
NV: For basic research, optical imaging in conjunction with optogenetics and DREADDs is allowing us to assess the role of specific neuronal types in circuit function. MRI and Positron Emission Tomography (PET) have also allowed us to start to assess the roles of specific gene manipulation on brain function. There are many applications of neuroimaging for translational research, including use of imaging for real life biofeedback interventions to regulate the function of specific brain regions associated with craving or negative moods that lead to relapse; use of imaging for tailoring prevention message on the basis of brain activation responses predictive of behavioral changes; use of imaging to identify individuals at higher risk for substance use disorders; and, as described above, the development of biomarkers to guide clinical treatment and new therapeutics.
If you're attending the Annual Meeting in Geneva, come and hear Dr. Volkow’s lecture on Tuesday to learn more about the insights into addiction-related neurochemical and functional changes that are being revealed by brain imaging combined with behavioral measurements and genetics.
More information on Dr. Volkow’s work and NIDA can be found at https://www.drugabuse.gov/about-nida/directors-page and you should also check out her blog for the latest in new addiction treatments and her perspective on today’s most important addiction issues.
BY LISA NICKERSON
The old adage “there’s something for everyone” is an understatement when it comes to the representation of imaging data analysis techniques at the OHBM Annual Meeting. From courses and workshops on the most basic fundamentals of analysis to oral sessions and symposia highlighting work at the forefront of analytical methods development, the annual OHBM meeting is unparalleled in this regard. As a young graduate student and later as post-doc, OHBM drew me in as one of the best resources for learning about imaging data analysis. Throughout the year, I would spend countless hours, days, and even months combing through the literature and the internet trying to determine what information was reliable or most relevant for my work, scouring the SPM and FSL forums for answers to my questions, and generally being frustrated at how long it took to get the answers I needed to make headway on various analysis issues. The OHBM Educational Courses and Morning Workshops offered me an opportunity to learn from experts, meet them, and ask them my questions directly. This is the only conference I know that places such a strong emphasis on imaging data analysis, and I advise all my trainees and collaborators who are trying to learn analysis to go to OHBM to soak it in.
This year, the opportunities for learning actually begin before the OHBM meeting starts with several Satellite Meetings taking place right before the conference, including: FSL Course 2016, Pattern Recognition in Neuroimaging, Brain Connectivity, and the BrainMap/Mango Workshop. In addition, the OHBM Educational Courses take place on Sunday before the Opening Ceremonies, with several courses that are fantastic for students, post-docs, those who are new to neuroimaging, and those who just want to pick up new analysis techniques.
The Art and Pitfalls of fMRI Preprocessing is a long running OHBM Educational Course designed to expose beginners to the critical importance of key fMRI pre-processing steps for both resting state and task fMRI and, this year, covers typical pre-processing pipelines in three major software tools, FSL, SPM, and AFNI. The course on MR Diffusion Imaging: From Basics to Advanced Applications will highlight methodological considerations of both acquisition and analysis for mapping structural connectivity and white matter microstructure. More advanced statistical methods for those with some experience, or for those who are merely curious, are also represented in courses such as:
Not all of the analysis-related symposia cover connectivity though. On Monday, What Neuroimaging Can Tell Us? From Correlation to Causation and Cognitive Ontologies takes up the important issue that simply studying associations between brain function and cognitive function does not inform the causal mechanisms of how brain functions actually give rise to cognitive functions. This symposium covers causal inference, including new methods for deriving causal hypotheses from observational data and validating causal hypotheses by brain stimulation. And one of Wednesday’s symposia, Neural Nets to Neural Nets: Deep Learning Approaches to Neuroimaging, will introduce deep learning, the new area of machine learning that was used by a computer program developed by Google DeepMind to beat Lee Sedol at Go without any handicaps. These methods are also reinvigorating the AI community. For those interested in this exciting new area of research, also be sure to check out the Talairach Lecture by Daniel Wolpert and our interview of him to see how scientists at Cambridge are applying some of these methods to tackle neuroscience questions.
The diversity of topics covered in all of these satellite conferences, Educational Courses, and Morning Symposia is truly astounding. I find it a real challenge to keep up with new specialized techniques and evolving perspectives on established methods, and being able to drop in on these lectures at OHBM both keeps me ahead of the game for developing new research directions and keeps me doing “good science”.
Kevin Weiner: If you met someone on the street who had no idea what brain mapping was, how would you describe your research and what you do for a living?
Tim Behrens: I play with brains and computers. I try to figure out how the world is represented in your brain and how you use these representations to guide your behaviour. I have whiteboards with boring looking equations on them. I have computers with lots of data on them. I try to see if the data on my computer matches the equations on my whiteboards. The data on my computer often comes from humans playing games in an MRI scanner, where we can watch as big patterns of brain activity change from moment-to-moment, but some of it also comes from recording what individual cells are doing when animals play similar games in the lab. Actually, that is a description of what all my friends and colleagues do. I mostly get to work and wonder what to do because my colleagues are doing all the work. Sometimes I also play candy crush.
KW: What should the non-expert be wary of when reading about brain mapping articles in the lay press?
TB: They shouldn’t. What does it matter if some of the nuances are not captured in the article? OK, often the article is just wrong, but that isn’t special to articles about neuroimaging. That’s just bad journalism. They should look at the amazing brain pictures, and they should say: "My god. That is a picture of activity inside someone’s brain. And that person was having lunch an hour later. That is amazing. Let’s give them our money”.
KW: If you weren't talking to brain mappers or scientists, how would you describe your most proud scientific accomplishment?
TB: Like all scientists, I think the coolest thing I have done is the stuff we are working on now, so I’ll describe 3 papers in one paragraph - one just published and two in submission. It’s also what I’ll be talking about at the meeting. Obviously, the next paragraph will sound very pretentious to any scientist reading it!
So - we figured out something about the languages that the brain uses to store knowledge about how things are related in the world. We figured out that brain regions can store relationships with positive (excitatory) connections between neurons but that these connections must be balanced by mirror-image negative (inhibitory) connections - kind of like “antimemories”. We figured out that many of these connections can then be organised into structures that represent knowledge in different parts of the brain, and that these neural structures are probably encoded in a language that other people have discovered for encoding our knowledge of space. This language is called a “grid cell code” and it is a very beautiful language. If this turns out to be true, it will be important for understanding how abstract concepts are represented in our brains, and how we can navigate through abstract concepts to understand the world around us.
Also, I used operant conditioning to train my son to respond “Dada” to the question “Who is the coolest?” when he was a year old. This was deeply satisfying both personally and scientifically.
KW: What do you think is the future of neuroimaging for basic research? For translational research and application?
TB: I think neuroimaging is in an awesome phase. Really awesome. We have moved to a stage where we can do really big experiments really well, and really study the organisation of cortex. New technologies are allowing us to study neural representations at the mesoscopic scale. These come both from new high resolution imaging technologies, but also from an increased understanding of how to design experiments that study representations rather than just average activity levels. I think there needs to be some work validating this stuff with electrophysiology and understanding how these things relate to cellular codes (but that is happening) and obviously there is still a lot of stuff we can’t see. But basically, I think that these two massive innovations (big data driven imaging with extremely high quality data, and the ability to study representations) will transform the current and next decade of human neuroscience. On top of this, MEG is really working at the “source” level now, so we can study dynamics. The kinds of questions that will be answered by neuroimaging experiments in the next years will be really about how stuff is organised, how computations happen etc. etc. I think this will bring us closer to animal experiments, so we can test whether neural codes found in animals can explain higher human cognitive behaviours that can’t be studied in animals. I think it will mean that interesting neuronal mechanisms discovered in animals, will be testable in real clinical populations for the first time, as candidates for disease. Being able to study representations across the whole brain simultaneously is new to neuroscience in any species. How is the same stuff represented in different areas? How do signals in one area change representations in another? I think it’s really an exciting time to be doing human neuroscience for people who want to think deeply about neural mechanisms and their organisation throughout the brain. 
Obviously, it’s also a very exciting time to be doing nonhuman research with a buffet of new techniques available that have extraordinary levels of precision and control. I think we need to make sure that the different fields stay in touch with each other as we are figuring out how neural computation happens across species.
KW: What motivated you to study frontal cortex and connectivity?
TB: I mostly just drift towards things that, or people who, are interesting. For example, things that seem new and cool and people who are fun to talk to. The people who I work with in Oxford and UCL are just about the most clever people I know. And some of the most exciting work in the world is happening in the function of frontal cortex and neural representations; in behavioural control; and in brain connectivity. It’s no coincidence that I copied these clever people in choosing these topics to study… I get to hang out with cool people and talk about interesting things.
KW: When you first started out, what was the most inspiring/motivating paper you read? How about the same question, but in the last 5 years?
TB: Bit of a weird question for me, because I started out making new methods for measuring brain connections from diffusion imaging, and now I am studying representations and computations. The classic papers in the diffusion field were obviously influential early. From people like Peter Basser, David Tuch, Geoff Parker, Derek Jones, Danny Alexander and many of the other early pioneers of diffusion imaging. Nowadays, I often have fun reading papers about how computations work. For example, you always learn something cool when you read papers by Peter Dayan, Nathaniel Daw, Yael Niv and their colleagues. I also like learning about how cool representations can enable computation. I loved Tim Vogels’ 2011 Science paper about simple inhibitory hebbian mechanisms for storing memories, for example. When I want to know how the frontal cortex works more broadly, I often turn to the views of people like Matthew Rushworth, Betsy Murray, Dick Passingham, Chris Frith, amongst many others. Recently when I have been interested in how computations work between frontal and temporal cortices, I have been very influenced by ideas from Howard Eichenbaum and Neil Burgess and their respective sets of awesome colleagues (as well as the league of extraordinary papers about how the hippocampal formation encodes space).
Rather than papers, however, I have mostly been inspired by people. I have been lucky enough to work with awesome sets of people. My group has always been stuffed with really clever students and post-docs, many of whom have become long term friends and colleagues. I won’t name individuals from my group here because they are all special :). From the very early days, though, I was lucky enough to work with people like Heidi Johansen Berg, Matthew Rushworth, Steve Smith, Mark Woolrich, Christian Beckmann, Mark Jenkinson. These guys have taught me how to think about science, data and the brain. They are endlessly curious and inventive. Who needs more inspiration than that?
KW: Thank you, Dr. Behrens!
Dr. Behrens will be giving the Wednesday afternoon Keynote lecture at the 22nd Annual Meeting of the OHBM Organization for Human Brain Mapping (June 29th, 4:15pm) later this month in Geneva. The topic of his lecture will encompass storing, using, and updating knowledge for behavioral control. All in all, we hope you will enjoy his lecture and have enjoyed getting to know him a bit better.
Excerpt from OHBM Communications/Media Team article on Huff Post Science:
"You may have heard the basal ganglia being mentioned in mainstream media recently associated with movement disorders like Parkinson’s Disease, which burdened the late Muhammed Ali. And rightly so - the role of the basal ganglia is mostly understood as related to the formation, execution, and remembrance of a sequence of movements towards a goal like throwing a punch for Ali or walking for us." Read full story.
BY NIKO KRIEGESKORTE
I'm here with Professor Daniel Wolpert of the Engineering Department at Cambridge University. Daniel is going to give the Talairach Lecture this year at the OHBM meeting in Geneva. I’d like to hear a little bit about his research, about his lecture, and his view of our field.
My Director of Studies at Oxford said to me “Modeling the brain is all where it’s at.” And he showed me a paper by Zipser and Andersen, which had just come out in Nature. They were using a simple neural network to model visual cortex. I was completely transfixed! So I decided I’d had enough of medicine. It was too much like hard work. I wanted to go back and do basic science.
After getting my PhD, I thought it was time to actually go and learn some mathematics. So I went to MIT. I had my interview with Michael Jordan who said “If you want to come and do a postdoc here you have to know what a Jacobian is. If you don't know what a Jacobian is, don't come here.” I said “That's not a problem.” I had no clue what a Jacobian was! I went straight back to the hotel room and read up all about them. And I did a postdoc there which was great fun. MIT was such a vibrant place.
NK: How did you end up at the engineering department in Cambridge?
DW: I was at University College London for ten years, where I was, I guess, the most computational person in a biological environment. And one day I got a phone call from Keith Peters, who was the head of the [Cambridge University] medical school, saying "Are you interested in a chair in Engineering at Cambridge?" And I said, "I think you've got the wrong person. I'm not an engineer. I'm a neuroscientist!" He said, “We're not idiots in Cambridge. We know what you do! We want to create a bio-engineering programme, and what we really want to hire is not an engineer with an interest in biology, but a biologist with an interest in engineering." This was very attractive to me, because going from being the most mathematical in a biology department to being the least mathematical in an engineering department would be very good for my game. So I moved to Cambridge and we set up this group which does computation and biological learning. The idea is that we do both machine learning and neuroscience. The interplay between the machine learning and the neuroscience, I think, is absolutely vital. I think the idea that you can understand the brain without very strong computational support is ridiculous.
NK: What’s your lecture at OHBM in Geneva going to be about?
DW: The first thing I want to convince people of is that there are interesting questions about motor control. I want to convince people that actually the only point in understanding cognition and sensation and perception is to guide action. Then I want to go through the different levels we’ve been working on to try and really explain the interesting new developments, both at the low level – sensory-motor noise, our work on normative models of Bayesian processing – and at the learning level: how people learn structures of tasks. Finally, I want to cover our more recent work, trying to link decision making and motor control together: How motor control affects decisions and how decisions affect motor control. I’ll try to give everyone a bit of an understanding of the algorithms we think the brain uses. Now I have to admit that we don’t do imaging in my group. But I’d like to inspire imagers to pick up our ideas, ideally, and go and test them using their techniques!
But the problem is they’re intractable. So although you can say you should do the Bayesian thing, actually doing the Bayesian thing is intractable for computers and probably intractable for the brain. So I think all the smart money at the moment is asking what the clever approximations are that the brain can use to solve these sorts of problems. And so the group which I'm head of is half machine learning and half neuroscience. The machine learners’ goal is not just saying it’s Bayesian, but asking how can we do these complex computations in efficient ways. And I think they've been very successful. We used to think that the way you did control was to have some cost and some desired trajectory. You have to play out the desired trajectory, and if you get perturbed, you update your plan. That's a very inefficient way to do control. All you basically need is an optimal feedback controller.
NK: Looking at your work, it seems to me we can learn a lot about the computations from behavioural data. Do we need to measure brain activity at all?
DW: Oh, we certainly do! I very much believe in Marr’s levels. I tend to focus on the computation and the algorithm. You know, what problem does the brain have to solve? What algorithms does it use? But clearly in the end we want to be able to understand where the algorithms are instantiated and, more than that, I think brain imaging can help differentiate between algorithms. I think the model-based approach, which has become very popular, is incredibly impressive! I guess if that had been around 20 years ago when I first started out, maybe I'd have got involved in imaging. That seems a very beautiful way to go.
NK: In humans we can get tens of thousands of channels of haemodynamic activity. In animals, at the circuit level, we can measure individual neurons, and much larger numbers of them than before. Do you prefer humans or animals?
DW: I work on the control of movement. I think animals have everything we need to understand most of the control of movement. I think people who work on language are a bit stuck in terms of the model systems they can work on. For me, the neurophysiology level is an area I follow probably more than imaging, because most animals can move, and most of them can move better than we can, so I think some of the circuit work from, say, places like Janelia Farm is really exciting. My field relies on very simple things like robotics and virtual reality, which have improved. But if you look at the improvements in imaging and in particular in neural circuit manipulation, those molecular techniques are just extraordinary.
But it's still incredibly useful even for motor behaviour because there's no reason to believe that a rat does motor control the same way a human does. We have very different bodies; a very different understanding of the world. So I think it's incredibly useful.
NK: Toward an overall understanding of brain function, do we go from the bottom up or from the top down?
DW: We go both ways and they meet in the middle! It's hard to start from the middle. I think from the top down, we can get the algorithms. It's very hard for people who work on circuits to start with high-level questions. They're just trying to understand how the circuit works. But for some circuits, we've made huge advances in understanding the algorithms. I am very impressed with people who work, for example, on the cerebellar structure in electric fish, where we know it does prediction of sensory consequences, but it wasn't really known exactly how. Recent work from Nate Sawtell at Columbia has beautifully shown how that circuit works. That work is just spectacular and probably wasn't achievable ten years ago.
NK: AI is finally beginning to work – using models inspired by the brain. Is AI relevant to neuroscience?
DW: AI has been very successful in a limited number of tasks, tasks which are very clear, like the game of Go. That's a very simple state of the board and a very simple objective to win. When it comes to more general tasks, like having an autonomous agent acting in the real world, it's much harder to write down the cost function or what the algorithm should be to achieve success in the world. In robotics, closer to my area, there have been some successes. [Boston Dynamics’] BigDog robot can walk over terrain. But then there are the really hard tasks, like manipulation, that are still unsolved. Robot control is generally solved one task at a time. So a robot is hand-tuned to solve one task. And if you want to go to another task, you go back to square one and start all over again.
One of the big challenges for the future is how you make general-purpose algorithms, which can learn multiple tasks and interact in multiple environments. That's still a very hard problem. At the moment, there's no robot with the dexterity of a five-year-old child in terms of manipulation: very good for things like driving and navigation, but when it comes to tactile things with the arms and hands, they're really in their infancy.
NK: Ultimately, we need computational models that actually perform the function.
DW: In the end, that's the proof of the pudding. But unfortunately sometimes you can reproduce the function without understanding it any better. So one of the frustrations with deep nets is that they may work, but you don't really know why they work. But it's still great that they work. I think there's a tension: there are those of us who want beauty in the algorithms as well as them working, and those who just want the algorithms to work. And so those who believe in normative and Bayesian models are, I would guess, a bit frustrated by the deep nets, because they work so well when you'd want the normative, optimal solution to work better.
NK: Finally, I'd like to hear about your interests and obsessions beyond science.
DW: I guess my real obsession is science. I tend to work on science most of the time and I just love working on science at weekends and evenings. But my other obsessions: I have two daughters who are both in the sciences. Next month, all being well, my eldest daughter will become a doctor! My youngest is a chemist, and so they're both at university. We spend a lot of time with them.
I guess one of my obsessions is travel. As a family, we love traveling. So we spend a lot of time together, me going to fun meetings in China, India, and South Africa, and they often join me and that's an absolute joy.
Thanks to Simon Strangeways for video recording and to Jenna Parker for transcribing the interview.
Excerpt from OHBM Communications/Media Team article on Huff Post Healthy Living:
"The brain is commonly intuitively regarded as a collection of separate regions, each with distinct functions of complex behaviors, experiences and phenomena. But “fear centers” and “planning centers” are not separate rooms in our brains in which neurons ring alarm bells and draft flow charts, respectively." Read more
Excerpt from OHBM Communications/Media Team article on Huff Post Science:
From Broken Brains to Frankenstein: A Walt Whitman Birthday Listicle
"Walt Whitman was born on this day in 1819. His lifespan overlapped with a period in neuroscience history that laid the foundation for today’s exciting time of brain exploration. For example, in 1854 when Whitman was 35 years old, Emil Huschke produced the first lithograph of the human brain. Just one year later, Pierre Gratiolet proposed a strategy for how to demarcate the lobes of the human brain that is similar to how they are defined today. Though he would likely roll over in his grave at the word listicle, here are three ways Whitman is historically linked to the brain. . . " Read more
Human brain mapping is inherently an interdisciplinary pursuit that benefits from the collaboration of diverse groups with a breadth of perspectives and expertise. This is why my colleagues and I are pleased to announce the formation of the OHBM Open Science Special Interest Group (SIG) to foster collaboration by encouraging the open dissemination of insights, tools, and data.
Open science is a movement built upon an ethos that fosters transparency through the free sharing of scientific code, data, derivatives, and publications. Open source software has been part of the OHBM community from the beginning, and nearly all of the tools we rely on are available without cost, with many placing very few restrictions on their use. The openness of these tools democratizes access for researchers and facilitates the verification and extension of the methods they implement. Openly sharing raw data has also long been central to our community. Although the initial efforts made by the fMRIDC and OASIS were perhaps ahead of their time, tens of thousands of brains are now available through the 1000 Functional Connectomes Project, the International Neuroimaging Data-sharing Initiative, OpenfMRI, PING, the Human Connectome Project, and NITRC-IR, and the list continues to grow. We are additionally seeing the growth of initiatives aimed at sharing data derivatives, whether outputs of group-level analyses through NeuroVault, diverse preprocessed data through the Preprocessed Connectomes Project, or results reported in the literature through Neurosynth and BrainSpell. By pooling resources across labs, these openly shared data and derivatives are enabling analyses of unprecedented scale and are providing valuable fodder for developing new tools and educating new scientists.
Open access publication, which includes both free access to the scientific literature and transparency in the review process, is still being developed in the community. Several journals offer open access publication either exclusively or for an additional charge. The US National Institutes of Health and other funding institutions around the world have mandated that publications arising from the research they support be openly available. Transparency in review is now being supported by systems such as Publons and the journal GigaScience. This transparency fosters a more honest review process and allows a critical and diverse interpretation of a paper based on the process it went through during publication. Some journals are now enabling and encouraging post-publication comments on papers, which provide very important long-term review. Preprint publication through repositories such as arXiv and bioRxiv is another way of supporting transparent publication that is growing in popularity in the community. Some authors (including me) are also preparing their publications in the open using GitHub and other tools, making the entirety of the creation process transparent.
The Open Science SIG will support the spread of these practices in the OHBM community in a variety of ways. We will organize annual OHBM hackathons to encourage open collaboration between researchers from a variety of backgrounds and seniority levels. For the uninitiated, these hackathon events borrow ideas from, but are more than, stereotypical hackathons from the computer-programming world. The projects performed at these events do include development of new tools, but also involve working on a variety of neuroscience resources and projects, such as developing new data analyses, curating literature databases, assisting with data processing activities, and discussing open issues and important new ideas.
Another way we hope to encourage open science is through educational efforts. To this end, we will be organizing the “Brainhacking 101” courses during the 2016 Annual Meeting. These courses are conceptually similar to the popular Software Carpentry and Data Carpentry movements, and provide tutorials on collaborative tools and open source software that facilitate brain mapping. We will also host an open science room throughout the annual meeting where researchers can come to collaborate and see software demos.
We believe that the tenets of open science will accelerate scientific discovery and our hope is that by promoting a culture of openness and collaboration, the Open Science SIG will help enrich the scientific journeys of all OHBM members.
Excerpt from new OHBM Communications/Media Team article on Huff Post Science: "Reading is a fundamental ability that is critical for academic success. Currently, 5-15% of school children suffer from reading difficulties, which positions them at a lower starting point in school. Wouldn't it be great if we could identify neurobiological biomarkers of reading difficulties in less than an hour? Based on recent findings, it looks like we are becoming closer to making this goal a reality." Read more here.