“In order to be a mentor, and an effective one, one must care. You must care. You don’t have to know how many square miles are in Idaho, you don’t need to know what is the chemical makeup of chemistry, or of blood or water. Know what you know and care about the person.” — Maya Angelou
The online mentorship program is an ongoing initiative launched by the OHBM Student and Postdoc Special Interest Group in early 2017. In this international initiative, mentors and mentees from around the globe are matched on the basis of their experience and expectations. The mentor supports the mentee’s growth by providing advice on topics such as (but not limited to) academic development, grant writing, and work-life balance. What is unique about this program is that every member of the OHBM community can be mentored and can also be a mentor. For example, the program has early-career principal investigators (PIs) who seek mentoring from more established PIs, as well as senior PhD students who mentor trainees just starting out. As a rule of thumb, the program maintains at least 3 years of “experience difference” between mentors and mentees, with mentor-mentee pairs often being close in career stage. Currently, there are 424 participants in the program. In this blogpost, we compare statistics from two successive rounds (Round 1, 2017 and Round 2, 2018) of the mentorship program: 252 participants signed up in Round 1, and an additional 172 participants signed up in Round 2.
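The “at least 3 years of experience difference” rule of thumb is simple enough to sketch in a few lines. The following is a purely hypothetical illustration, not the program’s actual matching code; the function, field names, and example data are all invented for this sketch:

```python
# Hypothetical sketch of the matching rule described above: a mentee may be
# paired with any mentor who has at least three more years of experience.
# Names and fields are illustrative, not the program's implementation.

MIN_GAP_YEARS = 3

def eligible_mentors(mentee_years, mentors):
    """Return mentors with at least MIN_GAP_YEARS more experience than the mentee."""
    return [m for m in mentors if m["years"] - mentee_years >= MIN_GAP_YEARS]

mentors = [
    {"name": "senior PhD student", "years": 4},
    {"name": "early-career PI", "years": 9},
    {"name": "established PI", "years": 20},
]

# A first-year trainee can be matched with any of the three,
# while an early-career PI (9 years) can still be mentored by the established PI.
print([m["name"] for m in eligible_mentors(1, mentors)])
print([m["name"] for m in eligible_mentors(9, mentors)])
```

Note how the same person can appear on both sides of the rule: the early-career PI is an eligible mentor for the trainee, yet also a valid mentee of the established PI, which is exactly the “double role” the program encourages.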
Relative to Round 1, the geographical distribution of brain mappers joining the mentorship program in Round 2 remained largely unchanged, with two notable exceptions: a gain in members from the Middle East and a drop in new members from South America.
Distribution of participants with respect to career stage was similar in both rounds, with PhD candidates being the most prevalent.
Round 2 saw a decrease in the fraction of mentees who declared an interest in starting a lab, relative to mentees who were either undecided or planning to move to industry. This effect might be associated with the steadily shrinking proportion of faculty positions relative to PhD graduates, a strong trend in academia since the 1980s.
In line with the above observation, Round 2 of the mentorship program saw a drop in the demand for advice related to starting a lab, and a small increase in mentees’ demand for advice on transitioning into industry.
Looking at summary statistics for all Round 1 and Round 2 participants from the USA and Canada, Europe, Australia, and Asia, the outlook on mentorship was similar across the globe (Figure 6).
In both rounds, mentors declared similar areas of expertise, mostly related to building a research career. This included seizing career opportunities, finding postdoc positions, developing relationships with coworkers, and general advice on career development. Only a handful of mentors indicated expertise in coaching mentees on transitioning to industry.
In summary, participants were gender-balanced, and while geographically they hailed from around the globe, the vast majority were from North America and Europe. Over 25% of participants in the program were willing to take on a double role (i.e. both as a mentor and a mentee), indicating a willingness to give back to the OHBM community. While the program saw an increase in requests for mentoring on non-academic career options (e.g. transitioning to industry), this was not matched by an increase in mentoring capacity in these areas. We would thus like to reach out to mentors with experience in industry and entrepreneurship to join the mentoring initiative. Overall, the expectations and competencies declared by participants around the globe were similar, indicating that an online mentorship platform is necessary and useful for the OHBM community.
Note: In addition to the online mentorship program, the OHBM Student and Postdoc Special Interest Group will be holding its second “Annual Mentoring and Career Development Symposium” at the annual OHBM meeting this year. Hope to see many mentors and mentees at the event on Tuesday, June 19th!
By Nils Muhlert
Leah Somerville is an associate professor of psychology and director of the Affective Neuroscience and Development Lab at Harvard University. She was recently given the Early Career Award by the Social & Affective Neuroscience Society. Here we find out more about her academic career path, and what we can expect from her keynote at OHBM 2018 in Singapore.
Nils Muhlert (NM): First, can you tell us about your career path – how did you get into neuroimaging?
Leah Somerville (LS): I started working on brain imaging research as an undergraduate at the University of Wisconsin. I was working in a couple of different brain imaging labs, right when the first research-dedicated scanners arrived at the university. I was one of the first people to have the opportunity to run experiments on them – along with a team, of course, of other researchers in the labs I was working in.
I got that little thrill moment of seeing a person’s brain image pop up on the screen. Maybe others have had a similar experience. I still have that feeling every once in a while, it hasn’t completely gone away! I find neuroimaging so fascinating and powerful. From there I tried to orient my training towards continuing my brain imaging research, and in particular, fMRI-based research. I’ve studied emotion and anxiety-related processes. I’ve also studied motivation and cognitive control. Now in my lab we’re focused on understanding how those processes change with ongoing brain development through adolescence.
NM: What would you say is so special about adolescence in the context of human development?
LS: There’s a lot I could say here - I’ll try to keep it short! Adolescence is a time of life that, on the surface, is associated with a number of important challenges that individuals sometimes face for the very first time.
Adolescents are people who are faced with independent choices about how to act, who to affiliate with, and what kinds of goals they would like to hold for themselves. At the same time, there are increasing demands on their self-control. They’re becoming more and more self-guided in the way that they’re interacting with the world. We can sometimes think of them as novice independent people who are still developing the toolkit that can support mature independent actions.
We find that ongoing brain development facilitates a number of great achievements at this time of life. But it also places a number of constraints on the way in which adolescents might optimize their behavior in certain situations. We’re very interested in understanding the interplay in that – thinking about adolescence as a very adaptive and useful time of life but also one that differs from adults in a number of important ways.
One insight that has fascinated me comes from looking at brain development measures and asking “when does a person become fully mature?” It may seem like an easy question, or one that could be answered using a single modality. In fact, the answer you get really differs depending on whether you look at brain structure, function, or network properties. It’s especially surprising that on certain measures – including measures of white matter – the developmental changes continue to play out throughout the twenties and perhaps even through the thirties. So one thing that’s interesting, as an extension of that, is thinking about how we decide when a person is mature from a societal standpoint.
NM: In your work you also discuss socioaffective circuitry – how do changes in that circuit map on to the behaviors we see in adolescence? And what have you found out about that over the last decade?
LS: In our lab we tackle this from different angles – so I’ll let you know about one in particular that I’ll be talking about in OHBM.
We’re very interested in the intersection between motivation and cognitive control. That is, the degree to which motivational cues in the environment – potential rewards and punishments for example – can shape the way in which a person is able to optimize their cognitive control in a given context.
We’re interested in the shift across development, in which individuals across the ages can recognize situations that hold different motivational values. They might want to perform better in certain conditions than in others – either to avoid punishment or to obtain rewards. All of the detection and assignment of values seems to be very consistent in early development. But the degree to which we can take that information and use it to guide our goal-directed actions in the moment, seems to be continuing to develop well throughout adolescence.
One arm of our work is in trying to understand how the dynamic interactions in cortico-striatal circuitry (including the dorsal and ventral striatum and lateral prefrontal cortex) coordinate and give rise to motivation-guided cognition. This is something that we’ve seen play out and continue to change and refine well throughout adolescence and into early adulthood. This is one area of work that we’re excited about.
Another area we’re interested in is adolescent attunement to their social environment. This is a time of life that’s associated with dramatic changes in daily life; individuals are forging new independent relationships for the first time and there’s a lot of volatility in adolescent relationships. They are falling out of favour with one another more frequently than adults would be, giving them lots of opportunities to get feedback about how they’re doing socially. Another arm of our work is therefore to understand how adolescents learn from feedback and use positive and negative social feedback as learning cues to inform how they should feel about themselves in a given situation and how they should feel about other people.
We’ve seen in a couple of studies that when adolescents are on the receiving end of negative social feedback, they tend to take that as a very strong cue to influence how they feel about themselves. This would result, for example, in a reduction in momentary feelings of self-worth or self-esteem. Adults actually show a bias in the opposite direction. They have different strategies in place that allow them to offload or buffer themselves from negative feedback and maintain a positive self-concept, even in the face of contrary social information. We’re really interested in understanding how learning processes – again subserved by striatal-based systems – might be biased towards learning from negative or positive information in the social domain at different points of life.
NM: And how does this system seem to change from early to late teenage years, or even people's early twenties?
LS: Well, we carried out a study of individuals from age 10 to 25, and found that there is a period from early to mid-adolescence, perhaps ages 12 to 15, when negative feedback has a strong negative impact on self-views. Individuals of college age, by contrast, already seem to have a lot of strategies in place to buffer themselves from negative feedback. So this is one time period when a few years of age make a large difference in terms of how these cues are incorporated into learning about oneself and other people.
NM: Thinking about how social media might tap into this, and perhaps exacerbate the concerns that adolescents have: as social media has become a more integral part of their everyday lives, has this had negative and positive consequences?
LS: Great question and one that I don’t have a scientific answer for but I’m happy to speculate!
This is a very hot issue now – thinking about how developmental stage might manifest the influences of these kinds of media processes differently. It’s only in very recent generations that people have taken up a lot of social interactions online. This is something that has not yet been subjected to empirical study.
There is a lot of speculation that perhaps social media is detrimental to adolescent development. Adolescents themselves are quite happy to have the option to socialize over the phone and over the internet. They say it helps them maintain strong social bonds and gives them lots of information. They can stay attuned to the goings-on of all of their friends more easily.
There is also the potential for social media to have certain negative and perhaps unintended consequences. One suggested by our work is that social media is almost designed to elicit and deliver feedback to people – getting friended, getting a thumbs-up, or the absence of a like or of a response from somebody. That absence can easily be interpreted as negative feedback.
The way we see it is that there can be very positive interaction on social media but there’s also the potential for a higher frequency of negative feedback, or the absence of positive feedback being interpreted as negative feedback. We’ve shown that negative feedback has a very potent influence on adolescent self-views, so that very high frequency of receiving negative feedback online could have a more detrimental effect during adolescence than other ages.
Developmental scientists have often had concerns about the effects of new technology influencing self-views. When I was a kid this would have come up with video games – suddenly people would have a Nintendo in their house, there was a wave of concern about that. At this point we just don’t know enough to have a definitive evidence-based account about whether social media is a good or bad thing for adolescence.
NM: Turning to your other work, what would you say are the scientific achievements that you’re most proud of during your career?
LS: I’m not sure if I’d call this a scientific achievement but I’m most proud of having had the opportunity to run my own lab.
I never thought I’d be a PI. It has been one of the most challenging and rewarding things I have ever done. I feel proud and gain a lot of reward from it, particularly when I interact with my trainees. They conduct great work, are great people and are becoming great mentors in their own right! It makes science very fun to do in our group. Fostering an atmosphere that makes science fun and exciting and collaborative is something I’m very proud of, and is down to the efforts of my whole lab.
NM: And reflecting the quality of your mentoring, you were awarded the Everett Mendelsohn Excellence in Mentoring Award. When you look back at your own career, which people would you point to as having offered you good advice, and how has that affected how you interact with your own trainees?
LS: I’ve been very fortunate to have had a number of wonderful mentors throughout my training. They’ve helped me bridge gaps into the next steps of my career – giving me advice, and sometimes tough love when I needed it! This includes my graduate mentor and my postdoctoral mentor, BJ Casey. I would point out BJ in particular – she was a big part of me discovering this very strong interest in developmental neuroscience, particularly after trying out a number of different topics of study. That one fit for me in very large part because of her support and mentoring.
It’s important to mention that at first I didn’t realize that every trainee needs something different from a mentor. You need a lot of flexible thinking when you’re mentoring to understand what each person needs at different points in time. This of course evolves over the course of training. They might start by needing more hands-on help and more topically-focussed advising. But watching a person begin to strive for independence, and allowing for that independence, is something that I work hard to detect and accommodate.
When I became a PI I didn’t realize that I would still benefit from mentoring myself. I still have mentors who guide me and I don’t think anyone is ever quite finished in needing mentoring, advice and guidance. I have a number of colleagues – both peer-age going through similar career stages, as well as more senior mentors – who are still helping to guide me. I am very appreciative of that.
NM: And finally, your OHBM 2018 talk – can you give us a sneak preview? Which gems from your research career have you decided to focus on?
LS: Well I’m very excited about being invited to speak at OHBM and having the chance to go to Singapore. I’ll be talking about two main themes: the first is adolescence as a phase of the lifespan associated with ongoing and dynamic brain development, in particular the development of functional brain connectivity.
I’ll also specifically focus on understanding the interactions between motivation and cognition as a test bed for thinking about how ongoing brain development can lead to important shifts in behavior. In doing that I’ll present some specialized studies conducted in my lab at Harvard, as well as some broader projects that we’re currently working on.
Most notably, we’re one of the groups completing the Human Connectome Project in Development – a large-scale ‘big data’ style project – that will ultimately collect brain imaging data on over 1,300 5- to 21-year-olds. This is an ongoing study that we are about half-way through collecting data for. It’s partly longitudinal and partly cross-sectional, and it’s designed to help us really understand both the fundamental patterns of brain connectivity that are changing at the basic neuroscience level, as well as the implications of those connectivity changes for behaviours including motivated behavior and cognitive control.
So I’ll be discussing how we approach these problems from a broad, big-data standpoint and how this can complement the more specialized work that we’re doing.
NM: We’re definitely looking forward to that – many thanks for taking the time to speak to us and we’re looking forward to your talk in Singapore.
By Elizabeth DuPre and Kirstie Whitaker
This month we continued our Open Science Demo Call series by speaking to Ariel Rokem, Dora Hermes, and Tammy Vanderwal about three initiatives they’re involved with that promote openness in neuroimaging research.
Ariel introduced us to NiPy (short for NeuroImaging in Python), a large community of practice supporting the use of Python for neuroimaging. He explained that NiPy exists within the broader SciPy (Scientific Computing in Python) community, which unites many individuals who use Python in their scientific analyses. As these are open communities, Ariel pointed out that anyone is welcome to use the NiPy and SciPy software, as well as to participate in their development. If you’re interested in hearing more, he encourages you to check out the NiPy mailing list or the annual SciPy conference!
Dora told us about the iEEG BIDS extension proposal, a proposed extension to the BIDS standard for structuring human intracranial electroencephalography (iEEG) data. She explained that current challenges with iEEG data sharing include large variability in both electrode locations and data formats across sites. The proposed extension will create a standardized structure for storing iEEG data and metadata, allowing for novel multi-modal analyses via integration of iEEG with MRI, MEG and EEG. To contribute to the development of the iEEG BIDS extension, Dora encourages checking out the current draft or commenting on the BIDS mailing list.
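To give a flavour of what such a standardized structure might look like, here is a hypothetical sketch of a BIDS-style iEEG layout. The exact filenames, entities, and sidecar files are defined by the draft proposal itself, so treat this purely as an illustration:

```
sub-01/
└── ses-01/
    └── ieeg/
        ├── sub-01_ses-01_task-rest_ieeg.edf      # raw iEEG recording
        ├── sub-01_ses-01_task-rest_ieeg.json     # recording metadata
        ├── sub-01_ses-01_task-rest_channels.tsv  # per-channel properties
        └── sub-01_ses-01_electrodes.tsv          # electrode coordinates
```

Keeping electrode coordinates in a dedicated tabular file is what would enable the cross-modal integration mentioned above, since those coordinates can be expressed in the same space as the subject’s MRI.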
By Valeria Kebets, Csaba Orban, Thomas Yeo on behalf of the OHBM 2018 Local Organizing Committee (LOC)
As we’re swiftly approaching June, we thought we would follow-up our previous blogpost with 10 practical tips to help you make the most of OHBM 2018 in Singapore.
1. CLIMATE: Singapore has a hot and humid tropical climate. The air temperature remains in the mid-twenties Celsius (~75°F) even at night, so don’t be surprised if you break a sweat after only a 10-minute walk. For daytime walks, sunscreen is recommended, as the UV index can reach extreme levels. Buildings tend to be heavily air-conditioned, so you may also want to pack a sweater for the conference. Also note that the weather is unpredictable – heavy thunderstorms can develop in just a few minutes, so your weather app is unlikely to be helpful.
2. FOOD: If you want to grab a quick bite during the conference, there are many cafes and restaurants in the same building (listed here). There are also plenty of dining options at walking distance from the conference centre such as Gluttons Bay, Chijmes and Bussorah street. Shoppers will be glad to know that most stores in the city are open until 10pm, including on Sundays.
3. TRANSPORTATION: The best way to take advantage of Singapore’s public transportation system is by purchasing an ez-link card (the same concept as the Oyster card in London). Ez-link cards are sold at the airport and at most MRT (subway) stations for a S$5 (~3.75 USD) deposit. Ez-link works on all buses and MRT lines, and can also be used to pay in some stores (e.g. 7-Elevens) and some taxis. Pro tip: Remember to tap out with your card when alighting from buses to avoid being charged the maximum fare.
4. MAPS: Google Maps or Citymapper are great for figuring out the best combination of MRT/bus/walking to get anywhere on the island, including expected travel times, when to alight buses (stops are not announced), and the fastest way to exit MRT stations. Follow this link for directions to the conference centre.
5. TAXI/RIDESHARE: All Singapore taxis operate on metered fares. There is no Uber, but Grab provides a similar service. There are separate pick-up points for metered taxis and Grab at Changi Airport. Pro tip: If you want to keep costs low, avoid the Chrysler cabs (black taxis) in the airport taxi queue.
6. GRATUITY: Tipping is generally not expected in Singapore. Most restaurants automatically add a 10% service charge and a 7% Goods and Services Tax on the bill.
7. PAYMENT METHODS: Most places in Singapore will accept credit card payment (VISA/Mastercard, though usually not AmEx). However, do keep some cash for dining in hawker centres. ATMs are widely available in the city and airport.
8. LIQUOR TAX: Singapore imposes an excise duty on all liquor, so expect to pay S$10-14 (~$9 USD) for a small bottle of beer in restaurants or bars. Pro tip: Duty-free stores inside the airport terminal are exempt from the liquor tax.
9. MEDICATION RESTRICTIONS: Singapore has a strictly enforced zero-tolerance policy with respect to possession of illicit substances. Note that certain prescribed psychotropic medications (e.g. sleeping or anti-anxiety medications) may require you to apply for a license at least 10 days before your arrival. You can read more about this here.
10. RELATED EVENTS: Be sure to check out the satellite events before and after OHBM. The events kick off with PRNI (June 12-14), the OHBM Hackathon (June 14-16; sold out) and BrainStim (June 15-16). The Chinese Young Scholars Meeting takes place June 19. There are also three post-conference workshops organized by the local brain imaging community: Multimodal Neuroimaging for Mental Disorders (organized by yours truly; June 22), Brain Connects (June 22) and Nonstandard Brain Image Analysis (June 22-23). Attendance is free, but make sure to register early – while there are still seats!
If you haven’t already, we highly recommend checking out the brain in SINc website for more in-depth information on food, sights and attractions in Singapore, curated by the Local Organizing Committee.
We look forward to welcoming you next month in the Lion City!
By Jean Chen
As part of the OHBM International Outreach effort, we found out about the experiences of Iranian trainees. Many of us in brain imaging have met and worked with Iranian trainees, who outnumber trainees from most other Middle Eastern countries. By hearing the trainees’ stories, we get a snapshot of the circumstances behind their decisions to leave Iran, as well as their aspirations in foreign lands. In this post, we speak to current and former trainees, including:
Jean Chen (JC): How much exposure to brain-mapping research did you have as undergraduate students in Iran?
Aras Kayvanrad (AK): I did not have much exposure to brain mapping research as an undergrad student. I completed my undergrad more than 10 years ago and at the time there was little brain-mapping research in the country. However, things have changed now and there are several research groups working in the area of brain-mapping. There are more opportunities for undergrad students to learn about brain mapping research through talks, workshops, seminars, etc.
Sana Nezhad (SN): During my undergrad in Electrical Engineering, we had a course called “The Application of Electronics in Medicine”. It was in that course that I received my first academic exposure to brain-mapping research, which motivated me to do a Masters in Bioelectronic Engineering at the University of Tehran. There we had one year of coursework, which exposed me to the use of EEG, MRI and CT for brain mapping. I also got to know about quantitative methods for analysing the data acquired with these modalities. For the second year of my Masters I was required to complete a research project on MRI data acquisition, and although my project was focused on body imaging, I had classmates doing fMRI and MRI projects on the brain. Through group meetings, I learned about their research.
Mahdi Khajehim (MK): My personal exposure to brain mapping only started when I took the “introduction to biomedical engineering” course as an undergraduate student and became familiar for the first time with methods like MRI and fMRI. However, I think that as a result of multiple government-supported programs and increased interest in brain mapping, this pattern has already started to change. Nowadays, undergraduate students in Iran have much better opportunities to get familiar with this field through talks, workshops and summer schools, such as the Iranian Summer School of Cognitive Neuroscience. These are hosted by many different universities and institutions.
Arman Eshaghi (AE): During my undergraduate studies (Tehran University of Medical Sciences), I worked on at least two different projects in which we used advanced neuroimaging methods (DTI and fMRI) for patients with multiple sclerosis and neuromyelitis optica. My work was conducted with Professor Mohammadali Sahraian at the Sina Multiple Sclerosis Research Centre, which is affiliated with the Tehran University of Medical Sciences. I was also in active collaboration with UCL Institute of Neurology in London working with Prof. Olga Ciccarelli.
Mostafa Berangi (MB): During my undergraduate studies in Electrical Engineering, I took some courses in Biomedical Engineering, and they really interested me. As I became familiar with the multiple aspects of Biomedical Engineering, I was particularly interested in the field of MRI. That is the main reason for my decision to pursue brain imaging for my graduate degree.
JC: How would you describe the Iranian brain-mapping landscape? Are there major research programs or meetings that you were aware of as an Iranian student?
AK: Not as a student -- as I mentioned, at the time I did my undergrad there was not much brain mapping research going on. However, the growth of brain-mapping research has accelerated in recent years, and several research bodies providing financial and/or technical support to researchers in this area have recently been established, which can potentially further facilitate and expand brain mapping research in the country. Most notably, the Cognitive Sciences and Technologies Council (CSTC) provides funding for brain-mapping research through a variety of research grants. Moreover, the recently established National Brain Mapping Laboratory (NBML), equipped with state-of-the-art scanners, has further paved the way for brain mapping research in the country.
SN: There are several brain mapping groups specializing in advanced quantitative analysis of brain-imaging data generated through different modalities. I get the sense that in Iran there is a shortage of access to data acquisition due to limited resources; however, most active research groups overcome this problem through collaborations with universities abroad. For example, I collaborated with a cancer centre based in the UK and received half of my data from there. This lack of imaging resources drives the research towards quantification methods rather than data acquisition approaches.
MK: From my perspective, the Iranian brain-mapping field has already started to grow at a promising pace. Thanks to increased government support through funding agencies like the Cognitive Sciences and Technologies Council (CSTC) and greater availability of the infrastructure that is essential to this field, there is now a rising interest in doing brain-mapping research. Moreover, newly established institutions like the National Brain Mapping Laboratory (NBML), in conjunction with older ones like the School of Cognitive Science, are playing a crucial role in expanding the field among Iranian researchers, and I personally benefited a lot from attending educational events hosted by these institutions. All of this points to an even better future for the field in Iran.
AE: There have been active institutes working on animal neuroimaging (in addition to human) located in Tehran that are affiliated with top Iranian universities, including the Institute for Fundamental Physics and the Institute for Cognitive Science Studies. There are new centres such as the National Brain Mapping Laboratory, which did not exist when I left Iran in 2014. There are also groups working inside university hospitals including the Neuroimaging and Analysis Group. Therefore, in my opinion Iran can have a bright future in science and in particular neuroimaging in the Middle East.
MB: In Iran, the best students choose to go to Sharif University, University of Tehran, Amirkabir University of Technology, Iran University of Science and Technology, Khaje Nasir University and Shahid Beheshti Medical University (in that order). In terms of the field of brain mapping, from my perspective, the University of Tehran and Amirkabir University are the top institutions. I feel that these institutions have the largest and strongest faculties, and this quality is important for graduate students.
JC: For those of you abroad, what was your main motive for leaving Iran to pursue further training? For those in Iran, do you have plans to leave Iran for additional studies?
AK: I left Iran after my undergraduate degree. The reason was quite simple --- I left Iran to expand my horizons in a new environment doing cutting-edge research.
SN: My main motive was to have the opportunity to get more involved in MRI acquisition research, which is more feasible here in the UK. Also, I cannot rule out being adventurous and wanting to experience a different cultural environment!
MK: On one hand, some aspects of brain-mapping research remain mostly untouched in Iran, and those happen to be the areas I was most interested in; leaving Iran therefore made sense, as there was not much relevant expertise or experience available there. On the other hand, in my opinion, international collaboration – which helps accelerate development and raise the quality of brain-mapping research in Iran – is still limited. These two factors were my main motivations for going abroad for my PhD.
AE: My main intention for leaving Iran was to expand my skill base in using larger databases, and in particular my quantitative skills. Moreover, working in a place such as the UCL Institute of Neurology, which is home to many renowned neurologists and neuroscientists, has enabled me to form more ambitious research plans with access to a wide range of patient populations.
MB: I would like to study in a foreign country, but it comes down to a personal decision, so I have not yet made up my mind. Certainly, most of our students would like to study abroad, and many of my labmates have left to pursue their PhDs. Our professors do not try to retain us. They actually encourage us to explore our options.
JC: How would you describe the career prospects of a highly trained neuroimaging researcher in Iran?
AK: With more groups working on neuroimaging and the availability of research funding and imaging facilities, the prospects seem very promising. In Iran, many of the most talented students are interested in engineering in general, and medical imaging in particular, which is an invaluable asset to principal investigators in these fields. Nevertheless, in spite of the recent progress, access to funding and imaging facilities is still very limited. Moreover, there is limited collaborative research between individual groups and between institutions. I hope the establishment of new national research bodies, such as the NBML, will lead to collaborative research initiatives between research groups and institutions across the country.
SN: I would think a researcher with a good international network, particularly with countries with a strong neuroimaging landscape, can expect a promising future.
MK: I think for such an individual the available positions would be in academia or government-funded research institutions; in the private sector, there is only a limited range of options. I imagine there would be several suitable faculty or research positions available in the capital (Tehran), but not many elsewhere in the country. For private research-based companies to grow and create more jobs in this field, there is still a lot that needs to be done.
AE: Compared to the developed world, there are very limited funding opportunities in a developing country like Iran. As a result, many students may prefer to leave the country to expand their skill base. However, despite these limitations, there has been an upward trajectory, as is evident in the construction of new neuroimaging centres and the availability of graduate (PhD) level university programmes dedicated to neuroimaging.
MB: Medical Imaging is still a very new field in Iran, and frankly there are not that many jobs in this field, especially for PhD graduates.
Postamble (JC): As in any research community, trainees are the future of Iranian brain-mapping research. The trainees you met here are some of the brightest among Iranian students; they are voraciously expanding their horizons and have ambitious plans for the future. Irrespective of their current locations, these trainees retain a deep attachment to their home country and are clearly excited by the recent developments in the Iranian research arena. I have come to learn that over 40% of Electrical Engineering students and over 50% of Medical Physics students at the University of Tehran are women, numbers that exceed those of most western programs. We look forward to the transition of these trainees into independent scientists.
Although there is great need for brain-mapping expertise, there are currently few positions in Iranian universities for trainees, even for those with highly prestigious foreign training. In this regard, I have come to learn that the government has established paid postdoctoral fellowships (up to 2 years) for those returning to Iran in search of faculty positions. In parallel, there are government programs that encourage highly qualified individuals to return to Iran to establish tech companies, through both cash rewards (up to $40,000 USD) and low-interest (close to 0%) loans. These mechanisms will likely create jobs for future trainees in brain imaging.
We wish these trainees the best, and hope the OHBM community will be able to enhance outreach to those working and living in Iran as well.
BY THOMAS YEO, NICOLE KUEK
Professor Simon B. Eickhoff is the Director of the Institute for Systems Neuroscience at Heinrich Heine University Düsseldorf and the Director of the Institute of Neuroscience and Medicine (INM-7) at the Research Center Jülich. Simon is a leading cartographer of the human brain, and his team utilizes a wide range of methods to map the organizational principles of the human brain. We had the opportunity to chat with Simon before his keynote lecture at the upcoming 2018 OHBM Annual Meeting in Singapore.
Thomas Yeo (TY): Today we have Prof Simon Eickhoff here, a keynote speaker at OHBM 2018. Simon, thanks for doing this. How would you describe your research to a random person on the street?
Simon Eickhoff (SE): I would say that I’m interested in how the brain is organized, how it varies between people, and how this variability relates to things like cognitive capacities. Then ultimately, I want to contribute to developing new tools for diagnosing and treating neurological and psychiatric disorders.
TY: That’s a rich set of activities – how did you end up on this research path?
SE: More or less by accident. I studied medicine in Aachen, and late at night at a party in my hometown, I met a friend from school who had started studying in Düsseldorf. He told me about the brain research there and I thought it sounded quite interesting. So I called Karl Zilles’ secretary, met him, and was really fascinated. I started my doctoral studies there and never really managed to leave.
TY: Your lab is involved in several projects – but what is the most exciting thing you’re working on now?
SE: There are two things we are doing right now that I’m really excited about. One is brain-phenotype relationships. Can we actually infer complex phenotypes from brain imaging data? Can we predict personality traits or cognitive performance? And the key aspect here is – given that there’s quite a lot of work on this already – can we predict them in an interpretable fashion? What we need is good predictive performance, while also learning something about brain organization. This is one of the aspects that I really want to push, as it’s not highlighted enough in current discussions. It’s one thing to be on the data-driven side, and to get good compressions, good predictions. This is, without any question, awesome. But in the end, we also want to learn something about the brain – how the brain is organized. There’s a lot of work going on in our lab that really tries to combine the more data-driven work from a computer science perspective with the more traditional neuroanatomical view.
The second part is more related to brain mapping. You can describe the brain through a lot of different features. So for each point in the brain you can ask: what are the structural properties, what is the trajectory as we age, how is it disturbed or changed in people with Parkinson’s or schizophrenia? But also, what are the functional connectivity profiles at rest and during task, what are the structural connectivity profiles, and so on? You can use each of these features to map the brain and to delineate brain areas. But how does all of that work together? That’s the critical question, and cracking this kind of topographical code is something we can hopefully get closer to. And it’s pretty exciting!
TY: Moving forward – what do you hope your research will accomplish in the next 5-10 years?
SE: Well if we manage to do the things we just mentioned, I’ll be quite happy! The predictive modeling, but also brain mapping, understanding organization and topographical complexity of the brain. That’s going to be fantastic!
Looking ahead, I’m not sure if it will be done in five years, but it will be really exciting if we can go outside of the academic field – beyond doing research just to get the next papers – and mature enough to actually bring our research into clinical practice. Five years – I’m skeptical we’ll get there, but over 10 years, I’d be more confident. If my team is able to contribute to that, it would be fantastic. And we’re working towards that goal.
TY: What do you think is the biggest obstacle right now?
SE: Towards clinical application? In the end there’s so much flexibility in the analysis of imaging data. We always hope that we live in a perfect world, where you acquire data, and then you do one single analysis, which is a priori planned, and that result gets published. But I’m not entirely sure that most labs do that.
The thing is, if you really want to measure yourself by, say, the standards of clinical trials, that would need to happen. You’d have to have your analysis plan ready, deposit it, acquire the data, carry out one single analysis, and report the outcome. Then if you want to have something that’s clinically useful, it needs to have really high accuracy and predictive value. What I really value is the current push towards more methodological rigour. I’m really happy to see that it’s becoming slightly easier to publish null results, and that new methods are not judged solely by “better” performance.
TY: So you’re saying that p-hacking is a problem, but I guess it’s a bit unclear to me if that’s the main problem. Even with the high quality Human Connectome Project data, the predictive accuracy is not that amazing.
SE: Right, we still have a lot more work to do. And since you’ve mentioned the HCP dataset, this is never what you’d get in a clinical setting, where the data is acquired in a short time by a technician who is less invested than, say, a PhD student. And the patient may not be as motivated as a research subject to lie still, comply with instructions, etc.
Most likely, what will happen is the field will be split into different domains. One that is very high resolution, very intense sampling, and a lot of valuable data for each individual. That will be great for understanding brain organization.
But there will also be the other side, that will deliberately say “I want to use low quality, clinical data.” These adjectives may be the same thing, though the latter just sounds better [laughs]. We are using standard clinical quality data and we know the data is bad, but we also know that we need to find something that works on such data if we want to make an impact beyond research as an academic discipline. So we have to be up for the challenge! One thing we are doing a lot now is to deliberately make our lives difficult, by combining data across many different sites, different scanners, different populations, different continents. That way, the dataset is diverse - often even bad - but we’re happy with some drop in performance, because we know this is genuine performance. We have to improve it, but at least we’re not tricking ourselves into believing that we’re doing extremely good predictions that don’t hold up in real life.
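The deliberately-hard evaluation Simon describes is, in effect, leave-one-site-out cross-validation: hold out each site in turn and test on it, so reported accuracy reflects generalization to an unseen scanner and population rather than within-site fit. A minimal sketch in Python (the helper and names are illustrative, not from any lab’s actual pipeline):

```python
from collections import defaultdict

def leave_one_site_out(samples):
    """Yield (held_out_site, train, test) splits.

    `samples` is an iterable of (site_id, record) pairs. Each split holds
    out every record from one site, so a model fitted on `train` is scored
    only on data from a site it never saw during training.
    """
    by_site = defaultdict(list)
    for site, record in samples:
        by_site[site].append(record)
    for held_out in by_site:
        train = [r for s, recs in by_site.items() if s != held_out for r in recs]
        yield held_out, train, list(by_site[held_out])
```

Averaging test scores over these splits gives the “genuine performance” Simon talks about: typically lower than within-site accuracy, but far less likely to be an artifact of site-specific scanner effects.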
TY: We are at Whistler now and just had an exciting workshop held by Todd Constable. I’ve noticed that the talks from more senior professors seem to cover a lot of papers, whereas someone more junior (like me) will talk about two papers. Is this what I should be aiming for?
SE: I think it’s a matter of personal style. But maybe you are right, and personal style changes with age. You really have two choices when you’re giving a talk. You either take a high-altitude view and present an overarching picture, or you dive deeper into one thing and go into a lot more detail.
One of the explanations for your observation is that when you are younger, you just have fewer papers you really want to talk about. When you get to a certain stage, you have a lot of papers that you can talk about, so you need to make a decision to go deep or go broad. It really has to fit your style and what you’re comfortable with.
For me, usually I want to give an overview, as we’re making great efforts to put the puzzle together. We have different studies that may not be particularly related to each other, but you can see the crosstalk, and you can see the connections that I’m so excited about. That’s why I talk about so many different things.
TY: I’m on twitter and see that you (@INM7_ISN) have strong perspectives on open science and the replication crisis. Do you want to comment on that?
SE: Yes – I’m a big chimera when it comes to that. On the one hand, I’m a big proponent of open science. Most of the work we’re doing depends on shared datasets. In fact, long before the term ‘open science’ became popular, I was sharing my software. Back in 2004, I was developing the SPM Anatomy Toolbox. That was just open MATLAB code. I was still an undergraduate at the time.
So I’m a big fan of open science, but what worries me at times is the tone of the debate. Sometimes there’s a patronizing aspect to it, a moral argument of “you have to” and “how can you not”. I think that in order for open science to grow, we need to take concerns seriously. Perhaps by virtue of being around both the open-science and computer-science environments and the very traditional German medical environment, I can see that there are two sides to the argument.
Basically, in the German medical environment, I don’t think any person that would review your grants or would hire you cares too much about whether your dataset is open, or whether your software has been released. There’s a more traditional focus on publications with a lot of focus on impact factor, and grant money. This is a completely different world.
An example – if you’re a software developer then your product, your outcome or claim to fame, is the thing (a software tool, a repository, any other resource) that you distribute freely on the internet. If you are someone who spent years collecting data on a rather rare disease, recruited patients, talked a lot to them, followed them up clinically, evaluated them repeatedly and put them in the scanner, then this data is a resource, an extremely valuable resource. And it comes with the assumption that this data will allow you to get enough out of it to carry you to the next step of your career. If you are then being told “well, you are unethical and doing something terribly wrong by not immediately sharing the data freely after the first paper”, this is not putting open science in a good light.
I think open science will succeed and it will be a tremendous accelerator of knowledge. But in order for that to really happen we need to take people’s concerns seriously. There will definitely be a development at different speeds, with things moving faster in some fields than others. It’s not the case that those fields that move faster can look down upon those that are not as fast. I don’t think anyone is opposed to open science from a personal conviction, but it’s more about needs and rewards and we need to take these views seriously.
Long-term - there has to be a better incentive structure. At the moment, we are conservative about it in the German medical system – one of our main hiring criteria is cumulative impact factor, the sum of the journal impact factors of all your papers. H-index, citations and so on do come into play, but the fact that cumulative impact factor is a major evaluation criterion shows you that different fields still evaluate contributions differently. At some point, open science practices will need to be rewarded not just morally, but also practically by selection committees. That will take quite a bit of time, though.
TY: So how do we change the minds of selection committees?
SE: Well, we just mentioned that we are getting older [laughs]. In some ways there is another generation before us, those who are really not used to it. This will perhaps change over time. Then at some point we have to find criteria for quantifying open science. You can show things on the internet to a review committee – e.g. the 500 or so MATLAB scripts that you are sharing – but will that give you a job? If committee members are from another field, they might not get the value, so there needs to be some way of quantifying these contributions objectively. Then a committee made up of psychiatrists, dentists, or structural biologists (and committees are often that diverse) can refer to some numbers that give an assessment of your open science practices. That would be a big step forward.
TY: Have you heard of the idea that once you come up with a set of numbers, they will be gamed?
SE: Sure, but this will always happen. We would like a perfect world where all decisions about hiring or promotion are made by people who are experts in your field, who spend several hours scrutinizing your CV or your 10 most important papers for context, and who read the related literature for comparison. But this just won’t happen. Maybe I’m part cynic, part realist, but most people are overloaded with committee duties, so you need some easy summary of a person. Yes, this will be gamed – there’s no way around that – and we hope that on each committee there is someone who can point out the gaming aspect and has spent time getting to know your work more deeply. But usually you have to convey the importance of your work to people who are not familiar with your publications or your topics.
TY: Thank you so much for this interview!
We look forward to attending Simon’s exciting keynote on Monday June 18, 2018.
GUEST POST BY CHRIS CHAMBERS
The biomedical sciences are facing a rising tide of concerns about transparency and reproducibility. Among the chief concerns are inadequate sample sizes, lack of sufficient detail in published method sections to enable replication, lack of direct replication itself (and notable failures when attempted), selective reporting of statistical analyses in order to generate desirable outcomes, suppression of negative results, lack of sharing of materials and data, and the presentation of exploratory outcomes as though they were hypothesis-driven. Collectively these problems threaten the reliability of biomedical science, theory generation, and the ability for basic science to be translated into clinical applications and other settings.
Human neuroimaging in many ways represents a perfect storm of these weaknesses, exacerbated by the fact that two of the main techniques, MRI and MEG, are extremely expensive compared with adjacent fields. Researchers using these methods face tremendous pressure to produce clear, positive, publishable results, usually in small samples.
Until recently such issues were rarely discussed openly, perhaps for fear that it would bring a relatively embryonic discipline into disrepute and collapse funding opportunities. But they have been simmering below the surface for a long time. Years before irreproducibility was headline news, at one imaging centre where I worked we noticed that we were running out of data storage faster than we were acquiring new data. After some detective work we learned why. Researchers were repeatedly analysing and reanalysing the same datasets, and with every reanalysis they were inadvertently duplicating huge quantities of raw data. The incident was illuminating about normative research practices.
When I raise this scenario with colleagues, their typical response is “Well, duplication of raw data is a silly mistake but most fMRI research is exploratory and exploration is vital for science”. This is true, of course. There is a huge amount to gain from performing reanalysis of existing, complex datasets. But the key, then, is whether such exploratory research is documented transparently as exploration. In an exploratory field, and especially one that often relies on inferential statistics, shouldn’t publications faithfully report all analyses that were attempted before settling on the ones that drove the conclusions? And does this happen in fMRI? Of course it doesn’t. Pick up a copy of any neuroimaging or cognitive neuroscience journal and you’ll find article after article purporting to test hypotheses using complex analyses, each of which is presented as though it was planned in advance. Given the pressures on researchers to produce clean results and frame them as the outcomes of hypothesis testing, it comes as no surprise that virtually no two published fMRI studies report the same analysis pipeline.
There are many solutions to this quagmire, including greater sharing of data, materials and code, and I also believe one major piece of the puzzle is preregistration of hypotheses and analysis plans. Many in the neuroimaging community are skeptical of preregistration and what it might say about our scientific approach, which sits uncomfortably between confirmatory and exploratory modes and relies on massive investment to remain afloat. When your typical experiment involves hundreds of analytic decisions, each of which can be considered “legal” yet produce slightly different outcomes, there is a natural tendency to fear that pre-specification of any particular route through the garden of forking paths will lead to unpublishable, possibly confusing findings. We thus feel pressured to apply the “human element” to bring order to chaos. Researchers will routinely spend months poring over their data and analyses using sophisticated statistical methods but almost none appreciate the risks of their own biases in interpreting one outcome among hundreds or thousands of possibilities.
This is why I have pushed hard for neuroimaging journals to offer Registered Reports (RRs). The RR format eliminates the fear of preregistration producing unpublishable results because, for RRs, the editorial decision is made before the results are known. Detailed study protocols are reviewed before researchers commence the research, and following detailed review of the theory and methods, the highest quality submissions are accepted for later publication regardless of how the results turn out. Researchers can also report additional exploratory (unregistered) analyses, clearly flagged as exploratory, and are encouraged to include preliminary experiments to validate a proposed analysis pipeline.
This week sees the launch of Registered Reports as a new article option at NeuroImage as part of a two-year pilot initiative, co-edited by me, Birte Forstmann (University of Amsterdam), Rob Leech (Kings College London), Jeanette Mumford (University of Wisconsin-Madison), Kevin Murphy (Cardiff University) and Pia Rotshtein (University of Birmingham). In addition to the usual features of Registered Reports, we are also inviting proposals for secondary analyses of existing but unobserved datasets, innovative approaches using Bayesian adaptive optimisation to combine the strengths of exploratory and confirmatory science, and review/perspectives articles on the potential costs and benefits of preregistration in neuroimaging research. Submissions are invited in any area of human neuroimaging and readers can find detailed author guidelines here.
Preregistration in neuroimaging is a high-stakes intervention. The combination of high analytic flexibility, high risk of bias, and the high cost of data generation means that it has the potential to yield perhaps the greatest scientific benefits of any field to which it has been applied so far. But because of this methodological complexity, preregistration also brings some of the greatest challenges.
One such challenge is power analysis. Many of the 103 journals that currently offer RRs require high prospective power to detect the smallest effect of theoretical interest (e.g. 0.9 at Cortex, 0.95 at Nature Human Behaviour), but we know that MRI in particular is underpowered to detect theoretically plausible effect sizes, and we also know that many researchers do not have the resources to fund large studies. At one level this problem can be solved by consortia projects. Initiatives such as the Psychological Science Accelerator, Study Swap and the ENIGMA neuroimaging consortium are blazing a trail to facilitate more definitive team-oriented science. However, the main benefit of RRs lies not in the support of big science but in the elimination of publication bias and selective reporting. Therefore, to make the format as accessible as possible to the neuroimaging community, the RR format at NeuroImage will not set a minimum required statistical power or sample size. Instead we will simply require authors to justify the sample size they are proposing.
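To give a feel for why these power thresholds are demanding, here is a rough sample-size calculation for a two-sided, two-sample comparison, using the textbook normal approximation (the function name and defaults are mine, not from any journal’s guidelines; an exact t-based calculation would give slightly larger numbers):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, power=0.9, alpha=0.05):
    """Approximate participants needed per group to detect a standardized
    effect size d: the classic n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    normal approximation for a two-sided, two-sample t-test."""
    z = NormalDist().inv_cdf  # standard-normal quantile function
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)
```

For a medium effect (d = 0.5) at 0.9 power this lands in the mid-80s per group, and a 0.95 threshold pushes it past 100 per group; well beyond typical single-site fMRI samples, which is exactly why consortia and flexible power requirements matter here.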
A bigger question is whether preregistration in neuroimaging is even feasible. To what extent will researchers feel able to prespecify their analysis pipelines in advance? For a RR, if an exact pipeline cannot be prespecified then researchers will be given the opportunity to prespecify data-dependent contingencies (e.g. if the data look like this, then we will apply this filter, etc.). They may also propose a blinded analysis strategy or an adaptive design in which some decisions will be post hoc, but actively protected from bias. Can such approaches succeed? I believe they can but for me the most fascinating outcome of this particular RR launch will be to discover how a community of talented analysts tackles this challenge.
Chris Chambers is a professor of cognitive neuroscience at the Cardiff University Brain Research Imaging Centre and guest section editor for Registered Reports at NeuroImage
Since the first meeting of the Organization for Human Brain Mapping (OHBM) over twenty years ago in Paris, the Organization has evolved from a primarily European and North American organization, to an international organization that draws members from over 50 countries worldwide (Figure 1).
However, the European and North American dominance of leadership and educational roles within the organization has been slower to undergo a similar evolution. This is perhaps most noticeable in the geographic distribution of Council, which, apart from very sparse representation from Australia and Cuba, has consisted primarily of Europeans and North Americans (Figure 2).
The pattern found in Council is also seen in the chosen educational courses (Figure 3), while the symposia show slightly greater diversity (Figure 4).
The most striking omission from leadership and educational roles concerns our colleagues in Asia. China, Korea, Japan, Singapore, and Taiwan together make up over 15% of meeting attendees and poster presentations (Figure 5), and a similar share (17%) of the OHBM membership; yet to date they have had no representation on Council*. The goal of the Diversity and Gender Committee (DGC) is to work with Council and the Nominations Committee to foster equity in representation both within the OHBM membership and meeting attendance.
*Note: there has been one Council member originally from China; however, they are currently US-based and so were designated as representing the US. In addition, a former Council member also held a joint position in China, but was designated as representing Latin America.
How are we doing with Gender?
With three of the five most recent Council members being female, the gender distribution on Council is now 12 males and 3 females. While this tripled the number of women on Council relative to one year earlier, it still falls below the gender distribution within OHBM as a whole.
The gender distribution of attendees presenting posters is 50% male, 40% female, and 10% who provided no answer. Whether these 10% represent gender fluidity or allies for gender fluidity within OHBM is not known.
While the gender distribution for poster presentation is more balanced, there is a higher proportion of males for the educational courses and symposia.
Approaches to Foster Equity
There has been much productive discussion within the Diversity and Gender Committee regarding how to foster equitable representation within OHBM. There were a number of options that we discussed, including having ‘electoral votes’ for Council members to, in a sense, ‘force’ the leadership roles to match the membership demographics. However, we are a democracy, and the primary approach that we have adopted is to provide education (in the form of data) for our members and allow our members to vote. We therefore encourage all members to consider the above data and consider potential biases when voting for your OHBM leadership.
A member of the DGC also sits on the Nominations Committee, with the goal to keep diversity in mind during the decisions surrounding the nominations. Importantly, the Council, including the chairs and members of the Nominations Committee, are motivated to see equity in representation within leadership roles in OHBM. They have attended the DGC Meeting in Vancouver and echoed their support for the Committee’s work. This support is crucial!
Microaggressions and Bias
The DGC has been charged with addressing inequities in gender and geography; however, we have also heard whispers of both macro- and micro-aggressions within the context of the OHBM meetings. OHBM is all about science, and about integrity in both science and behavior. Attendees should be able to attend the meetings free from any form of bias related to gender, ethnicity, sexual orientation, gender identity, or disability. If such events occur, whether overt or covert, they should be reported to the DGC, who will then work within the OHBM leadership to assess the situation and, if indicated, intervene. The DGC is currently working on the specifics of best practices for intervening in cases where it is warranted.
For some time now, intolerance at the political level has been propagated throughout the world. However, we as a scientific community subscribe to inclusivity from all cultures and nationalities, and value diversity. In this light, we would like to highlight some of the challenges faced by some of our international colleagues, some of their biggest achievements despite these challenges, as well as provide a platform to voice their opinions and concerns on scientific inclusion.
There are parts of the world that are far from our minds when considering brain-mapping research, and Iran is certainly one of them. The last few decades have seen a massive exodus of highly trained Iranians. As a result, this secluded country has produced a great number of researchers who now work and live abroad. In fact, many of us working in neuroimaging interact frequently with Iranian researchers and trainees, and these interactions have provided a glimpse into the state of science and education in Iran. I have come to understand that some of the top research-intensive universities in Iran in the field of brain mapping include Shahid Beheshti University, the University of Tehran, and the Institute for Research in Fundamental Sciences. When it comes to neuroimaging research, the University of Tehran, Shahid Beheshti University and AmirKabir University figure prominently.
Researchers who work in Iran, however, see not only the challenges but also tremendous potential in Iranian research. On the heels of the Persian New Year, we caught up with two Iranian imaging scientists who wish to share their distinct views and experiences with the OHBM international community.
Part 1: Dr. Mojtaba Zarei
Jean Chen (JC): Where did you receive your training, and what inspired you to study brain imaging?
Mojtaba Zarei (MZ): I was inspired to study brain mapping by my 3rd-year high-school teacher, and then by the work of Frank Duffy during the early years of medical school. I completed my MD at Shiraz University of Medical Sciences in 1990, focusing on brain electrical activity mapping. Afterwards, I moved to King’s College London for my PhD in cortical electrophysiology, mapping the sensorimotor cortex of the rat after embryonic neural transplantation. In 1996, I resumed my practice in Clinical Medicine and Neurology, first in London, then in Cambridge, Oxford and Birmingham (UK). In 1999, I restarted my research in cognitive neurology under Prof. John Hodges, and later in Chicago with Prof. Marsel Mesulam. In 2002, I became a postdoc at the Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB) under Prof. Paul Matthews, and went on to become a Senior Clinical Fellow at FMRIB in 2006. As part of this, I established the Imaging in Neurodegeneration Group in Oxford, which was later continued by colleagues. Following that, I moved to the University of Nottingham in 2012.
JC: Given your foreign training experiences, what inspired you to move back to Iran?
MZ: Iranians commonly maintain strong family ties even after moving abroad. I moved back to Iran at a time when the government was prepared to invest heavily in neuroimaging research. In 2013, I was invited by the Iranian Ministry of Health to return and establish the National Brain Mapping Centre. The negotiation included an equipment grant of $10,000,000 USD for the centre from the Office of the Vice-President for Science and Technology. I was appointed Full Professor at Shahid Beheshti University, Senior Adviser to the Ministry of Health, and Director of the National Brain Mapping Centre, based at Shahid Beheshti University of Medical Science. At the Ministry of Health, I designed and implemented a national Clinician-Scientist Program for the first time in Iran. I was also instrumental in founding the National Institute for Medical Research Development (NIMAD), which was modeled on the Medical Research Council in the UK. This organization is now the main independent governmental grant-awarding body, with seven scientific committees.
JC: How would you describe the brain-mapping landscape in Iran? In terms of major infrastructure, labs, programs, universities involved in brain-mapping research?
MZ: The major labs are mostly located in the capital, Tehran. The major players in neuroimaging research include the University of Tehran, Shahid Beheshti University and AmirKabir University of Technology. There is a 3T GE MRI scanner in the Imam Khomeini Hospital that is shared by researchers and clinicians. There are also two research-dedicated 3 Tesla Siemens MR scanners, one at the Institute of Research in Fundamental Sciences, and the other at the National Brain Mapping Lab. There are also 1.5 Tesla Siemens Avanto systems in Iran that can be used for research, but the most active one is at Kermanshah University of Medical Sciences.
JC: Are there formal national or regional-wide meetings or organizations devoted to brain mapping?
MZ: Indeed there are. Since 2014 I have been responsible for organizing the annual Iranian Human Brain Mapping Congress, involving an international audience with eminent scientists as speakers. In addition, in 2005, I invited my former colleagues from the UK, including Heidi Johansen-Berg, Matthew Rushworth and Christian Beckmann, to teach at the first Brain Mapping Workshop in Iran. There is also the Iranian Society for Cognitive Science and Technology, of which I am the current president. Furthermore, at the moment, our institution runs the only regular and long-term hands-on brain mapping teaching program in the country.
JC: What are the biggest challenges facing Iranian brain-mapping researchers that you would like the OHBM to be aware of?
MZ: The obvious challenge is that due to travel restrictions, Iranian researchers are not always able to attend OHBM meetings. Perhaps with developments in web platforms, this difficulty could be somewhat circumvented. Within the country however, given the limited resources, funding is not necessarily distributed in the most productive way, and there has yet to be an effective plan to utilize the infrastructure that is in place. On top of that, competition for research funding is politicized, and I fear that the requirement for political connections may be hindering research and the development of a younger generation of researchers. Any international mechanism (financial or otherwise) to directly support young and enthusiastic scientists would be welcome.
JC: Does the Iranian education system foster research and encourage young people to enter research? For example, are there scholarships available to help students enter research?
MZ: Yes, there is a lot of encouragement, but little of it translates into financial support. Most MSc or PhD students do not get paid during their studies, which makes life difficult for them during these years. Postdoc positions (12-18 months) have increased in the last few years, particularly for those who obtained their PhDs abroad. There are a lot of workshops, but these are often aimed at raising income.
JC: Are there government funding bodies to fund research? If so, how difficult is it to obtain funding, albeit limited?
MZ: There are a number of grant awarding bodies that provide funding for brain mapping research, including the National Institute for Medical Research Development (NIMAD), National Science Foundation, and the Cognitive Science and Technology Council.
JC: How did you build up your lab in Iran?
MZ: When I returned to Iran, I got official permission from the Ministry of Health to establish a Brain Mapping Centre at Tehran University of Medical Sciences. I then received additional permission from the Ministry of Health to establish the National Brain Mapping Centre at Shahid Beheshti University of Medical Sciences. However, after 2 years, with government changes, our funding was stopped. I obtained permission from the Ministry of Science and Technology to establish the Institute of Medical Science and Technology less than 2 years ago. Our researchers and labs are located in this Institute. We established international collaborations with the University of Southern Denmark, the University of Pennsylvania, the University of Nantes, and the University of Southern California. The latter is home to the ENIGMA Sleep project, and we are now leading the ENIGMA Sleep Group. More collaborations are being developed, and funding for these projects is often obtained from international bodies.
JC: What are the career prospects for your graduate students and perhaps for other foreign-trained Iranian brain-mapping researchers hoping to return to Iran?
MZ: Not much in Iran at the moment; many will leave the country for PhD positions and postdoc training. Some get recruited for teaching and research in Iranian universities. I have written a curriculum for training PhD students specifically for brain mapping, but it has to be approved by the Ministry of Education before I can actually start the program. However, there are numerous upcoming opportunities for scientists who have been trained in the best Western programs.
Part 2: Dr. Abbas Nasiraei Moghaddam
On a later occasion, I had the pleasure to speak with Dr. Abbas Nasiraei Moghaddam. Dr. Moghaddam is Associate Professor in Biomedical Engineering at Amirkabir University of Technology in Tehran, one of the top universities in Iran and a frontrunner in neuroimaging research. Dr. Moghaddam is one of the premier MRI physicists in Iran, and for the past 8 years, has been director of the Advanced Medical Imaging Lab at Amirkabir University. For most of that time, he has also been affiliated with the School of Cognitive Sciences at the Institute for Research in Fundamental Sciences (IPM).
Jean Chen (JC): I understand that you are the founder of the Iranian Chapter of the ISMRM (International Society for Magnetic Resonance in Medicine). Where did you receive your training, and what inspired you to study brain imaging?
Abbas Moghaddam (AM): I received my BSc in Electrical Engineering in 1995 from the University of Tehran, and completed my MSc at the same university, under the guidance of Dr. Hamid Soltanian-Zadeh. Dr. Soltanian-Zadeh was the first person to teach MRI Physics in Iran (21 years ago), and he initiated me into the field of brain imaging. Afterwards, I went on to work at Washington University in St. Louis for two years (in cardiac imaging) before starting my PhD at the California Institute of Technology (Caltech). This was followed by a few years of postdoctoral work at the University of California, Los Angeles (UCLA).
JC: Given your foreign training experiences, what inspired you to move back to Iran?
AM: Iran is my home, where my parents, siblings and roots are. Prior to returning, I was in the US for a total of seven years, but for fear of travel restrictions, I did not visit Iran even once. It made me realize that I did not want to be away from my home for so long again. However, I retained a part-time appointment at the University of Southern California to allow me to facilitate my collaborations with my American colleagues.
JC: How would you describe the brain-mapping landscape in Iran? In terms of major infrastructure, labs, programs, universities involved in brain-mapping research?
AM: In Iran, most of the MRI systems are for clinical use. There is only one research-dedicated scanner (Siemens Prisma 3 Tesla), which is at the National Brain Mapping Lab (NBML). It is sited at the University of Tehran, which is where I first got into medical imaging. The NBML is not affiliated with any institution, but provides access to all researchers in Iran. The IPM system (Siemens Trio 3 Tesla) was purchased for the IPM, but due to regulations from the Ministry of Health, it was initially sited at the Imam Khomeini Hospital in Tehran. After 4 years, it was recently moved to the IPM, and is now essentially dedicated to research. As a result, we now have a unique opportunity to do MRI research at the IPM. The School of Cognitive Sciences at the IPM was directed by Dr. Hossein Esteky for over 15 years. Dr. Esteky is a world-renowned vision scientist who first drew the world’s attention to neuroscience research in Iran with his publication in Nature.
Currently, the research landscape in Iran is rapidly changing, allowing us to develop new areas of research. Amirkabir University is Iran’s leader in MRI Physics research, and its School of Biomedical Engineering is one of the oldest in the world (25 years old). Here at the IPM, we have about 40 students doing research in cognitive science. When I was at UCLA, I did sequence programming, but I did not have access to that capability when I first returned to Iran. Now we are in the process of negotiating a research agreement with Siemens that would allow us to do sequence development here as well. This is an exciting time.
JC: What are the biggest strengths and challenges facing Iranian brain-mapping researchers that you would like the OHBM to be aware of?
AM: We have excellent human resources. The students are well trained and eager for knowledge. Often, my students will have scored near the top in the Iranian University Entrance Exams. However, for many years, neuroimaging research in Iran was heavily focused on image processing, perhaps due to our lack of research-dedicated imaging infrastructure. We have labs that publish heavily on image processing algorithms. But without co-developing neuroscience and imaging physics, such a research program would lose its competitive edge. This is perhaps our biggest challenge. For the past 18 months, the newly established NBML has been providing access to imaging facilities, including MRI, EEG, TMS and fNIRS, but researchers in Iran are still trying to adapt to a culture of doing their own data acquisition.
Of course, Iranian researchers suffer from travel restrictions. For example, we are glad that this year’s meeting of the ISMRM is in Europe (Paris). Had it been in the US, we would not have been able to attend. I am thankful that my international collaborations have allowed me to get around such challenges. Science should have no boundaries.
JC: I understand that research funding for brain imaging is limited in Iran. In this climate, how difficult is it for you to obtain funding?
AM: The funding levels are certainly nowhere near the levels in the developed world. However, nearly everyone I know has funding, and no one has had stress due to lack of funding. This is in strong contrast with my colleagues in the US. One thing that is not well understood by the west is that in Iran, research is not nearly as costly. Students do not typically receive stipends, and scanning is fully subsidized, therefore we only need funding for traveling, publishing and so on. This makes it possible to conduct relatively big studies with little funding. Having said that, there are multiple types of grants that we need to apply for. For instance, traveling is covered by a different type of grant from regular research expenditure. The system is actually much more relaxed than in the west.
JC: How did you build up your lab in Iran?
AM: Biomedical Engineering has attracted a lot of interest from students in recent years, and I have had many applicants. When I interview students, I emphasize that I do research in Imaging and not in Image Processing. They are still getting used to the concept, but are drawn by the success of my previous students. In addition, I set high standards for my students and do not hesitate to reject students who do not meet the requirements. In my institution, we have also set up joint-degree programs with foreign institutions in the UK and Australia. I would really like to expand this field of research in Iran, but that too will take time.
JC: What types of research questions are you interested in?
AM: I am interested in developing both functional and quantitative MRI sequences to improve brain imaging. In terms of fMRI, we are interested in improving the neural specificity of the imaging technique as well as developing brain-connectivity processing methods. In quantitative MRI, we are developing new imaging techniques for T1 and T2 mapping.
I first learned MR Physics at the University of Tehran, when I worked with Dr. Hamid Soltanian-Zadeh; this continues to be a big focus for my research. In the US, my research was in cardiac imaging, but when I moved back to Iran and started my affiliation with the IPM (at the recommendation of Dr. Soltanian-Zadeh), I started to do brain-imaging research. One of my recently graduated PhD students worked on developing a new MRI sequence. As we do not yet have a research agreement with Siemens, he did this work in collaboration with the group of Dr. David Norris in the Netherlands, and spent 15 months in the Norris lab. This resulted in a patent and 2 articles, and it was the first thesis on MR Physics in Iran. I have another student working on structural and functional brain connectivity. She worked with Patric Hagmann in Switzerland. This is mainly on image processing and neuroscience.
JC: Finally, what are the career prospects for your graduate students and perhaps for other foreign-trained Iranian brain-mapping researchers hoping to return to Iran?
AM: As I mentioned, we are hungry for MRI expertise, but the job situation in Iran is very uncertain. Brain Imaging is still a young field, and we certainly need more researchers to help us build up the programs. Meanwhile, I do encourage my students to see other places and gain other experiences. Many of my students have gone on to study in labs abroad, including Germany, the Netherlands and Canada.
Postamble (JC): As Dr. Moghaddam said, science should have no boundaries. What may seem to be challenges are also potential opportunities. Iranian scientists are as passionate about their research as we are in the rest of the world. They are defying great odds to build up a research program and to provide their young generation with new opportunities. Also, although female scientists currently account for less than 10% of brain-mapping researchers there, the increasing dominance of female trainees at the graduate level will likely change this. In a future installment about Iranian trainees, you will also hear the thoughts of early career researchers from Iran and around the world.
By Elizabeth DuPre and Kirstie Whitaker
This month we continued our Open Science Demo Call series by speaking to Tim van Mourik, Eleftherios Garyfallidis and Malin Sandström about the communities they’re building and supporting to make everyone’s lives easier through better open source software tools.
After a few technical difficulties (Kirstie’s laptop inexplicably deleted the “broadcast” button so we were all chatting to each other without being able to include our viewers in the conversation!) Tim introduced Porcupine. Porcupine is a tool to visually program your analysis. By dragging and dropping modules that represent functions in your analysis, you can quickly build an insightful analysis and then Porcupine will provide the code that you or others need to run on your own data. All code and documentation is openly available at the project’s GitHub repository, and this is where you can also give any feedback or suggestions. Alternatively you can find Tim in the BrainHack Slack team (click here if you need an invitation to join) or via email at firstname.lastname@example.org.
Eleftherios told us about DIPY, a global, community-supported, software project for computational neuroanatomy, focusing mainly on structural and diffusion MRI. DIPY implements a broad range of algorithms for denoising, registration, reconstruction, microstructure, tracking, clustering, visualization, and statistical analysis of MRI data. You can get involved and help the DIPY team in many different ways, but Eleftherios particularly encouraged OHBM members to test their data with the DIPY algorithms and provide feedback on any challenges they have running the code. You can ask questions in the team’s live chatroom or send an e-mail to email@example.com.
Linking very nicely to Eleftherios’ call for student applicants to work on the DIPY team’s suggested projects was Malin Sandström, INCF’s community manager who manages the organization’s Google Summer of Code (GSoC) program. GSoC allows students to be financed with stipends for their work on open source software over the summer. Open source organizations in the project contribute project ideas and mentors. INCF is participating as a GSoC mentoring organization for the 8th year in a row, with mentors from the worldwide INCF community and a wide range of neuro-tool projects.
You can browse the INCF project list to learn more about the summer plans. If you were too late to take part this year, we encourage you to keep an eye on the INCF GSoC projects page for updates on future rounds. If you have a project idea you would like to mentor with INCF for next year, get in touch at firstname.lastname@example.org by 1st December 2018.
Our next call will be on Thursday April 26th at 7pm BST (check your local time zone). If you’d like to nominate yourself or someone else to be featured on these monthly calls, please add their information at this github issue, or email the host of the calls Kirstie Whitaker at email@example.com. You can also join the OSSIG google group to receive reminders each month.
by Souad El Bassam and Nikola Stikov
OHBM has members throughout the world. We used last year's meeting as an opportunity to interview some of them to find out about the international reach of OHBM.
In our Spanish language video, you can learn about LABMAN and the way developing countries try to keep up with the growing cost of brain mapping research. Maria Bobes, the president of LABMAN, speaks to Manuel Hinojosa about the importance of involving more Latin American researchers in brain mapping and the crucial role of LABMAN in raising awareness of the challenges facing researchers in this area of research in Latin America.
Our Dutch video features Emma Sprooten from Donders Institute for Brain, Cognition and Behaviour and Raissa Schiller from Erasmus MC – Sophia Children’s hospital, junior researchers who are interested in cognitive and behavioural research. They briefly talk about their impression of the conference before moving on to speak about Raissa’s PhD research on cognitive impairment in children who were critically ill as newborns.
Finally, the Balkan video features researchers from Macedonia (Nikola Stikov), Bulgaria (Kalina Christoff and Bogdan Draganski), Serbia (Bratislav Misic), Bosnia (Branislava Curcic-Blake) and Croatia (Lana Vasung) trying to communicate with each other in their respective languages. Among the topics discussed are work-life balance and the many reasons for attending (and skipping) the OHBM conference.
Our international outreach does not stop here. We have videos in 7 more languages, including Catalan, Czech, Greek, Hebrew, Portuguese, Slovak and Mandarin. If anybody reading this wants to help with the transcription, we will be very happy to add these videos to our YouTube channel. If interested, please get in touch with Nikola Stikov (firstname.lastname@example.org). Let's show our international community that the language of OHBM is universal!
P.S. A big "thank you" to Job van den Hurk, Amaia Benitez and Olivera Evrova for the transcription and translation of the videos.
Permutation methods are a class of statistical tests that, under minimal assumptions, can provide exact control of false positives (i.e., type I error). The central assumption is simply that of exchangeability, that is, swapping data points keeps the data just as likely as the original. With the increasing availability of inexpensive large-scale computational resources and openly shared, large datasets, permutation methods are becoming popular in neuroimaging due to their flexibility and lesser concern about yielding nominal error rates, compared with parametric tests, which rely on assumptions and/or approximations that may be difficult to meet in real data. This becomes even more important in the presence of multiple testing, in that assumptions may not be satisfied for each and every test, and the correlation across tests may be difficult to account for. However, even exchangeability can be violated in the presence of dependence among observations, and it may not always be clear what to permute. The aim of this blog post is to emphasize the relevance of linking the null hypothesis and the dependence structure within the data to what should be shuffled in a permutation test. We provide a few practical examples, and offer some glimpses of the theory along the way.
Example 1: Permutation mechanics
Let’s begin by reviewing the mechanics of a permutation test. Consider a comparison between two groups, for example whether hippocampal volume is different between subjects with Alzheimer’s disease (AD) and demographically matched cognitively normal controls (that is, a group with similar age, sex, education level, etc). If we assume that in both groups the hippocampal volumes are independent samples from a Gaussian distribution, a classical parametric two-sample t-test can be used to test for a difference between means of the two groups. However, this distributional assumption may not be true, and departures from this assumption can potentially lead to incorrect conclusions. In these circumstances, permutation tests perform better than parametric tests by providing a valid statistical test with much weaker assumptions. Specifically, under the null hypothesis that the hippocampal volume has no actual difference between AD cases and controls, the group membership (or the label of case and control) becomes arbitrary, that is, any subject from one group might as well have been from the other.
While it may seem implausible that this would be the case for patients and controls, in fact this is what we are testing: all else being equal (that is, exchangeable), any difference found must relate to the means, which is what we are interested in. In fact, a classical parametric two-sample test (with equal variance) makes not just the same assumption, but further assumes that patients and controls come from the same Gaussian distribution. Permutation tests do not require Gaussianity; it suffices that the data are merely exchangeable. Exchangeability further relaxes another important assumption of parametric tests: independence. Data that are not independent may still be exchangeable, either globally or under certain restrictions, as presented in more detail in Example 3 below.
With exchangeability, we compute the t statistic under each permutation, and produce the permutation distribution of the statistic under the null. The permutation distribution is the empirical cumulative distribution function (cdf) obtained from the data themselves, as opposed to from some idealized distribution, as is the case with parametric tests. The observed test statistic can be considered a random sample from the permutation distribution because it is equally likely to have arisen from any case-control re-labeling given the null hypothesis.
The p-value is the probability of finding a test statistic for the group comparison at least as high as the one observed, provided that there is no actual difference (i.e., the null hypothesis is true). So, the p-value can be calculated by randomly permuting the group labels many times, each time recalculating the test statistic; at the end of the process, we check how often a statistic at least as large as the original (before any shuffling had been applied) was observed, and divide that by the number of permutations performed. Figure 1 shows an example in which there are three subjects in each group; before any permutation is done, the test statistic is t = +0.7361. After exhaustively computing all 20 possible permutations, we see that 4 of these (including the non-permuted) are higher than or equal to +0.7361. Thus, the p-value is 4/20 = 0.20. If we had decided beforehand that our significance level would be 0.05, we would say that the result of this test is not significant, that is, there is no significant difference in hippocampal volume between AD patients and controls.
Figure 1: Consider the hippocampal volume measured in 6 subjects, three with Alzheimer’s disease, and three cognitively normal controls. The values measured are shown in the boxes (ranging between 3498 and 3588), controls in blue, AD patients in green. The test statistic for a difference Controls > AD is t = +0.7361. If there is no actual difference between the two groups, then the group assignment can be randomly permuted. For each such permutation, a new test statistic is calculated. In this example, four t statistics (shown in red) computed after random permutations of the group assignments, out of the 20 performed, were equal to or larger than the observed, non-permuted statistic. The p-value is therefore 4/20 = 0.20.
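The mechanics above are simple enough to sketch in a few lines of code. The following is a minimal illustration with hypothetical volumes (not the values from Figure 1): it exhaustively enumerates all 20 assignments of 6 subjects into two groups of 3, computes the t statistic for each, and counts how many are at least as large as the observed one.

```python
from itertools import combinations
from statistics import mean, stdev

# Hypothetical hippocampal volumes (mm^3); illustrative only.
controls = [3588, 3560, 3540]
patients = [3530, 3510, 3498]

def t_stat(a, b):
    # Two-sample t statistic with pooled (equal-variance) standard deviation.
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

data = controls + patients
t_obs = t_stat(controls, patients)

# Enumerate every way of assigning 3 of the 6 subjects to the "control" label.
count = 0
n_perms = 0
for idx in combinations(range(len(data)), len(controls)):
    grp_a = [data[i] for i in idx]
    grp_b = [data[i] for i in range(len(data)) if i not in idx]
    n_perms += 1
    if t_stat(grp_a, grp_b) >= t_obs:
        count += 1

p_value = count / n_perms  # the non-permuted labeling is included in the count
print(n_perms, p_value)
```

For larger samples, exhaustive enumeration becomes infeasible and one instead draws a few thousand random permutations, which gives an unbiased Monte Carlo approximation of the same p-value.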
Example 2: Permutation with the presence of nuisance
Suppose in Example 1 that there were other variables that could potentially explain some of the variability seen in hippocampal volume. Some of these variables could even be associated with diagnosis itself. For example, it may be the case that, in this particular study, AD patients were older than cognitively normal controls. To account for these nuisance variables, we can formulate the problem as a multiple regression, in which hippocampal volume is the dependent variable, whereas the case-control status, along with other potential nuisance variables, are the independent variables. We would then test whether the regression coefficient corresponding to the case-control label is significantly different than zero. Now it is less clear what should be permuted. If we permute just the group labels, what to do with the other variables in the model? It turns out that various approaches have been considered in the literature.
Systematic evaluations show that, among a host of permutation and regression strategies, the method attributed to Freedman and Lane provides accurate false positive control in the presence of nuisance variables and is robust to extreme outliers in the data. In the Freedman-Lane method, we regress out all nuisance variables from the hippocampal volume measurements to obtain the residuals of this nuisance-only model, and use the permuted residuals as the new dependent variable in the multiple regression, from which we construct the permutation distribution for the test statistic (i.e., the regression coefficient of interest). Intuitively, once the nuisance has been regressed out, what remains should be indistinguishable between AD patients and controls if the null hypothesis is true, and thus, can be permuted.
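A bare-bones sketch of the Freedman-Lane procedure follows, using simulated data (group label, age as nuisance, and a fabricated effect size chosen purely for illustration): fit the nuisance-only model, permute its residuals, add them back to the nuisance fit, and refit the full model to each permuted response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: hippocampal volume (y), group label (x), age as nuisance (z).
n = 40
z = rng.uniform(60, 85, n)                 # age in years
x = np.repeat([0, 1], n // 2)              # 0 = control, 1 = AD
y = 3600 - 2.0 * z - 15 * x + rng.normal(0, 10, n)

def coef_of_interest(y, X):
    # Least-squares fit; return the coefficient of the group column (last).
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[-1]

Z = np.column_stack([np.ones(n), z])       # nuisance-only design
X = np.column_stack([Z, x])                # full design

# Freedman-Lane: regress the nuisance out of y, permute the residuals,
# add them back to the nuisance fit, and refit the full model.
gamma, *_ = np.linalg.lstsq(Z, y, rcond=None)
fitted = Z @ gamma
resid = y - fitted

b_obs = coef_of_interest(y, X)
n_perm = 999
exceed = 1                                 # the unpermuted statistic counts
for _ in range(n_perm):
    y_star = fitted + rng.permutation(resid)
    if abs(coef_of_interest(y_star, X)) >= abs(b_obs):
        exceed += 1

p_value = exceed / (n_perm + 1)
print(p_value)
```

In practice one would use a pivotal statistic (such as the t statistic) rather than the raw coefficient, but the permutation logic is the same.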
We note that whichever regression and permutation strategy is adopted, it is crucial that what is permuted is what would render the subjects different were the alternative hypothesis true. It is not relevant to permute aspects of the dataset that would not be affected should the null hypothesis be false, that is, should an effect actually exist. This is important because, when an experiment becomes complex (e.g., with multiple factors, levels, nuisance variables, and/or multiple response variables), it can be easy to permute aspects of the data that are not informative with respect to the null hypothesis. One should not lose sight of what is being tested, and permute the data accordingly.
Example 3: Permutation with the presence of dependence in observations
Data are not always freely exchangeable. It may be the case, for example, that there are repeated measurements from the same subjects among the observations. Or maybe some or all subjects are twins, siblings, or otherwise relatives. Cases such as these restrict the possibilities for permutations, but even so, permutation tests continue to be possible. They proceed in a similar manner as in the examples above, but care needs to be taken when selecting the permutations that are allowed. Exchangeability as defined above — that is, permuting the data keeps them just as likely as originally observed — must be preserved. More technically, it means that the joint distribution of all the data points must remain unchanged under the null. For example, in a twin study, one could permute the subjects within twin pairs, and pairs of twins could be permuted as a whole, but one sibling should never be mixed with the sibling from a different family; see an example in Figure 2. These restrictions, unfortunately, tend to reduce power compared to the analyses in which all subjects are independent and freely exchangeable. However, all other benefits of permutation tests are kept.
Figure 2: Observations that are not independent restrict the possible rearrangements of the data. In this figure, each white circle represents an observation (e.g., a measurement from a subject), the blue (+) or red (−) dots indicate whether the branches that originate at that dot are or are not exchangeable, respectively, and therefore indicate observations that can be permuted with each other. On the left, 10 unrelated subjects who are freely exchangeable. On the right, 18 subjects, some of whom were recruited along with their siblings (FS), and/or with their monozygotic (MZ) or dizygotic (DZ) twin. Siblings must be kept together in every rearrangement of the data, which needs to be performed in blocks; subjects within a sibship can be permuted; some families may have both twins and non-twins, which requires nested blocks. (Figure licensed under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/)
Consider a longitudinal extension of the AD patients vs. controls example, in which two measurements are obtained from each subject, one before and another after an intervention is applied. As per above, the measurements must stay together within subject. However, depending on what is being tested, we may permute the data only within-subject, or only the subjects as a whole while keeping the order of intra-subject measurements unaltered, or do both things simultaneously. Within-subject effects (that is, the effect of treatment) would require that permutations happen within-subject, whereas between-subject effects would require permutations of the subjects as a whole. Interactions in a mixed design (within and between-subject effects) could benefit from both types of permutation. Crucially, what needs to be permuted is what would be equal should the null hypothesis hold, and that would differ should the alternative hypothesis be actually true.
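A restricted permutation scheme of this kind can be sketched as follows, with hypothetical paired data (e.g., one measurement per twin in 8 twin pairs, values invented for illustration). Exchangeability holds only within a pair, so each draw independently swaps, or not, the two members of every pair while never mixing members across pairs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical volumes (mm^3): one row per twin pair, columns are the two twins.
pairs = np.array([
    [3550, 3530], [3580, 3545], [3520, 3525], [3600, 3570],
    [3510, 3490], [3540, 3535], [3565, 3550], [3530, 3500],
])

def stat(p):
    # Statistic of interest: mean within-pair difference (first minus second).
    return (p[:, 0] - p[:, 1]).mean()

t_obs = stat(pairs)
n_perm = 999
exceed = 1                                   # the unpermuted arrangement counts
for _ in range(n_perm):
    # Independently swap (or not) the two members of each pair; members of
    # different pairs are never exchanged with each other.
    flips = rng.integers(0, 2, size=len(pairs)).astype(bool)
    perm = pairs.copy()
    perm[flips] = perm[flips, ::-1]
    if stat(perm) >= t_obs:
        exceed += 1

p_value = exceed / (n_perm + 1)
print(p_value)
```

The same pattern generalizes to the nested blocks of Figure 2: permutations are drawn only within (or of whole) exchangeable blocks, and everything else proceeds as in an unrestricted test.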
Example 4: Comparison between models
Now suppose that, in our AD example, in addition to hippocampal volume, we have also measured the amygdala volume for each subject, and are interested in investigating whether hippocampal volume is a better biomarker of AD than amygdala volume (for example, in terms of standardized mean difference between cases and controls as measured by the Cohen’s d statistic). It is tempting to permute the case-control label, but this strategy turns out to be wrong as it completely breaks the associations between the hippocampal/amygdala volume and disease status, which should be retained under the null hypothesis. In fact, in this example, it is unclear what to permute. As a second example, if we want to test whether the mean of hippocampal volume in AD cases is significantly different from a fixed value (e.g., the typical size of hippocampus in normal aging subjects), it can be seen that there is nothing to permute. In these circumstances where a permutation test is difficult to apply, we need to resort to other methods such as the bootstrap for statistical inference.
The bootstrap is an established data-based simulation method, which is often used to assign measures of accuracy, such as standard error, bias, and confidence intervals, to a statistical estimate. It essentially uses the observed data to define an empirical distribution that estimates the unknown underlying data-generation mechanism, and then generates bootstrap samples and bootstrap replications of the statistic of interest using the empirical distribution, from which measures of accuracy can be calculated.
The bootstrap can be applied to virtually any statistic and in a wide variety of situations. For example, by sampling cases and controls with replacement, independently of one another, we can calculate standard errors or construct confidence intervals for the Cohen's d of hippocampal and of amygdala volume, as well as for the difference between the two. Given the strong connection between confidence intervals and hypothesis testing, a p-value can also be produced, indicating whether the difference in Cohen's d is significantly different from zero. In fact, the bootstrap can be applied to hypothesis testing more broadly, including the questions described in Examples 1-3. However, unlike the permutation p-value, which is exact, bootstrap significance is only approximate and thus less accurate.
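As an illustration, the Cohen's d comparison might be bootstrapped as follows. All data here are simulated and the effect sizes are invented for the example; the key points are that cases and controls are resampled independently, and that each resampled subject keeps both of their volume measurements, preserving the hippocampus-amygdala association:

```python
import numpy as np

rng = np.random.default_rng(1)

def cohens_d(cases, controls):
    """Standardized mean difference with a pooled standard deviation."""
    n1, n2 = len(cases), len(controls)
    pooled_var = ((n1 - 1) * cases.var(ddof=1) +
                  (n2 - 1) * controls.var(ddof=1)) / (n1 + n2 - 2)
    return (cases.mean() - controls.mean()) / np.sqrt(pooled_var)

# Hypothetical volumes (cm^3): hippocampus shrinks more than amygdala in AD.
hip_ad, hip_ctrl = rng.normal(2.5, 0.4, 40), rng.normal(3.1, 0.4, 40)
amy_ad, amy_ctrl = rng.normal(1.6, 0.3, 40), rng.normal(1.75, 0.3, 40)

obs_diff = cohens_d(hip_ad, hip_ctrl) - cohens_d(amy_ad, amy_ctrl)

# Resample cases and controls with replacement, independently, keeping
# each subject's hippocampal and amygdala measurements together.
boot = []
for _ in range(5000):
    i = rng.integers(0, 40, 40)   # resampled case indices
    j = rng.integers(0, 40, 40)   # resampled control indices
    boot.append(cohens_d(hip_ad[i], hip_ctrl[j]) -
                cohens_d(amy_ad[i], amy_ctrl[j]))

lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile interval
print(f"difference in d = {obs_diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A confidence interval excluding zero corresponds, approximately, to a significant difference between the two biomarkers at the matching alpha level.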
Therefore, permutation is the natural choice when the null and alternative hypotheses are well defined and it is clear what to permute. The bootstrap is useful when the primary goal is to quantify the accuracy of an estimate, or when no permutation test is available (e.g., when there is nothing to permute). That said, we also caution that the bootstrap relies on an accurate empirical estimate of the true underlying probability distribution; the sampling procedure therefore requires careful consideration so that it respects the data-generation mechanism in the presence of complex data structures. For example, the block bootstrap is often used to replicate correlations within the data, while variants of the wild bootstrap are used to capture heteroscedasticity in the sample.
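To give a flavour of the block bootstrap idea, the sketch below resamples contiguous blocks of a simulated autocorrelated signal (the series, block length, and smoothing used here are all invented for illustration) so that short-range correlation survives the resampling:

```python
import numpy as np

rng = np.random.default_rng(2)

def moving_block_bootstrap(series, block_len, rng):
    """One bootstrap replicate that preserves short-range correlation
    by resampling contiguous blocks rather than individual points."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]

# Hypothetical autocorrelated signal (e.g., a smoothed fMRI time series).
noise = rng.normal(size=300)
series = np.convolve(noise, np.ones(5) / 5, mode="same")

# Bootstrap distribution of the mean, respecting the autocorrelation.
reps = [moving_block_bootstrap(series, block_len=20, rng=rng).mean()
        for _ in range(2000)]
se_block = np.std(reps, ddof=1)
print(f"block-bootstrap SE of the mean = {se_block:.4f}")
```

Naively resampling individual time points would treat the samples as independent and tend to understate the standard error; the block length trades off how much of the correlation structure each replicate preserves.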
Practical advice: It's easy to get started with permutation methods in brain imaging. Most software packages have some sort of permutation test implemented. AFNI's 3dttest++ now uses permutation by default for cluster inference with the -ClustSim option; BrainVoyager has a randomisation plugin (permutation tests are sometimes called randomisation tests); Freesurfer can do permutation with mri_glmfit-sim; FSL has its randomise tool; and SPM has the SnPM toolbox. Finally, PALM is a standalone tool for permutation that works with different types of input data and has various advanced features.
At its best, multi-modal imaging offers rich insight into many aspects of brain structure and function. At the same time, its development has been hampered by challenges: simultaneous EEG-fMRI, for example, raises additional safety concerns, and the EEG data require extra analysis steps to account for artifacts from the static magnetic field and rapidly switching field gradients. Despite these issues, there is increasing attention to the merits of this approach, with high-profile journals dedicating special issues to multi-modal data fusion.
To find out about the promises and pitfalls of multi-modal imaging, we sent a series of questions to members of the OHBM Multi-Modal Imaging Task Force. This team is composed of experts in different imaging domains, and aims to promote and develop multi-modal imaging. We found out the state of the field from Alain Dagher, neurologist and PET/fMRI expert at the Montreal Neurological Institute; Urs Ribary, cognitive neuroscientist and EEG/fMRI expert in British Columbia; Gitte Knudsen, neurologist and translational neurobiologist at Copenhagen University; and Shella Keilholz, physicist and fMRI expert at Emory University and Georgia Tech.
OHBM: First, what advice would you give to those who are keen to get into multi-modal imaging?
Alain Dagher (AD): Make sure you have a strong grasp of both methods.
Urs Ribary (UR): First, focus on understanding the neurophysiological and biochemical aspects of the brain; then learn individual methods (MRI-fMRI, MEG/EEG, PET, or others…); finally, learn the additional technologies and techniques that will allow you to integrate these different sources of information.
Gitte Knudsen (GK): You need to train at a site where there is high-level expertise in both modalities, preferably in an integrated fashion. If you cannot readily become attached to an academic site that masters true multimodality, do your master's thesis/PhD in a centre that masters one or two of the modalities and then move on to another site with the complementary expertise.
Shella Keilholz (SK): Well I would tell them that if they want to do it, just go for it! It’s a great way to increase the impact of your research, especially when the additional modality allows you to make inferences about causality or fundamental mechanisms that you can’t obtain with a single methodology. Sometimes it seems overwhelmingly difficult to add another modality but we have always been able to find collaborators who generously help us get started.
OHBM: It seems the tools for collecting the data are more readily available (e.g. MRI compatible EEG setups). What is the biggest remaining hurdle in conducting multimodal studies? Is data-fusion between modalities improving?
AD: The increased cost and complexity is generally what holds this back. [Further note from Jean Chen, OHBM blogteam member: “For example, an integrated PET/MRI system is more costly than a regular PET or MRI system. Whilst it may not be as expensive as buying a PET and an MRI system separately, new money is often required to get into multi-modal imaging.”].
GK: The biggest hurdle is, first, to master more than one tool to perfection and second, to ask the right scientific questions that can only be addressed using a multimodality approach. Data-fusion between modalities is a challenge, but slowly improving.
UR: Yes, data fusion is improving, but not so much the underlying knowledge of neurophysiology (why to integrate). There are also clearly issues with money (more recordings are more expensive) and with time (it requires more knowledge and work, and everybody wants to publish quickly). On the other hand, data fusion is not something that has to be done alone, and can be done efficiently in collaborations.
SK: One of the biggest challenges in multimodal research is designing experiments and analyses that maximize the use of the information obtained from both modalities. It requires thinking beyond the conventional paradigms for each of the modalities involved.
OHBM: The increased use of simultaneous PET-MR scanners has clear advantages for cancer imaging. What benefits do you feel it may hold for other areas of neuroimaging?
UR: A clear benefit would be the ability to combine biochemical information with information about brain structure, function and dynamics.
AD: There are many benefits. For example if you take the combination of BOLD and neurotransmitter imaging, since neurotransmitter signalling fluctuates, simultaneous measurement of, for example, dopamine signalling and task-related BOLD has great potential. This then also allows powerful task designs with pharmacological manipulations.
GK: It allows us to measure neurotransmitter release, receptor occupancies, and hemodynamic responses simultaneously. We can then combine this with pharmacological, physiological or other stimuli. Another great advantage is that it saves time (a one-stop shop) for patients with neurological or psychiatric disorders, and so can be useful for those who are not able to tolerate multiple scanning sessions. Unfortunately, despite saving time and possibly resources, the simultaneous acquisition of these different types of information has not yet been truly exploited.
OHBM: The last decades have seen the development of a number of new radioligands for imaging tau and amyloid pathology, microglial activation with translocator protein, phosphodiesterases, and other exciting clinical markers. Are these helping drive multi-modal imaging research? Which emerging PET tracers are you most excited about and why?
AD: For me, the most exciting tracers have been those used to image tau and amyloid, providing otherwise unavailable information about neurodegenerative diseases. Previously we only had brain atrophy as a proxy of disease.
GK: If we’re still talking about hybrid scanners, then we are most interested in developing tracers that target components in the brain that are under rapid regulation. In these cases the methodology can capture these regulations and relate them to, for example, the hemodynamic responses. I’m currently excited about radioligands that are sensitive to neurotransmitter release, as well as emerging PET tracers that are informative of brain processes key to many different types of functions/pathologies. For example, tracers that indicate neuroplasticity or stem cells.
UR: Everything helps! I’ve been impressed with recent research relating imaging of neurotransmitters to cognitive functions in health and disease. In addition, the ability to image GABA as an inhibitory substance has been fascinating to see how it may contribute to, and even control, brain development and dynamic network functions. Last, it’s helped us understand the brain as a fine-tuned electrochemical system which controls all brain functions.
OHBM: Simultaneous EEG-fMRI offers high spatial and temporal precision - but how have labs coped with the challenge of integrating and analysing this wealth of data?
AD: This has been especially problematic for EEG. What we need is good open-source processing software for integrating this information, along with online tutorials and courses to teach people how to use it.
UR: I believe that there’s still not enough work in this area. We need to have a much greater understanding of how structure, overall function and brain dynamics integrate in order to understand how typical/atypical brain networks function. Here the question is not so much about using information from different methods to prove each other but instead to complement each other.
OHBM: EEG-fMRI has clear benefits in conditions like epilepsy, for identifying seizure focus and spread. What applications has it had in other conditions - and what do researchers hope to achieve with it?
AD: Cognitive neuroscience can certainly benefit from the combination of higher spatial and temporal resolution in brain mapping.
GK: EEG-fMRI also has promise for use in sleep physiology, sleep disorders and coma.
UR: Any typical cognitive functions and any pathology which are ALL based on structure, function and dynamics....
OHBM: What do you think are the main strengths of multi-modal MRI work? Do you feel it offers hope for developing valid and reliable MR-biomarkers?
UR: Absolutely! Science is not a mystery, the more complementary information we have, the better we understand the human brain. It will help us to diagnose/monitor sub-types of pathologies and give much greater precision when tracking the effects of interventions....
AD: I do believe using multiple MR measures makes sense for biomarker development and for understanding pathophysiology. Pathological processes (e.g. in Alzheimer's disease) can affect the brain in multiple but likely stereotyped ways. We can also increase our power to detect pathology (e.g. inflammation, white and grey matter tissue loss, connectivity changes) by combining multiple measures.
OHBM: What additional challenges do animal studies have in terms of sequence development or protocol considerations? How do you find these studies enrich those in humans?
AD: Clearly a major issue is the small size of animal brains. We also have to account for the animals typically being anaesthetised when scanned, which has implications for physiology.
GK: Sometimes data from preclinical studies can help optimize a project to be conducted later in humans.
UR: The real benefit of these preclinical studies is that it allows us to perform complementary invasive studies not possible on humans, such as MRI-histology studies. We do however need to continue developing better, or more realistic, settings in animal research in order to better correlate those findings with human brain research.
SK: One of the challenges that we've found is that tools available on human MRI systems (simultaneous multislice EPI, for example) are not easily implemented on animal systems due to hardware limitations. As Alain says, the other main issue is the use of anesthesia in animals, a special challenge for functional neuroimaging studies, as discussed in our review. Luckily, many of the basic properties of the brain remain relatively intact under light anesthesia, which has been critical for the validation of human neuroimaging methods against "ground truth" modalities like microelectrode recording. People talk of animal research as preclinical or translational, but we like to think of it more as circular. For example, one can take a neuroimaging finding in humans (e.g., the fMRI response to tactile stimulation) and look at its basis in the rat using MRI and electrophysiology. Then perhaps one sees that this response is altered in human patients with a particular disorder (maybe stroke). One can then go back to a rat model of stroke and see if the same alteration is present, which helps to validate the stroke model. Then one can look for the neural basis of the alteration using MRI and electrophysiology and identify specific alterations in patients that may be detectable with EEG, and so on. We think that human and animal neuroimaging should inform each other.
OHBM: Thanks all for your insight! We look forward to the multi-modal imaging symposium at OHBM 2018 in Singapore.
By Elizabeth DuPre and Kirstie Whitaker
This month we continued our Open Science Demo Call series by speaking to Anisha Keshavan, Yaroslav O. Halchenko, and Athina Tzovara about three tools they’re developing to improve openness and access in neuroimaging research.
Anisha introduced braindr, a project she's developed to crowdsource quality control of large datasets such as the Healthy Brain Network dataset. It builds on her previous work in creating MindControl, but provides a fun, Tinder-inspired interface for image ratings. She encourages anyone interested to check out the app, remix it for their own data, or contribute to the conversation on how to do quality control of images!
Yaroslav told us about DataLad, a solution devised to allow for versioning data. We’ve already recognized the importance of versioning code, but it applies to data too! As Yaroslav pointed out, data can change or have “bugs” like the dreaded left-right orientation flip in MRI data, so understanding what version you’re working with is important. Using DataLad, Yaroslav demonstrated how to install datasets from sources like OpenNeuro and discussed how it can even be used for data sets before they are made publicly available. Interested contributors are welcome to check out the code!
Athina introduced a survey she’s actively developing to better understand how research treats underrepresented minorities. It aims to allow non-scientists --- particularly those belonging to traditionally underrepresented minorities --- to take an active role in the scientific process, bridging the divide between researchers and participants. Originally developed through the Mozilla Open Leadership program, the survey is still open to feedback from the community, and Athina encourages anyone interested to join the discussion on GitHub!
Our next call will be on Thursday March 22nd at 7pm GMT (check your local time zone). If you'd like to nominate yourself or someone else to be featured on these monthly calls, please add their information at this GitHub issue, or email the host of the calls, Kirstie Whitaker, at email@example.com. You can also join the OSSIG Google group to receive reminders each month.
"A brain scan may reveal the neural signs of anxiety, but a Kokoschka painting, or a Schiele self-portrait, reveals what an anxiety state really feels like. Both perspectives are necessary if we are to fully grasp the nature of the mind, yet they are rarely brought together".
-- Eric Kandel
Visual art can provide a glimpse into people's consciousness. It works as a bridge, connecting us not only to each other but also to the past, present, and future. The act of creating art is also therapeutic, and represents a powerful resource for mental and physical well-being. Yet the mechanisms underlying the brain's capacity to generate art remain largely elusive. While it has commonly been reported that the right brain (posterior parietal and posterior temporal regions) is dominant for artistic ability, emerging literature strongly indicates that the left brain is not a silent partner; rather, it contributes more to the symbolic and conceptual aspects of art. Moreover, the emergence of visual artistic skills in the healthy brain has been linked to plasticity in areas (in both hemispheres) responsible for cognitive processes. This raises the question: how is visual artistic creativity affected by neurodegeneration?
In fact, art in the context of neurodegenerative diseases (e.g. Alzheimer’s disease, frontotemporal dementia) provides a unique window into brain anatomy and function. In this interview, I discuss the link between neurodegeneration and art with Bruce Miller, director of the Memory and Aging Centre at the University of California. Bruce also oversees the unique Hellman Visiting Artist Program, created to foster dialogue between scientists, caregivers, patients, clinicians and the public regarding creativity and the brain.
Q&A WITH BRUCE MILLER
AmanPreet Badhwar (AB): Can you begin by saying something about your background?
Bruce Miller (BM): I am a behavioural neurologist at the University of California, San Francisco. I focus a lot on degenerative disease: the clinical presentation, differential diagnosis of dementia, also deep dive into frontotemporal dementia. I think a lot about behavioral phenomena, particularly early in the course of these diseases.
I started realizing the importance of art and dementia very serendipitously. It was based on seeing a single patient (Jack). His son told me his father had become an artist in the setting of the illness. And I said "of course, as the disease has progressed his work has gotten worse", and he said "oh no, it has gotten better". So he sent me a series of pictures, and I was fascinated and really enchanted by the work Jack did, and began to look in detail into the visual artistic process in that patient. Jack was preoccupied with creating purple and yellow art pieces, and a phrase I often heard from him was "yellow and purple wave over me".
I did not think it was a coincidence, although many people around me thought it was, and I was stubborn enough to pursue this, and continued to look for it in my frontotemporal dementia and progressive aphasia population. It does not take much time to hear about somebody, who they are, what they do etc. I would argue that this should be a mandatory part of any evaluation.
AB: How do the worlds of neuroscience and art combine?
BM: Art is unique to the human species. Other animals don't spontaneously produce art, and not even our predecessors, such as the Neanderthals and Homo erectus, made art. There are records of very sophisticated and complex cave paintings by Homo sapiens that showed animals and had three-dimensional components and colours. So we developed this ability spontaneously, and without much formal teaching. The sense is that something really unique happened, a change in the human brain, maybe a change in human circumstances, that led to this flourishing of art, and it continues to be a part of our ancient and modern societies.
Also, looking at human output around art: some people are extraordinary, and some never produce art. So I think art is a very interesting aspect of humanity and of the human brain, and the two could not be more connected.
AB: You previously stated that “creativity is one characteristic that has been observed to improve with time, both in healthy older adults and people with age-related neurodegenerative disease”. Is the trajectory for artistic creativity different in normal aging and in age-related dementias?
BM: I think it's a very interesting, complex question, tackling human aging and art. We are very interested in elder artists, there is no doubt about it. Picasso, in his eighties, produced very different but interesting pieces, and they delighted people. There is no doubt that his work was exciting. Was it better when he was young, or was it more innovative? Maybe not, but I think there is great variability in when an artist reaches his or her peak. Some artists make a series of observations that become very important in their twenties and don't change very much over time; in others there is a constant evolution. One thing that is clear is that it takes a while to master whatever art form someone is working in. Nobody picks up a pen and produces a perfect sketch of a face; it takes many, many iterations and practice over a long time. I think this is what happens when someone is an art student: they are constantly working on these techniques, making their own observations and getting observations on their work from teachers.
In disease, people who have never painted, made sculptures, or welded art pieces suddenly become very interested in the process. Their first works are usually not as good as the ones they produce after they've had the chance to work in a specific medium. They do things over and over again, and at some point they start to reach a mastery of their art. So I think there is often a period when they don't produce anything very interesting but there is a drive to do so. That drive pushes them to practise more and more until they reach some sort of a peak, and eventually the degenerative process and injury to circuits cause a loss of their abilities.
So we have this very beautiful but sad story of sometimes art heralding the onset of the degenerative disease process. Soon after the art has appeared the degenerative process gets worse, and eventually the ability to produce art is lost altogether.
AB: Do you think that this drive to produce art arises from disinhibition of certain brain networks, especially in patients who, earlier in their history, were never motivated to produce art? In other words is this artistic ability unveiled and perpetuated by the neurodegenerative process itself?
BM: I do. I think the fact that they never produced art before means that the circuits involved in this process had not been activated. Something about the degeneration, for reasons that we don’t completely understand, leads to an interest, an activation, an actual physical drive to carry out the artistic activities. The theme has been that degeneration on the left side of the brain (language based regions) releases functions on the right side, which are more visual.
AB: Have there been any fMRI studies done in these patients with relation to newly developed artistic abilities?
BM: There is quite a bit of fMRI data that we have collected on our artists. We are in the process of analysing that, but we don’t yet have a coherent story. We wrote about it. William Seeley did these analyses on a woman (Anne Adams) who became a visual artist in the setting of a non-fluent aphasia, and she showed on a blood flow scan increased activity in the right posterior brain region, and actually during that time an MRI was done and she had increased volume in that same area.
There are a number of theories, one being she was always like that (that is the bigger volume). But she was never much of an artist until the progressive aphasia emerged. We think there might have been slow remodeling in the early stages of the disease, with decreased activity in the left frontal insular regions allowing increased activity on the right posterior parietal area and actually some increase in volume.
AB: Does art created by people with brain disease or damage provide insight into brain anatomy and function? Could you provide a few examples?
BM: Surely Anne Adams was a paradigm shift for me to describe the phenomenon of art and dementia, but I had never really thought too much about the mechanism. But because she had undergone an MRI just before the onset of dementia, this really allowed us to look into the circuitry and mechanism. This also allowed me to broaden my thoughts about the topic, so seeing patients who had gardens with beautiful details, flowers, patterns. This is another form of visual creativity that I have become aware of.
AB: As a practising neurologist, how has your encounter with art influenced or changed your own conception about how the brain functions? Do you have specific examples? Did you have to overcome difficulties to promote this field?
BM: I think it has really humanised my approach to patients. It makes me realize that even though dementia is a relentless process, there are many pockets of preservation, and sometimes enhanced function. It is critically important that we recognize this in our patients. It is helpful in diagnosis. What is preserved is telling us something about where in the brain the bad molecules are not accumulating. But it also allows us to think about the patients, about things that are important to them, and help design programs for them and have activities that are meaningful. If you have lost your visual spatial function profoundly, then probably working in art is not going to be satisfying. But if instead there are other areas that are preserved around music or singing or something else, these things have to be kept in mind while thinking about the future for the patient and their families.
I think this should be a routine part of our diagnostic process, that is not only what are the weaknesses, but what are the strengths, and has anything new emerged that is actually a new strength. We do this regularly now at UCSF (it has opened up a whole new side to the evaluation). This also makes me appreciate the unbelievable effort that every patient that we see is putting into their life. When blocked in certain domains, they activate others and use others. So I think about patients in a very different way since the story of art emerged. I think, to a fault, neurologists have often thought about deficits a lot, without really seeing the whole human being, and I think this has really forced me in a very good way to think about the entire human within the ecosystem that they live and interact with others, and some of the things they perceive that might be very important.
AB: I have had the good fortune of discussing both art and neurodegeneration with Bruce on various occasions. Not only do we share similar scientific curiosities with regard to art and dementia, but I have also found him to be an excellent mentor. He has taught me to follow my heart in the quest to figure out the brain, and for this I shall be forever grateful!
“I think the next philosophers, the philosophers of the 21st century, are going to be neuroscientists.” - Bruce Miller
The OHBM is dedicated to understanding the anatomical and functional organization of the human brain using neuroimaging. But how best to use brain-activity measurements, including human neuroimaging, to understand computational mechanisms remains an open problem. "Mapping the brain does not by itself reveal the brain's computational mechanisms," says Niko Kriegeskorte, past chair of the OHBM Communications Committee. "Therefore one of the strategic priorities in the OHBM Communications Committee has been to explore the interaction between computational neuroscience and human neuroimaging."
Here, we had the chance to discuss the current state and future of computational neuroscience with Mark Humphries, senior research fellow at the University of Manchester, Chair of Computational Neuroscience at the University of Nottingham, and talented blogger. We found out about research environments in different countries, mindful language use in neuroscience, Mark’s outlook on the future of network neuroscience, and his top three tips for those starting out in computational neuroscience.
Nils Muhlert (NM): Can you tell us a bit about your career path - were you first interested in computing, or in neuroscience? Also, your work has seen you move between the UK and France - have you found different approaches to research in these countries?
Mark Humphries (MH): I’m of the generation that grew up programming their home computers - their C64s, Spectrums, and BBC Micros - so computing was always there. As a kid I also loved chemistry. Originally I wanted to do Chemical Engineering at university, but it turned out that A-Level Chemistry was both hard and boring. So when I came across the mysterious “Cognitive Science” degree, promising computing, AI, and the brain, I signed up like a shot. In effect, I’m one of the few who was trained in computational neuroscience from my first year at undergraduate level.
That degree was followed by a PhD and postdoctoral work at Sheffield, with the quietly wonderful Kevin Gurney. Not quite the straight run it sounds: disillusioned and exhausted by the end of the PhD, I went off to freelance web design and software engineering. That lasted a year before I was tempted back by the offer of a post-doc.
My long stint at Sheffield was followed by three years in Paris at ENS. Both teams of computational neuroscientists, with radically different approaches. Sheffield were neuroscience-first, circuit modellers: build a model of a brain region, study its dynamics, and infer its function. Paris were theoreticians first: propose and study general principles for how computations could be done by the brain (memory, inference etc), then worry about the details of specific circuits later, if at all.
In my experience, the French research system, dominated by the CNRS and INSERM, is essentially just part of their civil service system. So you can have a job for life, but getting financial support to do your research can be an absolute pain. Theorists in all fields can thrive, of course. (ENS has an extraordinary maths department: the Bourbaki group were based there, and they’ve had five Fields medalists). The UK research system more clearly supports fundamental science.
NM: In a recent blog post on connectomes, you highlight some of the many factors influencing the spiking of a single neuron. In human neuroimaging, we typically summarise activity at the scale of cubic millimetres, with each voxel containing tens or hundreds of thousands of neurons in different cortical layers. How much cross-talk do you see between cellular systems neuroscience and human neuroimaging, and how much do you think understanding at one level currently constrains understanding in the other?
MH: The neuroscience of detailed neuron types - their physiology, receptors, transmitters, gene expression, and so on - often has little constraint on systems neuroscience studies of large populations of neurons. Many multi-neuron recordings from cortical regions can only hazard a guess at what layer they are recording in, never mind whether the recorded neurons are Martinotti or ViP interneurons or whatever. I think this lack of identifying neurons has played a large role in driving the take-up of calcium imaging, where we can at least identify some subtypes of neurons (typically 1 or 2), despite the obvious disadvantage of recording something (calcium) that is only partially related to the thing we’re interested in (the spiking of neurons). What’s particularly missing is the constraints of anatomy - the wiring between individual neurons - on the activity we’ve recorded from those neurons.
But that will come. In a handful of specialised circuits, this information is being combined. For example, in studies of the mouse retina, the type and position of neurons have been used to constrain classifications of large population recordings. And in tiny animals, like Drosophila larvae (maggots to the rest of us) and C. elegans, the details of wiring and neuron types have been combined with large-scale imaging to reveal deep insights into how brains could work.
NM: Marsel Mesulam revealed that students requesting higher field strength MRIs are asked “what would you do if you could record from every neuron in the brain?” This thought experiment is now an ambition for international research projects. How do you feel network neuroscience could sensibly use this massive amount of data?
MH: A question that has occupied much of my thinking, but to which I’m no closer to a good answer. We have passed the milestone of recording every neuron from a simple nervous system. But as I wrote at the time, it was a cool study from which we learnt very little of consequence.
That said, everything that brains do, they do through the collective action of hundreds to millions of neurons. And we lack well-established theories for what that collective action means, or how to interpret changes to it. In the absence of theory, the gotta-catch-them-all philosophy of recording every neuron is seductive: let’s get the data we think we will need one day, and wait for theory to catch up.
Fortunately, ideas are emerging about how we can sensibly use this data. There’s some great recent work on how we can tell whether there’s anything special about the joint activity of many neurons: whether it is just the expected result of lots of individual neurons tuned to different properties of the world; or if the joint activity really conveys more information than the individual neurons summed together. And we’re starting to get a handle on how to understand the dimensionality of that joint activity: how much redundancy there is between neurons, how that redundancy differs between brain regions (and between different brains), and what that means.
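As a toy illustration of what the "dimensionality" and "redundancy" of joint activity can mean in practice (the simulation and numbers here are purely hypothetical, not from any study mentioned in the interview): if a population's firing is driven by a few shared latent signals, then a handful of principal components captures nearly all of the variance, however many neurons we record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100 neurons whose activity is driven by only 3 shared latent
# signals, plus small independent noise -- i.e. highly redundant joint activity.
latents = rng.normal(size=(1000, 3))      # 1000 time points, 3 latent factors
weights = rng.normal(size=(3, 100))       # how each neuron reads out the latents
activity = latents @ weights + 0.1 * rng.normal(size=(1000, 100))

# PCA via the covariance eigenspectrum: how many components carry the variance?
cov = np.cov(activity.T)                  # 100 x 100 covariance across neurons
eigvals = np.linalg.eigvalsh(cov)[::-1]   # eigenvalues, descending
explained = np.cumsum(eigvals) / eigvals.sum()
n_dims = int(np.searchsorted(explained, 0.95)) + 1  # components for 95% variance
print(n_dims)                             # far fewer than 100 neurons
```

The gap between the number of recorded neurons and the number of components needed is one simple read-out of the redundancy between them.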
NM: In another of your blog posts, you criticize media misinterpretations of dopamine as representing the ‘reward system’ of the brain. How does your own work feed into this - and at what point did you feel a general education piece was warranted?
MH: The tipping point was seeing “Dopamine dressing” in The Guardian’s Style section. As though dopamine neurons give a damn about what you wear. Endless publications call dopamine the “reward system”, when it is not. And it’s particularly embarrassing when such language routinely appears in august publications like Nature. So I thought that it’d be useful for everyone to have a simple, accessible, concise explanation that dopamine neurons signal an error, not reward. And then we can all just point our undergraduates, friends, family, and editorial staff at esteemed publications to that post, and save ourselves the trauma.
Dopamine has been around in my research since the first days of my PhD. For years my work was primarily on the basal ganglia, and the striatum - the massive input nucleus of the basal ganglia - is where the dopamine neurons send their dopamine. So we include the effects of dopamine in all our models. In Paris I spent a couple of years analysing dopamine neuron firing in a project that never saw the light of day. More recently, I helped Kevin Gurney achieve his mammoth computational account of how dopamine teaches the basal ganglia to select actions. Dopamine has haunted me for my entire career...
David Mehler (DM): Richard Feynman used to stress the difference between “knowing the name of something and knowing something”. In a similar spirit, you have critically assessed whether we put too much faith in named brain structures, giving examples of why these should not be taken at face value. What advice do you have for students and ECRs, whose experience of neuroscience may consist wholly of learned brain regions with set functions?
MH: Read more than just about your brain region. And internalise the idea of degeneracy: brains have many solutions to the same problem.
If we work on only one brain region, it is easy to fall into the trap of thinking that one brain region does everything. Just being aware of the thinking about brain regions other than your own will help you not take anything at face value. In my own field, it is easy for basal ganglia researchers to fall into the trap of claiming that the basal ganglia are responsible for “action selection”. But this patently can’t be true: there are multiple systems that select actions in the brain, from spinal reflexes, up through the brainstem, midbrain, and other sub-cortical structures - the amygdala can select fear responses just fine on its own.
DM: A recent study from your lab, in collaboration with Angela Bruno & Bill Frost from the Chicago Medical School, provides fascinating insight into how neural populations orchestrate their activity when coordinating movement: while their combined output converges to a similar pattern (an attractor), activity of individual neurons is not stable over time. What does this finding imply in your view for our understanding of functional connectivity (e.g. between neurons or neural populations)?
MH: It means that functional connectivity is an epiphenomenon. The correlations between individual neurons are imposed by the dynamics of the whole circuit in which they reside. Those dynamics obey certain properties that emerge from the wiring of the whole circuit and the excitability of the individual neurons.
But it is very useful to study functional connectivity of neurons: mapping the correlations between neurons is so much easier than trying to infer the underlying attractor, or other form of dynamical system. And changes to those correlations imply a change to the underlying attractor. Indeed, we use this approach all the time. We just need to be mindful that those correlations are a read-out, an observable property, of the circuit’s dynamics.
Functional connectivity at the level of whole brain regions, of MEG/EEG and fMRI, is a different kettle of fish, of course. On this scale, correlated activity is telling us something about the distribution of how things are represented across the brain in very large neural populations, with tens of thousands to millions of neurons in a single time-series. Instability of correlations over time for these time-series would suggest entire neural populations that wink on or off as needed. And dynamical systems analysis has long been applied to EEG data, but usually as a way of looking for changes in gross neural activity - as may precede an epileptic seizure, for example - rather than as a view of how the brain computes.
Seeing a spiral attractor in neural activity. Activity was recorded from 105 neurons in a sea-slug's motor network during three separate bouts of galloping. There are three lines plotted here. Each line is the low-dimensional projection of those neurons' joint activity during a 90 second bout of galloping, from its onset (grey circle). Each line traces a circular movement whose amplitude decays over time: a spiral. The three lines together trace the same region of this low-dimensional space, indicating that the neurons' joint activity is attracted to the same pattern: the spiral is an attractor.
DM: Your work increasingly focuses on dynamic changes in neural networks. What insight do you think this will bring to the field over the next 5-10 years?
MH: We’re going after the idea that the brain encodes information at the level of the joint activity of populations of neurons. In this view, each neuron is a read-out of the joint activity of all the neurons that project to it. That neuron, in turn, is just one small component of the populations projecting to other neurons. So only by looking at the dynamics of the neural network as a whole can we understand what neurons are seeing, and hence what the brain is encoding. A change to those joint dynamics is then a change in what is being encoded: be it a sound, a memory, or a movement. In short: the response of single neurons may be irrelevant to what the brain is doing.
DM: … and finally, computational neuroscience is gaining increasing popularity. But starting out may seem daunting. What are your top three tips to get into the field?
MH: First, learn to code, properly. To some, this may seem obvious. In my experience most people who’ve come to me with a genuine interest in getting into computational neuroscience have never coded, certainly not seriously. But coding is the day-in, day-out life of the computational neuroscientist, so you won’t get far without deep skills in coding. And by “properly” I don’t mean “you have to learn a proper programming language”, whatever that means. No: properly learning to code means learning the logic of how code is built, independently of the language used: of variable types, indexing, functions, control loops. And learn to comment your code. You know who will love you for commenting your code? You, in a year’s time.
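To make the "logic of how code is built" concrete, here is a deliberately simple, heavily commented sketch (our illustration, not from the interview; the names and numbers are made up) that exercises each of the basics listed above: variable types, indexing, functions, and control loops.

```python
# A function: a named, reusable block of code with inputs and an output.
def mean_rate(spike_counts, bin_seconds):
    """Average firing rate (spikes/s) from a list of per-bin spike counts."""
    total = 0                      # an integer variable, updated inside a loop
    for count in spike_counts:     # a control loop: visit each list element
        total += count
    return total / (len(spike_counts) * bin_seconds)

counts = [3, 0, 5, 2]              # a list: an ordered, indexable collection
first_bin = counts[0]              # indexing: positions start at 0, so this is 3
rate = mean_rate(counts, bin_seconds=0.5)
print(first_bin, rate)             # -> 3 5.0
```

And note the comments: the point of each line is stated next to it, which is exactly what you (in a year's time) will thank yourself for.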
Second, ask yourself: What type of computational neuroscience do I want to do? The choices are endless. We can work at scales from the actions of receptors at single synapses; plasticity at single synapses; the intra-cellular signals triggered by receptor activation; the dynamics of a single neuron in all its glory, dendrites and all; the collective dynamics of networks of neurons; of specific brain circuits; right up to the entire brain. And on to read-outs of mass activity, to EEG, MEG, and fMRI, and the functional connections between regions. We can work bottom-up, top-down, or middle-out. We can aim to ask what a specific brain region does, work out what causes a disorder, or reach for general principles for how neurons compute. We can use algorithms, like machine learning; simulations of dynamics using differential equations; or pencil and paper to solve equations. What is it you want?
Finally, take a Master’s course in computational neuroscience. Both so you can find out if this path is for you; and so that you can be taught the neuroscience by neuroscientists and the computation by computational neuroscientists. Get either wrong, and no one will take you seriously.
By Elizabeth DuPre and Kirstie Whitaker
The open neuroimaging community is great and growing every day. This month saw the first of a series of Open Science Demo Calls. Brought to you by the OHBM Open Science Special Interest Group, these live streamed calls are a chance to hear from the developers of open neuroimaging tools. We'll use these calls to build connections between all members of the OHBM Open Science community and to tell the stories of the people making outstanding and reproducible neuroscience happen.
For our first call, we spoke to Alejandro de la Vega, Cameron Craddock, and Guiomar Niso about three ongoing initiatives they’re spearheading to improve openness in neuroimaging research.
Alejandro spoke about NeuroScout, a new, cloud-based platform allowing for the flexible re-analysis of neuroimaging datasets with naturalistic stimuli, such as the Study Forrest dataset. To do this, Alejandro is actively working to develop tools such as pliers and pybids. If you’re interested in this line of research, make sure to check out and contribute to these tools!
Cameron discussed this year’s Brainhack Global. Building off the successes of Brainhack Global 2017, Cameron is organizing a globally based hackathon for this spring, where neuroimaging researchers around the world can come together online to learn about, develop, and improve open neuroimaging tools. He encourages anyone interested in attending the event to join the Brainhack Slack team.
Technical difficulties prevented us from seeing Guiomar in our call, so we recorded a supplementary video to hear more about her work with MEG-BIDS. This is a major extension of the BIDS specification to cover MEG data. As Guiomar informed us, MEG does not have a standardized acquisition file format (like MRI DICOMs), so the creation of an MEG-BIDS standard will make a huge difference to the community! Feedback is welcomed on the current draft of the specification, which is planned for release on February 14th.
Our next call will be on Thursday February 22nd at 7pm GMT (check your local time zone) and will feature Anisha Keshavan on Braindr, Yaroslav Halchenko on DataLad and Athina Tzovara discussing how research treats underrepresented minorities.
If you’d like to nominate yourself or someone else to be featured on these monthly calls, please add their information at this github issue, or email the host of the calls Kirstie Whitaker at firstname.lastname@example.org. You can join the OSSIG google group to receive reminders each month.
Professor Aina Puce is the Eleanor Cox Riggs Professor in the department of Psychological and Brain Sciences at Indiana University, Bloomington, and a senior editor at Neuroimage. She has followed a career path that is now becoming more common in human brain mapping, starting firmly rooted in the methods end but, over time, gradually shifting focus towards understanding complex patterns of behaviour. To do this, she has made use of a number of imaging techniques, exploring ways to extract converging lines of evidence.
Here, we find out how her interests changed throughout her research, the promises and pitfalls of multi-modal imaging, and why you should not be discouraged by rejections but instead focus on and be motivated by the paper acceptances and other highlights in your career.
Nils Muhlert (NM): You initially graduated with degrees in Physics/Biophysics. Now, one of your lab’s key interests is specific applications - such as understanding social cognition - albeit clearly facilitated by your expertise in imaging methods. Can you tell us about how your research focus has changed throughout your career?
Aina Puce (AP): My undergraduate degree was in Biophysics and my Masters degree was in Physics. For my Masters I was already recording EEG/ERPs in the operating room under anaesthesia – generating a frequency response of the visual system using sinusoidal visual stimulation through closed eyelids. During my PhD, I recorded intracranial EEG/ERPs from the hippocampus and temporal lobe for the purposes of identifying the epileptogenic temporal lobe in presurgical patient assessments.
My interest has always been tied to the relationship between brain and behavior. Over the years it has evolved from consciousness under anesthesia, to hippocampal integrity, to recognition memory of objects, to face perception, to recognition of face, hand and body actions, to multisensory perception, and now to the implicit recognition of emotions and other non-verbal signals. Seems like a lot of topics perhaps, but the evolving theme is how we make sense of our world. I owe a lot to my colleagues from the humanities: over the years they have patiently taught me so much about psychology.
NM: Much of your work involves imaging across modalities. Alongside the higher temporal and spatial precision, multi-modal imaging often involves the challenge of combining very large datasets. How have you got round these issues?
AP: Important question. When you study brain function using only one imaging method you will look at the world with a set of (rose-colored) glasses that give you only part of the story. We tend to forget that. Using multiple methods (either across or within subjects) keeps you honest, as you might get different answers to a scientific question. Then the onus is on you to get to the bottom of those differences, which means taking more time to study a problem. This can be frustrating, because at times you feel you are not getting anywhere relative to others in the field. At the same time, I would rather generate work that is reproducible and replicable by others! The field needs a solid foundation, and this can only be achieved by paying attention to data quality and also fully understanding the methods we work with.
With respect to large multimodal datasets, the biggest challenge right now as I see it is data quality control. Data will likely be analyzed by individuals who may not have expertise in data acquisition and artifact recognition/rejection. When multiple assessment modalities are involved, this problem becomes compounded.
Another challenge that I see relates to cloud computing and subject privacy. Increasingly, subjects in these big datasets will be patients. As more investigators around the world interact with these datasets there is an increased potential for hacking and accessing sensitive information. Having easy to use, but secure, user interfaces and procedures for interacting with big datasets is key.
Another critical component is user training on computer hygiene. I am continually horrified by what I see those who are not computer-savvy doing with data-archiving and sharing. We cannot blame these people as they have not been formally trained in this area, but these are the potential weak links in the chain. That said, user-training needs to be made meaningful and interesting and something that users view as important – and that is also a big challenge in my opinion.
NM: Where do you see multi-modal imaging going over the next 5 years?
AP: With respect to methods and scientific practice: these have been re-examined and will continue to be. With respect to neuroscience in general: I think that meso-scale neural interactions will be a major focus, as this work is critical to building bridges between systems neuroscience and molecular/cellular neuroscience.
Finally, for social neuroscience, measuring/monitoring brain and bodily function will also become more important as science moves more and more from a lab-based focus to a real-life one. Smart clothing used with dry electrode portable EEG systems and smartphone applications to gather data will become more common. Exciting developments in MEG sensor technology will continue, with attempts to develop higher temperature MEG devices and also flexible sensor helmets to better fit any head shape or size. This is a really exciting time to be involved in neuroscience!
NM: In your work on social attention you have proposed a ‘socially aware’ brain mode of social information processing. Can you tell us a bit more about this? How, if at all, does this brain mode map onto specific resting-state networks?
AP: I have recently been interested in how we use social information that we access implicitly to make social judgments or decisions about the behavior of others. Most lab-based studies in social neuroscience use tasks where subjects make explicit social decisions about others. Yet, this is so unlike what we do in real life. In our lab we use both implicit tasks (involving a ‘default’ mode, where there is an internal focus on achieving goals) and explicit tasks (requiring a ‘socially aware’ mode, where we make explicit social judgements), using the same stimuli in the same subjects. We found very different neurophysiology across tasks – explaining in large part the existing variability seen in the literature.
Relationship to resting-state networks? Excellent question! We have been looking at the EEG dynamics during these implicit and explicit tasks, but have not yet looked at resting state EEG in these same subjects. So this is something that I would like to look at in future work.
NM: What advice would you give an early career researcher to help them stand out in the hunt for competitive fellowships, grants and faculty positions?
AP: I usually tell everyone to find what their passion is. What topic of study really motivates you scientifically? Doing science is a perpetual set of ups and downs – often more down than up. If you follow that passion, you are more likely to be successful, because it will help you get through the bad times.
As for specific advice for early career researchers. First and foremost, find a mentor – a senior scientist who you trust, have a personal rapport with, and who can help you work on your desired career goals. They should be a good sounding board, but also be able to network you with other scientists and point out career opportunities you may not know about. OHBM has an excellent mentor-mentee matching service. I have been recently assigned to mentor two young scientists, and I am looking forward to interacting with them on-line and face-to-face at the OHBM meeting itself!
Second, network network network! Don’t be afraid to speak with senior scientists at scientific meetings – not just at your poster, but do it at the various social events. Getting to know someone can allow you to visit their lab (perhaps even on a short stay to analyze some data), and who knows what other opportunities that might lead to? Applying to competitive Summer schools can also give you this opportunity.
Third, seek feedback from peers and colleagues on your fellowship and grant applications. People do not do this enough. That said, it requires being organized – you need to allow time for people to read and give you feedback, so that you can make the edits before the submission deadline. Same thing applies for job talks or conference talks – in our lab no-one does a talk anywhere without doing a dry run first! This rule also applies to me, and I value the detailed and caring feedback I get from my trainees.
Fourth, you can stand out by being yourself – scientifically and personally. Scientists are by nature prone to eccentricities. I like to celebrate those. Your (hopefully positive) eccentricities make you who you are, and importantly make you distinctive and memorable to others. (I'll never forget a job candidate who told us that he had a pet tarantula. He got the job!)
NM: Next month, you’ll be a keynote speaker at the Brain Twitter Conference. Can you give us some insight into what you’ll be presenting - and what you think can be achieved through this online mini-conference?
AP: I will keynote tweet on the different modes of social information processing that I mentioned before.
What can be achieved with an online Twitter conference? A couple of things quickly come to mind. First, the conference builds a greater sense of community, allowing new connections between scientists around the world to be made through interactions generated in response to speakers’ tweets. (It is interesting to finally meet people at scientific meetings that you have been tweeting with.) Second, communicating one's ideas with a series of 10 Tweets makes one distill the absolute essence of the ideas to be presented. It allows the presenter, at least, to work out what is really important in the practice of their science.
NM: When did you become involved in Neuroimage - and how have you seen it develop over the years?
AP: I became a member of the Editorial Board in 2005, a Handling Editor in 2009, a Section Editor in 2011 and finally a Senior Editor in 2013. It has been wonderful to watch our field grow exponentially over the years and to work with so many dedicated and committed people in our NeuroImage family. Back in the early 1990s we had no outlet where (f)MRI-related work was welcomed, whereas work related to MEG and EEG was being published in well established neurophysiology journals. Today we have NeuroImage as well as Human Brain Mapping (which also began very early to meet the need to publish MRI-related work). It is terrific to see neuroimaging work so mainstream and regularly appearing in high-profile neuroscience journals. Indeed it is hard to keep up with it all right now!
NM: ...and finally, you’re currently serving on the program committee for OHBM. What does this role involve - and how can others contribute?
AP: OHBM is my tribe. As a post-doc I presented a poster at the very first OHBM meeting in Paris organized by Bernard Mazoyer in 1995. I have only missed a couple of OHBMs since then, due to issues related to visas... I have presented in Educational Courses, Symposia and given a Keynote, as well as chairing scientific sessions over the years. I was a member of Council from 1999-2002, where I was the Meetings-Liaison. Back then we did not have the wonderful Secretariat we have now, so the meeting organization was a bit different. Currently, together with Cyril Pernet I am Co-chairing a COBIDAS for MEEG committee for OHBM. I am also a member of the OHBM Scientific Program Committee – and right now this is a busy time for the committee. I want to give a huge shout out to Michael Chee and his very capable team in Singapore. World events forced the change of the meeting city at the last minute, and Mike and his team are making sure that OHBM 2018 will be just as successful as all of our other meetings. I am really looking forward to it!
BY DAVID MEHLER & KEVIN WEINER
Open Science (OS) is a movement toward increased sharing among scientists of their data, their materials, their computer code, their papers, and their peer reviews. The ultimate goal of this movement is to boost collaborative progress and bring greater transparency. Scientists might more rapidly build on results of others and help each other spot errors. Read more
BY THE OHBM BLOG TEAM
As brain mappers start to log off their computers, comfortable in the knowledge that their OHBM 2018 abstract is finally submitted, we, at the OHBM Blog, offer a round-up of our most interesting and informative posts from 2017. This platter of self-citations should provide sustenance for those experiencing neuroscience hunger pangs throughout the holiday season. Each of our main contributors provides insight into their favourite posts - and if you feel you’d like to contribute as a guest blogger next year, let us know!
As 2017 comes to an end, I think of the numerous ways that OHBM has promoted diversity since that first meeting in Paris, 22 years ago. I realized this while interviewing Marsel Mesulam, a longstanding academic inspiration of mine and a founding member of OHBM. I found myself entranced by Marsel’s recap of the organization’s history. He touched on the themes of discovery, flexibility, and evolution in the field of human brain mapping, and the importance of taking a step back and drawing inspiration from the brilliant diversity that is OHBM – be it the various imaging modalities highlighted, the composition of OHBM’s membership, comprised of both trainee and established members, or multidisciplinary interactions such as the annual art and neuroscience exhibits. I truly believe that OHBM draws its strength from its inherent diversity, an ingredient necessary to advance the understanding of the organization of the human brain. I look forward to ushering in 2018 with the OHBM community.
This year I met interesting people and learned interesting things through writing the blog posts. My favourites are: (1) the story of the first human fMRI experiment at the MGH (Mark Cohen interview); (2) finding out about the future of data sharing from David Van Essen; (3) the chaotic but pseudo-stable nature of brain connectivity; and (4) how stimulating the lateral prefrontal cortex makes people comply more with social norms. As a PI, I find that all too often, PIs and trainees may get tunnel vision, being committed to certain research findings and to propagating certain theories. It is true that we all have to publish and propagate knowledge as researchers, but the reasons that we became researchers in the first place may be something different. From working on the OHBM blogs, I got a sense of history, of different perspectives, of how successful scientists can reinvent themselves and stay true to their passions. That experience was both humbling and energizing. As we look to the start of a new year, I ask you one question: “Sure, doing science can be tough, especially these days, but what would you rather be doing with your time and your brains?”
I had a lot of fun interviewing Alan Evans ahead of the annual OHBM meeting in Vancouver. However, my favorite post this year was not written by me, but by Agâh Karakuzu, a student of mine who wrote about his impressions as a first-timer at the OHBM Hackathon. The pleasure came from guiding Agâh through the labyrinthine process of introductions, interviews, standard operating procedures and gruelling team edits, only to see his efforts validated by the overwhelmingly positive response from the community. I feel like the hackathon post provided exactly what the OHBM open science SIG needs -- easy entry points for the uninitiated. I hope this post will motivate other OHBM trainees to volunteer their time and energy, be it in making science more open, or in spreading the word about the exciting initiatives coming from OHBM in 2018.
Whilst I was mainly involved in editing this year (including setting up the gruelling team edits), I did get the chance to interview a number of the OHBM execs - finding out about the challenges of working as treasurer and chair. But my personal favourite was interviewing the 2017 program chair Mike Greicius. His clinically-focused work covered such a wide breadth, from direct stimulation of the anterior cingulate in those with epilepsy to amyloid PET imaging in people with Alzheimer’s disease. Fleshing out the stories behind these papers and finding out his career path seemed to me to be exactly what we’re aiming for with our blog. Looking forward to our blogposts in 2018, you can expect more clinical neuroscience, open science, advice for early career researchers and coverage of brain mapping from around the globe.
My favorite experience of 2017 was surely the completion of my graduate studies and moving from the world of graduate student into a fully fledged PhD --- feels like I’m finally part of the cool kids club! A close second, though, was my OHBM Interview with Susan Bookheimer. Susan’s neuroimaging work at UCLA is fascinating and diverse, and her attitude and moral convictions are bold and impressive. It was refreshing to have a scientifically stimulating conversation with someone who shares such strong opinions on personal accomplishments, women in science, and the importance of life outside the PhD.
The end of this year marks my first year as a blog team member! Through the changing weathers, my work has changed colors, flourishing into an array of insightful posts on a variety of topics. It was fun liaising with the OHBM Student Postdoc SIG and writing on early career issues such as mentorship. But through all this, my favorite was interviewing my postdoc advisor Lucina Uddin as an OHBM Young Investigator Awardee! It was enriching to see what makes a young investigator. I loved writing about things she is passionate about and sharing the pearls of wisdom that I, as her postdoc, have gleaned from her presence and mentorship. It was also inspirational to interview stalwarts such as Damien Fair, and see what a significant role mentorship has played in their careers. Personally, it was reassuring to know that I am surrounded by scientific experts who also value building mentee careers!
In addition to contributing to this blog, I am a member of the 2018 OHBM local organizing committee. These two roles nicely intersected as my postdocs (Csaba Orban and Valeria Kebets) and I put together a blog post introducing Singapore as the location of next year’s annual meeting of OHBM. I hope everyone is as excited as I am that OHBM will be held in Singapore. Look forward to seeing everyone here!
...and we’d like to thank all our contributors and interviewees, Sarabeth Fox for filming, and Randy Gollub, Niko Kriegeskorte, and especially Stephanie McGuire for their help in keeping the blog running!
Interested in suggesting a topic or writing a guest post for 2018? Contact us at email@example.com
Neuroimagers face an ocean of software tools that have the potential to make research transparent, sharable, and collaborative. Open science is on the horizon, and we could use some instructions on how to ride the open software wave. We couldn’t think of better software surfers than Michael Hanke and Yaroslav Halchenko, the creators of NeuroDebian. NeuroDebian is a curated ecosystem of neuroscience research software that runs on virtually everything. It originally started as a personal convenience tool in 2005, then went above and beyond its initial purpose, finally transforming into a widely-used and globally accessible platform.
AK: What is NeuroDebian? Can you briefly explain it in layman's terms?
Michael: NeuroDebian was started almost 12 years ago, and was originally intended to provide a correctly integrated software ecosystem. This is not only about shipping software to another computer; it is a full system-integration effort, so that independently developed software can work together in one system. NeuroDebian is basically a front for the Debian system, a decentralized and democratic effort of individuals working together to develop the universal operating system. Think of it as a neuroscience-capable operating system.
Yaroslav: Let me add a really layman definition. It’s like your cellphone with its app store, except it is not just a cellphone: it can be your laptop, your PC or even a computing cluster. NeuroDebian and Debian provide this whole turn-key platform where you have an app store offering accessible software. All that software is already available for you, and a Dockerfile or Singularity file can describe which pieces to install. I think we cannot get more layman than that.
Michael: Did you say lame?
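Yaroslav’s app-store analogy can be made concrete with a container recipe. The sketch below is purely illustrative: it assumes the official neurodebian base image on Docker Hub and a couple of NeuroDebian package names, but tags and package names change over time, so check the NeuroDebian website before copying it.

```dockerfile
# Illustrative sketch only: build a container on the NeuroDebian platform.
# The base-image tag and package names are assumptions; verify them against
# the NeuroDebian package list for your release.
FROM neurodebian:latest

# Pull research software from the Debian/NeuroDebian "app store";
# dependencies are resolved by the package manager.
RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        fsl-core \
        python-mvpa2 && \
    rm -rf /var/lib/apt/lists/*
```

Singularity recipes follow the same pattern: name the packages you need, and the package manager installs them together with their dependencies.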
AK: NeuroDebian was not initially intended to be a global project. Tell us about how it grew into something bigger.
Yaroslav: It started from scratching our own itch. We were doing our PhDs (or so we thought), and needed software to analyze data. We were already Debian users, and thought it would be cool if we could share our work: I packaged and maintained PyEPL, and Michael did the same for FSL. We soon realized that there was interest in more than those two software tools. The catalyst was us joining Jim Haxby’s lab at Dartmouth. In a frenzied week in 2009, we packaged lots of potentially useful software. Then we established NeuroDebian as it is known now.
AK: The spirit of free software is at the heart of NeuroDebian. It is a community driven development. How do you manage all those remote contributions?
Michael: Yes, it is a global enterprise with many people contributing to it. But it is also, given its size, complexity and the amount of effort that goes into it, by far not international enough. For the amount of impact it has, it is actually a fragile enterprise. It has taken up considerable time and effort from researchers, even though the outcomes are not going straight into their PhD or research projects. Some software is more difficult to integrate than others. I don’t want to single any out, it just symbolises the whole situation and reflects the world we are in. Tools are kept behind closed doors until the people who developed them are properly credited.
Yaroslav: In comparison to where we were 12 years ago, we are now in a very different ecosystem. In getting there we used many of the concepts of open science and open software, such as continuous integration. Without that we’d be fixing bugs for every release. To ensure scalability, we encourage people to test their software, and even though this is not directly related to NeuroDebian per se, it ensures that software remains working on all systems. Another angle is that we share the responsibility. Michael packages something and I package something else, and then we share it so that anyone can benefit. The same goes for the teams within Debian (Debian-Med, Debian-Science) which we are also part of, and we maintain many packages together with those teams. We are not duplicating anyone’s effort. Other team members do their part and we often just borrow relevant neuroscience research packages and backport them for all Debian and Ubuntu users. Everyone is happy at the end of the day. Once again, it would be impossible if these were done in an independent fashion.
AK: We hear about backporting. Is there a simple explanation for that?
Yaroslav: It takes a lot of effort to stabilize a release. That’s why the most stable Debian releases are usually the most outdated ones. This takes us back to Michael's comment about integration. When you put together software that is supposed to work together and then stabilize it, it eventually remains stable for years. But research software has different needs: researchers want to adopt new methods as quickly as possible. So we take a stable Debian release and put new software versions into it. If we were to upload fresh tools to the unstable version of Debian and wait until it became stable, it would take too long. This is where backporting comes in handy. It allows us to provide the most up-to-date software for Debian and all Debian derivatives, such as Ubuntu.
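In practice, backported packages reach users through an extra apt repository layered on top of a stable release. The entry below is a hypothetical example of such a sources list; the actual mirror URL and release codename should be taken from the repository selector on the NeuroDebian website.

```
# /etc/apt/sources.list.d/neurodebian.sources.list (illustrative only)
# Replace the mirror and the codename ("stretch" here) with the ones
# suggested at neuro.debian.net for your system.
deb http://neuro.debian.net/debian stretch main contrib non-free
deb-src http://neuro.debian.net/debian stretch main contrib non-free
```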
AK: The name gives the impression that NeuroDebian is exclusive to the Debian OS. What about researchers accustomed to OS X or Windows?
Yaroslav: We saw cases where people installed NeuroDebian on a virtual machine and after a while realized that they keep their OSX or Windows in the background all the time. So they installed Debian as dual-boot at the beginning and then wiped out anything else but Debian. There are other people who prefer to keep their original OS, because they need to use Microsoft Word. You can even run NeuroDebian within NeuroDebian.
AK: How is NeuroDebian maintained, where do the resources come from?
Yaroslav: First it was our advisors. They allowed us to pursue this instead of our research. Some projects provide a small proportion of the funds needed to support NeuroDebian. But overall, NeuroDebian has never received dedicated funding as a project. Whenever we apply, there is always one reviewer who says “this effort matches Friston’s brilliance with SPM”, and another who says “it is just some packaging, who cares”.
Michael: I think we should name-drop here. Yaroslav’s advisor was Stephen Hanson, who paid for this for years. My advisor was Stefan Pollmann, who did the same. After many unsuccessful applications, I stopped thinking about grants. Once we wrote a grant with 40 letters of support, but reviewers questioned the letters’ legitimacy. The next time we gave them twice as many, and that was still not enough. The best comment we received was ‘this doesn’t fix any disease, try elsewhere’. Most of the stuff we do cannot be tied to a single purpose. We are not the ones achieving it; we are enablers. If any blog readers have recommendations, or are sitting on a pile of money, please contact us! Very recently, the CBBS research center in Magdeburg has started funding work that relies heavily on NeuroDebian and thereby helps maintain it for everyone else, too.
AK: We know custom installations of neuroscience research software can be painful. How easy is it to install them on NeuroDebian?
Michael: It really depends on the complexity. You can have a really easy installation after downloading gigabytes of binaries that are pre-compiled for your system. But if you want to install it again, you need to repeat the same steps. In Debian, however, you have all the benefits of automatic upgrades. Otherwise you would be saying, “I use 150 software packages on this computer and I need to download and update all of them by hand”. The more complex the system you use for your research, the greater the benefit of a package-manager system.
Yaroslav: Do you remember how difficult it is to install apps on your phone and maintain them? Not really! You just click, install them, and they keep updating themselves automatically. This is essentially the same thing. Once your research software is installed you don’t need to think about where it comes from or how to update it. It just gets done and it works.
Michael: Most people do not have administrative privileges on the hardware or the execution environments they use. Think about institutional clusters, think about workstations managed by someone else. In general, you need to call a system administrator and say which software you need installed. If you are on Debian, this only takes a few minutes.
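On a Debian system with the NeuroDebian repository enabled, that “few minutes” claim boils down to a couple of package-manager calls. The package names below are examples that existed in NeuroDebian at the time of writing; substitute whatever your lab needs.

```shell
# Refresh the package index, then install tools by name;
# dependencies are resolved and installed automatically.
sudo apt-get update
sudo apt-get install fsl-core afni

# Later, a single command upgrades every installed package at once.
sudo apt-get upgrade
```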
AK: Most of the processing pipelines have a heterogeneous working environment with strict version dependencies. This is a serious threat for sustainability and reproducibility. How does NeuroDebian tackle this problem?
Yaroslav: A while ago, we created the NeuroDebian virtual appliance. By downloading this virtual machine image, different people can have an identical environment. There are additional projects that we can benefit from. One of them is http://snapshot.debian.org/, which takes snapshots of the entire Debian archive repository twice a day, so you can recreate any Debian system you used in the past. We now do the same thing for NeuroDebian, although it is not fully public yet. This way, if you know that your software environment previously existed, you can re-establish it. It can also be used to validate the compatibility of your current system. We are pursuing this through the ReproNim project, which can track versioning for even more crowded environments.
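Using snapshot.debian.org amounts to pointing apt at a dated archive URL. An illustrative sources entry follows; the timestamp here is made up, and any snapshot date the archive actually holds will work.

```
# Recreate the package universe exactly as it existed at a given moment.
# Timestamp format: YYYYMMDDTHHMMSSZ (this particular one is hypothetical).
deb http://snapshot.debian.org/archive/debian/20170601T000000Z/ stretch main
```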
Michael: You can keep using a script-generated environment for as long as the underlying components exist. But in my experience, many claims of strict version dependencies are convenient excuses rather than real dependencies. Often it’s because somebody spent time manually assessing whether a piece of software works as it should. That’s why they pin that strict dependency: they are not willing to go through that manual effort again. In this case you are following a trust-and-hope model. No manual validation done on one machine scales to another. Remember the paper showing surprising differences in FreeSurfer outputs depending on the execution environment. You need high-level tests for machine-based verification, so you can make sure that when you ship it to your users, it will work. NeuroDebian offers solutions to this problem by enabling the inclusion of build-time and run-time tests.
AK: How can developers get their software into NeuroDebian?
Yaroslav: You should email us to start with, or email the Debian mailing list. For some people who reached out to us, we packaged and maintained their software. Others packaged it themselves and gave it to us to review and upload. So there are multiple ways to achieve this. But if you mail us first, we can figure out the optimal way. Just remember, if it is left to us to package, it might take longer, for the resourcing reasons discussed above.
Michael: There are so many people and so many different projects in this field, not necessarily limited to neuroimaging. It doesn’t matter who you contact, as long as your initial e-mail contains all the relevant information. There are key questions you should be able to answer immediately before you contact them.
AK: How do you see the future of open science? Which role will NeuroDebian play along the way?
Michael: I am happy to say that open science is the present, but surely it is also the future. Open Science is not just doing science in a different way, it is the only way. All the excuses people had not to be fully transparent while spending public money for public research can be justified only by the technology limitations of the past. Today, we have surpassed those limitations. There is no point in keeping things secret. In addition, open source software development is the role model of open science. If you look at collaborative coding platforms like GitHub, you will see how low the threshold is for people to work together. The same will happen with open science, and things will become much more interesting and faster.
Yaroslav: Open science has already benefited from various standardization projects. For example, the open brain consent forms enable you to be open from the beginning. You don’t have to reveal all your cards, but you kind of pave your way, so that later on you can open up your research when the time is right.
Michael: One thing to plug is DataLad. Many things we learnt in collaborative open-source projects have manifested themselves in our DataLad project. It is the fusion of code and data, with reliable dependencies on datasets and versions. So everything we talked about regarding software can be mapped one-to-one onto data. In the future, you’ll probably see more and more efforts like this.
AK: Tell us more about DataLad.
Michael: DataLad is for data what Git is for repositories. I use it for my students, who have no idea about Git. We use it for projects, such as the StudyForrest project. DataLad can move data in a version-controlled fashion from one machine to another: you can publish unilaterally to GitHub and cloud storage. In the next release, we will provide greatly enhanced metadata support, so you can do data discovery across datasets that you don’t even have locally. Furthermore, you can ask DataLad to watch a website! For example, if you were following an fMRI dataset and something changed, DataLad will let you know what has changed, why it has changed and how the code was modified.
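A minimal sketch of the workflow Michael describes, using real DataLad subcommands (`install`, `get`, `save`) but an assumed dataset URL; where the StudyForrest data actually lives should be checked on the project’s website.

```shell
# Install a dataset: fetches its version history and file listing,
# but not yet the (potentially huge) file content.
datalad install https://github.com/psychoinformatics-de/studyforrest-data-phase2

# Retrieve actual content on demand, in a version-controlled fashion.
cd studyforrest-data-phase2
datalad get sub-01

# Record a local change so it can be published, e.g. to GitHub.
datalad save -m "note which files this analysis used"
```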
AK: Anything we did not ask? Would you like to add something?
Yaroslav: Do not be silent. If you use something, say it. If you use something and it does not work, say it. I am considered one of the biggest complainers in the community, but if we keep quiet it doesn’t get fixed. Constructive feedback is very much appreciated here. This is not just about NeuroDebian.
Michael: The NeuroDebian page shows the popularity statistics. There are about 300-500 machines a week that install NeuroDebian (or at least fill out the registration form). On the other hand, if the number of feedback emails goes above 10, it is considered a heavy week. There are two ways to interpret this: 1) This thing works! 2) It is a silent climate, and you are dealing with an undefined object. We don’t track people. It would be nice if people even told us that there is nothing broken.
BY MICHELE VELDSMAN AND SHRUTI VIJ
Academia provides a unique set of challenges throughout one’s career. It is often highly competitive and uncertain. The evolution of science is unpredictable and this can leave researchers, at all stages, unsure of what next steps to take, how to manage their careers, or build their confidence. Mentorship is key to navigating a career in the face of this uncertainty. Successful mentorship requires an unbiased perspective from an experienced individual within academia who is dedicated to your personal and professional development.
This year, the OHBM Student and Postdoc SIG launched an international, online mentoring programme. This novel programme pairs researchers of all levels across the globe. By pairing individuals across the international community, mentors can bring a fresh, objective perspective to the relationship, while mentees provide a unique window into the changing landscape of research. Over 400 OHBM members enrolled in the programme in the first 6 months!
To provide a real platform for mentoring, pairs were encouraged to launch their mentoring relationship in person at the annual meeting. More than 180 pairs met for the first time in Vancouver and have continued their relationships online via email and video chat. In an effort to get more human brain mappers into active mentoring relationships outside of their current environments, new enrollments have begun for another round of pairing mentors and mentees. To sign up, visit this website.
Here we present the first of a series of short interviews on the experiences of mentors and mentees, giving a personal insight into the benefits of the programme! The interviews also include general advice that mentors have for trainees, and discussions of the challenges that early-career researchers face. We start with our Blog Team Captain Nils Muhlert and his experience in the mentorship programme. He was paired with Professor Robert Turner, one of the pioneering physicists behind the development of MRI and fMRI.
Michele Veldsman (MV): Nils, you signed up to the programme looking for a mentor and volunteering to mentor trainees. Why do you think mentoring is important in academia?
Nils Muhlert (NM): As a PhD student, and early postdoc, you’re still safely tucked under a senior researcher’s wing. While you can become more independent in these stages, there’s still someone offering (largely) independent advice. As you progress into faculty positions, these sources of advice are no longer a formal part of your work.
Despite mentors being less common in later stages, I’ve always wanted a good source of advice in my career. For instance, one of my concerns has been not having a clearly defined research area. It’s useful to speak to those who have reached a high level in their career – who can look back and see what might have been useful for them, and the missteps that may have frustrated others. This was an issue on which Robert Turner (my OHBM mentor) offered helpful advice.
MV: What has been your experience of the mentoring programme so far? Have you seen any benefits?
NM: I was pretty impressed to see that Prof Turner would be offering advice. I knew his work, and had seen him give some good talks in the past (and ask some tough questions). I sent along my CV and explained the stage I was at. I also mentioned my main concern: how to balance multiple research interests with dwindling time, particularly given lecturing workload. The response I received was insightful:
Advice from Professor Turner:
“What worked for me as a scientist is having the determination to focus on crucial neuroimaging questions, and to put my available time fruitfully into the development of new techniques to address them. At important stages of my earlier career, I tried to avoid investing large amounts of time on sideline projects (though I have always been very happy to be a contributor to some papers).
Great outcomes tend to follow from intense concentration. When I was working on gradient coil design, back in the 1980s, I was doing algebra and writing the ensuing novel software 16 hours per day. As a result, I came up with the shielding equation that revolutionized gradient design. When you have broad interests, it's very important to reflect on what questions really matter to you, and what you are really skilled at--because to put in the necessary commitment you absolutely need the enthusiasm. You seem to have already proved yourself to the extent that you should be able to pick and choose what you are most enthusiastic about. You can always put other interesting problems on the back boiler--with luck you can pick them up again when the time is absolutely right.”
MV: What do you think are the biggest challenges in navigating a career in neuroimaging?
NM: As discussed, I find knowing what to focus on to be difficult. There seems to be differing advice on this – for some, like Tianzi Jiang, looking to predict future trends has proved useful. Robert’s view was that we should also aim to be ambitious enough to create future trends: “The key to this is to work out what are the right questions--questions that are fundamental, overlooked, simple, and answerable.” Alternatively, in a careers-advice talk, Ralph Adolphs suggested maintaining a varied portfolio of research, so as to be eligible for a broad variety of funding. Clearly, different strategies work for different people – but having tried the broad approach, and following Robert’s advice, I feel that moving back towards fewer areas of research fits my current career stage. Whether I can fit in working 16 hours a day alongside looking after a messy 3-year-old is, however, another matter!
MV & SV: As a window into an evolving and developing programme, we have received overwhelmingly positive feedback from mentees and mentors at all academic stages. We will continue to highlight this feedback in a series of similar posts from other mentors and mentees from a range of backgrounds and career paths. Meanwhile, if you are interested in learning more about the programme, please contact the OHBM Student Postdoc SIG at firstname.lastname@example.org and, to sign up, visit this website to complete a short questionnaire that will help match you with a suitable mentor or mentee. This sign-up round will close on 17th December 2017 and new pairs will be assigned shortly after.
This month OHBM launched its new website. This long-standing project dates back to 2014, when a membership-needs survey revealed that OHBM members wanted more online resources for educational advancement and career opportunities. There was also a desire for outlets in which to promote new influential research and controversies in human brain mapping and engage our members outside of the annual meeting - issues now addressed through the OHBM blog, the non-member media, and social media accounts. For the website, members wanted this new functionality in a clear, intuitive layout that was more responsive to dynamic content and the changing needs of the brain mapping community. Thus, the story of a new website began.
The challenge to design and implement the new website was taken on by the website team within the OHBM Communications Committee. This 8-person-strong team (Fernando Barrios, Roselyne Chauvin, Christophe Phillips, Jessica Turner, Stephanie McGuire, Jae Gullings) led by Jeanette Mumford and Lisa Nickerson, integrated feedback from the survey with the set of existing webpages, updated content, and re-organized them by type for easier user navigation. They also considered how to merge the website with the new educational material provided by the OHBM On Demand service, a platform that hosts past OHBM courses and talks. These now openly available videos and slides allow anyone interested to watch keynote lectures from Susan Bookheimer, Tim Behrens, and other high profile brain mappers, as well as many educational courses, such as diffusion MRI, advanced fMRI analyses and hundreds of other topics covering state-of-the-art methodologies and clinical applications of neuroimaging.
The website team developed ways to integrate and present various streams of OHBM news that emerge throughout the year, with front-page links to the OHBM blog and lay media. They created a new searchable career center, allowing job seekers to post their CVs & search jobs, and employers to post new positions and search CVs. For important announcements, the team created a “what’s happening” section on the front page, letting users know about abstract submission deadlines, the Replication and Young Investigator awards and other OHBM initiatives. The new format also made it easier to locate and view the dedicated webpages for COBIDAS, special interest groups and OHBM chapters. Critically, the website now displays more easily on mobile phones and tablets - providing all this info in a way that suits our busier lives.
Last, our new website includes a dedicated space for each year’s annual meeting website, creating consistency and better organization of information. You can find out answers to frequently-asked questions, register for the hackathon, see featured speakers when they are announced and check out the sponsors and exhibitors. Our advice: have a few clicks around the website, give us your feedback, and most of all, enjoy your new hub for all things brain mapping!
Check out the new site: www.humanbrainmapping.org
By Csaba Orban, Valeria Kebets, Thomas Yeo on behalf of the OHBM 2018 Local Organizing Committee (LOC)
As you may have heard, OHBM 2018 will be hosted in Singapore. Come join us to learn about the latest developments in neuroimaging methodologies and applications, and to meet like-minded scientists in a highly interdisciplinary forum. This year’s meeting will have a special focus on multimodal imaging and an LOC symposium on mapping functional connectivity to behaviors in young and aging populations, while also featuring the ever-popular educational sessions and the must-attend Hackathon. You can visit our dedicated brain-in-sinc website, which highlights attractions in and beyond Singapore, delicious cuisine, and essential travel tips from our local volunteer group.
Singapore (aka “the Lion City”, “the Small Red Dot”) is a small island-state located in the heart of South-East Asia between Malaysia and Indonesia. Originally a small trading post under British rule in the 19th century, over the past few decades Singapore has remarkably transitioned from a third world to a first world economy. Paralleling this economic transformation, Singapore has invested heavily in its educational system, infrastructure, and biotech industry.
Singapore has a young, but rapidly growing neuroimaging community, fostered by its collaborative ethos, investment in state-of-the-art infrastructure, and ability to attract both foreign and local scientists. The neuroimaging community is distributed across the Duke-NUS Medical School (Duke-NUS Centre for Cognitive Neuroscience), National University of Singapore (NUS), Nanyang Technological University (NTU), the Lee Kong Chian School of Medicine (LKCMedicine), A*STAR-NUS Clinical Imaging Research Centre (CIRC), Singapore Institute for Neurotechnology (SINAPSE), National Neuroscience Institute (NNI), Yale-NUS College and the Singapore BioImaging Consortium (SBIC).
Situated 144 km north of the equator, Singapore is a tropical metropolis, where skyscrapers meet lush gardens and canopies, nestled between natural reserves rich with wildlife. Singapore has a uniquely diverse history, where customs, cuisine and architecture reflect Chinese, Malay, Indian and British influences. Attendees can explore Singapore’s rich cultural heritage on foot by visiting Victoria Theatre and Concert Hall, originally the Town Hall of Singapore in the 19th century, the spectacular architecture of Sri Veeramakaliamman Temple in Little India or the Peranakan shophouses on Emerald Hill and in Kampong Glam. For museum lovers, Singapore offers plenty of attractions, including the Peranakan Museum, National Gallery, ArtScience Museum, National Museum of Singapore, and the Asian Civilisations Museum. Those who want to switch off can take a stroll in the Singapore Botanic Gardens, go for a hike in Bukit Timah Nature Reserve, or take the TreeTop Walk at MacRitchie Reservoir Park.
Visitors can partake in the country’s national sport: eating out, and try an unparalleled variety of foods. Singapore is famous for its hawker centres where collections of individual, often family-run, stalls offer an assortment of culinary delights at affordable rates. OHBM attendees will have the chance to experience authentic specialties such as Hainanese Chicken Rice, Laksa, Chili Crab, Kaya Toast, Satay, Sambal Sting Ray and Char Kway Teow. In 2016, Hill Street Tai Hwa Pork Noodle and Hong Kong Soya Sauce Chicken Rice and Noodle became the first hawker stalls in Asia to attain Michelin star status, while serving meals from S$2 (US$1.42). Singapore also offers an impressive range of fine-dining options, covering local, Asian, European and Fusion cuisines. Visitors can pair their meal with breathtaking views of the city in one of Singapore’s many high-rise restaurants, such as the Equinox Restaurant on the 70th floor of the Swissôtel, Cé La Vi on the top of Marina Bay Sands, or Level33, the world’s highest microbrewery, overlooking Marina Bay.
Getting around the city is easy, thanks to its modern, efficient and affordable transportation system. The conference will be held at the Suntec Singapore International Convention and Exhibition Centre, which is located near prime attractions of Singapore such as the Esplanade, Gardens by the Bay, the National Gallery and the Raffles Hotel (where the Singapore Sling was invented). Singapore should be easily accessible for all conference attendees, as Changi Airport is the 2nd most connected airport in Asia, serving flights to 380 cities in 90 countries.
Remember to submit your abstracts by Dec 15th, 2017 at 11:59 pm EST. We look forward to welcoming you in the Lion City in June, 2018!
BY JEAN CHEN
David Van Essen is the winner of the prestigious 2017 Glass Brain Award from the Organization of Human Brain Mapping (OHBM). David is the Alumni Endowed Professor in the Department of Neuroscience at Washington University in St. Louis, and he chaired the department for 20 years. He was Principal Investigator (PI) for the original Young Adult Human Connectome Project (HCP, jointly with Dr. Kamil Ugurbil from the University of Minnesota) and is currently a co-PI for the Lifespan HCP Development, Lifespan HCP Aging, and Connectome Coordination Facility projects. He was also founding chair of the OHBM Council.
The Glass Brain Award is selected annually "to recognize lifetime achievement by leading researchers using or facilitating neuroimaging to discover original and influential findings". David was recognized for his illustrious career of charting the brain over more than 50 years. His journey in brain mapping has taken him from Harvard to Oslo to University College London (UCL, as a postdoctoral fellow), and then as faculty at Caltech and at Wash U.
Jean Chen (JC): When and how did you begin as a “brain mapper”?
David Van Essen (DVE): My adventures in brain mapping began in 1975, when I was studying extrastriate visual cortex in the macaque monkey as a postdoc at UCL. To deal with the complexity of cortical convolutions, I became a “cortical cartographer” by developing a pencil-and-tracing-paper method of making cortical flat maps, akin to flat maps of the earth’s surface. Later, my lab and others were able to computerize this process, thereby ushering in the modern era of computerized cortical cartography.
JC: What do you consider your proudest academic/career achievements (aside from the Glass Brain Award)?
DVE: On the scientific front, I consider my top four achievements to be:
1) Proposing that cerebral cortex is a distributed hierarchical system based on patterns of anatomical connectivity (with John Maunsell and Dan Felleman)
2) Hypothesizing that mechanical tension along axons and dendrites is a fundamental driving force for brain morphogenesis, including cortical folding.
3) Leading (with Kamil Ugurbil) the Human Connectome Project and helping develop and expound the ‘HCP-style’ neuroimaging paradigm for data acquisition, analysis, and sharing.
4) Mapping (with Matt Glasser) a new 180-area-per-hemisphere multimodal human cortical parcellation.
JC: As PI of the Human Connectome Project, how did your involvement begin, and what do you see for the outcome and future of the HCP?
DVE: In 2009, NIH announced a competition for the Human Connectome Project, triggering a flurry of discussions at Washington University (‘WashU’) and at many other institutions. I emerged as the leader of the WashU effort, and we later decided to team up with the University of Minnesota, Oxford University, and several other institutions to establish a consortium with broad and complementary strengths. Once the HCP was awarded in 2010, helping to lead this project became my primary research activity, and it emerged as the most exciting adventure of my scientific career. The success of the HCP can be measured in a variety of ways; e.g., nearly 10,000 investigators have agreed to HCP data use terms; more than 400 publications acknowledge HCP data use; and these numbers continue to grow.
The original ‘young adult HCP’ wrapped up in 2016. It has been supplanted by a three-pronged effort, all predicated on HCP-style neuroimaging. (1) NIH awarded three Lifespan Human Connectome Projects to elucidate brain circuitry across the lifespan, during healthy development, maturation, and aging; I am one of the Principal Investigators for the Lifespan HCP Development and Aging projects. These projects are complemented by the Developing Human Connectome Project (dHCP) in Britain, which is studying brain development prenatally and at birth. (2) NIH has also funded 14 projects under the Connectomes Related to Human Disease (“Disease Connectome”) umbrella; each of these projects studies brain circuitry in a particular brain disorder. (3) All of the data from the Lifespan projects and the Disease Connectome Projects will be freely shared via the Connectome Coordination Facility (CCF) that Dan Marcus and I jointly lead.
JC: In 2017, you were elected to the prestigious National Academy of Sciences. Can you describe for us how it came about and how it will influence your research going forward?
DVE: One morning last May, I received an unexpected phone call while working at home, with the terrific news about my election to the NAS. The news spread quickly, and I was soon enjoying a veritable blizzard of congratulatory emails. The actual induction ceremony will be in the spring of 2018.
While it is deeply gratifying to receive acknowledgment for lifetime accomplishments, I don’t anticipate that Academy membership will strongly impact my research focus or agenda. I still receive my greatest enjoyment from working in the scientific trenches with students, staff, and collaborators.
JC: Your 2016 publication in Nature (“A multi-modal parcellation of human cerebral cortex”) generated much excitement. This parcellation was based on cortical structure, function, connectivity, and topography, and identified 97 new brain areas. What’s the next step in this line of research?
DVE: This study is truly a highlight of my scientific career, but the lion’s share of the credit goes to Matt Glasser (my grad student at the time) for his vision and tenacity in driving the project to fruition. Of particular importance is that our ‘areal classifier’ approach allows parcellation of individual subjects, as long as a sufficient amount of high-quality multimodal imaging data has been acquired.
Several interesting next steps spring to mind. Applying the HCP multimodal parcellation strategy (using the areal classifier) to subjects from the Lifespan HCP Development and Aging projects should reveal whether some cortical areas get larger, smaller, or change their connectivity with maturation and/or aging. Applying the same approach to the Disease Connectome datasets will hopefully reveal areal differences related to brain disorders, and could potentially serve as valuable disease-specific biomarkers. We also hope that investigators in a variety of other arenas use our freely available multimodal parcellation (and the areal classifier, once it is publicly released) to aid in their own research projects by more accurately localizing and analyzing various phenomena and regions of interest.
Finally, I am hopeful that the HCP multimodal parcellation and associated connectivity-related data will enable an important test of my 1997 hypothesis that axonal tension drives cortical folding. The key question is whether folding patterns in individual HCP subjects can be predicted by an analysis of ‘parcellated connectivity’.
JC: You’re considered an “activist” for data sharing. How did this passion begin, and what role do you think large-scale data sharing will play in brain research?
DVE: I became interested in data sharing in 1989-90, when I served on an Institute of Medicine committee that generated a report “Towards a National Neural Circuitry Database” and helped launch the “original” Human Brain Project in 1993. The HBP was led by two visionaries, Steve Koslow and Mike Huerta, and it emphasized data sharing from the outset. My Human Brain Project grant was first funded in 1994 and is now in its 24th year, with a sustained focus on informatics tools and data sharing.
Data sharing has been spectacularly successful and vital in genomics, proteomics, and other molecularly oriented domains of bioinformatics. In contrast, systems neuroscientists, and neuroimagers in particular, have been slow out of the neuroinformatics gate. This is finally changing, thanks in part to the success of several large-scale neuroimaging projects, including the HCP and OpenfMRI, that share not only unprocessed data but also (especially for the HCP) ‘minimally preprocessed’ data.
Sharing of large-scale neuroimaging datasets is just the tip of an important iceberg. Another important objective is to facilitate sharing of extensively processed data, such as data associated with published figures. To address the challenge of organizing and sharing complex datasets, our neuroimaging visualization and analysis software, Connectome Workbench, uses ‘scene files’ to store all of the information needed to replicate exactly what is displayed in published figures. Investigators can upload scene files and their associated datasets to the BALSA database developed by my lab, where anyone can browse them to see BALSA in action.
JC: What do you foresee as a next “big thing” in brain mapping? An exciting new research direction or development?
DVE: It’s important to note that many “big things” consist of many small or medium advances that work well in combination. For example, the success of the HCP stems from numerous advances in data acquisition, analysis, and sharing, only a few of which were ‘big’ (such as “multiband” imaging for fMRI and diffusion imaging). I hope that one big thing in human neuroimaging over the next few years will be an accelerated transition, in which a large majority of investigators adopt the best available among existing approaches to data acquisition, analysis, and sharing rather than sticking with ‘traditional’ methods that have been shown to be sub-optimal yet still dominate the field. Beyond that, I anticipate continued exciting advances in neuroimaging methodology, some of which may be game-changers in terms of acquiring and analyzing high-resolution data at the level of individual cortical layers and columns.
Regarding invasive brain imaging in animal models, I stand in awe of the explosion of new methods for charting brain structure, function, connectivity, and gene expression at both the microscopic and mesoscopic levels. These advances have far from run their course, and indeed are likely to accelerate through ongoing investments such as the BRAIN Initiative and major private funding sources (e.g., the Allen Institute and the Chan Zuckerberg Initiative).
JC: Despite how much we know about the brain now, what do you consider to be the biggest challenge(s) for brain mappers, especially in this age of “big data”?
DVE: It seems that the more we know about the brain, the more we realize how much remains to be deciphered. The era of big data brings an exciting but daunting challenge to center stage: how can we weave together truly staggering amounts of complex data at different scales in space and time? Capitalizing fully on these vast treasure troves of information will require major advances in neuroinformatics and computational neuroscience. I predict that over the next several decades neuroinformatics and computational neuroscience will be radically reshaped and will in turn have a transformative effect on our ability to understand the brain in health and disease.
JC: What two pieces of advice do you have for young and aspiring scientists?
DVE: First, stay close to your data! In general, many complex processing steps occur between the initially acquired data and what ends up in a publishable figure and its associated quantitative analysis. For your own research projects, know exactly what happened to your data; be your own devil’s advocate regarding methodological problems and potential biases that might impact your interpretation; and temper your conclusions accordingly. When attending to the research of others, be constructively critical but as even-handed as possible.
Second, gain as much depth and breadth as possible for whatever research area you’ve chosen. This is very challenging in modern neuroimaging, so it is also important to network closely with others who can share complementary expertise needed for whatever project(s) you’re working on. Fortunately, a growing on-line community and associated resources can allow everyone to participate, even those from small institutions and/or laboratories.
[OK, this is one over the limit, but I can’t resist!] Follow your scientific passions! In today’s intensely competitive academic environment, one key for success is to be passionately interested in the scientific and technical issues you’re working on, so you can sustain the drive and energy to make it through the ups and downs of a research career.