Permutation methods are a class of statistical tests that, under minimal assumptions, can provide exact control of false positives (i.e., type I error). The central assumption is simply that of exchangeability, that is, swapping data points keeps the data just as likely as the original. With the increasing availability of inexpensive large-scale computational resources and openly shared, large datasets, permutation methods are becoming popular in neuroimaging due to their flexibility, and because they raise fewer concerns about yielding nominal error rates than parametric tests, which rely on assumptions and/or approximations that may be difficult to meet in real data. This becomes even more important in the presence of multiple testing, in that assumptions may not be satisfied for each and every test, and the correlation across tests may be difficult to account for. However, even exchangeability can be violated in the presence of dependence among observations, and it may not always be clear what to permute. The aim of this blog post is to emphasize the relevance of linking the null hypothesis and the dependence structure within the data to what should be shuffled in a permutation test. We provide a few practical examples, and offer some glimpses of the theory along the way.
Example 1: Permutation mechanics
Let’s begin by reviewing the mechanics of a permutation test. Consider a comparison between two groups, for example whether hippocampal volume is different between subjects with Alzheimer’s disease (AD) and demographically matched cognitively normal controls (that is, a group with similar age, sex, education level, etc). If we assume that in both groups the hippocampal volumes are independent samples from a Gaussian distribution, a classical parametric two-sample t-test can be used to test for a difference between means of the two groups. However, this distributional assumption may not be true, and departures from this assumption can potentially lead to incorrect conclusions. In these circumstances, permutation tests perform better than parametric tests by providing a valid statistical test with much weaker assumptions. Specifically, under the null hypothesis that the hippocampal volume has no actual difference between AD cases and controls, the group membership (or the label of case and control) becomes arbitrary, that is, any subject from one group might as well have been from the other.
While it may seem implausible that this would be the case for patients and controls, in fact this is what we are testing: all else being equal (that is, exchangeable), any difference found must relate to the means, which is what we are interested in. In fact, a classical parametric two-sample test (with equal variance) makes not just the same assumption, but further assumes that patients and controls come from the same Gaussian distribution. Permutation tests do not require Gaussianity; it suffices that the data are merely exchangeable. Exchangeability further relaxes another important assumption of parametric tests: independence. Data that are not independent may still be exchangeable, either globally or under certain restrictions, as presented in more detail in Example 3 below.
With exchangeability, we compute the t statistic under each permutation, and produce the permutation distribution of the statistic under the null. The permutation distribution is the empirical cumulative distribution function (cdf) obtained from the data themselves, as opposed to from some idealized distribution, as is the case with parametric tests. The observed test statistic can be considered a random sample from the permutation distribution because it is equally likely to have arisen from any case-control re-labeling given the null hypothesis.
The p-value is the probability of finding a test statistic for the group comparison at least as high as the one observed, provided that there is no actual difference (i.e., the null hypothesis is true). So, the p-value can be calculated by randomly permuting the group labels many times, each time recalculating the test statistic; at the end of the process, we check how often a statistic as large as or larger than the original (before any shuffling had been applied) was observed, and divide that by the number of permutations performed. Figure 1 shows an example in which there are three subjects in each group; before any permutation is done, the test statistic is t = +0.7361. After exhaustively computing all 20 possible permutations, we see that 4 of these (including the non-permuted) are higher than or equal to +0.7361. Thus, the p-value is 4/20 = 0.20. If we had decided beforehand that our significance level would be 0.05, we would say that the result of this test is not significant, that is, there is no significant difference in hippocampal volume between AD patients and controls.
Figure 1: Consider the hippocampal volume measured in 6 subjects, three with Alzheimer’s disease, and three cognitively normal controls. The values measured are shown in the boxes (ranging between 3498 and 3588), controls in blue, AD patients in green. The test statistic for a difference Controls > AD is t = +0.7361. If there is no actual difference between the two groups, then the group assignment can be randomly permuted. For each such permutation, a new test statistic is calculated. In this example, four of the 20 t statistics computed (shown in red, and including the non-permuted one) were equal to or larger than the observed statistic. The p-value is therefore 4/20 = 0.20.
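The exhaustive procedure of Figure 1 can be sketched in a few lines of Python. The volumes below are illustrative stand-ins (only their range roughly matches the figure, so the resulting p-value will differ), and the function and variable names are ours:

```python
from itertools import combinations
from statistics import mean, stdev

# Hypothetical hippocampal volumes (illustrative values only).
controls = [3588.0, 3570.0, 3555.0]
patients = [3560.0, 3540.0, 3498.0]

def t_stat(a, b):
    """Two-sample t statistic (equal variance) for mean(a) - mean(b)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

data = controls + patients
t_obs = t_stat(controls, patients)

# Exhaustively relabel: every choice of 3 of the 6 values as "controls".
# C(6, 3) = 20 permutations, including the original labeling.
count = 0
n_perm = 0
for idx in combinations(range(len(data)), len(controls)):
    g1 = [data[i] for i in idx]
    g2 = [data[i] for i in range(len(data)) if i not in idx]
    n_perm += 1
    if t_stat(g1, g2) >= t_obs:
        count += 1

p_value = count / n_perm  # includes the non-permuted statistic
print(n_perm, p_value)
```

With more subjects, exhaustive enumeration becomes infeasible and one instead draws a random subset of permutations, as described in the text.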
Example 2: Permutation in the presence of nuisance variables
Suppose in Example 1 that there were other variables that could potentially explain some of the variability seen in hippocampal volume. Some of these variables could even be associated with diagnosis itself. For example, it may be the case that, in this particular study, AD patients were older than cognitively normal controls. To account for these nuisance variables, we can formulate the problem as a multiple regression, in which hippocampal volume is the dependent variable, whereas the case-control status, along with other potential nuisance variables, are the independent variables. We would then test whether the regression coefficient corresponding to the case-control label is significantly different from zero. Now it is less clear what should be permuted. If we permute just the group labels, what should be done with the other variables in the model? It turns out that various approaches have been considered in the literature.
Systematic evaluations show that, among a host of permutation and regression strategies, the method attributed to Freedman and Lane provides accurate false positive control in the presence of nuisance variables and is robust to extreme outliers in the data. In the Freedman-Lane method, we regress out all nuisance variables from the hippocampal volume measurements to obtain the residuals of this nuisance-only model, and use the permuted residuals as the new dependent variable in the multiple regression, from which we construct the permutation distribution for the test statistic (i.e., the regression coefficient of interest). Intuitively, once the nuisance has been regressed out, what remains should be indistinguishable between AD patients and controls if the null hypothesis is true, and thus, can be permuted.
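A rough sketch of the Freedman-Lane procedure on simulated data follows; all variable names and the simulated ages and volumes are our own inventions, and the simulation has no true group effect (a real analysis would use dedicated software rather than this minimal loop):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y = hippocampal volume, g = group (1 = AD, 0 = control),
# age = nuisance variable correlated with group (patients slightly older).
n = 40
g = np.repeat([0, 1], n // 2)
age = 65 + 5 * g + rng.normal(0, 3, n)
y = 3600 - 2.0 * (age - 65) + rng.normal(0, 30, n)  # no true group effect

def coef(X, y):
    """Least-squares coefficients of y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(n), age])  # nuisance-only design (intercept + age)
X = np.column_stack([Z, g])             # full design; group is the last column

b_obs = coef(X, y)[-1]                  # observed group coefficient

# Freedman-Lane: residualize y on the nuisance-only model, permute the
# residuals, then refit the full model to the permuted residuals (adding
# back the nuisance fit would not change the tested coefficient).
res = y - Z @ coef(Z, y)
n_perm = 1000
count = 0
for _ in range(n_perm):
    b_perm = coef(X, rng.permutation(res))[-1]
    if abs(b_perm) >= abs(b_obs):
        count += 1

# Include the unpermuted statistic in the permutation distribution.
p_value = (count + 1) / (n_perm + 1)
print(round(p_value, 3))
```

Since the simulated data contain no true group effect once age is accounted for, the p-value should be non-significant in most runs.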
We note that whichever regression and permutation strategy is adopted, it is crucial that what is permuted is what would render the subjects different were the alternative hypothesis true. It is not relevant to permute aspects of the dataset that would not be affected should the null hypothesis be false, that is, should an effect actually exist. This is important because, when an experiment becomes complex (e.g., with multiple factors, levels, nuisance variables, and/or multiple response variables), it can be easy to permute aspects of the data that are not informative with respect to the null hypothesis. One should not lose sight of what is being tested, and permute the data accordingly.
Example 3: Permutation in the presence of dependent observations
Data are not always freely exchangeable. It may be the case, for example, that there are repeated measurements from the same subjects among the observations. Or maybe some or all subjects are twins, siblings, or otherwise relatives. Cases such as these restrict the possibilities for permutations, but even so, permutation tests continue to be possible. They proceed in a similar manner as in the examples above, but care needs to be taken when selecting the permutations that are allowed. Exchangeability as defined above — that is, permuting the data keeps them just as likely as originally observed — must be preserved. More technically, it means that the joint distribution of all the data points must remain unchanged under the null. For example, in a twin study, one could permute the subjects within twin pairs, and pairs of twins could be permuted as a whole, but one sibling should never be mixed with the sibling from a different family; see an example in Figure 2. These restrictions, unfortunately, tend to reduce power compared to the analyses in which all subjects are independent and freely exchangeable. However, all other benefits of permutation tests are kept.
Figure 2: Observations that are not independent restrict the possible rearrangements of the data. In this figure, each white circle represents an observation (e.g., a measurement from a subject), the blue (+) or red (−) dots indicate whether the branches that originate at that dot are or are not exchangeable, respectively, and therefore indicate observations that can be permuted with each other. On the left, 10 unrelated subjects who are freely exchangeable. On the right, 18 subjects, some of which were recruited along with their siblings (FS), and/or with their monozygotic (MZ) or dizygotic (DZ) twin. Siblings must be kept together in every rearrangement of the data, which needs to be performed in blocks; subjects within a sibship can be permuted; some families may have both twins and non-twins, which requires nested blocks. (Figure licensed under CC-BY 4.0. https://creativecommons.org/licenses/by/4.0/)
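A minimal sketch of such restricted shuffling (the family values and helper names are invented for illustration): members are shuffled within each sibship, and whole sibships swap places only with families of the same size, so no sibling ever mixes into another family.

```python
import random

random.seed(0)

# Hypothetical sibships: each inner list holds the measurements of one family.
families = [[1.0, 1.2], [0.9, 1.1], [1.3, 1.4, 1.5]]

def restricted_shuffle(families):
    """One admissible rearrangement: shuffle within each family, then
    swap whole families among those with the same number of members."""
    shuffled = [random.sample(fam, len(fam)) for fam in families]
    # Group family indices by size so only same-sized blocks trade places.
    by_size = {}
    for i, fam in enumerate(shuffled):
        by_size.setdefault(len(fam), []).append(i)
    out = [None] * len(shuffled)
    for idxs in by_size.values():
        for src, dst in zip(idxs, random.sample(idxs, len(idxs))):
            out[dst] = shuffled[src]
    return out

perm = restricted_shuffle(families)
print(perm)
```

A restricted permutation test would repeat this shuffle many times, recomputing the test statistic each time, exactly as in the unrestricted case.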
Consider a longitudinal extension of the AD patients vs. controls example, in which two measurements are obtained from each subject, one before and another after an intervention is applied. As per above, the measurements must stay together within subject. However, depending on what is being tested, we may permute the data only within-subject, or only the subjects as a whole while keeping the order of intra-subject measurements unaltered, or do both things simultaneously. Within-subject effects (that is, the effect of treatment) would require that permutations happen within-subject, whereas between-subject effects would require permutations of the subjects as a whole. Interactions in a mixed design (within- and between-subject effects) could benefit from both types of permutation. Crucially, what needs to be permuted is what would be equal should the null hypothesis hold, and that would differ should the alternative hypothesis actually be true.
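For the within-subject (treatment) effect, permuting the pre/post labels within each subject amounts to flipping the sign of that subject's post-minus-pre change. A minimal sketch with invented change scores, enumerating all label flips exhaustively:

```python
from itertools import product
from statistics import mean

# Hypothetical post-minus-pre changes, one per subject (invented values).
# Under the null, "pre" and "post" are interchangeable within each subject,
# so each subject's change is equally likely to have either sign.
changes = [12.0, -3.0, 8.0, 15.0, 5.0, -1.0, 9.0, 4.0]
obs = mean(changes)

count = 0
n_perm = 0
# Exhaust all 2^8 = 256 within-subject label flips.
for signs in product([1, -1], repeat=len(changes)):
    n_perm += 1
    if mean(s * c for s, c in zip(signs, changes)) >= obs:
        count += 1

p_value = count / n_perm  # includes the non-flipped configuration
print(n_perm, p_value)
```

Testing a between-subject effect on the same data would instead permute whole subjects (both measurements together) across groups, leaving the pre/post order intact.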
Example 4: Comparison between models
Now suppose that, in our AD example, in addition to hippocampal volume, we have also measured the amygdala volume for each subject, and are interested in investigating whether hippocampal volume is a better biomarker of AD than amygdala volume (for example, in terms of standardized mean difference between cases and controls as measured by the Cohen’s d statistic). It is tempting to permute the case-control label, but this strategy turns out to be wrong as it completely breaks the associations between the hippocampal/amygdala volume and disease status, which should be retained under the null hypothesis. In fact, in this example, it is unclear what to permute. As a second example, if we want to test whether the mean of hippocampal volume in AD cases is significantly different from a fixed value (e.g., the typical size of hippocampus in normal aging subjects), it can be seen that there is nothing to permute. In these circumstances where a permutation test is difficult to apply, we need to resort to other methods such as the bootstrap for statistical inference.
The bootstrap is an established data-based simulation method, which is often used to assign measures of accuracy, such as standard error, bias, and confidence intervals, to a statistical estimate. It essentially uses the observed data to define an empirical distribution that estimates the unknown underlying data-generation mechanism, and then generates bootstrap samples and bootstrap replications of the statistic of interest using the empirical distribution, from which measures of accuracy can be calculated.
The bootstrap can be applied to virtually any statistic and in a wide variety of situations. For example, by sampling cases and controls with replacement independently, we can calculate the standard error or construct confidence intervals for the Cohen’s d statistics of the hippocampal and amygdala volumes, as well as for the difference of the two Cohen’s d. Given the strong connection between confidence intervals and hypothesis testing, a p-value can also be produced indicating whether the difference in Cohen’s d is significantly different from zero. In fact, the bootstrap can be applied to hypothesis testing, including the questions described in Examples 1-3. However, unlike the permutation p-value, which is exact, the bootstrap significance is only approximate and thus less accurate.
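A percentile-bootstrap sketch for the difference of the two Cohen's d statistics follows; all volumes are invented for illustration, each subject's two measurements are kept together when resampling, and the rare degenerate resample (a group collapsing to one repeated subject) is simply skipped:

```python
import random
from statistics import mean, stdev

random.seed(0)

# Hypothetical paired measurements: hippocampal and amygdala volumes for
# the same controls and AD patients (illustrative values only).
hip_ctl = [3588, 3570, 3555, 3602, 3575, 3590]
hip_ad  = [3510, 3498, 3530, 3505, 3520, 3515]
amy_ctl = [1705, 1690, 1712, 1698, 1701, 1710]
amy_ad  = [1680, 1672, 1695, 1688, 1676, 1690]

def cohens_d(a, b):
    """Standardized mean difference with a pooled standard deviation."""
    na, nb = len(a), len(b)
    sp = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
          / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / sp

def resample(idx_pool):
    """Indices drawn with replacement, same size as the original group."""
    return [random.choice(idx_pool) for _ in idx_pool]

diffs = []
for _ in range(2000):
    # Resample subjects independently within cases and controls, reusing
    # the same indices for both structures to keep measurements paired.
    ic = resample(range(len(hip_ctl)))
    ia = resample(range(len(hip_ad)))
    try:
        d_hip = cohens_d([hip_ctl[i] for i in ic], [hip_ad[i] for i in ia])
        d_amy = cohens_d([amy_ctl[i] for i in ic], [amy_ad[i] for i in ia])
    except ZeroDivisionError:  # degenerate resample: pooled sd of zero
        continue
    diffs.append(d_hip - d_amy)

diffs.sort()
# 95% percentile bootstrap confidence interval for the difference.
ci = (diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))])
print(ci)
```

If the resulting interval excludes zero, the difference between the two effect sizes would be declared significant at the corresponding level, with the caveat noted above that bootstrap inference is approximate.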
Therefore, permutation is a natural and favorable choice when the null/alternative hypothesis is well defined and what to permute is clear. The bootstrap is useful when the primary goal is to quantify the accuracy of an estimate, or when a permutation test is not available (e.g., when there is nothing to permute). That said, we also caution that the bootstrap relies on an accurate empirical estimation of the true underlying probability distribution. Thus the sampling procedure requires careful consideration in order to respect the data-generation mechanism in the presence of complex data structures. For example, the block bootstrap is often used to replicate correlations within the data, while variants of the wild bootstrap are used to capture heteroscedasticity in the sample.
At its best, multi-modal imaging offers rich insight into many aspects of brain structure and function. At the same time, its development has been hampered by challenges; for example, simultaneous EEG-fMRI has additional safety concerns, and the EEG data require extra analysis steps to account for artifacts from the magnetic field and rapidly changing field gradients. Despite these issues, there is increasing attention to the merits of this approach, with high-profile journals dedicating special issues to multi-modal data fusion.
To find out about the promises and pitfalls of multi-modal imaging, we sent a series of questions to members of the OHBM Multi-Modal Imaging Task Force. This team is composed of experts in different imaging domains, and aims to promote and develop multi-modal imaging. We found out the state of the field from Alain Dagher, neurologist and PET/fMRI expert at the Montreal Neurological Institute, Urs Ribary, cognitive neuroscientist and EEG/fMRI expert in British Columbia, Gitte Knudsen, neurologist and translational neurobiologist at Copenhagen University, and Shella Keilholz, physicist and fMRI expert at Emory University and Georgia Tech.
OHBM: First, what advice would you give to those who are keen to get into multi-modal imaging?
Alain Dagher (AD): Make sure you have a strong grasp of both methods.
Urs Ribary (UR): First, focus on understanding the neurophysiological and biochemical aspects of the brain; then learn individual methods (MRI-fMRI, MEG/EEG, PET, or others…); finally, learn the additional technologies and techniques that will allow you to integrate these different sources of information.
Gitte Knudsen (GK): You need to train at a site where there is high-level expertise in both modalities, preferably integrated. If you cannot readily become attached to an academic site that masters true multimodality, do your master's thesis/PhD in a centre where they master one or two of the modalities and then move on to another site with the complementary expertise.
Shella Keilholz (SK): Well I would tell them that if they want to do it, just go for it! It’s a great way to increase the impact of your research, especially when the additional modality allows you to make inferences about causality or fundamental mechanisms that you can’t obtain with a single methodology. Sometimes it seems overwhelmingly difficult to add another modality but we have always been able to find collaborators who generously help us get started.
OHBM: It seems the tools for collecting the data are more readily available (e.g. MRI compatible EEG setups). What is the biggest remaining hurdle in conducting multimodal studies? Is data-fusion between modalities improving?
AD: The increased cost and complexity is generally what holds this back. [Further note from Jean Chen, OHBM blogteam member: “For example, an integrated PET/MRI system is more costly than a regular PET or MRI system. Whilst it may not be as expensive as buying a PET and an MRI system separately, new money is often required to get into multi-modal imaging.”].
GK: The biggest hurdle is, first, to master more than one tool to perfection and second, to ask the right scientific questions that can only be addressed using a multimodality approach. Data-fusion between modalities is a challenge, but slowly improving.
UR: Yes, data fusion is improving, but not so much the underlying knowledge of neurophysiology (why to integrate). There are also clearly issues with money (more recordings are more expensive) and with time (it requires more knowledge and work, and everybody wants to publish quickly). On the other hand, data fusion is not something that has to be done alone, and can be done efficiently in collaborations.
SK: One of the biggest challenges in multimodal research is designing experiments and analyses that maximize the use of the information obtained from both modalities. It requires thinking beyond the conventional paradigms for each of the modalities involved.
OHBM: The increased use of simultaneous PET-MR scanners has clear advantages for cancer imaging. What benefits do you feel it may hold for other areas of neuroimaging?
UR: A clear benefit would be the ability to combine biochemical information with information about brain structure, function and dynamics.
AD: There are many benefits. For example if you take the combination of BOLD and neurotransmitter imaging, since neurotransmitter signalling fluctuates, simultaneous measurement of, for example, dopamine signalling and task-related BOLD has great potential. This then also allows powerful task designs with pharmacological manipulations.
GK: It allows us to measure neurotransmitter release and receptor occupancies and hemodynamic responses simultaneously. We can then use this with pharmacological, physiological or other stimuli. Another great advantage is that it saves time (becoming a one stop shop) for patients with neurological or psychiatric disorders, and so can be useful for those who are not able to tolerate multiple scanning sessions. Unfortunately, despite saving time and possibly resources, the simultaneous acquisition of these different types of information has not yet been truly exploited.
OHBM: The last decades have seen the development of a number of new radioligands for imaging tau and amyloid pathology, microglial activation with translocator protein, phosphodiesterases, and other exciting clinical markers. Are these helping drive multi-modal imaging research? Which emerging PET tracers are you most excited about and why?
AD: For me, the most exciting tracers have been those used to image tau and amyloid, providing otherwise unavailable information about neurodegenerative diseases. Previously we only had brain atrophy as a proxy of disease.
GK: If we’re still talking about hybrid scanners, then we are most interested in developing tracers that target components in the brain that are under rapid regulation. In these cases the methodology can capture these regulations and relate them to, for example, the hemodynamic responses. I’m currently excited about radioligands that are sensitive to neurotransmitter release, as well as emerging PET tracers that are informative of brain processes key to many different types of functions/pathologies. For example, tracers that indicate neuroplasticity or stem cells.
UR: Everything helps! I’ve been impressed with recent research relating imaging of neurotransmitters to cognitive functions in health and disease. In addition, the ability to image GABA as an inhibitory substance has been fascinating to see how it may contribute to, and even control, brain development and dynamic network functions. Last, it’s helped us understand the brain as a fine-tuned electrochemical system which controls all brain functions.
OHBM: Simultaneous EEG-fMRI offers high spatial and temporal precision - but how have labs coped with the challenge of integrating and analysing this wealth of data?
AD: This has been especially problematic for EEG. What we need is good open-source processing software for integrating this information, along with online tutorials and courses to teach people how to use them.
UR: I believe that there’s still not enough work in this area. We need to have a much greater understanding of how structure, overall function and brain dynamics integrate in order to understand how typical/atypical brain networks function. Here the question is not so much about using information from different methods to prove each other but instead to complement each other.
OHBM: EEG-fMRI has clear benefits in conditions like epilepsy, for identifying seizure focus and spread. What applications has it had in other conditions - and what do researchers hope to achieve with it?
AD: Cognitive neuroscience can certainly benefit from the combination of higher spatial and temporal resolution in brain mapping.
GK: EEG-fMRI also has promise for use in sleep physiology, sleep disorders and coma.
UR: Any typical cognitive functions and any pathology which are ALL based on structure, function and dynamics....
OHBM: What do you think are the main strengths of multi-modal MRI work? Do you feel it offers hope for developing valid and reliable MR-biomarkers?
UR: Absolutely! Science is not a mystery, the more complementary information we have, the better we understand the human brain. It will help us to diagnose/monitor sub-types of pathologies and give much greater precision when tracking the effects of interventions....
AD: I do believe using multiple MR measures makes sense for biomarker development and understanding pathophysiology. Pathological processes (e.g. in Alzheimer’s Disease) can affect the brain in multiple but likely stereotyped ways. We can also increase our power to detect pathology (e.g. inflammation, white and grey matter tissue loss, connectivity information) by combining multiple measures.
OHBM: What additional challenges do animal studies have in terms of sequence development or protocol considerations? How do you find these studies enrich those in humans?
AD: Clearly a major issue is the small size of animal brains. We also have to account for the animals typically being anaesthetised when scanned, which has implications for physiology.
GK: Sometimes data from preclinical studies can help optimize a project to be conducted later in humans.
UR: The real benefit of these preclinical studies is that it allows us to perform complementary invasive studies not possible on humans, such as MRI-histology studies. We do however need to continue developing better, or more realistic, settings in animal research in order to better correlate those findings with human brain research.
SK: One of the challenges that we’ve found is that tools that are available on human MRI systems (simultaneous multislice EPI, for example) are not easily implemented on animal systems due to hardware limitations. As Alain says, the other main issue is the use of anesthesia in animals, a special challenge for functional neuroimaging studies, as discussed in our review. Luckily, many of the basic properties of the brain remain relatively intact under light anesthesia, which has been critical for the validation of human neuroimaging methods against “ground truth” modalities like microelectrode recording. People talk of animal research as preclinical or translational, but we like to think of it more as circular. For example, one can take a neuroimaging finding in humans (e.g., fMRI response to tactile stimulation) and look at its basis in the rat using MRI and electrophysiology. Then perhaps one sees that this response is altered in human patients with a particular disorder (maybe stroke). One can then go back to a rat model of stroke and see if the same alteration is present, which helps to validate the stroke model. Then one can look for the neural basis of the alteration using MRI and electrophysiology and identify specific alterations in patients that may be detectable with EEG…etc, etc. We think that human and animal neuroimaging work should inform each other.
OHBM: Thanks all for your insight! We look forward to the multi-modal imaging symposium at OHBM 2018 in Singapore.
By Elizabeth DuPre and Kirstie Whitaker
This month we continued our Open Science Demo Call series by speaking to Anisha Keshavan, Yaroslav O. Halchenko, and Athina Tzovara about three tools they’re developing to improve openness and access in neuroimaging research.
Anisha introduced braindr, a project she’s developed to crowdsource quality control of large datasets such as the Healthy Brain Network dataset. It builds on her previous work in creating MindControl but provides a fun, Tinder-inspired interface for image ratings. She encourages anyone interested to check out the app, remix it for their own data, or contribute to the conversation on how to do quality control of images!
Yaroslav told us about DataLad, a solution devised to allow for versioning data. We’ve already recognized the importance of versioning code, but it applies to data too! As Yaroslav pointed out, data can change or have “bugs” like the dreaded left-right orientation flip in MRI data, so understanding what version you’re working with is important. Using DataLad, Yaroslav demonstrated how to install datasets from sources like OpenNeuro and discussed how it can even be used for data sets before they are made publicly available. Interested contributors are welcome to check out the code!
Athina introduced a survey she’s actively developing to better understand how research treats underrepresented minorities. It aims to allow non-scientists --- particularly those belonging to traditionally underrepresented minorities --- to take an active role in the scientific process, bridging the divide between researchers and participants. Originally developed through the Mozilla Open Leadership program, the survey is still open to feedback from the community, and Athina encourages anyone interested to join the discussion on GitHub!
Our next call will be on Thursday March 22nd at 7pm GMT (check your local time zone). If you’d like to nominate yourself or someone else to be featured on these monthly calls, please add their information at this github issue, or email the host of the calls Kirstie Whitaker at email@example.com. You can also join the OSSIG google group to receive reminders each month.
"A brain scan may reveal the neural signs of anxiety, but a Kokoschka painting, or a Schiele self-portrait, reveals what an anxiety state really feels like. Both perspectives are necessary if we are to fully grasp the nature of the mind, yet they are rarely brought together".
-- Eric Kandel
Visual art can provide a glimpse into people’s consciousness. It works as a bridge, not only connecting us to each other, but also with the past, present, and future. The act of creating art is also therapeutic, and represents a powerful resource for mental and physical well-being. Yet, the mechanisms underlying the brain’s capacity to generate art remain largely elusive. While it has been commonly reported that the right brain (posterior parietal and posterior temporal) is dominant for artistic ability, emerging literature strongly indicates that the left brain is not a silent partner. Instead, it contributes more to the symbolic/conceptual aspects of art. Moreover, the emergence of visual artistic skills in the healthy brain has been linked to plasticity in areas (in both hemispheres) responsible for cognitive processes. This raises the question: how is visual artistic creativity affected by neurodegeneration?
In fact, art in the context of neurodegenerative diseases (e.g. Alzheimer’s disease, frontotemporal dementia) provides a unique window into brain anatomy and function. In this interview, I discuss the link between neurodegeneration and art with Bruce Miller, director of the Memory and Aging Centre at the University of California. Bruce also oversees the unique Hellman Visiting Artist Program, created to foster dialogue between scientists, caregivers, patients, clinicians and the public regarding creativity and the brain.
Q&A WITH BRUCE MILLER
AmanPreet Badhwar (AB): Can you begin by saying something about your background?
Bruce Miller (BM): I am a behavioural neurologist at the University of California, San Francisco. I focus a lot on degenerative disease: the clinical presentation, differential diagnosis of dementia, also deep dive into frontotemporal dementia. I think a lot about behavioral phenomena, particularly early in the course of these diseases.
I started realizing the importance of art and dementia very serendipitously. It was based on seeing a single patient (Jack). The son told me his father had become an artist in the setting of the illness. And I said “of course, as the disease has progressed his work has gotten worse”, and he said “oh no, it has gotten better”. So he sent me a series of pictures, and I was fascinated and really enchanted by the work that he did, and began to look in detail into the visual artistic process in that patient. Jack was preoccupied with creating purple and yellow art pieces, and a phrase I often heard from him was “yellow and purple wave over me”.
I did not think it was a coincidence, although many people around me thought it was, and I was stubborn enough to pursue this, and continued to look for it in my frontotemporal dementia and progressive aphasia population. It does not take much time to hear about somebody, who they are, what they do etc. I would argue that this should be a mandatory part of any evaluation.
AB: How do the worlds of neuroscience and art combine?
BM: Art is unique to the human species. Other animals don’t spontaneously produce art, and not even our predecessors, like the Neanderthals and Homo erectus, made art. There are records of very sophisticated and complex cave paintings by Homo sapiens that showed animals, had three-dimensional components and colours. So we developed this ability spontaneously, and without much formal teaching. The sense is that there is something really unique that happened; there was a change in the human brain, maybe a change in human circumstances, that led to this flourishing of art, and this continues to be a part of our ancient and modern societies.
Also looking at the human output around art: some people are extraordinary, and some never produce art. So I think art is a very interesting aspect of humanity and a very interesting aspect of the human brain, and that the two things cannot be more connected.
AB: You previously stated that “creativity is one characteristic that has been observed to improve with time, both in healthy older adults and people with age-related neurodegenerative disease”. Is the trajectory for artistic creativity different in normal aging and in age-related dementias?
BM: I think it's a very interesting, complex question, tackling aging of humans and art. We are very interested in elder artists, there is no doubt about it. Picasso was in his eighties; he produced very different but interesting pieces, and they delighted people. There is no doubt that his work was exciting. Was it better when he was young, or was it more innovative? Maybe not, but I think there is great variability in when an artist reaches his or her peak. Some artists may have a series of observations that become very important in their twenties, and don't change very much over time, and in others there is a constant evolution. I think one thing that is clear is that it takes a while to master whatever art form someone is working in; nobody picks up a pen and produces a perfect sketch of a face. It takes many, many iterations and much practice over time. I think this is what happens when someone is an art student: they are constantly working on these techniques, making their own observations and getting observations on their work made by teachers.
In disease, people who have never painted, made sculptures, or welded art pieces, suddenly become very interested in the process. Their first works are usually not as good as the ones they produce after they've had the chance to work in a specific medium. They do things over and over again, and at some point they start to reach a mastery of their art. So I think there is often a period when they don't produce something very interesting but there is a drive to do so. That drive pushes them to practise more and more and they reach some sort of a peak, until eventually the degenerative process and injury to circuits causes a loss of their abilities.
So we have this very beautiful but sad story of sometimes art heralding the onset of the degenerative disease process. Soon after the art has appeared the degenerative process gets worse, and eventually the ability to produce art is lost altogether.
AB: Do you think that this drive to produce art arises from disinhibition of certain brain networks, especially in patients who, earlier in their history, were never motivated to produce art? In other words is this artistic ability unveiled and perpetuated by the neurodegenerative process itself?
BM: I do. I think the fact that they never produced art before means that the circuits involved in this process had not been activated. Something about the degeneration, for reasons that we don’t completely understand, leads to an interest, an activation, an actual physical drive to carry out the artistic activities. The theme has been that degeneration on the left side of the brain (language based regions) releases functions on the right side, which are more visual.
AB: Have there been any fMRI studies done in these patients with relation to newly developed artistic abilities?
BM: There is quite a bit of fMRI data that we have collected on our artists. We are in the process of analysing that, but we don’t yet have a coherent story. We wrote about it. William Seeley did these analyses on a woman (Anne Adams) who became a visual artist in the setting of a non-fluent aphasia, and she showed on a blood flow scan increased activity in the right posterior brain region, and actually during that time an MRI was done and she had increased volume in that same area.
There are a number of theories, one being she was always like that (that is the bigger volume). But she was never much of an artist until the progressive aphasia emerged. We think there might have been slow remodeling in the early stages of the disease, with decreased activity in the left frontal insular regions allowing increased activity on the right posterior parietal area and actually some increase in volume.
AB: Does art created by people with brain disease or damage provide insight into brain anatomy and function? Could you provide a few examples?
BM: Surely Anne Adams was a paradigm shift for me to describe the phenomenon of art and dementia, but I had never really thought too much about the mechanism. But because she had undergone an MRI just before the onset of dementia, this really allowed us to look into the circuitry and mechanism. This also allowed me to broaden my thoughts about the topic, so seeing patients who had gardens with beautiful details, flowers, patterns. This is another form of visual creativity that I have become aware of.
AB: As a practising neurologist, how has your encounter with art influenced or changed your own conception about how the brain functions? Do you have specific examples? Did you have to overcome difficulties to promote this field?
BM: I think it has really humanised my approach to patients. It makes me realize that even though dementia is a relentless process, there are many pockets of preservation, and sometimes enhanced function. It is critically important that we recognize this in our patients. It is helpful in diagnosis. What is preserved is telling us something about where in the brain the bad molecules are not accumulating. But it also allows us to think about the patients, about things that are important to them, and help design programs for them and have activities that are meaningful. If you have lost your visual spatial function profoundly, then probably working in art is not going to be satisfying. But if instead there are other areas that are preserved around music or singing or something else, these things have to be kept in mind while thinking about the future for the patient and their families.
I think this should be a routine part of our diagnostic process, that is not only what are the weaknesses, but what are the strengths, and has anything new emerged that is actually a new strength. We do this regularly now at UCSF (it has opened up a whole new side to the evaluation). This also makes me appreciate the unbelievable effort that every patient that we see is putting into their life. When blocked in certain domains, they activate others and use others. So I think about patients in a very different way since the story of art emerged. I think, to a fault, neurologists have often thought about deficits a lot, without really seeing the whole human being, and I think this has really forced me in a very good way to think about the entire human within the ecosystem that they live and interact with others, and some of the things they perceive that might be very important.
AB: I have had the good fortune of discussing both art and neurodegeneration on various occasions with Bruce. Not only do Bruce and I share similar scientific curiosities with regards to art and dementia, I have also found him to be an excellent mentor. He has taught me to follow my heart in the quest to figuring out the brain, and for this I shall be forever grateful!
“I think the next philosophers, the philosophers of the 21st century, are going to be neuroscientists.” - Bruce Miller
The OHBM is dedicated to understanding the anatomical and functional organization of the human brain using neuroimaging. But how to best use brain-activity measurements, including human neuroimaging, to understand computational mechanisms remains an open problem. “Mapping the brain does not by itself reveal the brain’s computational mechanisms” says Niko Kriegeskorte, past chair of the OHBM Communications Committee. “Therefore one of the strategic priorities in the OHBM Communications Committee has been to explore the interaction between computational neuroscience & human neuroimaging.”
Here, we had the chance to discuss the current state and future of computational neuroscience with Mark Humphries, senior research fellow at the University of Manchester, Chair of Computational Neuroscience at the University of Nottingham, and talented blogger. We found out about research environments in different countries, mindful language use in neuroscience, Mark’s outlook on the future of network neuroscience, and his top three tips for those starting out in computational neuroscience.
Nils Muhlert (NM): Can you tell us a bit about your career path - were you first interested in computing, or in neuroscience? Also, your work has seen you move between the UK and France - have you found different approaches to research in these countries?
Mark Humphries (MH): I’m of the generation that grew up programming their home computers - their C64s, Spectrums, and BBC Micros - so computing was always there. As a kid I also loved chemistry. Originally I wanted to do Chemical Engineering at university, but it turned out that A-Level Chemistry was both hard and boring. So when I came across the mysterious “Cognitive Science” degree, promising computing, AI, and the brain, I signed up like a shot. In effect, I’m one of the few who was trained in computational neuroscience from my first year at undergraduate level.
That degree was followed by a PhD and postdoctoral work at Sheffield, with the quietly wonderful Kevin Gurney. Not quite the straight run it sounds: disillusioned and exhausted by the end of the PhD, I went off to freelance web design and software engineering. That lasted a year before I was tempted back by the offer of a post-doc.
My long stint at Sheffield was followed by three years in Paris at ENS. Both were teams of computational neuroscientists, with radically different approaches. Sheffield were neuroscience-first, circuit modellers: build a model of a brain region, study its dynamics, and infer its function. Paris were theoreticians first: propose and study general principles for how computations could be done by the brain (memory, inference etc), then worry about the details of specific circuits later, if at all.
In my experience, the French research system, dominated by the CNRS and INSERM, is essentially just part of their civil service system. So you can have a job for life, but getting financial support to do your research can be an absolute pain. Theorists in all fields can thrive, of course. (ENS has an extraordinary maths department: the Bourbaki group were based there, and they’ve had five Fields medalists). The UK research system more clearly supports fundamental science.
NM: In a recent blog post on connectomes, you highlight some of the many factors influencing the spiking of a single neuron. In human neuroimaging, we typically summarise activity at the scale of cubic millimetres, with each voxel containing tens or hundreds of thousands of neurons in different cortical layers. How much cross-talk do you see between cellular systems neuroscience and human neuroimaging, and how much do you think understanding at one level currently constrains understanding in the other?
MH: The neuroscience of detailed neuron types - their physiology, receptors, transmitters, gene expression, and so on - often has little constraint on systems neuroscience studies of large populations of neurons. Many multi-neuron recordings from cortical regions can only hazard a guess at what layer they are recording in, never mind whether the recorded neurons are Martinotti or ViP interneurons or whatever. I think this lack of identifying neurons has played a large role in driving the take-up of calcium imaging, where we can at least identify some subtypes of neurons (typically 1 or 2), despite the obvious disadvantage of recording something (calcium) that is only partially related to the thing we’re interested in (the spiking of neurons). What’s particularly missing is the constraints of anatomy - the wiring between individual neurons - on the activity we’ve recorded from those neurons.
But that will come. In a handful of specialised circuits, this information is being combined. For example, in studies of the mouse retina, the type and position of neurons has been used to constrain classifications of large population recordings. And in tiny animals, like Drosophila larvae (maggots to the rest of us) and C Elegans, the details of wiring and neuron types have been combined with large-scale imaging to reveal deep insights into how brains could work.
NM: Marsel Mesulam revealed that students requesting higher field strength MRIs are asked “what would you do if you could record from every neuron in the brain?” This thought experiment is now an ambition for international research projects. How do you feel network neuroscience could sensibly use this massive amount of data?
MH: A question that has occupied much of my thinking, but to which I'm no closer to a good answer. We have passed the milestone of recording every neuron from a simple nervous system. But as I wrote at the time, it was a cool study from which we learnt very little of consequence.
That said, everything that brains do, they do through the collective action of hundreds to millions of neurons. And we lack well-established theories for what that collective action means, or how to interpret changes to it. In the absence of theory, the gotta-catch-them-all philosophy of recording every neuron is seductive: let’s get the data we think we will need one day, and wait for theory to catch up.
Fortunately, ideas are emerging about how we can sensibly use this data. There’s some great recent work on how we can tell whether there’s anything special about the joint activity of many neurons: whether it is just the expected result of lots of individual neurons tuned to different properties of the world; or if the joint activity really conveys more information than the individual neurons summed together. And we’re starting to get a handle on how to understand the dimensionality of that joint activity: how much redundancy there is between neurons, how that redundancy differs between brain regions (and between different brains), and what that means.
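The "dimensionality of joint activity" Mark describes can be made concrete with a small sketch. The code below is illustrative only (synthetic data, not from any real recording): it simulates a population of 100 "neurons" driven by just a few shared latent signals, then estimates the effective dimensionality of their joint activity with the participation ratio, one common eigenspectrum-based estimate of redundancy between neurons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population activity: 100 "neurons" whose responses are
# driven by only 3 shared latent signals plus private noise
# (rows = time points, columns = neurons).
latents = rng.standard_normal((1000, 3))
mixing = rng.standard_normal((3, 100))
activity = latents @ mixing + 0.5 * rng.standard_normal((1000, 100))

# Eigenspectrum of the neuron-by-neuron covariance matrix: how many
# dimensions actually carry the joint activity?
cov = np.cov(activity, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # sorted descending

# Participation ratio: (sum of eigenvalues)^2 / sum of squared
# eigenvalues — close to 100 if neurons are independent, close to
# the number of latents if they are highly redundant.
pr = eigvals.sum() ** 2 / (eigvals ** 2).sum()
print(f"effective dimensionality ~ {pr:.1f} (out of 100 neurons)")
```

Because only three latent signals drive the whole population, the estimate lands near 3 rather than 100: the neurons are massively redundant, which is exactly the kind of structure these emerging analyses try to quantify.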
NM: In another of your blog posts, you criticize media misinterpretations of dopamine as representing the ‘reward system’ of the brain. How does your own work feed into this - and at what point did you feel a general education piece was warranted?
MH: The tipping point was seeing "Dopamine dressing" in The Guardian's Style section. As though dopamine neurons give a damn about what you wear. Endless publications call dopamine the "reward system", when it is not. And it's particularly embarrassing when such language routinely appears in august publications like Nature. So I thought that it'd be useful for everyone to have a simple, accessible, concise explanation that dopamine neurons signal an error, not reward. And then we can all just point our undergraduates, friends, family, and editorial staff at esteemed publications to that post, and save ourselves the trauma.
Dopamine has been around in my research since the first days of my PhD. For years my work was primarily on the basal ganglia, and the striatum - the massive input nucleus of the basal ganglia - is where the dopamine neurons send their dopamine. So we include the effects of dopamine in all our models. In Paris I spent a couple of years analysing dopamine neuron firing in a project that never saw the light of day. More recently, I helped Kevin Gurney achieve his mammoth computational account of how dopamine teaches the basal ganglia to select actions. Dopamine has haunted me for my entire career...
David Mehler (DM): Richard Feynman used to stress the difference between “Knowing the name of something and knowing something”. In a similar spirit, you have critically assessed whether we put too much faith in named brain structures, giving examples why these should not be taken at face value. What advice do you have for students and ECRs, whose experience of Neuroscience may consist wholly of learned brain regions with set functions?
MH: Read more than just about your brain region. And internalise the idea of degeneracy: brains have many solutions to the same problem.
If we work on only one brain region, it is easy to fall into the trap of thinking that one brain region does everything. Just being aware of the thinking about brain regions other than your own will help you not take anything at face value. In my own field, it is easy for basal ganglia researchers to fall into the trap of claiming that the basal ganglia are responsible for "action selection". But this patently can't be true: there are multiple systems that select actions in the brain, from spinal reflexes, up through the brainstem, midbrain, and other sub-cortical structures - the amygdala can select fear responses just fine on its own.
DM: A recent study from your lab, in collaboration with Angela Bruno & Bill Frost from the Chicago Medical School, provides fascinating insight into how neural populations orchestrate their activity when coordinating movement: while their combined output converges to a similar pattern (an attractor), activity of individual neurons is not stable over time. What does this finding imply in your view for our understanding of functional connectivity (e.g. between neurons or neural populations)?
MH: It means that functional connectivity is an epiphenomenon. The correlations between individual neurons are imposed by the dynamics of the whole circuit in which they reside. Those dynamics obey certain properties that emerge from the wiring of the whole circuit and the excitability of the individual neurons.
But it is very useful to study functional connectivity of neurons: mapping the correlations between neurons is so much easier than trying to infer the underlying attractor, or other form of dynamical system. And changes to those correlations imply a change to the underlying attractor. Indeed, we use this approach all the time. We just need to be mindful that those correlations are a read-out, an observable property, of the circuit’s dynamics.
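The point that correlations are a read-out of circuit dynamics, not a wiring diagram, can be shown in a few lines. This is a toy sketch (synthetic signals, hypothetical "neurons"): two neurons with no direct connection are both driven by the same circuit-level signal, and their functional connectivity (pairwise correlation) is strong anyway.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two "neurons" with no wire between them, both driven by the same
# shared circuit dynamics plus independent private noise.
shared = rng.standard_normal(5000)
neuron_a = shared + 0.7 * rng.standard_normal(5000)
neuron_b = shared + 0.7 * rng.standard_normal(5000)

# Their functional connectivity reflects the shared dynamics,
# not any direct anatomical connection.
r = np.corrcoef(neuron_a, neuron_b)[0, 1]
print(f"correlation between unconnected neurons: {r:.2f}")
```

The correlation comes out around 0.7 even though neither neuron projects to the other, which is why Mark describes such correlations as an observable property of the circuit's dynamics: easy to measure, informative about changes in the underlying attractor, but not evidence of a connection.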
Functional connectivity at the level of whole brain regions, of MEG/EEG and fMRI, is a different kettle of fish, of course. On this scale, correlated activity is telling us something about the distribution of how things are represented across the brain in very large neural populations, with tens of thousands to millions of neurons in a single time-series. Instability of correlations over time for these time-series would suggest entire neural populations that wink on or off as needed. And dynamical systems analysis has long been applied to EEG data, but usually as a way of looking for changes in gross neural activity - as may precede an epileptic seizure, for example - rather than as a view of how the brain computes.
[Figure caption] Seeing a spiral attractor in neural activity. Activity was recorded from 105 neurons in a sea-slug's motor network during three separate bouts of galloping. There are three lines plotted here. Each line is the low-dimensional projection of those neurons' joint activity during a 90 second bout of galloping, from its onset (grey circle). Each line traces a circular movement whose amplitude decays over time: a spiral. The three lines together trace the same region of this low-dimensional space, indicating that the neurons' joint activity is attracted to the same pattern: the spiral is an attractor.
DM: Your work increasingly focuses on dynamic changes in neural networks. What insight do you think this will bring to the field over the next 5-10 years?
MH: We're going after the idea that the brain encodes information at the level of the joint activity of populations of neurons. In this view, each neuron is a read-out of the joint activity of all the neurons that project to it. That neuron, in turn, is just one small component of the populations projecting to other neurons. So only by looking at the dynamics of the neural network as a whole can we understand what neurons are seeing, and hence what the brain is encoding. A change to those joint dynamics is then a change in what is being encoded: be it a sound, a memory, or a movement. In short: the response of single neurons may be irrelevant to what the brain is doing.
DM: … and finally, computational neuroscience is gaining increasing popularity. But starting out may seem daunting. What are your top three tips to get into the field?
MH: First, learn to code, properly. To some, this may seem obvious. In my experience most people who’ve come to me with a genuine interest in getting into computational neuroscience have never coded, certainly not seriously. But coding is the day-in, day-out life of the computational neuroscientist, so you won’t get far without deep skills in coding. And by “properly” I don’t mean “you have to learn a proper programming language”, whatever that means. No: properly learning to code means learning the logic of how code is built, independently of the language used: of variable types, indexing, functions, control loops. And learn to comment your code. You know who will love you for commenting your code? You, in a year’s time.
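The language-independent basics Mark lists (variable types, indexing, functions, control loops, and comments your future self will thank you for) can be packed into one toy example. The function and data below are entirely hypothetical, a minimal sketch of well-commented code rather than anything from his lab:

```python
def count_spikes(voltages, threshold=-50.0):
    """Count upward threshold crossings in a membrane-voltage trace (mV)."""
    spikes = 0  # integer accumulator
    # Control loop over indices, comparing each sample to its predecessor.
    for i in range(1, len(voltages)):
        # A "spike" here is any upward crossing of the threshold.
        if voltages[i - 1] < threshold <= voltages[i]:
            spikes += 1
    return spikes

# A list of floats: a toy voltage trace with two threshold crossings.
trace = [-70.0, -55.0, -40.0, -65.0, -48.0, -70.0]
print(count_spikes(trace))  # prints 2
```

None of this depends on Python specifically: the same logic of types, indexing, a function boundary, and a loop would transfer to any language, which is the sense of "properly" in Mark's advice.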
Second, ask yourself: What type of computational neuroscience do I want to do? The choices are endless. We can work on scales across the actions of receptors at single synapses; plasticity at single synapses; the intra-cellular signals triggered by receptor activation; the dynamics of a single neuron in all its glory, dendrites and all; the collective dynamics of networks of neurons; of specific brain circuits; right up to the entire brain. And on to read-outs of mass activity, to EEG, MEG, and fMRI, and the functional connections between regions. We can work bottom-up, top-down, or middle-out. We can aim to ask what a specific brain region does, work out what causes a disorder, or reach for general principles for how neurons compute. We can use algorithms, like machine-learning; simulations of dynamics using differential equations; or pencil and paper to solve equations. What is it you want?
Finally, take a Master’s course in computational neuroscience. Both so you can find out if this path is for you; and so that you can be taught the neuroscience by neuroscientists and the computation by computational neuroscientists. Get either wrong, and no one will take you seriously.
By Elizabeth DuPre and Kirstie Whitaker
The open neuroimaging community is great and growing every day. This month saw the first of a series of Open Science Demo Calls. Brought to you by the OHBM Open Science Special Interest Group, these live streamed calls are a chance to hear from the developers of open neuroimaging tools. We'll use these calls to build connections between all members of the OHBM Open Science community and to tell the stories of the people making outstanding and reproducible neuroscience happen.
For our first call, we spoke to Alejandro de la Vega, Cameron Craddock, and Guiomar Niso about three ongoing initiatives they’re spearheading to improve openness in neuroimaging research.
Alejandro spoke about NeuroScout, a new, cloud-based platform allowing for the flexible re-analysis of neuroimaging datasets with naturalistic stimuli, such as the Study Forrest dataset. To do this, Alejandro is actively working to develop tools such as pliers and pybids. If you’re interested in this line of research, make sure to check out and contribute to these tools!
Cameron discussed this year’s Brainhack Global. Building off the successes of Brainhack Global 2017, Cameron is organizing a globally based hackathon for this spring, where neuroimaging researchers around the world can come together online to learn about, develop, and improve open neuroimaging tools. He encourages anyone interested in attending the event to join the Brainhack Slack team.
Technical difficulties prevented us from seeing Guiomar in our call, so we recorded a supplementary video to hear more about her work with MEG-BIDS. This is a major extension of the BIDS specification to cover MEG data. As Guiomar informed us, MEG does not have a standardized acquisition file format (like MRI DICOMs), so the creation of an MEG-BIDS standard will make a huge difference to the community! Feedback is welcomed on the current draft of the specification, which is planned for release on February 14th.
Our next call will be on Thursday February 22nd at 7pm GMT (check your local time zone) and will feature Anisha Keshavan on Braindr, Yaroslav Halchenko on DataLad and Athina Tzovara discussing how research treats underrepresented minorities.
If you’d like to nominate yourself or someone else to be featured on these monthly calls, please add their information at this github issue, or email the host of the calls Kirstie Whitaker at firstname.lastname@example.org. You can join the OSSIG google group to receive reminders each month.
Professor Aina Puce is the Eleanor Cox Riggs Professor in the Department of Psychological and Brain Sciences at Indiana University, Bloomington, and a senior editor at NeuroImage. She has followed a career path that is now becoming more common in human brain mapping, starting firmly rooted in the methods end but, over time, gradually shifting focus towards understanding complex patterns of behaviour. To do this, she has made use of a number of imaging techniques, exploring ways to extract converging lines of evidence.
Here, we find out how her interests changed throughout her research, the promises and pitfalls of multi-modal imaging, and why you should not be discouraged by rejections but instead focus on and be motivated by the paper acceptances and other highlights in your career.
Nils Muhlert (NM): You initially graduated with degrees in Physics/ Biophysics. Now, one of your lab’s key interests is specific applications - such as understanding social cognition - though clearly facilitated through your expertise in imaging methods. Can you tell us about how your research focus has changed throughout your career?
Aina Puce (AP): My undergraduate degree was in Biophysics and my Masters degree was in Physics. For my Masters I was already recording EEG/ERPs in the operating room under anaesthesia – generating a frequency response of the visual system using sinusoidal visual stimulation through closed eyelids. During my PhD, I recorded intracranial EEG/ERPs from the hippocampus and temporal lobe for the purposes of identifying the epileptogenic temporal lobe in presurgical patient assessments.
My interest has always been tied to the relationship between brain and behavior. Over the years it has evolved from consciousness under anesthesia, to hippocampal integrity, to recognition memory of objects, to face perception, to recognition of face, hand and body actions, to multisensory perception, and now to the implicit recognition of emotions and other non-verbal signals. Seems like a lot of topics perhaps, but the evolving theme is how we make sense of our world. I owe a lot to my colleagues from the humanities: over the years they have patiently taught me so much about psychology.
NM: Much of your work involves imaging across modalities. Alongside the higher temporal and spatial precision, multi-modal imaging often involves the challenge of combining very large datasets. How have you got round these issues?
AP: Important question. When you study brain function using only one imaging method you will look at the world with a set of (rose-colored) glasses that give you only part of the story. We tend to forget that. Using multiple methods (either across or within subjects) keeps you honest, as you might get different answers to a scientific question. Then the onus is on you to get to the bottom of those differences, which means taking more time to study a problem. This can be frustrating, because at times you feel you are not getting anywhere relative to others in the field. At the same time, I would rather generate work that is reproducible and replicable by others! The field needs a solid foundation, and this can only be achieved by paying attention to data quality and also fully understanding the methods we work with.
With respect to large multimodal datasets, the biggest challenge right now, as I see it, is data quality control. Data will likely be analyzed by individuals who may not have expertise in data acquisition and artifact recognition/rejection. When multiple assessment modalities are involved, this problem becomes compounded.
Another challenge that I see relates to cloud computing and subject privacy. Increasingly, subjects in these big datasets will be patients. As more investigators around the world interact with these datasets there is an increased potential for hacking and accessing sensitive information. Having easy to use, but secure, user interfaces and procedures for interacting with big datasets is key.
Another critical component is user training on computer hygiene. I am continually horrified by what I see those who are not computer-savvy doing with data-archiving and sharing. We cannot blame these people as they have not been formally trained in this area, but these are the potential weak links in the chain. That said, user-training needs to be made meaningful and interesting and something that users view as important – and that is also a big challenge in my opinion.
NM: Where do you see multi-modal imaging going over the next 5 years?
AP: With respect to methods and scientific practice: these have been re-examined and will continue to be. With respect to neuroscience in general: I think that meso-scale neural interactions will be a major focus, as this work is critical to building bridges between systems neuroscience and molecular/cellular neuroscience.
Finally, for social neuroscience, measuring/monitoring brain and bodily function will also become more important as science moves more and more from a lab-based focus to a real-life one. Smart clothing used with dry electrode portable EEG systems and smartphone applications to gather data will become more common. Exciting developments in MEG sensor technology will continue, with attempts to develop higher temperature MEG devices and also flexible sensor helmets to better fit any head shape or size. This is a really exciting time to be involved in neuroscience!
NM: In your work on social attention you have proposed a 'socially aware' brain mode of social information processing. Can you tell us a bit more about this? How, if at all, does this brain mode map onto specific resting-state networks?
AP: I have recently been interested in how we use social information that we access implicitly to make social judgments or decisions about the behavior of others. Most lab-based studies in social neuroscience use tasks where subjects make explicit social decisions about others. Yet, this is so unlike what we do in real life. In our lab we use both implicit tasks (involving a 'default' mode, where there is an internal focus on achieving goals) and explicit tasks (requiring a 'socially aware' mode, where we make explicit social judgements), using the same stimuli in the same subjects. We found very different neurophysiology across tasks – explaining in large part the existing variability seen in the literature.
Relationship to resting-state networks? Excellent question! We have been looking at the EEG dynamics during these implicit and explicit tasks, but have not yet looked at resting state EEG in these same subjects. So this is something that I would like to look at in future work.
NM: What advice would you give an early career researcher to help them stand out in the hunt for competitive fellowships, grants and faculty positions?
AP: I usually tell everyone to find what their passion is. What topic of study really motivates you scientifically? Doing science is a perpetual set of ups and downs – often more down than up. If you follow that passion, you are more likely to be successful, because it will help you get through the bad times.
As for specific advice for early career researchers: first and foremost, find a mentor – a senior scientist whom you trust, have a personal rapport with, and who can help you work on your desired career goals. They should be a good sounding board, but also be able to network you with other scientists and point out career opportunities you may not know about. OHBM has an excellent mentor-mentee matching service. I have recently been assigned to mentor two young scientists, and I am looking forward to interacting with them on-line and face-to-face at the OHBM meeting itself!
Second, network network network! Don’t be afraid to speak with senior scientists at scientific meetings – not just at your poster, but do it at the various social events. Getting to know someone can allow you to visit their lab (perhaps even on a short stay to analyze some data), and who knows what other opportunities that might lead to? Applying to competitive Summer schools can also give you this opportunity.
Third, seek feedback from peers and colleagues on your fellowship and grant applications. People do not do this enough. That said, it requires being organized – you need to allow time for people to read and give you feedback, so that you can make the edits before the submission deadline. Same thing applies for job talks or conference talks – in our lab no-one does a talk anywhere without doing a dry run first! This rule also applies to me, and I value the detailed and caring feedback I get from my trainees.
Fourth, you can stand out by being yourself – scientifically and personally. Scientists are by nature prone to eccentricities. I like to celebrate those. Your (hopefully positive) eccentricities make you who you are, and importantly make you distinctive and memorable to others. (I'll never forget a job candidate who told us that he had a pet tarantula. He got the job!)
NM: Next month, you’ll be a keynote speaker at the Brain Twitter Conference. Can you give us some insight into what you’ll be presenting - and what you think can be achieved through this online mini-conference?
AP: I will keynote tweet on the different modes of social information processing that I mentioned before.
What can be achieved with an online Twitter conference? A couple of things quickly come to mind. First, the conference builds a greater sense of community, allowing new connections between scientists around the world to be made through interactions generated in response to speakers’ tweets. (It is interesting to finally meet people at scientific meetings that you have been tweeting with.) Second, communicating one's ideas with a series of 10 Tweets makes one distill the absolute essence of the ideas to be presented. It allows the presenter, at least, to work out what is really important in the practice of their science.
NM: When did you become involved in NeuroImage - and how have you seen it develop over the years?
AP: I became a member of the Editorial Board in 2005, a Handling Editor in 2009, a Section Editor in 2011 and finally a Senior Editor in 2013. It has been wonderful to watch our field grow exponentially over the years and to work with so many dedicated and committed people in our NeuroImage family. Back in the early 1990s we had no outlet where (f)MRI-related work was welcomed, whereas work related to MEG and EEG was being published in well established neurophysiology journals. Today we have NeuroImage as well as Human Brain Mapping (which also began very early to meet the need to publish MRI-related work). It is terrific to see neuroimaging work so mainstream and regularly appearing in high-profile neuroscience journals. Indeed it is hard to keep up with it all right now!
NM: ...and finally, you’re currently serving on the program committee for OHBM. What does this role involve - and how can others contribute?
AP: OHBM is my tribe. As a post-doc I presented a poster at the very first OHBM meeting in Paris organized by Bernard Mazoyer in 1995. I have only missed a couple of OHBMs since then, due to issues related to visas... I have presented in Educational Courses, Symposia and given a Keynote, as well as chairing scientific sessions over the years. I was a member of Council from 1999-2002, where I was the Meetings-Liaison. Back then we did not have the wonderful Secretariat we have now, so the meeting organization was a bit different. Currently, together with Cyril Pernet I am Co-chairing a COBIDAS for MEEG committee for OHBM. I am also a member of the OHBM Scientific Program Committee – and right now is a busy time for the committee. I want to give a huge shout out to Michael Chee and his very capable team in Singapore. World events forced the change of the meeting city at the last minute, and Mike and his team are making sure that OHBM 2018 will be just as successful as all of our other meetings. I am really looking forward to it!
BY DAVID MEHLER & KEVIN WEINER
Open Science (OS) is a movement toward increased sharing among scientists of their data, their materials, their computer code, their papers, and their peer reviews. The ultimate goal of this movement is to boost collaborative progress and bring greater transparency. Scientists might more rapidly build on results of others and help each other spot errors.
BY THE OHBM BLOG TEAM
As brain mappers start to log off their computers, comfortable in the knowledge that their OHBM 2018 abstract is finally submitted, we, at the OHBM Blog, offer a round-up of our most interesting and informative posts from 2017. This platter of self-citations should provide sustenance for those experiencing neuroscience hunger pangs throughout the holiday season. Each of our main contributors provides insight into their favourite posts - and if you feel you’d like to contribute as a guest blogger next year, let us know!
As 2017 comes to an end, I think of the numerous ways that OHBM has promoted diversity since that first meeting in Paris, 22 years ago. I realized this while interviewing Marsel Mesulam, a longstanding academic inspiration of mine and a founding member of OHBM. I found myself entranced by Marsel’s recap of the organization’s history. He touched on the themes of discovery, flexibility, and evolution in the field of human brain mapping, and the importance of taking a step back and drawing inspiration from the brilliant diversity that is OHBM – be it the various imaging modalities highlighted, the composition of OHBM’s membership, comprising both trainee and established members, or multidisciplinary interactions such as the annual art and neuroscience exhibits. I truly believe that OHBM draws its strength from its inherent diversity, an ingredient necessary to advance the understanding of the organization of the human brain. I look forward to ushering in 2018 with the OHBM community.
This year I met interesting people and learned interesting things through writing the blog posts. My favourites are: (1) the story of the first human fMRI experiment at the MGH (Mark Cohen interview); (2) finding out about the future of data sharing from David Van Essen; (3) the chaotic but pseudo-stable nature of brain connectivity; and (4) how stimulating the lateral prefrontal cortex makes people comply more with social norms. As a PI, I find that all too often, PIs and trainees may get tunnel vision, being committed to certain research findings and to propagating certain theories. It is true that we all have to publish and propagate knowledge as researchers, but the reasons that we became researchers in the first place may be something different. From working on the OHBM blogs, I got a sense of history, of different perspectives, of how successful scientists can reinvent themselves and stay true to their passions. That experience was both humbling and energizing. As we look to the start of a new year, I ask you one question: “Sure, doing science can be tough, especially these days, but what would you rather be doing with your time and your brains?”
I had a lot of fun interviewing Alan Evans ahead of the annual OHBM meeting in Vancouver. However, my favorite post this year was not written by me, but by Agâh Karakuzu, a student of mine who wrote about his impressions as a first-timer at the OHBM Hackathon. The pleasure came from guiding Agâh through the labyrinthine process of introductions, interviews, standard operating procedures and gruelling team edits, only to see his efforts validated by the overwhelmingly positive response from the community. I feel like the hackathon post provided exactly what the OHBM open science SIG needs -- easy entry points for the uninitiated. I hope this post will motivate other OHBM trainees to volunteer their time and energy, be it in making science more open, or in spreading the word about the exciting initiatives coming from OHBM in 2018.
Whilst I was mainly involved in editing this year (including setting up the gruelling team edits), I did get the chance to interview a number of the OHBM execs - finding out about the challenges of working as treasurer and chair. But my personal favourite was interviewing the 2017 program chair Mike Greicius. His clinically-focused work covered such a wide breadth, from direct stimulation of the anterior cingulate in those with epilepsy to amyloid PET imaging in people with Alzheimer’s disease. Fleshing out the stories behind these papers and finding out his career path seemed to me to be exactly what we’re aiming for with our blog. Looking forward to our blogposts in 2018, you can expect more clinical neuroscience, open science, advice for early career researchers and coverage of brain mapping from around the globe.
My favorite experience of 2017 was surely the completion of my graduate studies and moving from the world of the graduate student to that of a fully fledged PhD – it feels like I’m finally part of the cool kids club! A close second, though, was my OHBM interview with Susan Bookheimer. Susan’s neuroimaging work at UCLA is fascinating and diverse, and her attitude and moral convictions are bold and impressive. It was refreshing to have a scientifically stimulating conversation with someone who shares such strong opinions on personal accomplishments, women in science, and the importance of life outside the PhD.
The end of this year marks my first year as a blog team member! Through the changing weathers, my work has changed colors, flourishing into an array of insightful posts on a variety of topics. It was fun liaising with the OHBM Student Postdoc SIG and writing for early career issues such as mentorship. But through all this, my favorite was to interview my postdoc advisor Lucina Uddin as an OHBM Young Investigator Awardee! It was enriching to see what makes a young investigator. I loved writing about things she is passionate about and sharing the pearls of wisdom that I, as her postdoc, have gleaned from her presence and mentorship. It was also inspirational to interview stalwarts such as Damien Fair, and see what a significant role mentorship has played in their careers. Personally, it was reassuring to know that I am surrounded by scientific experts who also value building mentee careers!
In addition to contributing to this blog, I am a member of the 2018 OHBM local organizing committee. These two roles nicely intersected as my postdocs (Csaba Orban and Valeria Kebets) and I put together a blog post introducing Singapore as the location of next year’s annual meeting of OHBM. I hope everyone is as excited as I am that OHBM will be held in Singapore. Look forward to seeing everyone here!
...and we’d like to thank all our contributors and interviewees, Sarabeth Fox for filming, and Randy Gollub, Niko Kriegeskorte, and especially Stephanie McGuire for their help in keeping the blog running!
Interested in suggesting a topic or writing a guest post for 2018? Contact us at email@example.com
Neuroimagers face an ocean of software tools that have the potential to make research transparent, sharable, and collaborative. Open science is on the horizon, and we could use some instructions on how to ride the open software wave. We couldn’t think of better software surfers than Michael Hanke and Yaroslav Halchenko, the creators of NeuroDebian. NeuroDebian is a curated ecosystem of neuroscience research software that runs on virtually everything. It originally started as a personal convenience tool in 2005, then went above and beyond its initial purpose, finally transforming into a widely-used and globally accessible platform.
AK: What is NeuroDebian? Can you briefly explain it in layman's terms?
Michael: NeuroDebian was started almost 12 years ago, and was originally intended to provide correct software ecosystems. This is not only about shipping software to another computer, but a full system integration effort, so that independently developed software can work together in one system. NeuroDebian is basically a front for the Debian system, a decentralized and democratic effort of individuals working together to develop the universal operating system. Think of it as a neuroscience-capable operating system.
Yaroslav: Let me add a real layman’s definition. It’s your cellphone with the app store, but it is not just the cellphone. It can be your laptop, PC or even a computing cluster. NeuroDebian and Debian provide this whole turn-key platform where you have an app store offering accessible software. All that software is already available for you, and a dockerfile or singularity file can describe which ones to install. I think we cannot get more layman than that.
Michael: Did you say lame?
AK: NeuroDebian was not initially intended to be a global project. Tell us about how it grew into something bigger.
Yaroslav: It started from scratching our own itch. We were doing our PhDs (or so we thought), and needed software to analyze data. We were Debian users already, and thought it would be cool if we could share our work: I packaged and maintained PyEPL, Michael did the same for FSL. We soon realized that there are more interests besides those two software tools. The catalyst was us joining the Jim Haxby Lab at Dartmouth. In a frenzied week in 2009, we packaged lots of potentially useful software. Then we established NeuroDebian, as it is known now.
AK: The spirit of free software is at the heart of NeuroDebian. It is a community driven development. How do you manage all those remote contributions?
Michael: Yes, it is a global enterprise with many people contributing to it. But it is also, given its size, complexity and the amount of effort that goes into it, by far not international enough. For the amount of impact it has, it is actually a fragile enterprise. It has taken up considerable time and effort from researchers, even though the outcomes are not going straight into their PhD or research projects. Some software is more difficult to integrate than others. I don’t want to single any out, it just symbolises the whole situation and reflects the world we are in. Tools are kept behind closed doors until the people who developed them are properly credited.
Yaroslav: In comparison to where we were 12 years ago, we are now in a very different ecosystem. In getting there we used many of the concepts of open science and open software, such as continuous integration. Without that we’d be fixing bugs for every release. To ensure scalability, we encourage people to test their software, and even though this is not directly related to NeuroDebian per se, it ensures that software remains working on all systems. Another angle is that we share the responsibility. Michael packages something and I package something else, and then we share it so that anyone can benefit. The same goes for the teams within Debian (Debian-Med, Debian-Science) which we are also part of, and we maintain many packages together with those teams. We are not duplicating anyone’s effort. Other team members do their part and we often just borrow relevant neuroscience research packages and backport them for all Debian and Ubuntu users. Everyone is happy at the end of the day. Once again, it would be impossible if these were done in an independent fashion.
AK: We hear about backporting. Is there a simple explanation for that?
Yaroslav: It takes a lot of effort to stabilize a release. That’s why the most stable Debian releases are usually the most outdated ones. This takes us back to Michael's comment about integration. When you put together software that is supposed to work together and then you attempt to stabilize it, eventually it remains stable over years. But research software has different needs. Researchers want to adopt new methods as quickly as possible. So, we take a stable Debian release and put new software versions in it. If we were to upload fresh tools to the unstable versions of Debian and wait until they become stable, it would take too long. This is the point where backporting comes in handy. It allows us to provide the most up-to-date software for all Debian releases and Debian derivatives, such as Ubuntu.
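In practice, these backports are delivered as an extra APT repository layered on top of a stable release. A minimal sketch of what the relevant sources.list entries might look like on a Debian "stretch" system (the release name and mirror here are illustrative; the exact lines for your system come from the repository selector at http://neuro.debian.net):

```
# Illustrative example: NeuroDebian repository entries for a Debian "stretch" system
# (typically placed in /etc/apt/sources.list.d/neurodebian.sources.list)
deb http://neuro.debian.net/debian stretch main contrib non-free
deb http://neuro.debian.net/debian data main contrib non-free
```

After an `apt-get update`, backported packages install with the ordinary `apt-get install <package>` and then upgrade alongside everything else on the system, which is exactly the "app store" convenience described above.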
AK: The name gives the impression that NeuroDebian is exclusive to the Debian OS. What about researchers used to OSX or Windows?
Yaroslav: We saw cases where people installed NeuroDebian on a virtual machine and after a while realized that they keep their OSX or Windows in the background all the time. So they installed Debian as dual-boot at the beginning and then wiped out anything else but Debian. There are other people who prefer to keep their original OS, because they need to use Microsoft Word. You can even run NeuroDebian within NeuroDebian.
AK: How is NeuroDebian maintained, where do the resources come from?
Yaroslav: First it was our advisors. They allowed us to pursue this instead of our research. Some projects provide a small proportion of the funds needed to support NeuroDebian. But overall, NeuroDebian never received dedicated funds as a project. Whenever we apply, there is always one reviewer who says “this effort matches Friston’s brilliancy of SPM” , and another who says “it is just some packaging, who cares”.
Michael: I think we should name drop here. Yaroslav’s advisor was Stephen Hanson, who paid for this for years. My advisor was Stefan Pollmann, who did the same. After many unsuccessful applications, I stopped thinking about grants. Once we wrote a grant where we had 40 letters of support, but reviewers questioned the letters’ legitimacy. Next time we gave them twice as much, and that was also not enough. The best comment we received was ‘this doesn’t fix any disease, try elsewhere’. Most of the stuff we do cannot be tied to a single purpose. We are not the ones achieving it, we are enablers. If any blog readers have recommendations, or are sitting on a pile of money, please contact us! Very recently, the CBBS research center in Magdeburg has started funding work that relies heavily on NeuroDebian and thereby helps to maintain it for everyone else, too.
AK: We know custom installations of neuroscience research software can be painful. How easy is it to install them on NeuroDebian?
Michael: It really depends on the complexity. You can have a really easy installation after downloading gigabytes of binaries that are pre-compiled for your system. But if you want to install it again, you need to repeat all the same steps. In Debian, however, you have all the benefits of automatic upgrades. Imagine saying “I use 150 software packages on this computer and I need to download and update all of them”. The more complex the system you use for your research, the greater the benefit of using a package manager system.
Yaroslav: Do you remember how difficult it is to install apps on your phone and maintain them? Not really! You just click, install them and they keep updating themselves automatically. This is essentially the same thing. Once your research software is installed you don’t need to think about where it comes from or how to update it. It just gets done and it works.
Michael: Most people do not have administrative privileges on the hardware or the execution environments they use. Think about institutional clusters, think about workstations managed by someone else. In general, you need to call a system administrator and say which software you need installed. If you are on Debian, this only takes a few minutes.
AK: Most of the processing pipelines have a heterogeneous working environment with strict version dependencies. This is a serious threat for sustainability and reproducibility. How does NeuroDebian tackle this problem?
Yaroslav: A while ago, we created the NeuroDebian virtual appliance. By downloading this virtual machine image, different people can have an identical environment. There are additional projects that we can benefit from. One of them is http://snapshot.debian.org/, which takes snapshots of the entire Debian archive repository twice a day. So you can recreate any Debian system you used in the past. We now do the same thing for NeuroDebian, although it is not fully public yet. This way, if you know that your software environment previously existed, you can re-establish it. It can also be used to validate the compatibility of your current system. We are pursuing this through the ReproNim project that can track versioning for even more crowded environments.
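As an illustration of how a past environment can be re-established, snapshot.debian.org exposes each archived state of the Debian archive under a timestamped URL, so APT can be pointed at the archive exactly as it was at a given instant. A hedged sketch (the timestamp and release name are made up for the example):

```
# Illustrative example: a sources.list entry pinned to the archive state of 1 Jan 2017
deb http://snapshot.debian.org/archive/debian/20170101T000000Z/ stretch main
```

Installing from such an entry reproduces the package versions available on that date; note that APT may need the option `-o Acquire::Check-Valid-Until=false`, because the Release files of old snapshots have long since expired.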
Michael: You can keep using the script-based generated environment for as long as the underlying components exist. But in my experience, many claims of strict version dependencies are convenient excuses, not real dependencies. Often it’s because somebody spent time manually assessing whether a software works as it should. That’s why they have that strict dependency, because they are not willing to go through that manual effort again. In this case you are following a trust and hope model. No manual validation that is done on one machine scales to other machines. Remember the paper showing surprising differences in FreeSurfer outputs depending on the execution environment. You need high-level tests for machine-based verification so you can make sure that when you ship it to your users, it will work. NeuroDebian offers solutions to this problem by enabling inclusion of build-time and run-time tests.
AK: How can developers get their software into NeuroDebian?
Yaroslav: You should email us to start with, or email the Debian mailing list. For some people who reached out to us, we packaged and maintained their software. Others packaged it themselves and gave it to us to review and upload. So, there are multiple ways to achieve this. But if you mail us first, we can figure out the optimal way. Just remember, if it is for us to package, it might take longer because of the previous questions.
Michael: There are so many people and so many different projects in this field, not necessarily limited to neuroimaging. It doesn’t matter who you contact, as long as your initial e-mail contains all the relevant information. There are key questions you should be able to answer immediately before you contact them.
AK: How do you see the future of open science? Which role will NeuroDebian play along the way?
Michael: I am happy to say that open science is the present, but surely it is also the future. Open Science is not just doing science in a different way, it is the only way. All the excuses people had not to be fully transparent while spending public money for public research can be justified only by the technology limitations of the past. Today, we have surpassed those limitations. There is no point in keeping things secret. In addition, open source software development is the role model of open science. If you look at collaborative coding platforms like GitHub, you will see how low the threshold is for people to work together. The same will happen with open science, and things will become much more interesting and faster.
Yaroslav: Open science has already benefited from various standardization projects. For example, the open brain consent forms enable you to be open from the beginning. You don’t have to reveal all your cards, but you kind of pave your way, so that later on you can open up your research when the time is right.
Michael: One thing to plug is DataLad. Many things we learnt in collaborative open source projects have manifested themselves in our DataLad project. It is the fusion of code and data, with reliable dependencies on datasets and versions. So everything we talked about regarding software can be mapped one-to-one onto data. In the future, you’ll probably see more and more efforts like this.
AK: Tell us more about DataLad
Michael: DataLad is for data what Git is for repositories. I use it for my students, who have no idea about Git. We use it for projects, such as the StudyForrest project. DataLad can move data in a version-controlled fashion from one machine to the other: you can publish unilaterally to GitHub and cloud storage. In the next release, we will provide greatly enhanced metadata support, so you can discover data across datasets that you don’t even have locally. Furthermore, you can ask DataLad to watch a website! For example, if you are following an fMRI dataset and there is a change, DataLad will let you know what has changed, why it has changed and how the code was modified.
AK: Anything we did not ask? Would you like to add something?
Yaroslav: Do not be silent. If you use something, say it. If you use something and it does not work, say it. I am considered one of the biggest complainers in the community, but if we keep quiet it doesn’t get fixed. Constructive feedback is very much appreciated here. This is not just about NeuroDebian.
Michael: The NeuroDebian page shows the popularity statistics. There are about 300-500 machines a week that install NeuroDebian (or at least fill out the registration form). On the other hand, if the number of feedback emails goes above 10, it is considered a heavy week. There are two ways to interpret this: 1) This thing works! 2) It is a silent climate, and you are dealing with an undefined object. We don’t track people. It would be nice if people even told us that there is nothing broken.
BY MICHELE VELDSMAN AND SHRUTI VIJ
Academia provides a unique set of challenges throughout one’s career. It is often highly competitive and uncertain. The evolution of science is unpredictable and this can leave researchers, at all stages, unsure of what next steps to take, how to manage their careers, or build their confidence. Mentorship is key to navigating a career in the face of this uncertainty. Successful mentorship requires an unbiased perspective from an experienced individual within academia who is dedicated to your personal and professional development.
This year, the OHBM Student and Postdoc SIG launched an international, online mentoring programme. This novel programme pairs researchers of all levels across the globe. By pairing individuals across the international community, mentors can bring a fresh, objective perspective to the relationship while mentees provide a unique window into the changing landscape of research. Over 400 OHBM members enrolled in the programme in the first 6 months!
To provide a real platform for mentoring, pairs were encouraged to launch their mentoring relationship in person at the annual meeting. More than 180 pairs met for the first time in Vancouver and have continued their relationships online via email and video chat. In an effort to get more human brain mappers into active mentoring relationships outside of their current environments, new enrollments have begun for another round of pairing mentors and mentees. To sign-up, visit this website.
Here we present the first of a series of short interviews on the experience of mentors and mentees that give a personal insight into the benefits of the programme! The interviews also include general advice that mentors have for trainees, and discussions on the challenges that early career researchers face. We start with our Blog Team Captain Nils Muhlert and his experience in the mentorship programme. He was paired with Professor Robert Turner, one of the pioneering physicists responsible for the development of MRI and fMRI.
Michele Veldsman (MV): Nils, you signed up to the programme looking for a mentor and volunteering to mentor trainees. Why do you think mentoring is important in academia?
Nils Muhlert (NM): As a PhD student, and early postdoc, you’re still safely tucked under a senior researcher’s wing. While you can become more independent in these stages, there’s still someone offering (largely) independent advice. As you progress into faculty positions, these sources of advice are no longer a formal part of your work.
Despite mentors being less common in later stages, I’ve always wanted a good source of advice in my career. For instance, one of my concerns has been not having a clearly defined research area. It’s useful to speak to those who have reached a high level in their career – who can look back and see what might have been useful for them, and the missteps that may have frustrated others. This was an issue on which Robert Turner (my OHBM mentor) offered helpful advice.
MV: What has been your experience of the mentoring programme so far? Have you seen any benefits?
NM: I was pretty impressed to see that Prof Turner would be offering advice. I knew his work, and had seen him give some good talks in the past (and ask some tough questions). I sent along my CV and explained the stage I was at. I also mentioned my main concern: how to balance multiple research interests with dwindling time, particularly given lecturing workload. The response I received was insightful:
Advice from Professor Turner:
“What worked for me as a scientist is having the determination to focus on crucial neuroimaging questions, and to put my available time fruitfully into the development of new techniques to address them. At important stages of my earlier career, I tried to avoid investing large amounts of time on sideline projects (though I have always been very happy to be a contributor to some papers).
Great outcomes tend to follow from intense concentration. When I was working on gradient coil design, back in the 1980s, I was doing algebra and writing the ensuing novel software 16 hours per day. As a result, I came up with the shielding equation that revolutionized gradient design. When you have broad interests, it's very important to reflect on what questions really matter to you, and what you are really skilled at--because to put in the necessary commitment you absolutely need the enthusiasm. You seem to have already proved yourself to the extent that you should be able to pick and choose what you are most enthusiastic about. You can always put other interesting problems on the back boiler--with luck you can pick them up again when the time is absolutely right."
MV: What do you think are the biggest challenges in navigating a career in neuroimaging?
NM: As discussed, I find knowing what to focus on to be difficult. There seems to be different advice on this – for some, like Tianzi Jiang, looking to predict future trends has proved useful. Robert’s view was that we should also aim to be ambitious enough to create future trends: “The key to this is to work out what are the right questions--questions that are fundamental, overlooked, simple, and answerable.” Alternatively, in a careers-advice talk, Ralph Adolphs suggested considering a varied portfolio of research, so as to be eligible for a broad variety of funding. Clearly, different strategies work for different people – but having tried the broad approach, and following Robert’s advice, I feel that for me, moving back towards fewer areas of research fits my current career stage. Whether I can fit in working 16 hours a day alongside looking after a messy 3 year-old is, however, another matter!
MV & SV: As a window into an evolving and developing programme, we have received overwhelmingly positive feedback from mentees and mentors at all academic stages. We will continue to highlight the feedback in a series of similar posts from other mentors and mentees from a range of backgrounds and career paths. Meanwhile, if you are interested in learning more about the programme, please contact the OHBM Student Postdoc SIG at firstname.lastname@example.org and in order to sign up, visit this website to complete a short questionnaire that will help match you with a suitable mentor or mentee. This sign-up round will close on the 17th December 2017 and new pairs will be assigned shortly after.
This month OHBM launched its new website. This long-standing project dates back to 2014, when a membership-needs survey revealed that OHBM members wanted more online resources for educational advancement and career opportunities. There was also a desire for outlets in which to promote new influential research and controversies in human brain mapping and engage our members outside of the annual meeting - issues now addressed through the OHBM blog, the non-member media, and social media accounts. For the website, members wanted this new functionality in a clear, intuitive layout that was more responsive to dynamic content and the changing needs of the brain mapping community. Thus, the story of a new website began.
The challenge to design and implement the new website was taken on by the website team within the OHBM Communications Committee. This 8-person-strong team (Fernando Barrios, Roselyne Chauvin, Christophe Phillips, Jessica Turner, Stephanie McGuire, Jae Gullings), led by Jeanette Mumford and Lisa Nickerson, integrated feedback from the survey, updated the content of the existing webpages, and re-organized them by type for easier user navigation. They also considered how to merge the website with the new educational material provided by the OHBM On Demand service, a platform that hosts past OHBM courses and talks. These now openly available videos and slides allow anyone interested to watch keynote lectures from Susan Bookheimer, Tim Behrens, and other high profile brain mappers, as well as many educational courses, such as diffusion MRI, advanced fMRI analyses and hundreds of other topics covering state-of-the-art methodologies and clinical applications of neuroimaging.
The website team developed ways to integrate and present various streams of OHBM news that emerge throughout the year, with front-page links to the OHBM blog and lay media. They created a new searchable career center, allowing job seekers to post their CVs & search jobs, and employers to post new positions and search CVs. For important announcements, the team created a “what’s happening” section on the front page, letting users know about abstract submission deadlines, the Replication and Young Investigator awards and other OHBM initiatives. The new format also made it easier to locate and view the dedicated webpages for COBIDAS, special interest groups and OHBM chapters. Critically, the website now displays more easily on mobile phones and tablets - providing all this info in a way that suits our busier lives.
Last, our new website includes a dedicated space for each year’s annual meeting website, creating consistency and better organization of information. You can find out answers to frequently-asked questions, register for the hackathon, see featured speakers when they are announced and check out the sponsors and exhibitors. Our advice: have a few clicks around the website, give us your feedback, and most of all, enjoy your new hub for all things brain mapping!
Check out the new site: www.humanbrainmapping.org
By Csaba Orban, Valeria Kebets, Thomas Yeo on behalf of the OHBM 2018 Local Organizing Committee (LOC)
As you may have heard, OHBM 2018 will be hosted in Singapore. Come join us to learn about the latest developments in neuroimaging methodologies and applications, and to meet like-minded scientists in a highly inter-disciplinary forum. This year’s meeting will have a special focus on multimodal imaging and an LOC symposium on mapping functional connectivity to behaviors in young and aging populations, while also featuring the ever-popular educational sessions and the must-go Hackathon. You can visit our dedicated brain-in-sinc website, which highlights attractions in and beyond Singapore, delicious cuisine, and essential travel tips from our local volunteer group.
Singapore (aka “the Lion City”, “the Small Red Dot”) is a small island-state located in the heart of South-East Asia between Malaysia and Indonesia. Originally a small trading post under British rule in the 19th century, over the past few decades Singapore has remarkably transitioned from a third world to a first world economy. Paralleling this economic transformation, Singapore has invested heavily in its educational system, infrastructure, and biotech industry.
Singapore has a young, but rapidly growing neuroimaging community, fostered by its collaborative ethos, investment in state-of-the-art infrastructure, and ability to attract both foreign and local scientists. The neuroimaging community is distributed across the Duke-NUS Medical School (Duke-NUS Centre for Cognitive Neuroscience), National University of Singapore (NUS), Nanyang Technological University (NTU), the Lee Kong Chian School of Medicine (LKCMedicine), A*STAR-NUS Clinical Imaging Research Centre (CIRC), Singapore Institute for Neurotechnology (SINAPSE), National Neuroscience Institute (NNI), Yale-NUS College and the Singapore BioImaging Consortium (SBIC).
Situated 144 km north of the equator, Singapore is a tropical metropolis, where skyscrapers meet lush gardens and canopies, nestled between natural reserves rich with wildlife. Singapore has a uniquely diverse history, where customs, cuisine and architecture reflect Chinese, Malay, Indian and British influences. Attendees can explore Singapore’s rich cultural heritage on foot by visiting Victoria Theatre and Concert Hall, originally the Town Hall of Singapore in the 19th century, the spectacular architecture of Sri Veeramakaliamman Temple in Little India, or the Peranakan shophouses on Emerald Hill and in Kampong Glam. For museum lovers, Singapore offers plenty of attractions, including the Peranakan Museum, National Gallery, ArtScience Museum, National Museum of Singapore, and the Asian Civilisations Museum. Those who want to switch off can take a stroll in the Singapore Botanic Gardens, go for a hike in Bukit Timah Nature Reserve, or take the TreeTop Walk at MacRitchie Reservoir Park.
Visitors can partake in the country’s national sport, eating out, and try an unparalleled variety of foods. Singapore is famous for its hawker centres, where collections of individual, often family-run, stalls offer an assortment of culinary delights at affordable rates. OHBM attendees will have the chance to experience authentic specialties such as Hainanese Chicken Rice, Laksa, Chili Crab, Kaya Toast, Satay, Sambal Sting Ray and Char Kway Teow. In 2016, Hill Street Tai Hwa Pork Noodle and Hong Kong Soya Sauce Chicken Rice and Noodle became the first hawker stalls in Asia to attain Michelin star status, while serving meals from S$2 (US$1.42). Singapore also offers an impressive range of fine-dining options, covering local, Asian, European and Fusion cuisines. Visitors can pair their meal with breathtaking views of the city in one of Singapore’s many high-rise restaurants, such as the Equinox Restaurant on the 70th floor of the Swissôtel, Cé La Vi on top of Marina Bay Sands, or Level33, the world’s highest microbrewery, overlooking Marina Bay.
Getting around the city is easy, thanks to its modern, efficient and affordable transportation system. The conference will be held at Suntec Singapore International Convention and Exhibition Centre, which is located near prime attractions of Singapore such as the Esplanade, Gardens by the Bay, the National Gallery and Raffles Hotel (where the Singapore Sling was invented). Singapore should be easily accessible for all conference attendees, as Changi Airport is the 2nd most connected airport in Asia, serving flights to 380 cities in 90 countries.
Remember to submit your abstracts by Dec 15th, 2017 at 11:59 pm EST. We look forward to welcoming you in the Lion City in June, 2018!
BY JEAN CHEN
David Van Essen is the winner of the prestigious 2017 Glass Brain Award from the Organization of Human Brain Mapping (OHBM). David is the Alumni Endowed Professor in the Department of Neuroscience at Washington University in St. Louis, and he chaired the department for 20 years. He was Principal Investigator (PI) for the original Young Adult Human Connectome Project (HCP, jointly with Dr. Kamil Ugurbil from the University of Minnesota) and is currently a co-PI for the Lifespan HCP Development, Lifespan HCP Aging, and Connectome Coordination Facility projects. He was also founding chair of the OHBM Council.
The Glass Brain Award is given annually "to recognize lifetime achievement by leading researchers using or facilitating neuroimaging to discover original and influential findings". David was recognized for his illustrious career of charting the brain over more than 50 years. His journey in brain mapping has taken him from Harvard to Oslo to University College London (UCL, as a postdoctoral fellow), and then as faculty at Caltech and at WashU.
Jean Chen (JC): When and how did you begin as a “brain mapper”?
David Van Essen (DVE): My adventures in brain mapping began in 1975, when I was studying extrastriate visual cortex in the macaque monkey as a postdoc at UCL. To deal with the complexity of cortical convolutions, I became a “cortical cartographer” by developing a pencil-and-tracing-paper method of making cortical flat maps, akin to flat maps of the earth’s surface. Later, my lab and others were able to computerize this process, thereby ushering in the modern era of computerized cortical cartography.
JC: What do you consider your proudest academic/career achievements (aside from the Glass Brain Award)?
DVE: On the scientific front, I consider my top four achievements to be:
1) Proposing that cerebral cortex is a distributed hierarchical system based on patterns of anatomical connectivity (with John Maunsell and Dan Felleman).
2) Hypothesizing that mechanical tension along axons and dendrites is a fundamental driving force for brain morphogenesis, including cortical folding.
3) Leading (with Kamil Ugurbil) the Human Connectome Project and helping develop and expound the ‘HCP-style’ neuroimaging paradigm for data acquisition, analysis, and sharing.
4) Mapping (with Matt Glasser) a new 180-area-per-hemisphere multimodal human cortical parcellation.
JC: As PI of the Human Connectome Project, how did your involvement begin, and what do you see for the outcome and future of the HCP?
DVE: In 2009, NIH announced a competition for the Human Connectome Project, triggering a flurry of discussions at Washington University (‘WashU’) and at many other institutions. I emerged as the leader of the WashU effort, and we later decided to team up with the University of Minnesota, Oxford University, and several other institutions to establish a consortium with broad and complementary strengths. Once the HCP was awarded in 2010, helping to lead this project became my primary research activity, and it emerged as the most exciting adventure of my scientific career. The success of the HCP can be measured in a variety of ways; e.g., nearly 10,000 investigators have agreed to HCP data use terms; more than 400 publications acknowledge HCP data use; and these numbers continue to grow.
The original ‘young adult HCP’ wrapped up in 2016. It has been supplanted by a three-pronged effort, all predicated on HCP-style neuroimaging. (1) NIH awarded three Lifespan Human Connectome Projects to elucidate brain circuitry across the lifespan, during healthy development, maturation, and aging; I am one of the Principal Investigators for the Lifespan HCP Development and Aging projects. These projects are complemented by the Developing Human Connectome Project (dHCP) in Britain, which is studying brain development prenatally and at birth. (2) NIH has also funded 14 projects under the Connectomes Related to Human Disease (“Disease Connectome”) umbrella; each of these projects studies brain circuitry in a particular brain disorder. (3) All of the data from the Lifespan projects and the Disease Connectome Projects will be freely shared via the Connectome Coordination Facility (CCF) that Dan Marcus and I jointly lead.
JC: In 2017, you were elected to the prestigious National Academy of Sciences. Can you describe for us how it came about and how it will influence your research going forward?
DVE: One morning last May, I received an unexpected phone call while working at home, with the terrific news about my election to the NAS. The news spread quickly, and I was soon enjoying a veritable blizzard of congratulatory emails. The actual induction ceremony will be in the spring of 2018.
While it is deeply gratifying to receive acknowledgment for lifetime accomplishments, I don’t anticipate that Academy membership will strongly impact my research focus or agenda. I still receive my greatest enjoyment from working in the scientific trenches with students, staff, and collaborators.
JC: Your 2016 publication in Nature (“A multi-modal parcellation of human cerebral cortex”) generated much excitement. This parcellation was generated based on cortical structure, function, connectivity and topography, and identified 97 new brain areas. What’s the next step in this line of research?
DVE: This study is truly a highlight of my scientific career, but the lion’s share of the credit goes to Matt Glasser (my grad student at the time) for his vision and tenacity in driving the project to fruition. Of particular importance is that our ‘areal classifier’ approach allows parcellation of individual subjects, as long as a sufficient amount of high-quality multimodal imaging data has been acquired.
Several interesting next steps spring to mind. Applying the HCP multimodal parcellation strategy (using the areal classifier) to subjects from the Lifespan HCP Development and Aging projects should reveal whether some cortical areas get larger, smaller, or change their connectivity with maturation and/or aging. Applying the same approach to the Disease Connectome datasets will hopefully reveal areal differences related to brain disorders, and could potentially serve as valuable disease-specific biomarkers. We also hope that investigators in a variety of other arenas use our freely available multimodal parcellation (and the areal classifier, once it is publicly released) to aid in their own research projects by more accurately localizing and analyzing various phenomena and regions of interest.
Finally, I am hopeful that the HCP multimodal parcellation and associated connectivity-related data will enable an important test of my 1997 hypothesis that axonal tension drives cortical folding. The key question is whether folding patterns in individual HCP subjects can be predicted by an analysis of ‘parcellated connectivity’.
JC: You’re considered an “activist” for data sharing. How did this passion begin, and what role did you think large-scale data sharing would play in brain research?
DVE: I became interested in data sharing in 1989-90, when I served on an Institute of Medicine committee that generated a report “Towards a National Neural Circuitry Database” and helped launch the “original” Human Brain Project in 1993. The HBP was led by two visionaries, Steve Koslow and Mike Huerta, and it emphasized data sharing from the outset. My Human Brain Project grant was first funded in 1994 and is now in its 24th year, with a sustained focus on informatics tools and data sharing.
Data sharing has been spectacularly successful and vital in genomics, proteomics, and other molecularly oriented domains of bioinformatics. In contrast, systems neuroscientists, and neuroimagers in particular, have been slow out of the neuroinformatics gate. This is finally changing, thanks in part to the success of several large-scale neuroimaging projects, including the HCP and OpenfMRI, that focus on sharing unprocessed data but also (especially for the HCP) ‘minimally preprocessed’ data.
Sharing of large-scale neuroimaging datasets is just the tip of an important iceberg. Another important objective is to facilitate sharing of extensively processed data, such as data associated with published figures. To address the challenge of organizing and sharing complex datasets, our neuroimaging visualization and analysis software, Connectome Workbench, uses ‘scene files’ to store all of the information needed to replicate exactly what is displayed in published figures. Investigators can upload scene files and their associated datasets to the BALSA database developed by my lab. To see BALSA in action, visit the BALSA website.
JC: What do you foresee as a next “big thing” in brain mapping? An exciting new research direction or development?
DVE: It’s important to note that many “big things” consist of many small or medium advances that work well in combination. For example, the success of the HCP stems from numerous advances in data acquisition, analysis, and sharing, only a few of which were ‘big’ (such as “multiband” imaging for fMRI and diffusion imaging). I hope that one big thing in human neuroimaging over the next few years will be an accelerated transition, in which a large majority of investigators adopt the best available among existing approaches to data acquisition, analysis and sharing rather than sticking with ‘traditional’ methods that have been shown to be sub-optimal yet still dominate the field. Beyond that, I anticipate continued exciting advances in neuroimaging methodology, some of which may be game-changers in terms of acquiring and analyzing high resolution data at the level of individual cortical layers and columns.
Regarding invasive brain imaging in animal models, I stand in awe of the explosion of new methods of charting brain structure, function, connectivity, and gene expression at both the microscopic and mesoscopic level. These advances have far from run their course, and indeed are likely to accelerate through ongoing investments such as the BRAIN Initiative and major private funding sources (e.g., the Allen Institute and the Chan-Zuckerberg Initiative).
JC: Despite how much we know about the brain now, what do you consider to be the biggest challenge(s) for brain mappers, especially in this age of “big data”?
DVE: It seems that the more we know about the brain, the more we realize how much remains to be deciphered. The era of big data brings an exciting but daunting challenge to center stage: how can we weave together truly staggering amounts of complex data at different scales in space and time? Capitalizing fully on these vast treasure troves of information will require major advances in neuroinformatics and computational neuroscience. I predict that over the next several decades neuroinformatics and computational neuroscience will be radically reshaped, and in turn will have a transformative effect on our ability to understand the brain in health and disease.
JC: What two pieces of advice do you have for young and aspiring scientists?
DVE: First, stay close to your data! In general, many complex processing steps lie between the initially acquired data and what ends up in a publishable figure and its associated quantitative analysis. For your own research projects, know exactly what happened to your data; be your own devil’s advocate regarding methodological problems and potential biases that might impact your interpretation; and temper your conclusions accordingly. When attending to the research of others, be constructively critical but as even-handed as possible.
Second, gain as much depth and breadth as possible for whatever research area you’ve chosen. This is very challenging in modern neuroimaging, so it is also important to network closely with others who can share complementary expertise needed for whatever project(s) you’re working on. Fortunately, a growing on-line community and associated resources can allow everyone to participate, even those from small institutions and/or laboratories.
[OK, this is one over the limit, but I can’t resist!] Follow your scientific passions! In today’s intensely competitive academic environment, one key for success is to be passionately interested in the scientific and technical issues you’re working on, so you can sustain the drive and energy to make it through the ups and downs of a research career.
By Shruti Vij
To advance human brain mapping, we need to identify novel facets of brain function and understand how they are affected in clinical disorders. Like any other field, we rely on young investigators to bring innovation and creativity. And when we find stellar individuals who contribute to the advancement of the entire field, we reward them with a Young Investigator Award! This year’s OHBM Young Investigator Award winner is one such impressive researcher, with a reputation for making significant contributions to cognitive neuroscience – Lucina Uddin.
Katherine Karlsgodt from UCLA, who nominated her for the award, introduces her as “a researcher with remarkable focus and productivity”, highlighting that:
A remarkable feature of Dr. Uddin’s work is her ability to bring together sophisticated neuroimaging analytic approaches with important and insightful theoretical questions. As one example, over the last several years she has developed and tested a novel model of network function focused on the role of the insula (and the salience network as a whole) in large scale brain network dynamics. She has supported this model empirically with a series of innovative papers in high impact journals, presenting the theoretical development of her model and its relevance to understanding autism. Consequently, she has become a sought-after speaker for many national and international courses on neural connectivity, an impressive accomplishment for someone relatively early in her career. I have no doubt that she will continue to apply her strong work ethic, acute intellect, and leadership abilities to this project and that the results will have considerable impact on our field.
As a postdoctoral fellow in her lab, it is an immense personal pleasure to interview Lucina Uddin and to discover the makings of a talented Young Investigator.
Shruti Gopal Vij (SV): Congratulations Lucina on winning the young investigator award at OHBM this year! Tell us a little about yourself --- where you come from and how you grew up.
Lucina Uddin (LU): To make a long story short, I was born in Bangladesh and my parents immigrated to the United States when I was less than a year old. So, I spent most of my time growing up in Southern California, then moved from coast to coast for my postdoctoral training at NYU and Stanford, until I finally ended up here in Miami!
SV: Do you like it here?
LU: I love it here. I live on the beach and I really enjoy it!
SV: What was your first reaction when you were notified that you were the winner of this year’s OHBM Young Investigator Award? How did you feel?
LU: I was really happy because it is always great to have your work recognized. I was also surprised mainly because if you look at the previous 21 years’ awardees for the Young Investigator award, I don’t fit the profile of being a white male researcher. So, I was really surprised to be honest! But I have to say that I think the OHBM leadership has done a wonderful job in recent years in addressing issues of gender and diversity in the organization both at the program committee level and at the leadership level. So, I am happy to see these issues really come to the forefront now.
SV: When did you start thinking about cognitive neuroscience and developmental neuroimaging?
LU: I went to UCLA, which is a really large school with over a hundred and thirty majors for undergrads to choose from. It was overwhelming, but at the same time I did what I guess any 18-year-old would do. I looked at the list and thought about which one seemed cool to me. Neuroscience jumped out at me, and so I picked that as my major and to this day I haven’t regretted that decision. And then as all my friends started preparing for med school, I realized that I didn’t necessarily want to become a doctor. The other career path that seemed viable was research, and I ended up having the good fortune of staying at UCLA for my PhD with a great group of mentors who basically let me do what I wanted. That was where I got to just explore cognitive neuroscience, learn from the experts and figure out what really interested me. And I am still doing it to this day.
SV: Why do you feel so passionately about developmental neuroimaging and autism research?
LU: Autism is a very interesting disorder. I was very excited when I got the opportunity to do my first neuroimaging study on autism as a graduate student, collaborating with Susan Bookheimer and Mirella Dapretto. It was just fascinating to me how different individuals with autism are from each other. There is such heterogeneity in the disorder, and there are also varying levels of abilities. There are severe impairments in some individuals with the disorder, and in other cases there are exceptional skills. I just thought it was a really unique condition and I really wanted to learn more about the brain basis of it. So I continued that research as a postdoc and still do it now as a PI.
SV: What has been your scientific approach to excellence?
LU: Oh, I don’t know about that! But I really value collaborations with friends and colleagues all over the world and I encourage students and trainees at the lab to look outside of the lab for growth and learning. The reason for that is that I have a background in neuroscience and cognitive psychology, and because I am trained in a particular way, there are some things that I am great at and other things that I really struggle with – statistics and programming are not my strong suit. So, I always try to surround myself with computer scientists and engineers and other people that I can learn from and hopefully we can have a symbiotic relationship. At least that’s the way I see it. But I think everyone stands to benefit from collaborations. And I try to push that in all of our projects.
SV: How have your life experiences affected your mentoring methods?
LU: I mean, being a mentor is like being a parent (I imagine), except you have a lot more kids. So you say “I am not going to do it that way” or “I will try to emulate this mentor that I really enjoyed”. I think I have learned a lot from everyone I came in contact with. My graduate advisor Eran Zaidel always brought food to lab meetings and I thought “That’s a great idea! Why shouldn’t there be food at lab meetings?”. We always had great discussions and he always let us pursue exactly what we wanted to. So, I have taken that approach quite a bit, which is trying to figure out what people want to study and making sure that there are no obstacles for them. Trying to put people in touch with the right collaborators for the right idea, and just being open to new ideas. And that means people are going to fail, and I am open to that. I think everyone needs to fail and learn from failure. So, I think my own mentoring has been sort of hands-off. Do what you want, and I am going to try to point you towards the resources that will help.
SV: Have you had a singular inspiration in your life?
LU: Well I have to say that it was my father. He came to the US at age 34 to complete his PhD in comparative literature. He basically started his life all over again, learned a new language and experienced a whole new culture. And you know a lot of immigrants do that and succeed. It is definitely a lot harder to do than what I had to do which is live in the same country most of my life (with the exception of a six month teaching stint at the Asian University for Women in Bangladesh in 2010). So, I consider myself really lucky and try to make the most of it.
SV: I guess this year has been really exciting for you! You got your tenure and now the Young Investigator Award! What is the next step for you?
LU: Well, people always say once you get tenure, you can do whatever you want. But I have always done whatever I want. I haven’t done things in pursuit of tenure, or pursued specific types of research because they were trendy. If anything, I was working in resting state fMRI in 2006 when it really wasn’t trendy – it was very fringe. People sort of looked at you funny if you said you were looking into intrinsic networks or resting state networks. They didn’t take it seriously until much later. So I did what I thought would be interesting and I am going to continue to do that. And one way to capitalize on recent trends in open science is to take “big-data” approaches. The Human Connectome Project and many other sources have become available for researchers to mine. It is great for discovery and good for students. I think we are going to continue to go more into neuroinformatics and computational neuroscience. I am hoping that in a few years, I will have time to take a sabbatical and visit a few big computational neuroscience labs and learn more about it.
SV: Do you have any significant advice that you want to give to the OHBM mentees?
LU: My advice to mentees is: don’t take anybody’s advice! Because it is hard to glean truths from other people’s experiences. I think you have to be honest with yourself at every step of your career. People always ask for advice about personal questions, and nobody’s responses to these questions are necessarily going to be applicable to you. So I think you have to assess your own needs and wants in terms of both your career and personal life. And I have done things both professionally and personally that I am proud of, and others that I regret. So, I am not in any position to give any advice other than to say: make sure that you are honest with yourself when making big life decisions.
SV: Lastly, the University of Miami is now launching a Cognitive and Behavioral Neuroscience Program, which you are spearheading. Can you tell us a little about it?
LU: Yeah. I am excited that we are starting this cognitive and behavioral neuroscience program here. There is a real need for training in this area and we previously didn’t have the resources in place for that. But now we have the faculty and curricula in place. We have an excellent group and most of the necessary courses. We are excited to take graduate students and I am hoping to bring some of the great things I learned from UCLA’s neuroscience program to the University of Miami and continue to grow the program.
SV: Thank you for all these wonderful insights into what makes this year’s OHBM Young Investigator an accomplished researcher. Congratulations!
As the interview wrapped up and I headed out to my office next door, I was left with immense inspiration for the future of my own academic career as an immigrant woman in science!
Q&A with Marsel Mesulam
BY AMANPREET BADHWAR
Innovation doesn't come just from giving people incentives; it comes from creating environments where their ideas can connect.
June 28th 2017, Vancouver, 4:30 am
Looking out the window in that predawn hour, it was like peering into the black box of time itself, a place where the past, present, and future come together. But as I watched, the sky played with at least a thousand permutations of midnight blue before it burst into all hues of gold and red. It was at that moment that I thought to myself … “it’s gonna be a great ride today”. Because not only was I launching the OHBM Student and Postdoctoral Special Interest Group’s inaugural Mentorship Symposium that afternoon, an initiative that I ardently hoped would continue in future years, but I would also be interviewing Marsel Mesulam, a founding member of OHBM and a longstanding academic inspiration of mine. June 28th 2017 was the day I was going to board the OHBM time machine to record an oral history of neuroimaging from one of the pioneers of the field, and gain his perspective on its future.
AmanPreet Badhwar (AB): I would like to start by asking you about your background and why and how you became interested in neuroimaging.
Marsel Mesulam (MM): I started Neurology residency way back, I believe in 1973 or 1974, and I was at that time trying to make a choice between psychiatry, psychology, and neurology, so it was very clear that the common theme in all three was brain function. The question was how I was going to approach it. I decided to go into neurology, and that was largely due to the influence of Geschwind, who was then at Boston City Hospital and teaching at the Harvard Medical School, where I was a student. Neuroanatomy became my main research area. So, with Deepak Pandya and Gary Van Hoesen, I carried out a number of neuroanatomical studies.
I developed a method for tracing cortical projections in the primate brain. Neuroanatomy, and how it relates to animal models and complex behaviour, such as memory and attention, was my primary interest at that time. Since I was also a neurologist, I was trying to apply this information to my patients. When I was doing my training, there were no CT scans available. So at that time those of us doing very detailed neuroanatomy in monkeys, well when we tried to translate what we learnt into the human brain, there was really nothing.
My first CT scan experience was like an epiphany, where I said “my goodness”!! At that time CT scans were very, very noisy. So then, one development followed another, and there were better CT scans. The first serious demonstration of fMRI was reported in Jack Belliveau’s paper in Science, and just before that he actually called me and said “come to my laboratory --- you’re gonna see something that you would not believe”. This was a Saturday, and we used to have a summer place we used to go to, so I had to make a special arrangement not to go away, and instead went to Jack Belliveau’s lab.
That day the experiment did not work! Over the next few months or so everything started to work, and there was this Science paper that was a revolution in the field. I mean the ability to see function in the living human brain, and with decent anatomy! Those developments made me shift my laboratory focus from primate connection neuroanatomy to imaging, so I could apply what I learnt in the monkey to the human brain in neurological patients with specific lesions. That’s how I got interested in the field.
AB: What was it about the fMRI experience that moved you --- the protocol, the equipment, the potential knowledge to be gained or simply the esthetic beauty of the end product, the image? What inspired you to shift gears from traditional neuroanatomy to imaging?
MM: For people who do the kind of work that I do, which makes up the majority of OHBM members, our algorithm is “where in the brain does such and such happen”, i.e. localization of function from the ‘very, very simple one area, one center’ to a ‘very, very complicated and parallel distributed processing’. The beauty of new modalities, like fMRI and PET, is suddenly the ability to see function in a real brain in anatomical terms. And that was an absolute revolution! I cannot think of any other event in the history of cognitive neuroscience that made such a difference. So the beauty is inherent in the anatomy of the human brain. What the imaging did is that it allowed us to see and experience this beautiful complexity in a living human brain.
AB: You are a giant in the field of neurodegeneration. What do you feel are your most significant contributions?
MM: That’s a good question, a humbling question. You know I don’t think I would be able to name any single contribution in the imaging area. But in contrast, there are things that I did in my neuroanatomy days that were new discoveries. The beauty about neuroanatomy is that it changes only over millions of years. So once you discover something, it’s true for a few million years. And I have made some discoveries in neuroanatomy that were published maybe 30 to 35 years ago and are as true today as they were then.
In the field of imaging we are dealing with a much more flexible, shifting atmosphere in that things that are discovered evolve over time. Truth is much more elusive in imaging. So I have published lots of articles, but I must say that I can’t point to something and say that this discovery has withstood the test of time; that’s not the sort of thing that imaging lends itself to.
My first involvement in functional imaging was in the network for spatial attention; I did the neuroanatomy in the monkey, and then applied it to the human. In fact I wrote a paper a long time ago called ‘Large scale neurocognitive networks and distributed processing for attention, language, and memory’, that was one of the first times the word ‘networks’ was used for human cognition. It was based on my adaptation of the macaque monkey cortical interconnections to the human spatial attention network and hemispatial neglect syndrome. In that way I was able to apply the physiology and anatomy of networks that was described in the monkey to the human condition and then take it to functional imaging to confirm the relationship between the two. So I came up with a system of serial connections embedded in parallel connections to come up with the principles of distributed processing in cognitive networks. I think that is a lasting contribution even though it has been much embellished and improved, but it’s not a single finding, it’s a conceptualization. And then my recent work in primary progressive aphasia and language, I think that it is adding something to the field.
AB: Being involved with OHBM from the very beginning you see the big picture of human brain mapping as a field. How do you see neuroimaging moving forward? Do you still expect really big changes?
MM: Yes. Yes. It’s an issue of specificity, of resolution. Resolution in space. Resolution in time. But it’s important to have a sense of perspective. Sometimes when students and postdocs come, they tend to be very focused on methods. And they say, “we would like a 7 Tesla”, or we would like this and that, i.e. more precise measurements. I give them the following thought experiment: let’s assume I give you an apparatus that will allow you to put an intracellular electrode in all 40 billion neurons and record from them. What will you do with the information?
You really need no more precision than your conceptual framework requires. So every time there is a new development we have to take a deep breath and say, how do we integrate this precision with our concepts of brain function before going further.
OHBM has done that beautifully. When we started initially, PET scanning was dominant, fMRI was just coming in. And now we have more precision, more powerful magnets, more analytical methods. So there is a tremendous set of advances, alongside conceptual advances --- the task is to integrate the two, and see where we go. But as a society, I think this is a tremendous success story from where things started back in Paris in 1995 to where things are now. One thing I have always liked about the OHBM -- there is a tremendous amount of young people who get interested in this, and that’s great!
AB: I would be interested in knowing how OHBM came together as an organization. Also, how has it evolved over the years?
MM: In 1994 I got an invitation to attend a meeting in Paris. I believe it was Bernard Mazoyer who convened the meeting. It was something about imaging at that time, PET scanning was the main modality. At that time there was no OHBM, and it was not entirely clear where this meeting would go. It was absolutely a fantastic meeting! It left the desire to do it again. The decision was made to go to Boston in 1996. Bruce Rosen took responsibility to organize this meeting, and the story is that he had to mortgage his house to guarantee the financial backing of this meeting.
In 1996 there was a huge debate about what to name this group. There were people who said it had to be a Society, but there was a tremendous pushback because people said that a Society means that you have to pay dues, and we don’t want to pay dues. All we want to do is organize a yearly meeting. So it was called the ‘Organization’. Of course now we pay dues! With dues we started to have enough money that we could back the financial risk of the meeting without having to mortgage our houses. Then I joined the council. There, we wrote the bylaws that you now follow, to have a committee of the current president, the president-elect, and the past president, so three at each of the various levels. I actually wrote the rules for that. Of course it went through council discussions and modifications. Those were the formative years.
There were also some watershed meetings. For example, I was interested in writing a review or commentary from my point of view as a neurologist, and what made it possible for me to write this paper which was titled “From sensation to cognition”, was what I saw in the Copenhagen meeting. Because the Copenhagen meeting was the first time that fMRI really showed what it could do. That was the first time that we saw functional imaging of the face area, of memory circuits, it was a fantastic meeting. Every year has its own personality. These days there are a lot more topics, a lot more multidisciplinary interactions, and a lot of energy.
AB: Do you find there is a greater proportion of students now, or have the demographics remained similar?
MM: It’s pretty similar. There is always this tremendous energy coming from young people. Students, postdocs, and that’s the driving force behind the society. It’s always been that way. It was never top heavy! There was always the encouragement of young people to participate as much as possible. In fact, I have now been invited to be on the scientific advisory board; they took the ancients and put them together in a committee (MM smiles). One thing I suggested is that the council should have a representative from postdocs and students, so that your voice can be heard at the decision-making level, which may be helpful.
AB: That’s really wonderful to hear!!!
AB: It was wonderful talking to you, revisiting OHBM’s initial days, and discussing its role in the future of neuroimaging. Thank you for taking the time out of your busy schedule and making this interview possible.
Thanks also to Sarabeth Fox for filming.
By Nils Muhlert
Bernard Mazoyer, Professor of Radiology & Medical Imaging at Bordeaux University Hospital, has been at the forefront of the human brain mapping community for thirty years. In 1989, Bernard, Nathalie Tzourio-Mazoyer and Marc Joliot founded the functional imaging group (GIN-IRM), the first neuroimaging group in France.
As a founder member of OHBM --- indeed, organising what was to become its first meeting in Paris --- Bernard has seen the organisation grow from hundreds of members to many thousands. Here we find out about his background, and his views on why OHBM may now be ready to become a society:
Nils Muhlert (NM): Can you tell us about your route to neuroimaging?
Bernard Mazoyer (BM): My initial background was in mathematics, which I completed with a PhD in biostatistics. I taught maths for a few years but I was more attracted by maths applied to biology and medicine. So I decided to go to medical school and started doing research in a medical nuclear imaging department. My first project in 1979 evaluated the paths through which blood moves from outside to within the brain by measuring the transit time of a radionuclide injected in the carotid artery using brain images provided by a gamma camera.
Soon after my MD, I spent 2 years as a postdoc at the Lawrence Berkeley Laboratory where I worked on advanced PET and MR instrumentation and image processing. When I returned to France in 1986, Marcus Raichle and his colleagues from St. Louis had invented the PET O15-water brain activation mapping method and I was hired by the Atomic Energy Commission to implement and develop applications of this method. With my colleagues from CEA, namely my wife Nathalie Tzourio and Marc Joliot, we founded the GIN, the first research unit in France devoted to human brain mapping (HBM). Thirty years later we are still working together in the HBM domain.
NM: What do you feel have been the real breakthrough findings within GIN-IRM?
BM: Hard to answer that question! The GIN has been in the field since 1989 and has contributed over 200 articles. Besides, as with several other pioneer research groups in the HBM field, over its lifetime the GIN has gathered researchers from a variety of fields, from neuroimaging methods to cognitive (language, attention, mental imagery) and clinical (schizophrenia, Parkinson disease, Alzheimer disease) neuroscience. If I were to select the one study that I believe has had the highest impact on the field, I would certainly put forward the AAL (Automated Anatomical Labeling): the atlas of anatomical ROIs.
In the mid-1990’s the brain mapping community had largely adopted the stereotactic averaging approach, but there was no standard for labeling activations. On our side, we were interested in individual variability and thus very eager to match structural MR with PET. The AAL was conceived by Nathalie to solve these issues. Building upon a longstanding collaboration with the French neuroanatomist Georges Salamon, Nathalie designed an atlas of ROIs with sulcal limits, literally spending days tracing the sulci and gyri on Louis Collins’ brain MRI slices. This quite tedious work provided the community with the reference anatomical labeling method it needed. Amazingly, in 15 years, AAL has reached over 5,000 citations and has been adopted not only by the HBM community but also by many clinicians.
NM: You’ve been involved in OHBM since the very beginning. Initially, what were you looking to achieve, or what particular interests did you want to highlight, through OHBM?
BM: The main goal of the first OHBM meeting in Paris in 1995 was to gather together the different communities involved in human cognitive neuroimaging. In the early 1990’s, PET was in its golden age thanks to the O15-water blood flow mapping, functional MRI was just born, MEG was being developed and EEG-cartography was just starting. Cognitive process mapping was the common denominator for all, but was not clearly identified as a field of research of its own.
As a matter of fact, there were no meetings or professional societies where these different communities could meet. In particular, methodological issues were very important but there was rarely an open forum for discussions. So Paris was really designed as a place for exchange between neuroimaging communities with the long-term prospect of gathering all communities within a common scientific society.
NM: How do you feel the organization has changed over the years? Do you feel Alan Evans’ famous OHBM helmet can stay safely locked away in his office, or have there been times when the debates may have required it?
BM: The main change that has happened over the past 20 years is certainly the sustained development of OHBM, in terms of membership, meeting attendees, and spectrum of activities. Membership has reached 2,000, and while 900 people showed up in Paris in 1995, an average of 2,500 have attended the recent OHBM meetings.
More importantly, we started as an organization, i.e. a structure focused on setting up an annual meeting. Today, my personal view is that we are now a society, i.e. a group of individuals with a large spectrum of shared activities besides the annual meeting. To name a few, we have regional chapters, special interest groups, committees on gender and diversity, multimodality, education, and career, and we are developing relationships with societies sharing some of our goals. This development has been progressing over the past 20 years and came about as the result of demands from the OHBM community.
I am not saying that some strategic options and their implementation did not raise concerns and debates. But, apart from that very hot first business session in 1996 in Boston, this development has been discussed and conducted in a constructive and peaceful atmosphere. So yes, I consider it very unlikely that Alan’s helmet will ever be used again.
NM: We’ve recently found out that OHBM 2018 will now be held in Singapore. How did this decision come about – and what can we look forward to in the Lion City?
BM: Due to the continuous degradation of relations between North Korea and other countries and repeated demonstrations of military threat, both the OHBM office and council members have, since June, received numerous messages of concern from OHBM members and sponsors about holding the 2018 meeting in Seoul. According to our bylaws, only the council can decide on the annual meeting location, and discussions between council members have been under way for several weeks.
In September, the decision to move the meeting away from Seoul was approved by a very large majority of Council members. But it has been a difficult decision to take for every one of us because we were all well aware of, and very grateful for, the extraordinary job done by our Korean colleagues within the Seoul local organizing committee. However, we were also conscious that the meeting had to be moved away from Korea in order to give everyone a chance to attend next year’s meeting. This was the basis for our decision.
The choice of Singapore was less difficult, first because we wanted the meeting to stay in Asia, and second because Singapore was previously shortlisted as a potential host for OHBM2018. So, with the help of our Singapore colleagues, who were very responsive in forming a local committee, we all expect to have a great meeting in Singapore despite the overlap with the ISMRM meeting. As always, it will be attendees who make the OHBM 2018 meeting a big success.
NM: As OHBM Chair, what do you hope to achieve during your tenure?
BM: My first hope is to successfully lead the building of the new OHBM strategic plan for the next 3 years. It is Karen Berman who pushed the idea of having a strategic plan and it is under her leadership that the first OHBM strategic plan (2015-2017) was conceived. This plan has been instrumental in implementing fundamental elements of OHBM functioning and development. To name a few, gender and diversity, the increased role of students and interactions with other societies have been essential and successful components of the first strategic plan. In the new plan, I hope we will include among others very important topics such as developing HBM science in countries with limited resources, promoting the role of brain mapping in education and healthcare, and open science.
My second hope is that OHBM becomes a Society. My view is that an Organization is a structure set up with a defined goal whereas a Society is a gathering of individuals sharing common interests and values. I believe that the past history of OHBM and its current large spectrum of activities both advocate for becoming a Society.
My third specific hope is to have multimodality increase its place and visibility within OHBM. We need to be more open to methods other than MR and to be at the forefront of combining brain signals. But we need also to promote multimodality in a broader sense, namely combining neuroimaging signals with signals obtained at other scales, from genes to behavior, as well as in species other than humans. This, I think, is one of the main aims of OHBM future development.
NM: Last, what do you see as the main challenges facing OHBM over the next 5-10 years?
BM: OHBM is still a young “Society” and its steady development over the past 20 years has been fueled by the advent and development of in vivo HBM tools and related neuroscience research at the systems level. This field of research is expanding very rapidly. In my opinion, the major challenge facing OHBM in the future will be to maintain the coherence of the HBM community while attracting scientists from other domains that are now essential for understanding brain systems. Maintaining the coherence of the HBM community is a challenge by itself: given the size of OHBM, the risk of splintering by groups whose special interests are insufficiently represented is very high. Meanwhile, attracting scientists from other domains is a challenge as well, as it will require giving them the space they need within OHBM.
NM: Many thanks Bernard - we look forward to OHBM 2018 in Singapore!
by Agâh Karakuzu
The 5th OHBM Hackathon was an official component of the 23rd Annual Meeting of the OHBM held in Vancouver, June 25-30th 2017, with support from two leading open science institutions: the McGill Centre for Integrative Neuroscience, and the Stanford Center for Reproducible Neuroscience.
Imagine entering a room full of people with flashy stickers on the back of their laptops. Gathering around tables, they stare at screens full of code, colorful text on a black background reminiscent of the code rain from the Matrix trilogy. Welcome to the control room of computational neuroscience! There is a reason why we call these neuroscientists ‘hackers’: they are hacking to provide their colleagues with accessible and powerful computational methods, or whatever means necessary, to crack the brain code.
While trying to wrap my head around a way to explain the hackathon concept, I received an email from the Free Software Foundation. It was an article by Richard Matthew Stallman (‘rms’, considered by many to be the founding father of free software), explaining why hackathons should insist on developing free software. Such a lucky coincidence! Its opening lines were an eloquent description of the core mandate of hackathons: “Hackathons are an accepted method of giving community support to digital development projects. The community invites developers to join an event which offers an encouraging atmosphere, some useful resources, and the opportunity to work on useful projects.”
OHBM Hackathons are organized by the OHBM Open Science Special Interest Group (SIG), to achieve open sharing of ideas, data and tools between members of the OHBM community. In line with this mission, most projects in the hackathon aimed to improve analyses using free software libraries, releasing computational tools from the grip of proprietary software. For example, the AFNI Love project team built some of the most complex tools for Analysis of Functional NeuroImages (AFNI) into nipype, an open-source Python project that aims to piece together algorithms from different packages into a single processing workflow. Managing to complete the project in only three days, this team demonstrated how the hackathon could bring people together to create useful new software, while also creating networking opportunities to connect novice and more experienced hackers, and planting the seeds of long-term, more ambitious projects. This is well summarized in the following tweet by Pierre Bellec, the chair of the OHBM Open Science SIG:
Pierre Bellec has put his heart and soul into the hackathon for years, and this dedication has clearly paid off. In discussing this year’s hackathon he told me: “Vancouver, that was beyond anything I’ve seen so far. Because, there, the hackathon had matured and taken on a life of its own. I feel like the previous hackathons were more like experiments, a recipe in the making. This time around the community jumped right in, everybody knew their part and seamlessly integrated the new members. By the end of the event, there was no telling the newcomers from the veterans.” Indeed, the level of productivity and communication in Vancouver was off the charts. After talking to Pierre, I spotted my friend Gregory Kiar, whom I first met at the hackathon. The most exciting thing to him was that so many new people were coming into the hackathon and, in a short period of time, achieving something remarkable. He mentioned how five projects were merged into one, as captured by another member of the hackathon organizing committee, Kirstie Whitaker:
The OHBM hackathon is designed to encourage collaboration and sharing of knowledge, rather than competition. This drives productivity and enables people to learn and develop quickly. During three days of hacking, the teams worked hard to develop software that could make computational neuroscientists’ lives easier. Most importantly, several small and somewhat redundant individual projects will merge into an open source collaborative effort, with much higher chance of long term success and reaching high code quality.
For instance, the Mindcontrol, Abide, Deep-qc, Clowder and Freesurfer teams united their considerable hacking prowess under one banner: Clowdcontrol. This new user-friendly web platform can run demanding “deep learning” analyses on large, distributed networks of computers --- the metaphorical cloud. The PyNets project team developed a Python toolbox that automates functional and diffusion-weighted MRI network analysis. By leveraging existing open source tools, they were able to assemble an impressive array of methods that has kept growing since.
To conclude, the hackathon served to pull together researchers from different institutions as a team, while being open to add new members and committing themselves to push the limits of productivity during the event. While the previous sentence was sprinkled with GitHub jargon, the good news is that even if none of these commands ring a bell with you, you are always more than welcome to participate in the Hackathon. As long as you don’t let the jargon intimidate you, the Hackathon will help you gain useful skills on the fly, make new connections and broaden your academic horizons. As one of the participants noted during the wrap-up, the OHBM hackathon is helping to put collaboration and fun back at the heart of science, right where they belong. Hope to see you at the OHBM Hackathon 2018 in Singapore!
BY RUSSELL POLDRACK, 2018 OHBM EDUCATION CHAIR
The Educational Courses at OHBM are an essential part of the meeting for many attendees, and we are always looking for ways to make them more effective and engaging for a diverse group of participants. Educational courses are selected based on proposals submitted by the community, with submissions due on Dec 15. We seek a diverse set of presenters, and women and individuals from underrepresented groups are strongly encouraged to apply.
Historically, educational courses have been composed of lectures along with some time for discussion. This year we would like to encourage proposers of educational courses to consider the adoption of active learning approaches in their proposals. Active learning means many different things to different people, but in general it refers to approaches in which the student takes an active role in the learning experience beyond simply absorbing information from the lecturer. A substantial body of research has shown that active learning approaches improve educational outcomes and student engagement (see for example this commentary by Carl Wieman). Additional information about these approaches can be found at the CWSEI and the University of Michigan.
The spectrum of active learning is broad, and we encourage proposals that span the range of possible activities.
This document from the University of Michigan outlines a number of ways to incorporate active learning into the classroom, several of which could possibly be used in the context of an OHBM Educational workshop. At one end of the spectrum would be a fully active course in which a set of brief lectures is followed by hands-on group activities, with brief presentations by the groups at the end of the course. At the other end of the spectrum could be a standard lecture-based format in which the lectures include specific activities meant to engage the students more actively.
These could include:
One important resource that could be used in service of active learning is the library of videos of educational courses from previous years that are hosted by OHBM OnDemand. These could be used as resources for students needing additional background knowledge prior to the Educational Course day.
We hope that members of the OHBM Educational community will embrace the use of active learning in their proposed courses. We realize that it will require additional work beyond the standard lecture, but the science of learning strongly suggests that the adoption of active learning techniques will significantly improve learning outcomes for the community.
Q&A with Wouter Boekel
By Neuroskeptic & Chris Gorgolewski
At the 2016 OHBM Annual Meeting, a new ‘Replication Award’ was announced, aimed at promoting and rewarding reproducibility in neuroimaging research, and open science more generally. Here, we hear from Neuroskeptic, a prominent blogger and advocate of open science, on why replication studies are pivotal to improving the quality of our science. We then learn from this year’s recipient of the OHBM Replication Award, Wouter Boekel, about his winning study.
I’m very pleased to have been invited to write a few words on why replication studies are important in science. Congratulations to Wouter Boekel for winning the Award (for a study which I blogged about at the time it was published).
In my view, replicability is the essence of scientific truth. To say that a certain scientific result is true or valid, is nothing other than to say that someone, who correctly carries out the same methods, would be able to confirm it for themselves. Without the assumption of replicability, scientific papers would become merely historical documents – ‘we did so and so, and we observed so and so, but your mileage may vary.’
While most (although, I should acknowledge, not all) researchers accept the importance of replicability in principle, it is still common to hear it said that actually doing a replication study is unworthy of a good scientist. Sure, replications are all very well, but they’re not very exciting. Even some journal editors have been known to express this view, refusing to consider any replication studies for publication in their journals (although I think this is becoming less common).
The problem is that replications lack novelty, which is, at least for many people, the hallmark of science at the highest level (or at least, the highest impact factor). I think that this cult of novelty is unfortunate. While novelty can be a virtue in a scientific work, this is only true if the result is valid. A novel, but false, claim is unlikely to advance science, while a failure to replicate such a claim does.
Successful replications, too, are a contribution to science. It could be said that they ‘merely confirm what we already knew’, but in fact we can’t assume that a given claim will be confirmed, until it is. To build a road through the wilderness, we do need pioneers to map out a new path, but we also need people to check the path is sound.
WB: From today’s perspective, it is relatively simple to acquire questionnaire data or a computerized task in combination with a T1- or diffusion-weighted structural scan, so most of the early data on brain-behavior relationships came from these kinds of studies. At the time there were also increasing concerns coming from experimental psychology about problems with the replicability and reliability of research findings. An increasing number of publications started to warn of file-drawer effects, questionable research practices, and small sample sizes. There were also some notable cases of large-scale misuse of statistics and research methodology at the time, which further attracted attention to these concerns.
We found ourselves in the middle of a fast increase in the rate of published cognitive neuroscience articles on one side, and increasing concerns about reliability and the absence of replications on the other. This seemed the perfect environment in which to set up a replication study of some of the many structural brain-behavior correlations which had been discovered.
CG: …and what did you find?
WB: We tried to replicate a total of 17 structural brain-behavior correlation effects. Our Bayesian statistical results suggested there was reliable evidence for the absence of 8 of the effects, and none of the effects were reliably present when viewed through a Bayesian lens. We used some other statistics as well, including the p-value, which showed 16 non-replications.
CG: Was the definition of replication problematic?
WB: Yeah it was – this came back in the review process, and later in the commentaries and online discussions. We defined a ‘failure of replication’ as the inability to find a significant effect or a convincing effect in Bayesian terms. But that also means that we considered an absence of evidence to be a ‘failure of replication’. Maybe some other people would say “That’s a bit more ambiguous, you should reserve the ‘failure of replication’ only if you find convincing evidence that the effect is absent.” So, yeah, there were difficulties with that.
CG: So if you rephrased the categories into: definitely replicated, definitely not replicated, no effect, and somewhere in between, what are the numbers then?
WB: We found 8 effects where the Bayes factor was higher than 3. That means that the data were more than 3 times as likely to have occurred under the null hypothesis as under the alternative. For us, that number 3 was pretty convincing; in those cases we were satisfied that the effect was definitely not replicated. The other half – 8 effects – were more ambiguous. The Bayes factor for those tests was around 1, and mostly in favour of the absence of the effect, but not so much that we could make the claim that it was definitely absent. So, it was about 50/50.
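To make the Bayes-factor logic concrete, here is a toy sketch (our own illustration, not the statistics used in the paper) based on the common BIC approximation BF01 ≈ exp((BIC_alt − BIC_null)/2) for a simple linear brain-behavior correlation. Values above 1 favour the null of no correlation, and values near 1 are exactly the ambiguous situation described above:

```python
import numpy as np

def bic_bayes_factor_01(x, y):
    """Toy BF01 (evidence for the null over a linear effect) via the
    BIC approximation: BF01 ~= exp((BIC_alt - BIC_null) / 2)."""
    n = len(x)
    # Null model: intercept only (no brain-behavior relationship).
    rss_null = np.sum((y - y.mean()) ** 2)
    # Alternative model: simple linear regression of y on x.
    slope, intercept = np.polyfit(x, y, 1)
    rss_alt = np.sum((y - (slope * x + intercept)) ** 2)
    bic_null = n * np.log(rss_null / n) + 1 * np.log(n)  # 1 free parameter
    bic_alt = n * np.log(rss_alt / n) + 2 * np.log(n)    # 2 free parameters
    return float(np.exp((bic_alt - bic_null) / 2.0))
```

This BIC shortcut is only one of several ways to approximate a Bayes factor; the study itself used dedicated Bayesian tests, so this sketch conveys the interpretation, not the method.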
CG: Many of these comments came up after the paper was published. I know there was a commentary and a rebuttal bringing up concerns. How did you address them?
WB: Indeed, two commentaries (here and here) came out of these discussions and we subsequently wrote a rejoinder as well. There were also discussions online in Neuroskeptic’s blogpost. The issues were mostly about our sample size. We had just 34 subjects, 35 for some effects. That was a bit low compared to some of the original studies, which had over 100 subjects. That was raised as an issue in the commentaries. They definitely had a good point, and we just tried to reason that carrying out this sort of replication was new at the time, so any replication is better than no replication, even if the sample size was modest.
Another concern was about the differences in the pipeline that we used. We used FSL for the analysis of our structural data. Some of the original studies used SPM or other software. At the time we were familiar with FSL so we decided to use that. We assumed that volumes of grey matter would be similar using different software. It was demonstrated in the commentaries, however, that the algorithms used by different imaging software do impact volume measurements. In future, other investigators can look at the differences you get when using different analysis methods.
CG: Critically, you preregistered the methods for the study and notified the authors of the original studies beforehand. Were there any complications or challenges in doing this?
WB: Preregistration was pretty new back then. I think Cortex was just starting to set up their preregistered reports format and there were some other journals becoming interested in such a format as well. We decided to publish our preregistered methods on a blog. The main difficulty there was in deciding what to preregister, and what not to. There were no standardised guidelines yet so we had to think of our own. Indeed we also notified the authors of the original studies, who were very supportive and some even provided files that made our analyses easier and more comparable to the original study.
CG: And to wrap up, what advice would you give to people who are planning or wanting to publish a replication?
WB: I think it’s really nice to involve the authors of the original work – send them your plan for replication and find out whether you should use a different method, or just make sure that you’re doing the study correctly. That’s a good tip because it makes things a lot easier if you then find a null effect – it becomes more difficult for the original authors to find something by which they can discredit your finding. They agreed to the way you did it, so if you adhere to the preregistration then you should be fine.
In addition, if everyone would replicate all of their published experiments at least once, that would already provide the field with a large increase in reliability. With that idea in mind I would advise any researcher who is interested in replication to start by replicating one of their own published studies. This should be easier than replicating someone else's study, simply due to the familiarity with one's own methods.
I would also say – try to do it on a larger scale. This was one of our first replications, so we only had a small sample size. In future, having a larger sample would be an advantage.
CG: Thank you so much and congratulations once again.
The call for nominations for the OHBM Replication Award is now open. Deadline is January 12, 2018. For more information about the OHBM Replication Award, review the nomination criteria here and read our Q&A with Chris Gorgolewski. Neuroskeptic is a neuroscientist and a blogger for Discover Magazine.
Interview with Nathalie Regard & Roberto Toro
by AmanPreet Badhwar and Ekaterina Dobryakova
“...I wander all night in my vision,
Stepping with light feet, swiftly and noiselessly stepping and stopping,
Bending with open eyes over the shut eyes of sleepers …..
The female that loves unrequited sleeps,
And the male that loves unrequited sleeps,
The head of the money-maker that plotted all day sleeps,
And the enraged and treacherous dispositions, all, all sleep …..
I love the rich running day, but I do not desert her in whom I lay so long,
I know not how I came of you and I know not where I go with you, but
I know I came well and shall go well...”
-Walt Whitman, The Sleepers, 1892
The Mesopotamians archived their dreams on clay tablets; the Egyptians wrote theirs down on papyrus. While many throughout history have recorded dreams, at OHBM 2017 we were provided with the opportunity to view their tangible form in 3D representations of brain activity. This art and neuroscience initiative, entitled Dream Sessions, was undertaken by professional artist Nathalie Regard in collaboration with neuroscientists Roberto Toro and Guillaume Dumas. Creative pieces from Dream Sessions (both 80 Days in Dreams and 101Nights) were showcased at this year’s OHBM art exhibit, entitled “Levels of Thought”, along with artworks from other artists and neuroscientists.
Initiated in 2011, the ongoing goal of Dream Sessions is to confront the subjective perception of Nathalie’s dreams with an objective, quantitative analysis through brain recordings. This journey to bridge the dream world with brain mapping required considerable effort, and had to overcome the challenges of being based in two different continents. Nathalie resides in Mexico City, while both Roberto and Guillaume are based in Paris. We caught up with Nathalie and Roberto at this year’s Art at OHBM initiative in Vancouver, Canada:
Q&A with Nathalie Regard and Roberto Toro at OHBM 2017
Nathalie Regard (NR): Hello, I am glad to introduce our new project which is kindly supported by Electrical Geodesics Inc. and The Neuro Bureau. This is a protocol that is half art and half science. My friends Roberto, Guillaume, and I are doing the piece 101Nights. For this project we’re recording EEG while I dream over a period of 101 nights.
Ekaterina Dobryakova (ED): What encouraged you to do this work, and how do you combine neuroscience and art?
Roberto Toro (RT): Nathalie has been keeping track of her dreams for more than 20 years, writing down every single dream as soon as she wakes up. She is now able to remember most of what she dreams.
With this project, we are creating a dialogue between what she does and what I do. In our first project she slept with an EEG headset for 80 consecutive nights. I wrote a little program that played sounds – words, names of people she knows, etc. – at random times during the night. After she woke up we could read her dreams and find out whether some of those words had managed to enter into them. Because we knew the exact time each word was played, we could get the part of the EEG signal corresponding to that part of her dream. After that, for each word that entered into Nathalie’s dreams, I made a laser-cut bas-relief confronting the EEG recording with the text corresponding to that part of the dream.
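The epoching step Roberto describes — using the logged presentation time of each word to pull out the matching stretch of EEG — can be sketched in a few lines (a hypothetical illustration; the function name, window lengths, and array layout are our assumptions, not the project’s actual code):

```python
import numpy as np

def extract_epoch(eeg, sfreq, event_time_s, pre_s=1.0, post_s=4.0):
    """Return the EEG samples around one logged sound-presentation time.

    eeg:          array of shape (n_channels, n_samples)
    sfreq:        sampling frequency in Hz
    event_time_s: time (in seconds) at which the word was played
    """
    start = int(round((event_time_s - pre_s) * sfreq))
    stop = int(round((event_time_s + post_s) * sfreq))
    return eeg[:, max(start, 0):stop]
```

With the playback times saved by the sound-presentation program, calling this for each word yields the EEG segment corresponding to that moment of the night.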
I think the approach is interesting; it’s very different from my day-to-day activities in science, and offers me a different perspective on the data. For me, there is also a humble message: on the one hand there is such a rich subjective experience in someone’s dreams, with all those links to different aspects of what we are, and on the other hand, there’s so little that we can finally see in the data, even if we dig very deep into it. These bas-reliefs show us how far we are from reaching a true understanding of one individual’s subjective experience.
ED: So what is it that motivated this type of work, and what is it that made you bring it to the science realm from the art realm?
NR: Roberto and I have worked together for many years, probably since 1995. We met in Valparaíso (Chile) and started working together for one of Roberto’s mentors, who was the head of the science faculty of the University of Valparaíso at that time. We discussed a beautiful book, the Theory of Colours (by Johann Wolfgang von Goethe). That was the beginning of this long conversation that continues to this day.
RT: We have worked on many things with Nathalie. I started by making a video of her while she was doing a big mural painting in front of the Pacific Ocean in Chile. Later on, I recorded videos and created time-flattened pictures of Nathalie while she was painting in Paris. 101Nights is one in a series of works that we have been doing together. I really value these exchanges.
NR: Part of our collaboration comes from the ability of Roberto to build a chronology of things, something that he enjoys doing very much. This has been very nice for my work, which is often related to painting very large formats, or recording dreams for very long periods. Keeping a record of these processes would be very challenging for a video camera or a photographic machine.
RT: That is interesting, as my neuroscience work focuses on the study of brain development and brain evolution, so basically I work on tracking weird stuff through time!
NR: So we are made for each other (laughs).
RT: (laughs as well) So in addition to tracking brain development and brain evolution, I track the evolution of Nathalie’s art. I think I can add that to my CV!
This project is supported by Electrical Geodesics Inc. and The Neuro Bureau, a non-profit open initiative for international collaboration, in anything related to brains and new ways of thinking about them.
More information can be found at dreamsessions.org
With thanks to Sarabeth Fox for filming, and Catherine Monahon for onsite photographs.
OHBM 2017 ABSTRACT HIGHLIGHT: NOVEL DISEASE PROGRESSION SCORES AND SUSCEPTIBILITY LOCI FOR ALZHEIMER’S DISEASE
BY HUGH PEMBERTON
There are over 9.9 million new cases of dementia every year worldwide, which equates to a new case every 3.2 seconds. The discovery of susceptibility loci for Alzheimer’s disease has the potential to inform research hypotheses and could, eventually, lead to novel therapies. I sat down with Marzia Antonella Scelsi from University College London to discuss her abstract on Multi-modal Imaging Disease Progression Scores as Quantitative Traits in GWAS of the ADNI Cohort. In this work, they used a mathematical model to generate an individual score for each patient based on the stage of their Alzheimer’s disease (AD) progression. Using this score, they performed a Genome Wide Association Study (GWAS) to look for any genetic markers that may be driving disease progression.
Hugh Pemberton (HP): How would you explain your study to your non-neuroscientist friends?
Marzia Antonella Scelsi (MAS): It is commonly known that AD is very complex and involves several different pathological processes occurring at different stages. There also exist several genetic influences but with only one very well known risk factor. The heritability of the disease is not well understood so better tools are required for studying the genetics of AD. The research world is currently looking at the genetics of each pathological process separately. However, this is likely to give only limited insight into AD, without reflecting the true complexity of the disease.
We want to know what drives the onset of all the symptoms together and come up with a phenotype – i.e., a measure of the disease severity – that combines information about all the different mechanisms at play during the disease course. We integrated two different signatures of AD pathology into one measure and tried to assess the stage of AD progression for each patient (Graph A, Figure 1). Subsequently, we assigned a number to each patient based on this calculated AD progression (Figure 1B). What we get from a genetic study of this score is information on multiple disease processes at the same time.
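As a rough intuition for how several biomarkers can be collapsed into one per-patient number, here is a toy sketch: z-score each biomarker and project patients onto the first principal component. This is only an illustration of the idea — the study’s actual Disease Progression Score comes from a dedicated multi-modal progression model, not from PCA:

```python
import numpy as np

def toy_progression_score(biomarkers):
    """Toy severity score: z-score each biomarker column (e.g., amyloid
    load, hippocampal volume) and project patients onto the first
    principal component. Illustration only, not the study's DPS model."""
    z = (biomarkers - biomarkers.mean(axis=0)) / biomarkers.std(axis=0)
    _, _, vt = np.linalg.svd(z, full_matrices=False)  # PCA via SVD
    return z @ vt[0]
```

The appeal of any such combined score is the one Marzia describes: a single quantitative trait per patient that reflects multiple disease processes, which can then serve as the phenotype in a GWAS.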
HP: Why do you think your abstract was selected as newsworthy?
MAS: I think it’s especially newsworthy because overall we need to look at Alzheimer’s disease from a different perspective at the moment, and in particular at the genetics behind it. Many studies to date have been quite crystallised in the sense that each study proposes slight variations on the same theme, or bigger sample sizes, thus always centering around the Genome Wide Association Study (GWAS); but the imaging genetics community feels the need to integrate multiple, biologically meaningful sources of information in the underlying models, and that’s what we’re aiming to do here. Eventually, we hope our results can inform solid research hypotheses to be followed up in the wet lab and animal models, and that could ultimately lead to benefits, speeding up the drug discovery process. We’ve found new susceptibility loci that could potentially translate into the clinic several years down the road. Our method is also unique because we used a mathematical model originally intended for basic Alzheimer’s research and exploited its outcome for translational application: first, by giving it a clinical interpretation, the Dementia Progression Score (DPS), which conveys the progression of each individual’s dementia, and then by using it in a genetic study.
HP: You mentioned the susceptibility loci that you found – could you explain a bit more about this?
MAS: We found 2 susceptibility loci, one on chromosome 4 that is very tightly linked to gene expression in the hippocampus and another on chromosome 22 that is related to amyloid accumulation and age-related macular degeneration, a neurodegenerative disease of the retina in the eye. So these two genes are both linked to the biomarkers that we used.
HP: What relevance do you think these findings have?
MAS: For chromosome 4, this finding underlines the fact that structural variation influences disease through a cascade of mechanisms and gene expression is key, so you need to intervene at the molecular level to establish a normal gene expression pattern that has been disrupted by these variants. For chromosome 22, the findings are more newsworthy because we appear to have two diseases, AD and age-related macular degeneration, that are apparently different but happen to share a genetic susceptibility locus. This means that, if we can target variation in chromosome 22 to find a potential treatment for one disease, we might be able to slow down the progression of the other, thus killing two birds with one stone, idealistically speaking!
HP: What are your next research steps going to be given these findings?
MAS: We’re currently working on refining this model since it can be problematic when only using a few biomarkers. We want to expand it to more biomarkers.
HP: What sort of other biomarkers would you like to include?
MAS: One of my colleagues has developed a reliable tool for segmentation of hippocampal subfields. Therefore, I am working on preliminary analyses to see if any subfields are differentially affected in late or early stages of AD in order to include them in the disease progression model, and hopefully achieve more fine-grained disease staging results. I’ll also add biomarkers like cognitive scores. While amyloid build-up is most likely a prodromal event and hippocampal atrophy is characteristic of mid-stage AD, for characterising later stages of disease progression I think cognitive scores are the best choice. The aim is to include biomarkers from all stages of AD progression in my model. We want to characterise the full spectrum of the disease and hopefully identify genetic variations underlying the whole course of the pathology.
 Iglesias JE, Augustinack JC, Nguyen K, Player CM, Player A, Wright M, et al. A computational atlas of the hippocampal formation using ex vivo, ultra-high resolution MRI: Application to adaptive segmentation of in vivo MRI. Neuroimage 2015;115:117–37. doi:10.1016/j.neuroimage.2015.04.042.
BY JEAN CHEN
Video interview conducted by Pamela Douglas and Martin Lindquist
Many working in or with MRI know about Mark Cohen’s contributions to neuroimaging. He played a critical role in developing practical echo-planar scanning, ultra-fast MRI applications, contrast-based and BOLD functional MRI and applications of linear systems analysis to increase fMRI sensitivity and resolution.
As the creator and director of the UCLA/Semel NeuroImaging Training Program Mark has pushed his students to an integrative understanding of the role of imaging in neuroscience: the use of images as hypothesis tests and the relationship between blurring, convolution, statistical error and inference from images. We spoke to Mark to find out his background and the rationale behind the neuroimaging training program.
Martin Lindquist: Seven or eight years ago Mark approached me about teaching a class at his annual neuroimaging training program at UCLA. It lasts two weeks and is very intensive. It’s an amazing program, allowing people from many different disciplines to learn from the ground up. Everything involved with regard to neuroimaging -- how to collect the data, k-space, acquisition, reconstruction, pre-processing and analysis -- is discussed in the course. It really brought up the skillset of the students and provided a lot of exposure to what may have been a new field. They worked in teams, developing collaborations for life as they were learning, collecting data together and working together. When OHBM started this award it seemed as if it was written for Mark. Everyone wanted to be involved in co-nominating him. We were all very impressed with what Mark had achieved.
Pamela Douglas: Mark, can you tell us what gave you the idea for the neuroimaging training program?
Mark Cohen: Back then it was pretty clear that there was a lot of bad science going on. It was clear to the folks at the NIH that there were problems; a lot of people came into the field without a core understanding of the type of work that they were trying to do. Because the field is so intensely multi-disciplinary, it was clear that we needed to provide people with the means to access a lot of material outside their original core skillset. For about three years, starting in 2002, I lobbied the NIH, and said “it’s a really expensive field to do science in. You need to set some sort of bar for the level of knowledge that people need to have!” I was very fortunate that somebody in there heard my pleas. In particular this was supported by Steve Grant, who was previously my program officer.
Steve was a huge advocate for my idea. He actually went to various institutes at the NIH and got money to put together an RFP (Request for Proposals) to develop a neuroimaging training program. Amongst the things we wanted in there were that it had to be open to people being brought in from outside fields, whose skillsets weren’t traditional neuroscience. It was critical to me that it had to be internationally focussed.
We were able to fund students at the US graduate level and higher. We were also able to fund international students through what they called an R-90 mechanism. So, you could actually take in students and give them stipends from outside of the country. That was really incredibly valuable. The other benefit of the grants was an award of $100,000 a year to put on a summer program. The summer program was really our major form of outreach. Pamela (PD) was a student in the original program.
Pamela: The training program is a mixture of hands-on and theory. We were able to go from a few op-amps and then to suddenly have a working EEG system. That mixture of applying everything was really effective. Was there something or someone that inspired you to be able to integrate these different mechanisms of teaching?
Mark: I think we always felt in the lab that this kind of science was group science. Again, it was so multi-disciplinary – it was essential that you turn around and respect the person on the other side – whether they’re bringing you knowledge from squishy crafts like psychology or boring stuff like MR physics, or even worse, Math. You have to understand that all those domains are critical. Part of the process of learning is to understand what those pieces bring to the table, to bring ownership of their flaws, but also ownership of the fact that it’s important knowledge that’s needed in the team.
When we set up the summer course, one of the fun things we did was to organize people into groups and get them to define a research question. We taught them an entire week’s worth of methods, then set them free on the scanner. I had a terrific team of people at UCLA who would sit there and make the experiments happen. So, someone would come up with a crazy idea for an experiment and some talented person like Cameron Rodriguez or Agatha Lenartowicz would write all the code, and build the devices. The students would help build these things. The students were then organized into their teams, largely by my colleague Susan Bookheimer, who would say: “we need a psychologist on this team, we need a physicist on this team, and so on”. They would run the experiments. They would analyse the experiments in the second week of the course, then present the results at the end.
The learning experience was terrific but I was also intensely proud of how many people went back with lasting collaborations, people that continue to research and publish together. That was a very successful part of the course.
Pamela: Do you have a favourite topic to teach, since you teach across domains? Is it MR physics?
Mark: No. I’ve been teaching MR physics for 32 years. I started doing that in the context of training at Siemens medical systems. This is a true story - I have fallen asleep giving a lecture in MR physics! There’s a point at which it just spills out.
For me the most important thing to teach is for people to look intimately at their data and get an eye for not only not being credulous about the stuff that’s in there, but also to look at it and say “that’s not the pattern that I expected.” There’s a frequent tendency for people to look at their data and say “it’s not the pattern I expected to see. What’s wrong with my data?” That’s “wrong-think”. Obviously the discovery comes from seeing where your data challenges you.
Pamela: Do you have any advice for how to become a better educator?
Mark: You need a mission involved in the teaching program. You shouldn’t be thinking just about “what is the field? How do you teach the field?” but instead think about “what practical knowledge do you need?” I try to organize the course around the needs of the students in the room. The more time you spend doing it, the more the flow comes. You don’t get nervous after you’ve taught 1000 times. Also, not everybody’s best teaching is done in front of a podium. A lot of people’s teaching mode is best one-on-one. Recognizing where you’re most effective is, I think, really important. Mentorship and direct one-on-one is, at one level, at least as important as standing up in front of a thousand people in a society meeting and yammering at them.
Jean Chen: Amongst other achievements, you were involved in the first successful experiment to produce fMRI activation maps in the human brain (the famous cover of Science). What led you to become a neuroscientist? What would you say is behind your success?
Mark: I entered college as a Music major, with an interest in performing music. Soon after, I realized that this was not the way to pursue my personal interest in music, so I switched out of that and decided to turn my attention to understanding human hearing. At that point, I became invested in Neurophysiology. I did a lot of classic lab work in neural labs, learning about electrophysiology, dissecting animals, attaching nerve endings to electrodes and so on. However, I became unhappy with that work for a number of reasons, most importantly because my interests were more cognitive rather than physiological.
This was at a time (early 1970’s) when the whole field of neuroscience was suffering from the theory of behaviourism (John Watson, B. F. Skinner), which defined all behavioural science only in terms of inputs and outputs and discredited the idea of there being something conscious inside the brain. It left me disaffected with the field, and I switched into Electrical Engineering, which embodies another aspect of Music.
I entered Electrical Engineering (MIT) with an interest in musical applications. But over time, I re-entered Neuroscience, completed a Master’s in Human Biology (Stanford), and then a PhD in Neuroscience (Rockefeller University). My early work was in classical neuroscience, but as I became more aware of the technology surrounding neuroscience, I realized that I could attach electrodes to every nerve in a rat and still have no idea how it feels.
So, after my PhD, I went on to work on medical technologies, since I have the right background (Engineering and Neuroscience). I had an idea to build small bedside instruments like physiological monitors. To get experience, I took a job in MRI -- but thought it wouldn’t last because the MRI was so impractical, so incredibly expensive and slow (chuckle).
I spent a long time working at Siemens MRI but realized we could do better. So I spent virtually my entire time trying to speed up MRI. We made some really good progress on fast 3D imaging using modifications of the FLASH sequence. Then I went to a conference on fast imaging (run by E. Mark Haacke), and heard Richard Rzedzian and Ian Pykette present EPI (echo-planar imaging) of the heart. It was very grainy, low-resolution imaging of the beating heart, and it electrified me.
After that, I quit my job at Siemens to go work with these guys on EPI. Their big interest was cardiac imaging, and I took over as Director of Applications for their company. I invested a lot of time in developing brain imaging applications (since 65% of MRI applications were in the nervous system). We took the spatial resolution from 4.5 mm to 2 mm with negligible scan times, and built up the protocol for brain imaging. I was in charge of translating this technology, and was fortunate to have hooked up with the MGH group (Bruce Rosen, Arno Villringer, Van Wedeen). We developed the first practical diffusion MRI by using EPI. We started on tractography, but then we got interested in perfusion. Jack Belliveau had suggested that EPI could be used for fMRI applications. Jack came to our Lab on Advanced MRI, and we did the first fMRI experiments (with Robert Weisskoff, Jack, Bruce, and Arno). We shot some contrast in our volunteer and got our first fMRI images. Every single person in the room understood what had just happened --- we had a way to look into the human brain. It was a huge door opening -- all the things I had wanted to do, I could finally do! It was a combination of serendipity and vision!
After those fMRI experiments, I was hired by Harvard. As it happens, I actually had some formal training in graphic arts and had the opportunity to create the Science cover. The story about this cover can be found here.
Jean: In this increasingly demanding and competitive research environment, can you provide two pieces of advice to new/emerging/aspiring scientists?
Mark: My top advice for young investigators is “trust your instincts”. You should pursue only the things that you feel truly passionate about. I learned tons of engineering (signal processing, etc) by building guitar effects panels. Also, learning to program computers was fun so I got good at it. I became interested in cognitive neuroscience because of my interest in philosophy early in life. The passions make you skilled, and allow you to find a way to apply your skills.
Jean: Finally, I understand that your current research interests go beyond neuroimaging, reaching out to astronomy and nanoscale imaging. How do these broader imaging domains come together in your mind, and what are some of the problems you’d most like to work on?
Mark: I am beyond excited about sparse sampling. This work has its origins at UCLA with people like Stanley Osher and Terence Tao. The MRI community knows about this --- with very sparse sampling, satisfactory images can still be reconstructed. However, there is a much deeper part of that --- every bit of information we receive is sparse, and our brains do a trick like in sparse reconstruction, filling in the missing information. We infer huge amounts of structure from the few pieces of information we see through our eyes. I believe there is something profound to be discovered from sparsity. My knowledge of it comes from my MRI background, but it goes beyond MRI. Most of us work in medical centres, but there are a lot of smart people around the university. Integrating my work across disciplines has brought me into Chemistry, Applied Math, Astrophysics and so on. As long as you understand our shared interests and understand the talent outside the neuroimaging world, the possibilities are truly exciting.
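The "trick" of recovering a signal from sparse samples can be illustrated with a toy iterative shrinkage-thresholding (ISTA) reconstruction — a minimal stand-in for, not an implementation of, compressed-sensing MRI:

```python
import numpy as np

def soft_threshold(v, t):
    """Shrink each entry toward zero by t (the sparsity-promoting step)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam=0.1, n_iter=200):
    """Recover a sparse x from underdetermined measurements y = A @ x
    by minimising 0.5*||y - A x||^2 + lam*||x||_1 (ISTA iterations)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the data-fit term, then soft-thresholding.
        x = soft_threshold(x + step * A.T @ (y - A @ x), lam * step)
    return x
```

With far fewer measurements than unknowns, the l1 penalty fills in the missing information by preferring sparse solutions — the same principle that lets sparsely sampled MRI data yield satisfactory images.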
Postamble (Jean): When I spoke to Mark for part of this interview, he was changing lightbulbs in his lab. When I conveyed my surprise at his hands-on leanings, he laughed and said it was “a normal part of his day”. Maybe his excitement at the chance to “get his hands dirty” is behind his dynamic outlook on research. Maybe that is something else that young investigators should learn from.
Thanks to Mark, and to Sarabeth Fox for filming.
OHBM 2017 Abstract Highlight: Deep Learning Reveals Brain Features associated with Preterm Birth and Perinatal Risk Factors
Q & A with Manuel Hinojosa-Rodriguez
BY BRENDAN E. DEPUE
Infants and children with a history of preterm birth (PB) and with perinatal risk factors (PRF) for brain injury may exhibit structural brain abnormalities. For example, they may exhibit grey matter (GM) lesions that could impair motor or cognitive functions. However, MRI identification of these potential GM abnormalities in infants and children is very challenging and not often employed in clinical practice. Researchers have therefore devised machine learning algorithms to identify such structural abnormalities. To better understand these new tools, I got together with Manuel Hinojosa-Rodriguez, currently at the Universidad Nacional Autonoma de Mexico, who collaborates with the University of Southern California.
BD: Please briefly explain why machine learning is important for translational neuroimaging analyses.
MHR: Briefly, machine learning is an “artificial intelligence” technique that enables the computer to automatically identify “specific mathematical patterns” that belong to distinctive conditions. Machine learning may therefore potentiate our ability to distinguish different medical conditions in neuroimaging analyses. Identifying the neuroimaging features that are informative for accurate prediction will hopefully enhance, or even deepen, our understanding of specific medical conditions.
BD: How does deep learning compare to other machine learning algorithms?
MHR: Deep learning is a technique in machine learning that enables the use of “neural-network” models, which contain a significant number of processing layers. Within each layer, there exists varied numbers of “neurons” and “connections”. Each individual neuron is a function that may include a distinctive mathematical operation, which defines how the neuron can be activated/deactivated; each connection may vary in the amount of information passed between neurons of different layers. Therefore, deep learning in neural networks may allow us to build a mathematical model that defines the relationship between input predictors and output medical conditions that is far more complex than ones initiated from other machine learning algorithms. Accordingly, deep learning may help us identify more subtle or intrinsic patterns in nature.
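The layers, neurons, and connections Manuel describes can be made concrete with a minimal forward pass in NumPy (a generic sketch of the architecture idea, not the network used in the study):

```python
import numpy as np

def relu(v):
    """'Neuron' activation: active for positive input, silent otherwise."""
    return np.maximum(0.0, v)

def forward(x, weights, biases):
    """Pass inputs through stacked layers: each layer multiplies by a
    weight matrix (the 'connections'), adds a bias, and applies an
    activation (the 'neurons'). The last layer outputs a probability
    for a binary label (e.g., presence of a given perinatal risk factor)."""
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ w + b)
    logits = h @ weights[-1] + biases[-1]
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid output
```

Stacking many such layers is what lets the model express input-output relationships far more complex than a single linear mapping — the point Manuel makes about deep networks versus other machine learning algorithms.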
BD: Why do you think your abstract was selected as newsworthy? What is the appeal of your abstract to a broad audience?
MHR: Until now, neuroimaging detection of subtle perinatal brain injury has been complicated, because conventional magnetic resonance imaging (MRI) in clinical environments does not allow us to detect microscopic lesions, nor does it offer relevant information about the etiology or the perinatal risk factors suffered by the patient. However, by using neural networks and deep learning, it is hopefully possible to identify relationships between brain structure and medical conditions that affect early neurodevelopment. Our results reveal that brain features of preterm children can be associated with certain perinatal pathologies and/or risk factors for perinatal brain injury (see paper here).
BD: Given these findings, what are your next (research) steps going to be?
MHR: The next step in this project will be to examine the correlation between clinical MRI (per grades of severity) and our results. We are very interested in the accurate diagnosis of subtle brain pathologies and early prediction of motor and cognitive disabilities using conventional and advanced MRI.
By David Mehler
The Local Organising Committee (LOC) at OHBM 2017 achieved a remarkable feat. With public health experts voicing concern about the spread of the Zika virus from South to Central America, it was decided that it was too risky to expose so many young OHBM members to potential infection in Puerto Rico, this year’s original site for OHBM. At this point, the Vancouver LOC stepped forward. They organised an entire major international neuroimaging conference not in the four years originally planned, but in one.
Here we speak to Lara Boyd, Professor of translational neuroscience at the University of British Columbia (UBC), chair of the Vancouver LOC, and TEDx sensation. We find out about the challenges of setting up OHBM at such short notice, and about her work mapping out rehabilitative medicine in stroke survivors:
David Mehler (DM): It is my great pleasure to introduce Lara Boyd – she’s a professor here at UBC, where she leads the Brain & Behaviour Lab. Lara, perhaps you could give us some insight into chairing the organization of the 2017 OHBM meeting?
Lara Boyd (LB): It was an exciting year to chair. In case you missed it, we were supposed to be in Puerto Rico in 2017, and in Vancouver in 2020. Because of circumstances related to world health, we moved it up somewhat suddenly. It only came together through everyone scrambling and working really well together. We were lucky that the venue was available and we were available on short notice. The most fun we had was putting together the local organizing committee symposia, where we could showcase some of the science we do here in British Columbia. That was the best part!
DM: Can you tell us what it takes to host OHBM?
LB: First, we worked with the organizing committee of the OHBM to find the venue, and select the different speakers for all the symposia; that was a fun process that we really learned from. We worked with the convention centre group just on the physical location; that was less fun but still exciting – particularly in a place that looks like this [looks out the window]. After that, we worked with the student group to make sure we had the venues and the social events planned and they had spaces for Brain Me Out, the Hackathon, those kind of things. Last, we got to put together the symposia. Now, at this point, when we’re all here, we get to just sit back and enjoy and show off a little bit, and that’s been the most fun part.
DM: There have been many highlights at the conference so far; for instance, Tal Yarkoni’s input on the statistical implications of fMRI analysis, and then a session on myelin imaging at which you were co-chair. What were your personal highlights of the conference?
LB: I loved Tal’s talk – I’m not a statistician, but he just made that info so accessible. My lab is excited to go home on Monday and try it and see what happens. I also loved the Talairach Lecture. I think Carla Shatz did such a nice job, and it’s something I knew nothing about – not my field, not my expertise – so to just sit back and watch how science in one area progressed to something totally unexpected, and how she was able to take that knowledge and translate it into something that’s really going to help people with Alzheimer’s and other dementias, that was really exciting. It was just a wonderfully put-together talk.
DM: You mentioned translation – your lab is heavily involved in translational neuroscience, particularly in stroke rehabilitation. Your work has contributed to our understanding of how therapies in stroke can work. You’ve more recently shown plasticity even within myelin - very exciting work! It’d be interesting to know what got you into stroke research and what you find particularly interesting about this field.
LB: I actually started my professional career as a physical therapist. It didn’t last that long – only about a year. Part of that was because my stroke patients just didn’t get better. I had that sense that I was a car mechanic who didn’t understand how the engine worked. So I went back to school to become a neuroscientist, to understand how the brain worked, in the hope of translating that information back into therapies for stroke. That’s what led me into the field, and it was perhaps good timing, as that was right when the field took off.
I believe the first OHBM was in 1995 and that’s when I started my doctorate, so I just grew up with the meeting, and with the field in general. It’s just been a set of really lucky circumstances that has allowed our science and translation to advance so rapidly.
I’ve always said that I’m on the consumer end of the neuroimaging spectrum. We take these beautiful approaches that our physicists are designing and we use them to try to really unpack the changes that occur in the human brain. That’s why my lab is called the “Brain and Behavior Lab”. We also try to find out what behavior enables those changes and try to map them. We try to take that information and leverage it into therapies for people with stroke and try to really speed their recovery. We try to enable greater recovery than we’re currently seeing by improving our basic understanding of how the brain changes.
DM: At this conference we witnessed that the field of translational neuroscience is rapidly growing. For many young researchers with clinical backgrounds that want to pursue a career in neuroimaging, what would be your top three tips for starting out?
LB: First, no question is a dumb question. Don’t be afraid to go up to a senior scientist, or just to someone in an area that you’re unfamiliar or not yet comfortable with, and ask that question. Ask about the field – how did they get into it? What kind of things led them to that? We need to remember that we were all junior scientists just starting out at one point. I find that everyone is really happy and helpful in sharing their knowledge.
Second, make as many connections as possible. You can see the field is incredibly diverse. There are many different imaging platforms. The future I’m seeing is where there is going to be much more multimodal imaging. We can’t be an expert in all of those areas, so we’ll need a lot of good friends. We can start to translate information from different findings in different research studies to understand this marvelously complex thing, the brain.
Last, build connections with your peers. These are the colleagues that are going to be reviewing your grants and your papers and in the future these are the people that are going to give you students for your lab as you move along. The more interconnected you can become with your peer group as you rise up through the ranks, the better suited you’ll be when you need that friend who knows a technique when you don’t. You can call upon them and they can really enrich your science.
DM: Thanks Lara. Last, can you give us a bit of an outlook for the field of translational neuroimaging for stroke rehabilitation and where you see this field going within OHBM?
LB: In stroke rehab right now we’re actually a little bit stuck. Lately, we’ve had a bunch of clinical trials that failed. They failed to show any benefit beyond regular care. In part I think it’s because we treat stroke as if it were a single condition - any of us who have seen stroke patients know that they’re marvelously different.
So what we’ve become really interested in, in my group, is understanding biomarkers that can help us sub-categorise people with stroke. We then use those biomarkers to predict what recovery patterns we might see and which treatments are going to be best for which patients. Our stroke recovery biomarkers are all neuroimaging derived. So we can take a human stroke patient, and we can use, say, diffusion or myelin water imaging to understand the residual brain structure, understand how that patient may be compensating through different networks in functional patterns, and how their cortical excitability is changing with transcranial magnetic stimulation. Then we start to build algorithms and models that take each of those pieces of information and put them together to build a more complete portrait of that patient. We then use that information to predict what may be the best therapy for them.
I really think that as we become better consumers of these many different multi-modal types of imaging we can really put them together in a meaningful way. That’s what will move stroke rehabilitation forward, as it will allow us to understand that unique complexity of each patient.
It’s sort of like cancer treatment: cancer treatments are very personalized, highly tailored to each individual. We want to use neuroimaging to do the same thing with our stroke patients. That’s the future I envision, and I hope we’re moving rapidly towards it.
Thanks Lara, and many thanks to Sarabeth Fox for filming.