Neurosalience Episode 16: Understanding the reproducibility crisis and how to get through it, with Dr. Ahmad Hariri
By Peter Bandettini; production by Kevin Sitek & Rachael Stickland
Dr. Hariri recently published an important paper on the test-retest reliability of common task-fMRI measures. The paper received attention both within the field and in the popular media, and generated useful discussion. In this podcast Peter and Ahmad discuss the paper's implications, how to address the challenges it raises, and how to continue moving the field forward. This is an informative and positive discussion about how to collectively address these issues as a field.
By Tzipi Horowitz-Kraus & Nils Muhlert
The neuroimaging community has been very active in creating large-scale studies across a range of age groups, which have helped to tackle reproducibility issues. Most such studies originate in the United States and Europe, although many other geographic regions are pursuing similar initiatives (for instance, you can read about initiatives in China in our blog post). The Middle East, a 7.2 million km² region of diverse religions and cultures, has become increasingly productive in neuroimaging in recent years. To get an update on how neuroimaging is changing in this region, we asked brain mappers from the Middle East to tell us about their projects and surveyed prominent researchers in different countries. We provide a brief overview of some of these activities in the Emirates, Israel, Saudi Arabia, and Turkey, in the hope of future scientific collaboration between these countries.
Interview by Peter Bandettini, production by Nils Muhlert and Niall Duncan
This podcast idea was precipitated by Dimitri Kullmann’s 2020 editorial in Brain, which caused a stir in the community by leveling criticism at the clinical validity of fMRI. Some of the criticism was outdated, but some was indeed on point. In this podcast we had a great discussion on all things fMRI: what it can and cannot measure, and how it can continue to progress. We also discuss some of the scientific culture surrounding fMRI. Overall, the discussion was useful in bringing some of the flaws, as well as some of the outstanding innovations, to light. We ended up agreeing that fMRI is, in fact, an extremely useful tool that allows penetrating insight into the brain at a specific temporal and spatial scale. We feel that there is still considerable hope, yet also considerable challenge, in increasing its clinical relevance.
Dr. Dimitri Kullmann is a professor of Neurology at UCL Queen Square Institute of Neurology.
Dimitri received his doctorate from the University of Oxford in 1984 and his Bachelor of Medicine/Surgery from the University of London in 1986. He alternated between research in synaptic transmission and postgraduate medical training in London. In 1992, he started his lab at the Institute of Neurology, and in 2000 he became a fellow of the Royal College of Physicians.
Dimitri's interests span the fundamental mechanisms of synaptic transmission, the computational properties of small neuronal circuits, and alterations in neuronal and circuit excitability in epilepsy and other neurological disorders. The core methods in his lab are in vitro electrophysiology and pharmacology, but he also applies confocal and two-photon laser scanning microscopy, computational simulations, molecular genetic methods, and heterologous expression of mutated ion channels. His laboratory has contributed to the discovery of silent synapses, glutamate spillover, presynaptic GABAA receptors in the cortex, human epilepsy caused by K+ and Ca2+ channel mutations, tonic inhibition in the hippocampus, and Hebbian and anti-Hebbian LTP in hippocampal interneurons.
One of Dimitri's goals is to understand how phenomena that he has studied at the cellular level interact to regulate the excitability of small neuronal circuits. He is integrating studies on hippocampal circuit function with knowledge of how interneurons and principal cells fire during different behaviours. This is being approached both experimentally and with computational simulations. He also aims to apply his lab's recent insights into the cellular consequences of inherited mutations of ion channels (channelopathies) to develop new ways to diagnose and treat neurological diseases.
Dimitri was editor of the journal Brain from 2013 to 2020. He brings to the table the perspective of a clinician neuroscientist who does research at the neuronal scale.
Dr. Vince Calhoun has been, since 2019, the director of the Tri-institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), which spans three universities: Georgia State, Georgia Tech, and Emory.
In 2002, Vince received his Ph.D. in Electrical Engineering from the University of Maryland Baltimore County. He then became an assistant clinical professor at Yale and Director of the Medical Image Analysis Laboratory at the Institute of Living in Hartford. In 2006 he moved to the University of New Mexico, Albuquerque as an associate professor, later rising to Distinguished Professor there while holding leadership roles at the Mind Research Network in Albuquerque.
Vince is an expert in brain imaging acquisition and analysis and has created numerous algorithms for making sense of complex brain imaging data. He is the creator of the group independent component analysis algorithm, which has become widely used for extracting 'networks' of coherent activity from functional magnetic resonance imaging (fMRI) data. He was an early innovator in approaches to characterize the dynamics of brain connectivity. He has also developed 'data fusion' techniques to link many different types of data, including various types of brain imaging (structural, functional, connectivity) with genomic and epigenomic data. A key focus of Calhoun's work is developing tools to find brain imaging markers that can help identify and potentially treat various brain disorders, including schizophrenia, bipolar disorder, autism, and Alzheimer's disease. He recently served as President of OHBM and has been truly prolific in his work to push the methodology of fMRI forward.
The Neurosalience production team consists of Anastasia Brovkin, Katie Moran, Nils Muhlert, Kevin Sitek, and Rachael Stickland.
By Peter Bandettini & the OHBM Neurosalience production team
Here Professor Peter Bandettini has a wide-ranging discussion with the 2021 Early Career Investigator Awardee, Chao-Gan Yan. They talk about his career path, the highly impactful work he has been doing, and some of the most challenging issues in fMRI: dealing with motion and variability, finding biomarkers, and designing packages that serve the beginner and expert alike. Chao-Gan also gives some great advice to new investigators about what was most important in getting him where he is today.
Dr. Chao-Gan Yan is a professor at the Institute of Psychology, Chinese Academy of Sciences (IPCAS). He is the Director of the Magnetic Resonance Imaging Research Center, the Director of International Big-Data Center for Depression Research, and the Principal Investigator of The R-fMRI Lab located at IPCAS. Before he joined the IPCAS in 2015, he worked as a Research Scientist at the Nathan Kline Institute for Psychiatric Research and a Research Assistant Professor at the Department of Child and Adolescent Psychiatry, New York University School of Medicine.
Dr. Yan received his Ph.D. degree in Cognitive Neuroscience from the State Key Laboratory of Cognitive Neuroscience and Learning at Beijing Normal University in 2011. His research focuses mainly on resting-state fMRI (R-fMRI) computational methodology, the mechanisms of spontaneous brain activity, and their applications in depression. He has addressed fundamental methodological issues such as the impact of head motion, standardization, and multiple comparisons on the study of resting-state functional connectomics. He has also developed the DPABI and DPARSF data processing and analysis toolboxes for R-fMRI; the latter has been cited over 2000 times.
For Brain Mappers around the world, the OHBM annual meeting is a time to meet, network, and learn about the latest progress across the many domains of human neuroimaging. Despite the challenges introduced by COVID-19, the virtual conference formats used in 2020 and again in 2021 offer the chance to maintain contacts and benefit from many of the normal features of an in-person conference. In 2020, the pandemic forced a rapid transition, with little time or resources for organizers to fully recreate the social aspects of OHBM that attendees love most: the ample opportunities for talking around posters, the chance hallway greetings that turn into deep and meaningful friendships and collaborations, and the celebratory parties where you can find all your friends and colleagues to catch up and network for that job or bit of news you are seeking. Many solutions were created by OHBM attendees, as detailed in our 2020 roundup, and these provided ideas that helped shape OHBM2021.
In seeking an alternative platform for OHBM 2021, OHBM leadership, with plenty of grassroots input from the community and incredible support from volunteer leadership in the form of a Technology Task Force, took the risk of selecting an alternative open-source meeting software platform, Sparklespaces. This new platform offered the chance to robustly support social engagement while stretching to meet the demands of the more traditional meeting elements offered by competing platforms. One aspect of the decision to go with Sparklespaces was the option for our Program Committee Leadership and our Technology Task Force to work in close collaboration with the Sparklespace developers to ensure that the resulting software met our needs; unlike other providers which offered fixed packages, the Sparkle team was open to co-creating tailored features for our community. As part of this deal, we would retain access to the shared development source code for future meetings. Everyone involved was and remains very enthusiastic about this collaboration, with this year’s platform being affectionately dubbed #PlanetBrain by our Chair, Aina Puce.
For those attending OHBM2021, however, the start of the meeting did not live up to expectations. A slew of connectivity problems meant that many people experienced sometimes considerable lag in accessing the symposia, poster hall, and keynote talks. Here we set out what happened and how things were rapidly resolved. Before the close of the first day, an update was provided by the Chair of the Technology Task Force, Alex Fornito:
The Sparkle team and OHBM members worked around the clock to resolve these issues. We deployed a backup stream site within hours, and a screening room within a day, which offered another way to watch live streams and explore screening room videos while we resolved the core technical issues. Several fixes incrementally improved performance over the first couple of days and, thankfully, from midway through the Wednesday session we saw a significant improvement in connectivity. The prime culprit, identified by Soichi Hayashi and the Sparkle team, was that #PlanetBrain was bogged down by its handling of user update events, which inform conference-goers of each other's whereabouts so as to enable a highly collaborative, social experience. As more users joined the platform and moved between the thousands of different events and spaces, each user received a rapidly growing number of update notifications, causing browsers to struggle. Though the platform had been well tested at volumes of thousands of users, its scalability when users, spaces (including posters), and events each numbered in the thousands had not been tested, and this combination led the site to perform poorly. The Education Courses performed more smoothly because there were fewer users, spaces, and events, and thus far fewer user-update events.
Soichi’s temporary fix was to switch from receiving real-time events to polling user information, limiting the number of updates that a browser must process. This patch substantially improved the performance and user experience of #PlanetBrain, and a more permanent fix (requiring more substantial reworking of the codebase) is currently being implemented by the Sparkle team. On June 23rd, OHBM tweeted to inform users of the upgrade to the Poster Hall functionality and invited people to join again, offering an accompanying video to showcase the reboot!
The platform should be operating more smoothly, and is closer to the vision developed by the large team of people who worked on putting this conference together. Improvements are ongoing and the OHBM team is appreciative of everyone’s feedback.
In addition, rest assured that all talks are being recorded, so anything you may have missed can be rewatched later. If you do have comments on the process or would like to hear more from OHBM Leadership on these and other key issues, OHBM members are invited to join us at the General Assembly and Feedback Forum (Town Hall), 30th June at 9am EDT/1pm UTC (link to be sent by email soon).
In this tutorial, we present the fundamental concepts of network neuroscience. We build upon material from previous Educational Sessions of OHBM’s Annual Meeting, available through the OHBM YouTube channel, which contains a trove of lectures, education courses, and symposia from OHBM’s annual conferences. During the 2020 virtual meeting, Alex Fornito and Andrew Zalesky presented a course on Fundamental Concepts and Methods in Network Neuroscience. Speakers included Alex Fornito, Janine Bijsterbosch, Shella Keilholz, Robert Smith, Andrew Zalesky, Bratislav Misic, Richard Betzel, Ann Sizemore Blevins, Martijn van den Heuvel, and Petra Ritter. We highlight material from this course, which is currently available on YouTube, so readers can learn from the experts in this condensed tutorial.
I hear a lot of talk about “network approaches” to neuroscience, what does that mean?
Simply put, we are interested in studying the connections, or wiring, between so-called “brain areas.” Though Franz Joseph Gall is famously known for the pseudoscience of phrenology, his underlying idea that different parts of the brain perform different functions has informed much of modern neuroscience. This phenomenon of functional localization, often referred to as functional segregation or separation, has been thoroughly investigated down to individual neurons. However, as Alex Fornito points out at the beginning of the education course, though this degree of specification is a fundamental property of brain organisation, the experiences that are constructed by the human brain are not thought to be formed by independent, disconnected processes. Rather, specialized processes are integrated together through connections between different brain regions. This apparent functional integration occurs over multiple spatial and temporal scales and heavily depends on the connections between different neural elements, which can be studied at macroscopic resolutions using magnetic resonance imaging (MRI).
The brain has more than 85 billion neurons, which show an extraordinary level of local interconnectivity and highly organized long-range connectivity. In this respect, the brain’s organization resembles that of other complex systems observed throughout nature and in our everyday lives. For example, the world wide web is a network that links each of us to pretty much anyone anywhere in the world at any time. One of the principal aims of network neuroscience is to try to understand the properties of brain networks. As explained by Alex at 5:30 min, network neuroscience aims to understand and map the properties and organization of the brain through the mathematics of complex networks. It applies this framework to study the properties and organization of the network of connections forming the human brain––its “connectome”––in terms of both structure and function.
Alex Fornito describing the brain as a network at different levels of resolution
Is this related to a thing called graph theory that I always get confused about?
Graph theory, as Alex explains from 6:04 onwards, is a mathematical framework that can be used to study the connectivity among the core constituents of a graph, e.g., a set of “nodes” connected by “edges” that measure the association between pairs of nodes. In this sense, a graph is a structure that attempts to model the relationships between different interconnected objects (the “objects” can range from brain areas to people’s friendships) and offers a natural analysis model for any networked system.
Using MRI, we can study brain networks at coarse spatial scales, in which nodes are defined as spatially focal, functionally homogeneous brain areas that are considered to be fundamental processing units of the network. In network neuroscience, nodes are also referred to as “parcels”, and a set of nodes covering the whole brain can be defined as a “brain parcellation”. Edges are defined as the connections between the nodes in the network and for MRI-based studies of the brain, these connections are representative measures of the functional and structural connectivity between brain regions. The mathematics of graph theory allows us to extract the organizational properties of the brain connectome and relate these properties to those observed in other systems found in nature.
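The equivalence between a graph and its adjacency matrix can be made concrete with a small sketch. The 4-node network and its weights below are purely hypothetical; the point is only that the adjacency matrix, the edge list, and simple graph-theory measures such as node degree are different views of the same object:

```python
# Toy brain network: entry [i][j] of the adjacency matrix is the connection
# weight (e.g., functional connectivity) between nodes i and j.
adjacency = [
    [0.0, 0.8, 0.0, 0.3],
    [0.8, 0.0, 0.5, 0.0],
    [0.0, 0.5, 0.0, 0.9],
    [0.3, 0.0, 0.9, 0.0],
]

# Equivalent edge-list view: one (node_i, node_j, weight) triple per nonzero
# entry in the upper triangle (the network is undirected, so the matrix is symmetric).
edges = [
    (i, j, adjacency[i][j])
    for i in range(len(adjacency))
    for j in range(i + 1, len(adjacency))
    if adjacency[i][j] != 0.0
]

# Degree of a node = number of edges attached to it (nonzero entries in its row).
def degree(node):
    return sum(1 for w in adjacency[node] if w != 0.0)

print(edges)                          # [(0, 1, 0.8), (0, 3, 0.3), (1, 2, 0.5), (2, 3, 0.9)]
print([degree(n) for n in range(4)])  # [2, 2, 2, 2]
```

In practice one would use a graph library (e.g., NetworkX) rather than raw lists, but the underlying data structure is exactly this.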
Hmm … ok that makes sense, but how are these nodes defined?
Sensible node definition is perhaps the most challenging aspect of connectomics analysis as, in many instances, there are no clear boundaries visible with MRI that segregate brain areas into functionally meaningful homogeneous brain areas (Fornito and Bullmore, 2015). At 8:00 min Alex explains that the interpretation of any network analysis depends on the way in which nodes and edges are defined. Good definitions of nodes should adhere to three properties:
1. The nodes need to be spatially constrained. That is, a node should be defined as a spatially contiguous patch of the brain rather than a combination of spatially distinct regions. This is based on the assumption that neurons dedicated to any specific functionality are constrained to a specific location in the brain.
2. Each node should be intrinsically functionally homogeneous, so that the constituents of the nodes, such as voxels clustered into a single node, should share a similar function.
3. The nodes should be extrinsically distinct, so that the function of one node can be differentiated from others.
Alex Fornito explaining how we can use graphs to represent brain networks
Further elaborating on the first property, at 37:54 min Janine discusses different ways to conceptualize spatially contiguous and non-contiguous nodes. Contiguous nodes are spatially interconnected voxels or vertices which form a single localized region of the brain and are consistent with the notion of functionally specialised regions. Alternatively, non-contiguous nodes are made up of various distinct areas that may be dispersed throughout the brain, for example, bilateral brain areas. This conceptualization is consistent with the hierarchical and hemispherically organised network structure of the brain.
Janine goes on to explain other characteristics such as “binary” and “weighted” nodes. Nodes are considered “binary” when each voxel is a member of exactly one node only; often referred to as a “hard parcel”. Alternatively, nodes are “weighted” when a voxel can have non-zero membership to multiple nodes; often referred to as a “soft parcel”.
Clustering methods such as K-means clustering and normalised cuts group voxels together based on similarity to create nodes. These nodes tend to be binary and can either be contiguous or non-contiguous. Data-driven decomposition methods, such as ICA and PCA, produce weighted parcellations that usually combine different regions in the brain (i.e. not spatially contiguous) into a single node.
Janine Bijsterbosch discussing characteristics of nodes
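To make the clustering idea concrete, here is a minimal K-means sketch in pure Python. The six "voxels" and their 2-D feature profiles are made up for illustration (real parcellation methods cluster full connectivity fingerprints over many thousands of voxels), and the initial centroids are fixed for reproducibility:

```python
# Minimal K-means: group "voxels" into k nodes by feature similarity.
def kmeans(points, centroids, n_iter=10):
    """Alternate between assigning points to nearest centroids and updating centroids."""
    for _ in range(n_iter):
        # Assignment step: label each point with the index of its nearest centroid.
        labels = [
            min(range(len(centroids)),
                key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centroids[c])))
            for pt in points
        ]
        # Update step: move each centroid to the mean of its assigned points.
        for c in range(len(centroids)):
            members = [pt for pt, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

# Six toy "voxels" forming two obvious clusters; fixed initial centroids
# (real code would initialise randomly with several restarts).
voxels = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (4.9, 5.0), (5.1, 4.8)]
labels = kmeans(voxels, centroids=[[0.0, 0.0], [5.0, 5.0]])
print(labels)  # [0, 0, 0, 1, 1, 1]
```

Because each voxel ends up in exactly one cluster, the result is a binary ("hard") parcel in the sense described above; decomposition methods such as ICA would instead give each voxel a continuous weight for every component.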
At 42:47 min, Janine also mentions the challenges that we have to address when defining nodes. For example, it is generally assumed that a node is a homogeneous region whose aggregate signal accurately represents the activity across all of the voxels in that node. However, this is questionable for different reasons: a region may be heterogeneous in both its connectivity and function. For example, if one were to average the signals across the entire primary motor cortex, a region with homogeneous cytoarchitecture, this would overlook meaningful functional heterogeneity, such as the distinct representations of hand and foot movements (Haak et al., 2020). For this reason, it is important to be mindful of the core assumptions that are implicit in any parcellation procedure.
Right, but how can we define them in a more practical sense?
Generally, nodes should be defined either using published parcellations available in the literature or from your own data. At 9:26 min Alex explains different approaches by which nodes can be defined, such as cytoarchitectonic maps or sulcal and gyral landmarks. There are many different published parcellation schemes, and methods for parcellating your own data, that are now open access. Arslan et al. (2018) consider a variety of such anatomical, connectivity-driven, and random parcellations in their study, comparing the performance of different schemes and methods for connectomics. They conclude that there is no clear “one size fits all” winner, instead providing guidelines for choosing the parcellation technique that may be appropriate for the problem under investigation. Similarly, at 33:34 min Janine notes that the parcellation scheme should match the modality that you’re investigating.
An advantage of using published parcellations or atlases to define nodes is that the node definitions are identical to those of other studies, allowing comparison with published findings. Anatomical parcellations may also be derived from aspects of brain tissue, such as cytoarchitecture, that are not captured by functional parcellations. However, a limitation of using a structure-based atlas is that it may not represent your data as well as nodes derived from your own data. This is especially true for functional connectivity, as atlas-based parcellations have been shown to perform worse for network analysis than data-driven functional parcellations (Smith et al., 2011). Therefore, nodes should be defined in accordance with what best suits the research question and hypotheses that you want to investigate.
What about the edges?
At 10:43 Alex explains that when mapping brain connectivity, edges are defined by distinguishing between three different types of brain connectivity: structural, functional, and effective connectivity. Whilst structural connectivity refers to the physical white matter tracts connecting regions, functional connectivity refers to the statistical relationship between neurophysiological signals recorded in spatially distinct regions. Notably, the directionality of a structural connection (e.g., from one region to another) is not resolvable using MRI, whereas functional connectivity can be directed or undirected. Effective connectivity describes the causal influence that one group of nodes exerts on another, capturing directed interactions between different neuronal systems. Whilst functional connectivity is estimated at the level of measured physiological signals, effective connectivity is estimated at the neuronal level and requires some kind of mapping between the measured signal (e.g., BOLD fMRI) and the underlying neuronal events that generate the signals.
You keep saying the word “connectivity”. Can you tell me a bit more about it?
Neuroscientists use the word “connectivity” to refer to any physical connection or statistical dependency between different areas of the brain. The terms “connectivity matrix”, “similarity matrix”, “affinity matrix”, and “adjacency matrix” are used interchangeably. At 12:55 min Alex mentions some different approaches that are used to measure connectivity, such as diffusion Magnetic Resonance Imaging (dMRI) and functional Magnetic Resonance Imaging (fMRI). He goes over some problems that need to be tackled in the MRI data processing pipeline, such as head motion (13:27 min) and physiological noise (17:05 min). An evaluation of different preprocessing pipelines, and their impact on results when comparing connectivity differences between patients and controls, is provided by Parkes et al. (2018). It is important to be aware of these preprocessing steps as they can strongly influence analysis results. Once data are preprocessed and parcellated, you can measure functional or structural connectivity between regions and produce either an adjacency matrix, in which each element contains the measure of connectivity between a pair of nodes, or an equivalent graph representation, with nodes drawn as elements and edges as the lines connecting them. At 19:34 min Alex uses a functional connectivity matrix to show that these two representations are equivalent; the only difference between them is the visualisation.
Alex Fornito explaining the different approaches used to measure connectivity
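As a concrete sketch of the matrix view, here is how one might compute a toy functional connectivity (adjacency) matrix from parcellated time series using Pearson correlation. The three "node" time courses below are invented for illustration, not real BOLD data:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Three toy "nodes": node 1 loosely tracks node 0, node 2 is node 0's mirror image.
ts = [
    [1.0, 2.0, 3.0, 4.0, 5.0],
    [2.1, 2.9, 4.2, 4.8, 6.1],
    [5.0, 4.0, 3.0, 2.0, 1.0],
]

# Adjacency matrix: symmetric, with self-connections conventionally set to 0.
n = len(ts)
adj = [[0.0 if i == j else pearson(ts[i], ts[j]) for j in range(n)] for i in range(n)]
print(round(adj[0][1], 3))  # strong positive coupling
print(adj[0][2])            # exactly -1.0: perfectly anticorrelated
```

Each off-diagonal entry of `adj` is one edge of the functional network; drawing the three nodes with lines weighted by these values gives the equivalent graph view.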
But how do we measure this “connectivity”; say, for example, from functional MRI?
Just as there are many ways to parcellate the brain, there are many ways to define “functional connectivity” via different statistical measures that capture the shared information (relationships) between two nodes. In addition, one can examine the average, or “static”, connectivity across the duration of the fMRI scan, which obtains a single measure of connectivity for the whole scan between each node pair, or one can examine the dynamic fluctuations in connectivity that occur on a moment-by-moment temporal scale.
Seed- or parcellation-based correlation analysis and Independent Component Analysis (ICA) are two popular methods for investigating average functional connectivity based on Pearson correlations among BOLD time courses. At 1 hr 03:52 min, Shella Keilholz explains seed- and parcellation-based correlations. At its core, Pearson correlation is a simple measure that captures the linear relationship between two variables; in effect, one can think of it as asking how well two signals co-fluctuate. At 1 hr 16:47 min, Shella goes on to explain ICA as a method which decomposes an fMRI dataset into the individual signal sources that give rise to the measured BOLD signals. ICA can be performed in either the spatial or the temporal domain; given the number of voxels versus the number of timepoints in a standard resting state scan, it is more common to apply spatial ICA. While spatial ICA can separate interesting signal sources (resting state networks) from artifact signals (like those from cerebrospinal fluid or subject motion), it is important to note that ICA cannot automatically determine the number of components (or signal sources) to be estimated by the algorithm: this “model order” or dimensionality has to be decided by the researcher doing the analysis. At 1 hr 17:57 min Shella gives an example of using ICA to show that the choice of model order affects the resulting network maps. The main advantage of ICA is that it is data-driven, so it can be used to investigate functional connectivity without any a priori information, such as seed selection, and it can effectively separate signal from noise. But this also means that the resulting ICA components can be hard to interpret: one must identify the components of interest versus those that represent noise from the resulting spatial maps, which may require comparing the components with published maps or drawing on other studies to interpret them.
Measures of average functional connectivity beyond Pearson correlation that are used for seed- and parcellation-based methods include partial correlation, mutual information, coherence/partial coherence, and many others. Smith et al. (2011) used simulations to investigate how different network configurations, experimental protocols, confounds, and choices of connectivity measure impact network modelling. They found that partial correlation has high sensitivity for detecting direct connections between nodes, that directionality is difficult to estimate, and that functionally inaccurate parcellations severely degrade connectivity analyses.
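As a sketch of one of these alternative measures, the snippet below computes a partial correlation via the standard recursive formula: the correlation between two nodes after the linear influence of a third is removed. The toy signals (two nodes that co-fluctuate only because both track a shared driver) are hypothetical:

```python
import math
import random

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

def partial_corr(x, y, z):
    """Correlation of x and y after controlling for z (recursive formula)."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

random.seed(0)                               # fixed seed for reproducibility
z = [float(t) for t in range(60)]            # shared driving signal
x = [v + random.gauss(0, 1.0) for v in z]    # node 0: z plus private noise
y = [v + random.gauss(0, 1.0) for v in z]    # node 1: z plus private noise

full = pearson(x, y)           # very high: both nodes track z
part = partial_corr(x, y, z)   # much smaller in magnitude: no direct x-y link
print(round(full, 2), round(part, 2))
```

This is why partial correlation is more sensitive to direct connections: the strong x-y correlation here is entirely indirect, and vanishes once the shared driver is accounted for.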
More information on static functional connectivity analysis can be found in the OHBM’s OnDemand How-To: Resting State fMRI Analysis blog post.
In contrast to average functional connectivity, for “time-varying” or “dynamic” functional connectivity analysis, more than one measure of connectivity is obtained to capture the variations in brain connectivity over the scan duration. Dynamic connectivity can be implemented using different approaches such as window or event-based methods. There are many excellent reviews on the promises and pitfalls of dynamic connectivity methods (Hutchison et al., 2013; Preti et al., 2017; Jalilianhasanpour et al., 2021) and the utility of these methods for classification and prediction of brain disorders (Du et al., 2018). At 1 hr 23:28 min, Shella gives an example of the sliding window method, which divides time series into small time windows to calculate the correlation between BOLD signals from two areas within a small time window that slides from one time point to the next. This new time course of correlations from each window across the duration of the scan reflects the dynamic changes in connectivity that may occur between two brain areas, even in the absence of an apparent overall average connectivity between them. This approach can be extended to parcellations: in this case, an association matrix is computed for each sliding window.
Shella Keilholz explains the sliding window correlation approach used to analyse the correlation values between two areas at different points in time
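The sliding-window idea can be sketched in a few lines. The two toy signals below are constructed so that they are coupled early in the "scan" and anticoupled later, a switch that a single static correlation would average away:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(var_x * var_y)

def sliding_window_corr(x, y, width):
    """Correlation within each window of `width` time points, sliding by one point."""
    return [pearson(x[t:t + width], y[t:t + width])
            for t in range(len(x) - width + 1)]

# Toy time courses: coupled in the first half, anticoupled in the second half.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 3.0, 2.0, 1.0]
y = [1.0, 2.0, 3.0, 4.0, 5.0, 4.0, 5.0, 6.0, 7.0, 8.0]

dyn = sliding_window_corr(x, y, width=4)
print([round(r, 2) for r in dyn])  # [1.0, 1.0, 0.63, 0.0, -1.0, -1.0, -1.0]
```

The output is itself a time course of connectivity values, which is exactly the kind of signal that dynamic analyses then cluster into "brain states". In real data, window width and overlap are important (and debated) choices.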
Due to the number of timepoints in a typical BOLD scan, and the number of brain regions that can be considered in a parcellation scheme, data explosion is one of the main challenges of this type of analysis. Clustering can be performed as a reduction step to identify a small set of brain states and their dynamic properties, either on the BOLD time series or on the association matrices. However, determining the number of brain states to be estimated is subjective, similar to setting the model order for ICA. At 1 hr 36:31 min, Shella talks about these and other caveats to these functional connectivity approaches.
What about structural connectivity?
Structural connectivity is, conceptually, the easiest one to understand: we ask whether there is a physical connection between one brain area and another. In more neuroscientific parlance, structural connectivity looks at the axonal tissue (white matter) that connects cortical regions to one another. While structural connectivity is what most people would intuitively think of as “connectivity”, the approaches to estimating a structural connectome can sound like a mystical art to the uninitiated. At 11:32 min Robert talks about the fundamental requirements for quantifying white matter pathways. He mentions that the first requirement is to estimate the local fibre orientations in each image voxel using an appropriate diffusion model; the second is to follow the fibre orientations using a reconstruction algorithm such as streamlines tractography. This process creates trajectories through space that ideally reflect the underlying white matter connections. At 19:00 min Robert mentions another requirement: generating a sufficiently large number of streamlines so that the result is reproducible.
Following reconstruction of streamline trajectories across the entire brain white matter, structural connectome construction proceeds as follows. Firstly, each reconstructed connection must be attributed to the appropriate edge based on the parcellation nodes to which it is assigned. Then, one must choose the metric of connectivity to be quantified for each connectome edge; this could be the number of streamlines within that connection, samples of some quantitative metric along the corresponding streamline trajectories, or estimates of connection density based on more advanced modelling approaches.
More information on the general structural connectivity pipeline can be found in the how-to diffusion MRI blog.
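As a rough sketch of the assembly step just described (the function and array layout are our own assumptions, not a fragment of any tractography package), a connectome matrix can be filled in once each streamline has been assigned a pair of parcellation nodes:

```python
import numpy as np

def build_connectome(endpoint_nodes, n_nodes, streamline_values=None):
    """Assemble a structural connectome from tractography output.

    endpoint_nodes    : (n_streamlines, 2) array; the parcellation node
                        assigned to each end of each streamline
    streamline_values : optional per-streamline metric (e.g. a value
                        sampled along the trajectory); if omitted, the
                        edge weight is simply the streamline count.
    """
    conn = np.zeros((n_nodes, n_nodes))
    values = np.ones(len(endpoint_nodes)) if streamline_values is None \
        else np.asarray(streamline_values)
    for (i, j), v in zip(endpoint_nodes, values):
        conn[i, j] += v
        conn[j, i] += v   # undirected: keep the matrix symmetric
    return conn
```

Swapping the per-streamline values changes which connectivity metric each edge of the connectome quantifies.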
Robert Smith explaining how structural connectivity is measured
So ... once a functional or structural network is obtained and the connectome is mapped, what then?!
Once we have the association matrix reflecting either the functional or structural connectome, how do we characterize the properties of the network or do case-control comparisons? These questions proceed via different analyses. To characterize the properties of a network, one typically applies mathematical tools from graph theory.
In many cases, the network (or association matrix) is first thresholded to remove the weakest connections from it. At 37:22 min Andrew Zalesky talks about the pros and cons of this thresholding process and at 40:56 min he gives an example of thresholding a network that has weak, moderate and strong connections. An advantage of thresholding is that the weakest connections may be false positives, so removing them through thresholding will provide a clearer and more accurate connectome visualisation. On the other hand, one of the disadvantages of thresholding is that the choice of thresholding value is arbitrary (explained at 46:15 min) and can strongly impact the resulting graph theory measures. After the thresholding process, Andrew explains that the network can additionally be binarized, indicating either the presence or the absence of a connection between a pair of brain regions; alternatively, keeping the connectivity values without binarizing provides a measure of the strength of the connection between each pair of brain regions. He further explains the pros and cons of binarization.
Overall there are two classes of thresholding methods: “density thresholding”, which aims to achieve a desired connection density by removing the weakest connections, and “weight thresholding”, which identifies the minimum weight a connection needs to have in order not to be removed. At 47:07 min Andrew explains other alternative approaches to connectivity strength thresholding, such as “consensus thresholding” and “local thresholding”.
Andrew Zalesky explaining how the thresholding process eliminates weak connections
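A hypothetical sketch of the two thresholding classes described above, assuming the connectome is held as a symmetric numpy array (function names are ours):

```python
import numpy as np

def weight_threshold(conn, min_weight, binarize=False):
    """Remove every connection weaker than `min_weight`; optionally
    binarize the survivors to presence/absence."""
    out = np.where(np.abs(conn) >= min_weight, conn, 0.0)
    return (out != 0).astype(float) if binarize else out

def density_threshold(conn, density):
    """Keep only the strongest connections until the requested
    connection density (fraction of possible edges) is reached."""
    n = conn.shape[0]
    iu = np.triu_indices(n, k=1)           # unique edges of a symmetric matrix
    weights = np.abs(conn[iu])
    n_keep = int(round(density * len(weights)))
    if n_keep == 0:
        return np.zeros_like(conn)
    # the weight of the weakest edge that survives (ties are all kept)
    cutoff = np.sort(weights)[::-1][n_keep - 1]
    return np.where(np.abs(conn) >= cutoff, conn, 0.0)
```

Density thresholding is often preferred for group comparisons because it equates the number of edges across subjects, whereas weight thresholding equates the minimum connection strength.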
Once the network is identified via thresholding (and possibly binarizing), the next step is to try to understand the organization of the network. An important concept mentioned by Andrew at 50:02 min is that null networks can be used as a benchmark of network organisation under a specific null hypothesis. For example, a random network matched for the number of nodes, edges, and possibly other network properties can be used to understand whether a feature such as path length (the average number of nodes that must be traversed to connect any two nodes in a network) of the empirical network is unusually short or long compared with a random network. If the topology is a lattice or another kind of non-random configuration, then the path length will differ from that observed in a matched random network. The Maslov-Sneppen rewiring algorithm is further explained as a way to generate a random network matched for number of nodes, edges, and node degree (the number of edges connected to a node); it is the most common method for generating a random network as a null model. From 57:13 min onwards, Andrew explains other null models such as the geometry-preserving null model, which accounts for the impact of geometry on network topology, and generative models, which are null models that use specific wiring rules to evolve networks in silico. Inference on a specific property of the network then proceeds by sampling thousands of null networks to enable quantitative inference via Monte Carlo methods.
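For illustration, networkx's `double_edge_swap` implements exactly this kind of degree-preserving rewiring, so a Maslov-Sneppen style null network can be sketched as follows (the wrapper function and the ten-swaps-per-edge rule of thumb are our own choices):

```python
import networkx as nx

def maslov_sneppen_null(G, n_swaps=None, seed=0):
    """Degree-preserving null network via Maslov-Sneppen rewiring:
    repeatedly pick two edges (a, b) and (c, d) and swap them to
    (a, d) and (c, b), so every node keeps its original degree."""
    null = G.copy()
    if n_swaps is None:
        n_swaps = 10 * null.number_of_edges()   # a common rule of thumb
    nx.double_edge_swap(null, nswap=n_swaps, max_tries=100 * n_swaps,
                        seed=seed)
    return null
```

Generating many such nulls and recomputing a property such as path length on each yields the null distribution needed for Monte Carlo inference on the empirical network.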
Reconstructed connectomes can also be compared to assess group differences in connectome properties, or how connectome properties may be affected by treatment or vary with behavior. In addition, connectomes can be used to predict clinical outcomes or treatment responses. There are myriad network properties that can be assessed for differences or for prediction, ranging from connection-level to global properties of the network as Andrew describes at 1 hr 02:07 min. In contrast to connection-level analyses that examine each connection independently, network-specific attributes, such as sub-networks, may better capture differences between patients and controls, as pathology is more likely to impact sub-networks of connected nodes rather than node pairs scattered randomly around the brain. As such, network-specific attributes may also be more useful for predicting clinical outcomes or response to treatments. At 1 hr 05:26 min Andrew talks about how network-based statistics can be used to leverage network-specific attributes to enhance statistical power.
So, a ‘network’ can represent the functional or structural connectivity in the brain, but it can also represent a subset of interconnected nodes referred to as “sub-networks” or “modules”. What are these modules exactly?
The connectome can be conceptualized as a single large network at the macroscale or global level, or as sub-networks at the mesoscale level, or at the level of individual regions or nodes and other local-scale components. Generally speaking, at the mesoscale, ‘sub-networks’, ‘modules’, or ‘network communities’ are used interchangeably to refer to clusters of densely interconnected nodes embedded within a larger network. Though modules can be identified in social, behavioural and cellular networks, in this article the terms ‘modules’ or ‘communities’ will refer to groups of nodes reflecting organisation in the brain network. In network neuroscience, a set of strongly coupled nodes comprise structurally or functionally linked sub-networks. In most applications, nodes are strongly interconnected with each other within a sub-network, but weakly connected with nodes in the rest of the network. Understanding which nodes make up specific modules can provide significant insight into the functioning of the network. For example, changes in modular organization are related to individual differences in behaviour, development, cognitive states and disease liability.
Community detection algorithms are used to discover the network’s community structure to identify subgroups of interconnected nodes. While these approaches are very flexible, detection of communities can be an ill-posed problem. At 2:07 min Richard Betzel talks about the challenges of module/community detection.
One approach to detect communities in single and multi-layer networks is called “modularity maximisation”. In a single-layer network, a set of nodes is connected to each other with edges derived from a single imaging modality at one point in time. In multi-layer networks, nodes and connections can be analysed across another dimension, such as time, imaging modalities, or subjects. Ann further describes how a multilayer network works (42:54 min) and how we obtain this type of network from our data (from 50:37 min onwards).
So you’re saying that the point of modularity maximisation is to get a good estimate of the communities’ network’s structure?
Yes! Modularity maximisation considers communities to be groups of nodes that show stronger connections between each other within the observed network than would be expected by chance. At 4:55 min Richard explains how modularity maximisation works and further on gives examples of high-quality and low-quality partitions, in which nodes may form strong clusters or may only be weakly clustered together, respectively. For the modularity maximization approach the quality of the partition is given by the modularity quality function, Q (7:25 min). Maximizing Q corresponds to identifying a better partition into sub-networks; e.g., if the partition has a greater value of Q because of communities which are more internally dense than expected by chance, then it is considered to be of high quality. The idea is that the partition with the greatest value of Q will provide a reasonable estimate of the network’s structure.
A few methodological issues, such as the choice of null model and the resolution used to detect modules, need to be considered. There are different types of null models to choose from, with the choice largely depending on the research question. In his talk (11:57 min), Richard explains how to run the Louvain algorithm with a degree-preserving null model. A challenge with modularity maximisation and other community detection algorithms is that they suffer from a resolution limit: they are not able to detect small clusters, so the modules obtained by maximising Q may not reflect the network’s actual mesoscale structure. This is explained at 15:43 min, together with a solution that parameterises the modularity equation with a resolution parameter to examine communities at different scales.
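As an illustration of modularity maximisation with a resolution parameter, the sketch below uses networkx's Louvain implementation (the wrapper function is our own): sweeping the resolution gamma probes communities at different scales, as described above.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities, modularity

def communities_at_scales(G, resolutions=(0.5, 1.0, 2.0), seed=0):
    """Louvain modularity maximisation at several resolutions:
    gamma < 1 favours fewer, larger modules; gamma > 1 favours many
    smaller ones, probing structure below the resolution limit."""
    results = {}
    for gamma in resolutions:
        partition = louvain_communities(G, resolution=gamma, seed=seed)
        q = modularity(G, partition, resolution=gamma)  # quality Q of partition
        results[gamma] = (partition, q)
    return results
```

Comparing the partitions (and their Q values) across resolutions is one practical way to check whether a reported community structure is robust to this subjective choice.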
Another factor to take into account for multi-layer networks (an example application to time-varying connectivity is given at 20:37 min, and explained by Richard from 23:57 min) is whether inter-layer coupling is defined temporally, with time-varying information within the network, or categorically, with no temporal information within the network. The weight of the inter-layer coupling is under experimental control: if the multi-layer network has very weak inter-layer coupling, communities will be very versatile across layers, whilst if it is very strong, the communities will emphasise homogeneity with little variety.
Richard Betzel explaining inter-layer coupling in multi-layer networks
Martijn notes that the existence of separate modules cannot explain the complexity of neural connectivity alone (1:01:54 min). Rather, modules need to be integrated with each other through central places (or nodes) of communication. Such nodes that connect with multiple modules to facilitate integration and communication throughout the network are referred to as “hubs”.
What are “hubs”?
Hubs are defined as highly connected nodes in the brain network. As Martijn describes (1:05:40 min), if we take a random network, the probability of any particular edge between two nodes being present is equivalent for all edges, and as a consequence, the number of edges present for each node (the node “degree”) is typically quite similar across all nodes; it is unlikely for there to be nodes possessing considerably more connections than others. However, in most biological networks this probability is not equal: some nodes usually have more connections than other nodes, and so these are interpreted as hubs. Martijn explains four metrics of ‘hubness’ or ‘centrality’ from 1 hr 07:36 min onwards, defining the centrality of a node via measures such as degree, closeness and betweenness. As an example, if centrality is to be identified by the degree metric, the most densely connected nodes of the network are considered to be places of a high level of centrality; there are, however, different metrics that can be used to identify centrality in both single-layer and, as explained by Ann at 54:16 min, multi-layer networks.
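The centrality metrics described here are all available in networkx; a minimal sketch (the function name is ours) for a single-layer network:

```python
import networkx as nx

def hub_scores(G):
    """Rank nodes by three common 'hubness' (centrality) metrics."""
    return {
        "degree": dict(G.degree()),                   # edges at each node
        "closeness": nx.closeness_centrality(G),      # inverse mean distance
                                                      # to all other nodes
        "betweenness": nx.betweenness_centrality(G),  # fraction of shortest
                                                      # paths through the node
    }
```

On a star-shaped network the centre node tops all three metrics, but on richer topologies the metrics can disagree, which is why the choice of centrality measure matters.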
As Bratislav points out at 1 hr 51:31 min, it is important to be aware of the assumptions being made when assigning importance to a particular network. If one assumes that communication happens when nodes communicate with their closest neighbour, then the node with the greatest degree would be chosen as an important node, or hub; conversely, if one assumes that communication happens across the shortest paths between nodes, then the node with the greatest betweenness would be chosen. The links between communication processes and measures of hubness are further explored in Bratislav’s talk.
Lastly, once hubs are identified, they can be categorised either as “connector” or “provincial” hubs, depending on the role of the node. A connector hub refers to nodes that provide connections across modules or brain sub-networks (inter-modular connections), whilst a provincial hub refers to a node that has strong connections to the other hubs within a module (intra-modular connections). Martijn talks about this categorisation of hubs and quantitative measures reflecting node categorization from 1 hr 16:19 min onwards. He further explains the two main types of interactions that may occur among the brain’s anatomical hubs, referred to as “core” and “rich club” interactions. Whilst the core refers to nodes that share the same degree (or number of connections) amongst each other, rich clubs refer to the tendency of regions with high connectivity degrees to connect densely among one another. Some limitations and caveats regarding hubs, such as the effect of community sizes on the degree of connections in modules, are also mentioned from 1 hr 32:28 min onwards.
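The connector/provincial distinction is commonly quantified with the participation coefficient of Guimerà and Amaral; the small self-contained sketch below (our own implementation, not taken from the talk) assumes a module label has already been assigned to every node.

```python
import networkx as nx

def participation_coefficient(G, module_of):
    """Connector vs provincial hubs: P is near 1 when a node's edges are
    spread evenly across modules (connector) and near 0 when they stay
    within a single module (provincial)."""
    P = {}
    for node in G:
        k = G.degree(node)
        if k == 0:
            P[node] = 0.0
            continue
        # count this node's edges into each module
        counts = {}
        for neighbour in G[node]:
            m = module_of[neighbour]
            counts[m] = counts.get(m, 0) + 1
        P[node] = 1.0 - sum((c / k) ** 2 for c in counts.values())
    return P
```

Combining a high degree with a high participation coefficient flags a connector hub; a high degree with a low coefficient flags a provincial hub. (networkx also ships a `rich_club_coefficient` function for the rich-club analysis mentioned above.)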
And what do all these complex topological properties mean for brain activity?
In her talk, Petra Ritter provides an overview of biophysical models of large-scale brain dynamics. These models describe the activity of each brain region using biophysically-informed differential equations that describe the aggregate dynamics of populations of mutually interacting inhibitory and excitatory neurons. These populations are then ‘linked up’ according to the inter-regional connections defined by an empirical connectome dataset. The models offer a way of simulating brain activity to investigate how variations in structural connectivity affect function and to test different hypotheses about the mechanisms that drive observed activity recordings. The Virtual Brain offers a powerful and open source platform for exploring these models.
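To make the idea concrete, here is a toy Wilson-Cowan-style sketch of coupled excitatory/inhibitory populations linked through a connectome. The equations and every parameter value are illustrative assumptions only, far simpler than the biophysical models implemented in The Virtual Brain.

```python
import numpy as np

def simulate_wilson_cowan(C, n_steps=1000, dt=0.01, coupling=0.5, seed=0):
    """Toy neural mass simulation: each region holds an excitatory (E)
    and an inhibitory (I) population, and regions excite one another
    through the structural connectome C (region x region matrix).
    All parameter values are arbitrary, chosen only for illustration."""
    rng = np.random.default_rng(seed)
    n_regions = C.shape[0]
    E = rng.uniform(0.1, 0.2, n_regions)
    I = rng.uniform(0.1, 0.2, n_regions)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    history = np.empty((n_steps, n_regions))
    for t in range(n_steps):
        long_range = coupling * (C @ E)      # input from connected regions
        dE = -E + sigmoid(1.5 * E - 1.0 * I + long_range)
        dI = -I + sigmoid(1.0 * E - 0.5 * I)
        E = E + dt * dE                      # Euler integration step
        I = I + dt * dI
        history[t] = E
    return history                           # (n_steps, n_regions) activity
```

Varying the entries of C in such a simulation is a minimal version of the question these models address: how changes in structural connectivity reshape the simulated functional dynamics.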
So, what do I do if I’m still a bit confused but want to learn more about network science?
It’s understandable that you should feel confused when trying to wrap your head around connectome construction and network analysis. As mentioned in the previous paragraphs, graph theory can be used to further our understanding of the organisation of the brain network. However, there isn’t a set roadmap for defining brain nodes and edges, or for detecting a network’s modules or hubs. Though this may be a tricky process for a researcher who is trying to ensure that the right decisions are made, it allows the researcher to tailor the process to address specific research questions. For example, though there are general good practices for defining nodes (such as replicating findings using different parcellations), there isn’t an optimal approach that should always be used to define nodes, because the choice depends on the research question being addressed.
There are several online resources on network neuroscience, including two textbooks: the first by Albert-László Barabási (2016) found here, and the other by Alex Fornito, Andrew Zalesky and Ed Bullmore (2016) found here.
Some papers that provide a good overview of concepts in network neuroscience are:
In this episode of Neurosalience, Peter Bandettini meets the OHBM Open Science Special Interest Group (OS-SIG). Together they discuss the history of the OS-SIG and the unique, important role it plays in OHBM. The group fosters open science not only by encouraging best practices and sharing data and code, but also by encouraging inclusivity in science and open-ended discussion in a supportive environment.
Aki Nikolaidis is chair of the OHBM Open Science Special Interest Group, and served on the Student Postdoc Special Interest Group from 2017-2019. He is a research scientist at the Center for the Developing Brain at the Child Mind Institute in New York City. His research focuses on using advanced statistical methods to better understand psychopathology, brain organization, and cognitive performance. Aki's work in neuroscience, psychology, and machine learning has been featured in over 20 peer-reviewed publications.
Janine Bijsterbosch is chair-elect of the OHBM Open Science Special Interest Group. She is Assistant Professor in the Computational Imaging section of the Department of Radiology at Washington University in St Louis. The Personomics Lab headed by Dr. Bijsterbosch aims to understand how brain connectivity patterns differ from one person to the next, by studying the “personalized connectome”. Using open data resources such as the Human Connectome Project and UK Biobank, the Personomics Lab adopts cutting edge analysis techniques to study functional connectivity networks and their role in behavior, performance, mental state, disease risk, treatment response, and physiology. In addition, Dr. Bijsterbosch wrote a textbook on functional connectivity analyses, which was published by Oxford University Press in 2017.
Johanna Bayer is secretary-elect of the OHBM Open Science Special Interest Group, treasurer-elect of the OHBM Australia chapter and an active member of the OHBM Sustainability and Environmental Group. This year is also her second year behind the scenes helping to organize the OHBM Brainhack. Johanna, who is originally from Munich, Germany, has a background in Psychology, Neuroscience and Computer Science and was awarded the Melbourne Research Scholarship in 2018 to pursue her PhD at the University of Melbourne. Her PhD work focuses on creating a normative model of the brain from neuroimaging data to study depression, and on a method to harmonize site effects in large neuroimaging data sets. In addition to her passion for her work and for Open Science, Johanna also loves dancing, rock climbing, computer science and cats.
Katie Bottenhorn is the secretary of the OHBM Open Science Special Interest Group. She is a graduate student who is interested in how large-scale brain network topology varies both between- and within-individuals over the course of everyday life. She is especially interested in how hormonal fluctuations associated with the menstrual cycle and hormonal contraceptives contribute to this variability, and how this differs with respect to changes in sleep, exercise, and stress.
Melvin Selim Atay is the inclusivity officer of the OHBM Open Science Special Interest Group. He is working at the Middle East Technical University, Ankara, Turkey and his research focus is explainability of deep learning.
By Nabin Koirala
In light of her upcoming Keynote lecture at OHBM2021, we wanted to get up close with Anna Wang Roe. Dr. Roe is currently the director of the Interdisciplinary Institute of Neuroscience and Technology at Zhejiang University, China. In the interview, we talked about her academic journey and were lucky to hear some backstage stories to get to know Dr. Roe even better.
Nabin Koirala (NK): I would start by saying thank you for making time and agreeing to the interview. I am very excited to find out more about the keynote lecture that you are planning for the annual OHBM meeting 2021, but also about you and your personal experience in research in general. To start with, maybe you could introduce your research to our readers, not all of whom are scientists.
Anna Wang Roe (AWR): I've been trying to understand, for a long time now, what it is about our brain that makes it capable of doing abstract things like sensation, cognition and emotion. I've been mystified by the fact that the brain as a structure is physical, but still able to generate these abstract behaviors. So how does abstraction arise from a physical entity like the brain? And after all these years now, I believe that I may have an approach that will lead us to an answer.
NK: That’s fascinating. Could you maybe also tell us a bit more about all those years you mentioned, or in other words your journey in science so far?
AWR: Sure, let me go through a little bit of my travels through these ideas. So, I started off in college as a math major at Harvard, where I took a course on logical systems with elements, theorems and rules. This led to a lightbulb moment in which I suddenly needed to know whether the brain qualifies as a logical system. So, kind of overnight, I turned myself into a biologist and filled my senior year with neuroanatomy, neurophysiology and laboratory work, which I immediately fell in love with. So much so that I ended up as a graduate student at the Brain and Cognitive Sciences department at MIT. My mentor at that time, Mriganka Sur, gave me a really great project on brain development. The question we were asking was whether the visual cortex was visual because of the cortical tissue itself or whether it was visual by virtue of the inputs that it gets? In some sense it was a very philosophical question. In the experiment, I rerouted the visual inputs from the eye into what normally is the auditory pathway to observe whether the auditory cortex would behave like a visual cortex, or whether it would still behave like an auditory cortex. Using electrophysiological methods, I found, lo and behold, that this cortex was definitely more visual in the sense that it contained a visual map and contained cells that were orientation selective. This was a very big finding with huge impact (even yielding a couple of Science papers!) because it showed that the target structure had an internal inherent circuitry that processes whatever it receives as an input, suggesting there are standard canonical cortical circuits. Importantly, this result had a huge impact on me personally because I was very attracted to the idea of cortical columns, made famous by Nobel laureates David Hubel and Torsten Wiesel, as fundamental elements of very systematic architecture to the brain. I was really attracted to the computational, genetic and developmental efficiency of this architecture. 
So I wanted to know whether columns are ubiquitous in the brain, and if so, what are the rules that govern their organization and their connectivity? Are the observed functions in the brain a result of a system of such elements and rules? These questions then drove my research and my career in science because if they were to be true, then this would suggest that the brain really has some machine-like qualities, bringing me back to my original question of whether the brain is a logical system. I hope to find the answers to these in the coming years.
NK: That’s very interesting. So, at what stage are we in this path of getting the answer? As you mention in your webpage, do you think we are close to developing a mind-machine interface and how far are we from being able to modify it to enhance brain function or human behaviors for therapeutic purposes? It would be great if you could give us more insight with your current research focus in achieving these goals.
AWR: I chose non-human primate brains as an animal model because of the similarity to humans. They have a similar visual system, they use their hands for manual behaviors as we do, and their brain architecture, including columns, is very similar to that of humans. We train monkeys to do behaviors, and then image their brains while they're performing different visual tasks to understand the functional role of columnar units in vision. Because these functional units are very small, in a sub-millimeter range, we developed imaging techniques which provide high spatial resolution and help us accurately map the brain. In the last three decades, I've mapped these columns throughout the brain, particularly in visual and somatosensory areas, but also in motor and prefrontal areas. And based on the findings from these studies, I believe that these columns exist in at least 80% of the cortex.
The next step was to study how these units are connected to each other in networks, something that really no one has done systematically and at this columnar scale. I believe the word connectome is probably a familiar term referring to all the connections in the brain. There are a lot of connectome projects throughout the world, and many millions of dollars have been spent on them. But ours is the first in primates at mesoscale (or columnar scale).
To give you an example of why imaging spatial resolution is important, let's say you and I are standing next to each other and I am talking to Person A, and you are talking to Person B. Now, if a method could not resolve you versus me, then it would appear that both you and I as a unit are speaking to both A and to B, which would be incorrect. So the importance of high spatial resolution and developing methods to achieve such resolution cannot be overstated. The method I will talk about at OHBM is called INS-fMRI (or infrared neural stimulation with fMRI). This is an optical stimulation method that activates neurons but, unlike optogenetics, does not depend on viral transfection. This new method directly stimulates neurons with infrared light in a way that doesn't harm them, and at the same time, activates them effectively. With this method, we’ve shown in our 2019 Science Advances and our 2021 Neuroimage papers that brain networks are indeed based on activations at columnar scale. I'm now planning to apply this technique across the brain, systematically and hope to learn about the overall architecture of connections in the brain. I predict this focus on the fundamental units of processing will strongly impact the world of brain machine interface, medicine, and AI. That is, to effectively interface with the brain, you must understand its basic architecture.
NK: I hope that happens soon so that we could actually have the mesoscale level connectome. I believe this is also part of your upcoming keynote lecture, so let me try to understand this a bit more. While explaining the advantages of this method, we talked a lot about the spatial resolution which could be in the sub-millimeter scale, but what about the temporal resolution? Isn’t this also an important aspect to understand these functional units’ behavior? Is the temporal resolution in your method somehow better than established imaging techniques like fMRI?
AWR: That’s an important issue and I am glad that you brought it up. There is definitely a limit to the temporal resolution. So, with ultra-high field imaging, you could gain a lot in terms of signal to noise ratio, but the temporal resolution is still limited to that of the BOLD signal which is on the order of seconds. So to study the rapid dynamics of these units, you need to add other methodologies such as electrophysiological recordings. As my papers will attest to, I’m a true believer in multimodal solutions to challenging questions.
NK: So, are you already exploring the possibility of combining all three techniques – infrared neural stimulation, fMRI and EEG (for example)?
AWR: Yes, absolutely and this is exactly what we're planning to do. Right now, we have the INS and fMRI in anesthetized and behaving monkeys, and have already developed electrodes that can record during the MRI. We are investing resources into EEG recordings covering the whole brain to provide a more complete picture of the temporal aspects, so bit by bit we hope to crack this nut!
NK: Do you think it might have a translational possibility to humans at some time point?
AWR: I wasn't going to raise this but yes, we are already starting to explore this possibility with some neurosurgeons. You know, having interfaces with the brain is an idea that is becoming more widely accepted. People are walking around today with stimulators in their brain, for example, deep brain stimulation for Parkinson's patients, psychiatric conditions, cochlear implants etc. The whole field is still developing. So, we are working on that direction and I think it will happen sooner than we expect.
NK: That's fascinating to hear, and I believe forthcoming technologies for deep brain stimulation, like high-intensity focused ultrasound, which offers the possibility of non-invasive stimulation, will further expand the possibilities. Turning to your experience in academia so far: you have worked in many different universities and eventually moved to China from the US. Could you maybe walk us through what drove this decision? Is it actually like the media portrays, that the weight is shifting to China in terms of research resources available?
AWR: Well, one of the main reasons I moved around was the technology that was available at the places that I went to. For me, science and technology go hand in hand, so to answer the questions I have, I need new technologies. For example, as we talked about earlier, we need higher spatial and temporal resolution to be able to answer the connectivity problem, and the technical possibilities to be able to stimulate the brain in specific ways. So that is my main attractor, and it was also the reason I moved from Yale to Vanderbilt, because Vanderbilt was establishing a new MRI center and a whole new group for that. I benefited a lot from working with them at Vanderbilt.
Now regarding my move to China, I believe it was a combination of things. In 2012, I took a sabbatical in China and visited many universities and institutions, where I discovered Zhejiang University here in Hangzhou. I was really wowed by how collaborative people were here, and on top of that the city is truly one of the most beautiful in the world! The collaborative atmosphere made me believe that it would be a great place for an interdisciplinary institute. So I made the proposal to the university to set up a neuroscience and technology Institute and they went for it, so that's how that started. It's been a gradual transition moving from US to here and in the meantime, I also joined Oregon National Primate Research Center in a half time position. ONPRC is a huge primate resource with over 5000 monkeys. It's absolutely amazing. The resources they have, like a dedicated MRI for monkeys and lots of NHP expertise in the campus, attracted me to get involved there. But eventually, my projects in China - especially the connectome project - required my full time. Here in China, I’ve got students of different backgrounds in my lab, including medical science, optical engineering, computer science, biomedical engineering. It’s a real collaborative team effort! And as the institute is growing with about 15 PIs, an MRI center with 3T and 7T, nonhuman primate facility, two photon and three photon imaging facilities, and multiple teams of researchers working here, it is difficult to get out. (laughs)
NK: That's great. It's very rare that you can gather all these technologies in one institute and focus on your research question. Also, it’s good to know that this kind of research is still possible. Because with the recent incidents in Tübingen, Germany and a couple of labs in the US, scientists were thinking twice before starting the primate research. Anyways, as you now have research experience in both China and the USA, could you tell us something about similarities and differences in terms of research culture?
AWR: Well, there's definitely differences but I would say that, as far as the quality of research goes, the best research here is as good as the best research in the US. I have found not only are the scientists excellent here, but the students are fantastic as well. They're very motivated, hardworking and inquisitive and that's important. The students here though have different backgrounds than those in the US because for the students here it’s been a competition at every level: they have to be the best students to get into good middle schools and the best high schools and then on to college. So, by the time they get to join the university, they are really the cream of the crop. It’s been really a privilege to be working with them. They might not be thinking outside of the box enough and opening themselves up as much as the students in the US because of their culture, but I challenge them on this front. They also benefit from our international profile, as we have researchers from different cultures, including non-Chinese speaking scientists from different parts of the world.
NK: I think this is an interesting point that you brought up regarding non-Chinese scientists coming to China. So, how does the trend now compare to, say, 10 or 20 years back? Do you think it is increasing given China’s investment in science? And how do you think it will go in the future?
AWR: I would say that it was really good for a while. There were a lot of foreign scientists coming here and Chinese scientists going abroad. But in the last few years, particularly since the Trump administration, it's really kind of shut down. I would say in the last three years it's changed drastically. And now the feeling is that, well, you might get in trouble if you collaborate, which is not a good feeling to have. I became very sensitive because I am working on both sides of the ocean. So, it has become difficult these days to collaborate or develop technologies collaboratively. Even NIH has become sensitive towards these interactions. This is not good for either side. According to my colleagues and friends in the US, they are having such a hard time finding postdocs to fill their labs. On this side, the students are not able to gain the exposure and experience because these days they cannot get visas to go to the US. I am sure this is a passing thing and science will prevail.
NK: Hopefully this political tension will improve soon, and scientists will be left to do science again. We have talked a lot about science, and as we are nearing the end, I would like to talk about some of your personal interests. So, let’s start with your hobbies: what kind of hobbies do you have outside of science?
AWR: Well, I have to be honest, I don't have a lot of free time (laughs). But what I truly enjoy is cooking. That's something my husband and I do a lot together. We enjoy having friends over, trying different recipes and enjoying the time. The other things I enjoy are art and nature. So, whenever I get the chance, I go to galleries and museums, or out into nature. The city is truly beautiful, with an emphasis on nature, so I enjoy that a lot here.
NK: Sounds great. As this blog is read by lots of young scientists, what suggestions would you like to give them, not only on how to be successful but also on how to do good research?
AWR: Okay, I generally don't like to say "this is the way you should do things." But if I had to answer that question, I would say that to do good science, follow your heart and your intuition. Often, people think of scientists as these logical beings, but sometimes some of your critical decisions really come from the heart. If it works for you, you should go for it. Another useful quality is being able to fight and be persistent. If somebody says you’re wrong, you should not give up; if an editor rejects your idea, you have to tell the editor to reconsider - I know that's not always easy (laughs), but you should really stand up for yourself. It doesn't have to be obnoxious, but the persistence has to be there. A third suggestion is probably a more practical aspect of science, which is money. I would say, always plan for a rainy day and use your funding wisely. The last thing I would add is that you should always know that science is full of different personalities and people with different backgrounds. So, as a supervisor it's good to try to figure out what somebody is seeking and what they are good at, and to use that to their advantage as well as your own. It's not always easy to do, but everyone's got skills and talents that can be useful in science.
NK: These are great suggestions, thank you. Maybe a bit of fun now: as you've been working in neuroscience for so long, what would be your favorite brain region and toolbox?
AWR: I guess for the favorite brain region, I'm biased by the brain region I'm studying now. I've been studying sensory systems for a long time, trying to understand the hierarchy of processing within a sensory system. Now I'm working on an area called the amygdala, which is a part of the limbic system and primarily processes emotions. This is a region I am really in love with now, because it's an area that's tied to all different parts of the brain, including your sensory, motor, and cognitive regions up to the autonomic and visceral systems. And it amazes me every time thinking how this tiny region could handle all this complex integration of different information. This area not only filters your incoming information through cognitive and emotional filters to make a final output - or, so to say, generate feeling - but at the same time it also controls your physiology, including your heart rate, your breathing and all sorts of things. I should mention that this will be a part of my keynote talk at OHBM this year, as we are studying these connections using the INS-fMRI technique. For the toolbox, I am not using any actively at the moment, but my students and postdocs love AFNI. Personally, though, I still love mathematics, and most of the time I am still wondering how to develop a mathematical theory of the brain. Recently I have been trying to think more about recursion and how it might bring succinctness to our brain architecture. So in that sense I am more interested in basic equations than in toolboxes (laughs).
NK: That’s great. I think it would be incomplete to conclude this interview without mentioning the pandemic, as there are lots of discussions about productivity: how it has been impacted, how it should be handled, and so on. Maybe you could say how the pandemic has impacted your research, and research in general.
AWR: Well, last year in 2020, I spent 9 months in the US, as I got stuck there when the pandemic hit. I could not do lab work and mostly stayed home, but I became very productive writing papers that were backlogged. So, it was still a good use of time. When I returned to China, I immediately felt that I had landed in a different world. Society was functioning pretty much like normal. Because they were able to control the pandemic here, the return to normal was fast. In the US, even though things have slowed down a bit, with the COVID vaccinations going well, things are picking up and returning to normal, too. People have found ways to adapt, and sometimes challenging times can also be good for science. For example, I was recently invited to a meeting organized by PRIME-DE (the international primate data exchange group), and there were no presentations or talks but instead just primate researchers getting together for discussions. I really loved that, and the response to the pandemic showed that we can be resourceful and find alternative solutions.
NK: I would like to say thank you again for talking with me and sharing your experience with us. Looking forward to your talk.
AWR: Thank you so much for this opportunity. It was a pleasure talking with you and I wish you great success in your research.
In this episode Peter Bandettini finds his similarities with Professor Nik Weiskopf. They are both physicists working at the interface of MRI acquisition and brain physiology and function, plumbing ever more quantitative information about how our brains are organized, how they change with disease, and how they vary between individuals. We find out about Nik’s professional history, get into real-time fMRI neurofeedback, and discuss his work on quantitative MRI for deriving maps of myelin, iron content, fiber tract direction, and more. All of this is done at unprecedented resolutions using their unique Connectome scanner - one of only four in the world.
Professor Nik Weiskopf is the Director of the Department of Neurophysics at the Max Planck Institute for Human Cognitive and Brain Sciences. He received his Ph.D. in 2004 from Graduate School of Neural & Behavioural Sciences and International Max Planck Research School in Tübingen, Germany.
In 2004 Nik moved to the Wellcome Trust Centre for Neuroimaging at University College London in the UK. He quickly moved up to head the Physics Group there in 2006, becoming Senior Lecturer (equivalent to an associate professor) in 2009. He became full professor of MRI Physics at the UCL Institute of Neurology in 2014 and subsequently moved to the Directorship of Neurophysics at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig in 2015.
The Neurosalience production team consists of Anastasia Brovkin, Katie Moran, Nils Muhlert, Kevin Sitek, and Rachael Stickland.
by Rosanna Olsen, Valentina Borghesani and AmanPreet Badhwar on behalf of the OHBM Diversity & Inclusivity Committee
OHBM initially launched a “Diversity and Gender Task Force” in 2017 to address the growing need to recognize and address multiple forms of inequity with respect to gender balance and geographical representation on the OHBM Council.
Since 2017, this initiative has worked towards tackling a range of issues surrounding underrepresentation at OHBM. The task force has evolved into a standing “Diversity and Inclusivity Committee” that meets regularly to ensure that the needs of the diverse OHBM community are adequately represented at all levels of the organization and in all of its activities. Not surprisingly, the Diversity and Inclusivity Committee liaises with all OHBM Committees and SIGs throughout the year.
During this year’s virtual meeting we are planning a series of different events to highlight and celebrate issues of diversity and inclusivity in OHBM.
The 3rd annual DIC symposium: Racial Bias in Neuroscience
This is the third year OHBM will feature a symposium devoted to the discussion of diversity issues that affect our society. In 2019, speakers presented on issues of gender equality within the OHBM meeting, its leadership, and the field of human brain mapping. Speakers provided an overview of gender imbalances in academia, including citation counts, conference speakers, and prize awardees. The issue of biologically versus culturally driven sex differences in brain anatomy was also discussed. In 2020, speakers discussed neuroscience issues pertinent to the LGBTQ+ community, such as the underrepresentation of LGBTQ+ individuals in STEM careers and the complex nature of gender/sexuality, and how these factors are addressed in research.
This year’s symposium will explore issues of racial bias in neuroscience. People of color are not well represented in academia, especially at senior levels (e.g. Full Professor). This year our speakers will discuss issues such as systemic racism and other factors leading to the underrepresentation of Black and Latin scholars. Moreover, as in many areas of science, the majority of human brain mapping research is conducted in homogeneous, non-representative populations, which is problematic for the interpretability and generalization of research findings. Strategies for promoting equity within the field of human brain mapping, overcoming current barriers for ethnic minorities in OHBM, and increasing the ethnic diversity of our research samples will be discussed.
This year’s Diversity Symposium speakers are: Dr. Sharleen Newman (University of Alabama), Dr. Yakeel Quiroz (Harvard University), and Sade Abiodun (Princeton University).
The 2nd edition of the Kids Review - this time multilingual!
After the success of the Kids Review in 2020, which led to five Frontiers for Young Minds papers, this year our kid-friendly initiatives are scaling up! We opened up to young reviewers from all over the world by reaching out directly to schools and, critically, by offering the experience in five different languages. Children speaking Korean, Spanish, Portuguese, French, or Italian will be able to act as reviewers for five different papers presented by OHBM members. Whether in school with their teachers or at home with their parents, kids will be able to watch the prerecorded talk in their native language (with subtitles either in their native language or in English) and then ask the researchers questions directly on the day of the live Q&A during our annual meeting.
This monumental effort required a full team of volunteers across three continents and more than seven countries - some OHBM members, some educators, scientists, or teachers. In alphabetical order they are: Inês Almeida, Aman Badhwar, Giulia Baracchini, Valentina Borghesani, Eunji Cho, Antea D'Andrea, João Duarte, Mariana Flores, Eduardo Garza-Villarreal, Kirk Geier, Soo Hyun Kim, Hyang Woon Lee, Kangjoo Lee, Anaïs Llorens, Laura Marzetti, Rosanna Olsen, Pedro Pinheiro Chagas, Ana Luísa Pinho, Maria Ribeiro, Viviana Siless, and Athina Tzovara. We all came together united by the desire to bring the latest scientific output of our community to the bright, curious, young minds of future scientists!
Too much fun to miss out on? You can still register via our online form & check out the details on our stand-alone website!
Our paper in NeuroImage and associated Diversity Roundtable at OHBM 2021
The OHBM Diversity and Inclusivity committee published our first paper in 2021. In this paper, we reviewed the composition and activities of the Diversity and Inclusivity Committee that have promoted diversity within OHBM, in order to inspire other organizations to implement similar initiatives.
We strongly believe that these activities have brought positive change within the wider OHBM community, improving inclusivity and fostering diversity (e.g. sex distribution of council members and speakers provided as an example below) while promoting rigorous, ground-breaking science. Limitations and challenges faced by our committee and the way forward are also discussed in this paper.
Sex distribution for committee members and Talairach lecture and keynote speakers. Dark colors highlight the number of women; light colors indicate the total number of individuals. Dotted lines mark the year 2016, when the DIC was established. Image from Tzovara et al., 2021.
During the 2021 annual meeting, we will be holding a Diversity Roundtable to showcase the findings in this paper, and discuss with the community on how to further improve equity, diversity and inclusivity at OHBM.
The Diversity & Inclusivity Committee can’t wait to share this (and more!) with you during the annual meeting! Let’s recap (click here for a D&I-specific google calendar): We’ll see you there.
This year’s OHBM conference is quickly approaching, including two consistent conference highlights: the OHBM Brainhack and Open Science Room (OSR). To learn more about what we can expect from this year’s events, I spoke to some of the current leadership of the OHBM Open Science Special Interest Group (OSSIG).
Current chair Aki Nikolaidis and secretary-elect Johanna Bayer joined us to talk about ongoing initiatives within the OSSIG, while hackathon co-chairs Matteo Mancini and Tibor Auer and OSR co-chairs Gabriel Gonzalez-Escamilla and Eva Alonzo-Ortiz told us about the upcoming events.
We had a long and thoroughly enjoyable discussion, which I encourage you to check out in its entirety. Below are some highlights from our conversation, condensed and edited for clarity.
I'm a longtime fan of the Open Science Room and the Brainhack, so I'm very excited to hear about everything you’ve been up to. The OSSIG events are some of my absolute favorite parts of the conference.
To start us off: I try to keep up with the OSSIG on twitter (https://twitter.com/OHBMOpen), and I've seen that you launched new Open Science fellowships. Can you tell me a little bit about these fellowships?
The fellowships were envisioned to recognize early career researchers who were committed to and contributing to open science. The idea was to give monetary awards to work that is traditionally unrewarded; for example, community building or tool development. We also see it as a way to diversify who can engage in these events. Historically, the open science community has been predominantly white, predominantly North American and European.
We’ve closed our first round of applications, and we were encouraged and inspired by all of the applicants. Our first class of fellows is really impressive.
One thing we’re excited about, too, is to engage the fellows more directly in OSSIG events. There's a lot of volunteer work that happens during OHBM and at OSSIG events. We’re asking fellows to help as teaching assistants during the Brainhack. So we’re lucky to have fellows who are knowledgeable about the different topics that we're going to be teaching in the train track [educational sessions] of the hackathon.
Speaking of Brainhack, it's coming up soon! Could you talk about what this year's hackathon will look like?
Last year’s event was already an incredible success, and we’ve really tried to build on that work. Something we want to focus on more this year is to try to engage people who may not be interested in coding. We also want to have a conversation about the fact that historically—and unfortunately—the audience of Brainhack tends to be predominately white, straight men. We want to create a more diverse community, and we’ve planned a session to discuss what we can do to move towards that goal.
For those who are interested in coding, QMENTA has agreed to provide access to their platform during the event for any project that wants to run on their resources, or for people who want to organize a multi-group challenge that continues after the hackathon. We hope these additional resources give people the chance to not only dream big but to actually go for it.
That sounds really exciting. How can folks access these sessions and resources? I know that this year the OHBM conference is going to be on the Sparkle platform. How will the train track and hack track be adapted to these virtual spaces?
We are working with Sparkle to try and have the hackathon there, though this isn’t yet finalized.
But more generally on access: In thinking about how to make sure that this event is accessible, we’ve been inspired by cakes. A good cake needs to have multiple layers, and we want to have multiple layers in which people can engage with the Brainhack event.
For the TrainTrack [educational sessions], the first layer is pre-recorded videos that can be accessed during or even before the event. These will cover topics like version control, data visualization and machine learning, and reproducible workflows—these will leverage recorded sessions from some of the many wonderful events that have gone virtual over the last year. We’ll also have pre-recorded content on community building, as part of the non-coding focus I mentioned.
Another layer of engagement will be live Q&A sessions where we will have moderators to take questions from the audience and guide an overall discussion on a given topic. We hope that this will encourage anyone who wants to ask or answer a question to have an opportunity to engage with the Brainhack community.
The last layer connects the hackathon with the OSSIG fellows. OSSIG fellows will lead small group sessions that will be repeated throughout the days of the hackathon, where people will actually be able to solve problems with the tools that they want to learn about - a format explicitly inspired by the Neuromatch Academy experience.
In the hack track, participants will be working together on individual projects. How are you envisioning that participants will work together from different sides of the globe on the same project?
We learned from last year’s event to structure around different time zones. We came up with two time zone slots: “Atlantis” for Europe, Africa, and the Americas, and “The Rising Sun” for Australia and Asia-Pacific. That said, we don’t expect projects to stick to just one time zone; for example, someone from Australia can work on one particular project and when they go to sleep, someone else in Europe can start on it.
I'm a huge fan of collaborative working, and I think the virtualization of the event allows more people to get involved. We really do hope that this can happen on the Sparkle platform, but we’re exploring other options as necessary. And of course, the actual work is probably happening through GitHub or other collaborative platforms.
The HackTrack is one of the most creative parts of the conference and a great show for all the geekiness from our community. I’m looking forward to seeing the new things folks come up with.
In the past, it’s been unclear how to recognize the work that happens at the hackathon. Now that OHBM has its own publishing platform Aperture, do you have any plans for hackathon proceedings—maybe in partnership with Aperture?
We are in talks with Aperture about hosting the hackathon proceedings, which would highlight all of the different hackathon projects. That said, we haven’t yet finalized submission requirements or clarified the eventual format. But we hope that hackathon participants will eventually be able to submit their projects for publication. We’re not envisioning full length manuscripts, but short summaries for each of the projects: what they did, what they tried to do, what they found, etc.
I'm so looking forward to this year's Brainhack! But of course, open science doesn't stop at the Brainhack, right? It goes right into the main conference with the Open Science Room (OSR). Can you tell us how you’re adapting the OSR in 2021?
Stepping in as OSR co-chairs is a hard challenge, because the programming has consistently been very good so far. This year, I’m excited to announce that the OSR will be a real part of the OHBM program. Traditionally we’ve been an auxiliary event, so joining the main program on the Sparkle platform is an exciting evolution.
As for the OSR program itself, we’ll have four main kinds of events: panel discussions, educational sessions, emergent sessions, and a BIDS town hall.
The panel discussions are organized around some core topics. We solicited contributions from the community for ideas within those themes, and we actually just closed those calls. We’re now working to finalize which panelists will be presenting. We also have 10 educational sessions, ranging from introducing the basic tools and motivation for doing open science to dealing with failures in the current system.
Our emergent sessions have the most open format, as our goal there is to provide the space for folks to launch their own conversations rather than to specify a general topic. We will generally be looking for topics related to open science and related ideas like diversity and accessibility. But attendees are free to propose topics, and they can do so directly by email or on our website during the main conference.
A few years ago, Dan Lurie organized an emergent session which was kind of an MC’ed group discussion. Our goal is to bring people together like that and discuss things, like, for example, problems in transitioning from academia to industry. But the important thing is that participants decide the topics, and we leave the floor open.
For the panels, on the other hand, we had specific topics we wanted to cover. We chose those topics based on community surveys, so those are some of the topics that people were really interested in. And also, we felt like we wanted to move the Open Science Room a little bit away from just being about coding to being more about community building, inclusivity, diversity and related ideas. So we wanted to address those topics as well.
One other session type you mentioned is the BIDS town hall. BIDS is the Brain Imaging Data Structure, a community-driven standard for organizing neuroimaging data. It’s still in active development, with many ongoing extension proposals. Will the town hall be a space to learn more about this work?
It will be an opportunity to have an overview of the current status of BIDS from the point-of-view of the BIDS steering committee. We’re happy to provide the OSR as a platform for this town hall, and attendees will be able to hear more about ongoing developments in this space.
For both the hackathon and the OSR, how can we get involved? It sounds like the goal is to cast as wide a net as possible, so I imagine there's lots of work to do. And if people want to volunteer and join in, what can they do at this point?
First, just to join the events themselves: you don't have to be attending the OHBM conference or be an OHBM member. So if you're online, you're interested and you want to learn or contribute, you can just come. All of our activities are as open and inclusive as possible. We have a $25 fee for the hackathon, but if you come from a low or middle income country, or you can't pay for any reason, we can give you a reduction or waive the fee completely.
To volunteer: you can reach out to us in a lot of different ways. You can go to our website. You can email us at ohbmopenscience AT gmail.com. You can reach out to us on Twitter or on the Brainhack mattermost.
We're looking for volunteers to help run both the hackathon and the open science room. We don’t need people to commit to long days, since the events themselves are on a distributed schedule, but we do need people in different countries to join us to help run these different events. And if the events don’t work with your schedule, we always need help with our websites, too. We're going to be doing our best to dynamically update our websites and our content on the Sparkle platform as these events roll out.
If anyone who is located in the Asia-Pacific or “Rising Sun” slot is inspired to join, we’re specifically looking for people from those time zones to help run these events!
On the hackathon website, we link out to our volunteer form, which points to the actual descriptions of the volunteer roles. So we’re trying to be completely transparent, in the sense that we post the actual description of what you will be doing.
And for the OSR, we’re looking for volunteers to attend and monitor the emergent sessions, monitor the event chats. This is a great way for people who are not attending the main conference to interact with the speakers. On the OSR website, we have a ‘Volunteer’ section with descriptions of the roles and a chance to sign up. We also ask if people are interested in volunteering directly in the OSR registration form, and we’re happy to have people join in at any point!
If you are new to open science, volunteering with the Open Science SIG is one of the best ways to start getting a feel for open science and to start feeling like you're part of the community. We don’t have any skill-based barriers to participation: you already know what you need to know to get involved. So come and join.
I should also mention that we have some positions on the OSSIG open for next year as well. So if any of the available positions sound exciting to you, and you want to get involved, please come. And if those sound like too much of a commitment, there's always other ways to get involved, from open science fellows to volunteering and coordinating things onsite.
If anyone is not yet familiar with Brainhack and wants a quick primer before jumping in, we’ve just published a short piece about the ideas guiding the community that highlights the culture and structure of Brainhack events.
One thing that’s worth saying, too, about vision is that the vision of the OSSIG continues to grow every year. We’ve been really fortunate to have strong support throughout the OHBM leadership this year, and I hope that’s only going to continue to grow.
I hope so too, and I’m so excited to see the success of this year’s events and all of the OSSIG community initiatives. Thank you all for taking the time to speak to me about it today!
OHBM 2021 Keynote Interview Series: Q&A with Dora Hermes - Multi-modal computational models for a new human systems neuroscience
By: Elizabeth DuPre
Dora Hermes is currently an Associate Professor of biomedical engineering at the Mayo Clinic in Rochester, Minnesota where her work focuses on identifying disease biomarkers and developing neuroprosthetics to interface with the brain.
I had the pleasure of catching up with her before her 2021 OHBM keynote lecture to find out more about her research, what drove her work on iEEG-BIDS, and her advice for early career researchers.
Elizabeth DuPre (ED): One thing that stands out to me is the diversity of your research projects: from improving recording standards and materials to developing computational models of visual processing. How would you describe your research program or field of study?
Dora Hermes (DH): This is actually a discussion point among our colleagues right now. I would call it computational cognitive neuroscience or human systems neuroscience. One of the things that I think defines this field is that we’re trying to learn as much as possible about the mesoscale population measurements that we have in humans. The fact that we’re measuring from populations of neurons in the human brain is so important and so fascinating: we can measure all of these different signals, but how do we characterize those signals and their interactions in a quantitative manner?
Excitingly, this is also the scale at which neuroprosthetics are being developed. So brain-machine interfaces, closed-loop stimulation: these systems all work with electrodes that measure from populations of neurons. So if we want to understand where to implant them and what type of signals are biomarkers for particular processes, then the neural population scale is where we should focus to drive those therapeutic techniques forward.
ED: I think many OHBM members will have heard of initiatives like Neuralink and the exciting momentum around brain-computer interfaces right now. Is this an application that got you interested in this field?
DH: During my PhD I actually started in a lab in the Netherlands that was developing brain-computer interfaces, so in 2008, 2009, we were working with patients who had ECoG electrodes implanted on the surface of the brain, and they were playing [the video game] Pong with their brain signals. That is, they had to hit the target just by manipulating their brain signals. So at the time we were asking questions like: Where should these electrodes be implanted, and can we actually determine that from an fMRI scan? Because we measure BOLD on the one hand and electrophysiological signals on the other hand, we wanted to know how BOLD measurements can inform our understanding of electrophysiology.
That’s how I initially started to get interested in this field, but there are of course many more applications of this methodology. For example, deep brain stimulation or combining different closed-loop measurements with stimulation to try and entrain networks. But brain-computer interfaces were what first got me interested.
ED: You recently published a paper looking at implantation sites to help with gesture recognition.
DH: This is centered around the question of: how can we get as many degrees of freedom as possible with as few electrodes as possible, since there are computational and hardware limitations in terms of how many signals we can describe. In that paper, we—and particularly my PhD student Max van den Boom—try to look at what the optimal implantation and configuration strategy should be.
ED: You obviously have a lot of data that you’re collecting and analyzing. Was this one of the motivations to help develop iEEG-BIDS, the sheer volume of data that you need to process in your own research?
DH: There are several aspects to this. When we first started developing iEEG-BIDS we had a very practical problem. We had a consortium between different sites and we wanted to share data between the sites, including 7T fMRI data, 3T fMRI data, intracranial EEG recordings, and MEG data. Because the data were collected across sites, they were also in different formats! So we were asking, how can we best organize these data? While I was thinking about this, Chris Gorgolewski was at the time one floor below us, and I talked to him for a little bit and then looked into the Brain Imaging Data Structure (BIDS), and it was simply the most practical and simplest solution to our problem. And that’s what I’ve seen throughout: whenever you give people a description of BIDS, it’s immediately recognized as such a practical and simple solution for sharing data across sites.
So for us, the nice thing about it was that we could easily organize all of our different modalities according to BIDS. At that time the fMRI and MEG descriptions were already created, so we just had to work on creating a solid description of (i)EEG data to make sure that everything would be in there. And since that worked out really well, in my lab now all of my data have to be organized according to the BIDS structure, and it gives everyone guidance on what the data should be like. It makes it much easier to share data with publications as well and make sure that publications are reproducible. And as we get more and more data over time, it’s easier to find things. So I’m really happy that we put in the time to help expand BIDS.
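To make the organizing principle concrete, here is a minimal sketch of the kind of directory skeleton BIDS prescribes for an iEEG session. The helper name and the exact file set are illustrative assumptions, not taken from any real dataset; a real BIDS dataset also needs JSON sidecar metadata for each recording.

```python
from pathlib import Path
import tempfile

def make_bids_ieeg_skeleton(root: Path, sub: str, task: str) -> Path:
    """Create a minimal BIDS-style directory skeleton for one iEEG recording.

    File names follow the BIDS key-value convention:
    sub-<label>_task-<label>_<suffix>.<extension>
    """
    ieeg_dir = root / f"sub-{sub}" / "ieeg"
    ieeg_dir.mkdir(parents=True, exist_ok=True)
    # Core files for an iEEG recording: data, channel table, electrode positions.
    for name in (
        f"sub-{sub}_task-{task}_ieeg.edf",
        f"sub-{sub}_task-{task}_channels.tsv",
        f"sub-{sub}_electrodes.tsv",
    ):
        (ieeg_dir / name).touch()
    # Every BIDS dataset carries a top-level description file.
    (root / "dataset_description.json").write_text(
        '{"Name": "demo", "BIDSVersion": "1.8.0"}'
    )
    return ieeg_dir

root = Path(tempfile.mkdtemp())
ieeg_dir = make_bids_ieeg_skeleton(root, sub="01", task="rest")
print(sorted(p.name for p in ieeg_dir.iterdir()))
```

Because every modality (fMRI, MEG, iEEG) follows the same key-value naming scheme, tools can discover and combine data across sites without per-lab conventions.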
ED: I think a lot of people are happy that you put in the time! It’s really such a wonderful resource for the community.
DH: And the community was the most amazing part of it, actually. So many people worked together to get that done. It was really amazing how much help we got from different people, all helping to think through how it would work, and to see people use it now has been really fantastic.
ED: You already had this question back in your PhD about how BOLD measurements can inform our understanding of ECoG, and you’re continuing to collect rich data across many different modalities. Is there a direction you’re excited about in bringing together these different data types?
DH: These are complex measurements, and the nice thing about having multi-modal measurements is that they are complementary. I’m really excited about the fact that we can move beyond word descriptions of these signals. So beyond saying something like “gamma is bottom-up,” we can start to make an image-derived model where we’re predicting the amplitude of different signals in computational models—which gives us different ways to quantify these signals beyond just comparing them or simple correlations. We can start to ask what they tell us about perception or about the underlying circuitry. So I think we’re now at this transition point when we’re starting to have not just a word model or a word description but really to have a more quantitative model for what these signals can tell us about the underlying physiology.
ED: You described your interest as computational cognitive neuroscience, and just from the name it seems like computational models are an important part of this work. So moving from these more verbal descriptions to computational models seems exciting and important.
DH: To make things a bit more concrete, let me give an example. Initially, there were descriptions such as, “Gamma oscillations are bottom up.” But what does that actually mean? Does it mean the low-level visual information that comes in from the eye? Does it mean whatever you actually [consciously] perceive? Rather than trying to argue at this level of description, we now have an image-derived model that can predict gamma amplitude from an image.
The nice thing about this model is that it’s directly testable. Importantly, the model can be wrong. But you can test it, and you can expand upon it. You don’t just have a word description in a scientific paper; now you have a model that’s available on GitHub. There are of course other examples from other groups, but what’s exciting about this direction generally is its quantitative focus. So now rather than “gamma is bottom-up,” we can say things like: “these models of gamma oscillations work well in this setting, but not that one” or “these are the quantities that we expect to change in disease models, and these are the changes in the supporting computations.” This is the kind of model that will help push the field forward, since it’s formalized and—unlike word descriptions—can be tested and built upon directly.
ED: I think that’s exactly it. Speaking of pushing the field forward, I wanted to ask: What directions are you excited about for the field right now? Obviously one of them would be developing more multi-modal, computational models. Would you add anything to that?
DH: I’m certainly really excited about testable computational models. The other thing I’m excited about is being able to directly manipulate brain signals to create closed-loop devices. It goes back to these brain-computer interfaces we started talking about. So being able to quantify and model the brain signals we measure is important, but in order to have closed-loop devices we also need to be able to modulate measured brain networks via electrical stimulation. That is, not just having a measurement, but being able also to directly modify the network activation patterns themselves.
One of the directions where we’re currently moving is to really understand how we can change network-level biomarkers with electrical stimulation. For example, in epilepsy, we know certain circuits are involved. Can you directly stimulate those circuits to normalize the measured signals; that is, to push them to look more or less typical? I think this direction of integrating quantitative measurements with electrical stimulation will provide new therapeutic possibilities for many different diseases.
ED: That is really exciting, and it also sounds really challenging! Do you have any advice for early career researchers who are just starting out in this area to help in overcoming some of the challenges along the way?
DH: We work with a very complex system, and we need to make tractable measurements. But making inferences is less than straightforward. If we have electrodes in 250 different places, how do we make that problem approachable? First, of course, I’d say to look at the signals. Don’t just apply analyses blindly; for example, you can apply coherence to many different signals but it doesn’t make sense in the absence of a clear oscillation. So always look at the data first and think of which analyses would apply.
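Dr. Hermes's advice to look at the signals before applying an analysis can be made concrete with a toy example: simulate two channels sharing a 10 Hz rhythm, first confirm there is actually a spectral peak, and only then read off the coherence at that frequency. This is a hypothetical sketch using SciPy, not code from his lab:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 500.0                    # sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)  # 20 s of data
osc = np.sin(2 * np.pi * 10 * t)  # shared 10 Hz oscillation
x = osc + rng.standard_normal(t.size)
y = osc + rng.standard_normal(t.size)

# Step 1: inspect the spectrum -- is there a clear peak to talk about?
f, pxx = signal.welch(x, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(pxx)]

# Step 2: only then does a coherence value at that frequency mean something.
f_c, cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
coh_at_peak = cxy[np.argmin(np.abs(f_c - peak_freq))]
print(f"spectral peak at {peak_freq:.1f} Hz, coherence there = {coh_at_peak:.2f}")
```

With broadband noise and no oscillatory peak, the same coherence call would still return numbers, which is exactly the trap the advice warns against.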
Secondly, I’d say it’s important to interact with your colleagues: other PhD students, postdocs, and PIs in the field. One of the things that was a real benefit when I started was the discussions and collaborations that I had with my colleagues. For example, we had noticed that there were often broadband increases across many different high frequencies rather than just narrow-band oscillations. We were confused by this, but we talked to a lot of people and learned that they had similar observations. That experience helped convince us that it wasn’t just an artifact due to our equipment and allowed us to start asking “What does that tell us about brain function?”.
And finally, the other thing I’d recommend is to share data and code when it’s possible. There will definitely be a benefit in the long term. When I started, we developed a very simple tool to localize ECoG electrodes and correct for brain shifts. I made the code available, and I’ve been amazed by how widely it’s been used since. Really, I hope this has helped people, and I think it’s a good example of the power of sharing code and data whenever you can. It’s one of those things that drives the field forward. It also helps to better situate your work: if you share code and data with a paper, other researchers can reproduce your results and see how robust the effects are rather than just speculating.
ED: I just have one last question: What can we look forward to in your keynote lecture at OHBM?
DH: Generally, I’ll discuss how describing mesoscale brain measurements with different models allows us to identify potential biomarkers for disease. To support this, I’ll explain how we have parameterized the power spectrum measured with intracranial EEG and related these different electrophysiological signals to different image properties to learn about the human visual system. I’ll also describe how different image modalities—such as fMRI and electrophysiological measurements—allow us to get different views on the system.
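One common way to parameterize a power spectrum is to estimate its aperiodic, 1/f-like background. The sketch below fits a line in log-log space to a synthetic spectrum; it is a toy illustration of the idea, not the parameterization method Dr. Hermes's group actually uses (which also separates oscillatory peaks from the background):

```python
import numpy as np

def aperiodic_fit(freqs, power):
    """Fit the aperiodic (1/f-like) component of a power spectrum as a line
    in log-log space: log10(P) ~ offset - exponent * log10(f).

    Toy sketch only: real spectral parameterization must also model
    narrow-band oscillatory peaks sitting on top of this background.
    """
    logf, logp = np.log10(freqs), np.log10(power)
    slope, offset = np.polyfit(logf, logp, 1)
    return offset, -slope  # report the slope as a positive 1/f exponent

# Synthetic spectrum with a known 1/f^2 background.
freqs = np.arange(1.0, 101.0)
power = 10.0 / freqs**2
offset, exponent = aperiodic_fit(freqs, power)
print(f"estimated 1/f exponent: {exponent:.2f}")
```

Reducing a spectrum to a few interpretable parameters like this is what makes it possible to compare electrophysiological signals across electrodes, subjects, and modalities.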
I hope that I can convince the OHBM community of this core idea: fMRI and iEEG integrate across neuronal populations in fundamentally different ways, and we should view BOLD and electrophysiological signals as providing complementary information about the system that drives them. Viewing measurements as functions of the underlying systems allows us to better understand which circuits may be related to atypical processing in neurological diseases. We're hoping that this work will help move the field in that direction.
ED: Thanks so much for taking the time, Dr. Hermes. I’m very much looking forward to your keynote in a few weeks!
In this episode of NeuroSalience, Peter chats with Alex about connectomics, or the study of the brain’s networks of connections. We discuss Alex’s work leveraging the Allen Brain Atlas (https://portal.brain-map.org/) and fMRI to better understand the genetic basis of the network structure. He points out clear differences between network hubs and other network components, with hubs having important roles in resting state dynamics and in neurological disorders. We also discuss the ongoing challenge of removing physiological noise from the fMRI signal in the context of his new and powerful methods for dissecting it out. Last, we touch on the new iteration of the OHBM virtual platform that Alex was instrumental in developing.
Alex Fornito, Ph.D. is a leader, educator, and innovator in the field of brain imaging. A major emphasis of his work concerns understanding foundational principles of brain organization and their genetic basis; characterizing brain connectivity disturbances in psychiatric disorders such as schizophrenia; and mapping how brain networks dynamically reconfigure in response to changing task demands.
He is currently a Sylvia and Charles Viertel Foundation Senior Research Fellow, Professor of Psychological Sciences, and Head of the Brain Mapping and Modelling Research Program at the Turner Institute for Brain and Mental Health. He also leads the Neural Systems and Behavior Lab at Monash University.
He completed his Clinical Masters (Neuropsychology) and PhD in 2007 in the Departments of Psychiatry and Psychology at The University of Melbourne before undertaking Post-Doctoral training in the Department of Psychiatry at the University of Cambridge, UK, under the auspices of an NHMRC Training Fellowship.
He has co-authored an outstanding book (with Andrew Zalesky and Ed Bullmore) on network neuroscience, titled “Fundamentals of Brain Network Analysis.” He has collaborators all over Europe, the US, and Canada, and is an active member of OHBM, where he has been instrumental in establishing the virtual platform for the meeting.
The Neurosalience production team consists of Anastasia Brovkin, Katie Moran, Nils Muhlert, Kevin Sitek, and Rachael Stickland.
In this episode, Peter Bandettini meets with Tom Nichols, Remi Gau and Jack Van Horn to discuss the motivation for a set of best reporting and analysis practices. This provides insight into how the COBIDAS (Committee on Best Practice in Data Analysis and Sharing) in OHBM started. We talk about the reproducibility crisis in fMRI and how it is being addressed. We discuss how the culture of fMRI has changed from isolated scientists doing N=20 studies to a connected web of researchers collecting and contributing to fMRI databases of high quality data for the purpose of revealing ever more subtle information. Through this work, the field aims to achieve robust biomarkers that are clinically useful in diagnosing and treating diseases. We also discuss many of the issues and decisions made in analysis, and how this may contribute to irreproducible results. Last, we consider the ongoing and future global efforts to increase data transparency to make fMRI a more effective tool.
Remi Gau, Ph.D. is currently a postdoc at the Catholic University of Louvain in Belgium. He received his Ph.D. in neuroscience in 2010 from the University of Pierre and Marie Curie in Paris, and has studied fMRI methodology at the Max Planck Institute in Tuebingen and the University of Birmingham, UK. Over the years he has focused on the infrastructure of imaging data collection and sharing, as well as more widely on the culture of neuroimaging; most recently, in 2019, he created the COBIDAS (Committee on Best Practice in Data Analysis and Sharing) checklist as well as eCOBIDAS. He also does neuroscience research, focusing on laminar fMRI to explore how the brain integrates and uses information.
Tom Nichols, Ph.D. is Professor of Neuroimaging Statistics and a Wellcome Trust Senior Research Fellow in Basic Biomedical Science. He is a statistician with a sole focus on modelling and inference methods for brain imaging research. He has a unique background, with both industrial and academic experience, and diverse training spanning computer science, cognitive neuroscience, and statistics. He received his Ph.D. in Statistics from Carnegie Mellon University in 2001. After serving on the faculty of the University of Michigan's Department of Biostatistics (2000-2006), he became Director of Modelling and Genetics at GlaxoSmithKline's Clinical Imaging Centre, London. He returned to academia in 2009, moving to the University of Warwick in a joint position between the Department of Statistics and the Warwick Manufacturing Group. Finally, in 2017, he joined the Big Data Institute at Oxford. The focus of Dr. Nichols' work is developing modelling and inference methods for brain image data. His current research involves meta-analysis of neuroimaging studies and informatics tools to make data sharing easy and pervasive.
Jack Van Horn, Ph.D. received his Ph.D. in Psychology from the University of London, and then received his Master of Science and Engineering from the University of Maryland. He is currently a professor in the Department of Psychology at the University of Virginia. He was a staff fellow at the NIH until 2000. He then moved to Dartmouth College, where, until 2006, he was instrumental in starting their databasing and data-sharing efforts. In 2006 he moved to UCLA and contributed in a large way to their data repository efforts. In 2014 he moved to USC, and finally, in 2020, to the University of Virginia. He has been an active member of OHBM and a proponent of data sharing since the very early days.
by Roselyne Chauvin & Valentina Borghesani
We’ve freshened up!
After two years of existence as an official OHBM Special Interest Group (SIG), the BrainArt SIG has now proudly released its website, created by Anastasia Brovkin and Désirée Lussier, following brainstorming by all SIG officials. You can browse through all previous competitions and exhibits, as well as submit your pieces for the 2021 edition!
You can find out more about our SIG by checking out previous posts on how we came of age and how we consolidated our role within OHBM, but also about our prehistory and history. And we highly recommend having a listen to Neurosalience episode #8, where we had a blast chatting with Peter Bandettini.
We’re preparing a great BrainArt Exhibit for you!
The BrainArt SIG is busy preparing the annual BrainArt exhibit for OHBM 2021 meeting attendees. The SIG has confirmed artists from all over the world and... well, we don’t want to spoil the surprise, but you are in for a treat! We will offer a broad representation of artists, ranging from full-time scientists to full-time artists, and all creative souls in between. Our 2021 theme is “Big Data & Me”. We wish to celebrate the achievements of Big Data neuroimaging projects, while acknowledging the suffering of individuals affected by brain disorders: honoring the trees as well as the forest.
First, we will dive into large datasets while keeping in focus inclusivity, diversity, and the representation of populations.
Second, from big N to small N: what about the personal suffering of individuals? After all, the field started with case studies. We will explore the dimensions of brain illnesses such as schizophrenia, depression, age-related neurodegenerative diseases, epilepsy, multiple sclerosis, and autism through a single-subject looking glass.
Finally, we will host pioneering ideas linking different levels of observation, as interpreted by artists who are encouraged to explore the reciprocal interactions between Big Data research and personalized treatment, i.e., breaking down the barrier between research findings and treatment.
We’re ready for your artworks!
So don’t wait another minute! Please go explore our website and the archives and get inspired. It’s time to create and participate in the BrainArt competition 2021 - now open to accept your masterpieces. On the website you will find a form to submit your art for one of the following five award categories:
This competition highlights an ongoing aim of the BrainArt SIG, which is to foster the dialog between artists and scientists, blurring the line between Arts and Science. We believe that the exchange of ideas and tools between these two disciplines encourages the development of novel approaches to scientific data visualization, and promotes the exploration of different perspectives on human brain structures and functions. Researchers, scientists, and everyone in between: you are all encouraged to submit your original work(s)! There are no limits to the number of submissions per participant, and both team and single-person entries are welcomed.
The Submission Deadline is 11:59 PM CDT, Saturday, June 6, 2021 and the award notification will take place during the Annual Meeting. For additional details, please check out our website.
This year, following the success of our campaign to provide a logo for Aperture, there is also a very special 6th category added to the BrainArt competition:
We hope that this teaser and the exploration of the new OHBM BrainArt SIG website will encourage you to participate or enjoy this year's BrainArt Competition & Exhibit.
With contributions from the BrainArt SIG:
In this episode Peter Bandettini meets Carolina Makowski, Michele Veldsman and Alex Fornito to discuss the OHBM Student–Postdoc special interest group (SIG), with particular emphasis on their mentoring scheme and meeting-related workshops. Carolina is a current member of the SIG, Michele previously served as its Chair, and Alex has been an active mentor to several junior OHBM members over the years through this group. They discuss the mentorship program, the workshops at the meeting, what good mentorship is, and why it’s needed more than ever, as the stresses and demands on students and postdocs increase within an ever more demanding professional climate.
Carolina Makowski, Ph.D. is the Career Development and Mentorship Director–Elect of the Student–Postdoc Special Interest Group. Dr. Makowski completed her PhD in neuroscience at McGill University and is currently a postdoctoral fellow at the University of California San Diego with Dr. Anders Dale and Dr. Chi-Hua Chen, with funding from the Canadian Institutes of Health Research, Fonds de Recherche du Quebec - Santé, and the Kavli Institute for Brain and Mind.
Michele Veldsman, Ph.D. is a previous Chair of the Student–Postdoc Special Interest Group and is currently a Postdoctoral Research Scientist in Cognitive Neurology at the University of Oxford.
Alex Fornito, Ph.D. is the Sylvia and Charles Viertel Foundation Senior Research Fellow, Professor of Psychological Sciences, and Head of the Brain Mapping and Modelling Research Program at the Turner Institute for Brain and Mental Health. He leads the Neural Systems and Behavior Lab and has actively participated in the Student–Postdoc SIG.
“It is precisely our plasticity, our long childhood, that prevents a slavish adherence to genetically programmed behavior in human beings more than in any other species.”
― Carl Sagan, Dragons of Eden: Speculations on the Evolution of Human Intelligence
I first learned about Ted Satterthwaite’s work when I started teaching about resting state fMRI and motion artifacts. His research showed how motion affects resting state connectivity measures, and I was thrilled that his group also compared the variety of effects with different preprocessing pipelines. In Mexico, every year we host a Neuroimaging Meeting where we invite neuroimaging researchers to visit the city of Guanajuato, [binge] eat Mexican food and talk to students, and so we were delighted to invite Ted to our 2019 meeting.
From our time together there, I got to know more about Ted and his research program. He is currently an Associate Professor in the Department of Psychiatry at the University of Pennsylvania Perelman School of Medicine, and the Director of the Lifespan Informatics & Neuroimaging Center. As a psychiatrist, he is highly interested in human development and building huge development datasets.
When I was asked to do this interview I knew it was going to be difficult to focus on a topic, but we managed to come up with a coherent chat, which I hope we can soon repeat with some beer and mezcal.
Eduardo A. Garza-Villarreal (EG): How did a psychiatrist end up writing these influential methodological papers such as the effect of movement on BOLD signal? And how did you go from psychiatry to methods? What was your career path?
Ted Satterthwaite (TS): I am a psychiatrist, and the reason I got into research was to try to develop tools that could be useful for the diagnosis and treatment of mental illness. That being said, I quickly learned if we ignore the methods, we probably can’t make progress on the ultimate clinical problems that we're interested in. I've primarily been working on large scale datasets, because to me, the only question in psychiatry is clinical heterogeneity. For example: someone comes to see me in the clinic and I diagnose them with depression. But clearly, depression is not one “thing” – depression is almost certainly many different things that we call one thing. It is pretty clear that to parse heterogeneity, we need large studies, because your ability to parse heterogeneity will be determined by both the noise in your signal and the size of your sample. Since we have noisy signals, we probably need very large samples. However, when having very large samples, although you get a lot of statistical power, you also become incredibly sensitive to confounding signals. There's a history in psychiatric imaging of being worried about medication confounds as well as other types of confounds. When we started doing developmental studies, that's right around the time my twins were born. Now, you don't need to scan thousands of kids to recognize that they don't sit still, you just need to come to dinner at my house. And it brought up this obvious question of “is movement going to impact our measures?”, and we started thinking about it because we could see the artifact in the time series. We were just very surprised. We assumed that this had already been solved. However, although there were papers from Karl Friston and others from a decade ago on motion in task-fMRI, there was nothing for functional connectivity. 
At that point it was just a practical question because we wanted to study brain development, we wanted to study psychopathology, and we know both age and illness are associated with in-scanner motion.
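The motion measure that grew out of this line of work is often summarized as framewise displacement. Below is a minimal sketch of one common Power-style definition; the 50 mm head radius used to convert rotations into millimetres is a conventional assumption, not something taken from the interview, and this is not the exact implementation from any particular paper:

```python
import numpy as np

def framewise_displacement(params, head_radius_mm=50.0):
    """Framewise displacement: the sum of absolute frame-to-frame changes
    in the six rigid-body realignment parameters, with the three rotations
    (radians) converted to millimetres of arc on a sphere of given radius.

    `params` is an (n_volumes, 6) array: 3 translations (mm), 3 rotations (rad).
    """
    params = np.asarray(params, dtype=float).copy()
    params[:, 3:] *= head_radius_mm           # radians -> mm of arc length
    deltas = np.abs(np.diff(params, axis=0))  # change between adjacent frames
    return deltas.sum(axis=1)                 # one FD value per transition

# Two-step example: a 0.1 mm translation, then a 0.002 rad rotation
# (which sweeps 0.1 mm of arc at a 50 mm radius).
motion = np.array([
    [0.0, 0.0, 0.0, 0.0,   0.0, 0.0],
    [0.1, 0.0, 0.0, 0.0,   0.0, 0.0],
    [0.1, 0.0, 0.0, 0.002, 0.0, 0.0],
])
print(framewise_displacement(motion))
```

A scalar like this per volume is what lets studies flag or censor high-motion frames, which is why FD became such a workhorse in developmental functional-connectivity work.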
EG: Why are you so interested in development? And why do you think it's an important topic of research in psychiatry?
TS: The dominant paradigm in psychiatry now is that, when you see someone with their first onset symptoms like a severe mood disorder or psychosis, it's not like something went wrong in their brain right there that caused them to have the symptoms. It’s not like a switch that was flipped. Rather, there is accruing evidence for many years now suggesting that most mental illnesses are neurodevelopmental in origin. So, the goal is to understand how the brain develops normally, and then understand also how abnormal patterns of brain development are associated with different sorts of psychopathology. If you think about other fields of medicine, the way they've made clinical progress is by getting there earlier and unpacking heterogeneity. Think about cancer. We used to diagnose cancer only when we found huge tumors that had spread widely, and they were diagnosed on physical examination, like palpating the abdomen and saying, “you have a tumor”. Advances in oncology and in other fields, they have been predicated on both getting there earlier and unpacking heterogeneity; saying it's not just a tumor, but that it is a malignancy from this tissue, with this receptor profile and that genetics, and as a result it's going to be sensitive to this treatment. I think we're still at that “it's a tumor” phase in psychiatry. My hope is that by better understanding patterns of brain development and heterogeneity within the disorders we treat, we can get there earlier and ultimately achieve better outcomes for patients.
EG: As you say, psychiatric disorders are very heterogeneous, and there's a lot of overlap especially in symptoms. What do you think about the current disorder classification, about the Research Domain Criteria (RDoC), and about this overlap between disorders?
TS: There are two sides of the same coin: heterogeneity within disorders, and non-specificity across disorders. The RDoC framework is trying to map symptom profiles to brain systems, and I think that's a totally laudable approach. The one challenge of all this, though, is we don't have a ground truth. We don't have anything like postmortem pathology in neurodegenerative diseases, and so the challenge right now, for example in machine learning techniques, is that we don't know what the labels should be. You can have the best engineers and the best pattern recognition algorithms, but a lot of advances in machine learning have been based on supervised learning, and right now, we just don't have the best labels. Without good labels, it's kind of garbage in, garbage out. What I see is that the biggest challenge is using biological data like images to help us understand what those labels should be. I think it's a challenge and we're all still grappling with it.
EG: Do you think machine learning will have some influence on the future of psychiatry, or you think it's one more tool in the bag?
TS: I think both. Machine learning is going to have a big influence, especially as we have larger and larger training datasets, and the ability to generalize things across samples. However, part of the problem right now is that we don't have a lot of datasets where we can link multivariate patterns from machine learning tools to outcomes of interest, which I think is an essential step. Datasets like the Adolescent Brain Cognitive Development (ABCD) Study that follow 10,000 kids over 10 years are actually really important starting points because longitudinal prognosis is a very important clinical outcome to be able to predict. However, there is still a lot of room to make advances in terms of incorporating these methods using health system data where we have medical records on medications and hospitalizations, but we have a long way to go. Also, I think machine learning won't solve everything. Dani Bassett, one of my closest collaborators, makes a very cogent case that machine learning, while great, alone will not do it; you need a combination of good machine learning and sound theory. I agree. We should not forget that we know a lot about the brain from decades of basic neuroscience – we need to incorporate that knowledge as priors to inform these tools and help interpret the results.
EG: To you, what is your lab’s most interesting project and why?
TS: Right now, the one I am most excited about is a super ambitious project that I lead together with Michael Milham at the Child Mind Institute. We're trying to get a lot of the larger studies of brain development – around 11,000 samples in total – and make sure they are pristinely curated, processed, and QA’d ahead of a public data release on the International Neuroimaging Data-Sharing Initiative (INDI). The project is called the RBC – “Reproducible Brain Charts” – project. One thing that I've learned is that a lot of the sexy stuff in science is actually easier than the non-sexy stuff like data organization and curation, the really low-level things that are necessary for reproducible neuroscience. These things are often really time consuming. What makes me excited about this project with Mike and his team is that if we can do those things well, the dataset will be much more useful to everyone else, and everyone can just run faster by not having to reinvent the wheel. I think that's super exciting. But it's been a huge challenge, because it's very heterogeneous data from different studies, which meant we've had to build new tools to handle it.
EG: Can you tell us a bit about the tools you're developing?
TS: Sure – there are a couple, at different stages. Sydney Covitz and Matt Cieslak have a really cool tool, being presented as an OHBM poster at the meeting, called “CuBIDS,” for “Curation of BIDS.” It is a tool for curating Brain Imaging Data Structure (BIDS) formatted data at scale, designed for very large heterogeneous datasets – we needed it to be able to handle the RBC data. I wish I had a time machine and we had had it a couple of years ago, because it makes life so much easier. Once you get the data curated into BIDS – and this is all building off Russ Poldrack, Chris Gorgolewski and Oscar Esteban’s work with BIDS and fMRIPrep – we focus on running everything in containerized pipelines. For example, xcpEngine is our containerized post-processing pipeline, developed in the lab by Azeez Adebimpe (now) and Rastko Ciric (originally); it consumes fMRIPrep output to produce derived measures for studies of functional connectivity. Now, a lot of our efforts are about moving beyond fMRI. For example, I am super excited about Matt Cieslak’s work building QSIPrep, a fully containerized and highly generalizable BIDS app for diffusion images. Lastly, Azeez is just finishing up ASLPrep for ASL MRI data. In the end, the goal of all this is to make everything super reproducible and open. If you talk to other neuroscientists, one of the biggest reservations they have about imaging is that it's so complicated, and there's so much data processing... that they just don't believe it. Over time, I’ve come to really agree with people who have been doing open science for many more years than me – I think the only answer that will get people to believe us is to just show them everything; sunlight really is the best disinfectant.
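The curation problem Ted describes can be illustrated with a toy script: scan a small BIDS tree (generated on the fly here) and group scans by their acquisition parameters. This captures the core idea behind curation tools like CuBIDS, but it is a hypothetical sketch, not their actual implementation:

```python
import json
import tempfile
from collections import defaultdict
from pathlib import Path

def group_by_acquisition(bids_root):
    """Group BOLD sidecar JSON files by key acquisition parameters.

    In a large heterogeneous dataset, files that nominally share a name may
    differ in parameters like TR; finding those hidden groups is the first
    step of curation.
    """
    groups = defaultdict(list)
    for sidecar in Path(bids_root).rglob("*_bold.json"):
        meta = json.loads(sidecar.read_text())
        key = (meta.get("RepetitionTime"), meta.get("EchoTime"))
        groups[key].append(sidecar.name)
    return dict(groups)

# Build a tiny fake dataset: three subjects, one scanned with a different TR.
root = Path(tempfile.mkdtemp())
for sub, tr in [("01", 2.0), ("02", 2.0), ("03", 0.8)]:
    d = root / f"sub-{sub}" / "func"
    d.mkdir(parents=True)
    (d / f"sub-{sub}_task-rest_bold.json").write_text(
        json.dumps({"RepetitionTime": tr, "EchoTime": 0.03}))

groups = group_by_acquisition(root)
print({k: sorted(v) for k, v in groups.items()})
```

Once the parameter groups are explicit, a pipeline can process each group consistently instead of silently mixing scans acquired under different protocols.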
EG: In terms of your research achievements, what are you most proud of?
TS: You know, it's funny. What makes me most excited is not the individual projects, but the people I work with. I've just been super lucky to have amazing students who do really incredible work. This year’s keynote leverages work from many awesome people in the lab. The title of the talk is from a review from Valerie Sydnor, one of my grad students, who just dug into complex literature and produced a work of incredible scholarship. Similarly, some of the developmental data I am most excited about is from the hard work of another grad student, Adam Pines. What I'm most proud of is the trainees and how much heart they put into their work, and how much they teach me. That's the fun part for me, the people, not any individual finding.
EG: To me one key discovery was about motion artifacts and how it affects the signal. Even right now there is a huge debate on things like the global signal. I don’t think we have it figured out.
TS: Yes, it remains contentious. I think the study of motion artifacts, however, is a great example of how science should work. It was awesome – we had that paper in March of 2012, but in January of 2012, Koene Van Dijk and Randy Buckner published almost the same finding in an independent dataset. In February 2012 you got Jonathan Power and Steve Petersen with the same finding and also a method for handling it. So, three different labs, working independently with different datasets – all coming to the same conclusion – producing three papers in NeuroImage at almost the same time. I think that's a great example of how science can work to provide convergent evidence.
EG: What do you think psychiatry should go for in the future? Where do you think it's going? You mentioned looking for an actual ground truth; do you think we're going to get it at some point?
TS: People ask that all the time: “when is this stuff going to be useful?” And I think the first answer is: it's not useful now, to be honest. But that doesn't mean it's not a super important problem. If we do make progress, this sort of work could be incredibly impactful, because psychiatric disorders are among the most common afflictions that humans get. But in the end, to really show something, we're going to need clinical trials and outcomes that matter. Some people have made a start on this, like Nicholas Koutsouleris and the PRONIA consortium; they are doing some really cool work. In the end, we will not convince practicing physicians that brain imaging in psychiatry matters until we show real results in clinical trials. And that's a challenge, perhaps a 10- or 20-year challenge. But I think we'll get there.
EG: Thank you Ted for taking the time to sit down for this interview. Looking forward to your keynote at OHBM 2021.
In this conversation, Peter Bandettini meets members of the BrainArt SIG to discuss its history from the NeuroBureau to its current formal SIG status. They discuss what brain art (or more generally science art) is, consider what the best features of brain art are, and how, essentially, any scientist trying to convey the essence of their findings can be considered an artist. You’ll discover the planned competitions and directions of the BrainArt SIG. The discussion also considers why diversity in this SIG, the field of Brain Mapping, and science in general is so important.
In the episode you’ll hear about the ‘Dream Catchers’ exhibit from OHBM2017 in Vancouver, and how those with dementia can discover new artistic creativity. You can also see some highlights from the OHBM 2020 exhibits below:
By Kevin Sitek
OHBM’s Annual Meeting is virtual again in 2021, following in the footsteps of 2020’s conference—but don’t expect it to look the same.
2020 was a year marked by challenges. For the Organization for Human Brain Mapping, that included quickly transitioning from the final stages of planning an in-person conference to putting together an entirely new format for its virtual meeting. In many ways, the 2020 Annual Meeting was a huge success. Understandably, though—given the short timeframe for creating and executing a brand-new conference format—not every part of the conference went off without a hitch.
For 2021, the OHBM Council sought to build the Annual Meeting on three core pillars of the OHBM community: Openness, Interactivity, and Accessibility. After months of deliberation by a dedicated task force, the OHBM 2021 Annual Meeting will run on a fully customized, open source platform designed and engineered by the Sparkle team. To help make this decision, Council created the OHBM Technology Task Force (TTF) in September of 2020. In an effort to ensure representation across the entire OHBM community, Council invited over twenty OHBM members to join the TTF, including representatives from the Open Science, Student–Postdoc, Sustainability & Environmental Action, and Brain Art Special Interest Groups (SIGs), multiple OHBM committees, and other diverse voices from OHBM’s membership around the globe.
“The goal of this group was to identify areas for improvement for the 2021 Annual Meeting, as well as to identify a virtual event platform which would meet all stakeholders’ needs for this year’s meeting,” says Mike Mullaly, a member of the OHBM Executive Office. “Over the course of several months, this group vetted various platforms via virtual demos.” To learn more about this process, we turned to TTF members themselves to hear directly about their experiences and hopes in selecting this year’s OHBM virtual platform.
The TTF looked not only at feedback regarding last year’s virtual OHBM meeting but also at other conferences. “We discussed which aspects of these meetings worked or did not work and which features we would like to incorporate into the OHBM platform,” says TTF Chair Professor Alex Fornito. “We also evaluated platforms used by other meetings that were considered to be effective by different TTF members. We then shortlisted different platforms and vendors, encompassing a broad range of open source and commercial options and met with reps for several of them.”
And TTF members came with strong, detailed expectations for the platform vendors. According to TTF member and Student–Postdoc SIG Social Chair Dr. Elvisha Dhamala, the ideal platform “enables real-time conversations and reactions to presentations. It has features that facilitate spontaneous and random interactions and conversations. It has an intuitive user interface that is easily manageable and navigable.”
Finding a vendor that could do all of these things (and more!) turned out to be easier said than done. Alex explains, “We quickly learned that there was no single platform that could do everything we wanted to the level that we desired. We had to make trade-offs.” For instance, some of the platforms that were considered had great search functionality and discoverability but couldn’t incorporate social interactions or work seamlessly across time zones. “In the end, we had to focus on identifying a platform that could do a good job of our priority features, while also having the potential to further develop other features in coming years.” Given the uncertainty of the past 18 months, flexibility will be an important feature moving forward.
Ultimately, the TTF settled on the Sparkle platform. Alex acknowledges that, “In many ways, this is a risky choice; Sparkle was originally developed for online concerts and I believe that we are the first scientific conference to take place on the platform.” Yet, the Sparkle team ultimately won over the TTF.
"Having reviewed in detail many different dedicated conference platforms, the TTF was nearly unanimous in their support for Sparkle.” Mike agrees. “We were looking for a vendor that was offering something completely customizable, open-source, and would improve social interaction. Sparkle was by far and away the front runner in this regard.”
There were a few key qualities that the TTF found attractive in the Sparkle platform. Most crucially, Sparkle demonstrated that virtual conferences could still be highly interactive, serendipitous, and fun. “While OHBM 2020 successfully presented the scientific content for the meeting, it lacked the features needed to socially interact with one another,” explains Elvisha, “whether that’s through mini one-on-one conversations about the ongoing presentation or in spontaneously formed small groups during a happy hour.” This year, the Sparkle platform delivers on that wishlist: real-time conversations and reactions to presentations, features that facilitate spontaneous interactions, and an intuitive, easily navigable user interface.
An early prototype of the platform's main map, showing various conference locations, sponsor visibility, and chat functionality. Specific stylistic elements and functions will likely be updated before the conference.
Secondly, the Sparkle team is fully dedicated to open source development: OHBM contracted Sparkle to build the conference platform, but the platform’s source code itself is open source. This means that OHBM can use the conference infrastructure beyond OHBM2021 and continue building in new features and technology. Indeed, “the Sparkle team was very willing to work with us to extend the platform and develop many of the essential features we required,” says Alex. “We did not encounter this openness with many other platforms.”
Finally, Sparkle understood our community’s need for accessibility and inclusion, working with the TTF to incorporate automatic text captioning and intuitive design elements. At last year’s conference, TTF member Professor Tilak Ratnanather used his own speech-to-text software during talks and poster presentations, but it was an imperfect solution to a very real problem. “Not having to think about this will make me more relaxed and focus on science,” he says.
A page dedicated to OHBM SIGs is just off the main map. This is an early prototype—specific features unique to each SIG will be added for the conference in June.
However, as the meeting approaches, there are still a few high-priority items that OHBM and the Sparkle team are working on, including global accessibility. Testing is currently under way around the globe, and Zoom has been integrated wherever possible. Individuals concerned about their ability to connect to the Sparkle platform may try connecting via VPN; a list of Zoom links for the meeting will also be available from the Executive Office upon request.
In addition, while real-time speech-to-text technology is advancing rapidly (for instance, 2020’s star app, Zoom, recently made live captioning an option for institutional accounts, and Google Chrome can now do live captioning automatically), in practice there are still significant limitations, especially for speakers with accents that the software wasn’t trained on, as well as for fast-paced, jargon-filled presentations. (We’re sure you can remember one or two of these.)
So while the conference platform is still being optimized for the meeting, TTF representatives from across the OHBM community are helping guide the platform’s development. And, according to Mike, OHBM "will be sending out a survey during the meeting (as we always do) looking for areas of improvement and member feedback" to improve the experience for the next meeting—whatever state that will be in.
Overall, excitement about the new platform is palpable across the board. Elvisha sums it up best: “The lack of space constraints and the endless features that Sparkle has really enables us to facilitate multiple activities simultaneously so we can cater to all interests and host a more inclusive social experience. I’m really excited about Sparkle and I can’t wait for the OHBM community to experience all that’s planned for the 2021 conference!”
OHBM Neurosalience episode 7: The Organization (Society) for Human Brain Mapping today. Some history, challenges and virtuality.
In this podcast we discuss some of the history and evolution of OHBM. We also talk about some of the challenges it has faced in recent years, with world events forcing a last-minute change of venue three times. We talk about the improvements in this year’s virtual meeting, as well as the growing engagement of younger members of OHBM through its chapters and SIGs.
Aina Puce, Ph.D. Aina has been active in OHBM since its beginnings. She is Chair of the OHBM Council (otherwise known as President of OHBM), Director of the Indiana University Research Imaging Facility, and the Eleanor Cox Riggs Professor of Psychological and Brain Sciences. After a Ph.D. from the University of Melbourne in 1990, she was a postdoc and then an associate research scientist at Yale. She moved back to Australia in 1998, to Swinburne University, then returned to the US in 2002, to West Virginia University. Finally, she moved to Indiana University in 2008. She is an expert in visual neuroscience and EEG as well as fMRI.
Daniel Margulies, Ph.D. Daniel started in the US, receiving his BS from NYU in 2005, then earned his MS at the European Graduate School in Saas-Fee, Switzerland, and his Ph.D. at Humboldt University in Berlin in 2008. From 2009 to 2011 he was a postdoc in the Department of Neurology at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, then group leader of the Neuroanatomy and Connectivity group there from 2011 to 2017. He received the Otto Hahn Award in 2010 and the 2018 OHBM Young Investigator (now called Early Career) award. He has been a pioneer in fMRI connectivity methods and has recently produced novel and penetrating work elucidating the organizational gradient that spans from sensorimotor to transmodal areas.
The Neurosalience production team consists of Anastasia Brovkin, Katie Moran, Nils Muhlert, Kevin Sitek, and Rachael Stickland.
This week’s podcast is centered on physiologic fMRI. Generally, when people think of fMRI, they think of it as a way to map neuronal function; however, there is also a wealth of information about neurovascular physiology in the signal. Many researchers who use fMRI may not realize all of the potentially untapped information—and confounds!—in the fMRI time series. Dr Jean Chen and Dr Molly Bright each run research groups that focus on this information in complementary ways. Both use physiologic manipulations and an array of acquisition methods to probe and characterize details of the hemodynamic response, though their two research programs focus on different aspects of the hemodynamic response function. In this podcast, they highlight the importance of physiologic fMRI for the field. They also consider the challenges facing women in male-dominated research fields and how the lives of women scientists might be improved.
Jean Chen PhD. Dr. Chen received her MSc (2004) in Electrical Engineering from the University of Calgary, and her PhD (2009) in Biomedical Engineering from McGill University. She completed her postdoctoral work on multimodal MRI of brain aging at the Martinos Center for Biomedical Imaging and Harvard Medical School (2011), then joined The University of Toronto Medical Biophysics Program as faculty. She is a Senior Scientist at the Rotman Research Institute and Tier II Canada Research Chair in Neuroimaging of Aging.
Molly Bright PhD. Following a B.S. in physics from MIT in 2006, Molly received her D.Phil. from the University of Oxford in 2011 as part of a collaboration with the National Institutes of Health, working with Peter Jezzard at the Oxford Centre for Functional MRI of the Brain (FMRIB) and Jeff Duyn in the Advanced MRI group of NINDS. She completed postdoctoral training at the Cardiff University Brain Research Imaging Centre (CUBRIC). She then moved to Nottingham as an independent Anne McLaren Fellow, to develop ultra-high-field MR imaging methods for studying cerebral physiology in neurological diseases at the Sir Peter Mansfield Imaging Centre.
The Neurosalience production team consists of Rachael Stickland, Kevin Sitek, Katie Moran and Anastasia Brovkin
In this week’s podcast, you’ll hear about clinical applications of resting-state fMRI from Dr Michael Fox. You’ll hear some of the highlights of his research, from his beginnings at Wash U, including his early work on resting-state fMRI and the issue of global signal regression, to his more recent pioneering work on lesion network mapping. Through this, you’ll find out how lesions can impact behavior through their effects on functional networks. This approach is a promising inroad for fMRI towards clinical utility.
Michael D. Fox, MD, PhD, is the founding Director of the Center for Brain Circuit Therapeutics at Brigham and Women’s Hospital and Associate Professor of Neurology at Harvard Medical School. He is also the inaugural Raymond D. Adams Distinguished Chair of Neurology and the Kaye Family Research Director of Brain Stimulation. He completed a degree in Electrical Engineering at Ohio State University, an MD and PhD at Washington University in St. Louis, and Neurology Residency and Movement Disorders Fellowship at Mass General Brigham. Clinically, he specializes in the use of invasive and noninvasive brain stimulation for the treatment of neurological and psychiatric symptoms. Dr. Fox’s research focuses on developing new and improved treatments for brain disease by understanding brain circuits and the effects of neuromodulation.
The Neurosalience production team consists of Rachael Stickland, Kevin Sitek, Katie Moran and Anastasia Brovkin
By Peter Bandettini & the OHBM Neurosalience production team
In this week’s podcast, Dr Catie Chang walks us through her thought process regarding pulling information out of the fMRI time series. After discussing some of the ongoing issues in fMRI, such as whether or not to use global signal regression to remove noise, she leads us into a commonly overlooked effect in fMRI—that of changes in arousal and vigilance. In particular, these have measurable effects on the resting-state fMRI signal. She discusses the perspective that one person’s artifact may be another’s useful signal, depending on the goal of the study.
Catie Chang, Ph.D. received her B.S. in Electrical Engineering and Computer Science from MIT, and her M.S. and Ph.D. in Electrical Engineering from Stanford University. While in graduate school, she opened up a new area of fMRI research by publishing a seminal paper using time-frequency analysis of resting-state fMRI, showing that functional connectivity is quite dynamic. Since then, she has been exploring the effects of basic physiological processes, such as cardiac function and respiration, on the fMRI signal, and has recently been uncovering unique information about the influence that changes in vigilance have on the time-series signal.
The Neurosalience production team consists of Rachael Stickland, Kevin Sitek, Katie Moran and Anastasia Brovkin
By Peter Bandettini & the OHBM Neurosalience production team
In this week's episode, Peter talks directly to MRI scanner vendors. Together, they try to reconcile the importance of fMRI in research contexts with the market pressures of developing clinical applications. As fMRI has virtually no clinical market, does it really influence vendor decisions on pulse sequences and hardware? Could more be done, aside from making fMRI more clinically relevant? In this discussion, you’ll hear some fascinating history from the early days of echo planar imaging and high-speed imaging, as well as insight into the processes by which products are prioritized. You’ll also hear about a possible future in which fMRI begins to become more clinically useful.
R. Scott Hinks, Ph.D. is the retired Chief Scientist of GE Healthcare's MR division. He received his Ph.D. from the University of Toronto in 1985, where he began his studies of MR physics and imaging. For over 34 years, Scott pursued a career in MR research in both industry and academia, specializing in imaging and system physics. He was the principal developer of FSE and led the technical development of EPI for both fMRI and DWI. His work has resulted in numerous publications and over 34 patents. In his most recent role as Chief Scientist for GE Healthcare’s MR division, he was actively engaged in every aspect of MR imaging and worked in close collaboration with leading academic researchers worldwide.
Franz Schmitt, Ph.D. is the retired Chief Scientist of Siemens’ MR division. He received his Ph.D. from the University of Munich and worked for Siemens from 1983, overseeing the development of EPI, gradient and RF coils, 3T and 7T systems, and pTx imaging. He worked on site at the Martinos Center for a few years in the early 2000s and has been actively engaged in academic research worldwide.
Ravi Menon, Ph.D. is a Professor of Medical Biophysics, Medical Imaging and Psychiatry at Western University, where he is also a member of the Graduate Program in Neuroscience and the Graduate Program in Biomedical Engineering and Scientific Director of Western’s Centre for Functional and Metabolic Mapping (CFMM), Canada’s only ultra-high field MRI facility. He received his Ph.D. in Medicine from the University of Alberta and performed his post doc in the laboratory of Kamil Ugurbil at the University of Minnesota where he helped to pioneer fMRI.
The Neurosalience production team consists of Rachael Stickland, Kevin Sitek, Katie Moran and Anastasia Brovkin
By Charlotte Rae, on behalf of the SEA-SIG
The Sustainability and Environment Action (SEA) SIG has formed three new Working Groups: to tackle the environmental impact of the annual meeting, to assess the environmental implications of neuroimaging research activities, and to educate our community about them.
What are the new Working Groups?
In December 2020, we held two open meetings to talk about the priority actions for our new SIG with the OHBM community. We had colleagues attend from across the world, who shared fantastic ideas on how we should make OHBM activities more sustainable.
From these meetings, there was a pretty clear consensus that we needed to tackle three areas: the Annual Meeting, neuroimaging research pipelines, and education. So, we have set up three new Working Groups that will focus on these particular domains.
The Annual Meeting Working Group will assess the environmental impact of the Annual Meeting, investigate sustainable conference models, and make recommendations to the Council for how to create a more sustainable Annual Meeting beyond COVID-19.
The Neuroimaging Research Pipelines Working Group will assess the environmental impact of neuroimaging research pipelines, investigate how we could do our research more sustainably, and create resources and publications to support neuroimagers in greening their research practices.
The Education & Outreach Working Group will collaborate with the other two to educate our community about the impacts our research activities have, including putting on events around the Annual Meeting. It will also seek to collaborate with industry and sister neuroscience societies. In these collaborations and in guiding the SIG's own activities, it will use insights from psychological and neuroscientific work on how humans respond to communications about climate change and environmental issues.
How can I get involved?
We hope that there are lots of OHBM members who are interested in participating in these groups to help us achieve our sustainability objectives!
For example, in the Annual Meeting group, we want to comprehensively assess what the environmental impacts were of recent in-person (e.g. Rome, 2019) and online (2020, 2021) meetings. Looking forward, we hope to then investigate how much our meeting footprint would be reduced if we adopted potential alternative conference models, such as hybrid (with some in-person and some online content), hub-and-spoke (where we have several meeting locations and you travel to your nearest), or moving to a biennial meeting. Many other societies and conferences are considering such options (Figure 1). For the Working Group, we need colleagues who are interested in looking at these options and putting together a report for Council. We are very fortunate that Sepideh Sadaghiani, an experienced member of the Program Committee, has come on board to chair this Working Group.
In the Neuroimaging Research Pipelines Group, we need colleagues who are up for digging into all the details of a neuroimaging workflow, from hardware and data acquisition to analysis and computing infrastructure. Ideally, we want to quantify the potential environmental implications of all these stages, so we can produce resources for the neuroimaging community that would allow researchers to plug in their pipeline protocol and get a measure of its environmental footprint. Of course, the next step is to provide resources that enable our community to change this for the better: establishing what best practice looks like for sustainable neuroimaging. In this group, we will need colleagues from across OHBM disciplines who have experience with all sorts of neuroimaging processing streams. We might also seek to collaborate with external experts such as cloud computing providers.
The Education & Outreach Working Group will have quite a broad remit around educating our community about the impacts our research activities have, in concert with the other two groups. Here we need colleagues who have experience in (or want to get experience in!) areas such as putting on events around the annual meeting, like symposia and socials; interfacing with industry and sister neuroscience societies; and perhaps even bringing psychology-based knowledge of what works well when communicating about climate change, to make sure we are operating as effectively as possible in the SEA-SIG as a whole.
As well as general group members, we are looking for two individuals who might be interested in Chairing the Neuroimaging Research Pipelines and Education & Outreach groups.
I’m in! What are the next steps?
If you would like to participate in any of the three groups, or would like further information, please do get in touch with us at email@example.com. We welcome informal enquiries if you are not sure before you sign up to participate!
For further details on the aims and objectives for each group, see our new website at ohbm-environment.org
If you know a colleague who would be ideal to contribute to one of our groups, please do pass on our details. And you can retweet our Twitter post announcing the groups.
We look forward to sharing updates on the Working Groups’ progress soon!