Aperture Neuro Celebrates One Year Anniversary with New Publishing Platform and First Published Research Object
Aperture Neuro (previously Aperture) is excited to announce its first published Research Object and the launch of a new open-source publishing platform, Kotahi (a Coko Community product).
It has been one year since Aperture Neuro, the new open-access publishing platform powered by the Organization for Human Brain Mapping, officially opened for submissions. In that time, 20 Research Objects have been submitted and reviewed, and four of those have already been accepted for publication. More Research Objects are currently in revision and review, and we look forward to announcing further publications in the near future.
With this launch, we wanted to highlight the first Aperture Neuro publication, share some of the exciting features of the new platform, review the types of Research Objects authors can submit, and discuss the ways OHBM members can support this exciting initiative.
By Peter Bandettini
Episode produced by Kevin Sitek and Rachael Stickland
S2 Ep5: Jack Gallant, Strong opinions about fMRI analysis
MRI is ultimately about separating a known but variable signal from highly variable noise. How one does this makes all the difference, and fMRI is particularly challenging because what is signal and what is noise is not always clear: both vary in time and space. Jack is a huge proponent of fMRI encoding or, more generally, of careful model building to probe the time series, and he thinks that model-free and paradigm-free approaches are ultimately limited. The discussion gets technical, as well as intense at times, and the points he makes are important. While we agreed most of the time, there were some nuanced differences of opinion, mostly when it came to discussing alternative methods for probing fMRI data. Overall, it was a fun and hopefully useful discussion! What does come through is his passion for what he does. Given that we only barely got into my questions, we scheduled a follow-up conversation with him.
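For readers unfamiliar with the encoding-model approach discussed here, the core idea is to fit a regularized model that predicts each voxel's time series from stimulus features, then judge the model by how well it predicts held-out data. The toy sketch below is not code from Gallant's lab; all dimensions, noise levels, and the ridge penalty are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 time points, 10 stimulus features, 50 voxels.
n_t, n_feat, n_vox = 200, 10, 50
X = rng.standard_normal((n_t, n_feat))            # stimulus feature matrix
true_w = rng.standard_normal((n_feat, n_vox))     # hidden feature-to-voxel weights
Y = X @ true_w + 0.5 * rng.standard_normal((n_t, n_vox))  # noisy "fMRI" responses

# Split into training and held-out runs.
X_tr, X_te = X[:150], X[150:]
Y_tr, Y_te = Y[:150], Y[150:]

# Ridge regression in closed form: w = (X'X + aI)^-1 X'Y.
alpha = 1.0
w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_feat), X_tr.T @ Y_tr)

# Evaluate voxel by voxel: correlation between predicted and actual
# held-out time series. Voxels the model captures well score near 1.
pred = X_te @ w
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_vox)]
print(round(float(np.mean(r)), 2))  # high here, since the toy signal is strong
```

The held-out prediction step is the crucial part: a model that only fits the training run tells you little, whereas accurate prediction of new data is evidence that the features genuinely drive the voxel's response.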
S2 Ep6: Jack Gallant, Deriving fundamentals of brain organization with fMRI
The first podcast with Jack delved so deeply into his approach to assessing fMRI data and his philosophy of doing good science that we really didn't get a chance to talk about either his groundbreaking results or the questions they open up. In this episode, we cover both topics in depth. First, we discuss his fascinating and potentially paradigm-shifting results on widely distributed semantic maps in the brain that shift and warp depending on the task itself. These results, at least in my opinion, open up new avenues for insight into fundamentals of brain organization. The brain is not just a conglomeration of distinct and static modules, but a shifting landscape of representation, much of which may be shaped primarily by our experience in the world. How our attention shifts these landscapes is an open and potentially profound question. Here we also discuss prospects for layer fMRI as well as the challenges of clinical MRI. It was a rich and engaging discussion with one of the true luminaries in the field.
About the guest:
Jack Gallant, Ph.D. is a neuroscientist and engineer at heart who trained with David Van Essen at Washington University in St. Louis. He is currently a Chancellor's Professor of Psychology and Class of 1940 Endowed Chair at UC Berkeley and is affiliated with the Department of Electrical Engineering and Computer Science, as well as the graduate programs in Bioengineering, Biophysics, Neuroscience, and Vision Science. His work spans from single-unit recordings to whole-brain fMRI, embracing the whole of computational neuroscience and setting extremely high standards of technical rigor, creativity, and insight.
By Peter Bandettini
Episode produced by Rachael Stickland and Anastasia Brovkin
AFNI is a major processing package used by brain mapping groups all over the world. It is nearly as old as fMRI itself and has been steadily growing in functionality as the field has evolved. Here we discuss how it all started as well as a few of the challenges of fMRI processing that have arisen over the years. Importantly, we explore the philosophy underlying a key tenet of AFNI development: the ability for researchers to drill down and look directly at the data. This emphasis on flexibly and efficiently visualizing the data at all processing steps not only guards against problematic data and hidden artifacts but is also a catalyst for new analysis ideas. We discuss a bit of the future of analysis and the bottleneck for clinical implementations.
It’s happened again. Summer (in the Northern hemisphere) has left us, and now we’re left with just memories of the balmier days of 2021. And with that, it’s a good time to reflect on the events of summer—and in particular OHBM2021. This was the second virtual edition of the annual OHBM meeting, with the online-only format offering unique challenges and opportunities.
Considerable planning and effort from many teams of volunteers and OHBM staff members went into the development of the new platform—affectionately dubbed “Planet Brain” by past Chair Aina Puce. Leadership took the higher-risk path to work with a new-to-the-market and, critically, open-source vendor, Sparkle, in hopes of achieving a much more engaging virtual experience that captured more of the true OHBM meeting spirit. But how did you think it went? Here we summarize some results from the two surveys that were sent out, one to annual meeting attendees to gather information about their experience and another to OHBM members, whether they registered for the meeting or not, to find out their views on future meetings. We provide anonymized responses to the Annual Meeting Feedback survey, alternative visualizations, and an environment for their re-analysis at https://emdupre.github.io/ohbm2021-survey-feedback/
By Peter Bandettini
Episode produced by Nils Muhlert and Rachael Stickland
Peter talks to Nikola Stikov, a physicist, engineer, and strong proponent of quantitative and reproducible MRI for further clinical traction and impact. This involves promoting open science, creating shared analysis toolboxes, and fostering data and code sharing across researchers and vendors. As mature as MRI is, we are still just scratching the surface of what information it can provide. Nikola is a gifted and passionate communicator; this conversation touches on his research in using MRI to derive information about cell structure in the brain and its potential uses in understanding brain connectivity as well as pathology. Also discussed are Nikola's many initiatives in open science, dissemination of results, publishing (including how outdated the PDF is), and science outreach.
Nikola Stikov, PhD. Nikola Stikov is a professor of Biomedical Engineering, a researcher at the Montreal Heart Institute, and co-director of NeuroPoly, the Neuroimaging Research Laboratory at Polytechnique Montreal. Dr. Stikov received his Ph.D. degree at Stanford University, working with John Pauly and Dwight Nishimura, then carried out his postdoctoral training with Dr. Bruce Pike at the Montreal Neurological Institute. In 2014, Dr. Stikov was elected Junior Fellow of the ISMRM.
His research spans the gamut of quantitative magnetic resonance imaging, from basic issues of standardization and accuracy, to biophysical modeling, microstructural imaging and clinical applications. His group is particularly interested in developing and validating novel biomarkers for non-invasive characterization of the brain and heart microstructure during development, disease and treatment, thus pushing the boundaries of the emerging field of in vivo histology.
Over the years, he has become active in open science and science communication, founding MRM Highlights, the OHBM Blog, and the Canadian Open Neuroscience Platform (CONP). He is also the founder of MRBalkan, a conference series associated with ISMRM that has been held on the Balkan Peninsula (Macedonia, Turkey, and Slovenia).
By Peter Bandettini
Episode produced by Anastasia Brovkin and Nils Muhlert
In this wide-ranging discussion, Peter and Melanie Boly address everything related to her work on consciousness. They start with some of her early work on resting state as a modulator for detecting subtle stimuli and then get into a discussion on a working definition of consciousness and her work on understanding the neural correlates of consciousness. Melanie is a proponent of the idea that many, if not all, of the fundamental physical correlates of consciousness reside in the posterior part of the brain. Peter and Melanie also discuss Integrated Information Theory (IIT) and how it helps us begin to understand consciousness. Lastly they consider her studies of sleep and how dreaming is not limited to REM sleep. This interesting discussion straddles theoretical work and practical clinical applications of brain imaging.
Melanie Boly, MD, PhD. Melanie Boly is a neurologist and neuroscientist who has worked for more than fifteen years in the field of altered states of consciousness such as the vegetative state, sleep, and anesthesia. She has worked with and been mentored by such people as Stephen Laureys, Adrian Owen, Marcello Massimini, and Karl Friston. Her research is directed at combining neuroimaging techniques such as PET, fMRI, TMS-EEG, and high-density EEG with a theoretical framework, the Integrated Information Theory of Consciousness, in hopes of uncovering the neural mechanisms associated with levels of consciousness as well as its contents in healthy subjects and neurological patients. She has over 150 publications and is Associate Editor of the journals NeuroImage, Frontiers in Consciousness Research, Frontiers in Brain Imaging Methods, and Neuroscience of Consciousness. Currently, she is an assistant professor in the Neurology department at the University of Wisconsin, with a secondary appointment in Psychology. She received her MD in 2005 and her Ph.D. in 2009, both from the University of Liege, Belgium. From 2009 to 2014 she did postdoctoral work at the University of Liege, University College London, and then the University of Wisconsin-Madison. By all measures, Dr. Boly is a rising star in advancing our understanding of the neural correlates of altered states of consciousness. She also collaborates with many luminaries in consciousness research, including Christof Koch and Giulio Tononi.
By Peter Bandettini and Rachael Stickland
Fresh new sounds welcome you back to season 2 of the Neurosalience podcast! In this episode Peter Bandettini talks to production lead Rachael Stickland. They summarize the types of podcasts released in Season 1, the experience of making the podcast so far, and episodes and themes coming up in Season 2. This episode production is by Nils Muhlert and Anastasia Brovkin.
Through their chat, Peter and Rachael answer the question: Who is involved in making this podcast? As host, Peter Bandettini contacts guests, prepares for interviews, and carries out the interviews. This process is supported by a wider team of volunteers who package the episodes for production: Anastasia Brovkin, Ekaterina Dobryakova, Katie Moran, Nils Muhlert, Kevin Sitek, and Rachael Stickland. Roselyne Chauvin adapted the OHBM logo into the great podcast logo. All these people are members of or contributors to the OHBM Communication Committee; many other people on this committee have also supported the podcast, contributing ideas to help get it up and running and to keep it going. If you don't know much about this committee, see this recent blog post, which tells you all about it in the form of a chat with past and current chairs and chairs-elect. They even talk about how the idea for the podcast came about and their thoughts on it. If you have thoughts for future episodes, share them with us by emailing ohbmbrain AT gmail.com.
By: Nils Muhlert
Professor Marcus Raichle has played a truly pivotal role in the discovery of the physiological basis of functional neuroimaging. During the 1980s he helped to discover the relative independence of blood flow and oxygen consumption during changes in brain activity; in the 1990s he identified the ‘default mode’ of brain activity; more recently, his team carried out critical work into the infraslow activity of the brain. Marcus is currently the Alan A. & Edith L. Wolff Professor of Medicine and a Professor of Neurology, Radiology, Neurobiology and Biomedical Engineering at Washington University School of Medicine in St Louis, Missouri. He took time out of his busy schedule to tell us about his seminal studies on measuring blood flow and exploring the rhythms of the brain.
By Peter Bandettini and the Neurosalience production team
In this podcast, Peter and David Poeppel discuss what it might mean to understand the brain, and how MRI and other imaging modalities may play a part. They discuss David’s past work with Greg Hickok on language pathways as well as his study of the auditory cortex. Another topic discussed is the potential impact of David’s work clinically as well as the need to start with—and progressively add to—models of the brain.
David Poeppel, Ph.D., is Professor of Psychology and Neural Science at New York University (NYU). Since 2014, he has also been the Director of the Department of Neuroscience at the Max Planck Institute for Empirical Aesthetics (MPIEA). In 2019, he co-founded the Center for Language, Music and Emotion, an international joint research center co-sponsored by the Max Planck Society and New York University. Since 2021, he has also been Managing Director of the Ernst Strüngmann Institute in Germany.
David grew up between Munich, Germany; Cambridge MA, USA; and Caracas, Venezuela. He obtained his bachelor's degree (1990) and doctorate (1995) from the Massachusetts Institute of Technology (MIT). He received training in functional brain imaging as a postdoctoral fellow at the School of Medicine of the University of California, San Francisco. From 2000 to 2008, he directed the Cognitive Neuroscience of Language Laboratory at the University of Maryland College Park, where he was a professor of linguistics and biology. He joined New York University in 2009.
He was a fellow at the Berlin Institute for Advanced Study and has been a guest professor at several institutions. He has received the DaimlerChrysler Berlin Prize of the American Academy of Arts and Sciences and other honors.
David Poeppel is a researcher who employs behavioral and cognitive neuroscience approaches to study the brain basis of auditory processing, speech perception and language comprehension. The research in Poeppel's laboratory addresses questions such as: What are the cognitive and neuronal “parts lists” that form the basis for language processing, the fundamental constituents used in speech and language? How is sensory information transformed into the abstract representations that underlie language processing? What are the neural circuits that enable language processing?
Well-known contributions of the Poeppel laboratory include: the functional anatomic model of language developed with Greg Hickok; research on lateralization in auditory processing; and experimental work on the role of neuronal oscillations in audition and speech perception. He also writes and lectures about methodological questions at the interdisciplinary boundary between cognitive science research and brain research.
NEUROSALIENCE EP17: Dynamic modeling of the brain, NeuroImage, and the neuroscience crisis in Australia with Michael Breakspear
By Peter Bandettini and the Neurosalience production team
In this wide-ranging podcast discussion, Peter talks to Michael Breakspear about his motivations for modelling brain dynamics and how his research may pay off in the long run in clinical applications. Michael is also the current Editor-in-Chief of the journal NeuroImage, and they discuss some of the changes that have occurred there, such as new types of papers, new policies on data sharing, and of course the transition to open access. Michael mentions a new offshoot of NeuroImage called NeuroImage Reports, which welcomes re-analysis of previous results. Lastly, they discuss the recent news of the Australian National University shutting down its neuroscience program because of budget problems.
Michael Breakspear Ph.D. is a physicist, psychiatrist, and the leader of the Systems Neuroscience and Translational Neuroimaging Group at the Hunter Medical Research Institute at the University of Newcastle. He is the current Editor-in-Chief of the journal NeuroImage. His work in physics focuses on dynamic models of large-scale brain activity, toolbox development and the detection of nonlinear dynamics in empirical data. His work in translational imaging encompasses healthy ageing, dementia, bipolar disorder and schizophrenia, with a focus on connectomics and risk prediction.
Michael grew up in Sydney and studied medicine, philosophy and mathematics. He undertook early-career research training in the School of Physics at the University of Sydney before moving to the School of Psychiatry at UNSW as a mid-career researcher. He formed his Systems Neuroscience Group at the University of New South Wales in Sydney in 2004, then moved to QIMR Berghofer Medical Research Institute in 2009. He relocated to Newcastle in 2019 and established the Systems Neuroscience Group, Newcastle (SNG-Newy) with aspirations to integrate basic methods, bioinformatics and clinical translation with a unique regional Australian character. Their imaging centre is in a beautiful bushland setting in Awabakal country.
In addition to basic research training, he also completed training in psychiatry and nowadays combines his research career with clinical sessions in adult psychiatry. Michael has an interest in recovery-focussed treatment of mood disorders, psychosis, and addiction. In the past he has also worked in prison mental health and inner-city community psychiatry.
Michael has a passion for climate science, socializing, and surfing.
The OHBM communication committee (OHBM ComCom) works to improve communication both within and beyond the OHBM community, creating content for the OHBM blog, podcast and social media and reporting about current topics in our community and field.
Founded in 2015 from a Council Strategic Planning retreat chaired by Karen Berman, ComCom was originally envisioned as a way to improve communication between Council and membership, serving as a contact point for members throughout the year. Under its first chair Randy Gollub, ComCom established four initial teams: social media, blog, lay media, and website. From this strong foundation, ComCom has grown to include additional platforms such as the new OHBM podcast Neurosalience.
In this interview, the 2019 chair Nikola Stikov, 2020 chair Nils Muhlert, current chair Ilona Lipp, and chair-elect Elizabeth DuPre sit down together to discuss the history of ComCom, ongoing initiatives, and the importance of open communication both within OHBM as well as to the broader scientific community.
If you are interested in joining ComCom or contributing to the OHBM blog or podcast, please get in touch with current chair Ilona Lipp by email at lippi [at] cbs.mpg.de or on Twitter at @ilona_lipp.
By: Winson Yang, on behalf of the SEA-SIG
The Sustainability and Environmental Action Special Interest Group (SEA-SIG) was formed by the Organization for Human Brain Mapping (OHBM) in 2020 in recognition of the need to reduce the impact of the organization and its members on the environment. The SIG has three working groups, with each providing a unique focus for shared action:
Neuroimaging Data Sharing and Open Brain Science in China: The 5th Annual Event of Chinese Young Scholars for Human Brain Mapping
By: Chao-Gan Yan
The 5th Annual Event of Chinese Young Scholars for Human Brain Mapping was held on June 23, 2021 under the topic of “Neuroimaging data sharing and open brain science in China”. As in previous editions, the goal of the event was to bring together Chinese researchers from the OHBM community to communicate, discuss, and collaborate on cutting-edge neuroscience research topics and methods. We invited Professors Ying Han, Jiang Qiu, Sha Tao, Chao-Gan Yan, Chun-Shui Yu, Yu-Feng Zang, Zhan-Jun Zhang, and Xi-Nian Zuo to give talks on open neuroimaging resources in China.
Neurosalience Episode 16: Understanding the reproducibility crisis and how to get through it, with Dr. Ahmad Hariri
By Peter Bandettini; production by Kevin Sitek & Rachael Stickland
Dr. Hariri recently published an important paper on the test-retest reliability of common task-fMRI measures. The paper received attention within the field and from the popular media, and generated useful discussion. In this podcast, Peter and Ahmad discuss its implications, how to address the challenges it presents, and how to continue moving the field forward. It is an informative and positive conversation about how to collectively address these issues as a field.
By Tzipi Horowitz-Kraus & Nils Muhlert
The neuroimaging community has been very active in creating large-scale studies across a range of age groups, which have helped to tackle reproducibility issues. Most studies originate in the United States and Europe, although many other geographic regions are pursuing similar initiatives (for instance, you can read about initiatives in China in our blog post). The Middle East, a 7.2 million km² region of diverse religions and cultures, has been very productive in the neuroimaging community in recent years. To get an update on how neuroimaging is changing in this region, we asked brain mappers from the Middle East to tell us about their projects and surveyed prominent researchers in different countries. We provide a brief overview of some of these activities in the Emirates, Israel, Saudi Arabia, and Turkey, with the hope of future scientific collaboration between these countries.
Interview by Peter Bandettini, production by Nils Muhlert and Niall Duncan
This podcast idea was precipitated by Dimitri Kullmann's 2020 editorial in Brain, which caused a stir in the community by leveling criticism at the clinical validity of fMRI. Some of the criticism was outdated, but some was indeed on point. In this podcast we had a great discussion on all things fMRI: what it can and cannot measure, and how the field can continue to progress. We also discuss some of the scientific culture surrounding fMRI. Overall, the discussion was useful in bringing some of the flaws as well as some of the outstanding innovations to light. We ended up agreeing that fMRI is, in fact, an extremely useful tool that allows penetrating insight into the brain at a specific temporal and spatial scale. We feel that there is still considerable hope, yet also considerable challenge, in increasing its clinical relevance.
Dr. Dimitri Kullmann is a professor of Neurology at UCL Queen Square Institute of Neurology.
Dimitri received his doctorate from the University of Oxford in 1984 and his Bachelor of Medicine/Surgery from the University of London in 1986. He alternated between research in synaptic transmission and post graduate medical training in London. In 1992, he started his lab at the Institute of Neurology and in 2000, he became a fellow of the Royal College of Physicians.
Dimitri's interests span the fundamental mechanisms of synaptic transmission, the computational properties of small neuronal circuits, and alterations in neuronal and circuit excitability in epilepsy and other neurological disorders. The core methods in his lab are in vitro electrophysiology and pharmacology, but he also applies confocal and two-photon laser scanning microscopy, computational simulations, molecular genetic methods, and heterologous expression of mutated ion channels. His laboratory has contributed to the discovery of silent synapses, glutamate spillover, presynaptic GABAA receptors in the cortex, human epilepsy caused by K+ and Ca2+ channel mutations, tonic inhibition in the hippocampus, and Hebbian and anti-Hebbian LTP in hippocampal interneurons.
One of Dimitri's goals is to understand how phenomena that he has studied at the cellular level interact to regulate the excitability of small neuronal circuits. He is integrating studies on hippocampal circuit function with knowledge of how interneurons and principal cells fire during different behaviours. This is being approached both experimentally and with computational simulations. He also aims to apply his lab's recent insights into the cellular consequences of inherited mutations of ion channels (channelopathies) to develop new ways to diagnose and treat neurological diseases.
Dimitri was editor of the journal Brain from 2013 to 2020. He brings to the table the perspective of a clinician neuroscientist who does research at the neuronal scale.
Since 2019, Dr. Vince Calhoun has been the director of the Center for Translational Research in Neuroimaging and Data Science (TReNDS), a collaboration among three universities: Georgia State, Georgia Tech, and Emory.
In 2002, Vince received his Ph.D. in Electrical Engineering from the University of Maryland, Baltimore County, and then became an assistant clinical professor at Yale and Director of the Medical Image Analysis Laboratory at the Institute of Living in Hartford. In 2006 he moved to the University of New Mexico, Albuquerque, as an associate professor, later becoming a Distinguished Professor there while serving in leadership roles at the Mind Research Network in Albuquerque.
Vince is an expert on brain imaging acquisition and analysis and has created numerous algorithms for making sense of complex brain imaging data. He is the creator of the group independent component analysis algorithm, which has become widely used for extracting 'networks' of coherent activity from functional magnetic resonance imaging (fMRI) data. He was an early innovator in approaches to characterize the dynamics of brain connectivity. He has also developed techniques to link many different types of data, called 'data fusion' including various types of brain imaging (structural, functional, connectivity) with genomic and epigenomic data. A key focus of Calhoun's work is the development of tools to identify brain imaging markers to help identify and potentially treat various brain disorders including schizophrenia, bipolar disorder, autism, and Alzheimer's disease. He has recently served as President of the OHBM and has been truly prolific in his work to push the methodology of fMRI.
The Neurosalience production team consists of Anastasia Brovkin, Katie Moran, Nils Muhlert, Kevin Sitek, and Rachael Stickland.
By Peter Bandettini & the OHBM Neurosalience production team
Here Professor Peter Bandettini has a wide-ranging discussion with the 2021 Early Career Investigator Awardee, Chao-Gan Yan. They talk about his career path, the highly impactful work he has been doing, and some of the most challenging issues in fMRI: dealing with motion, variability, finding biomarkers, and designing just the right packages to help beginner and expert alike. Chao-Gan gives some great advice to new investigators about what was important in getting him where he is today.
Dr. Chao-Gan Yan is a professor at the Institute of Psychology, Chinese Academy of Sciences (IPCAS). He is the Director of the Magnetic Resonance Imaging Research Center, the Director of International Big-Data Center for Depression Research, and the Principal Investigator of The R-fMRI Lab located at IPCAS. Before he joined the IPCAS in 2015, he worked as a Research Scientist at the Nathan Kline Institute for Psychiatric Research and a Research Assistant Professor at the Department of Child and Adolescent Psychiatry, New York University School of Medicine.
Dr. Yan received his Ph.D. degree in Cognitive Neuroscience from the State Key Laboratory of Cognitive Neuroscience and Learning at Beijing Normal University in 2011. His research mainly focuses on resting-state fMRI (R-fMRI) computational methodology, the mechanisms of spontaneous brain activity, and their applications in depression. He has addressed fundamental methodological issues such as the impact of head motion, standardization, and multiple comparisons on the study of resting-state functional connectomics. He has also developed the R-fMRI data processing and analysis toolboxes DPABI and DPARSF, the latter having been cited over 2,000 times.
For Brain Mappers around the world, the OHBM annual meeting is a time to meet, network, and learn about the latest progress across the many domains of human neuroimaging. Despite the challenges introduced by COVID-19, the virtual conference formats used in 2020 and now in 2021 offer the chance to maintain contacts and benefit from many of the normal features of an in-person conference. In 2020, the pandemic forced a rapid transition, with little time or resources for organizers to fully realize the social aspects of OHBM most beloved by attendees: the ample opportunities for talking around posters, the chance hallway greetings that turn into deep and meaningful friendships and collaborations, and the celebratory parties where you can find all your friends and colleagues to catch up and network for that job or bit of news you are seeking. Many solutions were created by OHBM attendees, as detailed in our 2020 roundup, and these provided ideas that helped shape OHBM2021.
In seeking an alternative platform for OHBM 2021, OHBM leadership, with plenty of grassroots input from the community and incredible support from volunteer leadership in the form of a Technology Task Force, took the risk of selecting an alternative open-source meeting software platform, Sparklespaces. This new platform offered the chance to robustly support social engagement while stretching to meet the demands of the more traditional meeting elements offered by competing platforms. One factor in the decision to go with Sparklespaces was the option for our Program Committee Leadership and our Technology Task Force to work in close collaboration with the Sparkle developers to ensure that the resulting software met our needs; unlike other providers, which offered fixed packages, the Sparkle team was open to co-creating tailored features for our community. As part of this deal, we would retain access to the shared development source code for future meetings. Everyone involved was and remains very enthusiastic about this collaboration, with this year's platform being affectionately dubbed #PlanetBrain by our Chair, Aina Puce.
For those attending OHBM2021, however, the start of the meeting did not live up to expectations. A slew of connectivity problems meant that many people experienced sometimes considerable lag in accessing the symposia, poster hall, and keynote talks. Here we set out what happened and how things are being rapidly resolved. Before the close of the first day, an update was provided by the Chair of the Technology Task Force, Alex Fornito:
The Sparkle team and OHBM members worked around the clock to resolve these issues. Within hours we deployed a backup stream site, and within a day a screening room, which offered another way to watch live streams and explore screening room videos while we resolved the core technical issues. Several fixes incrementally improved performance over the first couple of days and, thankfully, from midway through the Wednesday session we saw a significant improvement in connectivity. The prime culprit, identified by Soichi Hayashi and the Sparkle team, was that Planet Brain was bogged down by its handling of user-update events, which inform conference-goers of each other's whereabouts so as to enable a highly collaborative, social experience. As more and more users joined the platform and moved between the thousands of different events and spaces, each user received a cascading number of update notifications, causing browsers to struggle. Although the platform had been well tested at volumes of thousands of users, its scalability when users, spaces (including posters), and events each numbered in the thousands had not been tested in combination, and the site performed poorly under that load. The Education Courses ran more smoothly because there were fewer users, spaces, and events, and thus far fewer user-update events.
Soichi's temporary fix was to switch from receiving real-time events to polling user information, limiting the number of updates each browser must process. This patch substantially improved the performance and user experience of #PlanetBrain, and a more permanent fix (requiring more substantial reworking of the codebase) is currently being implemented by the Sparkle team. On June 23rd, OHBM tweeted to inform users of the upgrade to the Poster Hall functionality and invited people to join again, offering an accompanying video to showcase the reboot!
The platform should be operating more smoothly, and is closer to the vision developed by the large team of people who worked on putting this conference together. Improvements are ongoing and the OHBM team is appreciative of everyone’s feedback.
In addition, rest assured that all talks are being recorded, so anything you may have missed can be rewatched later. If you do have comments on the process or would like to hear more from OHBM Leadership on these and other key issues, OHBM members are invited to join us at the General Assembly and Feedback Forum (Town Hall), 30th June at 9am EDT/1pm UTC (link to be sent by email soon).
In this tutorial, we present the fundamental concepts of network neuroscience. We build upon material from previous Educational Sessions of OHBM's Annual Meeting, available through the OHBM YouTube channel, which contains a trove of lectures, education courses, and symposia from OHBM's annual conferences. During the 2020 virtual meeting, Alex Fornito and Andrew Zalesky presented a course on Fundamental Concepts and Methods in Network Neuroscience. Speakers included Alex Fornito, Janine Bijsterbosch, Shella Keilholz, Robert Smith, Andrew Zalesky, Bratislav Misic, Richard Betzel, Ann Sizemore Blevins, Martijn van den Heuvel and Petra Ritter. We highlight material from this course, which is currently available on YouTube, so readers can learn from the experts in this condensed tutorial.
I hear a lot of talk about “network approaches” to neuroscience, what does that mean?
Simply put, we are interested in studying the connections, or wiring, between so-called “brain areas.” Though Franz Joseph Gall is famously known for the pseudoscience of phrenology, his underlying idea that different parts of the brain perform different functions has informed much of modern neuroscience. This phenomenon of functional localization, often referred to as functional segregation or separation, has been thoroughly investigated down to individual neurons. However, as Alex Fornito points out at the beginning of the education course, though this degree of specification is a fundamental property of brain organisation, the experiences that are constructed by the human brain are not thought to be formed by independent, disconnected processes. Rather, specialized processes are integrated together through connections between different brain regions. This apparent functional integration occurs over multiple spatial and temporal scales and heavily depends on the connections between different neural elements, which can be studied at macroscopic resolutions using magnetic resonance imaging (MRI).
The brain has more than 85 billion neurons, which show an extraordinary level of local interconnectivity and highly organized long-range connectivity. In this respect, the brain's organization resembles that of other complex systems observed throughout nature and in our everyday lives. For example, the world wide web is a network that links each of us to almost anyone, anywhere in the world, at any time. One of the principal aims of network neuroscience, as Alex explains at 5:30 min, is to understand and map the properties and organization of the brain through the mathematics of complex networks. It applies this framework to study the network of connections forming the human brain, its "connectome", in terms of both structure and function.
Alex Fornito describing the brain as a network at different levels of resolution
Is this related to a thing called graph theory that I always get confused about?
Graph theory, as Alex explains from 6:04 onwards, is a mathematical framework for studying the connectivity among the core constituents of a graph: a set of "nodes" connected by "edges" that quantify the association between pairs of nodes. In this sense, a graph is a structure that models the relationships between interconnected objects (which can range from brain areas to people's friendships) and offers a natural analysis model for any networked system.
Using MRI, we can study brain networks at coarse spatial scales, in which nodes are defined as spatially focal, functionally homogeneous brain areas that are considered to be fundamental processing units of the network. In network neuroscience, nodes are also referred to as “parcels”, and a set of nodes covering the whole brain can be defined as a “brain parcellation”. Edges are defined as the connections between the nodes in the network and for MRI-based studies of the brain, these connections are representative measures of the functional and structural connectivity between brain regions. The mathematics of graph theory allows us to extract the organizational properties of the brain connectome and relate these properties to those observed in other systems found in nature.
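To make the node-and-edge picture concrete, here is a minimal sketch using NumPy. The region names and connectivity values are entirely made up for illustration; the point is that a weighted adjacency matrix directly yields two of the simplest node measures, degree (edge count) and strength (summed edge weight):

```python
import numpy as np

# Toy connectome: 4 hypothetical "parcels" (nodes) and weighted edges
# between them, stored as a symmetric adjacency matrix. The values
# are invented connectivity strengths, purely for illustration.
labels = ["visual", "motor", "parietal", "frontal"]
A = np.array([
    [0.0, 0.8, 0.3, 0.0],
    [0.8, 0.0, 0.5, 0.2],
    [0.3, 0.5, 0.0, 0.6],
    [0.0, 0.2, 0.6, 0.0],
])

# Node degree: number of non-zero edges attached to each node.
degree = (A > 0).sum(axis=0)

# Node strength: sum of edge weights attached to each node.
strength = A.sum(axis=0)

for name, d, s in zip(labels, degree, strength):
    print(f"{name}: degree={d}, strength={s:.1f}")
```

In a real analysis the matrix would of course come from imaging data rather than being typed in by hand, but every graph measure discussed below ultimately reduces to operations like these on an adjacency matrix.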
Hmm … ok that makes sense, but how are these nodes defined?
Sensible node definition is perhaps the most challenging aspect of connectomics analysis because, in many instances, there are no clear boundaries visible with MRI that segregate the brain into functionally meaningful, homogeneous areas (Fornito and Bullmore, 2015). At 8:00 min Alex describes how the interpretation of any network analysis depends on the way in which nodes and edges are defined. Good node definitions should adhere to three properties:
1. The nodes need to be spatially constrained. That is, a node should be defined as a spatially contiguous patch of the brain rather than a combination of spatially distinct regions. This is based on the assumption that neurons dedicated to any specific functionality are constrained to a specific location in the brain.
2. Each node should be intrinsically functionally homogeneous, so that the constituents of the nodes, such as voxels clustered into a single node, should share a similar function.
3. The nodes should be extrinsically distinct, so that the function of one node can be differentiated from others.
Alex Fornito explaining how we can use graphs to represent brain networks
Further elaborating on the first property, at 37:54 min Janine discusses different ways to conceptualize spatially contiguous and non-contiguous nodes. Contiguous nodes are spatially interconnected voxels or vertices which form a single localized region of the brain and are consistent with the notion of functionally specialised regions. Alternatively, non-contiguous nodes are made up of various distinct areas that may be dispersed throughout the brain, for example, bilateral brain areas. This conceptualization is consistent with the hierarchical and hemispherically organised network structure of the brain.
Janine goes on to explain other characteristics such as “binary” and “weighted” nodes. Nodes are considered “binary” when each voxel is a member of exactly one node only; often referred to as a “hard parcel”. Alternatively, nodes are “weighted” when a voxel can have non-zero membership to multiple nodes; often referred to as a “soft parcel”.
Clustering methods such as K-means clustering and normalised cuts group voxels together based on similarity to create nodes. These nodes tend to be binary and can either be contiguous or non-contiguous. Data-driven decomposition methods, such as ICA and PCA, produce weighted parcellations that usually combine different regions in the brain (i.e. not spatially contiguous) into a single node.
Janine Bijsterbosch discussing characteristics of nodes
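The hard/soft distinction above can be sketched as voxel-by-node membership matrices. Everything below is illustrative: the voxel counts, membership weights, and random time series are invented, and the weighted-average extraction of node time series is just one simple convention:

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_vox = 50, 6            # toy "fMRI" data: 50 timepoints, 6 voxels
voxel_ts = rng.standard_normal((n_time, n_vox))

# "Hard" (binary) membership: each voxel belongs to exactly one node.
hard = np.array([[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [0, 1]], float)

# "Soft" (weighted) membership: voxels can load onto several nodes,
# as in ICA-style decompositions. The weights are purely illustrative.
soft = np.array([[.9, .1], [.8, .2], [.7, .3], [.2, .8], [.1, .9], [.4, .6]])

def node_timeseries(ts, membership):
    """Weighted average of voxel time series within each node."""
    return ts @ membership / membership.sum(axis=0)

hard_ts = node_timeseries(voxel_ts, hard)    # shape (n_time, 2)
soft_ts = node_timeseries(voxel_ts, soft)
print(hard_ts.shape, soft_ts.shape)
```

For the hard parcellation this reduces to a plain mean over each node's voxels, which is exactly the "aggregate signal" assumption that Janine cautions about below.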
At 42:47 min, Janine also mentions the challenges that must be addressed when defining nodes. For example, it is generally assumed that a node is a homogeneous region, the aggregate signal of which accurately represents the activity across the different voxels in that node. However, this assumption is questionable, as a region may be heterogeneous in both its connectivity and function. For example, averaging the signals across the entire primary motor cortex, a region with homogeneous cytoarchitecture, would overlook the meaningful functional heterogeneity that encodes different hand and foot movements (Haak et al., 2020). For this reason, it is important to be mindful of the core assumptions implicit in any parcellation procedure.
Right, but how can we define them in a more practical sense?
Generally, nodes are defined either using published parcellations available in the literature or from your own data. At 9:26 min Alex explains different approaches by which nodes can be defined, such as cytoarchitectonic maps or sulcal and gyral landmarks. Many published parcellation schemes, and methods for parcellating your own data, are now open access. Arslan et al. (2018) compare a variety of anatomical, connectivity-driven, and random parcellations, evaluating the performance of different schemes and methods for connectomics. They conclude that there is no clear "one size fits all" winner, instead providing guidelines for choosing a parcellation technique appropriate to the problem under investigation. Similarly, at 33:34 min Janine notes that the parcellation scheme should match the modality you are investigating.
An advantage of using published parcellations or atlases to define nodes is that the node definitions are identical across studies, allowing comparison with published findings. Anatomical parcellations may also capture aspects of brain tissue, such as cytoarchitecture, that functional parcellations do not. However, a limitation of using a structure-based atlas is that it may not represent your data as well as nodes derived from your own data. This is especially true for functional connectivity: atlas-based parcellations have been shown to perform worse in network analyses than data-driven functional parcellations (Smith et al., 2011). Therefore, nodes should be defined in accordance with the research question and hypotheses you want to investigate.
What about the edges?
At 10:43 Alex explains that when mapping brain connectivity, edges are defined by distinguishing between three types of brain connectivity: structural, functional and effective. Whilst structural connectivity refers to the physical white matter tracts connecting regions, functional connectivity refers to the statistical relationship between neurophysiological signals recorded in spatially separated regions. Notably, the directionality of a structural connection (e.g., from one region to another) is not resolvable using MRI, whereas functional connectivity can be directed or undirected. Effective connectivity describes the influence that one group of nodes exerts on another, allowing causal interactions between different neuronal systems to be modelled. Whilst functional connectivity is estimated at the level of measured physiological signals, effective connectivity is estimated at the neuronal level and requires some kind of mapping between the measured signal (e.g., BOLD fMRI) and the underlying neuronal events that generate it.
You keep saying the word "connectivity". Can you tell me a bit more about it?
Neuroscientists use the word "connectivity" to refer to any physical connection or statistical dependency between different areas of the brain. The terms "connectivity matrix", "similarity matrix", "affinity matrix", and "adjacency matrix" are used interchangeably. At 12:55 min Alex mentions some of the different approaches used to measure connectivity, such as diffusion Magnetic Resonance Imaging (dMRI) and functional Magnetic Resonance Imaging (fMRI). He goes over some problems that need to be tackled in the MRI data processing pipeline, such as head motion (13:27 min) and physiological noise (17:05 min). An evaluation of different preprocessing pipelines, and their impact on results when comparing connectivity differences between patients and controls, is discussed by Parkes et al. (2018). It is important to be aware of the impact of these preprocessing steps, as they strongly influence analysis results. Once data are preprocessed and parcellated, you can measure functional or structural connectivity between regions and produce either an adjacency matrix, in which each element contains the measure of connectivity between a pair of nodes, or a graph representation of the same connectivity, drawn as nodes with connecting lines. At 19:34 min Alex explains that these two representations are equivalent, giving an example of a functional connectivity matrix; the only difference between the two is the visualisation.
Alex Fornito explaining the different approaches used to measure connectivity
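A minimal sketch of this matrix-graph equivalence, assuming NumPy and random numbers standing in for parcellated BOLD time series: the correlation (adjacency) matrix and the edge list it unpacks into describe exactly the same graph:

```python
import numpy as np

rng = np.random.default_rng(42)
n_time, n_nodes = 200, 5
# Toy parcellated time series (timepoints x nodes); in practice these
# would come from preprocessed fMRI data averaged within each parcel.
ts = rng.standard_normal((n_time, n_nodes))

# Functional connectivity as a Pearson correlation (adjacency) matrix.
fc = np.corrcoef(ts.T)           # shape (n_nodes, n_nodes)

# The matrix and the graph are the same object: each off-diagonal
# element fc[i, j] is the weight of the edge between nodes i and j.
edges = [(i, j, fc[i, j]) for i in range(n_nodes)
         for j in range(i + 1, n_nodes)]
print(f"{len(edges)} edges from a {n_nodes}-node matrix")
```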
But how do we measure this “connectivity”; say, for example, from functional MRI?
Just as there are many ways to parcellate the brain, there are many ways to define “functional connectivity” via different statistical measures that capture the shared information (relationships) between two nodes. In addition, one can examine the average, or “static”, connectivity across the duration of the fMRI scan, which obtains a single measure of connectivity for the whole scan between each node pair, or one can examine the dynamic fluctuations in connectivity that occur on a moment-by-moment temporal scale.
Seed- or parcellation-based correlation analysis and Independent Component Analysis (ICA) are two popular methods for investigating average functional connectivity based on Pearson correlations among BOLD time courses. At 1 hr 03:52 min, Shella Keilholz explains seed- and parcellation-based correlations. At its core, Pearson correlation is a simple measure capturing the linear relationship between two variables; in effect, one can think of it as "how well do two signals co-fluctuate?". At 1 hr 16:47 min, Shella goes on to explain ICA as a method that decomposes an fMRI dataset into the individual signal sources giving rise to the measured BOLD signals. ICA can be performed in either the spatial or the temporal domain; given the number of voxels versus the number of timepoints in a standard resting state scan, it is more common to apply spatial ICA. While spatial ICA can separate interesting signal sources (resting state networks) from artifact signals (like those from cerebrospinal fluid or subject motion), it is important to note that ICA cannot automatically determine the number of components (or signal sources) to be estimated by the algorithm: this "model order" or dimensionality has to be decided by the researcher doing the analysis. At 1 hr 17:57 min Shella gives an example of using ICA to show that the choice of model order affects the resulting network maps. The main advantage of ICA is that it is data-driven, so it can be used to investigate functional connectivity without any a priori information, such as seed selection, and it can effectively separate signal from noise. But this also means that the resulting ICA components can be hard to interpret: one must identify the components of interest versus those that represent noise from the resulting spatial maps, which may require comparing the components with published maps or drawing from other studies.
Measures of average functional connectivity beyond Pearson correlation that are used for seed- and parcellation-based methods include partial correlation, mutual information measures, coherence/partial coherence, and many others. Smith et al. (2011) used simulations to investigate how different network configurations, experimental protocols, confounds, and choices of connectivity measure impact network modelling, and found that partial correlations have high sensitivity for detecting direct connections between nodes, that directionality is difficult to estimate, and that functionally inaccurate parcellations severely degrade connectivity analyses.
More information on static functional connectivity analysis can be found in the OHBM’s OnDemand How-To: Resting State fMRI Analysis blog post.
In contrast to average functional connectivity, for “time-varying” or “dynamic” functional connectivity analysis, more than one measure of connectivity is obtained to capture the variations in brain connectivity over the scan duration. Dynamic connectivity can be implemented using different approaches such as window or event-based methods. There are many excellent reviews on the promises and pitfalls of dynamic connectivity methods (Hutchison et al., 2013; Preti et al., 2017; Jalilianhasanpour et al., 2021) and the utility of these methods for classification and prediction of brain disorders (Du et al., 2018). At 1 hr 23:28 min, Shella gives an example of the sliding window method, which divides time series into small time windows to calculate the correlation between BOLD signals from two areas within a small time window that slides from one time point to the next. This new time course of correlations from each window across the duration of the scan reflects the dynamic changes in connectivity that may occur between two brain areas, even in the absence of an apparent overall average connectivity between them. This approach can be extended to parcellations: in this case, an association matrix is computed for each sliding window.
Shella Keilholz explains the sliding window correlation approach used to analyse the correlation values between two areas at different points in time
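The sliding window idea can be sketched in a few lines. The two signals below are synthetic, constructed so that their coupling disappears halfway through the "scan"; the window width and step are arbitrary illustrative choices, and in practice both strongly affect the estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n_time = 300
# Two toy BOLD time series whose coupling changes halfway through:
# correlated in the first half, independent in the second.
shared = rng.standard_normal(n_time)
x = shared + 0.3 * rng.standard_normal(n_time)
y = np.where(np.arange(n_time) < 150,
             shared + 0.3 * rng.standard_normal(n_time),
             rng.standard_normal(n_time))

def sliding_window_corr(a, b, width=40, step=5):
    """Pearson correlation within a window sliding along two series."""
    starts = range(0, len(a) - width + 1, step)
    return np.array([np.corrcoef(a[s:s + width], b[s:s + width])[0, 1]
                     for s in starts])

dyn_fc = sliding_window_corr(x, y)
print("early vs late mean correlation:",
      dyn_fc[:10].mean().round(2), dyn_fc[-10:].mean().round(2))
```

The resulting time course of correlations recovers the built-in change in coupling: high correlation early in the scan, near zero later, even though a single whole-scan ("static") correlation would blur the two regimes together.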
Due to the number of timepoints in a typical BOLD scan, and the number of brain regions that can be considered in a parcellation scheme, data explosion is one of the main challenges of this type of analysis. Clustering can be performed as a reduction step to identify a small set of brain states and their dynamic properties, either on the BOLD time series or on the association matrices. However, determining the number of brain states to be estimated is subjective, similar to setting the model order for ICA. At 1 hr 36:31 min, Shella talks about these and other caveats to these functional connectivity approaches.
What about structural connectivity?
Structural connectivity is, conceptually, the easiest to understand. We ask: is there a physical connection between one brain area and another? In more neuroscientific parlance, structural connectivity looks at the axonal tissue (white matter) that connects cortical regions to one another. While structural connectivity is what most people would intuitively think of as "connectivity", the approaches to estimating a structural connectome can sound like a mystical art to the uninitiated. At 11:32 min Robert talks about the fundamental requirements for quantifying white matter pathways. The first requirement is to estimate the local fibre orientations in each image voxel using an appropriate diffusion model; the second is to follow the fibre orientations using a reconstruction algorithm such as streamlines tractography. This process creates trajectories through space that ideally reflect the underlying white matter connections. At 19:00 min Robert notes a further requirement: generating a sufficiently large number of streamlines so that the result is reproducible.
Following reconstruction of streamline trajectories throughout the brain's white matter, structural connectome construction proceeds as follows. First, each reconstructed connection is attributed to the appropriate edge based on the parcellation nodes to which its endpoints are assigned. Then, one must choose the metric of connectivity to be quantified for each connectome edge; this could be the number of streamlines within that connection, samples of some quantitative metric along the corresponding streamline trajectories, or estimates of connection density based on more advanced modelling approaches.
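A toy sketch of the streamline-count convention described above. The endpoint labels are hypothetical, and streamline count is only one of the edge metrics mentioned (and a biased one at that):

```python
import numpy as np

n_nodes = 4
# Toy streamline endpoints: each streamline is assigned to the pair of
# parcellation nodes its two endpoints fall in (hypothetical labels).
streamline_endpoints = [(0, 1), (0, 1), (1, 2), (2, 3), (0, 1), (2, 3)]

# Edge weight = streamline count, one common (if imperfect) choice.
W = np.zeros((n_nodes, n_nodes))
for i, j in streamline_endpoints:
    W[i, j] += 1
    W[j, i] += 1     # undirected: tractography cannot resolve direction

print(W)
```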
More information on the general structural connectivity pipeline can be found in the how-to diffusion MRI blog.
Robert Smith explaining how structural connectivity is measured
So ... once a functional or structural network is obtained and the connectome is mapped, what then?!
Once we have the association matrix reflecting either the functional or structural connectome, how do we characterize the properties of the network or do case-control comparisons? These questions proceed via different analyses. To characterize the properties of a network, one typically applies mathematical tools from graph theory.
In many cases, the network (or association matrix) is first thresholded to remove its weakest connections. At 37:22 min Andrew Zalesky talks about the pros and cons of this thresholding process, and at 40:56 min he gives an example of thresholding a network that has weak, moderate and strong connections. An advantage of thresholding is that the weakest connections may be false positives, so removing them provides a clearer and more accurate connectome visualisation. On the other hand, one disadvantage is that the choice of threshold value is arbitrary (explained at 46:15 min) and can strongly impact the resulting graph theory measures. After thresholding, Andrew explains, the network can additionally be binarized, indicating only the presence or absence of a connection between a pair of brain regions; alternatively, keeping the connectivity values without binarizing preserves a measure of the strength of the connection between each pair of brain regions. He further explains the pros and cons of binarization.
Overall there are two classes of thresholding methods: “density thresholding”, which aims to achieve a desired connection density by removing the weakest connections, and “weight thresholding”, which identifies the minimum weight a connection needs to have in order not to be removed. At 47:07 min Andrew explains other alternative approaches to connectivity strength thresholding, such as “consensus thresholding” and “local thresholding”.
Andrew Zalesky explaining how the thresholding process eliminates weak connections
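The two thresholding classes can be sketched as follows, on a random toy matrix; a real analysis would apply them to an empirical connectome, and the 40% target density is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 6
# Toy symmetric weighted connectivity matrix with zero diagonal.
M = rng.random((n, n))
M = (M + M.T) / 2
np.fill_diagonal(M, 0)

def weight_threshold(A, w):
    """Keep only edges whose weight is at least w."""
    return np.where(A >= w, A, 0.0)

def density_threshold(A, density):
    """Keep the strongest edges until the target connection density."""
    triu = A[np.triu_indices_from(A, k=1)]
    n_keep = int(round(density * triu.size))
    cutoff = np.sort(triu)[::-1][n_keep - 1]  # weight of weakest kept edge
    return weight_threshold(A, cutoff)

T = density_threshold(M, 0.4)                 # keep the top 40% of edges
kept = (T[np.triu_indices_from(T, k=1)] > 0).sum()
print(f"kept {kept} of {n * (n - 1) // 2} edges")
```

Note how density thresholding is just weight thresholding with the cutoff chosen to hit a target edge count, which is why comparing groups at matched density is often preferred to a fixed weight cutoff.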
Once the network is identified via thresholding (and possibly binarizing), the next step is to try to understand its organization. An important concept mentioned by Andrew at 50:02 min is that null networks can be used as a benchmark of network organisation under a specific null hypothesis. For example, a random network matched for the number of nodes, edges, and possibly other network properties can be used to test whether a feature of the empirical network, such as path length (the average number of nodes that must be traversed to connect any two nodes), is unusually short or long compared with a random network. If the topology is a lattice or another kind of non-random configuration, the path length will differ from that observed in a matched random network. The Maslov-Sneppen rewiring algorithm, the most common method for generating a random network as a null model, is further explained as a way to generate a random network matched for the number of nodes and edges and for node degree (the number of edges connected to a node). From 57:13 min onwards, Andrew explains other null models, such as the geometry-preserving null model, which accounts for the impact of geometry on network topology, and generative models, which use specific wiring rules to evolve networks in silico. Inference on a specific network property then proceeds by sampling thousands of null networks, enabling quantitative inference via Monte Carlo methods.
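A minimal, pure-Python sketch of Maslov-Sneppen-style rewiring via double-edge swaps, on an invented toy ring network. This is a bare-bones illustration, not a production null-model generator (which would also control convergence and sampling uniformity):

```python
import random

def maslov_sneppen(edges, n_swaps, seed=0):
    """Degree-preserving randomization via repeated double-edge swaps.

    Picks two edges (a, b) and (c, d) and rewires them to (a, d) and
    (c, b) whenever this creates no self-loop or duplicate edge, so
    every node keeps its original degree.
    """
    rng = random.Random(seed)
    edges = [tuple(e) for e in edges]
    edge_set = {frozenset(e) for e in edges}
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        (a, b), (c, d) = rng.sample(edges, 2)
        if len({a, b, c, d}) < 4:
            continue                     # would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue                     # would create a duplicate edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges.remove((a, b)); edges.remove((c, d))
        edges += [(a, d), (c, b)]
        done += 1
    return edges

def degrees(edge_list, n):
    deg = [0] * n
    for a, b in edge_list:
        deg[a] += 1
        deg[b] += 1
    return deg

# Toy network: a 6-node ring plus one chord.
ring = [(i, (i + 1) % 6) for i in range(6)] + [(0, 3)]
null = maslov_sneppen(ring, n_swaps=20)
print(degrees(ring, 6), degrees(null, 6))   # degree sequence is preserved
```

Measuring a property (say, path length) on thousands of such rewired networks gives the null distribution against which the empirical value is compared.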
Reconstructed connectomes can also be compared to assess group differences in connectome properties, or how connectome properties may be affected by treatment or vary with behavior. In addition, connectomes can be used to predict clinical outcomes or treatment responses. There are myriad network properties that can be assessed for differences or for prediction, ranging from connection-level to global properties of the network as Andrew describes at 1 hr 02:07 min. In contrast to connection-level analyses that examine each connection independently, network-specific attributes, such as sub-networks, may better capture differences between patients and controls, as pathology is more likely to impact sub-networks of connected nodes rather than node pairs scattered randomly around the brain. As such, network-specific attributes may also be more useful for predicting clinical outcomes or response to treatments. At 1 hr 05:26 min Andrew talks about how network-based statistics can be used to leverage network-specific attributes to enhance statistical power.
So, a ‘network’ can represent the functional or structural connectivity in the brain, but it can also represent a subset of interconnected nodes referred to as “sub-networks” or “modules”. What are these modules exactly?
The connectome can be conceptualized as a single large network at the macroscale or global level, or as sub-networks at the mesoscale level, or at the level of individual regions or nodes and other local-scale components. Generally speaking, at the mesoscale, ‘sub-networks’, ‘modules’, or ‘network communities’ are used interchangeably to refer to clusters of densely interconnected nodes embedded within a larger network. Though modules can be identified in social, behavioural and cellular networks, in this article the terms ‘modules’ or ‘communities’ will refer to groups of nodes reflecting organisation in the brain network. In network neuroscience, a set of strongly coupled nodes comprise structurally or functionally linked sub-networks. In most applications, nodes are strongly interconnected with each other within a sub-network, but weakly connected with nodes in the rest of the network. Understanding which nodes make up specific modules can provide significant insight into the functioning of the network. For example, changes in modular organization are related to individual differences in behaviour, development, cognitive states and disease liability.
Community detection algorithms are used to discover the network’s community structure to identify subgroups of interconnected nodes. While these approaches are very flexible, detection of communities can be an ill-posed problem. At 2:07 min Richard Betzel talks about the challenges of module/community detection.
One approach to detect communities in single and multi-layer networks is called “modularity maximisation”. In a single-layer network, a set of nodes is connected to each other with edges derived from a single imaging modality at one point in time. In multi-layer networks, nodes and connections can be analysed across another dimension, such as time, imaging modalities, or subjects. Ann further describes how a multilayer network works (42:54 min) and how we obtain this type of network from our data (from 50:37 min onwards).
So you're saying that the point of modularity maximisation is to get a good estimate of the network's community structure?
Yes! Modularity maximisation considers communities to be groups of nodes that show stronger connections to each other within the observed network than would be expected by chance. At 4:55 min Richard explains how modularity maximisation works, and further on gives examples of high-quality and low-quality partitions, in which nodes form strong clusters or are only weakly clustered together, respectively. For the modularity maximisation approach, the quality of the partition is given by the modularity quality function, Q (7:25 min). Maximising Q corresponds to identifying a better partition into sub-networks: if a partition has a greater value of Q because its communities are more internally dense than expected by chance, it is considered to be of higher quality. The idea is that the partition with the greatest value of Q will provide a reasonable estimate of the network's structure.
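The quality function Q can be computed directly from its definition. Below is a sketch for an undirected network using the standard Newman formulation with a degree-based (configuration-model) null term; the toy network and partitions are invented for illustration:

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity Q for an undirected weighted network.

    Q = (1 / 2m) * sum_ij [A_ij - k_i * k_j / (2m)] * delta(c_i, c_j)
    where k_i is node strength and 2m is the total weight of all edges.
    """
    A = np.asarray(A, float)
    k = A.sum(axis=0)
    two_m = k.sum()
    same = np.equal.outer(labels, labels)   # delta(c_i, c_j)
    return ((A - np.outer(k, k) / two_m) * same).sum() / two_m

# Toy network: two 3-node cliques joined by a single bridging edge.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1

good = np.array([0, 0, 0, 1, 1, 1])   # partition matching the two cliques
bad = np.array([0, 1, 0, 1, 0, 1])    # partition scrambling them
print(modularity(A, good), modularity(A, bad))
```

As expected, the partition aligned with the planted cliques scores a substantially higher Q than the scrambled one; Louvain-style algorithms search over partitions to find high-Q solutions rather than evaluating them all.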
A few methodological issues, such as the choice of null model and the resolution used to detect modules, need to be considered. There are different types of null models to choose from, with the choice largely depending on the research question. In his talk (11:57 min), Richard explains how to run the Louvain algorithm with a degree-preserving null model. A challenge with modularity maximisation and other community detection algorithms is that they suffer from a resolution limit: they are unable to detect small clusters, so the modules obtained by maximising Q may not reflect the network's actual mesoscale structure. This is explained at 15:43 min, together with a solution that parameterises the modularity equation with a resolution parameter so that communities can be examined at different scales.
Another factor to take into account for multi-layer networks, which Richard explains from 23:57 min (with an example application to time-varying connectivity at 20:37 min), is whether inter-layer coupling is defined temporally, with time-varying information within the network, or categorically, with no temporal information within the network. The weight of the inter-layer coupling is under experimental control: if the multi-layer network has very weak inter-layer coupling, communities will be very versatile across layers, whilst if coupling is very strong, the communities will emphasise homogeneity with little variety.
Richard Betzel explaining inter-layer coupling in multi-layer networks
Martijn notes that the existence of separate modules cannot explain the complexity of neural connectivity alone (1:01:54 min). Rather, modules need to be integrated with each other through central places (or nodes) of communication. Such nodes that connect with multiple modules to facilitate integration and communication throughout the network are referred to as “hubs”.
What are “hubs”?
Hubs are defined as highly connected nodes in the brain network. As Martijn describes (1:05:40 min), in a random network the probability of any particular edge between two nodes being present is equal for all edges, and as a consequence, the number of edges attached to each node (the node "degree") is typically quite similar across nodes; it is unlikely for some nodes to possess considerably more connections than others. However, in most biological networks this probability is not equal: some nodes have many more connections than others, and these are interpreted as hubs. Martijn explains four metrics of 'hubness' or 'centrality' from 1 hr 07:36 min onwards, defining the centrality of a node via measures such as degree, closeness and betweenness. For example, if centrality is identified by the degree metric, the most densely connected nodes of the network are considered to have the highest centrality. Different metrics can be used to identify centrality in both single-layer networks and, as explained by Ann at 54:16 min, in multi-layer networks.
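Two of these centrality measures, degree and closeness, are easy to sketch from scratch (betweenness needs a little more machinery). The toy network below is invented, with nodes 2 and 3 bridging its two ends; in practice one would use a dedicated graph library:

```python
from collections import deque

# Toy undirected network as an adjacency list. Nodes 2 and 3 sit on
# the paths between the two ends of the network (hub-like positions).
adj = {0: [2], 1: [2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3], 5: [3]}

def degree_centrality(g):
    return {n: len(nbrs) for n, nbrs in g.items()}

def closeness_centrality(g):
    """Inverse of the average shortest-path distance to all other nodes
    (assumes the graph is connected)."""
    out = {}
    for src in g:
        dist = {src: 0}
        q = deque([src])
        while q:                        # breadth-first search from src
            u = q.popleft()
            for v in g[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        out[src] = (len(g) - 1) / sum(dist[v] for v in g if v != src)
    return out

deg = degree_centrality(adj)
close = closeness_centrality(adj)
print(deg)
print(close)
```

Here degree and closeness agree on which nodes are most central, but as Bratislav discusses next, different centrality metrics can disagree, and the right choice depends on what you assume about how signals travel through the network.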
As Bratislav points out at 1 hr 51:31 min, it is important to be aware of the assumptions being made when assigning importance to a particular node. If one assumes that communication happens between nodes and their closest neighbours, then the node with the greatest degree would be chosen as an important node, or hub; conversely, if one assumes that communication happens across the shortest paths between nodes, then the node with the greatest betweenness would be chosen. The links between communication processes and measures of hubness are further explored in Bratislav's talk.
Lastly, once hubs are identified, they can be categorised as either “connector” or “provincial” hubs, depending on the role of the node. A connector hub is a node that provides connections across modules or brain sub-networks (inter-modular connections), whilst a provincial hub is a node whose strong connections lie with other nodes within its own module (intra-modular connections). Martijn talks about this categorisation of hubs, and quantitative measures reflecting node categorisation, from 1 hr 16:19 min onwards. He further explains the two main types of interactions that may occur among the brain’s anatomical hubs, referred to as “core” and “rich club” interactions. Whilst the core refers to nodes that share the same degree (or number of connections) amongst each other, the rich club refers to the tendency of regions with high degree to connect densely among one another. Some limitations and caveats regarding hubs, such as the effect of community sizes on the degree of connections in modules, are also mentioned from 1 hr 32:28 min onwards.
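The rich-club tendency Martijn describes can be computed directly. The sketch below uses networkx on a toy preferential-attachment graph (our choice of tooling and graph model, not the speakers'), which develops genuine hubs by construction:

```python
import networkx as nx

# Barabási–Albert preferential attachment: a handful of nodes end up with
# far higher degree than the rest -- a toy stand-in for a hub-rich network.
G = nx.barabasi_albert_graph(n=200, m=3, seed=1)

# Raw rich-club coefficient phi(k): the edge density of the sub-network formed
# by nodes with degree > k. A phi(k) that rises with k means high-degree nodes
# connect densely among themselves -- a "rich club".
phi = nx.rich_club_coefficient(G, normalized=False)

k_lo, k_hi = min(phi), max(phi)
print(f"phi at k={k_lo}: {phi[k_lo]:.2f}; phi at k={k_hi}: {phi[k_hi]:.2f}")
```

In practice, the raw curve is compared against degree-preserving random rewirings of the same network (`normalized=True` in networkx) before claiming a rich club exists, since high-degree nodes connect to each other more often by chance alone.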
And what do all these complex topological properties mean for brain activity?
In her talk, Petra Ritter provides an overview of biophysical models of large-scale brain dynamics. These models describe the activity of each brain region using biophysically informed differential equations that capture the aggregate dynamics of populations of mutually interacting inhibitory and excitatory neurons. These populations are then ‘linked up’ according to the inter-regional connections defined by an empirical connectome dataset. The models offer a way of simulating brain activity, to investigate how variations in structural connectivity affect function and to test hypotheses about the mechanisms that drive observed activity recordings. The Virtual Brain offers a powerful, open-source platform for exploring these models.
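The “link regional dynamics through a connectome” idea can be illustrated with a deliberately simple model. The sketch below uses Kuramoto phase oscillators coupled through a random symmetric matrix standing in for a structural connectome; this is a far cruder model than the neural mass equations Petra describes (and far simpler than The Virtual Brain), but the structure is the same: local dynamics per region, coupled by structural connectivity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                    # number of brain regions
W = rng.random((n, n))                    # toy "structural connectome"
W = (W + W.T) / 2                         # make coupling symmetric
np.fill_diagonal(W, 0)                    # no self-connections

omega = rng.normal(1.0, 0.1, n)           # intrinsic frequency of each region
theta = rng.uniform(0, 2 * np.pi, n)      # initial phase of each region
K, dt = 0.5, 0.01                         # global coupling strength, time step

# Euler integration of dtheta_i/dt = omega_i + K * sum_j W_ij * sin(theta_j - theta_i)
for _ in range(5000):
    coupling = (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * (omega + K * coupling)

# Order parameter r in [0, 1]: how synchronised the regions' phases are.
r = np.abs(np.exp(1j * theta).mean())
print(f"synchrony r = {r:.2f}")
```

Swapping `W` for an empirical structural connectivity matrix, and sweeping `K`, is the basic experiment such models enable: how does changing structural connectivity reshape simulated functional dynamics?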
So, what do I do if I’m still a bit confused but want to learn more about network science?
It’s understandable to feel confused when trying to wrap your head around connectome construction and network analysis. As mentioned in the previous paragraphs, graph theory can be used to further our understanding of the organisation of the brain network. However, there isn’t a set roadmap for defining brain nodes and edges, or for detecting a network’s modules or hubs. Though this can make it tricky to be sure the right decisions are being made, it also allows researchers to tailor the process to their specific research questions. For example, though there are general good practices for defining nodes (such as replicating findings using different parcellations), there is no single optimal approach that should always be used, because the best choice depends on the research question being addressed.
There are several online resources on network neuroscience, including two textbooks: the first by Albert-László Barabási (2016) found here, and the other by Alex Fornito, Andrew Zalesky and Ed Bullmore (2016) found here.
Some papers that provide a good overview of concepts in network neuroscience are:
By Peter Bandettini & the OHBM Neurosalience production team
In this episode of Neurosalience, Peter Bandettini meets the OHBM Open Science Special Interest Group (OS-SIG). Together they discuss the history of the OS-SIG and the unique, important role it plays in OHBM. The group fosters open science not only by encouraging best practices and the sharing of data and code, but also by encouraging inclusivity in science and open-ended discussion in a supportive environment.
Aki Nikolaidis is chair of the OHBM Open Science Special Interest Group, and served on the Student-Postdoc Special Interest Group from 2017-2019. He is a research scientist at the Center for the Developing Brain at the Child Mind Institute in New York City. His research focuses on using advanced statistical methods to better understand psychopathology, brain organization, and cognitive performance. Aki's work in neuroscience, psychology, and machine learning has been featured in over 20 peer-reviewed publications.
Janine Bijsterbosch is chair-elect of the OHBM Open Science Special Interest Group. She is Assistant Professor in the Computational Imaging section of the Department of Radiology at Washington University in St Louis. The Personomics Lab, headed by Dr. Bijsterbosch, aims to understand how brain connectivity patterns differ from one person to the next by studying the “personalized connectome”. Using open data resources such as the Human Connectome Project and UK Biobank, the Personomics Lab adopts cutting-edge analysis techniques to study functional connectivity networks and their role in behavior, performance, mental state, disease risk, treatment response, and physiology. In addition, Dr. Bijsterbosch wrote a textbook on functional connectivity analyses, which was published by Oxford University Press in 2017.
Johanna Bayer is secretary-elect of the OHBM Open Science Special Interest Group, treasurer-elect of the OHBM Australia chapter and an active member of the OHBM Sustainability and Environmental Group. This year is also her second year behind the scenes helping to organize the OHBM Brainhack. Johanna, who is originally from Munich, Germany, has a background in psychology, neuroscience and computer science and was awarded the Melbourne Research Scholarship in 2018 to pursue her PhD at the University of Melbourne. Her PhD work focuses on creating a normative model of the brain from neuroimaging data to study depression, and on a method to harmonize site effects in large neuroimaging datasets. In addition to her passion for her work and for Open Science, Johanna also loves dancing, rock climbing, computer science and cats.
Katie Bottenhorn is the secretary of the OHBM Open Science Special Interest Group. She is a graduate student who is interested in how large-scale brain network topology varies both between- and within-individuals over the course of everyday life. She is especially interested in how hormonal fluctuations associated with the menstrual cycle and hormonal contraceptives contribute to this variability, and how this differs with respect to changes in sleep, exercise, and stress.
Melvin Selim Atay is the inclusivity officer of the OHBM Open Science Special Interest Group. He is working at the Middle East Technical University, Ankara, Turkey and his research focus is explainability of deep learning.
By Nabin Koirala
In light of her upcoming Keynote lecture at OHBM2021, we wanted to get up close with Anna Wang Roe. Dr. Roe is currently the director of the Interdisciplinary Institute of Neuroscience and Technology at Zhejiang University, China. In the interview, we talked about her academic journey and were lucky to hear some backstage stories to get to know Dr. Roe even better.
Nabin Koirala (NK): I would start by saying thank you for making time and agreeing to the interview. I am very excited to find out more about the keynote lecture that you are planning for the annual OHBM meeting 2021, but also about you and your personal experience in research more generally. To start with, maybe you could introduce your research to our readers, not all of whom are scientists.
Anna Wang Roe (AWR): I've been trying to understand, for a long time now, what it is about our brain that makes it capable of doing abstract things like sensation, cognition and emotion. I've been mystified by the fact that the brain as a structure is physical, but still able to generate these abstract behaviors. So how does abstraction arise from a physical entity like the brain? And after all these years now, I believe that I may have an approach that will lead us to an answer.
NK: That’s fascinating. Could you maybe also tell us a bit more about all those years you mentioned, or in other words your journey in science so far?
AWR: Sure, let me go through a little bit of my travels through these ideas. So, I started off in college as a math major at Harvard, where I took a course on logical systems with elements, theorems and rules. This led to a lightbulb moment in which I suddenly needed to know whether the brain qualifies as a logical system. So, kind of overnight, I turned myself into a biologist and filled my senior year with neuroanatomy, neurophysiology and laboratory work, which I immediately fell in love with. So much so that I ended up as a graduate student at the Brain and Cognitive Sciences department at MIT. My mentor at that time, Mriganka Sur, gave me a really great project on brain development. The question we were asking was whether the visual cortex was visual because of the cortical tissue itself or whether it was visual by virtue of the inputs that it gets? In some sense it was a very philosophical question. In the experiment, I rerouted the visual inputs from the eye into what normally is the auditory pathway to observe whether the auditory cortex would behave like a visual cortex, or whether it would still behave like an auditory cortex. Using electrophysiological methods, I found, lo and behold, that this cortex was definitely more visual in the sense that it contained a visual map and contained cells that were orientation selective. This was a very big finding with huge impact (even yielding a couple of Science papers!) because it showed that the target structure had an internal inherent circuitry that processes whatever it receives as an input, suggesting there are standard canonical cortical circuits. Importantly, this result had a huge impact on me personally because I was very attracted to the idea of cortical columns, made famous by Nobel laureates David Hubel and Torsten Wiesel, as fundamental elements of very systematic architecture to the brain. I was really attracted to the computational, genetic and developmental efficiency of this architecture. 
So I wanted to know whether columns are ubiquitous in the brain, and if so, what are the rules that govern their organization and their connectivity? Are the observed functions in the brain a result of a system of such elements and rules? These questions then drove my research and my career in science because if they were to be true, then this would suggest that the brain really has some machine-like qualities, bringing me back to my original question of whether the brain is a logical system. I hope to find the answers to these in the coming years.
NK: That’s very interesting. So, at what stage are we in this path of getting the answer? As you mention in your webpage, do you think we are close to developing a mind-machine interface and how far are we from being able to modify it to enhance brain function or human behaviors for therapeutic purposes? It would be great if you could give us more insight with your current research focus in achieving these goals.
AWR: I chose non-human primate brains as an animal model because of the similarity to humans. They have a similar visual system, they use their hands for manual behaviors as we do, and their brain architecture, including columns, is very similar to that of humans. We train monkeys to do behaviors, and then image their brains while they're performing different visual tasks to understand the functional role of columnar units in vision. Because these functional units are very small, in a sub-millimeter range, we developed imaging techniques which provide high spatial resolution and help us accurately map the brain. In the last three decades, I've mapped these columns throughout the brain, particularly in visual and somatosensory areas, but also in motor and prefrontal areas. And based on the findings from these studies, I believe that these columns exist in at least 80% of the cortex.
The next step was to study how these units are connected to each other in networks, something that really no one has done systematically and at this columnar scale. I believe the word connectome is probably a familiar term referring to all the connections in the brain. There are a lot of connectome projects throughout the world and many millions of connectome dollars have been spent. But ours is the first in primates at mesoscale (or columnar scale).
To give you an example of why imaging spatial resolution is important, let's say you and I are standing next to each other and I am talking to Person A, and you are talking to Person B. Now, if a method could not resolve you versus me, then it would appear that both you and I as a unit are speaking to both A and to B, which would be incorrect. So the importance of high spatial resolution and developing methods to achieve such resolution cannot be overstated. The method I will talk about at OHBM is called INS-fMRI (or infrared neural stimulation with fMRI). This is an optical stimulation method that activates neurons but, unlike optogenetics, does not depend on viral transfection. This new method directly stimulates neurons with infrared light in a way that doesn't harm them, and at the same time, activates them effectively. With this method, we’ve shown in our 2019 Science Advances and our 2021 Neuroimage papers that brain networks are indeed based on activations at columnar scale. I'm now planning to apply this technique across the brain, systematically and hope to learn about the overall architecture of connections in the brain. I predict this focus on the fundamental units of processing will strongly impact the world of brain machine interface, medicine, and AI. That is, to effectively interface with the brain, you must understand its basic architecture.
NK: I hope that happens soon so that we could actually have the mesoscale level connectome. I believe this is also part of your upcoming keynote lecture, so let me try to understand this a bit more. While explaining the advantages of this method, we talked a lot about the spatial resolution which could be in the sub-millimeter scale, but what about the temporal resolution? Isn’t this also an important aspect to understand these functional units’ behavior? Is the temporal resolution in your method somehow better than established imaging techniques like fMRI?
AWR: That’s an important issue and I am glad that you brought it up. There is definitely a limit to the temporal resolution. So, with ultra-high field imaging, you could gain a lot in terms of signal to noise ratio, but the temporal resolution is still limited to that of the BOLD signal which is on the order of seconds. So to study the rapid dynamics of these units, you need to add other methodologies such as electrophysiological recordings. As my papers will attest to, I’m a true believer in multimodal solutions to challenging questions.
NK: So, are you already exploring the possibility of combining all three techniques – infrared neural stimulation, fMRI and EEG (for example)?
AWR: Yes, absolutely and this is exactly what we're planning to do. Right now, we have the INS and fMRI in anesthetized and behaving monkeys, and have already developed electrodes that can record during the MRI. We are investing resources into EEG recordings covering the whole brain to provide a more complete picture of the temporal aspects, so bit by bit we hope to crack this nut!
NK: Do you think it might have a translational possibility to humans at some time point?
AWR: I wasn't going to raise this but yes, we are already starting to explore this possibility with some neurosurgeons. You know, having interfaces with the brain is an idea that is becoming more widely accepted. People are walking around today with stimulators in their brain, for example, deep brain stimulation for Parkinson's patients, psychiatric conditions, cochlear implants etc. The whole field is still developing. So, we are working on that direction and I think it will happen sooner than we expect.
NK: That's fascinating to hear, and I believe forthcoming technologies for deep brain stimulation, like high-intensity focused ultrasound with its possibility of non-invasive stimulation, will expand the possibilities even further. Turning to your experience in academia so far - you have worked at many different universities and eventually moved to China from the US. Could you maybe walk us through what drove this decision? Is it actually like the media portrays, that the weight is shifting to China in terms of the research resources available?
AWR: Well, one of the main reasons I moved around was because of the technology that was available at the places that I went to. For me, science and technology go hand in hand, so to answer the questions I have, I need new technologies. For example, as we talked about earlier, we need higher spatial and temporal resolution to be able to answer the connectivity problem, and the technical possibilities to be able to stimulate the brain in specific ways. So, that is my main attractor, and it was also the reason I moved from Yale to Vanderbilt: Vanderbilt was establishing a new MRI center and a whole new group around it. I benefited a lot from working with them at Vanderbilt.
Now regarding my move to China, I believe it was a combination of things. In 2012, I took a sabbatical in China and visited many universities and institutions, where I discovered Zhejiang University here in Hangzhou. I was really wowed by how collaborative people were here, and on top of that the city is truly one of the most beautiful in the world! The collaborative atmosphere made me believe that it would be a great place for an interdisciplinary institute. So I made the proposal to the university to set up a neuroscience and technology Institute and they went for it, so that's how that started. It's been a gradual transition moving from US to here and in the meantime, I also joined Oregon National Primate Research Center in a half time position. ONPRC is a huge primate resource with over 5000 monkeys. It's absolutely amazing. The resources they have, like a dedicated MRI for monkeys and lots of NHP expertise in the campus, attracted me to get involved there. But eventually, my projects in China - especially the connectome project - required my full time. Here in China, I’ve got students of different backgrounds in my lab, including medical science, optical engineering, computer science, biomedical engineering. It’s a real collaborative team effort! And as the institute is growing with about 15 PIs, an MRI center with 3T and 7T, nonhuman primate facility, two photon and three photon imaging facilities, and multiple teams of researchers working here, it is difficult to get out. (laughs)
NK: That's great. It's very rare that you can gather all these technologies in one institute and focus on your research question. Also, it’s good to know that this kind of research is still possible, because with the recent incidents in Tübingen, Germany and at a couple of labs in the US, scientists were thinking twice before starting primate research. Anyway, as you now have research experience in both China and the USA, could you tell us something about similarities and differences in terms of research culture?
AWR: Well, there's definitely differences but I would say that, as far as the quality of research goes, the best research here is as good as the best research in the US. I have found not only are the scientists excellent here, but the students are fantastic as well. They're very motivated, hardworking and inquisitive and that's important. The students here though have different backgrounds than those in the US because for the students here it’s been a competition at every level: they have to be the best students to get into good middle schools and the best high schools and then on to college. So, by the time they get to join the university, they are really the cream of the crop. It’s been really a privilege to be working with them. They might not be thinking outside of the box enough and opening themselves up as much as the students in the US because of their culture, but I challenge them on this front. They also benefit from our international profile, as we have researchers from different cultures, including non-Chinese speaking scientists from different parts of the world.
NK: I think this is an interesting point that you brought up regarding the non-Chinese scientist coming to China. So, how is the trend now compared to, say, 10 or 20 years back? Do you think it is increasing given China’s investment in science? Or how do you think it will go on in the future?
AWR: I would say that it was really good for a while. There were a lot of foreign scientists coming here and Chinese scientists going abroad. But in the last few years, particularly since the Trump administration, it's really kind of shut down. I would say in the last three years it's changed drastically. And now the feeling is that, well, you might get in trouble if you collaborate, which is not a good feeling to have. I became very sensitive because I am working on both sides of the ocean. So, it has become difficult these days to collaborate or develop technologies collaboratively. Even NIH has become sensitive towards these interactions. This is not good for either side. According to my colleagues and friends in the US, they are having such a hard time finding postdocs to fill their labs. On this side, the students are not able to gain the exposure and experience because these days they cannot get visas to go to the US. I am sure this is a passing thing and science will prevail.
NK: Hopefully this political tension will improve soon, and they will leave the scientists to do science again. So, we have talked a lot about science, and as we are nearing the end, I would like to talk about some of your personal interests. Let’s start with your hobbies: what kind of hobbies do you have outside of science?
AWR: Well, I have to be honest, I don't have a lot of free time (laughs). But what I truly enjoy is cooking. That's something my husband and I do a lot together. We enjoy that and having friends over, trying different recipes and enjoying the time. The other thing I enjoy is art and nature. So, whenever I get the chance, I go to galleries and museums, or outdoors in nature. The city is truly beautiful and with emphasis on nature, so I enjoy that a lot here.
NK: Sounds great. So, as this blog is read by lots of young scientists, what would be some suggestions you would like to give them, not only to be successful but also to do good research?
AWR: Okay, I generally don't prefer to say this is the way you should do things. But if I had to answer that question, I would say to do good science, follow your heart and your intuition. Often, people think of scientists as these logical beings, but sometimes some of your critical decisions are really from the heart. If it works for you, you should go for it. Another useful quality is being able to fight and be persistent. If somebody says you’re wrong, you should not give up, if an editor rejects your idea, you have to tell the editor to reconsider - I know that's not always easy though (laughs), but you should really stand up for yourself. It doesn't have to be obnoxious, but persistence has to be there. A third suggestion is probably a more practical aspect of science, which is money. I would say, always plan for a rainy day and use your funding wisely. The last thing I would like to add to that is you should always know that science is full of different personalities and people with different backgrounds. So, as a supervisor it's good to try to figure out what somebody is seeking and what they are good at and try to use that to their advantage and as well as your own advantage. It's not always easy to do but everyone's got skills and talents that can be useful in science.
NK: These are great suggestions, thank you. Maybe a bit of fun now, as you've been working in neuroscience for so long, what would be your favorite brain region and a toolbox?
AWR: I guess for the favorite brain region, I'm biased by the brain region I'm studying now. I've been studying sensory systems for a long time and trying to understand the hierarchy of the processing within a sensory system. Now I'm in an area called the amygdala, which is a part of the limbic system and primarily processes emotions. And this is a region I am really in love with now because this is an area that's tied to all different parts of the brain, including your sensory, motor, cognitive regions up to the autonomic and visceral systems. And it amazes me every time thinking how this tiny region could handle all this complex integration of different information. This area not only filters your incoming information through cognitive and emotional filters to make a final output or so to say generate feeling, but at the same time it also controls your physiology, including your heart rate, your breathing and all sorts of things. I should mention that this will be a part of my keynote talk in OHBM this year as we are studying these connections using the INS-fMRI technique. For the toolbox, I am not using any actively at the moment but my students and postdocs love AFNI. Personally though, I still love mathematics and most of the time I am still wondering how to develop a math theory of the brain. Recently I am trying to think more about recursion and how that might bring succinctness to our brain architecture. So from that sense I am more interested in basic equations rather than toolboxes (laughs).
NK: That’s great. I think it would be incomplete to conclude this interview without mentioning the pandemic. As there are lots of discussions about productivity, how it has been impacted, how it should be handled and so on. So, maybe you could say how it had impacted your research and research in general.
AWR: Well, last year in 2020, I spent 9 months in the US as I got stuck there when the pandemic hit. I could not do lab work and mostly stayed home, but I became very productive writing papers that were backlogged. So, it still was a good use of time. When I returned to China, I immediately felt that I had landed in a different world. Society was functioning pretty much like normal; because they were able to control the pandemic here, the return to normal was fast. In the US, even though things slowed down for a while, with the COVID vaccinations going well, things are picking up and returning to normal, too. People have found ways to adapt, and sometimes challenging times can also be good for science. For example, I was recently invited to a meeting organized by PRIME-DE (the international primate data exchange group), and there were no presentations or talks but instead just primate researchers getting together for discussions. I really loved that, and the response to the pandemic showed that we can be resourceful and find alternative solutions.
NK: I would like to say thank you again for talking with me and sharing your experience with us. Looking forward to your talk.
AWR: Thank you so much for this opportunity. It was a pleasure talking with you and I wish you great success in your research.
By Peter Bandettini & the OHBM Neurosalience production team
In this episode Peter Bandettini finds his similarities with Professor Nik Weiskopf. They are both physicists working at the interface of MRI acquisition and brain physiology and function, extracting ever more quantitative information about how our brains are organized, how they change with disease and how they vary between individuals. We find out about Nik’s professional history, get into real-time fMRI neurofeedback, and discuss his work on quantitative MRI for deriving maps of myelin, iron content, fiber tract direction, and more. All of this is done at unprecedented resolutions using their unique Connectome scanner - one of only four in the world.
Professor Nik Weiskopf is the Director of the Department of Neurophysics at the Max Planck Institute for Human Cognitive and Brain Sciences. He received his Ph.D. in 2004 from Graduate School of Neural & Behavioural Sciences and International Max Planck Research School in Tübingen, Germany.
In 2004 Nik moved to the Wellcome Trust Centre for Neuroimaging at University College London in the UK. He quickly moved up to head the Physics Group there in 2006, becoming Senior Lecturer (equivalent to an associate professor) in 2009. He became full professor of MRI Physics at the UCL Institute of Neurology in 2014 and subsequently moved to the Directorship of Neurophysics at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig in 2015.
The Neurosalience production team consists of Anastasia Brovkin, Katie Moran, Nils Muhlert, Kevin Sitek, and Rachael Stickland.
By Rosanna Olsen, Valentina Borghesani and AmanPreet Badhwar on behalf of the OHBM Diversity & Inclusivity Committee
OHBM initially launched a “Diversity and Gender Task Force” in 2017 to address the growing need to recognize and address multiple forms of inequity with respect to gender balance and geographical representation on the OHBM Council.
Since 2017, this initiative has worked towards tackling a range of issues surrounding underrepresentation at OHBM. The task force has evolved into a standing “Diversity and Inclusivity Committee” that meets regularly to ensure that the needs of the diverse OHBM community are adequately represented at all levels of the organization and in all of its activities. Not surprisingly, members of the Diversity and Inclusivity Committee liaise with all OHBM Committees and SIGs throughout the year.
During this year’s virtual meeting we are planning a series of different events to highlight and celebrate issues of diversity and inclusivity in OHBM.
The 3rd annual DIC symposium: Racial Bias in Neuroscience
This is the third year OHBM will feature a symposium devoted to the discussion of issues in diversity that affect our society. In 2019, speakers presented issues of gender equality present at the OHBM meeting, in its leadership, and across the field of human brain mapping. Speakers provided an overview of gender imbalances in academia, including citation counts, conference speakers, and prize awardees. The issue of biologically versus culturally driven sex differences in brain anatomy was also discussed. In 2020, speakers discussed neuroscience issues pertinent to the LGBTQ+ community, such as the underrepresentation of LGBTQ+ individuals in STEM careers, the complex nature of gender/sexuality, and how these factors are addressed in research.
This year’s symposium will explore issues of racial bias in neuroscience. People of color are not well represented in academia, especially at senior levels (e.g. Full Professor). This year our speakers will discuss issues such as systemic racism and other factors leading to the underrepresentation of Black and Latin scholars. Moreover, as in many areas of science, the majority of human brain mapping research is conducted in homogeneous, non-representative populations, which is problematic for the interpretability and generalization of research findings. Strategies for promoting equity within the field of human brain mapping, overcoming current barriers for ethnic minorities in OHBM, and increasing the ethnic diversity within our research samples will be discussed.
This year’s Diversity Symposium speakers are: Dr. Sharleen Newman (University of Alabama), Dr. Yakeel Quiroz (Harvard University), and Sade Abiodun (Princeton University).
The 2nd edition of the Kids Review - this time multilingual!
After the success of the Kids Review in 2020, which led to five Frontiers for Young Minds papers, this year our kid-friendly initiatives are scaling up! We opened up to young reviewers from all over the world by reaching out directly to schools and, critically, by offering the experience in five different languages. Children speaking Korean, Spanish, Portuguese, French, or Italian will be able to act as reviewers for five different papers presented by OHBM members. Whether in school with their teachers or at home with their parents, kids will be able to watch the prerecorded talk in their native language (with subtitles either in their native language or in English) and then ask questions directly to the researchers on the day of the live Q&A during our annual meeting.
This monumental effort required a full team of volunteers across three continents and more than seven countries; some are OHBM members, others educators, scientists, or teachers. In alphabetical order they are: Inês Almeida, Aman Badhwar, Giulia Baracchini, Valentina Borghesani, Eunji Cho, Antea D'Andrea, João Duarte, Mariana Flores, Eduardo Garza-Villarreal, Kirk Geier, Soo Hyun Kim, Hyang Woon Lee, Kangjoo Lee, Anaïs Llorens, Laura Marzetti, Rosanna Olsen, Pedro Pinheiro Chagas, Ana Luísa Pinho, Maria Ribeiro, Viviana Siless, and Athina Tzovara. We all came together united by the desire to bring the latest scientific output of our community to the bright, curious, young minds of future scientists!
Too much fun to miss out on? You can still register via our online form & check out the details on our standalone website!
Our paper in NeuroImage and associated Diversity Roundtable at OHBM 2021
The OHBM Diversity and Inclusivity committee published our first paper in 2021. In this paper, we reviewed the composition and activities of the Diversity and Inclusivity Committee that have promoted diversity within OHBM, in order to inspire other organizations to implement similar initiatives.
We strongly believe that these activities have brought positive change within the wider OHBM community, improving inclusivity and fostering diversity (e.g. sex distribution of council members and speakers provided as an example below) while promoting rigorous, ground-breaking science. Limitations and challenges faced by our committee and the way forward are also discussed in this paper.
Sex distribution for committee members and Talairach lecture and keynote speakers. Dark colors highlight the number of women; light colors indicate the total number of individuals. Dotted lines mark the year 2016, when the DIC was established. Image from Tzovara et al., 2021.
During the 2021 annual meeting, we will be holding a Diversity Roundtable to showcase the findings in this paper and discuss with the community how to further improve equity, diversity, and inclusivity at OHBM.
The Diversity & Inclusivity Committee can’t wait to share this (and more!) with you during the annual meeting! Let’s recap (click here for a D&I-specific Google calendar): We’ll see you there.
This year’s OHBM conference is quickly approaching, including two consistent conference highlights: the OHBM Brainhack and Open Science Room (OSR). To learn more about what we can expect from this year’s events, I spoke to some of the current leadership of the OHBM Open Science Special Interest Group (OSSIG).
Current chair Aki Nikolaidis and secretary-elect Johanna Bayer joined us to talk about ongoing initiatives within the OSSIG, while hackathon co-chairs Matteo Mancini and Tibor Auer and OSR co-chairs Gabriel Gonzalez-Escamilla and Eva Alonzo-Ortiz told us about the upcoming events.
We had a long and thoroughly enjoyable discussion, which I encourage you to check out in its entirety. Below are some highlights from our conversation, condensed and edited for clarity.
I'm a longtime fan of the Open Science Room and the Brainhack, so I'm very excited to hear about everything you’ve been up to. The OSSIG events are some of my absolute favorite parts of the conference.
To start us off: I try to keep up with the OSSIG on Twitter (https://twitter.com/OHBMOpen), and I've seen that you launched new Open Science fellowships. Can you tell me a little bit about these fellowships?
The fellowships were envisioned to recognize early career researchers who were committed to and contributing to open science. The idea was to give monetary awards to work that is traditionally unrewarded; for example, community building or tool development. We also see it as a way to diversify who can engage in these events. Historically, the open science community has been predominantly white, predominantly North American and European.
We’ve closed our first round of applications, and we were encouraged and inspired by all of the applicants. Our first class of fellows are really impressive.
One thing we’re excited about, too, is to engage the fellows more directly in OSSIG events. There's a lot of volunteer work that happens during OHBM and at OSSIG events. We’re asking fellows to help as teaching assistants during the Brainhack. So we’re lucky to have fellows who are knowledgeable about the different topics that we're going to be teaching in the train track [educational sessions] of the hackathon.
Speaking of Brainhack, it's coming up soon! Could you talk about what this year's hackathon will look like?
Last year’s event was already an incredible success, and we’ve really tried to build on that work. Something we want to focus on more this year is engaging people who may not be interested in coding. We also want to have a conversation about the fact that historically—and unfortunately—the audience of Brainhack tends to be predominantly white, straight men. We want to create a more diverse community, and we’ve planned a session to discuss what we can do to move towards that goal.
For those who are interested in coding, QMENTA has agreed to provide access to their platform during the event, whether for a project that wants to run on their resources or for people who want to organize a multi-group challenge that can continue after the hackathon. We hope these additional resources give people the chance to not only dream big but to actually go for it.
That sounds really exciting. How can folks access these sessions and resources? I know that this year the OHBM conference is going to be on the Sparkle platform. How will the train track and hack track be adapted to these virtual spaces?
We are working with Sparkle to try and have the hackathon there, though this isn’t yet finalized.
But more generally on access: In thinking about how to make sure that this event is accessible, we’ve been inspired by cakes. A good cake needs to have multiple layers, and we want to have multiple layers in which people can engage with the Brainhack event.
For the TrainTrack [educational sessions], the first layer is pre-recorded videos that can be accessed during or even before the event. These will cover topics like version control, data visualization and machine learning, and reproducible workflows—these will leverage recorded sessions from some of the many wonderful events that have gone virtual over the last year. We’ll also have pre-recorded content on community building, as part of the non-coding focus I mentioned.
Another layer of engagement will be live Q&A sessions where we will have moderators to take questions from the audience and guide an overall discussion on a given topic. We hope that this will encourage anyone who wants to ask or answer a question to have an opportunity to engage with the Brainhack community.
The last layer connects the hackathon with the OSSIG fellows. OSSIG fellows will lead small group sessions that will be repeated through the days of the hackathon. And people will actually be able to solve problems with the tools that they want to learn about, explicitly inspired by the Neuromatch Academy experience.
In the hack track, participants will be working together on individual projects. How are you envisioning that participants will work together from different sides of the globe on the same project?
We learned from last year’s event to structure around different time zones. We came up with two time zone slots: “Atlantis” for Europe, Africa, and the Americas, and “The Rising Sun” for Australia and Asia-Pacific. That said, we don’t expect projects to stick to just one time zone; for example, someone from Australia can work on one particular project and when they go to sleep, someone else in Europe can start on it.
I'm a huge fan of collaborative working, and I think the virtualization of the event allows more people to get involved. We really do hope that this can happen on the Sparkle platform, but we’re exploring other options as necessary. And of course, the actual work is probably happening through GitHub or other collaborative platforms.
The HackTrack is one of the most creative parts of the conference and a great show for all the geekiness from our community. I’m looking forward to seeing the new things folks come up with.
In the past, it’s been unclear how to recognize the work that happens at the hackathon. Now that OHBM has its own publishing platform, Aperture, do you have any plans for hackathon proceedings—maybe in partnership with Aperture?
We are in talks with Aperture about hosting the hackathon proceedings, which would highlight all of the different hackathon projects. That said, we haven’t yet finalized submission requirements or clarified the eventual format. But we hope that hackathon participants will eventually be able to submit their projects for publication. We’re not envisioning full length manuscripts, but short summaries for each of the projects: what they did, what they tried to do, what they found, etc.
I'm so looking forward to this year's Brainhack! But of course, open science doesn't stop at the Brainhack, right? It goes right into the main conference with the Open Science Room (OSR). Can you tell us how you’re adapting the OSR in 2021?
Stepping in as OSR co-chairs is a hard challenge, because the programming has consistently been very good so far. This year, I’m excited to announce that the OSR will be a real part of the OHBM program. Traditionally we’ve been an auxiliary event, so joining the main program on the Sparkle platform is an exciting evolution.
As for the OSR program itself, we’ll have four main kinds of events: panel discussions, educational sessions, emergent sessions, and a BIDS town hall.
The panel discussions are organized around some core topics. We solicited contributions from the community for ideas within those themes, and we actually just closed those calls. We’re now working to finalize which panelists will be presenting. We also have 10 educational sessions, ranging from introducing the basic tools and motivation for doing open science to dealing with failures in the current system.
Our emergent sessions have the most open format, as our goal there is to provide the space for folks to launch their own conversations rather than to specify a general topic. We will generally be looking for topics related to open science and adjacent ideas like diversity and accessibility. But attendees are free to propose topics, and they can do so directly by email or on our website during the main conference.
A few years ago, Dan Lurie organized an emergent session which was kind of an MC’ed group discussion. Our goal is to bring people together like that and discuss things, like, for example, problems in transitioning from academia to industry. But the important thing is that participants decide the topics, and we leave the floor open.
For the panels, on the other hand, we had specific topics we wanted to cover. We chose those topics based on community surveys, so those are some of the topics that people were really interested in. And also, we felt like we wanted to move the Open Science Room a little bit away from just being about coding to being more about community building, inclusivity, diversity and related ideas. So we wanted to address those topics as well.
One other session type you mentioned is the BIDS town hall. BIDS is the Brain Imaging Data Structure, a community-driven standard for organizing neuroimaging data. It’s still in active development, with many ongoing extension proposals. Will the town hall be a space to learn more about this work?
It will be an opportunity to have an overview of the current status of BIDS from the point-of-view of the BIDS steering committee. We’re happy to provide the OSR as a platform for this town hall, and attendees will be able to hear more about ongoing developments in this space.
For both the hackathon and the OSR, how can we get involved? It sounds like the goal is to cast as wide a net as possible, so I imagine there's lots of work to do. And if people want to volunteer and join in, what can they do at this point?
First, just to join the events themselves: you don't have to be attending the OHBM conference or be an OHBM member. So if you're online, you're interested and you want to learn or contribute, you can just come. All of our activities are as open and inclusive as possible. We have a $25 fee for the hackathon, but if you come from a low or middle income country, or you can't pay for any reason, we can give you a reduction or waive the fee completely.
To volunteer: you can reach out to us in a lot of different ways. You can go to our website. You can email us at ohbmopenscience AT gmail.com. You can reach out to us on Twitter or on the Brainhack mattermost.
We're looking for volunteers to help run both the hackathon and the open science room. We don’t need people to commit to long days, since the events themselves are on a distributed schedule, but we do need people in different countries to join us to help run these different events. If the events don’t work with your schedule, we always need help with our websites, too. We're going to be doing our best to dynamically update our websites and our content on the Sparkle platform as these events roll out.
If anyone who is located in the Asia-Pacific or “Rising Sun” slot is inspired to join, we’re specifically looking for people from those time zones to help run these events!
On the hackathon website, we link out to our volunteer form, which points to the actual descriptions of the volunteer roles. So we’re trying to be completely transparent: you can see exactly what you will be doing before you sign up.
And for the OSR, we’re looking for volunteers to attend and monitor the emergent sessions and event chats. This is a great way for people who are not attending the main conference to interact with the speakers. On the OSR website, we have a ‘Volunteer’ section with descriptions of the roles and a chance to sign up. We also ask whether people are interested in volunteering directly in the OSR registration form, and we’re happy to have people join in at any point!
If you are new to open science, volunteering with the Open Science SIG is one of the best ways to start getting a feel for open science and to start feeling like you're part of the community. We don’t have any skill-based barriers to participation: you already know what you need to know to get involved. So come and join.
I should also mention that we have some positions on the OSSIG open for next year as well. So if any of the available positions sound exciting to you, and you want to get involved, please come. And if those sound like too much of a commitment, there's always other ways to get involved, from open science fellows to volunteering and coordinating things onsite.
If anyone is not yet familiar with Brainhack and wants a quick primer before jumping in, we’ve just published a short piece about the ideas guiding the community that highlights the culture and structure of Brainhack events.
One thing that’s worth saying, too, about vision is that the vision of the OSSIG continues to grow every year. We’ve been really fortunate to have strong support throughout the OHBM leadership this year, and I hope that’s only going to continue to grow.
I hope so too, and I’m so excited to see the success of this year’s events and all of the OSSIG community initiatives. Thank you all for taking the time to speak to me about it today!
OHBM 2021 Keynote Interview Series: Q&A with Dora Hermes - Multi-modal computational models for a new human systems neuroscience
By: Elizabeth DuPre
Dora Hermes is currently an Associate Professor of Biomedical Engineering at the Mayo Clinic in Rochester, Minnesota, where her work focuses on identifying disease biomarkers and developing neuroprosthetics to interface with the brain.
I had the pleasure of catching up with her before her 2021 OHBM keynote lecture to find out more about her research, what drove her work on iEEG-BIDS, and her advice for early career researchers.
Elizabeth DuPre (ED): One thing that stands out to me is the diversity of your research projects: from improving recording standards and materials to developing computational models of visual processing. How would you describe your research program or field of study?
Dora Hermes (DH): This is actually a discussion point among our colleagues right now. I would call it computational cognitive neuroscience or human systems neuroscience. One of the things that I think defines this field is that we’re trying to learn as much as possible about the mesoscale population measurements that we have in humans. That we’re measuring from populations of neurons in the human brain is so important and so fascinating: we can measure all of these different signals, but how do we characterize those signals and their interactions in a quantitative manner?
Excitingly, this is also the scale at which neuroprosthetics are being developed. So brain-machine interfaces, closed-loop stimulation; these systems all work with electrodes that measure from populations of neurons. So if we want to understand where to implant these and what type of signals are biomarkers for particular processes, then the neural population scale is where we should focus to drive those therapeutic techniques forward.
ED: I think many OHBM members will have heard of initiatives like Neuralink and the exciting momentum around brain-computer interfaces right now. Is this an application that got you interested in this field?
DH: During my PhD I actually started in a lab in the Netherlands that was developing brain-computer interfaces, so in 2008, 2009, we were working with patients who had ECoG electrodes implanted on the surface of the brain, and they were playing [the video game] Pong with their brain signals. That is, they had to hit the target just by manipulating their brain signals. So at the time we were asking questions like: Where should these electrodes be implanted, and can we actually determine that from an fMRI scan? Because we measure BOLD on the one hand and electrophysiological signals on the other hand, we wanted to know how BOLD measurements can inform our understanding of electrophysiology.
That’s how I initially started to get interested in this field, but there are of course many more applications of this methodology. For example, deep brain stimulation or combining different closed-loop measurements with stimulation to try and entrain networks. But brain-computer interfaces were what first got me interested.
ED: You recently published a paper looking at implantation sites to help with gesture recognition.
DH: This is centered around the question of: how can we get as many degrees of freedom as possible with as few electrodes as possible, since there are computational and hardware limitations in terms of how many signals we can describe. In that paper, we—and particularly my PhD student Max van den Boom—try to look at what the optimal implantation and configuration strategy should be.
ED: You obviously have a lot of data that you’re collecting and analyzing. Was this one of the motivations to help develop iEEG-BIDS, the sheer volume of data that you need to process in your own research?
DH: There are several aspects of this. When we first started developing iEEG-BIDS we had a very practical problem. We had a consortium between different sites and we wanted to share data between the sites, including 7T fMRI data, 3T fMRI data, intracranial EEG recordings, and MEG data. Because the data were collected across sites, they were also in different formats! So we were asking, how can we best organize this data? While I was thinking about this, Chris Gorgolewski was at the time one floor below our floor and I talked to him for a little bit, and then looked into the Brain Imaging Data Structure (BIDS), and it just was the most practical and the simplest solution to our problem. And that’s what I’ve seen throughout, is that whenever you give people a description of BIDS it’s immediately recognized as such a practical and simple solution to share data across sites.
So for us, the nice thing about it was that we could easily organize all of our different modalities according to BIDS. At that time the fMRI and MEG descriptions were already created, so we just had to work on creating a solid description of (i)EEG data to make sure that everything would be in there. And since that worked out really well, in my lab now all of my data have to be organized according to the BIDS structure, and it gives everyone guidance on what the data should be like. It makes it much easier to share data with publications as well and make sure that publications are reproducible. And as we get more and more data over time, it’s easier to find things. So I’m really happy that we put in the time to help expand BIDS.
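For readers unfamiliar with BIDS, the core of the standard is simply a predictable directory and filename convention. Here is a minimal, hypothetical sketch of an iEEG-BIDS layout (the subject, session, and task names are illustrative, and this is only the skeleton of the core files, not the lab's actual dataset):

```python
import tempfile
from pathlib import Path

# Hypothetical minimal iEEG-BIDS skeleton; names like "01" and "pong"
# are illustrative only, not taken from the interview.
def make_ieeg_bids_skeleton(root, subject, session, task):
    """Create sub-<x>/ses-<y>/ieeg/ with the core iEEG-BIDS files."""
    base = Path(root) / f"sub-{subject}" / f"ses-{session}" / "ieeg"
    base.mkdir(parents=True, exist_ok=True)
    stem = f"sub-{subject}_ses-{session}_task-{task}"
    for suffix in ("ieeg.edf", "ieeg.json", "channels.tsv", "events.tsv"):
        (base / f"{stem}_{suffix}").touch()  # empty placeholder files
    return sorted(p.name for p in base.iterdir())

with tempfile.TemporaryDirectory() as root:
    names = make_ieeg_bids_skeleton(root, "01", "01", "pong")
```

Because every modality follows the same `sub-*/ses-*/<modality>/` pattern, the 7T fMRI, 3T fMRI, MEG, and iEEG data from different sites can all live side by side in one predictable tree, which is exactly what makes cross-site sharing so simple.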
ED: I think a lot of people are happy that you put in the time! It’s really such a wonderful resource for the community.
DH: And the community was the most amazing part of it, actually. There were so many people who were working together to get that done. It was really amazing how much help we got from different people who were all helping to think about when it would work and so to see people use it now—it’s been really fantastic.
ED: You already had this question back in your PhD about how BOLD measurements can inform our understanding of ECoG, and you’re continuing to collect rich data across many different modalities. Is there a direction you’re excited about in bringing together these different data types?
DH: These are complex measurements, and the nice thing about having multi-modal measurements is that they are complementary. I’m really excited about the fact that we can move beyond word descriptions of these signals. So beyond saying something like “gamma is bottom-up,” we can start to make an image-derived model where we’re predicting the amplitude of different signals in computational models—which gives us different ways to quantify these signals beyond just comparing them or simple correlations. We can start to ask what they tell us about perception or about the underlying circuitry. So I think we’re now at this transition point when we’re starting to have not just a word model or a word description but really to have a more quantitative model for what these signals can tell us about the underlying physiology.
ED: You described your interest as computational cognitive neuroscience, and just from the name it seems like computational models are an important part of this work. So moving from these more verbal descriptions to computational models seems exciting and important.
DH: To make things a bit more concrete, let me give an example. Initially, there were descriptions such as, “Gamma oscillations are bottom up.” But what does that actually mean? Does it mean the low-level visual information that comes in from the eye? Does it mean whatever you actually [consciously] perceive? Rather than trying to argue at this level of description, we now have an image-derived model that can predict gamma amplitude from an image.
The nice thing about this model is that it’s directly testable. Importantly, the model can be wrong. But you can test it, and you can expand upon it. You don’t just have a word description in a scientific paper; now you have a model that’s available on GitHub. There are of course other examples from other groups, but what’s exciting about this direction generally is its quantitative focus. So now rather than “gamma is bottom-up,” we can say things like: “these models of gamma oscillations work well in this setting, but not that one” or “these are the quantities that we expect to change in disease models, and these are the changes in the supporting computations.” This is the kind of model that will help push the field forward, since it’s formalized and—unlike word descriptions—can be tested and built upon directly.
ED: I think that’s exactly it. Speaking of pushing the field forward, I wanted to ask: What directions are you excited about for the field right now? Obviously one of them would be developing more multi-modal, computational models. Would you add anything to that?
DH: I’m certainly really excited about testable computational models. The other thing I’m excited about is being able to directly manipulate brain signals to create closed-loop devices. It goes back to these brain-computer interfaces we started talking about. So being able to quantify and model the brain signals we measure is important, but in order to have closed-loop devices we also need to be able to modulate measured brain networks via electrical stimulation. That is, not just having a measurement, but being able also to directly modify the network activation patterns themselves.
One of the directions where we’re currently moving is to really understand how we can change network-level biomarkers with electrical stimulation. For example, in epilepsy, we know certain circuits are involved. Can you directly stimulate those circuits to normalize the measured signals; that is, to push them to look more or less typical? I think this direction of integrating quantitative measurements with electrical stimulation will provide new therapeutic possibilities for many different diseases.
ED: That is really exciting, and it also sounds really challenging! Do you have any advice for early career researchers who are just starting out in this area to help in overcoming some of the challenges along the way?
DH: We work with a very complex system, and we need to make tractable measurements. But making inferences is less than straightforward. If we have electrodes in 250 different places, how do we make that problem approachable? First, of course, I’d say to look at the signals. Don’t just apply analyses blindly; for example, you can apply coherence to many different signals but it doesn’t make sense in the absence of a clear oscillation. So always look at the data first and think of which analyses would apply.
Secondly, I’d say it’s important to interact with your colleagues: other PhD students, postdocs, and PIs in the field. One of the things that was a real benefit when I started was the discussions and collaborations that I had with my colleagues. For example, we had noticed that there were often broadband increases across many different high frequencies rather than just narrow-band oscillations. We were confused by this, but we talked to a lot of people and learned that they had similar observations. That experience helped convince us that it wasn’t just an artifact due to our equipment and allowed us to start asking “What does that tell us about brain function?”.
And finally, the other thing I’d recommend is to share data and code when it’s possible. There will definitely be a benefit in the long term. When I started, we developed a very simple tool to localize ECoG electrodes and correct for brain shifts. I made the code available, and I’ve been amazed by how widely it’s been used since. Really, I hope this has helped people, and I think it’s a good example of the power of sharing code and data whenever you can. It’s one of those things that drives the field forward. It also helps to better situate your work: if you share code and data with a paper, other researchers can reproduce your results and see how robust the effects are rather than just speculating.
ED: I just have one last question: What can we look forward to in your keynote lecture at OHBM?
DH: Generally, I’ll discuss how describing mesoscale brain measurements with different models allows us to identify potential biomarkers for disease. To support this, I’ll explain how we have parameterized the power spectrum measured with intracranial EEG and related these different electrophysiological signals to different image properties to learn about the human visual system. I’ll also describe how different image modalities—such as fMRI and electrophysiological measurements—allow us to get different views on the system.
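To give a rough flavor of what "parameterizing the power spectrum" can mean (this is a generic sketch under common assumptions, not Dr. Hermes's actual method): one widespread approach is to fit the broadband, aperiodic background of a spectrum as a line in log-log space, so that oscillatory peaks can then be described relative to that 1/f-like baseline.

```python
import math

# Generic spectral-parameterization sketch (not the method from the talk):
# fit log10(power) = offset + slope * log10(freq) by least squares, which
# captures the aperiodic 1/f-like background of a power spectrum.
def fit_aperiodic(freqs, power):
    xs = [math.log10(f) for f in freqs]
    ys = [math.log10(p) for p in power]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope  # (offset, slope)

# Synthetic spectrum with a pure 1/f^2 background:
# the fit should recover a slope of -2 and an offset of 1.
freqs = range(1, 101)
power = [10.0 / f ** 2 for f in freqs]
offset, slope = fit_aperiodic(freqs, power)
```

With the aperiodic component in hand, the residual power at, say, gamma frequencies becomes a quantity that can be compared across electrodes, stimuli, and modalities.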
I hope that I can convince the OHBM community about this core idea: fMRI and iEEG integrate across neuronal populations in fundamentally different ways, and we should view BOLD and electrophysiological signals as providing complementary information with respect to the system that drives the signals. Viewing measurements as functions of the underlying systems allows us to better understand which circuits may be related to atypical processing in neurological diseases. We're hoping that this work will help move the field in that direction.
ED: Thanks so much for taking the time, Dr. Hermes. I’m very much looking forward to your keynote in a few weeks!