OHBM 2021 Keynote Interview Series: Q&A with Dora Hermes - Multi-modal computational models for a new human systems neuroscience
By: Elizabeth DuPre
Dora Hermes is currently an Associate Professor of Biomedical Engineering at the Mayo Clinic in Rochester, Minnesota, where her work focuses on identifying disease biomarkers and developing neuroprosthetics to interface with the brain.
I had the pleasure of catching up with her before her 2021 OHBM keynote lecture to find out more about her research, what drove her work on iEEG-BIDS, and her advice for early career researchers.
Elizabeth DuPre (ED): One thing that stands out to me is the diversity of your research projects: from improving recording standards and materials to developing computational models of visual processing. How would you describe your research program or field of study?
Dora Hermes (DH): This is actually a discussion point among our colleagues right now. I would call it computational cognitive neuroscience or human systems neuroscience. One of the things that I think defines this field is that we’re trying to learn as much as possible about the mesoscale population measurements that we have in humans. The fact that we’re measuring from populations of neurons in the human brain is so important and so fascinating: we can measure all of these different signals, but how do we characterize those signals and their interactions in a quantitative manner?
Excitingly, this is also the scale at which neuroprosthetics are being developed. So brain-machine interfaces, closed-loop stimulation; these systems all work with electrodes that measure from populations of neurons. So if we want to understand where to implant these and what type of signals are biomarkers for particular processes, then the neural population scale is where we should focus to drive those therapeutic techniques forward.
ED: I think many OHBM members will have heard of initiatives like Neuralink and the exciting momentum around brain-computer interfaces right now. Is this an application that got you interested in this field?
DH: During my PhD I actually started in a lab in the Netherlands that was developing brain-computer interfaces, so in 2008, 2009, we were working with patients who had ECoG electrodes implanted on the surface of the brain, and they were playing [the video game] Pong with their brain signals. That is, they had to hit the target just by manipulating their brain signals. So at the time we were asking questions like: Where should these electrodes be implanted, and can we actually determine that from an fMRI scan? Because we measure BOLD on the one hand and electrophysiological signals on the other hand, we wanted to know how BOLD measurements can inform our understanding of electrophysiology.
That’s how I initially started to get interested in this field, but there are of course many more applications of this methodology. For example, deep brain stimulation or combining different closed-loop measurements with stimulation to try and entrain networks. But brain-computer interfaces were what first got me interested.
ED: You recently published a paper looking at implantation sites to help with gesture recognition.
DH: This is centered around the question of: how can we get as many degrees of freedom as possible with as few electrodes as possible, since there are computational and hardware limitations in terms of how many signals we can describe. In that paper, we—and particularly my PhD student Max van den Boom—try to look at what the optimal implantation and configuration strategy should be.
ED: You obviously have a lot of data that you’re collecting and analyzing. Was this one of the motivations to help develop iEEG-BIDS, the sheer volume of data that you need to process in your own research?
DH: There are several aspects of this. When we first started developing iEEG-BIDS we had a very practical problem. We had a consortium between different sites and we wanted to share data between the sites, including 7T fMRI data, 3T fMRI data, intracranial EEG recordings, and MEG data. Because the data were collected across sites, they were also in different formats! So we were asking, how can we best organize this data? While I was thinking about this, Chris Gorgolewski was working one floor below ours at the time; I talked with him for a bit, then looked into the Brain Imaging Data Structure (BIDS), and it was simply the most practical solution to our problem. And that’s what I’ve seen throughout: whenever you give people a description of BIDS, it’s immediately recognized as a practical, simple solution for sharing data across sites.
So for us, the nice thing about it was that we could easily organize all of our different modalities according to BIDS. At that time the fMRI and MEG descriptions were already created, so we just had to work on creating a solid description of (i)EEG data to make sure that everything would be in there. And since that worked out really well, in my lab now all of my data have to be organized according to the BIDS structure, and it gives everyone guidance on what the data should be like. It makes it much easier to share data with publications as well and make sure that publications are reproducible. And as we get more and more data over time, it’s easier to find things. So I’m really happy that we put in the time to help expand BIDS.
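As a concrete illustration, a minimal iEEG-BIDS dataset might be laid out like this (the subject, session, and task names here are placeholders; see the BIDS specification for the full set of required and optional files):

```
my_dataset/
├── dataset_description.json
├── participants.tsv
└── sub-01/
    └── ses-01/
        └── ieeg/
            ├── sub-01_ses-01_task-visual_run-01_ieeg.edf
            ├── sub-01_ses-01_task-visual_run-01_ieeg.json
            ├── sub-01_ses-01_task-visual_run-01_channels.tsv
            ├── sub-01_ses-01_task-visual_run-01_events.tsv
            ├── sub-01_ses-01_space-ACPC_electrodes.tsv
            └── sub-01_ses-01_space-ACPC_coordsystem.json
```

The sidecar `_ieeg.json` carries recording metadata, `_channels.tsv` describes signal properties per channel, and `_electrodes.tsv` plus `_coordsystem.json` record electrode positions and the coordinate system they are expressed in.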
ED: I think a lot of people are happy that you put in the time! It’s really such a wonderful resource for the community.
DH: And the community was the most amazing part of it, actually. There were so many people who were working together to get that done. It was really amazing how much help we got from different people who were all helping to think about how it would work. And to see people use it now—it’s been really fantastic.
ED: You already had this question back in your PhD about how BOLD measurements can inform our understanding of ECoG, and you’re continuing to collect rich data across many different modalities. Is there a direction you’re excited about in bringing together these different data types?
DH: These are complex measurements, and the nice thing about having multi-modal measurements is that they are complementary. I’m really excited about the fact that we can move beyond word descriptions of these signals. So beyond saying something like “gamma is bottom-up,” we can start to make an image-derived model where we’re predicting the amplitude of different signals in computational models—which gives us different ways to quantify these signals beyond just comparing them or simple correlations. We can start to ask what they tell us about perception or about the underlying circuitry. So I think we’re now at this transition point when we’re starting to have not just a word model or a word description but really to have a more quantitative model for what these signals can tell us about the underlying physiology.
ED: You described your interest as computational cognitive neuroscience, and just from the name it seems like computational models are an important part of this work. So moving from these more verbal descriptions to computational models seems exciting and important.
DH: To make things a bit more concrete, let me give an example. Initially, there were descriptions such as, “Gamma oscillations are bottom up.” But what does that actually mean? Does it mean the low-level visual information that comes in from the eye? Does it mean whatever you actually [consciously] perceive? Rather than trying to argue at this level of description, we now have an image-derived model that can predict gamma amplitude from an image.
The nice thing about this model is that it’s directly testable. Importantly, the model can be wrong. But you can test it, and you can expand upon it. You don’t just have a word description in a scientific paper; now you have a model that’s available on GitHub. There are of course other examples from other groups, but what’s exciting about this direction generally is its quantitative focus. So now rather than “gamma is bottom-up,” we can say things like: “these models of gamma oscillations work well in this setting, but not that one” or “these are the quantities that we expect to change in disease models, and these are the changes in the supporting computations.” This is the kind of model that will help push the field forward, since it’s formalized and—unlike word descriptions—can be tested and built upon directly.
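To make the idea of an image-computable model concrete, here is a toy sketch (not the published model; the filter parameters and the predictor are invented for illustration): compute quadrature filter energy at a few orientations, and use how concentrated that energy is across orientations as a stand-in predictor. A grating concentrates energy in one orientation channel and yields a high prediction; white noise spreads energy evenly and yields a low one.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor(theta, f0=0.15, sigma=3.0, size=21):
    """Quadrature pair of Gabor filters at orientation theta (radians)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * f0 * xr), env * np.sin(2 * np.pi * f0 * xr)

def orientation_energies(img, n_orient=4):
    """Total quadrature-pair energy per orientation channel."""
    energies = []
    for k in range(n_orient):
        even, odd = gabor(np.pi * k / n_orient)
        e = fftconvolve(img, even, mode="same")
        o = fftconvolve(img, odd, mode="same")
        energies.append(np.sum(e**2 + o**2))
    return np.array(energies)

def predict_response(img):
    """Toy predictor: variance of the normalized orientation energies.
    High when energy is concentrated in one orientation (a grating),
    low when it is spread evenly (white noise)."""
    e = orientation_energies(img)
    e = e / e.sum()
    return np.var(e)

rng = np.random.default_rng(0)
x = np.arange(64)
grating = np.sin(2 * np.pi * 0.15 * x)[None, :] * np.ones((64, 1))
noise = rng.standard_normal((64, 64))

pred_grating = predict_response(grating)  # energy concentrated: high
pred_noise = predict_response(noise)      # energy spread out: low
```

The point is not the specific predictor but that it is falsifiable: feed in any image, get a number out, and compare it against a measured response.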
ED: I think that’s exactly it. Speaking of pushing the field forward, I wanted to ask: What directions are you excited about for the field right now? Obviously one of them would be developing more multi-modal, computational models. Would you add anything to that?
DH: I’m certainly really excited about testable computational models. The other thing I’m excited about is being able to directly manipulate brain signals to create closed-loop devices. It goes back to these brain-computer interfaces we started talking about. So being able to quantify and model the brain signals we measure is important, but in order to have closed-loop devices we also need to be able to modulate measured brain networks via electrical stimulation. That is, not just having a measurement, but being able also to directly modify the network activation patterns themselves.
One of the directions where we’re currently moving is to really understand how we can change network-level biomarkers with electrical stimulation. For example, in epilepsy, we know certain circuits are involved. Can you directly stimulate those circuits to normalize the measured signals; that is, to push them to look more typical? I think this direction of integrating quantitative measurements with electrical stimulation will provide new therapeutic possibilities for many different diseases.
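The closed-loop idea can be caricatured in a few lines of simulation (everything here is invented for illustration: the drift dynamics, the threshold, and the "stimulation" rule are toy stand-ins, not a real stimulation protocol): monitor a scalar biomarker, and whenever it crosses a threshold, apply a corrective intervention that pulls it back toward baseline.

```python
import numpy as np

def simulate(noise, threshold=None, gain=0.5):
    """Simulate a scalar 'biomarker' (e.g. band power) with a slow
    pathological upward drift. If a threshold is given, 'stimulation'
    is applied whenever the biomarker exceeds it, damping the excess
    (a toy stand-in for closed-loop electrical stimulation)."""
    biomarker = 1.0
    trace = []
    for eps in noise:
        biomarker += 0.05 + 0.1 * eps  # drift plus measurement noise
        if threshold is not None and biomarker > threshold:
            biomarker -= gain * (biomarker - threshold)  # stimulate
        biomarker = max(biomarker, 0.0)
        trace.append(biomarker)
    return np.array(trace)

rng = np.random.default_rng(42)
noise = rng.standard_normal(500)

open_loop = simulate(noise)                    # no stimulation: drifts up
closed_loop = simulate(noise, threshold=2.0)   # stimulation holds it down
```

The hard scientific problems sit in the parts this sketch assumes away: which measured signal is a valid biomarker, and how stimulation actually reshapes network activity.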
ED: That is really exciting, and it also sounds really challenging! Do you have any advice for early career researchers who are just starting out in this area to help in overcoming some of the challenges along the way?
DH: We work with a very complex system, and we need to make tractable measurements. But making inferences is less than straightforward. If we have electrodes in 250 different places, how do we make that problem approachable? First, of course, I’d say to look at the signals. Don’t just apply analyses blindly; for example, you can apply coherence to many different signals but it doesn’t make sense in the absence of a clear oscillation. So always look at the data first and think of which analyses would apply.
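This advice can be illustrated with a small sketch on simulated signals (the sampling rate, frequencies, and noise levels are arbitrary choices for the example): inspect the power spectrum with Welch's method first, and only interpret coherence where a clear spectral peak exists.

```python
import numpy as np
from scipy.signal import welch, coherence

fs = 500                       # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)   # 10 s of simulated data
rng = np.random.default_rng(0)

# Two channels sharing a 20 Hz oscillation, plus independent noise.
shared = np.sin(2 * np.pi * 20 * t)
x = shared + rng.standard_normal(t.size)
y = shared + rng.standard_normal(t.size)

# Step 1: look at the spectrum -- is there actually an oscillation?
f_psd, pxx = welch(x, fs=fs, nperseg=1024)
peak_freq = f_psd[np.argmax(pxx)]  # lands near 20 Hz here

# Step 2: only then interpret coherence at that peak.
f_coh, cxy = coherence(x, y, fs=fs, nperseg=1024)
coh_at_peak = cxy[np.argmin(np.abs(f_coh - peak_freq))]   # high
coh_off_peak = cxy[np.argmin(np.abs(f_coh - 45))]         # low
```

Away from the 20 Hz peak the coherence estimate is dominated by noise and finite-sample bias, which is exactly why a coherence value at a frequency without a clear oscillation is hard to interpret.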
Secondly, I’d say it’s important to interact with your colleagues: other PhD students, postdocs, and PIs in the field. One of the things that was a real benefit when I started was the discussions and collaborations that I had with my colleagues. For example, we had noticed that there were often broadband increases across many different high frequencies rather than just narrow-band oscillations. We were confused by this, but we talked to a lot of people and learned that they had similar observations. That experience helped convince us that it wasn’t just an artifact due to our equipment and allowed us to start asking, “What does that tell us about brain function?”
And finally, the other thing I’d recommend is to share data and code when it’s possible. There will definitely be a benefit in the long term. When I started, we developed a very simple tool to localize ECoG electrodes and correct for brain shifts. I made the code available, and I’ve been amazed by how widely it’s been used since. Really, I hope this has helped people, and I think it’s a good example of the power of sharing code and data whenever you can. It’s one of those things that drives the field forward. It also helps to better situate your work: if you share code and data with a paper, other researchers can reproduce your results and see how robust the effects are rather than just speculating.
ED: I just have one last question: What can we look forward to in your keynote lecture at OHBM?
DH: Generally, I’ll discuss how describing mesoscale brain measurements with different models allows us to identify potential biomarkers for disease. To support this, I’ll explain how we have parameterized the power spectrum measured with intracranial EEG and related these different electrophysiological signals to different image properties to learn about the human visual system. I’ll also describe how different image modalities—such as fMRI and electrophysiological measurements—allow us to get different views on the system.
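The idea of parameterizing a power spectrum can be sketched as follows (a simplified, hypothetical model in the spirit of separating a broadband 1/f-like component from a narrowband oscillation; this is not the exact parameterization used in her work): fit log-power as a sloped broadband component plus a Gaussian bump in log-frequency.

```python
import numpy as np
from scipy.optimize import curve_fit

def spectral_model(log_f, offset, slope, amp, center, width):
    """Log-power model: a broadband 1/f^slope component plus a
    Gaussian oscillatory bump in log-frequency."""
    bump = amp * np.exp(-(log_f - center) ** 2 / (2 * width**2))
    return offset - slope * log_f + bump

# Synthetic spectrum: 1/f^2 broadband plus an oscillation near 20 Hz.
freqs = np.linspace(5, 150, 200)
log_f = np.log10(freqs)
true_params = (2.0, 2.0, 0.8, np.log10(20), 0.1)
log_power = spectral_model(log_f, *true_params)

# Fit the model to recover interpretable parameters.
p0 = (1.0, 1.0, 0.5, np.log10(18), 0.15)  # rough initial guess
fit_params, _ = curve_fit(spectral_model, log_f, log_power, p0=p0)
broadband_slope = fit_params[1]      # recovers ~2.0
osc_center_hz = 10 ** fit_params[3]  # recovers ~20 Hz
```

Once the spectrum is reduced to a few parameters like these, each one (broadband slope, oscillation frequency, oscillation amplitude) can be related to stimulus properties or tested as a candidate disease biomarker.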
I hope that I can convince the OHBM community about this core idea: fMRI and iEEG integrate across neuronal populations in fundamentally different ways, and we should view BOLD and electrophysiological signals as providing complementary information with respect to the system that drives the signals. Viewing measurements as functions of the underlying systems allows us to better understand which circuits may be related to atypical processing in neurological diseases. We're hoping that this work will help move the field in that direction.
ED: Thanks so much for taking the time, Dr. Hermes. I’m very much looking forward to your keynote in a few weeks!