BY KEVIN WEINER
New OHBM Communications Committee article on HuffPost Science: "As a scientific conference approaches, I always think back to my first science fair: I stood next to my homemade presentation of graphs and tables glued to a poster board positioned next to hundreds of others made by fellow elementary school students. An overweight Paisan from rural New Jersey, I was more looking forward to my post-presentation reward of cannoli from my favorite bakery in South Philly than I was to standing for hours on end answering questions from the judges. Conferences these days are pretty similar to the science fairs from decades ago, just on a grander scale with better technology. For example, every year the Organization for Human Brain Mapping (OHBM) brings together thousands of scientists who work with brain imaging data from around the world to share hot off the press findings that they just published or are preparing to publish. This year, 4,391 presentations will be on hand in Geneva for our annual conference." Read more
We asked Dr. Volkow a few questions to start the conversation before her Keynote Lecture:
What motivated you to go into your particular area of research?

Nora Volkow: I went into addiction research because I wanted to understand what happens in the brain of a person who can no longer control their desires, even when they want to stop. In other words, investigating the neurobiological underpinnings of the balance between neuronal networks involved in desires/emotions and those involved in self-regulation, and how drugs influence them in addiction.

If you weren't talking to brain mappers or scientists, how would you describe your most proud scientific accomplishment?

NV: Providing insight that explains why people who are addicted to drugs cannot stop the strong urges to take them, even when this comes at the cost of catastrophic consequences.

If you were speaking to a non-scientist, how would you describe your research and what you do for a living?

NV: I would say that, as a brain researcher, I explore how the brain works; and as a researcher studying the effects of drugs, I investigate the mechanisms within our brain that motivate our actions and that underlie free will.

What do you think are the most pressing issues in neuroimaging for your area of interest? For the field in general?

NV: For the addiction field, I would say developing an imaging biomarker for substance use disorders that can help predict clinical outcomes, guide tailored interventions for a given patient, and help in the development of new therapeutics for substance use disorder, including strategies to help the brain recover. For the field in general, I would say developing magnetic resonance imaging (MRI) with higher temporal and spatial resolution and with higher sensitivity.

What do you think is the future of neuroimaging for basic research? For translational research and application?

NV: For basic research, optical imaging in conjunction with optogenetics and DREADDs is allowing us to assess the role of specific neuronal types in circuit function.
MRI and Positron Emission Tomography (PET) have also allowed us to start to assess the roles of specific gene manipulations on brain function. There are many applications of neuroimaging for translational research, including use of imaging for real-life biofeedback interventions to regulate the function of specific brain regions associated with craving or negative moods that lead to relapse; use of imaging for tailoring prevention messages on the basis of brain activation responses predictive of behavioral changes; use of imaging to identify individuals at higher risk for substance use disorders; and, as described above, the development of biomarkers to guide clinical treatment and new therapeutics.

If you're attending the Annual Meeting in Geneva, come and hear Dr. Volkow’s lecture on Tuesday to learn more about the insights into addiction-related neurochemical and functional changes that are being revealed by brain imaging combined with behavioral measurements and genetics. More information on Dr. Volkow’s work and NIDA can be found at https://www.drugabuse.gov/about-nida/directors-page, and you should also check out her blog for the latest in new addiction treatments and her perspective on today’s most important addiction issues.

BY LISA NICKERSON
The old adage “there’s something for everyone” is an understatement when it comes to the representation of imaging data analysis techniques at the OHBM Annual Meeting. From courses and workshops on the most basic fundamentals of analysis to oral sessions and symposia highlighting work at the forefront of analytical methods development, the annual OHBM meeting is unparalleled in this regard. As a young graduate student and later as a post-doc, OHBM drew me in as one of the best resources for learning about imaging data analysis. Throughout the year, I would spend countless hours, days, and even months combing through the literature and the internet trying to determine what information was reliable or most relevant for my work, scouring the SPM and FSL forums for answers to my questions, and generally being frustrated at how long it took to get the answers I needed to make headway on various analysis issues. The OHBM Educational Courses and Morning Workshops offered me an opportunity to learn from experts, meet them, and ask them my questions directly. This is the only conference I know that places such a strong emphasis on imaging data analysis, and I advise all my trainees and collaborators who are trying to learn analysis to go to OHBM to soak it in.

This year, the opportunities for learning actually begin before the OHBM meeting starts, with several Satellite Meetings taking place right before the conference, including: FSL Course 2016, Pattern Recognition in Neuroimaging, Brain Connectivity, and the BrainMap/Mango Workshop. In addition, the OHBM Educational Courses take place on Sunday before the Opening Ceremonies, with several courses that are fantastic for students, post-docs, those who are new to neuroimaging, and those who just want to pick up new analysis techniques.
The Art and Pitfalls of fMRI Preprocessing is a long-running OHBM Educational Course designed to expose beginners to the critical importance of key fMRI pre-processing steps for both resting state and task fMRI and, this year, covers typical pre-processing pipelines in three major software packages: FSL, SPM, and AFNI. The course on MR Diffusion Imaging: From Basics to Advanced Applications will highlight methodological considerations of both acquisition and analysis for mapping structural connectivity and white matter microstructure. More advanced statistical methods for those with some experience, or for those who are merely curious, are also represented in courses such as:
Not all of the analysis-related symposia cover connectivity, though. On Monday, What Neuroimaging Can Tell Us? From Correlation to Causation and Cognitive Ontologies takes up the important issue that simply studying associations between brain function and cognitive function does not inform the causal mechanisms of how brain functions actually give rise to cognitive functions. This symposium covers causal inference, including new methods for deriving causal hypotheses from observational data and validating causal hypotheses by brain stimulation. And one of Wednesday’s symposia, Neural Nets to Neural Nets: Deep Learning Approaches to Neuroimaging, will introduce deep learning, the area of machine learning used by a computer program developed by Google DeepMind to beat Lee Sedol at Go without any handicaps. These methods are also reigniting interest across the AI community. For those interested in this exciting new area of research, also be sure to check out the Talairach Lecture by Daniel Wolpert and our interview with him to see how scientists at Cambridge are applying some of these methods to tackle neuroscience questions.

The diversity of topics covered in all of these satellite conferences, Educational Courses, and Morning Symposia is truly astounding. I find it a real challenge to keep up with new specialized techniques and evolving perspectives on established methods, and being able to drop in on these lectures at OHBM both keeps me ahead of the game for developing new research directions and keeps me doing “good science”.
Kevin Weiner: If you met someone on the street who had no idea what brain mapping was, how would you describe your research and what you do for a living?

Tim Behrens: I play with brains and computers. I try to figure out how the world is represented in your brain and how you use these representations to guide your behaviour. I have whiteboards with boring-looking equations on them. I have computers with lots of data on them. I try to see if the data on my computer matches the equations on my whiteboards. The data on my computer often comes from humans playing games in an MRI scanner, where we can watch as big patterns of brain activity change from moment to moment, but some of it also comes from recording what individual cells are doing when animals play similar games in the lab. Actually, that is a description of what all my friends and colleagues do. I mostly get to work and wonder what to do because my colleagues are doing all the work. Sometimes I also play Candy Crush.

KW: What should the non-expert be wary of when reading articles about brain mapping in the lay press?

TB: They shouldn’t. What does it matter if some of the nuances are not captured in the article? OK, often the article is just wrong, but that isn’t special to articles about neuroimaging. That’s just bad journalism. They should look at the amazing brain pictures, and they should say: "My god. That is a picture of activity inside someone’s brain. And that person was having lunch an hour later. That is amazing. Let’s give them our money.”

KW: If you weren't talking to brain mappers or scientists, how would you describe your most proud scientific accomplishment?

TB: Like all scientists, I think the coolest thing I have done is the stuff we are working on now, so I’ll describe three papers in one paragraph - one just published and two in submission. It’s also what I’ll be talking about at the meeting. Obviously, the next paragraph will sound very pretentious to any scientist reading it!
So - we figured out something about the languages that the brain uses to store knowledge about how things are related in the world. We figured out that brain regions can store relationships with positive (excitatory) connections between neurons but that these connections must be balanced by mirror-image negative (inhibitory) connections - kind of like “antimemories”. We figured out that many of these connections can then be organised into structures that represent knowledge in different parts of the brain, and that these neural structures are probably encoded in a language that other people have discovered for encoding our knowledge of space. This language is called a “grid cell code” and it is a very beautiful language. If this turns out to be true, it will be important for understanding how abstract concepts are represented in our brains, and how we can navigate through abstract concepts to understand the world around us. Also, I used operant conditioning to train my son to respond “Dada” to the question “Who is the coolest?” when he was a year old. This was deeply satisfying both personally and scientifically.

KW: What do you think is the future of neuroimaging for basic research? For translational research and application?

TB: I think neuroimaging is in an awesome phase. Really awesome. We have moved to a stage where we can do really big experiments really well, and really study the organisation of cortex. New technologies are allowing us to study neural representations at the mesoscopic scale. These come both from new high-resolution imaging technologies, but also from an increased understanding of how to design experiments that study representations rather than just average activity levels. I think there needs to be some work validating this stuff with electrophysiology and understanding how these things relate to cellular codes (but that is happening) and obviously there is still a lot of stuff we can’t see.
But basically, I think that these two massive innovations (big-data-driven imaging with extremely high-quality data, and the ability to study representations) will transform the current and next decade of human neuroscience. On top of this, MEG is really working at the “source” level now, so we can study dynamics. The kinds of questions that will be answered by neuroimaging experiments in the next years will be really about how stuff is organised, how computations happen, etc. I think this will bring us closer to animal experiments, so we can test whether neural codes found in animals can explain higher human cognitive behaviours that can’t be studied in animals. I think it will mean that interesting neuronal mechanisms discovered in animals will be testable in real clinical populations for the first time, as candidates for disease. Being able to study representations across the whole brain simultaneously is new to neuroscience in any species. How is the same stuff represented in different areas? How do signals in one area change representations in another? I think it’s really an exciting time to be doing human neuroscience for people who want to think deeply about neural mechanisms and their organisation throughout the brain. Obviously, it’s also a very exciting time to be doing nonhuman research with a buffet of new techniques available that have extraordinary levels of precision and control. I think we need to make sure that the different fields stay in touch with each other as we are figuring out how neural computation happens across species.

KW: What motivated you to study frontal cortex and connectivity?

TB: I mostly just drift towards things that, or people who, are interesting. For example, things that seem new and cool and people who are fun to talk to. The people who I work with in Oxford and UCL are just about the most clever people I know.
And some of the most exciting work in the world is happening in the function of frontal cortex and neural representations; in behavioural control; and in brain connectivity. It’s no coincidence that I copied these clever people in choosing these topics to study… I get to hang out with cool people and talk about interesting things.

KW: When you first started out, what was the most inspiring/motivating paper you read? How about the same question, but in the last 5 years?

TB: Bit of a weird question for me, because I started out making new methods for measuring brain connections from diffusion imaging, and now I am studying representations and computations. The classic papers in the diffusion field were obviously influential early on, from people like Peter Basser, David Tuch, Geoff Parker, Derek Jones, Danny Alexander and many of the other early pioneers of diffusion imaging. Nowadays, I often have fun reading papers about how computations work. For example, you always learn something cool when you read papers by Peter Dayan, Nathaniel Daw, Yael Niv and their colleagues. I also like learning about how cool representations can enable computation. I loved Tim Vogels’ 2011 Science paper about simple inhibitory Hebbian mechanisms for storing memories, for example. When I want to know how the frontal cortex works more broadly, I often turn to the views of people like Matthew Rushworth, Betsy Murray, Dick Passingham, Chris Frith, amongst many others. Recently, when I have been interested in how computations work between frontal and temporal cortices, I have been very influenced by ideas from Howard Eichenbaum and Neil Burgess and their respective sets of awesome colleagues (as well as the league of extraordinary papers about how the hippocampal formation encodes space). Rather than papers, however, I have mostly been inspired by people. I have been lucky enough to work with awesome sets of people.
My group has always been stuffed with really clever students and post-docs, many of whom have become long-term friends and colleagues. I won’t name individuals from my group here because they are all special :). From the very early days, though, I was lucky enough to work with people like Heidi Johansen-Berg, Matthew Rushworth, Steve Smith, Mark Woolrich, Christian Beckmann, and Mark Jenkinson. These guys have taught me how to think about science, data and the brain. They are endlessly curious and inventive. Who needs more inspiration than that?

KW: Thank you, Dr. Behrens!

Dr. Behrens will be giving the Wednesday afternoon Keynote lecture at the 22nd Annual Meeting of the Organization for Human Brain Mapping (June 29th, 4:15pm) later this month in Geneva. The topic of his lecture will encompass storing, using, and updating knowledge for behavioral control. All in all, we hope you will enjoy his lecture and have enjoyed getting to know him a bit better.
Excerpt from OHBM Communications/Media Team article on Huff Post Science:
"You may have heard the basal ganglia being mentioned in mainstream media recently associated with movement disorders like Parkinson’s Disease, which burdened the late Muhammad Ali. And rightly so - the role of the basal ganglia is mostly understood as related to the formation, execution, and remembrance of a sequence of movements towards a goal like throwing a punch for Ali or walking for us." Read full story.

BY NIKO KRIEGESKORTE

I'm here with Professor Daniel Wolpert of the Engineering Department at Cambridge University. Daniel is going to give the Talairach Lecture this year at the OHBM meeting in Geneva. I’d like to hear a little bit about his research, about his lecture, and his view of our field.
My Director of Studies at Oxford said to me “Modeling the brain is all where it’s at.” And he showed me a paper by Zipser and Andersen, which had just come out in Nature. They were using a simple neural network to model visual cortex. I was completely transfixed! So I decided I’d had enough of medicine. It was too much like hard work. I wanted to go back and do basic science. After getting my PhD, I thought it was time to actually go and learn some mathematics. So I went to MIT. I had my interview with Michael Jordan who said “If you want to come and do a postdoc here you have to know what a Jacobian is. If you don't know what a Jacobian is, don't come here.” I said “That's not a problem.” I had no clue what a Jacobian was! I went straight back to the hotel room and read up all about them. And I did a postdoc there which was great fun. MIT was such a vibrant place.

NK: How did you end up at the engineering department in Cambridge?

DW: I was at University College London for ten years where I was, I guess, the most computational person in a biological environment. And one day I got a phone call from Keith Peters, who was the head of the [Cambridge University] medical school, saying "Are you interested in a chair in Engineering at Cambridge?" And I said, "I think you've got the wrong person. I'm not an engineer. I'm a neuroscientist!" He said, “We're not idiots in Cambridge. We know what you do! We want to create a bio-engineering programme and what we really want to hire is not an engineer with an interest in biology, but a biologist with an interest in engineering." This was very attractive to me, because going from being the most mathematical in a biology department to being the least mathematical in an engineering department would be very good for my game. So I moved to Cambridge and we set up this group which does computation and biological learning. The idea is that we do both machine learning and neuroscience.
The interplay between the machine learning and the neuroscience, I think, is absolutely vital. I think the idea that you can understand the brain without very strong computational support is ridiculous.

NK: What’s your lecture at OHBM in Geneva going to be about?

DW: The first thing I want to convince people of is that there are interesting questions about motor control. I want to convince people that actually the only point in understanding cognition and sensation and perception is to guide action. Then I want to go through the different levels we’ve been working on to try and really explain the interesting new developments, both at the low level – sensory-motor noise, our work on normative models of Bayesian processing – and at the learning level: how people learn structures of tasks. Finally, I want to cover our more recent work, trying to link decision making and motor control together: how motor control affects decisions and how decisions affect motor control. I’ll try to give everyone a bit of an understanding of the algorithms we think the brain uses. Now I have to admit that we don’t do imaging in my group. But I’d like to inspire imagers to pick up our ideas, ideally, and go and test them using their techniques!
But the problem is they’re intractable. So although you can say you should do the Bayesian thing, actually doing the Bayesian thing is intractable for computers and probably intractable for the brain. So I think all the smart money at the moment is asking what the clever approximations are that the brain can use to solve these sorts of problems. And so the group which I'm head of is half machine learning and half neuroscience. The machine learners’ goal is not just saying it’s Bayesian, but asking how can we do these complex computations in efficient ways. And I think they've been very successful. We used to think that the way you did control was to have some cost and some desired trajectory. You have to play out the desired trajectory, and if you get perturbed, you update your plan. That's a very inefficient way to do control. All you basically need is an optimal feedback controller.
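To make the contrast Wolpert draws a little more concrete, here is a minimal sketch of an optimal feedback controller in Python - a discrete-time linear-quadratic regulator (LQR) for a toy point mass. All the dynamics and cost matrices are invented illustrative values, not anything from his group's work; the point is only that the controller stores no desired trajectory at all, just a feedback gain applied to the current state, so a perturbation is absorbed without replanning.

```python
import numpy as np

# Toy point-mass dynamics (state = position, velocity), dt = 0.1 s
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])            # control input acts as an acceleration
Q = np.diag([1.0, 0.1])          # cost on deviating from the goal (origin)
R = np.array([[0.01]])           # cost on control effort

# Solve the discrete-time Riccati equation by backward iteration
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
    P = Q + A.T @ P @ (A - B @ K)

# Closed loop: no trajectory is played out and re-planned --
# the command depends only on the current state, so the same law
# corrects a mid-movement perturbation automatically.
x = np.array([[1.0], [0.0]])     # start 1 unit away from the goal
for t in range(200):
    u = -K @ x                   # optimal feedback control
    x = A @ x + B @ u
    if t == 100:
        x[0, 0] += 0.5           # external bump: handled, not replanned

print(float(x[0, 0]))            # settles back near 0
```

The design choice this illustrates is exactly the one described above: rather than a cost plus a stored desired trajectory that must be updated after every perturbation, the optimal feedback controller precomputes one gain matrix and lets state feedback do all the work.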
NK: Looking at your work, it seems to me we can learn a lot about the computations from behavioural data. Do we need to measure brain activity at all?

DW: Oh, we certainly do! I very much believe in Marr’s levels. I tend to focus on the computation and the algorithm. You know, what problem does the brain have to solve? What algorithms does it use? But clearly in the end we want to be able to understand where the algorithms are instantiated and, more than that, I think brain imaging can help differentiate between algorithms. I think the model-based approach, which has become very popular, is incredibly impressive! I guess if that had been around 20 years ago when I first started out, maybe I'd have got involved in imaging. That seems a very beautiful way to go.

NK: In humans we can get tens of thousands of channels of haemodynamic activity. In animals, at the circuit level, we can measure individual neurons, and much larger numbers of them than before. Do you prefer humans or animals?

DW: I work on the control of movement. I think animals have everything we need to understand most of the control of movement. I think people who work on language are a bit stuck in terms of the model systems they can work on. For me the neurophysiology level is an area I follow probably more than imaging, because most animals can move, most of them can move better than we can move, and so I think some of the circuit work from, say, places like Janelia Farm is really exciting. My field relies on very simple things like robotics and virtual reality, which have improved. But if you look at the improvements in imaging and in particular neural circuit manipulation, those are just extraordinary molecular techniques.
But it's still incredibly useful even for motor behaviour, because there's no reason to believe that a rat does motor control the same way a human does. We have very different bodies and a very different understanding of the world. So I think it's incredibly useful.

NK: Toward an overall understanding of brain function, do we go from the bottom up or from the top down?

DW: We go both ways and they meet in the middle! It's hard to start from the middle. I think from the top down, we can get the algorithms. It's very hard for people who work on circuits to start with high-level questions. They're just trying to understand how the circuit works. But for some circuits, we've made huge advances in understanding the algorithms. So, I am very impressed with people who work, for example, on the cerebellar structure in electric fish, where we know it does prediction of sensory consequences, but it wasn't really known exactly how it did it. But recent work from Columbia and Nate Sawtell has beautifully shown how that circuit works. That work is just spectacular and probably wasn't achievable ten years ago.

NK: AI is finally beginning to work – using models inspired by the brain. Is AI relevant to neuroscience?

DW: AI has been very successful in a limited number of tasks, tasks which are very clear, like the game of Go. That's a very simple state of the board and a very simple objective: to win. When it comes to more general tasks, like having an autonomous agent acting in the real world, it's much harder to write down the cost function or what the algorithm should be to achieve success in the world. In robotics, closer to my area, there have been some successes. [Boston Dynamics’] BigDog robot can walk over terrain. But then there are the really hard tasks, like manipulation, that are still unsolved. Robot control is generally solved one task at a time. So a robot is hand-tuned to solve one task.
And if you want to go to another task, you go back to square one and start all over again. One of the big challenges for the future is how you make general-purpose algorithms which can learn multiple tasks and interact in multiple environments. That's still a very hard problem. At the moment, there's no robot with the dexterity of a five-year-old child in terms of manipulation: robots are very good at things like driving and navigation, but when it comes to tactile things with the arms and hands, they're really in their infancy.

NK: Ultimately, we need computational models that actually perform the function.

DW: In the end that's the proof in the pudding. But unfortunately sometimes you can reproduce the function without understanding it any better. So one of the frustrations with deep nets is that they may work, but you don't really know why they work. But it's still great that they work. I think there's a tension. There are those of us who want beauty in the algorithms, as well as them working, and those who just want the algorithms to work. And so, those who believe in normative and Bayesian models are a bit frustrated, I would guess, by the deep nets, because they work so well, when you want the normative, optimal solution to work better.

NK: Finally, I'd like to hear about your interests and obsessions beyond science.

DW: I guess my real obsession is science. I tend to work on science most of the time and I just love working at weekends and evenings on science. But my other obsessions; I have two daughters who are both in the sciences. Next month, all being well, my eldest daughter will become a doctor! My youngest is a chemist, and so they're both at university. We spend a lot of time with them. I guess one of my obsessions is travel. As a family, we love traveling. So we spend a lot of time together, me going to fun meetings in China, India, and South Africa, and they often join me, and that's an absolute joy.
Thanks to Simon Strangeways for video recording and to Jenna Parker for transcribing the interview.

Excerpt from OHBM Communications/Media Team article on Huff Post Healthy Living:
"The brain is commonly and intuitively regarded as a collection of separate regions, each with distinct functions for complex behaviors, experiences and phenomena. But “fear centers” and “planning centers” are not separate rooms in our brains in which neurons ring alarm bells and draft flow charts, respectively." Read more