
From Cannoli in South Philly to Brain Mapping in Geneva

6/29/2016


 
BY KEVIN WEINER
New OHBM Communications Committee article on HuffPost Science: 

"As a scientific conference approaches, I always think back to my first science fair: I stood next to my homemade presentation of graphs and tables glued to a poster board positioned next to hundreds of others made by fellow elementary school students. An overweight Paisan from rural New Jersey, I was more looking forward to my post-presentation reward of cannoli from my favorite bakery in South Philly than I was to standing for hours on end answering questions from the judges.
​
Conferences these days are pretty similar to the science fairs from decades ago, just on a grander scale with better technology. For example, every year the Organization for Human Brain Mapping (OHBM) brings together thousands of scientists who work with brain imaging data from around the world to share hot off the press findings that they just published or are preparing to publish. This year, 4,391 presentations will be on hand in Geneva for our annual conference." Read more
[Image: #OHBM2016 Posters Area]
[Image: Cortical surface reconstruction of a right hemisphere]

Q&A WITH DR. NORA VOLKOW: OHBM 2016 KEYNOTE SPEAKER SERIES

6/22/2016


 
BY LISA NICKERSON & TZIPI HOROWITZ-KRAUS
​

Dr. Nora Volkow will be giving one of the Keynote Lectures at the 22nd Annual Meeting of the Organization for Human Brain Mapping. Her talk “Mapping Addiction in the Human Brain” is scheduled for Tuesday morning at 9:30am on June 28. Her pioneering neuroimaging work investigating the effects of addictive drugs on the human brain has helped define drug addiction as a disease. 
​
As Director of the National Institute on Drug Abuse, a position she has held since 2003, she ensures that our nation’s research efforts keep pace with the rapidly changing landscape of addiction in America, from the methamphetamine crisis in the mid-2000s to today’s opioid epidemic and the impact of legalization of medical and recreational cannabis. Her work in addiction and other domains, including obesity, ADHD and aging, has led to more than 600 publications and numerous awards, including being named one of Time magazine’s “Top 100 People Who Shape Our World.”
[Image: Dr. Nora Volkow]
We asked Dr. Volkow a few questions to start the conversation before her Keynote Lecture:
​

What motivated you to go into your particular area of research? 
Nora Volkow: I went into addiction research because I wanted to understand what happens to the brain of a person that can no longer control his/her desires even though they want to do it.  In other words, investigating the neurobiological underpinnings of the balance between neuronal networks involved in desires/emotions and those involved in self-regulation and how drugs influence them in addiction.

If you weren't talking to brain mappers or scientists, how would you describe your most proud scientific accomplishment?
NV: Providing insight that gave us an explanation of why people who are addicted to drugs cannot stop the strong urges to take the drugs even when this comes at the expense of very catastrophic consequences.

If you were speaking to a non-scientist, how would you describe your research and what you do for a living?
NV: I would say that, as a brain researcher, I explore how the brain works; and as a researcher studying the effects of drugs, I investigate the mechanisms within our brain that motivate our actions and that underlie free will.

What do you think are the most pressing issues in neuroimaging for your area of interest? For the field in general?
NV: For the addiction field, I would say, developing an imaging biomarker for substance use disorders that can help predict clinical outcomes, guide tailored interventions for a given patient and help in the development of new therapeutics for substance use disorder, including strategies to help the brain recover.  For the field in general, I would say developing magnetic resonance imaging (MRI) with higher temporal and spatial resolution and with higher sensitivity. 

What do you think is the future of neuroimaging for basic research? For translational research and application?
NV: For basic research, optical imaging in conjunction with optogenetics and DREADDs is allowing us to assess the role of specific neuronal types in circuit function.  MRI and Positron Emission Tomography (PET) have also allowed us to start to assess the roles of specific gene manipulation on brain function.  There are many applications of neuroimaging for translational research, including use of imaging for real life biofeedback interventions to regulate the function of specific brain regions associated with craving or negative moods that lead to relapse; use of imaging for tailoring prevention message on the basis of brain activation responses predictive of behavioral changes; use of imaging to identify individuals at higher risk for substance use disorders; and, as described above, the development of biomarkers to guide clinical treatment and new therapeutics. 

If you're attending the Annual Meeting in Geneva, come and hear Dr. Volkow’s lecture on Tuesday to learn more about the insights into addiction-related neurochemical and functional changes that are being revealed by brain imaging combined with behavioral measurements and genetics.

More information on Dr. Volkow’s work and NIDA can be found at https://www.drugabuse.gov/about-nida/directors-page and you should also check out her blog for the latest in new addiction treatments and her perspective on today’s most important addiction issues.
​

Learn About Imaging Data Analysis at the OHBM Educational Courses and Morning Symposia

6/21/2016


 
BY LISA NICKERSON
The old adage “there’s something for everyone” is an understatement when it comes to the representation of imaging data analysis techniques at the OHBM Annual Meeting. From courses and workshops on the most basic fundamentals of analysis to oral sessions and symposia highlighting work at the forefront of analytical methods development, the annual OHBM meeting is unparalleled in this regard. As a young graduate student, and later as a post-doc, I was drawn to OHBM as one of the best resources for learning about imaging data analysis. Throughout the year, I would spend countless hours, days, and even months combing through the literature and the internet trying to determine what information was reliable or most relevant for my work, scouring the SPM and FSL forums for answers to my questions, and generally being frustrated at how long it took to get the answers I needed to make headway on various analysis issues. The OHBM Educational Courses and Morning Workshops offered me an opportunity to learn from experts, meet them, and ask them my questions directly. This is the only conference I know that places such a strong emphasis on imaging data analysis, and I advise all my trainees and collaborators who are trying to learn analysis to go to OHBM to soak it in.
This year, the opportunities for learning actually begin before the OHBM meeting starts, with several Satellite Meetings taking place right before the conference, including: FSL Course 2016, Pattern Recognition in Neuroimaging, Brain Connectivity, and the BrainMap/Mango Workshop. In addition, the OHBM Educational Courses take place on Sunday before the Opening Ceremonies, with several courses that are fantastic for students, post-docs, those who are new to neuroimaging, and those who just want to pick up new analysis techniques.


The Art and Pitfalls of fMRI Preprocessing is a long-running OHBM Educational Course designed to expose beginners to the critical importance of key fMRI pre-processing steps for both resting state and task fMRI and, this year, covers typical pre-processing pipelines in three major software tools: FSL, SPM, and AFNI. The course on MR Diffusion Imaging: From Basics to Advanced Applications will highlight methodological considerations of both acquisition and analysis for mapping structural connectivity and white matter microstructure. More advanced statistical methods for those with some experience, or for those who are merely curious, are also represented in courses such as:
  • Pattern Recognition for Neuroimaging, which covers how machine learning methods are adapted to investigate neuroscience questions such as disease prediction, classification, and brain decoding (a minimal illustrative sketch follows this list)
  • Graph Theoretic Models of Brain Networks highlights the application of graph theory to investigate the organizational properties of brain networks, delving into the fundamentals of network graph construction from connectivity data, different types of models and measures, and clinical applications
  • Introduction to Imaging Genetics is another long-running course that covers approaches for modeling and understanding how genetic variation influences brain structure and function, key conceptual and methodological issues when combining two daunting – from a statistical perspective – techniques, and addresses reproducibility and validation in this field
  • Neuroimaging Meta-Analysis reviews tools and techniques for summarizing research findings across a large number of participants and diverse experimental settings to overcome some of the typical limitations of BOLD fMRI studies, including small sample size and limited reliability of BOLD signals
  • Practicalities for Reproducible Neuroimaging takes up the issue of reproducibility as it pertains to neuroimaging studies more generally, which will likely be of increasing importance in light of recent research conducted by the Open Science Collaboration showing poor reproducibility of psychological science research.
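For readers who have never run one of these analyses, here is a minimal, purely illustrative sketch of the kind of decoding workflow the Pattern Recognition for Neuroimaging course introduces. It uses scikit-learn on entirely synthetic data; the array sizes, the injected group difference, and the classifier settings are invented for the example and are not taken from any course material.

```python
# Hypothetical decoding example: classify two groups from simulated
# voxel-wise features with a linear SVM and cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_voxels = 40, 500
X = rng.normal(size=(n_subjects, n_voxels))   # simulated features (e.g., beta maps)
y = np.repeat([0, 1], n_subjects // 2)        # two groups (e.g., patients vs. controls)
X[y == 1, :20] += 0.8                         # inject a weak group difference

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validated accuracy
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```

In a real study the feature matrix would come from preprocessed imaging data, and the cross-validation scheme would need to respect any grouping in the data (for example, keeping all runs from the same subject in the same fold when decoding within-subject data).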
The daily Morning Symposia also offer up many opportunities to learn about both well-established methods and methods that are at the forefront of imaging research.  On Tuesday, Skeptical Connectivity: Time for Something Completely Different presents three powerful new statistical methods for investigating brain connectivity from a very different perspective than standard network analysis methods, while also discussing how these methods overcome some of the limitations of current methods. While this symposium is at the forefront of connectivity methods development, a parallel symposium on Effects of Head Motion on Structural and Functional MRI Studies will discuss the more basic topic of how motion affects resting state functional connectivity and other modalities, including methods to prospectively correct for these effects, through improved acquisition methods, modeling and other analytical strategies. For those new to connectivity methods, the latter is a must-see symposium to learn about this critical issue and methods for remediation. On Wednesday, Functional Connectivity or Causality in the Brain: How do We Know? focuses on reviewing the state of the art of functional/effective connectivity and causality mapping approaches, which will be great for novices. This symposium further explores how to experimentally validate connectivity measures and how to reconcile measures based on fMRI with those from EEG/MEG, which will surely be useful for anyone doing connectivity research to better understand their findings.​
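To make the head-motion issue above a bit more concrete: one widely used per-volume summary of subject motion is framewise displacement, computed from the six rigid-body realignment parameters. The sketch below is only an illustration of the general idea, assuming translations in millimetres, rotations in radians, and the common convention of converting rotations to displacement on a 50 mm sphere; parameter ordering and units differ between software packages, so these details should not be taken as any particular tool's definition.

```python
import numpy as np

def framewise_displacement(motion_params, head_radius_mm=50.0):
    """Framewise displacement from an (n_volumes, 6) array of realignment
    parameters: three translations in mm followed by three rotations in
    radians (the ordering assumed here; check your own software's output)."""
    params = np.asarray(motion_params, dtype=float).copy()
    params[:, 3:] *= head_radius_mm               # rotations -> arc length in mm
    deltas = np.abs(np.diff(params, axis=0))      # volume-to-volume changes
    return np.concatenate([[0.0], deltas.sum(axis=1)])

# Hypothetical example: five volumes of small random motion
rng = np.random.default_rng(1)
print(framewise_displacement(rng.normal(scale=0.05, size=(5, 6))))
```

Volumes with large displacement can then be flagged, censored, or modeled as nuisance effects, which is the kind of analytical strategy such symposia cover.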

Not all of the analysis-related symposia cover connectivity though. On Monday, What Neuroimaging Can Tell Us? From Correlation to Causation and Cognitive Ontologies takes up the important issue that simply studying associations between brain function and cognitive function does not inform the causal mechanisms of how brain functions actually give rise to cognitive functions. This symposium covers causal inference, including new methods for deriving causal hypotheses from observational data and validating causal hypotheses by brain stimulation. And one of Wednesday’s symposia, Neural Nets to Neural Nets: Deep Learning Approaches to Neuroimaging, will introduce deep learning, the new area of machine learning that was used by a computer program developed by Google DeepMind to beat Lee Sedol at Go without any handicaps. These methods are also reigniting the AI community. For those interested in this exciting new area of research, also be sure to check out the Talairach Lecture by Daniel Wolpert and our interview of him to see how scientists at Cambridge are applying some of these methods to tackle neuroscience questions.

The diversity of topics covered in all of these satellite conferences, Educational Courses, and Morning Symposia is truly astounding. I find it a real challenge to keep up with new specialized techniques and evolving perspectives on established methods, and being able to drop in on these lectures at OHBM both keeps me ahead of the game for developing new research directions and keeps me doing “good science”.

Q&A WITH DR. TIM BEHRENS: OHBM 2016 KEYNOTE SPEAKER SERIES

6/20/2016


 
BY KEVIN WEINER
Last week we kicked off a series of interviews with the plenary speakers for the OHBM Annual Meeting in Geneva (June 26-30, 2016). Our first interview featured OHBM Talairach speaker, Daniel Wolpert. In this series, the speakers answer a handful of questions spanning a variety of topics, such as what motivates them, what brought them to the field, and how they would describe their research to a non-scientist. Dr. Tim Behrens was gracious enough to take time out of his day to tackle these questions head – or rather brain – on.


[Image: Dr. Tim Behrens]
Kevin Weiner:  If you met someone on the street who had no idea what brain mapping was, how would you describe your research and what you do for a living?
Tim Behrens: I play with brains and computers. I try to figure out how the world is represented in your brain and how you use these representations to guide your behaviour. I have whiteboards with boring looking equations on them.  I have computers with lots of data on them.  I try to see if the data on my computer matches the equations on my whiteboards.  The data on my computer often comes from humans playing games in an MRI scanner, where we can watch as big patterns of brain activity change from moment-to-moment, but some of it also comes from recording what individual cells are doing when animals play similar games in the lab. Actually, that is a description of what all my friends and colleagues do.  I mostly get to work and wonder what to do because my colleagues are doing all the work. Sometimes I also play candy crush.

KW: What should the non-expert be wary of when reading about brain mapping articles in the lay press?
TB:  They shouldn’t. What does it matter if some of the nuances are not captured in the article?   OK, often the article is just wrong, but that isn’t special to articles about neuroimaging.  That’s just bad journalism.  They should look at the amazing brain pictures, and they should say:  "My god. That is a picture of activity inside someone’s brain.  And that person was having lunch an hour later. That is amazing. Let’s give them our money”.   

KW: If you weren't talking to brain mappers or scientists, how would you describe your most proud scientific accomplishment?
TB: Like all scientists, I think the coolest thing I have done is the stuff we are working on now, so I’ll describe 3 papers in one paragraph - one just published and two in submission. It’s also what I’ll be talking about at the meeting.  Obviously, the next paragraph will sound very pretentious to any scientist reading it!
So - we figured out something about the languages that the brain uses to store knowledge about how things are related in the world. We figured out that brain regions can store relationships with positive (excitatory) connections between neurons but that these connections must be balanced by mirror-image negative (inhibitory) connections - kind of like “antimemories”. We figured out that many of these connections can then be organised into structures that represent knowledge in different parts of the brain, and that these neural structures are probably encoded in a language that other people have discovered for encoding our knowledge of space.  This language is called a “grid cell code” and it is a very beautiful language.  If this turns out to be true, it will be important for understanding how abstract concepts are represented in our brains, and how we can navigate through abstract concepts to understand the world around us. 
Also, I used operant conditioning to train my son to respond “Dada” to the question “Who is the coolest?” when he was a year old. This was deeply satisfying both personally and scientifically. 

KW: What do you think is the future of neuroimaging for basic research? For translational research and application?
TB: I think neuroimaging is in an awesome phase.  Really awesome.  We have moved to a stage where we can do really big experiments really well, and really study the organisation of cortex.  New technologies are allowing us to study neural representations at the mesoscopic scale.  These come both from new high resolution imaging technologies, but also from an increased understanding of how to design experiments that study representations rather than just average activity levels.  I think there needs to be some work validating this stuff with electrophysiology and understanding how these things relate to cellular codes (but that is happening) and obviously there is still a lot of stuff we can’t see.  But basically, I think that these two massive innovations (big data driven imaging with extremely high quality data, and the ability to study representations) will transform the current and next decade of human neuroscience.  On top of this, MEG is really working at the “source” level now, so we can study dynamics.  The kinds of questions that will be answered by neuroimaging experiments in the next years will be really about how stuff is organised, how computations happen etc. etc.  I think this will bring us closer to animal experiments, so we can test whether neural codes found in animals can explain higher human cognitive behaviours that can’t be studied in animals.  I think it will mean that interesting neuronal mechanisms discovered in animals, will be testable in real clinical populations for the first time, as candidates for disease.  Being able to study representations across the whole brain simultaneously is new to neuroscience in any species.  How is the same stuff represented in different areas? How do signals in one area change representations in another? I think it’s really an exciting time to be doing human neuroscience for people who want to think deeply about neural mechanisms and their organisation throughout the brain.  Obviously, it’s also a very exciting time to be doing nonhuman research with a buffet of new techniques available that have extraordinary levels of precision and control.  I think we need to make sure that the different fields stay in touch with each other as we are figuring out how neural computation happens across species. 

KW: What motivated you to study frontal cortex and connectivity? 
TB: I mostly just drift towards things that, or people who, are interesting.  For example, things that seem new and cool and people who are fun to talk to.  The people who I work with in Oxford and UCL are just about the most clever people I know. And some of the most exciting work in the world is happening in the function of frontal cortex and neural representations; in behavioural control; and in brain connectivity.  It’s no coincidence that I copied these clever people in choosing these topics to study…  I get to hang out with cool people and talk about interesting things.   

KW: When you first started out, what was the most inspiring/motivating paper you read? How about the same question, but in the last 5 years?
TB: Bit of a weird question for me,  because I started out making new methods for measuring brain connections from diffusion imaging,  and now I am studying representations and computations.  The classic papers in the diffusion field were obviously influential early.  From people like Peter Basser, David Tuch, Geoff Parker, Derek Jones, Danny Alexander and many of the other early pioneers of diffusion imaging.  Nowadays, I often have fun reading papers about how computations work. For example, you always learn something cool when you read papers by Peter Dayan, Nathaniel Daw, Yael Niv  and their colleagues. I also like learning about how cool representations can enable computation.  I loved Tim Vogels’ 2011 Science paper about simple inhibitory hebbian mechanisms for storing memories, for example. When I want to know how the frontal cortex works more broadly, I often turn to the views of people like Matthew Rushworth, Betsy Murray, Dick Passingham, Chris Frith, amongst many others. Recently when I have been interested in how computations work between frontal and temporal cortices, I have been very influenced by ideas from Howard Eichenbaum and Neil Burgess and their respective sets of awesome colleagues (as well as the league of extraordinary papers about how the hippocampal formation encodes space).

Rather than papers, however, I have mostly been inspired by people.   I have been lucky enough to work with awesome sets of people.  My group has always been stuffed with really clever students and post-docs, many of whom have become long term friends and colleagues. I won’t name individuals from my group here because they are all special :).   From the very early days, though, I was lucky enough to work with people like Heidi Johansen Berg, Matthew Rushworth, Steve Smith, Mark Woolrich, Christian Beckmann, Mark Jenkinson.  These guys have taught me how to think about science, data and the brain. They are endlessly curious and inventive.  Who needs more inspiration than that? 

KW: Thank you, Dr. Behrens! 

Dr. Behrens will be giving the Wednesday afternoon Keynote lecture at the 22nd Annual Meeting of the Organization for Human Brain Mapping (OHBM), later this month in Geneva (June 29th, 4:15pm). The topic of his lecture will encompass storing, using, and updating knowledge for behavioral control. All in all, we hope you will enjoy his lecture and have enjoyed getting to know him a bit better.

​During the 2016 Annual Meeting in Geneva, OHBM Communications Committee member Kevin Weiner had the opportunity to speak with Dr. Behrens immediately following his lively and extremely well-attended keynote. 

The Hidden World Of The Brain: The Role Of The Basal Ganglia Beyond Movement

6/17/2016


 
Excerpt from OHBM Communications/Media Team article on HuffPost Science: 
"You may have heard the basal ganglia being mentioned in mainstream media recently associated with movement disorders like Parkinson’s Disease, which burdened the late Muhammad Ali. And rightly so - the role of the basal ganglia is mostly understood as related to the formation, execution, and remembrance of a sequence of movements towards a goal like throwing a punch for Ali or walking for us." Read full story.

Interview with Daniel Wolpert - OHBM 2016 Talairach Lecture

6/9/2016


 
BY NIKO KRIEGESKORTE
I'm here with Professor Daniel Wolpert of the Engineering Department at Cambridge University. Daniel is going to give the Talairach Lecture this year at the OHBM meeting in Geneva. I’d like to hear a little bit about his research, about his lecture, and his view of our field. ​
Nikolaus Kriegeskorte: ​Daniel, how did you come to be interested in the human brain?
Daniel Wolpert: I started as an undergraduate at Cambridge doing medicine. So I did neuroscience as part of my undergraduate training. I have to say, I wasn’t thrilled by it. It was so complicated. Learning every part of the anatomy of the brain drove me absolutely crazy! But I was interested in mathematics. ​
My Director of Studies at Oxford said to me “Modeling the brain is all where it’s at.” And he showed me a paper by Zipser and Andersen, which had just come out in Nature. They were using a simple neural network to model visual cortex. I was completely transfixed! So I decided I’d had enough of medicine. It was too much like hard work. I wanted to go back and do basic science.

After getting my PhD, I thought it was time to actually go and learn some mathematics. So I went to MIT. I had my interview with Michael Jordan who said "If you want to come and do a postdoc here you have to know what a Jacobian is. If you don't know what a Jacobian is, don't come here." I said "That's not a problem." I had no clue what a Jacobian was! I went straight back to the hotel room and read up all about them. And I did a postdoc there which was great fun. MIT was such a vibrant place.
NK: How did you end up at the engineering department in Cambridge?
DW: I was at University College London for ten years where I was, I guess, the most computational person in a biological environment. And one day I got a phone call from Keith Peters, who was the head of the [Cambridge University] medical school, saying "Are you interested in a chair in Engineering at Cambridge?" And I said, "I think you've got the wrong person. I'm not an engineer. I'm a neuroscientist!" He said, "We're not idiots in Cambridge. We know what you do! We want to create a bio-engineering programme and what we really want to hire is not an engineer with an interest in biology, but a biologist with an interest in engineering." This was very attractive to me, because going from being the most mathematical in a biology department to being the least mathematical in an engineering department would be very good for my game. So I moved to Cambridge and we set up this group which does computation and biological learning. The idea is that we do both machine learning and neuroscience. The interplay between the machine learning and the neuroscience, I think, is absolutely vital. I think the idea that you can understand the brain without very strong computational support is ridiculous.

​NK: What’s your lecture at OHBM in Geneva going to be about?
DW: The first thing I want to convince people of is that there are interesting questions about motor control. I want to convince people that actually the only point in understanding cognition and sensation and perception is to guide action. Then I want to go through the different levels we’ve been working on to try and really explain the interesting new developments, both at the low level – sensory-motor noise, our work on normative models of Bayesian processing – and at the learning level: how people learn structures of tasks. Finally, I want to cover our more recent work, trying to link decision making and motor control together: How motor control affects decisions and how decisions affect motor control. I’ll try to give everyone a bit of an understanding of the algorithms we think the brain uses. Now I have to admit that we don’t do imaging in my group. But I’d like to inspire imagers to pick up our ideas, ideally, and go and test them using their techniques! 
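As a toy illustration of what a normative Bayesian account of sensorimotor estimation looks like (a minimal sketch with invented numbers, not a model from Wolpert's group): given a Gaussian prior over, say, a target position and a noisy Gaussian observation of it, the statistically optimal estimate is a precision-weighted average of the two, pulled toward the prior in proportion to how noisy the senses are.

```python
# Toy Bayesian cue combination: Gaussian prior x Gaussian likelihood.
# All numbers are invented purely for illustration.
mu_prior, var_prior = 0.0, 4.0   # prior belief about target position (cm)
obs, var_obs = 2.0, 1.0          # noisy sensory observation and its variance

w = var_obs / (var_prior + var_obs)        # weight given to the prior
mu_post = w * mu_prior + (1 - w) * obs     # precision-weighted average
var_post = var_prior * var_obs / (var_prior + var_obs)

print(f"posterior estimate: {mu_post:.2f} cm, posterior variance: {var_post:.2f}")
# The noisier the observation, the more the estimate is drawn toward the prior.
```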
NK: For vision as well as motor control, we don’t yet have models that can compete with human performance. How does the normative perspective help us if our models are not up to the task?
DW: Normative models tell you how you should do perception and might tell you how to do action. 
But the problem is they’re intractable. So although you can say you should do the Bayesian thing, actually doing the Bayesian thing is intractable for computers and probably intractable for the brain. So I think all the smart money at the moment is asking what the clever approximations are that the brain can use to solve these sorts of problems. And so the group which I'm head of is half machine learning and half neuroscience. The machine learners’ goal is not just saying it’s Bayesian, but asking how can we do these complex computations in efficient ways. And I think they've been very successful. We used to think that the way you did control was to have some cost and some desired trajectory. You have to play out the desired trajectory, and if you get perturbed, you update your plan. That's a very inefficient way to do control. All you basically need is an optimal feedback controller. 
There's no desired trajectory. You just know how to respond to the state of the limb or the state of the environment in an optimal way. You look ahead and you're always updating your feedback controllers. That's a very powerful theoretical framework. We've been testing that out experimentally and finding evidence that it's simply what the brain does. We still have a long way to go to be able to make something that can play tennis like a human being.
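For readers who want to see the idea in miniature, here is a toy sketch of an optimal feedback controller in the form of a discrete-time linear-quadratic regulator. The dynamics, cost weights, and time steps are all invented for illustration and are not taken from Wolpert's models; the point is simply that the controller is nothing more than a gain mapping the current state to a command, with no pre-planned desired trajectory.

```python
import numpy as np

# Toy 1-D point mass: state = [position, velocity], control = force.
dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
B = np.array([[0.0], [dt]])             # effect of the control signal
Q = np.diag([1.0, 0.1])                 # penalty on state (distance from the goal)
R = np.array([[0.001]])                 # penalty on effort

# Iterate the discrete-time Riccati equation to obtain the feedback gain K.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Simulate a "reach" to the origin from 10 cm away: at every time step the
# controller simply responds to the current state, u = -K x.
x = np.array([[0.10], [0.0]])
for _ in range(200):
    u = -K @ x
    x = A @ x + B @ u
print("final position (m):", x[0, 0])
```

If the simulated limb is perturbed mid-movement, the same gain generates the correction automatically, which is the sense in which no desired trajectory needs to be replayed.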
NK: Looking at your work, it seems to me we can learn a lot about the computations from behavioural data. Do we need to measure brain activity at all?
DW: Oh, we certainly do! I very much believe in Marr’s levels. I tend to focus on the computation and the algorithm. You know, what problem does the brain have to solve? What algorithms does it use? But clearly in the end we want to be able to understand where the algorithms are instantiated and, more than that, I think brain imaging can help differentiate between algorithms. I think the model-based approach, which has become very popular, is incredibly impressive! I guess if that had been around 20 years ago when I first started out, maybe I'd have got involved in imaging. That seems a very beautiful way to go.

NK: In humans we can get tens of thousands of channels of haemodynamic activity. In animals, at the circuit level, we can measure individual neurons, and much larger numbers of them than before. Do you prefer humans or animals?
DW: I work on the control of movement. I think animals have everything we need to understand most of the control of movement. I think people who work on language are a bit stuck in terms of the model systems they can work on. For me the neurophysiology level is an area I follow probably more than imaging, because most animals can move, most of them can move better than we can move, and so I think some of the circuit work from say places like Janelia Farm is really exciting. My field relies on very simple things like robotics and virtual reality, which have improved. But if you look at the improvements in imaging and in particular neural circuit manipulation, those are just extraordinary molecular techniques.

NK: Is human imaging only useful for higher-level cognitive processes that are unique to humans?
DW: No. It's certainly useful to understand movement. I think it’s particularly powerful, I would say, for techniques you can't do with animals, because clearly the resolution you get is not going to be anywhere​ near as good as neurophysiology. 
But it's still incredibly useful even for motor behaviour because there's no reason to believe that a rat does motor control the same way a human does. We have very different bodies; a very different understanding of the world. So I think it's incredibly useful. ​

​NK: Toward an overall understanding of brain function, do we go from the bottom up or from the top down?

DW: We go both ways and they meet in the middle! It's hard to start from the middle. I think from the top down, we can get the algorithms. It's very hard for people who work on circuits to start with high-level questions. They're just trying to understand how the circuit works. But for some circuits, we've made huge advances in understanding the algorithms. So, I am very impressed with people who work, for example, on the cerebellar structure in electric fish, where we know it does prediction of sensory consequences, but it wasn't really known exactly how it did it. But recent work from Columbia and Nate Sawtell has beautifully shown how that circuit works. That work is just spectacular and probably wasn't achievable ten years ago.

NK: AI is finally beginning to work – using models inspired by the brain. Is AI relevant to neuroscience?
DW: AI has been very successful in a limited number of tasks, tasks which are very clear like the game of Go. That's a very simple state of the board and a very simple objective: to win. When it comes to more general tasks, like having an autonomous agent acting in the real world, it's much harder to write down the cost function or what the algorithm should be to achieve success in the world. In robotics, closer to my area, there have been some successes. [Boston Dynamics’] BigDog robot can walk over terrain. But then there are the really hard tasks, like manipulation, that are still unsolved. Robot control is generally solved one task at a time. So a robot is hand-tuned to solve one task. And if you want to go to another task, you go back to square one and start all over again.
One of the big challenges for the future is how you make general-purpose algorithms, which can learn multiple tasks and interact in multiple environments. That's still a very hard problem. At the moment, there's no robot with the dexterity of a five-year-old child in terms of manipulation: robots are very good at things like driving and navigation, but when it comes to tactile things with the arms and hands, they're really in their infancy.

NK: Ultimately, we need computational models that actually perform the function.
DW: In the end that's the proof in the pudding. But unfortunately sometimes you can reproduce the function without understanding it any better. So one of the frustrations with deep nets is that they may work, but you don't really know why they work. But it's still great that they work. I think there's a tension. There are those of us who want beauty in the algorithms, as well as them working, and those who just want the algorithms to work. And so, those who believe in normative and Bayesian models are a bit frustrated, I would guess, by the deep nets, because they work so well when you want the normative, optimal solution to work better.

NK: Finally, I'd like to hear about your interests and obsessions beyond science.
DW: I guess my real obsession is science. I tend to work on science most of the time and I just love working at weekends and evenings on science. But my other obsessions: I have two daughters who are both in the sciences. Next month, all being well, my eldest daughter will become a doctor! My youngest is a chemist, and so they're both at university. We spend a lot of time with them.

​I guess one of my obsessions is travel. As a family, we love traveling. So we spend a lot of time together, me going to fun meetings in China, India, and South Africa, and they often join me and that's an absolute joy.


Thanks to Simon Strangeways for video recording and to Jenna Parker for transcribing the interview.






the thinking center

6/7/2016


 
Excerpt from OHBM Communications/Media Team article on HuffPost Healthy Living:
"The brain is commonly intuitively regarded as a collection of separate regions each with distinct functions of complex behaviors, experiences and phenomena. But, “Fear centers” and “planning centers” are not separate rooms in our brains in which neurons ring alarm bells and draft flow charts, respectively." Read more
