BY THOMAS YEO, NICOLE KUEK

Professor Simon B. Eickhoff is the Director of the Institute for Systems Neuroscience at the Heinrich-Heine University Düsseldorf and the Director of the Institute of Neuroscience and Medicine (INM-7) at the Research Center Jülich. Simon is a leading cartographer of the human brain, and his team uses a wide range of methods to map its organizational principles. We had the opportunity to chat with Simon before his keynote lecture at the upcoming 2018 OHBM Annual Meeting in Singapore.

Thomas Yeo (TY): Today we have Prof Simon Eickhoff here, a keynote speaker at OHBM 2018. Simon, thanks for doing this. How would you describe your research to a random person on the street?

Simon Eickhoff (SE): I would say that I’m interested in how the brain is organized, how it varies between people, and how this variability relates to things like cognitive capacities. Ultimately, I want to contribute to developing new tools for diagnosing and treating neurological and psychiatric disorders.

TY: That’s a rich set of activities – how did you end up on this research path?

SE: More or less by accident. I studied medicine in Aachen, and late at night at a party in my hometown, I met a friend from school who had started studying in Düsseldorf. He told me about the brain research there, and I thought it sounded quite interesting. So I called Karl Zilles’ secretary, met him, and was really fascinated. I started my doctoral studies there and never really managed to leave.

TY: Your lab is involved in several projects – but what is the most exciting thing you’re working on now?

SE: There are two things we are doing right now that I’m really excited about. One is brain-phenotype relationships. Can we actually infer complex phenotypes from brain imaging data? Can we predict personality traits or cognitive performance? And the key aspect here, given that there’s quite a lot of work on this already, is: can we predict them in an interpretable fashion? What we need is good predictive performance, while also learning something about brain organization. This is one of the aspects that I really want to push, as it’s not highlighted enough in current discussions. It’s one thing to be on the data-driven side, and to get good compressions, good predictions. That is, without any question, awesome. But in the end, we also want to learn something about the brain – how the brain is organized. There’s a lot of work going on in our lab that tries to combine the more data-driven work from a computer science perspective with the more traditional neuroanatomical view.

The second part is more related to brain mapping. You can describe the brain through a lot of different features. For each point in the brain you can ask: what are the structural properties, what is the trajectory as we age, how is it disturbed or changed in people with Parkinson’s disease or schizophrenia? But also, what are the functional connectivity profiles at rest and during task, what are the structural connectivity profiles, and so on. You can use each of these features to map the brain and to delineate brain areas. But how does all of that fit together? That’s the critical question, and cracking this kind of topographical code is something that we can hopefully get closer to. And it’s pretty exciting!

TY: Moving forward – what do you hope your research will accomplish in the next 5-10 years?

SE: Well, if we manage to do the things we just mentioned, I’ll be quite happy!
The predictive modeling, but also the brain mapping – understanding the organization and topographical complexity of the brain. That’s going to be fantastic! Looking ahead, I’m not sure if it will be done in five years, but it will be really exciting if we can go outside of the academic field, beyond doing research just to get the next papers, and mature enough to actually bring our research into clinical practice. Five years – I’m skeptical we’ll get there, but over 10 years, I’d be more confident. If my team is able to contribute to that, it would be fantastic. And we’re working towards that goal.

TY: What do you think is the biggest obstacle right now?

SE: Towards clinical application? In the end, there’s so much flexibility in the analysis of imaging data. We always hope that we live in a perfect world, where you acquire data, then do one single analysis that was planned a priori, and that result gets published. But I’m not entirely sure that most labs do that. The thing is, if you really want to measure yourself by, say, the standards of clinical trials, that would need to happen. You’d have to have your analysis plan ready, deposit it, acquire the data, carry out that single analysis, and report the outcome. Then, if you want to have something that’s clinically useful, it needs to have a really high accuracy and predictive value. What I really value is the current push towards more methodological rigour. I’m really happy to see that it’s becoming slightly easier to publish null results, and that new methods are not judged solely by “better” performance.

TY: So you’re saying that p-hacking is a problem, but it’s a bit unclear to me whether that’s the main problem. Even with the high-quality Human Connectome Project data, the predictive accuracy is not that amazing.

SE: Right, we still have a lot more work to do. And since you’ve mentioned the HCP dataset, this is not what you’d get in a clinical setting, where the data is acquired in a short time by a technician who is less invested than, say, a PhD student. The patient may also not be as motivated as a research subject to lie still, comply with instructions, and so on. Most likely, the field will split into different domains. One will use very high resolution, very intense sampling, and a lot of valuable data for each individual. That will be great for understanding brain organization. But there will also be the other side, which will deliberately say, “I want to use low-quality, clinical data.” These adjectives may describe the same thing, though the latter just sounds better [laughs]. We are using standard clinical-quality data and we know the data is bad, but we also know that we need to find something that works on such data if we want to make an impact beyond research as an academic discipline. So we have to be up for the challenge! One thing we are doing a lot now is deliberately making our lives difficult by combining data across many different sites, different scanners, different populations, different continents. That way, the dataset is diverse – often even bad – but we’re happy to accept some drop in performance, because we know this is genuine performance. We have to improve it, but at least we’re not tricking ourselves into believing that we’re doing extremely good predictions that don’t hold up in real life.
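As a concrete, minimal sketch of the kind of cross-site evaluation Simon describes, a model can be scored with leave-one-site-out cross-validation, so that every test fold comes from a scanner the model never saw during training. The snippet below uses scikit-learn; the features, phenotype, and site labels are random placeholders, and the ridge model is only an illustrative choice, not his lab's actual pipeline.

# Sketch of leave-one-site-out cross-validation for brain-phenotype prediction.
# X, y, and site are random placeholders standing in for connectivity features,
# a phenotype score, and scanner/site labels.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 400))   # 200 subjects x 400 imaging features (placeholder)
y = rng.standard_normal(200)          # phenotype to predict (placeholder)
site = rng.integers(0, 5, size=200)   # which of 5 sites each subject came from (placeholder)

# Each fold trains on four sites and tests on the held-out fifth, so the score
# reflects generalization to an unseen scanner/population rather than site effects.
scores = cross_val_score(Ridge(alpha=1.0), X, y,
                         groups=site, cv=LeaveOneGroupOut(), scoring="r2")
print("Mean R^2 across held-out sites:", scores.mean())

The same split can be made harsher by holding out entire populations or continents, which is closer to the deliberately difficult setting he mentions.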
TY: We are at Whistler now and just had an exciting workshop held by Todd Constable. I’ve noticed that the talks from more senior professors seem to cover a lot of papers, whereas someone more junior (like me) will talk about two papers. Is this what I should be aiming for?

SE: I think it’s a matter of personal style. But maybe you are right, and personal style changes with age. You really have two choices when you’re giving a talk: you can either take a high-altitude view and present an overarching picture, or dive into one topic in a lot more detail. One explanation for your observation is that when you are younger, you simply have fewer papers you really want to talk about. When you get to a certain stage, you have a lot of papers you could talk about, so you need to decide whether to go deep or go broad. It really has to fit your style and what you’re comfortable with. For me, I usually want to give an overview, as we’re making great efforts to put the puzzle pieces together. We have different studies that may not be particularly related to each other, but you can see the crosstalk, and you can see the connections that I’m so excited about. That’s why I talk about so many different things.

TY: I’m on Twitter and see that you (@INM7_ISN) have strong perspectives on open science and the replication crisis. Do you want to comment on that?

SE: Yes – I’m a big chimera when it comes to that. On the one hand, I’m a big proponent of open science. Most of the work we’re doing depends on shared datasets. In fact, long before the term “open science” became popular, I was sharing my software. Back in 2004, I was developing the SPM Anatomy Toolbox. That was just open MATLAB code, and I was still an undergraduate at the time. So I’m a big fan of open science, but what worries me at times is a certain tone in the debate. Sometimes there’s a patronizing aspect to it, a moral argument of “you have to” and “how can you not”. I think that for open science to grow, we need to take concerns seriously.

Perhaps by virtue of being around the open-science and computer-science environments, but also around the very traditional German medical environment, I can see that there are two sides to the argument. In the German medical environment, I don’t think anyone who would review your grants or hire you cares too much about whether your dataset is open or whether your software has been released. There’s a more traditional focus on publications, with a lot of weight on impact factor and grant money. This is a completely different world. An example: if you’re a software developer, then your product, your outcome or claim to fame, is the thing (a software tool, a repository, any other resource) that you distribute freely on the internet. If you are someone who spent years collecting data on a rather rare disease – recruited patients, talked a lot with them, followed them up clinically, evaluated them repeatedly, and put them in the scanner – then this data is a resource, an extremely valuable resource. And it comes with the assumption that this data will allow you to get enough out of it to carry you to the next step of your career. If you are then told, “Well, you are unethical and doing something terribly wrong by not immediately sharing the data freely after the first paper”, this does not put open science in a good light. I think open science will succeed, and it will be a tremendous accelerator of knowledge. But for that to really happen, we need to take people’s concerns seriously.
There will definitely be development at different speeds, with things moving faster in some fields than in others. But it’s not the case that the fields that move faster can look down upon those that are not as fast. I don’t think anyone is opposed to open science out of personal conviction; it’s more about needs and rewards, and we need to take these views seriously. In the long term, there has to be a better incentive structure. At the moment, we are conservative about it in the German medical system – one of the main hiring criteria is cumulative impact factor, the sum of the journal impact factors of all your papers. H-index, citations and so on do come into play, but the fact that cumulative impact factor is a major evaluation criterion shows you that different fields still evaluate contributions differently. At some point, open science practices will need to be rewarded not just morally, but also practically, by selection committees. That will take quite a bit of time, though.

TY: So how do we change the minds of selection committees?

SE: Well, we just mentioned that we are getting older [laughs]. In some ways there is another generation before us, those who are really not used to it. This will perhaps change over time. Then at some point we have to find criteria for quantifying open science. You can show things on the internet to a review committee – for example, that you have 500 or so MATLAB scripts that you are sharing – but will that give you a job? If committee members are from another field, they might not get the value, so there needs to be some way of quantifying these contributions objectively. Then a committee made up of psychiatrists, dentists, or structural biologists (and committees are often that diverse) can refer to some numbers that give an assessment of your open science practices. That would be a big step forward.

TY: Have you heard of the idea that once you come up with a set of numbers, they will be gamed?

SE: Sure, but this will always happen. We would like a perfect world where all decisions about hiring or promotion are made by people who are experts in your field, spend several hours scrutinizing your CV or your 10 most important papers for context, and read the related literature for comparison. But this just won’t happen. Maybe I’m part cynic, part realist, but most people are overloaded with committee duties, so you need some easy summary of a person. Yes, this will be gamed – there’s no way around that – and we hope there is someone on each committee who can point out the gaming and takes the time to know your work more deeply. But usually you have to convey the importance of your work to people who are not familiar with your publications or your topics.

TY: Thank you so much for this interview! We look forward to attending Simon’s exciting keynote on Monday, June 18, 2018.