BY THOMAS YEO
Professor Kalanit Grill-Spector is the principal investigator of the Vision and Perception Neuroscience Lab in the Department of Psychology and the Stanford Neuroscience Institute at Stanford University. She will give a much-anticipated keynote lecture at the upcoming 2017 OHBM Annual Meeting in Vancouver. We caught up with Professor Grill-Spector to discuss her illustrious research career.
Thomas Yeo (TY): Imagine that you meet some random person off the street. How would you describe your research to the person?
Kalanit Grill-Spector (KGS): The core of my research is figuring out how the brain enables us – as humans – to understand what we see. Therefore, my research examines how the function, anatomy, and computations of the parts of the brain that are involved in visual processing relate to visual perception. Additionally, I am interested in uncovering how these parts of the brain develop from childhood to adulthood and what aspects of this development are shaped by experience.
TY: How did you end up on this research path?
KGS: I first studied electrical engineering and computer science (CS). However, when I started working as an engineer, I did not like it. Then I discovered the fascinating world of vision science by reading a Scientific American article by Semir Zeki in 1992. I thought, “I want to study this too!” Someone told me that there was a group at the Weizmann Institute of Science working on computational vision. I asked to audit a seminar jointly run by professors in CS and Neurobiology. One of the meetings was devoted to a Nature paper by Fujita and colleagues from Tanaka and Cheng’s group: Columns for visual features of objects in monkey inferotemporal cortex. The CS and Neurobiology researchers were so passionate in their heated discussions that I got hooked. I wanted to be part of something that was obviously important. So, I applied to graduate school at the Weizmann Institute of Science. I started my scientific career in computational modeling of orientation columns in V1 with Shimon Edelman and Rafi Malach. During this period, Rafi returned from a sabbatical at MGH (Boston) and said, “there is this new thing called functional magnetic resonance imaging (fMRI)”. No one at Weizmann believed that it would be a worthwhile research direction, but Rafi had a vision and was both energetic and stubborn. So, armed with energy and determination, Rafi and I set out to develop the first fMRI system in Israel. I think we would all agree that it turned out to be successful after all…
TY: What is the most exciting thing your lab is working on now?
KGS: C’mon, if I’m spending time working on projects, obviously they are all exciting to me! But if you twist my arm, Thomas, there are two things that I’m perhaps most excited about. One is integrating many types of anatomical and functional in-vivo neuroimaging measurements within individual brains to understand the interplay between structure, function, and behavior, especially in the context of development. The second is developing the next generation of computational encoding models of the visual system, with the goal of predicting cortical responses resulting from both bottom-up and top-down inputs.
TY: Moving forward, what do you hope your research will accomplish in the next 10 years?
KGS: I am hopeful that we will make big strides in three domains: (1) Understanding the structural and functional development of the ventral visual stream and in particular which aspects of development are shaped by experience. (2) Elucidating the anatomy of the visual system. We are far behind in understanding the anatomical constraints underlying the function of the visual system. For example, we do not even have a wiring diagram of the white matter tracts in the visual system in the human brain. (3) Developing precise computational models of the visual system based on empirical functional and anatomical measurements in humans. The development of such models will hopefully help us understand how the brain perceives and recognizes the visual input.
TY: Can you give us a teaser or preview of your OHBM keynote lecture?
KGS: In the lecture, I will address a central neuroscience question: how do brain mechanisms develop from childhood to adulthood to enhance behavior? I will use the ventral visual stream as a model brain system to address this question, and present data addressing two main developmental hypotheses: pruning and growth. Since this is OHBM, I will underscore how recent advances in multiple non-invasive neuroimaging approaches give us – as a field – a powerful toolkit to study how the development of brain function and structure relates to behavioral development.
TY: What do you think is the most exciting development happening in your area or in the broader field of neuroscience?
KGS: In my field, computational and anatomical advancements have made a big impact. One hot area is combining neuroimaging measurements of brain function with computational (encoding) models that not only predict brain responses, but also explain the underlying computations. The second is significant advancement in anatomical methods including quantitative MRI and diffusion weighted imaging as well as improved analysis tools to define tracts and determine cortical networks. While fMRI has significantly advanced our ability to map human brain function, we still know very little about the underlying anatomy.
TY: Who are the people that have inspired you throughout your career?
KGS: The people who have inspired me and continue to inspire me are those who push me in new directions. For example, Rafi Malach (my PhD adviser) taught me how to ask – and answer – big questions with enthusiasm and attention to detail. Brian Wandell (at Stanford) is an excellent mentor who continuously makes it clear that the gold standard for good research is one that can be articulated in precise, quantitative, and reproducible terms. My students have taken me on new paths. In particular, Kevin Weiner, who took me on a successful trip down neuroanatomy lane (I’d never have thought that I’d drive there), and Jesse Gomez, who has introduced new neuroimaging methods to the lab.
TY: What is the biggest challenge that you have experienced in your research career?
KGS: One of the biggest challenges I experienced as a young researcher was coping with adversarial reviews of papers and grants. For example, one of my most cited papers is my 1999 Neuron paper about fMRI-adaptation. However, the first round of reviews was tough to handle. Some parts of the review were insulting – for example, a reviewer wrote that our findings were mundane. Other parts were so nasty that we don’t even know what the reviewers wrote, as the editor blocked them out with a black marker. As a young researcher, I can tell you that this review really shattered my confidence. Critically, however, this early experience taught me fast that to become a scientist, you not only have to be determined, but you also need to develop a thick skin. So, while this and other challenges have affected me, they didn’t deter me from staying on this great path of scientific discovery.
TY: What advice would you give to aspiring researchers?
KGS: Neuroimaging today is an increasingly computational field, so I have two main pieces of advice. First, you should really learn how to code. You need to be able to understand, own, and develop your research tools. Importantly, being able to code will also help you do reproducible research. Second, inspect your data, not just the summary statistics – from individual participants’ data, to individual brain anatomies, even individual voxels in each participant. Computational tools and automation do not preclude close contact with your data. On the contrary. This type of approach has helped me understand the consistent and variable aspects of the data across participants and also flag artifacts that would otherwise have been missed.
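Grill-Spector’s second piece of advice can be made concrete with a small, entirely synthetic sketch: a group summary statistic can look perfectly reasonable while a single artifactual participant drives it, and a quick per-participant look flags the problem immediately. (All numbers below are made up for illustration.)

```python
import numpy as np

# Toy illustration (synthetic data): a clean-looking group mean can hide
# an artifactual participant that only shows up on individual inspection.
rng = np.random.default_rng(42)
effect = rng.normal(loc=0.5, scale=0.1, size=19)  # 19 plausible effect sizes
effect = np.append(effect, 8.0)                   # one participant with an artifact

group_mean = effect.mean()                        # the summary statistic alone

# Per-participant inspection: deviation from the robust group median
deviation = np.abs(effect - np.median(effect))
suspect = np.flatnonzero(deviation > 1.0)         # far outside the plausible range
```

The group mean here is inflated well above the typical per-participant effect, yet nothing about the single number looks wrong; only listing the individual deviations exposes participant 20 as the culprit.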
We thank Professor Grill-Spector for her time in answering these questions. We look forward to attending her exciting keynote, “Brain Growth and the Development of Face Recognition”, on Monday at 9:30 am in Ballroom AB.
BY SHRUTI GOPAL VIJ
Human nature dictates that each and every one of us seeks guidance on life choices and trajectories. A key to this is mentorship. As scientists navigating the ever-hardening world of academia, it is vital today to find a mentor: one who can show you the shortcuts, encourage you, applaud your achievements, and support you in tough times. While some of us are lucky to find such mentors in one form or another, a large number of students, postdocs, and other early-career researchers are left in the lurch. On the other hand, neuroimaging has quite a few established researchers and PIs who have themselves taken a long, winding path, picking up tips along the way that make them great mentors. The 2017 OHBM Student and Postdoc SIG aspires to provide a platform for both mentors and mentees to come together and establish an independent and effective mentoring relationship. This initiative, spearheaded by AmanPreet Badhwar (chair) and Michele Veldsman (chair-elect) with enormous support from SIG officials, covers two aspects: 1) a Mentorship and Career Development Symposium at OHBM 2017, and 2) an online mentorship program.
The Mentorship and Career Development Symposium, slated for Wednesday, June 28th 2017 (12:00-14:30), is aimed at imparting meaningful information on how to navigate early career transitions. Since an overwhelming number of our social media followers requested information on non-academic or industry transitions, the symposium is set to answer many of the hard questions facing junior researchers. The program promises to cover a variety of topics, such as transitioning to PI, science writing and journal editorial positions, dealing with micro-aggressions in work environments, managing work-life balance, starting a business after a PhD, and much more. Following short talks from individual speakers, the panel will answer questions from the audience on what it is like to be a PI and run a lab, and on what they look for when hiring junior researchers for academic and non-academic positions. The collection of 7 speakers and 5 panelists includes a variety of academic and industry experts who are not only approachable and personable but also well-equipped to offer meaningful advice. If you have questions that you would like featured in the panel discussion, please feel free to send them to us in advance.
While much has been said about the importance of mentorship, and about the need for trainees to be proactive in finding appropriate mentors, the trainees of OHBM have been largely left to their own devices. To further bridge this gap, the OHBM Student and Postdoc SIG has announced an online mentorship program that promises to bring together mentors and mentees from around the world throughout the year. The idea is primarily to give mentees another platform for seeking support outside of their current environment and to expand their knowledge of the ever-growing range of academic and non-academic careers. The online forum was recently announced, and sign-up for this year closed on 1st May 2017. Even in this first year of such trainee-focused initiatives, we have an impressive enrollment of 331: 89 PIs have signed up to be mentors, 143 trainees have signed up to be mentored (and will be pleasantly surprised by who they end up with as a mentor), and an additional 88 brain mappers have signed up to be both mentors and mentees. The latter are principally postdocs, who are vital rungs in the academic ladder and can provide invaluable advice to students while also seeking support for their own career development. The SIG will match mentors and mentees and introduce them via email. Mentors are asked to meet their mentees online once every quarter and in person at the annual meeting, establishing a mutually beneficial relationship for the betterment of science all around.
I have also signed up, both as a mentor and as a mentee. My reasons are simple. My career trajectory to a postdoc has been unique, with its own ups and downs, and the kinds of jobs best suited to me are also unique, even if they may be hard to come by. If I can help other scientists, and especially women scientists, break more boundaries, I personally consider that a win! There are many ways of reaching our career end-goals, and I believe the only way to learn is to participate and talk. So here I am, participating, discussing, and learning. I welcome you to come participate, discuss, learn, find new mentors, and become great brain mappers. Let’s make science self-supporting and self-sustaining in today’s age of uncertainty!
Suggestions, questions and comments are most welcome at @OHBM_trainees, Facebook and email@example.com.
BY DAVID MEHLER
In a recent blog post we learned about the activities of the OHBM Committee on Best Practices in Data Analysis and Sharing (COBIDAS), whose members work on establishing recommendations and tools to increase transparency and reproducibility in human neuroimaging. Together with other early career researchers I was fortunate to recently attend a workshop dedicated to Advanced Methods for Reproducible Science. There, a number of pioneers in reproducible science discussed the challenges of the field, and introduced ways to improve current practices. As part of this, Dr. Russell Poldrack discussed creating reproducible research pipelines for neuroimaging.
Russ Poldrack is a professor of Psychology at Stanford University, where he also heads the Stanford Center for Reproducible Neuroscience. He presented an exciting new framework for reproducible neuroimaging: BIDS Apps, applications built on the Brain Imaging Data Structure (BIDS) standard. Russ agreed to an interview, providing an ideal opportunity to find out more about his views on the reproducibility crisis in science and to get his recommendations for the field.
Whenever you find a seemingly good result – one that fits your prediction – assume that it occurred due to an error in your code. - Russ Poldrack
1) David Mehler (DM): How would you describe the reproducibility crisis in psychology and neuroimaging to a (tax paying) member of the public?
Russ Poldrack (RP): I would explain it like this: Some of the research practices that scientists have used in the last few decades have turned out to generate results that are less reliable than we thought they were. As we have come to recognize this, many researchers are trying to change how we do things so that our results are more reliable. This is the self-correcting nature of science; we are human and we make mistakes, but the hallmark of science is that we are constantly questioning ourselves and trying to figure out how to fix the problems and do better. An important part of the problem is that researchers are not currently incentivized by the system to do reproducible research; there is much more pressure to publish large numbers of papers in high-profile journals, which focus more on splashy findings, than there is to make sure that those findings are reproducible.
2) DM: The definition of direct and conceptual replications can be debatable and it is not always clear how close a replication must be to the original study to count as a direct replication attempt. In neuroimaging, best practice for each element of the processing pipeline might change over time and these changes can affect the final result. In your view, what constitutes a successful direct replication in neuroimaging?
RP: It’s a challenging question. On the one hand, you would hope that the minor details don’t matter very much; if they do, then the result has limited generalizability and thus is probably not that important even if it’s true under those specific circumstances. On the other hand, we know from the work of Stephen Strother and his colleagues, and from the work of Josh Carp that processing choices can make a substantial difference. In my opinion, what’s most important is that a replication attempts to be as close as possible to the original study in its details, recognizing that this will never be fully possible. If a well-powered replication attempt of an important study fails, then it’s the responsibility of the field to determine whether the replication attempt reflects true lack of effect, differences in methodological details, or random fluctuations. It’s worth remembering that some number of well-powered replication attempts will always fail due to chance even when there is a true effect, and thus a single replication failure should not necessarily cause us to abandon the initial finding.
3) DM: Do current open science/data practices favor senior researchers, who already have tenure and high impact publications, over junior researchers, who often must put in the extra work? If so what can be done about it?
RP: Yes, definitely. Doing reproducible science will almost certainly make it harder to succeed by today’s criteria of large numbers of publications in high-profile journals. Just as one example, I have become convinced that pre-registration of study design and analysis plans is critical to improving our science. However, doing a pre-registered study makes it more likely that one will come up with null effects, because there is no flexibility to tweak the analysis until a significant effect is found. I think there are a few ways to address the problem. First, established researchers need to lead by example; if we can’t engage in open and transparent research practices then there is no way that we can expect the younger generation to do so. Second, we need to pay more attention to open and transparent practices when we are judging job applicants, tenure cases, and grant proposals. This is much harder than simply counting up numbers of publications and impact factors, but it’s the only way that we can ensure that people doing solid research have a chance of making it on the job market, since they will always be outgunned by those who use shoddier practices to get papers in high-profile journals. One way to help with this was suggested by Lucina Uddin in a recent Tweet, where she described adding a section titled "Contributions to Open Science" to her CV; I could see this listing things like shared datasets, code, and pre-registrations. This would help signal that one is committed to open and reproducible science.
4) DM: This brings us back to the role of incentive structures. Together with other OHBM committee members you have recently initiated an OHBM Replication Award for the best neuroimaging replication study. What is your vision for a system that creates the “right” rewarding and incentive structures to promote data sharing and open science work?
RP: Foremost, people need to get credit for their efforts. The rise of “data papers” has helped with this, since now a person can get citation credit for a shared dataset when it’s used by others. Registered Reports are another good move in this direction, as they ensure that one will get a publication for a well-designed study regardless of the outcome. As I mentioned earlier, we also need to work to make these practices more central to our hiring and tenure decisions; changing these kinds of processes is challenging, and requires more senior researchers to take the lead, which many of us are trying to do but it’s an ongoing effort.
5) DM: Thanks Russ. Finally, what is your main message for early career neuroscientists? What would you advise them to look for when choosing a lab and planning their career path?
RP: First, focus on finding a scientific question that fascinates you. Science is full of long hours, intense criticism, and repeated disappointments, and only a burning scientific question will give you the continued motivation to persevere. Second, find a lab that shares your values. Talk to people in the lab and find out whether they have adopted the kinds of practices that would make you feel confident that your interest in openness and transparency will be supported and nurtured. Third, be open to change. It’s natural to make plans for the future, but often the world has different ideas for us, and it’s important to be able to take advantage of the best of whatever your situation has to offer you, even if it’s not what you initially planned for. Finally, realize that we are humans and we make mistakes, so nothing you do will ever be perfect. One unfortunate consequence of the reproducibility crisis is that it seems to have led many trainees to worry that their work is never quite good enough, and that someone in the future will find a flaw or fail to replicate their work. This is a problem because if you don’t get the work written up, you will never get credit for having done it, regardless of how clever the experiment was. Science is a process for attaining knowledge, not an endpoint, and we need to keep that in mind. We should do the best we can to make our work transparent and reproducible, but also realize that at some point you just have to put the work out there for the world to see.
Figure 2: Attendees and speakers of the Advanced Methods for Reproducible Science workshop at Cumberland Lodge, Windsor. The workshop was organized by Dr. Dorothy Bishop (top row, 5th from the left), Dr. Chris Chambers (top row, 4th from the left) and Dr. Marcus Munafo, and funded by the Biotechnology and Biological Sciences Research Council (BBSRC) and the European College of Neuropsychopharmacology (ECNP).
Dynamic Functional Connectivity – A Brief Overview and Latest Thoughts from the Rotman Research Conference on Neural Dynamics
BY JEAN CHEN
For those who have not yet come across functional connectivity in their research, it won’t be long before you do. In the human brain mapping community, functional connectivity is often defined as the correlation between brain regions that share functional properties (activation patterns or fluctuations). Functional connectivity can be measured in an active or “resting” (task-less) brain state, using electrophysiological, optical and MRI methods. In recent years, the brain’s functional connectivity has begun to capture the public’s imagination in a tangible way. In 2009, the National Institutes of Health launched the Human Connectome Project to map all connections in the brain, including functional connections. This was followed by the European launch of the ambitious Human Brain Project in 2013. Today, beyond helping us to understand how the brain works, functional connectivity measurements are widely used in studying brain aging and brain diseases. Some examples include ADNI (USA), BIOCARD (USA), ONDRI (Canada), CCNA (Canada), SMART (Europe), Rotterdam Study (Europe) and the Sydney Memory and Aging Study (Australia).
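For readers new to the idea, the “correlation between brain regions” definition above can be made concrete with a short sketch (a toy illustration on synthetic data, not any particular study’s pipeline): static functional connectivity is simply the matrix of pairwise Pearson correlations between regional time series.

```python
import numpy as np

def functional_connectivity(ts):
    """Static functional connectivity: Pearson correlation between
    every pair of regional time series.

    ts : array of shape (n_timepoints, n_regions)
    returns : (n_regions, n_regions) correlation matrix
    """
    return np.corrcoef(ts, rowvar=False)

# Toy data: regions A and B share a common signal; region C is independent.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
ts = np.column_stack([
    shared + 0.1 * rng.standard_normal(200),  # region A
    shared + 0.1 * rng.standard_normal(200),  # region B, tracks A
    rng.standard_normal(200),                 # region C, independent
])
fc = functional_connectivity(ts)
```

In this toy case the A-B entry of `fc` is close to 1 while the entries involving C hover near 0, which is exactly the pattern a functional connectivity analysis is built to detect.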
With the extensive use of functional connectivity in various domains, many of us have come to expect functional connectivity, like the fibre structures physically connecting different brain areas, to be more or less stable. Most studies revolve around “static” functional connectivity, averaged into a single quantity over the course of many minutes. The dominant belief is that this “static connectivity” should be reproducible across multiple time points. However, more recent work has shown that obtaining “stable” functional connectivity measurements within individuals is rather difficult, especially for resting-state measurements. This, for me, was when two bodies of knowledge collided. On the one hand, image-analysis research has largely focused on making functional connectivity within an individual more “static”, i.e. more stable across time. On the other hand, many years of electrophysiological research have shown that a healthy brain is a variable brain. It would be natural to reconcile these two streams of knowledge into a coherent and more complete view of functional connectivity. Through dynamic functional connectivity, we can observe moment-to-moment (seconds apart) changes in connectivity. Yet it had seemed to me that by allowing functional connectivity to vary with time, we would lose a degree of tractability in this “wanna-be” biomarker. “How should it be calculated?” “How should it be interpreted?” “Should it replace conventional static functional connectivity?”
The added challenges have not stopped researchers from increasingly embracing this new trend. At the recent Rotman Research Conference on Neural Dynamics, I caught up with some world authorities on dynamic functional connectivity, trying to get a sense of how dynamic connectivity should be measured and what the future holds for it.
Who’s mapping dynamic functional connectivity and why?
The Rotman Research Conference is an annual research conference series hosted by the Rotman Research Institute of Baycrest (University of Toronto). Since its inception in 1990, the Rotman Conference has varied its theme from year to year, but always revolving around important current concepts in cognitive neuroscience and brain aging. The objective for the 2017 conference was to showcase cutting-edge research in neuroimaging of brain dynamics and its clinical translation. Amongst the notable speakers at this year’s conference, Drs. Vince Calhoun, Viktor Jirsa, Randy McIntosh and Cheryl Grady presented their work on dynamic connectivity.
“Is dynamic connectivity a natural next step in functional connectivity research?”
Vince: Yes, it seems like it. We were always looking at functional connectivity associated with tasks, and even during a task block, connectivity within a brain network fluctuates. The brain is constantly variable, even at rest. By averaging connectivity across a 10-minute fMRI scan, a lot of information will be averaged out. We and others first published on this topic in 2010 and since then the field has really exploded. Functional connectivity should be measured within a dynamic context, but there may well be a mixture of both dynamic and static aspects in the data that are useful.
Randy: By construction, the brain needs to be dynamic, or else it wouldn’t work very well. It is how the system is set up.
Viktor: Dynamic connectivity can be more useful than static connectivity, since brain function is dynamic in nature, no matter what. Measures should not be expected to be repeatable. The brain is a nonlinear complex system, and multi-stability of the brain is a necessary feature. This is something we try to capture. Non-stationarity in brain activity is not a bad thing; it can give us much more useful information. There have been studies in which dynamic connectivity provides better prediction.
“How can dynamic connectivity be measured and used?”
Cheryl: In our research, we have been calculating functional connectivity dynamics, that is, connectivity over moving time windows. We then take the variability of connectivity across the windows. We find older adults demonstrating lower variability, and this is a very robust finding. There might always be issues with the sliding-window approach, but I’m not sure what the best approach would be at this moment.
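The sliding-window approach Cheryl describes can be sketched in a few lines (a minimal illustration on synthetic data; the window length and step below are arbitrary choices, not her lab’s actual parameters): compute one correlation matrix per window, then take the variability of each connection across windows.

```python
import numpy as np

def sliding_window_fc(ts, win_len, step=1):
    """Dynamic functional connectivity via sliding windows:
    one correlation matrix per window of the regional time series.

    ts : array of shape (n_timepoints, n_regions)
    returns : (n_windows, n_regions, n_regions)
    """
    n_t, _ = ts.shape
    starts = range(0, n_t - win_len + 1, step)
    return np.stack([np.corrcoef(ts[s:s + win_len], rowvar=False)
                     for s in starts])

def fc_variability(windowed_fc):
    """Variability of connectivity across windows (std per region pair)."""
    return windowed_fc.std(axis=0)

rng = np.random.default_rng(1)
ts = rng.standard_normal((300, 4))       # 300 time points, 4 regions
dfc = sliding_window_fc(ts, win_len=50, step=10)
var = fc_variability(dfc)
```

Averaging `dfc` over its first axis recovers something close to the static estimate, while `var` is the kind of per-connection variability measure whose group differences (e.g. lower values in older adults) Cheryl describes.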
Randy: In the 1970’s, there was much enthusiasm for using quantitative EEG signal as a biomarker, but the signal and its mechanisms were hard to understand. The same is true for functional connectivity. Functional connectivity is a consequence of a cascade of bottom-up and top-down processes, involving molecules and genes. What’s more, functional connectivity in clinical and healthy groups may be similar on the surface but be based on different mechanisms. Dynamic connectivity provides a way to generate more features to characterize the uniqueness of each individual brain.
“What does the future hold for dynamic connectivity?”
Vince: Functional connectivity is progressing towards being an imaging biomarker, but is not quite there yet. I think static and dynamic connectivity both remain useful. Certain connectivity components may vary a lot and others may not vary much. In addition, integrating this information within the larger structural/functional context is important. I am a big proponent of integrating multiple modalities, and in our work, we have been jointly modeling static and dynamic connectivity, letting the data tell us what is useful.
Randy: Until we can get a good understanding of the mechanisms behind functional connectivity, we will not be ready to use it as a biomarker. A biomarker ideally needs to be individualized, and it is not enough for functional connectivity to only show sensitivity when measured in a large group of individuals. For now, to find some way of concentrating the dynamic features of the individual brain would more likely lead to a biomarker.
Viktor: Static functional connectivity is not a bad measure. We just need to be careful how to use it and interpret it. We may need to recognize that non-reproducibility is part of the functional process, and modeling this variability may be a solution towards individualizing functional connectivity. The mathematical modeling is likely to benefit functional connectivity in terms of personalization.
Cheryl: I don’t think dynamic connectivity will replace static connectivity measures. We can still learn quite a bit from static connectivity. It can still distinguish young adults from older adults. What I hope from those that develop new data-analysis methods is an easy and reliable way to account for vascular effects in functional connectivity (and other fMRI) measures.
There you have it. It is likely that the only thing about the brain that’s constant is change. To help characterize it, dynamic functional connectivity may be a scalable brain measure that is accessible to neuroscientists and informatics researchers alike.