By Ilona Lipp
Every year, the OHBM gives out an Education in NeuroImaging Award, acknowledging significant contributions to education and training in our field. This year’s award went to Jeanette Mumford. Jeanette is a well-known fMRI stats guru who spreads her knowledge not only through her published papers but also through YouTube and Facebook, and through the Handbook of Functional MRI Data Analysis, which she co-authored. Many of you may also have tried her fMRI power analysis tools. I had the pleasure of meeting Jeanette in Rome and interrogating her about how she became such an enthusiastic educator and about her views on neuroimaging research.
Ilona Lipp (IL): Your YouTube videos must have contributed to you winning this award. Your channel has over 2600 subscribers - and without any cat videos! What was the motivation to start making YouTube videos about brain stats?
Jeanette Mumford (JM): I had just moved to the University of Wisconsin-Madison and I loved my new job, but one thing I had done at the University of Texas was teach an fMRI analysis course, and I kind of missed it. At the same time that I was missing teaching this class, I was doing yoga on YouTube with ‘Yoga with Adriene’. She has all these free classes, and every year she runs a 30-day yoga challenge where you do yoga every day. I thought ‘Oh, this is really cool for yoga - well, I could do free fMRI statistics classes online’. So I decided to put the two together: take my semester-long course, break it up into digestible little 10- to 20-minute videos, and roll that out at the end of the summer so people starting graduate school could get a head start on fMRI analysis.
IL: When someone asks me how to get started with their fMRI analysis, I say: “Go watch Jeanette’s videos!” So tell me, how did you learn all your brain stats back in the day - without such YouTube tutorials at hand?
JM: Luckily, I had an amazing graduate advisor - Tom Nichols - so I obviously had to read all the fMRI statistics papers, which was tough, if you ever read… oh, I’m not gonna say that. But if I got stuck on those, he'd help me out. He's a really good teacher as well; I watched him teach fMRI statistics at the University of Michigan summer course. Most of my fMRI stats knowledge I learned through him, and then more through taking the FSL summer course at the University of Michigan. I also started following the FSL mailing list - FSL tends to be the software package I use. A great way to learn for anyone starting out is just to follow the emails and read them every once in a while; you learn a lot of new things.
IL: You trained as a biostatistician. Most people in our field, and here at OHBM, are not statisticians, but psychologists, biologists, physicists, physicians, engineers etc. How do you approach teaching something like stats in a comprehensible way to people coming from such different backgrounds?
JM: I actually made a conscious choice when I was graduating to do a postdoc that wasn't in biostatistics, because this field is all these people from diverse backgrounds working together. I wanted to work with somebody who was either a biomedical engineer or a psychologist, and I ended up with Russ Poldrack at UCLA. Working with him and his lab, I learned how to explain things so that non-statisticians understand, and of course I learned a lot from them in turn. Russ's lab is especially good at giving feedback, so you quickly learn - and learn how to teach.
IL: Have you developed strategies and do you have any teaching advice?
JM: Yeah, definitely be okay with making mistakes and with not knowing the answers to all the questions. I remember the first time I taught fMRI statistics at the NITP summer course: some of the questions I hadn't heard before and I hadn’t really thought about the answers, so a lot of times I had to just say ‘I don't know’. And after saying ‘I don't know‘ a few times, I thought ‘oh my God’. But then one of the other speakers came up to me afterwards and said that it's refreshing to hear someone simply say they don't know - as opposed to another strategy I see, which is to answer a related but different question to the one that was asked, use a lot of words, and confuse the person who asked. I don't know if it's necessarily done intentionally, but I think it's fine to say ‘I don't know, but I'll look into it for you’. I do that a lot even now.
Another mistake I’ve made when teaching is dwelling on an error. I wondered if it's something women do more than men, but I don’t think that’s true. I’d think ‘Oh, I can't believe I did that’, and I’d say so, say ‘sorry’, and keep apologizing. After my first year of teaching at Wisconsin - a different class - I was reading the student comments and thought ‘Oh, it's that thing I got hung up on’. You're basically feeding them the information to criticize you with: if you're criticizing yourself, then they're going to do that too. So now, if I make a mistake, I correct it - obviously, I'm not going to pretend I didn't make a mistake, because I'm human - so be human, correct the mistake, and move on. ‘Oh, I made a mistake. Here's the right answer, we're moving on now’. They're not going to be as devastated as you are that you just said something totally wrong.
Also, if you've taught something for a while, add something new that you don't know yet and try to teach it, because it makes things more exciting. And if you're not excited to teach something, don't teach it, because you're going to give a boring lecture - if you're bored, everyone else will be bored. So I add new material to my courses every year.
IL: This is a controversial question: doing stats properly - if that is even possible - is already complicated, brain stats are even harder to grasp, and even the experts can fail to reach a consensus on issues like false positive rates. Would it be better to have a few specially qualified people who just do analyses and have the time to keep up to date with best practices, rather than educating the masses on how to do their own MRI analysis?
JM: So when they build cars, they have an assembly line, right? A bunch of people each do one specific task and at the end you get a car. That works really well for cars, but I think it's a horrible idea for scientific papers. If one person does the data collection and a different person does the data analysis, there's a gap in the communication: things may have gone wrong during data collection that affect the analysis, and then somebody else again is interpreting the analysis results and writing the paper. I just don't think it really helps move our science forward, so I actually prefer teaching folks. First of all, most neuroscience people are really smart, and their statistics background is pretty strong compared to other graduate students. They can pick things up pretty quickly, learn enough to do a good job, do all the analyses themselves, and then the papers are more likely to be coherent. So, yeah, I'd rather train ten people to do data analysis than do ten data analyses myself.
IL: Anders Eklund’s PNAS paper three years ago created quite a panic in our field, suggesting a vast number of false positives in neuroimaging research. What is your personal standpoint on this, and what recommendations do you have for minimizing the risk?
JM: You know, the paper - for better or worse - made a pretty big splash. I think a lot of great things came out of it: software is better, people have a better understanding, they paid attention. Unfortunately, some other people used it to try to tear down our field a little bit.
A lot of people now ask ‘Do I always have to run permutation tests? Because they take so long’. I mean, for most of what we do, you can set them running on a cluster and it doesn't take that long. I think permutation tests are great and a lot of people I work with use them. But I'm still okay with the parametric approaches, as long as you use a higher cluster-forming threshold, because - that was in the paper as well - Type I error control was fine as long as the cluster-forming threshold wasn't low. So now the default in FSL is that higher threshold; that's one of the improvements.
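For readers wondering what a permutation test actually does, here is a minimal sign-flipping sketch in Python. It illustrates the general idea for a one-sample group test, not FSL’s actual randomise implementation; the function name and example data are made up:

```python
import random

def sign_flip_pvalue(data, n_perm=5000, seed=0):
    """Two-sided one-sample permutation test via sign flipping.

    Under the null hypothesis that the group mean is zero, each
    subject's contrast value is equally likely to be positive or
    negative, so a null distribution can be built by randomly
    flipping signs and recomputing the group mean.
    """
    rng = random.Random(seed)
    n = len(data)
    observed = abs(sum(data) / n)
    exceed = 0
    for _ in range(n_perm):
        flipped = sum(v if rng.random() < 0.5 else -v for v in data)
        if abs(flipped / n) >= observed:
            exceed += 1
    # add-one correction so the p-value is never exactly zero
    return (exceed + 1) / (n_perm + 1)

# hypothetical per-subject contrast estimates with a clear positive effect
effect = [0.8, 1.2, 0.5, 1.1, 0.9, 0.7, 1.3, 0.6, 1.0, 0.4]
print(sign_flip_pvalue(effect))  # small p-value: the effect is unlikely under the null
```

Tools like FSL’s randomise apply this same resampling idea voxel-wise, combined with cluster or TFCE statistics, which is where the longer run times come from.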
IL: Registered reports are slowly taking off. One concern I have heard several times now is that they are hard to do with brain imaging studies because they require a thorough power analysis. You have been working on power analysis for fMRI. Do you think there are reliable ways to estimate power for fMRI studies?
JM: Actually, my stance on requiring power analyses has changed a lot over the years. I used to think everybody needs to do them, but then I realized that if people are forced to do them, they're just going to make up numbers - and they're already kind of made up anyway: we’re using pilot data that might be really noisy and not correct, and there are other issues with power analyses, of course. But registered reports, I think, are a little more lenient with that. I don't want to cheapen the power analysis, because then people aren't going to take it seriously. And you get a lot of benefit out of doing it anyway, because you think about your hypotheses more clearly, connecting them with what the data are going to look like, and connecting the data with what the models are going to look like. Usually when I'm helping a PI with a grant, it can take two or three meetings to make the transition from their idea to the hypothesis test we're actually going to run. And when I read protocols people have written, you can always tell when they've done a decent power analysis, because the methods section is better, even if the power analysis itself is a little hand-wavy. But I think we can go with our guts too: if you have 30 subjects and you're looking at a correlation between BOLD activation and a behavioral measure, it's not enough.
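As a rough gut check on that last point, the power of a correlation test can be approximated with the Fisher z transform. A quick sketch in Python, where the effect size r = 0.3 is my assumed, fairly typical brain-behavior correlation (not a figure from the interview):

```python
import math
from statistics import NormalDist

def normal_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def correlation_power(r, n, alpha=0.05):
    """Approximate power to detect a Pearson correlation r with n
    subjects, via the Fisher z transform (two-sided test)."""
    z_effect = math.atanh(r) * math.sqrt(n - 3)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # power = chance the test statistic lands in either rejection region
    return normal_cdf(z_effect - z_crit) + normal_cdf(-z_effect - z_crit)

# assumed brain-behavior correlation of r = 0.3 with 30 subjects
print(round(correlation_power(0.3, 30), 2))  # roughly 0.36 - badly underpowered
```

In other words, with 30 subjects and a plausible effect size, you would detect the correlation only about a third of the time, which is the kind of gut-level number that backs up the quote above.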
IL: So far, we have mostly been talking about brain stats. Apart from the statistical challenges, what do you find most exciting about brain imaging?
JM: I think it's how the community has changed over the years. I've been coming to this conference almost every year since 2003, and it's so different now: all the changes with the attention that's been given to diversity, all the new special interest groups, and I feel like there's a lot more support for postdocs and graduate students. Improving the community, the attention to open science, people working together more.
And I’m trying to get more into machine learning - that'll be my next thing on the YouTube channel. People keep asking, and I hadn’t thought of a good collection of papers to cover that would go together, but I think I’ve got some now.
Many thanks, Jeanette, and congratulations on the award!