Friday, September 25, 2015

How to do cross-validation? Have to partition on the runs?

People sometimes ask why many MVPA studies use leave-one-run-out cross-validation (partitioning on the runs), and if it's valid to set up the cross-validation in other ways, such as leaving out individual trials. Short answer: partitioning on the runs is a sensible default for single-subject analyses, but (as usual with fMRI) other methods are also valid in certain situations.

As I've written before, cross-validation is not a statistical test itself, but rather the technique by which we get the statistic we want to test (see this for a bit more on doing statistical testing). My default example statistic is accuracy from a linear SVM, but these issues apply to other statistics and algorithms as well.

Why is partitioning on the runs a sensible default? A big reason is that scanner runs (also called "sessions") are natural breaks in fMRI datasets: a run is the period in which the scanner starts, takes a series of images at intervals of one TR, then stops. Images from within a single scanner run are intrinsically more related to each other than to images from different runs, partly because they were collected closer together in time (so the subject's mental state and physical position should be more similar; a single hemodynamic response can last more than one TR), and partly because many software packages perform some corrections on each run individually (e.g., SPM motion correction usually aligns each volume to the first image within its run). This fundamental temporal dependency is reflected in the terminology of many analysis programs, such as "chunks" in pyMVPA and "bricks" in afni.

So, since the fMRI data within each person is structured into runs, it makes sense to partition on the runs. It's not necessary to leave only one run out, but if your cross-validation folds are not built from whole runs, you should verify that your scheme is ok (more on that below). By "leaving multiple runs out" I mean holding out two, three, or more runs on each cross-validation fold. For example, if your dataset has 20 short runs, you will likely have more stable (and significant) performance if you leave out two or four runs each fold rather than just one (e.g., Permutation Tests for Classification, AI Memo 2003-019). Another common reason to partition with multiple runs is cross-classification, like training on all examples from one scanning day and testing on all examples from another. With cross-classification there might not be full cross-validation: for example, training on "novel" runs and testing on "practiced" runs might be of interest for an experiment, while training on "practiced" runs and testing on "novel" runs is not. There's no problem with these cross-classification schemes, so long as one set of runs is used for training and a non-intersecting set of runs for testing, in an experiment-sensible (and ideally a priori) manner.
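To make the fold structure concrete, here is a minimal sketch (my own illustration, not from any particular MVPA package; the function name is made up) of generating leave-k-runs-out partitions, in which each fold holds out a distinct set of runs for testing:

```python
from itertools import combinations

def leave_k_runs_out(run_ids, k):
    """Generate (training_runs, testing_runs) partitions: each fold holds
    out a distinct set of k runs for testing and trains on the rest."""
    runs = sorted(set(run_ids))
    folds = []
    for test_runs in combinations(runs, k):
        train_runs = [r for r in runs if r not in test_runs]
        folds.append((train_runs, list(test_runs)))
    return folds

# 20 short runs, leaving out two runs per fold: C(20, 2) = 190 folds
folds = leave_k_runs_out(range(1, 21), 2)
print(len(folds))  # 190
```

With k=1 this reduces to ordinary leave-one-run-out; note how quickly the number of folds grows as k increases.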

But it can sometimes be fine to do cross-validation with smaller bits of the dataset than the run, such as leaving out individual trials or blocks. The underlying criterion is the same, though: are there sensible reasons to think that the units you want to use for cross-validation are fairly independent? If your events are separated by less than 10 seconds or so (i.e., the duration of a hemodynamic response), it's hard to argue that it's ok to put one event in the training set and the following event in the testing set, since so much information from the first event will "smear" into the second (and doubly so if the event ordering is not perfectly balanced). But if your events are fairly well-spaced (e.g., a block design with 10-second action blocks separated by 20 seconds of rest), then yes, cross-validating on the blocks might be sensible, particularly if preprocessing is aimed at isolating the signal from individual blocks.

If you think that partitioning on trials or blocks, rather than full runs, is appropriate for your particular dataset and analysis, I suggest you include an explanation of why you think it's ok (e.g., because of the event timing), as well as a demonstration that your partitioning scheme did not positively bias the results. Which demonstration is convincing necessarily varies with the dataset and hypotheses, but options include "random runs" and control analyses. I described "random runs" in The impact of certain methodological choices ... (2011); the basic idea is to cross-validate on the runs, but then randomly assign the trials to new "runs" and do the analysis again (i.e., so that trials from the same run are mixed into the training and testing sets). If partitioning on the random runs and the real runs produces similar results, then mixing the trials didn't artificially increase performance, and so partitioning other than on the runs (like leaving out individual blocks) is probably ok. Control analyses are classifications that shouldn't work, such as using visual cortex to classify an auditory stimulus. If such negative control analyses aren't negative (i.e., can classify) under a particular cross-validation scheme, then that scheme shouldn't be trusted (the training and testing sets are not independent, or some other error is occurring). But if the target analysis (e.g., classifying the auditory stimuli with auditory cortex) is successful while the control analysis (e.g., classifying the auditory stimuli with visual cortex) is not, and both use the same cross-validation scheme, then the partitioning was probably ok.
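As a sketch of the "random runs" idea (my own minimal version with made-up names, not the code from the 2011 paper), the trials keep their data but their run labels are randomly shuffled, so pseudo-runs have the same sizes as the real runs while mixing trials across them:

```python
import random

def random_runs(trial_run_labels, seed=None):
    """Randomly reassign trials to pseudo-'runs': the returned labels have
    the same run sizes as the input, but trials from the same real run can
    now be split across the training and testing sets."""
    shuffled = list(trial_run_labels)
    random.Random(seed).shuffle(shuffled)
    return shuffled

# six runs of four trials each
real_runs = [run for run in range(1, 7) for _ in range(4)]
pseudo_runs = random_runs(real_runs, seed=1)
print(sorted(pseudo_runs) == sorted(real_runs))  # True: run sizes preserved
```

The run-based cross-validation is then repeated with `pseudo_runs` in place of `real_runs`, and the two sets of results compared.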

This post has focused on cross-validation when each person is analyzed separately. However, for completeness, I want to mention that another sensible way to partition fMRI data is on the people, such as with leave-one-subject-out cross-validation. Subjects are generally independent (and so suitable "chunks" for cross-validation), unless the design includes an unusual population (e.g., identical twins) or design (e.g., social interactions).

Friday, September 11, 2015

a nice "Primer on Pattern-Based Approaches to fMRI"

John-Dylan Haynes has a new MVPA review article out in Neuron, "A Primer on Pattern-Based Approaches to fMRI" (full citation below). I say "new" because he (with Rees) wrote one of the first MVPA overview articles ("Decoding mental states from brain activity in humans") back in 2006. As with the earlier article, this is a good introduction to some of the main methods and issues; I'll highlight a few things here, in no particular order.

At left is part of Figure 4, which nicely illustrates four different "temporal selection" options for event-related designs. The red and green indicate the two different classes of stimuli (cats and dogs), with the black line the fMRI signal in a single voxel across time.

I often refer to these as "temporal compression" options, but like "temporal selection" even better: we are "selecting" how to represent the temporal aspect of the data, and "compression" isn't necessarily involved.

John-Dylan Haynes (quite properly, in my opinion) recommends permutation tests (over binomial and t-tests) for estimating significance, noting that one of their (many) benefits is in detecting biases or errors in the analysis procedure.

Overall, I agree with his discussion of spatial selection methods (Figure 3 is a nice summary), and was intrigued by the mention of wavelet pyramids as a form of spatial filtering. I like the idea of spatial filtering to quantify the spatial scale at which information is present, but haven't run across a straightforward implementation (other than searchlight analysis); wavelet pyramids might need more investigation.

I also appreciate his caution against trying too many different classifiers and analysis procedures on the same dataset, pointing out that this can cause "circularity and overfitting". This problem is also sometimes referred to as having too many experimenter degrees of freedom: if you try enough versions of an analysis you can probably find one that's "significant." His advice to use nested cross-validation strategies (e.g., tuning the analysis parameters on a subset of the subjects) is solid, if sometimes difficult in practice because of the limited number of available subjects. The advice to use an analysis that "substantially decreases the number of free parameters" is also solid, if somewhat unsatisfying: I often advise people to use linear SVM, c=1 as the default analysis. While tuning parameters and testing other classifiers could conceivably lead to a higher accuracy, the risk of false positives from exploiting experimenter degrees of freedom is very real.

I also like his inclusion of RSA and encoding methods in this "primer". Too often we think of classifier-based MVPA, RSA, and encoding methods as unrelated techniques, but it's probably more accurate to think of them as "cousins," or falling along a continuum.

It's clear by now that I generally agree with his framing and discussion, though I do have a few minor quibbles, such as his description of MVPA methods as aimed at directly studying "the link between mental representations and corresponding multivoxel fMRI activity patterns". I'd assert that MVPA is also useful as a proxy for regional information content, even without explicit investigation of cognitive encoding patterns. But these are minor differences; I encourage you to take a look at this solid review article.

Haynes JD (2015). A Primer on Pattern-Based Approaches to fMRI: Principles, Pitfalls, and Perspectives. Neuron, 87 (2), 257-70. PMID: 26182413

Tuesday, August 18, 2015

demo: permutation tests for within-subjects cross-validation

This R code demonstrates how to carry out a permutation test at the group level, when using within-subjects cross-validation. I describe permutation schemes in a pair of PRNI conference papers; see DOI:10.1109/PRNI.2013.44 for an introduction and single subjects, and this one for the group level, including the fully balanced within-subjects approach used in the demo. A blog post from a few years ago also describes some of the issues, using examples structured quite similarly to this demo.

For this demo I used part of the dataset from doi:10.1093/cercor/bhu327, which can be downloaded from the OSF site. I did a lot of temporal compression with this dataset, which is useful for the demo, since only about 18 MB of files need to be downloaded. Unfortunately, none of the analyses we did for the paper are quite suitable for demonstrating simple permutation testing with within-subjects cross-validation, so this demo performs a new analysis. The demo analysis is valid, just not really sensible for the paper's hypotheses (so don't be confused when you can't find it in the paper!).

The above figure is generated by the demo code, and shows the results of the test. The demo uses 18 subjects' data, and their null distributions are shown as blue histograms. The true-labeled accuracy for each person is plotted as a red line, and listed in the title, along with its p-value, calculated from the shown null distribution (the best-possible p-value, 1/2906, rounds to 0).

The dataset used for the demo has no missing data: each person has six runs, with four examples (two of each class) in each run. Thus, I can use a single set of labels for the permutation test, carrying out the relabelings and classifications in each person individually (since it's within-subjects cross-validation), but building the null distribution for each person from the same relabelings. Using the same relabelings in each person allows the group-level null distribution (green, in the image above) to be built from the across-subjects average accuracy for each relabeling. In a previous post I called this fully-balanced strategy "single corresponding stripes", illustrated with the image below; see that post (or the demo code) for more detail.

The histogram for the across-subjects means (green histogram; black column) is narrower than the individual subjects' histograms. This is sensible: for any particular permutation relabeling, one person might have a high accuracy and another person a low accuracy; averaging the values together gives a value closer to chance. Rephrased, each individual has at least one permutation with very low (0.2) accuracy (as can be seen in the blue histograms), but different relabelings produced that low accuracy in each person, so the lowest group-mean accuracy was 0.4.

The group mean of 0.69 was higher than all the permuted-label group means, giving a p-value of 0.0003 (1/2906; all 2906 possible permutation relabelings were run). The equivalent t-test is shown in the last panel, and also produces a very significant p-value.
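The group-level computation can be sketched like this (a toy illustration with made-up accuracy numbers, not the demo's actual R code): each relabeling is applied to every subject, the across-subjects mean is computed per relabeling, and the true-label group mean is compared against those null means, counting the true labeling among the relabelings:

```python
def group_permutation_p(true_accs, perm_accs):
    """true_accs: one accuracy per subject, computed with the true labels.
    perm_accs: one list per relabeling, each holding one accuracy per
    subject, all computed with that SAME relabeling. The p-value counts
    the true labeling itself among the relabelings."""
    true_mean = sum(true_accs) / len(true_accs)
    null_means = [sum(accs) / len(accs) for accs in perm_accs]
    n_as_big = sum(m >= true_mean for m in null_means)
    return true_mean, (n_as_big + 1) / (len(null_means) + 1)

# three subjects, four relabelings (toy numbers)
true_accs = [0.70, 0.65, 0.72]
perm_accs = [[0.45, 0.55, 0.50], [0.60, 0.40, 0.55],
             [0.50, 0.50, 0.45], [0.55, 0.60, 0.50]]
true_mean, p = group_permutation_p(true_accs, perm_accs)
print(round(true_mean, 2), p)  # 0.69 0.2
```

Note that the null means here are narrower (0.48 to 0.55) than the individual subjects' permutation accuracies (0.40 to 0.60), mirroring the narrowing described above.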

Friday, August 14, 2015

start planning: PRNI 2016

Start planning: the 6th International Workshop on Pattern Recognition in Neuroimaging will be held at the University of Trento (Italy), (probably) 21-24 June 2016. PRNI is an OHBM satellite conference, generally held several days before and somewhat geographically near the OHBM conference, which next year is in Geneva, Switzerland, 26-30 June 2016. Once the hosting is sorted out the website will be (as for past meetings) linked from

The paper deadline will be mid-March 2016, perhaps with a later deadline for short abstract-only poster submissions. Previous workshops' papers were published by IEEE; search for "Pattern Recognition in Neuroimaging". Conference papers are standard in many engineering-related fields, but might be unfamiliar to people from a psychology or neuroscience background. Basically, PRNI papers are "real" publications: they are peer-reviewed, published in proceedings, indexed, and citable. The paper guidelines aren't settled yet, but will likely be similar to those of previous years. MVPA methods papers (applications, statistical testing, ...) are a good fit for PRNI; not just fMRI, but any neuroimaging modality is welcome.

Monday, August 3, 2015

R? python? MATLAB?

I've been asked a few times why I use R for MVPA, and what I think people just getting into MVPA should use. I don't think that there is a universally "best" package for MVPA (or neuroimaging, or statistics), but here are some musings.

The question as to why I personally started using R for MVPA is easy: I started before MVPA packages were available, so I had to write my own scripts, and I prefer scripting in R (then and now). Whether to keep using my own scripts or switch to pyMVPA (or some other package) is something I reconsider occasionally.

A very big reason to use an established package is that it's a known quantity: coding bugs have hopefully been caught, and analyses can be reproduced. Some packages are more open (and have more stringent tests) than others, but in general, the more eyes that have studied the code and tried the routines, the better. This need for openness was one of my motivations for starting this blog: to post bits of code and detailed methods descriptions. I think the more code and details we share (blog, OSF, github, whatever), the better, regardless of what software we use (and I wish code was hosted by journals, but that's another issue).

I'm a very, very big fan of using R for statistical analyses, and of knitr (sweave and RMarkdown are also viable options in R) for documenting the various analyses, results, impressions, and decisions as the research progresses (see my demo here). My usual workflow is to switch to knitr once an analysis reaches the "what happened?" stage, regardless of the program that generated the data being analyzed (e.g., I have knitr files summarizing the motivation, procedure, and results from cvMANOVA analyses run in MATLAB). Python has the iPython Notebook, which is somewhat similar to knitr (not as aesthetically pleasing, I think, but that's a matter of taste); I don't think MATLAB has anything equivalent. Update 12 August 2015 (Thanks, Dov!): MATLAB comes with Publishing Markup, which (at a quick glance) looks similar to RMarkdown.

All neuroimaging (and psychology, neuroscience, ...) graduate students should expect to learn a proper statistical analysis language, by which I mostly mean R, with MATLAB and python as secondary options. In practice, if you're proficient in one of these languages you can use the others as needed (the syntax isn't that different), or have them work together (e.g., calling MATLAB routines from R; calling R functions from python). The exact same MVPA could be scripted in all three languages (e.g., read in NIfTI images, fit a linear SVM, write the results to a file), and I don't see that any one of the three is clearly best or worst. MATLAB has serious licensing issues (and expense), and python dependencies can be a major headache, but which program is favored seems to go more with field (engineers for MATLAB, statisticians for R) and personal preference than intrinsic qualities.

So, what should a person getting started with MVPA use? I'd say an R, python, or MATLAB-based package or set of scripts, with the exact choice depending on (probably most important!) what your colleagues are using, personal preference and experience (e.g., if you know python inside and out, try pyMVPA), and what software you're using for image preprocessing (e.g., if SPM, try PRoNTo). Post-MVPA (and non-MVPA) investigations will likely involve R at some point (e.g., for fitting mixed models or making publication-quality graphs), since it has the most comprehensive set of functions (statisticians favor R), but that doesn't mean everything needs to be run in R.

But don't start from scratch; use existing scripts/programs/functions as much as possible. You should mostly be writing code for analysis-specific things (e.g., the cross-validation scheme, which subjects are patients, which ROIs to include), not general things (like reading NIfTI images, training an SVM, fitting a linear model). Well-validated functions exist for those more general things (e.g., oro.nifti, libsvm); use them.

Monday, July 6, 2015

Notes from the OHBM 2015 "Statistical Assessment of MVPA Results" Morning Workshop

Thanks to everyone that attended and gave feedback on the OHBM morning workshop "Statistical Assessment of MVPA Results" that Yaroslav Halchenko and I organized! We've received several requests for slides and materials related to the workshop, so I'll collect them here. It appears that material from the meeting will also be searchable from links on the main OHBM 2015 page. As always, all rights are reserved, and we expect to be fully cited, acknowledged, and consulted for any uses of this material.

I started the workshop off with a tutorial on permutation testing aimed at introducing issues particularly relevant for MVPA (and neuroimaging datasets in general). I'll eventually post a version of the slides, but some of the material is already available in more detail in two PRNI conference papers:
  • Etzel, J.A. 2015. MVPA Permutation Schemes: Permutation Testing for the Group Level. 5th International Workshop on Pattern Recognition in NeuroImaging (PRNI 2015). Stanford, CA, USA. In press, full text here, and in ResearchGate.
  • Etzel, J.A., Braver, T.S., 2013. MVPA Permutation Schemes: Permutation Testing in the Land of Cross-Validation. 3rd International Workshop on Pattern Recognition in NeuroImaging (PRNI 2013). IEEE, Philadelphia, PA, USA. DOI:10.1109/PRNI.2013.44. Full text here, and in ResearchGate.
Next, Johannes Stelzer gave a talk entitled "Nonparametric methods for correcting the multiple comparisons problem in classification-based fMRI", the slides for which are available here.

Then, Nikolaus Kriegeskorte gave a talk entitled "Inference on computational models from predictions of representational geometries", the slides for which are available here.

Finally, Yaroslav Halchenko finished the session with a talk giving an "Overview of statistical evaluation techniques adopted by publicly available MVPA toolboxes", the slides for which are available here.

Monday, May 18, 2015

resampling images with wb_command -volume-affine-resample

I often need to resample images without performing other calculations, for example, making a 3x3x3 mm voxel version of an anatomical image with 1x1x1 mm voxels for use as an underlay. This can be done with ImCalc in SPM, but that's a bit annoying, as it requires firing up SPM, and only outputs two-part NIfTI images (minor annoyances, but still).

The wb_command -volume-affine-resample program gets the resampling done at the command prompt with a single long command:

 wb_command -volume-affine-resample d:/temp/inImage.nii.gz d:/temp/affine.txt d:/temp/matchImage.nii CUBIC d:/temp/outImage.nii  

If the wb_command program isn't on the path, run this at the command prompt from wherever wb_command.exe (or the equivalent for your platform) is installed. A lot of things need to be specified:
  • inImage.nii.gz is the image you want to resample (for example, the 1x1x1 mm anatomical image)
  • affine.txt is a text file with the transformation to apply (see below)
  • matchImage.nii is the image with the dimensions you want the output image to have - what inImage should be transformed to match (for example, the 3x3x3 mm functional image)
  • CUBIC is how to do the resampling; other options are TRILINEAR and ENCLOSING_VOXEL
  • outImage.nii is the new image that will be written: inImage resampled to match matchImage; specifying outImage.nii.gz instead will cause a gzipped NIfTI to be written.
The program writes outImage as a one-file (not a header-image pair) NIfTI. It takes input images as both compressed (i.e., .nii.gz) and uncompressed (i.e., .nii) one-file NIfTIs, but didn't like a header-image pair for input.

You need to specify an affine transform, but since I don't want to warp anything, the matrix is the identity (all 1s and 0s); just put this matrix into a plain text file (I called it affine.txt):
 1 0 0 0  
 0 1 0 0  
 0 0 1 0  
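If you'd rather not create the file by hand, a few lines of Python (my own convenience snippet, not part of Workbench; the paths are the examples from above) write the identity affine and assemble the command:

```python
# write the 3x4 identity affine (no warping) in the plain-text format
# that wb_command -volume-affine-resample expects
rows = ["1 0 0 0", "0 1 0 0", "0 0 1 0"]
with open("affine.txt", "w") as f:
    f.write("\n".join(rows) + "\n")

# build the resampling command string (run it in a shell afterwards)
cmd = ("wb_command -volume-affine-resample inImage.nii.gz affine.txt "
       "matchImage.nii CUBIC outImage.nii")
print(cmd)
```

Substitute your own input, match, and output image paths before running the printed command.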

UPDATE 20 May 2015: Changed the resampling method to CUBIC and added a note that the program can output compressed images, as suggested by Tim Coalson.