Monday, July 7, 2014

A toolbox for representational similarity analysis (and some RSA musings)

An interesting new paper about RSA, A Toolbox for Representational Similarity Analysis (Nili et al 2014, see citation below), shifted my picture of RSA a bit, making me realize I haven't always used the technique quite properly.

First, as you'd guess from the title, the paper mostly describes a MATLAB package for performing RSA. I could easily download the package and start looking at the demos and documentation, but there is a lot in the package, and understanding what all it's capable of (and how exactly it's doing everything) is not a job for an hour or two. It certainly looks worth careful examination, though; I'm particularly interested in the statistical inference functions.

The part I mostly want to comment on is separate from the MATLAB package: the paper suggests using a linear discriminant analysis t-value as a measure of discriminability instead of Pearson correlation (1 - Pearson correlation was the dissimilarity suggested in Kriegeskorte 2008). Here's how they describe the method (there's a bit more in the supplemental):
"We first divide the data into two independent sets. For each pair of stimuli, we then fit a Fisher linear discriminant to one set, project the other set onto that discriminant dimension, and compute the t value reflecting the discriminability between the two stimuli. We call this multivariate separation measure the linear-discriminant t (LD-t) value."
This is dense. To unpack it a bit, the idea is that you're using a statistic derived from a classification analysis for the distance metric. They suggest using Fisher linear discriminant analysis (LDA) for the classification algorithm, with two-fold cross-validation, averaging results across the folds. LDA strikes me as a reasonable suggestion, and I assume any sort of reasonable cross-validation scheme (e.g. leave-one-run-out) would be fine.

But how to derive a t-value from the cross-validated LDA? The paper's description wasn't detailed enough for me, so I poked around in the toolbox code and found the fishAtestB_optShrinkageCov_C function in /Engines/fisherDiscrTRDM.m. It looks like they're fitting the discriminant to the training dataset, projecting the test dataset onto the discriminant, then computing a t-value from the test data projected on the discriminant. The function code does everything with linear algebra; my MATLAB (and linear algebra) is too rusty for it all to be obvious (e.g. which step, if any, corresponds to the coefficients produced by the R lda command? Is it a two-sided t-test against zero?). Please comment or email if you can clarify and I'll update this post.
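In the meantime, here's a rough R sketch of how I understand the general approach, just to make the logic concrete. It uses MASS::lda rather than the toolbox's shrinkage-covariance discriminant, and the variable names (train.data, test.data, train.key, test.key) are hypothetical, so treat it as an illustration of the idea, not a reimplementation of the toolbox's LD-t:

library(MASS);   # for lda()

# train.data, test.data: hypothetical matrices, examples in rows, voxels in columns
# train.key, test.key: class labels ("w" or "f") for each row
fit <- lda(train.data, grouping=train.key);    # fit the Fisher discriminant on one half of the data
proj <- predict(fit, test.data)$x[,1];         # project the other half onto the discriminant axis
t.test(proj[which(test.key == "w")], proj[which(test.key == "f")]);   # t statistic for the projected test data

# for an LD-t-style value you'd repeat with the halves swapped and average the two t values;
# the toolbox also uses an optimally-shrunk covariance estimate rather than plain lda().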

Anyway, the idea of using a classification-derived distance metric for RSA is appealing, particularly to get a consistent and predictable zero when stimuli are truly unrelated (fMRI examples are often a bit correlated, making correlation-based RSA comparisons sometimes between "not that correlated" and "somewhat correlated", rather than the more interpretable "nothing" and "something").

Which brings me to what I realized I had wrong about RSA. To do cross-validation, you need multiple examples of the same stimulus, and at the end you have a single number (accuracy, LD-t, whatever). RSA is accordingly not done between examples (e.g. individual trials) but between stimulus types (classes with lots of examples; what we classify).

This RSA matrix (the official term is "RDM") is from a previous post, which I described as "an RSA matrix for a dataset with six examples in each of two classes (w and f)." While the matrix is sensible (w-f cells are oranger - less correlated - than w-w and f-f cells), the matrix should properly be a single value: the distance between w and f.

In other words, to make an RSA matrix (RDM) I needed at least three classes, not multiple examples of two classes. Say the new class is 'n'. Then my RSA matrix would have w, f, and n along each axis, and we can ask questions like, "is w more similar to f or to n?". That RSA matrix would have just three numbers: the distances between w and f, w and n, and f and n. If using Pearson correlation, we'd calculate those three numbers by averaging (or some other sort of temporal compression, such as fitting a linear model) across the examples of each class (here, w1, w2, w3, w4, w5, w6) to get one example per class, then correlating these vectors (e.g. w with f). If using LDA, we'd (for example) use the first three w and f examples to train the classifier, test on the last three of each (and the reverse), then calculate the LD-t. (To be clear, you can calculate the LD-t with just two classes, but the result won't really look like an RDM since you just have one value, w-f.)
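For the correlation version, a minimal R sketch (the w.examples, f.examples, and n.examples matrices are hypothetical, with examples in rows and voxels in columns):

# average the examples within each class to get one pattern per class
w.mean <- apply(w.examples, 2, mean);
f.mean <- apply(f.examples, 2, mean);
n.mean <- apply(n.examples, 2, mean);

# the RDM: 1 - Pearson correlation between each pair of class patterns
rdm <- 1 - cor(cbind(w.mean, f.mean, n.mean));   # 3x3 symmetric matrix, zeros on the diagonal
rdm[upper.tri(rdm)];   # the three unique distances: w-f, w-n, f-n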


Nili H, Wingfield C, Walther A, Su L, Marslen-Wilson W, & Kriegeskorte N (2014). A toolbox for representational similarity analysis. PLoS Computational Biology, 10(4). PMID: 24743308

UPDATE 17 July 2014: Changed a bit of the text in response to helpful comments from Hamed Nili. He also pointed out this page, which describes a few other aspects of the paper and toolbox.

Monday, May 19, 2014

HCP: extracting timecourses from the surface files

In a previous post about viewing functional data from the Human Connectome Project (HCP) I described downloading the images then viewing them with the Connectome Workbench. This post describes a way to extract the timecourses for specific surface vertices using code, rather than one-at-a-time in the Workbench.

First, working with surfaces means working with GIFTI and CIFTI files. Many developers are creating functions that can read these files, but the software is less mature and harder to find than for reading NIfTI files (MATLAB, FSL, and NiBabel seem furthest along; I couldn't find one for R). The GIFTI library for MATLAB worked great ... except with HCP-derived files. Guillaume Flandin very kindly (and quickly!) changed his code to work with HCP files, making it ignore the spec-inconsistent part of those headers. By the time you read this the files might be fixed, but for now (19 May 2014), if the GIFTI library for MATLAB gives you errors with HCP files (but not others), make sure you have version 1.4 of the @gifti library.


Here is an overview of the process. The logic is simple: figure out which voxels/vertices are needed, then get the timecourses for those voxels/vertices. For HCP functional surface images there are two preliminary steps: extracting the part of the brain we want from the CIFTI file, and finding which vertices we want (such as by constructing a surface ROI mask).

Note that it's not necessary to create a ROI to extract the timecourses; if you know which vertices you need by some other means you can read them directly in MATLAB once the GIFTI has been loaded.

unpack the CIFTI

It works best for me to think of HCP CIFTI files as archives containing the functional data for the whole brain, as a mixture of surfaces (for the cortical sheet) and volumes (for sub-cortical structures). Before reading the functional data we need to unpack the CIFTI, making GIFTI or NIfTI files, as appropriate for the part of the brain we want; the wb_command -cifti-separate program does this "unpacking" (see this post for notes about downloading HCP data, and this one about working with the Workbench wb_command command-line program). For example, if we want the functional time courses for a ROI on the left hemisphere surface, we run this at the command line:
wb_command -cifti-separate in.dtseries.nii COLUMN -metric CORTEX_LEFT out.func.gii 
where in.dtseries.nii is the path and filename of an HCP tfMRI CIFTI file (e.g. tfMRI_MOTOR_LR_Atlas.dtseries.nii), and out.func.gii is the path and filename of the GIFTI we want the program to create. The command's help has more options and explanation; in brief, CORTEX_LEFT indicates which anatomical structure to unpack, COLUMN gets timecourses, and -metric is needed because CORTEX_LEFT is stored as a surface.

identify the desired vertices

There is not a one-to-one correspondence between volumetric voxels and surface vertices in the HCP functional datasets, nor a way (that I could find) to translate between the two coordinate schemes (see the update below). As a work-around, I used wb_command -volume-to-surface-mapping to translate a volumetric ROI to a surface ROI, in the same way as before:
wb_command -volume-to-surface-mapping roi.nii.gz atlas.surf.gii roi.func.gii -enclosing
where roi.nii.gz is the volumetric ROI mask, atlas.surf.gii is an atlas surface for the hemisphere containing the ROI, and roi.func.gii is the GIFTI metric file that will be created. This only works if the surface atlas, volumetric ROI, and functional CIFTI are all aligned to the same space. For the HCP data, the MNINonLinear images for each person are aligned to the MNI atlas, specifically matching the Conte69 32k atlas. Thus, I made the volumetric ROI on an MNI template brain, then used Conte69.L.midthickness.32k_fs_LR.surf.gii for the atlas.surf.gii in the -volume-to-surface-mapping call. So long as the headers are correct in roi.nii.gz (i.e. the voxel size, origin, etc. are correct) the ROI should be in the correct place in roi.func.gii, but view it in Workbench to be sure.

extract the vertex timecourses

Finally, we can read both the ROI and functional data GIFTI files into MATLAB, reading the vertex indices from the ROI then saving the timecourses as text:

addpath 'C:/Program Files/MATLAB/gifti-1.4';   % path to the GIFTI library
roi = gifti('d:/temp/roi.func.gii');     % load the ROI GIFTI
inds = find(roi.cdata > 0);              % find is like R's which: get the ROI's vertex indices
wm = gifti('d:/temp/out.func.gii');      % load the functional GIFTI made by -cifti-separate
tmp = wm.cdata(inds,:);                  % get those vertices' timecourses
csvwrite('d:/temp/out.csv', tmp);        % write as a csv text file

Now, out.csv has one column for each timepoint and one row for each vertex in the ROI.
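Since out.csv is plain text it's easy to work with afterwards; for example, in R (a minimal sketch, assuming the output path used above):

tc <- read.csv("d:/temp/out.csv", header=FALSE);   # one row per ROI vertex, one column per timepoint
roi.mean <- apply(tc, 2, mean);   # average across the ROI's vertices at each timepoint
plot(roi.mean, type='l');         # quick look at the ROI-average timecourse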

confirming the match

We can check that the values match by opening out.func.gii in both Workbench and MATLAB:
In the image I clicked on vertex 5336 in Workbench (red arrow), which is the vertex at position 5337 in the MATLAB .cdata array (purple arrow), since MATLAB is one-based but Workbench is zero-based. The first two values of the data series shown in the Workbench Information window (red underline) match those shown in MATLAB (purple underline).

Note: I could only get Workbench to show the first two values of the timeseries if I set the little blue-arrowed button to 2; otherwise it would display only the first value. Clicking through the little blue-arrow box changes the display to different timepoints, but doesn't change the Information window, as far as I could tell (I'm using Workbench 0.85). Note also that the "charting" interface has changed since my previous post; now you need to go through the "Chart" radio button at the upper left of the View window (right under the first tab name on my screen); I couldn't get it to write out the full timeseries in Workbench itself.


UPDATE (20 May 2014): Tim Coalson suggested that by convention the output files should be named roi.func.gii and out.func.gii, not roi.shape.gii and out.dtseries.gii as I originally wrote; I changed the commands accordingly. Tim also pointed me to the program wb_command -surface-closest-vertex, which will return the closest vertex to an arbitrary 3d coordinate. He suggests that to go the other way (from a vertex in a .surf.gii to 3d coordinates) you "could look at the coordinate of a particular vertex, and back-convert through the nifti sform to get the real-valued voxel "indices" it resides at (real-valued because it could be a third of a voxel to the right of a voxel center, etc)."

Tuesday, May 6, 2014

clever RSA: "Hippocampal Activity Patterns Carry Information about Objects in Temporal Context"

There's an interesting use of RSA (representational similarity analysis) in a recent paper by Hsieh et al. This bit of Figure 1 summarizes the dataset: in each scanning run people were shown the same set of object images, each image shown for 1 sec, followed by a 5 sec inter-stimulus interval. The people pushed a button to answer a semantic question about each image (e.g. "Is the presented object living?"), with a different semantic question each run. A key part of the experimental design is the sequences in which the objects were presented.

The images made up six sequences, which were learned right before scanning, then shown three times in each of the five scanning runs. As shown here, different objects were used in the Fixed, X, Y, and Random sequences; two objects were shared between the X sequences, and three between the Y sequences. Each sequence had the images shown in the order in the figure, except for the Random sequence, which was randomly different each time (the camel could be first one time, then third the next).

This set of sequences made it possible to look for order and identity effects: once you saw the rake, you would know (since the participants memorized the sequences before scanning) that you would see the truck next, followed by the cabinet, etc. If you saw the rhino you would know the drill and strawberry would be next, but not whether the chair or elk would be in the fourth position. Seeing the camel first would have you expect the tractor, shears, stand, or pineapple to be next, but not which one (though by the fourth object you'd know which of the Random set hadn't yet been shown).

The presented results are all ROI-based, using the hippocampus, parahippocampal cortex (PHc), and perirhinal cortex (PRc). The ROIs were individually drawn for each person, but I didn't see a list of how many voxels went into the ROIs for each person, or a mention of how much variability there was in size across people and ROIs. If they kept the voxels at the acquired 3.2×3.2×3.0 mm, I'd guess there'd be fewer than 20 voxels in each ROI, but it would be nice to have the exact counts. (And I wonder if they looked outside the ROIs; it seems likely, since they acquired whole-brain images.)

Anyway, they created parameter estimate images (fit a canonical HRF) for each image presentation (90 per run), then created an RSA matrix (with Pearson correlation) for each same-sequence repetition within a run, then averaged those three matrices to get one matrix per sequence per run, then averaged across runs, then across people (Figure 3).

I'm not going to mention everything they presented, just the analysis summarized in Figure 4, which is copied in part here. The left pane shows the RSA matrix when everything except the Random sequence goes into the average: nice dark red colors (high correlation) along the diagonal, dropping off moving away from the diagonal (note the weird matlab-default color scheme: yellow, green, and cyan are near zero).

The clever bit is how they made the RSA matrices for the Random sequences: based on position or object (Figure 3). For position, they did the RSA with the true sequences: correlating the first-presented image against the first, even though they were different images. There's very little correlation in the upper left corner of this matrix, but more in the lower right - perhaps because the last few images could be guessed. Then, they did the RSA based on object: correlating the same images together (camel to camel), regardless of order. They used these three RSA matrices to test their hypotheses (Figure 8): which ROIs had information about object identity? Which about the order? Which had both?
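To make the position/object distinction concrete, here's a rough R sketch of the two ways of correlating a pair of Random-sequence repetitions. The names are hypothetical (this is my illustration of the logic, not their code): rep1 and rep2 are position-by-voxel matrices of parameter estimates for two repetitions within a run, and ord1 and ord2 record which object was shown at each position.

# by position: correlate first presentation with first, second with second, etc.
by.position <- cor(t(rep1), t(rep2));     # positions x positions correlation matrix
# the diagonal compares matching positions, even though the objects differ

# by object: reorder the second repetition so the same objects line up
rep2.reordered <- rep2[match(ord1, ord2),];
by.object <- cor(t(rep1), t(rep2.reordered));   # the diagonal now compares camel to camel, etc., regardless of position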

One last comment: Figure 5 makes me wish for more supplemental information ... these are very strong correlations for the noisiness of the data (and the small size of the correlations making up the "similarity change" metric). It would have been nice to see error bars on these points, or something like the range across the five runs for each person. The individual graphs ("same obj+pos" and "same obj") separately, rather than just the difference, would also be interesting, and perhaps explain why some people have a negative similarity change.


Hsieh, L., Gruber, M., Jenkins, L., & Ranganath, C. (2014). Hippocampal Activity Patterns Carry Information about Objects in Temporal Context. Neuron, 81(5), 1165-1178. DOI: 10.1016/j.neuron.2014.01.015

Tuesday, April 15, 2014

not having fun with R: as.integer, truncating, and rounding

I had a very unpleasant R debugging experience this morning: when is what you see not what you actually have?

The screenshot at left reproduces what I was seeing: inds1 and inds2 are both shown as the vector 2, 4, but inds2 selects the 2nd and 3rd array elements, not the 2nd and 4th as expected.

The code I used to create inds1 and inds2 makes the problem clear - the second number is just a bit larger than 4 in inds1, but just a bit smaller than 4 in inds2:
inds1 <- c(2, 4.00000001);
inds2 <- c(2, 3.99999999999);

So, what happened?

The inds1 and inds2 arrays are type numeric (double), but they are shown on the screen rounded - in this case, not showing anything past the decimal point. They look like integers, but are not. When they are used as array indices, R coerces them to integers, and as.integer() does not round, but rather truncates. Thus, inds2[2] becomes 3.


I will now be including tests in my code to make sure that what I think are integers actually are integers, such as all.equal(inds, as.integer(inds)).
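A quick demonstration of the trap and the check:

x <- c(10, 20, 30, 40);
inds <- c(2, 3.99999999999);
print(inds);           # prints as 2 4 with the default number of digits
x[inds];               # 20 30: the second index was truncated to 3, not rounded to 4
isTRUE(all.equal(inds, as.integer(inds)));   # FALSE, so the problem is caught
x[round(inds)];        # 20 40: round explicitly if that's what is intended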

Friday, March 28, 2014

connectome workbench: working with volumes

I think of plotting surfaces when I think of the Connectome Workbench, but Workbench can do quite a few nice things with volumes as well. If you haven't already read my previous HCP tutorial posts, read those first, because I'm going to be skipping quite a bit of the introductory info. I'm using the most recent version of the Workbench, 0.84.

I'm very pleased with the screenshots in this post, and will describe how I made them. The pictures show both surface and volume-slice views of the accuracy map resulting from my searchlight demo.

Before firing up the Workbench, you'll need both the volumetric (NIfTI) and surface (*.shape.gii or *.func.gii) versions of the file you want to plot, plus the corresponding anatomical underlays. In this case, the demo searchlight accuracy map is aligned to the MNI anatomical template, so we'll use the atlas_Conte69_74k_pals anatomy, as before. Thankfully, the atlas download includes both surface and volumetric versions, so that's ready to go. The demo volumetric NIfTI can be downloaded here, and the wb_command -volume-to-surface-mapping program will create the corresponding *.shape.gii file, also as before.

Now that we have all the files we need, we can plot them in Workbench. I started a new .spec file based off the Conte69 anatomy so that I could skip loading the files I didn't need (e.g. borders) and save the ones I do need, but that's not essential (see here for explanation).

Once Workbench is open, we need to load in the images, both volume and surface, that we want to overlay, plus the volume anatomy, since that isn't included in the default Conte69_atlas-v2.LR.32k_fs_LR.wb.spec. These four files (searchlightAccuracies_rad2.nii.gz, Conte69_AverageT1w.nii.gz, searchlight_right.shape.gii, and searchlight_left.shape.gii) can all be loaded (as Volume Files and Metric Files) through repeated use of the File -> Open File dialog.

Finally, we can start making nice pictures. I found it convenient to work with just two Workbench tabs, one for the surface and one for the volume version of the dataset. You can close extra tabs by clicking the little red boxes at the top of each, and open new ones with File -> New Tab.

Go to the first tab and click the "All" radio button in the "View" part of the Toolbar. The window will change to show the surface of both hemispheres, but not the searchlight map; change the bottom two METRIC entries in the Overlay ToolBox to searchlight_right.shape.gii and searchlight_left.shape.gii. You can turn the brain around with the mouse to see the overlay (adjust the color scaling via the little wrench icon). Now, switch the top dropdown box to VOLUME searchlightAccuracies_rad2.nii.gz. This will plot the volume data inside the surface, projected onto planes (twist the brain around to see the planes). You can adjust the planes' location in the "Slice Indices/Coords" part of the Toolbar.

Go to the other tab and click the "Volume" radio button in the "View" part of the Toolbar. Set the bottom dropdown box to VOLUME Conte69_AverageT1w.nii.gz (the anatomic template), then the upper dropdown box to VOLUME searchlightAccuracies_rad2.nii.gz. When both of these layers are checked On, the view should look something like the right side of this image.

Now we have two tabs open, one with a surface, and one with a volume. To view both side-by-side like in the screenshot, click View -> Enter Tile Tabs. There are a lot of neat things you can do in the "Tile Tabs" view; play around with it and check out the tutorial.

Another feature of the Workbench I really like for volumetric data is making "montages": views of many slices at once, like the top screenshot in this post. To make these, click the "M" button in the "Slice Plane" part of the Toolbar. You can then change the number of slices shown by adjusting the boxes in the "Montage" part of the Toolbar, and where it starts showing slices in the "Slice Indices/Coords" part of the Toolbar. The montage view doesn't have to show axial slices; just click the buttons in the "Slice Plane" part of the Toolbar to switch to coronal or sagittal.

So, this was an overview of what I thought was pretty nice when using the Workbench with volumetric images. I did find a few things frustrating, particularly the yoking; I just couldn't make yoking work. Also, the volume-only montage view is rather like MRIcron ... but the view doesn't recenter on clicked coordinates; you adjust the position through the "Slice Indices/Coords" part of the Toolbar. It would also be neat to be able to plot horizontal lines on the surface in views like the top screenshot, indicating which slices are shown in the volume montage. But, some nice features, and I'll probably be using the Workbench with volumetric data quite a bit more in the future.

UPDATE (7 May 2014): Tim Coalson told me how to get Workbench to recenter on clicked coordinates: depress the Volume ID button (yellow arrow) in the Information box (red arrow button makes it appear). He said they might make this the default in future versions of Workbench, which would help.

Note that if you click the red-arrow i button again (so it is popped-out, rather than depressed) the Information window won't keep appearing every time you click in the volume.

Tuesday, March 25, 2014

NIfTI, CIFTI, GIFTI in the HCP and Workbench: a primer

The HCP is releasing preprocessed data in both volumetric NIfTI and surface/volumetric CIFTI formats. Working with the HCP files, or doing much of anything with the Workbench, requires navigating through a plethora of .*.nii and .*.gii files. In this post I'll explain why we need all these files, and how they relate to each other. Disclaimer: I'm writing this as a primer from the viewpoint of someone familiar with volumetric fMRI data analysis; it is not at all a full description of everything the files can be used for. Also, though I'm referring to the HCP and Workbench, these file formats are used by other projects and software.

For a starting point, consider how we work with volumetric NIfTI files. Neuroimagers often think about volumetric NIfTI files as storing functional data in a 4d matrix (x, y, z, and time). Libraries such as oro.nifti make reading NIfTI files fairly easy: they create a 3d or 4d matrix of voxel values, plus an object with the header information.

While you can get an idea of the anatomy by looking at slices of the 4d functional data matrix, analyses generally rely on having a 3d matrix of anatomical data (binary mask of regions, anatomic scan, etc) perfectly aligned to the 4d functional data. So, the 4d NIfTI file doesn't contain everything we need: we get some alignment information out of the header (qfactor, etc), but also need the registered 3d anatomical data. For a concrete example, I had to provide two files for the little ROI-based analysis demo: the dataset (4d NIfTI with preprocessed BOLD) and the ROI mask (binary 3d NIfTI showing the voxels corresponding to the anatomical region of interest), plus stating that the dataset was normalized to the MNI anatomical atlas (so that we can overlay the data on the correct anatomical template).
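For example, that volumetric workflow in R with oro.nifti looks something like this (a minimal sketch; the file names are hypothetical):

library(oro.nifti);
func.img <- readNIfTI("dataset.nii.gz", reorient=FALSE);   # 4d functional data (x, y, z, time)
roi.img <- readNIfTI("roi.nii.gz", reorient=FALSE);        # 3d binary mask, aligned to the functional data

roi.inds <- which(roi.img[,,] == 1, arr.ind=TRUE);   # i,j,k coordinates of the ROI's voxels
func.img[roi.inds[1,1], roi.inds[1,2], roi.inds[1,3], ];   # timecourse of the first ROI voxel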

Now, on to CIFTI. CIFTI-2 files follow the NIfTI-2 file format specification (CIFTI-2 is a "flavor" of NIfTI-2, so both use the *.nii file extension), and both consist of a data matrix and headers. In the case of the HCP data, the functional timecourses are in the data matrix part of *.dtseries.nii CIFTI files. Like NIfTI volume files, the CIFTI file contains information about where voxels are, though this information is stored in a different place (in the extension containing the CIFTI XML). But, paralleling how you need an anatomic file to figure out exactly where the voxels in a volumetric NIfTI lie, you need other files (not just the CIFTI) to tell you where the surface vertices lie, and how they're connected (the "triangles", etc). Aside: While I wrote "surface vertices" in this paragraph, note that the HCP CIFTIs store both surface vertices (for the cortical sheet) and volumetric voxels (for sub-cortical structures).

These "other files" are not a single file but multiple; as many as necessary. Having all of these files is akin to having multiple ROI files available for an analysis: you won't use each ROI in each analysis, just the ones corresponding to the anatomical area (or whatever) you need for a particular test. The "other files" for the HCP are not just ROIs, but can also be underlying anatomy at different inflation levels, maps of tissue types, etc.

For example, at left is a screenshot showing some of the "other files" provided for each HCP person in the released datasets. These files are from /100307_Q3/MNINonLinear/Native/: the maps are in subject space. Many files with similar names are in /100307_Q3/MNINonLinear/fsaverage_LR32k/: maps of the same structures/types, but aligned to the MNI template anatomy (specifically, the 32k Conte69 mesh; see page 112 of Glasser et al. 2013).

And now we're encountering GIFTI files: many of the "other files" are in GIFTI format, with the extension .*.gii. The naming of the "other files" (the last bit before the .gii) in the HCP tends to follow the CARET conventions, and gives a hint as to what sort of information they contain:

*.surf.gii, "gifti surface files", contain only vertex coordinates and triangles (which vertices are connected). The HCP *.surf.gii files are mostly structures that you might want to overlay data onto, such as 100307.L.inflated.native.surf.gii (left hemisphere, inflated) and 100307.L.midthickness.native.surf.gii.(left hemisphere, not inflated at all, but rather halfway through the thickness of the cortical ribbon).

*.func.gii and *.shape.gii, "metric files", contain data values for every vertex. Essentially, these are data arrays whose indices correspond to a surface file - you need a matching surface file to know where in the brain to put the data stored in a metric file. For example, a metric file from the HCP release is 100307.L.corrThickness.native.shape.gii: the cortical thickness at each vertex.

For an example of how these files work together, my tutorial on plotting a NIfTI image with the Workbench uses the wb_command -volume-to-surface-mapping program to create .shape.gii files aligned to Conte69.*.midthickness.32k_fs_LR.surf.gii. The data from the volumetric NIfTI (e.g. searchlight accuracies at each voxel) is stored (by vertex) in .shape.gii files, but a shape.gii file by itself isn't enough to plot the data properly on a surface: you need an aligned .surf.gii file as well. Paralleling how you need an aligned anatomy to properly overlay a volumetric NIfTI ROI, you need an aligned surf.gii to know how to properly locate the data from a metric file.

Whew! Hopefully this primer helps explain why so many files are released with the HCP data, and a bit about how they work together. For additional information see the Workbench Glossary, as well as Glasser et al. 2013. If you've found any references particularly useful that I haven't already linked to, please send them along and I'll add links.

I want to end this post with a BIG thank you to Tim Coalson, who patiently (and repeatedly) walked me through these file types and how they relate to each other.

Tuesday, March 11, 2014

Allefeld 2014: Searchlight-based multi-voxel pattern analysis of fMRI by cross-validated MANOVA

A recent paper by Carsten Allefeld and John-Dylan Haynes, "Searchlight-based multi-voxel pattern analysis of fMRI by cross-validated MANOVA" (see full citation below), caught my eye. The paper advocates using a MANOVA-related statistic for searchlight analysis instead of classification-based statistics (like linear SVM accuracy). Carsten implemented the full procedure for SPM8 and MATLAB; the code is available on his website.

In this post I'm going to describe the statistic proposed in the paper, leaving the discussion of when (sorts of hypotheses, dataset structures) a MANOVA-type statistic might be most suitable for a (possible) later post. There's quite a bit more in the paper (and to the method) than what's summarized here!

MANOVA-related statistics have been used/proposed for searchlight analysis before, including in Kriegeskorte's original searchlight paper and in implementations in BrainVoyager and pyMVPA. From what I can tell (please let me know otherwise), these previous MANOVA searchlights fit the MANOVA on the entire dataset at once: all examples/timepoints, no cross-validation. Allefeld and Haynes propose doing MANOVA-type searchlights a bit differently: "cross-validated MANOVA" and "standardized pattern distinctness".

Most of the paper's equations review multivariate statistics and the MANOVA; the “cross-validated MANOVA” and “standardized pattern distinctness” proposed in the paper are in equations 14 to 17:
  • Equation 14 is the equation for the Hotelling-Lawley Trace statistic, which Allefeld refers to as D ("pattern distinctness").
  • Equation 15 shows how Allefeld and Haynes propose to calculate the statistic in a "cross-validated" way. Partitioning on the runs, they obtain a D for each partition by calculating the residual sum-of-squares matrix (E) and the first part of the H equation from the not-left-out-runs, but the second part of the H equation from the left-out run.
  • Equation 16 averages the D from each "cross-validation" fold, then multiplies the average by a correction factor calculated from the number of runs, voxels, and timepoints.
  • Finally, equation 17 is the equation for “standardized pattern distinctness”: dividing the value from equation 16 by the square root of the number of voxels in the searchlight.
To understand the method a bit better I coded up a two-class version in R, using the same toy dataset as my searchlight demo. Note that this is a minimal example to show how the "cross-validation" works, not necessarily what would be useful for an actual analysis, and not showing all parts of the procedure.

The key part from the demo code is below. The dataset is a matrix, with the voxels (for a single searchlight) in the columns and the examples (volumes) in the rows. There are two classes, "a" and "b". For simplicity, the "left-out run" is called "test" and the others "train", though this is not training and testing as meant in machine learning. train.key is a vector giving the class labels for each row of the training dataset.

For Hotelling's T2 we first calculate the "pooled" sample covariance matrix in the usual way, but using the training data only:
S123 <- ((length(which(train.key == "a"))-1) * var(train.data[which(train.key == "a"),]) +
         (length(which(train.key == "b"))-1) * var(train.data[which(train.key == "b"),])) /
  (length(which(train.key == "a")) + length(which(train.key == "b")) - 2);

To make the key equation more readable we store the total number of "a" and "b" examples:
a.count <- length(which(train.key == "a")) + length(which(test.key == "a"));
b.count <- length(which(train.key == "b")) + length(which(test.key == "b"));


and the across-examples mean vectors:
a.test.mean <- apply(test.data[which(test.key == "a"),], 2, mean);
b.test.mean <- apply(test.data[which(test.key == "b"),], 2, mean) ;  

a.train.mean <- apply(train.data[which(train.key == "a"),], 2, mean); 
b.train.mean <- apply(train.data[which(train.key == "b"),], 2, mean);

now we can calculate the Hotelling's T2 (D) for this "cross-validation fold" (note that solve(S123) returns the inverse of matrix S123):
((a.count*b.count)/(a.count+b.count)) * (t(a.train.mean-b.train.mean) %*% solve(S123) %*% (a.test.mean-b.test.mean));

The key is that, paralleling equation 15, the covariance matrix is computed from the training data, multiplied on the left by the mean difference vector from the training data, then on the right by the mean difference vector from the testing data.
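In equation form (just my two-class R code above written in math notation, with n_a and n_b the total counts of "a" and "b" examples, as in a.count and b.count, and S_train the pooled covariance matrix from the training data):

T^2_{cv} = \frac{n_a n_b}{n_a + n_b} \, (\bar{x}^{train}_a - \bar{x}^{train}_b)^\top \, S_{train}^{-1} \, (\bar{x}^{test}_a - \bar{x}^{test}_b)

The ordinary (non-cross-validated) Hotelling's T2 would have the same mean-difference vector on both sides of the inverse covariance matrix; splitting it between the training and testing sets is what makes this version "cross-validated".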

Should we think of this way of splitting the Hotelling-Lawley Trace calculation as cross-validation? It is certainly similar: a statistic is computed on data subsets, then combined over the subsets. It feels different to me though, partly because the statistic is calculated from the "training" and "testing" sets together, and partly because I'm not used to thinking in terms of covariance matrices. I'd like to explore how the statistic behaves with different cross-validation schemes (e.g. partitioning on participants or groups of runs), and how it compares to non-cross-validated MANOVA. It'd also be interesting to compare the statistic's performance to those that don't model covariance, such as Gaussian Naive Bayes.

Interesting stuff; I hope this post helps you understand the procedure, and to keep us all thinking about the statistics we choose for our analyses.


Allefeld C, & Haynes JD (2014). Searchlight-based multi-voxel pattern analysis of fMRI by cross-validated MANOVA. NeuroImage, 89, 345-357. PMID: 24296330