Friday, May 15, 2015

MVPA on the surface: to interpolate or not to interpolate?

A few weeks ago I posted about a set of ROI-based MVPA results using HCP images, comparing the results of doing the analysis with the surface or volume version of the dataset. As mentioned there, there hasn't been a huge amount of MVPA with surface data, but there has been some, particularly using the algorithms in Surfing (also available in pyMVPA and CoSMoMVPA), described by Nikolaas Oosterhof et al. (2011).

The general strategy in MVPA (volume or surface) is usually to change the fMRI timeseries as little as possible: motion correction is pretty much unavoidable, but it is sometimes the only whole-brain image manipulation applied. Voxels are kept at the acquired resolution, not smoothed, not slice-time corrected, and not spatially normalized to an atlas (i.e., each person is analyzed in their own space, allowing for differently-shaped brains). The hope is that this minimal preprocessing maximizes spatial resolution: since we want to detect voxel-level patterns, we change the voxels as little as possible.

The surface searchlighting procedure in Surfing follows this minimum-manipulation strategy, using a combination of surface and volume representations: voxel timecourses are used, but adjacency is determined from the surface representation. Rephrased, even though the searchlights are drawn following the surface (using a high-resolution surface representation), the functional data is not interpolated but kept as voxels: each surface vertex is spatially mapped to a voxel, allowing multiple vertices to fall within a single voxel in highly folded areas. Figure 2 of the Surfing documentation shows this dual surface-and-volume way of working with the data and describes the voxel selection procedure in more detail. In the terms I've used to describe my own searchlight code, the Surfing procedure produces a lookup table (which voxels constitute the searchlight for each center) in which the searchlights are shaped to follow the surface.
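To make the lookup-table idea concrete, here's a minimal sketch in Python (not Surfing's actual code; the table, the data, and the toy nearest-mean classifier are all made-up placeholders). The point is just the data structure: each center maps to the linear indices of its searchlight voxels, and nearby centers can share voxels:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_trials = 1000, 40
data = rng.normal(size=(n_trials, n_voxels))   # trials x voxels (e.g., betas); random here
labels = np.tile([0, 1], n_trials // 2)        # two-class example labels

# Hypothetical lookup table: center id -> linear voxel indices. Note that
# nearby centers share voxels, as happens when several vertices fall in one voxel.
lookup = {
    101: np.array([5, 6, 7, 12, 13]),
    102: np.array([6, 7, 13, 14]),     # overlaps center 101's searchlight
}

def searchlight_accuracy(voxel_idx):
    """Toy leave-one-out classification within one searchlight (placeholder analysis)."""
    sub = data[:, voxel_idx]
    correct = 0
    for i in range(n_trials):
        train = np.delete(np.arange(n_trials), i)
        # nearest-mean classifier: assign the left-out trial to the closer class mean
        means = [sub[train][labels[train] == c].mean(axis=0) for c in (0, 1)]
        pred = int(np.linalg.norm(sub[i] - means[1]) < np.linalg.norm(sub[i] - means[0]))
        correct += (pred == labels[i])
    return correct / n_trials

results = {center: searchlight_accuracy(idx) for center, idx in lookup.items()}
print(results)   # ~chance accuracy, since the data is random
```

In Surfing the centers are surface vertices rather than voxels, but once the table is built it is used the same way.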

It should be possible to do this (Surfing-style, surface searchlights with voxel timecourses) with the released HCP data. The HCP volumetric task-fMRI images are spatially normalized to the MNI atlas, which will simplify things, since the same lookup table can be used for all people, though possibly at the cost of some distortion introduced by the normalization. [EDIT 17 May 2015: Nick Oosterhof pointed out that even with MNI-normalized volumetric fMRI data, the subject-space surfaces could be used to map adjacent vertices, in which case each person would need their own lookup table. With this mapping, the same i,j,k-coordinate voxel could have different searchlights in different people.]
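As a rough illustration of why the surfaces matter here, below is a minimal sketch of the vertex-to-voxel mapping step that ties a lookup table to a volume (the affine and vertex coordinates are invented for the example): world-space vertex coordinates are pushed through the inverse of the volume's affine and rounded to the nearest voxel, which is also how two nearby vertices can land in the same voxel.

```python
import numpy as np

# A typical 2 mm MNI-style affine: maps voxel i,j,k to world x,y,z in mm.
affine = np.array([[-2., 0., 0.,   90.],
                   [ 0., 2., 0., -126.],
                   [ 0., 0., 2.,  -72.],
                   [ 0., 0., 0.,    1.]])

# Vertex coordinates (mm) in the same space as the volume. With MNI-normalized
# volumes these could come from one group surface, giving a shared lookup table;
# with subject-space surfaces, each person's own coordinates give their own table.
vertices_mm = np.array([[ 40.0, -60.0, 30.0],
                        [ 40.5, -60.5, 30.5],   # nearby vertex in a folded region
                        [-30.0,  20.0, 50.0]])

# world (mm) -> voxel i,j,k: apply the inverse affine, then round
inv = np.linalg.inv(affine)
homog = np.c_[vertices_mm, np.ones(len(vertices_mm))]
ijk = np.rint((homog @ inv.T)[:, :3]).astype(int)
print(ijk)   # the first two vertices round to the same i,j,k voxel
```

With one group surface the mapping (and so the table) is built once and reused; with each person's own surface reconstruction it has to be rebuilt per person, as in the edit above.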

The HCP task fMRI data is also available as (CIFTI-format) surfaces, which were generated by resampling the (spatially-normalized) voxels' timecourses onto surface vertices. The timecourses in the HCP surface fMRI data have thus been interpolated several times: to volumetric MNI space, and then again onto the vertices.
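For anyone wanting to poke at these files, a recent nibabel (with CIFTI-2 support) can read them directly. Here's a minimal sketch; the file name is a placeholder standing in for a real HCP release file:

```python
import nibabel as nib

# Load an HCP-style CIFTI-2 dense timeseries file (placeholder name).
img = nib.load("tfMRI_WM_LR_Atlas.dtseries.nii")
data = img.get_fdata()   # time x grayordinates (surface vertices + subcortical voxels)
print(data.shape)

# The second axis records which brain structure each column belongs to,
# so the cortical vertex timecourses can be pulled out by structure name.
bm = img.header.get_axis(1)
for name, slc, _ in bm.iter_structures():
    print(name, data[:, slc].shape)   # e.g., CIFTI_STRUCTURE_CORTEX_LEFT
```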

Is this extra interpolation beneficial or not? Comparisons are needed, and I'd love to hear about any if you've tried them. The ones I've done so far used comparatively large parcels, not searchlights, and are certainly not the last word.
