Wednesday, October 29, 2014

demo: transforming from MNI to Talairach (or other atlases)

I generally prefer to spatially normalize to an MNI anatomical template, but sometimes need to work with images that were normalized to a Talairach atlas. This post shows how to warp a NIfTI image (such as an ROI mask) from one space to another. Converting coordinates is a different matter; see Laird et al. (2010) for more details.

Basic strategy: Use the Normalise function in SPM, with the Template Image the new atlas, the Source Image the existing atlas, and the Images to Write the mask(s). Then, use ImCalc in SPM to change the transformed mask to the needed dimensionality.

Thus, we need atlas images for both the space we're in (e.g. MNI) and the space we're going to (e.g. Talairach). The atlas images don't need to match in voxel size, but should be similar in overall appearance; for example, both skull-stripped (or both not), and of roughly equivalent brightness. Check that the mask is in the proper location when overlaid on the starting (source) atlas image.



First, warp the image to match the new atlas. In SPM, select Normalise: Estimate & Write and add a Subject. Set the Source Image to the atlas image that matches the current space of the mask (MNI, in this example), and the Template Image to the atlas image that matches the new space (here, Talairach). Set the Images to Write to the mask (here, aligned to the MNI atlas), and set the Voxel sizes to the desired output size (here, 3x3x3 mm). Running the function produces files named like the Images to Write, but with the Filename Prefix added to the start of each name.

Check that the output image looks approximately like the original mask. If things look very wrong, try changing the Interpolation, check that the atlas images are similar, and check that the mask was plotted properly on the source atlas image.

If the output image looks ok, check the NIfTI header parameters (e.g. in MRIcroN). Most likely, they will not be exactly what you need. For example, I need my Talairach-ed mask to precisely (bounding box, orientation, voxel size) match some functional images. ImCalc in SPM will do this final transformation.
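
If you'd rather check the headers programmatically, here's a minimal sketch in R (assuming the oro.nifti package; adjust the file names to your own images). It compares the dimensions and voxel sizes of the warped mask against the functional image it needs to match:

 library(oro.nifti);  # assumed package
 mask <- readNIfTI("wMNI_mask.nii", reorient=FALSE);  # output from Normalise (as below)
 func <- readNIfTI("TAL_template.nii", reorient=FALSE);  # image with the desired header
 dim(mask); dim(func);        # matrix dimensions (bounding box, in voxels)
 pixdim(mask); pixdim(func);  # voxel sizes; these likely won't match yet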

For ImCalc we need an image to match: one with the NIfTI header information we want the mask to have (it's ok if the template is 4d and the mask 3d). In this case, the functional image (aligned to the Talairach atlas) is TAL_template.nii. Set two Input Images: first the template, then the mask (here wMNI_mask.nii, the output from Normalise); see the screenshot. The order is important: ImCalc will change the second image to match the first, writing a new file named as given in Output Filename. Set the Expression to i2.

Finally, compare the original and transformed masks for alignment. Here, I show them both on anatomic images. The original mask had integer values, which have been "blurred" by the warping. I generally correct such blurring manually in R, such as by rounding the voxel values, and then changing individual voxel values as needed to fine-tune the transformed mask. How much fine-tuning is needed varies, depending on the degree of transformation required, initial mask image, etc.
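
For example, here is a minimal sketch of that first rounding pass in R (assuming the oro.nifti package; the file names are hypothetical, and the voxel-by-voxel fine-tuning is not shown):

 library(oro.nifti);  # assumed package
 img <- readNIfTI("TAL_mask.nii", reorient=FALSE);  # hypothetical name for the ImCalc output
 vals <- round(img@.Data);      # round the "blurred" voxel values back to integers
 vals[is.na(vals)] <- 0;        # clear any NAs introduced by the warping
 img@.Data <- vals;
 img <- cal_img(img);           # update the header's min/max to match the new values
 writeNIfTI(img, "TAL_mask_rounded");  # writes TAL_mask_rounded.nii.gz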

I've found that this combination of SPM Normalise and ImCalc usually transforms masks well enough that relatively little manual adjustment is needed. Please share if you know of any alternative (or better!) procedures.

Wednesday, September 24, 2014

demo: clustering in afni with Clusterize

My go-to program for separating clusters out of an image is the Clusterize routine in AFNI. This little tutorial steps you through getting a NIfTI image into AFNI, using Clusterize, then getting a NIfTI out again. A word of warning: be sure to check laterality in the post-Clusterize NIfTI; sometimes things get flipped when you use multiple analysis programs. Also, I have a Windows box, so I run AFNI within NeuroDebian (you should, too, especially if you run Windows), as the screenshots and notes below reflect.

First, you need to get your NIfTI image into AFNI. Since I use NeuroDebian, I start by putting the NIfTI I want to open into the for_afni subdirectory of the shared folder. Then you need to tell AFNI which directory to find the images in, which you do by clicking the Read button in the DataDir window (top red arrow). The Read Session window appears (right side of the screenshot), and, since I'm in NeuroDebian, I find my for_afni subdirectory under /home/brain/host/. Clicking the Set button (bottom red arrow) makes AFNI look for images in that directory.

Now we need to display the image that we want to clusterize. The image needs to be loaded as an OverLay, but AFNI is happiest if it has both an UnderLay and the OverLay, which are loaded via the circled buttons. Clicking the UnderLay button brings up a list of images, from both the for_afni subdirectory (since it was Read in the previous step) and standard anatomies (in my installation). In the screenshot I picked a standard anatomy; it also works if you use the overlay for the underlay (but you need something for the underlay). Then click the OverLay button and select the image you want to cluster. After setting both images you should see colored blobs on top of a greyscale background image: the colored image (the OverLay) will be the one clustered. Then click the Define OverLay button (arrow) to bring up the display shown in the upper right corner of the screenshot.

Next, set the threshold so that only the voxels you want to cluster are shown. Here, my overlay image consists of integers, and I want to identify clusters of at least 10 voxels with values of 6 or higher. The screenshot shows how to set the threshold of 6: I set the little ** dropdown menu to 1 so that the values in the color slider are the actual numerical values (rather than a statistic). Next, I uncheck the autoRange box, also so that the slider works with the actual numerical values. Finally, I move the slider (top arrow) to be exactly at 6 (use the up and down arrows for fine-tuning). The overlay changes as the slider moves: now only voxels with values of 6 or larger are shown, and the overlay color scaling has shifted.

Now we can do the clustering. First, click the Clear button (circled), in case any previous clustering is still in memory. Then click the Clusterize button (also circled), which brings up the menu dialog box (shown at left). Adjust the NN level and Voxels boxes to match your clustering parameters; the screenshot is set to find clusters with at least 10 voxels, and voxels must share a side to be in the same cluster. Click the Set button to close the menu dialog box. The main display won't change, except that the Rpt button (circled) will be enabled.

Clicking the Rpt button brings up the AFNI Cluster Results dialog box, as shown here. The display shows that AFNI found 8 clusters in my mask, ranging in size from 277 to 10 voxels. The coordinates are shown for the peak voxel in each cluster (since the XYZ dropdown is set to Peak), and clicking the Jump button in each row changes the coordinates in the display accordingly. To save the clustered version of the image, type a name into the box to the left of the SaveMsk button (marked with an arrow), then click the SaveMsk button. It doesn't look like anything happened, but there should now be a pair of images in the AFNI output directory (/brain/ by default in NeuroDebian) starting with the name specified.

Last, we need to convert the clustered mask back to NIfTI. I do this at the command line with 3dcopy. Not liking to mess about with configuration files, when I first open the terminal I run . /etc/afni/afni.sh so that the shell can find 3dcopy. In the screenshot the terminal window opened in the /brain/ directory, which is where the AFNI files are, so running 3dcopy outImage_mask+tlrc outImage.nii.gz writes the NIfTI file in /brain/ as well. Then I copy outImage.nii.gz into /host/for_afni/ so that I can get to the file in Windows.
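
Once the file is back in NIfTI I like to double-check it before doing anything else. Here's a minimal sketch in R (assuming the oro.nifti package; the slice number is hypothetical), which tabulates the voxel count for each cluster ID so the numbers can be compared against the AFNI Cluster Results window:

 library(oro.nifti);  # assumed package
 img <- readNIfTI("outImage.nii.gz", reorient=FALSE);
 vals <- img@.Data;
 table(vals[vals != 0]);  # voxel count for each cluster ID; compare to the sizes in the Rpt window
 image(vals[,,30]);       # hypothetical slice; eyeball the clusters (and laterality) against a known image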

None of these steps are particularly difficult, but navigating back and forth can be a bit tricky, and the steps need to be done in the proper order. Good luck!

Thursday, September 18, 2014

demo: R code to perform a voxelwise t-test

It's easy to perform a voxelwise t-test (a t-test at each voxel individually). Programs for mass-univariate analysis (like SPM and FSL) of course do this (and much more), but sometimes you just want to do a simple t-test across subjects at each voxel.

The demo code linked in this post does a voxelwise t-test in R. It takes as input a set of 3d NIfTI files, where each NIfTI is assumed to come from a different person, each voxel of which contains a statistic describing effect strength (for example, accuracy resulting from a searchlight analysis). The code reads the 3d NIfTI images into a 4d array (people in the fourth dimension), then performs a t-test at each voxel, saving the t-values for each voxel in a new NIfTI image. This figure shows the t-value image produced by the demo code.

The demo R code together with the input images to run the demo are available here, and the R code alone here.

Here is the key part of the code. The 4d array big contains the 3d statistic images for each person (subjects are the 4th dimension of the array). The plyr function aaply then calls my little getT function (below and in the code file), which calculates the t-value at each voxel (i.e. over the subjects), creating a 3d array of t-values.

 # function to calculate the t-test at each voxel and return the t-value
 getT <- function(x) {
   # can't do a t-test if the variance is zero, so check before trying
   if (var(x) == 0) {
     stat <- NA;
   } else {
     stat <- t.test(x, alternative="greater", mu=0.5)$statistic;
   }
   return(stat)
 }

 # plyr function aaply calls getT at each voxel in the 4d array big,
 # creating a 3d output array of t-values (what getT returns)
 library(plyr);
 t.img <- aaply(big, c(1,2,3), getT);
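
For context, here is a minimal sketch of the input and output around that snippet (assuming the oro.nifti package and hypothetical file names; the actual demo code is linked above):

 library(oro.nifti); library(plyr);  # assumed packages
 in.fnames <- paste0("subject", 1:10, ".nii.gz");  # hypothetical per-person statistic images
 first <- readNIfTI(in.fnames[1], reorient=FALSE);
 big <- array(NA, c(dim(first), length(in.fnames)));  # 4d array: people in the 4th dimension
 for (i in 1:length(in.fnames)) { big[,,,i] <- readNIfTI(in.fnames[i], reorient=FALSE)@.Data; }
 t.img <- aaply(big, c(1,2,3), getT);  # as in the snippet above
 writeNIfTI(nifti(t.img, datatype=16), "tValues");  # writes tValues.nii.gz (32-bit float)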

Monday, September 15, 2014

FYI: Connectome Workbench 1.0 released

The Human Connectome Project team released version 1.0 of the Connectome Workbench last week. I'll update my tutorials as I notice changes, but it doesn't look like much of a change (at least to the volume functionality) from version 0.85; version 1.0 correctly opened complicated scenes I made in version 0.85 without error or complaint.

Here's my advice for getting started with the Connectome Workbench:
  1. First, go through my tutorial on plotting a NIfTI image; it also describes installing the Workbench and using wb_command.
  2. Read my summary of the different file types.
  3. Try the official Workbench tutorial (or at least look at the manual to get an idea of the possibilities).
  4. Look at my post on using the Workbench with volumetric images.
And why bother learning Workbench? Setting aside all its surface and HCP functionality (and that's a lot to set aside), I think the ability to create "scenes" justifies the time spent learning the software.

This screenshot shows scenes in action: clicking the little button marked with yellow arrows brings up the scene dialog box. I have three scenes stored in this file, and selecting one for display restores Workbench to exactly how it was when the scene was created: window size, colors and scaling, loaded images, tab layout. Creating scenes for each image that might be used in a publication can save massive amounts of time: need to adjust a threshold or change a color? Just bring up the scene and make the change; there's no need to start from the beginning.

Monday, September 8, 2014

nice methods: Manelis and Reder 2013, "He Who is Well Prepared ..."

It's always great to read a paper with interesting methodology clearly explained, and Manelis and Reder 2013, "He Who Is Well Prepared Has Half Won The Battle: An fMRI Study of Task Preparation" is one of those papers (full citation below). As usual, I'm not going to fully describe the paper (go read it!), but just comment on a few things that caught my eye.




First, I was struck again by the strength and consistency of the activations and deactivations associated with the n-back task; they seem as reliable as those from some motor and somatosensory tasks. The authors used a mass-univariate analysis to identify a set of ROIs to use for the MVPA, shown in this part of Figure 2 (warm colors for regions that increased with n-back level, and cool colors for regions that decreased). As the authors properly point out, doing MVPA on the task blocks with these ROIs would be somewhat circular (since a mass-univariate analysis of the task blocks was used to create the ROIs), but their main MVPA avoids circularity, since it was done on a different part of the task.


Next, I appreciated the discussions of possible confounds in the results section: the authors report pairwise accuracies, not just the three-way, explaining that they want to make sure one very accurate pair is not driving the results, and they performed a nice control analysis using randomly-selected rest volumes.

Finally, they found a correlation between classification accuracy (MVPA during task preparation periods) and behavioral performance (participant speed on the n-back task); there are still relatively few reports tying fMRI analyses to behavior, and it's nice to see another one.


Manelis, A., & Reder, L. (2013). He Who Is Well Prepared Has Half Won The Battle: An fMRI Study of Task Preparation. Cerebral Cortex. DOI: 10.1093/cercor/bht262

Friday, August 22, 2014

quick recommendation: pre-calculate permutation test schemes

A quick recommendation: I strongly suggest pre-calculating the relabeling schemes before running a permutation test. In other words, prior to actually running the code that does all the calculations necessary to generate the null distribution, determine which relabeling will be used for each iteration of the permutation test, and store these new labels so that they can be read out again. To be clear, I think the only alternative to pre-calculating the relabeling schemes is to generate them at run time, such as by randomly resampling a set of labels during each iteration of the permutation test; that's not what I'm recommending here.
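
For instance, here is a minimal sketch in R of what I mean, for a one-sample test where the relabeling is a sign flip for each subject (the numbers of subjects and permutations are hypothetical):

 n.subs <- 20; n.perms <- 1000;  # hypothetical sizes
 set.seed(3929);  # so the scheme itself can be regenerated exactly if needed
 perms <- matrix(sample(c(-1,1), n.subs*n.perms, replace=TRUE), nrow=n.perms);
 sum(duplicated(perms));  # check: should be 0 (no repeated relabelings)
 apply(perms, 2, table);  # check: each subject flipped at approximately equal frequency?
 write.table(perms, "permutation_labels.txt");  # each iteration of the test later reads its own row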

There are several reasons I think this is a good principle to follow for any "serious" permutation test (e.g. one that might end up in a publication):

Safety and reproducibility. It's a lot easier to confirm that the relabeling scheme is operating as expected when it can be checked outside of debug mode/run time. At minimum, I check that there are no duplicate entries, and that the randomization looks reasonable (e.g. are labels chosen at approximately equal frequencies?). Having the relabelings stored also means that the same permutation test can be run at a later time, even if the software or machines have changed (built-in randomization functions are not always guaranteed to produce the same output with different machines or versions of the software).

Ease of separating the jobs. I am fortunate to have access to an excellent supercomputing cluster. Since my permutations are pre-calculated, I can run a permutation test quickly by sending many separate, non-interacting jobs to the different cluster computers. For example, I might start one job that runs permutations 1 to 20, another job running permutations 21 to 30, etc. In the past I've tried running jobs like this by setting random seeds, but it was much more buggy than explicitly pre-calculating the labelings. Relatedly, if a job crashes for some reason, it's a lot better to be able to restart after the last-completed permutation, which is straightforward when the relabelings have been pre-calculated.

Thursday, August 21, 2014

more on the LD-t

In a previous post I wrote a bit about A Toolbox for Representational Similarity Analysis, and my efforts at figuring out the LD-t statistic. Nikolaus Kriegeskorte kindly pointed me to some additional information, and cleared up some of my confusion. Note that I'll be using the term "LD-t" in this post for consistency (and since it's short); it's a "cross-validated, normalized variation on the Mahalanobis distance", as phrased in Nili et al. (2014).

First, the LD-t has been described and used previously (before Nili et al. 2014), though (as far as I can tell) not in the context of representational similarity analysis (RSA). It is summarized in this figure (Figure S8a from Kriegeskorte et al., 2007, PNAS); the paper's supplemental text has additional explanation (the "Significance testing of ROI response-pattern differences" section). A bit more background is also on pages 69-71 of Niko's thesis.

To give a high-level picture, calculating the LD-t involves deriving a t-value from Fisher linear discriminant test set output. The linear discriminant is fit (weights calculated from the training data) with the standard algorithms, but using an error covariance matrix calculated from the residuals of fitting a GLM to the entire training dataset (time-by-voxel data matrix), which is then run through the Ledoit-Wolf Covariance Shrinkage Estimator function.

This isn't a procedure that can be dropped into an arbitrary MVPA workflow. For example, for my little classification demo I provided a single-subject 4D dataset, in which each of the 20 volumes is the temporally-compressed version (averaging, in this case) of an experimental block; the first ten class A, the second ten class B. The demo then uses cross-validation and an SVM to get an accuracy for distinguishing class A and B. It is trivial to replace SVM in that demo with LDA, but that will not produce the LD-t. One reason is that the LD-t procedure requires splitting the dataset into two parts (a single training and testing set), not arbitrary cross-validation schemes, but there are additional differences.

Here's a description of the procedure; many thanks to Carolina Ramirez for guiding me through the MATLAB! We're assuming a task-based fMRI dataset for a single person.
  • Perform "standard" fMRI preprocessing: motion correction, perhaps also slice-timing correction or spatial normalization. The values resulting from this could be output as a 4d data matrix of the same length as the raw data: one (preprocessed) image for each collected TR. We can also think of this as a (very large) time-by-voxel data matrix, where time is the TR.
  • Create design matrices (X) for each half of the dataset (A and B, using the terms from the Toolbox file fisherDiscrTRDM.m). These are as usual for fMRI mass-univariate analysis: the columns generally include predictors for motion and linear trends as well as the experimental conditions, which have been convolved with a hemodynamic response function.
  • Create data matrices (Y) for each half of the dataset (training (A) and testing (B)), structured as usual for fMRI mass-univariate analysis, except including just the voxels making up the current ROI.
  • Now, we can do the LD-t calculations; this is function fishAtestB_optShrinkageCov_C, also from the Toolbox file fisherDiscrTRDM.m. First, fit the linear model to dataset A (training data): 
eBa=inv(Xa'*Xa)*Xa'*Ya; % calculate betas
eEa=Ya-Xa*eBa; % calculate error (residuals) matrix
  • Use the training set residuals matrix (eEa) to estimate the error covariance, and apply the Ledoit-Wolf Covariance Shrinkage Estimator function to the covariance matrix. This is Toolbox file covdiag.m, and returns the shrinkage estimate of the covariance matrix, Sa.
  • Use the inverse of that covariance matrix (invSa) and the training-set PEIs (eBa) to calculate the Fisher linear discriminant (C is a contrast matrix, made up of -1, 0, and 1 entries, with rows corresponding to the columns of the design matrix). Fitting the discriminant function produces a set of weights, one for each voxel.
was=C'*eBa*invSa; % calculate linear discriminant weights
  • Now, project the test dataset (Yb) onto this discriminant: yb_was=Yb*was'; 
  • The final step is to calculate a t-value describing how well the discriminant separated the test set. A t-value is an estimate divided by its standard error, which is what the final lines of the fishAtestB_optShrinkageCov_C function compute. The values are adjusted before the t-value calculation; it looks like this is intended to compensate for differing array dimensions (degrees of freedom). I don't fully understand every detail, but here are the lines, for completeness:
invXTXb=inv(Xb'*Xb);
ebb_was=invXTXb*(Xb'*yb_was); % fit the test-set design matrix to the discriminant time course (betas)
eeb_was=yb_was-Xb*ebb_was; % residuals of that fit
nDFb=size(yb_was,1)-size(Xb,2); % degrees of freedom: time points minus design-matrix columns
esb_was=diag(eeb_was'*eeb_was)/nDFb; % error variance estimate
C_new=C(1:min([size(ebb_was,1),size(C,1)]),:); % trim the contrast matrix to match the betas
ctb_was2=diag(C_new'*ebb_was); % contrast of the betas (the numerator)
se_ctb_was2=sqrt(esb_was.*diag(C_new'*invXTXb*C_new)); % standard error of the contrast (the denominator)
ts=ctb_was2./se_ctb_was2; % the t-value
  • Once the t-value is calculated it can be converted to a p-value if desired, using the standard t-distribution.

This procedure is for a single subject. Kriegeskorte et al. (2007) did a group analysis by "concatenating the discriminant time courses of all subjects and fitting a composite design matrix with separate predictors for each subject." Nili et al. (2014, supplemental) suggests a different approach: averaging the LD-t RDMs cell-wise across people, then dividing by the square root of the number of subjects.

The statistic produced by these calculations is related to the cross-validated MANOVA described by Carsten Allefeld; I hope to compare them thoroughly in the future.