Wednesday, January 28, 2015

pointer: "Reward Motivation Enhances Task Coding in Frontoparietal Cortex"

I'm pleased to announce that a long-in-the-works paper of mine is now online: "Reward Motivation Enhances Task Coding in Frontoparietal Cortex". It doesn't look like the supplemental is online at the publisher's yet; you can download it here. This is the work I spoke about at ICON last summer (July 2014). As the title indicates, this is not a straight methodology paper, though it has some neat methodological aspects, which I'll highlight here.

Briefly, the dataset is from a cognitive control task-switching paradigm: during fMRI scanning, people saw images of a human face with a word superimposed. But their response to the stimuli depended on the preceding cue: in the Word task they indicated whether the word part of the stimulus had two syllables or not; in the Face task they indicated whether the image was of a man or a woman. Figure 1 from the paper (below) schematically shows the timing and trial parts. The MVPA aimed to isolate the activity associated with the cue part of the trial.


Participants did this task on two separate scanning days: first the Baseline session, then the Incentive session. During the Incentive session incentives were introduced: people had a chance to earn extra money on some trials by responding quickly and accurately.

The analyses in the paper are aimed at understanding the effects of incentive: people perform a bit better when given an incentive to do so (i.e., when they are more motivated). We tested the idea that this improvement in performance occurs because the (voxel-level) brain activity patterns encoding the task are better formed with incentive: sharper, more distinct, and less noisy on Incentive trials than on No-Incentive trials.

How to quantify "better formed"? There's no simple test, so we got at it three ways:

First, cross-session task classification accuracy (train on the Baseline session, test on the Incentive session) was higher on Incentive trials, suggesting that the Incentive trials are "cleaner" (less noisy, so easier to classify). Further, the MVPA classification accuracy is a statistical mediator of performance accuracy (how many trials each person responded to correctly): people with a larger incentive-related increase in MVPA classification accuracy also tended to have a larger incentive-related increase in behavioral performance accuracy.
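For readers who want a concrete picture of the cross-session scheme, here is a minimal sketch in R with the e1071 package, ignoring details like ROI selection, example averaging, and the mediation analysis. The object names (train.data, train.labels, test.data, test.labels, test.incentive) are hypothetical stand-ins for one person's data, not the paper's actual code:

library(e1071)

# train.data, test.data: matrices of voxel values (rows = examples, columns = voxels)
# train.labels, test.labels: factors giving the task (Word or Face)
# test.incentive: logical vector flagging the Incentive test examples
fit <- svm(x = train.data, y = train.labels, kernel = "linear", cost = 1, scale = FALSE)
acc.incentive <- mean(predict(fit, test.data[test.incentive, ]) == test.labels[test.incentive])
acc.no.incentive <- mean(predict(fit, test.data[!test.incentive, ]) == test.labels[!test.incentive])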

At left is Figure 4 from the paper, showing the correlation between classification and performance accuracy differences; each circle is a participant. It's nice to see this correlation between MVPA accuracy and behavior; there are still relatively few studies tying them together.

Second, we found that the Incentive test set examples tended to be further from the SVM hyperplane than the No-Incentive test set examples, which suggests that the classifier was more "confident" when classifying the Incentive examples. Since we used cross-session classification there was only one hyperplane for each person (from the linear SVM trained on all Baseline session examples), so the distances of the test set examples to the hyperplane can be compared directly.
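Continuing the hypothetical sketch above, the signed decision values from e1071 can be turned into geometric distances to the hyperplane by dividing by the norm of the weight vector; the t-test at the end is just a simplified stand-in for the comparison reported in the paper:

dec <- attr(predict(fit, test.data, decision.values = TRUE), "decision.values")
w <- t(fit$coefs) %*% fit$SV                  # weight vector of the linear SVM
dist.to.plane <- abs(dec) / sqrt(sum(w^2))    # geometric distance of each test example
t.test(dist.to.plane[test.incentive], dist.to.plane[!test.incentive])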

Third, we found a higher likelihood of distance concentration in the No-Incentive examples, suggesting that the No-Incentive examples are less structured (have higher intrinsic dimensionality) than the Incentive examples. The distance concentration calculation doesn't rely on the SVM hyperplane, and so gives another line of evidence.
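The paper describes the distance concentration analysis in detail; as a rough illustration only (not the paper's exact statistic), one simple concentration index is the relative variance of the pairwise distances, which shrinks toward zero as the distances concentrate:

rel.variance <- function(dat) {        # dat: examples in rows, voxels in columns
  d <- as.numeric(dist(dat))           # all pairwise euclidean distances
  sd(d) / mean(d)                      # small values = concentrated distances
}
rel.variance(test.data[test.incentive, ])    # expect a larger value (less concentration) here ...
rel.variance(test.data[!test.incentive, ])   # ... than for the No-Incentive examples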

There's (of course!) lots more detail and cool methods in the main paper; hope you enjoy! As always, please let me know what you think of this (and any questions), in comments, email, or in person.


Etzel JA, Cole MW, Zacks JM, Kay KN, & Braver TS (2015). Reward Motivation Enhances Task Coding in Frontoparietal Cortex. Cerebral Cortex. PMID: 25601237

Wednesday, January 21, 2015

research blogging: "Exceeding chance level by chance"

Neuroskeptic made me aware of a new paper by Combrisson & Jerbi entitled "Exceeding chance level by chance: The caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy"; full citation below. Neuroskeptic's post summarizes and comments on the article; I suggest you check it out, along with its comment thread.

My first reaction on reading the article was confusion: are they suggesting we shouldn't test against chance (0.5 for two classes), but against some other value? But no, they are arguing that it is necessary to do a test against chance ... to which I say, yes, of course it is necessary to do a statistical test to see whether the accuracy you obtained is significantly above chance. The authors are arguing against a claim ("the accuracy is 0.6! 0.6 is higher than 0.5, so it's significant!") that I don't think I've seen in an MVPA paper, and would certainly question if I did. Those of us doing MVPA debate how exactly to best do a permutation test (a favorite topic of mine!), and whether the binomial or t-test is appropriate in particular situations, but everyone agrees that a statistical test is needed to support a claim that an accuracy is significant. In short, I agree with the authors' basic point, that an accuracy numerically above the theoretical chance level is not by itself significant; I just don't think that point is actually in dispute.

What about the results of the paper's analyses? Basically, they strike me as unsurprising. For example, the authors note that smaller datasets are less stable (e.g., it is quite easy to get accuracies above 0.7 in noise data with only 5 examples of each class), and that smaller test set sizes (e.g., leave-1-out vs. leave-20-out cross-validation with 100 examples) tend to have higher variance across the cross-validation folds (and so make it harder to reach significance). At right is Figure 1e, showing the accuracies they obtained from classifying many (Gaussian random) noise datasets of different sizes. What I immediately noticed is how nicely symmetrical around chance the spread of dots appears: this is the sort of figure we expect to see when doing a permutation test. Eyeballing the graph (and assuming the permutation test was done properly), we'd probably end up with accuracies above 0.7 being significant at small sample sizes, and around 0.6 for larger datasets, which strikes me as reasonable.
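If you want to get a feel for these numbers yourself, here's a small R simulation in the same spirit (not their code): classify pure Gaussian noise with leave-one-out cross-validation and look at how widely the accuracies spread around 0.5. All the sizes and names here are arbitrary choices:

library(e1071)

set.seed(42)
n.per.class <- 10     # examples per class; try 5, 10, 50 to see the spread change
n.voxels <- 100
accs <- replicate(500, {
  dat <- matrix(rnorm(2 * n.per.class * n.voxels), nrow = 2 * n.per.class)
  lbls <- factor(rep(c("a", "b"), each = n.per.class))
  correct <- sapply(1:nrow(dat), function(i) {   # leave-one-out cross-validation
    fit <- svm(x = dat[-i, ], y = lbls[-i], kernel = "linear", cost = 1, scale = FALSE)
    predict(fit, dat[i, , drop = FALSE]) == lbls[i]
  })
  mean(correct)                                  # accuracy for this noise dataset
})
summary(accs)           # centered near 0.5, but with a wide spread when n is small
quantile(accs, 0.95)    # rough accuracy cutoff for p < 0.05 in this particular setup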

I'm not a particular fan of using the binomial for significance in neuroimaging datasets, especially when the datasets have any sort of complex structure (e.g., multiple fMRI scanning runs, cross-validation, more than one person), which they almost always have. Unless your data is structured exactly like Combrisson & Jerbi's (and they did the permutation test properly, which they might not have; see Martin Hebart's comments), Table 1 strikes me as inadequate for establishing significance: I'd want to see a test taking into account the variance in your actual dataset (and the claims being made).

Perhaps my concluding comment should be that proper statistical testing can be hard, and is usually time consuming, but is absolutely necessary. Neuroimaging datasets are nearly always structured (e.g., with multiple sources of variance and patterns of dependency and interaction) far differently from what quick statistical tests assume, and we ask questions of them that aren't covered by one-line descriptions. Don't look for a quick fix; focus instead on your dataset and claims, and a method for establishing significance levels is nearly always possible.
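To make that a bit more concrete, here is a minimal sketch of a permutation scheme that respects one common type of structure, fMRI scanning runs, by relabeling only within each run. This is an illustration under assumptions, not a recipe: classify.fun and the other names are hypothetical placeholders for whatever analysis produced the true accuracy (e.g., a cross-validated SVM).

run.permutation.test <- function(dat, lbls, runs, classify.fun, n.perms = 1000) {
  true.acc <- classify.fun(dat, lbls, runs)      # accuracy with the real labels
  perm.accs <- replicate(n.perms, {
    perm.lbls <- lbls
    for (r in unique(runs)) {                    # shuffle labels within each run only
      in.run <- which(runs == r)
      perm.lbls[in.run] <- sample(lbls[in.run])
    }
    classify.fun(dat, perm.lbls, runs)           # accuracy with relabeled data
  })
  # p-value: proportion of (true + permuted) accuracies at least as large as the true one
  mean(c(true.acc, perm.accs) >= true.acc)
}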


Combrisson, E., & Jerbi, K. (2015). Exceeding chance level by chance: The caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy. Journal of Neuroscience Methods. DOI: 10.1016/j.jneumeth.2015.01.010

Thursday, January 8, 2015

connectome workbench: montages of volumes

This tutorial describes working with montages of volumetric images in the Connectome Workbench. Workbench calls displays with more than one slice "Montages"; these have other names in other programs, such as "MultiSlice" in MRIcroN. I've written a series of tutorials about the Workbench; see this post for comments about getting started, and other posts labeled workbench.

When you first open a volumetric image in Workbench, the Volume tab doesn't display a montage, but rather a single slice, like in the image at left (which is my fakeBrain.nii.gz demo file superimposed on the conte69 anatomy).

Workbench opens an axial (A) view by default, as in this screenshot. The little push buttons in the Slice Plane section (marked with a red arrow in the screenshot) change the view to the parasagittal (P) (often simply called sagittal) or coronal (C) plane instead. Whichever view is selected by the Slice Plane buttons will be shown in the montage - montages can be made of axial slices (as is most common), but just as easily of coronal or sagittal slices. (The All button displays all three planes at once, which can be useful, but isn't really relevant for montages.)

To change the single displayed slice, put the mouse cursor in the Slice Indices/Coords section (marked with a red arrow in the screenshot) corresponding to the plane you're viewing, and use the up and down arrows to scroll (or click the little up and down arrow buttons, or type in a new number). In the screenshot, I'm viewing axial slice 109, at 37.0 mm.


Now, on to viewing more than one slice: a montage. The On button in the Montage section (arrow in screenshot at left) puts Workbench into montage mode: click the On button so that it sticks down to work with montages; click it again to get out of montage mode.

Workbench doesn't let you create an arbitrary assortment of slices in montage mode, but rather a display of images with the number of rows (Rows) and columns (Cols) specified in the Montage section boxes. The number of slices between each of the images filling up those rows and columns is given in the Step box of the Montage section, and the slice specified in the Slice Indices/Coords section is towards the middle of the montage. Thus, this screenshot shows images in four rows and three columns, with the displayed slices separated by 12 mm.

Customizing the montage view requires fiddling: adjusting the window size, number of rows and columns, step between slices, and center slice (in the Slice Indices/Coords section) to get the desired collection of slices. On my computer, I can adjust the zoom level (the size of the individual montage slice images) with a "scroll" gesture; I haven't found a keyboard or menu option to similarly adjust the zoom - anyone know of one?

Several useful montage-relevant options are not on the main Volume tab, but rather in the Preferences (bring it up with the Preferences option in the File dropdown menu in the main program toolbar), as shown at left. Setting the Volume Montage Slice Coord option to Off hides the Z=X mm labels, which can be useful. Turning off the Volume Axes Crosshairs option hides the crosshairs; experiment with the options to see their effects.

I haven't found ways of controlling all aspects of the montage; for publication-quality images I ended up using an image editor to get full control, such as changing the slice label font.

Friday, December 19, 2014

tutorial: knitr for neuroimagers

I'm a big fan of using R for my MVPA, and have become an even bigger fan over the last year because of knitr. I now use knitr to create nearly all of my analysis-summary documents, even those with "brain blob" images, figures, and tables. This post contains a knitr tutorial in the form of an example knitr-created document, and the source needed to recreate it.


What does knitr do?  Yihui has many demonstrations on his web site. I use knitr to create pdf files presenting, summarizing, and interpreting analysis results. Part of the demo pdf is in the image at left to give the idea: I have several paragraphs of explanatory text above a series of overlaid brain images, along with graphs and tables. This entire pdf was created from a knitr .rnw source file, which contains LaTeX text and R code blocks.
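To give a flavor of the format, here is a made-up minimal example (not the brainPlotsDemo.rnw file from the demo below): an .rnw source mixes ordinary LaTeX text with R chunks delimited by <<>>= and @. The file name and slice index in the chunk are hypothetical.

\documentclass{article}
\begin{document}

Explanatory text about the analysis goes here, written in ordinary \LaTeX{}.

<<brain-slice, echo=FALSE, fig.height=3, fig.width=3>>=
library(oro.nifti)
img <- readNIfTI("fakeBrain.nii.gz")   # hypothetical path; any 3d NIfTI will do
# display one axial slice (pick an index within the image) as a grayscale image
image(img[, , 30], col = gray(0:64/64), useRaster = TRUE, axes = FALSE, xlab = "", ylab = "")
@

\end{document}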

Previously, I'd make Word documents describing an analysis, copy-pasting figures and screenshots as needed, and manually formatting tables. Besides the time required, a big drawback of this system is human memory ... "how exactly did I calculate these figures?" I tried including links to the source R files and notes about thresholds, etc., but often missed some key detail, which I'd then have to reverse-engineer. knitr avoids that problem: I can look at the document's .rnw source code and immediately see which NIfTI image is displayed, which directory contains the plotted data, etc.

In addition to (human) memory and reproducibility benefits, the time saved by using knitr instead of Word for analysis summary documents is substantial. Need to change a parameter and rerun an analysis? With knitr there's no need to spend hours updating the images: just change the file names and parameters in the knitr document and recompile. Similarly, the color scaling or displayed slices can be changed easily.

Using knitr is relatively painless if you use RStudio. There is still a bit of a learning curve, especially if you want fancy formatting in the text parts of the document, since it uses LaTeX syntax. But RStudio takes care of all of the interconnections: simply click the "Compile PDF" button (yellow arrow) ... and it does! I generally don't use RStudio, except for knitr: knitr documents I only compile in RStudio.


To run the demo

We successfully tested this demo file on Windows, MacOS, and Ubuntu, always using RStudio, but with whichever LaTeX compiler was recommended for the system.

Software-wise, first install RStudio, then install a LaTeX compiler. Within RStudio, you'll need to install the knitr and oro.nifti packages.
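For example, from the R console inside RStudio:

install.packages(c("knitr", "oro.nifti"))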

Now, download the files needed for the demo (listed below). These are mostly the NIfTI files I've used in previous tutorials, plus a new anatomic underlay image and the knitr .rnw demo file itself. Put all of the image files into a single directory. When knitr compiles, it produces many intermediate files, so it is often best to put each .rnw file into its own directory. For example, put all of the image files into c:/temp/demo/, then brainPlotsDemo.rnw into c:/temp/demo/knitr/.
Next, open brainPlotsDemo.rnw in RStudio. The RStudio GUI tab menu should look like the screenshot above, complete with a Compile PDF button. But don't click the button yet. Instead, go through Tools then Global Options in the top RStudio menus to bring up the Options dialog box, as shown here. Click on the Sweave icon, then tell it to Weave Rnw files using knitr (marked with the yellow arrow). Then click Ok to close the dialog box, and everything should be ready. In my experience, RStudio just finds the LaTeX installation - you don't need to set the paths yourself.

In the first code block, change the path to point to where you put the image files. Finally, click the Compile PDF button! RStudio should bring up a running Compile PDF log, finishing by opening the compiled pdf in a separate window. A little reload pdf button also appears to the right of the Compile PDF button (red arrow at left). If the pdf viewer doesn't open by itself, try clicking this button to reload.

Good luck!

Saturday, November 22, 2014

free advice of the day: is it brain-shaped?

Here's some advice: when starting a new analysis, and especially when troubleshooting problematic code, always open up a few of the input images and make sure they are brain-shaped. Many, many hours have been spent on complicated debugging when the problem was faulty input images.

At left is a screenshot from MRIcroN, my go-to program for quickly looking at images: yep, got data in the ranges I expected and in the shape of a brain.
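If you'd rather do the same sanity check from code, a few lines of R (using the oro.nifti package; the file name here is a made-up example) cover the basics:

library(oro.nifti)

img <- readNIfTI("my_input_image.nii.gz")   # hypothetical file name
dim(img)      # the expected number of voxels in each dimension?
range(img)    # values in the range you expected?
image(img)    # quick all-slice view: is it brain-shaped?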

Monday, November 10, 2014

SfN: Wanna talk MVPA?

I'll be at SfN next week, Sunday (16th) through Tuesday (18th). Drop me an email if you'd like to get together and talk some MVPA, permutation testing, or other exciting methods. I might try to organize a (very informal) gathering of MVPA-interested people; email me if you'd like the details. See you at SfN!

Wednesday, October 29, 2014

demo: transforming from MNI to Talairach (or other atlases)

I generally prefer to spatially normalize to an MNI anatomical template, but sometimes need to work with images that were normalized to a Talairach atlas. This post shows how to warp a NIfTI image (such as an ROI mask) from one space to another. Converting coordinates is a different matter; see Laird et al., 2010 for more details.

Basic strategy: Use the Normalise function in SPM, with the Template Image set to the new atlas, the Source Image to the existing atlas, and the Images to Write to the mask(s). Then, use ImCalc in SPM to resample the transformed mask to the needed dimensions.

Thus, we need atlas images for both the space we're in (e.g. MNI) and the space we're going to (e.g. Talairach). The atlas images don't need to match in voxel size, but should be similar in overall appearance; for example, both skull-stripped (or both not), and of roughly equivalent brightness. Check that the mask is in the proper location when overlaid on the starting (source) atlas image.



First, warp the image to match the new atlas. In SPM, select Normalise: Estimate & Write and add a Subject. Set the Source Image to the atlas image that matches the current space of the mask (MNI, in this example), and the Template Image to the atlas image that matches the new space (here, Talairach). Set the Images to Write to the mask (here, aligned to the MNI atlas), and set the Voxel sizes to the desired output size (here, 3x3x3 mm). Running the function produces files named like the Images to Write, but with the Filename Prefix added at the front.

Check that the output image looks approximately like the original mask. If things look very wrong, try changing the Interpolation, check that the atlas images are similar, and check that the mask was plotted properly on the source atlas image.

If the output image looks ok, check the NIfTI header parameters (e.g. in MRIcroN). Most likely, they will not be exactly what you need. For example, I need my Talairach-ed mask to match some functional images precisely (bounding box, orientation, voxel size). ImCalc in SPM will do this final transformation.

For ImCalc we need an image to match: one with the NIfTI header information we want the mask to have (it's ok if the template is 4d and the mask 3d). In this case, the functional image (aligned to the Talairach atlas) is TAL_template.nii. Set two Input Images: first the template, then the mask (here wMNI_mask.nii, the output from Normalise); see screenshot. The order is important: ImCalc will change the second image to match the first, writing a new file named by the Output Filename. Set the Expression to i2.

Finally, compare the original and transformed masks for alignment. Here, I show them both on anatomic images. The original mask had integer values, which have been "blurred" by the warping. I generally correct such blurring manually in R, such as by rounding the voxel values, and then changing individual voxel values as needed to fine-tune the transformed mask. How much fine-tuning is needed varies, depending on the degree of transformation required, initial mask image, etc.
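For what it's worth, here's a minimal sketch of that kind of clean-up in R with the oro.nifti package. The file names follow this example, the exact fixes needed will differ from mask to mask, and the indexing comment is just an illustration of a single-voxel adjustment:

library(oro.nifti)

mask <- readNIfTI("wMNI_mask.nii")        # the warped mask written by Normalise
mask@.Data <- round(mask@.Data)           # snap the blurred values back to integers
# then change individual voxel values by indexing (e.g. mask[x, y, z] <- 0) to fine-tune
writeNIfTI(mask, "wMNI_mask_rounded")     # writes wMNI_mask_rounded.nii.gz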

I've found that this combination of SPM Normalise and ImCalc usually transforms masks well enough that relatively little manual adjustment is needed. Please share if you know of any alternative (or better!) procedures.