Tuesday, March 24, 2015

some thoughts on "Generation and Evaluation of a Cortical Area Parcellation from Resting-State Correlations"

Lately I've been working a bit with the brain parcellation map described in Gordon et al. (2014), "Generation and Evaluation of a Cortical Area Parcellation from Resting-State Correlations" (citation below), which is available from the authors (thanks!) in both surface and volumetric formats.

As the title of the paper states, their brain parcellation was derived from resting state functional connectivity analyses. Briefly, the paper describes defining a set of ROIs ("parcels") which divide the cortical surface into meaningful (structurally and functionally) units. The parcel boundaries were defined from functional connectivity analyses, following the (reasonable) assumption that functional connectivity statistics should be homogeneous within a parcel, but shift abruptly at parcel boundaries.

This is not the first functional connectivity-derived brain parcellation (Table 1 of the paper lists others), but I think it is probably the most methodologically rigorous to date. I like that they used two datasets, each with more than 100 people, and serious quality control checks (motion, etc.). I also like their reliability/stability/validity analyses: between the two datasets, in individuals, and against the most robust and best-understood atlases (including cytoarchitectonic).

We'd like to use the parcellation for defining ROIs, particularly for frontal and parietal areas that aren't definitively divided in existing atlases. Gordon et al. did their analyses on the surface, but provide both surface (CIFTI) and volumetric (NIfTI) versions of the parcellation.

Here's an image I made (using a version of my knitr plotting code) showing a few slices of the volumetric parcels (plotted on a conte MNI anatomy I resampled to 2x2x2 mm voxels to match the parcellation's resolution). As is apparent, even in volumetric space the parcels closely follow the cortical surface (the grey matter ribbon).
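For anyone curious how this sort of figure can be made in R, here's a minimal sketch using oro.nifti. The file names, slice indices, and colors are placeholders (this is not the exact code behind the image above), and it assumes the parcellation and anatomy are already in the same 2x2x2 mm space.

library(oro.nifti);

anat <- readNIfTI("conte_MNI_2x2x2.nii.gz", reorient=FALSE);     # hypothetical underlay file
parcels <- readNIfTI("Parcels_MNI_222.nii.gz", reorient=FALSE);  # hypothetical parcellation file
n.parcels <- max(parcels[parcels > 0]);     # number of parcel labels, for the color scale

for (z in c(30, 40, 50)) {    # a few arbitrary axial slice indices
  # greyscale anatomy first, then the parcels on top, leaving non-parcel voxels transparent
  image(anat[,,z], col=gray(0:64/64), axes=FALSE, useRaster=TRUE);
  p <- parcels[,,z];
  p[p == 0] <- NA;
  image(p, col=rainbow(n.parcels), zlim=c(1, n.parcels), add=TRUE, useRaster=TRUE);
}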

I see the logic of constraining analyses to the surface, particularly when fMRI images are acquired with high-resolution imaging and then subjected to precise, surface-optimized preprocessing (such as in the HCP). But I'm less convinced these narrow ROIs are advantageous for datasets collected with larger voxels (say 3x3x3 mm or larger) and "standard" preprocessing (such as SPM and spatial normalization).

For example, here are a few slices showing some of the validated ROIs from my recent paper overlaid on a few greyscale slices of the Gordon et al. parcellation. My dataset was acquired with 4x4x4 mm voxels, and all preprocessing was volumetric. The red ROI in the image at left was defined using a volumetric searchlight analysis, and, unsurprisingly, is rather "blobby": it's not constrained to the cortical ribbon. The red ROI contains 416 voxels, of which 181 (0.43) are in one of the parcels. Does that mean the 235 non-parcel voxels are uninformative? No. With 4x4x4 mm voxels, typical amounts of movement, and preprocessing which included spatial normalization, some of the BOLD, even if it all actually came from the grey matter, will appear to come from outside the cortical ribbon.
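The overlap proportion is simple to compute; here's a rough sketch in R with oro.nifti, assuming the ROI and a volumetric version of the parcellation have already been resampled into the same space (the file names are hypothetical, not files distributed with either paper).

library(oro.nifti);

roi <- readNIfTI("searchlight_ROI_444.nii.gz", reorient=FALSE);   # hypothetical file names
parcels <- readNIfTI("Parcels_444.nii.gz", reorient=FALSE);

roi.voxels <- which(roi > 0);                # linear indices of the ROI's voxels
in.parcel <- sum(parcels[roi.voxels] > 0);   # how many of those fall inside any parcel
in.parcel / length(roi.voxels);              # proportion overlap (0.43 for the ROI described above)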

To summarize, my take is that this parcellation is the result of a methodological tour de force and worth careful consideration, especially for a high-resolution dataset preprocessed with surface analysis in mind (e.g. with freesurfer), even if doing a volumetric analysis. It may be less suitable for datasets with larger voxels and more standard preprocessing.


Gordon EM, Laumann TO, Adeyemo B, Huckins JF, Kelley WM, & Petersen SE (2014). Generation and Evaluation of a Cortical Area Parcellation from Resting-State Correlations. Cerebral Cortex. PMID: 25316338 doi:10.1093/cercor/bhu239

Tuesday, February 17, 2015

research blogging: concatenating vs. averaging timepoints

A little bit of the (nicely described) methods in Shen et al. 2014 ("Decoding the individual finger movements ..." citation below) caught my eye: they report better results when concatenating the images from adjacent time points instead of averaging (or analyzing each independently). The study was straightforward: classifying which finger (or thumb) did a button press. They got good accuracies classifying single trials, with both searchlights and anatomical ROIs. There's a lot of nice methodological detail, including how they defined the ROIs in individual participants, and enough description of the permutation testing to tell that they followed what I'd call a dataset-wise scheme (nice to see!).

But what I want to highlight here is a pretty minor part of the paper: during preliminary analyses they classified the button presses using individual images (i.e., single timepoints; the image acquired during one TR), the average of two adjacent images (e.g., averaging the images collected 3 and 4 TRs after a button press), and the concatenation of two adjacent images (e.g., concatenating the images collected 3 and 4 TRs after the button press), and found the best results for concatenation (they don't specify how much better).

Concretely, concatenation sends more voxels to the classifier each time: if an ROI has 100 voxels, concatenating two adjacent images means that each example has 200 voxels (the 100 ROI voxels at timepoint 1 and the 100 ROI voxels at timepoint 2). The classifier doesn't "know" that this is actually 100 voxels at two timepoints; it "sees" 200 unique voxels. Shen et al. used a linear SVM (c=1), which generally handles large numbers of voxels well; doubling ROI sizes might hurt the performance of other classifiers.
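A toy R snippet may make the distinction concrete (this is just a schematic, not their code): roi.t1 and roi.t2 stand for hypothetical trial-by-voxel matrices at the two timepoints, filled here with random numbers.

n.trials <- 20; n.vox <- 100;    # arbitrary sizes, for illustration only
roi.t1 <- matrix(rnorm(n.trials*n.vox), nrow=n.trials);   # timepoint 1
roi.t2 <- matrix(rnorm(n.trials*n.vox), nrow=n.trials);   # timepoint 2

avg.examples <- (roi.t1 + roi.t2)/2;     # averaging: still 100 features per example
cat.examples <- cbind(roi.t1, roi.t2);   # concatenating: 200 features per example
dim(avg.examples);   # 20 100
dim(cat.examples);   # 20 200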

I haven't tried concatenating timepoints; my usual procedure is averaging (or fitting a HRF-type model). But I know others have also had success with concatenation; feel free to comment if you have any experience (good or bad).


Shen, G., Zhang, J., Wang, M., Lei, D., Yang, G., Zhang, S., & Du, X. (2014). Decoding the individual finger movements from single-trial functional magnetic resonance imaging recordings of human brain activity. European Journal of Neuroscience, 39(12), 2071-2082. DOI: 10.1111/ejn.12547

Saturday, February 14, 2015

hyperacuity with MVPA: a verdict yet?

A few years ago a debate started about whether MVPA hyperacuity is possible: can we pick up signals from sources smaller than an individual voxel? This topic popped up for me again recently, so this post organizes my notes, gives some impressions, and points out some of the key papers.

beginnings: V1 and grating orientations

In 2005 Kamitani & Tong and Haynes & Rees reported being able to use MVPA methods with fMRI data to detect the orientation of gratings in V1. The anatomy and function of early visual areas are better understood than those of most parts of the brain; we know that they are organized into columns, each sensitive to particular visual attributes. In V1, the orientation columns are known to be much smaller than (typical) fMRI voxels, so how could classification be possible?

Multiple groups, including Kamitani & Tong in their 2005 paper, suggested that this "hyperacuity" could be due to a voxel-level bias in the underlying architecture, whether in the distribution of orientation columns, the vasculature, or some combination of the two. The idea here is that, since columns are not perfectly evenly distributed in space, each voxel will end up with more columns of one orientation than another, and this subtle population-level bias is what's being detected in the MVPA.
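A quick toy simulation (mine, not from any of these papers) illustrates why some imbalance is nearly inevitable: if each voxel pools a few hundred columns whose preferred orientation is effectively random, almost no voxel ends up exactly balanced. The numbers below are arbitrary.

set.seed(41);
n.voxels <- 100;
columns.per.voxel <- 200;   # arbitrary; real columns are far smaller than a voxel

# proportion of "orientation A" columns within each simulated voxel
prop.A <- replicate(n.voxels, mean(sample(c("A", "B"), columns.per.voxel, replace=TRUE) == "A"));
summary(abs(prop.A - 0.5));   # nearly every voxel carries a small bias towards one orientation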

does degrading the signal reduce hyperacuity?

From the beginning, the idea that hyperacuity was behind the detection of orientation was met with both excitement and skepticism. In 2010 Hans op de Beeck wrote a NeuroImage Comments and Controversy article which kicked off a series of papers trying to confirm (or refute) hyperacuity by means of degrading the fMRI images. The logic is straightforward: if subtle biases within individual voxels are making the classification possible, degrading the images (blurring, adding noise, filtering) should dramatically reduce the classification accuracy.

op de Beeck (2010) smoothed images containing information at varying spatial scales with different FWHM to change their signal, interpreting the results as suggesting that the apparent hyperacuity might actually be due to larger-scale patterns (spanning multiple voxels). Objections to this technique were raised, however, partly because smoothing's effect on information content is complex and difficult to interpret. Swisher et al. (2010) used spatial filters, rather than smoothing, to degrade the signal in very high-resolution images, and found that small (< 1 mm) scale information was present, and critical for classification accuracy. But the presence of small-scale signal in high-resolution images doesn't preclude the presence of larger (> 2 mm) scale information; indeed, larger-scale information was also found by Swisher et al. (2010). Filtering was also used by Freeman et al. (2011), who identified larger ("coarser") scale information about orientation in V1. Alink et al. (2013) also used filtering, along with more complex stimuli, finding information at a range of scales, but also cautioning that the numerous interactions and complications mean that filtering is not a perfect approach.

spiraling around

Recently, a set of studies has tried another approach: changing the stimuli (such as to spirals) to try to avoid potential confounds related to the visual properties of the stimuli. These debates (e.g., Freeman et al. 2013, Carlson 2014) get too much into the details of visual processing for me to summarize here, but a new set of Comments and Controversy NeuroImage articles (Carlson & Wardle 2015, Clifford & Mannion 2015) suggests that using spiral stimuli won't be definitive, either.

my musings, and does this imply anything about MVPA?

Overall, I'm landing in the "bigger patterns, not hyperacuity" camp. I find the demonstrations of larger-scale patterns convincing, and a more plausible explanation of the signal, at least for human fMRI with ~ 3 mm voxels; it strikes me as equally reasonable that very small-scale patterns could dominate for high-resolution scanning in anesthetized animals (e.g., Swisher et al. 2010).

But do these debates imply anything about the usefulness of MVPA as a whole? Carlson and Wardle (2015) suggest that they do, pointing out that at this 10-year anniversary of the first papers suggesting the possibility of hyperacuity, we still haven't "determined the underlying source of information, despite our strong understanding of the physiology of early visual cortex." I wonder if this is because the best understanding of the physiology of early visual cortex is at the fine scale (neurons and columns), not the coarse scale (> 5 mm maps). I agree that interpreting the properties of individual voxels from MVPA is fraught with difficulty; interpreting the properties of groups of voxels is much more robust.


Wednesday, January 28, 2015

pointer: "Reward Motivation Enhances Task Coding in Frontoparietal Cortex"

I'm pleased to announce that a long-in-the-works paper of mine is now online: "Reward Motivation Enhances Task Coding in Frontoparietal Cortex". The supplement is now online at the publisher's as well, or you can download it here. This is the work I spoke about at ICON last summer (July 2014). As the title indicates, this is not a straight methodology paper, though it has some neat methodological aspects, which I'll highlight here.

Briefly, the dataset is from a cognitive control task-switching paradigm: during fMRI scanning, people saw images of a human face with a word superimposed. But their response to the stimuli varied according to the preceding cue: in the Word task they responded whether the word part of the stimulus had two syllables or not; in the Face task they responded whether the image was of a man or woman. Figure 1 from the paper (below) schematically shows the timing and trial parts. The MVPA tried to isolate the activity associated with the cue part of the trial.


People did this task on two separate scanning days: first the Baseline session, then the Incentive session. During the Incentive session people had a chance to earn extra money on some trials by responding quickly and accurately.

The analyses in the paper are aimed at understanding the effects of incentive: people perform a bit better when given an incentive (i.e., when more motivated) to perform well. We tested the idea that this improvement in performance occurs because the (voxel-level) brain activity patterns encoding the task are better formed with incentive: sharper, more distinct, less noisy task-related patterns on trials with an incentive than on trials without.

How to quantify "better formed"? There's no simple test, so we got at it three ways:

First, cross-session task classification accuracy (train on baseline session, test on incentive session) was higher on incentive trials, suggesting that the Incentive trials are "cleaner" (less noisy, so easier to classify). Further, the MVPA classification accuracy is a statistical mediator of performance accuracy (how many trials each person responded to correctly): people with a larger incentive-related increase in MVPA classification accuracy also tended to have a larger incentive-related increase in behavioral performance accuracy.

At left is Figure 4 from the paper, showing the correlation between classification and performance accuracy differences; each circle is a participant. It's nice to see this correlation between MVPA accuracy and behavior; there are still relatively few studies tying them together.

Second, we found that the Incentive test set examples tended to be further from the SVM hyperplane than the No-Incentive test set examples, which suggests that the classifier was more "confident" when classifying the Incentive examples. Since we used cross-session classification there was only one hyperplane for each person (the (linear) SVM trained on all baseline session examples), so it's possible to directly compare the distance of the test set examples to the hyperplane.
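For anyone wanting to try something similar, here's a sketch of how test-set distances to a linear SVM hyperplane could be extracted with the e1071 package in R. This is not the code used in the paper; train.data and test.data are hypothetical data frames with a factor column named label plus the voxel columns.

library(e1071);

fit <- svm(label ~ ., data=train.data, kernel="linear", cost=1, scale=FALSE);

# decision values are proportional to the signed distance from the hyperplane;
# dividing by the norm of the weight vector converts them to distances.
w <- t(fit$coefs) %*% fit$SV;   # weight vector of the linear SVM
d.vals <- attr(predict(fit, test.data, decision.values=TRUE), "decision.values");
dists <- abs(d.vals) / sqrt(sum(w^2));   # unsigned distance of each test-set example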

Third, we found a higher likelihood of distance concentration in the No-Incentive examples, suggesting that they are less structured (have higher intrinsic dimensionality) than the Incentive examples. The distance concentration calculation doesn't rely on the SVM hyperplane, and so gives another line of evidence.

There's (of course!) lots more detail and cool methods in the main paper; hope you enjoy! As always, please let me know what you think of this (and any questions), in comments, email, or in person.

UPDATE (24 March 2015): I have put some of the code and input images for this project online at the Open Science Foundation.


Etzel JA, Cole MW, Zacks JM, Kay KN, & Braver TS (2015). Reward Motivation Enhances Task Coding in Frontoparietal Cortex. Cerebral Cortex. PMID: 25601237

Wednesday, January 21, 2015

research blogging: "Exceeding chance level by chance"

Neuroskeptic made me aware of a new paper by Combrisson & Jerbi entitled "Exceeding chance level by chance: The caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy"; full citation below. Neuroskeptic's post has comments and a summary of the article, which I suggest you check out, along with its comment thread. 

My first reaction reading the article was confusion: are they suggesting we shouldn't test against chance (0.5 for two classes), but against some other value? But no, they are arguing that it is necessary to do a test against chance ... to which I say, yes, of course it is necessary to do a statistical test to see if the accuracy you obtained is significantly above chance. The authors are arguing against a claim ("the accuracy is 0.6! 0.6 is higher than 0.5, so it's significant!") that I don't think I've seen in an MVPA paper, and would certainly question if I did. Those of us doing MVPA debate how exactly to best do a permutation test (a favorite topic of mine!), and whether the binomial or t-test is appropriate in particular situations, but everyone agrees that a statistical test is needed to support a claim that an accuracy is significant. In short, I agree with the paper's basic message: an observed accuracy needs a proper statistical test, not just a comparison against the theoretical chance level.

What about the results of the paper's analyses? Basically, they strike me as unsurprising. For example, the authors note that smaller datasets are less stable (e.g., it is quite easy to get accuracies above 0.7 in noise data with only 5 examples of each class), and that smaller test set sizes (e.g., leave-1-out vs. leave-20-out cross-validation with 100 examples) tend to have higher variance across the cross-validation folds (and so make it harder to reach significance). At right is Figure 1e, showing the accuracies they obtained from classifying many (Gaussian random) noise datasets of different sizes. What I immediately noticed is how nice and symmetrical around chance the spread of dots appears: this is the sort of figure we expect to see when doing a permutation test. Eyeballing the graph (and assuming the permutation test was done properly), we'd probably end up with accuracies above 0.7 being significant at small sample sizes, and around 0.6 for larger datasets, which strikes me as reasonable.
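Their simulation is easy to approximate; the toy R sketch below classifies pure Gaussian noise with a leave-one-example-out-per-class scheme, using arbitrary sizes (not their exact procedure), just to show how far a single small-sample accuracy can wander from 0.5.

library(e1071);
set.seed(45);

n.per.class <- 5;   # examples per class (small, as in their worst case)
n.vox <- 100;       # "voxels"
dat <- data.frame(label=factor(rep(c("a", "b"), each=n.per.class)),
                  matrix(rnorm(2*n.per.class*n.vox), nrow=2*n.per.class));

accs <- rep(NA, n.per.class);
for (i in 1:n.per.class) {   # leave out the ith example of each class
  test.rows <- c(i, n.per.class + i);
  fit <- svm(label ~ ., data=dat[-test.rows,], kernel="linear", cost=1);
  accs[i] <- mean(predict(fit, dat[test.rows,]) == dat$label[test.rows]);
}
mean(accs);   # chance is 0.5, but with so few examples individual runs stray well above (or below) it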

I'm not a particular fan of using the binomial for significance in neuroimaging datasets, especially when the datasets have any sort of complex structure (e.g., multiple fMRI scanning runs, cross-validation, more than one person), which they almost always have. Unless your data is structured exactly like Combrisson & Jerbi's (and they did the permutation test properly, which they might not have; see Martin Hebart's comments), Table 1 strikes me as inadequate for establishing significance: I'd want to see a test taking into account the variance in your actual dataset (and the claims being made).

Perhaps my concluding comment should be that proper statistical testing can be hard, and is usually time consuming, but is absolutely necessary. Neuroimaging datasets are nearly always structured (e.g., sources of variance, patterns of dependency and interaction) far differently from the assumptions of quick statistical tests, and we ask questions of them not covered by one-line descriptions. Don't look for a quick fix; focus on your dataset and claims, and a method for establishing significance is nearly always possible.


Combrisson, E., & Jerbi, K. (2015). Exceeding chance level by chance: The caveat of theoretical chance levels in brain signal classification and statistical assessment of decoding accuracy. Journal of Neuroscience Methods. DOI: 10.1016/j.jneumeth.2015.01.010

Thursday, January 8, 2015

connectome workbench: montages of volumes

This tutorial describes working with montages of volumetric images in the Connectome Workbench. Workbench calls displays with more than one slice "Montages;" these have other names in other programs, such as "MultiSlice" in MRIcroN. I've written a series of tutorials about the Workbench; check this post for comments about getting started, and see other posts labeled workbench.

When you first open a volumetric image in Workbench, the Volume tab doesn't display a montage, but rather a single slice, like in the image at left (which is my fakeBrain.nii.gz demo file superimposed on the conte69 anatomy).

Workbench opens an axial (A) view by default, as in this screenshot. The little push buttons in the Slice Plane section (marked with a red arrow in the screenshot) change the view to the parasagittal (P) (often called the sagittal) or coronal (C) plane instead. Whichever view is selected by the Slice Plane buttons will be shown in the montage - montages can be made of axial slices (as is most common), but just as easily of coronal or sagittal slices. (The All button displays all three planes at once, which can be useful, but is not really relevant for montages.)

To change the single displayed slice, put the mouse cursor in the Slice Indices/Coords section (marked with a red arrow in the screenshot) corresponding to the plane you're viewing, and use the up and down arrows to scroll (or click the little up and down arrow buttons, or type in a new number). In the screenshot, I'm viewing axial slice 109, at 37.0 mm.


Now, on to viewing more than one slice: a montage. The On button in the Montage section (arrow in screenshot at left) puts Workbench into montage mode: click the On button so that it sticks down to work with montages; click it again to get out of montage mode.

Workbench doesn't let you create an arbitrary assortment of slices in montage mode, but rather a display of images with the number of rows (Rows) and columns (Cols) specified in the Montage section boxes. The number of slices between each of the images filling up those rows and columns is given in the Step box of the Montage section, and the slice specified in the Slice Indices/Coords section is towards the middle of the montage. Thus, this screenshot shows images in four rows and three columns, with the displayed slices separated by 12 mm.

Customizing the montage view requires fiddling: adjusting the window size, number of rows and columns, step between slices, and center slice (in the Slice Indices/Coords section) to get the desired collection of slices. On my computer, I can adjust the zoom level (the size of the individual montage slice images) with a "scroll" gesture; I haven't found a keyboard or menu option to similarly adjust the zoom - anyone know of one?

Several useful montage-relevant options are not on the main Volume tab, but rather in the Preferences (bring them up with the Preferences option in the File dropdown menu in the main program toolbar), as shown at left. Setting the Volume Montage Slice Coord option to Off hides the Z=X mm labels, which can be useful. The Volume Axes Crosshairs option hides the crosshairs; experiment with the options to see their effect.

I haven't found ways of controlling all aspects of the montage; for publication-quality images I ended up using an image editor to get full control, such as changing the slice label font.

Friday, December 19, 2014

tutorial: knitr for neuroimagers

I'm a big fan of using R for my MVPA, and have become an even bigger fan over the last year because of knitr. I now use knitr to create nearly all of my analysis-summary documents, even those with "brain blob" images, figures, and tables. This post contains a knitr tutorial in the form of an example knitr-created document, and the source needed to recreate it.


What does knitr do? Yihui has many demonstrations on his web site. I use knitr to create pdf files presenting, summarizing, and interpreting analysis results. Part of the demo pdf is in the image at left to give the idea: I have several paragraphs of explanatory text above a series of overlaid brain images, along with graphs and tables. This entire pdf was created from a knitr .rnw source file, which contains LaTeX text and R code blocks.
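To give a feel for the format, here's a minimal .rnw sketch of the sort of thing I mean: the usual LaTeX boilerplate wrapped around a single R code chunk. The path and file name are placeholders, not the actual demo files described below.

\documentclass{article}
\begin{document}

A sentence or two interpreting the analysis, followed by a plotted slice.

<<brainSlice, echo=FALSE>>=
library(oro.nifti);
in.path <- "c:/temp/demo/";    # placeholder; point to wherever the images live
anat <- readNIfTI(paste0(in.path, "anatomy.nii.gz"), reorient=FALSE);
image(anat[,,50], col=gray(0:64/64), axes=FALSE, useRaster=TRUE);   # one axial slice
@

\end{document}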

Previously, I'd make Word documents describing an analysis, copy-pasting figures and screenshots as needed, and manually formatting tables. Besides time, a big drawback of this system is human memory ... "how exactly did I calculate these figures?" I tried including links to the source R files and notes about thresholds, etc., but often missed some key detail, which I'd then have to reverse-engineer. knitr avoids that problem: I can look at the document's .rnw source code and immediately see which NIfTI image is displayed, which directory contains the plotted data, etc.

In addition to (human) memory and reproducibility benefits, the time saved by using knitr instead of Word for analysis summary documents is substantial. Need to change a parameter and rerun an analysis? With knitr there's no need to spend hours updating the images: just change the file names and parameters in the knitr document and recompile. Similarly, the color scaling or displayed slices can be changed easily.

Using knitr is relatively painless, especially if you use RStudio. There is still a bit of a learning curve, particularly if you want fancy formatting in the text parts of the document, since those use LaTeX syntax. But RStudio takes care of all of the interconnections: simply click the "Compile PDF" button (yellow arrow) ... and it does! I generally don't use RStudio otherwise, but knitr is the exception: I only compile knitr documents in RStudio.


to run the demo

We successfully tested this demo file on Windows, MacOS, and Ubuntu, always using RStudio, but with whichever LaTeX compiler was recommended for the system.

Software-wise, first install RStudio, then install a LaTeX compiler. Within RStudio, you'll need to install the knitr and oro.nifti packages.
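Installing the packages is a single command at the R console inside RStudio (no particular versions needed; whatever CRAN currently provides should be fine):

install.packages(c("knitr", "oro.nifti"));   # run once; the packages are then loaded with library() as needed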

Now, download the files needed for the demo (listed below). These are mostly the NIfTI files I've used in previous tutorials, with a new anatomic underlay image, and the knitr .rnw demo file itself. Put all of the image files into a single directory. When knitr compiles it produces many intermediate files, so it is often best to put each .rnw file into its own directory. For example, put all of the image files into c:/temp/demo/, then brainPlotsDemo.rnw into c:/temp/demo/knitr/.
Next, open brainPlotsDemo.rnw in RStudio. The RStudio GUI tab menu should look like the screenshot above, complete with a Compile PDF button. But don't click the button yet. Instead, go through Tools then Global Options in the top RStudio menus to bring up the Options dialog box, as shown here. Click on the Sweave icon, then tell it to Weave Rnw files using knitr (marked with the yellow arrow). Then click OK to close the dialog box, and everything should be ready. In my experience, RStudio just finds the LaTeX installation - you don't need to set the paths yourself.

In the first code block, change the path to point to where you put the image files. Finally, click the Compile PDF button! RStudio should bring up a running Compile PDF log, finishing by opening the finished pdf in a separate window. A little reload pdf button also appears to the right of the Compile PDF button (red arrow at left). If the pdf viewer doesn't open itself, try clicking this button to reload.

Good luck!