Freesurfer Group-Level Analysis (mri_glmfit)
This procedure is based on information from here
The group-level (Random-Effects) analysis repeats the contrasts performed for individual subjects on a variance-weighted composite of all your subjects. This will identify voxels for which your contrast effects are consistent across all your participants. For the function MRI group analysis you will need to:
- Concatenate individuals into one file (isxconcat-sess)
- Do not smooth (already smoothed during first-level analysis)
- For each space (lh/rh/mni305):
- Run mri_glmfit using weighted least squares (WLS)
- Correct for multiple comparisons
- Optionally merge into one volume space
Before you start
Ensure that your SUBJECTS_DIR variable is set to the working directory containing all your participants, and that the first-level analyses have been completed for all participants (using selxavg3-sess).
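For example (using a hypothetical study path; substitute your own), you might set up and sanity-check the environment like this:

```shell
# Point SUBJECTS_DIR at your study directory (hypothetical path shown here).
export SUBJECTS_DIR="$HOME/my_study"

# Warn if the subjects file expected by isxconcat-sess is missing.
if [ ! -f "$SUBJECTS_DIR/subjects" ]; then
    echo "warning: no subjects file in $SUBJECTS_DIR" >&2
fi
```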
Concatenate First-Level Analyses (isxconcat-sess)
In your $SUBJECTS_DIR, there should be a subjects file containing the subject ID for each participant for whom you ran selxavg3-sess. Assuming that the analysis directories in each participant's folder are called my_analysis (created with mkanalysis-sess), and that the contrasts folder is called conditionA_vs_conditionB (created with mkcontrast-sess), run the isxconcat-sess script as follows:
- Left Hemisphere
isxconcat-sess -sf subjects -analysis my_analysis.lh -contrast conditionA_vs_conditionB -o RFX
- Right Hemisphere
isxconcat-sess -sf subjects -analysis my_analysis.rh -contrast conditionA_vs_conditionB -o RFX
- MNI Space
isxconcat-sess -sf subjects -analysis my_analysis.mni305 -contrast conditionA_vs_conditionB -o RFX
When finished, a new directory, $SUBJECTS_DIR/RFX, will exist and will contain sub-folders for each of your analyses (in this case, the left-hemisphere, right-hemisphere, and MNI305 analyses). Note that your analysis directory does not have to be called RFX (e.g., if multiple people are working in the same subjects folder, you might have an output folder named something like RFX_CHRIS, or whatever suits you).
Note that if you have already run isxconcat-sess in the past, you may get a warning because an earlier group-level directory will already exist. You should probably just delete the old directory and rerun isxconcat-sess.
Run mri_glmfit
PAY ATTENTION: You will need to run mri_glmfit for each contrast performed in each of the analyses! (but see the section below about scripting)
In the above example, we have 1 contrast (conditionA_vs_conditionB) to be carried out for the my_analysis.lh, my_analysis.rh and my_analysis.mni305 analysis directories. Of course, if we had also contrasted other conditions, you would have to run mri_glmfit many more times.
One-Sample Group Mean
The one-sample group mean (OSGM) tests whether the contrast A-B differs significantly from zero. The examples that follow demonstrate how you would run the conditionA_vs_conditionB OSGM analysis for each of the analysis directories in my example scenario:
- my_analysis.lh
cd $SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/
mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage lh --glmdir glm.wls --nii.gz
- my_analysis.rh
cd $SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/
mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage rh --glmdir glm.wls --nii.gz
- my_analysis.mni305
cd $SUBJECTS_DIR/RFX/my_analysis.mni305/conditionA_vs_conditionB/
mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --glmdir glm.wls --nii.gz
Correct for Multiple Comparisons (Surface-Based)
Again, you will have to perform this step for each analysis directory. This takes very little time to run for the surface-based analyses, which can use precomputed values.
An explanation of the parameters:
- --glmdir glm.wls refers to the directory you specified when you ran the mri_glmfit command.
- --cache 3 pos indicates that you want to use the pre-cached simulation with a voxel-wise threshold of 10^-3 (i.e., 0.001), keeping positive contrast values only.
- --cwpvalthresh .025 indicates that you will retain clusters with a size-corrected p-value of p < .025.
- my_analysis.lh
cd $SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/
mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025
- my_analysis.rh
cd $SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/
mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025
Running this step saves thresholded files in the directory indicated by the --glmdir switch (in the above example, files were saved in glm.wls). There, inside the contrast subdirectory (here, osgm), you will find files named cache.*.nii.gz. You can then load those files as overlays to see which clusters survived the size thresholding.
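As a hedged example (the exact overlay file name depends on your --cache arguments and output format; here it assumes --cache 3 pos, --nii.gz, and the osgm contrast subdirectory), the surviving clusters could be viewed with freeview:

```shell
# Directory written by mri_glmfit (left-hemisphere example from above).
GLMDIR=$SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/glm.wls

# Load the cluster-corrected significance map as an overlay on the
# fsaverage inflated surface, if freeview is available on this machine.
if command -v freeview >/dev/null 2>&1; then
    freeview -f "$SUBJECTS_DIR/fsaverage/surf/lh.inflated":overlay="$GLMDIR/osgm/cache.th30.pos.sig.cluster.nii.gz"
fi
```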
Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based)
The above examples use cached Monte Carlo simulations for speed. The help text for mri_glmfit-sim gives the syntax for running your own sims, which take much longer to execute, but are required for MNI305 space:
mri_glmfit-sim --glmdir glmdir \
  --sim nulltype nsim threshold csdbase \
  --sim-sign sign
e.g. Monte Carlo correction for surface space (10K iterations, voxel-wise p<.005, positive differences only):
mri_glmfit-sim --glmdir glm.wls \
  --sim mc-z 10000 2.3 mc-z.pos.005 \
  --sim-sign pos
e.g. permutation correction for MNI space (10K iterations, voxel-wise p<.01, absolute value differences):
mri_glmfit-sim --glmdir glm.wls \
  --sim perm 10000 2 perm.abs.01 \
  --sim-sign abs --cwp 0.05 --3spaces
Options for sim type (listed fastest to slowest) are perm, mc-z, and mc-full.
Notes
The choice of cwpvalthresh was computed by dividing the nominal desired p-value (0.05) by the number of spaces entailed by the simulation. In the above example, since we are only looking at the lh and rh surfaces, we divide 0.05/2 = 0.025. If you were looking at the lh, rh and MNI305 spaces, you would have to divide by 3 to maintain the same FWE: 0.05/3 = 0.0167.
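The division is trivial, but as a sketch, the adjusted threshold for any number of spaces can be computed at the shell prompt:

```shell
# Divide the desired FWE rate (0.05) by the number of spaces examined.
nspaces=3
awk -v n="$nspaces" 'BEGIN { printf "%.4f\n", 0.05 / n }'
# prints 0.0167
```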
As indicated above, the uncorrected voxel threshold (p = .001) was used to enable the use of pre-cached simulated values with the --cache switch. Though .1, .01, and .001 are easy enough to convert (1, 2, 3, respectively), other values can be computed by noting that this value corresponds to -1 × the base-10 logarithm of the uncorrected p-value. For example, to use cluster-size correction with an uncorrected voxel-wise p-value of 0.005, a spreadsheet or calculator can compute -1 × log10(0.005) = 2.301. Or for a p-value of 0.05: -1 × log10(0.05) = 1.301. Rather than specify positive values, you can also use negative values (neg) or absolute values (abs, which I think may be the suggested default?).
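For instance, the conversion can be done at the command line with awk, no calculator needed:

```shell
# -log10(p) gives the voxel-wise threshold value used by --cache / --sim.
p=0.005
awk -v p="$p" 'BEGIN { printf "%.3f\n", -log(p) / log(10) }'
# prints 2.301
```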
Scripting
Because mri_glmfit has to be run for each combination of analysis directory (*.lh, *.rh, possibly *.mni305) and contrast, the number of commands you need to run multiplies pretty quickly. You can check out the OSGM Script page for an example of how to automate these calls to mri_glmfit using a shell script.
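A minimal sketch of such a loop (names follow the examples above; the echo makes this a dry run, so remove it to actually execute the commands):

```shell
# Loop over each analysis space; mni305 omits the --surface option.
for space in lh rh mni305; do
    dir="$SUBJECTS_DIR/RFX/my_analysis.$space/conditionA_vs_conditionB"
    if [ "$space" = "mni305" ]; then
        surfopt=""
    else
        surfopt="--surface fsaverage $space"
    fi
    echo "cd $dir && mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm $surfopt --glmdir glm.wls --nii.gz"
done
```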
Check It Out
This little snippet shows you how to inspect the results of the MNI305 group-level analysis.
MNI305
RFXDIR=RFX
ANALYSIS=FAM.sm4
CONTRAST=task
# assuming you used the conventions described above, the glm results can be found
# in the directory glm.wls, and the osgm results in a subdirectory called osgm
THEDIR=${SUBJECTS_DIR}/${RFXDIR}/${ANALYSIS}.mni305/${CONTRAST}/glm.wls/osgm
tkmeditfv fsaverage orig.mgz -ov ${THEDIR}/perm.abs.01.sig.cluster.nii.gz \
  -seg ${THEDIR}/perm.abs.01.sig.ocn.nii.gz \
  ${THEDIR}/perm.abs.01.sig.ocn.lut
What Next?
The cluster-level correction step will create a series of files in each of the glm directories, automatically named to describe the parameters used. For example, the surface commands above (--cache 3 pos) generated files named cache.th30.pos.sig.*. One of these files is a .annot file, in which each cluster is a label. These labels make convenient functional ROIs, wouldn't you say?
I haven't done this yet, but it seems that one thing you might want to do is map the labels in the .annot file generated by the group mri_glmfit-sim to each participant using mri_surf2surf. A script called cpannot.sh that simplifies this has been written and copied to the UBFS/Scripts/Shell/ folder. This script is invoked thus:
cpannot.sh $SOURCESUBJECT $TARGETSUBJECT $ANNOTFILE
e.g.:
cpannot.sh fsaverage FS_0183 lausanne.annot
Before you do this, you can check out the details of the cluster statistics, which include things like the associated anatomical labels for each cluster and the surface area of each cluster. The clusters can be further subdivided to make them more similar in size using mris_divide_parcellation. With a series of roughly equal-sized ROIs, you can then map them from the fsaverage surface to each participant to extract time series or other information from that individual.