Freesurfer Group-Level Analysis (mri glmfit)
Revision as of 13:02, 28 September 2017
This procedure is based on information from here
The group-level (Random-Effects) analysis repeats the contrasts performed for individual subjects on a variance-weighted composite of all your subjects. This will identify voxels for which your contrast effects are consistent across all your participants. For the functional MRI group analysis you will need to:
- Concatenate individuals into one file (isxconcat-sess)
- Do not smooth (already smoothed during first-level analysis)
- For each space (lh/rh/mni305):
- Run mri_glmfit using weighted least squares (WLS)
- Correct for multiple comparisons
- Optionally merge into one volume space
Before you start
Ensure that your SUBJECTS_DIR variable is set to the working directory containing all your participants, and that the first-level analyses have been completed for all participants (using selxavg-3).
Concatenate First-Level Analyses (isxconcat-sess)
In your $SUBJECTS_DIR, there should be a subjects file containing the subject ID for each participant for whom you have run selxavg-3. Assuming that the analysis directory in each participant's folder is called my_analysis (created with mkanalysis-sess), and that the contrasts folder is called conditionA_vs_conditionB (created with mkcontrast-sess), run the isxconcat-sess script as follows:
- Left Hemisphere
isxconcat-sess -sf subjects -analysis my_analysis.lh -contrast conditionA_vs_conditionB -o RFX
- Right Hemisphere
isxconcat-sess -sf subjects -analysis my_analysis.rh -contrast conditionA_vs_conditionB -o RFX
When finished, a new directory, $SUBJECTS_DIR/RFX, will exist and contain sub-folders for each of your analyses (in this case, a left- and a right-hemisphere analysis; MNI305 can also be included to look at subcortical activation contrasts if these were performed at the single-subject level). Note that your output directory does not have to be called RFX (e.g., if multiple people are working in the same subjects folder, you might name it something like RFX_CHRIS, or whatever suits you).
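Since the two isxconcat-sess calls differ only in the hemisphere suffix, a small loop can generate them. A minimal sketch, using the analysis and contrast names from the example above; it only prints the commands so you can inspect them (pipe the output to sh from $SUBJECTS_DIR to actually run them):

```shell
# Print the isxconcat-sess call for each hemisphere.
# Analysis/contrast names are taken from the example above.
cmds=""
for hemi in lh rh; do
  cmds="$cmds
isxconcat-sess -sf subjects -analysis my_analysis.$hemi -contrast conditionA_vs_conditionB -o RFX"
done
echo "$cmds"
```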
Run mri_glmfit
PAY ATTENTION: You will need to run mri_glmfit for each contrast performed for each of the analyses! (but see the section below about scripting)
In the above example, we have one contrast (conditionA_vs_conditionB) to be carried out for the my_analysis.lh and my_analysis.rh analysis directories. Of course, if we also had an MNI305 analysis directory, or if we had also contrasted other conditions, we would have to run mri_glmfit many more times.
One-Sample Group Mean
The 1-sample group mean (OSGM) tests whether the contrast A-B differs significantly from zero. The example that follows demonstrates how you would run the conditionA_vs_conditionB OSGM analysis for the my_analysis.lh and my_analysis.rh analysis directories as in my example scenario:
- my_analysis.lh
cd $SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/
mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage lh --glmdir glm.wls --nii.gz
- my_analysis.rh
cd $SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/
mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage rh --glmdir glm.wls --nii.gz
Correct for Multiple Comparisons
Again, you will have to perform this step for each analysis directory. This takes very little time to run.
An explanation of the parameters:
- --glmdir glm.wls refers to the directory you specified when you ran the mri_glmfit command.
- --cache 3 pos indicates that you want to use the pre-cached simulation with a voxel-wise threshold of 10^-3 (i.e., 0.001) and positive contrast values.
- --cwpvalthresh .025 indicates that you will retain clusters with a cluster-size-corrected p-value of p < .025.
- my_analysis.lh
cd $SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/
mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025
- my_analysis.rh
cd $SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/
mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025
Running this step saves thresholded files in the directory indicated by the --glmdir switch (in the above example, files were saved in glm.wls). There, you will find files named cache.*.nii.gz. You can then load those files as overlays to see which clusters survived the size thresholding.
The above examples use cached Monte Carlo simulations for speed. The help text for mri_glmfit-sim gives the syntax for running your own sims:
mri_glmfit-sim --glmdir glmdir \
  --sim nulltype nsim threshold csdbase \
  --sim-sign sign
e.g.:
mri_glmfit-sim --glmdir glm.wls \
  --sim mc-z 10000 2.3 mc-z.pos.005 --sim-sign pos
Options for sim type (listed fastest to slowest) are perm, mc-z, and mc-full.
Notes
The choice of cwpvalthresh was computed by dividing the nominal desired p-value (0.05) by the number of spaces entailed by the simulation. In the above example we are only looking at the lh and rh surfaces, so 0.05/2 = 0.025. If you were looking at the lh, rh and MNI305 space, you would have to divide by 3 to maintain the same FWE: 0.05/3=0.0167.
As indicated above, the uncorrected voxel threshold (p = .001) was used to enable the use of pre-cached simulated values with the --cache switch. Though .1, .01, and .001 are easy enough to convert (1, 2, 3, respectively), other values can be computed by noting that the --cache value corresponds to -1 × the base-10 logarithm of the uncorrected p-value. For example, to use cluster-size correction with an uncorrected voxel-wise p-value of 0.005, a spreadsheet or calculator gives -log10(0.005) ≈ 2.301; for a p-value of 0.05, -log10(0.05) ≈ 1.301. Rather than specifying positive values, you can also use negative values (neg) or absolute values (abs, which I think may be the suggested default?).
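The log arithmetic above can be checked with a one-line helper; this is plain awk, no FreeSurfer required, and p2cache is just a name made up for this sketch:

```shell
# Convert an uncorrected voxel-wise p-value to the value expected by
# --cache: -1 * log10(p). Pure arithmetic; no FreeSurfer needed.
p2cache() {
  awk -v p="$1" 'BEGIN { printf "%.3f\n", -log(p)/log(10) }'
}
p2cache 0.001   # prints 3.000
p2cache 0.005   # prints 2.301
p2cache 0.05    # prints 1.301
```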
Scripting
Because mri_glmfit has to be run for each combination of analysis directory (*.lh, *.rh, possibly *.mni305) and contrast, the number of commands you need to run multiplies pretty quickly. You can check out the OSGM Script page for an example of how to automate these calls to mri_glmfit using a shell script.
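In that spirit, here is a hedged sketch of what such a script might look like, using the analysis and contrast names from the example above. It only prints the command sequence so you can inspect it first (pipe the output to sh from $SUBJECTS_DIR to actually run it):

```shell
# Generate the mri_glmfit / mri_glmfit-sim calls for every
# hemisphere x contrast combination in the example above.
contrasts="conditionA_vs_conditionB"
plan=""
for hemi in lh rh; do
  for con in $contrasts; do
    plan="$plan
( cd RFX/my_analysis.$hemi/$con && mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage $hemi --glmdir glm.wls --nii.gz && mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025 )"
  done
done
echo "$plan"
```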
What Next?
The cluster-level correction step will create a series of files in each of the glm directories. They will be automatically given names that describe the parameters used. For example, the above commands generated files named cache.th30.abs.sig.*. One of these files is a .annot file, where each cluster is a label in the file. These labels make convenient functional ROIs, wouldn't you say?
I haven't done this yet, but it seems that one thing you might want to do is map the labels in the .annot file generated by the group mri_glmfit-sim to each participant using something like mri_label2label. Before you do this, you can check out the details of the cluster statistics, which include things like the associated anatomical labels for each cluster and the surface area of the cluster. The clusters can be further subdivided with mris_divide_parcellation to make them more similar in size. A series of roughly equal-sized ROIs can then be mapped from the fsaverage surface to each participant to extract time series or other information from that individual.
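A hedged sketch of that mapping step (untested, as noted above): the subject IDs and label names here are hypothetical placeholders, and you should check mri_label2label --help for the exact flags on your FreeSurfer version. The loop only prints the commands:

```shell
# Print one mri_label2label call per subject, mapping a cluster label
# from fsaverage to each subject's left-hemisphere surface.
# sub01/sub02 and lh.cluster-0001.label are placeholder names.
cmds=""
for subj in sub01 sub02; do
  cmds="$cmds
mri_label2label --srcsubject fsaverage --trgsubject $subj --srclabel ./lh.cluster-0001.label --trglabel ./lh.$subj.cluster-0001.label --regmethod surface --hemi lh"
done
echo "$cmds"
```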