Configure mkanalysis-sess

The first step in the first-level analysis is to configure analyses and contrasts. This step specifies which preprocessing stages should have been run, as well as the parameters needed to construct a design matrix (no data are analyzed yet). This is done with mkanalysis-sess. You only have to do this once per project, not once for each subject in your experiment.

A good way to make it clear how you are configuring your analysis is to declare important parameters as shell environment variables, and then use them when calling mkanalysis-sess:

SMOOTHING=4; #FWHM smoothing kernel in mm; rule of thumb is 2 x voxel size
REFEVENTDUR=0.8; #How long are the events (in seconds)?
TR=2.047; #What is the TR (in seconds)?
HIPASS=0.008; #High-pass cutoff in Hz; 0.008 Hz = 125-second period
STC=up; #What is the slice-time correction order? (up, down, siemens)
NCONDITIONS=3; #How many conditions in the par files
SURFACE=fsaverage; #generally valid options are 'self' or 'fsaverage'
HEMIS=( lh rh ); #for automatically looping over both hemispheres
PARFILE=FAM.par; #What's the name of the .par files in your bold/ directories?
ANROOT="FAM.sm" #base name for the analysis directories

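#one surface-based analysis per hemisphere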
for hemi in "${HEMIS[@]}"
do
mkanalysis-sess \
 -fsd bold \
 -surface ${SURFACE} ${hemi} \
 -fwhm ${SMOOTHING}  \
 -stc ${STC} \
 -event-related \
 -paradigm ${PARFILE} \
 -nconditions ${NCONDITIONS} \
 -timewindow 24 \
 -hpf ${HIPASS} \
 -spmhrf 2 \
 -polyfit 2 \
 -mcextreg \
 -TR ${TR} \
 -refeventdur ${REFEVENTDUR} \
 -analysis ${ANROOT}${SMOOTHING}.${hemi} \
 -per-run -force
done
#repeat for MNI305 voxel-space
mkanalysis-sess \
 -fsd bold \
 -mni305 2 \
 -stc ${STC} \
 -fwhm ${SMOOTHING} \
 -event-related \
 -paradigm ${PARFILE} \
 -nconditions ${NCONDITIONS} \
 -timewindow 24 \
 -spmhrf 2 \
 -polyfit 2 \
 -mcextreg \
 -TR ${TR} \
 -refeventdur ${REFEVENTDUR} \
 -analysis ${ANROOT}${SMOOTHING}.mni \
 -per-run -force

The above code can be saved as a script in your ~/bin directory (e.g., mkanalysis.sh) and modified as required for different datasets or parameter choices.
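
For example (a minimal sketch; the mkanalysis.sh filename and running from the project root are just the conventions suggested above, not requirements):

cd ${SUBJECTS_DIR}             #run from your project root (see the note below)
chmod +x ~/bin/mkanalysis.sh   #make the script executable (only needed once)
~/bin/mkanalysis.sh            #creates the lh, rh, and mni analyses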

This step creates analysis directories in your $SUBJECTS_DIR, each containing the configuration files that hold all the information needed for the next step. Your $SUBJECTS_DIR should be your project root directory.
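
With the example settings above (ANROOT="FAM.sm", SMOOTHING=4), the -analysis names expand to FAM.sm4.lh, FAM.sm4.rh, and FAM.sm4.mni, so a quick sanity check is to confirm that those directories were created:

cd ${SUBJECTS_DIR}
ls -d FAM.sm4.lh FAM.sm4.rh FAM.sm4.mni   #one directory per configured analysis
cat FAM.sm4.lh/analysis.info              #summarizes the options passed to mkanalysis-sess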

Note that there is also a directive to skip volumes; however, it doesn't seem to do what we expect it to, so we just drop those volumes from the data entirely and modify the event onsets to account for the dropped volumes.
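
That workaround can be scripted per run. The sketch below is illustrative only: NDROP, the bold/001 run directory, and the standard FSFAST filename f.nii.gz are assumptions about your layout, while TR and PARFILE are the variables defined above. It copies a run without its first NDROP volumes using mri_convert --nskip, and shifts the event onsets (the first column of the .par file, in seconds) back by NDROP x TR:

NDROP=2;        #example: number of initial volumes to drop
RUN=bold/001;   #example run directory within a session
SHIFT=$(echo "${NDROP} * ${TR}" | bc -l)
cp ${RUN}/f.nii.gz ${RUN}/f_orig.nii.gz   #keep a copy of the original run
mri_convert --nskip ${NDROP} ${RUN}/f_orig.nii.gz ${RUN}/f.nii.gz
awk -v s=${SHIFT} '{ $1 = $1 - s; print }' ${RUN}/${PARFILE} > ${RUN}/${PARFILE}.shifted
#review the shifted paradigm file, then rename it over the original .par file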