Time Series Data in Surface Space
We can calculate mean time course vectors across all vertices within regions defined in a FreeSurfer annotation (.annot) file. Though any annotation file can be used for this purpose, we have been working at the scale of the Lausanne parcellation. A script exists, gettimecourses.sh, that will handle this:
gettimecourses.sh
#!/bin/bash
USAGE="Usage: gettimecourses.sh annot filepattern sub1 ... subN"
if [ "$#" -lt 3 ]; then
    echo "$USAGE"
    exit 1
fi
# The first two parameters are the annot name and the file pattern for the
# detrended time series, up to the hemisphere indicator.
# e.g., detrend.fmcpr.sm6.self.?h.mgh would use detrend.fmcpr.sm6.self as the filepattern
annot="$1"
shift
filepat="$1"
shift
# remaining parameters are subject IDs
subs=( "$@" )
# hemispheres
hemis=( lh rh )

for sub in "${subs[@]}"; do
    source_dir=${SUBJECTS_DIR}/${sub}/bold
    if [ ! -d "${source_dir}" ]; then
        # the subject's bold/ directory does not exist
        echo "${source_dir} does not exist!"
    else
        cd "${source_dir}"
        # the runs file lists one run folder per line
        readarray -t runs < runs
        for hemi in "${hemis[@]}"; do
            for r in "${runs[@]}"; do
                mri_segstats \
                    --annot ${sub} ${hemi} ${annot} \
                    --i ${source_dir}/${r}/${filepat}.${hemi}.mgh \
                    --sum ${sub}_${hemi}_${annot}_${r}.${filepat}.sum.txt \
                    --avgwf ${sub}_${hemi}_${annot}_${r}.${filepat}.wav.txt
            done
        done
    fi
done
Running the Script
A precondition of this script is that the data have been detrended following the procedure for detrending FreeSurfer data. A second precondition is that there should be a file called 'runs' in the bold/ directory. If you have followed the instructions for detrending your data, such a file should already exist.
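If for some reason the runs file is missing, it is just a plain-text list of the run folder names in bold/, one per line; for example (these run numbers are hypothetical):

001
002
003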
The script is run in a terminal by specifying the base name of an annot file found in each subject's label/ directory (without the hemisphere prefix or the .annot extension), a file pattern, and a list of subject IDs:
gettimecourses.sh lausanne fmcpr.sm6.self FS_T1_501
The above command would look in the label/ directory for subject FS_T1_501 for lh.lausanne.annot and rh.lausanne.annot, and extract the mean time series for each region defined in each .annot file from the bold data found in fmcpr.sm6.self.?h.mgh in the run folders listed in the runs file. A series of output files is produced in the subject's bold/ directory. The time series files are named ${sub}_?h_${annot}_${run}.${filepattern}.wav.txt. For example, with a run folder named 001 (a hypothetical run number), the left-hemisphere time series for this subject would be written to FS_T1_501_lh_lausanne_001.fmcpr.sm6.self.wav.txt.
Decoding the Regions
The time series output files are plain text files with rows=time points and columns=regions. In the case of the Lausanne 2008 parcellation, there are approximately 500 columns per file. Interpreting these data will require some sort of reference for the identity of each column.
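As a quick orientation, each .wav.txt file is a plain numeric matrix that MATLAB's load can read directly; a minimal sketch (the filename, including the 001 run number, is hypothetical):

% Load one run's mean time series: rows = time points (TRs), columns = regions
ts = load('FS_T1_501_lh_lausanne_001.fmcpr.sm6.self.wav.txt');
[nTRs, nRegions] = size(ts);
fprintf('%d time points x %d regions\n', nTRs, nRegions);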
As you might expect, there is a relationship between the column order and the segmentation ID, as confirmed by this archived email exchange involving yours truly: Extracting resting state time series from surface space using Lausanne 2008 parcellation
When you ran the gettimecourses.sh script, two files were created for each run folder: one containing the time series data (*.wav.txt) and one with summary data (*.sum.txt). You can use the 5th column of the .sum files (the structure name) to determine the segment labels corresponding to the columns of your time series data. A MATLAB function, parseFSSegments.m (in the ubfs /Scripts/Matlab folder), will extract the data from these summary files.
% If called without a filename, this will launch a dialog box to select the .sum.txt to parse
lh_summary = parseFSSegments();
% If you already know the filename, you can provide it as a parameter
rh_summary = parseFSSegments('FS_T1_501_rh_myaparc_60_005.fmcpr.sm6.sum.txt');
lh_region_names = lh_summary.segname;
rh_region_names = rh_summary.segname;
Note that the segmentation names will be consistent across participants for a given segmentation scheme. In practical terms, this means you should only have to do this once for a given .annot file. Still, it's worth double-checking a few .sum.txt files to confirm the regions are listed in the same order, because sometimes things get screwed up.
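A minimal sketch of such a spot check, reusing parseFSSegments from above (the filenames, and the second subject ID, are hypothetical):

% Spot-check that two subjects list the same regions in the same order
s1 = parseFSSegments('FS_T1_501_lh_lausanne_001.fmcpr.sm6.self.sum.txt');
s2 = parseFSSegments('FS_T1_502_lh_lausanne_001.fmcpr.sm6.self.sum.txt');
if isequal(s1.segname, s2.segname)
    disp('Region order matches.');
else
    warning('Region order differs between these .sum.txt files!');
end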
Other Information
The FreeSurfer command mris_anatomical_stats uses an .annot file to query properties of a surface. The output of this program can be used to determine anatomical characteristics of each region. These values might be useful for modeling purposes.
SUBJECT_ID=FS_T1_501
ANNOT=lausanne
HEMIS=( "lh" "rh" )
for hemi in "${HEMIS[@]}"; do
    mris_anatomical_stats \
        -a ${SUBJECTS_DIR}/${SUBJECT_ID}/label/${hemi}.${ANNOT}.annot \
        -f ${SUBJECTS_DIR}/${SUBJECT_ID}/${hemi}.${ANNOT}.stats.txt \
        ${SUBJECT_ID} ${hemi}
done
The above BASH code snippet will report anatomical characteristics for subject FS_T1_501, using the ?h.lausanne.annot files in his/her label/ directory. The for-loop is a concise way of ensuring that both the lh and rh hemispheres are measured, with the results written to lh.lausanne.stats.txt and rh.lausanne.stats.txt in the subject's directory (i.e., in $SUBJECTS_DIR/FS_T1_501/).
Now What?
Your time series data can be used to carry out a functional connectivity analysis (the correlational or neural network approach) or machine learning classification.
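As a minimal illustration of the correlational approach (a sketch, not a full pipeline; the filenames are hypothetical, and in practice you would combine runs and handle confounds first):

% Build a region-by-region functional connectivity matrix from one run
lh = load('FS_T1_501_lh_lausanne_001.fmcpr.sm6.self.wav.txt');
rh = load('FS_T1_501_rh_lausanne_001.fmcpr.sm6.self.wav.txt');
ts = [lh rh];        % time points x (lh + rh regions)
fc = corrcoef(ts);   % Pearson correlation between every pair of regional time series
z  = atanh(fc);      % optional Fisher r-to-z transform (diagonal r = 1 maps to Inf; exclude it)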