<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://ccn-wiki.caset.buffalo.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Connor</id>
	<title>CCN Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://ccn-wiki.caset.buffalo.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Connor"/>
	<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php/Special:Contributions/Connor"/>
	<updated>2026-05-01T22:54:50Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.3</generator>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Detrending_FreeSurfer_Data&amp;diff=1529</id>
		<title>Detrending FreeSurfer Data</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Detrending_FreeSurfer_Data&amp;diff=1529"/>
		<updated>2018-04-12T15:44:16Z</updated>

		<summary type="html">&lt;p&gt;Connor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Over the course of a run, the signal in different regions of the brain can drift linearly. There are many possible causes for this that have nothing to do with any interesting aspect of your data -- in other words, this linear drift is a nuisance artifact. It needs to be removed from the data because it can introduce spurious correlations between two otherwise unrelated time series. You can see this for yourself in a quick experiment you could whip up in Excel: take two vectors of 100 randomly generated numbers (e.g., randbetween(1,99)). They should be uncorrelated. Now add 1, 2, 3, ... , 99, 100 to the values in each vector. This simulates a linear trend in the data. You shouldn&#039;t be surprised to find that the two vectors are now substantially and positively correlated!&lt;br /&gt;
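The thought experiment above can also be run outside of Excel. The following is a minimal sketch in bash/awk (a demo only, not one of the lab scripts): it builds two random vectors, adds the same 1..100 trend to both, and reports the Pearson correlation before and after.&lt;br /&gt;

```shell
# Demo only: spurious correlation induced by a shared linear trend.
# Two vectors of 100 random integers in 1..99 start out uncorrelated;
# adding 1,2,...,100 to both makes them substantially positively correlated.
rvals=$(awk 'BEGIN {
    srand(1); n = 100
    for (i = n; i; i--) { x[i] = int(rand() * 99) + 1; y[i] = int(rand() * 99) + 1 }
    r0 = pearson(x, y, n)
    for (i = n; i; i--) { x[i] += i; y[i] += i }    # add the trend
    r1 = pearson(x, y, n)
    printf "%.3f %.3f", r0, r1
}
function pearson(a, b, n,   i, sa, sb, saa, sbb, sab) {
    for (i = n; i; i--) {
        sa += a[i]; sb += b[i]
        saa += a[i] * a[i]; sbb += b[i] * b[i]; sab += a[i] * b[i]
    }
    return (n * sab - sa * sb) / sqrt((n * saa - sa * sa) * (n * sbb - sb * sb))
}')
set -- $rvals
r_before=$1
r_after=$2
echo "r before trend: ${r_before}"
echo "r after trend:  ${r_after}"
```

The exact numbers depend on the random seed, but the post-trend correlation is reliably positive and larger than the pre-trend one.&lt;br /&gt;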
&lt;br /&gt;
A script, detrend.sh, has been written to remove the linear trend from your BOLD data:&lt;br /&gt;
==== detrend.sh ====&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 USAGE=&amp;quot;Usage: detrend.sh filepattern surface sub1 ... subN&amp;quot;&lt;br /&gt;
 if [ &amp;quot;$#&amp;quot; == &amp;quot;0&amp;quot; ]; then&lt;br /&gt;
 	echo &amp;quot;$USAGE&amp;quot;&lt;br /&gt;
 	exit 1&lt;br /&gt;
 fi&lt;br /&gt;
  &lt;br /&gt;
 #first parameter is the filepattern for the .nii.gz time series to be detrended, up to the surface indicator&lt;br /&gt;
 #e.g., for fmcpr.sm6.self.?h.nii.gz or fmcpr.sm6.fsaverage.?h.nii.gz, the filepattern is fmcpr.sm6&lt;br /&gt;
 filepat=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 &lt;br /&gt;
 #second parameter should be specified either as &#039;&#039;&#039;self&#039;&#039;&#039; or &#039;&#039;&#039;fsaverage&#039;&#039;&#039;&lt;br /&gt;
 surf=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 &lt;br /&gt;
 #after the two shift commands, the filepattern and surface have &lt;br /&gt;
 #fallen off the argument list; the remaining arguments &lt;br /&gt;
 #should be subject_ids&lt;br /&gt;
 subs=( &amp;quot;$@&amp;quot; );&lt;br /&gt;
 hemis=( &amp;quot;lh&amp;quot; &amp;quot;rh&amp;quot; );&lt;br /&gt;
 &lt;br /&gt;
 for sub in &amp;quot;${subs[@]}&amp;quot;; do&lt;br /&gt;
    source_dir=${SUBJECTS_DIR}/${sub}/bold&lt;br /&gt;
    echo ${source_dir}&lt;br /&gt;
    if [ ! -d ${source_dir} ]; then&lt;br /&gt;
        #The subject_id does not exist&lt;br /&gt;
        echo &amp;quot;${source_dir} does not exist!&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
 	cd ${source_dir}&lt;br /&gt;
 	readarray -t runs &amp;lt; runs&lt;br /&gt;
 	for r in &amp;quot;${runs[@]}&amp;quot;; do&lt;br /&gt;
 		if [ -n &amp;quot;${r}&amp;quot; ]; then&lt;br /&gt;
 		#the -n test makes sure that the run number is not an empty string&lt;br /&gt;
 		#caused by a trailing newline in the runs file&lt;br /&gt;
 			for hemi in &amp;quot;${hemis[@]}&amp;quot;; do&lt;br /&gt;
 				cd ${source_dir}/${r}&lt;br /&gt;
 				pwd&lt;br /&gt;
        			#subject_id does exist. Detrend&lt;br /&gt;
 				if [ &amp;quot;${surf}&amp;quot; = &amp;quot;self&amp;quot; ]; then&lt;br /&gt;
 &lt;br /&gt;
 					SURFTOUSE=${sub}&lt;br /&gt;
 				else&lt;br /&gt;
 					SURFTOUSE=fsaverage&lt;br /&gt;
 				fi&lt;br /&gt;
                		mri_glmfit --y ${source_dir}/${r}/${filepat}.${surf}.${hemi}.nii.gz \&lt;br /&gt;
        			--glmdir ${source_dir}/${r}/${hemi}.detrend \&lt;br /&gt;
 				--qa --save-yhat --eres-save \&lt;br /&gt;
 				--surf ${SURFTOUSE} ${hemi}&lt;br /&gt;
 				mv ${source_dir}/${r}/${hemi}.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${hemi}.mgh&lt;br /&gt;
        		done&lt;br /&gt;
 		fi&lt;br /&gt;
 	done&lt;br /&gt;
    fi&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Running the Script ==&lt;br /&gt;
Before running this script, you will need to create a text file called &#039;runs&#039; in the bold/ directory for each subject&#039;s dataset, e.g.,&lt;br /&gt;
*FS_T1_501/&lt;br /&gt;
**bold/&lt;br /&gt;
***runs&lt;br /&gt;
***005/&lt;br /&gt;
***006/&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;runs&amp;lt;/code&amp;gt; file simply lists each run folder on its own line:&lt;br /&gt;
 005&lt;br /&gt;
 006&lt;br /&gt;
The detrend.sh script uses this file to determine the folders containing the data to be detrended. If this file doesn&#039;t already exist, you can create it manually in any text editor (e.g., &amp;lt;code&amp;gt;nano runs&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;gedit runs&amp;lt;/code&amp;gt;), but the quickest method takes advantage of the fact that the run folders all start with 0 and uses common command-line utilities:&lt;br /&gt;
&lt;br /&gt;
 SUBJECT=FS_501&lt;br /&gt;
 cd ${SUBJECTS_DIR}/${SUBJECT}/bold&lt;br /&gt;
 #the run directories all begin with at least one leading zero (e.g., 005), so match names starting with 0&lt;br /&gt;
 ls -1 | grep &amp;quot;^00*&amp;quot; &amp;gt; runs&lt;br /&gt;
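If several subjects need a runs file, the same one-liner can be wrapped in a loop. This is a sketch only: the subject ID and run folders are hypothetical, and a throwaway temporary directory stands in for your real $SUBJECTS_DIR.&lt;br /&gt;

```shell
# Sketch: generate bold/runs for each listed subject.
# FS_T1_501 and the run folders are stand-ins; point SUBJECTS_DIR at your
# real subjects directory instead of the mktemp demo setup below.
SUBJECTS_DIR=$(mktemp -d)
mkdir -p ${SUBJECTS_DIR}/FS_T1_501/bold/005 ${SUBJECTS_DIR}/FS_T1_501/bold/006

for sub in FS_T1_501; do
    bold_dir=${SUBJECTS_DIR}/${sub}/bold
    if [ -d "${bold_dir}" ]; then
        ( cd "${bold_dir}" ; ls -1 | grep "^00*" > runs )
    else
        echo "${bold_dir} does not exist!"
    fi
done
cat ${SUBJECTS_DIR}/FS_T1_501/bold/runs
```

&lt;br /&gt;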
&lt;br /&gt;
Assuming all your subject folders have the same run folders to detrend, you would detrend multiple subjects using detrend.sh, specifying a file pattern for the source data (i.e., the name of the preprocessed files generated by FS-FAST, omitting everything from the surface indicator onward, e.g., omitting &#039;&#039;.self.?h.nii.gz&#039;&#039;), followed by a list of subject IDs:&lt;br /&gt;
 #A SPECIFIC EXAMPLE: (note these parameters may differ &#039;&#039;&#039;substantially&#039;&#039;&#039; from what you would be typing in)&lt;br /&gt;
 detrend.sh fmcpr.sm6 self FS_T1_501 FS_T2_501 FS_T1_505 FS_T2_505&lt;br /&gt;
 #For a more generalizable example of how you should call this function, see the section below using variables&lt;br /&gt;
The gist is that it calls the mri_glmfit function and saves the residuals after the linear trend has been removed from the data. Multiple files are generated in ?h.detrend/ directories in each run directory. The detrended data are then moved back to the run directory as a new file called ${filepat}.?h.mgh, where ${filepat} is whatever file pattern you provided to the script (note that the source data are .nii.gz files, whereas the detrended data are .mgh files).&lt;br /&gt;
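To see what detrend.sh will do before touching any data, the calls it issues can be previewed by echoing them. This dry-run sketch uses a hypothetical filepattern and subject, and requires no FreeSurfer installation.&lt;br /&gt;

```shell
# Dry-run sketch: print, rather than execute, the mri_glmfit call that
# detrend.sh issues for one run of a hypothetical subject (self surface).
filepat=fmcpr.sm6 ; surf=self ; sub=FS_T1_501
preview=$(for hemi in lh rh; do
    echo "mri_glmfit --y ${filepat}.${surf}.${hemi}.nii.gz --glmdir ${hemi}.detrend --qa --save-yhat --eres-save --surf ${sub} ${hemi}"
    echo "mv ${hemi}.detrend/eres.mgh ${filepat}.${hemi}.mgh"
done)
printf '%s\n' "${preview}"
```

&lt;br /&gt;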
&lt;br /&gt;
== Using variables ==&lt;br /&gt;
As I mentioned in the comments in the sample code snippet above, what you type depends entirely on the filenames, which in turn depend entirely on how the data were preprocessed. You can use environment variables to help walk you through figuring out the correct file patterns. Also, a handy shortcut exists if you happen to have a &amp;lt;code&amp;gt;subjects&amp;lt;/code&amp;gt; file in &amp;lt;code&amp;gt;$SUBJECTS_DIR&amp;lt;/code&amp;gt;. Putting these two techniques together:&lt;br /&gt;
 FILEPATTERN=fmcpr #the preprocessed files will almost always be called &#039;&#039;&#039;fmcpr&#039;&#039;&#039;&lt;br /&gt;
 SMOOTHING=&amp;quot;.sm4&amp;quot; #how much smoothing did you use when you ran preproc-sess?&lt;br /&gt;
 SLICETIME=&amp;quot;.down&amp;quot; #[&amp;quot;.up&amp;quot; | &amp;quot;.down&amp;quot; | OMIT ]&lt;br /&gt;
 SURFACE=self #[self | fsaverage]&lt;br /&gt;
 &lt;br /&gt;
 detrend.sh ${FILEPATTERN}${SLICETIME}${SMOOTHING} $SURFACE `cat ${SUBJECTS_DIR}/subjects`&lt;br /&gt;
This will execute the detrend.sh script on all the subjects listed in the subjects text file, using each subject&#039;s self surface. If the data were preprocessed with the fsaverage surface, you would specify &#039;&#039;fsaverage&#039;&#039; in place of &#039;&#039;self&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
[[Category: Time Series]]&lt;/div&gt;</summary>
		<author><name>Connor</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Time_Series_Data_in_Surface_Space&amp;diff=1528</id>
		<title>Time Series Data in Surface Space</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Time_Series_Data_in_Surface_Space&amp;diff=1528"/>
		<updated>2018-04-12T15:43:37Z</updated>

		<summary type="html">&lt;p&gt;Connor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;We can compute mean time course vectors across all voxels within regions defined in a FreeSurfer annotation (.annot) file (we can also extract the time courses on a vertex-by-vertex basis -- [[#Voxelwise_Time_Series | see below]]). Though any annotation file can be used for this purpose, we have been working at the scale of the Lausanne parcellation. A script exists, &amp;lt;code&amp;gt;gettimecourses.sh&amp;lt;/code&amp;gt;, that will handle this:&lt;br /&gt;
=== gettimecourses.sh ===&lt;br /&gt;
Update Jan 11, 2018: I added a feature that uses the filepattern string to determine whether to use the self or fsaverage annotation.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 USAGE=&amp;quot;Usage: gettimecourses.sh annot filepattern sub1 ... subN&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 if [ &amp;quot;$#&amp;quot; == &amp;quot;0&amp;quot; ]; then&lt;br /&gt;
         echo &amp;quot;$USAGE&amp;quot;&lt;br /&gt;
         exit 1&lt;br /&gt;
 fi&lt;br /&gt;
 &lt;br /&gt;
 #the first two parameters are the annot file and the filepattern for the&lt;br /&gt;
 #detrended .mgh time series to be processed, up to the hemisphere indicator&lt;br /&gt;
 #e.g., detrend.fmcpr.sm6.self.?h.mgh would use detrend.fmcpr.sm6.self as the filepattern&lt;br /&gt;
 annot=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 filepat=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 &lt;br /&gt;
 #subjects&lt;br /&gt;
 subs=( &amp;quot;$@&amp;quot; );&lt;br /&gt;
 #hemispheres&lt;br /&gt;
 hemis=( lh rh )&lt;br /&gt;
 &lt;br /&gt;
 for sub in &amp;quot;${subs[@]}&amp;quot;; do&lt;br /&gt;
 source_dir=${SUBJECTS_DIR}/${sub}/bold&lt;br /&gt;
 &lt;br /&gt;
 #Which annotation? Look for &amp;quot;fsaverage&amp;quot; or &amp;quot;self&amp;quot; in the filepat&lt;br /&gt;
   if [[ &amp;quot;${filepat}&amp;quot; =~ &amp;quot;fsaverage&amp;quot; ]]&lt;br /&gt;
   then&lt;br /&gt;
      SURF=&amp;quot;fsaverage&amp;quot;&lt;br /&gt;
   elif [[ &amp;quot;${filepat}&amp;quot; =~ &amp;quot;self&amp;quot; ]]&lt;br /&gt;
   then&lt;br /&gt;
      SURF=${sub}&lt;br /&gt;
   else&lt;br /&gt;
      #neither surface name appears in the filepattern; exit rather than&lt;br /&gt;
      #pass an empty subject to mri_segstats&lt;br /&gt;
      echo &amp;quot;filepattern must contain &#039;self&#039; or &#039;fsaverage&#039;&amp;quot;&lt;br /&gt;
      exit 1&lt;br /&gt;
   fi&lt;br /&gt;
 &lt;br /&gt;
    if [ ! -d ${source_dir} ]; then&lt;br /&gt;
        #The subject_id does not exist&lt;br /&gt;
        echo &amp;quot;${source_dir} does not exist!&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
        cd ${source_dir}&lt;br /&gt;
        readarray -t runs &amp;lt; runs&lt;br /&gt;
        for hemi in &amp;quot;${hemis[@]}&amp;quot;; do&lt;br /&gt;
                for r in &amp;quot;${runs[@]}&amp;quot;; do&lt;br /&gt;
                        mri_segstats \&lt;br /&gt;
                        --annot ${SURF} ${hemi} ${annot} \&lt;br /&gt;
                        --i ${source_dir}/${r}/${filepat}.${hemi}.mgh \&lt;br /&gt;
                        --sum ${sub}_${hemi}_${annot}_${r}.${filepat}.sum.txt \&lt;br /&gt;
                        --avgwf ${sub}_${hemi}_${annot}_${r}.${filepat}.wav.txt&lt;br /&gt;
                done&lt;br /&gt;
        done&lt;br /&gt;
     fi&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Running the Script ==&lt;br /&gt;
This script has two preconditions (conditions that must be satisfied prior to running the script):&lt;br /&gt;
# The first is that the data have been detrended following the [[Detrending_FreeSurfer_Data | procedure for detrending FreeSurfer data]].&lt;br /&gt;
# The second precondition is that there should be a file called &#039;runs&#039; in the bold/ directory. &lt;br /&gt;
If you have followed the instructions for detrending your data, both conditions should already be met.&lt;br /&gt;
&lt;br /&gt;
The script is run in a terminal by specifying the name of an annot file found in the subject&#039;s label/ directory (i.e., $SUBJECTS_DIR/$SUBJECT_ID/label/), a file pattern, and a list of subject IDs:&lt;br /&gt;
 gettimecourses.sh lausanne fmcpr.siemens.sm6.self FS_T1_501&lt;br /&gt;
The above command would look in the label/ directory for subject &#039;&#039;FS_T1_501&#039;&#039; for &#039;&#039;lh.lausanne.annot&#039;&#039; and &#039;&#039;rh.lausanne.annot&#039;&#039;, and extract the mean time series for each defined region within each .annot file for the bold data found in the file &#039;&#039;fmcpr.siemens.sm6.self.?h.mgh&#039;&#039; in the run folders indicated in the &#039;&#039;runs&#039;&#039; file.&lt;br /&gt;
A series of output files is produced in the subject&#039;s bold/ directory. The time series files are named ${sub}_?h_${annot}_${run}.${filepattern}.wav.txt&lt;br /&gt;
&lt;br /&gt;
==== A Trick ====&lt;br /&gt;
Similar to the shortcut described for detrending the data, you can take advantage of a &amp;lt;code&amp;gt;subjects&amp;lt;/code&amp;gt; file in your SUBJECTS_DIR:&lt;br /&gt;
&lt;br /&gt;
 FILEPATTERN=fmcpr.siemens.sm4.self&lt;br /&gt;
 ANNOT=lausanne&lt;br /&gt;
 gettimecourses.sh $ANNOT $FILEPATTERN `cat ${SUBJECTS_DIR}/subjects`&lt;br /&gt;
&lt;br /&gt;
=== Decoding the Regions ===&lt;br /&gt;
The time series output files are plain text files with rows=time points and columns=regions. In the case of the Lausanne 2008 parcellation, there are approximately 500 columns per file. Interpreting these data will require some sort of reference for the identity of each column.&lt;br /&gt;
&lt;br /&gt;
As you might expect, there is a relationship between the column order and the segmentation ID, as confirmed by this archived email exchange involving yours truly: [https://mail.nmr.mgh.harvard.edu/pipermail/freesurfer/2012-December/026853.html Extracting resting state time series from surface space using Lausanne 2008 parcellation]&lt;br /&gt;
&lt;br /&gt;
When you ran the gettimecourses.sh script, two files were created for each run folder: one containing the time series data (*.wav.txt) and one with some summary data (*.sum.txt). You can use the information from the 5th column of the .sum files to determine the segment labels corresponding to the columns of your time series data. A MATLAB function, &#039;&#039;&#039;[[ParseFSSegments.m]]&#039;&#039;&#039;, has been written (ubfs /Scripts/Matlab folder) that will extract the data from these summary files.&lt;br /&gt;
 %if called without providing filename will launch a dialog box to select the .sum.txt to parse &lt;br /&gt;
 lh_summary=parseFSSegments(); &lt;br /&gt;
 %If you already know the filename, you can provide it as a parameter&lt;br /&gt;
 rh_summary=parseFSSegments(&#039;FS_T1_501_lh_myaparc_60_005.fmcpr.sm6.sum.txt&#039;); &lt;br /&gt;
 &lt;br /&gt;
 lh_region_names=lh_summary.segname;&lt;br /&gt;
 rh_region_names=rh_summary.segname; &lt;br /&gt;
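The same lookup can also be done from the command line without MATLAB. This sketch fabricates a tiny mock .sum.txt (real ones are written by mri_segstats and begin with #-prefixed header lines) and pulls the segment names from the 5th column:&lt;br /&gt;

```shell
# Mock .sum.txt: real files come from mri_segstats; only the 5-column body
# layout (Index SegId NVertices Area StructName) is assumed here.
printf '# Title Segmentation Statistics\n'        >  mock.sum.txt
printf '  1  1001  537  412.3  lh_bankssts_1\n'   >> mock.sum.txt
printf '  2  1002  612  498.7  lh_bankssts_2\n'   >> mock.sum.txt

# Skip the header comments, then print the 5th column (one segment name
# per row, in the same order as the columns of the .wav.txt time series)
grep -v "^#" mock.sum.txt | awk '{ print $5 }'
```

&lt;br /&gt;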
&lt;br /&gt;
Note that the segmentation names will be consistent across participants for a given segmentation scheme. In practical terms, this means you &#039;&#039;&#039;should&#039;&#039;&#039; only have to do this once for a given .annot file, though it&#039;s worth double-checking a few .sum.txt files to make sure the regions are listed in the same order, because Murphy&#039;s law dictates that sometimes things get screwed up.&lt;br /&gt;
&lt;br /&gt;
== Other Information ==&lt;br /&gt;
The FreeSurfer command mris_anatomical_stats uses an .annot file to query properties of a surface. The output of this program can be used to determine anatomical characteristics of each region. These values might be useful for modeling purposes.&lt;br /&gt;
 SUBJECT_ID=FS_T1_501&lt;br /&gt;
 ANNOT=lausanne&lt;br /&gt;
 HEMIS=( &amp;quot;lh&amp;quot; &amp;quot;rh&amp;quot; )&lt;br /&gt;
 for hemi in &amp;quot;${HEMIS[@]}&amp;quot;; do&lt;br /&gt;
    mris_anatomical_stats -a ${SUBJECTS_DIR}/${SUBJECT_ID}/label/${hemi}.${ANNOT}.annot -f ${SUBJECTS_DIR}/${SUBJECT_ID}/${hemi}.${ANNOT}.stats.txt ${SUBJECT_ID} ${hemi}&lt;br /&gt;
 done&lt;br /&gt;
The above bash snippet reports characteristics for subject FS_T1_501, using the ?h.lausanne.annot files in that subject&#039;s label/ directory. The for-loop is a concise way of ensuring that both the lh and rh hemispheres are measured, with the results written to &#039;&#039;lh.lausanne.stats.txt&#039;&#039; and &#039;&#039;rh.lausanne.stats.txt&#039;&#039; in the subject directory (i.e., in &#039;&#039;$SUBJECTS_DIR/FS_T1_501/&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
== Voxelwise Time Series ==&lt;br /&gt;
The above methods extract the mean time series across all the vertices contained in a given region or set of regions. Some techniques, such as MVPA, however, work at the single-voxel (here, single-vertex) level. I [http://www.mail-archive.com/freesurfer%40nmr.mgh.harvard.edu/msg54885.html asked the Freesurfer mailing list] about this and got the following suggestion for using MATLAB to obtain 1 time series per vertex:&lt;br /&gt;
 waveforms = fast_vol2mat(MRIread(&#039;waveform.nii.gz&#039;));&lt;br /&gt;
&lt;br /&gt;
== Now What? ==&lt;br /&gt;
Your time series data can be used to carry out a functional connectivity analysis (the [[ Functional_Connectivity_(Cross-Correlation_Method) | correlational ]] or neural network approach) or machine learning classification.&lt;br /&gt;
[[Category: Time Series]]&lt;/div&gt;</summary>
		<author><name>Connor</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Highlight_Reel&amp;diff=1468</id>
		<title>Highlight Reel</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Highlight_Reel&amp;diff=1468"/>
		<updated>2018-02-19T16:40:16Z</updated>

		<summary type="html">&lt;p&gt;Connor: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Rebecca ==&lt;br /&gt;
*Came up with an amazing nickname for Chris &amp;quot;Norgie&amp;quot; McNorgan.&lt;br /&gt;
&lt;br /&gt;
==Elizabeth==&lt;br /&gt;
*Agreed to coordinate the sorting, tagging and filtering of 20K concept features. That&#039;s a really huge quantity of data.&lt;br /&gt;
&lt;br /&gt;
== James ==&lt;br /&gt;
*Got stumped with Matlab. Went and got a haircut. Came back ready to get things done.&lt;br /&gt;
*Made an awesome shell file that contains a bunch of code needed for functional analysis (which would be a pain to type in by hand).&lt;br /&gt;
&lt;br /&gt;
== Mandy ==&lt;br /&gt;
*Reminded me that I had a package to pick up from the main office. They were document covers, so that was really exciting.&lt;br /&gt;
&lt;br /&gt;
== Erica ==&lt;br /&gt;
*Painted original artwork for the lab.&lt;br /&gt;
**Also makes really nice looking powerpoint slides that can explain waveform filtering&lt;br /&gt;
*Whizzed through a whole pile of statistical analyses in R like &amp;quot;no big&amp;quot;. So second-nature to her that she doesn&#039;t even realize her skill level.&lt;br /&gt;
*Got a coffee maker&lt;br /&gt;
*Always catches the little details&lt;br /&gt;
&lt;br /&gt;
== Greg ==&lt;br /&gt;
*Made a funny remark about how writing a guide about writing guides was &amp;quot;pretty meta&amp;quot;&lt;br /&gt;
*Realizes we should have just been issuing electronic Amazon gift cards all this time&lt;br /&gt;
*Enabled login-controlled edits to the wiki&lt;br /&gt;
&lt;br /&gt;
== Connor ==&lt;br /&gt;
*Showed up&lt;/div&gt;</summary>
		<author><name>Connor</name></author>
	</entry>
</feed>