<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://ccn-wiki.caset.buffalo.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Chris</id>
	<title>CCN Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://ccn-wiki.caset.buffalo.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Chris"/>
	<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php/Special:Contributions/Chris"/>
	<updated>2026-04-22T21:48:59Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.3</generator>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Main_Page&amp;diff=2280</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Main_Page&amp;diff=2280"/>
		<updated>2022-11-14T20:35:58Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Experiment A to Z (not necessarily in alphabetical order) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the CCN Lab Wiki.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:cpicard.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Terminal Commands &amp;amp; Other Lab Stuff ==&lt;br /&gt;
* [[New Research Staff]]&lt;br /&gt;
* [[Lab Roles]]&lt;br /&gt;
* [[Mess Ups | Times when we messed up]]&lt;br /&gt;
* [[Highlight Reel | Times when we rocked it]]&lt;br /&gt;
* [[Participant Screening]]&lt;br /&gt;
* [[3D Brain Models]]&lt;br /&gt;
&lt;br /&gt;
== Technobabble ==&lt;br /&gt;
* [[ML Environment | Running a ML Workstation ]]&lt;br /&gt;
* [[ubmount | UBFS and ubmount]]&lt;br /&gt;
* [[SSH | Using SSH]]&lt;br /&gt;
* [[Synching Scripts]]&lt;br /&gt;
* [[Connecting to CCR]]&lt;br /&gt;
* [[Lab Email]]&lt;br /&gt;
* [[Excel Formulas]]&lt;br /&gt;
* [[Troubleshooting]]&lt;br /&gt;
* [[BASH Tricks]]&lt;br /&gt;
* [[FreeSurfer on Windows]]&lt;br /&gt;
&lt;br /&gt;
== Data Analysis ==&lt;br /&gt;
* [https://wiki.cam.ac.uk/bmuwiki/FMRI Data Quality]&lt;br /&gt;
* [[Downloading CRTC Data]]&lt;br /&gt;
* [[Behavioral Analyses]]&lt;br /&gt;
* [[FreeSurfer | FreeSurfer Pipeline]]&lt;br /&gt;
* [[SPM | SPM Pipeline]]&lt;br /&gt;
* [[Time Series Analysis]]&lt;br /&gt;
* [[Network Analyses]]&lt;br /&gt;
* [[Self-organized-mapping(SOM)]]&lt;br /&gt;
* [[Data Simulation]]&lt;br /&gt;
&lt;br /&gt;
== Manuscript Preparation ==&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Rendering_MRIcron Brain Rendering in MRIcron (SPM)]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Producing_Tables_of_Coordinates_(SPM) Producing Tables of Coordinates (SPM)]&lt;br /&gt;
* [[Annotation Coordinates| Extracting XYZ coordinates of .annot labels]]&lt;br /&gt;
* [http://colorbrewer2.org/ Color schemes (e.g., color keys for graphs, experiment conditions in fMRI renderings, etc.)]&lt;br /&gt;
** Above link yoinked from [http://sites.bu.edu/cnrlab/lab-resources/ BU&#039;s CNR Lab]&lt;br /&gt;
* [[Acquisition_Parameters | Acquisition Parameters at CTRC]]&lt;br /&gt;
* [[Brain Net Viewer]]&lt;br /&gt;
* [[Manuscript Formatting]]&lt;br /&gt;
&lt;br /&gt;
== Informational ==&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php/Developmental_Neuroscience Developmental Neuroscience]&lt;br /&gt;
* [[NSF Proposal Submission Walkthrough for n00bs]]&lt;br /&gt;
* [[Recipes]]&lt;br /&gt;
&lt;br /&gt;
== MediaWiki - Guides for Getting started ==&lt;br /&gt;
&lt;br /&gt;
* [//www.mediawiki.org/wiki/Help:Formatting Useful Formatting Guide]&lt;br /&gt;
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Consult the [//meta.wikimedia.org/wiki/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Detrending_FreeSurfer_Data&amp;diff=2279</id>
		<title>Detrending FreeSurfer Data</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Detrending_FreeSurfer_Data&amp;diff=2279"/>
		<updated>2022-09-27T20:20:12Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Scripting */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Over the course of a run, there can be a linear drift in the signal in different regions of the brain. There are many possible causes for this that have nothing to do with any interesting aspect of your data -- in other words, this linear drift is a nuisance artifact. This drift should be removed from the data because it can introduce spurious correlations between two unrelated time series. You can see this for yourself in a quick experiment you could whip up in Excel: take two vectors of 100 randomly generated numbers (e.g., randbetween(1,99)). They should be uncorrelated. Now add 1, 2, 3, ... , 99, 100 to the values in each vector. This simulates a linear trend in the data. You shouldn&#039;t be surprised to find that the two vectors are now highly and positively correlated!&lt;br /&gt;
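&lt;br /&gt;
You don&#039;t need Excel for this experiment; a quick, purely illustrative version can be run from the command line with awk (this is not part of any pipeline, and the exact numbers will vary from run to run):&lt;br /&gt;
 awk &#039;BEGIN{srand(); n=100;&lt;br /&gt;
      for(i=1;i&amp;lt;=n;i++){x=int(rand()*99)+1; y=int(rand()*99)+1;&lt;br /&gt;
              sx+=x; sy+=y; sxx+=x*x; syy+=y*y; sxy+=x*y;   #sums for the raw vectors&lt;br /&gt;
              u=x+i; v=y+i;                                 #add the linear trend 1..n&lt;br /&gt;
              su+=u; sv+=v; suu+=u*u; svv+=v*v; suv+=u*v}   #sums for the trended vectors&lt;br /&gt;
      r1=(n*sxy-sx*sy)/sqrt((n*sxx-sx^2)*(n*syy-sy^2));     #Pearson r, raw&lt;br /&gt;
      r2=(n*suv-su*sv)/sqrt((n*suu-su^2)*(n*svv-sv^2));     #Pearson r, trend added&lt;br /&gt;
      printf &amp;quot;r without trend = %.2f, r with trend = %.2f\n&amp;quot;, r1, r2}&#039;&lt;br /&gt;
The first correlation should hover near zero, while the second should be strongly positive.&lt;br /&gt;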
&lt;br /&gt;
A script called detrend.sh has been written to remove the linear trend from your BOLD data. The latest version of this script can be found in /usr/local/sbin (which should be in your $PATH):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt;&#039;&#039;&#039;Update:&#039;&#039;&#039; The script has been modified to also regress out motion parameters from the mcprextreg files. This modification is not (yet) reflected in the code below.&amp;lt;/span&amp;gt;&lt;br /&gt;
==The Script ==&lt;br /&gt;
The most recent version of the BASH script (detrend.sh) is below:&lt;br /&gt;
=== detrend.sh ===&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 USAGE=&amp;quot;Usage: detrend.sh filepattern surf sub1 ... subN&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 if [ &amp;quot;$#&amp;quot; == &amp;quot;0&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;$USAGE&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
 fi&lt;br /&gt;
 &lt;br /&gt;
 #first parameter is the filepattern for the .nii.gz time series to be detrended, up to the surface indicator&lt;br /&gt;
 #e.g., fmcpr.siemens.sm6.fsaverage.?h.nii.gz would use fmcpr.siemens.sm6  as the filepattern, fsaverage as the surf&lt;br /&gt;
 filepat=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 &lt;br /&gt;
 #second parameter should be specified either as self or fsaverage&lt;br /&gt;
 surf=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 &lt;br /&gt;
 #after the shift commands, all the arguments are shifted down two places and the first &lt;br /&gt;
 #2 arguments (the filepattern and surface) fall off the list. &lt;br /&gt;
 #The remaining arguments should be subject_ids&lt;br /&gt;
 subs=( &amp;quot;$@&amp;quot; );&lt;br /&gt;
 hemis=( &amp;quot;lh&amp;quot; &amp;quot;rh&amp;quot; );&lt;br /&gt;
 &lt;br /&gt;
 for sub in &amp;quot;${subs[@]}&amp;quot;; do&lt;br /&gt;
    source_dir=${SUBJECTS_DIR}/${sub}/bold&lt;br /&gt;
    echo ${source_dir}&lt;br /&gt;
    if [ ! -d ${source_dir} ]; then&lt;br /&gt;
        #The subject_id does not exist&lt;br /&gt;
        echo &amp;quot;${source_dir} does not exist!&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
        cd ${source_dir}&lt;br /&gt;
        readarray -t runs &amp;lt; runs&lt;br /&gt;
        for r in &amp;quot;${runs[@]}&amp;quot;; do&lt;br /&gt;
                if [ -n &amp;quot;${r}&amp;quot; ]; then&lt;br /&gt;
                #the -n test makes sure that the run number is not an empty string&lt;br /&gt;
                #caused by a trailing newline in the runs file&lt;br /&gt;
                        for hemi in &amp;quot;${hemis[@]}&amp;quot;; do&lt;br /&gt;
                                cd ${source_dir}/${r}&lt;br /&gt;
                                pwd&lt;br /&gt;
                                #subject_id does exist. Detrend&lt;br /&gt;
                                if [ &amp;quot;${surf}&amp;quot; = &amp;quot;self&amp;quot; ]; then&lt;br /&gt;
 &lt;br /&gt;
                                        SURFTOUSE=${sub}&lt;br /&gt;
                                else&lt;br /&gt;
                                        SURFTOUSE=fsaverage&lt;br /&gt;
                                fi&lt;br /&gt;
                                mri_glmfit --y ${source_dir}/${r}/${filepat}.${surf}.${hemi}.nii.gz \&lt;br /&gt;
                                --glmdir ${source_dir}/${r}/${hemi}.detrend \&lt;br /&gt;
                                --qa --save-yhat --eres-save \&lt;br /&gt;
                                --surf ${SURFTOUSE} ${hemi}&lt;br /&gt;
                                mv ${source_dir}/${r}/${hemi}.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.${hemi}.mgh&lt;br /&gt;
                        done&lt;br /&gt;
                        #now detrend the mni305 file&lt;br /&gt;
                        mri_glmfit --y ${source_dir}/${r}/${filepat}.mni305.2mm.nii.gz \&lt;br /&gt;
                        --glmdir ${source_dir}/${r}/mni305.detrend \&lt;br /&gt;
                        --qa --save-yhat --eres-save&lt;br /&gt;
                        mv ${source_dir}/${r}/mni305.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.mni305.2mm.mgh&lt;br /&gt;
                fi&lt;br /&gt;
        done&lt;br /&gt;
    fi&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Running the Script ==&lt;br /&gt;
===Before you run these scripts===&lt;br /&gt;
Before running either of the scripts on this page, you will need to create a text file called &#039;runs&#039; in the bold/ directory for each subject&#039;s dataset, e.g.,&lt;br /&gt;
*FS_T1_501/&lt;br /&gt;
**bold/&lt;br /&gt;
***runs&lt;br /&gt;
***005/&lt;br /&gt;
***006/&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;runs&amp;lt;/code&amp;gt; file simply lists each run folder on its own line:&lt;br /&gt;
 005&lt;br /&gt;
 006&lt;br /&gt;
The detrend.sh script uses this file to determine the folders containing the data to be detrended. If this file doesn&#039;t already exist, you can manually generate it in any text editor (e.g., &amp;lt;code&amp;gt;nano runs&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;gedit runs&amp;lt;/code&amp;gt;), but the quickest method takes advantage of the fact that the run folders all start with 0, and uses common command-line utilities:&lt;br /&gt;
&lt;br /&gt;
 SUBJECT=FS_501&lt;br /&gt;
 cd ${SUBJECTS_DIR}/${SUBJECT}/bold&lt;br /&gt;
 #if there are fewer than 10 runs of BOLD data, then the run directories will probably have 2 leading zeros&lt;br /&gt;
 ls -1 | grep &amp;quot;^00*&amp;quot; &amp;gt; runs&lt;br /&gt;
===Example script===&lt;br /&gt;
Assuming all your subject folders have the same run folders to detrend, you would detrend multiple subjects using detrend.sh, specifying a file pattern for the source data (i.e., the name of the preprocessed files generated by FS-FAST, omitting anything after the &#039;?h&#039; hemisphere identifier), followed by a list of subject IDs:&lt;br /&gt;
 #A SPECIFIC EXAMPLE: (note these parameters may differ &#039;&#039;&#039;substantially&#039;&#039;&#039; from what you would be typing in)&lt;br /&gt;
 detrend.sh fmcpr.sm6 self FS_T1_501 FS_T2_501 FS_T1_505 FS_T2_505&lt;br /&gt;
 #For a more generalizable example of how you should call this function, see the section below using variables&lt;br /&gt;
The gist is that the script calls mri_glmfit and saves the residuals after the linear trend has been removed from the data. Multiple files are generated in ?h.detrend/ directories in each run directory. The detrended data are subsequently moved back to the run directory as a new file called ${filepat}.?h.mgh, where ${filepat} is whatever file pattern you provided to the script (note that the source data are .nii.gz files, whereas the detrended data are .mgh files).&lt;br /&gt;
&lt;br /&gt;
==White Matter and CSF Signal==&lt;br /&gt;
It is considered best practice in functional connectivity studies to regress WM and CSF signals out of your time series before computing time-series correlations, at least for resting-state studies (Muschelli et al., 2014). We can add WM and CSF signal to the set of nuisance regressors when detrending our data (note that the procedure below removes the mean WM + CSF as a single component; it can be modified to remove WM and CSF independently):&lt;br /&gt;
 &lt;br /&gt;
===Create a configuration file===&lt;br /&gt;
An answer to a question I posted to the FreeSurfer mailing list pointed me to [http://surfer.nmr.mgh.harvard.edu/fswiki/FsFastFunctionalConnectivityWalkthrough a walkthrough] for functional connectivity analyses in FreeSurfer; that is the approach followed here. We will use fcseed-sess to extract the mean signal from seed regions -- in our case, the white matter and CSF masks. It&#039;s a 2-step process. The first step is to create a configuration file:&lt;br /&gt;
 fcseed-config -cfg wmcsf.cfg -wm -vcsf -fsd bold -mean&lt;br /&gt;
This assumes that the BOLD data are in ${SUBJECTS_DIR}/${SUBID}/&#039;&#039;&#039;bold&#039;&#039;&#039; (-fsd), which is likely going to be the case if you&#039;ve been following the FreeSurfer pipeline on this wiki. I ran the above step in ${SUBJECTS_DIR}.&lt;br /&gt;
&lt;br /&gt;
===Computing WM and CSF Regressor Values===&lt;br /&gt;
Now you will have a configuration file, &#039;&#039;&#039;wmcsf.cfg&#039;&#039;&#039;, to pass to the next step, which performs the math on the fMRI data:&lt;br /&gt;
 fcseed-sess -s ${SUBID} -cfg wmcsf.cfg&lt;br /&gt;
or&lt;br /&gt;
 fcseed-sess -sf &#039;&#039;subjects&#039;&#039; -cfg wmcsf.cfg&lt;br /&gt;
&lt;br /&gt;
This will use the config file just created to extract a seed regressor for ${SUBID} (or for all subjects listed in your &#039;&#039;subjects&#039;&#039; file if you use the -sf switch to run in batch mode). You&#039;ll find a file called &#039;&#039;&#039;wmcsf&#039;&#039;&#039; in each of your bold/ run directories. It is a plain-text file with one column of values that you can use as a nuisance regressor. The detrend script could be modified to first run fcseed-sess to generate this regressor and include it in the nuisance regressor matrix.&lt;br /&gt;
&lt;br /&gt;
If you wanted to remove WM and CSF as two separate components, you would need to generate two separate .cfg files, and then run fcseed-sess twice.&lt;br /&gt;
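&lt;br /&gt;
As a sketch of that two-component variant (the configuration file names &#039;&#039;wm.cfg&#039;&#039; and &#039;&#039;vcsf.cfg&#039;&#039; here are just illustrative), you would run something like:&lt;br /&gt;
 fcseed-config -cfg wm.cfg -wm -fsd bold -mean&lt;br /&gt;
 fcseed-config -cfg vcsf.cfg -vcsf -fsd bold -mean&lt;br /&gt;
 fcseed-sess -s ${SUBID} -cfg wm.cfg&lt;br /&gt;
 fcseed-sess -s ${SUBID} -cfg vcsf.cfg&lt;br /&gt;
This would produce two separate seed-regressor files (one per configuration) in each run directory, and both could then be included in the nuisance matrix.&lt;br /&gt;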
&lt;br /&gt;
===Scripting===&lt;br /&gt;
These steps have been combined with the detrend.sh script above to remove linear trends, motion parameters, and wm+csf signal (as a single component). You will find detrend_wmcsf.sh in the ubfs/Scripts/Shell folder; the script is also reproduced below:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: I think there&#039;s a bug that prevents the script from working correctly when passed more than 1 subject at a time.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
====detrend_wmcsf.sh====&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 USAGE=&amp;quot;Usage: detrend_wmcsf.sh filepattern surf SUBJECT&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 if [ &amp;quot;$#&amp;quot; == &amp;quot;0&amp;quot; ]; then&lt;br /&gt;
 	echo &amp;quot;$USAGE&amp;quot;&lt;br /&gt;
 	exit 1&lt;br /&gt;
 fi&lt;br /&gt;
 #first parameter is the filepattern for the .nii.gz time series to be detrended, up to the surface indicator &lt;br /&gt;
 #e.g., fmcpr.siemens.sm6.fsaverage.?h.nii.gz would use fmcpr.siemens.sm6  as the filepattern, fsaverage as the surf&lt;br /&gt;
 filepat=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 #second parameter should be specified either as self or fsaverage&lt;br /&gt;
 surf=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
  &lt;br /&gt;
 #after the shift commands, all the arguments are shifted down two places and the first &lt;br /&gt;
 #2 arguments (the filepattern and surface) fall off the list. The remaining arguments should be subject_ids&lt;br /&gt;
 subs=( &amp;quot;$@&amp;quot; );&lt;br /&gt;
 hemis=( &amp;quot;lh&amp;quot; &amp;quot;rh&amp;quot; );&lt;br /&gt;
 &lt;br /&gt;
 #Make sure we&#039;re in SUBJECTS_DIR to start so that we can make some assumptions about relative file locations&lt;br /&gt;
 cd ${SUBJECTS_DIR}&lt;br /&gt;
 &lt;br /&gt;
 #first, create configuration file for the wm &amp;amp; csf regression&lt;br /&gt;
 echo &amp;quot;Making fcseed config file...&amp;quot;&lt;br /&gt;
 fcseed-config -cfg wmcsf.cfg -wm -vcsf -fsd bold -mean&lt;br /&gt;
 echo &amp;quot;done&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;Processing subjects...&amp;quot;&lt;br /&gt;
 for sub in &amp;quot;${subs[@]}&amp;quot;&lt;br /&gt;
 do&lt;br /&gt;
    source_dir=${SUBJECTS_DIR}/${sub}/bold&lt;br /&gt;
    echo ${source_dir}&lt;br /&gt;
    if [ ! -d ${source_dir} ]; then&lt;br /&gt;
        #The subject_id does not exist&lt;br /&gt;
        echo &amp;quot;${source_dir} does not exist!&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
 	#The subject bold directory exists. Proceed.&lt;br /&gt;
        fcseed-sess -s ${sub} -cfg ${SUBJECTS_DIR}/wmcsf.cfg&lt;br /&gt;
 	echo &amp;quot;Completed wm &amp;amp; csf nuisance regressor calculation for ${sub}&amp;quot;&lt;br /&gt;
 	cd ${source_dir}&lt;br /&gt;
 &lt;br /&gt;
 ########readarray -t runlist &amp;lt; runs  ### This is fine and all on Linux but MacOS does not have readarray&lt;br /&gt;
 &lt;br /&gt;
 	#This is a MacOS compatible alternative to readarray that also works on Linux&lt;br /&gt;
 	runlist=()	#reset the run list for each subject&lt;br /&gt;
        while IFS= read -r r; do&lt;br /&gt;
    		runlist+=(&amp;quot;$r&amp;quot;)&lt;br /&gt;
 	done &amp;lt; runs&lt;br /&gt;
 	for r in &amp;quot;${runlist[@]}&amp;quot;; do&lt;br /&gt;
 		if [ -n &amp;quot;${r}&amp;quot; ]; then&lt;br /&gt;
 		  #the -n test makes sure that the run number is not an empty string&lt;br /&gt;
 		  #caused by a trailing newline in the runs file&lt;br /&gt;
 		  cd ${source_dir}/${r}&lt;br /&gt;
 		  pwd&lt;br /&gt;
        	  #subject_id does exist. Detrend&lt;br /&gt;
 		  if [ &amp;quot;${surf}&amp;quot; = &amp;quot;self&amp;quot; ]; then&lt;br /&gt;
 			SURFTOUSE=${sub}&lt;br /&gt;
 		  else&lt;br /&gt;
 			SURFTOUSE=fsaverage&lt;br /&gt;
 		  fi&lt;br /&gt;
  &lt;br /&gt;
 		  #this will generate the QA matrix in lh.detrend if needed:&lt;br /&gt;
 		  if [ ! -f ${source_dir}/${r}/lh.detrend/Xg.dat ]; then&lt;br /&gt;
 	                  mri_glmfit --y ${source_dir}/${r}/${filepat}.${surf}.lh.nii.gz \&lt;br /&gt;
         			--glmdir ${source_dir}/${r}/lh.detrend \&lt;br /&gt;
 				--qa \&lt;br /&gt;
 				--surf ${SURFTOUSE} lh&lt;br /&gt;
 		  fi&lt;br /&gt;
 &lt;br /&gt;
 		  #copy and merge the QA, WMCSF and MC matrices&lt;br /&gt;
 		  echo &amp;quot;Generating nuisance regressor matrix for run ${r}&amp;quot;&lt;br /&gt;
 		  paste ${source_dir}/${r}/lh.detrend/Xg.dat wmcsf mcprextreg &amp;gt; nuisance&lt;br /&gt;
  &lt;br /&gt;
 &lt;br /&gt;
 		  for hemi in &amp;quot;${hemis[@]}&amp;quot;; do&lt;br /&gt;
 		  #regress out nuisance&lt;br /&gt;
 			mri_glmfit --y ${source_dir}/${r}/${filepat}.${surf}.${hemi}.nii.gz \&lt;br /&gt;
                        	--glmdir ${source_dir}/${r}/${hemi}.detrend \&lt;br /&gt;
                                --X nuisance \&lt;br /&gt;
 				--no-contrasts-ok --save-yhat --eres-save \&lt;br /&gt;
                                --surf ${SURFTOUSE} ${hemi}&lt;br /&gt;
 				#move and rename detrended files&lt;br /&gt;
 			mv ${source_dir}/${r}/${hemi}.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.${hemi}.mgh&lt;br /&gt;
        	  done&lt;br /&gt;
 &lt;br /&gt;
 		  #now detrend the mni305 file&lt;br /&gt;
                  mri_glmfit --y ${source_dir}/${r}/${filepat}.mni305.2mm.nii.gz \&lt;br /&gt;
                       --glmdir ${source_dir}/${r}/mni305.detrend \&lt;br /&gt;
                       --X nuisance --no-contrasts-ok --save-yhat --eres-save&lt;br /&gt;
                       mv ${source_dir}/${r}/mni305.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.mni305.2mm.mgh&lt;br /&gt;
 		fi&lt;br /&gt;
 	done&lt;br /&gt;
    fi  &lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Using variables ==&lt;br /&gt;
As noted in the comments in the sample code above, what you type depends on the filenames, which in turn depend on how the data were preprocessed. Environment variables can help you work out the correct file patterns. Also, a handy shortcut exists if you happen to have a &amp;lt;code&amp;gt;subjects&amp;lt;/code&amp;gt; file in &amp;lt;code&amp;gt;$SUBJECTS_DIR&amp;lt;/code&amp;gt;. Putting these two techniques together:&lt;br /&gt;
 FILEPATTERN=fmcpr #the preprocessed files will almost always be called &#039;&#039;&#039;fmcpr&#039;&#039;&#039;&lt;br /&gt;
 SMOOTHING=&amp;quot;.sm4&amp;quot; #how much smoothing did you use when you ran preproc-sess?&lt;br /&gt;
 SLICETIME=&amp;quot;.up&amp;quot; #[&amp;quot;.up&amp;quot; | &amp;quot;.down&amp;quot; | &amp;quot;.siemens&amp;quot; | OMIT ]&lt;br /&gt;
 SURFACE=fsaverage #[self | fsaverage]&lt;br /&gt;
 &lt;br /&gt;
 detrend.sh ${FILEPATTERN}${SLICETIME}${SMOOTHING} $SURFACE `cat subjects`&lt;br /&gt;
This will execute the detrend.sh script on all the subjects listed in the subjects text file, using the fsaverage surface. This combination of variables will look for files named &#039;&#039;fmcpr.up.sm4.*&#039;&#039;.&lt;br /&gt;
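&lt;br /&gt;
Before launching detrend.sh, it can be worth checking that the assembled pattern actually matches files on disk, e.g. (the subject FS_T1_501 and run 005 here are illustrative):&lt;br /&gt;
 ls ${SUBJECTS_DIR}/FS_T1_501/bold/005/${FILEPATTERN}${SLICETIME}${SMOOTHING}.${SURFACE}.*&lt;br /&gt;
If ls returns nothing, one of the variables doesn&#039;t match how the data were preprocessed.&lt;br /&gt;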
[[Category: Time Series]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Detrending_FreeSurfer_Data&amp;diff=2278</id>
		<title>Detrending FreeSurfer Data</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Detrending_FreeSurfer_Data&amp;diff=2278"/>
		<updated>2022-09-27T20:19:14Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* detrend_wmcsf.sh */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Over the course of a run, there can be a linear drift in the signal in different regions of the brain. There are many possible causes for this that have nothing to do with any interesting aspect of your data -- in other words, this linear drift is a nuisance artifact. The second step is to remove this signal drift from the data because it can introduce spurious correlations between two unrelated time series. You can see this for yourself in a quick experiment you could whip up in Excel: take two vectors of 100 randomly generated numbers (e.g., randbetween(1,99)). They should be uncorrelated. Now add 1, 2, 3, ... , 99, 100 to the values in each vector. This simulates a linear trend in the data. You shouldn&#039;t be surprised to find that the two vectors are now highly and positively correlated!&lt;br /&gt;
&lt;br /&gt;
A script has been written called detrend.sh that removes the linear trend in your BOLD data. The latest version of this script can be found in /usr/local/sbin (which should be in your $PATH):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt;&#039;&#039;&#039;Update:&#039;&#039;&#039; The script has been modified to also regress out motion parameters from the mcprextreg files. This modification is not (yet) reflected in the code below.&amp;lt;/span&amp;gt;&lt;br /&gt;
==The Script ==&lt;br /&gt;
The most recent version of the BASH script (detrend.sh) is below:&lt;br /&gt;
=== detrend.sh ===&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
 USAGE=&amp;quot;Usage: detrend.sh filepattern surf sub1 ... subN&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 if [ &amp;quot;$#&amp;quot; == &amp;quot;0&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;$USAGE&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
 fi&lt;br /&gt;
 &lt;br /&gt;
 #first parameter is the filepattern for the .nii.gz time series to be detrended, up to the surface indicator&lt;br /&gt;
 #e.g., fmcpr.siemens.sm6.fsaverage.?h.nii.gz would use fmcpr.siemens.sm6  as the filepattern, fsaverage as the surf&lt;br /&gt;
 filepat=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 &lt;br /&gt;
 #second parameter should be specified either as self or fsaverage&lt;br /&gt;
 surf=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 &lt;br /&gt;
 #after the shift commands, all the arguments are shifted down two places and the first &lt;br /&gt;
 #2 arguments (the filepattern and surface) fall off the list. &lt;br /&gt;
 #The remaining arguments should be subject_ids&lt;br /&gt;
 subs=( &amp;quot;$@&amp;quot; );&lt;br /&gt;
 hemis=( &amp;quot;lh&amp;quot; &amp;quot;rh&amp;quot; );&lt;br /&gt;
 &lt;br /&gt;
 for sub in &amp;quot;${subs[@]}&amp;quot;; do&lt;br /&gt;
    source_dir=${SUBJECTS_DIR}/${sub}/bold&lt;br /&gt;
    echo ${source_dir}&lt;br /&gt;
    if [ ! -d ${source_dir} ]; then&lt;br /&gt;
        #The subject_id does not exist&lt;br /&gt;
        echo &amp;quot;${source_dir} does not exist!&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
        cd ${source_dir}&lt;br /&gt;
        readarray -t runs &amp;lt; runs&lt;br /&gt;
        for r in &amp;quot;${runs[@]}&amp;quot;; do&lt;br /&gt;
                if [ -n &amp;quot;${r}&amp;quot; ]; then&lt;br /&gt;
                #the -n test makes sure that the run number is not an empty string&lt;br /&gt;
                #caused by a trailing newline in the runs file&lt;br /&gt;
                        for hemi in &amp;quot;${hemis[@]}&amp;quot;; do&lt;br /&gt;
                                cd ${source_dir}/${r}&lt;br /&gt;
                                pwd&lt;br /&gt;
                                #subject_id does exist. Detrend&lt;br /&gt;
                                if [ &amp;quot;${surf}&amp;quot; = &amp;quot;self&amp;quot; ]; then&lt;br /&gt;
 &lt;br /&gt;
                                        SURFTOUSE=${sub}&lt;br /&gt;
                                else&lt;br /&gt;
                                        SURFTOUSE=fsaverage&lt;br /&gt;
                                fi&lt;br /&gt;
                                mri_glmfit --y ${source_dir}/${r}/${filepat}.${surf}.${hemi}.nii.gz \&lt;br /&gt;
                                --glmdir ${source_dir}/${r}/${hemi}.detrend \&lt;br /&gt;
                                --qa --save-yhat --eres-save \&lt;br /&gt;
                                --surf ${SURFTOUSE} ${hemi}&lt;br /&gt;
                                mv ${source_dir}/${r}/${hemi}.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.${hemi}.mgh&lt;br /&gt;
                        done&lt;br /&gt;
                        #now detrend the mni305 file&lt;br /&gt;
                        mri_glmfit --y ${source_dir}/${r}/${filepat}.mni305.2mm.nii.gz \&lt;br /&gt;
                        --glmdir ${source_dir}/${r}/mni305.detrend \&lt;br /&gt;
                        --qa --save-yhat --eres-save&lt;br /&gt;
                        mv ${source_dir}/${r}/mni305.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.mni305.2mm.mgh&lt;br /&gt;
                fi&lt;br /&gt;
        done&lt;br /&gt;
    fi&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Running the Script ==&lt;br /&gt;
===Before you run these scripts===&lt;br /&gt;
Before running either of the scripts on this page, you will need to create a text file called &#039;runs&#039; in the bold/ directory for each subject&#039;s dataset, e.g.,&lt;br /&gt;
*FS_T1_501/&lt;br /&gt;
**bold/&lt;br /&gt;
***runs&lt;br /&gt;
***005/&lt;br /&gt;
***006/&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;runs&amp;lt;/code&amp;gt; file simply lists each run folder on its own line:&lt;br /&gt;
 005&lt;br /&gt;
 006&lt;br /&gt;
The detrend.sh script uses this file to determine the folders containing the data to be detrended. If this file doesn&#039;t already exist, you can manually generate it in any text editor (e.g., &amp;lt;code&amp;gt;nano runs&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;gedit runs&amp;lt;/code&amp;gt;), but the quickest method takes advantage of the fact that the run folders all start with 0, and uses common command-line utilities:&lt;br /&gt;
&lt;br /&gt;
 SUBJECT=FS_501&lt;br /&gt;
 cd ${SUBJECTS_DIR}/${SUBJECT}/bold&lt;br /&gt;
 #if there are fewer than 10 runs of BOLD data, then the run directories will probably have 2 leading zeros&lt;br /&gt;
 ls -1 | grep &amp;quot;^00*&amp;quot; &amp;gt; runs&lt;br /&gt;
===Example script===&lt;br /&gt;
Assuming all your subject folders have the same run folders to detrend, you would detrend multiple subjects using detrend.sh, specifying a file pattern for the source data (i.e., the name of the preprocessed files generated by FS-FAST, omitting anything after the &#039;?h&#039; hemisphere identifier), followed by a list of subject IDs:&lt;br /&gt;
 #A SPECIFIC EXAMPLE: (note these parameters may differ &#039;&#039;&#039;substantially&#039;&#039;&#039; from what you would be typing in)&lt;br /&gt;
 detrend.sh fmcpr.sm6 self FS_T1_501 FS_T2_501 FS_T1_505 FS_T2_505&lt;br /&gt;
 #For a more generalizable example of how you should call this function, see the section below using variables&lt;br /&gt;
The gist is that it calls the mri_glmfit function and saves the residuals after the linear trend has been removed from the data. Multiple files are generated in ?h.detrend/ directories in each run directory. The detrended data is subsequently copied back to the run directory as a new file called ${filepat}.?h.mgh, where ${filepat} is whatever file pattern you provided to the script (note that the source data are .nii.gz files, whereas the detrended data are .mgh files).&lt;br /&gt;
&lt;br /&gt;
==White Matter and CSF Signal==&lt;br /&gt;
It is among best practices in functional connectivity studies to regress out WM and CSF signals from your time series before computing time series correlations, at least for resting state studies (Muschelli et al (2014)). We can add WM and CSF signal to the set of nuisance regressors when detrending our data (note that the procedure below removes the mean WM + CSF as a single component; it can be modified to remove WM and CSF independently):&lt;br /&gt;
 &lt;br /&gt;
===Create a configuration file===&lt;br /&gt;
An answer to a question I posted to the FreeSurfer mailing list pointed me to [http://surfer.nmr.mgh.harvard.edu/fswiki/FsFastFunctionalConnectivityWalkthrough a walkthrough] for functional connectivity analyses in FreeSurfer is the lead that I&#039;m following. We will be using fcseed-sess to pull out our mean signal from seed regions. In our case, our seed regions will be from the white matter and csf masks. It&#039;s a 2-step process. The first step is to create a configuration file:&lt;br /&gt;
 fcseed-config -cfg wmcsf.cfg -wm -vcsf -fsd bold -mean&lt;br /&gt;
This assumes that the BOLD data are in ${SUBJECTS_DIR}/${SUBID}/&#039;&#039;&#039;bold&#039;&#039;&#039; (-fsd), which is likely going to be the case if you&#039;ve been following the FreeSurfer pipeline on this wiki. I ran the above step in ${SUBJECTS_DIR}.&lt;br /&gt;
&lt;br /&gt;
===Computing WM and CSF Regressor Values===&lt;br /&gt;
Now you will have a configuration file, &#039;&#039;&#039;wmcsf.cfg&#039;&#039;&#039; that you will pass to the next step that performs the math on the fmri data:&lt;br /&gt;
 fcseed-sess -s ${SUBID} -cfg wmcsf.cfg&lt;br /&gt;
or&lt;br /&gt;
 fcseed-sess -sf &#039;&#039;subjects&#039;&#039; -cfg wmcsf.cfg&lt;br /&gt;
&lt;br /&gt;
This will use the config file just created to extract a seed regressor for ${SUBID} (or for all subjects listed in your &#039;&#039;subjects&#039;&#039; file if you use the -sf switch to run in batch mode). You&#039;ll find a file called &#039;&#039;&#039;wmcsf&#039;&#039;&#039; in each of your BOLD/run directories. This file is a plaintext file with one column of values that you can use as a nuisance regressor. The detrend script could be modified to first run fcseed-sess to generate this value to include in the nuisance regressor matrix.&lt;br /&gt;
&lt;br /&gt;
If you wanted to remove WM and CSF as two separate components, you would need to generate two separate .cfg files, and then run fcseed-sess twice.&lt;br /&gt;
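A sketch of what that would look like, modeled on the single-component commands above (the &amp;lt;code&amp;gt;-fcname&amp;lt;/code&amp;gt; labels here are arbitrary; check &amp;lt;code&amp;gt;fcseed-config -help&amp;lt;/code&amp;gt; for the defaults on your FreeSurfer version):&lt;br /&gt;
 fcseed-config -cfg wm.cfg -wm -fcname wm.dat -fsd bold -mean&lt;br /&gt;
 fcseed-config -cfg vcsf.cfg -vcsf -fcname vcsf.dat -fsd bold -mean&lt;br /&gt;
 fcseed-sess -s ${SUBID} -cfg wm.cfg&lt;br /&gt;
 fcseed-sess -s ${SUBID} -cfg vcsf.cfg&lt;br /&gt;
Each run directory would then contain separate &#039;&#039;&#039;wm.dat&#039;&#039;&#039; and &#039;&#039;&#039;vcsf.dat&#039;&#039;&#039; files that could be pasted into the nuisance matrix as two columns.&lt;br /&gt;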
&lt;br /&gt;
===Scripting===&lt;br /&gt;
These steps have been combined with the detrend.sh script above to remove linear trends, motion parameters, and WM+CSF signal (as a single component). You will find detrend_wmcsf.sh in the ubfs/Scripts/Shell folder; the script is also reproduced below:&lt;br /&gt;
====detrend_wmcsf.sh====&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 USAGE=&amp;quot;Usage: detrend_wmcsf.sh filepattern surf sub1 ... subN&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 if [ &amp;quot;$#&amp;quot; == &amp;quot;0&amp;quot; ]; then&lt;br /&gt;
 	echo &amp;quot;$USAGE&amp;quot;&lt;br /&gt;
 	exit 1&lt;br /&gt;
 fi&lt;br /&gt;
 #first parameter is the filepattern for the .nii.gz time series to be detrended, up to the surface indicator &lt;br /&gt;
 #e.g., fmcpr.siemens.sm6.fsaverage.?h.nii.gz would use fmcpr.siemens.sm6  as the filepattern, fsaverage as the surf&lt;br /&gt;
 filepat=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 #second parameter should be specified either as self or fsaverage&lt;br /&gt;
 surf=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
  &lt;br /&gt;
 #after the shift commands, all the arguments are shifted down two places and the first&lt;br /&gt;
 #2 arguments (the filepattern and surface) fall off the list. The remaining arguments should be subject_ids&lt;br /&gt;
 subs=( &amp;quot;$@&amp;quot; );&lt;br /&gt;
 hemis=( &amp;quot;lh&amp;quot; &amp;quot;rh&amp;quot; );&lt;br /&gt;
 &lt;br /&gt;
 #Make sure we&#039;re in SUBJECTS_DIR to start so that we can make some assumptions about relative file locations&lt;br /&gt;
 cd ${SUBJECTS_DIR}&lt;br /&gt;
 &lt;br /&gt;
 #first, create configuration file for the wm &amp;amp; csf regression&lt;br /&gt;
 echo &amp;quot;Making fcseed config file...&amp;quot;&lt;br /&gt;
 fcseed-config -cfg wmcsf.cfg -wm -vcsf -fsd bold -mean&lt;br /&gt;
 echo &amp;quot;done&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;Processing subjects...&amp;quot;&lt;br /&gt;
 for sub in &amp;quot;${subs[@]}&amp;quot;&lt;br /&gt;
 do&lt;br /&gt;
    source_dir=${SUBJECTS_DIR}/${sub}/bold&lt;br /&gt;
    echo ${source_dir}&lt;br /&gt;
    if [ ! -d ${source_dir} ]; then&lt;br /&gt;
        #The subject_id does not exist&lt;br /&gt;
        echo &amp;quot;${source_dir} does not exist!&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
 	#The subject bold directory exists. Proceed.&lt;br /&gt;
        fcseed-sess -s ${sub} -cfg ${SUBJECTS_DIR}/wmcsf.cfg&lt;br /&gt;
 	echo &amp;quot;Completed wm &amp;amp; csf nuisance regressor calculation for ${sub}&amp;quot;&lt;br /&gt;
 	cd ${source_dir}&lt;br /&gt;
 &lt;br /&gt;
 	#readarray -t runlist &amp;lt; runs   #fine on Linux, but the stock macOS Bash (3.2) does not have readarray&lt;br /&gt;
 &lt;br /&gt;
 	#This is a MacOS-compatible alternative to readarray that also works on Linux.&lt;br /&gt;
 	#Reset runlist first so that runs from a previous subject don&#039;t carry over between loop iterations.&lt;br /&gt;
 	runlist=()&lt;br /&gt;
 	while IFS= read -r r; do&lt;br /&gt;
 		runlist+=(&amp;quot;$r&amp;quot;)&lt;br /&gt;
 	done &amp;lt; runs&lt;br /&gt;
 	for r in &amp;quot;${runlist[@]}&amp;quot;; do&lt;br /&gt;
 		if [ -n &amp;quot;${r}&amp;quot; ]; then&lt;br /&gt;
 		  #the -n test makes sure that the run number is not an empty string&lt;br /&gt;
 		  #caused by a trailing newline in the runs file&lt;br /&gt;
 		  cd ${source_dir}/${r}&lt;br /&gt;
 		  pwd&lt;br /&gt;
        	  #subject_id does exist. Detrend&lt;br /&gt;
 		  if [ &amp;quot;${surf}&amp;quot; = &amp;quot;self&amp;quot; ]; then&lt;br /&gt;
 			SURFTOUSE=${sub}&lt;br /&gt;
 		  else&lt;br /&gt;
 			SURFTOUSE=fsaverage&lt;br /&gt;
 		  fi&lt;br /&gt;
  &lt;br /&gt;
 		  #this will generate the QA matrix in lh.detrend if needed:&lt;br /&gt;
 		  if [ ! -f ${source_dir}/${r}/lh.detrend/Xg.dat ]; then&lt;br /&gt;
 	                  mri_glmfit --y ${source_dir}/${r}/${filepat}.${surf}.lh.nii.gz \&lt;br /&gt;
         			--glmdir ${source_dir}/${r}/lh.detrend \&lt;br /&gt;
 				--qa \&lt;br /&gt;
 				--surf ${SURFTOUSE} lh&lt;br /&gt;
 		  fi&lt;br /&gt;
 &lt;br /&gt;
 		  #copy and merge the QA, WMCSF and MC matrices&lt;br /&gt;
 		  echo &amp;quot;Generating nuisance regressor matrix for run ${r}&amp;quot;&lt;br /&gt;
 		  paste ${source_dir}/${r}/lh.detrend/Xg.dat wmcsf mcprextreg &amp;gt; nuisance&lt;br /&gt;
  &lt;br /&gt;
 &lt;br /&gt;
 		  for hemi in &amp;quot;${hemis[@]}&amp;quot;; do&lt;br /&gt;
 		  #regress out nuisance&lt;br /&gt;
 			mri_glmfit --y ${source_dir}/${r}/${filepat}.${surf}.${hemi}.nii.gz \&lt;br /&gt;
                        	--glmdir ${source_dir}/${r}/${hemi}.detrend \&lt;br /&gt;
                                --X nuisance \&lt;br /&gt;
 				--no-contrasts-ok --save-yhat --eres-save \&lt;br /&gt;
                                --surf ${SURFTOUSE} ${hemi}&lt;br /&gt;
 				#move and rename detrended files&lt;br /&gt;
 			mv ${source_dir}/${r}/${hemi}.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.${hemi}.mgh&lt;br /&gt;
        	  done&lt;br /&gt;
 &lt;br /&gt;
 		  #now detrend the mni305 file&lt;br /&gt;
                  mri_glmfit --y ${source_dir}/${r}/${filepat}.mni305.2mm.nii.gz \&lt;br /&gt;
                       --glmdir ${source_dir}/${r}/mni305.detrend \&lt;br /&gt;
                       --X nuisance --no-contrasts-ok --save-yhat --eres-save&lt;br /&gt;
                       mv ${source_dir}/${r}/mni305.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.mni305.2mm.mgh&lt;br /&gt;
 		fi&lt;br /&gt;
 	done&lt;br /&gt;
    fi  &lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Using variables ==&lt;br /&gt;
As I mentioned in the comments in the sample code snippet above, what you type depends entirely on the filenames, which in turn depend entirely on how the data were preprocessed. You can use environment variables to help walk you through figuring out the correct file patterns. Also, a handy shortcut exists if you happen to have a &amp;lt;code&amp;gt;subjects&amp;lt;/code&amp;gt; file in &amp;lt;code&amp;gt;$SUBJECTS_DIR&amp;lt;/code&amp;gt;. Putting these two techniques together:&lt;br /&gt;
 FILEPATTERN=fmcpr #the preprocessed files will almost always be called &#039;&#039;&#039;fmcpr&#039;&#039;&#039;&lt;br /&gt;
 SMOOTHING=&amp;quot;.sm4&amp;quot; #how much smoothing did you use when you ran preproc-sess?&lt;br /&gt;
 SLICETIME=&amp;quot;.up&amp;quot; #[&amp;quot;.up&amp;quot; | &amp;quot;.down&amp;quot; | &amp;quot;.siemens&amp;quot; | OMIT ]&lt;br /&gt;
 SURFACE=fsaverage #[self | fsaverage]&lt;br /&gt;
 &lt;br /&gt;
 detrend.sh ${FILEPATTERN}${SLICETIME}${SMOOTHING} $SURFACE `cat subjects`&lt;br /&gt;
This will execute the detrend.sh script on all the subjects listed in the subjects text file, using the fsaverage surface. This combination of variables will look for files named &#039;&#039;fmcpr.up.sm4.*&#039;&#039;.&lt;br /&gt;
[[Category: Time Series]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Detrending_FreeSurfer_Data&amp;diff=2277</id>
		<title>Detrending FreeSurfer Data</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Detrending_FreeSurfer_Data&amp;diff=2277"/>
		<updated>2022-09-27T19:59:47Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Running the Script */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Over the course of a run, there can be a linear drift in the signal in different regions of the brain. There are many possible causes for this that have nothing to do with any interesting aspect of your data -- in other words, this linear drift is a nuisance artifact. The second step is to remove this signal drift from the data because it can introduce spurious correlations between two unrelated time series. You can see this for yourself in a quick experiment you could whip up in Excel: take two vectors of 100 randomly generated numbers (e.g., randbetween(1,99)). They should be uncorrelated. Now add 1, 2, 3, ... , 99, 100 to the values in each vector. This simulates a linear trend in the data. You shouldn&#039;t be surprised to find that the two vectors are now highly and positively correlated!&lt;br /&gt;
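You can run the same demonstration at the terminal with &amp;lt;code&amp;gt;awk&amp;lt;/code&amp;gt; (a quick sketch; the exact values will vary with the random seed):&lt;br /&gt;
 awk &#039;BEGIN {&lt;br /&gt;
   srand(1); n = 100;&lt;br /&gt;
   for (i = 1; i &amp;lt;= n; i++) { x[i] = rand(); y[i] = rand(); }&lt;br /&gt;
   print &amp;quot;r without trend: &amp;quot; corr(x, y, n);&lt;br /&gt;
   #now add the linear trend 1, 2, ..., n to both vectors&lt;br /&gt;
   for (i = 1; i &amp;lt;= n; i++) { x[i] += i; y[i] += i; }&lt;br /&gt;
   print &amp;quot;r with trend:    &amp;quot; corr(x, y, n);&lt;br /&gt;
 }&lt;br /&gt;
 #Pearson correlation of arrays a and b&lt;br /&gt;
 function corr(a, b, n,   i, sa, sb, saa, sbb, sab) {&lt;br /&gt;
   for (i = 1; i &amp;lt;= n; i++) {&lt;br /&gt;
     sa += a[i]; sb += b[i]; saa += a[i]*a[i]; sbb += b[i]*b[i]; sab += a[i]*b[i]&lt;br /&gt;
   }&lt;br /&gt;
   return (n*sab - sa*sb) / sqrt((n*saa - sa*sa) * (n*sbb - sb*sb))&lt;br /&gt;
 }&#039;&lt;br /&gt;
The first correlation should be near zero; the second should be nearly 1, purely because of the shared trend.&lt;br /&gt;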
&lt;br /&gt;
A script has been written called detrend.sh that removes the linear trend in your BOLD data. The latest version of this script can be found in /usr/local/sbin (which should be in your $PATH):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt;&#039;&#039;&#039;Update:&#039;&#039;&#039; The script has been modified to also regress out motion parameters from the mcprextreg files. This modification is not (yet) reflected in the code below.&amp;lt;/span&amp;gt;&lt;br /&gt;
==The Script ==&lt;br /&gt;
The most recent version of the BASH script (detrend.sh) is below:&lt;br /&gt;
=== detrend.sh ===&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 USAGE=&amp;quot;Usage: detrend.sh filepattern surf sub1 ... subN&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 if [ &amp;quot;$#&amp;quot; == &amp;quot;0&amp;quot; ]; then&lt;br /&gt;
        echo &amp;quot;$USAGE&amp;quot;&lt;br /&gt;
        exit 1&lt;br /&gt;
 fi&lt;br /&gt;
 &lt;br /&gt;
 #first parameter is the filepattern for the .nii.gz time series to be detrended, up to the surface indicator&lt;br /&gt;
 #e.g., fmcpr.siemens.sm6.fsaverage.?h.nii.gz would use fmcpr.siemens.sm6  as the filepattern, fsaverage as the surf&lt;br /&gt;
 filepat=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 &lt;br /&gt;
 #second parameter should be specified either as self or fsaverage&lt;br /&gt;
 surf=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 &lt;br /&gt;
 #after the shift commands, all the arguments are shifted down two places and the first &lt;br /&gt;
 #2 arguments (the filepattern and surface) fall off the list. &lt;br /&gt;
 #The remaining arguments should be subject_ids&lt;br /&gt;
 subs=( &amp;quot;$@&amp;quot; );&lt;br /&gt;
 hemis=( &amp;quot;lh&amp;quot; &amp;quot;rh&amp;quot; );&lt;br /&gt;
 &lt;br /&gt;
 for sub in &amp;quot;${subs[@]}&amp;quot;; do&lt;br /&gt;
    source_dir=${SUBJECTS_DIR}/${sub}/bold&lt;br /&gt;
    echo ${source_dir}&lt;br /&gt;
    if [ ! -d ${source_dir} ]; then&lt;br /&gt;
        #The subject_id does not exist&lt;br /&gt;
        echo &amp;quot;${source_dir} does not exist!&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
        cd ${source_dir}&lt;br /&gt;
        readarray -t runs &amp;lt; runs&lt;br /&gt;
        for r in &amp;quot;${runs[@]}&amp;quot;; do&lt;br /&gt;
                if [ -n &amp;quot;${r}&amp;quot; ]; then&lt;br /&gt;
                #the -n test makes sure that the run number is not an empty string&lt;br /&gt;
                #caused by a trailing newline in the runs file&lt;br /&gt;
                        for hemi in &amp;quot;${hemis[@]}&amp;quot;; do&lt;br /&gt;
                                cd ${source_dir}/${r}&lt;br /&gt;
                                pwd&lt;br /&gt;
                                #subject_id does exist. Detrend&lt;br /&gt;
                                if [ &amp;quot;${surf}&amp;quot; = &amp;quot;self&amp;quot; ]; then&lt;br /&gt;
 &lt;br /&gt;
                                        SURFTOUSE=${sub}&lt;br /&gt;
                                else&lt;br /&gt;
                                        SURFTOUSE=fsaverage&lt;br /&gt;
                                fi&lt;br /&gt;
                                mri_glmfit --y ${source_dir}/${r}/${filepat}.${surf}.${hemi}.nii.gz \&lt;br /&gt;
                                --glmdir ${source_dir}/${r}/${hemi}.detrend \&lt;br /&gt;
                                --qa --save-yhat --eres-save \&lt;br /&gt;
                                --surf ${SURFTOUSE} ${hemi}&lt;br /&gt;
                                mv ${source_dir}/${r}/${hemi}.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.${hemi}.mgh&lt;br /&gt;
                        done&lt;br /&gt;
                        #now detrend the mni305 file&lt;br /&gt;
                        mri_glmfit --y ${source_dir}/${r}/${filepat}.mni305.2mm.nii.gz \&lt;br /&gt;
                        --glmdir ${source_dir}/${r}/mni305.detrend \&lt;br /&gt;
                        --qa --save-yhat --eres-save&lt;br /&gt;
                        mv ${source_dir}/${r}/mni305.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.mni305.2mm.mgh&lt;br /&gt;
                fi&lt;br /&gt;
        done&lt;br /&gt;
    fi&lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Running the Script ==&lt;br /&gt;
===Before you run these scripts===&lt;br /&gt;
Before running either of the scripts on this page, you will need to create a text file called &#039;runs&#039; in the bold/ directory for each subject&#039;s dataset, e.g.,&lt;br /&gt;
*FS_T1_501/&lt;br /&gt;
**bold/&lt;br /&gt;
***runs&lt;br /&gt;
***005/&lt;br /&gt;
***006/&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;runs&amp;lt;/code&amp;gt; file simply lists each run folder on its own line:&lt;br /&gt;
 005&lt;br /&gt;
 006&lt;br /&gt;
The detrend.sh script uses this file to determine the folders containing the data to be detrended. If this file doesn&#039;t already exist, you can manually generate it in any text editor (e.g., &amp;lt;code&amp;gt;nano runs&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;gedit runs&amp;lt;/code&amp;gt;), but the quickest method takes advantage of the fact that the run folders all start with 0, and uses common command-line utilities:&lt;br /&gt;
&lt;br /&gt;
 SUBJECT=FS_501&lt;br /&gt;
 cd ${SUBJECTS_DIR}/${SUBJECT}/bold&lt;br /&gt;
 #if there are fewer than 10 runs of BOLD data, then the run directories will probably have 2 leading zeros&lt;br /&gt;
 ls -1 | grep &amp;quot;^00*&amp;quot; &amp;gt; runs&lt;br /&gt;
===Example script===&lt;br /&gt;
Assuming all your subject folders have the same run folders to detrend, you would detrend multiple subjects using detrend.sh, specifying a file pattern for the source data (i.e., the name of the preprocessed files generated by FS-FAST, omitting anything after the &#039;?h&#039; hemisphere identifier), followed by a list of subject IDs:&lt;br /&gt;
 #A SPECIFIC EXAMPLE: (note these parameters may differ &#039;&#039;&#039;substantially&#039;&#039;&#039; from what you would be typing in)&lt;br /&gt;
 detrend.sh fmcpr.sm6 self FS_T1_501 FS_T2_501 FS_T1_505 FS_T2_505&lt;br /&gt;
 #For a more generalizable example of how you should call this function, see the section below using variables&lt;br /&gt;
The gist is that it calls the mri_glmfit function and saves the residuals after the linear trend has been removed from the data. Multiple files are generated in ?h.detrend/ directories in each run directory. The detrended data is subsequently copied back to the run directory as a new file called ${filepat}.?h.mgh, where ${filepat} is whatever file pattern you provided to the script (note that the source data are .nii.gz files, whereas the detrended data are .mgh files).&lt;br /&gt;
&lt;br /&gt;
==White Matter and CSF Signal==&lt;br /&gt;
It is considered best practice in functional connectivity studies, at least for resting-state studies, to regress out WM and CSF signals from your time series before computing time series correlations (Muschelli et al., 2014). We can add WM and CSF signal to the set of nuisance regressors when detrending our data (note that the procedure below removes the mean WM + CSF as a single component; it can be modified to remove WM and CSF independently):&lt;br /&gt;
 &lt;br /&gt;
===Create a configuration file===&lt;br /&gt;
An answer to a question I posted to the FreeSurfer mailing list pointed me to [http://surfer.nmr.mgh.harvard.edu/fswiki/FsFastFunctionalConnectivityWalkthrough a walkthrough] for functional connectivity analyses in FreeSurfer, which is the approach followed here. We will use fcseed-sess to extract the mean signal from seed regions; in our case, the seed regions are the white matter and CSF masks. It&#039;s a two-step process. The first step is to create a configuration file:&lt;br /&gt;
 fcseed-config -cfg wmcsf.cfg -wm -vcsf -fsd bold -mean&lt;br /&gt;
This assumes that the BOLD data are in ${SUBJECTS_DIR}/${SUBID}/&#039;&#039;&#039;bold&#039;&#039;&#039; (-fsd), which is likely going to be the case if you&#039;ve been following the FreeSurfer pipeline on this wiki. I ran the above step in ${SUBJECTS_DIR}.&lt;br /&gt;
&lt;br /&gt;
===Computing WM and CSF Regressor Values===&lt;br /&gt;
Now you will have a configuration file, &#039;&#039;&#039;wmcsf.cfg&#039;&#039;&#039; that you will pass to the next step that performs the math on the fmri data:&lt;br /&gt;
 fcseed-sess -s ${SUBID} -cfg wmcsf.cfg&lt;br /&gt;
or&lt;br /&gt;
 fcseed-sess -sf &#039;&#039;subjects&#039;&#039; -cfg wmcsf.cfg&lt;br /&gt;
&lt;br /&gt;
This will use the config file just created to extract a seed regressor for ${SUBID} (or for all subjects listed in your &#039;&#039;subjects&#039;&#039; file if you use the -sf switch to run in batch mode). You&#039;ll find a file called &#039;&#039;&#039;wmcsf&#039;&#039;&#039; in each of your BOLD/run directories. This file is a plaintext file with one column of values that you can use as a nuisance regressor. The detrend script could be modified to first run fcseed-sess to generate this value to include in the nuisance regressor matrix.&lt;br /&gt;
&lt;br /&gt;
If you wanted to remove WM and CSF as two separate components, you would need to generate two separate .cfg files, and then run fcseed-sess twice.&lt;br /&gt;
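A sketch of what that would look like, modeled on the single-component commands above (the &amp;lt;code&amp;gt;-fcname&amp;lt;/code&amp;gt; labels here are arbitrary; check &amp;lt;code&amp;gt;fcseed-config -help&amp;lt;/code&amp;gt; for the defaults on your FreeSurfer version):&lt;br /&gt;
 fcseed-config -cfg wm.cfg -wm -fcname wm.dat -fsd bold -mean&lt;br /&gt;
 fcseed-config -cfg vcsf.cfg -vcsf -fcname vcsf.dat -fsd bold -mean&lt;br /&gt;
 fcseed-sess -s ${SUBID} -cfg wm.cfg&lt;br /&gt;
 fcseed-sess -s ${SUBID} -cfg vcsf.cfg&lt;br /&gt;
Each run directory would then contain separate &#039;&#039;&#039;wm.dat&#039;&#039;&#039; and &#039;&#039;&#039;vcsf.dat&#039;&#039;&#039; files that could be pasted into the nuisance matrix as two columns.&lt;br /&gt;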
&lt;br /&gt;
===Scripting===&lt;br /&gt;
These steps have been combined with the detrend.sh script above to remove linear trends, motion parameters, and WM+CSF signal (as a single component). You will find detrend_wmcsf.sh in the ubfs/Scripts/Shell folder; the script is also reproduced below:&lt;br /&gt;
====detrend_wmcsf.sh====&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 USAGE=&amp;quot;Usage: detrend_wmcsf.sh filepattern surf sub1 ... subN&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 if [ &amp;quot;$#&amp;quot; == &amp;quot;0&amp;quot; ]; then&lt;br /&gt;
 	echo &amp;quot;$USAGE&amp;quot;&lt;br /&gt;
 	exit 1&lt;br /&gt;
 fi&lt;br /&gt;
 #first parameter is the filepattern for the .nii.gz time series to be detrended, up to the surface indicator &lt;br /&gt;
 #e.g., fmcpr.siemens.sm6.fsaverage.?h.nii.gz would use fmcpr.siemens.sm6  as the filepattern, fsaverage as the surf&lt;br /&gt;
 filepat=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
 #second parameter should be specified either as self or fsaverage&lt;br /&gt;
 surf=&amp;quot;$1&amp;quot;&lt;br /&gt;
 shift&lt;br /&gt;
  &lt;br /&gt;
 #after the shift commands, all the arguments are shifted down two places and the first&lt;br /&gt;
 #2 arguments (the filepattern and surface) fall off the list. The remaining arguments should be subject_ids&lt;br /&gt;
 subs=( &amp;quot;$@&amp;quot; );&lt;br /&gt;
 hemis=( &amp;quot;lh&amp;quot; &amp;quot;rh&amp;quot; );&lt;br /&gt;
 &lt;br /&gt;
 #Make sure we&#039;re in SUBJECTS_DIR to start so that we can make some assumptions about relative file locations&lt;br /&gt;
 cd ${SUBJECTS_DIR}&lt;br /&gt;
 &lt;br /&gt;
 #first, create configuration file for the wm &amp;amp; csf regression&lt;br /&gt;
 echo &amp;quot;Making fcseed config file...&amp;quot;&lt;br /&gt;
 fcseed-config -cfg wmcsf.cfg -wm -vcsf -fsd bold -mean&lt;br /&gt;
 echo &amp;quot;done&amp;quot;&lt;br /&gt;
 &lt;br /&gt;
 echo &amp;quot;Processing subjects...&amp;quot;&lt;br /&gt;
 for sub in &amp;quot;${subs[@]}&amp;quot;&lt;br /&gt;
 do&lt;br /&gt;
    source_dir=${SUBJECTS_DIR}/${sub}/bold&lt;br /&gt;
    echo ${source_dir}&lt;br /&gt;
    if [ ! -d ${source_dir} ]; then&lt;br /&gt;
        #The subject_id does not exist&lt;br /&gt;
        echo &amp;quot;${source_dir} does not exist!&amp;quot;&lt;br /&gt;
    else&lt;br /&gt;
 	#The subject bold directory exists. Proceed.&lt;br /&gt;
        fcseed-sess -s ${sub} -cfg ${SUBJECTS_DIR}/wmcsf.cfg&lt;br /&gt;
 	echo &amp;quot;Completed wm &amp;amp; csf nuisance regressor calculation for ${sub}&amp;quot;&lt;br /&gt;
 	cd ${source_dir}&lt;br /&gt;
 &lt;br /&gt;
 	#readarray -t runlist &amp;lt; runs   #fine on Linux, but the stock macOS Bash (3.2) does not have readarray&lt;br /&gt;
 &lt;br /&gt;
 	#This is a MacOS-compatible alternative to readarray that also works on Linux.&lt;br /&gt;
 	#Reset runlist first so that runs from a previous subject don&#039;t carry over between loop iterations.&lt;br /&gt;
 	runlist=()&lt;br /&gt;
 	while IFS= read -r r; do&lt;br /&gt;
 		runlist+=(&amp;quot;$r&amp;quot;)&lt;br /&gt;
 	done &amp;lt; runs&lt;br /&gt;
 	for r in &amp;quot;${runlist[@]}&amp;quot;; do&lt;br /&gt;
 		if [ -n &amp;quot;${r}&amp;quot; ]; then&lt;br /&gt;
 		  #the -n test makes sure that the run number is not an empty string&lt;br /&gt;
 		  #caused by a trailing newline in the runs file&lt;br /&gt;
 		  cd ${source_dir}/${r}&lt;br /&gt;
 		  pwd&lt;br /&gt;
        	  #subject_id does exist. Detrend&lt;br /&gt;
 		  if [ &amp;quot;${surf}&amp;quot; = &amp;quot;self&amp;quot; ]; then&lt;br /&gt;
 			SURFTOUSE=${sub}&lt;br /&gt;
 		  else&lt;br /&gt;
 			SURFTOUSE=fsaverage&lt;br /&gt;
 		  fi&lt;br /&gt;
  &lt;br /&gt;
 		  #this will generate the QA matrix in lh.detrend if needed:&lt;br /&gt;
 		  if [ ! -f ${source_dir}/${r}/lh.detrend/Xg.dat ]; then&lt;br /&gt;
 	                  mri_glmfit --y ${source_dir}/${r}/${filepat}.${surf}.lh.nii.gz \&lt;br /&gt;
         			--glmdir ${source_dir}/${r}/lh.detrend \&lt;br /&gt;
 				--qa \&lt;br /&gt;
 				--surf ${SURFTOUSE} lh&lt;br /&gt;
 		  fi&lt;br /&gt;
 &lt;br /&gt;
 		  #copy and merge the QA, WMCSF and MC matrices&lt;br /&gt;
 		  echo &amp;quot;Generating nuisance regressor matrix for run ${r}&amp;quot;&lt;br /&gt;
 		  paste ${source_dir}/${r}/lh.detrend/Xg.dat wmcsf mcprextreg &amp;gt; nuisance&lt;br /&gt;
  &lt;br /&gt;
 &lt;br /&gt;
 		  for hemi in &amp;quot;${hemis[@]}&amp;quot;; do&lt;br /&gt;
 		  #regress out nuisance&lt;br /&gt;
 			mri_glmfit --y ${source_dir}/${r}/${filepat}.${surf}.${hemi}.nii.gz \&lt;br /&gt;
                        	--glmdir ${source_dir}/${r}/${hemi}.detrend \&lt;br /&gt;
                                --X nuisance \&lt;br /&gt;
 				--no-contrasts-ok --save-yhat --eres-save \&lt;br /&gt;
                                --surf ${SURFTOUSE} ${hemi}&lt;br /&gt;
 				#move and rename detrended files&lt;br /&gt;
 			mv ${source_dir}/${r}/${hemi}.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.${hemi}.mgh&lt;br /&gt;
        	  done&lt;br /&gt;
 &lt;br /&gt;
 		  #now detrend the mni305 file&lt;br /&gt;
                  mri_glmfit --y ${source_dir}/${r}/${filepat}.mni305.2mm.nii.gz \&lt;br /&gt;
                       --glmdir ${source_dir}/${r}/mni305.detrend \&lt;br /&gt;
                       --X nuisance --no-contrasts-ok --save-yhat --eres-save&lt;br /&gt;
                       mv ${source_dir}/${r}/mni305.detrend/eres.mgh ${source_dir}/${r}/${filepat}.${surf}.mni305.2mm.mgh&lt;br /&gt;
 		fi&lt;br /&gt;
 	done&lt;br /&gt;
    fi  &lt;br /&gt;
 done&lt;br /&gt;
&lt;br /&gt;
== Using variables ==&lt;br /&gt;
As I mentioned in the comments in the sample code snippet above, what you type depends entirely on the filenames, which in turn depend entirely on how the data were preprocessed. You can use environment variables to help walk you through figuring out the correct file patterns. Also, a handy shortcut exists if you happen to have a &amp;lt;code&amp;gt;subjects&amp;lt;/code&amp;gt; file in &amp;lt;code&amp;gt;$SUBJECTS_DIR&amp;lt;/code&amp;gt;. Putting these two techniques together:&lt;br /&gt;
 FILEPATTERN=fmcpr #the preprocessed files will almost always be called &#039;&#039;&#039;fmcpr&#039;&#039;&#039;&lt;br /&gt;
 SMOOTHING=&amp;quot;.sm4&amp;quot; #how much smoothing did you use when you ran preproc-sess?&lt;br /&gt;
 SLICETIME=&amp;quot;.up&amp;quot; #[&amp;quot;.up&amp;quot; | &amp;quot;.down&amp;quot; | &amp;quot;.siemens&amp;quot; | OMIT ]&lt;br /&gt;
 SURFACE=fsaverage #[self | fsaverage]&lt;br /&gt;
 &lt;br /&gt;
 detrend.sh ${FILEPATTERN}${SLICETIME}${SMOOTHING} $SURFACE `cat subjects`&lt;br /&gt;
This will execute the detrend.sh script on all the subjects listed in the subjects text file, using the fsaverage surface. This combination of variables will look for files named &#039;&#039;fmcpr.up.sm4.*&#039;&#039;.&lt;br /&gt;
[[Category: Time Series]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=BASH_Tricks&amp;diff=2276</id>
		<title>BASH Tricks</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=BASH_Tricks&amp;diff=2276"/>
		<updated>2022-09-27T19:57:26Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Make a series of numbered directories */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Text File Batch==&lt;br /&gt;
Suppose we have a list of items to process -- for example, all the entries in the &amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt;subjects&amp;lt;/span&amp;gt; file. We want to use each line of the file in a loop:&lt;br /&gt;
 while read s; do&lt;br /&gt;
   echo &amp;quot;$s&amp;quot;;&lt;br /&gt;
 done &amp;amp;lt;&amp;lt;span style=&amp;quot;color:red;&amp;quot;&amp;gt;subjects&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How many lines in my text file? ==&lt;br /&gt;
Totally useful when you have some kind of training file with many rows and columns:&lt;br /&gt;
 FILENAME=myfile.csv&lt;br /&gt;
 nl ${FILENAME} | awk &#039;{ print $1 }&#039;&lt;br /&gt;
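If you just want the total count (without &amp;lt;code&amp;gt;nl&amp;lt;/code&amp;gt;&#039;s per-line numbering), &amp;lt;code&amp;gt;wc&amp;lt;/code&amp;gt; is simpler; redirecting the file in (rather than passing the filename as an argument) prints the count alone, without the filename:&lt;br /&gt;
 wc -l &amp;lt; ${FILENAME}&lt;br /&gt;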
&lt;br /&gt;
== I want to drop the first line of my text file ==&lt;br /&gt;
&amp;lt;code&amp;gt;tail&amp;lt;/code&amp;gt; echoes the last n lines (default: 10) of a text file to stdout. Using &amp;lt;code&amp;gt;-n +NUM&amp;lt;/code&amp;gt; flips it around so that it echoes everything &#039;&#039;starting from&#039;&#039; line NUM. So -n +2 will echo back the file from the 2nd line onward (i.e., dropping the first line). We can redirect this to a temp file (so we don&#039;t write out an empty file), and then rename:&lt;br /&gt;
 tail -n +2 &amp;quot;$FILE&amp;quot; &amp;gt; &amp;quot;$FILE.tmp&amp;quot; &amp;amp;&amp;amp; mv &amp;quot;$FILE.tmp&amp;quot; &amp;quot;$FILE&amp;quot;&lt;br /&gt;
&#039;&#039;&#039;PROTIP:&#039;&#039;&#039; I&#039;m pretty sure the same trick applies using the &amp;lt;code&amp;gt;head&amp;lt;/code&amp;gt; command to drop the &#039;&#039;last&#039;&#039; n lines from a file.&lt;br /&gt;
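That hunch is right on GNU coreutils (our Linux machines), where a negative count to &amp;lt;code&amp;gt;head -n&amp;lt;/code&amp;gt; means &#039;&#039;all but the last n lines&#039;&#039;; note that BSD/macOS &amp;lt;code&amp;gt;head&amp;lt;/code&amp;gt; does not accept this form:&lt;br /&gt;
 head -n -2 &amp;quot;$FILE&amp;quot; &amp;gt; &amp;quot;$FILE.tmp&amp;quot; &amp;amp;&amp;amp; mv &amp;quot;$FILE.tmp&amp;quot; &amp;quot;$FILE&amp;quot;   #GNU only: drop the last 2 lines&lt;br /&gt;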
&lt;br /&gt;
== Make a list of directory names ==&lt;br /&gt;
We often organize subject data so that each subject gets their own directory. Freesurfer uses a &#039;&#039;&#039;subjects&#039;&#039;&#039; file when batch processing. Rather than manually type out each folder name into a text file, it can be generated in one line of code:&lt;br /&gt;
 ls -1 -d */ | sed &amp;quot;s,/$,,&amp;quot; &amp;gt;  subjects&lt;br /&gt;
This lists all the directories in one column (-1 -d) and uses &#039;&#039;sed&#039;&#039; to snip off the trailing forward slashes in the directory names.&lt;br /&gt;
&lt;br /&gt;
==What directories are in one directory but not the other?==&lt;br /&gt;
Scenario: We had a directory, let&#039;s call it ALLSUBJECTS, that had a bunch of subject directories named &#039;&#039;&#039;NDARINVxxxxxxxx&#039;&#039;&#039;. Some of them had a full dataset, but many of them did not. Sophia made a directory called &#039;&#039;&#039;gooddata&#039;&#039;&#039; that contained only the subset of folders that had full datasets. What&#039;s the fastest way to figure out who has incomplete data? Look for folders appearing in &#039;&#039;&#039;ALLSUBJECTS&#039;&#039;&#039; that don&#039;t appear in &#039;&#039;&#039;gooddata&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
 cd ALLSUBJECTS&lt;br /&gt;
 #next command lists only directories (-d), in a single column (-1), sorts the list (sort), &lt;br /&gt;
 #makes sure it only lists folder names starting with &amp;quot;ND&amp;quot; ( grep &amp;quot;^ND&amp;quot;), and then uses &lt;br /&gt;
 #sed to strip the trailing forward slash&lt;br /&gt;
 ls -d -1  */ | grep &amp;quot;^ND&amp;quot; | sort | sed &#039;s/\///g&#039; &amp;gt; ../allsubs.txt&lt;br /&gt;
 cd ../gooddata&lt;br /&gt;
 ls -d -1  */ | grep &amp;quot;^ND&amp;quot; | sort | sed &#039;s/\///g&#039; &amp;gt; ../goodsubs.txt&lt;br /&gt;
 #next line finds lines appearing in allsubs that do not appear in goodsubs:&lt;br /&gt;
 comm -23 ../allsubs.txt ../goodsubs.txt&lt;br /&gt;
&lt;br /&gt;
== Making a &amp;lt;code&amp;gt;tar&amp;lt;/code&amp;gt; archive containing only the minimal set of structural files for FreeSurfer ==&lt;br /&gt;
FreeSurfer makes a zillion files during the recon-all step. I have no idea what most of them are there for. Which do we need? The absolute minimal list is a work in progress, but I have made a file called &#039;&#039;&#039;fsaverage.required&#039;&#039;&#039; (copied to &amp;lt;code&amp;gt;/ubfs/caset/cpmcnorg/Scripts/fsaverage.required&amp;lt;/code&amp;gt;) based on the contents of the fsaverage template subject directory. I dropped the obvious directories (e.g., mri 2mm), so what&#039;s left should hopefully be close to the minimal required set for getting things done with a subject. The idea is to reduce the number of superfluous files that you store or copy over the network so that we don&#039;t waste as much time and disk space with useless nonsense.&lt;br /&gt;
&lt;br /&gt;
So here&#039;s what you do (note the text in red will vary - don&#039;t just blindly copy the code snippets below and expect them to work; that&#039;s how Chernobyl happens! You need to understand what you&#039;re doing):&lt;br /&gt;
&lt;br /&gt;
#Copy &#039;&#039;&#039;fsaverage.required&#039;&#039;&#039; to &amp;lt;code&amp;gt;$SUBJECTS_DIR&amp;lt;/code&amp;gt;&lt;br /&gt;
#Inspect fsaverage.required to make sure that it has any idiosyncratic files that you might wish to include&lt;br /&gt;
#*e.g., the original version only includes f.nii.gz files. If you want to also grab all your preprocessed .mgz files, then you&#039;ll want to include *.mgz up at the top. Save any changes.&lt;br /&gt;
#Navigate to a subject directory: &amp;lt;code&amp;gt;cd &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;FS_sub-001&amp;lt;/span&amp;gt;&amp;lt;/code&amp;gt;&lt;br /&gt;
#The following command will use the files listed in &amp;lt;code&amp;gt;$SUBJECTS_DIR/fsaverage.required&amp;lt;/code&amp;gt; to find and archive the desired files for this subject:&lt;br /&gt;
#*&amp;lt;code&amp;gt;tar -czvf &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;sub-001.minimal.tgz&amp;lt;/span&amp;gt; `find . | grep -G -f ${SUBJECTS_DIR}/fsaverage.required`&amp;lt;/code&amp;gt;&lt;br /&gt;
#When you&#039;re done, you&#039;ll have the bare-bones minimum files to permit FS-FAST analyses of your BOLD data for your subject.&lt;br /&gt;
#You can copy the .tgz files to an external drive or over the network. Be sure to unpack the .tgz archive in an empty subject directory&lt;br /&gt;
#*e.g.:&lt;br /&gt;
 mkdir &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;~/new_project/&amp;lt;/span&amp;gt;   #starting a new project directory - in this case on the same computer, but it could be anywhere&lt;br /&gt;
 cd &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;new_project&amp;lt;/span&amp;gt;            #enter the new project directory&lt;br /&gt;
 mkdir &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;FS_sub-001&amp;lt;/span&amp;gt;       #making an empty subject directory for the files we&#039;re about to unpack&lt;br /&gt;
 cd &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;FS_sub-001&amp;lt;/span&amp;gt;            #navigate into the new empty subject directory&lt;br /&gt;
  #next line copies the minimal file archive from the source directory into the new empty subject directory&lt;br /&gt;
 cp ${SUBJECTS_DIR}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;/FS_sub-001/sub-001.minimal.tgz&amp;lt;/span&amp;gt; ./&lt;br /&gt;
 #next line unzips the file archive into the empty directory&lt;br /&gt;
 tar -xzvf &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;sub-001.minimal.tgz&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
With the minimal set of structural files, you can unpack the surface and T1 anatomical files and inspect the reconstruction for accuracy, or add BOLD files from elsewhere. (The BOLD files are what &#039;&#039;really&#039;&#039; do you in, size-wise; I&#039;ve developed a similar procedure, described next, to grab your blob analysis files.)&lt;br /&gt;
&lt;br /&gt;
== Making a &amp;lt;code&amp;gt;tar&amp;lt;/code&amp;gt; archive containing only the minimal set of GLM Analysis Files for FreeSurfer ==&lt;br /&gt;
After running a first-level GLM analysis (a &amp;quot;blob&amp;quot; analysis) using &amp;lt;code&amp;gt;selxavg3-sess&amp;lt;/code&amp;gt;, each of your subject/bold directories will contain an analysis directory for each of the surfaces you included in your analysis (typically lh and rh, and possibly also mni305). Assuming the analyses were done in fsaverage template space (and there&#039;s no good reason anymore why they &#039;&#039;wouldn&#039;t&#039;&#039; be), you can download the bare minimum set of files required to inspect the subject-level analyses with the following script:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 #usage: ./zip1stla.sh SUBJECT_ID ANALYSIS_DIR_1 [ANALYSIS_DIR_2 ... etc]&lt;br /&gt;
 #fail early if SUBJECTS_DIR is not set&lt;br /&gt;
 cd ${SUBJECTS_DIR:?SUBJECTS_DIR is not set} || exit 1&lt;br /&gt;
 #first param is subject id&lt;br /&gt;
 SUB=${1}&lt;br /&gt;
 shift&lt;br /&gt;
 #remaining params are analysis directories&lt;br /&gt;
 DIRS=(&amp;quot;$@&amp;quot;)&lt;br /&gt;
 #we&#039;re going to clone the analysis directory structure&lt;br /&gt;
 cd ${SUB}/bold                 #enter the subject&#039;s real bold directory&lt;br /&gt;
 mkdir --parents ${SUB}/bold    #create an empty nested clone inside it&lt;br /&gt;
 &lt;br /&gt;
 #iterate through analysis directories&lt;br /&gt;
 for DIR in &amp;quot;${DIRS[@]}&amp;quot;;&lt;br /&gt;
 do&lt;br /&gt;
   cp -r ${DIR}  ${SUB}/bold/&lt;br /&gt;
 done&lt;br /&gt;
 #zip up our cloned directory structure &lt;br /&gt;
 tar -czvf ${SUBJECTS_DIR}/${SUB}.1stla.tgz ${SUB}&lt;br /&gt;
 #delete the clone&lt;br /&gt;
 rm -rf ${SUB}&lt;br /&gt;
 #go back to where we started&lt;br /&gt;
 cd ${SUBJECTS_DIR}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
If you were to copy/paste the above script to a file named &#039;&#039;&#039;zip1stla.sh&#039;&#039;&#039; and make it executable (&amp;lt;code&amp;gt;chmod ug+x zip1stla.sh&amp;lt;/code&amp;gt;) then you would run it this way:&lt;br /&gt;
 #suppose my analysis directories are called FAM.sm6.lh, FAM.sm6.rh and FAM.sm6.mni&lt;br /&gt;
 zip1stla.sh FS_SUB01 FAM.sm6.lh FAM.sm6.rh FAM.sm6.mni&lt;br /&gt;
This will create a file called FS_SUB01.1stla.tgz. When you unzip the file, it will create a subject folder with the following structure:&lt;br /&gt;
*FS_SUB01&lt;br /&gt;
**bold&lt;br /&gt;
***FAM.sm6.lh&lt;br /&gt;
****{some files}&lt;br /&gt;
***FAM.sm6.rh&lt;br /&gt;
****{some files}&lt;br /&gt;
***FAM.sm6.mni&lt;br /&gt;
****{some files}&lt;br /&gt;
No other files will be included in the archive, which keeps the archive size to a minimum. If FS_SUB01 already exists, then the contents of this archive will be added to the existing directory. This can be useful if you previously used the method described above to archive a minimal set of FreeSurfer structural files. Note that the FreeSurfer structural files are not needed to view the first level GLM data if you ran the analysis in fsaverage space, because these data are mapped to the fsaverage template, which you will already have on your local machine if you have FreeSurfer installed.&lt;br /&gt;
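Since &amp;lt;code&amp;gt;tar&amp;lt;/code&amp;gt; extraction merges into an existing directory tree, the merging behaviour described above can be checked with a throwaway sketch (all file and directory names below are made up):&lt;br /&gt;

```shell
# scratch area so nothing real is touched
cd "$(mktemp -d)"

# pretend we already unpacked the minimal structural archive
mkdir -p FS_SUB01/surf
touch FS_SUB01/surf/lh.white

# build a stand-in for the first-level-analysis archive
mkdir -p stage/FS_SUB01/bold/FAM.sm6.lh
touch stage/FS_SUB01/bold/FAM.sm6.lh/sig.nii.gz
tar -czf 1stla.tgz -C stage FS_SUB01

# unpacking merges the archive contents into the existing FS_SUB01/
tar -xzf 1stla.tgz
```

After extraction, FS_SUB01 contains both the pre-existing surf/ files and the bold/ analysis directories from the archive.&lt;br /&gt;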
&lt;br /&gt;
== Make a series of numbered directories ==&lt;br /&gt;
FreeSurfer BOLD data goes in a series of numbered directories: 001, 002, ..., 0nn. A one-liner to create these directories on the command line:&lt;br /&gt;
 for i in $(seq -f &amp;quot;%03g&amp;quot; 1 6); do mkdir ${i}; done&lt;br /&gt;
 #this will create directories 001 to 006. Obviously, if you need more directories, change the second value from 6 to something else&lt;br /&gt;
&#039;&#039;Protip:&#039;&#039; If you want to also make the &amp;lt;code&amp;gt;runs&amp;lt;/code&amp;gt; file that some of our scripts use at the same time, the above snippet can be modified:&lt;br /&gt;
  for i in $(seq -f &amp;quot;%03g&amp;quot; 1 6); do mkdir ${i}; echo ${i} &amp;gt;&amp;gt; runs; done&lt;br /&gt;
&lt;br /&gt;
=== Save list of numbered directories to file ===&lt;br /&gt;
Another protip: If you already had a set of numbered directories and want to save them to a list (e.g., a &amp;quot;runs&amp;quot; file):&lt;br /&gt;
 while read s; do&lt;br /&gt;
   ls &amp;quot;$SUBJECTS_DIR/$s/bold&amp;quot; | grep &amp;quot;^00*&amp;quot; &amp;gt; $SUBJECTS_DIR/$s/bold/runs&lt;br /&gt;
 done &amp;lt;subjects&lt;br /&gt;
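The loop above reads subject IDs from a plain-text file named &#039;&#039;subjects&#039;&#039;, one ID per line. A self-contained sketch with a made-up subject (using &amp;lt;code&amp;gt;cat&amp;lt;/code&amp;gt; in place of the input redirection):&lt;br /&gt;

```shell
cd "$(mktemp -d)"
export SUBJECTS_DIR=$PWD

# fake subject with three numbered run directories
mkdir -p FS_sub-001/bold/001 FS_sub-001/bold/002 FS_sub-001/bold/003
printf 'FS_sub-001\n' > subjects

# same loop as above: save the run list to a "runs" file per subject
cat subjects | while read s; do
  ls "$SUBJECTS_DIR/$s/bold" | grep "^00*" > "$SUBJECTS_DIR/$s/bold/runs"
done
```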
&lt;br /&gt;
== Restart Window Manager ==&lt;br /&gt;
This has happened a couple of times before: you step away from the computer for a while (maybe even overnight) and when you come back, you find it locked up and completely unresponsive. The nuclear option is to reboot the whole machine:&lt;br /&gt;
 sudo shutdown -r now #Sad for anyone running autorecon or a neural network&lt;br /&gt;
Unfortunately, that will stop anything that might be running in the background. A less severe solution might be to just restart the window manager. To do this you will need to ssh into the locked-up computer from a different computer, and then restart the lightdm process. This will require superuser privileges.&lt;br /&gt;
 ssh &#039;&#039;hostname&#039;&#039;&lt;br /&gt;
Then after you have connected to the frozen computer:&lt;br /&gt;
 sudo systemctl restart lightdm   #on systemd-based systems&lt;br /&gt;
 sudo restart lightdm             #on older upstart-based systems&lt;br /&gt;
Any processes that were dependent on the window manager will be terminated (e.g., if you had been in the middle of editing labels in tksurfer, you will find that tksurfer has been shut down and you will need to start over); however, anything that was running in the background (e.g., autorecon) should be unaffected.&lt;br /&gt;
&lt;br /&gt;
==Renaming Multiple Files==&lt;br /&gt;
=== Rename Using &amp;lt;code&amp;gt;rename&amp;lt;/code&amp;gt;===&lt;br /&gt;
A &#039;&#039;&#039;perl&#039;&#039;&#039; command, called &amp;lt;code&amp;gt;rename&amp;lt;/code&amp;gt; might be available on your *nix system:&lt;br /&gt;
 rename [OPTIONS] perlexpr files&lt;br /&gt;
Among useful options are the &amp;lt;code&amp;gt;-n&amp;lt;/code&amp;gt; flag, which just reports what all the file renames would be, but doesn&#039;t actually execute them.&lt;br /&gt;
A handy application of &#039;&#039;&#039;rename&#039;&#039;&#039; is to hide files and/or directories. Files with names beginning with a dot are hidden by default and don&#039;t show up in directory listings. This can be a handy way of excluding chunks of data from your scripts.&lt;br /&gt;
====Use-Case: Hiding Session 2 Data====&lt;br /&gt;
In our Multisensory Imagery experiment, we collect 6 runs at time points 1 and 2. If we wish to be able to analyze all the data, these would be stored together as runs 001 to 012. Suppose we wish to temporarily hide the second time point data:&lt;br /&gt;
 rename -n &#039;s/01/\._01/&#039; `find ./ -type d -name &amp;quot;01*&amp;quot;`&lt;br /&gt;
This finds all the directories (&amp;quot;-type d&amp;quot;) whose names begin with 01, then shows how it would rename them. If everything looks right, execute the same command again without the -n flag so that the renaming actually takes place.&lt;br /&gt;
Note that this example only gets the 010, 011 and 012 directories. You would do something similar for directories 00[7-9].&lt;br /&gt;
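If &amp;lt;code&amp;gt;rename&amp;lt;/code&amp;gt; isn&#039;t available, the same hiding trick can be done with a plain &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt; loop. A sketch over stand-in run directories (assuming, as in the example above, that session 2 comprises runs 007-012):&lt;br /&gt;

```shell
cd "$(mktemp -d)"

# stand-in run directories for time points 1 and 2
mkdir 001 002 003 004 005 006 007 008 009 010 011 012

# hide the session-2 runs by prepending "._" to each name
for d in 00[7-9] 01[0-2]; do mv "$d" "._$d"; done
```

Reversing it is just &amp;lt;code&amp;gt;for d in ._0*; do mv &amp;quot;$d&amp;quot; &amp;quot;${d#._}&amp;quot;; done&amp;lt;/code&amp;gt;.&lt;br /&gt;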
&lt;br /&gt;
====Use-Case: Unhiding Directories====&lt;br /&gt;
This one is easier, since all the hidden directories start with &amp;quot;._&amp;quot; using the approach described above:&lt;br /&gt;
 rename &#039;s/\._//&#039; `find ./ -type d -name &amp;quot;._0*&amp;quot;`&lt;br /&gt;
In case you&#039;re curious about the syntax of the perl expression, you might want to [https://www.regular-expressions.info/quickstart.html read up a bit about regular expressions], but in this case, &amp;lt;span style=&amp;quot;color: red;&amp;quot;&amp;gt;&#039;s&amp;lt;/span&amp;gt;/&amp;lt;span style=&amp;quot;color: green;&amp;quot;&amp;gt;\._&amp;lt;/span&amp;gt;&amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;//&amp;lt;/span&amp;gt;&#039; indicates we are doing a &amp;lt;span style=&amp;quot;color: red;&amp;quot;&amp;gt;substitution&amp;lt;/span&amp;gt; that will replace every instance of &amp;lt;span style=&amp;quot;color: green;&amp;quot;&amp;gt;._&amp;lt;/span&amp;gt; with an empty string (&amp;lt;span style=&amp;quot;color: blue;&amp;quot;&amp;gt;//&amp;lt;/span&amp;gt;).&lt;br /&gt;
The extra back-slash in front of the period is an &#039;&#039;escape character&#039;&#039;, which is needed because otherwise the dot (period) will be interpreted as a special character.&lt;br /&gt;
&lt;br /&gt;
== Rename Using &amp;lt;code&amp;gt;mv&amp;lt;/code&amp;gt;==&lt;br /&gt;
If you don&#039;t have access to the rename command (Mac OSX), you can fake it:&lt;br /&gt;
 PREFIX=LO&lt;br /&gt;
 for file in $(find . -maxdepth 1 -name &amp;quot;*.txt&amp;quot;); do mv ${file##*/} ${PREFIX}_${file##*/}; done&lt;br /&gt;
Source: [https://stackoverflow.com/questions/2664740/extract-file-basename-without-path-and-extension-in-bash]&lt;br /&gt;
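The &amp;lt;code&amp;gt;${file##*/}&amp;lt;/code&amp;gt; expansion strips everything up to and including the last slash, leaving just the file name. A sketch of the prefixing loop over throwaway files in the current directory:&lt;br /&gt;

```shell
cd "$(mktemp -d)"
touch alpha.txt beta.txt

PREFIX=LO
# ${file##*/} strips the leading "./" that find adds, leaving the bare name
for file in $(find . -maxdepth 1 -name "*.txt"); do
  mv "${file##*/}" "${PREFIX}_${file##*/}"
done
```

Note that this form only handles files in the current directory (and breaks on names containing spaces); it is a quick hack, not a general renamer.&lt;br /&gt;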
== Related Trick: Collecting and Renaming Multiple Files in Subdirectories ==&lt;br /&gt;
Use case: I ran a bunch of model simulations. Each batch of simulations produced a series of 8 Keras files named &#039;&#039;&#039;model_0x.h5&#039;&#039;&#039;, and stored in directories named &#039;&#039;&#039;batch_##/&#039;&#039;&#039;. 10 batches of simulations produced 80 model files, except that they all had the same names. I wanted to run some tests on the complete set, so I needed to aggregate all the files in a single directory, but rename them from 01 to 80:&lt;br /&gt;
&lt;br /&gt;
 for run in $(seq 1 10) &lt;br /&gt;
 do&lt;br /&gt;
        r=`printf &amp;quot;%02d&amp;quot; $run`&lt;br /&gt;
        echo &amp;quot;Gathering run $r files&amp;quot;&lt;br /&gt;
        for m in {1..8}&lt;br /&gt;
        do&lt;br /&gt;
                basemodel=`printf &amp;quot;%02d&amp;quot; $m`&lt;br /&gt;
                blockstart=$(( ($run-1)*8 ))&lt;br /&gt;
                newmodel=$(( $blockstart+$m ))&lt;br /&gt;
               cp batch_$r/model_$basemodel.h5 ./model_$newmodel.h5 &lt;br /&gt;
        done&lt;br /&gt;
 done&lt;br /&gt;
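To sanity-check the index arithmetic (run r, model m maps to (r-1)*8+m), here is a dry run of the same gathering loop over empty placeholder files:&lt;br /&gt;

```shell
cd "$(mktemp -d)"

# create 10 batches of 8 empty stand-in model files
for run in $(seq 1 10); do
  r=$(printf "%02d" $run)
  mkdir "batch_$r"
  for m in $(seq 1 8); do
    touch "batch_$r/model_$(printf "%02d" $m).h5"
  done
done

# gather: batch r, model m becomes model_((r-1)*8+m).h5
for run in $(seq 1 10); do
  r=$(printf "%02d" $run)
  for m in $(seq 1 8); do
    basemodel=$(printf "%02d" $m)
    newmodel=$(( (run-1)*8 + m ))
    cp "batch_$r/model_$basemodel.h5" "./model_$newmodel.h5"
  done
done
```

The result is 80 files, model_1.h5 through model_80.h5, in the current directory.&lt;br /&gt;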
&lt;br /&gt;
== &amp;lt;code&amp;gt;sed&amp;lt;/code&amp;gt; Tricks ==&lt;br /&gt;
===Replacing Text in Multiple Files===&lt;br /&gt;
 sed -i &#039;s/oldtext/newtext/g&#039; *.ext&lt;br /&gt;
&lt;br /&gt;
===Remove punctuation and convert to lowercase===&lt;br /&gt;
 FILENAME=file.txt&lt;br /&gt;
 sed &#039;s/&amp;amp;#91;&amp;amp;#91;:punct:&amp;amp;#93;&amp;amp;#93;//g&#039; $FILENAME | sed $&#039;s/\t//g&#039; | tr &#039;&amp;amp;#91;:upper:&amp;amp;#93;&#039; &#039;&amp;amp;#91;:lower:&amp;amp;#93;&#039;  &amp;gt; lowercase.$FILENAME&lt;br /&gt;
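A quick end-to-end check of the cleanup pipeline on a throwaway file:&lt;br /&gt;

```shell
cd "$(mktemp -d)"
FILENAME=file.txt
# sample text containing punctuation, uppercase and a tab
printf 'Hello,\tWORLD!\n' > $FILENAME

# strip punctuation, strip tabs, then lowercase everything
sed 's/[[:punct:]]//g' $FILENAME | sed $'s/\t//g' | tr '[:upper:]' '[:lower:]' > lowercase.$FILENAME
```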
&lt;br /&gt;
==Archiving Specific Files in a Directory Tree==&lt;br /&gt;
The &amp;lt;code&amp;gt;tar&amp;lt;/code&amp;gt; command has an &amp;lt;code&amp;gt;--include&amp;lt;/code&amp;gt; switch which will archive &#039;&#039;only&#039;&#039; matching file patterns, however it appears that this filtering [https://stackoverflow.com/questions/5747755/tar-with-include-pattern breaks] when trying to archive files in subdirectories. Fortunately, the person who posed the question on Stack Overflow already had a workaround that works fine (it&#039;s just ugly):&lt;br /&gt;
&lt;br /&gt;
  find ./ -name &amp;quot;*.wav.txt&amp;quot; -print0 | tar -cvzf ~/adhd.tgz --null -T -&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;-T&amp;lt;/code&amp;gt; switch tells tar to read the list of files to archive from a file, and the trailing &amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt; means that &amp;quot;file&amp;quot; is standard input (i.e., the output of find); &amp;lt;code&amp;gt;--null&amp;lt;/code&amp;gt; tells tar the names are NUL-separated, matching find&#039;s &amp;lt;code&amp;gt;-print0&amp;lt;/code&amp;gt;. Just replace the file pattern with whatever it is you&#039;re filtering for, and of course specify an appropriate tgz archive name.&lt;br /&gt;
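A self-contained demonstration on a throwaway directory tree, using &amp;lt;code&amp;gt;tar -tzf&amp;lt;/code&amp;gt; to confirm that only the matching files (including those in subdirectories) were archived:&lt;br /&gt;

```shell
cd "$(mktemp -d)"
mkdir -p sub1 sub2
touch sub1/a.wav.txt sub1/a.wav sub2/b.wav.txt notes.md

# archive only the *.wav.txt files, wherever they sit in the tree
find ./ -name "*.wav.txt" -print0 | tar -czf matched.tgz --null -T -

# list the archive contents to verify
tar -tzf matched.tgz
```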
&lt;br /&gt;
== mysql on the terminal ==&lt;br /&gt;
So I learned tonight how to export query results to a text file from the shell interface. Note that MySQL server is running with the --secure-file-priv option enabled, so you can&#039;t just willy-nilly write files wherever you want. However /var/lib/mysql-files/ is fair game, so for example:&lt;br /&gt;
 select * from conceptstats inner join concepts on conceptstats.concid=concepts.concid where pid=183 and norm=1 into outfile &#039;/var/lib/mysql-files/0183.txt&#039;;&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Subparcellation&amp;diff=2275</id>
		<title>Freesurfer Subparcellation</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Subparcellation&amp;diff=2275"/>
		<updated>2022-09-27T18:30:24Z</updated>

		<summary type="html">&lt;p&gt;Chris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Freesurfer annotations and labels describe swaths of cortical regions, defined anatomically during the recon-all process, or functionally or statistically (e.g., significant GLM clusters). Tools exist to further subdivide these regions to the desired level of granularity.&lt;br /&gt;
= Using AnchorBar =&lt;br /&gt;
AnchorBar is a set of Python tools developed in our lab that maintain a database of a set of annotations and supports set operations on the labels associated with one or more annotations. For example, an atlas-based annotation can be intersected with a functional annotation (e.g., from a group-level GLM) to subdivide large functional clusters along anatomical boundaries.&lt;br /&gt;
&lt;br /&gt;
= Using mris_divide_parcellation =&lt;br /&gt;
This program divides one or more parcellations into divisions&lt;br /&gt;
perpendicular to the long axis of the label.  The number of divisions&lt;br /&gt;
can be specified in one of two ways, depending upon the nature of the&lt;br /&gt;
fourth argument.&lt;br /&gt;
&lt;br /&gt;
First, a splitfile can be specified as the fourth argument. The&lt;br /&gt;
splitfile is a text file with two columns. The first column is the&lt;br /&gt;
name of the label in the source annotation, and the second column is&lt;br /&gt;
the number of units that label should be divided into. The names of&lt;br /&gt;
the labels depends upon the source parcellation.  For aparc.annot and&lt;br /&gt;
aparc.a2005.annot, the names can be found in&lt;br /&gt;
$FREESURFER_HOME/FreeSurferColorLUT.txt.  For aparc.annot, the labels&lt;br /&gt;
are between the ranges of 1000-1034.  For aparc.a2005s.annot, the&lt;br /&gt;
labels are between the ranges of 1100-1181.  The name for the label is&lt;br /&gt;
the name of the segmentation without the &#039;ctx-lh&#039;. Note that the name&lt;br /&gt;
included in the splitfile does not indicate the hemisphere. For&lt;br /&gt;
example, 1023 is &#039;ctx-lh-posteriorcingulate&#039;.  You should put&lt;br /&gt;
&#039;posteriorcingulate&#039; in the splitfile. Eg, to divide it into three&lt;br /&gt;
segments, the following line should appear in the splitfile:&lt;br /&gt;
&lt;br /&gt;
posteriorcingulate 3&lt;br /&gt;
&lt;br /&gt;
Only labels that should be split need be specified in the splitfile.&lt;br /&gt;
&lt;br /&gt;
The second method is to specify an area threshold (in mm^2) as the&lt;br /&gt;
fourth argument, in which case each label is divided until each&lt;br /&gt;
subdivision is smaller than the area threshold.&lt;br /&gt;
&lt;br /&gt;
The output label name will be the original name with _divN appended,&lt;br /&gt;
where N is the division number. N will go from 2 to the number of&lt;br /&gt;
divisions. The first division has the same name as the original label.&lt;br /&gt;
&lt;br /&gt;
== Sample Split File (Functional Clusters) ==&lt;br /&gt;
In this example, I have carried out a one-sample group mean (OSGM) analysis and did a cluster-size threshold. This saved a number of clusters in two .annot files (one in each hemisphere). There are also some &#039;&#039;*.cluster.summary&#039;&#039; files in this directory. Reading these files, I see the smallest clusters are about 120mm&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;. I would like to subdivide the larger clusters so that they are all between about 100-150mm&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;. I load the corresponding .annot file in Freesurfer just to verify the cluster names (the summary file lists 10 clusters, but the names do not follow a consistent naming convention). From the file, I see that clusters 1,2,3,4,6 and 8 need to be subdivided. Simple division by the desired cluster size tells me the number of divisions I want for each cluster:&lt;br /&gt;
&lt;br /&gt;
 echo cluster-001 6 &amp;gt; rsplittable.txt&lt;br /&gt;
 echo cluster-002 7 &amp;gt;&amp;gt; rsplittable.txt&lt;br /&gt;
 echo cluster-003 5 &amp;gt;&amp;gt; rsplittable.txt&lt;br /&gt;
 echo cluster-004 5 &amp;gt;&amp;gt; rsplittable.txt&lt;br /&gt;
 echo cluster-006 2 &amp;gt;&amp;gt; rsplittable.txt&lt;br /&gt;
 echo cluster-008 2 &amp;gt;&amp;gt; rsplittable.txt&lt;br /&gt;
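The division counts above come from dividing each cluster&#039;s area by the target subdivision size and rounding to the nearest integer. A small sketch of that arithmetic (the cluster areas here are made up for illustration; in practice you would read them off the &#039;&#039;*.cluster.summary&#039;&#039; file):&lt;br /&gt;

```shell
# target subdivision size in mm^2
TARGET=125

# hypothetical cluster areas in mm^2
for area in 750 880 600 610 240 250; do
  # integer rounding: (area + TARGET/2) / TARGET
  divisions=$(( (area + TARGET/2) / TARGET ))
  echo "$area mm2 -> $divisions divisions"
done
```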
&lt;br /&gt;
Repeat the process for the lh clusters, creating an lsplittable.txt file.&lt;br /&gt;
&lt;br /&gt;
The group analysis used the fsaverage surface. Since mris_divide_parcellation assumes we&#039;re subdividing a particular subject&#039;s annotation, it makes sense to copy the source .annot files into the fsaverage label/ folder (giving them the appropriate ?h. prefix):&lt;br /&gt;
&lt;br /&gt;
 cp cache.th30.abs.sig.ocn.annot \&lt;br /&gt;
 $SUBJECTS_DIR/fsaverage/label/rh.cache.th30.abs.sig.ocn.annot&lt;br /&gt;
&lt;br /&gt;
Then call the program thus (assuming that rsplittable.txt is in your current working directory):&lt;br /&gt;
 &lt;br /&gt;
 mris_divide_parcellation fsaverage rh \ &lt;br /&gt;
 cache.th30.abs.sig.ocn.annot rsplittable.txt \&lt;br /&gt;
 functional_subclusters&lt;br /&gt;
&lt;br /&gt;
Program output:&lt;br /&gt;
&lt;br /&gt;
 reading colortable from annotation file...&lt;br /&gt;
 colortable with 11 entries read (originally none)&lt;br /&gt;
 interpreting 4th command line arg as split file name&lt;br /&gt;
 dividing cluster-001 (1) into 6 parts&lt;br /&gt;
 dividing cluster-002 (2) into 7 parts&lt;br /&gt;
 dividing cluster-003 (3) into 5 parts&lt;br /&gt;
 dividing cluster-004 (4) into 5 parts&lt;br /&gt;
 dividing cluster-006 (6) into 2 parts&lt;br /&gt;
 dividing cluster-008 (8) into 2 parts&lt;br /&gt;
 allocating new colortable with 31 additional units...&lt;br /&gt;
 saving annotation to functional_subclusters&lt;br /&gt;
&lt;br /&gt;
Repeat for lh.&lt;br /&gt;
&lt;br /&gt;
Note: the first time I ran this, I had no idea where the rh.functional_subclusters.annot file went! Running the lh subparcellation and re-running the rh one generated the expected files in the fsaverage/label directory.&lt;br /&gt;
&lt;br /&gt;
== Check Subparcellation ==&lt;br /&gt;
Presumably the Python wrapper does this iteratively because all subdivisions are made along the same axis. As I discovered, a large number of subdivisions may be realized as a series of thin bands (below). In this case, it might be best to subdivide larger regions over a series of calls, as I describe below.&lt;br /&gt;
&lt;br /&gt;
[[File:Striped_Parcellation.png]]&lt;br /&gt;
&lt;br /&gt;
=== Iterative Subparcellation ===&lt;br /&gt;
If banding among your subdivisions is a problem, here&#039;s an example of the iterative approach using region size, rather than a split file:&lt;br /&gt;
&lt;br /&gt;
First, break down clusters into 400mm&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; chunks:&lt;br /&gt;
 mris_divide_parcellation fsaverage lh \&lt;br /&gt;
 cache.th30.abs.sig.ocn.annot \&lt;br /&gt;
 400 400functional_subclusters&lt;br /&gt;
&lt;br /&gt;
Next, break down the 400mm&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; chunks into sub-200mm&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt; regions:&lt;br /&gt;
&lt;br /&gt;
 mris_divide_parcellation fsaverage lh \&lt;br /&gt;
 400functional_subclusters.annot \&lt;br /&gt;
 200 200functional_subclusters&lt;br /&gt;
&lt;br /&gt;
Because each call to &amp;lt;code&amp;gt;mris_divide_parcellation&amp;lt;/code&amp;gt; bisects each region along its longest axis, successive calls produce progressively less elongated subparcellations.&lt;br /&gt;
&lt;br /&gt;
== Transfer Subparcellation to Other Subjects ==&lt;br /&gt;
Once satisfied with the source subparcellation, the scheme &#039;&#039;can&#039;&#039; be mapped to other surfaces using mri_surf2surf if, for example, you are working in &#039;&#039;&#039;self&#039;&#039;&#039; space (this needs to be done individually for the lh and rh surfaces -- below is an example command for the left):&lt;br /&gt;
&lt;br /&gt;
 mri_surf2surf \&lt;br /&gt;
 --srcsubject fsaverage \&lt;br /&gt;
 --trgsubject FS_T1_501 \&lt;br /&gt;
 --hemi lh \&lt;br /&gt;
 --sval-annot $SUBJECTS_DIR/fsaverage/label/lh.200functional_subclusters.annot \&lt;br /&gt;
 --tval $SUBJECTS_DIR/FS_T1_501/label/lh.200functional_subclusters.annot&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;Pro Tip: Set a shell variable (e.g. SUBN) and use $SUBN in place of your subject in trgsubject and tval.  Better yet, use the clustmap.sh shell script.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
You may want to pull out overlapping voxels for T1 and T2 data (like a venn diagram) before mapping the surfaces to each subject, [http://ccnlab.psy.buffalo.edu/wiki/index.php/Working_with_ROIs_(Freesurfer) here].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: If you are working in &#039;&#039;fsaverage&#039;&#039; space for all participants, you only have to subparcellate your annotations once -- for the fsaverage surface.&#039;&#039;&#039; Because the functional analyses are invariably carried out in fsaverage space in order to compute group-level statistics, you will seldom have to transfer a subparcellation to a specific participant&#039;s surface space.&lt;br /&gt;
&lt;br /&gt;
[[Category: FreeSurfer]]&lt;br /&gt;
[[Category: Network Analyses]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2274</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2274"/>
		<updated>2022-09-14T15:14:54Z</updated>

		<summary type="html">&lt;p&gt;Chris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that might not be part of your base Python install:&lt;br /&gt;
*hashlib&lt;br /&gt;
*nibabel&lt;br /&gt;
Attempting to run the scripts without them will fail and report the missing libraries (&amp;lt;code&amp;gt;hashlib&amp;lt;/code&amp;gt; ships with modern Python, so usually only nibabel needs installing). Missing libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently comprises 3 scripts. Syntax help for each of these scripts can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename and relabel annotations. Each annotation has a unique ID and descriptors indicating the hemisphere, a shortname used for naming labels generated from set operations, and a path indicating the file path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lh/rh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
The labels associated with a given annotation can be found using the &amp;lt;code&amp;gt;--labels&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 #List all the labels in the lh.aparc_2009s.annot FreeSurfer annotation&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --labels 2 &lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use-case for adding annotations is when working with functional ROIs from group analyses. AnchorBar_init.py is the tool that imports new .annot files into the database for later operations. Imported annotations are associated with a unique identifier based on the labels and vertices that make up an annotation. This allows the scripts to recognize and ignore duplicate annotations that might have been given different names, but also to differentiate distinct annotations that happen to have the same name (as is common when working with functional ROIs). &lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
Of course, the particular filename pattern in red in the above command may differ from what&#039;s shown here, depending on your file naming convention, but you get the idea. When this code executes, it will import all the files that end in &amp;quot;th30.abs.annot&amp;quot; that it finds. This is handy for importing annotations for multiple hemispheres and/or for multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
If we wanted to remove the &amp;quot;th30.abs&amp;quot; part of the shortname for the annotations 25 and 26:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to see the shortname has been updated with the values you just provided:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
=Set Operations on Annotations=&lt;br /&gt;
Note that all set operations require pairs of annotations from the same hemisphere. If unsure, list the annotations and note the hemisphere shown for each id you intend to use. Using annotations from opposite hemispheres will result in an error message and produce no new output.&lt;br /&gt;
&lt;br /&gt;
==Intersection of Two Annotations==&lt;br /&gt;
Example: We have a functional contrast (A vs Rest) and wish to divide the large clusters along Brodmann Area boundaries. By intersecting the clusters (which only cover some of the brain surface) with one of the anatomical atlas annotations (which cover all of the brain surface), we assign a new label to each vertex according to which cluster and which anatomical label it corresponds to. Vertices not belonging to functional clusters will be dropped. The end result is a new annotation where the clusters are subdivided along anatomical boundaries.&lt;br /&gt;
&lt;br /&gt;
We first list the annotations in the database to find the corresponding id numbers. A vs Rest are annotations 23 (lh) and 24 (rh), and the Brodmann annotations are 5 (lh) and 15 (rh). We intersect the two corresponding annotations thus:&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 5 23&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 15 24&lt;br /&gt;
&lt;br /&gt;
==Merging (Union) of Two Annotations==&lt;br /&gt;
Example: We have two tasks, A and B, and each task condition has been contrasted with rest (A vs Rest; B vs Rest). We wish to identify left-hemisphere regions that are involved in either task, so we compute the union. This functionality has been added to AnchorBar.&lt;br /&gt;
 #Assumes that lh.AnnotA and lh.AnnotB correspond to the annotations derived from the two task contrasts, &lt;br /&gt;
 #A vs Rest and B vs Rest, respectively, and that these annotations are numbered 23 and 25&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --union 23 25&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2273</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2273"/>
		<updated>2022-09-14T15:10:39Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Merging (Union) of Two Annotations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that might not be part of your base Python install:&lt;br /&gt;
*hashlib&lt;br /&gt;
*nibabel&lt;br /&gt;
Attempting to run the scripts without them will fail and report the missing libraries (&amp;lt;code&amp;gt;hashlib&amp;lt;/code&amp;gt; ships with modern Python, so usually only nibabel needs installing). Missing libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently comprises 3 scripts. Syntax help for each of these scripts can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename and relabel annotations. Each annotation has a unique ID and descriptors indicating the hemisphere, a shortname used for naming labels generated from set operations, and a path indicating the file path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lhrh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use case for adding annotations is working with functional ROIs from group analyses. AnchorBar_init.py imports new .annot files into the database for later operations. Each imported annotation is assigned a unique identifier derived from the labels and vertices that make it up. This allows the scripts to recognize and ignore duplicate annotations that have been given different names, and to differentiate distinct annotations that happen to share a name (as is common when working with functional ROIs).&lt;br /&gt;
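The content-based identifier described above can be sketched in plain Python. This is a hypothetical illustration using hashlib (one of the listed dependencies); the function name and the exact fingerprinting scheme AnchorBar uses are assumptions, not its actual implementation.

```python
import hashlib

def annotation_fingerprint(label_names, vertex_labels):
    """Hash an annotation's label names plus its per-vertex label
    assignments, so two files with identical content get the same
    identifier regardless of filename."""
    h = hashlib.sha256()
    for name in sorted(label_names):
        h.update(name.encode("utf-8"))
    h.update(",".join(str(v) for v in vertex_labels).encode("utf-8"))
    return h.hexdigest()

# A renamed copy of the same annotation hashes identically,
# while a different vertex assignment hashes differently:
a = annotation_fingerprint(["cluster-001", "cluster-002"], [0, 1, 1, 0])
b = annotation_fingerprint(["cluster-001", "cluster-002"], [0, 1, 1, 0])
c = annotation_fingerprint(["cluster-001", "cluster-002"], [0, 1, 0, 0])
```

Hashing the content rather than the filename is what lets duplicate imports be skipped and same-named functional ROIs be told apart.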
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
The filename pattern shown in red may differ depending on your naming convention. When this command executes, it imports every file it finds ending in &amp;quot;th30.abs.annot&amp;quot;, which is handy for importing annotations for multiple hemispheres and/or multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
To remove the &amp;quot;th30.abs&amp;quot; part of the shortname for annotations 25 and 26:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to confirm that the shortnames have been updated:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
=Set Operations on Annotations=&lt;br /&gt;
Note that all set operations require pairs of annotations from the same hemisphere. If unsure, list the annotations and check the hemisphere recorded for the ids you&#039;re interested in. Using annotations from opposite hemispheres results in an error message and produces no output.&lt;br /&gt;
&lt;br /&gt;
==Intersection of Two Annotations==&lt;br /&gt;
Example: We have a functional contrast (A vs Rest) and wish to divide the large clusters along Brodmann Area boundaries. By intersecting the clusters (which cover only part of the brain surface) with one of the anatomical atlas annotations (which covers the entire surface), we assign each vertex a new label according to the cluster and anatomical label it corresponds to. Vertices not belonging to functional clusters are dropped. The end result is a new annotation in which the clusters are subdivided along anatomical boundaries.&lt;br /&gt;
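The vertex-wise logic just described can be sketched in plain Python. This is a minimal sketch, not AnchorBar_sets.py's actual code (which reads .annot files via nibabel); the function name, the background value, and the cluster_id * 1000 + anatomical_id encoding are illustrative assumptions.

```python
def intersect_annotations(func_labels, anat_labels, background=-1):
    """Vertex-wise intersection of two same-hemisphere annotations.

    Each argument holds one label id per surface vertex. Vertices
    outside any functional cluster (== background) are dropped; the
    rest receive a combined (cluster, anatomical) label, encoded
    here as cluster_id * 1000 + anatomical_id."""
    return [f * 1000 + a if f != background else background
            for f, a in zip(func_labels, anat_labels)]

# Vertex 0 lies outside the clusters and is dropped; vertices 1-3
# are relabelled by cluster and anatomical parcel:
merged = intersect_annotations([-1, 1, 1, 2], [5, 5, 7, 7])
# merged == [-1, 1005, 1007, 2007]
```

Note how cluster 1 is split into two output labels (1005 and 1007) because it straddles an anatomical boundary, which is exactly the subdivision the intersection is meant to produce.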
&lt;br /&gt;
We first list the annotations in the database to find the corresponding id numbers. A vs Rest are annotations 23 (lh) and 24 (rh), and the Brodmann annotations are 5 (lh) and 15 (rh). We intersect each hemisphere&#039;s pair of annotations:&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 5 23&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 15 24&lt;br /&gt;
&lt;br /&gt;
==Merging (Union) of Two Annotations==&lt;br /&gt;
Example: We have two tasks, A and B, and each task condition has been contrasted with rest (A vs Rest; B vs Rest). We wish to identify left-hemisphere regions involved in either task, so we compute the union (this functionality has been added to AnchorBar):&lt;br /&gt;
 #Assumes that lh.AnnotA and lh.AnnotB correspond to the annotations derived from the two task contrasts, &lt;br /&gt;
 #A vs Rest and B vs Rest, respectively, and that these annotations are numbered 23 and 25&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --union 23 25&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2272</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2272"/>
		<updated>2022-09-14T15:10:20Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Merging (Union) of Two Annotations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program for maintaining a database of FreeSurfer annotation files and computing set operations on them. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that might not be part of your base Python install:&lt;br /&gt;
*hashlib&lt;br /&gt;
*nibabel&lt;br /&gt;
If any are missing, running the scripts will fail with an error naming the missing library. They can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently consists of 3 scripts. Syntax help for each can be obtained with the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID, a hemisphere descriptor, a shortname used for naming labels generated from set operations, and the file path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 lh/rh pairs of annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use case for adding annotations is working with functional ROIs from group analyses. AnchorBar_init.py imports new .annot files into the database for later operations. Each imported annotation is assigned a unique identifier derived from the labels and vertices that make it up. This allows the scripts to recognize and ignore duplicate annotations that have been given different names, and to differentiate distinct annotations that happen to share a name (as is common when working with functional ROIs).&lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
The filename pattern shown in red may differ depending on your naming convention. When this command executes, it imports every file it finds ending in &amp;quot;th30.abs.annot&amp;quot;, which is handy for importing annotations for multiple hemispheres and/or multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
To remove the &amp;quot;th30.abs&amp;quot; part of the shortname for annotations 25 and 26:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to confirm that the shortnames have been updated:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
=Set Operations on Annotations=&lt;br /&gt;
Note that all set operations require pairs of annotations from the same hemisphere. If unsure, list the annotations and check the hemisphere recorded for the ids you&#039;re interested in. Using annotations from opposite hemispheres results in an error message and produces no output.&lt;br /&gt;
&lt;br /&gt;
==Intersection of Two Annotations==&lt;br /&gt;
Example: We have a functional contrast (A vs Rest) and wish to divide the large clusters along Brodmann Area boundaries. By intersecting the clusters (which cover only part of the brain surface) with one of the anatomical atlas annotations (which covers the entire surface), we assign each vertex a new label according to the cluster and anatomical label it corresponds to. Vertices not belonging to functional clusters are dropped. The end result is a new annotation in which the clusters are subdivided along anatomical boundaries.&lt;br /&gt;
&lt;br /&gt;
We first list the annotations in the database to find the corresponding id numbers. A vs Rest are annotations 23 (lh) and 24 (rh), and the Brodmann annotations are 5 (lh) and 15 (rh). We intersect each hemisphere&#039;s pair of annotations:&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 5 23&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 15 24&lt;br /&gt;
&lt;br /&gt;
==Merging (Union) of Two Annotations==&lt;br /&gt;
Example: We have two tasks, A and B, and each task condition has been contrasted with rest (A vs Rest; B vs Rest). We wish to identify left-hemisphere regions involved in either task, so we compute the union (this functionality has been added to AnchorBar):&lt;br /&gt;
 #Assumes that lh.AnnotA and lh.AnnotB correspond to the annotations derived from the two task contrasts, A vs Rest and B vs Rest, respectively&lt;br /&gt;
 #and that these annotations are numbered 23 and 25&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --union 23 25&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2271</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2271"/>
		<updated>2022-09-14T15:09:52Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Merging (Union) of Two Annotations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program for maintaining a database of FreeSurfer annotation files and computing set operations on them. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that might not be part of your base Python install:&lt;br /&gt;
*hashlib&lt;br /&gt;
*nibabel&lt;br /&gt;
If any are missing, running the scripts will fail with an error naming the missing library. They can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently consists of 3 scripts. Syntax help for each can be obtained with the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID, a hemisphere descriptor, a shortname used for naming labels generated from set operations, and the file path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 lh/rh pairs of annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use case for adding annotations is working with functional ROIs from group analyses. AnchorBar_init.py imports new .annot files into the database for later operations. Each imported annotation is assigned a unique identifier derived from the labels and vertices that make it up. This allows the scripts to recognize and ignore duplicate annotations that have been given different names, and to differentiate distinct annotations that happen to share a name (as is common when working with functional ROIs).&lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
The filename pattern shown in red may differ depending on your naming convention. When this command executes, it imports every file it finds ending in &amp;quot;th30.abs.annot&amp;quot;, which is handy for importing annotations for multiple hemispheres and/or multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
To remove the &amp;quot;th30.abs&amp;quot; part of the shortname for annotations 25 and 26:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to confirm that the shortnames have been updated:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
=Set Operations on Annotations=&lt;br /&gt;
Note that all set operations require pairs of annotations from the same hemisphere. If unsure, list the annotations and check the hemisphere recorded for the ids you&#039;re interested in. Using annotations from opposite hemispheres results in an error message and produces no output.&lt;br /&gt;
&lt;br /&gt;
==Intersection of Two Annotations==&lt;br /&gt;
Example: We have a functional contrast (A vs Rest) and wish to divide the large clusters along Brodmann Area boundaries. By intersecting the clusters (which cover only part of the brain surface) with one of the anatomical atlas annotations (which covers the entire surface), we assign each vertex a new label according to the cluster and anatomical label it corresponds to. Vertices not belonging to functional clusters are dropped. The end result is a new annotation in which the clusters are subdivided along anatomical boundaries.&lt;br /&gt;
&lt;br /&gt;
We first list the annotations in the database to find the corresponding id numbers. A vs Rest are annotations 23 (lh) and 24 (rh), and the Brodmann annotations are 5 (lh) and 15 (rh). We intersect each hemisphere&#039;s pair of annotations:&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 5 23&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 15 24&lt;br /&gt;
&lt;br /&gt;
==Merging (Union) of Two Annotations==&lt;br /&gt;
Example: We have two tasks, A and B, and each task condition has been contrasted with rest (A vs Rest; B vs Rest). We wish to identify left-hemisphere regions involved in either task, so we compute the union (this functionality has been added to AnchorBar):&lt;br /&gt;
 #Assumes that lh.AnnotA and lh.AnnotB correspond to the annotations derived from the two task contrasts, A and B, respectively&lt;br /&gt;
 #and that these annotations are numbered 23 and 25&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --union 23 25&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2270</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2270"/>
		<updated>2022-09-14T15:09:04Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Merging (Union) of Two Annotations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program for maintaining a database of FreeSurfer annotation files and computing set operations on them. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that might not be part of your base Python install:&lt;br /&gt;
*hashlib&lt;br /&gt;
*nibabel&lt;br /&gt;
If any are missing, running the scripts will fail with an error naming the missing library. They can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently consists of 3 scripts. Syntax help for each can be obtained with the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID, a hemisphere descriptor, a shortname used for naming labels generated from set operations, and the file path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 lh/rh pairs of annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use case for adding annotations is working with functional ROIs from group analyses. AnchorBar_init.py imports new .annot files into the database for later operations. Each imported annotation is assigned a unique identifier derived from the labels and vertices that make it up. This allows the scripts to recognize and ignore duplicate annotations that have been given different names, and to differentiate distinct annotations that happen to share a name (as is common when working with functional ROIs).&lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
The filename pattern shown in red may differ depending on your naming convention. When this command executes, it imports every file it finds ending in &amp;quot;th30.abs.annot&amp;quot;, which is handy for importing annotations for multiple hemispheres and/or multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
To remove the &amp;quot;th30.abs&amp;quot; part of the shortname for annotations 25 and 26:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to confirm that the shortnames have been updated:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
=Set Operations on Annotations=&lt;br /&gt;
Note that all set operations require pairs of annotations from the same hemisphere. If unsure, list the annotations and check the hemisphere recorded for the ids you&#039;re interested in. Using annotations from opposite hemispheres results in an error message and produces no output.&lt;br /&gt;
&lt;br /&gt;
==Intersection of Two Annotations==&lt;br /&gt;
Example: We have a functional contrast (A vs Rest) and wish to divide the large clusters along Brodmann Area boundaries. By intersecting the clusters (which cover only part of the brain surface) with one of the anatomical atlas annotations (which covers the entire surface), we assign each vertex a new label according to the cluster and anatomical label it corresponds to. Vertices not belonging to functional clusters are dropped. The end result is a new annotation in which the clusters are subdivided along anatomical boundaries.&lt;br /&gt;
&lt;br /&gt;
We first list the annotations in the database to find the corresponding id numbers. A vs Rest are annotations 23 (lh) and 24 (rh), and the Brodmann annotations are 5 (lh) and 15 (rh). We intersect each hemisphere&#039;s pair of annotations:&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 5 23&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 15 24&lt;br /&gt;
&lt;br /&gt;
==Merging (Union) of Two Annotations==&lt;br /&gt;
Example: We have two tasks, A and B, and each task condition has been contrasted with rest (A vs Rest; B vs Rest). We wish to identify left-hemisphere regions involved in either task, so we compute the union (this functionality has been added to AnchorBar):&lt;br /&gt;
 #Assumes that lh.AnnotA and lh.AnnotB correspond to the annotations derived from the two task contrasts, A and B, respectively&lt;br /&gt;
 #and that these annotations are numbered 14 and 15&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --union 14 15&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2269</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2269"/>
		<updated>2022-08-17T21:48:03Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Set Operations on Annotations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program for maintaining a database of FreeSurfer annotation files and computing set operations on them. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that might not be part of your base Python install:&lt;br /&gt;
*hashlib&lt;br /&gt;
*nibabel&lt;br /&gt;
If any are missing, running the scripts will fail with an error naming the missing library. They can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently consists of 3 scripts. Syntax help for each can be obtained with the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID, a hemisphere descriptor, a shortname used for naming labels generated from set operations, and the file path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 lh/rh pairs of annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use-case for adding annotations is when working with functional ROIs from group analyses. AnchorBar_init.py is the tool that imports new .annot files into the database for later operations. Imported annotations are associated with a unique identifier based on the labels and vertices that make up an annotation. This allows the scripts to recognize and ignore duplicate annotations that might have been given different names, but also to differentiate distinct annotations that happen to have the same name (as is common when working with functional ROIs). &lt;br /&gt;
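The content-based identifier could be computed along the following lines. This is a hypothetical sketch, not AnchorBar&#039;s actual code: the function name is illustrative, and the labels/names arrays are what you would get from, e.g., nibabel.freesurfer.io.read_annot:&lt;br /&gt;

```python
# Hypothetical sketch of a content-based annotation ID.
# labels: per-vertex label indices; names: label names, e.g. from
#   labels, ctab, names = nibabel.freesurfer.io.read_annot(path)
import hashlib
import numpy as np

def annot_fingerprint(labels, names):
    """Hash the vertex assignments and label names so identical
    annotations match regardless of filename."""
    h = hashlib.sha256()
    h.update(np.asarray(labels).tobytes())  # per-vertex assignments
    for name in names:                      # label names, in order
        h.update(bytes(name))
    return h.hexdigest()
```

Two files with identical vertex assignments and label names would hash identically, however they are named.&lt;br /&gt;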
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
Of course, the particular filename pattern in red in the above command may differ from what&#039;s shown here, depending on your file naming convention, but you get the idea. When this code executes, it will import all the files that end in &amp;quot;th30.abs.annot&amp;quot; that it finds. This is handy for importing annotations for multiple hemispheres and/or for multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
If we wanted to replace the unwieldy shortnames of annotations 25 and 26 with something shorter, such as &amp;quot;AvsB&amp;quot;:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to confirm that the shortnames have been updated with the values you just provided:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
=Set Operations on Annotations=&lt;br /&gt;
Note that all set operations require pairs of annotations from the same hemisphere. If unsure, list the annotations and check the hemisphere recorded for the ids you&#039;re interested in. Using annotations from opposite hemispheres will result in an error message and produce no new output.&lt;br /&gt;
&lt;br /&gt;
==Intersection of Two Annotations==&lt;br /&gt;
Example: We have a functional contrast (A vs Rest) and wish to divide the large clusters along Brodmann Area boundaries. By intersecting the clusters (which only cover some of the brain surface) with one of the anatomical atlas annotations (which cover all of the brain surface), we assign a new label to each vertex according to which cluster and which anatomical label it corresponds to. Vertices not belonging to functional clusters will be dropped. The end result is a new annotation where the clusters are subdivided along anatomical boundaries.&lt;br /&gt;
&lt;br /&gt;
We first list the annotations in the database to find the corresponding id numbers. A vs Rest are annotations 23 (lh) and 24 (rh), and the Brodmann annotations are 5 (lh) and 15 (rh). We intersect the two corresponding annotations thus:&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 5 23&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 15 24&lt;br /&gt;
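Conceptually, the vertex-wise intersection described above can be sketched as follows. This is purely illustrative (the function name and the convention that -1 marks vertices outside any functional cluster are assumptions, not AnchorBar&#039;s internals):&lt;br /&gt;

```python
# Illustrative vertex-wise intersection of two same-hemisphere
# annotations: func_labels (partial coverage) and anat_labels (full
# coverage) are per-vertex label indices; -1 means "no cluster".
import numpy as np

def intersect_labels(func_labels, anat_labels):
    """Each surviving vertex gets a combined (cluster, area) label;
    vertices outside the functional clusters stay dropped (-1)."""
    func_labels = np.asarray(func_labels)
    anat_labels = np.asarray(anat_labels)
    out = np.full(func_labels.shape, -1)
    keep = func_labels >= 0               # inside a functional cluster
    # encode the (cluster, area) pair as a single new label id
    out[keep] = func_labels[keep] * 1000 + anat_labels[keep]
    return out
```

The result is one new label per cluster/area combination, which is exactly a cluster subdivided along anatomical boundaries.&lt;br /&gt;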
&lt;br /&gt;
==Merging (Union) of Two Annotations==&lt;br /&gt;
Example: We have two tasks, A and B, and each task condition has been contrasted with rest (A vs Rest; B vs Rest). We wish to identify regions that are involved in either task, so we compute the union.&lt;br /&gt;
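A vertex-wise union might look something like the following sketch (purely hypothetical; the function name, the -1 "no cluster" convention, and the offset scheme are assumptions):&lt;br /&gt;

```python
# Illustrative union of two same-hemisphere annotations: a vertex is
# kept if it belongs to a cluster in either input (-1 = no cluster).
import numpy as np

def union_labels(a_labels, b_labels, b_offset=100):
    """Vertices in A keep their label; vertices only in B get their
    B label shifted by b_offset so the two sources stay distinct."""
    a = np.asarray(a_labels)
    b = np.asarray(b_labels)
    out = a.copy()
    only_b = np.logical_and(a == -1, b >= 0)
    out[only_b] = b[only_b] + b_offset
    return out
```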
Unfortunately, it looks like I have to write this functionality!&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2268</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2268"/>
		<updated>2022-08-17T21:44:13Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Intersection of Two Annotations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that might not be part of your base Python install:&lt;br /&gt;
*hashlib (part of the Python standard library in modern installs)&lt;br /&gt;
*nibabel&lt;br /&gt;
If a required library is missing, the scripts will fail at startup and report which one. Missing third-party libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently comprises three scripts. Syntax help for each script can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID and descriptors indicating the hemisphere, a shortname used for naming labels generated from set operations, and the file path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lh/rh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use-case for adding annotations is when working with functional ROIs from group analyses. AnchorBar_init.py is the tool that imports new .annot files into the database for later operations. Imported annotations are associated with a unique identifier based on the labels and vertices that make up an annotation. This allows the scripts to recognize and ignore duplicate annotations that might have been given different names, but also to differentiate distinct annotations that happen to have the same name (as is common when working with functional ROIs). &lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
Of course, the particular filename pattern in red in the above command may differ from what&#039;s shown here, depending on your file naming convention, but you get the idea. When this code executes, it will import all the files that end in &amp;quot;th30.abs.annot&amp;quot; that it finds. This is handy for importing annotations for multiple hemispheres and/or for multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
If we wanted to replace the unwieldy shortnames of annotations 25 and 26 with something shorter, such as &amp;quot;AvsB&amp;quot;:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to confirm that the shortnames have been updated with the values you just provided:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
=Set Operations on Annotations=&lt;br /&gt;
==Intersection of Two Annotations==&lt;br /&gt;
Example: We have a functional contrast (A vs Rest) and wish to divide the large clusters along Brodmann Area boundaries. By intersecting the clusters (which only cover some of the brain surface) with one of the anatomical atlas annotations (which cover all of the brain surface), we assign a new label to each vertex according to which cluster and which anatomical label it corresponds to. Vertices not belonging to functional clusters will be dropped. The end result is a new annotation where the clusters are subdivided along anatomical boundaries.&lt;br /&gt;
&lt;br /&gt;
We first list the annotations in the database to find the corresponding id numbers. A vs Rest are annotations 23 (lh) and 24 (rh), and the Brodmann annotations are 5 (lh) and 15 (rh). We intersect the two corresponding annotations thus:&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 5 23&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 15 24&lt;br /&gt;
&lt;br /&gt;
==Merging (Union) of Two Annotations==&lt;br /&gt;
Example: We have two tasks, A and B, and each task condition has been contrasted with rest (A vs Rest; B vs Rest). We wish to identify regions that are involved in either task, so we compute the union.&lt;br /&gt;
Unfortunately, it looks like I have to write this functionality!&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2267</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2267"/>
		<updated>2022-08-17T21:37:16Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Intersection of Two Annotations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that might not be part of your base Python install:&lt;br /&gt;
*hashlib (part of the Python standard library in modern installs)&lt;br /&gt;
*nibabel&lt;br /&gt;
If a required library is missing, the scripts will fail at startup and report which one. Missing third-party libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently comprises three scripts. Syntax help for each script can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID and descriptors indicating the hemisphere, a shortname used for naming labels generated from set operations, and the file path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lh/rh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use-case for adding annotations is when working with functional ROIs from group analyses. AnchorBar_init.py is the tool that imports new .annot files into the database for later operations. Imported annotations are associated with a unique identifier based on the labels and vertices that make up an annotation. This allows the scripts to recognize and ignore duplicate annotations that might have been given different names, but also to differentiate distinct annotations that happen to have the same name (as is common when working with functional ROIs). &lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
Of course, the particular filename pattern in red in the above command may differ from what&#039;s shown here, depending on your file naming convention, but you get the idea. When this code executes, it will import all the files that end in &amp;quot;th30.abs.annot&amp;quot; that it finds. This is handy for importing annotations for multiple hemispheres and/or for multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
If we wanted to replace the unwieldy shortnames of annotations 25 and 26 with something shorter, such as &amp;quot;AvsB&amp;quot;:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to confirm that the shortnames have been updated with the values you just provided:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
=Set Operations on Annotations=&lt;br /&gt;
==Intersection of Two Annotations==&lt;br /&gt;
Example: We have a functional contrast (A vs Rest) and wish to divide the large clusters along Brodmann Area boundaries. By intersecting the clusters (which only cover some of the brain surface) with one of the anatomical atlas annotations (which cover all of the brain surface), we assign a new label to each vertex according to which cluster and which anatomical label it corresponds to. Vertices not belonging to functional clusters will be dropped. The end result is a new annotation where the clusters are subdivided along anatomical boundaries.&lt;br /&gt;
We first list the annotations in the database to find the corresponding id numbers. A vs Rest are annotations 23 (lh) and 24 (rh), and the Brodmann annotations are 5 (lh) and 15 (rh). We intersect the two corresponding annotations thus:&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 5 23&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 15 24&lt;br /&gt;
&lt;br /&gt;
==Merging (Union) of Two Annotations==&lt;br /&gt;
Example: We have two tasks, A and B, and each task condition has been contrasted with rest (A vs Rest; B vs Rest). We wish to identify regions that are involved in either task, so we compute the union.&lt;br /&gt;
Unfortunately, it looks like I have to write this functionality!&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2266</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2266"/>
		<updated>2022-08-17T21:36:58Z</updated>

		<summary type="html">&lt;p&gt;Chris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that might not be part of your base Python install:&lt;br /&gt;
*hashlib (part of the Python standard library in modern installs)&lt;br /&gt;
*nibabel&lt;br /&gt;
If a required library is missing, the scripts will fail at startup and report which one. Missing third-party libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently comprises three scripts. Syntax help for each script can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID and descriptors indicating the hemisphere, a shortname used for naming labels generated from set operations, and the file path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lh/rh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use-case for adding annotations is when working with functional ROIs from group analyses. AnchorBar_init.py is the tool that imports new .annot files into the database for later operations. Imported annotations are associated with a unique identifier based on the labels and vertices that make up an annotation. This allows the scripts to recognize and ignore duplicate annotations that might have been given different names, but also to differentiate distinct annotations that happen to have the same name (as is common when working with functional ROIs). &lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
Of course, the particular filename pattern in red in the above command may differ from what&#039;s shown here, depending on your file naming convention, but you get the idea. When this code executes, it will import all the files that end in &amp;quot;th30.abs.annot&amp;quot; that it finds. This is handy for importing annotations for multiple hemispheres and/or for multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
If we wanted to replace the unwieldy shortnames of annotations 25 and 26 with something shorter, such as &amp;quot;AvsB&amp;quot;:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to confirm that the shortnames have been updated with the values you just provided:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
=Set Operations on Annotations=&lt;br /&gt;
==Intersection of Two Annotations==&lt;br /&gt;
Example: We have a functional contrast (A vs Rest) and wish to divide the large clusters along Brodmann Area boundaries. By intersecting the clusters (which only cover some of the brain surface) with one of the anatomical atlas annotations (which cover all of the brain surface), we assign a new label to each vertex according to which cluster and which anatomical label it corresponds to. Vertices not belonging to functional clusters will be dropped. The end result is a new annotation where the clusters are subdivided along anatomical boundaries.&lt;br /&gt;
We first list the annotations in the database to find the corresponding id numbers. A vs Rest are annotations 23 (lh) and 24 (rh), and the Brodmann annotations are 5 (lh) and 15 (rh). We intersect the two corresponding annotations thus:&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 5 23&lt;br /&gt;
 AnchorBar_sets.py --db annot.db --intersect 15 24&lt;br /&gt;
&lt;br /&gt;
==Merging (Union) of Two Annotations==&lt;br /&gt;
Example: We have two tasks, A and B, and each task condition has been contrasted with rest (A vs Rest; B vs Rest). We wish to identify regions that are involved in either task, so we compute the union.&lt;br /&gt;
Unfortunately, it looks like I have to write this functionality!&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2265</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2265"/>
		<updated>2022-08-17T21:19:30Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Requirements */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that might not be part of your base Python install:&lt;br /&gt;
*hashlib (part of the Python standard library in modern installs)&lt;br /&gt;
*nibabel&lt;br /&gt;
If a required library is missing, the scripts will fail at startup and report which one. Missing third-party libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently comprises three scripts. Syntax help for each script can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID and descriptors indicating the hemisphere, a shortname used for naming labels generated from set operations, and the file path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lh/rh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use-case for adding annotations is when working with functional ROIs from group analyses. AnchorBar_init.py is the tool that imports new .annot files into the database for later operations. Imported annotations are associated with a unique identifier based on the labels and vertices that make up an annotation. This allows the scripts to recognize and ignore duplicate annotations that might have been given different names, but also to differentiate distinct annotations that happen to have the same name (as is common when working with functional ROIs). &lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
Of course, the particular filename pattern in red in the above command may differ from what&#039;s shown here, depending on your file naming convention, but you get the idea. When this code executes, it will import all the files that end in &amp;quot;th30.abs.annot&amp;quot; that it finds. This is handy for importing annotations for multiple hemispheres and/or for multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
If we wanted to remove the &amp;quot;th30.abs&amp;quot; part of the shortname for annotations 25 and 26:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to confirm that the shortnames have been updated with the values you just provided:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2264</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2264"/>
		<updated>2022-08-17T21:14:05Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Renaming Imported Annotations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that must be met:&lt;br /&gt;
#argparse&lt;br /&gt;
#sqlite3&lt;br /&gt;
#hashlib&lt;br /&gt;
#nibabel&lt;br /&gt;
#random&lt;br /&gt;
#numpy&lt;br /&gt;
If any of these are missing, running the scripts will fail with an error naming the missing library. Missing libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
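To see the status of all the dependencies at once, rather than discovering them one failed import at a time, a quick check loop can help. This is just a sketch, assuming python3 is on your PATH:&lt;br /&gt;

```shell
# Report each dependency's status. argparse, sqlite3, hashlib, and random
# ship with Python; nibabel and numpy may need a pip install.
for mod in argparse sqlite3 hashlib nibabel random numpy; do
  if python3 -c "import $mod" >/dev/null 2>&1; then
    echo "$mod ok"
  else
    echo "$mod MISSING (try: pip install $mod)"
  fi
done
```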
The utility currently comprises 3 scripts. Syntax help for each script can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID, a hemisphere descriptor, a shortname used for naming labels generated from set operations, and the path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lh/rh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use-case for adding annotations is when working with functional ROIs from group analyses. AnchorBar_init.py is the tool that imports new .annot files into the database for later operations. Imported annotations are associated with a unique identifier based on the labels and vertices that make up an annotation. This allows the scripts to recognize and ignore duplicate annotations that might have been given different names, but also to differentiate distinct annotations that happen to have the same name (as is common when working with functional ROIs). &lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
Of course, the particular filename pattern in red in the above command may differ from what&#039;s shown here, depending on your file naming convention, but you get the idea. When this code executes, it will import all the files that end in &amp;quot;th30.abs.annot&amp;quot; that it finds. This is handy for importing annotations for multiple hemispheres and/or for multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
If we wanted to remove the &amp;quot;th30.abs&amp;quot; part of the shortname for annotations 25 and 26:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database to confirm that the shortnames have been updated with the values you just provided:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2263</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2263"/>
		<updated>2022-08-17T21:12:45Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Adding Annotations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that must be met:&lt;br /&gt;
#argparse&lt;br /&gt;
#sqlite3&lt;br /&gt;
#hashlib&lt;br /&gt;
#nibabel&lt;br /&gt;
#random&lt;br /&gt;
#numpy&lt;br /&gt;
If any of these are missing, running the scripts will fail with an error naming the missing library. Missing libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently comprises 3 scripts. Syntax help for each script can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID, a hemisphere descriptor, a shortname used for naming labels generated from set operations, and the path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lh/rh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use-case for adding annotations is when working with functional ROIs from group analyses. AnchorBar_init.py is the tool that imports new .annot files into the database for later operations. Imported annotations are associated with a unique identifier based on the labels and vertices that make up an annotation. This allows the scripts to recognize and ignore duplicate annotations that might have been given different names, but also to differentiate distinct annotations that happen to have the same name (as is common when working with functional ROIs). &lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations (that you&#039;ve renamed and saved to the fsaverage/label folder):&lt;br /&gt;
 AnchorBar_init.py --db annot.db --annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;*.th30.abs.annot&amp;lt;/span&amp;gt;&lt;br /&gt;
Of course, the particular filename pattern in red in the above command may differ from what&#039;s shown here, depending on your file naming convention, but you get the idea. When this code executes, it will import all the files that end in &amp;quot;th30.abs.annot&amp;quot; that it finds. This is handy for importing annotations for multiple hemispheres and/or for multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Renaming Imported Annotations==&lt;br /&gt;
Imported annotations will be given a shortname that corresponds to the filename of the source annotation. These names can be unwieldy, so AnchorBar_tools.py allows you to assign a new shortname to stored annotations. First, list the annotations and note their shortnames and id values. Use the id when selecting the annotation for renaming. For example:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh a_v_b.th30.abs	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
&lt;br /&gt;
If we wanted to remove the &amp;quot;th30.abs&amp;quot; part of the shortname for the last two annotations:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 26 AvsB&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --rename 25 AvsB&lt;br /&gt;
&lt;br /&gt;
Now inspect the annotation database:&lt;br /&gt;
 ./AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
 &lt;br /&gt;
 *** Annotation List ***&lt;br /&gt;
 id	LR shortname	path&lt;br /&gt;
 1	lh aparc.a2005s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 2	lh aparc.a2009s	/usr/local/freesurfer/subjects/fsaverage/label&lt;br /&gt;
 ...&lt;br /&gt;
 25	lh AvsB	/home/projects/mystudy/fsaverage/label&lt;br /&gt;
 26	rh AvsB	/home/projects/mystudy/fsaverage/label&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2262</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2262"/>
		<updated>2022-08-17T20:53:08Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Adding Annotations */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that must be met:&lt;br /&gt;
#argparse&lt;br /&gt;
#sqlite3&lt;br /&gt;
#hashlib&lt;br /&gt;
#nibabel&lt;br /&gt;
#random&lt;br /&gt;
#numpy&lt;br /&gt;
If any of these are missing, running the scripts will fail with an error naming the missing library. Missing libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently comprises 3 scripts. Syntax help for each script can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID, a hemisphere descriptor, a shortname used for naming labels generated from set operations, and the path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lh/rh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use-case for adding annotations is when working with functional ROIs from group analyses. AnchorBar_init.py is the tool that imports new .annot files into the database for later operations. Imported annotations are associated with a unique identifier based on the labels and vertices that make up an annotation. This allows the scripts to recognize and ignore duplicate annotations that might have been given different names, but also to differentiate distinct annotations that happen to have the same name (as is common when working with functional ROIs). &lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing (I like to stick them in the project copy of the fsaverage folder):&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot $SUBJECTS_DIR/fsaverage/label/&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add one or more new annotations:&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2261</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2261"/>
		<updated>2022-08-17T20:46:02Z</updated>

		<summary type="html">&lt;p&gt;Chris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that must be met:&lt;br /&gt;
#argparse&lt;br /&gt;
#sqlite3&lt;br /&gt;
#hashlib&lt;br /&gt;
#nibabel&lt;br /&gt;
#random&lt;br /&gt;
#numpy&lt;br /&gt;
If any of these are missing, running the scripts will fail with an error naming the missing library. Missing libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently comprises 3 scripts. Syntax help for each script can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID, a hemisphere descriptor, a shortname used for naming labels generated from set operations, and the path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lh/rh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;br /&gt;
&lt;br /&gt;
=Adding Annotations=&lt;br /&gt;
The most common use-case for adding annotations is when working with functional ROIs from group analyses. AnchorBar_init.py is the tool that imports new .annot files into the database for later operations. Imported annotations are associated with a unique identifier based on the labels and vertices that make up an annotation. This allows the scripts to recognize and ignore duplicate annotations that might have been given different names, but also to differentiate distinct annotations that happen to have the same name (as is common when working with functional ROIs). &lt;br /&gt;
&lt;br /&gt;
The import process infers the hemisphere from the filename. Functional cluster annotations are unlikely to have an lh/rh prefix, and so will require renaming prior to importing:&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot ./&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 cp $SUBJECTS_DIR/RFX/myanalysis.&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;/a_vs_b/glm.wls/osgm/cache.th30.abs.sig.ocn.annot ./&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt;.a_vs_b.cache.th30.abs.annot&lt;br /&gt;
 &lt;br /&gt;
To add a new annotation:&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2260</id>
		<title>AnchorBar</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=AnchorBar&amp;diff=2260"/>
		<updated>2022-08-17T20:29:28Z</updated>

		<summary type="html">&lt;p&gt;Chris: Created page with &amp;quot;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&amp;#039;s kind of like SPM&amp;#039;s MarsBar plugin, but for FreeSurfer. =Requirements= AnchorBar has a few dependencies that must be met: #argparse #sqlite3 #hashlib #nibabel #random #numpy Attempting to run the scripts will fail and report missing libraries. They can be installed using pip:  pip install nibabel The utility is (currently) comprised of 3 scr...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;AnchorBar (aka &amp;quot;Analyzer&amp;quot;) is a Python program developed to maintain and compute set operations on FreeSurfer annotation files. It&#039;s kind of like SPM&#039;s MarsBar plugin, but for FreeSurfer.&lt;br /&gt;
=Requirements=&lt;br /&gt;
AnchorBar has a few dependencies that must be met:&lt;br /&gt;
#argparse&lt;br /&gt;
#sqlite3&lt;br /&gt;
#hashlib&lt;br /&gt;
#nibabel&lt;br /&gt;
#random&lt;br /&gt;
#numpy&lt;br /&gt;
If any of these are missing, running the scripts will fail with an error naming the missing library. Missing libraries can be installed using pip:&lt;br /&gt;
 pip install nibabel&lt;br /&gt;
The utility currently comprises 3 scripts. Syntax help for each script can be obtained using the &amp;lt;code&amp;gt;-h&amp;lt;/code&amp;gt; switch:&lt;br /&gt;
 $ AnchorBar_tools.py -h&lt;br /&gt;
 usage: AnchorBar_tools.py [-h] [--list] [--labels annot_id] [--db DB] [--rename annot_id new_shortname]&lt;br /&gt;
                          [--relabel annot_id label_id new_label_name] [--abbrev annot_id label_id abbrev_label_name]&lt;br /&gt;
=Querying Available Annotations=&lt;br /&gt;
AnchorBar_tools.py provides tools to list, rename, and relabel annotations. Each annotation has a unique ID, a hemisphere descriptor, a shortname used for naming labels generated from set operations, and the path of the source .annot file. The default set of annotations stored in &#039;&#039;&#039;annot.db&#039;&#039;&#039; includes the 10 pairs of lh/rh annotations from fsaverage/label:&lt;br /&gt;
 AnchorBar_tools.py --db annot.db --list&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Working_with_ROIs_(Freesurfer)&amp;diff=2259</id>
		<title>Working with ROIs (Freesurfer)</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Working_with_ROIs_(Freesurfer)&amp;diff=2259"/>
		<updated>2022-08-17T20:13:04Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* AnchorBar */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page seriously needs to be reorganized. It documents some procedures that were developed before we figured out better ways to do some things, and so it may be more confusing than helpful.&lt;br /&gt;
&lt;br /&gt;
=Working From Group-Level Results=&lt;br /&gt;
Group-level contrast maps, generated by [[Mri_glmfit]], are well-suited for generating a set of ROIs that are appropriate for use across the set of participants entered into the group-level analysis. They have the useful property of being spatially consistent across all participants. There are a couple of things to note, however. The first is that there must be a sufficient number of participants to produce statistically significant contrast clusters. The second is that the group-level analysis must be done in fsaverage space. If the data you wish to explore is in native &#039;&#039;&#039;self&#039;&#039;&#039; space, you would need to map the ROIs back to a subject&#039;s native surface space. This is not a strong consideration, though: you would already have preprocessed and analyzed individuals in fsaverage space to produce the group-level contrast maps, so I cannot imagine a situation that would absolutely &#039;&#039;require&#039;&#039; you to map ROIs to native space.&lt;br /&gt;
&lt;br /&gt;
=Working From Existing Labels=&lt;br /&gt;
The other primary source of ROIs is the set of anatomical labels provided in the &amp;lt;code&amp;gt;fsaverage/label/&amp;lt;/code&amp;gt; folder. Because this folder in the FreeSurfer install directory is owned by root, you won&#039;t be able to add new labels to it. Instead, make a copy of the fsaverage folder in your project directory:&lt;br /&gt;
 cp -r $FREESURFER_HOME/subjects/fsaverage $SUBJECTS_DIR&lt;br /&gt;
&lt;br /&gt;
=AnchorBar=&lt;br /&gt;
All the procedures and code lower down on the page have been replaced with a handy Python tool called [[AnchorBar]] (aka &amp;quot;Analyzer&amp;quot;), which maintains a sqlite3 database of annotations in template space. The program allows the user to add new annotations to the database and compute set operations on the stored annotations to create new annotation schemes; for example dividing functional clusters along anatomical boundaries.&lt;br /&gt;
&lt;br /&gt;
==Adding Annotations to the AnchorBar database==&lt;br /&gt;
The initial AnchorBar annotation database is called annot.db, and has been populated with standard FreeSurfer annotations.&lt;br /&gt;
&lt;br /&gt;
See also: [[Working with Subcortical ROIs (Freesurfer)]]&lt;br /&gt;
&lt;br /&gt;
= Old Information =&lt;br /&gt;
The information below this point is accurate, but is largely obsolete. I couldn&#039;t bring myself to just delete it all because you never know when it&#039;ll be handy to know some of this stuff.&lt;br /&gt;
&lt;br /&gt;
The .label files created during autorecon3, or by saving an overlay, are plaintext lists of vertices. It is straightforward to find the intersection or union of the vertices in two or more .label files using common Unix shell commands.&lt;br /&gt;
&lt;br /&gt;
== .label Syntax ==&lt;br /&gt;
Below are the first few lines of the lh.banksts.label file for a participant. This file was produced during the Lausanne parcellation procedure.&lt;br /&gt;
 #!ascii label  , from subject FS_0043 vox2ras=TkReg&lt;br /&gt;
 1457&lt;br /&gt;
 32992  -47.867  -62.891  33.007 0.0000000000&lt;br /&gt;
 32993  -48.881  -62.907  32.963 0.0000000000&lt;br /&gt;
 34014  -47.298  -61.696  33.538 0.0000000000&lt;br /&gt;
 34015  -47.972  -61.259  33.648 0.0000000000&lt;br /&gt;
 34024  -47.634  -62.264  33.251 0.0000000000&lt;br /&gt;
 34025  -48.286  -61.775  33.303 0.0000000000&lt;br /&gt;
 ...etc&lt;br /&gt;
&lt;br /&gt;
The first line is a header line. The second line indicates the number of vertices in the label file. Each remaining line gives the vertex number, the x, y, z coordinates of the vertex, and a final column holding an optional per-vertex statistic value (often just 0), which is not terribly important for our purposes.&lt;br /&gt;
&lt;br /&gt;
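As noted above, set operations on .label files reduce to operations on their vertex-number column (column 1) using common shell commands. A sketch using two tiny synthetic files; the vertex data here is made up for illustration:&lt;br /&gt;

```shell
# Build two tiny synthetic .label files: header line, vertex count,
# then one row per vertex (vertex number, x, y, z, stat).
printf '#!ascii label, demo\n3\n10 0 0 0 0\n20 0 0 0 0\n30 0 0 0 0\n' > a.label
printf '#!ascii label, demo\n2\n20 0 0 0 0\n40 0 0 0 0\n' > b.label

# Extract the sorted vertex numbers, skipping the two header lines.
tail -n +3 a.label | awk '{print $1}' | sort -n > a.v
tail -n +3 b.label | awk '{print $1}' | sort -n > b.v

comm -12 a.v b.v   # intersection of the two labels' vertices
sort -nu a.v b.v   # union of the two labels' vertices
```

Note that &amp;lt;code&amp;gt;comm&amp;lt;/code&amp;gt; requires sorted input; for real labels with large vertex numbers, make sure both files are sorted consistently before comparing.&lt;br /&gt;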
==Working From Individual-Level Results==&lt;br /&gt;
Running selxavg3-sess will produce some number of statistical contrast maps that can be loaded as an overlay in tksurfer. Significant clusters can be turned into ROIs for that particular individual in a series of steps. This has the advantage of being applicable to a single dataset -- there is no need to wait until sufficient fMRI data has been collected to produce significant group-level contrasts. The downside is that this process has many steps.&lt;br /&gt;
&lt;br /&gt;
=== Load Overlay ===&lt;br /&gt;
The first step is to load up the participant&#039;s surface data and the desired overlay. Care must be taken in selecting a contrast overlay that is appropriate for the type of analysis that you eventually plan to do. As a rule of thumb, the overlay contrast should be orthogonal to (i.e., completely independent of) the contrast that you are ultimately interested in. For example, if your goal is to compare mean signal strength for low- vs. high-familiar trials, it would be &#039;&#039;inappropriate&#039;&#039; to use the contrast of high-familiar.vs.low-familiar to generate ROIs from which to calculate the mean signal strength. The reason is that this contrast selects voxels that have already been found to demonstrate a significant difference; naturally, any analysis of familiarity effects on these voxels will be &#039;&#039;biased&#039;&#039;! A more appropriate contrast would be something like task.vs.rest, which isn&#039;t biased towards either high- or low-familiarity items, but instead just shows the voxels that were more highly active during the task trials (which presumably contain equal numbers of high- and low-familiarity items).&lt;br /&gt;
&lt;br /&gt;
After the overlay is loaded, you may want to play around with the [https://surfer.nmr.mgh.harvard.edu/fswiki/TkSurferGuide/TkSurferWorkingWithData/TkSurferOverlay overlay configuration parameters]: for example you might choose to &#039;&#039;truncate&#039;&#039; your results to only show positive t-values, or modify your significance threshold.&lt;br /&gt;
&lt;br /&gt;
===Save Clusters as Labels===&lt;br /&gt;
Now comes the somewhat tedious part of individually converting each cluster into a .label file. This simple but repetitive process is described [https://surfer.nmr.mgh.harvard.edu/fswiki/CreatingROIs here] and you can get pretty quick at it:&lt;br /&gt;
#If you wish to subdivide a large cluster, first create one or more paths that completely cross (e.g., bisect) the cluster.&lt;br /&gt;
#Use the cursor to select a point anywhere in the desired cluster&lt;br /&gt;
#Click the &#039;&#039;Custom Fill&#039;&#039; button. A new window will popup:&lt;br /&gt;
#*Choose &#039;&#039;&#039;Up to functional values below threshold&#039;&#039;&#039;&lt;br /&gt;
#*Optionally choose &#039;&#039;&#039;Up to and including paths&#039;&#039;&#039; if you had subdivided the cluster. You will have to repeat these steps on the remainder of the cluster&lt;br /&gt;
#*The region will be colored in white before reverting to a yellow outlined region&lt;br /&gt;
#On the Tools window, go to &#039;&#039;&#039;File &amp;gt; Label &amp;gt; Save Selected Label&#039;&#039;&#039;&lt;br /&gt;
#*A box will open with a path to save the ROI in the subject&#039;s label directory by default&lt;br /&gt;
#*Give it a meaningful name that includes the hemisphere and uses the .label filename extension (e.g. lh.TASK_001.label)&lt;br /&gt;
#*Click OK&lt;br /&gt;
&lt;br /&gt;
=== Merge .label Files ===&lt;br /&gt;
If you wish to work with the resulting .label files individually, your work is done. However, if you wish to merge multiple clusters into a single .annot file, you first have to create a color table (ctab) file, which is read by &amp;lt;code&amp;gt;mris_label2annot&amp;lt;/code&amp;gt;.&lt;br /&gt;
=== Create a CTAB File ===&lt;br /&gt;
Existing instructions for ctab files are a little hard to follow, so I&#039;ll do my best. The CTAB files are plaintext files with 6 columns and 1 row for each label. The structure of the columns is as follows:&lt;br /&gt;
{| &lt;br /&gt;
|Index &lt;br /&gt;
|Label&lt;br /&gt;
|R&lt;br /&gt;
|G&lt;br /&gt;
|B&lt;br /&gt;
|A&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You might find it easiest to give the left and right hemispheres their own CTAB files. The two files can be identical if you want, but separate files let each hemisphere have a different number of labels without causing problems in subsequent steps. The indices for each hemisphere should be numbered independently: for example, with 11 labels in the lh and 8 in the rh, one ctab file should have indices 1 to 11 and the other indices 1 to 8.&lt;br /&gt;
&lt;br /&gt;
The label names should correspond to the ?h.*.label files you created. For example, suppose you created the following 5 label files in the previous step:&lt;br /&gt;
 lh.TASK_001.label&lt;br /&gt;
 lh.TASK_002.label&lt;br /&gt;
 rh.TASK_003.label&lt;br /&gt;
 rh.TASK_004.label&lt;br /&gt;
 rh.TASK_005.label&lt;br /&gt;
The lh ctab file should contain the following labels:&lt;br /&gt;
 0    unknown     128    128    128    0&lt;br /&gt;
 1    TASK_001    110    112    100    0&lt;br /&gt;
 2    TASK_002    128    140    130    0&lt;br /&gt;
and the rh ctab file should contain the following labels:&lt;br /&gt;
 0    unknown     128    128    128    0&lt;br /&gt;
 1    TASK_003    110    112    100    0&lt;br /&gt;
 2    TASK_004    190    140    105    0&lt;br /&gt;
 3    TASK_005    200    150    100    0&lt;br /&gt;
&lt;br /&gt;
Note that as explained above, the ctab indices (column 1) were independent. Sets of RGB values can be generated according to your whims, or you can use an external website to produce a set of n distinguishable colors. For example, I went to http://tools.medialab.sciences-po.fr/iwanthue/ to generate 34 distinct colors. Note that the final column, A, refers to the alpha transparency value, and is always set to 0 as far as I can tell.&lt;br /&gt;
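If you have more than a handful of labels, the ctab file can be generated rather than typed. Below is a minimal shell sketch; the label names and output filename are made up, and the arithmetic color scheme is just one stand-in for a proper palette generator like iwanthue.&lt;br /&gt;

```shell
# Sketch: write a ctab (unknown row plus one row per label) without typing
# RGB values by hand. Label names and the output filename are placeholders.
cd "$(mktemp -d)"
labels="TASK_001 TASK_002"
ctab=lh.mytask.ctab

echo "0    unknown     128    128    128    0" > "$ctab"
i=1
for name in $labels; do
    # derive a distinct-ish RGB triple from the index; any scheme that
    # yields visually distinguishable colors would do just as well
    r=$(( (i * 67)  % 256 ))
    g=$(( (i * 131) % 256 ))
    b=$(( (i * 199) % 256 ))
    echo "$i    $name    $r    $g    $b    0" >> "$ctab"
    i=$((i + 1))
done
cat "$ctab"
```

The alpha column is left at 0 throughout, matching the hand-written examples above.&lt;br /&gt;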
&lt;br /&gt;
===Call mris_label2annot===&lt;br /&gt;
The last step uses the information in the ctab files to generate the .annot files from the appropriate sets of .label files. The syntax is:&lt;br /&gt;
 mris_label2annot --s ${SUBJECT_ID} \&lt;br /&gt;
 --h [lh | rh] \&lt;br /&gt;
 --ctab [lh | rh].${CTAB_FILENAME} \&lt;br /&gt;
 --a ${PREFIX} \&lt;br /&gt;
 --ldir ${SUBJECTS_DIR}/${SUBJECT_ID}/label&lt;br /&gt;
This parses the indicated ctab file to identify the labels listed therein. Each identified label is then assigned the colors given in that ctab file, and the result is written to the .annot file specified by the ${PREFIX} associated with the --a argument. It is assumed that all the .label files you created earlier were saved to ${SUBJECTS_DIR}/${SUBJECT_ID}/label; if they live elsewhere, specify the actual location with the --ldir argument.&lt;br /&gt;
&lt;br /&gt;
Note that the above syntax applies when you have an &#039;&#039;unknown&#039;&#039; label and a corresponding entry in your CTAB file. If you do not wish to include the unknown label, start your CTAB index numbering at 0 with the first real label, and pass the &amp;lt;code&amp;gt;--no-unknown&amp;lt;/code&amp;gt; command-line argument when calling &amp;lt;code&amp;gt;mris_label2annot&amp;lt;/code&amp;gt;. I didn&#039;t remember this being a problem the first time I tried this, but on a later pass the CTAB index needed to start at 0, and index 0 could not be associated with unknown unless there was a &#039;&#039;?h.unknown.label&#039;&#039; file, even with the --no-unknown switch. Editing the ctab files so that indices 0, 1, ... , n-1 corresponded to label_001, label_002, ... , label_n made everything work.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==Preparation of .label Files==&lt;br /&gt;
The group analysis used the fsaverage surface, and since mris_divide_parcellation (which you may also be using, or use later) assumes we&#039;re subdividing a particular subject&#039;s annotation, it makes sense to copy the source .annot files into the fsaverage label/ folder (giving the appropriate ?h prefix). The group analysis annot file will be found in the contrast directory for a particular analysis. The example below describes working with the .annot file for the TASK contrast in the group analysis, found under &#039;&#039;&#039;$SUBJECTS_DIR/RFX&#039;&#039;&#039;&lt;br /&gt;
 HEMI=rh  &lt;br /&gt;
 cd $SUBJECTS_DIR/RFX/analysis.${HEMI}/task/glm.wls/osgm&lt;br /&gt;
 ANNOT_FILE=cache.th30.abs.sign.ocn.annot&lt;br /&gt;
 cp ${ANNOT_FILE} $SUBJECTS_DIR/fsaverage/label/${HEMI}.${ANNOT_FILE}&lt;br /&gt;
&lt;br /&gt;
Note that fsaverage may not be writable if it is a symbolic link to $FREESURFER_HOME/subjects/fsaverage. If this is the case, you will have to make a copy of fsaverage that you own:&lt;br /&gt;
 #first unlink the symbolic link to fsaverage&lt;br /&gt;
 cd $SUBJECTS_DIR&lt;br /&gt;
 unlink fsaverage&lt;br /&gt;
 #now make a new empty fsaverage directory. You will own it and so should be able to read/write anything it contains&lt;br /&gt;
 mkdir fsaverage&lt;br /&gt;
 #finally, copy everything in the &#039;&#039;real&#039;&#039; fsaverage directory to the directory that you own:&lt;br /&gt;
 cp -r $FREESURFER_HOME/subjects/fsaverage/* fsaverage&lt;br /&gt;
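The three steps above can be wrapped in a guard so that nothing is unlinked unless fsaverage really is a symbolic link. A sketch using stand-in paths under /tmp; in real use the two variables come from your FreeSurfer environment rather than being set by hand.&lt;br /&gt;

```shell
# Demo with stand-in paths; normally FREESURFER_HOME and SUBJECTS_DIR are
# already set by your FreeSurfer setup, and the ln line is not needed.
FREESURFER_HOME=/tmp/fs_demo
SUBJECTS_DIR=/tmp/subjects_demo
mkdir -p "$FREESURFER_HOME/subjects/fsaverage/label" "$SUBJECTS_DIR"
cd "$SUBJECTS_DIR"
ln -sfn "$FREESURFER_HOME/subjects/fsaverage" fsaverage   # simulate the link

if [ -L fsaverage ]; then            # only act if it really is a symlink
    unlink fsaverage
    mkdir fsaverage
    cp -r "$FREESURFER_HOME/subjects/fsaverage/." fsaverage/
fi
if [ -d fsaverage/label ]; then echo ready; fi
```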
&lt;br /&gt;
===Breaking Up .annot into multiple .label Files ===&lt;br /&gt;
The steps described below do not work on .annot files because they are not plaintext and easily manipulated. However, [https://surfer.nmr.mgh.harvard.edu/fswiki/mri_annotation2label mri_annotation2label] will convert a .annot file to a series of .label files:&lt;br /&gt;
 BASE_LABEL_NAME=TD_W&lt;br /&gt;
 ANNOT=lh.cache.th30.abs.sign.ocn.annot&lt;br /&gt;
 HEMI=lh&lt;br /&gt;
 SURF=orig&lt;br /&gt;
 mri_annotation2label --annotation ${ANNOT} --hemi ${HEMI} --subject fsaverage --surf ${SURF} --labelbase ${HEMI}.${BASE_LABEL_NAME}&lt;br /&gt;
Alternatively, you might want to dump all your .label files into a single output directory. Instead of supplying a labelbase, you specify an output dir (&#039;&#039;outdir&#039;&#039;):&lt;br /&gt;
 mri_annotation2label --subject fsaverage --hemi lh --outdir LDT.lh --annotation lh.ldt_hi.annot&lt;br /&gt;
This command created a new subdirectory (LDT.lh), and broke up lh.ldt_hi.annot into its constituent label files, placing them in the subdirectory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: when working with functional contrast map annotation files, the output files will be called ?h.cluster-nnn.label. If you convert multiple .annot files for the same hemisphere (e.g., from two separate contrasts) with the intention of later merging them, you will run the risk of overwriting earlier .label files unless you take care to rename them as you go.&#039;&#039;&#039;&lt;br /&gt;
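One way to avoid such collisions is to rename each batch of cluster labels right after converting it. A minimal sketch, using throwaway files in /tmp and an assumed contrast name:&lt;br /&gt;

```shell
# Stand-in setup: two cluster labels as mri_annotation2label would name them
mkdir -p /tmp/rename_demo
cd /tmp/rename_demo
touch lh.cluster-001.label lh.cluster-002.label

CONTRAST=CONTRAST_A                  # assumed contrast name
for f in ?h.cluster-*.label; do
    hemi=${f%%.*}                    # lh or rh
    num=${f#*.cluster-}              # nnn.label
    mv "$f" "${hemi}.${CONTRAST}-${num}"
done
ls
```

Run this after each conversion, changing CONTRAST, and the generic ?h.cluster-nnn.label names can never be overwritten by the next .annot file.&lt;br /&gt;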
&lt;br /&gt;
=Set Operations on .label Files=&lt;br /&gt;
==Intersecting .label Files (Intersection)==&lt;br /&gt;
Given what we know about the contents of a label file, the unix &amp;lt;code&amp;gt;comm&amp;lt;/code&amp;gt; command should suffice to find the common entries between two label files, which is the set intersection of vertices appearing in each. Note that &amp;lt;code&amp;gt;comm&amp;lt;/code&amp;gt; requires sorted input, and the two-line headers must be stripped first.&lt;br /&gt;
 &lt;br /&gt;
[https://www.mail-archive.com/freesurfer@nmr.mgh.harvard.edu/msg11344.html Relevant thread here]&lt;br /&gt;
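A minimal sketch of that approach, using two tiny made-up label files. The headers are dropped, only the vertex-index column is kept, and the lists are sorted before comm compares them:&lt;br /&gt;

```shell
cd "$(mktemp -d)"

# Two toy label files: header line, vertex count, then vertex rows
printf '#!ascii label demo\n3\n10 0 0 0 0\n11 0 0 0 0\n12 0 0 0 0\n' > lh.A.label
printf '#!ascii label demo\n2\n11 0 0 0 0\n12 0 0 0 0\n' > lh.B.label

# strip the 2-line header, keep the vertex index, sort for comm
tail -n +3 lh.A.label | awk '{print $1}' | sort > a.verts
tail -n +3 lh.B.label | awk '{print $1}' | sort > b.verts
comm -12 a.verts b.verts             # vertices present in both labels
```

Here comm prints the shared vertices 11 and 12. To write the result back out as a .label file, you would prepend a new two-line header with the updated vertex count.&lt;br /&gt;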
&lt;br /&gt;
==Merging .label Files (Union)==&lt;br /&gt;
A union operation will find all vertices appearing in &#039;&#039;either&#039;&#039; .label file. You will want to make sure that only unique values are retained (i.e., you don&#039;t want to list the same vertex multiple times in the output .label file).&lt;br /&gt;
&lt;br /&gt;
The easiest way to accomplish this is FreeSurfer&#039;s &amp;lt;code&amp;gt;mri_mergelabels&amp;lt;/code&amp;gt; tool:&lt;br /&gt;
 mri_mergelabels -i label1 -i label2 ... -o outputlabel &lt;br /&gt;
or&lt;br /&gt;
  mri_mergelabels -d &amp;lt;dirname&amp;gt; -o outputlabel &lt;br /&gt;
This FreeSurfer tool merges two or more label files. It does this by concatenating the label files (after removing the first two header lines from each) and inserting a new two-line header. The number of entries in the new file (i.e., the number on the second line) is computed by summing the counts from the input files. When you pass a directory name with the -d flag, it will merge all labels in that directory. This is handy when you want to merge many labels created from multiple .annot files.&lt;br /&gt;
&lt;br /&gt;
For example, if I have followed the steps above to create a series of ?h.CONTRAST_A-0??.label and ?h.CONTRAST_B-0??.label files, then I can find the union of CONTRAST_A and CONTRAST_B by creating a lh/ and rh/ subdirectory and moving the label files to the appropriate directories. Then:&lt;br /&gt;
 mri_mergelabels -d lh -o lh.AB_UNION.label&lt;br /&gt;
 mri_mergelabels -d rh -o rh.AB_UNION.label&lt;br /&gt;
The above two commands will generate my lh and rh label files in the current working directory (assumed to be ${SUBJECTS_DIR}/${SUBJECT}/label; ${SUBJECT} is likely to be fsaverage if you are making a functional mask from group-level analyses in fsaverage space).&lt;br /&gt;
===You&#039;re Going to Want to Turn These into an .annot File===&lt;br /&gt;
After you&#039;ve merged multiple .label files into a single .label file, all the vertices will be assigned to the same label. You will probably want to go through and relabel each of the distinct regions (e.g., by cluster) in FreeSurfer. After you&#039;ve done that, save (export) each of the relabeled regions as separate .label files (e.g., CLUST_01, CLUST_02, ... etc.) in a working directory. &lt;br /&gt;
&lt;br /&gt;
After you do, you need to create or snag a CLUT file.&lt;br /&gt;
&lt;br /&gt;
Finally, you use mris_label2annot to merge your united labels, along with the CLUT into a single .annot file.&lt;br /&gt;
&lt;br /&gt;
=ROI &#039;VennDiagram&#039; Matlab Function=&lt;br /&gt;
ROIVennDiagram.m uses intersect and setdiff functions to extract intersecting and unique vertices. Variable names suggest that it&#039;s used for T1 and T2 data, but you can use it for any paired data. The label files generated above (using mri_annotation2label) need to be copied into a lh and rh directory within a parent dir. A simple modification would allow these files to remain in your fsaverage or other subject&#039;s dir, if you wanted to clutter those up.&lt;br /&gt;
 &lt;br /&gt;
 function ROIvenndiagram = ROIvenndiagram(hemi)&lt;br /&gt;
 %% function ROIvenndiagram = ROIvenndiagram(&#039;hemi&#039;);&lt;br /&gt;
 %Pull out intersecting and different vertices for FS label files,&lt;br /&gt;
 %and store in one of 3 text files (T1, T1T2, T2)&lt;br /&gt;
 %&lt;br /&gt;
 %Currently takes one string input indicating lh or rh.&lt;br /&gt;
 %&lt;br /&gt;
 %You&#039;ll need to adjust the file names on lines 42 and 43 to match your&lt;br /&gt;
 %file names.&lt;br /&gt;
 %&lt;br /&gt;
 %Note - The headers in the output text files have quotes around them, as&lt;br /&gt;
 %the headers begin with matlab unfriendly characters (#). So, you&#039;ll need to&lt;br /&gt;
 %go into those 3 output text files and delete the quotes.&lt;br /&gt;
 %&lt;br /&gt;
 %Intended for use in parent dir containing lh and rh dirs, which&lt;br /&gt;
 %contain label files.&lt;br /&gt;
 % _____________________________________&lt;br /&gt;
 % G.J. Smith&lt;br /&gt;
 % gjsmith4@buffalo.edu&lt;br /&gt;
 % _____________________________________&lt;br /&gt;
 parentdir=pwd;&lt;br /&gt;
 hemidir = sprintf(&#039;%s/%s&#039;, pwd, hemi);&lt;br /&gt;
 cd(hemidir);&lt;br /&gt;
 &lt;br /&gt;
 %% Pre-initialize array for cating labels&lt;br /&gt;
 T1Agg = cell(1,5);&lt;br /&gt;
 T2Agg = cell(1,5);&lt;br /&gt;
 &lt;br /&gt;
 % Begin cycle through label files&lt;br /&gt;
 for labeln = 1:3000 %arbitrary number higher than any label number&lt;br /&gt;
     %% Apply appropriate number of 0&#039;s before number - eg. 001, 010, 100&lt;br /&gt;
     numcorrection=num2str(labeln);&lt;br /&gt;
     if labeln &amp;lt; 10&lt;br /&gt;
         labelnum=strcat(&#039;00&#039;,numcorrection);&lt;br /&gt;
     elseif labeln &amp;lt; 100&lt;br /&gt;
         labelnum=strcat(&#039;0&#039;,numcorrection);&lt;br /&gt;
     else&lt;br /&gt;
         labelnum=strcat(numcorrection);&lt;br /&gt;
     end&lt;br /&gt;
     &lt;br /&gt;
     %% T1 and T2 label file names -- Edit these to match your file naming convention&lt;br /&gt;
     T1_filename = sprintf(&#039;%s.T1-%s.label&#039;, hemi, labelnum);&lt;br /&gt;
     T2_filename = sprintf(&#039;%s.T2-%s.label&#039;, hemi, labelnum);&lt;br /&gt;
     &lt;br /&gt;
     %% Build one large aggregate file with all vertices from all label files&lt;br /&gt;
      header = 2; % lines 1 and 2 of label file stored separately&lt;br /&gt;
     delimiter = &#039; &#039;;&lt;br /&gt;
     if exist([pwd filesep T1_filename], &#039;file&#039;)&lt;br /&gt;
         %load data&lt;br /&gt;
         T1file = importdata(T1_filename,delimiter,header);&lt;br /&gt;
         %Store in one array&lt;br /&gt;
         T1Agg = vertcat(T1Agg, num2cell(T1file.data));&lt;br /&gt;
     end&lt;br /&gt;
     if exist([pwd filesep T2_filename], &#039;file&#039;)&lt;br /&gt;
         %load data&lt;br /&gt;
         T2file = importdata(T2_filename,delimiter,header);&lt;br /&gt;
         %Store in one array&lt;br /&gt;
         T2Agg = vertcat(T2Agg, num2cell(T2file.data));&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
 &lt;br /&gt;
 %% Check that label files were found&lt;br /&gt;
 if exist(&#039;T1file&#039;, &#039;var&#039;)&lt;br /&gt;
     display(&#039;Labels found, pulling out intersects and setdiffs&#039;);&lt;br /&gt;
 else&lt;br /&gt;
     display(&#039;No label files found, check your dir or file names&#039;);&lt;br /&gt;
     return;&lt;br /&gt;
 end&lt;br /&gt;
 &lt;br /&gt;
 %% Find intersection and differences&lt;br /&gt;
 T1Agg = cell2mat(T1Agg);&lt;br /&gt;
 T2Agg = cell2mat(T2Agg);&lt;br /&gt;
 %compare&lt;br /&gt;
 %union = union(T1file.data(:,1), T2file.data(:,1));&lt;br /&gt;
 intersection = intersect(T1Agg, T2Agg, &#039;rows&#039;); %&#039;rows&#039; works with same # of columns&lt;br /&gt;
 T1_diffs = setdiff(T1Agg, T2Agg, &#039;rows&#039;);&lt;br /&gt;
 T2_diffs = setdiff(T2Agg, T1Agg, &#039;rows&#039;);&lt;br /&gt;
 &lt;br /&gt;
 %% Build new label files&lt;br /&gt;
 %T1&lt;br /&gt;
 T1_differences = cell(50000, 5); %hard coded arbitrarily high number of rows for pre-init&lt;br /&gt;
 T1_differences(1,1) = T1file.textdata(1,1); % header&lt;br /&gt;
 T1_differences{2,1} = length(T1_diffs); % header line 2, new number of rows&lt;br /&gt;
 T1_differences(3:length(T1_diffs) + 2,:) = num2cell(T1_diffs); % data&lt;br /&gt;
 &lt;br /&gt;
 %T2&lt;br /&gt;
 T2_differences = cell(50000, 5); %hard coded arbitrarily high number of rows for pre-init&lt;br /&gt;
 T2_differences(1,1) = T2file.textdata(1,1); % header&lt;br /&gt;
 T2_differences{2,1} = length(T2_diffs); % header line 2, new number of rows&lt;br /&gt;
 T2_differences(3:length(T2_diffs) + 2,:) = num2cell(T2_diffs); % data&lt;br /&gt;
 &lt;br /&gt;
 %Match&lt;br /&gt;
 Match_labelfile = cell(50000, 5); %hard coded arbitrarily high number of rows for pre-init&lt;br /&gt;
 Match_labelfile(1,1) = T1file.textdata(1,1); % header T1 and T2 for same subject is the same&lt;br /&gt;
 Match_labelfile{2,1} = length(intersection); % header line 2, new number of rows&lt;br /&gt;
 Match_labelfile(3:length(intersection) + 2, :) = num2cell(intersection); % data&lt;br /&gt;
 &lt;br /&gt;
  %% Remove blank rows from oversized pre-allocated arrays&lt;br /&gt;
 T1_differences(all(cellfun(@isempty,T1_differences),2),:) = [];&lt;br /&gt;
 T2_differences(all(cellfun(@isempty,T2_differences),2),:) = [];&lt;br /&gt;
 Match_labelfile(all(cellfun(@isempty,Match_labelfile),2),:) = [];&lt;br /&gt;
 &lt;br /&gt;
 %% Write files&lt;br /&gt;
 cd(parentdir);&lt;br /&gt;
 T1_fname = sprintf(&#039;%s.T1.label.txt&#039;, hemi);&lt;br /&gt;
 writetable(cell2table(T1_differences), T1_fname,&#039;Delimiter&#039;,&#039; &#039;,&#039;WriteVariableNames&#039;,false);&lt;br /&gt;
 &lt;br /&gt;
 %T2&lt;br /&gt;
 T2_fname = sprintf(&#039;%s.T2.label.txt&#039;, hemi);&lt;br /&gt;
 writetable(cell2table(T2_differences), T2_fname,&#039;Delimiter&#039;,&#039; &#039;,&#039;WriteVariableNames&#039;,false);&lt;br /&gt;
 &lt;br /&gt;
 %Match&lt;br /&gt;
 Match_fname = sprintf(&#039;%s.T1T2.label.txt&#039;, hemi);&lt;br /&gt;
 writetable(cell2table(Match_labelfile), Match_fname,&#039;Delimiter&#039;,&#039; &#039;,&#039;WriteVariableNames&#039;,false);&lt;br /&gt;
 &lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Visualize and Cluster (Intersected Label File)==&lt;br /&gt;
*in progress*&lt;br /&gt;
Open tksurfer.&lt;br /&gt;
 tksurfer fsaverage ?h inflated&lt;br /&gt;
&lt;br /&gt;
Import ?h.aparc.annot files for a reference&lt;br /&gt;
&lt;br /&gt;
Load the intersected label file.&lt;br /&gt;
&lt;br /&gt;
Use the cut line tool to make cluster boundaries (if necessary; otherwise see below).&lt;br /&gt;
&lt;br /&gt;
Erase aparc.annot labels that overlap with a desired label.&lt;br /&gt;
&lt;br /&gt;
Select the area you want to cluster and use custom fill to create a new label: &#039;&#039;up to and including paths&#039;&#039;, &#039;&#039;up to other labels&#039;&#039;, and &#039;&#039;up to unlabeled&#039;&#039;, filling from the last clicked vertex. If you left adjacent aparc.annot labels in place, these can be used as boundaries.&lt;br /&gt;
&lt;br /&gt;
Save each label file using &#039;&#039;Save Selected Label&#039;&#039;. You must pre-make the text files, as you can&#039;t specify file names here; there must be a better way of doing this, or a way to get export annotation to work.&lt;br /&gt;
&lt;br /&gt;
Pro tip: Colour your label file red; when creating new labels, they will be blue. (see below)&lt;br /&gt;
&lt;br /&gt;
==CTAB and Annot==&lt;br /&gt;
Now you&#039;ll need to create a ctab file for the labels to make a .annot file.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have a ton of labels, this can easily be done by hand by copying and editing the &#039;FreeSurferColorLUT.txt&#039; file found in /usr/local/freesurfer, and simply renaming the regions (label names) to match your labelling convention. Then save it as a .annot.ctab file, e.g. &#039;RH.annot.ctab&#039;.&lt;br /&gt;
&lt;br /&gt;
Then you&#039;ll want to use the mris_label2annot command found [https://surfer.nmr.mgh.harvard.edu/fswiki/mris_label2annot here]. There are 2 ways to do this. The first is to pass it each of the label files using the --l switch, along with a ctab file. Example:&lt;br /&gt;
&lt;br /&gt;
 mris_label2annot --l rh.label-001.txt --l rh.label-002.txt --l rh.label-003.txt --l rh.label-004.txt --l rh.label-005.txt --l rh.label-006.txt --l rh.label-007.txt --l rh.label-008.txt --l rh.label-009.txt --l rh.label-010.txt --l   rh.label-011.txt --l rh.label-012.txt --l rh.label-013.txt --l rh.label-014.txt --l rh.label-015.txt --l rh.label-016.txt --l rh.label-017.txt --l rh.label-018.txt --l rh.label-019.txt --l rh.label-020.txt --l rh.label-021.txt --l  rh.label-022.txt --l rh.label-023.txt --l rh.label-024.txt --l rh.label-025.txt --l rh.label-026.txt --ctab RH.annot.ctab --s fsaverage --h rh --annot T1T2intersected&lt;br /&gt;
&lt;br /&gt;
That&#039;s unwieldy (and yet, that&#039;s how I first did this). The easier way is to throw all the label files into a single directory, and pass the directory and a ctab file.&lt;br /&gt;
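Alternatively, the repetitive --l list can be generated in the shell rather than typed. In this sketch the file names are made up, and the final echo only previews the assembled command instead of running it:&lt;br /&gt;

```shell
# Stand-in label files; replace with your real rh.label-*.txt names
mkdir -p /tmp/l2a_demo
cd /tmp/l2a_demo
touch rh.label-001.txt rh.label-002.txt rh.label-003.txt

# build "--l file" pairs for every matching label file
args=""
for f in rh.label-*.txt; do
    args="$args --l $f"
done

# preview the full command; drop the echo to actually run it
echo mris_label2annot$args --ctab RH.annot.ctab --s fsaverage --h rh --annot T1T2intersected
```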
&lt;br /&gt;
Now that you have .annot files for your intersecting vertices, you can [http://ccnlab.psy.buffalo.edu/wiki/index.php/Freesurfer_Subparcellation#Transfer_Subparcellation_to_Other_Subjects subparcellate] and/or continue with network analyses.&lt;br /&gt;
&lt;br /&gt;
==Using R to Work With Colortables==&lt;br /&gt;
This is kind of a stub topic, but there is an R library that helps with extracting a CLUT from an existing .annot file: the [https://rdrr.io/cran/freesurferformats/ freesurferformats R package]. In particular, there is a function to extract the color lookup table from a .annot file:&lt;br /&gt;
 &amp;gt; library(&amp;quot;freesurferformats&amp;quot;)&lt;br /&gt;
 &amp;gt; annotfile=&amp;quot;/path/to/subject/label/lh.myannot.annot&amp;quot;&lt;br /&gt;
 &amp;gt; annot = read.fs.annot(annotfile);&lt;br /&gt;
 &amp;gt; colortable = colortable.from.annot(annot);&lt;br /&gt;
 &amp;gt; head(colortable);&lt;br /&gt;
&lt;br /&gt;
If you install this library, you can use [[Extracting CLUT from .annot Files Using R|this handy R script]] that I wrote that can be run from the terminal to extract a CLUT from a .annot file.&lt;br /&gt;
&lt;br /&gt;
[[Category: FreeSurfer]]&lt;br /&gt;
[[Category: ROI]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Working_with_ROIs_(Freesurfer)&amp;diff=2258</id>
		<title>Working with ROIs (Freesurfer)</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Working_with_ROIs_(Freesurfer)&amp;diff=2258"/>
		<updated>2022-08-17T20:08:21Z</updated>

		<summary type="html">&lt;p&gt;Chris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page seriously needs to be reorganized. It documents some procedures that were developed before we figured out better ways to do some things, and so it may be more confusing than helpful.&lt;br /&gt;
&lt;br /&gt;
=Working From Group-Level Results=&lt;br /&gt;
Group-level contrast maps, generated by [[Mri_glmfit]], are well-suited for generating a set of ROIs appropriate for use across the set of participants entered into the group-level analysis, because they are spatially consistent across all participants. There are a couple of things to note, however. First, there must be enough participants to produce statistically significant contrast clusters. Second, the group-level analysis must be done in fsaverage space. If the data you wish to explore are in native &#039;&#039;&#039;self&#039;&#039;&#039; space, you would need to map the ROIs back to the subject&#039;s native surface space. This is not a strong consideration, though: you would already have preprocessed and analyzed individuals in fsaverage space in order to produce the group-level contrast maps, so I cannot imagine a situation that would absolutely &#039;&#039;require&#039;&#039; you to map ROIs to native space.&lt;br /&gt;
&lt;br /&gt;
=Working From Existing Labels=&lt;br /&gt;
The other primary source of ROIs come from anatomical labels provided in the &amp;lt;code&amp;gt;fsaverage/label/&amp;lt;/code&amp;gt; folder. Because the folder in the FreeSurfer install directory is owned by root, you won&#039;t be able to add new labels to this folder. Instead, you should make a copy of the fsaverage folder in your project directory:&lt;br /&gt;
 cp -r $FREESURFER_HOME/subjects/fsaverage $SUBJECTS_DIR&lt;br /&gt;
&lt;br /&gt;
=AnchorBar=&lt;br /&gt;
All the procedures and code lower down on the page have been replaced with a handy Python tool called AnchorBar (aka &amp;quot;Analyzer&amp;quot;), which maintains a sqlite3 database of annotations in template space. The program allows the user to add new annotations to the database and to compute set operations on the stored annotations to create new annotation schemes; for example, dividing functional clusters along anatomical boundaries.&lt;br /&gt;
&lt;br /&gt;
==Adding Annotations to the AnchorBar database==&lt;br /&gt;
The initial AnchorBar annotation database is called annot.db, and has been populated with standard FreeSurfer annotations.&lt;br /&gt;
&lt;br /&gt;
See also: [[Working with Subcortical ROIs (Freesurfer)]]&lt;br /&gt;
= Old Information =&lt;br /&gt;
The information below this point is accurate, but is largely obsolete. I couldn&#039;t bring myself to just delete it all because you never know when it&#039;ll be handy to know some of this stuff.&lt;br /&gt;
&lt;br /&gt;
The .label files created during autorecon3, or by saving an overlay are plaintext lists of vertices. It should be straightforward to find the intersection or union of the vertices in two or more .label files using common Unix shell commands.&lt;br /&gt;
&lt;br /&gt;
== .label Syntax ==&lt;br /&gt;
Below is the first few lines of the lh.banksts.label file for a participant. This file was produced during the Lausanne parcellation procedure.&lt;br /&gt;
 #!ascii label  , from subject FS_0043 vox2ras=TkReg&lt;br /&gt;
 1457&lt;br /&gt;
 32992  -47.867  -62.891  33.007 0.0000000000&lt;br /&gt;
 32993  -48.881  -62.907  32.963 0.0000000000&lt;br /&gt;
 34014  -47.298  -61.696  33.538 0.0000000000&lt;br /&gt;
 34015  -47.972  -61.259  33.648 0.0000000000&lt;br /&gt;
 34024  -47.634  -62.264  33.251 0.0000000000&lt;br /&gt;
 34025  -48.286  -61.775  33.303 0.0000000000&lt;br /&gt;
 ...etc&lt;br /&gt;
&lt;br /&gt;
The first line is a header line. The second line indicates the number of vertices in the label file. Each remaining line gives the vertex number, the x, y, z coordinates of the vertex, and a final per-vertex value (often used to hold a statistic), which is not terribly important for our purposes.&lt;br /&gt;
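Given this structure, a quick shell sanity check is to compare the vertex count declared on line 2 with the actual number of data rows; mri_mergelabels relies on that count being accurate. The label file here is a tiny made-up stand-in:&lt;br /&gt;

```shell
cd "$(mktemp -d)"

# toy label file: header, declared count, then one row per vertex
printf '%s\n' '#!ascii label  , from subject demo vox2ras=TkReg' \
    '2' \
    '32992  -47.867  -62.891  33.007 0.0000000000' \
    '32993  -48.881  -62.907  32.963 0.0000000000' > lh.demo.label

declared=$(sed -n '2p' lh.demo.label)          # count from line 2
actual=$(tail -n +3 lh.demo.label | wc -l)     # actual data rows
echo "declared=$declared actual=$actual"
```

If the two numbers disagree, the header was probably not updated after editing the file by hand.&lt;br /&gt;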
&lt;br /&gt;
==Working From Individual-Level Results==&lt;br /&gt;
Running selxavg3-sess will produce some number of statistical contrast maps that can be loaded as an overlay in tksurfer. Significant clusters can be turned into ROIs for that particular individual in a series of steps. This has the advantage of being applicable to a single dataset -- there is no need to wait until sufficient fMRI data has been collected to produce significant group-level contrasts. The downside is that this process has many steps.&lt;br /&gt;
&lt;br /&gt;
=== Load Overlay ===&lt;br /&gt;
The first step is to load up the participant&#039;s surface data and the desired overlay. Take care to select a contrast overlay that is appropriate for the type of analysis you eventually plan to do. As a rule of thumb, the overlay contrast should be orthogonal to (i.e., completely independent of) the contrast you are ultimately interested in. For example, if your goal is to compare mean signal strength for low- vs. high-familiar trials, it would be &#039;&#039;inappropriate&#039;&#039; to use the high-familiar.vs.low-familiar contrast to generate the ROIs from which to calculate the mean signal strength, because that contrast selects voxels that have already been found to show a significant difference; any analysis of familiarity effects on these voxels will naturally be &#039;&#039;biased&#039;&#039;! A more appropriate contrast would be something like task.vs.rest, which isn&#039;t biased towards either high- or low-familiarity items, but instead just shows the voxels that were more active during task trials (which presumably contain equal numbers of high- and low-familiarity items).&lt;br /&gt;
&lt;br /&gt;
After the overlay is loaded, you may want to play around with the [https://surfer.nmr.mgh.harvard.edu/fswiki/TkSurferGuide/TkSurferWorkingWithData/TkSurferOverlay overlay configuration parameters]: for example you might choose to &#039;&#039;truncate&#039;&#039; your results to only show positive t-values, or modify your significance threshold.&lt;br /&gt;
&lt;br /&gt;
===Save Clusters as Labels===&lt;br /&gt;
Now comes the somewhat tedious part of individually converting each cluster into a .label file. This simple but repetitive process is described [https://surfer.nmr.mgh.harvard.edu/fswiki/CreatingROIs here] and you can get pretty quick at it:&lt;br /&gt;
#If you wish to subdivide a large cluster, first create one or more paths that completely cross (e.g., bisect) the cluster.&lt;br /&gt;
#Use the cursor to select a point anywhere in the desired cluster&lt;br /&gt;
#Click the &#039;&#039;Custom Fill&#039;&#039; button. A new window will popup:&lt;br /&gt;
#*Choose &#039;&#039;&#039;Up to functional values below threshold&#039;&#039;&#039;&lt;br /&gt;
#*Optionally choose &#039;&#039;&#039;Up to and including paths&#039;&#039;&#039; if you had subdivided the cluster. You will have to repeat these steps on the remainder of the cluster.&lt;br /&gt;
#*The region will be colored in white before reverting to a yellow outlined region&lt;br /&gt;
#On the Tools window, go to &#039;&#039;&#039;File &amp;gt; Label &amp;gt; Save Selected Label&#039;&#039;&#039;&lt;br /&gt;
#*A box will open with a path to save the ROI in the subject&#039;s label directory by default&lt;br /&gt;
#*Give it a meaningful name that includes the hemisphere and uses the .label filename extension (e.g. lh.TASK_001.label)&lt;br /&gt;
#*Click OK&lt;br /&gt;
&lt;br /&gt;
=== Merge .label Files ===&lt;br /&gt;
If you wish to work with the resulting .label files individually, your work is done. However, if you wish to merge multiple clusters into a single .annot file, you have to first create a color table which is called by &amp;lt;code&amp;gt;mris_label2annot&amp;lt;/code&amp;gt;.&lt;br /&gt;
=== Create a CTAB File ===&lt;br /&gt;
Existing instructions for ctab files are a little hard to follow, so I&#039;ll do my best. The CTAB files are plaintext files with 6 columns and 1 row for each label. The structure of the columns is as follows:&lt;br /&gt;
{| &lt;br /&gt;
|Index &lt;br /&gt;
|Label&lt;br /&gt;
|R&lt;br /&gt;
|G&lt;br /&gt;
|B&lt;br /&gt;
|A&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
You might find it easiest to give the left and right hemispheres their own CTAB files. The two files can be identical if you want, but separate files let each hemisphere have a different number of labels without causing problems in subsequent steps. The indices for each hemisphere should be numbered independently: for example, with 11 labels in the lh and 8 in the rh, one ctab file should have indices 1 to 11 and the other indices 1 to 8.&lt;br /&gt;
&lt;br /&gt;
The label names should correspond to the ?h.*.label files you created. For example, suppose you created the following 5 label files in the previous step:&lt;br /&gt;
 lh.TASK_001.label&lt;br /&gt;
 lh.TASK_002.label&lt;br /&gt;
 rh.TASK_003.label&lt;br /&gt;
 rh.TASK_004.label&lt;br /&gt;
 rh.TASK_005.label&lt;br /&gt;
The lh ctab file should contain the following labels:&lt;br /&gt;
 0    unknown     128    128    128    0&lt;br /&gt;
 1    TASK_001    110    112    100    0&lt;br /&gt;
 2    TASK_002    128    140    130    0&lt;br /&gt;
and the rh ctab file should contain the following labels:&lt;br /&gt;
 0    unknown     128    128    128    0&lt;br /&gt;
 1    TASK_003    110    112    100    0&lt;br /&gt;
 2    TASK_004    190    140    105    0&lt;br /&gt;
 3    TASK_005    200    150    100    0&lt;br /&gt;
&lt;br /&gt;
Note that as explained above, the ctab indices (column 1) were independent. Sets of RGB values can be generated according to your whims, or you can use an external website to produce a set of n distinguishable colors. For example, I went to http://tools.medialab.sciences-po.fr/iwanthue/ to generate 34 distinct colors. Note that the final column, A, refers to the alpha transparency value, and is always set to 0 as far as I can tell.&lt;br /&gt;
&lt;br /&gt;
===Call mris_label2annot===&lt;br /&gt;
The last step uses the information in the ctab files to generate the .annot files from the appropriate sets of .label files. The syntax is:&lt;br /&gt;
 mris_label2annot --s ${SUBJECT_ID} \&lt;br /&gt;
 --h [lh | rh] \&lt;br /&gt;
 --ctab [lh | rh].${CTAB_FILENAME} \&lt;br /&gt;
 --a ${PREFIX} \&lt;br /&gt;
 --ldir ${SUBJECTS_DIR}/${SUBJECT_ID}/label&lt;br /&gt;
This parses the indicated ctab file to identify the labels listed therein. Each identified label is then assigned the colors given in that ctab file, and the result is written to the .annot file specified by the ${PREFIX} associated with the --a argument. It is assumed that all the .label files you created earlier were saved to ${SUBJECTS_DIR}/${SUBJECT_ID}/label; if they live elsewhere, specify the actual location with the --ldir argument.&lt;br /&gt;
&lt;br /&gt;
Note that the above command is the syntax used when you have an &#039;&#039;unknown&#039;&#039; label and a corresponding entry in your CTAB file. If you do not wish to include the unknown label, then rather than assigning index 0 to the unknown label, start your CTAB index numbering at 0 with the first real label, and use the &amp;lt;code&amp;gt;--no-unknown&amp;lt;/code&amp;gt; command-line argument when calling &amp;lt;code&amp;gt;mris_label2annot&amp;lt;/code&amp;gt;. I don&#039;t remember this being a problem the first time I tried this, but when I went through the process again, the CTAB index apparently needed to start at 0, and index 0 could not be associated with unknown unless there was a &#039;&#039;?h.unknown.label&#039;&#039; file, even when using the --no-unknown switch. When I edited the ctab files so that indices 0, 1, ..., n-1 corresponded with label_001, label_002, ..., label_n, everything worked fine.&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
==Preparation of .label Files==&lt;br /&gt;
The group analysis used the fsaverage surface, and since mris_divide_parcellation (which you may also be using, or use later) assumes we&#039;re subdividing a particular subject&#039;s annotation, it makes sense to copy the source .annot files into the fsaverage label/ folder (giving the appropriate ?h prefix). The group analysis annot file will be found in the contrast directory for a particular analysis. The example below describes working with the .annot file for the TASK contrast in the group analysis, found under &#039;&#039;&#039;$SUBJECTS_DIR/RFX&#039;&#039;&#039;&lt;br /&gt;
 HEMI=rh  &lt;br /&gt;
 cd $SUBJECTS_DIR/RFX/analysis.${HEMI}/task/glm.wls/osgm&lt;br /&gt;
 ANNOT_FILE=cache.th30.abs.sign.ocn.annot&lt;br /&gt;
 cp ${ANNOT_FILE} $SUBJECTS_DIR/fsaverage/label/${HEMI}.${ANNOT_FILE}&lt;br /&gt;
&lt;br /&gt;
Note that fsaverage may not be writable if it is a symbolic link to $FREESURFER_HOME/subjects/fsaverage. If this is the case, you will have to make a copy of fsaverage that you own:&lt;br /&gt;
 #first unlink the symbolic link to fsaverage&lt;br /&gt;
 cd $SUBJECTS_DIR&lt;br /&gt;
 unlink fsaverage&lt;br /&gt;
 #now make a new empty fsaverage directory. You will own it and so should be able to read/write anything it contains&lt;br /&gt;
 mkdir fsaverage&lt;br /&gt;
 #finally, copy everything in the &#039;&#039;real&#039;&#039; fsaverage directory to the directory that you own:&lt;br /&gt;
 cp -r $FREESURFER_HOME/subjects/fsaverage/* fsaverage&lt;br /&gt;
&lt;br /&gt;
===Breaking Up .annot into multiple .label Files ===&lt;br /&gt;
The steps described below do not work directly on .annot files, because .annot files are binary rather than plaintext and so cannot be easily manipulated. However, [https://surfer.nmr.mgh.harvard.edu/fswiki/mri_annotation2label mri_annotation2label] will convert a .annot file into a series of .label files:&lt;br /&gt;
 BASE_LABEL_NAME=TD_W&lt;br /&gt;
 ANNOT=lh.cache.th30.abs.sign.ocn.annot&lt;br /&gt;
 HEMI=lh&lt;br /&gt;
 SURF=orig&lt;br /&gt;
 mri_annotation2label --annotation ${ANNOT} --hemi ${HEMI} --subject fsaverage --surf ${SURF} --labelbase ${HEMI}.${BASE_LABEL_NAME}&lt;br /&gt;
Alternatively, you might want to dump all your .label files into a single output directory. Instead of supplying a labelbase, you specify an output dir (&#039;&#039;outdir&#039;&#039;):&lt;br /&gt;
 mri_annotation2label --subject fsaverage --hemi lh --outdir LDT.lh --annotation lh.ldt_hi.annot&lt;br /&gt;
This command creates a new subdirectory (LDT.lh) and breaks up lh.ldt_hi.annot into its constituent label files, placing them in that subdirectory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note: when working with functional contrast map annotation files, the output files will be called ?h.cluster-nnn.label. If you convert multiple .annot files for the same hemisphere (e.g., from two separate contrasts) with the intention of later merging them, you will run the risk of overwriting earlier .label files unless you take care to rename them as you go.&#039;&#039;&#039;&lt;br /&gt;
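One defensive habit is to fold the contrast name into the file names immediately after each conversion. A minimal sketch using dummy files and a made-up CONTRAST_A prefix (the sed rename is the only real work here):&lt;br /&gt;

```shell
# Dummy files standing in for mri_annotation2label output for one contrast.
touch lh.cluster-001.label lh.cluster-002.label
CONTRAST=CONTRAST_A   # made-up contrast name
for f in lh.cluster-*.label; do
    # e.g. lh.cluster-001.label -> lh.CONTRAST_A-001.label
    mv "$f" "$(printf '%s' "$f" | sed "s/cluster/$CONTRAST/")"
done
ls lh.*.label
```

A second contrast's labels can then be converted and renamed the same way without clobbering the first batch.&lt;br /&gt;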
&lt;br /&gt;
=Set Operations on .label Files=&lt;br /&gt;
==Intersecting .label Files (Intersection)==&lt;br /&gt;
Given what we know about the contents of a label file, the unix &amp;lt;code&amp;gt;comm&amp;lt;/code&amp;gt; command should suffice to find the common entries between two label files, which is the set intersection of vertices appearing in each.&lt;br /&gt;
 &lt;br /&gt;
[https://www.mail-archive.com/freesurfer@nmr.mgh.harvard.edu/msg11344.html Relevant thread here]&lt;br /&gt;
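As a toy demonstration with made-up vertex rows (assuming the usual label layout of a comment line, a vertex count, then one row per vertex): strip the two header lines, sort (&amp;lt;code&amp;gt;comm&amp;lt;/code&amp;gt; requires sorted input), intersect, and rebuild a valid header.&lt;br /&gt;

```shell
# Toy label files: a comment line, a vertex count, then one row per vertex.
printf '#!ascii label\n3\n10 0 0 0 0\n20 0 0 0 0\n30 0 0 0 0\n' > a.label
printf '#!ascii label\n2\n20 0 0 0 0\n30 0 0 0 0\n' > b.label
# Strip the two header lines and sort, since comm requires sorted input.
tail -n +3 a.label | sort > a.body
tail -n +3 b.label | sort > b.body
# Keep only the rows common to both files (the vertex intersection).
comm -12 a.body b.body > common.body
# Rebuild a valid label file: comment line, new vertex count, then the rows.
n=$(grep -c '' common.body)
{ printf '#!ascii label\n%s\n' "$n"; cat common.body; } > common.label
cat common.label
```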
&lt;br /&gt;
==Merging .label Files (Union)==&lt;br /&gt;
A union operation will find all vertices appearing in &#039;&#039;either&#039;&#039; .label file. You will want to make sure that only unique values are retained (i.e., you don&#039;t want to list the same vertex multiple times in the output .label file).&lt;br /&gt;
&lt;br /&gt;
The easiest way to accomplish this is FreeSurfer&#039;s &amp;lt;code&amp;gt;mri_mergelabels&amp;lt;/code&amp;gt; tool:&lt;br /&gt;
 mri_mergelabels -i label1 -i label2 ... -o outputlabel &lt;br /&gt;
or&lt;br /&gt;
  mri_mergelabels -d &amp;lt;dirname&amp;gt; -o outputlabel &lt;br /&gt;
This FreeSurfer tool merges two or more label files. It does this by concatenating the label files (after removing each file&#039;s first two header lines) and inserting a new two-line header. The number of entries in the new file (i.e., the number on the second line) is computed by summing those of the input files. When you pass a directory name with the -d flag, it merges all labels in that directory, which is handy when you want to merge many labels created from multiple .annot files.&lt;br /&gt;
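Note that because the files are simply concatenated, a vertex appearing in more than one input label will be duplicated in the output. A minimal cleanup sketch on a toy merged file (the file name and rows are made up):&lt;br /&gt;

```shell
# A vertex present in two input labels appears twice after concatenation.
# Toy merged file below; names and rows are illustrative only.
printf '#!ascii label\n4\n10 0 0 0 0\n20 0 0 0 0\n20 0 0 0 0\n30 0 0 0 0\n' > merged.label
# Drop the header, remove duplicate rows, and rewrite a consistent count line.
tail -n +3 merged.label | sort -u > merged.body
n=$(grep -c '' merged.body)
{ printf '#!ascii label\n%s\n' "$n"; cat merged.body; } > merged.dedup.label
cat merged.dedup.label
```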
&lt;br /&gt;
For example, if I have followed the steps above to create a series of ?h.CONTRAST_A-0??.label and ?h.CONTRAST_B-0??.label files, then I can find the union of CONTRAST_A and CONTRAST_B by creating a lh/ and rh/ subdirectory and moving the label files to the appropriate directories. Then:&lt;br /&gt;
 mri_mergelabels -d lh -o lh.AB_UNION.label&lt;br /&gt;
 mri_mergelabels -d rh -o rh.AB_UNION.label&lt;br /&gt;
The above two commands will generate my lh and rh label files in the current working directory (assumed to be ${SUBJECTS_DIR}/${SUBJECT}/label; ${SUBJECT} is likely to be fsaverage if you are making a functional mask from group-level analyses in fsaverage space).&lt;br /&gt;
===You&#039;re Going to Want to Turn These into an .annot File===&lt;br /&gt;
After you&#039;ve merged multiple .label files into a single .label file, all the vertices will be assigned to the same label. You will probably want to go through and relabel each of the distinct regions (e.g., by cluster) in FreeSurfer. After you&#039;ve done that, save (export) each of the relabeled regions as separate .label files (e.g., CLUST_01, CLUST_02, ... etc.) in a working directory. &lt;br /&gt;
&lt;br /&gt;
After you do, you need to create or snag a CLUT file.&lt;br /&gt;
&lt;br /&gt;
Finally, you use mris_label2annot to merge your united labels, along with the CLUT, into a single .annot file.&lt;br /&gt;
&lt;br /&gt;
=ROI &#039;VennDiagram&#039; Matlab Function=&lt;br /&gt;
ROIVennDiagram.m uses intersect and setdiff functions to extract intersecting and unique vertices. Variable names suggest that it&#039;s used for T1 and T2 data, but you can use it for any paired data. The label files generated above (using mri_annotation2label) need to be copied into a lh and rh directory within a parent dir. A simple modification would allow these files to remain in your fsaverage or other subject&#039;s dir, if you wanted to clutter those up.&lt;br /&gt;
 &lt;br /&gt;
 function ROIvenndiagram = ROIvenndiagram(hemi)&lt;br /&gt;
 %% function ROIvenndiagram = ROIvenndiagram(&#039;hemi&#039;);&lt;br /&gt;
 %Pull out intersecting and different vertices for FS label files,&lt;br /&gt;
 %and store in one of 3 text files (T1, T1T2, T2)&lt;br /&gt;
 %&lt;br /&gt;
 %Currently takes one string input indicating lh or rh.&lt;br /&gt;
 %&lt;br /&gt;
 %You&#039;ll need to adjust the file names on lines 42 and 43 to match your&lt;br /&gt;
 %file names.&lt;br /&gt;
 %&lt;br /&gt;
 %Note - The headers in the output text files have quotes around them, as&lt;br /&gt;
 %the headers begin with matlab unfriendly characters (#). So, you&#039;ll need to&lt;br /&gt;
 %go into those 3 output text files and delete the quotes.&lt;br /&gt;
 %&lt;br /&gt;
 %Intended for use in parent dir containing lh and rh dirs, which&lt;br /&gt;
 %contain label files.&lt;br /&gt;
 % _____________________________________&lt;br /&gt;
 % G.J. Smith&lt;br /&gt;
 % gjsmith4@buffalo.edu&lt;br /&gt;
 % _____________________________________&lt;br /&gt;
 parentdir=pwd;&lt;br /&gt;
 hemidir = sprintf(&#039;%s/%s&#039;, pwd, hemi);&lt;br /&gt;
 cd(hemidir);&lt;br /&gt;
 &lt;br /&gt;
 %% Pre-initialize empty arrays for concatenating labels&lt;br /&gt;
 T1Agg = cell(0,5);&lt;br /&gt;
 T2Agg = cell(0,5);&lt;br /&gt;
 &lt;br /&gt;
 % Begin cycle through label files&lt;br /&gt;
 for labeln = 1:3000 %arbitrary number higher than any label number&lt;br /&gt;
     %% Zero-pad the label number to three digits - e.g., 001, 010, 100&lt;br /&gt;
     labelnum = sprintf(&#039;%03d&#039;, labeln);&lt;br /&gt;
     &lt;br /&gt;
     %% T1 and T2 label file names -- Edit these to match your file naming convention&lt;br /&gt;
     T1_filename = sprintf(&#039;%s.T1-%s.label&#039;, hemi, labelnum);&lt;br /&gt;
     T2_filename = sprintf(&#039;%s.T2-%s.label&#039;, hemi, labelnum);&lt;br /&gt;
     &lt;br /&gt;
     %% Build one large aggregate file with all vertices from all label files&lt;br /&gt;
     header = 2; % lines 1 and 2 of label file stored separately&lt;br /&gt;
     delimiter = &#039; &#039;;&lt;br /&gt;
     if exist([pwd filesep T1_filename], &#039;file&#039;)&lt;br /&gt;
         %load data&lt;br /&gt;
         T1file = importdata(T1_filename,delimiter,header);&lt;br /&gt;
         %Store in one array&lt;br /&gt;
         T1Agg = vertcat(T1Agg, num2cell(T1file.data));&lt;br /&gt;
     end&lt;br /&gt;
     if exist([pwd filesep T2_filename], &#039;file&#039;)&lt;br /&gt;
         %load data&lt;br /&gt;
         T2file = importdata(T2_filename,delimiter,header);&lt;br /&gt;
         %Store in one array&lt;br /&gt;
         T2Agg = vertcat(T2Agg, num2cell(T2file.data));&lt;br /&gt;
     end&lt;br /&gt;
 end&lt;br /&gt;
 &lt;br /&gt;
 %% Check that label files were found&lt;br /&gt;
 if exist(&#039;T1file&#039;, &#039;var&#039;)&lt;br /&gt;
     disp(&#039;Labels found, pulling out intersects and setdiffs&#039;);&lt;br /&gt;
 else&lt;br /&gt;
     disp(&#039;No label files found, check your dir or file names&#039;);&lt;br /&gt;
     return;&lt;br /&gt;
 end&lt;br /&gt;
 &lt;br /&gt;
 %% Find intersection and differences&lt;br /&gt;
 T1Agg = cell2mat(T1Agg);&lt;br /&gt;
 T2Agg = cell2mat(T2Agg);&lt;br /&gt;
 %compare&lt;br /&gt;
 %union = union(T1file.data(:,1), T2file.data(:,1));&lt;br /&gt;
 intersection = intersect(T1Agg, T2Agg, &#039;rows&#039;); %&#039;rows&#039; works with same # of columns&lt;br /&gt;
 T1_diffs = setdiff(T1Agg, T2Agg, &#039;rows&#039;);&lt;br /&gt;
 T2_diffs = setdiff(T2Agg, T1Agg, &#039;rows&#039;);&lt;br /&gt;
 &lt;br /&gt;
 %% Build new label files&lt;br /&gt;
 %T1&lt;br /&gt;
 T1_differences = cell(50000, 5); %hard coded arbitrarily high number of rows for pre-init&lt;br /&gt;
 T1_differences(1,1) = T1file.textdata(1,1); % header&lt;br /&gt;
 T1_differences{2,1} = length(T1_diffs); % header line 2, new number of rows&lt;br /&gt;
 T1_differences(3:length(T1_diffs) + 2,:) = num2cell(T1_diffs); % data&lt;br /&gt;
 &lt;br /&gt;
 %T2&lt;br /&gt;
 T2_differences = cell(50000, 5); %hard coded arbitrarily high number of rows for pre-init&lt;br /&gt;
 T2_differences(1,1) = T2file.textdata(1,1); % header&lt;br /&gt;
 T2_differences{2,1} = length(T2_diffs); % header line 2, new number of rows&lt;br /&gt;
 T2_differences(3:length(T2_diffs) + 2,:) = num2cell(T2_diffs); % data&lt;br /&gt;
 &lt;br /&gt;
 %Match&lt;br /&gt;
 Match_labelfile = cell(50000, 5); %hard coded arbitrarily high number of rows for pre-init&lt;br /&gt;
 Match_labelfile(1,1) = T1file.textdata(1,1); % header T1 and T2 for same subject is the same&lt;br /&gt;
 Match_labelfile{2,1} = length(intersection); % header line 2, new number of rows&lt;br /&gt;
 Match_labelfile(3:length(intersection) + 2, :) = num2cell(intersection); % data&lt;br /&gt;
 &lt;br /&gt;
 %% Remove blank rows from oversized pre-allocated arrays&lt;br /&gt;
 T1_differences(all(cellfun(@isempty,T1_differences),2),:) = [];&lt;br /&gt;
 T2_differences(all(cellfun(@isempty,T2_differences),2),:) = [];&lt;br /&gt;
 Match_labelfile(all(cellfun(@isempty,Match_labelfile),2),:) = [];&lt;br /&gt;
 &lt;br /&gt;
 %% Write files&lt;br /&gt;
 cd(parentdir);&lt;br /&gt;
 T1_fname = sprintf(&#039;%s.T1.label.txt&#039;, hemi);&lt;br /&gt;
 writetable(cell2table(T1_differences), T1_fname,&#039;Delimiter&#039;,&#039; &#039;,&#039;WriteVariableNames&#039;,false);&lt;br /&gt;
 &lt;br /&gt;
 %T2&lt;br /&gt;
 T2_fname = sprintf(&#039;%s.T2.label.txt&#039;, hemi);&lt;br /&gt;
 writetable(cell2table(T2_differences), T2_fname,&#039;Delimiter&#039;,&#039; &#039;,&#039;WriteVariableNames&#039;,false);&lt;br /&gt;
 &lt;br /&gt;
 %Match&lt;br /&gt;
 Match_fname = sprintf(&#039;%s.T1T2.label.txt&#039;, hemi);&lt;br /&gt;
 writetable(cell2table(Match_labelfile), Match_fname,&#039;Delimiter&#039;,&#039; &#039;,&#039;WriteVariableNames&#039;,false);&lt;br /&gt;
 &lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Visualize and Cluster (Intersected Label File)==&lt;br /&gt;
&#039;&#039;In progress&#039;&#039;&lt;br /&gt;
Open tksurfer.&lt;br /&gt;
 tksurfer fsaverage ?h inflated&lt;br /&gt;
&lt;br /&gt;
Import the ?h.aparc.annot file as a reference.&lt;br /&gt;
&lt;br /&gt;
Load the intersected label file.&lt;br /&gt;
&lt;br /&gt;
Use the cut line tool to make cluster boundaries (if necessary; otherwise see below).&lt;br /&gt;
&lt;br /&gt;
Erase aparc.annot labels that overlap with a desired label.&lt;br /&gt;
&lt;br /&gt;
Select the area you want to cluster and use custom fill to create a new label, filling from the last clicked vertex: up to and including paths, up to other labels, and up to unlabeled vertices. If you left adjacent aparc.annot labels in place, these can serve as boundaries.&lt;br /&gt;
&lt;br /&gt;
Save each label file using save selected label. You must pre-make the text files, as you can&#039;t specify file names here. There must be a better way of doing this, or of getting export annotation to work.&lt;br /&gt;
&lt;br /&gt;
Pro tip: colour your label file red; when you create new labels, they will be blue (see below).&lt;br /&gt;
&lt;br /&gt;
==CTAB and Annot==&lt;br /&gt;
Now you&#039;ll need to create a ctab file for the labels to make a .annot file.&lt;br /&gt;
&lt;br /&gt;
If you don&#039;t have a ton of labels, this can easily be done by hand by copying and editing the &#039;FreeSurferColorLUT.txt&#039; file found in /usr/local/freesurfer, and simply renaming the regions (label names) to match your labelling convention. Then save it as a .annot.ctab file, e.g. &#039;RH.annot.ctab&#039;.&lt;br /&gt;
&lt;br /&gt;
Then you&#039;ll want to use the mris_label2annot command, documented [https://surfer.nmr.mgh.harvard.edu/fswiki/mris_label2annot here]. There are two ways to do this. The first is to pass it each of the label files using the --l switch, along with a ctab file. Example:&lt;br /&gt;
&lt;br /&gt;
 mris_label2annot --l rh.label-001.txt --l rh.label-002.txt --l rh.label-003.txt --l rh.label-004.txt --l rh.label-005.txt --l rh.label-006.txt --l rh.label-007.txt --l rh.label-008.txt --l rh.label-009.txt --l rh.label-010.txt --l   rh.label-011.txt --l rh.label-012.txt --l rh.label-013.txt --l rh.label-014.txt --l rh.label-015.txt --l rh.label-016.txt --l rh.label-017.txt --l rh.label-018.txt --l rh.label-019.txt --l rh.label-020.txt --l rh.label-021.txt --l  rh.label-022.txt --l rh.label-023.txt --l rh.label-024.txt --l rh.label-025.txt --l rh.label-026.txt --ctab RH.annot.ctab --s fsaverage --h rh --annot T1T2intersected&lt;br /&gt;
&lt;br /&gt;
That&#039;s unwieldy (and yet, that&#039;s how I first did this). The easier way is to throw all the label files into a single directory, and pass the directory and a ctab file.&lt;br /&gt;
&lt;br /&gt;
Now that you have .annot files for your intersecting vertices, you can [http://ccnlab.psy.buffalo.edu/wiki/index.php/Freesurfer_Subparcellation#Transfer_Subparcellation_to_Other_Subjects subparcellate] and/or continue with network analyses.&lt;br /&gt;
&lt;br /&gt;
==Using R to Work With Colortables==&lt;br /&gt;
This is kind of a stub topic, but there is an R library that helps when trying to extract a CLUT from an existing .annot file: the [https://rdrr.io/cran/freesurferformats/ freesurferformats R package]. In particular, it has a function to extract the color lookup table from a .annot file:&lt;br /&gt;
 &amp;gt; library(&amp;quot;freesurferformats&amp;quot;)&lt;br /&gt;
 &amp;gt; annotfile=&amp;quot;/path/to/subject/label/lh.myannot.annot&amp;quot;&lt;br /&gt;
 &amp;gt; annot = read.fs.annot(annotfile);&lt;br /&gt;
 &amp;gt; colortable = colortable.from.annot(annot);&lt;br /&gt;
 &amp;gt; head(colortable);&lt;br /&gt;
&lt;br /&gt;
If you install this library, you can use [[Extracting CLUT from .annot Files Using R|this handy R script]] that I wrote that can be run from the terminal to extract a CLUT from a .annot file.&lt;br /&gt;
&lt;br /&gt;
[[Category: FreeSurfer]]&lt;br /&gt;
[[Category: ROI]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2257</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2257"/>
		<updated>2022-07-26T20:44:15Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Using the CCN Lab ML Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow and whatever libraries are required for a particular ML application, we&#039;ll be using Virtual Environments. We have a Python 3.9 virtual environment, which is like a sandbox that all our ML libraries can be installed into and toggled on/off without disrupting anything else. &#039;&#039;&#039;There is a one-time setup procedure every individual user will have to complete:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run the following command:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda init bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;(base) user@host:~$&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To toggle it on:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda activate /data/tf&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To toggle it off:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda deactivate&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To list all other environments that you can activate&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda info --envs&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To list all the packages installed in your current environment:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda list&amp;lt;/code&amp;gt;&lt;br /&gt;
#If &amp;lt;code&amp;gt;&#039;&#039;&#039;conda list&#039;&#039;&#039;&amp;lt;/code&amp;gt; output doesn&#039;t include a bunch of tensorflow libraries, install it using pip:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;pip install tensorflow&amp;lt;/code&amp;gt;&lt;br /&gt;
#*Other libraries may have to be installed separately by each user using pip. For example:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;pip install scikit-learn&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x, and uses Python 3.9 by default, which works nicely with TensorFlow (I understand that getting TensorFlow to work with Python 3.10+ includes some challenges). These instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between Python versions. The pip installation instructions use miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2256</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2256"/>
		<updated>2022-07-26T20:42:00Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Using the CCN Lab ML Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow and whatever libraries are required for a particular ML application, we&#039;ll be using Virtual Environments. We have a Python 3.9 virtual environment, which is like a sandbox that all our ML libraries can be installed into and toggled on/off without disrupting anything else. &#039;&#039;&#039;There is a one-time setup procedure every individual user will have to complete:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run the following command:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda init bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;(base) user@host:~$&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To toggle it on:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda activate /data/tf&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To toggle it off:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda deactivate&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To list all other environments that you can activate&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda info --envs&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To list all the packages installed in your current environment:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda list&amp;lt;/code&amp;gt;&lt;br /&gt;
#If &amp;lt;code&amp;gt;&#039;&#039;&#039;conda list&#039;&#039;&#039;&amp;lt;/code&amp;gt; doesn&#039;t include a bunch of tensorflow libraries, install it using pip:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;pip install tensorflow&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x, and uses Python 3.9 by default, which works nicely with TensorFlow (I understand that getting TensorFlow to work with Python 3.10+ includes some challenges). These instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between Python versions. The pip installation instructions use miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2255</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2255"/>
		<updated>2022-07-26T20:38:59Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Using the CCN Lab ML Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow and whatever libraries are required for a particular ML application, we&#039;ll be using Virtual Environments. We have a Python 3.9 virtual environment, which is like a sandbox that all our ML libraries can be installed into and toggled on/off without disrupting anything else. &#039;&#039;&#039;There is a one-time setup procedure every individual user will have to complete:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run the following command:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda init bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;(base) user@host:~$&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To toggle it on:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda activate /data/tf&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To toggle it off:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda deactivate&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To list all other environments that you can activate:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda info --envs&amp;lt;/code&amp;gt;&lt;br /&gt;
#*To list all the packages installed in your current environment:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda list&amp;lt;/code&amp;gt;&lt;br /&gt;
#It appears everyone has to install TensorFlow individually using pip:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;pip install tensorflow&amp;lt;/code&amp;gt;&lt;br /&gt;
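A quick way to confirm that the shared environment works (assuming TensorFlow is already installed in &amp;lt;code&amp;gt;/data/tf&amp;lt;/code&amp;gt;) is to activate it, print the TensorFlow version, and deactivate again:&lt;br /&gt;
 conda activate /data/tf&lt;br /&gt;
 python3 -c &amp;quot;import tensorflow as tf; print(tf.__version__)&amp;quot;&lt;br /&gt;
 conda deactivate&lt;br /&gt;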
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make macOS a good choice of platform, because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x and ships with Python 3.10 by default. TensorFlow works nicely with Python 3.9, but getting it to work with Python 3.10+ reportedly involves some challenges, so these instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between Python versions. The pip installation instructions use miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
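To see which Python an Ubuntu machine is actually using before you set anything up, you can ask it directly:&lt;br /&gt;
 python3 --version&lt;br /&gt;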
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
You can use Conda to manage the virtual environment, but don&#039;t use Conda to install TensorFlow itself. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow team. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
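To confirm the installation worked (and, on a machine with a CUDA-enabled GPU, that TensorFlow can actually see the GPU), the TensorFlow pip instructions have you import the package and list the visible GPUs; an empty list just means no usable GPU was found:&lt;br /&gt;
 python3 -c &amp;quot;import tensorflow as tf; print(tf.config.list_physical_devices(&#039;GPU&#039;))&amp;quot;&lt;br /&gt;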
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. The problem is that, as of this writing (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS of them), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version has everything it needs. It&#039;s possible that having the apt package installed before the pip installation led pip to skip some critical components; if so, it may not be strictly necessary to keep both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~$ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~$ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Main_Page&amp;diff=2254</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Main_Page&amp;diff=2254"/>
		<updated>2022-07-26T20:37:04Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Technobabble */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the CCN Lab Wiki.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:cpicard.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Terminal Commands &amp;amp; Other Lab Stuff ==&lt;br /&gt;
* [[New Research Staff]]&lt;br /&gt;
* [[Lab Roles]]&lt;br /&gt;
* [[Mess Ups | Times when we messed up]]&lt;br /&gt;
* [[Highlight Reel | Times when we rocked it]]&lt;br /&gt;
* [[Participant Screening]]&lt;br /&gt;
* [[3D Brain Models]]&lt;br /&gt;
&lt;br /&gt;
== Technobabble ==&lt;br /&gt;
* [[ML Environment | Running an ML Workstation]]&lt;br /&gt;
* [[ubmount | UBFS and ubmount]]&lt;br /&gt;
* [[SSH | Using SSH]]&lt;br /&gt;
* [[Synching Scripts]]&lt;br /&gt;
* [[Connecting to CCR]]&lt;br /&gt;
* [[Lab Email]]&lt;br /&gt;
* [[Excel Formulas]]&lt;br /&gt;
* [[Troubleshooting]]&lt;br /&gt;
* [[BASH Tricks]]&lt;br /&gt;
* [[FreeSurfer on Windows]]&lt;br /&gt;
&lt;br /&gt;
== Data Analysis ==&lt;br /&gt;
* [https://wiki.cam.ac.uk/bmuwiki/FMRI Data Quality]&lt;br /&gt;
* [[Downloading CRTC Data]]&lt;br /&gt;
* [[Behavioral Analyses]]&lt;br /&gt;
* [[FreeSurfer | FreeSurfer Pipeline]]&lt;br /&gt;
* [[SPM | SPM Pipeline]]&lt;br /&gt;
* [[Time Series Analysis]]&lt;br /&gt;
* [[Network Analyses]]&lt;br /&gt;
* [[Self-organized-mapping(SOM)]]&lt;br /&gt;
* [[Data Simulation]]&lt;br /&gt;
&lt;br /&gt;
== Manuscript Preparation ==&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Rendering_MRIcron Brain Rendering in MRIcron (SPM)]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Producing_Tables_of_Coordinates_(SPM) Producing Tables of Coordinates (SPM)]&lt;br /&gt;
* [[Annotation Coordinates| Extracting XYZ coordinates of .annot labels]]&lt;br /&gt;
* [http://colorbrewer2.org/ Color schemes (e.g., color keys for graphs, experiment conditions in fMRI renderings, etc.)]&lt;br /&gt;
** Above link yoinked from [http://sites.bu.edu/cnrlab/lab-resources/ BU&#039;s CNR Lab]&lt;br /&gt;
* [[Acquisition_Parameters | Acquisition Parameters at CTRC]]&lt;br /&gt;
* [[Brain Net Viewer]]&lt;br /&gt;
* [[Manuscript Formatting]]&lt;br /&gt;
&lt;br /&gt;
== Experiment A to Z (not necessarily in alphabetical order) ==&lt;br /&gt;
* [[Project-Specific Documentation]]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php/Category:MATLAB_functions MATLAB Functions]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Participant_Triage Participant Triage]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Pre-fMRI_Scanning_Protocol Pre-fMRI Scanning Protocol]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Participant_Instructions Participant Instructions]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MRI_Prep MRI Prep (Prior to scan date)]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MRI_Setup MRI Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MIKENET MIKENET Neural Network C Library Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=TensorFlow TensorFlow OpenSource Python Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Reading_Experiment_IDs IDs for Reading Experiment BOLD]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php/CCR Center for Computational Research (CCR)]&lt;br /&gt;
* [[Neural Networks in Python]]&lt;br /&gt;
* [https://openneuro.org/datasets/ds001246/versions/1.1.0 Horikawa dataset]&lt;br /&gt;
** &amp;lt;code&amp;gt;aws s3 sync --no-sign-request s3://openneuro.org/ds001246 ds001246-download/&amp;lt;/code&amp;gt;&lt;br /&gt;
* [https://nda.nih.gov/edit_collection.html?id=2155 MTA dataset]&lt;br /&gt;
* [[Wall of Ideas]]&lt;br /&gt;
&lt;br /&gt;
== Informational ==&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php/Developmental_Neuroscience Developmental Neuroscience]&lt;br /&gt;
* [[NSF Proposal Submission Walkthrough for n00bs]]&lt;br /&gt;
* [[Recipes]]&lt;br /&gt;
&lt;br /&gt;
== MediaWiki - Guides for Getting started ==&lt;br /&gt;
&lt;br /&gt;
* [//www.mediawiki.org/wiki/Help:Formatting Useful Formatting Guide]&lt;br /&gt;
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Consult the [//meta.wikimedia.org/wiki/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2253</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2253"/>
		<updated>2022-07-26T17:32:44Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Using the CCN Lab ML Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow, and whatever libraries a particular ML application requires, we use virtual environments. We have a shared Python 3.9 virtual environment, which is like a sandbox: all our ML libraries can be installed into it and toggled on/off without disrupting anything else. &#039;&#039;&#039;There is a one-time setup procedure that every individual user will have to complete:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run the following command:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda init bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;(base) user@host:~$&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can toggle the base and tensorflow python environments, and any other environments that might get added down the road.&lt;br /&gt;
#To toggle it on:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda activate /data/tf&amp;lt;/code&amp;gt;&lt;br /&gt;
#To toggle it off:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda deactivate&amp;lt;/code&amp;gt;&lt;br /&gt;
#To list all other environments that you can activate&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda info --envs&amp;lt;/code&amp;gt;&lt;br /&gt;
#To list all the packages installed in your current environment:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda list&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x and ships with Python 3.10 by default. TensorFlow works nicely with Python 3.9, but getting it to work with Python 3.10+ reportedly involves some challenges, so these instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between Python versions. The pip installation instructions use miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2252</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2252"/>
		<updated>2022-07-26T17:32:22Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Using the CCN Lab ML Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow, and whatever libraries a particular ML application requires, we use virtual environments. We have a shared Python 3.9 virtual environment, which is like a sandbox: all our ML libraries can be installed into it and toggled on/off without disrupting anything else. &#039;&#039;&#039;There is a one-time setup procedure that every individual user will have to complete:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run the following command:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda init bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;(base) user@host:~$&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can toggle the base and tensorflow python environments, and any other environments that might get added down the road.&lt;br /&gt;
#To toggle it on:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda activate /data/tf&amp;lt;/code&amp;gt;&lt;br /&gt;
#To toggle it off:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda deactivate&amp;lt;/code&amp;gt;&lt;br /&gt;
#To list all other environments that you can activate&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda info --envs&amp;lt;/code&amp;gt;&lt;br /&gt;
#To list all the packages installed in your current environment:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda list&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x and ships with Python 3.10 by default. TensorFlow works nicely with Python 3.9, but getting it to work with Python 3.10+ reportedly involves some challenges, so these instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between Python versions. The pip installation instructions use miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2251</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2251"/>
		<updated>2022-07-26T17:31:48Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Using the CCN Lab ML Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow, and whatever libraries a particular ML application requires, we use virtual environments. We have a shared Python 3.9 virtual environment, which is like a sandbox: all our ML libraries can be installed into it and toggled on/off without disrupting anything else. &#039;&#039;&#039;There is a one-time setup procedure that every individual user will have to complete:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run the following command:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda init bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;(base) user@host:~$&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now you can toggle the base and tensorflow python environments, and any other environments that might get added down the road.&lt;br /&gt;
#To toggle it on:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda activate /data/tf&amp;lt;/code&amp;gt;&lt;br /&gt;
#To toggle it off:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda deactivate&amp;lt;/code&amp;gt;&lt;br /&gt;
#To list all other environments that you can activate&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda info --envs&amp;lt;/code&amp;gt;&lt;br /&gt;
#To list all the packages installed in your current environment:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda list&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x and ships with Python 3.10 by default. TensorFlow works nicely with Python 3.9, but getting it to work with Python 3.10+ reportedly involves some challenges, so these instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between Python versions. The pip installation instructions use miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2250</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2250"/>
		<updated>2022-07-26T17:30:53Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Using the CCN Lab ML Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow, and whatever libraries a particular ML application requires, we use virtual environments. We have a shared Python 3.9 virtual environment, which is like a sandbox: all our ML libraries can be installed into it and toggled on/off without disrupting anything else. There is a one-time setup procedure that every individual user will have to complete:&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run the following command:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda init bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;(base) user@host:~$&amp;lt;/code&amp;gt;&lt;br /&gt;
Now you can toggle the base and tensorflow python environments.&lt;br /&gt;
#To toggle it on:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda activate /data/tf&amp;lt;/code&amp;gt;&lt;br /&gt;
#To toggle it off:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda deactivate&amp;lt;/code&amp;gt;&lt;br /&gt;
#To list all other environments that you can activate&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda info --envs&amp;lt;/code&amp;gt;&lt;br /&gt;
#To list all the packages installed in your current environment:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda list&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x and ships with Python 3.10 by default. TensorFlow works nicely with Python 3.9, but getting it to work with Python 3.10+ reportedly involves some challenges, so these instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between Python versions. The pip installation instructions use miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
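A quick way to verify the install (a minimal check, assuming the tf environment from the steps above is still active; the try/except just keeps the check from crashing if the import fails):&lt;br /&gt;

```shell
# Sanity check: report the installed TensorFlow version and any visible GPUs.
# Run inside the activated "tf" environment; the GPU list will be empty
# unless CUDA is set up.
python3 -c "
try:
    import tensorflow as tf
    print('TensorFlow', tf.__version__)
    print('GPUs:', tf.config.list_physical_devices('GPU'))
except ImportError:
    print('TensorFlow is not importable in this environment')
"
```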
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2249</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2249"/>
		<updated>2022-07-26T17:13:21Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Using the CCN Lab ML Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow, and whatever libraries a particular ML application requires, we&#039;ll be using virtual environments. We have a Python 3.9 virtual environment, which is like a sandbox into which all our ML libraries can be installed and toggled on/off without disrupting anything else. There is a one-time setup procedure every individual user will have to complete:&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run the following command:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda init bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;(base) user@host:~$&amp;lt;/code&amp;gt;&lt;br /&gt;
#I have created the tf environment in /data/tf. To activate it:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda activate /data/tf&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x and ships Python 3.10 by default. I understand that getting TensorFlow to work with Python 3.10+ involves some challenges, so these instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between them. The pip installation instructions use Miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
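A quick way to verify the install (a minimal check, assuming the tf environment from the steps above is still active; the try/except just keeps the check from crashing if the import fails):&lt;br /&gt;

```shell
# Sanity check: report the installed TensorFlow version and any visible GPUs.
# Run inside the activated "tf" environment; the GPU list will be empty
# unless CUDA is set up.
python3 -c "
try:
    import tensorflow as tf
    print('TensorFlow', tf.__version__)
    print('GPUs:', tf.config.list_physical_devices('GPU'))
except ImportError:
    print('TensorFlow is not importable in this environment')
"
```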
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2248</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2248"/>
		<updated>2022-07-26T17:12:46Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Using the CCN Lab ML Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow, and whatever libraries a particular ML application requires, we&#039;ll be using virtual environments. We have a Python 3.9 virtual environment, which is like a sandbox into which all our ML libraries can be installed and toggled on/off without disrupting anything else. There is a one-time setup procedure every individual user will have to complete:&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run the following command:&amp;lt;br&amp;gt;&amp;lt;code&amp;gt;conda init bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&lt;br /&gt;
 (base) user@host:~$&lt;br /&gt;
#I have created the tf environment in /data/tf. To activate it:&lt;br /&gt;
 conda activate /data/tf&lt;br /&gt;
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x and ships Python 3.10 by default. I understand that getting TensorFlow to work with Python 3.10+ involves some challenges, so these instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between them. The pip installation instructions use Miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
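A quick way to verify the install (a minimal check, assuming the tf environment from the steps above is still active; the try/except just keeps the check from crashing if the import fails):&lt;br /&gt;

```shell
# Sanity check: report the installed TensorFlow version and any visible GPUs.
# Run inside the activated "tf" environment; the GPU list will be empty
# unless CUDA is set up.
python3 -c "
try:
    import tensorflow as tf
    print('TensorFlow', tf.__version__)
    print('GPUs:', tf.config.list_physical_devices('GPU'))
except ImportError:
    print('TensorFlow is not importable in this environment')
"
```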
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2247</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2247"/>
		<updated>2022-07-26T17:12:19Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Using the CCN Lab ML Environment */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow, and whatever libraries a particular ML application requires, we&#039;ll be using virtual environments. We have a Python 3.9 virtual environment, which is like a sandbox into which all our ML libraries can be installed and toggled on/off without disrupting anything else. There is a one-time setup procedure every individual user will have to complete:&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run the following command:&lt;br /&gt;
 conda init bash&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&lt;br /&gt;
 (base) user@host:~$&lt;br /&gt;
#I have created the tf environment in /data/tf. To activate it:&lt;br /&gt;
 conda activate /data/tf&lt;br /&gt;
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x and ships Python 3.10 by default. I understand that getting TensorFlow to work with Python 3.10+ involves some challenges, so these instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between them. The pip installation instructions use Miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
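A quick way to verify the install (a minimal check, assuming the tf environment from the steps above is still active; the try/except just keeps the check from crashing if the import fails):&lt;br /&gt;

```shell
# Sanity check: report the installed TensorFlow version and any visible GPUs.
# Run inside the activated "tf" environment; the GPU list will be empty
# unless CUDA is set up.
python3 -c "
try:
    import tensorflow as tf
    print('TensorFlow', tf.__version__)
    print('GPUs:', tf.config.list_physical_devices('GPU'))
except ImportError:
    print('TensorFlow is not importable in this environment')
"
```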
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2246</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2246"/>
		<updated>2022-07-26T17:06:27Z</updated>

		<summary type="html">&lt;p&gt;Chris: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
=Using the CCN Lab ML Environment=&lt;br /&gt;
To avoid clashes between different releases of Python, TensorFlow, and whatever libraries a particular ML application requires, we&#039;ll be using virtual environments. CASET has set up a Python 3.9 environment for us. There is a one-time setup procedure every individual user will have to complete:&lt;br /&gt;
&lt;br /&gt;
#ssh or open a terminal on ws03&lt;br /&gt;
#run &amp;lt;code&amp;gt;conda init bash&amp;lt;/code&amp;gt;&lt;br /&gt;
#log out, log back in. Now your shell prompt will look like:&lt;br /&gt;
 (base) user@host:~$&lt;br /&gt;
&lt;br /&gt;
=Setting up your own ML environment=&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
==Hardware==&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
==Operating System==&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
==Python Version==&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x and ships Python 3.10 by default. I understand that getting TensorFlow to work with Python 3.10+ involves some challenges, so these instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between them. The pip installation instructions use Miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
==Setup==&lt;br /&gt;
===TensorFlow Installation===&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
====Create a virtual environment====&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
====Update pip and install tensorflow====&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
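A quick way to verify the install (a minimal check, assuming the tf environment from the steps above is still active; the try/except just keeps the check from crashing if the import fails):&lt;br /&gt;

```shell
# Sanity check: report the installed TensorFlow version and any visible GPUs.
# Run inside the activated "tf" environment; the GPU list will be empty
# unless CUDA is set up.
python3 -c "
try:
    import tensorflow as tf
    print('TensorFlow', tf.__version__)
    print('GPUs:', tf.config.list_physical_devices('GPU'))
except ImportError:
    print('TensorFlow is not importable in this environment')
"
```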
&lt;br /&gt;
===Spyder Installation===&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2245</id>
		<title>Freesurfer Group-Level Analysis (mri glmfit)</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2245"/>
		<updated>2022-07-21T18:58:24Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This procedure is based on information from [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastGroupLevel here]&lt;br /&gt;
&lt;br /&gt;
The group-level (Random-Effects) analysis repeats the contrasts performed for individual subjects on a variance-weighted composite of all your subjects. This will identify voxels for which your contrast effects are consistent across all your participants. For the functional MRI group analysis you will need to:&lt;br /&gt;
&lt;br /&gt;
*Concatenate individuals into one file (isxconcat-sess)&lt;br /&gt;
*Do not smooth (already smoothed during first-level analysis)&lt;br /&gt;
*For each space (lh/rh/mni305):&lt;br /&gt;
**Run mri_glmfit using weighted least squares (WLS)&lt;br /&gt;
**Correct for multiple comparisons&lt;br /&gt;
*Optionally merge into one volume space&lt;br /&gt;
&lt;br /&gt;
==Before you start==&lt;br /&gt;
Ensure that your SUBJECTS_DIR variable is set to the working directory containing all your participants, and that the first-level analyses have been completed for all participants (using selxavg3-sess).&lt;br /&gt;
&lt;br /&gt;
==Concatenate First-Level Analyses(isxconcat-sess)==&lt;br /&gt;
In your &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR&#039;&#039;&#039;&#039;&#039;, there should be a &#039;&#039;&#039;&#039;&#039;subjects&#039;&#039;&#039;&#039;&#039; file containing the subjectID for each participant for which you have run selxavg3-sess. Assume that the analysis directories are called &#039;&#039;my_analysis.*&#039;&#039; ([[Configure_mkanalysis-sess | mkanalysis-sess]]), and that the contrast is called &#039;&#039;conditionA_vs_conditionB&#039;&#039; ([[Configure_mkcontrast-sess | mkcontrast-sess]]). You might find it helpful to use environment variables and run the &amp;lt;code&amp;gt;isxconcat-sess&amp;lt;/code&amp;gt; script as follows:&lt;br /&gt;
 ANALYSIS=my_analysis.sm4&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 #left hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #right hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #subcortical&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
&lt;br /&gt;
When finished, a new directory will exist, &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR/RFX&#039;&#039;&#039;&#039;&#039; and contain sub-folders for each of your analyses (in this case, left- and right-hemisphere and MNI305 analysis). Note that your analysis directory does not have to be called &#039;&#039;&#039;RFX&#039;&#039;&#039; (e.g., if multiple people are working in the same subjects folder, you might have an output folder named something like &#039;&#039;&#039;RFX_CHRIS&#039;&#039;&#039;, or whatever suits you).&lt;br /&gt;
&lt;br /&gt;
If you have multiple contrasts, you can just create additional CONTRAST variables and rerun the same scripts, replacing &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;${CONTRAST1}&amp;lt;/span&amp;gt; with alternative contrasts. &lt;br /&gt;
&lt;br /&gt;
Note that if you have already run isxconcat-sess in the past, you may get a warning because an earlier group-level directory will already exist. You should probably just delete the old directory and rerun isxconcat-sess.&lt;br /&gt;
===Protip for Looping Through Contrasts===&lt;br /&gt;
Contrasts will have a corresponding .config file in one or more of the analysis directories found in $SUBJECTS_DIR. If you plan to automate this process using a script that loops over analysis folders and contrasts, the following code snippet will create a text file containing the names of the contrasts with .config files:&lt;br /&gt;
 cd $SUBJECTS_DIR/&amp;lt;span style=&#039;color:red&#039;&amp;gt;my_analysis&amp;lt;/span&amp;gt;.lh&lt;br /&gt;
 ls -1 | grep config | sed &#039;s/\.config//g&#039; &amp;gt; ../list_of_contrasts&lt;br /&gt;
 #now $SUBJECTS_DIR has a text file called list_of_contrasts for use in a loop&lt;br /&gt;
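The list_of_contrasts file can then drive a loop over contrasts and hemispheres. A minimal sketch (untested against a real $SUBJECTS_DIR; the analysis name my_analysis is the placeholder from the examples above, and the function only prints the commands so you can inspect them before running anything):&lt;br /&gt;

```shell
# Print one mri_glmfit command per contrast per hemisphere, reading contrast
# names from list_of_contrasts in the given subjects directory.
# Pipe the output to "bash" to actually execute the commands.
glmfit_commands() {
  local subjects_dir=$1 analysis=$2
  cat "${subjects_dir}/list_of_contrasts" | while read -r contrast; do
    for hemi in lh rh; do
      echo "cd ${subjects_dir}/RFX/${analysis}.${hemi}/${contrast}; mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage ${hemi} --glmdir glm.wls --nii.gz"
    done
  done
}

# Example: glmfit_commands "$SUBJECTS_DIR" my_analysis
```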
&lt;br /&gt;
==Run mri_glmfit==&lt;br /&gt;
&#039;&#039;&#039;PAY ATTENTION:&#039;&#039;&#039; You will need to run mri_glmfit &#039;&#039;&#039;for each contrast&#039;&#039;&#039; performed &#039;&#039;&#039;for each of the analyses&#039;&#039;&#039;! (but see the section below about scripting)&lt;br /&gt;
&lt;br /&gt;
In the above example, we have one contrast (conditionA_vs_conditionB) to be carried out for each of the my_analysis.lh, my_analysis.rh, and my_analysis.mni analysis directories. Of course, if we had also contrasted other conditions, you would have to run mri_glmfit many more times.&lt;br /&gt;
===One-Sample Group Mean===&lt;br /&gt;
The 1-sample group mean (OSGM) tests whether the contrast A-B differs significantly from zero. The example that follows demonstrates how you would run the conditionA_vs_conditionB OSGM analysis for each of the analysis directories as in my example scenario:&lt;br /&gt;
 ANALYSIS=my_analysis&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in left hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in right hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in subcortical MNI space - no need to specify the surface&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --osgm --glmdir glm.wls --nii.gz &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;--eres-save&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons (Surface-Based)===&lt;br /&gt;
After you have run the voxel-wise analysis, you need to correct the contrast maps for multiple comparisons. This is because the previous step tests each of many thousands of voxels independently, so at a voxel-wise threshold of &#039;&#039;alpha&#039;&#039; you expect a proportion &#039;&#039;alpha&#039;&#039; of the truly null voxels to reach significance by chance (Type I errors). Again, you will have to perform this step for each analysis directory. This takes very little time to run for the surface-based analyses, which can use precomputed values.&lt;br /&gt;
&lt;br /&gt;
An explanation of the parameters: &amp;lt;code&amp;gt;--glmdir glm.wls&amp;lt;/code&amp;gt; refers to the directory you specified when you ran the mri_glmfit command. &amp;lt;code&amp;gt;--cache 3 pos&amp;lt;/code&amp;gt; indicates that you want to use the pre-cached simulation with a voxel-wise threshold of 10^-3 (i.e., 0.001) and use positive contrast values. &amp;lt;code&amp;gt;--cwpvalthresh .025&amp;lt;/code&amp;gt; indicates you will retain clusters with a size-corrected p-value of &#039;&#039;P&#039;&#039;&amp;lt;.025 (see the &#039;&#039;&#039;Notes&#039;&#039;&#039; below).&lt;br /&gt;
*my_analysis.lh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
*my_analysis.rh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running this step saves thresholded files in the directory indicated by the &amp;lt;code&amp;gt;--glmdir&amp;lt;/code&amp;gt; switch (in the above example, files were saved in glm.wls). There, you will find files named &#039;&#039;cache.*.nii.gz&#039;&#039;. You can then load those files as overlays to see which clusters survived the size thresholding.&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based)===&lt;br /&gt;
The above examples use cached Monte Carlo simulations for speed. The help text for mri_glmfit-sim gives the syntax for running your own sims, which take much longer to execute, but are required for MNI305 space:&lt;br /&gt;
 mri_glmfit-sim --glmdir glmdir \&lt;br /&gt;
  --sim nulltype nsim threshold csdbase \&lt;br /&gt;
  --sim-sign sign&lt;br /&gt;
e.g. Monte Carlo correction for surface space (10K iterations, voxel-wise p&amp;lt;.005, positive differences only, no correction for multiple spaces):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls \&lt;br /&gt;
  --sim mc-z 10000 2.3 mc-z.th005.pos --sim-sign pos&lt;br /&gt;
&lt;br /&gt;
e.g. permutation correction for MNI space (10K iterations, voxel-wise p&amp;lt;.01, absolute value differences, cluster-wise p&amp;lt;sub&amp;gt;familywise&amp;lt;/sub&amp;gt;=.05 is corrected for lh, rh and mni spaces):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 2 perm10k.th01.abs \&lt;br /&gt;
  --sim-sign abs --cwp 0.05 --3spaces&lt;br /&gt;
&lt;br /&gt;
e.g. splitting the permutation correction for MNI space into 4 parallel jobs for speed (each job runs 2500 of the 10K simulations); the filename uses the -log threshold notation:&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 3 perm10k.th30.abs --sim-sign abs --cwp 0.05 --3spaces --bg 4&lt;br /&gt;
&lt;br /&gt;
Options for sim type (listed fastest to slowest) are perm, mc-z, and mc-full.&lt;br /&gt;
&lt;br /&gt;
====Notes====&lt;br /&gt;
The choice of &#039;&#039;&#039;cwpvalthresh&#039;&#039;&#039; is computed by dividing the nominal desired p-value (0.05) by the number of spaces entailed by the simulation. In the above example, if we are only looking at the lh and rh surfaces, we divide 0.05/2 = 0.025. If you were looking at the lh, rh, and MNI305 spaces, you would have to divide by 3 to maintain the same FWE: 0.05/3 = 0.0167.&lt;br /&gt;
&lt;br /&gt;
As indicated above, the uncorrected voxel threshold (p=.001) was used to enable the use of pre-cached simulated values with the &amp;lt;code&amp;gt;--cache&amp;lt;/code&amp;gt; switch. Though the thresholds for .1, .01, and .001 are easy enough to compute (1, 2, 3, respectively), other values can be computed by noting that the threshold corresponds to -1 &amp;amp;times; the base-10 logarithm of the uncorrected p-value. For example, to use cluster-size correction with an uncorrected voxel-wise p-value of 0.005, a spreadsheet or calculator can compute -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.005)=2.301. Or for a p-value of 0.05: -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.05)=1.301. Rather than specifying positive values (&amp;lt;code&amp;gt;pos&amp;lt;/code&amp;gt;), you can also use negative values (&amp;lt;code&amp;gt;neg&amp;lt;/code&amp;gt;) or absolute values (&amp;lt;code&amp;gt;abs&amp;lt;/code&amp;gt;, which I think may be the suggested default?).&lt;br /&gt;
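The -log10 arithmetic above is easy to script. A minimal sketch, assuming python3 is on your PATH (the helper name p_to_thresh is made up for this example):

```shell
# Hypothetical helper (name made up for this example): convert an
# uncorrected p-value into the -log10 threshold that mri_glmfit-sim
# expects, e.g. 0.005 becomes 2.301. Assumes python3 is available.
p_to_thresh() {
  python3 -c "import math, sys; print(round(-math.log10(float(sys.argv[1])), 3))" "$1"
}
p_to_thresh 0.005   # 2.301
p_to_thresh 0.05    # 1.301
p_to_thresh 0.001   # 3.0
```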
&lt;br /&gt;
====Scripting====&lt;br /&gt;
Because mri_glmfit has to be run for each combination of analysis directory (*.lh, *.rh, possibly *.mni305) and contrast, the number of commands you need to run multiplies pretty quickly. You can check out the [[OSGM Script]] page for an example of how to automate these calls to mri_glmfit using a shell script.&lt;br /&gt;
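Such a script might be sketched as follows (a dry run using the example names from above; echo prints each command instead of executing it):

```shell
# Dry-run sketch of looping mri_glmfit over spaces and contrasts.
# The analysis and contrast names follow the examples above; 'echo'
# prints each command instead of running it -- delete it to execute.
ANALYSIS=my_analysis
for CONTRAST in conditionA_vs_conditionB; do
  for HEMI in lh rh; do
    DIR=${SUBJECTS_DIR}/RFX/${ANALYSIS}.${HEMI}/${CONTRAST}
    echo "cd $DIR"
    echo "mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage $HEMI --glmdir glm.wls --nii.gz"
  done
done
```

Add the .mni directory (with its slightly different mri_glmfit call, as shown above) as a third case if you also ran the MNI305 analysis.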
&lt;br /&gt;
==Check It Out==&lt;br /&gt;
This little snippet shows you how to inspect the results of the MNI305 group-level analysis.&lt;br /&gt;
===MNI305===&lt;br /&gt;
 RFXDIR=RFX&lt;br /&gt;
 ANALYSIS=FAM.sm4&lt;br /&gt;
 CONTRAST=task&lt;br /&gt;
 #assuming you used the conventions described above, the glm results can be found&lt;br /&gt;
 #in the directory &#039;&#039;&#039;glm.wls&#039;&#039;&#039;, and the osgm results in a subdirectory called &#039;&#039;&#039;osgm&#039;&#039;&#039;&lt;br /&gt;
 THEDIR=${SUBJECTS_DIR}/${RFXDIR}/${ANALYSIS}.mni305/${CONTRAST}/glm.wls/osgm&lt;br /&gt;
&lt;br /&gt;
 tkmeditfv fsaverage orig.mgz -ov ${THEDIR}/perm.abs.01.sig.cluster.nii.gz \&lt;br /&gt;
  -seg ${THEDIR}/perm.abs.01.sig.ocn.nii.gz  \&lt;br /&gt;
       ${THEDIR}/perm.abs.01.sig.ocn.lut&lt;br /&gt;
&lt;br /&gt;
==What Next?==&lt;br /&gt;
The cluster-level correction step will create a series of files in each of the glm directories. They will be automatically given names that describe the parameters used. For example, the above commands generated files named &#039;&#039;cache.th30.abs.sig.*&#039;&#039;. One of these files is a .annot file, where each cluster is a label in the file. These labels make convenient &#039;&#039;[[Working_with_ROIs_(Freesurfer) | functional ROIs]]&#039;&#039;, wouldn&#039;t you say?&lt;br /&gt;
&lt;br /&gt;
I haven&#039;t done this yet, but it seems that one thing you might want to do is map the labels in the .annot file generated by the group mri_glmfit-sim to each participant using &amp;lt;code&amp;gt;mri_surf2surf&amp;lt;/code&amp;gt;. A script called &amp;lt;code&amp;gt;cpannot.sh&amp;lt;/code&amp;gt; that simplifies this has been written and copied to the UBFS/Scripts/Shell/ folder. This script is invoked thus:&lt;br /&gt;
 cpannot.sh $SOURCESUBJECT $TARGETSUBJECT $ANNOTFILE&lt;br /&gt;
&lt;br /&gt;
e.g.:&lt;br /&gt;
&lt;br /&gt;
 cpannot.sh fsaverage FS_0183 lausanne.annot&lt;br /&gt;
&lt;br /&gt;
Before you do this, you can check out the details of the cluster statistics, which include things like the anatomical labels associated with each cluster and its surface area. The clusters can be further subdivided to make them more similar in size using &amp;lt;code&amp;gt;[[Freesurfer Subparcellation | mris_divide_parcellation]]&amp;lt;/code&amp;gt;. The resulting roughly equal-sized ROIs can then be mapped from the fsaverage surface to each participant to extract time series or other information from that individual.&lt;br /&gt;
&lt;br /&gt;
[[Category:FreeSurfer]]&lt;br /&gt;
[[Category:Functional Analyses]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2244</id>
		<title>Freesurfer Group-Level Analysis (mri glmfit)</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2244"/>
		<updated>2022-07-21T18:53:27Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This procedure is based on information from [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastGroupLevel here]&lt;br /&gt;
&lt;br /&gt;
The group-level (Random-Effects) analysis repeats the contrasts performed for individual subjects on a variance-weighted composite of all your subjects. This will identify voxels for which your contrast effects are consistent across all your participants. For the functional MRI group analysis, you will need to:&lt;br /&gt;
&lt;br /&gt;
*Concatenate individuals into one file (isxconcat-sess)&lt;br /&gt;
*Do not smooth (already smoothed during first-level analysis)&lt;br /&gt;
*For each space (lh/rh/mni305):&lt;br /&gt;
**Run mri_glmfit using weighted least squares (WLS)&lt;br /&gt;
**Correct for multiple comparisons&lt;br /&gt;
*Optionally merge into one volume space&lt;br /&gt;
&lt;br /&gt;
==Before you start==&lt;br /&gt;
Ensure that your SUBJECTS_DIR variable is set to the working directory containing all your participants, and that the first-level analyses have been completed for all participants (using selxavg3-sess).&lt;br /&gt;
&lt;br /&gt;
==Concatenate First-Level Analyses (isxconcat-sess)==&lt;br /&gt;
In your &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR&#039;&#039;&#039;&#039;&#039;, there should be a &#039;&#039;&#039;&#039;&#039;subjects&#039;&#039;&#039;&#039;&#039; file containing the subject ID for each participant for whom you ran selxavg3-sess. Assume that the analysis directories are called &#039;&#039;my_analysis.*&#039;&#039; ([[Configure_mkanalysis-sess | mkanalysis-sess]]), and that the contrast is called &#039;&#039;conditionA_vs_conditionB&#039;&#039; ([[Configure_mkcontrast-sess | mkcontrast-sess]]). You might find it helpful to use environment variables and run the &amp;lt;code&amp;gt;isxconcat-sess&amp;lt;/code&amp;gt; script as follows:&lt;br /&gt;
 ANALYSIS=my_analysis.sm4&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 #left hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #right hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #subcortical&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
&lt;br /&gt;
When finished, a new directory will exist, &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR/RFX&#039;&#039;&#039;&#039;&#039; and contain sub-folders for each of your analyses (in this case, left- and right-hemisphere and MNI305 analysis). Note that your analysis directory does not have to be called &#039;&#039;&#039;RFX&#039;&#039;&#039; (e.g., if multiple people are working in the same subjects folder, you might have an output folder named something like &#039;&#039;&#039;RFX_CHRIS&#039;&#039;&#039;, or whatever suits you).&lt;br /&gt;
&lt;br /&gt;
If you have multiple contrasts, you can just create additional CONTRAST variables and rerun the same scripts, replacing &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;${CONTRAST1}&amp;lt;/span&amp;gt; with alternative contrasts. &lt;br /&gt;
&lt;br /&gt;
Note that if you have already run isxconcat-sess in the past, you may get a warning because an earlier group-level directory will already exist. You should probably just delete the old directory and rerun isxconcat-sess.&lt;br /&gt;
===Protip for Looping Through Contrasts===&lt;br /&gt;
Contrasts will have a corresponding .config file in one or more of the analysis directories found in $SUBJECTS_DIR. If you plan to automate this process using a script that loops over analysis folders and contrasts, the following code snippet will create a text file containing the names of the contrasts with .config files:&lt;br /&gt;
 cd $SUBJECTS_DIR/&amp;lt;span style=&#039;color:red&#039;&amp;gt;my_analysis&amp;lt;/span&amp;gt;.lh&lt;br /&gt;
 ls -1 | grep config | sed &#039;s/\.config//g&#039; &amp;gt; ../list_of_contrasts&lt;br /&gt;
 #now $SUBJECTS_DIR has a text file called list_of_contrasts for use in a loop&lt;br /&gt;
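That list_of_contrasts file can then drive a loop. A minimal sketch (the helper name print_isxconcat_calls is made up for this example; it only prints the commands rather than running them):

```shell
# Sketch: emit one isxconcat-sess call per contrast name listed in the
# file created above (dry run; pipe the output to sh, or drop the echo,
# to actually run the commands). print_isxconcat_calls is a made-up name.
print_isxconcat_calls() {
  cat "$1" | while read -r CONTRAST; do
    test -n "$CONTRAST" || continue
    echo "isxconcat-sess -sf subjects -analysis my_analysis.lh -contrast $CONTRAST -o RFX"
  done
}
# e.g.: print_isxconcat_calls "$SUBJECTS_DIR/list_of_contrasts"
```

Swap my_analysis.lh for the .rh and .mni analysis directories as needed.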
&lt;br /&gt;
==Run mri_glmfit==&lt;br /&gt;
&#039;&#039;&#039;PAY ATTENTION:&#039;&#039;&#039; You will need to run mri_glmfit &#039;&#039;&#039;for each contrast&#039;&#039;&#039; performed &#039;&#039;&#039;for each of the analyses&#039;&#039;&#039;! (But see the section below about scripting.)&lt;br /&gt;
&lt;br /&gt;
In the above example, we have 1 contrast (conditionA_vs_conditionB) to be carried out for the my_analysis.lh, my_analysis.rh and my_analysis.mni305 analysis directories. Of course, if we had also contrasted other conditions, we would have to run mri_glmfit many more times.&lt;br /&gt;
===One-Sample Group Mean===&lt;br /&gt;
The 1-sample group mean (OSGM) tests whether the contrast A-B differs significantly from zero. The example that follows demonstrates how you would run the conditionA_vs_conditionB OSGM analysis for each of the analysis directories as in my example scenario:&lt;br /&gt;
 ANALYSIS=my_analysis&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in left hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in right hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in subcortical MNI space - no need to specify the surface&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --osgm --glmdir glm.wls --nii.gz &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;--eres-save&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons (Surface-Based)===&lt;br /&gt;
After you have run the voxel-wise analysis, you need to correct the contrast maps for multiple comparisons. This is because the previous step runs the analysis on each of many thousands of voxels independently, which implies that a proportion &#039;&#039;alpha&#039;&#039; of the apparently significant voxels are expected to be Type I errors. Again, you will have to perform this step for each analysis directory. It takes very little time for the surface-based analyses, which can use precomputed values.&lt;br /&gt;
&lt;br /&gt;
An explanation of the parameters: &amp;lt;code&amp;gt;--glmdir glm.wls&amp;lt;/code&amp;gt; refers to the directory you specified when you ran the mri_glmfit command. &amp;lt;code&amp;gt;--cache 3 pos&amp;lt;/code&amp;gt; indicates that you want to use the pre-cached simulations with a voxel-wise threshold of 10&amp;lt;sup&amp;gt;-3&amp;lt;/sup&amp;gt; (i.e., p=0.001) and use positive contrast values. &amp;lt;code&amp;gt;--cwpvalthresh .025&amp;lt;/code&amp;gt; indicates you will retain clusters with a size-corrected p-value of &#039;&#039;P&#039;&#039;&amp;lt;.025 (see the &#039;&#039;&#039;Notes&#039;&#039;&#039; below).&lt;br /&gt;
*my_analysis.lh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
*my_analysis.rh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running this step saves thresholded files in the directory indicated by the &amp;lt;code&amp;gt;--glmdir&amp;lt;/code&amp;gt; switch (in the above example, files were saved in glm.wls). There, you will find files named &#039;&#039;cache.*.nii.gz&#039;&#039;. You can then load those files as overlays to see which clusters survived the size thresholding.&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based)===&lt;br /&gt;
The above examples use cached Monte Carlo simulations for speed. The help text for mri_glmfit-sim gives the syntax for running your own sims, which take much longer to execute, but are required for MNI305 space:&lt;br /&gt;
 mri_glmfit-sim --glmdir glmdir \&lt;br /&gt;
  --sim nulltype nsim threshold csdbase \&lt;br /&gt;
  --sim-sign sign&lt;br /&gt;
e.g. Monte Carlo correction for surface space (10K iterations, voxel-wise p&amp;lt;.005, positive differences only, no correction for multiple spaces):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls \&lt;br /&gt;
  --sim mc-z 10000 2.3 mc-z.pos.th005 --sim-sign pos&lt;br /&gt;
&lt;br /&gt;
e.g. permutation correction for MNI space (10K iterations, voxel-wise p&amp;lt;.01, absolute value differences, cluster-wise p&amp;lt;sub&amp;gt;familywise&amp;lt;/sub&amp;gt;=.05 is corrected for lh, rh and mni spaces):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 2 perm.abs.th01 \&lt;br /&gt;
  --sim-sign abs --cwp 0.05 --3spaces&lt;br /&gt;
&lt;br /&gt;
e.g. splitting the permutation correction for MNI space into 4 parallel jobs for speed (each job runs 2500 of the 10K simulations); the filename uses the -log threshold notation:&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 3 perm10k.th30 --sim-sign abs --cwp 0.05 --3spaces --bg 4&lt;br /&gt;
&lt;br /&gt;
Options for sim type (listed fastest to slowest) are perm, mc-z, and mc-full.&lt;br /&gt;
&lt;br /&gt;
====Notes====&lt;br /&gt;
The choice of &#039;&#039;&#039;cwpvalthresh&#039;&#039;&#039; is computed by dividing the nominal desired p-value (0.05) by the number of spaces entailed by the simulation. In the above example, if we are only looking at the lh and rh surfaces, we divide 0.05/2 = 0.025. If you were looking at the lh, rh, and MNI305 spaces, you would have to divide by 3 to maintain the same FWE: 0.05/3 = 0.0167.&lt;br /&gt;
&lt;br /&gt;
As indicated above, the uncorrected voxel threshold (p=.001) was used to enable the use of pre-cached simulated values with the &amp;lt;code&amp;gt;--cache&amp;lt;/code&amp;gt; switch. Though the thresholds for .1, .01, and .001 are easy enough to compute (1, 2, 3, respectively), other values can be computed by noting that the threshold corresponds to -1 &amp;amp;times; the base-10 logarithm of the uncorrected p-value. For example, to use cluster-size correction with an uncorrected voxel-wise p-value of 0.005, a spreadsheet or calculator can compute -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.005)=2.301. Or for a p-value of 0.05: -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.05)=1.301. Rather than specifying positive values (&amp;lt;code&amp;gt;pos&amp;lt;/code&amp;gt;), you can also use negative values (&amp;lt;code&amp;gt;neg&amp;lt;/code&amp;gt;) or absolute values (&amp;lt;code&amp;gt;abs&amp;lt;/code&amp;gt;, which I think may be the suggested default?).&lt;br /&gt;
&lt;br /&gt;
====Scripting====&lt;br /&gt;
Because mri_glmfit has to be run for each combination of analysis directory (*.lh, *.rh, possibly *.mni305) and contrast, the number of commands you need to run multiplies pretty quickly. You can check out the [[OSGM Script]] page for an example of how to automate these calls to mri_glmfit using a shell script.&lt;br /&gt;
&lt;br /&gt;
==Check It Out==&lt;br /&gt;
This little snippet shows you how to inspect the results of the MNI305 group-level analysis.&lt;br /&gt;
===MNI305===&lt;br /&gt;
 RFXDIR=RFX&lt;br /&gt;
 ANALYSIS=FAM.sm4&lt;br /&gt;
 CONTRAST=task&lt;br /&gt;
 #assuming you used the conventions described above, the glm results can be found&lt;br /&gt;
 #in the directory &#039;&#039;&#039;glm.wls&#039;&#039;&#039;, and the osgm results in a subdirectory called &#039;&#039;&#039;osgm&#039;&#039;&#039;&lt;br /&gt;
 THEDIR=${SUBJECTS_DIR}/${RFXDIR}/${ANALYSIS}.mni305/${CONTRAST}/glm.wls/osgm&lt;br /&gt;
&lt;br /&gt;
 tkmeditfv fsaverage orig.mgz -ov ${THEDIR}/perm.abs.01.sig.cluster.nii.gz \&lt;br /&gt;
  -seg ${THEDIR}/perm.abs.01.sig.ocn.nii.gz  \&lt;br /&gt;
       ${THEDIR}/perm.abs.01.sig.ocn.lut&lt;br /&gt;
&lt;br /&gt;
==What Next?==&lt;br /&gt;
The cluster-level correction step will create a series of files in each of the glm directories. They will be automatically given names that describe the parameters used. For example, the above commands generated files named &#039;&#039;cache.th30.abs.sig.*&#039;&#039;. One of these files is a .annot file, where each cluster is a label in the file. These labels make convenient &#039;&#039;[[Working_with_ROIs_(Freesurfer) | functional ROIs]]&#039;&#039;, wouldn&#039;t you say?&lt;br /&gt;
&lt;br /&gt;
I haven&#039;t done this yet, but it seems that one thing you might want to do is map the labels in the .annot file generated by the group mri_glmfit-sim to each participant using &amp;lt;code&amp;gt;mri_surf2surf&amp;lt;/code&amp;gt;. A script called &amp;lt;code&amp;gt;cpannot.sh&amp;lt;/code&amp;gt; that simplifies this has been written and copied to the UBFS/Scripts/Shell/ folder. This script is invoked thus:&lt;br /&gt;
 cpannot.sh $SOURCESUBJECT $TARGETSUBJECT $ANNOTFILE&lt;br /&gt;
&lt;br /&gt;
e.g.:&lt;br /&gt;
&lt;br /&gt;
 cpannot.sh fsaverage FS_0183 lausanne.annot&lt;br /&gt;
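If you prefer to call mri_surf2surf directly rather than through the wrapper, the call might look roughly like this dry-run sketch (flag spellings and paths should be checked against mri_surf2surf --help for your FreeSurfer version):

```shell
# Rough sketch of mapping an fsaverage .annot onto one subject's surface
# with mri_surf2surf, one call per hemisphere. The echo makes this a dry
# run; paths assume the usual $SUBJECTS_DIR/subject/label layout.
SRC=fsaverage
TRG=FS_0183
ANNOT=lausanne
for HEMI in lh rh; do
  echo "mri_surf2surf --srcsubject $SRC --trgsubject $TRG --hemi $HEMI --sval-annot ${SRC}/label/${HEMI}.${ANNOT}.annot --tval ${TRG}/label/${HEMI}.${ANNOT}.annot"
done
```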
&lt;br /&gt;
Before you do this, you can check out the details of the cluster statistics, which include things like the anatomical labels associated with each cluster and its surface area. The clusters can be further subdivided to make them more similar in size using &amp;lt;code&amp;gt;[[Freesurfer Subparcellation | mris_divide_parcellation]]&amp;lt;/code&amp;gt;. The resulting roughly equal-sized ROIs can then be mapped from the fsaverage surface to each participant to extract time series or other information from that individual.&lt;br /&gt;
&lt;br /&gt;
[[Category:FreeSurfer]]&lt;br /&gt;
[[Category:Functional Analyses]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2243</id>
		<title>Freesurfer Group-Level Analysis (mri glmfit)</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2243"/>
		<updated>2022-07-21T18:52:21Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This procedure is based on information from [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastGroupLevel here]&lt;br /&gt;
&lt;br /&gt;
The group-level (Random-Effects) analysis repeats the contrasts performed for individual subjects on a variance-weighted composite of all your subjects. This will identify voxels for which your contrast effects are consistent across all your participants. For the functional MRI group analysis, you will need to:&lt;br /&gt;
&lt;br /&gt;
*Concatenate individuals into one file (isxconcat-sess)&lt;br /&gt;
*Do not smooth (already smoothed during first-level analysis)&lt;br /&gt;
*For each space (lh/rh/mni305):&lt;br /&gt;
**Run mri_glmfit using weighted least squares (WLS)&lt;br /&gt;
**Correct for multiple comparisons&lt;br /&gt;
*Optionally merge into one volume space&lt;br /&gt;
&lt;br /&gt;
==Before you start==&lt;br /&gt;
Ensure that your SUBJECTS_DIR variable is set to the working directory containing all your participants, and that the first-level analyses have been completed for all participants (using selxavg3-sess).&lt;br /&gt;
&lt;br /&gt;
==Concatenate First-Level Analyses (isxconcat-sess)==&lt;br /&gt;
In your &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR&#039;&#039;&#039;&#039;&#039;, there should be a &#039;&#039;&#039;&#039;&#039;subjects&#039;&#039;&#039;&#039;&#039; file containing the subject ID for each participant for whom you ran selxavg3-sess. Assume that the analysis directories are called &#039;&#039;my_analysis.*&#039;&#039; ([[Configure_mkanalysis-sess | mkanalysis-sess]]), and that the contrast is called &#039;&#039;conditionA_vs_conditionB&#039;&#039; ([[Configure_mkcontrast-sess | mkcontrast-sess]]). You might find it helpful to use environment variables and run the &amp;lt;code&amp;gt;isxconcat-sess&amp;lt;/code&amp;gt; script as follows:&lt;br /&gt;
 ANALYSIS=my_analysis.sm4&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 #left hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #right hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #subcortical&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
&lt;br /&gt;
When finished, a new directory will exist, &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR/RFX&#039;&#039;&#039;&#039;&#039; and contain sub-folders for each of your analyses (in this case, left- and right-hemisphere and MNI305 analysis). Note that your analysis directory does not have to be called &#039;&#039;&#039;RFX&#039;&#039;&#039; (e.g., if multiple people are working in the same subjects folder, you might have an output folder named something like &#039;&#039;&#039;RFX_CHRIS&#039;&#039;&#039;, or whatever suits you).&lt;br /&gt;
&lt;br /&gt;
If you have multiple contrasts, you can just create additional CONTRAST variables and rerun the same scripts, replacing &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;${CONTRAST1}&amp;lt;/span&amp;gt; with alternative contrasts. &lt;br /&gt;
&lt;br /&gt;
Note that if you have already run isxconcat-sess in the past, you may get a warning because an earlier group-level directory will already exist. You should probably just delete the old directory and rerun isxconcat-sess.&lt;br /&gt;
===Protip for Looping Through Contrasts===&lt;br /&gt;
Contrasts will have a corresponding .config file in one or more of the analysis directories found in $SUBJECTS_DIR. If you plan to automate this process using a script that loops over analysis folders and contrasts, the following code snippet will create a text file containing the names of the contrasts with .config files:&lt;br /&gt;
 cd $SUBJECTS_DIR/&amp;lt;span style=&#039;color:red&#039;&amp;gt;my_analysis&amp;lt;/span&amp;gt;.lh&lt;br /&gt;
 ls -1 | grep config | sed &#039;s/\.config//g&#039; &amp;gt; ../list_of_contrasts&lt;br /&gt;
 #now $SUBJECTS_DIR has a text file called list_of_contrasts for use in a loop&lt;br /&gt;
&lt;br /&gt;
==Run mri_glmfit==&lt;br /&gt;
&#039;&#039;&#039;PAY ATTENTION:&#039;&#039;&#039; You will need to run mri_glmfit &#039;&#039;&#039;for each contrast&#039;&#039;&#039; performed &#039;&#039;&#039;for each of the analyses&#039;&#039;&#039;! (But see the section below about scripting.)&lt;br /&gt;
&lt;br /&gt;
In the above example, we have 1 contrast (conditionA_vs_conditionB) to be carried out for the my_analysis.lh, my_analysis.rh and my_analysis.mni305 analysis directories. Of course, if we had also contrasted other conditions, we would have to run mri_glmfit many more times.&lt;br /&gt;
===One-Sample Group Mean===&lt;br /&gt;
The 1-sample group mean (OSGM) tests whether the contrast A-B differs significantly from zero. The example that follows demonstrates how you would run the conditionA_vs_conditionB OSGM analysis for each of the analysis directories as in my example scenario:&lt;br /&gt;
 ANALYSIS=my_analysis&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in left hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in right hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in subcortical MNI space - no need to specify the surface&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --osgm --glmdir glm.wls --nii.gz &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;--eres-save&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons (Surface-Based)===&lt;br /&gt;
After you have run the voxel-wise analysis, you need to correct the contrast maps for multiple comparisons. This is because the previous step runs the analysis on each of many thousands of voxels independently, which implies that a proportion &#039;&#039;alpha&#039;&#039; of the apparently significant voxels are expected to be Type I errors. Again, you will have to perform this step for each analysis directory. It takes very little time for the surface-based analyses, which can use precomputed values.&lt;br /&gt;
&lt;br /&gt;
An explanation of the parameters: &amp;lt;code&amp;gt;--glmdir glm.wls&amp;lt;/code&amp;gt; refers to the directory you specified when you ran the mri_glmfit command. &amp;lt;code&amp;gt;--cache 3 pos&amp;lt;/code&amp;gt; indicates that you want to use the pre-cached simulations with a voxel-wise threshold of 10&amp;lt;sup&amp;gt;-3&amp;lt;/sup&amp;gt; (i.e., p=0.001) and use positive contrast values. &amp;lt;code&amp;gt;--cwpvalthresh .025&amp;lt;/code&amp;gt; indicates you will retain clusters with a size-corrected p-value of &#039;&#039;P&#039;&#039;&amp;lt;.025 (see the &#039;&#039;&#039;Notes&#039;&#039;&#039; below).&lt;br /&gt;
*my_analysis.lh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
*my_analysis.rh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running this step saves thresholded files in the directory indicated by the &amp;lt;code&amp;gt;--glmdir&amp;lt;/code&amp;gt; switch (in the above example, files were saved in glm.wls). There, you will find files named &#039;&#039;cache.*.nii.gz&#039;&#039;. You can then load those files as overlays to see which clusters survived the size thresholding.&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based)===&lt;br /&gt;
The above examples use cached Monte Carlo simulations for speed. The help text for mri_glmfit-sim gives the syntax for running your own sims, which take much longer to execute, but are required for MNI305 space:&lt;br /&gt;
 mri_glmfit-sim --glmdir glmdir \&lt;br /&gt;
  --sim nulltype nsim threshold csdbase \&lt;br /&gt;
  --sim-sign sign&lt;br /&gt;
e.g. Monte Carlo correction for surface space (10K iterations, voxel-wise p&amp;lt;.005, positive differences only, no correction for multiple spaces):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls \&lt;br /&gt;
  --sim mc-z 10000 2.3 mc-z.pos.th005 --sim-sign pos&lt;br /&gt;
&lt;br /&gt;
e.g. permutation correction for MNI space (10K iterations, voxel-wise p&amp;lt;.01, absolute value differences, p&amp;lt;sub&amp;gt;familywise&amp;lt;/sub&amp;gt;=.05 is corrected for 3 spaces):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 2 perm.abs.th01 \&lt;br /&gt;
  --sim-sign abs --cwp 0.05 --3spaces&lt;br /&gt;
&lt;br /&gt;
e.g. splitting permutation correction for MNI spaces into 4 parallel jobs for speed (each job will do 2500/10k sims); using -log thresh notation in filenames:&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 3 perm10k.th30 --sim-sign abs --cwp 0.05 --3spaces --bg 4&lt;br /&gt;
&lt;br /&gt;
Options for sim type (listed fastest to slowest) are perm, mc-z, and mc-full.&lt;br /&gt;
&lt;br /&gt;
====Notes====&lt;br /&gt;
The choice of &#039;&#039;&#039;cwpvalthresh&#039;&#039;&#039; is computed by dividing the nominal desired p-value (0.05) by the number of spaces entailed by the simulation. In the above example, where we are only looking at the lh and rh surfaces, we divide 0.05/2 = 0.025. If you were looking at the lh, rh and MNI305 spaces, you would have to divide by 3 to maintain the same FWE: 0.05/3 = 0.0167.&lt;br /&gt;
&lt;br /&gt;
As indicated above, the uncorrected voxel threshold (p=.001) was used to enable the use of pre-cached simulated values with the &amp;lt;code&amp;gt;--cache&amp;lt;/code&amp;gt; switch. The threshold value is -1 &amp;amp;times; the base-10 logarithm of the uncorrected p-value, so .1, .01, and .001 correspond to thresholds of 1, 2, and 3, respectively. Other values can be computed the same way: for an uncorrected voxel-wise p-value of 0.005, a spreadsheet or calculator gives -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.005)=2.301, and for a p-value of 0.05, -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.05)=1.301. Rather than specifying positive values (&amp;lt;code&amp;gt;pos&amp;lt;/code&amp;gt;), you can also use negative values (&amp;lt;code&amp;gt;neg&amp;lt;/code&amp;gt;) or absolute values (&amp;lt;code&amp;gt;abs&amp;lt;/code&amp;gt;, which I think may be the suggested default?).&lt;br /&gt;
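&lt;br /&gt;
If you would rather not open a spreadsheet, the same conversion can be done at the shell. A minimal sketch using plain awk (nothing FreeSurfer-specific is assumed):&lt;br /&gt;

```shell
# Convert an uncorrected voxel-wise p-value into the -log10 threshold
# notation used above (e.g., 0.005 becomes 2.301)
p=0.005
awk -v p="$p" 'BEGIN { printf "%.3f\n", -log(p)/log(10) }'
```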
&lt;br /&gt;
====Scripting====&lt;br /&gt;
Because mri_glmfit has to be run for each combination of analysis directory (*.lh, *.rh, possibly *.mni305) and contrast, the number of commands you need to run multiplies pretty quickly. You can check out the [[OSGM Script]] page for an example of how to automate these calls to mri_glmfit using a shell script.&lt;br /&gt;
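&lt;br /&gt;
As an illustration only (a hypothetical sketch, not the actual [[OSGM Script]]; the analysis and contrast names are the placeholders from the examples above), a dry-run loop over hemispheres and contrasts might look like this, printing each command instead of executing it:&lt;br /&gt;

```shell
# Dry-run sketch: prints one mri_glmfit call per hemisphere/contrast
# combination rather than running it. ANALYSIS and CONTRASTS are the
# placeholder names used elsewhere on this page.
ANALYSIS=my_analysis
CONTRASTS="conditionA_vs_conditionB"
for HEMI in lh rh; do
  for CONTRAST in $CONTRASTS; do
    echo "cd \$SUBJECTS_DIR/RFX/$ANALYSIS.$HEMI/$CONTRAST"
    echo "mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage $HEMI --glmdir glm.wls --nii.gz"
  done
done
```

Replacing the echoes with the real commands turns this into a batch script.&lt;br /&gt;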
&lt;br /&gt;
==Check It Out==&lt;br /&gt;
This little snippet shows you how to inspect the results of the MNI305 group-level analysis.&lt;br /&gt;
===MNI305===&lt;br /&gt;
 RFXDIR=RFX&lt;br /&gt;
 ANALYSIS=FAM.sm4&lt;br /&gt;
 CONTRAST=task&lt;br /&gt;
 #assuming you used the conventions described above, the glm results can be found&lt;br /&gt;
 #in the directory &#039;&#039;&#039;glm.wls&#039;&#039;&#039;, and the osgm results in a subdirectory called &#039;&#039;&#039;osgm&#039;&#039;&#039;&lt;br /&gt;
 THEDIR=${SUBJECTS_DIR}/${RFXDIR}/${ANALYSIS}.mni305/${CONTRAST}/glm.wls/osgm&lt;br /&gt;
&lt;br /&gt;
 tkmeditfv fsaverage orig.mgz -ov ${THEDIR}/perm.abs.01.sig.cluster.nii.gz \&lt;br /&gt;
  -seg ${THEDIR}/perm.abs.01.sig.ocn.nii.gz  \&lt;br /&gt;
       ${THEDIR}/perm.abs.01.sig.ocn.lut&lt;br /&gt;
&lt;br /&gt;
==What Next?==&lt;br /&gt;
The cluster-level correction step will create a series of files in each of the glm directories. They will be automatically given names that describe the parameters used. For example, the above commands generated files named &#039;&#039;cache.th30.abs.sig.*&#039;&#039;. One of these files is a .annot file, where each cluster is a label in the file. These labels make convenient &#039;&#039;[[Working_with_ROIs_(Freesurfer) | functional ROIs]]&#039;&#039;, wouldn&#039;t you say?&lt;br /&gt;
&lt;br /&gt;
I haven&#039;t done this yet, but it seems that one thing you might want to do is map the labels in the .annot file generated by the group mri_glmfit-sim to each participant using &amp;lt;code&amp;gt;mri_surf2surf&amp;lt;/code&amp;gt;. A script called &amp;lt;code&amp;gt;cpannot.sh&amp;lt;/code&amp;gt; that simplifies this has been written and copied to the UBFS/Scripts/Shell/ folder. This script is invoked thus:&lt;br /&gt;
 cpannot.sh $SOURCESUBJECT $TARGETSUBJECT $ANNOTFILE&lt;br /&gt;
&lt;br /&gt;
e.g.:&lt;br /&gt;
&lt;br /&gt;
 cpannot.sh fsaverage FS_0183 lausanne.annot&lt;br /&gt;
&lt;br /&gt;
Before you do this, you can check out the details of the cluster statistics, which include things like the associated anatomical labels for each cluster and its surface area. The clusters can be further subdivided to make them more similar in size using &amp;lt;code&amp;gt;[[Freesurfer Subparcellation | mris_divide_parcellation]]&amp;lt;/code&amp;gt;. Once you have a series of roughly equal-sized ROIs, they can be mapped from the fsaverage surface to each participant to extract time series or other information from that individual.&lt;br /&gt;
&lt;br /&gt;
[[Category:FreeSurfer]]&lt;br /&gt;
[[Category:Functional Analyses]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2242</id>
		<title>Freesurfer Group-Level Analysis (mri glmfit)</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2242"/>
		<updated>2022-07-21T18:50:29Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This procedure is based on information from [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastGroupLevel here]&lt;br /&gt;
&lt;br /&gt;
The group-level (Random-Effects) analysis repeats the contrasts performed for individual subjects on a variance-weighted composite of all your subjects. This will identify voxels for which your contrast effects are consistent across participants. For the functional MRI group analysis you will need to:&lt;br /&gt;
&lt;br /&gt;
*Concatenate individuals into one file (isxconcat-sess)&lt;br /&gt;
*Do not smooth (already smoothed during first-level analysis)&lt;br /&gt;
*For each space (lh/rh/mni305):&lt;br /&gt;
**Run mri_glmfit using weighted least squares (WLS)&lt;br /&gt;
**Correct for multiple comparisons (surface-based, cached)&lt;br /&gt;
**Correct for multiple comparisons (permutation test)&lt;br /&gt;
*Optionally merge into one volume space&lt;br /&gt;
&lt;br /&gt;
==Before you start==&lt;br /&gt;
Ensure that your SUBJECTS_DIR variable is set to the working directory containing all your participants, and that the first-level analyses have been completed for all participants (using selxavg3-sess).&lt;br /&gt;
&lt;br /&gt;
==Concatenate First-Level Analyses (isxconcat-sess)==&lt;br /&gt;
In your &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR&#039;&#039;&#039;&#039;&#039;, there should be a &#039;&#039;&#039;&#039;&#039;subjects&#039;&#039;&#039;&#039;&#039; file containing the subjectID of each participant for whom you ran selxavg3-sess. Assume that the analysis directories are called &#039;&#039;my_analysis.*&#039;&#039; ([[Configure_mkanalysis-sess | mkanalysis-sess]]), and that the contrast is called &#039;&#039;conditionA_vs_conditionB&#039;&#039; ([[Configure_mkcontrast-sess | mkcontrast-sess]]). You might find it helpful to use environment variables and run the &amp;lt;code&amp;gt;isxconcat-sess&amp;lt;/code&amp;gt; script as follows:&lt;br /&gt;
 ANALYSIS=my_analysis.sm4&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 #left hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #right hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #subcortical&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
&lt;br /&gt;
When finished, a new directory, &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR/RFX&#039;&#039;&#039;&#039;&#039;, will exist, containing sub-folders for each of your analyses (in this case, the left-hemisphere, right-hemisphere, and MNI305 analyses). Note that your analysis directory does not have to be called &#039;&#039;&#039;RFX&#039;&#039;&#039; (e.g., if multiple people are working in the same subjects folder, you might have an output folder named something like &#039;&#039;&#039;RFX_CHRIS&#039;&#039;&#039;, or whatever suits you).&lt;br /&gt;
&lt;br /&gt;
If you have multiple contrasts, you can just create additional CONTRAST variables and rerun the same scripts, replacing &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;${CONTRAST1}&amp;lt;/span&amp;gt; with alternative contrasts. &lt;br /&gt;
&lt;br /&gt;
Note that if you have already run isxconcat-sess in the past, you may get a warning because an earlier group-level directory will already exist. You should probably just delete the old directory and rerun isxconcat-sess.&lt;br /&gt;
===Protip for Looping Through Contrasts===&lt;br /&gt;
Contrasts will have a corresponding .config file in one or more of the analysis directories found in $SUBJECTS_DIR. If you plan to automate this process using a script that loops over analysis folders and contrasts, the following code snippet will create a text file containing the names of the contrasts with .config files:&lt;br /&gt;
 cd $SUBJECTS_DIR/&amp;lt;span style=&#039;color:red&#039;&amp;gt;my_analysis&amp;lt;/span&amp;gt;.lh&lt;br /&gt;
 ls -1 | grep config | sed &#039;s/\.config//g&#039; &amp;gt; ../list_of_contrasts&lt;br /&gt;
 #now $SUBJECTS_DIR has a text file called list_of_contrasts for use in a loop&lt;br /&gt;
&lt;br /&gt;
==Run mri_glmfit==&lt;br /&gt;
&#039;&#039;&#039;PAY ATTENTION:&#039;&#039;&#039; You will need to run mri_glmfit &#039;&#039;&#039;for each contrast&#039;&#039;&#039; performed &#039;&#039;&#039;for each of the analyses&#039;&#039;&#039;! (but see the section below about scripting)&lt;br /&gt;
&lt;br /&gt;
In the above example, we have 1 contrast (conditionA_vs_conditionB) to be carried out for the my_analysis.lh, my_analysis.rh and my_analysis.mni305 analysis directories. Of course, if we had also contrasted other conditions, we would have to run mri_glmfit many more times.&lt;br /&gt;
===One-Sample Group Mean===&lt;br /&gt;
The 1-sample group mean (OSGM) tests whether the contrast A-B differs significantly from zero. The example that follows demonstrates how you would run the conditionA_vs_conditionB OSGM analysis for each of the analysis directories as in my example scenario:&lt;br /&gt;
 ANALYSIS=my_analysis&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in left hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in right hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in subcortical MNI space - no need to specify the surface&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --osgm --glmdir glm.wls --nii.gz &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;--eres-save&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons (Surface-Based)===&lt;br /&gt;
After you have run the voxel-wise analysis, you need to correct the contrast maps for multiple comparisons. The previous step runs the analysis on each of many thousands of voxels independently, so a proportion &#039;&#039;alpha&#039;&#039; of the voxels flagged as significant are expected to be Type I errors. Again, you will have to perform this step for each analysis directory. It takes very little time for the surface-based analyses, which can use precomputed values.&lt;br /&gt;
&lt;br /&gt;
An explanation of the parameters: &amp;lt;code&amp;gt;--glmdir glm.wls&amp;lt;/code&amp;gt; refers to the directory you specified when you ran the mri_glmfit command. &amp;lt;code&amp;gt;--cache 3 pos&amp;lt;/code&amp;gt; indicates that you want to use the pre-cached simulation with a voxel-wise threshold of 10&amp;lt;sup&amp;gt;-3&amp;lt;/sup&amp;gt; (i.e., &#039;&#039;p&#039;&#039;=0.001), keeping only positive contrast values. &amp;lt;code&amp;gt;--cwpvalthresh .025&amp;lt;/code&amp;gt; indicates that you will retain clusters with a size-corrected p-value of &#039;&#039;P&#039;&#039;&amp;lt;.025 (see the &#039;&#039;&#039;Notes&#039;&#039;&#039; below).&lt;br /&gt;
*my_analysis.lh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
*my_analysis.rh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running this step saves thresholded files in the directory indicated by the &amp;lt;code&amp;gt;--glmdir&amp;lt;/code&amp;gt; switch (in the above example, files were saved in glm.wls). There, you will find files named &#039;&#039;cache.*.nii.gz&#039;&#039;. You can then load those files as overlays to see which clusters survived the size thresholding.&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based)===&lt;br /&gt;
The above examples use cached Monte Carlo simulations for speed. The help text for mri_glmfit-sim gives the syntax for running your own sims, which take much longer to execute, but are required for MNI305 space:&lt;br /&gt;
 mri_glmfit-sim --glmdir glmdir \&lt;br /&gt;
  --sim nulltype nsim threshold csdbase \&lt;br /&gt;
  --sim-sign sign&lt;br /&gt;
e.g. Monte Carlo correction for surface space (10K iterations, voxel-wise p&amp;lt;.005, positive differences only, no correction for multiple spaces):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls \&lt;br /&gt;
  --sim mc-z 10000 2.3 mc-z.pos.th005 --sim-sign pos&lt;br /&gt;
&lt;br /&gt;
e.g. permutation correction for MNI space (10K iterations, voxel-wise p&amp;lt;.01, absolute value differences, correcting for 3 spaces):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 2 perm.abs.th01 \&lt;br /&gt;
  --sim-sign abs --cwp 0.05 --3spaces&lt;br /&gt;
&lt;br /&gt;
e.g. splitting permutation correction for MNI spaces into 4 parallel jobs for speed (each job will do 2500/10k sims); using -log thresh notation in filenames:&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 3 perm10k.th30 --sim-sign abs --cwp 0.05 --3spaces --bg 4&lt;br /&gt;
&lt;br /&gt;
Options for sim type (listed fastest to slowest) are perm, mc-z, and mc-full.&lt;br /&gt;
&lt;br /&gt;
====Notes====&lt;br /&gt;
The choice of &#039;&#039;&#039;cwpvalthresh&#039;&#039;&#039; is computed by dividing the nominal desired p-value (0.05) by the number of spaces entailed by the simulation. In the above example, where we are only looking at the lh and rh surfaces, we divide 0.05/2 = 0.025. If you were looking at the lh, rh and MNI305 spaces, you would have to divide by 3 to maintain the same FWE: 0.05/3 = 0.0167.&lt;br /&gt;
&lt;br /&gt;
As indicated above, the uncorrected voxel threshold (p=.001) was used to enable the use of pre-cached simulated values with the &amp;lt;code&amp;gt;--cache&amp;lt;/code&amp;gt; switch. The threshold value is -1 &amp;amp;times; the base-10 logarithm of the uncorrected p-value, so .1, .01, and .001 correspond to thresholds of 1, 2, and 3, respectively. Other values can be computed the same way: for an uncorrected voxel-wise p-value of 0.005, a spreadsheet or calculator gives -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.005)=2.301, and for a p-value of 0.05, -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.05)=1.301. Rather than specifying positive values (&amp;lt;code&amp;gt;pos&amp;lt;/code&amp;gt;), you can also use negative values (&amp;lt;code&amp;gt;neg&amp;lt;/code&amp;gt;) or absolute values (&amp;lt;code&amp;gt;abs&amp;lt;/code&amp;gt;, which I think may be the suggested default?).&lt;br /&gt;
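&lt;br /&gt;
If you would rather not open a spreadsheet, the same conversion can be done at the shell. A minimal sketch using plain awk (nothing FreeSurfer-specific is assumed):&lt;br /&gt;

```shell
# Convert an uncorrected voxel-wise p-value into the -log10 threshold
# notation used above (e.g., 0.005 becomes 2.301)
p=0.005
awk -v p="$p" 'BEGIN { printf "%.3f\n", -log(p)/log(10) }'
```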
&lt;br /&gt;
====Scripting====&lt;br /&gt;
Because mri_glmfit has to be run for each combination of analysis directory (*.lh, *.rh, possibly *.mni305) and contrast, the number of commands you need to run multiplies pretty quickly. You can check out the [[OSGM Script]] page for an example of how to automate these calls to mri_glmfit using a shell script.&lt;br /&gt;
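&lt;br /&gt;
As an illustration only (a hypothetical sketch, not the actual [[OSGM Script]]; the analysis and contrast names are the placeholders from the examples above), a dry-run loop over hemispheres and contrasts might look like this, printing each command instead of executing it:&lt;br /&gt;

```shell
# Dry-run sketch: prints one mri_glmfit call per hemisphere/contrast
# combination rather than running it. ANALYSIS and CONTRASTS are the
# placeholder names used elsewhere on this page.
ANALYSIS=my_analysis
CONTRASTS="conditionA_vs_conditionB"
for HEMI in lh rh; do
  for CONTRAST in $CONTRASTS; do
    echo "cd \$SUBJECTS_DIR/RFX/$ANALYSIS.$HEMI/$CONTRAST"
    echo "mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage $HEMI --glmdir glm.wls --nii.gz"
  done
done
```

Replacing the echoes with the real commands turns this into a batch script.&lt;br /&gt;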
&lt;br /&gt;
==Check It Out==&lt;br /&gt;
This little snippet shows you how to inspect the results of the MNI305 group-level analysis.&lt;br /&gt;
===MNI305===&lt;br /&gt;
 RFXDIR=RFX&lt;br /&gt;
 ANALYSIS=FAM.sm4&lt;br /&gt;
 CONTRAST=task&lt;br /&gt;
 #assuming you used the conventions described above, the glm results can be found&lt;br /&gt;
 #in the directory &#039;&#039;&#039;glm.wls&#039;&#039;&#039;, and the osgm results in a subdirectory called &#039;&#039;&#039;osgm&#039;&#039;&#039;&lt;br /&gt;
 THEDIR=${SUBJECTS_DIR}/${RFXDIR}/${ANALYSIS}.mni305/${CONTRAST}/glm.wls/osgm&lt;br /&gt;
&lt;br /&gt;
 tkmeditfv fsaverage orig.mgz -ov ${THEDIR}/perm.abs.01.sig.cluster.nii.gz \&lt;br /&gt;
  -seg ${THEDIR}/perm.abs.01.sig.ocn.nii.gz  \&lt;br /&gt;
       ${THEDIR}/perm.abs.01.sig.ocn.lut&lt;br /&gt;
&lt;br /&gt;
==What Next?==&lt;br /&gt;
The cluster-level correction step will create a series of files in each of the glm directories. They will be automatically given names that describe the parameters used. For example, the above commands generated files named &#039;&#039;cache.th30.abs.sig.*&#039;&#039;. One of these files is a .annot file, where each cluster is a label in the file. These labels make convenient &#039;&#039;[[Working_with_ROIs_(Freesurfer) | functional ROIs]]&#039;&#039;, wouldn&#039;t you say?&lt;br /&gt;
&lt;br /&gt;
I haven&#039;t done this yet, but it seems that one thing you might want to do is map the labels in the .annot file generated by the group mri_glmfit-sim to each participant using &amp;lt;code&amp;gt;mri_surf2surf&amp;lt;/code&amp;gt;. A script called &amp;lt;code&amp;gt;cpannot.sh&amp;lt;/code&amp;gt; that simplifies this has been written and copied to the UBFS/Scripts/Shell/ folder. This script is invoked thus:&lt;br /&gt;
 cpannot.sh $SOURCESUBJECT $TARGETSUBJECT $ANNOTFILE&lt;br /&gt;
&lt;br /&gt;
e.g.:&lt;br /&gt;
&lt;br /&gt;
 cpannot.sh fsaverage FS_0183 lausanne.annot&lt;br /&gt;
&lt;br /&gt;
Before you do this, you can check out the details of the cluster statistics, which include things like the associated anatomical labels for each cluster and its surface area. The clusters can be further subdivided to make them more similar in size using &amp;lt;code&amp;gt;[[Freesurfer Subparcellation | mris_divide_parcellation]]&amp;lt;/code&amp;gt;. Once you have a series of roughly equal-sized ROIs, they can be mapped from the fsaverage surface to each participant to extract time series or other information from that individual.&lt;br /&gt;
&lt;br /&gt;
[[Category:FreeSurfer]]&lt;br /&gt;
[[Category:Functional Analyses]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2241</id>
		<title>Freesurfer Group-Level Analysis (mri glmfit)</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2241"/>
		<updated>2022-07-21T18:49:06Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This procedure is based on information from [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastGroupLevel here]&lt;br /&gt;
&lt;br /&gt;
The group-level (Random-Effects) analysis repeats the contrasts performed for individual subjects on a variance-weighted composite of all your subjects. This will identify voxels for which your contrast effects are consistent across participants. For the functional MRI group analysis you will need to:&lt;br /&gt;
&lt;br /&gt;
*Concatenate individuals into one file (isxconcat-sess)&lt;br /&gt;
*Do not smooth (already smoothed during first-level analysis)&lt;br /&gt;
*For each space (lh/rh/mni305):&lt;br /&gt;
**Run mri_glmfit using weighted least squares (WLS)&lt;br /&gt;
**Correct for multiple comparisons (surface-based, cached)&lt;br /&gt;
**Correct for multiple comparisons (permutation test)&lt;br /&gt;
*Optionally merge into one volume space&lt;br /&gt;
&lt;br /&gt;
==Before you start==&lt;br /&gt;
Ensure that your SUBJECTS_DIR variable is set to the working directory containing all your participants, and that the first-level analyses have been completed for all participants (using selxavg3-sess).&lt;br /&gt;
&lt;br /&gt;
==Concatenate First-Level Analyses (isxconcat-sess)==&lt;br /&gt;
In your &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR&#039;&#039;&#039;&#039;&#039;, there should be a &#039;&#039;&#039;&#039;&#039;subjects&#039;&#039;&#039;&#039;&#039; file containing the subjectID of each participant for whom you ran selxavg3-sess. Assume that the analysis directories are called &#039;&#039;my_analysis.*&#039;&#039; ([[Configure_mkanalysis-sess | mkanalysis-sess]]), and that the contrast is called &#039;&#039;conditionA_vs_conditionB&#039;&#039; ([[Configure_mkcontrast-sess | mkcontrast-sess]]). You might find it helpful to use environment variables and run the &amp;lt;code&amp;gt;isxconcat-sess&amp;lt;/code&amp;gt; script as follows:&lt;br /&gt;
 ANALYSIS=my_analysis.sm4&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 #left hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #right hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #subcortical&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
&lt;br /&gt;
When finished, a new directory, &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR/RFX&#039;&#039;&#039;&#039;&#039;, will exist, containing sub-folders for each of your analyses (in this case, the left-hemisphere, right-hemisphere, and MNI305 analyses). Note that your analysis directory does not have to be called &#039;&#039;&#039;RFX&#039;&#039;&#039; (e.g., if multiple people are working in the same subjects folder, you might have an output folder named something like &#039;&#039;&#039;RFX_CHRIS&#039;&#039;&#039;, or whatever suits you).&lt;br /&gt;
&lt;br /&gt;
If you have multiple contrasts, you can just create additional CONTRAST variables and rerun the same scripts, replacing &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;${CONTRAST1}&amp;lt;/span&amp;gt; with alternative contrasts. &lt;br /&gt;
&lt;br /&gt;
Note that if you have already run isxconcat-sess in the past, you may get a warning because an earlier group-level directory will already exist. You should probably just delete the old directory and rerun isxconcat-sess.&lt;br /&gt;
===Protip for Looping Through Contrasts===&lt;br /&gt;
Contrasts will have a corresponding .config file in one or more of the analysis directories found in $SUBJECTS_DIR. If you plan to automate this process using a script that loops over analysis folders and contrasts, the following code snippet will create a text file containing the names of the contrasts with .config files:&lt;br /&gt;
 cd $SUBJECTS_DIR/&amp;lt;span style=&#039;color:red&#039;&amp;gt;my_analysis&amp;lt;/span&amp;gt;.lh&lt;br /&gt;
 ls -1 | grep config | sed &#039;s/\.config//g&#039; &amp;gt; ../list_of_contrasts&lt;br /&gt;
 #now $SUBJECTS_DIR has a text file called list_of_contrasts for use in a loop&lt;br /&gt;
&lt;br /&gt;
==Run mri_glmfit==&lt;br /&gt;
&#039;&#039;&#039;PAY ATTENTION:&#039;&#039;&#039; You will need to run mri_glmfit &#039;&#039;&#039;for each contrast&#039;&#039;&#039; performed &#039;&#039;&#039;for each of the analyses&#039;&#039;&#039;! (but see the section below about scripting)&lt;br /&gt;
&lt;br /&gt;
In the above example, we have 1 contrast (conditionA_vs_conditionB) to be carried out for the my_analysis.lh, my_analysis.rh and my_analysis.mni305 analysis directories. Of course, if we had also contrasted other conditions, we would have to run mri_glmfit many more times.&lt;br /&gt;
===One-Sample Group Mean===&lt;br /&gt;
The 1-sample group mean (OSGM) tests whether the contrast A-B differs significantly from zero. The example that follows demonstrates how you would run the conditionA_vs_conditionB OSGM analysis for each of the analysis directories as in my example scenario:&lt;br /&gt;
 ANALYSIS=my_analysis&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in left hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in right hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in subcortical MNI space - no need to specify the surface&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --osgm --glmdir glm.wls --nii.gz &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;--eres-save&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons (Surface-Based)===&lt;br /&gt;
After you have run the voxel-wise analysis, you need to correct the contrast maps for multiple comparisons. The previous step runs the analysis on each of many thousands of voxels independently, so a proportion &#039;&#039;alpha&#039;&#039; of the voxels flagged as significant are expected to be Type I errors. Again, you will have to perform this step for each analysis directory. It takes very little time for the surface-based analyses, which can use precomputed values.&lt;br /&gt;
&lt;br /&gt;
An explanation of the parameters: &amp;lt;code&amp;gt;--glmdir glm.wls&amp;lt;/code&amp;gt; refers to the directory you specified when you ran the mri_glmfit command. &amp;lt;code&amp;gt;--cache 3 pos&amp;lt;/code&amp;gt; indicates that you want to use the pre-cached simulation with a voxel-wise threshold of 10&amp;lt;sup&amp;gt;-3&amp;lt;/sup&amp;gt; (i.e., &#039;&#039;p&#039;&#039;=0.001), keeping only positive contrast values. &amp;lt;code&amp;gt;--cwpvalthresh .025&amp;lt;/code&amp;gt; indicates that you will retain clusters with a size-corrected p-value of &#039;&#039;P&#039;&#039;&amp;lt;.025 (see the &#039;&#039;&#039;Notes&#039;&#039;&#039; below).&lt;br /&gt;
*my_analysis.lh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
*my_analysis.rh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running this step saves thresholded files in the directory indicated by the &amp;lt;code&amp;gt;--glmdir&amp;lt;/code&amp;gt; switch (in the above example, files were saved in glm.wls). There, you will find files named &#039;&#039;cache.*.nii.gz&#039;&#039;. You can then load those files as overlays to see which clusters survived the size thresholding.&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based)===&lt;br /&gt;
The above examples use cached Monte Carlo simulations for speed. The help text for mri_glmfit-sim gives the syntax for running your own sims, which take much longer to execute, but are required for MNI305 space:&lt;br /&gt;
 mri_glmfit-sim --glmdir glmdir \&lt;br /&gt;
  --sim nulltype nsim threshold csdbase \&lt;br /&gt;
  --sim-sign sign&lt;br /&gt;
e.g. Monte Carlo correction for surface space (10K iterations, voxel-wise p&amp;lt;.005, positive differences only):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls \&lt;br /&gt;
  --sim mc-z 10000 2.3 mc-z.pos.th005 --sim-sign pos&lt;br /&gt;
&lt;br /&gt;
e.g. permutation correction for MNI space (10K iterations, voxel-wise p&amp;lt;.01, absolute value differences):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 2 perm.abs.th01 \&lt;br /&gt;
  --sim-sign abs --cwp 0.05 --3spaces&lt;br /&gt;
&lt;br /&gt;
e.g. splitting permutation correction for MNI spaces into 4 parallel jobs for speed (each job will do 2500/10k sims); using -log thresh notation in filenames:&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 3 perm10k.th30 --sim-sign abs --cwp 0.05 --3spaces --bg 4&lt;br /&gt;
&lt;br /&gt;
Options for sim type (listed fastest to slowest) are perm, mc-z, and mc-full.&lt;br /&gt;
&lt;br /&gt;
====Notes====&lt;br /&gt;
The choice of &#039;&#039;&#039;cwpvalthresh&#039;&#039;&#039; is computed by dividing the nominal desired p-value (0.05) by the number of spaces entailed by the simulation. In the above example, where we are only looking at the lh and rh surfaces, we divide 0.05/2 = 0.025. If you were looking at the lh, rh and MNI305 spaces, you would have to divide by 3 to maintain the same FWE: 0.05/3 = 0.0167.&lt;br /&gt;
&lt;br /&gt;
As indicated above, the uncorrected voxel threshold (p=.001) was used to enable the use of pre-cached simulated values with the &amp;lt;code&amp;gt;--cache&amp;lt;/code&amp;gt; switch. Though .1, .01, and .001 are easy enough to convert (1, 2, 3, respectively), other values can be computed by noting that the threshold corresponds to -1 &amp;amp;times; the base-10 logarithm of the uncorrected p-value. For example, to use cluster-size correction with an uncorrected voxel-wise p-value of 0.005, a spreadsheet or calculator can compute -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.005)=2.301; for a p-value of 0.05, -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.05)=1.301. Rather than specifying positive values (&amp;lt;code&amp;gt;pos&amp;lt;/code&amp;gt;), you can also use negative values (&amp;lt;code&amp;gt;neg&amp;lt;/code&amp;gt;) or absolute values (&amp;lt;code&amp;gt;abs&amp;lt;/code&amp;gt;, which I think may be the suggested default?).&lt;br /&gt;
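That conversion is easy to script. A minimal sketch (the &#039;&#039;p_to_thresh&#039;&#039; helper below is hypothetical, not part of FreeSurfer):&lt;br /&gt;

```shell
# Convert an uncorrected voxel-wise p-value into the -log10 threshold
# notation used by mri_glmfit-sim (e.g. 0.005 -> 2.301, 0.05 -> 1.301).
p_to_thresh() {
  awk -v p="$1" 'BEGIN { printf "%.3f\n", -log(p)/log(10) }'
}

p_to_thresh 0.005   # 2.301
p_to_thresh 0.05    # 1.301
```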
&lt;br /&gt;
====Scripting====&lt;br /&gt;
Because mri_glmfit has to be run for each combination of analysis directory (*.lh, *.rh, possibly *.mni305) and contrast, the number of commands you need to run multiplies pretty quickly. You can check out the [[OSGM Script]] page for an example of how to automate these calls to mri_glmfit using a shell script.&lt;br /&gt;
&lt;br /&gt;
==Check It Out==&lt;br /&gt;
This little snippet shows you how to inspect the results of the MNI305 group-level analysis.&lt;br /&gt;
===MNI305===&lt;br /&gt;
 RFXDIR=RFX&lt;br /&gt;
 ANALYSIS=FAM.sm4&lt;br /&gt;
 CONTRAST=task&lt;br /&gt;
 #assuming you used the conventions described above, the glm results can be found&lt;br /&gt;
 #in the directory &#039;&#039;&#039;glm.wls&#039;&#039;&#039;, and the osgm results in a subdirectory called &#039;&#039;&#039;osgm&#039;&#039;&#039;&lt;br /&gt;
 THEDIR=${SUBJECTS_DIR}/${RFXDIR}/${ANALYSIS}.mni305/${CONTRAST}/glm.wls/osgm&lt;br /&gt;
&lt;br /&gt;
 tkmeditfv fsaverage orig.mgz -ov ${THEDIR}/perm.abs.01.sig.cluster.nii.gz \&lt;br /&gt;
  -seg ${THEDIR}/perm.abs.01.sig.ocn.nii.gz  \&lt;br /&gt;
       ${THEDIR}/perm.abs.01.sig.ocn.lut&lt;br /&gt;
&lt;br /&gt;
==What Next?==&lt;br /&gt;
The cluster-level correction step will create a series of files in each of the glm directories. They will be automatically given names that describe the parameters used. For example, the above commands generated files named &#039;&#039;cache.th30.abs.sig.*&#039;&#039;. One of these files is a .annot file, where each cluster is a label in the file. These labels make convenient &#039;&#039;[[Working_with_ROIs_(Freesurfer) | functional ROIs]]&#039;&#039;, wouldn&#039;t you say?&lt;br /&gt;
&lt;br /&gt;
I haven&#039;t done this yet, but it seems that one thing you might want to do is map the labels in the .annot file generated by the group mri_glmfit-sim to each participant using &amp;lt;code&amp;gt;mri_surf2surf&amp;lt;/code&amp;gt;. A script called &amp;lt;code&amp;gt;cpannot.sh&amp;lt;/code&amp;gt; that simplifies this has been written and copied to the UBFS/Scripts/Shell/ folder. This script is invoked thus:&lt;br /&gt;
 cpannot.sh $SOURCESUBJECT $TARGETSUBJECT $ANNOTFILE&lt;br /&gt;
&lt;br /&gt;
e.g.:&lt;br /&gt;
&lt;br /&gt;
 cpannot.sh fsaverage FS_0183 lausanne.annot&lt;br /&gt;
&lt;br /&gt;
Before you do this, you can check out the details of the cluster statistics, which include things like the anatomical labels associated with each cluster and its surface area. The clusters can be further subdivided to make them more similar in size using &amp;lt;code&amp;gt;[[Freesurfer Subparcellation | mris_divide_parcellation]]&amp;lt;/code&amp;gt;. These roughly equal-sized ROIs can then be mapped from the fsaverage surface to each participant to extract time series or other information from that individual.&lt;br /&gt;
&lt;br /&gt;
[[Category:FreeSurfer]]&lt;br /&gt;
[[Category:Functional Analyses]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=OSGM_Script&amp;diff=2240</id>
		<title>OSGM Script</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=OSGM_Script&amp;diff=2240"/>
		<updated>2022-07-21T17:54:08Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Contrasts */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A BASH script can be written to take the tedium out of running group level analyses for multiple contrasts.&lt;br /&gt;
&lt;br /&gt;
==Example Script==&lt;br /&gt;
Below is a script I wrote to run mri_glmfit and mri_glmfit-sim on a set of 3 contrasts. It accomplishes this by executing the commands within a nested loop.&lt;br /&gt;
&lt;br /&gt;
 #!/bin/bash&lt;br /&gt;
 SUBJECTS_DIR=`pwd`&lt;br /&gt;
 RFXDIR=RFX #This is the name of the directory into which the group-level analyses will go&lt;br /&gt;
 HEMIS=( &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt; ) #Which hemispheres are you analyzing? Possible choices are [lh | rh | mni305]&lt;br /&gt;
 ANALYSES=( LDT.fsaverage.sm4.down ) #Name of the analysis directory created by mkanalysis-sess&lt;br /&gt;
 CONTRASTS=( &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;task&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;w-v-pw&amp;lt;/span&amp;gt; &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;hi-v-lo&amp;lt;/span&amp;gt; ) #Names of all contrasts you&#039;re interested in&lt;br /&gt;
 &lt;br /&gt;
 for hemi in &amp;quot;${HEMIS[@]}&amp;quot;&lt;br /&gt;
 do&lt;br /&gt;
    for anls in &amp;quot;${ANALYSES[@]}&amp;quot;&lt;br /&gt;
    do&lt;br /&gt;
 	#the analysis directory contains each contrast directory&lt;br /&gt;
 	ANDIR=${SUBJECTS_DIR}/${RFXDIR}/${anls}.${hemi}&lt;br /&gt;
 	#move to analysis directory for the first time&lt;br /&gt;
 	cd ${ANDIR}&lt;br /&gt;
        for con in &amp;quot;${CONTRASTS[@]}&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
 	    cd ${con}&lt;br /&gt;
 	    pwd&lt;br /&gt;
 	    mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm \&lt;br /&gt;
 		--surface fsaverage ${hemi} --glmdir glm.wls --nii.gz&lt;br /&gt;
 	    #correct for multiple comparisons&lt;br /&gt;
            mri_glmfit-sim --glmdir glm.wls --cache 2 abs --cwpvalthresh .0167&lt;br /&gt;
 	    cd ${ANDIR} #go back to the analysis directory&lt;br /&gt;
        done&lt;br /&gt;
    done&lt;br /&gt;
 done&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
 #There appears to be no cached file for multiple comparison correction in &lt;br /&gt;
 #MNI space, so this loop handles the mni-space analysis&lt;br /&gt;
 for anls in &amp;quot;${ANALYSES[@]}&amp;quot;&lt;br /&gt;
   do&lt;br /&gt;
        #the analysis directory contains each contrast directory&lt;br /&gt;
        ANDIR=${SUBJECTS_DIR}/${RFXDIR}/${anls}.mni&lt;br /&gt;
        #move to analysis directory for the first time&lt;br /&gt;
        cd ${ANDIR}&lt;br /&gt;
        for con in &amp;quot;${CONTRASTS[@]}&amp;quot;&lt;br /&gt;
        do&lt;br /&gt;
            cd ${con}&lt;br /&gt;
            mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm \&lt;br /&gt;
                --glmdir glm.wls --nii.gz&lt;br /&gt;
            #correct for multiple comparisons&lt;br /&gt;
            mri_glmfit-sim --glmdir glm.wls --sim perm 1000 2 perm.abs.01 \&lt;br /&gt;
              --sim-sign abs --cwp 0.05 --3spaces&lt;br /&gt;
            cd ${ANDIR} #go back to the analysis directory&lt;br /&gt;
       done&lt;br /&gt;
   done&lt;br /&gt;
&lt;br /&gt;
This script ran mri_glmfit on the fsaverage surface for each hemisphere/surface appearing in the HEMIS array. The output volumes go into the glmdir (in this case a directory called glm.wls) and will be saved as gzipped .nii files. The script also did cluster size thresholding using a cached Monte Carlo simulation with an uncorrected voxel-wise p-value of .01 (&amp;quot;cache 2&amp;quot;, or p=10&amp;lt;sup&amp;gt;-2&amp;lt;/sup&amp;gt;) for absolute-value t-scores (&amp;quot;abs&amp;quot;). The cluster-wise threshold (.0167) was obtained by dividing .05 by 3, the total number of spaces examined (lh, rh, and mni305). A more clever way would be to determine the cwpvalthresh by calling the Unix &#039;&#039;bc&#039;&#039; utility to calculate it according to the size of the HEMIS array.&lt;br /&gt;
&lt;br /&gt;
== What values do I use for ANALYSES and CONTRASTS? ==&lt;br /&gt;
The above script uses nested for-loops that iterate through each entry in the ANALYSES and CONTRASTS arrays. The script comments tell you what these array values represent, but it might not be clear what the correct values should be.&lt;br /&gt;
&lt;br /&gt;
=== Analyses ===&lt;br /&gt;
When you ran mkanalysis-sess in your &amp;lt;code&amp;gt;SUBJECTS_DIR&amp;lt;/code&amp;gt;, 1, 2 or 3 directories were created, and will be sitting alongside your individual participant directories. These will have names like &#039;&#039;my_analysis.lh&#039;&#039;, &#039;&#039;my_analysis.rh&#039;&#039;, and &#039;&#039;my_analysis.mni305&#039;&#039;. In this case, your ANALYSES array declaration should read:&lt;br /&gt;
 ANALYSES=( my_analysis )&lt;br /&gt;
Note that you omit the part of the directory name that indicates the surface/space (lh/rh/mni305).&lt;br /&gt;
It is our usual practice to use analysis folder names that give some indication of the processing that was done and the surface that was used (self or fsaverage). In the above script, the analysis folder is called &#039;&#039;LDT.fsaverage.sm4.down&#039;&#039; because it used par-files called LDT and the fsaverage surface for the first-level analysis, and the data were smoothed with a 4mm kernel and slice-time corrected in the down direction.&lt;br /&gt;
&lt;br /&gt;
=== Contrasts ===&lt;br /&gt;
In the analysis directory will be a series of .mat and .config files: one for each contrast you defined using mkcontrast-sess. Your set of contrast names should be the names of each of these contrasts you would like to see at the group level -- just omit the .mat or .config part of the filename.&lt;br /&gt;
&lt;br /&gt;
[[Category: FreeSurfer]]&lt;br /&gt;
[[Category: Functional Analyses]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2239</id>
		<title>Freesurfer Group-Level Analysis (mri glmfit)</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2239"/>
		<updated>2022-07-21T17:30:14Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Correct for Multiple Comparisons (Surface-Based) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This procedure is based on information from [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastGroupLevel here]&lt;br /&gt;
&lt;br /&gt;
The group-level (Random-Effects) analysis repeats the contrasts performed for individual subjects on a variance-weighted composite of all your subjects. This will identify voxels for which your contrast effects are consistent across all your participants. For the functional MRI group analysis you will need to:&lt;br /&gt;
&lt;br /&gt;
*Concatenate individuals into one file (isxconcat-sess)&lt;br /&gt;
*Do not smooth (already smoothed during first-level analysis)&lt;br /&gt;
*For each space (lh/rh/mni305):&lt;br /&gt;
**Run mri_glmfit using weighted least squares (WLS)&lt;br /&gt;
**Correct for multiple comparisons&lt;br /&gt;
*Optionally merge into one volume space&lt;br /&gt;
&lt;br /&gt;
==Before you start==&lt;br /&gt;
Ensure that your SUBJECTS_DIR variable is set to the working directory containing all your participants, and that the first-level analyses have been completed for all participants (using selxavg-3)&lt;br /&gt;
&lt;br /&gt;
==Concatenate First-Level Analyses (isxconcat-sess)==&lt;br /&gt;
In your &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR&#039;&#039;&#039;&#039;&#039;, there should be a &#039;&#039;&#039;&#039;&#039;subjects&#039;&#039;&#039;&#039;&#039; file containing the subject ID for each participant for whom you ran selxavg-3. Assume that the analysis directories are called &#039;&#039;my_analysis.*&#039;&#039; ([[Configure_mkanalysis-sess | mkanalysis-sess]]), and that the contrast is called &#039;&#039;conditionA_vs_conditionB&#039;&#039; ([[Configure_mkcontrast-sess | mkcontrast-sess]]). You might find it helpful to use environment variables and run the &amp;lt;code&amp;gt;isxconcat-sess&amp;lt;/code&amp;gt; script as follows:&lt;br /&gt;
 ANALYSIS=my_analysis.sm4&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 #left hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #right hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #subcortical&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
&lt;br /&gt;
When finished, a new directory, &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR/RFX&#039;&#039;&#039;&#039;&#039;, will exist, containing sub-folders for each of your analyses (in this case, the left-hemisphere, right-hemisphere, and MNI305 analyses). Note that your analysis directory does not have to be called &#039;&#039;&#039;RFX&#039;&#039;&#039; (e.g., if multiple people are working in the same subjects folder, you might have an output folder named something like &#039;&#039;&#039;RFX_CHRIS&#039;&#039;&#039;, or whatever suits you).&lt;br /&gt;
&lt;br /&gt;
If you have multiple contrasts, you can just create additional CONTRAST variables and rerun the same scripts, replacing &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;${CONTRAST1}&amp;lt;/span&amp;gt; with alternative contrasts. &lt;br /&gt;
&lt;br /&gt;
Note that if you have already run isxconcat-sess in the past, you may get a warning because an earlier group-level directory will already exist. You should probably just delete the old directory and rerun isxconcat-sess.&lt;br /&gt;
===Protip for Looping Through Contrasts===&lt;br /&gt;
Contrasts will have a corresponding .config file in one or more of the analysis directories found in $SUBJECTS_DIR. If you plan to automate this process using a script that loops over analysis folders and contrasts, the following code snippet will create a text file containing the names of the contrasts with .config files:&lt;br /&gt;
 cd $SUBJECTS_DIR/&amp;lt;span style=&#039;color:red&#039;&amp;gt;my_analysis&amp;lt;/span&amp;gt;.lh&lt;br /&gt;
 ls -1 | grep config | sed &#039;s/\.config//g&#039; &amp;gt; ../list_of_contrasts&lt;br /&gt;
 #now $SUBJECTS_DIR has a text file called list_of_contrasts for use in a loop&lt;br /&gt;
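The resulting file can then drive a loop over contrasts. A minimal sketch (the echo is a stand-in for whatever per-contrast commands you would run):&lt;br /&gt;

```shell
# Iterate over the contrast names collected in list_of_contrasts;
# piping into the loop avoids hard-coding a contrast array.
# The file-existence guard just makes the snippet safe to run standalone.
if [ -f "${SUBJECTS_DIR:-.}/list_of_contrasts" ]; then
    cat "${SUBJECTS_DIR:-.}/list_of_contrasts" | while read -r con
    do
        echo "processing contrast: ${con}"
    done
fi
```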
&lt;br /&gt;
==Run mri_glmfit==&lt;br /&gt;
&#039;&#039;&#039;PAY ATTENTION:&#039;&#039;&#039; You will need to run mri_glmfit &#039;&#039;&#039;for each contrast&#039;&#039;&#039; performed &#039;&#039;&#039;for each of the analyses&#039;&#039;&#039;! (but see the section below about scripting)&lt;br /&gt;
&lt;br /&gt;
In the above example, we have 1 contrast (conditionA_vs_conditionB) to be carried out for the my_analysis.lh, my_analysis.rh and my_analysis.mni305 analysis directories. Of course, if we had also contrasted other conditions, we would have to run mri_glmfit many more times.&lt;br /&gt;
===One-Sample Group Mean===&lt;br /&gt;
The 1-sample group mean (OSGM) tests whether the contrast A-B differs significantly from zero. The example that follows demonstrates how you would run the conditionA_vs_conditionB OSGM analysis for each of the analysis directories as in my example scenario:&lt;br /&gt;
 ANALYSIS=my_analysis&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in left hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in right hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in subcortical MNI space - no need to specify the surface&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --osgm --glmdir glm.wls --nii.gz &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;--eres-save&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons (Surface-Based)===&lt;br /&gt;
After you have run the voxel-wise analysis, you need to correct the contrast maps for multiple comparisons. This is because the previous step runs the analysis on each of many thousands of voxels independently, which mathematically implies that a proportion &#039;&#039;alpha&#039;&#039; of the voxels passing threshold are expected to be Type I errors. Again, you will have to perform this step for each analysis directory. This takes very little time to run for the surface-based analyses, which can use precomputed values.&lt;br /&gt;
&lt;br /&gt;
An explanation of the parameters: &amp;lt;code&amp;gt;--glmdir glm.wls&amp;lt;/code&amp;gt; refers to the directory you specified when you ran the mri_glmfit command. &amp;lt;code&amp;gt;--cache 3 pos&amp;lt;/code&amp;gt; indicates that you want to use the pre-cached simulation with a voxel-wise threshold of 10&amp;lt;sup&amp;gt;-3&amp;lt;/sup&amp;gt; (i.e., 0.001) and use positive contrast values. &amp;lt;code&amp;gt;--cwpvalthresh .025&amp;lt;/code&amp;gt; indicates you will retain clusters with a size-corrected p-value of &#039;&#039;P&#039;&#039;&amp;lt;.025 (see the &#039;&#039;&#039;Notes&#039;&#039;&#039; below).&lt;br /&gt;
*my_analysis.lh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&lt;br /&gt;
*my_analysis.rh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&lt;br /&gt;
&lt;br /&gt;
Running this step saves thresholded files in the directory indicated by the &amp;lt;code&amp;gt;--glmdir&amp;lt;/code&amp;gt; switch (in the above example, files were saved in glm.wls). There, you will find files named &#039;&#039;cache.*.nii.gz&#039;&#039;. You can then load those files as overlays to see which clusters survived the size thresholding.&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based)===&lt;br /&gt;
The above examples use cached Monte Carlo simulations for speed. The help text for mri_glmfit-sim gives the syntax for running your own sims, which take much longer to execute, but are required for MNI305 space:&lt;br /&gt;
 mri_glmfit-sim --glmdir glmdir \&lt;br /&gt;
  --sim nulltype nsim threshold csdbase \&lt;br /&gt;
  --sim-sign sign&lt;br /&gt;
e.g. Monte Carlo correction for surface space (10K iterations, voxel-wise p&amp;lt;.005, positive differences only):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls \&lt;br /&gt;
  --sim mc-z 10000 2.3 mc-z.pos.005 --sim-sign pos&lt;br /&gt;
&lt;br /&gt;
e.g. permutation correction for MNI space (10K iterations, voxel-wise p&amp;lt;.01, absolute value differences):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 2 perm.abs.01 \&lt;br /&gt;
  --sim-sign abs --cwp 0.05 --3spaces&lt;br /&gt;
&lt;br /&gt;
Options for sim type (listed fastest to slowest) are perm, mc-z, and mc-full.&lt;br /&gt;
&lt;br /&gt;
====Notes====&lt;br /&gt;
The value of &#039;&#039;&#039;cwpvalthresh&#039;&#039;&#039; is computed by dividing the nominal desired p-value (0.05) by the number of spaces entailed by the simulation. In the above example, if we are only looking at the lh and rh surfaces, we divide 0.05/2 = 0.025. If you were looking at the lh, rh and MNI305 spaces, you would have to divide by 3 to maintain the same FWE: 0.05/3 = 0.0167.&lt;br /&gt;
&lt;br /&gt;
As indicated above, the uncorrected voxel threshold (p=.001) was used to enable the use of pre-cached simulated values with the &amp;lt;code&amp;gt;--cache&amp;lt;/code&amp;gt; switch. Though .1, .01, and .001 are easy enough to convert (1, 2, 3, respectively), other values can be computed by noting that the threshold corresponds to -1 &amp;amp;times; the base-10 logarithm of the uncorrected p-value. For example, to use cluster-size correction with an uncorrected voxel-wise p-value of 0.005, a spreadsheet or calculator can compute -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.005)=2.301; for a p-value of 0.05, -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.05)=1.301. Rather than specifying positive values (&amp;lt;code&amp;gt;pos&amp;lt;/code&amp;gt;), you can also use negative values (&amp;lt;code&amp;gt;neg&amp;lt;/code&amp;gt;) or absolute values (&amp;lt;code&amp;gt;abs&amp;lt;/code&amp;gt;, which I think may be the suggested default?).&lt;br /&gt;
&lt;br /&gt;
====Scripting====&lt;br /&gt;
Because mri_glmfit has to be run for each combination of analysis directory (*.lh, *.rh, possibly *.mni305) and contrast, the number of commands you need to run multiplies pretty quickly. You can check out the [[OSGM Script]] page for an example of how to automate these calls to mri_glmfit using a shell script.&lt;br /&gt;
&lt;br /&gt;
==Check It Out==&lt;br /&gt;
This little snippet shows you how to inspect the results of the MNI305 group-level analysis.&lt;br /&gt;
===MNI305===&lt;br /&gt;
 RFXDIR=RFX&lt;br /&gt;
 ANALYSIS=FAM.sm4&lt;br /&gt;
 CONTRAST=task&lt;br /&gt;
 #assuming you used the conventions described above, the glm results can be found&lt;br /&gt;
 #in the directory &#039;&#039;&#039;glm.wls&#039;&#039;&#039;, and the osgm results in a subdirectory called &#039;&#039;&#039;osgm&#039;&#039;&#039;&lt;br /&gt;
 THEDIR=${SUBJECTS_DIR}/${RFXDIR}/${ANALYSIS}.mni305/${CONTRAST}/glm.wls/osgm&lt;br /&gt;
&lt;br /&gt;
 tkmeditfv fsaverage orig.mgz -ov ${THEDIR}/perm.abs.01.sig.cluster.nii.gz \&lt;br /&gt;
  -seg ${THEDIR}/perm.abs.01.sig.ocn.nii.gz  \&lt;br /&gt;
       ${THEDIR}/perm.abs.01.sig.ocn.lut&lt;br /&gt;
&lt;br /&gt;
==What Next?==&lt;br /&gt;
The cluster-level correction step will create a series of files in each of the glm directories. They will be automatically given names that describe the parameters used. For example, the above commands generated files named &#039;&#039;cache.th30.abs.sig.*&#039;&#039;. One of these files is a .annot file, where each cluster is a label in the file. These labels make convenient &#039;&#039;[[Working_with_ROIs_(Freesurfer) | functional ROIs]]&#039;&#039;, wouldn&#039;t you say?&lt;br /&gt;
&lt;br /&gt;
I haven&#039;t done this yet, but it seems that one thing you might want to do is map the labels in the .annot file generated by the group mri_glmfit-sim to each participant using &amp;lt;code&amp;gt;mri_surf2surf&amp;lt;/code&amp;gt;. A script called &amp;lt;code&amp;gt;cpannot.sh&amp;lt;/code&amp;gt; that simplifies this has been written and copied to the UBFS/Scripts/Shell/ folder. This script is invoked thus:&lt;br /&gt;
 cpannot.sh $SOURCESUBJECT $TARGETSUBJECT $ANNOTFILE&lt;br /&gt;
&lt;br /&gt;
e.g.:&lt;br /&gt;
&lt;br /&gt;
 cpannot.sh fsaverage FS_0183 lausanne.annot&lt;br /&gt;
&lt;br /&gt;
Before you do this, you can check out the details of the cluster statistics, which include things like the anatomical labels associated with each cluster and its surface area. The clusters can be further subdivided to make them more similar in size using &amp;lt;code&amp;gt;[[Freesurfer Subparcellation | mris_divide_parcellation]]&amp;lt;/code&amp;gt;. These roughly equal-sized ROIs can then be mapped from the fsaverage surface to each participant to extract time series or other information from that individual.&lt;br /&gt;
&lt;br /&gt;
[[Category:FreeSurfer]]&lt;br /&gt;
[[Category:Functional Analyses]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2238</id>
		<title>Freesurfer Group-Level Analysis (mri glmfit)</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Freesurfer_Group-Level_Analysis_(mri_glmfit)&amp;diff=2238"/>
		<updated>2022-07-21T16:20:53Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Concatenate First-Level Analyses(isxconcat-sess) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This procedure is based on information from [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastGroupLevel here]&lt;br /&gt;
&lt;br /&gt;
The group-level (Random-Effects) analysis repeats the contrasts performed for individual subjects on a variance-weighted composite of all your subjects. This will identify voxels for which your contrast effects are consistent across all your participants. For the functional MRI group analysis you will need to:&lt;br /&gt;
&lt;br /&gt;
*Concatenate individuals into one file (isxconcat-sess)&lt;br /&gt;
*Do not smooth (already smoothed during first-level analysis)&lt;br /&gt;
*For each space (lh/rh/mni305):&lt;br /&gt;
**Run mri_glmfit using weighted least squares (WLS)&lt;br /&gt;
**Correct for multiple comparisons&lt;br /&gt;
*Optionally merge into one volume space&lt;br /&gt;
&lt;br /&gt;
==Before you start==&lt;br /&gt;
Ensure that your SUBJECTS_DIR variable is set to the working directory containing all your participants, and that the first-level analyses have been completed for all participants (using selxavg-3)&lt;br /&gt;
&lt;br /&gt;
==Concatenate First-Level Analyses (isxconcat-sess)==&lt;br /&gt;
In your &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR&#039;&#039;&#039;&#039;&#039;, there should be a &#039;&#039;&#039;&#039;&#039;subjects&#039;&#039;&#039;&#039;&#039; file containing the subject ID for each participant for whom you ran selxavg-3. Assume that the analysis directories are called &#039;&#039;my_analysis.*&#039;&#039; ([[Configure_mkanalysis-sess | mkanalysis-sess]]), and that the contrast is called &#039;&#039;conditionA_vs_conditionB&#039;&#039; ([[Configure_mkcontrast-sess | mkcontrast-sess]]). You might find it helpful to use environment variables and run the &amp;lt;code&amp;gt;isxconcat-sess&amp;lt;/code&amp;gt; script as follows:&lt;br /&gt;
 ANALYSIS=my_analysis.sm4&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 #left hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #right hemisphere&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
 #subcortical&lt;br /&gt;
 isxconcat-sess -sf subjects -analysis ${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt; -contrast &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;$CONTRAST1&amp;lt;/span&amp;gt; -o RFX&lt;br /&gt;
&lt;br /&gt;
When finished, a new directory, &#039;&#039;&#039;&#039;&#039;$SUBJECTS_DIR/RFX&#039;&#039;&#039;&#039;&#039;, will exist, containing sub-folders for each of your analyses (in this case, the left-hemisphere, right-hemisphere, and MNI305 analyses). Note that your analysis directory does not have to be called &#039;&#039;&#039;RFX&#039;&#039;&#039; (e.g., if multiple people are working in the same subjects folder, you might have an output folder named something like &#039;&#039;&#039;RFX_CHRIS&#039;&#039;&#039;, or whatever suits you).&lt;br /&gt;
&lt;br /&gt;
If you have multiple contrasts, you can just create additional CONTRAST variables and rerun the same scripts, replacing &amp;lt;span style=&amp;quot;color:blue&amp;quot;&amp;gt;${CONTRAST1}&amp;lt;/span&amp;gt; with alternative contrasts. &lt;br /&gt;
&lt;br /&gt;
Note that if you have already run isxconcat-sess in the past, you may get a warning because an earlier group-level directory will already exist. You should probably just delete the old directory and rerun isxconcat-sess.&lt;br /&gt;
===Protip for Looping Through Contrasts===&lt;br /&gt;
Contrasts will have a corresponding .config file in one or more of the analysis directories found in $SUBJECTS_DIR. If you plan to automate this process using a script that loops over analysis folders and contrasts, the following code snippet will create a text file containing the names of the contrasts with .config files:&lt;br /&gt;
 cd $SUBJECTS_DIR/&amp;lt;span style=&#039;color:red&#039;&amp;gt;my_analysis&amp;lt;/span&amp;gt;.lh&lt;br /&gt;
 ls -1 | grep config | sed &#039;s/\.config//g&#039; &amp;gt; ../list_of_contrasts&lt;br /&gt;
 #now $SUBJECTS_DIR has a text file called list_of_contrasts for use in a loop&lt;br /&gt;
&lt;br /&gt;
==Run mri_glmfit==&lt;br /&gt;
&#039;&#039;&#039;PAY ATTENTION:&#039;&#039;&#039; You will need to run mri_glmfit &#039;&#039;&#039;for each contrast&#039;&#039;&#039; performed &#039;&#039;&#039;for each of the analyses&#039;&#039;&#039;! (but see the section below about scripting)&lt;br /&gt;
&lt;br /&gt;
In the above example, we have 1 contrast (conditionA_vs_conditionB) to be carried out for the my_analysis.lh, my_analysis.rh and my_analysis.mni305 analysis directories. Of course, if we had also contrasted other conditions, we would have to run mri_glmfit many more times.&lt;br /&gt;
===One-Sample Group Mean===&lt;br /&gt;
The 1-sample group mean (OSGM) tests whether the contrast A-B differs significantly from zero. The example that follows demonstrates how you would run the conditionA_vs_conditionB OSGM analysis for each of the analysis directories as in my example scenario:&lt;br /&gt;
 ANALYSIS=my_analysis&lt;br /&gt;
 CONTRAST1=conditionA_vs_conditionB&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in left hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.lh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;lh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in right hemisphere&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.rh&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm --surface fsaverage &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;rh&amp;lt;/span&amp;gt; --glmdir glm.wls --nii.gz&lt;br /&gt;
 &lt;br /&gt;
 #contrast 1 in subcortical MNI space - no need to specify the surface&lt;br /&gt;
 cd ${SUBJECTS_DIR}/RFX/${ANALYSIS}&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;.mni&amp;lt;/span&amp;gt;/${CONTRAST1}&lt;br /&gt;
 mri_glmfit --y ces.nii.gz --osgm --glmdir glm.wls --nii.gz &amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;--eres-save&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons (Surface-Based)===&lt;br /&gt;
After you have run the first-level voxel-wise analysis, you need to correct the contrast maps for multiple comparisons. The previous step runs the analysis on each of many thousands of voxels independently, which implies that a proportion &#039;&#039;alpha&#039;&#039; of the voxels reaching significance are actually Type I errors. Again, you will have to perform this step for each analysis directory. It takes very little time to run for the surface-based analyses, which can use precomputed values.&lt;br /&gt;
&lt;br /&gt;
An explanation of the parameters: &amp;lt;code&amp;gt;--glmdir glm.wls&amp;lt;/code&amp;gt; refers to the directory you specified when you ran the mri_glmfit command. &amp;lt;code&amp;gt;--cache 3 pos&amp;lt;/code&amp;gt; indicates that you want to use the pre-cached simulation with a voxel-wise threshold of 10^-3 (i.e., p = 0.001) and test positive contrast values. &amp;lt;code&amp;gt;--cwpvalthresh .025&amp;lt;/code&amp;gt; indicates you will retain clusters with a size-corrected p-value of &#039;&#039;P&#039;&#039;&amp;lt;.025 (see the &#039;&#039;&#039;Notes&#039;&#039;&#039; below).&lt;br /&gt;
*my_analysis.lh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.lh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
*my_analysis.rh&lt;br /&gt;
**&amp;lt;code&amp;gt;cd &#039;&#039;$SUBJECTS_DIR/RFX/my_analysis.rh/conditionA_vs_conditionB/&#039;&#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
**&amp;lt;code&amp;gt;mri_glmfit-sim --glmdir glm.wls --cache 3 pos --cwpvalthresh .025&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Running this step saves thresholded files in the directory indicated by the &amp;lt;code&amp;gt;--glmdir&amp;lt;/code&amp;gt; switch (in the above example, files were saved in glm.wls). There, you will find files named &#039;&#039;cache.*.nii.gz&#039;&#039;. You can then load those files as overlays to see which clusters survived the size thresholding.&lt;br /&gt;
&lt;br /&gt;
===Correct for Multiple Comparisons with Permutation Test (MNI305 Space, or Surface-Based)===&lt;br /&gt;
The above examples use cached Monte Carlo simulations for speed. The help text for mri_glmfit-sim gives the syntax for running your own sims, which take much longer to execute, but are required for MNI305 space:&lt;br /&gt;
 mri_glmfit-sim --glmdir glmdir \&lt;br /&gt;
  --sim nulltype nsim threshold csdbase \&lt;br /&gt;
  --sim-sign sign&lt;br /&gt;
e.g. Monte Carlo correction for surface space (10K iterations, voxel-wise p&amp;lt;.005, positive differences only):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls \&lt;br /&gt;
  --sim mc-z 10000 2.3 mc-z.pos.005 --sim-sign pos&lt;br /&gt;
&lt;br /&gt;
e.g. permutation correction for MNI space (10K iterations, voxel-wise p&amp;lt;.01, absolute value differences):&lt;br /&gt;
 mri_glmfit-sim --glmdir glm.wls --sim perm 10000 2 perm.abs.01 \&lt;br /&gt;
  --sim-sign abs --cwp 0.05 --3spaces&lt;br /&gt;
&lt;br /&gt;
Options for sim type (listed fastest to slowest) are perm, mc-z, and mc-full.&lt;br /&gt;
&lt;br /&gt;
====Notes====&lt;br /&gt;
The value of &#039;&#039;&#039;cwpvalthresh&#039;&#039;&#039; is computed by dividing the nominal desired p-value (0.05) by the number of spaces entailed by the simulation. In the above example, where we look only at the lh and rh surfaces, we divide 0.05/2 = 0.025. If you were looking at the lh, rh and MNI305 spaces, you would divide by 3 to maintain the same FWE: 0.05/3 = 0.0167.&lt;br /&gt;
&lt;br /&gt;
As indicated above, the uncorrected voxel threshold (p=.001) was used to enable the use of pre-cached simulated values with the &amp;lt;code&amp;gt;--cache&amp;lt;/code&amp;gt; switch. The thresholds for .1, .01, and .001 are easy enough (1, 2, 3, respectively); other values can be computed by noting that the threshold is -1 &amp;amp;times; the base-10 logarithm of the uncorrected p-value. For example, to use cluster-size correction with an uncorrected voxel-wise p-value of 0.005, a spreadsheet or calculator gives -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.005)=2.301; for a p-value of 0.05, -1&amp;amp;times;(log&amp;lt;sub&amp;gt;10&amp;lt;/sub&amp;gt;0.05)=1.301. Rather than specifying positive values, you can also use negative values (&amp;lt;code&amp;gt;neg&amp;lt;/code&amp;gt;) or absolute values (&amp;lt;code&amp;gt;abs&amp;lt;/code&amp;gt;, which may be the suggested default).&lt;br /&gt;
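This conversion is easy to script; a minimal shell sketch (the function name p_to_thresh is made up here):&lt;br /&gt;

```shell
# Hypothetical helper: convert an uncorrected voxel-wise p-value into the
# threshold value expected by --cache / --sim (i.e., -1 x log10 of p).
p_to_thresh() {
  awk -v p="$1" 'BEGIN { printf "%.3f\n", -log(p)/log(10) }'
}
```

e.g., p_to_thresh 0.005 prints 2.301 and p_to_thresh 0.05 prints 1.301, matching the hand calculations above.&lt;br /&gt;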
&lt;br /&gt;
====Scripting====&lt;br /&gt;
Because mri_glmfit has to be run for each combination of analysis directory (*.lh, *.rh, possibly *.mni305) and contrast, the number of commands you need to run multiplies pretty quickly. You can check out the [[OSGM Script]] page for an example of how to automate these calls to mri_glmfit using a shell script.&lt;br /&gt;
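For a sense of what such a script could look like, here is a rough sketch. The helper name gen_glmfit_cmds is invented, and it assumes the RFX directory layout and the list_of_contrasts file described above; it only prints the mri_glmfit calls (one per combination), so you can inspect them before executing anything.&lt;br /&gt;

```shell
# Hypothetical sketch: emit one mri_glmfit call per analysis directory
# (lh, rh, mni305) x contrast combination. Dry run: commands are printed,
# not executed; pipe the output to sh to run them.
gen_glmfit_cmds() {
  subjects_dir=$1
  analysis=$2
  for contrast in $(cat "$subjects_dir/list_of_contrasts"); do
    for hemi in lh rh; do
      echo "cd $subjects_dir/RFX/$analysis.$hemi/$contrast ;" \
           "mri_glmfit --y ces.nii.gz --wls cesvar.nii.gz --osgm" \
           "--surface fsaverage $hemi --glmdir glm.wls --nii.gz"
    done
    # subcortical MNI305 space: no --surface switch
    echo "cd $subjects_dir/RFX/$analysis.mni305/$contrast ;" \
         "mri_glmfit --y ces.nii.gz --osgm --glmdir glm.wls --nii.gz --eres-save"
  done
}
```

Running gen_glmfit_cmds "$SUBJECTS_DIR" my_analysis | sh would then execute every combination.&lt;br /&gt;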
&lt;br /&gt;
==Check It Out==&lt;br /&gt;
This little snippet shows you how to inspect the results of the MNI305 first-level analysis.&lt;br /&gt;
===MNI305===&lt;br /&gt;
 RFXDIR=RFX&lt;br /&gt;
 ANALYSIS=FAM.sm4&lt;br /&gt;
 CONTRAST=task&lt;br /&gt;
 #assuming you used the conventions described above, the glm results can be found&lt;br /&gt;
 #in the directory &#039;&#039;&#039;glm.wls&#039;&#039;&#039;, and the osgm results in a subdirectory called &#039;&#039;&#039;osgm&#039;&#039;&#039;&lt;br /&gt;
 THEDIR=${SUBJECTS_DIR}/${RFXDIR}/${ANALYSIS}.mni305/${CONTRAST}/glm.wls/osgm&lt;br /&gt;
&lt;br /&gt;
 tkmeditfv fsaverage orig.mgz -ov ${THEDIR}/perm.abs.01.sig.cluster.nii.gz \&lt;br /&gt;
  -seg ${THEDIR}/perm.abs.01.sig.ocn.nii.gz  \&lt;br /&gt;
       ${THEDIR}/perm.abs.01.sig.ocn.lut&lt;br /&gt;
&lt;br /&gt;
==What Next?==&lt;br /&gt;
The cluster-level correction step will create a series of files in each of the glm directories, automatically named to describe the parameters used. For example, the above commands generate files named &#039;&#039;cache.th30.pos.sig.*&#039;&#039;. One of these files is a .annot file, in which each cluster is a label. These labels make convenient &#039;&#039;[[Working_with_ROIs_(Freesurfer) | functional ROIs]]&#039;&#039;, wouldn&#039;t you say?&lt;br /&gt;
&lt;br /&gt;
I haven&#039;t done this yet, but one thing you might want to do is map the labels in the .annot file generated by the group mri_glmfit-sim to each participant using &amp;lt;code&amp;gt;mri_surf2surf&amp;lt;/code&amp;gt;. A script called &amp;lt;code&amp;gt;cpannot.sh&amp;lt;/code&amp;gt; that simplifies this has been written and copied to the UBFS/Scripts/Shell/ folder. The script is invoked thus:&lt;br /&gt;
 cpannot.sh $SOURCESUBJECT $TARGETSUBJECT $ANNOTFILE&lt;br /&gt;
&lt;br /&gt;
e.g.:&lt;br /&gt;
&lt;br /&gt;
 cpannot.sh fsaverage FS_0183 lausanne.annot&lt;br /&gt;
&lt;br /&gt;
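To repeat this over a whole study, a small loop over a subject list works. In the sketch below, map_annot_to_subjects and the one-ID-per-line list file are hypothetical names, and cpannot.sh is assumed to be on your PATH; the echo makes it a dry run.&lt;br /&gt;

```shell
# Hypothetical sketch: print one cpannot.sh call per subject ID listed in
# a text file (one ID per line). Remove the echo to actually run them.
map_annot_to_subjects() {
  list=$1
  annot=$2
  for subj in $(cat "$list"); do
    echo cpannot.sh fsaverage "$subj" "$annot"
  done
}
```

e.g., map_annot_to_subjects subject_list.txt lausanne.annot&lt;br /&gt;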
Before you do this, you can check out the details of the cluster statistics, which include the associated anatomical labels for each cluster and its surface area. The clusters can be further subdivided to make them more similar in size using &amp;lt;code&amp;gt;[[Freesurfer Subparcellation | mris_divide_parcellation]]&amp;lt;/code&amp;gt;. The resulting roughly equal-sized ROIs can then be mapped from the fsaverage surface to each participant to extract time series or other information from that individual.&lt;br /&gt;
&lt;br /&gt;
[[Category:FreeSurfer]]&lt;br /&gt;
[[Category:Functional Analyses]]&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2237</id>
		<title>ML Environment</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=ML_Environment&amp;diff=2237"/>
		<updated>2022-07-07T00:46:44Z</updated>

		<summary type="html">&lt;p&gt;Chris: /* Spyder Installation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The lab primarily uses a Linux environment. We have several workstations running the latest version of [https://ubuntu.com/download Ubuntu]. Our machine learning (ML) research is primarily carried out using Google&#039;s [https://www.tensorflow.org/ TensorFlow] libraries, written for Python. Workstations with powerful GPUs can use CUDA for GPU acceleration.&lt;br /&gt;
&lt;br /&gt;
Below are the specs and walkthrough for setting up a modest ML programming environment.&lt;br /&gt;
&lt;br /&gt;
=Hardware=&lt;br /&gt;
There are no particular hardware requirements for running TensorFlow. Obviously, more disk space and RAM are desirable, as is a [https://developer.nvidia.com/cuda-gpus CUDA-enabled GPU].&lt;br /&gt;
&lt;br /&gt;
=Operating System=&lt;br /&gt;
The closed Apple ecosystem might make MacOS a good choice of platform because a Mac is a Mac is a Mac. However, I&#039;m not Mr. Moneybags over here, and Apple drops support for older hardware after a time. You couldn&#039;t pay me to use Windows for anything but Office applications, so that leaves Linux. The TensorFlow installation directions assume Ubuntu 16.04 or higher, and though I have dabbled with Debian Linux for a home media server, I typically use Ubuntu. These instructions assume Ubuntu 22.04 LTS.&lt;br /&gt;
&lt;br /&gt;
=Python Version=&lt;br /&gt;
Ubuntu 22.04 has retired Python 2.x and ships Python 3.10 by default. TensorFlow works nicely with Python 3.9 (I understand that getting TensorFlow to work with Python 3.10+ involves some challenges), so these instructions assume Python 3.9. If you want to use other versions of Python for other applications, I recommend using virtual environments to manage and switch between Python versions. The pip installation instructions use miniconda to create a Python 3.9 virtual environment.&lt;br /&gt;
&lt;br /&gt;
=Setup=&lt;br /&gt;
==TensorFlow Installation==&lt;br /&gt;
First off, you can use Conda, but don&#039;t use Conda to install TensorFlow. Instead, follow the [https://www.tensorflow.org/install/pip pip install instructions] published by the TensorFlow people. The steps are broadly described below:&lt;br /&gt;
===Create a virtual environment===&lt;br /&gt;
The first step is to install miniconda if it isn&#039;t already installed. Then, create a new Python 3.9 virtual environment:&lt;br /&gt;
 conda create --name tf python=3.9&lt;br /&gt;
Then activate your new environment:&lt;br /&gt;
 conda activate tf&lt;br /&gt;
&lt;br /&gt;
===Update pip and install tensorflow===&lt;br /&gt;
You&#039;ll need to update pip first:&lt;br /&gt;
 pip install --upgrade pip&lt;br /&gt;
Then pip install tensorflow:&lt;br /&gt;
 pip install tensorflow&lt;br /&gt;
&lt;br /&gt;
==Spyder Installation==&lt;br /&gt;
Anthony got me using the Spyder IDE for Python programming. Problem is, as of today (July 6, 2022), it&#039;s buggy on Ubuntu 22.04. You can install it using pip, as documented [https://bugs.launchpad.net/ubuntu/+source/spyder/+bug/1968479 here]:&lt;br /&gt;
 pip install -U spyder&lt;br /&gt;
However, when I uninstalled the apt package dependencies (there are LOTS), the pip installation no longer worked, so I reinstalled the apt package as well. The apt package binary still won&#039;t launch, but at least the pip version will have everything it needs. It&#039;s possible that having the apt package installed prior to the pip installation caused pip to not bother installing some critical components. If that&#039;s the case, then it may not be strictly necessary to have both the apt and pip versions installed.&lt;br /&gt;
 &lt;br /&gt;
Note that pip will have installed spyder into a specific virtual environment (e.g., tf). You won&#039;t be able to launch spyder without first activating that virtual environment. After installation, you can launch it from a terminal window:&lt;br /&gt;
 (base) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ conda activate tf&lt;br /&gt;
 (tf) &amp;lt;span style=&amp;quot;color:green&amp;quot;&amp;gt;user@host&amp;lt;/span&amp;gt;:~ spyder&lt;/div&gt;</summary>
		<author><name>Chris</name></author>
	</entry>
</feed>