FreeSurfer

From CCN Wiki
Freesurfer is a surface-based fMRI processing and analysis package written for the Unix environment (including Mac OS X, which is based on Unix). The neuroanatomical organization of the brain places the grey matter on the outside surface of the cortex. Aside from the ventral/medial subcortical structures, the interior volume of the brain is predominantly white matter axonal tracts. Because neurons metabolize in the cell body rather than along the axons, any fMRI signal changes detected in the white matter should theoretically be noise, so we can focus on the grey matter found in the cortical surface. This is the motivation for surface-based analyses of fMRI.

Freesurfer has a rigid set of assumptions concerning how the input data are organized and labeled. The following instructions will help avoid violations of these assumptions that might derail your Freesurfer fMRI processing pipeline.

These instructions assume that Freesurfer has already been installed and configured on your workstation.


==First Things First: Enable FreeSurfer==
To use FreeSurfer, it must be in your path, and some key environment variables need to be set. '''If you have not done so already''', you should edit your .bashrc or .bash_profile file and append the following to the bottom:
===Linux===
Edit your .bashrc file:
nano .bashrc
This will open the file in the <code>nano</code> editor (you can use another text editor if you prefer). Add the following lines to the bottom of this file and save:


#FREESURFER
export FREESURFER_HOME=/usr/local/freesurfer-7.1.1 #change if freesurfer is installed elsewhere
source ${FREESURFER_HOME}/SetUpFreeSurfer.sh
#FSL
export FSLDIR=/usr/share/fsl/5.0
source ${FSLDIR}/etc/fslconf/fsl.sh
 
Test it out in the terminal by typing:
source ~/.bashrc
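If you want to confirm that the setup script actually ran, a small sanity check along these lines can help. This is a sketch: <code>check_var</code> is a hypothetical helper, not part of FreeSurfer.

```shell
# Hypothetical helper: report whether a named environment variable is set.
# Uses eval so it works in plain sh as well as bash.
check_var () {
  eval "val=\${$1}"
  if [ -n "$val" ]; then
    echo "$1 is set to $val"
  else
    echo "$1 is NOT set"
  fi
}
check_var FREESURFER_HOME
check_var SUBJECTS_DIR
```

If either variable reports NOT set, your .bashrc edits did not take effect in this shell.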
 
===Mac OS===
Edit your .bash_profile file:
nano ~/.bash_profile
Add the following lines to the bottom of the file and save:
export FREESURFER_HOME=/Applications/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh
 
Test it out in your terminal window by typing:
source ~/.bash_profile
 
After you do this, when you launch a new terminal window, you will see some information appear in the terminal window indicating where FreeSurfer is located and where your subjects directory can be found. It should look like the following:
Setting up environment for FreeSurfer/FS-FAST (and FSL)
FREESURFER_HOME  /Applications/freesurfer
FSFAST_HOME      /Applications/freesurfer/fsfast
FSF_OUTPUT_FORMAT nii.gz
SUBJECTS_DIR      /Applications/freesurfer/subjects
MNI_DIR          /Applications/freesurfer/mni
FSL_DIR          /usr/local/fsl


If you don't see this, then the SetUpFreeSurfer.sh initialization script is not automatically running. You may need to seek help from a higher authority.


== Organization ==
Freesurfer data for a collection of subjects is organized into a single project directory, called $SUBJECTS_DIR. Try this in the Linux terminal:
  echo $SUBJECTS_DIR
It is likely that you will see something like the following, which is the sample 'bert' dataset that comes with a Freesurfer installation:
  /usr/local/freesurfer/subjects


Let us assume that you have been collecting data for some lexical decision task experiment. All the data for all subjects should be stored in a single directory, which you will set as your $SUBJECTS_DIR variable. For example, if you have copied the data to ~/Projects/LDT, then we would type the following:
  SUBJECTS_DIR=~/Projects/LDT
  echo $SUBJECTS_DIR
Another trick we can do is to use the Unix <code>pwd</code> command to set the SUBJECTS_DIR to be whatever directory we happen to be in at the moment. The following series of commands will do the same as the previous example command:
  cd ~
  cd Projects
  cd LDT
  SUBJECTS_DIR=`pwd`
  echo $SUBJECTS_DIR
The first line above, <code>cd ~</code>, moves you to your home directory. The second line moves you to the Projects folder that might contain several sets of experiments. The third line moves you into the subdirectory containing the LDT data. The fourth line sets the SUBJECTS_DIR environment variable to whatever gets printed out when you execute the <code>pwd</code> command (the <code>pwd</code> command '''<u>p</u>'''rints the current '''<u>w</u>'''orking '''<u>d</u>'''irectory). As a result, the current working directory becomes the new SUBJECTS_DIR after you execute this command, as you can see when you execute the last line of code.

Note that in the <code>SUBJECTS_DIR=`pwd`</code> line, those are '''back-quotes''', which you might find on your keyboard sharing a key with the ~ character. <code>`</code> is not the same character as <code>'</code>. When you enclose a command in a pair of back-quotes, you are telling the operating system something along the lines of "this is a command that I want you to execute first, before using its output to figure out the rest of this business."
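A minimal demonstration of this command substitution, using /tmp as a stand-in for your data directory:

```shell
# The back-quoted command runs first; its output is spliced into the
# assignment. $(...) is the modern, easier-to-nest equivalent.
cd /tmp
SUBJECTS_DIR=`pwd`
echo "$SUBJECTS_DIR"
SUBJECTS_DIR=$(pwd)   # same effect as the back-quotes
```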




----
=== Subject directory organization ===
Data for each subject should be kept in their own directory. Moreover, different types of data (i.e., anatomical/structural or bold/functional) are kept in separate subdirectories. The basic directory structure for each participant ('session' in Freesurfer terminology) looks like this (see also [[Freesurfer_BOLD_files| Freesurfer BOLD files]]):


*SUBJECTS_DIR
**Subject_001
***mri
***bold
****001
****002
****003
****004
****005
****006
Copy the data for the participant from the /raw subdirectory for the project in the ubfs folder. You will only need the /mri and the /bold directories. If you are processing data for multiple sessions for a single participant, you may need to rename some of the files as you copy them over; otherwise, you will end up overwriting files.
 
<i> Note that all the functional data (in the 'bold' subdirectory) are stored in sequentially numbered folders (3-digits), and all are given the same name ('f.nii' or 'f.nii.gz'). This seems to be a requirement. It may be possible to circumvent this requirement, but this is a relatively minor concern at this time. </i>
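Staging a participant's files can be scripted. The following is a sketch: the <code>stage_session</code> helper is hypothetical, and you should adjust the paths to your project.

```shell
# Hypothetical helper: copy a participant's raw /mri and /bold folders
# into a new session folder under $SUBJECTS_DIR.
stage_session () {    # usage: stage_session RAW_DIR SESSION_NAME
  local src="$1" dst="$SUBJECTS_DIR/$2"
  mkdir -p "$dst"
  cp -r "$src/mri" "$src/bold" "$dst/"
}
```

For example, <code>stage_session ~/raw/sub-01 Subject_001</code>. If the same participant has multiple sessions, add a rename step before the copy so files are not overwritten.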


By the end, your data should look like this:


*SUBJECTS_DIR
**Subject_001
***mri
****orig.nii.gz (or MPRAGE.nii)
***bold
****001/f.nii.gz
****002/f.nii.gz
****etc.


Note that the .nii.gz file extension indicates that this is a gzipped NIFTI file. You can use <code>gunzip</code> to unzip the files, but this isn't really necessary unless you are going to manipulate these files in MATLAB. We have figured out ways to do everything for FreeSurfer in the BASH shell, so you may as well just leave them as-is unless you have a compelling reason (or compulsion) to unzip them.
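Before preprocessing, it can be worth verifying that each session folder matches this layout. A sketch, where <code>check_session</code> is a hypothetical helper that accepts either zipped or unzipped files:

```shell
# Hypothetical check: report anything missing from a session folder.
# Returns nonzero if an expected anatomical or functional file is absent.
check_session () {
  local subj="$1" ok=0
  if [ ! -e "$subj/mri/orig.nii.gz" ] && [ ! -e "$subj/mri/orig.nii" ]; then
    echo "$subj: missing anatomical (mri/orig.nii[.gz])"; ok=1
  fi
  for run in "$subj"/bold/[0-9][0-9][0-9]; do
    if [ ! -e "$run/f.nii.gz" ] && [ ! -e "$run/f.nii" ]; then
      echo "$run: missing f.nii(.gz)"; ok=1
    fi
  done
  return $ok
}
```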


== Structural Preprocessing ==
The structural MRI file (T1.nii or orig.nii) is transformed over a series of computationally-intensive steps invoked by the recon-all Freesurfer program. Recon-all is designed to execute all the steps in series without intervention; however, when working with data that may be of dubious quality, it seems preferable to execute the process in a series of smaller groups of steps and check the output in between. The process is automated using computational algorithms, but if one step doesn't execute correctly, everything that follows will be compromised. The steps take many hours to complete, so inspecting the progress along the way can save many hours of processing time redoing steps that had been done incorrectly. Data that are more likely to be problem-free are more likely to be successfully processed in a single pass.
 
== Anatomical Surface Mesh Construction with Recon-All ==
The computationally-intensive surface mapping is carried out by a Freesurfer program called recon-all. This program is extensively spaghetti-documented on the Freesurfer wiki [https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all here]. Though the Freesurfer documentation goes into much detail, it is also a little hard to follow at times and sometimes does some odd things.  


=== recon-all -all ===
If you have good reason to suspect that the reconstruction process will complete without errors (e.g., you are working with an archival dataset that includes only "good" subjects, or you are using both T1 and T2 input images), then you can use the <code>-all</code> switch to run through all the autorecon steps 1-3 in a single sitting.


The recon-all command expects both a participant ID and a processing directive. If you are running all steps, then you are presumably beginning with a T1-weighted .nii or .nii.gz (or similar) file. The following commands take advantage of environment variables, which let you run exactly the same command after changing the relevant variable values (in this case, the subject ID):
#set the SUBJECTS_DIR from the default
SUBJECTS_DIR=`pwd`
#I'd like to reconstruct data for subject 01
#The target data follows BIDS convention and
#can be found in ./<span style="color:red">sub-01</span>/anat/<span style="color:red">sub-01</span><span style="color:green">_T1w.nii.gz</span>
#I'll create a subjectID variable that can be changed to allow me to run the same command across participants
subjectID=<span style="color:red">sub-01</span>
recon-all -all \
-i ${SUBJECTS_DIR}/${subjectID}/anat/${subjectID}_T1w.nii.gz \
-subjid FS_${subjectID}


The above command will find the <span style="color:green">T1w.nii.gz</span> file for whatever subject is stored in <span style="color:red">${subjectID}</span>, and run all processing steps. The output files will be found in the FreeSurfer subject folder called FS_<span style="color:red">${subjectID}</span>. In this case, folder FS_<span style="color:red">sub-01</span> will be created in the ${SUBJECTS_DIR}. On the GPU-enabled workstations, this process should take 6-7 hours. If your anatomical data files have a different naming convention (e.g., they include session or acquisition information, or have a different file extension), you will probably need to change the <span style="color:green">code in green</span>.

=== recon-all in stages ===
The Freesurfer wiki describes a generally problem-free case. This might work well for data that you already know is going to be problem-free, but we seldom have that guarantee. We might instead split the recon-all processing into sub-stages where you can do quality-control inspection at each step:
#[[Autorecon1]] (~0.5  hours on ws02, assuming no problems encountered)
#[[Autorecon2]] (~2.5 hours on ws02, assuming no problems encountered)
#[[Autorecon3]]
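Run in stages, the sequence looks roughly like this. This is a sketch: the subject ID is hypothetical, and <code>DRYRUN=echo</code> just prints the commands so you can see them; clear it (<code>DRYRUN=</code>) on a machine with FreeSurfer installed to actually execute them.

```shell
# Sketch of the staged workflow. DRYRUN=echo prints each command instead
# of running it; set DRYRUN= to really run the stages.
DRYRUN=echo
subjid=FS_sub-01                                 # hypothetical subject ID
$DRYRUN recon-all -autorecon1 -subjid "$subjid"  # through skull stripping
# ...inspect the skull strip before continuing...
$DRYRUN recon-all -autorecon2 -subjid "$subjid"  # segmentation and surfaces
$DRYRUN recon-all -autorecon3 -subjid "$subjid"  # parcellation and stats
```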


After running autorecon1, it is best to run autorecon2 immediately afterward. Usually autorecon1 does an alright job of skull stripping. If it doesn't, the results will be evident after autorecon2 completes if you use <code>tksurfer SUBJECTID HEMI inflated</code>:
subjectID=FS_0202
tksurfer ${subjectID} lh inflated #this will display the left hemisphere (lh) inflated surface
Check both the lh and rh surfaces. If the brain appears lumpy or there are odd points sticking out, proceed to [[Autorecon2]] editing.


=== recon-all as a batch ===
Anatomical surface reconstruction using the <code>recon-all</code> script is designed to work on one participant at a time, whereas the functional data analysis using fs-fast can work over a batch of participants listed in a <code>subjects</code> file. If you are comfortable setting your workstation to run for an extended period of time, you can run recon-all on a batch of participants listed in your <code>subjects</code> file. The following code snippet takes the name of your subjects file as a parameter, and iterates through each name. For each subject it runs recon-all -all using the T1 and T2 anatomical files:
==== batch_autorecon.sh ====
#!/bin/bash
export SUBJECTS_DIR=`pwd`
echo $SUBJECTS_DIR
while read s; do
  <span style="color:green">#note that, depending on the locations and names of your anatomical data files,
  #you might have to modify the next line to reflect those differences</span>
  recon-all -all -i ./$s/T1.nii.gz -T2 ./$s/T2.nii.gz -subjid FS_$s
done<$1


To run:
nohup batch_autorecon.sh mysubjects.txt &
Because recon-all takes a few hours for each participant, we would use <code>nohup COMMAND &</code> to run it in the background, and log back in several hours later to check on the progress:
cat nohup.out
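For a slightly more informative check than <code>cat</code>, something like the following hypothetical helper shows the last few lines of a log and counts error lines (recon-all also writes a per-subject log under the subject's scripts folder):

```shell
# Hypothetical helper: show the tail of a log file and count ERROR lines.
# grep -c prints 0 when nothing matches (its exit status is harmless here).
check_log () {
  tail -n 5 "$1"
  echo "error lines: $(grep -c ERROR "$1")"
}
```

For example, <code>check_log nohup.out</code>.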


== Functional Analysis ==
#Create (or copy if experiment used a fixed schedule) your paradigm files for each fMRI run, and edit them using matlab (see "[[Par-Files]]").
#*When editing par files, be sure to check how many volumes to drop for each par file; it may be different every time! See above link for more details.
#Create a subjects text file in your $SUBJECTS_DIR called "subjects" (or some other name) that contains a list of your subjects necessary for batch processing (used for [[Configure preproc-sess|preproc-sess]])
#*A quick way to do this, assuming you want to preprocess everyone, is to list the matching directories with <code>ls</code>, strip the trailing slash with <code>sed</code>, and redirect the output to a file:
#*<code>ls -1 -d FS*/ | sed "s,/$,," > subjects</code>
#*This will list the names of any directories in the current directory starting with "FS", and strip off the trailing forward slash
#Create a text file called <code>subjectname</code> in each of the subject folders. The file is a 1-line plaintext file that just contains the name of the subject.
#Configure your analyses (using [[Configure mkanalysis-sess|mkanalysis-sess]])
#Configure your contrast (using [[Configure mkcontrast-sess|mkcontrast-sess]])
#Check the *.mcdat files in each of the ''subject_name/bold/xxx'' directories to inspect the amount of motion detected during the motion correction step. Data associated with periods of excessive movement (>1mm) should be dealt with carefully. Columns in the text document from left to right are: (1) time point number (2) roll rotation (in degrees) (3) pitch rotation (in degrees) (4) yaw rotation (in degrees) (5) between-slice motion or translation (in mm) (6) within-slice up-down motion or translation (in mm) (7) within-slice left-right motion or translation (in mm) (8) RMS error before correction (9) RMS error after correction (10) total vector motion or translation (in mm). We need to look at column 10. If any of these values are above 1, this might be indicative of excessive movement. Consult with someone further up the chain for advice (undergrads ask a grad student; grad students ask Chris or a Postdoc if he ever has the funding to get one).
#Run the GLM for Single Subjects ([[selxavg3-sess]])
#*Typically, we will do a group-level GLM. I have come to realize that it should generally suffice to do all processing in '''fsaverage''' surface space (mri_glmfit requires all operations to be done in a common surface space).
#**In the unlikely case that you ''only'' care about looking at subject-level results, you can preprocess and configure your analyses for the '''self''' surface
#*This will be relevant to the parameters you use in steps 4, 6 and 8
#View the single-subject results; evaluate for sensibility
#*We can't cherry-pick the subjects according to their GLM results, but some basic contrasts of non-interest can be used as part of a data quality assessment
#*In surface space (fsaverage)
#*[[Viewing_Results_in_Standard_Voxel_Space | In voxel space (mni305)]]
#Run the group-level GLM ([[mri_glmfit]])
Lab-specific documentation can be found on this wiki, but a more thorough (and accurate) description can be found on the [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastFirstLevel Freesurfer Wiki]
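The motion check in the list above (column 10 of the .mcdat files) can be automated. A sketch: the helper name is hypothetical, and the glob assumes the bold/NNN layout described earlier.

```shell
# Hypothetical helper: print any time point whose total vector motion
# (column 10 of a .mcdat file) exceeds 1 mm.
flag_motion () {    # usage: flag_motion SUBJECT_DIR
  awk '$10 > 1 { print FILENAME ": time point " $1 " moved " $10 " mm" }' \
      "$1"/bold/*/*.mcdat
}
```

For example, <code>flag_motion FS_sub-01</code> scans every run of that subject; no output means no time point exceeded 1 mm.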
== Subparcellation using ConnectomeMapper ==
Various network analyses we do in our lab are carried out in FreeSurfer surface space, using regions defined by the autorecon3 stage of the anatomical parcellation. Many of these regions are rather large and are consequently likely to be non-uniformly involved in many cognitive processes. The ConnectomeMapper is a tool that further subdivides these initial brain region parcellations into multiple sub-regions of comparable sizes. More information on these steps is outlined in the wiki page describing [[Lausanne Parcellation]].


== Troubleshooting ==
#It's documented elsewhere on the wiki
#If you're confused about how to set SUBJECTS_DIR, you're also likely to just blindly type whatever example command I give without understanding what the command does. If this is the case, please become more proficient with the lab's procedures and software.


=== ERROR: Flag unrecognized. ===
Each of the flags is supposed to be prefixed with a '-' character, but in the example above, <code>-watershed</code> was instead typed as <code>watershed</code> ''without'' the '-' character. These little typos can be hard to spot.


<span style="color:red">'''If you are completely perplexed why something doesn't work out when you followed the directions to the letter, the first thing you should do is throw out your assumption that you typed in the command correctly.'''</span>
 
= FastSurfer =
[https://deep-mi.org/research/fastsurfer/ FastSurfer] "is a fast and extensively validated deep-learning pipeline for the fully automated processing of structural human brain MRIs." It works on top of a FreeSurfer installation, requires Python 3, and can be obtained through GitHub.
== Obtaining FastSurfer ==
# Git-clone the repository at https://github.com/deep-mi/FastSurfer
# pip3 install -r requirements.txt
# There may be an issue installing LaPy with the requirements file. I overcame this by visiting github.com/Deep-MI/LaPy and installing it separately


[[Category:FreeSurfer]]

Latest revision as of 12:00, 24 June 2022

Freesurfer is a surface-based fMRI processing and analysis package written for the Unix environment (including Mac OS X, which is based on Unix). The neuroanatomical organization of the brain has the grey matter on the outside surface of the cortex. Aside from the ventral/medial subcortical structures, the interior volume of the brain is predominately white matter axonal tracts. Because neurons metabolize in the cell body, rather than along the axons, we can focus on the grey matter found in the cortical surface because any fMRI signal changes detected in the white matter should theoretically be noise. This is the motivation for surface-based analyses of fMRI.

Freesurfer has a rigid set of assumptions concerning how the input data is organized and labeled. The following instructions will help avoid any violations of these assumptions that might derail your Freesurfer fMRI processing pipeline.

These instructions assume that Freesurfer has already been installed and configured on your workstation.

First Things First: Enable FreeSurfer

To use FreeSurfer, it must be in your path, and some key environment variables need to be set. If you have not done so already, you should edit your .bashrc or .bash_profile file and append the following to the bottom:

Linux

Edit your .bashrc file:

nano .bashrc

This will open the file in the nano editor (you can use another text editor if you prefer). Add the following lines to the bottom of this file and save:

#FREESURFER
export FREESURFER_HOME=/usr/local/freesurfer-7.1.1 #change if freesurfer is installed elsewhere
source ${FREESURFER_HOME}/SetUpFreeSurfer.sh
#FSL
export FSLDIR=/usr/share/fsl/5.0
source ${FSLDIR}/etc/fslconf/fsl.sh

Test it out in the terminal by typing:

source ~/.bashrc

Mac OS

Edit your .bash_profile file:

nano ~/.bash_profile

Add the following lines to the bottom of the file and save:

export FREESURFER_HOME=/Applications/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh

Test it out in your terminal window by typing:

source ~/.bash_profile

After you do this, when you launch a new terminal window, you will see some information appear in the terminal window indicating where FreeSurfer is located and where your subjects directory can be found. It should look like the following:

Setting up environment for FreeSurfer/FS-FAST (and FSL)
FREESURFER_HOME   /Applications/freesurfer
FSFAST_HOME       /Applications/freesurfer/fsfast
FSF_OUTPUT_FORMAT nii.gz
SUBJECTS_DIR      /Applications/freesurfer/subjects
MNI_DIR           /Applications/freesurfer/mni
FSL_DIR           /usr/local/fsl

If you don't see this, then the SetUpFreeSurfer.sh initialization script is not automatically running. You may need to seek help from a higher authority.

Organization

Freesurfer data for a collection of subjects is organized into a single project directory, called $SUBJECTS_DIR. Try this in the Linux terminal:

echo $SUBJECTS_DIR

It is likely that you will see something like the following, which is the sample 'bert' dataset that comes with a Freesurfer installation:

/usr/local/freesurfer/subjects

Let us assume that you have been collecting data for some lexical decision task experiment. All the data for all subjects should be stored in a single directory, which you will set as your $SUBJECTS_DIR variable. For example, if you have copied the data to ~/Projects/LDT, then we would type the following:

SUBJECTS_DIR=~/Projects/LDT
echo $SUBJECTS_DIR

Another trick we can do is to use the Unix pwd command to set the SUBJECTS_DIR to be whatever directory we happen to be in at the moment. The following series of commands will do the same as the previous example command:

cd ~
cd Projects
cd LDT
SUBJECTS_DIR=`pwd`
echo $SUBJECTS_DIR

The first line above, cd ~, moves you to your home directory. The second line moves you into the Projects folder, which might contain several sets of experiments. The third line moves you into the subdirectory containing the LDT data. The fourth line sets the SUBJECTS_DIR environment variable to whatever gets printed when you execute the pwd command (pwd prints the current working directory). As a result, the current working directory becomes the new SUBJECTS_DIR, as you can see when you execute the last line of code.

Note that in the SUBJECTS_DIR=`pwd` line, those are back-quotes, which you might find on your keyboard sharing a key with the ~ character. ` is not the same character as '. When you enclose a command in a pair of back-quotes, you are telling the operating system something along the lines of "this is a command that I want you to execute first, before using its output to figure out the rest of this business."
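A quick sanity check: the back-quote form and the more modern $(...) form do exactly the same thing, and the latter is easier to read (and to tell apart from ordinary single quotes):

```shell
#capture the output of pwd with back-quotes, as above
SUBJECTS_DIR=`pwd`
echo $SUBJECTS_DIR
#the $(...) form is equivalent and harder to mistake for a quote character
SUBJECTS_DIR=$(pwd)
echo $SUBJECTS_DIR
```

Both assignments leave SUBJECTS_DIR set to the same directory.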



Subject directory organization

Data for each subject should be kept in their own directory. Moreover, different types of data (i.e., anatomical/structural or bold/functional) are kept in separate subdirectories. The basic directory structure for each participant ('session' in Freesurfer terminology) looks like this (see also Freesurfer BOLD files):

  • SUBJECTS_DIR
    • Subject_001
      • mri
      • bold
        • 001
        • 002
        • 003
        • 004
        • 005
        • 006

Copy the data for the participant from the /raw subdirectory for the project in the ubfs folder. You will only need the /mri and the /bold directories. If you are processing data from multiple sessions for a single participant, you may need to rename some of the files as you copy them over; otherwise, you will end up overwriting files.

Note that all the functional data (in the 'bold' subdirectory) are stored in sequentially numbered folders (3-digits), and all are given the same name ('f.nii' or 'f.nii.gz'). This seems to be a requirement. It may be possible to circumvent this requirement, but this is a relatively minor concern at this time.
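A short loop can do the numbering and renaming for you. The sketch below runs in a /tmp scratch directory and uses made-up raw file names (bold_run1.nii.gz, bold_run2.nii.gz); substitute your actual raw file pattern and subject directory:

```shell
#demo in a scratch directory; the raw file names here are hypothetical stand-ins
mkdir -p /tmp/ldt_demo && cd /tmp/ldt_demo
touch bold_run1.nii.gz bold_run2.nii.gz

run=1
for f in bold_run*.nii.gz; do
  dest=$(printf "%03d" "$run")      #001, 002, 003, ...
  mkdir -p "bold/$dest"
  cp "$f" "bold/$dest/f.nii.gz"     #every run gets the same file name, f.nii.gz
  run=$((run + 1))
done
```

Afterward, the bold directory contains one 3-digit folder per run, each holding a single f.nii.gz.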

By the end, your data should look like this:

  • SUBJECTS_DIR
    • Subject_001
      • mri
        • orig.nii.gz (or MPRAGE.nii)
      • bold
        • 001/f.nii.gz
        • 002/f.nii.gz
        • etc.

Note that the .nii.gz file extension indicates that this is a gzipped NIFTI file. You can use gunzip to unzip the files, but this isn't really necessary unless you are going to manipulate these files in MATLAB. We have figured out ways to do everything for FreeSurfer in the BASH shell, so you may as well just leave them as-is unless you have a compelling reason (or compulsion) to unzip them.

Structural Preprocessing

The structural mri file (T1.nii or orig.nii) is transformed over a series of computationally-intensive steps invoked by the recon-all Freesurfer program. Recon-all is designed to execute all the steps in series without intervention; however, when working with data that may be of dubious quality, it seems preferable to execute the process in a series of smaller groups of steps and check the output in between. The process is automated using computational algorithms, but if one step doesn't execute correctly, everything that follows will be compromised. The steps take many hours to complete, so inspecting the progress along the way can save many hours of processing time redoing steps that had gone wrong. Data that are more likely to be problem-free are more likely to be successfully processed in a single pass.

Anatomical Surface Mesh Construction with Recon-All

The computationally-intensive surface mapping is carried out by a Freesurfer program called recon-all. This program is extensively documented on the Freesurfer wiki. Though the Freesurfer documentation goes into much detail, it is also a little hard to follow at times and sometimes does some odd things.

recon-all -all

If you have good reason to suspect that the reconstruction process will complete without errors (e.g., you are working with an archival dataset that includes only "good" subjects, or you are using both T1 and T2 input images), then you can use the -all switch to run through all the autorecon steps 1-3 in a single sitting.

The recon-all command expects both a participant ID and a processing directive. If you are running all steps, then you are presumably beginning with a T1-weighted .nii or .nii.gz (or similar) file. The following commands take advantage of environment variables, which would let me run exactly the same command after changing the relevant variable values (in this case, the subject ID):

#set the SUBJECTS_DIR from the default
SUBJECTS_DIR=`pwd`
#I'd like to reconstruct data for subject 01
#The target data follows BIDS convention and 
#can be found in ./sub-01/anat/sub-01_T1w.nii.gz
#I'll create a subjectID variable that can be changed to allow me to run the same command across participants
subjectID=sub-01
recon-all -all \
-i ${SUBJECTS_DIR}/${subjectID}/anat/${subjectID}_T1w.nii.gz \
-subjid FS_${subjectID}

The above command will find the T1w.nii.gz file for whatever subject is stored in ${subjectID}, and run all processing steps. The output files will be placed in a FreeSurfer subject folder called FS_${subjectID}; in this case, the folder FS_sub-01 will be created in ${SUBJECTS_DIR}. On the GPU-enabled workstations, this process should take 6-7 hours. If your anatomical data files have a different naming convention (e.g., they include session or acquisition information, or have a different file extension), you will need to change the path passed to the -i flag accordingly.

recon-all in stages

The Freesurfer wiki describes a generally problem-free case. This might work well for data that you already know is going to be problem-free, but we seldom have that guarantee. We might instead split the recon-all processing into sub-stages where you can do quality-control inspection at each step:

  1. Autorecon1 (~0.5 hours on ws02, assuming no problems encountered)
  2. Autorecon2 (~2.5 hours on ws02, assuming no problems encountered)
  3. Autorecon3
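Run in stages, the commands look like the following sketch. The -autorecon1, -autorecon2, and -autorecon3 directives are standard recon-all processing directives; the input file path assumes the BIDS-style layout from the earlier example and will need to be adapted to your data:

```shell
subjectID=sub-01
#stage 1: motion correction, Talairach registration, intensity normalization, skull strip
recon-all -autorecon1 -i ./${subjectID}/anat/${subjectID}_T1w.nii.gz -subjid FS_${subjectID}
#inspect the skull strip before continuing, then:
recon-all -autorecon2 -subjid FS_${subjectID}
#inspect the surfaces before continuing, then:
recon-all -autorecon3 -subjid FS_${subjectID}
```

Note that only the first stage takes the -i flag; the later stages pick up where the previous one left off, using only the -subjid.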

After running autorecon1, it is best to run autorecon2 immediately afterward. Usually autorecon1 does an alright job of skull stripping. If it doesn't, the results will be evident after autorecon2 completes when you view the inflated surface with tksurfer SUBJECTID HEMI inflated:

subjectID=FS_0202
tksurfer ${subjectID} lh inflated #this will display the left hemisphere (lh) inflated surface 

Check both the lh and rh surfaces. If the brain appears lumpy or there are odd points sticking out, proceed to Autorecon2 editing.

recon-all as a batch

Anatomical surface reconstruction using the recon-all script is designed to work on one participant at a time, whereas the functional data analysis using fs-fast can work over a batch of participants listed in a subjects file. If you are comfortable setting your workstation to run for an extended period of time, you can run recon-all on a batch of participants listed in your subjects file. The following code snippet takes the name of your subjects file as a parameter, and iterates through each name. For each subject it runs recon-all -all using the T1 and T2 anatomical files:

batch_autorecon.sh

#!/bin/bash

export SUBJECTS_DIR=`pwd`
echo $SUBJECTS_DIR

while read s; do
  #note that, depending on the locations and names of your anatomical data files,
  #you might have to modify the next line to reflect those differences
  #the -T2pial flag tells recon-all to actually use the T2 to refine the pial surface
  recon-all -all -i ./$s/T1.nii.gz -T2 ./$s/T2.nii.gz -T2pial -subjid FS_$s
done<$1

To run (after making the script executable with chmod +x batch_autorecon.sh):

nohup ./batch_autorecon.sh mysubjects.txt &

Because recon-all takes a few hours for each participant, we would use nohup COMMAND & to run it in the background, and log back in several hours later to check on the progress:

cat nohup.out

Functional Analysis

The previous steps have been concerned only with processing the T1 anatomical data. Though this might suffice for a purely structural brain analysis (e.g., voxel-based brain morphometry, which might explore how cortical thickness relates to some cognitive ability), most of our studies will employ functional MRI, which measures how the hemodynamic response changes as a function of task, condition or group. In the Freesurfer pipeline, this is done using a program called FS-FAST.

FS-FAST Functional Analysis

Each of these steps are detailed more extensively elsewhere, but generally speaking you will need to follow these steps before starting your functional analysis:

  1. Copy BOLD data to the Freesurfer subject folder (see page for Freesurfer BOLD files)
  2. Create (or copy if experiment used a fixed schedule) your paradigm files for each fMRI run, and edit them using matlab (see "Par-Files").
    • When editing par files, be sure to check how many volumes to drop for each par file; it may be different every time! See above link for more details.
  3. Create a subjects text file in your $SUBJECTS_DIR called "subjects" (or some other name) that contains a list of your subjects necessary for batch processing (used for preproc-sess)
    • A quick way to do this, assuming you want to preprocess everyone, is to pipe the output of the ls command with the -1 switch (one entry per line) into sed, and then redirect the result to a file:
    • ls -1 -d FS*/ | sed "s,/$,," > subjects
    • This will list the names of any directories in the current directory starting with "FS", and strip off the trailing forward slash
  4. Create a text file called subjectname in each of the subject folders. The file is a 1-line plaintext file that just contains the name of the subject.
  5. Configure your analyses (using mkanalysis-sess)
  6. Configure your contrast (using mkcontrast-sess)
  7. Preprocess your data (smoothing, slice-time correction, intensity normalization and brain mask creation) (using preproc-sess)
  8. Check the *.mcdat files in each of the subject_name/bold/xxx directories to inspect the amount of motion detected during the motion correction step. Data associated with periods of excessive movement (>1mm) should be dealt with carefully. The columns in the text document, from left to right, are: (1) time point number (2) roll rotation (in degrees) (3) pitch rotation (in degrees) (4) yaw rotation (in degrees) (5) between-slice translation (in mm) (6) within-slice up-down translation (in mm) (7) within-slice left-right translation (in mm) (8) RMS error before correction (9) RMS error after correction (10) total vector motion (in mm). We need to look at column 10. If any of these values are above 1, this may indicate excessive movement. Consult with someone further up the chain for advice (undergrads ask a grad student; grad students ask Chris or a Postdoc, if he ever has the funding to get one).
  9. Run the GLM for Single Subjects(selxavg3-sess)
    • Typically, we will do a group-level GLM. I have come to realize that it should generally suffice to do all processing in fsaverage surface space (mri_glmfit requires all operations to be done in a common surface space).
    • This will be relevant to the parameters you use in steps 4,6 and 8
  10. View the single-subject results; evaluate for sensibility
    • We can't cherry-pick the subjects according to their GLM results, but some basic contrasts of non-interest can be used as part of a data quality assessment
    • In surface space (fsaverage)
    • In voxel space (mni305)
  11. Run the group-level GLM (mri_glmfit)
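Steps 3, 4, and 8 above are easy to script. The sketch below runs in a /tmp scratch directory with made-up subject folders and a fake one-row .mcdat file, just to show the pattern; in practice you would run the same commands inside your real $SUBJECTS_DIR (and the .mcdat file name will depend on your preprocessing options):

```shell
#scratch SUBJECTS_DIR with two hypothetical subject folders
mkdir -p /tmp/ldt_subjects/FS_sub-01 /tmp/ldt_subjects/FS_sub-02
cd /tmp/ldt_subjects
#step 3: build the subjects file, one directory name per line
ls -1 -d FS*/ | sed "s,/$,," > subjects
#step 4: drop a subjectname file into each subject folder
while read s; do
  echo "$s" > "$s/subjectname"
done < subjects
#step 8: flag mcdat rows whose total vector motion (column 10) exceeds 1 mm
mkdir -p FS_sub-01/bold/001
echo "1 0 0 0 0 0 0 0 0 1.3" > FS_sub-01/bold/001/fmcpr.mcdat   #fake data row
awk '$10 > 1 {print FILENAME ": check timepoint " $1}' FS_*/bold/*/*.mcdat
```

The awk one-liner prints one line per offending row; silence means no timepoint exceeded the 1 mm threshold.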

Lab-specific documentation can be found on this wiki, but a more thorough (and accurate) description can be found on the Freesurfer Wiki

Trouble Shooting

Missing surfaces

I'm not sure how this came to pass, as I have never encountered it before, but it was probably the result of one of the autorecon steps stopping early. I was running preproc-sess using the fsaverage surface (having already successfully run it on the self surface) and got an error message about being unable to find lh.sphere.reg. A quick google found a FreeSurfer mailing list archive email concerned with a different issue that had a similar solution. In Bruce the Almighty's words:

> >> you can use mris_register with the -1 switch to indicate that the target 
> >> is a single surface not a statistical atlas. You will however still have to
> >> create the various surface representations and geometric measures we expect
> >> (e.g. ?h.inflated, ?h.sulc, etc....). If you can convert your 
> >> surfaces to our binary format (e.g. using mris_convert) to create
> >> an lh.orig, it would be something like:
> >>
> >> mris_smooth lh.orig lh.smoothwm
> >> mris_inflate lh.smoothwm lh.inflated
> >> mris_sphere lh.inflated lh.sphere
> >> mris_register -1 lh.sphere $TARGET_SUBJECT_DIR/lh.sphere ./lh.sphere.reg
> >>
> >> I've probably left something out, but that basic approach should work.
> >>
> >> cheers
> >> Bruce

The subject that had given me a problem already had ?h.inflated files, but no ?h.sphere files. I tried running some of the above steps, but there were missing dependencies. Right now I am running:

recon-all -s $SUBJECT -surfreg

This allegedly produces the ?h.sphere.reg files as output.

My script can't find my data

Some versions of the autorecon*.sh scripts have the SUBJECTS_DIR hard-coded. Or sometimes you will close your terminal window (e.g., at the end of the day), and then launch a new terminal window when you come back to the workstation (or resume working at a different computer). There's a good chance that your Freesurfer malfunction is the result of your SUBJECTS_DIR environment variable being set to the incorrect value. Troubleshooting step #1 should be the following:

echo $SUBJECTS_DIR

If the wrong directory name is printed to the screen, setting it to the correct value may well fix your problem.

At this point you might expect me to tell you how to set SUBJECTS_DIR to the correct value. But I'm not going to do that, and here's why:

  1. It's documented elsewhere on the wiki
  2. If you're confused about how to set SUBJECTS_DIR, you're also likely to just blindly type whatever example command I give without understanding what the command does. If this is the case, please become more proficient with the lab's procedures and software.

ERROR: Flag unrecognized.

Most Linux programs take parameters, or flags, that modify or specify how they are run. For example, you can't just call the recon-all command; you have to tell the program what data you want to work on, and this information is provided using the -i flag. Other flags might tell the program how aggressive to be when deciding to remove potential skull voxels, for example. There are no hard-and-fast rules, but to find the set of flags you can use for a particular Linux program, there are a few options you can try:

  1. man program_name
  2. program_name -help
  3. program_name -?

Often, running program_name -some_flag will cause the usage information to be displayed if some_flag is not a valid flag.

When you see an error message concerning an unrecognized flag, it is most likely because there is a typo in your command. For example:

recon-all -autorecon1 watershed 15

Each of the flags is supposed to be prefixed with a '-' character, but in the example above, -watershed was instead typed as watershed without the '-' character. These little typos can be hard to spot.

If you are completely perplexed why something doesn't work out when you followed the directions to the letter, the first thing you should do is throw out your assumption that you typed in the command correctly.

FastSurfer

FastSurfer "is a fast and extensively validated deep-learning pipeline for the fully automated processing of structural human brain MRIs." It works on top of a FreeSurfer installation, requires Python 3, and can be obtained through GitHub.

Obtaining FastSurfer

  1. Git Clone the repository at https://github.com/deep-mi/FastSurfer
  2. pip3 install -r requirements.txt
  3. There may be an issue installing LaPy with the requirements file. I overcame this by visiting github.com/Deep-MI/LaPy and installing it separately
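Put together, obtaining and running FastSurfer looks roughly like the sketch below. The run_fastsurfer.sh invocation and its flags follow the FastSurfer README; treat the input path and subject ID as placeholders for your own data:

```shell
#1. clone the repository
git clone https://github.com/deep-mi/FastSurfer
cd FastSurfer
#2. install the Python dependencies (note: pip3 install, not pip3-install)
pip3 install -r requirements.txt
#3. run the pipeline on a T1 image (paths and subject ID are placeholders)
./run_fastsurfer.sh --t1 /path/to/T1.nii.gz --sid sub-01 --sd $SUBJECTS_DIR
```

The output lands in a FreeSurfer-style subject folder under --sd, so the downstream steps on this page apply to FastSurfer output as well.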