FreeSurfer

From CCN Wiki


These instructions assume that Freesurfer has already been installed and configured on your workstation.
==First Things First: Enable FreeSurfer==
To use FreeSurfer, it must be in your path, and some key environment variables need to be set. '''If you have not done so already''', you should edit your .bashrc or .bash_profile file and append the following to the bottom:
===Linux===
Edit your .bashrc file:
nano .bashrc
This will open the file in the <code>nano</code> editor (you can use another text editor if you prefer). Add the following lines to the bottom of this file and save:
#FREESURFER
export FREESURFER_HOME=/usr/local/freesurfer-7.1.1 #change if freesurfer is installed elsewhere
source ${FREESURFER_HOME}/SetUpFreeSurfer.sh
#FSL
export FSLDIR=/usr/share/fsl/5.0
source ${FSLDIR}/etc/fslconf/fsl.sh
Test it out in the terminal by typing:
source ~/.bashrc
===Mac OS===
Edit your .bash_profile file:
nano ~/.bash_profile
Add the following lines to the bottom of the file and save:
export FREESURFER_HOME=/Applications/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh
Test it out in your terminal window by typing:
source ~/.bash_profile
After you do this, each new terminal window you launch will display some information indicating where FreeSurfer is located and where your subjects directory can be found. It should look like the following:
 Setting up environment for FreeSurfer/FS-FAST (and FSL)
 FREESURFER_HOME   /Applications/freesurfer
 FSFAST_HOME       /Applications/freesurfer/fsfast
 FSF_OUTPUT_FORMAT nii.gz
 SUBJECTS_DIR      /Applications/freesurfer/subjects
 MNI_DIR           /Applications/freesurfer/mni
 FSL_DIR           /usr/local/fsl
If you don't see this, then the SetUpFreeSurfer.sh initialization script is not automatically running. You may need to seek help from a higher authority.
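Before escalating, you can check from the terminal whether the environment is actually set. A minimal diagnostic sketch; the variable names are the standard ones exported by SetUpFreeSurfer.sh:

```shell
# Print the key FreeSurfer variables; "unset" means SetUpFreeSurfer.sh did not run
fs_home="${FREESURFER_HOME:-unset}"
subj_dir="${SUBJECTS_DIR:-unset}"
echo "FREESURFER_HOME = $fs_home"
echo "SUBJECTS_DIR    = $subj_dir"

# recon-all should be on the PATH if setup succeeded
if command -v recon-all >/dev/null 2>&1; then
  echo "recon-all found on PATH"
else
  echo "recon-all NOT found on PATH"
fi
```

If the variables print as "unset", your .bashrc (or .bash_profile) edits are not being sourced in new terminal windows.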


== Organization ==
Freesurfer data for a collection of subjects is organized into a single project directory, called $SUBJECTS_DIR. Try this in the Linux terminal:
 echo $SUBJECTS_DIR
It is likely that you will see something like the following, which is the sample 'bert' dataset that comes with a Freesurfer installation:
 /usr/local/freesurfer/subjects


Let us assume that you have been collecting data for some lexical decision task experiment. All the data for all subjects should be stored in a single directory, which you will set as your $SUBJECTS_DIR variable. For example, if you have copied the data to ~/Projects/LDT, then you would type the following:
 SUBJECTS_DIR=~/Projects/LDT
echo $SUBJECTS_DIR
Another trick we can do is to use the Unix <code>pwd</code> command to set the SUBJECTS_DIR to be whatever directory we happen to be in at the moment. The following series of commands will do the same as the previous example command:
cd ~
cd Projects
cd LDT
SUBJECTS_DIR=`pwd`
echo $SUBJECTS_DIR
The first line above, <code>cd ~</code>, moves you to your home directory. The second line moves you into the Projects folder, which might contain several sets of experiments. The third line moves you into the subdirectory containing the LDT data. The fourth line sets the SUBJECTS_DIR environment variable to whatever gets printed out when you execute the <code>pwd</code> command (the <code>pwd</code> command '''<u>p</u>'''rints the current '''<u>w</u>'''orking '''<u>d</u>'''irectory). As a result, the current working directory becomes the new SUBJECTS_DIR, as you can see when you execute the last line of code.
 
Note that in the <code>SUBJECTS_DIR=`pwd`</code> line, those are '''back-quotes''', which you might find on your keyboard sharing a key with the ~ character. <code>`</code> is not the same character as <code>'</code>. When you enclose a command in a pair of back-quotes, you are telling the operating system something along the lines of "this is a command that I want you to execute first, before using its output to figure out the rest of this business."
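The same command substitution can also be written as <code>$(...)</code>, which behaves identically and is easier to read; a quick illustration:

```shell
# Back-quotes and $(...) both perform "command substitution": run the
# enclosed command first, then use its output in the surrounding expression.
SUBJECTS_DIR=`pwd`       # old-style back-quotes
modern_dir="$(pwd)"      # equivalent modern form; nests more cleanly
echo "$SUBJECTS_DIR"
```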


----
=== Subject directory organization ===
Data for each subject should be kept in their own directory. Moreover, different types of data (i.e., anatomical/structural or bold/functional) are kept in separate subdirectories. The basic directory structure for each participant ('session' in Freesurfer terminology) looks like this (see also [[Freesurfer_BOLD_files| Freesurfer BOLD files]]):


*SUBJECTS_DIR
**Subject_001
***mri
***bold
****001
****002
****003
****004
****005
****006
Copy the data for the participant from the /raw subdirectory for the project in the ubfs folder. You will only need the /mri and the /bold directories. If you are processing data for multiple sessions for a single participant, you may need to rename some of the files as you copy them over; otherwise, you will end up overwriting files.
 
<i> Note that all the functional data (in the 'bold' subdirectory) are stored in sequentially numbered folders (3-digits), and all are given the same name ('f.nii' or 'f.nii.gz'). This seems to be a requirement. It may be possible to circumvent this requirement, but this is a relatively minor concern at this time. </i>
 
By the end, your data should look like this:
 
*SUBJECTS_DIR
**Subject_001
***mri
****orig.nii.gz (or MPRAGE.nii)
***bold
****001/f.nii.gz
****002/f.nii.gz
****etc.


Note that the .nii.gz file extension indicates that this is a gzipped NIFTI file. You can use <code>gunzip</code> to unzip the files, but this isn't really necessary unless you are going to manipulate these files in MATLAB. We have figured out ways to do everything for FreeSurfer in the BASH shell, so you may as well just leave them as-is unless you have a compelling reason (or compulsion) to unzip them.
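If you do need an uncompressed copy (e.g., for MATLAB), you can decompress to a new file while leaving the original .gz in place. A throwaway demonstration; the file below is a stand-in, not real imaging data:

```shell
# Create a stand-in for a subject's compressed functional file
mkdir -p demo/bold/001
printf 'fake nifti bytes' | gzip > demo/bold/001/f.nii.gz

# gunzip -c writes the decompressed data to stdout, so f.nii.gz is untouched
gunzip -c demo/bold/001/f.nii.gz > demo/bold/001/f.nii
ls demo/bold/001
```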


== Structural Preprocessing ==
The structural mri file (T1.nii or orig.nii) is transformed over a series of computationally-intensive steps invoked by the recon-all Freesurfer program. Recon-all is designed to execute all the steps in series without intervention; however, when working with data that may be of dubious quality, it is preferable to execute the process as a series of smaller groups of steps and check the output in between. The process is automated using computational algorithms, but if one step doesn't execute correctly, everything that follows will be compromised. The steps take many hours to complete, so inspecting the progress along the way can save many hours of processing time redoing steps that had been done incorrectly. Data that are likely to be problem-free are more likely to be successfully processed in a single pass.
 
== Anatomical Surface Mesh Construction with Recon-All ==
The computationally-intensive surface mapping is carried out by a Freesurfer program called recon-all. This program is extensively spaghetti-documented on the Freesurfer wiki [https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all here]. Though the Freesurfer documentation goes into much detail, it is also a little hard to follow at times and sometimes does some odd things.
 
=== recon-all -all ===
If you have good reason to suspect that the reconstruction process will complete without errors (e.g., you are working with an archival dataset that includes only "good" subjects, or you are using both T1 and T2 input images), then you can use the <code>-all</code> switch to run through all the autorecon steps 1-3 in a single sitting.
 
The recon-all command expects both a participant ID and a processing directive. If you are running all steps, then you are presumably beginning with a T1-weighted .nii or .nii.gz (or similar) file. The following commands take advantage of environment variables, which would let me run exactly the same command after changing the relevant variable values (in this case, the subject ID):
#set the SUBJECTS_DIR to the current directory
SUBJECTS_DIR=`pwd`
#I'd like to reconstruct data for subject 01
#The target data follows BIDS convention and
#can be found in ./<span style="color:red">sub-01</span>/anat/<span style="color:red">sub-01</span><span style="color:green">_T1w.nii.gz</span>
#I'll create a subjectID variable that can be changed to allow me to run the same command across participants
subjectID=<span style="color:red">sub-01</span>
recon-all -all \
-i ${SUBJECTS_DIR}/${subjectID}/anat/${subjectID}_T1w.nii.gz \
-subjid FS_${subjectID}


The above command will find the <span style="color:green">T1w.nii.gz</span> file for whatever subject is stored in <span style="color:red">${subjectID}</span>, and run all processing steps. The output files will be found in the FreeSurfer subject folder called FS_<span style="color:red">${subjectID}</span>. In this case, folder FS_<span style="color:red">sub-01</span> will be created in the ${SUBJECTS_DIR}. On the GPU-enabled workstations, this process should take between 6 and 7 hours. If your anatomical data files have a different naming convention (e.g., include session or acquisition information or have a different file extension), you will probably end up changing the <span style="color:green">code in green</span>.


=== recon-all in stages ===
The Freesurfer wiki describes a generally problem-free case. This might work well for data that you already know is going to be problem-free, but we seldom have that guarantee. We might instead split the recon-all processing into sub-stages where you can do quality-control inspection at each step:
#[[Autorecon1]] (~0.5  hours on ws02, assuming no problems encountered)
#[[Autorecon2]] (~2.5 hours on ws02, assuming no problems encountered)
#[[Autorecon3]]


==== Stage 1: Skull Stripping ====
After running autorecon1, it is best to run autorecon2 immediately afterward. Usually autorecon1 does an acceptable job of skull stripping. If it doesn't, the results will be evident after autorecon2 is completed when you view the inflated surface with <code>tksurfer SUBJECTID HEMI inflated</code>:
 subjectID=FS_0202
 tksurfer ${subjectID} lh inflated #this will display the left hemisphere (lh) inflated surface
Check both the lh and rh surfaces. If the brain appears lumpy or there are odd points sticking out, proceed to [[Autorecon2]] editing.

=== recon-all as a batch ===
Anatomical surface reconstruction using the <code>recon-all</code> script is designed to work on one participant at a time, whereas the functional data analysis using fs-fast can work over a batch of participants listed in a <code>subjects</code> file. If you are comfortable setting your workstation to run for an extended period of time, you can run recon-all on a batch of participants listed in your <code>subjects</code> file. The following script takes the name of your subjects file as a parameter and iterates through each name. For each subject it runs recon-all -all using the T1 and T2 anatomical files:
==== batch_autorecon.sh ====
 #!/bin/bash
 export SUBJECTS_DIR=`pwd`
 echo $SUBJECTS_DIR
 while read s; do
   <span style="color:green">#note that, depending on the locations and names of your anatomical data files,
   #you might have to modify the next line to reflect those differences</span>
   recon-all -all -i ./$s/T1.nii.gz -T2 ./$s/T2.nii.gz -subjid FS_$s
 done < $1
To run:
 nohup batch_autorecon.sh mysubjects.txt &
Because recon-all takes a few hours for each participant, we use <code>nohup COMMAND &</code> to run it in the background, and log back in several hours later to check on the progress:
 cat nohup.out
 
== Functional Analysis ==
The previous steps have been concerned only with processing the T1 anatomical data. Though this might suffice for a purely structural brain analysis (e.g., voxel-based brain morphometry, which might explore how cortical thickness relates to some cognitive ability), most of our studies will employ functional MRI, which measures how the hemodynamic response changes as a function of task, condition or group. In the Freesurfer pipeline, this is done using a program called FS-FAST.
 
=== FS-FAST Functional Analysis===
Each of these steps is detailed more extensively elsewhere, but generally speaking you will need to follow these steps before starting your functional analysis:
#Copy BOLD data to the Freesurfer subject folder (see page for [[Freesurfer BOLD files]])
#Create (or copy if experiment used a fixed schedule) your paradigm files for each fMRI run, and edit them using matlab (see "[[Par-Files]]").
#*When editing par files, be sure to check how many volumes to drop for each par file; it may be different every time! See above link for more details.
#Create a subjects text file in your $SUBJECTS_DIR called "subjects" (or some other name) that contains a list of your subjects necessary for batch processing (used for [[Configure preproc-sess|preproc-sess]])
#*A quick way to do this, assuming you want to preprocess everyone, is to pipe the results of the ls command with the -1 switch (1 per line) into grep and then redirect the output to a file:
#*<code>ls -1 -d FS*/ | sed "s,/$,," > subjects</code>
#*This will list the names of any directories in the current directory starting with "FS", and strip off the trailing forward slash.
#Create a text file called <code>subjectname</code> in each of the subject folders. The file is a 1-line plaintext file that just contains the name of the subject.
#Configure your analyses (using [[Configure mkanalysis-sess|mkanalysis-sess]])
#Configure your contrast (using [[Configure mkcontrast-sess|mkcontrast-sess]])
#Preprocess your data (smoothing, slice-time correction, intensity normalization and brain mask creation) (using [[Configure preproc-sess|preproc-sess]])
#Check the *.mcdat files in each of the ''subject_name/bold/xxx'' directories to inspect the amount of motion detected during the motion correction step. Data associated with periods of excessive movement (>1mm) should be dealt with carefully. Columns in the text document from left to right are: (1) time point number (2) roll rotation (in degrees) (3) pitch rotation (in degrees) (4) yaw rotation (in degrees) (5) between-slice motion or translation (in mm) (6) within-slice up-down motion or translation (in mm) (7) within-slice left-right motion or translation (in mm) (8) RMS error before correction (9) RMS error after correction (10) total vector motion or translation (in mm). We need to look at column 10. If any of these values are above 1, this might be indicative of excessive movement. Consult with someone further up the chain for advice (undergrads ask a grad student; grad students ask Chris or a Postdoc if he ever has the funding to get one).
#Run the GLM for Single Subjects([[selxavg3-sess]])
#*Typically, we will do a group-level GLM. I have come to realize that it should generally suffice to do all processing in '''fsaverage''' surface space (mri_glmfit requires all operations to be done in a common surface space).
#*This will be relevant to the parameters you use in steps 4,6 and 8
#View the single-subject results; evaluate for sensibility
#*We can't cherry-pick the subjects according to their GLM results, but some basic contrasts of non-interest can be used as part of a data quality assessment
#*In surface space (fsaverage)
#*[[Viewing_Results_in_Standard_Voxel_Space | In voxel space (mni305)]]
#Run the group-level GLM ([[mri_glmfit]])
Lab-specific documentation can be found on this wiki, but a more thorough (and accurate) description can be found on the [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastFirstLevel Freesurfer Wiki]
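Steps 3 and 5 above (building the <code>subjects</code> file and writing a <code>subjectname</code> file into each subject folder) can be scripted together. A minimal sketch, assuming subject folders named FS_* in the current directory; the FS_sub-* folders created below are throwaway stand-ins for demonstration:

```shell
# Stand-in subject folders for demonstration only
mkdir -p FS_sub-01 FS_sub-02

# Step 3: list FS_* directories, one per line, stripping the trailing slash
ls -1 -d FS_*/ | sed 's,/$,,' > subjects

# Step 5: write a one-line subjectname file inside each subject folder
while read s; do
  echo "$s" > "$s/subjectname"
done < subjects
```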


== Troubleshooting ==
=== Missing surfaces ===
Not sure how this came to pass, as I have never encountered this before, but it was probably the result of one of the autorecon steps stopping early. I was running preproc-sess using the ''fsaverage'' surface (having already successfully run it on the ''self'' surface) and got an error message about being unable to find lh.sphere.reg. A quick google found a FreeSurfer mailing list archive email that was concerned with a different issue that had a similar solution. In Bruce the Almighty's words:
 > >> you can use mris_register with the -1 switch to indicate that the target
 > >> is a single surface not a statistical atlas. You will however still have to
 > >> create the various surface representations and geometric measures we expect
 > >> (e.g. ?h.inflated, ?h.sulc, etc....). If you can convert your
 > >> surfaces to our binary format (e.g. using mris_convert) to create
 > >> an lh.orig, it would be something like:
 > >>
 > >> mris_smooth lh.orig lh.smoothwm
 > >> mris_inflate lh.smoothwm lh.inflated
 > >> mris_sphere lh.inflated lh.sphere
 > >> mris_register -1 lh.sphere $TARGET_SUBJECT_DIR/lh.sphere ./lh.sphere.reg
 > >>
 > >> I've probably left something out, but that basic approach should work.
 > >>
 > >> cheers
 > >> Bruce
The subject that had given me a problem already had ?h.inflated files, but no ?h.sphere files. I tried running some of the above steps but there were missing dependencies. I am currently running:
 recon-all -s $SUBJECT -surfreg
This allegedly produces the sphere.reg files as an output.
 
=== My script can't find my data ===
Some versions of the autorecon*.sh scripts have the SUBJECTS_DIR hard-coded. Or sometimes you will close your terminal window (e.g., at the end of the day), and then launch a new terminal window when you come back to the workstation (or resume working at a different computer). There's a good chance that your Freesurfer malfunction is the result of your SUBJECTS_DIR environment variable being set to the incorrect value. Troubleshooting step #1 should be the following:
  echo $SUBJECTS_DIR
If the wrong directory name is printed to the screen, setting it to the correct value may well fix your problem.
 
At this point you might expect me to tell you how to set SUBJECTS_DIR to the correct value. But I'm not going to do that, and here's why:
#It's documented elsewhere on the wiki
#If you're confused about how to set SUBJECTS_DIR, you're also likely to just blindly type whatever example command I give without understanding what the command does. If this is the case, please become more proficient with the lab's procedures and software.


=== ERROR: Flag unrecognized. ===
When you see an error message concerning an unrecognized flag, it is most likely because there is a typo in your command. For example:
 recon-all -autorecon1 watershed 15
Each of the flags is supposed to be prefixed with a '-' character, but in the example above, <code>-watershed</code> was instead typed as <code>watershed</code> ''without'' the '-' character. These little typos can be hard to spot.

<span style="color:red">'''If you are completely perplexed why something doesn't work out when you followed the directions to the letter, the first thing you should do is throw out your assumption that you typed in the command correctly.'''</span>
 
= FastSurfer =
[https://deep-mi.org/research/fastsurfer/ FastSurfer] "is a fast and extensively validated deep-learning pipeline for the fully automated processing of structural human brain MRIs." It works on top of a FreeSurfer installation, requires Python 3, and can be obtained through GitHub.
== Obtaining FastSurfer ==
# Git clone the repository at https://github.com/deep-mi/FastSurfer
# <code>pip3 install -r requirements.txt</code>
# There may be an issue installing LaPy with the requirements file. I overcame this by visiting github.com/Deep-MI/LaPy and installing it separately.
 
[[Category:FreeSurfer]]

Latest revision as of 12:00, 24 June 2022

Freesurfer is a surface-based fMRI processing and analysis package written for the Unix environment (including Mac OS X, which is based on Unix). The neuroanatomical organization of the brain has the grey matter on the outside surface of the cortex. Aside from the ventral/medial subcortical structures, the interior volume of the brain is predominantly white matter axonal tracts. Because neurons metabolize in the cell body, rather than along the axons, we can focus on the grey matter found in the cortical surface, because any fMRI signal changes detected in the white matter should theoretically be noise. This is the motivation for surface-based analyses of fMRI.

Freesurfer has a rigid set of assumptions concerning how the input data is organized and labeled. The following instructions will help avoid any violations of these assumptions that might derail your Freesurfer fMRI processing pipeline.

These instructions assume that Freesurfer has already been installed and configured on your workstation.

First Things First: Enable FreeSurfer

To use FreeSurfer, it must be in your path, and some key environment variables need to be set. If you have not done so already, you should edit your .bashrc or .bash_profile file and append the following to the bottom:

Linux

Edit your .bashrc file:

nano .bashrc

This will open the file in the nano editor (you can use another text editor if you prefer). Add the following lines to the bottom of this file and save:

#FREESURFER
export FREESURFER_HOME=/usr/local/freesurfer-7.1.1 #change if freesurfer is installed elsewhere
source ${FREESURFER_HOME}/SetUpFreeSurfer.sh
#FSL
export FSLDIR=/usr/share/fsl/5.0
source ${FSLDIR}/etc/fslconf/fsl.sh

Test it out in the terminal by typing:

source ~/.bashrc

Mac OS

Edit your .bash_profile file:

nano ~/.bash_profile

Add the following lines to the bottom of the file and save:

export FREESURFER_HOME=/Applications/freesurfer
source $FREESURFER_HOME/SetUpFreeSurfer.sh

Test it out in your terminal window by typing:

source ~/.bash_profile

After you do this, when you launch a new terminal window, you will see some information appear in the terminal window indicating where FreeSurfer is located and where your subjects directory can be found. It should look like the following:

Setting up environment for FreeSurfer/FS-FAST (and FSL)
FREESURFER_HOME   /Applications/freesurfer
FSFAST_HOME       /Applications/freesurfer/fsfast
FSF_OUTPUT_FORMAT nii.gz
SUBJECTS_DIR      /Applications/freesurfer/subjects
MNI_DIR           /Applications/freesurfer/mni
FSL_DIR           /usr/local/fsl

If you don't see this, then the SetUpFreeSurfer.sh initialization script is not automatically running. You may need to seek help from a higher authority.

Organization

Freesurfer data for a collection of subjects is organized into a single project directory, called $SUBJECTS_DIR. Try this in the Linux terminal:

echo $SUBJECTS_DIR

It is likely that you will see something like the following, which is the sample 'bert' dataset that comes with a Freesurfer installation:

/usr/local/freesurfer/subjects

Let us assume that you have been collecting data for some lexical decision task experiment. All the data for all subjects should be stored in a single directory, which you will set as your $SUBJECTS_DIR variable. For example, if you have copied the data to ~/Projects/LDT, then we would type the following:

SUBJECTS_DIR=~/Projects/LDT
echo $SUBJECTS_DIR

Another trick we can do is to use the Unix pwd command to set the SUBJECTS_DIR to be whatever directory we happen to be in at the moment. The following series of commands will do the same as the previous example command:

cd ~
cd Projects
cd LDT
SUBJECTS_DIR=`pwd`
echo $SUBJECTS_DIR

The first line above, cd ~ moves you to your home directory. The second line moves you to Projects folder that might contain several sets of experiments. The third line of code moves you into the subdirectory containing the LDT data. The fourth line sets the SUBJECTS_DIR environment variable to whatever gets printed out when you execute the pwd command (the pwd command prints the current working directory). As a result, the current working directory becomes the new SUBJECTS_DIR after you execute this command, as you can see when you execute the last line of code.

Note that in the SUBJECTS_DIR=`pwd` line, those are back-quotes, which you might find on your keyboard sharing a key with the ~ character. ` is not the same character as '. When you enclose a command in a pair of back-quotes, you are telling the operating something along the lines of "this is a command that I want you to execute first, before using its output to figure out the rest of this business."



Subject directory organization

Data for each subject should be kept in their own directory. Moreover, different types of data (i.e., anatomical/structural or bold/functional) are kept in separate subdirectories. The basic directory structure for each participant ('session' in Freesurfer terminology) looks like this (see also Freesurfer BOLD files):

  • SUBJECTS_DIR
    • Subject_001
      • mri
      • bold
        • 001
        • 002
        • 003
        • 004
        • 005
        • 006

Copy the data for the participant from the /raw subdirectory for the project in the ubfs folder. You will only need the /mri and the /bold directories. If you are processing data for multiple session for a single participant, you may need to rename some of the files as you copy them over, otherwise, you will end up overwriting files.

Note that all the functional data (in the 'bold' subdirectory) are stored in sequentially numbered folders (3-digits), and all are given the same name ('f.nii' or 'f.nii.gz'). This seems to be a requirement. It may be possible to circumvent this requirement, but this is a relatively minor concern at this time.

By the end, your data should look like this:

  • SUBJECTS_DIR
    • Subject_001
      • mri
        • orig.nii.gz (or MPRAGE.nii)
      • bold
        • 001/f.nii.gz
        • 002/f.nii.gz
        • etc.

Note that the .nii.gz file extension indicates that this is a gzipped NIFTI file. You can use gunzip to unzip the files, but this isn't really necessary unless you are going to manipulate these files in MATLAB. We have figured out ways to do everything for FreeSurfer in the BASH shell, so you may as well just leave them as-is unless you have a compelling reason (or compulsion) to unzip them.

Structural Preprocessing

The structural mri file (T1.nii or orig.nii) is transformed over a series of computationally-intensive steps invoked by the recon-all Freesurfer program. Recon-all is designed to execute all the steps in series without intervention; however, when working with data that may be of dubious quality, it seems preferable to execute the process as a series of smaller groups of steps and check the output in between. The process is automated using computational algorithms, but if one step doesn't execute correctly, everything that follows will be compromised. The steps take many hours to complete, so inspecting the progress along the way can save many hours of processing time that would otherwise be spent redoing steps that had gone wrong. Data that are more likely to be problem-free are more likely to be successfully processed in a single step.

Anatomical Surface Mesh Construction with Recon-All

The computationally-intensive surface mapping is carried out by a Freesurfer program called recon-all. This program is extensively spaghetti-documented on the Freesurfer wiki here. Though the Freesurfer documentation goes into much detail, it is also a little hard to follow at times and sometimes does some odd things.

recon-all -all

If you have good reason to suspect that the reconstruction process will complete without errors (e.g., you are working with an archival dataset that includes only "good" subjects, or you are using both T1 and T2 input images), then you can use the -all switch to run through all the autorecon steps 1-3 in a single sitting.

The recon-all command expects both a participant ID and a processing directive. If you are running all steps, then you are presumably beginning with a T1-weighted .nii or .nii.gz (or similar) file. The following commands take advantage of environment variables, which would let me run exactly the same command after changing the relevant variable values (in this case, the subject ID):

#change SUBJECTS_DIR from the default to the current directory
SUBJECTS_DIR=`pwd`
#I'd like to reconstruct data for subject 01
#The target data follows BIDS convention and 
#can be found in ./sub-01/anat/sub-01_T1w.nii.gz
#I'll create a subjectID variable that can be changed to allow me to run the same command across participants
subjectID=sub-01
recon-all -all \
-i ${SUBJECTS_DIR}/${subjectID}/anat/${subjectID}_T1w.nii.gz \
-subjid FS_${subjectID}

The above command will find the T1w.nii.gz file for whatever subject is stored in ${subjectID}, and run all processing steps. The output files will be placed in the FreeSurfer subject folder named FS_${subjectID}; in this case, folder FS_sub-01 will be created in ${SUBJECTS_DIR}. On the GPU-enabled workstations, this process should take between 6 and 7 hours. If your anatomical data files follow a different naming convention (e.g., they include session or acquisition information, or have a different file extension), you will need to change the -i path in the command accordingly.
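One safe way to adapt the command is to build the input path in a variable and echo the full command line before running it for real; the session label and file naming below are hypothetical examples, not a required convention:

```shell
SUBJECTS_DIR=`pwd`
# hypothetical naming that includes a session label; adjust to your data
subjectID=sub-01
session=ses-01
anat=${SUBJECTS_DIR}/${subjectID}/anat/${subjectID}_${session}_T1w.nii.gz
# echo first to verify the path looks right; remove 'echo' to actually run it
echo recon-all -all -i "$anat" -subjid FS_${subjectID}
```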

recon-all in stages

The Freesurfer wiki describes a generally problem-free case. This might work well for data that you already know is going to be problem-free, but we seldom have that guarantee. We might instead split the recon-all processing into sub-stages where you can do quality-control inspection at each step:

  1. Autorecon1 (~0.5 hours on ws02, assuming no problems encountered)
  2. Autorecon2 (~2.5 hours on ws02, assuming no problems encountered)
  3. Autorecon3

After running autorecon1, it is best to run autorecon2 immediately afterward. Usually autorecon1 does an alright job of skull stripping. If it doesn't, the problems will be evident after autorecon2 completes, when you view the surface with tksurfer SUBJECTID HEMI inflated:

subjectID=FS_0202
tksurfer ${subjectID} lh inflated #this will display the left hemisphere (lh) inflated surface 

Check both the lh and rh surfaces. If the brain appears lumpy or there are odd points sticking out, proceed to Autorecon2 editing.

recon-all as a batch

Anatomical surface reconstruction using the recon-all script is designed to work on one participant at a time, whereas the functional data analysis using fs-fast can work over a batch of participants listed in a subjects file. If you are comfortable setting your workstation to run for an extended period of time, you can run recon-all on a batch of participants listed in your subjects file. The following code snippet takes the name of your subjects file as a parameter, and iterates through each name. For each subject it runs recon-all -all using the T1 and T2 anatomical files:

batch_autorecon.sh

#!/bin/bash

export SUBJECTS_DIR=`pwd`
echo $SUBJECTS_DIR

while read s; do
  #note that, depending on the locations and names of your anatomical data files,
  #you might have to modify the next line to reflect those differences
  recon-all -all -i ./$s/T1.nii.gz -T2 ./$s/T2.nii.gz -subjid FS_$s
done<$1

To run:

chmod u+x batch_autorecon.sh  #make the script executable (first time only)
nohup ./batch_autorecon.sh mysubjects.txt &

Because recon-all takes a few hours for each participant, we would use nohup COMMAND & to run it in the background, and log back in several hours later to check on the progress:

cat nohup.out
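Besides cat, you can view just the most recent log lines, or grep for recon-all's per-subject status lines; the exact wording of those status lines below is an assumption, so grep your own log to confirm (the printf line just fabricates a demo log):

```shell
# demo log line; in real use nohup.out is created by the nohup command above
printf 'recon-all -s FS_sub-01 finished without error\n' > nohup.out
tail -n 20 nohup.out    # view only the most recent log lines
# status-line wording is an assumption; check it against your own nohup.out
grep -i "finished without error" nohup.out
grep -ci "exited with errors" nohup.out
```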

Functional Analysis

The previous steps have been concerned only with processing the T1 anatomical data. Though this might suffice for a purely structural brain analysis (e.g., voxel-based brain morphometry, which might explore how cortical thickness relates to some cognitive ability), most of our studies will employ functional MRI, which measures how the hemodynamic response changes as a function of task, condition or group. In the Freesurfer pipeline, this is done using a program called FS-FAST.

FS-FAST Functional Analysis

Each of these steps is detailed more extensively elsewhere, but generally speaking you will need to follow these steps before starting your functional analysis:

  1. Copy BOLD data to the Freesurfer subject folder (see page for Freesurfer BOLD files)
  2. Create (or copy if experiment used a fixed schedule) your paradigm files for each fMRI run, and edit them using matlab (see "Par-Files").
    • When editing par files, be sure to check how many volumes to drop for each par file; it may be different every time! See above link for more details.
  3. Create a subjects text file in your $SUBJECTS_DIR called "subjects" (or some other name) that contains a list of your subjects necessary for batch processing (used for preproc-sess)
    • A quick way to do this, assuming you want to preprocess everyone, is to pipe the results of the ls command with the -1 switch (1 per line) into grep and then redirect the output to a file:
    • ls -1 -d FS*/ | sed "s,/$,," > subjects
    • This will list the names of any directories in the current directory starting with "FS", and strip off the trailing forward slash
  4. Create a text file called subjectname in each of the subject folders. The file is a 1-line plaintext file that just contains the name of the subject.
  5. Configure your analyses (using mkanalysis-sess)
  6. Configure your contrast (using mkcontrast-sess)
  7. Preprocess your data (smoothing, slice-time correction, intensity normalization and brain mask creation) (using preproc-sess)
  8. Check the *.mcdat files in each of the subject_name/bold/xxx directories to inspect the amount of motion detected during the motion correction step. Data associated with periods of excessive movement (>1mm) should be dealt with carefully. The columns in the text file, from left to right, are: (1) time point number; (2) roll rotation (in degrees); (3) pitch rotation (in degrees); (4) yaw rotation (in degrees); (5) between-slice translation (in mm); (6) within-slice up-down translation (in mm); (7) within-slice left-right translation (in mm); (8) RMS error before correction; (9) RMS error after correction; (10) total vector motion (in mm). We need to look at column 10: if any of these values are above 1, this might be indicative of excessive movement. Consult with someone further up the chain for advice (undergrads ask a grad student; grad students ask Chris or a Postdoc if he ever has the funding to get one).
  9. Run the GLM for Single Subjects(selxavg3-sess)
    • Typically, we will do a group-level GLM. I have come to realize that it should generally suffice to do all processing in fsaverage surface space (mri_glmfit requires all operations to be done in a common surface space).
    • This will be relevant to the parameters you use in steps 4, 6, and 8
  10. View the single-subject results; evaluate for sensibility
    • We can't cherry-pick the subjects according to their GLM results, but some basic contrasts of non-interest can be used as part of a data quality assessment
    • In surface space (fsaverage)
    • In voxel space (mni305)
  11. Run the group-level GLM (mri_glmfit)
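Steps 3 and 4 above can be sketched together as a single snippet, run from $SUBJECTS_DIR; it assumes your subject folders all start with "FS" (the mkdir line just fabricates demo folders):

```shell
# demo folders; in real use these are your recon-all output directories
mkdir -p FS_sub-01 FS_sub-02
# step 3: list FS* directories, strip the trailing slash, save to 'subjects'
ls -1 -d FS*/ | sed "s,/$,," > subjects
# step 4: write a one-line subjectname file inside each subject folder
while read s; do
  echo "$s" > "$s/subjectname"
done < subjects
```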

Lab-specific documentation can be found on this wiki, but a more thorough (and accurate) description can be found on the Freesurfer Wiki.
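For the motion check in step 8, a quick awk screen of column 10 saves opening each .mcdat file by hand; the folder glob and the fmcpr.mcdat filename below are illustrative assumptions, so adjust them to match your own run directories:

```shell
# demo .mcdat with two time points (the second exceeds 1 mm total motion);
# in real use these files are produced by the preprocessing step
mkdir -p FS_demo/bold/001
printf '1 0 0 0 0 0 0 0 0 0.4\n2 0 0 0 0 0 0 0 0 1.3\n' > FS_demo/bold/001/fmcpr.mcdat
# flag any time point whose total vector motion (column 10) exceeds 1 mm
for f in FS_*/bold/0*/*.mcdat; do
  awk -v f="$f" '$10 > 1 {print f ": time point " $1 " moved " $10 " mm"}' "$f"
done
# → FS_demo/bold/001/fmcpr.mcdat: time point 2 moved 1.3 mm
```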

Trouble Shooting

Missing surfaces

Not sure how this came to pass, as I have never encountered this before, but it was probably the result of one of the autorecon steps stopping early. I was running preproc-sess using the fsaverage surface (having already successfully run it on the self surface) and got an error message about being unable to find lh.sphere.reg. A quick google found a FreeSurfer mailing list archive email that was concerned with a different issue that had a similar solution. In Bruce the Almighty's words:

> >> you can use mris_register with the -1 switch to indicate that the target 
> >> is a single surface not a statistical atlas. You will however still have to
> >> create the various surface representations and geometric measures we expect
> >> (e.g. ?h.inflated, ?h.sulc, etc....). If you can convert your 
> >> surfaces to our binary format (e.g. using mris_convert) to create
> >> an lh.orig, it would be something like:
> >>
> >> mris_smooth lh.orig lh.smoothwm
> >> mris_inflate lh.smoothwm lh.inflated
> >> mris_sphere lh.inflated lh.sphere
> >> mris_register -1 lh.sphere $TARGET_SUBJECT_DIR/lh.sphere ./lh.sphere.reg
> >>
> >> I've probably left something out, but that basic approach should work.
> >>
> >> cheers
> >> Bruce

The subject that had given me a problem already had ?h.inflated files, but no ?h.sphere files. I tried running some of the above steps, but there were missing dependencies. For now, I am running:

recon-all -s $SUBJECT -surfreg

This allegedly produces the ?h.sphere.reg files as output.

My script can't find my data

Some versions of the autorecon*.sh scripts have the SUBJECTS_DIR hard-coded. Or sometimes you will close your terminal window (e.g., at the end of the day), and then launch a new terminal window when you come back to the workstation (or resume working at a different computer). There's a good chance that your Freesurfer malfunction is the result of your SUBJECTS_DIR environment variable being set to the incorrect value. Troubleshooting step #1 should be the following:

echo $SUBJECTS_DIR

If the wrong directory name is printed to the screen, setting it to the correct value may well fix your problem.

At this point you might expect me to tell you how to set SUBJECTS_DIR to the correct value. But I'm not going to do that, and here's why:

  1. It's documented elsewhere on the wiki
  2. If you're confused about how to set SUBJECTS_DIR, you're also likely to just blindly type whatever example command I give without understanding what the command does. If this is the case, please become more proficient with the lab's procedures and software.

ERROR: Flag unrecognized.

Most Linux programs take parameters, or flags, that modify or specify how they are run. For example, you can't just call the recon-all command; you have to tell the program what data you want it to work on, and this information is provided using the -i flag. Other flags might tell the program how aggressive to be when deciding whether to remove potential skull voxels, for example. There are no hard-and-fast rules, but to find the set of flags that a particular Linux program accepts, there are a few options you can try:

  1. man program_name
  2. program_name -help
  3. program_name -?

Often, passing an invalid flag (program_name -some_flag) will itself cause the usage information to be displayed.

When you see an error message concerning an unrecognized flag, it is most likely because there is a typo in your command. For example:

recon-all -autorecon1 watershed 15

Each of the flags is supposed to be prefixed with a '-' character, but in the example above, -watershed was instead typed as watershed without the '-' character. These little typos can be hard to spot.

If you are completely perplexed why something doesn't work out when you followed the directions to the letter, the first thing you should do is throw out your assumption that you typed in the command correctly.

FastSurfer

FastSurfer "is a fast and extensively validated deep-learning pipeline for the fully automated processing of structural human brain MRIs." It works on top of a FreeSurfer installation, requires Python 3, and can be obtained through GitHub.

Obtaining FastSurfer

  1. git clone the repository at https://github.com/deep-mi/FastSurfer
  2. pip3 install -r requirements.txt
  3. There may be an issue installing LaPy with the requirements file. I overcame this by visiting github.com/Deep-MI/LaPy and installing it separately