<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://ccn-wiki.caset.buffalo.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Erica</id>
	<title>CCN Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://ccn-wiki.caset.buffalo.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Erica"/>
	<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php/Special:Contributions/Erica"/>
	<updated>2026-04-04T10:30:48Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.3</generator>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=3D_Brain_Models&amp;diff=1802</id>
		<title>3D Brain Models</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=3D_Brain_Models&amp;diff=1802"/>
		<updated>2019-05-29T15:18:54Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Use srf2obj to create a Blender-Compatible File */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is all about turning FreeSurfer surface files into 3D printable models.&lt;br /&gt;
Inspiration comes from [https://brainder.org/2012/05/08/importing-freesurfer-cortical-meshes-into-blender/ here]&lt;br /&gt;
&lt;br /&gt;
=Free Surfer=&lt;br /&gt;
==Create a WaveFront OBJ File==&lt;br /&gt;
Use mris_convert to convert from FreeSurfer binary to ASCII:&lt;br /&gt;
 mris_convert lh.pial lh.pial.asc&lt;br /&gt;
Repeat for the other hemisphere (rh.pial)&lt;br /&gt;
&lt;br /&gt;
==Rename Files==&lt;br /&gt;
 mv lh.pial.asc lh.pial.srf&lt;br /&gt;
Repeat for the other hemisphere&lt;br /&gt;
&lt;br /&gt;
==Use srf2obj to create a Blender-Compatible File==&lt;br /&gt;
The srf2obj script requires gawk, so install gawk first if it isn&#039;t already present. The script can be found in the &#039;&#039;&#039;scripts/Shell&#039;&#039;&#039; folder on ubfs and should be in your path (copy it to your ~/bin directory if you don&#039;t already have it).&lt;br /&gt;
 srf2obj lh.pial.srf &amp;gt; lh.pial.obj&lt;br /&gt;
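If gawk is unavailable, the conversion that srf2obj performs can be approximated in a few lines of Python. This is a hypothetical reimplementation, assuming the ASCII layout that mris_convert emits (a comment line, a counts line, then vertex rows of x y z 0, then face rows of 0-based indices); OBJ face indices are 1-based, hence the offset:

```python
def srf_to_obj(srf_lines):
    """Convert FreeSurfer ASCII surface lines to Wavefront OBJ lines.

    Assumes: line 0 is a comment, line 1 is 'nvertices nfaces',
    then nvertices rows of 'x y z 0', then nfaces rows of
    0-based vertex indices 'i j k 0'.
    """
    nverts, nfaces = (int(t) for t in srf_lines[1].split())
    out = []
    for row in srf_lines[2:2 + nverts]:
        x, y, z = row.split()[:3]
        out.append(f"v {x} {y} {z}")
    for row in srf_lines[2 + nverts:2 + nverts + nfaces]:
        i, j, k = (int(t) for t in row.split()[:3])
        out.append(f"f {i + 1} {j + 1} {k + 1}")  # OBJ indices start at 1
    return out
```

Writing the returned lines to lh.pial.obj mirrors the srf2obj redirect shown above.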
&lt;br /&gt;
=Blender=&lt;br /&gt;
After creating the pial-surface hemispheres in FreeSurfer, import one file at a time into Blender. Once a file has been imported, a gray brain mesh should appear on the screen. To make the mesh easier to work with, go to the bottom left-hand side of the window and set the X, Y, and Z scale values to 1.0; the brain will then be much smaller and fit nicely within the provided grid. Use the rotate and transform tools on the left-hand side of the screen to move and reorient the brain. To the best of your ability, level the brain on all planes so that it matches its real-life orientation. After the first hemisphere is in place, repeat the same process with the other hemisphere. Once both halves have the same size and orientation, align them next to each other as closely as possible, then join them into one object: right-click on one half, hold Shift, and right-click on the other. With both hemispheres selected, press CTRL + J and the two halves will become one mesh.&lt;br /&gt;
===3D Printing===&lt;br /&gt;
The 3D printers at UB use a multitude of file formats, but all of the printers accept .stl files, and Blender can export any mesh directly to .stl.&lt;br /&gt;
Before printing, you will need to repair your STL files using the Netfabb online service (requires a free Autodesk account):&lt;br /&gt;
https://service.netfabb.com/login.php&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=3D_Brain_Models&amp;diff=1801</id>
		<title>3D Brain Models</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=3D_Brain_Models&amp;diff=1801"/>
		<updated>2019-05-29T15:18:22Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Free Surfer */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is all about turning FreeSurfer surface files into 3D printable models.&lt;br /&gt;
Inspiration comes from [https://brainder.org/2012/05/08/importing-freesurfer-cortical-meshes-into-blender/ here]&lt;br /&gt;
&lt;br /&gt;
=Free Surfer=&lt;br /&gt;
==Create a WaveFront OBJ File==&lt;br /&gt;
Use mris_convert to convert from FreeSurfer binary to ASCII:&lt;br /&gt;
 mris_convert lh.pial lh.pial.asc&lt;br /&gt;
Repeat for the other hemisphere (rh.pial)&lt;br /&gt;
&lt;br /&gt;
==Rename Files==&lt;br /&gt;
 mv lh.pial.asc lh.pial.srf&lt;br /&gt;
Repeat for the other hemisphere&lt;br /&gt;
&lt;br /&gt;
==Use srf2obj to create a Blender-Compatible File==&lt;br /&gt;
The srf2obj script requires gawk, so install gawk first if it isn&#039;t already present. The script can be found in the &#039;&#039;&#039;scripts/Shell&#039;&#039;&#039; folder on ubfs and should be in your path (copy it to your ~/bin directory if you don&#039;t already have it).&lt;br /&gt;
 srf2obj filename&lt;br /&gt;
&lt;br /&gt;
=Blender=&lt;br /&gt;
After creating the pial-surface hemispheres in FreeSurfer, import one file at a time into Blender. Once a file has been imported, a gray brain mesh should appear on the screen. To make the mesh easier to work with, go to the bottom left-hand side of the window and set the X, Y, and Z scale values to 1.0; the brain will then be much smaller and fit nicely within the provided grid. Use the rotate and transform tools on the left-hand side of the screen to move and reorient the brain. To the best of your ability, level the brain on all planes so that it matches its real-life orientation. After the first hemisphere is in place, repeat the same process with the other hemisphere. Once both halves have the same size and orientation, align them next to each other as closely as possible, then join them into one object: right-click on one half, hold Shift, and right-click on the other. With both hemispheres selected, press CTRL + J and the two halves will become one mesh.&lt;br /&gt;
===3D Printing===&lt;br /&gt;
The 3D printers at UB use a multitude of file formats, but all of the printers accept .stl files, and Blender can export any mesh directly to .stl.&lt;br /&gt;
Before printing, you will need to repair your STL files using the Netfabb online service (requires a free Autodesk account):&lt;br /&gt;
https://service.netfabb.com/login.php&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Neural_Networks_in_Python&amp;diff=1764</id>
		<title>Neural Networks in Python</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Neural_Networks_in_Python&amp;diff=1764"/>
		<updated>2019-04-26T14:35:09Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* A Simple Feedforward Classifier Network */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Before You Begin=&lt;br /&gt;
We&#039;ve been using the Keras API with the TensorFlow backend for our simulations. Our code often also uses the NumPy and scikit-learn libraries because I often steal code examples that happen to have been written using those libraries. When you get started, make sure that all the requisite libraries are installed. Pip (the Python package installer) has a command called &amp;lt;code&amp;gt;freeze&amp;lt;/code&amp;gt; that lists all your installed libraries. This list can be dumped to a text file and used to share the list of Python packages you&#039;ll need to run any of the code we&#039;ve developed:&lt;br /&gt;
 pip freeze &amp;gt; requirements.txt&lt;br /&gt;
Now any lab member can use &#039;&#039;&#039;requirements.txt&#039;&#039;&#039; to ensure they have the requisite Python packages installed:&lt;br /&gt;
 pip install -r requirements.txt&lt;br /&gt;
I&#039;ve got the current list of dependencies at ubfs/Scripts/Python/requirements.txt&lt;br /&gt;
=A Simple Feedforward Classifier Network=&lt;br /&gt;
Keras has predefined network templates, but the only one that I&#039;ve seen well-documented is the &#039;&#039;&#039;forward&#039;&#039;&#039; model. Fortunately, the forward model is well-suited to the classification task: it takes a set of input values, feeds the values forward over a set of layers, and checks the output values in a set of binary classification units. If the output decision is strictly binary (A vs B), then one output unit will suffice (since it&#039;s a zero-sum decision). If 3+ output categories are possible, then there needs to be an output unit for each possible category.&lt;br /&gt;
&lt;br /&gt;
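Before the full listing, the one-hot encoding step can be illustrated with plain NumPy. One caveat worth flagging: keras.utils.to_categorical expects zero-based labels in the range 0..num_classes-1, so category codes of 1, 2, 3 need to be shifted down by one before encoding. This is a minimal illustrative sketch, not part of the lab code:

```python
import numpy as np

def one_hot(codes, num_classes):
    """Map zero-based integer category codes to one-hot rows,
    mirroring what keras.utils.to_categorical does."""
    codes = np.asarray(codes, dtype=int)
    out = np.zeros((codes.size, num_classes))
    out[np.arange(codes.size), codes] = 1.0
    return out

# Category codes 1, 2, 3 are shifted to 0, 1, 2 before encoding
y = one_hot(np.array([1, 2, 3]) - 1, num_classes=3)
```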
 import numpy as np&lt;br /&gt;
 import keras&lt;br /&gt;
 import time&lt;br /&gt;
 import scipy.io as sio&lt;br /&gt;
 from keras.layers import Input,Dense,Dropout&lt;br /&gt;
 from keras.models import Model, Sequential, load_model&lt;br /&gt;
 from keras.regularizers import l1&lt;br /&gt;
 from keras.optimizers import Adadelta&lt;br /&gt;
 from sklearn.model_selection import StratifiedKFold as SKF&lt;br /&gt;
 from sklearn.model_selection import ShuffleSplit as SS&lt;br /&gt;
 &lt;br /&gt;
 import read_activations as reader&lt;br /&gt;
 &lt;br /&gt;
 seed=int(time.time())&lt;br /&gt;
 np.random.seed(seed)&lt;br /&gt;
 &lt;br /&gt;
 trainfile=&#039;av_hi.csv&#039;&lt;br /&gt;
 traindata=np.genfromtxt(trainfile, delimiter=&#039;,&#039;)&lt;br /&gt;
 testdata=traindata&lt;br /&gt;
 &lt;br /&gt;
 x_train=traindata[:,0:-1]&lt;br /&gt;
 y_cat_code=testdata[:,-1]&lt;br /&gt;
 #this next line converts the category codes of 1,2,3 into 3 separate output units so that&lt;br /&gt;
 #a code of 1 maps to {1,0,0}, 2 maps to {0,1,0} and 3 maps to {0,0,1}&lt;br /&gt;
 #(to_categorical expects zero-based labels, hence the -1)&lt;br /&gt;
 y_cat_train=keras.utils.to_categorical(y_cat_code-1, num_classes=3)&lt;br /&gt;
 &lt;br /&gt;
 encoding_dim1=96&lt;br /&gt;
 encoding_dim2=32&lt;br /&gt;
 input_trials, inodes = x_train.shape&lt;br /&gt;
 &lt;br /&gt;
 foldnumber=0&lt;br /&gt;
 fp=open(&#039;perflog.txt&#039;, &#039;w&#039;)&lt;br /&gt;
 hfp=open(&#039;cat_traininghist.txt&#039;, &#039;w&#039;)&lt;br /&gt;
 &lt;br /&gt;
 kfold = SKF(n_splits=8, shuffle=True, random_state=seed)&lt;br /&gt;
 for train, test in kfold.split(x_train, y_cat_code):&lt;br /&gt;
  foldnumber=foldnumber+1&lt;br /&gt;
  ada=keras.optimizers.Adadelta(lr=1, rho=0.9, decay=1e-6)&lt;br /&gt;
 &lt;br /&gt;
  #input layer: a series of 1000 floats from 0.0 to 1.0&lt;br /&gt;
  main_input=Input(shape=(inodes,), dtype=&#039;float32&#039;, name=&#039;input&#039;)&lt;br /&gt;
 &lt;br /&gt;
  #first dropout layer, initially, lets randomly drop 20% of values&lt;br /&gt;
  dropout_1=Dropout(0.2, input_shape=(inodes,), name=&#039;dense_1_drop&#039;)(main_input)&lt;br /&gt;
 &lt;br /&gt;
  #noise layer to mitigate overfitting&lt;br /&gt;
  noise_1=keras.layers.noise.GaussianNoise(0.05, input_shape=(inodes,), name=&#039;dense_1_noise&#039;)(dropout_1)&lt;br /&gt;
 &lt;br /&gt;
  #first hidden layer&lt;br /&gt;
  dense_1=Dense(encoding_dim1, activation=&#039;relu&#039;, input_dim=inodes,&lt;br /&gt;
        name=&#039;dense_1&#039;, kernel_regularizer=l1(0.00001))(noise_1)&lt;br /&gt;
 &lt;br /&gt;
  #second hidden layer&lt;br /&gt;
  dense_2=Dense(encoding_dim2, activation=&#039;relu&#039;, name=&#039;dense_2&#039;)(dense_1)&lt;br /&gt;
  #category output layer&lt;br /&gt;
  cat_output=Dense(3, activation=&#039;softmax&#039;, name=&#039;cat_output&#039;)(dense_2)&lt;br /&gt;
 &lt;br /&gt;
  #assemble the model&lt;br /&gt;
  model=Model(inputs=[main_input], outputs=[cat_output])&lt;br /&gt;
  #compile the model&lt;br /&gt;
  model.compile(optimizer=&#039;adadelta&#039;,&lt;br /&gt;
       loss=[&#039;categorical_crossentropy&#039;],&lt;br /&gt;
       metrics=[&#039;accuracy&#039;], loss_weights=[1])&lt;br /&gt;
 &lt;br /&gt;
  X_train=x_train[train]&lt;br /&gt;
  X_test=x_train[test]&lt;br /&gt;
  Y_cat_train=y_cat_train[train]&lt;br /&gt;
  Y_cat_test=y_cat_train[test]&lt;br /&gt;
 &lt;br /&gt;
  xrows, xcols =  X_test.shape&lt;br /&gt;
  history=model.fit(X_train, [Y_cat_train],&lt;br /&gt;
   validation_split=0.2,&lt;br /&gt;
   epochs=64,&lt;br /&gt;
   batch_size=32,&lt;br /&gt;
   shuffle=True,&lt;br /&gt;
   verbose=0)&lt;br /&gt;
  trainscore = model.evaluate(X_test, [Y_cat_test],  verbose=0)&lt;br /&gt;
  outstr=&#039;%.2f\n&#039; % trainscore[1]&lt;br /&gt;
  print(outstr)&lt;br /&gt;
  fp.write(outstr)&lt;br /&gt;
  fname=&#039;model_&#039; + format(foldnumber, &#039;02&#039;) + &#039;.h5&#039;&lt;br /&gt;
  model.save(fname) #save the trained model&lt;br /&gt;
 &lt;br /&gt;
  #training history&lt;br /&gt;
  vals=history.history[&#039;val_acc&#039;]&lt;br /&gt;
  strlist= [&#039;{:.3f}&#039;.format(x) for x in vals]&lt;br /&gt;
  hfp.write(&#039;\n&#039; + &#039;,&#039;.join(strlist))&lt;br /&gt;
 &lt;br /&gt;
 model.summary()&lt;br /&gt;
 fp.close()&lt;br /&gt;
 hfp.close()&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Connectivity_Toolbox&amp;diff=1693</id>
		<title>Connectivity Toolbox</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Connectivity_Toolbox&amp;diff=1693"/>
		<updated>2018-11-07T17:46:45Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Processing options */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The &#039;&#039;&#039;conn&#039;&#039;&#039; Functional Connectivity toolbox is a MATLAB-based program that hijacks SPM to do some data de-noising and remove potential artefacts that might skew your connectivity data. It is somewhat geared towards voxel-space analyses, but can support the surface-space analyses that we typically do in our lab. The walkthrough below is geared towards running &#039;&#039;&#039;conn&#039;&#039;&#039; on your FreeSurfer data.&lt;br /&gt;
&lt;br /&gt;
=Preconditions=&lt;br /&gt;
These instructions assume that you have functional and anatomical data organized in the manner that FreeSurfer expects, and that you have completed the autorecon steps on the T1 anatomical file and have generated the surface meshes for your target participants. These instructions will also assume that you have dropped some number of initial volumes from the BOLD runs (usually the first 4) if required. If you are working from the raw data archived from the CTRC scanner, this will be a required step; if you are working with BOLD data that has already been analyzed or preprocessed, this may already have been done, and you will &#039;&#039;not&#039;&#039; want to drop &#039;&#039;yet another&#039;&#039; initial 4 volumes!&lt;br /&gt;
&lt;br /&gt;
Because conn runs on top of SPM, you need to ensure that SPM is in your MATLAB path. Of course, conn also needs to be in your path. Verify both with the &amp;lt;code&amp;gt;which&amp;lt;/code&amp;gt; command in the MATLAB command window:&lt;br /&gt;
 &amp;gt;&amp;gt; which spm&lt;br /&gt;
 /opt/spm12/spm.m&lt;br /&gt;
 &amp;gt;&amp;gt; which conn&lt;br /&gt;
 /opt/conn/conn.m&lt;br /&gt;
&lt;br /&gt;
If you see the paths to both .m files, you can run conn from the MATLAB command window:&lt;br /&gt;
 &amp;gt;&amp;gt; conn&lt;br /&gt;
&lt;br /&gt;
=Setup in CONN=&lt;br /&gt;
My goal here isn&#039;t going to be to duplicate the [https://web.conn-toolbox.org/resources/manuals conn user manual], and we&#039;re not going through an entire functional analysis. But you have to start somewhere, so here we are...&lt;br /&gt;
&lt;br /&gt;
When you launch conn, you will get a nice gui window with a menu bar along the top. We want to start a new project:&lt;br /&gt;
&#039;&#039;&#039;Project &amp;gt; New (blank)&#039;&#039;&#039;&lt;br /&gt;
You will get a uiwindow that asks where you want to save the project details. You can save this .mat file wherever you like, though I would recommend somewhere near your $SUBJECTS_DIR for the project. Next you will have to specify the parameters for setting up the analysis.&lt;br /&gt;
&lt;br /&gt;
==Basic information==&lt;br /&gt;
The first uiwindow you need to populate has some general experimental setup information:&lt;br /&gt;
===Number of subjects===&lt;br /&gt;
Indicate here how many subjects you wish to process. This will tell conn how many datasets to expect. You don&#039;t even have to add everyone in the experiment to the analysis: if you have FreeSurfer data for 10 people but just want to process one or two of them, indicate that there are 1 or 2 subjects.&lt;br /&gt;
===Number of sessions===&lt;br /&gt;
By this, they mean functional runs. Our Imagery experiment has 2 visits with 6 functional runs each. If someone completed both visits, they would have 12 sessions worth of data. As with the subjects parameter, you don&#039;t have to use all the runs. You might wish to just analyze the 6 runs from the first visit, in which case you would say there are 6 sessions. The program will almost certainly crash if some runs are missing for some participants because it will expect the same amount of data for each participant. You will need to separately process individuals with missing runs.&lt;br /&gt;
===Repetition Time (seconds)===&lt;br /&gt;
This is our experimental TR. For the Imagery study, our TR is approximately 2 seconds (2.047, to be precise). For other data, the TR may be different, though a 2-second TR is common.&lt;br /&gt;
&lt;br /&gt;
===Acquisition type===&lt;br /&gt;
Leave it as the default, Continuous, which means the scanner is continuously acquiring BOLD data throughout the run. It&#039;s unlikely that you&#039;ll be analyzing data collected using sparse sampling.&lt;br /&gt;
&lt;br /&gt;
==Structural data==&lt;br /&gt;
Here is where you have to find the T1 and surface files for your participants, which are found in the &#039;mri&#039; directory for each participant. The program is clever enough to do pattern matching: if you pick a root directory and then indicate the file pattern for the structural data, it will find all the matching files. For example, if you left-click on your $SUBJECTS_DIR, type in T1.mgz, and click the &#039;&#039;&#039;Find&#039;&#039;&#039; button, it will identify the paths to all the structural files generated by FreeSurfer. From there, highlight the files you wish to work with and click the Select button (make sure the number of selected files matches the number of highlighted Subjects). If the T1 data have already been preprocessed with FreeSurfer, you should be asked if you wish to also import the segmentation files. Say &#039;&#039;&#039;Yes&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
One thing I will point out is that our FreeSurfer analyses are done in surface space, which doesn&#039;t care about where the particular voxels sit in relation to the cube of voxels surrounding the standard MNI template brain. For that reason, your structural and functional data will seldom be even close to aligned with the MNI brain. This isn&#039;t usually a problem, but I&#039;m now considering coregistration of the data (but not spatial normalization) as a routine preprocessing step. Stay tuned.&lt;br /&gt;
&lt;br /&gt;
==Functional data==&lt;br /&gt;
This works pretty much the same as selecting your structural data. Following our FreeSurfer conventions, you will be working with the f.nii.gz data (the zipped data will be automatically unzipped). Make sure that these data have already had the first 4 volumes dropped and that the slice timing and TR information has already been fixed in the header (see the page on [[Freesurfer BOLD files]]).&lt;br /&gt;
&lt;br /&gt;
==ROIs==&lt;br /&gt;
Here we indicate regional sources of variance that will be used to denoise our data. By default conn has selected Grey and White matter, CSF, atlas and networks. We wish to extract the average timeseries for the Grey matter and the Principal Component Analysis (PCA) signal from the White matter and CSF. We can remove the atlas and networks ROIs from the list of ROIs we care about so that we only have the first 3.&lt;br /&gt;
&lt;br /&gt;
There&#039;s a checkbox for regressing out covariates before regressing out the PCA noise covariates, but I don&#039;t see any reason to use that option.&lt;br /&gt;
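The PCA signal extraction mentioned above (the aCompCor approach conn uses for White matter and CSF) can be sketched conceptually in a few lines of NumPy: take the timeseries of every voxel in the noise ROI and keep the top few principal components as nuisance regressors. This is only an illustration of the idea, not conn's implementation, and the array shapes are assumptions:

```python
import numpy as np

def noise_components(ts, n_comp=5):
    """Return the top n_comp principal component timecourses of a
    (timepoints x voxels) noise-ROI timeseries matrix."""
    ts = ts - ts.mean(axis=0)          # center each voxel's timeseries
    u, s, vt = np.linalg.svd(ts, full_matrices=False)
    return u[:, :n_comp] * s[:n_comp]  # scaled component timecourses
```

These component timecourses are what conn would then regress out of the grey-matter signal during denoising.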
&lt;br /&gt;
==Conditions==&lt;br /&gt;
This is set up by default to only include a rest condition that spans the entire run, as if it&#039;s resting state data. But we&#039;re working with task data using a mixed design, where we might add the event or block onsets. The thing is, I don&#039;t know that we want to have conn do any funny business with each of our conditions because I&#039;m only using the program for preprocessing, and not analysis. It would be catastrophic to regress out the condition-related signal that you&#039;re actually trying to study! &lt;br /&gt;
&lt;br /&gt;
It turns out that you have to have at least 1 condition specified, so you can use the rest condition as a placeholder.&lt;br /&gt;
&lt;br /&gt;
==Covariates (1st-level)==&lt;br /&gt;
This would be timeseries related covariates, like motion parameters or perhaps if you wanted to exclude the onsets of button presses based on the runtime RT data. We can leave this blank unless we have a reason to get fancy.&lt;br /&gt;
&lt;br /&gt;
==Covariates (2nd level)==&lt;br /&gt;
These are participant-level covariates (e.g., age, task performance, etc.), though if you&#039;re studying individual differences, these may be precisely the factors you&#039;re interested in, so you wouldn&#039;t want to regress those out. The AllSubjects covariate is added by default, but that&#039;s fine, because it&#039;s just a vector of 1s, and therefore does not covary with anything.&lt;br /&gt;
&lt;br /&gt;
==Processing options==&lt;br /&gt;
Almost there!&lt;br /&gt;
===Enabled analyses===&lt;br /&gt;
Check that we want to do Voxel-to-Voxel analyses.&lt;br /&gt;
Other analyses may be checked by default; unless otherwise specified, we only care about Voxel-to-Voxel.&lt;br /&gt;
===Optional output files===&lt;br /&gt;
Check that we create confound-corrected beta-maps and confound-corrected time-series (our primary objective for using conn).&lt;br /&gt;
===Analysis space (voxel-level) ===&lt;br /&gt;
Select &#039;&#039;Volume: same as functionals&#039;&#039; so that conn doesn&#039;t resample the data. I made the mistake of saying that analyses would be done in FreeSurfer&#039;s fsaverage space, and it totally mangled the data by throwing away all the spatial information about the voxels (it produced a time-series cube, which is decidedly not brain-shaped).&lt;br /&gt;
&lt;br /&gt;
===Analysis mask (voxel-level)===&lt;br /&gt;
Select Implicit mask (subject-specific) because our subjects&#039; voxels are not guaranteed to fall within the template brainmask.&lt;br /&gt;
&lt;br /&gt;
===Second-Level Analyses===&lt;br /&gt;
Second-level analyses are another option in the options menu. Unless otherwise specified, you can ignore this option.&lt;br /&gt;
&lt;br /&gt;
===BOLD signal units===&lt;br /&gt;
Let&#039;s use Raw signal.&lt;br /&gt;
&lt;br /&gt;
==Preprocessing==&lt;br /&gt;
Click the green Preprocessing button to bring up a uidialogbox to select a processing pipeline. We wish to use the preprocessing pipeline for surface-based analyses in subject space. This will populate the window of processing steps. Note that we already have our structural segmentation from FreeSurfer, so you can go ahead and remove that last item from the prepopulated list. Click &#039;&#039;&#039;Start&#039;&#039;&#039;.&lt;br /&gt;
===Slice timing window===&lt;br /&gt;
You&#039;ll be asked for the slice order. Data from the CTRC used &#039;&#039;&#039;interleaved top-down&#039;&#039;&#039;&lt;br /&gt;
===Functional outlier detection===&lt;br /&gt;
If ART is in your pipeline (it likely will be if you did what I described above), here is where you set the threshold for detecting &amp;quot;bad&amp;quot; volumes.  Intermediate is probably perfectly reasonable.&lt;br /&gt;
&lt;br /&gt;
=Denoising (1st level)=&lt;br /&gt;
A bunch of data preprocessing will happen over the next 15 minutes or so (1 subject, 6 runs) after you click DONE on the setup process. A status window gives you updates on what steps in the preprocessing pipeline are executing. Once it&#039;s finished, you find yourself in the Denoising step.&lt;br /&gt;
&lt;br /&gt;
In this step, you can remove confounds from your data and see the effect on the distribution of connectivity values in the window area on the right. Before denoising, there is generally a positive bias on your connectivity values; afterwards, you should find the denoised data to be 0-centered with a narrower distribution across all your sessions. You can also see how the sessions differed within individuals, which is somewhat interesting. Anyway, at this point we select which confounds we want to remove from our data:&lt;br /&gt;
==White Matter==&lt;br /&gt;
Removes the WM timeseries signal, by default using a 5-component PCA decomposition. The number of components can be changed (try increasing it from 5 to 10) if the denoised distribution is still biased.&lt;br /&gt;
==CSF==&lt;br /&gt;
Likewise, this removes the CSF timeseries, also set to a 5-component PCA.&lt;br /&gt;
==Realignment==&lt;br /&gt;
Regresses out the 6 motion-correction parameters and, by default, their first-order derivatives (for a total of 12 regressors).&lt;br /&gt;
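The expansion from 6 motion parameters to 12 regressors can be sketched as follows. This is a conceptual NumPy illustration (backward differences with a zero-padded first row), not conn's code:

```python
import numpy as np

def motion_regressors(params):
    """Stack 6 motion parameters (timepoints x 6) with their
    first-order derivatives into a (timepoints x 12) matrix."""
    deriv = np.vstack([np.zeros((1, params.shape[1])),
                       np.diff(params, axis=0)])  # backward differences
    return np.hstack([params, deriv])
```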
==Scrubbing==&lt;br /&gt;
Regresses out whether each volume was flagged for outlier scrubbing by ART.&lt;br /&gt;
==Effect of Condition (e.g., rest, dummy, task)==&lt;br /&gt;
If we&#039;re just looking to use conn to remove noise from our data, and especially if we have task-related data, I don&#039;t think it makes sense to remove the experimental design from our data as a confound. To exclude it, highlight this entry and click the right angle bracket (&#039;&#039;&amp;amp;gt;&#039;&#039;) that appears between the columns.&lt;br /&gt;
&lt;br /&gt;
==Band-pass filter==&lt;br /&gt;
Removes signal that falls outside the high- and low-end of the frequency range. These values are expressed in Hz, or cycles per second. As a postdoc, I saw first-hand what happens if you don&#039;t give this some consideration. Suppose you have a block design where each block is, say, 40 seconds long. Any signal that is dependent on the block will appear at a frequency that matches the block frequency. If it takes 40 seconds for 1 block to complete, then the block frequency, in cycles per second, is 1/40 = .025 Hz. The default band-pass filter window goes from .008 Hz (1/125 seconds) to .09 Hz (1/11 seconds). Our 40-second block signal would be safe within this range, but if you were to change the low-frequency number to .03 (1/30 seconds), then you&#039;d end up filtering out your block signal, which may not be something you want to do.&lt;br /&gt;
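The arithmetic in the paragraph above can be sanity-checked in a few lines of Python. This is a hypothetical helper using the default window described above (0.008 to 0.09 Hz):

```python
def in_bandpass(block_seconds, low_hz=0.008, high_hz=0.09):
    """Return True if a block design's fundamental frequency
    survives a band-pass filter with the given bounds (Hz)."""
    f = 1.0 / block_seconds
    return high_hz >= f >= low_hz

# A 40-second block cycles at 1/40 = 0.025 Hz
assert in_bandpass(40)                   # inside the default window
assert not in_bandpass(40, low_hz=0.03)  # raising the low end filters it out
```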
&lt;br /&gt;
==Click the Done Button==&lt;br /&gt;
Do it to begin the denoising pipeline. Go make yourself a sandwich.&lt;br /&gt;
&lt;br /&gt;
=Obtaining Denoised Time-Series Data=&lt;br /&gt;
Your project folder will contain a data/ folder full of output files produced during all the above steps. Of interest are the series of .matc files. &#039;&#039;&#039;conn&#039;&#039;&#039; ships with a conversion utility, &amp;lt;code&amp;gt;conn_matc2nii&amp;lt;/code&amp;gt; that will convert the .matc file for a session into a .nii file:&lt;br /&gt;
 &amp;gt;&amp;gt; conn_matc2nii(&#039;DATA_Subject001_Session001.matc&#039;, false)&lt;br /&gt;
 &amp;gt;&amp;gt; %the second parameter is optional and indicates whether to display a waitbar. By default, this is set to false.&lt;br /&gt;
&lt;br /&gt;
After you execute this command in MATLAB, you&#039;ll end up with a file with the same name, except with a .nii extension. &lt;br /&gt;
Now, at this point, you probably started off with an f.nii.gz file, and now have a DATA_Subject&#039;xxx&#039;_Session&#039;yyy&#039;.nii file. In order to look at this newly denoised data in FreeSurfer, you will need to run it through the FS-FAST preproc-sess step.  There&#039;s a good chance the source directory already has an f.nii.gz file, as well as a series of fmcpr.&#039;&#039;${direction}&#039;&#039;.sm&#039;&#039;${smoothing}&#039;&#039;.fsaverage.?h.nii.gz files. Assuming you want to hold on to these, I&#039;d make a backup directory in each of the run subdirectories, and move all files to the backup directory (you can keep any .par files here). After you&#039;ve backed everything up, move the new denoised files into the appropriate directories and rename them to f.nii. When you redo the preproc-sess step, make sure you do not apply slice-time correction, because this has already been applied to the denoised data:&lt;br /&gt;
&lt;br /&gt;
 preproc-sess -s FS_0231 -per-run -surface fsaverage lhrh -mni305 -fwhm 4 -fsd bold&lt;br /&gt;
&lt;br /&gt;
The departure from the other wiki entry on preproc-sess is that we do not use the -sliceorder flag on the denoised data. Other than that, just process the denoised functional data like you would any other data. When done, you&#039;ll have a set of fmcpr.${smooth}.fsaverage.?h.nii.gz files. They will no longer have the word &#039;down&#039; in the filename, so you may have to modify some of your later analysis or data-processing scripts accordingly; be on the lookout for &#039;&#039;file not found&#039;&#039; errors, and if they pop up, first check whether your command or script was expecting a filename with the slice order in it. Of course, another option is to manually rename the preprocessed data when preproc-sess completes, so that the preprocessed denoised data still includes the slice order in the name. I&#039;m not sure how big a problem this minor change in filename will be for our various scripts.&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Connectivity_Toolbox&amp;diff=1692</id>
		<title>Connectivity Toolbox</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Connectivity_Toolbox&amp;diff=1692"/>
		<updated>2018-11-07T17:41:41Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Structural data */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The &#039;&#039;&#039;conn&#039;&#039;&#039; Functional Connectivity toolbox is a MATLAB-based program that hijacks SPM to do some data de-noising and remove potential artefacts that might skew your connectivity data. It is somewhat geared towards voxel-space analyses, but can support the surface-space analyses that we typically do in our lab. The walkthrough below is geared towards running &#039;&#039;&#039;conn&#039;&#039;&#039; on your FreeSurfer data.&lt;br /&gt;
&lt;br /&gt;
=Preconditions=&lt;br /&gt;
These instructions assume that you have functional and anatomical data organized in the manner that FreeSurfer expects, and that you have completed the autorecon steps on the T1 anatomical file and have generated the surface meshes for your target participants. These instructions will also assume that you have dropped some number of initial volumes from the BOLD runs (usually the first 4) if required. If you are working from the raw data archived from the CTRC scanner, this will be a required step; if you are working with BOLD data that has already been analyzed or preprocessed, this may already have been done, and you will &#039;&#039;not&#039;&#039; want to drop &#039;&#039;yet another&#039;&#039; initial 4 volumes!&lt;br /&gt;
&lt;br /&gt;
Because conn runs on top of SPM, you need to ensure that SPM is in your MATLAB path. Of course, conn also needs to be in your path. Verify both with the &amp;lt;code&amp;gt;which&amp;lt;/code&amp;gt; command in the MATLAB command window:&lt;br /&gt;
 &amp;gt;&amp;gt; which spm&lt;br /&gt;
 /opt/spm12/spm.m&lt;br /&gt;
 &amp;gt;&amp;gt; which conn&lt;br /&gt;
 /opt/conn/conn.m&lt;br /&gt;
&lt;br /&gt;
If you see the paths to both .m files, you can run conn from the MATLAB command window:&lt;br /&gt;
 &amp;gt;&amp;gt; conn&lt;br /&gt;
&lt;br /&gt;
=Setup in CONN=&lt;br /&gt;
My goal here isn&#039;t to duplicate the [https://web.conn-toolbox.org/resources/manuals conn user manual], and we&#039;re not going through an entire functional analysis. But you have to start somewhere, so here we are...&lt;br /&gt;
&lt;br /&gt;
When you launch conn, you will get a nice gui window with a menu bar along the top. We want to start a new project:&lt;br /&gt;
&#039;&#039;&#039;Project &amp;gt; New (blank)&#039;&#039;&#039;&lt;br /&gt;
You will get a uiwindow that asks where you want to save the project details. You can save this .mat file wherever you like, though I would recommend somewhere near your $SUBJECTS_DIR for the project. Next, you will have to specify parameters for setting up the analysis.&lt;br /&gt;
&lt;br /&gt;
==Basic information==&lt;br /&gt;
The first uiwindow you need to populate has some general experimental setup information:&lt;br /&gt;
===Number of subjects===&lt;br /&gt;
Indicate here how many subjects you wish to process. This tells conn how many datasets to expect. You don&#039;t have to add everyone in the experiment to the analysis: if you have FreeSurfer data for 10 people but just want to process one or two of them, indicate that there are 1 or 2 subjects.&lt;br /&gt;
===Number of sessions===&lt;br /&gt;
By this, they mean functional runs. Our Imagery experiment has 2 visits with 6 functional runs each. If someone completed both visits, they would have 12 sessions&#039; worth of data. As with the subjects parameter, you don&#039;t have to use all the runs. You might wish to analyze just the 6 runs from the first visit, in which case you would say there are 6 sessions. The program will almost certainly crash if some runs are missing for some participants, because it expects the same amount of data for everyone. You will need to process individuals with missing runs separately.&lt;br /&gt;
===Repetition Time (seconds)===&lt;br /&gt;
This is our experimental TR. For the Imagery study, our TR is approximately 2 seconds (2.047, to be precise). For other data, the TR may be different, though a 2-second TR is common.&lt;br /&gt;
&lt;br /&gt;
===Acquisition type===&lt;br /&gt;
Leave it as the default, Continuous, which means the scanner is continuously acquiring BOLD data throughout the run. It&#039;s unlikely that you&#039;ll be analyzing data collected using sparse sampling.&lt;br /&gt;
&lt;br /&gt;
==Structural data==&lt;br /&gt;
Here is where you have to find the T1 and surface files for your participants, which are found in the &#039;mri&#039; directory for each participant. The program is clever enough to do pattern matching, so if you pick a root directory and then indicate the file pattern for the structural data, it will find all the matching files. For example, if you left-click on your $SUBJECTS_DIR, type in T1.mgz, and click the &#039;&#039;&#039;Find&#039;&#039;&#039; button, it will identify the paths to all the structural files generated by FreeSurfer. From there, highlight the files you wish to work with and click the Select button (make sure the number of selected files matches the number of highlighted Subjects). If the T1 data have already been preprocessed with FreeSurfer, you should be asked whether you also wish to import the segmentation files. Say &#039;&#039;&#039;Yes&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
One thing I will point out is that our FreeSurfer analyses are done in surface space, which doesn&#039;t care about where the particular voxels sit in relation to the cube of voxels surrounding the standard MNI template brain. For that reason, your structural and functional data will seldom be even close to aligned with the MNI brain. This isn&#039;t usually a problem, but I&#039;m now considering coregistration of the data (but not spatial normalization) as a routine preprocessing step. Stay tuned.&lt;br /&gt;
&lt;br /&gt;
==Functional data==&lt;br /&gt;
This works pretty much the same as selecting your structural data. Following our FreeSurfer conventions, you will be working with the f.nii.gz data (the zipped data will be automatically unzipped). Make sure that these data have already had the first 4 volumes dropped and that the slice timing and TR information has already been fixed in the header (see the page on [[Freesurfer BOLD files]]).&lt;br /&gt;
&lt;br /&gt;
==ROIs==&lt;br /&gt;
Here we indicate regional sources of variance that will be used to denoise our data. By default, conn has selected Grey matter, White matter, CSF, atlas, and networks. We wish to extract the average timeseries from Grey matter, and the Principal Component Analysis (PCA) signal from White matter and CSF. Remove the atlas and networks ROIs from the list so that only the first 3 remain.&lt;br /&gt;
&lt;br /&gt;
There&#039;s a checkbox for regressing out covariates before regressing out the PCA noise covariates, but I don&#039;t see any reason to use that option.&lt;br /&gt;
&lt;br /&gt;
==Conditions==&lt;br /&gt;
This is set up by default to only include a rest condition that spans the entire run, as if it&#039;s resting state data. But we&#039;re working with task data using a mixed design, where we might add the event or block onsets. The thing is, I don&#039;t know that we want to have conn do any funny business with each of our conditions because I&#039;m only using the program for preprocessing, and not analysis. It would be catastrophic to regress out the condition-related signal that you&#039;re actually trying to study! &lt;br /&gt;
&lt;br /&gt;
It turns out that you have to have at least 1 condition specified, so you can use the rest condition as a placeholder.&lt;br /&gt;
&lt;br /&gt;
==Covariates (1st-level)==&lt;br /&gt;
These would be timeseries-related covariates, like motion parameters, or button-press onsets derived from the runtime RT data if you wanted to exclude those. We can leave this blank unless we have a reason to get fancy.&lt;br /&gt;
&lt;br /&gt;
==Covariates (2nd level)==&lt;br /&gt;
These are participant-level covariates (e.g., age, task performance, etc.), though if you&#039;re studying individual differences, these may be precisely the factors you&#039;re interested in, so you wouldn&#039;t want to regress those out. The AllSubjects covariate is added by default, but that&#039;s fine, because it&#039;s just a vector of 1s and therefore does not covary with anything.&lt;br /&gt;
&lt;br /&gt;
==Processing options==&lt;br /&gt;
Almost there!&lt;br /&gt;
===Enabled analyses===&lt;br /&gt;
Check the box for Voxel-to-Voxel analyses.&lt;br /&gt;
===Optional output files===&lt;br /&gt;
Check the boxes to create confound-corrected beta-maps and confound-corrected time-series (our primary objective for using conn).&lt;br /&gt;
===Analysis space (voxel-level) ===&lt;br /&gt;
Select &#039;&#039;Volume: same as functionals&#039;&#039; so that conn doesn&#039;t resample the data. I made the mistake of saying that analyses would be done in FreeSurfer&#039;s fsaverage space, and it totally mangled the data by throwing away all the spatial information about the voxels (it produced a time-series cube, which is decidedly not brain-shaped).&lt;br /&gt;
&lt;br /&gt;
===Analysis mask (voxel-level)===&lt;br /&gt;
Select Implicit mask (subject-specific), because our subjects&#039; voxels are not guaranteed to fall within the template brainmask.&lt;br /&gt;
===BOLD signal units===&lt;br /&gt;
Let&#039;s use Raw signal.&lt;br /&gt;
&lt;br /&gt;
==Preprocessing==&lt;br /&gt;
Click the green Preprocessing button to bring up a uidialogbox to select a processing pipeline. We wish to use the preprocessing pipeline for surface-based analyses in subject space. This will populate the window of processing steps. Note that we already have our structural segmentation from FreeSurfer, so you can go ahead and remove that last item from the prepopulated list. Click &#039;&#039;&#039;Start&#039;&#039;&#039;.&lt;br /&gt;
===Slice timing window===&lt;br /&gt;
You&#039;ll be asked for the slice order. Data from the CTRC used &#039;&#039;&#039;interleaved top-down&#039;&#039;&#039;.&lt;br /&gt;
===Functional outlier detection===&lt;br /&gt;
If ART is in your pipeline (it likely will be if you did what I described above), here is where you set the threshold for detecting &amp;quot;bad&amp;quot; volumes.  Intermediate is probably perfectly reasonable.&lt;br /&gt;
&lt;br /&gt;
=Denoising (1st level)=&lt;br /&gt;
A bunch of data preprocessing will happen over the next 15 minutes or so (1 subject, 6 runs) after you click DONE on the setup process. A status window gives you updates on what steps in the preprocessing pipeline are executing. Once it&#039;s finished, you find yourself in the Denoising step.&lt;br /&gt;
&lt;br /&gt;
In this step, you can remove confounds from your data and see the effect on the distribution of connectivity values in the window area on the right. Before denoising, there is generally a positive bias on your connectivity values; after denoising, the data should be 0-centered with a narrower distribution across all your sessions. You can also see how the sessions differed within individuals, which is somewhat interesting. Anyway, at this point we select which confounds we want to remove from our data:&lt;br /&gt;
==White Matter==&lt;br /&gt;
Removes the WM timeseries signal, by default using a 5-component PCA decomposition. The number of components can be changed (try increasing it from 5 to 10) if the denoised distribution is still biased.&lt;br /&gt;
==CSF==&lt;br /&gt;
Likewise, this removes the CSF timeseries, also with a 5-component PCA.&lt;br /&gt;
==Realignment==&lt;br /&gt;
Regresses out the 6 motion-correction parameters and, by default, their first-order derivatives (for a total of 12 parameters).&lt;br /&gt;
==Scrubbing==&lt;br /&gt;
Regresses out volumes that were flagged for outlier scrubbing by ARTRepair.&lt;br /&gt;
==Effect of Condition (e.g., rest, dummy, task)==&lt;br /&gt;
If we&#039;re just looking to use conn to remove noise from our data, and especially if we have task-related data, I don&#039;t think it makes sense to remove the experimental design from our data as a confound, so highlight this entry and click the right-angle bracket (&#039;&#039;&amp;amp;gt;&#039;&#039;) that appears between the columns to remove it.&lt;br /&gt;
&lt;br /&gt;
==Band-pass filter==&lt;br /&gt;
Removes signal that falls outside the high- and low-end of the frequency range. These values are expressed in Hz, or cycles per second. As a postdoc, I saw firsthand what happens if you don&#039;t give this some consideration. Suppose you have a block design where each block is, say, 40 seconds long. Any signal that depends on the block will appear at a frequency matching the block frequency. If it takes 40 seconds for 1 block to complete, then the block frequency, in cycles per second, is 1/40 = .025 Hz. The default band-pass filter window goes from .008 Hz (1/125 seconds) to .09 Hz (~1/11 seconds). Our 40-second block signal would be safe within this range, but if you were to change the low-frequency number to .03 (1/30 seconds), you&#039;d end up filtering out your block signal, which may not be something you want to do.&lt;br /&gt;
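The arithmetic above can be sanity-checked with a short shell snippet (the 40-second block length and the window edges are the example values from the text, not anything read from conn itself):&lt;br /&gt;

```shell
# Sanity check: does an assumed 40 s block frequency fall inside the
# default band-pass window (0.008-0.09 Hz)? Values are illustrative.
block_secs=40
freq=$(awk -v s="$block_secs" 'BEGIN { printf "%.4f", 1/s }')
echo "block frequency: $freq Hz"
awk -v f="$freq" 'BEGIN {
    lo = 0.008; hi = 0.09
    if ((f >= lo) * (hi >= f)) print "inside the band-pass window: preserved"
    else print "outside the band-pass window: filtered out"
}'
```

Rerunning with lo raised to 0.03 flips the verdict, which is exactly the filtered-out-block-signal scenario described above.&lt;br /&gt;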
&lt;br /&gt;
==Click the Done Button==&lt;br /&gt;
Do it to begin the denoising pipeline. Go make yourself a sandwich.&lt;br /&gt;
&lt;br /&gt;
=Obtaining Denoised Time-Series Data=&lt;br /&gt;
Your project folder will contain a data/ folder full of output files produced during all the above steps. Of interest are the series of .matc files. &#039;&#039;&#039;conn&#039;&#039;&#039; ships with a conversion utility, &amp;lt;code&amp;gt;conn_matc2nii&amp;lt;/code&amp;gt;, that will convert the .matc file for a session into a .nii file:&lt;br /&gt;
 &amp;gt;&amp;gt; conn_matc2nii(&#039;DATA_Subject001_Session001.matc&#039;, false)&lt;br /&gt;
 &amp;gt;&amp;gt; %the second parameter is optional and indicates whether to display a waitbar. By default, this is set to false.&lt;br /&gt;
&lt;br /&gt;
After you execute this command in MATLAB, you&#039;ll end up with a file with the same name, except with a .nii extension. &lt;br /&gt;
Now, at this point, you probably started off with an f.nii.gz file and now have a DATA_Subject&#039;xxx&#039;_Session&#039;yyy&#039;.nii file. To look at this newly denoised data in FreeSurfer, you will need to run it through the FS-FAST preproc-sess step. There&#039;s a good chance the source directory already has an f.nii.gz file, as well as a series of fmcpr.&#039;&#039;${direction}&#039;&#039;.sm&#039;&#039;${smoothing}&#039;&#039;.fsaverage.?h.nii.gz files. Assuming you want to hold on to these, I&#039;d make a backup directory in each of the run subdirectories and move all those files there (the .par files can stay where they are). After you&#039;ve backed everything up, move the new denoised files into the appropriate directories and rename them to f.nii. When you redo the preproc-sess step, make sure you do not apply slice-time correction, because this has already been applied to the denoised data:&lt;br /&gt;
&lt;br /&gt;
 preproc-sess -s FS_0231 -per-run -surface fsaverage lhrh -mni305 -fwhm 4 -fsd bold&lt;br /&gt;
&lt;br /&gt;
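The backup-and-rename shuffle can be sketched as follows; this demo runs in a throwaway sandbox, and every filename here is an illustrative example rather than your actual data:&lt;br /&gt;

```shell
# Demo of backing up existing preprocessed files, then promoting the
# denoised conn output to be the new f.nii. All names are examples.
run_dir=$(mktemp -d)/bold/001
mkdir -p "$run_dir"
cd "$run_dir"
# stand-ins for the files a run directory might already contain:
touch f.nii.gz fmcpr.up.sm4.fsaverage.lh.nii.gz DATA_Subject001_Session001.nii
mkdir -p backup
mv f.nii.gz fmcpr.* backup/               # stash the originals (.par files can stay put)
mv DATA_Subject001_Session001.nii f.nii   # denoised output becomes the new f.nii
```

You would repeat this in each run subdirectory before rerunning preproc-sess without the slice-order flag.&lt;br /&gt;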
The departure from the other wiki entry on preproc-sess is that we do not use the -sliceorder flag on the denoised data. Otherwise, process the denoised functional data as you would any other data. When done, you&#039;ll have a set of fmcpr.${smooth}.fsaverage.?h.nii.gz files (they will no longer have the word &#039;down&#039; in the filename, so you may need to modify some of your later analysis or data processing scripts accordingly -- be on the lookout for &#039;&#039;file not found&#039;&#039; errors, and if they pop up, the first thing to check is whether your command or script was expecting a filename containing the slice order). Of course, another option would be to manually rename the preprocessed data once preproc-sess completes, so that the preprocessed denoised data still includes the slice order in the name. I&#039;m not sure how big a problem this minor filename change will be for our various scripts.&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Importing_Time_Series_Into_MATLAB&amp;diff=1647</id>
		<title>Importing Time Series Into MATLAB</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Importing_Time_Series_Into_MATLAB&amp;diff=1647"/>
		<updated>2018-09-18T18:19:39Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Functionally-assisted loading */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The next step is to load your data matrices. Assuming the time series are plaintext files containing only structured numerical data (e.g., as produced by the gettimecourses.sh script), you can use the &amp;lt;code&amp;gt;load&amp;lt;/code&amp;gt; function to import these data.&lt;br /&gt;
= Functionally-assisted loading =&lt;br /&gt;
The steps described in this subsection have been simplified by the inevitable development of a MATLAB function, &amp;lt;code&amp;gt;loadFSTS.m&amp;lt;/code&amp;gt;, that can be found in the ubfs Scripts/Matlab folder. If you have copied it to a folder in your MATLAB path, it can be invoked thus:&lt;br /&gt;
 M=loadFSTS(); %A dialog box will prompt you to select the files you want to open&lt;br /&gt;
One convenient feature is the ability to omit specified columns when loading the data (e.g., the &#039;&#039; &#039;unknown&#039; &#039;&#039; region or the &#039;&#039; &#039;corpuscallosum&#039; &#039;&#039;). Two optional paired arguments, &#039;&#039;ldropregions&#039;&#039; and &#039;&#039;rdropregions&#039;&#039;, are matched with arrays of the columns (regions) to be dropped.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE: If using ROI&#039;s, you likely only want to remove column 1 (unknown).&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
 %Omit columns 1 and 5 from both hemispheres&lt;br /&gt;
 ldropregions=[1 5];&lt;br /&gt;
 rdropregions=[1 5];&lt;br /&gt;
 M=loadFSTS(&#039;ldropregions&#039;, ldropregions, &#039;rdropregions&#039;, rdropregions);&lt;br /&gt;
Of course, you can and should read the help documentation to find out the full capabilities of the function, which I assure you is very clever. But the above functionality will probably get you through 99.99% of the time. Rather than load each time series for each hemisphere into a separate array, loadFSTS will return a single cell array, where each cell contains the merged lh &amp;amp;amp; rh data for each session. Thus, if you open up the lh/rh time series values for 6 fMRI runs, &#039;&#039;&#039;&#039;&#039;M&#039;&#039;&#039;&#039;&#039; will be a 6&amp;amp;times;1 cell array, where &#039;&#039;&#039;&#039;&#039;M&#039;&#039;&#039;&#039;&#039;{i} is a &#039;&#039;t timepoints&#039;&#039; &amp;amp;times; &#039;&#039;r regions&#039;&#039; matrix (&#039;&#039;r regions&#039;&#039; is the sum of the regions retained for lh and rh data sets)&lt;br /&gt;
&lt;br /&gt;
If each matrix &#039;&#039;&#039;&#039;&#039;M&#039;&#039;&#039;&#039;&#039;{i} is the same size, you can convert the cell array into a 3D matrix, which can facilitate a number of tasks. There are a number of ways to do this. For example:&lt;br /&gt;
 new_M=cat(3, M{:}); %expands the cell array and concatenates each 2D matrix stored in M along a 3rd dimension&lt;br /&gt;
 %new_M is now a 3D matrix of size &#039;&#039;t&#039;&#039; &amp;amp;times; &#039;&#039;r&#039;&#039; regions &amp;amp;times; &#039;&#039;i&#039;&#039; input series&lt;br /&gt;
&lt;br /&gt;
= Batch FSTS =&lt;br /&gt;
 list=load(&#039;participants.txt&#039;);&lt;br /&gt;
 for p=1:length(list)&lt;br /&gt;
    %dynamically figure out subject ID&lt;br /&gt;
    subid=sprintf(&#039;FS_%d&#039;,list(p));&lt;br /&gt;
    dirname=[pwd() filesep subid filesep &#039;bold&#039;];&lt;br /&gt;
    lfiles=dir([dirname filesep &#039;*lh*.wav.txt&#039;]);&lt;br /&gt;
    lfilenames={lfiles.name};&lt;br /&gt;
    rfiles=dir([dirname filesep &#039;*rh*.wav.txt&#039;]);&lt;br /&gt;
    rfilenames={rfiles.name};&lt;br /&gt;
    for n=1:length(lfilenames)&lt;br /&gt;
        lfilenames{n}=fullfile(dirname, lfilenames{n});&lt;br /&gt;
    end&lt;br /&gt;
    for n=1:length(rfilenames)&lt;br /&gt;
        rfilenames{n}=fullfile(dirname, rfilenames{n});&lt;br /&gt;
    end&lt;br /&gt;
    &lt;br /&gt;
    [M, hemis]=loadFSTS(&#039;lnames&#039;, lfilenames, &#039;rnames&#039;, rfilenames);&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
=Bulk Functionally-Assisted Loading=&lt;br /&gt;
The structure of the Freesurfer directories and some of the existing helper files can be leveraged in a Matlab script to automatically call loadFSTS on all the .wav.txt time series data in a SUBJECTS_DIR. The following code parses the contents of the &#039;&#039;&#039;&#039;&#039;subjects&#039;&#039;&#039;&#039;&#039; file (used by preproc-sess, et al.) and a &#039;&#039;&#039;&#039;&#039;runs&#039;&#039;&#039;&#039;&#039; file that can be placed in each subject&#039;s &#039;&#039;&#039;bold&#039;&#039;&#039; directory. The code will iterate through each subject to navigate to that subject&#039;s functional data (where the .wav.txt files should be located). There, it will parse the &#039;&#039;&#039;runs&#039;&#039;&#039; file to string together the names of all the .wav.txt files that should be found there. These dynamically-generated file names can be passed as parameters to the &#039;&#039;&#039;&#039;&#039;loadFSTS()&#039;&#039;&#039;&#039;&#039; function. The .mat files that are produced will be saved in SUBJECTS_DIR.&lt;br /&gt;
&lt;br /&gt;
 %% General Config %%&lt;br /&gt;
 rootdir=pwd();&lt;br /&gt;
 annot=&#039;myaparc_125&#039;;&lt;br /&gt;
 pattern=&#039;fmcpr.down.sm4&#039;;&lt;br /&gt;
 ldrop=[1 5];&lt;br /&gt;
 rdrop=[1 5];&lt;br /&gt;
 &lt;br /&gt;
 %% 1. Get the subjects to process %%&lt;br /&gt;
 fid=fopen(&#039;subjects&#039;);&lt;br /&gt;
 Subjects={};&lt;br /&gt;
 while 1&lt;br /&gt;
    s = fgetl(fid);&lt;br /&gt;
    if ~ischar(s)&lt;br /&gt;
        break&lt;br /&gt;
    end&lt;br /&gt;
    Subjects{length(Subjects)+1}=s;&lt;br /&gt;
 end&lt;br /&gt;
 fclose(fid);&lt;br /&gt;
 %% 2. For each subject, get the runs to process %%&lt;br /&gt;
 for s=1:length(Subjects)&lt;br /&gt;
    subj=Subjects{s};&lt;br /&gt;
    dirname=[subj filesep &#039;bold&#039;];&lt;br /&gt;
    cd(dirname);&lt;br /&gt;
    &lt;br /&gt;
    fid=fopen(&#039;runs&#039;);&lt;br /&gt;
    Runs={};&lt;br /&gt;
    while 1&lt;br /&gt;
        r = fgetl(fid);&lt;br /&gt;
        if ( ~ischar(r) || length(r)&amp;lt;1 )&lt;br /&gt;
            break&lt;br /&gt;
        end&lt;br /&gt;
        Runs{length(Runs)+1}=r;&lt;br /&gt;
    end&lt;br /&gt;
    fclose(fid);&lt;br /&gt;
    % Cross the subject with each of the runs&lt;br /&gt;
    lnames=cellfun(@(x) [subj &#039;_lh_&#039; annot &#039;_&#039; x &#039;.&#039; pattern &#039;.wav.txt&#039;], Runs, &#039;UniformOutput&#039;, false);&lt;br /&gt;
    rnames=cellfun(@(x) [subj &#039;_rh_&#039; annot &#039;_&#039; x &#039;.&#039; pattern &#039;.wav.txt&#039;], Runs, &#039;UniformOutput&#039;, false);&lt;br /&gt;
    [M, hemis]=loadFSTS(&#039;lnames&#039;, lnames, &#039;rnames&#039;, rnames, &#039;ldropregions&#039;, ldrop, &#039;rdropregions&#039;, rdrop);&lt;br /&gt;
    save(fullfile(rootdir, [subj &#039;.&#039; annot &#039;.timeseries.mat&#039;]), &#039;M&#039;, &#039;hemis&#039;, &#039;ldrop&#039;, &#039;rdrop&#039;);&lt;br /&gt;
    cd(rootdir);&lt;br /&gt;
 end&lt;br /&gt;
&lt;br /&gt;
[[Category: Time Series]]&lt;br /&gt;
[[Category: FreeSurfer]]&lt;br /&gt;
[[Category: Functional Connectivity]]&lt;br /&gt;
[[Category: MATLAB]]&lt;br /&gt;
[[Category: MATLAB functions]]&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Configure_mkanalysis-sess&amp;diff=1589</id>
		<title>Configure mkanalysis-sess</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Configure_mkanalysis-sess&amp;diff=1589"/>
		<updated>2018-06-19T14:57:03Z</updated>

		<summary type="html">&lt;p&gt;Erica: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The first step in the first-level analysis is to configure analyses and contrasts. This step describes what preprocessing stages should have been run as well as the parameters needed to construct a design matrix (no data are analyzed yet). This is done with mkanalysis-sess.&lt;br /&gt;
&lt;br /&gt;
A good way to make it clear how you are configuring your analysis is to declare important parameters as shell environment variables, and then use them when calling mkanalysis-sess:&lt;br /&gt;
&lt;br /&gt;
 SMOOTHING=4; #FWHM smoothing kernel; rule of thumb is 2 x VoxelSize&lt;br /&gt;
 REFEVENTDUR=0.8; #How long are the events?&lt;br /&gt;
 TR=2.047; #What is the TR&lt;br /&gt;
 NCONDITIONS=3; #How many conditions in the par files&lt;br /&gt;
 SURFACE=fsaverage; #generally valid options are &#039;self&#039; or &#039;fsaverage&#039;&lt;br /&gt;
 HEMIS=( lh rh ); #for automatically looping over both hemispheres&lt;br /&gt;
 PARFILE=LDT.par; #What&#039;s the name of the .par files in your bold/ directories?&lt;br /&gt;
 ANROOT=&amp;quot;LDT.sm&amp;quot; #base name for the analysis directories&lt;br /&gt;
&lt;br /&gt;
 for hemi in &amp;quot;${HEMIS[@]}&amp;quot;&lt;br /&gt;
 do&lt;br /&gt;
 mkanalysis-sess \&lt;br /&gt;
  -fsd bold \&lt;br /&gt;
  -surface ${SURFACE} ${hemi} \&lt;br /&gt;
  -fwhm ${SMOOTHING}  \&lt;br /&gt;
  -event-related \&lt;br /&gt;
  -paradigm ${PARFILE} \&lt;br /&gt;
  -nconditions ${NCONDITIONS} \&lt;br /&gt;
  -timewindow 24 \&lt;br /&gt;
  -spmhrf 2 \&lt;br /&gt;
  -polyfit 2 \&lt;br /&gt;
  -mcextreg \&lt;br /&gt;
  -TR ${TR} \&lt;br /&gt;
  -refeventdur ${REFEVENTDUR} \&lt;br /&gt;
  -analysis ${ANROOT}${SMOOTHING}.${hemi} \&lt;br /&gt;
  -per-run -force&lt;br /&gt;
 done&lt;br /&gt;
The above code could be saved as a script in your ~/bin directory (e.g., mkanalysis.sh) and modified as required for different datasets or parametric choices.&lt;br /&gt;
&lt;br /&gt;
This step creates analysis directories in your $SUBJECTS_DIR containing single files that contain all the information needed for the next step. Your $SUBJECTS_DIR should be your project root directory.&lt;br /&gt;
&lt;br /&gt;
Note that there is also a directive to skip volumes; however, it doesn&#039;t seem to do what we expect, so we just drop those volumes from the data entirely and modify the event onsets to account for the dropped volumes.&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Participant_Screening&amp;diff=1522</id>
		<title>Participant Screening</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Participant_Screening&amp;diff=1522"/>
		<updated>2018-03-07T15:21:41Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Screening Questions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When a potential participant contacts the lab to participate in one of the ongoing studies, ask them the following questions:&lt;br /&gt;
&lt;br /&gt;
== Screening Questions ==&lt;br /&gt;
#Are you right handed?&lt;br /&gt;
#Is English your dominant language? &lt;br /&gt;
#Are you taking any medications that affect your central nervous system? (e.g. Ritalin, Valium, beta blockers, etc.)&lt;br /&gt;
#Are you or may you be pregnant?&lt;br /&gt;
#Do you have metallic implants, such as pacemakers, replacement joints, or cochlear implants?&lt;br /&gt;
#Do you have an aneurysm clip or implanted neural stimulator?&lt;br /&gt;
#Do you have a vagus nerve stimulator (VNS)?&lt;br /&gt;
#Do you have any shrapnel injuries?&lt;br /&gt;
#Do you have any ocular foreign bodies (metal shavings, e.g., associated with welding)?&lt;br /&gt;
#Do you have a history of metal in head or eyes or other parts of the body?&lt;br /&gt;
#Do you have tattoos that contain metal (often found in black inks)?&lt;br /&gt;
#Do you have untreatable claustrophobia otherwise requiring anesthesia or antianxiety medications that may alter your ability to perform the tasks during fMRI scanning?&lt;br /&gt;
#For safety purposes the MRI scanner has a weight limit of 350 lbs. Do you fall under this limit?&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Participant_Screening&amp;diff=1521</id>
		<title>Participant Screening</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Participant_Screening&amp;diff=1521"/>
		<updated>2018-03-07T15:20:20Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Screening Questions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When a potential participant contacts the lab to participate in one of the ongoing studies, ask them the following questions:&lt;br /&gt;
&lt;br /&gt;
== Screening Questions ==&lt;br /&gt;
#Are you right handed?&lt;br /&gt;
#Are you taking any medications that affect your central nervous system? (e.g. Ritalin, Valium, beta blockers, etc.)&lt;br /&gt;
#Are you or may you be pregnant?&lt;br /&gt;
#Do you have metallic implants, such as pacemakers, replacement joints, or cochlear implants?&lt;br /&gt;
#Do you have an aneurysm clip or implanted neural stimulator?&lt;br /&gt;
#Do you have a vagus nerve stimulator (VNS)?&lt;br /&gt;
#Do you have any shrapnel injuries?&lt;br /&gt;
#Do you have any ocular foreign bodies (metal shavings, e.g., associated with welding)?&lt;br /&gt;
#Do you have a history of metal in head or eyes or other parts of the body?&lt;br /&gt;
#Do you have tattoos that contain metal (often found in black inks)?&lt;br /&gt;
#Do you have untreatable claustrophobia otherwise requiring anesthesia or antianxiety medications that may alter your ability to perform the tasks during fMRI scanning?&lt;br /&gt;
#For safety purposes the MRI scanner has a weight limit of 350 lbs. Do you fall under this limit?&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Main_Page&amp;diff=1481</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Main_Page&amp;diff=1481"/>
		<updated>2018-02-27T21:20:02Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Data Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the CCN Lab Wiki.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:cpicard.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Terminal Commands &amp;amp; Other Lab Stuff ==&lt;br /&gt;
* [[New Research Staff]]&lt;br /&gt;
* [[ubmount | UBFS and ubmount]]&lt;br /&gt;
* [[SSH | Using SSH]]&lt;br /&gt;
* [[Synching Scripts]]&lt;br /&gt;
* [[Connecting to CCR]]&lt;br /&gt;
* [[Mess Ups | Times when we messed up]]&lt;br /&gt;
* [[Highlight Reel | Times when we rocked it]]&lt;br /&gt;
* [[Lab Email]]&lt;br /&gt;
* [[Excel Formulas]]&lt;br /&gt;
* [[Troubleshooting]]&lt;br /&gt;
* [[BASH Tricks]]&lt;br /&gt;
* [[Participant Screening]]&lt;br /&gt;
&lt;br /&gt;
== Data Analysis ==&lt;br /&gt;
* [[Downloading CTRC Data]]&lt;br /&gt;
* [[Behavioral Analyses]]&lt;br /&gt;
* [[FreeSurfer | FreeSurfer Pipeline]]&lt;br /&gt;
* [[SPM | SPM Pipeline]]&lt;br /&gt;
* [[Time Series Analysis]]&lt;br /&gt;
* [[Network Analyses]]&lt;br /&gt;
&lt;br /&gt;
== Manuscript Preparation ==&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Rendering_MRIcron Brain Rendering in MRIcron (SPM)]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Producing_Tables_of_Coordinates_(SPM) Producing Tables of Coordinates (SPM)]&lt;br /&gt;
* [[Annotation Coordinates| Extracting XYZ coordinates of .annot labels]]&lt;br /&gt;
* [http://colorbrewer2.org/ Color schemes (e.g., color keys for graphs, experiment conditions in fMRI renderings, etc.)]&lt;br /&gt;
** Above link yoinked from [http://sites.bu.edu/cnrlab/lab-resources/ BU&#039;s CNR Lab]&lt;br /&gt;
&lt;br /&gt;
== Experiment A to Z (not necessarily in alphabetical order) ==&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php/Category:MATLAB_functions MATLAB Functions]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Participant_Triage Participant Triage]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Pre-fMRI_Scanning_Protocol Pre-fMRI Scanning Protocol]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Participant_Instructions Participant Instructions]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MRI_Prep MRI Prep (Prior to scan date)]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MRI_Setup MRI Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MIKENET MIKENET Neural Network C Library Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=TensorFlow TensorFlow OpenSource Python Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Reading_Experiment_IDs IDs for Reading Experiment BOLD]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php/CCR Center for Computational Research (CCR)]&lt;br /&gt;
&lt;br /&gt;
== MediaWiki - Guides for Getting started ==&lt;br /&gt;
&lt;br /&gt;
* [//www.mediawiki.org/wiki/Help:Formatting Useful Formatting Guide]&lt;br /&gt;
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Consult the [//meta.wikimedia.org/wiki/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Main_Page&amp;diff=1480</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Main_Page&amp;diff=1480"/>
		<updated>2018-02-27T21:19:14Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Data Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the CCN Lab Wiki.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:cpicard.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Terminal Commands &amp;amp; Other Lab Stuff ==&lt;br /&gt;
* [[New Research Staff]]&lt;br /&gt;
* [[ubmount | UBFS and ubmount]]&lt;br /&gt;
* [[SSH | Using SSH]]&lt;br /&gt;
* [[Synching Scripts]]&lt;br /&gt;
* [[Connecting to CCR]]&lt;br /&gt;
* [[Mess Ups | Times when we messed up]]&lt;br /&gt;
* [[Highlight Reel | Times when we rocked it]]&lt;br /&gt;
* [[Lab Email]]&lt;br /&gt;
* [[Excel Formulas]]&lt;br /&gt;
* [[Troubleshooting]]&lt;br /&gt;
* [[BASH Tricks]]&lt;br /&gt;
* [[Participant Screening]]&lt;br /&gt;
&lt;br /&gt;
== Data Analysis ==&lt;br /&gt;
* [[Downloading CTRC Data]]&lt;br /&gt;
* [[Behavioral Analyses]]&lt;br /&gt;
* [[FreeSurfer | FreeSurfer Pipeline]]&lt;br /&gt;
* [[SPM | SPM Pipeline]]&lt;br /&gt;
* [[Time Series Analysis]]&lt;br /&gt;
* [[Network Analyses]]&lt;br /&gt;
&lt;br /&gt;
== Manuscript Preparation ==&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Rendering_MRIcron Brain Rendering in MRIcron (SPM)]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Producing_Tables_of_Coordinates_(SPM) Producing Tables of Coordinates (SPM)]&lt;br /&gt;
* [[Annotation Coordinates| Extracting XYZ coordinates of .annot labels]]&lt;br /&gt;
* [http://colorbrewer2.org/ Color schemes (e.g., color keys for graphs, experiment conditions in fMRI renderings, etc.)]&lt;br /&gt;
** Above link yoinked from [http://sites.bu.edu/cnrlab/lab-resources/ BU&#039;s CNR Lab]&lt;br /&gt;
&lt;br /&gt;
== Experiment A to Z (not necessarily in alphabetical order) ==&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php/Category:MATLAB_functions MATLAB Functions]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Participant_Triage Participant Triage]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Pre-fMRI_Scanning_Protocol Pre-fMRI Scanning Protocol]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Participant_Instructions Participant Instructions]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MRI_Prep MRI Prep (Prior to scan date)]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MRI_Setup MRI Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MIKENET MIKENET Neural Network C Library Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=TensorFlow TensorFlow OpenSource Python Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Reading_Experiment_IDs IDs for Reading Experiment BOLD]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php/CCR Center for Computational Research (CCR)]&lt;br /&gt;
&lt;br /&gt;
== MediaWiki - Guides for Getting started ==&lt;br /&gt;
&lt;br /&gt;
* [//www.mediawiki.org/wiki/Help:Formatting Useful Formatting Guide]&lt;br /&gt;
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Consult the [//meta.wikimedia.org/wiki/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Participant_Screening&amp;diff=1471</id>
		<title>Participant Screening</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Participant_Screening&amp;diff=1471"/>
		<updated>2018-02-21T16:01:45Z</updated>

		<summary type="html">&lt;p&gt;Erica: Created page with &amp;quot;When a potential participant contacts the lab to participate in one of the on going studies ask them the following questions:  == Screening Questions == #Are you right handed?...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;When a potential participant contacts the lab to participate in one of the ongoing studies, ask them the following questions:&lt;br /&gt;
&lt;br /&gt;
== Screening Questions ==&lt;br /&gt;
#Are you right handed?&lt;br /&gt;
#Are you or may you be pregnant?&lt;br /&gt;
#Do you have metallic implants, such as a pacemaker, replacement joint, or cochlear implant?&lt;br /&gt;
#Do you have an aneurysm clip or implanted neural stimulator?&lt;br /&gt;
#Do you have a vagus nerve stimulator (VNS)?&lt;br /&gt;
#Do you have any shrapnel injuries?&lt;br /&gt;
#Do you have any ocular foreign bodies (metal shavings, e.g., associated with welding)?&lt;br /&gt;
#Do you have a history of metal in head or eyes or other parts of the body?&lt;br /&gt;
#Do you have tattoos that contain metal (often found in black inks)?&lt;br /&gt;
#Do you have untreatable claustrophobia otherwise requiring anesthesia or antianxiety medications that may alter your ability to perform the tasks during fMRI scanning?&lt;br /&gt;
#For safety purposes, the MRI scanner has a weight limit of 350 lbs. Do you fall under this limit?&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Main_Page&amp;diff=1470</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Main_Page&amp;diff=1470"/>
		<updated>2018-02-21T15:50:00Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Terminal Commands &amp;amp; Other Lab Stuff */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;strong&amp;gt;Welcome to the CCN Lab Wiki.&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:cpicard.jpg]]&lt;br /&gt;
&lt;br /&gt;
== Terminal Commands &amp;amp; Other Lab Stuff ==&lt;br /&gt;
* [[New Research Staff]]&lt;br /&gt;
* [[ubmount | UBFS and ubmount]]&lt;br /&gt;
* [[SSH | Using SSH]]&lt;br /&gt;
* [[Synching Scripts]]&lt;br /&gt;
* [[Connecting to CCR]]&lt;br /&gt;
* [[Mess Ups | Times when we messed up]]&lt;br /&gt;
* [[Highlight Reel | Times when we rocked it]]&lt;br /&gt;
* [[Lab Email]]&lt;br /&gt;
* [[Excel Formulas]]&lt;br /&gt;
* [[Troubleshooting]]&lt;br /&gt;
* [[BASH Tricks]]&lt;br /&gt;
* [[Participant Screening]]&lt;br /&gt;
&lt;br /&gt;
== Data Analysis ==&lt;br /&gt;
* [[Downloading CRTC Data]]&lt;br /&gt;
* [[Behavioral Analyses]]&lt;br /&gt;
* [[FreeSurfer | FreeSurfer Pipeline]]&lt;br /&gt;
* [[SPM | SPM Pipeline]]&lt;br /&gt;
* [[Time Series Analysis]]&lt;br /&gt;
* [[Network Analyses]]&lt;br /&gt;
&lt;br /&gt;
== Manuscript Preparation ==&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Rendering_MRIcron Brain Rendering in MRIcron (SPM)]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Producing_Tables_of_Coordinates_(SPM) Producing Tables of Coordinates (SPM)]&lt;br /&gt;
* [[Annotation Coordinates| Extracting XYZ coordinates of .annot labels]]&lt;br /&gt;
* [http://colorbrewer2.org/ Color schemes (e.g., color keys for graphs, experiment conditions in fMRI renderings, etc.)]&lt;br /&gt;
** Above link yoinked from [http://sites.bu.edu/cnrlab/lab-resources/ BU&#039;s CNR Lab]&lt;br /&gt;
&lt;br /&gt;
== Experiment A to Z (not necessarily in alphabetical order) ==&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php/Category:MATLAB_functions MATLAB Functions]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Participant_Triage Participant Triage]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Pre-fMRI_Scanning_Protocol Pre-fMRI Scanning Protocol]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Participant_Instructions Participant Instructions]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MRI_Prep MRI Prep (Prior to scan date)]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MRI_Setup MRI Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=MIKENET MIKENET Neural Network C Library Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=TensorFlow TensorFlow OpenSource Python Setup]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php?title=Reading_Experiment_IDs IDs for Reading Experiment BOLD]&lt;br /&gt;
* [//ccnlab.psy.buffalo.edu/wiki/index.php/CCR Center for Computational Research (CCR)]&lt;br /&gt;
&lt;br /&gt;
== MediaWiki - Guides for Getting started ==&lt;br /&gt;
&lt;br /&gt;
* [//www.mediawiki.org/wiki/Help:Formatting Useful Formatting Guide]&lt;br /&gt;
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:Configuration_settings Configuration settings list]&lt;br /&gt;
* [//www.mediawiki.org/wiki/Special:MyLanguage/Manual:FAQ MediaWiki FAQ]&lt;br /&gt;
* [https://lists.wikimedia.org/mailman/listinfo/mediawiki-announce MediaWiki release mailing list]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Consult the [//meta.wikimedia.org/wiki/Help:Contents User&#039;s Guide] for information on using the wiki software.&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Mess_Ups&amp;diff=1469</id>
		<title>Mess Ups</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Mess_Ups&amp;diff=1469"/>
		<updated>2018-02-20T17:55:54Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Chris&amp;#039; Mess-Ups */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Honourable Mentions ==&lt;br /&gt;
=== Kali ===&lt;br /&gt;
Doesn&#039;t like Hawaiian pizza&lt;br /&gt;
&lt;br /&gt;
== Chris&#039; Mess-Ups ==&lt;br /&gt;
#Didn&#039;t get the MPRAGE files.&lt;br /&gt;
#Hard-coded a script to always run the same experiment file, ignoring the run-time parameters.&lt;br /&gt;
#Bothered the Freesurfer group when the problem was that the FSL environment variables weren&#039;t set in my .bashrc file on wernickesarea (but were correct on the other computers)&lt;br /&gt;
#Didn&#039;t book Psychonomics hotel in time. Inadvertently booked us in a dodgy part of Boston near a couple homeless shelters and possibly a methadone clinic.&lt;br /&gt;
#*We walked to the conference each morning.&lt;br /&gt;
#Claimed that the regions of the reading network were &amp;quot;literally&amp;quot; apples and oranges.&lt;br /&gt;
&lt;br /&gt;
== Greg&#039;s Mess-Ups ==&lt;br /&gt;
# Ran the trainer twice.&lt;br /&gt;
# Trained the model with the wrong example file.&lt;br /&gt;
# Tried to train the retina&lt;br /&gt;
&lt;br /&gt;
== Erica&#039;s Mess-Ups ==&lt;br /&gt;
&lt;br /&gt;
[[File:loader3.gif]]&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=Downloading_CRTC_Data&amp;diff=1467</id>
		<title>Downloading CRTC Data</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=Downloading_CRTC_Data&amp;diff=1467"/>
		<updated>2018-02-15T19:18:32Z</updated>

		<summary type="html">&lt;p&gt;Erica: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Our fMRI data are typically posted to the CTRC Owncloud website within one or two days after acquisition (possibly longer if the scan was on a Friday). This page describes how to download and organize the data for the first time, and then upload the organized raw data files to the UBFS network directory, where any team member can then access them. Note that the data stored on UBFS are intended to be &#039;&#039;&#039;copied&#039;&#039;&#039; to your local computer hard drive, where you can mangle them to your heart&#039;s content without worrying about affecting anyone else&#039;s projects or corrupting our only copy of the data.&lt;br /&gt;
&lt;br /&gt;
These directions assume that you have the URL and login/password to access the Owncloud website.&lt;br /&gt;
&lt;br /&gt;
SemCat URL: http://tinyurl.com/blobs-catburgers&lt;br /&gt;
&lt;br /&gt;
LDT URL: http://tinyurl.com/lexitron5000&lt;br /&gt;
&lt;br /&gt;
==Identify the Subjects==&lt;br /&gt;
Internally, we use our own set of Subject IDs, which are generated when a participant enters our experiment. The CTRC does the same thing, meaning that our subject numbers do not line up with those at the CTRC. Moreover, the same participant scanned multiple times will have a new CTRC subject number for each scan. It is imperative that you be able to translate between our subject numbers and those assigned to the fMRI data by the CTRC!&lt;br /&gt;
&lt;br /&gt;
The timestamps for each zipped data set are visible through the Owncloud interface. As a rule of thumb, the most recently acquired data will also be the one with the most recent timestamp. If there is really only one candidate file, it should be fairly easy to identify the files you want. If you need to download older data, or if there are multiple candidates, you can consult the fMRI Log File Google Spreadsheet, which lists the CTRC subject number along with our own internal subject number.&lt;br /&gt;
&lt;br /&gt;
==Download the Data==&lt;br /&gt;
Each fMRI session has two associated zip files. You can ignore the ones labeled &#039;&#039;study_files&#039;&#039;, as those are stored in a proprietary data format and are useless to us. Instead, you will want to download the NIFTI files to your local hard drive. &lt;br /&gt;
&lt;br /&gt;
==File Organization==&lt;br /&gt;
Once the download has finished, you can unzip the files, and copy them to a local directory. Give this directory a name corresponding to our local subject number. For example, if the data belongs to our subject 201, create directory 0201, and move the files into the new directory. The CTRC assigns sequential numbers and other file information to each of these files, and so we generally rename them according to our NIFTI file organization and naming convention.&lt;br /&gt;
&lt;br /&gt;
An important step here is to check whether all expected files are present. We are generally expecting to see the following key files:&lt;br /&gt;
# A file with &#039;&#039;&#039;MPRAGE&#039;&#039;&#039; appearing somewhere in the filename. This is the high-resolution (~1mm voxels) anatomical image. It will likely begin with the prefix 500 or 501, and should be roughly 20MB in size.&lt;br /&gt;
#*Create a subdirectory called mri and move this file to the new subfolder, renaming it to &amp;quot;MPRAGE.nii.gz&amp;quot; (change only the first part of the filename; leave the extension as is).&lt;br /&gt;
#A set of files with &#039;&#039;&#039;BOLD&#039;&#039;&#039; appearing somewhere in the filename. These files may be prefixed with numbers in the range of 700, 800, 900, etc.&lt;br /&gt;
#*These are the most likely to be problematic, because sometimes a run will have to be aborted (e.g., the experiment malfunctioned), meaning there will be some dud files.&lt;br /&gt;
#*These files should each be roughly 50MB in size&lt;br /&gt;
#*There should be exactly one of these files for each run that the participant completed. If there are extra files, it is likely that a run was restarted. If there are fewer, this is more troubling, since that means we are missing some data, and this needs to be followed up on with the CTRC (it also makes it ambiguous which runs have missing data)&lt;br /&gt;
#*Our LDT and Semantic Imagery experiments have 6 runs. If we have 6 full runs, all is good in the hood.&lt;br /&gt;
#*If you are confident that you know that the data are complete, go ahead and create a directory called &#039;&#039;bold&#039;&#039;. Under that directory, create run directories called 001, 002, ... etc. (one for each run). Move each of the BOLD files to these run directories, and rename to f.nii.gz&lt;br /&gt;
#*For some sessions, participants complete a test run. Make sure that when you are moving the BOLD files into their corresponding folders, you are not starting with these test-run BOLD files.&lt;br /&gt;
#The scan session may include a T2 or FLAIR image, which helps identify gray matter. The unzipped files may include a file with &#039;&#039;&#039;FLAIR&#039;&#039;&#039; or &#039;&#039;&#039;3D_T2&#039;&#039;&#039; in the name. Expect a FLAIR file to be approximately 20MB, and the 3D_T2 file to be over 100MB in size.&lt;br /&gt;
#*If these files exist, copy them to the /mri subdirectory you created in Step 1&lt;br /&gt;
*All other files not already moved into the /mri or /bold/xxx directories can go into a third subdirectory called /other&lt;br /&gt;
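The organization steps above can be sketched as a few shell commands. This is a minimal, hypothetical example: the CTRC-style filenames and the subject number 0201 are made up for illustration, not real data.

```shell
# Organize a hypothetical CTRC download into the mri/bold/other layout.
# Filenames and the subject number are illustrative examples only.
set -e
mkdir -p download
touch download/501_MPRAGE_t1.nii.gz \
      download/701_BOLD_run1.nii.gz \
      download/801_BOLD_run2.nii.gz \
      download/601_localizer.nii.gz
mkdir -p 0201/mri 0201/bold/001 0201/bold/002 0201/other
mv download/501_MPRAGE_t1.nii.gz 0201/mri/MPRAGE.nii.gz   # rename, keeping the extension
mv download/701_BOLD_run1.nii.gz 0201/bold/001/f.nii.gz   # one run directory per BOLD file
mv download/801_BOLD_run2.nii.gz 0201/bold/002/f.nii.gz
mv download/601_localizer.nii.gz 0201/other/              # everything else goes to /other
```

Before running anything like this on real data, confirm against the fMRI log spreadsheet that each BOLD file really corresponds to the run number you are assigning it.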
&lt;br /&gt;
===Multisession Data===&lt;br /&gt;
Our Semantic Imagery experiment has two sets of data for some participants. We have been adding the _SESS_1 and _SESS_2 suffix to the subject numbers for these data, and otherwise treating the data as though they belong to completely different subjects.  For example, the raw data folder contains &#039;&#039;0183_SESS_1&#039;&#039; and &#039;&#039;0183_SESS_2&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
If a team member wishes to analyze the data for both sessions at once, it will be incumbent on that individual to organize and possibly rename their own local copies of the data. For example, each session of Semantic Imagery should have 6 runs of BOLD files. When analyzing both sessions, as I copy over the SESS_1 data, I rename the MPRAGE file to MPRAGE_1, but otherwise leave the file/folder names intact. Then when I copy over the MPRAGE and the BOLD files for SESS_2, I rename the MPRAGE to MPRAGE_2, and I rename the BOLD folders from 001, 002, ..., 006 to 007, 008, ..., 012, reflecting the fact that the individual has 12 total runs of BOLD data.&lt;br /&gt;
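The SESS_2 run renumbering described above (001..006 becoming 007..012) can be scripted. This is a hypothetical sketch; the directory names are examples, and the first loop only fabricates stand-in files so the renumbering loop has something to copy.

```shell
# Set up stand-in SESS_2 run directories (illustrative paths only).
set -e
i=1
while [ $i -le 6 ]; do
  d=$(printf '0183_SESS_2/bold/%03d' "$i")
  mkdir -p "$d" && touch "$d/f.nii.gz"
  i=$((i + 1))
done

# Copy each SESS_2 run into a combined bold directory, shifted by 6:
# 001 -> 007, 002 -> 008, ..., 006 -> 012.
i=1
while [ $i -le 6 ]; do
  old=$(printf '%03d' "$i")
  new=$(printf '%03d' $((i + 6)))
  mkdir -p "combined/bold/$new"
  cp "0183_SESS_2/bold/$old/f.nii.gz" "combined/bold/$new/f.nii.gz"
  i=$((i + 1))
done
```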
&lt;br /&gt;
Note also that preprocessing multisession data requires a bit of extra work to make sure that the files are all coregistered to the same space. This has nothing to do with how the files are organized; it&#039;s just mentioned here to put it on your radar.&lt;br /&gt;
&lt;br /&gt;
==File Archiving==&lt;br /&gt;
At this point, the raw data have been organized according to the directory structure below:&lt;br /&gt;
*Subject Number&lt;br /&gt;
**mri&lt;br /&gt;
**bold&lt;br /&gt;
***001&lt;br /&gt;
***002&lt;br /&gt;
***etc.&lt;br /&gt;
**other&lt;br /&gt;
If this is correct, you can now upload the data to the raw data archive on ubfs/openfmri/ that corresponds to the project this data belongs to. Make sure that your folder name matches the format of the existing folders you are moving the data to. &lt;br /&gt;
&lt;br /&gt;
==Runtime Data==&lt;br /&gt;
This is a good occasion to make sure that the MATLAB runtime files for this particular session have also been taken off the Experimenter laptop and are also in the project directory on UBFS.&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=FreeSurfer&amp;diff=1466</id>
		<title>FreeSurfer</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=FreeSurfer&amp;diff=1466"/>
		<updated>2018-02-14T21:43:16Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Using mri_convert */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Freesurfer is a surface-based fMRI processing and analysis package written for the Unix environment (including Mac OS X, which is based on Unix). The neuroanatomical organization of the brain has the grey matter on the outside surface of the cortex. Aside from the ventral/medial subcortical structures, the interior volume of the brain is predominantly white matter axonal tracts. Because neurons metabolize in the cell body, rather than along the axons, we can focus on the grey matter found in the cortical surface, because any fMRI signal changes detected in the white matter should theoretically be noise. This is the motivation for surface-based analyses of fMRI.&lt;br /&gt;
&lt;br /&gt;
Freesurfer has a rigid set of assumptions concerning how the input data is organized and labeled. The following instructions will help avoid any violations of these assumptions that might derail your Freesurfer fMRI processing pipeline.&lt;br /&gt;
&lt;br /&gt;
These instructions assume that Freesurfer has already been installed and configured on your workstation.&lt;br /&gt;
&lt;br /&gt;
== Organization ==&lt;br /&gt;
Freesurfer data for a collection of subjects is organized into a single project directory, called $SUBJECTS_DIR. Try this in the Linux terminal:&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
It is likely that you will see something like the following, which is the sample &#039;bert&#039; dataset that comes with a Freesurfer installation:&lt;br /&gt;
 /usr/local/freesurfer/subjects&lt;br /&gt;
&lt;br /&gt;
Let us assume that you have been collecting data for some lexical decision task experiment. All the data for all subjects should be stored in a single directory, which you  will set as your $SUBJECTS_DIR variable. For example, if we keep all our data in ~/ubfs/cpmcnorg/openfmri/LDT, then we would type the following:&lt;br /&gt;
 SUBJECTS_DIR=~/ubfs/cpmcnorg/openfmri/LDT&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
Another trick we can do is to use the Unix &amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt; command to set the SUBJECTS_DIR to be whatever directory we happen to be in at the moment. The following series of commands will do the same as the previous example command:&lt;br /&gt;
 cd ~&lt;br /&gt;
 cd ubfs/cpmcnorg/openfmri&lt;br /&gt;
 cd LDT&lt;br /&gt;
 SUBJECTS_DIR=`pwd`&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
The first line above, &amp;lt;code&amp;gt;cd ~&amp;lt;/code&amp;gt;, moves you to your home directory. The second line moves you from your home directory to the ubfs network folder containing several sets of experiments. The third line of code moves you into the subdirectory containing the LDT data. The fourth line sets the SUBJECTS_DIR environment variable to whatever gets printed out when you execute the &amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt; command (the &amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt; command &#039;&#039;&#039;&amp;lt;u&amp;gt;p&amp;lt;/u&amp;gt;&#039;&#039;&#039;rints the current &#039;&#039;&#039;&amp;lt;u&amp;gt;w&amp;lt;/u&amp;gt;&#039;&#039;&#039;orking &#039;&#039;&#039;&amp;lt;u&amp;gt;d&amp;lt;/u&amp;gt;&#039;&#039;&#039;irectory). As a result, the current working directory becomes the new SUBJECTS_DIR after you execute this command, as you can see when you execute the last line of code. Note that in the &amp;lt;code&amp;gt;SUBJECTS_DIR=`pwd`&amp;lt;/code&amp;gt; line, those are back-quotes, which you might find on your keyboard sharing a key with the ~ character. &amp;lt;code&amp;gt;`&amp;lt;/code&amp;gt; is not the same character as &amp;lt;code&amp;gt;&#039;&amp;lt;/code&amp;gt;. When you enclose a command in a pair of back-quotes, you are telling the operating system something along the lines of &amp;quot;this is a command that I want you to execute first, before using its output to figure out the rest of this business.&amp;quot;&lt;br /&gt;
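As a quick demonstration of the back-quote behavior described above (using /tmp as a stand-in for a real data directory), note that the modern `$(...)` syntax performs the same command substitution and is easier to read:

```shell
# Both forms run the enclosed command first and substitute its output.
cd /tmp
SUBJECTS_DIR=`pwd`          # back-quote form, as in the example above
echo "$SUBJECTS_DIR"
SUBJECTS_DIR=$(pwd)         # equivalent $(...) form; easier to nest and to read
echo "$SUBJECTS_DIR"
```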
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Subject directory organization ===&lt;br /&gt;
Data for each subject should be kept in their own directory. Moreover, different types of data (i.e., anatomical/structural or bold/functional) are kept in separate subdirectories. The basic directory structure for each participant (&#039;session&#039; in Freesurfer terminology) looks like this (see also [[Freesurfer_BOLD_files| Freesurfer BOLD files]]):&lt;br /&gt;
&lt;br /&gt;
*SUBJECTS_DIR&lt;br /&gt;
**Subject_001&lt;br /&gt;
***mri&lt;br /&gt;
***bold&lt;br /&gt;
****001&lt;br /&gt;
****002&lt;br /&gt;
****003&lt;br /&gt;
****004&lt;br /&gt;
****005&lt;br /&gt;
****006&lt;br /&gt;
Copy the data for the participant from the /raw subdirectory for the project in the ubfs folder. You will only need the /mri and the /bold directories. If you are processing data for multiple sessions for a single participant, you may need to rename some of the files as you copy them over; otherwise, you will end up overwriting files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt; Note that all the functional data (in the &#039;bold&#039; subdirectory) are stored in sequentially numbered folders (3-digits), and all are given the same name (&#039;f.nii&#039; or &#039;f.nii.gz&#039;). This seems to be a requirement. It may be possible to circumvent this requirement, but this is a relatively minor concern at this time. &amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By the end, your data should look like this:&lt;br /&gt;
&lt;br /&gt;
*SUBJECTS_DIR&lt;br /&gt;
**Subject_001&lt;br /&gt;
***mri&lt;br /&gt;
****orig.nii.gz (or MPRAGE.nii)&lt;br /&gt;
***bold&lt;br /&gt;
****001/f.nii.gz&lt;br /&gt;
****002/f.nii.gz&lt;br /&gt;
****etc.&lt;br /&gt;
&lt;br /&gt;
Note that the .nii.gz file extension indicates that this is a gzipped NIFTI file. You can use &amp;lt;code&amp;gt;gunzip&amp;lt;/code&amp;gt; to unzip the files, but this isn&#039;t really necessary unless you are going to manipulate these files in MATLAB. We have figured out ways to do everything for FreeSurfer in the BASH shell, so you may as well just leave them as-is unless you have a compelling reason (or compulsion) to unzip them.&lt;br /&gt;
&lt;br /&gt;
== Structural Preprocessing ==&lt;br /&gt;
The structural mri file (orig.nii) is transformed through a series of computationally-intensive steps invoked by the recon-all Freesurfer program. Recon-all is designed to execute all the steps in series without intervention; in practice, however, it seems preferable to execute the process in a series of smaller groups of steps and check the output in between. This is because the process is automated using computational algorithms, but if one step doesn&#039;t execute correctly, everything that follows will be compromised. The steps take many hours to complete, so inspecting the progress along the way can save many hours of processing time redoing steps that were done incorrectly.&lt;br /&gt;
&lt;br /&gt;
===Tip: Keeping Track===&lt;br /&gt;
If you are working on multiple subjects, or if you are sharing preprocessing duties with someone else, you can coordinate or track your progress by creating a LOG_FS_xxxx.txt file. A template file can be found in ubfs in the LDT folder. After completing each step of processing (both structural and functional), mark an x next to the corresponding line. It can be really easy to forget to do this, but it will ensure that others accessing the folder on ubfs will know what&#039;s been done. This file should be a resource for you anyway, since it contains a list of commands. More elaboration can be found on the wiki.&lt;br /&gt;
&lt;br /&gt;
Since most of the folders on ubfs do not contain this file, it is hard to know what steps of functional analysis have been done. I am working on writing down what specific files/folders are created at each step to use as markers. I will fill this in as I process a new participant. For now, here is the template:&lt;br /&gt;
&lt;br /&gt;
*slicer (a window-based graphical program) or mri_convert (a command-line program)&lt;br /&gt;
**MPRAGE.mgz&lt;br /&gt;
*autorecon1&lt;br /&gt;
**brainmask.mgz&lt;br /&gt;
*autorecon2&lt;br /&gt;
*autorecon3&lt;br /&gt;
*parfiles&lt;br /&gt;
*mkanalysis&lt;br /&gt;
*mkcontrast&lt;br /&gt;
*preproc&lt;br /&gt;
*selxavg3&lt;br /&gt;
 &lt;br /&gt;
Make sure to update ubfs! It&#039;s a bummer to run a participant through all the steps and then realize the files were just sitting completed on someone&#039;s computer locally.&lt;br /&gt;
&lt;br /&gt;
To help you through the analysis, there is a file called commands in the LDT folder of ubfs. It has all the commands you need to go from raw data to time courses (no GLM, though). This is a good skeleton for reference, but it will still be necessary to consult the wiki to understand what&#039;s going on at each step and to address any errors.&lt;br /&gt;
&lt;br /&gt;
=== Image File Format ===&lt;br /&gt;
The first thing that you will need to do is convert the orig.nii file to .mgz format. This can be done using the graphic-based [[Slicer]] program, or in the terminal using &amp;lt;code&amp;gt;mri_convert&amp;lt;/code&amp;gt;. Though Freesurfer is capable of reading .nii files, it natively uses .mgz files, and so this conversion step will ensure that the structural data file has all the information that Freesurfer expects (there&#039;s no reason to expect a problem if using .nii files, but we have run into problems with archival data that had funny data header information).&lt;br /&gt;
&lt;br /&gt;
==== Using mri_convert ====&lt;br /&gt;
Slicer seemed to be useful for automatically fixing goofy .NII header info; however, if there&#039;s nothing wrong with your file header, you might find the &amp;lt;code&amp;gt;mri_convert&amp;lt;/code&amp;gt; utility to be the fastest means of converting your files. You will need to make sure that you are in the directory that holds the .nii file.&lt;br /&gt;
 INFILE_BASE=MPRAGE&lt;br /&gt;
 INFILE_EXT=nii.gz&lt;br /&gt;
 mri_convert --in_type nii --out_type mgz -i ${INFILE_BASE}.${INFILE_EXT} -o ${INFILE_BASE}.mgz&lt;br /&gt;
&lt;br /&gt;
=== Recon-All ===&lt;br /&gt;
The computationally-intensive surface mapping is carried out by a Freesurfer program called recon-all. This program is extensively spaghetti-documented on the Freesurfer wiki [https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all here]. Though the Freesurfer documentation goes into much detail, it is also a little hard to follow at times and sometimes does some odd things. Notably, it describes a generally problem-free case. This might work well for data that you already know is going to be problem-free, but we seldom have that guarantee. Instead, this guide will split the recon-all processing into sub-stages where you can do quality-control inspection at each step. &lt;br /&gt;
#[[Autorecon1]] (~2-2.5  hours, assuming no problems encountered)&lt;br /&gt;
#[[Autorecon2]] (~6-8 hours, assuming no problems encountered)&lt;br /&gt;
#[[Autorecon3]]&lt;br /&gt;
&lt;br /&gt;
After running autorecon1, it is best to run autorecon2 immediately afterward. Usually autorecon1 does an alright job skull stripping. If it doesn&#039;t, the results will be evident after autorecon2 is completed if you use the &amp;lt;code&amp;gt;tksurfer SUBJECTID HEMI inflated&amp;lt;/code&amp;gt; command. Fill in the SUBJECTID, and fill in HEMI with lh or rh (but check both). If the brain appears overly lumpy or there are odd points sticking out, proceed to [[Autorecon2]] editing.&lt;br /&gt;
&lt;br /&gt;
== Functional Analysis ==&lt;br /&gt;
The previous steps have been concerned only with processing the T1 anatomical data. Though this might suffice for a purely structural brain analysis (e.g., voxel-based brain morphometry, which might explore how cortical thickness relates to some cognitive ability), most of our studies will employ functional MRI, which measures how the hemodynamic response changes as a function of task, condition or group. In the Freesurfer pipeline, this is done using a program called FS-FAST.&lt;br /&gt;
&lt;br /&gt;
=== FS-FAST Functional Analysis===&lt;br /&gt;
Each of these steps is detailed more extensively elsewhere, but generally speaking you will need to complete the following before starting your functional analysis:&lt;br /&gt;
#Copy BOLD data to the Freesurfer subject folder (see page for [[Freesurfer BOLD files]])&lt;br /&gt;
#Create (or copy if experiment used a fixed schedule) your paradigm files for each fMRI run, and edit them using matlab (see &amp;quot;[[Par-Files]]&amp;quot;).&lt;br /&gt;
#*When editing par files, be sure to check how many volumes to drop for each par file; it may be different every time! See above link for more details.&lt;br /&gt;
#Create a text file in your $SUBJECTS_DIR called &amp;quot;subjectname&amp;quot; that contains a list of your subjects; it will be needed later (used for [[Configure preproc-sess|preproc-sess]])&lt;br /&gt;
#*A quick way to do this, assuming you want to preprocess everyone, is to pipe the results of the ls command with the -1 switch (1 per line) into grep and then redirect the output to a file:&lt;br /&gt;
#*&amp;lt;code&amp;gt;ls -1 | grep &amp;quot;^FS&amp;quot; &amp;gt; subjects&amp;lt;/code&amp;gt;&lt;br /&gt;
#*This will list the contents of the current directory, 1 per line, then keep only lines starting with &#039;&#039;FS&#039;&#039;&lt;br /&gt;
#Configure your analyses (using [[Configure mkanalysis-sess|mkanalysis-sess]])&lt;br /&gt;
#Configure your contrast (using [[Configure mkcontrast-sess|mkcontrast-sess]])&lt;br /&gt;
#Preprocess your data (smoothing, slice-time correction, intensity normalization and brain mask creation) (using [[Configure preproc-sess|preproc-sess]])&lt;br /&gt;
#Check the *.mcdat files in each of the &#039;&#039;subject_name/bold/xxx&#039;&#039; directories to inspect the amount of motion detected during the motion correction step. Data associated with periods of excessive movement (&amp;gt;1mm) should be dealt with carefully. Columns in the text document from left to right are: (1) time point number (2) roll rotation (in degrees) (3) pitch rotation (in degrees) (4) yaw rotation (in degrees) (5) between-slice motion or translation (in mm) (6) within-slice up-down motion or translation (in mm) (7) within-slice left-right motion or translation (in mm) (8) RMS error before correction (9) RMS error after correction (10) total vector motion or translation (in mm). We need to look at column 10. If any of these values are above 1, this might be indicative of excessive movement. Consult with someone further up the chain for advice (undergrads ask a grad student; grad students ask Chris or a Postdoc if he ever has the funding to get one).&lt;br /&gt;
#Run the GLM for Single Subjects ([[selxavg3-sess]])&lt;br /&gt;
#*Typically, we will do a group-level GLM. I have come to realize that it should generally suffice to do all processing in &#039;&#039;&#039;fsaverage&#039;&#039;&#039; surface space (mri_glmfit requires all operations to be done in a common surface space).&lt;br /&gt;
#*This will be relevant to the parameters you use in steps 4, 6, and 8&lt;br /&gt;
#Run the group-level GLM ([[mri_glmfit]])&lt;br /&gt;
Lab-specific documentation can be found on this wiki, but a more thorough (and accurate) description can be found on the [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastFirstLevel Freesurfer Wiki]&lt;br /&gt;
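&lt;br /&gt;
The mcdat motion check described above can be scripted. Below is a minimal sketch, run from $SUBJECTS_DIR, that scans every .mcdat file and prints the time points whose total vector motion (column 10) exceeds 1 mm. The &amp;lt;code&amp;gt;*/bold/0*/*.mcdat&amp;lt;/code&amp;gt; glob is an assumption based on the layout described above, so adjust it to your own directory names.&lt;br /&gt;

```shell
#!/bin/bash
# Scan motion-correction summaries under the current directory.
# Column 10 of each .mcdat row is the total vector motion in mm;
# print any time point above the 1 mm threshold.
THRESHOLD=1
for mcdat in */bold/0*/*.mcdat; do
    [ -e "$mcdat" ] || continue   # skip when the glob matched nothing
    awk -v thr="$THRESHOLD" -v f="$mcdat" \
        '$10 + 0 > thr { printf "%s: time point %s moved %.2f mm\n", f, $1, $10 }' \
        "$mcdat"
done
```

Any lines it prints are the runs to flag for someone further up the chain.&lt;br /&gt;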
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
=== Missing surfaces ===&lt;br /&gt;
I am not sure how this came to pass, as I have never encountered it before, but it was probably the result of one of the autorecon steps stopping early. I was running preproc-sess using the &#039;&#039;fsaverage&#039;&#039; surface (having already successfully run it on the &#039;&#039;self&#039;&#039; surface) and got an error message about being unable to find lh.sphere.reg. A quick Google search found a FreeSurfer mailing list archive email concerned with a different issue that had a similar solution. In Bruce the Almighty&#039;s words:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; you can use mris_register with the -1 switch to indicate that the target &lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; is a single surface not a statistical atlas. You will however still have to&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; create the various surface representations and geometric measures we expect&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; (e.g. ?h.inflated, ?h.sulc, etc....). If you can convert your &lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; surfaces to our binary format (e.g. using mris_convert) to create&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; an lh.orig, it would be something like:&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt;&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_smooth lh.orig lh.smoothwm&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_inflate lh.smoothwm lh.inflated&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_sphere lh.inflated lh.sphere&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_register -1 lh.sphere $TARGET_SUBJECT_DIR/lh.sphere ./lh.sphere.reg&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt;&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; I&#039;ve probably left something out, but that basic approach should work.&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt;&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; cheers&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; Bruce&lt;br /&gt;
&lt;br /&gt;
The subject that had given me a problem already had ?h.inflated files, but no ?h.sphere files. I tried running some of the above steps, but there were missing dependencies, so I am currently running:&lt;br /&gt;
 recon-all -s $SUBJECT -surfreg&lt;br /&gt;
This allegedly produces the ?h.sphere.reg files as output.&lt;br /&gt;
&lt;br /&gt;
=== My script can&#039;t find my data ===&lt;br /&gt;
Some versions of the autorecon*.sh scripts have the SUBJECTS_DIR hard-coded. Or sometimes you will close your terminal window (e.g., at the end of the day), and then launch a new terminal window when you come back to the workstation (or resume working at a different computer). There&#039;s a good chance that your Freesurfer malfunction is the result of your SUBJECTS_DIR environment variable being set to the incorrect value. Troubleshooting step #1 should be the following:&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
If the wrong directory name is printed to the screen, setting it to the correct value may well fix your problem.&lt;br /&gt;
&lt;br /&gt;
At this point you might expect me to tell you how to set SUBJECTS_DIR to the correct value. But I&#039;m not going to do that, and here&#039;s why:&lt;br /&gt;
#It&#039;s documented elsewhere on the wiki&lt;br /&gt;
#If you&#039;re confused about how to set SUBJECTS_DIR, you&#039;re also likely to just blindly type whatever example command I give without understanding what the command does. If this is the case, please become more proficient with the lab&#039;s procedures and software.&lt;br /&gt;
&lt;br /&gt;
=== Is Freesurfer in your path? ===&lt;br /&gt;
This is an unlikely issue, but it is possible that your ~/.bashrc file doesn&#039;t add Freesurfer to your path. To check, launch a new terminal window. You should see something like the following:&lt;br /&gt;
 -------- freesurfer-Linux-centos6_x86_64-stable-pub-v5.3.0 --------&lt;br /&gt;
 Setting up environment for FreeSurfer/FS-FAST (and FSL)&lt;br /&gt;
 FREESURFER_HOME   /usr/local/freesurfer&lt;br /&gt;
 FSFAST_HOME       /usr/local/freesurfer/fsfast&lt;br /&gt;
 FSF_OUTPUT_FORMAT nii.gz&lt;br /&gt;
 SUBJECTS_DIR      /usr/local/freesurfer/subjects&lt;br /&gt;
 MNI_DIR           /usr/local/freesurfer/mni&lt;br /&gt;
If you don&#039;t, then open up your .bashrc file with a text editor:&lt;br /&gt;
 gedit ~/.bashrc&lt;br /&gt;
Then add the following lines to the bottom of the file:&lt;br /&gt;
 #FREESURFER&lt;br /&gt;
 export FREESURFER_HOME=/usr/local/freesurfer&lt;br /&gt;
 source ${FREESURFER_HOME}/SetUpFreeSurfer.sh&lt;br /&gt;
 &lt;br /&gt;
 #FSL&lt;br /&gt;
 export FSLDIR=/usr/share/fsl/5.0&lt;br /&gt;
 source ${FSLDIR}/etc/fslconf/fsl.sh&lt;br /&gt;
Save your changes, log out of Linux and log back in.&lt;br /&gt;
&lt;br /&gt;
=== ERROR: Flag unrecognized. ===&lt;br /&gt;
Most Linux programs take parameters, or &#039;&#039;flags&#039;&#039;, that modify or specify how they are run. For example, you can&#039;t just call the &amp;lt;code&amp;gt;recon-all&amp;lt;/code&amp;gt; command; you have to tell the program &#039;&#039;what&#039;&#039; data you want to work on, and so this information is provided using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; flag. Other flags might tell the program how aggressive to be when deciding to remove potential skull voxels, for example.&lt;br /&gt;
There are no hard-and-fast rules, but to find the set of flags that you can use for a particular Linux program, there are a few options you can try:&lt;br /&gt;
#&amp;lt;code&amp;gt;man program_name&amp;lt;/code&amp;gt;&lt;br /&gt;
#&amp;lt;code&amp;gt;program_name -help&amp;lt;/code&amp;gt;&lt;br /&gt;
#&amp;lt;code&amp;gt;program_name -?&amp;lt;/code&amp;gt;&lt;br /&gt;
Often, simply passing an invalid flag (&amp;lt;code&amp;gt;program_name -some_flag&amp;lt;/code&amp;gt;) also causes the usage information to be displayed.&lt;br /&gt;
&lt;br /&gt;
When you see an error message concerning an unrecognized flag, it is most likely because there is a typo in your command. For example:&lt;br /&gt;
 recon-all -autorecon1 watershed 15&lt;br /&gt;
Each of the flags is supposed to be prefixed with a &#039;-&#039; character, but in the example above, &amp;lt;code&amp;gt;-watershed&amp;lt;/code&amp;gt; was instead typed as &amp;lt;code&amp;gt;watershed&amp;lt;/code&amp;gt; &#039;&#039;without&#039;&#039; the &#039;-&#039; character. These little typos can be hard to spot. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;&#039;&#039;&#039;If you are completely perplexed why something doesn&#039;t work out when you followed the directions to the letter, the first thing you should do is throw out your assumption that you typed in the command correctly.&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:FreeSurfer]]&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=FreeSurfer&amp;diff=1465</id>
		<title>FreeSurfer</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=FreeSurfer&amp;diff=1465"/>
		<updated>2018-02-14T21:42:38Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Using mri_convert */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Freesurfer is a surface-based fMRI processing and analysis package written for the Unix environment (including Mac OS X, which is based on Unix). The neuroanatomical organization of the brain has the grey matter on the outside surface of the cortex. Aside from the ventral/medial subcortical structures, the interior volume of the brain is predominantly white matter axonal tracts. Because neurons metabolize in the cell body rather than along the axons, we can focus on the grey matter of the cortical surface; any fMRI signal changes detected in the white matter should theoretically be noise. This is the motivation for surface-based analyses of fMRI.&lt;br /&gt;
&lt;br /&gt;
Freesurfer has a rigid set of assumptions concerning how the input data is organized and labeled. The following instructions will help avoid any violations of these assumptions that might derail your Freesurfer fMRI processing pipeline.&lt;br /&gt;
&lt;br /&gt;
These instructions assume that Freesurfer has already been installed and configured on your workstation.&lt;br /&gt;
&lt;br /&gt;
== Organization ==&lt;br /&gt;
Freesurfer data for a collection of subjects is organized into a single project directory, called $SUBJECTS_DIR. Try this in the Linux terminal:&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
It is likely that you will see something like the following, which is the sample &#039;bert&#039; dataset that comes with a Freesurfer installation:&lt;br /&gt;
 /usr/local/freesurfer/subjects&lt;br /&gt;
&lt;br /&gt;
Let us assume that you have been collecting data for some lexical decision task experiment. All the data for all subjects should be stored in a single directory, which you  will set as your $SUBJECTS_DIR variable. For example, if we keep all our data in ~/ubfs/cpmcnorg/openfmri/LDT, then we would type the following:&lt;br /&gt;
 SUBJECTS_DIR=~/ubfs/cpmcnorg/openfmri/LDT&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
Another trick we can do is to use the Unix &amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt; command to set the SUBJECTS_DIR to be whatever directory we happen to be in at the moment. The following series of commands will do the same as the previous example command:&lt;br /&gt;
 cd ~&lt;br /&gt;
 cd ubfs/cpmcnorg/openfmri&lt;br /&gt;
 cd LDT&lt;br /&gt;
 SUBJECTS_DIR=`pwd`&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
The first line above, &amp;lt;code&amp;gt;cd ~&amp;lt;/code&amp;gt;, moves you to your home directory. The second line moves you from your home directory to the ubfs network folder containing several sets of experiments. The third line of code moves you into the subdirectory containing the LDT data. The fourth line sets the SUBJECTS_DIR environment variable to whatever gets printed out when you execute the &amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt; command (the &amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt; command &#039;&#039;&#039;&amp;lt;u&amp;gt;p&amp;lt;/u&amp;gt;&#039;&#039;&#039;rints the current &#039;&#039;&#039;&amp;lt;u&amp;gt;w&amp;lt;/u&amp;gt;&#039;&#039;&#039;orking &#039;&#039;&#039;&amp;lt;u&amp;gt;d&amp;lt;/u&amp;gt;&#039;&#039;&#039;irectory). As a result, the current working directory becomes the new SUBJECTS_DIR after you execute this command, as you can see when you execute the last line of code. Note that in the &amp;lt;code&amp;gt;SUBJECTS_DIR=`pwd`&amp;lt;/code&amp;gt; line, those are back-quotes, which you might find on your keyboard sharing a key with the ~ character. &amp;lt;code&amp;gt;`&amp;lt;/code&amp;gt; is not the same character as &amp;lt;code&amp;gt;&#039;&amp;lt;/code&amp;gt;. When you enclose a command in a pair of back-quotes, you are telling the operating system something along the lines of &amp;quot;this is a command that I want you to execute first, before using its output to figure out the rest of this business.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Subject directory organization ===&lt;br /&gt;
Data for each subject should be kept in their own directory. Moreover, different types of data (i.e., anatomical/structural or bold/functional) are kept in separate subdirectories. The basic directory structure for each participant (&#039;session&#039; in Freesurfer terminology) looks like this (see also [[Freesurfer_BOLD_files| Freesurfer BOLD files]]):&lt;br /&gt;
&lt;br /&gt;
*SUBJECTS_DIR&lt;br /&gt;
**Subject_001&lt;br /&gt;
***mri&lt;br /&gt;
***bold&lt;br /&gt;
****001&lt;br /&gt;
****002&lt;br /&gt;
****003&lt;br /&gt;
****004&lt;br /&gt;
****005&lt;br /&gt;
****006&lt;br /&gt;
Copy the data for the participant from the /raw subdirectory for the project in the ubfs folder. You will only need the /mri and the /bold directories. If you are processing data for multiple sessions for a single participant, you may need to rename some of the files as you copy them over; otherwise, you will end up overwriting files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt; Note that all the functional data (in the &#039;bold&#039; subdirectory) are stored in sequentially numbered folders (3-digits), and all are given the same name (&#039;f.nii&#039; or &#039;f.nii.gz&#039;). This seems to be a requirement. It may be possible to circumvent this requirement, but this is a relatively minor concern at this time. &amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By the end, your data should look like this:&lt;br /&gt;
&lt;br /&gt;
*SUBJECTS_DIR&lt;br /&gt;
**Subject_001&lt;br /&gt;
***mri&lt;br /&gt;
****orig.nii.gz (or MPRAGE.nii)&lt;br /&gt;
***bold&lt;br /&gt;
****001/f.nii.gz&lt;br /&gt;
****002/f.nii.gz&lt;br /&gt;
****etc.&lt;br /&gt;
&lt;br /&gt;
Note that the .nii.gz file extension indicates that this is a gzipped NIFTI file. You can use &amp;lt;code&amp;gt;gunzip&amp;lt;/code&amp;gt; to unzip the files, but this isn&#039;t really necessary unless you are going to manipulate these files in MATLAB. We have figured out ways to do everything for FreeSurfer in the BASH shell, so you may as well just leave them as-is unless you have a compelling reason (or compulsion) to unzip them.&lt;br /&gt;
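&lt;br /&gt;
The skeleton above can be created in one pass. This is a minimal sketch with a hypothetical subject name (Subject_001) and six runs; adjust both to your study.&lt;br /&gt;

```shell
#!/bin/bash
# Create the directory skeleton Freesurfer expects for one session.
# Subject_001 and the six run numbers are placeholders.
subject=Subject_001
mkdir -p "$subject/mri"
for run in 001 002 003 004 005 006; do
    mkdir -p "$subject/bold/$run"
done
```

You would then copy orig.nii.gz into mri/ and each run&#039;s f.nii.gz into the numbered bold/ subdirectories.&lt;br /&gt;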
&lt;br /&gt;
== Structural Preprocessing ==&lt;br /&gt;
The structural MRI file (orig.nii) is transformed over a series of computationally-intensive steps invoked by the recon-all Freesurfer program. Recon-all is designed to execute all the steps in series without intervention; in practice, however, it seems preferable to execute the process in a series of smaller groups of steps and check the output in between. The process is automated using computational algorithms, but if one step doesn&#039;t execute correctly, everything that follows will be compromised. The steps take many hours to complete, so inspecting the progress along the way can save many hours of processing time redoing steps that had been done incorrectly.&lt;br /&gt;
&lt;br /&gt;
===Tip: Keeping Track===&lt;br /&gt;
If you are working on multiple subjects, or if you are sharing preprocessing duties with someone else, you can coordinate or track your progress by creating a LOG_FS_xxxx.txt file. A template file can be found in ubfs in the LDT folder. After completing each processing step (both structural and functional), mark an x next to the corresponding line. It can be really easy to forget to do this, but it will ensure that others accessing the folder on ubfs will know what&#039;s been done. This file should be a resource for you anyway, since it contains a list of commands. More elaboration can be found on the wiki.&lt;br /&gt;
&lt;br /&gt;
Since most of the folders on ubfs do not contain this file, it is hard to know what steps of functional analysis have been done. I am working on writing down what specific files/folders are created at each step to use as markers. I will fill this in as I process a new participant. For now, here is the template:&lt;br /&gt;
&lt;br /&gt;
*slicer (a window-based graphical program) or mri_convert (a command-line program)&lt;br /&gt;
**MPRAGE.mgz&lt;br /&gt;
*autorecon1&lt;br /&gt;
**brainmask.mgz&lt;br /&gt;
*autorecon2&lt;br /&gt;
*autorecon3&lt;br /&gt;
*parfiles&lt;br /&gt;
*mkanalysis&lt;br /&gt;
*mkcontrast&lt;br /&gt;
*preproc&lt;br /&gt;
*selxavg3&lt;br /&gt;
 &lt;br /&gt;
Make sure to update ubfs! It&#039;s a bummer to run a participant through all the steps and then realize the files were just sitting, completed, on someone&#039;s local computer.&lt;br /&gt;
&lt;br /&gt;
In order to help you through analysis, there is a file called commands in the LDT folder of ubfs. This has all the commands you need to go from raw data to time courses (no GLM, though). This is a good skeleton for reference, but it will still be necessary to consult the wiki to understand what&#039;s going on at each step and address any errors.&lt;br /&gt;
&lt;br /&gt;
=== Image File Format ===&lt;br /&gt;
The first thing that you will need to do is convert the orig.nii file to .mgz format. This can be done using the graphic-based [[Slicer]] program, or in the terminal using &amp;lt;code&amp;gt;mri_convert&amp;lt;/code&amp;gt;. Though Freesurfer is capable of reading .nii files, it natively uses .mgz files, and so this conversion step will ensure that the structural data file has all the information that Freesurfer expects (there&#039;s no reason to expect a problem if using .nii files, but we have run into problems with archival data that had funny data header information).&lt;br /&gt;
&lt;br /&gt;
==== Using mri_convert ====&lt;br /&gt;
Slicer seemed to be useful for automatically fixing goofy .NII header info; however, if there&#039;s nothing wrong with your file header, you might find the &amp;lt;code&amp;gt;mri_convert&amp;lt;/code&amp;gt; utility to be the fastest means of converting your files.&lt;br /&gt;
 INFILE_BASE=MPRAGE&lt;br /&gt;
 INFILE_EXT=nii.gz&lt;br /&gt;
 mri_convert --in_type nii --out_type mgz -i ${INFILE_BASE}.${INFILE_EXT} -o ${INFILE_BASE}.mgz&lt;br /&gt;
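If you have several subjects to convert, the same command can be looped. This is a dry-run sketch: it only prints each mri_convert command so you can check the file names first (the &amp;lt;code&amp;gt;FS*&amp;lt;/code&amp;gt; glob and the MPRAGE.nii.gz name are assumptions about your layout; remove the leading echo to actually run the conversions).&lt;br /&gt;

```shell
#!/bin/bash
# Print the mri_convert command for every subject's anatomical.
# FS*/mri/MPRAGE.nii.gz is an assumed layout; adjust to yours.
for anat in FS*/mri/MPRAGE.nii.gz; do
    [ -e "$anat" ] || continue        # skip when the glob matched nothing
    out=${anat%.nii.gz}.mgz           # swap .nii.gz for .mgz
    echo mri_convert --in_type nii --out_type mgz -i "$anat" -o "$out"
done
```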
&lt;br /&gt;
=== Recon-All ===&lt;br /&gt;
The computationally-intensive surface mapping is carried out by a Freesurfer program called recon-all. This program is extensively spaghetti-documented on the Freesurfer wiki [https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all here]. Though the Freesurfer documentation goes into much detail, it is also a little hard to follow at times and sometimes does some odd things. Notably, it describes a generally problem-free case. This might work well for data that you already know is going to be problem-free, but we seldom have that guarantee. Instead, this guide will split the recon-all processing into sub-stages where you can do quality-control inspection at each step. &lt;br /&gt;
#[[Autorecon1]] (~2-2.5 hours, assuming no problems encountered)&lt;br /&gt;
#[[Autorecon2]] (~6-8 hours, assuming no problems encountered)&lt;br /&gt;
#[[Autorecon3]]&lt;br /&gt;
&lt;br /&gt;
After running autorecon1, it is best to run autorecon2 immediately afterward. Usually autorecon1 does an alright job of skull stripping. If it doesn&#039;t, the results will be evident after autorecon2 is completed if you use the &amp;lt;code&amp;gt;tksurfer SUBJECTID HEMI inflated&amp;lt;/code&amp;gt; command. Fill in the SUBJECTID, and fill in HEMI with lh or rh (but check both). If the brain appears overly lumpy or there are odd points sticking out, proceed to [[Autorecon2]] editing.&lt;br /&gt;
&lt;br /&gt;
== Functional Analysis ==&lt;br /&gt;
The previous steps have been concerned only with processing the T1 anatomical data. Though this might suffice for a purely structural brain analysis (e.g., voxel-based brain morphometry, which might explore how cortical thickness relates to some cognitive ability), most of our studies will employ functional MRI, which measures how the hemodynamic response changes as a function of task, condition or group. In the Freesurfer pipeline, this is done using a program called FS-FAST.&lt;br /&gt;
&lt;br /&gt;
=== FS-FAST Functional Analysis===&lt;br /&gt;
Each of these steps is detailed more extensively elsewhere, but generally speaking you will need to follow them before starting your functional analysis:&lt;br /&gt;
#Copy BOLD data to the Freesurfer subject folder (see page for [[Freesurfer BOLD files]])&lt;br /&gt;
#Create (or copy if experiment used a fixed schedule) your paradigm files for each fMRI run, and edit them using matlab (see &amp;quot;[[Par-Files]]&amp;quot;).&lt;br /&gt;
#*When editing par files, be sure to check how many volumes to drop for each par file; it may be different every time! See above link for more details.&lt;br /&gt;
#Create a text file in your $SUBJECTS_DIR called &amp;quot;subjectname&amp;quot; that contains a list of your subjects; it will be needed later (used for [[Configure preproc-sess|preproc-sess]])&lt;br /&gt;
#*A quick way to do this, assuming you want to preprocess everyone, is to pipe the results of the ls command with the -1 switch (1 per line) into grep and then redirect the output to a file:&lt;br /&gt;
#*&amp;lt;code&amp;gt;ls -1 | grep &amp;quot;^FS&amp;quot; &amp;gt; subjects&amp;lt;/code&amp;gt;&lt;br /&gt;
#*This will list the contents of the current directory, 1 per line, then keep only lines starting with &#039;&#039;FS&#039;&#039;&lt;br /&gt;
#Configure your analyses (using [[Configure mkanalysis-sess|mkanalysis-sess]])&lt;br /&gt;
#Configure your contrast (using [[Configure mkcontrast-sess|mkcontrast-sess]])&lt;br /&gt;
#Preprocess your data (smoothing, slice-time correction, intensity normalization and brain mask creation) (using [[Configure preproc-sess|preproc-sess]])&lt;br /&gt;
#Check the *.mcdat files in each of the &#039;&#039;subject_name/bold/xxx&#039;&#039; directories to inspect the amount of motion detected during the motion correction step. Data associated with periods of excessive movement (&amp;gt;1mm) should be dealt with carefully. Columns in the text document from left to right are: (1) time point number (2) roll rotation (in degrees) (3) pitch rotation (in degrees) (4) yaw rotation (in degrees) (5) between-slice motion or translation (in mm) (6) within-slice up-down motion or translation (in mm) (7) within-slice left-right motion or translation (in mm) (8) RMS error before correction (9) RMS error after correction (10) total vector motion or translation (in mm). We need to look at column 10. If any of these values are above 1, this might be indicative of excessive movement. Consult with someone further up the chain for advice (undergrads ask a grad student; grad students ask Chris or a Postdoc if he ever has the funding to get one).&lt;br /&gt;
#Run the GLM for Single Subjects ([[selxavg3-sess]])&lt;br /&gt;
#*Typically, we will do a group-level GLM. I have come to realize that it should generally suffice to do all processing in &#039;&#039;&#039;fsaverage&#039;&#039;&#039; surface space (mri_glmfit requires all operations to be done in a common surface space).&lt;br /&gt;
#*This will be relevant to the parameters you use in steps 4, 6, and 8&lt;br /&gt;
#Run the group-level GLM ([[mri_glmfit]])&lt;br /&gt;
Lab-specific documentation can be found on this wiki, but a more thorough (and accurate) description can be found on the [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastFirstLevel Freesurfer Wiki]&lt;br /&gt;
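&lt;br /&gt;
The mcdat motion check described above can be scripted. Below is a minimal sketch, run from $SUBJECTS_DIR, that scans every .mcdat file and prints the time points whose total vector motion (column 10) exceeds 1 mm. The &amp;lt;code&amp;gt;*/bold/0*/*.mcdat&amp;lt;/code&amp;gt; glob is an assumption based on the layout described above, so adjust it to your own directory names.&lt;br /&gt;

```shell
#!/bin/bash
# Scan motion-correction summaries under the current directory.
# Column 10 of each .mcdat row is the total vector motion in mm;
# print any time point above the 1 mm threshold.
THRESHOLD=1
for mcdat in */bold/0*/*.mcdat; do
    [ -e "$mcdat" ] || continue   # skip when the glob matched nothing
    awk -v thr="$THRESHOLD" -v f="$mcdat" \
        '$10 + 0 > thr { printf "%s: time point %s moved %.2f mm\n", f, $1, $10 }' \
        "$mcdat"
done
```

Any lines it prints are the runs to flag for someone further up the chain.&lt;br /&gt;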
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
=== Missing surfaces ===&lt;br /&gt;
I am not sure how this came to pass, as I have never encountered it before, but it was probably the result of one of the autorecon steps stopping early. I was running preproc-sess using the &#039;&#039;fsaverage&#039;&#039; surface (having already successfully run it on the &#039;&#039;self&#039;&#039; surface) and got an error message about being unable to find lh.sphere.reg. A quick Google search found a FreeSurfer mailing list archive email concerned with a different issue that had a similar solution. In Bruce the Almighty&#039;s words:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; you can use mris_register with the -1 switch to indicate that the target &lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; is a single surface not a statistical atlas. You will however still have to&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; create the various surface representations and geometric measures we expect&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; (e.g. ?h.inflated, ?h.sulc, etc....). If you can convert your &lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; surfaces to our binary format (e.g. using mris_convert) to create&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; an lh.orig, it would be something like:&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt;&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_smooth lh.orig lh.smoothwm&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_inflate lh.smoothwm lh.inflated&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_sphere lh.inflated lh.sphere&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_register -1 lh.sphere $TARGET_SUBJECT_DIR/lh.sphere ./lh.sphere.reg&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt;&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; I&#039;ve probably left something out, but that basic approach should work.&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt;&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; cheers&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; Bruce&lt;br /&gt;
&lt;br /&gt;
The subject that had given me a problem already had ?h.inflated files, but no ?h.sphere files. I tried running some of the above steps, but there were missing dependencies, so I am currently running:&lt;br /&gt;
 recon-all -s $SUBJECT -surfreg&lt;br /&gt;
This allegedly produces the ?h.sphere.reg files as output.&lt;br /&gt;
&lt;br /&gt;
=== My script can&#039;t find my data ===&lt;br /&gt;
Some versions of the autorecon*.sh scripts have the SUBJECTS_DIR hard-coded. Or sometimes you will close your terminal window (e.g., at the end of the day), and then launch a new terminal window when you come back to the workstation (or resume working at a different computer). There&#039;s a good chance that your Freesurfer malfunction is the result of your SUBJECTS_DIR environment variable being set to the incorrect value. Troubleshooting step #1 should be the following:&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
If the wrong directory name is printed to the screen, setting it to the correct value may well fix your problem.&lt;br /&gt;
&lt;br /&gt;
At this point you might expect me to tell you how to set SUBJECTS_DIR to the correct value. But I&#039;m not going to do that, and here&#039;s why:&lt;br /&gt;
#It&#039;s documented elsewhere on the wiki&lt;br /&gt;
#If you&#039;re confused about how to set SUBJECTS_DIR, you&#039;re also likely to just blindly type whatever example command I give without understanding what the command does. If this is the case, please become more proficient with the lab&#039;s procedures and software.&lt;br /&gt;
&lt;br /&gt;
=== Is Freesurfer in your path? ===&lt;br /&gt;
This is an unlikely issue, but it is possible that your ~/.bashrc file doesn&#039;t add Freesurfer to your path. To check, launch a new terminal window. You should see something like the following:&lt;br /&gt;
 -------- freesurfer-Linux-centos6_x86_64-stable-pub-v5.3.0 --------&lt;br /&gt;
 Setting up environment for FreeSurfer/FS-FAST (and FSL)&lt;br /&gt;
 FREESURFER_HOME   /usr/local/freesurfer&lt;br /&gt;
 FSFAST_HOME       /usr/local/freesurfer/fsfast&lt;br /&gt;
 FSF_OUTPUT_FORMAT nii.gz&lt;br /&gt;
 SUBJECTS_DIR      /usr/local/freesurfer/subjects&lt;br /&gt;
 MNI_DIR           /usr/local/freesurfer/mni&lt;br /&gt;
If you don&#039;t, then open up your .bashrc file with a text editor:&lt;br /&gt;
 gedit ~/.bashrc&lt;br /&gt;
Then add the following lines to the bottom of the file:&lt;br /&gt;
 #FREESURFER&lt;br /&gt;
 export FREESURFER_HOME=/usr/local/freesurfer&lt;br /&gt;
 source ${FREESURFER_HOME}/SetUpFreeSurfer.sh&lt;br /&gt;
 &lt;br /&gt;
 #FSL&lt;br /&gt;
 export FSLDIR=/usr/share/fsl/5.0&lt;br /&gt;
 source ${FSLDIR}/etc/fslconf/fsl.sh&lt;br /&gt;
Save your changes, log out of Linux and log back in.&lt;br /&gt;
&lt;br /&gt;
=== ERROR: Flag unrecognized. ===&lt;br /&gt;
Most Linux programs take parameters, or &#039;&#039;flags&#039;&#039;, that modify or specify how they are run. For example, you can&#039;t just call the &amp;lt;code&amp;gt;recon-all&amp;lt;/code&amp;gt; command; you have to tell the program &#039;&#039;what&#039;&#039; data you want to work on, and so this information is provided using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; flag. Other flags might tell the program how aggressive to be when deciding to remove potential skull voxels, for example.&lt;br /&gt;
There are no hard-and-fast rules, but to find the set of flags that you can use for a particular Linux program, there are a few options you can try:&lt;br /&gt;
#&amp;lt;code&amp;gt;man program_name&amp;lt;/code&amp;gt;&lt;br /&gt;
#&amp;lt;code&amp;gt;program_name -help&amp;lt;/code&amp;gt;&lt;br /&gt;
#&amp;lt;code&amp;gt;program_name -?&amp;lt;/code&amp;gt;&lt;br /&gt;
Often, simply passing an invalid flag (&amp;lt;code&amp;gt;program_name -some_flag&amp;lt;/code&amp;gt;) also causes the usage information to be displayed.&lt;br /&gt;
&lt;br /&gt;
When you see an error message concerning an unrecognized flag, it is most likely because there is a typo in your command. For example:&lt;br /&gt;
 recon-all -autorecon1 watershed 15&lt;br /&gt;
Each of the flags is supposed to be prefixed with a &#039;-&#039; character, but in the example above, &amp;lt;code&amp;gt;-watershed&amp;lt;/code&amp;gt; was instead typed as &amp;lt;code&amp;gt;watershed&amp;lt;/code&amp;gt; &#039;&#039;without&#039;&#039; the &#039;-&#039; character. These little typos can be hard to spot. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;&#039;&#039;&#039;If you are completely perplexed why something doesn&#039;t work out when you followed the directions to the letter, the first thing you should do is throw out your assumption that you typed in the command correctly.&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:FreeSurfer]]&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
	<entry>
		<id>https://ccn-wiki.caset.buffalo.edu/index.php?title=FreeSurfer&amp;diff=1464</id>
		<title>FreeSurfer</title>
		<link rel="alternate" type="text/html" href="https://ccn-wiki.caset.buffalo.edu/index.php?title=FreeSurfer&amp;diff=1464"/>
		<updated>2018-02-14T21:42:13Z</updated>

		<summary type="html">&lt;p&gt;Erica: /* Using mri_convert */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Freesurfer is a surface-based fMRI processing and analysis package written for the Unix environment (including Mac OS X, which is based on Unix). Neuroanatomically, the grey matter lies on the outside surface of the cortex; aside from the ventral/medial subcortical structures, the interior volume of the brain is predominantly white matter axonal tracts. Because neurons metabolize in the cell body, rather than along the axons, we can focus on the grey matter found in the cortical surface: any fMRI signal changes detected in the white matter should theoretically be noise. This is the motivation for surface-based analyses of fMRI.&lt;br /&gt;
&lt;br /&gt;
Freesurfer has a rigid set of assumptions concerning how the input data is organized and labeled. The following instructions will help avoid any violations of these assumptions that might derail your Freesurfer fMRI processing pipeline.&lt;br /&gt;
&lt;br /&gt;
These instructions assume that Freesurfer has already been installed and configured on your workstation.&lt;br /&gt;
&lt;br /&gt;
== Organization ==&lt;br /&gt;
Freesurfer data for a collection of subjects is organized into a single project directory, called $SUBJECTS_DIR. Try this in the Linux terminal:&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
It is likely that you will see something like the following, which is the sample &#039;bert&#039; dataset that comes with a Freesurfer installation:&lt;br /&gt;
 /usr/local/freesurfer/subjects&lt;br /&gt;
&lt;br /&gt;
Let us assume that you have been collecting data for some lexical decision task experiment. All the data for all subjects should be stored in a single directory, which you will set as your $SUBJECTS_DIR variable. For example, if we keep all our data in ~/ubfs/cpmcnorg/openfmri/LDT, then we would type the following:&lt;br /&gt;
 SUBJECTS_DIR=~/ubfs/cpmcnorg/openfmri/LDT&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
Another trick we can do is to use the Unix &amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt; command to set the SUBJECTS_DIR to be whatever directory we happen to be in at the moment. The following series of commands will do the same as the previous example command:&lt;br /&gt;
 cd ~&lt;br /&gt;
 cd ubfs/cpmcnorg/openfmri&lt;br /&gt;
 cd LDT&lt;br /&gt;
 SUBJECTS_DIR=`pwd`&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
The first line above, &amp;lt;code&amp;gt;cd ~&amp;lt;/code&amp;gt;, moves you to your home directory. The second line moves you from your home directory to the ubfs network folder containing several sets of experiments. The third line of code moves you into the subdirectory containing the LDT data. The fourth line sets the SUBJECTS_DIR environment variable to whatever gets printed out when you execute the &amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt; command (the &amp;lt;code&amp;gt;pwd&amp;lt;/code&amp;gt; command &#039;&#039;&#039;&amp;lt;u&amp;gt;p&amp;lt;/u&amp;gt;&#039;&#039;&#039;rints the current &#039;&#039;&#039;&amp;lt;u&amp;gt;w&amp;lt;/u&amp;gt;&#039;&#039;&#039;orking &#039;&#039;&#039;&amp;lt;u&amp;gt;d&amp;lt;/u&amp;gt;&#039;&#039;&#039;irectory). As a result, the current working directory becomes the new SUBJECTS_DIR after you execute this command, as you can see when you execute the last line of code. Note that in the &amp;lt;code&amp;gt;SUBJECTS_DIR=`pwd`&amp;lt;/code&amp;gt; line, those are back-quotes, which you might find on your keyboard sharing a key with the ~ character. &amp;lt;code&amp;gt;`&amp;lt;/code&amp;gt; is not the same character as &amp;lt;code&amp;gt;&#039;&amp;lt;/code&amp;gt;. When you enclose a command in a pair of back-quotes, you are telling the operating system something along the lines of &amp;quot;this is a command that I want you to execute first, before using its output to figure out the rest of this business.&amp;quot;&lt;br /&gt;
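As a quick sanity check on that last point, back-quotes and the modern &#039;&#039;$(...)&#039;&#039; form both perform command substitution and are interchangeable in bash; a minimal sketch:

```shell
# Back-quotes and $(...) both perform command substitution:
# the enclosed command runs first and its output becomes the value.
cd /tmp
A=`pwd`
B=$(pwd)
echo "$A"
echo "$B"
```

The $(...) form is easier to read and nests cleanly, but either works for setting SUBJECTS_DIR.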
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
=== Subject directory organization ===&lt;br /&gt;
Data for each subject should be kept in their own directory. Moreover, different types of data (i.e., anatomical/structural or bold/functional) are kept in separate subdirectories. The basic directory structure for each participant (&#039;session&#039; in Freesurfer terminology) looks like this (see also [[Freesurfer_BOLD_files| Freesurfer BOLD files]]):&lt;br /&gt;
&lt;br /&gt;
*SUBJECTS_DIR&lt;br /&gt;
**Subject_001&lt;br /&gt;
***mri&lt;br /&gt;
***bold&lt;br /&gt;
****001&lt;br /&gt;
****002&lt;br /&gt;
****003&lt;br /&gt;
****004&lt;br /&gt;
****005&lt;br /&gt;
****006&lt;br /&gt;
Copy the data for the participant from the /raw subdirectory for the project in the ubfs folder. You will only need the /mri and the /bold directories. If you are processing data for multiple sessions for a single participant, you may need to rename some of the files as you copy them over; otherwise, you will end up overwriting files.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt; Note that all the functional data (in the &#039;bold&#039; subdirectory) are stored in sequentially numbered folders (3-digits), and all are given the same name (&#039;f.nii&#039; or &#039;f.nii.gz&#039;). This seems to be a requirement. It may be possible to circumvent this requirement, but this is a relatively minor concern at this time. &amp;lt;/i&amp;gt;&lt;br /&gt;
&lt;br /&gt;
By the end, your data should look like this:&lt;br /&gt;
&lt;br /&gt;
*SUBJECTS_DIR&lt;br /&gt;
**Subject_001&lt;br /&gt;
***mri&lt;br /&gt;
****orig.nii.gz (or MPRAGE.nii)&lt;br /&gt;
***bold&lt;br /&gt;
****001/f.nii.gz&lt;br /&gt;
****002/f.nii.gz&lt;br /&gt;
****etc.&lt;br /&gt;
&lt;br /&gt;
Note that the .nii.gz file extension indicates that this is a gzipped NIFTI file. You can use &amp;lt;code&amp;gt;gunzip&amp;lt;/code&amp;gt; to unzip the files, but this isn&#039;t really necessary unless you are going to manipulate these files in MATLAB. We have figured out ways to do everything for FreeSurfer in the BASH shell, so you may as well just leave them as-is unless you have a compelling reason (or compulsion) to unzip them.&lt;br /&gt;
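If you do need to unzip (e.g., to manipulate the files in MATLAB), the round trip is simple. In this sketch a dummy payload stands in for a real NIFTI file:

```shell
# .nii.gz is an ordinary gzip file; the dummy payload stands in for real data.
printf 'dummy payload' > f.nii
gzip f.nii              # produces f.nii.gz, removes f.nii
gunzip f.nii.gz         # restores f.nii, removes f.nii.gz
```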
&lt;br /&gt;
== Structural Preprocessing ==&lt;br /&gt;
The structural mri file (orig.nii) is transformed over a series of computationally-intensive steps invoked by the recon-all Freesurfer program. Recon-all is designed to execute all the steps in series without intervention; in practice, however, it seems preferable to execute the process in a series of smaller groups of steps and check the output in between. The process is automated using computational algorithms, but if one step doesn&#039;t execute correctly, everything that follows will be compromised. The steps take many hours to complete, so inspecting the progress along the way can save many hours of processing time redoing steps that had been done incorrectly.&lt;br /&gt;
&lt;br /&gt;
===Tip: Keeping Track===&lt;br /&gt;
If you are working on multiple subjects, or if you are sharing preprocessing duties with someone else, you can coordinate or track your progress by creating a LOG_FS_xxxx.txt file. A template file can be found in ubfs in the LDT folder. After completing each step for processing (both structural and functional), mark an x next to the corresponding line. It can be really easy to forget to do this, but it will ensure that others accessing the folder on ubfs will know what&#039;s been done. This file should be a resource for you anyway since it contains a list of commands. More elaboration can be found on the wiki.&lt;br /&gt;
&lt;br /&gt;
Since most of the folders on ubfs do not contain this file, it is hard to know what steps of functional analysis have been done. I am working on writing down what specific files/folders are created at each step to use as markers. I will fill this in as I process a new participant. For now, here is the template:&lt;br /&gt;
&lt;br /&gt;
*slicer (a window-based graphical program) or mri_convert (a command-line program)&lt;br /&gt;
**MPRAGE.mgz&lt;br /&gt;
*autorecon1&lt;br /&gt;
**brainmask.mgz&lt;br /&gt;
*autorecon2&lt;br /&gt;
*autorecon3&lt;br /&gt;
*parfiles&lt;br /&gt;
*mkanalysis&lt;br /&gt;
*mkcontrast&lt;br /&gt;
*preproc&lt;br /&gt;
*selxavg3&lt;br /&gt;
 &lt;br /&gt;
Make sure to update ubfs! It&#039;s a bummer to run a participant through all the steps and then realize the files were just sitting completed on someone&#039;s computer locally.&lt;br /&gt;
&lt;br /&gt;
In order to help you through analysis, there is a file called commands in the LDT folder of ubfs. This has all the commands you need to go from raw data to time courses (no GLM though). This is a good skeleton for reference, but it will still be necessary to consult the wiki to understand what&#039;s going on at each step and address any errors.&lt;br /&gt;
&lt;br /&gt;
=== Image File Format ===&lt;br /&gt;
The first thing that you will need to do is convert the orig.nii file to .mgz format. This can be done using the graphic-based [[Slicer]] program, or in the terminal using &amp;lt;code&amp;gt;mri_convert&amp;lt;/code&amp;gt;. Though Freesurfer is capable of reading .nii files, it natively uses .mgz files, and so this conversion step will ensure that the structural data file has all the information that Freesurfer expects (there&#039;s no reason to expect a problem if using .nii files, but we have run into problems with archival data that had funny data header information).&lt;br /&gt;
&lt;br /&gt;
==== Using mri_convert ====&lt;br /&gt;
Slicer seemed to be useful for automatically fixing goofy .NII header info; however, if there&#039;s nothing wrong with your file header, you might find the &amp;lt;code&amp;gt;mri_convert&amp;lt;/code&amp;gt; utility to be the fastest means of converting your files.&lt;br /&gt;
 INFILE_BASE=orig&lt;br /&gt;
 INFILE_EXT=nii.gz&lt;br /&gt;
 mri_convert --in_type nii --out_type mgz -i ${INFILE_BASE}.${INFILE_EXT} -o ${INFILE_BASE}.mgz&lt;br /&gt;
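If you have several subjects to convert, the same command can be wrapped in a loop. This is a hypothetical sketch assuming the Subject_*/mri layout described above; adjust the glob to match your own subject IDs:

```shell
# Hypothetical sketch: batch-convert every subject's orig.nii.gz to .mgz.
# Assumes the Subject_*/mri directory layout described above.
for mridir in Subject_*/mri; do
  if [ -f "$mridir/orig.nii.gz" ]; then
    mri_convert --in_type nii --out_type mgz -i "$mridir/orig.nii.gz" -o "$mridir/orig.mgz"
  fi
done
```

The -f test skips any subject directory that is missing its structural file rather than letting mri_convert fail partway through the batch.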
&lt;br /&gt;
=== Recon-All ===&lt;br /&gt;
The computationally-intensive surface mapping is carried out by a Freesurfer program called recon-all. This program is extensively spaghetti-documented on the Freesurfer wiki [https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all here]. Though the Freesurfer documentation goes into much detail, it is also a little hard to follow at times and sometimes does some odd things. Notably, it describes a generally problem-free case. This might work well for data that you already know is going to be problem-free, but we seldom have that guarantee. Instead, this guide will split the recon-all processing into sub-stages where you can do quality-control inspection at each step. &lt;br /&gt;
#[[Autorecon1]] (~2-2.5  hours, assuming no problems encountered)&lt;br /&gt;
#[[Autorecon2]] (~6-8 hours, assuming no problems encountered)&lt;br /&gt;
#[[Autorecon3]]&lt;br /&gt;
&lt;br /&gt;
After running autorecon1, it is best to run autorecon2 immediately afterward. Usually autorecon1 does an alright job of skull stripping. If it doesn&#039;t, the results will be evident after autorecon2 is completed if you use the &amp;lt;code&amp;gt;tksurfer SUBJECTID HEMI inflated&amp;lt;/code&amp;gt; command. Fill in the SUBJECTID, and fill in HEMI with lh or rh (but check both). If the brain appears overly lumpy or there are odd points sticking out, proceed to [[Autorecon2]] editing.&lt;br /&gt;
&lt;br /&gt;
== Functional Analysis ==&lt;br /&gt;
The previous steps have been concerned only with processing the T1 anatomical data. Though this might suffice for a purely structural brain analysis (e.g., voxel-based brain morphometry, which might explore how cortical thickness relates to some cognitive ability), most of our studies will employ functional MRI, which measures how the hemodynamic response changes as a function of task, condition or group. In the Freesurfer pipeline, this is done using a program called FS-FAST.&lt;br /&gt;
&lt;br /&gt;
=== FS-FAST Functional Analysis===&lt;br /&gt;
Each of these steps are detailed more extensively elsewhere, but generally speaking you will need to follow these steps before starting your functional analysis:&lt;br /&gt;
#Copy BOLD data to the Freesurfer subject folder (see page for [[Freesurfer BOLD files]])&lt;br /&gt;
#Create (or copy if experiment used a fixed schedule) your paradigm files for each fMRI run, and edit them using matlab (see &amp;quot;[[Par-Files]]&amp;quot;).&lt;br /&gt;
#*When editing par files, be sure to check how many volumes to drop for each par file; it may be different every time! See above link for more details.&lt;br /&gt;
#Create a subjects text file in your $SUBJECTS_DIR called &amp;quot;subjectname&amp;quot; that contains a list of your subjects; it will be needed later (used for [[Configure preproc-sess|preproc-sess]])&lt;br /&gt;
#*A quick way to do this, assuming you want to preprocess everyone, is to pipe the results of the ls command with the -1 switch (1 per line) into grep and then redirect the output to a file:&lt;br /&gt;
#*&amp;lt;code&amp;gt;ls -1 | grep &amp;quot;^FS&amp;quot; &amp;gt; subjects&amp;lt;/code&amp;gt;&lt;br /&gt;
#*This will list the contents of the current directory, 1 per line, then keep only lines starting with &#039;&#039;FS&#039;&#039;&lt;br /&gt;
#Configure your analyses (using [[Configure mkanalysis-sess|mkanalysis-sess]])&lt;br /&gt;
#Configure your contrast (using [[Configure mkcontrast-sess|mkcontrast-sess]])&lt;br /&gt;
#Preprocess your data (smoothing, slice-time correction, intensity normalization and brain mask creation) (using [[Configure preproc-sess|preproc-sess]])&lt;br /&gt;
#Check the *.mcdat files in each of the &#039;&#039;subject_name/bold/xxx&#039;&#039; directories to inspect the amount of motion detected during the motion correction step. Data associated with periods of excessive movement (&amp;gt;1mm) should be dealt with carefully. Columns in the text document from left to right are: (1) time point number (2) roll rotation (in degrees) (3) pitch rotation (in degrees) (4) yaw rotation (in degrees) (5) between-slice motion or translation (in mm) (6) within-slice up-down motion or translation (in mm) (7) within-slice left-right motion or translation (in mm) (8) RMS error before correction (9) RMS error after correction (10) total vector motion or translation (in mm). We need to look at column 10. If any of these values are above 1, this might be indicative of excessive movement. Consult with someone further up the chain for advice (undergrads ask a grad student; grad students ask Chris or a Postdoc if he ever has the funding to get one).&lt;br /&gt;
#Run the GLM for Single Subjects([[selxavg3-sess]])&lt;br /&gt;
#*Typically, we will do a group-level GLM. I have come to realize that it should generally suffice to do all processing in &#039;&#039;&#039;fsaverage&#039;&#039;&#039; surface space (mri_glmfit requires all operations to be done in a common surface space).&lt;br /&gt;
#*This will be relevant to the parameters you use in steps 4, 6, and 8&lt;br /&gt;
#Run the group-level GLM ([[mri_glmfit]])&lt;br /&gt;
Lab-specific documentation can be found on this wiki, but a more thorough (and accurate) description can be found on the [https://surfer.nmr.mgh.harvard.edu/fswiki/FsFastTutorialV5.1/FsFastFirstLevel Freesurfer Wiki]&lt;br /&gt;
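The column-10 motion check in step 8 can be scripted with awk. This is a hypothetical sketch: the sample line below is fabricated stand-in data, not real mcdat output, and the 1 mm threshold follows the description above:

```shell
# Flag time points whose total vector motion (column 10) exceeds 1 mm.
# The two lines written here are fabricated stand-in data for an .mcdat file.
printf '1 0.1 0.0 0.2 0.3 0.1 0.0 5.0 4.0 1.4\n2 0.0 0.1 0.1 0.2 0.0 0.1 5.1 4.1 0.3\n' > sample.mcdat
awk '$10 > 1 { print "time point " $1 ": " $10 " mm total motion" }' sample.mcdat
```

Run the awk line over &#039;&#039;subject_name/bold/*/*.mcdat&#039;&#039; to sweep every run at once; any output lines are the time points to discuss with someone further up the chain.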
&lt;br /&gt;
== Trouble Shooting ==&lt;br /&gt;
=== Missing surfaces ===&lt;br /&gt;
Not sure how this came to pass, as I have never encountered this before, but it was probably the result of one of the autorecon steps stopping early. I was running preproc-sess using the &#039;&#039;fsaverage&#039;&#039; surface (having already successfully run it on the &#039;&#039;self&#039;&#039; surface) and got an error message about being unable to find lh.sphere.reg. A quick google found a FreeSurfer mailing list archive email concerned with a different issue that had a similar solution. In Bruce the Almighty&#039;s words:&lt;br /&gt;
&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; you can use mris_register with the -1 switch to indicate that the target &lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; is a single surface not a statistical atlas. You will however still have to&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; create the various surface representations and geometric measures we expect&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; (e.g. ?h.inflated, ?h.sulc, etc....). If you can convert your &lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; surfaces to our binary format (e.g. using mris_convert) to create&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; an lh.orig, it would be something like:&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt;&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_smooth lh.orig lh.smoothwm&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_inflate lh.smoothwm lh.inflated&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_sphere lh.inflated lh.sphere&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; mris_register -1 lh.sphere $TARGET_SUBJECT_DIR/lh.sphere ./lh.sphere.reg&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt;&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; I&#039;ve probably left something out, but that basic approach should work.&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt;&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; cheers&lt;br /&gt;
 &amp;gt; &amp;gt;&amp;gt; Bruce&lt;br /&gt;
&lt;br /&gt;
The subject that had given me a problem already had ?h.inflated files, but no ?h.sphere files. I tried running some of the above steps, but there were missing dependencies. Right now I am running:&lt;br /&gt;
 recon-all -s $SUBJECT -surfreg&lt;br /&gt;
This allegedly produces the ?h.sphere.reg files as output.&lt;br /&gt;
&lt;br /&gt;
=== My script can&#039;t find my data ===&lt;br /&gt;
Some versions of the autorecon*.sh scripts have the SUBJECTS_DIR hard-coded. Or sometimes you will close your terminal window (e.g., at the end of the day), and then launch a new terminal window when you come back to the workstation (or resume working at a different computer). There&#039;s a good chance that your Freesurfer malfunction is the result of your SUBJECTS_DIR environment variable being set to the incorrect value. Troubleshooting step #1 should be the following:&lt;br /&gt;
 echo $SUBJECTS_DIR&lt;br /&gt;
If the wrong directory name is printed to the screen, setting it to the correct value may well fix your problem.&lt;br /&gt;
&lt;br /&gt;
At this point you might expect me to tell you how to set SUBJECTS_DIR to the correct value. But I&#039;m not going to do that, and here&#039;s why:&lt;br /&gt;
#It&#039;s documented elsewhere on the wiki&lt;br /&gt;
#If you&#039;re confused about how to set SUBJECTS_DIR, you&#039;re also likely to just blindly type whatever example command I give without understanding what the command does. If this is the case, please become more proficient with the lab&#039;s procedures and software.&lt;br /&gt;
&lt;br /&gt;
=== Is Freesurfer in your path? ===&lt;br /&gt;
This is an unlikely issue, but it is possible that your ~/.bashrc file doesn&#039;t add Freesurfer to your path. To check, launch a new terminal window. You should see something like the following:&lt;br /&gt;
 -------- freesurfer-Linux-centos6_x86_64-stable-pub-v5.3.0 --------&lt;br /&gt;
 Setting up environment for FreeSurfer/FS-FAST (and FSL)&lt;br /&gt;
 FREESURFER_HOME   /usr/local/freesurfer&lt;br /&gt;
 FSFAST_HOME       /usr/local/freesurfer/fsfast&lt;br /&gt;
 FSF_OUTPUT_FORMAT nii.gz&lt;br /&gt;
 SUBJECTS_DIR      /usr/local/freesurfer/subjects&lt;br /&gt;
 MNI_DIR           /usr/local/freesurfer/mni&lt;br /&gt;
If you don&#039;t, then open up your .bashrc file with a text editor:&lt;br /&gt;
 gedit ~/.bashrc&lt;br /&gt;
Then add the following lines to the bottom of the file:&lt;br /&gt;
 #FREESURFER&lt;br /&gt;
 export FREESURFER_HOME=/usr/local/freesurfer&lt;br /&gt;
 source ${FREESURFER_HOME}/SetUpFreeSurfer.sh&lt;br /&gt;
 &lt;br /&gt;
 #FSL&lt;br /&gt;
 export FSLDIR=/usr/share/fsl/5.0&lt;br /&gt;
 source ${FSLDIR}/etc/fslconf/fsl.sh&lt;br /&gt;
Save your changes, log out of Linux and log back in.&lt;br /&gt;
&lt;br /&gt;
=== ERROR: Flag unrecognized. ===&lt;br /&gt;
Most Linux programs take parameters, or &#039;&#039;flags&#039;&#039;, that modify or specify how they are run. For example, you can&#039;t just call the &amp;lt;code&amp;gt;recon-all&amp;lt;/code&amp;gt; command; you have to tell the program &#039;&#039;what&#039;&#039; data you want to work on, and this information is provided using the &amp;lt;code&amp;gt;-i&amp;lt;/code&amp;gt; flag. Other flags might tell the program how aggressive to be when deciding to remove potential skull voxels, for example.&lt;br /&gt;
There are no hard-and-fast rules, but to find the set of flags that you can use for a particular Linux program, there are a few options you can try:&lt;br /&gt;
#&amp;lt;code&amp;gt;man program_name&amp;lt;/code&amp;gt;&lt;br /&gt;
#&amp;lt;code&amp;gt;program_name -help&amp;lt;/code&amp;gt;&lt;br /&gt;
#&amp;lt;code&amp;gt;program_name -?&amp;lt;/code&amp;gt;&lt;br /&gt;
Often, simply running &amp;lt;code&amp;gt;program_name -some_flag&amp;lt;/code&amp;gt; with an invalid &#039;&#039;some_flag&#039;&#039; will also cause the usage information to be displayed.&lt;br /&gt;
&lt;br /&gt;
When you see an error message concerning an unrecognized flag, it is most likely because there is a typo in your command. For example:&lt;br /&gt;
 recon-all -autorecon1 watershed 15&lt;br /&gt;
Each of the flags is supposed to be prefixed with a &#039;-&#039; character, but in the example above, &amp;lt;code&amp;gt;-watershed&amp;lt;/code&amp;gt; was instead typed as &amp;lt;code&amp;gt;watershed&amp;lt;/code&amp;gt; &#039;&#039;without&#039;&#039; the &#039;-&#039; character. These little typos can be hard to spot. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;span style=&amp;quot;color:red&amp;quot;&amp;gt;&#039;&#039;&#039;If you are completely perplexed why something doesn&#039;t work out when you followed the directions to the letter, the first thing you should do is throw out your assumption that you typed in the command correctly.&#039;&#039;&#039;&amp;lt;/span&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:FreeSurfer]]&lt;/div&gt;</summary>
		<author><name>Erica</name></author>
	</entry>
</feed>