Connectivity Toolbox
The conn Functional Connectivity toolbox is a MATLAB-based program that hijacks SPM to do some data de-noising and remove potential artefacts that might skew your connectivity data. It is somewhat geared towards voxel-space analyses, but can support the surface-space analyses that we typically do in our lab. The walkthrough below is geared towards running conn on your FreeSurfer data.
Preconditions
These instructions assume that you have functional and anatomical data organized in the manner that FreeSurfer expects, that you have completed the autorecon steps on the T1 anatomical file, and that you have generated the surface meshes for your target participants. These instructions also assume that you have dropped some number of initial volumes from the BOLD runs (usually the first 4) if required. If working from the raw data archived from the CTRC scanner, this will be a required step; if you are working with BOLD data that has already been analyzed or preprocessed, this may already have been done, and you will not want to drop yet another 4 initial volumes!
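If you do still need to trim those initial volumes, here is a minimal sketch using FreeSurfer's mri_convert (f_raw.nii.gz is a made-up name for the untrimmed run; use whatever your raw file is actually called):
mri_convert --nskip 4 f_raw.nii.gz f.nii.gz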
Because conn runs on top of SPM, you need to ensure that SPM is in your MATLAB path. Of course, conn also needs to be in your path. Check that they are by using the which command in the MATLAB command window:
>> which spm
/opt/spm12/spm.m
>> which conn
/opt/conn/conn.m
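If which comes up empty for either one, you can add the missing toolbox for the current session (assuming the install locations shown above; yours may differ):
>> addpath('/opt/spm12')
>> addpath('/opt/conn')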
If you see the paths to both .m files, you can run conn from the MATLAB command window:
>> conn
Setup in CONN
My goal here isn't going to be to duplicate the conn user manual, and we're not going through an entire functional analysis. But you have to start somewhere, so here we are...
When you launch conn, you will get a nice gui window with a menu bar along the top. We want to start a new project: Project > New (blank). You will get a uiwindow that asks where you want to save the project details. You can save this .mat file wherever you like, though I would recommend somewhere near your $SUBJECTS_DIR for the project. Next you will have to specify parameters for setting up the analysis.
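As an aside, most of what follows can also be scripted through conn's conn_batch interface, which I'll sketch along the way. Treat these sketches as illustrative rather than gospel: field names follow conn_batch's documented conventions, and all paths are examples. Creating the same blank project from a script would look something like:
BATCH.filename = '/path/to/conn_project01.mat';   % example path; somewhere near your $SUBJECTS_DIR is sensible
conn_batch(BATCH);                                % creates (or loads) the project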
Basic information
The first uiwindow you need to populate has some general experimental setup information:
Number of subjects
Indicate here how many subjects you wish to process. This will tell conn how many datasets to expect. You don't even have to add everyone in the experiment to the analysis: if you have FreeSurfer data for 10 people but just want to process one or two of them, then indicate that there are 1 or 2 subjects.
Number of sessions
By this, they mean functional runs. Our Imagery experiment has 2 visits with 6 functional runs each. If someone completed both visits, they would have 12 sessions worth of data. As with the subjects parameter, you don't have to use all the runs. You might wish to just analyze the 6 runs from the first visit, in which case you would say there are 6 sessions. The program will almost certainly crash if some runs are missing for some participants because it will expect the same amount of data for each participant. You will need to separately process individuals with missing runs.
Repetition Time (seconds)
This is our experimental TR. For the Imagery study, our TR is approximately 2 seconds (2.047, to be precise). For other data, the TR may be different, though a 2-second TR is common.
Acquisition type
Leave it as the default, Continuous, which means the scanner is continuously acquiring BOLD data throughout the run. It's unlikely that you'll be analyzing data collected using sparse sampling.
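In batch form, the basic information above maps onto Setup fields like these (a sketch; values mirror this walkthrough):
BATCH.Setup.nsubjects = 2;         % e.g., processing just 2 of your 10 participants
BATCH.Setup.RT = 2.047;            % our TR, in seconds
BATCH.Setup.acquisitiontype = 1;   % 1 = continuous acquisition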
Structural data
Here is where you have to find the T1 and surface files for your participants, which are found in the 'mri' directory for each participant. They made the program clever enough to do pattern matching, so if you pick a root directory and then indicate the file pattern for the structural data, it will find all the matching files. For example, if you left-click on your $SUBJECTS_DIR, type in T1.mgz and click the Find button, it will identify the paths to all the structural files generated by FreeSurfer. From there, you highlight the files you wish to work with and click the Select button (make sure the number of selected files matches the number of highlighted Subjects). If the T1 data have already been preprocessed with FreeSurfer, you should be asked if you wish to also import the segmentation files. Say Yes.
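You can sanity-check what that pattern matching should turn up with the equivalent terminal command (assuming $SUBJECTS_DIR is set in your shell):
find $SUBJECTS_DIR -name T1.mgz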
One thing I will point out is that our FreeSurfer analyses are done in surface space, which doesn't care about where the particular voxels sit in relation to the cube of voxels surrounding the standard MNI template brain. For that reason, your structural and functional data will seldom be even close to aligned with the MNI brain. This isn't usually a problem, but I'm now considering coregistration of the data (but not spatial normalization) as a routine preprocessing step. Stay tuned.
Functional data
This works pretty much the same as selecting your structural data. Following our FreeSurfer conventions, you will be working with the f.nii.gz data (the zipped data will be automatically unzipped). Make sure that these data have already had the first 4 volumes dropped and that the slice timing and TR information has already been fixed in the header (see the page on Freesurfer BOLD files).
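For the batch-minded, the selected files land in Setup fields like these (a sketch; the paths are examples following our FreeSurfer layout, and FS_0231 is one of our subject IDs):
BATCH.Setup.structurals{1} = '/path/to/subjects/FS_0231/mri/T1.mgz';            % one per subject
BATCH.Setup.functionals{1}{1} = '/path/to/subjects/FS_0231/bold/001/f.nii.gz';  % indexed {subject}{session}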
ROIs
Here we indicate regional sources of variance that will be used to denoise our data. By default, conn has selected Grey and White matter, CSF, atlas, and networks. We wish to extract the average timeseries for Grey matter and the Principal Component Analysis (PCA) signal from White matter and CSF. We can remove the atlas and networks ROIs from the list of ROIs we care about so that we only have the first 3.
There's a checkbox for regressing out covariates before regressing out the PCA noise covariates, but I don't see any reason to use that option.
Conditions
This is set up by default to only include a rest condition that spans the entire run, as if it's resting state data. But we're working with task data using a mixed design, where we might add the event or block onsets. The thing is, I don't know that we want to have conn do any funny business with each of our conditions because I'm only using the program for preprocessing, and not analysis. It would be catastrophic to regress out the condition-related signal that you're actually trying to study!
It turns out that you have to have at least 1 condition specified, so you can use the rest condition as a placeholder.
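In batch form, that placeholder condition looks something like this (a sketch; an onset of 0 with duration inf is conn's convention for "the whole session"):
BATCH.Setup.conditions.names = {'rest'};
BATCH.Setup.conditions.onsets{1}{1}{1} = 0;        % indexed {condition}{subject}{session}
BATCH.Setup.conditions.durations{1}{1}{1} = inf;   % spans the entire run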
Covariates (1st-level)
This would be timeseries related covariates, like motion parameters or perhaps if you wanted to exclude the onsets of button presses based on the runtime RT data. We can leave this blank unless we have a reason to get fancy.
Covariates (2nd level)
These are participant-level covariates (e.g., age, task performance, etc.), though if you're studying individual differences, these may be precisely the factors you're interested in, so you wouldn't want to regress them out. The AllSubjects covariate is added by default, but that's fine, because it's just a vector of 1s and therefore does not covary with anything.
Processing options
Almost there!
Enabled analyses
Check that we want to do Voxel-to-Voxel analyses. Other analyses may be checked by default; unless otherwise specified, we only care about Voxel-to-Voxel.
Optional output files
Check that we create confound-corrected beta-maps and confound-corrected time-series (our primary objective for using conn).
Analysis space (voxel-level)
Select Volume: same as functionals so that conn doesn't resample the data. I made the mistake of saying that analyses would be done in FreeSurfer's fsaverage space, and it totally mangled the data by throwing away all the spatial information about the voxels (it produced a time-series cube, which is decidedly not brain-shaped).
Analysis mask (voxel-level)
Select Implicit mask (subject-specific) because our subjects' voxels are not guaranteed to fall within the template brainmask.
Second-Level Analyses
Second-level analyses are another option in the options menu. Unless otherwise specified, you can ignore this option.
BOLD signal units
Let's use Raw signal.
Preprocessing
Click the green Preprocessing button to bring up a uidialogbox to select a processing pipeline. We wish to use the preprocessing pipeline for surface-based analyses in subject space. This will populate the window of processing steps. Note that we already have our structural segmentation from FreeSurfer, so you can go ahead and remove that last item from the prepopulated list. Click Start.
Slice timing window
You'll be asked for the slice order. Data from the CTRC used interleaved top-down.
Functional outlier detection
If ART is in your pipeline (it likely will be if you did what I described above), here is where you set the threshold for detecting "bad" volumes. Intermediate is probably perfectly reasonable.
Denoising (1st level)
A bunch of data preprocessing will happen over the next 15 minutes or so (1 subject, 6 runs) after you click DONE on the setup process. A status window gives you updates on what steps in the preprocessing pipeline are executing. Once it's finished, you find yourself in the Denoising step.
In this step, you can remove confounds from your data and see the effect on the distribution of connectivity values in the window area on the right. Before denoising, there is generally a positive bias on your connectivity values, and you should find the denoised data to be 0-centered with a narrower distribution across all your sessions. You can also see how the sessions differed within individuals, so that's somewhat interesting. Anyways, at this point, we select which confounds we want to remove from our data:
White Matter
Removes the WM timeseries, by default set to a 5-component PCA decomposition. The number of components can be changed (try increasing it from 5 to 10) if the denoised distribution is still biased.
CSF
Likewise, this removes the CSF timeseries, also set to a 5-component PCA.
Realignment
Regresses out the 6 motion-correction parameters and, by default, adds their first-order derivatives (for a total of 12 regressors).
Scrubbing
Regresses out whether the data were flagged for outlier scrubbing by ARTRepair.
Effect of Condition (e.g., rest, dummy, task)
If we're just looking to use conn to remove noise from our data, and especially if we have task-related data, I don't think it makes sense to remove the experimental design from our data as a confound. So highlight this entry and right-click on the right angle bracket (>) that appears between the columns to remove it.
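For completeness, here is how these confound choices would look in a conn_batch sketch (again, illustrative; the names/dimensions/deriv fields follow conn_batch's documented conventions):
BATCH.Denoising.confounds.names = {'White Matter', 'CSF', 'realignment', 'scrubbing'};
BATCH.Denoising.confounds.dimensions = {5, 5, [], []};   % 5 PCA components each for WM and CSF
BATCH.Denoising.confounds.deriv = {0, 0, 1, 0};          % add first-order derivatives of motion
BATCH.Denoising.filter = [0.008, 0.09];                  % band-pass window in Hz (see below)
BATCH.Denoising.done = 1;                                % run the denoising step on conn_batch(BATCH)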
Band-pass filter
Removes signal that falls outside the high- and low-end of the frequency range. These values are expressed in Hz, or cycles per second. As a postdoc, I saw firsthand what happens if you don't give this some consideration. Suppose you have a block design where each block is, say, 40 seconds long. Any signal that is dependent on the block will appear at a frequency that matches the block frequency: if it takes 40 seconds for 1 block to complete, then the block frequency, in cycles per second, is 1/40 = .025 Hz. The default band-pass filter window goes from .008 Hz (1/125 seconds) to .09 Hz (1/11 seconds). Our 40-second block signal would be safe within this range, but if you were to raise the low-frequency cutoff to .03 Hz (roughly 1/33 seconds), then you'd end up filtering out your block signal, which may not be something you want to do.
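A quick sanity check of that arithmetic in MATLAB:
block_duration = 40;                                     % seconds per block
block_freq = 1/block_duration;                           % = 0.025 Hz
band = [0.008 0.09];                                     % conn's default band-pass window, in Hz
in_band = block_freq > band(1) && block_freq < band(2)   % true: the block signal survives filtering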
Click the Done Button
Do it to begin the denoising pipeline. Go make yourself a sandwich.
Obtaining Denoised Time-Series Data
Your project folder will contain a data/ folder full of output files produced during all the above steps. Of interest are the series of .matc files. conn ships with a conversion utility, conn_matc2nii, that will convert the .matc file for a session into a .nii file:
>> conn_matc2nii('DATA_Subject001_Session001.matc', false)
>> % the second parameter is optional and indicates whether to display a waitbar; by default, this is set to false
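If you have a pile of sessions to convert, a small loop saves some typing (a sketch; the project path is hypothetical):
projectDir = '/path/to/conn_project01';   % wherever your conn project and its data/ folder live
matc = dir(fullfile(projectDir, 'data', 'DATA_Subject*_Session*.matc'));
for k = 1:numel(matc)
    conn_matc2nii(fullfile(matc(k).folder, matc(k).name), false);
end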
After you execute this command in MATLAB, you'll end up with a file with the same name, except with a .nii extension.

Now, at this point, you probably started off with an f.nii.gz file, and now have a DATA_Subjectxxx_Sessionyyy.nii file. In order to look at this newly denoised data in FreeSurfer, you will need to run it through the FS-FAST preproc-sess step. There's a good chance the source directory already has an f.nii.gz file, as well as a series of fmcpr.${direction}.sm${smoothing}.fsaverage.?h.nii.gz files. Assuming you want to hold on to these, I'd make a backup directory in each of the run subdirectories and move all the files to the backup directory (you can keep any .par files here). After you've backed everything up, move the new denoised files into the appropriate directories and rename them to f.nii.
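As a concrete sketch of that backup-and-swap (all paths and filenames are examples; adapt to your own runs):
cd $SUBJECTS_DIR/FS_0231/bold/001                   # an example run directory
mkdir backup
mv f.nii.gz fmcpr.*.nii.gz backup/                  # stash the originals (.par files can go here too)
mv /path/to/DATA_Subject001_Session001.nii f.nii    # the denoised data becomes the new f.nii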
We have also discovered that the denoised data may have been stripped of its TR information, and so you may have to put this back in. To check if this is the case:
FILENAME=f.nii
fslhd ${FILENAME}
Running this command will spit out the NIfTI file header information. You are looking for the dt or pixdim4 information:
...
pixdim3        4.100000
pixdim4        1.000000    <-- it says we have a 1-second TR!
pixdim5        0.000000
...
If this is the case, you can fix it following the standard procedure for setting your TR (none of our experiments have a 1-second TR; it is likely 2.047).
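Following that procedure, the fix is a one-liner with AFNI's 3drefit (using our usual TR):
3drefit -TR 2.047 f.nii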
When you redo the preproc-sess step, make sure you do not apply slice-time correction, because this has already been applied to the denoised data:
preproc-sess -s FS_0231 -per-run -surface fsaverage lhrh -mni305 -fwhm 4 -fsd bold
The departure from the other wiki entry on preproc-sess is that we do not use the -sliceorder flag on the denoised data. Other than that, just process the denoised functional data like you would any other data. When done, you'll have a set of fmcpr.${smooth}.fsaverage.?h.nii.gz files (they will no longer have the word 'down' in the filename, so you may have to modify some of your later analysis or data-processing scripts accordingly -- just be on the lookout for file not found errors, and if they pop up, the first thing to check is whether your command or script was expecting a filename with the slice order). Of course, another option would be to manually rename the preprocessed data when preproc-sess has completed, so that the preprocessed denoised data still includes the slice order in the name. I'm not sure how big of a problem this minor change in filename is going to be for our various scripts.
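If you go the renaming route, here's a sketch (assuming bash, 4 mm smoothing, and that your old filenames used the 'down' slice order):
cd $SUBJECTS_DIR/FS_0231/bold/001
for f in fmcpr.sm4.fsaverage.?h.nii.gz; do
  mv "$f" "${f/fmcpr./fmcpr.down.}"   # e.g., fmcpr.sm4... -> fmcpr.down.sm4...
done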