Connectivity Toolbox

From CCN Wiki
Revision as of 12:55, 1 November 2018 by Chris (talk | contribs)

The conn Functional Connectivity toolbox is a MATLAB-based program that hijacks SPM to do some data de-noising and remove potential artefacts that might skew your connectivity data. It is somewhat geared towards voxel-space analyses, but can support the surface-space analyses that we typically do in our lab. The walkthrough below is geared towards running conn on your FreeSurfer data.

Preconditions

These instructions assume that you have functional and anatomical data organized in the manner that FreeSurfer expects, that you have completed the autorecon steps on the T1 anatomical file, and that you have generated the surface meshes for your target participants. These instructions also assume that you have dropped some number of initial volumes from the BOLD runs (usually the first 4), if required. If you are working from the raw data archived from the CTRC scanner, this will be a required step; if you are working with BOLD data that has already been analyzed or preprocessed, this may already have been done, and you will not want to drop yet another 4 initial volumes!
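If you still need to drop those initial volumes, FreeSurfer's mri_convert can do it, and you can call it from within MATLAB. A minimal sketch, assuming FreeSurfer is on your system PATH; the file names are just an example:

```matlab
% Drop the first 4 volumes of a BOLD run using FreeSurfer's mri_convert
% (--nskip skips initial frames; file names here are illustrative)
nskip   = 4;
infile  = 'f.nii.gz';          % raw BOLD run
outfile = 'f_trimmed.nii.gz';  % trimmed output
cmd = sprintf('mri_convert --nskip %d %s %s', nskip, infile, outfile);
status = system(cmd);          % requires FreeSurfer on the system PATH
</imports>
```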

Because conn runs on top of SPM, you need to ensure that SPM is in your MATLAB path. Of course, conn also needs to be in your path. Check both using the which command in the MATLAB command window:

>> which spm
/opt/spm12/spm.m
>> which conn
/opt/conn/conn.m
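If either command comes back with "not found", add the toolbox directories to your path for the current session. A sketch, assuming the /opt install locations shown above (adjust for your system):

```matlab
% Add SPM and conn to the MATLAB path for this session
% (the /opt locations below are an assumption; use your own install dirs)
addpath('/opt/spm12');
addpath('/opt/conn');
```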

If you see the paths to both .m files, you can run conn from the MATLAB command window:

>> conn

Starting an Analysis in CONN

My goal here isn't going to be to duplicate the conn user manual, and we're not going through an entire functional analysis. But you have to start somewhere, so here we are...

When you launch conn, you will get a nice gui window with a menu bar along the top. We want to start a new project: Project > New (blank). You will get a uiwindow that asks where you want to save the project details. You can save this .mat file wherever you like, though I would recommend somewhere near your $SUBJECTS_DIR for the project. Next, you will have to specify parameters for setting up the analysis.

Basic information

The first uiwindow you need to populate has some general experimental setup information:

Number of subjects

Indicate here how many subjects you wish to process. This will tell conn how many datasets to expect. You don't even have to add everyone in the experiment to the analysis: if you have FreeSurfer data for 10 people but just want to process 1 or 2 of them, then just indicate that there are 1 or 2 subjects.

Number of sessions

By this, they mean functional runs. Our Imagery experiment has 2 visits with 6 functional runs each. If someone completed both visits, they would have 12 sessions' worth of data. As with the subjects parameter, you don't have to use all the runs. You might wish to just analyze the 6 runs from the first visit, in which case you would say there are 6 sessions. The program will almost certainly crash if some runs are missing for some participants, because it expects the same amount of data for each participant. You will need to separately process individuals with missing runs.

Repetition Time (seconds)

This is our experimental TR. For the Imagery study, our TR is approximately 2 seconds (2.047, to be precise).

Acquisition type

Leave it as the default, Continuous, which means the scanner is continuously acquiring BOLD data throughout the run. It's unlikely that you'll be analyzing data collected using sparse sampling.

Structural data

Here is where you have to find the T1 and surface files for your participants. They made the program clever enough to do pattern matching, so if you pick a root directory and then indicate the file pattern for the structural data, it will find all the matching files. For example, if you left-click on your $SUBJECTS_DIR, type in T1.mgz and click the Find button, it will identify the paths to all the structural files generated by FreeSurfer. From there, you highlight the files you wish to work with and click the Select button (make sure the number of selected files matches the number of highlighted Subjects). If the T1 data have already been preprocessed with FreeSurfer, you should be asked if you wish to also import the segmentation files. Say Yes.

One thing I will point out is that our FreeSurfer analyses are done in surface space, which doesn't care about where the particular voxels sit in relation to the cube of voxels surrounding the standard MNI template brain. For that reason, your structural and functional data will seldom be even close to aligned with the MNI brain. This isn't usually a problem, but I'm now considering coregistration of the data (but not spatial normalization) as a routine preprocessing step. Stay tuned.

Functional data

This works pretty much the same as selecting your structural data. Following our FreeSurfer conventions, you will be working with the f.nii.gz data (the zipped data will be automatically unzipped). Make sure that these data have already had the first 4 volumes dropped and that the slice timing and TR information has already been fixed in the header (see the page on Freesurfer BOLD files).
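The Setup choices above can also be scripted with conn's conn_batch function instead of clicked through the GUI. A minimal sketch, assuming one subject with six runs; the paths and file names are illustrative, and the exact field names should be checked against help conn_batch for your conn release:

```matlab
% Minimal conn_batch Setup sketch (one subject, six runs).
% All paths are illustrative; adjust to your $SUBJECTS_DIR layout.
BATCH.filename = '/path/to/project/conn_imagery.mat';  % project .mat file
BATCH.Setup.nsubjects = 1;
BATCH.Setup.RT = 2.047;                                % TR in seconds
BATCH.Setup.structurals{1} = '/path/to/subj01/mri/T1.mgz';
for ses = 1:6
    BATCH.Setup.functionals{1}{ses} = ...
        sprintf('/path/to/subj01/bold/%03d/f.nii.gz', ses);
end
conn_batch(BATCH);
```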

ROIs

Here we indicate regional sources of variance that will be used to denoise our data. By default conn has selected Grey and White matter, CSF, atlas and networks. We wish to extract the average timeseries for the Grey matter, and the Principal Component Analysis signal from the White matter and CSF. We can remove the atlas and networks ROIs.

There's a checkbox for regressing out covariates before regressing out the PCA noise covariates, but I don't see any reason to use that option.

Conditions

This is set up by default to only include a rest condition that spans the entire run, as if it's resting state data. But we're working with task data using a mixed design, where we might add the event or block onsets. The thing is, I don't know that we want to have conn do any funny business with each of our conditions because I'm only using the program for preprocessing, and not analysis. It would be catastrophic to regress out the condition-related signal that you're actually trying to study! So for the time being, I'm removing all conditions, so that conn doesn't have anything to go on.

Covariates (1st-level)

These would be timeseries-related covariates, like motion parameters, or perhaps the onsets of button presses that you want to exclude based on the runtime RT data.

Covariates (2nd level)

These are participant-level covariates (e.g. age, task performance, etc.), though if you're studying individual differences, these may be precisely the factors you're interested in, so you wouldn't want to regress those out. The AllSubjects covariate is added by default, but that's fine, because it's just a vector of 1s, and therefore does not covary with anything.

Processing options

Almost there!

Enabled analyses

Check that we want to do Voxel-to-Voxel analyses.

Optional output files

Check that we create confound-corrected beta-maps and confound-corrected time-series (our primary objective for using conn).

Analysis space (voxel-level)

Select surface: same as template (FreeSurfer fsaverage). You'll be prompted for a smoothing level. I've just been going with the default of 10.

Analysis mask (voxel-level)

Select Implicit mask (subject-specific), because our subjects' voxels are not guaranteed to fall within the template brainmask.

BOLD signal units

Let's use Raw signal.

Preprocessing

Click the green Preprocessing button to bring up a uidialogbox to select a processing pipeline. We wish to use the preprocessing pipeline for surface-based analyses in subject space. This will populate the window of processing steps. Note that we already have our structural segmentation from FreeSurfer, so you can go ahead and remove that last item from the prepopulated list. Click Start.
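If you prefer scripting, the same pipeline can be requested through conn_batch. A sketch, assuming the surface-based subject-space pipeline is named 'default_ss' in your conn version; the pipeline and slice-order strings vary across releases, so confirm them with help conn_batch:

```matlab
% Run conn's surface-based (subject-space) preprocessing pipeline.
% The pipeline name and slice-order string below are assumptions --
% check 'help conn_batch' for the exact names in your conn release.
clear BATCH;
BATCH.filename = '/path/to/project/conn_imagery.mat';   % existing project
BATCH.Setup.preprocessing.steps = 'default_ss';         % surface-based pipeline
BATCH.Setup.preprocessing.sliceorder = 'interleaved (top-down)';
conn_batch(BATCH);
```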

Slice timing window

You'll be asked for the slice order. Data from the CTRC scanner used interleaved top-down.

Functional outlier detection

If ART is in your pipeline (it likely will be if you did what I described above), here is where you set the threshold for detecting "bad" volumes. Intermediate is probably perfectly reasonable.