TSTagger
A MATLAB function named TSTagger has been written to facilitate assigning volumes from a time series to a particular condition for supervised learning in a neural network. The function has no return value; instead, it writes a series of .csv files. Each row in a .csv file represents an event from the time series, and the input pattern for that row consists of the median values of each column of the time series within a 5-second window (the default duration) following the event onset.
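To make the windowing concrete, here is a minimal sketch of the kind of computation described above; it is an illustration only, not TSTagger's actual code, and the variable names (bold, onset_s, win_s) are made up for the example. Given a timepoints x regions matrix, an event onset in seconds, the TR, and a window duration, the pattern is the column-wise median of the volumes that fall within the window:

 %Illustrative sketch only (not TSTagger's actual implementation)
 bold = randn(200, 10);        %timepoints x regions matrix (fake data)
 tr = 2.047;                   %scan interval in seconds
 onset_s = 30.5;               %event onset in seconds
 win_s = 5;                    %window duration (TSTagger's default)
 first_vol = floor(onset_s/tr) + 1;          %first volume at or after onset
 last_vol  = floor((onset_s + win_s)/tr);    %last volume inside the window
 pattern = median(bold(first_vol:last_vol, :), 1);  %1 x regions input pattern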

function TSTagger(varargin)
 Isolate TimeSeries data associated with each condition in a .mat runtime
 file
 
 Mandatory arguments:
   condition: a cell array of condition codes matching values in the
       condition column

   tr: scan interval (e.g., 2.047)

   dat: a cell array of structs: {dat.expinfo, dat.mat}, where expinfo comes
     from the PsychToolBox runtime .mat files, and mat is a (usually) normalized 
     timepoints x regions matrix of BOLD data generated by the
     gettimecourses.sh BASH script 
   
 Optional arguments:
   duration: a scalar indicating the number of seconds of data to include 
   in each event pattern (default: 5)

   volumes_dropped: number of INITIAL volumes dropped (i.e., volumes from 
       the start of the run), either as a single value or else as a vector
       of numbers. If a single value is provided, it will be applied to
       all paired data files, otherwise the values will be applied to the
       corresponding dat argument cell. Important: Do not use this parameter 
       if your event timestamps have already been adjusted for the volumes dropped 

   precision: input values will be rounded to PRECISION decimal places
   (default: 3)

   jitter: n vectors will be generated for each event, where each element
   of a vector is randomly selected from the lower quartile, median, or upper
   quartile of values within the window (default: 1 - only one vector, using
   the median, is generated for each event)

   jitter_p: jittered vectors will have a jitter_p proportion of values
   randomly replaced with the lower or upper quartile value (default:
   0.1, where 0.05 are replaced by the lower and 0.05 by the upper quartile)

   bias: when jittering, the lower and upper quartile values will be
   pulled towards the median value with a weighting of bias:1. (default:
   2, meaning that each jittered value is weighted 2:1 in favour of the
   median)
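The jitter-related options describe a simple data-augmentation scheme. The sketch below shows one plausible reading of it and is an assumption rather than TSTagger's actual code: quartiles are computed per column within the event window, pulled toward the median with a bias:1 weighting, and a jitter_p proportion of elements in each generated vector is replaced by those adjusted quartile values (some lower, some upper). Note that quantile requires the Statistics and Machine Learning Toolbox.

 %Illustrative jitter sketch (assumption, not TSTagger's actual code)
 win = randn(3, 10);                 %volumes x regions inside one event window (fake data)
 jitter_p = 0.1;                     %proportion of elements to perturb
 bias = 2;                           %weighting of the median relative to the quartile
 med = median(win, 1);
 loq = quantile(win, 0.25, 1);       %lower quartile per column
 upq = quantile(win, 0.75, 1);       %upper quartile per column
 loq = (bias*med + loq)/(bias + 1);  %pull quartiles toward the median (bias:1)
 upq = (bias*med + upq)/(bias + 1);
 vec = med;                          %start from the median vector
 idx = randperm(numel(vec), round(jitter_p*numel(vec)));  %elements to replace
 half = floor(numel(idx)/2);
 vec(idx(1:half)) = loq(idx(1:half));          %some replaced by the lower quartile
 vec(idx(half+1:end)) = upq(idx(half+1:end));  %the rest by the upper quartile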

==Use Case: Tagging Time Series Data from CCN Lab Experiments==

%CCN Lab experiments save runtime data in a series of .mat files (1 for each run) 
%use PTBParser to extract the runtime data from the .mat files for the participant
 eio=PTBParser();

%Use loadFSTS() to import associated time series from *.wav.txt files
M=loadFSTS(); %No parameters given ... see the wiki entry for [[Importing_Time_Series_Into_MATLAB]]
 
%Important: the steps that transform the raw time series matrices M into the scaled matrices SCALED are omitted here
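%Illustrative placeholder for the omitted scaling step. Assumption only: the
%lab's actual procedure may differ. One common normalization is to z-score each
%region's time course within each run (column-wise):
 for i=1:length(M)
   SCALED{i} = (M{i} - mean(M{i},1)) ./ std(M{i},0,1); %column-wise z-score
 end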

 nruns=6;
 for i=1:nruns
   DAT{i}.expinfo=eio{i};
   DAT{i}.mat=SCALED{i};
 end
 hifam=[11 21 31]; %Condition codes for hi familiar Semcat experiment
 lofam=[12 22 32];
 TR=2.047; %TR in the semcat experiment
 DROPVOLS=4; %We dropped the first 4 volumes when processing the data
 TSTagger('tr', TR, 'volumes_dropped', DROPVOLS, 'condition', ...
      hifam, 'dat', DAT);
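
The example above defines lofam but only tags the high-familiarity events; presumably, the low-familiarity events are tagged the same way with a second call:

 %Presumed second call for the low-familiarity condition codes defined above
 TSTagger('tr', TR, 'volumes_dropped', DROPVOLS, 'condition', ...
      lofam, 'dat', DAT);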

==Use Case: Tagging Time Series Data from an External Experiment==

For experiments run elsewhere, we cannot assume a standard format for the experiment metadata (e.g., timestamps, condition codes, etc.). However, if you have gotten this far in the process, I will assume you have figured out how to create a .par file for the first-level BOLD analysis of these data. A MATLAB script called <code>PARParser.m</code> can be found in ubfs/Scripts/Matlab; it translates a series of .par files into a data structure compatible with the one generated by PTBParser().
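
For reference, a .par paradigm file is a plain-text table with one row per event. The exact columns can vary, but a common FreeSurfer-style layout is onset time (seconds), condition code, duration (seconds), weight, and an optional condition label. The following excerpt is a made-up illustration consistent with the Horikawa example below (15-second blocks, conditions 1 and 2), not an actual file from that experiment:

  0.00    1   15.00   1   living
 15.00    0   15.00   1   rest
 30.00    2   15.00   1   nonliving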

%The following lines tagged 20 runs of data from the Horikawa experiment using 20 .par files (001.par to 020.par)
expinfo=PARParser('subject', 'FS_01'); %if no subject is given, default name is "no_name"
[M, hemis]=loadFSTS();
%Important: the steps that transform the raw time series matrices M into the scaled matrices SCALED are omitted here
nruns=20;
for i=1:nruns
   DAT{i}.expinfo=expinfo{i};
   DAT{i}.mat=SCALED{i};
end
TR=3; %horikawa TR was 3 seconds; this can be found by examining the .nii header
condition=[1,2]; %horikawa par files were configured for living vs. nonliving
duration=15; %each tagged time window is 15 seconds long; corresponds to horikawa imagery block duration
%default duration is 5 seconds
TSTagger('tr', TR, 'condition', condition, 'dat', DAT, 'duration', duration)
%you will now have a series of nruns files labeled FS_01_nnn.csv
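
If you want to inspect one of those files from MATLAB, something like the following should work, assuming the output is plain numeric CSV (if condition labels are written as text, use readtable instead):

 %Quick inspection of one output file (file name assumed from the naming pattern above)
 patterns = readmatrix('FS_01_001.csv');  %one row per tagged event
 size(patterns)                           %rows = events, columns = pattern values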