Trial-wise parameters

Giulia C, 2018-06-04 to 2018-07-21
  • Giulia C

    Giulia C - 2018-06-04

    Hi,
    I collected data for an experiment aimed at assessing the physiological responses to 4 blocks of stimuli, each presented for 8 seconds; ITIs were 8 seconds too. I imported the SCR data (exported from BrainVision), trimmed them, and then, following the tutorial, used the GLM to compare the two conditions (corresponding to the two different types of stimuli) and finally contrasted them. I see that the statistics exported from the GLM models are subject-wise, but since I am also interested in a trial-wise analysis, I was wondering whether it is possible to extract the SCR response to each single stimulus with PsPM, and how to do so. I am new to physiological analysis. I also used Ledalab, which extracts phasic/tonic activity per stimulus, but it is less clear to me how to do a similar thing in PsPM.

    Thank you,

    Giulia

     
  • Dominik Bach

    Dominik Bach - 2018-06-06

    Hi Giulia

    • single-trial estimates are easily obtained in GLM (if you find that GLM is suitable here). Just model 1 trial per condition, i.e. you have as many conditions as you have trials.
    • I'm not sure GLM is the best way to analyse the response to an 8 s stimulus. Are you modelling the stimulus onset? GLM assumes that responses directly follow the stimulus onset with constant latency, which may not be the case. You can look at averaged responses per condition (using Segment Mean in the Tools menu) to check this assumption. Otherwise, I'd recommend using the non-linear model, or, if you have more than one SCR per stimulus, possibly using the SF model on the entire time series and later splitting it into conditions.
    • Ledalab does not use any particular model to separate phasic and tonic activity. It simply filters the data, calls the filtered data "phasic" and the residual "tonic". Any software that does filtering can do that, but the problem is that there is no clear psychological or physiological meaning to the "tonic" component - see Boucsein 2012 for a discussion of "skin conductance level". If you wanted to get this separation in PsPM, you can just open the GLM file and subtract the preprocessed data in glm.Y from the raw data in glm.input.y. This will be similar to the "tonic" activity in Ledalab.
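    A minimal MATLAB sketch of both points, assuming a PsPM-style timing file with 'names' and 'onsets' cell arrays; the onset values and filenames are placeholders, while glm.Y and glm.input.y are the fields named above:

    % (a) one condition per trial: as many conditions as trials
    trial_onsets = [10 26 42 58];            % placeholder onsets in seconds
    names  = cell(1, numel(trial_onsets));
    onsets = cell(1, numel(trial_onsets));
    for k = 1:numel(trial_onsets)
        names{k}  = sprintf('trial_%02d', k);
        onsets{k} = trial_onsets(k);         % a single onset per condition
    end
    save('per_trial_timing.mat', 'names', 'onsets');

    % (b) "tonic" component: raw data minus preprocessed data
    load('glm_model.mat');                   % contains the struct glm
    tonic = glm.input.y - glm.Y;             % if the lengths differ, resample first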

    Hope this helps
    Dominik

     
  • Giulia C

    Giulia C - 2018-06-08

    Thank you very much for your reply, really useful and comprehensive.

     

  • Giulia C

    Giulia C - 2018-06-11

    Dear Dominik,

    Sorry for disturbing you again. I have problems setting up the non-linear model. When I try to run it using the batch (I added the tscr file and the timing file), it shows me this error:

    Error in event definition. Either events are outside the file, or trials overlap.

    I don't understand why this happens, since I trim the SCR data 8 seconds before and 10 seconds after the last marker indicated in the timing file. I didn't change any other option in the batch. Is it possible that it shows this message because I don't have a marker channel in the SCR data? I imported them from a txt file exported from BrainVision, which contained only the SCR data, whereas the markers were exported in another txt file.

    I then downloaded the DCM dataset to look into its structure, and I tried to create tscr files with a similar structure, adding the marker channel "manually". But then, if I try again to run the non-linear model with that file, it shows these two other errors:

    Warning: /tscr_S02_EXP_GSR.mat is not a valid PsPM file:
    The data in channel 2 is out of the range [0, infos.duration]

    Warning: No SCR data contained in file tscr_S02_EXP_GSR.mat

    What do you think? Or did I do something wrong in the timing file?

    I attached the files that I tried to use, so you can take a look:
    mod_tscr_S02 is the file whose structure I tried to modify to make it similar to your dataset;
    tscr_01 is the original file from another subject;
    S01_all_onsets is the timing file.

    Thank you very much for your help

     
  • tobias moser

    tobias moser - 2018-06-13

    Hi Giulia

    to me it seems that the events have timings which relate to the original data file and should be corrected to agree with the trimming. According to tscr_S01_EXP_GSR.mat, the trimpoints (in 'infos.trimpoints') are [401.9960 1.5330e+03]. Therefore you could correct your timings for the DCM like this:

    timingfile = load('S01_all_onsets.mat');
    datafile = load('tscr_S01_EXP_GSR.mat');
    % shift the event times so they are relative to the start of the trimmed file
    timingfile.events{1} = timingfile.events{1} - datafile.infos.trimpoints(1);
    timingfile.events{2} = timingfile.events{2} - datafile.infos.trimpoints(1);
    events = timingfile.events;
    save('S01_all_onsets.mat', 'events');
    

    Also, if you add the marker channel manually to the data file, you should add it before trimming, because then the marker timings are corrected by the trimming function. Otherwise (as in your case) the timings remain relative to the original file, so they point to incorrect timepoints and can even fall outside the trimmed file.
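    For illustration, that order could look like this in MATLAB; the marker_times variable and filenames are placeholders, and the exact pspm_trim argument conventions should be checked against the PsPM help for your version:

    load('S02_EXP_GSR.mat');                  % untrimmed file with infos and data
    marker = struct();
    marker.data = marker_times(:);            % placeholder: marker onsets in seconds
    marker.header.chantype = 'marker';
    marker.header.sr = 1;                     % event channel: times in seconds, not samples
    marker.header.units = 'events';
    data{end+1} = marker;
    save('S02_EXP_GSR.mat', 'infos', 'data');

    % now trim 8 s before the first and 10 s after the last marker
    trimmed = pspm_trim('S02_EXP_GSR.mat', -8, 10, 'marker');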

    Best
    Tobias

     
  • Giulia C

    Giulia C - 2018-07-21

    Thank you very much for the code, it worked perfectly!

    I'm sorry to disturb you again, but I have some more questions.
    I've recently imported the output of the DCM models (flexible response) into R to run further analyses, but I realised that the distribution of the flexible response across trials seems to follow a gamma or inverse Gaussian distribution (see attached screenshot). Is that normal? My main concern is that while most values are distributed around zero, there are also some weirdly high ones (some of them >5, and a small percentage even around 15-20). Could they be due to genuine responses to the stimuli? If not, do they maybe correspond to some artifacts? The stimuli of my experiment are artworks differing in their emotional content, but I am a bit suspicious of such a wide range of values.

    Again, many thanks for your time and consideration.

     
  • Dominik Bach

    Dominik Bach - 2018-07-21

    Hi Giulia

    there are two points here. First, the distribution of residuals in a statistical test: since DCM amplitude estimates (just like classical peak-scoring measures) are bounded from below, they are not normally distributed. However, when you compute contrasts (average differences) between conditions, these will usually be approximately normal.

    The more serious point is whether you believe the estimates, and I share your view that it's better to check when some of the estimates are an order of magnitude higher than the rest. What sometimes happens is an overfit of steeply rising responses, where the response dispersion is at the lower bound and the amplitude very high. You can see this by looking at the dispersion estimate. If this is the case, I'd suggest bounding the dispersion (which is a DCM option) to at least 0.3 s SD (this is the average physiological dispersion of SN bursts, see Gerster et al. 2018, Psychophysiology).
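    As a rough way to check this, one could look for trials where a very high amplitude coincides with a dispersion estimate near the lower bound; the field names and column indices below are assumptions about a typical PsPM DCM model file, so inspect your own file (e.g. via dcm.names) to confirm them:

    load('dcm_model.mat');                    % assumed to contain the struct dcm
    disp(dcm.names)                           % shows which column holds which statistic
    amp = dcm.stats(:, 1);                    % assumed column: response amplitude
    disp_sd = dcm.stats(:, 3);                % assumed column: response dispersion (s)
    suspect = find(amp > 5 & disp_sd < 0.3);  % steeply rising, very large responses
    fprintf('%d suspect trial(s)\n', numel(suspect));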

    On the other hand, SCRs are highly variable, and sometimes subjects just have very pronounced responses on a subset of trials. You would then also see this in the raw data.

    Hope this helps
    Dominik

     
