
trial-by-trial analysis of startle eye blink response

  • Anna

    Anna - 2019-10-10

    Thank you very much for providing this fantastic toolbox! We are interested in trial-by-trial responses in fear conditioning studies and are already using PsPM for skin conductance data. For our type of paradigm (CS duration: 6.5 s; ITI: mean 16.5 s, range 15-18 s; startle probe delivery 6000 ms after CS onset; US delivery 6450 ms after CS onset), a single-trial GLM seems to provide the most robust results for SCR. To keep the analysis coherent across measures, we would like to apply a similar approach to startle eyeblink response (SEBR) data. After inspecting the predicted and observed SEBR in a single-trial GLM with fixed response latency, we noticed that the observed responses have a consistently greater latency (~50 ms) than the predicted responses (probably due to a trigger timing deviation on our side?). In addition, this latency differs slightly between participants and individual trials. For these reasons we would prefer a single-trial GLM with flexible response latency. However, the PsPM output files for a model with flexible latency seem to get too big to be saved (>2 GB).

    Is a single-trial GLM with flexible response latency actually suitable for analyzing trial-by-trial SEBR, and can you think of any reason why the output files become so big?

    I have attached a sample tpspm file plus onsets and the script for the single-trial GLM with flexible latency. Please let me know if you need any further information to evaluate the problem.

    Best wishes,
    Anna

  • Ivan Rojkov

    Ivan Rojkov - 2019-10-14

    Dear Anna,

    Thank you for reporting this issue. What you are doing is correct, but the output files shouldn't get this big.

    We looked into it, and the problem comes from your input data: you input two arrays of size 1403368x1 without filtering them, so during execution PsPM creates five arrays of the same size plus three matrices of size 1403368x65. Together these make a very large file that MATLAB then tries to save. In principle it could be saved, but that would require modifying our code; we may implement this in a later version.
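    To see why this exceeds MATLAB's default 2 GB MAT-file limit, here is a back-of-the-envelope size calculation using the array shapes quoted above. It assumes double-precision (8-byte) values, MATLAB's default numeric type; the "five arrays plus three matrices" breakdown is taken directly from the explanation above.

    ```python
    # Rough size estimate for the data PsPM would have to save,
    # assuming 8-byte doubles (MATLAB's default numeric type).
    # Shapes come from the post above: five 1403368x1 arrays and
    # three 1403368x65 matrices.
    n_samples = 1_403_368
    n_cols = 65  # columns of each design matrix

    vectors = 5 * n_samples * 8            # five 1403368x1 arrays, in bytes
    matrices = 3 * n_samples * n_cols * 8  # three 1403368x65 matrices, in bytes

    total_gb = (vectors + matrices) / 1024**3
    print(f"vectors:  {vectors / 1024**2:.0f} MiB")
    print(f"matrices: {matrices / 1024**3:.2f} GiB")
    print(f"total:    {total_gb:.2f} GiB")
    ```

    The three design matrices alone come to about 2 GiB, so the total lands just above the 2 GB cap of MATLAB's default (pre-v7.3) MAT-file format.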

    For now, I would therefore suggest filtering your data: enable the filter and set the sampling rate to an appropriate (lower) value.
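    As a rough illustration of why this helps (plain arithmetic, not PsPM code): every per-sample array shrinks in proportion to the down-sampling factor. The factor of 10 below is an arbitrary example, not a recommendation for this data set.

    ```python
    # Illustration: down-sampling reduces all per-sample arrays proportionally.
    # Factor 10 is an arbitrary example, not a recommended setting.
    n_samples = 1_403_368
    factor = 10
    n_down = n_samples // factor

    # bytes for five Nx1 arrays plus three Nx65 matrices of 8-byte doubles
    before = (5 * n_samples + 3 * n_samples * 65) * 8
    after = (5 * n_down + 3 * n_down * 65) * 8

    print(f"before: {before / 1024**3:.2f} GiB")
    print(f"after:  {after / 1024**2:.0f} MiB")
    ```

    The same data that blows past the 2 GB save limit at the raw sampling rate drops to a few hundred MiB after down-sampling, which MATLAB can save without any change to the code.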

    I hope this solves your problem.

    Kind regards,
    Ivan.

  • Anna

    Anna - 2019-10-14

    Dear Ivan,
    Thanks a lot for looking into this! As you suggested, down-sampling indeed solves the file-size problem and, as hoped, the flexible-latency models capture the inter-trial/inter-individual onset latency differences really well. Thanks again!
    Best,
    Anna

