Hello,
I'm new to the forum and to the toolbox.
I have some ADInstruments LabChart files with long recordings for a 3-block design.
Blocks: resting baseline, experimental procedure, sham procedure.
Each block has approx 5 minutes of recordings, and each block is stored in a separate file with its own name (e.g. subj11_baseline, subj11_blockexp, subj11_blocksham).
I was able to batch import the files, without any problem.
Now, I'd like to batch analyse EDA, and I have a couple of questions.
1) Considering I do not have any event-related response, is SF the best model?
2) How do I batch process a large amount of files? (3 files per subject, almost 50 subjects.)
I tried opening the Model for SF box to create a batch script.
However, I'm unable to automatically cycle through all the imported files.
When I select the "data file" I can only select 1 file, and even if I press "rec", only one file appears in my processing list field.
If I start the script and then open the output folder, I find only one model, with only one value.
(Although it seems to cycle through all the files, because it asks me if it can overwrite the imported .mat.)
I suppose there is no way to have all the analyses in one model, and that I have to create a different model for each file. Therefore I would need a different "model filename" for each file; otherwise, I think every new analysis in the batch will overwrite the previous output model.
How can I do that?
Thank you very much.
Dan
Hi Dan
(1) This sounds very reasonable. Some people also analyse SCL, but over such long intervals there is a strong peripheral component in this measure and evidence that this is influenced by central/psychological processes is at best patchy.
(2) PsPM takes a hierarchical summary-statistic approach: it works on individual subjects first and then allows you to analyse the parameter estimates from each subject. Create a batch script for one subject and modify it to loop over subjects. The Tutorial section in the Manual contains a how-to description.
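As a rough sketch, such a loop could look like the following (the job file name and subject numbering are invented, and the exact batch field path must be copied from your own generated job file, so treat this as a template rather than working code):

```matlab
% Sketch: run one saved batch job per subject.
% 'sf_job.m' and the subject numbering are placeholders.
for s = 1:50
    run('sf_job.m');   % a job saved as .m from the batch editor; defines 'matlabbatch'
    % Point the job at this subject's file. The exact field path below is
    % hypothetical - copy it from your generated job file:
    % matlabbatch{1}.pspm{1}.(...).datafile = {sprintf('pspm_subj%d.mat', s)};
    pspm_jobman('run', matlabbatch);
end
```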
Hope this helps
Dominik
Hi Dominik,
thank you very much!
One last question: my files are LabChart 8 files. The LabChart GSR procedure requires a "zeroing".
After the 5-min baseline, EDA is zeroed (as per LabChart instructions, because if you zero at the beginning of the baseline, you will have massive negative values from then on).
The topic has already been discussed here: https://sourceforge.net/p/pspm/discussion/help/thread/630e967b/
Therefore, these negative values should not be a problem, even in the SF model. Right?
What I'm wondering is:
Can SF AUC data for the baseline, which is not zeroed and has, let's say, an average of 20 uS, be compared with the subsequent recordings that have been zeroed and, conversely, have an average of 5 uS (which is, in fact, 25 uS: the 5 uS change plus the 20 uS subtracted by the zeroing)?
Is the AUC somehow compensating for this, as it compensates for negative values (which should not exist)?
Thank you
Hi Dan
all SCR analysis packages, including PsPM, high-pass filter the data before modelling or peak-scoring, thus removing negative values. In addition, SF models subtract the minimum value per epoch before analysing the epoch.
If the baseline period and post-zeroing data are in separate epochs, I see absolutely no problem comparing them - the estimated amplitudes of phasic SFs are not affected by this filtering/baseline correction.
If the zeroing occurred during an epoch, you would have to check whether it is effectively filtered out.
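To see why the baseline offset drops out, here is a toy illustration (the numbers are made up):

```matlab
% Toy example: a constant offset within an epoch is removed entirely
% when the per-epoch minimum is subtracted.
scr     = [5.2 5.4 5.9 5.3];      % invented SCR samples (in uS)
shifted = scr - 20;               % same epoch after a 20 uS 'zeroing'
a = scr - min(scr);
b = shifted - min(shifted);
% a and b are identical, so the estimated SF amplitudes do not change
```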
Best
Dominik
Hi Dominik!
Happy new year :)
I came back to bother you because I'm banging my head trying to use PsPM to batch process a large amount of files.
I read the manual, I tried to build the loop, and I also recruited our resident MATLAB advisor... but we failed after several unsuccessful attempts.
I have to state that I'm not MATLAB proficient; I've started using MATLAB specifically to use your package.
Anyway, I was able to batch import all the files and to batch trim them (without a loop, to my surprise). It was pretty straightforward using the dependencies.
But the code failed when it had to batch calculate the AUC.
Attached you can find the plain code (without the loop, to avoid errors and mess).
Can you please give me some advice about how to modify it to process all the files you see in the structure, one after another?
Thank you very much for the support,
Best
/edit/ I also edited the script to better show the kind of files we have (they all have different, non-consecutive names)
Last edit: DDL 2019-01-31
Hi Dan
hopefully the new year will not bring too much more headbanging :)
I think the problem is that you bulk process several files for import and trim, but the SF module can handle only one file at a time. Apparently the dependency does not work here. I'll have to look into this.
In the meantime, you can restrict yourself to processing just one file at a time, for all modules in the batch. I assume the dependencies will work then.
Do let me know if this doesn't fix the problem.
Best
Dominik
Hi Dominik, thank you very much for the fast reply!
In the past weeks, we tested the script with one file at a time.
It worked if we did not use the dependencies for the SF module. Otherwise this is the error:

No executable modules, but still unresolved dependencies or incomplete module inputs.
The following modules did not run:
Failed: SF
Skipped: Export Statistics
If the problem can be fixed without modifying the job, the computation can be resumed by running
cfg_util('cont',1)
from the MATLAB command line.

If we do not use the dependencies but manually select the single file name for each module, everything works.
Unfortunately, we really need to batch process the SF module; we cannot select one file at a time, even with the dependencies. We have something like 500 files to process...
Thank you for your help, we really appreciate it!
Best
Hi Dan
happy to look into this dependency error, but such bug fixes often take a bit.
Meanwhile, there is no need to do the analysis manually, even without dependencies. Just define the desired file names in the batch script as strings, e.g. using sprintf and/or fullfile. If the input file is 'xxx.asc', then the imported file will be 'pspm_xxx.mat', and the trimmed file 'tpspm_xxx.mat'.
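For example (the folder and base name here are placeholders):

```matlab
% Sketch: construct the expected PsPM file names for one recording.
datadir  = 'M:\data';                                % placeholder folder
base     = 'subj11_baseline';                        % placeholder base name
imported = fullfile(datadir, sprintf('pspm_%s.mat',  base));
trimmed  = fullfile(datadir, sprintf('tpspm_%s.mat', base));
```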
Best
Dominik
Hi Dominik,
this is Gianluca, I am working with Daniele on the EDA Analysis. Thank you very much for all your support so far.
May I ask you a bit more about the batch processing? I have tried to make the script loop through different filenames, as I usually do for other scripts. However, I am having a lot of trouble.
I have made many different attempts. This is just one of them:
First, I run the import on multiple files and generate the script. This is not an optimal way, as I have to select each file manually, but for now that's OK.
This is the generated code:
Import1
Import1_JOB
As I said, this works. It imports the files and generates in the same folder some .mat files called:
pspmXXXXXX.mat
For now, I have skipped the trimming. The real problem is with the AUC script.
I generated the code for a single participant and then tried to modify it. This is what I did:
AUC2
% List of open inputs
nrun = 1; % enter the number of runs here
jobfile = {'M:\Gianluca\06) DanieleEDA\V3_AUC\AUC6_SFOnlySingle_job.m'};
jobs = repmat(jobfile, 1, nrun);
inputs = cell(0, nrun);
for crun = 1:nrun
end
job_id = cfg_util('initjob', jobs);
sts = cfg_util('filljob', job_id, inputs{:});
if sts
    cfg_util('run', job_id);
end
cfg_util('deljob', job_id);
AUC2_Job
The problem with this script is that it still generates an output only for the LAST file.
It appears that the script loops through all the names, takes the last one, and performs the analysis only for that one. This is pretty clear if you look at the Command Window output:
p =
p =
p =
p =
p =
p =
p =
05-Feb-2019 13:52:58 - Running job #2
05-Feb-2019 13:52:58 - Running 'SF'
SF analysis: M:\Gianluca\06) DanieleEDA\V3AUC\pspmAH5828gng3.mat ...epoch 1 ...auc
05-Feb-2019 13:53:01 - Done 'SF'
05-Feb-2019 13:53:01 - Done
This looks like rather bizarre behaviour to me.
Do you have any suggestions? I am really having a hard time with this.
Thank you in advance for your time.
Best,
Gianluca
Hi Gianluca
in the second script, you create a matlabbatch, but you don't actually run it. After creating the job, the last line in the loop should be pspm_jobman('run', matlabbatch);
You don't need the lines of code after AUC2 until AUC2_Job
For the import, you don't need to define each file manually. In place of the data files specified with
matlabbatch{1}.pspm{1}.prep{1}.import.datatype.labchartmat.datafile = {...}
you can just specify a cell array of strings with the filenames, using for example the functions 'dir', 'fullfile', and 'sprintf'.
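For example, something along these lines (an untested sketch; the folder and file pattern are placeholders):

```matlab
% Sketch: collect all imported files in a folder into a cell array of paths.
d = dir(fullfile('M:\data', '*.mat'));               % placeholder folder/pattern
files = cell(numel(d), 1);
for k = 1:numel(d)
    files{k} = fullfile(d(k).folder, d(k).name);     % 'folder' field: R2016b+
end
matlabbatch{1}.pspm{1}.prep{1}.import.datatype.labchartmat.datafile = files;
```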
Hope this helps
Dominik