Hi
Could someone please provide more information about exactly how the normalization procedure works? In particular, how are the mean and standard deviation calculated? Are they computed from the whole data file — in which case whether the data file is trimmed to exclude breaks in stimulus presentation will make a difference — or after response values are extracted, so that only the data around markers matter?
Also, how does this work with excluded epochs? Are these ignored during the normalization? Currently I'm including files with epochs to ignore, but I still get the full number of trials extracted, all with values, so I'm not sure at what point this data is being ignored.
Many thanks in advance for your help,
Jo
(Also, thank you Tobias for your very fast response to my previous question on pulse oximeter data, I appreciate it.)
Hi Jo
Normalisation (in GLM or DCM) uses the entire data time series that it is given.
We recommend breaking the file up into experimental blocks, thus removing activity unrelated to the experiment from the normalisation.
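To illustrate why this matters (a minimal sketch in Python, not PsPM's actual code), z-scoring over the full file includes any task-free stretches in the mean and standard deviation, whereas trimming to the experimental block first uses only task-related activity:

```python
import numpy as np

# Hypothetical data: a long task-free baseline followed by an
# experimental block with larger responses.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 0.1, 600)   # e.g. a break between blocks
block = rng.normal(1.0, 0.5, 400)      # experimental block
signal = np.concatenate([baseline, block])

def zscore(x):
    """Normalise to zero mean and unit standard deviation."""
    return (x - x.mean()) / x.std()

# Normalising the whole file folds the break into the mean/SD estimate...
whole_file = zscore(signal)[600:]
# ...whereas trimming to the block first normalises only the block itself.
block_only = zscore(signal[600:])

print(whole_file.mean())  # clearly positive: the break pulls the mean down
print(block_only.mean())  # approximately zero
```

The block portion of the whole-file normalisation ends up with a large positive mean, because the break lowers the overall mean and standard deviation — which is exactly why breaking the file into experimental blocks is recommended.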
With respect to the exclusion of data points, that depends on what model you are using, and how long the excluded epochs are.
For DCM, missing data under 2 s is ignored for model inversion, i.e. you still get parameter estimates for that trial. If the missing epochs are longer, the file is broken up and trials that fall into missing epochs are set to NaN.
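A rough sketch of that rule (hypothetical trial/epoch representation, not PsPM's implementation): short gaps are ignored for inversion, while trials overlapping longer gaps are flagged so their estimates end up as NaN.

```python
GAP_THRESHOLD_S = 2.0  # the 2 s threshold quoted above

def trial_is_valid(trial_onset, trial_offset, missing_epochs):
    """Return False if the trial overlaps a missing epoch >= 2 s long."""
    for start, stop in missing_epochs:
        if stop - start < GAP_THRESHOLD_S:
            continue  # short gap: ignored for model inversion
        if trial_onset < stop and trial_offset > start:
            return False  # trial falls into a long missing epoch -> NaN
    return True

missing = [(10.0, 11.0), (30.0, 40.0)]  # one short gap, one long gap
print(trial_is_valid(10.2, 14.0, missing))  # True: overlaps only the short gap
print(trial_is_valid(32.0, 35.0, missing))  # False: inside the long gap
```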
For GLM, parameters for all trials are fitted. If the missing epochs are very long, the parameter estimates could be meaningless; there is no automated check for this.
Best wishes
Dominik