dspsr -K does not correctly compute the residual fractional sample inter-channel dispersive delay. The problem stems from a mismatch between the assumptions made in Dedispersion.C and those made in DedispersionSampleDelay.C.
In Dedispersion.C, the residual fractional sample delay is calculated using
delay = dispersion_per_MHz * ( 1.0/sqr(chan_cfreq) - 1.0/sqr(highest_freq) );
delay = - fmod(delay, samp_int);
which assumes that the dispersive delay of each channel is measured relative to the highest frequency in the band.
In DedispersionSampleDelay.C, the integer delay is calculated using
delay = dispersion * (1.0/SQR(centre_frequency) - 1.0/SQR(freqs[ichan]));
delays[ichan] = int64_t( floor(delay*sampling_rate + 0.5) );
which assumes that the dispersive delay of each channel is measured relative to the centre frequency of the band. The two calculations therefore use different reference frequencies, so the integer and residual fractional sample delays are computed with inconsistent conventions.
This bug was discovered by Mengting Liu and George Hobbs while processing a signal with high frequency resolution, such that incorrect shifts of a fractional sample are clearly visible (see attached plots provided by Andrew Jameson).
I'm working on a solution to this bug in which the Dedispersion class queries the residual fractional sample dispersive delay from the Dedispersion::SampleDelay class. The current solution is a bit clunky because Dedispersion::prepare is called earlier in the signal chain and it requires a prepared Dedispersion::SampleDelay instance. However, correctly calling Dedispersion::SampleDelay::prepare requires an Observation instance as output by the Convolution or Filterbank class that has applied the Dedispersion filter. Breaking this circular dependence requires either:
1. a lazy-evaluation solution, in which Dedispersion::prepare postpones calculating the filter coefficients until the entire signal chain has been prepared. On a subsequent call to Dedispersion::prepare, the Dedispersion::SampleDelay instance from the downstream signal chain will have been prepared and can be used to compute the fractional delays; or
2. copying code from the Convolution and Filterbank classes into Dedispersion::prepare so that it can configure a simulated Observation instance that appears to have already passed through those operations, and provide this to Dedispersion::SampleDelay::prepare.
It was quicker to implement option 2, which I've done only to prove that this fixes the bug. I copied the relevant code from Filterbank, which might not work if the Convolution class is used to perform phase-coherent dispersion removal. The lazy-evaluation model is better because it should work in general, without any assumptions about what processing steps take place between coherent dedispersion and inter-channel dispersive delay correction. Therefore, I plan to implement the lazy-evaluation solution before calling this bug fixed.
I've implemented a third solution that requires no kludgey assumptions and no lazy evaluation. Instead of having Dedispersion::SampleDelay compute the integer and fractional sample delays only when its match method is called, it now computes both via a simple function call.
I've checked in the fix; dspsr -K now outputs sensible results; e.g.
yields
Here's what the same command sequence produced with the bug.
A new program has been added to fix folded archives that were produced by the buggy dspsr -K; e.g. when run on the file used to produce the previous plot
yields
Just trying to work out how serious this is in terms of reprocessing large volumes of data for the MeerTime TPA.
Can anyone say if this only affects the frequency direction, or might the delays also change with time - e.g. is it possible that different subints will have different offsets? Obviously, if there is something that changes from pulse to pulse in the single-pulse data, then it is more serious than if the dedispersion is just wonky by +/- 1 bin (though we should probably reprocess what we can anyway).
Hi Mike, this bug impacts only the frequency direction (it does not depend on time), and it is seen only when processing baseband/voltage data. If MeerTime TPA data are first recorded as time series of detected Stokes parameters (search mode), then this bug will have no impact.