Add ipcbuf_peek_eod to allow a reader to detect an EOD flag
fix bug in functional-pipeline-tests for test data shorter than 1s
fix bug in UWBFloatUnpackerCUDA
fix bug in kernel thread config for nbin < 1024
fix bugs in k_multiply kernels when nchan not power of two
Thanks for reporting back, Mariano - I'm glad you managed to solve this issue. Tony, I've noticed that no one replied to your original report - is this still a problem for you?
Hello Xiaowei Wang, We would need a bit more information in order to help diagnose the issue. Could you please provide the header of the SIGPROC filterbank file you are processing? That might help us reproduce the issue. It would also be useful if you could run heimdall with the -G option, which will provide more verbose output from heimdall - again, this might provide some insights. What git hash are you using in your dedisp and heimdall builds? Can you advise how you compiled dedisp - i.e. what GPU_ARCH...
The Candidate Time is the time of the event after dedispersion, at the highest frequency channel, in seconds, offset from the start of the filterbank file that heimdall is processing.
Unfortunately no, just Sigproc filterbank files.
I think you want to be using the trans_gen_overview.py script to generate that plot. However, I now recall that this script was designed to use a candidates.dat file that was the result of performing coincidence rejection with other beams (very PKS specific) via the coincidencer application that is part of the heimdall build. So if you are processing a single-beam observation, then you would need to do something like:
cat 2*.cand > all
coincidencer all
# this should generate an all_.cand...
This feature should still be available. If you are using Scripts/trans_gen_overview.py, there is an option called -just_time_dm, which is what I think you want.
Hi, you just need to ensure the csh interpreter package is installed. Alternatively, just execute the contents of the bootstrap script directly.
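A minimal sketch of the two options above (the package names are assumptions - they vary by distribution; the bootstrap script is assumed to live in the source root):

```shell
# Option 1: install a csh interpreter so ./bootstrap can run, e.g.
#   Debian/Ubuntu:  sudo apt-get install csh
#   RHEL/CentOS:    sudo yum install tcsh
# Option 2: if csh is missing, run the bootstrap commands by hand under sh.
# The check below just reports which situation you are in:
if command -v csh >/dev/null 2>&1; then
  CSH_STATUS="found"
else
  CSH_STATUS="missing"
fi
echo "csh is $CSH_STATUS"
```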
Fix the delay function to always be zero
Yes, they should be powers, I've checked in a correction to SigprocFile.C - thanks for the error report!
Hi Vinay, I've not come across gpuocelot before, so I cannot say anything definitive. It does appear to be an unmaintained project now, and given how old CUDA 5/6 are I doubt it would work easily (or perhaps at all). In order to compile both dedisp and heimdall, a CUDA installation would be required. I'm not sure if it is possible to install CUDA without the full SDK bundle. Cheers, Andrew
You should be able to see the change to kernels.cuh in the commit listed below and apply it to your fork of dedisp. https://github.com/ajameson/dedisp/commit/0f763a3bd1c726d9363c63cae855c844e9549d88
Hi Joe, I believe the texture related code was only ever intended for use on very old GPUs (compute capability < 2), so I doubt that any users of dedisp would be using that code. To solve the compilation issue, I've added some #ifdefs around the texture code so that it will now compile without any texture calls in CUDA 12 and above. Hope this solves your problem. Cheers, Andrew
Thanks Shin, That header was very helpful, and it allowed me to reproduce the bug. The issue was that the length of this file, in combination with the block length used by heimdall (262144 samples), resulted in the final block of data being shorter than the default baselining length (2 s). If you were to change the baseline length on the command line (e.g. -baseline_length 1), the file would be processed without error. I've made the baselining algorithm more robust to this sort of error in a small...
Hi Shin, Can you provide the header for the filterbank file you are processing? Are you using the most recently available versions of dedisp and heimdall?
digifil not able to remove interchannel dispersion when fscrunching?
Fixed in commit cba272
Hi David, The issue of the first sample being zero and subsequent samples being close to the mean is a consequence of the small default block size that digifil uses (256 MB). Since your data set is quite large (many channels) and your Tscrunching is quite substantial (128), when digifil reads a block of data, all of those samples are scrunched into a single output time sample for the channel, and in this situation the rescaling algorithm cannot determine the mean value - since it is trying...
David, would you be able to test against this branch and let me know how you find the results? https://sourceforge.net/p/dspsr/code/ci/bug98-rescale-first-integration/tree/ Cheers, Andrew
digifil not able to remove interchannel dispersion when fscrunching?
Hi David, Thanks for providing the example data set and the many tests that you have run - that has been quite useful in determining the cause of the issue. I can indeed see a problem being caused by the Rescale class and I believe I've got a fix that addresses many of these problem cases, perhaps the original one that Scott reported too. I hope to have a branch ready for you to test against soon. Cheers, Andrew
Hi Jānis, I believe this is because you have not installed psrcat prior to installing psrchive. Please check your psrchive installation, and use the packages/psrcat.csh script included in psrchive. Then you will need to configure and install psrchive, and then configure and install dspsr.
I recommend first getting dedisp and heimdall to work outside of a container. Next I would check that you have compiled the dedisp and heimdall CUDA code for the GPU/CUDA architectures on which you will be executing within the container. For dedisp, this is set via the GPU_ARCH variable in the Makefile. For heimdall this is controlled via an environment variable called CUDA_NVCC_FLAGS, which I would typically set to something like "-O2 -gencode arch=compute_75,code=sm_75". Finally I would ensure that your cuda compilation...
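A sketch of the two build settings described above (the compute_75/sm_75 value is taken from the post and targets Turing-class GPUs; substitute the architecture of your own card):

```shell
# dedisp: GPU_ARCH is set inside the Makefile, e.g. a line like
#   GPU_ARCH = sm_75
# (edit the Makefile by hand before running make)

# heimdall: pass the gencode flags via the environment before configuring
export CUDA_NVCC_FLAGS="-O2 -gencode arch=compute_75,code=sm_75"
echo "CUDA_NVCC_FLAGS=$CUDA_NVCC_FLAGS"
```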
I've updated heimdall with the patches suggested in this thread - thanks for your help.
Hi Nathaniel, I believe this error comes from dedisp, which only supports dedispersion with filterbank data with a number of channels that is a multiple of 32. I might add a better error message to heimdall/dedisp to warn of this problem.
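A quick pre-flight check for the requirement stated above (the nchan value here is a placeholder; read the real one from your filterbank header, e.g. with the sigproc "header" utility):

```shell
# dedisp requires the number of channels to be a multiple of 32
nchan=1024   # replace with the "Number of channels" from your filterbank header
if [ $((nchan % 32)) -eq 0 ]; then
  echo "nchan=$nchan: OK for dedisp"
else
  echo "nchan=$nchan: not a multiple of 32 - dedisp will reject this file"
fi
```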
I think you just need to ensure that your LD_LIBRARY_PATH includes the installation directory of the dedisp library. I'm guessing it would be something like /home/felipe/software/dedisp/lib
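For example (the path below is the guess from the post - adjust it to wherever you installed dedisp):

```shell
# Prepend the dedisp install's lib directory to the dynamic linker search path
export LD_LIBRARY_PATH="/home/felipe/software/dedisp/lib:${LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

Add the export to your shell profile if you want it to persist across sessions.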
Looks like the cudalt.py utility script assumed the python interpreter is always located in /usr/bin/python. I've pushed a commit to heimdall which removes this requirement. Could you try to pull the latest version and recompile?
Hi Ryan, These look like linking errors for dedisp, have you managed to compile dedisp as per the instructions on the Heimdall Wiki? https://sourceforge.net/p/heimdall-astro/wiki/Install/ Cheers, Andrew
Hi Ryan, This error is due to the lack of C++11 support in the default compiler options for gcc 4.8.5. I think this could have been rectified by adding a -std=c++11 to your CXXFLAGS environment variable. However, I have pushed a commit to the master branch which will allow compilation even without this option. Could you please try it and check? Cheers, Andrew
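If you do want the CXXFLAGS workaround for the older compiler, it would look something like this (run from your heimdall build directory before re-running configure):

```shell
# Enable C++11 explicitly for gcc 4.8.5, which does not default to it
export CXXFLAGS="-std=c++11 ${CXXFLAGS}"
echo "CXXFLAGS=$CXXFLAGS"
# then re-run the build, e.g.:  ./configure && make
```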
dspsr folding error
Glad to hear it, I'll close this ticket then.
Since you are performing coherent dedispersion in each of the single channels, DSPSR must discard some of the data at the very start of the file, since it is corrupted by the application of the dedispersion kernel. The amount of data that is discarded depends on the RF frequency of each channel, with the lower-frequency channels requiring more samples to be discarded. You could avoid this problem by:
1. Not performing coherent dedispersion when using digifil.
2. Using digifits (to produce a PSRFITS search mode file...
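For reference, the per-channel delay that drives how many samples must be discarded follows the standard cold-plasma dispersion relation (generic textbook form, not taken from the DSPSR source):

```latex
% Dispersive delay between frequencies \nu_1 < \nu_2 (both in MHz),
% for dispersion measure DM in pc cm^{-3}:
\Delta t \simeq 4.15 \times 10^{3}\,\mathrm{s} \;\times\; \mathrm{DM}
  \;\times\; \left( \nu_1^{-2} - \nu_2^{-2} \right)
% Lower-frequency channels have larger \Delta t, hence more discarded samples.
```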
Use
the_decimator is overwriting az=za=0
Wael, I've checked in a new commit to the SigProcObservation which will set the az_start and za_start header parameters based on the source position at the start MJD of the observation. Could you please pull this commit from DSPSR and test it out?
This has been fixed in commit #73757a. Running digifil on search mode data recorded with PTUSE now results in telescope_id 64 being set in the resulting output file: [ajameson@farnarkle1 ~/dspsr]$ header 2020-11-28-03:22:13.fil ... Telescope : MeerKAT Datataking Machine : FAKE ...
Digifil giving fake telescope ID by default for filterbanks
Perhaps this is not so straightforward. Sigproc defines az_start and za_start as the telescope angles at the start of the observation, and so they could differ from the source position - e.g. your pulsar/source might not be in the centre of your antenna or beam pointing. For this reason I think it would be unwise for the_decimator to re-write the Az and Za, and that is perhaps why the original author of this code reset these parameters to zero upon unloading. The more correct...
When writing a filterbank file, the SigProcObservation class doesn't (and shouldn't) make any assumptions about what type of input was used to generate the file. So having the more generalised approach will mean that it should work in most use cases.
It would appear as though the SigProcObservation class was never configured to correctly compute these parameters when unloading a SigProc filterbank file. This information is generally redundant, but it could be added reasonably easily by making use of the sky_coord, telescope and start MJD in the dsp::Observation base class. I can look at doing this soon. Wael, the global variables are defined in Kernel/Formats/sigproc/filterbank.h (this is how the sigproc library works, unfortunately). These are...
Dear Maura, When you say you are using 32-bit filterbank files, are these 32-bit unsigned integers or 32-bit floating point? The dedispersion library used by heimdall (dedisp) does not support 32-bit input, so please try using a lower bit-width filterbank file and compare again. I'm a little surprised that the 32-bit filterbank file ever worked properly.
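One possible workaround, sketched below, is to requantise the file to 8 bits with digifil before running heimdall (the -b flag appears elsewhere in this thread's examples; the file names here are hypothetical, and digifil must be on your PATH):

```shell
# Requantise a 32-bit filterbank down to 8 bits for dedisp/heimdall
if command -v digifil >/dev/null 2>&1; then
  digifil -b 8 search_32bit.fil -o search_8bit.fil && CONVERTED=yes || CONVERTED=no
else
  CONVERTED=skipped   # digifil not installed - build dspsr to get it
fi
echo "conversion: $CONVERTED"
```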
Buffer Overflow in demorest github version of dspsr
Issue was not related to DSPSR software, but rather PSRCHIVE installation and autotools.
digifil does not produce output of the appropriate size
Hi Ryan, I believe that the error with the "-d 1" option was fixed in a recent commit to the DSPSR trunk. Would you be able to run this test again and report back if this last lingering problem is now resolved? Cheers, Andrew
Buffer Overflow in demorest github version of dspsr
Parul, You raised a number of simultaneous bug reports on DSPSR, and I believe that some of those were resolved? Have you resolved this particular issue with this file? If not - did you try adding a lot more channelisation to the command line? e.g. dspsr -E 1644-4559.eph -F 128:D -b 64 -s -K -t 4 1644-4559.cpsr2 Cheers, Andrew
digifil downsampling factor is not applied properly
Marked as closed, as the issue is reported as resolved by the commit to master.
Error with digifil on dada file
Closing this ticket as reporter verifies the issue is fixed in master.
single pulse - incorrect start time in output headers
Hi Mike, Thanks for logging the reply here - hopefully others who encounter this issue will read it :) I'll close this ticket. Cheers, Andrew
Hi Kuo, I've pushed a change (commit 22aa48d4) which I believe fixes this issue, could you please verify the updated code works for you and let me know here? Thanks for the bug report. Andrew
Error with digifil on dada file
Can you provide the contents (first 4096 bytes) of the dada header so I can try to replicate this issue? To print out the ascii header, you can use the psrdada program:
dada_edit test.dada
It would also be useful if you could provide a verbose log file of the digifil command, e.g.
digifil -c -t 16 -b 8 -F64:D test.dada -o test.fil -V >& digifil.log
A failure of Tempo2 might be due to the nature of this pulsar, or to your installation of Tempo2 with respect to MWA. You should:
1) Run DSPSR to generate the tempo2 failure.
2) Inspect the stdout and stderr files in /tmp/tempo2/<your username>.
This might provide some insight as to why tempo2 cannot generate a predictor from your ephemeris and observation information.
That looks like it is failing when testing the S2 file format. I would recommend editing the backends.list file in dspsr's source directory to only include the backends that you are interested in, e.g. "cpsr2 fits sigproc", then reconfigure DSPSR and rebuild.
I presume this is a PSRFITS search mode file (i.e. a filterbank file)? If so, then it is not possible for DSPSR to perform any subsequent channelisation, as the time series will be detected samples (i.e. not voltages). How many channels does your PSRFITS file have? It would be hard to deduce where the problem is without more information. Perhaps you could run dspsr with more verbosity using this command:
dspsr -E 0950.eph -s -K 1141224136_ch133-156_0002.fits -V >& dspsr.log
and then provide the log...
Could you please try to run dspsr with some channelisation enabled? The dspsr command you are using performs no channelisation. Given that the CPSR2 file you are processing is a single 64 MHz channel, this results in a very large dedispersion kernel, which means DSPSR uses a very large amount of memory to perform coherent dedispersion. I can confirm that running the command you provided on my local installation of DSPSR returns the same error, but if I instead run something like this: dspsr -F...
Cannot update ephemeris
Hi Paul, Scott, I can see where this error has been introduced for psrchive installations which do not have PSRCAT available at configure time. I would have expected to see the error "not implemented for PSRINFO" instead of "empty name" from pam, so I'm a little confused there... Regardless, I've checked in a correction to prevent throwing an error if PSRCAT is not available (c70fcd), but I would appreciate it if you could confirm this resolves the issue in your environment?
digifil downsampling factor is not applied properly
Hi Ryan, Digifil performs the temporal downsampling after the channelisation stage, and so the -t 32 should apply to the sample rate coming out of the channeliser. Since you have chosen a channeliser with the same number of output channels (-F 256:D) as the number of input channels (256), that should not have made a difference. I would have expected an output sampling time of 40.96 us, as you mentioned. I did find a couple of other issues with digifil that needed to be fixed whilst looking into this....
digifits does not tscrunch to requested sampling time when creating filterbank with the -F option
Hi Ryan, Thanks for reporting this. Digifits was not correctly computing the required tscrunching in the case where the input data a) have more than 1 channel and b) further channelisation was requested. I've checked in a fix for this problem in commit d9c25565.
Hi Ryan, can you confirm if this issue is now resolved for you? Cheers, Andrew
digifits fails to create file when output name is specified
Fixed in commit: 463b36
FITSOutputFile uses a mangled filename while the file is being written, and then renames the file to the correct name once writing has completed.
Hi Parul, Dedisp has a hard-coded internal limit of 8192 channels, and this filterbank file exceeds that limit. I'm unsure whether this limit can be increased, but perhaps you could try downsampling the number of channels (by a factor of two) to 7424 channels and processing it again? Regards, Andrew
Hi Tony, Devansh, It turns out there was a bug in dedisp which was causing this error with non-power-of-two channelisation in the filterbank files, specifically for situations where (nchan * (nbit / 32)) % 32 != 0. I've checked in a change to dedisp, along with some other minor fixes to heimdall that were causing warnings with cuda-memcheck. Can you please try updating dedisp and heimdall and let me know if this fixes your issue?
Hi Tony, Devansh, This looks to be an error in dedisp, which is a library used by heimdall, but it would still be good to try and understand the cause. Some further questions: in the header provided by Tony, the filterbank had very different parameters to that provided by Devansh. Are you seeing this memory error on all types of filterbank files for your heimdall installation, or just one of these? I see you are specifying -nsamps_gulp by hand to be quite large. Is this due to the very low RF...
Hi Tony, Can you run the sigproc util "header" on the file and send the results? It would be good to check that the file header matches that of similar files that work (in terms of sample time etc.). You could try using the command line options -rfi_no_narrow and -rfi_no_broad to disable the automatic RFI cleaning - that might be the cause... It should be possible to run a command like
head -c 10485760 observation.fil > shortobservation.fil
and send me the first 10 MB of the file.
Thanks for this report Tony, this code does fail under GNU compilers > 7; I've checked in a fix for this. The installation instructions were indeed quite stale - I've refreshed them a bit, removing the requirement for thrust > 1.6, since any modern version of CUDA should now satisfy that requirement. The --with-<package>-dir options do allow for arbitrary placement of the install directories for PSRDada and dedisp. I had bundled things in the one location to make this a bit more straightforward...
Install
Hi Parul, It looks like you may not have installed the dedisp library prior to building heimdall. Did you install dedisp as per these instructions? https://sourceforge.net/p/heimdall-astro/wiki/Install/ Can you provide the output of the heimdall configure command that you ran prior to running make? Cheers, Andrew
Hi Tinn, The options -vVgG provide increased verbosity in the log messages that heimdall generates when processing the data: -v provides the least logging and -G the most. The -baseline_length option specifies the number of seconds of data to use when smoothing the baseline for each F-scrunched dedispersion trial; the default is 2 seconds. The baseline smoothing is used in the baseline subtraction algorithm, which is required to measure the SNR of candidates. Cheers, Andrew
Added comap, leda bugfix
Approved - thanks for the leda bug fix Jkocz.
Looks like the merge requests feature worked ok - thanks for contributing.
Test to see if merge requests are working
APSR channel order looks scrambled
Closing bugs for systems that no longer exist
Make fails
Fixed in commit 77051550acde83eec7044d68f7b7bb8c9d501bf5
Transfer manager fails to start/stop