Thanks Mike. It has been quite a long time since I looked at this stuff, but yes the idea of the "pointer_tracker" stuff in psrchive.i is to automatically create Reference::To instances for python variables when relevant. I think this should happen automatically for any class that inherits Reference::Able (including ProfileShiftFit), and we should not have to explicitly name all such classes (it's basically all of psrchive). This at least used to work, I guess we should look closely at those commits...
Hi Jacob, I think there is a simple setting in dspsr that can correct this sort of thing. Can you provide a sample command line, and ideally a small test data set, to check against?
Hi Ryan, you should submit this request at https://github.com/conda-forge/dspsr-feedstock. The short answer is that I briefly looked into it a few years ago and ended up not doing it, so there may have been some problem. But it's probably worth revisiting.
Thanks for the quick fix, Willem. I can confirm that it fixes the issue on my system.
psrstat segfaults
Unable to change/edit type of data after folding by dspsr
Hi Yixuan, I'm glad you sorted it out already. In case this wasn't clear, I think the issue arose because the source name in your VLA file is 3C286, which is in psrchive's built-in list of flux calibrator sources (in file $PSRCHIVE/share/fluxcal.on). So psrchive automatically assumes it is a FluxCal-on observation. If needed you can work around this most easily by changing the source name.
Thanks Willem. You are right that there was an implicit assumption that a file would contain more than one data block. I don't remember details at this point but it looks like there must have been cases when the first data block was bad and needed to be ignored. I quickly skimmed your changes and they all seem reasonable to me!
hi Willem, We've been doing date-based versioning for a while now, though I guess no one has updated that change log webpage! Anyways all the released versions are here: https://sourceforge.net/projects/psrchive/files/psrchive/ I think semantic versioning has some nice features.. if we are willing to put in the extra work to follow the rules about when to bump the various numbers. Are we able to do this? If not then we might as well just stick with dates IMO. There may be some hassle involved in...
OK thanks Willem. See commit 64551887cf. This has fixed the immediate problem but there are still a handful of functions that may run into trouble with this. For example meanTsys() will return zero if setup has not been run. Not sure if we need to chase all these down right now, but might want to keep it in mind for future.
Right, it's been a while since I had to use these C++ tricks.. but const_cast also works, right? I added this line, which is already used elsewhere in FluxCalibrator, and it seems to have addressed the CalibratorStokes issue:

const_cast<FluxCalibrator*>(this)->setup();

I'll check this in soon unless you think there is any reason not to use this approach. Also you might want to check the spline stuff; I don't know how to test that (though it does sound useful). In case my change to get_nchan() messed with...
Hi Willem, Scott asked me about this again today so I looked into it some more. The problem does still exist in current psrchive. The issue is related to these two commits:

commit 58c6f1ada756402ca6e1bd3e580afca8164a2b8f
Author: Willem van Straten <vanstraten.willem@gmail.com>
Date:   Wed Apr 13 15:00:08 2022 +1000

    lazily evaluate the FluxCalibrator::data array when not calibrating data

commit 922334b29881ff82f9e04163422173a2d2bef333
Author: Willem van Straten <vanstraten.willem@gmail.com>
Date:   Sat...
DigitiserCounts problem with .fil input
Thanks Willem, all looks good to me!
Turned out to be pretty easy, #1 is done in commit 02c0fa1. Seems to work and removes that error message. I'll close this unless you have any other ideas.
That didn't actually fix the problem though they are probably good checks to keep. Turns out the segfault was in templates.h/minmax() which was not checking for an empty range before dereferencing. I fixed this in psrchive commit 3cbc71b, take a look for any issues. This did fix the segfault, and dspsr now produces valid PSRFITS output. It's a bit noisy as it prints an error message (see below) on every output file. But I guess it's all handled correctly since the files appear valid and do not contain...
OK thanks Willem! If I implement this I would probably do 1 first then 3 but I agree they are both worth doing.
OK there is a new conda-forge package (still v2023.01.24, build number 4) that enables the relevant option and restores Archive_load(). Useful to note that Archive.load() also works, so that may be a more "future-proof" choice, and any modifications made in the past day don't necessarily need to be reverted. I'll leave this ticket open for now since this should eventually be addressed in psrchive configure as well, to automatically enable -flatstaticmethod when swig version >= 4.1.0.
One more bit of info for the record: this change appeared in swig 4.1.0. From https://www.swig.org/Release/CHANGES:

2022-03-21: jschueller, jim-easterbrook, wsfulton
    [Python] #2137 C++ static member functions no longer generate a "flattened" name in the Python module. For example:

        s = example.Spam()
        s.foo()            # Spam::foo() via an instance
        example.Spam.foo() # Spam::foo() using class method
        example.Spam_foo() # Spam::foo() "flattened" name

    The "flattened" name is no longer generated, but can be generated...
Just investigated a bit, it appears that psrchive.Archive.load() works with the new version in place of psrchive.Archive_load(). So that can be used as a short-term workaround. This is almost certainly due to a swig version change, I will work on making a new package that supports the previous syntax (either in place of or in addition to the new).
Newest Version Is Unable To Import Archive_load Function
hi Jacob, sorry for the trouble, I will take a look today. I assume you are using the conda-forge package?
compiling python bindings on macos BigSur
OK, thanks, I will go ahead and make that change in order to get it building again. More generally, the python interface started out as a way to get the basic data-container classes (Archive, Integration, Profile) available in python. That was a pretty well-defined scope, and it was not intended to cover all possible psrchive functionality. Of course since then many people (including me!) have added other stuff in as needed. Definitely worth considering if there is a better approach, it's been a...
PS @sixbynine was the one who added FrontendCorrection to the python interface, maybe he could comment?
I ran into this also while trying to update the conda-forge package (swig 4.1.1, gcc 11.3.0). This was on linux, so it's not MacOS-specific; more likely it's due to the swig version, as Stefan found. I spent some time looking into it; the problem is that swig's scheme for wrapping generic objects is failing on Jones and the complicated constructor rules in Jones.h. In particular, C++ is insisting on trying to use this constructor to cast the wrapped-Jones back to real-Jones:

//! Construct from a castable type
template<typename...
Hi guys, looks like I wrote that code originally. I agree using the "expert" interface seems like an odd choice unless it was necessary for some reason. But it's been way too long to remember what that was. I think someone will just have to make the change and see what happens.
Hm maybe the problem now is just with pacv, though likely originates in the fluxcal-related changes over the past few months. The "no points to plot" also shows up for fcal files created with an older psrchive version, so I'd guess the fits files are OK.. I think I will check in my changes so far but we should keep trying to understand what is happening with pacv.
I can reproduce this and spent some time debugging this morning. Not fully solved, but a couple of issues so far:

1. If fluxcal -g is not being used, the new columns in the flux cal FITS table should be removed; that will clean up the FITS errors Scott mentioned. This did not fix the segfault though.

2. In FluxCalibrator::create the constant_scale attribute is being used before it's initialized. Fixing this fixes the segfault, but I think there still may be other issues, since pacv claims "no points to plot"...
This sounds very similar to issue #446, which was fixed a while back. What version of psrchive are you using?
OK, a new psrchive release (2022.05.14) is now available on conda-forge.
Output statistic descriptions in help
Hi Wael, I can generate a new release (and conda package) in the next couple days. Willem do you have anything else you're working on that you'd like to get in first? We don't have a specific release schedule but tend to do them every few months. I agree conda install is much easier to deal with, but if you want new features or bug fixes on a quicker timescale you could consider building from source.
hi Wael, yes it looks like there is a compile-time setting needed for psrcat. We might be able to fix this in a future conda build. Until then you could just provide a parfile on the command line via -E instead of using the automatic psrcat lookup. Or, since psrinfo is the fallback you could probably make a script named "psrinfo" that calls psrcat appropriately..
Does -K work on its own in this case (ie, without also using -f)?
Hi Scott, from a quick look at the code I think dedispersion (-K option) should be applied before fscrunch (-f) if both options are used. I guess that is not what you are seeing?
hi Scott, you may need to update epsic (which is a submodule now). I think just "git submodule update" from the top level dir does it.
Missing pvdriv.patch when installing PGPLOT
Thanks Yiqian, this will be fixed in the next release tarball. In the meantime you could download the missing .patch files from here: https://sourceforge.net/p/psrchive/code/ci/master/tree/packages/
Thanks Willem!
I think this is fixed in commit 29373ef4d. Can you give it a try?
OK, this is a python3 "string vs bytes" issue. If you do f.convert_state(b'Stokes') that also appears to work. I'll see if I can update the wrapper to handle this automatically.
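Something along these lines is what I have in mind for the wrapper, sketched in pure python with a stand-in for the swig-generated method (hypothetical code, not the actual psrchive binding):

```python
# Decorator sketch: encode any str positional arguments to bytes before
# handing them to the underlying (hypothetical) swig-wrapped method.
def bytes_compatible(func):
    def wrapper(*args):
        args = tuple(a.encode() if isinstance(a, str) else a for a in args)
        return func(*args)
    return wrapper

# Stand-in for a wrapped method that only understands bytes under python3,
# like convert_state in the report above.
@bytes_compatible
def convert_state(state):
    assert isinstance(state, bytes)
    return state

print(convert_state('Stokes'))   # str is encoded automatically
print(convert_state(b'Stokes'))  # bytes pass through unchanged
```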
Hi Maciej, I can reproduce this on my system. You're right that it used to work, I'll see if I can track it down. In the meantime, if you need a workaround, you can access psrsh commands from python using execute() ie f.execute('state Stokes').
One last(?) thought about this is that, longer-term, it might be useful for PSRCHIVE to optionally be able to just write the data as floats and avoid this conversion entirely. PSRFITS supports this and PSRCHIVE can already read this flavor of file. Saving a factor of 2 in disk space is nice but not always necessary, depends on the context.
Regarding the expected level of error in this calculation (ie "what N should be"), I believe the error in the packed (16-bit) values at the profile min or max will be on the order of:

int16_err ~ few * INT16_MAX * (D/R) * 1e-7

where INT16_MAX = 32767, D is the DC level in the profile data, and R is the range (max-min) of the profile data. 1e-7 is the dynamic range of single-precision floats. The "few" is to account for accumulated error over the course of the scale/offset computation; it's maybe something...
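For anyone who wants to check this estimate numerically, here is a self-contained sketch. It mimics PSRFITS-style scale/offset packing in numpy with float32 intermediates; it's an illustration of the effect, not psrchive's actual packing code:

```python
import numpy as np

def pack_unpack(profile, headroom=16):
    # Quantize a float profile to int16 counts via scale/offset computed in
    # single precision, then restore it, so we can measure the round-trip error.
    p = np.asarray(profile, dtype=np.float32)
    lo, hi = p.min(), p.max()
    # map [lo, hi] onto the int16 range, leaving 'headroom' counts of margin
    scale = np.float32((hi - lo) / (2.0 * 32767 - 2.0 * headroom))
    offset = np.float32(0.5 * (hi + lo))
    counts = np.clip(np.round((p - offset) / scale), -32768, 32767)
    restored = counts.astype(np.float32) * scale + offset
    return restored, scale

# profile with DC level D large compared to its range R
D, R = 1.0e4, 1.0
profile = D + 0.5 * R * np.sin(np.linspace(0.0, 2.0 * np.pi, 1024))

restored, scale = pack_unpack(profile)
err_counts = np.abs(restored - profile).max() / scale
predicted = 32767 * (D / R) * 1e-7  # the formula above with few = 1
print(err_counts, predicted)  # both come out at tens of counts
```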
OK I've checked in a fix in cc245bd28, using N=16 and also clipping the range at INT16_MIN, INT16_MAX. Still running some tests but so far I have not run into any cases where this fails. If you want to make any additional adjustments please go ahead!
Checking that the values are in range (and clipping them if not) after applying scale/offset but before casting to int16 may also be a good idea. Perhaps in combination with a slight reduction in range as I suggested above.
Here is the ASCII profile file from the example.
PSRFITS scale/offset 16-bit overflow
Thanks Willem!
UnaryStatistic location/dependence
Thanks for working on this! The "empty name" message came because Scott's par file originally used "PSR" rather than "PSRJ" for the pulsar name parameter. Changing it to PSRJ resulted in the "not implemeted for PSRINFO" error message as you expect.
Cannot update ephemeris
Scott and I looked at this a bit. The problem comes from code added in commit c9ff72324ca2 (2019/05/14) in T2Generator.C, in the set_parameters() function. It looks like this is trying to look up settings for T2 predictor generation using psrcat. But it's not properly checking whether psrcat is actually available, which leads to the error Scott mentioned.
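For illustration, the missing guard amounts to something like this, shown in python with shutil.which (the real fix would be in the C++ code, and the psrcat invocation here is just an assumed example):

```python
import shutil
import subprocess

def psrcat_available():
    # Check that psrcat actually exists on PATH before trying to run it.
    return shutil.which('psrcat') is not None

def lookup_parameters(psr):
    # Fall back gracefully instead of erroring out when psrcat is absent.
    if not psrcat_available():
        return None
    result = subprocess.run(['psrcat', '-e', psr],
                            capture_output=True, text=True)
    return result.stdout
```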
Appending data in frequency in psrchive python interface
I just added a simple example of how to use this in the More/python/examples/ subdir.
Hi guys, FrequencyAppend has been available in the python interface for a long time. Cheers, Paul
hi Ramesh, I don't think there is any way to distinguish Stokes vs coherence in the filterbank headers. If you are using dspsr for folding, I think the easiest thing to do is add "-j 'e state=Stokes'" to the dspsr command line. This will update the headers in the folded output file to say Stokes params. I think the only tricky thing to look out for with multi-poln filterbank files is issues regarding signed vs unsigned ints. For 8 and 16-bit data, the sigproc reader in dspsr currently assumes all data...
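For reference on the signed vs unsigned point: the two interpretations share the same bit patterns, so data read with the wrong convention can be reinterpreted after the fact. A quick numpy illustration (not dspsr code; whether this applies depends on how the file was written):

```python
import numpy as np

# The same raw bytes viewed as unsigned and as signed 8-bit samples:
# values >= 128 wrap around to negative when reinterpreted.
raw = np.array([0, 127, 128, 255], dtype=np.uint8)
signed = raw.view(np.int8)
print(signed)  # [0, 127, -128, -1]
```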
hi Ramesh, I downloaded the data, and was able to reproduce your results. It turns out rescaling is not automatically disabled for floating-point output. If you add '-I0' to your command line this disables rescaling and the results look good. I suppose we could make this the default behavior, since there is really no need for rescaling when the output is floats. Any other opinions about this? Regarding your question on polarizations, in the output .fil file the data are in TPF order. Cheers, Paul...
hi Ramesh, Can you send the full digifil command line you were using? thanks, Paul
Hi Maciej, The "-32" setting was intentional. This is following the same convention that is used in FITS (see for example the standard BITPIX keyword), where a negative number denotes that the output is floating point, and positive numbers mean integer values. So in this scheme +32 (if implemented) would mean 32-bit integer values. You can get float output by using "-b-32" on the command line; I think you need to leave out the space between the option and argument to avoid command-line parsing confusion....
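To illustrate the sign convention, here is the BITPIX-style mapping in numpy terms (this table is just an example of the convention, not digifil's internal code):

```python
import numpy as np

# FITS BITPIX convention: positive values mean integer samples of that bit
# width, negative values mean IEEE floating point of that width.
def bitpix_to_dtype(bitpix):
    table = {8: np.uint8, 16: np.int16, 32: np.int32,
             -32: np.float32, -64: np.float64}
    return np.dtype(table[bitpix])

print(bitpix_to_dtype(-32))  # float32, i.e. "-b-32" requests float output
print(bitpix_to_dtype(16))   # int16
```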
Ok, great! I have noticed a few other bugs that come up when using coherent dedispersion...
So, I think the main thing that is not quite working yet in the multi-threaded version...
hi Ramesh, Yes, there are some differences between multi- and single-threaded digifil...
Merge pull request #1 from smearedink/chime
Incorrect handling of VDIF 8-bit samples
hi Paul, You're right, the current VDIF implementation in dspsr is not a full implementation...
I added the --ephver option to pam to allow you to manually specify tempo or tempo2....
pam -E chooses poorly: tempo vs. tempo2