Hi,
I haven't read the whole thread, but based on the OP I suggest downloading the Astrometry.net packages for offline use - they're open source and freely available. Astrometry.net also offers a CLI for command-line use (e.g. Linux /bin/bash), which means batch processing and scripting should be possible!
You may also want (or need) to download one or more object catalogs from their huge collection.
Please visit Astrometry.net for details.
Regards,
too
That's not really relevant. I'm aware of Astrometry.net, and I like & regularly use their Web interface, but the question was about 1) specifically ASTAP and 2) usage as a static library in other apps, not CLI / scripting (which ASTAP also has).
So testing seems to work on other images too, but it looks like I've just replaced one problem with another - where ASTAP requires specifying the FOV to choose stars of similar density from the database, I don't rely on the FOV, but instead need to specify the correct limiting magnitude. I guess I'll have to step through magnitudes too, like ASTAP steps through FOVs in its "auto FOV" mode, although this is a bit underwhelming 😅
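To make that concrete, the kind of stepping loop I have in mind is something like this - just a sketch with made-up bounds and step size, not code from my actual solver:

    /// Try a range of limiting magnitudes from bright to faint, calling a
    /// caller-supplied solve attempt for each one, similar in spirit to ASTAP's
    /// "auto FOV" stepping. The bounds and the step are placeholders.
    fn solve_auto_mag<T>(mut try_solve: impl FnMut(f64) -> Option<T>) -> Option<T> {
        let mut limiting_mag = 8.0;
        while limiting_mag <= 16.0 {
            if let Some(solution) = try_solve(limiting_mag) {
                return Some(solution);
            }
            limiting_mag += 0.5; // finer steps may be needed towards the faint end
        }
        None
    }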
Another issue where I'm stuck is that I'm using the SEP library for star detection. I've been working on and upstreaming various improvements and fixes there, but it's extremely slow on some images - e.g. for the 24MP pictures from my camera it can easily take up to 70 seconds on my laptop to detect all stars before hashing can even begin. I think I'll look into implementing star detection myself, although I'm not too happy about that prospect, as it will require even more careful testing.
Nice
Yes, many details have to be worked out. In every phase you have to keep the execution efficient. I'm currently working on some fine-tuning of the background detection - still, after 2.5 years of fine-tuning.
The SEP library should be fast. It must be a setting.
Maybe - I've just been using the default settings they use themselves. Plus, it probably depends on how crowded the field is: the particular image I've been referring to (the one that took 70 seconds) contained ~34,000 detected stars. It was shot at 200mm, but with a long enough exposure that I suspect those aren't just noise.
Well, I've gotten a bit busy at work, but I have been poking at it a bit more on weekends, upstreaming some API improvements, thread-safety fixes etc., to the point where I now have commit rights for SEP 😅
Unfortunately, even with tweaked parameters, SEP has still been unbearably slow on the large images in my testing set (that is, my own beginner astrophotography images - 24MP TIFFs with plenty of noise), so I decided to give up and write my own simple implementation of star detection.
I figured that for solving I don't need it to be as robust, or to detect faint stars, or shape parameters, or even to deblend very close objects; instead I can try to trade most of those features for speed - as long as I can remove the background and detect large-ish stars, I should be fine.
I wasn't sure if it would actually work, and there were a few minor issues along the way, but now I'm very excited because it does! I'm finally getting matches on the few images I've tried so far in under 5 seconds, where previously I had to wait 2-3 minutes just for the detection to finish.
Attached is an annotated example of M100 showing the stars detected by my simple thresholding algorithm + Lutz.
Oh, by the way, I reimplemented the Lutz algorithm from SExtractor / SEP in Rust, and it's already published as a separate crate in case someone wants to reuse it: https://github.com/RReverser/lutz
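For the curious, the core of my detection is roughly the following - a simplified sketch that uses a single global background estimate; the actual detection removes the background and then groups the thresholded pixels with the Lutz algorithm from the crate above, so take the details here as illustrative only:

    /// Estimate a global background and noise level, then keep pixels a few
    /// sigma above it. The surviving pixels are later grouped into blobs and
    /// each blob's centroid becomes a star candidate.
    fn detect_bright_pixels(pixels: &[f32], width: usize, sigma: f32) -> Vec<(usize, usize)> {
        // Global background via the median, noise via the median absolute deviation (MAD).
        let mut sorted: Vec<f32> = pixels.to_vec();
        sorted.sort_by(|a, b| a.partial_cmp(b).unwrap());
        let median = sorted[sorted.len() / 2];
        let mut dev: Vec<f32> = pixels.iter().map(|p| (p - median).abs()).collect();
        dev.sort_by(|a, b| a.partial_cmp(b).unwrap());
        let noise = dev[dev.len() / 2] * 1.4826; // MAD -> approximate standard deviation

        // Keep pixel coordinates that are `sigma` noise levels above the background.
        let threshold = median + sigma * noise;
        pixels
            .iter()
            .enumerate()
            .filter(|&(_, &p)| p > threshold)
            .map(|(i, _)| (i % width, i / width))
            .collect()
    }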
Hello Ingvar,
That screenshot looks good. More than sufficient detections for solving.
The Lutz algorithm is an interesting approach. Note that SExtractor was designed for deep-sky contour detection, not for star detection, but it works superbly for stars. Puzzling why SExtractor works so slowly for you. I never compiled/really used it, but I have seen it work in seconds.
As said before, exploring and experimenting with algorithms is the way to go. With field testing you can fine-tune the algorithm. I have downloaded hundreds of images from nova.astrometry.net and use them for performance testing. In addition, with my setup I have made hundreds of very short exposures near the detection limit, to test which factors give the best solving performance.
Han
It did work in seconds for me too on some images - in particular, on the original M100 test image from this thread, as well as on some of the larger images if the parameters are fine-tuned well enough - but for most others it slowed down significantly. I suspect there is a non-linear dependency on the number of detected objects somewhere in the code.
Anyway, experimenting with a custom algorithm proved both interesting and educational, and so far it looks like a good enough replacement.
This is a superb idea. So far I've been testing only on my own images of varying quality, but Astrometry.net is indeed a great source for testing all sorts of complicated scenarios.
In fact, I tried that just now, and the very first image I stumbled upon (the most recently uploaded at the moment of writing) - http://nova.astrometry.net/user_images/4625041#original - revealed a crash in one place in my code. I've since fixed that one, but I still can't get the image to solve, or even to properly detect stars, just yet.
In my defence, that example seems... interesting in more than one way :) I'm impressed that Astrometry.net manages to solve it. I couldn't even solve it with ASTAP on my first try, but maybe I just need to tune some parameters more carefully.
Anyway, I'll continue exploring next weekend.
Huh, I tried a few more images, and it looks like I'm struggling to solve all of them, even visually perfect ones like http://nova.astrometry.net/image/10716911. Stars seem to be detected correctly, so it's likely an issue in the matching algorithm.
I was perhaps too encouraged by the fact that ~all of my own images are solving now; but then, even though I have plenty of examples with strong light pollution, noise etc., they're all taken with roughly the same FOV, so I guess it makes sense that trying images with other FOVs reveals new issues.
ASTAP can't solve that image either. Note that many images on Astrometry.net are problem images: they are not solvable by Pinpoint, Platesolve2 or ASTAP, or people upload their stacking result highly stretched. You have to browse and only download images of reasonable quality, maybe 5% of what is available. Something like:
https://nova.astrometry.net/user_images/4625130#annotated
https://nova.astrometry.net/user_images/4625460#annotated
https://nova.astrometry.net/user_images/4625379#annotated
https://nova.astrometry.net/user_images/4624923#annotated
Another problem is that 95% of the images today are .jpg files. That doesn't help either. It would be nice if you could filter on .fits, but that is not possible.
Han
I looked to see if there is a way to download images automatically from nova.astrometry.net, but could not find one. Images originating from a FITS file upload have become very rare. You really have to browse for images containing small stars, with a gray, noisy background, and check whether the source is a FITS file. Too many are from a JPEG source.
Han
If you were referring to the image from my last message - http://nova.astrometry.net/image/10716911 - and not the one before, then ASTAP actually does solve it for me, although it requires specifying a close enough FOV (0.7 or 0.8 deg).
Thanks for the hints, I'll try those other examples you provided.
In general, I would love to be able to solve those too, since JPEGs are often part of my workflow as well: a DSLR produces either RAW or JPEG, and JPEG is the quickest to get out of the camera "manually" and pass to a solver. But we'll see - I'm happy to start with better images and work my way down from there.
Huh, I've tried the first one, but couldn't detect any stars there. Debugging showed a high RMS, and, upon a closer look, I understand why - it shows a pretty harsh matrix pattern due to the different sensor structure, which makes it quite different from the DSLR images I've used so far. I guess I'll need to spend some time playing with different blurring approaches.
I've also tried to load it in ASTAP and use the "test button to show quads", but ASTAP doesn't seem to draw any. However, solving does work, and I see in the log that it detected a sufficient number of stars - is there a reason why the quads test button wouldn't work on such an image?
(immediate quick update - a simple Gaussian blur certainly helps, although I can probably play with it a bit more)
> I've also tried to load it in ASTAP and use the "test button to show quads", but ASTAP doesn't seem to draw any. However, solving does work, and I see in the log that it detected a sufficient number of stars - is there a reason why the quads test button wouldn't work on such an image?
The CCD inspector will only show stars with an SNR of 30 or more, for accuracy reasons. In the past it was >10. There is a CCD inspector variant for SNR >10 in pixel math 2; this will show all stars used for solving:
> Huh, I've tried the first one, but couldn't detect any stars there. Debugging showed a high RMS, and, upon a closer look, I understand why - it shows a pretty harsh matrix pattern due to the different sensor structure, which makes it quite different from the DSLR images I've used so far.
That image is a typical raw image from a one shot color (OSC) sensor. The red, green and blue pixels of the 2x2 Bayer matrix have a different response to the sky. Best is to bin 2x2, or alternatively apply a 2x2 box filter. If you look in the viewer Tools menu of ASTAP, under CCD Inspector there is an option to do this automatically for "Bayer" images.
In most cases ASTAP will bin this type of image because the resolution is high, but this one is just below the resolution threshold for selecting binning. As soon as you apply a box filter or binning, the number of detected stars increases.
As a fallback for the CCD inspector, I will add an SNR >10 detection if not enough stars are detected, or a second menu with this option.
Right, binning works well too - that's what I switched to as well before I saw your message. A Gaussian filter was a bit quicker to try only because it's built into the image library I've been using, but binning is very simple to implement by hand.
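For reference, the whole 2x2 superpixel binning is basically this (a simplified sketch, assuming a single-channel f32 buffer with even dimensions):

    /// Bin a Bayered frame 2x2 into superpixels by averaging each full RGGB cell,
    /// which evens out the per-channel response before star detection.
    fn bin2x2(pixels: &[f32], width: usize, height: usize) -> (Vec<f32>, usize, usize) {
        let (w2, h2) = (width / 2, height / 2);
        let mut out = vec![0.0f32; w2 * h2];
        for y in 0..h2 {
            for x in 0..w2 {
                // Top-left pixel of the 2x2 cell in the source buffer.
                let i = 2 * y * width + 2 * x;
                out[y * w2 + x] =
                    (pixels[i] + pixels[i + 1] + pixels[i + width] + pixels[i + width + 1]) / 4.0;
            }
        }
        (out, w2, h2)
    }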
As for solving, it appears I can solve the images you linked once stars are detected, but I'm running into more cases where my choice of limiting database stars by magnitude, rather than by FOV, turns out to be a self-inflicted problem.
Basically, it worked great for pictures of brighter stars with very rough magnitude limits, but as I move to pictures of dimmer star fields, the limit has to be set more and more precisely to make sure that just the right number of stars is selected and that the nearest neighbours correctly match what's in the picture.
With the FOV it's easy to get a fairly precise number, because it's an inherent property of the camera/lens combo and easy to calculate even if it's not known in advance, but with magnitudes... well, not so much, and it becomes more of a guessing game. It appears I'll have to rework a large part of my solving algorithm to use FOVs after all.
Oh well, it was a fun experiment if nothing else :)
I'm now re-reading early messages from this thread, and I suppose that's exactly what you were warning about in this message, but it was too early in the process for me to understand the problem :)
> The density of the stars per square degree should be calculated and matched with the database request. Only if the star density of the retrieved stars is similar between the image and the database will the quad shapes match in most cases. Since the database is sorted from bright to faint and the same amount of stars is retrieved, the stars from the image and the database will match if they are from the same sky area. Simply said, if the image is showing stars up to magnitude 14, then the database retrieval should also go up to magnitude 14. But the magnitude is not known in advance, so alternatively the star density (stars per square degree) is used.
At least now I see it, thanks for the early advice :)
Could you kindly point me to the file / function in ASTAP that does this selection? I'm trying to understand how it's supposed to work - do you just select every Nth star from the database "randomly" as long as the density matches (in decreasing order of brightness), or do you create some sort of mesh based on the minimal distances found in the image, or...? Also, do you add any margin to the density found in the image, to include more stars or anything like that?
I guess for now I'll just try continuously reading stars from each of the 1476 files until the required density is reached - since they're all sorted by brightness, I guess this should give me a good enough distribution.
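Something along these lines - just a sketch with made-up types; the DbStar record and the tile area are placeholders, not the real database format:

    /// A minimal star record (placeholder; the real database layout differs).
    struct DbStar {
        ra: f64,
        dec: f64,
        mag: f32,
    }

    /// Take the brightest stars from one brightness-sorted database tile until the
    /// target density (stars per square degree, as measured on the image) is reached.
    fn select_tile_stars(
        tile_stars: &[DbStar],   // stars of one tile, sorted from bright to faint
        tile_area_deg2: f64,     // sky area covered by the tile
        target_density: f64,     // stars per square degree measured on the image
    ) -> &[DbStar] {
        let wanted = (target_density * tile_area_deg2).ceil() as usize;
        &tile_stars[..wanted.min(tile_stars.len())]
    }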
The star density calculation is pretty simple. Assume the image height is defined as 2 degrees, 500 stars are detected, and the image pixel dimensions are 1920x1080. Then:
image width [deg] := 2 x 1920/1080
image surface [deg^2] := 2 x (2 x 1920/1080)
star density of the image [stars/deg^2] := 500 / (2 x (2 x 1920/1080))
Each time, read the database for a square area of 2x2 degrees up to a density of 500 / (2 x (2 x 1920/1080)) stars/deg^2, or simpler, just read 500 x 1080/1920 stars.
Solving will work up to about 30 to 40% FOV error.
That's all.
In the code it is written as:
nrstars_required:=round(nrstars*(height2/width2));{square search field based on height.}
But in later versions there are two additional factors added, oversize and extrastars.
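Or written out as a small function with the example numbers above (Rust just as an illustration; the oversize and extrastars factors are left out):

    /// Number of database stars to read for a square search field based on the
    /// image height, so that the database star density matches the image star
    /// density. Same arithmetic as above, without the later oversize/extrastars.
    fn nrstars_required(nrstars_detected: usize, width_px: f64, height_px: f64) -> usize {
        (nrstars_detected as f64 * (height_px / width_px)).round() as usize
    }

    // Example: 500 detected stars on a 1920x1080 image gives
    // nrstars_required(500, 1920.0, 1080.0) == 281 stars for the 2x2 degree area.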
Hm, I see responding to an email doesn't post an answer here, so I'll duplicate what I sent this morning:
Thanks, I'll take a look. Yeah, calculating the star density is easy, but a "fair selection" seemed problematic - in particular, in the face of star clusters (like the M13 you mentioned before). It seems too easy in theory to run into edge cases where you accidentally "waste" the required number of stars on arbitrary stars in the cluster and miss some others in the view that would be better candidates for polygon building and matching.
Meanwhile, I've tried a quick & dirty approach of just selecting stars up to the required density from each database tile (5x5 degrees), and for the images in question it works surprisingly well! On one of my images taken with a 200mm lens, though, it seemed to require specifying a smaller FOV than the actual FOV of the camera; I suspect this is due to curvature, which makes the dependency between the angular FOV and the pixel dimensions non-linear as lenses get wider. It's not too hard to account for, though.
I suspect I'll have to do a lot more tuning to account for star clusters / busy star fields in general, but so far this change seems pretty promising.
So I fixed a few more bugs and implemented CD matrix generation, but got stuck on one... I guess, math problem: how does the CD matrix account for wraparounds around the celestial poles?
I've been reading about the tangential projection it uses, and in general it makes sense to me that for images with small FOVs we can assume (x,y)->(RA,Dec) is linear to fairly good accuracy, but surely any vector crossing the celestial pole would throw that math off?
Sadly, documentation around this matrix and its common representations seems somewhat scarce, so I've mostly been trying to figure out what it's supposed to do experimentally, as well as from ASTAP's code.
It appears that when solving images that include Polaris / the NCP, it produces a matrix that can result in Dec > 90 deg for certain parts of the image. I guess that makes sense, but I wonder whether this is considered acceptable in this projection when other apps read the resulting FITS?
Another question: I've been getting the CD matrix by solving another system to obtain the transformation matrix for (x,y)->(RA,Dec) directly from (0,1) and (1,0) steps, but I see that ASTAP uses a more direct approach to calculating those rows. I suppose ASTAP's variant is a bit simpler, but then I don't fully understand the delta_ra wraparound logic it uses.
The first branch makes sense for values >180 deg - e.g. if the difference turned out to be 270 deg, turn it into -90 deg, since that's more likely the correct representation.
The second branch seems odd though - if the difference is <-180 deg, then instead of adding 360 deg, so that e.g. -270 deg would transform into +90 deg, it... subtracts 360 deg, so e.g. -270 deg would turn into -630 deg, which only increases the wraparound. Am I missing a reason why it's done this way, or is it a bug?
The math around the pole works; it is a little tricky.
https://sourceforge.net/projects/astap-program/files/some%20documentation%20and%20info/Methode%20de%20calibration%20astrometrique.html/download
Formulas 6) and 7), but use the atan2(x,y) function. The standard atan will in some cases fail near the poles.
The two lines give the answer to what the delta RA is when the RA changes from -> to:
359 --> 1 degrees then the delta RA is +2 degrees.
or
1 --> 359 degrees, then the delta Ra is -2 degrees.
This problem occurs when the RA is near zero. Maybe there is a better way, but that's what I implemented.
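In compact form, the inverse tangential (gnomonic) projection with atan2 looks something like this - a sketch of the standard textbook form, not a copy of the ASTAP source:

    use std::f64::consts::TAU;

    /// Inverse tangential (gnomonic) projection: map standard coordinates (xi, eta),
    /// in radians relative to the reference point (ra0, dec0), back to (RA, Dec).
    /// Using atan2 keeps this well-behaved near the celestial poles, so Dec never
    /// comes out above 90 degrees.
    fn standard_to_equatorial(xi: f64, eta: f64, ra0: f64, dec0: f64) -> (f64, f64) {
        let denom = dec0.cos() - eta * dec0.sin();
        let ra = (ra0 + xi.atan2(denom)).rem_euclid(TAU); // keep RA in [0, 2*pi)
        let dec = (dec0.sin() + eta * dec0.cos()).atan2((xi * xi + denom * denom).sqrt());
        (ra, dec)
    }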
Right, that's how I understood the intent from the comment too, but what I'm saying is that the code on that line does something other than what the comment describes.
So you're saying that if delta_ra comes out as 1 - 359, that is, -358 degrees (your "359 --> 1" case), this code should turn it into +2 degrees, but instead it turns it into -718 degrees. That's why I suspect that line has a bug, and it should have been delta_ra := delta_ra + 2*pi instead.
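In other words, the behaviour I'd expect from those two branches is simply (a trivial sketch, not the actual ASTAP code):

    use std::f64::consts::PI;

    /// Wrap an RA difference into (-pi, pi]: +358 deg becomes -2 deg and
    /// -358 deg becomes +2 deg.
    fn wrap_delta_ra(mut delta_ra: f64) -> f64 {
        if delta_ra > PI {
            delta_ra -= 2.0 * PI;
        }
        if delta_ra < -PI {
            delta_ra += 2.0 * PI; // add, not subtract, so -358 deg -> +2 deg
        }
        delta_ra
    }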