
Standalone solving library?

2020-10-04
2022-05-27
  • han.k

    han.k - 2021-05-20

    You are correct, there is an error in both formulas. I have changed it now to:

    if delta_ra>+pi then delta_ra:=delta_ra-2*pi; {1 -> 359,    -2:=(359-1) -360 }{rev 2021}
    if delta_ra<-pi then delta_ra:=delta_ra+2*pi; {359 -> 1,    +2:=(1-359) +360 }
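
    For reference, a minimal Free Pascal check exercising the two corrected branches (the wrapped values match the comments above; delta_ra here is just a local variable, not ASTAP's):

    program delta_ra_wrap_test;
    uses math;

    function wrap_delta_ra(delta_ra: double): double;
    begin
      if delta_ra > +pi then delta_ra := delta_ra - 2*pi; {e.g. 1 -> 359 gives -2 deg}
      if delta_ra < -pi then delta_ra := delta_ra + 2*pi; {e.g. 359 -> 1 gives +2 deg}
      wrap_delta_ra := delta_ra;
    end;

    begin
      writeln(radtodeg(wrap_delta_ra(degtorad(359 - 1))):0:3);  {prints -2.000}
      writeln(radtodeg(wrap_delta_ra(degtorad(1 - 359))):0:3);  {prints  2.000}
    end.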
    

    This is very theoretical. It only happens if the RA is exactly at 0 within one pixel. Anyhow, I have tested it with two images from the DSS survey. Now it works correctly for positive and negative values:

    Test images:
    https://ufile.io/thqx4yjc

    DSS survey:
    https://archive.eso.org/dss/dss

    These scanned images are more difficult to solve than CCD images.

     

    Last edit: han.k 2021-05-20
  • Ingvar Stepanyan

    This is very theoretical.

    Oh yeah, absolutely - just thought I'd bring it up anyway and verify that I'm not misunderstanding the formulas.

     
  • Ingvar Stepanyan

    So... I've been staring at the description of equal area method described at https://www.hnsky.org/help/uk/hnsky.htm#database long enough now and I'm still feeling confused.

    Initially I thought - based on the name of the method - that each HNSKY area represents roughly the same area (in sq. degrees), but that seems to contradict what the table suggests, where it describes various numbers of areas of equal size as well as numbers of HNSKY areas (which are apparently not the same) per declination ring.

    Another thing I find confusing is that it says "circles of constant declination are assumed", but in the table the declination step doesn't seem constant - it's varying from ~12.79 deg to ~4.77 deg.

    I've tried to look at the code too, but didn't understand much from it either, like why the area number (size?) is calculated as a sum of some number sequence... are those the numbers of cells per ring?

    More generally, how do I get boundaries & area in sq. degrees by filename rather than by given coordinates? And, related question, do you plan to document the 1476 database (which is, really, the one I'm interested in and which I assume follows the same principle but with different constants) on the same HNSKY page? Looking up constants in Pascal code is possible, but a bit less convenient.

    Sorry if it's a lot of questions; as you can probably guess, for someone who didn't do this kind of database structuring, it takes time to wrap my head around it :)

     
  • Ingvar Stepanyan

    One more thing I should've mentioned is that I don't understand why asin(1-1/1475) and so on doesn't produce the declination minimum numbers in the table for the 1476 database, similarly to how asin(1-1/289) works for the 290 database.

     
  • Ingvar Stepanyan

    Hm, it really helps to put a question in writing. I guess my mistake that confused the other observations was assuming that the 290 and 1476 databases are split in the same way, just with different numbers of tiles per ring and overall, but apparently they're not? It looks like the 290 database uses declination circles spaced at a constant height of the segments on a unit sphere, whereas in the 1476 it's a constant step in degrees (not counting the NCP area, where it's step/2).
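
    To make the distinction concrete, here is a minimal Free Pascal sketch contrasting the two schemes. The constant-step boundaries below do reproduce the 87.43, 82.29, ... circles of the 1476 layout (step 90/17.5 with a half step at the pole), but the general-k form of the equal-area boundary is only my extrapolation from the asin(1-1/289) expression quoted above, not a confirmed description of the 290 database.

    program ring_split_sketch;
    {Equal-area rings keep the step in sin(dec) constant, so every ring has
     the same area on the sphere; constant-step rings keep the declination
     step itself constant.}
    uses math;

    {Equal-area style: boundaries equally spaced in sin(dec), half a step at
     the pole. For n=289 and k=1 this gives asin(1-1/289).}
    function equal_area_boundary(k, n: integer): double; {degrees}
    begin
      equal_area_boundary := radtodeg(arcsin(1 - (2*k - 1)/n));
    end;

    {Constant-step style: boundaries equally spaced in declination, half a
     step at the pole. For step=90/17.5 and k=1..4 this gives 87.43, 82.29,
     77.14, 72.00.}
    function constant_step_boundary(k: integer; step: double): double; {degrees}
    begin
      constant_step_boundary := 90 - (2*k - 1)*step/2;
    end;

    var
      k: integer;
    begin
      for k := 1 to 4 do
        writeln(equal_area_boundary(k, 289):12:5, constant_step_boundary(k, 90/17.5):12:5);
    end.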

    If they're different enough, documentation for the 1476 format (similar to the linked one for 290) would prove even more useful.

     
  • han.k

    han.k - 2021-06-03

    I have written something in the help file. Read this and tell me if it is clear:

    http://www.hnsky.org/astap.htm#1476

    Han

     
  • Ingvar Stepanyan

    Thanks, that helps!

    I've added calculation of the min/max dec & RA for each tile and simple logging to print any stars that don't match my calculation, to verify that they're correct. I found only a handful of stars (117 in the whole database) that don't belong to the tiles I expected. In each case the dec or RA of the star was off from the expected boundaries by less than 10^-5, so I assume it's just floating-point precision adding up in some cases.

    Minor suggestions:

    1) I think it would be useful to link to the 290 documentation page, or include a similar visualisation of the globe; it really helped with my initial understanding.
    2) This formatting (see attachment) looks a bit off.
    3) I remember the changelog that introduced the 1476 database mentioning that the tiles are approximately 5x5 degrees - perhaps worth including that here too?

    And a follow-up question:

    If I understand the description correctly, there are 1474 tiles of equal size + 2 half-tiles at the poles. This should render each tile at 4 * pi * r^2 / (1474*1+2*1/2), where if we substitute r with 1 rad = 180 / pi, we get ~28 square degrees.

    Is that correct? It seems a bit more than 5x5 = 25 square degrees, so I just wanted to make sure my calculation isn't missing something.
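
    For reference, a tiny Free Pascal check reproducing the arithmetic above (assuming 1474 full tiles plus two half tiles, i.e. an effective count of 1475):

    program tile_area_check;
    begin
      {whole sphere in square degrees: 4*pi*(180/pi)^2 = 129600/pi, about 41253}
      writeln('sphere  : ', 4*pi*sqr(180/pi):0:2, ' sq deg');
      {divided over 1474 full tiles + 2 half tiles: about 27.97 sq deg}
      writeln('per tile: ', 4*pi*sqr(180/pi)/(1474 + 2*0.5):0:2, ' sq deg');
    end.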

    Also, are all tiles (except for the poles) guaranteed to be in the vicinity of this square area or are there sufficiently large deviations?

    The reason I'm asking is that for calculating number of stars by density I initially just went with ~25 sq. deg per tile, but when I upped it to 28, I started getting a bit worse selections on some test images. I understand that selection per tile isn't a very precise method to start with, and I should be splitting areas up into smaller chunks (like 1x1 deg) and selecting stars from those instead, but I wonder if I simply missed something and ~25 sq. deg is indeed the correct area for most tiles.

     
  • han.k

    han.k - 2021-06-04

    Your calculation is correct. Only note that the celestial pole tile is a round circle with a diameter of 5 degrees, so the average area is 4*pi*r^2/1476. But near the poles the tiles have different shapes. I designed the 1476 more or less in a spreadsheet; I just tried to keep the minimum width 5 degrees. Having constant RA and DEC boundaries makes it much easier to identify the tile. There are many more advanced ways to divide the sphere, but much more math is required and I assume a simple system could be faster.

    ASTAP reads up to 4 tiles maximum and filters out the square search area required, so for a FOV of one degree most stars are ignored.

     
  • han.k

    han.k - 2021-06-04

    I think it is wise to keep the search area similar to the image FOV. From the search position a square area around it is defined, and a percentage of the required stars is taken from each of the four areas. That will mostly be one area, but could be up to four areas if the search area is near the boundaries of two or more areas.

    The important routine for that is:

    function read_stars(telescope_ra,telescope_dec,search_field : double; nrstars_required: integer; var nrstars:integer): boolean;{read star from star database}

    which calls:

    find_areas( telescope_ra,telescope_dec, search_field,{var} area1,area2,area3,area4, frac1,frac2,frac3,frac4);

    which calls:

    procedure area_and_boundaries1476(ra1,dec1 :double; var area_nr: integer; var spaceE,spaceW,spaceN,spaceS: double); {For a ra, dec position find the star database area number and the corresponding boundary distances N, E, W, S}

    It may look a little complicated, but in principle it is a simple routine to extract stars.

    Han

     
  • Ingvar Stepanyan

    Only note the celestial pole tile area is a round circle with a diameter of 5 degrees.

    Oh, hm. But in that case what does "except for the poles which are half of that size" mean in the description? I've assumed it means that 2 out of 1476 tiles have an area half the size of any other, which would make it /1475, not /1476.

    I think it is wise to keep the search area simular to the image FOV.

    My extraction approach is fairly different from yours, as I was mostly optimising for blind search, and got it fast enough without limits or search area stepping. I have plans to try to add limiting of areas - mostly to reduce RAM consumption, which in my current approach can reach a few GB for small-FOV images - but a little later down the road. Now that I understand how the database tiles are split, it will certainly help.

    Meanwhile I'm trying to improve the accuracy, which means improving the selection of stars from each tile. This is where I currently seem to be worse than ASTAP in terms of FOV allowance. In particular, on a test image I've been trying recently, where the accurate FOV is 0.28 deg, specifying a FOV of 0.29 still works but 0.30 doesn't, while ASTAP handles both well.

    What I found surprising here is that when I was selecting density * (5*5) stars from each tile, it handled FOV 0.30 just fine as well, but when I updated it to the correct area (roughly density * 28), I'm now selecting too many stars at FOV 0.30 and it no longer matches. I guess since the calculations are correct, it might just be an accident that it worked before but not now... I'll try more test images of similar size.

     
  • han.k

    han.k - 2021-06-05

    You're correct, the half size is incorrect. It is 2.5 degrees from the pole, but it continues on the other side of the pole, so the tile is a circle with a 5 degree diameter. I will correct this.

    This half size was only the case in the 290 version but it is no longer the case in the 1476 version.

    The tiles near the pole will have "isosceles trapezoid" shapes. https://cdn1.byjus.com/formulas/2016/04/21122031/Isosceles-trapezoid-300x180.png

    So you had better not use the tile size for density calculations. It is better to filter out the stars of a square area (or circle) around your RA, DEC position of interest. The tiles are only a way to split the data into convenient smaller segments and prefilter the stars. The tile shapes guarantee only that with 4 tiles you can cover an area of 5x5 degrees minimum, often more. If your position of interest is in the middle of a tile, only that tile is required. If your position of interest is at a corner of a tile, you will need four tiles to extract the stars required. For ASTAP I decided to always select the four tiles closest to the point of interest and then extract the required stars from these four tiles.

    You can split a sphere into more or less equal shapes using triangles or hexagons. There are several studies on this topic. I never seriously tried to write such complicated code. See e.g.
    https://en.wikipedia.org/wiki/Geodesic_grid
    If you can write something like that, using a geodesic grid, that would be nice.

    Han

     
  • han.k

    han.k - 2021-06-05

    I have now added two PNG images showing the 1476 areas on a sphere.

     
  • Ingvar Stepanyan

    I see, thanks for the doc updates and clarifications!

    Choosing stars by density per tile was just a quick & dirty approach when I switched from magnitude-based selection (that performed poorly for reasons mentioned earlier in the thread) to density-based selection, and it seemed to work well enough for most test images that I didn't bother implementing something more accurate till I started testing on images with smaller FOVs. I'll look into geodesic grid, thanks for the pointer.

    Before I saw your message confirming that the half size was indeed incorrect, I decided to make another set of calculations to check the areas of all tiles, and added them to the table. I thought the results might be interesting to share here:

    Declination minimum   Declination maximum   Ring   Area (sq. deg)
    -90.00000000 -87.42857143  0-1 20.76949345287245
    -87.42857143 -82.28571429  1-2 55.30168996090863
    -82.28571429 -77.14285714  2-3 36.719374499545985
    -77.14285714 -72.00000000  3-4 32.82552563415302
    -72.00000000 -66.85714286  4-5 30.967953406984208
    -66.85714286 -61.71428571  5-6 29.74204109264403
    -61.71428571 -56.57142857  6-7 28.765988721750357
    -56.57142857 -51.42857143  7-8 28.62834532275075
    -51.42857143 -46.28571429  8-9 28.31901557660324
    -46.28571429 -41.14285714  9-10 27.869872361968845
    -41.14285714 -36.00000000  10-11 27.82729354165913
    -36.00000000 -30.85714286  11-12 27.582751949594485
    -30.85714286 -25.71428571  12-13 27.16354119714424
    -25.71428571 -20.57142857  13-14 27.013786292884713
    -20.57142857 -15.42857143  14-15 27.080340913960526
    -15.42857143 -10.28571429  15-16 26.931395649928167
    -10.28571429 -5.14285714  16-17 26.97142448832142
    -5.14285714 0.00000000  17-18 26.79628213455945
    0.00000000 5.14285714  18-19 26.79628213455945
    5.14285714 10.28571429  19-20 26.97142448832142
    10.28571429 15.42857143    20-21 26.931395649928167
    15.42857143 20.57142857  21-22 27.080340913960526
    20.57142857 25.71428571  22-23 27.013786292884713
    25.71428571 30.85714286  23-24 27.16354119714424
    30.85714286 36.00000000  24-25 27.582751949594485
    36.00000000 41.14285714  25-26 27.82729354165913
    41.14285714 46.28571429  26-27 27.869872361968845
    46.28571429 51.42857143  27-28 28.31901557660324
    51.42857143 56.57142857  28-29 28.62834532275075
    56.57142857 61.71428571  29-30 28.765988721750357
    61.71428571 66.85714286  30-31 29.74204109264403
    66.85714286 72.00000000  31-32 30.967953406984208
    72.00000000 77.14285714  32-33 32.82552563415302
    77.14285714 82.28571429  33-34 36.719374499545985
    82.28571429 87.42857143  34-35 55.30168996090863
    87.42857143 90.00000000  35-36 20.76949345287245

    Looks like most tiles indeed have reasonably similar areas in the 27-29 sq. deg range, but I found it surprising that tiles closer to the pole deviate so much from the others - in particular, you can see that splitting ring 1-2 into 3 tiles results in tiles of 55 sq. deg. A few other rings nearby also deviate from the expected ~28 sq. deg value, but less radically.
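
    For reference, a small Free Pascal sketch of the computation behind these numbers: the area of the zone between two declination circles is proportional to the difference in sin(dec), and each ring is divided by its tile count. The count of 3 tiles for ring 1-2 is the one discussed in this thread; the other rings would need their own counts.

    program ring_tile_area;
    uses math;

    {Area in square degrees of one tile in the ring between dec1 and dec2
     (degrees), when the ring is split into ntiles equal tiles in RA.}
    function tile_area(dec1, dec2: double; ntiles: integer): double;
    begin
      tile_area := 2*pi*abs(sin(degtorad(dec2)) - sin(degtorad(dec1)))
                   * sqr(180/pi) / ntiles;
    end;

    begin
      {ring 1-2 with three tiles: ~55.3 sq deg per tile, as in the table}
      writeln(tile_area(-87.42857143, -82.28571429, 3):0:2);
    end.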

    I was wondering if this is a bug and the ring 1-2 should've been split into, say, 6 tiles instead of 3?

     

    Last edit: Ingvar Stepanyan 2021-06-06
  • han.k

    han.k - 2021-06-06

    Yes, the 3 tiles in ring 2 are larger. The reason is that I wanted to have a distance between boundaries of 2.6 degrees, or in other words none of the tiles has a dimension less than 5.2 degrees, whether width, height or diameter. So you need only 4 tiles maximum to cover an area of 5x5 degrees. The tiles of ring 2 have a width of about 5.2 degrees on the celestial pole side; creating more would violate this requirement.
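
    A quick Free Pascal check of that minimum-width requirement, assuming the pole-side boundary of ring 1-2 sits at dec ~87.43 deg as in the table earlier in the thread: the east-west extent of a tile at declination dec is 360*cos(dec)/ntiles degrees, so three tiles stay above roughly 5.2 degrees while four would drop below it.

    program ring2_width_check;
    uses math;

    {east-west width in degrees of one tile at declination dec (degrees)
     when the ring is split into ntiles tiles}
    function tile_width(dec: double; ntiles: integer): double;
    begin
      tile_width := 360*cos(degtorad(dec))/ntiles;
    end;

    begin
      writeln('3 tiles: ', tile_width(87.42857143, 3):0:2, ' deg'); {about 5.4, wide enough}
      writeln('4 tiles: ', tile_width(87.42857143, 4):0:2, ' deg'); {about 4.0, too narrow}
    end.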

    Larger-area tiles are not a real problem. The biggest files, covering the Milky Way, are up to 75 times larger than those near the pole. For any database going deeper than magnitude 18, the tile size covering the Milky Way has to be smaller, and possibly smaller than for the rest of the sky.

     

    Last edit: han.k 2021-06-06
  • han.k

    han.k - 2021-06-06

    Thanks for the calculation.

    The attached image shows why ring 2 (i.e. 1-2) has only three tiles. Creating more than three tiles would require a more complicated reader.

     

    Last edit: han.k 2021-06-06
  • han.k

    han.k - 2021-06-06

    This study of an HTM, "Indexing the Sphere with the Hierarchical Triangular Mesh", is one of the many studies available:

    https://www.microsoft.com/en-us/research/wp-content/uploads/2005/09/tr-2005-123.pdf

    Since it is rather complicated, I never tried it. The complicated access could slow it down.

     
  • Ingvar Stepanyan

    Interesting, thanks for the explanation.

    Larger area tiles are not a real problem.

    Yeah, I suppose it was only a problem for my approach because I've been using tiles for selection, assuming they all have the same area, which led to the wrong number of stars being selected. But, as you noted above, that approach has other problems, and I need to try one of the more accurate sphere indexing methods.

    So it sounds like all those different approaches boil down to

    You can split a sphere in more or less equals shapes using triangles or hexagons.

    Is the idea that, after the sphere has been split into an ideal number of shapes given the density, we choose one brightest star from each such section?

     
  • han.k

    han.k - 2021-06-06

    Is the idea that, after the sphere has been split into an ideal number of shapes
    given the density, we choose one brightest star from each such section?

    I assume that will not always work so well. The problem is that the local star density varies. The density inside the cluster M13 will be huge and outside M13 it will be lower. The smaller the tiles, the less comparable the star densities will be.

    In the past ASTAP used the star magnitudes to control the number of extracted stars. This was later abandoned in favour of equal density.

    For extracting stars from the tiles, ASTAP assumes equal density for the tiles used. This is not always true, but in practice it seems good enough. Something to review again.

     
  • han.k

    han.k - 2021-06-07

    It works as follows (see the sketch attached). Assume the green search field is at a crossing of four tiles. The search field area, by definition 100%, is split into 8%, 15%, 20% and 57% per tile. Say 500 stars are required. It will then retrieve 8% x 500, 15% x 500, 20% x 500 and 57% x 500 stars from the respective tiles, under the condition that these stars are within the green area. This will work assuming the star density within the green area is reasonably homogeneous.

    A star cluster within the green area will spoil this setup. But if the star cluster is in the 57% area, this area alone could be sufficient for the solution. If the star cluster is in the 8% area, then this area is not so relevant for the solution.

    Having constant lines of RA and DEC as tile boundaries helps to calculate the area split.
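
    A schematic Free Pascal sketch of this fraction-weighted retrieval (not ASTAP's actual code; the overlap fractions are the example values above and the tile reading is stubbed out):

    program fraction_split_sketch;
    {Each of the up-to-four tiles contributes a share of the requested stars
     proportional to its overlap with the green search field; only stars that
     fall inside the search field itself would be kept.}
    const
      nrstars_required = 500;
      frac: array[1..4] of double = (0.08, 0.15, 0.20, 0.57); {overlap fractions}
    var
      i, wanted, total: integer;
    begin
      total := 0;
      for i := 1 to 4 do
      begin
        wanted := round(frac[i]*nrstars_required);
        {here the real code would read 'wanted' stars from tile i and keep
         only those whose RA, DEC lie inside the search field}
        writeln('tile ', i, ': ', wanted, ' stars');
        total := total + wanted;
      end;
      writeln('total  : ', total, ' stars');
    end.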


    I just got another new idea. You could first retrieve the stars from the tile with the biggest area fraction, so the 57% x 500 stars. Record the faintest magnitude retrieved and use that magnitude as the limit for the other three tiles. This will guarantee homogeneous retrieval. If the number of retrieved stars is larger, e.g. 600, or smaller, e.g. 400, you have to correct for it. This will make it more complicated.

    A totally different setup could be to create a database with 50% overlapping tiles of 10x10 degrees. Then you could select only one tile each time. The database would be twice as large as required and database star retrieval would be twice as slow.

     
  • Alfredo Ortega

    Alfredo Ortega - 2021-09-28

    Hi! I downloaded ASTAP some weeks ago and I thought it was an outstanding project. I especially liked the command-line utility as a way to add image solving to external utilities. Currently it only provides solving, but I needed some form of stacking and image calibration, so I added image calibration and sigma-clipping stacking to the command-line utility, as I believe that is the stacking method that provides the highest quality. I just added the two alignment methods that I use most: star and astrometric alignment.

    Also, I'm still missing several features like debayering, color image processing and stacking of image formats other than FITS.

    The project is currently at my github here: https://github.com/ortegaalfredo/astap-command-line
    If you plan to add this functionality to the main project in the future, I would gladly help, but I think it's easy, as I added everything in an isolated Pascal unit that's easy to import and use.

    Cheers!

    Alfred

    PS: By the way, I think the latest version of astap-command-line has a bug: when you try to update an existing FITS file header with new solved parameters, it sets the bpp to 8 and the number of dimensions to zero, making the FITS file unreadable. It's fixed in my branch.

     

    Last edit: Alfredo Ortega 2021-09-28
    • han.k

      han.k - 2021-09-28

      Hello Alfredo,

      Nice that you made some modifications. I will have a more detailed look tomorrow.

      Note, one bug in the -update option of astap-cli was fixed a few days ago and that fix is missing in your code. The lines

      "if hasoption('update') then begin ... end" in file astap_command_line.lpr
      

      should move before line

      remove_key('NAXIS1 =',true{one});
      

      Regards, Han

       

      Last edit: han.k 2021-09-28
      • Alfredo Ortega

        Alfredo Ortega - 2021-09-28

        Oh I had found that bug but made an ugly fix. This is much better, thanks!

         

        Last edit: Alfredo Ortega 2021-09-28
  • han.k

    han.k - 2021-10-01

    Alfredo, here is one more improvement:

    Old:
    - ra_offset:=distance_to_string(dec_mount-dec0,(ra_mount-ra0)*cos(dec0));
    - dec_offset:=distance_to_string(dec_mount-dec0,dec_mount-dec0);

    New:
    + ra_offset:=distance_to_string(sep, fnmodulo(ra_mount-ra0,pi)*cos((dec0+dec_mount)*0.5 {average dec}));
    + dec_offset:=distance_to_string(sep,dec_mount-dec0);

    Reason: The distance between RA=0 and RA=22 is +2 hours*cos(dec) and not -22 hours*cos(dec).
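
    A tiny Free Pascal illustration of that reasoning, in hours; wrap_pm12 is a hypothetical stand-in for the fnmodulo-based wrap (not ASTAP's actual helper), mapping an hour difference into the range -12..+12 h:

    program ra_offset_demo;
    uses math;

    {wrap an hour difference into -12..+12 h}
    function wrap_pm12(dh: double): double;
    begin
      wrap_pm12 := dh - 24*round(dh/24);
    end;

    var
      diff: double;
    begin
      diff := 0.0 - 22.0;  {going from RA 22 h to RA 0 h}
      writeln('raw     : ', diff:0:1, ' h');             {-22.0}
      writeln('wrapped : ', wrap_pm12(diff):0:1, ' h');  {+2.0}
      {the on-sky offset additionally scales with cos(dec), e.g. at dec 60:}
      writeln('on sky  : ', wrap_pm12(diff)*cos(degtorad(60)):0:1, ' h');  {+1.0}
    end.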

     

    Last edit: han.k 2021-10-01
    • Alfredo Ortega

      Alfredo Ortega - 2021-10-01

      Awesome. Just committed your fixes, and also added pre-built binaries for Windows and Linux.

       
  • Ingvar Stepanyan

    Hi, Han. Between work at first, and later the invasion of my home country, I abandoned this project for a while, but I'm slowly trying to gain back some sense of normalcy by working on side projects again 😅

    One thing I noticed and meant to ask about the database structure: rather than hardcoding the start/end offsets and RA steps from https://www.hnsky.org/astap.htm#1476, I chose to walk over 90/17.5 steps and 360/(number of tiles in a ring) myself, and only noted the discrepancy between the table values and my own when I added assertions.

    In particular, when using f64 (double-precision floating point), 90/17.5=5.142857142857143, but the table value is 5.14285714, and the error adds up rather quickly. Moreover, even with the table step value the actual (end-start) differences don't always match - e.g., there is a ring 82.28571429-77.14285714=5.14285715, not 5.14285714; same for ring 66.85714286-61.71428571.
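
    One arithmetic check that is at least consistent with these numbers (whether this is actually how the table was produced is an assumption on my part, not a confirmed fact): if each boundary below the pole (a half step of 90/17.5, then whole steps) is rounded independently to 8 decimal places, the differences between consecutive rounded boundaries come out as either 5.14285714 or 5.14285715. A small Free Pascal sketch:

    program boundary_rounding_check;
    uses math;

    {k-th declination circle below the pole: half a step for the first one,
     then full steps of 90/17.5 degrees}
    function boundary(k: integer): double;
    begin
      boundary := 90 - (2*k - 1)*(90/17.5)/2;
    end;

    var
      k: integer;
      b1, b2: double;
    begin
      for k := 1 to 5 do
      begin
        b1 := RoundTo(boundary(k),   -8); {rounded to 8 decimals, as in the table}
        b2 := RoundTo(boundary(k+1), -8);
        writeln(b1:14:8, b2:14:8, (b1 - b2):13:8);
      end;
    end.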

    My question is, where do those differences come from? Did you round the values to a certain digit manually before hardcoding them, or is it a result of single- vs double-precision floating point arithmetic, or something else?

     
