From: Ian S. <ian...@st...> - 2003-11-26 20:49:41

Hi Marc,

Let's move this discussion to vxl-maintainers.

> -----Original Message-----
> From: Laymon, Marc A (Research) [mailto:la...@cr...]
>
> Hi. I am porting code from TargetJr for reading NITF images. Since
> the imaging model is somewhat different between TargetJr and VXL, I
> wanted to check and see if anyone had suggestions for dealing with
> large images like those encountered in NITF files (300-600MB) before
> I just port the TargetJr code over verbatim.
>
> One approach used by TargetJr was pyramid image files, where some
> kind of hierarchy of images with decreasing resolution is
> pre-computed. Any other suggestions?

The vil_image_resource framework (which was based on the old
vil1_image framework) was designed to accommodate this specific
problem; dealing with very large images was one of GE's requirements
when vil1 was designed. However, you are the first person to actually
want to use very large files in vxl, and if you want to do a lot of
processing efficiently, you are going to have to do a fair bit of
work.

The vil API is designed to solve the large image problem by not
loading the image data when opening a file. This is evident in the
design of the image loaders. Instead you load the image into a
vil_image_resource_sptr, and pixels are then loaded on demand. e.g.

  vil_image_resource_sptr data = vil_load_image_resource(filename);
  vil_image_resource_sptr decimated_data = vil_decimate(data, 10, 10);
  // And then get the image pixels from it.
  vil_image_view<unsigned char> uc_view = decimated_data->get_view();
  vil_print_all(vcl_cout, uc_view);
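The same on-demand behaviour means you can pull a small window out of
a huge file without touching the rest of it. A minimal sketch (the
filename and window coordinates are made up, and I am assuming a byte
image):

  #include <vil/vil_load.h>
  #include <vil/vil_image_resource.h>
  #include <vil/vil_image_view.h>

  // Opening the resource reads the header, not the pixels.
  vil_image_resource_sptr big = vil_load_image_resource("scene.nitf");

  // get_view(i0, ni, j0, nj) reads just the requested 512x512 window.
  vil_image_view<unsigned char> window =
      big->get_view(1000, 512, 2000, 512);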
The whole implementation isn't as efficient as it could be, since we
were only interested in proving that we had a decent API for this. I
think that of the image_resource processing functions, only
vil_decimate() actually attempts to deal with large images: when it
judges the input image to be too large, it loads the decimated image
one pixel at a time.

The solution is to do any processing block-by-block. I suggest adding
preferred_block_size and preferred_block_origin hints to
vil_property.h, and making use of them in any of the image_resource
processing that you do.
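To make that concrete, here is a rough, untested sketch of what
block-wise processing could look like. None of this is in vil yet:
the fixed 512x512 block size is just standing in for the
preferred_block_size hint, and process_blockwise is a name I made up.

  #include <vcl_algorithm.h>
  #include <vil/vil_image_resource.h>
  #include <vil/vil_image_view.h>

  // Copy src to dest, touching only one block of pixels at a time.
  // A real operation would transform each block before writing it.
  void process_blockwise(vil_image_resource_sptr src,
                         vil_image_resource_sptr dest)
  {
    const unsigned bi = 512, bj = 512; // stand-in for a size hint

    for (unsigned j0 = 0; j0 < src->nj(); j0 += bj)
      for (unsigned i0 = 0; i0 < src->ni(); i0 += bi)
      {
        unsigned ni = vcl_min(bi, src->ni() - i0);
        unsigned nj = vcl_min(bj, src->nj() - j0);

        // Read just this block from disk (assuming a byte image).
        vil_image_view<unsigned char> block =
            src->get_view(i0, ni, j0, nj);

        // ... process the pixels in block here ...

        // Write only this block back out.
        dest->put_view(block, i0, j0);
      }
  }

A real version would query the block size from the resource with
get_property() once the hint exists, rather than hard-coding it.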
I guess most of the file image_resources would prefer loading images
one raster at a time. Things like vil_convolve would have to make a
fairly intelligent decision about how to deal with that.

Have a read of
http://paine.wiau.man.ac.uk/doc_clue/core/vil/html/Design.html
to get a feel for the design.

> Secondly, I am at the point where I can read the NITF image file
> headers, create a vil_image_resource (using an NITF-specific
> sub-class I wrote) and get a vil_image_view by reading the pixel
> bytes. Any image under 100MB displays fine. (I just modified the
> example in the VXL tutorial on how to display images using
> vgui_image_tableau, vgui_viewer2D_tableau, vgui_shell_tableau and
> vgui::run to use my own vil_image_view.) However, for any image
> where the actual number of bytes in the displayed image is over
> 100MB, vgui::run generates a segmentation fault.
>
> I have traced this down to the call to glutMainLoop inside the
> static function internal_run_till_idle in vgui_glut_impl.cxx. Before
> I try to look into the GLUT code, has anyone else run into this
> limitation?

As I said, I think you are the first person to use very large images,
so you will probably come across several implementation limitations.

Ian.