From: Ian S. <ian...@st...> - 2003-12-04 20:12:09
> -----Original Message-----
> From: Laymon, Marc A (Research) [mailto:la...@cr...]
> Sent: Thursday, December 04, 2003 6:55 PM
> To: Ian Scott
> Subject: Follow up on large images
>
> Ian,
>
> I was just reading your message on large images.
>
> From my discussions with Anthony, the reason he wants me
> to preserve the 11-bit per pixel in the pyramid images is
> so that they will still have the full dynamic range of gray
> scale for his algorithms, even though the resolution of the
> images is reduced.

I'm not sure I understand the problem here. vil is quite happy dealing
with 16-bit images (there's a one-line example of what I mean at the end
of this message).

> When you talk about block size, do you mean ad-hoc sub-images?
> I can read in smaller portions of the image to do processing on.

You could do all the processing yourself, or add some missing
implementation to vil. I guess the VXL consortium would prefer you to do
the latter, because then anyone could make use of efficient large image
processing.

The idea behind the design of vil_image_resource was that it would handle
(amongst other things) dealing with large images. The first person who
wanted to apply vil_image_proc_fn to a very large image would write the
code which dealt with processing the image block by block.

For example: we have vil_convolve_2d(), which takes and returns a
vil_image_view<>. If you wanted to use vil_convolve_2d() on a very large
image, the idea would be to write another version of vil_convolve_2d()
that takes and returns a vil_image_resource_sptr. The returned
vil_image_resource_sptr would point to a vil_convolve_2d_resource object.
Up to this point no actual loading or processing of pixel data has taken
place. You can build up a whole tree of image processing functions like
this. (There's a rough sketch of such a wrapper at the end of this
message.)

When you actually need to do the processing, e.g. when calling
resource->get_view() or vil_save("filename", resource), the get_view (or
vil_save) asks for the vil_convolve_2d_resource's data in portions. The
vil_convolve_2d_resource object would handle edge effects between
portions, etc., but would use the existing vil_convolve_2d() function to
do the actual pixel processing.

The shape and size of these portions should be influenced by the declared
preferred_block_size of the original image on disk, and by the
requirements of all the processing resources in between. This handling of
preferred_block_size is currently missing; instead, the existing vil
processing resources and vil_save choose an arbitrary image portion size.

Currently there are 9 image processing functions implemented for
vil_image_resource_sptr. Most of them are trivial (e.g. vil_flip_ud), and
the others, e.g. vil_convolve_1d(), aren't too complicated.

> The reason I ask is that NITF stores the image data as blocks
> on the disk. Usually they are stored as 256 x 256 blocks.

So vil_nitf_image (which is derived from vil_image_resource) would
advertise via the properties interface that it prefers to give data in
256x256 blocks.

As you may have guessed, I'm anxious for someone to actually implement
more support for efficient large image processing in vil. I always
vaguely assumed that someone at GE would do it, since it was GE who had
the requirement for large image support during the design of vil's API.

Of course, it is up to you whether you want to do this in your private
code, or contribute it to the public repository. But if it is the latter,
and you want some help fitting it into the vil framework (or modifying
the vil framework to make it fit), then let me know.

If you have a lot of questions, it might be easier to call.
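By "happy with 16-bit images" I mean something like the following; this
is just a minimal sketch (the function name and the idea of loading a
pyramid level are made up for illustration):

    #include <vxl_config.h>          // for vxl_uint_16
    #include <vil/vil_load.h>
    #include <vil/vil_image_view.h>

    // An 11-bit-per-pixel image is simply held in a 16-bit view; the
    // full grey-scale dynamic range survives whatever the resolution.
    vil_image_view<vxl_uint_16> load_pyramid_level(const char* filename)
    {
      return vil_load(filename);
    }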
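To put some flesh on the wrapper idea, here's a rough, untested sketch of
what a vil_convolve_2d_resource might look like. None of this exists in
vil yet; I've fixed the pixel type to float for brevity, assumed the
source resource delivers float pixels and an odd-sized kernel, and left
out clamping at the image borders:

    #include <vil/vil_image_resource.h>
    #include <vil/vil_image_view.h>
    #include <vil/algo/vil_convolve_2d.h>

    class vil_convolve_2d_resource : public vil_image_resource
    {
      vil_image_resource_sptr src_;   // unprocessed source, e.g. a file on disk
      vil_image_view<float> kernel_;  // convolution kernel (odd-sized)

     public:
      vil_convolve_2d_resource(vil_image_resource_sptr const& src,
                               vil_image_view<float> const& kernel)
        : src_(src), kernel_(kernel) {}

      virtual unsigned nplanes() const { return src_->nplanes(); }
      virtual unsigned ni() const { return src_->ni(); }
      virtual unsigned nj() const { return src_->nj(); }
      virtual enum vil_pixel_format pixel_format() const
        { return VIL_PIXEL_FORMAT_FLOAT; }

      // No pixels are read or convolved until someone asks for a
      // portion of the result.
      virtual vil_image_view_base_sptr get_copy_view(unsigned i0, unsigned n_i,
                                                     unsigned j0, unsigned n_j) const
      {
        // Enlarge the requested window by the kernel half-size so that
        // edge effects between portions are handled correctly.
        // (Clamping at the image borders is omitted for brevity.)
        unsigned bi = kernel_.ni()/2, bj = kernel_.nj()/2;
        vil_image_view<float> src_win(
          src_->get_copy_view(i0-bi, n_i+2*bi, j0-bj, n_j+2*bj));

        // Reuse the existing view-based function for the pixel work.
        // vil_convolve_2d() writes only fully-overlapped positions, so
        // for an odd-sized kernel dest comes out exactly n_i x n_j.
        vil_image_view<float> dest;
        vil_convolve_2d(src_win, dest, kernel_, float());
        return new vil_image_view<float>(dest);
      }

      virtual bool put_view(vil_image_view_base const&, unsigned, unsigned)
        { return false; }  // the result of a convolution is read-only

      virtual bool get_property(char const*, void* = 0) const
        { return false; }  // ought to forward the source's preferred block size
    };

    // The resource-level version of vil_convolve_2d() just builds the
    // wrapper -- still no pixel processing at this point.
    vil_image_resource_sptr vil_convolve_2d(vil_image_resource_sptr const& src,
                                            vil_image_view<float> const& kernel)
    {
      return new vil_convolve_2d_resource(src, kernel);
    }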
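The usage side would then look something like this.
vil_load_image_resource() and vil_save_image_resource() are the real vil
entry points; the NITF loading assumes a registered NITF file format, the
filenames and kernel are made up, and vil_convolve_2d() here is the
hypothetical wrapper from the previous sketch. Note where the deferred
processing actually fires:

    #include <vil/vil_load.h>
    #include <vil/vil_save.h>

    void smooth_large_image()
    {
      // Open the image without reading its pixels.
      vil_image_resource_sptr big = vil_load_image_resource("huge.nitf");

      vil_image_view<float> kernel(3,3);
      kernel.fill(1.0f/9.0f);

      // Build the processing tree -- still nothing loaded or convolved.
      vil_image_resource_sptr smoothed = vil_convolve_2d(big, kernel);

      // Only now does the work happen, as vil_save_image_resource()
      // pulls the result out of the tree a portion at a time.
      vil_save_image_resource(smoothed, "smoothed.tif");
    }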
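Finally, on preferred_block_size: as I said, nothing like it exists yet,
but it could sit on the existing get_property() mechanism, something like
the sketch below. The tag name and the two-unsigned convention are
entirely made up:

    #include <vil/vil_image_resource.h>

    // How a block-by-block driver might pick its portion size.  A
    // vil_nitf_image would answer this query with 256x256; a resource
    // that doesn't understand the tag just returns false.
    void choose_portion_size(vil_image_resource_sptr const& im,
                             unsigned& bi, unsigned& bj)
    {
      unsigned pref[2];
      if (im->get_property("preferred_block_size", pref))  // hypothetical tag
      {
        bi = pref[0]; bj = pref[1];
      }
      else
      {
        bi = bj = 512;  // arbitrary fallback, as the current code does
      }
    }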
Ian. +44 161 275 7356