Hi, thanks for the quick response! 
There is basic support for this block approach. The vil_image_resource
framework can wrap a vil_convolve_1d_resource and a
vil_decimate_resource around a vil_blocked_image_resource.
You should be able to ask the vil_decimate_resource (or whichever
processing resource you have on top of the blocked_image) for its block
size and get a useful answer.
You can then get the processed view, block by block.
I didn't know about vil_decimate/convolve_1d_resource, so thanks for the pointers!  convolve_1d might work in my case.  Maybe I should have asked: what is the best way to resample (bilinear or bicubic) a 10GB image?

The only blocked image support really missing from vil is either
1. intelligent block support in some of the processing resources, so
that the topmost resource reads the block size and assembles the view
result, block by block.
2. a vil_assemble_block_resource, which would sit on top of any chain of
image resources, read the block size, and assemble a single view block
by block.
I *might* follow what you're saying here, but I'm really not sure.  My impression is that much of the functionality operates on views, not resources, and I don't see how these two suggestions help there.  To apply these operations efficiently to a large image, you would need some kind of loop that processes the image one chunk at a time, getting a view for each chunk.  I was hoping there already exists some facility to do this in a generic way for any operation.  For something like a fill or threshold the loop is trivial, but a filter would require fiddly handling of the chunk boundaries.

Am I just missing something fundamental?  Is there already an easy way to do this?