From: Udi F. <udi...@gm...> - 2009-10-20 05:10:30
First of all, if you are discussing the options here, I assume that you
don't want me to apply the patch that started this thread. Still, I have
a few comments:

+ threshold *= sqrt(0x10000 / 4096);

I think this is wrong. You are assuming a 12-bit CCD. It should probably
be rgbMax instead of 4096. If we do end up scaling all pixels, we should
be able to get rid of some ugly rgbMax code in the developer.

+ int doDenoise, doInterpolate;

These don't really belong in developer_data. They can be in ufraw_data.

Lastly, I was having whitespace/tab issues. Sending the patch as an
attachment should take care of that.

> Option 1:
> Disable the ufraw_denoise_phase and always do denoising on the unshrunk
> raw input, as is done now for the interpolation path. This consolidates
> the denoise code path, which is very good, but at a cost: denoise
> adjustment will respond more slowly.

At some point I want to get rid of create_base_image() completely and do
all the rendering in tiles. This would make the speed of the denoising
much less of an issue.

> Option 3:
> Instead of doing WB before the interpolate/shrink split, normalize the
> pixel values to 16 bits using identical RGB multipliers. This provides
> the additional precision I need. There's more work involved for me, but
> it's doable. Again, the wavelet denoising threshold needs to be scaled,
> this time for both the interpolate and shrink code paths.

This is also fine by me.

> Contrary to the original dcraw.c/dcraw.cc wavelet denoising, ufraw's
> implementation is not pixel-value-range invariant. As a result, the
> same threshold will have different effects depending on the input
> color depth.

This looks like a bug that I introduced. We should decide whether we want
to fix it. The tricky part would be supporting old ID files whose
threshold still carries the old meaning.

Udi