From: Greg T. <gd...@ir...> - 2007-04-30 15:47:38
"Udi Fuchs" <udi...@gm...> writes:

>> WB multipliers are applied; other than making values > 0xFFFF this
>> isn't super relevant.
>>
>> I'm skipping curves, saturation, black point, input profiles, and
>> color matrices because these can all be turned off.
>
> If you skip all these then the 12-bit 4095 value will be mapped to
> 0xFFF0 and then after the gamma curve it will become 255.

This is boggling to me. If true, then the statements about how raw has
2 stops of latitude that you can recover (which seems to be true, from
my processing experience) don't hold up, because if one did -1 exposure
compensation then nothing would map to the interval:

  (one stop down from 255, 255]

>> Then, one chooses an exposure level (in EV, with 0 being a value that
>> produces JPGs more or less like the camera). Some value gets mapped
>> to 255, and the others follow, using an sRGB-like 8-bit encoding, or
>> something like that. Here, some sort of "clipping" process is
>> fundamentally necessary, as colors (meaning values in XYZ space, or
>> RGB unconstrained in luminance values) can either be too saturated,
>> or too high in luminance.
>
> The original colors are not in XYZ space, but camera space. Without a
> color matrix and color profiles, camera space is assumed to be linear
> RGB, meaning the same as sRGB except for the gamma curve. This means
> that there cannot be out-of-gamut colors. Only luminosity has to be
> clipped, meaning values larger than 4095. With 0EV we get values
> larger than 4095 only because of the WB multipliers.

I agree about camera space, and that's typically not so far off from
linear sRGB (e.g., on the D200 enabling color matrices makes a
difference, but the as-sRGB processing isn't wacky). lcms seems to
separate the notion of a color space from the color space's encoding,
and I think that's a helpful way to talk about it. The space itself has
real-valued coordinates, and we either use some gamma table or a
fixed-point representation of those real-valued coordinates.
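To make the mapping concrete, here is a minimal numeric sketch (not UFRaw's actual code) of what Udi describes: a 12-bit raw value scaled to 16 bits (4095 * 16 = 0xFFF0), then gamma-encoded and quantized to 8 bits. The sRGB transfer curve is an assumption here, used only for illustration, and EV is modeled as a plain linear gain:

```python
def srgb_gamma(x):
    """sRGB transfer function applied to a linear value in [0, 1]."""
    return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

def develop(raw12, ev=0.0):
    """Map a 12-bit raw value to 8 bits; EV is modeled as a linear gain."""
    linear = min(raw12 * 16 * 2.0 ** ev / 65535.0, 1.0)  # scale to [0,1], clip
    return round(255 * srgb_gamma(linear))

print(develop(4095))         # 255: full-scale raw maps to full-scale output
print(develop(4095, ev=-1))  # 187: at -1 EV nothing can map above ~187
```

Under these assumptions the top of the 8-bit range does indeed go unused at -1 EV, which is the latitude concern raised above: no raw value, not even 4095, reaches the interval above 187.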
But I don't think this part is confusing at the moment. And I agree
that sRGB has the gamma conversion built in.

>> a smoother rolloff of values near the limit, emulating the
>> characteristic curve of normal film. To me, this is a different thing
>> than clipping, and we really should have some "film characteristic
>> curve" with a contrast value so we can load curves that correspond to
>> particular films and get similar results.
>
> You are ignoring the more complicated part, which results from the WB
> multipliers. This is what the 'restore details' code is all about.

I am still not convinced that these issues arise solely from WB
multipliers.

>> > Canon cameras usually use only about 11 bits of the possible 12
>> > bits. Therefore in UFRaw when the UI shows 0EV, UFRaw actually
>> > applies about +1EV. The exact amount is calculated from the EXIF
>> > data to emulate Canon default exposure.
>>
>> OK, but I still think it's misrepresenting what's going on to call
>> this exposure compensation. The encoding in the camera is simply
>> different, and that part of the multiplication is not exposure
>> compensation. I wonder if Canon does this so that outputs usually fit
>> in 12 bits even with WB channel multipliers.
>
> I don't see the point in this. The 12 bits on the sensor are
> "expensive"; they represent what the sensor can handle. Adding a few
> bits during processing is cheap (this is what UFRaw does). My guess is
> that Canon doesn't trust the last bit; maybe it is highly non-linear.
> Maybe someone with a Canon camera and an IT8 target can test this.

I think we aren't following each other's points. I read the code, and
it keeps the Canon normalization issue logically separate from
commanded exposure compensation, and that's all I wanted.

>> I see in ufraw_developer that highlight and restore-details
>> processing is conditional on exposure compensation (before applying
>> the Canon gain). This is still my fundamental question.
>> Why should processing be different for the following situations
>> (same scene luminance):
>>
>> both ISO 200, f/8.
>>
>> 1/125 EV -0.5 in UFRaw
>> 1/250 EV +0.5 in UFRaw
>>
>> Assume outdoors on a cloudy day, so that no scene luminance values
>> produce maxed-out raw sensor values.
>>
>> I'd argue that we should produce the same JPG for both situations.
>> In other words, I think clipping/restoring is about final pixel
>> values that are too high/too low, not about how we got there.
>
> If there are no values in the scene near clipping, then the output
> should be the same (assuming that the sensor is exactly linear). The
> whole point of the code you are referring to is to handle the case
> where there is clipping.

I think the place where I am not following you is that I think a raw
file can represent scene luminances -- without clipping -- that are
greater than the value that will be mapped to 255 in the JPG. So the
1/125 exposure could have sensor values that are still less than 4095
but mapped to > 255, and thus clipped in the in-camera JPG. (I know,
now I should go out and actually do this.)

Thanks for your patience in this discussion.
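P.S. The scenario in my last paragraph can be sketched numerically. All the figures here are hypothetical (the raw value the in-camera JPG maps to 255, the scene luminance units); this is not UFRaw code, just the shape of the argument under the assumption that the JPG leaves about one stop of highlight headroom in the raw file:

```python
JPG_WHITE = 2048   # assumed raw value that the in-camera JPG maps to 255
RAW_MAX = 4095     # 12-bit sensor clipping point

def shoot(luminance, shutter):
    """Simulated raw sensor value: linear in exposure, clipped at RAW_MAX."""
    return min(round(luminance * shutter), RAW_MAX)

lum = 350000                   # arbitrary scene luminance units
raw_125 = shoot(lum, 1 / 125)  # 2800: below 4095, so no sensor clipping
raw_250 = shoot(lum, 1 / 250)  # 1400: inside the JPG's output range

# The 1/125 shot is clipped in the in-camera JPG (raw value above
# JPG_WHITE) yet not clipped on the sensor, so a raw converter can
# recover it by choosing a lower exposure.
print(JPG_WHITE < raw_125 < RAW_MAX)  # True
print(raw_250 < JPG_WHITE)            # True
```

If the numbers come out this way on a real camera, it would support the claim that clipping/restoring should be driven by final pixel values, not by how the exposure was split between shutter speed and converter EV.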