"Udi Fuchs" <udifuchs@...> writes:
>> > From: udifuchs via CVS
>> > Retain some hue and saturation when clipping pixels to 0xFFFF.
>> > This is only relevant when applying positive exposure compensation,
>> From: gdt
>> I have long been confused by the notion that such clipping processing
>> happens only with positive exposure compensation. My understanding,
>> albeit somewhat fuzzy, is that the raw values have a greater dynamic
>> range than 8-bit sRGB, and that highlights can get clipped even with
>> exposure compensation of 0. Certainly positive exposure compensation
>> means that more raw values will be subject to clipping, but I don't see
>> how it changes the basic nature of the processing. So I think that I am
>> missing some key point.
> The point is that with 0EV the maximal channel value is mapped to
> 0xFFFF, so there is nothing to clip. If you set +1EV, then channels get
> mapped to values between 0 and 0x1FFFF, so everything between 0x10000
> and 0x1FFFF has to be clipped. There are some subtleties because of
> the WB multipliers, but this is the general idea.
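A minimal numeric sketch of that mapping (my own illustration of the idea
described above, not UFRaw's actual code; WB multipliers are ignored):

```python
# At 0 EV the maximal raw value maps to 0xFFFF, so nothing clips; each
# positive EV doubles the scaled values, and the excess must be clipped.
# raw_max=4095 assumes 12-bit raw data.

def develop(raw, raw_max=4095, ev=0.0):
    scaled = raw / raw_max * 0xFFFF * 2.0 ** ev
    return min(int(scaled), 0xFFFF)
```

At +1 EV, roughly everything from half scale up lands at 0xFFFF, which is
the clipping being discussed.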
I think there are two ways where something describable as clipping can
happen, and I'm confused about how this is handled. I'll describe what
I think has to happen - perhaps you can tell me where I'm wrong. I'd
like to end up with a precise description written down somewhere.
The individual pixel values are generally 12 bit linear (although
"compressed NEF" uses a u-lawish coding, that's undone first and not
important for the discussion). The value 4095 (neglecting WB) does not
correspond to the luminance the camera JPG converter would put at 255;
it's much higher, allowing one to apply negative exposure compensation
and recover the highlights that would have been blown out in the
original JPG. One can map the 12-bit values, scaled by WB, to a 16-bit
space, and putting 4095 at 0xFFF0 makes sense. But there's no intrinsic
reason to have to confine these values, other than storage format. Then
WB multipliers are applied; other than pushing some values above 0xFFFF,
this isn't really clipping.
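As a concrete sketch of that step (my illustration; the multiplier value
in the usage below is made up):

```python
def to_working(raw, wb_mult):
    # 12-bit raw shifted into the 16-bit space (4095 << 4 == 0xFFF0),
    # then the WB multiplier applied; results above 0xFFFF are allowed
    # here -- producing them is not, by itself, clipping.
    return int((raw << 4) * wb_mult)
```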
I'm skipping curves, saturation, black point, input profiles, and color
matrices because these can all be turned off.
Then, one chooses an exposure level (in EV, with 0 being a value that
produces JPGs more or less like the camera). Some value gets mapped to
255, and the others follow, using an sRGB-like 8-bit encoding, or
something like that. Here, some sort of "clipping" process is
fundamentally necessary, as colors (meaning values in XYZ space, or RGB
unconstrained in luminance values) can either be too saturated, or too
high in luminance.
Here, two things happen:

- reduction of out-of-space values to something in the space; it seems
  that the normal thing is to keep hue, and reduce value and saturation
  to get in space.

- a smoother rolloff of values near the limit, emulating the
  characteristic curve of normal film. To me, this is a different thing
  from clipping, and we really should have some "film characteristic
  curve" with a contrast value so we can load curves that correspond to
  particular films and get similar results.
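Both items can be sketched roughly like this (my own illustration of the
general techniques, not what UFRaw does; the knee value is an arbitrary
assumption, and the clip sketch only reduces value -- blending toward
gray would additionally reduce saturation):

```python
def clip_keep_hue(rgb, limit=1.0):
    # Scale an out-of-range linear RGB triple down uniformly: hue is
    # preserved and value is reduced until it fits under the limit.
    m = max(rgb)
    if m <= limit:
        return rgb
    return tuple(c * limit / m for c in rgb)

def film_rolloff(x, knee=0.8):
    # Soft shoulder above the knee, emulating a film characteristic
    # curve: linear below the knee, approaching 1.0 asymptotically.
    if x <= knee:
        return x
    t = x - knee
    span = 1.0 - knee
    return knee + span * t / (t + span)
```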
I realize that if 4095 raw maps to 0xFFFF in the linear working space
(which is sRGB with a nominally 4x higher luminance than the output
space?), then clipping must be applied there too. But if one chose -3
EV, then that clipping might not be necessary. There is also the issue
of pixel values of 4095 (or whatever the maximal code point is for
compressed raw), which might have been clipped in the camera itself.
>> > and mostly for Canon cameras where positive exposure is applied by
>> > default.
>> This sounds odd - I realize that people have said that one needs to
>> process Canon raw bits differently from others to get jpgs that look
>> like the camera jpgs. To me, this is not exposure compensation, it's
>> part of the definition of the baseline processing. But probably I don't
>> follow this for the same reason as the previous point.
> Canon cameras usually use only about 11 bits of the possible 12 bits.
> Therefore in UFRaw when the UI shows 0EV, UFRaw actually applies about
> +1EV. The exact amount is calculated from the EXIF data to emulate
> Canon default exposure.
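The arithmetic being described, as I understand it (the white level used
in the test is a hypothetical example, not a value read from real EXIF):

```python
import math

def implied_ev(camera_white, full_scale=4095):
    # Gain, in EV, needed to bring a camera's white level up to the
    # full 12-bit scale; about +1 EV when only ~11 bits are used.
    return math.log2(full_scale / camera_white)
```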
OK, but I still think it's misrepresenting what's going on to call this
exposure compensation. The encoding in the camera is simply different,
and that part of the multiplication is not exposure compensation. I
wonder if Canon does this so that outputs usually fit in 12 bits even
with WB channel multipliers.
Reading the code, I see that this is handled in a way that makes it
fairly clear what's going on.
I see in ufraw_developer that highlight and restoredetails processing is
conditional on exposure compensation (before applying the Canon gain).
This is still my fundamental question. Why should processing be
different for the following situations (same scene luminance):
both at ISO 200, f/8:
- 1/125 s, EV -0.5 in UFRaw
- 1/250 s, EV +0.5 in UFRaw
Assume outdoors on a cloudy day, so that no scene luminance values
produce maxed-out raw sensor values.
I'd argue that we should produce the same JPG for both situations. In
other words, I think clipping/restoring is about final pixel values that
are too high/too low, not about how we got there.
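In numbers (hypothetical raw values, just to make the claim concrete):

```python
# Halving the shutter time halves every raw value; the 1 EV difference
# in the UFRaw settings doubles it back, so the developed values match.
def developed(raw, ev):
    return raw * 2.0 ** ev

raw_slow = 1000.0           # some pixel at ISO 200, f/8, 1/125 s
raw_fast = raw_slow / 2.0   # same pixel at 1/250 s: half the light
assert abs(developed(raw_slow, -0.5) - developed(raw_fast, 0.5)) < 1e-9
```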
I am slightly fuzzy on what's going on in the code; really I'm trying to
have a requirements and high level design discussion.