lcms-user Mailing List for the Little CMS color engine
An ICC-based CMM for color management
Brought to you by: mm2
From: Pekka P. <pek...@ha...> - 2024-03-21 15:52:32
On Tue, 19 Mar 2024 17:56:42 +1100 Graeme Gill <gr...@ar...> wrote:

> mar...@li... wrote:
>
> > I think the issue here is HDR is not well supported by ICC workflows, highlights and values above L* 100 of D50 are seldom discussed in the 4.4 ICC spec. For BT.2020/PQ, which profile I'm attaching this is quite evident. Lcms can deal with it on unbounded mode, but if you feed a RGB -> Lab transform, for RGB=(255,255,255) you will find L*=523!! This is not encodable in any CLUT unless you provide enough headroom.
>
> In my view the elephant in the room with regard to HDR standards is the lack of explicitly dealing with the diffuse white. [ As best I understand it this is a result of rushing mastering HDR standards into consumer products. ]

I suppose you refer to various HDR "systems" like BT.2100 PQ and HLG defining their diffuse white as a specific code point, so it does not need explicit communication as long as you know what system you are looking at?

Then, PQ system defines "absolute" luminance by its TF, which is relative to its reference viewing environment. HDR static metadata tells the mastering display color volume (or not) to give the bounds on luminance and gamut. HLG system uses relative luminance signal, and defines a luminance conversion function to map the signal to given display capability. The result is again relative to reference viewing environment. Neither specifies how to adapt the image to a different viewing environment. Does this match your understanding?

Our draft of Wayland protocol extension currently stops at delivering the HDR static metadata. We think that we also need to communicate expected or reference viewing environment somehow, and perhaps diffuse white level in order to pin everything down in the image content. Those may be defined by the "system" the content was made for, if nothing else. Then let the end user adjust a couple of simple knobs to get the image right for their display and viewing environment. That will affect the final diffuse white level on display, which affects also the HDR headroom.

> This then flows through to a lack of standards in things like ICC profiles for HDR displays. Simply determining a PCS diffuse white point for a given ICC defined HDR space lets you trivially convert to/from SDR, and not have to even worry about using ICC V4 to represent above PCS white values. This also means that SDR mappings are not hard coded into the profiles, and can be set by the user to suite their viewing situation and/or the source material. Levels above diffuse white can also be well managed in terms of compressing highlights to fit the range of the display, or applying tricks like highlight extrapolation from SDR sources.

Thanks,
pq
From: Pekka P. <pek...@ha...> - 2024-03-21 15:29:49
On Tue, 19 Mar 2024 10:16:12 +0100 <mar...@li...> wrote:

> Hi,
>
> > I have understood that LittleCMS has already made that decision for me, for the v4 perceptual rendering intent:
> > https://github.com/mm2/Little-CMS/blob/46355888b823b563db928faec59b0312a05e1143/src/cmscnvrt.c#L1129-L1133
>
> That was a long time ago. You probably know about V4 and perceptual PCS. For V4, ICC decided to define a different PCS for perceptual and saturation intents. This PCS is almost identical to relative colorimetric but with a perceptual black point. This decision messed out things to incredible levels, converting some combined transforms in a big mess. Just consider a scanner to printer going rel. colorimetric in scanner and perceptual on printer.
>
> To make things worse, I found a lot of matrix-shaper profiles claiming to be V4 and providing only one set of matrix-curves. Obviously, this cannot give zero and perceptual black at same time, so I reported the issue to ICC (I was in the committee at that time) and get the answer "all those profiles are wrong".

Does that mean that it is basically illegal to craft a v4 profile with only TRC and Colorant tags, without any BToA0/BToD0 tags? Looking at ICC v4.4, there is no requirement for any *To* tags for a three-channel Display class profile. They really should say if more tags are required.

> Then, since I could not discard such big number of profiles, I tried to imagine a way to automatically do the rescaling from different PCS. And found BPC to work fine for that. So, the idea behind this change is to increase inter-operability and fix broken profiles.

Given the spec, your decision does make sense to me. I actually had been wondering "which PCS" should win when a profile contains only TRC and Colorant tags, but no *To*0 tags, and why the ICC spec does not seem to say. I see now.

I have understood that the cLUT profiles I generate are badly formed when they (the perceptual tags) do not map device black to the v4 perceptual PCS black. This causes my tests to fail. Therefore when constructing my cLUT, I am adding a BPC step that matches what LittleCMS would do. Then the device black point matches PCS expectations, and my both matrix-shaper and cLUT profiles produce identical results via LittleCMS.

I've also wondered why BPC is not done twice: from source to PCS and PCS to destination. I suppose if PCS by definition has that hardcoded non-zero black point, then any conversion to/from PCS already has the PCS black point, and BPC would do nothing. But... uh...

Thanks,
pq
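[Editor's note] The BPC step mentioned above can be sketched as the usual linear black point compensation mapping in XYZ: a per-channel scale and offset chosen so the D50 white point stays fixed while the source black point maps to the destination black point, as described in Adobe's BPC note. This is only an illustration under that assumption; the helper name is made up and it is not the literal ComputeBlackPointCompensation() code from LittleCMS.

    #include <lcms2.h>

    /* Hypothetical helper: map XYZ so that src_black goes to dst_black while
     * the D50 white point is preserved (linear scale + offset form of BPC).
     * The two black points are assumed to have been estimated beforehand. */
    static void bpc_apply(const cmsCIEXYZ *src_black, const cmsCIEXYZ *dst_black,
                          const cmsCIEXYZ *in, cmsCIEXYZ *out)
    {
        const cmsCIEXYZ *d50 = cmsD50_XYZ();   /* PCS white point */

        /* Per channel: a * src_black + b = dst_black and a * D50 + b = D50 */
        cmsFloat64Number ax = (d50->X - dst_black->X) / (d50->X - src_black->X);
        cmsFloat64Number ay = (d50->Y - dst_black->Y) / (d50->Y - src_black->Y);
        cmsFloat64Number az = (d50->Z - dst_black->Z) / (d50->Z - src_black->Z);

        out->X = ax * in->X + (dst_black->X - ax * src_black->X);
        out->Y = ay * in->Y + (dst_black->Y - ay * src_black->Y);
        out->Z = az * in->Z + (dst_black->Z - az * src_black->Z);
    }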
From: <mar...@li...> - 2024-03-20 23:53:30
> It seems to me that BPC makes black worse almost as often as it makes it better though!

Haha, blame Adobe for that, I just followed their recipe.

> Case in point with lcms BPC:

Nice matching, don't they? I guess both profiles you used are V2. The BPC is used automatically by lcms when you mix V2 and V4.

Tried with built-in sRGB, which is V4:

    marti@PORTATIL:~$ transicc -i\*srgb -o bpctest.icm -t0
    LittleCMS ColorSpace conversion calculator - 4.3 [LittleCMS 2.09]

    Enter values, 'q' to quit
    R? 0
    G? 0
    B? 0

    R=19.3152 G=11.3658 B=24.4436

Regards
Marti
From: Graeme G. <gr...@ar...> - 2024-03-20 02:19:49
mar...@li... wrote:

Hello Marti,

> Sure, 16 bits is a plenty of room, but if you use 16 bits to encode L* and a big part of the domain are highlights over 100, then you could end 0..100 to be encoded using 9 bits and this is still not enough for a decent accuracy.

An available HDR display is lucky to hit 2000 cd/m^2. If you pick a diffuse white of (say) 200 cd/m^2, and the max is encoded as L* = 100, then the diffuse white ends up at L* = 37.8. With 16 bits that leaves 24773 perceptually encoded levels SDR range or 14.6 bits. That's a resolution of 0.004 dE, more than enough.

Worst case XYZ PCS is not so favourable of course - a 10000 cd/m^2 display running at a diffuse white point of 100 cd/m^2 would only have a little over 9 bits of linear encoding available. This would have visible banding in black. So yes, either a ICC V2 16 bit L*a*b* PCS or ICC V4 MPE XYZ PCS floating point to encode source standards.

> Another issue is what to do with those highlights. A tag with HDR diffuse white would help to do a coarse clipping, but if you want some gamut mapping to bring highlights into SDR gamut, you have to redo everything.

I really don't imagine doing this in the profile itself. Too inflexible. Better off doing this in the linking or execution of the source to destination transform using non-ICC machinery.

> Then, since I could not discard such big number of profiles, I tried to imagine a way to automatically do the rescaling from different PCS. And found BPC to work fine for that. So, the idea behind this change is to increase inter-operability and fix broken profiles.

It seems to me that BPC makes black worse almost as often as it makes it better though!

Case in point with lcms BPC:

Using this RGB printer profile: <https://www.argyllcms.com/bpctest.icm>

    transicc.exe -t1 -isRGB.icm -obpctest.icm
    Enter values, 'q' to quit
    R? 0 0 0
    R=25.8949 G=5.3969 B=0.0000

    icclu -ff -ir -s255 bpctest.icm
    25.894900 5.396900 0.000000 [RGB] -> Lut_A2B1 -> 5.693560 0.252739 -3.343661 [Lab]

    transicc.exe -b -t1 -isRGB.icm -obpctest.icm
    Enter values, 'q' to quit
    R? 0 0 0
    R=41.5914 G=9.5681 B=0.0000

    icclu -ff -ir -s255 bpctest.icm
    41.591400 9.568100 0.000000 [RGB] -> Lut_A2B1 -> 8.089132 -0.308324 -2.011786 [Lab]

At least the lcms BPC doesn't seem to wreck the perceptual black on this one: (unlike Adobe's always on BPC in Lightroom)

    transicc.exe -b -t0 -isRGB.icm -obpctest.icm
    Enter values, 'q' to quit
    R? 0 0 0
    R=19.3152 G=11.3658 B=24.4436

    icclu -ff -ir -s255 bpctest.icm
    19.315200 11.365800 24.443600 [RGB] -> Lut_A2B1 -> 4.296325 -0.431107 -5.422716 [Lab]

Cheers,
Graeme.
From: <mar...@li...> - 2024-03-19 16:20:14
Hi Graeme,

> In my view the elephant in the room with regard to HDR standards is the lack of explicitly dealing with the diffuse white.

Agreed, but this is not the only issue. There is also the encoding and the containers for numbers. In V4 you have DtoBxx/BtoDxx tags that could hold floating point range and therefore store extended ranges, but if you want to stay in 16 bits representation, then you have a limited precision. Sure, 16 bits is a plenty of room, but if you use 16 bits to encode L* and a big part of the domain are highlights over 100, then you could end 0..100 to be encoded using 9 bits and this is still not enough for a decent accuracy.

Another issue is what to do with those highlights. A tag with HDR diffuse white would help to do a coarse clipping, but if you want some gamut mapping to bring highlights into SDR gamut, you have to redo everything.

> This also means that SDR mappings are not hard coded into the profiles, and can be set by the user to suite their viewing situation and/or the source material.

Exactly.

> A slightly more sophisticated measure than simple L* that I've used for printer linearization is equalizing distance in delta E space, but probably not something to pursue in initial development

Nice idea. Maybe then using a minimization algorithm like damped least-squares to find best curve points in regard of dE would work, I wonder how weird the obtained curves would be in this case.

I think ICC would say that the solution for HDR is to use ICCMax, but I'm afraid only Onyx is currently supporting those profile.

Cheers
Marti
From: <mar...@li...> - 2024-03-19 11:01:39
Hi,

> I have understood that LittleCMS has already made that decision for me, for the v4 perceptual rendering intent:
> https://github.com/mm2/Little-CMS/blob/46355888b823b563db928faec59b0312a05e1143/src/cmscnvrt.c#L1129-L1133

That was a long time ago. You probably know about V4 and perceptual PCS. For V4, ICC decided to define a different PCS for perceptual and saturation intents. This PCS is almost identical to relative colorimetric but with a perceptual black point. This decision messed out things to incredible levels, converting some combined transforms in a big mess. Just consider a scanner to printer going rel. colorimetric in scanner and perceptual on printer.

To make things worse, I found a lot of matrix-shaper profiles claiming to be V4 and providing only one set of matrix-curves. Obviously, this cannot give zero and perceptual black at same time, so I reported the issue to ICC (I was in the committee at that time) and get the answer "all those profiles are wrong".

Then, since I could not discard such big number of profiles, I tried to imagine a way to automatically do the rescaling from different PCS. And found BPC to work fine for that. So, the idea behind this change is to increase inter-operability and fix broken profiles.

Regards
Martí
From: Graeme G. <gr...@ar...> - 2024-03-19 06:57:01
mar...@li... wrote:

> I think the issue here is HDR is not well supported by ICC workflows, highlights and values above L* 100 of D50 are seldom discussed in the 4.4 ICC spec. For BT.2020/PQ, which profile I'm attaching this is quite evident. Lcms can deal with it on unbounded mode, but if you feed a RGB -> Lab transform, for RGB=(255,255,255) you will find L*=523!! This is not encodable in any CLUT unless you provide enough headroom.

In my view the elephant in the room with regard to HDR standards is the lack of explicitly dealing with the diffuse white. [ As best I understand it this is a result of rushing mastering HDR standards into consumer products. ]

This then flows through to a lack of standards in things like ICC profiles for HDR displays. Simply determining a PCS diffuse white point for a given ICC defined HDR space lets you trivially convert to/from SDR, and not have to even worry about using ICC V4 to represent above PCS white values. This also means that SDR mappings are not hard coded into the profiles, and can be set by the user to suite their viewing situation and/or the source material. Levels above diffuse white can also be well managed in terms of compressing highlights to fit the range of the display, or applying tricks like highlight extrapolation from SDR sources.

> I did this paper many, many time ago regarding how to increase accuracy of CLUT. Hope that would be of any help.
> https://www.littlecms.com/ASICprelinerization_CGIV08.pdf

To be explicit, the sort of thing I would be trying in applying this approach (i.e. distribute the cLut indexes evenly in perceptual space) would be to use the input colorspace profile to create device to L* curves, then normalize the curves to go through 0 and 1, as well as perhaps imposing non-monoticity and minimum slope. (A slightly more sophisticated measure than simple L* that I've used for printer linearization is equalizing distance in delta E space, but probably not something to pursue in initial development.]

Maybe there will have to be some tweaks for HDR profiles to give a reasonable estimate of perceptual above the diffuse white point.

So these device per channel curves would be applied as inverses in the population of the cLUT when looking up the overall device to device link, and applied in the normal direction in front of the cLUT, i.e.

    cLut population: lin_indev -> indev -> device link -> outdev
    cLut use:        indev -> lin_indev -> cLUT -> outdev

Cheers,
Graeme Gill.
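[Editor's note] The per-channel pre-linearization Graeme outlines can be sketched with stock LittleCMS calls: sample each device channel's L* response through the input profile, normalize the result to pass through 0 and 1, and keep the inverse for populating the cLUT. This is only a rough illustration under assumed choices (each channel sampled alone against black, relative colorimetric intent, 256 samples); it is not ArgyllCMS's implementation.

    #include <lcms2.h>

    /* Build a normalized device-value -> L* shaper for one RGB channel of
     * "devprofile", plus its inverse. Sketch only: sampling the channel alone
     * is just one plausible choice. Caller frees both curves. */
    static void build_channel_shaper(cmsHPROFILE devprofile, int channel,
                                     cmsToneCurve **curve, cmsToneCurve **inverse)
    {
        enum { N = 256 };
        cmsFloat32Number table[N];
        cmsHPROFILE lab = cmsCreateLab4Profile(NULL);       /* D50 Lab */
        cmsHTRANSFORM t = cmsCreateTransform(devprofile, TYPE_RGB_FLT,
                                             lab, TYPE_Lab_DBL,
                                             INTENT_RELATIVE_COLORIMETRIC, 0);

        for (int i = 0; i < N; i++) {
            cmsFloat32Number rgb[3] = { 0.0f, 0.0f, 0.0f };
            cmsCIELab out;

            rgb[channel] = (cmsFloat32Number)i / (N - 1);
            cmsDoTransform(t, rgb, &out, 1);
            table[i] = (cmsFloat32Number)(out.L / 100.0);   /* L* scaled to 0..1 */
        }

        /* Normalize so the curve goes through 0 and 1, as suggested above. */
        for (int i = 0; i < N; i++)
            table[i] = (table[i] - table[0]) / (table[N - 1] - table[0]);

        *curve = cmsBuildTabulatedToneCurveFloat(NULL, N, table);
        *inverse = cmsReverseToneCurve(*curve);

        cmsDeleteTransform(t);
        cmsCloseProfile(lab);
    }

The returned pair would then be used exactly as in Graeme's two pipelines: the inverse while populating the cLUT, and the forward curve in front of the cLUT at lookup time.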
From: Pekka P. <pek...@ha...> - 2024-03-18 13:59:10
On Sat, 16 Mar 2024 10:09:29 +0100 <mar...@li...> wrote:

> Hello,
>
> Well, no solution fits everything, of course, but I found the CLUT plus linearization curves to work fine in many scenarios. I think the issue here is HDR is not well supported by ICC workflows, highlights and values above L* 100 of D50 are seldom discussed in the 4.4 ICC spec. For BT.2020/PQ, which profile I'm attaching this is quite evident. Lcms can deal with it on unbounded mode, but if you feed a RGB -> Lab transform, for RGB=(255,255,255) you will find L*=523!! This is not encodable in any CLUT unless you provide enough headroom.
>
> I did this paper many, many time ago regarding how to increase accuracy of CLUT. Hope that would be of any help.
> https://www.littlecms.com/ASICprelinerization_CGIV08.pdf

Hi Marti,

interesting indeed. The idea seems to be to separate the per-channel "average" approximate non-linearity from the rest of the transformation, assuming that the uniform distribution of the sampling points of a 3D LUT is then better suited for the rest of the transformation.

When I think of the kind of color transformation pipelines from source space to blending space, they do seem to be exactly that form until tone and gamut mapping. That does seem worth studying in the context of HDR transformations to evaluate achievable precision. I would be surprised if someone hadn't already done that, but I don't know where to look.

Weston should probably apply that method in its 3D LUT path anyway.

Thanks,
pq
From: Pekka P. <pek...@ha...> - 2024-03-18 12:51:21
On Sat, 16 Mar 2024 15:35:50 +1100 Graeme Gill <gr...@ar...> wrote:

> Pekka Paalanen wrote:
>
> Hi,
>
> > I have some difficulty in generating ICC profiles in Weston's test suite. I want the test suite to exercise all code paths to ensure the color operations are implemented correctly. I have separate code paths for matrices+curves and 3D LUTs. Which code path gets chosen depends on what processing blocks (cmsStage) an ICC file results in. Therefore, I would like to craft identically behaving ICC files that hit the different code paths (the goal).
>
> I think Marti's advice is to be taken seriously - a lot of the point of using an ICC library like lcms is to let it do all the work with regard profile details (and there are a lot of details that have accumulated over the years). So using a generic extraction such as cLUT representation while allowing you to do the pixel transformation in HW is the way to go.

Hi Graeme,

yes. That's why I want to take advantage of LittleCMS for everything ICC file related as much as I possibly can. The two complications I have are:

- interfacing ICC profiles with television broadcast standard profiles that do not follow ICC rules, and
- the desire to match the chain of mathematical operations implemented by display controller hardware (fixed-function, as opposed to programmable GPU shaders).

There are people hard at work to define Linux KMS UAPI to get deterministic and well-defined color processing operations exposed from display controller hardware, btw. This explains the plan:
https://lists.freedesktop.org/archives/dri-devel/2024-February/443518.html

I think that is a direct consequence of Wayland color management design. I would like to believe that eventually we could even influence hardware design in the long run.

> In my experience there is some advantage in terms of the quality/speed tradeoff in being able to add optimized per channel in & out curves to the cLUT, but it is somewhat non-trivial to implement this in a generic way, if the ICC library itself doesn't provide such facilities.

I'll certainly read the paper Marti pointed to.

> Not withstanding the above, in my ArgyllPRO color transformation regression test (icc/lutest) code I create a bunch of ICC profiles that all implement the same overall transform using different ICC V2 & V4 tags, and you can grab the RGB sub-set of them from here: <https://www.argyllpro.com.au/argyllPRO_RGB_lu_test_profiles.zip>.

Interesting, thanks. So far I've preferred to create profiles programmatically during the tests, so that it's clear how they were formed.

> Note the limitations of this set though: these are just RGB device profiles, and does not include any other type of profile (i.e. device link), nor does it include other device colorspaces such as monochrome, CMYK, CMY, N-color etc., all spaces that can occur when transforming from source to (say) an RGB display. They also do not all have non-linear per channel curves (in order to make them reproduce the same device->XYZ transform), so this isn't much of a test of these stages of the machinery.

We are limiting the scope in Wayland. Currently the protocol draft requires a 3-channel Display Class profile for application content. DeviceLink is not supported directly, but an application can apply a DeviceLink itself and then tell the compositor that it already used the chosen display's profile to get the equivalent result.

> > If my goal is reasonable, then what would be a good way to add the required BPC? Can I make LittleCMS compute it for me, or should I just copy the BPC algorithm from LittleCMS?
>
> I'd be somewhat wary of adding BPC to everything, particularly to the perceptual or saturation intents.

I have understood that LittleCMS has already made that decision for me, for the v4 perceptual rendering intent:
https://github.com/mm2/Little-CMS/blob/46355888b823b563db928faec59b0312a05e1143/src/cmscnvrt.c#L1129-L1133

> Adobe's BPC white paper notes that BPC should be optional for these intents, and there are good reasons for this.

In the Wayland protocol, we indeed have "media-relative rendering intent with BPC" as an explicitly separate option.

> The underlying desire for BPC is due to a fundamental architectural problem with the ICC idea of mixing and matching source and destination device profiles while also supporting gamut mapping. In general it can't work properly due to the need to simultaneously know both source and destination gamuts to create such a mapping.

When I started learning about ICC color management, that was one of the fundamental questions I had: How do you know the two gamuts in order to the able to map between them. It is relieving to hear that not finding a clear answer to that is not my personal failure in understanding.

Curiously, HDR static metadata carries something called Mastering Display Color Volume, which essentially is that color gamut, separately from the colorimetry encoding. In Wayland we would have the opportunity to extend it to SDR material too.

> There are three attempts at working around this problem that I'm aware of:
>
> 1) ICC PRMG. This can partially work, but is also limited by attempting to implement gamut mapping in isolation in each device profile.
>
> 2) Adobe BPC. This is true gamut mapping, but applied just to the black point. Their algorithm is a heuristic, and doesn't always identify the correct black points to map from & to, sometimes causing worse black levels, rather than better.
>
> 3) ArgyllCMS gamut mapping. This has the advantage of allowing true gamut mapping, even with popular matrix source spaces, but has the disadvantage of the resulting perceptual or saturation table only being correct for a specific source profile. See <http://www.argyllcms.com/doc/iccgamutmapping.html> for more details.

I always remember that page by the diagram at the end. :-)

Thanks,
pq
From: <mar...@li...> - 2024-03-16 09:47:25
Hello,

Well, no solution fits everything, of course, but I found the CLUT plus linearization curves to work fine in many scenarios. I think the issue here is HDR is not well supported by ICC workflows, highlights and values above L* 100 of D50 are seldom discussed in the 4.4 ICC spec. For BT.2020/PQ, which profile I'm attaching this is quite evident. Lcms can deal with it on unbounded mode, but if you feed a RGB -> Lab transform, for RGB=(255,255,255) you will find L*=523!! This is not encodable in any CLUT unless you provide enough headroom.

I did this paper many, many time ago regarding how to increase accuracy of CLUT. Hope that would be of any help.
https://www.littlecms.com/ASICprelinerization_CGIV08.pdf

Regards
Marti Maria
The LittleCMS Project
https://www.littlecms.com
From: Graeme G. <gr...@ar...> - 2024-03-16 04:36:14
Pekka Paalanen wrote:

Hi,

> I have some difficulty in generating ICC profiles in Weston's test suite. I want the test suite to exercise all code paths to ensure the color operations are implemented correctly. I have separate code paths for matrices+curves and 3D LUTs. Which code path gets chosen depends on what processing blocks (cmsStage) an ICC file results in. Therefore, I would like to craft identically behaving ICC files that hit the different code paths (the goal).

I think Marti's advice is to be taken seriously - a lot of the point of using an ICC library like lcms is to let it do all the work with regard profile details (and there are a lot of details that have accumulated over the years). So using a generic extraction such as cLUT representation while allowing you to do the pixel transformation in HW is the way to go.

In my experience there is some advantage in terms of the quality/speed tradeoff in being able to add optimized per channel in & out curves to the cLUT, but it is somewhat non-trivial to implement this in a generic way, if the ICC library itself doesn't provide such facilities.

Not withstanding the above, in my ArgyllPRO color transformation regression test (icc/lutest) code I create a bunch of ICC profiles that all implement the same overall transform using different ICC V2 & V4 tags, and you can grab the RGB sub-set of them from here: <https://www.argyllpro.com.au/argyllPRO_RGB_lu_test_profiles.zip>.

Note the limitations of this set though: these are just RGB device profiles, and does not include any other type of profile (i.e. device link), nor does it include other device colorspaces such as monochrome, CMYK, CMY, N-color etc., all spaces that can occur when transforming from source to (say) an RGB display. They also do not all have non-linear per channel curves (in order to make them reproduce the same device->XYZ transform), so this isn't much of a test of these stages of the machinery.

> If my goal is reasonable, then what would be a good way to add the required BPC? Can I make LittleCMS compute it for me, or should I just copy the BPC algorithm from LittleCMS?

I'd be somewhat wary of adding BPC to everything, particularly to the perceptual or saturation intents. Adobe's BPC white paper notes that BPC should be optional for these intents, and there are good reasons for this. The underlying desire for BPC is due to a fundamental architectural problem with the ICC idea of mixing and matching source and destination device profiles while also supporting gamut mapping. In general it can't work properly due to the need to simultaneously know both source and destination gamuts to create such a mapping.

There are three attempts at working around this problem that I'm aware of:

1) ICC PRMG. This can partially work, but is also limited by attempting to implement gamut mapping in isolation in each device profile.

2) Adobe BPC. This is true gamut mapping, but applied just to the black point. Their algorithm is a heuristic, and doesn't always identify the correct black points to map from & to, sometimes causing worse black levels, rather than better.

3) ArgyllCMS gamut mapping. This has the advantage of allowing true gamut mapping, even with popular matrix source spaces, but has the disadvantage of the resulting perceptual or saturation table only being correct for a specific source profile. See <http://www.argyllcms.com/doc/iccgamutmapping.html> for more details.

Cheers,
Graeme Gill.
From: <mar...@li...> - 2024-03-15 17:54:03
Hi Paalanen,

Nice to contact you. Regarding your question, matrix shaper is only a very limited subset, true ICC profiles uses CLUT. Also, matrix shaper does not allow gamut mapping so it is better to avoid matrix-shaper whatever possible.

I think there is a much better solution than replicating all lcms internals. You have to implement thetrahedral interpolation on GPU. It is not a big deal, in CUDA and OpenCL is trivial. You can use the lcms code as basis.

Then, when you want to do color management, just create a color transform and sample all RGB to RGB gamut in a CLUT of 33 points. You could use more points if wish so, but 33 is enough in most cases. Ignore the profiles internals, it is nothing of your business:

    cmsOpenProfileFromFile input
    cmsOpenProfileFromFile output

    cmsCreateTransform input to output in floating point float

    for (r=0; r<255; r += 256/33)
      for (g=0; g<255; g += 256/33)
        for (b=0; b<255; b += 256/33)
        {
            cmsDoTransform rgb -> rgb_out
            store in GPU CLUT
        }

    Close everything

Then, when you want to transform an image, just use this CLUT and tethahedral interpolation to do the color matching. It is fast because it happens in the GPU. It is simple because you have not to worry about the internal representation of the profiles. It is scalable because you can add more points to increase precision and it is portable. And all color management features are working.

Don't mess with DToBxx neither with matrix shaper, otherwise you would end replicating all lcms functionality. Don't try to oversimplify by using matrix-shaper, those are not good for gamut mapping and have no intents. Color management does not work by using matrix shaper.

Hope that helps

Marti Maria
The LittleCMS Project
https://www.littlecms.com
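[Editor's note] Marti's pseudocode above maps onto the public LittleCMS API roughly as follows. The 33-point grid, the float RGB encoding and the flat array layout for the GPU texture are illustrative assumptions, not a prescribed layout:

    #include <lcms2.h>

    #define GRID 33   /* 33x33x33 grid, as suggested above */

    /* Sample the input->output transform on a uniform RGB grid and store the
     * result in "clut" (GRID*GRID*GRID*3 floats, r index slowest here). */
    static int sample_clut(const char *inpath, const char *outpath, float *clut)
    {
        cmsHPROFILE in = cmsOpenProfileFromFile(inpath, "r");
        cmsHPROFILE out = cmsOpenProfileFromFile(outpath, "r");
        cmsHTRANSFORM xform = NULL;
        int r, g, b, ok = 0;

        if (in && out)
            xform = cmsCreateTransform(in, TYPE_RGB_FLT, out, TYPE_RGB_FLT,
                                       INTENT_PERCEPTUAL, 0);

        if (xform) {
            for (r = 0; r < GRID; r++)
                for (g = 0; g < GRID; g++)
                    for (b = 0; b < GRID; b++) {
                        float rgb[3] = { (float)r / (GRID - 1),
                                         (float)g / (GRID - 1),
                                         (float)b / (GRID - 1) };
                        float *dst = clut + 3 * ((r * GRID + g) * GRID + b);

                        cmsDoTransform(xform, rgb, dst, 1);
                    }
            ok = 1;
            cmsDeleteTransform(xform);
        }

        if (in) cmsCloseProfile(in);
        if (out) cmsCloseProfile(out);
        return ok;
    }

The resulting array would then be uploaded as a 3D texture and looked up with tetrahedral (or the GPU's built-in trilinear) interpolation, as discussed in the thread; using more grid points only means changing GRID.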
From: Pekka P. <pek...@ha...> - 2024-03-15 16:06:56
On Fri, 15 Mar 2024 15:16:13 +0100 <mar...@li...> wrote:

Hi Marti,

I wish I could share your optimism.

Do you know if that is also enough when one side is BT.2020/PQ encoded HDR and the other side can be anything, including an optical encoding?

If you have any paper to point to that suggests a 3D LUT is all you need, I would be exhilarated, and I would tell the Linux display driver engineers to forward that to their hardware engineers, so we wouldn't have do things the hard way in the future.

I know, ICC profiles cannot represent HDR spaces in a good way, unless maybe with the most recent addition of CICP tags. However, I don't need to represent such things as valid ICC conformant profiles, they will never escape Weston's processes. I'm planning to abuse the cmsHPROFILE/cmsPipeline machinery that LittleCMS already has in an extendable form instead of inventing my own, and craft such cmsHPROFILEs that a chain of them results in the pipeline I need. The reason I originally went this way is because I need to be able to handle source/destination profile combinations where one of them is a valid ICC profile file, and the other is a DTV video format description like HDR10 with HDR static metadata. Of course, I need to support a pure ICC workflow too.

We want to make an explicit trip through an optically encoded intermediate space for blending, so transformations between two electrical encodings will be rare. We are also likely to get application content in optical encoding, optical scRGB sounds to be popular in (Windows) HDR games.

My experience with LUTs is that even something as simple as the inverse of power 2.2 curve will need a ridiculous number of uniformly distributed LUT elements to reach any kind of usable precision. For 8 bit-per-channel electrical encoding precision, with 4096 elements in optical domain the maximum error is still bigger than +/- 1 code point. I believe the perceptual quantizer curve is even more extreme. We will be adding the missing curve types as LittleCMS plugins, like the PQ and HLG curves, since they would be far too expensive as LUTs.

Luckily though, I can choose the compositor blending space such that that kind of curves will appear alone and not need channel mixing at that stage. But a 1D LUT is probably not good enough even then, which is why display controller hardware designers are inventing things like fixed-function curves and non-uniformly distributed 1D LUTs.

I suspect that if a 3D LUT could do everything with reasonable precision / storage trade-off, the display controller hardware designers would simply put a 3D LUT on the card instead of all the various matrix, fixed curve, and LUT elements they are having now, in addition to some 3D LUT elements. By this I mean composition off-loading to display controllers, a.k.a Linux DRM with multiple KMS planes that the hardware blends together live during a scanout cycle.

My ultimate goal as a Weston compositor developer is to aim at off-loading all color processing to the display controller, and using a GPU only as a fallback. That means I need to match the fixed-function hardware capabilities as closely as possible, and they seem to be having a series of elements. See for example the "DCN 3.0 family color caps and mapping" diagram just above https://dri.freedesktop.org/docs/drm/gpu/amdgpu/display/display-manager.html#blend-mode-properties .

Hence, the idea of just collapsing everything into a single giant 3D LUT does not seem to fit, as easy as it would make my life.

> Don't mess with DToBxx neither with matrix shaper, otherwise you would end replicating all lcms functionality. Don't try to oversimplify by using matrix-shaper, those are not good for gamut mapping and have no intents. Color management does not work by using matrix shaper.

So, to get back to the point.

I'm starting from a matrix-shaper profile in the tests, because a matrix-shaper profile carries all the stage types Weston currently recognises and optimises. This is the way I can easily test the 3x1D LUT - matrix - 3x1D LUT path, and the optimiser. I have understood that matrix-shaper profiles are common as SDR display profiles when you don't bother profiling the display. Also television broadcast standards use essentially matrix-shaper types.

All I'm doing here is to check that the mathematics is coded correctly.

When Weston fails to recognise a cmsStage or fails to map the optimised pipeline to its internal fixed-function pipeline (which mimics what DRM KMS exposes, so that we can more easily off-load the pipeline to KMS in the future), it falls back to a 3D LUT. I need an ICC profile file that contains such stages that force Weston to fall back. A literal 3D LUT stage is the best option, because I want to test Weston's 3D LUT path.

When I use a 3D LUT in an ICC file, I suddenly also need to adhere to the perceptual PCS definition of the black point as well. The fact that matrix-shaper profiles do not behave the same is the headache here.

I did get it to work, by copying 10-20 lines of code from ComputeBlackPointCompensation() depending on if you count the comments. I was hoping to not need to do that and just tell LittleCMS to add BPC compensation in the transformation.

I haven't tried to replace my matrix-shaper test profiles with a DToB profile using a matrix and a curveset, but then I would need to adhere to the perceptual PCS black point already there. I think it would just move my problem from one place to another instead of solving it.

Thanks,
pq
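[Editor's note] Since the PQ curve comes up above as a candidate for a LittleCMS plugin rather than a 1D LUT, here is a hedged sketch of the SMPTE ST 2084 (BT.2100 PQ) EOTF as a plain C function using the published constants. Wiring it into an actual cmsPluginParametricCurves registration is left out, and the function name is only illustrative:

    #include <math.h>

    /* SMPTE ST 2084 / BT.2100 PQ EOTF: nonlinear signal Ep in [0,1] ->
     * absolute luminance in cd/m^2 (up to 10000). Published constants. */
    static double pq_eotf(double Ep)
    {
        const double m1 = 2610.0 / 16384.0;          /* 0.1593017578125 */
        const double m2 = 2523.0 / 4096.0 * 128.0;   /* 78.84375        */
        const double c1 = 3424.0 / 4096.0;           /* 0.8359375       */
        const double c2 = 2413.0 / 4096.0 * 32.0;    /* 18.8515625      */
        const double c3 = 2392.0 / 4096.0 * 32.0;    /* 18.6875         */

        double p = pow(Ep, 1.0 / m2);
        double num = p - c1;

        if (num < 0.0)
            num = 0.0;

        return 10000.0 * pow(num / (c2 - c3 * p), 1.0 / m1);
    }

As a quick sanity check on the constants, a signal of 1.0 returns 10000 cd/m^2 and 0.0 returns 0; the steepness of this curve near black is why a uniformly sampled 1D LUT, as discussed above, needs so many entries.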
From: Pekka P. <pek...@ha...> - 2024-03-15 11:21:40
Hi,

I'm working on a Wayland compositor (Weston) to integrate LittleCMS in it. I use LittleCMS to come up with the color transformation from ICC profiles, then extract and optimize the cmsPipeline, and implement it in OpenGL ES 3.0 shaders.

I have some difficulty in generating ICC profiles in Weston's test suite. I want the test suite to exercise all code paths to ensure the color operations are implemented correctly. I have separate code paths for matrices+curves and 3D LUTs. Which code path gets chosen depends on what processing blocks (cmsStage) an ICC file results in. Therefore, I would like to craft identically behaving ICC files that hit the different code paths (the goal).

Crafting matrix-shaper profiles is easy. Crafting an equivalent profile that internally has a 3D LUT (DToB/BToD) is the problem. I can craft such an ICC profile, but because I need to test with the perceptual rendering intent, I should also include BPC between the zeros black point of a matrix-shaper and the non-zero black point of the v4 perceptual PCS. Otherwise the assumption that a perceptual v4 CLUT profile has the ICCv4 defined perceptual black point does not hold, and the tests fail.

Is my goal reasonable to begin with, or should I just accept that exercising the different Weston GL ES code paths by means of different ICC profiles cannot lead to identical results and I need to define my expected test results separately for each case?

If my goal is reasonable, then what would be a good way to add the required BPC? Can I make LittleCMS compute it for me, or should I just copy the BPC algorithm from LittleCMS?

I've tried creating the DToB/BToD test profile 3D LUT by creating a chain of the usual matrix-shaper profile and the abstract XYZ profile and sampling that, but now I'm convinced that I cannot get BPC applied that way, not with the XYZ profile at least.

Thanks,
pq
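[Editor's note] For context on the "crafting matrix-shaper profiles is easy" part, a matrix-shaper test profile can be built in memory with a few LittleCMS calls. This sketch uses sRGB-like primaries and a plain 2.2 gamma purely as example numbers; it is not the actual Weston test-suite code:

    #include <lcms2.h>

    /* Build a simple matrix-shaper RGB profile: primaries + white point +
     * one tone curve per channel. Example numbers only. */
    static cmsHPROFILE make_matrix_shaper_profile(void)
    {
        cmsCIExyY white = { 0.3127, 0.3290, 1.0 };           /* D65 */
        cmsCIExyYTRIPLE prim = {
            { 0.640, 0.330, 1.0 },                            /* red   */
            { 0.300, 0.600, 1.0 },                            /* green */
            { 0.150, 0.060, 1.0 },                            /* blue  */
        };
        cmsToneCurve *gamma22 = cmsBuildGamma(NULL, 2.2);
        cmsToneCurve *curves[3] = { gamma22, gamma22, gamma22 };
        cmsHPROFILE profile = cmsCreateRGBProfile(&white, &prim, curves);

        cmsFreeToneCurve(gamma22);   /* the profile keeps its own copy */
        return profile;
    }

Whether the resulting profile then exercises the matrix/curves path or the 3D LUT fallback is, per the discussion above, decided by which cmsStages LittleCMS produces for the transform.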
From: Tom L. <to...@ke...> - 2024-01-30 00:09:07
On 29 Jan 2024 21:33, Adrian Knagg-Baugh <aje...@gm...> wrote:

Adrian,

red, green and blue are already pointers, so you shouldn't be taking their address.

i.e. remove the &s

Surprised your compiler didn't warn about this?

Cheers
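[Editor's note] A hedged sketch of the corrected calls Tom is pointing at: cmsReadTag already returns pointers, so they are passed to cmsXYZ2xyY directly, and since cmsReadTag returns void*, a NULL check and explicit casts are prudent. Variable names follow Adrian's original message:

    #include <lcms2.h>

    /* Read the colorant tags and convert to xyY. Note: as discussed in the
     * next message, these tag values are "cooked" (media-relative, adapted to
     * the PCS), so they will not round-trip to the original primaries. */
    static void read_colorants_xyY(cmsHPROFILE profile)
    {
        cmsCIEXYZ *red   = (cmsCIEXYZ *)cmsReadTag(profile, cmsSigRedColorantTag);
        cmsCIEXYZ *green = (cmsCIEXYZ *)cmsReadTag(profile, cmsSigGreenColorantTag);
        cmsCIEXYZ *blue  = (cmsCIEXYZ *)cmsReadTag(profile, cmsSigBlueColorantTag);
        cmsCIExyY redxyY, greenxyY, bluexyY;

        if (!red || !green || !blue)
            return;   /* these tags may be missing on some profiles */

        cmsXYZ2xyY(&redxyY, red);      /* no & on the second argument */
        cmsXYZ2xyY(&greenxyY, green);
        cmsXYZ2xyY(&bluexyY, blue);
    }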
From: <mar...@li...> - 2024-01-29 22:38:01
|
Hello,

> is there something I'm not accounting for? And is there a more correct way to read the original chromaticities back out of a profile?

You are assuming the internals of ICC profiles work in a way they don't. The cmsSigXXXColorantTag tags do NOT contain direct information on the primaries: the numbers are "cooked" (adapted to the D50 PCS), and it is certainly possible for those tags to be missing on some profiles.

You should avoid reading tags directly as much as possible. Instead, use an absolute colorimetric transform with no observer adaptation to get the primaries. This works no matter which profile type you are using.

Here is an example using the transicc tool; you can do the same in code by creating a transform with the absolute colorimetric intent, making sure to set the observer adaptation state to 0 first. I am trying to recover the sRGB red primary:

marti@I7:~$ transicc -t3 -d0 -i*sRGB -o*XYZ
LittleCMS ColorSpace conversion calculator - 5.1 [LittleCMS 2.16]
Copyright (c) 1998-2023 Marti Maria Saguer. See COPYING file for details.

Enter values, 'q' to quit
R? 255
G? 0
B? 0

X=41.2391 Y=21.2639 Z=1.9331

You can convert this XYZ of (41.2391, 21.2639, 1.9331) to xyY, for example using this calculator: http://www.brucelindbloom.com/ColorCalculator.html

xyY = (0.6400, 0.3300, 21.2639), which corresponds to the Rec. 709 chromaticity of red.

Hope that helps

Marti Maria
The LittleCMS Project
https://www.littlecms.com

From: Adrian Knagg-Baugh <aje...@gm...>
Sent: Monday, January 29, 2024 10:34 PM
To: lcm...@li...
Subject: [Lcms-user] Need help recovering the chromaticities that a profile was made with

Hi,

I have extracted the chromaticity xy coordinates from a cmsHPROFILE as follows:

  cmsCIEXYZ *red;
  cmsCIEXYZ *green;
  cmsCIEXYZ *blue;
  red = cmsReadTag (profile, cmsSigRedColorantTag);
  green = cmsReadTag (profile, cmsSigGreenColorantTag);
  blue = cmsReadTag (profile, cmsSigBlueColorantTag);

  cmsXYZ2xyY(&redxyY, &red);
  cmsXYZ2xyY(&greenxyY, &green);
  cmsXYZ2xyY(&bluexyY, &blue);

However, if the profile I do this to is a sRGB profile made with chromaticities from the cmsCIExyYTRIPLE {{0.639998686, 0.330010138, 1.0}, {0.300003784, 0.600003357, 1.0}, {0.150002046, 0.059997204, 1.0}}, and the standard sRGB whitepoint cmsCIExyY d65_srgb_adobe_specs = {0.3127, 0.3290, 1.0}; I don't get back the same coordinates. Instead I get:

redxyY x: 0.648438, y: 0.330868, Y: 0.222488
greenxyY x: 0.321176, y: 0.597877, Y: 0.716904
bluexyY x: 0.155899, y: 0.066051, Y: 0.060608

What's going on here: is there something I'm not accounting for? And is there a more correct way to read the original chromaticities back out of a profile?

Thanks,

Adrian. |
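A minimal C sketch of the same approach through the library API rather than transicc (the built-in sRGB profile stands in for whatever profile handle you actually have; this is an illustration, not code from the thread):

#include "lcms2.h"
#include <stdio.h>

int main(void)
{
    cmsHPROFILE rgb = cmsCreate_sRGBProfile();          /* stand-in for your profile */
    cmsHPROFILE xyz = cmsCreateXYZProfile();
    cmsUInt8Number primaries[3][3] = { {255,0,0}, {0,255,0}, {0,0,255} };
    cmsCIEXYZ out[3];
    cmsCIExyY xyY;
    int i;

    cmsSetAdaptationState(0.0);   /* no observer adaptation, as with transicc -d0 */

    cmsHTRANSFORM xform = cmsCreateTransform(rgb, TYPE_RGB_8,
                                             xyz, TYPE_XYZ_DBL,
                                             INTENT_ABSOLUTE_COLORIMETRIC, 0);

    cmsDoTransform(xform, primaries, out, 3);

    for (i = 0; i < 3; i++) {
        /* The Y scale of TYPE_XYZ_DBL differs from transicc's 0..100 output,
         * but the chromaticity x,y is scale-independent. */
        cmsXYZ2xyY(&xyY, &out[i]);
        printf("x=%.4f y=%.4f\n", xyY.x, xyY.y);
    }

    cmsDeleteTransform(xform);
    cmsCloseProfile(rgb);
    cmsCloseProfile(xyz);
    return 0;
}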
From: Adrian Knagg-B. <aje...@gm...> - 2024-01-29 22:07:39
|
Thank you for the explanation. Yes, that's great, I will create a transform from profile to XYZ, use it to transform each RGB primary using absolute colorimetric, and then convert to xyY straightforwardly. Thanks again, Adrian. On Mon, 29 Jan 2024, 22:02 , <mar...@li...> wrote: > Hello, > > > > > is there something I'm not accounting for? And is there a more correct > way to read the original chromaticities back out of a profile? > > > > You are assuming the internals of ICC profiles works in a way they don’t. > The cmsSigXXXColorantTag tags does NOT contain direct information on the > primaries, numbers are “cooked” and it is certainly possible for those tags > to be missing on some profiles. > > > > You should avoid reading tags directly as much as possible. Instead, use > an absolute colorimetric transform with no observer adaptation to get the > primaries. This would work no matter which profile type are you using. > > > > Here an example by using the transicc tool, you can do the same creating a > transform with absolute colorimetric intent making sure to set the observer > adaptation state to 0 first. I am trying to recover sRGB red primary: > > > > marti@I7:~$ transicc -t3 -d0 -i*sRGB -o*XYZ > > LittleCMS ColorSpace conversion calculator - 5.1 [LittleCMS 2.16] > > Copyright (c) 1998-2023 Marti Maria Saguer. See COPYING file for details. > > > > Enter values, 'q' to quit > > R? 255 > > G? 0 > > B? 0 > > > > X=41.2391 Y=21.2639 Z=1.9331 > > > > You convert this XYZ of (41.2391, 21.2639, 1.9331) to xyY, for example > using this link: http://www.brucelindbloom.com/ColorCalculator.html > > > > xyY = (0.6400, 0.3300, 21.2639) Which corresponds to the Rec709 > chromaticity of red. > > > > Hope that helps > > > > Marti Maria > > The LittleCMS Project > > https://www.littlecms.com > > > > > > > > > > *From:* Adrian Knagg-Baugh <aje...@gm...> > *Sent:* Monday, January 29, 2024 10:34 PM > *To:* lcm...@li... > *Subject:* [Lcms-user] Need help recovering the chromaticities that a > profile was made with > > > > Hi, > > > > I have extracted the chromaticity xy coordinates from a cmsHPROFILE as > follows: > > > cmsCIEXYZ *red; > cmsCIEXYZ *green; > cmsCIEXYZ *blue; > red = cmsReadTag (profile, cmsSigRedColorantTag); > green = cmsReadTag (profile, cmsSigGreenColorantTag); > blue = cmsReadTag (profile, cmsSigBlueColorantTag); > > cmsXYZ2xyY(&redxyY, &red); > cmsXYZ2xyY(&greenxyY, &green); > > cmsXYZ2xyY(&bluexyY, &blue); > > > > However, if the profile I do this to is a sRGB profile made with > chromaticities from the cmsCIExyYTRIPLE {{0.639998686, 0.330010138, 1.0}, > {0.300003784, 0.600003357, 1.0}, {0.150002046, 0.059997204, 1.0}}, and the > standard sRGB whitepoint cmsCIExyY d65_srgb_adobe_specs = {0.3127, 0.3290, > 1.0}; I don't get back the same coordinates. Instead I get: > > > > redxyY x: 0.648438, y: 0.330868, Y: 0.222488 > greenxyY x: 0.321176, y: 0.597877, Y: 0.716904 > bluexyY x: 0.155899, y: 0.066051, Y: 0.060608 > > > > What's going on here: is there something I'm not accounting for? And is > there a more correct way to read the original chromaticities back out of a > profile? > > > > Thanks, > > > > Adrian. > > > |
From: Adrian Knagg-B. <aje...@gm...> - 2024-01-29 21:34:10
|
Hi, I have extracted the chromaticity xy coordinates from a cmsHPROFILE as follows: cmsCIEXYZ *red; cmsCIEXYZ *green; cmsCIEXYZ *blue; red = cmsReadTag (profile, cmsSigRedColorantTag); green = cmsReadTag (profile, cmsSigGreenColorantTag); blue = cmsReadTag (profile, cmsSigBlueColorantTag); cmsXYZ2xyY(&redxyY, &red); cmsXYZ2xyY(&greenxyY, &green); cmsXYZ2xyY(&bluexyY, &blue); However, if the profile I do this to is a sRGB profile made with chromaticities from the cmsCIExyYTRIPLE {{0.639998686, 0.330010138, 1.0}, {0.300003784, 0.600003357, 1.0}, {0.150002046, 0.059997204, 1.0}}, and the standard sRGB whitepoint cmsCIExyY d65_srgb_adobe_specs = {0.3127, 0.3290, 1.0}; I don't get back the same coordinates. Instead I get: redxyY x: 0.648438, y: 0.330868, Y: 0.222488 greenxyY x: 0.321176, y: 0.597877, Y: 0.716904 bluexyY x: 0.155899, y: 0.066051, Y: 0.060608 What's going on here: is there something I'm not accounting for? And is there a more correct way to read the original chromaticities back out of a profile? Thanks, Adrian. |
From: <mar...@li...> - 2024-01-08 10:13:55
|
Hi,

> ie JDK could (SFAICS) change to specify the flag only when both src and dest have alpha.
> I don't know when we actually need it when there's no alpha. Sergey ?

That would solve all issues. Problem is, the code has no idea how alpha should be interpreted, and this includes initializing it. So, whatever I do to initialize alpha would be wrong in some cases. At that point I think it is better to just flag the error and let the user decide.

Please note that if you don't specify the flag, the destination alpha channel is untouched, so the caller is responsible for initializing it. If you do specify the flag, the destination alpha channel is initialized by the CMM by copying it from the source. The difference is that with the flag you can just malloc the destination memory; without the flag you have to initialize it yourself before calling cmsDoTransform.

Regards
Marti Maria
The LittleCMS Project
https://www.littlecms.com

From: Philip Race <phi...@or...>
Sent: Friday, January 5, 2024 8:55 PM
To: mar...@li...; Lcm...@li...
Subject: Re: [Lcms-user] LCMS 2.16: CMMException: LCMS error 13: Mismatched alpha channels

On 1/4/24 1:19 AM, mar...@li... wrote:

Hi Phil, The reason was to prevent using this flag when the output has alpha channel and there is no source alpha.

Specifically that case ? The JDK code currently and only relies on the opposite case, but the check as written allows neither.

Other examples are different number of alpha channels. It is not clear what to do with the alpha in such cases. Should lcms initialize the excess with zero? Some folks began to have ideas of adding a new interface to init spare alpha channels so I just applied Occam's razor. I am open to changes as long as they make sense. The actual implementation complains if the request cannot be accomplished, user specify to copy alpha and there is no alpha or the number of channels is different, an error is triggered. I guess you don't like this being reported as an error. Ignoring the request seems also not right because user may think it is being performed whilst is really not. What should I do?

Mainly I'd like some quick clarity so we can decide if we should change JDK code or wait for an update to LCMS. I can see the argument that specifying "copy alpha" is inconsistent with "there's no alpha to copy from", or "there's nowhere to copy the alpha to". And I can also see that it could easily be a mistake on the part of the app, that is best flagged in an upfront way. But if there's a sensible default action in some subset of these cases, that's documented and easily understood, meaning not loaded with exceptions and caveats, which would allow the existing usage to function properly, that might be preferable. I am not sure that either choice (keep the check as is, or relax it) would make what Sergey is asking for impossible, but I'll defer to him to comment on that. ie JDK could (SFAICS) change to specify the flag only when both src and dest have alpha. I don't know when we actually need it when there's no alpha. Sergey ?

-phil.

Regards
Marti Maria
The LittleCMS Project
https://www.littlecms.com

From: Philip Race <phi...@or...>
Sent: Wednesday, January 3, 2024 9:00 PM
To: Lcm...@li...
Subject: [Lcms-user] LCMS 2.16: CMMException: LCMS error 13: Mismatched alpha channels

JDK is using the flag cmsFLAGS_COPY_ALPHA.
It is specified in a few cases including if the source has alpha but the destination is opaque, expecting that the alpha channel will be ignored. Here's where JDK started using the flag https://github.com/openjdk/jdk/commit/16acfafb#diff-eed6ddb15e9c5bdab9fc3b3930d5d959966165d9622ddd293e8572489adea98b But in LCMS 2.16 there's a new check that this is running afoul of. https://github.com/mm2/Little-CMS/commit/e55b6fa4d3c5b7e08d9e4bc8c803a79ca908b5a4#diff-3627f903d37617a227e71f0ee1[…]2629b5ee838707e3fcd7dcb46e762f4e It seems to me that likely we could just drop specifying cmsFLAGS_COPY_ALPHA in this case, but I can't find any background information on the reason for the change in LCMS, or awareness that it might be a compatibility issue. Thoughts / comments ? -phil. |
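To make the constraint concrete, a small sketch of the case the 2.16 check does allow (both source and destination formats carry alpha). This only illustrates the flag's semantics as described above; it is not code from JDK or from lcms itself.

#include "lcms2.h"

/* Sketch: with cmsFLAGS_COPY_ALPHA, both formats carry an alpha channel;
 * the CMM copies alpha from source to destination, so the destination
 * buffer does not need to be pre-initialized. */
cmsHTRANSFORM make_rgba_transform(cmsHPROFILE src, cmsHPROFILE dst)
{
    return cmsCreateTransform(src, TYPE_RGBA_8,
                              dst, TYPE_RGBA_8,
                              INTENT_PERCEPTUAL,
                              cmsFLAGS_COPY_ALPHA);
}

/* If the destination format were opaque (e.g. TYPE_RGB_8) while the flag is
 * set, LCMS 2.16 reports "Mismatched alpha channels"; the caller would then
 * simply omit the flag. */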
From: Philip R. <phi...@or...> - 2024-01-05 20:00:12
|
On 1/4/24 1:19 AM, mar...@li... wrote: > > Hi Phil, > > The reason was to prevent using this flag when the output has alpha > channel and there is no source alpha. > Specifically that case ? The JDK code currently and only relies on the opposite case, but the check as written allows neither. > Other examples are different number of alpha channels. It is not clear > what to do with the alpha in such cases. Should lcms initialize the > excess with zero? > > Some folks began to have ideas of adding a new interface to init spare > alpha channels so I just applied Occam’s razor. > > I am open to changes as long as they make sense. The actual > implementation complains if the request cannot be accomplished, user > specify to copy alpha and there is no alpha or the number of channels > is different, an error is triggered. > > I guess you don’t like this being reported as an error. Ignoring the > request seems also not right because user may think it is being > performed whilst is really not. > > What should I do? > Mainly I'd like some quick clarity so we can decide if we should change JDK code or wait for an update to LCMS. I can see the argument that specifying "copy alpha" is inconsistent with "there's no alpha to copy from", or "there's nowhere to copy the alpha to". And I can also see that it could easily be a mistake on the part of the app, that is best flagged in an upfront way. But if there's a sensible default action in some subset of these cases, that's documented and easily understood, meaning not loaded with exceptions and caveats, which would allow the existing usage to function properly, that might be preferable. I am not sure that either choice (keep the check as is, or relax it) would make what Sergey is asking for impossible, but I'll defer to him to comment on that. ie JDK could (SFAICS) change to specify the flag only when both src and dest have alpha. I don't know when we actually need it when there's no alpha. Sergey ? -phil. > Regards > > Marti Maria > > The LittleCMS Project > > https://www.littlecms.com <https://www.littlecms.com> > > *From:* Philip Race <phi...@or...> > *Sent:* Wednesday, January 3, 2024 9:00 PM > *To:* Lcm...@li... > *Subject:* [Lcms-user] LCMS 2.16: CMMException: LCMS error 13: > Mismatched alpha channels > > JDK is using the flag cmsFLAGS_COPY_ALPHA. It is specified in a few > cases including if > the source has alpha but the destination is opaque, expecting that the > alpha channel will be ignored. > > Here's where JDK started using the flag > https://github.com/openjdk/jdk/commit/16acfafb#diff-eed6ddb15e9c5bdab9fc3b3930d5d959966165d9622ddd293e8572489adea98b > > But in LCMS 2.16 there's a new check that this is running afoul of. > https://github.com/mm2/Little-CMS/commit/e55b6fa4d3c5b7e08d9e4bc8c803a79ca908b5a4#diff-3627f903d37617a227e71f0ee1[…]2629b5ee838707e3fcd7dcb46e762f4e > > It seems to me that likely we could just drop specifying > cmsFLAGS_COPY_ALPHA in this case, > but I can't find any background information on the reason for the > change in LCMS, or awareness that > it might be a compatibility issue. > > Thoughts / comments ? > > -phil. > |
From: <mar...@li...> - 2024-01-04 11:49:04
|
Hi Phil,

The reason was to prevent using this flag when the output has an alpha channel and there is no source alpha. Other examples are a different number of alpha channels. It is not clear what to do with the alpha in such cases. Should lcms initialize the excess with zero?

Some folks began to have ideas of adding a new interface to init spare alpha channels, so I just applied Occam's razor. I am open to changes as long as they make sense. The actual implementation complains if the request cannot be accomplished: if the user specifies to copy alpha and there is no alpha, or the number of channels is different, an error is triggered.

I guess you don't like this being reported as an error. Ignoring the request seems also not right, because the user may think it is being performed while it really is not. What should I do?

Regards
Marti Maria
The LittleCMS Project
https://www.littlecms.com

From: Philip Race <phi...@or...>
Sent: Wednesday, January 3, 2024 9:00 PM
To: Lcm...@li...
Subject: [Lcms-user] LCMS 2.16: CMMException: LCMS error 13: Mismatched alpha channels

JDK is using the flag cmsFLAGS_COPY_ALPHA. It is specified in a few cases including if the source has alpha but the destination is opaque, expecting that the alpha channel will be ignored.

Here's where JDK started using the flag
https://github.com/openjdk/jdk/commit/16acfafb#diff-eed6ddb15e9c5bdab9fc3b3930d5d959966165d9622ddd293e8572489adea98b

But in LCMS 2.16 there's a new check that this is running afoul of.
https://github.com/mm2/Little-CMS/commit/e55b6fa4d3c5b7e08d9e4bc8c803a79ca908b5a4#diff-3627f903d37617a227e71f0ee1[…]2629b5ee838707e3fcd7dcb46e762f4e

It seems to me that likely we could just drop specifying cmsFLAGS_COPY_ALPHA in this case, but I can't find any background information on the reason for the change in LCMS, or awareness that it might be a compatibility issue.

Thoughts / comments ?

-phil. |
From: Philip R. <phi...@or...> - 2024-01-03 20:00:29
|
JDK is using the flag cmsFLAGS_COPY_ALPHA. It is specified in a few cases including if the source has alpha but the destination is opaque, expecting that the alpha channel will be ignored. Here's where JDK started using the flag https://github.com/openjdk/jdk/commit/16acfafb#diff-eed6ddb15e9c5bdab9fc3b3930d5d959966165d9622ddd293e8572489adea98b But in LCMS 2.16 there's a new check that this is running afoul of. https://github.com/mm2/Little-CMS/commit/e55b6fa4d3c5b7e08d9e4bc8c803a79ca908b5a4#diff-3627f903d37617a227e71f0ee1[…]2629b5ee838707e3fcd7dcb46e762f4e It seems to me that likely we could just drop specifying cmsFLAGS_COPY_ALPHA in this case, but I can't find any background information on the reason for the change in LCMS, or awareness that it might be a compatibility issue. Thoughts / comments ? -phil. |
From: <mar...@li...> - 2023-12-29 14:41:43
|
Hi,

From your explanation, it seems to me that the transformation you want has nothing to do with color management. You know nothing about the L* of your grayscale and need only some scaling. I would rather use simple math in this case:

Gray_out = (Gray_in * 40) / 51 + 40;

You could just use a for() loop. This is going to be faster than using any color management routines.

Regards
Marti Maria
The LittleCMS Project
https://www.littlecms.com

> -----Original Message-----
> From: flap <fb...@oh...>
> Sent: Friday, December 29, 2023 12:06 PM
> To: lcm...@li...
> Subject: [Lcms-user] How does a transformation work
>
> Hi list,
>
> what I want: an 8 bit grey-scale output for my printer, where the input dot
> values (0 … 255) are scaled and limited to values of 40 … 240 (…just an
> example).
>
> So I created a simple test input data (256 elements of unsigned char) with a
> linear ramp from 0 (at index 0) … 255 (at index 255).
>
> And I created an array with 4096 short entries beginning with 40 (* 256 for
> unsigned short) and ramp up linear to 240 (* 256 for unsigned short) and feed
> it into cmsBuildTabulatedToneCurve16() and cmsCreateGrayProfile() with
> cmsD50_xyY() as its whitepoint (for the output profile).
>
> For the input profile I used cmsBuildGamma() with a value of 1.0 and
> cmsCreateGrayProfile() with cmsD50_xyY().
>
> After creating the transformation with cmsCreateTransform() with
> input/output types both TYPE_GRAY_8 and an intent of INTENT_PERCEPTUAL
> and applying this transformation with cmsDoTransformLineStride() to my
> simple input data, the output result is still almost linear from 0 to 255 and not
> limited to 40 … 240.
>
> So, I think I'm using the tone curves in a wrong way. But where is my mistake?
>
> Cheers,
> Jürgen
>
> _______________________________________________
> Lcms-user mailing list
> Lcm...@li...
> https://lists.sourceforge.net/lists/listinfo/lcms-user |
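A minimal sketch of that loop, assuming an 8-bit grayscale buffer buf of len pixels (the names are illustrative):

/* Pure integer scaling, no color management involved: maps 0..255 to 40..240. */
size_t i;
for (i = 0; i < len; i++)
    buf[i] = (unsigned char)((buf[i] * 40) / 51 + 40);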
From: <mar...@li...> - 2023-12-29 14:21:17
|
Ok,

The procedure to create a gray profile is simple. First measure the transfer function of your gray device to XYZ. That could be a gamma curve or anything. Then measure the white point of your gray space. The adaptation to D50 is already done by lcms. Then create a tone curve with your transfer function. Finally, by using the white point and the curve, create a profile.

Please note you cannot map contone values from input to output, since the output profile is unknown. So, you cannot map 0 to 40 and 255 to 240. You can only characterize which XYZ you get with gray = 0 and gray = 255.

Example:

cmsToneCurve* tone;
cmsUInt16Number curve[256];
int i;
cmsHPROFILE hProfile;

for (i = 0; i < 256; i++) {
    curve[i] = (cmsUInt16Number) round(65535.0 * pow(i / 255.0, 2.2));
}

tone = cmsBuildTabulatedToneCurve16(0, 256, curve);

hProfile = cmsCreateGrayProfile(cmsD50_xyY(), tone);
cmsFreeToneCurve(tone);

cmsSaveProfileToFile(hProfile, "gray_test.icc");
cmsCloseProfile(hProfile);

Regards
Marti Maria
The LittleCMS Project
https://www.littlecms.com

> -----Original Message-----
> From: flap <fb...@oh...>
> Sent: Friday, December 29, 2023 2:21 PM
> To: lcm...@li...
> Subject: Re: [Lcms-user] How does a transformation work
>
> On Friday, 29 December 2023 at 14:01:40 CET, mar...@li... wrote:
> > Hi,
> >
> > From your explanation, it seems to me that the transformation you want has
> > nothing to do with color management. You know nothing about the L* of your
> > grayscale and need only some scaling. I would rather use simple math in this case:
> >
> > Gray_out = (Gray_in * 40) / 51 + 40;
> >
> > You could just use a for() loop. This is going to be faster than using any
> > color management routines.
>
> Hmm. It was just a test to understand how it works (or may work).
> The profile I create is used in the muPDF library to create a greyscale output
> from a PDF document. I would expect that any kind of PDF could be rasterized
> into a greyscale raster this way.
>
> Cheers,
> Jürgen
>
> _______________________________________________
> Lcms-user mailing list
> Lcm...@li...
> https://lists.sourceforge.net/lists/listinfo/lcms-user |
From: flap <fb...@oh...> - 2023-12-29 13:21:30
|
On Friday, 29 December 2023 at 14:01:40 CET, mar...@li... wrote:
> Hi,
>
> From your explanation, it seems to me that the transformation you want has nothing to do with color management. You
> know nothing about the L* of your grayscale and need only some scaling. I would rather use simple math in this case:
>
> Gray_out = (Gray_in * 40) / 51 + 40;
>
> You could just use a for() loop. This is going to be faster than using any color management routines.

Hmm. It was just a test to understand how it works (or may work). The profile I create is used in the muPDF library to create a greyscale output from a PDF document. I would expect that any kind of PDF could be rasterized into a greyscale raster this way.

Cheers,
Jürgen |