Thread: [Lcms-user] What does cmsFLAGS_NOOPTIMIZE actually do?
An ICC-based CMM for color management
Brought to you by:
mm2
From: Boudewijn R. <bo...@va...> - 2012-07-27 11:15:05
Attachments:
scRGB.icm
|
Hi!

I've had a long discussion with Elle Stone, who did a huge comparison of the results image applications give when converting between 16-bit linear RGB and sRGB. It turned out that Krita gave different results from tificc, so I dug in a bit and found that the reason was that I didn't use the cmsFLAGS_NOOPTIMIZE flag when creating transforms, while tificc does. Code like this:

    quint16 src[4];
    src[0] = 257;
    src[1] = 257;
    src[2] = 257;
    src[3] = 65535;

    quint16 dst[4];

    cmsHPROFILE sRgbProfile = cmsCreate_sRGBProfile();
    // from krita... it's a linear scRGB profile, as attached.
    QByteArray rawData = linearRgb->profile()->rawData();
    cmsHPROFILE linearRgbProfile =
        cmsOpenProfileFromMem((void*)rawData.constData(), rawData.size());

    cmsHTRANSFORM tf = cmsCreateTransform(linearRgbProfile,
                                          TYPE_BGRA_16,
                                          sRgbProfile,
                                          TYPE_BGRA_16,
                                          INTENT_RELATIVE_COLORIMETRIC,
                                          cmsFLAGS_NOOPTIMIZE | cmsFLAGS_BLACKPOINTCOMPENSATION);

    cmsDoTransform(tf, (quint8*)&src, (quint8*)&dst, 1);

    qDebug() << dst[0] << dst[1] << dst[2];

returns 3266,3266,3266, while

    cmsHTRANSFORM tf = cmsCreateTransform(linearRgbProfile,
                                          TYPE_BGRA_16,
                                          sRgbProfile,
                                          TYPE_BGRA_16,
                                          INTENT_RELATIVE_COLORIMETRIC,
                                          cmsFLAGS_BLACKPOINTCOMPENSATION);

returns 1595,1595,1595.

This surprises me, since I would have thought optimization wouldn't have any influence on the actual results obtained. Is this a bug, or should I always use cmsFLAGS_NOOPTIMIZE to get accurate results?

-- 
Boudewijn Rempt
http://www.valdyas.org, http://www.krita.org, http://www.boudewijnrempt.nl |
From: Elle S. <l.e...@gm...> - 2012-07-27 11:20:17
|
The comparisons are at:

http://ninedegreesbelow.com/temp/icc-profile-conversion-bugs.html
http://ninedegreesbelow.com/temp/profile-conversion-errors.html

The same problem also happens when converting from linear ProPhoto to regular ProPhoto.

-- 
http://ninedegreesbelow.com
Articles and tutorials on open source digital imaging and photography |
From: Elle S. <l.e...@gm...> - 2012-07-27 19:17:08
|
I noticed a problem when trying to view regular sRGB and linear gamma sRGB versions of the same image using Krita 2.4: the linear gamma image was noticeably darker in the shadows. So I did some tests, including converting the linear and regular sRGB images to my monitor profile. The problem turned out to be in the conversion from the image ICC profile to the monitor ICC profile. When converting from the linear gamma image to the monitor profile, the darkest shadows were eye-droppering at about half the value that they should have had.

So I created a very simple test image composed of ten blocks: (0,0,0), (1,1,1), (2,2,2), (4,4,4), (8,8,8) and so on, up to (128,128,128) and (255,255,255). Then I converted the test image to 16 bits, the corresponding RGB values being 257 times the 8-bit values. Upon using Krita 2.4 to convert the 16-bit linear gamma image to the monitor profile, or to regular sRGB, the second-darkest color block ended up with RGB values half of what they should have been.

I tried the same test using Cinepaint, cctiff, tificc, showFoto, ImageMagick, GraphicsMagick, and Gimp (at 8 bits only), as well as Krita. I also tried using linear and gamma 1.8 versions of ProPhoto. cctiff and tificc (using -c 0, which I habitually use and did not think twice about as perhaps being relevant) produced the same values. Cinepaint produced nearly the same values as cctiff and tificc. ALL the other image editing programs cut the darkest shadow values in half. This "cutting in half" of the darkest shadow values is visible and obvious in any image with substantial areas of important shadow detail.

I had Cinepaint set, in the color management options, to use "Don't Precalculate" rather than one of the other Cinepaint options (Low Resolution, High Resolution, CMM default). I wish I had realized that particular setting might make a difference, because it would have saved a lot of time and tedious testing.

I don't know of any image editing program besides Cinepaint that offers the user the choice of Low Res, High Res, CMM default, or "Don't Precalculate". I would guess that most or all use something like "CMM default", because I just checked, and Cinepaint, when set to use "CMM default" and "use black point compensation", produces the same halving of the shadow values as all the other image editors.

At any rate, at this point every image editor that I tested, other than Cinepaint and the latest Krita 2.6 alpha, produces visibly damaged shadow areas if there is a linear gamma profile involved in an ICC profile conversion.

-- 
http://ninedegreesbelow.com
Articles and tutorials on open source digital imaging and photography |
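[Editor's note: a small sketch of the test ramp Elle describes, to make the 8-bit to 16-bit scaling explicit. The three doubling steps elided by her "and so on" (16, 32, 64) are inferred from the stated count of ten blocks.]

```c
#include <assert.h>

/* The ten 8-bit gray levels of the test blocks described above. */
static const int kBlocks[10] = {0, 1, 2, 4, 8, 16, 32, 64, 128, 255};

/* 8-bit -> 16-bit conversion: multiplying by 257 (= 65535/255) is an
   exact scaling, mapping 0 -> 0 and 255 -> 65535. The second-darkest
   block therefore becomes 257, the value used in Boudewijn's repro. */
static int scale8to16(int v8) {
    return v8 * 257;
}
```

So when the converted image shows that block at roughly half of 3266, the error is in the profile conversion, not in the bit-depth scaling.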
From: Marti M. <mar...@li...> - 2012-07-28 14:50:47
|
Hi Elle,

I've seen your comparison and it is very good; I guess you have spent a lot of time creating it. Cool. Most of the questions you raise are answered in my previous mail. See here some additional comments.

The issue I described is not tied to lcms; you can reproduce it in Argyll as well. Let's build a devicelink using Argyll collink. In collink, the default is to keep curves, the opposite of lcms linkicc, which requires the -l toggle. So I am using -n on collink to make it behave like the linkicc default:

    collink -n "SRGB_linear.icc" "sRGB Color Space Profile.icm" devlink_sRGB_argyll.icc

Then I'm doing the same with lcms:

    linkicc -r3.4 -o devlink_sRGB_lcms2.icc SRGB_linear.icc "sRGB Color Space Profile.icm"

Now running cctiff on the linear tiff with both devicelinks:

    cctiff devlink_sRGB_lcms2.icc linear16.tif out_lcms.tif
    cctiff devlink_sRGB_argyll.icc linear16.tif out_argyll.tif

I am obtaining 5 for Argyll and 6 for lcms on the second patch, where it should be 13. Again, this is not a bug of Argyll, nor of lcms. They are just doing what I have asked for. If I let both linkers use pre/post linearization curves, all works fine. It is just that you are pushing to the limits, and in some situations the defaults do not apply.

Finally, a short comment on absolute colorimetric: the ICC folks have changed the meaning of absolute colorimetric, so it is no wonder you find differences. In lcms2 you have, however, a function to set the observer adaptation state. Setting the state to 0 (fully unadapted) emulates the old behavior. Setting it to 1 (full adaptation) enables the V4 behavior.

Best regards,
Marti |
From: Marti M. <mar...@li...> - 2012-07-28 14:09:19
|
Hi,

This question has been raised a number of times across the years. The first time I faced it was with the JDK incorporating lcms1. They had a test profile that was somehow giving bad results when optimized. After close inspection, the profile was found to be operating in linear XYZ space. The complaint was almost the same: when I use a linear space as input, optimization doesn't work; otherwise, all other combinations are ok. And this seems to be the actual case here as well: all other combinations are working fine. Try, for example, to reverse the transform, going from regular sRGB to your linear space; you will find all the issues are gone.

But anyway, there is this case that seems to fail. And that's true: in this particular case, some dark shadows get a dE > 1.5 when using the default settings. Fine; it happens that for this extreme case the defaults don't work. This is the reason why it is called a "default" and there is a setting to control it.

So, the short answer is: don't optimize when using a linear XYZ space as input in 16-bit transforms.

But I guess you want the long answer as well. So, here you go.

When you use lcms to create a color transform joining two or more profiles, you are creating a devicelink profile. You don't see it as a file; it lives in memory and is destroyed when you delete the transform. But it is there.

Devicelinks can be implemented in different ways: for example, as a set of curves, or by a matrix, or by a 3D CLUT table, or by a combination of all the elements above. Some of those ways are better than others in terms of throughput; others are better in terms of image quality. CMMs have to "guess" which is the best combination of elements for a given set of profiles. There is a balance between quality and performance. For some corner cases, optimizing for speed can effectively introduce defects in quality.

The best devicelink representation often depends on the true nature of the space described by the profiles, especially the input space. But then... the profile only gives you a way to convert from/to Lab to its space, and gives no other clue about the nature of the space.

An example of ill-formed spaces are those that operate in linear (gamma 1.0) XYZ space. You should NEVER use linear gamma to store your 8-bit images. Why? Because in 8 bits you have 256 levels, and in linear gamma the separation between those levels is not perceptually uniform. That means you have very few levels to encode the effective dynamic range of your image, and many levels are wasted in highlights. Hold on, you would say, RAW images are encoded in linear gamma and they work quite well, don't they? You are right... but I said 8 bits, remember? If you move to 16 bits or floating point, you can still use linear encoding, but with some care.

Back to our methods to encode devicelinks. One used by lcms, when the transform converts from 16 bits to 16 bits, is a CLUT table. This is just a 3D (or 4D in CMYK) grid with nodes. Pixel values are interpolated across nodes. For example, the distortion you get when going from sRGB to AdobeRGB is stored in a 3D grid of 17 nodes on each R, G, B side. When a pixel arrives, the nodes that enclose the value are selected and the result is interpolated. In our 17-node example, a value of, say, (100, 100, 100) will land on 100*(17-1)/255 = 6.3, so nodes 6 and 7 of each side will be taken for interpolation.

Let's now take a linear space. Since, as said, many colors are collapsed to relatively few codes by the gamma encoding, almost all the dynamic range is confined to a few nodes. That means that in a 17-node grid, most of the image's dynamic range will fall in 5 or 6 nodes. And this is the reason you get posterization in shadows: most of the dark tones fall in just 1-2 nodes, and linear interpolation cannot deal with the non-linear nature of the linear-to-gamma-2.2 transform.

How to solve this? The most evident way is to not use 3D CLUT optimization.
The CMM already does that if you use floating point, or if you use 8 bits. In lcms 2.03 there are experimental flags that try to solve this issue by adding extra tone curves: cmsFLAGS_CLUT_PRE_LINEARIZATION and cmsFLAGS_CLUT_POST_LINEARIZATION. I have checked that and found it solves this issue as well.

So, that is the reason why you only see this issue when converting from 16 bits to 16 bits with default flags. Placing NOOPTIMIZE on all transforms would prevent problems, but at a big performance penalty that is hard to justify just to fix this specific case. It is like having a Ferrari but always driving at 25 mph just because once upon a time you faced a winding road.

My recommendation for programmers would be to allow the end user to turn optimization off for general usage, or at least to provide a specialized workflow for RAW handling with optimizations turned off; that is the only place where linear XYZ makes sense. For users, I would recommend to NEVER use linear XYZ spaces. They are good for nothing: not for storage, not for image processing. The very few algorithms that need to work in linear light can do and undo the conversion when processing. But anyway, there are people with strong opinions in this field, and everybody is free to do whatever they want. This is just a recommendation; please don't take it as a stone-engraved truth.

Hope that helps,

Best,
Marti Maria |
From: Boudewijn R. <bo...@va...> - 2012-07-29 06:22:13
|
On Saturday 28 July 2012, Marti Maria wrote:

> So, the short answer is: don't optimize when using a linear XYZ space
> as input in 16 bits transforms.

Hm... I hadn't realized that, because it was just a 16-bit rgb->rgb transform. In any case, maybe this is something to add to the documentation? And as a comment in tificc, since tificc adds the no-optimize flag by default, so I was wondering why, and whether that was the right thing to do.

Is there a good way to check whether a profile defines a linear RGB colorspace? I guess not... Or should I just give the user an option? The reason we allow people to work in 16-bit linear RGB in Krita is that it works better for painting, when blending strokes together.

> But I guess you want also the long answer. So, here you go.

Thanks! I'm not sure I really follow all of it, though. I am still quite an amateur :-)

-- 
Boudewijn Rempt
http://www.valdyas.org, http://www.krita.org, http://www.boudewijnrempt.nl |
From: Elle S. <l.e...@gm...> - 2012-07-30 11:09:21
|
On 7/28/12, Marti Maria <mar...@li...> wrote:

> An example of ill-formed spaces are those that are operating in linear
> XYZ gamma space. . . . For users, I would recommend to NEVER use
> linear XYZ spaces. They are good for nothing, nor for storage, nor for
> image processing.

Is the only source of the ill-formedness of linear gamma XYZ profiles the fact that at 8 bits there simply aren't enough levels to store or process the shadow information when using a linear gamma profile? But what if you store and process at 16 bits or higher?

> In a 17 nodes grid, most image dynamic range will fall in 5 or 6
> nodes. And this is the reason you got posterization in shadows: most
> of dark tones falls in just 1-2 nodes and linear interpolation cannot
> deal with the non-linear nature of the transform linear-gamma 2.2.
>
> Placing a NOOPTIMIZE in all transforms would prevent problems, but at
> big performance penalty that is hard to explain just to fix this
> specific case. It is like you have a Ferrari but you go always at
> 25Mph just because once upon a time you faced a winding road.

"25 Mph in a Ferrari" sounds like the editing application would just be crawling. How big of a performance hit is there when doing 16-bit to 16-bit color conversion without optimizing? In other words, how is it manifested (disk IO? CPU usage? really slow drawing to the screen upon making edits to the image? etc.) and what triggers the performance hit in such a way as to be apparent to the end user (the display to screen? doing a whole lot of conversions all in a row at the command line? having many layers in the image? etc.)? |
From: Marti M. <mar...@li...> - 2012-07-30 14:05:28
|
Hi Elle,

> Is the only source of the ill-formedness of linear gamma XYZ profiles
> the fact that at 8-bits there simply aren't enough levels to store or
> process the shadow information when using a linear gamma profile?

The issue is the lack of perceptual uniformity. XYZ is not the way WE see images. And this is not limited to 8 bits: you would have to store the image at 12 to 14 bits in a gamma 1.0 space to match the quality you get from storing 8 bits in a gamma 2.2 space. See here some links:

http://chriscox.org/gamma/
http://www.poynton.com/PDFs/GammaFAQ.pdf

> But what if you store and process at 16-bits or higher?

As said, with 16 bits you could deal with linear XYZ, but for storage I honestly believe it is just useless. For processing, there are some operations that should be done in linear light; you can convert to linear, do the operation, and go back to perceptual. But anyway, I am not trying to convince you or anybody to use or avoid linear XYZ encoding for image processing. I just note that this is an extreme case, and for this case the lcms defaults do not apply. The CMM *can* deal with it, but you need to provide flags other than the defaults.

> "25Mph in a Ferrari" sounds like the editing application would just be
> crawling. How big of a performance hit is there when doing 16-bit to
> 16-bit color conversion without optimizing?

On my rather old 2-core laptop, from 17 Mpixels/sec. down to 1.2 Mpixels/sec. on CLUT profiles, and to 2 Mpixels/sec. on matrix-curve based ones. Based on those numbers, our Ferrari would go from 25 Mph to 212.5 Mph when optimization is set. It is just a matter of lots of multiplications and additions. And yes, it could be coded in SSE, but optimization does mostly the same and it is portable. Portability is right now a key feature of the library.

Another option is to allow pre/post curves. Using cmsFLAGS_CLUT_POST_LINEARIZATION | cmsFLAGS_CLUT_PRE_LINEARIZATION, I got 9.5 Mpixels/sec., which is not full throttle, but still reasonable.

Regards,
Marti |
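[Editor's note: the practical upshot of Marti's numbers is a three-way speed/quality knob an application could expose, as he recommended earlier in the thread. A sketch of such a mapping; the flag names mirror real lcms2 flags, but the numeric values below are local stand-ins, not copied from lcms2.h. In real code, include lcms2.h and pass its definitions to cmsCreateTransform.]

```c
/* Local stand-ins for the lcms2 flags discussed in the thread.
   Real code should use the definitions from lcms2.h instead. */
enum {
    FLAG_NOOPTIMIZE              = 1 << 0,
    FLAG_CLUT_PRE_LINEARIZATION  = 1 << 1,
    FLAG_CLUT_POST_LINEARIZATION = 1 << 2,
    FLAG_BLACKPOINTCOMPENSATION  = 1 << 3
};

typedef enum { SPEED_FIRST, BALANCED, QUALITY_FIRST } Tradeoff;

/* Map a user-visible setting to transform-creation flags, following
   Marti's measurements: full optimization ~17 Mpix/s, pre/post
   linearization ~9.5 Mpix/s, no optimization ~1.2 Mpix/s. */
static unsigned transform_flags(Tradeoff t, int use_bpc) {
    unsigned flags = use_bpc ? FLAG_BLACKPOINTCOMPENSATION : 0u;
    switch (t) {
    case SPEED_FIRST:                    /* lcms defaults: CLUT optimization */
        break;
    case BALANCED:                       /* keep the CLUT, add shaper curves */
        flags |= FLAG_CLUT_PRE_LINEARIZATION | FLAG_CLUT_POST_LINEARIZATION;
        break;
    case QUALITY_FIRST:                  /* exact, but much slower */
        flags |= FLAG_NOOPTIMIZE;
        break;
    }
    return flags;
}
```

An application could pick QUALITY_FIRST (or BALANCED) automatically whenever the input profile is suspected to be linear, and SPEED_FIRST otherwise.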
From: Marti M. <mar...@li...> - 2012-07-29 11:17:40
|
Hi,

> Hm... I hadn't realized that, because it was just a 16-bit rgb->rgb
> transform. In any case, maybe this is something to add to the
> documentation?

Ok, I'm adding a paragraph about this to the tutorial.

> And as a comment in tificc, since tificc adds the no-optimize flag by
> default, so I was wondering why and whether that was the right thing
> to do.

It doesn't by default. You have to add -c 0 to get the no-optimize behavior.

> Is there a good way to check whether a profile defines a linear RGB
> colorspace? I guess not...

You are right. It is certainly possible to have a complex profile that behaves linearly in most of its gamut but is curved in the dark shadows. The only way would be to examine all 65535x65535x65535 combinations.

> Or should I just give the user an option?

I think this is by far the easiest way.

Regards,
Marti |
From: Boudewijn R. <bo...@va...> - 2012-07-29 12:36:51
|
Thanks! I've done that just now. I'm also looking intensely forward to lcms 2.4 :-)

Thanks for all the wonderful work and explanations.

On Sun, 29 Jul 2012, Marti Maria wrote:

> > Or should I just give the user an option?
>
> I think this is by far the easier way.
|