Thread: Re: [Lcms-user] alpha blending in LAB space
From: Yaron T. <YaronT@HumanEyes.com> - 2006-12-05 12:17:31
Hi Kai,

First, thanks for the help. I'll try to explain better what I do.

I have several layers 1 to n, each with its own profile (let's call it profile-i). I'm blending them all together (similar to layers in Photoshop) on a single canvas which has its own profile as well (let's call it profile-c). I use profile-c as the output profile for images exported from my software (the output of the application). Now we also have the screen profile I'm showing the user the data on (let's call it profile-s).

What I first did was this: for display, I convert all layers to profile-s, so I have n conversions, profile-1->profile-s through profile-n->profile-s. Then I blend the layers in profile-s. For output I do the same, with the destination profile being profile-c. The problem here is that the blending output is different; that is, the result of the blending depends on the profile I'm blending in.

I figured out that the proper thing to do is to always convert the layers to profile-c, blend, and then, if I'm displaying, convert the result to profile-s for display.

I hope everything up to here is clear.

With this approach, blending can be done with OpenGL even if the format is 8-bit. The problem is that if profile-c is CMYK, I need to do two passes with OpenGL, since OpenGL doesn't support four color channels. So that's the problem.

I thought of blending in the Lab PCS to avoid blending in CMYK and doing two passes. However, my concern was exactly the one you pointed out: Lab is a wide space, and I might lose precision and get color stepping. So I guess my worries were justified. :-) and :-(

So my question is: is there another way to do what I want? I'm using OpenGL since I use 3D graphics to create the layers; they can be turned and manipulated in 3D space. OpenGL gives me a robust and easy-to-use rendering engine which also does depth testing in hardware.

I don't think OpenGL poses any performance problems or bottlenecks. As long as I really give the GPU RGB (or CMYK) data, it does everything fast and accurately. Perhaps giving it Lab data would be asking too much of low-precision GPUs, and that's exactly my question.

Yaron

> Yaron,
>
> just some questions back to you:
>
> Relying on OpenGL's output usually means being dependent on hardware.
> While the idea of letting the GPU perform the PCS alpha blending is
> very nice, are you sure reading back from a texture is not a
> performance bottleneck?
> Can all your supported platforms handle your maximum precision? Some
> graphics boards still support only 8 bits (possibly less?). This could
> be too narrow for Lab rendering, as Lab is pretty large; colour
> stepping could become visible in this case.
>
> Converting to your canvas profile, whatever that means, at the last
> stage should be OK, as in your concept all rendering is done before
> that. So what's the problem here?
>
> regards
> Kai-Uwe Behrmann
> + development for color management
> + imaging / panoramas
> + email: ku...@gm...
> + http://www.behrmann.name
>
> On 05.12.06, 13:31 +0200, Yaron Tadmor wrote:
>
> > Hi,
> >
> > I'm pretty new to color management and color profiles.
> > I'm writing an application which allows compositing several images
> > on a single canvas, and each image can have its own color profile.
> > The global canvas also has a color profile (the color profile the
> > final canvas image will be exported in).
> >
> > When blending the images I need to blend them in a single profile;
> > the most appropriate is the canvas profile. Since I'm using OpenGL
> > to render the layers, there's a problem when this profile is a CMYK
> > profile: OpenGL doesn't support CMYKA (or CMYK, for that matter), so
> > I need to split my rendering into two parts (CMY and K), which slows
> > down the GUI.
> >
> > I was wondering if I could somehow use a PCS (say Lab) to perform my
> > blending. That way I could convert all input images to the PCS and
> > load them into OpenGL as RGBA textures. Then I could render with
> > OpenGL, letting it do the alpha blending in this PCS, and afterwards
> > read back the rendered image and convert it to the canvas profile.
> > Could this work?
> >
> > My main concern is that if the canvas profile is a narrow-gamut
> > profile, I might lose precision, since the PCS image data will need
> > to be converted to unsigned chars in order to give it to OpenGL.
> >
> > Am I making sense, or am I missing something?
> >
> > Thanks
> > Yaron Tadmor
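[Editor's note: the observation that "the result of the blending depends on the profile I'm blending in" is easy to reproduce numerically. A minimal Python sketch, not from the thread, using the standard sRGB transfer curve as a stand-in for an arbitrary device encoding: a 50/50 alpha blend of black and white lands on different values depending on whether the arithmetic runs on encoded or linearized channel values.]

```python
# Sketch: the same 50/50 blend performed on gamma-encoded sRGB values
# vs. linear-light values gives visibly different results -- which is
# why the blend must happen in one agreed-upon space (the canvas
# profile) before converting for display.

def srgb_to_linear(c):
    """sRGB electro-optical transfer function (c in 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Inverse sRGB transfer function (c in 0..1)."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

a, b = 0.0, 1.0  # black and white, one channel
naive = 0.5 * a + 0.5 * b  # blend directly on encoded values
linear = linear_to_srgb(0.5 * srgb_to_linear(a) + 0.5 * srgb_to_linear(b))

print(round(naive, 3))   # 0.5
print(round(linear, 3))  # 0.735 -- noticeably lighter
```

The same divergence occurs between any two non-identical device spaces, which is what was observed when blending in profile-s vs. profile-c.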
From: Yaron T. <YaronT@HumanEyes.com> - 2006-12-05 14:45:26
If I understand the spec correctly, it says (pretty much as I thought) that all objects should be converted to the output device's color space and composited there (unless I misunderstood).

In my application, even if I use only one input profile, if it's CMYK I still have a problem, since I'd need the two rendering passes. Actually, the cause of the problem is that the canvas/document can have a CMYK space (which is equivalent to rendering the PDF to a CMYK output device).

Is it possible to convert CMYK to CMY and back without losing accuracy or gamut? If so, I could use CMY to render in OpenGL, and then convert back to CMYK.

> -----Original Message-----
> From: Kai-Uwe Behrmann [mailto:ku...@gm...]
> Sent: Tuesday, December 05, 2006 2:48 PM
> To: Yaron Tadmor
> Cc: Lcms Liste
> Subject: Re: [Lcms-user] alpha blending in LAB space
>
> Possibly reducing the number of blending colour spaces is a solution.
> Blending of mixed colour space layers is not a consistent thing anyway.
>
> You could read the PDF spec for how it was solved there: chapter 7.2.3
> in the PDF Reference 1.6, for instance.
>
> regards
> Kai-Uwe Behrmann
> + development for color management
> + imaging / panoramas
> + email: ku...@gm...
> + http://www.behrmann.name
>
> On 05.12.06, 14:18 +0200, Yaron Tadmor wrote:
>
> > Hi Kai,
> >
> > First, thanks for the help. I'll try to explain better what I do.
> >
> > I have several layers 1 to n, each with its own profile (let's call
> > it profile-i). I'm blending them all together (similar to layers in
> > Photoshop) on a single canvas which has its own profile as well
> > (let's call it profile-c). I use profile-c as the output profile for
> > images exported from my software (the output of the application).
> > Now we also have the screen profile I'm showing the user the data on
> > (let's call it profile-s).
> >
> > What I first did was this: for display, I convert all layers to
> > profile-s, so I have n conversions, profile-1->profile-s through
> > profile-n->profile-s. Then I blend the layers in profile-s. For
> > output I do the same, with the destination profile being profile-c.
> > The problem here is that the blending output is different; that is,
> > the result of the blending depends on the profile I'm blending in.
> >
> > I figured out that the proper thing to do is to always convert the
> > layers to profile-c, blend, and then, if I'm displaying, convert the
> > result to profile-s for display.
> >
> > I hope everything up to here is clear.
> >
> > With this approach, blending can be done with OpenGL even if the
> > format is 8-bit. The problem is that if profile-c is CMYK, I need to
> > do two passes with OpenGL, since OpenGL doesn't support four color
> > channels. So that's the problem.
> >
> > I thought of blending in the Lab PCS to avoid blending in CMYK and
> > doing two passes. However, my concern was exactly the one you
> > pointed out: Lab is a wide space, and I might lose precision and get
> > color stepping. So I guess my worries were justified. :-) and :-(
> >
> > So my question is: is there another way to do what I want? I'm using
> > OpenGL since I use 3D graphics to create the layers; they can be
> > turned and manipulated in 3D space. OpenGL gives me a robust and
> > easy-to-use rendering engine which also does depth testing in
> > hardware.
> >
> > I don't think OpenGL poses any performance problems or bottlenecks.
> > As long as I really give the GPU RGB (or CMYK) data, it does
> > everything fast and accurately. Perhaps giving it Lab data would be
> > asking too much of low-precision GPUs, and that's exactly my
> > question.
> >
> > Yaron
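[Editor's note: on the "convert CMYK to CMY and back" question, even the naive non-ICC channel formulas show why the round trip cannot preserve the black separation. The formulas below are illustrative only (a real profile's black generation is baked into its LUTs and is not invertible like this): two different CMYK ink combinations can land on the same CMY triple, and the round trip then cannot tell them apart.]

```python
# Sketch (naive formulas, no ICC profile involved): folding K into the
# chromatic channels merges distinct ink separations, so the K choice
# (the profile's GCR/UCR strategy) is lost on the way back.

def cmyk_to_cmy(c, m, y, k):
    # fold black into each channel, clipping at full coverage
    return tuple(min(1.0, x + k) for x in (c, m, y))

def cmy_to_cmyk(c, m, y):
    # maximum black generation: pull the common component into K
    k = min(c, m, y)
    if k >= 1.0:
        return (0.0, 0.0, 0.0, 1.0)
    return tuple((x - k) / (1 - k) for x in (c, m, y)) + (k,)

rich = (0.2, 0.2, 0.2, 0.0)   # composite grey, no black ink
plain = (0.0, 0.0, 0.0, 0.2)  # black-ink-only grey

print(cmyk_to_cmy(*rich))   # (0.2, 0.2, 0.2)
print(cmyk_to_cmy(*plain))  # (0.2, 0.2, 0.2) -- the same CMY triple
print(cmy_to_cmyk(0.2, 0.2, 0.2))  # only one separation comes back
```

So the K channel is redundant for *color* but not for *ink*: the round trip can preserve appearance, not the separation.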
From: Yaron T. <YaronT@HumanEyes.com> - 2006-12-06 10:37:32
Hi,

I'm trying to sum up the conversation to see where to go from here.

What I theoretically want to do is this: let the user of my application select a document color profile, and allow him to import layers in various profiles. I then need to blend them all, display the result on screen, and output it to a file. The options are to blend in the document profile as selected by the user, or to blend in some pre-selected profile/color space which will provide the best results.

What I now know about blending:

1) Blending is highly dependent on the color space/profile. Each space/profile has its own "blending quality", and blending in different profiles will look different.

2) I quote:

> CMYK will give you odd results with some combinations, because it has
> a redundant dimension, meaning that for any particular color there
> are many possible CMYK combinations that could encode that color,
> each one having a different K value. You could, in CMYK space, blend
> two CMYK values that represent the same color, and end up with a
> different color.

This actually means that CMYK should not be used as a blending space, since its "blending quality" is somewhat unpredictable. Would CMY be an appropriate alternative?

3) Blending in a PCS is better done at a higher bit depth, to avoid color banding when the device profile's gamut is very small.

So that leaves me with the following options (bear in mind that I'm using OpenGL):

1) Blend in the user-selected profile. For CMYK spaces, convert to CMY, render and blend, then convert back to CMYK. Would this work? I assume that since the K channel is actually redundant, this can be done easily.

2) Blend in the user-selected profile and take the performance hit for CMYK spaces, rendering twice (a CMY part and a K part).

3) Choose some other space to blend in. This space would have to be a wide-gamut space, which might cause loss of precision for very-low-gamut spaces, causing color banding effects.

This is the best way I can sum it up. If you think I left something out, or have another idea, please let me know.

Thanks for all the help
Yaron Tadmor
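[Editor's note: point 3, the banding risk when a small-gamut image is pushed through a wide 8-bit space, can be quantified. A rough Python sketch, assuming the ICC-style 8-bit Lab encoding where L* 0..100 maps onto code values 0..255: a smooth lightness ramp spanning only 10 L* units collapses onto a couple of dozen codes, while 16 bits keeps every step distinct.]

```python
# Sketch: quantize a smooth L* ramp at two bit depths and count how
# many distinct code values survive. Few codes => visible stepping.

def quantize(L, bits):
    """Map L* in 0..100 onto an integer code of the given bit depth."""
    levels = (1 << bits) - 1
    return round(L / 100 * levels)

# a 1000-step ramp covering only L* 40..50 -- the kind of narrow range
# a small-gamut device might actually occupy inside the wide Lab space
ramp = [40 + 10 * i / 999 for i in range(1000)]

codes8 = {quantize(L, 8) for L in ramp}
codes16 = {quantize(L, 16) for L in ramp}
print(len(codes8))   # 27 distinct codes -> visible banding
print(len(codes16))  # 1000 -> every ramp value still distinct
```

This is exactly the precision concern with 8-bit Lab textures on the GPU; at 16 bits/channel the PCS detour would be safe, but 2006-era OpenGL blending is typically 8-bit.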
From: Kai-Uwe B. <ku...@gm...> - 2006-12-05 12:45:27
Possibly reducing the number of blending colour spaces is a solution. Blending of mixed colour space layers is not a consistent thing anyway.

You could read the PDF spec for how it was solved there: chapter 7.2.3 in the PDF Reference 1.6, for instance.

regards
Kai-Uwe Behrmann
+ development for color management
+ imaging / panoramas
+ email: ku...@gm...
+ http://www.behrmann.name

On 05.12.06, 14:18 +0200, Yaron Tadmor wrote:

> Hi Kai,
>
> First, thanks for the help. I'll try to explain better what I do.
>
> I have several layers 1 to n, each with its own profile (let's call
> it profile-i). I'm blending them all together (similar to layers in
> Photoshop) on a single canvas which has its own profile as well
> (let's call it profile-c). I use profile-c as the output profile for
> images exported from my software (the output of the application). Now
> we also have the screen profile I'm showing the user the data on
> (let's call it profile-s).
>
> What I first did was this: for display, I convert all layers to
> profile-s, so I have n conversions, profile-1->profile-s through
> profile-n->profile-s. Then I blend the layers in profile-s. For
> output I do the same, with the destination profile being profile-c.
> The problem here is that the blending output is different; that is,
> the result of the blending depends on the profile I'm blending in.
>
> I figured out that the proper thing to do is to always convert the
> layers to profile-c, blend, and then, if I'm displaying, convert the
> result to profile-s for display.
>
> I hope everything up to here is clear.
>
> With this approach, blending can be done with OpenGL even if the
> format is 8-bit. The problem is that if profile-c is CMYK, I need to
> do two passes with OpenGL, since OpenGL doesn't support four color
> channels. So that's the problem.
>
> I thought of blending in the Lab PCS to avoid blending in CMYK and
> doing two passes. However, my concern was exactly the one you pointed
> out: Lab is a wide space, and I might lose precision and get color
> stepping. So I guess my worries were justified. :-) and :-(
>
> So my question is: is there another way to do what I want? I'm using
> OpenGL since I use 3D graphics to create the layers; they can be
> turned and manipulated in 3D space. OpenGL gives me a robust and
> easy-to-use rendering engine which also does depth testing in
> hardware.
>
> I don't think OpenGL poses any performance problems or bottlenecks.
> As long as I really give the GPU RGB (or CMYK) data, it does
> everything fast and accurately. Perhaps giving it Lab data would be
> asking too much of low-precision GPUs, and that's exactly my
> question.
>
> Yaron
From: <nos...@gm...> - 2006-12-05 13:46:35
-------- Original Message --------
Date: Tue, 5 Dec 2006 14:18:40 +0200
From: "Yaron Tadmor" <YaronT@HumanEyes.com>
To: "Lcms Liste" <lcm...@li...>
Subject: Re: [Lcms-user] alpha blending in LAB space

> I'll try to explain better what I do.
> I have several layers 1 to n, each with its own profile (let's call
> it profile-i). I'm blending them all together (similar to layers in
> Photoshop) on a single canvas which has its own profile as well
> (let's call it profile-c). I use profile-c as the output profile for
> images exported from my software (the output of the application).

The question is: what result do you expect from a "blending" operation?

If you want to model/emulate, for instance, the behaviour of optical blending via semitransparent mirrors, or the effect of projecting two images onto the same screen, then you basically need to carry out the blending operation in a linear intensity space (e.g. XYZ, or linear (gamma = 1) RGB), since the modeled physical process involves additive color mixing.

But if you have different objectives, you may want to do the operation in a different color space. For instance, if you want a 50:50 blend of two colors to lie _perceptually_ midway between them, then blending in the CIELAB PCS may be more appropriate (or, even better, in a CIECAMxx color appearance space, which has even better perceptual uniformity).

Device color spaces (for real devices) are possibly not so suitable for this operation in general, as they are not necessarily well-behaved and may have strange non-linearities (though of course it depends on the device).

You should probably compare blending in different color spaces, and finally choose the color space whose result you subjectively like best. (Btw, for XYZ or linear RGB, the use of 16 bits/channel is strongly recommended.)

> Now we also have the screen profile I'm showing the user the data on
> (let's call it profile-s).
>
> What I first did was this: for display, I convert all layers to
> profile-s, so I have n conversions, profile-1->profile-s through
> profile-n->profile-s. Then I blend the layers in profile-s. For
> output I do the same, with the destination profile being profile-c.
> The problem here is that the blending output is different; that is,
> the result of the blending depends on the profile I'm blending in.
>
> I figured out that the proper thing to do is to always convert the
> layers to profile-c, blend, and then, if I'm displaying, convert the
> result to profile-s for display.

If the display is intended as a soft proof of what you will get on the canvas, then yes: you should first build the final canvas image and then convert it to the display image.

Regards,
Gerhard
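[Editor's note: the distinction between physical (linear-light) and perceptual (CIELAB) blending can be seen with a single number. A small Python sketch using the standard CIE 1976 L* formula: the 50/50 additive mix of black and white has relative luminance Y = 0.5, which is far from the perceptual midpoint L* = 50.]

```python
# Sketch: lightness of a linear-light 50/50 mix of black and white,
# computed with the standard CIE 1976 L* formula (white point Y = 1).

def Y_to_Lstar(Y):
    """CIE 1976 lightness L* from relative luminance Y in 0..1."""
    if Y > (6 / 29) ** 3:
        f = Y ** (1 / 3)
    else:
        f = Y * (29 / 6) ** 2 / 3 + 4 / 29
    return 116 * f - 16

linear_mix = Y_to_Lstar(0.5)       # additive 50/50 mix of black and white
print(round(linear_mix, 1))        # 76.1 -- much lighter than mid-grey
print(round(Y_to_Lstar(0.184), 1)) # ~50: luminance a perceptual midpoint needs
```

Which answer is "right" depends on whether you are modelling light (projection, mirrors) or appearance (perceptual midpoints), which is exactly the choice of blending space discussed above.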