Note: Gerhard & Hal, I've decided to move this discussion out of the previous thread to a new thread, to separate our discussion on various issues.
I'd like to continue our investigation into shadow clipping evident upon application of LPROF generated profiles. This has been a severe problem with scans made on any Nikon LS-series scanner, and less of a problem, but still present, in scans made with Imacon Flextights.
Recently I purchased a Hutch Color Target, hoping that its extended sampling of shadow colors may lead to a profile that better deals with dense slides.
I also took the advice of a helpful fellow named Pat over at Chromix, Inc. here in Seattle, WA. He mentioned that certain profiling packages are aware of the fact that certain tones may scan with higher or lower RGB values than encountered in a target scan, and so these profiling packages compensate for that in the profiles they generate (not sure technically how).
Hence I decided to give both LPROF and basICColor Input profiling packages a go at profiling Fujichrome film using a Hutch Color Target (HCT) and an IT8 target (RVP 50 from Wolf Faust).
In BOTH cases (IT8 & HCT), LPROF generated profiles that led to severe clipping in shadows.
In BOTH cases (IT8 & HCT), basICColor Input generated profiles that led to absolutely no clipped pixels.
Here's the HCT with LPROF-generated profile applied (then converted to Adobe RGB/8 bit/JPEG):
Here's the HCT with basICColor-generated profile applied (then converted to Adobe RGB/8 bit/JPEG):
The LPROF profile clips many dark shadows to pure black. I can post the original scans and the profiles themselves also, so you can closely examine the clipping problem. I just converted to Adobe RGB/8 bit JPEGs for quick web viewing for the purposes of this discussion.
Now, we had discussed optical flare before. Yes, there is optical flare with the Nikon scanner; however, regardless, the basICColor profiling package is able to handle the scan. Additionally, to minimize optical flare, I applied the following method which I came up with and which makes reasonable sense to me:
If you remember, Don Hutch in his RGB scanning guide recommends scanning a target at 0 and 180 degrees, then flipped over, again at 0 and 180 degrees. Gerhard, you mentioned this doesn't make any sense as a way to reduce optical flare. I agree with you; it really doesn't. However, here's a method inspired by the 'four angle' idea that DOES make sense: take a dark patch (say Dmax) and, as long as it is a square (not a rectangle), select (marquee) the Dmax square, copy it, paste it on top, rotate it 90 degrees, and then blend using 'Darken' mode in Photoshop. Continue until you have 3 layers stacked on top of the original Dmax: one rotated 90 degrees, one rotated 180 degrees, and one rotated 270 degrees from the original. With all of them stacked in 'Darken' blend mode, any optical flare from adjacent bright patches should be reduced, if not fully eliminated. This worked very well for me, and I actually did this for over 20 of the darker patches on the HCT.
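For anyone who'd rather script this than do it by hand in Photoshop, here's a minimal numpy sketch of the same stacking idea (function name and toy patch values are mine, just for illustration). 'Darken' blend mode is simply an element-wise minimum:

```python
import numpy as np

def reduce_patch_flare(patch):
    """Blend a square patch with its own 90/180/270-degree rotations
    in 'Darken' mode (element-wise minimum). Flare leaking in from one
    edge is cancelled by a rotation in which that edge is darker."""
    assert patch.shape[0] == patch.shape[1], "patch must be square"
    result = patch.copy()
    for k in (1, 2, 3):  # 90, 180, 270 degrees
        result = np.minimum(result, np.rot90(patch, k))
    return result

# Toy example: a dark 4x4 patch with flare along its top edge.
patch = np.full((4, 4), 120, dtype=np.uint16)
patch[0, :] = 300  # simulated flare from an adjacent bright patch
cleaned = reduce_patch_flare(patch)
```

Since the flare only contaminates one edge of the toy patch, at least one of the four orientations is dark at every pixel, so the minimum recovers the clean value everywhere.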
The result? basICColor's profile lightened up some deep shadows even more, leading to a very pleasing profile. LPROF's profile still generated terrible shadow clipping. In fact, these profiles applied to the HCT scan are what are linked above... that is, the HCT images above all have their dark patches modified using my modified '4 angle' technique above.
Now, our discussion of applying black-clipping to the target scan still remains to be researched (I'm working on it right now... that is, subtracting the difference in RGB values between Dmax and 2 layers of unexposed film in 16-bit mode); however, Gerhard & Hal, don't you feel that there is something wrong with LPROF's profiling if the shadows are consistently so severely clipped, yet not so with a different profiling package?
I just finished reading the other thread that this one was split from, to make sure I was up to date. One thing that struck me was that in your raw VueScan scans, the DMAX values from the target (1,1,1) and from the double-unexposed-film scans (1,0,0) were actually very close. I expect that if you had a way to view the 16-bit values for these patches, they would be even closer, relatively speaking. Compare this to the much larger difference in your non-raw scans (I don't remember the exact difference between the DMAX values and the double unexposed slide values, but it was on the order of 15). It appears that something is happening in the non-raw scans that significantly affects the darkest patches, in ways I would not expect for scans intended for a color-managed workflow.
Uncorrected raw scans out of VueScan are basically the CCD data. This gives scans that are:
1. Linear - i.e. no gamma correction
2. Generally very dark, because DMIN ends up with RGB values that would normally be used for mid-tones - i.e. the tonal scale is compressed from the top.
In VueScan you can also add an exposure correction to the raw scan. This leaves the black level alone but stretches out the other tones, distributing them over a greater range of RGB values. It does not change the gamma.
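If I've described that right, the exposure correction amounts to scaling tones away from a fixed black level. A rough sketch of the arithmetic (my own guess, not VueScan's actual code):

```python
import numpy as np

def apply_exposure(raw, black_level, gain):
    """Stretch tones above the black level by 'gain' while leaving the
    black level itself untouched; gamma is not changed."""
    out = (raw.astype(np.float64) - black_level) * gain + black_level
    return np.clip(out, 0, 65535).astype(np.uint16)

# Toy 16-bit linear values; 120 stands in for the black level.
raw = np.array([120, 1000, 8000, 30000], dtype=np.uint16)
boosted = apply_exposure(raw, black_level=120, gain=1.8)
```

The black level stays at 120 while everything above it spreads out over a wider range, which matches the "compressed from the top" description of uncorrected raw scans.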
Even though Don Hutcheson recommends scanning slides using fairly high gamma settings (I think his scanning guide recommends 2.6 to 2.8), there are others who think these steps should be done using linear scans. XSane, an X11 scanning front end, only does color-managed scans in pure RAW mode, for example. Personally, I don't know if either group is correct, but the wide range of gamma values used for scans in color-managed workflows indicates that this depends on a number of factors, such as the scanner being used, and that user preference may perhaps be the overriding factor.
As a test, could you do a raw scan of your target, use it to create a profile, and test it by applying the profile to the raw target scan like you did above, to see if that helps this issue? I want as little processing of the CCD data as possible inside VueScan, to minimize the number of variables.
Now, I am not trying to imply that nothing needs work in LProf. Rather, I think there might be something, such as some automatic setting in VueScan, that is exposing an issue in LProf with this particular scanner. If the profiles from the raw scans work, then we can look at isolating what is causing this.
PS. I would have gotten back to you sooner but I was a little under the weather recently.
Rishi, sorry for the delayed answer, I was too busy otherwise.
> Gerhard & Hal, don't you feel that there is something wrong
> with LPROF's profiling if the shadows are consistently so
> severely clipped, yet not so with a different profiling package?
I don't know the profile and the corresponding measurements, but I guess that the other software is "cheating", conservatively favoring a pleasing result over accurately fitting the supplied measurements (i.e. the scan of the target and the target reference file).
What's actually wrong here are the measurements, which suffer from various flaws of the scanner. The captured RGB numbers are inconsistent between different scanned images and at different spatial locations. I'm not sure whether it should be the profiler's task not to trust the supplied measurements and to "cheat", or whether it is rather the user's task to feed the profiler with "good" measurements. Note that techniques like dark frame subtraction and/or the extended method described by HutchColor can avoid the clipping effectively as well, if they are applied appropriately by the user in a pre-processing step.
If one wanted to implement full-featured flare compensation, it could not be done with a color profile anyway; spatial and image-dependent operations would be necessary to accomplish this. And that would certainly not be a job for a profiler, but rather for the scanning or image processing software.
I could imagine building support for the "extended method" into lprof, so that the user does not need to pre-process the scans of the target manually, but only needs to specify the darkest RGB numbers encountered for "infinite density" (e.g. the darkest RGB numbers found on a scan of a double layer of unexposed film). But I would rather make such adjustments only when explicitly requested by the user in the GUI, not generally and automatically.
Dear Gerhard & Hal,
Apologies for the delay; I've been out & about at weddings & finishing up finals :)
I'm back now, so hopefully we can tackle this problem again.
Hal, I do see where you are coming from in terms of being suspicious of VueScan's application of gamma correction to scanned images. Perhaps when it gamma-corrects a scan of unexposed film, it does so differently than a scan of the target, which has patches of various densities. Perhaps this is why the RGB values of the target's Dmax after gamma correction are so much higher than the RGB values of (gamma-corrected) unexposed film.
Additionally, I should have realized this before, but gamma correction does cause hue shifts. Whether or not this matters when you're profiling anyway, I do not know. But one thing is certain: after a number of tests, the *hue angle* (in HSL) does NOT change when the same simple arithmetic is performed across all channels (R, G, & B). Therefore, in an effort to make an 'extended target', if I just subtract an arbitrary value x from all three channels, I should not get any hue shifts. Applying a black level adjust is probably better, because pure subtraction affects the white point, whereas a black level adjust should not. But the lack of hue shifts is at least good in that it ensures the generation of a valid profile even for images where I do NOT perform this black level adjust or subtraction when scanning (which, obviously, I wouldn't, since the whole point is to lighten up the shadows with this 'extended profile'!)
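A quick way to check the hue-invariance claim, using Python's standard colorsys module (sample values are arbitrary):

```python
import colorsys

def hue(r, g, b):
    """HSL/HLS hue angle for 8-bit RGB values, as a fraction of a turn."""
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return h

# Subtracting the same x from all three channels leaves the channel
# *differences* unchanged, and the hue angle depends only on those
# differences, so it should not move at all.
r, g, b = 180, 90, 40
x = 30
delta = abs(hue(r, g, b) - hue(r - x, g - x, b - x))
```

Saturation and lightness do change under this subtraction, of course; it's only the hue angle that stays put.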
Hal, you requested I do some RAW scans (Gamma 1.0), then profile these scans in LPROF, and apply the generated profile. Here are my results in comparing the results of LPROF & basICColor profiling of a RAW gamma 1.0 scan:
HCT target scanned w/ Gamma 1.0 (RAW), with subsequent LPROF-generated profile applied, then converted to 8-bit Adobe RGB JPEG:
HCT target scanned w/ Gamma 1.0 (RAW), with subsequent basICColor-generated profile applied, then converted to 8-bit Adobe RGB JPEG:
As you can see, LPROF is consistently clipping many colors, whereas basICColor still performs just fine.
I don't know what to say. Gerhard, either this 'cheating' that basICColor is doing is necessary to deal with these Nikon scanners, and, still to a certain extent (though to a much lesser degree), certain Imacon scans; OR, there's something wrong with LPROF's profiles that I'm generating? Has no one else noticed this before?
Also, a quick question: when you feed a gamma uncorrected scan to the profiling software, does the resulting profile then pretty much apply gamma ramps to the image?
Many thanks for your help guys! Hope to continue working thru this...
> Hal, you requested I do some RAW scans (Gamma 1.0), then
> profile these scans in LPROF, and apply the generated profile.
Rishi, for diagnosis purposes, I would actually be more interested in the HCT raw scan and the target reference file. Can you upload them too?
This one is indeed unsatisfactory, granted that this image really originates from the same raw scan which was also used to generate the profile. I cannot reproduce a similarly strong impact with your scan of the IT8 target (if I apply the profile to the same raw scan of the IT8 target which was used to create the profile).
> Therefore, in an effort to make an 'extended target', if I
> just subtract an arbitrary value x from all three channels,
> I should not get any hue shifts. Probably applying a black
> level adjust is better, because pure subtraction affects the
> white point, whereas black level adjusts should not.
If done in linear light space, the difference is only marginal, since we are talking about a magnitude of about 0.5% of full scale (in linear space). You should generally avoid any addition, subtraction or black point scaling in a gamma encoded space, because physically the intensities of two light sources add up in linear light space, and mathematically (a+b)^gamma is NOT the same as a^gamma+b^gamma. Doing the black adjustment in a gamma encoded space will eventually distort the TRC at the dark end.
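A two-line numerical check of this point, with arbitrary example intensities:

```python
gamma = 2.2

def encode(linear):
    """Gamma 2.2 encoding of a linear-light value in [0, 1]."""
    return linear ** (1 / gamma)

# Two linear-light contributions, e.g. signal plus flare.
a, b = 0.010, 0.005
correct = encode(a + b)        # add in linear space, then encode
wrong = encode(a) + encode(b)  # adding encoded values gives a different answer
```

Near black the discrepancy is large relative to the values involved, which is exactly why a black adjustment done in a gamma-encoded space distorts the TRC at the dark end.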
> I don't know what to say. Gerhard, either this 'cheating'
> that basICColor is doing is necessary...
I must admit that I have underestimated the limitations of usual film scanners (and a LS9000 is certainly NOT to be considered a low-cost scanner). It seems indeed, that either you need to compensate flaws in the scans before feeding them into the profiler (and/or before applying the profile to the scans), or alternatively, the profiler needs to "cheat", in order to obtain still pleasing [though not necessarily accurate] results if you don't want to flare-adjust the scans manually.
What I want to say is: If the scan of the IT8 target tells that Dmax corresponds to say RGB=[23,23,23] (in gamma 2.2 space), then it is basically to be expected that RGB numbers < 23 should be clipped by an accurate profile. If however a profile nevertheless still does not clip RGB numbers as low as say [15,15,15], then it simply does not honor what the scan of the IT8 target is saying about Dmax (that's what I mean when I say it is "cheating").
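To make the "honoring the measurements" point concrete, here is a toy single-channel sketch (not LProf's actual math): if the measured Dmax scans as 23, an accurate mapping has no choice but to clamp anything darker.

```python
dmax_rgb = 23  # measured RGB value of the Dmax patch, from the example above

def accurate_map(v):
    """Toy tone mapping that honors the measurement: 23 is the darkest
    value the target can produce, so inputs at or below 23 map to 0."""
    return max(0.0, (v - dmax_rgb) / (255 - dmax_rgb))
```

A profile that still leaves detail in values like 15 is, by this definition, not honoring what the target scan says about Dmax.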
Btw, is the basICColor a LUT profile too, or a matrix/trc profile?
> ...I do see where you are coming from in terms of being suspicious
> of Vuescan's application of gamma correction to scanned images.
Actually, I don't see too big a problem here, since this is not an unknown transformation that needs to be estimated from the data: Vuescan uses a well-defined gamma of 2.2 for 8-bit raw images (and obviously also for "Device RGB" images with color correction "None", which can be considered more or less "raw" as well). A well-known gamma encoding can easily be undone in order to get back the original gamma 1.0 image (and with 16 bits/channel the information loss won't be significant).
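In code terms, undoing a known encoding is just applying the inverse power. A numpy sketch; the round trip is essentially lossless at 16 bits/channel as long as the intermediate stays floating point:

```python
import numpy as np

def decode_gamma(img16, gamma=2.2):
    """Gamma-encoded 16-bit values -> linear light in [0, 1] (float)."""
    return (img16.astype(np.float64) / 65535.0) ** gamma

def encode_gamma(linear, gamma=2.2):
    """Linear light in [0, 1] -> gamma-encoded 16-bit values."""
    return np.round(linear ** (1.0 / gamma) * 65535.0).astype(np.uint16)

img = np.arange(0, 65536, 257, dtype=np.uint16)  # sample 16-bit codes
roundtrip = encode_gamma(decode_gamma(img))
```

Note that requantizing the linear intermediate back to 16-bit integers would crush the darkest codes (a code of 257 lands below 1 in linear space), which is why the decode here returns floats.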
> Perhaps this is why the RGB values of Dmax of the target after
> gamma correction is so much higher than the RGB values of (gamma
> corrected) unexposed film.
The gamma encoding certainly "magnifies" the difference between these RGB numbers, so that the difference becomes more evident in the gamma-encoded space, but it is not the root cause for the difference.
I can't tell at this moment if the basICColor profile is Matrix or LUT... I believe it's LUT based on the size. Here's a link to both LPROF & basICColor profiles (the one without 'basICColor' in the file name is generated by LPROF):
As for the raw images + target data files, etc., I will be back later today to post them :)
> As for the raw images + target data files, etc.,
> I will be back later today to post them
Rishi, you did not yet upload them, did you?
Yes, sorry, Gerhard, here they are:
The RAW scan, Gamma 1.0:
The .txt data file for HCT target:
well, are these ones so bad?
The scan of the HCT target makes the significant inconsistency of the RGB numbers returned by the scanner for darker patches, due to flare, even more evident than the IT8 layout does. These measurements obviously require a significantly higher smoothing factor than the default. I used "Manual Smoothness = 2" for the above profile, and I additionally applied a black point adjustment of about 100 RGB units (of 65535) to the scan of the target, as static flare compensation, before creating the profile. For the given measurement data, the "auto smoothness" setting [which is based on GCV (Generalized Cross Validation)] blatantly fails to estimate a reasonable smoothing factor due to the statistical properties of the errors in the measurements [we're not dealing with random repeatability errors here, but with systematic errors, which on the other hand cannot be explained by the RGB/XYZ numbers alone either, since they depend on the spatial arrangement and interaction of the patches]. Thus "auto smoothness" is not an appropriate choice here.
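The static flare compensation described here can be reproduced with a Levels-style black point adjust on the 16-bit target scan before profiling. This is my own sketch of the operation, assuming a linear remap that pins white in place:

```python
import numpy as np

def black_point_adjust(scan16, offset=100.0):
    """Levels-style black point adjust: values at or below 'offset' map
    to 0 while full scale (65535) stays put, unlike a plain subtraction,
    which would also pull the white point down slightly."""
    out = (scan16.astype(np.float64) - offset) * 65535.0 / (65535.0 - offset)
    return np.clip(np.round(out), 0, 65535).astype(np.uint16)

scan = np.array([50, 100, 65535], dtype=np.uint16)
adjusted = black_point_adjust(scan)
```

With an offset of 100 out of 65535, the slope change is only about 0.15%, so midtones and highlights are virtually untouched while the flare floor in the darkest patches is removed.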
Btw, I also noticed that the setting "white point location = Dmin patch" fails in conjunction with an HCT target; this needs to be fixed too. As a workaround, I did specify the white point RGB numbers by picking one of the white patches manually from the scan of the target. Your basICColor profile seems to use a different WP, which explains why my above profile looks overall "cooler"; don't be surprised by that, since a) you can freely choose the white point of your choice in lprof [which can be different from the choice I made when creating the profile above], and b) slides need to be color-balanced after scanning and applying the profile anyway.
No... not bad at all! They look great!
So, the reason for my delay: I've sadly been searching for weeks for a way to adjust black level in Mac OS X in 16 bits! I tried CinePaint; it'll let me adjust in 16 bits, but it won't let me write TIFF files. It just freezes. I believe I have something wrong with libtiff, or perhaps with dynamic library links... I think I screwed something up while trying to compile LPROF earlier.
So, in a nutshell, I just can't edit the target scans to perform the black clipping in 16 bits! I'm still trying with ImageMagick, etc., to do it somehow to test this out myself.
I did have one question though -- are you sure you only had to apply black level adjust of 100 in 16 bits? Because I did that in Cinepaint and the image hardly looked any darker...
Perhaps this method is exactly what basICColor does when it profiles??
If CinePaint will load a TIFF file and then bombs out when you try to save it, then this is likely a problem with libtiff or one of the libraries it uses. Is it possible that you have an unsupported compression setting turned on in CinePaint and that this is causing libtiff to bomb? It could also be a problem with a library needed by libtiff, like zlib.
I am not sure how building LProf could have caused this, since all the build does is check whether libtiff is available at the start and then link to the existing library during the link phase. Or do you mean that you did something with libtiff to get the LProf build working, and that this might be the source of the CinePaint problem?
About the images that Gerhard posted: when I looked at these images I was surprised at the amount of flare present. This is particularly true for the lower edge, and to some extent the right-hand edge, of the image. The flare appears less pronounced as you move from the lower right corner to the upper left corner. I remember reading on the HutchColor site about flare being a potential issue with smaller HCT film targets like the 35mm ones, since for a given level of flare the smaller patch size makes the flare affect a larger proportion of each patch's area. His recommendation was to use a larger target if your scanner can handle it, since the larger patch size helps mitigate the problems caused by flare.
One thing you might try in LProf that could help deal with flare is adjusting the Safe frame setting. This changes the size of the area sampled from each patch for profiling. Normally, when there is not much flare (as when doing a reflective scan), using Safe frame settings that result in larger sample areas is the right thing to do, but for high-flare images this may not be the case.
Using a higher Safe frame value makes the sample area smaller. With a smaller sample area you will be sampling pixels closer to the center of each patch, which should make the samples less affected by flare from adjacent patches. However, looking at the images, it appears to me that at least in the lower right corner the flare is extensive enough that adjusting the Safe frame setting may not help much. The highest Safe frame setting results in about 25% of the patch area being sampled. In any case, this might be something to experiment with.
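For intuition, the relationship between the trimmed margin and the sampled fraction is just square geometry (illustrative only, not LProf's actual code):

```python
def sampled_fraction(margin):
    """Fraction of a square patch's area that remains when a margin
    fraction 'margin' is trimmed from each side: (1 - 2*margin)**2."""
    side = 1.0 - 2.0 * margin
    return side * side if side > 0 else 0.0

# Trimming 25% from each side leaves half the side length, i.e. a
# quarter of the area -- consistent with the ~25% figure quoted for
# the highest Safe frame setting.
quarter = sampled_fraction(0.25)
```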
An adjustment of 100 on a 16-bit scale is a change of 100/65536 = 0.001526, or about 0.15% of the total scale. So overall I would not expect this to affect the image very much. But when this adjustment is applied to a dark area with an average 16-bit value of 120, that area becomes about 83% darker. This is what I think you want to happen, since the issue is that the darker areas of the target image are affected to a greater extent by flare, and this adjustment has a more pronounced effect on the darker areas.
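The arithmetic above, spelled out:

```python
full_scale = 65535
offset = 100
fraction_of_scale = offset / full_scale        # ~0.0015, i.e. ~0.15% of full scale

dark_value = 120
after = dark_value - offset                    # the dark area drops to 20
reduction = (dark_value - after) / dark_value  # ~0.83, i.e. ~83% darker
```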
Yes, I mean the latter -- that I did something with libtiff to get the LProf build working. I remember I had to move some of the libtiff, libjpeg, etc. files around in order to get the compile to work... can't quite remember why though.
Yes, the flare is horrendous, but I came up with a method to deal with it for individual patches where the flare was exorbitant. This correction *does* end up creating a better profile (with less shadow clipping), BUT the effect is minimal, to say the least. Basically, you take each individual patch, create a square marquee around it, copy & paste into a new layer, rotate 90 degrees, then set the blend mode to 'Darken' in Photoshop. Repeat for 180 and 270 degrees. Works like a charm to get rid of flare, but let me know if you see any inherent problems with the method. It IS tedious...
Don Hutch's method of rotating the entire film 90, 180, and 270 degrees only deals with flare *differentials* across the scan, but doesn't actually get rid of flare from patch to patch. As Gerhard pointed out earlier, theoretically, there's no way his method can effectively get rid of flare.
Getting a medium format target for $500 is not an option for me, unfortunately. Like most color management technology, I can't afford it.
What actually worries me more about the HCT targets is the unacceptable level of pepper grain... although this is for another post, I'll state here briefly that the pepper grain exhibited on the HCT is the most offensive & hideous I've ever seen on any film scan. Even the diffuse light source of the Nikon LS-9000 picks it up. What I wonder is how much the pepper grain itself screws up the profiling process! I'll post a 1:1 crop if you're interested.
I am not sure what you mean by pepper grain. I do know that unlike IT8 targets HCT targets do not contain patches that are all a single color but instead these have a basket weave pattern. When working on the HCT support in LProf we used the HCT SDK which consists of documentation and a set of target scans. The target scans used 4x5 Ektachrome transparencies one for an HCT target and another for an IT8 target from the same scanner.
In these scans the basket weave pattern in the patches is clearly visible. On a smaller 35mm target this might appear to be noise/grain. Don Hutcheson claims that this improves profiling accuracy. Yes, I would like to see a crop showing this grain, as I would like to compare it to the large format scans from the HCT SDK. Is this also visible when you inspect the target with a loupe, or does it just show up in the scans?
One other thing about these large format scans is that there is very little flare, which is expected partly because of the large format, but also because the scanner used to create them is likely very high quality. I am actually surprised that your Nikon scanner has this much flare, to the point where I am thinking that perhaps there is something wrong with the scanner. Perhaps the lens needs to be checked, or there is a light leak or some other hardware problem. I have a very cheap film scanner and it has much lower levels of flare than your scanner, though it also has fairly high levels of noise. I know I would be very disappointed to have purchased what should be a high quality scanner and to then find that it had such bad flare issues.
The other "oddity" of the HCT targets is that they actually have very few truly neutral patches. Most of the patches that appear neutral actually have a slight cast to them. Again, this is deliberate and is intended to help the profiling process by supplying more near-neutral patches. But we did have to make some adjustments to the LProf code to deal with how this affected our processing of neutral patches.
In a case like this where you are dealing with a device that has high levels of flare it might actually make sense to use targets with fewer but bigger patches. But I think you also had the same basic issue with IT8 targets and again it appears that the flare issue is bad enough that larger patch sizes may not help much.
I agree that $500 for a larger target is prohibitive. I checked Wolf Faust's web page thinking that perhaps he has medium format targets; he does not have these as a stock item. But his 4x5 targets are only $70 plus shipping, and I know he does custom targets and will also provide hand-measured targets. He might be able to create a 6x7 or 6x9 size target for you as part of a production run for significantly less than $500. After all, even if he had to use 4x5 stock to create a custom medium format target, I can't see this costing much more than a hand-measured 4x5 target. But you would need to contact Wolf to find out if this is possible and what it would cost.
Fuji Pepper Grain:
A HUGE problem in scanning when working with harsh, collimated light sources; less of a problem on newer Imacons and the Nikon LS-9000, which place an optical diffuser between the light source and the lens/CCD. Unidirectional light gets refracted around air bubbles in the film base, leaving a spot of no light (black) at the corresponding location on the CCD. Multi-directional light from diffuse sources tends, of course, to get refracted in many different directions around said air bubbles, eventually reaching the corresponding spot on the CCD and resulting in less of a black spot or 'pepper grain'.
We still need to do something about shadow clipping with LProf profiles. Hal or Gerhard-- is there any way you could put a little tweak into LProf's code that performs a tiny bit of black clipping in 16-bits to all input target scans for scanners? Or put in some option for this?
Now that I have Linux installed on my Mac, I could go ahead and do this, presumably, using CinePaint... no commercial Mac or PC application that I know of allows Levels edits in 16-bits (per channel, that is), so it'd really help if somehow this sort of edit could be added into LProf.
I'd say just make it default... I'm pretty sure that anyone would take a less contrasty profile over a more contrasty one, given that you can't recover shadows that've been clipped to black but you can certainly blacken shadows that haven't yet been clipped to black!
Thanks in advance,
Hi Rishi, I have added some rudimentary, experimental support for dealing with veiling glare introduced by the scanner to the latest version in CVS. You can specify a percentage of "perfect white", which will be treated as the assumed amount of glare and added to the XYZ measurements (from the reference file) of each patch. This parameter is not yet exposed in the GUI, but can be passed via the environment to lprof - e.g. you can call lprof like "GLARE=0.6 lprof" from the Unix shell. Watch out for a message like "Adding x.xx% glare to the measurements" in the "Messages" frame when you click the "Create Profile" button, to confirm that the parameter was passed correctly. Reasonable values may be in the 0.2 ... 1.0 range, depending on the scanner (try to find the lowest number that still avoids clipping for any of your scans of various scenes). Your feedback is appreciated.
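For anyone following along, the mechanics being described might look roughly like this. A sketch only: the function names are illustrative, not LProf's actual internals; only the GLARE environment variable and its meaning come from the description above.

```python
import os

def read_glare(default=0.0):
    """Read the GLARE percentage from the environment, as described."""
    try:
        return float(os.environ.get("GLARE", default))
    except (TypeError, ValueError):
        return default

def add_glare(xyz_patches, white_xyz, glare_percent):
    """Add glare_percent% of the 'perfect white' XYZ to every patch's
    reference XYZ, modelling a uniform veiling glare in the scanner."""
    f = glare_percent / 100.0
    return [tuple(c + f * w for c, w in zip(patch, white_xyz))
            for patch in xyz_patches]

os.environ["GLARE"] = "0.6"              # as in: GLARE=0.6 lprof
glare = read_glare()
patches = [(0.50, 0.40, 0.30), (0.010, 0.010, 0.008)]
white = (0.9642, 1.0, 0.8249)            # D50 white, Y normalized to 1
with_glare = add_glare(patches, white, glare)
```

The effect is that the reference data now expects every patch to be a little lighter than it really is, so the profile no longer demands that the darkest scanned values map all the way to black.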