From: Thomas T. <tt...@bi...> - 2000-04-26 14:19:03
Hrafnkell Eiriksson wrote:

> Yes photos and all natural images are "lowpass" in nature, that is they
> are mostly smooth regions. Spreading the error over a large area is ok there.
> Line art has hard edges, hard edges means high frequency content,
> so spreading the error there is worse, its a kind of lowpass filtering.

Actually, it is not lowpass filtering, but it does determine how far the
noise propagates. There is a technique called 'Riemersma dither' that uses
a table of the density printed so far. This means that your errors do not
propagate all over the page.

If you think about it, something a bit strange happens when you print two
dots: if one dot is printed a little too small because of rounding, the
other dot has a good chance of getting a little bigger! What Riemersma
dither does is look back at (say) the errors of the last 100 pixels
printed, weigh each with a scale factor, and use the weighted sum as the
error when deciding whether to print the current pixel.

> What viewing distance do you use to judge the performance of
> dithering?

JPEG used 8x8 pixel blocks to prevent fringing from occurring too far from
the hard edge that caused it. With dither, the mechanism is rather
different. If less than half the error is transferred to the next pixel,
horrible effects can occur. Say only 10% is transferred to the next pixel,
10% two pixels to the right, and so on. If we print a 60% gray level, the
first pixel comes out black. The second is black too, as only 10% of the
40% that was printed too much is transferred there as error. Only on the
order of 10 pixels away do white pixels start to form. This is actually
okay for very light tints.

I did suggest to Robert that it might be possible to do something with the
error buffer: before using the value in the error buffer, distribute it
partially away. The result is a very large distribution area.
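For concreteness, here is a minimal 1-D sketch of the bounded-history idea described above (this is not the actual Riemersma algorithm, which walks the image along a Hilbert curve; the HISTORY length and the halving weights are illustrative assumptions):

```python
HISTORY = 16  # how many past pixel errors to remember (illustrative choice)

def riemersma_1d(pixels):
    """Dither grayscale values in [0, 1] to 0/1 dots.

    Only the last HISTORY errors influence the current decision, and
    each is scaled down the older it is, so an error cannot propagate
    across the whole page the way plain error diffusion lets it.
    """
    errors = []  # errors of recent pixels, most recent last
    out = []
    for v in pixels:
        # Weighted sum of recent errors: the oldest gets the smallest weight.
        n = len(errors)
        correction = sum(e * 0.5 ** (n - 1 - i) for i, e in enumerate(errors))
        want = v + correction
        printed = 1.0 if want >= 0.5 else 0.0
        out.append(printed)
        errors.append(v - printed)  # store only this pixel's own error
        if len(errors) > HISTORY:
            errors.pop(0)           # forget errors beyond the lookback window
    return out
```

Note the trade-off: because old errors are forgotten, the average density of a flat tint is held less exactly than with full error diffusion, which is precisely the price paid for keeping the noise local.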
By adjusting the amount that is redistributed according to the density of
the current color pixel, one can prevent the above 'clipping' effect from
occurring.

Another suggestion I had was to use the required gray value (of error +
input) to decide whether to print, and when it is decided to print, lay
down a dot of the color that needs to be printed most from a color-error
perspective. Keep laying down dots of different colors in this position
until the gray value is as it should be. This has the advantage of
following the sensitivity of the human eye better than working per color,
but it is probably very difficult to integrate with variable dot size and
other clever rastering support.

An extension of that model would be something that actually knows about
printer and observer capabilities, calculates the value of what has been
produced so far, and bases a decision on that. That could take the form of
a heuristic that looks at the input value, compares the immediate area
printed so far with what it should have been, and determines which option
would be the best fit. That could be as simple as a two-dimensional
version of Riemersma, or as complex as something that also looks at
contrast and disregards sharply contrasting areas (i.e. if the background
was just a bit too light here, do not make the text on it any darker).

The problem with the above is that for every dot to be printed, one has to
build a perception model and compare it with the perception model of the
original image. This is very slow compared to the few shifts and adds that
make up error distribution (although the implementation as it is now also
has quite some decision-making built in).

Thomas
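P.S. A rough sketch of the "decide on gray, choose ink by color error" suggestion, for a single pixel with three ink channels (C, M, Y). The equal-weight gray, the 0.5 threshold, and the one-third darkening per dot are all assumptions for illustration, not anything gimp-print actually does:

```python
def dither_pixel(wanted, err):
    """wanted, err: per-channel (C, M, Y) values, roughly in [0, 1].

    The gray value of input + error decides *whether* to print; the
    per-channel color error decides *which* ink fires. Dots keep being
    laid down until the gray value is as it should be.
    Returns (dots, new_err) for this pixel.
    """
    need = [w + e for w, e in zip(wanted, err)]
    gray = sum(need) / 3.0            # crude equal-weight gray (assumption)
    dots = [0.0, 0.0, 0.0]
    while gray >= 0.5:                # still too light: print another dot
        # Fire the ink that is owed the most, from a color-error view.
        i = max(range(3), key=lambda k: need[k] - dots[k])
        if dots[i] == 1.0:
            break                     # every useful ink already fired here
        dots[i] = 1.0
        gray -= 1.0 / 3.0             # each dot darkens by one channel's worth
    new_err = [n - d for n, d in zip(need, dots)]  # carry to the next pixel
    return dots, new_err
```

For example, a pixel wanting heavy cyan and magenta but little yellow fires only one dot (the gray value is satisfied after it), and the unmet magenta demand is carried forward as error, so magenta fires at a nearby pixel instead.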