From: Hrafnkell E. <he...@kv...> - 2000-04-26 11:59:00
Hi,

I would like to point you to an interesting article on using a Least Mean
Square (LMS) algorithm to choose the coefficients for the error diffusion
filter kernel:

  Lale Akarun, Yasemin Yardimci and A. Enis Cetin, "Adaptive Methods for
  Dithering Color Images", IEEE Transactions on Image Processing, vol. 6,
  no. 7, July 1997.

Quotes from the conclusion: "The appearance of color impulses is greatly
eliminated and smoother color transitions are achieved" and "Both of the
adaptive error diffusion algorithms show a distinct improvement in
performance when compared with the Floyd-Steinberg filter."

Given that I have some time available after my exams this summer, I would
like to try to implement the algorithm.

One thing that some might like to try out from the article: "The scaling of
the error diffusion filter coefficients is an important step in the
performance of the algorithm. This scaling coefficient controls the balance
of false edges and color impulses; it can, therefore, be varied in different
regions of an image to achieve different goals."

The main point is that the coefficients of the Floyd-Steinberg (and other)
filter kernels sum to one. By making them sum to, e.g., 0.9, the
quantization error accumulation that leads to disturbing color impulses in
pale areas can be reduced. A first thing to try might be to use some
heuristics to vary the scale factor. I have a feeling that rlk has developed
some heuristics he uses, at least in his head, when playing with the
dithering algorithms, that might be used here. Just a thought...

Homework, here I come!

-- 
//-----------------------//-------------------------------------------------
// Hrafnkell Eiriksson  //
// he...@kv...          // "Blessed are they who go around in circles,
// TF3HR                //  for they shall be known as Wheels"
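[Editorial note: the scaled-coefficient idea above can be sketched in a few
lines. This is a hypothetical illustration, not gimp-print code; the kernel
and threshold are the standard Floyd-Steinberg ones, and the `scale`
parameter is the coefficient sum being discussed.]

```python
# Sketch of error diffusion with a scaled Floyd-Steinberg kernel.
# scale = 1.0 gives classic Floyd-Steinberg (coefficients sum to one);
# scale < 1 leaks away part of each quantization error, damping the
# accumulation that produces stray dots ("color impulses") in pale areas.

def diffuse(image, scale=1.0, threshold=128):
    """Binary-dither a grayscale image (list of rows, values 0..255)."""
    h, w = len(image), len(image[0])
    # Work on a float copy so errors accumulate fractionally.
    buf = [[float(v) for v in row] for row in image]
    out = [[0] * w for _ in range(h)]
    # Floyd-Steinberg taps, each multiplied by the global scale factor.
    taps = [(1, 0, 7 / 16), (-1, 1, 3 / 16), (0, 1, 5 / 16), (1, 1, 1 / 16)]
    for y in range(h):
        for x in range(w):
            old = buf[y][x]
            new = 255 if old >= threshold else 0
            out[y][x] = new
            err = (old - new) * scale   # scale < 1 discards some error
            for dx, dy, wgt in taps:
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    buf[ny][nx] += err * wgt
    return out
```

On a uniform pale patch, lowering the scale noticeably reduces the number
of isolated dark dots, exactly the trade-off the quoted passage describes.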
From: Robert L K. <rl...@al...> - 2000-04-26 12:39:23
> Date: Wed, 26 Apr 2000 13:50:51 +0200
> From: Hrafnkell Eiriksson <he...@kv...>
>
> I would like to point you to an interesting article on using a Least
> Mean Square algorithm to choose the coefficients for the error
> diffusion filter kernel:
>
>   Lale Akarun, Yasemin Yardimci and A. Enis Cetin, "Adaptive Methods
>   for Dithering Color Images", IEEE Transactions on Image Processing,
>   vol. 6, no. 7, July 1997.

That sounds very interesting. We just have to make sure there's no patent
on it.

> "The scaling of the error diffusion filter coefficients is an
> important step in the performance of the algorithm. This scaling
> coefficient controls the balance of false edges and color impulses;
> it can, therefore, be varied in different regions of an image to
> achieve different goals."

That's very interesting; it's one of the few things I haven't seriously
tried. I've found that adaptive filter coefficients are helpful. What I do
now is use a wider spread for pale colors than for dark colors, and
different combinations for different kinds of images (photos use a wider
spread than line art, for example).

> The main point is that the coefficients of the Floyd-Steinberg (and
> other) filter kernels sum to one. By making them sum to, e.g., 0.9,
> the quantization error accumulation that leads to disturbing color
> impulses in pale areas can be reduced. A first thing to try might be
> to use some heuristics to vary the scale factor. I have a feeling
> that rlk has developed some heuristics he uses, at least in his head,
> when playing with the dithering algorithms.

It's mostly all cut and try. I should probably stop coding for a while and
actually write some documentation on what's going on.

-- 
Robert Krawitz <rl...@al...>           http://www.tiac.net/users/rlk/

Tall Clubs International  --  http://www.tall.org/ or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- mail lp...@uu...
Project lead for The Gimp Print -- http://gimp-print.sourceforge.net

"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton
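[Editorial note: the "wider spread for pale colors" heuristic Robert
describes might look roughly like this. The thresholds and kernels below
are invented for the sketch and are not taken from the gimp-print source.]

```python
# Hypothetical adaptive-spread selection: pale pixels diffuse error over
# a wider neighborhood than dark ones, and line art always uses a narrow
# kernel so errors do not blur across hard edges.

# Each kernel is a list of (dx, dy, weight) taps; weights sum to 1.
NARROW = [(1, 0, 0.5), (0, 1, 0.5)]                       # dark colors
MEDIUM = [(1, 0, 7/16), (-1, 1, 3/16), (0, 1, 5/16), (1, 1, 1/16)]
WIDE   = [(1, 0, 0.25), (2, 0, 0.15), (-2, 1, 0.1),       # pale colors
          (-1, 1, 0.1), (0, 1, 0.2), (1, 1, 0.1), (2, 1, 0.1)]

def pick_kernel(value, line_art=False):
    """Choose a diffusion kernel from the pixel value (0 = dark, 255 = pale)."""
    if line_art:
        return NARROW
    if value < 85:
        return NARROW
    if value < 170:
        return MEDIUM
    return WIDE
```

The error-diffusion inner loop would call `pick_kernel` per pixel and
distribute the error over the returned taps.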
From: Hrafnkell E. <he...@kv...> - 2000-04-26 13:07:15
On Wed, Apr 26, 2000 at 08:32:08AM -0400, Robert L Krawitz wrote:
> That sounds very interesting. We just have to make sure there's no
> patent on it.

Isn't it better just to let possible patent holders send a "cease and
desist" letter asking us to stop using their patented methods? If we don't
look, we are not willfully infringing the patent. I think I've seen this
advice from someone somewhere. Any lawyers?

Actually, this method is rather obvious to "one skilled in the art" (as I
think it is called in patent law). Linear filtering is basic stuff to anyone
learning signal processing and image analysis, and choosing coefficients
for a linear filter kernel using the LMS algorithm is usually the subject
of a second course in signal processing. The mere fact that the signal
happens to be a quantized image should not be considered enough to allow a
patent. (I think :)

And remember, software patents do not exist here in Europe :)

> dark colors, and have different combinations for different kinds of
> images (photos use wider spread than line art, for example).

Yes, photos and all natural images are "lowpass" in nature, that is, they
are mostly smooth regions, so spreading the error over a large area is OK
there. Line art has hard edges; hard edges mean high-frequency content, so
spreading the error there is worse, a kind of lowpass filtering.

What viewing distance do you use to judge the performance of dithering?

> It's mostly all cut and try. I should probably stop coding for a
> while and actually write some documentation on what's going on.

I would appreciate that. It's difficult to follow some things in the
dithering code. Thanks!

-- 
Hrafnkell Eiriksson <he...@kv...>
From: Thomas T. <tt...@bi...> - 2000-04-26 14:19:03
Hrafnkell Eiriksson wrote:
> Yes, photos and all natural images are "lowpass" in nature, that is,
> they are mostly smooth regions. Spreading the error over a large area
> is OK there. Line art has hard edges; hard edges mean high-frequency
> content, so spreading the error there is worse, a kind of lowpass
> filtering.

Actually, it is not lowpass filtering, but it does determine how far the
noise propagates. There is a technique called 'Riemersma dither' that uses
a table of the density printed so far, which means that your errors do not
propagate all over the page.

If you think about it, it is a bit strange what happens when you print two
dots: if one dot is printed a little too small because of rounding, the
other dot has a good chance of getting a little bigger! What Riemersma
dither does is look back at (say) the errors of the last 100 pixels
printed, weigh each with a scale factor, and use the weighted sum as the
error when deciding whether to print the current pixel.

> What viewing distance do you use to judge the performance of
> dithering?

JPEG uses 8x8 pixel blocks to prevent fringing from occurring too far from
the hard edge that caused it. With dither, the mechanism is rather
different. If less than half the error is transferred to the next pixel,
horrible effects can occur. Say only 10% is transferred to the next pixel,
10% two pixels to the right, and so on. If we print a 60% gray level, the
first pixel is black, and the second is black too, since only 10% of the
40% that was printed too much is transferred there as error. Only on the
order of 10 pixels away do white pixels start to form. This is actually OK
for very light tints.

I did make a suggestion to Robert that it might be possible to do something
with the error buffer: before using the value in the error buffer,
distribute it partially away. The result is a very large distribution area.
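[Editorial note: the look-back scheme described above can be sketched as
follows. This is a hypothetical 1-D illustration of the weighted error
history only; the real Riemersma dither also traverses the image along a
Hilbert curve, which is omitted here. The history length and weight ratio
are invented.]

```python
from collections import deque

# Keep the errors of the last `history` pixels processed; weight them
# exponentially (newest strongest) and use the weighted sum as the error
# term for the current print decision.  Old errors simply fall out of
# the window, so no error propagates indefinitely.

def lookback_dither(pixels, history=16, ratio=1/8):
    """Binary-dither a 1-D sequence of gray values (0..255)."""
    errors = deque(maxlen=history)   # past errors, most recent last
    out = []
    for v in pixels:
        n = len(errors)
        # Newest error gets weight 1.0, the oldest gets `ratio`,
        # interpolated exponentially in between.
        carried = sum(e * ratio ** ((n - 1 - i) / max(n - 1, 1))
                      for i, e in enumerate(errors))
        printed = 255 if v + carried >= 128 else 0
        out.append(printed)
        errors.append(v - printed)   # record this pixel's own error
    return out
```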
By adjusting the amount that is redistributed according to the density of
the current color pixel, one can prevent the above 'clipping' effect from
occurring.

Another suggestion I had was to use the required gray value (of error +
input) to decide whether to print, and when it is decided to print, lay
down a dot of the color that needs to be printed most from a color-error
perspective. Keep laying down different color dots in this position until
the gray value is as it should be. This has the advantage of following the
sensitivity of the human eye better than working per color, but it is
probably very difficult to integrate with variable dot sizes and other
clever rastering support.

An extension of that model would be something that actually knows about
printer and observer capabilities, calculates the value of what has been
produced so far, and bases its decision on that. That could take the form
of a heuristic that looks at the input value, looks at the immediate area
that has been printed so far, compares it to what it should have been, and
determines which option would be the best fit. That could be as simple as a
two-dimensional version of Riemersma, or as complex as something that also
looks at contrast and disregards sharply contrasting areas (i.e., if the
background was just a bit too light here, do not make the text on it any
darker).

The problem with the above is that for every dot to be printed, one has to
build a perception model and compare it with the perception model of the
original image. This is very slow compared to the few shifts and adds that
make up error distribution (although the implementation as it is now also
has quite a bit of decision making built in).

Thomas
From: Hrafnkell E. <he...@kv...> - 2000-04-26 14:36:31
On Wed, Apr 26, 2000 at 04:15:21PM +0200, Thomas Tonino wrote:
> Actually, it is not lowpass filtering, but it is determining how far
> the noise propagates. There is a thing called a 'Riemersma dither'
> that uses

Yes. What I meant is that there is a lowpass effect in it (although not
100% in the sense of linear filters). I probably should never have
mentioned it; it's just the way I was thinking about it when I wrote my
reply.

> An extension of that model would be something that actually knows
> about printer and observer capabilities, calculates the value of what
> has been produced so far, and bases a decision on that. That could be
> in the form

OK, I'll admit it: I've been searching for articles that might be
interesting for the project and that might teach me something about
dithering methods :) I've found two that describe model-based dithering
(based on a printer model and a perception model):

  Thrasyvoulos N. Pappas and David L. Neuhoff, "Least-Squares Model-Based
  Halftoning", IEEE Transactions on Image Processing, vol. 8, no. 8,
  August 1999.

  Thrasyvoulos N. Pappas, "Model-Based Halftoning of Color Images", IEEE
  Transactions on Image Processing, vol. 6, no. 7, July 1997.

The model-based methods are very interesting. They exploit characteristics
of the human visual system to get the best possible dithering, and they
also compensate for dot overlap and similar effects (and even use them to
get better results). I might have found more articles, but the printouts
are not on the surface of the pile of papers on my desk, so I can't see
them now :)

There is also an interesting book, which includes the original paper by
Floyd and Steinberg among other things:

  http://www.spie.org/web/abstracts/oepress/MS154.html

-- 
Hrafnkell Eiriksson <he...@kv...>
From: Robert L K. <rl...@al...> - 2000-04-27 00:17:58
> Date: Wed, 26 Apr 2000 16:15:21 +0200
> From: Thomas Tonino <tt...@bi...>
>
> What Riemersma dither does is look back at (say) the errors of the
> last 100 pixels printed, weigh each with a scale factor, and use this
> as error when deciding to print the current pixel.

The problem is in the definition of the "last 100 pixels". The last 100
pixels actually generated are in a line, but I suspect it would work better
with 100 pixels in a semicircle centered on the point printed (a
two-dimensional Riemersma).

> I did make a suggestion to Robert that it might be possible to do
> something with the error buffer. Before using the value in the error
> buffer, distribute it partially away. The result is a very large
> distribution area.

Ah, the joys of open source: anyone can do anything. Particularly when I
get around to writing documentation :-)

> Another suggestion I had was to use the required gray value (of error
> + input) to decide whether to print, and when it is decided to print,
> lay down a dot of the color that needs to be printed most from a
> color-error perspective. Keep laying down different color dots in
> this position until the gray value is as it should be. This has the
> advantage of following the sensitivity of the human eye better than
> working per color, but it is probably very difficult to integrate
> with variable dot sizes and other clever rastering support.

Actually, what is currently done isn't too different. It looks at the value
of the input to decide which one or two dot sizes (or tones) are eligible
for printing, then decides from input + error whether to print at all,
based upon the scaled size of the dot. It then decides which dot to
actually print based on a matrix or a random number, depending upon the
dither mode.

> The problem with the above is that for every dot to be printed, one
> has to build a perception model and compare this with the perception
> model of the original image.
> This is very slow compared to the few shifts and adds that make up
> error distribution (although the implementation as it is now also has
> quite a bit of decision making built in).

There's a lot of decision making, but I've tried to cut down on the amount
of branching actually going on. It's the combination of various decisions
that makes print_color so ugly.

-- 
Robert Krawitz <rl...@al...>
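[Editorial note: the decision sequence Robert outlines, bracketing dot
sizes from the input, thresholding input + error, then a matrix/random
selection, could be sketched roughly as below. This is a hypothetical
simplification, not the actual print_color code; the dot sizes and the
threshold rule are invented.]

```python
import random

# Available drop sizes, expressed as tone values (0 = no dot).
DOT_SIZES = [0, 85, 170, 255]

def choose_dot(input_value, error, rng=random.random):
    """Return the tone value of the dot to print (0 = print nothing)."""
    # 1. Find the two dot sizes that bracket the input value.
    upper = min(s for s in DOT_SIZES if s >= input_value)
    lower = max(s for s in DOT_SIZES if s <= input_value)
    # 2. Decide from input + error whether to print anything at all,
    #    thresholding against half the larger eligible dot size.
    value = input_value + error
    if upper == 0 or value < upper / 2:
        return 0
    if lower == upper:
        return upper
    # 3. Choose between the two eligible dots; a random number stands in
    #    here for the dither matrix used in matrix mode.
    frac = (input_value - lower) / (upper - lower)
    return upper if rng() < frac else lower
```

In a real driver the `rng` comparison would be replaced by a lookup into a
position-dependent dither matrix, which is what gives ordered rather than
random dot placement.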
From: Jean-Jacques de J. <jea...@hp...> - 2000-04-26 15:00:11
Hrafnkell Eiriksson wrote:
>
> On Wed, Apr 26, 2000 at 08:32:08AM -0400, Robert L Krawitz wrote:
> > That sounds very interesting. We just have to make sure there's no
> > patent on it.
>
> Isn't it better just to let possible patent holders send a "cease and
> desist" letter asking us to stop using their patented methods? If we
> don't look, we are not willfully infringing the patent. I think I've
> seen this advice from someone somewhere. Any lawyers?

This is dangerous, especially in the US. That said, I don't suspect patent
holders would want to go any further than that (i.e., as far as suing a
free software developer). But it would still mean a load of work dumped in
the trash can!

> Actually, this method is rather obvious to "one skilled in the art"
> (as I think it is called in patent law). Linear filtering is basic
> stuff to anyone learning signal processing and image analysis, and
> choosing coefficients for a linear filter kernel using the LMS
> algorithm is usually the subject of a second course in signal
> processing. The mere fact that the signal happens to be a quantized
> image should not be considered enough to allow a patent. (I think :)

This is a difficult question. In practice, you have to convince a judge,
who is likely to find the slightest technical improvement difficult to
understand, and thus non-obvious and patentable.

> And remember, software patents do not exist here in Europe :)

Ouch! No! There are thousands of patents in Europe covering software in the
form of "methods", which is just about as bad.

JJJ
From: Robert L K. <rl...@al...> - 2000-04-26 23:59:14
> Date: Wed, 26 Apr 2000 14:59:05 +0200
> From: Hrafnkell Eiriksson <he...@kv...>
>
> What viewing distance do you use to judge the performance of
> dithering?

I try to use a variety of distances. Viewing an 8x10 (20x25 cm) print on
glossy film from less than about 30 cm, I can't see any artifacts.

> > It's mostly all cut and try. I should probably stop coding for a
> > while and actually write some documentation on what's going on.
>
> I would appreciate that. It's difficult to follow some things in the
> dithering code.

It does need cleanup. It's better than it was pre-3.1.3, but it's already
started to accrete.