Re: [Algorithms] Gaussian blur kernels
From: Fabian G. <f.g...@49...> - 2009-07-16 21:39:11
Jon Watte wrote:
> Fabian Giesen wrote:
>> However, all of this is a non-issue in graphics. Pixel shaders give you
>> cheap pointwise evaluations at no cost. If you want only every second
>> pixel, then render a quad half the size in each direction.
>
> But that cheap sample is using a pretty crappy box area filter for its
> sampling, so it is not aliasing free.

At no point is this using a box filter. I'll try to make a bit clearer
what's going on:

Mathematically, you perform downsampling by first low-pass filtering your
signal and then throwing away every Nth sample. The latter operation is
called N-decimation, and the corresponding operator is typically written
as a down-arrow with N after it. If you just do the decimation without any
low-pass filtering, you run into trouble because of aliasing; hence the
low-pass filter, which is there to remove any frequencies higher than the
new Nyquist frequency. Afterwards, there's (near enough) nothing left in
the higher frequencies that could alias; that's the whole point.

Fast-forward to 2D downsampling. Similarly, you want to first run a
low-pass filter (the 5x5 Gauss) and then run a decimation on the image
(=copying it into a rendertarget half the size in both directions with
nearest neighbor filtering). Again, running just the decimation step will
alias badly. And again, the low-pass filter is there precisely to get rid
of any too-high frequencies that cause aliasing.

Anyway, clearly you're doing redundant work during the filtering here,
since you only keep every fourth pixel. This is where the polyphase
filters come in; polyphase filters are just a way of avoiding this
redundant computation, formulated in terms of convolutions of decimated
versions of both the signal and the filter. But the details don't matter,
since we don't have to write it in terms of convolutions here! That was
the whole point of my previous mail. There's no point computing the 5x5
Gaussian convolution at full res only to throw away 3 out of every 4
resulting pixels. You can just render into a quarter-area rendertarget
directly. The rest stays exactly the same - you're still using a Gaussian
low-pass filter, nothing has changed. You just don't compute anything you
never use in the first place.

> But originally the problem wasn't just downsampling -- it was using some
> combination of downsampling, filtering and upsampling to approximate the
> result of a bigger Gaussian blur, but making it faster.

Yeah. Once you get to your lower resolution, you just execute your "main"
blurring filter, which works exactly as it would've before, except it's
got a lot fewer pixels to process. The usual techniques apply here - using
the bilinear filtering HW to get "4 samples for the price of one", using
two 1D passes instead of one large 2D pass for separable filters, etc. The
only difference is that you're running on a lower-res version of the
image, and to answer the OP's question: yes, a Gaussian with standard
deviation sigma running on the lower-res version (say a downsampling
factor of 2 in each dimension) is roughly equivalent to a Gaussian with
standard deviation 2*sigma on the full-res version.

You will still get jarring aliasing artifacts if you take too many
shortcuts during downsampling, though; perhaps the most obvious (and
rather common) artifact is that small, thin objects start flickering in
the blurred version (that's what happens if you just throw samples away
without doing a small low-pass first!).
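To make the 1D version concrete, here's a rough numpy sketch of
filter-then-decimate vs. plain decimation (the 1-4-6-4-1 binomial taps are
just an illustrative stand-in for a small Gaussian, nothing more):

    import numpy as np

    def lowpass_then_decimate(x, N=2):
        # Small binomial (Gaussian-like) low-pass; taps are illustrative.
        h = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
        h /= h.sum()
        y = np.convolve(x, h, mode='same')  # low-pass first...
        return y[::N]                       # ...then keep every Nth sample

    def naive_decimate(x, N=2):
        return x[::N]                       # no low-pass: this aliases

    # A signal right at the old Nyquist rate: naive decimation aliases it
    # down to a constant, the low-passed version is attenuated instead.
    x = np.cos(np.pi * np.arange(32))       # +1, -1, +1, -1, ...
    print(naive_decimate(x))                # all +1: aliased to DC
    print(lowpass_then_decimate(x))         # ~0 away from the borders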
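Same idea in 2D, evaluating a (hypothetical) 5x5 kernel only at the pixels
you actually keep - which is exactly what a pixel shader writing into a
half-size rendertarget does, one output pixel at a time:

    import numpy as np

    def gauss5x5_downsample(img):
        # 5x5 separable binomial kernel as a stand-in for the 5x5 Gauss.
        t = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
        k = np.outer(t, t) / 256.0          # normalized kernel
        p = np.pad(img, 2, mode='edge')     # border clamp
        h, w = img.shape
        out = np.empty((h // 2, w // 2))
        for oy in range(h // 2):
            for ox in range(w // 2):
                y, x = 2 * oy, 2 * ox       # only every second pixel
                out[oy, ox] = np.sum(k * p[y:y + 5, x:x + 5])
        return out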
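And for the sigma relationship, one quick way to check it numerically
(scipy's gaussian_filter stands in for the separable two-pass blur; the
random image, the sigma values and the bilinear upsample via zoom are just
illustrative choices, and the match is only approximate, as said above):

    import numpy as np
    from scipy.ndimage import gaussian_filter, zoom

    sigma = 2.0
    img = np.random.rand(256, 256)

    # Reference: blur at full res with twice the standard deviation.
    ref = gaussian_filter(img, 2.0 * sigma)

    # Low-pass + decimate by 2, blur with sigma, bilinearly upsample back.
    small = gaussian_filter(img, 1.0)[::2, ::2]
    approx = zoom(gaussian_filter(small, sigma), 2.0, order=1)

    print(np.abs(ref - approx).mean())      # small, but not zero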
If you're really going for the full multirate filterbank approach, you
have to use a good filter during the upsampling pass too. But in practice,
the bilinear filter really is good enough with the relatively blurry
signal you usually have at this point; I've never noticed significant
improvements using a better upsampler. (Unlike the downsampler, where it
really *does* help.)

Kind regards,
-Fabian "ryg" Giesen