gdalgorithms-list Mailing List for Game Dev Algorithms (Page 31)
Brought to you by: vexxed72
From: Fabian G. <f.g...@49...> - 2009-07-16 21:39:11

Jon Watte wrote:
> Fabian Giesen wrote:
>> However, all of this is a non-issue in graphics. Pixel shaders give you
>> cheap pointwise evaluations at no cost. If you want only every second
>> pixel, then render a quad half the size in each direction.
>
> But that cheap sample is using a pretty crappy box area filter for its
> sampling, so it is not aliasing free.

At no point is this using a box filter. I'll try to make a bit clearer what's going on:

Mathematically, you perform downsampling by first low-pass filtering your signal and then throwing away every Nth sample. The latter operation is called N-decimation, and the corresponding operator is typically written as a down-arrow with N after it. If you just do the decimation without any low-pass filtering, you run into trouble because of aliasing; hence the low-pass filter, which is there to remove any frequencies higher than the new Nyquist frequency. Afterwards, there's (near enough) nothing left in the higher frequencies that could alias; that's the whole point.

Fast-forward to 2D downsampling. Similarly, you want to first run a low-pass filter (the 5x5 Gauss) and then run a decimation on the image (=copying it into a rendertarget half the size in both directions with nearest-neighbor filtering). Again, running just the decimation step will alias badly. And again, the low-pass filter is there precisely to get rid of any too-high frequencies that cause aliasing.

Anyway, clearly you're doing redundant work during the filtering here, since you only keep every fourth pixel. This is where polyphase filters come in; they are just a way of avoiding this redundant computation, formulated in terms of convolutions of decimated versions of both the signal and the filter. But the details don't matter, since we don't have to write it in terms of convolutions here! That was the whole point of my previous mail. There's no point computing the 5x5 Gaussian convolution at full res only to throw away 3 out of every 4 resulting pixels. You can just render into a quarter-area rendertarget directly. The rest stays exactly the same - you're still using a Gaussian low-pass filter, nothing has changed. You just don't compute anything you never use in the first place.

> But originally the problem wasn't just downsampling -- it was using some
> combination of downsampling, filtering and upsampling to approximate the
> result of a bigger Gaussian blur, but making it faster.

Yeah. Once you get to your lower resolution, you just execute your "main" blurring filter, which works exactly as it would've before, except it's got a lot fewer pixels to process. The usual techniques apply here - using the bilinear filtering HW to get "4 samples for the price of one", using two 1D passes instead of one large 2D pass for separable filters, etc. The only difference is that you're running on a lower-res version of the image. And to answer the OP's question: yes, a Gaussian with standard deviation sigma running on the lower-res version (say a downsampling factor of 2 in each dimension) is roughly equivalent to a Gaussian with standard deviation 2*sigma on the full-res version.

You will still get jarring aliasing artifacts if you take too many shortcuts during downsampling, though; perhaps the most obvious (and rather common) artifact is that small thin objects start flickering in the blurred version (that's what happens if you just throw the samples away without doing a small low-pass first!). If you're really going for the full multirate filterbank approach, you have to use a good filter during the upsampling pass too. But in practice, the bilinear filter really is good enough with the relatively blurry signal you usually have at this point; I've never noticed significant improvements using a better upsampler. (Unlike the downsampler, where it really *does* help.)

Kind regards,
-Fabian "ryg" Giesen
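Fabian's point -- low-pass filter, but only ever evaluate the filter at the samples you intend to keep -- can be sketched in 1D. This is an illustrative sketch (plain Python rather than a pixel shader, and the clamped-edge handling is an assumption, not something specified in the thread):

```python
import math

def gauss_kernel(sigma, radius):
    """Sampled, normalized 1-D Gaussian kernel of length 2*radius+1."""
    w = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def blur_then_decimate(signal, kernel, factor=2):
    """Naive version: low-pass the whole signal, then keep every Nth sample."""
    r = len(kernel) // 2
    blurred = [
        sum(kernel[r + k] * signal[min(max(i + k, 0), len(signal) - 1)]
            for k in range(-r, r + 1))
        for i in range(len(signal))
    ]
    return blurred[::factor]

def decimating_blur(signal, kernel, factor=2):
    """Polyphase-style version: evaluate the filter only at the kept samples."""
    r = len(kernel) // 2
    return [
        sum(kernel[r + k] * signal[min(max(i + k, 0), len(signal) - 1)]
            for k in range(-r, r + 1))
        for i in range(0, len(signal), factor)
    ]

signal = [float(i % 7) for i in range(32)]
k = gauss_kernel(1.0, 2)          # a 5-tap Gaussian
# Identical results, but the second version does half the filter evaluations.
assert blur_then_decimate(signal, k) == decimating_blur(signal, k)
```

The second function is exactly "render into the smaller rendertarget directly": the output grid is the decimated grid, and the filter is evaluated once per output sample.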
From: Olivier G. <gal...@po...> - 2009-07-16 20:29:48

On Thu, Jul 16, 2009 at 09:56:58AM +0100, Simon Fenney wrote:
> Jon Watte wrote:
>> Actually, they are both low-pass filters, so
>> downsampling-plus-upsampling is a form of blur. However, box
>> filtering (the traditional downsampling function) isn't actually all
>> that great at low-pass filtering, so it can generate aliasing, which
>> a Gaussian filter does not.
>
> AFAICS a box filter won't "generate" aliasing, it's just that it's not
> very good at eliminating the high frequencies which cause aliasing. A
> Gaussian, OTOH, has better behaviour in this respect...

Aren't we talking about graphics here? Vision is area sampling, making box filters perfect for integer downsampling. It's sound that is point sampling and hence sensitive to high-frequency aliasing.

OG.
From: Olivier G. <gal...@po...> - 2009-07-16 20:29:48

On Thu, Jul 16, 2009 at 10:10:42AM -0700, Jon Watte wrote:
> I thought that at first, too, but consider: Even if you represent
> numbers with spaces, you will run out of bits for the number of numbers
> that you'll need to represent. At most, you can represent (N-1) numbers
> with N available bits (and that doesn't take into account the
> book-keeping needed for applying any actual algorithm).

Countable infinity is still countable infinity. You can encode a set of positive integers as

  N = 2**i1 * 3**i2 * 5**i3 * 7**i4 * 11**i5 * ...

which is only one number. Alternatively you can encode the tape for a Turing machine as a number too. If you call Ti the value of the bit at position i relative to the head, you can encode the tape state as

  N = sum(Ti * 2**(i > 0 ? 2*i : i < 0 ? -2*i-1 : 0))

If the tape was all 0 at the start, the number of non-zero Ti is finite, so there's no convergence issue. In practice, you can encode any finite or countably infinite digital system state in one number (that's where names like Gödel start to surface). So, on a Turing-ish tape, that means two bits are enough. Converting the program to run on the 1-bit limited version is the hard part, though. I have no idea whether it's actually possible.

OG.
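Olivier's prime-power encoding can be sketched directly. An illustrative sketch (the trailing-zero ambiguity of the encoding is resolved here by passing the sequence length explicitly, which is an assumption on my part):

```python
def primes():
    """Generate primes 2, 3, 5, ... (trial division; fine for a sketch)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def encode(seq):
    """Encode (i1, i2, ...) as N = 2**i1 * 3**i2 * 5**i3 * ..."""
    n = 1
    for p, e in zip(primes(), seq):
        n *= p ** e
    return n

def decode(n, length):
    """Recover the exponent sequence from N by repeated division."""
    out = []
    gen = primes()
    for _ in range(length):
        p = next(gen)
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out

assert encode([3, 0, 2]) == 2**3 * 5**2   # == 200
assert decode(200, 3) == [3, 0, 2]
```

The whole sequence round-trips through a single integer, which is the point: one (arbitrarily large) number carries the entire state.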
From: Jon W. <jw...@gm...> - 2009-07-16 19:27:15

Fabian Giesen wrote:
> However, all of this is a non-issue in graphics. Pixel shaders give you
> cheap pointwise evaluations at no cost. If you want only every second
> pixel, then render a quad half the size in each direction.

But that cheap sample is using a pretty crappy box area filter for its sampling, so it is not aliasing free.

> In short, the optimal "polyphase" implementation of a 5x5 Gaussian
> followed by a 2:1 downsample in both directions is just to render to the
> smaller rendertarget directly with the 5x5 Gaussian pixel shader and the
> correct texture coordinates.

But originally the problem wasn't just downsampling -- it was using some combination of downsampling, filtering and upsampling to approximate the result of a bigger Gaussian blur, but making it faster.

Sincerely,
jw

-- Revenge is the most pointless and damaging of human desires.
From: Fabian G. <f.g...@49...> - 2009-07-16 18:04:34

Fabian Giesen wrote:
>> Fabian also mentioned Multirate Filter Banks -- in the audio case, the
>> equivalent is the polyphase filter AFAICR, and that's used quite heavily
>> in real time, because it actually saves cycles compared to first
>> filtering and then decimating. I think a properly implemented multirate
>> filter in a pixel shader might actually give you very good bang for the
>> cycle buck!
>
> The basic idea behind polyphase filters is that you're doing
> downsample(conv(a,b),2), where conv(a,b) is the convolution of a and b
> and downsample is the "elementary" downsample operator (point sampling -
> just throw away every second sample). So since you're ignoring every
> second sample generated by your gaussian filter anyway, there's not much
> point computing it in the first place. The math behind polyphase filters
> is the specifics of how to do just that. [..]

And, after thinking about this some more on the way home: you shouldn't care :). To elaborate: most of the math in signal processing deals with signals as a whole. This gives you a whole bunch of very powerful tools and notations (convolutions, Fourier transforms, the z-transform) but makes some basic sample-by-sample operations such as downsampling (or decimating, as it's usually called in DSP) a bit inconvenient in terms of notation. The whole polyphase filter stuff is mainly about writing the relevant index shuffling down in a concise and neat way so you can "bubble up" convolutions.

However, all of this is a non-issue in graphics. Pixel shaders give you cheap pointwise evaluations at no cost. If you want only every second pixel, then render a quad half the size in each direction. In short, the optimal "polyphase" implementation of a 5x5 Gaussian followed by a 2:1 downsample in both directions is just to render to the smaller rendertarget directly with the 5x5 Gaussian pixel shader and the correct texture coordinates. So all you have to do is use that pixel shader instead of a cheap one-texture-sample pixel shader that uses the bilinear filtering HW to average groups of 2x2 pixels together. It's more expensive, though, and if you want a quarter of the size in each direction, you *do* have to do this twice - or, alternatively, use a larger Gaussian during downsampling.

Cheers,
-Fabian "ryg" Giesen
From: Fabian G. <f.g...@49...> - 2009-07-16 17:30:05

> Fabian also mentioned Multirate Filter Banks -- in the audio case, the
> equivalent is the polyphase filter AFAICR, and that's used quite heavily
> in real time, because it actually saves cycles compared to first
> filtering and then decimating. I think a properly implemented multirate
> filter in a pixel shader might actually give you very good bang for the
> cycle buck!

The basic idea behind polyphase filters is that you're doing downsample(conv(a,b),2), where conv(a,b) is the convolution of a and b and downsample is the "elementary" downsample operator (point sampling - just throw away every second sample). So since you're ignoring every second sample generated by your Gaussian filter anyway, there's not much point computing it in the first place. The math behind polyphase filters is the specifics of how to do just that. It's basically splitting both the signal and filter into "odd" and "even" parts and then writing the result as a sum of convolutions of these - but it's been a while, so don't ask me about the details :)

Multirate filter banks are the filter bank equivalent of discrete wavelet transforms, really - you split into low and high frequency parts, then do an extra decomposition on the low-frequency part of the result, and so on. Each level has half the sampling rate of the previous, hence the "multirate". Nobody forces you to use exactly 2 filters per level, though. For blurring, you just don't care about the high frequency parts, so you throw them away and assume they're all 0 during reconstruction. The result is a cascade of the form

  downsample -> downsample -> ... -> upsample -> upsample

If you want to do it correctly, the upsampling filter should match your downsampling filter. If you don't and you're cheap :), you just use the bilinear upsampler you get for free.

Cheers,
-Fabian "ryg" Giesen
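The down/up cascade Fabian describes can be sketched in 1D. This is a toy model, not the full filter-bank math: the [1 2 1]/4 downsampling tap and the linear-interpolation upsampler stand in for the 5x5 Gaussian and the bilinear hardware filter, and the edge clamping is my own assumption:

```python
def halve(signal):
    """Downsample by 2 with a small [1 2 1]/4 low-pass tap (a cheap Gaussian-ish filter)."""
    n = len(signal)
    out = []
    for i in range(0, n, 2):
        a = signal[max(i - 1, 0)]
        b = signal[i]
        c = signal[min(i + 1, n - 1)]
        out.append(0.25 * a + 0.5 * b + 0.25 * c)
    return out

def double(signal):
    """Upsample by 2 with linear interpolation (the 'cheap bilinear' reconstruction)."""
    out = []
    for i, v in enumerate(signal):
        out.append(v)
        nxt = signal[min(i + 1, len(signal) - 1)]
        out.append(0.5 * (v + nxt))
    return out

def cheap_blur(signal, levels=2):
    """Down N times, then up N times: approximates one wide blur."""
    for _ in range(levels):
        signal = halve(signal)
    for _ in range(levels):
        signal = double(signal)
    return signal

impulse = [0.0] * 16
impulse[8] = 1.0
out = cheap_blur(impulse, levels=2)
assert len(out) == 16
assert abs(sum(out) - 1.0) < 1e-9   # DC is preserved through the cascade
assert max(out) < 1.0               # the impulse has been spread out
```

The impulse response of the cascade is the effective blur kernel; it is not a true Gaussian, but as Fabian says elsewhere in the thread, it "looks close enough" for blurring.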
From: Jon W. <jw...@gm...> - 2009-07-16 17:11:03

I thought that at first, too, but consider: Even if you represent numbers with spaces, you will run out of bits for the number of numbers that you'll need to represent. At most, you can represent (N-1) numbers with N available bits (and that doesn't take into account the book-keeping needed for applying any actual algorithm).

Sincerely,
jw

Danny Kodicek wrote:
> I'm not sure it's quite that simple. The machine can definitely count, by
> making a space between two boxes. If it wants to count to 5, all it needs is
> to make 1000001. To multiply that number by 2, it can use a third box as a

-- Revenge is the most pointless and damaging of human desires.
From: Jon W. <jw...@gm...> - 2009-07-16 17:07:06

Simon Fenney wrote:
> Jon Watte wrote:
>> Actually, they are both low-pass filters, so
>> downsampling-plus-upsampling is a form of blur. However, box
>> filtering (the traditional downsampling function) isn't actually all
>> that great at low-pass filtering, so it can generate aliasing, which
>> a Gaussian filter does not.
>
> AFAICS a box filter won't "generate" aliasing, it's just that it's not
> very good at eliminating the high frequencies which cause aliasing. A
> Gaussian, OTOH, has better behaviour in this respect...

Yes, what I meant by that was that the process of box filtering plus downsampling can generate aliasing. As can Gaussian plus downsampling, if your kernel isn't wide enough :-) With filters, it's all about the response curve -- you'll need a significantly wide Gaussian to get below the -61 dB quantization noise floor of an 8 bpp RGB image for any possible aliasing.

> ...again, (and my recollection, too, is a bit hazy), the shape of a
> Gaussian filter in the frequency domain is, again, a Gaussian, so it's
> still going to let through some amount of illegally high frequencies.

All filters do that, because there are no perfect brick-wall filters. You have to design a filter that rejects enough of the aliasing that you care about, while retaining as much as possible of the signal that you want. I think we can agree that a box filter is pretty far from the optimal choice for the more demanding cases :-)

> A sinc filter, when mapped into the frequency domain, is a box and so
> has a hard cut-off of higher frequencies, which should be ideal.

That would be an infinitely wide sinc. In most cases, though, you end up with only one or two lobes of the sinc, which makes it significantly less than the ideal box shape :-( Not to mention that all this is defined for static processes, but texture images are, at best, only locally static. The classic transient-response-vs-aliasing problem, where the answer is that you probably want to oversample by more than Nyquist suggests you should need to (or why a 48 kHz converter is surprisingly better than a 44.1 kHz converter for the 20-20k audible range -- but I think we're getting topic skew here :-)

Fabian also mentioned Multirate Filter Banks -- in the audio case, the equivalent is the polyphase filter AFAICR, and that's used quite heavily in real time, because it actually saves cycles compared to first filtering and then decimating. I think a properly implemented multirate filter in a pixel shader might actually give you very good bang for the cycle buck!

Sincerely,
jw

-- Revenge is the most pointless and damaging of human desires.
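The trade-off Jon and Simon are describing shows up directly in the filters' frequency responses. A small sketch (illustrative only; the 2-tap box and sigma-1 Gaussian are arbitrary example filters, not ones anyone in the thread specified) evaluates each filter's DTFT magnitude at the frequency that becomes the new Nyquist after a 2:1 decimation:

```python
import cmath
import math

def freq_response(taps, omega):
    """Magnitude of the filter's DTFT at angular frequency omega (radians/sample)."""
    return abs(sum(t * cmath.exp(-1j * omega * k) for k, t in enumerate(taps)))

box = [0.5, 0.5]                          # 2-tap box: a plain 2:1 average
g = [math.exp(-(x * x) / 2.0) for x in (-2, -1, 0, 1, 2)]
gauss = [v / sum(g) for v in g]           # 5-tap Gaussian, sigma = 1

nyquist = math.pi / 2                     # new Nyquist after 2:1 decimation
# The Gaussian attenuates the soon-to-alias band much harder than the box does.
assert freq_response(gauss, nyquist) < freq_response(box, nyquist)
# Both pass DC (flat areas of the image) untouched.
assert abs(freq_response(box, 0.0) - 1.0) < 1e-12
assert abs(freq_response(gauss, 0.0) - 1.0) < 1e-12
```

Neither is a brick wall, which is Jon's point: the box leaks roughly 70% of the amplitude at the new Nyquist, the sigma-1 Gaussian around 30%, and a wider Gaussian pushes that lower at the cost of over-filtering frequencies you wanted to keep.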
From: Fabian G. <f.g...@49...> - 2009-07-16 15:06:01

> I'm not sure it's quite that simple. The machine can definitely count, by
> making a space between two boxes. If it wants to count to 5, all it needs is
> to make 1000001. To multiply that number by 2, it can use a third box as a
> placeholder, then double each space. There's a lot of computation that can
> be done in that way. The number itself could have godel-numbered
> instructions which might be possible to unpack and run in some clever way. I
> agree that it feels like the domain is far too finite to be able to do
> everything, but I don't think proving it would be quite so cut-and-dried.

To settle this, you'd need to define exactly what you mean by universality. I'd state that a universal Turing machine U is a TM that will, given a coded representation of another TM (let's call it M) and its input, produce the exact same output on the tape as running the original M on the given input directly, if and only if M halts on that input. And if M with the given input doesn't halt, then neither will U. I'll further add the requirement that there be a fixed encoding for the TM input string; i.e. encoding a "0" or "1" input bit will always take the same amount of bits no matter where they are in the input string.

Expanding my example a bit: Assume a Turing machine X that gets a binary number n as input, with the head initially just to the right of the final digit. X will proceed to write exactly n ones starting at the initial head position, clear the rest of the tape to zero, and stop with the head at its initial position. This is a more precisely specified version of my binary counter example. X is well-defined and will eventually halt on all inputs. Actually implementing X will involve some trickery, such as padding the binary number somehow so that the start of the sequence of 1s can always be found, so it's not trivial, but it's not really complex either.

Now, X can be made to produce a string of 1s arbitrarily long given appropriate input. Doing a "left-shift" of n will only add a constant number of 1 bits (boxes), but double the number of "1" bits in the output. So you'll eventually run out of ones (boxes) for large enough n. -- That's why I specifically mentioned a *binary* counter. If you allow arbitrary problem-specific transformations (like changing a binary into a unary counter), you can obviously work around the limitations for a lot longer, but you're not really executing the same algorithm. I added the definition of universality (with the requirement of producing the same output, which is key here) to illustrate the point.

Kind regards,
-Fabian "ryg" Giesen
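Fabian's counting argument can be made concrete with a toy model of machine X's input/output behaviour (the actual TM state machine is elided; only the box counts matter for the argument):

```python
def run_X(binary_input):
    """Idealized output of machine X: a tape holding n ones for binary input n."""
    n = int(binary_input, 2)
    return "1" * n

# A left-shift of the input adds no '1' boxes on the input side...
inputs = ["1", "10", "100", "1000"]
# ...but the number of ones X must *write* doubles every time:
assert [len(run_X(s)) for s in inputs] == [1, 2, 4, 8]

# So for any fixed initial supply of boxes B, there is a legal input
# ("1" followed by enough zeros) that the box-conserving robot cannot answer.
B = 1000
k = 10
assert len(run_X("1" + "0" * k)) == 2 ** k > B
```

Output demand grows exponentially in input length while the box supply is fixed, which is exactly why no UTM exists in the box-conserving model.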
From: Danny K. <dr...@we...> - 2009-07-16 13:57:49

> Assuming that "box" corresponds to a 1 and "no box" corresponds to a 0
> on the tape, then no, you definitely can't; you can't even simulate an
> unbounded binary counter with that model! No matter how many boxes/bits
> you give it initially, at some point it will run out of bits...
> literally.

I'm not sure it's quite that simple. The machine can definitely count, by making a space between two boxes. If it wants to count to 5, all it needs is to make 1000001. To multiply that number by 2, it can use a third box as a placeholder, then double each space. There's a lot of computation that can be done in that way. The number itself could have Gödel-numbered instructions which might be possible to unpack and run in some clever way. I agree that it feels like the domain is far too finite to be able to do everything, but I don't think proving it would be quite so cut-and-dried.

Danny
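Danny's gap encoding is easy to make concrete. This toy sketch models only the representation and the doubling result, eliding the robot's step-by-step box moves (the placeholder-box mechanics he describes):

```python
def encode_gap(n):
    """Danny's encoding: the number n is the run of empty cells between two boxes."""
    return "1" + "0" * n + "1"

def gap_of(tape):
    """Read the number back off the tape."""
    return len(tape) - 2

def double_gap(tape):
    """Double the represented number without creating or destroying a box."""
    return encode_gap(2 * gap_of(tape))

tape = encode_gap(5)
assert tape == "1000001"
assert gap_of(double_gap(tape)) == 10
# The robot's invariant holds: still exactly two boxes on the track.
assert double_gap(tape).count("1") == tape.count("1") == 2
```

Arbitrary numbers fit in two boxes plus track length, which is Danny's point; Fabian's objection is about producing *output* (many ones), not about representing a value.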
From: Fabian G. <f.g...@49...> - 2009-07-16 13:20:22

> I've been making a small game about Turing machines, which features a
> programmable robot that moves boxes around on a straight track, and I'm
> wondering if anyone knows the answer to this question:
>
> My robot works very similarly to a Turing machine, with one exception: it
> can pick up and move boxes, but it can't create or destroy them. So if there
> are 8 boxes on the track (equivalently, if there are 8 1's on the Turing
> Machine tape), there will always be 8 boxes (one of which may be being
> carried by the robot). Does anyone know if it's possible, given sufficient
> boxes to work with, to make a universal TM using this system? My instinct
> says no, but I'd be interested to know if the problem has been analysed or
> has a name.

Assuming that "box" corresponds to a 1 and "no box" corresponds to a 0 on the tape, then no, you definitely can't; you can't even simulate an unbounded binary counter with that model! No matter how many boxes/bits you give it initially, at some point it will run out of bits... literally. Since this binary counter is a relatively simple TM, any UTM would have to be able to run it, so there can't be any UTM in this model.

This model seems closest to a linear bounded automaton, except it's deterministic. I don't think there's even any generally accepted name for this.

Cheers,
-Fabian "ryg" Giesen
From: Jon V. <ju...@gm...> - 2009-07-16 12:54:05

This sounds pretty much like what you're talking about: http://en.wikipedia.org/wiki/Linear_bounded_automaton

Good Luck!

Regards,
Jon Valdés

On Thu, Jul 16, 2009 at 2:33 PM, Danny Kodicek<dr...@we...> wrote:
> I've been making a small game about Turing machines, which features a
> programmable robot that moves boxes around on a straight track, and I'm
> wondering if anyone knows the answer to this question:
>
> My robot works very similarly to a Turing machine, with one exception: it
> can pick up and move boxes, but it can't create or destroy them. So if there
> are 8 boxes on the track (equivalently, if there are 8 1's on the Turing
> Machine tape), there will always be 8 boxes (one of which may be being
> carried by the robot). Does anyone know if it's possible, given sufficient
> boxes to work with, to make a universal TM using this system? My instinct
> says no, but I'd be interested to know if the problem has been analysed or
> has a name.
>
> Thanks
> Danny
>
> ------------------------------------------------------------------------------
> Enter the BlackBerry Developer Challenge
> This is your chance to win up to $100,000 in prizes! For a limited time,
> vendors submitting new applications to BlackBerry App World(TM) will have
> the opportunity to enter the BlackBerry Developer Challenge. See full prize
> details at: http://p.sf.net/sfu/Challenge
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithms-list
From: Danny K. <dr...@we...> - 2009-07-16 12:33:35

I've been making a small game about Turing machines, which features a programmable robot that moves boxes around on a straight track, and I'm wondering if anyone knows the answer to this question:

My robot works very similarly to a Turing machine, with one exception: it can pick up and move boxes, but it can't create or destroy them. So if there are 8 boxes on the track (equivalently, if there are 8 1's on the Turing machine tape), there will always be 8 boxes (one of which may be being carried by the robot). Does anyone know if it's possible, given sufficient boxes to work with, to make a universal TM using this system? My instinct says no, but I'd be interested to know if the problem has been analysed or has a name.

Thanks
Danny
From: Sam M. <sam...@ge...> - 2009-07-16 10:13:57

I was typing a reply but Simon beat me to it! Some extra bits:

You should think about reducing the sampling rate as a totally separate thing from any averaging/blurring that may also happen. When people say "downsampling" in computer graphics they usually mean a combination of averaging and reducing the sampling rate. You can 'downsample' simply by removing every other pixel, but this provides no defence against high frequencies aliasing.

Small changes can have a large impact. For instance, downsampling by using the mid-points, as opposed to just adding up even and odd pixel pairs, approximates a Gaussian fairly well - particularly if you do it more than once. You can produce some very nice blurs by down+up sampling using this trick.

Applying one filter then another is equivalent to convolving the two filters together first - so you can always reduce a series of filters down to a single filter (although this filter might be a lot larger!). You can google all the details, but this is a great starting point: http://www.dspguide.com/

And lastly, regardless of what the filters are, if you convolve several together, the result will quickly start to look like a Gaussian anyway (known as the central limit theorem). Gaussians are like the homeland all filters want to return to.

Hope that helps,
Sam

-----Original Message-----
From: Simon Fenney [mailto:sim...@po...]
Sent: 16 July 2009 09:57
To: Game Development Algorithms
Subject: Re: [Algorithms] Gaussian blur kernels

Jon Watte wrote:
> Actually, they are both low-pass filters, so
> downsampling-plus-upsampling is a form of blur. However, box
> filtering (the traditional downsampling function) isn't actually all
> that great at low-pass filtering, so it can generate aliasing, which
> a Gaussian filter does not.

AFAICS a box filter won't "generate" aliasing, it's just that it's not very good at eliminating the high frequencies which cause aliasing. A Gaussian, OTOH, has better behaviour in this respect...

> (AFAICR, the Gaussian filter kernel
> shares some properties with a sinc reconstruction kernel, and thus is
> "optimal" under certain conditions/assumptions, but details are
> hazy).

...again, (and my recollection, too, is a bit hazy), the shape of a Gaussian filter in the frequency domain is, again, a Gaussian, so it's still going to let through some amount of illegally high frequencies. In order to stop that, one will probably need to make it a bit wider which, unfortunately, means it'll over-filter some of the legal frequencies you'd want to keep. A sinc filter, when mapped into the frequency domain, is a box and so has a hard cut-off of higher frequencies, which should be ideal. OTOH, I think that a box filter, when mapped into the frequency domain, looks a bit like a sinc, which is why it behaves so poorly.

Cheers
Simon
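Sam's central-limit remark is easy to see numerically: self-convolving even the crudest filter, a 2-tap box, quickly yields the binomial kernels commonly used as small Gaussian approximations. An illustrative sketch:

```python
def convolve(a, b):
    """Full discrete convolution of two kernels."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

f = [0.5, 0.5]            # the crudest filter there is: a 2-tap box
for _ in range(3):
    f = convolve(f, [0.5, 0.5])

# After a few self-convolutions the box has become the binomial
# kernel [1 4 6 4 1]/16 -- the classic small Gaussian approximation.
assert f == [1 / 16, 4 / 16, 6 / 16, 4 / 16, 1 / 16]
assert abs(sum(f) - 1.0) < 1e-12
```

This also illustrates Sam's other point: a series of filters always collapses into one (here, four 2-tap boxes into one 5-tap kernel), it just gets wider with each pass.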
From: Fabian G. <f.g...@49...> - 2009-07-16 09:39:53

>> No... That isn't how it works; A downsample shares very little in
>> common with a blur.
>
> Actually, they are both low-pass filters, so
> downsampling-plus-upsampling is a form of blur. However, box filtering
> (the traditional downsampling function) isn't actually all that great at
> low-pass filtering, so it can generate aliasing, which a Gaussian filter
> does not. (AFAICR, the Gaussian filter kernel shares some properties
> with a sinc reconstruction kernel, and thus is "optimal" under certain
> conditions/assumptions, but details are hazy).

All "practically implementable" downsampling filters cause some degree of aliasing, including sinc; the problem is that sinc has infinite support and decays quite slowly (as 1/x), so you have to use a windowing function: either do it explicitly with a good windowing function, or implicitly choose a box window by just stopping the summation at some point.

Anyway, the problem with sinc is that it's very "ringy". Windowing functions help a bit, but it's still very obvious when you have high-contrast edges. Text on a solid background is a good example - with sinc and other filters with a very steep frequency response, you can see "wavefronts" emanating from the edges of characters. It's a continuum - the "simple" filters (box, triangle, etc.) have no ringing but exhibit serious aliasing; sinc and other "good" low-pass filters (in terms of frequency response) have negligible aliasing and tons of ringing. Gaussians are smack in the middle - they're all-positive, so there's no ringing, and they have a decent frequency response.

As for the original question, you can build quite good 2D low-pass filters out of Gaussians and downsampling, as long as you downsample *after* low-pass filtering. One example would be: 5x5 Gaussian => 2x downsample in both directions => 5x5 Gaussian => 2x downsample in both directions => (more Gaussian blurs) => upsample again. The result is not a Gaussian kernel, but it "looks close enough". It's still a lot more expensive than the more usual downsample-then-blur, since you do the Gaussian blur on a lot more pixels. What most people do for realtime applications is just do the downsample first anyway, live with the aliasing, and do something else with the milliseconds of GPU time saved. :)

Anyway, if you care for it, you can even do the downsample/filter sequence "perfectly", in the sense that it really does give you the exact filter you want: look up Multirate Filter Banks. But again, nobody uses this for real-time applications, or at least I've never seen it. And of course, to do it properly, you need a better upsampling filter than bilinear too...

Kind regards,
-Fabian "ryg" Giesen
From: Simon F. <sim...@po...> - 2009-07-16 09:12:29

Jon Watte wrote:
> Actually, they are both low-pass filters, so
> downsampling-plus-upsampling is a form of blur. However, box
> filtering (the traditional downsampling function) isn't actually all
> that great at low-pass filtering, so it can generate aliasing, which
> a Gaussian filter does not.

AFAICS a box filter won't "generate" aliasing, it's just that it's not very good at eliminating the high frequencies which cause aliasing. A Gaussian, OTOH, has better behaviour in this respect...

> (AFAICR, the Gaussian filter kernel
> shares some properties with a sinc reconstruction kernel, and thus is
> "optimal" under certain conditions/assumptions, but details are
> hazy).

...again, (and my recollection, too, is a bit hazy), the shape of a Gaussian filter in the frequency domain is, again, a Gaussian, so it's still going to let through some amount of illegally high frequencies. In order to stop that, one will probably need to make it a bit wider which, unfortunately, means it'll over-filter some of the legal frequencies you'd want to keep. A sinc filter, when mapped into the frequency domain, is a box and so has a hard cut-off of higher frequencies, which should be ideal. OTOH, I think that a box filter, when mapped into the frequency domain, looks a bit like a sinc, which is why it behaves so poorly.

Cheers
Simon
From: Jon W. <jw...@gm...> - 2009-07-16 02:36:05
|
Nicholas "Indy" Ray wrote:
> On Wed, Jul 15, 2009 at 4:46 PM, Joe Meenaghan<jo...@ga...> wrote:
>> Would the 5x5 over the lower res texture be approximately equivalent to a
>> 9x9 at the higher res for example, or perhaps something along these lines?
>
> No... That isn't how it works; a downsample shares very little in
> common with a blur.

Actually, they are both low-pass filters, so downsampling-plus-upsampling is a form of blur. However, box filtering (the traditional downsampling function) isn't actually all that great at low-pass filtering, so it can generate aliasing, which a Gaussian filter does not. (AFAICR, the Gaussian filter kernel shares some properties with a sinc reconstruction kernel, and thus is "optimal" under certain conditions/assumptions, but details are hazy.)

Sincerely,
jw

-- Revenge is the most pointless and damaging of human desires. |
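Jon's point, that box-downsample plus upsample acts as a (mediocre) low-pass filter, shows up clearly in 1D. A numpy sketch (the test signals and the nearest-neighbour upsample are my own illustrative choices): DC passes untouched, a mid frequency is attenuated, and the highest representable frequency is erased entirely.

```python
import numpy as np

def down2_box(s):
    """Box downsample by 2: average adjacent pairs."""
    return 0.5 * (s[0::2] + s[1::2])

def up2_nearest(s):
    """Nearest-neighbour upsample by 2: repeat each sample."""
    return np.repeat(s, 2)

n = np.arange(64)
dc = np.ones(64)                  # constant signal
mid = np.cos(2 * np.pi * n / 8)   # a mid frequency
nyq = np.cos(np.pi * n)           # alternating +1/-1, Nyquist rate

# DC survives at full amplitude, the mid tone is attenuated, and the
# Nyquist tone averages to zero: low-pass behaviour, i.e. a blur.
for name, s in [("dc ", dc), ("mid", mid), ("nyq", nyq)]:
    out = up2_nearest(down2_box(s))
    print(name, float(np.abs(out).max()))
```

The attenuation is uneven across frequencies (and the box response has sidelobes), which is exactly why this "blur" is lower quality than a real Gaussian pass.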
|
From: Nicholas \Indy\ R. <ar...@gm...> - 2009-07-16 01:58:17
|
On Wed, Jul 15, 2009 at 4:46 PM, Joe Meenaghan<jo...@ga...> wrote:
> Would the 5x5 over the lower res texture be approximately equivalent to a
> 9x9 at the higher res for example, or perhaps something along these lines?

No... That isn't how it works; a downsample shares very little in common with a blur.

> Similarly, what about running the 5x5 (or whatever size) N times in
> succession over the same texture? Is there a mapping that comes into play
> here, like, "X successive passes with a kernel of NxN is approximately
> equal to 1 pass with kernel of MxM"?

I think the computation goes like this: running two successive Gaussian blurs is equal to one Gaussian blur whose width (standard deviation) is the square root of the sum of the squares of the two blurs' widths. Thus, treating kernel size as proportional to the standard deviation, for two 5x5 blurs: sqrt(5^2 + 5^2) ~= 7, so two 5x5 blurs is about equal to a 7x7 blur.

Nicholas "Indy" Ray |
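The composition rule is easy to verify numerically: convolving two Gaussian kernels gives (to within truncation error) the single Gaussian kernel with sigma = sqrt(sigma1^2 + sigma2^2). A numpy sketch; the sigmas and kernel radius are my own illustrative choices.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Discrete 1D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    return k / k.sum()

sigma1, sigma2 = 2.0, 2.0
combined = np.sqrt(sigma1 ** 2 + sigma2 ** 2)  # ~2.83

r = 20  # generous radius so truncation error is negligible
two_pass = np.convolve(gaussian_kernel(sigma1, r),
                       gaussian_kernel(sigma2, r))
one_pass = gaussian_kernel(combined, 2 * r)

# Prints a tiny number: the two-pass and one-pass kernels agree to
# well within 1e-6.
print(np.abs(two_pass - one_pass).max())
```

Note the rule is exact for the standard deviations, not literally for odd kernel sizes like 5x5; the 5x5 -> 7x7 figure above works because practical kernel sizes are chosen roughly proportional to sigma.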
|
From: Joe M. <jo...@ga...> - 2009-07-16 00:48:05
|
Hey Guys,

I was just wondering if there are specific mappings regarding the relationships between Gaussian kernel sizes, data set sizes, and number of passes. For example, what relationship exists between a 5x5 blur done over, say, a 1024x1024 texture and a 5x5 blur on a 512x512 texture of its downsampled data (assuming a simple average to downsample to the lower res)? Would the 5x5 over the lower res texture be approximately equivalent to a 9x9 at the higher res, for example, or perhaps something along these lines?

Similarly, what about running the 5x5 (or whatever size) N times in succession over the same texture? Is there a mapping that comes into play here, like, "X successive passes with a kernel of NxN is approximately equal to 1 pass with a kernel of MxM"?

If there are very specific mathematical relationships with proper formulae, that's great, but even if there aren't, I'd be just as happy with general rules of thumb, best practices based on experience, etc. Thanks so much in advance for any insight you can provide!

Joe Meenaghan
The Game Institute
jo...@ga... |
|
From: Jon W. <jw...@gm...> - 2009-07-12 05:39:34
|
Rachel Blum wrote:
> Diodes could just be treated as "open connections" (unless you're
> interested in tunneling etc)

It turns out that diodes and wires are actually significantly different, even in steady state. In the absolutely simplest case, where there is only a single linear flow, with a diode in the forward direction, there may still be something like a 0.5V drop through the diode. Once the circuit includes more branches, the differences between diodes and wire increase quite dramatically.

Sincerely,
jw

-- Revenge is the most pointless and damaging of human desires. |
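Jon's forward-drop point can be checked with the Shockley diode equation. A sketch of the operating point of a single 5 V source -> 1 kOhm resistor -> diode loop; all component values (Is, Vt, V, R) are my own typical-ish assumptions, not from the thread. Bisection is used here for robustness since the residual is monotone in the diode voltage.

```python
import math

V, R = 5.0, 1000.0       # source voltage, series resistance
Is, Vt = 1e-12, 0.02585  # diode saturation current, thermal voltage (~300 K)

def mismatch(vd):
    """Resistor current minus diode current; zero at the operating point."""
    return (V - vd) / R - Is * (math.exp(vd / Vt) - 1.0)

# Bisection: mismatch is positive at 0 V and hugely negative at 1 V.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mismatch(mid) > 0.0:
        lo = mid
    else:
        hi = mid

vd = 0.5 * (lo + hi)
# A bit under 0.6 V with these parameters -- clearly not a wire's 0 V.
print("forward drop: %.3f V" % vd)
```

The drop depends only weakly on the rest of the circuit (it moves by about Vt per decade of current), which is why "roughly half a volt" is a good rule of thumb.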
|
From: Rachel B. <ra...@ra...> - 2009-07-12 00:34:01
|
> Not very gamey, but does anyone have a link to a general-purpose algorithm
> for solving arbitrary circuits? We've been using a simple Kirchhoff law
> algorithm, but it only works for linear circuits (fixed resistance). We need
> to be able to support diodes and bulbs that have resistance that varies with
> voltage, and that's baffling me a bit.

Diodes could just be treated as "open connections" (unless you're interested in tunneling etc). Resistance varying with voltage smells like diff. eq. to me, though...

Either way, this might be a good starting point: http://computation.pa.msu.edu/NO/F90/PreconditioningReport.pdf

Rachel |
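One note on the "smells like diff. eq." instinct: since the circuits here are steady-state, voltage-dependent elements give a nonlinear *algebraic* system, not a differential equation, and the standard attack (the one SPICE-family solvers use) is Newton-Raphson: linearize each nonlinear element around the current guess, solve the resulting linear Kirchhoff system, repeat. A one-node sketch with a Shockley diode; all parameter values are my own illustrative assumptions.

```python
import math

V, R = 5.0, 1000.0       # source voltage, series resistance
Is, Vt = 1e-12, 0.02585  # diode saturation current, thermal voltage

def f(vd):
    """Current mismatch at the diode node (KCL residual)."""
    return (V - vd) / R - Is * (math.exp(vd / Vt) - 1.0)

def df(vd):
    """Its derivative: -1/R minus the diode's small-signal conductance."""
    return -1.0 / R - (Is / Vt) * math.exp(vd / Vt)

vd = 0.5  # initial guess; nonlinear solves need a sane starting point
for _ in range(50):
    step = f(vd) / df(vd)
    vd -= step  # Newton update: solve the tangent-line (linear) model
    if abs(step) < 1e-12:
        break

print("converged drop: %.3f V" % vd)
```

In a multi-node circuit the same idea applies with vectors and a conductance matrix; production solvers also add voltage limiting and damping so the exponential doesn't make early iterations blow up.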
|
From: Danny K. <dr...@we...> - 2009-07-09 10:29:23
|
> And I can totally see how you could build a game around
> building/emulating circuits.
>
> http://en.wikipedia.org/wiki/Robot_Odyssey

That sounds amazing. I made a game a while ago (just as a programmer-for-hire - it wasn't my design) called Mission:Control that was based on programming, but it doesn't sound anywhere near as sophisticated. I was rather shocked to find my children playing it at school ten years later - without recognising me on the voiceover.

Re SPICE: thanks for that, Jon, but it's possibly a bit too complicated for my purposes. The problem is that I need to get it running in Director, so I need the algorithm in a form that I can translate into Lingo. By the time I've delved into their class library it'll probably be weeks before I can get it working, even if I can get it in an uncompiled form (I had a quick nose around their website and couldn't tell). Still, it might be a useful starting point - maybe someone on their forum might be able to help.

Thanks
Danny |
|
From: Charles N. <cha...@gm...> - 2009-07-07 16:49:02
|
On Tue, Jul 7, 2009 at 9:21 AM, Jon Watte <jw...@gm...> wrote:
> And I can totally see how you could build a game around building/emulating
> circuits.
> http://en.wikipedia.org/wiki/Robot_Odyssey

Best... game... ever. |
|
From: Jon W. <jw...@gm...> - 2009-07-07 16:21:22
|
Danny Kodicek wrote:
> Not very gamey, but does anyone have a link to a general-purpose algorithm
> for solving arbitrary circuits? We've been using a simple Kirchhoff law
> algorithm, but it only works for linear circuits (fixed resistance). We need
> to be able to support diodes and bulbs that have resistance that varies with
> voltage, and that's baffling me a bit. I've looked all over the place but
> I'm having no luck, so I'd appreciate the help.

Isn't SPICE what you'd usually use for these kinds of things? http://bwrc.eecs.berkeley.edu/Classes/icbook/SPICE/

And I can totally see how you could build a game around building/emulating circuits. Something like a spy who needs to generate various effects, and has components to build circuits with to do it. Totally cool edu-ware :-)

Sincerely,
jw

-- Revenge is the most pointless and damaging of human desires. |
|
From: Danny K. <dr...@we...> - 2009-07-07 11:37:28
|
Not very gamey, but does anyone have a link to a general-purpose algorithm for solving arbitrary circuits? We've been using a simple Kirchhoff law algorithm, but it only works for linear circuits (fixed resistance). We need to be able to support diodes and bulbs that have resistance that varies with voltage, and that's baffling me a bit. I've looked all over the place but I'm having no luck, so I'd appreciate the help.

The circuits have no time element, so steady state only - no capacitors, hysteresis effects, etc.

Thanks
Danny

PS: I'm aware I only ever seem to post here with general algorithm questions rather than specifically game-related ones. I'd be very happy to post them somewhere else if someone wants to suggest a more appropriate list, but I've always had more luck here than anywhere else.