From: Danny Kodicek <dragon@we...> - 2009-07-16 22:43:30

> In practice, you can encode any finite or countably infinite digital
> system state in one number (that's where names like Gödel start to
> fuse). So, on a Turing-ish tape, that means two bits are enough.
> Converting the program to run on the 1-bit limited version is the hard
> part though. I have no idea whether it's actually possible.

That's exactly where I was heading with the question, yes - thanks for reassuring me that I'm not being completely crazy. I envision a system where the distance between the first two boxes represents the simulated TM itself, the next distance represents its current state, the next represents the state of the tape, and the last represents the current position on the tape. (As you say, you could further compress this into a single number, but that has the disadvantage that you have to do a lot more processing to work out what it's actually doing.) Then any further calculations would be done with other boxes that come after these initial five. It seems to me that this ought to be doable with a finite number of boxes and a finite number of states, but I'm not sure. I know it's a totally pointless and academic question, but it intrigues me strangely.

Danny
From: Jon Watte <jwatte@gm...> - 2009-07-16 21:59:36

Fabian Giesen wrote:
> versions of both the signal and the filter. But the details don't
> matter, since we don't have to write it in terms of convolutions here!
>
> That was the whole point of my previous mail. There's no point computing
> the 5x5 Gaussian convolution at full res only to throw away 3 out of

Oh, yeah, of course. I misunderstood your meaning, because the point was too obvious :) So, yes, the downsampling operation is, at the end, a single pass (assuming you do a 2D Gaussian in a single pass), and it samples some number of samples from the source, per output pixel. I guess even considering the "filter THEN discard" implementation is just too alien for those of us who have been actually implementing these things for so long that the details blend with memories of college beer bashes in the distant past...

Sincerely,

jw

--
Revenge is the most pointless and damaging of human desires.
From: Fabian Giesen <f.giesen@49...> - 2009-07-16 21:39:11

Jon Watte wrote:
> Fabian Giesen wrote:
>> However, all of this is a non-issue in graphics. Pixel shaders give you
>> cheap pointwise evaluations at no cost. If you want only every second
>> pixel, then render a quad half the size in each direction.
>
> But that cheap sample is using a pretty crappy box area filter for its
> sampling, so it is not aliasing free.

At no point is this using a box filter. I'll try to make a bit clearer what's going on:

Mathematically, you perform downsampling by first lowpass filtering your signal and then throwing away every Nth sample. The latter operation is called N-decimation and the corresponding operator is typically written as a down-arrow with N after it. If you just do the decimation without any lowpass filtering, you run into trouble because of aliasing; hence the lowpass filter, which is there to remove any frequencies higher than the new Nyquist frequency. Afterwards, there's (near enough) nothing there in the higher frequencies that could alias; that's the whole point.

Fast-forward to 2D downsampling. Similarly, you want to first run a lowpass filter (the 5x5 Gauss) and then run a decimation on the image (= copying it into a rendertarget half the size in both directions with nearest-neighbor filtering). Again, running just the decimation step will alias badly. And again, the lowpass filter is there precisely to get rid of any too-high frequencies that cause aliasing.

Anyway, clearly you're doing redundant work during the filtering here, since you only keep every fourth pixel. This is where the polyphase filters come in; polyphase filters are just a way of avoiding this redundant computation, formulated in terms of convolutions of decimated versions of both the signal and the filter. But the details don't matter, since we don't have to write it in terms of convolutions here! That was the whole point of my previous mail. There's no point computing the 5x5 Gaussian convolution at full res only to throw away 3 out of every 4 resulting pixels. You can just render into a quarter-area rendertarget directly. The rest stays exactly the same - you're still using a Gaussian lowpass filter, nothing has changed. You just don't compute anything you never use in the first place.

> But originally the problem wasn't just downsampling - it was using some
> combination of downsampling, filtering and upsampling to approximate the
> result of a bigger Gaussian blur, but making it faster.

Yeah. Once you get to your lower resolution, you just execute your "main" blurring filter, which works exactly as it would've before except it's got a lot less pixels to process. The usual techniques apply here - using the bilinear filtering HW to get "4 samples for the price of one", using two 1D passes instead of one large 2D pass for separable filters, etc. The only difference is that you're running on a lower-res version of the image, and to answer the OP's question, yes, a Gaussian with standard deviation sigma running on the lower-res version (say a downsampling factor of 2 in each dimension) is roughly equivalent to a Gaussian with standard deviation 2*sigma on the full-res version.

You will still get jarring aliasing artifacts if you take too many shortcuts during downsampling though; perhaps the most obvious (and rather common) artifact being that small thin objects start flickering in the blurred version (that's what happens if you just throw the samples away without doing a small lowpass first!). If you're really going for the full multirate filterbank approach, you have to use a good filter during the upsampling pass too. But in practice, the bilinear filter really is good enough with the relatively blurry signal you usually have at this point; I've never noticed significant improvements using a better upsampler. (Unlike the downsampler, where it really *does* help.)

Kind regards,
Fabian "ryg" Giesen
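The sigma-doubling claim above admits a quick 1-D sanity check. This is only a sketch (numpy stands in for the GPU passes, the kernel radii are ad-hoc choices, and a small sigma=1 prefilter is used before decimation as the mail recommends):

```python
import numpy as np

def gauss(sigma, radius):
    """Normalized 1-D Gaussian kernel with the given std and half-width."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

rng = np.random.default_rng(0)
sig = np.cumsum(rng.standard_normal(256))  # smooth-ish random-walk signal

sigma = 3.0
# path A: blur at full resolution with std 2*sigma
full = np.convolve(sig, gauss(2 * sigma, 24), mode="same")

# path B: small lowpass + 2x decimate, blur with std sigma at half res,
# then bilinear (linear-interp) upsample back to full resolution
low = np.convolve(sig, gauss(1.0, 4), mode="same")[::2]
low = np.convolve(low, gauss(sigma, 12), mode="same")
up = np.interp(np.arange(256) / 2, np.arange(128), low)

mid = slice(32, 224)  # ignore convolution border effects
rel_err = np.max(np.abs(full[mid] - up[mid])) / np.ptp(full[mid])
assert rel_err < 0.1  # the two paths agree closely on this smooth signal
```

The agreement isn't exact (the prefilter adds a little extra blur, and bilinear upsampling isn't a perfect reconstruction filter), but it's close, which is the "roughly equivalent" in the mail.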
From: Olivier Galibert <galibert@po...> - 2009-07-16 20:29:48

On Thu, Jul 16, 2009 at 09:56:58AM +0100, Simon Fenney wrote:
> Jon Watte wrote:
>
>> Actually, they are both lowpass filters, so
>> downsampling-plus-upsampling is a form of blur. However, box
>> filtering (the traditional downsampling function) isn't actually all
>> that great at lowpass filtering, so it can generate aliasing, which
>> a Gaussian filter does not.
>
> AFAICS a box filter won't "generate" aliasing, it's just that it's not
> very good at eliminating the high frequencies which cause aliasing. A
> Gaussian, OTOH, has better behaviour in this respect...

Aren't we talking about graphics here? Vision is area sampling, making box filters perfect for integer downsampling. It's sound that is point sampling and hence sensitive to high-frequency aliasing.

OG.
From: Olivier Galibert <galibert@po...> - 2009-07-16 20:29:48

On Thu, Jul 16, 2009 at 10:10:42AM -0700, Jon Watte wrote:
> I thought that at first, too, but consider: Even if you represent
> numbers with spaces, you will run out of bits for the number of numbers
> that you'll need to represent. At most, you can represent (N-1) numbers
> with N available bits (and that doesn't take into account the
> bookkeeping needed for applying any actual algorithm).

Countable infinity is still countable infinity. You can encode a set of positive integers as

  N = 2**i1 * 3**i2 * 5**i3 * 7**i4 * 11**i5 * ...

which is only one number. Alternatively you can encode the tape for a Turing machine as a number too. If you call Ti the value of the bit at position i relative to the head, you can encode the tape state as

  N = sum(Ti * 2**(i > 0 ? 2*i : i < 0 ? -2*i-1 : 0))

If the tape was all 0 at start, the number of nonzero Ti is finite so there's no convergence issue.

In practice, you can encode any finite or countably infinite digital system state in one number (that's where names like Gödel start to fuse). So, on a Turing-ish tape, that means two bits are enough. Converting the program to run on the 1-bit limited version is the hard part though. I have no idea whether it's actually possible.

OG.
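Both encodings Olivier describes can be checked directly. A small runnable sketch (the helper names are mine, not from the mail; the position-to-bit mapping sends 0 -> 0, positive i -> 2i, negative i -> -2i-1, so every tape cell gets a distinct bit):

```python
import math

def primes(k):
    """First k primes by trial division (fine for small k)."""
    found, n = [], 2
    while len(found) < k:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

def godel_encode(exponents):
    """N = 2**i1 * 3**i2 * 5**i3 * ... for a list of non-negative ints."""
    return math.prod(p ** e for p, e in zip(primes(len(exponents)), exponents))

def godel_decode(number, count):
    """Recover the exponents by repeated division."""
    out = []
    for p in primes(count):
        e = 0
        while number % p == 0:
            number //= p
            e += 1
        out.append(e)
    return out

def tape_bit_index(i):
    """Map tape position i (relative to the head) to a distinct bit index."""
    return 2 * i if i > 0 else (-2 * i - 1 if i < 0 else 0)

def tape_to_number(tape):
    """Encode a sparse tape {position: bit} as one integer, as in the mail."""
    return sum(bit << tape_bit_index(i) for i, bit in tape.items())

assert godel_decode(godel_encode([3, 0, 2]), 3) == [3, 0, 2]
assert tape_to_number({-1: 1, 0: 1, 1: 1}) == 0b111
```

The round-trip assertions show both encodings are lossless, which is all the "one number" argument needs.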
From: Jon Watte <jwatte@gm...> - 2009-07-16 19:27:15

Fabian Giesen wrote:
> However, all of this is a non-issue in graphics. Pixel shaders give you
> cheap pointwise evaluations at no cost. If you want only every second
> pixel, then render a quad half the size in each direction.

But that cheap sample is using a pretty crappy box area filter for its sampling, so it is not aliasing free.

> In short, the optimal "polyphase" implementation of a 5x5 Gaussian
> followed by a 2:1 downsample in both directions is just to render to the
> smaller rendertarget directly with the 5x5 Gaussian pixel shader and the
> correct texture coordinates.

But originally the problem wasn't just downsampling - it was using some combination of downsampling, filtering and upsampling to approximate the result of a bigger Gaussian blur, but making it faster.

Sincerely,

jw

--
Revenge is the most pointless and damaging of human desires.
From: Fabian Giesen <f.giesen@49...> - 2009-07-16 18:04:34

Fabian Giesen wrote:
>> Fabian also mentioned Multirate Filter Banks - in the audio case, the
>> equivalent is the polyphase filter AFAICR, and that's used quite heavily
>> in real time, because it actually saves cycles compared to first
>> filtering and then decimating. I think a properly implemented multirate
>> filter in a pixel shader might actually give you very good bang for the
>> cycle buck!
>
> The basic idea behind polyphase filters is that you're doing
> downsample(conv(a,b),2), where conv(a,b) is the convolution of a and b
> and downsample is the "elementary" downsample operator (point sampling -
> just throw away every second sample). So since you're ignoring every
> second sample generated by your gaussian filter anyway, there's not much
> point computing it in the first place. The math behind polyphase filters
> is the specifics of how to do just that. [..]

And, after thinking about this some more on the way home: you shouldn't care :).

To elaborate: most of the math in signal processing deals with signals as a whole. This gives you a whole bunch of very powerful tools and notations (convolutions, Fourier transforms, z-transform) but makes some basic sample-by-sample operations such as downsampling (or decimating as it's usually called in DSP) a bit inconvenient in terms of notation. The whole polyphase filter stuff is mainly about writing the relevant index shuffling down in a concise and neat way so you can "bubble up" convolutions.

However, all of this is a non-issue in graphics. Pixel shaders give you cheap pointwise evaluations at no cost. If you want only every second pixel, then render a quad half the size in each direction.

In short, the optimal "polyphase" implementation of a 5x5 Gaussian followed by a 2:1 downsample in both directions is just to render to the smaller rendertarget directly with the 5x5 Gaussian pixel shader and the correct texture coordinates. So all you have to do is use that pixel shader instead of a cheap one-texture-sample pixel shader that uses the bilinear filtering HW to average groups of 2x2 pixels together. It's more expensive though, and if you want a quarter of the size in each direction, you *do* have to do this twice - or, alternatively, use a larger Gaussian during downsampling.

Cheers,
Fabian "ryg" Giesen
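The "only compute the samples you keep" idea is exactly an identity about decimated convolutions, and it can be demonstrated in 1-D numpy (a sketch; the 5-tap binomial kernel is just a stand-in for the 5x5 Gaussian):

```python
import numpy as np

sig = np.arange(16, dtype=float)
filt = np.array([1., 4., 6., 4., 1.]) / 16  # symmetric binomial ~ Gaussian

# the "wasteful" way: filter at full rate, then throw half the results away
full = np.convolve(sig, filt, mode="valid")
decimated = full[::2]

# the "polyphase" way: evaluate the filter dot product only at even offsets
direct = np.array([sig[i:i + len(filt)] @ filt
                   for i in range(0, len(sig) - len(filt) + 1, 2)])

assert np.allclose(decimated, direct)  # identical results, half the work
```

This is the 1-D analogue of rendering the 5x5 Gaussian shader straight into the half-size rendertarget: `direct` never computes the outputs that decimation would discard.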
From: Fabian Giesen <f.giesen@49...> - 2009-07-16 17:30:05

> Fabian also mentioned Multirate Filter Banks - in the audio case, the
> equivalent is the polyphase filter AFAICR, and that's used quite heavily
> in real time, because it actually saves cycles compared to first
> filtering and then decimating. I think a properly implemented multirate
> filter in a pixel shader might actually give you very good bang for the
> cycle buck!

The basic idea behind polyphase filters is that you're doing downsample(conv(a,b),2), where conv(a,b) is the convolution of a and b and downsample is the "elementary" downsample operator (point sampling - just throw away every second sample). So since you're ignoring every second sample generated by your gaussian filter anyway, there's not much point computing it in the first place. The math behind polyphase filters is the specifics of how to do just that. It's basically splitting both the signal and filter into "odd" and "even" parts and then writing the result as a sum of convolutions of these - but it's been a while, so don't ask me about the details :)

Multirate filter banks are the filter bank equivalent of discrete wavelet transforms, really - you split into low and high frequency parts, then do an extra decomposition on the low-frequency part of the result, and so on. Each level has half the sampling rate of the previous, hence the "multirate". Nobody forces you to use exactly 2 filters per level though.

For blurring, you just don't care about the high frequency parts, so you throw them away and assume they're all 0 during reconstruction. The result is a cascade of the form

  downsample -> downsample -> ... -> upsample -> upsample

If you want to do it correctly, the upsampling filter should match your downsampling filter. If you don't and you're cheap :), you just use the bilinear upsampler you get for free.

Cheers,
Fabian "ryg" Giesen
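A toy version of that cascade, using the "cheap" choice at every stage (2-tap average for the downsample, linear interpolation as the free bilinear upsampler). Purely illustrative; the test just confirms the round trip preserves length and acts as a blur:

```python
import numpy as np

def down(x):
    """2-tap average then 2x decimation - the cheap lowpass + downsample."""
    return (x[0::2] + x[1::2]) / 2

def up(x):
    """Linear-interpolation 2x upsample - the 'free' bilinear filter."""
    return np.interp(np.arange(2 * len(x)) / 2, np.arange(len(x)), x)

rng = np.random.default_rng(1)
x = rng.standard_normal(64)
y = up(up(down(down(x))))  # two levels down, two levels back up

tv = lambda a: np.abs(np.diff(a)).sum()  # total variation = "wiggliness"
assert len(y) == len(x)
assert tv(y) < tv(x)  # the cascade smooths the signal, i.e. it blurs
```

All the high-frequency detail discarded on the way down simply stays zero on the way up, which is what makes this a blur rather than a faithful reconstruction.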
From: Jon Watte <jwatte@gm...> - 2009-07-16 17:11:03

I thought that at first, too, but consider: Even if you represent numbers with spaces, you will run out of bits for the number of numbers that you'll need to represent. At most, you can represent (N-1) numbers with N available bits (and that doesn't take into account the bookkeeping needed for applying any actual algorithm).

Sincerely,

jw

Danny Kodicek wrote:
> I'm not sure it's quite that simple. The machine can definitely count, by
> making a space between two boxes. If it wants to count to 5, all it needs is
> to make 1000001. To multiply that number by 2, it can use a third box as a

--
Revenge is the most pointless and damaging of human desires.
From: Jon Watte <jwatte@gm...> - 2009-07-16 17:07:06

Simon Fenney wrote:
> Jon Watte wrote:
>
>> Actually, they are both lowpass filters, so
>> downsampling-plus-upsampling is a form of blur. However, box
>> filtering (the traditional downsampling function) isn't actually all
>> that great at lowpass filtering, so it can generate aliasing, which
>> a Gaussian filter does not.
>
> AFAICS a box filter won't "generate" aliasing, it's just that it's not
> very good at eliminating the high frequencies which cause aliasing. A
> Gaussian, OTOH, has better behaviour in this respect...

Yes, what I meant by that was that the process of box filtering plus downsampling can generate aliasing. As can Gaussian plus downsampling if your kernel isn't wide enough :) With filters, it's all about the response curve - you'll need a significantly wide Gaussian to get below the 61 dB quantization noise floor of an 8 bpp RGB image for any possible aliasing.

> ...again, (and my recollection, too, is a bit hazy), the shape of a
> Gaussian filter in the frequency domain is, again, a Gaussian, so it's
> still going to let through some amount of illegally high frequencies. In

All filters do that, because there are no perfect brick-wall filters. You have to design a filter that rejects enough of the aliasing that you care about, while retaining as much as possible of the signal that you want. I think we can agree that a box filter is pretty far from the optimal choice for the more demanding cases :)

> A sinc filter, when mapped into the frequency domain, is a box and so
> has a hard cutoff of higher frequencies, which should be ideal.

That would be an infinitely wide sinc. In most cases, though, you end up with only one or two lobes of the sinc, which makes it significantly less than an ideal box shape :( Not to mention that all this is defined for static processes, but texture images are, at best, only locally static. The classic transient-response-vs-aliasing problem, where the answer is that you probably want to oversample by more than Nyquist suggests you should need to (or why a 48 kHz converter is surprisingly better than a 44.1 kHz converter for the 20-20k audible range - but I think we're getting topic skew here :)

Fabian also mentioned Multirate Filter Banks - in the audio case, the equivalent is the polyphase filter AFAICR, and that's used quite heavily in real time, because it actually saves cycles compared to first filtering and then decimating. I think a properly implemented multirate filter in a pixel shader might actually give you very good bang for the cycle buck!

Sincerely,

jw

--
Revenge is the most pointless and damaging of human desires.
From: Fabian Giesen <f.giesen@49...> - 2009-07-16 15:06:01

> I'm not sure it's quite that simple. The machine can definitely count, by
> making a space between two boxes. If it wants to count to 5, all it needs is
> to make 1000001. To multiply that number by 2, it can use a third box as a
> placeholder, then double each space. There's a lot of computation that can
> be done in that way. The number itself could have Gödel-numbered
> instructions which might be possible to unpack and run in some clever way. I
> agree that it feels like the domain is far too finite to be able to do
> everything, but I don't think proving it would be quite so cut-and-dried.

To settle this, you'd need to define exactly what you mean by universality. I'd state that a universal Turing machine U is a TM that will, given a coded representation of another TM (let's call it M) and its input, produce the exact same output on the tape as running the original M with the given input directly if M halts on that input, if and only if M halts. And if M with the given input doesn't halt, then neither will U. I'll further add the requirement that there be a fixed encoding for the TM input string; i.e. encoding a "0" or "1" input bit will always take the same amount of bits no matter where they are in the input string.

Expanding my example a bit: Assume a Turing machine X that gets a binary number n as input, with the head initially just at right of the final digit. X will proceed to write exactly n ones starting at the initial head position, clear the rest of the tape to zero, and stop with the head at its initial position. This is a more precisely specified version of my binary counter example. X is well-defined and will eventually halt on all inputs. Actually implementing X will involve some trickery such as padding the binary number somehow so that the start of the sequence of 1s can always be found, so it's not trivial, but it's not really complex either.

Now, X can be made to produce a string of 1s arbitrarily long given appropriate input. Doing a "left-shift" of n will only add a constant number of 1 bits (boxes), but double the number of "1" bits in the output. So you'll eventually run out of ones (boxes) for large enough n.

That's why I specifically mentioned a *binary* counter. If you allow arbitrary problem-specific transformations (like changing a binary into a unary counter), you can obviously work around the limitations for a lot longer, but you're not really executing the same algorithm. I added the definition of universality (with the requirement of producing the same output, which is key here) to illustrate the point.

Kind regards,
Fabian "ryg" Giesen
From: Danny Kodicek <dragon@we...> - 2009-07-16 13:57:49

> Assuming that "box" corresponds to a 1 and "no box" corresponds to a 0
> on the tape, then no, you definitely can't; you can't even simulate an
> unbounded binary counter with that model! No matter how many boxes/bits
> you give it initially, at some point it will run out of bits...
> literally.

I'm not sure it's quite that simple. The machine can definitely count, by making a space between two boxes. If it wants to count to 5, all it needs is to make 1000001. To multiply that number by 2, it can use a third box as a placeholder, then double each space. There's a lot of computation that can be done in that way. The number itself could have Gödel-numbered instructions which might be possible to unpack and run in some clever way. I agree that it feels like the domain is far too finite to be able to do everything, but I don't think proving it would be quite so cut-and-dried.

Danny
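Danny's gap encoding is easy to simulate. This is plain Python rather than an actual robot program, and it only illustrates the key point of his reply: the stored number can grow without bound while the box count stays fixed at two:

```python
def make_track(n):
    """Encode n as the gap between two boxes: 5 -> 1000001."""
    return [1] + [0] * n + [1]

def gap(track):
    """Read the encoded number back off the track."""
    ones = [i for i, c in enumerate(track) if c == 1]
    return ones[1] - ones[0] - 1

def double(track):
    """Double the stored number by moving the second box outward.
    No boxes are created or destroyed - the conserved resource is intact."""
    return [1] + [0] * (2 * gap(track)) + [1]

t = make_track(5)
assert gap(t) == 5 and sum(t) == 2
t = double(t)
assert gap(t) == 10 and sum(t) == 2
```

Which is why Fabian's counter-argument has to target the *binary* counter with a fixed input encoding: a unary-in-gaps representation sidesteps the box limit, but it's no longer running the same algorithm.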
From: Fabian Giesen <f.giesen@49...> - 2009-07-16 13:20:22

> I've been making a small game about Turing machines, which features a
> programmable robot that moves boxes around on a straight track, and I'm
> wondering if anyone knows the answer to this question:
>
> My robot works very similarly to a Turing machine, with one exception: it
> can pick up and move boxes, but it can't create or destroy them. So if there
> are 8 boxes on the track (equivalently, if there are 8 1's on the Turing
> Machine tape), there will always be 8 boxes (one of which may be being
> carried by the robot). Does anyone know if it's possible, given sufficient
> boxes to work with, to make a universal TM using this system? My instinct
> says no, but I'd be interested to know if the problem has been analysed or
> has a name.

Assuming that "box" corresponds to a 1 and "no box" corresponds to a 0 on the tape, then no, you definitely can't; you can't even simulate an unbounded binary counter with that model! No matter how many boxes/bits you give it initially, at some point it will run out of bits... literally. Since this binary counter is a relatively simple TM, any UTM would have to be able to run it, so there can't be any UTM in this model.

This model seems closest to a linear bounded automaton, except it's deterministic. I don't think there's even any generally accepted name for this.

Cheers,
Fabian "ryg" Giesen
From: Jon Valdés <juanval@gm...> - 2009-07-16 12:54:05

This sounds pretty much like what you're talking about: http://en.wikipedia.org/wiki/Linear_bounded_automaton

Good luck!

Regards,
Jon Valdés

On Thu, Jul 16, 2009 at 2:33 PM, Danny Kodicek<dragon@...> wrote:
> I've been making a small game about Turing machines, which features a
> programmable robot that moves boxes around on a straight track, and I'm
> wondering if anyone knows the answer to this question:
>
> My robot works very similarly to a Turing machine, with one exception: it
> can pick up and move boxes, but it can't create or destroy them. So if there
> are 8 boxes on the track (equivalently, if there are 8 1's on the Turing
> Machine tape), there will always be 8 boxes (one of which may be being
> carried by the robot). Does anyone know if it's possible, given sufficient
> boxes to work with, to make a universal TM using this system? My instinct
> says no, but I'd be interested to know if the problem has been analysed or
> has a name.
>
> Thanks
> Danny
>
> ------------------------------------------------------------------------------
> Enter the BlackBerry Developer Challenge
> This is your chance to win up to $100,000 in prizes! For a limited time,
> vendors submitting new applications to BlackBerry App World(TM) will have
> the opportunity to enter the BlackBerry Developer Challenge. See full prize
> details at: http://p.sf.net/sfu/Challenge
> _______________________________________________
> GDAlgorithms-list mailing list
> GDAlgorithms-list@...
> https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
> Archives:
> http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithms-list
From: Danny Kodicek <dragon@we...> - 2009-07-16 12:33:35

I've been making a small game about Turing machines, which features a programmable robot that moves boxes around on a straight track, and I'm wondering if anyone knows the answer to this question:

My robot works very similarly to a Turing machine, with one exception: it can pick up and move boxes, but it can't create or destroy them. So if there are 8 boxes on the track (equivalently, if there are 8 1's on the Turing Machine tape), there will always be 8 boxes (one of which may be being carried by the robot). Does anyone know if it's possible, given sufficient boxes to work with, to make a universal TM using this system? My instinct says no, but I'd be interested to know if the problem has been analysed or has a name.

Thanks
Danny
From: Sam Martin <sam.martin@ge...> - 2009-07-16 10:13:57

I was typing a reply but Simon beat me to it! Some extra bits:

You should think about reducing the sampling rate as a totally separate thing to any averaging/blurring that may also happen. When people say downsampling in computer graphics they usually mean a combination of averaging and reducing the sampling rate. You can 'downsample' simply by removing every other pixel, but this provides no defence against high frequencies aliasing.

Small changes can have a large impact. For instance, downsampling by using the midpoints, as opposed to just adding up even and odd pixel pairs, approximates a Gaussian fairly well - particularly if you do it more than once. You can produce some very nice blurs by down+up sampling using this trick.

Applying one filter then another is equivalent to convolving the two filters together first - so you can always reduce a series of filters down to a single filter (although this filter might be a lot larger!). You can google all the details, but this is a great starting point: http://www.dspguide.com/

And lastly, regardless of what the filters are, if you convolve several together the result will quickly start to look like a Gaussian anyway (known as the central limit theorem). Gaussians are like the homeland all filters want to return to.

Hope that helps,
Sam

-----Original Message-----
From: Simon Fenney [mailto:simon.fenney@...]
Sent: 16 July 2009 09:57
To: Game Development Algorithms
Subject: Re: [Algorithms] Gaussian blur kernels

Jon Watte wrote:
> Actually, they are both lowpass filters, so
> downsampling-plus-upsampling is a form of blur. However, box
> filtering (the traditional downsampling function) isn't actually all
> that great at lowpass filtering, so it can generate aliasing, which
> a Gaussian filter does not.

AFAICS a box filter won't "generate" aliasing, it's just that it's not very good at eliminating the high frequencies which cause aliasing. A Gaussian, OTOH, has better behaviour in this respect...

> (AFAICR, the Gaussian filter kernel
> shares some properties with a sinc reconstruction kernel, and thus is
> "optimal" under certain conditions/assumptions, but details are
> hazy).

...again, (and my recollection, too, is a bit hazy), the shape of a Gaussian filter in the frequency domain is, again, a Gaussian, so it's still going to let through some amount of illegally high frequencies. In order to stop that, one will probably need to make it a bit wider which, unfortunately, means it'll over-filter some of the legal frequencies you'd want to keep.

A sinc filter, when mapped into the frequency domain, is a box and so has a hard cutoff of higher frequencies, which should be ideal. OTOH, I think that a box filter, when mapped into the frequency domain, looks a bit like a sinc, which is why it behaves so poorly.

Cheers
Simon
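Sam's central-limit point is easy to verify numerically: convolving a box filter with itself just a few times already lands close to a Gaussian of matching variance (the variance of a length-L box is (L**2 - 1)/12, and variances add under convolution). A sketch with ad-hoc sizes:

```python
import numpy as np

box = np.ones(5) / 5  # normalized 5-tap box filter
k = box.copy()
for _ in range(3):    # convolve with itself: 4 boxes in total
    k = np.convolve(k, box)

# Gaussian with the same total variance: 4 boxes, each (5**2 - 1)/12 = 2
var = 4 * (5**2 - 1) / 12
x = np.arange(len(k)) - (len(k) - 1) / 2
g = np.exp(-x**2 / (2 * var))
g /= g.sum()

assert abs(k.sum() - 1) < 1e-9          # still a normalized filter
assert np.max(np.abs(k - g)) < 0.02     # already very close to a Gaussian
```

This is also why the midpoint down/up trick "approximates a Gaussian fairly well, particularly if you do it more than once": each pass convolves in another small kernel, and the central limit theorem does the rest.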
From: Fabian Giesen <f.giesen@49...>  20090716 09:39:53

>> No... That isn't how it works; A downsample shares very little in >> common with a blur. > > Actually, they are both lowpass filters, so > downsamplingplusupsampling is a form of blur. However, box filtering > (the traditional downsampling function) isn't actually all that great at > lowpass filtering, so it can generate aliasing, which a Gaussian filter > does not. (AFAICR, the Gaussian filter kernel shares some properties > with a sinc reconstruction kernel, and thus is "optimal" under certain > conditions/assumptions, but details are hazy). All "practically implementable" downsampling filters cause some degree of aliasing, including sinc; the problem is that sinc has infinite support and decays quite slowly (as 1/x), so you have to use a windowing function: either do it explicitly with a good windowing function or implicitly choose a box window by just stopping the summation at some point. Anyway, the problem with sinc is that it's very "ringy". Windowing functions help a bit, but it's still very obvious when you have highcontrast edges. Text on solid background is a good example  with sinc and other filters with a very steep frequency response, you can see "wavefronts" emanating from the edges of characters. It's a continuum  the "simple" filters (box, triangle etc.) have no ringing but exhibit serious aliasing, sinc and other "good" lowpass filters (in terms of frequency response) have neglegible aliasing and tons of ringing. Gaussians are smack in the middle  they're allpositive so there's no ringing and they have a decent frequency response. As for the original question, you can build quite good 2D lowpass filters out of gaussians and downsampling, as long as you downsample *after* lowpass filtering. One example would be 5x5 Gaussian => 2x Downsample in both directions => 5x5 Gaussian => 2x Downsample in both directions => (more Gaussian blurs) => Upsample again. The result is not a gaussian kernel, but it "looks close enough". 
It's still a lot more expensive than the more usual downsample-then-blur, since you do the Gaussian blur on a lot more pixels. What most people do for realtime applications is just do the downsample first anyway, live with the aliasing, and do something else with the milliseconds of GPU time saved. :)

Anyway, if you care for it, you can even do the downsample/filter sequence "perfectly", in the sense that it really does give you the exact filter you want: look up Multirate Filter Banks. But again, nobody uses this for realtime applications, or at least I've never seen it. And of course, to do it properly, you need a better upsampling filter than bilinear too...

Kind regards,
Fabian "ryg" Giesen
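The blur-then-downsample pyramid Fabian describes is easy to sketch. A minimal NumPy version (the kernel size, sigma, and function names here are illustrative choices, not anything specified in the thread):

```python
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    """All-positive 1D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur_2d(img, sigma=1.0, radius=2):
    """Separable Gaussian blur: filter rows, then columns."""
    k = gaussian_kernel_1d(sigma, radius)
    # 'same'-mode convolution keeps the image dimensions unchanged
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, img)
    return img

def pyramid_down(img, levels=2):
    """Lowpass *before* each 2x downsample, as the post suggests."""
    for _ in range(levels):
        img = blur_2d(img)[::2, ::2]
    return img
```

The 5-tap kernel per level is the "5x5 Gaussian => 2x downsample" step; repeating it builds the pyramid, and the composite filter seen by the final level is not Gaussian but close.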
From: Simon Fenney <simon.fenney@po...>  2009-07-16 09:12:29

Jon Watte wrote:
> Actually, they are both lowpass filters, so downsampling-plus-upsampling
> is a form of blur. However, box filtering (the traditional downsampling
> function) isn't actually all that great at lowpass filtering, so it can
> generate aliasing, which a Gaussian filter does not.

AFAICS a box filter won't "generate" aliasing, it's just that it's not very good at eliminating the high frequencies which cause aliasing. A Gaussian, OTOH, has better behaviour in this respect...

> (AFAICR, the Gaussian filter kernel shares some properties with a sinc
> reconstruction kernel, and thus is "optimal" under certain
> conditions/assumptions, but details are hazy).

...again (and my recollection, too, is a bit hazy), the shape of a Gaussian filter in the frequency domain is, again, a Gaussian, so it's still going to let through some amount of illegally high frequencies. In order to stop that, one will probably need to make it a bit wider, which, unfortunately, means it'll over-filter some of the legal frequencies you'd want to keep.

A sinc filter, when mapped into the frequency domain, is a box and so has a hard cutoff of higher frequencies, which should be ideal. OTOH, I think that a box filter, when mapped into the frequency domain, looks a bit like a sinc, which is why it behaves so poorly.

Cheers
Simon
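Simon's frequency-domain comparison is easy to check numerically: take a 5-tap box and a 5-tap Gaussian, zero-pad them, and look at their magnitude spectra. A small NumPy sketch (the tap count and sigma are arbitrary choices for illustration):

```python
import numpy as np

n = 256

# 5-tap box filter (simple average), zero-padded to length n
box = np.zeros(n)
box[:5] = 1.0 / 5.0

# 5-tap Gaussian (sigma = 1), normalized to unit DC gain
x = np.arange(-2, 3)
g = np.exp(-0.5 * x.astype(float) ** 2)
gauss = np.zeros(n)
gauss[:5] = g / g.sum()

# Magnitude responses; bin 0 is DC, the last bin is Nyquist
H_box = np.abs(np.fft.rfft(box))
H_gauss = np.abs(np.fft.rfft(gauss))
```

Both have unit gain at DC, but at Nyquist the box still passes roughly 0.2 of the signal and its response has sinc-like sidelobes, while the Gaussian is down to about 0.02 with a smooth roll-off; neither has the hard cutoff of an ideal sinc.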
From: Jon Watte <jwatte@gm...>  2009-07-16 02:36:05

Nicholas "Indy" Ray wrote:
> On Wed, Jul 15, 2009 at 4:46 PM, Joe Meenaghan<joe@...> wrote:
>> Would the 5x5 over the lower res texture be approximately equivalent
>> to a 9x9 at the higher res for example, or perhaps something along
>> these lines?
>
> No... That isn't how it works; A downsample shares very little in
> common with a blur.

Actually, they are both lowpass filters, so downsampling-plus-upsampling is a form of blur. However, box filtering (the traditional downsampling function) isn't actually all that great at lowpass filtering, so it can generate aliasing, which a Gaussian filter does not. (AFAICR, the Gaussian filter kernel shares some properties with a sinc reconstruction kernel, and thus is "optimal" under certain conditions/assumptions, but details are hazy.)

Sincerely,

jw

--
Revenge is the most pointless and damaging of human desires.
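Jon's point that downsample-plus-upsample is itself a (crude) blur can be seen directly on an impulse. A tiny NumPy sketch using a 2x average downsample and nearest-neighbour upsample, the simplest possible choices:

```python
import numpy as np

def down_up(x):
    """2x average downsample, then nearest-neighbour upsample."""
    d = 0.5 * (x[0::2] + x[1::2])
    return np.repeat(d, 2)

x = np.zeros(8)
x[3] = 1.0            # a single impulse
y = down_up(x)        # the impulse is spread over two samples
```

The impulse comes back smeared across a pair of pixels, i.e. the round trip acts as a (position-dependent) box blur; a poor lowpass, hence the aliasing.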
From: Nicholas "Indy" Ray <arelius@gm...>  2009-07-16 01:58:17

On Wed, Jul 15, 2009 at 4:46 PM, Joe Meenaghan<joe@...> wrote:
> Would the 5x5 over the lower res texture be approximately equivalent to
> a 9x9 at the higher res for example, or perhaps something along these
> lines?

No... That isn't how it works; a downsample shares very little in common with a blur.

> Similarly, what about running the 5x5 (or whatever size) N times in
> succession over the same texture? Is there a mapping that comes into
> play here, like, "X successive passes with a kernel of NxN is
> approximately equal to 1 pass with kernel of MxM"?

I think the computation goes like this: running two successive Gaussian blurs is equal to one Gaussian blur whose width is the square root of the sum of the squares of the two blurs' widths. Thus, for two 5x5 blurs: sqrt(5^2 + 5^2) ~= 7, so two 5x5 blurs are about equal to a 7x7 blur.

Nicholas "Indy" Ray
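The underlying identity is that convolving two Gaussians gives a Gaussian whose standard deviation is sqrt(sigma1^2 + sigma2^2). Strictly it is the sigmas that combine this way, not the kernel pixel counts, so the 5x5 => 7x7 figure is a rule of thumb. A quick NumPy check (the sigmas and radii here are arbitrary):

```python
import numpy as np

def gauss(sigma, radius):
    """Sampled, normalized 1D Gaussian kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

g1 = gauss(1.0, 10)
g2 = gauss(1.5, 10)

combined = np.convolve(g1, g2)             # two blur passes back to back
expected = gauss(np.hypot(1.0, 1.5), 20)   # sqrt(1.0^2 + 1.5^2)
```

For kernels truncated this generously, the two agree to well under 1e-6 per tap; for aggressively truncated kernels the match degrades a little.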
From: Joe Meenaghan <joe@ga...>  2009-07-16 00:48:05

Hey Guys,

I was just wondering if there are specific mappings with regard to the relationships between Gaussian kernel sizes, data set sizes, and number of passes. For example, what relationship exists between a 5x5 blur done over, say, a 1024x1024 texture and a 5x5 blur on a 512x512 texture of its downsampled data (assuming a simple average to downsample to the lower res)? Would the 5x5 over the lower res texture be approximately equivalent to a 9x9 at the higher res for example, or perhaps something along these lines?

Similarly, what about running the 5x5 (or whatever size) N times in succession over the same texture? Is there a mapping that comes into play here, like, "X successive passes with a kernel of NxN is approximately equal to 1 pass with kernel of MxM"?

If there are very specific mathematical relationships with proper formulae, that's great, but even if there aren't, I'd be just as happy with general rules of thumb, best practices based on experience, etc.

Thanks so much in advance for any insight you can provide!

Joe Meenaghan
The Game Institute
joe@...
From: Jon Watte <jwatte@gm...>  2009-07-12 05:39:34

Rachel Blum wrote:
> Diodes could just be treated as "open connections" (unless you're
> interested in tunneling etc)

It turns out that diodes and wires are actually significantly different, even in steady state. In the absolutely simplest case, where there is only a single linear flow, with a diode in the forward direction, there may still be something like a 0.5V drop through the diode. Once the circuit includes more branches, the differences between diodes and wire increase quite dramatically.

Sincerely,

jw

--
Revenge is the most pointless and damaging of human desires.
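For reference, the forward drop Jon mentions falls out of the Shockley diode equation, I = Is * (exp(V / (n*Vt)) - 1). A minimal sketch (the saturation current and thermal voltage below are typical textbook values, not anything specific from the thread):

```python
import math

def diode_current(v, i_s=1e-12, n=1.0, v_t=0.02585):
    """Shockley diode equation: current as a function of voltage.

    i_s : saturation current (A), ~1e-12 for a small silicon diode
    n   : ideality factor
    v_t : thermal voltage (V) at room temperature
    """
    return i_s * (math.exp(v / (n * v_t)) - 1.0)
```

The exponential means the current changes by orders of magnitude over a few tens of millivolts, which is why the forward drop sits stubbornly around 0.5-0.7V over a wide range of currents, and why a diode is nothing like a wire.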
From: Rachel Blum <rachel@ra...>  2009-07-12 00:34:01

> Not very gamey, but does anyone have a link to a general-purpose
> algorithm for solving arbitrary circuits? We've been using a simple
> Kirchhoff's-law algorithm, but it only works for linear circuits (fixed
> resistance). We need to be able to support diodes and bulbs that have
> resistance that varies with voltage, and that's baffling me a bit.

Diodes could just be treated as "open connections" (unless you're interested in tunneling etc.).

Resistance varying with voltage smells like diff. eq. to me, though...

Either way, this might be a good starting point:
http://computation.pa.msu.edu/NO/F90/PreconditioningReport.pdf

Rachel
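One standard way to handle the voltage-dependent parts is Newton-Raphson iteration on the node equations; this is essentially what SPICE-class solvers do, not something from Rachel's link. A minimal sketch for a single voltage source, series resistor, and diode (all component values are illustrative):

```python
import math

def solve_series_diode(v_src, r, i_s=1e-12, v_t=0.02585, iters=100):
    """Newton-Raphson on the node equation of source + resistor + diode:

        f(v_d) = (v_src - v_d) / r - i_s * (exp(v_d / v_t) - 1) = 0

    i.e. the resistor current must equal the diode current.
    """
    v_d = 0.6  # initial guess near a typical silicon forward drop
    for _ in range(iters):
        exp_term = math.exp(v_d / v_t)
        f = (v_src - v_d) / r - i_s * (exp_term - 1.0)
        df = -1.0 / r - (i_s / v_t) * exp_term  # derivative df/dv_d
        step = f / df
        v_d -= step
        if abs(step) < 1e-12:
            break
    return v_d
```

Each iteration re-linearizes the diode around the current operating point; a varying-resistance bulb can be handled the same way by swapping in its I(V) curve and derivative. The quoted Kirchhoff solver becomes the linear solve inside each Newton step once there is more than one node.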
From: Danny Kodicek <dragon@we...>  2009-07-09 10:29:23

> And I can totally see how you could build a game around
> building/emulating circuits.
>
> http://en.wikipedia.org/wiki/Robot_Odyssey

That sounds amazing. I made a game a while ago (just as a programmer-for-hire; it wasn't my design) called Mission:Control that was based on programming, but it doesn't sound anywhere near as sophisticated. I was rather shocked to find my children playing it at school ten years later, without recognising me on the voiceover.

Re SPICE: thanks for that, Jon, but it's possibly a bit too complicated for my purposes. The problem is that I need to get it running in Director, so I need the algorithm in a form that I can translate into Lingo. By the time I've delved into their class library it'll probably be weeks before I can get it working, even if I can get it in an uncompiled form (I had a quick nose around their website and couldn't tell). Still, it might be a useful starting point; maybe someone on their forum might be able to help.

Thanks
Danny
From: Charles Nicholson <charles.nicholson@gm...>  2009-07-07 16:49:02

On Tue, Jul 7, 2009 at 9:21 AM, Jon Watte <jwatte@...> wrote:
> And I can totally see how you could build a game around
> building/emulating circuits.
> http://en.wikipedia.org/wiki/Robot_Odyssey

Best... game... ever.