gdalgorithms-list Mailing List for Game Dev Algorithms (Page 1406)
From: Pierre T. <p.t...@wa...> - 2000-08-18 22:30:31
|
> This kinda stuff is great! Thanks Angel.
> (I knew there was a de Casteljau derivation of triangular
> patches but I wasn't going to derive it. :-)

DX7 exe + source: http://www.witchlord.com/projects.asp

Pierre
From: Pierre T. <p.t...@wa...> - 2000-08-18 22:24:28
|
> Dave and I are talking about the same thing here. Swept sphere volumes give
> you tighter fits, faster interference rejection, and MUCH faster proximity
> queries than OBBs. Particularly important if you're trying to stave off the
> poly^2 tests. I assumed from your topic you were talking about N-Body
> dynamic simulation, for which fast proximity queries will be extremely
> helpful.
>
> (re-)Read the paper "Fast Proximity Queries with Swept Sphere Volumes" at:
>
> You'll be glad you did, unless I'm missing your point. ;^)

I'm talking about N-Body dynamic simulation, but I'm not concerned with
reducing the poly^2 tests or even the BV^2 tests. Read the original post
again: I already have that part. For example, if I send my list of 1024
bounding boxes to the system, I get the correct list of possibly colliding
objects in output, and it already runs in a fraction of a frame. So far, so
good.

Now, my concern is this: how do I track the possibly colliding pairs? How do
I store them? How do I handle them? For the moment Adam seems to be the only
one who actually understood the problem: that's why he introduced the sparse
matrices.

Let's take an example. Assume the system (it could be your SSV system, giving
a fast answer to my proximity query) tells me pair (P, Q) is about to
collide - that is, the bounding volumes overlap. I need a way to:

1) know if they're overlapping for the first time (i.e. did they already
   overlap the frame before?)
2) retrieve some cached values for this pair (separating vectors, supporting
   vertices, contact features, whatever)

Alternatively, I need to know which pairs do *not* collide anymore, so that I
can discard their previously cached values. What I have here looks more like
a storage problem, a database problem, a design problem... it's actually not
really a CD-related problem. :)

The possible solutions would be:

1) Allocate an array of N*N entries, each entry containing a pointer to a
   temporary cache, or null if the pair is not active. Very fast access,
   perfect. But for N = 1024, forget it.
2) Put all those pointers in a linear list, using minimal RAM. P active
   pairs, P entries in a linear array - I like that. But access becomes the
   slow part. I can scan the whole thing in search of the current pair, but
   that's obviously bad - the number of active pairs can get pretty high. I
   can use better schemes: insertion sort, bisection, or even a hash table to
   access the array. But all in all it looks painful, and I was curious about
   the existing standard ways to deal with it.

...now, the SSV article is very interesting nonetheless :)

Pierre
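The pair bookkeeping Pierre is asking about is commonly handled with exactly the hash table he mentions, keyed on the ordered pair of object IDs. Below is a minimal sketch along those lines; the class name, the PairData contents and the 16-bit ID packing are illustrative assumptions, not code from the thread.

```cpp
#include <cstdint>
#include <iterator>
#include <unordered_map>
#include <utility>

struct PairData {                       // whatever gets cached per pair
    float separatingAxis[3];
    bool  firstFrame;
    std::uint32_t lastFrameSeen;
};

class PairCache {
public:
    // Call once per overlapping pair reported by the broad phase this frame.
    PairData& Touch(std::uint16_t a, std::uint16_t b, std::uint32_t frame) {
        if (a > b) std::swap(a, b);                       // order-independent key
        const std::uint32_t key = (std::uint32_t(a) << 16) | b;
        auto it = pairs_.find(key);
        if (it == pairs_.end()) {                         // overlapping for the first time
            PairData fresh{};
            fresh.firstFrame = true;
            it = pairs_.emplace(key, fresh).first;
        } else {
            it->second.firstFrame = false;                // persistent pair: cached data is valid
        }
        it->second.lastFrameSeen = frame;
        return it->second;
    }

    // Call once per frame after all Touch() calls: drop pairs the broad phase
    // did not report, i.e. pairs that no longer overlap.
    void Prune(std::uint32_t frame) {
        for (auto it = pairs_.begin(); it != pairs_.end(); )
            it = (it->second.lastFrameSeen != frame) ? pairs_.erase(it) : std::next(it);
    }

private:
    std::unordered_map<std::uint32_t, PairData> pairs_;
};
```

With 16-bit indices the key fits in one 32-bit word, which covers the 1024-object case above; Touch() answers point 1 (first-time overlap), the returned reference covers point 2 (cached values), and Prune() finds the pairs that stopped colliding.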
From: Mark W. <mwi...@cy...> - 2000-08-18 21:39:46
|
----- Original Message -----
From: "John Ratcliff" <jra...@ve...>
To: <gda...@li...>
Sent: Friday, August 18, 2000 4:19 PM
Subject: RE: [Algorithms] portal engines in outdoor environments

> > at large polygon walls. The eye-to-vertex algorithm tells me the wall is not
> > visible because I can't see its vertices. I don't know what to do about it
> > yet. Any ideas?
>
> Send more rays. For large polygons tessellate them down to some minimum
> threshold size and send rays to them.
>
> John

Assuming all this work is done as a pre-process, wouldn't it be easier to
simply render the scene and read back the frame buffer? To be more specific:

1) Assign each node some unique RGB color.
2) Render all polygons in a node using its unique RGB color.
3) Read back the frame buffer and record all unique RGB values in the image.
   This will give you all nodes visible from this node. The Z-buffer will
   take care of eliminating any invisible nodes.
4) If you do this with 6 images/orientations (like those used to make
   skyboxes) you will cover the entire view.

The main problem (which also exists in John's method) is that you don't know
where to place the camera inside the node before you render the images or do
your ray-casting. I guess you would need to repeat all this from random
locations within a node to make sure you catch all possible cases.

IIRC, the method I described above was used for Rogue Squadron on the N64.
They used some high-end SGI workstation to render thousands of images and
analyze them to determine visibility.

-Mark
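For the readback in step 3 of Mark's list, a sketch of what it might look like against an OpenGL-style API follows; the function name is an assumption, and it presumes each node was drawn flat-shaded with its ID packed into the RGB value, with lighting, texturing and dithering disabled so the colors come back exact.

```cpp
#include <GL/gl.h>
#include <cstddef>
#include <set>
#include <vector>

// Read the current frame buffer back and collect every distinct node ID that
// survived the depth test (each node rendered in a unique flat RGB color, its
// ID packed into the low 24 bits).
std::set<unsigned> ReadVisibleNodeIds(int width, int height) {
    std::vector<unsigned char> pixels(std::size_t(width) * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);        // rows are tightly packed RGB triples
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    std::set<unsigned> visible;
    for (std::size_t i = 0; i < pixels.size(); i += 3) {
        const unsigned id = (unsigned(pixels[i]) << 16) |
                            (unsigned(pixels[i + 1]) << 8) |
                             unsigned(pixels[i + 2]);
        visible.insert(id);                     // every distinct color is a visible node
    }
    return visible;
}
```

Repeating this for the six skybox-style orientations from several sample points inside a node, and taking the union, gives that node's potentially visible set as described above.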
From: Aaron D. <ri...@ho...> - 2000-08-18 21:38:39
|
While it doesn't really matter which order you choose so long as both sides use it, network standard byte order is bigendian. There are a number of C macro functions that convert short and long values to this format. (htons, htonl, ntohs, ntohl from memory) Floats, as mentioned by Pierre can be a bit more of a headache. I generally tend to avoid them if at all possible. If I require decimal values they've usually been of fixed precision so I've simply sent them as integer using a common divisor on both sides. > -----Original Message----- > From: gda...@li... > [mailto:gda...@li...]On Behalf Of Kent > Quirk > Sent: Friday, August 18, 2000 11:50 PM > To: gda...@li... > Subject: Re: [Algorithms] Network & byte order. > > > > Lionel Fumery wrote: > > We are designing our network libraries, for our next games. We > would like to > > produce cross-platfom games, with misc processors targets. > > > > In the case of multi-platform network game, we wonder if we > have to consider > > the byte-ordering of the platforms... Intel is little-endian, > whereas Apple > > (Motorola) is beg-endian. > > Anybody can tell us what platforms are little-endian, or big-endian? > > > > If all our target-platforms are little-endian, we could avoid this > > byte-swaping and then keep some CPU time for something else... > > Compared to the time spent on the network, the amount of time you'll > spend byte-swapping is so microscopic as to be invisible. > > General rule of thumb: > Don't expect to write a multibyte value as a stream of binary bytes on > one platform and expect to read it in on another and have it work. > Define your formats in a way that's independent of byte order. Either > use a text value (XML, for example) or if you need to keep the data at > minimal size (and in modem-based networking you usually do) then define > your data formats at the byte level. > > Don't say: > > "The header consists of a 4-byte unsigned int packet ID." > > say: > > "The first 4 bytes of the header are a packet ID, sent as a four byte > integer, least-significant byte first." > > Then it's unambiguous what you're doing. > > With that said, I just found some comments in the header of one of the > files on our file format (called CHUFF) in MindRover. They were written > by Nat Goodspeed, who works here: > > ------------------------------------ > WARNING! For efficiency reasons, the read/write implementations for > types > such as 'bin4' are implemented by directly examining the storage used > for the > native-type variable. This is fast, but is inherently > platform-sensitive. > CHUFF data types are little-endian by definition (so that we can have > some > hope of exchanging files between different platforms). Therefore, when > you > port this implementation to a big-endian machine, make SURE you define > 'HIBYTE1ST' as one of the compiler's command-line switches! > > Our byte-swapping big-endian implementations assume that it's still > cheaper to > make a single I/O method call for the full size of the value, exchanging > bytes > using temporary variables in memory, than it is to break out separate > I/O > operations for each byte. That may not be entirely true. But one > advantage > of this scheme is that on input, we can still test for EOF on a single > call, > rather than having to test separately for each byte. 
> > There are two different philosophical approaches to implementing a > cross-platform binary format, that is, one such as ours, in which (for > instance) bin4 must be read and written as little-endian, regardless of > the > byte order in which the platform on which we're running normally stores > its > binary integers. > > Convert on Use > -------------- > One approach is to implement a family of classes that literally define > the way > the storage will be used. For instance, bin4 could be defined as a > class > which always contains a little-endian binary integer value. We would > then > define conversions to and from ordinary binary integers, arithmetic and > logical operations, etc., so that any operation on a bin4 object results > in a > little-endian value in memory. > > The advantage of this strategy is that such fields can apparently be > composed > into structs that describe the actual byte stream. In theory you can > then > instantiate such a struct, populate some or all of its fields and just > write > it out -- or, conversely, read the struct in its entirety (or even just > map > the struct onto part of a previously-read buffer) and then just > reference some > of its fields. > > In practice, this is complicated considerably by the need to worry about > platform-dependent struct alignment requirements. But you can still > build it, > even though you sometimes end up having to define the actual data as an > array > of bytes to bypass automatic compiler alignment. > > With this approach, you need to spend considerable development time on > each > individual field type; it must support the full suite of arithmetic and > logical operations you intend to use. Those operations are, of course, > somewhat more expensive than operations on the corresponding native > type. But > this can still be a win if: > > (a) there are very many more cross-platform structs than there are field > types. The whole rationale for this approach is that you do NOT need to > implement read/write methods for each different struct; composing such > fields > into structs should then permit the structs to be transparently used on > a byte > stream. > > (b) there are very many more fields in a typical cross-platform struct > than > you actually use. (In such a case, you might consider redesigning your > protocol, since it appears to be wasteful of space!) But if you have to > live > with a protocol definition like that, the tradeoff might work in your > favor: > with these fields, you pay for the conversion each time you use them, > but you > don't have to pay for converting fields that you don't use at all. > > (c) for some reason, you need random access to parts of a buffer. For > instance, you are filling a transmission buffer with such structs, but > the > protocol requires a header struct that describes how many other structs > follow > it, and it would be expensive or impossible to determine that number in > advance. If you have a pointer to the header struct in the buffer, you > can > simply patch the count field on the fly. > > Convert on I/O > -------------- > The other approach is to define fields that store values much like > native C++ > types, so that it's reasonably easy and cheap to perform arithmetic and > logical > operations on them, but each field knows how to serialize and > deserialize > itself to a data stream. 
> > Since the conversion of each field to and from a byte stream is > explicit, you > have explicit control over such things as alignment, rather than > worrying > about what the compiler might be doing behind your back. This approach > also > allows you to use C++ classes with virtual functions, which you can't do > with > a convert-on-use mechanism since the VFT pointer is part of the storage > occupied by each class object. > > The drawback is that for each struct or class you intend to write to, or > receive from, a cross-platform data stream, you must implement specific > read/write methods that enumerate all the (persistent) fields in that > struct > or class. These methods must be maintained every time you change the > set of > fields in the struct/class. > > This can be a win if: > > (a) there are relatively few predefined structs in the protocol. > Implementing > a small set of read/write methods can be easier than implementing all > the > support methods for each convert-on-use field type. > > (b) you access the fields in your structs much more often than you > de/serialize > them from/to the data stream. You only pay for conversion at the time > you > actually read or write the fields, rather than every time you touch one > of > them. > > (c) your protocol allows you to write header information and proceed, > rather > than needing to go back and revisit the header to fix up one or more of > its > fields. That is, either protocol headers don't need to make assertions > about > the data that follows, or it's relatively easy to derive that > forward-looking > information. > > I was going to say something about dynamic composition -- the case when > you > want to read or write individual fields in an order determined at > runtime > rather than at compile time -- but actually, I think that would probably > work > out equally well either way. > > In any case, we use the convert-on-I/O approach. bin4 and friends store > data > very much like C long int, etc., but they know how to read and write > themselves from/to a data stream. > > However, for internal purposes, we find it useful to borrow a > convert-on-use > notion: within this file, we implement a LittleEndian type that always > maintains data in little-endian form. > > -------------------------- > > Hope this helps. > > Kent > > -- > ----------------------------------------------------------------------- > Kent Quirk | CogniToy: Intelligent toys... > Game Architect | for intelligent minds. > ken...@co... | http://www.cognitoy.com/ > _____________________________|_________________________________________ > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
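As a concrete illustration of Kent's "define the format at the byte level, least-significant byte first" advice, here is a minimal sketch in plain C++; it is not the CHUFF code quoted above, just the usual shift-based pattern that works unchanged on either endianness.

```cpp
#include <cstdint>

// Write a 32-bit value least-significant byte first, exactly as the format
// specification would state it, so no byte-swapping #ifdefs are needed.
inline void WriteU32LE(std::uint8_t* out, std::uint32_t v) {
    out[0] = std::uint8_t(v);
    out[1] = std::uint8_t(v >> 8);
    out[2] = std::uint8_t(v >> 16);
    out[3] = std::uint8_t(v >> 24);
}

inline std::uint32_t ReadU32LE(const std::uint8_t* in) {
    return  std::uint32_t(in[0])
         | (std::uint32_t(in[1]) << 8)
         | (std::uint32_t(in[2]) << 16)
         | (std::uint32_t(in[3]) << 24);
}
```

Because the shifts express the byte order directly, the same code is correct on little- and big-endian machines; the htons/htonl family Aaron mentions covers the big-endian network-order convention for shorts and longs.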
From: gl <gl...@nt...> - 2000-08-18 21:31:54
|
I think the idea is that you wait for the data to be spit out at the end of the pipe, so it shouldn't stall. Of course by this time you've queued a few frames and/or are already drawing some of them, hence the delay. -- gl ----- Original Message ----- From: "Bernd Kreimeier" <bk...@lo...> To: "Brian Paul" <br...@va...> Cc: <gda...@li...> Sent: Friday, August 18, 2000 10:22 PM Subject: Re: [Algorithms] portal engines in outdoor environments > Brian Paul writes: > > The actual value of this extension is questionable. The problem > > is you have to do a read-back from the hardware to get the > > occlusion test result and the hit from doing that can be substantial. > > Someone here said that the idea is to count on frame coherence > and use results from previous frames as an indicator. But yeah, > just on gut level I would not be suprised to see any kind of > readback interfere with the performance of pipelined architectures. > > > b. > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Matthew M. <ma...@me...> - 2000-08-18 21:31:40
|
Oops. The paper is at http://www.cs.unc.edu/~geom/SSV/ > -----Original Message----- > From: gda...@li... > [mailto:gda...@li...]On Behalf Of > Matthew MacLaurin > Sent: Friday, August 18, 2000 2:20 PM > To: gda...@li... > Subject: RE: [Algorithms] N-body processing > > > > > > Matthew, > > Err... I don't think swept volumes (as used in PQP) will help, > > except for 4D > > collision detection which is not what was worrying me :) But > maybe I miss > > you point. > > > > Dave, > > Welcome. Stefan actually tried that approach, but later. Well, whatever. > > Could you explain what would be the benefit of lozenges ? Especially a > > lozenge-tree ?! > > Dave and I are talking about the same thing here. Swept sphere > volumes give > you tighter fits, faster interference rejection, and MUCH faster proximity > queries than OBBs. Particularly important if you're trying to > stave off the > poly^2 tests. I assumed from your topic you were talking about N-Body > dynamic simulation, for which fast proximity queries will be extremely > helpful. > > (re-)Read the paper "Fast Proximity Queries with Swept Sphere Volumes" at: > > You'll be glad you did, unless I'm missing your point. ;^) > > > > > > /* > > Pierre Terdiman * Home: p.t...@wa... > > Software engineer * Zappy's Lair: www.codercorner.com > > */ > > > > ----- Original Message ----- > > From: Dave Eberly <eb...@ma...> > > To: <gda...@li...> > > Sent: Friday, August 18, 2000 7:56 PM > > Subject: Re: [Algorithms] N-body processing > > > > > > > From: "Matthew MacLaurin" <ma...@me...> > > > To: <gda...@li...> > > > Sent: Friday, August 18, 2000 1:02 PM > > > Subject: RE: [Algorithms] N-body processing > > > > > > > Hey! Check out the swept sphere stuff at UNC! It's the newest > > thing I've > > > > seen in a while for fast proximities and N-Body sim. Even more > > interesting > > > > is what they're using it for -- it appears ideally suited > to huge poly > > > > counts. I posted the link last week. > > > > > > I put swept sphere stuff into NetImmerse when UNC was > > > starting to look into the topic as an alternative to using > > > OBBs in a tree. I had asked Stefan Gottschalk at his > > > dissertation defense if he had considered such bounding > > > volumes, but he indicated 'no'. They work out quite well > > > because they rely on squared-distance calculations that > > > turn out to be less expensive (on average) to calculate > > > than the separating axes, especially so in situations of > > > close proximity. > > > > > > Code for building trees of spheres (points a specified > > > distance from a point), capsules (points a specified > > > distance from a line segment), and lozenges (points a > > > specified distance from a rectangle) is at my site, the > > > MgcContainment.html page. I eventually plan on > > > putting the code at my site for the collision detection, > > > but it may take a couple of months. > > > > > > -- > > > Dave Eberly > > > eb...@ma... > > > http://www.magic-software.com > > > > > > > > > > > > _______________________________________________ > > > GDAlgorithms-list mailing list > > > GDA...@li... > > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > > _______________________________________________ > > GDAlgorithms-list mailing list > > GDA...@li... > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Tim T. <sup...@gm...> - 2000-08-18 21:27:53
|
Hi!

How do I calculate the extremal points of a given (tangents, 4 points) spline
patch? Any help welcome, even little hints or links.

Thanks in advance,
Tim

--
Sent through GMX FreeMail - http://www.gmx.net
From: Bernd K. <bk...@lo...> - 2000-08-18 21:22:12
|
Brian Paul writes:
> The actual value of this extension is questionable. The problem
> is you have to do a read-back from the hardware to get the
> occlusion test result and the hit from doing that can be substantial.

Someone here said that the idea is to count on frame coherence and use
results from previous frames as an indicator. But yeah, just on a gut level I
would not be surprised to see any kind of readback interfere with the
performance of pipelined architectures.

b.
From: Matthew M. <ma...@me...> - 2000-08-18 21:21:11
|
> Matthew, > Err... I don't think swept volumes (as used in PQP) will help, > except for 4D > collision detection which is not what was worrying me :) But maybe I miss > you point. > > Dave, > Welcome. Stefan actually tried that approach, but later. Well, whatever. > Could you explain what would be the benefit of lozenges ? Especially a > lozenge-tree ?! Dave and I are talking about the same thing here. Swept sphere volumes give you tighter fits, faster interference rejection, and MUCH faster proximity queries than OBBs. Particularly important if you're trying to stave off the poly^2 tests. I assumed from your topic you were talking about N-Body dynamic simulation, for which fast proximity queries will be extremely helpful. (re-)Read the paper "Fast Proximity Queries with Swept Sphere Volumes" at: You'll be glad you did, unless I'm missing your point. ;^) > > /* > Pierre Terdiman * Home: p.t...@wa... > Software engineer * Zappy's Lair: www.codercorner.com > */ > > ----- Original Message ----- > From: Dave Eberly <eb...@ma...> > To: <gda...@li...> > Sent: Friday, August 18, 2000 7:56 PM > Subject: Re: [Algorithms] N-body processing > > > > From: "Matthew MacLaurin" <ma...@me...> > > To: <gda...@li...> > > Sent: Friday, August 18, 2000 1:02 PM > > Subject: RE: [Algorithms] N-body processing > > > > > Hey! Check out the swept sphere stuff at UNC! It's the newest > thing I've > > > seen in a while for fast proximities and N-Body sim. Even more > interesting > > > is what they're using it for -- it appears ideally suited to huge poly > > > counts. I posted the link last week. > > > > I put swept sphere stuff into NetImmerse when UNC was > > starting to look into the topic as an alternative to using > > OBBs in a tree. I had asked Stefan Gottschalk at his > > dissertation defense if he had considered such bounding > > volumes, but he indicated 'no'. They work out quite well > > because they rely on squared-distance calculations that > > turn out to be less expensive (on average) to calculate > > than the separating axes, especially so in situations of > > close proximity. > > > > Code for building trees of spheres (points a specified > > distance from a point), capsules (points a specified > > distance from a line segment), and lozenges (points a > > specified distance from a rectangle) is at my site, the > > MgcContainment.html page. I eventually plan on > > putting the code at my site for the collision detection, > > but it may take a couple of months. > > > > -- > > Dave Eberly > > eb...@ma... > > http://www.magic-software.com > > > > > > > > _______________________________________________ > > GDAlgorithms-list mailing list > > GDA...@li... > > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
From: Pierre T. <p.t...@wa...> - 2000-08-18 20:23:46
|
Adam, I admit there's no point looking for other solutions in most cases :) But as a matter of facts, I'm not doing this for a particular commerial project, I have no milestone to face, so while I'm at it....... The sparse matrix may be worth looking at. I'll give it a try. Matthew, Err... I don't think swept volumes (as used in PQP) will help, except for 4D collision detection which is not what was worrying me :) But maybe I miss you point. Dave, Welcome. Stefan actually tried that approach, but later. Well, whatever. Could you explain what would be the benefit of lozenges ? Especially a lozenge-tree ?! /* Pierre Terdiman * Home: p.t...@wa... Software engineer * Zappy's Lair: www.codercorner.com */ ----- Original Message ----- From: Dave Eberly <eb...@ma...> To: <gda...@li...> Sent: Friday, August 18, 2000 7:56 PM Subject: Re: [Algorithms] N-body processing > From: "Matthew MacLaurin" <ma...@me...> > To: <gda...@li...> > Sent: Friday, August 18, 2000 1:02 PM > Subject: RE: [Algorithms] N-body processing > > > Hey! Check out the swept sphere stuff at UNC! It's the newest thing I've > > seen in a while for fast proximities and N-Body sim. Even more interesting > > is what they're using it for -- it appears ideally suited to huge poly > > counts. I posted the link last week. > > I put swept sphere stuff into NetImmerse when UNC was > starting to look into the topic as an alternative to using > OBBs in a tree. I had asked Stefan Gottschalk at his > dissertation defense if he had considered such bounding > volumes, but he indicated 'no'. They work out quite well > because they rely on squared-distance calculations that > turn out to be less expensive (on average) to calculate > than the separating axes, especially so in situations of > close proximity. > > Code for building trees of spheres (points a specified > distance from a point), capsules (points a specified > distance from a line segment), and lozenges (points a > specified distance from a rectangle) is at my site, the > MgcContainment.html page. I eventually plan on > putting the code at my site for the collision detection, > but it may take a couple of months. > > -- > Dave Eberly > eb...@ma... > http://www.magic-software.com > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: jason w. <jas...@po...> - 2000-08-18 20:22:38
|
Better would be a pipeline that allows you to specify some sort of simple
bounding volume for each primitive (i.e. each index array, display list,
whatever). You could build these volumes with no overhead in most games, and
providing the hardware with the hint would allow it to estimate whether the
entire primitive is obscured, in which case it simply skips the whole thing.
This would require finer-grained primitives... which I guess is sort of on
the way out in these GeForce days... but I do imagine that for arrays of
moderate size, in the 100s of tris, the driver buffers and is already looking
at the next set. So it's not exactly a perfect solution, but it seems a
relatively simple feature, and it avoids the need for a readback to
application level.

I spent a while trying to dream up a good occlusion cull for this sort of
environment... if you know of the Bath model, that's pretty much my desired
environment. The view would mostly be on the ground or on top of buildings...
on top of a building looking across the cityscape is a pathological case that
breaks nearly every scheme. Polygon-level analytic solutions aren't even
worth thinking about. Some raster-based method like HOM works well in all but
the on-top-of-the-building situation... how can you pick a good set of
potential occluders in that case?

My last thoughts were of using a hash based on a linear regular octree
enumeration, storing a key for each primitive. A skip list would be used to
store an index on the keys (if you don't want to support dynamic changes to
the environment, a simpler structure would work). For each frame, cull to the
view frustum, then walk the skip list in the view in order from near to far,
using a coverage test similar to HOM. The keys would allow a fairly simple
separation test to be built, but I'm not really sure that would be faster
than just using the key to find screen bounds and check the coverage buffer.
In my case, each primitive was going to be a scalable higher-order surface of
some sort, be it SS, Bezier patch or VIPM, so the lowest level of detail
would be used to quickly rasterize into the coverage buffer. The coverage
buffer would probably be somewhere around 1/4th the display resolution.
Although it would be possible for improper rejections to happen in this case
due to the resolution and geometric error, I can live with some visibility
errors on the order of 4 pixels.

Thinking back on it now, I think for my situation moving to a geometry rep
that has some explicit idea of what's solid in the environment, like
Matthew's sloppy internal octree, makes sense. I've also been thinking more
about image-based geometry representations, and have a fuzzy feeling there's
some nice structure out there that lends itself to both.
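A rough sketch of the low-resolution coverage buffer mentioned in the post (around 1/4 of the display resolution) might look like the following; the class and its interface are assumptions for illustration, not an implementation from the thread.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Occluders are rasterized near to far, marking the cells they cover; a later
// object's screen-space bounding rectangle is rejected only if every cell it
// touches is already covered. Errors are bounded by the cell size, i.e. the
// "visibility errors on the order of 4 pixels" the post is willing to accept.
class CoverageBuffer {
public:
    CoverageBuffer(int w, int h) : w_(w), h_(h), covered_(std::size_t(w) * h, 0) {}

    void Clear() { std::fill(covered_.begin(), covered_.end(), 0); }

    void MarkCell(int x, int y) { covered_[std::size_t(y) * w_ + x] = 1; }

    bool RectFullyCovered(int x0, int y0, int x1, int y1) const {
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                if (!covered_[std::size_t(y) * w_ + x]) return false;
        return true;                            // fully hidden: safe to cull
    }

private:
    int w_, h_;
    std::vector<unsigned char> covered_;
};
```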
From: John R. <jra...@ve...> - 2000-08-18 20:14:57
|
> at large polygon walls. The eye-to-vertex algorithm tells me the wall is not
> visible because I can't see its vertices. I don't know what to do about it
> yet. Any ideas?

Send more rays. For large polygons tessellate them down to some minimum
threshold size and send rays to them.

John
From: Joe B. <jb...@av...> - 2000-08-18 20:07:27
|
> -----Original Message-----
> From: gda...@li...
> [mailto:gda...@li...]On Behalf Of John Ratcliff
> Sent: Friday, August 18, 2000 1:10 PM
> To: gda...@li...
> Subject: RE: [Algorithms] portal engines in outdoor environments
> <snip>
> What I intend to due is just cast rays out from the camera position to the
> vertex of every polygon inside the nodes I am testing against. I'm pretty
> sure this will work with no holes. If I get some holes, I'll just do some
> more samples until they go away. I don't think this is going to take that
> long to build either, because as soon as any ray hits, then that node is
> visible.

I was thinking of the same system, but ran into a problem with looking down
hallways at large polygon walls. The eye-to-vertex algorithm tells me the
wall is not visible because I can't see its vertices. I don't know what to do
about it yet. Any ideas?

    ---------------   <-- Single polygon wall (in node A)
    ----       ----
       |       |
       |       |      <-- Hallway
       |       |
           ^
          Eye (in node B)
From: Brian P. <br...@va...> - 2000-08-18 19:39:20
|
Bernd Kreimeier wrote: > > John Ratcliff writes: > > If you had the ability to ask questions of the zbuffer like "is this > > bounding volume visable?" (yes/no) in an extremely high speed fashion then > > you could do gross culling on an object by object basis using the zbuffer > > contents during the render stage. Some of the old spanbuffer software > > renderers could do this, because it was a fairly cheap question to ask, > > especially relative to the time it took to software render an entire 3d > > model. > > > > But, since you can't ask zbuffers these kinds of questions it's a moot > > point. > > http://oss.sgi.com/projects/ogl-sample/registry/HP/occlusion_test.txt > > Brian Paul mentioned that he is going to add that to Mesa > using the Glide GR_STATS counters. I have no idea which Win32 > drivers offer this extension. The actual value of this extension is questionable. The problem is you have to do a read-back from the hardware to get the occlusion test result and the hit from doing that can be substantial. The extension works now (in a development branch of the 3dfx DRI driver) but I haven't done any performance analysis. -Brian |
From: Bernd K. <bk...@lo...> - 2000-08-18 19:31:07
|
John Ratcliff writes: > If you had the ability to ask questions of the zbuffer like "is this > bounding volume visable?" (yes/no) in an extremely high speed fashion then > you could do gross culling on an object by object basis using the zbuffer > contents during the render stage. Some of the old spanbuffer software > renderers could do this, because it was a fairly cheap question to ask, > especially relative to the time it took to software render an entire 3d > model. > > But, since you can't ask zbuffers these kinds of questions it's a moot > point. http://oss.sgi.com/projects/ogl-sample/registry/HP/occlusion_test.txt Brian Paul mentioned that he is going to add that to Mesa using the Glide GR_STATS counters. I have no idea which Win32 drivers offer this extension. b. |
From: John R. <jra...@ve...> - 2000-08-18 19:05:45
|
>> What about RAM requirements for this pre-computed list?

Oh, it's not really that bad. You just keep a bitstream, where each bit
represents a 'node'. Each node has a 'node-id'. When you go through your
rendering pipeline, each time you hit a node you just test the node_id bit.
Pretty much the same way Quake does it. It's not that much memory really, and
you can always tweak your grid size. What you are looking for is gross
culling anyway. If you've got a big mountain in front of you, no sense in
drawing all the crap behind it. That sort of thing.

>> And of course there is the inevitable question of how you compute what is
>> visible to what node. I'd assume it's done as a pre-process?

Yes. In a previous engine what I did was stick a camera at a bunch of places
inside the grid that would represent "a guy standing here", and then spun the
camera around in all directions, rendering the entire world into a
span-buffer. I then queried the span buffer for which *individual polygons*
could be seen.

For this new engine I have an insanely fast ray-tracer. What I intend to do
is just cast rays out from the camera position to the vertex of every polygon
inside the nodes I am testing against. I'm pretty sure this will work with no
holes. If I get some holes, I'll just do some more samples until they go
away. I don't think this is going to take that long to build either, because
as soon as any ray hits, that node is visible.

My raytracer is extremely fast (because this is a massively multiplayer
online game which is supposed to fit hundreds of people in a single
environment firing weapons!). I wrote a class I called 'cyclops' which would
generate 256 ray-trace tasks every single frame out of the eyeballs of a
character I moved around the environment. When a raytrace task completed I
would highlight the polygon in the world that it hit and draw a ray from his
eyes to the hit location. I set the raytracer up to a 2 kilometer range on a
terrain of about 100k polygons. On this terrain I had dozens of very high
polygon density buildings, each in their own coordinate space, including two
tiled-polygon bridges each spanning about 500 meters. It was able to compute
the solution set with virtually no noticeable performance hit. The way my
raytracer and all collision detection routines work is by creating a circular
cache of polygons which are being hit-tested. The faces in the cache have all
kinds of pre-computed information to allow for very high speed hit testing.

> And do you do a full 3D solution with volumes, etc. or a 2.5D type of thing?

2 1/2 D. My game does have some low-flying aircraft, but once you are up in
the air you can pretty much see everything and LOD is your only hope there.

John
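The per-node visibility bitstream John describes can be sketched as a simple bitset-of-bitsets; the class below is an illustrative assumption (names and layout are not from the post), but it shows why the memory cost stays modest: with 1024 nodes the whole table is 1024 x 1024 bits, i.e. 128 KB.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

class NodeVisibility {
public:
    explicit NodeVisibility(std::size_t nodeCount)
        : rowWords_((nodeCount + 31) / 32),
          bits_(rowWords_ * nodeCount, 0) {}

    // Set during the offline pass (span-buffer queries or ray casting):
    // "target can be seen from somewhere inside viewer".
    void SetVisible(std::size_t viewer, std::size_t target) {
        bits_[viewer * rowWords_ + target / 32] |= 1u << (target % 32);
    }

    // Tested in the rendering pipeline: one shift and mask per node-id bit.
    bool IsVisible(std::size_t viewer, std::size_t target) const {
        return (bits_[viewer * rowWords_ + target / 32] >> (target % 32)) & 1u;
    }

private:
    std::size_t rowWords_;
    std::vector<std::uint32_t> bits_;
};
```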
From: Jason Z. <zi...@n-...> - 2000-08-18 18:48:00
|
As usual John has done something I've only idly thought about then dismissed. :) Hope you dont mind me picking your brain on this for the whole list to see. What about RAM requirements for this pre-computed list? If your terrain is fairly large I can imagine the RAM usage could get quite high. Do you cut up the terrain into very large chunks so the visibility lists at each node don't get too high? And of course there is the inevitable question of how you compute what is visible to what node. I'd assume its done as a pre-process? And do you do a full 3D solution with volumes, etc. or a 2.5D type of thing? I was thinking some sort of grid-tracing like approach might work, where you check heights at each grid sample point to see if its visible. Thanks, - Jason Zisk - nFusion Interactive LLC ----- Original Message ----- From: "John Ratcliff" <jra...@ve...> To: <gda...@li...> Sent: Friday, August 18, 2000 2:05 PM Subject: RE: [Algorithms] portal engines in outdoor environments > In my game the player is almost always on the ground. I'm just going to do > a precomputed visiblity solution on a hash grid against the ground. > Meaning, wherever you are standing it has precomputed what objects, nodes, > etc. can be seen from that position. I've done it before, works fine. > > John > > -----Original Message----- > From: Charles Bloom [mailto:cb...@cb...] > Sent: Friday, August 18, 2000 12:26 PM > To: gda...@li... > Subject: [Algorithms] portal engines in outdoor environments > > > > .. is a hopeless proposition, I think. I'd just like you > guys to check my reasoning before I give up. Consider, > for example, a building placed on a landscape. The goal > is to construct portals so that when you stand on one > side of the building, you can't see the other. If the > player is constrained to stay on the ground, this is > easy enough, it's really a 2d occlusion problem, and you > need 8 portals (two * 4 faces of a square in 2d; two portals > cuz you need one on each side of a face of the square, > parrallel to that face) which splits the universe into 9 > zones (8 outside and one inside the occluding cube). > > However, if we have a plain cube in 3d and the player can > go anywhere around it, we need 48 portals !! (8 for each of > the 6 faces on the cube). Now if you have two cubes near > eachother, you need massive numbers of portals, and they > must all not intersect with eachother or other geometry. > This quickly becomes unworkable. > > I think something like a "real-time" occcluder (shadow volumes, > etc.) is the only hope for outdoors. > > -------------------------------------- > Charles Bloom www.cbloom.com > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: gl <gl...@nt...> - 2000-08-18 18:28:14
|
As I understand it, the idea with delayed z-testing is that whilst it's horribly slow to get 'did this primitive I just tried to render get totally z-rejected?' feedback from the card instantly (due to deep pipelining going on), it's reasonable to get that info back a few frames late without stalling anything. The idea is to have that info determine your objects' LOD, ie. you render your object with a low LOD by default, unless you find out that thing is now visible, when you can bump up the tri count. As the data is delayed, there will be a change from low to hi LOD, but as we're only talking a few frames (I believe), this shouldn't be all that noticable. Note that you can't simply avoid drawing the object, due to the delay - otherwise, it would suddenly appear, a few frames later than it should. In fact, my contention is that if it's only as little as two or three frames, the higher rates you run at, the less significant the delay gets, to the point were you probably don't even see the LOD pop, or you might not even need to render until you're told it's there. Tom, how many frames exactly would we be talking about? -- gl ----- Original Message ----- From: "John Ratcliff" <jra...@ve...> To: <gda...@li...> Sent: Friday, August 18, 2000 7:18 PM Subject: RE: [Algorithms] portal engines in outdoor environments > If you had the ability to ask questions of the zbuffer like "is this > bounding volume visable?" (yes/no) in an extremely high speed fashion then > you could do gross culling on an object by object basis using the zbuffer > contents during the render stage. Some of the old spanbuffer software > renderers could do this, because it was a fairly cheap question to ask, > especially relative to the time it took to software render an entire 3d > model. > > But, since you can't ask zbuffers these kinds of questions it's a moot > point. > > John > > -----Original Message----- > From: Ignacio Castano [mailto:i6...@ho...] > Sent: Friday, August 18, 2000 1:06 PM > To: gda...@li... > Subject: RE: [Algorithms] portal engines in outdoor environments > > > Tom Forsyth wrote: > > I agree - it's a nightmare case for geometric occlusion. Delayed > Z-visiblity > > is our only hope. A shame we've been waiting two DX versions for it, and > > it's going to miss this one as well. Grrrrrrr. So while we have fancy > > vertex shaders that nothing on Earth supports, we don't have API support > for > > something that even crusty old hardware like Permedia2 and Voodoo1 > supports. > > Life... don't talk to me about life. :-) > > what do you mean with 'delayed Z-visibility'? could you explain that a bit > more? > > > Ignacio Castano > ca...@cr... > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
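To make the delayed-visibility idea concrete, here is a sketch written against the later ARB_occlusion_query-style API (OpenGL 1.5 and a header that exposes it) rather than the boolean HP extension or the DirectX support discussed in this thread; the draw callback and LOD constants are assumptions. It illustrates gl's point: the object is always drawn at some LOD, and the query result, consumed a frame or more later, only adjusts the triangle count, so nothing stalls and nothing pops in late.

```cpp
#include <GL/gl.h>
#include <functional>

constexpr int kLowLod  = 0;   // illustrative LOD levels
constexpr int kHighLod = 2;

struct OccludableObject {
    GLuint query = 0;
    bool   queryInFlight = false;
    int    lod = kLowLod;
};

void DrawWithDelayedLod(OccludableObject& obj,
                        const std::function<void(int)>& drawAtLod) {
    // 1. If a query issued a few frames ago has completed, use it to adjust the LOD.
    if (obj.queryInFlight) {
        GLuint ready = 0;
        glGetQueryObjectuiv(obj.query, GL_QUERY_RESULT_AVAILABLE, &ready);
        if (ready) {
            GLuint samples = 0;
            glGetQueryObjectuiv(obj.query, GL_QUERY_RESULT, &samples);
            obj.lod = (samples > 0) ? kHighLod : kLowLod;   // stale by design, as discussed
            obj.queryInFlight = false;
        }
    }
    if (obj.query == 0) glGenQueries(1, &obj.query);

    // 2. Always draw at the current LOD; never skip drawing based on stale data.
    if (!obj.queryInFlight) {
        glBeginQuery(GL_SAMPLES_PASSED, obj.query);
        drawAtLod(obj.lod);
        glEndQuery(GL_SAMPLES_PASSED);
        obj.queryInFlight = true;               // result is read in a later frame
    } else {
        drawAtLod(obj.lod);                     // previous query still pending
    }
}
```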
From: Dave S. <Dav...@sd...> - 2000-08-18 18:23:43
|
> FYI, the only difference between my 'old' and 'new' site
> is that there is more free stuff now. The papers and
> references are still available. The link I mentioned is to
> *free* source code. Did you check the file preambles?

All I know is that before, when I went to your site to look up a topic, it
had a paper and some source. Now, it seems that it's just source. I saw your
source for this post. I was looking for references. I didn't see the
references, which prompted my, I'll admit, hasty response.

> I am quite saddened that folks would think that
> $60 for a meaty book with source code is too
> expensive. The money I make from the book pays
> my bills so that I can continue to post *free* stuff at
> my web site. Let's hope enough people have a
> different attitude than yours :)

I wasn't criticizing your $60 book. I was criticizing the web site change.
That said, I didn't mean to offend you. I just wanted you to know my honest
opinion. I respect you as an authority in this area and having another
"master" on this list is greatly needed. If it'll make you happy, I'll buy
your book. :-)

Thanks,
-DaveS
From: Matthew M. <ma...@me...> - 2000-08-18 18:23:11
|
My current plantasy for outdoor occluding of finite terrains and villages is a sloppy-fitting "internal octree." We create an occlusion octree by filling the relevant mountains/buildings with the biggest possible collection of the largest possible AA cubes. Imagine taking a hollow shell of the building and filling it with the largest (cubic) lego model you can fit inside it. This is done with a pathetically simple in/out octree test with an arbitrary depth limit. We completely ignore building interiors; all windows and doors kick off a portal system for rendering their interiors. Our traverser switches back and forth between portals and ...this..other..thing... as it moves through doors and windows. We then get lots of simple fast math to clip the frustrum against these occluder cubes. We then test the bounding box of occludEEs against this limited frustum when deciding what to render. What we're mainly looking for is the cheap wins. We're absolutely allergic to any polygon specific algorithms, because we want BIG FAT FILL RATES from our HYPERPOLYGONATED geometry. ;^) - no view space anything - no touching the polygons - when in doubt, throw it at the hardware - VIPM is assumed There are all kinds of cheap wins that you get with this. You mainly want to catch the big, fat, stupid occlusion cases like when you're standing right next to a wall and half the city is just plain invisible. That's the most common, and it's damn cheap with this approach. The sloppy fitting is also good because it works as VIPM does its thing. You really don't want to bother with trimming the little occlusion portals, because they change so fast as you move around that if your frame rate is depending on them you'll lose anyway. Big buildings that are far enough away to be occluded by little tiny polygons are also far enough away to be taken out by VIPM, anyway. I think this is basically anti-portals, but I haven't seen that paper. There's some neat stuff with projected sillhouettes happending at MIT and one of those French research consortia. It looks really cool but more complicated. I try to avoid research wherever possible... -- Matt basic bounding-box versus frustum You know if you can describe it in 200 words it must be right... in which solid cubes are entirely inside geometry. You then cull the frustrum against the <ascii art> *** * **** * * * * * > -----Original Message----- > From: gda...@li... > [mailto:gda...@li...]On Behalf Of Tom > Forsyth > Sent: Friday, August 18, 2000 10:45 AM > To: gda...@li... > Subject: RE: [Algorithms] portal engines in outdoor environments > > > I agree - it's a nightmare case for geometric occlusion. Delayed > Z-visiblity > is our only hope. A shame we've been waiting two DX versions for it, and > it's going to miss this one as well. Grrrrrrr. So while we have fancy > vertex shaders that nothing on Earth supports, we don't have API > support for > something that even crusty old hardware like Permedia2 and > Voodoo1 supports. > Life... don't talk to me about life. :-) > > Tom Forsyth - Muckyfoot bloke. > Whizzing and pasting and pooting through the day. > > > -----Original Message----- > > From: Charles Bloom [mailto:cb...@cb...] > > Sent: 18 August 2000 18:26 > > To: gda...@li... > > Subject: [Algorithms] portal engines in outdoor environments > > > > > > > > .. is a hopeless proposition, I think. I'd just like you > > guys to check my reasoning before I give up. Consider, > > for example, a building placed on a landscape. 
The goal > > is to construct portals so that when you stand on one > > side of the building, you can't see the other. If the > > player is constrained to stay on the ground, this is > > easy enough, it's really a 2d occlusion problem, and you > > need 8 portals (two * 4 faces of a square in 2d; two portals > > cuz you need one on each side of a face of the square, > > parrallel to that face) which splits the universe into 9 > > zones (8 outside and one inside the occluding cube). > > > > However, if we have a plain cube in 3d and the player can > > go anywhere around it, we need 48 portals !! (8 for each of > > the 6 faces on the cube). Now if you have two cubes near > > eachother, you need massive numbers of portals, and they > > must all not intersect with eachother or other geometry. > > This quickly becomes unworkable. > > > > I think something like a "real-time" occcluder (shadow volumes, > > etc.) is the only hope for outdoors. > > > > -------------------------------------- > > Charles Bloom www.cbloom.com > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list > |
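Matthew's "fill the building with the biggest possible axis-aligned cubes" build step reduces to a very small recursion. The sketch below is an illustrative assumption (the in/out tests are passed in as predicates and not defined here), not his code.

```cpp
#include <functional>
#include <vector>

struct Aabb { float min[3], max[3]; };

// Recursively subdivide the occluder's bounding cube; keep any cell that lies
// entirely inside the closed mesh as a solid occluder cube, and stop at an
// arbitrary depth limit; a sloppy fit is acceptable by design.
void CollectOccluderCubes(const Aabb& cell, int depth, int maxDepth,
                          const std::function<bool(const Aabb&)>& fullyInside,
                          const std::function<bool(const Aabb&)>& fullyOutside,
                          std::vector<Aabb>& occluders) {
    if (fullyInside(cell)) { occluders.push_back(cell); return; }   // largest cube that fits here
    if (depth >= maxDepth || fullyOutside(cell)) return;
    for (int child = 0; child < 8; ++child) {                       // recurse into the 8 octants
        Aabb sub = cell;
        for (int axis = 0; axis < 3; ++axis) {
            const float mid = 0.5f * (cell.min[axis] + cell.max[axis]);
            if (child & (1 << axis)) sub.min[axis] = mid; else sub.max[axis] = mid;
        }
        CollectOccluderCubes(sub, depth + 1, maxDepth, fullyInside, fullyOutside, occluders);
    }
}
```

At render time the frustum is clipped against these cubes and occludees are tested by bounding box only, which keeps everything off the polygon level, as the post insists.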
From: Jaimi M. <ja...@al...> - 2000-08-18 18:19:59
|
I use a combination approach. The exteriors of buildings are always visible when you are on the grid, and Portals only link to the interiors of the buildings. When you are inside, the terrain grid is drawn last (if any portal to the outside is visible). Jaimi -----Original Message----- From: gda...@li... [mailto:gda...@li...]On Behalf Of Charles Bloom Sent: Friday, August 18, 2000 12:26 PM To: gda...@li... Subject: [Algorithms] portal engines in outdoor environments .. is a hopeless proposition, I think. I'd just like you guys to check my reasoning before I give up. Consider, for example, a building placed on a landscape. The goal is to construct portals so that when you stand on one side of the building, you can't see the other. If the player is constrained to stay on the ground, this is easy enough, it's really a 2d occlusion problem, and you need 8 portals (two * 4 faces of a square in 2d; two portals cuz you need one on each side of a face of the square, parrallel to that face) which splits the universe into 9 zones (8 outside and one inside the occluding cube). However, if we have a plain cube in 3d and the player can go anywhere around it, we need 48 portals !! (8 for each of the 6 faces on the cube). Now if you have two cubes near eachother, you need massive numbers of portals, and they must all not intersect with eachother or other geometry. This quickly becomes unworkable. I think something like a "real-time" occcluder (shadow volumes, etc.) is the only hope for outdoors. -------------------------------------- Charles Bloom www.cbloom.com _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: John R. <jra...@ve...> - 2000-08-18 18:13:32
|
If you had the ability to ask questions of the zbuffer like "is this bounding volume visable?" (yes/no) in an extremely high speed fashion then you could do gross culling on an object by object basis using the zbuffer contents during the render stage. Some of the old spanbuffer software renderers could do this, because it was a fairly cheap question to ask, especially relative to the time it took to software render an entire 3d model. But, since you can't ask zbuffers these kinds of questions it's a moot point. John -----Original Message----- From: Ignacio Castano [mailto:i6...@ho...] Sent: Friday, August 18, 2000 1:06 PM To: gda...@li... Subject: RE: [Algorithms] portal engines in outdoor environments Tom Forsyth wrote: > I agree - it's a nightmare case for geometric occlusion. Delayed Z-visiblity > is our only hope. A shame we've been waiting two DX versions for it, and > it's going to miss this one as well. Grrrrrrr. So while we have fancy > vertex shaders that nothing on Earth supports, we don't have API support for > something that even crusty old hardware like Permedia2 and Voodoo1 supports. > Life... don't talk to me about life. :-) what do you mean with 'delayed Z-visibility'? could you explain that a bit more? Ignacio Castano ca...@cr... _______________________________________________ GDAlgorithms-list mailing list GDA...@li... http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Tom H. <to...@3d...> - 2000-08-18 18:07:10
|
At 10:45 AM 8/18/2000, you wrote: >I just found out about this list group and my post to >your request was my first. What a great introduction! >(sarcasm intended) Welcome to the list. >FYI, the only difference between my 'old' and 'new' site >is that there is more free stuff now. The papers and >references are still available. The link I mentioned is to >*free* source code. Did you check the file preambles? > >The book is a separate issue. The source code that >comes with the book includes a lot of the free stuff at >my web site, but it also has code that may only be >used for non-commercial purposes. That 'restricted' >code will not be available at my web site. > >I am quite saddened that folks would think that >$60 for a meaty book with source code is too >expensive. The money I make from the book pays >my bills so that I can continue to post *free* stuff at >my web site. Let's hope enough people have a >different attitude than yours :) We just got spoiled :) As for the book, I'll probably buy it when it ships, although I've never heard of anyone making enough money off a tech book to actually pay the bills :) Tom |
From: Ignacio C. <i6...@ho...> - 2000-08-18 18:05:44
|
Tom Forsyth wrote: > I agree - it's a nightmare case for geometric occlusion. Delayed Z-visiblity > is our only hope. A shame we've been waiting two DX versions for it, and > it's going to miss this one as well. Grrrrrrr. So while we have fancy > vertex shaders that nothing on Earth supports, we don't have API support for > something that even crusty old hardware like Permedia2 and Voodoo1 supports. > Life... don't talk to me about life. :-) what do you mean with 'delayed Z-visibility'? could you explain that a bit more? Ignacio Castano ca...@cr... |
From: Michael S. H. <mic...@ud...> - 2000-08-18 18:01:05
|
Dave, While I can't yet say that I've made use of your free software (although I have checked out the various papers and such) I feel confident that *your* $60 book is likely to be well worth the price, give the number of times your site's been referred to on this list. As soon as it's available, I plan on picking it up. Can't have enough good references on the shelf. PS. Ron, if you're reading this, get to work! I want your book as well :-) At 01:45 PM 8/18/00, you wrote: >> Dave Eberly wrote: >> > >> > I have source code for tessellation by recursive subdivision >> > of quadratic or cubic Bezier triangles, rectangles, or >> > cylinders at http://www.magic-software.com/MgcSurface.html >> > >> >> Thanks anyway Dave. Just FYI, I liked your old site >> better since it included papers and references on it >> as opposed to source with book purchase. But I can't >> dog you for trying to make a living. > >I just found out about this list group and my post to >your request was my first. What a great introduction! >(sarcasm intended) > >FYI, the only difference between my 'old' and 'new' site >is that there is more free stuff now. The papers and >references are still available. The link I mentioned is to >*free* source code. Did you check the file preambles? > >The book is a separate issue. The source code that >comes with the book includes a lot of the free stuff at >my web site, but it also has code that may only be >used for non-commercial purposes. That 'restricted' >code will not be available at my web site. > >I am quite saddened that folks would think that >$60 for a meaty book with source code is too >expensive. The money I make from the book pays >my bills so that I can continue to post *free* stuff at >my web site. Let's hope enough people have a >different attitude than yours :) > >-- >Dave Eberly >eb...@ma... >http://www.magic-software.com > > > >_______________________________________________ >GDAlgorithms-list mailing list >GDA...@li... >http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |