gdalgorithms-list Mailing List for Game Dev Algorithms (Page 38)
From: Sebastian S. <seb...@gm...> - 2009-04-26 12:52:12
On Sun, Apr 26, 2009 at 12:50 PM, <ne...@tw...> wrote:

> I would like to sound a note of caution on STM. It's quite nice to
> program with -- think of it as threads-and-locks, made good. But I've
> been to many presentations on STM, from prominent researchers in the
> field, and the performance figures have always indicated that it doesn't
> scale. Past about two to four threads contending on a piece of
> transactional memory, the performance usually ends up worse than one
> thread, which is hardly what you want from parallel programming.

Well, that's kind of unavoidable, though. If there is true contention on a piece of shared data, then no amount of software solutions will help you. So the name of the game is to reduce unnecessary contention (e.g. pessimistically excluding something from accessing data). For that reason, avoiding shared data as much as possible is clearly a good idea -- but once you're in the situation where you really do need it, what do you do? STM competes with locks and condition variables, not with threads and messages.

E.g. if you have a game world with 10K objects in it, and each object touches 5 other objects on average when it's updated, then *actual* contention will probably be low, whereas *potential* contention is high. That's where STM shines, since it enables optimistic concurrency: the common case of no actual contention runs quickly, at the cost of some overhead when you're unlucky and two objects touch the same data. With locks, by contrast, you have to lock everything every single time, even though it's unlikely that anything else is using it. For these cases STM does scale well beyond locks.

> I think a message-passing model is more promising, and is equally valid in
> Haskell or in C++ (disclaimer: I spend my days doing research into
> message-passing concurrency :-), whereas STM in C++ is not as
> easy and simple as the Haskell version.

Sure, message passing is another strategy, but I don't think they're in competition. STM is for shared-state concurrency, which is to be avoided but sometimes can't be. Message passing offers very little benefit w.r.t. scalability over locks and condition variables when it comes to truly shared state: you pretty much end up simulating locks with it, by having some "server thread" manage access to data, and for transactions you need to gather up exclusive access somehow, which is essentially the same as "taking a lock".

Message passing can be very nice for some things, but I think there are plenty of cases where it doesn't work very well (e.g. the example mentioned earlier). Also, I find message passing can get very complicated in some instances, as it feels a bit like "goto programming": you have to try to reason about how messages flow through several threads, and it can get hairy and "spaghetti-like", with the logic of the program obscured by the details of coordination. So while there are problems where multiple sequential processes communicating through messages fit very nicely, I'd be very careful about considering them the *only* paradigm -- we need to be able to support all the other problems too.

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862
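[Editor's note: the optimistic-concurrency model being discussed can be sketched in a few lines of Haskell, assuming GHC and the standard `stm` package; the function name and amounts are invented for illustration.]

```haskell
import Control.Concurrent.STM

-- Move 'n' units between two shared counters. 'atomically' runs the
-- block as a transaction: it commits if no other thread touched these
-- TVars in the meantime, and transparently retries otherwise.
-- Nothing is locked in the common, uncontended case.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to n = atomically $ do
  a <- readTVar from
  writeTVar from (a - n)
  b <- readTVar to
  writeTVar to (b + n)
```

Ten thousand game objects could each run such transactions concurrently; only the rare pair that actually touches the same `TVar` pays the retry cost.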
From: <ne...@tw...> - 2009-04-26 12:17:16
-------- Original Message --------
Subject: Re: [Algorithms] Complexity of new hardware
From: Sebastian Sylvan <seb...@gm...>

> Being a massive Haskell fanboy myself, let me jump in with some
> other cool things it does that relate to game development.
>
> ...
>
> 2. It has Software Transactional Memory. So when you really need
> shared mutable state you can still access it from lots of different
> threads at once with optimistic concurrency (only block when there's
> an actual conflict). Yes, there are issues, and yes it adds
> overhead, but if the alternative is single-threaded execution and
> the overhead is 2-3x, then we win once we have 4 hardware threads to
> spare.

I would like to sound a note of caution on STM. It's quite nice to program with -- think of it as threads-and-locks, made good. But I've been to many presentations on STM, from prominent researchers in the field, and the performance figures have always indicated that it doesn't scale. Past about two to four threads contending on a piece of transactional memory, the performance usually ends up worse than one thread, which is hardly what you want from parallel programming. I think a message-passing model is more promising, and is equally valid in Haskell or in C++ (disclaimer: I spend my days doing research into message-passing concurrency :-), whereas STM in C++ is not as easy and simple as the Haskell version.

> 3. Monads! Basically this allows you to overload semi-colon, which
> means you can fairly easily define your own embedded DSLs. This can
> let you write certain code a lot more easily. You could have a
> "behaviour" monad, for example, abstracting over all the details of
> entities in the game doing things which take multiple frames (so you
> don't need to litter your behaviour code with state machine code,
> saving and restoring state etc.; you just write what you want to do
> and the implementation of the monad takes care of things that need
> to "yield").
>
> 4. It's safe. Most code in games isn't systems code, so IMO it
> doesn't make sense to pay the cost of using a systems programming
> language for it (productivity, safety).
>
> 5. It's statically typed with a native compiler, meaning you could
> compile all your scripts and just link them into the game for
> release and get decent performance. Not C-like (yet, anyway!), but
> probably an order of magnitude over most dynamic languages.

Agreed on the monads and safety. Having programmed a large system in Haskell, I have found that the performance is surprisingly good, especially in terms of memory use. The laziness aspect seems to allow memory use to remain low, sometimes even lower than the equivalent C program. But I doubt that the straight-line performance is going to be as good as C, especially given the amount of garbage collection involved in Haskell programs.

There was a talk a couple of years ago by Tim Sweeney that made a case for functional programming in games (the slides can be found here: http://morpheus.cs.ucdavis.edu/papers/sweeny.pdf), for anyone that's interested. I also think that Haskell could benefit game development, but it does take a little while to get your head round functional programming, monads, and all the other stuff that Haskell has to offer.

Thanks,

Neil.

P.S. To be pedantic on the issue of the function of type a -> a (from another post), const undefined is also a valid function of that type, if an unhelpful one :-)
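[Editor's note: the "overload semicolon" idea from the quoted post can be made concrete with a minimal sketch. The `Behaviour` type, its one-frame `wait`/`step` model, and all names here are invented for illustration; they are not from the thread.]

```haskell
-- A behaviour either finishes with a result now, or yields until the
-- next frame. The Monad instance threads "the rest of the script"
-- through automatically -- that is the overloaded semicolon.
data Behaviour a = Done a | Yield (Behaviour a)

instance Functor Behaviour where
  fmap f (Done a)  = Done (f a)
  fmap f (Yield k) = Yield (fmap f k)

instance Applicative Behaviour where
  pure = Done
  Done f  <*> b = fmap f b
  Yield k <*> b = Yield (k <*> b)

instance Monad Behaviour where
  Done a  >>= f = f a
  Yield k >>= f = Yield (k >>= f)

-- Suspend until the next frame.
wait :: Behaviour ()
wait = Yield (Done ())

-- Advance a behaviour by one frame.
step :: Behaviour a -> Behaviour a
step (Yield k) = k
step d         = d

-- A script that takes three frames, with no hand-written state machine:
flash :: Behaviour String
flash = do { wait; wait; wait; return "boom" }
```

Stepping `flash` once per frame resumes it where it left off; the saving and restoring of "where we were" lives entirely in `>>=`.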
From: Sebastian S. <seb...@gm...> - 2009-04-26 10:59:38
On Sun, Apr 26, 2009 at 11:30 AM, Nicholas "Indy" Ray <ar...@gm...> wrote:

> > They do actually. Haskell always infers the *most general* type for a
> > function. Adding a type signature can specialize it, giving you faster code
> > (by avoiding the need for runtime dispatch of any polymorphic functions - if
> > the compiler knows that the * always refers to integer multiplication, it
> > can just emit the native honest-to-goodness int multiplication instruction
> > directly).
>
> That's an interesting property. I've never seriously used Haskell,
> but in ML, even though the * operation can be of type 'a -> 'a, the
> compiler will specialise it to int -> int in the cases where the type
> inference allows it to do such, with no dynamic dispatch required
> unless there is the possibility of union/any types running around
> (which I've found to be very rare).

This may require a lot of cross-module optimization, so it is a bit harder to do in general, but yes, the specialization can happen at the "use site". That requires that the use site itself knows it's using an Int and not "Num a", though. So at some point there needs to be something restricting it to a specific type, and the closer that happens to the implementation, the less cross-module optimization you need to get it.

> > (e.g. a type of a -> a has precisely one implementation, id, whereas a type
> > from Integer -> Integer has an infinite number of implementations),
>
> I'm sorry, I don't follow. A function of type 'a -> 'a is a superset
> of int -> int, thus implementations must contain the latter.

If I (or rather, the context in which a function is called) give you the type "a -> a" and ask you to implement a function satisfying that type, there is only one implementation (id). Since you know nothing about what type the parameter passed in has, you can't do anything except just return it back out. Likewise for "(a,b) -> a" (fst).

On the other hand, if I give you the type "Integer -> Integer" and ask you to write an implementation, there's an infinite number of possibilities that can satisfy that type (e.g. +1, +2, etc.). So the point is that if the expected type of a specific function is polymorphic, then you have less wiggle room to write something that satisfies the type. In some cases the number of implementations that can satisfy the type is just one, and even when you add some non-polymorphic stuff to the type, every polymorphic part will cut out a "degree of freedom" from the implementation. The fact that you know something is an Int means you can "do more" to the variable -- if it's fully polymorphic you can't do anything to it (and likewise, if it's "numeric" you can only do maths on it, and so on). Thus, the more polymorphic the type, the smaller the valid "implementation space" is, and therefore the more likely it is that an incorrect implementation will be caught by the type checker.

> Perhaps, I still don't think that C# is well designed for game
> development, and this "something similar" doesn't exist yet, and afaict
> no one is working on it. If the language doesn't exist, it's difficult
> to design the hardware to run it well.

Well, I wouldn't really consider C++ to be well designed for game development either, so it's all about the relative merits, I guess. Personally I'd prefer C# over C++ in 90% of game code, assuming that we get a proper incremental garbage collector, good static null- and out-of-bounds-checking elimination, etc. F# is looking pretty good.

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862
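[Editor's note: the "only one implementation" argument is easy to check in code. A minimal sketch, with arbitrary function names:]

```haskell
-- With a fully polymorphic argument there is nothing you can do to it
-- except hand it back, so any total function at this type is 'id':
same :: a -> a
same x = x

-- Likewise "(a, b) -> a" forces 'fst': the only value of type 'a'
-- in scope is the first component.
first :: (a, b) -> a
first (x, _) = x

-- At a concrete type the freedom returns: infinitely many distinct
-- functions inhabit Integer -> Integer.
addOne, addTwo :: Integer -> Integer
addOne n = n + 1
addTwo n = n + 2
```

(As Neil notes elsewhere in the thread, `const undefined` also technically inhabits `a -> a`; the claim is about total, terminating implementations.)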
From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-26 10:31:02
On Sun, Apr 26, 2009 at 2:08 AM, Sebastian Sylvan <seb...@gm...> wrote:

> They do actually. Haskell always infers the *most general* type for a
> function. Adding a type signature can specialize it, giving you faster code
> (by avoiding the need for runtime dispatch of any polymorphic functions - if
> the compiler knows that the * always refers to integer multiplication, it
> can just emit the native honest-to-goodness int multiplication instruction
> directly).

That's an interesting property. I've never seriously used Haskell, but in ML, even though the * operation can be of type 'a -> 'a, the compiler will specialise it to int -> int in the cases where the type inference allows it to do such, with no dynamic dispatch required unless there is the possibility of union/any types running around (which I've found to be very rare).

> As an aside, the Haskell way is actually great for writing generic code (in
> C++ et al you have to do extra work to get generic code, in Haskell you have
> to do extra work to specialize it). Also, a little hand-wavey, polymorphism
> actually restricts the amounts of valid implementations for a given type

You'll find no argument from me about the merits of type inference.

> (e.g. a type of a -> a, has precisely one implementation, id, whereas a type
> from Integer -> Integer has an infinite number of implementations),

I'm sorry, I don't follow. A function of type 'a -> 'a is a superset of int -> int, thus implementations must contain the latter.

> As for Haskell, it is always completely statically, and strongly typed. It
> just happens to infer those types for you at compile time, saving you some
> typing (no pun intended).

I hate to waste time arguing semantics/vocab, but I had meant adding "type annotations".

> I would suspect that the next gen of consoles will make provisions to run
> managed code more efficiently, and in particular mixing and matching between
> C/C++ for low level systems, and C# or similar for "everything else". E.g.
> on the Xbox 360 JIT:ed code has to run in user mode, which incurs a
> performance hit. It would certainly be nice if that wasn't the case (either
> if it didn't need to switch, or if the performance hit was smaller).

Perhaps, but I still don't think that C# is well designed for game development, and this "something similar" doesn't exist yet, and afaict no one is working on it. If the language doesn't exist, it's difficult to design the hardware to run it well.

Nicholas "Indy" Ray
From: Sebastian S. <seb...@gm...> - 2009-04-26 09:08:11
On Sun, Apr 26, 2009 at 5:00 AM, Nicholas "Indy" Ray <ar...@gm...> wrote:

> On Sat, Apr 25, 2009 at 5:01 PM, Rachel Blum <r....@gm...> wrote:
> > Actually, I'm referring to type annotations in Haskell. While they are not
> > necessary (the inference works quite well), they allow you to generate better
> > (i.e. faster/shorter) code. I'm looking to extend that into a more generic
> > system where you slowly annotate your code as you learn more about the
> > problem at hand.
>
> I meant type annotations to be included in the advantages of type
> inference, as I haven't seen a type-inferred system without optional
> type annotations. Anyway, as far as my understanding goes, type
> annotations provide no runtime performance benefits, but help to
> increase the safety of computation - a compile-time 'assert' of sorts.

They do, actually. Haskell always infers the *most general* type for a function. Adding a type signature can specialize it, giving you faster code (by avoiding the need for runtime dispatch of any polymorphic functions - if the compiler knows that the * always refers to integer multiplication, it can just emit the native honest-to-goodness int multiplication instruction directly). Also, type signatures can be used to specify explicitly that a given value should be unboxed in GHC (e.g. Int# is an unboxed int). There are other annotations which can force strict evaluation (add a ! to the field of a record type, or to the name of a parameter to a function), improving performance in many cases.

As an aside, the Haskell way is actually great for writing generic code (in C++ et al. you have to do extra work to get generic code; in Haskell you have to do extra work to specialize it). Also, a little hand-wavy: polymorphism actually restricts the number of valid implementations for a given type (e.g. a type of a -> a has precisely one implementation, id, whereas a type from Integer -> Integer has an infinite number of implementations), and if the space of legal implementations is reduced, then the space of legal *incorrect* implementations is too, meaning that with generic code an incorrect implementation is more likely to give a type error. See the Girard-Reynolds isomorphism for more, and check out djinn, which, given a (polymorphic) type, will "magically" produce an implementation for that function! Sounds completely magical, I know!

> It's nice to be able to start with more malleable code, and then add
> types later to ensure safety, I understand.

As for Haskell, it is always completely statically and strongly typed. It just happens to infer those types for you at compile time, saving you some typing (no pun intended).

> However, I do have quite a bit of experience with XNA/C#, and as it
> turns out they have a lot of problems: the performance can be
> problematic, GC isn't always desirable when developing games, and the
> large amount of bounds checking can also be problematic. Additionally,
> the nature of being a proprietary language/library vastly limits the
> platforms that can be developed for (Mono does a nice job at running
> C#, but XNA is still a Microsoft-only sort of thing). Lastly, while it
> is possible to call into C/C++ libraries through managed C++, it's
> never very pleasant, and on some platforms it may not be possible at
> all. Depending on the game, these may actually be non-problems. But I
> doubt we will be seeing any AAA titles in XNA/C# very shortly, and I
> doubt that is due to the inertia of the entire industry and C++.

I would suspect that the next gen of consoles will make provisions to run managed code more efficiently, and in particular for mixing and matching between C/C++ for low-level systems and C# or similar for "everything else". E.g. on the Xbox 360, JIT:ed code has to run in user mode, which incurs a performance hit. It would certainly be nice if that wasn't the case (either if it didn't need to switch, or if the performance hit was smaller).

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862
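[Editor's note: a minimal sketch of the two annotation effects mentioned above, specializing signatures and strictness annotations. The function names are invented for illustration, and the exact code GHC emits naturally depends on version and optimization flags.]

```haskell
{-# LANGUAGE BangPatterns #-}

-- The most general type GHC would infer for x * x is
--   Num a => a -> a
-- which, absent specialization, dispatches * through a Num dictionary.
square :: Num a => a -> a
square x = x * x

-- Pinning the type down lets the compiler emit a plain machine
-- multiply, with no dictionary dispatch.
squareInt :: Int -> Int
squareInt x = x * x

-- A bang pattern forces the accumulator at every step, so the loop
-- runs in constant space instead of building a chain of thunks.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)
```

The same `!` syntax on a record field (e.g. `data Vec = Vec !Double !Double`) makes the field strict, which is the record-type case mentioned in the post.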
From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-26 04:00:39
On Sat, Apr 25, 2009 at 5:01 PM, Rachel Blum <r....@gm...> wrote:

> Actually, I'm referring to type annotations in Haskell. While they are not
> necessary (the inference works quite well), they allow you to generate better
> (i.e. faster/shorter) code. I'm looking to extend that into a more generic
> system where you slowly annotate your code as you learn more about the
> problem at hand.

I meant type annotations to be included in the advantages of type inference, as I haven't seen a type-inferred system without optional type annotations. Anyway, as far as my understanding goes, type annotations provide no runtime performance benefits, but help to increase the safety of computation - a compile-time 'assert' of sorts.

> "Calcify" because your code becomes harder and harder to change - the price
> of specializing it for the task at hand.
>
> Since I'm at the hand-waving stage with my thoughts on this, that's about as
> much explanation as I can give - it sounded better in my mind ;)

It's nice to be able to start with more malleable code, and then add types later to ensure safety, I understand.

> I'm curious - what do you feel C++ gives you (on a systems level) that's not
> achievable with C and a decent set of libraries?
>
> Some are, some are not. None of them seem to call for C++. Systems-level
> work is done in C. If I need to step onto an OO level while doing systems
> work, ObjC seems a better choice to me, and pretty much *all* prototyping is
> done in Python or other HLLs.
>
> If I'm trying out performance-intensive stuff, I'm more than happy to throw
> rather large amounts of computational power at it if it gains me fast
> development. EC2 is your friend ;)

I mostly mean C++ as a superset of C, although I think C++ has quite a few valuable additions - destructors, and typed containers - which I often find valuable in a lot of code (depending on what value of 'system'; I wouldn't choose C++ over C for driver development, for instance). Other than that, C++ has some valuable performance characteristics. While I agree that there is great reason to prototype in higher-level languages, the performance is often not acceptable in production game code. For instance, the dynamic nature of ObjC classes can cause some problems, and the insistence that all newer programming languages be garbage-collected proves to be largely problematic.

> That's entirely due to inertia and unwillingness to explore alternatives. If
> we spent less time on reinventing existing wheels, I'm confident we could do
> a lot of useful work in terms of generating alternatives.

Creating alternatives ends up being much more difficult than switching to existing alternatives, if none of the existing alternatives are a great match.

> (Side note: I'd *really* love to focus the "game development universities"
> on that. I'd think students would benefit from doing actual research, as
> opposed to vocational training...)

I'm not sure that "game development universities" are yet mature enough for this.

> I'm surprised that you as an Indy guy (or so I guess from the signature ;)
> feel there are no alternatives. XNA/C# seems a viable one? (Note - this is
> said as a bystander. I haven't used it yet. There's only so many hours in a
> day :( )

Sorry for the confusion - the "Indy" in my signature is a nickname of mine I've had since long before I got into game development. However, I do have quite a bit of experience with XNA/C#, and as it turns out they have a lot of problems: the performance can be problematic, GC isn't always desirable when developing games, and the large amount of bounds checking can also be problematic. Additionally, the nature of being a proprietary language/library vastly limits the platforms that can be developed for (Mono does a nice job at running C#, but XNA is still a Microsoft-only sort of thing). Lastly, while it is possible to call into C/C++ libraries through managed C++, it's never very pleasant, and on some platforms it may not be possible at all. Depending on the game, these may actually be non-problems. But I doubt we will be seeing any AAA titles in XNA/C# very shortly, and I doubt that is due to the inertia of the entire industry and C++.

Nicholas "Indy" Ray
From: Rachel B. <r....@gm...> - 2009-04-26 00:02:09
> It seems to me that you are only referring to the advantages of type
> inference in Haskell,

Actually, I'm referring to type annotations in Haskell. While they are not necessary (the inference works quite well), they allow you to generate better (i.e. faster/shorter) code. I'm looking to extend that into a more generic system where you slowly annotate your code as you learn more about the problem at hand. "Calcify" because your code becomes harder and harder to change - the price of specializing it for the task at hand.

Since I'm at the hand-waving stage with my thoughts on this, that's about as much explanation as I can give - it sounded better in my mind ;)

> As I mentioned, C++ was not designed for making games; it's a very
> suitable systems language, and for many systems in game development I
> do find it enjoyable,

I'm curious - what do you feel C++ gives you (on a systems level) that's not achievable with C and a decent set of libraries?

>> At least for private projects, I've almost completely abandoned it - work
>> has a slightly higher inertia ;)
>
> I don't know if your private projects are game related.

Some are, some are not. None of them seem to call for C++. Systems-level work is done in C. If I need to step onto an OO level while doing systems work, ObjC seems a better choice to me, and pretty much *all* prototyping is done in Python or other HLLs.

If I'm trying out performance-intensive stuff, I'm more than happy to throw rather large amounts of computational power at it if it gains me fast development. EC2 is your friend ;)

> But at the
> moment there seems to be a much bigger issue than inertia in the work
> environment, which is the lack of a viable alternative in our field.

That's entirely due to inertia and unwillingness to explore alternatives. If we spent less time on reinventing existing wheels, I'm confident we could do a lot of useful work in terms of generating alternatives.

(Side note: I'd *really* love to focus the "game development universities" on that. I'd think students would benefit from doing actual research, as opposed to vocational training...)

> Nicholas "Indy" Ray

I'm surprised that you as an Indy guy (or so I guess from the signature ;) feel there are no alternatives. XNA/C# seems a viable one? (Note - this is said as a bystander. I haven't used it yet. There's only so many hours in a day :( )

Rachel
From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-25 23:07:31
On Sat, Apr 25, 2009 at 2:53 PM, Rachel Blum <r....@gm...> wrote:

> In turn extending your compile times... which is really not that high on my
> list of priorities. Writing a front end for your DSL that works with e.g.
> LLVM seems like the better choice, if you *have* to write your own DSL
> compiler.

I've found LLVM is a great compiler backend, and a pleasure to work with. I've only found a handful of cases where compiling to LLVM IR is more difficult than to C, and I feel there are an equivalent number of circumstances where the opposite is true. It's particularly pleasant using the C++ API.

> Ultimately, that's the first step into a model of "calcifying"[1] software.
> You start with an extremely malleable language, and it gradually hardens
> over time. (By compiling, providing annotations to the compiler, etc...)
>
> Haskell seems to be the farthest along that way, for now. If there are any
> other recommendations, I'd love to hear them.

It seems to me that you are only referring to the advantages of type inference in Haskell, as other than that most software will "calcify" to a degree. However, I do agree that type inference can be a very nice feature in a language, and helps to keep the whole application stable as it enlarges. But with that in mind, I feel that the family of ML languages are also very suitable, and eager evaluation seems better suited for game development.

> As far as I'm concerned, C++ is nearing a breaking point. We constantly cram
> new features into it (sorry, I meant "we extend the number of supported
> paradigms" ;), and design is by committee. Which leads to an extremely
> powerful and overly complex language that's almost unreadable. On top of
> that, the tool set is falling more and more behind.

As I mentioned, C++ was not designed for making games. It's a very suitable systems language, and for many systems in game development I do find it enjoyable; however, it starts to break down when combining systems into a large application, and additionally leads to a lot of repeated and glue code, which is often ad hoc and bug-ridden. That, combined with the very poor toolset, makes it less than ideal for writing entire games in.

> At least for private projects, I've almost completely abandoned it - work
> has a slightly higher inertia ;)

I don't know if your private projects are game related. But at the moment there seems to be a much bigger issue than inertia in the work environment, which is the lack of a viable alternative in our field.

Nicholas "Indy" Ray
From: Rachel B. <r....@gm...> - 2009-04-25 22:21:14
> Wouldn't that, like, cost more money than wringing code out of freshmen
> who should be grateful for having a job at all? If you changed that
> cornerstone of the starving game studio, what would be next?
> Well-defined acceptance milestones with testable pass/fail criteria?
> Pretty soon you'll be asking for the moon!

Even worse, it might lead to experienced people *staying* in the industry, thus allowing us to learn from previous failures. Now that's just crazy talk!

Rachel
From: Rachel B. <r....@gm...> - 2009-04-25 21:56:51
> My point was more whether the best way forward would be with a new language
> or whether new/different programming techniques for existing languages
> (which means C++, I guess) would be better.

As far as I'm concerned, C++ is nearing a breaking point. We constantly cram new features into it (sorry, I meant "we extend the number of supported paradigms" ;), and design is by committee. Which leads to an extremely powerful and overly complex language that's almost unreadable. On top of that, the tool set is falling more and more behind.

At least for private projects, I've almost completely abandoned it - work has a slightly higher inertia ;)

Rachel
From: Rachel B. <r....@gm...> - 2009-04-25 21:53:39
> Exactly - C/C++ is an excellent target format.

Maybe, possibly, C is a decent target format. C++, for all intents and purposes, is a dinosaur in an evolutionary dead end. The compile times are completely unacceptable for what it gives you. It might be worth considering a VM as your intermediate target instead.

> Haskell already compiles 'via-c'. I'd be looking at targeting any domain-
> specific language at generating C/C++, not assembly.

In turn extending your compile times... which is really not that high on my list of priorities. Writing a front end for your DSL that works with e.g. LLVM seems like the better choice, if you *have* to write your own DSL compiler.

> I note that generating C/C++ doesn't mean you can't run your DSL in an
> interpreted mode for fast development/debugging. I'd expect the tight
> control over side effects in functional languages to aid this.

Ultimately, that's the first step into a model of "calcifying"[1] software. You start with an extremely malleable language, and it gradually hardens over time. (By compiling, providing annotations to the compiler, etc...)

Haskell seems to be the farthest along that way, for now. If there are any other recommendations, I'd love to hear them.

Rachel

[1] My own terminology. If there's a commonly agreed-upon term, please let me know. (And if there's any *research* in that area, doubly so!) Thanks in advance!
|
From: Sam M. <sam...@ge...> - 2009-04-25 20:25:15
|
Yeah, that's what I'm talking about! :) I was trying to resist getting excited and going into over-sell mode, but likely undersold how much potential I think there is here. To highlight just two more points I think are important: - Haskell stands a very good chance of allowing games to really get on top of their (growing) complexity. I think this is best illustrated in the paper, "Why functional programming matters", http://www.cs.chalmers.se/~rjmh/Papers/whyfp.html. Well worth a read if you've not seen it before. - It can be interactively evaluated and extended. Working with C/C++ we get so used to living without this that I think we potentially undervalue how important a feature it is. Cheers, Sam |
|
From: Sebastian S. <seb...@gm...> - 2009-04-25 18:16:54
|
On Wed, Apr 22, 2009 at 5:52 PM, Sam Martin <sam...@ge...> wrote: > > > Wouldn't that be a tough sell? You'd already be competing with free > > implementations of Lua, Python, JavaScript and their ilk on the low > end, > > and built-in languages like UnrealScript on the high end. > > I don't think there's a market for that kind of scripting DSL. A new > language would need to eat into the remaining C++ development burden > that isn't suitable for implementing in Lua, say. Which is plenty. > > > Doesn't this bring us back full circle? I recall a statement from a > > month ago saying that we all need to think differently about how we > put > > together massively parallel software, because the current tools don't > > really help us in the right ways... > > Another reason to consider pure functional languages. This is a much > deeper topic that I'm now about to trivialise, but the referential > transparency of these languages makes them particularly suitable for > parallel evaluation. For example, GHC (arguably the most mature Haskell > compiler) can compile for an arbitrary number of cores, although it's > still an active research area as I understand it. Being a massive Haskell fanboy myself, let me jump in with some other cool things it does that relate to game development. 1. It's starting to get support for "nested data parallelism". Basically, flat data parallelism is what we get with shaders now; the problem with that is that the "per-element operation" can't itself be another data-parallel operation. NDP lets you write data-parallel operations (on arrays) where the thing you do to each element is itself another data-parallel operation. The compiler then has a team of magic pixies that fuses/flattens this into a series of flat data-parallel applications, eliminating the need to do it manually. 2. It has Software Transactional Memory. So when you really need shared mutable state, you can still access it from lots of different threads at once with optimistic concurrency (only block when there's an actual conflict). Yes, there are issues, and yes, it adds overhead, but if the alternative is single-threaded execution and the overhead is 2-3x, then we win once we have 4 hardware threads to spare. 3. Monads! Basically, this lets you overload the semicolon, which means you can fairly easily define your own embedded DSLs. This can make certain kinds of code a lot easier to write. You could have a "behaviour" monad, for example, abstracting over all the details of entities in the game doing things that take multiple frames (so you don't need to litter your behaviour code with state-machine code, saving and restoring state, etc. - you just write what you want to do, and the implementation of the monad takes care of things that need to "yield"). 4. It's safe. Most code in games isn't systems code, so IMO it doesn't make sense to pay the cost (in productivity and safety) of using a systems programming language for it. 5. It's statically typed with a native compiler, meaning you could compile all your scripts and just link them into the game for release and get decent performance. Not C-like (yet, anyway!), but probably an order of magnitude over most dynamic languages. -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862 |
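Sebastian's STM point is hard to show without running Haskell, but the optimistic-concurrency discipline it rests on - take a snapshot, compute, commit only if nothing changed, otherwise retry - can be sketched in C++ with a compare-and-swap loop. This is only the single-word flavor of the idea, not STM itself, and the function name is mine:

```cpp
#include <atomic>
#include <cassert>

// Optimistic update of one shared word: take a snapshot, compute the new
// value, and commit only if no other thread changed the word meanwhile;
// otherwise retry against the fresh value. STM automates this same
// commit-or-retry discipline over whole read/write sets, not one word.
int optimistic_add(std::atomic<int>& shared, int delta) {
    int seen = shared.load();
    // On failure, compare_exchange_weak reloads 'seen' with the current
    // value, so the loop naturally retries against fresh state.
    while (!shared.compare_exchange_weak(seen, seen + delta)) {
        // conflict detected: another thread committed first; retry
    }
    return seen + delta;
}
```

Real STM, as in GHC's Control.Concurrent.STM, applies that commit-or-retry rule atomically across every variable a transaction touched.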
|
From: Mat N. <mat...@bu...> - 2009-04-24 17:10:10
|
You still pay the cost for a variable shift when setting a bit on some platforms. MSN -----Original Message----- From: Alan Latham [mailto:ram...@gm...] Sent: Friday, April 24, 2009 9:53 AM To: Game Development Algorithms Subject: Re: [Algorithms] memory pool algorithms unless you add hierarchy to the bit-vector.. ------------------------------------------------------------------------------ Crystal Reports - New Free Runtime and 30 Day Trial Check out the new simplified licensing option that enables unlimited royalty-free distribution of the report engine for externally facing server and web deployment. http://p.sf.net/sfu/businessobjects _______________________________________________ GDAlgorithms-list mailing list GDA...@li... https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list Archives: http://sourceforge.net/mailarchive/forum.php?forum_name=gdalgorithms-list |
|
From: Alan L. <ram...@gm...> - 2009-04-24 16:52:44
|
unless you add hierarchy to the bit-vector.. ----- Original Message ----- From: "Tom Plunket" <ga...@fa...> To: "Game Development Algorithms" <gda...@li...> Sent: Saturday, April 25, 2009 2:09 AM Subject: Re: [Algorithms] memory pool algorithms > I'm also aware that setting bits in bitfields is horrifically slow on some > platforms, so you'd probably want to write whatever your solution was in > assembler so that you could optimally schedule memory reads and writes. > > -tom! |
|
From: Tom P. <ga...@fa...> - 2009-04-24 16:09:32
|
>> then clearly moving things around shouldn't be too terrible. I believe >> keeping a parallel bitfield would be more straightforward than an >> unsorted array of free elements. > > So with the parallel bit-field, each bit indicates whether the element has > been allocated, and when you iterate over all elements you skip elements > without a set bit? That was my thinking, with the 30 seconds I spent on the problem. ;) I'm not sure if it's ideal, and I'm sure its power would show off differently in sparse vs. dense allocation patterns, but compared to an array of integers that would require traversing the allocations in arbitrary order, I think it would considerably outperform the latter. However, I'm also aware that setting bits in bitfields is horrifically slow on some platforms, so you'd probably want to write whatever your solution was in assembler so that you could optimally schedule memory reads and writes. (I did battle with a certain compiler this week on just this front, but my solution was to remove the need for the set bits at all.) -tom! -- |
|
From: <Pau...@sc...> - 2009-04-24 08:49:32
|
> That'll work ok, but you can skip the array of allocated/free element > indices if the objects are small enough and you don't need pointers into > them; just copy the "lastAllocated" element into the "newlyFreed" slot > when freeing, and always stick onto the end of the array. Ahh yes, the old particle system trick :) > then clearly moving things around shouldn't be too terrible. I believe > keeping a parallel bitfield would be more straightforward than an > unsorted array of free elements. So with the parallel bit-field, each bit indicates whether the element has been allocated, and when you iterate over all elements you skip elements without a set bit? Cheers, Paul. |
|
From: Alan L. <ram...@gm...> - 2009-04-23 22:12:06
|
Adding hierarchy to your "free" bit-vector can make this really fast, and touch only a few cache-lines per alloc/free (even for huge pools). You need hardware clz tho. Alan. ----- Original Message ----- From: "Mat Noguchi" <mat...@bu...> To: "Game Development Algorithms" <gda...@li...> Sent: Friday, April 24, 2009 3:24 AM Subject: Re: [Algorithms] memory pool algorithms > I believe keeping a parallel bitfield would be more straightforward than an > unsorted array of free elements. |
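Alan's hierarchical bit-vector might look like the sketch below: a summary word marks which lower-level words still contain a free bit, so finding a free slot is two count-trailing-zeros operations and touches at most two cache lines. This assumes GCC/Clang's __builtin_ctzll for the hardware clz/ctz Alan mentions; the struct and names are mine:

```cpp
#include <cassert>
#include <cstdint>

// Two-level free bitmap: 64 words of 64 bits = up to 4096 slots.
// summary bit i is set iff words[i] has at least one free (set) bit,
// so find/alloc needs two ctz instructions and no scanning.
struct HierBitmap {
    uint64_t summary = ~0ull;  // which words contain a free bit
    uint64_t words[64];

    HierBitmap() { for (auto& w : words) w = ~0ull; }  // all slots free

    // Returns the index of a free slot and marks it allocated; -1 if full.
    int alloc() {
        if (summary == 0) return -1;
        int w = __builtin_ctzll(summary);   // first word with a free bit
        int b = __builtin_ctzll(words[w]);  // first free bit in that word
        words[w] &= ~(1ull << b);
        if (words[w] == 0) summary &= ~(1ull << w);  // word now full
        return w * 64 + b;
    }

    void free_slot(int i) {
        words[i / 64] |= 1ull << (i % 64);
        summary |= 1ull << (i / 64);  // word has a free bit again
    }
};
```

Adding a third level scales the same idea to 256K slots while still touching only three cache lines per operation.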
|
From: Alen L. <ale...@cr...> - 2009-04-23 19:54:31
|
Thursday, April 23, 2009, 6:34:16 PM, Paul wrote: > Thanks Alen, I *think* this is the same solution that Jeremiah posted and > looks like it will suit my purposes nicely Yes, I think it is the same. My ISP was down, so I didn't see that Jeremiah had already posted before me. :) I like the neat trick of joining the used and free arrays into one. Also note the difference in the address-to-index mapping. That mapping is necessary if the objects are going to be referenced from the outside, because the indices change at deallocation. If you are going to reference and delete the objects only through iteration, then you don't need it. Cheers, Alen |
|
From: Alen L. <ale...@cr...> - 2009-04-23 19:38:38
|
Thursday, April 23, 2009, 6:32:35 PM, Mat wrote: > Is the memory pool contiguous? If not, then it's kind of > scary/impossible to have indexable access, but who knows? Unless you keep a separate index table which can be contiguous. If the objects are large and/or you cannot afford to move them around, because someone has pointers to them, it is useful to allocate the objects any way you fancy, and then keep a contiguous table of index-to-pointer mappings in the ways previously described. If you are doing random deletion, you will need to store pointer-to-index mapping together with the object in that case. Alen |
|
From: Tom P. <ga...@fa...> - 2009-04-23 18:58:38
|
> A hash table actually gives you all of the above, no? And, in fact, a > hash table with chaining and a proper iterator can give you iteration of > all the members, too (although in random (hash) order). Hash what? A linked list would be the same thing if you assume order is irrelevant, but I would think the memory incoherency would be undesirable if trivially avoidable. -tom! -- |
|
From: Tom P. <ga...@fa...> - 2009-04-23 18:44:10
|
> Could you define "retrieve by index" further? Do you mean that if "n" > elements are allocated they must be reachable by a number between 0 and > n-1? If yes, you obviously can't have constant-time deletion, since > you'll have a mean of n/2 references to change for every deletion. > And if you allow holes to get constant-time deletion, what properties > do you want from your index that a pointer would not have? You only need to swap the last element into the deleted element, unless the order of the elements needs to match the order of allocation. Hence O(1) for all operations, and it can be highly memory-efficient as well. -tom! |
|
From: Jon W. <jw...@gm...> - 2009-04-23 17:58:35
|
Olivier Galibert wrote: > Could you define "retrieve by index" further? Do you mean that if "n" > elements are allocated they must be reachable by a number between 0 and > n-1? If yes, you obviously can't have constant-time deletion, since > A hash table actually gives you all of the above, no? And, in fact, a hash table with chaining and a proper iterator can give you iteration of all the members, too (although in random (hash) order). Sincerely, jw |
|
From: Jon W. <jw...@gm...> - 2009-04-23 17:52:30
|
Why can't you use both an index and a list? Would the overhead be prohibitive? Sincerely, jw Pau...@sc... wrote: > I was just wondering if anyone knew of an algorithm/method which > facilitated simple memory pool allocation of a single type but with > constant time allocation/deallocation/item retrieval by index and also > provided a 'nice' way to iterate through the used elements? > > We have a class which gives us everything but the last two together. We > can add a linked list of used elements (giving us the iteration) but then > you can't retrieve by index in constant time. Or we can retrieve by index > in constant time but then you can't iterate cleanly because there will be > holes in the memory pool. > > |
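For reference, the combination Paul asks for - constant-time alloc/free/index plus clean iteration - is what the index-array scheme discussed elsewhere in the thread provides. A hedged sketch (all names are mine; I've added a slot-to-position table so freeing by slot number stays O(1), along the lines of Alen's address-to-index mapping):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Fixed pool with O(1) alloc/free and dense iteration over live slots.
// indices[0..liveCount) are allocated slots, indices[liveCount..N) are
// free; 'where' maps a slot number back to its position in 'indices'
// so free_slot() can swap in O(1).
struct IndexPool {
    std::vector<int>    data;     // element storage, addressable by slot
    std::vector<size_t> indices;  // permutation of slot numbers
    std::vector<size_t> where;    // slot -> position in 'indices'
    size_t liveCount = 0;

    explicit IndexPool(size_t n) : data(n), indices(n), where(n) {
        for (size_t i = 0; i < n; ++i) indices[i] = where[i] = i;
    }

    // Returns a slot number, or -1 if the pool is exhausted.
    ptrdiff_t alloc() {
        if (liveCount == indices.size()) return -1;
        return (ptrdiff_t)indices[liveCount++];
    }

    void free_slot(size_t slot) {
        size_t pos  = where[slot];           // position of the dying slot
        size_t last = indices[--liveCount];  // last live slot
        std::swap(indices[pos], indices[liveCount]);
        where[last] = pos;                   // fix both back-references
        where[slot] = liveCount;
    }

    // Iterate the live elements (arbitrary order, jumping around memory,
    // which is the cache caveat raised in the thread).
    template <class F>
    void for_each(F f) {
        for (size_t i = 0; i < liveCount; ++i) f(data[indices[i]]);
    }
};
```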
|
From: Mat N. <mat...@bu...> - 2009-04-23 17:24:53
|
> I believe keeping a parallel bitfield would be more straightforward than an unsorted array of free elements. Yes. Very much so. It also makes iteration in-order very easy. How often do you need to iterate over things in order of allocation? MSN -----Original Message----- From: Tom Plunket [mailto:ga...@fa...] Sent: Thursday, April 23, 2009 9:40 AM To: Game Development Algorithms Subject: Re: [Algorithms] memory pool algorithms > Have an allocated/free list that is an array of indices into the memory > pool array. The allocated/free list is divided such that the 0 through > (M - 1) elements are allocated and the M through (N - 1) elements are > free; just use an index to the start of the free elements. The > allocated/free list is initialized with 0 through (N - 1) and the start > of the free list is index 0. When you allocate, increment the start of > the free list index; when you free an element, decrement the index and > swap the two elements. To iterate over the allocated elements, iterate > over the 0 through (M - 1) indices, and use the index to look up the > object. The problem with this is that you jump around memory when > iterating. If you are concerned with thrashing the cache, it would be > better to just go through the array and skip empty elements. > I was just wondering if anyone knew of an algorithm/method which > facilitated simple memory pool allocation of a single type but with > constant time allocation/deallocation/item retrieval by index and also > provided a 'nice' way to iterate through the used elements? That'll work ok, but you can skip the array of allocated/free element indices if the objects are small enough and you don't need pointers into them; just copy the "lastAllocated" element into the "newlyFreed" slot when freeing, and always stick onto the end of the array. 
An alternative, if your objects are "large" (so copying is undesirable) or you need to keep pointers to particular elements: Alexandrescu's small object allocator does most of this. Allocate your pool of N elements. reinterpret_cast the pointer to the front of each element to an int and put my_index+1 into it. (I.e. the first 4 bytes of each element is the index of the next free element. If your object size is smaller than 4 bytes then adjust accordingly.) Store in the pool object a "first free", and init it to zero. When you allocate an object, return the (constructed, if that's the way you roll) first free element and set firstFree to what was in the first four bytes of that element. When you deallocate the object, destroy it as necessary, stick the value of firstFree into the first four bytes and update firstFree to be myAddress - baseAddress. Now you can iterate through the free list in "random" order just by following the indices. You can't iterate the allocated list with this setup, but you could certainly allocate a parallel bitfield that flagged what was allocated and iterate that trivially. I've done something to this effect when I had to write a pooling allocator that allocated arrays of these micro-objects. The nice thing about this mechanism is that pointers to elements live for the lifetime of the element, whereas shuffling elements around means that there's no (easy) way for people to hang onto one that they're interested in. On the other hand, if the objects are small and deallocation is rare and nobody needs to know about one in particular (e.g. a particle system), then clearly moving things around shouldn't be too terrible. I believe keeping a parallel bitfield would be more straightforward than an unsorted array of free elements. -tom! 
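Tom's Alexandrescu-style embedded free list, sketched with 4-byte elements so the next-free index can live directly in the slot (the struct and names are mine; a real pool would overlay the index on arbitrary element storage via a cast, as Tom describes):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Embedded free list: each free slot stores the index of the next free
// slot, so the free list costs no extra memory. Elements never move,
// so pointers to live elements stay valid for their whole lifetime.
struct FreeListPool {
    static const uint32_t kEnd = 0xffffffffu;  // end-of-list marker
    std::vector<uint32_t> slots;  // 4-byte elements for the sketch
    uint32_t firstFree;

    explicit FreeListPool(uint32_t n) : slots(n), firstFree(0) {
        for (uint32_t i = 0; i + 1 < n; ++i) slots[i] = i + 1;  // chain
        slots[n - 1] = kEnd;
    }

    // Returns a slot index, or kEnd if the pool is exhausted.
    uint32_t alloc() {
        uint32_t i = firstFree;
        if (i != kEnd) firstFree = slots[i];  // next link becomes head
        return i;
    }

    void free_slot(uint32_t i) {
        slots[i] = firstFree;  // push onto the free list (LIFO)
        firstFree = i;
    }
};
```

As the post notes, this iterates the free list, not the allocated set; pairing it with the parallel bitfield gives clean iteration too.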
|