gdalgorithms-list Mailing List for Game Dev Algorithms (Page 37)
From: Alen L. <ale...@cr...> - 2009-04-28 07:30:55

Hi all,

When generating 2nd order SHs by convolving environment cubemaps, in some rare cases I get strange artefacts that look like they are just a limitation of 2nd order SHs, but I'm not sure, so I figured I'd ask. The problematic case is where there is only a handful of very bright (~2000x the cubemap average) pixels in one narrow direction (~0.5 deg). (This example is generated by a sun disk in the sky, seen through a small window in a dark room.) In this case I get large undershooting (manifesting as a black spot) in some other direction. Depending on the bright direction, the black spot is sometimes on the opposite side, but sometimes it is e.g. 90 or 135 degrees from the bright side. I was expecting to see something like this in the exact opposite direction, so I'm a bit surprised to see it move around. Then again, I'm not an SH expert, so I'm probably wrong.

So, to ask a concrete question: is it normal for SH approximation errors in such "spiky" cases to produce undershooting in directions that are not directly opposite the brightest point?

Thanks,
Alen

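For a single ideal point source this is easy to sanity-check in closed form (a sketch, assuming an unwindowed order-2 projection of a unit-intensity delta at direction $\omega_0$, ignoring the rest of the map). The reconstruction is rotationally symmetric about $\omega_0$:

$$\hat f(\omega)=\frac{1}{4\pi}\sum_{l=0}^{2}(2l+1)\,P_l(t)=\frac{1}{4\pi}\left[1+3t+\tfrac{5}{2}\left(3t^2-1\right)\right],\qquad t=\omega\cdot\omega_0 .$$

Setting the derivative $3+15t$ to zero gives $t=-1/5$, i.e. the minimum sits about $101.5^\circ$ away from the bright direction, not at the antipode, with value $\hat f=-1.8/(4\pi)<0$: an undershoot, i.e. a black spot. An ambient term or a windowing function changes the coefficients of this quadratic in $t$, so the location of the minimum slides around with the content of the map, which would be consistent with the black spot appearing anywhere from roughly $90^\circ$ to $180^\circ$.
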
From: Marc B. R. <mar...@or...> - 2009-04-28 06:54:47

> So, how to design a well-adopted, robust, small, powerful language with
> pretty syntax and an effective C integration layer? And what language
> has all that but I've missed? :-)

I've always wanted to goof around with http://www.iolanguage.com/. The first real OO language I learned was Smalltalk: http://code.google.com/p/syx/ (haven't looked at this either). Neither AngelScript nor Lua has ever appealed to me.

From: Jon W. <jw...@gm...> - 2009-04-28 04:24:57

Jason Hughes wrote:
> known offline. Pawn also was a register machine rather than a stack
> machine, so it could be compiled to native code and translated to very
> fast code, compared to most other JITs.

Is that really a difference? You can transform between the two systems, so they are semantically equivalent, although some constructs have more straightforward transforms than others. That, and the fact that the x86/32 is almost a stack machine already ;-)

> LUA worries me a bit because I've heard it's a little fast-and-loose
> with memory, though I imagine that depends a lot on the way scripts
> are written.

I've loved Lua to death for a long time for its smallish code size, its straightforward C integration, its clear separation of interpreter context and its lightweight approach to closures. However, despite that, I've never come to really like the Lua syntax. It's just slightly too weird to look good. Meanwhile, Python has always had a nice library and light-weight syntax (different enough to be unique, for sure!), but its C integration and size are quite cumbersome, and trying to do closures from C is like trying to pull teeth. (No, boost::python doesn't much help, I'm afraid.)

So, how to design a well-adopted, robust, small, powerful language with pretty syntax and an effective C integration layer? And what language has all that but I've missed? :-)

Sincerely,

jw

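Jon's equivalence point can be made concrete with a toy translation (a sketch with invented opcodes, not Pawn's or Lua's actual instruction sets): name each stack slot after its depth and the conversion to register code is mechanical.

```haskell
-- a + b * c as toy stack-machine and register-machine code.
data StackOp = SLoad Char | SAdd | SMul                          deriving Show
data RegOp   = RLoad Int Char | RAdd Int Int Int | RMul Int Int Int
                                                                 deriving Show

-- Stack form: push a, b, c; multiply; add.
stackCode :: [StackOp]
stackCode = [SLoad 'a', SLoad 'b', SLoad 'c', SMul, SAdd]

-- Mechanical translation: the stack depth at each point names the register.
stackToReg :: [StackOp] -> [RegOp]
stackToReg = go 0
  where
    go _ []              = []
    go d (SLoad v : ops) = RLoad d v                : go (d + 1) ops
    go d (SAdd    : ops) = RAdd (d-2) (d-2) (d-1)   : go (d - 1) ops
    go d (SMul    : ops) = RMul (d-2) (d-2) (d-1)   : go (d - 1) ops

-- stackToReg stackCode ==
--   [RLoad 0 'a', RLoad 1 'b', RLoad 2 'c', RMul 1 1 2, RAdd 0 0 1]
```
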
From: Jason H. <jas...@di...> - 2009-04-28 02:46:45

What's missing in C# is someone to convert my entire runtime library and tools into it, without losing any of the platform-specific bit twiddling or the careful memory-tweaked asset management, and making it interface to all the 1st party libraries that we have to link to. :-) C# is great for certain kinds of tools, but as a complete replacement for a runtime system... I think anything that isn't a systems language will be at a disadvantage for various reasons, and not solely due to inertia.

(To hijack the thread a bit.) While my engine is currently C++, I'm going with Lua for the majority of the game systems. I seriously considered Pawn and Game Monkey. The main thing that seemed good about Game Monkey is that it's very close to C syntax, so I could code in it with minimal re-education. However, the lack of a really nice debugger and the considerably smaller community supporting it make it less attractive than Lua, for example. Pawn was neat, but isn't quite mature enough to be completely useful--it still has occasional parsing problems, can't easily reload a script live, etc. It did have a really nice feature that drew me toward it: the exact memory requirements for a script were known offline. Pawn also was a register machine rather than a stack machine, so it could be compiled to native code and translated to very fast code, compared to most other JITs. Lua worries me a bit because I've heard it's a little fast-and-loose with memory, though I imagine that depends a lot on the way scripts are written.

JH

Randall Bosetti wrote:
> Is Mono lacking maturity in its tools or its runtime?
>
> Miguel's recent announcement of a fast SIMD implementation for Mono
> (http://tirania.org/blog/archive/2008/Nov-03.html) strongly indicates
> that performance is not an issue. In fact, the benchmarks linked from
> his post indicate that the new runtime can beat naive C++
> implementations. Of course, this is only for bare number-crunching.
>
> MonoDevelop seems to be lacking support for dedicated console
> development (no remote debugger, etc.), but is there something missing
> for desktop games? Any info you have is appreciated: I'm considering
> writing a simple shooter in C#.
>
> - Randall

From: Andrew V. <and...@ni...> - 2009-04-27 16:22:28

> Andrew Vidler wrote:
> > Of course there are, and all those other axes are definitely important.
> > But again, IME, if you lack design then they pale into insignificance
> > for all but the simplest task.
>
> Surprisingly, I've very seldom worked on projects where "lack of design"
> was the main reason for problems. In my experience, what gets
> smart people in high-level trouble is either designing the
> wrong thing (or for the wrong target), or over-designing,
> which leads to an overly rigid, verbose or cumbersome implementation.

Yeah, I agree with that. In my mind, I'd included getting a good problem spec' as part of the design process.

> Half of the time, though, problems aren't technical at all,
> but more of the "let's put ten elephants into one phone booth
> by next Thursday" kind. Those problems, we should probably discuss
> on sweng-gamedev :-)

Good point. :)

Cheers,
Andrew.

From: Jon W. <jw...@gm...> - 2009-04-27 16:16:34

Andrew Vidler wrote:
> Of course there are, and all those other axes are definitely important.
> But again, IME, if you lack design then they pale into insignificance
> for all but the simplest task.

Surprisingly, I've very seldom worked on projects where "lack of design" was the main reason for problems. In my experience, what gets smart people in high-level trouble is either designing the wrong thing (or for the wrong target), or over-designing, which leads to an overly rigid, verbose or cumbersome implementation.

Half of the time, though, problems aren't technical at all, but more of the "let's put ten elephants into one phone booth by next Thursday" kind. Those problems, we should probably discuss on sweng-gamedev :-)

Sincerely,

jw

From: Rachel B. <r....@gm...> - 2009-04-27 15:50:57

>> IME, for anything other than the smallest projects, inferior design
>> *never* leads to your product shipping faster.
>
> Is there an "Inferior-Superior" axis when evaluating software designs?

In game development, many people seem to believe so. It goes like this: "I wrote this" -> "people I trust wrote this" -> "everybody else" ;)

Rachel

From: Andrew V. <and...@ni...> - 2009-04-27 09:51:55

Of course there are, and all those other axes are definitely important. But again, IME, if you lack design then they pale into insignificance for all but the simplest task. I'm not saying it's impossible to produce medium-to-large systems with a bad design, just that they generally take longer to finish and are harder to adapt when requirements change.

Cheers,
Andrew.

> -----Original Message-----
> From: Alen Ladavac [mailto:ale...@cr...]
> Sent: 27 April 2009 10:44
> To: Andrew Vidler
> Cc: 'Game Development Algorithms'
> Subject: Re: [Algorithms] Complexity of new hardware
>
> What I'm trying to say is that the problem is not
> unidimensional. IME, there is a multitude of axes to consider
> in evaluating engineering approaches, and "shipping on time"
> is a weighted sum of all those axes, where weights may be
> different for each team, or each project.
>
> Guess I'm just allergic to banalisation, sorry. :p
>
> Alen
>
> Andrew wrote at 4/27/2009:
> > Unless you're trying to assert that you can't look at two designs and
> > tell which one you think is better, then yes.
> > Cheers,
> > Andrew.
>
> >> Andrew wrote at 4/27/2009:
> >> > IME, for anything other than the smallest projects, inferior design
> >> > *never* leads to your product shipping faster.
> >>
> >> Is there an "Inferior-Superior" axis when evaluating software designs?
> >>
> >> JM2C,
> >> Alen

From: Alen L. <ale...@cr...> - 2009-04-27 09:43:42

What I'm trying to say is that the problem is not unidimensional. IME, there is a multitude of axes to consider in evaluating engineering approaches, and "shipping on time" is a weighted sum of all those axes, where weights may be different for each team, or each project.

Guess I'm just allergic to banalisation, sorry. :p

Alen

Andrew wrote at 4/27/2009:
> Unless you're trying to assert that you can't look at two designs and
> tell which one you think is better, then yes.
> Cheers,
> Andrew.

>> Andrew wrote at 4/27/2009:
>> > IME, for anything other than the smallest projects, inferior design
>> > *never* leads to your product shipping faster.
>>
>> Is there an "Inferior-Superior" axis when evaluating software designs?
>>
>> JM2C,
>> Alen

--
Alen

From: Andrew V. <and...@ni...> - 2009-04-27 09:30:18

Unless you're trying to assert that you can't look at two designs and tell which one you think is better, then yes. :)

Cheers,
Andrew.

> -----Original Message-----
> From: Alen Ladavac [mailto:ale...@cr...]
> Sent: 27 April 2009 10:23
> To: Andrew Vidler
> Cc: 'Game Development Algorithms'
> Subject: Re: [Algorithms] Complexity of new hardware
>
> Andrew wrote at 4/27/2009:
> > IME, for anything other than the smallest projects, inferior design
> > *never* leads to your product shipping faster.
>
> Is there an "Inferior-Superior" axis when evaluating software designs?
>
> JM2C,
> Alen

From: Alen L. <ale...@cr...> - 2009-04-27 09:23:14

Andrew wrote at 4/27/2009:
> IME, for anything other than the smallest projects, inferior design
> *never* leads to your product shipping faster.

Is there an "Inferior-Superior" axis when evaluating software designs?

JM2C,
Alen

From: Sylvain G. V. <vi...@ii...> - 2009-04-27 09:14:19

> GC just allows you to be lazy in this regard.

Actually, good examples against using GC have a name: Mozilla Firefox, the never-satisfied memory piggy eater :D Still, I love my Ffox.

Sorry for being slightly OT.

From: Sebastian S. <seb...@gm...> - 2009-04-27 07:24:34

On Sun, Apr 26, 2009 at 10:17 PM, Nicholas "Indy" Ray <ar...@gm...> wrote:
> On Sun, Apr 26, 2009 at 3:59 AM, Sebastian Sylvan
> <seb...@gm...> wrote:
> > If I (or rather, the context in which a function is called) give you the
> > type "a->a", and ask you to implement a function satisfying that type,
> > there is only one implementation (id). Since you know nothing about what
> > type the parameter passed in has, you can't do anything except just
> > return it back out. Likewise for "(a,b)->a" (fst).
> > On the other hand, if I give you the type "Integer->Integer" and ask you
> > to write an implementation, there's an infinite number of possibilities
> > that can satisfy that type (e.g. +1, +2, etc.).
> > So the point is that if the expected type of a specific function is
> > polymorphic, then you have less wiggle room to write something that
> > satisfies the type - and in some cases the number of implementations
> > that can satisfy the type is just one, but even when you add some
> > non-polymorphic stuff to the type, every polymorphic part will cut out a
> > "degree of freedom" from the implementation. The fact that you know
> > something is an Int means you can "do more" to the variable - if it's
> > fully polymorphic you can't do anything to it (and likewise if it's
> > "numeric" you can only do maths on it, and so on).
> > Thus, the more polymorphic the type, the smaller the valid
> > "implementation space" is, and therefore the more likely it is that an
> > incorrect implementation will be caught by the type checker.
>
> Ahh, I understand. And I feel this is the beauty of type inference, as
> the simple act of providing an implementation for a function
> automatically specializes it. In Caml, for instance, I do not have to
> provide any type annotations for the function
>     let f (x, y, z) = x +. y +. z;;
> in order for the compiler to know that it is of type
> float * float * float -> float, and thus the only time you encounter
> an 'a -> 'a is for identity, which doesn't occur very often.

Well, the context in which the function is used can constrain (or generalize, depending on your perspective) it to be more polymorphic (e.g. it may be used for both floats and ints). I guess my point is that if the language encourages you to write polymorphic code (in fact, makes it the default), then the amount of "slack" in the implementation space that will still satisfy the type checker is reduced, which leads to the "works once the compiler stops complaining" effect.

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862

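To make the "implementation space" argument concrete in Haskell (the function names here are arbitrary, chosen just to show the shape of the types):

```haskell
-- With a fully polymorphic type, parametricity pins down the implementation:
-- the only total function of type a -> a is the identity.
mystery :: a -> a
mystery x = x

-- Likewise, (a, b) -> a can only ever project the first component.
first :: (a, b) -> a
first (x, _) = x

-- A monomorphic type leaves infinite wiggle room; the type checker cannot
-- tell these apart, so a wrong one slips through.
bump :: Integer -> Integer
bump n = n + 1          -- n + 2, n * n, ... all satisfy the same type
```
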
From: Randall B. <rbo...@gm...> - 2009-04-27 07:07:43

Is Mono lacking maturity in its tools or its runtime?

Miguel's recent announcement of a fast SIMD implementation for Mono (http://tirania.org/blog/archive/2008/Nov-03.html) strongly indicates that performance is not an issue. In fact, the benchmarks linked from his post indicate that the new runtime can beat naive C++ implementations. Of course, this is only for bare number-crunching.

MonoDevelop seems to be lacking support for dedicated console development (no remote debugger, etc.), but is there something missing for desktop games? Any info you have is appreciated: I'm considering writing a simple shooter in C#.

- Randall

On Sun, Apr 26, 2009 at 6:18 PM, Nicholas "Indy" Ray <ar...@gm...> wrote:
> On Sun, Apr 26, 2009 at 5:46 PM, Jon Watte <jw...@gm...> wrote:
> > If you do PC games, then C# is within inches of being a totally suitable
> > general purpose replacement, and it already is a good replacement for
> > many specific games or subsystems. It has nice reflection, you can poke
> > at objects while you're developing the classes, it has good interfacing
> > to existing native libraries, it has good performance, it allows
> > byte-by-byte access, etc.
>
> I don't know if PC games include Mac or other *nix systems, but I
> don't feel Mono is yet mature enough for game development, and thus
> for those who choose not to develop for Microsoft's platforms
> exclusively it's not yet a suitable replacement.
>
> Nicholas "Indy" Ray

--
All great deeds and all great thoughts have a ridiculous beginning.

From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-27 01:18:38

On Sun, Apr 26, 2009 at 5:46 PM, Jon Watte <jw...@gm...> wrote:
> If you do PC games, then C# is within inches of being a totally suitable
> general purpose replacement, and it already is a good replacement for
> many specific games or subsystems. It has nice reflection, you can poke
> at objects while you're developing the classes, it has good interfacing
> to existing native libraries, it has good performance, it allows
> byte-by-byte access, etc.

I don't know if PC games include Mac or other *nix systems, but I don't feel Mono is yet mature enough for game development, and thus for those who choose not to develop for Microsoft's platforms exclusively it's not yet a suitable replacement.

Nicholas "Indy" Ray

From: Jon W. <jw...@gm...> - 2009-04-27 00:46:19

Nicholas "Indy" Ray wrote:
>> At least for private projects, I've almost completely abandoned it -
>> work has a slightly higher inertia ;)
>
> I don't know if your private projects are game related. But at the
> moment there seems to be a much bigger issue than inertia in the work
> environment, which is the lack of a viable alternative in our field.

If you do PC games, then C# is within inches of being a totally suitable general purpose replacement, and it already is a good replacement for many specific games or subsystems. It has nice reflection, you can poke at objects while you're developing the classes, it has good interfacing to existing native libraries, it has good performance, it allows byte-by-byte access, etc.

Sincerely,

jw

From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-26 21:17:27

On Sun, Apr 26, 2009 at 3:59 AM, Sebastian Sylvan <seb...@gm...> wrote:
> If I (or rather, the context in which a function is called) give you the
> type "a->a", and ask you to implement a function satisfying that type,
> there is only one implementation (id). Since you know nothing about what
> type the parameter passed in has, you can't do anything except just
> return it back out. Likewise for "(a,b)->a" (fst).
> On the other hand, if I give you the type "Integer->Integer" and ask you
> to write an implementation, there's an infinite number of possibilities
> that can satisfy that type (e.g. +1, +2, etc.).
> So the point is that if the expected type of a specific function is
> polymorphic, then you have less wiggle room to write something that
> satisfies the type - and in some cases the number of implementations
> that can satisfy the type is just one, but even when you add some
> non-polymorphic stuff to the type, every polymorphic part will cut out a
> "degree of freedom" from the implementation. The fact that you know
> something is an Int means you can "do more" to the variable - if it's
> fully polymorphic you can't do anything to it (and likewise if it's
> "numeric" you can only do maths on it, and so on).
> Thus, the more polymorphic the type, the smaller the valid
> "implementation space" is, and therefore the more likely it is that an
> incorrect implementation will be caught by the type checker.

Ahh, I understand. And I feel this is the beauty of type inference, as the simple act of providing an implementation for a function automatically specializes it. In Caml, for instance, I do not have to provide any type annotations for the function

    let f (x, y, z) = x +. y +. z;;

in order for the compiler to know that it is of type float * float * float -> float, and thus the only time you encounter an 'a -> 'a is for identity, which doesn't occur very often.

Nicholas "Indy" Ray

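For comparison, the Haskell analogue (a sketch, with the inferred types shown in comments): here type classes keep the arithmetic case polymorphic rather than pinning it to float, which is the "constrain or generalize" effect Sebastian describes above.

```haskell
-- No annotations: the implementations alone drive the inferred types.
f x y z  = x + y + z     -- inferred: Num a => a -> a -> a -> a
g x      = sqrt x + 1    -- inferred: Floating a => a -> a
h (x, _) = x             -- inferred: (a, b) -> a  (parametric, so it must be fst)
```
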
From: Sebastian S. <seb...@gm...> - 2009-04-26 19:56:58

On Sun, Apr 26, 2009 at 8:11 PM, Conor Stokes <bor...@ya...> wrote:
> Somewhere between type-classes and modules sits the "component" that
> Haskell has not yet mastered.

Couldn't it be just a smaller module? Lots of Haskell applications tend to use hierarchical modules this way, where you have a bunch of "low level" mini modules that are then included and exported from a main module.

> But sequenced execution and state setting are the native mode of
> imperative languages. Pure functional lazy languages, like Haskell,
> require abstractions to deal with that, which are an extra level of
> thought (and complexity)

... and expressive power.

> In C++, you can have a CommandBuffer without the monad (or having to
> think about having a monad, which is the important part).

Possibly, but you're hamstrung in how you can implement it, since you have no control over what the semi-colon does, and for some things having just "state" as an underlying implementation isn't good enough. For example, imagine being able to write something like:

    cover <- findBestCover
    moveToCover cover `withFallback` failedToGetToCover
    target <- acquireTarget
    ... etc. ...

The point being that the "moveToCover" function can take many frames to complete, and you can even let that action support the "withFallback" function to allow the action to fail (e.g. by an enemy firing at you). All the marshalling of actually running this action over multiple frames, keeping track of where in the action you need to resume and what the conditions are for resuming, can be handled by the AI monad. You can't do this (nicely) in C++ because it doesn't have the ability to let you define your own statement types. If you happen to have a sequence of imperative state modifications you're good to go, but if that's not what you're doing, you're screwed.

So, like I said, in many ways Haskell beats loads of imperative languages at their own game. You can pretend it doesn't and just use the IO monad (which is essentially Haskell's "kitchen sink") like "imperative Haskell" and never have to worry about monads, if you think the complexity isn't worth it. So I don't think your characterization of imperative coding in Haskell as complex is necessarily true; you only pay for the complexity if you need it. You don't actually *need* to understand how monads work to do IO etc., but if you do spend the effort, you find that it's a very powerful technique, and the tiny extra complexity it requires to understand is well worth it.

> Hence, taking an absolute approach in either direction is probably not
> going to get you the best system.

Well, if that absolute approach allows you to do both (so long as you're explicit about when you're doing what), then there's no problem. Like I said earlier, Haskell does allow you to write sequential imperative code modifying state if that's what you want to do; you just need to be up front about it and say that you're going to do that in the type signature. The benefit of doing it that way is that you can later parallelise it trivially, since the compiler knows ahead of time where that's safe to do (as well as making it easier to reason about, since you can easily see exactly what kind of stuff a function will do from its type).

> "The problem with only going half way is that certain properties really
> need to be absolute unless they are to disappear. You can't be "sort of"
> pregnant, you either are or you aren't."
>
> Yes, but I can't remember the last time I programmed a simulation of
> being pregnant (I'm not saying I haven't...).

I have written plenty of software where I positively rely on purity of a function (parallelism, again). You can't be "sort of" pure, you either are or you aren't.

> I agree that side effects anywhere is not the right ideal (and Haskell
> does really put them into a nice little monad focused box), but I tend
> to lean towards a less absolute way of going about that. When you're
> doing a complex and heavy IO oriented operation, it sometimes makes
> sense to be able to do that in the simple way an imperative language
> allows (they're very good at it!) without the extra level of
> abstraction. If you want to, in most imperative languages today, you
> can even isolate it behind an interface such that it can not be misused
> in an unsafe/leaky way (one of the selling points of object
> orientation, I believe).

I'm not sure what you're referring to here; how is doing IO in C++ easier than in Haskell? Haskell is no better than C++ there, because you still have to do it in the same low-level imperative way. So it's not really improving on it much, you're essentially writing very C-like code with Haskell syntax, but I don't see any major way where it does worse either. Also, if you truly have a pure interface to something using IO, then unsafePerformIO can be used to give it a pure interface - though that really shouldn't be used very often, and it's your responsibility to make sure there is no way of observing any side effects it does (think of it as a way of hooking into the runtime - you really, really shouldn't need it very often). And as I said before, if all you're doing is mutating some state (i.e. no real IO at all), then ST gives you a way of wrapping it in a pure interface already (with compile-time guarantees that you didn't screw up).

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862

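A minimal sketch of what the plumbing behind such an "AI monad" could look like (a standard resumption monad; all names here are invented, not from any shipping engine):

```haskell
-- An action either finishes with a result now, or yields until next frame.
data AI a = Done a | Yield (AI a)

instance Functor AI where
  fmap f (Done a)  = Done (f a)
  fmap f (Yield k) = Yield (fmap f k)

instance Applicative AI where
  pure = Done
  Done f  <*> x = fmap f x
  Yield k <*> x = Yield (k <*> x)

instance Monad AI where
  Done a  >>= f = f a                -- sequencing: "overloading semicolon"
  Yield k >>= f = Yield (k >>= f)    -- a paused action stays paused

-- Give up the rest of this frame.
yieldFrame :: AI ()
yieldFrame = Yield (Done ())

-- Advance an action by one frame: either it completes, or we get back the
-- continuation to store and resume next frame.
stepFrame :: AI a -> Either a (AI a)
stepFrame (Done a)  = Left a
stepFrame (Yield k) = Right k
```

A behaviour like `do { yieldFrame; yieldFrame }` then finishes on the third `stepFrame` call; the per-frame scheduler just calls `stepFrame` on each stored continuation, with no hand-rolled state machine in the behaviour code itself.
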
From: Conor S. <bor...@ya...> - 2009-04-26 19:11:55

"Could you expand on this? Some specifics perhaps? I'm not sure I understand what you mean. At least not when you compare to C++ which doesn't even have a module system (though Haskell's module system is fairly spartan)!"

Somewhere between type-classes and modules sits the "component" that Haskell has not yet mastered. C++ is reasonable for communicating "component frameworks" and Haskell is not quite there yet. The attempt at object orientation in C++ at least gives you a way to organize your "larger" thoughts in a way other people can understand (this is an entity, it does these things and it interacts with those things), and I don't think that Haskell has mastered this level of organisation yet. It's a good language for elegantly expressing algorithms and transforms, but it's not yet a full software engineering tool, with an easy methodology for relating to real world problems.

"Sequences of "stuff" does not imply imperative languages. The low level rendering abstraction could easily be a list of "Commands" (we could call them "command buffers", or maybe "display lists", hang on a minute!), rather than being a series of state modifying statements. In fact, a lot of abstractions treat graphics as a tree, hiding the details of the underlying state machine. At a higher level I do agree with Sweeney that graphics is pretty functional. Take something like a pixel shader, for instance, which is just a pure function really, even though most shader languages make it look like an imperative function inside (to look like C, usually).

Furthermore, if we're talking about Haskell specifically I'd say that in many ways it has much better support for imperative programming than C++ does, since you can define your own imperative sub-languages (e.g. you could have your CommandBuffer monad and write state setting etc. if that's how you really want to think about graphics). C++ doesn't allow you to abstract over what *kind* of statements you're working with, it only has one "kitchen sink" kind, and no way of overloading the way a block of multiple statements are bound together."

But sequenced execution and state setting are the native mode of imperative languages. Pure functional lazy languages, like Haskell, require abstractions to deal with that, which are an extra level of thought (and complexity). In C++, you can have a CommandBuffer without the monad (or having to think about having a monad, which is the important part). Sure, there are very functional aspects to rendering (shading is a good example) and if you want to think of it in a functional way you can, but in games currently you don't necessarily want to be that abstracted from the process. So I agree that graphics are functional, but only up to a point. If you're doing hardware interaction for rasterization (which most of us currently are), then graphics are a wild mix of imperative and functional (running on multiple pieces of hardware and multiple 3rd party pieces of software) and cannot be categorized into one or the other area completely. Hence, taking an absolute approach in either direction is probably not going to get you the best system.

"The problem with only going half way is that certain properties really need to be absolute unless they are to disappear. You can't be "sort of" pregnant, you either are or you aren't."

Yes, but I can't remember the last time I programmed a simulation of being pregnant (I'm not saying I haven't...). We shouldn't trap ourselves into a false dichotomy here. Software projects are incredibly complicated, not simple yes or no (are or aren't) questions. They may be a huge number of yes or no questions combining to make a nice fuzzy "yes/no" stew that is quite difficult to think of as a whole, but to be trapped in a single mindset is to miss the point of software development.

"The key killer feature of Haskell for me is purity, and if you start allowing ad-hoc undisciplined use of side effects anywhere then the language is no longer pure. Either you contain side effects by design and enforce it, or you can never write code that relies on something being pure (parallelism!) without basically giving up on the language helping you, relying instead on convention (which always breaks)."

This is part of why I contend that Haskell is a good language in the small. That kind of purity is great at a micro level, but sometimes it is nice to do things another way. In fact, part of the beauty of pure functional programming is that you can apply it absolutely over a small area and then use that to compose code in a larger impure language, as long as the contract of the pure bits is enforced when you use them.

"Note that purity does emphatically *not* mean "no state mutations ever", it merely means that IO has to happen at the "bottom" of the program, not deep inside application code (this is pretty much what we do already though, so not much of an issue IMO), and that any localized side effects have to be marked up so that the compiler can enforce that they don't "leak" (e.g. you may want to do some in-place operations on an array for performance in a function, but from the outside the function still looks pure - that's fine in FP, Haskell uses the ST monad for it - you just need to seal off these local "bubbles" of impurity by explicitly marking up where things start going impure). I agree with Sweeney, again, that "side effects anywhere" is not the right default in a parallel world.

So really it's all about being disciplined about it and marking up functions that are impure up front, so that the compiler can be sure that you're not trying to do anything impure in a context where purity is required (e.g. parallelism, or lazy evaluation)."

I agree that side effects anywhere is not the right ideal (and Haskell does really put them into a nice little monad-focused box), but I tend to lean towards a less absolute way of going about that. When you're doing a complex and heavy IO oriented operation, it sometimes makes sense to be able to do that in the simple way an imperative language allows (they're very good at it!) without the extra level of abstraction. If you want to, in most imperative languages today, you can even isolate it behind an interface such that it cannot be misused in an unsafe/leaky way (one of the selling points of object orientation, I believe).

What I tend to think is the right thing to move to is a system where purity is an interface annotation that is then enforced in the compilation. Of course, I tend to think that data structure/type information should live in a language-independent schema that is available at compile time to the "code", but I'm kind of crazy like that.

Cheers,
Conor

From: Sebastian S. <seb...@gm...> - 2009-04-26 17:12:44

On Sun, Apr 26, 2009 at 5:52 PM, <ne...@tw...> wrote:
> I think you are right that Haskell has not been used in enough
> large-scale projects to see if it scales well enough in terms of
> software development, but it has at least the capability to do all
> aspects of games. Someone did port the Quake 3 engine to Haskell
> without any leftover C, for example (http://haskell.org/haskellwiki/Frag).

A minor niggle: it's not a port of Quake 3, it's a completely new engine that loads the Quake 3 level format.

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862

From: Sebastian S. <seb...@gm...> - 2009-04-26 17:04:42

On Sun, Apr 26, 2009 at 5:11 PM, Conor Stokes <bor...@ya...> wrote:
> Where I'd like to break in; Haskell is not yet ready for large games (or
> medium/large project software development at large), because it's an
> academic language that hasn't yet progressed to programming in the large
> (big picture). Most of the "safety" in Haskell is for local problems,
> not for the large architectural decisions, which are the hardest to
> change (and in fact, Haskell provides very little guidance or mechanism
> in these areas) later down the line. At least "object oriented" (in the
> general sense) programming languages try and provide a layered mechanism
> for programming in the large.

Could you expand on this? Some specifics perhaps? I'm not sure I understand what you mean. At least not when you compare to C++ which doesn't even have a module system (though Haskell's module system is fairly spartan)!

> Functional programming is a very good tool, but it's too pure a tool for
> production software. Most production software has areas that are "do
> this, then do that", which pure functional languages still have awkward
> and heavy abstractions for (i.e. an extra level of thought that isn't
> necessary for the functionality required). It is also interesting that
> when Tim Sweeney said in his programming-language-for-the-future talk
> that the "graphics engine" would be "functional", he didn't mention that
> rendering (as it currently stands) occurs in order and is highly
> stateful. Graphics hardware requires that you set your states, followed
> by your rendering commands, in order, which is a highly imperative way
> to think. This really shows that large problems tend to be made up of
> mixed solutions that don't follow any one set of rules.

Sequences of "stuff" do not imply imperative languages. The low level rendering abstraction could easily be a list of "Commands" (we could call them "command buffers", or maybe "display lists", hang on a minute!), rather than being a series of state modifying statements. In fact, a lot of abstractions treat graphics as a tree, hiding the details of the underlying state machine. At a higher level I do agree with Sweeney that graphics is pretty functional. Take something like a pixel shader, for instance, which is just a pure function really, even though most shader languages make it look like an imperative function inside (to look like C, usually).

Furthermore, if we're talking about Haskell specifically, I'd say that in many ways it has much better support for imperative programming than C++ does, since you can define your own imperative sub-languages (e.g. you could have your CommandBuffer monad and write state setting etc. if that's how you really want to think about graphics). C++ doesn't allow you to abstract over what *kind* of statements you're working with; it only has one "kitchen sink" kind, and no way of overloading the way a block of multiple statements are bound together.

> Functional programming is part of the general solution to "better
> programming", but to take it to extremes (like Haskell) is not the answer.

The problem with only going half way is that certain properties really need to be absolute unless they are to disappear. You can't be "sort of" pregnant, you either are or you aren't. The key killer feature of Haskell for me is purity, and if you start allowing ad-hoc undisciplined use of side effects anywhere then the language is no longer pure. Either you contain side effects by design and enforce it, or you can never write code that relies on something being pure (parallelism!) without basically giving up on the language helping you, relying instead on convention (which always breaks).

Note that purity does emphatically *not* mean "no state mutations ever". It merely means that IO has to happen at the "bottom" of the program, not deep inside application code (this is pretty much what we do already though, so not much of an issue IMO), and that any localized side effects have to be marked up so that the compiler can enforce that they don't "leak" (e.g. you may want to do some in-place operations on an array for performance in a function, but from the outside the function still looks pure - that's fine in FP; Haskell uses the ST monad for it - you just need to seal off these local "bubbles" of impurity by explicitly marking up where things start going impure). I agree with Sweeney, again, that "side effects anywhere" is not the right default in a parallel world.

So really it's all about being disciplined about it and marking up functions that are impure up front, so that the compiler can be sure that you're not trying to do anything impure in a context where purity is required (e.g. parallelism, or lazy evaluation).

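As a concrete illustration of that ST point, a small sketch using the real Control.Monad.ST API (the function itself is just an invented example): the mutation inside is statically prevented from leaking, so callers see an ordinary pure function.

```haskell
import Control.Monad.ST
import Data.STRef

-- Imperative on the inside, pure from the outside: runST's type guarantees
-- the STRef cannot escape this local "bubble" of impurity.
sumOfSquares :: [Int] -> Int
sumOfSquares xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef acc (+ x * x)) xs
  readSTRef acc
```

`sumOfSquares [1,2,3]` is 14, and nothing in the type signature betrays the mutation.
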
From: <ne...@tw...> - 2009-04-26 16:52:14

-------- Original Message --------
Subject: Re: [Algorithms] Complexity of new hardware
From: Conor Stokes <bor...@ya...>

> Functional programming is a very good tool, but it's too pure a tool
> for production software. Most production software has areas that are
> "do this, then do that", which pure functional languages still have
> awkward and heavy abstractions for (i.e. an extra level of thought that
> isn't necessary for the functionality required). It is also interesting
> that when Tim Sweeney said in his programming-language-for-the-future
> talk that the "graphics engine" would be "functional", he didn't
> mention that rendering (as it currently stands) occurs in order and is
> highly stateful. Graphics hardware requires that you set your states,
> followed by your rendering commands, in order, which is a highly
> imperative way to think. This really shows that large problems tend to
> be made up of mixed solutions that don't follow any one set of rules.

Monads do give you a way to do imperative code in functional languages. As a half-way relevant example, I find it easier to program OpenGL using the Haskell bindings than I do using the C/C++ bindings. To turn off blending, then draw a long list of triangles, then a list of quads:

    do blendFunc $= Nothing
       renderPrimitive Triangles (mapM_ vertex vs)
       renderPrimitive Quads (mapM_ vertex vs2)

versus something like:

    glDisable(GL_BLEND);
    glBegin(GL_TRIANGLES);
    for (int i = 0; i < vs.length(); i++) drawVertex(vs[i]);
    glEnd();
    glBegin(GL_QUADS);
    for (int i = 0; i < vs2.length(); i++) drawVertex(vs2[i]);
    glEnd();

I realise that's some very simplistic code (no display lists and so on), but I think (especially with Haskell under discussion) saying that it cannot do ordered things is misrepresentative, once you understand monads. (I do accept that having to understand monads is a barrier to learning Haskell against, say, C.) In Haskell you can get all the nice little features of functional programming (map and so forth), alongside all the order of imperative programming.

I think you are right that Haskell has not been used in enough large-scale projects to see if it scales well enough in terms of software development, but it has at least the capability to do all aspects of games. Someone did port the Quake 3 engine to Haskell without any leftover C, for example (http://haskell.org/haskellwiki/Frag).

Neil.

From: Conor S. <bor...@ya...> - 2009-04-26 16:11:51

Where I'd like to break in; Haskell is not yet ready for large games (or medium/large project software development at large), because it's an academic language that hasn't yet progressed to programming in the large (big picture). Most of the "safety" in Haskell is for local problems, not for the large architectural decisions, which are the hardest to change (and in fact, Haskell provides very little guidance or mechanism in these areas) later down the line. At least "object oriented" (in the general sense) programming languages try and provide a layered mechanism for programming in the large.

Functional programming is a very good tool, but it's too pure a tool for production software. Most production software has areas that are "do this, then do that", which pure functional languages still have awkward and heavy abstractions for (i.e. an extra level of thought that isn't necessary for the functionality required). It is also interesting that when Tim Sweeney said in his programming-language-for-the-future talk that the "graphics engine" would be "functional", he didn't mention that rendering (as it currently stands) occurs in order and is highly stateful. Graphics hardware requires that you set your states, followed by your rendering commands, in order, which is a highly imperative way to think. This really shows that large problems tend to be made up of mixed solutions that don't follow any one set of rules.

The interaction with evaluation is purely a tools problem. It has been shown you can write C# in a REPL, and there is no reason why C++ couldn't work in a REPL if C# can (as long as you can isolate the illegal behaviour).

All of this is not to write off functional programming. I love functional programming and I think it's the key to code re-use (or in Charles Simonyi's words, "going meta") that has been missing from a lot of the current "promised land" languages. I think C# 3.0 was a move in the right direction (somewhere between F#, Haskell, C# 3.0, Cyclone and C99 is probably a "sweet point" right now). I also think the next C++ standard is moving in the right direction with lambdas/closures (if not dispensing with a whole lot of crud that a re-worked systems language doesn't need), but I do think functional programming is not a paradigm people should be grabbing with both hands (only one and a pinky or so).

Functional programming is part of the general solution to "better programming", but to take it to extremes (like Haskell) is not the answer, in the way software transactional memory is not the answer to scalable parallel computation (only some of the time; STM still has the analogue of deadlocks, that is, the pathological case of cross-referential transactions). There is no silver bullet to either of these problems, and what we should be looking at is the best balance of tools, without a level of complication that makes us put all our mental effort into the mechanisms of computation as opposed to the outcomes we're trying to achieve.

Cheers,
Conor

----- Original Message ----
From: Sam Martin <sam...@ge...>
To: Game Development Algorithms <gda...@li...>
Cc: and...@ni...
Sent: Sunday, 26 April, 2009 3:58:08 AM
Subject: Re: [Algorithms] Complexity of new hardware

Yeah, that's what I'm talking about! :) I was trying to resist getting excited and going into over-sell mode, but likely undercooked how much potential I think there is here.

To highlight just two more points I think are important:

- Haskell stands a very good chance of allowing games to really get on top of their (growing) complexity. I think this is best illustrated in the paper "Why functional programming matters", http://www.cs.chalmers.se/~rjmh/Papers/whyfp.html. Well worth a read if you've not seen it before.

- It can be interactively evaluated and extended. Working with C/C++ we get so used to living without this that I think we potentially undervalue how important a feature it is.

Cheers,
Sam

-----Original Message-----
From: Sebastian Sylvan [mailto:seb...@gm...]
Sent: Sat 25/04/2009 19:16
To: Game Development Algorithms
Cc: and...@ni...
Subject: Re: [Algorithms] Complexity of new hardware

On Wed, Apr 22, 2009 at 5:52 PM, Sam Martin <sam...@ge...> wrote:
> > Wouldn't that be a tough sell? You'd already be competing with free
> > implementations of LUA, Python, JavaScript and their ilk on the low
> > end, and built-in languages like UnrealScript on the high end.
>
> I don't think there's a market for that kind of scripting DSL. A new
> language would need to eat into the remaining C++ development burden
> that isn't suitable to implementing in Lua, say. Which is plenty.
>
> > Doesn't this bring us back full circle? I recall a statement from a
> > month ago saying that we all need to think differently about how we
> > put together massively parallel software, because the current tools
> > don't really help us in the right ways...
>
> Another reason to consider pure functional languages. This is a much
> deeper topic that I'm now about to trivialise, but the referential
> transparency of these languages makes them particularly suitable to
> parallel evaluation. For example, GHC (arguably the most mature Haskell
> compiler) can compile for an arbitrary number of cores, although it's
> still an active research area as I understand it.

Being a massive Haskell fanboy myself, let me jump in with some other cool things it does that relate to game development.

1. It's starting to get support for "nested data parallelism". Basically, flat data parallelism is what we get with shaders now; the problem with that is that the "per-element operation" can't itself be another data parallel operation. NDP allows you to write data parallel operations (on arrays) where the thing you do to each element is itself another data parallel operation. The compiler then has a team of magic pixies that fuses/flattens this into a series of data parallel applications, eliminating the need to do it manually.

2. It has Software Transactional Memory. So when you really need shared mutable state, you can still access it from lots of different threads at once with optimistic concurrency (only block when there's an actual conflict). Yes, there are issues, and yes, it adds overhead, but if the alternative is single-threaded execution and the overhead is 2-3x, then we win once we have 4 hardware threads to spare.

3. Monads! Basically this allows you to overload semi-colon, which means you can fairly easily define your own embedded DSLs. This can let you write certain code a lot more easily. You could have a "behaviour" monad, for example, abstracting over all the details of entities in the game doing things which take multiple frames (so you don't need to litter your behaviour code with state machine code, saving and restoring state etc.; you just write what you want to do, and the implementation of the monad takes care of things that need to "yield").

4. It's safe. Most code in games isn't systems code, so IMO it doesn't make sense to pay the cost of using a systems programming language for it (productivity, safety).

5. It's statically typed with a native compiler, meaning you could compile all your scripts and just link them into the game for release and get decent performance. Not C-like (yet, anyway!), but probably an order of magnitude over most dynamic languages.

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862

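Point 2 in Sebastian's list maps directly onto the stm package's actual API; a small sketch (the bank-account framing is invented). The whole transfer commits atomically, and `check` blocks the transaction, by retrying it, only until the condition holds.

```haskell
import Control.Concurrent.STM

-- Move funds between two shared counters. Concurrent transfers only
-- conflict if they touch the same TVars.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)          -- retry until there is enough money
  writeTVar from (balance - amount)
  toBal <- readTVar to
  writeTVar to (toBal + amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  atomically (readTVar b) >>= print  -- prints 40
```
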
From: Sebastian S. <seb...@gm...> - 2009-04-26 15:40:50

On Sun, Apr 26, 2009 at 3:10 PM, Gribb, Gil <gg...@ra...> wrote:
> Seems like the death of C/C++ has been proclaimed for at least 20
> years. As game developers, for say at least the last 12 years or so, we
> have done lots of work with scripting languages of all sorts. Outside
> of the game proper, we use all tools available, including really high
> level stuff like commercial databases. So it isn't like game developers
> are just oblivious to language technology.

I don't think the death has been proclaimed or even predicted, merely wished for :-)

> To me, "ownership and lifetime" is an important concept in software
> engineering. When is something created, when is it destroyed and what
> higher level object is accountable for it? Garbage collection offers
> ONE answer to the question of ownership and lifetime: everyone
> referencing something shares ownership and the lifetime lasts until it
> can't be referred to anymore.

You don't say which language you're referring to, but there is nothing about garbage collection that excludes all other forms of resource management from co-existing with it. At worst you may need to live with the memory being "backed" by garbage collection at the bottom (though most "full blown" languages offer an escape hatch), but you could usually just allocate a big block (that's garbage collected) and portion it out manually from there. There's no reason why you couldn't use RAII or even manual resource management in many high level languages that also have garbage collection. However, I'd wager that the vast majority of allocations are not special enough to need manual care; having a built-in, efficient system for automatically dealing with them in a way which is guaranteed to never corrupt the heap is jolly convenient. Also, not having to worry about memory fragmentation over time is pretty sweet too. That's definitely an issue a lot of people spend a lot of engineering effort to work around.

> As far as development time and bugs, well, in my experience a garbage
> collection system just gives you different sorts of bugs. With garbage
> collection, you will spend your debugging time trying to understand
> what link in a super complex dependency chain is problematic, and even
> when it is identified you are left with only hacky approaches to
> breaking the undesirable links. Realize that with a console game, an
> object that does not get destroyed soon enough is just as fatal as an
> object that gets destroyed too soon, except the former is much harder
> to track down and fix.

This would happen in a system without automatic memory management too, except rather than getting bloated memory with a nice heap you could inspect using a variety of tools to find the unwanted reference, you get a bunch of stale pointers to random memory, because the data they pointed to has been deleted from somewhere else. Yes, you may need to manually null out a few references to avoid space leaking, but in those instances you'd almost certainly need to do something equivalent for a manually managed system too (and if you mistakenly didn't, it would be a lot harder to track down). Compare this to double deletes, memory leaks and scribbles, which are often manifested as heisenbugs; I would definitely prefer the rare and easily fixed issues that GC has.

You're absolutely right that it's not a perfect solution, and you'd definitely need to spend some effort dealing with memory management issues even with a garbage collector, but it does solve (or at least makes it easier to track down) a lot of the pedestrian hassle of doing it manually.

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862

From: Gribb, G. <gg...@ra...> - 2009-04-26 14:25:51

> As I mentioned, C++ was not designed for making games; it's a very
> suitable systems language, and for many systems in game development I
> do find it enjoyable. However, it starts to break down while combining
> systems into a large application, and additionally leads to a lot of
> repeated and glue code, which is often ad-hoc and bug-ridden. That,
> combined with the very poor toolset, makes it less than ideal to write
> entire games in.

Seems like the death of C/C++ has been proclaimed for at least 20 years. As game developers, for say at least the last 12 years or so, we have done lots of work with scripting languages of all sorts. Outside of the game proper, we use all tools available, including really high level stuff like commercial databases. So it isn't like game developers are just oblivious to language technology.

But if we are talking about big-budget commercial games, and we are talking about the runtime code, the stuff that actually executes on the 360 or PS3, then the death of C++ is greatly exaggerated. Game developers find success with C/C++. The proposed benefits of higher level languages strike me as naive and theoretical. In practice those benefits don't materialize, in my experience anyway. It isn't clear to me if those lobbying for change are saying "we had a really tough time with C++ on our last game, so we are going to try something different next time", or "we switched to language X for our development last game and saw real benefits". It would be nice to hear more about practical experience, rather than dubious theory.

Let me just pick on one thing today: garbage collection. Having made big-budget commercial games both with and without garbage collection, in my experience, these are myths:

Myth: C++ does not "support" garbage collection.
Myth: Garbage collection saves development time.
Myth: Garbage collection reduces bugs.

To me, "ownership and lifetime" is an important concept in software engineering. When is something created, when is it destroyed, and what higher level object is accountable for it? Garbage collection offers ONE answer to the question of ownership and lifetime: everyone referencing something shares ownership, and the lifetime lasts until it can't be referred to anymore. I feel that having only one answer to the ownership and lifetime question is very limiting on expressive power. In many cases, a different approach to ownership and lifetime will give you a superior design. We sure don't want to live with inferior designs because the language has a dogmatic and limiting view of ownership and lifetime.

As far as development time and bugs, well, in my experience a garbage collection system just gives you different sorts of bugs. With garbage collection, you will spend your debugging time trying to understand what link in a super complex dependency chain is problematic, and even when it is identified you are left with only hacky approaches to breaking the undesirable links. Realize that with a console game, an object that does not get destroyed soon enough is just as fatal as an object that gets destroyed too soon, except the former is much harder to track down and fix.

In the end, using garbage collection isn't a huge problem; I'm satisfied with the products I've made that use GC. But I will say that whoever thinks garbage collection offers significant benefits to game development doesn't seem to be facing or solving the same problems that I confront.

-Gil