Thread: Re: [Algorithms] Complexity of new hardware (Page 10)
|
From: Randall B. <rbo...@gm...> - 2009-04-27 07:07:43
|
Is Mono lacking maturity in its tools or its runtime?

Miguel's recent announcement of a fast SIMD implementation for Mono (http://tirania.org/blog/archive/2008/Nov-03.html) strongly indicates that performance is not an issue. In fact, the benchmarks linked from his post indicate that the new runtime can beat naive C++ implementations. Of course, this is only for bare number-crunching.

MonoDevelop seems to be lacking support for dedicated console development (no remote debugger, etc.), but is there something missing for desktop games? Any info you have is appreciated: I'm considering writing a simple shooter in C#.

- Randall

On Sun, Apr 26, 2009 at 6:18 PM, Nicholas "Indy" Ray <ar...@gm...> wrote:
> On Sun, Apr 26, 2009 at 5:46 PM, Jon Watte <jw...@gm...> wrote:
>> If you do PC games, then C# is within inches of being a totally suitable general purpose replacement, and it already is a good replacement for many specific games or subsystems. It has nice reflection, you can poke at objects while you're developing the classes, it has good interfacing to existing native libraries, it has good performance, it allows byte-by-byte access, etc.
>
> I don't know if PC games include Mac or other *nix systems, but I don't feel Mono is yet mature enough for game development, and thus for those who choose not to develop for Microsoft's platforms exclusively it's not yet a suitable replacement.
>
> Nicholas "Indy" Ray

--
All great deeds and all great thoughts have a ridiculous beginning. |
|
From: Jason H. <jas...@di...> - 2009-04-28 02:46:45
|
What's missing in C# is someone to convert my entire runtime library and tools into it, without losing any of the platform specific bit twiddling or the careful memory-tweaked asset management, and making it interface to all the 1st party libraries that we have to link to. :-) C# is great for certain kinds of tools, but as a complete replacement for a runtime system.... I think anything that isn't a systems language will be at a disadvantage for various reasons, and not solely due to inertia.

(To hijack the thread a bit) While my engine is currently C++, I'm going with LUA for the majority of the game systems. I seriously considered Pawn and Game Monkey. The main thing that seemed good about Game Monkey is that it's very close to C syntax, so I could code in it with minimal re-education. However, the lack of a really nice debugger and a considerably smaller supporting community make it less attractive than LUA, for example. Pawn was neat, but isn't quite mature enough to be completely useful--it still has occasional parsing problems, can't easily reload a script live, etc. It did have a really nice feature that drew me toward it: the exact memory requirements for a script were known offline. Pawn was also a register machine rather than a stack machine, so it could be compiled down to very fast native code, compared to most other JITs. LUA worries me a bit because I've heard it's a little fast-and-loose with memory, though I imagine that depends a lot on the way scripts are written.

JH

Randall Bosetti wrote:
> Is Mono lacking maturity in its tools or its runtime?
>
> Miguel's recent announcement of a fast SIMD implementation for Mono (http://tirania.org/blog/archive/2008/Nov-03.html) strongly indicates that performance is not an issue. In fact, the benchmarks linked from his post indicate that the new runtime can beat naive C++ implementations. Of course, this is only for bare number-crunching.
>
> MonoDevelop seems to be lacking support for dedicated console development (no remote debugger, etc.), but is there something missing for desktop games? Any info you have is appreciated: I'm considering writing a simple shooter in C#.
>
> - Randall |
|
From: Philip T. <ex...@gm...> - 2009-04-28 08:15:26
|
On Tue, Apr 28, 2009 at 5:24 AM, Jon Watte <jw...@gm...> wrote:
> So, how to design a well-adopted, robust, small, powerful language with pretty syntax and an effective C integration layer? And what language has all that but I've missed? :-)

There's JavaScript, which has pretty huge adoption (outside of games), C-like syntax, closures, prototype-based inheritance, and active communities working on the language spec, on implementations, and on applications. It is reasonably elegant and powerful (see some articles on http://www.crockford.com/javascript/), is quite small (the language itself, as distinct from the web browser environment with DOM and AJAX and all that stuff it typically runs in), and is designed to be embedded into another application and to execute untrusted (often malicious) code.

It has several major independent open-source implementations (http://www.mozilla.org/js/spidermonkey/ and http://webkit.org/blog/214/introducing-squirrelfish-extreme/ and http://code.google.com/p/v8/), all of which include some kind of JIT (there's been a lot of competition on performance over the past couple of years) and all of which have their own approaches to C integration.

On the other hand, the implementations vary in their suitability for real-time use (non-incremental GC pauses), and in support for multithreading, debugging, non-standard language extensions, non-x86 JIT, etc., so there are always some tradeoffs.

--
Philip Taylor
ex...@gm... |
|
From: Rachel B. <r....@gm...> - 2009-04-26 00:02:09
|
> It seems to me that you are only referring to the advantages of Type Inference in haskell,

Actually, I'm referring to type annotations in Haskell. While they are not necessary (the inference works quite well), they allow the compiler to generate better (i.e. faster/shorter) code. I'm looking to extend that into a more generic system where you slowly annotate your code as you learn more about the problem at hand. "Calcify" because your code becomes harder and harder to change - the price of specializing it for the task at hand.

Since I'm at the hand-waving stage with my thoughts on this, that's about as much explanation as I can give - it sounded better in my mind ;)

> As I mentioned, C++ was not designed for making games, it's a very suitable systems language, and for many systems in game development, I do find it enjoyable,

I'm curious - what do you feel C++ gives you (on a systems level) that's not achievable with C and a decent set of libraries?

>> At least for private projects, I've almost completely abandoned it - work has a slightly higher inertia ;)
>
> I don't know if your private projects are game related.

Some are, some are not. None of them seem to call for C++. Systems level work is done in C. If I need to step onto an OO level while doing systems work, ObjC seems a better choice to me, and pretty much *all* prototyping is done in Python or other HLLs.

If I'm trying out performance intensive stuff, I'm more than happy to throw rather large amounts of computational power at it if it gains me fast development. EC2 is your friend ;)

> But at the moment there seems to be a much bigger issue than inertia in the work environment, which is the lack of a viable alternative in our field.

That's entirely due to inertia and unwillingness to explore alternatives. If we spent less time on reinventing existing wheels, I'm confident we could do a lot of useful work in terms of generating alternatives.

(Side note: I'd *really* love to focus the "game development universities" on that. I'd think students would benefit from doing actual research, as opposed to vocational training...)

> Nicholas "Indy" Ray

I'm surprised that you as an Indy guy (or so I guess from the signature ;) feel there are no alternatives. XNA/C# seems a viable one? (Note - this is said as a bystander. I haven't used it yet. There's only so many hours in a day :( )

Rachel |
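A minimal GHC-flavoured sketch of the annotation idea above; the function names are invented for illustration. Without a signature the compiler infers the most general, class-polymorphic type; an annotation pins it down, which both documents intent and can let the compiler emit plain machine arithmetic instead of going through the Num dictionary:

    -- Inferred as:  lenSq :: Num a => a -> a -> a
    -- so the (*) and (+) calls may go through the Num dictionary.
    lenSq x y = x * x + y * y

    -- Same body, annotated. The "calcified" version is less reusable,
    -- but GHC can compile it straight to unboxed Int arithmetic.
    lenSqInt :: Int -> Int -> Int
    lenSqInt x y = x * x + y * y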
|
From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-26 04:00:39
|
On Sat, Apr 25, 2009 at 5:01 PM, Rachel Blum <r....@gm...> wrote:
> Actually, I'm referring to type annotations in Haskell. While they are not necessary (the inference works quite well), they allow the compiler to generate better (i.e. faster/shorter) code. I'm looking to extend that into a more generic system where you slowly annotate your code as you learn more about the problem at hand.

I meant type annotations to be included in the advantages of type inference, as I haven't seen a type-inferred system without optional type annotations. Anyway, as far as my understanding goes, type annotations provide no runtime performance benefits, but help to increase the safety of computation, a compile-time 'assert' of sorts.

> "Calcify" because your code becomes harder and harder to change - the price of specializing it for the task at hand.
>
> Since I'm at the hand-waving stage with my thoughts on this, that's about as much explanation as I can give - it sounded better in my mind ;)

It's nice to be able to start with more malleable code, and then add types later to ensure safety, I understand.

> I'm curious - what do you feel C++ gives you (on a systems level) that's not achievable with C and a decent set of libraries?
>
> Some are, some are not. None of them seem to call for C++. Systems level work is done in C. If I need to step onto an OO level while doing systems work, ObjC seems a better choice to me, and pretty much *all* prototyping is done in Python or other HLLs.
>
> If I'm trying out performance intensive stuff, I'm more than happy to throw rather large amounts of computational power at it if it gains me fast development. EC2 is your friend ;)

I mostly mean C++ as a superset of C, though I think C++ has quite a few valuable additions - destructors and typed containers - which I often find valuable in a lot of code (depending on what value of 'system' you mean; I wouldn't choose C++ over C for driver development, for instance). Other than that, C++ has some valuable performance characteristics: while I agree that there is great reason to prototype in higher level languages, the performance is often not acceptable in production game code. For instance, the dynamic nature of ObjC classes can cause some problems, and the insistence that all newer programming languages be garbage collected proves to be largely problematic.

> That's entirely due to inertia and unwillingness to explore alternatives. If we spent less time on reinventing existing wheels, I'm confident we could do a lot of useful work in terms of generating alternatives.

Creating alternatives ends up being much more difficult than switching to existing alternatives, if none of the existing alternatives are a great match.

> (Side note: I'd *really* love to focus the "game development universities" on that. I'd think students would benefit from doing actual research, as opposed to vocational training...)

I'm not sure that "game development universities" are yet mature enough for this.

> I'm surprised that you as an Indy guy (or so I guess from the signature ;) feel there are no alternatives. XNA/C# seems a viable one? (Note - this is said as a bystander. I haven't used it yet. There's only so many hours in a day :( )

Sorry for the confusion, the "Indy" in my signature is a nickname of mine that I've had since long before I got into game development.

However, I do have quite a bit of experience with XNA/C#, and as it turns out they have a lot of problems: the performance can be problematic, GC isn't always desirable when developing games, and the large amount of bounds checking can also be problematic. Additionally, the nature of being a proprietary language/library vastly limits the platforms that can be developed for (Mono does a nice job at running C#, but XNA is still a Microsoft-only sort of thing). Lastly, while it is possible to call into C/C++ libraries through managed C++, it's never very pleasant, and on some platforms it may not be possible at all. Depending on the game, these may actually be non-problems. But I doubt we will be seeing any AAA titles in XNA/C# any time soon, and I doubt that is due to the inertia of the entire industry and C++.

Nicholas "Indy" Ray |
|
From: Sebastian S. <seb...@gm...> - 2009-04-26 09:08:11
|
On Sun, Apr 26, 2009 at 5:00 AM, Nicholas "Indy" Ray <ar...@gm...> wrote:
> On Sat, Apr 25, 2009 at 5:01 PM, Rachel Blum <r....@gm...> wrote:
> > Actually, I'm referring to type annotations in Haskell. While they are not necessary (the inference works quite well), they allow the compiler to generate better (i.e. faster/shorter) code. I'm looking to extend that into a more generic system where you slowly annotate your code as you learn more about the problem at hand.
>
> I meant type annotations to be included in the advantages of type inference, as I haven't seen a type-inferred system without optional type annotations. Anyway, as far as my understanding goes, type annotations provide no runtime performance benefits, but help to increase the safety of computation, a compile-time 'assert' of sorts.

They do actually. Haskell always infers the *most general* type for a function. Adding a type signature can specialize it, giving you faster code (by avoiding the need for runtime dispatch of any polymorphic functions - if the compiler knows that the * always refers to integer multiplication, it can just emit the native honest-to-goodness int multiplication instruction directly). Also, type signatures can be used to specify explicitly that a given value should be unboxed in GHC (e.g. Int# is an unboxed int). There are other annotations which can force strict evaluation (add a ! to the field of a record type, or to the name of a parameter of a function), improving performance in many cases.

As an aside, the Haskell way is actually great for writing generic code (in C++ et al. you have to do extra work to get generic code, in Haskell you have to do extra work to specialize it). Also, a little hand-wavey: polymorphism actually restricts the number of valid implementations for a given type (e.g. a type of a -> a has precisely one implementation, id, whereas a type of Integer -> Integer has an infinite number of implementations), and if the space of legal implementations is reduced then the space of legal *incorrect* implementations is reduced too, meaning that with generic code an incorrect implementation is more likely to give a type error. See the Girard-Reynolds isomorphism for more, and check out djinn, which given a (polymorphic) type will "magically" produce an implementation for that function! Sounds completely magical, I know!

> It's nice to be able to start with more malleable code, and then add types later to ensure safety, I understand.

As for Haskell, it is always completely statically and strongly typed. It just happens to infer those types for you at compile time, saving you some typing (no pun intended).

> However, I do have quite a bit of experience with XNA/C#, and as it turns out they have a lot of problems: the performance can be problematic, GC isn't always desirable when developing games, and the large amount of bounds checking can also be problematic. Additionally, the nature of being a proprietary language/library vastly limits the platforms that can be developed for (Mono does a nice job at running C#, but XNA is still a Microsoft-only sort of thing). Lastly, while it is possible to call into C/C++ libraries through managed C++, it's never very pleasant, and on some platforms it may not be possible at all. Depending on the game, these may actually be non-problems. But I doubt we will be seeing any AAA titles in XNA/C# any time soon, and I doubt that is due to the inertia of the entire industry and C++.

I would suspect that the next gen of consoles will make provisions to run managed code more efficiently, and in particular mixing and matching between C/C++ for low level systems and C# or similar for "everything else". E.g. on the Xbox 360, JITted code has to run in user mode, which incurs a performance hit. It would certainly be nice if that wasn't the case (either if it didn't need to switch, or if the performance hit was smaller).

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862 |
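A small sketch of the unboxing and strictness annotations mentioned above, assuming GHC; the data type and function are invented for illustration:

    {-# LANGUAGE BangPatterns #-}

    -- Strict fields: the ! marks each component as evaluated, and with
    -- -funbox-strict-fields GHC can store them as raw unboxed floats.
    data Vec3 = Vec3 !Float !Float !Float

    -- A bang pattern on the accumulator keeps the fold strict, so the
    -- loop runs in constant space instead of building a chain of thunks.
    sumLengthsSq :: [Vec3] -> Float
    sumLengthsSq = go 0
      where
        go !acc []                  = acc
        go !acc (Vec3 x y z : rest) = go (acc + x*x + y*y + z*z) rest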
|
From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-26 10:31:02
|
On Sun, Apr 26, 2009 at 2:08 AM, Sebastian Sylvan <seb...@gm...> wrote:
> They do actually. Haskell always infers the *most general* type for a function. Adding a type signature can specialize it, giving you faster code (by avoiding the need for runtime dispatch of any polymorphic functions - if the compiler knows that the * always refers to integer multiplication, it can just emit the native honest-to-goodness int multiplication instruction directly).

That's an interesting property. I've never seriously used Haskell, but in ML, even though the * operation can be of type 'a -> 'a, the compiler will specialize it to int -> int in the cases where the type inference allows it to do such, with no dynamic dispatch required unless there is the possibility of union/any types running around (which I've found to be very rare).

> As an aside, the Haskell way is actually great for writing generic code (in C++ et al. you have to do extra work to get generic code, in Haskell you have to do extra work to specialize it). Also, a little hand-wavey: polymorphism actually restricts the number of valid implementations for a given type

You'll find no argument from me about the merits of type inference.

> (e.g. a type of a -> a has precisely one implementation, id, whereas a type of Integer -> Integer has an infinite number of implementations),

I'm sorry, I don't follow. A function of type 'a -> 'a is a superset of int -> int, thus its implementations must contain the latter.

> As for Haskell, it is always completely statically and strongly typed. It just happens to infer those types for you at compile time, saving you some typing (no pun intended).

I hate to waste time arguing semantics/vocabulary, but I had meant adding "type annotations".

> I would suspect that the next gen of consoles will make provisions to run managed code more efficiently, and in particular mixing and matching between C/C++ for low level systems and C# or similar for "everything else". E.g. on the Xbox 360, JITted code has to run in user mode, which incurs a performance hit. It would certainly be nice if that wasn't the case (either if it didn't need to switch, or if the performance hit was smaller).

Perhaps, but I still don't think that C# is well designed for game development, and this "something similar" doesn't exist yet and AFAICT no one is working on it. If the language doesn't exist, it's difficult to design the hardware to run it well.

Nicholas "Indy" Ray |
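One middle ground between definition-site and use-site specialization, in GHC at least, is a SPECIALIZE pragma: keep the general definition and ask for a monomorphic copy explicitly. A rough sketch, with an invented function:

    -- The general, class-polymorphic definition...
    norm :: Floating a => [a] -> a
    norm xs = sqrt (sum [x * x | x <- xs])

    -- ...plus a pragma asking GHC to also compile a dedicated Double
    -- version, so Double call sites avoid dictionary passing without
    -- relying on cross-module optimization.
    {-# SPECIALIZE norm :: [Double] -> Double #-}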
|
From: Sebastian S. <seb...@gm...> - 2009-04-26 10:59:38
|
On Sun, Apr 26, 2009 at 11:30 AM, Nicholas "Indy" Ray <ar...@gm...> wrote:
> On Sun, Apr 26, 2009 at 2:08 AM, Sebastian Sylvan <seb...@gm...> wrote:
> > They do actually. Haskell always infers the *most general* type for a function. Adding a type signature can specialize it, giving you faster code (by avoiding the need for runtime dispatch of any polymorphic functions - if the compiler knows that the * always refers to integer multiplication, it can just emit the native honest-to-goodness int multiplication instruction directly).
>
> That's an interesting property. I've never seriously used Haskell, but in ML, even though the * operation can be of type 'a -> 'a, the compiler will specialize it to int -> int in the cases where the type inference allows it to do such, with no dynamic dispatch required unless there is the possibility of union/any types running around (which I've found to be very rare).

This may require a lot of cross-module optimization, so it is a bit harder to do in general. But yes, the specialization can happen at the "use site" - that requires that the use site itself knows it's using an Int and not "Num a". So at some point there needs to be something restricting it to a specific type, and the closer that happens to the implementation, the less cross-module optimization you need to get it.

> > (e.g. a type of a -> a has precisely one implementation, id, whereas a type of Integer -> Integer has an infinite number of implementations),
>
> I'm sorry, I don't follow. A function of type 'a -> 'a is a superset of int -> int, thus its implementations must contain the latter.

If I (or rather, the context in which a function is called) give you the type "a->a", and ask you to implement a function satisfying that type, there is only one implementation (id). Since you know nothing about what type the parameter passed in has, you can't do anything except just return it back out. Likewise for "(a,b)->a" (fst).

On the other hand, if I give you the type "Integer->Integer" and ask you to write an implementation, there's an infinite number of possibilities that can satisfy that type (e.g. +1, +2, etc.).

So the point is that if the expected type of a specific function is polymorphic, then you have less wiggle room to write something that satisfies the type - and in some cases the number of implementations that can satisfy the type is just one. Even when you add some non-polymorphic stuff to the type, every polymorphic part will cut out a "degree of freedom" from the implementation. The fact that you know something is an Int means you can "do more" to the variable - if it's fully polymorphic you can't do anything to it (and likewise if it's "numeric" you can only do maths on it, and so on).

Thus, the more polymorphic the type, the smaller the valid "implementation space" is, and therefore the more likely it is that an incorrect implementation will be caught by the type checker.

> Perhaps, but I still don't think that C# is well designed for game development, and this "something similar" doesn't exist yet and AFAICT no one is working on it. If the language doesn't exist, it's difficult to design the hardware to run it well.

Well, I wouldn't really consider C++ to be well designed for game development either, so it's all about the relative merits, I guess. Personally I'd prefer C# over C++ in 90% of game code, assuming that we get a proper incremental garbage collector, good static null-check and bounds-check elimination, etc.
F# is looking pretty good. -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862 |
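To make the "implementation space" argument concrete, a small Haskell sketch (the names are invented for illustration):

    -- A fully polymorphic type pins the implementation down completely:
    -- the only total function with this type is the identity.
    onlyChoice :: a -> a
    onlyChoice x = x

    -- Likewise, (a, b) -> a forces you to return the first component.
    firstOf :: (a, b) -> a
    firstOf (x, _) = x

    -- A concrete type leaves infinitely many implementations open, so
    -- a slip such as writing (+ 2) here would still type check.
    step :: Integer -> Integer
    step n = n + 1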
|
From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-26 21:17:27
|
On Sun, Apr 26, 2009 at 3:59 AM, Sebastian Sylvan <seb...@gm...> wrote:
> If I (or rather, the context in which a function is called) give you the type "a->a", and ask you to implement a function satisfying that type, there is only one implementation (id). Since you know nothing about what type the parameter passed in has, you can't do anything except just return it back out. Likewise for "(a,b)->a" (fst).
>
> On the other hand, if I give you the type "Integer->Integer" and ask you to write an implementation, there's an infinite number of possibilities that can satisfy that type (e.g. +1, +2, etc.).
>
> So the point is that if the expected type of a specific function is polymorphic, then you have less wiggle room to write something that satisfies the type - and in some cases the number of implementations that can satisfy the type is just one. Even when you add some non-polymorphic stuff to the type, every polymorphic part will cut out a "degree of freedom" from the implementation. The fact that you know something is an Int means you can "do more" to the variable - if it's fully polymorphic you can't do anything to it (and likewise if it's "numeric" you can only do maths on it, and so on).
>
> Thus, the more polymorphic the type, the smaller the valid "implementation space" is, and therefore the more likely it is that an incorrect implementation will be caught by the type checker.

Ah, I understand. And I feel this is the beauty of type inference, as the simple act of providing an implementation for a function automatically specializes it. In Caml, for instance, I do not have to provide any type annotations for the function let f(x, y, z) = x +. y +. z;; in order for the compiler to know that it is of type float * float * float -> float, and thus the only time you encounter an 'a -> 'a is for identity, which doesn't occur very often.

Nicholas "Indy" Ray |
|
From: Sebastian S. <seb...@gm...> - 2009-04-27 07:24:34
|
On Sun, Apr 26, 2009 at 10:17 PM, Nicholas "Indy" Ray <ar...@gm...> wrote:
> On Sun, Apr 26, 2009 at 3:59 AM, Sebastian Sylvan <seb...@gm...> wrote:
> > If I (or rather, the context in which a function is called) give you the type "a->a", and ask you to implement a function satisfying that type, there is only one implementation (id). Since you know nothing about what type the parameter passed in has, you can't do anything except just return it back out. Likewise for "(a,b)->a" (fst).
> >
> > On the other hand, if I give you the type "Integer->Integer" and ask you to write an implementation, there's an infinite number of possibilities that can satisfy that type (e.g. +1, +2, etc.).
> >
> > So the point is that if the expected type of a specific function is polymorphic, then you have less wiggle room to write something that satisfies the type - and in some cases the number of implementations that can satisfy the type is just one. Even when you add some non-polymorphic stuff to the type, every polymorphic part will cut out a "degree of freedom" from the implementation. The fact that you know something is an Int means you can "do more" to the variable - if it's fully polymorphic you can't do anything to it (and likewise if it's "numeric" you can only do maths on it, and so on).
> >
> > Thus, the more polymorphic the type, the smaller the valid "implementation space" is, and therefore the more likely it is that an incorrect implementation will be caught by the type checker.
>
> Ah, I understand. And I feel this is the beauty of type inference, as the simple act of providing an implementation for a function automatically specializes it. In Caml, for instance, I do not have to provide any type annotations for the function let f(x, y, z) = x +. y +. z;; in order for the compiler to know that it is of type float * float * float -> float, and thus the only time you encounter an 'a -> 'a is for identity, which doesn't occur very often.

Well, the context in which the function is used can constrain (or generalize, depending on your perspective) it to be more polymorphic (e.g. it may be used for both floats and ints). I guess my point is that if the language encourages you to write polymorphic code (in fact, makes it the default), then the amount of "slack" in the implementation space that will still satisfy the type checker is reduced, which leads to the "works once the compiler stops complaining" effect.

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862 |
|
From: Sylvain G. V. <vi...@ii...> - 2009-04-27 09:14:19
|
> GC just allows you to be lazy in this regard.

Actually, a good example against using GC has a name: Mozilla Firefox, the never-satisfied memory piggy eater :D Still, I love my Ffox.

Sorry for the slightly off-topic post. |
|
From: Jon W. <jw...@gm...> - 2009-04-28 04:24:57
|
Jason Hughes wrote:
> known offline. Pawn was also a register machine rather than a stack machine, so it could be compiled down to very fast native code, compared to most other JITs.

Is that really a difference? You can transform between the two systems, so they are semantically equivalent, although some constructs have more straightforward transforms than others. That, and the fact that the x86/32 is almost a stack machine already ;-)

> LUA worries me a bit because I've heard it's a little fast-and-loose with memory, though I imagine that depends a lot on the way scripts are written.

I've loved Lua to death for a long time for its smallish code size, its straightforward C integration, its clear separation of interpreter context and its lightweight approach to closures. However, despite that, I've never come to really like the Lua syntax. It's just slightly too weird to look good.

Meanwhile, Python has always had a nice library and light-weight syntax (different enough to be unique, for sure!), but its C integration and size are quite cumbersome, and trying to do closures from C is like trying to pull teeth. (No, boost::python doesn't much help, I'm afraid.)

So, how to design a well-adopted, robust, small, powerful language with pretty syntax and an effective C integration layer? And what language has all that but I've missed? :-)

Sincerely,

jw |
|
From: Marc B. R. <mar...@or...> - 2009-04-28 06:54:47
|
> So, how to design a well-adopted, robust, small, powerful language > with > pretty syntax and an effective C integration layer? And what language > has all that but I've missed? :-) I've always wanted to goof around with http://www.iolanguage.com/. The first real OO language I learned was Smalltalk http://code.google.com/p/syx/ (Haven't looked at this either). Neither AngelScript nor LUA has ever appealed to me. |
|
From: Rachel B. <r....@gm...> - 2009-04-28 12:27:07
|
> There's JavaScript, which has pretty huge adoption (outside of games), C-like syntax, closures, prototype-based inheritance, and active communities working on the language spec, on implementations, and on applications. It is reasonably elegant and powerful (see some articles on http://www.crockford.com/javascript/), is quite small (the language itself, as distinct from the web browser environment with DOM and AJAX and all that stuff it typically runs in),

The downside: if you want to include additional JavaScript libraries, you have to (as far as I know) resort to the web browser as a preprocessor, which kind of destroys any idea of modularity. I like the language a lot, but this one "feature" kills it outright.

If you (the OP) really want a C-like scripting language, see Ch (http://www.softintegration.com/) or CINT (http://root.cern.ch/drupal/content/cint).

Rachel |
|
From: Philip T. <ex...@gm...> - 2009-04-28 13:55:05
|
On Tue, Apr 28, 2009 at 1:26 PM, Rachel Blum <r....@gm...> wrote: >> >> There's JavaScript, which has pretty huge adoption (outside of games), >> C-like syntax, closures, prototype-based inheritance, active >> communities working on the language spec and on implementations and on >> applications, is reasonably elegant and powerful (see some articles on >> http://www.crockford.com/javascript/), is quite small (the language >> itself, distinct from the web browser environment (with DOM and AJAX >> and all that stuff) it typically runs in), > > The downside: If you want to include additional JavaScript libraries you > have to (as far as I know) resort to the web browser as preprocessor. Which > kind of destroys any idea of modularity. There's no need to get a web browser involved; but you do need your embedding application to provide some functionality, since JS has no built-in API for accessing files or networks. (The core language does arrays, strings, regexps, dates, maths, etc, but little else.) That's pretty trivial - in SpiderMonkey it's a dozen lines of code to define a C function that reads your script library files and passes them to the script evaluation API, and to expose it to scripts so they can call it themselves to load other scripts. The language has no built-in support for concepts like namespacing, but you can use the same tricks that modern web scripting libraries use (e.g. wrapping files in anonymous functions so they don't pollute the global namespace, and then exporting everything as properties of a single namespace-like global value), and you can do some other tricks using the JS engine's embedding API. But I guess this is the wrong list for continuing discussion of such things... -- Philip Taylor ex...@gm... |
|
From: Jon W. <jw...@gm...> - 2009-04-28 16:18:16
|
Philip Taylor wrote:
> On Tue, Apr 28, 2009 at 5:24 AM, Jon Watte <jw...@gm...> wrote:
>
> There's JavaScript, which has pretty huge adoption (outside of games),
>
> and all that stuff) it typically runs in), and is designed to be embedded into another application and to execute untrusted (often malicious) code.

Except the available implementations are, by and large, terrible for embedding -- even worse than Python. Given that JS is found mostly in web browsers, the actual implementations I've seen are all terribly browser-centric, and not even factored or documented for separate embedding.

Although V8 looks good -- I hadn't looked at that previously.

Sincerely,

jw |
|
From: Mike S. <mik...@gm...> - 2009-04-28 20:14:23
|
On Tue, Apr 28, 2009 at 9:17 AM, Jon Watte <jw...@gm...> wrote: > Except the available implementations are, by and large, terrible for > embedding -- even worse than Python. Given that JS is found mostly in > web browser, the actual implementations I've seen are all terribly > browser-centric, and not even factored or documented for separate embedding. Which implementations? The Mozilla JS engine has been embedded in non-browser environments since I started working on it in 1997, and indeed before. V8 is pretty browser-independent as well, not sure about Squirrelfish/Nitro. Mike |
|
From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-28 20:14:29
|
Last I checked, the SquirrelFish Extreme engine was built comparatively well for embedding in applications other than the browser.

Nicholas "Indy" Ray

On Tue, Apr 28, 2009 at 9:17 AM, Jon Watte <jw...@gm...> wrote:
> Philip Taylor wrote:
>> On Tue, Apr 28, 2009 at 5:24 AM, Jon Watte <jw...@gm...> wrote:
>>
>> There's JavaScript, which has pretty huge adoption (outside of games),
>>
>> and all that stuff) it typically runs in), and is designed to be embedded into another application and to execute untrusted (often malicious) code.
>
> Except the available implementations are, by and large, terrible for embedding -- even worse than Python. Given that JS is found mostly in web browsers, the actual implementations I've seen are all terribly browser-centric, and not even factored or documented for separate embedding.
>
> Although V8 looks good -- I hadn't looked at that previously.
>
> Sincerely,
>
> jw |
|
From: Andrew V. <and...@ni...> - 2009-04-22 16:44:57
|
> Wouldn't that be a tough sell? You'd already be competing > with free implementations of LUA, Python, JavaScript and > their ilk on the low end, and built-in languages like > UnrealScript on the high end. While middleware for something > like mesh exporting and animation (Granny) or something like > networking or AI make sense, because there is no good free > library for those areas, the scripting language market seems > full of entrenched competitors with a zero dollar price point. Maybe "scripting language" is the wrong term. I'm thinking of something that sits between the low-level C/C++ "details" and the high-level, LUA-style, scripting - probably for the purpose of enabling easier multithreading of gameplay style code (rather than engine-side code). I'm not saying you can't extend this language in either direction, but that's where I see the most use of it (YMMV). :) > > There definitely needs to be a change to the way most games are > > written when considering new hardware. It's perfectly > possible that a > > new/different language might be a good way to go - however, I'd be > > concerned that it would introduce more complexity, i.e. > > Doesn't this bring us back full circle? I recall a statement > from a month ago saying that we all need to think differently > about how we put together massively parallel software, > because the current tools don't really help us in the right > ways... That's not just games, mind you, but business > software is often less performance critical, and server > software already has a reasonable parallelization strategy > with data and service federation (and 800-way CPU boxes like > those from Azul...). Yep, agreed. My point was more whether the best way forward would be with a new language or whether new/different programming techniques for existing languages (which means C++, I guess) would be better. Cheers, Andrew. |
|
From: Rachel B. <r....@gm...> - 2009-04-25 21:56:51
|
> My point was more whether the best way forward would be with a new > language > or whether new/different programming techniques for existing languages > (which means C++, I guess) would be better. As far as I'm concerned, C++ is nearing a breaking point. We constantly cram new features into it (sorry, I meant "we extend the number of supported paradigms" ;), and design is by committee. Which leads to an extremely powerful and overly complex language that's almost unreadable. On top of that, the tool set is falling more and more behind. At least for private projects, I've almost completely abandoned it - work has a slightly higher inertia ;) Rachel |
|
From: Nicholas "Indy" R. <ar...@gm...> - 2009-04-21 22:10:05
|
On Tue, Apr 21, 2009 at 12:43 PM, Pal-Kristian Engstad <pal...@na...> wrote:
> Goal was a great system - one that we still greatly miss. If we had to make a Goal2, then we'd probably:
>
> Use an SML-ish or Scala-ish surface syntax, while trying to retain the power of Lisp macros.

Is there any reason you would choose against S-expressions? I don't know about Scala, but I find SML syntax to be a little less maintainable for large systems and a lot less usable for macros; is this mostly a matter of preference by the team, or do you perhaps think it'd be easier for new employees to learn?

> Introduce stronger typing features, which is difficult, given the need for REPL and hot updates.
> Do more in terms of high-level optimization, though LLVM might negate the need for some of that.

LLVM is quite nice, and while it'll take some infrastructure, and certainly a resident compiler instance, I don't suspect that hot updates would be too much of a problem with a better-typed (likely type-inferred) programming language.

Indy |
|
From: Sam M. <sam...@ge...> - 2009-04-25 20:25:15
|
Yeah, that's what I'm talking about! :) I was trying to resist getting excited and going into over-sell mode, but likely understated how much potential I think there is here. To highlight just two more points I think are important:

- Haskell stands a very good chance of allowing games to really get on top of their (growing) complexity. I think this is best illustrated in the paper "Why functional programming matters", http://www.cs.chalmers.se/~rjmh/Papers/whyfp.html. Well worth a read if you've not seen it before.

- It can be interactively evaluated and extended. Working with C/C++ we get so used to living without this that I think we potentially undervalue how important a feature it is.

Cheers,
Sam

-----Original Message-----
From: Sebastian Sylvan [mailto:seb...@gm...]
Sent: Sat 25/04/2009 19:16
To: Game Development Algorithms
Cc: and...@ni...
Subject: Re: [Algorithms] Complexity of new hardware

On Wed, Apr 22, 2009 at 5:52 PM, Sam Martin <sam...@ge...> wrote:
> > Wouldn't that be a tough sell? You'd already be competing with free
> > implementations of LUA, Python, JavaScript and their ilk on the low end,
> > and built-in languages like UnrealScript on the high end.
>
> I don't think there's a market for that kind of scripting DSL. A new language would need to eat into the remaining C++ development burden that isn't suitable for implementing in Lua, say. Which is plenty.
>
> > Doesn't this bring us back full circle? I recall a statement from a
> > month ago saying that we all need to think differently about how we put
> > together massively parallel software, because the current tools don't
> > really help us in the right ways...
>
> Another reason to consider pure functional languages. This is a much deeper topic that I'm now about to trivialise, but the referential transparency of these languages makes them particularly suitable to parallel evaluation. For example, GHC (arguably the most mature Haskell compiler) can compile for an arbitrary number of cores, although it's still an active research area as I understand it.

Being a massive Haskell fanboy myself, let me jump in with some other cool things it does that relate to game development.

1. It's starting to get support for "nested data parallelism". Basically, flat data parallelism is what we get with shaders now; the problem with that is that the "per-element operation" can't itself be another data parallel operation. NDP allows you to write data parallel operations (on arrays) where the thing you do to each element is itself another data parallel operation. The compiler then has a team of magic pixies that fuses/flattens this into a series of data parallel applications, eliminating the need to do it manually.

2. It has Software Transactional Memory. So when you really need shared mutable state you can still access it from lots of different threads at once with optimistic concurrency (only block when there's an actual conflict). Yes, there are issues, and yes it adds overhead, but if the alternative is single threaded execution and the overhead is 2-3x, then we win once we have 4 hardware threads to spare.

3. Monads! Basically this allows you to overload the semicolon, which means you can fairly easily define your own embedded DSLs. This can let you write certain code a lot more easily. You could have a "behaviour" monad, for example, abstracting over all the details of entities in the game doing things which take multiple frames (so you don't need to litter your behaviour code with state machine code, saving and restoring state etc. - you just write what you want to do and the implementation of the monad takes care of things that need to "yield").

4. It's safe. Most code in games isn't systems code, so IMO it doesn't make sense to pay the cost of using a systems programming language for it (productivity, safety).

5. It's statically typed with a native compiler, meaning you could compile all your scripts and just link them into the game for release and get decent performance. Not C-like (yet, anyway!), but probably an order of magnitude faster than most dynamic languages.

--
Sebastian Sylvan
+44(0)7857-300802
UIN: 44640862 |
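A small sketch of the STM point above, assuming GHC's stm package; the names are invented for illustration:

    import Control.Concurrent.STM

    -- Shared state that several gameplay threads may update; each
    -- transaction only retries if another thread actually conflicted.
    addScore :: TVar Int -> Int -> IO ()
    addScore score n = atomically $ do
        s <- readTVar score
        writeTVar score (s + n)

    main :: IO ()
    main = do
        score <- atomically (newTVar 0)
        addScore score 100
        total <- atomically (readTVar score)
        print total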
|
From: Richard M. <mi...@tr...> - 2009-04-16 23:34:13
|
This is most definitely not an algorithm.
--
()() Richard Mitton
( '.')
(")_(") Beard Without Portfolio :: Treyarch
----- Original Message -----
From: "Jon Watte" <jw...@gm...>
To: "Game Development Algorithms" <gda...@li...>
Sent: Thursday, April 16, 2009 1:05 PM
Subject: [Algorithms] Complexity of new hardware
> Nicholas "Indy" Ray wrote:
>> On Thu, Apr 16, 2009 at 11:14 AM,
>> <chr...@pl...> wrote:
>>
>>> The fact that you have to rely on labelling assembly instructions
>>> and for-loops as "general components" to provide support for your
>>> initial statement shows how weak your statement was.
>>>
>>
>> Additionally with over 200 assembly instructions in x86, with more and
>> more special case assembly instructions being added, I wouldn't really
>>
>
> My favorite new x86 instruction:
> vmadd231ps v0 {k1}, v5, [rbx+rcx*4] {4to16}
>
> Dealing with complexity is really becoming quite a chore, and our tools
> just aren't keeping up IMO.
> What good ways are you seeing of lifting game design and implementation
> up onto higher levels of abstraction? Is it just middleware, or will the
> middleware guys in turn need newer, better ways of doing things?
>
> Sincerely,
>
> jw
>
|
|
From: Jon O. <ze...@gm...> - 2009-04-16 22:59:02
|
I heard that Google also optimizes their code for power consumption; though this is not very relevant for games, it does apply to certain types of software development.

- Jon Olick

On Thu, Apr 16, 2009 at 2:02 PM, Tony Cox <to...@mi...> wrote:
> I think you guys are getting tied up in an argument about semantics.
>
> The real point here, IMHO, is that software development is a *tradeoff*. There are many things we'd like in any piece of software:
>
> - Good performance (both in time and space)
> - Robustness
> - Maintainability
> - Shipped on time
> - Shipped on budget
> - Features
> - Interoperability
> - Testability
> - Portability
> - Reusability (perhaps this means being more rather than less general)
> - And many others.
>
> Clearly, holding all other variables equal, it's better to have more of these good things than less. But you don't get to hold the other variables equal - you're always trading off things against each other. Would I like any piece of code I write to be more general purpose and more reusable? Sure, but (a) do I even know how to achieve that goal, and (b) am I willing to trade off something else to get it? Maybe not. Perhaps almost always not.
>
> - Tony
>
> -----Original Message-----
> From: chr...@pl... [mailto:chr...@pl...]
> Sent: Thursday, April 16, 2009 11:14 AM
> To: Game Development Algorithms
> Subject: Re: [Algorithms] General purpose task parallel threading approach
>
> > "Other than classroom examples, software is NOT written through
> > composition of 'general components'. And even in the small subset
> > of software writing where your statement may hold true, writing
> > those components was a very small amount of work of the total."
> >
> > No, every piece of software in a modern language is written through
> > composition of "general components", from the machine code op-codes,
> > up to the language constructs they come from, up to the libraries,
> > patterns and algorithms they're constructed with.
>
> The fact that you have to rely on labelling assembly instructions and for-loops as "general components" to provide support for your initial statement shows how weak your statement was.
>
> Christer Ericson, Director of Tools and Technology
> Sony Computer Entertainment, Santa Monica |
|
From: ~BG~ <arc...@gm...> - 2009-04-17 17:16:16
|
Actually it is still relevant in the handheld/mobile phone space...

.ben

On Thu, Apr 16, 2009 at 3:50 PM, Jon Olick <ze...@gm...> wrote:
> I heard that Google also optimizes their code for power consumption; though this is not very relevant for games, it does apply to certain types of software development.
> - Jon Olick |