Thread: Re: [GD-General] Scripting
From: brian h. <bri...@py...> - 2002-12-07 03:15:59
> I think data description (let alone "rich" data description) is reason
> enough to use something like Lua (which started its life as a data
> description language).

I agree, and in fact that's what I use Lua for right now. It stores my
configuration variables, and I also use it to execute extremely crude "level
scripts" for some of our 2D games. But I've struggled a bit to figure out how
to go past that basic functionality, which is part of the reason that I'm so
down on scripting recently =)

> I think you're taking on a couple separate issues here, 1) the
> language itself, and 2) the development environment (i.e. the
> mechanics of how you get your code into the running game).

Sure, I kinda tossed them in together, but at the same time, they're both
valid reasons why scripting is a pain in the ass for non-developers to use.
Lack of a debugger alone can be an absolute killer, especially if you have a
huge number of scripts and some small corner case that only manifests itself
under certain conditions.

> I can say that about Python (and Lua) with a straight face. Although
> "intrinsically easier" is pretty subjective, in the absence of user
> studies.

I hate to be so curmudgeonly, but I think you're too experienced to make that
call =) Even a simple one-line bit of Python looks like gibberish to a
non-programmer. I mean, this is something pretty simple:

data = filter(lambda x: x>=' ' and x<='z', data)

Yet you show that to an artist, and they're not going to have any idea what
it does. Something as basic as parens for a function call -- or a function
call, period -- can be daunting, and that's something most languages will
have. Same with this chunk of Lua:

function List:reverse()
  local t,n = {},getn(self)
  for i=1,n do t[i]=self[n-i+1] end -- reverse
  for i=1,n do self[i]=t[i] end     -- copy back
end

Now, I know that these aren't that obtuse, but they're obtuse enough to a
non-programmer that I just think the difference between good C code and the
above isn't going to be that huge either way.

> I generally agree with your point that the "language" part
> of "scripting language" is *not* the killer feature that makes it
> worth doing.

Right, that's a good summary of the way I feel.

> IMO, in order to get designer friendliness and faster prototyping, the
> key thing to focus on is the "development environment". (And I don't
> mean "IDE" per se; I mean whatever it is specifically that a designer
> has to do to see her changes in the game, and all the little
> mechanical details of how feedback comes out, and how you debug, etc.)

Agreed. For example, a level designer/mapper should be able to access
scripting facilities from inside their level editor -- assuming they even
need to see the scripting code at all.

> I've never worked on a commercial game whose average edit-compile-run
> turnaround was less than a minute (and usually it has been at least 2x
> worse than that).

One minute, okay. Five minutes, even that's fine. For what it's worth, when I
worked on Quake 2 and Quake 3, complete build times were on the order of a
minute IIRC, and compile-edit-run was measured in seconds. I don't think
anything magical was done for that either -- it was straight ANSI C with the
subsystems broken apart into DLLs to avoid interdependencies. But these days
I'm hearing stories about build times measured in the tens of minutes or, in
some cases, hours. HOURS. I just find this unimaginably bad, and in many
cases it's because of heavy use of C++ templates or STL or other things like
that.
The rationale is that stuff is bigger, huger, and more complex these days,
but at what cost?

> The script-whatever-run cycle has the *potential*
> to be very quick, although it also tends to get bogged down by
> suboptimal process, in my experience.

I would argue that it gets bogged down more often than not. When you talk to
people that have written scripting languages or used them, inevitably it's
trading off one set of evils for another set that are often just as bad and
sometimes even worse.

> Like, scripting engines where
> it's theoretically trivial to inject updated scripts into the running
> game, but nobody exposed the functionality in a way that the designers
> could use. A simple "load(script)" command that you can trigger from
> the console in the running game goes a long way. In general, I think
> using a scripting language for a console language is a handy thing.

I think this is what Bioware did with Baldur's Gate et al.

> Well, you can do either of the above, or both at the same time, with a
> scripting language. (And, as I said, I think there are advantages to
> using a scripting language's parser to digest your data.)

The main reason I started using Lua for my cvars is that I found I was
writing Yet Another Crappy Parser, and figured I'd just let Lua handle it.

The thing with scripting languages, though, is that finding the right place
to bifurcate is extremely hard. I really think this is glossed over too much.
Using our door example, you have to neatly define "What parts of being a door
does the script decide; what parts of being a door does the engine decide;
and how do they reconcile those aspects?" And no matter where you draw that
line, you incur some kind of cost in development time, run-time performance,
robustness, features, etc.

At one extreme, the "script door" is nothing but data defs read by the
engine, at which point you have very little flexibility, since any new
capability implemented by the engine has to be exported to the script
designers over and over again. "Oh, I can now make doors out of different
materials, must support this capability."

At the other end of the spectrum, ALL properties of the game are done in the
script, and the engine is solely responsible for low level system tasks like
rendering, marshalling input, low level audio, etc. At that point you're also
in a special form of hell, because you've effectively exported huge chunks of
your engine to the script so that the script can do meaningful work, AND
you've put yourself in a position where your primary development pipeline is
gated by scripting tools which have a high likelihood of sucking. And,
honestly, if you make any major changes to your engine, you're probably
breaking the scripts anyway.

> finalDoor = {
>    type = door,
>    key = function(opener)
>       if opener == evil_wizard then
>          return nil
>       else
>          return redKey
>       end
>    end
> }

That's a slippery slope you walk upon =)

> Anyway, I also struggle with the issues you raise. I do think it's
> very helpful to know about all the things that are wrong with
> scripting languages before deciding to use one, and especially after
> you're committed to using one.

In this particular case, I went from using Lua, to looking at everything
else, to deciding to roll my own, to back to using Lua, all in the span of
about a week.
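To make the door bifurcation above concrete, here's a rough sketch of one way
to draw that line -- the engine owning the mechanics of being a door, the
script only supplying policy. All the names (Door, Entity, OpenPolicy) are
invented for illustration; this isn't from any shipping engine.

// The engine owns *how* a door opens (animation, collision, sound); the
// script side boils down to one policy hook: "may this opener open me?"
// Invented names, illustrative only.

#include <string>

struct Entity { std::string name; };

typedef bool (*OpenPolicy)(const Entity& door, const Entity& opener);

class Door
{
public:
    explicit Door(OpenPolicy policy) : m_policy(policy), m_open(false) {}

    void TryOpen(const Entity& self, const Entity& opener)
    {
        if (m_policy && m_policy(self, opener))
            m_open = true;   // engine side: kick off animation, sound, physics
    }

    bool IsOpen() const { return m_open; }

private:
    OpenPolicy m_policy;
    bool       m_open;
};

// A C++ stand-in for what the script would supply (mirroring the quoted Lua,
// where the evil wizard is the one opener that needs no key):
static bool EvilWizardNeedsNoKey(const Entity&, const Entity& opener)
{
    return opener.name == "evil_wizard";
}

Everything interesting about the trade-off lives in how fat that policy hook
has to get: the more arguments and results it needs, the more of the engine
you end up exporting to the script.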
It was extremely helpful to try writing my own, because it made me acutely
aware of all the different implementation and design trade-offs that a
language has to make, and by extension I learned why different languages are
good at different things and bad at others. And that's why I've reached this
philosophy now: once I gave myself the "freedom" to write my own language, I
realized that the _language_ wasn't the problem. The language and its
implementation can eventually become a problem, e.g. Lua's garbage collector
or Python's footprint, but long before you reach that point, the integration
and development process issues will have become a much bigger concern.

I'm quite amazed at the number of developers I run into that use scripting
languages but deep down inside still aren't comfortable with how and where
they've integrated them into their games, which is why I'm so leery of
scripting at this point. You hear a lot about backdoor hacks to get pieces of
the engine and the scripts to communicate something that isn't cleanly
defined by the pre-determined APIs. For many developers it makes a lot of
sense; I just fear for the developers (and games) that feel a certain
obligation to be script driven because everyone else is.

For example, console games -- obviously the mod community isn't going to be
targeting console games, so I'm curious to hear why/when/how someone would
fully script AI, combat, quests, levels, etc. for a console game. I'm sure
there are good reasons, but I'm real curious whether the actual benefits
derived were worth the hassle (and whether they couldn't have been reached
some other way).

This is a separate subject, but I really do feel that in many ways games are
being overengineered. Tempting technologies like C++, STL, scripts, and
garbage collection/ref-counting/smart pointers keep rearing their heads, and
they make such promises of how everything will be better, but when you get
right down to brass tacks, rarely have things improved.

I have a C++ application framework code base, and I have some open source
stuff I'm developing on the side that is ANSI C. The latter stuff, without
exception, is better architected than the former, because C++ hierarchies
have a tendency to "harden". Code acquires a sense of mass and inertia that
makes it difficult to modify, and when you have huge amounts of
interdependent code, like a C++ class hierarchy, trivial changes become
non-trivial.

STL is the same thing -- it's so great to have something like std::set or
std::list available all the time, but holy crap, is writing a linked list
REALLY that stressful? Because the last couple of times I used STL, the
"15-second STL list vs. 5-minute self-rolled list" time advantage evaporated
the first time I had to read STL code in the debugger and stare at opaque
pointers with no idea what died and where. But I was convinced that STL was a
"better" way of doing things. Then you start hearing horror stories about
build times with STL, rapid growth in binary size, the inability to debug
code deep inside STL, etc., and suddenly you find that your project is
suffering in many different areas because you wanted to take the easy way out
and reuse a very popular and successful library.

This isn't about NIH vs. non-NIH, it's about theoretical vs. actual benefits
of things that are touted as "better" without any real objective data to back
that up. It's about purity of design vs. pragmatism in development. It's '80s
software engineering dogma vs. old fashioned "just getting it done".
Are STL and C++ worthwhile if you can no longer have an incremental build
cycle? Are scripts worthwhile if you can no longer debug them, or if you
spend just as much time dealing with script<->engine communication as you
would have spent writing the scripted stuff directly into the engine? Is
SourceSafe's GUI better than using cvs at the command line if you can no
longer check out stuff remotely from home? Is garbage collection really that
much better if you now have to search for cycles, and be real careful about
how you allocate objects at run-time, and when you gc, and when/how you
delete root objects that can cause cascading gc?

Okay, I'll be quiet now, I feel better =)

-Brian
From: brian h. <bri...@py...> - 2002-12-07 22:58:29
A paper by Scott Bilas of GPG on Skrit (their pseudo-scripting language)
particularly resonates with me:

http://www.drizzle.com/~scottb/ds/skrit.htm

It's a pretty good overview, and there are links in there, I think, to his
GDC talks, where I think he even says that all scripting languages suck (or
something to that effect).

Brian
From: Noel L. <ll...@co...> - 2002-12-07 23:47:22
On Sat, 07 Dec 2002 16:58:25 -0600 brian hook <bri...@py...> wrote:
> A paper by Scott Bilas of GPG on Skrit (their pseudo-scripting
> language) particularly resonates with me:
>
> http://www.drizzle.com/~scottb/ds/skrit.htm
>
> a pretty good overview,

I definitely like how it was tailored to their needs. No (or minimal) support
for loops, etc. I like the part about "not duplicating engine functionality".
Very true.

> and there's some links in there I think to his
> GDC talks where I think he even says all scripting languages suck (or
> something to that effect).

I didn't get that impression. I assume you refer to his GDC 2002 talk
(http://www.drizzle.com/~scottb/gdc/). I was at the talk, and I walked away
with the impression that it worked really well for them. Yes, he admitted
they used their scripts for more things than they had anticipated, and maybe
a few more of the really common ones should have been translated to C++, but
all in all it worked really well for their situation.

For me, it was his talk and paper that rekindled my interest (and faith) in
scripts after I had almost given them up as useless in game development.

--Noel
ll...@co...
From: <cas...@ya...> - 2002-12-08 02:20:34
Brian Hook wrote:
> A paper by Scott Bilas of GPG on Skrit (their pseudo-scripting
> language) particularly resonates with me:
>
> http://www.drizzle.com/~scottb/ds/skrit.htm
>
> a pretty good overview, and there's some links in there I think to his
> GDC talks where I think he even says all scripting languages suck (or
> something to that effect).

There are also a few chapters in AI Game Programming Wisdom on scripting
languages that I think are worth a read. Especially interesting are:

"How Not to Implement a Basic Scripting Language", which describes the wrong
decisions made during the development of NWScript.

"Scripting for Undefined Circumstances", which describes the language used in
Black & White to describe challenges and cinematic scenes.

The latter reminds me of the scripting language used in Ritual's FAKK2.
Scripts were almost a sequence of commands (like Quake console commands). The
script interpreter is just a tokenizer that interprets commands one by one.
Such a simple system is easy to implement, and mixed with an event system it
becomes quite powerful. It allowed you to describe things that happen
concurrently with ease. I expect that a very similar system will be used in
Doom 3, now that Jim Dose is working at id.

Ignacio Castaño
cas...@ya...
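For what it's worth, a command-list interpreter of that sort really can be
tiny. A minimal sketch (invented names and commands, not FAKK2's actual
code): each script line is a command name plus arguments, tokenized and
dispatched through a table.

#include <cstdio>
#include <cstring>

typedef void (*CommandFn)(int argc, char** argv);

static void CmdPrint(int argc, char** argv)
{
    for (int i = 1; i < argc; ++i) std::printf("%s ", argv[i]);
    std::printf("\n");
}

static void CmdWait(int argc, char** argv)
{
    std::printf("waiting %s seconds\n", argc > 1 ? argv[1] : "0");
}

static const struct { const char* name; CommandFn fn; } kCommands[] = {
    { "print", CmdPrint },
    { "wait",  CmdWait  },
};

static void RunLine(char* line)   // note: tokenizes the line in place
{
    char* argv[16];
    int argc = 0;
    for (char* tok = std::strtok(line, " \t\n"); tok && argc < 16;
         tok = std::strtok(0, " \t\n"))
        argv[argc++] = tok;

    if (argc == 0 || argv[0][0] == '/')   // blank line or comment
        return;

    for (size_t i = 0; i < sizeof(kCommands) / sizeof(kCommands[0]); ++i)
        if (std::strcmp(argv[0], kCommands[i].name) == 0)
        {
            kCommands[i].fn(argc, argv);
            return;
        }

    std::printf("unknown command: %s\n", argv[0]);
}

// Usage:  char line[] = "print opening the big door";  RunLine(line);

An event system on top of this is then little more than a queue of such
lines, each tagged with a time or a trigger.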
From: brian h. <bri...@py...> - 2002-12-08 00:49:15
> I didn't get that impression. I assume you refer to his GDC 2002 talk
> (http://www.drizzle.com/~scottb/gdc/). I was at the talk and I walked
> away with the impression that it worked really well for them.

I swore I read something by him to the effect that the way we (developers)
traditionally view scripting is fundamentally broken, and that what they did
with Go/Skrit was to re-examine the problem and think of it in terms of a
database. In other words, Go/Skrit isn't broken, but traditional scripting
is. But now I can't find where he said that, so I may have just
misinterpreted something else he said.

I might add that from what I see of Skrit, it's more what I'd consider a data
definition language than a scripting language. That's a semantics issue, but
it's an important one to me. When I complain about scripting languages, I'm
specifically referring to the logic that's placed in there, not the use of a
separate language to define external data. I mean, I don't expect everyone to
do:

const Vector kVerticesForGun[] = { ... };

=)

Brian
From: Evan B. <eb...@au...> - 2002-12-08 07:37:50
I have some thoughts...

> For example, console games -- obviously the mod community isn't going
> to be targeting console games, so I'm curious to hear why/when/how
> someone would fully script AI, combat, quests, levels, etc. for a
> console game. I'm sure there are good reasons, but I'm real curious if
> the actual benefits derived were worth the hassle (and if they couldn't
> have been reached some other way).

We are using a scripting language for our console games, and it uses a
C-style syntax. In hindsight this was a mistake. The first request I received
was to remove case sensitivity :)

We currently do not use the scripting language to fully orchestrate AI,
combat, quests, etc. It is more like "glue". Our designers are the primary
users of the scripting language, with the artists only using it to spawn
particle emitters. The designers use the scripting language functions just
like public interfaces to game objects:

// ....
SetPlatformDestination( 'BigFloatingRock', GetMarkerPos( 'CenterMarker01' ), 1.0 );

So we mainly use it to give simple instructions to existing game objects that
are populated by the designers in the level editing tool. The scripting
language works in concert with our data-driven event system.

> The thing with scripting languages though is that finding the right
> area to bifurcate is extremely hard.

The main problem we have is that the designers request the ability to do more
and more complicated tasks with the scripting language, but do not enjoy the
requisite increase in script complexity. I actually had one ask, "What I
really want is a very simple way to create really complicated behaviour." I
am still working on that request :)

To avoid creating Yet Another Crappy Parser, I used flex & bison to do the
hard work of creating the script compiler. I would like the next postmortem I
read to include the number of different text parsers used in the tool chain.

Evan Bell
ev...@ed...
From: Noel L. <ll...@co...> - 2002-12-09 21:10:16
We were talking about turnaround time of script languages vs. compiled C++,
and Brian Hook wrote:
> For what it's worth, when I worked on Quake 2 and Quake 3, complete
> build times were on the order of a minute IIRC, and compile-edit-run
> was measured in seconds. I don't think anything magical was done for
> that either -- it was straight ANSI C with the subsystems broken apart
> into DLLs to avoid interdependencies.

I'm totally impressed. When you say complete build times, does that mean
*all* the code involved in Quake 2 or Quake 3? All the DLLs, everything? Was
there some trick you were using to get such incredible build times (other
than using straight C)?

We're using C++ and STL, and even though we have a good physical organization
(like Lakos described), compiling all our libraries and game code for one
platform can take up to 15-20 minutes. That's only for a full build; a quick
change somewhere still requires building that file and then doing a link,
which is about 30-40 seconds.

I find 30-40 seconds for a trivial change close to unbearable. Especially
when it takes another 10 seconds for the game to start and another 10 seconds
for the level to load.

What are other people doing to reduce their compile times? Getting rid of STL
is not really an option. It's way too useful to replace it with a crippled,
custom container/algorithm set that is probably not even type safe. But any
tricks that reduce how many times the same templates get evaluated would be
welcome.

--Noel
ll...@co...
From: Brian H. <bri...@py...> - 2002-12-09 21:34:38
> I'm totally impressed. When you say complete build times, does that
> mean *all* the code involved in Quake 2 or Quake 3? All the DLLs,
> everything? Was there some trick you were using to get such
> incredible build times (other than using straight C)?

If I remember correctly, yes. Okay, Quake 2 is open sourced, so I just tested
it out. On my very modest 2xP3/933 w/ 512MB RAM using MSVC 6, compiling
ref_gl, game, and client took 37 seconds.

Let me repeat: 37 seconds.

Compiling each component averages 10-15 seconds for a full rebuild. If
editing a single source file, compile-edit-run is probably 3 seconds.

The trick was careful engineering. Nothing mind boggling, just doing the
obvious stuff. <windows.h> was limited to only the files that needed it. We
used ANSI C, not straight C++ -- but even so, if you don't use templates
heavily or STL, straight C++ can compile very quickly as well. Dependencies
between modules were well guarded -- changes to the renderer data structures
had no effect on the game source code, changes to the network stuff had no
effect on the renderer, etc.

There was absolutely nothing magical about it, and that's why I'm honestly
confused and addled by the seeming necessity today to spend 10+ minutes
rebuilding games on top notch hardware. And most new games today use
scripting languages, so the build times should be close to non-existent (Q2
had no scripting language). Quake 3 was not significantly more complex,
unless the Q3VM stuff that John did ended up taking a lot of time, but I
highly doubt that as well.

> We're using C++ and STL, and even though we have a good
> physical organization (like Lakos described), compiling all
> our libraries and game code for one platform can take up to
> 15-20 minutes. That's only for a full build; a quick change
> somewhere still requires building that file and then doing a
> link, which is about 30-40 seconds.
>
> I find 30-40 seconds for a trivial change close to
> unbearable. Especially when it takes another 10 seconds for
> the game to start and another 10 seconds for the level to load.

The above is exactly what I mean. STL is the devil's tool; I stand very
firmly on that. I completely dislike the way C++ has done templates, which
are effectively glorified search-and-replace macros. They lead to code bloat,
and changing any of your template code causes massive cascades of rebuilds.

> What are other people doing to reduce their compile times?
> Getting rid of STL is not really an option. It's way too
> useful to replace it with a crippled, custom
> container/algorithm set that is probably not even type safe.

I just don't buy this for one second. We survived and thrived just fine
before STL. STL is very, very convenient, but is it worth that cost? I don't
think so, not one bit. Writing basic data structures like sets, lists,
hashes, etc. is stuff we all did as undergrads in college. In the massive,
grand scheme of engineering, it's a non-issue. It's tedious, sure, but even
then that tedium is measured in minutes or hours, not days and weeks. The
overhead from using STL, or any other feature that incurs extensive build
times, affects you through the life of the project. Every compile.

This, to me, is one of the biggest indicators that software engineering has
gotten completely out of hand. And I'm not coming down on you specifically,
because pretty much 90% of the game companies that I'm aware of are using STL
or something similar in such a capacity, and incurring the costs.
Yet I rarely see those potential benefits measured concretely. Probably the
#1 defense for STL is that it prevents you from doing something lame just to
get things going. The classic case is, say, hardcoding an array when you
should have a proper container class, or using a linear search when a proper
set/bag/map should be used. But 20 minutes per rebuild (and in many cases
much, much more, from what I've heard) to make up for a lack of programmer
discipline seems ridiculous.

I use linear searches. And when I know they're going to be a problem, I
substitute something better. I usually put in a comment like:

//fixme: don't do a linear search here!

But it lets me get something implemented, and then I can substitute the right
thing later on demand.

This isn't a case of NIH, which is the common counterargument when I rant
about STL. I'm 100% for code reuse, and in fact I use a ton of open source
libraries in my own code -- Lua, libpng, ijg, Ogg Vorbis, SDL, etc. But when
code reuse has a measurable cost on productivity without an associated
benefit, I have to raise an eyebrow. I just remain 100% unconvinced that STL
(or almost any library or language feature) is worth the loss of hours each
day in build times. Anyway, enough lecturing on that =)

The only other thing you can do is go through and heavily analyze every
single dependency you have and try to minimize them. A common enough approach
is to revert to pure dynamic allocation so that changing the physical
structure of a class won't force rebuilds by clients of that class, i.e. the
classic:

//publicly export this
class FooInterface
{
public:
    virtual void interfaceFunc( void ) = 0;
};

//only seen by the Foo module
class FooImplementation : public FooInterface
{
    ...
};

This can hurt though, since I personally abhor excessive dynamic memory
allocation. I much prefer to have my data structures pre-allocated as much as
possible, even though this does create a much greater dependency.

-Hook
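The cross-DLL version of that same dependency firewall is a single exported
function that hands back a struct of function pointers, so the EXE and the
DLL share only one small header. This is just a sketch with invented names,
not id's actual interface:

// --- subsystem.h: the only header the EXE and the DLL share ---
struct SubsystemAPI
{
    int  version;
    void (*Init)(void);
    void (*RunFrame)(float dt);
    void (*Shutdown)(void);
};

typedef SubsystemAPI* (*GetSubsystemAPI_t)(void);

// --- inside the DLL (exported via dllexport or a .def file) ---
static void Init(void)         { /* set up the subsystem */ }
static void RunFrame(float dt) { (void)dt; /* per-frame work */ }
static void Shutdown(void)     { /* tear down */ }

extern "C" SubsystemAPI* GetSubsystemAPI(void)
{
    static SubsystemAPI api = { 1, Init, RunFrame, Shutdown };
    return &api;
}

// --- inside the EXE (Win32 flavor, error handling omitted) ---
//   HMODULE dll = LoadLibrary("subsystem.dll");
//   GetSubsystemAPI_t get = (GetSubsystemAPI_t)GetProcAddress(dll, "GetSubsystemAPI");
//   SubsystemAPI* api = get();
//   api->Init();

Changing the DLL's internals then never touches the EXE's build, which is
most of where the "no interdependencies" effect comes from.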
From: Ray <ra...@gu...> - 2002-12-09 21:59:03
Our game compiles in about 5 minutes in VC++ 6 for a full rebuild. A simple
fix will mostly just recompile that one file and link, unless I edit a root
header file, in which case it recompiles every file that header gets included
in (it's very rare for us to edit a root header file). On Mac OS X, a compile
of the same code takes about 45 minutes. On Linux, about 30 for a full
rebuild. I love VC++'s ability to compile fast.

We use STL extensively. Well, we use map, list, and vector a lot. I know we
could rewrite the STL stuff to use our own containers, but our own stuff
won't be much faster at runtime, and it's just more code that would need to
be debugged. I only use vector if I need to sort, because I can never seem to
get list.sort() to work. We use maps a lot for string lookups. We even use
deque, set, and queue for some things. It's just very convenient to use STL
when making something go as soon as possible.

Noel Llopis wrote:
> I'm totally impressed. When you say complete build times, does that mean
> *all* the code involved in Quake 2 or Quake 3? All the DLLs, everything?
> Was there some trick you were using to get such incredible build times
> (other than using straight C)?
>
> We're using C++ and STL, and even though we have a good physical
> organization (like Lakos described), compiling all our libraries and
> game code for one platform can take up to 15-20 minutes. That's only
> for a full build; a quick change somewhere still requires building that
> file and then doing a link, which is about 30-40 seconds.
>
> I find 30-40 seconds for a trivial change close to unbearable.
> Especially when it takes another 10 seconds for the game to start and
> another 10 seconds for the level to load.

Linking should never take 30-40 seconds. VC++ linking for me takes about as
long as it takes the hard drive to save the file (debug is about 3 megs,
release is 1.6 megs). Linking for the Mac is slower, but that's because the
file is much larger (20 megs for a debug version! 2.6 megs for release). More
RAM is the thing that would make linking faster. I remember back a couple of
years when a coworker was using an SGI O2 machine to build one of our
products. That machine had something like 128 megs, and linking took forever!
Doubling the RAM sped up compiling and linking immensely.

> What are other people doing to reduce their compile times? Getting rid
> of STL is not really an option. It's way too useful to replace it with a
> crippled, custom container/algorithm set that is probably not even type
> safe. But any tricks that reduce how many times the same templates get
> evaluated would be welcome.

For Windows: make sure you have automatic use of precompiled headers turned
on, and incremental linking on. I wish gcc had those. There's one file we
have that takes literally 5 minutes to compile on the Mac, and about 5
seconds in VC++. It uses std::list a lot.

- Ray
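As an aside on the list.sort() point: std::list has its own member sort()
precisely because std::sort requires random-access iterators, which list
doesn't provide. A minimal illustration (not from Ray's codebase):

#include <cstdio>
#include <list>
#include <vector>
#include <algorithm>

static bool Descending(int a, int b) { return a > b; }

int main()
{
    std::list<int> l;
    l.push_back(3); l.push_back(1); l.push_back(2);

    l.sort();             // member sort works on list's bidirectional iterators
    l.sort(Descending);   // and there's a comparator overload

    // std::sort needs random-access iterators, so it works on vector...
    std::vector<int> v(l.begin(), l.end());
    std::sort(v.begin(), v.end());
    // ...but std::sort(l.begin(), l.end()) would not compile for std::list.

    for (std::list<int>::iterator it = l.begin(); it != l.end(); ++it)
        std::printf("%d ", *it);
    std::printf("\n");
    return 0;
}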
From: Brian H. <bri...@py...> - 2002-12-09 22:17:39
> On macosx, compile for the same code takes about 45 minutes.
> On linux, about 30 for a full rebuild.

That seems... weird. Granted, these are different machines, but in my
experience VC hasn't been that much faster unless you're heavily leveraging
incremental linking and precompiled headers. My own code base takes a similar
amount of time to build on OS X (G4/867) as on my P3/933 -- maybe a tad
slower, but not by a factor of 9.

> It's just very convenient to use STL when making something go
> as soon as possible.

If you're seeing 5 minute full rebuild times and incremental changes only
take a few seconds, then I would agree that STL is a complete win in that
situation, but the cases I hear about are more extreme.

I can't help but wonder if your slow rebuilds on Linux and OS X are the
result of using gcc and STL together? Because that factor of 9 is just
mind-bogglingly different, enough so that it would raise alarm bells. gcc has
issues, but damn, not THAT many issues.

Brian
From: Thatcher U. <tu...@tu...> - 2002-12-10 05:28:24
On Dec 09, 2002 at 02:17 -0800, Brian Hook wrote:
>
> I can't help but wonder if your slow rebuilds on Linux and OS X are the
> result of using gcc and STL together? Because that factor of 9 is just
> mind-bogglingly different, enough so that it would raise alarm bells.
> gcc has issues, but damn, not THAT many issues.

In addition to being slower than VC in general, gcc is surprisingly slow at
compiling STL code, in my limited experience. A factor of 9 sounds awfully
high to me too, though.

At one point I got curious and measured the code pulled in by
"#include <vector>" using various STLs -- VC6's default STL pulls in 122KB,
gcc pulls in 181KB, and STLport-4.5 pulls in 203KB.

I suffered severe compile-time shock when I first started working at
Oddworld, so I remember timing a build of Soul Ride (my immediately prior
project) on the machine at work. Stats:

full build: ~45 seconds (VC6)
1.80M total
1.46M .cpp (94 files)
0.15M .c   (19 files)
0.18M .h   (88 files)
(71K lines total)

About 0.16M of the .cpp was written by a contractor who used STL; the rest
was written by me with little to no STL. The .c is Lua 3.2.

The Oddworld code base at that time was about 200K+ lines (dunno how many
megs) and used STL. I think I measured a full build at something like 12
minutes (VC7), but my memory could be wrong.

Our new codebase is relatively cleaner than before, IMO, but still uses STL.
A full game-only build still takes ~12 minutes (VC7) on my (relatively slow
1GHz) laptop. (This particular laptop is much slower at compiling than the
machine used for the timings above.) Here are the size stats:

5.83M total
4.33M .cpp (538 files)
1.50M .h   (568 files)

The last time compile times came up, someone on this list mentioned doing
"lumped" builds, where you use a script to #include a whole bunch of .cpp
files into one big compilation unit. I cooked up some make/perl monstrosity
to try that out with Oddworld's code, and got some stunning results: 3 minute
full builds! The reason this works is that the compiler only reads headers
(and builds data structures from them) once for the whole lump, as opposed to
reading, building structures, compiling a few hundred lines, and then
flushing everything out again, once for each source file. Our typical
#include line count is in excess of 25K lines (because of STL).

Unfortunately the link time didn't seem to drop any, and that is the biggest
determinant of our usual compile cycle time (and the build process changes
were deemed too wacky to be adopted, since it seems nobody under the age of
30 can be bothered to type "make" :).

--
Thatcher Ulrich
http://tulrich.com
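A lump file itself is nothing exotic -- just a generated translation unit
that pulls the others in. A sketch with made-up file names:

// lump_game_01.cpp -- generated by the build script, never edited by hand.
// The compiler parses the shared headers once for this whole lump instead of
// once per .cpp, which is where the compile-time win comes from.

#include "actor.cpp"
#include "ai_path.cpp"
#include "door.cpp"
#include "inventory.cpp"
#include "level_script.cpp"

// Caveat: file-scope statics and anonymous namespaces from the lumped files
// now live in one translation unit, so name collisions that used to be legal
// become errors, and a file-scope "using namespace" leaks into its neighbors.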
From: <cas...@ya...> - 2002-12-10 13:49:51
Thatcher Ulrich wrote:
> it seems nobody under the age of 30 can be bothered to type "make" :).

Hey, I'm just 21 ;-)

> The simple STL-alike array<> and hash<> templates that I use for my
> personal stuff take about 350 lines, and I consider them very
> worthwhile. (They're mediocre, but public domain, if anyone wants the
> link.)

Yep, I like them and use slightly modified versions.

I'm getting damn scared by those project sizes. My projects don't usually
have more than 2MB of code (without external libraries) and usually build in
1-2 minutes. I don't use STL, mainly because it's a pain to debug, but I
never thought it could affect build performance in such ways!

Ignacio Castaño
cas...@ya...
From: Javier A. <ja...@py...> - 2002-12-10 09:09:57
Brian Hook <bri...@py...> wrote:
>> On macosx, compile for the same code takes about 45 minutes.
>> On linux, about 30 for a full rebuild.
>
> That seems...weird. Granted, these are different machines, but in my
> experience VC hasn't been that much faster unless you're heavily
> leveraging incremental linking and precompiled headers.

For what it's worth, during the tentative (and aborted) Mac port of
Commandos 2, they started a Debug build of the game on the Mac (note: it's a
HUGE codebase, the Win32 retail exe was 8 megs IIRC), and two days later it
still had not finished. I can't give details about the machine, RAM, etc.,
but frankly, it was scary. And no, it wasn't using STL.

Javier Arevalo
Pyro Studios
From: Tom N. <t.n...@vr...> - 2002-12-10 14:13:27
Hi,

In the scripting thread, Brian mentioned "faster development" (i.e. no
rebuilds) as one of the reasons one might use scripting. Would anyone
actually go so far as to implement a scripting engine just for this purpose?
Or, to take it one step further, would you consider writing your application
in a different language altogether (not C/C++), just for the sake of
improving programmer productivity?

The reason I ask is because Brian brought up the build times for Quake 2. Our
current project is probably roughly the same size as Q2 (110K lines of code),
and takes just under 5 minutes to rebuild. Quake 2 took 40 seconds to build
on the same machine -- something I can only dream of.

But! You may not have heard about this, but a bunch of guys have taken it
upon themselves to translate the entire Quake 2 source code to Delphi/Object
Pascal (see http://sourceforge.net/projects/quake2delphi/). They have > 150K
lines of code, and it builds in less than 3 seconds -- so fast that my watch
isn't really accurate enough to time it.

If it is indeed a fact that using STL in a C++ project would badly increase
compile times, then this might be seen by some as a valid argument against
the use of STL. By the same logic, if you knew that you could get your work
done faster using another programming language, would you do it? _Has_ anyone
actually done it (e.g. even if only for internal tools)? I'd love to hear
about the build times for large(-ish) Java or C# projects, for example.

-- Tom
From: Donavon K. <kei...@ea...> - 2002-12-11 06:38:18
Tom Nuydens [t.n...@vr...] wrote:
> In the scripting thread, Brian mentioned "faster development" (i.e. no
> rebuilds) as one of the reasons one might use scripting. Would anyone
> actually go so far as to implement a scripting engine just for this
> purpose? Or, to take it one step further, would you consider writing
> your application in a different language altogether (not C/C++), just
> for the sake of improving programmer productivity?

For years I've been desperate for something that could viably replace C++ for
at least much of game development. Java and Python both fell short in my
estimation. I've been using C# for two years now and it's still looking
promising. In between contracts I do almost everything in C#, and when I take
a job and go back to C++, I find that for a while I tend to significantly
underestimate my tasks.

C# eliminates a lot of the friction in C++ coding. Partly it's because VS.NET
integrates with C# much better -- IntelliSense is nearly flawless, and that
counts for a LOT. (Visual Assist certainly helps.) And I've really become
aware of how much time gets wasted futzing around with header files -- create
a header and declare your class in it, create a .cpp and define your
functions there, make sure the signatures match, remember if you change one
to change the other, flip back and forth a lot, and of course sit back and
watch the dependencies recompile. Header files suck. They need to die.

Just for kicks, I wrote a little multiplayer space trading game, sort of like
Gazillionaire, in C#. You could play either through a browser or a native
client. It used ADO.NET and SQL Server for the backend and ASP.NET to serve
up the web pages. Knowing next to nothing about ASP.NET and ADO.NET, I had it
up and playable -- by multiple players -- in under a week. And I wasn't
rushing; I spent the better part of one of those days just pulling sound
effects down off the Net. For me, anyway, that's some pretty serious
productivity.

But is C# viable for real game development? Hard to say until somebody
actually does it. Phil Taylor has claimed that the Managed DirectX examples
will run 98% as fast as the C++ examples. That's bold, and promising.
(Incidentally, Managed D3D is much cleaner and easier to work with than
regular old D3D.) Still, it may be that you'd want your low-level stuff in
C/C++, and you'd have to watch out for chatty interfaces.

Then there's GC. Depending on the game, you may or may not be able to
tolerate the occasional GC hiccup. In .NET it's so easy to create and discard
objects on the heap that every programmer would have to be very disciplined
about how memory is being consumed.

And of course there's portability. For now you'd have to be pretty much
Windows-only. With Rotor and Mono this may change.

> By the same logic, if you knew that you
> could get your work done faster using another programming language,
> would you do it? _Has_ anyone actually done it (e.g. even if only for
> internal tools)? I'd love to hear about the build times for large(-ish)
> Java or C# projects, for example.

For most tools I wouldn't use anything else. None of my projects has gotten
beyond maybe forty or fifty source files. A normal build is one to two
seconds. A full rebuild (very rare) is maybe a few seconds more. I'm on
vacation, so I can't be more specific. To be sure, there's also a second or
two hidden in the JIT compile.
Speaking of JIT: because the code is ultimately compiled on the user's
machine, the compiler can transparently use specific processor features that
C/C++ object code can't count on (MMX, SSE, etc.). Theoretically, (some) C#
code could run faster than C/C++.

There's a lot more to say about C#/.NET's virtues and flaws (there are some),
but I'll leave it at that for now.

--Donavon
From: Tom H. <to...@3d...> - 2002-12-11 07:33:17
At 06:11 AM 12/10/2002, you wrote:
> If it is indeed a fact that using STL in a C++ project would badly
> increase compile times, then this might be seen by some as a valid
> argument against the use of STL. By the same logic, if you knew that you
> could get your work done faster using another programming language,
> would you do it? _Has_ anyone actually done it (e.g. even if only for
> internal tools)? I'd love to hear about the build times for large(-ish)
> Java or C# projects, for example.

When I first started working with Java code, I didn't think it was actually
compiling. I expected at least a few seconds of feedback, but something
flashed and it was done. Something must have been wrong... but it wasn't. It
was just done ;) The ~400K (59 .java files) Java applet I just finished
compiles in well under a second.

After working with C/C++ for years, I recently started getting into Java. As
a language, I think it destroys C/C++. However, I'm not willing to say it's a
better way to make games than C/C++. Now, if I were able to write my low
level stuff in C/C++ and do all my gameplay, UI, etc. in Java, I think I'd be
in heaven ;) In fact, one of the things I want to get to as soon as I can is
trying to set up just that situation using JNI. From what I've seen, it looks
like a real pain in the butt, though. Probably the most worrisome thing to me
is the ability to debug the Java code in a situation like that; I suspect it
is somewhat lacking (compared to C/C++ debugging). I won't know for sure till
I get it set up.

Tom
From: Noel L. <ll...@co...> - 2002-12-09 22:01:00
On Mon, 09 Dec 2002 13:34:24 -0800 Brian Hook <bri...@py...> wrote:

[to build the full Quake 2 source code]
> Let me repeat: 37 seconds.

Wow, that's really the way it should be. I'm going to have a hard look at our
build environment and try to significantly reduce our build time.

> The trick was careful engineering. Nothing mind boggling, just doing
> the obvious stuff. <windows.h> was limited to only the files that
> needed it.

Supposedly we do the same, although some people put <windows.h> in their
precompiled headers sometimes, so maybe that's slowing things down just
because it adds so many new symbols to search through, even if the
precompiled file is not recreated.

> Dependencies between modules were well guarded -- changes to the
> renderer data structures had no effect on the game source code, changes
> to the network stuff had no effect on the renderer, etc.

Yeah, same here. Incremental builds are not a problem (other than the 30
second link time); it's full builds that are terrible. Fortunately nobody has
to do full builds (we have an automated build system that does that), but
still. For what it's worth, we also don't have any scripting language, so
there are a lot of C++ classes doing things that should/could have been done
with scripts.

> I just don't buy this for one second. We survived and thrived just fine
> before STL. STL is very, very convenient, but is it worth that cost? I
> don't think so, not one bit.

What are the alternatives other than rolling your own? Is there a set of
available containers and algorithms that compiles much more quickly and
provides some of the same functionality?

> Probably the #1 defense for STL is that it prevents you from doing
> something lame just to get things going. The classic case is, say,
> hardcoding an array when you should have a proper container class. Or
> using a linear search when a proper set/bag/map should be used.

Along with:
- Lots of debugged code already written
- Already quite optimized
- Type safety
- Familiarity of new programmers with the API
- Other APIs built on top of it (boost, for example)

The counterargument of "how long does it take to write a linked list" isn't
totally convincing either. How long does it take to write an efficient
balanced red-black tree? Can you write your containers so that it's easy to
change the way memory is allocated? Can you write them so they can be reused
for different types without making extra memory allocations?

Personally, I find STL a pleasure to work with. But if I find that without
STL compile times go from 15 minutes to 2 minutes, I'm throwing it out the
window without even thinking about it twice. If dropping it is only going to
save a couple of minutes, though, then STL is well worth keeping.

> I just remain 100% unconvinced that STL (or almost any library or
> language feature) is worth the loss of hours each day in build times.

Just out of curiosity, what are you developing in these days? Light C++, or
straight C?

--Noel
ll...@co...
From: Brian H. <bri...@py...> - 2002-12-09 22:39:05
> [to build the full Quake 2 source code]
> > Let me repeat: 37 seconds.
>
> Wow, that's really the way it should be. I'm going to have a
> hard look at our build environment and try to significantly
> reduce our build time.

The amusing thing, of course, is that the times I've mentioned this, it gets
poo-pooed on the grounds that the Quake 2 and Quake 3 stuff wasn't very
technologically advanced.

> What are the alternatives other than rolling your own?

Rolling your own? =)

For C++, unless you want a base root Object class, there's no alternative,
since STL is templatized and gives you the static type checking that many
feel is so vitally important. But in more dynamic languages such as Java
(*shudder*) and Objective-C, the core container classes operate on base
Objects and everything still manages to work just fine.

> Along with:
> - Lots of debugged code already written
> - Already quite optimized
> - Type safety
> - Familiarity of new programmers with the API
> - Other APIs built on top of it (boost, for example)

I can agree that the above are valuable, but in my experience they are close
to non-factors when compared to the cost (again) of using STL. Now, I tend to
work on small teams -- 3 programmers at id, one programmer at my current gig.
So I do have a procedural advantage right there.

In most cases, if a container class is showing up as a performance hit, my
guess is that I'm either doing something wrong or, more likely, I need to
optimize it beyond what is already done in STL. But I don't have anything to
substantiate that, so it's neither here nor there.

> The counterargument of "how long does it take to write a linked list"
> isn't totally convincing either. How long does it take to write an
> efficient balanced red-black tree? Can you write your containers so
> that it's easy to change the way memory is allocated? Can you write
> them so they can be reused for different types without making extra
> memory allocations?

Even if I never, ever re-use any of my list or hash code, re-implementing
from scratch is almost never a problem. It's not like I have containers
littered about so much that I need the absolute best, most generalized
implementation available at all times. I know that's a philosophical shift
from what many believe in these days, but I almost never have to implement
radically specialized data structures. Everyday, easy-to-write stuff like
lists, maps, sets, etc. is basically all I ever use. And when I start getting
into more exotic data structures -- BSP trees, quad-trees, oct-trees -- then
STL isn't going to help.

Maybe that's a good summary -- for the things that STL can help with, rolling
my own consumes almost zero time. It's probably 15 minutes to write a simple
binary tree implementation from scratch. I spent more time writing this
e-mail =)

For the things that really need an optimized implementation, odds are it's a
data structure that STL just doesn't understand. So STL -- for me -- falls
exactly in that area of being a non-issue. As a library for the masses, it's
great, because it does a little bit of everything well, in a documented
fashion. But when it gets down to brass tacks for _one project_, I think the
value plummets.

But I do recognize I'm singularly alone on this. I also recognize that I have
5 second compile-and-go times as well. =)

> Just out of curiosity, what are you developing in these days?
> Light C++, or straight C?

Both, although frankly I wish I had just stuck with pure ANSI C, since it
compiles and runs in a much cleaner fashion.
My core frameworks are 144 header files and 187 source files that cover a
wide variety of platforms, and while the hierarchy started out clean, it has
since become a mess, because code just has a habit of growing, changing and
morphing over time. And C++ hierarchies, to borrow a phrase from Scott Bilas,
tend to "harden" over time. The more interdependent your class trees get via
inheritance, delegation, aggregation, etc., the more difficult it becomes to
refactor your existing code, because the chain of dependencies grows longer
and longer with time. The code base becomes resistant to change.

All my new libraries are written in ANSI C, because many of them I plan to
release as open source. But in the process of writing them in ANSI C and
acutely examining the dependencies that existed, I found that I was writing
MUCH cleaner code, and not just because I expected others to read it.

Application frameworks are great for writing applications, but I think
they're also extremely cumbersome and dangerous when trying to write
libraries or allow the reuse of code in a piecemeal fashion, which is often
extremely handy. I've been frustrated at times because I'll write a simple
command line app that I want to give to someone, but it uses one tiny little
class out of my frameworks, and now suddenly I have to hand over megabytes of
source code so someone can compile a 10 line program.

I believe strongly in using C++ as an application language, but for
libraries, I think ANSI C is probably superior for many reasons. Frameworks
make sense for developing entire apps, but by their very nature they're
incredibly difficult to partition and release in pieces.

Code reuse introduces dependencies, and dependencies work directly against
refactoring code cleanly. I'm not against code reuse, but it's worth pointing
out that the more code is reused, the harder it is to change or improve.

Brian
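For scale, the "15 minutes to write a simple binary tree" remark above comes
out to roughly this much code in the plain-C style being described -- an
unbalanced, no-frills, int-keyed search tree (invented names, not Brian's
actual code):

#include <cstdlib>

struct TreeNode
{
    int        key;
    void*      value;
    TreeNode*  left;
    TreeNode*  right;
};

static TreeNode* TreeInsert(TreeNode* root, int key, void* value)
{
    if (!root)
    {
        // no allocation-failure handling in this sketch
        root = (TreeNode*)std::malloc(sizeof(TreeNode));
        root->key = key; root->value = value;
        root->left = root->right = 0;
    }
    else if (key < root->key)  root->left  = TreeInsert(root->left,  key, value);
    else if (key > root->key)  root->right = TreeInsert(root->right, key, value);
    else                       root->value = value;   // overwrite existing key
    return root;
}

static void* TreeFind(TreeNode* root, int key)
{
    while (root)
    {
        if      (key < root->key) root = root->left;
        else if (key > root->key) root = root->right;
        else                      return root->value;
    }
    return 0;
}

static void TreeFree(TreeNode* root)
{
    if (!root) return;
    TreeFree(root->left);
    TreeFree(root->right);
    std::free(root);
}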
From: Tom S. <to...@pi...> - 2002-12-09 23:07:29
> For C++, unless you want a base root Object class, there's no
> alternative, since STL is templatized and gives you the static type
> checking that many feel is so vitally important. But in more dynamic
> languages such as Java (*shudder*) and Objective-C, the core container
> classes operate on base Objects and everything still manages to work
> just fine.

You've traded static compile-time error detection for runtime exceptions. I
can't see that making your game more robust or your test loop any quicker.

Tom
From: Brian H. <bri...@py...> - 2002-12-09 23:27:52
> You've traded static compile-time error detection for runtime
> exceptions. I can't see that making your game more robust or your
> test loop any quicker.

That's typically true, but in my experience static type checking is
overrated. I almost never accidentally put the wrong type of object into a
container. In addition, you can do type checking just like any other
assertion.

Static type checking isn't a magic bullet. It can help, but there are so many
more real classes of run-time errors that type mismatch is incredibly low on
my list. It's one of those things that I think is overemphasized in
traditional software engineering discussion.

Note that there's a difference between "static type checking with occasional
by-pass" and "no static type checking at all". I'm generally against the
latter, and very much for the former. For example, going back to the prior
thread on scripting, some scripting languages have zero static type checking
(which is one of my complaints about them, ironically enough) and they're
still very successful. Same with languages like Objective-C, which has
enjoyed strong underground success for over a decade and is considered one of
the best languages for rapid prototyping and deployment of mission critical
apps (one of the major areas where Obj-C/NextStep acquired a foothold was in
vertical markets like the medical and financial industries, where custom apps
that could be put together very quickly were important).

That said, I now just have custom-coded containers for most things, and it
hasn't impacted me negatively to the degree that I expected. My generic
container classes will probably just go away.

Brian
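A sketch of what "type checking as just another assertion" can look like with
a root Object class -- invented names, purely illustrative:

#include <cassert>

enum ObjectType { kObjDoor, kObjKey, kObjWizard };

struct Object
{
    explicit Object(ObjectType t) : type(t) {}
    ObjectType type;
};

struct Container
{
    enum { kMax = 64 };
    Object* items[kMax];
    int     count;

    Container() : count(0) {}
    void Add(Object* o) { assert(count < kMax); items[count++] = o; }

    // The "assertion instead of static typing" part: the caller says what it
    // expects, and a mismatch fires in a debug build instead of at compile time.
    Object* GetAs(int i, ObjectType expected)
    {
        assert(i >= 0 && i < count);
        assert(items[i]->type == expected);
        return items[i];
    }
};

// Usage:  Object key(kObjKey);  Container c;  c.Add(&key);
//         Object* o = c.GetAs(0, kObjDoor);   // fires the assert in debug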
From: Thatcher U. <tu...@tu...> - 2002-12-10 05:53:38
On Dec 09, 2002 at 02:38 -0800, Brian Hook wrote:
> > What are the alternatives other than rolling your own?
>
> Rolling your own? =)
>
> For C++, unless you want a base root Object class, there's no
> alternative, since STL is templatized and gives you the static type
> checking that many feel is so vitally important.

Actually, I think there's a really good alternative, if your primary beef
with STL is compile times: minimal vector<> and map<> workalikes. Basic
vector<> comprises about 95% of my usage of STL, and map<> is an additional
4.9%. One thing I learned from scripting languages is that array and hash are
the only two basic data structures that really matter.

The simple STL-alike array<> and hash<> templates that I use for my personal
stuff take about 350 lines, and I consider them very worthwhile. (They're
mediocre, but public domain, if anyone wants the link.)

--
Thatcher Ulrich
http://tulrich.com
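For flavor, a stripped-down array<> workalike might look something like this
(my own sketch, not Thatcher's actual templates; it only handles
plain-old-data element types):

#include <cassert>
#include <cstdlib>

// Minimal dynamic array: push_back, indexing, size. No iterators, no
// allocators, no exception safety -- which is most of why it compiles fast.
template <class T>
class array
{
public:
    array() : m_data(0), m_size(0), m_capacity(0) {}
    ~array() { std::free(m_data); }

    int size() const { return m_size; }
    T&  operator[](int i) { assert(i >= 0 && i < m_size); return m_data[i]; }

    void push_back(const T& t)
    {
        if (m_size == m_capacity)
        {
            m_capacity = m_capacity ? m_capacity * 2 : 8;
            m_data = (T*)std::realloc(m_data, m_capacity * sizeof(T));
        }
        m_data[m_size++] = t;
    }

private:
    // realloc-based growth only works for POD element types; a real version
    // would copy-construct elements into the new storage instead.
    T*  m_data;
    int m_size;
    int m_capacity;

    array(const array&);             // non-copyable in this sketch
    array& operator=(const array&);
};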
From: Javier A. <ja...@py...> - 2002-12-10 13:21:24
Brian Hook <bri...@py...> wrote:
> And C++ hierarchies, to borrow a phrase from Scott Bilas, tend to
> "harden" over time.

Yeah -- make as little use of hierarchies as possible. In general, avoid any
kind of dependency between your classes. It's a code design issue, not so
much a C++ issue (read below).

> But in the process of writing them
> in ANSI C and acutely examining the dependencies that existed, I
> found that I was writing MUCH cleaner code, and not just because I
> expected others to read the code.

I believe this has more to do with your experience in both languages than
anything else. You have huge experience writing C code, but (from what I can
infer) little experience working with large C++ codebases. What's worse, C++
can be badly used in more varied and obscure ways than C or other simpler
languages, and some design philosophies even encourage those uses. Flat
hierarchies, single inheritance and interfaces all make your life damn easy.

Javier Arevalo
Pyro Studios
From: Javier A. <ja...@py...> - 2002-12-10 07:49:03
Noel Llopis <ll...@co...> wrote:
> We're using C++ and STL, and even though we have a good physical
> organization (like Lakos described), compiling all our libraries and
> game code for one platform can take up to 15-20 minutes. That's only
> for a full build; a quick change somewhere still requires building
> that file and then doing a link, which is about 30-40 seconds.
>
> I find 30-40 seconds for a trivial change close to unbearable.
> Especially when it takes another 10 seconds for the game to start and
> another 10 seconds for the level to load.

We're using C++ (MSVC6) with no STL, with the project broken down into about
12 subprojects; two of them are DLLs and the rest are .libs. Total size of
the game codebase (.cpp + .h) is 2316 files (1099 are .cpp) taking up 30 megs
(538K lines of actual code).

On my "development machine" (as little hands-on development as I do these
days), a P3-600 with 512MB of RAM, any change to a .cpp takes about 3-5
seconds to recompile. A full rebuild is on the order of 20 minutes. The game
takes less than one second to start up, and most missions/maps take less than
5 seconds (many about 2 seconds) to load in a release build, about twice that
in debug. All these times are lower for the programmers, who use P4-1.7 +
512MB machines. All machines have a 7200 rpm primary hard drive.

We're not doing anything exceptional about compile times, other than keeping
an eye on dependencies, using PCHs with some care (one of the reasons for the
number of subprojects), not using STL, and following common Lakos wisdom.

About load times: we have kept all our data file organisation very
straightforward, and most files are read sequentially (we have a log warning
whenever a file seek operation goes backwards, and the only things that
trigger it are WAV loading and graphics reloads).

Javier Arevalo
Pyro Studios
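That backwards-seek warning is a cheap way to keep loaders sequential; a
minimal version is just a thin wrapper around the C file API (invented names,
illustrative only):

#include <cstdio>

// Logs whenever a seek moves backwards -- i.e. whenever a loader breaks the
// "read the file front to back" rule.
class SequentialFile
{
public:
    explicit SequentialFile(const char* path) : m_fp(std::fopen(path, "rb")) {}
    ~SequentialFile() { if (m_fp) std::fclose(m_fp); }

    bool   IsOpen() const            { return m_fp != 0; }
    size_t Read(void* dst, size_t n) { return std::fread(dst, 1, n, m_fp); }

    void Seek(long offset)
    {
        long cur = std::ftell(m_fp);
        if (offset < cur)
            std::fprintf(stderr, "warning: backwards seek: %ld -> %ld\n",
                         cur, offset);
        std::fseek(m_fp, offset, SEEK_SET);
    }

private:
    std::FILE* m_fp;
};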