RE: [GD-General] Compile times
From: Brian H. <bri...@py...> - 2002-12-10 01:25:01
> Hearing all these sub-five-minute build times is making my
> mouth water, but do we have a codebase that's just WAY bigger
> than those that are sporting such fast builds?

I really think this is the crux of the issue -- why are the code bases so big, and why do they take so long to build? Is it poor engineering, or just a casual, subconscious feeling that modern systems can handle it? This is probably a sweng list thing at this point, but hey, nothing is OT here as long as it pertains to games =)

Obviously using language features or libraries that make for excessive build times is a contributing factor, but I also think there's an underlying assumption of "Hey, we're making big, complex games, so this is just kind of expected". A tacit acceptance that convoluted engineering is simply a real and necessary part of game development these days. That the task is so huge and daunting that whatever happens is probably necessary. I just don't buy into that, but maybe I'm naïve that way.

Quake 2 and Quake 3 were pretty technologically advanced when they were released, and they never suffered from the intense bloat in code size and compile times we see today. I don't think John would have tolerated it. Some would argue that those games were really limited, crude, etc., but I don't think the level of refinement you'd see in a game today vs. Q2 can account for a factor of 10 to 200 delta in build times.

Maybe it simply is the proliferation of header files and source files, so that with 2000+ .obj files link times become excessive. Or maybe people have just gotten real sloppy about dependency checking. Or maybe software really, truly is that much more complex than it was just a few short years ago. Without sufficient data it's hard to say, but in my experience, almost every time a project like this spirals out of control in terms of complexity, it's because features in the build system or class hierarchies were overengineered because they're "neat" or "useful", and in practice they turn out to cause significant hits in productivity.

Are "smart pointers" for garbage collection worth the headache they so often cause, in lieu of just telling a team, "Hey, manage your memory effectively"? I've never had a problem with memory leaks once clear memory allocation and ownership policies were defined. That seems like a more practical solution than coming up with some inane templatized ref-counting pointer system that's incredibly fragile or has to be back-doored consistently.

I'm aware of at least three or four projects that take more than an hour to do a full rebuild on modern workstation-class hardware -- systems probably 2-3x faster than my current one. This means those code bases take 100-200x longer to build than Quake 2 (call it 36 seconds vs. 3600 seconds, with the former on a P3/933 and the latter on a P4/2.8). I'm literally jaw-droppingly amazed that this is routinely tolerated and, in many cases, flat-out rationalized. I had one friend say, "I know it takes a long time, but we can't give up STL's string class". The STL string class is worth THAT? Another friend defended his team's build times with some vague hemming and hawing and a blanket "Software today is more complex, that's just how it is".

I'll buy that -- software today is more complex, sure. It's bigger and has to do more stuff. But I don't see how it's become 100 to 200 times bigger and more complex in three years. That's one product cycle.
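To make the header-dependency point concrete, here's a minimal sketch of the forward-declaration/pimpl style that keeps one header from dragging in dozens of others. The names (Renderer, Texture, Model) are made up for illustration, not from any real codebase:

    // Renderer.h -- forward-declare instead of #including heavy headers.
    // Only Renderer.cpp ever sees the real definitions.
    class Texture;   // forward declaration, no #include "Texture.h"
    class Model;     // ditto

    class Renderer {
    public:
        Renderer();
        ~Renderer();    // defined in the .cpp, where Impl is complete
        void Draw(const Model &model, const Texture &tex);

    private:
        struct Impl;    // private implementation lives in Renderer.cpp
        Impl *m_impl;   // edits to Impl never recompile this header's clients
    };

Touch Impl and exactly one .obj rebuilds; touch a fat header that 2000 files include and you're staring at the hour-long rebuild.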
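And when I say "clear memory allocation and ownership policies", I mean nothing fancier than a convention like the following -- a sketch with hypothetical names, not code from any shipping project:

    // Convention: whoever calls a Create function owns the result and
    // must call the matching Destroy. Everything else is a borrowed
    // pointer that never frees. No ref counts, no back doors.
    struct World;    // hypothetical types, for illustration only
    struct Entity;

    Entity *World_CreateEntity(World *world);        // caller owns result
    void    World_DestroyEntity(World *world, Entity *e);

    Entity *World_FindEntity(World *world, int id);  // borrowed; the world
                                                     // still owns it

A leak is then a missing Destroy with an obvious owner to blame, instead of a ref-count cycle buried three template layers deep.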
I would LOVE it if someone could step forward and show how the products have become that much more complex from one generation to another, with concrete examples. Maybe you guys can do that and show the delta from DX -> DX2, because I think that would be really good information for the industry as a whole.

-Hook