Thread: RE: [GD-General] Compile times
From: Douglas C. <zi...@ho...> - 2002-12-10 00:02:08
|
> Okay, Quake 2 is open sourced, so I just tested it out. On my very
> modest 2xP3/933 w/ 512MB RAM using MSVC 6, compiling ref_gl, game, and
> client, it took 37 seconds.
>
> Let me repeat: 37 seconds.

Since we just spent half a day playing with VC6 PCH setups and testing compile speeds, I'll throw ours in (P4 2.8 / 512MB):

Game: C++, no templates. Core/Math/Engine/Game DLLs: 1:05 in release.
Editor: WTL (template-heavy, ATL-like implementations of interface classes, but no STL). Exe & 3 plugin DLLs: 1:25.
MaxExporter: STL-heavy (plus numerous Max headers): 15 secs.

So the full release rebuild was 2:45. Coming from C on the previous game, I can definitely tell a hit in compile times with C++ even without templates, but there is no way I'd trade 20s for C.. :) I do think our WTL-based editor compiles faster than the previous MFC one.

As far as ways to make STL/template builds faster go, 512MB of RAM and PCHs are about it -- which is one reason why the game code has none. We never re-allocate memory after things are initially loaded, so I guess STL (at least the containers) wouldn't have been much use anyway.

-doug
|
From: Chris C. <cca...@io...> - 2002-12-10 00:38:37
|
> Since we just spent 1/2 a day playing with VC6 PCH setups and testing
> compile speeds I'll throw ours in. (P4 2.8 / 512mb )
[snip]

Out of curiosity, what's the general size of your codebase? I downloaded the Quake2 source myself to take a look and do a test build in .Net (~2 minutes on my 1GHz laptop), and I noticed that the codebase is substantially smaller than ours.

Quake2:
.c - 188 files, 3.9MB
.h - 78 files, 0.6MB

Our codebase (which includes the editor, and admittedly a ton of code that could use some re-engineering/stripping):
.cpp - 1188 files, 14.6MB
.h - 1450 files, 5.8MB

I'm currently in the middle of untangling a bunch of our header dependencies, which are really pretty nasty, and there's way, way too much code in headers right now. Our builds are currently in the 20-minute range, which I'm trying to improve. We use some STL, though we tend to use our own, simpler containers, and don't use a whole lot of templates other than that.

Hearing all these sub-five-minute build times is making my mouth water, but do we have a codebase that's just WAY bigger than those that are sporting such fast builds?

-Chris
|
From: Brian H. <bri...@py...> - 2002-12-10 01:25:01
|
> Hearing all these sub-five-minute build times is making my
> mouth water, but do we have a codebase that's just WAY bigger
> than those that are sporting such fast builds?

I really think this is the crux of the issue -- why are the code bases so big, and why do they take so long to build? Is it poor engineering or just a casual, subconscious feeling that modern systems can handle it? This is probably a sweng list thing at this point, but hey, nothing is OT here as long as it pertains to games =)

Obviously using language features or libraries that make for excessive build times is a contributing factor, but I also think there's an underlying assumption of "Hey, we're making big, complex games, so this is just kind of expected" -- a tacit acceptance that convoluted engineering is simply a real and necessary part of game development these days; that the task is so huge and daunting that whatever happens is probably necessary. I just don't buy into that, but maybe I'm naïve that way.

Quake 2 and Quake 3 were pretty technologically advanced when they were released, and they never suffered from the intense bloat in code size and compile times we see today. I don't think John would have tolerated it. Some would argue that those games were really limited, crude, etc., but I don't think the level of refinement you'd see in a game today vs. Q2 can account for a factor of 10 to 200 delta in build times.

Maybe it simply is the proliferation of header files and source files, so that with 2000+ .obj files link times become excessive. Or maybe people have just gotten really sloppy about dependency checking. Or maybe software really, truly is that much more complex than it was just a few short years ago. Without sufficient data, it's hard to say. But based on my experience, almost every time I've seen a project like this spiral out of control in terms of complexity, it's because of overengineered features in the build system or class hierarchies that seem "neat" or "useful" but in practice turn out to cause significant hits in productivity.

Are "smart pointers" for garbage collection worth the headache they so often cause, in lieu of just telling a team, "Hey, manage your memory effectively"? I've never had a problem with memory leaks once clear memory allocation and ownership policies were defined. That seems like a more practical solution than coming up with some inane templatized ref-counting pointer system that's incredibly fragile or has to be back-doored constantly.

I'm aware of at least three or four projects that take more than an hour to perform a full rebuild on modern workstation-class hardware -- systems probably 2-3x faster than my current one. This means those code bases are taking 100-200x longer to build than Quake 2 (call it 36 seconds vs. 3600 seconds, with the former on a P3/933 and the latter on a P4/2.8). I'm literally jaw-droppingly amazed that this is routinely tolerated and, in many cases, flat out rationalized. I had one friend say, "I know it takes a long time, but we can't give up STL's string class". The STL string class is worth THAT? Another friend defended his team's build times with some vague hemming and hawing and a blanket "Software today is more complex, that's just how it is".

I'll buy that -- software today is more complex, sure. It's bigger and has to do more stuff. But I don't see how it's become 100 to 200 times bigger and more complex in three years. That's one product cycle.
I would LOVE it if someone could step forward and show how the products have become that much more complex from one generation to another, with concrete examples. Maybe you guys can do that and show the delta from DX -> DX2, because I think that would be really good information for the industry as a whole.

-Hook
|
From: Noel L. <ll...@co...> - 2002-12-10 14:02:00
|
On Mon, 09 Dec 2002 18:39:37 -0600 Chris Carollo <cca...@io...> wrote:
> Quake2:
> .c - 188 files, 3.9MB
> .h - 78 files, 0.6MB
>
> Our codebase (which includes the editor, and admittedly a ton of code
> that could use some re-engineering/stripping):
> .cpp - 1188 files, 14.6MB
> .h - 1450 files, 5.8MB

For another data point, our code base for MechAssault was:

.cpp - 1164 files (13.2 MB)
.h - 1291 files (3.5 MB)

Game-specific code and engine code are split exactly half and half (it just worked out that way). So considering that it's roughly one order of magnitude larger than the Quake 2 source code, our full compile times (around 15 minutes) are not totally out of line. Still, I'd like them to be shorter -- especially the link times, which matter most for iterative programming. I imagine using incremental linking with VC7 will help, but every time I've tried it we've had problems and had to turn it off.

--Noel
|
From: Michael M. (GAMES) <md...@mi...> - 2002-12-10 01:50:21
|
Does anyone out there have a game project of 10-20MB in size (source & header files) that builds in 5 minutes or less? I would love to hear about it.

I wonder if in some ways you cross an invisible line where complexity and inter-dependency increase dramatically as your project size grows -- similar to the way the dynamics of a team change as you go from 8 developers to 40.

Large-scale projects are not a new beast -- there are whole books written on the subject (J. Lakos, Large-Scale C++ Software Design, for example). I wonder how much of the complexity in our larger game projects is "accidental" and how much is "essential", to borrow a term from Brooks. Brian has a strong suspicion that much of it is accidental and could be removed if the right choices were made during the course of development. I would love to see some larger projects that build quickly so that I could gain more confidence in this theory.

-Michael
|
From: mike w. <mi...@ub...> - 2002-12-10 02:09:28
|
The full source code to our game engine currently runs about 20 megs. I can't say I've done a full recompile in a while, so I'm not sure how long it would take to build the entire sucker.

The major benefit we have when compiling the source is the complete separation of the various parts of the engine into DLLs and other libraries -- we only need to recompile these as necessary (i.e. if we make low-level changes to the drivers or graphics engine). The main source itself is only about 5 megs; the rest is in 'compile as necessary' libraries.

This also lets us have various 'versions' of the libraries in-house. For example, we have 'stable' drivers -- tested, working versions that the designers can use knowing the functionality is there -- and we also have 'development' drivers that are the bleeding edge. Most of the designers don't really need to access those; only one or two designers have access to the dev drivers, and they are used to test and break the newer features before we unleash them on the rest of the team. We have separate DLLs for OpenGL drivers, software drivers, D3D drivers, network libraries, etc.

It works out well: if certain machine configurations barf on the newer updates, they can roll back to the previous drivers and still be productive while the programmers work on bug fixes for their particular test cases.

mike w
www.uber-geek.ca

----- Original Message -----
From: "Michael Moore (GAMES)" <md...@mi...>
Sent: Monday, December 09, 2002 5:50 PM
Subject: RE: [GD-General] Compile times
[snip]
|
From: Mick W. <mi...@ne...> - 2002-12-10 02:19:02
|
Tony Hawk's Pro Skater 4 has 10.6 MB of code (engine included), and takes 7.5 minutes for a full code rebuild on a 2GHz/1GB PC using GCC 2.93 (targeting the PS2). I've heard VC is a lot faster than GCC, so it might get into sub-5-minute territory there.

I think it can get a lot faster. We did some restructuring of one of our subsystems using the Lakos methodology, and it sped things up noticeably. But you really have to apply it to everything; the code is pretty interwoven in places.

We don't make much use of templates, and no STL, which I think helps the build time quite a bit. Also no exceptions or RTTI.

Mick

> -----Original Message-----
> From: Michael Moore (GAMES)
> Sent: Monday, December 09, 2002 5:50 PM
> Subject: RE: [GD-General] Compile times
[snip]
|
From: Brian H. <bri...@py...> - 2002-12-10 02:32:30
|
> Tony Hawk's Pro Skater 4 has 10.6 MB of code (engine
> included), and takes 7.5 minutes for a full code rebuild on a
> 2Ghz/1GB PC using GCC 2.93 (Targeting the PS2)

That's downright speed-of-light compared to most people's comments, and to me it's a pretty clear indicator that the problem isn't necessarily just the size of the code base.

Brian
|
From: Dan T. <da...@cs...> - 2002-12-10 03:54:22
|
Just to add my 2 cents... I just built Linux on our instruction servers at UW, which aren't the best machines, doing a make clean and then make all. I know it isn't building the full 110 megs (68 megs of .c files) for our particular configuration, due to architecture and driver stuff, but I think 4:41 (under 5 minutes) is a nice build time for an operating system. For me it kind of puts things in perspective, anyway, despite the optimizations and such.

-Dan

----- Original Message -----
From: "Brian Hook" <bri...@py...>
Sent: Monday, December 09, 2002 6:32 PM
Subject: RE: [GD-General] Compile times
[snip]
|
From: Brian H. <bri...@py...> - 2002-12-10 02:19:22
|
> Does anyone out there have a game project of 10-20MB in size
> (source & header files), that builds in 5 minutes or less? I
> would love to hear about it.

I think that part of the problem is having a game project that large. With size comes almost unavoidable complexity, which is what I was talking about -- what exactly is causing them to get that huge? I know the games I'm personally working on are not that dramatically complex -- they're cheezy puzzle games -- but at the same time nearly all games (especially cross-platform ones) end up having to do a minimal set of operations regardless of overall complexity.

For example, no matter what type of game, you're looking at having some kind of unarchiver (unless you just dump all your files straight onto disk, which I suppose some people still do); some kind of TGA/JPG/PNG/BMP/whatever loader; some kind of streaming audio support (MP3, Miles, Ogg Vorbis); some kind of graphics/math stuff; if appropriate, some kind of networking layer built on UDP; some kind of audio wave file loader/playback; possibly some GUI stuff; memory management and file format loaders; cross-platform file system stuff; cross-platform endianness mucking about; cross-platform registry/preferences manipulation; etc.

My total source base (including third-party open source stuff and all the cross-platform code) is about 5.5MB, of which MySQL is 2MB. So at any given time, a full rebuild is probably around 4.5-5MB. Nothing huge, but even so, a full rebuild for me is around one minute or less. Take my code, multiply by 4 for a 20MB code base, and we're still talking about 5 minutes or so, tops. It's possible that there's some magic project size that causes MSVC to choke as well. Hmmm, actually, one thing I should probably mention is that my projects consist of tons of subprojects. I don't have a single monolithic DSP with 300 files or anything like that, and it's possible that this prevents symbol tables or other internal compiler data from getting out of control.

> I wonder if in some ways you cross an invisible line where
> complexity and inter-dependency increases dramatically as
> your project size grows.

My guess is that this is true. At some point you lose the ability to have a gestalt understanding of your code base as a whole, and once you're at that point, programmers just start chucking things into directories as they see fit.

> Large-scale projects are not a new beast -- there are whole
> books written on the subject (J. Lakos. Large-Scale C++
> Software Design, for example).

The problem is that many of the books are either written with a late-80s mindset towards software engineering, which is often at odds with the new and trendy extreme programming we're seeing, or they're long, gory discussions about how things exploded on some large project -- thinly disguised post-mortems.

One of my former managers expressed discomfort about a project, saying something like "I feel like we're trying to build a Boeing 777, where everyone puts together all their pieces at the very end and it fires up and works", and I was like "Yeah, that's exactly what's going on, and it'll work just fine, this is how modern software engineering works". I had complete faith that with well-defined interfaces and unit tests, assembling disparate subsystems 6 months into a project would not be that tough. Holy crap was I wrong. These days, I'm the complete opposite.
To paraphrase a friend of a friend, the most important thing you can do is have the game running ALL THE TIME. From day one, you should have SOMETHING that compiles, builds, and runs. Every day there should be a running, functional executable. If you can't build one, or can't guarantee this, then that implies there is something seriously wrong with your process. I know that sounds rather extreme, but in the projects I've observed, it's more true than not. When you hear phrases like "We'll put those pieces together in a few weeks", alarm bells should be going off.

> Brian
> has a strong suspicion much of it is accidental and could be
> removed if the right choices were made during the course of
> development.

That's a fairly accurate assessment of my feelings, coupled with a strong feeling that a lot of the complexity is also designed in from the outset because of a belief that it's necessary. The case in point I use is smart pointers, which almost always end up costing more than they're worth in the projects I've seen. They're typically designed in from day one because they seem like a good idea, but then in practice enough ugly things rear their heads that everyone ends up regretting them in the end. And, as I've lamented earlier, I do feel that STL is a contributor to this as well, but that's borderline religious.

> I would love to see some larger projects that
> build quickly so that I could gain more confidence in this theory.

My only addition to this is that I'm somewhat suspicious that projects actually need to be that large, but that's neither here nor there. I doubt anyone with a gigantic project feels that it's a bunch of bloated crap, but I know I feel a good chunk of my code could just be tossed or at least refactored into something more manageable, and I only have a few megabytes of code.

Brian
|
From: Jesse J. <jes...@mi...> - 2002-12-16 01:17:41
|
At 6:19 PM -0800 12/9/02, Brian Hook wrote:
> That's a fairly accurate assessment of my feelings, coupled with a
> strong feeling that a lot of the complexity is also designed in from the
> outset because of a belief that it's necessary. The case in point I use
> are smart pointers, which almost always end up costing more than they're
> worth in projects I've seen. Those are typically designed in from day
> one because they seem like a good idea, but then in practice enough ugly
> things rear their heads that everyone ends up regretting them at the
> end.

Hmm, I've seen smart pointers used on several projects and they have never caused any problems. What sort of "ugly things" are you referring to?

  -- Jesse
|
From: <cas...@ya...> - 2002-12-16 02:11:32
|
Jesse Jones wrote:
> Hmm, I've seen smart pointers used on several projects and they have
> never caused any problems. What sort of "ugly things" are you
> referring to?

I've seen ugly things when they're used wrong -- for example, with cyclic references. I allowed weak references to avoid cycles, but ended up using them in places where I shouldn't have, and in the end I had the feeling of not having much control over the lifetime of the objects. They definitely increase complexity, at least when you have never used them before. :-)

After that, I switched to a handle-based resource manager, which is much more robust than the previous code. You have total control over the lifetime of the objects, and if you mix the handle with a reference count, you detect invalid references immediately.

Of course, if I used smart pointers again I wouldn't make the same errors, but for now I feel much more comfortable with my 'simple' handles.

Ignacio Castaño
cas...@ya...
|
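For readers unfamiliar with the cycle problem described above, here is a minimal sketch of how a strong reference cycle leaks and how a weak back-reference avoids it. It uses modern C++'s std::shared_ptr/std::weak_ptr purely for illustration (these postdate the thread), and the Node class is invented:

    #include <cstdio>
    #include <memory>

    struct Node {
        // A strong (shared_ptr) back-pointer to the parent would form a cycle
        // and keep both objects alive forever; a weak_ptr observes the parent
        // without owning it, so the cycle is broken.
        std::weak_ptr<Node> parent;
        std::shared_ptr<Node> child;
        ~Node() { std::puts("Node destroyed"); }
    };

    int main() {
        auto parent = std::make_shared<Node>();
        parent->child = std::make_shared<Node>();
        parent->child->parent = parent;   // weak: no ownership taken

        // A weak reference must be locked before use, and the lock can fail
        // once the owner is gone -- the "loss of control" mentioned above.
        if (std::shared_ptr<Node> p = parent->child->parent.lock()) {
            std::puts("parent still alive");
        }
        return 0;   // both Nodes destroyed here; with a strong back-pointer they would leak
    }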
From: Jesse J. <jes...@mi...> - 2002-12-16 11:41:53
|
At 3:12 AM +0100 12/16/02, Ignacio Castaño wrote:
> Jesse Jones wrote:
>> Hmm, I've seen smart pointers used on several projects and they have
>> never caused any problems. What sort of "ugly things" are you
>> referring to?
>
> I've seen ugly things when they're used wrong -- for example, with cyclic
> references. I allowed weak references to avoid cycles, but ended up using
> them in places where I shouldn't have, and in the end I had the feeling of
> not having much control over the lifetime of the objects. They definitely
> increase complexity, at least when you have never used them before. :-)
>
> After that, I switched to a handle-based resource manager, which is much
> more robust than the previous code. You have total control over the
> lifetime of the objects, and if you mix the handle with a reference count,
> you detect invalid references immediately.

I have some thoughts on how this can be done, but I'd like to hear some more details on how you've implemented this. :-)

  -- Jesse
|
From: <cas...@ya...> - 2002-12-16 17:24:56
|
Jesse Jones wrote:
> I have some thoughts on how this can be done, but I'd like to hear
> some more details on how you've implemented this. :-)

Well, that was mentioned some time ago on the Algorithms mailing list by Tom Forsyth, I think... However, I saw it for the first time in the FMOD sound library, which uses it to handle channel references.

Here is a snip from his mail:

"Personally I prefer unique IDs. The bottom n bits are an index into an array (either of pointers to the object, or if the object is small enough, just an array of actual objects). The top 32-n bits are the "unique" part. Every time you allocate an item in the array, the top bits are incremented. Unless you really do get through a huge number of objects during your game, 32 bits for IDs is more than enough. It means you can easily kill objects any time you like, and then when they are requested, you can return either NULL, or a placeholder, or whatever (as appropriate, e.g. if a texture is requested that isn't loaded any more, we return a 16x16 copy, which we always keep resident for all textures)."

And this is from the FMOD documentation:

"The channel handle: The return value is reference counted. This stops the user from updating a stolen channel. Basically it means the only sound you can change the attributes (ie volume/pan/frequency/3d position) for is the one you specifically called playsound for. If another sound steals that channel, and you keep trying to change its attributes (ie volume/pan/frequency/3d position), it will do nothing. This is great if you have sounds being updated from tasks and you just forget about it. You can keep updating the sound attributes and if another task steals that channel, your original task won't change the attributes of the new sound!!! The lower 12 bits contain the channel number. (yes this means a 4096 channel limit for FMOD :) The upper 19 bits contain the reference count. The top 1 bit is the sign bit. ie. S RRRRRRRRRRRRRRRRRRR CCCCCCCCCCCC"

Sorry for the copy&paste, but English is not my mother language and I think this way is clearer and faster. :-)

I don't think that handles are the solution to all problems, or that smart pointers are bad. However, I think there are many ways of using smart pointers wrong, while the handle method is simple and errors are easy to catch.

Ignacio Castaño
cas...@ya...
|
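A minimal sketch of the ID scheme Forsyth describes -- a slot index in the low bits and a generation count in the high bits, so stale handles resolve to NULL instead of a dangling pointer. The HandleTable name, the 12/20-bit split, and the structure are invented for illustration; this is not FMOD's or Forsyth's actual code:

    #include <cstdint>
    #include <vector>

    typedef uint32_t Handle;   // low 12 bits: slot index, high 20 bits: generation

    template <typename T>
    class HandleTable {
        static const uint32_t kIndexBits = 12;
        static const uint32_t kIndexMask = (1u << kIndexBits) - 1;

        struct Slot { T object; uint32_t generation; bool alive; };
        std::vector<Slot> slots_;

    public:
        Handle create(const T& obj) {
            // Reuse a dead slot if one exists, otherwise grow the table.
            for (uint32_t i = 0; i < slots_.size(); ++i) {
                if (!slots_[i].alive) {
                    slots_[i].object = obj;
                    slots_[i].alive = true;
                    ++slots_[i].generation;   // old handles to this slot become stale
                    return (slots_[i].generation << kIndexBits) | i;
                }
            }
            Slot s;
            s.object = obj;
            s.generation = 1;
            s.alive = true;
            slots_.push_back(s);
            return (1u << kIndexBits) | uint32_t(slots_.size() - 1);
        }

        // Stale or destroyed handles return NULL instead of a dangling pointer.
        T* resolve(Handle h) {
            const uint32_t index = h & kIndexMask;
            const uint32_t generation = h >> kIndexBits;
            if (index >= slots_.size()) return 0;
            Slot& s = slots_[index];
            return (s.alive && s.generation == generation) ? &s.object : 0;
        }

        void destroy(Handle h) {
            if (resolve(h)) slots_[h & kIndexMask].alive = false;
        }
    };

A real implementation would keep a free-list of dead slots rather than the linear scan in create(), but the lookup path is the interesting part: resolve() compares the generation stored in the handle against the slot's current generation, which is exactly how invalid references are caught immediately.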
From: Jesse J. <jes...@mi...> - 2002-12-17 11:41:43
|
At 5:58 PM +0100 12/16/02, Ignacio Castaño wrote:
> Jesse Jones wrote:
>> I have some thoughts on how this can be done, but I'd like to hear
>> some more details on how you've implemented this. :-)
>
> Well, that was mentioned some time ago on the Algorithms mailing list by Tom
> Forsyth, I think... However, I saw it for the first time in the fmod sound
> library to handle channel references.

Thanks for the summary. How is this better than simply ref counting objects, though?

> I don't think that handles are the solution to all the problems and that
> smart pointers are bad. However, I think that there are many ways of using
> smart pointers wrong, while the handle method is simple and errors are easy
> to catch.

Well, this is a separate issue: you don't have to use a smart pointer if you ref count your objects. It just makes the code clearer and eliminates a source of errors, since the ref counting is automated.

Having said that, smart pointers are normally very simple. As a rule they don't use any template meta-programming wackiness and they contain very little code (look at the boost smart pointer implementations, for example). It's hard to go wrong if you have a solid understanding of how constructors and assignment operators work.

  -- Jesse
|
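As an illustration of how little code a basic smart pointer needs, here is a minimal intrusive ref-counting pointer in the spirit Jesse describes. It is a sketch with invented class names, not the boost implementation; it is deliberately not thread-safe and does nothing about the cyclic-reference problem discussed earlier:

    #include <cassert>
    #include <cstddef>

    // Base class for objects that carry their own reference count.
    class RefCounted {
        int refs_;
    public:
        RefCounted() : refs_(0) {}
        void addRef()  { ++refs_; }
        void release() { if (--refs_ == 0) delete this; }
    protected:
        virtual ~RefCounted() {}
    };

    // The smart pointer: the constructors, destructor and assignment operator
    // do all of the ref counting, which is the entire point.
    template <typename T>
    class Ptr {
        T* p_;
    public:
        Ptr() : p_(NULL) {}
        Ptr(T* p) : p_(p) { if (p_) p_->addRef(); }
        Ptr(const Ptr& other) : p_(other.p_) { if (p_) p_->addRef(); }
        ~Ptr() { if (p_) p_->release(); }

        Ptr& operator=(const Ptr& other) {
            if (other.p_) other.p_->addRef();   // addRef first: safe for self-assignment
            if (p_) p_->release();
            p_ = other.p_;
            return *this;
        }

        T* operator->() const { assert(p_); return p_; }
        T& operator*()  const { assert(p_); return *p_; }
        T* get()        const { return p_; }
    };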
From: Douglas C. <zi...@ho...> - 2002-12-10 02:25:10
|
> Out of curiosity, what's the general size of your codebase?

.cpp - 605 files, 4MB
.h - 317 files, 1MB

That includes all our code (including support for 2 consoles). The reason we have half the number of .h files is that our plugins for the editor only require a .cpp file per property page / utility plugin. There would probably be only 150 .h files if I didn't have to export our 'action' classes so the property pages could get at them, since they're never included by anything other than a single, corresponding .cpp file in the game.

We're extremely data driven, using an action/value/event system instead of scripting -- and that covers all behaviors (save a couple of FSMs in code) for the player, NPCs, GUI, triggers, cinematics, etc. I did spend a while early on making sure dependencies wouldn't be an issue, and it seems to have paid off pretty well so far.
|
From: Pierre T. <p.t...@wa...> - 2002-12-10 17:10:49
|
> Out of curiosity, what's the general size of your codebase?

My personal codebase:

767 .h files, 3.3 MB
719 .cpp files, 9.6 MB

Full rebuild in 4:05 according to MSVC's /y3 option.

- 1GHz Athlon, 256 MB RAM
- No STL
- Very few templates
- No RTTI / exceptions, but C++ otherwise (multiple inheritance, etc.)
- Cut into 19 DLLs and a main app (exe)
- Almost all headers in the PCH (I know it's bad, but I don't care in my case)

It was much, much slower on my previous machine (a 500 MHz Celeron with less RAM). That definitely makes a difference for compile times.

Pierre
|
From: brian h. <bri...@py...> - 2002-12-10 05:38:16
|
> I suffered severe compile-time shock when I first started working at
> Oddworld, so I remember timing a build of Soul Ride (my immediate
> prior project) on the machine at work. Stats:
>
> full build: ~45 seconds (VC6)
> 1.80M total
> 1.46M .cpp (94 files)
> 0.15M .c (19 files)
> 0.18M .h (88 files)
> (71K lines total)

Then we have very similar build times and sizes for our respective "small" projects. So the question is -- what are the really big chunks that a "big" project adds that hose everything? STL is the obvious commonality, because large projects sans STL still seem to compile pretty quickly.

> The reason this works is because the compiler only reads headers (and
> builds data structures from them) once for the whole lump. As opposed
> to reading, building structures, compiling a few hundred lines, and
> then flushing everything out again, once for each source file. Our
> typical #include line-count is in excess of 25K lines (because of
> STL).

So it would seem that STL is directly the culprit for a lot of these build time explosions.

> Unfortunately the link time didn't seem to drop any, which is the

Are your codebases built into a single monolithic EXE, or an EXE and multiple DLLs? The latter approach seems to help link times appreciably.

> nobody under the age of 30 can be bothered to type "make" :).

Kids these days. =)

Speaking of which, I'm curious if anyone has tried using Jam (Just Another Make)? make isn't even quite 80s technology, much less 2000+ technology. It's mind-bogglingly obtuse, yet somehow manages to survive even today. Jam is an attempt to sort out this mess somewhat more cleanly, and Apple has migrated to it for Project Builder.

Brian
|
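The "lump" build quoted above is what is now often called a unity or bulk build: several source files are compiled as one translation unit so that shared headers are parsed once instead of once per file. A minimal sketch with invented file names -- not how the Soul Ride or Oddworld builds were actually organized:

    // lump_renderer.cpp -- one translation unit standing in for several.
    // The expensive shared headers below are parsed exactly once for the
    // whole lump, instead of once per source file.
    #include <vector>
    #include <string>
    #include <map>

    // Hypothetical source files; in a conventional build each of these would
    // re-include and re-parse the same headers all over again.
    #include "renderer_mesh.cpp"
    #include "renderer_texture.cpp"
    #include "renderer_material.cpp"
    #include "renderer_shadow.cpp"

The trade-off, as the quoted mail notes, is that link time doesn't improve, and touching any one member of the lump recompiles the whole lump.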
From: Ray <ra...@gu...> - 2002-12-10 07:41:44
|
I guess I should say that my Mac is a G3-350 with gcc3. I think that's why it takes something like 45 minutes to compile. We also get the compiler to generate an internal compiler error on a couple of our files.

420 .h files (1.7 megs)
455 .cpp files (4.0 megs)
140 .c files (2.5 megs) (mostly libpng/zlib/libjpg)

My win32 machine is an Athlon 1700+ on Win2k.
The Linux box is a PIII-450 with gcc 2.9-something.
The Mac box is a G3-350 with OS 10.2 and gcc3.
So that's not really comparing oranges to oranges.

We use Jam for Mac and Linux, BUT we don't use the built-in version of Jam on the Mac because its internal Jamrules are not compatible with a couple of things we do. (I'm not quite sure what, because I wasn't the one to get it working.) We also have a problem with it not correctly generating dependencies for a few files, so it doesn't compile them when it should. I also can't figure out how to force Jam to compile a specific .cpp file for me; it's pretty much either the main target app or nothing.

We were just playing around with trying to get things to compile faster and found that gcc 3.x is much slower than gcc 2.x.

I also discovered that gcc (at least on the Mac) has some form of precompiled header support, which I haven't tried to use yet.

All I can say is gcc is ssslllllllllllooooooooooooowwwwwwwwww.

Faster hard drives help compiling speed too.

I just can't understand why STL is the main culprit for that much of a slow-down.

Speaking of slow-down, my brain slowed down, so I think I will go to bed now.

- Ray
|
From: Ray <ra...@gu...> - 2002-12-10 08:11:05
|
Hm, I did a little test with -O0, -O1, -O2, and -O3 on one file that takes forever to compile.

-O0: 12 seconds
-O1: 46 seconds
-O2: 1 minute 15 seconds
-O3: 4 minutes 40 seconds
(times are approximate)

using this command line:

date ; c++ -c -D_POSIX -Wall -DDarwin -D_BIGENDIAN -fpascal-strings -O2 -fomit-frame-pointer -ffast-math -mdynamic-no-pic -DIMAGE_DIR=\"images\" -DTEXTURE_DIR=\"textures\" -DC3D_ENABLED -I../../vendetta2/oe -I../../gk -I../../vendetta2 -o bin.macosxppc.release/cradar.o gkobjects/cradar.cpp ; date

This is on my state-of-the-art ;) G3-350 Mac with OS X and gcc 3.1. Optimizing sure takes a long time for some reason on this file. It uses STL's vector, and it doesn't use it for very much. It does use one of our home-brew templates, though other files using that same template don't take that long to compile. It also does a fair amount of 3D math and calls a bunch of virtual functions. I just don't get it.

Oh well, I'm going to sleep now.

- Ray

Ray wrote:
> I guess I should say that my mac is a g3-350 with gcc3. I think that's
> why it takes like 45 minutes to compile.
[snip]
|
From: Thatcher U. <tu...@tu...> - 2002-12-15 17:57:31
|
On Dec 09, 2002 at 11:38 -0600, brian hook wrote:
>> Unfortunately the link time didn't seem to drop any, which is the
>
> Are your codebases built into a single monolithic EXE, or an EXE and
> multiple DLLs? The latter approach seems to help link times
> appreciably.

It's monolithic (this is for Xbox; dunno if something DLL-like is possible).

> Speaking of which, I'm curious if anyone has tried using jam (Just
> Another Make)? make isn't even quite 80s technology, much less
> 2000+ technology. It's mind bogglingly obtuse, yet somehow manages
> to survive even today. jam is an attempt to sort out this mess
> somewhat more cleanly, and Apple has migrated to it for
> ProjectBuilder.

I gave Jam the "old college try" on an earlier version of the code at Oddworld. I eventually decided it was just as cryptic as make, but cryptic in a way that's less familiar, less well documented, and less actively supported (compared to GNU make). I ran into numerous quirks with NT shell limitations, and confusion trying to shoehorn Xbox's build process into Jam's model. I have no doubt that it can be made to work smoothly, but I also think it offers no compelling advantage over GNU make, so for me it's a devil-you-know vs. devil-you-don't-know situation.

Also, this (extremely practical and informative) paper convinced me that Jam's vaunted speed advantage over make is due to widespread misuse of make, rather than a fundamental problem:

http://www.tip.net.au/~millerp/rmch/recu-make-cons-harm.html

-- Thatcher Ulrich
http://tulrich.com
|
From: Root, K. <kr...@fu...> - 2002-12-10 10:26:07
|
My 2 cents =)

I just did a full rebuild of one of our server applications, with its services, on my development PC -- here are some details:

.cpp - 1357 files (17.3MB)
.c - 303 files (6.8MB)
.h - 2039 files (11.9MB)

Full rebuild time: ~17 minutes. That code consists of 113 sub-projects (components in the server and helper services). We make heavy use of STL in the code, along with our huge template library, which wraps some STL templates, Windows API functions, and so on.

My development PC is a 2xP3 1000, 512 RAM, 2x40GB IDE (in RAID).

Konstantin Root
Software Engineer
fusionOne Estonia
www.fusionone.com / www.mightyphone.com

> -----Original Message-----
> From: Dan Thompson [mailto:da...@cs...]
> Sent: Tuesday, December 10, 2002 6:02 AM
> Subject: Re: [GD-General] Compile times
[snip]
|
From: brian h. <bri...@py...> - 2002-12-10 17:55:46
|
> I believe this has more to do with your experience in both languages
> than anything else. You've huge experience writing C code, but (from
> what I can infer) little experience working with large C++ codebases.

That's not actually the case; I have significantly more time using C++ than C. I started coding in C++ in 1989 and didn't switch "back" to ANSI C until 1995, then I switched back to C++ in 1999. The problem is that I've used and "survived" C++ through several different shifts in conventional idiomatic usage.

When I first started using C++, inheritance was all the rage and templates didn't exist. You were supposed to use private inheritance to inherit an implementation and public inheritance to inherit an interface and/or implementation. Then, sometime in the late 90s, inheritance fell out of favor and it was time to switch to generics (and you were now expected to use exceptions, and RTTI was becoming formalized). Except that no compiler had a competent template implementation, and even today MSVC 7 doesn't have partial template specialization.

So if anything, my problem is that I've used C++ too much, because the language and its "proper" usage have shifted massively over the past decade-plus.

> What's worse, C++ can be badly used in more varied and obscure ways
> than C or other simpler languages, and some design philosophies even
> encourage these uses.

This is quite true, and if I had to list one single reason why C++ code across the board is a disaster, it would be this: there are no generally accepted idiomatic usage patterns for even the most basic things. Every aspect of the language may or may not be used by a programmer, and you can have one person do it "right" and another person do it "right" and both of them completely and utterly fail to understand what the other is doing.

There are valid reasons to use multiple inheritance, templates, exception handling, operator overloading, STL, private inheritance, default arguments, non-virtual functions, factory methods with protected constructors/destructors, etc. But there are also many valid reasons not to, and a lot of it boils down to philosophy. Because C++ tried so hard to be philosophically agnostic, there is no right/wrong way. Contrast this with a language like Eiffel, Objective-C or Smalltalk, where idioms WERE established, and those languages have a very clean feel to them compared to C++. You understand what one person's Obj-C is supposed to be doing, whereas decoding another person's C++ is completely hit or miss depending on the amount of time you have and how much effort they spent cleaning up their code.

Two pretty good books on these subjects: Objects Unencapsulated (written by a guy who likes Eiffel, but it's still a damn good take on modern OO languages) and An Introduction to Object-Oriented Programming by Timothy Budd. The latter feels extremely unbiased and is also a good overview.

I don't actually have very deep hierarchies. In fact, I think the deepest I have anywhere is three, but I'm not even sure I have one that deep. The dependencies are due to aggregation and actual usage -- for example, my HPuzzleGame class probably uses about 20-30 different classes. And some classes are implemented on assumptions about how other classes operate (yes, this breaks all manner of OOP, but unfortunately in the real world you don't know what you're abstracting until you're done writing code -- that's why going back and refactoring is so important).

Brian
|
From: Javier A. <ja...@py...> - 2002-12-10 18:42:50
|
brian hook <bri...@py...> wrote:
> So if anything, my problem is that I've used C++ too much because the
> language and its "proper" usage has shifted massively over the past
> decade+.

That's interesting. In contrast, I believe I haven't changed my approach to code design all that much from the way I worked back in Z80 assembler times. I was using function pointers to handle game entities (and behavior states) polymorphically, so there you go. Of course the tools have changed, the style has evolved, and new techniques have been added to account for large projects, but the overall idea isn't much different.

Learning OO design with Eiffel and Modula-2 probably helped, too.

Javier Arevalo
Pyro Studios
|
From: Brian H. <bri...@py...> - 2002-12-10 20:39:38
|
> Learning OO design with Eiffel and Modula-2 probably helped, too.

Definitely. The problem is that I had learned really basic procedural C and then immediately hopped over to C++ back when C++ was the new "it" language. In fact, I was at the local college bookstore before it opened when I knew they had gotten in their first shipment of Borland C++ 1.0, the first C++ compiler for the PC that wasn't hideously expensive (prior to that you had to use cfront -- g++ didn't exist yet -- and if you wanted to use cfront, you had to be on a Unix system).

So my introduction to OOP came directly from learning C++ via Stroustrup and Lippman, and that imposed a ton of bad habits from day one. Specifically, it imposed no habits, because of C++'s strong philosophical belief that "there's no right way to do OOP". So, given a syntax and some general guidelines, I marched on from there. If I had started with something with a much stronger philosophy about programming, such as Objective-C, Smalltalk, or Eiffel, I would have had at least some context when learning. As it was, I was just kind of winging it, and this was before there was a lot of documentation on "proper" OOP. And, of course, the concept of "proper OOP" has changed drastically over the past 10 years.

Using Objective-C has greatly cleaned up a lot of my code structure, but at the same time the inability to carry over parts of Obj-C to C++ is incredibly frustrating. Most of it is doable in ways I don't find appealing (interfaces-via-MI), but it's still doable.

Brian
|