RE: [GD-General] Compile times
From: Brian H. <bri...@py...> - 2002-12-09 22:39:05
> [to build the full Quake 2 source code]
>
> Let me repeat: 37 seconds.
>
> Wow, that's really the way it should be. I'm going to have a
> hard look at our build environment and try to significantly
> reduce our build time.

The amusing thing, of course, is that the times I've mentioned
this, it's been pooh-poohed on the grounds that the Quake2 and
Quake3 stuff wasn't very technologically advanced.

> What are the alternatives other than rolling your own?

Rolling your own? =)

For C++, unless you want a base root Object class, there's no
alternative, since STL is templatized and gives you the static
type checking that many feel is so vitally important. But in more
dynamic languages such as Java (*shudder*) and Objective-C, the
core container classes operate on base Objects and everything
still manages to work just fine.

> Along with:
> - Lots of debugged code already written
> - Already quite optimized
> - Type safety
> - Familiarity of new programmers with that API
> - Other APIs built on top of it (boost, for example)

I can agree that the above are valuable, but in my experience
those are close to non-factors when compared to the cost (again)
of using STL.

Now, I tend to work on small teams -- three programmers at id, one
programmer at my current gig -- so I do have a procedural
advantage right there.

In most cases, if a container class is showing up as a performance
hit, my guess is that I'm either doing something wrong or, more
likely, I need to optimize it beyond what is already done in STL.
But I don't have anything to substantiate that, so it's neither
here nor there.

> The counter argument of "how long does it take to write a
> linked list" isn't totally convincing either. How long does it
> take to write an efficient balanced red-black tree? Can you
> write them so it is possible to change the way memory is
> allocated easily? Can you write them so they can be reused
> for different types and not make extra memory allocations?

Even if I never, ever re-use any of my list or hash code,
re-implementing from scratch is almost never a problem. It's not
like I have containers littered about so much that I need the
absolute best, most generalized implementation available at all
times. I know that's a philosophical shift from what many believe
in these days, but I almost never, ever have to implement
radically specialized data structures. Everyday, easy-to-write
stuff like lists, maps, sets, etc. is basically all I ever use.
And when I start getting into more exotic data structures -- BSP
trees, quad-trees, oct-trees -- then STL isn't going to help.

Maybe that's a good summary -- for the things that STL can help
with, rolling my own consumes almost zero time. It's probably 15
minutes to write a simple binary tree implementation from scratch.
I spent more time writing this e-mail =)
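To put my money where my mouth is, here's roughly what I mean --
banged out for this e-mail, completely untested, int keys only,
no balancing, and malloc hardwired in:

/*
 * A bare-bones binary tree -- a sketch, not production code.
 * No balancing, no error handling on the malloc.
 */
#include <stdlib.h>

typedef struct node_s
{
    int            key;
    void          *value;
    struct node_s *left, *right;
} node_t;

node_t *Tree_Insert( node_t *root, int key, void *value )
{
    if ( root == NULL )
    {
        node_t *n = ( node_t * ) malloc( sizeof( *n ) );

        n->key   = key;
        n->value = value;
        n->left  = NULL;
        n->right = NULL;
        return n;
    }

    if ( key < root->key )
        root->left = Tree_Insert( root->left, key, value );
    else if ( key > root->key )
        root->right = Tree_Insert( root->right, key, value );
    else
        root->value = value;   /* duplicate key: just replace the value */

    return root;
}

void *Tree_Find( node_t *root, int key )
{
    while ( root != NULL )
    {
        if ( key == root->key )
            return root->value;

        root = ( key < root->key ) ? root->left : root->right;
    }

    return NULL;
}

And if it ever shows up in the profiler, or I need a pool
allocator or a different key type, all the code is right there to
change.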
For the things that really need an optimized implementation, odds
are it's a data structure that STL just doesn't understand. So STL
-- for me -- falls exactly in that area of being a non-issue. As a
library for the masses, it's great, because it does a little bit
of everything well, in a documented fashion. But when it gets down
to brass tacks for _one project_, I think the value plummets.

But I do recognize I'm singularly alone on this. I also recognize
that I have 5-second compile-and-go times as well. =)

> Just out of curiosity, what are you developing in these days?
> Light C++, or straight C?

Both, although frankly I wish I had just stuck with pure ANSI C,
since it compiles and runs in a much cleaner fashion.

My core frameworks are 144 header files and 187 source files that
cover a wide variety of platforms, and while the hierarchy started
out clean, it has since become a mess, because code just has a
habit of growing, changing and morphing over time. And C++
hierarchies, to borrow a phrase from Scott Bilas, tend to "harden"
over time. The more interdependent your class trees get via
inheritance, delegation, aggregation, etc., the more difficult it
becomes to refactor your existing code, because the chain of
dependencies grows longer and longer with time. The code base
becomes resistant to change.

All my new libraries are written in ANSI C, because I plan to
release many of them as open source. But in the process of writing
them in ANSI C and acutely examining the dependencies that
existed, I found that I was writing MUCH cleaner code, and not
just because I expected others to read it.

Application frameworks are great for writing applications, but I
think they're also extremely cumbersome and dangerous when trying
to write libraries or to allow code to be reused in a piecemeal
fashion, which is often extremely handy. I've been frustrated at
times because I'll write a simple command line app that I want to
give to someone, but it uses one tiny little class out of my
frameworks, and now suddenly I have to hand over megabytes of
source code so someone can compile a 10-line program.

I believe strongly in using C++ as an application language, but
for libraries I think ANSI C is probably superior, for many
reasons. Frameworks make sense for developing entire apps, but by
their very nature they're incredibly difficult to partition and
release in pieces.

Code reuse introduces dependencies, and dependencies work directly
against refactoring code cleanly. I'm not against code reuse, but
it's worth pointing out that the more code is reused, the harder
it is to change or improve.
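For what it's worth, the kind of library interface I'm talking
about is one header and one source file per module, with an opaque
handle so client code never depends on the internals. A contrived
sketch (hypothetical names, not lifted from my actual code):

/* hash.h -- the entire public interface.  The struct is opaque,
   so clients pick up zero dependencies on the implementation, and
   the whole module can be handed over as exactly two files. */

typedef struct hashTable_s hashTable_t;

hashTable_t *HashTable_Create( int numBuckets );
void         HashTable_Destroy( hashTable_t *table );
void         HashTable_Insert( hashTable_t *table, const char *key,
                               void *value );
void        *HashTable_Find( const hashTable_t *table, const char *key );

Contrast that with pulling one class out of a deep C++ hierarchy,
where the header drags in its base classes, which drag in theirs,
and so on down the chain.

Brian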