RE: [Algorithms] programmer productivity versus runtime performance
From: Tom H. <to...@3d...> - 2001-03-18 23:57:34
At 08:25 AM 3/18/2001, you wrote:

>My answer is zero. I would not be willing to give up any run-time
>performance.

Oh... so you do everything in assembler now? :) All your animation scripting is done in C/ASM as well? Wow... hehe :)

It's a rather ambiguous question at best, as it depends greatly on which parts of the code would run slower and how much time they currently take to write. Since your tight loops take most of the execution time, and you can still code those in C/ASM (and through APIs like DX and OpenGL), the overall performance of the game wouldn't be hurt too badly if you took the rest of the code, the part that isn't performance sensitive, and wrote it in something that let you code twice as fast.

I would say that in a typical game something like 90% of the code takes 10% of the execution time. If you reduce the time it takes to write that 90% of the code and take a 50% performance hit on that code (only), you're still only showing a 5% performance hit for the entire game's frame rate. Seems pretty worthwhile to me.

>That said, I seriously need to find a way to dramatically
>improve my build times as my project gets bigger..and bigger..and
>bigger..and bigger...and my team spends more and more time waiting on the
>compiler and less time getting work done.
>
>Maybe breaking up parts of the project into DLL's so that it can be built in
>smaller chunks?
>
>Don't know, but suggestions greatly appreciated.

DLLs and LIBs can help IF the code and the interface to them change very rarely. My biggest problem with DLLs and LIBs is making sure that people have the correct versions of things. You have no idea how many bugs you'll end up trying to track down that turn out to be mismatched DLLs or LIBs when things are changing fast and furious. When code changes you end up with problems at compile time; when DLLs change you end up with problems at run-time.
Mismatched versions were enough of a problem where I used to work that we dropped the whole DLL concept until the low-level stuff was so stable that it didn't change for months at a time. The "gotcha" is that when changes happen that infrequently, the changes you do make will most likely not get picked up, because people aren't used to grabbing them, or they end up grabbing only half of the change for any number of reasons :)

I suppose you could break the project up into LIBs and have each programmer compile his own LIBs as separate dependent projects. Make sure the source is shared.

Your best shot for reducing incremental builds is to limit header interdependence, but that can be a real pain in the ass. It seems like there are always one or two header files where, if you touch them, you're looking at a very large rebuild.

Your best shot for reducing full builds is to invest in fast machines with fast hard drives. I suggest 10,000 RPM SCSI III drives with the code on one drive and the data on another, plus the fastest CPU you can find. My setup right now is pretty close to that, and I see 100% CPU usage (P3 750) when writing to the SCSI drive, but less than 100% when writing to an IDE drive. Using Visual C++, btw :)

Tom