Thread: RE: [GD-General] Compile times (Page 2)
From: Chris C. <chr...@l8...> - 2002-12-11 05:42:33
> would you do it? _Has_ anyone actually done it (e.g. even if only for
> internal tools)? I'd love to hear about the build times for large(-ish)
> Java or C# projects, for example.

Not related to GD, but jikes can compile our 1MB source code webapp in
under 4 seconds (wouldn't want to use javac though).

ChrisC
From: Matt N. <mat...@ni...> - 2002-12-11 11:57:01
> -----Original Message-----
> From: Donavon Keithley [mailto:kei...@ea...]
> Sent: 11 December 2002 06:40
> Subject: RE: [GD-General] Compile times
>
> But is C# viable for real game development? Hard to say until somebody
> actually does it. Phil Taylor has claimed that the Managed DirectX
> examples will run 98% as fast as the C++ examples. That's bold, and
> promising. (Incidentally, Managed D3D is much cleaner and easier to
> work with than regular old D3D.) Still, it may be that you'd want your
> low-level stuff in C/C++, and watch out for chatty interfaces.

Is there a decent profiler for C#? I would be a lot more comfortable
about trying it out if I knew that I had a good profiler so I could
quickly spot any areas which might be candidates for replacement with
C++ code. If something with similar functionality to VTune is available
for C# (as far as I know VTune itself is not?) then performance concerns
would be easier to address.

Matt.
From: Donavon K. <kei...@ea...> - 2002-12-11 17:31:08
> -----Original Message-----
> From: Matt Newport
> Sent: Wednesday, December 11, 2002 7:02 AM
> Subject: RE: [GD-General] Compile times
>
> Is there a decent profiler for C#? I would be a lot more comfortable
> about trying it out if I knew that I had a good profiler so I could
> quickly spot any areas which might be candidates for replacement with
> C++ code. If something with similar functionality to VTune is
> available for C# (as far as I know VTune itself is not?) then
> performance concerns would be easier to address.

Apparently VTune 6 profiles .NET (I don't have it myself), so the short
answer to your question would appear to be "Yes." GotDotNet lists four
other commercial profilers and one free profiler.

The .NET CLR has very decent profiling support. For simple purposes you
can read runtime statistics or look at the performance counters. It's
also relatively easy to roll your own profiler (see 'Program
Files\Microsoft Visual Studio.NET\FrameworkSDK\Tool Developers
Guide\docs\Profiling.doc'). Matt Pietrek wrote an 'Under the Hood'
article on this
(http://msdn.microsoft.com/msdnmag/issues/01/12/hood/default.aspx).

And I'll just throw this in: "Improve Your Understanding of .NET
Internals by Building a Debugger for Managed Code"
(http://msdn.microsoft.com/msdnmag/issues/02/11/CLRDebugging/default.aspx)

--Donavon
From: Iain R. <i.n...@re...> - 2002-12-12 09:16:21
For what it's worth, my experience shows a marked difference between VC6
and Borland C++ Builder (3 and 4). I'm using BCB, and my complete
rebuild time (on a lowly P2-333, 128MB) is approx 1 minute. It's not a
large project though - approx 70 .c files, all straight C. However, I
did get the opportunity to build it with VC6, and it took approx 10
minutes!

Now, I had my header files optimised for BCB (article at
http://www.bcbdev.com/articles/pch.htm) and it's not hugely pretty, but
it's good for rebuilds, capitalising on cached pch files. After BCB has
created its pch, it literally flies through the source files (maybe 4 a
second). I didn't manage to configure VC to rebuild this fast.

But what it shows is that it pays to get to know your compiler, and find
out how to optimise your code for it. And you shouldn't have to put up
with slow compile times. BCB tells you how many lines are being
compiled. Even a small source file can cause the compiler to look at
over 200,000 lines of code in headers...

Iain
From: brian h. <bri...@py...> - 2002-12-16 02:18:57
> but now I feel much more comfortable with my 'simple' handles.

I think this is also a big part of my gripe with smart pointers -- much
like STL, I don't see them solving real world problems. They sound
great in theory, but any well engineered project will have a pretty firm
grasp on memory allocation, freeing, and ownership policies. Memory
management bugs are something you encounter when learning how to
program, but at some point they cease to be a significant problem when
there is proper discipline in place.

-Hook
From: Jesse J. <jes...@mi...> - 2002-12-16 12:27:11
At 8:19 PM -0600 12/15/02, brian hook wrote:
> > but now I feel much more comfortable with my 'simple' handles.
>
> I think this is also a big part of my gripe with smart pointers --
> much like STL, I don't see them solving real world problems.

For the most part that might be true if you're not using exceptions.
But if you are, then it's IMO nearly mandatory to encapsulate the act of
newing/deleting into a class (like std::auto_ptr) and similarly
adding/removing references.

Even if you aren't using exceptions, manually managing ref counts can
get pretty hairy with large code bases. (I saw this with InDesign, a
huge desktop publishing system that used ref counting *very* heavily,
but exceptions very little, and didn't add a smart pointer class until
late in development.)

-- Jesse
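Jesse's point about encapsulating new/delete can be sketched in a few
lines. This uses std::unique_ptr as a stand-in for the era's
std::auto_ptr, and the Texture/parse names are invented for
illustration only:

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>

struct Texture {
    static int live;            // live-instance count, to observe cleanup
    Texture()  { ++live; }
    ~Texture() { --live; }
};
int Texture::live = 0;

void parse_or_throw(bool fail) {
    if (fail) throw std::runtime_error("bad data");
}

bool load_level(bool fail) {
    std::unique_ptr<Texture> tex(new Texture);  // owned by the smart pointer
    try {
        parse_or_throw(fail);   // may throw
    } catch (const std::runtime_error&) {
        return false;           // tex's destructor still runs here: no leak
    }
    return true;                // ...and here; every exit path cleans up
}
```

With a raw pointer, the early-return path in the catch block would leak
unless every such path remembered to delete by hand.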
From: Kent Q. <ken...@co...> - 2002-12-16 17:00:04
At 08:19 PM 12/15/2002 -0600, you wrote:
> Memory management bugs are something you encounter when learning how
> to program, but at some point they cease to be a significant problem
> when there is proper discipline in place.

You've said something similar to the "proper discipline" comment several
times now, and I think you've got a blind spot going on here.

If you're coding by yourself, personal discipline is all you need. But
if you're coding on a team, you have to have discipline that applies to
everyone on the team. And not everyone is a coding machine. Not everyone
works on all the code all the time. Even if every individual has perfect
discipline in their own code (hah!), you'll find errors in the
integration -- in the interfaces between modules. The old "He allocated
it but *I* need to free it" is one example of the problem.

I've worked on big teams, small teams, and by myself. And some of the
stuff you've dissed lately -- wrapping allocations in smart pointers,
typesafe containers, and so forth -- is useful in almost direct
proportion to the number of people working on the code. Though I still
think that I benefit greatly -- even as an individual coder -- from such
things. It lets me think at a higher level.

Kent

Kent Quirk, CTO, CogniToy
ken...@co...
http://www.cognitoy.com
From: brian h. <bri...@py...> - 2002-12-16 17:46:28
> You've said something similar to the "proper discipline" comment
> several times now, and I think you've got a blind spot going on here.

It's not a blind spot, it's called wishful idealism =)

> If you're coding by yourself, personal discipline is all you need.
> But if you're coding on a team, you have to have discipline that
> applies to everyone on the team.

Yes.

> And not everyone is a coding machine.

Don't have to be a machine, just have to be careful and thoughtful.

> Not everyone works on all the code all the time.

They don't have to. Every time you call "new" or "delete" or return a
pointer or accept a pointer, it should set off alarm bells immediately.
It should mean that you need to stop and document what you're doing. It
should mean that you should take the time to make a Doxygen or similar
style header that defines exactly what you're doing. You should decide
what preconditions, postconditions and invariants need to be defined and
adhered to for your particular class or function.

I don't think this is too much to ask. No one is in such a hurry that
they can't take a minute to think about and write down the assumptions
about allocation, ownership and referencing for memory they're dealing
with.

Now, don't get me wrong, my code isn't perfect in this regard, but I'm
generally aware of where things are sloppy and when/how I need to fix
them. I write code in an iterative fashion, and I comment the hell out
of it when I need to fix something later. In fact, my code has probably
gotten an order of magnitude better now that I'm going back to a more
ANSI C style of coding.

I hesitate to call my coding style "XP" since that's so overused, but
basically one reason I'm drifting away from large C++ frameworks is that
you almost never, ever get it right the first time. Or the second time.
Or the third time. The ability to abstract and refactor iteratively is
immensely powerful, but C++ code bases have a tendency to build up
resistance to fundamental change, making these changes difficult to do.
Part of this is the fault of tools, obviously.

> The old "He allocated it but *I* need to free it" is one example of
> the problem.

But that's a trivial problem. Ten years ago Taligent defined a set of
naming conventions to indicate responsibility for creation, destruction,
referencing, etc. And honestly, dynamic memory allocation is so rare (at
least in my code base, because I abhor fragmentation) that it's not like
I run into these problems multiple times a day.

> And some of the stuff you've dissed lately -- wrapping allocations in
> smart pointers, typesafe containers, and so forth -- are useful in
> almost direct proportion to the number of people working on the code.

Maybe vast team size is a mitigating factor, but I still can't help but
feel that when this is used as an excuse it's because:

- there is poor communication
- the individuals that make up the team aren't as good as they should
  be because of the required size of the team

I don't believe in trying to solve the above problems (poorly) by
introducing a degree of non-determinism in my code (smart pointers),
encouraging laziness and lack of thought (smart pointers), and
increasing my build times by a factor of 100 (STL).

> It lets me think at a higher level.

Thinking about low level details is not mutually exclusive with thinking
at a higher level. In fact, I would argue that routinely discounting the
former affects the latter.

Brian
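The ownership-in-the-name convention Brian alludes to can be sketched
roughly like this (the Scene/Mesh classes and the exact verb choices
here are invented for illustration, not taken from Taligent's actual
style guide):

```cpp
#include <cassert>

// The verb in the function name tells the caller who owns what:
//   CreateX : caller receives ownership and must eventually delete it.
//   AdoptX  : callee takes ownership of the argument.
//   GetX    : a borrowed pointer; the caller must not delete it.
struct Mesh { int id; };

class Scene {
public:
    ~Scene() { delete owned_; }
    Mesh* CreateMesh(int id) { return new Mesh{id}; }       // caller owns
    void  AdoptMesh(Mesh* m) { delete owned_; owned_ = m; } // scene owns now
    Mesh* GetMesh() const { return owned_; }                // borrowed only
private:
    Mesh* owned_ = nullptr;
};
```

No smart pointer machinery, but the ownership contract is visible at
every call site.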
From: Jesse J. <jes...@mi...> - 2002-12-17 12:03:16
At 11:46 AM -0600 12/16/02, brian hook wrote:
> Every time you call "new" or "delete" or return a pointer or accept a
> pointer, it should set off alarm bells immediately. It should mean
> that you need to stop and document what you're doing. It should mean
> that you should take the time to make a Doxygen or similar style
> header that defines exactly what you're doing. You should decide what
> preconditions, postconditions and invariants need to be defined and
> adhered to for your particular class or function.

Pretty much. And that's the problem with what you're suggesting: it
requires careful analysis of ownership issues and accompanying
documentation. Whereas a smart pointer sidesteps all of these issues and
AFAICT has no real drawbacks.

> I hesitate to call my coding style "XP" since that's so overused, but
> basically one reason I'm drifting away from large C++ frameworks is
> that you almost never, ever get it right the first time. Or the
> second time. Or the third time. The ability to abstract and refactor
> iteratively is immensely powerful, but C++ code bases have a tendency
> to build up resistance to fundamental change, making these changes
> difficult to do. Part of this is the fault of tools, obviously.

Frameworks don't have to have deep hierarchies or a zillion classes. But
they do require some experience and knowledge of OOD. I'm confident that
a skilled OO designer would have an easier time iterating classes than
an equivalent C programmer evolving C code.

> And honestly, dynamic memory allocation is so rare (at least in my
> code base, because I abhor fragmentation) that it's not like I run
> into these problems multiple times a day.

That's a big difference. I, at least, use dynamic allocation a lot and
I'd expect (naively?) that a lot of PC games do as well.

-- Jesse
From: Thatcher U. <tu...@tu...> - 2002-12-17 17:36:42
On Dec 16, 2002 at 11:46 -0600, brian hook wrote:
> introducing a degree of non-determinism in my code (smart pointers),

Note: the behavior of ref-counted smart pointers is completely
deterministic. (That's one of the nice things about ref-counting,
compared to other GC approaches.)

-- Thatcher Ulrich
http://tulrich.com
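Thatcher's point can be observed directly: with ref counting the
destructor runs at the exact moment the last reference is dropped, not
at some later collection pass. std::shared_ptr stands in here for a
2002-era ref-counted smart pointer; the Particle type is illustrative:

```cpp
#include <cassert>
#include <memory>

static int destroyed = 0;
struct Particle { ~Particle() { ++destroyed; } };

// Returns the destroyed-count observed after the *first* of two
// references is released. With ref counting this is deterministically 0:
// the object dies only when the second (last) reference goes.
int destroyed_after_first_release() {
    std::shared_ptr<Particle> a(new Particle);
    std::shared_ptr<Particle> b = a;   // refcount 2
    a.reset();                         // refcount 1: nothing destroyed yet
    int mid = destroyed;
    b.reset();                         // refcount 0: destructor runs *here*
    return mid;
}
```

A mark/sweep collector gives no such guarantee about *when* the
destructor (finalizer) runs, which is the contrast being drawn.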
From: Mick W. <mi...@ne...> - 2002-12-17 18:20:48
> -----Original Message-----
> From: Thatcher Ulrich
> Sent: Tuesday, December 17, 2002 9:32 AM
> Subject: Re: [GD-General] Compile times
>
> On Dec 16, 2002 at 11:46 -0600, brian hook wrote:
> > introducing a degree of non-determinism in my code (smart pointers),
>
> Note: the behavior of ref-counted smart pointers is completely
> deterministic. (That's one of the nice things about ref-counting,
> compared to other GC approaches.)

Well, all code is completely deterministic, if you want to get picky.

I think what Brian was referring to is that with a ref-counted smart
pointer, if you "delete" it, then you don't know if the object it points
to is going to be actually deleted. So you are writing code where you
don't know what it is going to do. Of course, you could actually
determine what it is going to do, by adding some more code to check the
reference count. But does this make your code deterministic? And is that
even something you want the smart pointer to supply an interface to?

I fear getting pedantic here. But when you speak of determinism of a
thing, you are referring to the lack of dependency on external factors
in contributing to the future state of an object. When you speak of the
determinism of code, I understand that as a measure of the extent to
which you can predict the behaviour of a piece of code. For example,
"'delete p' will call the destructor of object *p" vs "'delete p' might
call the destructor of the object *p".

So, while you could (maybe) say that the behavior of a ref-counted smart
pointer is completely deterministic, I think it's a fair comment to say
that replacing the usage of a regular pointer with a smart pointer will
decrease the amount of determinism in a piece of code (Brian's code, not
the smart pointer code), by making it reliant on the actions of other
loosely coupled objects about which it knows nothing.

Okay, I'll stop now. Sorry.

Mick.
From: Kyle W. <ky...@ga...> - 2002-12-17 19:14:57
Mick West wrote:
> When you speak of the determinism of code, I understand that as a
> measure of the extent to which you can predict the behaviour of a
> piece of code. For example "'delete p' will call the destructor of
> object *p" vs "'delete p' might call the destructor of the object *p"

This general problem -- hiding of potentially expensive operations -- is
really a problem with C++ code in general, not just with smart pointers.
After all, it's not immediately obvious whether a variable going out of
scope will disappear off the stack or call an expensive destructor when
a function exits. It's not obvious whether object destruction will free
resources or recursively call other destructors or spin-lock waiting for
another thread. And the C++ emphasis on interface over implementation
makes it harder for users of a class to keep up with the cost of
creating/deleting/using instances of that class.

The uncertainty is, of course, the price we pay for the higher level of
abstraction at which C++ allows us to operate. I like C++, and I like
smart pointers, and I find them both quite useful for the applications
on which I work. I'm working on complex games on PCs and newer consoles,
though. If I were writing for the GBA, or PalmOS, or cell phone
applications, I'd be a lot more likely to give up smart pointers and
classes in favor of the tighter control of straight C.

Kyle
From: Brian H. <bri...@py...> - 2002-12-17 19:56:12
> Behalf Of Kyle Wilson
>
> This general problem -- hiding of potentially expensive operations --
> is really a problem with C++ code in general, not just with smart
> pointers.

Not to unfairly dog C++ -- because I've dogged it plenty in my life =)
-- but this is generally a problem between "very high level languages"
and "high level languages". It's a philosophical difference between
"hide the details, concentrate on the overall" and "I need to know the
details".

The problem I see is that a lot of coders today are trying very hard to
hide the details because this OOP dogma of "encapsulation, look at
interfaces only, DON'T THINK ABOUT THE IMPLEMENTATION!" has been shoved
down everyone's throats so much that you almost feel guilty knowing
whether an operation might be expensive or not.

> After all, it's not immediately obvious whether a variable going out
> of scope will disappear off the stack or call an expensive destructor
> when a function exits. It's not obvious whether object destruction
> will free resources or recursively call other destructors or
> spin-lock waiting for another thread. And the C++ emphasis on
> interface over implementation makes it harder for users of a class to
> keep up with the cost of creating/deleting/using instances of that
> class.

Well stated.

Brian
From: Kyle W. <ky...@ga...> - 2002-12-17 22:46:19
Brian Hook wrote:
> The problem I see is that a lot of coders today are trying very hard
> to hide the details because this OOP dogma of "encapsulation, look at
> interfaces only, DON'T THINK ABOUT THE IMPLEMENTATION!" has been
> shoved down everyone's throats so much that you almost feel guilty
> knowing whether an operation might be expensive or not.

Well, in all fairness I think that on a large team (e.g., Bioware's
Neverwinter Nights team, which had *22* programmers) it's just
impossible to keep track of all the details of other engineers'
implementations. The teams I've worked on have averaged 10-12
programmers and generally haven't had what I consider great internal
communication. Under those circumstances, separation of interface from
implementation and avoidance of memory ownership issues are Good Things,
despite the costs.

Kyle
From: Thatcher U. <tu...@tu...> - 2002-12-17 19:51:41
On Dec 17, 2002 at 10:21 -0800, Mick West wrote:
> > Behalf Of Thatcher Ulrich
> >
> > On Dec 16, 2002 at 11:46 -0600, brian hook wrote:
> > > introducing a degree of non-determinism in my code (smart
> > > pointers),
> >
> > Note: the behavior of ref-counted smart pointers is completely
> > deterministic. (That's one of the nice things about ref-counting,
> > compared to other GC approaches.)
>
> Well, all code is completely deterministic, if you want to get picky.

When you compare reference counting to mark/sweep GC or multithreaded
code, there's definitely a meaningful and practical distinction, even if
you're not picky.

Anyway, I personally think "smart pointers as an implementation of
handles" is pretty worthwhile and doesn't violate Brian's original
objection, which is that he doesn't know for sure when an object will be
freed. I.e. you still have a manager somewhere that keeps a list of
smart pointers, so nothing can go away until the manager says so. Then
you put some checks in the manager to find and punish people who hold
references when they shouldn't be (e.g. at level-shutdown time). So
really it's a way to automate some of the discipline that Brian is
advocating.

-- Thatcher Ulrich
http://tulrich.com
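One way to sketch that "manager that punishes lingering references":
the manager keeps one shared_ptr per asset, so at level shutdown
anything with a use count above one is being held by code that should
have let go. std::shared_ptr and the Asset/AssetManager names are
stand-ins for whatever ref-counted handle scheme a real engine uses:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

struct Asset { std::string name; };

class AssetManager {
public:
    std::shared_ptr<Asset> create(const std::string& name) {
        assets_.push_back(std::make_shared<Asset>(Asset{name}));
        return assets_.back();   // caller gets a second reference
    }
    // Names of assets still referenced outside the manager: the manager
    // itself holds exactly one reference, so use_count() > 1 means
    // somebody out there is still hanging on at shutdown time.
    std::vector<std::string> shutdown_offenders() const {
        std::vector<std::string> bad;
        for (const auto& a : assets_)
            if (a.use_count() > 1) bad.push_back(a->name);
        return bad;
    }
private:
    std::vector<std::shared_ptr<Asset>> assets_;
};
```

Nothing is freed behind the manager's back, yet leaked references are
caught mechanically rather than by code review.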
From: Brian H. <bri...@py...> - 2002-12-17 20:18:26
> So really it's a way to automate some of the discipline that Brian is
> advocating.

And just to be clear, it's not like I'm advocating that we all eschew
any form of compile time safeguards. I'm not quite ready to ditch
function prototypes just yet.

I believe in doing as much at compile time as possible up until the
point where you start incurring greater costs than the benefit. Where
that crossover occurs depends on the individual, which is why I don't
mean to imply that "STL always sucks", but more like "the benefits of
STL don't outweigh the penalties for me". Much like scripting languages,
etc.

I guess it all wraps up philosophically to me as "What practical
advantages am I gaining for theoretical benefits, and can I achieve
these advantages in other ways?"

Brian
From: Paul B. <pa...@mi...> - 2002-12-17 19:38:28
> -----Original Message-----
> From: Mick West
> Subject: RE: [GD-General] Compile times
>
> So, while you could (maybe) say that the behavior of a ref-counted
> smart pointer is completely deterministic, I think it's a fair
> comment to say that replacing the usage of a regular pointer with a
> smart pointer will decrease the amount of determinism in a piece of
> code (Brian's code, not the smart pointer code), by making it reliant
> on the actions of other loosely coupled objects about which it knows
> nothing.

Right. The one thing that I question is whether it is "easier" overall
(in a large codebase) to mentally manage the "who-should-delete-this"
problem or the similar "is-this-release-going-to-delete-this" problem.

We use refcounted objects a lot (we also use passively tracked pointers
with explicit ownership for other things) and if there is a place in the
code where one *expects* the release to delete the object we insert
something like:

  ASSERT_LAST_REFERENCE(ref_counted_object_pointer);
  RELEASE(ref_counted_object_pointer);

Both of these are macros for the non-smart pointer case. The smart
pointer could support a similar method that could be asserted. The key
here is that we have a compiled piece of code that verifies an
assumption (that the code in question holds in fact the last remaining
reference at this point). It is difficult to have a similar *compiled*
artifact for verifying that I should be the one deleting a pointer. I
think this has some benefit.

Like I said, we use both refcounting and explicit lifetime control (for
different systems) as both have their benefits. We try to maintain a
consistent scheme within major systems so people expend as little mental
effort as possible when writing or debugging code.

Paul
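A minimal sketch of what macros like these could look like, on an
intrusive ref count invented here for illustration (Paul's actual
implementation is not shown in the thread):

```cpp
#include <cassert>

struct RefCounted {
    int refs = 1;                    // creator holds the first reference
    virtual ~RefCounted() {}
    void add_ref() { ++refs; }
    bool release() {                 // true if this release destroyed us
        if (--refs == 0) { delete this; return true; }
        return false;
    }
};

// Compiled check of the assumption "I hold the last reference here".
#define ASSERT_LAST_REFERENCE(p) assert((p)->refs == 1)
#define RELEASE(p) do { (p)->release(); (p) = nullptr; } while (0)
```

The assert turns a documentation-only claim ("this release frees the
object") into something the debug build actually verifies.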
From: <phi...@pl...> - 2002-12-17 19:47:41
> > And honestly, dynamic memory allocation is so rare (at least in my
> > code base, because I abhor fragmentation) that it's not like I run
> > into these problems multiple times a day.
>
> That's a big difference. I, at least, use dynamic allocation a lot
> and I'd expect (naively?) that a lot of PC games do as well.

Virtual memory hides a multitude of sins. Fragmentation being the big
one.

Cheers,
Phil
From: Brian H. <bri...@py...> - 2002-12-17 19:56:11
> Virtual memory hides a multitude of sins.
>
> Fragmentation being the big one.

Unfortunately, no it doesn't, because the problem isn't fragmentation of
physical memory, it's fragmentation of virtual addresses.

There's a common misconception that you get 32 bits of address space on,
say, Windows. You don't. You get 31 bits, since half the address space
is reserved for the OS. Out of this 2GB you have to share between heap,
static data, and code. So realistically speaking, you have a little less
than 2GB.

If you allocate and delete large chunks of memory, interspersed with
small ones, you will fragment that address space such that you may have
enough physical memory for an operation, but you won't have enough
contiguous address space to allocate it.

Brian
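Brian's failure mode can be shown with a toy first-fit model of an
address space (everything here is illustrative; real allocators are far
more elaborate): after interleaved large/small allocations and frees,
total free space can comfortably exceed a request that still fails,
because no single free run is big enough.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

class Arena {
public:
    explicit Arena(int size) : size_(size) {}

    // First-fit; returns start offset, or -1 if no single gap fits.
    int alloc(int n) {
        int pos = 0;
        for (std::size_t i = 0; i <= used_.size(); ++i) {
            int next = (i < used_.size()) ? used_[i].first : size_;
            if (next - pos >= n) {
                used_.insert(used_.begin() + i, std::make_pair(pos, n));
                return pos;
            }
            if (i < used_.size()) pos = used_[i].first + used_[i].second;
        }
        return -1;
    }
    void free_at(int off) {
        for (std::size_t i = 0; i < used_.size(); ++i)
            if (used_[i].first == off) { used_.erase(used_.begin() + i); return; }
    }
    int total_free() const {
        int used = 0;
        for (std::size_t i = 0; i < used_.size(); ++i) used += used_[i].second;
        return size_ - used;
    }
private:
    int size_;
    std::vector<std::pair<int, int>> used_;  // (offset, size), sorted by offset
};
```

A 10-unit allocation pinned in the middle of a 100-unit space is enough
to make a 60-unit request fail with 90 units free.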
From: Mick W. <mi...@ne...> - 2002-12-17 20:20:15
> -----Original Message-----
> From: Brian Hook
> Sent: Tuesday, December 17, 2002 11:56 AM
> Subject: Fragmentation was RE: [GD-General] Compile times
>
> If you allocate and delete large chunks of memory, interspersed with
> small ones, you will fragment that address space such that you may
> have enough physical memory for an operation, but you won't have
> enough contiguous address space to allocate it.

Still, your 2GB of address space will hide fragmentation a lot better
than the PS2's 32MB of address space. On the PC, it does not matter so
much if you lose 100k to fragmentation (or even leaks...) when changing
levels. But on the PS2, you are going to notice this very quickly. I've
always felt that this area is one of the fundamental differences between
PC programmers and console programmers.

Mick.
From: Brian H. <bri...@py...> - 2002-12-17 20:43:38
> Still, your 2GB of address space will hide fragmentation a lot better
> than the PS2's 32MB of address space.

Right, but as a matter of discipline (*cough*) I try to code avoiding
fragmentation because I never know where one of my, say, puzzle games
might be ported (handhelds aren't exactly super high-powered). And on
the completely opposite end, I've worked on extremely big games where
datasets are routinely several hundred MB chunks. If you're not paying
attention in these cases, you may find that you hit some pathological
combination of allocs/frees that ends up fragmenting things so badly
that you're in trouble. And debugging this can often be a huge pain in
the ass, because if you have a memory tracker it will report a fairly
innocent situation (unless your memory tracker also tracks
fragmentation).

> On the PC, it does not matter so much if you lose 100k to
> fragmentation (or even leaks...) when changing levels.

True, you have to start working with VERY large data sets for this to
become a serious problem. Alternatively, if you're dealing with a
persistent world, this is also a problem with much smaller memory
chunks. If you write a small scale persistent game with a long enough up
time, you may find that even with no leaks you can't allocate the same
stuff you could when the game first started up. That is a ROYAL bitch to
find.

> But on the PS2, you are going to notice this very quickly. I've
> always felt that this area is one of the fundamental differences
> between PC programmers and console programmers.

And this is also an area (again) that I feel shows a flaw in C++. When
programmers start adopting the mindset of "I don't care about memory, it
just works", the cost of finding out that it DOESN'T just work is
MONUMENTAL. I've worked on projects like this, where 1GB+ datasets were
routinely being considered, along with month+ up times, but ZERO thought
was put into how to handle fragmentation. Disaster waiting to happen.

Fragmentation is prevalent enough a problem that Objective-C supports
"zones" inherently. And any long time programmers will have been bitten
by this enough that they probably firmly understand the concept of
managing multiple heaps/memory arenas to avoid fragmentation.

Of course, you can overload operator new to "fix" this (for particle
systems, for example). Until you need multiple arenas for a certain
object type. Then, of course, you can work around that too. But after
enough workarounds, I just go "You know, I spend more time working
around solutions than I do fixing the problems the solutions 'fix'".

Crap, when did I get so old that I feel like prefacing everything with
"Back in my day..."? =)

-Hook
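The class-scoped operator new trick mentioned above can be sketched as
follows: give a high-churn object type its own fixed pool so its
allocation pattern can never fragment the main heap. Pool size, slot
layout, and the Particle class itself are illustrative choices, not a
production design:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

class Particle {
public:
    float x = 0, y = 0, vx = 0, vy = 0;

    static void* operator new(std::size_t) {
        if (!free_list_) throw std::bad_alloc();   // pool exhausted
        Slot* s = free_list_;
        free_list_ = s->next;                      // pop a free slot
        ++in_use_;
        return s;
    }
    static void operator delete(void* p) {
        Slot* s = static_cast<Slot*>(p);
        s->next = free_list_;                      // push slot back (LIFO)
        free_list_ = s;
        --in_use_;
    }
    static int in_use() { return in_use_; }

private:
    // Each slot is big enough for a Particle or a free-list link.
    union Slot { Slot* next; char storage[4 * sizeof(float)]; };
    static const int kMax = 64;
    static Slot pool_[kMax];
    static Slot* free_list_;
    static int in_use_;

    static Slot* build_free_list() {
        for (int i = 0; i < kMax - 1; ++i) pool_[i].next = &pool_[i + 1];
        pool_[kMax - 1].next = nullptr;
        return &pool_[0];
    }
};
Particle::Slot Particle::pool_[Particle::kMax];
Particle::Slot* Particle::free_list_ = Particle::build_free_list();
int Particle::in_use_ = 0;
```

Allocation and release are O(1) pointer pushes, and no amount of
particle churn touches the general-purpose heap.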
From: Thatcher U. <tu...@tu...> - 2002-12-18 02:21:17
On Dec 17, 2002 at 12:43 -0800, Brian Hook wrote:
> Fragmentation is prevalent enough a problem that Objective-C supports
> "zones" inherently. And any long time programmers will have been
> bitten by this enough that they probably firmly understand the
> concept of managing multiple heaps/memory arenas to avoid
> fragmentation.

Has anyone been bitten when using a decent malloc? I've seen bad
fragmentation with some older versions of msvcrt.dll, but MS's malloc in
Win2K ran neck and neck with dlmalloc in the same program, and didn't
seem to be showing any appreciable fragmentation.

-- Thatcher Ulrich
http://tulrich.com
From: Josiah M. <jm...@ma...> - 2002-12-18 00:41:45
Maybe I am totally off base here, but it seems to me that if you really
want to prevent fragmentation, for example in a server that has a really
long up time, the best thing to do would be to use smart pointers of
some sort.

By not having your pointers directly point to a memory location, this
allows you to change what the pointer points to. It seems to me that
having this additional abstraction would allow you to periodically
defragment memory, and the programmer wouldn't have to worry about the
details at all. It just works.

This is essentially what happens with your file system on the hard
drive. There is no easy way to know where exactly a file is on disk, and
it could change at any time with no one being the wiser.

Josiah Manson
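The double-indirection idea can be sketched in miniature: clients hold
an index into a table, the table holds the real offset into a byte
arena, so the heap can slide live blocks together without breaking
anyone. Allocation here is a plain bump pointer purely to keep the
sketch short; all names are invented:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

class HandleHeap {
public:
    explicit HandleHeap(int size) : arena_(size), top_(0) {}

    int alloc(int n) {                        // returns a handle, not a pointer
        blocks_.push_back(Block{top_, n, true});
        top_ += n;
        return static_cast<int>(blocks_.size()) - 1;
    }
    void release(int h) { blocks_[h].live = false; }

    // Re-fetch after every compact(); never cache this pointer.
    char* deref(int h) { return &arena_[blocks_[h].off]; }

    void compact() {                          // slide live blocks over dead ones
        int dst = 0;
        for (std::size_t i = 0; i < blocks_.size(); ++i) {
            Block& b = blocks_[i];
            if (!b.live) continue;
            std::memmove(&arena_[dst], &arena_[b.off], b.len);
            b.off = dst;                      // handle h stays valid: only the
            dst += b.len;                     // table entry changes
        }
        top_ = dst;
    }
    int free_space() const { return static_cast<int>(arena_.size()) - top_; }

private:
    struct Block { int off, len; bool live; };
    std::vector<char> arena_;
    std::vector<Block> blocks_;
    int top_;
};
```

The "never cache the dereferenced pointer" rule is exactly the lock/
unlock discipline Phil describes below, and is where schemes like this
tend to get painful in practice.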
From: Mick W. <mi...@ne...> - 2002-12-18 01:10:16
> -----Original Message-----
> From: Josiah Manson
> Sent: Tuesday, December 17, 2002 6:11 PM
> Subject: Re: [GD-General] Compile times
>
> Maybe I am totally off base here, but it seems to me that if you
> really want to prevent fragmentation, for example in a server that
> has a really long up time, the best thing to do would be to use smart
> pointers of some sort.
>
> By not having your pointers directly point to a memory location, this
> allows you to change what the pointer points to. It seems to me that
> having this additional abstraction would allow you to periodically
> defragment memory, and the programmer wouldn't have to worry about
> the details at all. It just works.

Sounds good in theory, and I'm sure it has been tried many times.

Smart pointers are not free. For this to help with fragmentation you
have to apply it to EVERYTHING, which starts to add up. There is the
additional memory required to store it (8-16+ bytes, depending on
implementation). Then there is the code bloat (in your executable size)
due to all the extra code generated for the double dereferencing, which
takes up memory. Then there is the extra CPU time taken up by this code,
and the extra memory accesses (not cheap on the PS2, thrashing the cache
and all that). For that, you need to do a cost-benefit analysis: is it
worth it? What do you gain?

And even though, in theory, you should simply be able to replace
"regular" pointers with smart pointers, in practice there will be
numerous problems related to this changeover. Also, not every block of
memory can be made movable. DMA packets on the PS2 can contain absolute
addresses. 3rd party libraries like Renderware might not be malleable
enough to use your smart system.

I'd like to hear from anyone who uses a fully smart pointer based memory
allocation scheme with movable blocks. I don't think you can do it and
still ship a game.

Mick
From: <phi...@pl...> - 2002-12-18 22:54:10
> I'd like to hear from anyone who uses a fully smart pointer based
> memory allocation scheme with movable blocks.

FWIW, pre-X, MacOS used 'handles' extensively. We used to have fully
floating objects, and we still had problems. Largely because most of the
time you'd have to lock an object's handle when you called a member
function, or passed off an internal reference (which we made a lot of
effort to avoid), and the locks themselves became a nightmare to manage,
causing the heap to lock up and fragment all over again.

Mind you, this was a database engine, and was thus being exposed to much
less predictable requests than a game generally would be (although
definitely comparable with an MMORPG server).

Oh, and no, it didn't ship...

Cheers,
Phil