gamedevlists-general Mailing List for gamedev (Page 63)
From: Ivan G. <dea...@ga...> - 2002-12-23 14:46:51

> The interface really sucks (you have to learn the shortcut keys to
> really work with it). It has a lot of features though.

One of our artists uses it. He learnt 3D modelling in Blender first, so the LW, 3DS and Maya interfaces 'suck' to him :) We don't complain; the tool is pretty powerful, and if he can use it, let him be. It's free now (GPL), so it adds no extra cost for us.

-Ivan
From: Chris H. <c.h...@ke...> - 2002-12-23 09:57:22

The interface really sucks (you have to learn the shortcut keys to really work with it). It has a lot of features though.

Chris

-----Original Message-----
From: gam...@li... [mailto:gam...@li...] On Behalf Of brian hook
Sent: maandag 23 december 2002 1:59
To: gam...@li...
Subject: [GD-General] Blender

Anyone use Blender as a modeling tool and have any comments on it, e.g. comparing it to LW, Max, Maya, etc.?
From: brian h. <bri...@py...> - 2002-12-23 00:59:19

Anyone use Blender as a modeling tool and have any comments on it, e.g. comparing it to LW, Max, Maya, etc.?
From: Thatcher U. <tu...@tu...> - 2002-12-21 23:26:53

On Dec 19, 2002 at 12:54 -0800, Mick West wrote:
> In Tony Hawk's Pro Skater 4 on the PS2, I just did a few tests, and on
> the main heap, there are 474 different sizes of memory allocation in
> 18,000 blocks (taking the most frequent 6 sizes accounts for 12,000 of
> these). The script heap (our next biggest heap) has 202 different sizes
> in 7800 blocks (5000 of them in the most frequent 6 block sizes). We
> have about seven distinct heaps using around 30,000 blocks, and 6
> "pools" for various fixed-size allocations, using around 100,000 blocks.

Cool, thanks for the numbers. It sounds like it fits the general pattern. The thing that strikes me immediately is: seven heaps?! Are these traditional malloc/free-type heaps, or is it more like malloc with an occasional wipe-it-all-clean? It seems like using a unified heap could reduce external fragmentation (i.e. if each heap is not full all the time, then one heap's spare space can't be used by another heap). But I'm assuming you're doing something more than just malloc/free with each of those heaps.

--
Thatcher Ulrich
http://tulrich.com
From: <cas...@ya...> - 2002-12-20 15:40:27

Eero Pajarre wrote:
> I would just like to add my comment: Don't let the small size of Lua
> fool you into thinking that it is "too simple". I am still starting
> with it, but much of Lua programming is metaprogramming. This seems to
> have both advantages and disadvantages: the language is very powerful
> for the user who knows how to use it, but the novice user might wish
> for a rigid framework built into the language, instead of being
> metaprogrammed on it.

Yeah, Lua's metaprogramming facilities are very powerful. You can use it as a simple imperative language, but you can also turn it into a functional or object-oriented language; you can implement inheritance, multiple inheritance, delegation, messaging, etc.

There are also many different ways of binding code to Lua. There are binding generators (tolua, SWIG), but another possibility is to take advantage of reflection. If you already have a reflective class system, it should be possible to bind it to Lua; see for example the bindings to COM and CORBA. I don't have much experience with other language bindings, but I've heard that Lua's C interface is very low level, and that it's easier to write bindings for other languages (JavaScript, Io).

The result of that is that Lua may be simple on the surface - you can use it as a simple language - but you can also use it in more complex ways. However, making the right decisions and getting it right the first time may not be that easy.

Ignacio Castaño
cas...@ya...
From: Eero P. <epa...@ko...> - 2002-12-20 13:45:19

Javier Arevalo wrote:
> Lua is simple and small. Python is complex and big.

Tom Forsyth wrote:
> It depends whether you want "just" a scripting language, in which case
> Lua seems to be very light and simple,

I would just like to add my comment: Don't let the small size of Lua fool you into thinking that it is "too simple". I am still starting with it, but much of Lua programming is metaprogramming. This seems to have both advantages and disadvantages: the language is very powerful for the user who knows how to use it, but the novice user might wish for a rigid framework built into the language, instead of being metaprogrammed on it.

This doesn't mean that Lua is not suitable for simple scripting. It has worked OK there for me, and it actually seems to support constructing safe sandboxed end-user programming environments too. At the moment I am learning on the middle ground.

Lua and Python both seem to have game-developer-friendly licenses, although Lua scores an extra point for having the shorter license of the two ;-)

Eero
From: Tom F. <to...@mu...> - 2002-12-20 11:17:37

Python is complex and big (though it still fits happily on a PS2, so it's not that big), but it's immensely powerful. There are a couple of tools like SWIG that make interfacing between C and Python really simple - Python can call C (and C++ class member functions) and vice versa. Which means it's a doddle to code stuff up in Python, and then if speed is an issue on that chunk, move it across to C.

It depends whether you want "just" a scripting language, in which case Lua seems to be very light and simple, or want to write most of the game code in a high-level language, and/or have very powerful scripts, in which case Python is great.

For more on Python/C integration and using it in real games, check out Humungous Entertainment's GDC slides. Or I could pass more specific questions on to our Python expert (I'm just a graphics hacker :-)

Tom Forsyth - Muckyfoot bloke and Microsoft MVP.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Javier Arevalo [mailto:ja...@py...]
> Sent: 20 December 2002 08:32
> To: gam...@li...
> Subject: Re: [GD-General] Scripting in Lua tutorial
>
> Parveen Kaler <pk...@sf...> wrote:
> > Lua looks like a very cool language. What I'd like to see is a
> > document helping me make an educated choice between Lua, Python,
> > Simkin and any other scripting language people would care to throw in.
>
> Lua is simple and small. Python is complex and big. Simkin I have
> never seen, but I recall it is a bit in between the two.
>
> All three have got the job done in several games before, so you won't
> make a bad choice with any of them. The decision to go with small &
> simple versus complex & big depends essentially on how much software
> engineering your designers or script writers are able to do - if they
> are not programmers, go with something as simple as possible.
>
> Javier Arevalo
> Pyro Studios
From: Javier A. <ja...@py...> - 2002-12-20 08:23:01

Parveen Kaler <pk...@sf...> wrote:
> Lua looks like a very cool language. What I'd like to see is a
> document helping me make an educated choice between Lua, Python,
> Simkin and any other scripting language people would care to throw in.

Lua is simple and small. Python is complex and big. Simkin I have never seen, but I recall it is a bit in between the two.

All three have got the job done in several games before, so you won't make a bad choice with any of them. The decision to go with small & simple versus complex & big depends essentially on how much software engineering your designers or script writers are able to do - if they are not programmers, go with something as simple as possible.

Javier Arevalo
Pyro Studios
From: Parveen K. <pk...@sf...> - 2002-12-19 21:32:22

Brian Hook wrote:
> Saw this on another list, haven't had much time to look it over but it
> looks interesting based on my brief glance:
>
> http://gamestudies.cdis.org/~amatheson/LUA-Part01/Part01-section01.html

It was a pretty good introductory tutorial. It goes over how to get started with Lua, how to load Lua scripts, and how to pass parameters between Lua and your C/C++ code.

Lua looks like a very cool language. What I'd like to see is a document helping me make an educated choice between Lua, Python, Simkin and any other scripting language people would care to throw in.

Parveen
From: Mick W. <mi...@ne...> - 2002-12-19 20:53:27

> -----Original Message-----
> From: Thatcher Ulrich
> Sent: Wednesday, December 18, 2002 8:37 PM
> To: gam...@li...
> Subject: Re: Fragmentation was RE: [GD-General] Compile times
>
> On Dec 18, 2002 at 12:38 -0600, brian hook wrote:
> > > The reason I ask is because I *know* fragmentation is a problem if
> > > malloc sucks, and sucky mallocs have been ubiquitous until
> > > recently, but the only empirical study I'm aware of suggests that
> > > if malloc doesn't suck, then fragmentation is not a problem.
> > >
> > > http://www.cs.utexas.edu/users/wilson/papers/fragsolved.pdf
> >
> > Okay, went back and browsed that paper, and I don't think it applies.
> > The datasets and object sizes they are looking at aren't
> > representative of what I consider to be a normal game. For example,
> > they make the observation that most programs average allocating
> > objects of about 6-7 different sizes. Any game that loads models,
> > textures or sound will almost definitely not fit into this area.
>
> Possibly; it would be interesting to see numbers on some actual games.

In Tony Hawk's Pro Skater 4 on the PS2, I just did a few tests. On the main heap, there are 474 different sizes of memory allocation in 18,000 blocks (taking the most frequent 6 sizes accounts for 12,000 of these). The script heap (our next biggest heap) has 202 different sizes in 7800 blocks (5000 of them in the most frequent 6 block sizes). We have about seven distinct heaps using around 30,000 blocks, and 6 "pools" for various fixed-size allocations, using around 100,000 blocks.

Mick.
From: Brian H. <bri...@py...> - 2002-12-19 19:54:56

Saw this on another list, haven't had much time to look it over but it looks interesting based on my brief glance:

http://gamestudies.cdis.org/~amatheson/LUA-Part01/Part01-section01.html
From: Brian H. <bri...@py...> - 2002-12-19 19:51:32

> Possibly; it would be interesting to see numbers on some actual games.

Anyone with their own memory manager should probably be able to print out bucket sizes for allocations. My stuff can do this, sort of, but it's puzzle games, so not really indicative of what we consider "real games". But I'd bet anything I'd have allocations all over the map, although with obvious clustering.

Brian
From: Thatcher U. <tu...@tu...> - 2002-12-19 04:41:39

On Dec 18, 2002 at 12:38 -0600, brian hook wrote:
> > The reason I ask is because I *know* fragmentation is a problem if
> > malloc sucks, and sucky mallocs have been ubiquitous until recently,
> > but the only empirical study I'm aware of suggests that if malloc
> > doesn't suck, then fragmentation is not a problem.
> >
> > http://www.cs.utexas.edu/users/wilson/papers/fragsolved.pdf
>
> Okay, went back and browsed that paper, and I don't think it applies.
> The datasets and object sizes they are looking at aren't representative
> of what I consider to be a normal game. For example, they make the
> observation that most programs average allocating objects of about 6-7
> different sizes. Any game that loads models, textures or sound will
> almost definitely not fit into this area.

Possibly; it would be interesting to see numbers on some actual games.

> And they don't take into account run times of weeks or months.

Here's a paper that addresses server apps, although they seem more fixated on multithreading performance than memory utilization. Anyway, their measurements of memory utilization for dlmalloc look extremely good. They do use the dreaded "random chunk allocation" methodology.

http://citeseer.nj.nec.com/221661.html

--
Thatcher Ulrich
http://tulrich.com
From: Brian H. <bri...@py...> - 2002-12-19 02:02:35

> Behalf Of Josiah Manson
>
> If you are going to need a runtime of weeks or months, then chances
> are that you are running a server. A server won't need to ever load
> models, textures or sound.

No, but it will (possibly) need to load:

- scripts
- collision data
- vis data

which will also be variably sized. The bigger problem is when you're loading things like an outdoor terrain engine, where you might bring up 16km x 16km heightmaps at 1M resolution or something similarly ridiculous but still entirely possible.

> I know that often games such as Quake will have a server that runs
> alongside a graphical client, but those servers are only going to be
> up for one game session, so they don't really count, for the same
> reason that the typical game run by a typical user doesn't count in
> terms of fragmentation.

Actually, those servers often have uptimes of weeks if not longer. The servers at id were typically run for weeks at a time. For most LAN sessions the uptime may be a few hours, but those that host internet servers often just keep 'em live 24/7.

> There are only so many hours that a person is going to play in a
> stretch. And, as you already mentioned, games can often "cheat" by
> purging level data.

This is a completely valid way of dodging the fragmentation bullet. I don't mean to imply that it's a BAD thing, only that it can't be done in all cases.

Brian
From: Josiah M. <jm...@ma...> - 2002-12-19 01:52:58

<snip>
> > http://www.cs.utexas.edu/users/wilson/papers/fragsolved.pdf
>
> Okay, went back and browsed that paper, and I don't think it applies.
> The datasets and object sizes they are looking at aren't representative
> of what I consider to be a normal game. For example, they make the
> observation that most programs average allocating objects of about 6-7
> different sizes. Any game that loads models, textures or sound will
> almost definitely not fit into this area.
>
> And they don't take into account run times of weeks or months.

If you are going to need a runtime of weeks or months, then chances are that you are running a server. A server won't ever need to load models, textures or sound. I know that often games such as Quake will have a server that runs alongside a graphical client, but those servers are only going to be up for one game session, so they don't really count, for the same reason that the typical game run by a typical user doesn't count in terms of fragmentation. There are only so many hours that a person is going to play in a stretch. And, as you already mentioned, games can often "cheat" by purging level data.
From: <cas...@ya...> - 2002-12-19 00:30:30

Hi,

I'd also like to mention d-pointers as an alternative to smart pointers. I learned about d-pointers with the Qt toolkit, and they are used extensively in the KDE desktop environment.

d-pointers are used to manage shared data. They decouple the data from the methods of the object: the data resides in the shared part, and the methods reside in a wrapper object that contains a pointer to the data, hence the d-pointer name. The d-pointer pattern is also known as 'pimpl' or 'cheshire cat'.

I like d-pointers because they completely abstract the fact that you are using a pointer, and make you think that you are using the object itself. You can pass the object by value, and use it as if it were a native type. Creating a d-pointer object is quite tedious, but you only do it once, and in most cases the effort pays off.

Maybe d-pointers aren't as versatile as smart pointers; in fact they are just a restricted and more friendly form of smart pointer. In some cases I think they make your life easier, especially for shared resource management.

Have a look at http://doc.trolltech.com/qq/qq02-data-sharing-with-class.html to find out more.

Ignacio Castaño
cas...@ya...
From: Brian H. <bri...@py...> - 2002-12-18 23:17:09

> FWIW, pre X, MacOS used 'handles' extensively. We used to have fully
> floating objects, and we still had problems. Largely because most of
> the time you'd have to lock an object's handle when you called a
> member function, or passed off an internal reference (which we made a
> lot of effort to avoid), and the locks themselves became a nightmare
> to manage, causing the heap to lock up and fragment all over again.

This is how Win16 worked as well (mmm... LocalAlloc(), LocalFree()). The problems you describe are basically going to exist anywhere you want to be able to compact data behind the scenes while still allowing pointers. I'm sure there are handle-based memory managers that use templates to allow for h->Lock()/h->Release() semantics as well, and these would suffer from the same problem.

I have a suspicion that it's pretty much impossible to allow both direct memory addressing AND compaction without incurring some gnarly headache of convoluted manual lock/unlock behaviour.

-Hook
From: <phi...@pl...> - 2002-12-18 22:54:10

> I'd like to hear from anyone who uses a fully smart pointer based
> memory allocation scheme with movable blocks.

FWIW, pre X, MacOS used 'handles' extensively. We used to have fully floating objects, and we still had problems. Largely because most of the time you'd have to lock an object's handle when you called a member function, or passed off an internal reference (which we made a lot of effort to avoid), and the locks themselves became a nightmare to manage, causing the heap to lock up and fragment all over again.

Mind you, this was a database engine, and was thus being exposed to much less predictable requests than a game generally would be (although definitely comparable with an MMORPG server). Oh, and no, it didn't ship...

Cheers,
Phil
From: brian h. <bri...@py...> - 2002-12-18 07:02:33

> For what it's worth, completely random data here. Simple test Win32
> app on Win2K SP3:
>
> first stack variable: 0x0012fb30
> WinMain: 0x00401262
> 192MB static variable: 0x00584234
> first malloc address: 0x11466840

Had some friends try this on x86 Linux and GCC with default link scripts and with a 256MB static array:

main   = 0x08048420 (128MB)
static = 0x08049634
heap   = 0x18049640
stack  = 0xbffffc47 (3GB)

-Hook
From: Thatcher U. <tu...@tu...> - 2002-12-18 06:51:41

On Dec 18, 2002 at 12:24 -0600, brian hook wrote:
> > but the only empirical study I'm aware of suggests that if malloc
> > doesn't suck, then fragmentation is not a problem.

I found another, more recent paper with empirical data. Pretty interesting:

http://www.cs.umass.edu/~emery/pubs/berger-oopsla2002.pdf

They pretty much throw cold water on custom allocators (and the WinXP allocator, for that matter...).

> > took all kinds of measures to avoid it. In Munch's Oddysee we pretty
> > much just let it rip (STL and all), with a general-purpose custom
> > allocator, and didn't run into significant problems.
[...]
> But did Munch's Oddysee have any "cheats" in it that made
> fragmentation less dangerous? For example, level changes where a lot
> of data was purged simultaneously?

Yes, definitely. Level changes pretty much purged everything, IIRC.

--
Thatcher Ulrich
http://tulrich.com
From: brian h. <bri...@py...> - 2002-12-18 06:39:05

> The reason I ask is because I *know* fragmentation is a problem if
> malloc sucks, and sucky mallocs have been ubiquitous until recently,
> but the only empirical study I'm aware of suggests that if malloc
> doesn't suck, then fragmentation is not a problem.
>
> http://www.cs.utexas.edu/users/wilson/papers/fragsolved.pdf

Okay, went back and browsed that paper, and I don't think it applies. The datasets and object sizes they are looking at aren't representative of what I consider to be a normal game. For example, they make the observation that most programs average allocating objects of about 6-7 different sizes. Any game that loads models, textures or sound will almost definitely not fit into this area.

And they don't take into account run times of weeks or months.

-Hook
From: brian h. <bri...@py...> - 2002-12-18 06:24:13

> but the only empirical study I'm aware of suggests that if malloc
> doesn't suck, then fragmentation is not a problem.

I'll read that paper, but let's just say I'm confident I can contrive a scenario where address space fragmentation will still cause problems. Sure, it's contrived - but if I can imagine it, I'm pretty sure it can happen as well =)

> took all kinds of measures to avoid it. In Munch's Oddysee we pretty
> much just let it rip (STL and all), with a general-purpose custom
> allocator, and didn't run into significant problems.

It's entirely possible for fragmentation to be a non-issue. Hell, languages like Objective-C and even C++ pretty much make their livings assuming you're not going to stress about it. "alloc() long and prosper."

But did Munch's Oddysee have any "cheats" in it that made fragmentation less dangerous? For example, level changes where a lot of data was purged simultaneously? If you free up a lot of stuff at once, this lets the heap manager coalesce huge chunks of memory. But if your run-time allocation patterns both preclude large numbers of references being freed simultaneously and also force weird big/small allocations, then I still believe it's a problem.

> Which scared the crap out of me at the time, but I'm wondering if
> being afraid of fragmentation is unfounded nowadays.

I think this depends ENTIRELY on the application. I don't think there's a blanket rule that can be assumed on this.

For what it's worth, completely random data here. Simple test Win32 app on Win2K SP3:

first stack variable:  0x0012fb30
WinMain:               0x00401262
192MB static variable: 0x00584234
first malloc address:  0x11466840

I take from the above that it partitions memory as stack/code/static data/heap. If you allocate yourself a very large stack, then it limits your available address space as well. (Any ideas why they don't put the stack at 2GB and grow down? I guess with the current scheme, exceeding your stack will automatically generate an exception, vs. having to watch for "top of heap".)

Not sure what any of that means, but thought it was interesting anyway in some form =)

-Hook
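Brian's layout probe is easy to reproduce. A minimal sketch, with invented names: take the address of a stack variable, a function, a static array, and a fresh malloc block, and print them. The exact values depend on OS, linker script and (on modern systems) ASLR, so nothing is assumed about the numbers themselves:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>

// A much smaller static region than Brian's 256MB, purely as a placeholder.
static unsigned char big_static[1 << 20];

static void probe_fn() {}  // any function will do as a "code" address

void print_layout() {
    int on_stack = 0;
    void* heap = std::malloc(64);
    // Function pointers can't be cast straight to void*, so go via uintptr_t.
    std::uintptr_t code = reinterpret_cast<std::uintptr_t>(&probe_fn);
    std::printf("stack  : %p\n", static_cast<void*>(&on_stack));
    std::printf("code   : 0x%zx\n", static_cast<std::size_t>(code));
    std::printf("static : %p\n", static_cast<void*>(big_static));
    std::printf("heap   : %p\n", heap);
    std::free(heap);
}
```

On the systems quoted in the thread this prints the code/static/heap/stack partitioning Brian describes; under ASLR the regions still exist, they just move between runs.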
From: Thatcher U. <tu...@tu...> - 2002-12-18 05:44:33

On Dec 17, 2002 at 10:04 -0600, brian hook wrote:
> > Has anyone been bitten when using a decent malloc?
>
> To clarify, I don't think fragmentation is a problem UNLESS you have a
> particular sequence of allocations that kicks your ass and/or you have
> extremely long up times. I think MUDs have actually run into this type
> of problem before, but on much smaller scales (e.g. 16MB MUDs running
> on older boxes with maybe 32MB installed, no VM, etc. - similar to
> what consoles deal with today).
>
> The average PC game simply won't have to deal with address space
> fragmentation - many force a reset between levels, others just don't
> run long enough to make it a problem. Unfortunately some stuff I've
> worked on hasn't been "average" in terms of scale, so I'm extremely
> attuned to this.
>
> And if you blow off fragmentation on something like a Palm or GBA, it
> will probably bite you in the ass one of these days. Unless you're
> using something that can compact behind the scenes, it's a problem, no
> matter how good the underlying malloc implementation may be. The
> underlying implementation simply determines WHEN it affects you.

The reason I ask is because I *know* fragmentation is a problem if malloc sucks, and sucky mallocs have been ubiquitous until recently, but the only empirical study I'm aware of suggests that if malloc doesn't suck, then fragmentation is not a problem.

http://www.cs.utexas.edu/users/wilson/papers/fragsolved.pdf

So I'm curious about any smoking guns of the form: "we used dlmalloc (or something like it) in project X on platform Y, and observed that our peak memory use was 3x the size of our peak live data". Or any updated studies.

My personal experience with this is limited; in all games I worked on prior to starting at Oddworld, we were so fearful of malloc that we took all kinds of measures to avoid it. In Munch's Oddysee we pretty much just let it rip (STL and all), with a general-purpose custom allocator, and didn't run into significant problems. Which scared the crap out of me at the time, but I'm wondering if being afraid of fragmentation is unfounded nowadays.

The other bit of personal experience is that my chunklod demo mallocs wantonly, and I only ever observed problems when it linked with a pre-Win2K version of msvcrt.dll that found its way onto my hard disk. dlmalloc and the correct version of msvcrt were both fine.

--
Thatcher Ulrich
http://tulrich.com
From: brian h. <bri...@py...> - 2002-12-18 04:04:54

> Has anyone been bitten when using a decent malloc?

To clarify, I don't think fragmentation is a problem UNLESS you have a particular sequence of allocations that kicks your ass and/or you have extremely long up times. I think MUDs have actually run into this type of problem before, but on much smaller scales (e.g. 16MB MUDs running on older boxes with maybe 32MB installed, no VM, etc. - similar to what consoles deal with today).

The average PC game simply won't have to deal with address space fragmentation - many force a reset between levels, others just don't run long enough to make it a problem. Unfortunately some stuff I've worked on hasn't been "average" in terms of scale, so I'm extremely attuned to this.

And if you blow off fragmentation on something like a Palm or GBA, it will probably bite you in the ass one of these days. Unless you're using something that can compact behind the scenes, it's a problem, no matter how good the underlying malloc implementation may be. The underlying implementation simply determines WHEN it affects you.

-Hook
From: Thatcher U. <tu...@tu...> - 2002-12-18 02:21:17

On Dec 17, 2002 at 12:43 -0800, Brian Hook wrote:
> Fragmentation is prevalent enough a problem that Objective-C supports
> "zones" inherently. And any long-time programmers will have been
> bitten by this enough that they probably firmly understand the concept
> of managing multiple heaps/memory arenas to avoid fragmentation.

Has anyone been bitten when using a decent malloc? I've seen bad fragmentation with some older versions of msvcrt.dll, but MS's malloc in Win2K ran neck and neck with dlmalloc in the same program, and didn't seem to show any appreciable fragmentation.

--
Thatcher Ulrich
http://tulrich.com