Thread: RE: [Algorithms] Handling memory fragmentation offline ?
From: Yannick L. <yan...@ub...> - 2006-01-25 23:09:16
|
1) Yeah it could be an NPC, or an ambient 'birds' sound for example that should not stop when crossing a sector boundary.

2) Only one sector at once would be 'active', but each sector basically contains all the resources that are potentially needed by its direct neighbor sectors. This works only if you assume that the time it takes to reach a sector from its neighbor is greater than the time it takes to load it, which is my case.

Yannick

-----Original Message-----
From: gdalgorithms-list-admin@lists.sourceforge.net [mailto:gdalgorithms-list-admin@lists.sourceforge.net] On Behalf Of Megan Fox
Sent: 25 janvier 2006 16:47
To: gdalgorithms-list@lists.sourceforge.net
Subject: Re: [Algorithms] Handling memory fragmentation offline ?

To clarify:

1.) when you say a "shared" resource between two sectors, do you mean something like a rock sitting on the boundary, or do you mean cases where a given resource can walk across the boundary (like an NPC)?

2.) How many sectors can be loaded at once?

On 1/25/06, Yannick Letourneau <yannick.letourneau@ubisoft.com> wrote:
> I'd like to explore a new approach towards memory allocation and
> fragmentation for a console title.
> [...]

--
-Megan Fox
http://shalinor.circustent.us/megan/resume/

Lead Developer, Elium Project
http://www.elium.tk |
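Yannick's scheme (one sector 'active' at a time, with the resources of its direct neighbors preloaded) can be sketched as a resident-set diff on each boundary crossing. This is a minimal sketch with hypothetical names (`World`, `residentSet`, `crossBoundary`), not anything from the actual engine:

```cpp
#include <cassert>
#include <set>
#include <vector>

// Sector and resource IDs are just ints here; the World holds the
// offline data the thread assumes: per-sector adjacency and the known
// set of resources each sector needs.
using SectorId = int;
using ResourceId = int;

struct World {
    std::vector<std::vector<SectorId>> neighbors;      // adjacency per sector
    std::vector<std::set<ResourceId>> sectorResources; // resources per sector
};

// Resources that must be resident while `active` is the current sector:
// its own set plus everything its direct neighbors need, so any
// neighbor can be entered before streaming completes.
std::set<ResourceId> residentSet(const World& w, SectorId active) {
    std::set<ResourceId> out = w.sectorResources[active];
    for (SectorId n : w.neighbors[active])
        out.insert(w.sectorResources[n].begin(), w.sectorResources[n].end());
    return out;
}

// On a boundary crossing, diff the old resident set against the new one
// to produce the async unload and load lists.
void crossBoundary(const World& w, SectorId from, SectorId to,
                   std::vector<ResourceId>& unload,
                   std::vector<ResourceId>& load) {
    std::set<ResourceId> before = residentSet(w, from);
    std::set<ResourceId> after = residentSet(w, to);
    for (ResourceId r : before)
        if (!after.count(r)) unload.push_back(r);
    for (ResourceId r : after)
        if (!before.count(r)) load.push_back(r);
}
```

Resources shared by both the old and new resident sets never appear in either list, which is exactly why their addresses must stay fixed across the crossing.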
From: Yannick L. <yan...@ub...> - 2006-01-25 23:19:38
|
Yeah I agree that stuff like textures 'could' be moved with clever tricks like you said.

But for stuff like in-memory sounds, you can't move them while they're playing unless you do some funky memory streaming, which could be a pain to implement on some console hardware. Considering the very limited amount of dedicated sound memory on some consoles (PS2), dynamic defragmentation seems risky to me. But I may be wrong, I have never tried this approach. What did you do for sounds?

Yannick Létourneau

-----Original Message-----
From: gda...@li... [mailto:gda...@li...] On Behalf Of Tom Forsyth
Sent: 25 janvier 2006 17:04
To: gda...@li...
Subject: RE: [Algorithms] Handling memory fragmentation offline ?

> . If a resource is shared between two connected sectors,
> its memory address cannot change when crossing the sector
> boundary (no memory packing between sectors)

That's going to hurt you badly. Most read-only resources (textures, meshes, etc.) can and should be movable. This will allow you to do gradual defragmentation. To move them, you typically need to allocate a new place, copy them there, wait a few frames while the graphics processor and so on get used to using the new copy of the resource, then free the old one.

Doing this helped the fragmentation of the streaming system I worked on (Xbox & PS2) a lot. It's not nice to code, but it is basically necessary; otherwise you find something absurd like a quarter to a half of your memory becomes unusable. We "defragmented" one resource per frame, which is easily fast enough to reclaim a decent amount of space, but still has no noticeable effect on framerate.

The added advantage of a fully-dynamic system is that you can do tricks like only loading the mipmap or LOD levels that the user can actually see. This saves a ton of memory. It also has the incredibly cool side effect that, aside from disc space, you don't care if your artists make really big textures (within reason!). I've written about this before on this list, I think.

I don't think your scheme is workable - you're trying to solve the knapsack problem for a sliding window of data. That is likely to drive you completely insane, especially if a resource is used in many adjoining areas (e.g. leaf/bark/grass textures in a forested area - they'll be reused all over the place).

I know others have tried it, though. Not sure how well it worked. I remain highly skeptical.

TomF.

-----Original Message-----
From: gda...@li... [mailto:gda...@li...] On Behalf Of Yannick Letourneau
Sent: 25 January 2006 13:03
To: gda...@li...
Subject: [Algorithms] Handling memory fragmentation offline ?

I'd like to explore a new approach towards memory allocation and fragmentation for a console title.

My requirements are the following:

. The level is too big to fit in memory at once
. There are no "loading screens", i.e. I want to asynchronously move game resources in and out of main memory as the game runs
. The level is split into sectors
. I have complete sector connectivity information
. Each sector contains a known set of game resources (sounds, textures, etc.)
. Asynchronous resource streaming from DVD triggers when the main character crosses sector boundaries
. If a resource is shared between two connected sectors, its memory address cannot change when crossing the sector boundary (no memory packing between sectors)
. I want to keep memory fragmentation under control

Considering that I know offline what resources are needed in each sector, and knowing all the connections between the sectors, it seems to me that it might be possible to address the memory fragmentation issue offline. It might be possible to precompute the location in memory of all game resources for each sector.

Then, at runtime, loading a new sector would be as simple as releasing unused resources and loading new resources into their pre-determined memory area for that sector. No memory fragmentation issues!

However, I am still unsure how to tackle this problem. The hard part is coming up with the algorithm that will precompute the offline memory arrangement. I'm pretty sure that finding the optimal solution (the one that will use the least memory) is an NP-complete problem. But still there might be some algorithms that can generate "acceptable" solutions.

I was wondering if anyone had played around with similar ideas and I'm looking for inspiration, suggestions or avenues to explore for the algorithm. If you think it's a crazy idea then I'd like to hear it too ;-)

Thanks,

Yannick

-------------------------------------------------------
This SF.net email is sponsored by: Splunk Inc. Do you grep through log files for problems? Stop! Download the new AJAX search engine that makes searching your log files as easy as surfing the web. DOWNLOAD SPLUNK!
http://sel.as-us.falkag.net/sel?cmd=lnk&kid=103432&bid=230486&dat=121642
_______________________________________________
GDAlgorithms-list mailing list
GDA...@li...
https://lists.sourceforge.net/lists/listinfo/gdalgorithms-list
Archives: http://sourceforge.net/mailarchive/forum.php?forum_id=6188 |
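The move-then-wait defragmentation Tom describes (allocate a new place, copy, wait a few frames while the GPU drains in-flight references, then free the old one) can be sketched as below. This is a hypothetical reconstruction, not his actual code; `Defragmenter` and the fixed frame latency are assumptions:

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>
#include <vector>

// A movable resource: just an owning pointer plus a size here.
struct Resource {
    void* data = nullptr;
    size_t size = 0;
};

struct PendingFree {
    void* block;
    int framesLeft; // frames until the GPU is guaranteed done with it
};

class Defragmenter {
public:
    // `latency` is how many frames the old copy must stay valid
    // (e.g. the depth of the frame pipeline).
    explicit Defragmenter(int latency) : latency_(latency) {}

    // Move one resource (the thread suggests one per frame): allocate a
    // new block, copy the payload, and defer freeing the old block.
    void moveResource(Resource& r) {
        void* fresh = std::malloc(r.size);
        std::memcpy(fresh, r.data, r.size);
        pending_.push_back({r.data, latency_}); // old copy lives on briefly
        r.data = fresh;                         // new references use this
    }

    // Call once per frame: release old copies whose grace period expired.
    void tick() {
        std::vector<PendingFree> still;
        for (PendingFree& p : pending_) {
            if (--p.framesLeft <= 0) std::free(p.block);
            else still.push_back(p);
        }
        pending_.swap(still);
    }

    size_t pendingCount() const { return pending_.size(); }

private:
    int latency_;
    std::vector<PendingFree> pending_;
};
```

Capping this at one resource per frame is what keeps the memcpy cost invisible in the frame time while still reclaiming space steadily.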
From: Richard M. <mi...@tr...> - 2006-01-25 23:23:24
|
I would imagine a good solution for looping sounds would be to simply fade the old one out while fading the new one in. (no-one will notice...) :-)

()()        Richard Mitton
( '.')
(")_(")     code jester .:. treyarch .:. mi...@tr...

Yannick Letourneau wrote:
> Yeah I agree that stuff like textures 'could' be moved with clever tricks like you said.
>
> But for stuff like in-memory sounds, you can't move them while they're playing unless you do some funky memory streaming which could be a pain to implement on some console hardware. Considering the very limited amount of dedicated sound memory on some consoles (PS2), dynamic defragmentation seems risky to me. But I may be wrong, I have never tried this approach. What did you do for sounds?
> [...] |
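Richard's fade-swap trick amounts to a per-frame gain ramp across two voices. A toy sketch, assuming a mixer where each playing voice has an independent gain (hypothetical `Voice`/`crossfadeStep` names):

```cpp
#include <algorithm>
#include <cassert>

// Relocating a looping sound by crossfading: start a copy of the sample
// at the new address with gain 0, ramp it up while ramping the original
// down, then stop the original so its memory can be freed.
struct Voice {
    float gain = 1.0f;
    bool playing = true;
};

// Advance the crossfade by `step` per call; returns true when complete
// (old voice silent and stopped, new voice audible).
bool crossfadeStep(Voice& oldV, Voice& newV, float step) {
    oldV.gain = std::max(0.0f, oldV.gain - step);
    newV.gain = std::min(1.0f, newV.gain + step);
    if (oldV.gain == 0.0f) {
        oldV.playing = false; // safe to free the old sample memory now
        return true;
    }
    return false;
}
```

With a step of 0.25 per frame the swap completes in four frames, which is short enough that a brief double-play of an ambient loop goes unnoticed.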
From: Tom F. <tom...@ee...> - 2006-01-25 23:56:33
|
> But for stuff like in-memory sounds, you can't move them
> while they're playing unless you do some funky memory

Right - so you have to make a copy, play the new one, wait until the old one is no longer playing, and then delete the old one.

If the sample is looping, isn't going to stop any time soon, and you don't have enough control over the audio hardware to swap from one version to the other seamlessly, then yeah - you're in trouble. But that's a minority of sounds, right? Just playing endless looping sounds is generally a pretty tedious thing to do :-)

> Considering the very limited amount of dedicated
> sound memory on some consoles (PS2), dynamic defragmentation
> seems risky to me.

My view is the opposite - the more restricted your memory budget, the more you can't afford to have fragmentation destroy what little memory you have!

> What did you do for sounds ?

For Xbox, exactly the same thing. The joys of unified memory!

For PS2, I don't know. If Willem de Boer happens to see this, he might be able to comment, as he did that code (poor chap - he's in a better place now :-)

TomF.

-----Original Message-----
From: gda...@li... [mailto:gda...@li...] On Behalf Of Yannick Letourneau
Sent: 25 January 2006 15:19
To: gda...@li...
Subject: RE: [Algorithms] Handling memory fragmentation offline ?

> Yeah I agree that stuff like textures 'could' be moved with clever tricks like you said.
> [...] |
From: Robert D. <bli...@go...> - 2006-01-26 20:30:21
|
> > Considering the very limited amount of dedicated
> > sound memory on some consoles (PS2), dynamic defragmentation
> > seems risky to me.
>
> My view is the opposite - the more restricted your memory budget, the more
> you can't afford to have fragmentation destroy what little memory you
> have!

Speaking as a long-term PS2 programmer (now PC programmer, woohoo!) I can safely say that dynamic allocation is a nightmare problem, let alone dynamic defragmentation.

I have worked on many games with no handling of fragmentation, because they never got fragmented. And the reason they never got fragmented was that they made very little use of dynamically allocated memory. E.g. you want a level which has 100 particle effects in it, so you allocate space for 100 at the start of the level. If the game tries to create effect number 101, then it throws away one, or refuses.

Even in streaming games, I have managed to do one which didn't handle fragmentation because it simply never happened. Of course, because of the nature of the levels (effectively long track ribbons) you only really had one dimension to worry about.

On a totally different game, which allowed multiple paths to reach the same destination, the results were far from predictable. Occasionally the game would only discover that it was too fragmented to load the next room after it had loaded several other chunks, and would at that point give up, throw out all the resources that were in chunks between spaces until it had one large space left, and then reload until everything fitted. Fortunately there were no places where this also failed, but it still left the game with occasional disastrous loading delays.

So in conclusion, any method which gets you to a place where you can say with 100% certainty that you will never ever need to defragment memory is a really good idea in my book.

But dynamic defragmentation systems are not risky in themselves - they are a damn good solution if you can't guarantee that you will never get into a fragmentation nightmare.

Rob |
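The "allocate 100 up front, refuse or recycle the 101st" approach above is a fixed-capacity pool. A minimal sketch (hypothetical `EffectPool` name): all storage is reserved once at level start, so no allocation, and therefore no fragmentation, can happen mid-level:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Effect {
    int id = -1;
    bool alive = false;
};

// Fixed-capacity pool: create() either finds a free slot, refuses, or
// steals the oldest live slot, depending on policy.
class EffectPool {
public:
    EffectPool(size_t capacity, bool stealOldest)
        : slots_(capacity), stealOldest_(stealOldest) {}

    // Returns the slot index, or -1 if full and refusing.
    int create(int id) {
        for (size_t i = 0; i < slots_.size(); ++i)
            if (!slots_[i].alive) return activate(i, id);
        if (!stealOldest_) return -1;   // refuse effect number 101
        size_t oldest = order_.front(); // or throw the oldest one away
        order_.erase(order_.begin());
        return activate(oldest, id);
    }

    void destroy(int slot) {
        slots_[slot].alive = false;
        for (size_t i = 0; i < order_.size(); ++i)
            if ((int)order_[i] == slot) { order_.erase(order_.begin() + i); break; }
    }

    const Effect& at(int slot) const { return slots_[slot]; }

private:
    int activate(size_t i, int id) {
        slots_[i].id = id;
        slots_[i].alive = true;
        order_.push_back(i);
        return (int)i;
    }
    std::vector<Effect> slots_;
    std::vector<size_t> order_; // activation order, oldest first
    bool stealOldest_;
};
```

The steal policy is usually the right one for particle effects, where dropping the oldest effect is less visible than refusing a new muzzle flash.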
From: Michiel v. d. L. <mi...@gu...> - 2006-01-26 20:58:51
|
>>> Considering the very limited amount of dedicated
>>> sound memory on some consoles (PS2), dynamic defragmentation
>>> seems risky to me.
>>
>> My view is the opposite - the more restricted your memory budget, the more
>> you can't afford to have fragmentation destroy what little memory you
>> have!
>
> Speaking as a long term PS2 programmer (now PC programmer, woohoo!) I can
> safely say that dynamic allocation is a nightmare problem, let alone
> dynamic defragmentation.

As a long-term PS2 programmer (before that PC programmer :-) I thought I'd chip in my two cents.

We shipped a game which had typical PC memory usage patterns, and streaming was added later on, so the nightmare problem of randomly sized memory blocks with varying lifetimes caused the heap to fragment within minutes of play.

We allocated a streaming heap of about 10 megs on the PS2, and every resource was split into "full chunks" of 32K, or "sub-chunks" of 1K. Large resources were split into 32K chunks (large textures were split into 32K chunks and the DMA controller chained them back together when uploading, for example). We kept lists of full chunks and chunks split into 1K parts. We tried perfect-match allocations in chunks to prevent fragmentation.

The defragmentation algorithm was a simple two-pass algorithm. The first pass tried to find a chunk with internal fragmentation (i.e. free space in the middle), and deflated the chunk by moving data to the front. If no more chunks with fragmentation were found, it would look for any non-full chunk and deallocate and reallocate it, allowing the memory manager to spread small blocks out over the heap and fill up chunks. We processed one chunk per frame, which solved all fragmentation.

Next time around I'd use a different allocator which handles arbitrary-sized chunks; the 32K limit was a nuisance and made a lot of code difficult to write, read, understand and debug.

cheers,
Michiel |
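The first defragmentation pass Michiel describes (deflating a chunk by sliding its live sub-allocations to the front) can be sketched as below. The 32K/1K sizes come from the post; the data structures and names are a hypothetical reconstruction, and a real version would also memcpy the payloads and patch the owners' pointers:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

constexpr uint32_t kChunkSize = 32 * 1024; // "full chunk"
constexpr uint32_t kSubChunk = 1024;       // "sub-chunk" granularity

struct SubAlloc {
    uint32_t offset; // within the chunk, a multiple of kSubChunk
    uint32_t size;   // a multiple of kSubChunk
};

// Deflate one chunk: slide every live block down so all free space ends
// up in one run at the end. `blocks` is kept sorted by offset. Returns
// true if the chunk had internal fragmentation and anything moved.
bool deflateChunk(std::vector<SubAlloc>& blocks) {
    uint32_t cursor = 0;
    bool moved = false;
    for (SubAlloc& b : blocks) {
        if (b.offset != cursor) {
            b.offset = cursor; // slide the block to the front
            moved = true;
        }
        cursor += b.size;
    }
    return moved;
}

// Largest contiguous free run in the chunk; after deflation this is
// simply all of the chunk's free space.
uint32_t largestFreeRun(const std::vector<SubAlloc>& blocks) {
    uint32_t largest = 0, cursor = 0;
    for (const SubAlloc& b : blocks) {
        largest = std::max(largest, b.offset - cursor);
        cursor = b.offset + b.size;
    }
    return std::max(largest, kChunkSize - cursor);
}
```

Processing one such chunk per frame bounds the per-frame copy cost the same way Tom's one-resource-per-frame rule does.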
From: Tom P. <ga...@fa...> - 2006-01-27 06:38:43
|
> e.g. you want a level which has 100 particle effects in it, so
> you allocate space for 100 at the start of the level. If the
> game tries to create effect number 101, then it throws away
> one, or refuses.

The One True Path, IMHO.

> On a totally different game, which allowed multiple paths to
> reach the same destination, the results were far from
> predictable. Occasionally the game would only discover that
> it was too fragmented to load the next room after it had
> loaded several other chunks, and would at that point give up,
> throw out all...

On the games that I've worked on that did this well (e.g. multiple paths through the levels, non-deterministic load order, etc.), we solved this by chunking our streaming arena to suit the given level. Interestingly, perhaps, the guy who wrote the tool to figure out how big the chunk sizes needed to be was a designer who learned to program (in Perl, no less) to exhaustively search the allowable sizes (it was a fairly small set, since the buffer was only so big and the chunks needed to be multiples of the minimum streaming size).

So it was basically a pool-based system, where each terrain chunk fit into a known-sized block (and could be split on the chunk boundaries if necessary), but we knew for sure that everything could fit because the tool checked every terrain sector's load set against the chunk size.

With a bit of "proper" engineering and clever heuristics, I'm sure this problem could be solved a lot more efficiently. For what we needed it to do, though, it was Fast Enough. I mean, it's a 1D fitting algorithm; not exactly rocket science.

-tom! |
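The exhaustive search described above is small enough to brute-force directly. A sketch under assumptions (hypothetical function names, sizes in KB): try every chunk size that is a multiple of the minimum streaming size, and accept it only if every sector's load set, with each resource rounded up to whole chunks, still fits in the streaming buffer.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Each sector's load set is just a list of resource sizes in KB.
using Sector = std::vector<uint32_t>;

// A chunk size is feasible if every sector still fits in the buffer
// after rounding each resource up to whole chunks (the rounding is the
// internal waste a pool-based system pays for).
bool fits(const std::vector<Sector>& sectors, uint32_t bufferKB, uint32_t chunkKB) {
    for (const Sector& s : sectors) {
        uint32_t total = 0;
        for (uint32_t size : s)
            total += ((size + chunkKB - 1) / chunkKB) * chunkKB; // round up
        if (total > bufferKB) return false;
    }
    return true;
}

// Candidates are multiples of the minimum streaming size, up to the
// buffer size; larger chunks mean fewer blocks to track, so prefer the
// largest feasible one. Returns 0 if nothing fits.
uint32_t largestFeasibleChunk(const std::vector<Sector>& sectors,
                              uint32_t bufferKB, uint32_t minStreamKB) {
    uint32_t best = 0;
    for (uint32_t c = minStreamKB; c <= bufferKB; c += minStreamKB)
        if (fits(sectors, bufferKB, c)) best = c;
    return best;
}
```

Note that feasibility is not monotonic in the chunk size: a size that divides the resource sizes evenly can beat both smaller and larger candidates, which is exactly why an exhaustive search over the small candidate set is the simple, safe answer.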
From: Tom F. <tom...@ee...> - 2006-01-27 06:49:23
|
As soon as significant numbers of assets are attached to things that can move between sectors, you're in trouble. Then there's just no way to get out of doing a proper flexible system. And there are advantages to a real system even in a completely linear game. Blade 2 was totally linear, but got really good texture and mesh densities because of the streaming system.

Again, you can design a game to prevent this happening. But let's assume for the sake of argument that you don't want to do such an "old skool" game :-)

TomF.

-----Original Message-----
From: gda...@li... [mailto:gda...@li...] On Behalf Of Tom Plunket
Sent: 26 January 2006 22:39
To: gda...@li...
Subject: RE: [Algorithms] Handling memory fragmentation offline ?

> The games that I've worked on that did this well (e.g. multiple
> paths through the levels, non-deterministic load order, etc.),
> we solved this by chunking our streaming arena to suit the
> given level.
> [...] |
From: Robert D. <bli...@go...> - 2006-01-26 19:41:07
|
PS2 sounds are really simple, because you can jump around within SPU
memory instantaneously with no audible artefacts - this is how the
looping works, and is of course used for streaming sounds too, but you
can use it for anything. So if you needed to move a sound, you would
copy it first to the target location and then change its terminating
loop address to point to the new copy.

Of course limiting the sound guys works better, and requires less
work, but once you get a system working which can handle a large
quantity of sounds (way bigger than the SPU RAM) then you pretty
quickly forget about limitations, other than limiting the amount you
can pull in over a given time (i.e. not much on PS2).

Incidentally we never used dynamic defragmentation; we just played
audio through smallish buffers in SPU ram, DMAing chunks over from the
IOP ram as necessary, on a separate thread. Working this way, all we
ever needed to do was allocate a buffer for a sound until it was
finished, and then make it available to the next sound to come along.

Rob

> -----Original Message-----
> From: gda...@li... [mailto:gdalgorithms-
> lis...@li...] On Behalf Of Yannick Letourneau
> Sent: 25 January 2006 23:19
> To: gda...@li...
> Subject: RE: [Algorithms] Handling memory fragmentation offline ?
>
> Yeah I agree that stuff like textures 'could' be moved with clever
> tricks like you said.
>
> But for stuff like in-memory sounds, you can't move them while
> they're playing unless you do some funky memory streaming, which
> could be a pain to implement on some console hardware. Considering
> the very limited amount of dedicated sound memory on some consoles
> (PS2), dynamic defragmentation seems risky to me. But I may be
> wrong, I have never tried this approach. What did you do for sounds?
>
> Yannick Létourneau
>
> -----Original Message-----
> From: gda...@li... [mailto:gdalgorithms-
> lis...@li...] On Behalf Of Tom Forsyth
> Sent: 25 janvier 2006 17:04
> To: gda...@li...
> Subject: RE: [Algorithms] Handling memory fragmentation offline ?
>
> > . If a resource is shared between two connected sectors,
> > its memory address cannot change when crossing the sector
> > boundary (no memory packing between sectors)
>
> That's going to hurt you badly. Most read-only resources (textures,
> meshes, etc) can and should be movable. This will allow you to do
> gradual defragmentation. To move them, you typically need to
> allocate a new place, copy them there, wait a few frames while the
> graphics processor and so on get used to using the new copy of the
> resource, then free the old one.
>
> Doing this helped the fragmentation of the streaming system I worked
> on (Xbox & PS2) a lot. It's not nice to code, but it is basically
> necessary; otherwise you find something absurd like a quarter to a
> half of your memory becomes unusable. We "defragmented" one resource
> per frame, which is easily fast enough to reclaim a decent amount of
> space, but still have no noticeable effect on framerate.
>
> The added advantage of a fully-dynamic system is that you can do
> tricks like only loading the mipmap or LOD levels that the user can
> actually see. This saves a ton of memory. It also has the incredibly
> cool side effect that, aside from disc space, you don't care if your
> artists make really big textures (within reason!). I've written
> about this before on this list, I think.
>
> I don't think your scheme is workable - you're trying to solve the
> knapsack problem for a sliding window of data. That is likely to
> drive you completely insane, especially if a resource is used in
> many adjoining areas (e.g. leaf/bark/grass textures in a forested
> area - they'll be reused all over the place).
>
> I know others have tried it, though. Not sure how well it worked. I
> remain highly skeptical.
>
> TomF.
>
> -----Original Message-----
> From: gda...@li...
> [mailto:gda...@li...] On Behalf Of
> Yannick Letourneau
> Sent: 25 January 2006 13:03
> To: gda...@li...
> Subject: [Algorithms] Handling memory fragmentation offline ?
>
> I'd like to explore a new approach towards memory allocation and
> fragmentation for a console title.
>
> My requirements are the following:
>
> . The level is too big to fit in memory at once
> . There are no "loading screens", i.e. I want to asynchronously move
>   game resources in and out of main memory as the game runs
> . The level is split into sectors
> . I have complete sector connectivity information
> . Each sector contains a known set of game resources (sounds,
>   textures, etc.)
> . Asynchronous resource streaming from DVD triggers when the main
>   character crosses sector boundaries
> . If a resource is shared between two connected sectors, its memory
>   address cannot change when crossing the sector boundary (no memory
>   packing between sectors)
> . I want to keep memory fragmentation under control
>
> Considering that I know offline what resources are needed in each
> sector, and knowing all the connections between the sectors, it
> seems to me that it might be possible to address the memory
> fragmentation issue offline. It might be possible to precompute the
> location in memory of all game resources for each sector.
>
> Then, at runtime, loading a new sector would be as simple as
> releasing unused resources and loading new resources into their
> pre-determined memory area for that sector. No memory fragmentation
> issues!
>
> However, I am still unsure how to tackle this problem. The hard part
> is coming up with the algorithm that will precompute the offline
> memory arrangement. I'm pretty sure that finding the optimal
> solution (the one that will use the least memory) is an NP-complete
> problem. But still, there might be some algorithms that can generate
> "acceptable" solutions.
>
> I was wondering if anyone had played around with similar ideas, and
> I'm looking for inspiration, suggestions or avenues to explore for
> the algorithm. If you think it's a crazy idea then I'd like to hear
> it too ;-)
>
> Thanks,
>
> Yannick
|
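For the offline layout Yannick proposes, one plausible (and entirely hypothetical) starting point is a greedy first-fit over resources: every resource gets one fixed address for the whole level, and two resources may overlap in memory only if no single sector ever needs both at once. This is only a sketch of the idea with made-up names, not a known-good answer to the underlying NP-hard placement problem:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// A resource, the sectors whose load sets contain it, and its final
// (precomputed) address.  All names here are illustrative.
struct Resource {
    std::uint32_t    size;
    std::vector<int> sectors;
    std::uint32_t    offset;
};

// Greedy first-fit: place largest resources first; a candidate offset
// is pushed past every already-placed resource that is co-resident in
// some sector and overlaps it.  Returns the peak address used.
std::uint32_t precompute_layout(std::vector<Resource>& res)
{
    auto conflicts = [](const Resource& a, const Resource& b) {
        for (int s : a.sectors)
            for (int t : b.sectors)
                if (s == t) return true;   // needed together in some sector
        return false;
    };

    std::vector<int> order(res.size());
    for (int i = 0; i < int(res.size()); ++i) order[i] = i;
    std::sort(order.begin(), order.end(),
              [&](int a, int b) { return res[a].size > res[b].size; });

    std::vector<bool> placed(res.size(), false);
    std::uint32_t peak = 0;
    for (int i : order) {
        std::uint32_t off = 0;
        for (bool moved = true; moved; ) {  // slide past conflicting placements
            moved = false;
            for (int j = 0; j < int(res.size()); ++j) {
                if (!placed[j] || !conflicts(res[i], res[j])) continue;
                if (off < res[j].offset + res[j].size &&
                    res[j].offset < off + res[i].size) {
                    off = res[j].offset + res[j].size;
                    moved = true;
                }
            }
        }
        res[i].offset = off;
        placed[i] = true;
        peak = std::max(peak, off + res[i].size);
    }
    return peak;
}
```

Note how resources used only in disjoint sectors end up sharing the same address range, which is exactly the memory reuse the offline scheme is after.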
From: Tom F. <tom...@ee...> - 2006-01-26 20:42:28
|
If your resources have a relatively known short lifetime, dynamic
defragmentation isn't that useful. If you regard loop points as really
just playing the sample a second time (and possibly the new sample is
loaded in a different place), then you're done - no more code needed!

Fragmentation is a problem when you have a mixture of short and long
lifetimes (and large and small sizes). What dynamic defragmentation
effectively does is to _force_ things to have a short lifetime.

In fact that's almost exactly how I implemented it - every frame I
picked a pseudo-random object and said "unload yourself" followed by a
"reload yourself" (with a hint to say that it already had the data and
didn't need to go to disk), and then let the standard dynamic streaming
system handle it - because it has to deal with the problems of
wait-a-frame-before-deleting and all the annoying book-keeping.

TomF.

> -----Original Message-----
> From: gda...@li...
> [mailto:gda...@li...] On
> Behalf Of Robert Dibley
> Sent: 26 January 2006 11:41
> To: gda...@li...
> Subject: RE: [Algorithms] Handling memory fragmentation offline ?
>
> [snip]
|
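The move-copy-wait-free scheme Tom describes earlier in the thread (relocate one resource per frame, and defer the free for a few frames so in-flight GPU work never sees the old copy die) might be skeletonized as below. The frame latency, the structure names, and the use of plain `new`/`delete` in place of a real streaming allocator are all assumptions:

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch of one-resource-per-frame relocation: each frame,
// one movable resource is copied to a freshly allocated block; the old
// block is queued and only freed a few frames later, once the GPU can
// no longer be referencing it.
struct Block { /* ... resource payload ... */ int frames_until_free = 0; };

struct Defragmenter {
    static const int kGpuLatencyFrames = 3;  // assumed in-flight frame count
    std::vector<Block*> pending_free;

    void relocate_one(Block*& live) {
        Block* copy = new Block(*live);      // allocate a new place and copy
        live->frames_until_free = kGpuLatencyFrames;
        pending_free.push_back(live);        // defer the free
        live = copy;                         // all future references use the copy
    }

    void tick() {                            // call once per frame
        for (std::size_t i = 0; i < pending_free.size(); ) {
            if (--pending_free[i]->frames_until_free <= 0) {
                delete pending_free[i];      // safe now: GPU has moved on
                pending_free[i] = pending_free.back();
                pending_free.pop_back();
            } else {
                ++i;
            }
        }
    }
};
```

In a real engine the "copy" lands wherever the allocator wants to compact toward, and picking the victim each frame (pseudo-randomly, as Tom did, or by worst-offender) is the policy knob.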
From: <phi...@pl...> - 2006-01-26 22:58:51
|
We didn't use dynamic defragmentation on GoW. We just accepted the
duplicated definition data in adjacent sectors (wads/levels/whatever),
and deleted instances when their definition was unloaded. We made the
designers aware of these limitations, and made them responsible for
the results. We restricted malloc style memory allocation to load
times as well, with objects being allocated out of a pool system.

Or rather, that was the intention. There was one system that tried to
defrag a shared memory heap, and another pair that allocated memory at
run-time, and they were the source of all but one of the fatal (but
fortunately, incredibly rare) bugs that slipped into the final product
(or at least the ones we subsequently found, and fixed, during the
localisation process, which effectively added another six months of
testing after the US master). We're being much stricter with our
memory allocation this time around. Lesson learned!

We had the advantage of being a very linear game, with no camera
control, so we could fake out a lot of art issues with imposters
(although there is one camera shot that always triggers a pause to
load). Long ambient sounds and music were streamed off of the disc
(sometimes over mpegs). We also did the double buffer thing with
levels, although we had a slot system for creatures, in an attempt to
minimise duplication. In the end it made little difference, as the
designers would rarely have two adjacent sectors with the same
creature in them.

Cheers,
Phil

PS The remaining one was an evil race condition that caused a dining
philosophers style hang in the sound system. Fortunately it was in a
sub-system that was only used in one location. Unfortunately that was
the final boss. Typically, that sub-system had been added at the last
minute, to mollify a designer. So the second lesson would be, to stand
firm, and not to hack stuff in at the last minute.
|
From: Tom F. <tom...@ee...> - 2006-01-26 23:39:13
|
> We restricted malloc style memory allocation to load times as well,
> with objects being allocated out of a pool system.

I'm not clear on the differences - isn't a pool just a generic term
for a place you do allocation from? Or is a "pool" something special
here? (I know what "arenas" and "heaps" and "fixed-size blocks" etc
are). Memory allocation terminology is rather arbitrary at times....

> We had the advantage of being a very linear game, with no camera
> control,

I have a theory that unless you can explicitly limit the design of the
game to be linear (and as coders, that sort of high-level design is
not really our job!), you're going to have to deal with fragmentation.
You can chop everything into fixed-sized chunks (or a few different
sizes of chunks), but that's just hiding the fragmentation inside the
fact that your chunks are quantised - it's "formalising" the
fragmentation. I assert (with only a vague hunch to back me up :-)
that a decent defragmentor would allow you to use at least 25% more
memory for the same fixed total size, and probably more.

Of course if you can limit your game to linear chunks, then life is
sweet :-)

TomF.

> -----Original Message-----
> From: gda...@li...
> [mailto:gda...@li...] On
> Behalf Of phi...@pl...
> Sent: 26 January 2006 14:59
> To: gda...@li...
> Subject: RE: [Algorithms] Handling memory fragmentation offline ?
>
> [snip]
|
From: <chr...@pl...> - 2006-01-27 00:34:24
|
Tom Forsyth wrote:
> > We restricted malloc style memory allocation to load times as
> > well, with objects being allocated out of a pool system.
>
> I'm not clear on the differences - isn't a pool just a generic term
> for a place you do allocation from? Or is a "pool" something special
> here? (I know what "arenas" and "heaps" and "fixed-size blocks" etc
> are). Memory allocation terminology is rather arbitrary at times....

A pool in our lingo (and that of most others, I believe?) is a chunk
of N (frequently, but not necessarily, consecutive) same-size blocks,
where allocation of said blocks is handled using an embedded
free-list.

Obviously such systems have no fragmentation issues, which is why we
use them just about everywhere.

Christer Ericson, Director of Tools and Technology
Sony Computer Entertainment, Santa Monica
|
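A minimal sketch of the pool Christer describes, with the free list embedded in the blocks themselves so the allocator needs no side table (class and method names are illustrative):

```cpp
#include <cassert>
#include <cstddef>

// Fixed-size-block pool with an embedded free list: each free block's
// first bytes store a pointer to the next free block.
class Pool {
    union Node { Node* next; };  // overlays the first bytes of a free block
    char* storage_;
    Node* free_head_;
public:
    Pool(std::size_t block_size, std::size_t count)
        : storage_(new char[block_size * count]), free_head_(nullptr)
    {
        assert(block_size >= sizeof(Node*));
        for (std::size_t i = count; i-- > 0; ) {   // thread the free list
            Node* n = reinterpret_cast<Node*>(storage_ + i * block_size);
            n->next = free_head_;
            free_head_ = n;
        }
    }
    ~Pool() { delete[] storage_; }

    void* alloc() {                 // O(1): pop the free-list head
        if (!free_head_) return nullptr;  // pool exhausted
        Node* n = free_head_;
        free_head_ = n->next;
        return n;
    }
    void free(void* p) {            // O(1): push back onto the free list
        Node* n = static_cast<Node*>(p);
        n->next = free_head_;
        free_head_ = n;
    }
};
```

Because every block is the same size, any free block satisfies any allocation, which is why no internal fragmentation can build up.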
From: Jonathan B. <jo...@nu...> - 2006-01-27 00:43:53
|
> A pool in our lingo (and that of most others, I believe?) is a chunk
> of N (frequently, but not necessarily, consecutive) same-size
> blocks, where allocation of said blocks is handled using an embedded
> free-list.

Where I come from, that's a "refrigerator". (Uhh, I didn't make up
that name).

A "pool" is an arena of memory where you allocate arbitrarily-sized
things, but the only method of deallocation is to eliminate everything
in the pool.

But "pool" is a pretty generic term when it comes to memory
allocation; it gets used for a few different kinds of things.
|
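The arena Jonathan describes (bump-pointer allocation of arbitrary sizes, with free-everything-at-once as the only deallocation) is even simpler; a hypothetical sketch, with 16-byte alignment assumed:

```cpp
#include <cassert>
#include <cstddef>

// Arena: allocate arbitrarily-sized things with a bump pointer; the
// only way to reclaim memory is to reset the whole arena.
class Arena {
    char*       base_;
    std::size_t capacity_;
    std::size_t used_;
public:
    explicit Arena(std::size_t bytes)
        : base_(new char[bytes]), capacity_(bytes), used_(0) {}
    ~Arena() { delete[] base_; }

    void* alloc(std::size_t n) {
        std::size_t aligned = (used_ + 15) & ~std::size_t(15);  // 16-byte align
        if (aligned + n > capacity_) return nullptr;            // out of space
        used_ = aligned + n;
        return base_ + aligned;
    }
    void free_all() { used_ = 0; }   // the only deallocation operation
};
```

This maps naturally onto "everything loaded for this level/sector dies together", which is why it shows up so often in streaming systems.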
From: Tom F. <tom...@ee...> - 2006-01-27 01:04:20
|
> Where I come from, that's a "refrigerator". (Uhh, I didn't make up
> that name).

Are you sure you didn't just dream it? :-)

> A "pool" is an arena of memory where you allocate arbitrarily-sized
> things, but the only method of deallocation is to eliminate
> everything in the pool.

That's what I've always called an "arena".

As for Christer's "pools", I've always just called them "fixed-size
allocators" or "block allocators". But Wikipedia agrees with him and
not me, so I guess I'm in the wrong.

> Obviously such systems have no fragmentation issues, which is why we
> use them just about everywhere.

Not true! They don't have internal fragmentation, but they can lock
down large chunks of memory that are not currently used, but can't be
used by the rest of the code, because it's difficult to return unused
chunks of memory to the central pool. So they can cause the same
effect as fragmentation - only 50% of your memory used, but you can't
allocate a buffer.

I love these because they're extremely fast, but you still have to be
careful about how you let them grow. They're as bad as their peak use,
so if you have bursty use (e.g. at level load, etc) then they're
probably a bad choice for that allocation type.

TomF.
|
From: <phi...@pl...> - 2006-01-27 01:26:55
|
> > Obviously such systems have no fragmentation issues, which is why
> > we use them just about everywhere.
>
> Not true! They don't have internal fragmentation, but they can lock
> down large chunks of memory that are not currently used, but can't
> be used by the rest of the code, because it's difficult to return
> unused chunks of memory to the central pool.

Mmm, cracking that one would be the keystone... :)

Cheers,
Phil
|
From: Jan W. <ur...@st...> - 2006-01-27 01:49:18
|
>> Not true! They don't have internal fragmentation, but they can lock
>> down large chunks of memory that are not currently used, but can't
>> be used by the rest of the code, because it's difficult to return
>> unused chunks of memory to the central pool.
> Mmm, cracking that one would be the keystone... :)

"Reconsidering Custom Memory Allocation" (Berger et al) has an
interesting idea here. First, they define "regions" to have operations
alloc(size) and free_all(); memory is allocated from an intrusive list
of 4KB chunks. free(p, size) could be supported by pushing free ranges
onto a heap (shared by all chunks). Each chunk remembers how many
bytes are occupied and can be released for further use when empty.
This data structure is then called a reap and solves the memory
retention issue of a "pool".

(BTW, in my code I have called the region thing a 'bucket allocator';
it is indeed true that everyone has a different name for things..)

Further interesting papers, since I'm researching a related topic for
my thesis:

"The Memory Fragmentation Problem: Solved?" (Johnstone and Wilson)
"Dynamic Storage Allocation: A Survey and Critical Review" (Wilson,
Johnstone, Neely and Boles)

Regards
Jan
|
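A much-simplified sketch of the reap idea Jan summarizes: region-style bump allocation out of fixed-size chunks, plus a per-chunk live-byte count so an individual free() can hand a whole chunk back once it empties. The free-range recycling heap of the real design is omitted, and all names are made up:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Chunk {
    static const std::size_t kSize = 4096;
    char        data[kSize];
    std::size_t bump = 0;    // next free offset within the chunk
    std::size_t live = 0;    // bytes still allocated from this chunk
};

class Reap {
    std::vector<Chunk*> chunks_;
public:
    ~Reap() { free_all(); }

    void* alloc(std::size_t n) {
        assert(n <= Chunk::kSize);
        if (chunks_.empty() || chunks_.back()->bump + n > Chunk::kSize)
            chunks_.push_back(new Chunk);    // grab a fresh chunk
        Chunk* c = chunks_.back();
        void* p = c->data + c->bump;
        c->bump += n;
        c->live += n;
        return p;
    }

    // Individual free: find the owning chunk; release it once empty.
    // (This addresses the memory-retention problem of a plain region.)
    void free(void* p, std::size_t n) {
        char* cp = static_cast<char*>(p);
        for (std::size_t i = 0; i < chunks_.size(); ++i) {
            Chunk* c = chunks_[i];
            if (cp >= c->data && cp < c->data + Chunk::kSize) {
                c->live -= n;
                if (c->live == 0) {          // chunk empty: give it back
                    delete c;
                    chunks_.erase(chunks_.begin() + i);
                }
                return;
            }
        }
    }

    void free_all() {                        // the classic region operation
        for (Chunk* c : chunks_) delete c;
        chunks_.clear();
    }

    std::size_t chunk_count() const { return chunks_.size(); }
};
```

The linear owning-chunk search is for clarity only; a real implementation would mask the pointer down to a chunk header instead.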
From: Tom P. <ga...@fa...> - 2006-01-27 01:51:03
|
> I love [pools] because they're extremely fast, but you still have to
> be careful about how you let them grow. They're as bad as their peak
> use, so if you have bursty use (e.g. at level load, etc) then
> they're probably a bad choice for that allocation type.

That's not necessarily the case; upon the Alexandrescu "small object
allocator" I built a couple of pool allocators (both in the sense that
Christer describes and by the name I gave it) that allocated a fixed
number of elements at a time (say, 128 or 256), but as long as
everything was freed from that "chunk" the whole thing could be
returned to the system. This was particularly useful when dealing with
an animation system built by one of those /other/ middleware providers
where every float of every bezier key was allocated individually...

Anyhow, you've got to be sure you're not "leaking" these things; if
one element doesn't get freed, the whole block persists.

I agree that they're trading fragmentation for simply marking memory
unusable, but they can prove to be a "commit-size" win in other ways,
due to the fact that the extra data that the heap manager needs simply
disappears. E.g. an allocation needs to be aligned and typically has
an additional 16 bytes of "header" info right before the pointer you
get back; if you are allocating a bunch of smallish things, the cost
of the allocation management can outstrip the actual requested
allocation size (speaking from experience, with that middleware
again). So, pooling these things can actually net you a huge space
/savings/, even if you don't run at 100% capacity. (Said allocator
shaved 40% off of our heap size once injected throughout that
middleware, even before any other memory optimization strategies were
considered.)

-tom!
|
From: <chr...@pl...> - 2006-01-30 19:36:03
|
Tom Forsyth wrote (in response to Jon Blow):
> > A "pool" is an arena of memory where you allocate arbitrarily-sized
> > things, but the only method of deallocation is to eliminate
> > everything in the pool.
>
> That's what I've always called an "arena".

I'm with Tom: that's what I know as an arena.

> As for Christer's "pools", I've always just called them "fixed-size
> allocators" or "block allocators". But Wikipedia agrees with him and not me,
> so I guess I'm in the wrong.

> > Obviously such systems have no fragmentation issues, which is why we
> > use them just about everywhere.
>
> Not true! They don't have internal fragmentation, but they can lock down
> large chunks of memory that are not currently used, but can't be used by the
> rest of the code, because it's difficult to return unused chunks of memory
> to the central pool. So they can cause the same effect as fragmentation -
> only 50% of your memory used, but you can't allocate a buffer.

Right, I meant internal fragmentation. A solution to the other problem you mention is to do all your memory allocations with pools, statically size the pools at startup, and write code that copes with those static limits. You may be using your memory quite inefficiently this way, but at least you can sleep at night knowing you won't run out of memory on level 1002 when the player traversed all levels without killing any enemies.

> I love these because they're extremely fast, but you still have to be
> careful about how you let them grow. They're as bad as their peak use, so if
> you have bursty use (e.g. at level load, etc) then they're probably a bad
> choice for that allocation type.

My recommendation is not to allow them to grow (because if you do, you're back to fragmenting memory). Just size them statically at startup. Hybridize your pool use with high-memory heap allocations for the noisy malloc/free stuff that you may have during loading, and most problems are solved.
Christer Ericson, Director of Tools and Technology Sony Computer Entertainment, Santa Monica |
From: Mat N. \(BUNGIE\) <Mat...@mi...> - 2006-01-30 19:42:19
|
I cannot recommend this book enough:

"Small Memory Software: Patterns for systems with limited memory"
ISBN 0-201-59607-5

A great collection of various approaches to constrained-memory platforms. My personal favorite is "Captain Oates": sacrifice non-critical components to satisfy more critical memory needs.

MSN

-----Original Message-----
From: gda...@li... [mailto:gda...@li...] On Behalf Of chr...@pl...
Sent: Monday, January 30, 2006 11:34 AM
To: gda...@li...
Subject: RE: [Algorithms] Handling memory fragmentation offline ?

[snip]
|
From: Richard M. <mi...@tr...> - 2006-01-30 19:42:33
|
chr...@pl... wrote:
> Tom Forsyth wrote (in response to Jon Blow):
> > > A "pool" is an arena of memory where you allocate arbitrarily-sized
> > > things, but the only method of deallocation is to eliminate
> > > everything in the pool.
> >
> > That's what I've always called an "arena".
>
> I'm with Tom: that's what I know as an arena.

I always called it a 'stack'.

Push to allocate memory, pop to collapse the stack back to a previous state. (First-in-last-out and all that...)

 ()()
( '.')   Richard Mitton
(")_(")  code jester .:. treyarch .:. mi...@tr...
|
From: <phi...@pl...> - 2006-01-30 19:57:14
|
> I always called it a 'stack'.
>
> Push to allocate memory, Pop to collapse the stack back to a previous state.
> (first-in-last-out and all that...)

Except without the ability to pop individual objects, so no stack frame overhead (either on the stack, or implicitly).

Cheers,
Phil
|
From: Tom N. <tom...@gm...> - 2006-01-31 11:13:35
|
On 30/01/06, chr...@pl... <chr...@pl...> wrote:
> Tom Forsyth wrote (in response to Jon Blow):
> > > A "pool" is an arena of memory where you allocate arbitrarily-sized
> > > things, but the only method of deallocation is to eliminate
> > > everything in the pool.
> >
> > That's what I've always called an "arena".
>
> I'm with Tom: that's what I know as an arena.

To me that's a corral. But then I'm an old cowboy coder. :)

TomN
|
From: Jon W. <hp...@mi...> - 2006-01-27 21:20:01
|
> A pool in our lingo (and that of most others, I believe?) is a chunk
> of N (frequently, but not necessarily, consecutive) same-size blocks,
> where allocation of said blocks is handled using an embedded free-list.

I've seen this usage of "pool" in two separate, large code bases, so I would agree that this is the most common usage of the word. It's a pool of fixed-sized blocks; typically allocated in chunks of N at a time to reduce header overhead and make for fewer, bigger blocks to help with allocation. In the extreme, pools are not growable, and are all allocated up-front.

Now, if "pool" is too common a word, then what is a less ambiguous name?

Cheers,
/ h+

--
-- The early bird gets the worm, but the second mouse gets the cheese.
|
From: Paul Du B. <pau...@gm...> - 2006-01-30 11:02:16
|
No disagreements here; I just want to throw in a bit of terminology.

On 1/27/06, Jon Watte <hp...@mi...> wrote:
> [...] pool of fixed-sized blocks; typically allocated in chunks of N at a time [...]
> Now, if "pool" is too common a word, then what is a less ambiguous name?

In the literature I've read, this is called "simple segregated storage". Somewhat cumbersome name, but useful to know when searching for papers.

Tom Plunket wrote:
> I agree that they're trading fragmentation for simply marking unusable

Also in said literature, the former is called external fragmentation, and the latter is internal fragmentation.

---

In my experience optimizing memory allocation for a GC'd language runtime (ugh), pools are nigh unbeatable at reducing block overhead, but they don't fragment any less (they may even fragment more). On the other hand, it's much easier to make that loss deterministic.

p
|