In reading this thread, it seems to me to have turned into two topics...

One is how to represent dependencies in asset build systems (which is extremely interesting to me)...I'll comment on that under a separate subject line.

The other is how to deal with serialized data (classes, objects, foobar, whatever).  I'll mention how I've handled it for my $0.02.

Previously, I've tried to differentiate between dev and shipping assets.  Considering that I spend 90% of my time in dev (at least before I joined SCEA), I optimize for that.  Shipping is the end stage, and it is also when I care the most about compression, load speeds off disc, etc.  In dev, I care most about productivity for the other people on my team (not only programmers, but also level designers and artists).

I tend to make sure that there is a nightly build for all content, but then there are local overrides by the production people.  For object data (such as "serialized class structures"), I have used a key/value pair for the member variables.  Yes, at load time I have to fully construct the objects (i.e., it isn't a binary load-and-go), but it gives a nice benefit: because the data is key/value pairs, the game can load the nightly build and then load the local changes on top of it to pick up any modifications an artist/designer might have made.  With default values in code, you can also release binaries to the team without breaking all of your data.

Is it inefficient for a shipping game?  Heck, yes.  Once I'm into shipping mode, I have the resource compilers generate binary blobs which can be blasted into the class data structures.  That means less backwards compatibility and robustness, but that isn't what I'm focusing on at that time.
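To make the key/value idea concrete, here is a minimal sketch (the class, key names, and load-order helpers are made up for illustration, not anyone's actual codebase): members start at code defaults, the nightly data is applied, then any local override file is applied on top, so a missing key never breaks a load.

#include <cstdlib>
#include <map>
#include <string>

typedef std::map<std::string, std::string> KeyValues;

struct EnemySettings
{
    float health;
    float speed;

    EnemySettings() : health(100.0f), speed(3.5f) {}   // defaults live in code

    // Apply whatever keys are present; anything missing keeps its current value.
    void Apply(const KeyValues& kv)
    {
        KeyValues::const_iterator it;
        if ((it = kv.find("health")) != kv.end()) health = (float)std::atof(it->second.c_str());
        if ((it = kv.find("speed"))  != kv.end()) speed  = (float)std::atof(it->second.c_str());
    }
};

// Load order: code defaults -> nightly build data -> local overrides.
//   EnemySettings e;
//   e.Apply(nightlyKeyValues);
//   e.Apply(localOverrideKeyValues);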

In reading all of the posts, it seems that people have different priorities, which lead to different ways of doing things.  If you are focused on shipping mode, then parsing XML or text strings is evil.  If you are focused on dev mode, then taking a performance or space hit is okay because it can increase productivity.

Mark Danks
Senior Manager, Dev Support
SCEA



On 04/04/2007 09:58 PM, Jason Hughes <jason_hughes@dirtybitsoftware.com> wrote (Re: [Algorithms] Data base choices):

Conor Stokes wrote:

> Even with asynchronous IO, the IO can also take a significantly higher amount of CPU time (on top of the seek time etc) than deserialisation code. Especially if you do reads into a single big buffer at a time that actually requires multiple reads from the media behind the scenes.
>  
The question is whether the initialization stage is necessary at all.  
Big-block loading with fixups avoids it entirely, but not without some
cost to the developer (development time, not runtime).  It's harder to
embed C++ objects with vtables in such things, but not impossible.  And
again, these are simply fixups that need to be made, nowhere near as
complex as a memory allocation.
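For the vtable case, the usual trick is in-place construction, sketched below under the assumption that each object in the block records a type id the loader can dispatch on (the class names are made up for illustration, not anyone's actual loader): a constructor that deliberately leaves member data alone, so placement new only rewrites the vptr.

#include <new>

struct Loadable
{
    virtual ~Loadable() {}
};

struct Enemy : public Loadable
{
    int   health;
    float speed;

    struct InPlace {};                 // tag type for the fixup path
    explicit Enemy(InPlace) {}         // touches nothing but the vptr
};

// Called once per polymorphic object after the big block is in memory.
void FixupVTable(void* obj, int typeId)
{
    switch (typeId)
    {
    case 1: new (obj) Enemy(Enemy::InPlace()); break;   // placement new rewrites the vptr
    // ... one case per polymorphic class stored in the block ...
    }
}

Strictly speaking this lives in implementation-defined territory, which is part of the developer-side cost mentioned above.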

> Using "deserialize as you go" with multiple sequential IO operations in flight can be faster than a single big read.

Extraordinary claims require extraordinary evidence...

> It can also solve problems involved with finding large pools of sequential memory to do your reading - Populating an object graph through deserialization lends itself naturally to quick region based allocation which means you can get low levels of fragmentation with relatively simple and fast allocators.
>  

...and if the same objects are already embedded inside a single block,
this is somehow inferior?  If you know the exact size of each future
load, there should be no trouble setting up your memory map to
accommodate it.  Using allocators, this is actually substantially harder
without a handle-based defragging system, because pools might need to
resize if they are shared across levels or chunks.  I've written it both
ways, many times.  :-)
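For reference, the "relatively simple and fast allocator" being alluded to is typically just a bump/region allocator along these lines (a minimal sketch, not anyone's actual implementation); the real question is whether you point it at a region carved out of a fixed memory map or let it pull from a general heap.

#include <assert.h>
#include <stddef.h>

// Minimal bump allocator over a preallocated region; "freeing" is resetting
// the whole region, which is what keeps fragmentation at zero.
struct Region
{
    unsigned char* base;
    size_t         size;
    size_t         used;

    void* Alloc(size_t bytes, size_t align)   // align must be a power of two
    {
        size_t p = (used + (align - 1)) & ~(align - 1);   // align the cursor
        assert(p + bytes <= size);                        // region sized up front
        used = p + bytes;
        return base + p;
    }

    void Reset() { used = 0; }                            // frees everything at once
};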

> And you don't have the complexity of pointer fixing code :-). Of course, all of this is only true in the cases where you are IO limited, or your IO operations have a reasonable impact on CPU time (in other words, your mileage may vary case by case).
>  

The complicated stuff is in the tools, and it's not that complicated.  
The runtime pointer fixup logic is literally:

for (int i = 0; i < numAddresses; i++)
{
   // fixup[i] is the byte offset of a pointer field within the loaded block
   // (baseAddr is a char*); the field holds an offset from the block start,
   // so adding the base address turns it back into a real pointer.
   *(uintptr_t *)(baseAddr + fixup[i]) += (uintptr_t)baseAddr;
}

Oops, I think I gave away some proprietary secrets!  ;-)
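For completeness, the tool side is roughly this shape (a hedged sketch assuming 32-bit target pointers; the names are made up, not the actual exporter): pointers inside the block are written as offsets from the block start, and the byte offset of each such field is appended to the fixup table that the loop above walks at load time.

#include <stdint.h>
#include <string.h>
#include <vector>

struct BlockWriter
{
    std::vector<unsigned char> data;     // the big block being built
    std::vector<uint32_t>      fixups;   // byte offsets of pointer fields in the block

    // Record that the pointer-sized field at 'fieldOffset' should end up
    // pointing at 'targetOffset' within the same block.  The field gets the
    // raw offset now; the runtime loop adds baseAddr to it after loading.
    void WritePointer(uint32_t fieldOffset, uint32_t targetOffset)
    {
        // assumes 32-bit target pointers; widen the store to match the target
        memcpy(&data[fieldOffset], &targetOffset, sizeof(targetOffset));
        fixups.push_back(fieldOffset);
    }
};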

JH
