There is another type of relationship.  Consider the references:
 
Mesh->Shader->Texture
 and
Mesh->Vertex Stream
 
The components you put into the vertex stream might depend on which shader you have applied to the mesh.  For example, maybe you don't need normals for a self-illuminated material.  In this case it's conceivable you'll have to rebuild your vertex stream because you changed the shader.  This sort of thing has been a real problem in our old asset system.
 
I see a couple of ways to deal with this:

A) Don't deal with it.  In the case of the vertex stream, just include every conceivable component all the time.
B) Same as A, but delete the extra components in a special optimization pass done during the nightly build.
C) When you write out the mesh in the first place (from Max or whatever), figure out which components you need and write only those into the vertex stream.
D) Along with the dependencies database, maintain a "clients" database, so that for each resource you can query the resources that use it, along with any options or configuration settings that affect it.
 
After a lot of agonizing, I've been implementing (D) in our new asset system.  It's working so far, but it can be messy.  One complication is that, to check the validity of your build cache, you have to iterate through the clients list to see whether any of the clients' settings have changed.
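To make the shape of (D) concrete, here's a minimal sketch, assuming a dictionary-backed store and a get_settings callback that returns a client's relevant options; every name in it (clients_of, settings_at_build, etc.) is hypothetical, not what our system actually calls things:

    # Sketch of option (D): alongside the dependency database, keep a
    # reverse "clients" map so a resource can ask who uses it and with
    # what settings.  All names are hypothetical.
    import hashlib, json

    class ClientsDatabase:
        def __init__(self):
            self.clients_of = {}          # resource -> set of client resources
            self.settings_at_build = {}   # resource -> hash of client settings at build time

        def register(self, resource, client):
            self.clients_of.setdefault(resource, set()).add(client)

        def _settings_hash(self, resource, get_settings):
            # Hash the settings of every client that affects this resource,
            # e.g. the shader options that decide which vertex components to emit.
            # get_settings(client) must return something JSON-serializable.
            blob = json.dumps(
                {c: get_settings(c) for c in sorted(self.clients_of.get(resource, ()))},
                sort_keys=True)
            return hashlib.sha1(blob.encode()).hexdigest()

        def record_build(self, resource, get_settings):
            self.settings_at_build[resource] = self._settings_hash(resource, get_settings)

        def cache_is_valid(self, resource, get_settings):
            # The messy part: validity depends on walking the clients list,
            # not just on the resource's own inputs.
            return self.settings_at_build.get(resource) == self._settings_hash(resource, get_settings)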
 
 
-D


From: gdalgorithms-list-bounces@lists.sourceforge.net [mailto:gdalgorithms-list-bounces@lists.sourceforge.net] On Behalf Of Jason Hughes
Sent: Wednesday, April 04, 2007 11:25 PM
To: Game Development Algorithms
Subject: Re: [Algorithms] Build dependencies: was Data base choices

Mark_Danks@PlayStation.Sony.Com wrote:
However, if you don't differentiate between dependencies and references, then they are going to be building more than needed.  For example, the following is reference:

    Mesh -> Shader -> Texture

If the texture changes, then there is no reason to build the mesh or the shader.  The mesh and shader simply point at the texture.

However, the following is a true dependency:

    Rig -> Skeleton -> Animation

If the rig changes, then you need to reprocess all of the animations.    I have seen lots of "dependency" systems which don't make this distinction and thus cause tremendous amounts of recooking whenever an asset changes.

Good thing to point out.  Both pieces of data are needed for different stages of the tools pipeline.  Both can be represented in a single dependency tree, but it all depends on what you consider a 'root' node in your dependency tree.

A dependency system can be as simple as A -> B, which reads as A needs to rebuild when B changes.  Showing references is really not important until the packaging stage.  I was in the midst of writing up my new asset build system just as this thread started up, and I ended up with a pretty simple model.  I'll describe it here, just for kicks (and to open myself up to criticism ;-):

So, I have an "art" directory that contains all source data that is directly manipulated by humans, broken up into some logical structure.  These files are revision controlled.  I also have a "build" directory, which is also broken down into a logical structure, but contains the results of some tool.  The main reason for this simplified structure is for trivial clean rebuilds with no cruft hanging about.  Also, the build directory can be rsync'd around or sent up to a central store if that proves useful, but I have no intention of checking its contents into revision control.  That's what backups are for, not revision control.

In the art\objectdef\ folder, I have simple text files that list 1) a tool, 2) source data file(s), 3) output file(s), and 4) parameters to the tool.  It's a trivial format to parse.
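Purely as an illustration of the shape (the exact syntax in my files doesn't matter, and these tool and file names are made up), an entry might look something like:

    tool:    MeshCompiler.exe
    source:  art\characters\hero\hero.mb
    output:  build\characters\hero\Mesh.bin
    params:  -tangents -lod 3

    tool:    ShaderCompiler.exe
    source:  art\shaders\skin.fx
    output:  build\characters\hero\Shader.bin
    params:  -profile sm3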

To rebuild an object, I run a tool that parses one of these text files, loads up the list of source files and target files, builds a simple DAG from them, determines the proper build order with a topological sort, and iterates across this list, skipping any steps that are unnecessary based on time/date stamps.  Multiple objects could be loaded at once and put in the same DAG, cutting down on file tests.  Really simple.  I could just as easily emit a makefile and call make, but this is so easy I'm not bothering (until maybe I get a massively multicore machine!).
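The rebuild loop is roughly the following sketch (simplified; the real tool does more error handling, and the step fields and command-line convention here are just placeholders):

    # Build a DAG from the parsed objectdef steps, topologically sort it,
    # then run only the steps whose outputs are stale relative to their inputs.
    import os, subprocess
    from graphlib import TopologicalSorter   # Python 3.9+

    def needs_rebuild(step):
        # Rebuild if any output is missing or older than the newest source.
        out_times = [os.path.getmtime(o) for o in step["outputs"] if os.path.exists(o)]
        if len(out_times) < len(step["outputs"]):
            return True
        src_times = [os.path.getmtime(s) for s in step["sources"]]
        return min(out_times) < max(src_times)

    def rebuild(steps):
        # steps: {name: {"tool", "sources", "outputs", "params", "deps"}}
        order = TopologicalSorter({name: step["deps"] for name, step in steps.items()})
        for name in order.static_order():        # dependencies come first
            step = steps[name]
            if needs_rebuild(step):
                subprocess.run([step["tool"], *step["params"],
                                *step["sources"], *step["outputs"]], check=True)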

The point is, the source data does *not* define the objectdef.txt file.  I have a plugin that I can use to generate one for me, but then, once I've decided to close Maya or Max, I never need to reopen it to tweak something.  The objectdef.txt file allows me to swap out shaders, textures, null out blendshapes, reorder them, add or remove animations, etc., without ever reopening the source art.  In the past, I've seen cases where just re-saving a file, for any number of reasons, breaks a data file.  Having to re-check in a large binary asset because of a tiny change to some attributes in the scene is just ridiculous; the load/save time to open a scene file is prohibitive; sometimes I've even seen the load/save operation cause corruption or incompatibility of assets that used to work.  So I'm avoiding that if at all possible.

This text file also acts as a repository for my dependency information.  It's parseable and human readable.  It also describes the packing list for a single object prototype.  So, rather than this:

Mesh -> Shader -> Texture

I have this:

Object -> Mesh.bin -> Mesh.mb
Object -> Shader.bin -> Shader.fx
Object -> Texture.bin -> Texture.tga
Object -> Anim1.bin -> Skeleton.bin -> Mesh.mb

A higher-level meta-container will depend on objects or other raw assets, to build a definitive packing list for levels or regions.  Pretty straightforward stuff.  I have different tools laid out for each step of the process, and each one is batchable and surprisingly simple.  By splitting them up into fine-grained operations, I can include them on the dependency chain implicitly and only rebuild the pertinent data.  The packing step is optional, and just links together stuff without caring too much about what it is.
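For what it's worth, the packing step really can be that dumb; something along these lines (a sketch only, and the header layout here is a placeholder, not a real file format):

    # Concatenate already-built binaries and record offsets, without caring
    # what the contents actually are.
    import struct

    def pack(output_path, asset_paths):
        blobs = [open(p, "rb").read() for p in asset_paths]
        with open(output_path, "wb") as out:
            out.write(struct.pack("<I", len(blobs)))          # asset count
            offset = 4 + 8 * len(blobs)                       # size of this header
            for blob in blobs:
                out.write(struct.pack("<II", offset, len(blob)))  # offset, size
                offset += len(blob)
            for blob in blobs:
                out.write(blob)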

I don't think there's anything groundbreaking here, but the takeaway should be that a game-level entity should define your asset references, and those references should in turn define the dependency information.  Otherwise, you might be wasting time processing data offline that never appears in the game.

Thanks,
JH