However, if you don't distinguish
between dependencies and references, then you are going to be building
more than needed. For example, the following is a reference chain:
Good thing to point out. Both pieces of data are needed for different
stages of the tools pipeline. Both can be represented in a single
dependency tree, but it all depends on what you consider a 'root' node
in your dependency tree.
Mesh -> Shader -> Texture
If the texture changes, then there is
no reason to rebuild the mesh or the shader. The mesh and shader simply
point at the texture.
However, the following is a true dependency chain:
Rig -> Skeleton -> Animation
If the rig changes, then you need to
reprocess all of the animations. I have seen lots of "dependency"
systems that don't make this distinction and thus cause tremendous
amounts of recooking whenever an asset changes.
A dependency system can be as simple as A -> B, which reads as A
needs to rebuild when B changes. Showing references is really not
important until the packaging stage. I was in the midst of writing up
my new asset build system just as this thread started up, and I ended
up with a pretty simple model. I'll describe it here, just for kicks
(and open myself to criticism ;-):
So, I have an "art" directory that contains all source data which are
directly manipulated by humans, broken up into some logical structure.
These are revision controlled. I also have a "build" directory which
is also broken down into a logical structure, but contains the results
of some tool. The main reason for this simplified structure is to allow
trivial clean rebuilds with no cruft hanging about. Also, the build
directory can be rsync'd around or sent up to a central store if that
proves useful, but I have no intention of checking the contents into
revision control. That's what backups are for, not revision control.
In the art\objectdef\ folder, I have simple text files that
list 1) a tool, 2) source data file(s), 3) output file(s), and 4)
parameters to the tool. Trivial format to parse.
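To make that concrete, here's a minimal sketch of what such a file and its parser might look like. The post doesn't specify the actual syntax, so the pipe-separated layout, the tool names, and the file paths below are all invented for illustration:

```python
# Hypothetical objectdef.txt layout (not the author's real syntax):
# one build step per line, fields separated by '|':
#   tool | source file(s) | output file(s) | parameters
SAMPLE = """\
meshcomp | art/hero/Hero.mb | build/hero/Mesh.bin | --lod 2
texcomp | art/hero/Texture.tga | build/hero/Texture.bin | --format dxt1
"""

def parse_objectdef(text):
    """Parse each non-empty line into (tool, sources, outputs, params)."""
    steps = []
    for line in text.splitlines():
        if not line.strip():
            continue
        tool, sources, outputs, params = (field.strip()
                                          for field in line.split("|"))
        steps.append((tool,
                      [s.strip() for s in sources.split(",")],
                      [o.strip() for o in outputs.split(",")],
                      params))
    return steps
```

Any equally trivial format would do; the point is just that the file names the tool, its inputs, and its outputs explicitly.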
To rebuild an object, I run a tool that parses one of these text files,
which loads up the list of source files and target files, builds a
simple DAG from them, determines the proper build order with a
topological sort, and simply iterates across this list, skipping any
steps that are not necessary due to time/date stamps. Multiple objects
could be loaded at once and put in the same DAG, cutting down on file
tests. Really simple. I could just as easily emit a makefile and call
make, but this is so easy I'm not bothering (until maybe I get a
massively multicore machine!).
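The build pass described above can be sketched in a few lines. This is not the author's actual tool, just a minimal illustration using Python's stdlib graphlib for the topological sort, with a made-up `steps` shape mapping each output file to its tool command and source files:

```python
import os
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def plan_build(steps):
    """Return the tool commands to run, in dependency order.

    `steps` maps an output file to (tool_command, [source_files]);
    this shape is an assumption, not the post's actual format. Outputs
    of one step may appear as sources of another, so several objects
    can be loaded into the same DAG.
    """
    # DAG: each output node's predecessors are its source files.
    graph = {out: set(srcs) for out, (_, srcs) in steps.items()}
    to_run = []
    for node in TopologicalSorter(graph).static_order():
        if node not in steps:      # a raw source file: nothing to build
            continue
        tool, srcs = steps[node]
        # Time/date-stamp check: skip the step when the output exists
        # and is at least as new as every source.
        if os.path.exists(node) and all(
                os.path.getmtime(s) <= os.path.getmtime(node)
                for s in srcs if os.path.exists(s)):
            continue
        to_run.append(tool)
    return to_run
```

Running the returned commands in order is the whole build; emitting a makefile from the same data would be an equally valid back end.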
The point is, the source data does *not* define the objectdef.txt
file. I have a plugin that can generate one for me, but once I've
closed Maya or Max, I never need to reopen the scene just to tweak
something. The objectdef.txt file allows me to swap out shaders,
textures, null out blendshapes, reorder them, add or remove animations,
etc, without ever reopening the source art. In the past, I've seen
cases where simply re-saving a file, for any number of reasons,
breaks it. Having to re-check in a large binary asset because
of a tiny change to some attributes in the scene is just ridiculous;
the load/save time to open a scene file is prohibitive; sometimes I've
even seen the load/save operation cause corruption or incompatibility
of assets that used to work. So I'm avoiding that if at all possible.
This text file also acts as a repository for my dependency
information. It's parseable and human readable. It also describes the
packing list for a single object prototype. So, rather than this:
Mesh -> Shader -> Texture
I have this:
Object -> Mesh.bin -> Mesh.mb
Object -> Shader.bin -> Shader.fx
Object -> Texture.bin -> Texture.tga
Object -> Anim1.bin -> Skeleton.bin -> Mesh.mb
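As a quick sanity check on the dependency-versus-reference distinction, the listing above can be flipped into a reverse-dependency map that answers "what goes stale when this file changes?" This is a minimal Python sketch; the file names come from the example, and the data structure is mine, not the author's:

```python
# The dependency lines above, as a map from each target to the files
# it is built from ("Object" is the packed prototype).
deps = {
    "Object":       ["Mesh.bin", "Shader.bin", "Texture.bin", "Anim1.bin"],
    "Mesh.bin":     ["Mesh.mb"],
    "Shader.bin":   ["Shader.fx"],
    "Texture.bin":  ["Texture.tga"],
    "Anim1.bin":    ["Skeleton.bin"],
    "Skeleton.bin": ["Mesh.mb"],
}

def dirty_targets(changed, deps):
    """Everything that must rebuild when the `changed` files change."""
    # Invert the map: source file -> targets built from it.
    rdeps = {}
    for target, srcs in deps.items():
        for s in srcs:
            rdeps.setdefault(s, []).append(target)
    # Walk outward from the changed files, collecting stale targets.
    dirty, stack = set(), list(changed)
    while stack:
        f = stack.pop()
        for target in rdeps.get(f, []):
            if target not in dirty:
                dirty.add(target)
                stack.append(target)
    return dirty
```

Touching Texture.tga dirties only Texture.bin and the packed Object, while touching Mesh.mb cascades through the skeleton and animations, which is exactly the rig-change scenario from earlier in the thread.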
A higher-level meta-container will depend on objects or other raw
assets, to build a definitive packing list for levels or regions.
Pretty straightforward stuff. I have different tools laid out for each
step of the process, and each one is batchable and surprisingly
simple. By splitting them up into fine-grained operations, I can
include them on the dependency chain implicitly and only rebuild the
pertinent data. The packing step is optional, and just links together
stuff without caring too much about what it is.
I don't think there's anything groundbreaking here, but the takeaway
from this should be that a game-level entity should define your asset
references, and these should in turn define the dependency
information. Otherwise, you might be wasting time processing data
offline that never appears in the game.