From: Alexander G. <ale...@gm...> - 2010-05-10 10:01:05
What I don't like about that is the packing of all animation frames into a single array. Moving that to a dedicated structure (aiAnimMesh) is much cleaner, imho. Apart from that, it's very similar to our proposal, isn't it?

On 08.05.2010 07:54, Mick Pearson wrote:
> I'm following up on what the head honchos would like to do about
> Vertex Animation, or the absence thereof. Below is a (very simple)
> proposal for the interface additions. There are any number of
> cost-effective ways you could go about this; however, I think this is
> well in line with the all-around cost effectiveness of the rest of
> the API.
>
> If no one else is working on this at this time, I would like to get
> something up as soon as possible. I know there was concern expressed
> about the Quake 1 MDL format being so far incomplete, but I'm not
> sure anyone has committed to giving that any attention. Vertex
> Animation is my top priority at the moment anyway, so please someone
> reach a conclusion or take a back seat.
>
> Please note that this will affect post-processing steps which deal
> with vertex parameters/attributes. Also, since each frame shares the
> same connectivity (this is probably a good idea), the join-vertices
> steps might want to check the other frames before joining (where the
> vertices may diverge), and in general any changes to the vertex data
> should be kept consistent frame to frame.
>
> The included proposal stores the parameters for all frames in a
> single array per type (along with the original members), so any step
> reallocating these arrays will need to be addressed. Of course the
> Verbose Format, for whatever its merit, will be doubly bloated for
> many frames; however, the same can be said for many/large mesh
> assets, so that's probably not worth fretting over for as long as
> the Verbose Format is deemed necessary.
> In my judgment, representing the frames in a compressed or
> relative/differences-based format would not be in the style of the
> rest of the API. The most realistic way for modern hardware to
> implement this is probably an interpolation shader, so just having
> two 1:1 frames side by side (front/back buffer) is probably best.
>
> No cocky attitude out of ulfjorensen, please.
>
> enum aiParameterType
> {
>     aiParameterType_VERTEX = 0x1,
>
>     aiParameterType_NORMAL = 0x2,
>
>     aiParameterType_COLOR = 0x4,
>
>     // or aiParameterType_TEXTURECOORD?
>     aiParameterType_TEXTURECOORDS = 0x8,
>
>     _aiParameterType_Force32Bit = 0x9fffffff
> }; //! enum aiParameterType
>
> struct aiMesh
> {
>     //! bitwise combination of aiParameterType values
>     unsigned int mConstantParameters;
>
>     unsigned int mNumFrames;
>
>     // todo: range check / NormalFrame() and so on...
>     aiVector3D *PositionFrame(unsigned int n)
>     {
>         return (aiParameterType_VERTEX & mConstantParameters)
>             ? mVertices
>             : mVertices + mNumVertices * n;
>     }
> };
>
> struct aiMeshKey
> {
>     double mTime;
>     unsigned int mPerNodeMesh, mMeshFrame;
> };
>
> struct aiNodeAnim
> {
>     unsigned int mNumGeometryKeys;
>
>     aiMeshKey *mGeometryKeys;
> };
>
> ------------------------------------------------------------------------------
>
> _______________________________________________
> Assimp-discussions mailing list
> Ass...@li...
> https://lists.sourceforge.net/lists/listinfo/assimp-discussions
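To illustrate how a consumer would use the proposed packed-frame layout, here is a minimal, self-contained sketch of the PositionFrame() accessor plus a CPU-side linear blend between two whole frames (the "front/back buffer" pair an interpolation shader would consume). The Mesh and aiVector3D types below are hypothetical stand-ins for the proposal, not the real Assimp structures:

```cpp
#include <vector>

// Hypothetical stand-ins for the proposal's types; the real
// aiMesh/aiVector3D in Assimp carry many more members.
struct aiVector3D { float x, y, z; };

enum aiParameterType {
    aiParameterType_VERTEX = 0x1,
    aiParameterType_NORMAL = 0x2,
};

struct Mesh {
    unsigned int mConstantParameters = 0; // bitmask of aiParameterType
    unsigned int mNumVertices = 0;        // vertices per frame
    unsigned int mNumFrames = 1;
    aiVector3D  *mVertices = nullptr;     // all frames packed back to back

    // Frame accessor as in the proposal: a parameter flagged constant
    // always resolves to frame 0.
    aiVector3D *PositionFrame(unsigned int n) {
        return (aiParameterType_VERTEX & mConstantParameters)
            ? mVertices
            : mVertices + mNumVertices * n;
    }
};

// Linear blend between frames a and b at parameter t in [0, 1].
void BlendFrames(Mesh &m, unsigned int a, unsigned int b, float t,
                 std::vector<aiVector3D> &out) {
    const aiVector3D *fa = m.PositionFrame(a);
    const aiVector3D *fb = m.PositionFrame(b);
    out.resize(m.mNumVertices);
    for (unsigned int i = 0; i < m.mNumVertices; ++i) {
        out[i].x = fa[i].x + (fb[i].x - fa[i].x) * t;
        out[i].y = fa[i].y + (fb[i].y - fa[i].y) * t;
        out[i].z = fa[i].z + (fb[i].z - fa[i].z) * t;
    }
}
```

In a real renderer the same blend would run in the vertex shader with the two frames bound as separate attribute streams; the CPU loop here only demonstrates the indexing scheme.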