Thread: RE: [Algorithms] Skeletal Character Animation with Mesh Interpolation?
From: Jay S. <Ja...@va...> - 2000-11-20 02:28:23
> Of course, then you need vertex-shader capable hardware, which basically
> means Radeon and GF2 only right now, but that's the way of the future...

To the best of my knowledge, there is no vertex-shader capable hardware currently available, including GF2 and Radeon. Both of these (and GF) support hardware T&L, but not DX8 vertex shaders.

Even with vertex shaders, skinned characters will be problematic. There isn't enough room to hold an entire skeleton, and we don't yet know the cost of shader state changes. Not only that, but if you want to do lighting as well, you'll have to trade off bones for light sources, and weights for lighting terms. You'll also need multiple versions of each skinning shader with support for different numbers & types of light sources, because there is no early out. Each additional instruction will probably (definitely?) slow down your triangle rate, so you don't want to do extra light sources if you don't need them. It looks pretty messy at this point.

Jay
From: Tom F. <to...@mu...> - 2000-11-20 14:45:49
Yes, people have suggested this (myself included). One of the advantages is that you can use hardware to do the tweening, since tweening is a comparatively simple operation compared to complex multi-matrix skinning methods, though both are now becoming more and more common.

The other nice thing about it is that if you do your skinning on the CPU, you no longer need to use any particular modelling method - you can mix and match skeletal skinning, pre-keyframed animation (e.g. facial expressions, hand positions), mechanical-style stuff (weapon cycle anims), etc, and the renderer doesn't care which you are using - it just tweens.

If you do need to switch elements not on a keyframe boundary, it's very quick for the CPU to generate a new keyframe by doing the tween itself, then use that as a keyframe to base stuff from. Er... not clear. Interpolating between keyframe1 and keyframe2, we suddenly find at relative time 0.75 that we need to change the animation (e.g. the player just pressed "jump"). No problem - the CPU generates a new keyframe3 = 0.25*keyframe1 + 0.75*keyframe2. It also generates keyframe4, which is the newly skinned destination keyframe. Then you change to time=0.0, and tween between keyframes 3 and 4. Obviously doing this a lot loses the efficiency, but I don't think it will happen all that much. Most of the time, players and NPCs keep doing what they were doing a tenth of a second ago.

You can also do more expensive compression of animation, since you are decompressing at a much lower speed (when you generate a keyframe), rather than having to do it every frame. Ditto with expensive stuff like IK, blending multiple anims and so on. The downside is that it requires a bit of work to do forward-prediction of the target keyframes, though many games already plan for forward prediction, so adding this on is fairly easy.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Bass, Garrett T. [mailto:gt...@ut...]
> Sent: 19 November 2000 17:39
> To: 'gda...@li...'
> Subject: [Algorithms] Skeletal Character Animation with Mesh Interpolation?
>
> List folks,
>
> I know two common forms of character mesh animation are quake-style
> mesh interpolation, in which each animation keyframe is stored as a complete
> set of vertices, and half-life-style skeletal animation, in which only the
> bone transforms are stored per keyframe. I'm guessing most skeletal systems
> interpolate a skeletal pose from the nearest two keyframes every time an
> animation frame is rendered, but I'm curious if there may be a cheaper
> alternative that combines the best features of both methods. I'm not sure
> if the following is a widely used technique, but it just occurred to me, so
> I thought I'd suggest it to the list.
>
> Consider this: every few frames, perhaps every 1/10 second, the
> skeletal system is used to create a complete target mesh for the next
> skeletal keyframe; then as each frame is rendered the output mesh is
> interpolated nearer the keyframe mesh until they are the same, then a new
> target mesh is created for the next skeletal keyframe. This would double
> the storage needed for each output mesh as compared to a pure skeletal
> approach, but the per-frame computations could be far less, and there
> wouldn't be nearly the kind of memory demand one sees with pure mesh
> interpolation.
>
> Obviously there are some downsides to this. If you only generate
> target meshes once per keyframe, then you can only mix multiple skeletal
> animations on exact keyframe boundaries, although some skeletal solutions
> may already have this limitation. Obviously the system becomes less
> efficient if you need to switch animation sets between keyframes, though
> perhaps still more efficient than pure skeletal animation except in the
> unlikely worst-case scenario in which the animation set changes every single
> frame.
>
> Anyway, I thought this idea may help reduce CPU usage when
> simulating environments with large numbers of skeletal-system characters.
> As usual the price of reduced CPU usage is more memory usage. Any comments
> on this? Is anyone already doing this?
>
> Regards,
> Garett Bass
> gt...@ut...
From: Tom F. <to...@mu...> - 2000-11-20 14:55:49
Yep. Whereas a tweening vertex shader is comparatively trivial. It might even run fast enough in software, since it is so simple. This is what the "dolphin" sample in the DX8 SDK is doing.

Note that neither Radeon nor GF2 are vertex-shader compatible, though they do have some nice vertex features that are exposed in DX8 (tweening, multiple streams, etc). I believe the Radeon will do tweening in hardware, though the drivers I have don't expose it yet, so I wouldn't swear to it.

Tom Forsyth - purely hypothetical Muckyfoot bloke.
This email is the product of your deranged imagination, and does not in any way imply existence of the author.

> -----Original Message-----
> From: Charles Bloom [mailto:cb...@cb...]
> Sent: 19 November 2000 20:01
> To: gda...@li...
> Subject: Re: [Algorithms] Skeletal Character Animation with Mesh Interpolation?
>
> Just a quick comment on this: the "fixed function" hardware skinning
> which was in DX7 and is also in DX8 is really worthless. You get too
> few matrices and too many constraints; you end up having to split
> your characters like mad. IHVs didn't seem to realize that our
> characters have >= 100 bones in many cases, so splitting a character
> into chunks influenced by 2 or 4 bones is just not reasonable. In
> DX8 this is much better. You can write your own vertex shader that
> will do 8-matrix palette skinning, which is pretty reasonable. Of
> course, then you need vertex-shader capable hardware, which basically
> means Radeon and GF2 only right now, but that's the way of the future...
>
> BTW, V-shader hardware also supports vertex tweening, so Garrett's technique
> is also hardware acceleratable.
>
> At 06:55 PM 11/19/2000 -0000, you wrote:
> > Depends if you want to use H-T&L for the characters, in my opinion.
> >
> > In DX8 (dunno about DX7/OGL) you can use matrix blending and specify a
> > matrix stack and weights (one matrix per bone). This gives you the option to
> > "upload" your character (high detail) to the 3d card once and then just mess
> > with (interpolate) the matrices from then on. Certainly the way I'm going.
> >
> > Regards,
> >
> > Sam
>
> -------------------------------------------------------
> Charles Bloom cb...@cb... http://www.cbloom.com
>
> _______________________________________________
> GDAlgorithms-list mailing list
> GDA...@li...
> http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list
From: Neal T. <ne...@ps...> - 2000-11-20 15:15:11
From: Tom Forsyth <to...@mu...>
> Note that neither Radeon nor GF2 are vertex-shader compatible, though they
> do have some nice vertex features that are exposed in DX8 (tweening,
> multiple streams, etc). I believe the Radeon will do tweening in hardware,
> though the drivers I have don't expose it yet, so I wouldn't swear to it.

There's an OpenGL extension supported on the Radeon called ATI_vertex_streams, which I _think_ does tweening (though I wouldn't like to stake my life on it :-)). If that is what it's for, then I assume the Radeon does support hardware tweening, since an OpenGL extension wouldn't be provided unless there was hardware support for the functionality. (I don't actually have access to a Radeon at present, so I can't just sit down and try this, unfortunately. :-))

(If this extension isn't for tweening, I'd be grateful to anyone who wants to tell me what it does do, by the way, though any conversation on that topic should presumably be conducted off the list, since it's OpenGL specific.)

Neal Tringham (Sick Puppies / Empire Interactive)
ne...@ps...
ne...@em...
From: Giovanni B. <ba...@pr...> - 2000-11-20 15:51:18
As I already said, Radeon supports vertex tweening in the fixed-function pipeline of DX, and thus it does it in hardware. I do not know which is the equivalent OGL extension.

---
Giovanni Bajo
Lead Programmer
Protonic Interactive www.protonic.net
a brand of Prograph Research S.r.l. www.prograph.it
From: Daniel R. <Dan...@ho...> - 2000-11-20 15:52:41
----- Original Message -----
From: "Tom Forsyth" <to...@mu...>
> Note that neither Radeon nor GF2 are vertex-shader compatible, though they
> do have some nice vertex features that are exposed in DX8 (tweening,
> multiple streams, etc). I believe the Radeon will do tweening in hardware,
> though the drivers I have don't expose it yet, so I wouldn't swear to it.

Hopefully the new Matrox card(s) will do so... any comments from the Matrox devrels? They told me (a while ago) they're working on 100-bone-influence abilities... sounds like my dreams... actually I've got other problems now... let's see and wait what happens after Xmas ;o)

Daniel "SirLeto" Renkel [D.Renkel@FutureInt.de]
technical design director - creactivity and technowhow
Future Interactive [http://www.FutureInt.de]
From: Giovanni B. <ba...@pr...> - 2000-11-20 15:57:48
> -----Original Message-----
> From: gda...@li... [mailto:gda...@li...] On Behalf Of Tom Forsyth
> Sent: Monday, November 20, 2000 3:53 PM
> To: gda...@li...
> Subject: RE: [Algorithms] Skeletal Character Animation with Mesh Interpolation?
>
> Yep. Whereas a tweening vertex shader is comparatively trivial. It might
> even run fast enough in software, since it is so simple. This is what the
> "dolphin" sample in the DX8 SDK is doing.

The problem with this is that if you do tweening with a software VSH, you are not going to get any hw acceleration for the rest of the T&L process, and this can have some impact on performance. IMHO this is a big problem in this transition phase, when you need good support for both VSH and non-VSH hardware classes. For example, I'd love to add some layered fog to my world using a vertex shader, but then my world would be slowed down by software vertex processing.

---
Giovanni Bajo
Lead Programmer
Protonic Interactive www.protonic.net
a brand of Prograph Research S.r.l. www.prograph.it
From: Max M. <amc...@an...> - 2000-11-20 08:42:20
Why are vertex shaders so limited in the amount of storage for "constants", like the bones of a skeleton or information about lights?

Max
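For concreteness: a DX8 vs_1_1 shader is only guaranteed 96 four-float constant registers, and bone matrices, light parameters and transform matrices all compete for that one pool, which is exactly the bones-versus-lights tradeoff Jay describes. A back-of-envelope budget sketch (the per-bone and per-light register counts below are typical choices, not mandated by the API):

```cpp
// Rough vs_1_1 constant budget. A 4x3 bone matrix packs into 3 float4
// registers; a simple directional light needs ~2 (direction + colour);
// the view-projection matrix takes 4.
int MaxBones(int totalConstants, int viewProjRegs,
             int numLights, int regsPerLight, int regsPerBone)
{
    int left = totalConstants - viewProjRegs - numLights * regsPerLight;
    return left / regsPerBone;
}
```

With 96 registers and no lights that is (96 - 4) / 3 = 30 bones; every extra light eats into the bone count, which is why a 100-bone character has to be split into chunks and re-uploaded between draws.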
From: Giovanni B. <ba...@pr...> - 2000-11-20 11:01:40
Hello, I need an algorithm to find the best spline approximation of a 3d point, sampled at different times. Think for example of an animated vertex, whose position has been sampled several times. I want to find the best spline(s) (even more than one, if needed) that approximate the vertex animation, of course within a given error. Most 3d animation packages (Maya, LW) have a similar feature (they can simplify the animation curve, removing keyframes that can be approximated well within the animation curve), so I suppose that there are some papers already floating around. Thanks

---
Giovanni Bajo
Lead Programmer
Protonic Interactive www.protonic.net
a brand of Prograph Research S.r.l. www.prograph.it
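One common approach to the keyframe-removal feature Giovanni describes is a Ramer-Douglas-Peucker-style refinement: keep the endpoints, find the sample farthest from the current approximation, and promote it to a keyframe only if its error exceeds the tolerance. A minimal sketch using linear interpolation between kept keys (hypothetical names; the same recursive structure works with Catmull-Rom or Bezier evaluation in place of the lerp):

```cpp
#include <cmath>
#include <vector>

struct Sample { float t; float x, y, z; };

// Distance from sample s to the linear interpolation between keys a and b,
// evaluated at s.t (error measured in space, at the same time value).
static float ErrorAt(const Sample& a, const Sample& b, const Sample& s)
{
    float u  = (s.t - a.t) / (b.t - a.t);
    float dx = a.x + u * (b.x - a.x) - s.x;
    float dy = a.y + u * (b.y - a.y) - s.y;
    float dz = a.z + u * (b.z - a.z) - s.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Recursively keep only samples whose removal would exceed the tolerance.
static void Reduce(const std::vector<Sample>& s, size_t lo, size_t hi,
                   float tol, std::vector<size_t>& keep)
{
    float worst = 0.0f;
    size_t worstI = lo;
    for (size_t i = lo + 1; i < hi; ++i) {
        float e = ErrorAt(s[lo], s[hi], s[i]);
        if (e > worst) { worst = e; worstI = i; }
    }
    if (worst > tol) {
        Reduce(s, lo, worstI, tol, keep);
        keep.push_back(worstI);
        Reduce(s, worstI, hi, tol, keep);
    }
}

// Returns the indices of the samples to keep as keyframes (in time order).
std::vector<size_t> ReduceKeyframes(const std::vector<Sample>& s, float tol)
{
    std::vector<size_t> keep;
    keep.push_back(0);
    Reduce(s, 0, s.size() - 1, tol, keep);
    keep.push_back(s.size() - 1);
    return keep;
}
```

A straight-line run of samples collapses to its two endpoints; any sample whose removal would move the curve more than `tol` survives as a keyframe.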
From: Johan H. <jh...@mw...> - 2000-11-20 17:37:02
Hi

I am running a software cache, where I cache graphical elements in an outdoor scene. Cache size is typically about 300-500 elements, with 3/4 to 4/5 of the cache elements being valid and needed in any specific frame. Due to frame coherence the cache does not need to be much bigger than the number of elements needed in a frame.

My problem: I need a data structure that allows me to search the cache very quickly for a tile (X, Y, LOD in a simple quadtree structure), and another structure that allows me to search the cache very quickly for the least recently used tile, where I can cache a new tile.

I was thinking of a linear n-element cache: a FreeTileList, which is a sorted linked list pointing to the unused tiles, and a CacheTileList, a sorted linked list pointing to all valid tiles in the cache, sorted (LOD then Y then X), with quick pointers to each new LOD. Unfortunately these two lists should also point to each other, so that we can remove a tile from the other list if it changes in one...

Any thoughts?

thank you
Johan
From: Jeffrey C. <jef...@ya...> - 2000-11-23 14:24:53
I was wondering why you use LRU for your outdoor rendering. IMHO, LRU is used on a system whose access pattern is unpredictable. On the other hand, outdoor rendering seems not to act like this. The viewer moves linearly over a terrain, so you know which tile should be replaced, right? For example, if the viewer moves to the north then the southernmost tile should be replaced. If that's the case, a simple 2d array of tiles can be used, with the border used as a cache. The array is wrappable. The initial layout might look like this:

        North
+----+----+----+----+
|    |    |    |    |
|  1 |  2 |  3 |  4 |
+----+----+----+----+
|    |    |    |    |
| 12 |  a |  b |  5 |
+----+----V----+----+
|    |    |    |    |
| 11 |  c |  d |  6 |
+----+----+----+----+
| 10 |  9 |  8 |  7 |
|    |    |    |    |
+----+----+----+----+

The cache tiles are 1,2,3,4,5,6,7,8,9,10,11,12 and a,b,c,d are in use. V is the viewer. Suppose the viewer moves north one tile: 2,3,a,b are now in use, the map wraps, and 10,9,8,7 are replaced and become the northernmost tiles.

Regards,
Jeffrey Cahyono
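The only subtle part of Jeffrey's wrapping array is mapping a world tile coordinate into the fixed-size array so that negative coordinates behave, since `%` applied to a negative operand does not reliably give a non-negative result in C and C++. A minimal sketch:

```cpp
// Map a world tile coordinate into an n-wide wrapping (toroidal) array.
// The extra "+ n" keeps the result non-negative for negative coordinates.
int Wrap(int coord, int n)
{
    return ((coord % n) + n) % n;
}
```

Because a given world tile always maps to the same slot modulo n, moving one tile north simply reuses the row of slots that just fell off the south edge: no data is copied, only that one row is reloaded.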
From: Johan H. <jh...@mw...> - 2000-11-23 18:07:47
Hi

> I was wondering why you use LRU for your outdoor rendering.

Interesting question, and I am thinking about it. Partially because it is a quadtree, and I have tiles of different LOD in the same cache. By increasing higher levels faster, I can replace them with LRU before I replace their parent tiles, but maybe I should rethink this.

Johan
From: Pierre T. <p.t...@wa...> - 2000-11-23 21:19:14
I'm porting my engine from DX7 to DX8 and I'm going to break a lot of things in it. Before warming up my rusty chainsaw and starting the gore cuts, I'd like to expose some parts of the plan. Maybe you guys will have some useful ideas about it.

So far I've used two basic levels, implemented in two different DLLs:
- the low level, a.k.a. "renderer level", which is a plain Direct3D wrapper. Plain, but complete.
- the high level, a.k.a. "render manager level". That one uses the API from the renderer level, so in theory you could just replace D3D with OGL and it would work the same.

The atomic element at low level is the vertex buffer. A high-level mesh wraps a vertex buffer and multiple lists of indices. There's one list for each submesh of a mesh. A submesh is a group of faces sharing the same rendering properties. The hardwired vertex buffer captures the geometry, the multiple lists capture the topology. I can stripify a mesh on the fly, which is just discarding the topology and building a new one, leaving the geometry untouched. The same goes for progressive meshes. High-order surfaces are a bit more painful; I'll leave them out of the way for the moment. But to be complete, let's say all computational geometry takes place in a separate DLL, completely independent of the rendering code. (The same for collision detection or physics.)

Now, in DX8, things are a bit different. Basically I must include this in the global design:
1) hardwired index buffers
2) N-patches
3) meshes from D3DX?
4) vertex & pixel shaders

For 1), I just have to move the topology from the "render manager" level to the "renderer" level, and manage them as VBs. But while I'm at it, I may want to increase the granularity as well... and my atomic element could become a complete mesh, included at the (low) renderer level. I like the idea, but maybe there are flaws I can't think of. I think it would eventually end up very similar to 3), but with my own code.

Speaking of it, maybe I could just use 3), wrap them in the low level renderer, and wipe my code out. But is it worth it? I don't think so. The only thing I'll probably borrow from D3DX is the idea of upgrading the granularity. It will at least provide the possibility to completely discard obsolete methods, hiding them from the user once and for all. For example, the high level currently calls low level methods such as Optimize() for VBs, which don't exist anymore in DX8. I still call them from the high level since the low level can be DX7-based, but this is annoying, not convincing, not elegant, you name it. (The same goes for VB destruction, now managed by D3D, and for some other things.)

2) goes the same way, I think. I could then move the subdivision surfaces, etc, to the low render level as well. But it would introduce a dependence between the low level renderer and the DLL responsible for computational geometry, as I would have to emulate the high order surfaces for devices not exposing them. And of course this dependence is ugly, since the CG DLL also deals with an awful lot of things (BSP, CSG, etc). Not really convincing.

Now, here's the painful 4). I'm not familiar with vertex/pixel shaders for the moment, and I don't think I can say, now, how they will be included in the existing framework. So I guess here are the possible questions:
- do you have any useful comments?
- do you know if the new plan is compatible with something like pixel shaders? ...or even with the future? What's expected in DX9, for example? Do we already know that?
- what are your own architectural choices? Things are pretty messy nowadays.

The goals, from more important to less important:
- keep very clean APIs and independent DLLs
- ease of use
- flexibility
- performance
- support for DX7, DX8, OGL... Xbox? PS2?!

I welcome all ideas, tips and experience from wise design gurus.

Pierre Terdiman * Home: p.t...@wa...
Coder in the dark * Zappy's Lair: www.codercorner.com
From: Jim O. <j.o...@in...> - 2000-11-24 09:22:32
> I welcome all ideas, tips and experience from wise design gurus.

I'm not sure if the 'wise' and 'guru' parts apply to me (probably not); but I'm tackling the same issue in my engine architecture right now. My first focus is to port the current functionality to DX8 and rewrite those parts which are directly affected by changes in DX8 (i.e. the introduction of index buffers). I'll deal with patches, pixel- and vertex shaders once I am fully familiar with the new API and its effects on my engine architecture.

As for the separation between low level rendering code and high level 'render manager' code: I'm quite happy with my current solution, which is to have an abstract base class named Visual. Visual defines methods for getting all the information you want from a visual object (bounding volumes, ray intersections, etc.) and some methods for rendering and updating/animating the visual. The whole idea being that the higher level API doesn't *want* to know whether its visuals are meshes, terrains, patches, particle systems, or whatever, as long as they behave properly within the constraints of the virtual world.

I'm planning to combine this with a sort of visual factory, where you create visual objects by passing some identifier to a function, i.e.:

Visual *visual = CreateVisualFromFile("mesh", "human.m3d");

(which would load a plain mesh) or similarly:

Visual *visual = CreateVisualFromFile("subdivmesh", "somemode.sdm");

The latter would perhaps create a true subdivision mesh in some low level implementation (DX8) and a standard mesh on another implementation (assuming that the file format provides the necessary information).

Jim Offerman
Innovade
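Jim's factory idea can be sketched as a registry of creator functions keyed by type name, so each low level implementation registers whatever concrete classes it supports and no central switch statement needs editing when a type is added. All names below are hypothetical:

```cpp
#include <map>
#include <string>

struct Visual {
    virtual ~Visual() {}
    virtual void Render() = 0;
};

// Hypothetical concrete type + creator, for illustration only.
struct MeshVisual : Visual {
    std::string file;
    MeshVisual(const std::string& f) : file(f) {}
    void Render() { /* draw the mesh */ }
};
Visual* CreateMesh(const std::string& file) { return new MeshVisual(file); }

// Registry of creators keyed by type name ("mesh", "subdivmesh", ...).
// A DX8 back end might map "subdivmesh" to a true subdivision surface,
// while another back end maps the same name to a plain mesh.
typedef Visual* (*CreateFn)(const std::string& file);

class VisualFactory {
    std::map<std::string, CreateFn> creators_;
public:
    void Register(const std::string& type, CreateFn fn) {
        creators_[type] = fn;
    }
    Visual* CreateFromFile(const std::string& type, const std::string& file) {
        std::map<std::string, CreateFn>::iterator it = creators_.find(type);
        return it == creators_.end() ? 0 : it->second(file);
    }
};
```

Usage then matches Jim's example: `factory.CreateFromFile("mesh", "human.m3d")` returns a `Visual*`, and the caller never learns which concrete class it got.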
From: Pierre T. <p.t...@wa...> - 2000-11-24 21:51:05
> I'm not sure if the 'wise' and 'guru' parts apply to me (probably not); But
> I'm tackling the same issue in my engine architecture right now. My first
> focus is to port the current functionality to DX8 and rewrite those parts
> which are directly affected by changes in DX8 (i.e. the introduction of
> index buffers). I'll deal with patches, pixel- and vertexshaders once I am
> fully familiar with the new API and its effects on my engine architecture.

As far as index buffers are concerned, it actually makes things a lot cleaner. The DX8 model is elegant, and should probably be copied. Index buffers are now just resources, same as vertex buffers, and hiding both of them at low level is quite nice. Support for strips has also been moved to the index buffer level, and it makes life a lot easier for the high-level app, which now just has to set a bool to true in the very low renderer to get automatic stripping. (And that high-level app doesn't want to know either how the meshes are rendered.) I also copied DX8's DrawSubset in my low level API, so that porting to D3DX will hopefully be a piece of cake. I'll see about this.

> As for the seperation between low level rendering code and high level
> 'render manager' code. I'm quite happy with my current solution, which is to
> have an abstract base class named Visual. Visual defines methods for getting
> all information you want from a visual object (bounding volumes, ray
> intersections, etc.) and some methods for rendering and updating/animating
> the visual.

This is probably a good solution, since AFAIK maaaany people have designed something similar - most of the time with the same names for their classes and methods. For example I had a Visual class too in the former engine I've been working on, as well as Render() and Update() methods. The whole hierarchy was a lot more complex anyway, something like:

Node->CNode->SceneNode->Visual->Mesh

But I really don't like that complexity anymore. Now, I usually have a base class for a given characteristic (ex: renderable objects), a single derivation level (ex: Renderable->Mesh) and a lot of multiple inheritance (how could I live without that before?). A lot of developers have gone that way as well. Dunno which is best, but for the moment I'm pleased with the overall results. Anyway, if you choose a given design and stick to it, you'll probably make your way safely to the top, regardless of the actual hierarchy chart.

My real "problem" (or anyway what makes me think a lot) is more about the dependencies between DLLs, and the way one could use or not use a particular API out of context. I really like self-sufficient DLLs exposing a clean independent API, something you can just grab from the package and plug into someone else's engine. That's why my initial low level renderer was just a plain D3D wrapper. I built a render manager on top of it, but one could have used the low-level renderer all alone, in a totally different project. For example I have a skeletal animation system (doing skinning, motion blending, etc, very Granny-like) which is buried in a dedicated DLL, and it does not depend at all on the rendering system. (The same goes for maths, computational geometry, collision detection, physics, etc.) You get the idea: cuts are clean. Want to use the collision detection DLL only? Fine, go for it.

So far it was approximately clean (at least I was happy with it). But DX8 makes my life more tedious for some parts. Skinning is a good example. I've never bothered using hardware vertex blending, since each time I've tried, my software skinning was just faster. So the skinning still takes place in the aforementioned DLL, doesn't care about D3D, nice. But how am I supposed to evolve toward vertex-shader skinning? I can't, at least not easily: there's a big fat new dependency between it and the renderer. The architecture needs rethinking. Tweening, skinning, patches, pixel shaders, etc - a lot of previously clean independent things have collapsed into the "renderer" DLL, and index buffers are just the easy part / tip of the iceberg.

> I'm planning to combine this with a sort of visual factory, where you create
> visual objects by passing some identifier to a function, i.e.:
>
> Visual *visual = CreateVisualFromFile("mesh", "human.m3d");

Yep, I have something like this as well. We used GUIDs in the former engine, now I'm just using the name of the class. ...anyway...

Pierre Terdiman * Home: p.t...@wa...
Coder in the dark * Zappy's Lair: www.codercorner.com
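Pierre's one-base-class-per-characteristic design can be sketched like this (hypothetical names: each capability is a small abstract base, and a concrete class mixes in only the capabilities it actually has):

```cpp
// One small base class per characteristic, instead of one deep chain.
struct Renderable {
    virtual ~Renderable() {}
    virtual void Render() = 0;
};
struct Updatable {
    virtual ~Updatable() {}
    virtual void Update(float dt) = 0;
};

// A mesh is both renderable and updatable; a static prop would derive
// from Renderable alone, an invisible trigger from Updatable alone.
struct Mesh : Renderable, Updatable {
    int frames;
    Mesh() : frames(0) {}
    void Render() { /* submit to the renderer */ }
    void Update(float) { ++frames; }
};
```

A subsystem that only ticks objects takes `Updatable*`, and a renderer takes `Renderable*`; neither needs to see the full concrete type, which is what keeps the DLL boundaries clean.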
From: Jim O. <j.o...@in...> - 2000-11-25 10:54:01
> As far as index buffers are concerned, it actually makes things a lot
> ...
> app, which now just has to set a bool to true in the very low renderer to
> get automatic stripping. (And that high-level app doesn't want to know
> either how the meshes are rendered).

Agreed.

> Node->CNode->SceneNode->Visual->Mesh
> ...
> overall results.

My Visual class has quite a different position in the class hierarchy; more comparable to the Renderable solution you suggest here.

> Anyway, if you choose a given design and stick to it, you'll probably make
> your way safely to the top, regardless of the actual hierarchy chart. My
> ...
> etc, a lot of previously clean independent things have collapsed in the
> "renderer" DLL, and index buffers is just the easy part/top of the iceberg.

Keeping the APIs of various DLLs completely self-contained is not an easy task and might not always be a favorable solution. I chose to have this modularity in only one direction, i.e. you can use my low level renderer stand-alone, but if you want to use my scenegraph, you also have to use my low level renderer (or write your own using my interfaces), etc.

> Yep, I have something like this as well. We used GUIDs in the former engine,
> now I'm just using the name of the class.

Classnames are fine with me too.

Jim Offerman
Innovade
From: Maciej S. <ms...@kk...> - 2000-11-25 11:46:38
Hello,

Saturday, November 25, 2000, 11:57:39 AM, Jim Offerman wrote:
[snip]
> Keep the APIs of various DLLs completely self contained is not an easy task
> and might not always be a favorable solution. I chose to have this
> modularity only in one direction. I.e. you can use my low level renderer
> stand alone, but if you want to use my scenegraph, you also have to use my
> low level renderer (or write your own using my interfaces) etc.

Yes, I do the same and it seems to work OK (at least for the time being). I have several levels; modules from each level may only depend on modules from a lower level, and they don't know about modules from the same or a higher level. I found this approach in CrystalSpace, so it seems to be quite a common way of handling things.

-------------------------
Maciej Sinilo
From: Jim O. <j.o...@in...> - 2000-11-26 10:15:49
|
> Yes, I do the same and it seems to work OK (at least for the > time being). I just have some levels. Modules from each > ... > found this approach in CrystalSpace, so this seems to be > quite common way of handling things. Agreed. I think you will find this behaviour in almost any modular API. It is like the second floor of a house always being built on the first ;). Jim Offerman Innovade |
From: Jim O. <j.o...@in...> - 2000-11-23 23:25:01
|
Hi, > Interesting question, and I am thinking about it.. > Partially because it is a quadtree, and I have tiles of different LOD in the > same cache. By increasing higher levels faster, I can replace them with LRU, > before I replace their parent tiles, but maybe I should rethink this. The nasty thing about terrains (I think) is the fact that players have this tendency to swiftly look at what's behind them, which absolutely trashes the LRU cache, since it had *just* unloaded the textures which it now needs. If not done properly, the player will notice slowdowns each time he or she looks around to investigate the vicinity - and you'll be surprised how many times people do this! To implement a really effective caching system, one would probably need to know a bit about the way people maintain their orientation and bearings in the virtual world. Try to continuously observe your own behaviour when you're moving through open terrain - unless there's something wrong with your neck, you'll find that you are constantly looking around in order to maintain a clear view of what is going on around you. People do the same thing in games. Jim Offerman Innovade |
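[Editor's sketch] The "look behind you" problem above is exactly a worst case for LRU: the tiles behind the player are by definition the least recently used, so they are the first evicted and a quick turn-around becomes a burst of misses. A minimal simulation, with hypothetical tile names and capacity:

```python
from collections import OrderedDict

# Minimal LRU tile cache; capacity and tile keys are illustrative only.
class LRUTileCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = OrderedDict()
        self.misses = 0

    def fetch(self, tile):
        if tile in self.tiles:
            self.tiles.move_to_end(tile)   # mark as recently used
        else:
            self.misses += 1               # stands in for a disk load
            self.tiles[tile] = True
            if len(self.tiles) > self.capacity:
                self.tiles.popitem(last=False)  # evict least recently used

cache = LRUTileCache(capacity=4)
behind = ["B1", "B2", "B3", "B4"]
ahead = ["A1", "A2", "A3", "A4"]
for t in behind + ahead:   # walk forward: the 'behind' tiles age out
    cache.fetch(t)
start = cache.misses
for t in behind:           # player spins around: every tile misses again
    cache.fetch(t)
print(cache.misses - start)  # 4 -- all 'behind' tiles had been evicted
```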
From: Johan H. <jh...@mw...> - 2000-11-24 09:00:38
|
> The nasty thing about terrains (I think) is the fact that players have this > tendency to swiftly look at what's behind them; which absolutely trashes the > LRU cache, since it had *just* unloaded the textures which it now needs. If > not done properly, the player will notice slowdowns each time he or she > looks around to investigate the vicinity - and you'll be surprised how many > times people do this! I currently solve the slowdowns, through the quadtree system, by caching only higher-up tiles, and thus missing detail. As long as the user is rotating very fast this is almost no problem, and I feel it is better than a slowdown. This is probably one of those places where it would help to keep detailed runtime statistics, and adjust the algorithm used according to a specific user's gameplay. The difficult part would be to think up the different algorithms, and the criteria for selecting them. Johan |
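[Editor's sketch] The fallback described above (during fast rotation, render the nearest cached ancestor tile at coarser LOD rather than stalling on a load) can be sketched as a walk up the quadtree. The tile key scheme here is a hypothetical (level, x, y) addressing where the parent of (l, x, y) is (l - 1, x // 2, y // 2); it is not Johan's actual implementation.

```python
# Return the most detailed cached tile covering the requested tile,
# walking up toward the quadtree root; None means nothing is cached
# and a synchronous load is unavoidable.
def best_cached_tile(cache, level, x, y):
    while level >= 0:
        if (level, x, y) in cache:
            return (level, x, y)
        level, x, y = level - 1, x // 2, y // 2
    return None

cache = {(0, 0, 0), (1, 1, 0)}   # root plus one level-1 tile cached
print(best_cached_tile(cache, 2, 3, 1))  # (1, 1, 0): cached parent
print(best_cached_tile(cache, 2, 0, 0))  # (0, 0, 0): falls back to root
```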
From: Mark W. <mwa...@to...> - 2000-11-24 10:39:17
|
If the user is rotating fast, then I'm guessing that, since distant geometry will be moving faster in screen space than closer geometry, you'd be better off dropping detail in the distance. When they stop you can bring it back... Just a thought, Mark ----- Original Message ----- From: "Johan Hammes" <jh...@mw...> To: <gda...@li...> Sent: Friday, November 24, 2000 5:43 PM Subject: Re: [Algorithms] Cache data structure > > The nasty thing about terrains (I think) is the fact that players have > this > > tendency to swiftly look at what's behind them; which absolutely trashes > the > > LRU cache, since it had *just* unloaded the textures which it now needs. > If > > not done properly, the player will notice slowdowns each time he or she > > looks around to investigate the vicinity - and you'll be surprised how > many > > times people do this! > > I currently solve the slowdowns, through the quadtree system, by caching > only higher-up tiles, and thus missing detail. As long as the user is > rotating very fast this is almost no problem, and I feel it is better than a > slowdown. > > This is probably one of those places where it would help to keep detailed > runtime statistics, and adjust the algorithm used according to a specific > user's gameplay. The difficult part would be to think up the different > algorithms, and the criteria for selecting them. > > Johan > > > > _______________________________________________ > GDAlgorithms-list mailing list > GDA...@li... > http://lists.sourceforge.net/mailman/listinfo/gdalgorithms-list |
From: Johan H. <jh...@mw...> - 2000-11-24 14:04:00
|
> If the user is rotating fast, then I'm guessing that, since distant > geometry will be moving faster in screen space than closer geometry, you'd > be better off dropping detail in the distance. When they stop you can bring it > back... ??????????? Distant geometry will be moving at exactly the same speed in screen space as close geometry when rotating. You should drop equal amounts of both. Johan |
From: Jim O. <j.o...@in...> - 2000-11-24 22:31:07
|
> ??????????? Distant geometry will be moving at exactly the same speed in > screen space as close geometry when rotating. This is not true. When you rotate your view, distant objects *will* move faster across your view. It is best compared to objects orbiting the origin of your virtual world. If the first object is at a distance r and the second is at a distance 2r away from the origin, and each of them has to complete a full circle in the same time t, the farther of the two has to travel twice the distance and thus has to move at twice the speed of the closer object. So, yes, objects farther away move faster in and out of view when the player is looking around (not walking). Jim Offerman Innovade |
From: Mark W. <mwa...@to...> - 2000-11-24 23:44:21
|
> > If the user is rotating fast, then I'm guessing that, since distant > > geometry will be moving faster in screen space than closer geometry, you'd > > be better off dropping detail in the distance. When they stop you can bring it > > back... > > ??????????? Distant geometry will be moving at exactly the same speed in > screen space as close geometry when rotating. > You should drop equal amounts of both. Correct me if I am wrong, but if you have a line, say from 1m in front of you to 100m in front of you, and rotate your view 5 degrees, does not the far end point move more than the near point of the line? This is what I meant... I see this as the far end point moving faster in view-space than the near end point. Sorry if I was unclear. Mark |
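[Editor's sketch] The exchange above is measuring two different things, and a small numerical check (hypothetical coordinates, camera at the origin) separates them: under a pure camera rotation, a point's *world-space* displacement grows in proportion to its distance (Mark's line example), but its *projected screen* position depends only on the viewing direction, so two points on the same ray move identically on screen (Johan's point).

```python
import math

def rotate_y(p, theta):
    # Rotate point p = (x, z) about the camera at the origin by theta.
    x, z = p
    return (x * math.cos(theta) + z * math.sin(theta),
            -x * math.sin(theta) + z * math.cos(theta))

def screen_x(p):
    # Perspective projection onto a screen plane at z = 1.
    x, z = p
    return x / z

theta = math.radians(5)
near, far = (0.0, 1.0), (0.0, 100.0)   # 1 m and 100 m straight ahead

# World-space displacement scales with distance (the far end of the
# line really does sweep through 100x more space):
d_near = math.dist(near, rotate_y(near, theta))
d_far = math.dist(far, rotate_y(far, theta))
print(d_far / d_near)   # ~100.0

# But the projected screen movement is the same for both, because
# projection depends only on direction, not distance:
s_near = screen_x(rotate_y(near, theta)) - screen_x(near)
s_far = screen_x(rotate_y(far, theta)) - screen_x(far)
print(abs(s_near - s_far))  # ~0.0: identical screen-space speed
```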
From: Jim O. <j.o...@in...> - 2000-11-24 22:31:09
|
Hey, > I currently solve the slowdowns, through the quadtree system, by caching > only higher-up tiles, and thus missing detail. As long as the user is > rotating very fast this is almost no problem, and I feel it is better than a > slowdown. That sounds like a sound strategy :). > This is probably one of those places where it would help to keep detailed > runtime statistics, and adjust the algorithm used according to a specific > user's gameplay. The difficult part would be to think up the different > algorithms, and the criteria for selecting them. Yup. There is no single best solution here - but that's what makes life interesting ;). Jim Offerman Innovade |