From: Michael S. <Mic...@gm...> - 2002-04-26 07:06:44
> How does the runtime know to render a visually rendered node? It knows
> because the renderer walks the tree, and by design it won't visit a node
> that shouldn't be rendered. Do we need something like an AudioRenderer?
> That is, a class that works like the Viewer, only for audio instead of
> geometry?

I think it would be a good idea to use an AudioRenderer for Sound nodes.
But the current architecture of openvrml has no good class type
identification (only casts like toMovieTexture() or toGeometry()). I don't
know about the new rearch branch, but I think it would be a good thing if
we used class type identification like Open Inventor does. That would make
it much easier to pick out the Sound nodes.

It would also be good to use render actions that get the path to the
rendered objects. That makes it easier to find the transparent objects and
render them last, depth-sorted.

Micha.
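
P.S. To make the type-identification point more concrete, here is a rough
sketch of what I mean, modelled on Open Inventor's SoType. All the class
and method names below are just illustrative; they are not the actual
openvrml or rearch-branch API.

    // node_type.h -- illustrative only, not the actual openvrml classes.
    // A minimal run-time type system in the spirit of Open Inventor's
    // SoType: every node class carries a NodeType, and
    // isOfType()/isDerivedFrom() replace the long list of
    // toMovieTexture()/toGeometry() style casts.

    #include <string>

    class NodeType {
    public:
        NodeType(const std::string & name, const NodeType * parent = 0)
            : name_(name), parent_(parent) {}

        const std::string & name() const { return name_; }

        // True if this type is t or derives (directly or indirectly) from t.
        bool isDerivedFrom(const NodeType & t) const {
            for (const NodeType * p = this; p; p = p->parent_) {
                if (p == &t) { return true; }
            }
            return false;
        }

    private:
        std::string name_;
        const NodeType * parent_;
    };

    class Node {
    public:
        virtual ~Node() {}
        virtual const NodeType & nodeType() const = 0;

        bool isOfType(const NodeType & t) const {
            return nodeType().isDerivedFrom(t);
        }

        static const NodeType classType;
    };
    const NodeType Node::classType("Node");

    class SoundNode : public Node {
    public:
        virtual const NodeType & nodeType() const { return classType; }
        static const NodeType classType;
    };
    const NodeType SoundNode::classType("Sound", &Node::classType);

An AudioRenderer walking the tree could then simply ask
node->isOfType(SoundNode::classType) instead of needing a separate cast
method for every node class.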
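
P.P.S. And a sketch of the render-action idea: a traversal object that
carries the path's accumulated transform, draws opaque shapes right away,
and holds transparent ones back so they can be drawn last in depth-sorted
order. Again, the names are only illustrative.

    // render_action.h -- only a sketch of the idea, not real openvrml code.
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct Matrix { float m[16]; };   // placeholder for the real transform type
    class ShapeNode;                  // any renderable leaf node

    struct PathEntry {
        const ShapeNode * shape;      // leaf reached by this path
        Matrix transform;             // accumulated world transform
        float eyeDepth;               // distance from the camera
    };

    class RenderAction {
    public:
        // Called for each shape during traversal of the tree.
        void apply(const ShapeNode & shape, const Matrix & transform,
                   float eyeDepth, bool transparent)
        {
            if (!transparent) {
                draw(shape, transform);             // opaque: draw right away
            } else {
                PathEntry e = { &shape, transform, eyeDepth };
                transparentShapes_.push_back(e);    // transparent: defer
            }
        }

        // Called once after the whole tree has been traversed.
        void flushTransparent()
        {
            // back to front: largest eye depth first
            std::sort(transparentShapes_.begin(), transparentShapes_.end(),
                      farther);
            for (std::size_t i = 0; i < transparentShapes_.size(); ++i) {
                draw(*transparentShapes_[i].shape,
                     transparentShapes_[i].transform);
            }
            transparentShapes_.clear();
        }

    private:
        static bool farther(const PathEntry & a, const PathEntry & b) {
            return a.eyeDepth > b.eyeDepth;
        }

        void draw(const ShapeNode &, const Matrix &) {
            // issue the actual GL (or audio) calls here
        }

        std::vector<PathEntry> transparentShapes_;
    };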