Bart Janssens wrote:
> On Thursday 20 November 2003 19:43, Timothy M. Shead wrote:
>>However, this is not a substitute for SDS preview within RenderMesh.
>>The point of RenderMesh is that it "renders" exactly what it sees
>>without interpretation - when you use your plugin to convert the SDS to
>>polygons, the downstream RenderMesh sees a collection of polygons, and
>>renders a collection of polygons. This is an improvement over what we
>>have now in OpenGL of course, but it is a step backwards for your final
>>render, because your RenderMan engine *also* gets a collection of
>>polygons, instead of an SDS.
> Yes, I had thought about that, but I was thinking of solving it by adding an
> option to RenderMesh to make it visible only in the editor. That way, only
> the cage would be passed on to the RenderMan engine and would be rendered
> using its own subdiv algorithm.
> I preferred this approach because putting the subdiv algorithm in RenderMesh
> would inevitably result in further changes to RenderMesh, like options to
> separately control the appearance of the cage and the subdivided mesh.
> Of course, in the end the entire process would be transparent to the user:
> the right RenderMesh objects would appear with default settings so that only
> the cage is sent to the RenderMan engine, while "advanced" users would have
> the option to manually choose what they want rendered.
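Just to make that concrete, the kind of per-object switch you're describing
might look something like this - hypothetical names only, nothing like the
actual K-3D classes:

#include <cstdio>

// Hypothetical per-RenderMesh visibility switch.
enum class render_visibility
{
    editor_only,     // show the subdivided preview in the editor, hide from RenderMan
    renderman_only,  // pass the untouched cage to the RenderMan engine, hide in the editor
    everywhere       // visible to both (today's behavior)
};

struct render_mesh
{
    render_visibility visibility = render_visibility::everywhere;
};

int main()
{
    // Default wiring: one RenderMesh previews the subdivided surface in
    // the editor, while a second passes the cage on to RenderMan.
    render_mesh preview{render_visibility::editor_only};
    render_mesh cage{render_visibility::renderman_only};
    std::printf("%d %d\n", static_cast<int>(preview.visibility),
        static_cast<int>(cage.visibility));
    return 0;
}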
One thing you may not be aware of is that RenderMesh incorporates its
own transformation matrix when rendering. This makes instancing
possible, i.e. you can take one battle-droid mesh and connect its output
to the inputs of 1000 RenderMesh objects, each offset slightly from
the last, and you get a parade of droids with minimal overhead. So
conceptually, if there are two RenderMesh objects, it means the user wants
to see two different copies of something - your approach would introduce
problems with keeping the transformations of the two objects synchronized.
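To make the instancing idea concrete, here's a rough sketch - again with
made-up types, not the real K-3D classes - of how each instance carries
only its own transform plus a reference to the shared geometry:

#include <memory>
#include <vector>

struct mesh { /* shared geometry: points, polygons, ... */ };

struct matrix4
{
    double m[4][4];
};

matrix4 translation(double x, double y, double z)
{
    return matrix4{{{1, 0, 0, x}, {0, 1, 0, y}, {0, 0, 1, z}, {0, 0, 0, 1}}};
}

// An instance stores a transform plus a reference to the shared source
// mesh - no geometry is ever copied.
struct render_mesh_instance
{
    std::shared_ptr<const mesh> source;
    matrix4 transform;
};

int main()
{
    const auto droid = std::make_shared<const mesh>();

    // A parade of 1000 droids, each offset slightly from the last - the
    // only per-instance cost is one matrix.
    std::vector<render_mesh_instance> parade;
    for(int i = 0; i != 1000; ++i)
        parade.push_back({droid, translation(i * 2.0, 0, 0)});
    return 0;
}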
I anticipate that RenderMesh *will* end up with a lot of different
properties that affect rendering - currently we have a large number of
hard-coded quantities that affect e.g. the fidelity of patches as drawn
with OpenGL. Plus there are many different ways to visualize a patch -
outlines, control hull, control points, UV curves, the surface itself,
combinations of the above, and so on. Also keep in mind that an
interactive preview of SDS benefits greatly from knowing details about
the viewing parameters - this allows you to adjust the quality of the
subdivision based on proximity to the camera, etc. You can see this
happening with our NURBS patches, BTW ... create a NurbsGrid object
(which still creates my backward "5" object for testing purposes), then
zoom the camera in and out and you can see how the level of subdivision
changes. This is why it is completely appropriate for
subdivision-for-purposes-of-preview to be integrated with the rest of
the UI layer.
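As a rough illustration of that (made-up helper and constants, not actual
K-3D code), a preview might derive its subdivision level from camera
distance along these lines:

#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical mapping from distance-to-camera to a subdivision level,
// clamped to a sensible range. The constants are purely illustrative.
int preview_subdivision_level(double distance_to_camera)
{
    const int min_level = 1;       // coarse, for distant patches
    const int max_level = 5;       // fine, for close-up patches
    const double near_dist = 1.0;  // at or inside this distance, use max_level

    // Drop one level for every doubling of distance beyond near_dist.
    const int drop = static_cast<int>(
        std::log2(std::max(distance_to_camera / near_dist, 1.0)));
    return std::clamp(max_level - drop, min_level, max_level);
}

int main()
{
    for(const double distance : {0.5, 2.0, 8.0, 32.0})
        std::printf("distance %5.1f -> subdivision level %d\n",
            distance, preview_subdivision_level(distance));
    return 0;
}

The real thing would presumably look at the projected screen-space size of
the patch rather than raw distance, but that's the flavor of it.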
Anyway, I'm looking forward to seeing your results!