From: Vitaly V. B. <vit...@us...> - 2004-10-14 12:45:01
On Thu, 14 Oct 2004 12:05:11 +0200 Burkhard Plaum <pl...@ip...> wrote:

> But is this possible at all?
> Did you ever get such a setup working?
> How do you want to detect (at runtime) which lib should be dlopened?
>
> I doubt that current X11 installations allow this at all.

We can run 3 X servers independently. He-he. On different machines!
I'll try that later.

> > To do this, GL calls should be "objectised". Like this:
> > lvglBegin(video, GL_LINES);
> > ...
> > lvglEnd(video);
>
> In this case, each lvgl*() function must internally call
> glXMakeCurrent() or something similar.

Not necessarily. It depends...

> > to call the specific function. If it's done nicely, it should
> > introduce almost no overhead.
>
> In one case lemuria draws more than 5000 triangles in one frame,
> along with normal vectors and texture coordinates.
> It already pushes weaker hardware to the limit.
>
> Propagating all these (15000+ / frame) function calls through another
> lib would make it unusable for many people.

Hm, suppose the overhead of one function call is 100 instructions (a VERY
high estimate). That gives (less than that, in practice, due to the
architecture) 15000 * 100 = 1,500,000 CPU cycles. On a 1 GHz CPU that's
1.5 ms. Is that much?

> > In theory ;) I can't see any problems with that.
>
> Practice looks different in this case.
>
> Wrapping library calls makes sense in many cases to keep things clean.
> But here, we are inside the innermost rendering loop, and speed really
> becomes important.

The CPU is not the bottleneck here. Memory, the AGP bus, the GPU -- yes,
maybe. Software must be extremely highly optimized (for _this_ hardware
only!) before the CPU becomes the bottleneck.

> Hardware and software developers spend a lot of time speed-optimizing
> OpenGL calls. So it's a bad idea to slow them down again
> only to support some esoteric hardware configurations.

Let's support a cluster! :) I'll try this in a few days.

--
Vitaly
GPG Key ID: F95A23B9