From: Bernhard W. <be...@bl...> - 2004-08-06 12:12:31
Hi Christos,

>> I do not agree about that. If you sync to vblank the software is
>> sleeping/idling (I do not know how nvidia implemented it) till the new
>> frame is spit out (there is just the main thread in TORCS). This can
>> cause a performance loss (ok, it depends how you define better
>> performance).
>
> Oh, yes. That depends on whether the client (torcs) sleeps from the
> moment of the Render request until the first vblank. This would be
> bad. A better way would be something like this:
>
> 0. Set flag=false
>
> 1. Client sends Render()
>
> 2.
>    - If (flag) then client must sleep.
>    - If (!flag), Server sets flag to true and starts rendering scene in
>      back buffer; the client continues running.
>
> 3. Client continues processing and at some point goes to 1.
>
> I. VBLANK Interrupt: If (flag) and the back scene has finished
>    rendering: swap buffers and set flag to false (this would mean that
>    if a client had just sent a Render(), we immediately start rendering
>    the new scene); otherwise do nothing.
>
> <end>
>
> Anyway, this stuff can be implemented on the server side so that the
> game can work fluidly without knowing anything about double-buffering.

That depends very much on a lot of conditions (Def: "the server" = the
OpenGL server side):

- When you cannot render, you have to store the commands in a buffer of
  finite size, so when the buffer is full you get stalled again
  (immediate mode returns when the command has been executed from the
  client's point of view, so the command must at least have been
  enqueued on the server).
- You definitely need to signal the server when the scene is complete,
  otherwise it will potentially never draw (glFlush, or the tough
  blocking one, glFinish).
- If you do an OpenGL query, then the server must render first to give
  the answer.

So whether you can exploit the potential parallelism depends very much
on the OpenGL driver, the hardware capabilities, the glutSwapBuffers
implementation and on the implementation of the application...
NEVER read data back from the server if time matters. A nice example
which could cause stalls is the readback of the height in the flying
camera: does it require a read of OpenGL server state? If so, then this
stalls and TORCS goes to sleep till the result is here (because OpenGL
needs to render to be able to answer the query).

I think TORCS is quite lousy where OpenGL performance is concerned,
because we render most stuff in immediate mode. Things like the
possibility of deformable objects also kill performance (otherwise you
could use vertex_buffer_objects or display lists, sharing the whole
vertices of equal cars and just changing texture state). The readback
of the height for the flying camera kills performance as hell as well,
I guess... before that was introduced (1.2.1) I could run TORCS over
the network (remote OpenGL, indirect rendering); now (1.2.2) it's
impossible.

> So, you can only get lockups/glitches if the simulation code is too
> fast :)

I don't think so; a lot of constraints need to be fulfilled before that
finally works...

bye, Bernhard.

--
visit my homepage http://www.berniw.org
coming soon: The TORCS Racing Board, http://www.berniw.org/trb