I've been working on a real-time rendering application for some time
now. I should say it's real-time-like, as it is running under Linux and
we achieve a consistent 85 fps for the simple scenes we render. We don't
use framebuffer objects; double-buffered visuals work fine for us instead.
We are using r200 drivers/cards, so I have no experience with the X###
cards. I haven't been using fglrx, but I've had success with NVIDIA
drivers and cards.
The biggest headache has been driver bugs and missing features. Fglrx was
very problematic that way (and that's why we're not using it). The r200
driver has served us very well. This is used in academic research, so I
don't want to live too close to the bleeding edge: experimenters need
the stuff to work with available hardware (which I can get from
university surplus for very cheap!). I must point out that you are
likely working with a very different type of rendering than we are (we
use simple 3D scenes with flat shading), so it's very possible you need a
little more firepower than we do.
I'd be glad to share some of my experiences with you off-list, but
here's a brief list of things that come to mind.
Most important is to run full-screen. Anything less means X makes
decisions about when your window is drawn, which might not be optimal for
your application. Of course it depends on what you really mean by
"real-time", how fast your data is sampled, and how fast you want to
render your scenes (i.e., the frame rate).
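For what it's worth, here's a minimal sketch (not our production code) of
what I mean by a full-screen setup: a double-buffered GLX window sized to
cover the whole screen, with override_redirect so the window manager leaves
it alone. Error checking is omitted and the visual attributes are just
illustrative.

/* gcc fullscreen.c -lGL -lX11 */
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);

    /* Double-buffered RGBA visual with a depth buffer. */
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
    XVisualInfo *vi = glXChooseVisual(dpy, scr, attribs);

    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, scr),
                                   vi->visual, AllocNone);
    swa.override_redirect = True;          /* bypass the window manager */
    swa.event_mask = KeyPressMask;

    Window win = XCreateWindow(dpy, RootWindow(dpy, scr), 0, 0,
                               DisplayWidth(dpy, scr),
                               DisplayHeight(dpy, scr),
                               0, vi->depth, InputOutput, vi->visual,
                               CWColormap | CWOverrideRedirect | CWEventMask,
                               &swa);
    XMapRaised(dpy, win);

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True /* direct */);
    glXMakeCurrent(dpy, win, ctx);

    printf("direct rendering: %s\n", glXIsDirect(dpy, ctx) ? "yes" : "no");

    /* ... render loop would go here ... */

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}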
I run with buffer swaps synced to the blanking period. That's primarily
because my application is used where we absolutely cannot tolerate
tearing. Furthermore, we must check every frame that we have not
_missed_ a frame, and we report that to the client. Again, that's a demand
of the environment (visual neuroscience). We are pushing for higher frame
rates; ideally we'd like to run at 100-140 Hz. The visual system doesn't
respond much to higher frequencies anyway. Tearing is almost ALWAYS
unacceptable in our setup.
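The missed-frame check can be as simple as timing each swap. Here's a
rough sketch, assuming swaps are already synced to vblank (on DRI drivers
that's typically a driver/driconf setting rather than an API call), and
assuming an 85 Hz refresh; whether glFinish() actually blocks until the
swap has completed is driver-dependent, so treat this as an illustration,
not gospel.

#include <GL/glx.h>
#include <time.h>
#include <stdio.h>

#define REFRESH_HZ   85.0
#define FRAME_PERIOD (1.0 / REFRESH_HZ)
#define TOLERANCE    (FRAME_PERIOD * 0.5)   /* half a frame of slack */

static double now_seconds(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec * 1e-9;
}

/* Call once per frame in place of a bare glXSwapBuffers().
 * Returns the number of frames we appear to have missed. */
int swap_and_check(Display *dpy, GLXDrawable win)
{
    static double last_swap = 0.0;
    int missed = 0;

    glXSwapBuffers(dpy, win);
    glFinish();    /* try to block until the swap has really happened */

    double t = now_seconds();
    if (last_swap > 0.0) {
        double dt = t - last_swap;
        if (dt > FRAME_PERIOD + TOLERANCE)
            missed = (int)(dt / FRAME_PERIOD + 0.5) - 1;
    }
    last_swap = t;

    if (missed > 0)
        fprintf(stderr, "missed %d frame(s)\n", missed);
    return missed;
}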
I use stripped-down PIII systems that run a realtime-enabled kernel (see
http://www.osadl.org/). Stopping unnecessary processes, locking my app
into memory (using mlockall()), and using the rt-enabled kernel reduced
latencies tremendously. Further, the rendering machine is used
exclusively for that purpose. A client sends commands to the machine via
a dedicated TCP/IP connection (1-4 ms latency per message) or GPIB (ideal
but expensive, with very little latency and high throughput). The
rt-kernel allows for using rt-priority threads: basically threads that
cannot be interrupted, even by the kernel (not strictly true; see the
osadl site for more). Thus I can run my rendering and input threads at
high priority and run low-priority stuff while I'm waiting for the card
to render the scene.
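The memory-locking and priority setup boils down to a few calls. A rough
sketch follows, assuming an rt-enabled kernel and sufficient privileges
(root, or CAP_SYS_NICE/CAP_IPC_LOCK); the priority value 50 is just an
illustrative number, not something I'm recommending.

#include <sys/mman.h>
#include <sched.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

int setup_realtime(int rt_priority)
{
    /* Keep current and future pages resident so page faults cannot
     * stall the rendering thread. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
        fprintf(stderr, "mlockall failed: %s\n", strerror(errno));
        return -1;
    }

    /* Give the calling thread a fixed real-time (SCHED_FIFO) priority;
     * low-priority housekeeping threads keep the default SCHED_OTHER. */
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = rt_priority;
    if (pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp) != 0) {
        fprintf(stderr, "pthread_setschedparam failed\n");
        return -1;
    }
    return 0;
}

/* Typical use, e.g. at the top of the rendering thread:
 *     setup_realtime(50);
 */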
That's all I have time for now. Feel free to contact me if you have more
questions or comments.
Brian Paul wrote:
> kadambi wrote:
>> At work, we are using ATI Radeon X850 card for rendering real time 3d CT data
>> (at least we are trying to!). Strictly with Xorg 7.2 or above and with
>> Mesa3d/Radeon driver.
>> DRI says this card does do direct rendering (xorg 7.2 supports this)
>> (glxinfo says "direct rendering = YES"). Has anybody here used this card for
>> rendering real time data? DRI currently does not support framebuffer objects.
> Do you mean GL_EXT_framebuffer_object?
>> Is it still possible to do a software workaround for certain
>> extensions (I know mesa3d automatically does this)?
> GL_EXT_framebuffer_object could probably be implemented in any driver
> with software fallbacks.
>> We are strictly trying to stay with mesa/radeon and avoid fglrx. Any
>> suggestions and pointers would be appreciated. Any pointers to programs
>> which render real time data would be great.
> I don't know of any.