From: Brian P. <bri...@tu...> - 2006-03-01 18:27:09
Niraj Tolia wrote:
> This report is against CVS HEAD. With the last round of changes, the
> Async IO code path will block every time there is no update to send.
> The relevant stack trace is
>
> #3  0x00a63d75 in crWaitSemaphore (s=0xca46b8) at threads.c:259
> #4  0x00c8786c in vncspuWaitDirtyRects (region=0xa246884, frame_num=0xa2469e8)
>     at vncspu.c:335
> #5  0x00c8b13e in rf_client_updatereq () at client_io.c:516
> #6  0x00c8989d in aio_process_input (slot=0xa246728) at async_io.c:562
> #7  0x00c89486 in aio_mainloop () at async_io.c:390
>
> We never ran into this issue earlier because the VNC SPU had a fixed
> frame update rate and we would always have something to send out.
> However, now we can get stuck because of the above. Things the user
> runs into include attempts at new connections "hanging". This happens
> because we never return to the aio_mainloop unless an update is
> received from the remote application running under crappfaker.

The CVS HEAD code should work fine for a single VNC viewer, but I can
see how it would hang with multiple viewers. I hadn't gotten around to
fixing that yet.

> As we already record that an update was requested on the current
> client slot, one option is to return to the aio_mainloop and defer
> sending the update until something is actually available. The deferred
> update can be sent by leveraging the s_idle_func used in the AIO code.
> An example patch of how this could be done is attached. One
> disadvantage is that we need to reduce the timeout value of the
> select() call in aio_mainloop(). Testing did not indicate any
> increased CPU usage, though, as the idle function is cheap.
>
> I have a decent handle on the VNC SPU code, so if you feel this should
> be done differently, just let me know and I should be able to code up
> another patch.

Even with your patch, when I attach two VNC viewers, things aren't
running as expected. One viewer or the other will freeze up for long
periods. I'm looking into it.
I may not have a fix until next week though. -Brian