From: DRC <dco...@us...> - 2015-05-06 15:31:43
I'm not sure whether any of the information that follows is germane to the issue you're having, but I thought I would share it all the same, since the description of the issue rang a few bells in my head. Minimally, this information may provide some insight into the interaction between the deferred update timer and the TurboVNC encoder (which libvncserver now uses, as of version 0.9.9). Section 5.2 of this report:

http://www.virtualgl.org/pmwiki/uploads/About/vglperf21.pdf

goes into some detail about the history of the deferred update timer within TurboVNC and the issues it posed when the TightVNC codec and protocol were first accelerated to make TurboVNC. I'll attempt to summarize here:

-- Initially, TightVNC 1.3.x used a deferred update timer value of 40 ms, and the viewer waited until the current framebuffer update (FBU) was drawn before sending a new framebuffer update request (FBUR) to the server. As you can imagine, this is reeeeally slow on low-latency networks, but because the TightVNC codec was already really slow, the network latency wasn't the limiting factor. Once the codec became an order of magnitude faster in TurboVNC, however, the network latency became a big issue.

-- In TurboVNC 0.3.2, I attempted to improve the performance on low-latency networks by pipelining the viewer a bit: now it would send an FBUR back to the server before decoding and drawing the current FBU. This worked great on high-latency networks, but it created a new performance issue on low-latency networks. For reasons that are more properly explained in the report above, whenever the new FBUR was received by the server before the application had drawn a new frame, the new frame was deferred by the VNC server. Conversely, if the application had drawn a new frame before the new FBUR arrived, then the FBU was sent immediately.
Thus, pipelining the viewer caused the new FBUR to arrive at the server "too quickly" when the network latency was low, and this caused every frame the application drew to be deferred. With a DUT value of 40 ms, this slowed performance to a crawl. It was essentially the same as introducing 40 ms of network latency, and the overall update rate dropped to 1 / (2 * 40 ms) = about 12/sec. On modern PC equipment available at the time, TurboVNC was normally capable of streaming 1280 x 1024 at 25 fps on a 100 Mbit network, so this represented a 50% performance regression on such networks.

-- In TurboVNC 0.3.3, I introduced a switch that allowed the user to turn pipelining in the viewer on or off, depending on whether they were on a low-latency or high-latency network, but this was a total hack. Ultimately, in TurboVNC 0.4, I discovered after a great deal of testing that using a 1 ms deferred update timer value along with the pipelining feature in the viewer provided the best possible performance on both high-latency and low-latency networks.

So I think the basic takeaway is that, in order to achieve the best possible performance for high-update-rate applications without using the RFB flow control extensions, you have to send a new FBUR in the client before processing the current FBU, and you have to set the server's DUT to a very low value (my own testing with VirtualGL indicated that 1 ms was the best value, but your mileage may vary). Even with this, though, you're still going to be latency-bound (limited to [1 / latency] updates/sec). This means that, if the network bandwidth and encoding settings would normally accommodate a higher update rate than [1 / latency], that higher update rate will never be achieved. That is the raison d'etre of the RFB flow control extensions.

On 5/6/15 3:06 AM, Paul Melis wrote:
> Hmmm, is this related to libvncserver not supporting the
> ContinuousUpdates and Fence RFB extensions (see e.g.
> [1]), which most
> other VNC servers these days do support? I mean, using an application
> with a high update rate in a regular VNC session (e.g. TurboVNC) has never
> been an issue for me so far, but that might be because the update rate
> from server to client in that case is decoupled from the client request
> rate and limited on the server side by the extensions, while the actual
> framebuffer update rate on the server is much higher.
>
> Paul
>
> [1] http://sourceforge.net/p/libvncserver/mailman/message/31374353/
>
> On 04-05-15 17:51, Paul Melis wrote:
>> Hi all,
>>
>> I'm trying to get my head around a performance issue with libvncserver.
>> See the attached example, which reproduces the problem (the VNC client
>> I'm using is TigerVNC 1.4.3 on an Arch Linux 64-bit system, with
>> libvncserver 0.9.10). The example simply draws colored (and noisy, to
>> make compression harder) rectangles whenever the mouse is moved while a
>> button is pressed, in order to judge the framebuffer update
>> performance.
>>
>> The example has two arguments: the deferUpdateTime value to use and a
>> sleep value to use when drawing (both in ms). The latter value can be
>> increased to simulate a draw routine that takes some time to complete
>> (which is typical for my actual application).
>>
>> When I use the default deferUpdateTime value of 5 ms and 0 for the
>> sleep time, the performance in the client is great: draw updates come
>> almost instantly, even when using very fast mouse movements (and thus
>> many rectangles get drawn).
>>
>> When I increase the sleep value to 10 ms, so each rectangle takes a bit
>> of time to draw (but can still manage around 100 fps), and use fast
>> mouse movements, the updates are no longer smooth and whole bunches of
>> rectangles get drawn intermittently. This suggests that an update is
>> only sent to the client every few framebuffer updates.
>> I first thought
>> that this had to do with the deferUpdateTime, but setting it to 0
>> doesn't really help.
>>
>> Maybe the choppy behaviour is an interaction between the mouse events
>> being sent at a high rate during fast movements versus the slow fb
>> update response to each event. When using slower mouse movements,
>> rectangles get drawn mostly one at a time, suggesting the framebuffer
>> updates can keep up. If this is the case, is there a general way to
>> handle this kind of workload (other than rate-limiting the
>> rfbMarkRectAsModified() calls)?
>>
>> Regards,
>> Paul
>>
>> _______________________________________________
>> LibVNCServer-common mailing list
>> Lib...@li...
>> https://lists.sourceforge.net/lists/listinfo/libvncserver-common