From: Fabian K. <fk...@fa...> - 2011-01-01 14:52:12
Lee <le...@gm...> wrote:

> On 12/31/10, Fabian Keil <fk...@fa...> wrote:
> > Lee <le...@gm...> wrote:
> >
> >> ref [ ijbswa-Support Requests-3135180 ] Low performance while
> >> streaming
> >>
> >> I got about a 1Mb/s difference between no proxy and using privoxy on
> >> the speedtest download sites. But removing the check for the client
> >> socket still alive in the jcc.c chat function seems to get the
> >> download speeds about equal with & without Privoxy.
> >
> > Interesting, I assume you're testing with some Windows version?
>
> Yes; Windows Vista, Firefox 3.6.13 and current Privoxy from CVS.
>
> > The overhead seems to be measurable with FreeBSD 9.0 as well:
> >
> > # standard Privoxy
> > fk@r500 ~ $curl -o /dev/null http://10.0.0.1:81/10G-response
> >   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
> >                                  Dload  Upload   Total   Spent    Left  Speed
> > 100 10.0G  100 10.0G    0     0   195M      0  0:00:52  0:00:52 --:--:--  199M
> > # no proxy
> > fk@r500 ~ $HTTP_PROXY= curl -o /dev/null http://10.0.0.1:81/10G-response
> >   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
> >                                  Dload  Upload   Total   Spent    Left  Speed
> > 100 10.0G  100 10.0G    0     0   196M      0  0:00:52  0:00:52 --:--:--  182M
>
> standard Privoxy: 199M
> no proxy: 182M
> with socket_is_still_alive() disabled: 211M
>
> That no proxy gives the worst results is a bit surprising. Are you
> running a local web server or does "http://10.0.0.1:81/10G-response"
> go out to the Internet?

Yes, 10.0.0.1 is a local address on the system running Privoxy and curl.

However, I also just noticed that I wasn't actually testing without
Privoxy yesterday, as I misspelled http_proxy (using all uppercase).
Doh.
Doing it correctly:

fk@r500 ~ $curl -o /dev/null http://10.0.0.1:81/10G-response
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.0G  100 10.0G    0     0   205M      0  0:00:49  0:00:49 --:--:--  203M
fk@r500 ~ $http_proxy= curl -o /dev/null http://10.0.0.1:81/10G-response
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.0G  100 10.0G    0     0   417M      0  0:00:24  0:00:24 --:--:--  423M
fk@r500 ~ $curl -o /dev/null http://10.0.0.1:81/10G-response
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.0G  100 10.0G    0     0   208M      0  0:00:49  0:00:49 --:--:--  204M
fk@r500 ~ $http_proxy= curl -o /dev/null http://10.0.0.1:81/10G-response
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.0G  100 10.0G    0     0   415M      0  0:00:24  0:00:24 --:--:--  413M

Interestingly enough, the difference is less impressive with fetch(1):

fk@r500 ~ $fetch -o /dev/null http://10.0.0.1:81/10G-response
/dev/null                                     100% of   10 GB  120 MBps 00m00s
fk@r500 ~ $http_proxy= fetch -o /dev/null http://10.0.0.1:81/10G-response
/dev/null                                     100% of   10 GB  143 MBps 00m00s
fk@r500 ~ $fetch -o /dev/null http://10.0.0.1:81/10G-response
/dev/null                                     100% of   10 GB  119 MBps 00m00s
fk@r500 ~ $http_proxy= fetch -o /dev/null http://10.0.0.1:81/10G-response
/dev/null                                     100% of   10 GB  146 MBps 00m00s
fk@r500 ~ $fetch -o /dev/null http://10.0.0.1:81/10G-response
/dev/null                                     100% of   10 GB  121 MBps 00m00s
fk@r500 ~ $http_proxy= fetch -o /dev/null http://10.0.0.1:81/10G-response
/dev/null                                     100% of   10 GB  146 MBps 00m00s

It would probably be worth investigating why using curl without Privoxy
can be about twice as fast. I don't think the difference has to be that
big.

> >> It seems close enough to no difference with/without privoxy now, so
> >> would it be OK to remove the check for
> >> !socket_is_still_alive(csp->cfd) in chat?
> >
> > If the content is supposed to be filtered, the check prevents
> > Privoxy from reading too much data from the server if the client
> > no longer cares. If the content doesn't get filtered, however,
> > skipping the check should be fine.
> >
> > Can you confirm that the attached patch gets you the same speed-up?
>
> seems like not checking is a bit faster:
> BUFFER_SIZE = 5000
>
> -- if (buffer_and_filter_content && !socket_is_still_alive(csp->cfd))
> DC: 12.91 down   3.03 up
> LA: 12.29        1.77
> CG: 13.11        2.36
>
> -- no proxy
> DC: 13.19 down   3.22 up
> LA: 11.65        2.66
> CG: 12.96        3.11
>
> -- if (0 && buffer_and_filter_content && !socket_is_still_alive(csp->cfd))
> DC: 12.81 down   3.08 up
> LA: 13.55        1.79
> CG: 13.41        2.59
>
> but it's clearly not the 1Mb/s penalty I see when just checking
> !socket_is_still_alive(csp->cfd)

Did you run the tests multiple times with reproducible results?
I'm wondering if the difference is statistically significant.

Fabian