curl-loader-devel Mailing List for curl-loader - web application testing (Page 35)
From: Robert I. <cor...@gm...> - 2007-09-09 09:15:26
Hi Jari,

On 9/7/07, jari pietila <jar...@ho...> wrote:
> My client in this test is a dual-core P4 2.8GHz, 1GB RAM, Fedora 5,
> and with the -t 2 option there was almost linear scaling, reaching about 6000 CAPS!
> - I then switched to nginx, and now got 8000 CAPS with -t 2 and 1000
> virtual clients.

The numbers match our experience.

> I also experimented a little with the number of virtual clients, but I got
> about the same results with 256 clients,
> and performance started to degrade around 1500 clients.

Yes, indeed; to get more CAPS, it's about the same with 200-1000 clients.

> - Finally, I tried a configuration where the client closes the connection,
> but with this I got much lower CAPS with both lighttpd and nginx.

This also matches our experiments.

If you see idle CPU remaining on your loading machine, you can try adding more servers. Otherwise, you can scale either by increasing the CPUs/memory of the client and server machines or by increasing their number.
http://curl-loader.sourceforge.net/high-load-hw/index.html

Thanks and best wishes.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
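For reference, a minimal sketch of the multi-threaded run discussed above, assuming the usual -f flag for naming the batch configuration; the configuration path is hypothetical:

  # Hypothetical configuration path; -t 2 runs two loading threads,
  # roughly one per CPU core of the dual-core client machine.
  ./curl-loader -f ./conf/my_load.conf -t 2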
From: Robert I. <cor...@gm...> - 2007-09-09 05:52:31
Hi folks,

Added an option to use random records from a file (e.g. credentials). One can load e.g. 100000 records and use them in a random fashion for 1000 clients. We are not ensuring uniqueness of the records used.

Two more tags have been added to assist in doing that: FORM_RECORDS_FILE_MAX_NUM, the number of records to load from a records file, and FORM_RECORDS_RANDOM, which tells the program to use the loaded records randomly rather than linking each record to a client by index.

A file with an example of usage has been added: ./conf-examples/random_file_records.conf

The feature was requested by Jeremy Brown, but some other people have also expressed interest in testing their web sites with a rather small number of concurrent virtual clients while still exercising all possible users.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
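As a rough sketch, the two new tags might combine with the form tags like this; the record file path, record count, and FORM_STRING value are illustrative assumptions, not the shipped example (see ./conf-examples/random_file_records.conf for that):

  # Illustrative fragment only; consult ./conf-examples/random_file_records.conf.
  FORM_STRING=username=%s&password=%s      # assumed form template
  FORM_RECORDS_FILE=./credentials.cred     # assumed records file, one record per line
  FORM_RECORDS_FILE_MAX_NUM=100000         # load up to 100000 records from the file
  FORM_RECORDS_RANDOM=1                    # pick records randomly, not by client index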
From: Aleksandar L. <al-...@no...> - 2007-09-07 08:26:03
Hi,

On Fre 07.09.2007 03:12, jari pietila wrote:
>Robert,
>
>I did some experimenting and can share the results so far:

Nice of you to share your experience ;-), thanks.

>- with lighttpd and minimal server tuning, I got around 3300 CAPS with
> 1000 virtual clients
> (curl-loader 0.41 re-compiled with optimization), compared to 1200
> with Apache.
>
> Then the good news:
> My client in this test is a dual-core P4 2.8GHz, 1GB RAM, Fedora 5,
> and with the -t 2 option there was almost linear scaling, reaching about 6000 CAPS!
> Very impressive, thanks for pointing this out.
>
> - I then switched to nginx, and was able to reach 8000 CAPS
> with -t 2 and 1000 virtual clients
> (no change in client configuration).

Cool ;-)

> But there is also a change in server behavior: lighttpd does NOT use
> persistent connections and closes each TCP connection (TCP FIN server->client).
> This is actually the kind of behavior I wanted to see:
> NO persistence, thus creating load on the DUT, which needs to track TCP
> session creation and closing.

I'm not a fan of lighty, but to be fair, have you looked into:
http://trac.lighttpd.net/trac/wiki/Docs%3APerformance

> Nginx with default settings has persistence
> (and I'm not sure if there is a directive to make it not do so),
> so this seems to explain the higher CAPS number.

Afaik there is no option to switch off keep-alive, but you can tune it:
http://wiki.codemongers.com/NginxHttpCoreModule#keepalive_timeout

> I also experimented a little with the number of virtual clients,
> but I got about the same results with 256 clients,
> and performance started to degrade around 1500 clients.
>
> - Finally, I tried a configuration where the client closes the connection, but with this
> I got much lower CAPS with both lighttpd and nginx.

Have you tried turning tcp_{nodelay,nopush} on/off, just for info?
http://wiki.codemongers.com/NginxHttpCoreModule#tcp_nodelay

BR
Aleks
From: jari p. <jar...@ho...> - 2007-09-07 03:16:48
Robert,

I did some experimenting and can share the results so far:

- with lighttpd and minimal server tuning, I got around 3300 CAPS with 1000 virtual clients
  (curl-loader 0.41 re-compiled with optimization), compared to 1200 with Apache.

Then the good news:
My client in this test is a dual-core P4 2.8GHz, 1GB RAM, Fedora 5,
and with the -t 2 option there was almost linear scaling, reaching about 6000 CAPS!
Very impressive, thanks for pointing this out.

- I then switched to nginx, and now got 8000 CAPS with -t 2 and 1000 virtual clients
  (no change in client configuration).

But there was also a change in server behavior: lighttpd does NOT use persistent connections and closes each TCP connection (TCP FIN server->client).
This is actually the kind of behavior I wanted to see:
NO persistence, thus creating load on the DUT, which needs to track TCP session creation and closing.

Nginx with default settings has persistence (and I'm not sure if there is a directive to make it not do so), so this seems to explain the higher CAPS number.

I also experimented a little with the number of virtual clients, but I got about the same results with 256 clients, and performance started to degrade around 1500 clients.

- Finally, I tried a configuration where the client closes the connection,
but with this I got much lower CAPS with both lighttpd and nginx.
From: jari p. <jar...@ho...> - 2007-09-07 03:12:31
Robert,

I did some experimenting and can share the results so far:

- with lighttpd and minimal server tuning, I got around 3300 CAPS with 1000 virtual clients
  (curl-loader 0.41 re-compiled with optimization), compared to 1200 with Apache.

Then the good news:
My client in this test is a dual-core P4 2.8GHz, 1GB RAM, Fedora 5,
and with the -t 2 option there was almost linear scaling, reaching about 6000 CAPS!
Very impressive, thanks for pointing this out.

- I then switched to nginx, and was able to reach 8000 CAPS with -t 2 and 1000 virtual clients
  (no change in client configuration).

But there is also a change in server behavior: lighttpd does NOT use persistent connections and closes each TCP connection (TCP FIN server->client).
This is actually the kind of behavior I wanted to see:
NO persistence, thus creating load on the DUT, which needs to track TCP session creation and closing.

Nginx with default settings has persistence (and I'm not sure if there is a directive to make it not do so), so this seems to explain the higher CAPS number.

I also experimented a little with the number of virtual clients, but I got about the same results with 256 clients, and performance started to degrade around 1500 clients.

- Finally, I tried a configuration where the client closes the connection,
but with this I got much lower CAPS with both lighttpd and nginx.

-jari
From: Robert I. <cor...@gm...> - 2007-09-05 04:45:19
This is the bugfix stable version, which fixes:
- an FTP load bug, where opened FTP connections were kept forever, even with FRESH_CONNECT=1 specified;
- a compilation bug for Linux distributions where PAGE_SIZE is not defined in asm/page.h.

Added a detailed logging feature with the -d command-line option.

Enjoy!

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-09-04 19:55:33
Hi Vinodh,

On 9/4/07, Bug Nux <bu...@gm...> wrote:
> Robert,
> At the moment we have curl-loader packages for PCLinuxOS, and these are
> readily available from our repository. The instructions for using the
> repository are available on our website. We encourage you to point
> curl-loader users to this and to the BugnuX LiveCD option. Currently we have no
> plans of creating Debian packages, due to lack of time. If we do plan to
> create a Debian package, I will let you know. We do have the srpms; if you
> have any ideas for using them, let me know.

Great, forwarding this e-mail to our development and users list.
Thank you.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-09-04 17:28:16
On 9/4/07, Michael Moser <mos...@gm...> wrote:
> -x is a very good name,
> like bash -x kuku.sh

It was renamed to the -d option; try it.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Michael M. <mos...@gm...> - 2007-09-04 13:27:42
-x is a very good name,
like bash -x kuku.sh

On 9/2/07, Robert Iakobashvili <cor...@gm...> wrote:
>
> Artur,
>
> A detailed logging option (-x) has been added, as you suggested; it outputs
> to the <batch-name>.log file all request and response headers and bodies.
> It is recommended not to combine it with the verbose option -v.
>
> This option is supposed to enable HTTP and FTP request/response
> tracing for encrypted HTTPS/FTPS traffic, where sniffers are not
> helpful.
>
> If you can look into it, it would be appreciated.
> Thanks.
>
> --
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-09-04 06:27:24
On 9/2/07, Robert Iakobashvili <cor...@gm...> wrote:
> Artur,
>
> A detailed logging option (-x) has been added, as you suggested; it outputs
> to the <batch-name>.log file all request and response headers and bodies.
> It is recommended not to combine it with the verbose option -v.
>
> This option is supposed to enable HTTP and FTP request/response
> tracing for encrypted HTTPS/FTPS traffic, where sniffers are not
> helpful.

Ping.

I have renamed it to -d[etailed] instead and fixed a problem with the buffer size.
Could somebody try the latest svn with the -d command-line option?
Thanks.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
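A minimal sketch of the requested test run, assuming a hypothetical batch configuration path:

  # -f names the batch configuration; -d writes all request and response
  # headers and bodies to <batch-name>.log for inspection.
  ./curl-loader -f ./conf/my_test.conf -d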
From: Robert I. <cor...@gm...> - 2007-09-03 09:39:34
Hi Jari,

On 9/2/07, jari pietila <jar...@ho...> wrote:
> My main purpose for using curl-loader is to load a device (a stateful fw)
> between clients and server.
>
> The question: how to optimize Apache and curl-loader query parameters
> to achieve the max in terms of CAPS?

The default Apache configuration is really limiting.

> - how to optimize Apache (or would lighttpd be better?)

I would suggest you use nginx, which was found to be even better than lighttpd. It has a Debian package and nice docs here, but you can use just the default conf:
http://wiki.codemongers.com/Main?action=show&redirect=Nginx

> - what curl-loader client options could be adjusted to increase the connection rate?

We have seen that you do not need many connections to reach high CAPS. Something between 100 and 1000 is enough.

Compile curl-loader with optimization as described:
http://curl-loader.sourceforge.net/doc/faq.html#big-load
http://curl-loader.sourceforge.net/high-load-hw/index.html

Most probably you do not need to fine-tune Linux, but you can try.

What is most important is to ensure persistence of the TCP connections. curl-loader does its best here, so make sure that the server does as well.

If your client machine has multiple cores or multiple CPUs, you can also try running it with the -t <number> option, where the numbers to try are 2, 4, 8.

With nginx I have seen 6000-8000 CAPS with a small file of 100-200 bytes and 200-400 curl-loader virtual clients, each with an established keep-alive connection.

As a guess, you will get the highest number of CAPS when:
TIMER_URL_COMPLETION = 0
TIMER_AFTER_URL_SLEEP = 0

Best wishes, and report your best experience to the list.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
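A rough sketch of a URL section tuned along these lines; the address, file, and client count are illustrative assumptions, while the two timer tags are the ones named above:

  # Illustrative fragment, not a complete batch file.
  CLIENTS_NUM_MAX=400                 # assumed count in the 200-400 range above
  URL=http://192.168.1.1/small.html   # assumed small file of 100-200 bytes
  TIMER_URL_COMPLETION=0              # no per-URL completion timeout
  TIMER_AFTER_URL_SLEEP=0             # no sleep between fetches, maximizing CAPS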
From: jari p. <jar...@ho...> - 2007-09-02 18:59:17
First, thank you for an excellent tool!

A slightly OT question: my main purpose for using curl-loader is to load a device (a stateful fw) between clients and server.

The question: how to optimize Apache and curl-loader query parameters to achieve the max in terms of CAPS when retrieving small files from one httpd server.

Now, with 1000 curl clients against an Apache server, I can get max 1200-1300 CAPS. In this test all clients are fetching the same 1k file. The curl-loader config is based on the 10K sample example, but with 1000 clients.

- how to optimize Apache (or would lighttpd be better?)
- what curl-loader client options could be adjusted to increase the connection rate?

(Again, for this particular test I want to maximize connections/sec, and not necessarily throughput or concurrent connections.)

Jari
From: Robert I. <cor...@gm...> - 2007-09-02 15:57:06
Artur,

A detailed logging option (-x) has been added, as you suggested; it outputs to the <batch-name>.log file all request and response headers and bodies. It is recommended not to combine it with the verbose option -v.

This option is supposed to enable HTTP and FTP request/response tracing for encrypted HTTPS/FTPS traffic, where sniffers are not helpful.

If you can look into it, it would be appreciated.
Thanks.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: <Ko...@co...> - 2007-08-28 12:52:20
Hi Robert,

it works just fine so far :-) Thanks a lot for your help.

Best,
Bjoern

________________________________
From: cur...@li... [mailto:cur...@li...] On behalf of Robert Iakobashvili
Sent: Tuesday, 28 August 2007, 09:25
To: web loading and performance testing tool
Subject: Re: Compiling Issues

Hi Björn,

On 8/28/07, Björn Korall <Ko...@co...> wrote:
> Hi Robert,
>
> thanks for your quick responses. We have 64-bit processors over here, and
> the specific page.h does not contain any definitions for CentOS with 64 bit.
> Until I find proper statements and solutions for that, I will use the
> workaround ;-)

If you are confirming that it works for you, I will add the patch to source control.

Thank you.
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-08-28 07:25:11
Hi Björn,

On 8/28/07, Björn Korall <Ko...@co...> wrote:
>
> Hi Robert,
>
> thanks for your quick responses. We have 64-bit processors over here, and
> the specific page.h does not contain any definitions for CentOS with 64 bit.
> Until I find proper statements and solutions for that, I will use
> the workaround ;-)

If you are confirming that it works for you, I will add the patch to source control.

Thank you.
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: <Ko...@co...> - 2007-08-28 06:58:00
Hi Robert,

thanks for your quick responses. We have 64-bit processors over here, and the specific page.h does not contain any definitions for CentOS with 64 bit. Until I find proper statements and solutions for that, I will use the workaround ;-)

Thanks a lot so far :-)
Yours,
Bjoern

________________________________
From: cur...@li... [mailto:cur...@li...] On behalf of Robert Iakobashvili
Sent: Monday, 27 August 2007, 18:05
To: web loading and performance testing tool
Subject: Re: Compiling Issues

On 8/27/07, Robert Iakobashvili <cor...@gm...> wrote:
> Hi Björn Korall,
>
> On 8/27/07, Björn Korall <Ko...@co...> wrote:
>> on a CentOS with a 2.6.18 kernel. However, I get the following error (the
>> compiler is bailing out when compiling mpool.c):
>>
>> gcc -W -Wall -Wpointer-arith -pipe -DCURL_LOADER_FD_SETSIZE=20000
>> -D_FILE_OFFSET_BITS=64 -g -I. -I./inc -I/usr//include -c -o obj/mpool.o mpool.c
>> mpool.c: in function mpool_init:
>> mpool.c:160: error: PAGE_SIZE not declared (first use in this function)
>> make: *** [obj/mpool.o] error 1

Sorry, the correct work-around, after all other options do not work and you cannot find an include file for PAGE_SIZE:

#if !defined PAGE_SIZE
#define PAGE_SIZE 4096
#endif

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-08-27 16:05:16
On 8/27/07, Robert Iakobashvili <cor...@gm...> wrote:
>
> Hi Björn Korall,
>
> On 8/27/07, Björn Korall <Ko...@co...> wrote:
>>
>> on a CentOS with a 2.6.18 kernel. However, I get the following error
>> (the compiler is bailing out when compiling mpool.c):
>>
>> gcc -W -Wall -Wpointer-arith -pipe -DCURL_LOADER_FD_SETSIZE=20000
>> -D_FILE_OFFSET_BITS=64 -g -I. -I./inc -I/usr//include -c -o obj/mpool.o
>> mpool.c
>> mpool.c: in function mpool_init:
>> mpool.c:160: error: PAGE_SIZE not declared (first use in this function)
>> make: *** [obj/mpool.o] error 1

Sorry, the correct work-around, after all other options do not work and you cannot find an include file for PAGE_SIZE:

#if !defined PAGE_SIZE
#define PAGE_SIZE 4096
#endif

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-08-27 15:24:41
Hi Björn Korall,

On 8/27/07, Björn Korall <Ko...@co...> wrote:
>
> on a CentOS with a 2.6.18 kernel. However, I get the following error (the
> compiler is bailing out when compiling mpool.c):
>
> gcc -W -Wall -Wpointer-arith -pipe -DCURL_LOADER_FD_SETSIZE=20000
> -D_FILE_OFFSET_BITS=64 -g -I. -I./inc -I/usr//include -c -o obj/mpool.o
> mpool.c
> mpool.c: in function mpool_init:
> mpool.c:160: error: PAGE_SIZE not declared (first use in this function)
> make: *** [obj/mpool.o] error 1

We are getting PAGE_SIZE from <asm/page.h>.

Please check that you have the glibc headers installed, normally in:
/usr/include
The package is normally called glibc-headers or glibc-dev, depending on your distribution.

If your distribution has PAGE_SIZE defined in some other glibc header, which is not <asm/page.h>, please let us know, and include this header.

The worst work-around could be something like this:

#if !defined (PAGE_SIZE)
#define PAGE_SIZE 4096
#endif

Please report your advances.
Take care and best wishes.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: <Ko...@co...> - 2007-08-27 14:57:45
Dear curl-users,

I am trying to compile curl-loader on CentOS with a 2.6.18 kernel. However, I get the following error (the compiler is bailing out when compiling mpool.c):

gcc -W -Wall -Wpointer-arith -pipe -DCURL_LOADER_FD_SETSIZE=20000 -D_FILE_OFFSET_BITS=64 -g -I. -I./inc -I/usr//include -c -o obj/mpool.o mpool.c
mpool.c: in function mpool_init:
mpool.c:160: error: PAGE_SIZE not declared (first use in this function)
make: *** [obj/mpool.o] error 1

Any suggestions or ideas about what went wrong?

Thanks a lot in advance,
Bjoern
From: Robert I. <cor...@gm...> - 2007-08-08 11:33:16
From the recent ChangeLog:

"When the FRESH_CONNECT=1 tag is specified for FTP URLs, new FTP connections have been opened in due course, but old FTP connections were kept forever and not closed. The bugfix committed to SVN forces old connections to be closed.

Current behavior is the following:
a) an established control FTP connection is closed immediately, just after a file transfer is accomplished;
b) an established data FTP connection is not closed after the transfer, but is kept during the sleeping time and is closed only after a new data connection is opened.

The current behavior may still require allowing twice as many connections on your FTP server as the number of clients in curl-loader.

Thanks to John Gatewood Ham <bur...@gm...> for reporting the problem and testing patches."

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
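For context, a sketch of an FTP URL section exercising the tag in question; the server address, file, and sleep value are illustrative assumptions:

  # Illustrative fragment only.
  URL=ftp://192.168.1.1/file.bin   # assumed FTP server and file
  FRESH_CONNECT=1                  # force a new FTP connection on each cycle
  TIMER_AFTER_URL_SLEEP=3000       # assumed sleeping time (ms) between cycles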
From: Robert I. <cor...@gm...> - 2007-08-06 11:25:09
On 8/6/07, Daniel Stenberg <da...@ha...> wrote:
> On Mon, 6 Aug 2007, Robert Iakobashvili wrote:
>
>> Removing an easy handle from a multi handle would certainly not be right
>> either, I would say. And there as well, the connection is owned by the
>> multi handle and not by the individual easy handle after a completed
>> transfer.
>
> Haha, well I do seem to recall that the world is inhabited by a few other
> souls than just you and me, so there are at least a few other
> possibilities...

Hope so... :)

> As I see it, the problem for a generic API for both the easy interface and
> the multi interface is that the connection is not "owned" by the easy
> interface in the latter case, and when you've detached an easy handle from
> the multi handle, it's not even possible to know which multi handle used to
> host it either.
> So, do you have any suggested proto or API for how this should/could be done
> by an application?

I will try to come up with:
a) some usage cases first,
b) which will allow defining the required behavior,
c) and enable defining some API.

>> What is even worse is that setting the FRESH_CONNECT bit for a handle busy
>> with FTP does open fresh FTP connections (data and control), but without
>> closing the old connections.
>
> Well, libcurl rather aggressively tries to maintain persistent connections,
> so it doesn't close any connection unless told to (unless it runs the
> connection cache full), and you (effectively) tell it to close by using
> CURLOPT_FORBID_REUSE - but that is of course set _before_ a transfer is made.

This is a very good point.

When both FRESH_CONNECT and FORBID_REUSE were set, the behavior of a cycling FTP handle, making FTP downloads and sleeping for several seconds in between, was halfway to the desired, namely:
1) an established control FTP connection was closed immediately, just after a file transfer was accomplished;
2) an established data FTP connection was not closed after the transfer and was kept during the sleeping time; the connection was closed only after a new data connection was opened by a new transfer cycle.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
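A minimal sketch in C of the libcurl option pair discussed above, with a placeholder URL; CURLOPT_FRESH_CONNECT and CURLOPT_FORBID_REUSE are the documented libcurl options, while everything else here is illustrative:

  #include <curl/curl.h>

  int main(void)
  {
    curl_global_init(CURL_GLOBAL_ALL);
    CURL *handle = curl_easy_init();
    if (handle) {
      /* placeholder URL for illustration only */
      curl_easy_setopt(handle, CURLOPT_URL, "ftp://192.168.1.1/file.bin");
      /* do not reuse a cached connection: open a fresh one */
      curl_easy_setopt(handle, CURLOPT_FRESH_CONNECT, 1L);
      /* close the connection after the transfer instead of caching it */
      curl_easy_setopt(handle, CURLOPT_FORBID_REUSE, 1L);
      curl_easy_perform(handle);
      curl_easy_cleanup(handle);
    }
    curl_global_cleanup();
    return 0;
  }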
From: Robert I. <cor...@gm...> - 2007-08-06 11:12:53
On 8/6/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,
>
> The change is good. I still get a few failures, but it doesn't
> degrade like before. Thank you so much for the fast fix! I had just
> finished the man page patch and was going to reply to your last email
> with it when I got this message. My failure rate is now between 2 and
> 3% for 50 connections with a 100-connection server limit. Things are
> still very bad for 100 connections with a 100-connection server limit,
> however. I will try to devise a small test case with small wireshark
> traces for you to look at when you get a chance later. For now I can
> work with just running at 1/2 capacity for my testing.

Each FTP session has a control connection and a data connection. What happens is the following:
a) an established control FTP connection is closed immediately, just after a file transfer is accomplished;
b) an established data FTP connection is not closed after the transfer; it is kept during the sleeping time and is closed only after a new data connection is opened.

Therefore, it might be that you can test only at 1/2 capacity due to the behavior described above.
Hope this serves as a temporary workaround for you.

Thanks for your testing.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: BuraphaLinux S. <bur...@gm...> - 2007-08-06 09:50:20
Hello,

The change is good. I still get a few failures, but it doesn't degrade like before. Thank you so much for the fast fix! I had just finished the man page patch and was going to reply to your last email with it when I got this message. My failure rate is now between 2 and 3% for 50 connections with a 100-connection server limit. Things are still very bad for 100 connections with a 100-connection server limit, however. I will try to devise a small test case with small wireshark traces for you to look at when you get a chance later. For now I can work with just running at 1/2 capacity for my testing.

JGH

On 8/6/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 8/6/07, BuraphaLinux Server <bur...@gm...> wrote:
> >
> > Hello,
> >
> > It appears that libcurl lacks good ftp support if you use more
> > than one connection.
>
> I have tried one more bit, CURLOPT_FORBID_REUSE, as Daniel recommended, and
> it now closes old FTP connections for me.
> Could you please give the latest svn a try?
> Thanks.
>
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-08-06 07:17:31
On 8/6/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,
>
> It appears that libcurl lacks good ftp support if you use more
> than one connection.

I have tried one more bit, CURLOPT_FORBID_REUSE, as Daniel recommended, and it now closes old FTP connections for me.
Could you please give the latest svn a try?
Thanks.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-08-06 05:55:18
On 8/6/07, BuraphaLinux Server <bur...@gm...> wrote:
>
> Hello,
>
> It appears that libcurl lacks good ftp support if you use more
> than one connection. I understand that you are not eager to write
> your own FTP client library, and fixing theirs would be quite labor
> intensive. I will try to get this job done with shell scripting for
> now, but when (if?) libcurl is fixed so you can fix curl-loader, I
> will be ready to do full testing for you at that time.

Thanks for your understanding.

> For now, perhaps the man page needs to be updated to document this issue so
> nobody else attempts to do it? If that is ok, I can send a patch to
> update the man page (including stating that it's a libcurl limitation,
> not a curl-loader limitation).

Thanks, I can commit it to svn.
Well, we hope to correct it within a two-month period.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.