curl-loader-devel Mailing List for curl-loader - web application testing
From: Pranav D. <pra...@gm...> - 2008-07-02 01:09:00
On Tue, Jul 1, 2008 at 1:06 PM, Robert Iakobashvili <cor...@gm...> wrote:
> Hi Pranav,
>
> On Tue, Jul 1, 2008 at 10:55 PM, Pranav Desai <pra...@gm...> wrote:
>>
>> Hello,
>>
>> I am trying to simulate browser behavior for accessing a front page
>> (e.g. www.cnn.com). From traces I see that a few requests go over the
>> same TCP connections (persistence) and in general there are a few TCP
>> connections made for completely fetching the whole front page.
>
> You can also see the behavior of major browsers like IE-6, IE-7, FF-2, FF-3
> and Safari-3.1.
>
>> I am trying to use FRESH_CONNECT to create another TCP connection.
>> What I was expecting was that all URLs before the FRESH_CONNECT tag
>> would go on the same connection, and after the URL that has the
>> FRESH_CONNECT tag a new connection would start. So I was expecting to
>> see 2 GETs on the first TCP connection, 6 on the second, and the rest
>> on the third one. Basically, I was thinking of the URL list as
>> sequential, with connection closes in between.
>>
>> But that doesn't seem to be the case. There are 3 TCP connections, but one
>> has most of the GETs and the other 2 have one request each (the ones for
>> which the tag is specified).
>>
>> So it seems like curl-loader loads all the URLs with their associated
>> tags and then accesses them randomly. Is that correct? If so, is there a
>> way to get a behavior similar to the one described above?
>
> FRESH_CONNECT means that in the next cycle the connection should be closed
> and re-established.
>
> What is the behavior that you see with the major browsers mentioned?

In general, most of them will create a bunch of TCP connections and send off
multiple requests through each connection. I can send you a trace if you like.
I am not trying to simulate any particular browser, just the way a browser
normally fetches the main page of a website.

To bring this into context, I am trying to load test a proxy, and would like
to create/simulate thousands of users opening the main page of a bunch of
popular websites. What I do is get a trace on the browser side for a website,
from which I can get the URLs and the sequence in which the browser fetched
them to get the whole page. I will also get the number of connections it
utilized for the entire page. With that information I can create a curl-loader
conf file with the same URLs and add a few FRESH_CONNECT tags to emulate the
new TCP connections. That's how I thought FRESH_CONNECT would work ...

I could just add a bunch of URLs from somewhere in the curl-loader conf and
add a few FRESH_CONNECT and TIMER_AFTER_SLEEP tags in the list and would
probably get a similar behavior, but I was hoping to replicate the browser as
closely as possible.

Thanks for your help.

-- Pranav

> --
> Truly,
> Robert Iakobashvili, Ph.D.
> www.ghotit.com
> Assistive technology that understands you
From: Robert I. <cor...@gm...> - 2008-07-01 20:06:43
Hi Pranav,

On Tue, Jul 1, 2008 at 10:55 PM, Pranav Desai <pra...@gm...> wrote:
> Hello,
>
> I am trying to simulate browser behavior for accessing a front page
> (e.g. www.cnn.com). From traces I see that a few requests go over the
> same TCP connections (persistence) and in general there are a few TCP
> connections made for completely fetching the whole front page.

You can also see the behavior of major browsers like IE-6, IE-7, FF-2, FF-3
and Safari-3.1.

> I am trying to use FRESH_CONNECT to create another TCP connection.
> What I was expecting was that all URLs before the FRESH_CONNECT tag
> would go on the same connection, and after the URL that has the
> FRESH_CONNECT tag a new connection would start. So I was expecting to
> see 2 GETs on the first TCP connection, 6 on the second, and the rest
> on the third one. Basically, I was thinking of the URL list as
> sequential, with connection closes in between.
>
> But that doesn't seem to be the case. There are 3 TCP connections, but one
> has most of the GETs and the other 2 have one request each (the ones for
> which the tag is specified).
>
> So it seems like curl-loader loads all the URLs with their associated
> tags and then accesses them randomly. Is that correct? If so, is there a
> way to get a behavior similar to the one described above?

FRESH_CONNECT means that in the next cycle the connection should be closed
and re-established.

What is the behavior that you see with the major browsers mentioned?

--
Truly,
Robert Iakobashvili, Ph.D.
www.ghotit.com
Assistive technology that understands you
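For reference, a minimal sketch of how FRESH_CONNECT is attached to a URL in a
curl-loader configuration, going by the configs posted in this thread (the tag
follows the URL line it modifies, and per Robert's explanation it closes and
re-establishes that URL's connection on the next cycle, not mid-list). The host
and URLs below are placeholders, not from the thread:

########### URL SECTION ####################################
# Hypothetical URLs for illustration only.
URL=http://example.com/index.html
FRESH_CONNECT=1
# The connection used for index.html is closed and re-opened each cycle.
TIMER_AFTER_URL_SLEEP=1000
URL=http://example.com/style.css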
From: Pranav D. <pra...@gm...> - 2008-07-01 19:55:59
Hello,

I am trying to simulate browser behavior for accessing a front page
(e.g. www.cnn.com). From traces I see that a few requests go over the
same TCP connections (persistence) and in general there are a few TCP
connections made for completely fetching the whole front page.

I am trying to use FRESH_CONNECT to create another TCP connection.
What I was expecting was that all URLs before the FRESH_CONNECT tag
would go on the same connection, and after the URL that has the
FRESH_CONNECT tag a new connection would start. So I was expecting to
see 2 GETs on the first TCP connection, 6 on the second, and the rest
on the third one. Basically, I was thinking of the URL list as
sequential, with connection closes in between.

But that doesn't seem to be the case. There are 3 TCP connections, but one
has most of the GETs and the other 2 have one request each (the ones for
which the tag is specified).

So it seems like curl-loader loads all the URLs with their associated tags
and then accesses them randomly. Is that correct? If so, is there a way to
get a behavior similar to the one described above?

config file
-----------
BATCH_NAME=test_load
CLIENTS_NUM_MAX=1 # Same as CLIENTS_NUM
CLIENTS_NUM_START=1
CLIENTS_RAMPUP_INC=2
INTERFACE =eth1
NETMASK=16
IP_ADDR_MIN= 12.0.0.1
IP_ADDR_MAX= 12.0.16.250
#Actually - this is for self-control
CYCLES_NUM=1
URLS_NUM=14

########### URL SECTION ####################################
URL=http://192.168.55.205/websites/cisco/www.cisco.com/cdc_content_elements/flash/home/sp_072307/spotlight/sp_webEx_tn.jpg
URL=http://192.168.55.205/websites/cisco/www.cisco.com/cdc_content_elements/flash/home/sp_072307/spotlight/sp_CIN.swf
FRESH_CONNECT=1
URL=http://192.168.55.205/websites/cisco/www.cisco.com/cdc_content_elements/flash/home/sp_072307/spotlight/sp_webEx.swf
URL=http://192.168.55.205/websites/cisco/www.cisco.com/cdc_content_elements/flash/home/sp_072307/spotlight/sp_humanN_anthem_tn.jpg
URL=http://192.168.55.205/websites/MOO/backfeed10.gif
TIMER_AFTER_URL_SLEEP=2000-5000
URL=http://192.168.55.205/websites/MOO/bottom.gif
URL=http://192.168.55.205/websites/MOO/style3.css
URL=http://192.168.55.205/websites/MOO/topright.gif
FRESH_CONNECT=1
URL=http://192.168.55.205/websites/MOO/rss_smaller.gif
URL=http://192.168.55.205/websites/MOO/index.html.6
TIMER_AFTER_URL_SLEEP=2000-5000
URL=http://192.168.55.205/websites/cnn/www.cnn.com/.element/ssi/www/breaking_news/2.0/banner.html
URL=http://192.168.55.205/websites/cnn/www.cnn.com/.element/ssi/auto/2.0/sect/MAIN/ftpartners/partner.people.html
TIMER_AFTER_URL_SLEEP=2000-5000
URL=http://192.168.55.205/websites/cnn/www.cnn.com/.element/ssi/auto/2.0/sect/MAIN/ftpartners/partner.money.txt
URL=http://192.168.55.205/websites/cnn/www.cnn.com/.element/img/2.0/global/icons/video_icon.gif
From: Robert I. <cor...@gm...> - 2008-06-17 17:26:38
Hi Matt,

On Tue, Jun 17, 2008 at 5:00 PM, Matt Bull <ma...@mo...> wrote:
>
> Have got 0.44 installed on ubuntu:
>
> CURL-LOADER VERSION: 0.44
>
> QUESTION/ SUGGESTION/ PATCH:
> Am seeing the bind() failed constantly, just wondered if tftp is supported
> or not, the main page suggests that it is but with some patch or tweak.. ?
>
> Any info would be appreciated, I need to test a new tftp server setup and
> this looks like just the ticket if it would work!
> Cheers,
> Matt

In theory libcurl supports it, but we have not added support for TFTP.
I have not seen anybody using the protocol for a while.
You can explore on the libcurl pages how to add this option, but my guess is
that it will be some work to do.

Take care,
Truly,
Robert
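As a side note, "in theory libcurl supports it" refers to libcurl's own ability
to fetch a tftp:// URL directly via its easy interface; the sketch below is a
standalone illustration of that, separate from curl-loader's URL handling
(which at the time had no TFTP support). The server address and file name are
placeholders:

/* Minimal standalone libcurl TFTP fetch - illustrative sketch only.
   The host and file name below are placeholders. */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *handle = curl_easy_init();
    if (!handle)
        return 1;

    curl_easy_setopt(handle, CURLOPT_URL, "tftp://192.0.2.1/test");
    curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L);

    CURLcode rc = curl_easy_perform(handle);  /* blocks until done or error */
    if (rc != CURLE_OK)
        fprintf(stderr, "tftp fetch failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(handle);
    return rc == CURLE_OK ? 0 : 1;
}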
From: Matt B. <ma...@mo...> - 2008-06-17 14:00:27
Hiya Chaps,

Have got 0.44 installed on ubuntu:

CURL-LOADER VERSION: 0.44

HW DETAILS: CPU/S and memory are must:
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 15
model           : 1
model name      : Intel(R) Pentium(R) 4 CPU 1.60GHz
stepping        : 2
cpu MHz         : 1594.868
cache size      : 256 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm up
bogomips        : 3192.41
clflush size    : 64

MemTotal:       385424 kB
MemFree:         90396 kB
Buffers:         23284 kB
Cached:         194608 kB
SwapCached:          0 kB
Active:         119280 kB
Inactive:       136772 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:       385424 kB
LowFree:         90396 kB
SwapTotal:     1124508 kB
SwapFree:      1124336 kB
Dirty:              32 kB
Writeback:           0 kB
AnonPages:       38176 kB
Mapped:          16184 kB
Slab:            33252 kB
SReclaimable:    24636 kB
SUnreclaim:       8616 kB
PageTables:       1392 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   1317220 kB
Committed_AS:   234236 kB
VmallocTotal:   643064 kB
VmallocUsed:      3436 kB
VmallocChunk:   639568 kB

LINUX DISTRIBUTION and KERNEL (uname -r): 2.6.20-16-server

GCC VERSION (gcc -v):
Target: i486-linux-gnu
Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --program-suffix=-4.1 --enable-__cxa_atexit --enable-clocale=gnu --enable-libstdcxx-debug --enable-mpfr --enable-checking=release i486-linux-gnu
Thread model: posix
gcc version 4.1.2 (Ubuntu 4.1.2-0ubuntu4)

COMPILATION AND MAKING OPTIONS (if defaults changed):

COMMAND-LINE: curl-loader -v -d -f tftp.conf

CONFIGURATION-FILE (The most common source of problems):
Place the file inline here:

########### GENERAL SECTION ################################
BATCH_NAME= tftp
CLIENTS_NUM_MAX= 1
INTERFACE = eth0
NETMASK= 29
IP_ADDR_MIN= 81.187.76.229
IP_ADDR_MAX= 81.187.76.229
#Actually - this is for self-control
CYCLES_NUM= 1
URLS_NUM = 1

########### URL SECTION ####################################
URL=tftp://81.103.222.46/test
FRESH_CONNECT=1 # At least my proftpd has problems with connection re-use
TIMER_URL_COMPLETION = 0 # In msec. When positive, Now it is enforced by cancelling url fetch on timeout
TIMER_AFTER_URL_SLEEP =3000

DOES THE PROBLEM AFFECT:
COMPILATION? No
LINKING? No
EXECUTION? No
OTHER (please specify)?

Have you run $make cleanall prior to $make ? Yes

DESCRIPTION:
0 1 (81.187.76.229) :== Info: About to connect() to 81.103.222.46 port 69 (#0) : eff-url: , url: Trying 81.103.222.46...
0 1 (81.187.76.229) :== Info: Trying 81.103.222.46... : eff-url: , url: Bind local address to 81.187.76.229
0 1 (81.187.76.229) :== Info: Bind local address to 81.187.76.229 : eff-url: , url: Local port: 36467
0 1 (81.187.76.229) :== Info: Local port: 36467 : eff-url: , url: connected
0 1 (81.187.76.229) :== Info: connected : eff-url: , url: Connected to 81.103.222.46 (81.103.222.46) port 69 (#0)
0 1 (81.187.76.229) :== Info: Connected to 81.103.222.46 (81.103.222.46) port 69 (#0) : eff-url: , url: set timeouts for state 0; Total 5, retry 1 maxtry 3
0 1 (81.187.76.229) :== Info: set timeouts for state 0; Total 5, retry 1 maxtry 3 : eff-url: , url: bind() failed; Invalid argument
0 1 (81.187.76.229) !! ERROR: bind() failed; Invalid argument : eff-url: , url: Expire cleared
0 1 (81.187.76.229) :== Info: Expire cleared : eff-url: , url: Closing connection #0
0 1 (81.187.76.229) :== Info: Closing connection #0 : eff-url: , url:

QUESTION/ SUGGESTION/ PATCH:
Am seeing the bind() failed constantly, just wondered if tftp is supported
or not, the main page suggests that it is but with some patch or tweak.. ?

Any info would be appreciated, I need to test a new tftp server setup and
this looks like just the ticket if it would work!

Cheers,
Matt

--
Matt
ma...@ma...
From: Aleksandar L. <al-...@no...> - 2008-06-09 16:01:29
Hi Robert,

On Mon 09.06.2008 10:49, Robert Iakobashvili wrote:
> Hi Aleksandar,
>
> On Sun, Jun 8, 2008 at 11:37 PM, Aleksandar Lazic <al-...@no...> wrote:
>>
>> LINUX DISTRIBUTION and KERNEL (uname -r): 2.6.25-gentoo-r3
>> QUESTION/ SUGGESTION/ PATCH: add '#include <limits.h>' into ip_secondary.c
>> as last included file
>
> Thank you very much, fixed.
> I cannot add you to the Thanks list, since you are already inside :-)

;-) always the same ;-)

cheers
Aleks
From: Robert I. <cor...@gm...> - 2008-06-09 07:49:08
Hi Aleksandar,

On Sun, Jun 8, 2008 at 11:37 PM, Aleksandar Lazic <al-...@no...> wrote:
>
> LINUX DISTRIBUTION and KERNEL (uname -r): 2.6.25-gentoo-r3
> QUESTION/ SUGGESTION/ PATCH:
> add '#include <limits.h>' into ip_secondary.c as last included file

Thank you very much, fixed.
I cannot add you to the Thanks list, since you are already inside :-)

Best wishes,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
From: Aleksandar L. <al-...@no...> - 2008-06-08 20:37:48
Hi all,

today I got a compilation error, here is the problem report:

CURL-LOADER VERSION: 0.45

HW DETAILS: CPU/S and memory are must: attached the dmidecode output

LINUX DISTRIBUTION and KERNEL (uname -r): 2.6.25-gentoo-r3

GCC VERSION (gcc -v):
###
Using built-in specs.
Target: i686-pc-linux-gnu
Configured with: /var/tmp/portage/sys-devel/gcc-4.2.4/work/gcc-4.2.4/configure --prefix=/usr --bindir=/usr/i686-pc-linux-gnu/gcc-bin/4.2.4 --includedir=/usr/lib/gcc/i686-pc-linux-gnu/4.2.4/include --datadir=/usr/share/gcc-data/i686-pc-linux-gnu/4.2.4 --mandir=/usr/share/gcc-data/i686-pc-linux-gnu/4.2.4/man --infodir=/usr/share/gcc-data/i686-pc-linux-gnu/4.2.4/info --with-gxx-include-dir=/usr/lib/gcc/i686-pc-linux-gnu/4.2.4/include/g++-v4 --host=i686-pc-linux-gnu --build=i686-pc-linux-gnu --disable-altivec --enable-nls --without-included-gettext --with-system-zlib --disable-checking --disable-werror --enable-secureplt --disable-multilib --enable-libmudflap --disable-libssp --disable-libgcj --with-arch=i686 --enable-languages=c,c++,treelang,fortran --enable-shared --enable-threads=posix --enable-__cxa_atexit --enable-clocale=gnu
Thread model: posix
gcc version 4.2.4 (Gentoo 4.2.4 p1.0)
###

COMPILATION AND MAKING OPTIONS (if defaults changed):

COMMAND-LINE: make

CONFIGURATION-FILE (The most common source of problems):
Place the file inline here:

DOES THE PROBLEM AFFECT:
COMPILATION? Yes
LINKING? No
EXECUTION? No
OTHER (please specify)?

Have you run $make cleanall prior to $make ? Yes

DESCRIPTION:
###
gcc -W -Wall -Wpointer-arith -pipe -DCURL_LOADER_FD_SETSIZE=20000 -D_FILE_OFFSET_BITS=64 -g -I. -I./inc -I/usr//include -c -o obj/ip_secondary.o ip_secondary.c
ip_secondary.c: In function 'get_integer':
ip_secondary.c:309: error: 'INT_MAX' undeclared (first use in this function)
ip_secondary.c:309: error: (Each undeclared identifier is reported only once
ip_secondary.c:309: error: for each function it appears in.)
ip_secondary.c:309: error: 'INT_MIN' undeclared (first use in this function)
make: *** [obj/ip_secondary.o] Error 1
###

QUESTION/ SUGGESTION/ PATCH:
add '#include <limits.h>' into ip_secondary.c as last included file
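For context, the suggested fix simply adds the standard header that defines
INT_MAX and INT_MIN after the file's other includes; a minimal sketch follows,
in which the surrounding include list is illustrative only, not the actual
contents of ip_secondary.c:

/* Top of ip_secondary.c - surrounding includes are illustrative only. */
#include <stdio.h>
#include <string.h>
#include <limits.h>   /* suggested fix: defines INT_MAX / INT_MIN used in get_integer() */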
From: Robert I. <cor...@gm...> - 2008-05-25 07:32:39
Hi Alex,

On Sun, May 25, 2008 at 10:13 AM, A R <ale...@gm...> wrote:
> I've tried to use it to test www.opencalais.com web service performance.
> So I had to use HTTP POST.
> It didn't work as expected in curl-loader 0.45, so I had to debug it.
> I've fixed it with the following limitation:
> You cannot use comments in the FORM_STRING line (with POST AS_IS), because
> FORM_STRING can contain any kind of characters and there is no way
> to distinguish POST data from a comment.
> Please see the attached patches to curl-loader 0.45 and a real test
> configuration.
> Sincerely,
> Alex

Thank you very much for providing the patches!
We will look into it a bit later.

Please use our mailing list, which I am adding to the e-mail destinations,
for future postings, patches, etc. You can subscribe to it here:
https://lists.sourceforge.net/lists/listinfo/curl-loader-devel

Best wishes!

--
Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
From: Niko W. S. S. <nik...@ya...> - 2008-05-24 11:51:17
What I want to know is how to implement the throughput measurement in my
Apache code (Apache is actually in C/C++), because I want to implement a
scheduling policy in the Apache web server that prioritizes the fastest
connection first.

Robert Iakobashvili <cor...@gm...> wrote:

Hi Niko,

On Fri, May 23, 2008 at 10:36 AM, Niko Wilfritz Sianipar Sianipar wrote:
> How do you limit the transfer rate/throughput in your code in curl-loader?
> Actually, I want to measure the throughput of a connection in my Apache web
> server code. Please tell me the idea.

In curl-loader you can configure the tag:
TRANSFER_LIMIT_RATE - limits client maximum throughput for a url.
The value of the tag is to be provided as bytes (! not bits) per second.

curl-loader is using the facilities provided by the great libcurl library,
where you can find the measurement and control code.

apache2 can do bandwidth throttling by configuration, can't it?

To monitor your networking connections you can, for a small number of
connections, try to use wireshark, whereas for a larger number there are such
proprietary tools as Sniffer, NetReality, Network Physics, NetQoS, etc.

> Best,
> Niko Wilfritz Sianipar
> Teknik Informatika IT Telkom Bandung

Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you

Best,
Niko Wilfritz Sianipar
Teknik Informatika IT Telkom Bandung
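As a general illustration (not an Apache API, and not from this thread),
measuring the throughput of a connection usually comes down to tracking bytes
transferred over elapsed wall-clock time; a minimal C sketch, with all names
hypothetical:

/* Generic per-connection throughput bookkeeping - names are hypothetical. */
#include <sys/time.h>

struct conn_stats {
    long long bytes_sent;   /* bytes written on this connection so far */
    struct timeval start;   /* time the connection was accepted */
};

/* Average throughput in bytes per second since the connection started. */
static double conn_throughput(const struct conn_stats *cs)
{
    struct timeval now;
    gettimeofday(&now, NULL);

    double elapsed = (now.tv_sec - cs->start.tv_sec)
                   + (now.tv_usec - cs->start.tv_usec) / 1e6;

    return elapsed > 0.0 ? (double)cs->bytes_sent / elapsed : 0.0;
}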
From: Robert I. <cor...@gm...> - 2008-05-24 06:37:45
Hi Niko,

On Fri, May 23, 2008 at 10:36 AM, Niko Wilfritz Sianipar Sianipar
<nik...@ya...> wrote:
> How do you limit the transfer rate/throughput in your code in curl-loader?
> Actually, I want to measure the throughput of a connection in my Apache web
> server code. Please tell me the idea.

In curl-loader you can configure the tag:
TRANSFER_LIMIT_RATE - limits client maximum throughput for a url.
The value of the tag is to be provided as bytes (! not bits) per second.

curl-loader is using the facilities provided by the great libcurl library,
where you can find the measurement and control code.

apache2 can do bandwidth throttling by configuration, can't it?

To monitor your networking connections you can, for a small number of
connections, try to use wireshark, whereas for a larger number there are such
proprietary tools as Sniffer, NetReality, Network Physics, NetQoS, etc.

> Best,
> Niko Wilfritz Sianipar
> Teknik Informatika IT Telkom Bandung

Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
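To make the tag usage concrete, a minimal configuration sketch: the tag name
and the bytes-per-second unit come from Robert's answer above, while the URL
and the numeric value are placeholders chosen only for illustration:

########### URL SECTION ####################################
# Hypothetical URL; the value is bytes (not bits) per second.
URL=http://example.com/big_file.bin
TRANSFER_LIMIT_RATE=100000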
From: Niko W. S. S. <nik...@ya...> - 2008-05-23 07:36:57
How do you limit the transfer rate/throughput in your code in curl-loader?
Actually, I want to measure the throughput of a connection in my Apache web
server code. Please tell me the idea.

Best,
Niko Wilfritz Sianipar
Teknik Informatika IT Telkom Bandung
From: F. <fra...@gm...> - 2008-05-19 22:30:50
I am honored! Thanks!

2008/5/19 Robert Iakobashvili <cor...@gm...>:
> Hi François,
>
> On Mon, May 19, 2008 at 3:50 PM, François <fra...@gm...> wrote:
>
>> The bugfix is attached to this mail.
>
> Applied. Thanks. You are added to our THANKS list.
>
> --
> Truly,
> Robert Iakobashvili, Ph.D.
> Assistive technology that understands you

--
Francois Pesce
From: Robert I. <cor...@gm...> - 2008-05-19 19:02:49
Hi François,

On Mon, May 19, 2008 at 3:50 PM, François <fra...@gm...> wrote:
>
> The bugfix is attached to this mail.

Applied. Thanks. You are added to our THANKS list.

--
Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
From: Robert I. <cor...@gm...> - 2008-05-19 13:07:41
Hi Francois,

On Mon, May 19, 2008 at 3:50 PM, François <fra...@gm...> wrote:
> I've just found this little bug: the proxy authentication does not work,
> because incorrect CURL_ option enumerators are used in loader.c.
> The bugfix is attached to this mail.
> I've also added a fix to a condition that seemed erroneous because it was
> always true. Feel free to remove it if you think I'm wrong.
>
> Thanks for this great tool.
>
> Francois

Thank you for your patch. It will be applied shortly.

With my best wishes and thanks,
Sincerely,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
From: F. <fra...@gm...> - 2008-05-19 12:50:57
diff -Nru curl-loader-0.44/loader.c curl-loader-0.44-patched/loader.c
--- curl-loader-0.44/loader.c	2007-10-14 15:51:05.000000000 +0200
+++ curl-loader-0.44-patched/loader.c	2008-05-19 11:47:10.000000000 +0200
@@ -784,13 +784,14 @@
           char proxy_userpwd[256];
           sprintf (proxy_userpwd, "%s:%s", url->username, url->password);
-          curl_easy_setopt(handle, CURLOPT_USERPWD, proxy_userpwd);
+          curl_easy_setopt(handle, CURLOPT_PROXYUSERPWD, proxy_userpwd);
         }
       else
         {
-          curl_easy_setopt(handle, CURLOPT_USERPWD, url->proxy_auth_credentials);
+          curl_easy_setopt(handle, CURLOPT_PROXYUSERPWD, url->proxy_auth_credentials);
         }
-      curl_easy_setopt(handle, CURLOPT_HTTPAUTH, url->proxy_auth_method);
+      curl_easy_setopt(handle, CURLOPT_PROXYAUTH, url->proxy_auth_method);
+    }
 }
@@ -1315,7 +1316,7 @@
     {
       /* 401 and 407 responses are just authentication challenges, that
          virtual client may overcome. */
-      if (response_status != 401 || response_status != 407)
+      if (response_status != 401 && response_status != 407)
         {
           cctx->client_state = CSTATE_ERROR;
         }
From: Michael M. <mos...@gm...> - 2008-05-18 10:11:53
Thanks

unistd.h - the source of all wisdom (2)

Thanks
Michael

2008/5/18 Robert Iakobashvili <cor...@gm...>:
> Hi Aleks,
>
> On Sun, May 18, 2008 at 12:17 AM, Aleksandar Lazic <al-...@no...> wrote:
>> On Sam 17.05.2008 18:03, Robert Iakobashvili wrote:
>>> version 0.45, unstable, May 17, 2008
>>>
>>> * Advanced to libevent-1.4.4, which is supposed to deliver
>>>   better performance for high loads
>>
>> Have you taken a thought to use libev with the libevent layer, due to the
>> fact that in this lib a NEVENT limit is not available ;-)
>
> Indeed, we considered libev instead of libevent, but decided to remain
> within the rapidly improving libevent.
>
> At the end of the day, we will find a timeslot and will send some patching
> of libevent with moving NEVENT to some coming h-file, so that
> any application could re-define it at compilation via -DNEVENT=500000,
> for example.
>
> Take care.
> --
> Truly,
> Robert Iakobashvili, Ph.D.
> Assistive technology that understands you
From: Robert I. <cor...@gm...> - 2008-05-18 08:18:47
Hi Aleks,

On Sun, May 18, 2008 at 12:17 AM, Aleksandar Lazic <al-...@no...> wrote:
> On Sam 17.05.2008 18:03, Robert Iakobashvili wrote:
>> version 0.45, unstable, May 17, 2008
>>
>> * Advanced to libevent-1.4.4, which is supposed to deliver
>>   better performance for high loads
>
> Have you taken a thought to use libev with the libevent layer, due to the
> fact that in this lib a NEVENT limit is not available ;-)

Indeed, we considered libev instead of libevent, but decided to remain within
the rapidly improving libevent.

At the end of the day, we will find a timeslot and will send some patching
of libevent with moving NEVENT to some coming h-file, so that
any application could re-define it at compilation via -DNEVENT=500000,
for example.

Take care.
--
Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
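To make the idea concrete, the kind of override-able definition Robert
describes would look roughly like the guard below; the default value and
placement are purely illustrative and this is not the actual libevent patch:

/* Illustrative only - not the actual libevent patch. A guarded default lets a
   build override the event limit, e.g. by compiling with -DNEVENT=500000. */
#ifndef NEVENT
#define NEVENT 32000   /* hypothetical default; the real value lives in libevent */
#endif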
From: Aleksandar L. <al-...@no...> - 2008-05-17 21:17:51
On Sam 17.05.2008 18:03, Robert Iakobashvili wrote:
> version 0.45, unstable, May 17, 2008
>
> * Advanced to libevent-1.4.4, which is supposed to deliver
>   better performance for high loads

Have you taken a thought to use libev with the libevent layer, due to the
fact that in this lib a NEVENT limit is not available ;-)

Cheers
Aleks
From: Robert I. <cor...@gm...> - 2008-05-17 15:03:17
version 0.45, unstable, May 17, 2008

* Advanced to libevent-1.4.4, which is supposed to deliver
  better performance for high loads

* Corrected the patch changing the NEVENT number in libevent.
  The patch was not applied in previous versions, limiting the number of
  virtual clients to only 32K. Increased it to 121K clients, whereas users
  can increase this number even more following the building for High Load
  procedure.

You may wish to try this version if you are using heavy loads with a large
number of virtual clients.

Enjoy!

--
Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
From: Robert I. <cor...@gm...> - 2008-05-13 07:21:06
Hi Niko,

On Tue, May 13, 2008 at 10:04 AM, Niko Wilfritz Sianipar Sianipar
<nik...@ya...> wrote:
> 1. Why is the mean response time for a lower number of concurrent users not
> always lower than for a higher number of concurrent users with the same
> configuration? Is there a condition under which fewer concurrent users will
> always have a lower mean response time than more concurrent users?

The question is beyond the scope of curl-loader. You may wish to learn about
your web server's caching policy and other application-related issues.

> 2. What is the effect if I increase the number of file descriptors with the
> ulimit command?

It increases the allowed number of open file descriptors (sockets).
You may wish to read the High-Load sections of the FAQs.

> Best,
> Niko Wilfritz Sianipar
> Teknik Informatika IT Telkom Bandung

--
Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
From: Niko W. S. S. <nik...@ya...> - 2008-05-13 07:04:39
1. Why is the mean response time for a lower number of concurrent users not
always lower than for a higher number of concurrent users with the same
configuration? Is there a condition under which fewer concurrent users will
always have a lower mean response time than more concurrent users?

2. What is the effect if I increase the number of file descriptors with the
ulimit command?

Thanks.

Best,
Niko Wilfritz Sianipar
Teknik Informatika IT Telkom Bandung
From: Aleksandar L. <al-...@no...> - 2008-05-09 07:21:11
Hi Robert,

On Don 08.05.2008 18:12, Robert Iakobashvili wrote:
> On Thu, May 8, 2008 at 5:57 PM, Aleksandar Lazic <al-...@no...> wrote:
>
>> Looks like, any experience with the 1.4.x libevent?
>
> We do not have any experience with libevent 1.4x, but the people on the
> libevent mailing list are claiming higher performance coupled with good
> stability.
>
> Any experience on your side?

Not really ;-(

Aleks
From: Robert I. <cor...@gm...> - 2008-05-08 15:12:56
Hi Aleksandar,

On Thu, May 8, 2008 at 5:57 PM, Aleksandar Lazic <al-...@no...> wrote:
>> From the e-mail below it looks like curl-loader can benefit from
>> advancing to libevent-1.4.3, which is mature enough now.
>>
>> What do you think?
>
> Looks like, any experience with the 1.4.x libevent?

We do not have any experience with libevent 1.4x, but the people on the
libevent mailing list are claiming higher performance coupled with good
stability.

Any experience on your side?

--
Truly,
Robert Iakobashvili, Ph.D.
www.ghotit.com
Ghotit - Assistive technology that understands you
From: Aleksandar L. <al-...@no...> - 2008-05-08 14:58:47
Hi,

On Mit 07.05.2008 19:30, Robert Iakobashvili wrote:
> Hi folks,
>
> From the e-mail below it looks like curl-loader can benefit from
> advancing to libevent-1.4.3, which is mature enough now.
>
> What do you think?

Looks like, any experience with the 1.4.x libevent?

Cheers
Aleks