curl-loader-devel Mailing List for curl-loader - web application testing (Page 12)
Status: Alpha. Brought to you by: coroberti.
From: sajal <s.b...@qu...> - 2010-10-06 05:59:12
On 06/10/10 15:20, Robert Iakobashvili wrote:
> does not store the

Hi,

Thanks for the quick reply. If it does not store these files, what exactly happens to them once they are fetched from the web server? One more thing: what exactly does curl-loader use to fetch the files? I haven't looked into the code, which appears to be a bit long, so if you can answer this it would save me some time :-)

Regards
--
Sajal Bhatia
Research Masters Student
QUT, Brisbane
AUSTRALIA
From: Robert I. <cor...@gm...> - 2010-10-06 05:20:59
Hi Sajal,

On Wed, Oct 6, 2010 at 6:51 AM, sajal <s.b...@qu...> wrote:
> Hi,
>
> I am using curl-loader to fetch index.html files from a web server using
> the GET command. I am not able to locate the directory where curl-loader
> stores these files. Can anyone tell me where exactly curl-loader stores
> them?

curl-loader is a tool to test your web server, server farm, or web site. It runs on a client computer and does not store the files that are fetched there. If for some reason you need to do that, you might use the curl or wget tools.

--
Truly,
Robert Iakobashvili, Ph.D.

Home: http://www.ghotit.com
Blog: http://dyslexia-blog.ghotit.com
Twitter: http://twitter.com/ghotit
Facebook: http://facebook.com/ghotit
......................................................................
Ghotit Dyslexia
Assistive technology that understands you
......................................................................
From: sajal <s.b...@qu...> - 2010-10-06 05:11:40
Hi,

I am using curl-loader to fetch index.html files from a web server using the GET command. I am not able to locate the directory where curl-loader stores these files. Can anyone tell me where exactly curl-loader stores them?

Thanks and Regards
--
Sajal Bhatia
Research Masters Student
QUT, Brisbane
AUSTRALIA
From: Robert I. <cor...@gm...> - 2010-09-29 12:13:56
Dear Pranav,

On Tue, Sep 28, 2010 at 11:31 PM, Pranav Desai <pra...@gm...> wrote:
>> The idea looks very interesting and it is worth adding to the mainline.
>> Please kindly provide detailed documentation in:
>> 1. README;
>> 2. man pages;
>> 3. a configuration example in the conf directory.
>> Everything should enable a QA person new to curl-loader to understand
>> the new tags and how to use them, and ease their usage by providing an
>> example. Thank you in advance.
>
> I have attached the patch with config examples and updated docs. Let
> me know if I have missed something.
>
> Thanks
> -- Pranav

I'll look into it very soon. Thank you very much!

--
Truly,
Robert Iakobashvili, Ph.D.
From: Pranav D. <pra...@gm...> - 2010-09-28 21:31:50
> The idea looks very interesting and it is worth adding to the mainline.
> Please kindly provide detailed documentation in:
> 1. README;
> 2. man pages;
> 3. a configuration example in the conf directory.
> Everything should enable a QA person new to curl-loader to understand
> the new tags and how to use them, and ease their usage by providing an
> example. Thank you in advance.

I have attached the patch with config examples and updated docs. Let me know if I have missed something.

Thanks
-- Pranav
From: Robert I. <cor...@gm...> - 2010-09-27 05:44:17
Hi Ivan,

On Mon, Sep 27, 2010 at 7:32 AM, Ivan <iva...@gm...> wrote:
> Hi everyone,
>
> I am trying to test NTLM proxy authentication for an explicit proxy.
> I am aware of curllib's issues with NTLM; I ...

Indeed, it works well with a single connection. However, when it comes to working with a multi-handle (for many connections), libcurl is broken. Effectively this means that you cannot use curl-loader with more than a single client, which makes it useless for NTLM.

> When I use a curl-loader connection, it disconnects after receiving a 407,
> as seen below:
>
> # 1285564151266 Mon Sep 27 15:09:11 2010
> # msec_offset cycle_no url_no client_no (ip) indic info
> 1 0 0 1 == About to connect() to proxy web-cache.usyd.edu.au port 8080 (#0) eff-url: url http://www.google.com
> 9 0 0 1 !! ERCL 407 HTTP/1.1 407 Proxy Authentication Required^M eff-url: url http://www.google.com
> 23 0 0 1 !! ERCL 407 HTTP/1.1 407 Proxy Authentication Required^M eff-url: url http://www.google.com
> 23 0 0 1 == Closing connection #0 eff-url: url http://www.google.com

This is another story.

--
Truly,
Robert Iakobashvili, Ph.D.
From: Ivan <iva...@gm...> - 2010-09-27 05:32:55
Hi everyone,

I am trying to test NTLM proxy authentication for an explicit proxy.

I am aware of curllib's issues with NTLM; I believe they may not apply to my case, as I can authenticate successfully using plain curl:

curl --proxy "http://web-cache.usyd.edu.au:8080" --proxy-ntlm --proxy-user 'mss\User:*********' www.google.com

I get:

<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
<A HREF="http://www.google.com.au/">here</A>.
</BODY></HTML>

When I use a curl-loader connection, it disconnects after receiving a 407, as seen below:

# 1285564151266 Mon Sep 27 15:09:11 2010
# msec_offset cycle_no url_no client_no (ip) indic info
1 0 0 1 == About to connect() to proxy web-cache.usyd.edu.au port 8080 (#0) eff-url: url http://www.google.com
9 0 0 1 !! ERCL 407 HTTP/1.1 407 Proxy Authentication Required^M eff-url: url http://www.google.com
23 0 0 1 !! ERCL 407 HTTP/1.1 407 Proxy Authentication Required^M eff-url: url http://www.google.com
23 0 0 1 == Closing connection #0 eff-url: url http://www.google.com

I suspect that my poor understanding of the topic is to blame; I would greatly appreciate it if you could have a quick look at the configuration file and tell me what is wrong.

########### GENERAL SECTION ################################
BATCH_NAME=proxy-test
CLIENTS_NUM_MAX=10 #Same as CLIENTS_NUM
CLIENTS_NUM_START=1
CLIENTS_RAMPUP_INC=1
INTERFACE = eth0
NETMASK=24
IP_ADDR_MIN=172.16.236.229
IP_ADDR_MAX=172.16.236.229
CYCLES_NUM = 100
#FRESH_CONNECT=1
URLS_NUM=1

########### Login URL SECTION #######################
URL=http://www.google.com
URL_SHORT_NAME="google"
REQUEST_TYPE=GET
PROXY_AUTH_METHOD = NTLM
PROXY_AUTH_CREDENTIALS='mss\ivan:***********'
TIMER_URL_COMPLETION = 1000
TIMER_AFTER_URL_SLEEP = 200

Thank you in advance.

-- Ivan
From: Pranav D. <pra...@gm...> - 2010-09-24 22:45:17
On Fri, Sep 24, 2010 at 2:28 AM, Robert Iakobashvili <cor...@gm...> wrote:
> Dear Pranav,
>
>>> And what happens when
>>> CLIENTS_NUM_START=10
>>> and the tag below is commented out?
>>> #CLIENTS_RAMPUP_INC=1
>>
>> Looks much better. Some issue with the ramp-up?
>
> [quoted ctx logs snipped; they appear in full in the messages below]
>
> If you compare the current ctx log to the earlier one, the run always
> stops when the first client reaches the planned number of cycles (5).
> The multi-handle is stopped and everything goes out. The logic was
> designed for heavy-load machinery, where the exact number of cycles was
> a minor point. If it is really necessary for you to rework it, I can
> guide you.

That's what I figured. It's not really necessary, since I use it only for sanity checks. But I wanted to make sure that it was not some config that I added or removed, and that it was by design. If you could still point me in the right direction, I can look at it.

-- Pranav
From: Robert I. <cor...@gm...> - 2010-09-24 11:48:28
Dear Pranav,

On Tue, Sep 21, 2010 at 2:52 AM, Pranav Desai <pra...@gm...> wrote:
> Hello,
>
> I wanted to test a caching server using curl-loader. [...]
> [the random-URL patch message, quoted in full; the original appears
> later on this page]

The idea looks very interesting and it is worth adding to the mainline. Please kindly provide detailed documentation in:
1. README;
2. man pages;
3. a configuration example in the conf directory.
Everything should enable a QA person new to curl-loader to understand the new tags and how to use them, and ease their usage by providing an example. Thank you in advance.

--
Truly,
Robert Iakobashvili, Ph.D.
From: Robert I. <cor...@gm...> - 2010-09-24 09:28:51
Dear Pranav,

On Thu, Sep 23, 2010 at 11:10 PM, Pranav Desai <pra...@gm...> wrote:
>> And what happens when
>> CLIENTS_NUM_START=10
>> and the tag below is commented out?
>> #CLIENTS_RAMPUP_INC=1
>
> Looks much better. Some issue with the ramp-up?

Please see below. Client 3 has reached 5 cycles:

> 1 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
> 2 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
> 3 ,cycles:5,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
> 4 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
> 5 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
> 6 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
> 7 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
> 8 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
> 9 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
> 10 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0

And before you had:

1 ,cycles:5,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
2 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
3 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
4 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
5 ,cycles:3,state:2,b-in:59311596,b-out:860,req:4,1xx:0,2xx:4,3xx:0,4xx:0,5xx:0,err:0,T-err:0
6 ,cycles:3,state:2,b-in:59311596,b-out:860,req:4,1xx:0,2xx:4,3xx:0,4xx:0,5xx:0,err:0,T-err:0
7 ,cycles:3,state:2,b-in:59311596,b-out:860,req:4,1xx:0,2xx:4,3xx:0,4xx:0,5xx:0,err:0,T-err:0
8 ,cycles:2,state:2,b-in:44483697,b-out:645,req:3,1xx:0,2xx:3,3xx:0,4xx:0,5xx:0,err:0,T-err:0
9 ,cycles:2,state:2,b-in:44483697,b-out:645,req:3,1xx:0,2xx:3,3xx:0,4xx:0,5xx:0,err:0,T-err:0
10 ,cycles:2,state:2,b-in:44483697,b-out:645,req:3,1xx:0,2xx:3,3xx:0,4xx:0,5xx:0,err:0,T-err:0

If you compare the current ctx log to the earlier one, the run always stops when the first client reaches the planned number of cycles (5). The multi-handle is stopped and everything goes out. The logic was designed for heavy-load machinery, where the exact number of cycles was a minor point. If it is really necessary for you to rework it, I can guide you.

--
Truly,
Robert Iakobashvili, Ph.D.
From: Pranav D. <pra...@gm...> - 2010-09-23 21:12:02
> And what happens when
> CLIENTS_NUM_START=10
> and the tag below is commented out?
> #CLIENTS_RAMPUP_INC=1

Looks much better. Some issue with the ramp-up?

1 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
2 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
3 ,cycles:5,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
4 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
5 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
6 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
7 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
8 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
9 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
10 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
From: Robert I. <cor...@gm...> - 2010-09-23 19:32:25
Hi Pranav,

On Wed, Sep 22, 2010 at 9:38 PM, Pranav Desai <pra...@gm...> wrote:
> Hi!
>
> I am running a test with the following config:
> [config and .ctx output quoted in full; the original message appears
> later on this page]

And what happens when
CLIENTS_NUM_START=10
and the tag below is commented out?
#CLIENTS_RAMPUP_INC=1

--
Truly,
Robert Iakobashvili, Ph.D.
From: Pranav D. <pra...@gm...> - 2010-09-22 19:38:40
Hi!

I am running a test with the following config:

BATCH_NAME=test
CLIENTS_NUM_MAX=10 # Same as CLIENTS_NUM
CLIENTS_NUM_START=1
CLIENTS_RAMPUP_INC=1
INTERFACE =eth1
NETMASK=16
IP_ADDR_MIN= 13.4.0.10
IP_ADDR_MAX= 13.4.150.250 #Actually - this is for self-control
IP_SHARED_NUM=500
CYCLES_NUM=5
URLS_NUM=1

URL=http://172.16.55.210/websites/testube/video/cNvJy0zoXOY.34
TIMER_AFTER_URL_SLEEP=3000

I was expecting to see 50 requests, but this is what I see in the .ctx file. It looks like not all clients finished their 5 cycles; even the .log file suggests that. Does the test stop when one of them reaches the max cycles? Am I missing something?

test.ctx
----------
1 ,cycles:5,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
2 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
3 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
4 ,cycles:4,state:2,b-in:74139495,b-out:1075,req:5,1xx:0,2xx:5,3xx:0,4xx:0,5xx:0,err:0,T-err:0
5 ,cycles:3,state:2,b-in:59311596,b-out:860,req:4,1xx:0,2xx:4,3xx:0,4xx:0,5xx:0,err:0,T-err:0
6 ,cycles:3,state:2,b-in:59311596,b-out:860,req:4,1xx:0,2xx:4,3xx:0,4xx:0,5xx:0,err:0,T-err:0
7 ,cycles:3,state:2,b-in:59311596,b-out:860,req:4,1xx:0,2xx:4,3xx:0,4xx:0,5xx:0,err:0,T-err:0
8 ,cycles:2,state:2,b-in:44483697,b-out:645,req:3,1xx:0,2xx:3,3xx:0,4xx:0,5xx:0,err:0,T-err:0
9 ,cycles:2,state:2,b-in:44483697,b-out:645,req:3,1xx:0,2xx:3,3xx:0,4xx:0,5xx:0,err:0,T-err:0
10 ,cycles:2,state:2,b-in:44483697,b-out:645,req:3,1xx:0,2xx:3,3xx:0,4xx:0,5xx:0,err:0,T-err:0

Thanks for your time.

-- pranav
From: Robert I. <cor...@gm...> - 2010-09-21 04:59:57
Dear Pranav,

On Tue, Sep 21, 2010 at 2:52 AM, Pranav Desai <pra...@gm...> wrote:
> Hello,
>
> I wanted to test a caching server using curl-loader. [...]
> [the random-URL patch message, quoted in full; the original appears
> later on this page]

Thank you very much for the patch. I'll look into it very soon.

--
Truly,
Robert Iakobashvili, Ph.D.
From: Pranav D. <pra...@gm...> - 2010-09-21 00:53:00
Hello,

I wanted to test a caching server using curl-loader. curl-loader uses a fixed set of URLs, so a cache would perform quite well (serving many things from RAM) unless you have a really large set of URLs, as in the real world. So, to stress-test the cache with a small URL set, I created a small patch that adds a random string to the URL. The web server on the other side of the cache also needs to be able to translate the random string back to the original URL. This way you can pretty much use a single URL to test the cache, and at the same time you don't have to store lots of objects on the web server.

I have attached the patch in case someone wants to use it. I can write up some docs if it's worth adding to the main code base.

curl-loader config
-------------------
URL=http://172.16.55.210/JUNKSTR/websites/testube/video/cNvJy0zoXOY.34
URL_RANDOM_RANGE=0-2000
URL_RANDOM_TOKEN=JUNKSTR
TIMER_AFTER_URL_SLEEP=3000

This will replace the token "JUNKSTR" in the URL with a random number between 0 and 2000. So the access logs should look like:

1285029140.940 5999 13.4.0.28 TCP_MISS/200 14827899 GET http://172.16.55.210/276/websites/testube/video/cNvJy0zoXOY.34 - DIRECT/172.16.55.210 application/octet-stream -
1285029141.408 6164 13.4.0.23 TCP_MISS/200 14827899 GET http://172.16.55.210/1158/websites/testube/video/cNvJy0zoXOY.34 - DIRECT/172.16.55.210 application/octet-stream -

lighttpd webserver config
-----------------------------
To translate the URLs back to a single URL I used rewrite rules, e.g.:

url.rewrite = ( "^/[^/]*(.*)$" => "$1" )

You can choose to use a different method.

Hope this helps.
-- Pranav
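The round trip in Pranav's patch is easy to emulate. The sketch below is an illustration, not the patch itself: `randomize_url` mimics what URL_RANDOM_TOKEN/URL_RANDOM_RANGE do on the client side, and `rewrite_back` is the server-side inverse expressed by the lighttpd rule above.

```python
import random
import re

def randomize_url(url, token, lo, hi, rng=random):
    """Replace `token` in `url` with a random integer in [lo, hi]."""
    return url.replace(token, str(rng.randint(lo, hi)))

def rewrite_back(path):
    """Drop the first path component, like lighttpd's
    url.rewrite = ( "^/[^/]*(.*)$" => "$1" )."""
    return re.sub(r"^/[^/]*(.*)$", r"\1", path)
```

Every request then looks unique to the cache (forcing misses), while the origin server keeps serving a single object.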
From: SAJAL B. <s.b...@qu...> - 2010-09-01 00:11:52
Hi,

Can someone suggest which parameters to tune to obtain a considerably high packet rate, i.e., number of packets per IP?

Cheers,
----
Sajal Bhatia
Research Masters Student
QUT, Brisbane
AUSTRALIA
From: Robert I. <cor...@gm...> - 2010-08-03 04:30:56
Hi folks,

Transparent spreading of incoming network traffic load across CPUs is a new feature in the kernel. If/when at least partial packet processing is done in the kernel, it can have strong advantages. Where packet processing is done in userland, taking advantage of it will require a rewrite of the applications dealing with traffic.

http://kernelnewbies.org/Linux_2_6_35#head-94daf753b96280181e79a71ca4bb7f7a423e302a

Take care

Truly,
Robert Iakobashvili, Ph.D.
From: Val S. <va...@nv...> - 2010-07-29 00:11:02
Ron,

curl-loader can be used for a large number of URLs. The right way to do it is to use a small config file that references a file with the URLs as a tokens file. Grep the README for "tokens"; you'll see what I mean. The space for those URLs is allocated up front, so the limit is the amount of memory you have on the machine.

/Val

________________________________
From: Ron Parker <rp...@mo...>
To: "cur...@li..." <cur...@li...>
Sent: Wed, July 28, 2010 3:57:29 PM
Subject: Playback of HTTP access logs

[original message quoted in full; it appears below on this page]
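Getting from an access log to such a URL file can be sketched as below. This is a hypothetical helper, not part of curl-loader: it assumes Common Log Format input, and the exact file and tag syntax curl-loader expects for URL tokens is described in its README.

```python
import re

# Matches the request part of a Common Log Format line, e.g.
#   1.2.3.4 - - [28/Jul/2010:15:57:29 +0000] "GET /a/b.html HTTP/1.1" 200 123
REQUEST_RE = re.compile(r'"(GET|POST) (\S+) HTTP/[\d.]+"')

def extract_urls(log_lines):
    """Yield (method, url) for each GET/POST request line, skipping
    lines that do not parse."""
    for line in log_lines:
        m = REQUEST_RE.search(line)
        if m:
            yield m.group(1), m.group(2)
```

Writing the extracted URLs one per line gives the file that the small config then references; deduplicating first keeps memory use down, since the URL space is allocated up front.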
From: Ron P. <rp...@mo...> - 2010-07-28 23:00:51
|
Hi, I need to obtain or build a tool that can issue HTTP requests based on the contents of an HTTP access log file. I'm assuming, of course, that I would need to translate my proprietary log file into a properly formatted config file. The log file contains approximately 1 million HTTP requests (GET/POST) spread across approximately 30,000 clients. I don't know the distribution of the URLs, but let's say there are 200,000 unique URLs. While the curl-loader tool looks very interesting, all of the examples are built around a small number of URLs that are cycled over a large number of clients. Can curl-loader be used to support my requirements? What is the maximum number of URLs that may be contained in a config file (theoretical and tested)? If curl-loader is not the right tool for this application, do you have any other suggestions? Thanks in advance. Ron Parker Movik Networks |
From: Robert I. <cor...@gm...> - 2010-07-20 04:44:54
|
Hi Eric, On Tue, Jul 20, 2010 at 6:27 AM, ERIC FORMO <eri...@ho...> wrote: > The PROBLEM REPORTING FORM > > > CONFIGURATION-FILE (The most common source of problems): > Place the file inline here: > ########### GENERAL SECTION ################################ > BATCH_NAME= ftp > CLIENTS_NUM_MAX=100 > INTERFACE = eth1 > NETMASK=255.255.0.0 > #IP_SHARED_NUM = 1 > IP_ADDR_MIN= 10.0.0.1 > IP_ADDR_MAX= 10.0.100.253 #Actually - this is for self-control > CYCLES_NUM= -1 > URLS_NUM = 1 > > ########### URL SECTION #################################### > > #URL=ftp://tester:tester@10.0.10.52/100k-random.html > URL=ftp://tester:tester@10.0.0.101/100k-random.html > #URL=ftp://tester:tester@10.0.10.52/curl-loader-0.52.tar.gz > FRESH_CONNECT=1 # At least my proftpd has > problems with connection re-use > TIMER_URL_COMPLETION = 0 # In msec. When positive, Now it is enforced by > cancelling url fetch on timeout > TIMER_AFTER_URL_SLEEP =3000 > FTP_ACTIVE=1 > > DESCRIPTION: > 1. I can do regular ftp and wget using that ftp command, it just won't work > using this config file. This config file was working when I used a 24-bit > netmask and 10.0.10.52 as the server. Now I want to test more hosts and > would like to use a 16-bit netmask. My ifconfig on client and apache is the > following: > Client - eth1 Link encap:Ethernet HWaddr xxx > inet addr:10.10.10.75 Bcast:10.10.255.255 Mask:255.255.0.0 > Server - eth1 Link encap:Ethernet HWaddr xxx > inet addr:10.10.10.52 Bcast:10.10.255.255 Mask:255.255.0.0 > Thank you for the PRF! Could you explain a bit more: what happens, and what types of errors are you getting in the log file when starting with the -v option? Does your traffic route normally to the server? Anyhow, when using more clients, please consider using the tags below: CLIENTS_NUM_START - the number of clients to start the load with. If the tag is not present or zero, the CLIENTS_RAMPUP_INC value will be taken instead. 
If neither CLIENTS_NUM_START nor CLIENTS_RAMPUP_INC is present or has a positive value, the loader will take CLIENTS_NUM_MAX as the number of clients to start the load. CLIENTS_RAMPUP_INC - the number of clients to be added to the load in the auto mode every second until CLIENTS_NUM_MAX is reached. For machines with a single CPU we recommend keeping the number no higher than 50-100, whereas for dual-CPU machines you can keep it as large as 200-300. > > 2. When I use the Linux ftp client to transfer the 100k file once I get > 484.8 Mib/sec. When I use curl-loader to ftp the same file with 1 FTP > client I get 0.02971 Mib/sec. What is the cause? Does curl-loader not > calculate the Ti the same way the native ftp client does? > The slowdown is also seen in getting html files from my apache server. I > should be able to get close to line rate grabbing one 100K file with 10 > concurrent clients, but curl-loader can only get up to 276.8 Mib/sec. > curl-loader uses libcurl for its HTTP and FTP protocol stacks. I've seen a bug fixed there, related to FTP, which may be your case. Namely, on Sat, Jul 17, 2010 at 1:55 AM, Jan Van Boghout <li...@ma...> wrote: > Hello List, > > When I upgraded to libcurl 7.21.0, I started getting strange timeout errors > from FTP commands (immediately after sending them). After some research, > I've concluded that the FTP implementation is missing at least one timestamp > reset point, which the following patch adds: > > http://macrabbit.com/misc/libssh2/ftp.c.patch > > If someone who knows the FTP subsystem could review and include, that would > be most appreciated. There's also a documentation/comment error in > pingpong.h: response_time is described as a number of seconds when it's > actually milliseconds. Relevant line: > > long response_time; /* When no timeout is given, this is the amount of > seconds we await for a server response. 
*/ > Cheers, > Jan > > > > QUESTION/ SUGGESTION/ PATCH: > Can you please enlighten me as to why my ftp config file is not working? > Why can I not get close to line rate with either ftp or http? I tried with > -r and -t with no luck. > > You can do either of the following. The best option: take the most recent curl snapshot, place it into curl-loader, and try to build with it. http://curl.haxx.se/snapshots/ Or incorporate the patch that Jan Van Boghout provided. Ask me if you need more detailed guidance on the curl-loader make system. -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
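For reference, the ramp-up tags Robert describes would slot into the GENERAL section of a config like Eric's roughly as follows (the values are illustrative only, not a recommendation):

```
########### GENERAL SECTION ################################
BATCH_NAME= ftp
CLIENTS_NUM_MAX=100      # peak number of loading clients
CLIENTS_NUM_START=10     # begin the load with 10 clients
CLIENTS_RAMPUP_INC=10    # add 10 clients each second until CLIENTS_NUM_MAX
INTERFACE = eth1
NETMASK=255.255.0.0
IP_ADDR_MIN= 10.0.0.1
IP_ADDR_MAX= 10.0.100.253
CYCLES_NUM= -1
URLS_NUM = 1
```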
From: ERIC F. <eri...@ho...> - 2010-07-20 03:27:32
|
The PROBLEM REPORTING FORM makes our support more effective. Please subscribe to our mailing list here: https://lists.sourceforge.net/lists/listinfo/curl-loader-devel and mail the form to the mailing list: cur...@li... CURL-LOADER VERSION: 0.52, June 13, 2010 HW DETAILS: CPU/S and memory are must: Curl-loader Client - CPU - Pentium 4 3 GHz dual core MemTotal: 2056632 kB MemFree: 1923788 kB Server - CPU - Pentium 4 3 GHz dual core MemTotal: 4059056 kB MemFree: 3063160 kB LINUX DISTRIBUTION and KERNEL (uname -r): 2.6.32-21-server GCC VERSION (gcc -v): 4.4.3 (Ubuntu 4.4.3-4ubuntu5) COMPILATION AND MAKING OPTIONS (if defaults changed): optimization on/debug off COMMAND-LINE: curl-loader -f ./conf-examples/NAVL-ftp.conf CONFIGURATION-FILE (The most common source of problems): Place the file inline here: ########### GENERAL SECTION ################################ BATCH_NAME= ftp CLIENTS_NUM_MAX=100 INTERFACE = eth1 NETMASK=255.255.0.0 #IP_SHARED_NUM = 1 IP_ADDR_MIN= 10.0.0.1 IP_ADDR_MAX= 10.0.100.253 #Actually - this is for self-control CYCLES_NUM= -1 URLS_NUM = 1 ########### URL SECTION #################################### #URL=ftp://tester:tester@10.0.10.52/100k-random.html URL=ftp://tester:tester@10.0.0.101/100k-random.html #URL=ftp://tester:tester@10.0.10.52/curl-loader-0.52.tar.gz FRESH_CONNECT=1 # At least my proftpd has problems with connection re-use TIMER_URL_COMPLETION = 0 # In msec. When positive, Now it is enforced by cancelling url fetch on timeout TIMER_AFTER_URL_SLEEP =3000 FTP_ACTIVE=1 DOES THE PROBLEM AFFECT: COMPILATION? No LINKING? No EXECUTION? YES OTHER (please specify)? Have you run $make cleanall prior to $make ? DESCRIPTION: 1. I can do regular ftp and wget using that ftp command, it just won't work using this config file. This config file was working when I used a 24-bit netmask and 10.0.10.52 as the server. Now I want to test more hosts and would like to use a 16-bit netmask. 
My ifconfig on client and Apache server is the following: Client - eth1 Link encap:Ethernet HWaddr xxx inet addr:10.10.10.75 Bcast:10.10.255.255 Mask:255.255.0.0 Server - eth1 Link encap:Ethernet HWaddr xxx inet addr:10.10.10.52 Bcast:10.10.255.255 Mask:255.255.0.0 2. When I use the Linux ftp client to transfer the 100k file once I get 484.8 Mib/sec. When I use curl-loader to ftp the same file with 1 FTP client I get 0.02971 Mib/sec. What is the cause? Does curl-loader not calculate the Ti the same way the native ftp client does? The slowdown is also seen in getting html files from my Apache server. I should be able to get close to line rate grabbing one 100K file with 10 concurrent clients, but curl-loader can only get up to 276.8 Mib/sec. QUESTION/ SUGGESTION/ PATCH: Can you please enlighten me as to why my ftp config file is not working? Why can I not get close to line rate with either ftp or http? I tried with -r and -t with no luck. |
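One observation about the numbers in item 2 (my own back-of-envelope guess, not something confirmed in the thread): the config above sets TIMER_AFTER_URL_SLEEP=3000, and if the reported rate is averaged over the whole fetch-plus-sleep cycle, a 100 KB file fetched once per roughly 3 seconds lands very close to the reported 0.02971 MiB/sec.

```shell
# Back-of-envelope only: average throughput when each 100 KB fetch is
# followed by a 3000 ms TIMER_AFTER_URL_SLEEP, ignoring transfer time.
file_kb=100
sleep_ms=3000
avg_kb_per_s=$(( file_kb * 1000 / sleep_ms ))
echo "${avg_kb_per_s} KB/s"   # ~33 KB/s, i.e. roughly 0.03 MiB/s
```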
From: Robert I. <cor...@gm...> - 2010-07-19 07:53:30
|
Gentlemen, On Mon, Jul 19, 2010 at 5:58 AM, Val Shkolnikov <va...@nv...> wrote: > Vincent, > > So the message means that your server cannot keep up with this rate (800 > req/sec). To be more precise, you specified 8192 clients with the request > rate of 800/sec. When the request rate is fixed, the curl-loader generates > requests regardless of whether the server responds in time or not. So if > the server does not keep up, it does not respond in time to free a client > for the new request. The curl-loader uses another client to send a request > at the given rate. Eventually it runs out of clients and this is what you > see: all the clients are tied up waiting for the server to respond. The > message you see warns you that the request rate you want is not being > sustained. > > You have two options to fix the situation. The most obvious is to lower > the request rate: your server does not keep up. This is a load test after > all :-). The other approach is to drop the REQ_RATE tag. The curl-loader > will then wait for the server to respond before sending the next request; the > server will pace the exchange. Depending on what you want to model, choose > the scenario. > > /Val > Val, excellent clarification. Thank you very much! Vincent, any chance you can release your great script under GPL-2? Thank you in advance. -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
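Val's explanation reduces to a Little's-law estimate (my own sketch; the response time below is an assumed value, chosen purely so the arithmetic lands on the thread's numbers): at a fixed REQ_RATE, the clients tied up at any moment are roughly the request rate times the mean server response time, so 800 req/sec exhausts an 8192-client pool once responses average more than about ten seconds.

```shell
# Little's law sketch: in-flight clients ~= request rate * response time.
req_rate=800          # REQ_RATE, requests per second
resp_time_ms=10240    # assumed mean server response time, milliseconds
in_flight=$(( req_rate * resp_time_ms / 1000 ))
echo "$in_flight"     # 8192 -- at CLIENTS_NUM_MAX=8192 the free client list empties
```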
From: Val S. <va...@nv...> - 2010-07-19 03:26:46
|
Vincent, So the message means that your server cannot keep up with this rate (800 req/sec). To be more precise, you specified 8192 clients with the request rate of 800/sec. When the request rate is fixed, the curl-loader generates requests regardless of whether the server responds in time or not. So if the server does not keep up, it does not respond in time to free a client for the new request. The curl-loader uses another client to send a request at the given rate. Eventually it runs out of clients and this is what you see: all the clients are tied up waiting for the server to respond. The message you see warns you that the request rate you want is not being sustained. You have two options to fix the situation. The most obvious is to lower the request rate: your server does not keep up. This is a load test after all :-). The other approach is to drop the REQ_RATE tag. The curl-loader will then wait for the server to respond before sending the next request; the server will pace the exchange. Depending on what you want to model, choose the scenario. /Val --- Original message --- From: Vincent Blondel Sent: 7/18/2010 1:50 PM > > On 18 Jul 2010, at 22:34, Val Shkolnikov wrote: > >> Vincent hi, >> does your config file use the fixed request rate, ie. tag REQ_RATE = >> <number> >> ? >> /Val >> > > yes indeed, hereunder you can find the global section of the last file > generated > > ########### GENERAL SECTION ################## > BATCH_NAME=BlueCoat-ProxySG > CLIENTS_NUM_START=10 > CLIENTS_RAMPUP_INC=10 > CLIENTS_NUM_MAX=8192 > REQ_RATE=800 > INTERFACE=eth0 > NETMASK=16 > IP_ADDR_MIN=10.99.0.1 > IP_ADDR_MAX=10.99.255.253 > USER_AGENT="Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-us) > AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 Safari/531.22.7" > CYCLES_NUM=-1 > URLS_NUM=2359 > > ########### URLs SECTION ####################### > > >> >> ----- Original Message ---- >>> From: Vincent Blondel <vin...@ma... >>> <mailto:vin...@ma...>> >>> To: cur...@li... 
>>> <mailto:cur...@li...> >>> Sent: Sun, July 18, 2010 4:44:38 AM >>> Subject: no room in free client client list >>> >>> >>> Hello, >>> >>> I am working on a project to replace our current squid >>> infrastructure by a >>> brand new bluecoat platform. >>> >>> For this I developed a little script to automatically generate some >>> curl-loader configuration file. >>> >>> this is running pretty well, generating some thousands of urls >>> simulating >>> thousands of users surfing on the net but I get a little error >>> message that >>> continuously comes back. >>> >>> put_free_client - error: no room in free client client list. >>> mperform_hyper error: cannot free a client. >>> >>> I do not know what it is exactly and how I can solve it ? >>> >>> details of my env >>> ******************* >>> >>> Vmware Fusion 3.1.0 machine (Single Core/512Mb RAM) on Mac OSX >>> 10.6.4 (Dual >>> Core/2048Mb RAM) >>> >>> debian:/var/tmp# uname -a >>> Linux debian 2.6.26-2-amd64 #1 SMP Sun Jun 20 20:16:30 UTC 2010 >>> x86_64 >>> GNU/Linux >>> >>> net.ipv4.tcp_tw_recycle=1 >>> net.ipv4.tcp_tw_reuse=1 >>> fs.file-max=102286 >>> net.core.rmem_max=8388608 >>> net.core.wmem_max=8388608 >>> net.core.rmem_default=65536 >>> net.core.wmem_default=65536 >>> net.ipv4.tcp_mem=8388608 8388608 8388608 >>> net.ipv4.tcp_rmem=4096 87380 8388608 >>> net.ipv4.tcp_wmem=4096 65536 8388608 >>> >>> debian:/var/tmp# ulimit -a >>> core file size (blocks, -c) 0 >>> data seg size (kbytes, -d) unlimited >>> scheduling priority (-e) 0 >>> file size (blocks, -f) unlimited >>> pending signals (-i) 4096 >>> max locked memory (kbytes, -l) 32 >>> max memory size (kbytes, -m) unlimited >>> open files (-n) 19999 >>> pipe size (512 bytes, -p) 8 >>> POSIX message queues (bytes, -q) 819200 >>> real-time priority (-r) 0 >>> stack size (kbytes, -s) 8192 >>> cpu time (seconds, -t) unlimited >>> max user processes (-u) 4096 >>> virtual memory (kbytes, -v) unlimited >>> file locks (-x) unlimited >>> >>> my script >>> ********** 
>>> >>> wget \ >>> --cache=off \ >>> -U "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-us) >>> AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 >>> Safari/531.22.7" \ >>> --glob=off \ >>> --tries=3 \ >>> -x -p -H \ >>> -i urls 2>&1 | \ >>> awk -F'--' 'BEGIN{tmp=0}{if ($0 ~ /^--2010-.*-- http/ && $0 !~ >>> /robots/) >>> {blank=gsub(/ /,""); if (tmp==2) tmp=0; tmp+=1 ; print length($0) " >>> " tmp " " >>> $3}}' | sort -rn -k 1 | sort -n -k 2 | \ >>> awk 'BEGIN { >>> fetch=0 >>> gen=gen "########### GENERAL SECTION ##################\n" >>> gen=gen "BATCH_NAME=BlueCoat-ProxySG\n" >>> gen=gen "CLIENTS_NUM_START=10\n" >>> gen=gen "CLIENTS_RAMPUP_INC=10\n" >>> gen=gen "CLIENTS_NUM_MAX=8192\n" >>> gen=gen "REQ_RATE=800\n" >>> gen=gen "INTERFACE=eth0\n" >>> gen=gen "NETMASK=16\n" >>> gen=gen "IP_ADDR_MIN=10.99.0.2\n" >>> gen=gen "IP_ADDR_MAX=10.99.255.253\n" >>> gen=gen "USER_AGENT=\"Mozilla/5.0 (Macintosh; U; Intel Mac OS X >>> 10_6_3; >>> en-us) AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 >>> Safari/531.22.7\"\n" >>> gen=gen "CYCLES_NUM=-1" >>> print gen >>> }{ >>> url+=1 >>> urls=urls "URL=" $3 "\n" >>> if (fetch==100) fetch=0 ; fetch+=1 >>> urls=urls "URL_SHORT_NAME=\"" url "\"\n" >>> urls=urls "FETCH_PROBABILITY=" fetch "\n" >>> urls=urls "REQUEST_TYPE=GET\n" >>> urls=urls "HEADER=\"Accept: >>> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\"\n" >>> urls=urls "HEADER=\"Accept-Language: en-us,en;q=0.5\"\n" >>> urls=urls "HEADER=\"Accept-Encoding: gzip,deflate\"\n" >>> urls=urls "HEADER=\"Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\"\n" >>> urls=urls "HEADER=\"Keep-Alive: 115\"\n" >>> urls=urls "HEADER=\"Connection: keep-alive\"\n" >>> urls=urls "TIMER_URL_COMPLETION=3000\n" >>> urls=urls "TIMER_AFTER_URL_SLEEP=0-2000\n" >>> urls=urls "RANDOM_SEED=10\n\n" >>> } END { >>> print "URLS_NUM=" url "\n\n########### URLs SECTION >>> #######################\n\n" urls >>> }' 2>&1 | tee ProxySG.conf >>> >>> ls -l |egrep -v 'curl-loader' 
|awk '$0 ~ /^d/ {print "rm -rf " $8}' >>> |sh >>> curl-loader -x 10.30.30.16:8080 -f ProxySG.conf >>> >>> >>> many thks for your help. >>> >>> Vincent >>> >>> >>> ------------------------------------------------------------------------------ >>> This SF.net <http://SF.net> email is sponsored by Sprint >>> What will you do first with EVO, the first 4G phone? >>> Visit sprint.com/first <http://sprint.com/first> -- >>> http://p.sf.net/sfu/sprint-com-first >>> _______________________________________________ >>> curl-loader-devel mailing list >>> cur...@li... >>> <mailto:cur...@li...> >>> https://lists.sourceforge.net/lists/listinfo/curl-loader-devel >>> >> >> ------------------------------------------------------------------------------ >> This SF.net <http://SF.net> email is sponsored by Sprint >> What will you do first with EVO, the first 4G phone? >> Visit sprint.com/first <http://sprint.com/first> -- >> http://p.sf.net/sfu/sprint-com-first >> _______________________________________________ >> curl-loader-devel mailing list >> cur...@li... >> <mailto:cur...@li...> >> https://lists.sourceforge.net/lists/listinfo/curl-loader-devel > |
From: Vincent B. <vin...@ma...> - 2010-07-18 21:22:27
|
On 18 Jul 2010, at 22:34, Val Shkolnikov wrote: > Vincent hi, > does your config file use the fixed request rate, ie. tag REQ_RATE = <number> > ? > /Val > yes indeed, hereunder you can find the global section of the last file generated ########### GENERAL SECTION ################## BATCH_NAME=BlueCoat-ProxySG CLIENTS_NUM_START=10 CLIENTS_RAMPUP_INC=10 CLIENTS_NUM_MAX=8192 REQ_RATE=800 INTERFACE=eth0 NETMASK=16 IP_ADDR_MIN=10.99.0.1 IP_ADDR_MAX=10.99.255.253 USER_AGENT="Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-us) AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 Safari/531.22.7" CYCLES_NUM=-1 URLS_NUM=2359 ########### URLs SECTION ####################### > > > ----- Original Message ---- >> From: Vincent Blondel <vin...@ma...> >> To: cur...@li... >> Sent: Sun, July 18, 2010 4:44:38 AM >> Subject: no room in free client client list >> >> >> Hello, >> >> I am working on a project to replace our current squid infrastructure by a >> brand new bluecoat platform. >> >> For this I developed a little script to automatically generate some >> curl-loader configuration file. >> >> this is running pretty well, generating some thousands of urls simulating >> thousands of users surfing on the net but I get a little error message that >> continuously comes back. >> >> put_free_client - error: no room in free client client list. >> mperform_hyper error: cannot free a client. >> >> I do not know what it is exactly and how I can solve it ? 
>> >> details of my env >> ******************* >> >> Vmware Fusion 3.1.0 machine (Single Core/512Mb RAM) on Mac OSX 10.6.4 (Dual >> Core/2048Mb RAM) >> >> debian:/var/tmp# uname -a >> Linux debian 2.6.26-2-amd64 #1 SMP Sun Jun 20 20:16:30 UTC 2010 x86_64 >> GNU/Linux >> >> net.ipv4.tcp_tw_recycle=1 >> net.ipv4.tcp_tw_reuse=1 >> fs.file-max=102286 >> net.core.rmem_max=8388608 >> net.core.wmem_max=8388608 >> net.core.rmem_default=65536 >> net.core.wmem_default=65536 >> net.ipv4.tcp_mem=8388608 8388608 8388608 >> net.ipv4.tcp_rmem=4096 87380 8388608 >> net.ipv4.tcp_wmem=4096 65536 8388608 >> >> debian:/var/tmp# ulimit -a >> core file size (blocks, -c) 0 >> data seg size (kbytes, -d) unlimited >> scheduling priority (-e) 0 >> file size (blocks, -f) unlimited >> pending signals (-i) 4096 >> max locked memory (kbytes, -l) 32 >> max memory size (kbytes, -m) unlimited >> open files (-n) 19999 >> pipe size (512 bytes, -p) 8 >> POSIX message queues (bytes, -q) 819200 >> real-time priority (-r) 0 >> stack size (kbytes, -s) 8192 >> cpu time (seconds, -t) unlimited >> max user processes (-u) 4096 >> virtual memory (kbytes, -v) unlimited >> file locks (-x) unlimited >> >> my script >> ********** >> >> wget \ >> --cache=off \ >> -U "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-us) >> AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 Safari/531.22.7" \ >> --glob=off \ >> --tries=3 \ >> -x -p -H \ >> -i urls 2>&1 | \ >> awk -F'--' 'BEGIN{tmp=0}{if ($0 ~ /^--2010-.*-- http/ && $0 !~ /robots/) >> {blank=gsub(/ /,""); if (tmp==2) tmp=0; tmp+=1 ; print length($0) " " tmp " " >> $3}}' | sort -rn -k 1 | sort -n -k 2 | \ >> awk 'BEGIN { >> fetch=0 >> gen=gen "########### GENERAL SECTION ##################\n" >> gen=gen "BATCH_NAME=BlueCoat-ProxySG\n" >> gen=gen "CLIENTS_NUM_START=10\n" >> gen=gen "CLIENTS_RAMPUP_INC=10\n" >> gen=gen "CLIENTS_NUM_MAX=8192\n" >> gen=gen "REQ_RATE=800\n" >> gen=gen "INTERFACE=eth0\n" >> gen=gen "NETMASK=16\n" >> gen=gen "IP_ADDR_MIN=10.99.0.2\n" 
>> gen=gen "IP_ADDR_MAX=10.99.255.253\n" >> gen=gen "USER_AGENT=\"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; >> en-us) AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 >> Safari/531.22.7\"\n" >> gen=gen "CYCLES_NUM=-1" >> print gen >> }{ >> url+=1 >> urls=urls "URL=" $3 "\n" >> if (fetch==100) fetch=0 ; fetch+=1 >> urls=urls "URL_SHORT_NAME=\"" url "\"\n" >> urls=urls "FETCH_PROBABILITY=" fetch "\n" >> urls=urls "REQUEST_TYPE=GET\n" >> urls=urls "HEADER=\"Accept: >> text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\"\n" >> urls=urls "HEADER=\"Accept-Language: en-us,en;q=0.5\"\n" >> urls=urls "HEADER=\"Accept-Encoding: gzip,deflate\"\n" >> urls=urls "HEADER=\"Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\"\n" >> urls=urls "HEADER=\"Keep-Alive: 115\"\n" >> urls=urls "HEADER=\"Connection: keep-alive\"\n" >> urls=urls "TIMER_URL_COMPLETION=3000\n" >> urls=urls "TIMER_AFTER_URL_SLEEP=0-2000\n" >> urls=urls "RANDOM_SEED=10\n\n" >> } END { >> print "URLS_NUM=" url "\n\n########### URLs SECTION >> #######################\n\n" urls >> }' 2>&1 | tee ProxySG.conf >> >> ls -l |egrep -v 'curl-loader' |awk '$0 ~ /^d/ {print "rm -rf " $8}' |sh >> curl-loader -x 10.30.30.16:8080 -f ProxySG.conf >> >> >> many thks for your help. >> >> Vincent >> >> >> ------------------------------------------------------------------------------ >> This SF.net email is sponsored by Sprint >> What will you do first with EVO, the first 4G phone? >> Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first >> _______________________________________________ >> curl-loader-devel mailing list >> cur...@li... >> https://lists.sourceforge.net/lists/listinfo/curl-loader-devel >> > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Sprint > What will you do first with EVO, the first 4G phone? 
> Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first > _______________________________________________ > curl-loader-devel mailing list > cur...@li... > https://lists.sourceforge.net/lists/listinfo/curl-loader-devel |
From: Val S. <va...@nv...> - 2010-07-18 20:35:00
|
Vincent hi, does your config file use the fixed request rate, ie. tag REQ_RATE = <number> ? /Val ----- Original Message ---- > From: Vincent Blondel <vin...@ma...> > To: cur...@li... > Sent: Sun, July 18, 2010 4:44:38 AM > Subject: no room in free client client list > > > Hello, > > I am working on a project to replace our current squid infrastructure by a >brand new bluecoat platform. > > For this I developed a little script to automatically generate some >curl-loader configuration file. > > this is running pretty well, generating some thousands of urls simulating >thousands of users surfing on the net but I get a little error message that >continuously comes back. > > put_free_client - error: no room in free client client list. > mperform_hyper error: cannot free a client. > > I do not know what it is exactly and how I can solve it ? > > details of my env > ******************* > > Vmware Fusion 3.1.0 machine (Single Core/512Mb RAM) on Mac OSX 10.6.4 (Dual >Core/2048Mb RAM) > > debian:/var/tmp# uname -a > Linux debian 2.6.26-2-amd64 #1 SMP Sun Jun 20 20:16:30 UTC 2010 x86_64 >GNU/Linux > > net.ipv4.tcp_tw_recycle=1 > net.ipv4.tcp_tw_reuse=1 > fs.file-max=102286 > net.core.rmem_max=8388608 > net.core.wmem_max=8388608 > net.core.rmem_default=65536 > net.core.wmem_default=65536 > net.ipv4.tcp_mem=8388608 8388608 8388608 > net.ipv4.tcp_rmem=4096 87380 8388608 > net.ipv4.tcp_wmem=4096 65536 8388608 > > debian:/var/tmp# ulimit -a > core file size (blocks, -c) 0 > data seg size (kbytes, -d) unlimited > scheduling priority (-e) 0 > file size (blocks, -f) unlimited > pending signals (-i) 4096 > max locked memory (kbytes, -l) 32 > max memory size (kbytes, -m) unlimited > open files (-n) 19999 > pipe size (512 bytes, -p) 8 > POSIX message queues (bytes, -q) 819200 > real-time priority (-r) 0 > stack size (kbytes, -s) 8192 > cpu time (seconds, -t) unlimited > max user processes (-u) 4096 > virtual memory (kbytes, -v) unlimited > file locks (-x) unlimited > > my script > 
********** > > wget \ > --cache=off \ > -U "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; en-us) >AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 Safari/531.22.7" \ > --glob=off \ > --tries=3 \ > -x -p -H \ > -i urls 2>&1 | \ > awk -F'--' 'BEGIN{tmp=0}{if ($0 ~ /^--2010-.*-- http/ && $0 !~ /robots/) >{blank=gsub(/ /,""); if (tmp==2) tmp=0; tmp+=1 ; print length($0) " " tmp " " >$3}}' | sort -rn -k 1 | sort -n -k 2 | \ > awk 'BEGIN { > fetch=0 > gen=gen "########### GENERAL SECTION ##################\n" > gen=gen "BATCH_NAME=BlueCoat-ProxySG\n" > gen=gen "CLIENTS_NUM_START=10\n" > gen=gen "CLIENTS_RAMPUP_INC=10\n" > gen=gen "CLIENTS_NUM_MAX=8192\n" > gen=gen "REQ_RATE=800\n" > gen=gen "INTERFACE=eth0\n" > gen=gen "NETMASK=16\n" > gen=gen "IP_ADDR_MIN=10.99.0.2\n" > gen=gen "IP_ADDR_MAX=10.99.255.253\n" > gen=gen "USER_AGENT=\"Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_3; >en-us) AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 >Safari/531.22.7\"\n" > gen=gen "CYCLES_NUM=-1" > print gen > }{ > url+=1 > urls=urls "URL=" $3 "\n" > if (fetch==100) fetch=0 ; fetch+=1 > urls=urls "URL_SHORT_NAME=\"" url "\"\n" > urls=urls "FETCH_PROBABILITY=" fetch "\n" > urls=urls "REQUEST_TYPE=GET\n" > urls=urls "HEADER=\"Accept: >text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\"\n" > urls=urls "HEADER=\"Accept-Language: en-us,en;q=0.5\"\n" > urls=urls "HEADER=\"Accept-Encoding: gzip,deflate\"\n" > urls=urls "HEADER=\"Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\"\n" > urls=urls "HEADER=\"Keep-Alive: 115\"\n" > urls=urls "HEADER=\"Connection: keep-alive\"\n" > urls=urls "TIMER_URL_COMPLETION=3000\n" > urls=urls "TIMER_AFTER_URL_SLEEP=0-2000\n" > urls=urls "RANDOM_SEED=10\n\n" > } END { > print "URLS_NUM=" url "\n\n########### URLs SECTION >#######################\n\n" urls > }' 2>&1 | tee ProxySG.conf > > ls -l |egrep -v 'curl-loader' |awk '$0 ~ /^d/ {print "rm -rf " $8}' |sh > curl-loader -x 10.30.30.16:8080 -f ProxySG.conf > > > many thks for 
your help. > > Vincent > > > ------------------------------------------------------------------------------ > This SF.net email is sponsored by Sprint > What will you do first with EVO, the first 4G phone? > Visit sprint.com/first -- http://p.sf.net/sfu/sprint-com-first > _______________________________________________ > curl-loader-devel mailing list > cur...@li... > https://lists.sourceforge.net/lists/listinfo/curl-loader-devel > |