Thread: TIMER_AFTER_URL_SLEEP question
From: Pranav D. <pra...@gm...> - 2008-02-16 03:44:27
Hello All,

I am working with version 0.44. I was curious about how the TIMER_AFTER_URL_SLEEP option works. I have set it fairly high (20000), but I still get 70+ requests every 3 seconds with just one client. I was expecting one request every 20 seconds. I am trying to control the load, so what am I missing here?

Thanks
-- Pranav

P.S. Here is my config.

########### GENERAL SECTION ################################
BATCH_NAME=small
CLIENTS_NUM_MAX=1 # Same as CLIENTS_NUM
CLIENTS_NUM_START=1
CLIENTS_RAMPUP_INC=1
INTERFACE=eth1
NETMASK=16
IP_ADDR_MIN=12.0.0.1
IP_ADDR_MAX=12.0.1.250
# Actually - this is for self-control
CYCLES_NUM=2000
URLS_NUM=4

########### URL SECTION ####################################
URL=http://172.16.55.200/MJOLNIRRAND/websites/cisco/www.cisco.com/
URL=http://172.16.55.200/MJOLNIRRAND/websites/cisco/www.cisco.com/swa/c/home.css
URL=http://172.16.55.200/MJOLNIRRAND/websites/cnn/i.l.cnn.net/cnn/.element/img/2.0/weather/03/03.gif
URL=http://172.16.55.200/MJOLNIRRAND/websites/cnn/i.l.cnn.net/cnn/.element/img/2.0/weather/03/17.gif
URL_SHORT_NAME="url-http"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION=0 # In msec. When positive, enforced by cancelling the URL fetch on timeout
TIMER_AFTER_URL_SLEEP=20000
From: Robert I. <cor...@gm...> - 2008-02-16 16:06:52
Hi Pranav,

On 2/16/08, Pranav Desai <pra...@gm...> wrote:
> I am working with version 0.44. I was curious about how the
> TIMER_AFTER_URL_SLEEP option works. I have set it fairly high
> (20000), but I still get 70+ requests every 3 seconds with just one
> client. I was expecting one request every 20 seconds. I am trying to
> control the load, so what am I missing here?

Your intentions are clear, but the syntax of your batch file needs to be corrected in the URL section. Listing several URL= lines followed by a single group of per-URL tags, as below, is not supported:

> URL=http://172.16.55.200/MJOLNIRRAND/websites/cisco/www.cisco.com/
> URL=http://172.16.55.200/MJOLNIRRAND/websites/cisco/www.cisco.com/swa/c/home.css
> URL=http://172.16.55.200/MJOLNIRRAND/websites/cnn/i.l.cnn.net/cnn/.element/img/2.0/weather/03/03.gif
> URL=http://172.16.55.200/MJOLNIRRAND/websites/cnn/i.l.cnn.net/cnn/.element/img/2.0/weather/03/17.gif

You can correct it to the following, where each URL gets its own group of tags:

URL=http://172.16.55.200/MJOLNIRRAND/websites/cisco/www.cisco.com/
URL_SHORT_NAME="url-main"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION=0
TIMER_AFTER_URL_SLEEP=0

URL=http://172.16.55.200/MJOLNIRRAND/websites/cisco/www.cisco.com/swa/c/home.css
URL_SHORT_NAME="url-css"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION=0
TIMER_AFTER_URL_SLEEP=0

URL=http://172.16.55.200/MJOLNIRRAND/websites/cnn/i.l.cnn.net/cnn/.element/img/2.0/weather/03/03.gif
URL_SHORT_NAME="url-03-gif"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION=0
TIMER_AFTER_URL_SLEEP=0

URL=http://172.16.55.200/MJOLNIRRAND/websites/cnn/i.l.cnn.net/cnn/.element/img/2.0/weather/03/17.gif
URL_SHORT_NAME="url-17-gif"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION=0
TIMER_AFTER_URL_SLEEP=20000

This way each client sleeps only after it has fetched the whole page.

Best wishes!

Sincerely,
Robert Iakobashvili
"Light will come from Jerusalem"
...........................................................
http://curl-loader.sourceforge.net
An open-source web testing and traffic generation tool.
From: Pranav D. <pra...@gm...> - 2008-02-16 20:18:36
Great! Thanks for the help.

Do you guys have any plans for adding proxy support?

Thanks again.
-- Pranav

On Feb 16, 2008 8:06 AM, Robert Iakobashvili <cor...@gm...> wrote:
> You can correct it to the following, where each URL gets its own
> group of tags:
> [corrected URL section quoted above, snipped]
> This way each client sleeps only after it has fetched the whole page.

--
------------------------------
http://pd.dnsalias.org
From: Robert I. <cor...@gm...> - 2008-02-17 04:38:11
Pranav,

On Feb 16, 2008 10:18 PM, Pranav Desai <pra...@gm...> wrote:
> Great! Thanks for the help.
>
> Do you guys have any plans for adding proxy support?

It is rather easy. Since libcurl already has such support, what you need to do is add handling for new configuration-file tags to the parser and then set the corresponding options on the libcurl handle (the CURL* object).

If you wish to add such support, I can guide you.

Sincerely,
Robert Iakobashvili
"Light will come from Jerusalem"
...........................................................
http://curl-loader.sourceforge.net
An open-source web testing and traffic generation tool.
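A minimal sketch of the libcurl side of what Robert describes. The batch-file tag and the helper name are hypothetical (no such tag exists in curl-loader 0.44); CURLOPT_PROXY and CURLOPT_PROXYPORT are standard libcurl options:

#include <curl/curl.h>

/* Hypothetical: a value parsed from a new PROXY= tag in the batch file. */
static const char *proxy_from_config = "http://192.168.0.1:3128";

static void setup_client_proxy(CURL *handle)
{
    /* CURLOPT_PROXY accepts "host" or "scheme://host:port". */
    curl_easy_setopt(handle, CURLOPT_PROXY, proxy_from_config);
    /* Optionally override the proxy port separately. */
    curl_easy_setopt(handle, CURLOPT_PROXYPORT, 3128L);
}

The parser side would simply store the tag's value and call a helper like this once per client, right after the CURL* handle is created.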
From: Pranav D. <pra...@gm...> - 2008-02-19 03:12:59
On Feb 16, 2008 8:38 PM, Robert Iakobashvili <cor...@gm...> wrote:
> It is rather easy. Since libcurl already has such support, what you
> need to do is add handling for new configuration-file tags to the
> parser and then set the corresponding options on the libcurl handle
> (the CURL* object).

Will definitely look into that.

Regarding the original question about TIMER_AFTER_URL_SLEEP: I am trying to increase the number of users/connections (5000+) but keep the bandwidth low, since my web server can only support 100 Mbps. I thought increasing the above timer (to 2 sec) would help, but the number of active users seems to be very low compared to what I have set. Looking at tcpdump, some users just don't seem to send requests for a long time (a lot more than 2 sec).

What do you think the problem could be? Can you suggest any other way to achieve the above test setup? I don't want to use TRANSFER_RATE_LIMIT, since I want the clients to fetch at full bandwidth.

--
------------------------------
http://pd.dnsalias.org
From: Robert I. <cor...@gm...> - 2008-02-19 06:03:24
Pranav,

On Feb 19, 2008 5:12 AM, Pranav Desai <pra...@gm...> wrote:
> I am trying to increase the number of users/connections (5000+) but
> keep the bandwidth low [...] some users just don't seem to send
> requests for a long time (a lot more than 2 sec).

Generally speaking, there may be three sources of the problem:
a) the client (load-generation) side;
b) a network bottleneck, including NATs, firewall limits, etc.;
c) the web-server side.

If you suspect that the problem is on the curl-loader side, please go through the procedure described in the FAQs here:

http://curl-loader.sourceforge.net/doc/faq.html#big-load

It may be, for example, a limit on open descriptors at your client computer or at the server, among other problems.

If you have a multi-core or multi-CPU machine running curl-loader, you can effectively increase your loading power: please use the -t 2 or -t 4 options according to the number of cores/processors.

--
Sincerely,
Robert Iakobashvili
"Light will come from Jerusalem"
...........................................................
http://curl-loader.sourceforge.net
An open-source web testing and traffic generation tool.
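One concrete item from the big-load FAQ procedure is the open-descriptor limit: each virtual client needs at least one socket, so 5000+ clients exceed the common default soft limit of 1024. A minimal sketch of checking and raising RLIMIT_NOFILE from within a loader process (the target value of 20000 is illustrative; raising the hard limit generally requires root):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current soft/hard limits on open file descriptors. */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("soft limit: %lu, hard limit: %lu\n",
           (unsigned long) rl.rlim_cur, (unsigned long) rl.rlim_max);

    /* Ask for enough descriptors for 5000+ clients. */
    rl.rlim_cur = 20000;
    rl.rlim_max = 20000;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
        perror("setrlimit (may need root or a higher hard limit)");

    return 0;
}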
From: Pranav D. <pra...@gm...> - 2008-02-19 07:19:41
On Feb 18, 2008 10:03 PM, Robert Iakobashvili <cor...@gm...> wrote:
> Generally speaking, there may be three sources of the problem:
> a) the client (load-generation) side;
> b) a network bottleneck, including NATs, firewall limits, etc.;
> c) the web-server side.
>
> If you suspect that the problem is on the curl-loader side, please go
> through the procedure described in the FAQs here:
> http://curl-loader.sourceforge.net/doc/faq.html#big-load

Hello Robert,

I will look into those issues. One more thing I noticed was that different bind addresses are using the same connection. Is this expected? Or should different addresses use a new connection, irrespective of the -r option? Here is a log:

0 1 (12.0.0.1) :== Info: About to connect() to 172.16.55.200 port 80 (#0) : eff-url: , url:
0 1 (12.0.0.1) :== Info: Trying 172.16.55.200... : eff-url: , url:
0 1 (12.0.0.1) :== Info: Bind local address to 12.0.0.1 : eff-url: , url:
0 1 (12.0.0.1) :== Info: Local port: 38494 : eff-url: , url:
0 1 (12.0.0.1) :== Info: Connected to 172.16.55.200 (172.16.55.200) port 80 (#0) : eff-url: , url:
0 1 (12.0.0.1) => Send header: eff-url: , url:
0 1 (12.0.0.1) <= Recv header: eff-url: , url:
0 1 (12.0.0.1) :!! 200 OK: eff-url: , url:
0 1 (12.0.0.1) <= Recv header: eff-url: , url:
...
0 1 (12.0.0.1) <= Recv header: eff-url: , url:
0 1 (12.0.0.1) <= Recv data: eff-url: , url:
...
0 1 (12.0.0.1) <= Recv data: eff-url: , url:
0 1 (12.0.0.1) :== Info: Connection #0 to host 172.16.55.200 left intact : eff-url: , url:
0 2 (12.0.0.2) :== Info: Re-using existing connection! (#0) with host 172.16.55.200 : eff-url: , url:
0 2 (12.0.0.2) :== Info: Connected to 172.16.55.200 (172.16.55.200) port 80 (#0) : eff-url: , url:
0 1 (12.0.0.1) :== Info: About to connect() to 172.16.55.200 port 80 (#1)

Here 12.0.0.2 seems to be using the same connection as 12.0.0.1.

Thanks for your help.

-- Pranav

--
------------------------------
http://pd.dnsalias.org
From: Robert I. <cor...@gm...> - 2008-02-19 10:24:49
Hi Pranav,

On Feb 19, 2008 9:19 AM, Pranav Desai <pra...@gm...> wrote:
> I will look into those issues. One more thing I noticed was that
> different bind addresses are using the same connection. Is this
> expected? Or should different addresses use a new connection,
> irrespective of the -r option?

Connection re-use is OK; don't use -r. I would rather suspect a shortage of memory and/or a lack of enough open descriptors. Please follow the big-load procedure step by step and understand the memory concerns.

Sincerely,
Robert Iakobashvili
"Light will come from Jerusalem"
...........................................................
http://curl-loader.sourceforge.net
An open-source web testing and traffic generation tool.
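At the libcurl level, re-use versus fresh connections is controlled per handle by two real options, CURLOPT_FRESH_CONNECT and CURLOPT_FORBID_REUSE; whether curl-loader's -r flag maps onto exactly these is an assumption based on this exchange. A minimal sketch:

#include <curl/curl.h>

/* By default libcurl keeps completed connections in its pool and
 * re-uses them for later requests to the same host -- the
 * "left intact" / "Re-using existing connection!" lines in the log. */
static void force_fresh_connections(CURL *handle)
{
    /* Open a new connection for this transfer instead of a pooled one. */
    curl_easy_setopt(handle, CURLOPT_FRESH_CONNECT, 1L);
    /* Close the connection when the transfer completes, so it cannot
     * be re-used by a later transfer. */
    curl_easy_setopt(handle, CURLOPT_FORBID_REUSE, 1L);
}

Forcing fresh connections costs a TCP handshake per request, which is why Robert advises against it for high client counts.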
From: Robert I. <cor...@gm...> - 2008-02-19 10:27:27
Hi Pranav,

On Feb 19, 2008 12:24 PM, Robert Iakobashvili <cor...@gm...> wrote:

Try also to exclude various network bottlenecks: for example, getting a 10 Mbps network instead of 100 Mbps due to some weird issue, your router/switch settings, etc.

Sincerely,
Robert Iakobashvili
"Light will come from Jerusalem"
...........................................................
http://curl-loader.sourceforge.net
An open-source web testing and traffic generation tool.
From: Pranav D. <pra...@gm...> - 2008-02-19 21:46:51
On Feb 19, 2008 2:27 AM, Robert Iakobashvili <cor...@gm...> wrote:
> Try also to exclude various network bottlenecks: for example, getting
> a 10 Mbps network instead of 100 Mbps due to some weird issue, your
> router/switch settings, etc.

Well, the log I sent you was for only 5 users, so I don't see how it could be a resource issue. In any case I will dig around a bit and see if I have messed up something, especially with heavier load testing.

By the way, to use a proxy you can simply set the environment variable (http_proxy=http://proxy_addr:proxy_port) and libcurl will do it. Pretty easy :-)

Thanks for all your help.

-- Pranav

--
------------------------------
http://pd.dnsalias.org
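A minimal sketch of what Pranav describes: nothing proxy-specific in the code, since libcurl picks up the http_proxy environment variable on its own when CURLOPT_PROXY is not set (the URL below is just the test server from this thread):

#include <curl/curl.h>

int main(void)
{
    /* Run as: http_proxy=http://proxy_addr:proxy_port ./loader-test
     * libcurl reads http_proxy from the environment by itself. */
    CURL *handle = curl_easy_init();
    CURLcode rc = CURLE_FAILED_INIT;

    if (handle) {
        curl_easy_setopt(handle, CURLOPT_URL, "http://172.16.55.200/");
        rc = curl_easy_perform(handle);
        curl_easy_cleanup(handle);
    }
    return (rc == CURLE_OK) ? 0 : 1;
}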
From: Robert I. <cor...@gm...> - 2008-02-20 05:29:23
Pranav,

On Feb 19, 2008 11:46 PM, Pranav Desai <pra...@gm...> wrote:
> Well, the log I sent you was for only 5 users, so I don't see how it
> could be a resource issue. In any case I will dig around a bit and
> see if I have messed up something, especially with heavier load
> testing.

The recommendations are related to "heavy" load. As you can see in the FAQs, you should have about 35K of memory per client, and there are other issues covered in the FAQs.

> By the way, to use a proxy you can simply set the environment variable
> (http_proxy=http://proxy_addr:proxy_port) and libcurl will do it.
> Pretty easy :-)

So we are supporting HTTP/FTP proxies right now. :) I will add this tip to our FAQs.

> Thanks for all your help.

Thank you.

--
Sincerely,
Robert Iakobashvili
"Light will come from Jerusalem"
...........................................................
http://curl-loader.sourceforge.net
An open-source web testing and traffic generation tool.
From: Pranav D. <pra...@gm...> - 2008-02-21 03:21:12
On Tue, Feb 19, 2008 at 9:29 PM, Robert Iakobashvili <cor...@gm...> wrote:
> So we are supporting HTTP/FTP proxies right now. :)

That's right :-)

> I will add this tip to our FAQs.

--
------------------------------
http://pd.dnsalias.org