curl-loader-devel Mailing List for curl-loader - web application testing (Page 26)
Status: Alpha
From: owen n. <fre...@gm...> - 2008-11-11 16:29:15
I could do this work, so I hope to get an existing module. Thanks.

gtalk: fre...@gm...
From: alo s. <asi...@ic...> - 2008-11-11 14:14:12
Hi Robert,

I wish to do that, but I'm not sure how to do it. Anyway, I will try it, and if there is a success, I will let you know.

Thanks,
Alo

Robert Iakobashvili wrote:
> Hi Alo,
>
> On Tue, Nov 11, 2008 at 12:05 AM, alo sinnathamby <asi...@ic...> wrote:
>> Hi Robert,
>>
>> You are right. I have run it in the following order:
>>
>> 1 - make cleanall; make debug=1 optimize=0
>>     => throws the same error
>> 2 - make cleanall; make debug=1 optimize=1
>>     => works fine
>>
>> Greatly appreciate your help, and thank you.
>>
>> Thanks,
>> Alo
>
> If you could debug and suggest a fix, it will be appreciated.
>
> --
> Truly,
> Robert Iakobashvili, Ph.D.
> Assistive technology that understands you
From: Robert I. <cor...@gm...> - 2008-11-11 04:37:59
Hi Alo,

On Tue, Nov 11, 2008 at 12:05 AM, alo sinnathamby <asi...@ic...> wrote:
> Hi Robert,
>
> You are right. I have run it in the following order:
>
> 1 - make cleanall; make debug=1 optimize=0
>     => throws the same error
> 2 - make cleanall; make debug=1 optimize=1
>     => works fine
>
> Greatly appreciate your help, and thank you.
>
> Thanks,
> Alo

If you could debug and suggest a fix, it will be appreciated.

--
Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
From: alo s. <asi...@ic...> - 2008-11-10 22:06:31
Hi Robert,

You are right. I have run it in the following order:

1 - make cleanall; make debug=1 optimize=0
    => throws the same error
2 - make cleanall; make debug=1 optimize=1
    => works fine

Greatly appreciate your help, and thank you.

Thanks,
Alo

Robert Iakobashvili wrote:
> On Mon, Nov 10, 2008 at 7:01 PM, alo sinnathamby <asi...@ic...> wrote:
>> it is a 64-bit distribution
>
> Do you have both 64-bit user space and a 64-bit kernel? Please verify it.
>
> Compile with debugging symbols and
> a) without optimization:
>    make cleanall; make debug=1 optimize=0
> b) with optimization:
>    make cleanall; make debug=1 optimize=1
>
> See if you can reproduce the problem, and try to debug it with gdb, or gdb and ddd.
> It looks like something rather simple to debug and fix, like some optimization bug.
> One more option is to try the recent svn version and see if it helps.
>
> --
> Truly,
> Robert Iakobashvili, Ph.D.
> Assistive technology that understands you
From: Robert I. <cor...@gm...> - 2008-11-10 17:22:30
On Mon, Nov 10, 2008 at 7:01 PM, alo sinnathamby <asi...@ic...> wrote:
> it is a 64-bit distribution

Do you have both 64-bit user space and a 64-bit kernel? Please verify it.

Compile with debugging symbols and

a) without optimization:
   make cleanall; make debug=1 optimize=0

b) with optimization:
   make cleanall; make debug=1 optimize=1

See if you can reproduce the problem, and try to debug it with gdb, or gdb and ddd. It looks like something rather simple to debug and fix, like some optimization bug. One more option is to try the recent svn version and see if it helps.

--
Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
From: alo s. <asi...@ic...> - 2008-11-10 17:04:12
It is a 64-bit distribution.

Robert Iakobashvili wrote:
> Hi Alo,
>
> On Mon, Nov 10, 2008 at 6:13 PM, alo sinnathamby <asi...@ic...> wrote:
>> Hi Robert,
>>
>> Thanks for your reply.
>>
>> We have two test clients, and the environment on both machines is the same, except that client 2 has 8 GB RAM and client 1 has 4 GB RAM. The test script works fine with test client 1 and gives the following error with test client 2:
>>
>> setup_curl_handle_appl - error: post_data is NULL.
>> setup_curl_handle_init - error: setup_curl_handle_appl () failed.
>>
>> These are my client machines:
>>
>> Test client 1:
>> Operating System: CentOS release 5.2 (Final)
>> CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
>> RAM: 4 GB DDR2
>>
>> Test client 2:
>> Operating System: CentOS release 5.2 (Final)
>> CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
>> RAM: 8 GB DDR2
>>
>> I have also followed the recommendations given in the following links:
>>
>> section 7.2 in http://curl-loader.sourceforge.net/doc/faq.html
>> http://curl-loader.sourceforge.net/high-load-hw/index.html
>>
>> If you have any clue as to why we get this type of error on only one client, please advise us. Greatly appreciate your help.
>
> Is it a 32-bit distribution or 64-bit?
>
> --
> Truly,
> Robert Iakobashvili, Ph.D.
> Assistive technology that understands you
From: Robert I. <cor...@gm...> - 2008-11-10 16:55:56
Hi Alo,

On Mon, Nov 10, 2008 at 6:13 PM, alo sinnathamby <asi...@ic...> wrote:
> Hi Robert,
>
> Thanks for your reply.
>
> We have two test clients, and the environment on both machines is the same, except that client 2 has 8 GB RAM and client 1 has 4 GB RAM. The test script works fine with test client 1 and gives the following error with test client 2:
>
> setup_curl_handle_appl - error: post_data is NULL.
> setup_curl_handle_init - error: setup_curl_handle_appl () failed.
>
> These are my client machines:
>
> Test client 1:
> Operating System: CentOS release 5.2 (Final)
> CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
> RAM: 4 GB DDR2
>
> Test client 2:
> Operating System: CentOS release 5.2 (Final)
> CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
> RAM: 8 GB DDR2
>
> I have also followed the recommendations given in the following links:
>
> section 7.2 in http://curl-loader.sourceforge.net/doc/faq.html
> http://curl-loader.sourceforge.net/high-load-hw/index.html
>
> If you have any clue as to why we get this type of error on only one client, please advise us. Greatly appreciate your help.

Is it a 32-bit distribution or 64-bit?

--
Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
From: alo s. <asi...@ic...> - 2008-11-10 16:14:51
Hi Robert,

Thanks for your reply.

We have two test clients, and the environment on both machines is the same, except that client 2 has 8 GB RAM and client 1 has 4 GB RAM. The test script works fine with test client 1 and gives the following error with test client 2:

setup_curl_handle_appl - error: post_data is NULL.
setup_curl_handle_init - error: setup_curl_handle_appl () failed.
setup_url error: setup_curl_handle_init - failed
setup_curl_handle_appl - error: post_data is NULL.
setup_curl_handle_init - error: setup_curl_handle_appl () failed.
setup_url error: setup_curl_handle_init - failed
setup_curl_handle_appl - error: post_data is NULL.
setup_curl_handle_init - error: setup_curl_handle_appl () failed.
setup_url error: setup_curl_handle_init - failed
setup_curl_handle_appl - error: post_data is NULL.
setup_curl_handle_init - error: setup_curl_handle_appl () failed.

This is my test script:

########### GENERAL SECTION ################################
BATCH_NAME=NOV10_SA_login
CLIENTS_NUM_MAX = 10000
CLIENTS_NUM_START=50
CLIENTS_RAMPUP_INC=50
INTERFACE=eth0
NETMASK=24
IP_ADDR_MIN=xx.x.x.xxx
IP_ADDR_MAX=xx.x.x.xxx
CYCLES_NUM= -1
URLS_NUM=2

########### URL SECTION ##################################
# GET-part
URL=http://xx.x.x.xxx/main
URL_SHORT_NAME="Login-GET"
#URL_DONT_CYCLE = 1
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
TIMER_AFTER_URL_SLEEP =0

# POST-part
URL=http://xx.x.x.xxx/login
#URL_USE_CURRENT= 1
URL_SHORT_NAME="Login-POST"
#URL_DONT_CYCLE = 1
USERNAME=test
PASSWORD=test
REQUEST_TYPE=POST
FORM_USAGE_TYPE= SINGLE_USER
FORM_STRING= USERNAME=%s&PASSWORD=%s # Means the same credentials for all clients/users
TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
TIMER_AFTER_URL_SLEEP =0

These are my client machines:

Test client 1:
Operating System: CentOS release 5.2 (Final)
CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
RAM: 4 GB DDR2

Test client 2:
Operating System: CentOS release 5.2 (Final)
CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
RAM: 8 GB DDR2

I have also followed the recommendations given in the following links:

section 7.2 in http://curl-loader.sourceforge.net/doc/faq.html
http://curl-loader.sourceforge.net/high-load-hw/index.html

If you have any clue as to why we get this type of error on only one client, please advise us. Greatly appreciate your help.

Thanks,
Alo

Robert Iakobashvili wrote:
> Hi Alo,
>
> On Tue, Nov 4, 2008 at 4:18 PM, alo sinnathamby <asi...@ic...> wrote:
>> Hi Robert,
>>
>> Because of these different results, I am not able to determine the correct run time for any given number of requests. Our objective is to determine the load capability and run time for a given number of requests, for example 20000, 50000, 70000, or 100000 requests.
>>
>> Please render your valuable advice. Greatly appreciate your help.
>
> The suggestion is always to measure such things not at ramp-up time (75 seems to be the max for your machine), but at a steady state.
>
> You may wish to look at the steady run-time slot and to adjust the number of cycles accordingly. Besides that, you can watch the CAPS number, which at a steady state is the number of cycles that all your virtual clients are doing each second.
>
> Besides that, the following conversation may be of interest:
> http://sourceforge.net/mailarchive/forum.php?thread_name=BAY131-W4271C5395CD9FFBBC04BF8D9C80%40phx.gbl&forum_name=curl-loader-devel
>
> Sincerely,
> Robert
From: Gary F. <ga...@in...> - 2008-11-04 16:45:44
Matt,

I've recently started using curl-loader, and I ran into this too. I think we're extending the use of FORM_STRING beyond what was originally intended.

I fixed this in the source file parse_config.c by changing the line

    if (ftype != FORM_USAGETYPE_AS_IS)

to

    if (ftype != FORM_USAGETYPE_AS_IS && ftype != FORM_USAGETYPE_RECORDS_FROM_FILE)

Maybe you could give this a try. It seems to work for me, although I'm still testing this along with a few other changes.

Gary Fitts

On Nov 4, 2008, at 8:29 AM, Matt Love wrote:
> Hello,
>
> I am encountering a parse error when reading username/password data from a file.
>
> The following is my .conf file:
>
> ########### GENERAL SECTION ################################
> BATCH_NAME= nomadeskpoll
> CLIENTS_NUM_MAX=10 # Same as CLIENTS_NUM
> CLIENTS_NUM_START=1
> CLIENTS_RAMPUP_INC=1
> INTERFACE=eth0
> NETMASK=16
> IP_ADDR_MIN= 192.168.2.70
> IP_ADDR_MAX= 192.168.2.128
> CYCLES_NUM=100
> URLS_NUM= 1
>
> ########### URL SECTION ####################################
> URL=http://test.nomadesk.com/nomadesk/index.php?TaskNavigator::Task=ReceiveMessage
> URL_SHORT_NAME="AddExistingAccount"
> TIMER_URL_COMPLETION = 500
> TIMER_AFTER_URL_SLEEP = 500
> REQUEST_TYPE=POST
> FORM_USAGE_TYPE= RECORDS_FROM_FILE
> FORM_STRING=<PipeMsg><Header><AccountName></AccountName><Password>aventiv23</Password><LocationID>%s</LocationID><ClientVersion>2.6.0.16</ClientVersion>bogus<CreationTstamp></CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email>%s</Email><LocationName>%s</LocationName></Body></PipeMsg>
> FORM_RECORDS_FILE_MAX_NUM=10
> FORM_RECORDS_FILE= locid-email-locname.cred
> FORM_RECORDS_RANDOM=1
>
> And I am reading from this file (locid-email-locname.cred):
>
> # Separator used here is ','
> a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST-VISTA32
> 2ae2596e-7ade-4fbd-9b21-61d95c4ea387,p0...@ml...,PROTEST-XP32-1
> a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST-VISTA32
> 2ae2596e-7ade-4fbd-9b21-61d95c4ea387,p0...@ml...,PROTEST-XP32-1
> a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST-VISTA32
> .
> .
> .
>
> I get the following error when running "curl-loader -f poll.conf":
>
> parse_config_file - processing file string "FORM_STRING=<PipeMsg><Header><AccountName></AccountName><Password>aventiv23</Password><LocationID>%s</LocationID><ClientVersion>2.6.0.16</ClientVersion>bogus<CreationTstamp></CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email>%s</Email><LocationName>%s</LocationName></Body></PipeMsg>
> - to load user credentials (records) from a file two "%%s", something like "user=%%s&password=%%s" and _FILE defined.
> - error: FORM_STRING (form_string_parser) is not valid. Please, use:
> - to generate unique users with unique passwords two "%s%d", something like "user=%s%d&password=%s%d"
> - to generate unique users with the same passwords one "%s%d" for users and one "%s" for the password, something like "user=%s%d&password=%s"
> - for a single configurable user with a password two "%s", something like "user=%s&password=%s"
> add_param_to_batch - error: parser failed for tag FORM_STRING and value <PipeMsg><Header><AccountName></AccountName><Password>aventiv23</Password><LocationID>%s</LocationID><ClientVersion>2.6.0.16</ClientVersion>bogus<CreationTstamp></CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email>%s</Email><LocationName>%s</LocationName></Body></PipeMsg>.
> parse_config_file - error: add_param_to_batch () failed processing line "FORM_STRING"
> main - error: parse_config_file () failed.
>
> I have tried the %%s usage, as suggested in the error message above, but got the same result. Any ideas? Is there any additional documentation about how to use this templating feature?
>
> Best Regards,
> Matt Love
From: Matt L. <mat...@av...> - 2008-11-04 16:29:44
Hello,

I am encountering a parse error when reading username/password data from a file.

The following is my .conf file:

########### GENERAL SECTION ################################
BATCH_NAME= nomadeskpoll
CLIENTS_NUM_MAX=10 # Same as CLIENTS_NUM
CLIENTS_NUM_START=1
CLIENTS_RAMPUP_INC=1
INTERFACE=eth0
NETMASK=16
IP_ADDR_MIN= 192.168.2.70
IP_ADDR_MAX= 192.168.2.128
CYCLES_NUM=100
URLS_NUM= 1

########### URL SECTION ####################################
URL=http://test.nomadesk.com/nomadesk/index.php?TaskNavigator::Task=ReceiveMessage
URL_SHORT_NAME="AddExistingAccount"
TIMER_URL_COMPLETION = 500
TIMER_AFTER_URL_SLEEP = 500
REQUEST_TYPE=POST
FORM_USAGE_TYPE= RECORDS_FROM_FILE
FORM_STRING=<PipeMsg><Header><AccountName></AccountName><Password>aventiv23</Password><LocationID>%s</LocationID><ClientVersion>2.6.0.16</ClientVersion>bogus<CreationTstamp></CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email>%s</Email><LocationName>%s</LocationName></Body></PipeMsg>
FORM_RECORDS_FILE_MAX_NUM=10
FORM_RECORDS_FILE= locid-email-locname.cred
FORM_RECORDS_RANDOM=1

And I am reading from this file (locid-email-locname.cred):

# Separator used here is ','
a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST-VISTA32
2ae2596e-7ade-4fbd-9b21-61d95c4ea387,p0...@ml...,PROTEST-XP32-1
a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST-VISTA32
2ae2596e-7ade-4fbd-9b21-61d95c4ea387,p0...@ml...,PROTEST-XP32-1
a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST-VISTA32
.
.
.

I get the following error when running "curl-loader -f poll.conf":

parse_config_file - processing file string "FORM_STRING=<PipeMsg><Header><AccountName></AccountName><Password>aventiv23</Password><LocationID>%s</LocationID><ClientVersion>2.6.0.16</ClientVersion>bogus<CreationTstamp></CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email>%s</Email><LocationName>%s</LocationName></Body></PipeMsg>
- to load user credentials (records) from a file two "%%s", something like "user=%%s&password=%%s" and _FILE defined.
- error: FORM_STRING (form_string_parser) is not valid. Please, use:
- to generate unique users with unique passwords two "%s%d", something like "user=%s%d&password=%s%d"
- to generate unique users with the same passwords one "%s%d" for users and one "%s" for the password, something like "user=%s%d&password=%s"
- for a single configurable user with a password two "%s", something like "user=%s&password=%s"
add_param_to_batch - error: parser failed for tag FORM_STRING and value <PipeMsg><Header><AccountName></AccountName><Password>aventiv23</Password><LocationID>%s</LocationID><ClientVersion>2.6.0.16</ClientVersion>bogus<CreationTstamp></CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email>%s</Email><LocationName>%s</LocationName></Body></PipeMsg>.
parse_config_file - error: add_param_to_batch () failed processing line "FORM_STRING"
main - error: parse_config_file () failed.

I have tried the %%s usage, as suggested in the error message above, but got the same result. Any ideas? Is there any additional documentation about how to use this templating feature?

Best Regards,
Matt Love
From: Robert I. <cor...@gm...> - 2008-11-04 14:41:23
Hi Alo,

On Tue, Nov 4, 2008 at 4:18 PM, alo sinnathamby <asi...@ic...> wrote:
> Hi Robert,
>
> Because of these different results, I am not able to determine the correct run time for any given number of requests. Our objective is to determine the load capability and run time for a given number of requests, for example 20000, 50000, 70000, or 100000 requests.
>
> Please render your valuable advice. Greatly appreciate your help.

The suggestion is always to measure such things not at ramp-up time (75 seems to be the max for your machine), but at a steady state.

You may wish to look at the steady run-time slot and to adjust the number of cycles accordingly. Besides that, you can watch the CAPS number, which at a steady state is the number of cycles that all your virtual clients are doing each second.

Besides that, the following conversation may be of interest:
http://sourceforge.net/mailarchive/forum.php?thread_name=BAY131-W4271C5395CD9FFBBC04BF8D9C80%40phx.gbl&forum_name=curl-loader-devel

Sincerely,
Robert
From: alo s. <asi...@ic...> - 2008-11-04 14:19:02
Hi Robert,

Thanks, and greatly appreciate it. Your help came at just the right time; thanks a lot.

We are looking at http://curl-loader.sourceforge.net/high-load-hw/index.html and trying to set up the same or an equivalent environment. In the meantime, this is my quick question:

When we change CLIENTS_RAMPUP_INC, the run time differs to a great extent. So how can I determine the correct run time with respect to a given number of requests?

For example (from my test runs):

case 1: when CLIENTS_RAMPUP_INC=50, the run time for 20000 requests is 300 seconds
case 2: when CLIENTS_RAMPUP_INC=75, the run time for 20000 requests is 200 seconds
case 3: when CLIENTS_RAMPUP_INC=100, it throws many connection time-out errors

Because of these different results, I am not able to determine the correct run time for any given number of requests. Our objective is to determine the load capability and run time for a given number of requests, for example 20000, 50000, 70000, or 100000 requests.

Please render your valuable advice. Greatly appreciate your help.

Thanks,
Alo

Robert Iakobashvili wrote:
> Hi Alo,
>
> On Mon, Nov 3, 2008 at 8:33 PM, alo sinnathamby <asi...@ic...> wrote:
>> Hi Robert,
>>
>> Thanks for your reply and appreciate it. I have upgraded the test client's RAM from 2 GB to 4 GB, and now it gives me a kind of consistent result, though it varies a little.
>
> Great!
>
>> I have followed the FAQ sec 7.2 recommendations, but that didn't make any big difference.
>
> It means that your system was short of memory, as expected: 40K per virtual client!
>
>> Now, I have another question: what is the best configuration for testing 50 000, 60 000, 70 000, 100 000 requests? Currently I am using the following configuration:
>>
>> Operating System: CentOS release 5.2 (Final)
>> CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
>> RAM: 4 GB DDR2
>>
>> My hosting server is:
>> Linux
>> Operating System: CentOS release 5 (Final)
>> CPU: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
>> RAM: 4 GB DDR2
>>
>> Please let me know the optimum and best configurations for testing really big loads. The reason why I am asking this question is that when I put CLIENTS_RAMPUP_INC=50, it takes more time but fewer requests fail; when I put CLIENTS_RAMPUP_INC=75, it takes less time and more fail.
>>
>> Alo
>
> What was the effect of running the command line with option -t 4?
>
> Specifically, I would recommend the "curl-loader - High Load HW" setup, as explained in the link here:
> http://curl-loader.sourceforge.net/high-load-hw/index.html
>
> So, for 100 000 req/sec it is better to use a dual quad-core (a single quad-core may not be enough) with 8 GB of checked memory.
>
> Please pay attention that your OS should support above-4 GB addressing in a single process. We are using 64-bit Debian Linux (lenny).
>
> I would recommend the -t 4 or -t 8 options and not more than 15000 clients; the ramp rate could be 100 if more powerful CPUs are used. Take care about the NEVENT parameter, as described in "curl-loader - High Load HW".
>
> Best wishes!
>
> Truly,
> Robert Iakobashvili, Ph.D.
> Assistive technology that understands you

NOTE: This message, and any attached files, contain information that is privileged, confidential, proprietary or otherwise protected from disclosure. Any disclosure, copying or distribution of, or reliance upon, this message by anyone else is strictly prohibited. If you received this communication in error, please notify the sender immediately by reply e-mail message or by telephone to one of the numbers above and deleting it from your computer. Thank you.
From: Robert I. <cor...@gm...> - 2008-11-04 11:29:07
Hi Matt,

On Tue, Nov 4, 2008 at 12:39 PM, Matt Love <mat...@av...> wrote:
> That works!
>
> Thank you for your help, I am now happily load testing.

Good news. Thanks! Thanks also to Alex for his workaround procedure.

The final fix will be to resolve the issue in libcurl, where on configuring file uploading the library takes PUT instead of POST. When we'll have time...

--
Truly,
Robert Iakobashvili, Ph.D.
www.ghotit.com
Assistive technology that understands you
From: Matt L. <mat...@av...> - 2008-11-04 10:40:09
That works!

Thank you for your help, I am now happily load testing.

From: Robert Iakobashvili [mailto:cor...@gm...]
Sent: Saturday, November 01, 2008 3:17 PM
To: curl-loader-devel
Subject: Re: xml post

I got it. Let's use the workaround as recommended by Alex. Please try the following:

1. Either check out the latest version of curl-loader subversion or use the files attached;

2. Configuration:

URL=http://192.168.2.56/nomadesk/index.phpTask?Navigator::Task=ReceiveMessage
URL_SHORT_NAME="Poll"
TIMER_URL_COMPLETION = 5000
TIMER_AFTER_URL_SLEEP = 5000
REQUEST_TYPE=POST
FORM_USAGE_TYPE= AS_IS
FORM_STRING=<Poll><Accounts><Account><AccountName>nmua000014</AccountName><Password>aventiv23</Password></Account></Accounts><LocationID>bf42425f-fcef-4ea3-aaab-199120138cb3</LocationID><ClientVersion>2.6.0.13</ClientVersion><CreationTstamp>10/28/2008 7:00:30 AM</CreationTstamp></Poll>

FORM_STRING should be a single line containing your XML file content without any newline symbols, because it is read as a single line by fgets. The length is limited to 8K, but you can increase it in parse_conf.c by increasing the buffer size here:

char fgets_buff[1024*8];

Give it a try!

Truly,
Robert Iakobashvili, Ph.D.
www.ghotit.com
Assistive technology that understands you
From: Robert I. <cor...@gm...> - 2008-11-03 19:42:38
Hi Alo,

On Mon, Nov 3, 2008 at 8:33 PM, alo sinnathamby <asi...@ic...> wrote:
> Hi Robert,
>
> Thanks for your reply and appreciate it. I have upgraded the test client's RAM from 2 GB to 4 GB, and now it gives me a kind of consistent result, though it varies a little.

Great!

> I have followed the FAQ sec 7.2 recommendations, but that didn't make any big difference.

It means that your system was short of memory, as expected: 40K per virtual client!

> Now, I have another question: what is the best configuration for testing 50 000, 60 000, 70 000, 100 000 requests? Currently I am using the following configuration:
>
> Operating System: CentOS release 5.2 (Final)
> CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
> RAM: 4 GB DDR2
>
> My hosting server is:
> Linux
> Operating System: CentOS release 5 (Final)
> CPU: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
> RAM: 4 GB DDR2
>
> Please let me know the optimum and best configurations for testing really big loads. The reason why I am asking this question is that when I put CLIENTS_RAMPUP_INC=50, it takes more time but fewer requests fail; when I put CLIENTS_RAMPUP_INC=75, it takes less time and more fail.
>
> Alo

What was the effect of running the command line with option -t 4?

Specifically, I would recommend the "curl-loader - High Load HW" setup, as explained in the link here:
http://curl-loader.sourceforge.net/high-load-hw/index.html

So, for 100 000 req/sec it is better to use a dual quad-core (a single quad-core may not be enough) with 8 GB of checked memory.

Please pay attention that your OS should support above-4 GB addressing in a single process. We are using 64-bit Debian Linux (lenny).

I would recommend the -t 4 or -t 8 options and not more than 15000 clients; the ramp rate could be 100 if more powerful CPUs are used. Take care about the NEVENT parameter, as described in "curl-loader - High Load HW".

Best wishes!

Truly,
Robert Iakobashvili, Ph.D.
Assistive technology that understands you
From: alo s. <asi...@ic...> - 2008-11-03 19:19:14
Hi Robert,

Thanks for your reply and appreciate it. I have upgraded the test client's RAM from 2 GB to 4 GB, and now it gives me a kind of consistent result, though it varies a little. I have followed the FAQ sec 7.2 recommendations, but that didn't make any big difference.

Now, I have another question: what is the best configuration for testing 50 000, 60 000, 70 000, 100 000 requests? Currently I am using the following configuration:

########### GENERAL SECTION ################################
BATCH_NAME= 15Kr6
CLIENTS_NUM_MAX=15000
CLIENTS_NUM_START=100
CLIENTS_RAMPUP_INC=75
INTERFACE =eth0
NETMASK=16
IP_ADDR_MIN= 10.0.1.240
IP_ADDR_MAX= 10.0.1.240
CYCLES_NUM= 1
URLS_NUM= 1

########### URL SECTION ####################################
URL=http://xx.x.x.xxx:pppp/main
URL_SHORT_NAME="SA-home"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
TIMER_AFTER_URL_SLEEP =0

My test client is:
Linux
Operating System: CentOS release 5.2 (Final)
CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
RAM: 4 GB DDR2

My hosting server is:
Linux
Operating System: CentOS release 5 (Final)
CPU: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
RAM: 4 GB DDR2

Please let me know the optimum and best configurations for testing really big loads. The reason why I am asking this question is that when I put CLIENTS_RAMPUP_INC=50, it takes more time but fewer requests fail; when I put CLIENTS_RAMPUP_INC=75, it takes less time and more fail. Because of that, I am not able to come to a conclusion with respect to the run time and the number of failed requests. Most of the time, it throws a connection time-out error in the log file. And when I increased TIMER_URL_COMPLETION, it didn't make a big difference.

Greatly appreciate your reply.

Thanks,
Alo

Robert Iakobashvili wrote:
> Hi Alo,
>
> On Thu, Oct 30, 2008 at 3:17 PM, alo sinnathamby <asi...@ic...> wrote:
>> Hi Robert,
>>
>> Thanks for your immediate reply and appreciate it. I am looking at your suggestions; in the meantime, please find below the conf file for the tests that I run. Greatly appreciate your help.
>
> The suggestions are:
> a) to look at the errors in SAH10000r5.log: what is written there when there are errors;
> b) to look in the FAQs about increasing performance on the client side (file descriptors, etc.) for big-load optimization;
> c) to look at your server side.
>
> See others embedded:
>
>> ########### GENERAL SECTION ################################
>> BATCH_NAME=SAH10000r5
>> CLIENTS_NUM_MAX = 10000
>
> Each client requires about 40K of memory, which means that your 4 GB of memory is not enough. Working with more than 4000-5000 clients requires doing the big-load optimizations as in the FAQs.
>
> Try to work with, let's say, 1000-5000 clients and to make the big-load optimizations.
>
>> CLIENTS_NUM_START = 50
>> CLIENTS_RAMPUP_INC= 50
>> INTERFACE=eth0
>> NETMASK=24
>> IP_ADDR_MIN=10.0.1.240
>> IP_ADDR_MAX=10.0.1.240
>> CYCLES_NUM= 1
>> URLS_NUM=1
>
> Really, you are running more than one cycle, are you not?
>
> --
> Truly,
> Robert Iakobashvili, Ph.D.
> Assistive technology that understands you
From: Robert I. <cor...@gm...> - 2008-11-03 06:13:44
|
Hi Gary,

On Mon, Nov 3, 2008 at 8:01 AM, Gary Fitts <ga...@in...> wrote:
> Hello Robert,
>
> I think I've found a solution to this. I've patched curl-loader to
> maintain a separate "url upload stream" for each client, so that
> multiple clients cannot interfere with each other. (These streams are
> actually just file offsets, so they use up extra file descriptors.)
> This seems to have fixed the timeouts I was seeing before. I believe
> they were caused when the server was told to expect an upload of N
> bytes, but then received < N because the upload_file_ptr had been
> consumed by another client.
>
> I'll send you the files, if you'd like, as soon as I get them properly
> commented.
>
> Gary

Sounds great! If you could also update the man page regarding the new tag, it would be very much appreciated.

Truly,
Robert
|
From: Gary F. <ga...@in...> - 2008-11-03 06:04:45
|
Sorry, I meant to say "they DON'T use up extra file descriptors".

> These streams are actually just file offsets, so they use up extra
> file descriptors.
|
From: Gary F. <ga...@in...> - 2008-11-03 06:01:44
|
Hello Robert,

I think I've found a solution to this. I've patched curl-loader to maintain a separate "url upload stream" for each client, so that multiple clients cannot interfere with each other. (These streams are actually just file offsets, so they use up extra file descriptors.) This seems to have fixed the timeouts I was seeing before. I believe they were caused when the server was told to expect an upload of N bytes, but then received < N because the upload_file_ptr had been consumed by another client.

I'll send you the files, if you'd like, as soon as I get them properly commented.

Gary

> Thanks for all your help so far. Now I've run into a possible bug,
> and I want to see what you think. In loader.c,
> setup_curl_handle_init(), there is this code:
>
>   if (url->upload_file)
>     {
>       if (! url->upload_file_ptr)
>         {
>           if (! (url->upload_file_ptr = fopen (url->upload_file, "rb")))
>             {
>               fprintf (stderr,
>                        "%s - error: failed to open() %s with errno %d.\n",
>                        __func__, url->upload_file, errno);
>               return -1;
>             }
>         }
>
>       /* Enable uploading */
>       curl_easy_setopt(handle, CURLOPT_UPLOAD, 1);
>       ...
>
> Now if this is a cycling URL, we're going to want to upload the file
> many times. But after the first upload, the file pointer will be set
> to the end of the file, and subsequent reads will return eof. So I
> changed the code to this:
>
>   if (url->upload_file)
>     {
>       if (! url->upload_file_ptr)
>         {
>           if (! (url->upload_file_ptr = fopen (url->upload_file, "rb")))
>             {
>               fprintf (stderr,
>                        "%s - error: failed to open() %s with errno %d.\n",
>                        __func__, url->upload_file, errno);
>               return -1;
>             }
>         }
>       else
>         {
>           rewind(url->upload_file_ptr); /* Added by GF October 2008 */
>         }
>
>       /* Enable uploading */
>       curl_easy_setopt(handle, CURLOPT_UPLOAD, 1);
>
> This seems to fix the problem when a single client is cycling the
> url. But now I'm worried about multiple clients. As it is, all the
> clients share this url context, and so they all share this single
> upload_file_ptr. What if client A begins the upload, and while the
> upload is proceeding in stages, client B comes along and tries to
> use the same pointer? It seems that this could certainly happen when
> we're running clients in multiple threads, and it looks as though it
> could happen even with multiple clients in a single thread.
>
> I'm thinking that each upload url has to keep a file offset for each
> client, and arrange to use that offset for each read. What do you
> think?
>
> Thanks,
> Gary
|
From: Gary F. <ga...@in...> - 2008-11-03 01:10:34
|
Hello Robert,

Thanks for all your help so far. Now I've run into a possible bug, and I want to see what you think. In loader.c, setup_curl_handle_init(), there is this code:

  if (url->upload_file)
    {
      if (! url->upload_file_ptr)
        {
          if (! (url->upload_file_ptr = fopen (url->upload_file, "rb")))
            {
              fprintf (stderr,
                       "%s - error: failed to open() %s with errno %d.\n",
                       __func__, url->upload_file, errno);
              return -1;
            }
        }

      /* Enable uploading */
      curl_easy_setopt(handle, CURLOPT_UPLOAD, 1);
      ...

Now if this is a cycling URL, we're going to want to upload the file many times. But after the first upload, the file pointer will be set to the end of the file, and subsequent reads will return eof. So I changed the code to this:

  if (url->upload_file)
    {
      if (! url->upload_file_ptr)
        {
          if (! (url->upload_file_ptr = fopen (url->upload_file, "rb")))
            {
              fprintf (stderr,
                       "%s - error: failed to open() %s with errno %d.\n",
                       __func__, url->upload_file, errno);
              return -1;
            }
        }
      else
        {
          rewind(url->upload_file_ptr); /* Added by GF October 2008 */
        }

      /* Enable uploading */
      curl_easy_setopt(handle, CURLOPT_UPLOAD, 1);

This seems to fix the problem when a single client is cycling the url. But now I'm worried about multiple clients. As it is, all the clients share this url context, and so they all share this single upload_file_ptr. What if client A begins the upload, and while the upload is proceeding in stages, client B comes along and tries to use the same pointer? It seems that this could certainly happen when we're running clients in multiple threads, and it looks as though it could happen even with multiple clients in a single thread.

I'm thinking that each upload url has to keep a file offset for each client, and arrange to use that offset for each read. What do you think?

Thanks,
Gary
|
From: Robert I. <cor...@gm...> - 2008-11-02 13:39:02
|
Hi Gary,

On Sat, Nov 1, 2008 at 6:57 PM, Gary Fitts <ga...@in...> wrote:
> I did use the URL facility, and the same thing happens. I don't have
> that conf and log file around anymore. I can reconstruct them if you'd
> like to see them.

I have filtered the problematic TCP session from your capture with the following Wireshark filter:

(ip.addr eq 65.105.205.215 and ip.addr eq 65.105.205.203) and (tcp.port eq 38507 and tcp.port eq 9000)

The client initiates the closure: 38507 > cslistener FIN-ACK Seq=5013
The server responds with an ACK to that packet: ACK Ack=5014 (5014 = 5013 + 1)

Now the server is expected to send its own FIN-ACK, but since it has the SO_LINGER option set on its socket, what it does instead is:
a) send the last data packet;
b) afterwards send FIN-ACK.

So far so good, but the client does not like (and this is also legal) that the server sends it data packets after the FIN-ACK has been acknowledged, and it responds with RST.

I would say that nothing serious is going on, besides the TCP stacks of the server and client being tuned a bit differently. Is the server running a Microsoft operating system?

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
......................................................................
|
From: Gary F. <ga...@in...> - 2008-11-01 16:58:00
|
I did use the URL facility, and the same thing happens. I don't have that conf and log file around anymore. I can reconstruct them if you'd like to see them.

On Nov 1, 2008, at 8:40 AM, Robert Iakobashvili wrote:
> What happens, if instead you are using the existing URL facility?
> Try it just to debug yourself.
|
From: Robert I. <cor...@gm...> - 2008-11-01 15:40:08
|
Hi Gary,

On Sat, Nov 1, 2008 at 4:46 PM, Gary Fitts <ga...@in...> wrote:
> Thanks, Robert.
>
> The URL_COMPLETION_TIMEOUT is set to 30 seconds (30000 ms),

Yes, that is OK.

> FWIW here's the configuration file. You can see some of the extensions I've
> built. But I doubt they're the cause of the problem, because they work to
> generate the URLs, and the server doesn't know anything about them.

URL_TEMPLATE is the patch that you have probably developed. What happens if instead you use the existing URL facility? Try it, just to debug it yourself.

Sincerely,
Robert
|
From: Robert I. <cor...@gm...> - 2008-11-01 15:35:32
|
On Sat, Nov 1, 2008 at 4:47 PM, Gary Fitts <ga...@in...> wrote:
> There's no indication of anything amiss in the server's logs,
> unfortunately.

When both server and clients are happy, it means the tests passed successfully! Make them happy! :-)

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
......................................................................
|
From: Gary F. <ga...@in...> - 2008-11-01 14:47:43
|
There's no indication of anything amiss in the server's logs, unfortunately.

Thanks,
Gary

On Nov 1, 2008, at 7:37 AM, Robert Iakobashvili wrote:
> What are the server's logs saying? Are there any errors at the server
> side?
|