Thread: Curl-loader results vary from time to time
Status: Alpha
Brought to you by: coroberti
From: alo s. <asi...@ic...> - 2008-10-29 20:04:47
Dear support team,

CURL-LOADER VERSION: 0.46

HW DETAILS: CPU/S and memory are a must:
x86_64 x86_64 x86_64 GNU/Linux
OS: CentOS release 5.2 (Final)
CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
RAM: 2 Gigabytes DDR2

LINUX DISTRIBUTION and KERNEL (uname -r): CentOS release 5.2 (Final)
GCC VERSION (gcc -v):
COMPILATION AND MAKING OPTIONS (if defaults changed):
COMMAND-LINE:
CONFIGURATION-FILE (The most common source of problems):
Place the file inline here:

DOES THE PROBLEM AFFECT:
COMPILATION? No
LINKING? No
EXECUTION? No
OTHER (please specify)? The results vary from time to time.

Have you run $make cleanall prior to $make? No

DESCRIPTION / QUESTION:

I have installed curl-loader on a testing client machine and tried to test the home page of a website running on Tomcat. The Tomcat server system is:
x86_64 x86_64 x86_64 GNU/Linux
Operating System: CentOS release 5 (Final)
CPU: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
RAM: 4 Gigabytes DDR2

When I run 10 000 requests at different time intervals, I get different results, as given below.

10 000 - run1
===============
Test total duration was 202 seconds and CAPS average 97:
H/F   Req:10000,1xx:0,2xx:9993,3xx:0,4xx:0,5xx:7,Err:0,T-Err:0,D:111ms,D-2xx:111ms,Ti:280361B/s,To:7425B/s
H/F/S Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s
Operations:        Success     Failed    Timed out
URL0:Login-GET     150 9993    0 7       0 0

10 000 - run2
==============
Test total duration was 234 seconds and CAPS average 85:
H/F   Req:9773,1xx:0,2xx:9773,3xx:0,4xx:0,5xx:0,Err:227,T-Err:0,D:861ms,D-2xx:861ms,Ti:236413B/s,To:6264B/s
H/F/S Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s
Operations:        Success     Failed    Timed out
URL0:Login-GET     7 9773      0 227     0 0

10 000 - run3
==============
Test total duration was 318 seconds and CAPS average 62:
H/F   Req:6183,1xx:0,2xx:6012,3xx:0,4xx:0,5xx:0,Err:3988,T-Err:0,D:6602ms,D-2xx:6602ms,Ti:106771B/s,To:2916B/s
H/F/S Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s
Operations:        Success     Failed    Timed out
URL0:Login-GET     0 6012      3 3988    0 0

Could you please let us know how to fine-tune it to get consistent results? FYI: both the client and the server are on the local network.

Greatly appreciate your help.

Thanks,
Alo Sinnathamby
Architect

NOTE: This message, and any attached files, contain information that is privileged, confidential, proprietary or otherwise protected from disclosure. Any disclosure, copying or distribution of, or reliance upon, this message by anyone else is strictly prohibited. If you received this communication in error, please notify the sender immediately by reply e-mail message or by telephone to one of the numbers above and delete it from your computer. Thank you.
From: Robert I. <cor...@gm...> - 2008-10-29 20:42:00
Hi Alo Sinnathamby,

On Wed, Oct 29, 2008 at 10:04 PM, alo sinnathamby <asi...@ic...> wrote:

> CONFIGURATION-FILE (The most common source of problems):
> Place the file inline here:

Thank you for the PRF. You forgot to include your configuration file, which is essential for any judgment.

> [snip: test results quoted from the message above]
>
> Could you please let us know how to fine-tune it to get consistent
> results? FYI: both the client and the server are on the local network.

Without the configuration file I can only guess that your server is working really hard. Please monitor the server-side performance.

The reasons for the errors can be seen in the log file. Please look into the <your-name>.log file. You can see the reasons for errors by adding -v to the command line. Most probably, your server gets saturated and stops handling TCP connections.

You can also try to recompile curl-loader with optimization and to add more memory for TCP and more file descriptors, as described here:
http://curl-loader.sourceforge.net/doc/faq.html
in section 7.2, "How to run a really big load?"

Please update me about your progress.

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
......................................................................
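[Editorial note: the client-side "big load" tuning that FAQ section 7.2 refers to generally means raising kernel limits such as the ones below. The sysctl names are real Linux parameters, but the values are illustrative guesses, not the FAQ's own recommendations; consult the FAQ for the project's numbers.]

```
# /etc/sysctl.conf fragment (apply with: sysctl -p); illustrative values only
fs.file-max = 200000                        # system-wide open file descriptor cap
net.ipv4.ip_local_port_range = 1024 65535   # widen the ephemeral port range
net.ipv4.tcp_mem = 786432 1048576 1572864   # give TCP more buffer pages

# plus a larger per-process descriptor limit in the shell that starts curl-loader:
#   ulimit -n 100000
```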
From: alo s. <asi...@ic...> - 2008-10-30 13:18:23
Hi Robert,

Thanks for your immediate reply; I appreciate it. I am looking at your suggestions, and in the meantime please find below the conf file for the tests that I run.
Greatly appreciate your help.

Thanks,
Alo

########### GENERAL SECTION ################################
BATCH_NAME=SAH10000r5
CLIENTS_NUM_MAX = 10000
CLIENTS_NUM_START = 50
CLIENTS_RAMPUP_INC= 50
INTERFACE=eth0
NETMASK=24
IP_ADDR_MIN=10.0.1.240
IP_ADDR_MAX=10.0.1.240
CYCLES_NUM= 1
URLS_NUM=1

########### URL SECTION ##################################

### Login URL - only once for each client

# GET-part
URL= http://xx.x.x.x:pppp/main
URL_SHORT_NAME="Login-GET"
#URL_DONT_CYCLE = 1
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 0    # In msec. Now it is enforced by cancelling url fetch on timeout
TIMER_AFTER_URL_SLEEP = 0

# POST-part
#URL=""
#URL_USE_CURRENT= 1
#URL_SHORT_NAME="Login-POST"
#URL_DONT_CYCLE = 1
#USERNAME=admin
#PASSWORD=your_password
#REQUEST_TYPE=POST
#FORM_USAGE_TYPE= SINGLE_USER
#FORM_STRING= username=%s&password=%s    # Means the same credentials for all clients/users
#TIMER_URL_COMPLETION = 0    # In msec. When positive, it is enforced by cancelling url fetch on timeout
#TIMER_AFTER_URL_SLEEP =500

Robert Iakobashvili wrote:
> [snip: full reply quoted above]
From: Robert I. <cor...@gm...> - 2008-10-30 13:29:04
Hi Alo,

On Thu, Oct 30, 2008 at 3:17 PM, alo sinnathamby <asi...@ic...> wrote:
> Hi Robert,
> [snip: conf file quoted above]

The suggestions are:
a) look at the errors in SAH10000r5.log and see what is written there when errors occur;
b) look in the FAQs at increasing performance on the client side (file descriptors, etc.) and at the big-load optimizations;
c) look at your server side.

See the rest inline:

> ########### GENERAL SECTION ################################
> BATCH_NAME=SAH10000r5
> CLIENTS_NUM_MAX = 10000

Each client requires about 40K of memory, which means that your 4GB of memory is not enough. Working with more than 4000-5000 clients requires doing the Big-Load optimizations as in the FAQs.

Try to work with, let's say, 1000-5000 clients and to make the Big-Load optimizations.

> CLIENTS_NUM_START = 50
> CLIENTS_RAMPUP_INC= 50
> INTERFACE=eth0
> NETMASK=24
> IP_ADDR_MIN=10.0.1.240
> IP_ADDR_MAX=10.0.1.240
> CYCLES_NUM= 1
> URLS_NUM=1

You really mean to run more than one cycle, don't you?

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
......................................................................
From: alo s. <asi...@ic...> - 2008-11-03 19:19:14
Hi Robert,

Thanks for your reply; I appreciate it. I have upgraded the test client's RAM from 2 GB to 4 GB, and now it gives me a reasonably consistent result, though it still varies a little. I have followed the FAQ section 7.2 recommendations, but that didn't make any big difference.

Now I have another question: what is the best configuration for testing 50 000, 60 000, 70 000, 100 000 requests? Currently I am using the following configuration:

########### GENERAL SECTION ################################
BATCH_NAME= 15Kr6
CLIENTS_NUM_MAX=15000
CLIENTS_NUM_START=100
CLIENTS_RAMPUP_INC=75
INTERFACE =eth0
NETMASK=16
IP_ADDR_MIN= 10.0.1.240
IP_ADDR_MAX= 10.0.1.240
CYCLES_NUM= 1
URLS_NUM= 1

########### URL SECTION ####################################
URL=http://xx.x.x.xxx:pppp/main
URL_SHORT_NAME="SA-home"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 0    # In msec. When positive, it is enforced by cancelling url fetch on timeout
TIMER_AFTER_URL_SLEEP =0

My test client is:
Linux
Operating System: CentOS release 5.2 (Final)
CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
RAM: 4 Gigabytes DDR2

My hosting server is:
Linux
Operating System: CentOS release 5 (Final)
CPU: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
RAM: 4 Gigabytes DDR2

Please let me know the optimum configuration for testing really big loads. The reason I am asking is that when I put CLIENTS_RAMPUP_INC=50, the run takes more time but fewer requests fail; when I put CLIENTS_RAMPUP_INC=75, it takes less time but more requests fail. Because of this, I am not able to come to a conclusion with respect to the run time and the number of failed requests. Most of the time the log file shows a connection time-out error, and when I increased TIMER_URL_COMPLETION, it didn't make a big difference either.

Greatly appreciate your reply.

Thanks,
Alo

Robert Iakobashvili wrote:
> [snip: full reply quoted above]
From: Robert I. <cor...@gm...> - 2008-11-03 19:42:38
Hi Alo,

On Mon, Nov 3, 2008 at 8:33 PM, alo sinnathamby <asi...@ic...> wrote:
> Hi Robert,
>
> Thanks for your reply; I appreciate it. I have upgraded the test
> client's RAM from 2 GB to 4 GB, and now it gives me a reasonably
> consistent result, though it still varies a little.

Great!

> I have followed the FAQ section 7.2 recommendations, but that didn't
> make any big difference.

It means that your system was short of memory, as expected: 40K per virtual client!

> Now I have another question: what is the best configuration for testing
> 50 000, 60 000, 70 000, 100 000 requests?
> [snip: configuration and machine details quoted above]
>
> Please let me know the optimum configuration for testing really big
> loads. The reason I am asking is that when I put CLIENTS_RAMPUP_INC=50,
> the run takes more time but fewer requests fail; when I put
> CLIENTS_RAMPUP_INC=75, it takes less time but more requests fail.

What was the effect of running the command line with the option -t 4?

Specifically, I would recommend "curl-loader - High Load HW", as explained in the link here:
http://curl-loader.sourceforge.net/high-load-hw/index.html

So, for 100 000 req/sec it is better to use a dual quad-core (a single quad-core may not be enough) with 8 GB of checked memory. Please pay attention that your OS should support above-4-GB addressing in a single process. We are using 64-bit Debian Linux (lenny).

I would recommend the -t 4 or -t 8 options and not more than 15 000 clients; the ramp rate could be 100 if more powerful CPUs are used. Take care about the NEVENT parameter as described in "curl-loader - High Load HW".

Best wishes!

Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
......................................................................
From: alo s. <asi...@ic...> - 2008-10-30 13:52:16
Hi Robert,

I have copied the content of the conf file below.

########### GENERAL SECTION ################################
BATCH_NAME=SAH10000r5
CLIENTS_NUM_MAX = 10000
CLIENTS_NUM_START = 50
CLIENTS_RAMPUP_INC= 50
INTERFACE=eth0
NETMASK=24
IP_ADDR_MIN=10.0.1.240
IP_ADDR_MAX=10.0.1.240
CYCLES_NUM= 1
URLS_NUM=1

########### URL SECTION ##################################

### Login URL - only once for each client

# GET-part
URL= http://xx.x.x.x:pppp/main
URL_SHORT_NAME="Login-GET"
#URL_DONT_CYCLE = 1
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 0    # In msec. Now it is enforced by cancelling url fetch on timeout
TIMER_AFTER_URL_SLEEP = 0

# POST-part
#URL=""
#URL_USE_CURRENT= 1
#URL_SHORT_NAME="Login-POST"
#URL_DONT_CYCLE = 1
#USERNAME=admin
#PASSWORD=your_password
#REQUEST_TYPE=POST
#FORM_USAGE_TYPE= SINGLE_USER
#FORM_STRING= username=%s&password=%s    # Means the same credentials for all clients/users
#TIMER_URL_COMPLETION = 0    # In msec. When positive, it is enforced by cancelling url fetch on timeout
#TIMER_AFTER_URL_SLEEP =500

Thanks,
Alo

alo sinnathamby wrote:
> [snip: previous message and Robert's reply quoted above]
From: Robert I. <cor...@gm...> - 2008-10-30 14:48:02
On Thu, Oct 30, 2008 at 3:17 PM, alo sinnathamby <asi...@ic...> wrote:
> Hi Robert,
> [snip: conf file quoted above]

One more suggestion, besides decreasing the number of clients, is to run it with the -t 2 or -t 4 option. Still, this is a part of

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
......................................................................
From: alo s. <asi...@ic...> - 2008-11-04 14:19:02
Hi Robert,

Thanks; your help is timely and greatly appreciated. We are looking at http://curl-loader.sourceforge.net/high-load-hw/index.html and trying to set up the same or an equivalent environment. In the meantime, here is my quick question: *when we change CLIENTS_RAMPUP_INC, the run time differs to a great extent, so how can I determine the correct run time for a given number of requests?*

For example (from my test runs):
case 1: when CLIENTS_RAMPUP_INC=50, the run time for 20000 requests is 300 seconds
case 2: when CLIENTS_RAMPUP_INC=75, the run time for 20000 requests is 200 seconds
case 3: when CLIENTS_RAMPUP_INC=100, it throws many connection time-out errors.

Because of these different results, I am not able to determine the correct run time for any given number of requests. Our objective is to determine the load capability and run time for any given number of requests, for example 20000, 50000, 70000, 100000 requests.

Please render your valuable advice. Greatly appreciate your help.

Thanks,
Alo

Robert Iakobashvili wrote:
> [snip: full reply quoted above]
From: Robert I. <cor...@gm...> - 2008-11-04 14:41:23
Hi Alo,

On Tue, Nov 4, 2008 at 4:18 PM, alo sinnathamby <asi...@ic...> wrote:
> Hi Robert,
>
> Because of these different results, I am not able to determine the
> correct run time for any given number of requests. Our objective is to
> determine the load capability and run time for any given number of
> requests, for example 20000, 50000, 70000, 100000 requests.
>
> Please render your valuable advice. Greatly appreciate your help.

The suggestion is always to measure such things not at ramp-up time (75 seems to be the maximum for your machine), but at a steady state. You may wish to look at the steady run-time slot and to adjust the number of cycles accordingly.

Besides that, you can look at the CAPS number, which at a steady state is the number of cycles that all your virtual clients are completing each second.

Besides that, the following conversation may be of interest to you:
http://sourceforge.net/mailarchive/forum.php?thread_name=BAY131-W4271C5395CD9FFBBC04BF8D9C80%40phx.gbl&forum_name=curl-loader-devel

Sincerely,
Robert
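[Editorial note: to make the steady-state point concrete, here is a sketch of how the earlier configuration might be adjusted so that most of the run happens after ramp-up. The tag names are from the configs in this thread; the values are illustrative, not a recommendation made in the thread.]

```
########### GENERAL SECTION ################################
BATCH_NAME=steady-state-sketch
CLIENTS_NUM_MAX=5000        # ramp-up finishes early in the run:
CLIENTS_NUM_START=100
CLIENTS_RAMPUP_INC=75       # (5000 - 100) / 75, i.e. roughly 65 seconds
CYCLES_NUM=10               # each client then repeats the URL, so most of
                            # the run (and the CAPS average) is steady state
```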
From: Matt L. <mat...@av...> - 2008-11-04 16:29:44
Hello, I am encountering a parse error when reading username/password data from a file. The following is my .conf file: ########### GENERAL SECTION ################################ BATCH_NAME= nomadeskpoll CLIENTS_NUM_MAX=10 # Same as CLIENTS_NUM CLIENTS_NUM_START=1 CLIENTS_RAMPUP_INC=1 INTERFACE=eth0 NETMASK=16 IP_ADDR_MIN= 192.168.2.70 IP_ADDR_MAX= 192.168.2.128 CYCLES_NUM=100 URLS_NUM= 1 ########### URL SECTION #################################### URL=http://test.nomadesk.com/nomadesk/index.php?TaskNavigator::Task=Rece iveMessage URL_SHORT_NAME="AddExistingAccount" TIMER_URL_COMPLETION = 500 TIMER_AFTER_URL_SLEEP = 500 REQUEST_TYPE=POST FORM_USAGE_TYPE= RECORDS_FROM_FILE FORM_STRING=<PipeMsg><Header><AccountName></AccountName><Password>aventi v23</Password><LocationID>%s</LocationID><ClientVersion>2.6.0.16</Client Version>bogus<CreationTstamp></CreationTstamp><Task>AddExistingAccount</ Task></Header><Body><Email>%s</Email><LocationName>%s</LocationName></Bo dy></PipeMsg> FORM_RECORDS_FILE_MAX_NUM=10 FORM_RECORDS_FILE= locid-email-locname.cred FORM_RECORDS_RANDOM=1 And I am reading from this file (locid-email-locname.cred) # Separator used here is ',' a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST-VIST A32 2ae2596e-7ade-4fbd-9b21-61d95c4ea387,p0...@ml...,PROTEST-XP32 -1 a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST-VIST A32 2ae2596e-7ade-4fbd-9b21-61d95c4ea387,p0...@ml...,PROTEST-XP32 -1 a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST-VIST A32: . . . 
I get the following error when running "curl-loader -f poll.conf": parse_config_file - processing file string "FORM_STRING=<PipeMsg><Header><AccountName></AccountName><Password>aventiv23</Password><LocationID>%s</LocationID><ClientVersion>2.6.0.16</ClientVersion>bogus<CreationTstamp></CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email>%s</Email><LocationName>%s</LocationName></Body></PipeMsg> - to load user credentials (records) from a file two "%%s" , something like "user=%%s&password=%%s" and _FILE defined. - error: FORM_STRING (form_string_parser) is not valid. Please, use: - to generate unique users with unique passwords two "%s%d" , something like "user=%s%d&password=%s%d" - to generate unique users with the same passwords one "%s%d" for users and one "%s" for the password, something like "user=%s%d&password=%s" - for a single configurable user with a password two "%s" , something like "user=%s&password=%s" add_param_to_batch - error: parser failed for tag FORM_STRING and value <PipeMsg><Header><AccountName></AccountName><Password>aventiv23</Password><LocationID>%s</LocationID><ClientVersion>2.6.0.16</ClientVersion>bogus<CreationTstamp></CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email>%s</Email><LocationName>%s</LocationName></Body></PipeMsg>. parse_config_file - error: add_param_to_batch () failed processing line "FORM_STRING" main - error: parse_config_file () failed. I have tried the %%s usage, as suggested in the error message above, but got the same result. Any ideas? Is there any additional documentation about how to use this templating feature? Best Regards, Matt Love |
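[Editor's note] The parser's complaint comes down to how many "%s" placeholders appear in FORM_STRING: the accepted templates listed in the error use one or two, while Matt's template carries three (LocationID, Email, LocationName). A hypothetical helper of the shape such a validator might use — counting "%s" markers while treating "%%" as an escaped percent (illustrative, not the actual form_string_parser code):

```c
#include <assert.h>

/* Count "%s" placeholders in a form-string template, skipping "%%"
 * escapes.  A check of this shape would reject the template above:
 * it carries three "%s" markers, while the 0.4x parser only accepts
 * the fixed one- and two-placeholder patterns its error message lists. */
static int count_percent_s(const char *s)
{
    int n = 0;
    while (*s)
    {
        if (s[0] == '%' && s[1] == '%')
            s += 2;            /* escaped percent, not a placeholder */
        else if (s[0] == '%' && s[1] == 's')
        {
            n++;
            s += 2;
        }
        else
            s++;
    }
    return n;
}
```

This also shows why the suggested "%%s" workaround changes nothing useful here: "%%s" is a literal percent followed by "s", not a record placeholder.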
From: Gary F. <ga...@in...> - 2008-11-04 16:45:44
|
Matt, I've recently started using curl-loader, and I ran into this too. I think we're extending the use of FORM_STRING beyond what was originally intended. I fixed this in the source file parse_config.c by changing the line if (ftype != FORM_USAGETYPE_AS_IS) to if (ftype != FORM_USAGETYPE_AS_IS && ftype != FORM_USAGETYPE_RECORDS_FROM_FILE) Maybe you could give this a try. It seems to work for me, although I'm still testing this along with a few other changes. Gary Fitts On Nov 4, 2008, at 8:29 AM, Matt Love wrote: > Hello, > > I am encountering a parse error when reading username/password data > from a file. > > > > The following is my .conf file: > > ########### GENERAL SECTION ################################ > > > > BATCH_NAME= nomadeskpoll > > CLIENTS_NUM_MAX=10 # Same as CLIENTS_NUM > > CLIENTS_NUM_START=1 > > CLIENTS_RAMPUP_INC=1 > > INTERFACE=eth0 > > NETMASK=16 > > IP_ADDR_MIN= 192.168.2.70 > > IP_ADDR_MAX= 192.168.2.128 > > CYCLES_NUM=100 > > URLS_NUM= 1 > > > > ########### URL SECTION #################################### > > URL=http://test.nomadesk.com/nomadesk/index.php?TaskNavigator::Task=ReceiveMessage > > URL_SHORT_NAME="AddExistingAccount" > > TIMER_URL_COMPLETION = 500 > > TIMER_AFTER_URL_SLEEP = 500 > > REQUEST_TYPE=POST > > FORM_USAGE_TYPE= RECORDS_FROM_FILE > > FORM_STRING=<PipeMsg><Header><AccountName></ > AccountName><Password>aventiv23</Password><LocationID>%s</ > LocationID><ClientVersion>2.6.0.16</ > ClientVersion>bogus<CreationTstamp></ > CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email> > %s</Email><LocationName>%s</LocationName></Body></PipeMsg> > > FORM_RECORDS_FILE_MAX_NUM=10 > > FORM_RECORDS_FILE= locid-email-locname.cred > > FORM_RECORDS_RANDOM=1 > > > > > > And I am reading from this file (locid-email-locname.cred) > > # Separator used here is ',' > > a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST- > VISTA32 > > 2ae2596e-7ade-4fbd-9b21-61d95c4ea387,p0...@ml...,PROTEST- > XP32-1 > > 
a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST- > VISTA32 > > 2ae2596e-7ade-4fbd-9b21-61d95c4ea387,p0...@ml...,PROTEST- > XP32-1 > > a3d683db-67f1-4e42-94f4-d942623de79e,p0...@ml...,PROTEST- > VISTA32: > > . > > . > > . > > > > > > I get the following error when running “curl-loader –f poll.conf”: > > parse_config_file - processing file string > "FORM_STRING=<PipeMsg><Header><AccountName></ > AccountName><Password>aventiv23</Password><LocationID>%s</ > LocationID><ClientVersion>2.6.0.16</ > ClientVersion>bogus<CreationTstamp></ > CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email> > %s</Email><LocationName>%s</LocationName></Body></PipeMsg> > > > > > > - to load user credentials (records) from a file two "%%s" , > something like "user=%%s&password=%%s" > > and _FILE defined. > > - error: FORM_STRING (form_string_parser) is not valid. > > Please, use: > > - to generate unique users with unique passwords two "%s%d" , > something like "user=%s%d&password=%s%d" > > - to generate unique users with the same passwords one "%s%d" > > for users and one "%s" for the password,something like "user=%s > %d&password=%s" > > - for a single configurable user with a password two "%s" , > something like "user=%s&password=%s" > > add_param_to_batch - error: parser failed for tag FORM_STRING and > value <PipeMsg><Header><AccountName></ > AccountName><Password>aventiv23</Password><LocationID>%s</ > LocationID><ClientVersion>2.6.0.16</ > ClientVersion>bogus<CreationTstamp></ > CreationTstamp><Task>AddExistingAccount</Task></Header><Body><Email> > %s</Email><LocationName>%s</LocationName></Body></PipeMsg>. > > parse_config_file - error: add_param_to_batch () failed processing > line "FORM_STRING" > > main - error: parse_config_file () failed. > > > > I have tried the %%s usage, as suggested in the error message above > but got the same result. Any ideas? Is there any additional > documentation about how to use this templating feature? 
> > > > Best Regards, > > Matt Love > > <ATT00001.txt><ATT00002.txt> |
From: Robert I. <cor...@gm...> - 2008-11-11 17:52:33
|
Hi Matt, On Tue, Nov 4, 2008 at 6:45 PM, Gary Fitts <ga...@in...> wrote: > Matt, > > I've recently started using curl-loader, and I ran into this too. > I think we're extending the use of FORM_STRING beyond what was > originally intended. I fixed this in the source file parse_config.c > by changing the line > > if (ftype != FORM_USAGETYPE_AS_IS) > > to > > if (ftype != FORM_USAGETYPE_AS_IS && ftype != > FORM_USAGETYPE_RECORDS_FROM_FILE) > > Maybe you could give this a try. It seems to work for me, although > I'm still testing this along with a few other changes. > > Gary Fitts Is the issue solved, or do you need me to investigate it? Sincerely, Robert |
From: Gary F. <ga...@in...> - 2008-11-12 21:37:54
|
Hello Robert, Here are the additions to curl-loader that I've been working on. The new tags are documented in an attached file. They allow clients to fetch urls built from prior server responses, or from a token file. There are also a few bug fixes (at least I think they were bugs): multiple clients can now upload the same file repeatedly without interference, form_strings can now have many %s markers, and whitespace is removed from form_string tokens. (The code always removed whitespace, but a minor bug caused the original token to be used instead.) I tried to keep all of my changes in one new source file, url_set.c, but I had to put hooks and small changes into 5 other files: url.h, client.h, parse_conf.c, loader.c, and loader_fsm.c. These are all included. I've marked all the changes with GF in a comment to make them easy to find. I've tested all this to my own satisfaction, but of course there will be bugs that I haven't uncovered yet. If you want to adopt any of these changes, it might make sense to move the url_set.c code back into parse_conf.c and loader.c -- whatever is most convenient. Gary Fitts |
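[Editor's note] For readers following along, the idea behind Gary's new tags — building the next URL from a token captured out of a prior server response, or read from a token file — can be sketched roughly as below. The function name and the single-%s template convention are illustrative assumptions, not the actual url_set.c API:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Substitute a token (e.g. a session id scraped from a prior server
 * response, or one line of a token file) into a URL template that is
 * expected to contain a single %s marker.  Returns 0 on success,
 * -1 on truncation or encoding error. */
static int build_url(char *out, size_t outlen,
                     const char *url_template, const char *token)
{
    int n = snprintf(out, outlen, url_template, token);
    return (n >= 0 && (size_t) n < outlen) ? 0 : -1;
}
```

Usage would look like `build_url(buf, sizeof buf, "http://host/poll?session=%s", token)`, with each virtual client carrying its own token between cycles.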
From: Robert I. <cor...@gm...> - 2008-11-22 15:12:52
|
Hi Gary, On Sat, Nov 22, 2008 at 12:07 AM, Gary Fitts <ga...@in...> wrote: > Everything tests out just fine, Robert. Thanks, good to know. > > > One question: you put most of the new code (that was in a separate > file url_set.c) > into parse_conf.c. A lot of that is "run time" code that is executed > while the load > is running, while parse_conf.c used to contain only parse-time code. > Is that OK? > Or do you want to put run-time code into loader.c or a separate file? > Correct. I agree with this approach. There are more urgent tasks: - explaining to potential users how to use the new tags. Please, look at the README changes and review them: http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/doc/README?r1=534&r2=546 Added example files: http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/conf-examples/url-template-fr-file.conf?revision=546&view=markup http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/conf-examples/url-template-resp-dynamic.conf?view=markup http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/conf-examples/url-template-file.txt?revision=546&view=markup Afterwards, the man pages and the web-site pages are to be updated. - code review and tracking of potential bugs. In the meanwhile, I have detected only a single potential issue, in load_fsm.c: Calling the newly added close_connections() with a curl_easy_cleanup () for a client CURL handle is a bugfix for clients in CSTATE_FINISHED_OK state. However, it may create a bug for clients in state CSTATE_ERROR. Such clients with an error in one cycle may be re-scheduled once more for another cycle; thus an attempt will be made to add them to the multi-handle, the now-NULL handle will be rejected, and such clients will never be scheduled again for more loading cycles (there is a command-line option that configures such behavior, which is not the default). 
Calling curl_easy_cleanup () is a good hint to the libcurl multi-handle to decrease the pool of connections to the server. For clients in error state, one more curl_easy_init() was added, as you can see in svn. - bringing the newly added code to the coding conventions and coding style of the project (I started that; note that a couple of functions have been renamed to make their names clearer); - removing various extern declarations (done for functions); - functions should have a caption with a description in the appropriate format; > > Since I submitted these changes I've found the need for a couple more > tags. > I'll send those along in a day or so when I've been able to test them. > Great! If you can work against the code in svn and submit your code as patches (diff -Naru newcode-tree old-core-tree > somepatch.patch), it may make integration easier. Try, where possible, to follow the project's coding style and conventions. Thanks! -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... Assistive technology that understands you ...................................................................... |
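[Editor's note] The handle life-cycle issue Robert describes can be made concrete with a small sketch. The state names come from the thread; the predicate itself is a hypothetical distillation of the fix, not curl-loader source: after curl_easy_cleanup() has destroyed a client's CURL handle, an error-state client that error recovery may reschedule needs a fresh curl_easy_init() before it can be re-added to the multi-handle, otherwise the add is attempted with a NULL handle and the client is silently dropped.

```c
#include <assert.h>

/* States taken from the discussion above. */
enum client_state { CSTATE_FINISHED_OK, CSTATE_ERROR };

/* After close_connections() has run curl_easy_cleanup() on a client's
 * CURL handle, decide whether the handle must be re-created with
 * curl_easy_init().  Only error-state clients that error recovery may
 * reschedule need it; CSTATE_FINISHED_OK clients are done for good. */
static int handle_needs_reinit(enum client_state st, int error_recovery_client)
{
    return st == CSTATE_ERROR && error_recovery_client;
}
```

(Gary argues later in the thread that CSTATE_ERROR clients are never rescheduled even with error recovery enabled, so whether this branch can fire at all is exactly the open question.)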
From: Robert I. <cor...@gm...> - 2008-11-27 22:41:41
|
Hi Gary, Have you had a chance to look at the changes below? We would like to add the new tags also to the man and web pages and to make a new release. Thanks, Robert ========================================================== - explaining to potential users how to use the new tags. Please, look at the README changes and review them: http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/doc/README?r1=534&r2=546 Added example files: http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/conf-examples/url-template-fr-file.conf?revision=546&view=markup http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/conf-examples/url-template-resp-dynamic.conf?view=markup http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/conf-examples/url-template-file.txt?revision=546&view=markup Afterwards, the man pages and the web-site pages are to be updated. - code review and tracking of potential bugs. In the meanwhile, I have detected only a single potential issue, in load_fsm.c: Calling the newly added close_connections() with a curl_easy_cleanup () for a client CURL handle is a bugfix for clients in CSTATE_FINISHED_OK state. However, it may create a bug for clients in state CSTATE_ERROR. Such clients with an error in one cycle may be re-scheduled once more for another cycle; thus an attempt will be made to add them to the multi-handle, the now-NULL handle will be rejected, and such clients will never be scheduled again for more loading cycles (there is a command-line option that configures such behavior, which is not the default). Calling curl_easy_cleanup () is a good hint to the libcurl multi-handle to decrease the pool of connections to the server. For clients in error state, one more curl_easy_init() was added, as you can see in svn. 
- bringing the newly added code to the coding conventions and coding style of the project (I started that; note that a couple of functions have been renamed to make their names clearer); - removing various extern declarations (done for functions); - functions should have a caption with a description in the appropriate format; > > Since I submitted these changes I've found the need for a couple more > tags. > I'll send those along in a day or so when I've been able to test them. Great! If you can work against the code in svn and submit your code as patches (diff -Naru newcode-tree old-core-tree > somepatch.patch), it may make integration easier. Try, where possible, to follow the project's coding style and conventions. Thanks! -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... Assistive technology that understands you ...................................................................... |
From: Gary F. <ga...@in...> - 2008-11-28 16:07:37
|
We've had a major crunch at work for the last week, but I'll try to get this done soon, hopefully this weekend. On Nov 27, 2008, at 2:41 PM, Robert Iakobashvili wrote: > Hi Gary, > > Had you a chance to look at the below changes? > We would like to add the new tags also to the man and web-pages and to > make a new release. > > Thanks, > Robert > > > > ========================================================== > - explaining to potential users how to use the new tags. > Please, look at README changes and review them: > http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/doc/README?r1=534&r2=546 > > Added example files: > > http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/conf-examples/url-template-fr-file.conf?revision=546&view=markup > http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/conf-examples/url-template-resp-dynamic.conf?view=markup > http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/conf-examples/url-template-file.txt?revision=546&view=markup > > Afterwards, the man-pages and the web-site pages to be updates > > - code reviewing and potential bugs tracking. > > In meanwhile, I have detected only a single potential issue: > in load_fsm.c > > Calling of newly added close_connections() with a curl_easy_cleanup () > for a client CURL handle is a bugfix for clients in > CSTATE_FINISHED_OK state. > However, it make create a bug for clients in state CSTATE_ERROR. > > Such clients with an error in one cycle may be re-scheduled once more > for another cycle, thus, they will be attempted to be added to the > multi-handle, > rejected as null, and, therefore, such clients will be never > scheduled again for > more loading cycles (there is a command-line option to configure for > such behavior, > which is not the default). > > Calling curl_easy_cleanup () is a good hint to libcurl multi-handle to > decrease the pool of connections to the server. 
Added for clients in > error-state one more curl_easy_init() as you can see in svn. > > - bringing the newly added code to the coding conventions > and coding style of the project (I started that, note, that a couple > functions have been re-named to make their names more clear); > - removing various extern declarations (done for functions); > - functions should have function caption with description in > appropriate > format; > >> >> Since I submitted these changes I've found the need for a couple more >> tags. >> I'll send those along in a day or so when I've been able to test >> them. > > Great! > If you can work against the code in svn and submit your code as > patches > diff -Naru newcode-tree old-core-tree > somepatch.patch, it may make > integration easier. > > Try, where it is possible to follow the project's coding style and > conventions. > Thanks! > > -- > Truly, > Robert Iakobashvili, Ph.D. > ...................................................................... > Assistive technology that understands you > ...................................................................... |
From: Gary F. <ga...@in...> - 2008-11-29 05:32:38
Attachments:
Nov28.patch
|
Hello Robert, Here is the patch file for my latest source: diff -Naru my-source-tree version-547-source-tree > Nov28.patch I've tried to conform to your coding conventions and style. If I've missed anything, I apologize. Please take a look at the code in load_fsm.c that you referred to, where I call curl_easy_cleanup. You pointed out that I shouldn't call this if the client might be scheduled again. But I think that at the point where this is called, the functions in load_state_func_table have returned CSTATE_ERROR, and I think this means that the client will never be scheduled again even if error_recovery_client is true. I may be wrong ... I'll check the document files tomorrow or Sunday. Thanks, Gary |
From: Robert I. <cor...@gm...> - 2008-11-10 16:55:56
|
Hi Alo, On Mon, Nov 10, 2008 at 6:13 PM, alo sinnathamby <asi...@ic...>wrote: > Hi Robert, > > Thanks for your reply. > > We have two test clients and the environment in both machines are same > except the client 2 has 8 Gig RAM and client 1 has 4 Gig RAM. The test > script works fine with test client 1 and it gives the following error > with test client 2. > > setup_curl_handle_appl - error: post_data is NULL. > setup_curl_handle_init - error: setup_curl_handle_appl () failed . > > > These are my client machines: > ----------------------------- > > Test client 1: > Operating System: CentOS release 5.2 (Final) > CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz > Ram: 4 Gigabytes DDR2 > > Test client 2: > Operating System: CentOS release 5.2 (Final) > CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz > Ram: 8 Gigabytes DDR2 > > > I have also followed the recommendations given in the following links, > > section 7.2 in http://curl-loader.sourceforge.net/doc/faq.html > http://curl-loader.sourceforge.net/high-load-hw/index.html > > > If you have any clue as why we get this type of error in only one > client, please advice us and Greatly appreciate your help. > Is it a 32-bit distribution or 64-bit ? -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... Assistive technology that understands you ...................................................................... |
From: alo s. <asi...@ic...> - 2008-11-10 17:04:12
|
It is a 64-bit distribution. Robert Iakobashvili wrote: > Hi Alo, > > On Mon, Nov 10, 2008 at 6:13 PM, alo sinnathamby > <asi...@ic... <mailto:asi...@ic...>> wrote: > > Hi Robert, > > Thanks for your reply. > > We have two test clients and the environment in both machines are same > except the client 2 has 8 Gig RAM and client 1 has 4 Gig RAM. The test > script works fine with test client 1 and it gives the following error > with test client 2. > > setup_curl_handle_appl - error: post_data is NULL. > setup_curl_handle_init - error: setup_curl_handle_appl () failed . > > > These are my client machines: > ----------------------------- > > Test client 1: > Operating System: CentOS release 5.2 (Final) > CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz > Ram: 4 Gigabytes DDR2 > > Test client 2: > Operating System: CentOS release 5.2 (Final) > CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz > Ram: 8 Gigabytes DDR2 > > > I have also followed the recommendations given in the following links, > > section 7.2 in http://curl-loader.sourceforge.net/doc/faq.html > http://curl-loader.sourceforge.net/high-load-hw/index.html > > > If you have any clue as why we get this type of error in only one > client, please advice us and Greatly appreciate your help. > > > Is it a 32-bit distribution or 64-bit ? > > -- > Truly, > Robert Iakobashvili, Ph.D. > ...................................................................... > Assistive technology that understands you > ...................................................................... |
From: Robert I. <cor...@gm...> - 2008-11-10 17:22:30
|
On Mon, Nov 10, 2008 at 7:01 PM, alo sinnathamby <asi...@ic...> wrote: > it is 64 bit distribution > > Do you have both a 64-bit user space and a 64-bit kernel? Please verify it. Compile with debugging symbols and a) without optimization make cleanall; make debug=1 optimize=0 b) with optimization make cleanall; make debug=1 optimize=1 See if you can reproduce the problem, and try to debug it with gdb, or with gdb and ddd. It looks like something rather simple to debug and fix, like some optimization bug, etc. One more option: please try the recent svn version and see if it helps. -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... Assistive technology that understands you ...................................................................... |
From: alo s. <asi...@ic...> - 2008-11-10 22:06:31
|
Hi Robert, You are right. I have run it in the following order: 1 - make cleanall; make debug=1 optimize=0 => throws the same error 2 - make cleanall; make debug=1 optimize=1 => works fine Greatly appreciate your help, and thank you. Thanks, Alo Robert Iakobashvili wrote: > > > On Mon, Nov 10, 2008 at 7:01 PM, alo sinnathamby > <asi...@ic... <mailto:asi...@ic...>> wrote: > > it is 64 bit distribution > > > Do you have both 64-bit user space and 64-bit kernel? > Please, verify it. > > Compile with debugging symbols and > a) without optimization > > make cleanall; make debug=1 optimize=0 > > b) with optimization > make cleanall; make debug=1 optimize=1 > > See, if you reproduce the problem, and try to debug it > by gdb or gdb and ddd. > > Looks like something rather simple to debug and fix. like > some optimization bug, etc. > > One more option is, please, try the recent svn-version and see, > if it helps. > > -- > Truly, > Robert Iakobashvili, Ph.D. > ...................................................................... > Assistive technology that understands you > ...................................................................... |
From: Robert I. <cor...@gm...> - 2008-11-11 04:37:59
|
Hi Alo, On Tue, Nov 11, 2008 at 12:05 AM, alo sinnathamby <asi...@ic...> wrote: > Hi Robert, > > You are right. > > i have run it in the following order, > > 1 - make cleanall; make debug=1 optimize=0 > > => throws the same error > > 2 - make cleanall; make debug=1 optimize=1 > > => works fine > > > *Greatly appreciate your help and Thank you.. * > > > Thanks, > Alo > > If you could debug and suggest a fix, it would be appreciated. -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... Assistive technology that understands you ...................................................................... |
From: alo s. <asi...@ic...> - 2008-11-11 14:14:12
|
Hi Robert, I wish to do that, but I'm not sure how to do it. Anyway, I will try it, and if I succeed, I will let you know. Thanks, Alo Robert Iakobashvili wrote: > Hi Alo, > > On Tue, Nov 11, 2008 at 12:05 AM, alo sinnathamby > <asi...@ic... <mailto:asi...@ic...>> wrote: > > Hi Robert, > > You are right. > > i have run it in the following order, > > 1 - make cleanall; make debug=1 optimize=0 > > => throws the same error > > 2 - make cleanall; make debug=1 optimize=1 > > => works fine > > > *Greatly appreciate your help and Thank you.. * > > > Thanks, > Alo > > > > If you could debug and suggest a fix, it will be appreciated. > > -- > Truly, > Robert Iakobashvili, Ph.D. > ...................................................................... > Assistive technology that understands you > ...................................................................... |
From: Robert I. <cor...@gm...> - 2008-11-11 17:43:32
|
Hi Alo, On Tue, Nov 11, 2008 at 4:13 PM, alo sinnathamby <asi...@ic...> wrote: > Hi Robert, > > I wish to do that but i m not sure as how to do it. Anyway i will try it > and if there is a success, i will let you know. > > Thanks, > Alo > OK. If you can share the numbers of concurrent clients and the CAPS that you are reaching, it could also be of interest for the list. Thanks, Robert |