curl-loader-devel Mailing List for curl-loader - web application testing (Page 27)
From: Gary F. <ga...@in...> - 2008-11-01 14:46:49
|
Thanks, Robert. The URL_COMPLETION_TIMEOUT is set to 30 seconds (30000 ms), and nothing happens for the last 25 seconds. And timing out wouldn't explain why the server initiates the disconnect. If the client were timing out too soon, the client would initiate the disconnect, I would think. This certainly looks like a server problem. I wonder if the client could somehow be taught to reestablish the connection when the server disconnects, rather than timing out?

FWIW here's the configuration file. You can see some of the extensions I've built. But I doubt they're the cause of the problem, because they work to generate the URLs, and the server doesn't know anything about them.

On Nov 1, 2008, at 7:22 AM, Robert Iakobashvili wrote:
> Could it be, that your URL_COMPLETION_TIMEOUT value is too short
> and you need to increase it?
>
> It may happen, when POST-ing is accompanied by 100-Continue as seen
> in your log file. |
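Gary's wish to have the client re-establish the connection when the server drops it maps, at the libcurl level, onto a fresh-connection-plus-retry pattern. The sketch below is only an illustration under that assumption, not curl-loader's own code; the URL handling, timeout value and single-retry policy are all hypothetical:

    #include <curl/curl.h>

    /* Illustrative only: force a new TCP connection per transfer and retry
     * once if the server closes the connection before the response is done. */
    static int fetch_with_retry(const char *url)
    {
        CURL *h = curl_easy_init();
        if (!h)
            return -1;

        curl_easy_setopt(h, CURLOPT_URL, url);
        curl_easy_setopt(h, CURLOPT_FRESH_CONNECT, 1L);  /* do not reuse an old connection */
        curl_easy_setopt(h, CURLOPT_FORBID_REUSE, 1L);   /* close it when the transfer ends */
        curl_easy_setopt(h, CURLOPT_TIMEOUT_MS, 30000L); /* same ballpark as the 30 s completion timeout */

        CURLcode rc = curl_easy_perform(h);
        if (rc == CURLE_GOT_NOTHING || rc == CURLE_RECV_ERROR)
            rc = curl_easy_perform(h);                   /* server hung up early - try once more */

        curl_easy_cleanup(h);
        return (rc == CURLE_OK) ? 0 : -1;
    }

curl-loader drives its virtual clients through libcurl's multi interface, so any real change along these lines would live inside its event loop rather than in a standalone helper like this.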
From: Robert I. <cor...@gm...> - 2008-11-01 14:37:14
|
Hi Gary,

On Sat, Nov 1, 2008 at 2:47 AM, Gary Fitts <ga...@in...> wrote:
> Hello Robert,
>
> I've run into a problem that seems unrelated to the curl-loader extensions
> I'm building. This occurs whenever I have two or more virtual clients
> running simultaneously. Here's what I see when I look at the TCP traffic
> with Wireshark:
>
> Roughly half the clients proceed normally. The server sends the final
> response of the conversation, the client ACKs this, and then the client
> initiates a FIN-ACK and the server replies with FIN-ACK. That's it, and
> everyone's happy.
>
> But with the rest of the clients, something happens in the middle of the
> conversation, before the client is ready to finish. The server replies to a
> client request, the client ACKs this reply, and then the server unexpectedly
> initiates a FIN-ACK. The client responds with two RST packets, but the
> server doesn't react, and the client times out.
>
> Unfortunately I'm no TCP expert. Is this a server malfunction? Or perhaps
> this is normal server behavior, and the curl clients should handle the
> unexpected FIN-ACK differently? I don't know if you can shed any light on
> this, but I'd be grateful for any ideas.
>
> I've included a tcpdump file (which you can view with Wireshark) and the
> .log file. (It's interesting that in the case of the malfunctioning clients,
> the final server reply is never logged, although it appears in the tcpdump.)
> I can send the .conf file as well if you want (but that might just be a
> distraction). The command line was $curl-loader -u -v -f genusers.conf. The
> same thing happens when I add the -r option.
>
What do the server's logs say? Are there any errors at the server side?

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
...................................................................... |
From: Robert I. <cor...@gm...> - 2008-11-01 14:22:34
|
On Sat, Nov 1, 2008 at 2:47 AM, Gary Fitts <ga...@in...> wrote: > Hello Robert, > > I've run into a problem that seems unrelated to the curl-loader extensions > I'm building. This occurs whenever I have two or more virtual clients > running simultaneously. Here's what I see when I look at the TCP traffic > with Wireshark: > > Roughly half the clients proceed normally. The server sends the final > response of the conversation, the client ACKs this, and then the client > initiates a FIN-ACK and the server replies with FIN-ACK. That's it, and > everyone's happy. > > But with the rest of the clients, something happens in the middle of the > conversation, before the client is ready to finish. The server replies to a > client request, the client ACKs this reply, and then the server unexpectedly > initiates a FIN-ACK. The client responds with two RST packets, but the > server doesn't react, and the client times out. > Could it be, that your URL_COMPLETION_TIMEOUT value is too short and you need to increase it? It may happen, when POST-ing is accompanied by 100-Continue as seen in your log file. -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
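The 100-Continue exchange Robert mentions comes from libcurl adding an "Expect: 100-continue" header to larger POST bodies and waiting briefly for the server's interim reply. Purely as an illustration of the libcurl-level workaround (not a curl-loader configuration option), the header can be suppressed by sending it with an empty value:

    #include <curl/curl.h>

    /* Sketch: tell libcurl not to send "Expect: 100-continue" on POSTs,
     * so the request body goes out immediately.                         */
    static struct curl_slist *disable_expect_continue(CURL *handle)
    {
        struct curl_slist *headers = curl_slist_append(NULL, "Expect:");
        curl_easy_setopt(handle, CURLOPT_HTTPHEADER, headers);
        return headers;  /* caller frees with curl_slist_free_all() after the transfer */
    }

Whether removing the extra round-trip is desirable here depends on the server; it simply takes the 100-Continue step out of the picture.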
From: Gary F. <ga...@in...> - 2008-11-01 00:47:23
|
Hello Robert,

I've run into a problem that seems unrelated to the curl-loader extensions I'm building. This occurs whenever I have two or more virtual clients running simultaneously. Here's what I see when I look at the TCP traffic with Wireshark:

Roughly half the clients proceed normally. The server sends the final response of the conversation, the client ACKs this, and then the client initiates a FIN-ACK and the server replies with FIN-ACK. That's it, and everyone's happy.

But with the rest of the clients, something happens in the middle of the conversation, before the client is ready to finish. The server replies to a client request, the client ACKs this reply, and then the server unexpectedly initiates a FIN-ACK. The client responds with two RST packets, but the server doesn't react, and the client times out.

Unfortunately I'm no TCP expert. Is this a server malfunction? Or perhaps this is normal server behavior, and the curl clients should handle the unexpected FIN-ACK differently? I don't know if you can shed any light on this, but I'd be grateful for any ideas.

I've included a tcpdump file (which you can view with Wireshark) and the .log file. (It's interesting that in the case of the malfunctioning clients, the final server reply is never logged, although it appears in the tcpdump.) I can send the .conf file as well if you want (but that might just be a distraction). The command line was $curl-loader -u -v -f genusers.conf. The same thing happens when I add the -r option.

Thanks,
Gary Fitts |
From: Robert I. <cor...@gm...> - 2008-10-31 12:06:23
|
Hi Gary,

On Mon, Oct 27, 2008 at 8:38 PM, Gary Fitts <ga...@in...> wrote:
> Thanks for replying, Robert.
>
> > Actually, when cctx->post_data is allocated,
> > it should be set zero to the first char, like
> > cctx->post_data[0] = '\0';
>
> Yes, cctx->post_data[0] is set to zero, and that's the problem. This
> causes the condition "else if (cctx->post_data && cctx->post_data[0])"
> to fail, and to drop through to the "post_data is NULL" error, which
> aborts the batch run.
>
> When I comment out the second part: "else if (cctx->post_data /* &&
> cctx->post_data[0] */)", we call init_client_url_post_data, and this
> seems to work.
>
Committed your fix. You and Alex, who reported this issue previously, have been added to our THANKS list. Thank you very much!

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
...................................................................... |
From: Robert I. <cor...@gm...> - 2008-10-31 11:25:00
|
On Fri, Oct 31, 2008 at 10:48 AM, Matt Love <mat...@av...> wrote:
>
> Thank you for your suggestion. I tried using the MULTIPART_FORM_DATA tag
> as described below, but ran into the following error:
>
> RUNNING LOAD
> setup_curl_handle_appl - error: post_data is NULL.
>
It looks similar to the issue faced by Gary Fitts. I am sending you a file where it is resolved by commenting out some extra checking:

else if (cctx->post_data /*&& cctx->post_data[0]*/)

Please try this file instead of the file in your distribution.

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
...................................................................... |
From: Matt L. <mat...@av...> - 2008-10-31 08:49:04
|
Thank you for your suggestion. I tried using the MULTIPART_FORM_DATA tag as described below, but ran into the following error:

RUNNING LOAD
setup_curl_handle_appl - error: post_data is NULL.
setup_curl_handle_init - error: setup_curl_handle_appl () failed .
setup_url error: setup_curl_handle_init - failed
add_loading_clients error: load_next_step() initial failed
init_timers_and_add_initial_clients_to_load error: add_loading_clients () failed.
user_activity_hyper - error: init_timers_and_add_initial_clients_to_load () failed.
batch_function - "nomadeskpoll" -user activity failed.
Exited batch_function

My xml file, poll2.xml resides in the same dir as curl-loader, and I don't see any traffic with wireshark. Any other ideas?

Regards,
Matt Love

From: Robert Iakobashvili [mailto:cor...@gm...]
Sent: Thursday, October 30, 2008 6:07 PM
To: curl-loader-devel
Subject: Re: xml post

Hi Matt,

There is a known issue of libcurl, that with UPLOAD_FILE is places PUT although you have correctly specified POST. Let's try to make some workaround using MULTIPART_FORM_DATA tag instead. the syntax is very similar to the -F curl option.

########### URL SECTION ####################################
URL=http://192.168.2.56/nomadesk/index.php?TaskNavigator::Task=ReceiveMessage
URL_SHORT_NAME="Poll"
REQUEST_TYPE=POST
MULTIPART_FORM_DATA="file=@poll2.xml"
TIMER_URL_COMPLETION = 5000
TIMER_AFTER_URL_SLEEP = 5000

place poll2.xml to the same directory, where curl-loader resides. You may give it a try.

Sincerely,
Robert |
From: Robert I. <cor...@gm...> - 2008-10-30 17:06:58
|
Hi Matt,

On Thu, Oct 30, 2008 at 5:12 PM, Matt Love <mat...@av...> wrote:
> Attached are two captures from tshark. You can see that one of them
> fails due to a 405 Method Not Allowed error, which I assume it is referring
> to the PUT. The second response I get what I am expecting from the server.
>
There is a known issue of libcurl, that with UPLOAD_FILE it places PUT although you have correctly specified POST. Let's try to make some workaround using the MULTIPART_FORM_DATA tag instead. The syntax is very similar to the -F curl option.

########### URL SECTION ####################################
URL= http://192.168.2.56/nomadesk/index.phpTask?Navigator::Task=ReceiveMessage
URL_SHORT_NAME="Poll"
REQUEST_TYPE=POST
MULTIPART_FORM_DATA="file=@poll2.xml"
TIMER_URL_COMPLETION = 5000
TIMER_AFTER_URL_SLEEP = 5000

Place poll2.xml in the same directory where curl-loader resides. You may give it a try.

Sincerely,
Robert |
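At the libcurl level, the curl -F file=@poll2.xml behaviour that MULTIPART_FORM_DATA is meant to reproduce corresponds to the multipart form API. Below is a minimal sketch using the curl_formadd() interface of that era; the part name and file name are borrowed from the example above, everything else is illustrative rather than curl-loader's implementation:

    #include <curl/curl.h>

    /* Sketch: POST poll2.xml as a multipart/form-data part named "file",
     * i.e. the libcurl equivalent of: curl -F file=@poll2.xml <url>      */
    static int post_multipart(CURL *handle, const char *url)
    {
        struct curl_httppost *post = NULL, *last = NULL;

        curl_formadd(&post, &last,
                     CURLFORM_COPYNAME, "file",
                     CURLFORM_FILE, "poll2.xml",
                     CURLFORM_END);

        curl_easy_setopt(handle, CURLOPT_URL, url);
        curl_easy_setopt(handle, CURLOPT_HTTPPOST, post); /* multipart form data implies POST */

        CURLcode rc = curl_easy_perform(handle);
        curl_formfree(post);                              /* release the form after the transfer */
        return (rc == CURLE_OK) ? 0 : -1;
    }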
From: Robert I. <cor...@gm...> - 2008-10-30 14:52:27
|
Hi Matt,

On Thu, Oct 30, 2008 at 4:28 PM, Matt Love <mat...@av...> wrote:
> The decoded response is:
>
> The server wants a POST, but is there a way for me to pass a file in the
> CONF file to use as the POST stream? This is really easy to do with curl
> with the command: "curl -F file=@post.xml http://urltopostto.php", I
> figured it would be just as easy in curl-loader but so far, no luck!
>
Please send me two wireshark packet captures:
a. from curl -F file=@post.xml http://urltopostto.php
b. from curl-loader.

Thanks.

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
...................................................................... |
From: Robert I. <cor...@gm...> - 2008-10-30 14:48:02
|
On Thu, Oct 30, 2008 at 3:17 PM, alo sinnathamby <asi...@ic...>wrote: > Hi Robert, > > Thanks for your immediate reply and appreciate it. I am looking at your > suggestions and in the mean time please find below the conf file for the > tests that i run, > Greatly appreciate your help. > > one more suggestion besides decreasing the number of clients is to run it with -t 2 or -t4 command. Still this is a part of -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... Assistive technology that understands you ...................................................................... |
From: Matt L. <mat...@av...> - 2008-10-30 14:29:15
|
The decoded response is:

Server: Apache/2.2.8 (Fedora)^M
Allow: GET,HEAD,POST,OPTIONS,TRACE^M
Content-Length: 320^M
Connection: close^M
Content-Type: text/html; charset=iso-8859-1^M
^M
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>405 Method Not Allowed</title>
</head><body>
<h1>Method Not Allowed</h1>
<p>The requested method PUT is not allowed for the URL /nomadesk/index.phpTask.</p>
<hr>
<address>Apache/2.2.8 (Fedora) Server at 192.168.2.56 Port 80</address>
</body></html>

[garbled raw packet bytes omitted]

The server wants a POST, but is there a way for me to pass a file in the CONF file to use as the POST stream? This is really easy to do with curl with the command: "curl -F file=@post.xml http://urltopostto.php", I figured it would be just as easy in curl-loader but so far, no luck!

Thanks again for any help.

Regards,
Matt Love

From: Robert Iakobashvili [mailto:cor...@gm...]
Sent: Wednesday, October 29, 2008 1:30 PM
To: curl-loader-devel
Subject: Re: xml post

Hi Matt,

On Wed, Oct 29, 2008 at 1:26 PM, Matt Love <mat...@av...> wrote:

REQUEST_TYPE=POST

wireshark sees this:

761.972165 Vmware_d6:4b:dd -> Broadcast ARP Who has 192.168.2.56? Tell 192.168.2.71
761.972667 Vmware_b9:1e:c2 -> Vmware_d6:4b:dd ARP 192.168.2.56 is at 00:0c:29:b9:1e:c2
761.972672 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=167834233 TSER=0 WS=7
761.972832 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=167910328 TSER=167834233 WS=7
761.972854 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [ACK] Seq=1 Ack=1 Win=5888 Len=0 TSV=167834235 TSER=167910328
761.973090 192.168.2.71 -> 192.168.2.56 TCP [TCP segment of a reassembled PDU]
761.973265 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [ACK] Seq=1 Ack=212 Win=6912 Len=0 TSV=167910329 TSER=167834235
761.973660 192.168.2.56 -> 192.168.2.71 HTTP HTTP/1.1 100 Continue
761.973665 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [ACK] Seq=212 Ack=26 Win=5888 Len=0 TSV=167834236 TSER=167910329
761.973907 192.168.2.71 -> 192.168.2.56 HTTP PUT /nomadesk/index.phpTask?Navigator::Task=ReceiveMessage HTTP/1.1
761.974298 192.168.2.56 -> 192.168.2.71 HTTP HTTP/1.1 405 Method Not Allowed (text/html)
761.974431 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [FIN, ACK] Seq=570 Ack=595 Win=7936 Len=0 TSV=167910330 TSER=167834236
761.974435 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [ACK] Seq=595 Ack=571 Win=7040 Len=0 TSV=167834237 TSER=167910330
761.974609 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [FIN, ACK] Seq=595 Ack=571 Win=7040 Len=0 TSV=167834237 TSER=167910330
761.974734 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [ACK] Seq=571 Ack=596 Win=7936 Len=0 TSV=167910330 TSER=167834237

In the sourceforge TODO doc for curl-loader I see that "transparent support for POST-ing some file, e.g. a SOAP file" is on the TODO list for the next beta. This seems to be exactly what I need. So would there be a non-transparent way (workaround) to achieve this same thing in the current release?

What we see from the wireshark trace is that the first HTTP is most probably POST or PUT (you may wish to use the "Decode As" function of wireshark GUI or look at the capture text strings) with subsequent 100 Continue and further PUT HTTP request attempt, responded by 405 + more decoded

Could you, please, either attach the capture or send the HTTP-decoded stream?

Thanks.
Robert |
From: alo s. <asi...@ic...> - 2008-10-30 13:52:16
|
Hi Robert, I have copied the content of conf file below, ########### GENERAL SECTION ################################ BATCH_NAME=SAH10000r5 CLIENTS_NUM_MAX = 10000 CLIENTS_NUM_START = 50 CLIENTS_RAMPUP_INC= 50 INTERFACE=eth0 NETMASK=24 IP_ADDR_MIN=10.0.1.240 IP_ADDR_MAX=10.0.1.240 CYCLES_NUM= 1 URLS_NUM=1 ########### URL SECTION ################################## ### Login URL - only once for each client # GET-part URL= http://xx.x.x.x:pppp/main URL_SHORT_NAME="Login-GET" #URL_DONT_CYCLE = 1 REQUEST_TYPE=GET TIMER_URL_COMPLETION = 0 # In msec. Now it is enforced by cancelling url fetch on timeout TIMER_AFTER_URL_SLEEP = 0 # POST-part #URL="" #URL_USE_CURRENT= 1 #URL_SHORT_NAME="Login-POST" #URL_DONT_CYCLE = 1 #USERNAME=admin #PASSWORD=your_password #REQUEST_TYPE=POST #FORM_USAGE_TYPE= SINGLE_USER #FORM_STRING= username=%s&password=%s # Means the same credentials for all clients/users #TIMER_URL_COMPLETION = 0 # In msec. When positive, Now it is enforced by cancelling url fetch on timeout #TIMER_AFTER_URL_SLEEP =500 Thanks, Alo alo sinnathamby wrote: > Hi Robert, > > Thanks for your immediate reply and appreciate it. I am looking at your > suggestions and in the mean time please find below the conf file for the > tests that i run, > Greatly appreciate your help. > > Thanks, > Alo > > ########### GENERAL SECTION ################################ > BATCH_NAME=SAH10000r5 > CLIENTS_NUM_MAX = 10000 > CLIENTS_NUM_START = 50 > CLIENTS_RAMPUP_INC= 50 > INTERFACE=eth0 > NETMASK=24 > IP_ADDR_MIN=10.0.1.240 > IP_ADDR_MAX=10.0.1.240 > CYCLES_NUM= 1 > URLS_NUM=1 > > ########### URL SECTION ################################## > > ### Login URL - only once for each client > > # GET-part > URL= http://xx.x.x.x:pppp/main > URL_SHORT_NAME="Login-GET" > #URL_DONT_CYCLE = 1 > REQUEST_TYPE=GET > TIMER_URL_COMPLETION = 0 # In msec. Now it is enforced by cancelling url > fetch on timeout > TIMER_AFTER_URL_SLEEP = 0 > > # POST-part > #URL="" > #URL_USE_CURRENT= 1 > #URL_SHORT_NAME="Login-POST" > #URL_DONT_CYCLE = 1 > #USERNAME=admin > #PASSWORD=your_password > #REQUEST_TYPE=POST > #FORM_USAGE_TYPE= SINGLE_USER > #FORM_STRING= username=%s&password=%s # Means the same credentials for > all clients/users > #TIMER_URL_COMPLETION = 0 # In msec. When positive, Now it is > enforced by cancelling url fetch on timeout > #TIMER_AFTER_URL_SLEEP =500 > > > > > > > > Robert Iakobashvili wrote: > >> Hi Alo Sinnathamby, >> >> On Wed, Oct 29, 2008 at 10:04 PM, alo sinnathamby >> <asi...@ic... <mailto:asi...@ic...>> wrote: >> >> Dear support team, >> >> CURL-LOADER VERSION: 0.46 >> >> HW DETAILS: CPU/S and memory are must: >> x86_64 x86_64 x86_64 GNU/Linux >> OS: CentOS release 5.2 (Final) >> CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz >> Ram: 2 Gigabytes DDR2 >> >> LINUX DISTRIBUTION and KERNEL (uname -r): CentOS release 5.2 (Final) >> >> GCC VERSION (gcc -v): >> >> COMPILATION AND MAKING OPTIONS (if defaults changed): >> >> COMMAND-LINE: >> >> CONFIGURATION-FILE (The most common source of problems): >> Place the file inline here: >> >> Thank you for PRF. >> >> You missed to place your configuration file, wich >> is essential for any judgement, >> >> >> >> >> QUESTION/ SUGGESTION/ PATCH: >> >> I have installed curl-loader on a testing client machine. i tried to >> test a home page of a website which is running with tomcat. 
the tomcat >> server system is: >> x86_64 x86_64 x86_64 GNU/Linux >> Operating System: CentOS release 5 (Final) >> CPU: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz >> Ram: 4 Gigabytes DDR2 >> >> when i run 10 000 requests at different time intervals, i am getting >> different results as given below, >> >> 10 000 - run1 >> =============== >> Test total duration was 202 seconds and CAPS average 97: >> H/F >> Req:10000,1xx:0,2xx:9993,3xx:0,4xx:0,5xx:7,Err:0,T-Err:0,D:111ms,D-2xx:111ms,Ti:280361B/s,To:7425B/s >> H/F/S >> Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s >> Operations: Success Failed >> Timed out >> URL0:Login-GET 150 9993 0 7 >> 0 0 >> >> >> 10 000 - run2 >> ============== >> Test total duration was 234 seconds and CAPS average 85: >> H/F >> Req:9773,1xx:0,2xx:9773,3xx:0,4xx:0,5xx:0,Err:227,T-Err:0,D:861ms,D-2xx:861ms,Ti:236413B/s,To:6264B/s >> H/F/S >> Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s >> Operations: Success Failed >> Timed out >> URL0:Login-GET 7 9773 0 227 >> 0 0 >> >> >> 10 000 - run3 >> ============== >> Test total duration was 318 seconds and CAPS average 62: >> H/F >> Req:6183,1xx:0,2xx:6012,3xx:0,4xx:0,5xx:0,Err:3988,T-Err:0,D:6602ms,D-2xx:6602ms,Ti:106771B/s,To:2916B/s >> H/F/S >> Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s >> Operations: Success Failed >> Timed out >> URL0:Login-GET 0 6012 3 3988 >> 0 0 >> >> >> Could you please let us know as how to fine tune it to get a >> consistent >> result. FYI: both the client and server are in the local network. >> >> >> I can only guess without the configuration file, that your server >> works really hard. Please, monitor the server-side performance. >> >> The reasons for errors could be seen in the log file. >> Please look into the <your-name>.log file. >> >> You can see at the reasons of errors, when adding to the command line -v. >> Most probably, your server gets saturated and does not handle >> TCP-connections. >> >> >> You can also try to recompile curl-loader with optimization and add more >> mem for TCP, file descriptors like here: >> http://curl-loader.sourceforge.net/doc/faq.html >> >> in the 7.2. How to run a really big load? >> >> Please, update me about your advances. >> >> >> >> -- >> Truly, >> Robert Iakobashvili, Ph.D. >> ...................................................................... >> Assistive technology that understands you >> ...................................................................... >> >> > > > NOTE: This message, and any attached files, contain information that is privileged, confidential, proprietary or otherwise protected from disclosure. Any disclosure, copying or distribution of, or reliance upon, this message by anyone else is strictly prohibited. If you received this communication in error, please notify the sender immediately by reply e-mail message or by telephone to one of the numbers above and deleting it from your computer. Thank you. |
From: Robert I. <cor...@gm...> - 2008-10-30 13:29:04
|
Hi Alo,

On Thu, Oct 30, 2008 at 3:17 PM, alo sinnathamby <asi...@ic...> wrote:
> Hi Robert,
>
> Thanks for your immediate reply and appreciate it. I am looking at your
> suggestions and in the mean time please find below the conf file for the
> tests that i run,
> Greatly appreciate your help.
>
The suggestions are:
a) to look at the errors in SAH10000r5.log and see what is written there when there are errors;
b) look in the FAQs about increasing performance on the client side (file descriptors), etc., for a big-load optimization;
c) look at your server side.

See others embedded:

########### GENERAL SECTION ################################
> BATCH_NAME=SAH10000r5
> CLIENTS_NUM_MAX = 10000

Each client requires about 40K memory, which means that your 4GB memory is not enough. Working with more than 4000-5000 clients requires doing the Big-Load optimizations as in the FAQs. Try to work, let's say, with 1000-5000 clients and to make the Big-Load optimizations.

> CLIENTS_NUM_START = 50
> CLIENTS_RAMPUP_INC= 50
> INTERFACE=eth0
> NETMASK=24
> IP_ADDR_MIN=10.0.1.240
> IP_ADDR_MAX=10.0.1.240
> CYCLES_NUM= 1
> URLS_NUM=1
>
Really, you are running more than one cycle, aren't you?

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
...................................................................... |
From: alo s. <asi...@ic...> - 2008-10-30 13:18:23
|
Hi Robert, Thanks for your immediate reply and appreciate it. I am looking at your suggestions and in the mean time please find below the conf file for the tests that i run, Greatly appreciate your help. Thanks, Alo ########### GENERAL SECTION ################################ BATCH_NAME=SAH10000r5 CLIENTS_NUM_MAX = 10000 CLIENTS_NUM_START = 50 CLIENTS_RAMPUP_INC= 50 INTERFACE=eth0 NETMASK=24 IP_ADDR_MIN=10.0.1.240 IP_ADDR_MAX=10.0.1.240 CYCLES_NUM= 1 URLS_NUM=1 ########### URL SECTION ################################## ### Login URL - only once for each client # GET-part URL= http://xx.x.x.x:pppp/main URL_SHORT_NAME="Login-GET" #URL_DONT_CYCLE = 1 REQUEST_TYPE=GET TIMER_URL_COMPLETION = 0 # In msec. Now it is enforced by cancelling url fetch on timeout TIMER_AFTER_URL_SLEEP = 0 # POST-part #URL="" #URL_USE_CURRENT= 1 #URL_SHORT_NAME="Login-POST" #URL_DONT_CYCLE = 1 #USERNAME=admin #PASSWORD=your_password #REQUEST_TYPE=POST #FORM_USAGE_TYPE= SINGLE_USER #FORM_STRING= username=%s&password=%s # Means the same credentials for all clients/users #TIMER_URL_COMPLETION = 0 # In msec. When positive, Now it is enforced by cancelling url fetch on timeout #TIMER_AFTER_URL_SLEEP =500 Robert Iakobashvili wrote: > Hi Alo Sinnathamby, > > On Wed, Oct 29, 2008 at 10:04 PM, alo sinnathamby > <asi...@ic... <mailto:asi...@ic...>> wrote: > > Dear support team, > > CURL-LOADER VERSION: 0.46 > > HW DETAILS: CPU/S and memory are must: > x86_64 x86_64 x86_64 GNU/Linux > OS: CentOS release 5.2 (Final) > CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz > Ram: 2 Gigabytes DDR2 > > LINUX DISTRIBUTION and KERNEL (uname -r): CentOS release 5.2 (Final) > > GCC VERSION (gcc -v): > > COMPILATION AND MAKING OPTIONS (if defaults changed): > > COMMAND-LINE: > > CONFIGURATION-FILE (The most common source of problems): > Place the file inline here: > > Thank you for PRF. > > You missed to place your configuration file, wich > is essential for any judgement, > > > > > QUESTION/ SUGGESTION/ PATCH: > > I have installed curl-loader on a testing client machine. i tried to > test a home page of a website which is running with tomcat. 
the tomcat > server system is: > x86_64 x86_64 x86_64 GNU/Linux > Operating System: CentOS release 5 (Final) > CPU: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz > Ram: 4 Gigabytes DDR2 > > when i run 10 000 requests at different time intervals, i am getting > different results as given below, > > 10 000 - run1 > =============== > Test total duration was 202 seconds and CAPS average 97: > H/F > Req:10000,1xx:0,2xx:9993,3xx:0,4xx:0,5xx:7,Err:0,T-Err:0,D:111ms,D-2xx:111ms,Ti:280361B/s,To:7425B/s > H/F/S > Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s > Operations: Success Failed > Timed out > URL0:Login-GET 150 9993 0 7 > 0 0 > > > 10 000 - run2 > ============== > Test total duration was 234 seconds and CAPS average 85: > H/F > Req:9773,1xx:0,2xx:9773,3xx:0,4xx:0,5xx:0,Err:227,T-Err:0,D:861ms,D-2xx:861ms,Ti:236413B/s,To:6264B/s > H/F/S > Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s > Operations: Success Failed > Timed out > URL0:Login-GET 7 9773 0 227 > 0 0 > > > 10 000 - run3 > ============== > Test total duration was 318 seconds and CAPS average 62: > H/F > Req:6183,1xx:0,2xx:6012,3xx:0,4xx:0,5xx:0,Err:3988,T-Err:0,D:6602ms,D-2xx:6602ms,Ti:106771B/s,To:2916B/s > H/F/S > Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s > Operations: Success Failed > Timed out > URL0:Login-GET 0 6012 3 3988 > 0 0 > > > Could you please let us know as how to fine tune it to get a > consistent > result. FYI: both the client and server are in the local network. > > > I can only guess without the configuration file, that your server > works really hard. Please, monitor the server-side performance. > > The reasons for errors could be seen in the log file. > Please look into the <your-name>.log file. > > You can see at the reasons of errors, when adding to the command line -v. > Most probably, your server gets saturated and does not handle > TCP-connections. > > > You can also try to recompile curl-loader with optimization and add more > mem for TCP, file descriptors like here: > http://curl-loader.sourceforge.net/doc/faq.html > > in the 7.2. How to run a really big load? > > Please, update me about your advances. > > > > -- > Truly, > Robert Iakobashvili, Ph.D. > ...................................................................... > Assistive technology that understands you > ...................................................................... > NOTE: This message, and any attached files, contain information that is privileged, confidential, proprietary or otherwise protected from disclosure. Any disclosure, copying or distribution of, or reliance upon, this message by anyone else is strictly prohibited. If you received this communication in error, please notify the sender immediately by reply e-mail message or by telephone to one of the numbers above and deleting it from your computer. Thank you. |
From: Robert I. <cor...@gm...> - 2008-10-29 20:42:00
|
Hi Alo Sinnathamby,

On Wed, Oct 29, 2008 at 10:04 PM, alo sinnathamby <asi...@ic...> wrote:
> Dear support team,
>
> CURL-LOADER VERSION: 0.46
>
> HW DETAILS: CPU/S and memory are must:
> x86_64 x86_64 x86_64 GNU/Linux
> OS: CentOS release 5.2 (Final)
> CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz
> Ram: 2 Gigabytes DDR2
>
> LINUX DISTRIBUTION and KERNEL (uname -r): CentOS release 5.2 (Final)
>
> GCC VERSION (gcc -v):
>
> COMPILATION AND MAKING OPTIONS (if defaults changed):
>
> COMMAND-LINE:
>
> CONFIGURATION-FILE (The most common source of problems):
> Place the file inline here:
>
Thank you for the PRF. You did not include your configuration file, which is essential for any judgement.

> QUESTION/ SUGGESTION/ PATCH:
>
> I have installed curl-loader on a testing client machine. i tried to
> test a home page of a website which is running with tomcat. the tomcat
> server system is:
> x86_64 x86_64 x86_64 GNU/Linux
> Operating System: CentOS release 5 (Final)
> CPU: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
> Ram: 4 Gigabytes DDR2
>
> when i run 10 000 requests at different time intervals, i am getting
> different results as given below,
>
> 10 000 - run1
> ===============
> Test total duration was 202 seconds and CAPS average 97:
> H/F
> Req:10000,1xx:0,2xx:9993,3xx:0,4xx:0,5xx:7,Err:0,T-Err:0,D:111ms,D-2xx:111ms,Ti:280361B/s,To:7425B/s
> H/F/S
> Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s
> Operations: Success Failed Timed out
> URL0:Login-GET 150 9993 0 7 0 0
>
> 10 000 - run2
> ==============
> Test total duration was 234 seconds and CAPS average 85:
> H/F
> Req:9773,1xx:0,2xx:9773,3xx:0,4xx:0,5xx:0,Err:227,T-Err:0,D:861ms,D-2xx:861ms,Ti:236413B/s,To:6264B/s
> H/F/S
> Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s
> Operations: Success Failed Timed out
> URL0:Login-GET 7 9773 0 227 0 0
>
> 10 000 - run3
> ==============
> Test total duration was 318 seconds and CAPS average 62:
> H/F
> Req:6183,1xx:0,2xx:6012,3xx:0,4xx:0,5xx:0,Err:3988,T-Err:0,D:6602ms,D-2xx:6602ms,Ti:106771B/s,To:2916B/s
> H/F/S
> Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s
> Operations: Success Failed Timed out
> URL0:Login-GET 0 6012 3 3988 0 0
>
> Could you please let us know as how to fine tune it to get a consistent
> result. FYI: both the client and server are in the local network.
>
I can only guess without the configuration file, that your server works really hard. Please, monitor the server-side performance.

The reasons for errors could be seen in the log file. Please look into the <your-name>.log file.

You can see the reasons for the errors when adding -v to the command line. Most probably, your server gets saturated and does not handle TCP-connections.

You can also try to recompile curl-loader with optimization and add more mem for TCP, file descriptors like here:
http://curl-loader.sourceforge.net/doc/faq.html
in the 7.2. How to run a really big load?

Please, update me about your advances.

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
...................................................................... |
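The "really big load" tuning in the FAQ that Robert points to is mostly about process and kernel limits rather than curl-loader itself. As a generic POSIX illustration (not curl-loader's actual start-up code), a load generator can raise its open-file-descriptor ceiling before opening thousands of sockets roughly like this:

    #include <stdio.h>
    #include <sys/resource.h>

    /* Sketch: raise the per-process file-descriptor limit; raising the hard
     * limit beyond its current value normally requires root privileges.    */
    static int raise_fd_limit(rlim_t wanted)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
            return -1;

        if (rl.rlim_cur < wanted) {
            rl.rlim_cur = wanted;
            if (rl.rlim_max < wanted)
                rl.rlim_max = wanted;
            if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
                perror("setrlimit");
                return -1;
            }
        }
        return 0;
    }

The TCP memory side (socket buffer and related kernel settings the email refers to as "more mem for TCP") is covered in section 7.2 of the FAQ itself.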
From: alo s. <asi...@ic...> - 2008-10-29 20:04:47
|
Dear support team, CURL-LOADER VERSION: 0.46 HW DETAILS: CPU/S and memory are must: x86_64 x86_64 x86_64 GNU/Linux OS: CentOS release 5.2 (Final) CPU: Intel(R) Core(TM)2 Duo CPU E4600 @ 2.40GHz Ram: 2 Gigabytes DDR2 LINUX DISTRIBUTION and KERNEL (uname -r): CentOS release 5.2 (Final) GCC VERSION (gcc -v): COMPILATION AND MAKING OPTIONS (if defaults changed): COMMAND-LINE: CONFIGURATION-FILE (The most common source of problems): Place the file inline here: DOES THE PROBLEM AFFECT: COMPILATION? No LINKING? No EXECUTION? No OTHER (please specify)? The results vary from time to time.. Have you run $make cleanall prior to $make ? No DESCRIPTION: QUESTION/ SUGGESTION/ PATCH: I have installed curl-loader on a testing client machine. i tried to test a home page of a website which is running with tomcat. the tomcat server system is: x86_64 x86_64 x86_64 GNU/Linux Operating System: CentOS release 5 (Final) CPU: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz Ram: 4 Gigabytes DDR2 when i run 10 000 requests at different time intervals, i am getting different results as given below, 10 000 - run1 =============== Test total duration was 202 seconds and CAPS average 97: H/F Req:10000,1xx:0,2xx:9993,3xx:0,4xx:0,5xx:7,Err:0,T-Err:0,D:111ms,D-2xx:111ms,Ti:280361B/s,To:7425B/s H/F/S Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s Operations: Success Failed Timed out URL0:Login-GET 150 9993 0 7 0 0 10 000 - run2 ============== Test total duration was 234 seconds and CAPS average 85: H/F Req:9773,1xx:0,2xx:9773,3xx:0,4xx:0,5xx:0,Err:227,T-Err:0,D:861ms,D-2xx:861ms,Ti:236413B/s,To:6264B/s H/F/S Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s Operations: Success Failed Timed out URL0:Login-GET 7 9773 0 227 0 0 10 000 - run3 ============== Test total duration was 318 seconds and CAPS average 62: H/F Req:6183,1xx:0,2xx:6012,3xx:0,4xx:0,5xx:0,Err:3988,T-Err:0,D:6602ms,D-2xx:6602ms,Ti:106771B/s,To:2916B/s H/F/S Req:0,1xx:0,2xx:0,3xx:0,4xx:0,5xx:0,Err:0,T-Err:0,D:0ms,D-2xx:0ms,Ti:0B/s,To:0B/s Operations: Success Failed Timed out URL0:Login-GET 0 6012 3 3988 0 0 Could you please let us know as how to fine tune it to get a consistent result. FYI: both the client and server are in the local network. Greatly appreciate your help. Thanks, Alo Sinnathamby Architect NOTE: This message, and any attached files, contain information that is privileged, confidential, proprietary or otherwise protected from disclosure. Any disclosure, copying or distribution of, or reliance upon, this message by anyone else is strictly prohibited. If you received this communication in error, please notify the sender immediately by reply e-mail message or by telephone to one of the numbers above and deleting it from your computer. Thank you. |
From: Robert I. <cor...@gm...> - 2008-10-29 12:29:46
|
Hi Matt, On Wed, Oct 29, 2008 at 1:26 PM, Matt Love <mat...@av...> wrote: > REQUEST_TYPE=POST > > wireshark sees this: > > 761.972165 Vmware_d6:4b:dd -> Broadcast ARP Who has 192.168.2.56? Tell > 192.168.2.71 > > 761.972667 Vmware_b9:1e:c2 -> Vmware_d6:4b:dd ARP 192.168.2.56 is at > 00:0c:29:b9:1e:c2 > > 761.972672 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [SYN] Seq=0 > Win=5840 Len=0 MSS=1460 TSV=167834233 TSER=0 WS=7 > > 761.972832 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [SYN, ACK] Seq=0 > Ack=1 Win=5792 Len=0 MSS=1460 TSV=167910328 TSER=167834233 WS=7 > > 761.972854 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [ACK] Seq=1 Ack=1 > Win=5888 Len=0 TSV=167834235 TSER=167910328 > > 761.973090 192.168.2.71 -> 192.168.2.56 TCP [TCP segment of a reassembled > PDU] > > 761.973265 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [ACK] Seq=1 > Ack=212 Win=6912 Len=0 TSV=167910329 TSER=167834235 > > 761.973660 192.168.2.56 -> 192.168.2.71 HTTP HTTP/1.1 100 Continue > > 761.973665 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [ACK] Seq=212 > Ack=26 Win=5888 Len=0 TSV=167834236 TSER=167910329 > > 761.973907 192.168.2.71 -> 192.168.2.56 HTTP PUT > /nomadesk/index.phpTask?Navigator::Task=ReceiveMessage HTTP/1.1 > > 761.974298 192.168.2.56 -> 192.168.2.71 HTTP HTTP/1.1 405 Method Not > Allowed (text/html) > > 761.974431 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [FIN, ACK] > Seq=570 Ack=595 Win=7936 Len=0 TSV=167910330 TSER=167834236 > > 761.974435 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [ACK] Seq=595 > Ack=571 Win=7040 Len=0 TSV=167834237 TSER=167910330 > > 761.974609 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [FIN, ACK] > Seq=595 Ack=571 Win=7040 Len=0 TSV=167834237 TSER=167910330 > > 761.974734 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [ACK] Seq=571 > Ack=596 Win=7936 Len=0 TSV=167910330 TSER=167834237 > > In the sourceforge TODO doc for curl-loader I see that "transparent support > for POST-ing some file, e.g. a SOAP file" is on the TODO list for the next > beta. This seems to be exactly what I need. So would there be a > non-transparent way (workaround) to achieve this same thing in the current > release? > > What we see from the wireshark trace is that the first HTTP is most probably POST or PUT (you may wish to use the "Decode AS" function of wireshark GUI or look at the capture text strings) with subsequent 100 Continue and further PUT HTTP request attempt, responded by 405 + more decoded Could you, please, either attach the capture or send the HTTP-decoded stream? Thanks. Robert |
From: Matt L. <mat...@av...> - 2008-10-29 11:26:23
|
Below is a conf file I have tried to use (unsuccessfully) to post an xml file.

########### GENERAL SECTION ################################
BATCH_NAME= nomadeskpoll
CLIENTS_NUM_MAX=1 # Same as CLIENTS_NUM
INTERFACE=eth0
NETMASK=16
IP_ADDR_MIN= 192.168.2.71
IP_ADDR_MAX= 192.168.2.128
CYCLES_NUM=1
URLS_NUM= 1

########### URL SECTION ####################################
URL=http://192.168.2.56/nomadesk/index.phpTask?Navigator::Task=ReceiveMessage
URL_SHORT_NAME="Poll"
REQUEST_TYPE=POST
UPLOAD_FILE=./conf-examples/poll2.xml
TIMER_URL_COMPLETION = 5000
TIMER_AFTER_URL_SLEEP = 5000

The xml file looks like this:

<?xml version="1.0" encoding="utf-8" standalone="yes"?><Poll><Accounts><Account><AccountName>nmua000014</AccountName><Password>aventiv23</Password></Account></Accounts><LocationID>bf42425f-fcef-11a3-aaab-199121138cb3</LocationID><ClientVersion>2.6.0.13</ClientVersion><CreationTstamp>10/28/2008 7:00:30 AM</CreationTstamp></Poll>

And my command line:

curl-loader -f ./conf-examples/nomadeskpoll.conf

wireshark sees this:

761.972165 Vmware_d6:4b:dd -> Broadcast ARP Who has 192.168.2.56? Tell 192.168.2.71
761.972667 Vmware_b9:1e:c2 -> Vmware_d6:4b:dd ARP 192.168.2.56 is at 00:0c:29:b9:1e:c2
761.972672 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=167834233 TSER=0 WS=7
761.972832 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [SYN, ACK] Seq=0 Ack=1 Win=5792 Len=0 MSS=1460 TSV=167910328 TSER=167834233 WS=7
761.972854 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [ACK] Seq=1 Ack=1 Win=5888 Len=0 TSV=167834235 TSER=167910328
761.973090 192.168.2.71 -> 192.168.2.56 TCP [TCP segment of a reassembled PDU]
761.973265 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [ACK] Seq=1 Ack=212 Win=6912 Len=0 TSV=167910329 TSER=167834235
761.973660 192.168.2.56 -> 192.168.2.71 HTTP HTTP/1.1 100 Continue
761.973665 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [ACK] Seq=212 Ack=26 Win=5888 Len=0 TSV=167834236 TSER=167910329
761.973907 192.168.2.71 -> 192.168.2.56 HTTP PUT /nomadesk/index.phpTask?Navigator::Task=ReceiveMessage HTTP/1.1
761.974298 192.168.2.56 -> 192.168.2.71 HTTP HTTP/1.1 405 Method Not Allowed (text/html)
761.974431 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [FIN, ACK] Seq=570 Ack=595 Win=7936 Len=0 TSV=167910330 TSER=167834236
761.974435 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [ACK] Seq=595 Ack=571 Win=7040 Len=0 TSV=167834237 TSER=167910330
761.974609 192.168.2.71 -> 192.168.2.56 TCP 47403 > http [FIN, ACK] Seq=595 Ack=571 Win=7040 Len=0 TSV=167834237 TSER=167910330
761.974734 192.168.2.56 -> 192.168.2.71 TCP http > 47403 [ACK] Seq=571 Ack=596 Win=7936 Len=0 TSV=167910330 TSER=167834237

In the sourceforge TODO doc for curl-loader I see that "transparent support for POST-ing some file, e.g. a SOAP file" is on the TODO list for the next beta. This seems to be exactly what I need. So would there be a non-transparent way (workaround) to achieve this same thing in the current release?

Thank you,
Matt Love

From: Robert Iakobashvili [mailto:cor...@gm...]
Sent: Wednesday, October 29, 2008 10:50 AM
To: curl-loader-devel
Subject: Re: xml post

Hi Matt,

On Tue, Oct 28, 2008 at 5:59 PM, Matt Love <mat...@av...> wrote:

I am trying to do an xml post with curl-loader similar to the curl commad below:

curl -F file=@/tmp/addexistingaccount.xml http://localhost/nomadesk/index.php?TaskNavigator::Task=ReceiveMessage

How is a similar command created in curl-loader .conf file? I have tried setting the UPLOAD_FILE tag, to no avail.

I am trying to guess, what you are doing, since PRF was not provided, which is normally including your configuration, command-line, etc
Could it be, that you missed that it should be POST or POST after GET?
You may wish to run your curl command line, collect the capture by wireshark and attach it.
There are several examples in curl-loader examples, like:
curl-loader/conf-examples/post-form-tokens-fr-file.conf
Still, without the PRF and the capture, it is just guessing.

Sincerely,
Robert Iakobashvili |
From: Robert I. <cor...@gm...> - 2008-10-29 09:50:18
|
Hi Matt,

On Tue, Oct 28, 2008 at 5:59 PM, Matt Love <mat...@av...> wrote:
> I am trying to do an xml post with curl-loader similar to the curl commad
> below:
>
> curl -F file=@/tmp/addexistingaccount.xml
> http://localhost/nomadesk/index.php?TaskNavigator::Task=ReceiveMessage
>
> How is a similar command created in curl-loader .conf file? I have tried
> setting the UPLOAD_FILE tag, to no avail.
>
I am trying to guess what you are doing, since the PRF was not provided; it normally includes your configuration, command-line, etc. Could it be that you missed that it should be POST or POST after GET?

You may wish to run your curl command line, collect the capture with wireshark and attach it.

There are several examples in curl-loader examples, like:
curl-loader/conf-examples/post-form-tokens-fr-file.conf

Still, without the PRF and the capture, it is just guessing.

Sincerely,
Robert Iakobashvili |
From: Robert I. <cor...@gm...> - 2008-10-28 16:49:54
|
Hi Sridhar,

On Tue, Oct 28, 2008 at 9:32 AM, Sridhar Gundubogula <sri...@re...> wrote:
> Hi All,
>
> When i was doing some research on open source projects that could replace
> Ixload to generate and simulate HTTP traffic on the client machine to
> connect to Apache webserver and report the results
>
I would recommend the nginx web-server as much more powerful.

> I found curl-loader an interesting tool to start and explore to
> replicate my test scenario what i was trying to do on Ixiaload. Im
> interested in using this tool so I played around config file to generate
> Http traffic to test number of connections to fetch couple of urls from
> Webserver which resulted me in posing few questions in mind towards its
> operation with my conf file below:
>
> ########### GENERAL SECTION ################################
> BATCH_NAME= http
> CLIENTS_NUM_MAX=1000 # Same as CLIENTS_NUM
> #CLIENTS_NUM_START=1
> #CLIENTS_RAMPUP_INC=1
> INTERFACE=eth2a
> NETMASK=24
> IP_ADDR_MIN= 10.10.1.101
> IP_ADDR_MAX= 10.10.1.101 #Actually - this is for self-control
> CYCLES_NUM=1
> URLS_NUM=1
>
> ########### URL SECTION ####################################
> URL=http://192.168.10.x/4k.html
> #URL=http://192.168.10.y/index1.html
> #URL=https://localhost/apache2-default/ACE-INSTALL.html
> #URL=https://localhost/ACE-INSTALL.html
> URL_SHORT_NAME="url-http"
> REQUEST_TYPE=GET
> TIMER_URL_COMPLETION =0 # In msec. When positive, Now it is enforced by cancelling url fetch on timeout
>
> 1. Can I make 1000 connections/sec with one client ?
>
> So am I following the right way inorder to achieve what im trying to
> accomplish as defined with question 1.
>
One client means one HTTP client with all TCP and HTTP communication, server delay, server application logic, etc. Normally one HTTP client, depending on your server and server application logic, can make 10-20 cycles per second. For your purpose you'll need 100-200 clients.

> 2. what are the factors or criteria that lets curl-loader knows when to
> stop the test.?
>
> The reason I asked above question is trying to uderstand whether
> curl-loader could successfully establish 1000 connections in a second if not
> how to varies it connection rate?
>
It makes either CYCLES_NUM number of cycles, if that is a positive value, or, when negative, it runs till you stop it by Control-C.

> What is the role of interval option -i in determining the connection rate.
>
> 3. when i look at the results what does Success really indicates from left
> and right column?
>
> For example, what does 151 1000 means under the Sucess column in results
> file.
>
> I appreciate answers for the above questions that will help me in making
> progress towards using this tool and making enhancements that fits to my
> test requirements. Please let me know if any additional details are
> required that would help me in assisting.
>
These questions are covered in the FAQs; please read them, e.g. at:
http://curl-loader.sourceforge.net/doc/faq.html

Take care,
Robert |
From: Matt L. <mat...@av...> - 2008-10-28 16:15:32
|
Hello,

I am trying to do an xml post with curl-loader similar to the curl command below:

curl -F file=@/tmp/addexistingaccount.xml http://localhost/nomadesk/index.php?TaskNavigator::Task=ReceiveMessage

How is a similar command created in a curl-loader .conf file? I have tried setting the UPLOAD_FILE tag, to no avail.

Best Regards,
Matt Love |
From: Sridhar G. <sri...@re...> - 2008-10-28 07:33:11
|
Hi All,

When I was doing some research on open source projects that could replace Ixload to generate and simulate HTTP traffic on the client machine to connect to an Apache webserver and report the results, I found curl-loader an interesting tool to start with and explore to replicate the test scenario I was trying to do on Ixiaload. I'm interested in using this tool, so I played around with the config file to generate HTTP traffic to test the number of connections to fetch a couple of urls from the webserver, which resulted in me posing a few questions towards its operation with my conf file below:

########### GENERAL SECTION ################################
BATCH_NAME= http
CLIENTS_NUM_MAX=1000 # Same as CLIENTS_NUM
#CLIENTS_NUM_START=1
#CLIENTS_RAMPUP_INC=1
INTERFACE=eth2a
NETMASK=24
IP_ADDR_MIN= 10.10.1.101
IP_ADDR_MAX= 10.10.1.101 #Actually - this is for self-control
CYCLES_NUM=1
URLS_NUM=1

########### URL SECTION ####################################
URL=http://192.168.10.x/4k.html
#URL=http://192.168.10.y/index1.html
#URL=https://localhost/apache2-default/ACE-INSTALL.html
#URL=https://localhost/ACE-INSTALL.html
URL_SHORT_NAME="url-http"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION =0 # In msec. When positive, Now it is enforced by cancelling url fetch on timeout

1. Can I make 1000 connections/sec with one client? Am I following the right way in order to achieve what I'm trying to accomplish as defined in question 1?

2. What are the factors or criteria that let curl-loader know when to stop the test? The reason I asked the above question is to understand whether curl-loader could successfully establish 1000 connections in a second, and if not, how to vary its connection rate. What is the role of the interval option -i in determining the connection rate?

3. When I look at the results, what does Success really indicate in the left and right columns? For example, what does 151 1000 mean under the Success column in the results file?

I appreciate answers for the above questions that will help me in making progress towards using this tool and making enhancements that fit my test requirements. Please let me know if any additional details are required that would help me in assisting.

Thanks,
Sridhar |
From: Robert I. <cor...@gm...> - 2008-10-27 19:32:32
|
On Mon, Oct 27, 2008 at 8:38 PM, Gary Fitts <ga...@in...> wrote:
> Thanks for replying, Robert.
>
> > Actually, when cctx->post_data is allocated,
> > it should be set zero to the first char, like
> > cctx->post_data[0] = '\0';
>
> Yes, cctx->post_data[0] is set to zero, and that's the problem. This
> causes the condition "else if (cctx->post_data && cctx->post_data[0])"
> to fail, and to drop through to the "post_data is NULL" error, which
> aborts the batch run.
>
> When I comment out the second part: "else if (cctx->post_data /* &&
> cctx->post_data[0] */)", we call init_client_url_post_data, and this
> seems to work.
>
We had one more user suggesting such a fix. You may do it, and I will try to remember why it was actually added. Thanks.

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
www.ghotit.com
Assistive technology that understands you
...................................................................... |
From: Gary F. <ga...@in...> - 2008-10-27 18:38:47
|
Thanks for replying, Robert.

> Actually, when cctx->post_data is allocated,
> it should be set zero to the first char, like
> cctx->post_data[0] = '\0';

Yes, cctx->post_data[0] is set to zero, and that's the problem. This causes the condition "else if (cctx->post_data && cctx->post_data[0])" to fail, and to drop through to the "post_data is NULL" error, which aborts the batch run.

When I comment out the second part: "else if (cctx->post_data /* && cctx->post_data[0] */)", we call init_client_url_post_data, and this seems to work.

if (url->req_type == HTTP_REQ_TYPE_POST)
  {
    /* Make POST, using post buffer, if requested. */
    if (url->upload_file && url->upload_file_ptr &&
        (!cctx->post_data || !cctx->post_data[0]))
      {
        curl_easy_setopt(handle, CURLOPT_POST, 1);
      }
    else if (cctx->post_data && cctx->post_data[0])
      {
        /* Sets POST as the HTTP request method using either:
           - POST-fields;
           - multipart form-data as in RFC 1867;
        */
        if (init_client_url_post_data (cctx, url) == -1)
          {
            fprintf (stderr, "%s - error: init_client_url_post_data() failed. \n", __func__);
            return -1;
          }
      }
    else
      {
        fprintf (stderr, "%s - error: post_data is NULL.\n", __func__);
        return -1;
      }
  }

Thanks,
Gary |
From: Robert I. <cor...@gm...> - 2008-10-27 18:25:51
|
Hi Gary,

On Mon, Oct 27, 2008 at 8:13 PM, Gary Fitts <ga...@in...> wrote:
> Hello Robert,
>
> I have my changes working now, although I want to test them a bit more
> before letting them out. I'm having one problem, though. In the
> application I'm testing, the very first URL is a POST request using a
> FORM_STRING.

Great!

> This seems to cause a failure in loader.c, setup_curl_handle_appl().
> The code is copied below.
>
> When this code is called, cctx->post_data has been allocated but not
> initialized, so it's an empty string. This causes the condition "else
> if (cctx->post_data && cctx->post_data[0])" to fail, and we drop
> through to the "post_data is NULL" error.
>
> I've fixed this by commenting out the second condition:
> else if (cctx->post_data /* && cctx->post_data[0] */)
>
> So far this seems to work for me, but I wonder if I'm missing
> something that might cause an error somewhere else.
>
Actually, when cctx->post_data is allocated, it should be set zero to the first char, like
cctx->post_data[0] = '\0';
You may wish to add this fix to your patches.

Thanks.
Robert |
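A minimal sketch of the allocation-time initialization Robert describes, so that a fresh post_data buffer always starts out as a valid empty string (the helper name and its use are illustrative, not curl-loader's actual allocation path):

    #include <stdlib.h>

    /* Sketch: allocate the per-client POST buffer zero-filled, so that
     * post_data[0] == '\0' from the outset; an explicit
     * post_data[0] = '\0' right after a malloc() would achieve the same. */
    static char *alloc_post_data(size_t size)
    {
        return calloc(1, size);   /* NULL on allocation failure */
    }

Whether the extra cctx->post_data[0] check should then stay in setup_curl_handle_appl() is the separate question discussed in this thread.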