Thread: Re: [curl-loader-devel] Output understanding
From: Robert I. <cor...@gm...> - 2007-05-04 13:30:02
Aleks,

Something went wrong with the e-mail, and I got an empty body.
Please re-submit it.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Aleksandar L. <al-...@no...> - 2007-05-04 15:14:25
On Fre 04.05.2007 16:29, Robert Iakobashvili wrote:
>Aleks,
>
>Something went wrong with the e-mail, and I got an empty body.
>Please re-submit it.

I still have it; here is the message:

---
I have read http://curl-loader.sourceforge.net/doc/faq.html#statistics
but there are still some open questions:

1.) Beyond CAPS at runtime, is an RPS (requests per second) summary
    available after the run is finished?

2.) Does curl-loader have the ability to handle 'sessions' like
    httperf? http://www.hpl.hp.com/research/linux/httperf/httperf-man.txt

3.) Is there a 'summary'-like output such as some of the other
    benchmark tools have?

Maybe I have overlooked or misunderstood something; in that case,
please point me the right way ;-)
---

I was unable to look at the mail in the sf mail archive:
http://sourceforge.net/mailarchive/forum.php?forum_name=curl-loader-devel
=>
http://sourceforge.net/mailarchive/forum.php?thread_name=20070504081842.GA20907%40none.at&forum_name=curl-loader-devel

Do you have more luck?!

BR
Aleks
From: Robert I. <cor...@gm...> - 2007-05-08 12:29:00
Aleks,

FYI: I have added the following to our TODO list:

h. Aleksandar Lazic <al-...@no...> suggested: to make configurable the
   number of TCP connections that a virtual client can use (the default
   is up to 5), whether the connections are persistent or closed after
   each request-response and, if persistent, the number of
   request-responses after which the connection is to be closed and
   re-established.

Another proposal of Aleks is to make HTTP 1.1 or 1.0 configurable; we
will not implement that unless it becomes a feature required by many
users.

We can add a configurable CURLOPT_MAXCONNECTS number, play with the
CURLOPT_FRESH_CONNECT/CURLOPT_FORBID_REUSE options, and optionally
enhance them to make configurable the number of request-responses via
a tcp-connection until the connection is renewed.

Thank you for your suggestions,

--
Sincerely,
Robert Iakobashvili
From: Aleksandar L. <al-...@no...> - 2007-05-08 13:20:46
On Die 08.05.2007 15:28, Robert Iakobashvili wrote:
>Aleks,
>
>FYI: I have added the following to our TODO list:
[snipp]
>Thank you for your suggestions,

You're welcome ;-)

Have you thought about the timing thing in the faq/doc/output?

cheers
Aleks
From: Robert I. <cor...@gm...> - 2007-05-04 16:42:32
On 5/4/07, Aleksandar Lazic <al-...@no...> wrote:
> 1.) Beyond CAPS at runtime, is an RPS (requests per second) summary
>     available after the run is finished?

Yes. Please read the FAQs, e.g.
http://curl-loader.sourceforge.net/doc/faq.html#statistics
or the README file in the tarball.

> 2.) Does curl-loader have the ability to handle 'sessions' like
>     httperf? http://www.hpl.hp.com/research/linux/httperf/httperf-man.txt

No, but since per-client statistics are provided in the client dump
file <batch-name>.ctx and the rate of adding clients is known, such
data may be post-processed.

> 3.) Is there a 'summary'-like output such as some of the other
>     benchmark tools have?

Yes. Please read the FAQs, e.g.
http://curl-loader.sourceforge.net/doc/faq.html#statistics
or the README file in the tarball.

> I was unable to look at the mail in the sf mail archive:

Early today I filed the 6th complaint/Support Request on the issue of
the mail archives. They are improving something, but suddenly it
breaks.

Thank you.

--
Sincerely,
Robert Iakobashvili
From: Aleksandar L. <al-...@no...> - 2007-05-05 07:09:27
On Fre 04.05.2007 19:42, Robert Iakobashvili wrote:
>On 5/4/07, Aleksandar Lazic <al-...@no...> wrote:
>>1.) Beyond CAPS at runtime, is an RPS (requests per second) summary
>>    available after the run is finished?
>
>Yes. Please read the FAQs, e.g.
>http://curl-loader.sourceforge.net/doc/faq.html#statistics
>or the README file in the tarball.

OK, I have read it again, but maybe I don't understand it; sorry.

---from FAQ
Currently HTTP/HTTPS statistics includes the following counters:
- requests num;
- 2xx success num;
- 3xx redirects num;
- client 4xx errors num;
- server 5xx errors num;
- other errors num, like resolving, tcp-connect, server closing or
  empty responses number;
- average application server Delay (msec), estimated as the time
  between HTTP request and HTTP response without taking into the
  account network latency (RTT);
- average application server Delay for 2xx (success) HTTP-responses,
  as above, but only for 2xx responses. The motivation for that is
  that 3xx redirections and 5xx server errors/rejects may not
  necessarily provide a true indication of a testing server working
  functionality.
- throughput out (batch average);
- throughput in (batch average);
---

Do you mean: requests / $RUNTIME => RPS? Where can I get this value?
I don't see any time value which I could use to calculate which
requests were done in the same $TIME (e.g. second/minute/...).

Sorry for my lack of understanding; maybe I'm too spoiled by the other
tools and am thinking in a wrong way.

>>2.) Does curl-loader have the ability to handle 'sessions' like
>>    httperf? http://www.hpl.hp.com/research/linux/httperf/httperf-man.txt
>
>No, but since per-client statistics are provided in the client dump
>file <batch-name>.ctx and the rate of adding clients is known, such
>data may be post-processed.

Here I have the same problem: where is the time value which I can use
to calculate sessions per second?

>>3.) Is there a 'summary'-like output such as some of the other
>>    benchmark tools have?
>
>Yes. Please read the FAQs, e.g.
>http://curl-loader.sourceforge.net/doc/faq.html#statistics
>or the README file in the tarball.

See above.

Cheers
Aleks
From: Robert I. <cor...@gm...> - 2007-05-05 15:47:00
On 5/5/07, Aleksandar Lazic <al-...@no...> wrote:
> Do you mean: requests / $RUNTIME => RPS? Where can I get this value?
> I don't see any time value which I could use to calculate which
> requests were done in the same $TIME (e.g. second/minute/...).

See the FAQs:
http://curl-loader.sourceforge.net/doc/faq.html#statistics

It is stated:

"Some strings from the file:
-------------------------------------------------------------------------------
Run-Time,Appl,Clients,Req,2xx,3xx,4xx,5xx,Err,Delay,Delay-2xx,Thr-In,Thr-Out
2, Appl, 100, 155, 0, 96, 0, 0, 1154, 1154, 2108414, 15538
2, Sec-Appl, 100, 0, 0, 0, 0, 0, 0, 0, 0, 0
4, Appl, 100, 75, 32, 69, 0, 0, 1267, 1559, 1634656, 8181
4, Sec-Appl, 100, 0, 0, 0, 0, 0, 0, 0, 0, 0
(cut here)
36, Appl, 39, 98, 35, 58, 0, 0, 869, 851, 1339168, 11392
36, Sec-Appl, 39, 0, 0, 0, 0, 0, 0, 0, 0, 0
38, Appl, 3, 91, 44, 62, 0, 0, 530, 587, 1353899, 10136
38, Sec-Appl, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0
*, *, *, *, *, *, *, *, *, *, *, *
Run-Time,Appl,Clients,Req,2xx,3xx,4xx,5xx,Err,Delay,Delay-2xx,Thr-In,Thr-Out
38, Appl, 0, 2050, 643, 1407, 0, 213, 725, 812, 1610688, 11706
38, Sec-Appl, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
-------------------------------------------------------------------------------
The bottom strings after the asterisks are the final averages."

So: "Run-Time" is the time in seconds, "Req" is the number of
requests, and the bottom strings after the asterisks are really the
numbers for the final averages. Moreover, using this file one can make
a graph of CAPS changes over time.

Indeed, the time scale has been missing from the FAQs - a place to
improve on our side :) Thank you for the point.

> >> 2.) Does curl-loader have the ability to handle 'sessions' like
> >>     httperf? http://www.hpl.hp.com/research/linux/httperf/httperf-man.txt
> >
> > No, but since per-client statistics are provided in the client dump
> > file <batch-name>.ctx and the rate of adding clients is known, such
> > data may be post-processed.
>
> Here I have the same problem: where is the time value which I can use
> to calculate sessions per second?

It is not clear what a "session" is. The "Clients" column in the same
.txt file contains the real number of clients. If the number of
clients is increasing, you can graph, calculate, etc., being in
possession of both time and client numbers.

There are two modes of increasing the number of clients: automatic,
managed by the CLIENTS_INITIAL_INC value, and manual from the console.
You may wish to look for that in the FAQs.

--
Sincerely,
Robert Iakobashvili
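[Editor's note: a minimal sketch (not part of the original thread) of the post-processing discussed above, assuming the comma-separated layout of the <batch-name>.txt excerpt quoted in the mail. Treating the "Req" field of the final summary row as a cumulative total of requests is an assumption of this sketch.]

```python
# Hypothetical post-processing of a curl-loader <batch-name>.txt
# statistics file: derive an overall requests-per-second figure from
# the "Run-Time" (seconds) and "Req" columns described in the FAQ.

SAMPLE = """\
Run-Time,Appl,Clients,Req,2xx,3xx,4xx,5xx,Err,Delay,Delay-2xx,Thr-In,Thr-Out
2, Appl, 100, 155, 0, 96, 0, 0, 1154, 1154, 2108414, 15538
4, Appl, 100, 75, 32, 69, 0, 0, 1267, 1559, 1634656, 8181
38, Appl, 0, 2050, 643, 1407, 0, 213, 725, 812, 1610688, 11706
"""

def overall_rps(text):
    """Return requests/second computed from the last 'Appl' data row,
    assuming its Req field is the cumulative request count."""
    last = None
    for line in text.splitlines():
        fields = [f.strip() for f in line.split(",")]
        if len(fields) > 3 and fields[0].isdigit() and fields[1] == "Appl":
            last = (int(fields[0]), int(fields[3]))  # (run-time s, requests)
    run_time, requests = last
    return requests / run_time

print(round(overall_rps(SAMPLE), 1))  # 2050 requests over 38 s -> 53.9
```

The same loop can be extended to plot CAPS or the Clients column over time, which is what the thread suggests doing by hand.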
From: Aleksandar L. <al-...@no...> - 2007-05-05 17:33:20
On Sam 05.05.2007 17:46, Robert Iakobashvili wrote:
>On 5/5/07, Aleksandar Lazic <al-...@no...> wrote:
>> Do you mean: requests / $RUNTIME => RPS? Where can I get this value?
>> I don't see any time value which I could use to calculate which
>> requests were done in the same $TIME (e.g. second/minute/...).
>
>See the FAQs:
>http://curl-loader.sourceforge.net/doc/faq.html#statistics
>
>It is stated:
[snipp]
>So: "Run-Time" is the time in seconds, "Req" is the number of
>requests, and the bottom strings after the asterisks are really the
>numbers for the final averages.

Aha, OK, thanks for removing the fog from my eyes; I thought I had
overlooked or misunderstood something ;-)

>> Here I have the same problem: where is the time value which I can
>> use to calculate sessions per second?
>
>It is not clear what a "session" is.

A session, as httperf defines it and as many $PEOPLE mean it, is the
whole set of requests which are needed to get the full web page
loaded, e.g.:

---
index.html
x*.css
x*.gif(s)
x*.js
.
.
.
---

This means that when you plan to have 100 concurrent users and your
site has, let's say, 4 components, then the server must answer 400
requests; you get my point?

>"Clients" column in the same .txt file contains the real number of
>clients. If the number of clients is increasing, you can graph,
>calculate, etc., being in possession of both time and client numbers.
>
>There are two modes of increasing the number of clients: automatic,
>managed by the CLIENTS_INITIAL_INC value, and manual from the console.
>You may wish to look for that in the FAQs.

Thanks for the explanation; I think I have now understood this ;-)

Btw: should I still answer to the list and to you directly, or only to
the list?

cheers
Aleks
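[Editor's note: the session arithmetic above, made concrete as a small sketch. The numbers are the illustrative ones from the mail (100 users, 4 page components); the request rate used below is hypothetical, not measured output.]

```python
# Sessions-per-second derived from requests-per-second, following the
# httperf-style definition of a session given in the mail: one session
# = all requests needed to load the full page.

concurrent_users = 100
components_per_page = 4        # e.g. index.html + css + gif + js

requests_per_session = components_per_page
total_requests = concurrent_users * requests_per_session
print(total_requests)          # 400: what the server must answer

# If a load tool reports only a request rate, the session rate follows
# by division (hypothetical rate for illustration):
measured_rps = 540.0
sessions_per_second = measured_rps / requests_per_session
print(sessions_per_second)     # 135.0
```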
From: Robert I. <cor...@gm...> - 2007-05-05 19:55:29
On 5/5/07, Aleksandar Lazic <al-...@no...> wrote:
> A session, as httperf defines it and as many $PEOPLE mean it, is the
> whole set of requests which are needed to get the full web page
> loaded.
> This means that when you plan to have 100 concurrent users and your
> site has, let's say, 4 components, then the server must answer 400
> requests; you get my point?

OK, an HTTP session or HTTP business transaction.

If this is a single url, one curl-loader client is mapped 1:1 to a
libcurl handle. A libcurl handle, if my understanding is correct, has
up to 5 sockets/connections.

Fetching a single url may lead to multiple requests: GETs, POSTs,
redirections, more GETs and POSTs. They are all reflected in the
per-client statistics.

Talking about 3-5 different independent urls (without redirections),
one can specify them all in a batch configuration file (test plan),
and they will all be fetched. We call it a cycle of fetching. There
are some statistics in the clients dump *.ctx file with the numbers
of cycles, etc.

Sure, it may be improved/enhanced. Which counters do you think may be
of interest here?

--
Sincerely,
Robert Iakobashvili
From: Aleksandar L. <al-...@no...> - 2007-05-06 07:21:36
On Sam 05.05.2007 22:55, Robert Iakobashvili wrote:
>On 5/5/07, Aleksandar Lazic <al-...@no...> wrote:
>
>> A session, as httperf defines it and as many $PEOPLE mean it, is the
>> whole set of requests which are needed to get the full web page
>> loaded.
>> This means that when you plan to have 100 concurrent users and your
>> site has, let's say, 4 components, then the server must answer 400
>> requests; you get my point?
>
>OK, an HTTP session or HTTP business transaction.

Yep, you got the point ;-)

>If this is a single url, one curl-loader client is mapped 1:1 to a
>libcurl handle. A libcurl handle, if my understanding is correct, has
>up to 5 sockets/connections.

This depends on HTTP/1.0 or HTTP/1.1:

---
HTTP/1.0 without a 'Connection' header (= not persistent): one
$URL/request is one connection.

HTTP/1.0 with a 'Connection' header (= persistent): one $URL/request
need not be one connection.

HTTP/1.1 default (= persistent): one $URL/request need not be one
connection.

HTTP/1.1 with 'Connection: close' (= not persistent): one $URL/request
is one connection.
---

I think you know this, but just for clarification ;-) And because of
this, the behaviour of client and server differs per session.

What I mean is: the benchmark tool should be able to handle all these
4 scenarios in a proper way, and also to force some
$COMPRESSION-response from the server; this is possible with
curl-loader, afaik with the CUSTOM_HEADER ;-)

I don't know the best way to give the customer the possibility to say
how many requests should go through one connection; there are many
possibilities ;-)

>Fetching a single url may lead to multiple requests: GETs, POSTs,
>redirections, more GETs and POSTs. They are all reflected in the
>per-client statistics.

Yep.

>Talking about 3-5 different independent urls (without redirections),
>one can specify them all in a batch configuration file (test plan),
>and they will all be fetched. We call it a cycle of fetching. There
>are some statistics in the clients dump *.ctx file with the numbers
>of cycles, etc. Sure, it may be improved/enhanced.
>
>Which counters do you think may be of interest here?

Well, I don't know how modular and flexible curl-loader is, or how
much time you have, but I will try to show some options.

1.) New config options, e.g.:

    UAS_SESSION_MODE=([Yy]|http1|http1-pers|http11|http11-nopers)
    UAS_SESSION_CLIENTS=$NUM
    UAS_SESSION_PER_CLIENT_REQUESTS=$NUM <= only used if
        SESSION_MODE is http1-pers|http11
    UAS_SESSION_FILE=$PATH <= the urls for one session

2.) curl-loader assumes that all requests to one server should be
    counted as a session, and after the run curl-loader calculates the
    RPS and divides it by the $URLS; it's not the best, but I think a
    possible way.

3.) There is an extra counter and a simple flag like
    UAS_URL_SESSION=(1|0), and after the run curl-loader calculates,
    based on this flag and counter, how many sessions were possible.

4.) There could be some other ways ;-))

I think httperf is nice for such things and a good input for some of
this session stuff ;-), but its development has stopped, it looks to
me, and it is based on select().

What do you think about this?!

Cheers
Aleks
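[Editor's note: the four persistence scenarios listed above, encoded as a small decision function. This is an illustrative sketch of the rules stated in the mail, not curl-loader code.]

```python
# Whether an HTTP connection stays open after a response, per the four
# scenarios in the mail: HTTP/1.0 closes by default and needs
# "Connection: keep-alive" to persist; HTTP/1.1 persists by default
# and needs "Connection: close" to stop.

def connection_persists(http_version, connection_header=None):
    """Return True if the connection stays open after a response."""
    hdr = (connection_header or "").lower()
    if http_version == "1.0":
        return hdr == "keep-alive"
    if http_version == "1.1":
        return hdr != "close"
    raise ValueError("unsupported HTTP version: %s" % http_version)

print(connection_persists("1.0"))                # False
print(connection_persists("1.0", "keep-alive"))  # True
print(connection_persists("1.1"))                # True
print(connection_persists("1.1", "close"))       # False
```

A load tool covering all four cases must let the user pick both the protocol version and the Connection header per batch, which is what the proposed UAS_SESSION_MODE option would select.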
From: Robert I. <cor...@gm...> - 2007-05-06 11:12:37
On 5/6/07, Aleksandar Lazic <al-...@no...> wrote:
> >If this is a single url, one curl-loader client is mapped 1:1 to a
> >libcurl handle. A libcurl handle, if my understanding is correct,
> >has up to 5 sockets/connections.
>
> This depends on HTTP/1.0 or HTTP/1.1

I do not think that we will support 1.0.

> HTTP/1.1 default (= persistent): one $URL/request need not be one
> connection.
> HTTP/1.1 with 'Connection: close' (= not persistent): one
> $URL/request is one connection.

curl-loader has an option -r, which is translated into a libcurl
option that arranges a new connection for each request. Another option
is to customize "Connection: close", so that the server will close the
connections. There are also KA options with Keep-Alive numbers and
timeouts, etc.

> The benchmark tool should be able to handle all these 4 scenarios in
> a proper way, and also to force some $COMPRESSION-response from the
> server; this is possible with curl-loader, afaik with the
> CUSTOM_HEADER ;-)

Agree.

> Well, I don't know how modular and flexible curl-loader is, or how
> much time you have, but I will try to show some options.
>
> 1.) New config options, e.g.:
>
>     UAS_SESSION_MODE=([Yy]|http1|http1-pers|http11|http11-nopers)
>     UAS_SESSION_CLIENTS=$NUM
>     UAS_SESSION_PER_CLIENT_REQUESTS=$NUM <= only used if
>         SESSION_MODE is http1-pers|http11
>     UAS_SESSION_FILE=$PATH <= the urls for one session
>
> 2.) curl-loader assumes that all requests to one server should be
>     counted as a session, and after the run curl-loader calculates
>     the RPS and divides it by the $URLS; it's not the best, but I
>     think a possible way.
>
> 3.) There is an extra counter and a simple flag like
>     UAS_URL_SESSION=(1|0), and after the run curl-loader calculates,
>     based on this flag and counter, how many sessions were possible.
>
> 4.) There could be some other ways ;-))
>
> I think httperf is nice for such things and a good input for some of
> this session stuff ;-), but its development has stopped, it looks to
> me, and it is based on select().

Thank you very much, Aleks. I will add it to our TODO list, but I
don't know when we will handle it.

Currently, we are re-structuring the whole approach by going from the
sections LOGIN, UAS and LOGOFF to a planar and flexible structure. The
current structure does not enable more than a single login and is not
flexible enough.

In a new version, each url could be of any type (GET/POST/PUT),
contain its own credentials, have the number of cycles configurable at
the url level, etc.

At a later stage we are planning to add customized per-url analyses of
the results based on response codes, response body analyses,
conditions, etc.

We also have performance issues like memory-cached allocations,
multi-core/SMP optimization, etc.

Thank you very much for your suggestions.

--
Sincerely,
Robert Iakobashvili
From: Aleksandar L. <al-...@no...> - 2007-05-06 13:30:15
On Son 06.05.2007 13:12, Robert Iakobashvili wrote:
>On 5/6/07, Aleksandar Lazic <al-...@no...> wrote:
>
>> Well, I don't know how modular and flexible curl-loader is, or how
>> much time you have, but I will try to show some options.
[snipp]
>> I think httperf is nice for such things and a good input for some of
>> this session stuff ;-), but its development has stopped, it looks to
>> me, and it is based on select().
>
>Thank you very much, Aleks. I will add it to our TODO list, but I
>don't know when we will handle it.

That's a deal, thanks ;-)

>Currently, we are re-structuring the whole approach by going from the
>sections LOGIN, UAS and LOGOFF to a planar and flexible structure.
>The current structure does not enable more than a single login and is
>not flexible enough.

I'm sure that many sites only allow login with cookies, and therefore
it will be nice if curl-loader is able to handle this ;-)

>In a new version, each url could be of any type (GET/POST/PUT),
>contain its own credentials, have the number of cycles configurable
>at the url level, etc.
>
>At a later stage we are planning to add customized per-url analyses
>of the results based on response codes, response body analyses,
>conditions, etc.
>
>We also have performance issues like memory-cached allocations,
>multi-core/SMP optimization, etc.

You have a lot to do. Such a design is not so easy, due to the fact
that you must handle some sort of 'sharing things across processes'.
I hope you look into some nice libs which have already implemented
such things in a fast and generic way.

>Thank you very much for your suggestions.

You're welcome ;-)

Cheers
Aleks
From: Robert I. <cor...@gm...> - 2007-05-06 13:39:25
On 5/6/07, Aleksandar Lazic <al-...@no...> wrote:
> >The current structure does not enable more than a single login and
> >is not flexible enough.
>
> I'm sure that many sites only allow login with cookies, and therefore
> it will be nice if curl-loader is able to handle this ;-)

It handles cookies. I mean login to one place, redirection to another
and a second login, a third login in each place with different
credentials.

> >We also have performance issues like memory-cached allocations,
> >multi-core/SMP optimization, etc.
>
> Such a design is not so easy, due to the fact that you must handle
> some sort of 'sharing things across processes'. I hope you look into
> some nice libs which have already implemented such things in a fast
> and generic way.

Oh, this part is rather easy to handle. We already have threads
support. On start-up, the first thread will share the ranges of
virtual clients with their respective ip-ranges among all the threads,
and collect statistics from them all in a lockless way.

Memory allocation in such conditions is an issue, where Michael Moser
has a proposal with a ready open-source allocator for such a case.

Thanks,

--
Sincerely,
Robert Iakobashvili
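[Editor's note: a sketch of the start-up partitioning Robert describes, in which client ranges are divided among worker threads. Illustrative only; the function and its interface are hypothetical, not curl-loader source.]

```python
# Evenly partition virtual-client indices into contiguous per-thread
# ranges, as a first thread might do on start-up before handing each
# range (and its ip-range) to a worker. Half-open (start, end) pairs.

def split_clients(total_clients, num_threads):
    """Return a list of (start, end) index ranges, one per thread."""
    base, extra = divmod(total_clients, num_threads)
    ranges, start = [], 0
    for t in range(num_threads):
        size = base + (1 if t < extra else 0)  # spread the remainder
        ranges.append((start, start + size))
        start += size
    return ranges

print(split_clients(10, 3))  # [(0, 4), (4, 7), (7, 10)]
```

Because each thread owns a disjoint range, per-thread statistics can be accumulated without locks and merged by the collecting thread, which matches the lockless scheme described in the mail.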