curl-loader-devel Mailing List for curl-loader - web application testing
From: Robert I. <cor...@gm...> - 2007-05-08 12:29:00
Aleks, FYI: I have added the following to our TODO list:

h. Aleksandar Lazic <al-...@no...> suggested: make configurable the number of TCP connections that our virtual client can use (the default is up to 5), whether the connections are persistent or closed after each request-response, and, if persistent, the number of request-responses after which the connection is closed and re-established.

Another proposal from Aleks is to make HTTP 1.1 or 1.0 configurable; we will not implement that unless it becomes a feature required by many users.

We can add a configurable CURLOPT_MAXCONNECTS number, play with the CURLOPT_FRESH_CONNECT/CURLOPT_FORBID_REUSE options, and optionally enhance them to make configurable the number of request-responses through a TCP connection until the connection is renewed.

Thank you for your suggestions,

Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
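The proposed behavior (close and re-establish a persistent connection after a configurable number of request-responses) could be sketched as a small policy helper. The type and field names below are illustrative assumptions, not curl-loader API:

```c
#include <assert.h>

/* Illustrative sketch of the proposed TODO item: renew a persistent
 * connection after a configurable number of request-responses.
 * The type and field names are hypothetical, not curl-loader API. */
typedef struct conn_policy
{
    int persistent;        /* 0: close after every request-response */
    int max_reqs_per_conn; /* 0: unlimited; N > 0: renew after N requests */
} conn_policy_t;

/* Returns 1 if the connection should be closed and re-established
 * after request-response number req_num (1-based), 0 otherwise. */
static int should_renew_connection (const conn_policy_t* policy, int req_num)
{
    if (!policy->persistent)
        return 1; /* non-persistent: a fresh connection per request */

    if (policy->max_reqs_per_conn > 0 &&
        req_num % policy->max_reqs_per_conn == 0)
        return 1; /* renewal threshold reached */

    return 0;
}
```

In libcurl terms, a return value of 1 would translate into setting CURLOPT_FRESH_CONNECT and CURLOPT_FORBID_REUSE on the handle before the next fetch, as the message above suggests.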
From: Robert I. <cor...@gm...> - 2007-05-06 13:39:25
On 5/6/07, Aleksandar Lazic <al-...@no...> wrote:
> > The current structure does not enable more than a single login and is not
> > flexible enough.
>
> I'm sure that many sites only allow login with cookies, and therefore it
> will be nice if curl-loader is able to handle this ;-)

It handles cookies. I mean login to one place, redirection to another and a second login, a third login, each place with different credentials.

> > We also have performance issues like memory cached allocations and
> > multi-core/SMP optimization, etc.
>
> Such a design is not so easy, due to the fact that you must handle some
> sort of 'sharing things across processes'.
> I hope you look into some nice libs which have already implemented such
> things in a fast and generic way.

Oh, this part is rather easy to handle. We already have thread support. The first thread will share the ranges of virtual clients with their respective ip-ranges among all threads on start-up and collect statistics from all of them in a lockless way. Memory allocation in such conditions is an issue, where Michael Moser has a proposal with a ready open-source allocator for such a case.

Thanks,
--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Aleksandar L. <al-...@no...> - 2007-05-06 13:30:15
On Son 06.05.2007 13:12, Robert Iakobashvili wrote:
> On 5/6/07, Aleksandar Lazic <al-...@no...> wrote:
>
> > Well, I don't know how modular and flexible curl-loader is and how
> > much time you have; I will try to show some options.
[snipp]
> > I think httperf is nice for such a thing and a good input for some of
> > this session thing ;-), but its development has stopped, it looks to
> > me, and it is based on select().
>
> Thank you very much, Aleks. I will add it to our TODO list, but don't
> know when we will handle it.

That's a deal, thanks ;-)

> Currently, we are restructuring the whole approach by going from the
> sections LOGIN, UAS and LOGOFF to a flat and flexible structure.
>
> The current structure does not enable more than a single login and is
> not flexible enough.

I'm sure that many sites only allow login with cookies, and therefore it will be nice if curl-loader is able to handle this ;-)

> In the new version each url can be of any type (GET/POST/PUT), contain
> its own credentials, have the number of cycles configurable at the
> url level, etc.
>
> At a later stage we are planning to add customized per-url analysis of
> the results based on response code, response body analysis, conditions,
> etc.
>
> We also have performance issues like memory cached allocations and
> multi-core/SMP optimization, etc.

You have a lot to do. Such a design is not so easy, due to the fact that you must handle some sort of 'sharing things across processes'. I hope you look into some nice libs which have already implemented such things in a fast and generic way.

> Thank you very much for your suggestions.

You're welcome ;-)

Cheers
Aleks
From: Robert I. <cor...@gm...> - 2007-05-06 11:12:37
On 5/6/07, Aleksandar Lazic <al-...@no...> wrote:
> > If this is a single url, one curl-loader client is mapped 1:1 to a
> > libcurl handle. A libcurl handle, if my understanding is correct, has
> > up to 5 sockets/connections.
>
> This depends on: HTTP/1.0 or HTTP/1.1

I do not think that we will support 1.0.

> HTTP/1.1 default (=persistent): one $URL/request need not be one
> connection.
> HTTP/1.1 with 'Connection: close' header (=not persistent): one
> $URL/request is one connection.

curl-loader has an option -r, which is translated into a libcurl option to arrange a new connection for each request. Another option is to customize "Connection: close", so that the server will close connections. There are also KA options with numbers and timeouts of Keep-Alive, etc.

> The benchmark tool should be able to handle all these 4 scenarios in a
> proper way, and also to force some $COMPRESSION-response from the
> server; this is possible with curl-loader, afaik with the
> CUSTOM_HEADER ;-)

Agree.

> Well, I don't know how modular and flexible curl-loader is and how
> much time you have; I will try to show some options.
>
> 1.) new config options, e.g.:
>
>     UAS_SESSION_MODE=([Yy]|http1|http1-pers|http11|http11-nopers)
>     UAS_SESSION_CLIENTS=$NUM
>     UAS_SESSION_PER_CLIENT_REQUESTS=$NUM <= only used if
>         SESSION_MODE is http1-pers|http11
>     UAS_SESSION_FILE=$PATH <= here are the urls for one session
>
> 2.) curl-loader assumes that all requests to one server should be
>     counted as a session, and after the run curl-loader calculates the
>     RPS and divides it by the $URLS; it's not the best, but I think a
>     possible way.
>
> 3.) There is an extra counter and a simple flag like
>     UAS_URL_SESSION=(1|0), and after the run curl-loader calculates,
>     based on this flag and counter, how many sessions are possible.
>
> 4.) There could be some other ways ;-))
>
> I think httperf is nice for such a thing and a good input for some of
> this session thing ;-), but its development has stopped, it looks to
> me, and it is based on select().

Thank you very much, Aleks. I will add it to our TODO list, but don't know when we will handle it.

Currently, we are restructuring the whole approach by going from the sections LOGIN, UAS and LOGOFF to a flat and flexible structure. The current structure does not enable more than a single login and is not flexible enough. In the new version each url can be of any type (GET/POST/PUT), contain its own credentials, have the number of cycles configurable at the url level, etc. At a later stage we are planning to add customized per-url analysis of the results based on response code, response body analysis, conditions, etc.

We also have performance issues like memory cached allocations and multi-core/SMP optimization, etc.

Thank you very much for your suggestions.

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Aleksandar L. <al-...@no...> - 2007-05-06 07:21:36
On Sam 05.05.2007 22:55, Robert Iakobashvili wrote:
> On 5/5/07, Aleksandar Lazic <al-...@no...> wrote:
>
> > A session, as defined in httperf and meant by many $PEOPLE, is the
> > whole set of requests which are needed to get the full webpage
> > loaded. This means when you plan to have 100 concurrent users and
> > your site has, let's say, 4 components, then the server must answer
> > 400 requests; you got my point?
>
> OK, an HTTP session or HTTP business transaction.

Yep, you got the point ;-)

> If this is a single url, one curl-loader client is mapped 1:1 to a
> libcurl handle. A libcurl handle, if my understanding is correct, has
> up to 5 sockets/connections.

This depends on: HTTP/1.0 or HTTP/1.1
---
HTTP/1.0 without 'Connection' header (=not persistent): yep, one $URL/request is one connection.
HTTP/1.0 with 'Connection' header (=persistent): one $URL/request need not be one connection.
HTTP/1.1 default (=persistent): one $URL/request need not be one connection.
HTTP/1.1 with 'Connection: close' header (=not persistent): one $URL/request is one connection.
---
I think you know this, but just for clarifying ;-) And because of this, the behaviour of client and server is different per session.

What I mean is: the benchmark tool should be able to handle all these 4 scenarios in a proper way, and also to force some $COMPRESSION-response from the server; this is possible with curl-loader, afaik with the CUSTOM_HEADER ;-) I don't know what's the best way to give the customer the possibility to say how many requests should go through one connection; there are many possibilities ;-)

> Fetching a single url may lead to multiple requests: GETs, POSTs,
> redirections, more GETs and POSTs. They are all reflected in the
> statistics per client.

Yep.

> Talking about 3-5 different independent urls (without redirections),
> one can specify them all in a batch configuration file (test plan) and
> they will all be fetched. We call it a cycle of fetching. There are
> some statistics in the clients dump *.ctx file with numbers of cycles,
> etc. Sure, it may be improved/enhanced.
>
> Which counters do you think may be of interest here?

Well, I don't know how modular and flexible curl-loader is and how much time you have; I will try to show some options.

1.) new config options, e.g.:

    UAS_SESSION_MODE=([Yy]|http1|http1-pers|http11|http11-nopers)
    UAS_SESSION_CLIENTS=$NUM
    UAS_SESSION_PER_CLIENT_REQUESTS=$NUM <= only used if
        SESSION_MODE is http1-pers|http11
    UAS_SESSION_FILE=$PATH <= here are the urls for one session

2.) curl-loader assumes that all requests to one server should be counted as a session, and after the run curl-loader calculates the RPS and divides it by the $URLS; it's not the best, but I think a possible way.

3.) There is an extra counter and a simple flag like UAS_URL_SESSION=(1|0), and after the run curl-loader calculates, based on this flag and counter, how many sessions are possible.

4.) There could be some other ways ;-))

I think httperf is nice for such a thing and a good input for some of this session thing ;-), but its development has stopped, it looks to me, and it is based on select().

What do you think about this?!

Cheers
Aleks
From: Robert I. <cor...@gm...> - 2007-05-05 19:55:29
On 5/5/07, Aleksandar Lazic <al-...@no...> wrote:
> A session, as defined in httperf and meant by many $PEOPLE, is the
> whole set of requests which are needed to get the full webpage loaded.
> This means when you plan to have 100 concurrent users and your site
> has, let's say, 4 components, then the server must answer 400
> requests; you got my point?

OK, an HTTP session or HTTP business transaction.

If this is a single url, one curl-loader client is mapped 1:1 to a libcurl handle. A libcurl handle, if my understanding is correct, has up to 5 sockets/connections.

Fetching a single url may lead to multiple requests: GETs, POSTs, redirections, more GETs and POSTs. They are all reflected in the statistics per client.

Talking about 3-5 different independent urls (without redirections), one can specify them all in a batch configuration file (test plan) and they will all be fetched. We call it a cycle of fetching. There are some statistics in the clients dump *.ctx file with numbers of cycles, etc. Sure, it may be improved/enhanced.

Which counters do you think may be of interest here?

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Aleksandar L. <al-...@no...> - 2007-05-05 17:33:20
On Sam 05.05.2007 17:46, Robert Iakobashvili wrote:
> On 5/5/07, Aleksandar Lazic <al-...@no...> wrote:
> > Do you mean: requests / $RUNTIME => RPS? Where can I get this value?
> > I don't see any time value which I can use to calculate which
> > requests were done in the same $TIME (e.g. second/minute/...).
>
> See the FAQs:
> http://curl-loader.sourceforge.net/doc/faq.html#statistics
>
> It is stated: [snipp]
> So: "Run-Time" is a time in seconds, "Req" is the number of requests,
> and the bottom strings after the asterisks are really the numbers for
> the final averages.

Aha, ok, thanks for removing the fog from my eyes; I thought I had overlooked or misunderstood something ;-)

> > Here I have the same problem: where is the time value which I can
> > use to calculate sessions per second?
>
> "Session" - it is not clear what this is?

A session, as defined in httperf and meant by many $PEOPLE, is the whole set of requests which are needed to get the full webpage loaded, e.g.:
---
index.html
x*.css
x*.gif(s)
x*.js
.
.
.
---
This means when you plan to have 100 concurrent users and your site has, let's say, 4 components, then the server must answer 400 requests; you got my point?

> The "Clients" row in the same .txt file really contains the number of
> clients. If the number of clients is increasing, you can graph,
> calculate, etc., being in possession of both time and client numbers.
>
> There are two modes of increasing the number of clients: automatic,
> managed by the CLIENTS_INITIAL_INC value, and manual from the console.
> You may wish to look for that in the FAQs.

Thanks for the explanation; I think I have now understood it ;-)

Btw.: should I still answer to the list and you directly, or only to the list?

cheers
Aleks
From: Robert I. <cor...@gm...> - 2007-05-05 15:47:00
On 5/5/07, Aleksandar Lazic <al-...@no...> wrote:
> Do you mean: requests / $RUNTIME => RPS? Where can I get this value?
> I don't see any time value which I can use to calculate which requests
> were done in the same $TIME (e.g. second/minute/...).

See the FAQs: http://curl-loader.sourceforge.net/doc/faq.html#statistics

It is stated:

"Some strings from the file:
-------------------------------------------------------------------------------
Run-Time,Appl,Clients,Req,2xx,3xx,4xx,5xx,Err,Delay,Delay-2xx,Thr-In,Thr-Out
 2, Appl ,    100, 155,   0,   96, 0,   0, 1154, 1154, 2108414, 15538
 2, Sec-Appl, 100,   0,   0,    0, 0,   0,    0,    0,       0,     0
 4, Appl,     100,  75,  32,   69, 0,   0, 1267, 1559, 1634656,  8181
 4, Sec-Appl, 100,   0,   0,    0, 0,   0,    0,    0,       0,     0
(cut here)
36, Appl ,     39,  98,  35,   58, 0,   0,  869,  851, 1339168, 11392
36, Sec-Appl,  39,   0,   0,    0, 0,   0,    0,    0,       0,     0
38, Appl ,      3,  91,  44,   62, 0,   0,  530,  587, 1353899, 10136
38, Sec-Appl,   3,   0,   0,    0, 0,   0,    0,    0,       0,     0
 *, *, *, *, *, *, *, *, *, *, *, *
Run-Time,Appl,Clients,Req,2xx,3xx,4xx,5xx,Err,Delay,Delay-2xx,Thr-In,Thr-Out
38, Appl ,      0, 2050, 643, 1407, 0, 213,  725,  812, 1610688, 11706
38, Sec-Appl,   0,    0,   0,    0, 0,   0,    0,    0,       0,     0
-------------------------------------------------------------------------------
The bottom strings after the asterisks are the final averages."

So: "Run-Time" is a time in seconds, "Req" is the number of requests, and the bottom strings after the asterisks are really the numbers for the final averages. Moreover, using this file one can make a graph of CAPS changes over time. Indeed, the time scale has been missing from the FAQs - a place to improve on our side :) Thank you for the pointer.

> > > 2.) Does curl-loader have the possibility to handle 'sessions' like
> > >     httperf?
> > >     http://www.hpl.hp.com/research/linux/httperf/httperf-man.txt
> >
> > No, but since statistics in the client dump file <batch-name>.ctx are
> > provided per each virtual client and the rate of adding the clients
> > is known, such data may be post-processed.
>
> Here I have the same problem: where is the time value which I can use
> to calculate sessions per second?

"Session" - it is not clear what this is? The "Clients" row in the same .txt file really contains the number of clients. If the number of clients is increasing, you can graph, calculate, etc., being in possession of both time and client numbers.

There are two modes of increasing the number of clients: automatic, managed by the CLIENTS_INITIAL_INC value, and manual from the console. You may wish to look for that in the FAQs.

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
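The final-averages line makes the "requests / $RUNTIME" computation discussed in this thread mechanical. A small parser for the comma-separated layout quoted in the FAQ excerpt (the column positions are taken from that excerpt, so this is a post-processing sketch, not curl-loader code):

```c
#include <assert.h>
#include <stdio.h>

/* Parse a "Run-Time, Appl, Clients, Req, ..." line of the curl-loader
 * statistics file; only Run-Time (seconds) and Req are extracted.
 * Returns 0 on success, -1 on a line that does not match. */
static int parse_stat_line (const char* line, long* run_time, long* requests)
{
    long clients = 0;

    return sscanf (line, "%ld , %*[^,] , %ld , %ld",
                   run_time, &clients, requests) == 3 ? 0 : -1;
}

/* Average RPS as discussed in the thread: total requests divided by
 * the total run time, taken from the final-averages line. */
static double average_rps (const char* summary_line)
{
    long run_time = 0, requests = 0;

    if (parse_stat_line (summary_line, &run_time, &requests) || run_time <= 0)
        return -1.0; /* unparsable line or zero run time */

    return (double) requests / (double) run_time;
}
```

For the final-averages line above (2050 requests over 38 seconds), this yields roughly 54 requests per second.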
From: Aleksandar L. <al-...@no...> - 2007-05-05 07:09:27
On Fre 04.05.2007 19:42, Robert Iakobashvili wrote:
> On 5/4/07, Aleksandar Lazic <al-...@no...> wrote:
> > 1.) Is there, from CAPS at runtime, an RPS (requests per second) or,
> >     after the run is finished, a summary available?
>
> Yes. Please read the FAQs, e.g.
> http://curl-loader.sourceforge.net/doc/faq.html#statistics, or the
> README file of the tarball.

Ok, I have read it again, but maybe I don't understand it, sorry.

---from FAQ
Currently HTTP/HTTPS statistics includes the following counters:
- requests num;
- 2xx success num;
- 3xx redirects num;
- client 4xx errors num;
- server 5xx errors num;
- other errors num, like resolving, tcp-connect, server closing or empty responses number;
- average application server Delay (msec), estimated as the time between HTTP request and HTTP response without taking into account network latency (RTT);
- average application server Delay for 2xx (success) HTTP-responses, as above, but only for 2xx responses. The motivation for that is that 3xx redirections and 5xx server errors/rejects may not necessarily provide a true indication of a testing server's working functionality.
- throughput out (batch average);
- throughput in (batch average);
---

Do you mean: requests / $RUNTIME => RPS? Where can I get this value? I don't see any time value which I can use to calculate which requests were done in the same $TIME (e.g. second/minute/...). Sorry for my lack of understanding; maybe I'm too spoiled by the other tools and think in a wrong way.

> > 2.) Does curl-loader have the possibility to handle 'sessions' like
> >     httperf?
> >     http://www.hpl.hp.com/research/linux/httperf/httperf-man.txt
>
> No, but since statistics in the client dump file <batch-name>.ctx are
> provided per each virtual client and the rate of adding the clients is
> known, such data may be post-processed.

Here I have the same problem: where is the time value which I can use to calculate sessions per second?

> > 3.) Is there a 'summary'-like output which some of the other
> >     benchmark tools have?
>
> Yes. Please read the FAQs, e.g.
> http://curl-loader.sourceforge.net/doc/faq.html#statistics, or the
> README file of the tarball.

See above.

Cheers
Aleks
From: Robert I. <cor...@gm...> - 2007-05-04 16:42:32
On 5/4/07, Aleksandar Lazic <al-...@no...> wrote:
> 1.) Is there, from CAPS at runtime, an RPS (requests per second) or,
>     after the run is finished, a summary available?

Yes. Please read the FAQs, e.g. http://curl-loader.sourceforge.net/doc/faq.html#statistics, or the README file of the tarball.

> 2.) Does curl-loader have the possibility to handle 'sessions' like
>     httperf? http://www.hpl.hp.com/research/linux/httperf/httperf-man.txt

No, but since statistics in the client dump file <batch-name>.ctx are provided per each virtual client and the rate of adding the clients is known, such data may be post-processed.

> 3.) Is there a 'summary'-like output which some of the other benchmark
>     tools have?

Yes. Please read the FAQs, e.g. http://curl-loader.sourceforge.net/doc/faq.html#statistics, or the README file of the tarball.

> I was unable to look into the mail from the sf mail archive:

I have filed, earlier today, the 6th complaint/support request on the issue of the mail archives. They are improving something, but suddenly it breaks. Thank you.

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Aleksandar L. <al-...@no...> - 2007-05-04 15:14:25
On Fre 04.05.2007 16:29, Robert Iakobashvili wrote:
> Aleks,
>
> Something went wrong with the e-mail and I got an empty body.
> Please re-submit it.

I have it; however, here is the msg:
---
I have read http://curl-loader.sourceforge.net/doc/faq.html#statistics but there are still some open questions:

1.) Is there, from CAPS at runtime, an RPS (requests per second) or, after the run is finished, a summary available?

2.) Does curl-loader have the possibility to handle 'sessions' like httperf? http://www.hpl.hp.com/research/linux/httperf/httperf-man.txt

3.) Is there a 'summary'-like output which some of the other benchmark tools have?

Maybe I have overlooked or misunderstood something; then please point me the right way ;-)
---

I was unable to look into the mail from the sf mail archive:
http://sourceforge.net/mailarchive/forum.php?forum_name=curl-loader-devel
=> http://sourceforge.net/mailarchive/forum.php?thread_name=20070504081842.GA20907%40none.at&forum_name=curl-loader-devel

Do you have more luck?!

BR
Aleks
From: Aleksandar L. <al-...@no...> - 2007-05-04 15:11:04
Hi,

I have read http://curl-loader.sourceforge.net/doc/faq.html#statistics but there are still some open questions:

1.) Is there, from CAPS at runtime, an RPS (requests per second) or, after the run is finished, a summary available?

2.) Does curl-loader have the possibility to handle 'sessions' like httperf? http://www.hpl.hp.com/research/linux/httperf/httperf-man.txt

3.) Is there a 'summary'-like output which some of the other benchmark tools have?

Maybe I have overlooked or misunderstood something; then please point me the right way ;-)

BR
Aleks
From: Robert I. <cor...@gm...> - 2007-05-04 13:30:02
Aleks,

Something went wrong with the e-mail and I got an empty body. Please re-submit it.

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-05-04 08:43:20
Note that the version in svn head is broken by recent check-ins, since yesterday, May 04 2007. Hopefully, we will return to stability within a week or two, with a much simpler and more flexible approach to load/test configuration.

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Aleksandar L. <al-...@no...> - 2007-05-04 08:18:54
Hi,

I have read http://curl-loader.sourceforge.net/doc/faq.html#statistics but there are still some open questions:

1.) Is there, from CAPS at runtime, an RPS (requests per second) or, after the run is finished, a summary available?

2.) Does curl-loader have the possibility to handle 'sessions' like httperf? http://www.hpl.hp.com/research/linux/httperf/httperf-man.txt

3.) Is there a 'summary'-like output which some of the other benchmark tools have?

Maybe I have overlooked or misunderstood something; then please point me the right way ;-)

BR
Aleks
From: Robert I. <cor...@gm...> - 2007-04-27 04:37:45
Hi Yusuf,

On 4/27/07, Yusuf Goolamabbas <yu...@ou...> wrote:
> I've been following your curl-loader project (I haven't installed it
> as yet, since I still have to understand fully the impact of this
> statement: 'Each client performs loading from its own source
> IP-address'). In particular, what is the impact on the kernel stack of
> creating 10,000 IPs?

I have not studied the issue in depth, but even a PC with 500 MB of memory does not demonstrate any problems with that. My guess is that each address takes 5-6 bytes of kernel memory. You can browse the lxr linux kernel code and search for the secondary IP storage done via the netlink system.

> I am particularly interested in the hyper-mode creating lots of
> virtual users. In your recent 0.30 release you mentioned a reduction
> in memory usage by patching libcurl. Is this patch upstream?

There is a patch, which was taken upstream and cuts 16K; beyond that we are building libcurl with buffers sized 8K, not 16K.

> As such, are you now mandating that curl-loader should not be linked
> with the system-supplied libcurl/libevent and should only be used with
> the supplied libcurl/libevent?

We are using our own build (with optimization) and at least 1 patch of libcurl, which is not mainstream. Yes, we are mandating it by building them as static libraries and linking them statically in order:
a) to have a unified building system;
b) not to mess with the versions shipped on various linux distros;
c) to have an option to patch;
d) to have an option to optimize;
e) not to intervene in (destroy, mess up) the user's linux.
Actually, there is no impact of this on your linux distro.

> Also, 10,000 clients @ 35K each would be a memory consumption of
> 350MB. Seems a tad high. Has there been any comment from the libcurl
> developers on how to reduce this even further?

You can enter our Makefile and change an option passed to ./configure, CFLAGS -DCURL_MAX_WRITE_SIZE=8192, to e.g. 4096. Thus you will cut 8K more, but test the impact on your performance. Still, you can do it with the curl mainstream cvs version and a recent daily snapshot.

Please direct all questions to cur...@li..., but subscribe first.

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
From: Robert I. <cor...@gm...> - 2007-04-26 20:08:26
curl-loader version 0.30 released.

curl-loader now officially supports 10K and more simultaneously loading HTTP clients in hyper mode (command-line option -m0). For more info about this open-source project, please look at: http://curl-loader.sourceforge.net/

--
Sincerely,
Robert
From: Robert I. <cor...@gm...> - 2007-04-25 13:20:09
About bugnux: http://www.bugnux.org/

curl-loader is inside: http://www.bugnux.org/index.php?option=com_content&task=view&id=2&Itemid=1

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
From: Robert I. <cor...@gm...> - 2007-04-22 20:16:12
Alex,

Sorry for the delayed response. Sourceforge is to blame for the problems with our mailing lists and archives, which are hopefully solved.

> But I agree it is not the cleanest solution; the proper way to do it
> is to add a command line key to specify the path to the .txt file and
> a key to modify the output format. My requirements were: it has to be
> one line per event, it has to be easily parseable, and it has to
> include parameter names along with values. I am running curl-loaders
> across a number of machines and need this kind of output to easily
> aggregate statistics from all processes. Perhaps there is a simpler
> way to do the same?

Today you need one type of output; the next day, for the next customer, another type. Maybe a customizable template file could govern the output? Each statistics parameter would have its tag and a customizable string representation, e.g. a template like:

Req:"Requests" 2xx:"200-OK" Delay-2xx:"Delay-200"

would lead to the output:
-----------------------------------------------------------
Requests, 200-OK, Delay-200
100, 50, 87
50, 25, 130

Currently, we are working on bug fixes and stabilization to release a "stable" version very soon, with hyper mode working with 10K simultaneously loading clients, and memory consumption, with recent patches, decreased by about half, from 60K per client down to 30K.

But if you can come up with a patch delivering something customizable and generic, we will be delighted.

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
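The template idea could be prototyped as a tag-to-label table from which the output header line is generated. The structure and names below are purely illustrative assumptions, not an existing curl-loader interface:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical output template: each statistics counter tag carries a
 * user-chosen column label, e.g. Req:"Requests" 2xx:"200-OK". */
typedef struct stat_column
{
    const char* tag;   /* internal counter name */
    const char* label; /* user-facing column header */
} stat_column_t;

/* Build a comma-separated header line from the template entries. */
static void format_header (const stat_column_t* cols, int ncols,
                           char* buf, size_t buflen)
{
    size_t used = 0;
    int i;

    buf[0] = '\0';
    for (i = 0; i < ncols && used + 1 < buflen; i++)
    {
        /* Prefix every column but the first with a separator. */
        used += (size_t) snprintf (buf + used, buflen - used, "%s%s",
                                   i ? ", " : "", cols[i].label);
    }
}
```

The same table could drive the data rows, keeping one line per event and parameter names alongside values, as Alex's requirements describe.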
From: Robert I. <cor...@gm...> - 2007-04-20 08:01:48
On 4/19/07, Michael Moser <mos...@gm...> wrote:
> I don't know about AMD_64 and pointer alignment, however; do you
> remember what Intel_64 is doing?

Intel recommends aligning a pointer on 64-bit at an 8-byte boundary. Please look here: www.intel.com/cd/ids/developer/asmo-na/eng/328769.htm

> On 4/19/07, Michael Moser <mos...@gm...> wrote:
> > The mpool implementation does not align the pooled object size;
> > therefore the pointer returned by the beast will not be four-byte
> > aligned (on Intel this is not good - it will create misalignment
> > interrupts when using the data).
> >
> > fix:
> >
> > #define PTR_ALIGN 4
> >
> > int mpool_init (mpool* mpool, size_t object_size, int num_obj)
> > {
> >     object_size = (object_size + PTR_ALIGN) & ( ~ (PTR_ALIGN - 1) );
> >     ....

If we have x86_64 or amd64 - check which flags exactly - 8 bytes, else 4 bytes. But I do not have a 64-bit linux now, therefore the question is how to test. Do you have something 64-bit with linux?

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
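The standard power-of-two round-up differs slightly from the quoted fix: adding PTR_ALIGN - 1 (rather than a full PTR_ALIGN) keeps sizes that are already aligned unchanged instead of growing them by one alignment unit. A sketch that also picks the boundary from the pointer size at compile time, as the 4-vs-8-byte question above asks:

```c
#include <assert.h>
#include <stddef.h>

/* 8-byte pointer alignment on 64-bit builds (x86_64/amd64),
 * 4-byte on 32-bit, selected from the pointer size at compile time. */
#define PTR_ALIGN ((size_t) (sizeof (void*) >= 8 ? 8 : 4))

/* Round size up to the next PTR_ALIGN boundary. The mask trick
 * requires PTR_ALIGN to be a power of two. Note the "- 1" inside
 * the addition: it keeps already-aligned sizes unchanged. */
static size_t align_object_size (size_t size)
{
    return (size + PTR_ALIGN - 1) & ~(PTR_ALIGN - 1);
}
```

This sketch is independent of a 64-bit test machine: the assertions below hold for either alignment value.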
From: Robert I. <cor...@gm...> - 2007-04-19 13:11:54
Daniel,

Thank you for the detailed explanations. We will look into it more deeply.

On 4/19/07, Daniel Stenberg <da...@ha...> wrote:
> On Thu, 19 Apr 2007, Robert Iakobashvili wrote:
>
> > Could you recommend a way (sharing?) to decrease memory consumption
> > per client in libcurl?
>
> Using the multi interface, it already shares connections and the DNS
> cache automatically, and then there's not much more to share. The easy
> struct has a bunch of variables for internal house-keeping, and then
> there are the buffers:
>
> There are no less than three buffers allocated at CURL_MAX_WRITE_SIZE
> (16K), so 16*3=48K is "wasted" there already. You can of course:
>
> 1 - change the code to alloc the required buffer(s) once they're
>     needed, as the upload buffer is rather pointless without uploads,
>     and I believe the master_buffer can be skipped if not using
>     pipelining...
>
> 2 - experiment with lowering the CURL_MAX_WRITE_SIZE value, but that
>     might of course also risk getting a slower throughput.
>
> --
> Commercial curl and libcurl Technical Support: http://haxx.se/curl.html

--
Sincerely,
Robert Iakobashvili, coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
From: Robert I. <cor...@gm...> - 2007-04-18 11:48:57
|
This version contains bugfixes only, looking forward to the "stable" 0.30
version.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
|
From: Jamie L. <ja...@sh...> - 2007-04-17 22:54:07
|
Daniel Stenberg wrote:
> Possibly we should simply detect that no millisecond timeout option is
> used, and then switch to a more "sloppy" approach. That could perhaps
> involve getting the time immediately when curl_multi_socket*() is
> called and then simply not get it again (i.e. keep returning the same)
> unless one of the functions is called that we know is time consuming.

As far as I know, the only way to remove lots of gettimeofday() (or
equivalent) calls from code like this, which must detect timeouts
rapidly, is to use some kind of signal (SIGALRM etc.) to set a flag, or a
thread which sleeps for the appropriate time, then wakes and sets a flag,
or other equivalent mechanisms (like POSIX timer events).

It's a noticeable performance improvement in event-driven code with an
unpredictable mixture of short and long processing in each event. (That's
why a counting method doesn't work, as you note.)

Unfortunately, they're all difficult to use in a library which is used in
lots of different kinds of programs and mustn't use process-global
resources.

But those techniques might work fine with the HiPer enhancements, as that
kind of time optimisation can be part of the application, just as the
select() call is, and can integrate with the application's use of signals
and threads.

-- Jamie
|
From: Daniel S. <da...@ha...> - 2007-04-17 20:57:08
|
On Tue, 17 Apr 2007, Robert Iakobashvili wrote:

(Replying only to a part of the previous message - for now)

> Sorry for the poor explanation. The libcurl library verbose mode was
> always on during these profiling experiments.

Ah, ok. That explains the extensive use of *printf(). In my view, it
would be more interesting to do the profiling without verbose mode.

> It was curl-loader's verbose mode that was off, which has another
> meaning, namely:
> - don't print to the logfile any messages below the CURLINFO_ERROR
>   level (our patch to split CURLINFO_TEXT into errors and, indeed,
>   info text).

I don't see the need for that patch/CURLINFO_ERROR. It simply makes the
output that would otherwise be stored in the CURLOPT_ERRORBUFFER get sent
to the debug callback instead, and there you filter out only that. I
would rather recommend using CURLOPT_ERRORBUFFER instead, as it would
then also reduce the amount of *printf()s etc. within libcurl.

--
Commercial curl and libcurl Technical Support: http://haxx.se/curl.html
|
From: Robert I. <cor...@gm...> - 2007-04-17 13:23:32
|
On 4/17/07, Daniel Stenberg <da...@ha...> wrote:
> >> 1 - gettimeofday() certainly is called a lot and indeed takes a
> >> significant time. We should address this (somehow).
> >
> > It is called in cycles in many cases.
> > A pattern to solve it is something like this:
> >
> > gettimeofday ();
> > int cycle_counter = 0;
> >
> > for (i....) {
> >
> >     DO_THE_JOB
> >
> >     if (! (++cycle_counter % TIME_RECALCULATION_CYCLES_NUM)) {
> >         gettimeofday ();
> >     }
> > }
>
> Yes, but we need pretty accurate time info at many places.

Yes, that is why a really good knowledge of curl is required to detect
the right places for the trick.

> Possibly we should simply detect that no millisecond timeout option is
> used, and then switch to a more "sloppy" approach. That could perhaps
> involve getting the time immediately when curl_multi_socket*() is
> called and then simply not get it again (i.e. keep returning the same)
> unless one of the functions is called that we know is time consuming.

Indeed. This is similar to what we are doing, plus correcting the
timestamp in the cases where looping takes long cycles. I would call it a
sloppy++ approach :). If you look through the code and indicate the
appropriate places besides curl_multi_socket*(), I could try to cook up a
patch proposal.

> >> 2 - dprintf_formatf() seems to take quite a lot of time as well.
> >> How come this function is used so much?
> > -----------------------------------------------
> > 0.00  0.00        1/239402   curl_msnprintf [67]
> > 0.04  0.00    37300/239402   curl_mvaprintf [27]
> > 0.08  0.01    79201/239402   curl_maprintf [21]
> > 0.12  0.02   122900/239402   curl_mvsnprintf [16]
> > [12]  19.9  0.24  0.03  239402  dprintf_formatf [12]
> > 0.02  0.00  5696319/5696319  addbyter [34]
> > 0.01  0.00  7782971/7782971  alloc_addbyter [45]
>
> This is the snprintf() and similar calls within libcurl.
>
> > Just a hypothesis below:
> > We are using (as posted) the options:
> > CURLOPT_VERBOSE with 1
> > CURLOPT_DEBUGFUNCTION with client_tracing_function
>
> Well, VERBOSE and DEBUGFUNCTION are for debugging, and then we should
> be prepared to get extra function calls like this.
>
> > The profiling was done in a non-verbose mode.

Sorry for the poor explanation. The libcurl library verbose mode was
always on during these profiling experiments. It was curl-loader's
verbose mode that was off, which has another meaning, namely:
- don't print to the logfile any messages below the CURLINFO_ERROR level
  (our patch to split CURLINFO_TEXT into errors and, indeed, info text).

The motivation for the patch:
- curl-loader verbose mode prints tons of libcurl messages, and it is
  important to grep fast for errors when treating GB-sized files;
- curl-loader non-verbose mode is normally used for heavy loads, when it
  is important to save CPU by filtering everything out of the logfile
  output and leaving only the really important errors.

By the way, do you think that this patch may be of interest to somebody
else besides curl-loader:
http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/patches/curl-trace-info-error.patch?view=markup

Hopefully, this misunderstanding does not impact the analysis below:

> Then we should use these functions only for "core" creation of strings
> and not for debug/verbose messages. There are several things we can do
> to improve the situation:
> 1 - optimize buffer sizes for the typical sizes (for the aprintf()
>     function)
> 2 - make sure we don't use *printf() functions when there's no need.
>     Possibly use memcpy() etc. instead.
> 3 - make sure that the *_formatf() is written to perform well,
>     especially for the typical input/flags that we use (which I guess
>     means %s).

Thank you.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
|