curl-loader-devel Mailing List for curl-loader - web application testing
From: Daniel S. <da...@ha...> - 2007-04-17 12:08:47
On Tue, 17 Apr 2007, Robert Iakobashvili wrote:

>> 1 - gettimeofday() certainly is called a lot and indeed takes a
>> significant time. We should address this (somehow).
>
> It is called in cycles in many cases.
> A pattern to solve it is something like this:
>
> gettimeofday ();
> int cycle_counter = 0;
>
> for (i....) {
>
>     DO_THE_JOB
>
>     if (! (++cycle_counter % TIME_RECALCULATION_CYCLES_NUM)) {
>        gettimeofday ();
>     }

Yes, but that relies on the condition being true often enough to keep the date value accurate enough. We just recently added code to libcurl that allows it to time out with millisecond precision, and that doesn't give us much room to speculate.

We need pretty accurate time info in many places in the code, and it isn't easy to come up with a generic approach for skipping system calls for this.

Possibly we should simply detect that no millisecond timeout option is used, and then switch to a more "sloppy" approach. That could perhaps involve getting the time immediately when curl_multi_socket*() is called and then simply not getting it again (i.e. keep returning the same value) unless one of the functions is called that we know is time consuming.

>> 2 - dprintf_formatf() seems to take quite a lot of time as well.
>> How come this function is used so much?
>
> This is the question that I was going to ask you. :)
> You can dig into the call stacks in the .prof files and look where
> it is called.
> What I see is, e.g.:
> -----------------------------------------------
>                 0.00    0.00        1/239402     curl_msnprintf [67]
>                 0.04    0.00    37300/239402     curl_mvaprintf [27]
>                 0.08    0.01    79201/239402     curl_maprintf [21]
>                 0.12    0.02   122900/239402     curl_mvsnprintf [16]
> [12]    19.9    0.24    0.03   239402            dprintf_formatf [12]
>                 0.02    0.00  5696319/5696319    addbyter [34]
>                 0.01    0.00  7782971/7782971    alloc_addbyter [45]

This is the snprintf() and similar calls within libcurl.

> Just a hypothesis below:
> We are using (as posted) the options:
> CURLOPT_VERBOSE with 1
> CURLOPT_DEBUGFUNCTION with client_tracing_function
>
> All these CURL_INFO, etc. messages are somehow printed to the buffers
> delivered. Maybe there is something there that may be improved?

Well, VERBOSE and DEBUGFUNCTION are for debugging, and then we should be prepared to get extra function calls like this.

> The profiling was done in non-verbose mode.

Then we should use these functions only for "core" creation of strings and not for debug/verbose messages.

There are several things we can do to improve the situation:

1 - optimize buffer sizes for the typical sizes (for the aprintf() function)

2 - make sure we don't use *printf() functions when there's no need; possibly use memcpy() etc. instead

3 - make sure that *_formatf() is written to perform well, especially for the typical input/flags that we use (which I guess means %s)

--
Commercial curl and libcurl Technical Support: http://haxx.se/curl.html
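A minimal sketch (not from the original thread) of the second improvement listed above: when the "format" is just a plain "%s", a length check plus memcpy() avoids the dprintf_formatf() parsing path entirely. The function name copy_token() and the buffer sizes are assumptions made for illustration only.

#include <string.h>
#include <stdio.h>

/* Illustrative only: copy a plain string into a fixed buffer.
 * For a trivial "%s"-style copy, a bounded memcpy() gives the same
 * result as snprintf() without walking the format string. */
static int copy_token(char *dst, size_t dst_size, const char *src)
{
    size_t len = strlen(src);

    if (len + 1 > dst_size)
        return -1;                 /* would not fit; caller decides what to do */

    memcpy(dst, src, len + 1);     /* copies the terminating NUL as well */
    return 0;
}

int main(void)
{
    char buf[64];

    /* slower: goes through the printf formatting engine */
    snprintf(buf, sizeof(buf), "%s", "Host: localhost");

    /* equivalent result for this trivial case, no format parsing */
    if (copy_token(buf, sizeof(buf), "Host: localhost") == 0)
        puts(buf);
    return 0;
}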
From: Robert I. <cor...@gm...> - 2007-04-17 08:30:16
On 4/17/07, Daniel Stenberg <da...@ha...> wrote:
>
> I had a first look at these just now and three things struck me:
>
> 1 - gettimeofday() certainly is called a lot and indeed takes a significant
>     time. We should address this (somehow).

It is called in cycles in many cases.
A pattern to solve it is something like this:

gettimeofday ();
int cycle_counter = 0;

for (i....) {

    DO_THE_JOB

    if (! (++cycle_counter % TIME_RECALCULATION_CYCLES_NUM)) {
       gettimeofday ();
    }

An example from curl-loader is here; look for get_tick_count (), which calls gettimeofday ():
http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/loader_smooth.c?view=markup

Better still, somebody with a deep understanding of curl should do the job.

> 2 - dprintf_formatf() seems to take quite a lot of time as well and I find
>     it strange. What kind of *printf() operations are made with your tests?
>     How come this function is used so much?

This is the question that I was going to ask you. :)
You can dig into the call stacks in the .prof files and look where it is called.
What I see is, e.g.:
-----------------------------------------------
                0.00    0.00        1/239402     curl_msnprintf [67]
                0.04    0.00    37300/239402     curl_mvaprintf [27]
                0.08    0.01    79201/239402     curl_maprintf [21]
                0.12    0.02   122900/239402     curl_mvsnprintf [16]
[12]    19.9    0.24    0.03   239402            dprintf_formatf [12]
                0.02    0.00  5696319/5696319    addbyter [34]
                0.01    0.00  7782971/7782971    alloc_addbyter [45]

Just a hypothesis below:
We are using (as posted) the options:
CURLOPT_VERBOSE with 1
CURLOPT_DEBUGFUNCTION with client_tracing_function

All these CURL_INFO, etc. messages are somehow printed to the buffers delivered. Maybe there is something there that may be improved?

More details: the profiling was done in non-verbose mode. Still, client_tracing_function receives all messages, but it prints to a logfile only errors (CURL_ERROR). There were no errors during the profiling.

You can see here the patch that we use to filter errors and to let a user distinguish really important errors from mere infos:
http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/patches/curl-trace-info-error.patch?view=markup

For our client_tracing_function you can look here:
http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/loader.c?view=markup

> 3 - I figure we need more precision on the 5th and 6th columns (how long
>     each specific time takes by itself) to better identify functions that
>     seem to take too long.

You can get more understanding from the lower (call-stack) sections of the *.prof outputs.
For even more detailed information, please find attached code-type gprof output for the same experiments, with more details.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
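A compilable version of the pattern sketched above (not from the original mail), under the assumption that the loop body can tolerate a timestamp refreshed only every TIME_RECALCULATION_CYCLES_NUM iterations. do_the_job() and the cycle count are placeholders for the real per-client work.

#include <sys/time.h>
#include <stddef.h>

#define TIME_RECALCULATION_CYCLES_NUM 100  /* assumed value, tune per workload */

static struct timeval now;                 /* cached "current" time */

static void do_the_job(size_t i)
{
    (void) i;                              /* placeholder for real work that reads <now> */
}

void run_cycles(size_t iterations)
{
    int cycle_counter = 0;

    gettimeofday(&now, NULL);              /* prime the cached timestamp */

    for (size_t i = 0; i < iterations; i++) {
        do_the_job(i);

        /* refresh the cached time only once per N iterations,
         * trading some timestamp accuracy for far fewer syscalls */
        if (!(++cycle_counter % TIME_RECALCULATION_CYCLES_NUM))
            gettimeofday(&now, NULL);
    }
}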
From: Daniel S. <da...@ha...> - 2007-04-16 21:36:57
On Mon, 16 Apr 2007, Robert Iakobashvili wrote:

> hyper-mode.profiling.tar.bz2:
> hyper-0.prof - gprof res of hyper mode, 0 msec delay between requests;
> hyper-0.op - oprofile res of hyper mode, 0 msec delay between requests;
> hyper-1000.prof - gprof res of hyper mode, 1000 msec delay b requests;
> hyper-1000.op - oprofile res of hyper mode, 1000 msec delay b requests;

Thanks for doing this profile work!

I had a first look at these just now and three things struck me:

1 - gettimeofday() certainly is called a lot and indeed takes a significant
    time. We should address this (somehow).

2 - dprintf_formatf() seems to take quite a lot of time as well and I find
    it strange. What kind of *printf() operations are made with your tests?
    How come this function is used so much?

3 - I figure we need more precision on the 5th and 6th columns (how long
    each specific time takes by itself) to better identify functions that
    seem to take too long.

--
Commercial curl and libcurl Technical Support: http://haxx.se/curl.html
From: <pen...@sc...> - 2007-04-16 19:04:11
Hi,

Well, I needed to send the statistics at runtime into a pipe, without headers, in a specific format I needed. Perhaps I should have added extra switches to modify the .txt format instead; you are right. For now I have patched curl-loader to add an -x command line option to specify the path to the pipe. But I agree it is not the cleanest solution; the proper way to do it is to add a command line key to specify the path to the .txt file and a key to modify the output format.

My requirements were: it has to be one line per event, it has to be easily parseable, and it has to include parameter names along with values. I am running curl-loaders across a number of machines and need this kind of output to easily aggregate statistics from all processes. Perhaps there is a simpler way to do the same?

Regards,
Alex

>> I have customized it to output its statistics in a machine parseable
>> form (see the attached patch in case you find this feature useful).
>
> Thank you for your suggestion and the patch against recent svn.
>
> There is some regular statistics output to a file named:
> <batch_name>.txt. Could you, please, explain a bit what is
> incorrect or inconvenient with the output?
>
> Regards, Alex
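Not part of the archived patch: a minimal sketch of the kind of output described above, one record per line with parameter names next to values, appended to a FIFO path given at startup. The stats_record structure, the field names and write_stats_line() are assumptions for illustration only.

#include <stdio.h>
#include <time.h>

struct stats_record {               /* hypothetical counters, for illustration */
    long clients;
    long req_ok;
    long req_err;
};

/* Append one easily parseable key=value line to the (already created) named pipe. */
static int write_stats_line(const char *pipe_path, const struct stats_record *s)
{
    FILE *fp = fopen(pipe_path, "a");   /* a FIFO needs a reader, or this open blocks */

    if (!fp)
        return -1;

    fprintf(fp, "ts=%ld clients=%ld req_ok=%ld req_err=%ld\n",
            (long) time(NULL), s->clients, s->req_ok, s->req_err);
    fclose(fp);
    return 0;
}

A line such as "ts=1176710651 clients=200 req_ok=1834 req_err=0" is trivial to split on spaces and '=' when aggregating output from several machines.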
From: Robert I. <cor...@gm...> - 2007-04-16 09:35:06
CURL VERSION:
snapshot 7.16.2-20070326 with a recently submitted patch for optimization of curl_multi_socket () by passing a bitmask, named here curl_multi_socket_noselect (renamed to curl_multi_socket_action).

HOST MACHINE and OPERATING SYSTEM:
linux, debian with vanilla kernel 2.6.20.6

COMPILER:
gcc version 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)

CURL CONFIGURATION:
configure --prefix $(CURL_BUILD) \
  --enable-thread \
  --enable-ipv6 \
  --with-random=/dev/urandom \
  --with-ssl=/usr/include/openssl \
  --enable-shared=no \
  CFLAGS="-pg -g -O3 -ffast-math -finline-functions -funroll-all-loops \
  -finline-limit=1000 -mmmx -msse -foptimize-sibling-calls -mtune=pentium4 \
  -mcpu=pentium4"

COMPILATION OPTIONS:
The same for the application, libcurl and libevent:
-pg -g -O3 -ffast-math -finline-functions -funroll-all-loops \
-finline-limit=1000 -mmmx -msse -foptimize-sibling-calls -mtune=pentium4 \
-mcpu=pentium4

CURL_EASY_SETOPT:
The curl_easy_setopt options used are:
CURLOPT_IPRESOLVE
CURLOPT_INTERFACE
CURLOPT_NOSIGNAL
CURLOPT_URL with http://localhost/ACE-INSTALL.html (104K file)
CURLOPT_CONNECTTIMEOUT 5
CURLOPT_FRESHCONNECT 0
CURLOPT_DNS_CACHE_TIMEOUT with -1
CURLOPT_VERBOSE with 1
CURLOPT_DEBUGFUNCTION with client_tracing_function
CURLOPT_DEBUGDATA, cctx
CURLOPT_WRITEFUNCTION with do_nothing_write_function
CURLOPT_SSL_VERIFYPEER 0 /* not used, http-url */
CURLOPT_SSL_VERIFYHOST 0 /* not used, http-url */
CURLOPT_PRIVATE, cctx
CURLOPT_WRITEDATA, cctx - for hyper
CURLOPT_ERRORBUFFER, bctx->error_buffer
CURLOPT_FOLLOWLOCATION, 1 /* not used, no 3xx */
CURLOPT_UNRESTRICTED_AUTH, 1 /* not used, no auth */
CURLOPT_MAXREDIRS, -1 /* not used, no 3xx */
CURLOPT_USERAGENT, bctx->user_agent /* MSIE-6 like string */
CURLOPT_COOKIEFILE, "" /* not used, no cookies */

SERVER:
lighttpd-1.4.13 (ssl) from debian, the stock configuration file with these strings added:
server.event-handler = "linux-sysepoll"
server.max-fds = 32000

FILE FETCHED:
104K static file

CLIENT APPLICATION:
curl-loader

CLIENT LOADING MODES:
Smooth mode - using curl_multi_perform based on poll () demultiplexing;
Hyper mode - based on the hipev.c example, using epoll () based demultiplexing with further curl_multi_socket_noselect ().

CLIENT LOAD:
For each loading mode (smooth and hyper) two loads have been run, each for 200 seconds:
- 200 libcurl clients loading the localhost server with 0 msec between the fetched file and the new request;
- 200 libcurl clients loading the localhost server with 1000 msec between the fetched file and the new request.

The clients have been started simultaneously, without any gradual increase; however, due to the 200 seconds of the run, the impact of connection establishment is rather low.

PROFILING METHODS:
gprof - provides application-level information, collected to files suffixed *.prof;
oprofile - whole-system information, including glibc and kernel functions, filtered for the relevant curl-loader application to files suffixed *.op.

RESULTING FILES:
Smooth mode experiments (smooth-mode.profiling.tar.bz2):
smooth-0.prof - gprof res of smooth mode, 0 msec delay between requests;
smooth-0.op - oprofile res of smooth mode, 0 msec delay between requests;
smooth-1000.prof - gprof res of smooth mode, 1000 msec delay between requests;
smooth-1000.op - oprofile res of smooth mode, 1000 msec delay between requests.

Hyper mode experiments (hyper-mode.profiling.tar.bz2):
hyper-0.prof - gprof res of hyper mode, 0 msec delay between requests;
hyper-0.op - oprofile res of hyper mode, 0 msec delay between requests;
hyper-1000.prof - gprof res of hyper mode, 1000 msec delay between requests;
hyper-1000.op - oprofile res of hyper mode, 1000 msec delay between requests.

The recommendations and analyses of curl gurus would be very much appreciated. Any feedback from you would be highly valued.

Profiling of loads with a high number of connection establishments / new clients per second is out of the scope of this mail and will be the subject of some next e-mail (hopefully). Meanwhile, very premature results (without engagement): 50 new clients per second totally burn out my CPU, with Curl_connect () being the main CPU eater.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
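A condensed sketch of how a handle using the CURL_EASY_SETOPT options listed above might be set up. It is not the actual curl-loader code: client_tracing_function, do_nothing_write_function and cctx stand in for the real callbacks and context described in the mail, and only a representative subset of the options is shown.

#include <curl/curl.h>

/* Hypothetical stand-ins for the callbacks and context named in the mail. */
static size_t do_nothing_write_function(void *p, size_t sz, size_t n, void *ud)
{ (void)p; (void)ud; return sz * n; }   /* discard the body, report it consumed */

static int client_tracing_function(CURL *h, curl_infotype t,
                                   char *data, size_t len, void *ud)
{ (void)h; (void)t; (void)data; (void)len; (void)ud; return 0; }

CURL *setup_handle(void *cctx, char *error_buffer)
{
    CURL *h = curl_easy_init();
    if (!h)
        return NULL;

    curl_easy_setopt(h, CURLOPT_URL, "http://localhost/ACE-INSTALL.html");
    curl_easy_setopt(h, CURLOPT_NOSIGNAL, 1L);
    curl_easy_setopt(h, CURLOPT_CONNECTTIMEOUT, 5L);
    curl_easy_setopt(h, CURLOPT_DNS_CACHE_TIMEOUT, -1L);
    curl_easy_setopt(h, CURLOPT_VERBOSE, 1L);
    curl_easy_setopt(h, CURLOPT_DEBUGFUNCTION, client_tracing_function);
    curl_easy_setopt(h, CURLOPT_DEBUGDATA, cctx);
    curl_easy_setopt(h, CURLOPT_WRITEFUNCTION, do_nothing_write_function);
    curl_easy_setopt(h, CURLOPT_WRITEDATA, cctx);
    curl_easy_setopt(h, CURLOPT_PRIVATE, cctx);
    curl_easy_setopt(h, CURLOPT_ERRORBUFFER, error_buffer);
    curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);
    return h;
}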
From: Robert I. <cor...@gm...> - 2007-04-16 04:21:43
Hi Alex,

On 4/15/07, Alexandre Bezroutchko <ab...@sc...> wrote:

> static void* batch_function (void * batch_data)
> {
>   ...
>   sprintf (bctx->batch_statistics, "./%s.txt", bctx->batch_name);
>   ...
> }
>
> Here you add 6 extra characters to the batch name and store it in 36
> bytes buffer, so if somebody will actually use 31 characters batch name,
> the buffer will be overflown with 37 bytes.

Thank you a lot. The bugfix is in SVN. We have added you to our THANKS file.

> I have customized it to output its statistics in a machine parseable
> form (see the attached patch in case you find this feature useful).

Thank you for your suggestion and the patch against recent svn.

There is some regular statistics output to a file named <batch_name>.txt. Could you, please, explain a bit what is incorrect or inconvenient with the output?

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
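Not the committed fix itself (that lives in the project's SVN): one common way to guard such a formatted filename, assuming batch_statistics is a fixed-size array. The struct is trimmed down and the buffer sizes are illustrative.

#include <stdio.h>

#define BATCH_NAME_SIZE        32
#define BATCH_STATISTICS_SIZE  64   /* assumed: room for "./" + name + ".txt" */

struct batch_context {              /* trimmed-down stand-in for the real structure */
    char batch_name[BATCH_NAME_SIZE];
    char batch_statistics[BATCH_STATISTICS_SIZE];
};

/* snprintf() truncates instead of overflowing; its return value tells us
 * whether the name actually fit, so an over-long batch name is rejected
 * instead of silently producing a clipped filename. */
static int make_statistics_name(struct batch_context *bctx)
{
    int n = snprintf(bctx->batch_statistics, sizeof(bctx->batch_statistics),
                     "./%s.txt", bctx->batch_name);

    return (n < 0 || (size_t) n >= sizeof(bctx->batch_statistics)) ? -1 : 0;
}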
From: <pen...@sc...> - 2007-04-15 08:38:27
Hi,

The patch is against svn.

Regards,
Alex

Robert Iakobashvili wrote:

> Alexandre,
>
> On 4/15/07, Alexandre Bezroutchko <ab...@sc...> wrote:
>
>> While doing so I have noticed a potential problem: in batch.h, in
>> structure batch_context, you allocate very small buffers for file names:
>> Here you add 6 extra characters to the batch name and store it in 36
>> bytes buffer, so if somebody will actually use 31 characters batch name,
>> the buffer will be overflown with 37 bytes.
>
> Indeed, thanks, we will fix it.
>
>> I have customized it to output its statistics in a machine parseable
>> form (see the attached patch in case you find this feature useful).
>
> Interesting. Thanks, we will look into it.
> Which version of curl-loader you are using or the patch is against svn?
>
> Could you, please, subscribe to the mailing list and post there?
> https://lists.sourceforge.net/lists/listinfo/curl-loader-devel
> Thank you in advance.
>
> There are several other recent bugs fixed in svn, therefore we are
> recommending usage of the svn version.
>
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...................................................................
> Navigare necesse est, vivere non est necesse
> ...................................................................
> http://curl-loader.sourceforge.net
> An open-source HTTP/S, FTP/S traffic
> generating, and web testing tool.
From: Robert I. <cor...@gm...> - 2007-04-15 08:09:09
Alexandre,

On 4/15/07, Alexandre Bezroutchko <ab...@sc...> wrote:

> While doing so I have noticed a potential problem: in batch.h, in
> structure batch_context, you allocate very small buffers for file names:
> Here you add 6 extra characters to the batch name and store it in 36
> bytes buffer, so if somebody will actually use 31 characters batch name,
> the buffer will be overflown with 37 bytes.

Indeed, thanks, we will fix it.

> I have customized it to output its statistics in a machine parseable
> form (see the attached patch in case you find this feature useful).

Interesting. Thanks, we will look into it.
Which version of curl-loader are you using, or is the patch against svn?

Could you, please, subscribe to the mailing list and post there?
https://lists.sourceforge.net/lists/listinfo/curl-loader-devel
Thank you in advance.

There are several other recent bugs fixed in svn, therefore we recommend usage of the svn version.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
From: Robert I. <cor...@gm...> - 2007-04-08 07:38:16
The release adds:
- hyper mode, based on epoll from the libevent library;
- bugfixes, improved validation of environment and configuration.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
An open-source HTTP/S, FTP/S traffic generating, and web testing tool.
From: Robert I. <cor...@gm...> - 2007-03-26 15:43:52
Misha, please confirm that you see this e-mail.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
A powerful open-source HTTP/S, FTP/S traffic generating, loading and testing tool.
From: Robert I. <cor...@gm...> - 2007-03-24 12:34:43
This release adds support for:
- IPv6 addresses and IPv6 URIs;
- loading clients credentials for authentication (users and passwords) from file;
- custom HTTP/FTP headers;
- bug fixes;
- documentation improvement;
- our new web-site is at http://curl-loader.sourceforge.net

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
From: Robert I. <cor...@gm...> - 2007-03-24 09:47:00
> On 3/24/07, Jeremy Hicks <je...@no...> wrote:
> > initial_handles_init - error: alloc_client_post_buffers () .
> > batch_function - "login" initial_handles_init () failed.
>
> I do not think that VMWare or Ubuntu (Michael
> and another man are using Ubuntu with curl-loader successfully)
> are to blame.
>
> After a brief code analysis, however: when a user wants to use LOGIN
> for some kind of POST-ing (either GET+POST or POST-only),
> but provides a "not expected" (bug on our side) number of % symbols
> in LOGIN_POST_STR, the errors may appear.

I have committed a fix to the case validation. It does not necessarily fix the problem (till I get the batch configuration file and test it), but it is "The right way to do it" TM in any case.

It may be fetched from the curl-loader project SVN:
svn co https://curl-loader.svn.sourceforge.net/svnroot/curl-loader curl-loader
or I may cook and send the patch (if requested) for the latest version 0.27.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
A powerful open-source HTTP/S, FTP/S traffic generating, loading and testing tool.
From: Robert I. <cor...@gm...> - 2007-03-24 06:21:11
Jeremy,

On 3/24/07, Jeremy Hicks <je...@no...> wrote:
> initial_handles_init - error: alloc_client_post_buffers () .
> batch_function - "login" initial_handles_init () failed.

I do not think that VMWare or Ubuntu (Michael and another man are using Ubuntu with curl-loader successfully) are to blame.

After a brief code analysis, however: when a user wants to use LOGIN for some kind of POST-ing (either GET+POST or POST-only), but provides a "not expected" (bug on our side) number of % symbols in LOGIN_POST_STR, the errors may appear.

Currently, we count the number of % symbols in such a string and allow either 4 or 2 (loader.c, function alloc_init_client_post_buffers ()).

Two is for the cases when all users and passwords for all clients are the same (like "user=%s&password=%s"), or when users with passwords are loaded from a credentials file.

Four is for the case when we are generating unique credentials by adding a number (like "user=%s%d&password=%s%d").

Well, it could be the case that a user needs some more % symbols, e.g. to place spaces like %20. Therefore, on our side the fix will be to count the number of "%s" and "%d" strings and to verify that their sum comes to 4 or 2. Enhancement of the configuration validation also looks appropriate; a sketch of this check follows below.

When your configuration file is available with a brief explanation of what you are planning to do, we will make the fixes and patches promptly.

Thank you, and sorry for the bugs.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
A powerful open-source HTTP/S, FTP/S traffic generating, loading and testing tool.
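A sketch of the validation described above, not the code that was later committed: count the "%s" and "%d" conversions in LOGIN_POST_STR and accept only a total of 2 or 4, so that stray sequences such as %20 no longer trip the check. strstr()-based scanning is one simple way to do it; the function names are illustrative.

#include <string.h>

/* Count non-overlapping occurrences of <pattern> in <s>. */
static int count_occurrences(const char *s, const char *pattern)
{
    int count = 0;
    size_t plen = strlen(pattern);

    for (const char *p = strstr(s, pattern); p; p = strstr(p + plen, pattern))
        count++;
    return count;
}

/* Returns 0 if the post string contains exactly 2 or 4 string/number
 * conversions, which is what the loader expects; -1 otherwise. */
int validate_login_post_str(const char *post_str)
{
    int conversions = count_occurrences(post_str, "%s") +
                      count_occurrences(post_str, "%d");

    return (conversions == 2 || conversions == 4) ? 0 : -1;
}

With this check, "user=%s%d&password=%s%d" passes, and a string containing an extra %20 escape no longer counts against the limit because only "%s" and "%d" are tallied.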
From: Robert I. <cor...@gm...> - 2007-03-23 14:55:08
This release adds support for:
- IPv6 source addresses and IPv6 URIs;
- loading clients credentials for authentication (usernames and passwords) from file;
- custom HTTP/FTP headers;
- bug fixes;
- documentation improvement;
- our new web-site at: http://curl-loader.sourceforge.net/

Our hyper-mode, based on the hyper-mode of libcurl (curl_multi_socket() API + libevent epoll () event demultiplexing), is now "under construction" and progressing, yet not released for usage.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
From: Robert I. <cor...@gm...> - 2007-03-22 22:15:42
curl-loader is a HTTP/S and FTP/S performance testing and loading tool, based on libcurl and simulating thousands of clients, each with its own IP-address.

This release adds support for:
- IPv6 source addresses and IPv6 URIs;
- loading clients credentials for authentication (usernames and passwords) from file;
- custom HTTP/FTP headers;
- bug fixes;
- documentation improvement;
- our new web-site at: http://curl-loader.sourceforge.net/

Our hyper-mode, based on the hyper-mode of libcurl (curl_multi_socket() API + libevent epoll () event demultiplexing), is now "under construction" and progressing, yet not released for usage.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
A powerful open-source HTTP/S, FTP/S traffic generating, loading and testing tool.
From: Robert I. <cor...@gm...> - 2007-03-22 15:36:44
curl-loader is a HTTP/S and FTP/S performance testing and loading tool, based on libcurl and simulating thousands of clients, each with its own IP-address.

This release adds support for:
- IPv6 source addresses and IPv6 URIs;
- loading clients credentials for authentication (usernames and passwords) from file;
- custom HTTP/FTP headers;
- bug fixes;
- documentation improvement;
- our new web-site at: http://curl-loader.sourceforge.net/

Our hyper-mode, based on the hyper-mode of libcurl (curl_multi_socket() API + libevent epoll () event demultiplexing), is now "under construction" and progressing, yet not released for usage.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://curl-loader.sourceforge.net
A powerful open-source HTTP/S, FTP/S traffic generating, loading and testing tool.
From: Robert I. <cor...@gm...> - 2007-02-06 09:26:18
From the ChangeLog, version 0.24, 06/02/2007:

* Smooth mode now respects the interleave timeouts between urls, when the timeouts are configured to values in milliseconds above 10 msec. This is achieved by keeping clients in a heap-based timer queue. When an interleave timeout is specified as 0, the next url or next action of such clients is scheduled immediately, without timer-queue scheduling.

* Added support for gradual increase of the number of clients at the loading start. Use the tag CLIENTS_INITIAL_INC in the GENERAL section to specify the number of loading clients to be added each second until the total client number, specified by the CLIENTS_NUM tag, is reached (see the illustrative fragment below).

* The interleave timeout numbers will now be interpreted as milliseconds, not seconds as before. Take care to update your configuration files.

* The configuration file parser has been re-designed to be based on a TAG-to-tag_parser_func mapping. The big "object-oriented" parsing switch has been removed.

* Advanced to the newest libcurl version 7.16.1 to fix several FTP and multi-handle issues.

THE NEXT BOMB promised by Michael Moser is the hyper-mode, with support for tens of thousands of clients in a single batch (thread) using the libcurl hyper-API and libevent. This is what Michael is developing right now.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...................................................................
Navigare necesse est, vivere non est necesse
...................................................................
http://sourceforge.net/projects/curl-loader
A powerful open-source HTTP/S, FTP/S traffic generating, loading and testing tool.
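An illustrative configuration fragment for the two tags named in the ChangeLog; the exact file syntax and the TAG=value / comment conventions shown here are assumptions, and only the tag names come from the announcement. The idea: ramp up to the full client count gradually instead of starting all clients at once.

########### GENERAL SECTION ###########
CLIENTS_NUM=300          # total number of loading clients in the batch
CLIENTS_INITIAL_INC=20   # add 20 clients per second until CLIENTS_NUM is reached

With these assumed values the batch reaches its full 300 clients after roughly 15 seconds, which smooths the connection-establishment burst at startup.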
From: Robert I. <cor...@gm...> - 2006-11-04 16:13:00
--
Sincerely,
Robert Iakobashvili,
coroberti at gmail dot com
------------------------------------------------------------------
Navigare necesse est, vivere non est necesse.
------------------------------------------------------------------
http://sourceforge.net/projects/curl-loader
An ultimate HTTP/S performance testing tool.