curl-loader-devel Mailing List for curl-loader - web application testing (Page 20)
From: Robert I. <cor...@gm...> - 2009-08-24 06:13:16
Hi Val,

On Mon, Aug 24, 2009 at 4:03 AM, Val Shkolnikov <va...@nv...> wrote:
> Hi Robert,
> Attached is a patch for a bug that has to do with the debugging code I
> left in loader.c in my recent changes to curl-loader. I have just made it
> conditional in this patch, because the code adds a client name to the
> requested URL, which is sometimes handy for tracing requests; however,
> you may remove the debug code altogether. The patch is against SourceForge
> SVN rev 566 and is applied by running the following command in the
> curl-loader directory:
>
> patch <curl-loader-patch.566+vs3
>
> /Val

Applied. Thanks!

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
www.ghotit.com
Assistive technology that understands you
......................................................................
From: Val S. <va...@nv...> - 2009-08-24 01:15:53
Index: loader.c =================================================================== --- loader.c (revision 566) +++ loader.c (working copy) @@ -539,7 +539,7 @@ { if (! is_template(url)) /* Handled in update_url_from_set_or_template () above. GF */ { - // ###DEBUG### +#if DEBUG // curl_easy_setopt (handle, CURLOPT_URL, url->url_str); char buf[1000]; sprintf(buf,"%s.%ld.%ld.%s",url->url_str, @@ -547,7 +547,10 @@ cctx->client_name); buf[strlen(buf)-1] = '\0'; // suppress space curl_easy_setopt (handle, CURLOPT_URL, buf); - // ###DEBUG### +#else + curl_easy_setopt (handle, CURLOPT_URL, url->url_str); +#endif // DEBUG + } } } |
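The patch above turns the client-name URL suffix into a compile-time option: in a build with DEBUG defined the requested URL is tagged with cycle number, client index and client name for tracing, otherwise the configured URL is passed to CURLOPT_URL unchanged. The following standalone sketch only illustrates that gate; the sample URL and the simplified variables (url_str, cycle_num, client_index, client_name) are stand-ins, not the actual loader.c context:

#include <stdio.h>

int main(void)
{
    /* Stand-in values for the fields loader.c takes from its url and client contexts. */
    const char *url_str = "http://example.test/page";
    long cycle_num = 4;
    long client_index = 39;
    const char *client_name = "39";

    char buf[1000];
#if DEBUG
    /* Debug build: append ".cycle.client-index.client-name" so a request can be
       traced back to the virtual client that issued it. */
    snprintf(buf, sizeof(buf), "%s.%ld.%ld.%s",
             url_str, cycle_num, client_index, client_name);
#else
    /* Normal build: use the configured URL unchanged. */
    snprintf(buf, sizeof(buf), "%s", url_str);
#endif
    printf("request URL: %s\n", buf);  /* in loader.c this string goes to CURLOPT_URL */
    return 0;
}

Building with -DDEBUG=1 produces the tagged form; without it the plain URL is used, matching the #else branch the patch adds.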
From: Val S. <va...@nv...> - 2009-08-21 20:05:59
Hi Robert,

Thank you for accepting this. I've downloaded svn rev 566 and I am running both fixed and non-fixed rate; so far no problems.

I am not sure what you mean by the "riskiness" of the fixed rate. The worst thing I saw happening is that the fixed rate is not maintained. This happened to me in two cases:

1. Very slow server response to opening a connection (SYN). It seems libcurl does not return until the connection is established.
2. Very slow server response to HTTP requests. Here the number of available curl-loader clients gets exhausted and the rate falls to the level they can provide; my code complains about the absence of free clients.

In both of those cases the rate will be low in the non-fixed-rate mode as well, although that mode adapts to the load better (it will not complain :-)). But of course, it all depends on what load you really want to model.

Best regards,
/Val

> From: Robert Iakobashvili <cor...@gm...>
> To: curl-loader-devel <cur...@li...>
> Sent: Thursday, August 20, 2009 11:33:54 PM
> Subject: Re: Some changes to curl-loader
>
> Hi Val,
>
> On Fri, Aug 21, 2009 at 1:12 AM, Val Shkolnikov <va...@nv...> wrote:
>> Hi, Robert,
>> Well, I am sorry to say, but the README file does contain all the
>> stuff that the FAQ does; I do not know how I missed it. I've updated it
>> for both versions and I am attaching here the new patches that contain
>> those changes. Sorry for the inconvenience.
>> /Val
>
> Thanks, applied with some minor changes.
> Added you to our THANKS list.
>
> Could you, please, take the version from svn and test it a bit? Thanks.
> Fixed-rate load is a risky feature; therefore, please try to test
> without fixed-rate and see if it does not break anything.
>
> Best wishes!
> --
> Truly,
> Robert Iakobashvili, Ph.D.
From: Robert I. <cor...@gm...> - 2009-08-21 06:34:07
Hi Val,

On Fri, Aug 21, 2009 at 1:12 AM, Val Shkolnikov <va...@nv...> wrote:
> Hi, Robert,
> Well, I am sorry to say, but the README file does contain all the stuff
> that the FAQ does; I do not know how I missed it. I've updated it for both
> versions and I am attaching here the new patches that contain those
> changes. Sorry for the inconvenience.
> /Val

Thanks, applied with some minor changes.
Added you to our THANKS list.

Could you, please, take the version from svn and test it a bit? Thanks.
Fixed-rate load is a risky feature; therefore, please try to test
without fixed-rate and see if it does not break anything.

Best wishes!

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
www.ghotit.com
Assistive technology that understands you
......................................................................
From: Robert I. <cor...@gm...> - 2009-08-21 04:32:36
Hi Val,

On Fri, Aug 21, 2009 at 5:00 AM, Val Shkolnikov <va...@nv...> wrote:
> Hi,
> In the latest SVN revision 564, file curl-loader/parse_conf.c, line 3516:
>
> for (ind = 0, lineno = 1; getline(f_buf, sizeof f_buf, file) != 0; lineno++)
>
> replace getline with get_line.
>
> /Val

Fixed. Thank you very much!

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
www.ghotit.com
Assistive technology that understands you
......................................................................
From: Robert I. <cor...@gm...> - 2009-08-21 04:22:12
Hi Val,

On Fri, Aug 21, 2009 at 1:12 AM, Val Shkolnikov <va...@nv...> wrote:
> Hi, Robert,
> Well, I am sorry to say, but the README file does contain all the stuff
> that the FAQ does; I do not know how I missed it. I've updated it for both
> versions and I am attaching here the new patches that contain those
> changes. Sorry for the inconvenience.
> /Val

Thanks a lot! We will look at it shortly.

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
www.ghotit.com
Assistive technology that understands you
......................................................................
From: Val S. <va...@nv...> - 2009-08-21 02:00:39
Hi,

In the latest SVN revision 564, file curl-loader/parse_conf.c, line 3516:

for (ind = 0, lineno = 1; getline(f_buf, sizeof f_buf, file) != 0; lineno++)

replace getline with get_line.

/Val
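The clash reported here is with the getline(char **, size_t *, FILE *) prototype that modern glibc exposes through <stdio.h>, which collides with curl-loader's own fgets-style line reader, hence the rename to get_line in the fix. A minimal sketch of such a renamed helper (an illustration with an assumed fgets-based body, not the project's actual implementation) keeps the same (buffer, size, file) argument order used at the call site quoted above:

#include <stdio.h>
#include <string.h>

/* Assumed fgets-style reader matching the call get_line(f_buf, sizeof f_buf, file).
   Returns the buffer, or NULL at end of file / on error. */
static char *get_line(char *buf, int size, FILE *fp)
{
    if (!fgets(buf, size, fp))
        return NULL;
    buf[strcspn(buf, "\n")] = '\0';   /* drop the trailing newline, if any */
    return buf;
}

int main(void)
{
    char f_buf[512];
    int lineno;

    /* Echo stdin with line numbers -- the same loop shape as the parse_conf.c line above. */
    for (lineno = 1; get_line(f_buf, sizeof f_buf, stdin) != 0; lineno++)
        printf("%d: %s\n", lineno, f_buf);
    return 0;
}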
From: Val S. <va...@nv...> - 2009-08-20 22:39:49
diff -ur curl-loader/batch.h curl-loader.564+0.48.vs1/batch.h --- curl-loader/batch.h 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/batch.h 2009-08-13 12:00:01.000000000 -0700 @@ -82,6 +82,9 @@ /* Statistics file <batch-name>.txt */ char batch_statistics[BATCH_NAME_SIZE+BATCH_NAME_EXTRA_SIZE]; + /* Operational statistics file <batch-name>.ops */ + char batch_opstats[BATCH_NAME_SIZE+BATCH_NAME_EXTRA_SIZE]; + /* Maximum number of clients (each client with its own IP-address) in the batch */ int client_num_max; @@ -135,10 +138,15 @@ /* Number of cycles to repeat the urls downloads and afterwards sleeping - cycles. Zero means run it forever. + cycles. Zero means run it until time to run is exhausted. */ long cycles_num; + /* + Time to run in msec. Zero means time to run is infinite. + */ + unsigned long run_time; + /* User-agent string to appear in the HTTP 1/1 requests. */ @@ -247,6 +255,12 @@ /* The file to be used for statistics output */ FILE* statistics_file; + /* The file to be used for operational statistics output */ + FILE* opstats_file; + + /* Dump operational statistics indicator, 0: no dump */ + int dump_opstats; + /* Timestamp, when the loading started */ unsigned long start_time; @@ -267,6 +281,10 @@ op_stat_point op_delta; op_stat_point op_total; + /* Count of response times dumped before new-line, + used to limit line length */ + int ct_resps; + } batch_context; diff -ur curl-loader/ChangeLog curl-loader.564+0.48.vs1/ChangeLog --- curl-loader/ChangeLog 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/ChangeLog 2009-08-13 12:48:14.000000000 -0700 @@ -1,3 +1,9 @@ +Val Shkolnikov, June 21, 2009. + +* Directed operational statistics into a file with the extension .ops +* Added DUMP_OPSTATS and RUN_TIME options. +* Added time stamp to the .log file, streamlined logging using macros. +* Added a second -v option for varied level of logging detail. * Renamed function getline to get_line since folks from RedHat are polluting the namespace. Thanks to jari pietila <jar...@ho...> diff -ur curl-loader/conf.c curl-loader.564+0.48.vs1/conf.c --- curl-loader/conf.c 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/conf.c 2009-08-13 12:00:01.000000000 -0700 @@ -176,8 +176,8 @@ } break; - case 'v': - verbose_logging = 1; + case 'v': /* accumulate verbosity */ + verbose_logging += 1; break; case 'u': diff -ur curl-loader/doc/curl-loader.1 curl-loader.564+0.48.vs1/doc/curl-loader.1 --- curl-loader/doc/curl-loader.1 2009-08-09 13:27:20.000000000 -0700 +++ curl-loader.564+0.48.vs1/doc/curl-loader.1 2009-08-13 11:59:57.000000000 -0700 @@ -87,8 +87,9 @@ logical CPUs as seen by cat /proc/cpuinfo. .TP .B "\-v" -Request more verbose output to the log files, including information about -the headers sent and received. In some cases\-d option is a more informative. +Request more verbose output to the log files. If used twice, include +information about +the headers sent and received. In some cases \-d option is a more informative. .TP .B "\-u" Activate logging of URLs to the log file, when the \-v verbose option is used. diff -ur curl-loader/doc/curl-loader-config.5 curl-loader.564+0.48.vs1/doc/curl-loader-config.5 --- curl-loader/doc/curl-loader-config.5 2009-08-09 13:27:20.000000000 -0700 +++ curl-loader.564+0.48.vs1/doc/curl-loader-config.5 2009-08-13 11:59:57.000000000 -0700 @@ -182,10 +182,10 @@ .TP .B CYCLES_NUM This requires a valid signed integer value. This is the number of -cycles to be performed. If the value is -1, the +cycles to be performed. 
If the value is negative or 0, the .B curl\-loader -tool cycles forever until you hit ctl-C. This is a tag for the -general section. +tool cycles forever until the run time is exhausted or you hit ctl-C. +This is a tag for the general section. .TP .B USER_AGENT This requires a valid quoted string value. This is a way to override the @@ -194,6 +194,20 @@ .B HEADER tag. This is a tag for the general section. .TP +.B DUMP_OPSTATS +This requires a value of "yes" or "no". The operational statistics file +with the extension ".ops" is generated if the value is "yes" (the default) +and is not generated otherwise. +This is a tag for the general section. +.TP +.B RUN_TIME +This requires a value combined of up to 4 numbers separated with a colon, as +in D:H:M:S, H:M:S, M:S or S. Here D is days, H is hours, M is minutes +and S is seconds. The value specifies maximum run time of the curl-loader. +If the value is negative or 0, the curl-loader runs until the number of +cycles to be performed is exhausted or you hit ctl-C. +This is a tag for the general section. +.TP .B URLS_NUM This requires a valid unsigned integer value. This is the number of URLs in the URL section. This is a tag for the general section. @@ -663,4 +677,4 @@ /usr/share/doc/curl-loader/conf-examples/* for configuration examples .br .SH "SEE ALSO" -.BR curl\-loader (8) +.BR curl\-loader (1) diff -ur curl-loader/doc/README curl-loader.564+0.48.vs1/doc/README --- curl-loader/doc/README 2009-08-09 13:27:20.000000000 -0700 +++ curl-loader.564+0.48.vs1/doc/README 2009-08-20 14:06:26.000000000 -0700 @@ -1,4 +1,4 @@ -version 0.49, unstable, July 5, 2009 +version 0.49, unstable, April 9, 2009 2006 - 2009 Copyright (C) Robert Iakobashvili, <cor...@gm...>, Ashdod, Israel @@ -444,14 +444,28 @@ when is appropriate, is to/from DNS server for resolving. Firewalling (netfilter/iptables) settings are also among those to check. -CYCLES_NUM is the number of cycles to be performed. Any positive value is valid -here as well as (-1), which means cycle indefinitely. Loading can be normally +CYCLES_NUM is the number of cycles to be performed if the value is positive. +A zero or negative number means indefinite number of cycles, but see the +RUN_TIME tag below. The default value is zero. Loading can be normally stopped at any moment by pressing Cntl-C; all statistics will be collected properly and files closed to contain necessary information. Note, that cycling relates only to the URLs not containing tag URL_DONT_CYCLE = 1. Curl-loader enables two optional non-cycling areas: the first before the cycling url/s area and the second after the cycling url/s area. +DUMP_OPSTATS is the flag indicating whether to write out a fairly voluminous +operational statistics file. It takes values "yes" or "no". The default +value is "yes". + +RUN_TIME is the maximum time to run the load. It is specified as up to +four numbers: hours, days, minutes and seconds separated by a colon (":"). +When less than four numbers are specified, the senior categories are assumed +to be omitted. That is 1:1:0:0 means one day and one hour, 1:1 means one +minute and 1 second, 10000 means 10000 seconds, etc. A zero or negative +value means no limit to run time. As noted above, the curl-loader may +stop before the maximum time, if any, is reached if either the number +of clycles to be performed runs out or the Cntl-C is pressed on the keyboard. + USER_AGENT provides an option to over-write the default MSIE-6-like HTTP header User-Agent. 
Place here a quoted string to emulate the browser that you need. The header is entered globally. If you need an option to customize it on a per-URL @@ -777,7 +791,7 @@ () based)] -r[euse connections disabled. Closes TCP-connections and re-open them. Try with and without] --v[erbose output to the logfiles; includes info about headers sent/received] +-v[erbose output to the logfiles; includes info about headers sent/received. Increase the level of verbosity by using this option twice] -u[rl logging - logs url names to logfile, when -v verbose option is used] -w[arnings skip] -x[set|unset proxy] "<proxy:port>" @@ -845,16 +859,37 @@ Detailed log is written to the file named: <batch-name>.log: -The semantics of logfile output, using command line options -v (verbose) and -u -(url print): -"Cycle number", "Client number (ip-address)" - some information string, e.g.: - -4 39 (192.168.0.39) :== Info: Trying 10.30.6.42... : eff-url: -http://10.30.6.42:8888/server/Admin/ServiceList.do, url: - -Which means: cycle: 4, client number 39 with ipv4 address (192.168.0.39), status -of the message is Info, eff-url - is the url, used right now, "url:" is empty, -which means, that it is the same as effective. +The semantics of logfile output, using command line options -v -v (very +verbose) and -u (url print) are as follows. + +The first line is the time of the load start in milliseconds and in text +form: + +# 1248743647809 Mon Jul 27 18:14:07 2009 + +The second line identifies log columns: + +# msec_offset cycle_no url_no client_no (ip) indic info + +That is the columns are: + + - time offset from start in milliseconds + - cycle number + - url number + - client number + - client ip address in parentheses + - indicator, qualifies the message as setup (==), send (>=), + receive (<=) or response (!!) + - information string + +Subsequent lines look as follows. + +1234 4 3 39 (192.168.0.39) == Trying 10.30.6.42... : eff-url: http://10.30.6.42 :8888/server/Admin/ServiceList.do, url: + +Which means: 1234 msec from start, cycle 4, url number 3, client number 39 +with ipv4 address (192.168.0.39), type of the message is setup, eff-url - +is the url, used right now, "url:" is empty, which means, that it is the +same as effective. Effective url may be a result of redirection and, thus, "url:" (target url, specified in batch configuration file) will be printed as well. 
diff -ur curl-loader/loader.c curl-loader.564+0.48.vs1/loader.c --- curl-loader/loader.c 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/loader.c 2009-08-13 12:50:13.000000000 -0700 @@ -48,7 +48,6 @@ #include <curl/multi.h> // getrlimit -#include <sys/time.h> #include <sys/resource.h> #include "batch.h" @@ -118,6 +117,14 @@ user_activity_smooth }; +static FILE *create_file (batch_context* bctx, char* fname) +{ + FILE *fp = fopen(fname,"w"); + if (!fp) + (void)fprintf(stderr,"%s, cannot create file \"%s\", %s\n", + bctx->batch_name,fname,strerror(errno)); + return fp; +} int main (int argc, char *argv []) @@ -258,6 +265,7 @@ batch_context* bctx = (batch_context *) batch_data; FILE* log_file = 0; FILE* statistics_file = 0; + FILE* opstats_file = 0; int rval = -1; @@ -273,34 +281,38 @@ /* Init batch logfile for the batch client output */ - sprintf (bctx-> batch_logfile, "./%s.log", bctx->batch_name); - - if (!(log_file = fopen(bctx-> batch_logfile, "w"))) - { - fprintf (stderr, - "%s - \"%s\" - failed to open file \"%s\" with errno %d.\n", - __func__, bctx->batch_name, bctx-> batch_logfile, errno); + (void)sprintf (bctx-> batch_logfile, "./%s.log", bctx->batch_name); + if (!(log_file = create_file(bctx,bctx->batch_logfile))) return NULL; + else + { + char tbuf[256]; + (void)fprintf(log_file,"# %ld %s",get_tick_count(),ascii_time(tbuf)); + (void)fprintf(log_file, + "# msec_offset cycle_no url_no client_no (ip) indic info\n"); } } /* Init batch statistics file */ - sprintf (bctx->batch_statistics, "./%s.txt", bctx->batch_name); - - if (!(statistics_file = fopen(bctx->batch_statistics, "w"))) - { - fprintf (stderr, - "%s - \"%s\" - failed to open file \"%s\" with errno %d.\n", - __func__, bctx->batch_name, bctx->batch_statistics, errno); + (void)sprintf (bctx->batch_statistics, "./%s.txt", bctx->batch_name); + if (!(bctx->statistics_file = statistics_file = create_file(bctx, + bctx->batch_statistics))) return NULL; - } else - { - bctx->statistics_file = statistics_file; print_statistics_header (statistics_file); - } + + /* + Init batch operational statistics file + */ + if (bctx->dump_opstats) + { + (void)sprintf (bctx->batch_opstats, "./%s.ops", bctx->batch_name); + if (!(bctx->opstats_file = opstats_file = create_file(bctx, + bctx->batch_opstats))) + return NULL; + } /* Init the objects, containing client-context information. 
@@ -349,6 +361,9 @@ if (statistics_file) fclose (statistics_file); + if (opstats_file) + fclose (opstats_file); + free_batch_data_allocations (bctx); return NULL; @@ -879,8 +894,8 @@ snprintf (body_file, sizeof (body_file) -1, "%s/cl-%Zu-cycle-%ld.body", url->dir_log, - cctx->client_index, - cctx->cycle_num + (unsigned)cctx->client_index, + (unsigned long)cctx->cycle_num ); if (cctx->logfile_bodies) @@ -909,8 +924,8 @@ snprintf (hdr_file, sizeof (hdr_file) -1, "%s/cl-%Zu-cycle-%ld.hdr", url->dir_log, - cctx->client_index, - cctx->cycle_num + (unsigned)cctx->client_index, + (unsigned long)cctx->cycle_num ); if (cctx->logfile_headers) @@ -1108,6 +1123,36 @@ return 0; } +#define write_log(ind, data) \ + if (1) {\ + char *end = data+strlen(data)-1;\ + if (*end == '\n')\ + *end = '\0';\ + (void)fprintf(cctx->file_output,"%ld %ld %ld %s%s %s",\ + offs_resp, cctx->cycle_num, cctx->url_curr_index, cctx->client_name,\ + ind, data);\ + if (url_print)\ + (void)fprintf(cctx->file_output," eff-url: url %s",url);\ + if (url_diff)\ + (void)fprintf(cctx->file_output," url: url %s",url_target);\ + (void)fprintf(cctx->file_output,"\n");\ + } + +#define write_log_num(ind, num) \ + if (1) {\ + char buf[100];\ + (void)snprintf(buf,sizeof(buf),"%ld",num);\ + write_log(ind,buf);\ + } + +#define write_log_ext(ind, num, ext) \ + if (1) {\ + char buf[200];\ + (void)snprintf(buf,sizeof(buf),"%ld %s",num, ext);\ + write_log(ind,buf);\ + } + +#define startswith(str, start) !strncmp((char *)str,(char *)start,strlen((char *)start)) /**************************************************************************************** * Function name - client_tracing_function @@ -1172,19 +1217,20 @@ */ scan_response(type, (char*) data, size, cctx); + const unsigned long time_resp = get_tick_count(); + const unsigned long offs_resp = time_resp - cctx->bctx->start_time; + switch (type) { case CURLINFO_TEXT: if (verbose_logging) - fprintf(cctx->file_output, "%ld %s :== Info: %s: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, data, - url_print ? url : "", url_diff ? url_target : ""); + if (verbose_logging > 1 || startswith(data,"About") || + startswith(data,"Closing")) + write_log("==",(char *)data); break; case CURLINFO_ERROR: - fprintf(cctx->file_output, "%ld %s !! ERROR: %s: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, data, - url_print ? url : "", url_diff ? url_target : ""); + write_log("!! ERR",(char *)data); cctx->client_state = CSTATE_ERROR; @@ -1193,9 +1239,8 @@ break; case CURLINFO_HEADER_OUT: - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s => Send header: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, url_print ? url : "", url_diff ? url_target : ""); + if (verbose_logging > 1) + write_log("=>","Send header"); stat_data_out_add (cctx, (unsigned long) size); @@ -1204,26 +1249,21 @@ /* First header of the HTTP-request. */ first_hdr_req_inc (cctx); stat_req_inc (cctx); /* Increment number of requests */ - cctx->req_sent_timestamp = get_tick_count (); } first_hdrs_clear_non_req (cctx); break; case CURLINFO_DATA_OUT: - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s => Send data: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, url_print ? url : "", - url_diff ? 
url_target : ""); + if (verbose_logging > 1) + write_log("=>","Send data"); stat_data_out_add (cctx, (unsigned long) size); first_hdrs_clear_all (cctx); break; case CURLINFO_SSL_DATA_OUT: - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s => Send ssl data: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, url_print ? url : "", - url_diff ? url_target : ""); + if (verbose_logging > 1) + write_log("=>","Send ssl data"); stat_data_out_add (cctx, (unsigned long) size); first_hdrs_clear_all (cctx); @@ -1241,10 +1281,8 @@ curl_easy_getinfo (handle, CURLINFO_RESPONSE_CODE, &response_status); - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s <= Recv header:%ld eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, url_print ? url : "", - url_diff ? url_target : ""); + if (verbose_logging > 1) + write_log_num("<= Recv header:",response_status); response_module = response_status / (long)100; @@ -1254,16 +1292,12 @@ case 1: /* 100-Continue and 101 responses */ if (! first_hdr_1xx (cctx)) { - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s:!! %ld CONTINUE: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, - url_print ? url : "", url_diff ? url_target : ""); + write_log_num("!! CONT",response_status); /* First header of 1xx response */ first_hdr_1xx_inc (cctx); stat_1xx_inc (cctx); /* Increment number of 1xx responses */ - const unsigned long time_1xx_resp = get_tick_count (); - stat_appl_delay_add (cctx, time_1xx_resp); + stat_appl_delay_add (cctx, time_resp); } first_hdrs_clear_non_1xx (cctx); @@ -1273,19 +1307,15 @@ case 2: /* 200 OK */ if (! first_hdr_2xx (cctx)) { - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s:!! %ld OK: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, - url_print ? url : "", url_diff ? url_target : ""); + write_log_num("!! OK",response_status); /* First header of 2xx response */ first_hdr_2xx_inc (cctx); stat_2xx_inc (cctx); /* Increment number of 2xx responses */ /* Count into the averages HTTP/S server response delay */ - const unsigned long time_2xx_resp = get_tick_count (); - stat_appl_delay_2xx_add (cctx, time_2xx_resp); - stat_appl_delay_add (cctx, time_2xx_resp); + stat_appl_delay_2xx_add (cctx, time_resp); + stat_appl_delay_add (cctx, time_resp); } first_hdrs_clear_non_2xx (cctx); break; @@ -1293,15 +1323,12 @@ case 3: /* 3xx REDIRECTIONS */ if (! first_hdr_3xx (cctx)) { - fprintf(cctx->file_output, "%ld %s:!! %ld REDIRECTION: %s: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, data, - url_print ? url : "", url_diff ? url_target : ""); + write_log_num("!! RDR",response_status); /* First header of 3xx response */ first_hdr_3xx_inc (cctx); stat_3xx_inc (cctx); /* Increment number of 3xx responses */ - const unsigned long time_3xx_resp = get_tick_count (); - stat_appl_delay_add (cctx, time_3xx_resp); + stat_appl_delay_add (cctx, time_resp); } first_hdrs_clear_non_3xx (cctx); break; @@ -1309,16 +1336,13 @@ case 4: /* 4xx Client Error */ if (! first_hdr_4xx (cctx)) { - fprintf(cctx->file_output, "%ld %s :!! %ld CLIENT_ERROR : %s: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, data, - url_print ? url : "", url_diff ? url_target : ""); + write_log_ext("!! 
CLER",response_status,data); /* First header of 4xx response */ first_hdr_4xx_inc (cctx); stat_4xx_inc (cctx); /* Increment number of 4xx responses */ - const unsigned long time_4xx_resp = get_tick_count (); - stat_appl_delay_add (cctx, time_4xx_resp); + stat_appl_delay_add (cctx, time_resp); } first_hdrs_clear_non_4xx (cctx); break; @@ -1327,24 +1351,20 @@ case 5: /* 5xx Server Error */ if (! first_hdr_5xx (cctx)) { - fprintf(cctx->file_output, "%ld %s :!! %ld SERVER_ERROR : %s: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, data, - url_print ? url : "", url_diff ? url_target : ""); + write_log_ext("!! SRER",response_status,data); /* First header of 5xx response */ first_hdr_5xx_inc (cctx); stat_5xx_inc (cctx); /* Increment number of 5xx responses */ - const unsigned long time_5xx_resp = get_tick_count (); - stat_appl_delay_add (cctx, time_5xx_resp); + stat_appl_delay_add (cctx, time_resp); } first_hdrs_clear_non_5xx (cctx); break; default : - fprintf(cctx->file_output, - "%ld %s:<= WARNING: parsing error: wrong response code (FTP?) %ld .\n", - cctx->cycle_num, cctx->client_name, response_status /*(char*) data*/); + write_log_num("<= WARNING: wrong response code (FTP?)", + response_status); /* FTP breaks it: - cctx->client_state = CSTATE_ERROR; */ break; } @@ -1375,9 +1395,11 @@ break; case CURLINFO_DATA_IN: - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s <= Recv data: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, + if (verbose_logging > 1) + (void)fprintf(cctx->file_output, + "%ld %ld %ld %s<= Recv data: eff-url: %s, url: %s\n", + offs_resp, cctx->cycle_num, cctx->url_curr_index, + cctx->client_name, url_print ? url : "", url_diff ? url_target : ""); stat_data_in_add (cctx, (unsigned long) size); @@ -1385,10 +1407,8 @@ break; case CURLINFO_SSL_DATA_IN: - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s <= Recv ssl data: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, - url_print && url ? url : "", url_diff ? url_target : ""); + if (verbose_logging > 1) + write_log("<=","Recv ssl data"); stat_data_in_add (cctx, (unsigned long) size); first_hdrs_clear_all (cctx); @@ -1447,10 +1467,15 @@ */ cctx->cycle_num = 0; - snprintf(cctx->client_name, sizeof(cctx->client_name) - 1, + if (verbose_logging > 1) + snprintf(cctx->client_name, sizeof(cctx->client_name) - 1, "%d (%s) ", i + 1, bctx->ip_addr_array[i]); + else + /* Shorten client name for low logging */ + snprintf(cctx->client_name, sizeof(cctx->client_name) - 1, + "%d ",i+1); /* Mark timer-ids as non-valid. */ cctx->tid_sleeping = cctx->tid_url_completion = -1; diff -ur curl-loader/loader_fsm.c curl-loader.564+0.48.vs1/loader_fsm.c --- curl-loader/loader_fsm.c 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/loader_fsm.c 2009-08-13 12:00:01.000000000 -0700 @@ -319,14 +319,16 @@ Therefore, remembering here possible error state. */ int recoverable_error_state = cctx->client_state; - + if (bctx->run_time && (now_time - bctx->start_time >= bctx->run_time)) + rval_load = CSTATE_FINISHED_OK; + else /* Initialize virtual client's CURL handle for the next step of loading by calling load_<state-name>_state() function relevant for a particular client state. 
*/ - rval_load = load_state_func_table[cctx->client_state+1] (cctx, - now_time, - &interleave_waiting_time); + rval_load = load_state_func_table[cctx->client_state+1] (cctx, + now_time, + &interleave_waiting_time); /* Update operational statistics @@ -668,6 +670,7 @@ cctx->preload_url_curr_index = cctx->url_curr_index; /* Schedule the client immediately */ + cctx->req_sent_timestamp = now_time; if (curl_multi_add_handle (bctx->multiple_handle, cctx->handle) == CURLM_OK) { unsigned long timer_url_completion = 0; @@ -1071,7 +1074,7 @@ { advance_cycle_num (cctx); - if (cctx->cycle_num >= bctx->cycles_num) + if (bctx->cycles_num && cctx->cycle_num >= bctx->cycles_num) { return (cctx->client_state = CSTATE_ERROR); } @@ -1127,7 +1130,7 @@ // advance_cycle_num (cctx); - if (cctx->cycle_num == bctx->cycles_num) + if (bctx->cycles_num && cctx->cycle_num == bctx->cycles_num) { // Cycling completed diff -ur curl-loader/Makefile curl-loader.564+0.48.vs1/Makefile --- curl-loader/Makefile 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/Makefile 2009-08-13 12:46:29.000000000 -0700 @@ -171,6 +171,6 @@ $(OBJ_DIR)/%.o: %.c $(CC) $(CFLAGS) $(PROF_FLAG) $(OPT_FLAGS) $(DEBUG_FLAGS) $(INCDIR) -c -o $(OBJ_DIR)/$*.o $< -$(CONF_OBJ): +$(CONF_OBJ): parse_conf.c rm -rf $(OBJ_DIR)/parse_conf.o $(CC) $(CFLAGS) $(PROF_FLAG) $(DEBUG_FLAGS) $(INCDIR) -c -o $(OBJ_DIR)/parse_conf.o parse_conf.c diff -ur curl-loader/parse_conf.c curl-loader.564+0.48.vs1/parse_conf.c --- curl-loader/parse_conf.c 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/parse_conf.c 2009-08-13 13:05:19.000000000 -0700 @@ -102,8 +102,10 @@ static int ip_addr_max_parser (batch_context*const bctx, char*const value); static int ip_shared_num_parser (batch_context*const bctx, char*const value); static int cycles_num_parser (batch_context*const bctx, char*const value); +static int run_time_parser (batch_context*const bctx, char*const value); static int user_agent_parser (batch_context*const bctx, char*const value); static int urls_num_parser (batch_context*const bctx, char*const value); +static int dump_opstats_parser (batch_context*const bctx, char*const value); /* * URL section tag parsers. 
@@ -172,8 +174,10 @@ {"IP_ADDR_MAX", ip_addr_max_parser}, {"IP_SHARED_NUM", ip_shared_num_parser}, {"CYCLES_NUM", cycles_num_parser}, + {"RUN_TIME", run_time_parser}, {"USER_AGENT", user_agent_parser}, {"URLS_NUM", urls_num_parser}, + {"DUMP_OPSTATS", dump_opstats_parser}, /*------------------------ URL SECTION -------------------------------- */ @@ -869,6 +873,43 @@ } return 0; } + +static int run_time_parser (batch_context*const bctx, char*const value) +{ + char *token = 0, *strtokp = 0; + const char *delim = ":"; + long dhms[4]; + int ct = 0; + const int max_ct = sizeof(dhms)/sizeof(*dhms); + char value_buf[100]; + // preserve value from destruction by strtok + (void)strncpy(value_buf,value,sizeof(value_buf)); + (void)memset((char *)dhms,0,sizeof(dhms)); + for (token = strtok_r(value_buf,delim,&strtokp); + token != 0; token = strtok_r(0,delim,&strtokp), ct++) + { + if (ct >= max_ct) + { + (void)fprintf(stderr, + "%s - error: a value in the form [[[D:]H:]M:]S is expected" + " for tag RUN_TIME\n", __func__); + return -1; + } + int i = 0; + while (++i < max_ct) + dhms[i - 1] = dhms[i]; + dhms[max_ct - 1] = atol(token); + } + long run_time = (60 * (60 * (24 * dhms[0] + dhms[1]) + dhms[2]) + dhms[3])* + 1000; // msecs + if (run_time < 0) + { + run_time = LONG_MAX - 1; + } + bctx->run_time = (unsigned long)run_time; + return 0; +} + static int clients_rampup_inc_parser (batch_context*const bctx, char*const value) { bctx->clients_rampup_inc = atol (value); @@ -918,6 +959,21 @@ return 0; } +static int dump_opstats_parser (batch_context*const bctx, char*const value) +{ + if (value[0] == 'Y' || value[0] == 'y' || + value[0] == 'N' || value[0] == 'n') + bctx->dump_opstats = (value[0] == 'Y' || value[0] == 'y'); + else + { + fprintf (stderr, + "%s - error: DUMP_OPSTATS value (%s) must start with Y|y|N|n.\n", + __func__, value); + return -1; + } + return 0; +} + static int url_parser (batch_context*const bctx, char*const value) { size_t url_length = 0; @@ -2730,6 +2786,12 @@ set_default_response_errors_table (); + /* for compatibility with older configurations set default value + of dump_opstats to 1 ("yes") */ + unsigned i; + for (i = 0; i < bctx_array_size; i++) + bctx_array[i].dump_opstats = 1; + while (fgets (fgets_buff, sizeof (fgets_buff) - 1, fp)) { // fprintf (stderr, "%s - processing file string \"%s\n", diff -ur curl-loader/statistics.c curl-loader.564+0.48.vs1/statistics.c --- curl-loader/statistics.c 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/statistics.c 2009-08-13 12:00:01.000000000 -0700 @@ -27,6 +27,7 @@ #include <errno.h> #include <stdlib.h> #include <string.h> +#include <time.h> #include "batch.h" #include "client.h" @@ -57,7 +58,8 @@ stat_point *sd, unsigned long period); -static void print_operational_statistics (op_stat_point*const osp_curr, +static void print_operational_statistics (FILE *opstats_file, + op_stat_point*const osp_curr, op_stat_point*const osp_total, url_context* url_arr); @@ -400,7 +402,8 @@ } op_stat_point_add (&bctx->op_total, &bctx->op_delta); - print_operational_statistics (&bctx->op_delta, + print_operational_statistics (bctx->opstats_file, + &bctx->op_delta, &bctx->op_total, bctx->url_ctx_array); @@ -430,13 +433,34 @@ } dump_clients (cctx); - fprintf (stderr, "\nExited. For details look in the files:\n" + (void)fprintf (stderr, "\nExited. 
For details look in the files:\n" "- %s.log for errors and traces;\n" "- %s.txt for loading statistics;\n" - "- %s.ctx for virtual client based statistics.\n" - "You may add -v and -u options to the command line for more " - "verbouse output to %s.log file.\n", bctx->batch_name, bctx->batch_name, - bctx->batch_name, bctx->batch_name); + "- %s.ctx for virtual client based statistics.\n", + bctx->batch_name, bctx->batch_name, bctx->batch_name); + if (bctx->dump_opstats) + (void)fprintf (stderr,"- %s.ops for operational statistics.\n", + bctx->batch_name); + (void)fprintf (stderr, + "Add -v and -u options to the command line for " + "verbose output to %s.log file.\n",bctx->batch_name); +} + +/****** +* Function name - ascii_time +* +* Description - evaluate current time in ascii +* +* Input - *tbuf - pointer to time buffer +* Return - tbuf filled with time +******/ + +char *ascii_time (char *tbuf) +{ + time_t timeb; + + (void)time(&timeb); + return ctime_r(&timeb,tbuf); } /**************************************************************************************** @@ -593,7 +617,8 @@ op_stat_point_add (&bctx->op_total, &bctx->op_delta ); - print_operational_statistics (&bctx->op_delta, + print_operational_statistics (bctx->opstats_file, + &bctx->op_delta, &bctx->op_total, bctx->url_ctx_array); @@ -811,29 +836,33 @@ /*********************************************************************************** * Function name - print_operational_statistics * -* Description - Prints to stdout number of login, UAS - for each URL and logoff operations +* Description - writes number of login, UAS - for each URL and logoff operations * success and failure numbers * -* Input - *osp_curr - pointer to the current operational statistics point +* Input - *opstats_file - FILE pointer +* *osp_curr - pointer to the current operational statistics point * *osp_total - pointer to the current operational statistics point * * Return Code/Output - None *************************************************************************************/ -static void print_operational_statistics (op_stat_point*const osp_curr, +static void print_operational_statistics (FILE *opstats_file, + op_stat_point*const osp_curr, op_stat_point*const osp_total, url_context* url_arr) { - if (!osp_curr || !osp_total) + if (!osp_curr || !osp_total || !opstats_file) return; - fprintf (stdout, " Operations:\t\t Success\t\t Failed\t\t\tTimed out\n"); + (void)fprintf (opstats_file, + " Operations:\t\t Success\t\t Failed\t\t\tTimed out\n"); if (osp_curr->url_num && (osp_curr->url_num == osp_total->url_num)) { unsigned long i; for (i = 0; i < osp_curr->url_num; i++) { - fprintf (stdout, "URL%ld:%-12.12s\t%-6ld %-8ld\t\t%-6ld %-8ld\t\t%-6ld %-8ld\n", + (void)fprintf (opstats_file, + "URL%ld:%-12.12s\t%-6ld %-8ld\t\t%-6ld %-8ld\t\t%-6ld %-8ld\n", i, url_arr[i].url_short_name, osp_curr->url_ok[i], osp_total->url_ok[i], osp_curr->url_failed[i], osp_total->url_failed[i], diff -ur curl-loader/statistics.h curl-loader.564+0.48.vs1/statistics.h --- curl-loader/statistics.h 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/statistics.h 2009-08-13 12:00:01.000000000 -0700 @@ -216,10 +216,22 @@ ****************************************************************************************/ void dump_final_statistics (struct client_context* cctx); +/****** +* Function name - ascii_time +* +* Description - evaluate current time in ascii +* +* Input - *tbuf - pointer to time buffer +* Return - tbuf filled with time +******/ + +char *ascii_time (char *tbuf); + 
/**************************************************************************************** -* Function name - dump_snapshot_interval and up to the interval time summary statistics +* Function name - dump_snapshot_interval * * Description - Dumps summary statistics since the start of load +* and up to the interval * * Input - *bctx - pointer to batch structure * now - current time in msec since the epoch |
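The RUN_TIME tag added by this patch accepts up to four colon-separated numbers, [[[D:]H:]M:]S, and run_time_parser converts them to milliseconds by shifting the tokens so that the last field is always seconds. The following self-contained sketch restates that conversion (error handling is reduced to a -1 return; it is an illustration, not a drop-in replacement for the parser above):

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Convert "[[[D:]H:]M:]S" to milliseconds; returns -1 if there are more than four fields. */
static long run_time_msec(const char *value)
{
    long dhms[4] = {0, 0, 0, 0};          /* days, hours, minutes, seconds */
    char buf[100];
    char *save = NULL, *tok;
    int ct, i;

    strncpy(buf, value, sizeof buf - 1);  /* strtok_r modifies its input */
    buf[sizeof buf - 1] = '\0';

    for (ct = 0, tok = strtok_r(buf, ":", &save); tok;
         tok = strtok_r(NULL, ":", &save), ct++)
    {
        if (ct >= 4)
            return -1;
        for (i = 1; i < 4; i++)           /* shift earlier fields toward "days" */
            dhms[i - 1] = dhms[i];
        dhms[3] = atol(tok);              /* the newest token is seconds, for now */
    }
    return (60L * (60L * (24L * dhms[0] + dhms[1]) + dhms[2]) + dhms[3]) * 1000L;
}

int main(void)
{
    printf("%ld\n", run_time_msec("1:1:0:0"));  /* one day and one hour -> 90000000 */
    printf("%ld\n", run_time_msec("1:1"));      /* one minute and one second -> 61000 */
    printf("%ld\n", run_time_msec("10000"));    /* 10000 seconds -> 10000000 */
    return 0;
}

This matches the README wording in the patch: 1:1:0:0 is one day and one hour, 1:1 is one minute and one second, and a single number is seconds.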
From: Robert I. <cor...@gm...> - 2009-08-18 15:56:17
Dear Val,

On Fri, Aug 14, 2009 at 10:44 PM, Val Shkolnikov <va...@nv...> wrote:
>> Could you, please, generate the patch against the svn version or at
>> least version 0.49?
>
> I have generated the patches against SourceForge svn revision 564. I did
> not update the README file as you suggested, because there is no explanation
> of options and tags there. Instead, my patches contain updates for the
> doc/curl-loader.1 and doc/curl-loader-config.5 man pages. In addition, I
> have updated the curl-loader-faq.html file as I downloaded it from the
> SourceForge web page (I think you should put it under svn as well).
>
> Now, since my last submission, I have added another feature. It allows
> curl-loader to be set up to produce a fixed number of requests per second,
> via a REQ_RATE parameter in the config file. This is needed when a request
> rate was measured in the field and you want to reproduce it in the lab.
>
> So I am attaching two sets of patches for your review:
>
> - Code (curl-loader.patch.564+048.vs1) and faq
>   (curl-loader-faq.patch.564+vs1) patches for going from 564 to my first
>   set of features (DUMP_OPSTATS, RUN_TIME, time-stamp logging, levels of
>   logging), vs1.
>
> - Code (curl-loader.patch.048.vs1+vs2) and faq
>   (curl-loader-faq.patch.vs1+vs2) patches for going from vs1 to the fixed
>   request rate feature, vs2.
>
> You should be able to apply the code changes by checking out revision 564
> and running in the curl-loader directory the command
>
> patch -p1 <curl-loader.patch.564+048.vs1
>
> and then
>
> patch -p1 <curl-loader.patch.vs1+vs2
>
> Similarly for the faq changes.
>
> Regards, /Val

Thanks, I will look into that.

Best wishes!

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
www.ghotit.com
Assistive technology that understands you
......................................................................
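The fixed-request-rate (REQ_RATE) patch itself is not shown in this excerpt, so the following is only a generic illustration of the idea, not Val's implementation: one simple way to hold a fixed rate is to convert the configured requests-per-second value into an inter-request interval and start a new request only when that interval has elapsed.

#include <stdio.h>

typedef struct {
    long req_rate;             /* desired requests per second (hypothetical config value) */
    unsigned long last_start;  /* msec timestamp of the last request started */
} pacer;

/* Return 1 if a new request may be started at now_msec, 0 otherwise. */
static int may_start_request(pacer *p, unsigned long now_msec)
{
    const unsigned long interval = 1000UL / (unsigned long)p->req_rate;

    if (now_msec - p->last_start < interval)
        return 0;              /* too early: starting now would exceed the rate */
    p->last_start = now_msec;
    return 1;
}

int main(void)
{
    pacer p = { 20, 0 };       /* e.g. 20 requests/second -> one every 50 msec */
    unsigned long t;
    int started = 0;

    for (t = 0; t <= 1000; t += 10)   /* simulate one second of 10 msec ticks */
        started += may_start_request(&p, t);

    printf("requests started in one second: %d\n", started);
    return 0;
}

As noted in Val's follow-up above, such a rate can only be held while free virtual clients are available and the server responds quickly enough; otherwise the achieved rate drops.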
From: Val S. <va...@nv...> - 2009-08-14 20:11:29
diff -ur curl-loader/batch.h curl-loader.564+0.48.vs1/batch.h --- curl-loader/batch.h 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/batch.h 2009-08-13 12:00:01.000000000 -0700 @@ -82,6 +82,9 @@ /* Statistics file <batch-name>.txt */ char batch_statistics[BATCH_NAME_SIZE+BATCH_NAME_EXTRA_SIZE]; + /* Operational statistics file <batch-name>.ops */ + char batch_opstats[BATCH_NAME_SIZE+BATCH_NAME_EXTRA_SIZE]; + /* Maximum number of clients (each client with its own IP-address) in the batch */ int client_num_max; @@ -135,10 +138,15 @@ /* Number of cycles to repeat the urls downloads and afterwards sleeping - cycles. Zero means run it forever. + cycles. Zero means run it until time to run is exhausted. */ long cycles_num; + /* + Time to run in msec. Zero means time to run is infinite. + */ + unsigned long run_time; + /* User-agent string to appear in the HTTP 1/1 requests. */ @@ -247,6 +255,12 @@ /* The file to be used for statistics output */ FILE* statistics_file; + /* The file to be used for operational statistics output */ + FILE* opstats_file; + + /* Dump operational statistics indicator, 0: no dump */ + int dump_opstats; + /* Timestamp, when the loading started */ unsigned long start_time; @@ -267,6 +281,10 @@ op_stat_point op_delta; op_stat_point op_total; + /* Count of response times dumped before new-line, + used to limit line length */ + int ct_resps; + } batch_context; diff -ur curl-loader/ChangeLog curl-loader.564+0.48.vs1/ChangeLog --- curl-loader/ChangeLog 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/ChangeLog 2009-08-13 12:48:14.000000000 -0700 @@ -1,3 +1,9 @@ +Val Shkolnikov, June 21, 2009. + +* Directed operational statistics into a file with the extension .ops +* Added DUMP_OPSTATS and RUN_TIME options. +* Added time stamp to the .log file, streamlined logging using macros. +* Added a second -v option for varied level of logging detail. * Renamed function getline to get_line since folks from RedHat are polluting the namespace. Thanks to jari pietila <jar...@ho...> diff -ur curl-loader/conf.c curl-loader.564+0.48.vs1/conf.c --- curl-loader/conf.c 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/conf.c 2009-08-13 12:00:01.000000000 -0700 @@ -176,8 +176,8 @@ } break; - case 'v': - verbose_logging = 1; + case 'v': /* accumulate verbosity */ + verbose_logging += 1; break; case 'u': diff -ur curl-loader/doc/curl-loader.1 curl-loader.564+0.48.vs1/doc/curl-loader.1 --- curl-loader/doc/curl-loader.1 2009-08-09 13:27:20.000000000 -0700 +++ curl-loader.564+0.48.vs1/doc/curl-loader.1 2009-08-13 11:59:57.000000000 -0700 @@ -87,8 +87,9 @@ logical CPUs as seen by cat /proc/cpuinfo. .TP .B "\-v" -Request more verbose output to the log files, including information about -the headers sent and received. In some cases\-d option is a more informative. +Request more verbose output to the log files. If used twice, include +information about +the headers sent and received. In some cases \-d option is a more informative. .TP .B "\-u" Activate logging of URLs to the log file, when the \-v verbose option is used. diff -ur curl-loader/doc/curl-loader-config.5 curl-loader.564+0.48.vs1/doc/curl-loader-config.5 --- curl-loader/doc/curl-loader-config.5 2009-08-09 13:27:20.000000000 -0700 +++ curl-loader.564+0.48.vs1/doc/curl-loader-config.5 2009-08-13 11:59:57.000000000 -0700 @@ -182,10 +182,10 @@ .TP .B CYCLES_NUM This requires a valid signed integer value. This is the number of -cycles to be performed. If the value is -1, the +cycles to be performed. 
If the value is negative or 0, the .B curl\-loader -tool cycles forever until you hit ctl-C. This is a tag for the -general section. +tool cycles forever until the run time is exhausted or you hit ctl-C. +This is a tag for the general section. .TP .B USER_AGENT This requires a valid quoted string value. This is a way to override the @@ -194,6 +194,20 @@ .B HEADER tag. This is a tag for the general section. .TP +.B DUMP_OPSTATS +This requires a value of "yes" or "no". The operational statistics file +with the extension ".ops" is generated if the value is "yes" (the default) +and is not generated otherwise. +This is a tag for the general section. +.TP +.B RUN_TIME +This requires a value combined of up to 4 numbers separated with a colon, as +in D:H:M:S, H:M:S, M:S or S. Here D is days, H is hours, M is minutes +and S is seconds. The value specifies maximum run time of the curl-loader. +If the value is negative or 0, the curl-loader runs until the number of +cycles to be performed is exhausted or you hit ctl-C. +This is a tag for the general section. +.TP .B URLS_NUM This requires a valid unsigned integer value. This is the number of URLs in the URL section. This is a tag for the general section. @@ -663,4 +677,4 @@ /usr/share/doc/curl-loader/conf-examples/* for configuration examples .br .SH "SEE ALSO" -.BR curl\-loader (8) +.BR curl\-loader (1) diff -ur curl-loader/loader.c curl-loader.564+0.48.vs1/loader.c --- curl-loader/loader.c 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/loader.c 2009-08-13 12:50:13.000000000 -0700 @@ -48,7 +48,6 @@ #include <curl/multi.h> // getrlimit -#include <sys/time.h> #include <sys/resource.h> #include "batch.h" @@ -118,6 +117,14 @@ user_activity_smooth }; +static FILE *create_file (batch_context* bctx, char* fname) +{ + FILE *fp = fopen(fname,"w"); + if (!fp) + (void)fprintf(stderr,"%s, cannot create file \"%s\", %s\n", + bctx->batch_name,fname,strerror(errno)); + return fp; +} int main (int argc, char *argv []) @@ -258,6 +265,7 @@ batch_context* bctx = (batch_context *) batch_data; FILE* log_file = 0; FILE* statistics_file = 0; + FILE* opstats_file = 0; int rval = -1; @@ -273,34 +281,38 @@ /* Init batch logfile for the batch client output */ - sprintf (bctx-> batch_logfile, "./%s.log", bctx->batch_name); - - if (!(log_file = fopen(bctx-> batch_logfile, "w"))) - { - fprintf (stderr, - "%s - \"%s\" - failed to open file \"%s\" with errno %d.\n", - __func__, bctx->batch_name, bctx-> batch_logfile, errno); + (void)sprintf (bctx-> batch_logfile, "./%s.log", bctx->batch_name); + if (!(log_file = create_file(bctx,bctx->batch_logfile))) return NULL; + else + { + char tbuf[256]; + (void)fprintf(log_file,"# %ld %s",get_tick_count(),ascii_time(tbuf)); + (void)fprintf(log_file, + "# msec_offset cycle_no url_no client_no (ip) indic info\n"); } } /* Init batch statistics file */ - sprintf (bctx->batch_statistics, "./%s.txt", bctx->batch_name); - - if (!(statistics_file = fopen(bctx->batch_statistics, "w"))) - { - fprintf (stderr, - "%s - \"%s\" - failed to open file \"%s\" with errno %d.\n", - __func__, bctx->batch_name, bctx->batch_statistics, errno); + (void)sprintf (bctx->batch_statistics, "./%s.txt", bctx->batch_name); + if (!(bctx->statistics_file = statistics_file = create_file(bctx, + bctx->batch_statistics))) return NULL; - } else - { - bctx->statistics_file = statistics_file; print_statistics_header (statistics_file); - } + + /* + Init batch operational statistics file + */ + if (bctx->dump_opstats) + { + (void)sprintf (bctx->batch_opstats, 
"./%s.ops", bctx->batch_name); + if (!(bctx->opstats_file = opstats_file = create_file(bctx, + bctx->batch_opstats))) + return NULL; + } /* Init the objects, containing client-context information. @@ -349,6 +361,9 @@ if (statistics_file) fclose (statistics_file); + if (opstats_file) + fclose (opstats_file); + free_batch_data_allocations (bctx); return NULL; @@ -879,8 +894,8 @@ snprintf (body_file, sizeof (body_file) -1, "%s/cl-%Zu-cycle-%ld.body", url->dir_log, - cctx->client_index, - cctx->cycle_num + (unsigned)cctx->client_index, + (unsigned long)cctx->cycle_num ); if (cctx->logfile_bodies) @@ -909,8 +924,8 @@ snprintf (hdr_file, sizeof (hdr_file) -1, "%s/cl-%Zu-cycle-%ld.hdr", url->dir_log, - cctx->client_index, - cctx->cycle_num + (unsigned)cctx->client_index, + (unsigned long)cctx->cycle_num ); if (cctx->logfile_headers) @@ -1108,6 +1123,36 @@ return 0; } +#define write_log(ind, data) \ + if (1) {\ + char *end = data+strlen(data)-1;\ + if (*end == '\n')\ + *end = '\0';\ + (void)fprintf(cctx->file_output,"%ld %ld %ld %s%s %s",\ + offs_resp, cctx->cycle_num, cctx->url_curr_index, cctx->client_name,\ + ind, data);\ + if (url_print)\ + (void)fprintf(cctx->file_output," eff-url: url %s",url);\ + if (url_diff)\ + (void)fprintf(cctx->file_output," url: url %s",url_target);\ + (void)fprintf(cctx->file_output,"\n");\ + } + +#define write_log_num(ind, num) \ + if (1) {\ + char buf[100];\ + (void)snprintf(buf,sizeof(buf),"%ld",num);\ + write_log(ind,buf);\ + } + +#define write_log_ext(ind, num, ext) \ + if (1) {\ + char buf[200];\ + (void)snprintf(buf,sizeof(buf),"%ld %s",num, ext);\ + write_log(ind,buf);\ + } + +#define startswith(str, start) !strncmp((char *)str,(char *)start,strlen((char *)start)) /**************************************************************************************** * Function name - client_tracing_function @@ -1172,19 +1217,20 @@ */ scan_response(type, (char*) data, size, cctx); + const unsigned long time_resp = get_tick_count(); + const unsigned long offs_resp = time_resp - cctx->bctx->start_time; + switch (type) { case CURLINFO_TEXT: if (verbose_logging) - fprintf(cctx->file_output, "%ld %s :== Info: %s: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, data, - url_print ? url : "", url_diff ? url_target : ""); + if (verbose_logging > 1 || startswith(data,"About") || + startswith(data,"Closing")) + write_log("==",(char *)data); break; case CURLINFO_ERROR: - fprintf(cctx->file_output, "%ld %s !! ERROR: %s: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, data, - url_print ? url : "", url_diff ? url_target : ""); + write_log("!! ERR",(char *)data); cctx->client_state = CSTATE_ERROR; @@ -1193,9 +1239,8 @@ break; case CURLINFO_HEADER_OUT: - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s => Send header: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, url_print ? url : "", url_diff ? url_target : ""); + if (verbose_logging > 1) + write_log("=>","Send header"); stat_data_out_add (cctx, (unsigned long) size); @@ -1204,26 +1249,21 @@ /* First header of the HTTP-request. */ first_hdr_req_inc (cctx); stat_req_inc (cctx); /* Increment number of requests */ - cctx->req_sent_timestamp = get_tick_count (); } first_hdrs_clear_non_req (cctx); break; case CURLINFO_DATA_OUT: - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s => Send data: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, url_print ? url : "", - url_diff ? 
url_target : ""); + if (verbose_logging > 1) + write_log("=>","Send data"); stat_data_out_add (cctx, (unsigned long) size); first_hdrs_clear_all (cctx); break; case CURLINFO_SSL_DATA_OUT: - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s => Send ssl data: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, url_print ? url : "", - url_diff ? url_target : ""); + if (verbose_logging > 1) + write_log("=>","Send ssl data"); stat_data_out_add (cctx, (unsigned long) size); first_hdrs_clear_all (cctx); @@ -1241,10 +1281,8 @@ curl_easy_getinfo (handle, CURLINFO_RESPONSE_CODE, &response_status); - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s <= Recv header:%ld eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, url_print ? url : "", - url_diff ? url_target : ""); + if (verbose_logging > 1) + write_log_num("<= Recv header:",response_status); response_module = response_status / (long)100; @@ -1254,16 +1292,12 @@ case 1: /* 100-Continue and 101 responses */ if (! first_hdr_1xx (cctx)) { - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s:!! %ld CONTINUE: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, - url_print ? url : "", url_diff ? url_target : ""); + write_log_num("!! CONT",response_status); /* First header of 1xx response */ first_hdr_1xx_inc (cctx); stat_1xx_inc (cctx); /* Increment number of 1xx responses */ - const unsigned long time_1xx_resp = get_tick_count (); - stat_appl_delay_add (cctx, time_1xx_resp); + stat_appl_delay_add (cctx, time_resp); } first_hdrs_clear_non_1xx (cctx); @@ -1273,19 +1307,15 @@ case 2: /* 200 OK */ if (! first_hdr_2xx (cctx)) { - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s:!! %ld OK: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, - url_print ? url : "", url_diff ? url_target : ""); + write_log_num("!! OK",response_status); /* First header of 2xx response */ first_hdr_2xx_inc (cctx); stat_2xx_inc (cctx); /* Increment number of 2xx responses */ /* Count into the averages HTTP/S server response delay */ - const unsigned long time_2xx_resp = get_tick_count (); - stat_appl_delay_2xx_add (cctx, time_2xx_resp); - stat_appl_delay_add (cctx, time_2xx_resp); + stat_appl_delay_2xx_add (cctx, time_resp); + stat_appl_delay_add (cctx, time_resp); } first_hdrs_clear_non_2xx (cctx); break; @@ -1293,15 +1323,12 @@ case 3: /* 3xx REDIRECTIONS */ if (! first_hdr_3xx (cctx)) { - fprintf(cctx->file_output, "%ld %s:!! %ld REDIRECTION: %s: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, data, - url_print ? url : "", url_diff ? url_target : ""); + write_log_num("!! RDR",response_status); /* First header of 3xx response */ first_hdr_3xx_inc (cctx); stat_3xx_inc (cctx); /* Increment number of 3xx responses */ - const unsigned long time_3xx_resp = get_tick_count (); - stat_appl_delay_add (cctx, time_3xx_resp); + stat_appl_delay_add (cctx, time_resp); } first_hdrs_clear_non_3xx (cctx); break; @@ -1309,16 +1336,13 @@ case 4: /* 4xx Client Error */ if (! first_hdr_4xx (cctx)) { - fprintf(cctx->file_output, "%ld %s :!! %ld CLIENT_ERROR : %s: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, data, - url_print ? url : "", url_diff ? url_target : ""); + write_log_ext("!! 
CLER",response_status,data); /* First header of 4xx response */ first_hdr_4xx_inc (cctx); stat_4xx_inc (cctx); /* Increment number of 4xx responses */ - const unsigned long time_4xx_resp = get_tick_count (); - stat_appl_delay_add (cctx, time_4xx_resp); + stat_appl_delay_add (cctx, time_resp); } first_hdrs_clear_non_4xx (cctx); break; @@ -1327,24 +1351,20 @@ case 5: /* 5xx Server Error */ if (! first_hdr_5xx (cctx)) { - fprintf(cctx->file_output, "%ld %s :!! %ld SERVER_ERROR : %s: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, response_status, data, - url_print ? url : "", url_diff ? url_target : ""); + write_log_ext("!! SRER",response_status,data); /* First header of 5xx response */ first_hdr_5xx_inc (cctx); stat_5xx_inc (cctx); /* Increment number of 5xx responses */ - const unsigned long time_5xx_resp = get_tick_count (); - stat_appl_delay_add (cctx, time_5xx_resp); + stat_appl_delay_add (cctx, time_resp); } first_hdrs_clear_non_5xx (cctx); break; default : - fprintf(cctx->file_output, - "%ld %s:<= WARNING: parsing error: wrong response code (FTP?) %ld .\n", - cctx->cycle_num, cctx->client_name, response_status /*(char*) data*/); + write_log_num("<= WARNING: wrong response code (FTP?)", + response_status); /* FTP breaks it: - cctx->client_state = CSTATE_ERROR; */ break; } @@ -1375,9 +1395,11 @@ break; case CURLINFO_DATA_IN: - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s <= Recv data: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, + if (verbose_logging > 1) + (void)fprintf(cctx->file_output, + "%ld %ld %ld %s<= Recv data: eff-url: %s, url: %s\n", + offs_resp, cctx->cycle_num, cctx->url_curr_index, + cctx->client_name, url_print ? url : "", url_diff ? url_target : ""); stat_data_in_add (cctx, (unsigned long) size); @@ -1385,10 +1407,8 @@ break; case CURLINFO_SSL_DATA_IN: - if (verbose_logging) - fprintf(cctx->file_output, "%ld %s <= Recv ssl data: eff-url: %s, url: %s\n", - cctx->cycle_num, cctx->client_name, - url_print && url ? url : "", url_diff ? url_target : ""); + if (verbose_logging > 1) + write_log("<=","Recv ssl data"); stat_data_in_add (cctx, (unsigned long) size); first_hdrs_clear_all (cctx); @@ -1447,10 +1467,15 @@ */ cctx->cycle_num = 0; - snprintf(cctx->client_name, sizeof(cctx->client_name) - 1, + if (verbose_logging > 1) + snprintf(cctx->client_name, sizeof(cctx->client_name) - 1, "%d (%s) ", i + 1, bctx->ip_addr_array[i]); + else + /* Shorten client name for low logging */ + snprintf(cctx->client_name, sizeof(cctx->client_name) - 1, + "%d ",i+1); /* Mark timer-ids as non-valid. */ cctx->tid_sleeping = cctx->tid_url_completion = -1; diff -ur curl-loader/loader_fsm.c curl-loader.564+0.48.vs1/loader_fsm.c --- curl-loader/loader_fsm.c 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/loader_fsm.c 2009-08-13 12:00:01.000000000 -0700 @@ -319,14 +319,16 @@ Therefore, remembering here possible error state. */ int recoverable_error_state = cctx->client_state; - + if (bctx->run_time && (now_time - bctx->start_time >= bctx->run_time)) + rval_load = CSTATE_FINISHED_OK; + else /* Initialize virtual client's CURL handle for the next step of loading by calling load_<state-name>_state() function relevant for a particular client state. 
*/ - rval_load = load_state_func_table[cctx->client_state+1] (cctx, - now_time, - &interleave_waiting_time); + rval_load = load_state_func_table[cctx->client_state+1] (cctx, + now_time, + &interleave_waiting_time); /* Update operational statistics @@ -668,6 +670,7 @@ cctx->preload_url_curr_index = cctx->url_curr_index; /* Schedule the client immediately */ + cctx->req_sent_timestamp = now_time; if (curl_multi_add_handle (bctx->multiple_handle, cctx->handle) == CURLM_OK) { unsigned long timer_url_completion = 0; @@ -1071,7 +1074,7 @@ { advance_cycle_num (cctx); - if (cctx->cycle_num >= bctx->cycles_num) + if (bctx->cycles_num && cctx->cycle_num >= bctx->cycles_num) { return (cctx->client_state = CSTATE_ERROR); } @@ -1127,7 +1130,7 @@ // advance_cycle_num (cctx); - if (cctx->cycle_num == bctx->cycles_num) + if (bctx->cycles_num && cctx->cycle_num == bctx->cycles_num) { // Cycling completed diff -ur curl-loader/Makefile curl-loader.564+0.48.vs1/Makefile --- curl-loader/Makefile 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/Makefile 2009-08-13 12:46:29.000000000 -0700 @@ -171,6 +171,6 @@ $(OBJ_DIR)/%.o: %.c $(CC) $(CFLAGS) $(PROF_FLAG) $(OPT_FLAGS) $(DEBUG_FLAGS) $(INCDIR) -c -o $(OBJ_DIR)/$*.o $< -$(CONF_OBJ): +$(CONF_OBJ): parse_conf.c rm -rf $(OBJ_DIR)/parse_conf.o $(CC) $(CFLAGS) $(PROF_FLAG) $(DEBUG_FLAGS) $(INCDIR) -c -o $(OBJ_DIR)/parse_conf.o parse_conf.c diff -ur curl-loader/parse_conf.c curl-loader.564+0.48.vs1/parse_conf.c --- curl-loader/parse_conf.c 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/parse_conf.c 2009-08-13 13:05:19.000000000 -0700 @@ -102,8 +102,10 @@ static int ip_addr_max_parser (batch_context*const bctx, char*const value); static int ip_shared_num_parser (batch_context*const bctx, char*const value); static int cycles_num_parser (batch_context*const bctx, char*const value); +static int run_time_parser (batch_context*const bctx, char*const value); static int user_agent_parser (batch_context*const bctx, char*const value); static int urls_num_parser (batch_context*const bctx, char*const value); +static int dump_opstats_parser (batch_context*const bctx, char*const value); /* * URL section tag parsers. 
@@ -172,8 +174,10 @@ {"IP_ADDR_MAX", ip_addr_max_parser}, {"IP_SHARED_NUM", ip_shared_num_parser}, {"CYCLES_NUM", cycles_num_parser}, + {"RUN_TIME", run_time_parser}, {"USER_AGENT", user_agent_parser}, {"URLS_NUM", urls_num_parser}, + {"DUMP_OPSTATS", dump_opstats_parser}, /*------------------------ URL SECTION -------------------------------- */ @@ -869,6 +873,43 @@ } return 0; } + +static int run_time_parser (batch_context*const bctx, char*const value) +{ + char *token = 0, *strtokp = 0; + const char *delim = ":"; + long dhms[4]; + int ct = 0; + const int max_ct = sizeof(dhms)/sizeof(*dhms); + char value_buf[100]; + // preserve value from destruction by strtok + (void)strncpy(value_buf,value,sizeof(value_buf)); + (void)memset((char *)dhms,0,sizeof(dhms)); + for (token = strtok_r(value_buf,delim,&strtokp); + token != 0; token = strtok_r(0,delim,&strtokp), ct++) + { + if (ct >= max_ct) + { + (void)fprintf(stderr, + "%s - error: a value in the form [[[D:]H:]M:]S is expected" + " for tag RUN_TIME\n", __func__); + return -1; + } + int i = 0; + while (++i < max_ct) + dhms[i - 1] = dhms[i]; + dhms[max_ct - 1] = atol(token); + } + long run_time = (60 * (60 * (24 * dhms[0] + dhms[1]) + dhms[2]) + dhms[3])* + 1000; // msecs + if (run_time < 0) + { + run_time = LONG_MAX - 1; + } + bctx->run_time = (unsigned long)run_time; + return 0; +} + static int clients_rampup_inc_parser (batch_context*const bctx, char*const value) { bctx->clients_rampup_inc = atol (value); @@ -918,6 +959,21 @@ return 0; } +static int dump_opstats_parser (batch_context*const bctx, char*const value) +{ + if (value[0] == 'Y' || value[0] == 'y' || + value[0] == 'N' || value[0] == 'n') + bctx->dump_opstats = (value[0] == 'Y' || value[0] == 'y'); + else + { + fprintf (stderr, + "%s - error: DUMP_OPSTATS value (%s) must start with Y|y|N|n.\n", + __func__, value); + return -1; + } + return 0; +} + static int url_parser (batch_context*const bctx, char*const value) { size_t url_length = 0; @@ -2730,6 +2786,12 @@ set_default_response_errors_table (); + /* for compatibility with older configurations set default value + of dump_opstats to 1 ("yes") */ + unsigned i; + for (i = 0; i < bctx_array_size; i++) + bctx_array[i].dump_opstats = 1; + while (fgets (fgets_buff, sizeof (fgets_buff) - 1, fp)) { // fprintf (stderr, "%s - processing file string \"%s\n", diff -ur curl-loader/statistics.c curl-loader.564+0.48.vs1/statistics.c --- curl-loader/statistics.c 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/statistics.c 2009-08-13 12:00:01.000000000 -0700 @@ -27,6 +27,7 @@ #include <errno.h> #include <stdlib.h> #include <string.h> +#include <time.h> #include "batch.h" #include "client.h" @@ -57,7 +58,8 @@ stat_point *sd, unsigned long period); -static void print_operational_statistics (op_stat_point*const osp_curr, +static void print_operational_statistics (FILE *opstats_file, + op_stat_point*const osp_curr, op_stat_point*const osp_total, url_context* url_arr); @@ -400,7 +402,8 @@ } op_stat_point_add (&bctx->op_total, &bctx->op_delta); - print_operational_statistics (&bctx->op_delta, + print_operational_statistics (bctx->opstats_file, + &bctx->op_delta, &bctx->op_total, bctx->url_ctx_array); @@ -430,13 +433,34 @@ } dump_clients (cctx); - fprintf (stderr, "\nExited. For details look in the files:\n" + (void)fprintf (stderr, "\nExited. 
For details look in the files:\n" "- %s.log for errors and traces;\n" "- %s.txt for loading statistics;\n" - "- %s.ctx for virtual client based statistics.\n" - "You may add -v and -u options to the command line for more " - "verbouse output to %s.log file.\n", bctx->batch_name, bctx->batch_name, - bctx->batch_name, bctx->batch_name); + "- %s.ctx for virtual client based statistics.\n", + bctx->batch_name, bctx->batch_name, bctx->batch_name); + if (bctx->dump_opstats) + (void)fprintf (stderr,"- %s.ops for operational statistics.\n", + bctx->batch_name); + (void)fprintf (stderr, + "Add -v and -u options to the command line for " + "verbose output to %s.log file.\n",bctx->batch_name); +} + +/****** +* Function name - ascii_time +* +* Description - evaluate current time in ascii +* +* Input - *tbuf - pointer to time buffer +* Return - tbuf filled with time +******/ + +char *ascii_time (char *tbuf) +{ + time_t timeb; + + (void)time(&timeb); + return ctime_r(&timeb,tbuf); } /**************************************************************************************** @@ -593,7 +617,8 @@ op_stat_point_add (&bctx->op_total, &bctx->op_delta ); - print_operational_statistics (&bctx->op_delta, + print_operational_statistics (bctx->opstats_file, + &bctx->op_delta, &bctx->op_total, bctx->url_ctx_array); @@ -811,29 +836,33 @@ /*********************************************************************************** * Function name - print_operational_statistics * -* Description - Prints to stdout number of login, UAS - for each URL and logoff operations +* Description - writes number of login, UAS - for each URL and logoff operations * success and failure numbers * -* Input - *osp_curr - pointer to the current operational statistics point +* Input - *opstats_file - FILE pointer +* *osp_curr - pointer to the current operational statistics point * *osp_total - pointer to the current operational statistics point * * Return Code/Output - None *************************************************************************************/ -static void print_operational_statistics (op_stat_point*const osp_curr, +static void print_operational_statistics (FILE *opstats_file, + op_stat_point*const osp_curr, op_stat_point*const osp_total, url_context* url_arr) { - if (!osp_curr || !osp_total) + if (!osp_curr || !osp_total || !opstats_file) return; - fprintf (stdout, " Operations:\t\t Success\t\t Failed\t\t\tTimed out\n"); + (void)fprintf (opstats_file, + " Operations:\t\t Success\t\t Failed\t\t\tTimed out\n"); if (osp_curr->url_num && (osp_curr->url_num == osp_total->url_num)) { unsigned long i; for (i = 0; i < osp_curr->url_num; i++) { - fprintf (stdout, "URL%ld:%-12.12s\t%-6ld %-8ld\t\t%-6ld %-8ld\t\t%-6ld %-8ld\n", + (void)fprintf (opstats_file, + "URL%ld:%-12.12s\t%-6ld %-8ld\t\t%-6ld %-8ld\t\t%-6ld %-8ld\n", i, url_arr[i].url_short_name, osp_curr->url_ok[i], osp_total->url_ok[i], osp_curr->url_failed[i], osp_total->url_failed[i], diff -ur curl-loader/statistics.h curl-loader.564+0.48.vs1/statistics.h --- curl-loader/statistics.h 2009-08-09 13:27:26.000000000 -0700 +++ curl-loader.564+0.48.vs1/statistics.h 2009-08-13 12:00:01.000000000 -0700 @@ -216,10 +216,22 @@ ****************************************************************************************/ void dump_final_statistics (struct client_context* cctx); +/****** +* Function name - ascii_time +* +* Description - evaluate current time in ascii +* +* Input - *tbuf - pointer to time buffer +* Return - tbuf filled with time +******/ + +char *ascii_time (char *tbuf); + 
/**************************************************************************************** -* Function name - dump_snapshot_interval and up to the interval time summary statistics +* Function name - dump_snapshot_interval * * Description - Dumps summary statistics since the start of load +* and up to the interval * * Input - *bctx - pointer to batch structure * now - current time in msec since the epoch |
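A note on the RUN_TIME tag added by the patch above: its parser accepts a value of the form [[[D:]H:]M:]S and converts it to milliseconds as (60 * (60 * (24*D + H) + M) + S) * 1000. The small Python sketch below mirrors that conversion so you can check what a given RUN_TIME value means; the function name is illustrative and not part of curl-loader.

def run_time_ms(value):
    # Mirror of the run_time_parser logic in the patch: [[[D:]H:]M:]S -> milliseconds.
    parts = [int(tok) for tok in value.split(":")]
    if len(parts) > 4:
        raise ValueError("expected at most D:H:M:S")
    d, h, m, s = [0] * (4 - len(parts)) + parts   # right-align the fields
    return (60 * (60 * (24 * d + h) + m) + s) * 1000

print(run_time_ms("1:30:0"))    # 1 hour 30 minutes -> 5400000 msec
print(run_time_ms("90"))        # plain seconds     -> 90000 msec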
From: Robert I. <cor...@gm...> - 2009-08-11 18:55:21
|
Hi Sergei, On Tue, Aug 11, 2009 at 9:49 PM, Sergei S <jun...@ya...> wrote: > Hi, Robert! > > In log I have mostly such errors: > : eff-url: , url: > 1 3723 (192.168.2.110) !! ERROR: Empty reply from server > : eff-url: , url: > 9 452 (192.168.2.110) !! ERROR: Connection time-out > > I've run with this parameters: > curl-loader -f chatnew.conf -r -c 15 remove -r and -c options; they are not helpful > > > TIMER_URL_COMPLETION = 15000 > TIMER_AFTER_URL_SLEEP = 1500 > > And the question is if there was http request in first case? (when > "Empty reply" error) or just connection try? > You have a slow server. Check you server logs. > > > Robert Iakobashvili wrote: > > Hi Sergei S. > > > > On Tue, Aug 11, 2009 at 4:19 PM, Sergei S <jun...@ya...> wrote: > > > > > >> Hi > >> I've looked at reports from last test, and I have a couple of questions: > >> Let's see following lines: > >> RunTime(sec),Appl, > >> Clients,Req,1xx,2xx,3xx,4xx,5xx,Err,T-Err,D,D-2xx,Ti,To > >> 242, H/F , 1514, 256, 0, 158, 0, 0, 0, 1033, 75, 5084, > >> 5084, 13219, 28278 > >> 215, H/F , 3938, 380, 0, 162, 0, 0, 0, 1381, 119, > >> 3041, 3041, 13554, 34858 > >> > >> There's 256 requests in first line, and 158 "ok"s, but 1033 errors and > >> 75 timeouts. I thought that "Req" column is "all requests for a 3 > >> second" that includes both "ok"s and errors and timeouts. > >> > > > > > >> But you can > >> see that nubers are not suitable.. So the question is what is "Req" > >> column, which requests number he refers to? > >> > > > > Req - states for HTTP requests. Errors at HTTP-level are covered by 4xx > and > > 5xx. > > > > Err means TCP/IP and DNS level errors. An overall impression, that you > have > > some network/bandwidth issues. > > > > Run it with -v and investigate the logs. > > > > > > > >> Also the question how can I calculate the CAPS from every line in txt > >> report file? > >> > >> > > > > Number of HTTP request / Number of seconds. > > If running at defaults (3 sec reporting) is is 256/3 > > > > > > > >> -- > >> Sergey Sh. > >> > >> > >> > > Take care! > > > > > > ------------------------------------------------------------------------ > > > > > ------------------------------------------------------------------------------ > > Let Crystal Reports handle the reporting - Free Crystal Reports 2008 > 30-Day > > trial. Simplify your report design, integration and deployment - and > focus on > > what you do best, core application coding. Discover what's new with > > Crystal Reports now. http://p.sf.net/sfu/bobj-july > > ------------------------------------------------------------------------ > > > > _______________________________________________ > > curl-loader-devel mailing list > > cur...@li... > > https://lists.sourceforge.net/lists/listinfo/curl-loader-devel > > > > > > ------------------------------------------------------------------------------ > Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day > trial. Simplify your report design, integration and deployment - and focus > on > what you do best, core application coding. Discover what's new with > Crystal Reports now. http://p.sf.net/sfu/bobj-july > _______________________________________________ > curl-loader-devel mailing list > cur...@li... > https://lists.sourceforge.net/lists/listinfo/curl-loader-devel > -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... 
www.ghotit.com Assistive technology that understands you ...................................................................... |
From: Sergei S <jun...@ya...> - 2009-08-11 18:48:51
|
Hi, Robert! In log I have mostly such errors: : eff-url: , url: 1 3723 (192.168.2.110) !! ERROR: Empty reply from server : eff-url: , url: 9 452 (192.168.2.110) !! ERROR: Connection time-out I've run with this parameters: curl-loader -f chatnew.conf -r -c 15 TIMER_URL_COMPLETION = 15000 TIMER_AFTER_URL_SLEEP = 1500 And the question is if there was http request in first case? (when "Empty reply" error) or just connection try? Robert Iakobashvili wrote: > Hi Sergei S. > > On Tue, Aug 11, 2009 at 4:19 PM, Sergei S <jun...@ya...> wrote: > > >> Hi >> I've looked at reports from last test, and I have a couple of questions: >> Let's see following lines: >> RunTime(sec),Appl, >> Clients,Req,1xx,2xx,3xx,4xx,5xx,Err,T-Err,D,D-2xx,Ti,To >> 242, H/F , 1514, 256, 0, 158, 0, 0, 0, 1033, 75, 5084, >> 5084, 13219, 28278 >> 215, H/F , 3938, 380, 0, 162, 0, 0, 0, 1381, 119, >> 3041, 3041, 13554, 34858 >> >> There's 256 requests in first line, and 158 "ok"s, but 1033 errors and >> 75 timeouts. I thought that "Req" column is "all requests for a 3 >> second" that includes both "ok"s and errors and timeouts. >> > > >> But you can >> see that nubers are not suitable.. So the question is what is "Req" >> column, which requests number he refers to? >> > > Req - states for HTTP requests. Errors at HTTP-level are covered by 4xx and > 5xx. > > Err means TCP/IP and DNS level errors. An overall impression, that you have > some network/bandwidth issues. > > Run it with -v and investigate the logs. > > > >> Also the question how can I calculate the CAPS from every line in txt >> report file? >> >> > > Number of HTTP request / Number of seconds. > If running at defaults (3 sec reporting) is is 256/3 > > > >> -- >> Sergey Sh. >> >> >> > Take care! > > > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------------ > Let Crystal Reports handle the reporting - Free Crystal Reports 2008 30-Day > trial. Simplify your report design, integration and deployment - and focus on > what you do best, core application coding. Discover what's new with > Crystal Reports now. http://p.sf.net/sfu/bobj-july > ------------------------------------------------------------------------ > > _______________________________________________ > curl-loader-devel mailing list > cur...@li... > https://lists.sourceforge.net/lists/listinfo/curl-loader-devel > |
From: Robert I. <cor...@gm...> - 2009-08-11 14:15:49
|
Hi Sergei S.

On Tue, Aug 11, 2009 at 4:19 PM, Sergei S <jun...@ya...> wrote:
> Hi
> I've looked at reports from last test, and I have a couple of questions:
> Let's see following lines:
> RunTime(sec),Appl,Clients,Req,1xx,2xx,3xx,4xx,5xx,Err,T-Err,D,D-2xx,Ti,To
> 242, H/F , 1514, 256, 0, 158, 0, 0, 0, 1033, 75, 5084, 5084, 13219, 28278
> 215, H/F , 3938, 380, 0, 162, 0, 0, 0, 1381, 119, 3041, 3041, 13554, 34858
>
> There's 256 requests in first line, and 158 "ok"s, but 1033 errors and 75 timeouts.
> I thought that "Req" column is "all requests for a 3 second" that includes both
> "ok"s and errors and timeouts. But you can see that nubers are not suitable..
> So the question is what is "Req" column, which requests number he refers to?

"Req" stands for HTTP requests. Errors at the HTTP level are covered by the 4xx and 5xx columns.
"Err" means TCP/IP and DNS level errors. The overall impression is that you have some network/bandwidth issues.
Run it with -v and investigate the logs.

> Also the question how can I calculate the CAPS from every line in txt report file?

Number of HTTP requests / number of seconds.
If running at the defaults (3-second reporting interval) it is 256/3.

> --
> Sergey Sh.

Take care!
-- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
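Regarding the CAPS question answered above (HTTP requests divided by the reporting interval), here is a minimal Python sketch that applies the same arithmetic to one line of the .txt report. It assumes the default 3-second snapshot interval and the column order quoted in this thread (RunTime(sec), Appl, Clients, Req, ...); adjust both if your setup differs.

REPORT_INTERVAL_SEC = 3.0   # default reporting interval; change if you use another one

def caps(report_line):
    # Column order as quoted in this thread:
    # RunTime(sec), Appl, Clients, Req, 1xx, 2xx, 3xx, 4xx, 5xx, Err, T-Err, D, D-2xx, Ti, To
    fields = [f.strip() for f in report_line.split(",")]
    return int(fields[3]) / REPORT_INTERVAL_SEC   # the "Req" column divided by the interval

line = "242, H/F , 1514, 256, 0, 158, 0, 0, 0, 1033, 75, 5084, 5084, 13219, 28278"
print(caps(line))   # roughly 85 CAPS for this snapshot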
From: Sergei S <jun...@ya...> - 2009-08-11 13:18:36
|
Hi,
I've looked at the reports from the last test, and I have a couple of questions. Consider the following lines:

RunTime(sec),Appl,Clients,Req,1xx,2xx,3xx,4xx,5xx,Err,T-Err,D,D-2xx,Ti,To
242, H/F , 1514, 256, 0, 158, 0, 0, 0, 1033, 75, 5084, 5084, 13219, 28278
215, H/F , 3938, 380, 0, 162, 0, 0, 0, 1381, 119, 3041, 3041, 13554, 34858

The first line shows 256 requests, but only 158 "OK"s against 1033 errors and 75 timeouts. I thought the "Req" column meant "all requests in a 3-second interval", including both the "OK"s and the errors and timeouts, but as you can see the numbers do not add up. So the question is: what exactly does the "Req" column count? Also, how can I calculate the CAPS from each line of the .txt report file?

Thanks for the great tool!
--
Sergey Sh. |
From: Robert I. <cor...@gm...> - 2009-08-09 15:12:11
|
Hi jari, On Fri, Jul 31, 2009 at 2:18 AM, jari pietila <jar...@ho...> wrote: > Make for curl-loader-0.49 fails as follows > > rm -rf obj/parse_conf.o > gcc -W -Wall -Wpointer-arith -pipe -DCURL_LOADER_FD_SETSIZE=20000 > -D_FILE_OFFSET_BITS=64 -g -I. -I./inc -I/usr//include -c -o > obj/parse_conf.o parse_conf.c > parse_conf.c:3122: error: conflicting types for ‘getline’ > /usr/include/stdio.h:653: note: previous declaration of ‘getline’ was here > make: *** [obj/parse_conf.o] Error 1 > > gcc version 4.4.0 20090506 (Red Hat 4.4.0-4) (GCC) > > Google search found similar errors > and also suggestions to rename getline to st else > s/getline/my_getline/ in parse_conf.c > > -jari > Renamed getline to get_line in svn. Added you to our Thanks list. Thank you! -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
From: Robert I. <cor...@gm...> - 2009-08-09 15:00:51
|
Hi Val,

On Wed, Jul 29, 2009 at 12:10 AM, Val Shkolnikov <va...@nv...> wrote:
> ________________________________
> From: Robert Iakobashvili [mailto:cor...@gm...]
> Sent: Monday, July 27, 2009 8:32 AM
> To: curl-loader-devel
> Subject: Re: Some changes to curl-loader
>
> Hi,
> On Wed, Jul 22, 2009 at 3:47 AM, Val Shkolnikov <Val...@no...> wrote:
> Hi,
> Here are some changes I made to the code to improve logging. Mainly so that
> the .log file could be postprocessed to get more detailed statistics of the
> response time beyond the average and count. If interested, I can send the
> Python code that computes percentiles, average and std.dev from that. I
> redirected the voluminous opstats output into a file. Also added a couple of
> new options. The changes are summarized in the ChangeLog. I also updated the
> documentation. All the changes are against Rev 0.48.

The patch does not apply cleanly and produces many rejects. Could you, please, regenerate it against the svn version, or at least against version 0.49? Could you also generate a patch for the doc/README file, where all the options and tags are explained?

Thank you in advance.
-- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
From: Robert I. <cor...@gm...> - 2009-07-31 05:01:46
|
Hi Jari, On Fri, Jul 31, 2009 at 2:18 AM, jari pietila <jar...@ho...> wrote: > Make for curl-loader-0.49 fails as follows > > rm -rf obj/parse_conf.o > gcc -W -Wall -Wpointer-arith -pipe -DCURL_LOADER_FD_SETSIZE=20000 > -D_FILE_OFFSET_BITS=64 -g -I. -I./inc -I/usr//include -c -o > obj/parse_conf.o parse_conf.c > parse_conf.c:3122: error: conflicting types for ‘getline’ > /usr/include/stdio.h:653: note: previous declaration of ‘getline’ was here > make: *** [obj/parse_conf.o] Error 1 > > gcc version 4.4.0 20090506 (Red Hat 4.4.0-4) (GCC) > > Google search found similar errors > and also suggestions to rename getline to st else > s/getline/my_getline/ in parse_conf.c > > -jari > Good catch! We'll correct it. Thanks! -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
From: Robert I. <cor...@gm...> - 2009-07-29 16:36:18
|
Hi Bob,

On Wed, Jul 29, 2009 at 6:53 PM, Bob Felts <bf...@kl...> wrote:
> I am getting the following error when attempting to make curl-loader 0.49:
>
> In file included from loader.c:58:
> Ssl_thr_lock.h:27:28: error: openssl/crypto.h No such file or directory
> Loader.c: In function ‘response_logfiles_set’:
>
> CURL-LOADER VERSION: 0.49, July 5, 2009

Please install the OpenSSL development headers; see the pre-requirements notes:
http://curl-loader.sourceforge.net/doc/fast.html
http://curl-loader.sourceforge.net/doc/faq.html#pre-requirements

Take care!
-- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
From: Val S. <va...@nv...> - 2009-07-28 21:10:36
|
________________________________ From:Robert Iakobashvili [mailto:cor...@gm...] Sent: Monday, July 27, 2009 8:32 AM To: curl-loader-devel Subject: Re: Some changes to curl-loader Hi, >On Wed, Jul 22, 2009 at 3:47 AM, Val Shkolnikov <Val...@no...> >wrote: >Hi, > Here are some changes I made to the code to improve logging. Mainly so that the .log file could be postprocessed to get more detailed statistics of the response time beyond the average and count. If interested, I can send the Python code that computes percentiles, average and std.dev from that. I redirected the voluminous opstats output into a file. Also added a couple of new options. The changes are summarized in the ChangeLog. I also updated the documentation. All the changes are against Rev 0.48. > /V Looks nice. Could you, please, provide descriptions of the newly added tags and a new log file for the FAQs? Thanks. -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... Robert, I have updated the FAQ as I got it from the web site today. I am attaching the diffs to that file. Also, it turns out, I needed one more field added to the log, the url number. This is already reflected in the FAQ. I am attaching the diffs for the loader.c compared to my last submission and the overall diffs against version 0.48. Also note my new e-mail address. /Val |
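Val's Python post-processing code mentioned in the exchange above is not included in this thread, so the following is only a rough sketch of the kind of statistics he describes (percentiles, average, standard deviation). It assumes you have already extracted a list of per-request response delays in milliseconds from the .log file; the actual parsing depends on his modified log format and is left out.

import math

def summarize(delays_ms):
    # delays_ms: response delays in milliseconds, however you parsed them out of the log.
    values = sorted(delays_ms)
    n = len(values)
    avg = sum(values) / float(n)
    stddev = math.sqrt(sum((v - avg) ** 2 for v in values) / float(n))
    def pct(p):
        # nearest-rank percentile
        return values[min(n - 1, int(math.ceil(p * n / 100.0)) - 1)]
    return {"count": n, "avg": avg, "stddev": stddev,
            "p50": pct(50), "p90": pct(90), "p99": pct(99)}

print(summarize([120, 135, 150, 180, 240, 900]))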
From: Robert I. <cor...@gm...> - 2009-07-27 18:04:06
|
On Mon, Jul 27, 2009 at 7:35 PM, Gary Fitts <ga...@if...> wrote: > I've been away from this for a long time now, and it will be a week or two > before I can take a look, but I'll see what I can do. > > Gary > > Nice hearing from you, Gary. Take your time! -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
From: Gary F. <ga...@iF...> - 2009-07-27 17:36:26
|
I've been away from this for a long time now, and it will be a week or two before I can take a look, but I'll see what I can do. Gary On Jul 26, 2009, at 10:38 PM, Robert Iakobashvili wrote: Hi Jan, On Fri, Jul 24, 2009 at 4:25 PM, Jan Wielemaker <J.W...@cs...> wrote: Hi, Just trying my first steps into curl-loader :-) I looked through the FAQ and config files, but I couldn't find an answer on how to query a *lot* of random paths using a pool of -lets say- 20 clients. I see we have FORM_RECORDS_RANDOM, which seems to do the trick for form-data. I am trying to debug server crashes, where I know it is related to concurrency, but otherwise I have little clue on whether or not it is related to the actual query paths. Just to bash the server, I collected a list of 6,000 paths that have been queried on the server recently and then I use the config below, but curl-loader only seems to use the first 20 paths of the file :-( Did I overlook something trivial? Cheers --- Jan ================================================================ BATCH_NAME= bulk CLIENTS_NUM_MAX=20 CLIENTS_NUM_START=20 CLIENTS_RAMPUP_INC=0 INTERFACE=eth0 NETMASK=16 IP_ADDR_MIN= 192.168.1.1 IP_ADDR_MAX= 192.168.1.200 IP_SHARED_NUM=200 CYCLES_NUM= -1 URLS_NUM= 1 URL_TEMPLATE=http://localhost:3040%s URL_TOKEN_FILE=paths URL_SHORT_NAME="local-server" REQUEST_TYPE=GET TIMER_URL_COMPLETION = 5000 TIMER_AFTER_URL_SLEEP = 0 ================================================================ It is a rather recent feature developed by Gary and it will be great if Gary will comment. Still, you are keeping 20 clients, which are testing 20 paths. Could it be that 6000 clients will do the job? Give it a try. Just keep in mind, that by default you need for 35K of memory per each client and ramp-up about 100 not to start all them at start-up. -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... ------------------------------------------------------------------------------ _______________________________________________ curl-loader-devel mailing list cur...@li... https://lists.sourceforge.net/lists/listinfo/curl-loader-devel |
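A quick sanity check of the memory figure quoted above (about 35K per virtual client by default): taking it as a rough per-client footprint, 6000 clients need on the order of 200 MB. A throwaway Python calculation, with the 35K taken from this thread rather than measured:

clients = 6000          # number of clients suggested above
kb_per_client = 35      # rough default footprint per client, as quoted in the thread
print("~%.0f MB for %d clients" % (clients * kb_per_client / 1024.0, clients))
# -> ~205 MB for 6000 clients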
From: Robert I. <cor...@gm...> - 2009-07-27 15:31:48
|
Hi, On Wed, Jul 22, 2009 at 3:47 AM, Val Shkolnikov <Val...@no...>wrote: > Hi, > > Here are some changes I made to the code to improve logging. Mainly so > that the .log file could be postprocessed to get more detailed statistics of > the response time beyond the average and count. If interested, I can send > the Python code that computes percentiles, average and std.dev from that. I > redirected the voluminous opstats output into a file. Also added a couple > of new options. The changes are summarized in the ChangeLog. I also > updated the documentation. All the changes are against Rev 0.48. > > /V > Looks nice. Could you, please, provide descriptions of the newly added tags and a new log file for the FAQs? Thanks. -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
From: Robert I. <cor...@gm...> - 2009-07-27 15:17:54
|
On Mon, Jul 27, 2009 at 2:13 PM, Jan Wielemaker <J.W...@cs...>wrote: > On Monday 27 July 2009 07:38:12 am Robert Iakobashvili wrote: > > Hi Jan, > > > > On Fri, Jul 24, 2009 at 4:25 PM, Jan Wielemaker > <J.W...@cs...>wrote: > > > Hi, > > > > > > Just trying my first steps into curl-loader :-) I looked through the > FAQ > > > and config files, but I couldn't find an answer on how to query a *lot* > > > of random paths using a pool of -lets say- 20 clients. > > > > > > I see we have FORM_RECORDS_RANDOM, which seems to do the trick for > > > form-data. I am trying to debug server crashes, where I know it is > > > related to concurrency, but otherwise I have little clue on whether or > > > not it is related to the actual query paths. > > > > > > Just to bash the server, I collected a list of 6,000 paths that have > > > been queried on the server recently and then I use the config below, > but > > > curl-loader only seems to use the first 20 paths of the file :-( > > > > > > Did I overlook something trivial? > > > > > > Cheers --- Jan > > > > > > ================================================================ > > > BATCH_NAME= bulk > > > CLIENTS_NUM_MAX=20 > > > CLIENTS_NUM_START=20 > > > CLIENTS_RAMPUP_INC=0 > > > INTERFACE=eth0 > > > NETMASK=16 > > > IP_ADDR_MIN= 192.168.1.1 > > > IP_ADDR_MAX= 192.168.1.200 > > > IP_SHARED_NUM=200 > > > CYCLES_NUM= -1 > > > URLS_NUM= 1 > > > > > > URL_TEMPLATE=http://localhost:3040%s > > > URL_TOKEN_FILE=paths > > > URL_SHORT_NAME="local-server" > > > REQUEST_TYPE=GET > > > TIMER_URL_COMPLETION = 5000 > > > TIMER_AFTER_URL_SLEEP = 0 > > > ================================================================ > > > > It is a rather recent feature developed by Gary and it will be great if > > Gary will comment. > > > I must admit that I still don't really get the meaning of the CIENTS_* > settings. Is the number of clients expressed by this formula (t is > time in seconds). > > max(CLIENTS_NUM_MAX, CLIENTS_NUM_START+t*CLIENTS_RAMPUP_INC) > current clients num = CLIENTS_NUM_START+t*CLIENTS_RAMPUP_INC till it comes the the CLIENTS_NUM_MAX -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
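To restate the answer above in one line: the current number of clients is min(CLIENTS_NUM_MAX, CLIENTS_NUM_START + t * CLIENTS_RAMPUP_INC), with t in seconds as assumed in the question. A tiny Python illustration using the values from Jan's config; the helper name is illustrative, not part of curl-loader.

def clients_at(t_sec, start, inc, max_clients):
    # Ramp up by CLIENTS_RAMPUP_INC per second from CLIENTS_NUM_START,
    # capped at CLIENTS_NUM_MAX.
    return min(max_clients, start + t_sec * inc)

# Jan's config: CLIENTS_NUM_START=20, CLIENTS_RAMPUP_INC=0, CLIENTS_NUM_MAX=20
print([clients_at(t, 20, 0, 20) for t in range(5)])           # [20, 20, 20, 20, 20]
# With CLIENTS_RAMPUP_INC=5 the load would grow toward the maximum gradually:
print([clients_at(t, 20, 5, 100) for t in range(0, 20, 4)])   # [20, 40, 60, 80, 100]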
From: Jan W. <J.W...@cs...> - 2009-07-27 11:20:52
|
Dear Val, Thanks for the trick. Maybe not very elegant, but workable :-) --- Jan On Friday 24 July 2009 08:25:28 pm Val Shkolnikov wrote: > I'm not an expert on curl-loader and I have not used the url templates yet. > The way I do it is literally generate all the urls in the urls section and > set FETCH_PROBABILITY on each url to a low value, e.g 2., so it looks like > this: |
From: Jan W. <J.W...@cs...> - 2009-07-27 11:13:19
|
On Monday 27 July 2009 07:38:12 am Robert Iakobashvili wrote: > Hi Jan, > > On Fri, Jul 24, 2009 at 4:25 PM, Jan Wielemaker <J.W...@cs...>wrote: > > Hi, > > > > Just trying my first steps into curl-loader :-) I looked through the FAQ > > and config files, but I couldn't find an answer on how to query a *lot* > > of random paths using a pool of -lets say- 20 clients. > > > > I see we have FORM_RECORDS_RANDOM, which seems to do the trick for > > form-data. I am trying to debug server crashes, where I know it is > > related to concurrency, but otherwise I have little clue on whether or > > not it is related to the actual query paths. > > > > Just to bash the server, I collected a list of 6,000 paths that have > > been queried on the server recently and then I use the config below, but > > curl-loader only seems to use the first 20 paths of the file :-( > > > > Did I overlook something trivial? > > > > Cheers --- Jan > > > > ================================================================ > > BATCH_NAME= bulk > > CLIENTS_NUM_MAX=20 > > CLIENTS_NUM_START=20 > > CLIENTS_RAMPUP_INC=0 > > INTERFACE=eth0 > > NETMASK=16 > > IP_ADDR_MIN= 192.168.1.1 > > IP_ADDR_MAX= 192.168.1.200 > > IP_SHARED_NUM=200 > > CYCLES_NUM= -1 > > URLS_NUM= 1 > > > > URL_TEMPLATE=http://localhost:3040%s > > URL_TOKEN_FILE=paths > > URL_SHORT_NAME="local-server" > > REQUEST_TYPE=GET > > TIMER_URL_COMPLETION = 5000 > > TIMER_AFTER_URL_SLEEP = 0 > > ================================================================ > > It is a rather recent feature developed by Gary and it will be great if > Gary will comment. I've seen that the config allows for getting random records from form-data files. Would be nice to have the same for token-files. > Still, you are keeping 20 clients, which are testing 20 paths. > Could it be that 6000 clients will do the job? Give it a try. Probably that will work, but I don't want a lot of concurrent clients. > Just keep in mind, that by default you need for 35K of memory per each > client > and ramp-up about 100 not to start all them at start-up. I must admit that I still don't really get the meaning of the CIENTS_* settings. Is the number of clients expressed by this formula (t is time in seconds). max(CLIENTS_NUM_MAX, CLIENTS_NUM_START+t*CLIENTS_RAMPUP_INC) --- Jan |