curl-loader-devel Mailing List for curl-loader - web application testing (Page 28)
Status: Alpha
Brought to you by: coroberti
From: Gary F. <ga...@in...> - 2008-10-27 18:18:12
|
Hello Robert,

I have my changes working now, although I want to test them a bit more before letting them out. I'm having one problem, though. In the application I'm testing, the very first URL is a POST request using a FORM_STRING. This seems to cause a failure in loader.c, setup_curl_handle_appl(). The code is copied below. When this code is called, cctx->post_data has been allocated but not initialized, so it's an empty string. This causes the condition "else if (cctx->post_data && cctx->post_data[0])" to fail, and we drop through to the "post_data is NULL" error. I've fixed this by commenting out the second condition:

  else if (cctx->post_data /* && cctx->post_data[0] */)

So far this seems to work for me, but I wonder if I'm missing something that might cause an error somewhere else.

  if (url->req_type == HTTP_REQ_TYPE_POST)
    {
      /* Make POST, using post buffer, if requested. */
      if (url->upload_file && url->upload_file_ptr &&
          (!cctx->post_data || !cctx->post_data[0]))
        {
          curl_easy_setopt(handle, CURLOPT_POST, 1);
        }
      else if (cctx->post_data && cctx->post_data[0])
        {
          /* Sets POST as the HTTP request method using either:
             - POST-fields;
             - multipart form-data as in RFC 1867;
          */
          if (init_client_url_post_data (cctx, url) == -1)
            {
              fprintf (stderr, "%s - error: init_client_url_post_data() failed.\n",
                       __func__);
              return -1;
            }
        }
      else
        {
          fprintf (stderr, "%s - error: post_data is NULL.\n", __func__);
          return -1;
        }
    }

Thanks, Gary |
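The branch Gary quotes can be modeled in isolation. The sketch below is a simplification (the enum and function names are invented for illustration, not part of loader.c): it shows why an allocated-but-empty post_data falls into the error branch under the original test, and how removing the second condition routes it to the form-data path instead.

```c
/* Simplified outcomes of the POST branch in setup_curl_handle_appl().
   The enum and function names are invented for illustration only. */
enum post_setup { POST_UPLOAD_FILE, POST_FORM_DATA, POST_ERROR };

/* Original logic: the form-data path requires post_data to be
   non-NULL AND non-empty, so an allocated-but-empty string errors out. */
static enum post_setup setup_post_original(const char *upload_file,
                                           const char *post_data)
{
    if (upload_file && (!post_data || !post_data[0]))
        return POST_UPLOAD_FILE;
    else if (post_data && post_data[0])
        return POST_FORM_DATA;
    else
        return POST_ERROR;
}

/* Gary's fix: drop the second test, accepting any non-NULL post_data. */
static enum post_setup setup_post_fixed(const char *upload_file,
                                        const char *post_data)
{
    if (upload_file && (!post_data || !post_data[0]))
        return POST_UPLOAD_FILE;
    else if (post_data /* && post_data[0] */)
        return POST_FORM_DATA;
    else
        return POST_ERROR;
}
```

With upload_file absent and post_data == "", the original returns POST_ERROR while the fixed version reaches the form-data path, which matches the behavior change Gary reports.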
From: Robert I. <cor...@gm...> - 2008-10-12 06:11:43
|
Hi Sergei,

On Fri, Oct 10, 2008 at 5:11 PM, Sergei Sh <jun...@na...> wrote:
> Hi, Robert!
> How can I debug it?

Please, rebuild it by using the attached parse_conf.c file, which has more prints available:

a) substitute the existing parse_conf.c file;
b) run the commands:
   make cleanall
   make
c) run your command line as a super-user and send me the output printed to the console.

Thank you very much!

Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
...................................................................... |
From: Sergei Sh <jun...@na...> - 2008-10-10 15:08:33
|
Hi, Robert! How can I debug it? Robert Iakobashvili wrote: > Hi Sergei, > > On Fri, Oct 10, 2008 at 2:19 PM, Sergei Sh <jun...@na... > <mailto:jun...@na...>> wrote: > > Hi, Robert! > > When I remove all "#" commented lines: > > So, no difference > But it works good on Ubuntu 32bit, so I don't know what else different > here.. > > There's a PRF: > > CURL-LOADER VERSION: 0.46 > > HW DETAILS: Intel(R) Core(TM)2 Duo CPU E8200 @ 2.66GHz, MemTotal: > 4040564 kB > > LINUX DISTRIBUTION and KERNEL (uname -r):ubuntu 8.04, 2.6.24-19-server > > GCC VERSION (gcc -v):Using built-in specs. > gcc version 4.2.3 (Ubuntu 4.2.3-2ubuntu7) > > COMPILATION AND MAKING OPTIONS (if defaults changed): default > > COMMAND-LINE: ./curl=loader -f 10K.conf > > CONFIGURATION-FILE (The most common source of problems): > > > > Could you fire a debugger or add prints to see, where it breaks and > not continues > to load the config file? > Thanks! > > -- > Truly, > Robert Iakobashvili, Ph.D. > ...................................................................... > Assistive technology that understands you > ...................................................................... > ------------------------------------------------------------------------ > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's challenge > Build the coolest Linux based applications with Moblin SDK & win great prizes > Grand prize is a trip for two to an Open Source event anywhere in the world > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > ------------------------------------------------------------------------ > > _______________________________________________ > curl-loader-devel mailing list > cur...@li... > https://lists.sourceforge.net/lists/listinfo/curl-loader-devel > |
From: Robert I. <cor...@gm...> - 2008-10-10 13:25:03
|
Hi Sergei, On Fri, Oct 10, 2008 at 2:19 PM, Sergei Sh <jun...@na...> wrote: > Hi, Robert! > > When I remove all "#" commented lines: > > So, no difference > But it works good on Ubuntu 32bit, so I don't know what else different > here.. > > There's a PRF: > > CURL-LOADER VERSION: 0.46 > > HW DETAILS: Intel(R) Core(TM)2 Duo CPU E8200 @ 2.66GHz, MemTotal: > 4040564 kB > > LINUX DISTRIBUTION and KERNEL (uname -r):ubuntu 8.04, 2.6.24-19-server > > GCC VERSION (gcc -v):Using built-in specs. > gcc version 4.2.3 (Ubuntu 4.2.3-2ubuntu7) > > COMPILATION AND MAKING OPTIONS (if defaults changed): default > > COMMAND-LINE: ./curl=loader -f 10K.conf > > CONFIGURATION-FILE (The most common source of problems): > Could you fire a debugger or add prints to see, where it breaks and not continues to load the config file? Thanks! -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... Assistive technology that understands you ...................................................................... |
From: Sergei Sh <jun...@na...> - 2008-10-10 12:47:44
|
Hi, Robert!

When I remove all "#" commented lines:

geteuid() = 0
stat("./10K.conf", {st_mode=S_IFREG|0644, st_size=453, ...}) = 0
brk(0) = 0x677000
brk(0x698000) = 0x698000
open("./10K.conf", O_RDONLY) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=453, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8056ae1000
read(3, "BATCH_NAME=10K \nCLIENTS_NUM_MAX"..., 4096) = 453
read(3, "", 4096) = 0
close(3) = 0
munmap(0x7f8056ae1000, 4096) = 0
write(2, "parse_config_file - error: faile"..., 63) = 63
  parse_config_file - error: failed to load even a single batch.
write(2, "main - error: parse_config_file "..., 43) = 43
  main - error: parse_config_file () failed.
exit_group(-1) = ?
Process 8033 detached

So, no difference. But it works fine on Ubuntu 32-bit, so I don't know what else is different here.

There's a PRF:

CURL-LOADER VERSION: 0.46

HW DETAILS: Intel(R) Core(TM)2 Duo CPU E8200 @ 2.66GHz, MemTotal: 4040564 kB

LINUX DISTRIBUTION and KERNEL (uname -r): Ubuntu 8.04, 2.6.24-19-server

GCC VERSION (gcc -v): Using built-in specs.
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.2 --program-suffix=-4.2 --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 4.2.3 (Ubuntu 4.2.3-2ubuntu7)

COMPILATION AND MAKING OPTIONS (if defaults changed): default

COMMAND-LINE: ./curl=loader -f 10K.conf

CONFIGURATION-FILE (The most common source of problems): Place the file inline here: the default one:

BATCH_NAME = 10K
CLIENTS_NUM_MAX=10000
CLIENTS_NUM_START=100
CLIENTS_RAMPUP_INC=50
INTERFACE=eth0
NETMASK=16
IP_ADDR_MIN= 192.168.1.1
IP_ADDR_MAX= 192.168.53.255
CYCLES_NUM= -1
URLS_NUM= 1
URL=http://localhost/index.html
URL_SHORT_NAME="local-index"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 5000
TIMER_AFTER_URL_SLEEP =20

DOES THE PROBLEM AFFECT: COMPILATION? No; LINKING? No; EXECUTION? Yes; OTHER (please specify)?
Have you run $make cleanall prior to $make? - No

DESCRIPTION:
> Hi Sergei,
>
> On Fri, Oct 10, 2008 at 9:57 AM, Sergei Sh <ser...@qi...> wrote:
> > Hi
> > I have a problem running curl-loader on Ubuntu: Linux server 2.6.24-19-server #1 SMP Wed Aug 20 18:43:06 UTC 2008 x86_64 GNU/Linux
> > But it runs successfully on another machine: Linux private 2.6.24-19-generic #1 SMP Wed Aug 20 22:56:21 UTC 2008 i686 GNU/Linux
> >
> > when I run on the server:
> > ./curl-loader -f conf-examples/10K.conf
> >
> > parse_config_file - error: failed to load even a single batch.
> > main - error: parse_config_file () failed.
> >
> > This is a problem with both 0.44 and 0.46. But on my PC (i686) both versions run perfectly.
> > May it be a problem with x86_64? It's the only difference I can see here.
> >
> > This is from stracing:
> >
> > stat("conf-examples/10K.conf", {st_mode=S_IFREG|0644, st_size=617, ...}) = 0
> > brk(0) = 0x677000
> > brk(0x698000) = 0x698000
> > open("conf-examples/10K.conf", O_RDONLY) = 3
> > fstat(3, {st_mode=S_IFREG|0644, st_size=617, ...}) = 0
> > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f0767c0b000
> > read(3, "########### GENERAL SECTION ####"..., 4096) = 617
> > read(3, "", 4096) = 0
> > close(3) = 0
> > munmap(0x7f0767c0b000, 4096) = 0
> > write(2, "parse_config_file - error: faile"..., 63) = 63
> > write(2, "main - error: parse_config_file "..., 43) = 43
> > exit_group(-1) = ?
> >
> > --
> > Sergei Sh.
>
> Thanks for your reporting.
>
> Please, kindly, subscribe to the mailing list:
> https://lists.sourceforge.net/lists/listinfo/curl-loader-devel
> and make your further postings to the list.
> We have a Problem-Reporting-Form (PRF), which comes
> with every distribution and it helps us to trace the problems.
>
> curl-loader is developed also at x86_64 Debian (lenny) 64-bit with Intel quad-core HW.
>
> What happens, if you remove the strings starting with # (comments)?
> Which locale, etc. settings do you have, which compiler, etc. questions from the PRF?
>
> I would suspect, that the following test from parse_conf.c somehow
> goes wrong in your settings:
>
>   /* Line may be commented out by '#'.*/
>   if (fgets_buff[0] == '#')
>     {
>       // fprintf (stderr, "%s - skipping commented file string \"%s\n",
>       // __func__, fgets_buff);
>
>       continue;
>     }
>
> --
> Truly,
> Robert Iakobashvili, Ph.D.
> ................................................. |
From: Robert I. <cor...@gm...> - 2008-10-10 09:01:39
|
Hi Sergei, On Fri, Oct 10, 2008 at 9:57 AM, Sergei Sh <ser...@qi...> wrote: > Hi > I have problem running curl-loader on ubuntu: Linux server 2.6.24-19-server > #1 SMP Wed Aug 20 18:43:06 UTC 2008 x86_64 GNU/Linux > But it successful runs on other machine: Linux private 2.6.24-19-generic #1 > SMP Wed Aug 20 22:56:21 UTC 2008 i686 GNU/Linux > > when I run on the server: > ./curl-loader -f conf-examples/10K.conf - > > parse_config_file - error: failed to load even a single batch. > main - error: parse_config_file () failed. > > This is problem with both 0.44 and 0.46. But on the my PC (i686) both > versions run perfectly.. > May it be problem with x86_64?? It's only the difference I can see here. > > This is from stracing: > > stat("conf-examples/10K.conf", {st_mode=S_IFREG|0644, st_size=617, ...}) = > 0 > brk(0) = 0x677000 > brk(0x698000) = 0x698000 > open("conf-examples/10K.conf", O_RDONLY) = 3 > fstat(3, {st_mode=S_IFREG|0644, st_size=617, ...}) = 0 > mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = > 0x7f0767c0b000 > read(3, "########### GENERAL SECTION ####"..., 4096) = 617 > read(3, "", 4096) = 0 > close(3) = 0 > munmap(0x7f0767c0b000, 4096) = 0 > write(2, "parse_config_file - error: faile"..., 63) = 63 > write(2, "main - error: parse_config_file "..., 43) = 43 > exit_group(-1) = ? > > -- > Sergei Sh. > Thanks for your reporting. Please, kindly, subscribe to the mailing list: https://lists.sourceforge.net/lists/listinfo/curl-loader-devel and make your further postings to the list. We have a Problem-Reporting-Form (PRF), which comes with every distribution and it helps us to trace the problems. curl-loader is developed also at x86_64 Debian (lenny) 64-bit with Intel quad-core HW. What happens, if you remove the strings starting with # (comments)? Which locality, etc settings do you have, which compiler, etc questions from the PRF? 
I would suspect that the following test from parse_conf.c somehow goes wrong in your settings:

  /* Line may be commented out by '#'.*/
  if (fgets_buff[0] == '#')
    {
      // fprintf (stderr, "%s - skipping commented file string \"%s\n",
      // __func__, fgets_buff);

      continue;
    }

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
www.ghotit.com
Assistive technology that understands you
...................................................................... |
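As a standalone illustration of the loop Robert points to, the helper below (a hypothetical name, not a function from parse_conf.c) applies the same two rules a config-file reader needs: a '#' in the first column marks a comment, and whitespace-only lines carry no tag.

```c
#include <ctype.h>

/* Hypothetical helper mirroring the test in parse_conf.c's read loop:
   return 1 if the line holds a TAG=VALUE pair to parse, 0 if it is a
   '#' comment or contains only whitespace. */
static int config_line_is_payload(const char *fgets_buff)
{
    if (fgets_buff[0] == '#')          /* commented-out line: skip */
        return 0;

    for (const char *p = fgets_buff; *p; ++p)
        if (!isspace((unsigned char)*p))
            return 1;                  /* found a payload character */

    return 0;                          /* blank line */
}
```

If this classification worked on 32-bit but not on x86_64, the suspect would be what precedes it (the fgets buffer handling), not the '#' test itself, since the comparison is byte-wise and architecture-independent.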
From: Robert I. <cor...@gm...> - 2008-10-09 21:26:08
|
On Thu, Oct 9, 2008 at 11:18 PM, Gary Fitts <ga...@in...> wrote:
> Hello Robert,
>
> I'm finishing my URL_SET_TEMPLATE code. The last step is to grab the
> appropriate URL from the url-set just before the client needs it. I suspect
> that the place to do this is in load_fsm.c : setup_url(). Does this look
> right to you?

Sorry, no. setup_url just switches URLs upon loading. All configuration work is to be done in parse_conf.c. Please, try to look through the previous instructions, and if you have any questions, just ask them.

A point that was missed is the URL name to appear as a URL-name for each URL of the set. When dealing with a SET, you may take the URL_NAME-specified name just as a base and add e.g. some number, like "MyURLSet-0", "MyURLSet-1", ....

> Thanks,
> Gary

Best wishes, and appreciating the job that you are doing.

Truly,
Robert Iakobashvili, Ph.D.
......................................................................
www.ghotit.com
Assistive technology that understands you
...................................................................... |
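Robert's naming suggestion amounts to appending the set index to the base name. A minimal sketch (a hypothetical helper, not code from parse_conf.c):

```c
#include <stdio.h>

/* Hypothetical helper: derive the per-URL name for entry `index` of a
   URL set from the base URL name, giving "MyURLSet-0", "MyURLSet-1", ...
   Returns the length snprintf would have written. */
static int make_set_url_name(char *out, size_t outlen,
                             const char *base, int index)
{
    return snprintf(out, outlen, "%s-%d", base, index);
}
```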
From: Gary F. <ga...@in...> - 2008-10-09 21:18:25
|
Hello Robert, Yes, I saw your previous message, and I posted a followup question. My reply is on the archive website http://sourceforge.net/mailarchive/forum.php?thread_name=A01460E9-2C40-4BA3-A536-C67563D53C64%40inTouchGroup.com&forum_name=curl-loader-devel (Meanwhile, an older query that I posted on September 29 has finally appeared. This is what you're responding to here. I had given up and reposted my question, and that's the conversation we've been having.) I'll repeat my latest question: I'm finishing my URL_SET_TEMPLATE code. The last step is to grab the appropriate URL from the url-set just before the client needs it. I suspect that the place to do this is in load_fsm.c : setup_url(). Does this look right to you? Thanks, Gary

> Re: Help creating a patch for variable URLs
> From: Robert Iakobashvili <coroberti@gm...> - 2008-10-05 04:52
>
> Attachments: Message as HTML
> Hi Gary,
>
> Have you seen my previous message? |
From: Gary F. <ga...@in...> - 2008-10-08 22:30:37
|
Hello Robert, I'm finishing my URL_SET_TEMPLATE code. The last step is to grab the appropriate URL from the url-set just before the client needs it. I suspect that the place to do this is in load_fsm.c : setup_url(). Do you agree that this is the right place? Thanks, Gary |
From: Robert I. <cor...@gm...> - 2008-10-05 04:52:38
|
Hi Gary, Have you seen my previous message? On Mon, Sep 29, 2008 at 8:24 PM, Gary Fitts <ga...@in...> wrote: > Regarding a post from 2007-11-26 20:46 "Is it possible to use data > from the response header to a POST in a subsequent GET?" > > Ken Mamitsuka wanted to use a variable in the URL for a PUT. I have a > similar need -- I'd like to be able to take URLs from a list of user- > specific URLs. Perhaps something like this: > > URL = file(./user_urls1) > REQUEST_TYPE = PUT > ... > URL = file(./user_urls2) > REQUEST_TYPE = POST > > or something like that -- maybe there's a better syntax that you could > suggest. I just need each client to take its URL from the next line of > a given file. > > I do know C and I'd be glad to write a patch if you can give me some > guidance about where to look. > > Thanks! > Gary Fitts > > The URL list may be supported as described in my previous reply. PUT request is supposed to be supported as is. If it goes buggy, please, report and I will investigate the case, whether there is an issue with curl-library. Here is below my previous reply: ============================================================ Sounds great! Several people have requested something like this, and you have formulated the requirements. Let's call it URL_SET_TEMPLATE, which will have any number of %s symbols in any places of its url template, which will be read from a file URL_SET_TOKEN_FILE It presumes, that all other tags for the set URL will be the same. Files parse_conf.c contains tag_parser_pair_tp_map where the two new tag can be easily added with their handling functions. Now the matters to care about. URLS_NUM from general section should contain number of total URLs, which can come from individual URL tags as well as from URL_SET_TEMPLATE. parser of URL_SET_TEMPLATE does not allocate a URL by itself, but instead should read the template and sets a flag to wait for the next URL_SET_TOKEN_FILE. 
The parser of URL_SET_TOKEN_FILE will create N URL objects, where N is the number of rows in the file. It may keep in the batch object the two numbers: where the SET starts, and the number of the URLs in the set (N). Each next tag of the URL section (e.g. REQUEST_TYPE, TIMER_AFTER_URL_COMPLETION, etc.) should be applied to all the URL objects, which is to be signaled to the parsers of the tags by the non-zero N, or by a flag, as you wish. The next URL or URL_SET_TEMPLATE tags should take care of advancing a field in the batch object. Therefore, in url_parser (and in url_set_template_parser), instead of the current

  bctx->url_index++;

something like this should appear:

  if (!N)
    {
      bctx->url_index++;
    }
  else
    {
      url_index += N;
      N = 0;
    }

>
> 2. Build client-specific URLs from tokens returned in the body of previous
> responses.
> For instance, here's a sample client-server exchange (with most headers
> omitted):
>
> POST /login HTTP/1.1
>
> login=gu...@in...&password=guest
>
> HTTP/1.1 200 OK
>
> <?xml version="1.0" encoding="UTF-8"?>
> <user user_id="38" profile_id="3"/>
>
> PUT /users/38/profiles/3/subscriptions HTTP/1.1
>
> stuff ...
>
> In this case I'd like to be able to grab user_id and profile_id from the
> returned XML, and use these tokens to build the subsequent URL.
> Maybe the config file could look something like this:
>
> URL=http://a.xyz.com/login
> RESPONSE_TOKEN=user_id
> RESPONSE_TOKEN=profile_id
>
> URL=http://a.xyz.com/users/<user_id>/profiles/<profile_id>/subscriptions
>
> I realize that it would be hard to make this completely general, but I'm
> sure I could write response-parsing code specific to my needs.
> Any guidance would be appreciated!
> Gary

It could be made in a generic fashion. Function client_tracing_function() filters all bytes from headers as well as from bodies. Incoming body bytes pass through switch case CURLINFO_DATA_IN. The cctx (client context) object can have a set of filters and parse the passing body bytes to extract them.
Take care however about partial token, e.g. by keeping several last bytes from the previous packet, or etc method. A suggestion is for you to look at the existing mechanism delivered by URL_USE_CURRENT. More questions/ suggestions are welcomed! -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... Assistive technology that understands you ...................................................................... |
From: Robert I. <cor...@gm...> - 2008-10-03 13:17:57
|
Hi Gary, On Thu, Oct 2, 2008 at 11:28 PM, Gary Fitts <ga...@in...> wrote: > > 1. Client-specific urls built from tokens in a file. For example, here's > some possible syntax for the configuration file: > > URL=http://a.xyz.com/users/%s/profiles/%s/subscriptions > URL_TOKEN_FILE=foo > > In this case, each client would take two tokens from foo and use them to > build the > client-specific URL. > > Or, it might be simpler to build a file of complete client-specific URLs in > advance, > and take whole URLs from the file. Maybe that could look like this in the > config file: > > URL=file("foo") > Sounds great! Several people have requested something like this, and you have formulated the requirements. Let's call it URL_SET_TEMPLATE, which will have any number of %s symbols in any places of its url template, which will be read from a file URL_SET_TOKEN_FILE It presumes, that all other tags for the set URL will be the same. Files parse_conf.c contains tag_parser_pair_tp_map where the two new tag can be easily added with their handling functions. Now the matters to care about. URLS_NUM from general section should contain number of total URLs, which can come from individual URL tags as well as from URL_SET_TEMPLATE. parser of URL_SET_TEMPLATE does not allocate a URL by itself, but instead should read the template and sets a flag to wait for the next URL_SET_TOKEN_FILE. parser of URL_SET_TOKEN_FILE will create N URL objects, where N is the number of rows in the file. It may keep in the batch object the two numbers: where the SET starts, and the number of the URLs in the set (N) . Each next tag of the URL section (e.g. REQUEST_TYPE, TIMER_AFTER_URL_COMPLETION, etc) should be applied to all the URL objects, which to be signaled to the parsers of the tags by the non-zero N, or by a flag, as you wish. 
The next URL or URL_SET_TEMPLATE tags should take care of advancing a field in the batch object. Therefore, in url_parser (and in url_set_template_parser), instead of the current

  bctx->url_index++;

something like this should appear:

  if (!N)
    {
      bctx->url_index++;
    }
  else
    {
      url_index += N;
      N = 0;
    }

>
> 2. Build client-specific URLs from tokens returned in the body of previous
> responses.
> For instance, here's a sample client-server exchange (with most headers
> omitted):
>
> POST /login HTTP/1.1
>
> login=gu...@in...&password=guest
>
> HTTP/1.1 200 OK
>
> <?xml version="1.0" encoding="UTF-8"?>
> <user user_id="38" profile_id="3"/>
>
> PUT /users/38/profiles/3/subscriptions HTTP/1.1
>
> stuff ...
>
> In this case I'd like to be able to grab user_id and profile_id from the
> returned XML, and use these tokens to build the subsequent URL.
> Maybe the config file could look something like this:
>
> URL=http://a.xyz.com/login
> RESPONSE_TOKEN=user_id
> RESPONSE_TOKEN=profile_id
>
> URL=http://a.xyz.com/users/<user_id>/profiles/<profile_id>/subscriptions
>
> I realize that it would be hard to make this completely general, but I'm
> sure I could write response-parsing code specific to my needs.
> Any guidance would be appreciated!
> Gary

It could be made in a generic fashion. Function client_tracing_function() filters all bytes from headers as well as from bodies. Incoming body bytes pass through switch case CURLINFO_DATA_IN. The cctx (client context) object can have a set of filters and parse the passing body bytes to extract them. Take care, however, about a partial token, e.g. by keeping the several last bytes from the previous packet, or a similar method. A suggestion is for you to look at the existing mechanism delivered by URL_USE_CURRENT.

More questions / suggestions are welcomed!

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
www.ghotit.com Assistive technology that understands you ...................................................................... |
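Robert's index-advance pseudocode can be written out against a minimal, hypothetical slice of the batch context (the field names here are illustrative; the real bctx in curl-loader has many more members, and set_size plays the role of N):

```c
/* Hypothetical slice of the batch context; set_size plays the role of N. */
struct batch_ctx {
    int url_index;  /* index of the URL object being configured */
    int set_size;   /* N: URL objects created from URL_SET_TOKEN_FILE, 0 if none */
};

/* Advance past the current URL, or past the whole pending set,
   as Robert sketches for url_parser() / url_set_template_parser(). */
static void advance_url_index(struct batch_ctx *bctx)
{
    if (bctx->set_size == 0)
        bctx->url_index++;                  /* plain URL tag */
    else {
        bctx->url_index += bctx->set_size;  /* skip over the N set entries */
        bctx->set_size = 0;                 /* the set is now consumed */
    }
}
```

The point of resetting set_size to zero is that the very next URL tag after a set behaves like a plain single-URL tag again.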
From: Gary F. <ga...@in...> - 2008-10-02 20:28:50
|
> Where exactly is the text you need to grab? I an header or > in body. Please, a bit more details. I'd like to do two things. Both involve constructing client-specific URLS. If the second is too hard, I could probably make do with the first. 1. Client-specific urls built from tokens in a file. For example, here's some possible syntax for the configuration file: URL=http://a.xyz.com/users/%s/profiles/%s/subscriptions URL_TOKEN_FILE=foo In this case, each client would take two tokens from foo and use them to build the client-specific URL. Or, it might be simpler to build a file of complete client-specific URLs in advance, and take whole URLs from the file. Maybe that could look like this in the config file: URL=file("foo") 2. Build client-specific URLs from tokens returned in the body of previous responses. For instance, here's a sample client-server exchange (with most headers omitted): POST /login HTTP/1.1 login=gu...@in...&password=guest HTTP/1.1 200 OK <?xml version="1.0" encoding="UTF-8"?> <user user_id="38" profile_id="3"/> PUT /users/38/profiles/3/subscriptions HTTP/1.1 stuff ... In this case I'd like to be able to grab user_id and profile_id from the returned XML, and use these tokens to build the subsequent URL. Maybe the config file could look something like this: URL=http://a.xyz.com/login RESPONSE_TOKEN=user_id RESPONSE_TOKEN= profile_id URL=http://a.xyz.com/users/<user_id>/profiles/<profile_id>/subscriptions I realize that it would be hard to make this completely general, but I'm sure I could write response-parsing code specific to my needs. Any guidance would be appreciated! Gary On Oct 2, 2008, at 11:28 AM, Robert Iakobashvili wrote: > Hi Gary, > > On Thu, Oct 2, 2008 at 9:16 PM, Gary Fitts <ga...@in...> > wrote: >> Hello list, >> There seems to have been no followup to this post from last >> November. I have a similar need, I know C and I'm willing to >> write a patch. Has anyone tried something similar? 
> > Where exactly is the text you need to grab? I an header or > in body. Please, a bit more details. > >> Any guidance? > > I will guide you. > >> Thanks, >> Gary Fitts |
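For Gary's first scheme, the token substitution itself is straightforward. A hypothetical helper (the template literal and token values are examples, not curl-loader configuration syntax) could compose the final per-client URL with snprintf():

```c
#include <stdio.h>

/* Hypothetical helper: expand a two-token URL template of the kind
   proposed above, substituting the per-client tokens for the %s
   placeholders.  The host and path are illustrative only. */
static int build_client_url(char *out, size_t outlen,
                            const char *user_id, const char *profile_id)
{
    return snprintf(out, outlen,
                    "http://a.xyz.com/users/%s/profiles/%s/subscriptions",
                    user_id, profile_id);
}
```

A generic implementation would instead walk the template string for %s markers, since the number of tokens varies per template.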
From: Robert I. <cor...@gm...> - 2008-10-02 18:29:37
|
Hi Gary,

On Thu, Oct 2, 2008 at 9:16 PM, Gary Fitts <ga...@in...> wrote:
> Hello list,
> There seems to have been no followup to this post from last
> November. I have a similar need, I know C and I'm willing to
> write a patch. Has anyone tried something similar?

Where exactly is the text you need to grab? In a header or in the body? Please, a bit more details.

> Any guidance?

I will guide you.

> Thanks,
> Gary Fitts

Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
...................................................................... |
From: Gary F. <ga...@in...> - 2008-10-02 18:16:21
|
Hello list,

There seems to have been no followup to this post from last November. I have a similar need, I know C, and I'm willing to write a patch. Has anyone tried something similar? Any guidance?

Thanks, Gary Fitts

> From: Robert Iakobashvili <coroberti@gm...> - 2007-11-26 20:56
> Hi Ken,
>
> On Nov 26, 2007 10:46 PM, Ken Mamitsuka <kenm@sh...> wrote:
>
> > Hi, we have a web app that first takes a POST to a generic URL and returns
> > an id in the POST response header. I'd like to use curl-loader to grab that
> > unique id and then subsequently PUT to that id. From a straight curl
> > perspective, it'd look like:
>
> If you know C and are willing to write a patch, I can guide you about what is
> necessary to do for that. |
From: Gary F. <ga...@in...> - 2008-09-29 19:24:29
|
Regarding a post from 2007-11-26 20:46, "Is it possible to use data from the response header to a POST in a subsequent GET?"

Ken Mamitsuka wanted to use a variable in the URL for a PUT. I have a similar need -- I'd like to be able to take URLs from a list of user-specific URLs. Perhaps something like this:

URL = file(./user_urls1)
REQUEST_TYPE = PUT
...
URL = file(./user_urls2)
REQUEST_TYPE = POST

or something like that -- maybe there's a better syntax that you could suggest. I just need each client to take its URL from the next line of a given file.

I do know C and I'd be glad to write a patch if you can give me some guidance about where to look.

Thanks!
Gary Fitts |
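The per-client list idea boils down to "give client i line i of a file". A hypothetical helper (not part of curl-loader; the file layout of one URL per line is an assumption) sketches that lookup:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: fetch line `n` (0-based) of a token file into
   `out`, stripping the trailing newline -- one way "each client takes
   its URL from the next line" could be served, with client number i
   reading line i.  Returns 0 on success, -1 on error or short file. */
static int read_token_line(const char *path, int n, char *out, size_t outlen)
{
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;

    int last = -1;
    for (int i = 0; i <= n && fgets(out, (int)outlen, f); ++i)
        last = i;                      /* index of the line just read */
    fclose(f);

    if (last != n)
        return -1;                     /* file has fewer than n+1 lines */
    out[strcspn(out, "\n")] = '\0';
    return 0;
}
```

Re-opening and re-scanning the file per client is simple but O(n) per lookup; a real implementation would more likely read all lines once at configuration time, which is also what Robert's URL_SET_TOKEN_FILE design does.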
From: Robert I. <cor...@gm...> - 2008-09-28 04:14:38
|
Gentlemen, Daniel Stenberg, a developer and maintainer of curl and libcurl, has asked recently at his lists about the names of companies, that are using curl/libcurl. Since curl-loader is embedding curl/libcurl, could you, please, say the names of companies, and, if allowed, describe the projects in just a few words? Thanks. I will concentrate the input and forward it to Daniel for his web listing. -- Truly, Robert Iakobashvili, Ph.D. ...................................................................... www.ghotit.com Assistive technology that understands you ...................................................................... |
From: Robert I. <cor...@gm...> - 2008-09-25 10:33:13
|
Hi Jos, Good news, just keep posting to the mailing list. This is, where PRF form comes to assistance. The latest version of curl-loader has fixed our bug with handling proxies + upgraded libcurl library. Best wishes, Robert. On Thu, Sep 25, 2008 at 12:13 PM, Jos Andel <jos...@te...> wrote: > > Hi Robert, > > I've tried 0.46, and seems to work fine with re-using connections! > I've used the exact same configuration, and connections keep alive now. > And I get much more CAPS of course :-) > > Bye, Jos > > > "Robert Iakobashvili" <cor...@gm...> wrote on 09/23/2008 02:53:34 > PM: > > > Hi Jos, > > > > On Tue, Sep 23, 2008 at 3:31 PM, Jos Andel <jos...@te...> wrote: > > > > > > Hi, > > > > > > I'm trying to test my Bluecoat proxies with curl-loader. But I don't > seem > > > to get reusing connections to work. After every get my client closes > the > > > connection. With tcpdump I can quite clearly see that 'Connection > Keepalive' > > > is enabled, but curl-loader closes the connection anyway. > > > > Please, fill the PROBLEM-REPORTING-FORM > > > > > > > > I'm doing something like this: > > > > > > ########### GENERAL SECTION ################################ > > > > > > BATCH_NAME= bluecoat1 > > > CLIENTS_NUM_MAX=33 # Same as CLIENTS_NUM > > > CLIENTS_NUM_START=3 > > > CLIENTS_RAMPUP_INC=3 > > > INTERFACE=eth3 > > > NETMASK=27 > > > IP_ADDR_MIN= 10.48.128.13 > > > IP_ADDR_MAX= 10.48.128.25 > > > #IP_SHARED_NUM=25 > > > IP_SHARED_NUM=13 > > > CYCLES_NUM= -1 > > > URLS_NUM= 1 > > > > > > ########### URL SECTION #################################### > > > > > > URL=http://www.jubitketentest.nl/index0.html > > > REQUEST_TYPE=GET > > > TIMER_URL_COMPLETION = 0 > > > TIMER_AFTER_URL_SLEEP = 0 > > > > > > > > > http_proxy is set to the proxy, and seems to work just fine. > > > > > > I start with: curl-loader -f jos.conf > > > (so no -r). > > > > -r is relevant only for TCP level, not for HTTP. > > > > > Am I doing something wrong? 
> > > > > > Kind regards, > > > > > > Jos Andel > > > Tele2 Netherlands > > > > > > > Please, make a capture for a single client, one IP and a couple cycles. > > Are you sure, that curl-loader is sending FIN or RST? > > > > Is it HTTP1/1? > > Is it chunked-encoding or Connection-Close? > > > > In other words, worth looking at the capture. > > > > And another question is that whatever is the policy at your server (KA > > or not KA) > > does not actually mind. If a client will send HTTP header "Connection: > close", > > HTTP 1/1 server will close such connection after any non 1-xx reply. > > > > Therefore, the worst case testing scenario for server is for closing > > connections. > > > > Take care! > > > > Truly, > > Robert Iakobashvili, Ph.D. > > ...................................................................... > > www.ghotit.com > > Assistive technology that understands you > > ...................................................................... > > > ******** IMPORTANT NOTICE ******** > This e-mail (including any attachments) may contain information that is > confidential or otherwise protected from disclosure and it is intended only > for the addressees. If you are not the intended recipient, please note that > any copying, distribution or other use of information contained in this > e-mail (and its attachments) is not allowed. If you have received this > e-mail in error, kindly notify us immediately by telephone or e-mail and > delete the message (including any attachments) from your system. > > Please note that e-mail messages may contain computer viruses or other > defects, may not be accurately replicated on other systems, or may be > subject of unauthorized interception or other interference without the > knowledge of sender or recipient. Tele2 only send and receive e-mails on the > basis that Tele2 is not responsible for any such computer viruses, > corruption or other interference or any consequences thereof. > -- Truly, Robert Iakobashvili, Ph.D. 
......................................................................
www.ghotit.com
Assistive technology that understands you
......................................................................
|
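The keep-alive question in the thread above can also be checked outside curl-loader with a short self-contained sketch (standard-library Python, with a throwaway local server made up for illustration): when HTTP/1.1 keep-alive works, both requests travel over the same client-side TCP connection, so the client's local port stays the same, which is exactly what a tcpdump capture should show.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

class KeepAliveHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 keeps connections open by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        # Content-Length is required for the connection to stay open
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

def count_local_ports(n_requests=2):
    """Issue n_requests GETs over one HTTPConnection and report how many
    distinct local TCP ports were used (1 means the connection was reused)."""
    server = HTTPServer(("127.0.0.1", 0), KeepAliveHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        conn = HTTPConnection("127.0.0.1", server.server_address[1])
        ports = set()
        for _ in range(n_requests):
            conn.request("GET", "/")
            resp = conn.getresponse()
            resp.read()  # the body must be drained before the socket can be reused
            ports.add(conn.sock.getsockname()[1])
        conn.close()
        return len(ports)
    finally:
        server.shutdown()
```

If the server (or client) dropped the connection after each reply, `http.client` would reconnect on the next request and a fresh ephemeral port would appear, so the returned count would exceed 1.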
From: Vlad W. <wvl...@gm...> - 2008-09-25 07:59:27
|
Hi Alexander,

The "curl -k ..." option is already enabled by the following call in loader.c:

curl_easy_setopt (handle, CURLOPT_SSL_VERIFYPEER, 0);

The problem may be related to an inconsistency between the client and server SSL configurations, such as missing supported ciphers or a disabled protocol. I had a similar problem when my server did not work with SSLv2 and failed to handle the CLIENT_HELLO, which is in SSLv2 format by default. Since curl required the "-3" option, I added the following line to loader.c (just before setting CURLOPT_SSL_VERIFYPEER):

curl_easy_setopt (handle, CURLOPT_SSLVERSION, 3);

In general, if curl-loader is linked with the same openssl libraries as curl, it should work on the SSL layer just like curl does. Note that SSL errors are shown in the log when curl-loader is running with the verbose option, in the usual OpenSSL format (as with "curl -v -k https://..."). It's a very useful thing.

Thanks,
Vlad

On Sun, Sep 21, 2008 at 11:05 PM, Robert Iakobashvili <cor...@gm...> wrote:
> Hi Alexander,
>
> On Sun, Sep 21, 2008 at 9:21 PM, Bradley Alexander <st...@tu...> wrote:
> >
> > Hi,
> >
> > I'm having a problem with curl-loader. I have tried both 0.44 and 0.46.
> > I did a default build of both on my test machine. I modified the
> > https.conf file and pointed it to the target system, and am experiencing
> > timeouts, as if it is not connecting. I ran a curl -k to the site, and
> > the page loads fine, which leads me to believe it may be the way
> > curl-loader is mangling the URL.
>
> Please, collect the facts prior to conclusions.
>
> > If I run:
> >
> > curl -k "https://target.example.com/index.php?NASID=test-nsaid&NASIP=accessctlr.example.com&&CallerID=00:1C:26:19:0C:87&CustIP=192.168.2.248&VLAN=2&RD=http%3A%2Fen-us.start2.mozilla.com%2Ffirefox%3Fclient%3Dfirefox-a&LgnURL=https://accessctlr.example.com/goform/HtmlLoginRequest.php"
> >
> > it gives me the login page. However, my curl-loader config has the
> > following:
> >
> > BATCH_NAME= https
> > CLIENTS_NUM_MAX=20 # Same as CLIENTS_NUM
> > CLIENTS_NUM_START=2
> > CLIENTS_RAMPUP_INC=2
> > INTERFACE =eth0
> > NETMASK=24
> > IP_ADDR_MIN= 10.56.57.1
> > IP_ADDR_MAX= 10.56.57.254
> > CYCLES_NUM=-1
> > URLS_NUM= 1
> > LOG_RESP_HEADERS=1
> > LOG_RESP_BODIES=1
> >
> > ########### URL SECTION ####################################
> >
> > URL="https://target.example.com/index.php?NASID=test-nsaid&NASIP=accessctlr.example.com&&CallerID=00:1C:26:19:0C:87&CustIP=192.168.2.248&VLAN=2&RD=http%3A%2Fen-us.start2.mozilla.com%2Ffirefox%3Fclient%3Dfirefox-a&LgnURL=https://accessctlr.example.com/goform/HtmlLoginRequest.php"
> > URL_SHORT_NAME="target"
> > REQUEST_TYPE=GET
> > TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
> > TIMER_AFTER_URL_SLEEP =1000
> >
> > I have run tcpdumps on the test box and the target box. The test box
> > shows the curl-loader packets going out, but nothing coming back. The
> > target box does not show any traffic from the test box at all.
> >
> > Can someone point out what it is that I am missing?
>
> First, fill the PROBLEM-REPORTING-FORM (PRF) in due course, since your
> data is not enough.
> Second, include -v in your curl-loader command line (it is unclear what
> your command line is).
> Third, try a non-https URL, e.g. some http URL, with a single client.
>
> Fourth, read the FAQs at the web-site regarding the batch log file. You
> may wish to collect such a file with option -v and attach it to your PRF
> in gzipped form. A filtered tcpdump or wireshark capture could also be
> helpful.
>
> If your problem is in some SSL/TLS handshake, we'll see a complaint about
> it in the log file. Please, run curl-loader for the log with a single
> configured client, a single IP and a single cycle.
>
> From the curl man pages:
> ----------------------------------------------------------------------------------------------------------------
> -k/--insecure
>
> (SSL) This option explicitly allows curl to perform "insecure" SSL
> connections and transfers. All SSL connections are attempted to be
> made secure by using the CA certificate bundle installed by default.
> This makes all connections considered "insecure" fail unless
> -k/--insecure is used.
>
> See this online resource for further details:
> http://curl.haxx.se/docs/sslcerts.html
> ----------------------------------------------------------------------------------------------------------------------
>
> What happens with your curl command line fetch, if you remove -k?
>
> Vlad Wainbaum has recently made some patching for a similar issue.
> Vlad, could you guide us regarding your practice?
>
> Thanks. Truly,
> Robert Iakobashvili, Ph.D.
> ......................................................................
> www.ghotit.com
> Assistive technology that understands you
> ......................................................................
>
> -------------------------------------------------------------------------
> This SF.Net email is sponsored by the Moblin Your Move Developer's
> challenge
> Build the coolest Linux based applications with Moblin SDK & win great
> prizes
> Grand prize is a trip for two to an Open Source event anywhere in the world
> http://moblin-contest.org/redirect.php?banner_id=100&url=/
> _______________________________________________
> curl-loader-devel mailing list
> cur...@li...
> https://lists.sourceforge.net/lists/listinfo/curl-loader-devel
|
From: Robert I. <cor...@gm...> - 2008-09-21 20:05:50
|
Hi Alexander,

On Sun, Sep 21, 2008 at 9:21 PM, Bradley Alexander <st...@tu...> wrote:
>
> Hi,
>
> I'm having a problem with curl-loader. I have tried both 0.44 and 0.46.
> I did a default build of both on my test machine. I modified the
> https.conf file and pointed it to the target system, and am experiencing
> timeouts, as if it is not connecting. I ran a curl -k to the site, and
> the page loads fine, which leads me to believe it may be the way
> curl-loader is mangling the URL.

Please, collect the facts prior to conclusions.

> If I run:
>
> curl -k "https://target.example.com/index.php?NASID=test-nsaid&NASIP=accessctlr.example.com&&CallerID=00:1C:26:19:0C:87&CustIP=192.168.2.248&VLAN=2&RD=http%3A%2Fen-us.start2.mozilla.com%2Ffirefox%3Fclient%3Dfirefox-a&LgnURL=https://accessctlr.example.com/goform/HtmlLoginRequest.php"
>
> it gives me the login page. However, my curl-loader config has the following:
>
> BATCH_NAME= https
> CLIENTS_NUM_MAX=20 # Same as CLIENTS_NUM
> CLIENTS_NUM_START=2
> CLIENTS_RAMPUP_INC=2
> INTERFACE =eth0
> NETMASK=24
> IP_ADDR_MIN= 10.56.57.1
> IP_ADDR_MAX= 10.56.57.254
> CYCLES_NUM=-1
> URLS_NUM= 1
> LOG_RESP_HEADERS=1
> LOG_RESP_BODIES=1
>
> ########### URL SECTION ####################################
>
> URL="https://target.example.com/index.php?NASID=test-nsaid&NASIP=accessctlr.example.com&&CallerID=00:1C:26:19:0C:87&CustIP=192.168.2.248&VLAN=2&RD=http%3A%2Fen-us.start2.mozilla.com%2Ffirefox%3Fclient%3Dfirefox-a&LgnURL=https://accessctlr.example.com/goform/HtmlLoginRequest.php"
> URL_SHORT_NAME="target"
> REQUEST_TYPE=GET
> TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
> TIMER_AFTER_URL_SLEEP =1000
>
> I have run tcpdumps on the test box and the target box. The test box
> shows the curl-loader packets going out, but nothing coming back. The
> target box does not show any traffic from the test box at all.
>
> Can someone point out what it is that I am missing?

First, fill the PROBLEM-REPORTING-FORM (PRF) in due course, since your data
is not enough.
Second, include -v in your curl-loader command line (it is unclear what your
command line is).
Third, try a non-https URL, e.g. some http URL, with a single client.

Fourth, read the FAQs at the web-site regarding the batch log file. You may
wish to collect such a file with option -v and attach it to your PRF in
gzipped form. A filtered tcpdump or wireshark capture could also be helpful.

If your problem is in some SSL/TLS handshake, we'll see a complaint about it
in the log file. Please, run curl-loader for the log with a single configured
client, a single IP and a single cycle.

From the curl man pages:
----------------------------------------------------------------------------------------------------------------
-k/--insecure

(SSL) This option explicitly allows curl to perform "insecure" SSL
connections and transfers. All SSL connections are attempted to be
made secure by using the CA certificate bundle installed by default.
This makes all connections considered "insecure" fail unless
-k/--insecure is used.

See this online resource for further details:
http://curl.haxx.se/docs/sslcerts.html
----------------------------------------------------------------------------------------------------------------------

What happens with your curl command line fetch, if you remove -k?

Vlad Wainbaum has recently made some patching for a similar issue.
Vlad, could you guide us regarding your practice?

Thanks. Truly,
Robert Iakobashvili, Ph.D.
......................................................................
www.ghotit.com
Assistive technology that understands you
......................................................................
|
From: Robert I. <cor...@gm...> - 2008-09-15 11:40:35
|
Hi KM,

On Fri, Sep 12, 2008 at 4:40 AM, kewlemer <kew...@gm...> wrote:
> First off many thanks to all the contributors to curl-loader - it
> really is a cool tool.

Thanks. Please, note that you need to fill the PROBLEM-REPORTING-FORM for
any questions.

> In the conf-examples directory and the mail threads I've noticed that the
> URL section usually has a handful of URLs. For my testing I have a .txt
> file with close to 1000 URLs which I would like to integrate in the .conf
> file. I have two questions in this regard -
>
> 1. What's the easiest way to integrate all the URLs with related fields
> like REQUEST_TYPE? Can I simply throw in my 1000 URLs and have a common
> set of other URL parameters?

No. Follow the guidelines in due course.

> This thread suggests that grouping like this may be incorrect but it's
> not clear -
> http://sourceforge.net/mailarchive/message.php?msg_name=7e63f56c0807012108p311b0c1cwfd5faa53d7516eed%40mail.gmail.com
> Can anyone please confirm if such grouping is valid?

Confirming that such grouping is not valid. Create some script to handle
writing of the configuration. Somebody handled similar cases, but did not
release the scripts, here:
http://barelyenough.org/blog/2008/03/load-testing-and-virtualization-tools/

> 2. Is there a limit on the number of URLs that can be used?

The number of URLs is not limited. The issue to care about is that the
screen output may be garbled, since you most probably do not have enough
screen space for 1000 URLs. If you have any such issues, you may comment out
the code that prints URL statistics to your screen. The code is in file
statistics.c, function dump_snapshot_interval_and_advance_total_statistics();
comment out the call to print_operational_statistics ():

/*
print_operational_statistics (&bctx->op_delta, &bctx->op_total, bctx->url_ctx_array);
*/

Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
......................................................................
|
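Robert's suggestion to script the configuration can be sketched as follows. This is a minimal generator in Python that expands a flat URL list into per-URL sections, each carrying its own parameter set (since grouping many URLs under one shared set is not valid). The tag names are taken from the conf-examples shown in these threads; check them against your curl-loader version before relying on this.

```python
def write_batch_urls(urls, request_type="GET", sleep_ms=0):
    """Expand a flat list of URLs into per-URL curl-loader sections.

    Each URL= line gets its own URL_SHORT_NAME, REQUEST_TYPE and timer
    tags, mirroring the per-URL layout used in the conf-examples.
    Returns the URL-section text, to be appended to a hand-written
    GENERAL section.
    """
    lines = [f"URLS_NUM={len(urls)}", ""]
    for i, url in enumerate(urls):
        lines += [
            f"URL={url}",
            f'URL_SHORT_NAME="url-{i}"',
            f"REQUEST_TYPE={request_type}",
            "TIMER_URL_COMPLETION=0",
            f"TIMER_AFTER_URL_SLEEP={sleep_ms}",
            "",  # blank line between sections
        ]
    return "\n".join(lines)

# Typical use: read the 1000-URL .txt file and append the generated
# sections to a batch file that already contains the general section.
# with open("urls.txt") as f:
#     section = write_batch_urls(f.read().split())
```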
From: kewlemer <kew...@gm...> - 2008-09-12 01:40:42
|
Hi all,

First off, many thanks to all the contributors to curl-loader - it really is
a cool tool. I'm trying to test web proxy cache performance using
curl-loader. In the conf-examples directory and the mail threads I've
noticed that the URL section usually has a handful of URLs. For my testing I
have a .txt file with close to 1000 URLs which I would like to integrate
into the .conf file. I have two questions in this regard -

1. What's the easiest way to integrate all the URLs with related fields like
REQUEST_TYPE? Can I simply throw in my 1000 URLs and have a common set of
other URL parameters? This thread suggests that grouping like this may be
incorrect, but it's not clear -
http://sourceforge.net/mailarchive/message.php?msg_name=7e63f56c0807012108p311b0c1cwfd5faa53d7516eed%40mail.gmail.com
Can anyone please confirm if such grouping is valid?

2. Is there a limit on the number of URLs that can be used?

Thanks in advance,
KM
|
From: Robert I. <cor...@gm...> - 2008-09-03 18:08:20
|
Hi folks,

Released: version 0.46, unstable, Sep 3, 2008

* Advanced to curl-7.19.0 with more relevant fixes, particularly in the
areas of multi-transfer and FTP bug fixes.
* Compilation fix for some linux distros, like recent gentoo, by adding
#include <limits.h> in ip_secondary.c. Thanks to Aleksandar Lazic
<al-...@no...> for reporting it.
* Proxy authentication bug fixed by Francois Pesce <fra...@gm...>.
curl-loader is now tested and works with proxies.

Take care and have a nice curl-loading! :-)

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
......................................................................
|
From: Robert I. <cor...@gm...> - 2008-07-21 04:44:21
|
Hi Chirag,

On Mon, Jul 21, 2008 at 1:53 AM, Chirag Patel <pat...@ya...> wrote:
> Before I tried curl-loader, I wanted to make sure I could use my normal
> cURL switches (like the example below). Could someone send me an example of
> how to run the below cURL command with curl-loader?
>
> Thanks! chirag
>
> curl -v -H "Content-Type: text/xml" -d
> "<battery_unplugged><device_id>2</device_id><percentage>23</percentage><timestamp>Mon
> Dec 25 15:52:55 -0600
> 2007</timestamp><time_remaining>231</time_remaining><user_id>1</user_id></battery_unplugged>"
> "http://localhost:3000/battery_unpluggeds?gateway_id=1&auth=9ad3cad0f0e130653ec377a47289eaf7f22f83edb81e406c7bd7919ea725e024

No, curl-loader does not work with curl switches; it has its own
configuration-file format. Read the FAQs here:
http://curl-loader.sourceforge.net/doc/faq.html

Examples of configuration (batch) files are in the tarball, where you may
take a POST-ing example, place your XML in a file, and work from there.
There is also a tag to add/modify headers, similar to libcurl.

Take care.

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
......................................................................
|
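As a rough sketch of what Robert describes, the curl command in this thread might map onto a batch file along the following lines: the XML body from -d goes into a separate file, and a header tag replaces -H. The HEADER and UPLOAD_FILE tag names here are assumptions for illustration, not verified against a specific curl-loader version, so check them against the FAQ and the conf-examples in the tarball before use.

```
########### GENERAL SECTION ################################
BATCH_NAME=battery_post
CLIENTS_NUM_MAX=1
CLIENTS_NUM_START=1
CLIENTS_RAMPUP_INC=1
INTERFACE=eth0
NETMASK=24
IP_ADDR_MIN=127.0.0.1
IP_ADDR_MAX=127.0.0.1
CYCLES_NUM=1
URLS_NUM=1

########### URL SECTION ####################################
URL=http://localhost:3000/battery_unpluggeds?gateway_id=1&auth=9ad3cad0f0e130653ec377a47289eaf7f22f83edb81e406c7bd7919ea725e024
URL_SHORT_NAME="battery"
REQUEST_TYPE=POST
HEADER="Content-Type: text/xml"
UPLOAD_FILE="battery_unplugged.xml"
TIMER_URL_COMPLETION=0
TIMER_AFTER_URL_SLEEP=0
```

Here battery_unplugged.xml would contain the XML payload passed to -d in the curl command, and the batch would be run with something like curl-loader -v -f battery_post.conf.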
From: Chirag P. <pat...@ya...> - 2008-07-20 22:53:31
|
Before I tried curl-loader, I wanted to make sure I could use my normal cURL
switches (like the example below). Could someone send me an example of how
to run the below cURL command with curl-loader?

Thanks! chirag

curl -v -H "Content-Type: text/xml" -d "<battery_unplugged><device_id>2</device_id><percentage>23</percentage><timestamp>Mon Dec 25 15:52:55 -0600 2007</timestamp><time_remaining>231</time_remaining><user_id>1</user_id></battery_unplugged>" "http://localhost:3000/battery_unpluggeds?gateway_id=1&auth=9ad3cad0f0e130653ec377a47289eaf7f22f83edb81e406c7bd7919ea725e024
|
From: Robert I. <cor...@gm...> - 2008-07-02 04:08:07
|
Hi Pranav,

On Tue, Jul 1, 2008 at 10:55 PM, Pranav Desai <pra...@gm...> wrote:
> Hello,
>
> I am trying to simulate browser behavior for accessing a front page
> (e.g. www.cnn.com). From traces I see that a few requests go over the
> same TCP connections (persistence) and in general there are a few TCP
> connections made for completely fetching the whole front page.
>
> config file
> -----------
> BATCH_NAME=test_load
> CLIENTS_NUM_MAX=1 # Same as CLIENTS_NUM
> CLIENTS_NUM_START=1
> CLIENTS_RAMPUP_INC=2
> INTERFACE =eth1
> NETMASK=16
> IP_ADDR_MIN= 12.0.0.1
> IP_ADDR_MAX= 12.0.16.250 #Actually - this is for self-control
> CYCLES_NUM=1
> URLS_NUM=14
>
> ########### URL SECTION ####################################
>
> URL=http://192.168.55.205/websites/cisco/www.cisco.com/cdc_content_elements/flash/home/sp_072307/spotlight/sp_webEx_tn.jpg
> URL=http://192.168.55.205/websites/cisco/www.cisco.com/cdc_content_elements/flash/home/sp_072307/spotlight/sp_CIN.swf
> FRESH_CONNECT=1
>
> URL=http://192.168.55.205/websites/cisco/www.cisco.com/cdc_content_elements/flash/home/sp_072307/spotlight/sp_webEx.swf
> URL=http://192.168.55.205/websites/cisco/www.cisco.com/cdc_content_elements/flash/home/sp_072307/spotlight/sp_humanN_anthem_tn.jpg
> URL=http://192.168.55.205/websites/MOO/backfeed10.gif
> TIMER_AFTER_URL_SLEEP=2000-5000
> URL=http://192.168.55.205/websites/MOO/bottom.gif
> URL=http://192.168.55.205/websites/MOO/style3.css
> URL=http://192.168.55.205/websites/MOO/topright.gif
> FRESH_CONNECT=1
>
> URL=http://192.168.55.205/websites/MOO/rss_smaller.gif
> URL=http://192.168.55.205/websites/MOO/index.html.6
> TIMER_AFTER_URL_SLEEP=2000-5000
>
> URL=http://192.168.55.205/websites/cnn/www.cnn.com/.element/ssi/www/breaking_news/2.0/banner.html
> URL=http://192.168.55.205/websites/cnn/www.cnn.com/.element/ssi/auto/2.0/sect/MAIN/ftpartners/partner.people.html
> TIMER_AFTER_URL_SLEEP=2000-5000
> URL=http://192.168.55.205/websites/cnn/www.cnn.com/.element/ssi/auto/2.0/sect/MAIN/ftpartners/partner.money.txt
> URL=http://192.168.55.205/websites/cnn/www.cnn.com/.element/img/2.0/global/icons/video_icon.gif

I am not sure that the syntax of the URL section you are using is correct.
Please, try to re-write the configuration file, keeping for each URL section
its own parameters, something like below:

URL=http://localhost/ACE-INSTALL.html # http://localhost/apache2-default/ACE-INSTALL.html
URL_SHORT_NAME="ACE"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 0
TIMER_AFTER_URL_SLEEP =0
FRESH_CONNECT=1

URL=http://localhost/index.html
URL_SHORT_NAME="INDEX"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
TIMER_AFTER_URL_SLEEP =1000
FRESH_CONNECT=1

From the HTTP point of view, GET requests may still go via the same TCP/IP
connection. What we should expect from FRESH_CONNECT=1 is that the
connection will be closed and re-established at each loading cycle. In
general, connections are the business of the libcurl library, which keeps
some 5-10 connections minimum; this is governed by the MAX_CONNECTIONS flag
logic. What happens with the connection policy when you try, e.g., 50 or 100
virtual clients?

--
Truly,
Robert Iakobashvili, Ph.D.
......................................................................
Assistive technology that understands you
......................................................................
|