curl-loader-devel Mailing List for curl-loader - web application testing (Page 39)
From: Robert I. <cor...@gm...> - 2007-07-01 08:33:31
Michael,

    +install:
    +	cp -f ./curl-loader /usr/bin
    +	cp -f curl-loader.1 /usr/share/man/man1/curl-loader.1
    +
    +

Cool! Could you please also have install copy our documentation to some appropriate place, namely:

    README
    THANKS
    PROBLEM-REPORTING
    the ./conf-examples directory

Please also update our THANKS file to include BuraphaLinux Server <bur...@gm...>

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
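A rough sketch of what the extended install step might run, following the request above. The documentation file names come from the message; the destination directory /usr/share/doc/curl-loader is an assumption, not the project's actual layout:

    # hypothetical extension of the install target's commands; adjust paths to taste
    mkdir -p /usr/share/doc/curl-loader/conf-examples
    cp -f README THANKS PROBLEM-REPORTING /usr/share/doc/curl-loader/
    cp -f ./conf-examples/*.conf /usr/share/doc/curl-loader/conf-examples/
    cp -f ./curl-loader /usr/bin
    cp -f curl-loader.1 /usr/share/man/man1/curl-loader.1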
From: Robert I. <cor...@gm...> - 2007-06-26 17:40:51
On 6/26/07, BuraphaLinux Server <bur...@gm...> wrote:
>     make curl-loader.1
>     groff -man -Tascii curl-loader.1 | less -ic
>
> The most critical part of the man page is that content; the
> formatting doesn't matter if the content is wrong, and if the
> content is right people can use the program and they will
> (normally) forgive any ugly formatting.
>
> Is that what you were asking me, or did I just answer the
> wrong question?

Exactly what was required. Michael Moser is currently looking into incorporating the man page you provided into our build/make infrastructure and into extending it.

It is a shame that, although I have been using UNIX systems and C programming since 1982, I never learned the details of creating man pages. Great, and thank you very much!

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: BuraphaLinux S. <bur...@gm...> - 2007-06-26 08:49:41
Hello,

Updating the version and date is easy. Try this solution.

In the top-level Makefile, near the top, add these two lines:

    VERZUN=0.32
    DEIGHT=`date +"%B %d, %Y"`

Each release, change the VERZUN line to have the correct version number, or have a more elaborate rule that can generate it somehow. Then, near the end, before the generic rules, add this (gmail may wrap and will probably ruin the real tabs you need; sorry; assuming GNU make):

    .PHONY: curl-loader.1
    curl-loader.1: curl-loader.man
    	sed -e "s/DUHVERZUN/$(VERZUN)/g" -e "s/DEIGHT/$(DEIGHT)/g" < curl-loader.man > curl-loader.1

Rename the curl-loader.1 I sent you to curl-loader.man, and update curl-loader.man so the .TH line looks like this:

    .TH curl\-loader 8 "DEIGHT" "Version DUHVERZUN"

Now you can do:

    make curl-loader.1

and you will get a curl-loader.1 whose .TH line looks something like this:

    .TH curl\-loader 8 "June 26, 2007" "Version 0.32"

I spell things incorrectly to prevent collisions with real variables; if you were paranoid you could use longer names or something. The 8 is the section number; since you have to be root to run this tool, I put it in section 8.

To create the original page, and to update content on the man pages I maintain, I just use vi and type the groff/man commands I need. You can change words and add more content easily, remembering some simple rules:

1. Never begin a line with a period.
2. Every hyphen must be escaped like \-.
3. Create a line with nothing but .P on it to start a new paragraph.

The groff.info file describes the groff commands and the man macro set you would need to use, and for normal man pages it is easy. Most people learn to do this by copying existing man pages, and that is how I learned initially. So to add a new command-line switch, just copy everything from a different one (I did many on the page) and then change the words and letters. To add normal paragraphs, just type them following the 3 rules above. To add sample files or sample commands, use .nf/.fi to start no-fill and fill modes (fill is justify in groff).

You can test your page at any time with:

    make curl-loader.1
    groff -man -Tascii curl-loader.1 | less -ic

The most critical part of the man page is the content; the formatting doesn't matter if the content is wrong, and if the content is right people can use the program and they will (normally) forgive any ugly formatting.

Is that what you were asking me, or did I just answer the wrong question?

On 6/25/07, Robert Iakobashvili <cor...@gm...> wrote:
> On 6/25/07, BuraphaLinux Server <bur...@gm...> wrote:
> > Attached is a man page for curl-loader, since it appears there is not
> > one that comes in the curl-loader-0.32 package.
> >
> > Would you like a man page for the configure file too?
>
> Yes. Great! Thank you very much indeed.
>
> Actually, we also need a way to update, edit and correct it as part of our procedure.
> Can you propose something? Thanks.
>
> Michael,
> we should arrange a Makefile "install" target, which will install
> our binary (no libs till now, everything is linked statically), docs such as the
> config file examples and README and, finally, man pages.
>
> Maybe it is worth having a single man page for all curl-loader issues?
> Your opinions would be very much appreciated.
>
> --
> Sincerely,
> Robert Iakobashvili,
> coroberti %x40 gmail %x2e com
> ...........................................................
> http://curl-loader.sourceforge.net
> A web testing and traffic generation tool.
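For anyone new to the man macro set, a tiny throwaway page exercising the rules above can be previewed straight from the shell. The file name and text below are made up purely for illustration:

    # hypothetical example page: escaped hyphens, .P for paragraphs, .nf/.fi for samples
    cat > example.man <<'EOF'
    .TH curl\-loader 8 "June 26, 2007" "Version 0.32"
    .SH EXAMPLE
    A normal paragraph; note that no text line starts with a period.
    .P
    Run the loader with a config file, e.g. \-f my.conf:
    .nf
    curl-loader \-f my.conf \-v \-u
    .fi
    EOF
    groff -man -Tascii example.man | less -ic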
From: Robert I. <cor...@gm...> - 2007-06-25 15:43:56
On 6/25/07, BuraphaLinux Server <bur...@gm...> wrote:
> Attached is a man page for curl-loader, since it appears there is not
> one that comes in the curl-loader-0.32 package.
>
> Would you like a man page for the configure file too?

Yes. Great! Thank you very much indeed.

Actually, we also need a way to update, edit and correct it as part of our procedure.
Can you propose something? Thanks.

Michael,
we should arrange a Makefile "install" target, which will install our binary
(no libs till now, everything is linked statically), docs such as the config
file examples and README and, finally, man pages.

Maybe it is worth having a single man page for all curl-loader issues?
Your opinions would be very much appreciated.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-06-25 08:58:12
FYI, forwarded from netdev. It looks like IPv6 addresses hit some rather low limits, only about 8K, whereas 60,000 IPv4 addresses are added smoothly.

---------- Forwarded message ----------
From: Robert Iakobashvili <cor...@gm...>
Date: Jun 25, 2007 10:47 AM
Subject: Re: Scaling Max IP address limitation
To: dj...@ro...
Cc: lin...@vg..., ne...@vg..., Andrew Morton <ak...@li...>

David,

On 6/25/07, David Jones <dj...@ro...> wrote:
> >> > I am trying to add multiple IP addresses (v6) to my FC7 box on eth0.
> >> > But I am hitting a max limit of 4000 IP addresses. It seems there is a
> >> > limiting variable in the linux kernel (which one?) that prevents adding
> >> > more than 4096 IP addresses. What do I need to change in the Linux kernel
> >> > (and then recompile) to be able to add more than 4K addresses per system? ..
>
> I am using the "ip add" command, looping sequentially up until RTNETLINK
> starts refusing to add more IP addresses.
>
> > How are you adding them, via the Netlink interface?
> Yes.

OK. Now it looks like I am reproducing something.

Running curl-loader with the 60K.conf configuration (edit the name of the interface):

    # ulimit -n 80000
    # curl-loader -f ./conf-examples/60K.conf -w

it successfully adds 60,000 secondary IPv4 addresses, as seen by:

    # ip addr | wc -l

When I tried adding IPv6 addresses, using ipv6.conf with the address range edited to:

    IP_ADDR_MIN= 2001:db8:fff5:1::1
    IP_ADDR_MAX= 2001:db8:fff5:ffff::1

I get, after some initial successes, errors like:

    rtnl_talk(): RTNETLINK answers: Cannot allocate memory

and "ip addr | wc -l" reports 8194. 8K addresses added and no more?

It might be a memory issue. You can dig into the code and look into the allocation process and the limits on kernel memory for IPv6. The physical memory on my computer is 480 MB; the kernel is vanilla 2.6.20.7. Try to see what happens when you increase the memory on your machine, if that is an option.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
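A quick way to see whether the limit is tied to curl-loader at all is to add a large batch of IPv6 addresses by hand and watch where rtnetlink starts refusing. A minimal sketch, assuming eth0 and the same documentation prefix as above; the /64 prefix length and the loop bound are made up for illustration:

    # add up to 10000 IPv6 addresses one by one; stop at the first refusal
    for i in $(seq 1 10000); do
        ip -6 addr add 2001:db8:fff5:1::$(printf '%x' "$i")/64 dev eth0 || break
    done
    ip -6 addr show dev eth0 | grep -c inet6   # how many actually made it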
From: BuraphaLinux S. <bur...@gm...> - 2007-06-25 07:51:55
Attached is a man page for curl-loader, since it appears there is not one that comes in the curl-loader-0.32 package.

Would you like a man page for the configure file too?
From: Robert I. <cor...@gm...> - 2007-06-24 11:54:35
Hi BuraphaLinux Server,

Thank you for using the PRF.

On 6/24/07, BuraphaLinux Server <bur...@gm...> wrote:
> CURL-LOADER VERSION: 0.32, released 21/06/2007
>
> HW DETAILS: CPU/S and memory are a must:
> processor : 0
> MemTotal: 1030596 kB
>
> LINUX DISTRIBUTION and KERNEL (uname -r):
> BLS 1.0.072 (http://www.buraphalinux.org/)
> 2.6.21.5

Interesting, I'll look into this distro.

> GCC VERSION (gcc -v):
> gcc version 4.0.4
>
> COMPILATION AND MAKING OPTIONS (if defaults changed):
> I had to apply this patch:
> -LIBS= -ldl -lpthread -lrt -lidn -lcurl -levent -lz -lssl -lcrypto #-lcares
> +LIBS= -ldl -lpthread -lrt -lcurl -levent -lz -lssl -lcrypto #-lcares -lidn

OK.

> curl-loader -f monster.conf -v -u
>
> CONFIGURATION-FILE (The most common source of problems):
>
> Place the file inline here:
>
> ########### GENERAL SECTION ################################
>
> BATCH_NAME= monster
> CLIENTS_NUM_MAX=50 # Same as CLIENTS_NUM
> CLIENTS_NUM_START=10
> CLIENTS_RAMPUP_INC=10
> INTERFACE=eth0
> NETMASK=32
> IP_ADDR_MIN=10.16.68.197
> IP_ADDR_MAX=10.16.68.197
> CYCLES_NUM=-1
> URLS_NUM=6
>
> ########### URL SECTION ####################################
>
> URL=http://10.16.68.186/ftp/openoffice/stable/2.2.1/OOo_2.2.1_Win32Intel_install_wJRE_en-US.exe
> FRESH_CONNECT=1
> URL_SHORT_NAME="url 1"
> REQUEST_TYPE=GET
> TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
> TIMER_AFTER_URL_SLEEP =1000
> TIMER_TCP_CONN_SETUP=50
>
> URL=ftp://anonymous:joe%040@10.16.68.186/debian/pool/main/g/gimp/gimp_2.2.15.orig.tar.gz
> FRESH_CONNECT=1
> URL_SHORT_NAME="url 2"
> TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
> TIMER_AFTER_URL_SLEEP =1000
> TIMER_TCP_CONN_SETUP=50
>
> URL=http://10.16.68.186/ftp/ruby/1.8/ruby-1.8.6.tar.bz2
> FRESH_CONNECT=1
> URL_SHORT_NAME="url 3"
> REQUEST_TYPE=GET
> TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
> TIMER_AFTER_URL_SLEEP =1000
> TIMER_TCP_CONN_SETUP=50
>
> URL=ftp://anonymous:joe%040@10.16.68.186/apache/ant/binaries/apache-ant-1.7.0-bin.tar.bz2
> FRESH_CONNECT=1
> URL_SHORT_NAME="url 4"
> TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
> TIMER_AFTER_URL_SLEEP =1000
> TIMER_TCP_CONN_SETUP=50
>
> URL=http://10.16.68.186/ftp/ftp.postgresql.org/postgresql-8.2.4.tar.bz2
> FRESH_CONNECT=1
> URL_SHORT_NAME="url 5"
> REQUEST_TYPE=GET
> TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
> TIMER_AFTER_URL_SLEEP =1000
> TIMER_TCP_CONN_SETUP=50

I do not believe that the TCP handshake and resolving will take up to 50 seconds, so this is OK.

> URL=ftp://anonymous:joe%040@10.16.68.186/apache/httpd/httpd-2.2.4.tar.bz2
> FRESH_CONNECT=1
> URL_SHORT_NAME="url 6"
> TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
> TIMER_AFTER_URL_SLEEP =1000
> TIMER_TCP_CONN_SETUP=50
>
> DESCRIPTION:
> I have noticed the disk drive on my server is not active much during
> testing with curl-loader. I looked at the curl-loader log file and I think
> I know what is happening, but not how to change it. Let me describe what
> I think it is doing, and then what I would like it to do.
>
> What do I think it is doing now?
> If I cycle through N URLs with 100 clients, curl-loader will set up all
> 100 clients to process the first URL, then it has them all do the second
> URL, then it has them all do the third URL, etc. This means that all
> clients are normally fetching the same file (I am using 100MB files for
> testing). This means that I am testing networking, but all clients are
> pulling the same file, so all but one of them are just pulling from the
> cached copy. It also stresses either http or ftp (whatever the current
> URL is) but not both. Am I wrong?

Correct.

> QUESTION/ SUGGESTION/ PATCH:
>
> What I want:
> If I have N URLs and many clients, I would like curl-loader to
>     if (process % N) == 0 then start on URL 0
>     if (process % N) == 1 then start on URL 1
>     if (process % N) == 2 then start on URL 2
> (and then, if I have more processes than URLs, wrap back to URL 0 when I
> reach URL N-1)
>
> Why do I want this?
> This means that if I have a large set of URLs (too big for the server's
> file cache), I can force the server to work hard at loading files from
> disk and get a more realistic load for my server (which will be a mirror
> archive that I expect many people to use as their mirror source). This
> means that normally the file a client wants is probably NOT in cache, and
> with a collection of ISO images the filesystem cache will not be able to
> hold everything and the disk will be busy.
>
> Can curl-loader do this already?

You can create 3 conf files with different BATCH_NAME values and run 3 loads from 3 consoles, each with a different ordering of the urls. Not a very convenient way, agreed.

> My workaround is to run many separate instances of curl-loader at once
> instead of one large, combined load, but then all the statistics and logs
> are separate.

Exactly. We have the two features in our RoadMap/TODO list:
http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/TODO?view=markup

2. An option to download a url not only once per cycle, but according to its "weight" (probability). A weight can be less than 1, e.g. 0.3, which means there is a 30% probability that a client will load this url. Weight 3 means a 300% probability, which in practice means that a url will be fetched several times by a client before going on to the next one, 3 times on average.

11. Usage of random time intervals, e.g. 100-200 (from 100 to 200 msec). Thus the clients will be less synchronized and, after a couple of cycles, will be de-focused.

By the way, you can also achieve this in part by starting from a single client and adding one client per second to de-focus them:

    CLIENTS_NUM_START=1
    CLIENTS_RAMPUP_INC=1

It looks like with your 50 clients it could work well (or be well done).

I do not know when we will get to add these 2 features, which are not complex. If you wish to volunteer and add them, please let me know and I will guide you through our code.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
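Until the weighted-url and random-interval features land, the multi-instance workaround mentioned above can be scripted. A minimal sketch, assuming three config files (monster-a.conf, monster-b.conf, monster-c.conf, names made up for illustration) that differ only in their BATCH_NAME and url ordering:

    # run three independent loads in parallel, one per config, then wait for all
    for cfg in monster-a.conf monster-b.conf monster-c.conf; do
        curl-loader -f "$cfg" &
    done
    wait
    # each batch writes its own statistics and log files, named after its BATCH_NAME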
From: BuraphaLinux S. <bur...@gm...> - 2007-06-24 10:28:26
The PROBLEM REPORTING FORM makes our support more effective. Please subscribe to our mailing list here:
https://lists.sourceforge.net/lists/listinfo/curl-loader-devel
and mail the form to the mailing list: cur...@li...

CURL-LOADER VERSION: 0.32, released 21/06/2007

HW DETAILS: CPU/S and memory are a must:

    processor       : 0
    vendor_id       : GenuineIntel
    cpu family      : 15
    model           : 2
    model name      : Intel(R) Pentium(R) 4 CPU 2.40GHz
    stepping        : 9
    cpu MHz         : 2394.071
    cache size      : 512 KB
    fdiv_bug        : no
    hlt_bug         : no
    f00f_bug        : no
    coma_bug        : no
    fpu             : yes
    fpu_exception   : yes
    cpuid level     : 2
    wp              : yes
    flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe up cid xtpr
    bogomips        : 4792.96
    clflush size    : 64

    MemTotal:      1030596 kB
    MemFree:        711240 kB
    Buffers:          6860 kB
    Cached:         228640 kB
    SwapCached:          0 kB
    Active:         108880 kB
    Inactive:       185748 kB
    HighTotal:      126960 kB
    HighFree:         3472 kB
    LowTotal:       903636 kB
    LowFree:        707768 kB
    SwapTotal:     1806328 kB
    SwapFree:      1803836 kB
    Dirty:              52 kB
    Writeback:           0 kB
    AnonPages:       59152 kB
    Mapped:          41444 kB
    Slab:            17744 kB
    SReclaimable:     9248 kB
    SUnreclaim:       8496 kB
    PageTables:       1228 kB
    NFS_Unstable:        0 kB
    Bounce:              0 kB
    CommitLimit:   2321624 kB
    Committed_AS:   314736 kB
    VmallocTotal:   114680 kB
    VmallocUsed:      2128 kB
    VmallocChunk:   112380 kB

LINUX DISTRIBUTION and KERNEL (uname -r):
BLS 1.0.072 (http://www.buraphalinux.org/)
2.6.21.5

GCC VERSION (gcc -v):

    Using built-in specs.
    Target: i586-pc-linux-gnu
    Configured with: /tmp/gcc-4.0.4/configure --prefix=/usr --enable-shared --enable-threads=posix --enable-__cxa_atexit --with-gnu-as --with-gnu-ld --verbose --enable-languages=c,c++,f95 --mandir=/usr/man --infodir=/usr/info --disable-nls --disable-rpath --build=i586-pc-linux-gnu --target=i586-pc-linux-gnu --host=i586-pc-linux-gnu
    Thread model: posix
    gcc version 4.0.4

COMPILATION AND MAKING OPTIONS (if defaults changed):
I had to apply this patch:

    --- curl-loader-0.32.orig/Makefile	2007-06-11 19:40:03.000000000 +0700
    +++ curl-loader-0.32/Makefile	2007-06-22 22:10:46.000000000 +0700
    @@ -74,7 +74,7 @@
     LDFLAGS=-L./lib -L$(OPENSSLDIR)/lib

     # Link Libraries. RedHat/FC require sometimes lidn
    -LIBS= -ldl -lpthread -lrt -lidn -lcurl -levent -lz -lssl -lcrypto #-lcares
    +LIBS= -ldl -lpthread -lrt -lcurl -levent -lz -lssl -lcrypto #-lcares -lidn

     # Include directories
     INCDIR=-I. -I./inc -I$(OPENSSLDIR)/include

    make OPT_FLAGS="-O2 -march=i586 -mtune=i686 -fno-strict-aliasing"

COMMAND-LINE:

    curl-loader -f monster.conf -v -u

CONFIGURATION-FILE (The most common source of problems):

*************> I changed URLs often, but was always using 6; the problem was noticed when I had two 650MB ISO images in the list, one for ftp and one for http; I already changed this file before I knew you needed it, but only changed the URLs <***********

Place the file inline here:

    ########### GENERAL SECTION ################################

    BATCH_NAME= monster
    CLIENTS_NUM_MAX=50 # Same as CLIENTS_NUM
    CLIENTS_NUM_START=10
    CLIENTS_RAMPUP_INC=10
    INTERFACE=eth0
    NETMASK=32
    IP_ADDR_MIN=10.16.68.197
    IP_ADDR_MAX=10.16.68.197
    CYCLES_NUM=-1
    URLS_NUM=6

    ########### URL SECTION ####################################

    URL=http://10.16.68.186/ftp/openoffice/stable/2.2.1/OOo_2.2.1_Win32Intel_install_wJRE_en-US.exe
    FRESH_CONNECT=1
    URL_SHORT_NAME="url 1"
    REQUEST_TYPE=GET
    TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
    TIMER_AFTER_URL_SLEEP =1000
    TIMER_TCP_CONN_SETUP=50

    URL=ftp://anonymous:joe%040@10.16.68.186/debian/pool/main/g/gimp/gimp_2.2.15.orig.tar.gz
    FRESH_CONNECT=1
    URL_SHORT_NAME="url 2"
    TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
    TIMER_AFTER_URL_SLEEP =1000
    TIMER_TCP_CONN_SETUP=50

    URL=http://10.16.68.186/ftp/ruby/1.8/ruby-1.8.6.tar.bz2
    FRESH_CONNECT=1
    URL_SHORT_NAME="url 3"
    REQUEST_TYPE=GET
    TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
    TIMER_AFTER_URL_SLEEP =1000
    TIMER_TCP_CONN_SETUP=50

    URL=ftp://anonymous:joe%040@10.16.68.186/apache/ant/binaries/apache-ant-1.7.0-bin.tar.bz2
    FRESH_CONNECT=1
    URL_SHORT_NAME="url 4"
    TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
    TIMER_AFTER_URL_SLEEP =1000
    TIMER_TCP_CONN_SETUP=50

    URL=http://10.16.68.186/ftp/ftp.postgresql.org/postgresql-8.2.4.tar.bz2
    FRESH_CONNECT=1
    URL_SHORT_NAME="url 5"
    REQUEST_TYPE=GET
    TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
    TIMER_AFTER_URL_SLEEP =1000
    TIMER_TCP_CONN_SETUP=50

    URL=ftp://anonymous:joe%040@10.16.68.186/apache/httpd/httpd-2.2.4.tar.bz2
    FRESH_CONNECT=1
    URL_SHORT_NAME="url 6"
    TIMER_URL_COMPLETION = 0 # In msec. When positive, it is enforced by cancelling the url fetch on timeout
    TIMER_AFTER_URL_SLEEP =1000
    TIMER_TCP_CONN_SETUP=50

DOES THE PROBLEM AFFECT:
COMPILATION? No
LINKING? No
EXECUTION? Yes
OTHER (please specify)? See QUESTION below

Have you run $make cleanall prior to $make? No

DESCRIPTION:

I have noticed the disk drive on my server is not active much during testing with curl-loader. I looked at the curl-loader log file and I think I know what is happening, but not how to change it. Let me describe what I think it is doing, and then what I would like it to do.

What do I think it is doing now?
If I cycle through N URLs with 100 clients, curl-loader will set up all 100 clients to process the first URL, then it has them all do the second URL, then it has them all do the third URL, etc. This means that all clients are normally fetching the same file (I am using 100MB files for testing). This means that I am testing networking, but all clients are pulling the same file, so all but one of them are just pulling from the cached copy. It also stresses either http or ftp (whatever the current URL is) but not both. Am I wrong?

QUESTION/ SUGGESTION/ PATCH:

What I want:
If I have N URLs and many clients, I would like curl-loader to

    if (process % N) == 0 then start on URL 0
    if (process % N) == 1 then start on URL 1
    if (process % N) == 2 then start on URL 2

(and then, if I have more processes than URLs, wrap back to URL 0 when I reach URL N-1)

Why do I want this?
This means that if I have a large set of URLs (too big for the server's file cache), I can force the server to work hard at loading files from disk and get a more realistic load for my server (which will be a mirror archive that I expect many people to use as their mirror source). This means that normally the file a client wants is probably NOT in cache, and with a collection of ISO images the filesystem cache will not be able to hold everything and the disk will be busy.

Can curl-loader do this already?
Maybe curl-loader can already do this, but I could not find it in the sample configurations or the README, and I could not find a man page to read. It's probably in there somewhere and I missed it?

My workaround is to run many separate instances of curl-loader at once instead of one large, combined load, but then all the statistics and logs are separate.
Hello again,

I have noticed the disk drive on my server is not active much during testing with curl-loader. I looked at the curl-loader log file and I think I know what is happening, but not how to change it. Let me describe what I think it is doing, and then what I would like it to do.

1. What do I think it is doing now?

If I cycle through N URLs with 100 clients, curl-loader will set up all 100 clients to process the first URL, then it has them all do the second URL, then it has them all do the third URL, etc. This means that all clients are normally fetching the same file (I am using 100MB files for testing). This means that I am testing networking, but all clients are pulling the same file, so 99 of them are just pulling from the cached copy. Am I wrong?

2. What I want

If I have N URLs and 100 clients, I would like curl-loader to

    if (process % N) == 0 then start on URL 0
    if (process % N) == 1 then start on URL 1
    if (process % N) == 2 then start on URL 2

(and then, if I have more processes than URLs, wrap back to URL 0 when I reach URL N-1)

Why do I want this?
This means that if I have a large set of URLs, I can force the server to work hard at loading files from disk and get a more realistic load for my server (which will be a mirror archive that I expect many people to use as their mirror source). This means that normally the file a client wants is probably NOT in cache, and with a collection of ISO images the filesystem cache will not be able to hold everything and the disk will be busy.

curl-loader has already helped me solve many tuning problems on my server regarding the firewall, ulimits, and kernel tunables. With the ability to do the testing I talked about in #2, I can also get a closer simulation of the real load, so when I go "live" I will know for sure what number of clients is my maximum under heavy load, instead of just knowing how many will work if the data is all cached.

Maybe curl-loader can already do this, but I could not find it in the sample configurations or the README, and I could not find a man page to read. It's probably in there somewhere and I missed it?
From: Robert I. <cor...@gm...> - 2007-06-22 18:08:27
Hi,

On 6/22/07, BuraphaLinux Server <bur...@gm...> wrote:
> Hello,
>
> I'm ashamed to say I found the problem was in the firewalls on the
> two machines. I use linux 2.6.x with a modular kernel, and somehow the
> module for nf_conntrack_ftp keeps getting unloaded since the system
> says it is unused. So I would force-load that module on client and server
> and things would work with normal ftp tools, but if I came back an hour
> later they didn't, because those modules had been unloaded
> (/etc/cron.hourly/kmod does rmmod -as). That's not a curl-loader problem,
> that's my own configuration problem. I finally just added:
>
>     # bizarre trick to prevent unloading of nf_conntrack_ftp
>     remove nf_conntrack_ftp /bin/true
>
> to the end of my /etc/modprobe.conf on both machines, but I'm sure
> that is not the elegant solution.
>
> Now I have 10 simultaneous ftp clients with passive mode working well.

Great, thanks for updating us. No shame, it happens.

You may wish to subscribe to the curl-loader-devel list for future updates and discussions. The list is not bulky.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
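If the same unloading problem appears again, a quick check along the lines described above might look like the sketch below. The module name and the modprobe.conf workaround are taken from the message; the cron path is the poster's own setup, not a general rule:

    # make sure the FTP connection-tracking helper is present and stays loaded
    modprobe nf_conntrack_ftp
    lsmod | grep nf_conntrack_ftp
    # the workaround from the message: make "rmmod -as" a no-op for this module
    echo 'remove nf_conntrack_ftp /bin/true' >> /etc/modprobe.conf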
From: Robert I. <cor...@gm...> - 2007-06-22 14:13:46
Hi,

On 6/22/07, BuraphaLinux Server <bur...@gm...> wrote:
> Hello, I attached my conf file. I want to do ftp load testing, and all
> clients share one IP. I am able to ftp and get files normally with an ftp
> client, curl, wget, etc. However, when I try to use curl-loader-0.32 I get
> errors. The log file says this:
>
>     0 2 (192.168.0.2) :<= WARNING: parsing error: wrong response code (FTP?) 0
>
> for every connection. The files I want to fetch are large, and the ftp
> server is vsftpd and it has a 512k/sec rate limit set. It looks like
> curl-loader gets confused and doesn't ever send the commands to the server
> to start the download. So I think either the transfer does not start or it
> takes so long that curl-loader gives up. I tried to read everything but I
> probably missed something obvious. The server does fork enough daemons so
> they all do get connected, but beyond that nothing seems to happen. The ftp
> server does send a banner file - do I have to turn that off?
>
> How can I get this working?

More options to try, with a single client, a single url and the -v -u options:

    FTP_ACTIVE=1

By the way, have you ensured routing to the ip-address you are using for loading, 192.168.0.2? Do you have ping working? Use capital -I to issue the ping from the address you are using:

    # ping -I 192.168.0.2 192.168.0.3

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-06-22 13:58:55
Hi,

On 6/22/07, BuraphaLinux Server <bur...@gm...> wrote:
> Hello, I attached my conf file. I want to do ftp load testing, and all
> clients share one IP. I am able to ftp and get files normally with an ftp
> client, curl, wget, etc. However, when I try to use curl-loader-0.32 I get
> errors. The log file says this:
>
>     0 2 (192.168.0.2) :<= WARNING: parsing error: wrong response code (FTP?) 0
>
> for every connection. The files I want to fetch are large, and the ftp
> server is vsftpd and it has a 512k/sec rate limit set. It looks like
> curl-loader gets confused and doesn't ever send the commands to the server
> to start the download. So I think either the transfer does not start or it
> takes so long that curl-loader gives up. I tried to read everything but I
> probably missed something obvious. The server does fork enough daemons so
> they all do get connected, but beyond that nothing seems to happen. The ftp
> server does send a banner file - do I have to turn that off?

Which curl-loader version are you working with? Let's align our efforts and work with the latest version available for download, namely curl-loader-0.32. OK? Thanks.

> How can I get this working?

First of all, please remove REQUEST_TYPE, as it is applicable to HTTP only:

    #REQUEST_TYPE=GET    - better to remove it

Second, let's start from a single url, a single client and a single cycle:

    CLIENTS_NUM_MAX=1
    CLIENTS_NUM_START=1
    #CLIENTS_RAMPUP_INC=5

Please run it with the command-line options -v and -u (which mean verbose and url) and send me the log file (ftp.log). You may also wish to look at it yourself.

Don't worry, we'll work together to rectify the issues, which could be our bug, a libcurl bug, or a server/client-side configuration issue.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
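A minimal sketch of the kind of stripped-down debug run suggested above, assuming a config named ftp-debug.conf, a client address of 192.168.0.2 and a vsftpd server at 192.168.0.3 as in this thread. The file name, URL, interface and exact set of required tags are illustrative only and may need adjusting for your setup:

    cat > ftp-debug.conf <<'EOF'
    ########### GENERAL SECTION ################################
    BATCH_NAME=ftp
    CLIENTS_NUM_MAX=1
    CLIENTS_NUM_START=1
    INTERFACE=eth0
    NETMASK=32
    IP_ADDR_MIN=192.168.0.2
    IP_ADDR_MAX=192.168.0.2
    CYCLES_NUM=1
    URLS_NUM=1
    ########### URL SECTION ####################################
    URL=ftp://anonymous:joe%040@192.168.0.3/somefile.tar.gz
    URL_SHORT_NAME="ftp url"
    TIMER_AFTER_URL_SLEEP=1000
    EOF
    curl-loader -f ftp-debug.conf -v -u    # then inspect ftp.log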
From: BuraphaLinux S. <bur...@gm...> - 2007-06-22 12:47:20
Hello, I attached my conf file. I want to do ftp load testing, and all clients share one IP.

I am able to ftp and get files normally with an ftp client, curl, wget, etc. However, when I try to use curl-loader-0.32 I get errors. The log file says this:

    0 2 (192.168.0.2) :<= WARNING: parsing error: wrong response code (FTP?) 0

for every connection. The files I want to fetch are large, and the ftp server is vsftpd and it has a 512k/sec rate limit set. It looks like curl-loader gets confused and doesn't ever send the commands to the server to start the download. So I think either the transfer does not start or it takes so long that curl-loader gives up. I tried to read everything but I probably missed something obvious. The server does fork enough daemons so they all do get connected, but beyond that nothing seems to happen. The ftp server does send a banner file - do I have to turn that off?

How can I get this working?
From: Robert I. <cor...@gm...> - 2007-06-21 06:02:01
The new release increases curl-loader's loading power on SMP/multi-core machines by sharing the loading clients among several loading threads, using the command-line option -t <threads-num>.

The release also brings:
- important bugfixes;
- GET with support for forms;
- POST forms with up to 16 form-filled tokens for each client and url, loaded from file;
- configurable per-url response statuses to be considered either successes or errors.

Enjoy,

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-06-19 07:04:16
The bugs fixed:
- the ramp-up clients number timer was sometimes canceled by url-timeout timers;
- logging of responses to files also had some problems, which have been fixed.

SMP support can be gained by using the option -t <threads-num>. We recommend using a number of threads equal to the number of CPUs reported by /proc/cpuinfo. Note that sometimes the loader crashes on start with this option due to suspected bugs in libevent; in that case, change/decrease the number of threads used.

Note that the total statistics are written to the file $batch-name_0.txt, whereas logs are per-thread and written to the files $batch-name_<i>.log

The changes are in the project subversion. For more details, please look here:
http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/ChangeLog?view=markup

Enjoy,
Robert
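A small sketch of picking the thread count from /proc/cpuinfo as recommended above; the config file name is a placeholder:

    # one loading thread per CPU, as suggested; fall back to 1 if the count comes up empty
    THREADS=$(grep -c '^processor' /proc/cpuinfo)
    curl-loader -f my-batch.conf -t "${THREADS:-1}"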
From: Aleksandar L. <al-...@no...> - 2007-06-11 21:49:25
On Mon 11.06.2007 22:00, Robert Iakobashvili wrote:
> On 6/10/07, Aleksandar Lazic <al-...@no...> wrote:
>
> > Do you plan to add this into curl-loader ;-)?
>
> Kind of yes.
> Currently, there is a 1-to-1 mapping between the CURL handle and
> curl-loader's client_context object; it requires a certain re-design to
> think about.

Well, it then also makes the $session easier to handle, right ;-)

But I think we should move to the curl-loader list only, due to the fact that this is *not* an issue for libcurl, imho.

Cheers
Aleks
From: Robert I. <cor...@gm...> - 2007-06-11 20:00:33
On 6/10/07, Aleksandar Lazic <al-...@no...> wrote:
> > Let's say somebody wants to emulate the behavior of Firefox or MSIE,
> > using libcurl.
>
> That's a very interesting question for a command-line tool, imho.
>
> I would go like this:
>
> ---
> set max_connections_per_site <NUM>
> set deep_or_breadth (DEEP|BREADTH)
> set deep_or_breadth_count <NUM>
> set wait_human_read_time <NUM>
>
> repeat as long as deep_or_breadth_count is not reached {
>
>   get site (e.g.: index.html)
>   parse site and ( count links and get the needed elements from remote server
>                    || if element is to get, make a new connection, but not more
>                       than max_connections_per_site )
>
>   wait_human_read_time()
>
>   if breadth && deep_or_breadth_count not reached {
>     get the next link from the same directory-level (e.g.: /, /news/, ...)
>   } elsif deep && deep_or_breadth_count not reached {
>     get the next link from the next directory-level
>   }
> }

Thanks for your suggestion.

> Do you plan to add this into curl-loader ;-)?

Kind of yes. Currently, there is a 1-to-1 mapping between the CURL handle and curl-loader's client_context object; it requires a certain re-design to think about.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-05-31 06:46:28
The version brings:
- a more flexible configuration approach;
- improved FTP support, including active FTP and upload;
- logging of responses to files (headers and bodies);
- URL fetching timers: monitoring and enforcement;
- HTTP PUT support;
- configurable connection refresh, or keeping the connection, on a per-url basis;
- etc.

For more details, please look in the ChangeLog:
http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/ChangeLog?view=markup

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Robert I. <cor...@gm...> - 2007-05-28 15:27:54
Hi Sunspot,

On 5/28/07, sunspot sunspot <red...@ya...> wrote:
> Guys, I need your help with regard to using curl-loader remotely. If I'm
> using it on the machine itself (localhost) I'm not encountering any
> problem, but when I'm using it remotely, this is the error that I get:
>
>     0 1 (192.168.1.1) !! ERROR: Connection time-out after 5108 ms : eff-url: , url:
>     0 2 (192.168.1.2) !! ERROR: Connection time-out after 5107 ms : eff-url: , url:
>     0 3 (192.168.1.3) !! ERROR: Connection time-out after 5106 ms : eff-url: , url:
>
> and so on. Here are the details of my configuration file. What I am not
> sure of is the value that should be placed in IP_ADDR_MIN and IP_ADDR_MAX.
>
> [root@test curl]# more test.cfg
> ########### GENERAL SECTION ################################
> BATCH_NAME= bulk_batch # The name of the batch. Logfile - bulk_batch.log
> CLIENTS_NUM_MAX=300 # Max number of clients, same as CLIENTS_NUM
> CLIENTS_NUM_START=100 # Number of clients to start with.
> CLIENTS_INITIAL_INC=50 # Clients to be added each second till CLIENTS_NUM_MAX
> INTERFACE = eth0 # Name of the network interface from which to load
> NETMASK=255.255.240.0 # Netmask either as an IPv4 dotted string or as a CIDR number
> IP_ADDR_MIN= 192.168.1.1 # Client addresses range starting address
> IP_ADDR_MAX= 192.168.5.255 # Client addresses range last address
> CYCLES_NUM= 10 # Number of loading cycles to run, -1 means forever
>
> ########### UAS SECTION ####################################
> UAS=y # If 'y' or 'Y', login enabled, and other lines of the section to be filled
> UAS_URLS_NUM = 1 # Number of urls
> UAS_URL=http://192.168.0.40:80/index.html
> UAS_URL_MAX_TIME = 20 # Maximum batch time in seconds to fetch the url
> UAS_URL_INTERLEAVE_TIME = 0 # Time in msec to sleep after fetching the url
>
> This is my first time using curl-loader remotely; kindly be as detailed as
> possible in the instructions that you give me.

This means that the connections cannot be established within 5 seconds (which is the default). Actually, it looks like you may have no route.

One can use any addresses, provided that you ensure a smooth route from client and from server for the addresses you use. Please start with a single client and see that it works, then progress further.

IP_ADDR_MIN and IP_ADDR_MAX: the addresses should be from the same network and routable to 192.168.0.40, e.g. 192.168.0.41 up to 192.168.1.255. Whatever addresses you specify here are added by curl-loader's initialization to the network interface eth0 and may be seen with the command:

    # ip addr

The ping command (man ping) has an option -I to force the ping to be issued from a certain ip-address. To test that your routing is OK you may use, e.g.:

    # ping -I <the address you wish to use> 192.168.0.40

When that works, it will be OK. You may also wish to read a bit of the Linux routing/networking HOWTOs.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
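The two checks above can be combined into one quick routing sanity pass. The interface name and the example client address 192.168.0.41 are taken from the advice in this thread and may need adjusting:

    # the secondary client addresses curl-loader added should show up here
    ip addr show dev eth0
    # ping the web server from one of the client addresses; if this fails, fix routing first
    ping -c 3 -I 192.168.0.41 192.168.0.40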
From: sunspot s. <red...@ya...> - 2007-05-28 14:58:54
Guys, I need your help with regard to using curl-loader remotely. If I'm using it on the machine itself (localhost) I'm not encountering any problem, but when I'm using it remotely, this is the error that I get:

    0 1 (192.168.1.1) !! ERROR: Connection time-out after 5108 ms : eff-url: , url:
    0 2 (192.168.1.2) !! ERROR: Connection time-out after 5107 ms : eff-url: , url:
    0 3 (192.168.1.3) !! ERROR: Connection time-out after 5106 ms : eff-url: , url:

and so on. Here are the details of my configuration file. What I am not sure of is the value that should be placed in IP_ADDR_MIN and IP_ADDR_MAX.

    [root@test curl]# more test.cfg
    ########### GENERAL SECTION ################################
    BATCH_NAME= bulk_batch # The name of the batch. Logfile - bulk_batch.log
    CLIENTS_NUM_MAX=300 # Max number of clients, same as CLIENTS_NUM
    CLIENTS_NUM_START=100 # Number of clients to start with.
    CLIENTS_INITIAL_INC=50 # Clients to be added each second till CLIENTS_NUM_MAX
    INTERFACE = eth0 # Name of the network interface from which to load
    NETMASK=255.255.240.0 # Netmask either as an IPv4 dotted string or as a CIDR number
    IP_ADDR_MIN= 192.168.1.1 # Client addresses range starting address
    IP_ADDR_MAX= 192.168.5.255 # Client addresses range last address
    CYCLES_NUM= 10 # Number of loading cycles to run, -1 means forever

    ########### UAS SECTION ####################################
    UAS=y # If 'y' or 'Y', login enabled, and other lines of the section to be filled
    UAS_URLS_NUM = 1 # Number of urls
    UAS_URL=http://192.168.0.40:80/index.html
    UAS_URL_MAX_TIME = 20 # Maximum batch time in seconds to fetch the url
    UAS_URL_INTERLEAVE_TIME = 0 # Time in msec to sleep after fetching the url

This is my first time using curl-loader remotely; kindly be as detailed as possible in the instructions that you give me.

Thanks
From: Robert I. <cor...@gm...> - 2007-05-17 14:14:45
On 5/17/07, Jeremy Hicks <je...@no...> wrote:
> Is there any way to see the actual data being received/sent for the
> requests (e.g. the html of the pages)?

The good news is that I have added initial support for that to svn. You may try it for not more than several clients and a few cycles.

Please add the new tags described below to the login_new.conf file, which I sent you before. Note that the tags are per-url; therefore, you may wish to add them both to your GET-url section as well as to the POST-url section (no actual url, just posting to the url fetched by GET and the 3xx-es).

From the ChangeLog:
------------------------------------------------------------------------------------
* Added an option to log response body-bytes and headers to files.
  To log body bytes, add the tag LOG_RESP_BODIES=1 to your configuration file.
  To log headers, add LOG_RESP_HEADERS=1. See log_hdr_body.conf as an example.
  A directory <batch-name> is created with subdirs url0, url1, ... url<n>.
  Each url subdir contains files named cl-<client-num>-cycle-<cycle-num>.body
  or cl-<client-num>-cycle-<cycle-num>.hdr respectively.
  Note that output to file is currently done in the simplest way and seriously
  impacts performance. Use it only with a small number of clients and for a few cycles.
------------------------------------------------------------------------------------

Thank you.

Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
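A sketch of what enabling the logging might look like for one url section and where to look afterwards. The config file name, batch name and url index are placeholders, and the exact placement of the tags inside your own url sections may differ:

    # append the per-url logging tags to the url section at the end of the config
    cat >> login_new.conf <<'EOF'
    LOG_RESP_HEADERS=1
    LOG_RESP_BODIES=1
    EOF
    curl-loader -f login_new.conf
    ls mybatch/url0/   # "mybatch" stands for your BATCH_NAME; expect cl-<client>-cycle-<cycle>.hdr/.body files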
From: Robert I. <cor...@gm...> - 2007-05-17 08:21:59
On 5/17/07, Jeremy Hicks <je...@no...> wrote:
> Is there any way to see the actual data being received/sent for the
> requests (e.g. the html of the pages)?

Correcting myself: option -o may not be working any more.

However, all the HTTP body bytes passed are seen by the function do_nothing_write_func in loader.c, which just skips the bytes. As a temporary workaround you can place inside that function:

    /* open the output file once, on the first call */
    static FILE* myfile = NULL;
    if (!myfile)
        myfile = fopen ("myfile.txt", "w+");
    fwrite (ptr, size, nmemb, myfile);
    return (nmemb*size);

It should work for a single transfer, whereas a solid solution is as described in our TODO list.

Another option in the meanwhile, for seeing the body bytes of non-encrypted responses, is using the ethereal/wireshark sniffer.

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Jeremy H. <je...@no...> - 2007-05-16 21:42:51
    Run-Time,Appl,Clients,Req,2xx,3xx,4xx,5xx,Err,Delay,Delay-2xx,Thr-In,Thr-Out
    0, Appl,     1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
    0, Sec-Appl, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
    *, *, *, *, *, *, *, *, *, *, *, *, *

    Run-Time,Appl,Clients,Req,2xx,3xx,4xx,5xx,Err,Delay,Delay-2xx,Thr-In,Thr-Out
    1, Appl,     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
    1, Sec-Appl, 0, 7, 2, 5, 0, 0, 0, 35, 3, 38088, 3202
From: Robert I. <cor...@gm...> - 2007-05-14 10:02:24
svn seems to be back to normal, with the changes performed:
http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/ChangeLog?view=markup

and the nearest TODO list:
http://curl-loader.svn.sourceforge.net/viewvc/curl-loader/trunk/curl-loader/TODO?view=markup

--
Sincerely,
Robert Iakobashvili,
coroberti %x40 gmail %x2e com
...........................................................
http://curl-loader.sourceforge.net
A web testing and traffic generation tool.
From: Aleksandar L. <al-...@no...> - 2007-05-08 13:20:46
On Tue 08.05.2007 15:28, Robert Iakobashvili wrote:
> Aleks,
>
> FYI:
> I have added the following to our TODO list:
[snipp]
> Thank you for your suggestions,

You're welcome ;-)

Have you thought about the timing thing in the faq/doc/output?

Cheers
Aleks