RE: [Grinder-use] Solaris tuning to maximize grinder threads (WAS: Add topic to the list grinder-use Archives).
Distributed load testing framework - Java, Jython, or Clojure scripts.
Brought to you by:
philipa
From: David Damo-T. <Dav...@te...> - 2005-12-12 17:18:39
Hi Keith,

My hardware is 64-bit Sun SPARC. One box is very powerful and the other two are decent. One box is a 690, on which I am using 8 CPUs and 16 cores; the box actually has 32 CPUs, but the other CPUs are being used elsewhere. I can push even more than 15,000 users on that box. The other two boxes are 440s, with 4 CPUs and 8 cores. I can get up to 15,000 on those; the CPU starts to bottleneck, but it survives. I am using OS threads and not JVM threads. I have 32 gigs of RAM on the 890 and 16 gigs of RAM on the other two boxes, with 32 gigs of swap on the 890 and 16 on the other two.

I found that having larger thread counts and fewer processes in grinder.properties works best. As the number of processes increases, each newly spawned process allocates memory in swap (rather like a malloc in C) until the swap is full, at which point The Grinder gets an I/O error. I also have a sleep variation of 0.1 and some sleep in the scripts, so my CPU is not pinned. My network is full-duplex gigabit, so I have no collisions or packet errors.

David :)

-----Original Message-----
From: McAlister, Keith [mailto:kei...@lo...]
Sent: December 12, 2005 11:44 AM
To: David Damo-TM; Andrew Stevens; gri...@li...
Subject: RE: [Grinder-use] Solaris tuning to maximize grinder threads (WAS: Add topic to the list grinder-use Archives).

My experience is very similar. I am hugely impressed with 15,000 users per load injector. How did you manage that with Grinder? Were you using PC-based hardware?

Keith McAlister

-----Original Message-----
From: David Damo-TM [mailto:Dav...@te...]
Sent: 12 December 2005 15:16
To: McAlister, Keith; Andrew Stevens; gri...@li...
Subject: RE: [Grinder-use] Solaris tuning to maximize grinder threads (WAS: Add topic to the list grinder-use Archives).
Hi,

I agree that on most networks, when traffic gets serialized, you get collisions and errors; but if you have a properly set up full-duplex gigabit backbone, you should have no collisions. Also, the definition of concurrency can be interpreted a few ways: parallel concurrency (e.g. 35,000 concurrent users from 35,000 machines), or software concurrency, where as long as the 35,000 users reach the machine within a certain time, the load is considered concurrent. I ran a test with three boxes on a full-duplex gigabit backbone, with 15,000 users per machine. I had no collisions or packet errors, but I had to do some TCP tuning, as you did. I agree that if the TCP stack is not configured properly your box will struggle, as mine did, and you will also see bad behaviour from the TCP backlog causing dropped connections. In the process of testing I also discovered that my load balancer was the bottleneck, since it was on a 100BaseT network and not on 1000BaseT. I found the Solaris performance tuning link useful in tuning the OS to handle high load:

http://docs.sun.com/app/docs/doc/816-0607

David :)

-----Original Message-----
From: gri...@li... [mailto:gri...@li...] On Behalf Of McAlister, Keith
Sent: December 12, 2005 3:09 AM
To: Andrew Stevens; gri...@li...
Subject: RE: [Grinder-use] Solaris tuning to maximize grinder threads (WAS: Add topic to the list grinder-use Archives).

You are correct: the traffic does get serialised again at the target host, and that leads to collisions, retransmissions and lots of other work in the TCP stack.

The effect of retransmissions is dramatic: I was performance testing a few weeks ago with remote users, and one of the 2 Mbit circuits generated CRC errors under load. At one point the system under test was retransmitting 100% of all traffic on that circuit, and the CPU load sat at 100% (4 x Xeon MP).
That was due to a faulty line, but I was surprised how easily CPU disappears if the TCP stack is made to work hard.

If a single injector can saturate a LAN segment, then all you will see is this on your switch:

  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 45530000 bits/sec, 3988 packets/sec
  5 minute output rate 1175000 bits/sec, 2230 packets/sec
  1402249 packets input, 2002579518 bytes, 0 no buffer
  Received 210 broadcasts (0 multicast)
  0 runts, 0 giants, 0 throttles
  0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
  0 watchdog, 12 multicast, 0 pause input
  0 input packets with dribble condition detected

Lots of traffic and no errors. Try that again with three injectors that can saturate the LAN, and something will happen (usually bad) that will knock on to the system under test.

Keith McAlister

-----Original Message-----
From: gri...@li... [mailto:gri...@li...] On Behalf Of Andrew Stevens
Sent: 09 December 2005 01:35
To: gri...@li...
Subject: RE: [Grinder-use] Solaris tuning to maximize grinder threads (WAS: Add topic to the list grinder-use Archives).

On Wed, 2005-12-07 at 22:06, McAlister, Keith wrote:
> One thing to bear in mind is that if you use only one load injector
> then all traffic will be serialised through its LAN card. This
> produces a nice clean flow with little contention, and artificially
> reduces the load on the system under test.
>
> I am not suggesting you should have hundreds of load injectors, but
> three is a good number to start from and will ensure the LAN traffic
> is more realistic at the system under test.

Hmm... just playing devil's advocate for a minute: unless there are multiple routes from the injectors to the web server(s), wouldn't the traffic just be serialised through the router or the server's LAN card instead?
In which case, if a single injector were fast enough to saturate the network bandwidth, what would using more than one do, other than make the router do a lot more switching?

Andrew.

> Keith McAlister
>
> ______________________________________________________________________
> From: gri...@li...
> [mailto:gri...@li...] On Behalf Of David
> Damo-TM
> Sent: 05 December 2005 15:43
> To: gri...@li...
> Subject: [Grinder-use] Solaris tuning to maximize grinder threads
> (WAS: Add topic to the list grinder-use Archives).
>
> Hi,
>
> I wanted to get your take on some tuning I did to try to maximize
> threads/tests on Solaris 10.
>
> Machine:
>
> 8 CPUs
> 32 gigs of RAM
> 32 gigs of swap
> More than 200 gigs of hard drive
> 64-bit machine
>
> /etc/system has been set to raise the maximum processes and file
> descriptors, to be able to handle a large number of concurrent
> connections:
>
> ** ########## File descriptor per-process hard and soft limits
> set rlim_fd_max=65536
> set rlim_fd_cur=65536
> ** ##########
>
> ** ########## Max number of processes per user (orig=29995)
> set maxuprc=65000
> ** ##########
>
> Agent JVM settings:
>
> java -server -Xms1024m -Xmx1024m -XX:NewRatio=4 -XX:SurvivorRatio=4
> -XX:TargetSurvivorRatio=80 -XX:PermSize=96m -XX:MaxPermSize=256m
> -Xnoclassgc -XX:+DisableExplicitGC -XX:+UsePerfData -verbose:gc
> -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintClassHistogram
> -Xconcurrentio -cp $CLASSPATH net.grinder.Grinder $GRINDERPROPERTIES
>
> I need to get up to 35,000 concurrent threads to simulate 35,000
> concurrent users. I understand that the term concurrency is loosely
> defined, since the network can only send one packet at a time, but we
> are happy if all the requests get to the server in under 5 seconds.
> After some tuning I found the best performance was to run 1,000
> threads per process and to ramp up every minute.
> Running too many processes would cause free swap to drop too far, and
> you would eventually get an I/O exception, since no swap was free.
> Swap seemed to be dropping because each new process causes a UNIX
> fork, each fork allocates memory in swap, and the allocated memory
> remains in swap. I was able to get up to about 9,000 threads before
> all the CPUs started to drop to 0. CPU would get back up to about
> half usage and would only suffer after a bunch of new threads was
> created, which is normal, since creating new threads eats up CPU and
> memory. I was able to get up to about 25,000 threads before the
> machine really started to suffer. I have also set my
> sleepTimeVariation to 0, so the only sleep time being used is the
> sleep in the script. The script I have logs into a secure page,
> sleeps for a minute, then logs out. I wanted to know if you have any
> suggestions for maximizing threads, and in turn Grinder tests, on
> this machine. I also understand many will suggest using a new
> machine, but I wanted to maximize the number of threads/tests per
> machine.
>
> Thanks for your help,
>
> David

_______________________________________________
Grinder-use mailing list
Gri...@li...
https://lists.sourceforge.net/lists/listinfo/grinder-use
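[Editor's note] For readers trying to reproduce the setup described in the thread, here is a minimal grinder.properties sketch of the "few processes, many threads, ramp up each minute" arrangement. The property names come from The Grinder 3 documentation; the values are illustrative assumptions, not the posters' actual files:

```properties
# Few worker processes, many threads per process, as discussed above.
grinder.processes = 5
grinder.threads = 1000

# Ramp up: start one new worker process per minute rather than forking
# everything at once (each fork allocates swap, per the thread).
grinder.processIncrement = 1
grinder.processIncrementInterval = 60000

# Vary script sleeps by +/-10% so the injector CPU is not pinned by
# perfectly synchronised threads.
grinder.sleepTimeVariation = 0.1

# JVM arguments for each worker process; size the heap so that
# processes x heap fits in RAM rather than swap.
grinder.jvm.arguments = -server -Xms1024m -Xmx1024m
```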
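[Editor's note] As a companion to the Solaris tuning manual David links, here is a sketch of the kind of TCP-stack tuning the thread alludes to (listen backlog and socket buffers). The ndd parameter names are standard Solaris TCP tunables; the values are illustrative assumptions, not the settings the posters actually used:

```sh
# Deepen the TCP listen backlog so bursts of injector connections are
# not dropped (the "TCP backlog causing dropped connections" above).
ndd -set /dev/tcp tcp_conn_req_max_q 4096
ndd -set /dev/tcp tcp_conn_req_max_q0 4096

# Shorten TIME_WAIT (milliseconds) so tens of thousands of short-lived
# connections recycle ports faster.
ndd -set /dev/tcp tcp_time_wait_interval 60000

# Larger send/receive buffers suit a full-duplex gigabit LAN.
ndd -set /dev/tcp tcp_xmit_hiwat 65536
ndd -set /dev/tcp tcp_recv_hiwat 65536
```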