opensta-users Mailing List for OpenSTA (Page 9)
|
From: Michael D. <md...@in...> - 2007-03-09 01:15:35
|
> Next, I don't know how close Chris' scripts emulate "reality". Determining that the system supports 10 VUs > tells me absolutely nothing about its capacity unless I know how a VU's work compares to "real life" users. If > a VU is a reasonable approximation of a real user and the system under test begins to time out at 10 VUs, then > it is appropriate to say that the system has a severe bottleneck at 10 users. I suspect in this case, > performance at 9 users was also bad. To determine a limit meaningful for capacity planning, you need to > emulate virtual users in a realistic way and judge capacity limits by looking at how response times, > throughput, and server resources vary as load changes. > -B I think that is an excellent point. I have stressed this multiple times to my management folks too: "I can drag the server down to a crawl with just 1 VU, or I can put 1 million VUs on the server and it won't feel a scratch. So don't ask me how many concurrent users the system can support without defining what a concurrent user is." :) --Michael |
|
From: Bernie V. <Ber...@iP...> - 2007-03-08 20:15:44
|
> So OpenSTA timing out could be an indication of server application > overloading. And if the scripts are recorded and modeled correctly, > that could well be the point that Chris is looking for to stop putting > more load when doing the throughput test. > > --Michael Hi Michael, I agree as long as the default timeout is appropriate given how long the transaction runs for a single user. I am testing an application right now that does complex delivery scheduling for a package delivery service where some transactions take 3 minutes when no one else is using the system! Given the work that is being done, it is completely reasonable. Forget usability factors, it is what it is and the users are thankful the task is automated. If I used a default timeout of 1 minute, an OpenSTA error would not indicate a server problem. This is an extreme example, but I don't have any idea what Chris is modeling, so I'm pointing it out. Next, I don't know how close Chris' scripts emulate "reality". Determining that the system supports 10 VUs tells me absolutely nothing about its capacity unless I know how a VU's work compares to "real life" users. If a VU is a reasonable approximation of a real user and the system under test begins to time out at 10 VUs, then it is appropriate to say that the system has a severe bottleneck at 10 users. I suspect in this case, performance at 9 users was also bad. To determine a limit meaningful for capacity planning, you need to emulate virtual users in a realistic way and judge capacity limits by looking at how response times, throughput, and server resources vary as load changes. -B |
|
From: Bernie V. <Ber...@iP...> - 2007-03-08 19:44:30
|
Dev, > Now the problem is CPU usage is showing 100%. > Please suggest any solution to reduce CPU usage. The only thing in your control that I can think of that causes high CPU utilization is string manipulation in scripts (the kind that you write). If you are doing "load response_info body..." into strings and looking for substrings or breaking apart strings to get at substrings, then you'll need to either make them more efficient, playback from a server with more CPU capacity (OpenSTA capacity scales "ok" on SMP servers), or start using multiple servers for playback. Just out of curiosity, what % of the CPU utilization is User mode? -Bernie |
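[Editor's note: Bernie's advice above is about string handling in OpenSTA SCL scripts; the sketch below illustrates the underlying technique only, in Python, with an invented helper name and sample HTML. The idea is to scan forward once with an offset instead of repeatedly splitting the whole response body.]

```python
def extract_between(body: str, left: str, right: str, start: int = 0):
    """Single-pass extraction of the text between two markers.

    Cheaper under load than repeatedly splitting the body: str.find
    scans forward once from `start` instead of building intermediate
    lists of fragments on every lookup.
    """
    i = body.find(left, start)
    if i < 0:
        return None, -1
    i += len(left)
    j = body.find(right, i)
    if j < 0:
        return None, -1
    return body[i:j], j  # also return where we stopped, for the next call

# Illustrative response body; markers and values are made up.
body = '<input name="session" value="abc123"><input name="token" value="xyz">'
session, pos = extract_between(body, 'value="', '"')
token, _ = extract_between(body, 'value="', '"', pos)
print(session, token)  # abc123 xyz
```

Resuming each search from the previous match's offset is what keeps the cost linear in the body size rather than quadratic as the number of extracted values grows.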
|
From: Michael D. <md...@in...> - 2007-03-08 19:32:20
|
> Exactly. I was pointing out there is a difference between OpenSTA and the > application recognizing and throwing an error because too much time had > passed. Perhaps it's just a matter of semantics, but it was clear to me > that in the example raised by Chris it was OpenSTA that recognized too > much time had passed, not the application. > -Bernie If the application is overloaded, it won't have time to keep track of time. :) So OpenSTA timing out could be an indication of server application overloading. And if the scripts are recorded and modeled correctly, that could well be the point that Chris is looking for to stop putting more load when doing the throughput test. --Michael |
|
From: Bernie V. <Ber...@iP...> - 2007-03-08 18:38:19
|
Thanks for the compliment Olaf. Coming from you, that means a lot. I think that in some cases I have used far too few words, like referring to the first type of testing as "capacity planning" which is much more than testing. More accurately, I should have referred to it as capacity testing, which can be part of capacity planning. There are other cases where capacity planning is just monitoring load and extrapolating what additional hardware would be required to stay ahead of increased demand using complex modeling tools, spreadsheets, and Ouija boards. I'll try and clean it up so that, if taken literally, it does more good than harm and then submit it under testing strategies as you requested. Cheers, -Bernie www.iPerformax.com ----- Original Message ----- From: "Olaf Kock" <ok...@ab...> To: "OpenSTA users discussion and support" <ope...@li...> Sent: Thursday, March 08, 2007 1:05 PM Subject: Re: [OpenSTA-users] Performance testing with Open STA > Bernie Velivis schrieb: >> We've certainly left the realm of OpenSTA related questions and moved >> into a >> discussion of performance testing. It's a slow day, I'll bite. > > Hi, > > that's been a great article about different kinds of tests. > In order to draw attention to it and make it easily linkable would you > mind submitting it to http://portal.opensta.org/? This also would bring > back some action (read: life) that has been missing there for quite some > time... > > The content might somehow end up in the FAQ - but I believe there's > worth in keeping all those testing strategies together and comparing them > side by side. It certainly goes directly into my toolbox of explaining > these techniques with only few, but well-chosen, words. > > Cheers, > Olaf |
|
From: Bernie V. <Ber...@iP...> - 2007-03-08 18:28:19
|
Hi Danny, How's things in the Lone Star State? > Osoata, Christabel wrote: >> If I understand Chris' email, Chris has found the point where OpenSTA >> times out, not the application. > > I'm not sure what this means. Do you mean to distinguish between OpenSTA > aborting a connection because it times out waiting for a response, and the > web application detecting a timeout itself and returning some sort of > error on the text of a page (with a 200 response)? > -- Exactly. I was pointing out there is a difference between OpenSTA and the application recognizing and throwing an error because too much time had passed. Perhaps it's just a matter of semantics, but it was clear to me that in the example raised by Chris it was OpenSTA that recognized too much time had passed, not the application. -Bernie |
|
From: Danny R. F. <fa...@te...> - 2007-03-08 18:10:41
|
Osoata, Christabel wrote: > If I understand Chris' email, Chris has found the point where OpenSTA > times out, not the application. I'm not sure what this means. Do you mean to distinguish between OpenSTA aborting a connection because it times out waiting for a response, and the web application detecting a timeout itself and returning some sort of error on the text of a page (with a 200 response)? -- Danny R. Faught Tejas Software Consulting http://tejasconsulting.com/ |
|
From: Olaf K. <ok...@ab...> - 2007-03-08 18:05:56
|
Bernie Velivis schrieb: > We've certainly left the realm of OpenSTA related questions and moved into a > discussion of performance testing. It's a slow day, I'll bite. Hi, that's been a great article about different kinds of tests. In order to draw attention to it and make it easily linkable, would you mind submitting it to http://portal.opensta.org/? This also would bring back some action (read: life) that has been missing there for quite some time... The content might somehow end up in the FAQ - but I believe there's worth in keeping all those testing strategies together and comparing them side by side. It certainly goes directly into my toolbox of explaining these techniques with only few, but well-chosen, words. Cheers, Olaf -- No part of this message may reproduce, store itself in a retrieval system, or transmit disease, in any form, without the permissiveness of the author. |
|
From: Bernie V. <Ber...@iP...> - 2007-03-08 17:48:22
|
> <Chris writes> > > Hi there, > Thanks for your response, so does this mean that running the test for 10 > users simultaneously should not cause timeout errors. The developers > think it may be because it is an unrealistic test i.e. In the real > world, I don't think we will have 10 users running the same test and > performing the same action at the same time for an hour, I think it is > probably best to ramp the test up such that 1 user is added every 30 > seconds with a 10 second delay until I get to the maximum number of > users, do you think this is a more realistic test? > > Although running the same test now with 10 simultaneous users even > though it displays timeout errors it still creates records in the > database. > Chris,

We've certainly left the realm of OpenSTA related questions and moved into a discussion of performance testing. It's a slow day, I'll bite.

There are three major areas of performance testing. Different people use different terminology, so you'll have to put up with mine, understanding that it might not jive completely with what others say. Still, it's the goal of the testing that is important, not what you call it.

If your goal is to do CAPACITY PLANNING, then you should create a "realistic" workload: a mix of the most popular transactions plus those deemed critical, presented to the server(s) under test in a realistic fashion. This is easy to say, and I've seen 3 day seminars and countless books dedicated to how to do this "correctly". For the most part this boils down to picking a manageable (in terms of time to develop vs. budget, goals, etc.) set of transactions to emulate, determining the % probability of executing each transaction, the overall arrival rate, and the "success criteria" for the transactions (i.e. response time limits, throughput goals, etc.). Collectively, I'll refer to these attributes as the "workload definition".

One way to implement a given workload definition is to create a master script which is assigned to each VU; have it generate random numbers and then call other scripts (that model the workload transactions) based on a table of probabilities. The scripts should be modeled with think times consistent with the way your users will interact with the system. This varies greatly from one app to another and, unless you are mining logs from an application already in use, is somewhat subjective. The best advice I can give is to be conservative, but not so much so that the sum of all your conservative decisions is pathological.

Once you have a workload that has pacing (think times) you are comfortable with, increase the number of users and monitor how response times, server resource utilization (CPU, IO rate, network, and memory), and throughput (number of tasks completed system wide) vary with the increased load. You might set up your test so you ramp up to a specific number of users, then let them run for a while, and repeat as necessary. This way, you capture the behavior of the system at various steady states. The length of time to allow a particular number of users to run varies with a number of factors, including how different the transactions are from one another in terms of resource utilization and response time. If you can't get repeatable results, your steady state interval might be too small. I've seen intervals as small as 10 minutes work and other workloads that require an interval of hours to be useful.

That's a rough outline of one approach to capacity planning, which in summary is an attempt to load up the system with VUs in a way that a VU is indistinguishable from a "real user". Again, much easier said than done. Pick the wrong workload, and your results might be worthless. The end game here is to increase load until response times become excessive (whatever that means to you, but it needs to be defined; again, tons of material to read about this), at which point you have found a limit to system capacity. This limit will be due to either a hardware or software bottleneck. Now, if you are on a tuning expedition, then analyze the performance metrics captured and either do some tuning, code optimization, or add some hardware resources, and repeat as necessary until you either meet throughput goals, find the limits of the architecture, or run out of time (happens more often than most performance engineers would like).

The same scripts can be used for SOAK TESTING, where you load up the system at close to its maximum capacity and let it run for hours, days, etc. This is a great way to spot stability problems that only occur after the system has been running a long time (memory leaks are a good example of things you will find).

Run a long test and start failing components (servers, routers, etc.) to see how response times are affected and how long the system takes to return to a steady state, and you are on your way towards FAILOVER TESTING. You can find reams of material to read about failover testing and high availability as well.

If your goal is to determine where or how the system will fail, then you are doing STRESS TESTING. One way to do this is to comment out the think times and increase VUs until something (hopefully not your emulator!) breaks. This is just one form of stress testing, a valuable aspect of performance testing, but not the same as capacity planning. How the VUs compare to "real users" may be irrelevant, as you are trying to determine how the system behaves when pushed past its limits.

So I guess only you can answer your question. Decide what your goals are (capacity planning, stability testing, failover testing, or stress testing) and then see if your script and test behavior is aligned with the goal(s). -Bernie www.iPerformax.com |
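[Editor's note: the "master script with a table of probabilities" approach Bernie describes can be sketched as follows. OpenSTA master scripts are written in SCL, so this Python version only illustrates the selection logic; the transaction names and weights are invented for illustration.]

```python
import random

# Hypothetical workload definition: transaction mix with % probabilities.
WORKLOAD = [
    ("browse_catalog", 0.60),
    ("search",         0.25),
    ("create_order",   0.10),
    ("admin_report",   0.05),
]

def pick_transaction(rng: random.Random) -> str:
    """Pick the next transaction for a VU according to the mix."""
    r = rng.random()
    cumulative = 0.0
    for name, weight in WORKLOAD:
        cumulative += weight
        if r < cumulative:
            return name
    return WORKLOAD[-1][0]  # guard against floating-point round-off

# Each VU's "master script" loop would pick, run the matching sub-script,
# apply a think time, and repeat; here we only check the mix itself.
rng = random.Random(42)
counts = {name: 0 for name, _ in WORKLOAD}
for _ in range(10_000):
    counts[pick_transaction(rng)] += 1
print(counts)  # roughly 6000 / 2500 / 1000 / 500
```

Over enough iterations per VU, the observed transaction frequencies converge to the workload definition's percentages, which is what makes the aggregate load "realistic" in the sense Bernie describes.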
|
From: Dev G. <dev...@gm...> - 2007-03-08 17:38:06
|
Hi, I am using OpenSTA 1.4.3 on a Windows XP machine to generate a load of 200 VUs. I am doing modular testing; it contains 2 scripts: the first script contains login info and the second script contains application navigation info. In these scripts I have used 12 variables for parameterization. All these variables are linked to one .FVR file which contains 1600 data values. Here are the test scenario settings:
1. Total: 200 VUs
2. Number of VUs per batch: 10
3. Batch ramp-up time: 30 seconds
4. Duration: 10 mins
5. First script: 1 iteration
6. Second script: 10 mins
Load generator machine configuration:
1. Windows XP
2. CPU: 3.00 GHz, 2.99 GHz
3. 8 GB RAM
Now the problem is CPU usage is showing 100%. Please suggest any solution to reduce CPU usage. Thanks Dev |
|
From: Osoata, C. <Chr...@at...> - 2007-03-08 16:27:32
|
<Olaf wrote> > Although running the same test now with 10 simultaneous users even > though it displays timeout errors it still creates records in the > database. Sure, that's the nature of http: If you fire up requests for http://myserver/performLengthyOperation?count=1 http://myserver/performLengthyOperation?count=2 http://myserver/performLengthyOperation?count=3 http://myserver/performLengthyOperation?count=4 http://myserver/performLengthyOperation?count=5 every second, the appropriate action will be started (and usually finished) 5 times, regardless of how long you wait for each result (e.g. regardless of when OpenSTA or your browser times out). The server will likely notice that you have gone away when it starts to send back a response. If your lengthy operation doesn't send anything back to the client until it finishes, it might recognize the fact that the client has gone away after having written something to the database. Cheers, Olaf <Chris writes> Hi Olaf, I'm a bit confused now actually, does this mean that it's OpenSTA that is timing out and not the application under test? As in the error log there are loads of 'The wait operation timed out' error messages and 'Timeout generated for Socket 0x360' error messages. Thanks Chris This email and any attached files are confidential and copyright protected. If you are not the addressee, any dissemination of this communication is strictly prohibited. Unless otherwise expressly agreed in writing, nothing stated in this communication shall be legally binding. The ultimate parent company of the Atkins Group is WS Atkins plc. Registered in England No. 1885586. Registered Office Woodcote Grove, Ashley Road, Epsom, Surrey KT18 5BW. Consider the environment. Please don't print this e-mail unless you really need to. |
|
From: Olaf K. <ok...@ab...> - 2007-03-08 13:31:16
|
Osoata, Christabel schrieb: > Although running the same test now with 10 simultaneous users even > though it displays timeout errors it still creates records in the > database. Sure, that's the nature of http: If you fire up requests for http://myserver/performLengthyOperation?count=1 http://myserver/performLengthyOperation?count=2 http://myserver/performLengthyOperation?count=3 http://myserver/performLengthyOperation?count=4 http://myserver/performLengthyOperation?count=5 every second, the appropriate action will be started (and usually finished) 5 times, regardless of how long you wait for each result (e.g. regardless of when OpenSTA or your browser times out). The server will likely notice that you have gone away when it starts to send back a response. If your lengthy operation doesn't send anything back to the client until it finishes, it might recognize the fact that the client has gone away after having written something to the database. Cheers, Olaf -- No part of this message may reproduce, store itself in a retrieval system, or transmit disease, in any form, without the permissiveness of the author. |
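[Editor's note: Olaf's point — that the server-side action runs to completion and writes its record whether or not the client is still waiting — can be demonstrated with a small self-contained sketch. The URL path mirrors Olaf's example; the in-process server, timings, and `records` list are all illustrative stand-ins.]

```python
import http.server
import socket
import threading
import time
import urllib.error
import urllib.request

records = []  # stands in for the database rows the application writes

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.6)            # the "lengthy operation"
        records.append(self.path)  # the record exists whether or not
                                   # the client is still waiting
        try:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"done")
        except (BrokenPipeError, ConnectionResetError):
            pass                   # client already gave up

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/performLengthyOperation?count=1" % server.server_port
try:
    urllib.request.urlopen(url, timeout=0.15)  # client times out first
except (socket.timeout, urllib.error.URLError):
    print("client timed out")

time.sleep(1.2)  # give the server time to finish the work anyway
print("records created:", len(records))
server.shutdown()
```

The client abandons the request after 150 ms, yet the handler still completes its work and appends the "record" — exactly the behaviour Chris is seeing, where timed-out requests still create rows in the database.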
|
From: Osoata, C. <Chr...@at...> - 2007-03-08 12:38:45
|
Hi Michael, Bernie, Danny and Olaf, >>aim is to identify the number of orders created in an hour by 10 users. >>Any ideas why I am getting these timeout errors? <Michael wrote> > Let's assume you have recorded and modeled your scripts correctly. > > Now, you are testing your server performance (throughput) by > identifying the number of orders created in an hour by 10 users, right? > > Aren't you then looking for at what point the load would cause your > server to time out requests? And bingo, you have reached that point. Michael, I understand what you are saying in principle and I don't disagree. When looking for bottlenecks, a useful technique is to compare throughput vs. load and see when it becomes non-linear. Frequent timeouts will always lead to non-linear throughput vs. load (users) graphs. > Aren't you then looking for at what point the load would cause your > server to time out requests? And bingo, you have reached that point. If I understand Chris' email, Chris has found the point where OpenSTA times out, not the application. I have no idea how long Chris' single user response times are. If they are just a few seconds, then given the default 1 minute timeout (actually, the default value is 1 minute but OpenSTA can take anywhere up to 2X the timeout value to actually deliver the timeout error to scripts), the server(s) have reached a significant bottleneck. If single user response times started at close to 1 minute, then a response time of 1.5 minutes would not be that unusual or indicate a bottleneck under load. <Chris writes> Hi there, Thanks for your response, so does this mean that running the test for 10 users simultaneously should not cause timeout errors? The developers think it may be because it is an unrealistic test, i.e. in the real world, I don't think we will have 10 users running the same test and performing the same action at the same time for an hour. I think it is probably best to ramp the test up such that 1 user is added every 30 seconds with a 10 second delay until I get to the maximum number of users; do you think this is a more realistic test? Although running the same test now with 10 simultaneous users, even though it displays timeout errors it still creates records in the database. Many thanks Chris |
|
From: Bernie V. <Ber...@iP...> - 2007-03-07 20:38:36
|
>>aim is to identify the number of orders created in an hour by 10 users. >>Any ideas why I am getting these timeout errors? <Michael wrote> > Let's assume you have recorded and modeled your scripts correctly. > > Now, you are testing your server performance (throughput) by identifying > the number of orders created in an hour by 10 users, right? > > Aren't you then looking for at what point the load would cause your > server to time out requests? And bingo, you have reached that point. Michael, I understand what you are saying in principle and I don't disagree. When looking for bottlenecks, a useful technique is to compare throughput vs. load and see when it becomes non-linear. Frequent timeouts will always lead to non-linear throughput vs. load (users) graphs. > Aren't you then looking for at what point the load would cause your > server to time out requests? And bingo, you have reached that point. If I understand Chris' email, Chris has found the point where OpenSTA times out, not the application. I have no idea how long Chris' single user response times are. If they are just a few seconds, then given the default 1 minute timeout (actually, the default value is 1 minute but OpenSTA can take anywhere up to 2X the timeout value to actually deliver the timeout error to scripts), the server(s) have reached a significant bottleneck. If single user response times started at close to 1 minute, then a response time of 1.5 minutes would not be that unusual or indicate a bottleneck under load. -Bernie (www.iPerformax.com) |
|
From: Michael D. <md...@in...> - 2007-03-07 20:23:56
|
>Hi there, >I am trying to use OpenSTA to do performance testing. Currently I have 3 >scripts, Login, Create orders and Logout. I am currently running the >test for 10 users simultaneously and I am getting timeout errors, the >aim is to identify the number of orders created in an hour by 10 users. >Any ideas why I am getting these timeout errors? >Many Thanks >Chris Let's assume you have recorded and modeled your scripts correctly. Now, you are testing your server performance (throughput) by identifying the number of orders created in an hour by 10 users, right? Aren't you then looking for at what point the load would cause your server to time out requests? And bingo, you have reached that point. -- |
|
From: Danny R. F. <fa...@te...> - 2007-03-07 20:15:10
|
Osoata, Christabel wrote: > Any ideas why I am getting these timeout errors? Congratulations, it looks like you've found what you were looking for (a bug or performance bottleneck), and it's time to start debugging. What I like to do is to try to reproduce the same problem in a browser while OpenSTA is running. That makes the problem look much more real to your stakeholders, and helps to rule out an OpenSTA bug causing the problem. One other thing I do is to edit OpenSTA's config file to bump up its timeout, so I can continue to track response times as they get longer. -- Danny R. Faught Tejas Software Consulting http://tejasconsulting.com/ |
|
From: Bernie V. <Ber...@iP...> - 2007-03-07 19:11:45
|
Chris, > Any ideas why I am getting these timeout errors? The server is responding slowly (or not at all) and the connection times out. The User Guide (http://www.opensta.org/docs/ug/) is useful for understanding many aspects of OpenSTA. Searching the mail archives and Google will reduce your dependence on outside help. It took me 10 seconds to find this (google "opensta timeout"): http://portal.opensta.org/modules.php?op=modload&name=phpWiki&file=index&pagename=PlaybackRequestTimeout -Bernie (www.iPerformax.com) |
|
From: Olaf K. <ok...@ab...> - 2007-03-07 19:06:37
|
Osoata, Christabel schrieb: > Hi there, > I am trying to use OpenSTA to do performance testing. Currently I have 3 > scripts, Login, Create orders and Logout. I am currently running the > test for 10 users simultaneously and I am getting timeout errors, the > aim is to identify the number of orders created in an hour by 10 users. > Any ideas why I am getting these timeout errors? > > Many Thanks > Chris Hi Chris, the first thing that comes to my mind is: application overload? I can think of more, but you do not provide enough information: When do timeouts occur? With 1, 3, 8 VUs? Is the script running correctly (e.g. seeing the correct results)? Please understand that right now we know nothing more than "it does not work". Plus you might want to look at http://portal.opensta.org/faq.php?topic=PlaybackRequestTimeout Cheers, Olaf -- No part of this message may reproduce, store itself in a retrieval system, or transmit disease, in any form, without the permissiveness of the author. |
|
From: Osoata, C. <Chr...@at...> - 2007-03-07 17:46:48
|
Hi there, I am trying to use OpenSTA to do performance testing. Currently I have 3 scripts: Login, Create orders and Logout. I am currently running the test for 10 users simultaneously and I am getting timeout errors; the aim is to identify the number of orders created in an hour by 10 users. Any ideas why I am getting these timeout errors? Many Thanks Chris |
|
From: Osoata, C. <Chr...@at...> - 2007-03-07 15:42:01
|
Many thanks for your response. -----Original Message----- From: ope...@li... [mailto:ope...@li...] On Behalf Of Bernie Velivis Sent: 07 March 2007 13:12 To: OpenSTA users discussion and support Subject: Re: [OpenSTA-users] Problem with missing Orders Christabel, > Can I have some help please, I have recorded 3 tests in OpenSTA, a > Login, Create Order and Logout. However if I have 35 users logging in at > the same time and creating orders for 9 hours then logging out, some > of the order numbers are missing in the database, i.e. it creates > orders 1,2,3, ..... then missing out some orders then partly creates > orders 12, 13, 14. Some of the orders created are blank, I am not sure > if the problem is with OpenSTA or my recorded scripts, any ideas why > this is happening? I suspect that parts of the script are failing intermittently. The first thing I would do is add code to the script to verify the results of all primary get/put operations. There is a decent article at http://portal.opensta.org/index.php?name=News&file=article&sid=40 that gives an example of checking results. The bottom line is that you should not assume all is going as it did when you recorded, and that runtime verification is, IMHO, absolutely required. Another possible explanation is that the server is rejecting valid orders due to being too busy. Check application and database logs for errors (rollbacks, server too busy errors). Also have a look at 'HTTP Data List' under test results to check for 4XX and 5XX response codes to narrow down where the problem is. -Bernie (www.iperformax.com) |
|
From: Bernie V. <Ber...@iP...> - 2007-03-07 13:11:41
|
Christabel, > Can I have some help please, I have recorded 3 tests in OpenSTA, a > Login, Create Order and Logout. However if I have 35 users logging in at > the same time and creating orders for 9 hours then logging out, some of > the order numbers are missing in the database, i.e. it creates orders > 1,2,3, ..... then missing out some orders then partly creates orders 12, > 13, 14. Some of the orders created are blank, I am not sure if the > problem is with OpenSTA or my recorded scripts, any ideas why this is > happening? I suspect that parts of the script are failing intermittently. The first thing I would do is add code to the script to verify the results of all primary get/put operations. There is a decent article at http://portal.opensta.org/index.php?name=News&file=article&sid=40 that gives an example of checking results. The bottom line is that you should not assume all is going as it did when you recorded, and that runtime verification is, IMHO, absolutely required. Another possible explanation is that the server is rejecting valid orders due to being too busy. Check application and database logs for errors (rollbacks, server too busy errors). Also have a look at 'HTTP Data List' under test results to check for 4XX and 5XX response codes to narrow down where the problem is. -Bernie (www.iperformax.com) |
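[Editor's note: the runtime-verification idea above — never assume a request succeeded just because it returned — can be sketched as a small classifier. OpenSTA checks would be written in SCL; this Python version only illustrates the checking logic, and the function name, categories, and sample bodies are made up.]

```python
def verify_response(status: int, body: str, success_marker: str) -> str:
    """Classify one HTTP response the way a script's check code might."""
    if 400 <= status < 500:
        return "client-error"    # 4XX: bad request, auth failure, missing page
    if status >= 500:
        return "server-error"    # 5XX: server too busy, crash, rollback
    if success_marker not in body:
        return "silent-failure"  # 200 page without the expected content
    return "ok"

# A 200 response is not proof of success: under load the page may carry
# an application-level error instead of the order confirmation, which is
# how orders go missing or come back blank without any HTTP error.
print(verify_response(200, "Order 17 confirmed", "confirmed"))            # ok
print(verify_response(200, "System busy, try again later", "confirmed"))  # silent-failure
print(verify_response(503, "", "confirmed"))                              # server-error
```

Counting these categories per transaction during the run gives exactly the evidence Bernie suggests pulling from the 'HTTP Data List' and application logs, but at the moment each failure happens.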
|
From: Olaf K. <ok...@ab...> - 2007-03-07 12:57:21
|
Osoata, Christabel schrieb:

> Hi All,
> Can I have some help please. I have recorded 3 tests in OpenSTA: a
> Login, Create Order and Logout. However if I have 35 users logging in
> at the same time and creating orders for 9 hours then logging out,
> some of the order numbers are missing in the database, i.e. it creates
> orders 1, 2, 3, ... then misses some, then partly creates orders 12,
> 13, 14. Some of the orders created are blank. I am not sure if the
> problem is with OpenSTA or my recorded scripts. Any ideas why this is
> happening?

Christabel, the order part of OpenSTA has been thoroughly audited with industry leaders in supply chain management, therefore I doubt that any bugs remain in the order processing part... But to be honest: OpenSTA has no notion of your orders; it simply throws HTTP requests at your application. Whether they make sense depends completely on you, the script and the application. As stated many times on this list, OpenSTA is primarily a load generator, not a functional testing tool, though it can handle a bit of that.

If you have scripted - automatic - functional tests, try to execute them in parallel to see if your application has a problem under load. Otherwise you should modify the script to check for the correct answer given by the server. See my post from Feb 16 on this list, "Re: [OpenSTA-users] Transactions not submitted", for some more info.

Cheers, Olaf

-- No part of this message may reproduce, store itself in a retrieval system, or transmit disease, in any form, without the permissiveness of the author.
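[Editor's note] Before blaming either OpenSTA or the scripts, it helps to quantify the symptom Christabel describes: pull the order numbers back out of the database after a run and list the gaps and the blank rows. A hypothetical Python sketch (the row shape and sample data are assumptions, not her schema):

```python
# Find gaps and blanks in a sequence of (order_number, payload) rows,
# e.g. fetched from the orders table after a load-test run.

def find_problems(rows):
    """rows: list of (order_number, payload) tuples; payload may be ''.
    Returns (missing_numbers, blank_numbers)."""
    numbers = sorted(n for n, _ in rows)
    expected = set(range(numbers[0], numbers[-1] + 1)) if numbers else set()
    missing = sorted(expected - set(numbers))                 # never created
    blanks = sorted(n for n, body in rows if not body.strip())  # created empty
    return missing, blanks

rows = [(1, "ok"), (2, "ok"), (3, "ok"), (12, ""), (13, "ok"), (14, "ok")]
missing, blanks = find_problems(rows)
print(missing)  # → [4, 5, 6, 7, 8, 9, 10, 11]  (orders that never made it)
print(blanks)   # → [12]  (order created but empty)
```

Correlating the missing numbers with timestamps in the server logs usually shows whether they cluster around load spikes.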
|
From: Osoata, C. <Chr...@at...> - 2007-03-07 10:19:17
|
Hi All,

Can I have some help please. I have recorded 3 tests in OpenSTA: a Login, Create Order and Logout. However if I have 35 users logging in at the same time and creating orders for 9 hours then logging out, some of the order numbers are missing in the database, i.e. it creates orders 1, 2, 3, ... then misses some, then partly creates orders 12, 13, 14. Some of the orders created are blank. I am not sure if the problem is with OpenSTA or my recorded scripts. Any ideas why this is happening?

Many thanks
Chris

This email and any attached files are confidential and copyright protected. If you are not the addressee, any dissemination of this communication is strictly prohibited. Unless otherwise expressly agreed in writing, nothing stated in this communication shall be legally binding. The ultimate parent company of the Atkins Group is WS Atkins plc. Registered in England No. 1885586. Registered Office Woodcote Grove, Ashley Road, Epsom, Surrey KT18 5BW. Consider the environment. Please don't print this e-mail unless you really need to.
|
From: Olaf K. <ok...@ab...> - 2007-03-02 09:39:06
|
Flint, Kent W schrieb:

> I have been using OpenSTA for quite a while now against a web app
> running on WebLogic and it's been working great. We are now in the
> process of beginning to convert over to the JBoss app server.
>
> We have a JBoss environment up and I'm trying to run my OpenSTA tests
> against this JBoss environment and am running into problems. The
> problem I'm up against right now is a POST command to the server
> (which is the actual login page): I'm loading the Response_Info Header
> into cookie_7_0 WITH "Set-Cookie,prtcpntId", and when OpenSTA is
> parsing these header contents, it cannot find the cookie 'prtpcntId'.
> I've also tried just straight loading the Response_Info Header into
> the cookie_7_0 variable and logging that to output, and it's empty.
> So I'm wondering why this would work fine with WebLogic, but nothing
> in the Response_Info Header for JBoss...

It seems that while you recorded, the server (whatever brand and version it was) sent a cookie named prtcpntId, and this was recorded into your script. If, at runtime, this cookie is no longer being sent, that is not a problem OpenSTA has with your server, but a problem your script has with your application. You'll have to look for other cookies that are sent with any response and need to be re-sent later during 'playback' of your script.

This kind of scripting is somewhat annoying, but please keep in mind that you should know and understand your application well enough to know when a cookie will be expected to be sent by the server. The manual work is the price of the load-test requirement of using as little processing power as possible on the client (unless your script explicitly analyses more) in order to run as many parallel client processes as possible on given hardware. It would be a lot easier to load test from 1000 different boxes, but if you want to simulate 1000 users from 1-5 boxes, you had better not analyse too much of the HTTP traffic by default.

> Anybody else run into any problems using OpenSTA with JBoss?
> And/or have any ideas what the problem might be here?
>
> Any help will be greatly appreciated!

Regarding your concrete problem: did you record against WebLogic and play back against JBoss? prtpcntId might be a WebLogic cookie (I don't know about that...)

Cheers, Olaf

-- No part of this message may reproduce, store itself in a retrieval system, or transmit disease, in any form, without the permissiveness of the author.
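[Editor's note] Olaf's advice is to re-send whatever cookies the server actually issues at runtime, rather than the names baked in at record time. A minimal sketch of that capture-and-replay idea, using Python's stdlib `Set-Cookie` parser (the header values below are made up for illustration):

```python
from http.cookies import SimpleCookie

def capture_cookies(set_cookie_headers, jar=None):
    """Fold the Set-Cookie headers from a response into a name->value jar,
    so later requests replay what the server actually sent at runtime."""
    jar = dict(jar or {})
    for header in set_cookie_headers:
        parsed = SimpleCookie()
        parsed.load(header)
        for name, morsel in parsed.items():
            jar[name] = morsel.value
    return jar

def cookie_header(jar):
    """Build the Cookie request header from the current jar."""
    return "; ".join(f"{name}={value}" for name, value in sorted(jar.items()))

# Hypothetical responses from two requests:
jar = capture_cookies(["JSESSIONID=abc123; Path=/"])
jar = capture_cookies(["prtcpntId=42; Path=/"], jar)
print(cookie_header(jar))  # → JSESSIONID=abc123; prtcpntId=42
```

In OpenSTA this capture would be done in SCL via the Response_Info Header, but the principle is the same: the jar is driven by runtime responses, not by the recording.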
|
From: Flint, K. W <kf...@vf...> - 2007-03-01 14:03:15
|
I have been using OpenSTA for quite a while now against a web app running on WebLogic and it's been working great. We are now in the process of beginning to convert over to the JBoss app server.

We have a JBoss environment up and I'm trying to run my OpenSTA tests against this JBoss environment and am running into problems. The problem I'm up against right now is a POST command to the server (which is the actual login page): I'm loading the Response_Info Header into cookie_7_0 WITH "Set-Cookie,prtcpntId", and when OpenSTA is parsing these header contents, it cannot find the cookie 'prtpcntId'. I've also tried just straight loading the Response_Info Header into the cookie_7_0 variable and logging that to output, and it's empty. So I'm wondering why this would work fine with WebLogic, but nothing in the Response_Info Header for JBoss...

Anybody else run into any problems using OpenSTA with JBoss? And/or have any ideas what the problem might be here?

Any help will be greatly appreciated!

Thanks,
Kent

___________________________________
Kent Flint
Customer Support & Quality Assurance
VFA, Inc | 500 E. 96th St., Indianapolis, IN 46240
t 317.805.6015 f 317.805.6009
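[Editor's note] One quick way to narrow down a problem like Kent's is to capture the login response headers from each server and diff the cookie names they set. A hypothetical sketch (the header lists are invented examples, not real WebLogic or JBoss captures):

```python
# Compare which cookie names two servers set on the same request,
# e.g. a login POST captured once per server with any HTTP sniffer.

def cookie_names(set_cookie_headers):
    """Extract just the cookie names from a list of Set-Cookie header values."""
    return {header.split("=", 1)[0].strip() for header in set_cookie_headers}

weblogic_login = ["prtcpntId=42; Path=/", "JSESSIONID=abc; Path=/"]
jboss_login = ["JSESSIONID=xyz; Path=/"]

# Cookies the recorded script expects but the new server never sends:
print(cookie_names(weblogic_login) - cookie_names(jboss_login))  # → {'prtcpntId'}
```

Any name in that difference is a cookie the recorded script will wait for in vain against the new server.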