Thread: Re: [OpenSTA-users] Reading the results
From: Osoata, C. <Chr...@at...> - 2007-02-20 17:52:06
Hi Dan,

Many thanks for your email. Does the Active Users/Elapsed Time graph show the response time for the total number of users? I currently have 3 different scripts: login, create new order and logout. I am running the test for 50 simultaneous users; will this graph show me the response time for the 50 users for the 3 different scripts? Will it also show the response time if I ramp up the users?

Many thanks
Chris

-----Original Message-----
From: ope...@li... On Behalf Of Dan Downing
Sent: 20 February 2007 14:26
To: 'OpenSTA users discussion and support'
Subject: Re: [OpenSTA-users] Reading the results

Chris,

If there is one key graph that captures the essence of stress testing, it is "timer values vs active users", as this illustrates the "scalability" of the business process and the "pages" of the app, i.e. "how does response time vary as load is increased?". However, to assess the "reliability" of the test, you need to look at the error log and the HTTP data list (filtered for >399 and <599 errors). Too many timeout, script, or HTTP errors could invalidate your results, and you should strive to at least explain, if not eliminate, most errors.

Also, a very key graph is Custom NT Performance, created by defining a Collector that captures Bytes Sent and Bytes Received from your load injector. Graphed against Active Users, this lets you gauge whether there is a bandwidth throughput bottleneck somewhere in your system.

My opinion: the OpenSTA graphs are rather primitive and limited, and I much prefer exporting Timers and graphing in Excel (using Pivot Tables to summarize values into 1 or 2 minute intervals; you can view examples on our website).

..Dan
www.mentora.com

-----Original Message-----
From: ope...@li... On Behalf Of Osoata, Christabel
Sent: Tuesday, February 20, 2007 4:27 AM
To: OpenSTA users discussion and support
Subject: [OpenSTA-users] Reading the results

Hi All,

I am in the process of using OpenSTA for performance testing; however, I don't understand the graphs that are displayed for the results. Can anyone please tell me the best graph for determining the performance of a website application? There are currently several graphs, but I'm not sure what the results are showing. I want to be able to determine the performance for different numbers of virtual users.

Many thanks
Chris

This email and any attached files are confidential and copyright protected. If you are not the addressee, any dissemination of this communication is strictly prohibited. Unless otherwise expressly agreed in writing, nothing stated in this communication shall be legally binding. The ultimate parent company of the Atkins Group is WS Atkins plc. Registered in England No. 1885586. Registered Office Woodcote Grove, Ashley Road, Epsom, Surrey KT18 5BW. Consider the environment. Please don't print this e-mail unless you really need to.

-------------------------------------------------------------------------
Take Surveys. Earn Cash. Influence the Future of IT. Join SourceForge.net's Techsay panel and you'll get the chance to share your opinions on IT & business topics through brief surveys - and earn cash.
http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV
--
OpenSTA-users mailing list
Ope...@li...
Subscribe/Unsubscribe/Options: http://lists.sf.net/lists/listinfo/opensta-users
Posting Guidelines: http://portal.opensta.org/faq.php?topic=UserMailingList

This message has been scanned for viruses by MailControl - (see http://bluepages.wsatkins.co.uk/?6875772)
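Dan's export-and-summarize workflow (Timer values averaged into 1 or 2 minute intervals, which he does with Excel Pivot Tables) can also be sketched in plain Python. The tuple layout below is an assumption for illustration, not OpenSTA's exact export format:

```python
from collections import defaultdict

def bucket_timers(rows, bucket_secs=60):
    """Average timer values into fixed-width time buckets.

    rows: iterable of (elapsed_seconds, timer_value_ms) pairs, e.g.
    parsed from a CSV export of Timer results.
    Returns {bucket_start_seconds: mean_value}.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for elapsed, value in rows:
        bucket = int(elapsed // bucket_secs) * bucket_secs
        sums[bucket] += value
        counts[bucket] += 1
    return {b: sums[b] / counts[b] for b in sorted(sums)}

# Hypothetical sample rows: (elapsed seconds, response time in ms)
sample = [(5, 200), (30, 400), (70, 600), (110, 1000)]
print(bucket_timers(sample))  # {0: 300.0, 60: 800.0}
```

Plotting the resulting buckets (in Excel or anywhere else) gives the same smoothed response-time-over-time view Dan describes.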
From: Osoata, C. <Chr...@at...> - 2007-02-21 09:51:52
Hi there,

Thanks for your response. So if the virtual users are simultaneous (i.e. if I have 30 users all starting at the same time), the Timer Values vs Active Users graph does not give any information. Would the best graph in this case be Active Users vs Elapsed Time? Does this graph show the response time for users throughout the test run?

Many thanks

-----Original Message-----
From: ope...@li... On Behalf Of Dan Downing
Sent: 20 February 2007 14:26
To: 'OpenSTA users discussion and support'
Subject: Re: [OpenSTA-users] Reading the results
[snip]
From: Osoata, C. <Chr...@at...> - 2007-02-21 14:32:14
Many thanks Dan.

-----Original Message-----
From: ope...@li... On Behalf Of Dan Downing
Sent: 21 February 2007 14:27
To: 'OpenSTA users discussion and support'
Subject: Re: [OpenSTA-users] Reading the results
[snip]
From: Osoata, C. <Chr...@at...> - 2007-02-21 17:09:47
Many thanks Dan, that is really helpful.

Cheers
Chris

-----Original Message-----
From: ope...@li... On Behalf Of Dan Downing
Sent: 21 February 2007 14:59
To: 'OpenSTA users discussion and support'
Subject: Re: [OpenSTA-users] Reading the results
[snip]
From: Dan D. <ddo...@me...> - 2007-02-21 14:27:27
Chris wrote:
>Many thanks for your email, does the Active Users/Elapsed Time graph
>show the response time for the total number of users? I currently have 3
>different scripts, login, create new order and logout. I am running the
>test for 50 simultaneous users, will this graph show me the response time
>for the 50 users for the 3 different scripts? Will it also show the
>response time if I ramp up the users?

Chris,

That graph shows response time (y-axis) vs. Active Users (x-axis) *for whatever Timer you select*. So yes, it is showing "average response time" at various concurrent active user intervals for all scripts that are running.

Suggest you read more in the Help under Graphs.

...Dan
www.mentora.com
From: Dan D. <ddo...@me...> - 2007-02-21 14:59:33
Chris wrote:
>>Many thanks for your email, does the Active Users/Elapsed Time graph
>>show the response time for the total number of users? I currently have 3
>>different scripts, login, create new order and logout. I am running the
>>test for 50 simultaneous users, will this graph show me the response time
>>for the 50 users for the 3 different scripts? Will it also show the
>>response time if I ramp up the users?

Dan responded:
>>That graph shows response time (y-axis) vs. Active Users (x-axis) *for
>>whatever Timer you select*. So yes, it is showing "average response time"
>>at various concurrent active user intervals for all scripts that are running.
>>Suggest you read more in the Help under Graphs.

Chris responded:
>Thanks for your response, so if the virtual users are simultaneous (i.e.
>if I have 30 users all starting at the same time) the timer values vs
>active users graph does not give any information. Will the best graph in
>this case be Active Users vs Elapsed Time? Does this graph show the
>response time for users throughout the test run?

If all users are simultaneous, then the Active Users/Elapsed Time graph will show the average response time at 30 users (likely a straight line, as there is no variation in Active Users over the duration of the test). If you want to see how response time varies with load, I suggest you ramp your users at, say, 1 user every 10 seconds (6 users a minute). Then you should see response time at 6, 12, 18, 24, and 30 users over the first 5 minutes of the test.

Note: I have noticed that sometimes the graphs show "nothing" -- unclear why. This is why I prefer exporting the Timers to Excel and creating my own graphs. To see how, go here: http://tejasconsulting.com/blog/?p=88#comments

..Dan
www.mentora.com
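The ramp arithmetic Dan describes (one user every 10 seconds giving 6, 12, 18, 24 and 30 active users over the first five minutes) can be checked with a tiny helper. `active_users` is an illustrative simplification, not an OpenSTA API: it assumes no user ever finishes during the ramp.

```python
def active_users(elapsed_secs, ramp_interval=10, max_users=30):
    """Number of users started by `elapsed_secs` when one new user is
    added every `ramp_interval` seconds, capped at `max_users`."""
    return min(elapsed_secs // ramp_interval, max_users)

# Response-time samples can then be lined up against the load level:
for minute in (1, 2, 3, 4, 5):
    print(minute, active_users(minute * 60))  # 6, 12, 18, 24, 30
```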
From: Daniel S. <da...@Op...> - 2007-02-23 04:20:56
Dan Downing wrote:
> Note: I have noticed that sometimes the graphs show "nothing" --
> unclear why. This is why I prefer exporting the Timers to Excel
> and creating my own graphs.

So, there are still exportable results yet the graph area is blank? If this is the case then you ought to send me this result set (if reasonably sized and possible); then it might at least be possible to track down this potential problem, if there is any time and/or funding for OpenSTA development going spare.

Cheers
/dan
--
Daniel Sutcliffe <Da...@Op...>
OpenSTA part-time caretaker - http://OpenSTA.org/
From: Osoata, C. <Chr...@at...> - 2007-03-07 10:19:17
Hi All,

Can I have some help please? I have recorded 3 tests in OpenSTA: a Login, Create Order and Logout. However, if I have 35 users logging in at the same time, creating orders for 9 hours and then logging out, some of the order numbers are missing in the database, i.e. it creates orders 1, 2, 3, ... then misses out some orders, then partly creates orders 12, 13, 14. Some of the orders created are blank. I am not sure if the problem is with OpenSTA or my recorded scripts; any ideas why this is happening?

Many thanks
Chris
From: Olaf K. <ok...@ab...> - 2007-03-07 12:57:21
Osoata, Christabel schrieb:
> Can I have some help please, I have recorded 3 tests in OpenSTA, a
> Login, Create Order and Logout. However if I have 35 users logging in at
> the same time and creating orders for 9 hours then logging out, some of
> the order numbers are missing in the database, i.e. it creates orders
> 1, 2, 3, ... then misses out some orders, then partly creates orders 12,
> 13, 14. Some of the orders created are blank. I am not sure if the
> problem is with OpenSTA or my recorded scripts, any ideas why this is
> happening?

Christabel,

the order part of OpenSTA has been thoroughly audited with industry leaders in supply chain management, therefore I doubt that any bugs remain in the order processing part...

But to be honest: OpenSTA has no notion of your orders; it simply throws HTTP requests at your application. Hopefully they make sense. Whether they make sense depends completely on you, the script and the application. As stated many times on this list, OpenSTA is primarily a load generator, not a functional testing tool, though it can handle a bit of this.

If you have scripted - automatic - functional tests, try to execute them in parallel to see if your application has a problem. Otherwise you should modify the script to check for the correct answer given by the server. See my post from Feb 16 on this list, "Re: [OpenSTA-users] Transactions not submitted", for some more info.

Cheers,
Olaf
--
No part of this message may reproduce, store itself in a retrieval system, or transmit disease, in any form, without the permissiveness of the author.
From: Osoata, C. <Chr...@at...> - 2007-03-07 15:42:01
Many thanks for your response.

-----Original Message-----
From: ope...@li... On Behalf Of Bernie Velivis
Sent: 07 March 2007 13:12
To: OpenSTA users discussion and support
Subject: Re: [OpenSTA-users] Problem with missing Orders
[snip]
From: Bernie V. <Ber...@iP...> - 2007-03-07 13:11:41
Christabel,

> Can I have some help please, I have recorded 3 tests in OpenSTA, a
> Login, Create Order and Logout. However if I have 35 users logging in at
> the same time and creating orders for 9 hours then logging out, some of
> the order numbers are missing in the database...

I suspect that parts of the script are failing intermittently. The first thing I would do is add code to the script to verify the results of all primary get/put operations. There is a decent article at http://portal.opensta.org/index.php?name=News&file=article&sid=40 that gives an example of checking results. The bottom line is that you should not assume all is going as it did when you recorded, and runtime verification is, IMHO, absolutely required.

Another possible explanation is that the server is rejecting valid orders due to being too busy. Check application and database logs for errors (rollbacks, server too busy errors). Also have a look at 'HTTP Data List' under the test results to check for 4XX and 5XX response codes to narrow down where the problem is.

-Bernie (www.iperformax.com)
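Bernie's runtime-verification advice - check the result of every primary operation instead of assuming playback matches the recording - looks roughly like this. In a real OpenSTA script the equivalent check would be written in SCL (as in the linked article); the status codes, marker strings and responses here are invented:

```python
def verify_response(status, body, expect_text, log):
    """Flag a request as failed unless it looks like a real success.

    A 200 carrying an error page in its body is still a failure --
    exactly the case a bare HTTP status check misses.
    """
    if status >= 400:
        log.append(f"HTTP error {status}")
        return False
    if expect_text not in body:
        log.append(f"missing expected text: {expect_text!r}")
        return False
    return True

# Hypothetical responses from a Create Order step:
failures = []
verify_response(200, "<h1>Order 101 created</h1>", "created", failures)
verify_response(200, "<h1>Session expired</h1>", "created", failures)
verify_response(503, "", "created", failures)
print(failures)  # two of the three requests failed
```

The second case is the one that silently produces blank or missing orders: the server returned 200, but the page was not the order confirmation.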
From: Osoata, C. <Chr...@at...> - 2007-03-07 17:46:48
Hi there,

I am trying to use OpenSTA to do performance testing. Currently I have 3 scripts: Login, Create Orders and Logout. I am running the test for 10 users simultaneously and I am getting timeout errors; the aim is to identify the number of orders created in an hour by 10 users. Any ideas why I am getting these timeout errors?

Many Thanks
Chris
From: Olaf K. <ok...@ab...> - 2007-03-07 19:06:37
Osoata, Christabel schrieb:
> I am trying to use OpenSTA to do performance testing. Currently I have 3
> scripts, Login, Create Orders and Logout. I am currently running the
> test for 10 users simultaneously and I am getting timeout errors; the
> aim is to identify the number of orders created in an hour by 10 users.
> Any ideas why I am getting these timeout errors?

Hi Chris,

the first thing that comes to my mind is: application overload? I can think of more, but you do not provide enough information: When do the timeouts occur? With 1, 3, 8 VUs? Is the script running correctly (e.g. seeing the correct results)? Please understand that right now we know nothing more than "it does not work".

Plus you might want to look at http://portal.opensta.org/faq.php?topic=PlaybackRequestTimeout

Cheers,
Olaf
From: Danny R. F. <fa...@te...> - 2007-03-07 20:15:10
Osoata, Christabel wrote:
> Any ideas why I am getting these timeout errors?

Congratulations, it looks like you've found what you were looking for (a bug or performance bottleneck), and it's time to start debugging. What I like to do is try to reproduce the same problem in a browser while OpenSTA is running. That makes the problem look much more real to your stakeholders, and helps to rule out an OpenSTA bug as the cause of the problem.

One other thing I do is edit OpenSTA's config file to bump up its timeout, so I can continue to track response times as they get longer.
--
Danny R. Faught
Tejas Software Consulting
http://tejasconsulting.com/
From: Michael D. <md...@in...> - 2007-03-07 20:23:56
>Hi there,
>I am trying to use OpenSTA to do performance testing. Currently I have 3
>scripts, Login, Create Orders and Logout. I am currently running the
>test for 10 users simultaneously and I am getting timeout errors; the
>aim is to identify the number of orders created in an hour by 10 users.
>Any ideas why I am getting these timeout errors?
>Many Thanks
>Chris

Let's assume you have recorded and modeled your scripts correctly. Now, you are testing your server performance (throughput) by identifying the number of orders created in an hour by 10 users, right? Aren't you then looking for the point at which the load causes your server to start timing out requests? And bingo, you have reached that point.
--
From: Bernie V. <Ber...@iP...> - 2007-03-07 20:38:36
>>aim is to identify the number of orders created in an hour by 10 users.
>>Any ideas why I am getting these timeout errors?

<Michael wrote>
> Let's assume you have recorded and modeled your scripts correctly.
>
> Now, you are testing your server performance (throughput) by identifying
> the number of orders created in an hour by 10 users, right?
>
> Aren't you then looking for the point at which the load causes your
> server to start timing out requests? And bingo, you have reached that point.

Michael,

I understand what you are saying in principle and I don't disagree. When looking for bottlenecks, a useful technique is to compare throughput vs. load and see when it becomes non-linear. Frequent timeouts will always lead to non-linear throughput vs. load (users) graphs.

If I understand Chris' email, Chris has found the point where OpenSTA times out, not the application. I have no idea how long Chris' single-user response times are. If they are just a few seconds, then given the default 1 minute timeout (actually, the default value is 1 minute, but OpenSTA can take anywhere up to 2X the timeout value to actually deliver the timeout error to scripts), the server(s) have reached a significant bottleneck. If single-user response times started at close to 1 minute, then a response time of 1.5 minutes would not be that unusual or indicate a bottleneck under load.

-Bernie (www.iPerformax.com)
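Bernie's compare-throughput-vs-load technique can be mechanized by projecting the single-user throughput linearly and flagging the first load level that falls well short of the projection. The figures below are invented for illustration; `knee_point` and its 80% tolerance are assumptions, not anything OpenSTA provides:

```python
def knee_point(loads, throughputs, tolerance=0.8):
    """Return the first load level where measured throughput drops below
    `tolerance` times the linear projection from the lowest load,
    or None if throughput stays roughly linear throughout."""
    per_user = throughputs[0] / loads[0]  # baseline orders/hour per user
    for users, measured in zip(loads, throughputs):
        if measured < tolerance * per_user * users:
            return users
    return None

# Hypothetical measurements: orders/hour at each user count.
loads = [1, 5, 10, 20, 40]
orders_per_hour = [12, 60, 118, 190, 205]
print(knee_point(loads, orders_per_hour))  # 20
```

At 20 users, 190 orders/hour is below 80% of the linear projection (240), so that is where this imaginary system stops scaling.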
From: Osoata, C. <Chr...@at...> - 2007-03-08 12:38:45
Hi Michael, Bernie, Danny and Olaf,

<Bernie wrote>
> If I understand Chris' email, Chris has found the point where OpenSTA
> times out, not the application.
[snip]

<Chris writes>
Hi there,

Thanks for your response. So does this mean that running the test for 10 users simultaneously should not cause timeout errors? The developers think it may be because it is an unrealistic test, i.e. in the real world I don't think we will have 10 users running the same test and performing the same action at the same time for an hour. I think it is probably best to ramp the test up so that 1 user is added every 30 seconds with a 10 second delay until I get to the maximum number of users; do you think this is a more realistic test?

Although running the same test now with 10 simultaneous users, even though it displays timeout errors, it still creates records in the database.

Many thanks
Chris
From: Olaf K. <ok...@ab...> - 2007-03-08 13:31:16
|
Osoata, Christabel schrieb:
> Although running the same test now with 10 simultaneous users even
> though it displays timeout errors it still creates records in the
> database.

Sure, that's the nature of http: If you fire up requests for

http://myserver/performLengthyOperation?count=1
http://myserver/performLengthyOperation?count=2
http://myserver/performLengthyOperation?count=3
http://myserver/performLengthyOperation?count=4
http://myserver/performLengthyOperation?count=5

every second, the appropriate action will be started (and usually finished) 5 times, regardless of how long you wait for each result (e.g. regardless of when OpenSTA or your browser times out). The server will likely only notice that you have gone away when it starts to send back a response. If your lengthy operation doesn't send anything back to the client until it finishes, it might recognize that the client has gone away only after having written something to the database.

Cheers,
Olaf
--
No part of this message may reproduce, store itself in a retrieval system, or transmit disease, in any form, without the permissiveness of the author.
|
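[Editorial aside: Olaf's point, that the server completes the work even when the client gives up waiting, can be demonstrated with a small self-contained Python sketch. The toy server below stands in for the application under test, and the `/placeOrder` path is purely illustrative.]

```python
import http.server
import socket
import threading
import time

completed = []  # records requests the server finished processing

class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Simulate a lengthy server-side operation (e.g. writing an order
        # to the database) that outlives the client's patience.
        time.sleep(0.5)
        completed.append(self.path)          # the "database write" happens...
        try:
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"done")
        except BrokenPipeError:
            pass                             # ...even if the client is gone

    def log_message(self, *args):
        pass                                 # keep the demo quiet

server = http.server.HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client whose timeout is far shorter than the server's processing time,
# analogous to OpenSTA's socket timeout firing mid-request.
timed_out = False
try:
    conn = socket.create_connection(server.server_address, timeout=0.1)
    conn.sendall(b"GET /placeOrder HTTP/1.0\r\n\r\n")
    conn.recv(1024)                          # times out waiting for a reply
except socket.timeout:
    timed_out = True

time.sleep(1.0)                              # give the server time to finish
server.shutdown()
print(timed_out, completed)
```

The client reports a timeout, yet `completed` still contains the request: the "order" was placed even though nobody waited for the response. This is the same effect Chris is seeing as OpenSTA timeout errors alongside new records in the database.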
|
From: Osoata, C. <Chr...@at...> - 2007-03-08 16:27:32
|
<Olaf wrote>
> > Although running the same test now with 10 simultaneous users even
> > though it displays timeout errors it still creates records in the
> > database.
>
> Sure, that's the nature of http: If you fire up requests for
>
> http://myserver/performLengthyOperation?count=1
> http://myserver/performLengthyOperation?count=2
> http://myserver/performLengthyOperation?count=3
> http://myserver/performLengthyOperation?count=4
> http://myserver/performLengthyOperation?count=5
>
> every second, the appropriate action will be started (and usually
> finished) 5 times, regardless of how long you wait for each result
> (e.g. regardless of when OpenSTA or your browser times out). The server
> will likely notice that you have gone away when it starts to send back
> a response. If your lengthy operation doesn't send anything back to the
> client until it finishes, it might recognize the fact that the client
> has gone away after having written something to the database.
>
> Cheers,
> Olaf

<Chris writes>
Hi Olaf,
I'm a bit confused now actually, does this mean that it's OpenSTA that is timing out and not the application under test? In the error log there are loads of 'The wait operation timed out' error messages and 'Timeout generated for Socket 0x360' error messages.

Thanks
Chris
|
|
From: Olaf K. <ok...@ab...> - 2007-03-09 08:46:13
|
Osoata, Christabel schrieb:
> Hi Olaf,
> I'm a bit confused now actually, does this mean that it's OpenSTA that
> is timing out and not the application under test? In the error log
> there are loads of 'The wait operation timed out' error messages and
> 'Timeout generated for Socket 0x360' error messages.

I've more or less given an example about the mechanics of http. Maybe I've misled you by phrasing the example this way. On the other hand, you are right: if OpenSTA times out waiting for a response, that does not mean that the action caused by the request does not happen; e.g. an order might be placed even though the client chooses not to wait any longer for the result.

Compare this with ordering at a restaurant: place an order and wait. If you wait long enough and then leave the restaurant, you can't tell if your meal will be delivered when you're gone - maybe it will, maybe it won't. The same applies to http requests/responses.

Have you tried Bernie's reference to http://www.google.com/search?q=opensta+timeout and read a few of the articles?

Cheers,
Olaf
--
No part of this message may reproduce, store itself in a retrieval system, or transmit disease, in any form, without the permissiveness of the author.
|
|
From: Bernie V. <Ber...@iP...> - 2007-03-08 17:48:22
|
> <Chris writes>
>
> Hi there,
> Thanks for your response, so does this mean that running the test for 10
> users simultaneously should not cause timeout errors? The developers
> think it may be because it is an unrealistic test, i.e. in the real
> world I don't think we will have 10 users running the same test and
> performing the same action at the same time for an hour. I think it is
> probably best to ramp the test up such that 1 user is added every 30
> seconds with a 10-second delay until I get to the maximum number of
> users. Do you think this is a more realistic test?
>
> Although running the same test now with 10 simultaneous users, even
> though it displays timeout errors, it still creates records in the
> database.

Chris,

We've certainly left the realm of OpenSTA-related questions and moved into a discussion of performance testing. It's a slow day, I'll bite.

There are three major areas of performance testing. Different people use different terminology, so you'll have to put up with mine, understanding that it might not jibe completely with what others say. Still, it's the goal of the testing that is important, not what you call it.

If your goal is to do CAPACITY PLANNING, then you should create a "realistic" workload: a mix of the most popular transactions plus those deemed critical, presented to the server(s) under test in a realistic fashion. This is easy to say, and I've seen 3-day seminars and countless books dedicated to how to do this "correctly". For the most part this boils down to picking a manageable (in terms of time to develop vs. budget, goals, etc.) set of transactions to emulate, determining the % probability of executing each transaction and the overall arrival rate, and also the "success criteria" for the transactions (i.e. response time limits, throughput goals, etc.). Collectively, I'll refer to these attributes as the "workload definition".

One way to implement a given workload definition is to create a master script which is assigned to each VU, have it generate random numbers, and then call other scripts (that model the workload transactions) based on a table of probabilities. The scripts should be modeled with think times consistent with the way your users will interact with the system. This varies greatly from one app to another and, unless you are mining logs from an application already in use, is somewhat subjective. The best advice I can give is: be conservative, but not so much so that the sum of all your conservative decisions is pathological.

Once you have a workload with pacing (think times) you are comfortable with, increase the number of users and monitor how response times, server resource utilization (CPU, IO rate, network, and memory), and throughput (number of tasks completed system-wide) vary with the increased load. You might set up your test so you ramp up to a specific number of users, then let them run for a while, and repeat as necessary. This way, you capture the behavior of the system at various steady states. The length of time to allow a particular number of users to run varies with a number of factors, including how different the transactions are from one another in terms of resource utilization and response time. If you can't get repeatable results, your steady-state interval might be too small. I've seen intervals as small as 10 minutes work, and other workloads that require an interval of hours to be useful.

That's a rough outline of one approach to capacity planning, which in summary is an attempt to load up the system with VUs in a way that a VU is indistinguishable from a "real user". Again, much easier said than done. Pick the wrong workload, and your results might be worthless. The end game here is to increase load until response times become excessive (whatever that means to you, but it needs to be defined... again, tons of material to read about this), at which point you have found a limit to system capacity. This limit will be due to either a hardware or software bottleneck. Now, if you are on a tuning expedition, analyze the performance metrics captured and either do some tuning, code optimization, or add some hardware resources, and repeat as necessary until you either meet throughput goals, find the limits of the architecture, or run out of time (which happens more often than most performance engineers would like).

The same scripts can be used for SOAK TESTING, where you load up the system at close to its maximum capacity and let it run for hours, days, etc. This is a great way to spot stability problems that only occur after the system has been running a long time (memory leaks are a good example of things you will find).

Run a long test and start failing components (servers, routers, etc.) to see how response times are affected and how long the system takes to return to a steady state, and you are on your way towards FAILOVER TESTING. You can find reams of material to read about failover testing and high availability as well.

If your goal is to determine where or how the system will fail, then you are doing STRESS TESTING. One way to do this is to comment out the think times and increase VUs until something (hopefully not your emulator!) breaks. This is just one form of stress testing, a valuable aspect of performance testing, but not the same as capacity planning. How the VUs compare to "real users" may be irrelevant, as you are trying to determine how the system behaves when pushed past its limits.

So I guess only you can answer your question. Decide what your goals are (capacity planning, stability testing, failover testing, or stress testing) and then see if your script and test behavior is aligned with the goal(s).

-Bernie
www.iPerformax.com
|
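[Editorial aside: Bernie's "master script that rolls random numbers against a table of probabilities" can be sketched as follows. In OpenSTA itself this would be SCL calling child scripts; the Python below only illustrates the selection logic, and the transaction names, probabilities, and think times are hypothetical.]

```python
import random

# Hypothetical workload definition: transaction -> (probability, think time in s).
# A real mix would come from server logs or business input, as Bernie notes.
WORKLOAD = {
    "login":        (0.10, 5),
    "browse":       (0.55, 8),
    "create_order": (0.25, 12),
    "logout":       (0.10, 3),
}

def pick_transaction(rng=random):
    # Roll a random number against the probability table, as the master
    # script assigned to each VU would do before calling a child script.
    names = list(WORKLOAD)
    weights = [WORKLOAD[n][0] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

def vu_session(iterations, rng=random):
    # One VU's session: the sequence of (transaction, think_time) it would run.
    return [(name, WORKLOAD[name][1])
            for name in (pick_transaction(rng) for _ in range(iterations))]

# Over many draws the realized mix should approximate the table.
draws = [pick_transaction() for _ in range(20000)]
browse_share = draws.count("browse") / len(draws)
print(f"browse share: {browse_share:.2f}")
```

For stress testing in Bernie's sense, you would drop the think-time column and drive `pick_transaction` as fast as the injector allows; for capacity testing you keep the pacing and ramp the VU count.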
|
From: Olaf K. <ok...@ab...> - 2007-03-08 18:05:56
|
Bernie Velivis schrieb:
> We've certainly left the realm of OpenSTA related questions and moved into a
> discussion of performance testing. It's a slow day, I'll bite.

Hi,

that's been a great article about different kinds of tests. In order to draw attention to it and make it easily linkable, would you mind submitting it to http://portal.opensta.org/? This would also bring back some action (read: life) that has been missing there for quite some time...

The content might somehow end up in the FAQ - but I believe there's worth in keeping all those testing strategies together and comparing them side by side. It certainly goes directly into my toolbox for explaining these techniques with only a few, but well-chosen, words.

Cheers,
Olaf
--
No part of this message may reproduce, store itself in a retrieval system, or transmit disease, in any form, without the permissiveness of the author.
|
|
From: Bernie V. <Ber...@iP...> - 2007-03-08 18:38:19
|
Thanks for the compliment Olaf. Coming from you, that means a lot.

I think that in some cases I have used far too few words, like referring to the first type of testing as "capacity planning", which is much more than testing. More accurately, I should have referred to it as capacity testing, which can be part of capacity planning. There are other cases where capacity planning is just monitoring load and extrapolating what additional hardware would be required to stay ahead of increased demand, using complex modeling tools, spreadsheets, and Ouija boards.

I'll try and clean it up so that, if taken literally, it does more good than harm, and then submit it under testing strategies as you requested.

Cheers,
-Bernie
www.iPerformax.com

----- Original Message -----
From: "Olaf Kock" <ok...@ab...>
To: "OpenSTA users discussion and support" <ope...@li...>
Sent: Thursday, March 08, 2007 1:05 PM
Subject: Re: [OpenSTA-users] Performance testing with Open STA

> Bernie Velivis schrieb:
>> We've certainly left the realm of OpenSTA related questions and moved
>> into a discussion of performance testing. It's a slow day, I'll bite.
>
> Hi,
>
> that's been a great article about different kinds of tests.
> In order to draw attention to it and make it easily linkable would you
> mind submitting it to http://portal.opensta.org/? This also would bring
> back some action (read: life) that has been missing there for quite some
> time...
>
> The content might somehow end up in the FAQ - but I believe there's
> worth in keeping all those testing strategies together and comparing them
> side by side. It certainly goes directly into my toolbox for explaining
> these techniques with only a few, but well-chosen, words.
>
> Cheers,
> Olaf
|
|
From: Olaf K. <ok...@ab...> - 2007-03-09 12:21:20
|
Bernie Velivis schrieb:
> Thanks for the compliment Olaf. Coming from you, that means a lot.

<blush>Now it's my turn to say thanks for the compliment</blush>

> I think that in some cases I have used far too few words, like referring to
> the first type of testing as "capacity planning" which is much more than
> testing. More accurately, I should have referred to it as capacity testing,
> which can be part of capacity planning. There are other cases where capacity
> planning is just monitoring load and extrapolating what additional hardware
> would be required to stay ahead of increased demand using complex modeling
> tools, spreadsheets, and Ouija boards.

Well, I read it as a starting point to make the reader think about what s/he wants to achieve - not as an encyclopedic reference... Even if I did, I believe I'd still be happy with it.

I also like Michael's point about "don't ask me how many concurrent users the system can support without defining what a concurrent user is." Certainly this is easily understood as an insult (by the one who asked the question), but it summarizes what you dissected as different testing purposes. As this thread carries the generic name "Performance testing with OpenSTA", you have provided a foundation vocabulary to refer to in later posts on this list. Hopefully, with this vocabulary, future questions will include more information about their circumstances.

> I'll try and clean it up so that, if taken literally, it does more
> good than harm, and then submit it under testing strategies as
> you requested.

Great - thanks.
Olaf
--
No part of this message may reproduce, store itself in a retrieval system, or transmit disease, in any form, without the permissiveness of the author.
|
|
From: Danny R. F. <fa...@te...> - 2007-03-08 18:10:41
|
Osoata, Christabel wrote:
> If I understand Chris' email, Chris has found the point where OpenSTA
> times out, not the application.

I'm not sure what this means. Do you mean to distinguish between OpenSTA aborting a connection because it times out waiting for a response, and the web application detecting a timeout itself and returning some sort of error in the text of a page (with a 200 response)?

--
Danny R. Faught
Tejas Software Consulting
http://tejasconsulting.com/
|