opensta-users Mailing List for OpenSTA (Page 4)
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2000 | | | | | | | | | | 8 | 34 | 59 |
| 2001 | 49 | 66 | 60 | 93 | 55 | 81 | 96 | 79 | 75 | 141 | 73 | 77 |
| 2002 | 123 | 95 | 50 | 66 | 88 | 120 | 176 | 101 | 95 | 86 | 97 | 62 |
| 2003 | 114 | 179 | 152 | 238 | 229 | 187 | 158 | 110 | 142 | 69 | 88 | 87 |
| 2004 | 66 | 99 | 94 | 67 | 66 | 116 | 39 | 99 | 29 | 143 | 100 | 102 |
| 2005 | 31 | 30 | 88 | 214 | 151 | 155 | 44 | 92 | 61 | 93 | 73 | 115 |
| 2006 | 113 | 110 | 49 | 89 | 34 | 43 | 76 | 48 | 41 | 24 | 31 | 19 |
| 2007 | 23 | 50 | 59 | 12 | 14 | 18 | 36 | 20 | 6 | 9 | 5 | 11 |
| 2008 | 6 | 7 | 4 | 1 | | | 9 | 7 | 2 | 2 | | |
|
From: Parvinder G. <Par...@ka...> - 2007-08-15 13:54:48
|
Hello, Is it possible to select data randomly from a file? It doesn't appear to be the case out of the box; if you know of a workaround, please let me know. I created a variable whose source type is file and marked it as "Random", but when I try to access the file using the 'generate' command I get an "invalid operand type" error. Thanks. Parvinder Ghotra QA Automation Engineer |
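A likely explanation, going by the commands used elsewhere in this thread (a hedged sketch, not verified against any specific OpenSTA release): GENERATE only produces values for RANDOM range variables, while list- and file-backed variables are stepped with NEXT, which would account for the "invalid operand type" error here. The file-sourced declaration itself is normally set up through the Modeler's variable dialog, so only the in-script usage is sketched below; the names and values are placeholders.

Definitions
! Hedged sketch -- names and values are placeholders
Integer think_time (5000-20000), RANDOM
Integer from_list (101, 202, 303, 404), SCRIPT

Code
generate think_time    ! GENERATE: new value for a RANDOM range variable
log think_time
next from_list         ! NEXT: step through a list- or file-backed variable
log from_list

Whether NEXT honours a file variable's "Random" ordering in 1.4.x is not confirmed in this thread, so treat that part as an open question.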
|
From: Bernie V. <Ber...@iP...> - 2007-08-15 11:08:37
|
Hi Stephen,
I think I understand what is happening, but I'm going to suggest a workaround
instead of guessing or putting in the time to nail this down.
Workaround: if you don't like the random numbers generated by OpenSTA (and,
depending on the version you have, there are some known bugs, I think), you
could always generate a list outside OpenSTA and import it...
integer userthinktime (19624, 15201, 11692, 10762, 10927, 8732, 15496, &
    5387, 7544, 19403, 15315, 19253, 12551 ), script
! This list was generated in Excel using int(5000+rand()*15000)
! This list can be much longer
! Note the scope is SCRIPT... think about it.
next userthinktime
log userthinktime
-Bernie
Need rapid response OpenSTA support and bug fixes?
Learn more at http://iperformax.com/services.html
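For readers who want to see where the pieces go, here is a minimal, untested sketch assembled from the fragments in this thread; the Environment/Definitions/Code layout follows the example script quoted further down this page, the list values are placeholders, and the assumption that WAIT accepts an integer variable directly should be verified against your OpenSTA version.

!Browser:IE5
Environment
Description "Pre-generated think times (sketch)"
Mode HTTP
Wait UNIT MILLISECONDS

Definitions
! Placeholder values -- generate your own list outside OpenSTA
Integer userthinktime (19624, 15201, 11692, 10762, 10927), SCRIPT

Code
! Step to the next pre-generated value and use it as a think time
next userthinktime
log userthinktime
WAIT userthinktime

Exit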
----- Original Message -----
From: "Stephen Kay" <Ste...@on...>
To: <ope...@li...>
Sent: Wednesday, August 15, 2007 4:58 AM
Subject: [OpenSTA-users] Why isn't my Random variable very random?
>I can't quite get the hang of random variables and I'm sure I'm missing
> something, but I've looked at the help, sourceforge and tried some tests,
> but can't work out what I'm doing wrong.
>
> To highlight the problem I created a tiny script.
>
> I have this line in my definition section of script:
>
> Integer userthinktime (5000 - 20000), RANDOM
>
> Then in the code section of my script I have these lines:
>
> generate userthinktime
> log userthinktime
>
> If I run this script in the modeler it logs a random value each time and
> that's good. However, when I run this as a test for 10 seconds with 10
> users running the same script I get this sort of thing in the logs (i.e.
> it
> increments by a few milliseconds each time):
> LOG: 13373,
> LOG: 13376,
> LOG: 13379,
> LOG: 13382,
> LOG: 13386,
> LOG: 13389,
> LOG: 13395,
> LOG: 13399,
> LOG: 13405,
> LOG: 13409,
> LOG: 13412,
> LOG: 13425,
> LOG: 13428,
> LOG: 13431,
>
>
> What I want is a genuinely random variable picked up by each virtual user
> running the script, but I can't work out how to do it. I'll keep trying
> but if anyone knows how to do this I'd appreciate a nudge in the right
> direction.
>
> Thanks, Steve
|
|
From: Stephen K. <Ste...@on...> - 2007-08-15 09:11:06
|
Parag, Have you seen this: http://portal.opensta.org/modules.php?op=modload&name=phpWiki&file=index&pagename=HtmlParserErrors It might be the cause of your problem. Cheers, steve |
|
From: Stephen K. <Ste...@on...> - 2007-08-15 08:57:31
|
I can't quite get the hang of random variables and I'm sure I'm missing
something, but I've looked at the help, sourceforge and tried some tests,
but can't work out what I'm doing wrong.
To highlight the problem I created a tiny script.
I have this line in my definition section of script:
Integer userthinktime (5000 - 20000), RANDOM
Then in the code section of my script I have these lines:
generate userthinktime
log userthinktime
If I run this script in the modeler it logs a random value each time and
that's good. However, when I run this as a test for 10 seconds with 10
users running the same script I get this sort of thing in the logs (i.e. it
increments by a few milliseconds each time):
LOG: 13373,
LOG: 13376,
LOG: 13379,
LOG: 13382,
LOG: 13386,
LOG: 13389,
LOG: 13395,
LOG: 13399,
LOG: 13405,
LOG: 13409,
LOG: 13412,
LOG: 13425,
LOG: 13428,
LOG: 13431,
What I want is a genuinely random variable picked up by each virtual user
running the script, but I can't work out how to do it. I'll keep trying
but if anyone knows how to do this I'd appreciate a nudge in the right
direction.
Thanks, Steve
|
|
From: Stephen K. <Ste...@on...> - 2007-08-14 15:31:28
|
Thanks for the confirmation Bernie, I've raised the bug on sourceforge. Cheers, Steve |
|
From: Bernie V. <Ber...@iP...> - 2007-08-14 12:50:25
|
Hi Stephen,
It's a bug. I tried a simple subroutine with 4 string args. As soon as one (counting from left to right) is set to "", then all remaining arguments are set to "" or 0 if numeric. Looks like the code that parses arguments passed to subroutines quits when it sees a null. I would definitely report it on sourceforge.
If you need an urgent resolution, please read more at http://www.iperformax.com/services.html and http://www.iperformax.com/downloads/OpenSTA%20Support%20Subscriptions%20Brochure.pdf
All the best,
Bernie Velivis
www.iPerformax.com
----- Original Message -----
From: "Stephen Kay" <Ste...@on...>
To: <ope...@li...>
Sent: Tuesday, August 14, 2007 7:45 AM
Subject: [OpenSTA-users] empty strings passed as parameters cause all parameters to be ignored. |
|
From: Stephen K. <Ste...@on...> - 2007-08-14 11:43:51
|
Hello all,
With the Modeler that ships with OpenSTA 1.4.3.20, if you pass an empty
string, i.e. "" as a parameter to a subroutine, all parameters are ignored.
There is an example script below.
The first time CHECKRESPONSE is called with all 3 parameters populated, it
works fine. The second time CHECKRESPONSE is called with the first
parameter set to "", it ignores all the parameters and uses the parameters
passed the first time.
This seems like a bug to me (and has given us quite a headache); has anyone
else seen this? Should I raise a bug on sourceforge?
Thanks,
Steve
============ EXAMPLE SCRIPT ==================
!Browser:IE5
!Date : 14/08/2007
Environment
Description ""
Mode HTTP
Wait UNIT MILLISECONDS
Definitions
! Standard Defines
Include "RESPONSE_CODES.INC"
Include "GLOBAL_VARIABLES.INC"
Include "PHASE8_FUNCTION_VARIABLES.INC"
CHARACTER*512 USER_AGENT
Integer USE_PAGE_TIMERS
CHARACTER*256 MESSAGE
CHARACTER*256 p1
CHARACTER*256 p2
CHARACTER*256 p3
Timer T_QQQQ
CONSTANT DEFAULT_HEADERS = ""
Code
!Read in the default browser user agent field
Entry[USER_AGENT,USE_PAGE_TIMERS]
Start Timer T_QQQQ
CALL CHECKRESPONSE["1param1","1param2","1param3"]
CALL CHECKRESPONSE["","2param2","2param3"]
SYNCHRONIZE REQUESTS
End Timer T_QQQQ
SUBROUTINE CHECKRESPONSE[p1,p2,p3]
Log "Parameter 1:", p1
Log "Parameter 2:", p2
Log "Parameter 3:", p3
END SUBROUTINE
Exit
ERR_LABEL:
If (MESSAGE <> "") Then
Report MESSAGE
Endif
Exit
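While the bug stands, one possible workaround is to pass a sentinel string instead of "" and map it back inside the subroutine. The following is an untested sketch: the sentinel value is arbitrary, and the Set assignment is an assumption rather than something taken from the script above.

! Hedged workaround sketch -- "<EMPTY>" is an arbitrary placeholder
CALL CHECKRESPONSE["<EMPTY>","2param2","2param3"]

SUBROUTINE CHECKRESPONSE[p1,p2,p3]
If (p1 = "<EMPTY>") Then
Set p1 = ""
Endif
Log "Parameter 1:", p1
Log "Parameter 2:", p2
Log "Parameter 3:", p3
END SUBROUTINE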
|
|
From: Bernie V. <Ber...@iP...> - 2007-08-13 13:21:37
|
New England OpenSTA professionals, I am planning an informal backyard BBQ to meet, greet, and discuss OpenSTA. Dan has asked me to host the BBQ since I am further south (I live in Hollis NH), and I thought I might be able to entice a few other folks to join us with this email. I'm thinking either the 16th or 22nd of September, which are weekend dates, but would be more than happy to do this on a weekday if that's what the majority wants. I have a house full of relatives until September 15, so it will have to be after that. If interested, please drop me an email and let me know what dates in the 2nd half of September would work for you. If I don't know you, please tell me a little about yourself, where you work, and your experience with OpenSTA or the performance community. All the best, Bernie Velivis |
|
From: Bernie V. <Ber...@iP...> - 2007-08-10 10:47:02
|
OpenSTA in general is beginning to show its age, and I am convinced that without a major initiative for bug fixes and new features it will soon fade into history. A good example is the way SSL recording is done. The gateway's method of recording SSL is error prone, unreliable, and does not work with the latest versions of Mozilla and IE (version 7). This will be the beginning of the end of OpenSTA unless corrected. This is just the first in a long list of "must fix" and "should fix" items in OpenSTA. I am convinced that a commercial support offering is the only way OpenSTA will survive. I am also convinced that, with proper support and training, any company can be successful using OpenSTA as a primary load testing tool. Read more at http://portal.opensta.org/index.php?name=News&file=article&sid=50 Bernie Velivis www.iPerformax.com/services.html |
|
From: Mad S. F. <net...@sp...> - 2007-08-09 22:27:51
|
Hi there. I'm running a small batch (50 vus) on two machines. When the test finishes, the status on both task groups changes to "(red ball) completed". At this point, I cannot stop the test, and no results are posted; attempting to kill the test results in Commander locking up. Any clues for me? Thanks, Max |
|
From: Sivasankari M. <Siv...@so...> - 2007-08-09 16:00:55
|
I will be out of the office starting 08/03/2007 and will not return until 09/10/2007. I will respond to your message when I return. |
|
From: Bernie V. <Ber...@iP...> - 2007-08-09 11:40:58
|
Sachin,
Step 1: Read the FAQ and Documentation on OpenSTA.org.
Step 2: Google "opensta data parameterization" and follow one of the many links, such as http://www.geocities.com/ranjitshewale/writings/para_in_opensta.pdf
Step 3: If you still need help, buy training or support from one of the many vendors who advertise here, such as www.iPerformax.com/services.html
-Bernie
----- Original Message -----
From: "sachin mathew" <sac...@ya...>
To: <ope...@li...>
Sent: Thursday, August 09, 2007 12:56 AM
Subject: [OpenSTA-users] Data Parameterization in open sta |
|
From: sachin m. <sac...@ya...> - 2007-08-09 04:56:15
|
Hi,
I am using OpenSTA for testing my project's
performance.
For testing the performance, I need to pump a set of
data into the grid. So what should I do for data
parameterization in OpenSTA? Please tell me the
procedure for data parameterization.
It would be a great help from your end.
Expecting a solution.
Thanks and Regards
Sachin Mathew
mob:91-9880935733
|
|
From: parag j. <par...@gm...> - 2007-08-01 05:50:29
|
Hi, I have used frv file for parameterization which is stored in \Data folder. It works perfectly fine for 2 users but not for more than 2 users. But i am not able to execute test for more than 2 users, it give following error. Can anybody help me to solve this problem? TestManager Initialized for Test USERS_10 Task Group 1 started Call Subroutine: REPLACEBYHEX(674)....// This subroutine is used to process Viewstate Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) LOG: My agent Name from File is: 19688---//Parameterization using .fvr file LOG: My agent Name from File is: 115173--//stored in /Data folder LOG: My agent Name from File is: 32878 LOG: My agent Name from File is: 128424 LOG: My agent Name from File is: 205308 LOG: My agent Name from File is: 213553 LOG: My agent Name from File is: 217971 LOG: My agent Name from File is: 317257 LOG: My agent Name from File is: 289377 LOG: My agent Name from File is: 216661 Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (4) E* TScript::run: ERROR in TOF execution; resuming... Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (3) E* TScript::run: ERROR in TOF execution; resuming... Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (3) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (5) E* TScript::run: ERROR in TOF execution; resuming... Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (5) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (7) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (5) E* TScript::run: ERROR in TOF execution; resuming... E* TScript::run: ERROR in TOF execution; resuming... Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (7) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (7) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (6) E* TScript::run: ERROR in TOF execution; resuming... Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (6) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (6) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (7) E* TScript::run: ERROR in TOF execution; resuming... Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (7) E* TScript::run: ERROR in TOF execution; resuming... 
E* HTTPRESPONSE (HTML parser): unresolved variable for connection (7) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* TScript::run: ERROR in TOF execution; resuming... Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (7) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (9) E* TScript::run: ERROR in TOF execution; resuming... Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (9) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (9) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (10) E* TScript::run: ERROR in TOF execution; resuming... Call Subroutine: REPLACEBYHEX(674) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (10) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (10) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (9) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (8) E* TScript::run: ERROR in TOF execution; resuming... E* TScript::run: ERROR in TOF execution; resuming... 
E* HTTPRESPONSE (HTML parser): unresolved variable for connection (10) E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (9) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (9) E* TScript::run: ERROR in TOF execution; resuming... E* TScript::run: ERROR in TOF execution; resuming... E* HTTPRESPONSE (HTML parser): unresolved variable for connection (10) E* HTTPRESPONSE (HTML parser): unresolved variable for connection (10) E* TScript::run: ERROR in TOF execution; resuming... E* TScript::run: ERROR in TOF execution; resuming... Task Group 1 completed Can anybody help me? Thanks Parag Jadhav |
|
From: Bernie V. <Ber...@iP...> - 2007-07-31 09:46:37
|
Omkar,
This is a known bug in OpenSTA, to be fixed in some future release. If your need is urgent, and you have some budget for support, commercial support is available that may be able to solve your problem. See www.iPerformax.com/services for more details.
-Bernie
----- Original Message -----
From: "omkar kesa" <omk...@gm...>
To: <ope...@li...>
Sent: Tuesday, July 31, 2007 4:52 AM
Subject: [OpenSTA-users] Static Cookies, set up file for opensta 1.4.4 |
|
From: omkar k. <omk...@gm...> - 2007-07-31 08:52:29
|
Hi All, I am currently using OpenSTA 1.4.3 for performance testing. When I recorded my application, the script was generated with static cookies where dynamic cookies were expected. Because of this, the replay is not as expected (especially for login pages). I referred to the mailing lists and found that this is a bug in OpenSTA which has reportedly been fixed in the 1.4.4 release. I searched the entire OpenSTA site to find a setup file for OpenSTA 1.4.4 but could not find it. I would be very thankful if anybody could send the setup file (or any link to download it) for OpenSTA 1.4.4. Please let me know if anybody has an alternative solution. Thanks in advance, Omkar |
|
From: Dan D. <ddo...@me...> - 2007-07-19 17:14:10
|
Thomas wrote:
| I am new with Open STA, Can anyone guide me in doing parameterization.
| What is Session id & Engine Id?
| How do we parameterize.. the user action...
| I need to parameterize the login action.. just by giving 10 different
| login... and need to give time delay in login also...
| please help me...
Thomas, welcome to the wonderful world of open-source software! Lest you think the entrance fee is *free*, there is a definite initial price--which is: training yourself on the basics by reading/studying the FAQs and Help. All of what you ask is in there. After you do that, and get your first script partially developed and have more advanced issues you are trying to resolve, send it along with plenty of detailed info and you'll find people happy to help. Or, if you need to come up to speed faster than that, there are tutorials and consultants ready to provide help, for a fee.
...Dan Downing
www.mentora.com |
|
From: Nishan T. <Nis...@ae...> - 2007-07-19 11:14:25
|
Hi All, I am new with Open STA, Can anyone guide me in doing parameterization. What is Session id & Engine Id? How do we parameterize.. the user action... I need to parameterize the login action.. just by giving 10 different login... and need to give time delay in login also... please help me... Thanks & Regards, Nishan Thomas |
|
From: Daniel S. <da...@Op...> - 2007-07-16 14:14:20
|
Dan Downing wrote:
> given how perplexing this behavior was, and how suspect it made me
> that perhaps I've been reporting inaccurate response times on
> *other* projects where the cpus were *not* pegged.
Sorry, but I don't follow your logic here. Because the timings you take when the timing system is overloaded are proven inaccurate, you then suspect that the timings may be inaccurate when the system is running normally... ???
I think that any timings you see will be affected by 2 possible overload problems:
- Processing delays in time from the script saying 'END TIMER' to when the system actually gets around to timestamping the record for the results.
- Processing delays from the actual tasks you are timing taking longer, not because the loaded system takes longer, but because the virtual client can't handle its stages in the process as quickly as it should be able to.
Inevitably you get some of both added in to increase any recorded time, but NOTHING in this would lead me to worry about the same system in a non-overloaded state. I believe that ANY toolset that measures timing data cannot have that data trusted when it is running at or near any of its limits.
The problem we need to find for you, though, is WHY the load generating system suddenly became overloaded when it should have easily been able to cope with the tasks at hand. The results after the overload point are worthless and should be ignored.
Also, comparing timing results between different tools is notoriously difficult and shouldn't be taken as evidence of anything unless you are 100% sure that exactly the same tasks were being measured and the start/stop timers were being activated at exactly the same time by event. Given the known differences between LR and OpenSTA this is very difficult to achieve - best just compare results to previous runs of the same test using the same toolset.
Cheers
/dan
--
Daniel Sutcliffe <Da...@Op...>
OpenSTA part-time caretaker - http://OpenSTA.org |
|
From: Bernie V. <Ber...@iP...> - 2007-07-16 13:52:57
|
> Interesting you mention run-time feedback - what are you watching? > I try to avoid doing any sort of Test monitoring when I'm running a > heavy test that I want accurate timings for - monitoring is something > that can definitely cause extra load ... not that it should but ... > > Bernie: were you monitoring your test at runtime? Yes, I had the monitoring tab open and Summary, Total Active Users, and Error Log displays tiled horizontally. -Bernie |
|
From: Daniel S. <da...@Op...> - 2007-07-16 13:44:23
|
Dan Downing wrote:
> Nothing changed on the server; issue replicated on a second server;
> and yes, have successfully completed other tests since.
So, my questions then would be: what are the similarities of the 2
failures? And what are the differences between these and the tests
that worked as you expected?
> In the second 'cpu-pegging' example that I did not describe -- a
> much more complex script with minimal millisecond WAITs --
> cpu-pegging problem was solved by inserting randomized 10-20 second
> WAITs between the 26 script steps.
But then you weren't creating as much load ... ?
It is an interesting fact that the (larger) WAITs did help though. I
wonder if the smaller WAITs were seen by the executor as just not
enough to cause it to pause (once it got behind) and therefore the
potential context switches were few and far between.
Did you try this technique on your other failure and it didn't help?
> > > we eventually accomplished this using LR
>
> > Out of interest: how much were the LR licenses that allowed you to
> > complete this task?
>
> Pulled in a favor and used another customer's license :)
Lucky customer this time round ;)
> > Except: look at the content of this page ..It's not well formed
> > HTML at all just a bunch of javascript.
>
> True...but should this matter?
Only if you are doing a LOAD RESPONSE_INFO BODY with an identifier,
then there is a chance the HTML parser could have got all sorts of
upset.
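For context, a from-memory sketch of the command being discussed (the connection id and variable name are placeholders; the identifier-based variant mentioned above is the form that exercises the HTML parser):

LOAD RESPONSE_INFO BODY ON 1 INTO page_body
log page_body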
> > Are you doing any LOAD RESPONSE_INFO's with this content? Do you
> > have any WAITs left in your script or have they all been edited
> > out? SYNCHRONIZE used?
>
> No LOAD RESPONSE, had 4 WAIT 25's btw PRIMARY GET and GET URIs did
> *not* use SYNCHRONIZE REQUESTS (see the full script in my response to
> Bernie).
I just went back and had another look at your script - the interesting
point is that you only use a single connection id; was the script
actually recorded this way? Because the requests all occur on a
single connection then the chances are once load ramps up all of your
WAITs will be totally ignored and you have absolutely no need for any
SYNCHRONIZE between them. The final SYNCHRONIZE at the end of the
scripts serves absolutely no purpose as there are no connections open
at that point to synchronize with. If anything, I would have made
your script end:
SYNCHRONIZE REQUESTS
End Timer SP01_load_page
End Timer T_SP_SYN1_V1
DISCONNECT ALL
Although I don't think any of this is the source of your issue just
from the fact that Bernie has run this exact script without issues.
Your script is also using HTTP/1.0 but with Keep-Alive - can you
compare this (with the connection usage) to the way that the LR
replacement was scripted? Just out of interest.
> Good suggestion about looping, though I resist looping in the script
> because the Summary Results monitor only refreshes when the script
> completes a Commander-controlled iteration -- and you lose run-time
> feedback.
Interesting you mention run-time feedback - what are you watching?
I try to avoid doing any sort of Test monitoring when I'm running a
heavy test that I want accurate timings for - monitoring is something
that can definitely cause extra load ... not that it should but ...
Bernie: were you monitoring your test at runtime?
> > The "Failed processing TOF" error is usually accompanied by
> > another, more meaningful error - I'd be very interested if there
> > is one and if, what it is ...
>
> There was no other error reported.
Might just be memory shortage being hit, or could be some sort of
corruption. My gut feeling is that once 'something' in your test
goes 'pear shaped' then it's all down hill from there - what the
original problem is holds the most interest for me.
Cheers
/dan
--
Daniel Sutcliffe <Da...@Op...>
OpenSTA part-time caretaker - http://OpenSTA.org/
|
|
From: Dan D. <ddo...@me...> - 2007-07-15 13:07:17
|
Dan Sutcliffe wrote:
> So, my questions then would be: what are the similarities of the 2 failures? And what are the differences between these and the tests that worked as you expected?
The similarities between the two examples:
1 - Run on the same W2K load server
2 - In the first attempt, the script had only the *small* (sub-second) WAITs between PRIMARY GETs and GET URIs (commented out any longer than 1 second)
3 - Similar aggressive load ramp per Task Group -- 100 vu/group, 50 users/batch, 3 seconds/batch, 1 sec. batch ramp-up (later reduced to 1 vu/batch, 3 sec/batch, 1 sec. batch ramp-up)
The differences:
1 - Many pages, each with many resources, using multiple conids, lots of LOAD RESPONSEs
2 - Reading a file with 26 comma-delimited script parameters, then calling a routine that parsed these out into local variables
>> In the second 'cpu-pegging' example that I did not describe -- a much
>> more complex script with minimal millisecond WAITs -- the cpu-pegging
>> problem was solved by inserting randomized 10-20 second WAITs between
>> the 26 script steps.
> But then you weren't creating as much load ... ?
Correct; 1/20th the vusers, about 1 minute end-to-end response time for the script.
> It is an interesting fact that the (larger) WAITs did help though. I wonder if the smaller WAITs were seen by the executor as just not enough to cause it to pause (once it got behind) and therefore the potential context switches were few and far between.
> Did you try this technique on your other failure and it didn't help?
Definitely, it is the 10-20 second WAITs that solved the cpu-pegging problem (this after we worked on tuning the data-parsing code, which we thought might be the problem--it wasn't).
> I just went back and had another look at your script - the interesting point is that you only use a single connection id; was the script actually recorded this way?
Yes, it was recorded this way; I did not notice there was a single conid for all the GETs till you two mentioned it.
> Because the requests all occur on a single connection then the chances are once load ramps up all of your WAITs will be totally ignored and you have absolutely no need for any SYNCHRONIZE between them. The final SYNCHRONIZE at the end of the scripts serves absolutely no purpose as there are no connections open at that point to synchronize with. If anything, I would have made your script end:
> Although I don't think any of this is the source of your issue just from the fact that Bernie has run this exact script without issues.
> Your script is also using HTTP/1.0 but with Keep-Alive - can you compare this (with the connection usage) to the way that the LR replacement was scripted? Just out of interest.
Yeah. I will have to retest this on my other laptop from home with my Verizon FIOS 15 Megabit connection -- and will send another report.
>> Good suggestion about looping, though I resist looping in the script
>> because the Summary Results monitor only refreshes when the script
>> completes a Commander-controlled iteration -- and you lose run-time
>> feedback.
> Interesting you mention run-time feedback - what are you watching?
Was watching the OpenSTA Summary Results, plus perfmon on our load driver.
> Bernie: were you monitoring your test at runtime?
>> The "Failed processing TOF" error is usually accompanied by another,
>> more meaningful error - I'd be very interested if there is one and
>> if, what it is ...
>> There was no other error reported.
> Might just be memory shortage being hit, or could be some sort of corruption. My gut feeling is that once 'something' in your test goes 'pear shaped' then it's all down hill from there - what the original problem is holds the most interest for me.
Roger this.
...Dan
Dan Downing
www.mentora.com |
|
From: Dan D. <ddo...@me...> - 2007-07-13 22:13:49
|
Bernie wrote:
> I am sure we are not. Notice the connection ids are all 1 in the full script Dan posted. LR is surely sending the secondary gets in parallel and most likely prior to the primary get finishing. That would alert me to the possibility that response times reported by the two tools may very well be different.
>> I've never really used LR but from what I understand from discussions I've had with people who do and have used it, its replay actually works quite differently than the OpenSTA Executor, and may actually be self throttling to some extent, i.e. when the primary return starts to slow down then the secondaries will get relevant delays before they are sent. The SCL replay will just attempt to keep sending those secondary GETs at the interval that is given in the script.
>> This is one of the reasons it is OK to put a SYNCHRONIZE after your primary GET - it 'sort of' simulates the total slowdown as the primary response times get longer ... although it isn't ideal because you lose the actual simulation of secondary GETs starting before the primary has completely finished, which may well happen in a real browser.
> I suspect Dan edited the connection Ids to be all 1, or perhaps the script is synthetic and was not the product of a recording. In any event, I'd consider putting the primary get on id 1, followed by a synchronize command, followed by the secondary gets each having a separate channel Id, followed by a synchronize and then an end timer command. Of course I could be talking out of my hat here... not knowing all the details, and this could all be putting too fine a point on things given that Dan has a much bigger problem on his hands in that the test won't run!
DanD responds: Bernie, this script was recorded, connection IDs were not edited; but I will be trying your suggestions anyway... given how perplexing this behavior was, and how suspect it made me that perhaps I've been reporting inaccurate response times on *other* projects where the cpus were *not* pegged.
...Dan
Dan Downing
www.mentora.com |
|
From: Bernie V. <Ber...@iP...> - 2007-07-13 19:10:24
|
Daniel wrote:
>> That said, LR had no cpu-pegging with these bursts of load... and
>> it returned reasonable times (0.5 seconds at the low end).
>
> I don't think we're comparing like with like though
I am sure we are not. Notice the connection ids are all 1 in the full script Dan posted. LR is surely sending the secondary gets in parallel, and most likely prior to the primary get finishing. That would alert me to the possibility that response times reported by the two tools may very well be different.
> I've never really used LR but from what I understand from discussions I've had with people who do and have used it, its replay actually works quite differently than the OpenSTA Executor, and may actually be self throttling to some extent, i.e. when the primary return starts to slow down then the secondaries will get relevant delays before they are sent. The SCL replay will just attempt to keep sending those secondary GETs at the interval that is given in the script.
>
> This is one of the reasons it is OK to put a SYNCHRONIZE after your primary GET - it 'sort of' simulates the total slowdown as the primary response times get longer ... although it isn't ideal because you lose the actual simulation of secondary GETs starting before the primary has completely finished, which may well happen in a real browser.
I suspect Dan edited the connection Ids to be all 1, or perhaps the script is synthetic and was not the product of a recording. In any event, I'd consider putting the primary get on id 1, followed by a synchronize command, followed by the secondary gets each having a separate channel Id, followed by a synchronize and then an end timer command. Of course I could be talking out of my hat here... not knowing all the details, and this could all be putting too fine a point on things given that Dan has a much bigger problem on his hands in that the test won't run!
Good luck Dan.
-Bernie |
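For readers unfamiliar with the SCL being discussed, a rough, untested sketch of the structure Bernie describes might look like the following; the host, paths, timer name and header constant are placeholders, and the exact syntax of a recorded script may differ:

Start Timer T_PAGE
! Primary request on its own connection id
PRIMARY GET URI "http://www.example.com/index.html HTTP/1.0" ON 1 &
    HEADER DEFAULT_HEADERS
SYNCHRONIZE REQUESTS
! Secondary requests, each on a separate connection id
GET URI "http://www.example.com/style.css HTTP/1.0" ON 2 &
    HEADER DEFAULT_HEADERS
GET URI "http://www.example.com/logo.gif HTTP/1.0" ON 3 &
    HEADER DEFAULT_HEADERS
SYNCHRONIZE REQUESTS
End Timer T_PAGE
DISCONNECT ALL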
|
From: Danny R. F. <fa...@te...> - 2007-07-13 16:16:26
|
Dan Downing wrote:
> Well that is true--no WAITs in this one -- given it is a single page; and only one iteration, so no pacing WAITs between iterations either.
Okay, you're right - for a scenario where you're accessing only a single page on the site, there is no think time to simulate. But there is still the browser processing time between the requests to load the secondary elements of the page. When you take a recording, these are the sub-second delays inserted between secondary gets. I think delays of even a tenth of a second would make a huge impact on your CPU usage. I haven't thought through how the use of synchronization affects this, and whether it could be used in lieu of artificial waits.
--
Danny R. Faught
Tejas Software Consulting
http://tejasconsulting.com/ |