Re: [Grinder-use] Bad TPS number reported by the console
Distributed load testing framework - Java, Jython, or Clojure scripts.
Brought to you by: philipa
From: olivier m. <ome...@gm...> - 2013-10-13 16:24:52
Hello Phil,

Impl is a Jython object. All Impl classes inherit from Core, which is an
abstract class.

I have found a workaround: I suppressed all other calls to impl and kept
only the instrumented impl.sendData() call. Now the numbers are correct.
I will try to reduce the use case to something simpler - there is
something strange behind this; I don't understand why it worked before.

Thanks,

2013/10/13 Philip Aston <ph...@ma...>

> There may well be a bug here. Is impl a Java object, or a Jython object?
>
> I just tried to reproduce this, but weirdly I can't even reproduce the
> issue I'd seen with helloworld.py.
>
> If impl is a Java object, try this workaround:
>
>     class SendDataFilter(Test.InstrumentationFilter):
>         def matches(self, method):
>             return method.name == "sendData"
>
>     sendDataFilter = SendDataFilter()
>     # ...
>
>     Test(testId, line.getTestName()).record(impl, sendDataFilter)
>
> - Phil
>
> On 13/10/13 15:35, olivier merlin wrote:
>
> Hello Phil,
>
> I see better where the problem is now.
> It has nothing to do with overloaded methods, but with the scope of the
> instrumentation when the instrumentation is dynamic.
> The Test(...).record(impl.sendData) call is dynamic and re-evaluated on
> each run. The proof is below.
>
> The code logic is the following:
>
>     __call__
>     foreach line of scenario:
>
>         # get the protocol implementation
>         # this can be ejb, http, http apache, database, command ...
>         impl.getImplementation()
>
>         (...)
>         # for debugging
>         impl.setFullRunID(fullIDWithLine)
>
>         # for load balancing on any criteria
>         impl.setProcessID(processNumber)
>         impl.setThreadID(threadNumber)
>         impl.setRunID(runNumber)
>         impl.setLineID(iCntProcessed)
>
>         (...)
>         # replace data with all templates and contextual data
>         # (memorized data from previous call)
>         submitCmd = impl.newProcessLine(line, self.memMgr)
>
>         (...)
>         # ======= Tests are dynamic!
>         if testNumbers.has_key(line.getTestName()):
>             testId = testNumbers[line.getTestName()]
>         else:
>             testId = processNumber*testRangeSize + len(testNumbers) + 1
>             testNumbers[line.getTestName()] = testId
>
>         (...)
>         Test(testId, line.getTestName()).record(impl.sendData)
>         (...)
>         spResp = impl.sendData(submitCmd, ...)
>
> If I run 10 runs in one thread, I get 64 successful tests, which is
> exactly 1 + 7x9.
> Run 0 instruments only one call (impl.sendData), because the Test is
> created just before the call.
> For each run > 0, the instrumentation records each call to impl,
> although it should record impl.sendData only.
>
> To be completely sure of this behaviour, I commented out the 5 impl.xxx
> calls above and got 19 transactions recorded as successful:
> 1 + 2x9 = 19.
>
> This proves that the instrumentation does not apply to the
> impl.sendData() method but to the whole impl object.
>
> The question is: could we consider this a bug?
> The same code was working with versions <= 3.5 using the getattr()
> method.
>
> Do you see a workaround I could apply?
>
> Thanks again for your help,
> Olivier
>
> 2013/10/12 Philip Aston <ph...@ma...>
>
>> No, there's no way to trace what is instrumented. There should be -
>> please open a Feature Request.
>>
>> So you are now definitely instrumenting the sendData method and not
>> the getattr result? (E.g. Test(...).record(impl.sendData))
>>
>> If so, is sendData() overloaded - are there many variants of the
>> sendData method?
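[Editor's note: Olivier's counts fit a simple model. Run 0 records one sample, because the Test is created immediately before the sendData call; every later run records one sample per instrumented impl.* call. A quick plain-Python check of that model (this is illustration, not Grinder code):

```python
def recorded_samples(instrumented_calls_per_run, runs):
    # Run 0 records a single sample: the Test is created just before the
    # sendData call, so only that one call is instrumented on that run.
    # Each later run starts with impl already instrumented, so every
    # impl.* call in the loop produces a sample.
    return 1 + instrumented_calls_per_run * (runs - 1)

# 7 impl.* calls per run, 10 runs -> the 64 successes reported above
print(recorded_samples(7, 10))  # 64
# with 5 calls commented out (2 remain), 10 runs -> 19
print(recorded_samples(2, 10))  # 19
```

Both values match the thread's arithmetic (1 + 7x9 and 1 + 2x9), which supports the claim that the whole impl object, not just sendData, was instrumented from run 1 onwards.]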
>> - Phil
>>
>> On 11/10/13 17:45, olivier merlin wrote:
>>
>> Hello Phil,
>>
>> I really don't understand: I have used your new instrumentation method
>> and I still have the problem.
>> If I launch one thread with 2 runs, I get:
>>
>>     Thread, Run, Test, Start time (ms since Epoch), Test time, Errors, SyncML
>>     0, 0, 1, 1381509722862, 1, 0, 0.0
>>     0, 1, 1, 1381509722867, 0, 0, 0.0
>>     0, 1, 1, 1381509722867, 0, 0, 0.0
>>     0, 1, 1, 1381509722867, 0, 0, 0.0
>>     0, 1, 1, 1381509722867, 0, 0, 0.0
>>     0, 1, 1, 1381509722867, 0, 0, 0.0
>>     0, 1, 1, 1381509722867, 2, 0, 0.0
>>     0, 1, 1, 1381509722871, 0, 0, 0.0
>>
>> As you can see, run 1 is executed 7 times. Very strange!
>> Do you have an idea of how I can trace the instrumentation?
>>
>> cheers,
>> Olivier
>>
>> 2013/10/6 Philip Aston <ph...@ma...>
>>
>>> This is to do with what you are instrumenting, not the logging.
>>>
>>> getattr will return some internal Jython object that represents the
>>> bound method. You're instrumenting all methods of this internal
>>> Jython object. When you invoke it, the call path crosses multiple
>>> (7!) instrumented methods, but that's completely down to how Jython
>>> is implemented.
>>>
>>> Instead, I think you should try to instrument the sendData method
>>> itself. Try something like:
>>>
>>>     Test(...).record(impl.sendData)
>>>
>>>     ...
>>>
>>>     spResp = impl.sendData(....)
>>>
>>> - Phil
>>>
>>> On 02/10/13 10:35, olivier merlin wrote:
>>>
>>> Hello,
>>>
>>> I am trying to switch our Grinder framework to Grinder 3.11 and I get
>>> a problem with the TPS count.
>>>
>>> When I inject a constant rate of 3 TPS, after a period of X minutes
>>> the console reports a throughput of 21 TPS, i.e. 7x the real
>>> throughput.
>>>
>>> Looking in the grinder-data_X.log file, I see that the 3 threads per
>>> second are there, but each thread writes to the data log 7 times!
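[Editor's note: the distinction Phil describes can be illustrated outside Grinder. The sketch below is a plain-Python analogy, not Grinder's bytecode instrumentation, and all names in it are invented for illustration: wrapping a single bound method records one sample per sendData call, while wrapping every public method of the object records a sample for each impl.* call, which is how one logical transaction becomes several data-log lines.

```python
class Recorder:
    """Toy stand-in for a Grinder Test's statistics (hypothetical)."""
    def __init__(self):
        self.samples = 0

class Impl:
    """Toy protocol implementation with calls like those in the scenario loop."""
    def setRunID(self, run): pass
    def setThreadID(self, thread): pass
    def sendData(self, cmd): return "response"

def instrument_method(recorder, obj, name):
    """Record only calls to the named method (what was intended)."""
    target = getattr(obj, name)
    def wrapper(*args, **kwargs):
        recorder.samples += 1
        return target(*args, **kwargs)
    setattr(obj, name, wrapper)

def instrument_object(recorder, obj):
    """Record calls to *every* public method (what effectively happened)."""
    for name in [n for n in dir(obj) if not n.startswith("_")]:
        if callable(getattr(obj, name)):
            instrument_method(recorder, obj, name)

def one_run(impl):
    impl.setRunID(1)
    impl.setThreadID(0)
    impl.sendData("cmd")

per_method = Recorder()
a = Impl()
instrument_method(per_method, a, "sendData")
one_run(a)                      # only the sendData call is counted

whole_object = Recorder()
b = Impl()
instrument_object(whole_object, b)
one_run(b)                      # every impl.* call is counted

print(per_method.samples, whole_object.samples)  # 1 3
```

With three impl.* calls per run, whole-object wrapping triples the recorded samples; with the seven calls in Olivier's real scenario, the same effect yields the 7x TPS inflation seen on the console.]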
>>> I'm quite sure it's a problem of instrumentation, somewhere similar
>>> to the bug entered by Phil: #234 "helloworld sample double counts"
>>> <http://sourceforge.net/p/grinder/bugs/234/>
>>>
>>> We are using the old way of instrumenting:
>>>
>>>     (...)
>>>     testSubmit = Test(testId, line.getTestName()).wrap(
>>>         getattr(impl, "sendData"))
>>>     (...)
>>>     spResp = testSubmit(submitCmd, line.isMemorized(),
>>>         line.getLastTemplate(), self.memMgr)
>>>     (...)
>>>
>>> impl is a protocol implementation that may change between tests and
>>> that implements the sendData() method.
>>> The problem is perhaps that I switched the logging to logback
>>> (grinder.logger) in all the implementations (as in #234), thus
>>> leading to a test record for each logging call?
>>>
>>> Do I have to completely change my way of instrumenting?
>>> Or do I have to use a logger distinct from grinder.logger to avoid
>>> this kind of problem?
>>>
>>> Has someone encountered the same trouble?
>>>
>>> cheers,
>>> Olivier
>>
>> ------------------------------------------------------------------------------
>> October Webinars: Code for Performance
>> Free Intel webinars can help you accelerate application performance.
>> Explore tips for MPI, OpenMP, advanced profiling, and more. Get the
>> most from the latest Intel processors and coprocessors.
>> See abstracts and register:
>> http://pubads.g.doubleclick.net/gampad/clk?id=60134071&iu=/4140/ostg.clktrk
>> _______________________________________________
>> grinder-use mailing list
>> gri...@li...
>> https://lists.sourceforge.net/lists/listinfo/grinder-use
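[Editor's note: the per-process test-numbering scheme used in the scenario script earlier in the thread (the testNumbers dictionary and processNumber*testRangeSize + len(testNumbers) + 1) can be sketched in plain Python. Names here are illustrative, testRangeSize is assumed to be the width of each process's id range, and the original's has_key() is Python 2 / Jython:

```python
def allocate_test_id(test_numbers, test_name, process_number, test_range_size):
    # First sighting of a test name claims the next id within this
    # process's numeric range; later sightings reuse the cached id,
    # so the same scenario line always maps to the same Test number.
    if test_name not in test_numbers:
        test_numbers[test_name] = (
            process_number * test_range_size + len(test_numbers) + 1)
    return test_numbers[test_name]

ids = {}
print(allocate_test_id(ids, "LOGIN", 0, 1000))      # 1
print(allocate_test_id(ids, "SEND_DATA", 0, 1000))  # 2
print(allocate_test_id(ids, "LOGIN", 0, 1000))      # 1 (cached)
print(allocate_test_id({}, "LOGIN", 2, 1000))       # 2001
```

Because the id is cached per name, the numbering itself is stable across runs; the double counting discussed above comes only from re-running record() each time, not from this id scheme.]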