From: Steve B. <Ste...@an...> - 2006-12-17 03:05:09
Hi Eric,

Sorry for the slow response on this. Yes, I think we have exactly what you want. Sorry that it is not documented on the web page, but it is in the current release. Run with the "-h" option, and you'll see the following:

  Measurement methodology options
    -converge             Allow benchmark times to converge before timing
    -max_iterations <n>   Run a max of n iterations (default 20)
    -variance <pct>       Target coefficient of variation (default 3.0)
    -window <n>           Measure variance over n runs (default 3)
    -n <iter>             Run the benchmark <iter> times

So depending on whether the defaults work for you, you may just need to say "-converge"; otherwise, adjust the other three options. I realize the wording is perhaps a little confusing: "run" == "iteration" (meaning one execution of the benchmark; n iterations occur within a single java invocation). The window specifies the N *consecutive* iterations over which the variance is measured (if -window is bigger than -max_iterations, it will never converge).

Robin developed the above in cooperation with Perry Cheng at IBM Watson.

Apologies again for the slow response.

--Steve

On 14/12/2006, at 2:16 AM, Eric Bodden wrote:

> Hi, all.
>
> I am now finally getting into the phase where we are trying to get some final runtime numbers using DaCapo, and at the moment I am struggling a bit with the problem that the variance between multiple runs with the very same configuration sometimes seems to be relatively high (between 2% and 5%).
>
> I saw that you have this "-n" option in order to perform multiple runs, and I also saw that you support convergence checks somehow. Is this feature properly documented anywhere? In particular, is there a way to do something like the following: "run multiple times, accumulating an average runtime and a confidence value until this value is below a given confidence interval"?
> By taking a running average, you basically force the values to converge as you gather more and more data. This is what we used to do during my time at the IBM performance labs, and it proved very useful.
>
> Cheers,
> Eric
>
> --
> Eric Bodden
> Sable Research Group
> McGill University, Montréal, Canada
>
> _______________________________________________
> dacapobench-researchers mailing list
> dac...@li...
> https://lists.sourceforge.net/lists/listinfo/dacapobench-researchers
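[Editor's note: the convergence check Steve describes — a target coefficient of variation measured over a sliding window of consecutive iterations — can be sketched as follows. This is an illustrative reconstruction, not the actual DaCapo harness code; the class and method names are invented, and the parameters mirror the -window and -variance options above.]

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of a convergence check: compute the coefficient of variation
// (stddev / mean, as a percentage) over the last `window` iteration
// times, and report convergence once it drops below the target.
public class ConvergenceCheck {
    private final int window;          // corresponds to -window <n>
    private final double targetCv;     // corresponds to -variance <pct>
    private final Deque<Long> times = new ArrayDeque<>();

    public ConvergenceCheck(int window, double targetCvPercent) {
        this.window = window;
        this.targetCv = targetCvPercent;
    }

    /** Record one iteration's elapsed time; return true once converged. */
    public boolean record(long millis) {
        times.addLast(millis);
        if (times.size() > window) times.removeFirst();
        if (times.size() < window) return false;   // not enough data yet
        double mean = times.stream().mapToLong(Long::longValue).average().orElse(0);
        double variance = times.stream()
                .mapToDouble(t -> (t - mean) * (t - mean))
                .sum() / window;
        double cvPercent = 100.0 * Math.sqrt(variance) / mean;
        return cvPercent < targetCv;
    }
}
```

With the defaults (window 3, target 3.0%), a sequence of iteration times like 1000, 900, 845, 843, 844 ms would converge only at the fifth iteration, once three consecutive times agree to within the target variation.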