From: SourceForge.net <no...@so...> - 2009-11-16 17:53:01
Feature Requests item #416828, was opened at 2001-04-17 16:34
Message generated for change (Comment added) made by dsaff
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=365278&aid=416828&group_id=15278

Please note that this message will contain a full copy of the comment
thread, including the initial issue submission, for this request, not just
the latest update.

Category: None
Group: None
>Status: Pending
Resolution: None
Priority: 5
Private: No
Submitted By: John Stoneham (jstoneham)
Assigned to: Nobody/Anonymous (nobody)
Summary: Integrated timing callbacks

Initial Comment:
As a development aid where I work, I've written a custom Swing UI on top of
the JUnit framework. One of its features is to report and manipulate the
timing of tests. The easiest current way to do this is through a
TestDecorator such as the one included in JUnitPerf
(http://www.clarkware.com/software/JUnitPerf.html). The problem is that
this approach can offer no finer granularity than the combined time of
setUp() + test() + tearDown().

The solution I found was to extend TestListener with a new callback,
addTime(Test, long), similar to addFailure() and addError(). The first
argument matches the Test argument of the other TestListener callbacks; the
long argument is the time, in milliseconds, of only the actual test method
of the last test run, excluding setUp() and tearDown(). (This value is
easily found by storing System.currentTimeMillis() before and after the
test and registering the difference.) For wider-scale applications, this
could be extended to addTime(Test, long, long, long), the three latter
arguments being the timings of setUp(), the test method, and tearDown().

The TestListener implementation interpreting these callbacks should retain
full responsibility for interpreting their order of receipt; its own logic
determines the order in which test cases are run, so it can work out which
callback applies to which test for reporting purposes. This does, however,
pose an issue for tests run multiple times in their own threads: the
developer must separate runs by test method and synchronize all the threads
at the end of each method, or find another way to distinguish the callbacks
(perhaps through the Test argument), since thread ending times are not
predictable.

Unfortunately, this modification may not be backwards compatible. It
requires changes to TestListener, forcing an addTime() implementation onto
every class that implements TestListener. That suggests a sub-interface of
TestListener might work instead, but because we need to replace the call to
runBare() in TestResult.run(TestCase), we would be calling a method on a
TestListener that may or may not exist on a particular instance, depending
on whether it implements the extended interface. Only a run-time type check
of the registered TestListener solves this problem.

The following is code for the one-argument version as it exists in my code
(beware, it's written against JUnit 3.2).

Addition to TestListener.java:

    /**
     * The last-run test took testTime milliseconds.
     */
    public void addTime(Test test, long testTime);

Change in TestResult.java:

    protected void run(final TestCase test) {
        startTest(test);
        Protectable p = new Protectable() {
            public void protect() throws Throwable {
                // Replicate runBare() here, adding the timing call.
                // REMOVED: test.runBare();
                test.setUp();
                long startTime = System.currentTimeMillis();
                try {
                    test.runTest();
                } finally {
                    long endTime = System.currentTimeMillis();
                    addTime(test, endTime - startTime);
                    test.tearDown();
                }
            }
        };
        runProtected(test, p);
        endTest(test);
    }

This code needs to live in TestResult because TestResult is the class with
which the TestListener is registered. There may be a better architectural
solution to this via runProtected(), however.

----------------------------------------------------------------------

Comment By: David Saff (dsaff)
Date: 2009-11-16 12:53

Message:
This tracker is being shut down. Please move this item to
http://github.com/KentBeck/junit/issues

----------------------------------------------------------------------

Comment By: Nobody/Anonymous (nobody)
Date: 2001-08-07 18:28

Message:
Logged In: NO

This is a great idea. In addition, what if there was an
assertMaxTimeSeconds(double) that could be called within a test method?
When the method completes, if it took more than the limit, this would be
considered a duration failure. This would help identify situations where a
refactoring produces the same expected result, but whose run-time
performance is unacceptable.

----------------------------------------------------------------------

Comment By: John Stoneham (jstoneham)
Date: 2001-04-17 16:35

Message:
Logged In: YES user_id=198358

[Apologies for the html and lack of indentation - I'm brand new to
SourceForge and it shows. I'll know better next time.]

----------------------------------------------------------------------
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=365278&aid=416828&group_id=15278
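[Editor's note] The backwards-compatibility problem the submitter raises - calling addTime() on listeners that may not define it - can be sketched with the run-time type check he mentions. This is a minimal, self-contained illustration: Test and TestListener are simplified stand-ins for the junit.framework interfaces, and TimedTestListener is a hypothetical sub-interface, not part of any JUnit release.

```java
// Simplified stand-ins for junit.framework's Test and TestListener; the
// real interfaces have more methods, trimmed here to keep the sketch short.
interface Test {}

interface TestListener {
    void startTest(Test test);
    void endTest(Test test);
}

// Hypothetical sub-interface: only listeners that opt in receive timing
// callbacks, so existing TestListener implementations keep compiling.
interface TimedTestListener extends TestListener {
    void addTime(Test test, long testTimeMillis);
}

final class TimingDispatcher {
    // The run-time type check described in the request: fire addTime() only
    // at listeners that implement the extended interface.
    static void fireAddTime(java.util.List<TestListener> listeners,
                            Test test, long millis) {
        for (TestListener l : listeners) {
            if (l instanceof TimedTestListener) {
                ((TimedTestListener) l).addTime(test, millis);
            }
        }
    }
}
```

Plain TestListener instances are silently skipped, which is exactly what keeps the change non-breaking.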
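[Editor's note] The addTime(Test, long, long, long) variant mentioned in the request - separate timings for setUp(), the test method, and tearDown() - could be gathered as below. The PhaseTimes holder and its method names are illustrative, not JUnit API; a production version would run tearDown in a finally block, as the submitter's patch does.

```java
// Times three test phases independently with System.currentTimeMillis(),
// the same clock the submitter's patch uses.
final class PhaseTimes {
    final long setUpMillis, testMillis, tearDownMillis;

    PhaseTimes(long setUpMillis, long testMillis, long tearDownMillis) {
        this.setUpMillis = setUpMillis;
        this.testMillis = testMillis;
        this.tearDownMillis = tearDownMillis;
    }

    private static long timed(Runnable phase) {
        long start = System.currentTimeMillis();
        phase.run();
        return System.currentTimeMillis() - start;
    }

    // Runs the three phases in order and records each duration.
    static PhaseTimes run(Runnable setUp, Runnable test, Runnable tearDown) {
        long s = timed(setUp);
        long t = timed(test);
        long d = timed(tearDown);
        return new PhaseTimes(s, t, d);
    }
}
```

The three values could then be handed to the four-argument callback in a single notification, avoiding any ordering ambiguity between per-phase events.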
From: SourceForge.net <no...@so...> - 2009-12-01 02:20:37
Feature Requests item #416828, was opened at 2001-04-17 20:34
Message generated for change (Comment added) made by sf-robot
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=365278&aid=416828&group_id=15278

Category: None
Group: None
>Status: Closed
Resolution: None
Priority: 5
Private: No
Submitted By: John Stoneham (jstoneham)
Assigned to: Nobody/Anonymous (nobody)
Summary: Integrated timing callbacks

[Initial comment and earlier comments are identical to the message above.]

----------------------------------------------------------------------

>Comment By: SourceForge Robot (sf-robot)
Date: 2009-12-01 02:20

Message:
This Tracker item was closed automatically by the system. It was previously
set to a Pending status, and the original submitter did not respond within
14 days (the time period specified by the administrator of this Tracker).

----------------------------------------------------------------------
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=365278&aid=416828&group_id=15278
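[Editor's note] The anonymous comment's assertMaxTimeSeconds(double) idea can be sketched as a helper that runs a block of test code and raises a duration failure when the limit is exceeded. The class and method names are illustrative; no JUnit release ships this method.

```java
// Hypothetical duration assertion in the spirit of the suggested
// assertMaxTimeSeconds(double): run the body, then fail if it took too long.
final class TimingAssert {
    static void assertMaxTimeSeconds(double maxSeconds, Runnable body) {
        long start = System.nanoTime();
        body.run();
        double elapsedSeconds = (System.nanoTime() - start) / 1e9;
        if (elapsedSeconds > maxSeconds) {
            throw new AssertionError("duration failure: took " + elapsedSeconds
                    + "s, limit " + maxSeconds + "s");
        }
    }
}
```

Because the check runs after the body completes, a hung test still hangs; enforcing the limit mid-flight would require running the body on a watched thread.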