From: Geoff B. <geo...@gm...> - 2024-07-27 10:07:32
|
Hi Stéphane,

Thanks for this, it sounds like a bug in TextTest. Please report it as an issue on Github and we'll take a look when we have time.

Regards,
Geoff

On Thu, Jul 25, 2024 at 2:33 PM <tom...@gm...> wrote:
> Hi,
>
> I am using 4.1.3 and I have noticed a flow that corrupts the maximum
> number of simultaneous tasks to run.
>
> The steps are:
>
> - Create a regression list to run (any)
> - Set the max Running limit to 3 (for example)
> - Run it
> - Select more than one item in the "Not started" list
> - Kill them
> - => The items get placed in the Failed condition under Cancelled, which
>   is expected.
> - However, in the process, the Running upper limit is corrupted and is
>   increased (not consistently).
>
> I now have more than 3 processes running at once.
>
> Regards
>
> Stéphane
>
> _______________________________________________
> Texttest-users mailing list
> Tex...@li...
> https://lists.sourceforge.net/lists/listinfo/texttest-users
From: <tom...@gm...> - 2024-07-25 12:33:11
|
Hi,

I am using 4.1.3 and I have noticed a flow that corrupts the maximum number of simultaneous tasks to run.

The steps are:

* Create a regression list to run (any)
* Set the max Running limit to 3 (for example)
* Run it
* Select more than one item in the "Not started" list
* Kill them
* => The items get placed in the Failed condition under Cancelled, which is expected.
* However, in the process, the Running upper limit is corrupted and is increased (not consistently).

I now have more than 3 processes running at once.

Regards
Stéphane
From: Corey C. <cor...@ac...> - 2024-03-29 22:54:25
|
Hi all,

I'm trying to figure out a problem with texttest, hoping someone can help. We're testing out a new grid system to replace our current SGE system.

In terms of what files are created, I just see one file, "teststate", in a "framework_tmp" directory. There are a number of tests in the test case, and they all have fairly identical-looking teststate files. They start with this header:

$ more framework_tmp/teststate
�[1]ctexttestlib.plugins Unrunnable q
Job ID was 1033
---------- Full accounting info from SGE ----------

and then there's the accounting info. There's a slavelogs directory, but that's empty. There are also grid_core_files. I made a little modification to also generate stdout files as well as stderr files, but both are 0 size. I set up a log directory with as many sections in the logging.debug file set to "Level=DEBUG" as I could find, but there aren't any obvious error messages to lead me to what's failing here.

Does anyone have any ideas about what could be causing the problem here, or a hint about where to start with the debugging? I'm open to any suggestions, thanks for any help!

-------
Corey
From: John R. <Joh...@ou...> - 2024-03-14 20:27:48
|
Hi Geoff,

No, I realized that now. It was because of Sikuli, which I use for controlling the GUI in our GUI tests with TextTest. I found a solution for it so I didn't have to filter it, but thanks for the tip.

For anyone who wants to know, I fixed it by creating a log4j.properties file and pointing it out with a flag when I start the Java environment, like so:

-Dlog4j.configuration=file:///SikuliX/log4j.properties

Best regards
John

________________________________
From: Geoff Bache <geo...@gm...>
Sent: 14 March 2024 13:23
To: General list for texttest issues <tex...@li...>
Subject: Re: [Texttest-users] Log4j warning

Hi John,

TextTest itself doesn't use log4j, I suspect you're using or testing some Java program. The great thing is though, if you don't care you can filter it away...

Regards,
Geoff

On Wed, Mar 13, 2024 at 4:47 PM John Rydén <Joh...@ou...<mailto:Joh...@ou...>> wrote:
> Hi!
> I get a warning lately from log4j saying: "No appenders could be found for logger"
>
> Does anyone know how to handle that?
>
> Best regards
> John

_______________________________________________
Texttest-users mailing list
Tex...@li...<mailto:Tex...@li...>
https://lists.sourceforge.net/lists/listinfo/texttest-users
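For anyone hitting the same "No appenders could be found" warning: the file John points the JVM at is a standard log4j 1.x configuration file. A minimal sketch that defines a console appender might look like the following - note the appender name, level and layout here are illustrative guesses, not taken from John's actual file:

```properties
# Route the root logger to the console at WARN level and above
log4j.rootLogger=WARN, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
```

Defining any appender for the root logger is enough to silence the warning; the level can be raised to ERROR if the Sikuli output is still too chatty.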
From: Geoff B. <geo...@gm...> - 2024-03-14 12:23:48
|
Hi John,

TextTest itself doesn't use log4j, I suspect you're using or testing some Java program. The great thing is though, if you don't care you can filter it away...

Regards,
Geoff

On Wed, Mar 13, 2024 at 4:47 PM John Rydén <Joh...@ou...> wrote:
>
> Hi!
>
> I get a warning lately from log4j saying: "No appenders could be found
> for logger"
>
> Does anyone know how to handle that?
>
> Best regards
>
> John
>
> _______________________________________________
> Texttest-users mailing list
> Tex...@li...
> https://lists.sourceforge.net/lists/listinfo/texttest-users
From: John R. <Joh...@ou...> - 2024-03-13 15:47:12
|
> Hi!
> I get a warning lately from log4j saying: "No appenders could be found for logger"
>
> Does anyone know how to handle that?
>
> Best regards
> John
From: Geoff B. <geo...@gm...> - 2023-12-11 14:51:24
|
Hi all,

There's a new TextTest out now. As well as the actual changes below, this will hopefully fix the problems around terminating running tests before they complete, and the "Stack smashing detected" messages that appear on shutdown with 4.3.0. This was not an issue with TextTest itself but presumably one of its dependencies, probably PyGObject. It is in any case fixed in the newer versions present in this new 4.3.1.

For details of actual TextTest changes, see below.

Regards,
Geoff

New development:
- New knownbugs option to report as known bug even if all reruns fail (extra checkbox in the UI) - as requested by Michael Behrisch on this list.

Bugfixes:
- Standard error files should always have highest display priority unless overridden (some tools truncate preview output and lose stacktraces etc)
- Make spin buttons reset correctly, irrespective of locale
- Removed very old hack that caused problems on Linux testing with UIs visible
- Fixes for AZ Devops integration - count Rejected/Removed bugs as completed, don't fail if Severity not present (so we can link to Features/User stories etc)
- Fixed selection of fastest/slowest tests (thanks Nils Olsson)
- Compatibility with Python 3.11
- Fix for partial approval of a test with custom test comparison
From: Geoff B. <geo...@gm...> - 2023-11-21 12:36:38
|
Hi John!

1) There is a config setting "kill_timeout" which would explain this behaviour, basically terminating the test after 5 minutes. Have you set that in your config file(s) to 300, i.e. 5 minutes? Possibly copied it from a previous one without noticing it was there?

https://texttest.sourceforge.net/index.php?page=documentation_4_3&n=running_texttest_unattended#kill_timeout

2) I don't think the "stack smashing detected" message has anything to do with this issue. At least I am seeing this frequently when closing TextTest these days. Unfortunately it seems to be some issue with newer versions of PyGObject, the UI library that TextTest uses. I recommend adding the following text to your config file:

suppress_stderr_text:stack smashing detected

which will get rid of it in all circumstances.

Regards,
Geoff

On Tue, Nov 21, 2023 at 1:29 PM John Rydén <Joh...@ou...> wrote:
> Hi!
>
> We're using texttest to do a test that is expected to take up to 1h to
> carry through.
> We're using Sikuli to perform the necessary steps in the application.
> The problem is that after about 5 minutes into the test it aborts with an
> error message: "*** stack smashing detected ***: terminated".
>
> Is there something inherent in the framework that prevents doing long-run
> tests like this?
>
> Kind regards
> John
> _______________________________________________
> Texttest-users mailing list
> Tex...@li...
> https://lists.sourceforge.net/lists/listinfo/texttest-users
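For reference, both settings Geoff mentions go in the application's plain-text config file, using the "name:value" syntax shown in his reply. A minimal sketch combining them for John's one-hour test might be - the 3600-second value is an illustrative guess based on the thread's statement that the value is in seconds, not something from the actual documentation:

```
# TextTest config file fragment (illustrative values)
# Allow long-running tests: kill only after one hour (value in seconds)
kill_timeout:3600
# Suppress the spurious PyGObject shutdown message
suppress_stderr_text:stack smashing detected
```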
From: John R. <Joh...@ou...> - 2023-11-21 11:15:19
|
Hi! We're using texttest to do a test that is expected to take up to 1h to carry through. We're using Sikuli to perform the necessary steps in the application. The problem is that after about 5 minutes into the test it aborts with an error message: "*** stack smashing detected ***: terminated". Is there something inherent in the framework that prevents doing long-run tests like this? Kind regards John |
From: Geoff B. <geo...@gm...> - 2023-11-04 09:31:08
|
Hi Michael!

I believe that was the original intention, yes. Perhaps another flag could be introduced on what to do if all reruns fail.

The general idea was originally to handle tests that fail 5% of the time due to circumstances outside your control. A couple of reruns then make this vanishingly unlikely. It wasn't really intended as a means of brute-forcing tests that fail, for example, 50% of the time or more. But it would probably not be difficult to introduce something like this. I will try to have a go next time I get some time for TextTest.

Regards,
Geoff

On Thu, Nov 2, 2023 at 9:36 PM Michael Behrisch <os...@be...> wrote:
> Hi,
> I have a question about rerunning using the knownbugs file.
> I have a test which I want to rerun multiple times, but if all reruns
> fail I still want it to be flagged as a known bug. I tried different
> combinations of "rerun_only" and "internal_error" but without achieving
> the desired result. Is this currently possible? And is it intended that
> the failing test is marked as internal error if all reruns fail?
>
> Best regards,
> Michael
>
> _______________________________________________
> Texttest-users mailing list
> Tex...@li...
> https://lists.sourceforge.net/lists/listinfo/texttest-users
From: Michael B. <os...@be...> - 2023-11-02 20:36:21
|
Hi, I have a question about rerunning using the knownbugs file. I have a test which I want to rerun multiple times but if all reruns fail I still want it to be flagged as known bug. I tried different combinations of "rerun_only" and "internal_error" but without achieving the desired result. Is this currently possible? And is it intended that the failing test is marked as internal error if all reruns fail? Best regards, Michael |
From: Geoff B. <geo...@gm...> - 2023-09-19 06:39:07
|
Hi again Karl,

Much the same observations as for your previous message. (I have no possibility to test this myself - please run the self-tests and submit a pull request, and preferably write a new self-test.)

With the added observation that I think this setup - with some tests running locally and others via a queuesystem in the same run - is not a scenario I have ever run myself, and I would not be surprised to encounter other problems with it than the ones you have found. A simple fix is to run TextTest separately on test suites / applications that need to run sequentially.

Regards,
Geoff

On Fri, Sep 15, 2023 at 5:59 PM Karl Koehler <ka...@ac...> wrote:
> Hi everyone,
>
> as you know, each testsuite can be configured with its own queuesystem.
> This is useful if you have some tests that run fast, and some tests that
> run slowly - you want the fast jobs to be on the local machine because the
> SGE queuing time would likely be longer than the test execution time.
> Thus there are testsuites with:
>     config_module:queuesystem
>     queue_system_module:SGE
> and testsuites with
>     queue_system_module:local
>
> Now here's the bug in texttest: when looking for "all jobs complete", we
> only look at the queuesystem for test[0]. If that is "local" and we have
> SGE tests, this has the effect that we try to attribute error states to all
> other tests, then quit early. Thus:
>
> --- ~/texttest_latest/texttest-master/texttestlib/queuesystem/masterprocess.py 2023-08-28 10:04:33.000000000 -0700
> +++ queuesystem/masterprocess.py 2023-09-15 08:30:15.505148514 -0700
> @@ -183,21 +183,52 @@
>          return queueSystem.supportsPolling()
>
>      def updateJobStatus(self):
> -        queueSystem = self.getQueueSystem(list(self.jobs.keys())[0])
> -        statusInfo = queueSystem.getStatusForAllJobs()
> +        ##
> +        # set of queueSystems
> +        statusInfo = dict()
> +        qsSet = set()
> +        for test in self.jobs.keys():
> +            qsSet.add(self.getQueueSystem(test))
> +        for queueSystem in qsSet:
> +            statusInfo.update(queueSystem.getStatusForAllJobs())
> +
>          self.diag.info("Got status for all jobs : " + repr(statusInfo))
>          if statusInfo is not None:  # queue system not available for some reason
>
> I'd like to say that probably making the following thing a two-pass loop
> is better, waiting for all jobs to complete before we call "qacct". There
> is a gap in time between SGE job completion and the job appearing in qacct,
> and thus if we wait for the queue to be empty we are less likely to spend
> effort waiting for qacct to be ready for individual failed jobs, so we can
> update actually running and passing job status more expediently:
>
> --- ~/texttest_latest/texttest-master/texttestlib/queuesystem/masterprocess.py 2023-08-28 10:04:33.000000000 -0700
> +++ queuesystem/masterprocess.py 2023-09-15 08:51:15.828557223 -0700
> @@ -183,21 +183,38 @@
>          return queueSystem.supportsPolling()
>
>      def updateJobStatus(self):
> -        queueSystem = self.getQueueSystem(list(self.jobs.keys())[0])
> -        statusInfo = queueSystem.getStatusForAllJobs()
> +        ##
> +        # set of queueSystems
> +        statusInfo = dict()
> +        qsSet = set()
> +        for test in self.jobs.keys():
> +            qsSet.add(self.getQueueSystem(test))
> +        for queueSystem in qsSet:
> +            statusInfo.update(queueSystem.getStatusForAllJobs())
> +
>          self.diag.info("Got status for all jobs : " + repr(statusInfo))
>          if statusInfo is not None:  # queue system not available for some reason
> +            ##
> +            # setSlaveFailed only if there are no more jobs running.
> +            activejobs = 0
>              for test, jobs in list(self.jobs.items()):
>                  if not test.state.isComplete():
>                      for jobId, jobName in jobs:
>                          status = statusInfo.get(jobId)
>                          if status:
> +                            activejobs += 1
>                              # Only do this to test jobs (might make a difference for derived configurations)
>                              # Ignore filtering states for now, which have empty 'briefText'.
>                              self.updateRunStatus(test, status)
> -                        elif not status and not self.jobCompleted(test, jobName):
> -                            # Do this to any jobs
> -                            self.setSlaveFailed(test, self.jobStarted(test, jobName), True, jobId)
> +            if activejobs == 0:
> +                for test, jobs in list(self.jobs.items()):
> +                    if not test.state.isComplete():
> +                        for jobId, jobName in jobs:
> +                            status = statusInfo.get(jobId)
> +                            if not status and not self.jobCompleted(test, jobName):
> +                                print("state of %s : %s" % (str(test), test.state.category))
> +                                # Do this to any jobs
> +                                self.setSlaveFailed(test, self.jobStarted(test, jobName), True, jobId)
>
> Similar with the cleanup function.
>
> @@ -391,8 +408,16 @@
>      def cleanup(self, final=False):
>          cleanupComplete = True
>          if self.jobs:
> -            queueSystem = self.getQueueSystem(list(self.jobs.keys())[0])
> -            cleanupComplete &= queueSystem.cleanup(final)
> +            ## multi-queue-system
> +            #
> +            qsSet = set()
> +            for test in self.jobs.keys():
> +                qsSet.add(self.getQueueSystem(test))
> +            for queueSystem in qsSet:
> +                cleanupComplete &= queueSystem.cleanup(final)
> +            #
> +            ##
>
> Thanks,
>
> - Karl Koehler
>
> _______________________________________________
> Texttest-users mailing list
> Tex...@li...
> https://lists.sourceforge.net/lists/listinfo/texttest-users
From: Geoff B. <geo...@gm...> - 2023-09-19 06:34:27
|
Hi Karl!

This seems a plausible fix. I don't have any access to SGE myself any more, so it's difficult for me to reproduce errors like this. Have you run the self-tests at all? (If you're working at my former employer Jeppesen - the only people I know of using TextTest and SGE - there are a few people who could help you with that.)

It would be best if you could submit the change in pull request format on github also.

Regards,
Geoff

On Wed, Sep 13, 2023 at 4:44 PM Karl Koehler <ka...@ac...> wrote:
> Hi,
>
> We are using texttest with SGE, and have found that there is a problem
> when qacct is not fast enough.
>
> What happens:
> (1) The job completes.
> (2) Via a message from the slave, the status of the test is updated
>     (masterprocess.py, line 1025, in handleRequestFromHost). This is an
>     independent thread.
> (3) masterprocess.py, updateJobStatus sees that the job is not in qstat
>     any longer. But in masterprocess.py:198, updateJobStatus, the
>     jobCompleted check is not yet true.
> (4) setSlaveFailed will wait a long time for qacct to finally get the
>     info on the job, at which time step (2) has happened.
>
> Result: failures that are not quite real, and incorrect error messages in
> the test.state.freeText and test.state.briefText.
>
> So, there are questions:
> * Should there be a lock around Test.ChangeState?
> * And what do you think of the following work-around/solution for the
>   problem that SGE is too late, regardless of locking?
>
> -bash-4.2$ diff -du ~/texttest_latest/texttest-master/texttestlib/queuesystem/masterprocess.py texttestlib/queuesystem/masterprocess.py
> --- /home/karlkoehler/texttest_latest/texttest-master/texttestlib/queuesystem/masterprocess.py 2023-08-28 10:04:33.000000000 -0700
> +++ texttestlib/queuesystem/masterprocess.py 2023-09-12 17:00:14.795473685 -0700
> @@ -646,8 +646,11 @@
>          return system
>
>      def changeState(self, test, newState, previouslySubmitted=True):
> -        test.changeState(newState)
> -        self.handleLocalError(test, previouslySubmitted)
> +        # this has to check the test because otherwise slowness in sge qacct will
> +        # set the state to failed and with the wrong message.
> +        if not test.state.isComplete():
> +            test.changeState(newState)
> +            self.handleLocalError(test, previouslySubmitted)
>
> Thanks,
> Karl Koehler
>
> _______________________________________________
> Texttest-users mailing list
> Tex...@li...
> https://lists.sourceforge.net/lists/listinfo/texttest-users
From: Karl K. <ka...@ac...> - 2023-09-15 15:58:53
|
Hi everyone,

as you know, each testsuite can be configured with its own queuesystem. This is useful if you have some tests that run fast, and some tests that run slowly - you want the fast jobs to be on the local machine because the SGE queuing time would likely be longer than the test execution time. Thus there are testsuites with:

    config_module:queuesystem
    queue_system_module:SGE

and testsuites with

    queue_system_module:local

Now here's the bug in texttest: when looking for "all jobs complete", we only look at the queuesystem for test[0]. If that is "local" and we have SGE tests, this has the effect that we try to attribute error states to all other tests, then quit early. Thus:

--- ~/texttest_latest/texttest-master/texttestlib/queuesystem/masterprocess.py 2023-08-28 10:04:33.000000000 -0700
+++ queuesystem/masterprocess.py 2023-09-15 08:30:15.505148514 -0700
@@ -183,21 +183,52 @@
         return queueSystem.supportsPolling()

     def updateJobStatus(self):
-        queueSystem = self.getQueueSystem(list(self.jobs.keys())[0])
-        statusInfo = queueSystem.getStatusForAllJobs()
+        ##
+        # set of queueSystems
+        statusInfo = dict()
+        qsSet = set()
+        for test in self.jobs.keys():
+            qsSet.add(self.getQueueSystem(test))
+        for queueSystem in qsSet:
+            statusInfo.update(queueSystem.getStatusForAllJobs())
+
         self.diag.info("Got status for all jobs : " + repr(statusInfo))
         if statusInfo is not None:  # queue system not available for some reason

I'd like to say that probably making the following thing a two-pass loop is better, waiting for all jobs to complete before we call "qacct". There is a gap in time between SGE job completion and the job appearing in qacct, and thus if we wait for the queue to be empty we are less likely to spend effort waiting for qacct to be ready for individual failed jobs, so we can update actually running and passing job status more expediently:

--- ~/texttest_latest/texttest-master/texttestlib/queuesystem/masterprocess.py 2023-08-28 10:04:33.000000000 -0700
+++ queuesystem/masterprocess.py 2023-09-15 08:51:15.828557223 -0700
@@ -183,21 +183,38 @@
         return queueSystem.supportsPolling()

     def updateJobStatus(self):
-        queueSystem = self.getQueueSystem(list(self.jobs.keys())[0])
-        statusInfo = queueSystem.getStatusForAllJobs()
+        ##
+        # set of queueSystems
+        statusInfo = dict()
+        qsSet = set()
+        for test in self.jobs.keys():
+            qsSet.add(self.getQueueSystem(test))
+        for queueSystem in qsSet:
+            statusInfo.update(queueSystem.getStatusForAllJobs())
+
         self.diag.info("Got status for all jobs : " + repr(statusInfo))
         if statusInfo is not None:  # queue system not available for some reason
+            ##
+            # setSlaveFailed only if there are no more jobs running.
+            activejobs = 0
             for test, jobs in list(self.jobs.items()):
                 if not test.state.isComplete():
                     for jobId, jobName in jobs:
                         status = statusInfo.get(jobId)
                         if status:
+                            activejobs += 1
                             # Only do this to test jobs (might make a difference for derived configurations)
                             # Ignore filtering states for now, which have empty 'briefText'.
                             self.updateRunStatus(test, status)
-                        elif not status and not self.jobCompleted(test, jobName):
-                            # Do this to any jobs
-                            self.setSlaveFailed(test, self.jobStarted(test, jobName), True, jobId)
+            if activejobs == 0:
+                for test, jobs in list(self.jobs.items()):
+                    if not test.state.isComplete():
+                        for jobId, jobName in jobs:
+                            status = statusInfo.get(jobId)
+                            if not status and not self.jobCompleted(test, jobName):
+                                print("state of %s : %s" % (str(test), test.state.category))
+                                # Do this to any jobs
+                                self.setSlaveFailed(test, self.jobStarted(test, jobName), True, jobId)

Similar with the cleanup function.

@@ -391,8 +408,16 @@
     def cleanup(self, final=False):
         cleanupComplete = True
         if self.jobs:
-            queueSystem = self.getQueueSystem(list(self.jobs.keys())[0])
-            cleanupComplete &= queueSystem.cleanup(final)
+            ## multi-queue-system
+            #
+            qsSet = set()
+            for test in self.jobs.keys():
+                qsSet.add(self.getQueueSystem(test))
+            for queueSystem in qsSet:
+                cleanupComplete &= queueSystem.cleanup(final)
+            #
+            ##

Thanks,

- Karl Koehler
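The core idea of Karl's patch - collect the set of distinct queue systems across all jobs instead of polling only the system belonging to the first test, then merge their status maps - can be illustrated standalone. The class and function names below are simplified stand-ins, not TextTest's actual API:

```python
# Minimal illustration of polling every distinct queue system in use,
# rather than just the one for the first test. Names are hypothetical.

class FakeQueueSystem:
    def __init__(self, name, statuses):
        self.name = name
        self._statuses = statuses  # jobId -> status string

    def getStatusForAllJobs(self):
        return dict(self._statuses)

def update_job_status(jobs, queue_system_for):
    """jobs: test -> [(jobId, jobName)]; queue_system_for: test -> queue system."""
    # Build the set of distinct queue systems actually in use
    systems = {queue_system_for(test) for test in jobs}
    # Merge status info from every system, as in the patch
    status_info = {}
    for system in systems:
        status_info.update(system.getStatusForAllJobs())
    return status_info

local = FakeQueueSystem("local", {"l1": "running"})
sge = FakeQueueSystem("SGE", {"1033": "queued"})
jobs = {"testA": [("l1", "jobA")], "testB": [("1033", "jobB")]}
systems_by_test = {"testA": local, "testB": sge}.get

merged = update_job_status(jobs, systems_by_test)
# Both systems now contribute, so SGE jobs are no longer misreported as vanished
print(sorted(merged))  # ['1033', 'l1']
```

With the original single-system lookup, a run whose first test used the "local" system would see an empty status for every SGE job and wrongly treat them as failed; merging over the set avoids that.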
From: Karl K. <ka...@ac...> - 2023-09-13 14:43:53
|
Hi,

We are using texttest with SGE, and have found that there is a problem when qacct is not fast enough.

What happens:
(1) The job completes.
(2) Via a message from the slave, the status of the test is updated (masterprocess.py, line 1025, in handleRequestFromHost). This is an independent thread.
(3) masterprocess.py, updateJobStatus sees that the job is not in qstat any longer. But in masterprocess.py:198, updateJobStatus, the jobCompleted check is not yet true.
(4) setSlaveFailed will wait a long time for qacct to finally get the info on the job, at which time step (2) has happened.

Result: failures that are not quite real, and incorrect error messages in the test.state.freeText and test.state.briefText.

So, there are questions:
* Should there be a lock around Test.ChangeState?
* And what do you think of the following work-around/solution for the problem that SGE is too late, regardless of locking?

-bash-4.2$ diff -du ~/texttest_latest/texttest-master/texttestlib/queuesystem/masterprocess.py texttestlib/queuesystem/masterprocess.py
--- /home/karlkoehler/texttest_latest/texttest-master/texttestlib/queuesystem/masterprocess.py 2023-08-28 10:04:33.000000000 -0700
+++ texttestlib/queuesystem/masterprocess.py 2023-09-12 17:00:14.795473685 -0700
@@ -646,8 +646,11 @@
         return system

     def changeState(self, test, newState, previouslySubmitted=True):
-        test.changeState(newState)
-        self.handleLocalError(test, previouslySubmitted)
+        # this has to check the test because otherwise slowness in sge qacct will
+        # set the state to failed and with the wrong message.
+        if not test.state.isComplete():
+            test.changeState(newState)
+            self.handleLocalError(test, previouslySubmitted)

Thanks,
Karl Koehler
From: Geoff B. <geo...@gm...> - 2023-04-13 19:29:11
|
Hi John,

Perhaps you could give an example of what you mean here, I'm having trouble imagining how it would look. In general it's not very easy to filter duplicate lines as such if there are no other patterns to go on, but it sounds like you're not after just the same log message appearing multiple times?

I generally try to attack such things from the other side, i.e. configure the logging in the system so it doesn't do that - the principle being that logging that is annoying for TextTest is often annoying for the users too :)

Regards,
Geoff

On Thu, Apr 13, 2023 at 9:41 AM John Rydén <Joh...@ou...> wrote:
> Hi!
>
> Is there a way to filter duplicate lines (after replacement of timestamps
> and such)?
>
> I have output that is time dependent and there will be a different number
> of duplicate (apart from the timestamp) lines depending on the time the
> call is waiting.
>
> Kind regards
> John
> _______________________________________________
> Texttest-users mailing list
> Tex...@li...
> https://lists.sourceforge.net/lists/listinfo/texttest-users
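Nothing in this thread suggests TextTest has a built-in "collapse duplicates" filter, but the effect John describes could be approximated by post-processing the log before comparison. A standalone sketch (ordinary Python, not TextTest configuration) that collapses consecutive identical lines, uniq-style:

```python
def collapse_consecutive_duplicates(lines):
    """Yield each line, skipping lines identical to the one immediately before."""
    previous = object()  # sentinel that never equals a string
    for line in lines:
        if line != previous:
            yield line
        previous = line

# After timestamps are stripped, repeated "waiting" lines collapse to one
log = ["call waiting", "call waiting", "call waiting", "call answered"]
print(list(collapse_consecutive_duplicates(log)))
# ['call waiting', 'call answered']
```

This only works after timestamp replacement, as John notes - otherwise no two lines are ever identical.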
From: John R. <Joh...@ou...> - 2023-04-13 07:41:38
|
Hi!

Is there a way to filter duplicate lines (after replacement of timestamps and such)?

I have output that is time dependent and there will be a different number of duplicate (apart from the timestamp) lines depending on the time the call is waiting.

Kind regards
John
From: Geoff B. <geo...@gm...> - 2023-04-05 12:47:27
|
Hi all,

There's a new TextTest out now. This includes the support for trx and JetBrains test result formats that Emily mentioned in her previous message on this list, see below. I have also (finally!) updated the old documentation page with the details of this; you can search for the new "batch_external_format" setting at

https://texttest.sourceforge.net/index.php?page=documentation_4_3&n=configfile_default

which explains how to use this new configuration.

Regards,
Geoff

New development:
- We now support multiple external formats for test results, Visual Studio's trx format and JetBrains XML format. This means it is now possible to run TextTest from within Visual Studio or JetBrains IDEs (e.g. PyCharm, IntelliJ, CLion) and present the results there.

Bugfixes:
- Default interpreter for Python on Windows changed to "py -3" launcher, after 4.2.3 introduced usage of "py". (Python 3.11 now gives an error if it is run without arguments)
- Fixed bug #107 AttributeError when reordering test suites (thanks Michael Behrisch)
From: Geoff B. <geo...@gm...> - 2023-03-21 18:35:44
|
Hi Hamid,

I don't think there is a way to do that as such. What you can do is write your own wrapper script that runs your system a configurable number of times, and treat that as the system under test that TextTest can then measure.

Regards,
Geoff

On Tue, Mar 21, 2023 at 11:53 AM Hamid Kharraziha via Texttest-users <tex...@li...> wrote:
> Hi TextTest users
>
> Is there a recommended way to set up a test-case so it runs several times
> and the (CPU) performance is measured on the sum or average of the runs?
> We would like to try this as a way to get more stable performance results.
>
> Thanks and regards,
> Hamid
>
> Hamid Kharraziha
> Senior Decision Scientist
> Service Delivery Platform
> A.P. Møller – Mærsk
> Oslo Plads 2, 2100 Copenhagen Ø, Denmark
> Phone work: +45 31 15 38 23
> Phone private: +46 76 557 1930
>
> Classification: Internal
> ------------------------------
> The information contained in this message is privileged and intended only
> for the recipients named. If the reader is not a representative of the
> intended recipient, any review, dissemination or copying of this message
> or the information it contains is prohibited. If you have received this
> message in error, please immediately notify the sender, and delete the
> original message and attachments.
>
> Maersk will as part of our communication and interaction with you collect
> and process your personal data. You can read more about Maersk's
> collection and processing of your personal data and your rights as a data
> subject in our privacy policy
> <https://www.maersk.com/front-page-requirements/privacy-policy>
>
> Please consider the environment before printing this email.
> _______________________________________________
> Texttest-users mailing list
> Tex...@li...
> https://lists.sourceforge.net/lists/listinfo/texttest-users
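Geoff's suggestion - wrap the system under test in a script that runs it a configurable number of times, and let TextTest measure the wrapper - could look something like this. The run count and command line are placeholders; this is an ordinary wrapper sketch, not a TextTest feature:

```python
#!/usr/bin/env python3
"""Hypothetical wrapper: run the real system under test RUNS times, so that
the cost TextTest measures is the sum over several runs (more stable)."""
import subprocess
import sys

RUNS = 5  # repetitions to smooth out performance noise

def run_repeatedly(argv, runs=RUNS):
    # argv is the real command line, e.g. ["./my_system", "--input", "x"]
    for _ in range(runs):
        result = subprocess.run(argv)
        if result.returncode != 0:
            return result.returncode  # fail fast on a genuine error
    return 0

if __name__ == "__main__":
    sys.exit(run_repeatedly(sys.argv[1:]))
```

TextTest would then be pointed at this wrapper instead of the system itself; the measured CPU time is the sum across runs, and dividing by RUNS gives the average.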
From: Hamid K. <ham...@ma...> - 2023-03-21 10:52:57
|
Hi TextTest users

Is there a recommended way to set up a test-case so it runs several times and the (CPU) performance is measured on the sum or average of the runs? We would like to try this as a way to get more stable performance results.

Thanks and regards,
Hamid

Hamid Kharraziha
Senior Decision Scientist
Service Delivery Platform
A.P. Møller – Mærsk
Oslo Plads 2, 2100 Copenhagen Ø, Denmark
Phone work: +45 31 15 38 23
Phone private: +46 76 557 1930

Classification: Internal
________________________________
The information contained in this message is privileged and intended only for the recipients named. If the reader is not a representative of the intended recipient, any review, dissemination or copying of this message or the information it contains is prohibited. If you have received this message in error, please immediately notify the sender, and delete the original message and attachments.

Maersk will as part of our communication and interaction with you collect and process your personal data. You can read more about Maersk's collection and processing of your personal data and your rights as a data subject in our privacy policy <https://www.maersk.com/front-page-requirements/privacy-policy>

Please consider the environment before printing this email.
From: Emily B. <em...@ba...> - 2023-03-17 14:06:22
|
Hi,

Geoff and I have been doing some work this week on improving the developer experience when using TextTest while developing code in an IDE like Visual Studio or PyCharm. One thing we ran into was that IDEs generally have a way to import test results, but the format is not consistent. In the JetBrains world of IDEs there is some consistency, but Rider (for C#) is different, and I filed this bug:

https://youtrack.jetbrains.com/issue/RIDER-91008/export-import-unit-test-results-in-same-xml-format-as-intellij-pycharm-clion-etc

The other IDEs in the JetBrains family all use a proprietary XML format which is at least consistent between IntelliJ, PyCharm and CLion.

If you like the idea of being able to run TextTest from Rider and see the test results there, and be able to click on stack traces for example, you could help by up-voting this issue. I believe you need to register for a free account before they will let you vote.

Regards,
Emily
From: Geoff B. <geo...@gm...> - 2023-02-03 09:14:51
|
Hi all,

There's a new TextTest out now. We have fixed a couple of annoying longstanding issues, and we have a new feature in the HTML report! Details below.

Regards,
Geoff

New development:
- In batch mode HTML pages there is now an additional control to make the filtering apply to any run, not just the most recent. Thanks to Matz Larsson for this change.

Bugfixes:
- The default interpreter for Python on Windows has changed to the "py" launcher, as Python is often not on the PATH.
- Fixed a problem where checkboxes got enabled when clicking outside them. |
From: Geoff B. <geo...@gm...> - 2022-11-09 14:13:28
|
Hi Hamid,

There is no way to do that at the moment. You can get it to generate HTML pages which show all the results in a table and provide the diffs on a click. You can also get it to generate JUnit format, which can be read and presented by standard CI tools, for example. You can also get it to send an email, as you mention. And you can "reconnect" to the results, i.e. load them into the TextTest GUI after the run.

Regards,
Geoff

On Wed, Nov 9, 2022 at 8:57 AM Hamid Kharraziha via Texttest-users <tex...@li...> wrote:
> Hi
>
> We run some tests as part of the container build. The build fails if any
> of the tests fail. When the build fails we can see the output, and texttest
> prints which tests failed and on which results. We think it would help a
> lot to also see the diffs, e.g. the first 30 lines of text-diff. Especially
> if our application fails with an error we would like to see the error message.
> Is there a way to make the batch mode automatically print the diffs?
>
> I can see there is an interactive mode where you can get details on
> request, and texttest can send email, possibly with more details. In our
> case it would be easier just to see them in the console.
>
> Thanks and regards,
> Hamid |
From: Hamid K. <ham...@ma...> - 2022-11-09 07:57:13
|
Hi,

We run some tests as part of the container build. The build fails if any of the tests fail. When the build fails we can see the output, and texttest prints which tests failed and on which results. We think it would help a lot to also see the diffs, e.g. the first 30 lines of text-diff. Especially if our application fails with an error we would like to see the error message.

Is there a way to make the batch mode automatically print the diffs?

I can see there is an interactive mode where you can get details on request, and texttest can send email, possibly with more details. In our case it would be easier just to see them in the console.

Thanks and regards,
Hamid |
From: Hamid K. <ham...@ma...> - 2022-10-31 09:53:16
|
Thanks Geoff. We will check this setting. We have specified

[interpreters]
python:python
[end]

in the config. I suppose this is still not enough for Windows.

Regards,
Hamid

From: Geoff Bache <geo...@gm...>
Sent: Monday, 31 October 2022 09.11
To: General list for texttest issues <tex...@li...>
Subject: Re: [Texttest-users] How to debug no results with collate script

Hi Hamid!

A classic cause of this kind of problem is that he might not have Python scripts (.py files) associated with Python on his machine. I would suggest checking that before going much further. I have had colleagues here whose collate_script appeared in their favourite editor when they ran the tests.

(The diags you mention sound about right. I believe the debug level is always INFO, but DEBUG cannot harm...)

Regards,
Geoff

On Mon, Oct 31, 2022 at 8:53 AM Hamid Kharraziha via Texttest-users <tex...@li...> wrote:

Hi,

Asking for some debug help. We have a setup for Windows that works fine on my machine, but for one colleague some of the results are missing. These results are produced with a collate script, which is a Python script that does some reformatting. Results are found when he uncomments the collate script in the config. The collate script runs fine from the command line. We don't see any error message, but e.g. if we rename the script there is a message that it is not found. We might have different but recent versions of texttest (mine is 4.1.3). Any ideas what could go wrong?

We want to try the texttest diags on his machine. Which diags would be relevant for this problem? I was considering "Collate Files" and "Prepare Writedir". Anything else that could be useful? Is the debug level always INFO, or can DEBUG give more details?

Thanks.
/Hamid |
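Following up on the config fragment quoted in this message: the 2023-02-03 release note earlier in this archive says the default interpreter for Python on Windows was changed to the "py" launcher precisely because Python is often not on the PATH. Assuming the same [interpreters] section syntax carries over (a sketch, not verified against the TextTest documentation), pointing the entry at the launcher might be worth trying on the affected Windows machine:

```
[interpreters]
python:py
[end]
```

If that syntax assumption is wrong, the TextTest configuration docs for interpreter settings are the place to check.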