From: Brian G. <bri...@ea...> - 2024-10-24 19:37:49
This is an area I struggled with professionally when implementing application testing, as well as when running Tcl/Tk tests.

With respect to window managers, there is a battle over "who's in charge", meaning who determines what happens to the windows and the mouse pointer. The three actors are the window manager, the application, and the user. The degree of control varies wildly between window managers and operating systems. All Tk can do is attempt to provide "access" to the window manager; otherwise it has no say when all is said and done. The same goes for the mouse pointer. This makes testing precarious at best.

The other area of Tk failures is fonts. In my experience, no two computers have the same set of fonts. Consequently, it's impossible to get stable test results by simply assuming some minimum set. What we did for our application testing was to ship our own fonts with the application, and use those by default. Our test environment also fixed upon one (simple) window manager. Only under these conditions could we achieve stable test results. The choice to include a font set with the app also solved customer installation issues (primarily on Linux). Even with these measures, our application GUI testing was still noisy, but less so than before.

Because of the success I've seen with respect to fonts, I've often thought that Tk should be shipped with a set of base fonts, to provide more consistency out of the box.

These are my thoughts on this matter.

-Brian

On Oct 24, 2024, at 10:45, Erik Leunissen <el...@xs...> wrote:

On 10/23/24 22:49, Francois Vogel wrote:
> I don't think there is any reference environment in which the tests are supposed to pass. People see different failures in the Tk test suite in different environments. Tk itself is supposed to work with any of those, and it's probably the case, but some tests are sensitive to the environment, so we cannot prove it easily.

OK. That's clear. Thanks.
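Brian's approach above — bundling fonts with the application and defaulting to them — can be sketched in Tk terms. This is a minimal illustration, not code from the thread; the DejaVu family names are placeholders for whatever fonts an application would actually ship, and registering the bundled font files with the OS/fontconfig is platform-specific and not shown:

```tcl
# Minimal sketch: create named fonts with a fixed, shipped family so
# widgets (and tests) do not depend on whatever the platform defaults to.
# "DejaVu Sans" / "DejaVu Sans Mono" are placeholder families.
package require Tk

font create AppDefaultFont -family "DejaVu Sans"      -size 10
font create AppFixedFont   -family "DejaVu Sans Mono" -size 10

# Point every widget class at the named fonts via the option database.
option add *font      AppDefaultFont startupFile
option add *Text.font AppFixedFont   startupFile
```

A side benefit of named fonts is that a single later `font configure AppDefaultFont -size 11` updates every widget that uses the font.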
(The reason for my asking about this was that some window managers exhibit exceptional issues/peculiarities, and I thought it would be wise for Tk to have some policy drawing a line as to how far it goes in adapting its tests to such quirks, as we see them for example in tickets #e3888d5820 (the Aurorae issue) and #3733b924a2.)

>> So, only if there is a reference graphical environment of sorts:
>> - which window manager is being used?
>> - does the wm run decoupled from / without a physical screen, like with xvfb?
>> - does the wm run stand-alone, or does it run with a desktop environment being active?
>> - if a desktop environment is active, then which one?
>> - has all of this been documented/standardized as well?

> Personally speaking, on Linux I'm using Debian with KDE, with a physical screen. I also run the tests under xvfb sometimes, mainly to try to figure out why they don't fail for me while they fail at CI. And I'm seeing several tens of failures, both with and without xvfb, and the failures are not all the same in the two cases. A similar issue is with the fonts, which may or may not be present on the system running the tests.

OK. I understand.

> In the process of looking at the failures on my system I sometimes stumble on a real Tk bug that a failing test reveals, and I fix this bug if I can. What is interesting is to realize that the failures at CI, which are masked by "failsOnUbuntu" (or "failsOnXQuartz") constraints, fall in the same category as those I'm seeing on my particular system: they can be real failures demonstrating issues in Tk (not in the test environment). I think the constraints blindly set on tests just because they "failsOnUbuntu" are a plain mistake, masking the lack of analysis in the interest of having green lights at CI. We are purposely hiding failing test results just to see green lights at CI.
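For context, the mechanism under discussion is tcltest's constraint system. The following is a rough sketch only — the `failsOnUbuntu` definition here is simplified, not the actual one from Tk's test suite, and the capability-style constraint name is invented for illustration — contrasting a blanket "skip it where CI runs" constraint with one that checks the precise precondition a test needs:

```tcl
# Sketch: blanket environment constraint vs. capability-based constraint.
# Not the actual definitions from Tk's test suite.
package require Tk
package require tcltest
namespace import tcltest::*

# Blanket constraint: the test simply never runs where the CI environment
# variable is set, whatever the underlying reason for the failure was.
testConstraint failsOnUbuntu [expr {![info exists ::env(CI)]}]

# Capability-based alternative (illustrative): skip only when the precise
# precondition is absent, here a specific font family.
testConstraint haveDejaVuSans [expr {"DejaVu Sans" in [font families]}]

test font-measure-1.1 {a known font measures to a positive width} \
        -constraints haveDejaVuSans -body {
    expr {[font measure {{DejaVu Sans} 12} "abc"] > 0}
} -result 1

cleanupTests
```

With the capability-based form, a skip tells you exactly which precondition was missing, instead of silently masking any failure that happens to occur on one platform.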
> This has advantages in the ease of seeing regressions (green lights suddenly turn red when some commit is wrong), but, bottom line, this way of setting a constraint on a test because it fails is just a scam and should be eradicated. I'm trying to iron all this out, step by step, one by one, when I have some time, in the branch less_tests_constraints, but I'm far from being done (read: help welcome).

There is more structural time becoming available to me in the near future, and I'd be happy/grateful to be able to contribute to Tk on a more structural basis than the odd bug ticket. The question is in what area my personal capabilities are most effective, and I'll especially consider the work on the branch less_tests_constraints. Before committing myself, I need to gain a bit more overview of the eligible areas where I can best help out, and I will come back to you about this in due time (maybe better by private email instead of this list?).

I'll have a look at the work done so far on the branch less_tests_constraints in the Fossil repository. Please let me know if you have more information or tips that would help in gaining an overview of the work (done/todo) on that branch (instead of browsing through all individual commits).

Regards,
Erik.

> --
> Regards,
> Francois

_______________________________________________
Tcl-Core mailing list
Tcl...@li...
https://lists.sourceforge.net/lists/listinfo/tcl-core