From: Ania P. <ani...@gm...> - 2008-07-04 16:13:47
Hello,

I'm Ania Pawelczyk, a student from Poland who is participating in Google Summer of Code and working on updating the Tk test suite. I'd like to ask for comments on my work, and to find out whether my assumptions are right. I'd also be thankful for hints and suggestions about aspects I have omitted but should concentrate on. I'd very much welcome it if the people involved in upgrading the test suite (like dgp, patthoyts, dkf, das, ...) would steer me in the right direction.

So far I have made the following assumptions and concentrated on these aspects:

1. Failing tests

I work by comparing results across different environments, and my goal is to get the Tk tests passing on all of them. Currently I work on Slackware, Ubuntu and Windows XP (on 2 machines; tomorrow I'll get a third one, which I think is the minimum for simultaneous testing). WinXP and Ubuntu pass rather well (Ubuntu generally has problems with tests guarded by the fonts constraint, whereas Win and Slackware don't pass the fonts constraint at all). On the third machine I'd install Vista and, e.g., SuSE.

I usually assume that a test which passes on one machine but not on the others probably has "good" (i.e. required) settings there, and that the failing machines lack something or make wrong assumptions. I compare configurations and try to find what, for example:
 - wasn't set, but matters for the result and may have different default values
 - was wrongly assumed somewhere (like in http://www.assembla.com/flows/show/brq6hCsrar3BB4ab7jnrAJ)

Is this all right?

2. constraints.tcl

I also wonder whether I could change constraints.tcl so that the constraints are more likely to be satisfied (like fonts; see http://www.assembla.com/flows/show/dM_zSCsdWr3zNyab7jnrAJ). Or should I instead only try different desktop settings? Or maybe I should leave the tests with skipped constraints as they are?

3. Individual test structure

Furthermore, should I also restructure the tests so that they follow the pattern presented in the tcltest man page? For example, message.test.
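Regarding point 2 above, here is a rough sketch of the kind of change I have in mind for the fonts constraint. This is hypothetical, not the current constraints.tcl code: `fontAvailable` is a helper name I made up, and the Helvetica/Times specs are just assumed examples. The idea is to ask the font engine what it actually matched instead of comparing hard-coded pixel sizes, so the constraint would pass on any system where the requested families exist:

```tcl
# Hypothetical replacement for the fonts constraint in constraints.tcl.
# Instead of checking exact widget pixel sizes, check whether the font
# system really matched the requested family.
package require Tk
package require tcltest

proc fontAvailable {spec family} {
    # [font actual] reports the font the system actually selected
    string equal -nocase [font actual $spec -family] $family
}

tcltest::testConstraint fonts [expr {
    [fontAvailable {Helvetica -12} Helvetica] &&
    [fontAvailable {Times -14} Times]
}]
```

Would changing the constraint definition like this be acceptable, or is the exact-pixel check deliberate?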
message.test is the file I was advised to work on first. Since it passes all its tests on my current OSes and on my friends' machines, what should be done with it? Something like this?

# Current look:
test message-3.6 {MessageWidgetObjCmd procedure, "configure"} {
    message .m
    set result [list [catch {.m configure -foo} msg] $msg]
    destroy .m
    set result
} {1 {unknown option "-foo"}}

# What I thought about:
test message-3.0 {MessageWidgetObjCmd procedure, "cget"} -setup {
    message .m
} -body {
    .m cget
} -cleanup {
    destroy .m
} -returnCodes error -result {wrong # args: should be ".m cget option"}

Quite often there are also widgets defined and configured outside the test cases, with later test cases relying on those earlier settings; sometimes several tests take advantage of the same settings. Should I take care of this and turn it into (identical) individual per-test setups?

4. Wiki page

I added a wiki page for my project where I put my ideas (marked (i)) and questions (marked (?)):
http://www.assembla.com/flows/flow/a_KwcurkGr3ztnab7jnrAJ

For the moment there are RFCs for, briefly, the following problems:
> Should I try to rework the tktest constraints so that they are more likely to be satisfied? {(?i) constraints.tcl - testConstraint fonts}
> How to figure out in Tcl whether a definition was made during compilation? {(?) frame.test frame-2.5}
> Strange Ubuntu behaviour - wm geometry x and y coordinates {(?) wm.test - wm-geometry-2.1 in Ubuntu}
> RFC scrollbar.test scrollbar-3.42

I'd very much welcome any comments on these, or new threads with suggestions. I'm also available via email: ani...@gm...

Thanks in advance,
Regards,
Ania.