From: <cr...@be...> - 2010-03-29 15:52:46
Art, I have the impetus to fix some, if not all, of the failing tests. Considering that other developers on this list seem unresponsive until something stops working for them, how do you recommend I submit my proposed changes? I see two main ways of going about this:

1. I commit my changes directly as I judge them suitable, then resolve any issues people encounter later and revert the changes if need be.

2. I post patches to the list for review by interested parties, then commit. Ideally I'd get feedback from at least one person, but I'm skeptical of that.

I would like to get feedback because I'm not an expert in RFC 2445, though I do read it.

So far I've been fixing tests that are clearly wrong on the test side (mainly failures due to incorrect line endings). Most of the remaining failures seem related to timezone conversions. From what I've seen, the tests do appear to be geared toward standards compliance, but I don't think they are failing because we are trying to maintain compatibility with other software. And from what I've seen of the code, there aren't many compatibility hacks (at least labeled ones). So I'm a little concerned that I might not realize a test is failing because of a compatibility hack. In that case, I'll probably just have to break compatibility and restore it when notified, along with a comment noting the hack.

What do you think about this?

Glenn

On Sun, 28 Mar 2010 07:42:16 -0400 "IGnatius T Foobar" <roo...@un...> wrote:

> > Anyone noticed that the regression tests fail in SVN? I think we
> > should have a policy that all code commits cause no tests to fail
> > (after we get the tests to a point where they are passing).
> > Anyone else have opinions on this? Is there any reason not to fix
> > the regression tests?
>
> They were broken when we (the current maintainers) originally
> inherited the code.
> While I agree that it needs to be fixed, I'm not
> quite sure where to begin.
>
> I suppose the first step is to determine whether it's the code or
> the tests that are broken.
>
> It's also possible that the tests are built to determine standards
> compliance, and they've just always been that way because there are
> things we just don't do according to the standards. If that's the
> case then they'll always fail because we prioritize interoperability
> with other software that's actually out there in the real world.
>
> Dunno. Suggestions are welcome. Code contributions are even more
> welcome. :)
>
> -- Art
>
> _______________________________________________
> Freeassociation-devel mailing list
> Fre...@li...
> https://lists.sourceforge.net/lists/listinfo/freeassociation-devel