From: Julian S. <js...@ac...> - 2006-09-02 01:34:39
Now that 3.2.0 is out, and 3.2.1 is not far away, I've been musing about what sorts of things might go into a 3.3 line of Valgrind. Here are some suggestions for discussion.

J

Comments on upcoming releases
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I am hoping to roll 3.2.1 soon (in the next couple of weeks). 3.2.0 seems to have been pretty stable, but enough bugs have cropped up, a couple of them serious, to make 3.2.1 worthwhile.

I've found that making major releases (3.X.0) is a major hassle. 3.1.0 and 3.2.0 each took two weeks full time by the time all the regression-test fixing, packaging and testing on 99 different platforms was done. As a consequence:

* I would prefer not to release 3.3.0 until early next year.

* We should keep 3.2.X alive longer than 3.1.X and 3.0.X were, with at least a 3.2.2 release some time around November.

* It would be nice to have a second person to help with some of the testing work for 3.3.0.

New tools for 3.3.0
~~~~~~~~~~~~~~~~~~~
In 3.X so far the primary emphasis has been on portability, stability and speed. Now there are some new tools under development (drd, omega, covergrind) and some old experimental ones, in various states of disrepair, which it would be nice to play with from time to time (annelid, diduce, helgrind).

I've been thinking of splitting the tools into two groups: production-grade tools (memcheck, cachegrind, callgrind, massif) and experimental tools (drd, omega, covergrind, annelid, diduce, helgrind), and having both sets in the standard Valgrind tree. For production tools we would continue to ensure they are of high quality and stable, as at present. For experimental tools we would try to ensure they compile, but give no assurance beyond that.

The problem with experimental tools is that they need a lot of engineering effort to reach production status (or to reach the conclusion that the tool cannot be moved to production status for technical reasons).
Getting that effort means having users try them out and contribute feedback and patches. Putting the tools in the tree, and having them compile, even if they don't work well, makes it a lot easier for users to do that. It's also more inclusive for the tool authors. There are downsides:

- more code in the tree inevitably means a higher maintenance overhead

- stability of the existing code base is important, and we don't want to undermine that

I'm thinking of an arrangement in which experimental-tool authors have commit access to the tree, but:

- we make it clear it is their responsibility to keep their tools compiling and working

- we ask that such authors do not commit changes outside of their individual tools without first consulting the core developers

New ports for 3.3.0
~~~~~~~~~~~~~~~~~~~
This year there has been quiet but steady work towards porting V away from Linux. There are now ports to FreeBSD, MacOSX and AIX5 in various states of progress, and it seems likely that some of those will appear in the mainline tree at some point.

I am expecting that our existing porting infrastructure will continue to be refined and extended so that these ports can be accommodated without majorly intrusive changes to the majority of the code base. So far, with the Linux ports to x86, amd64, ppc32 and ppc64, the target-specific stuff has been isolated in relatively few places (eg, m_syswrap, m_sigframe, m_syscall, VEX), leaving the rest of the system fairly untouched, and I am hoping this can continue.

The regression-test infrastructure has proven invaluable in making V as reliable as it is. However, even on Linux it is too sensitive to changes in stack backtraces and address-space layouts, and as a result reports failures for tests that really should succeed. With new ports on the horizon this problem is about to get much worse. If anyone is motivated to overhaul this unglamorous but critical subsystem, that would be much appreciated.
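For what it's worth, one common way to make such tests less sensitive (a sketch only; the names and patterns below are illustrative, not Valgrind's actual regtest filter scripts) is to pass both expected and actual output through a filter that masks the parts that legitimately vary between runs and platforms, such as code addresses and library paths:

```shell
# Hypothetical filter: mask volatile details so that expected-vs-actual
# comparisons ignore code addresses and mapped-object paths, both of
# which change with address-space layout and platform.
normalize() {
  sed -e 's/0x[0-9A-Fa-f][0-9A-Fa-f]*/0x......../g' \
      -e 's/(in [^)]*)/(in ...)/g'
}

# Example: two runs that differ only in load addresses compare equal
# after filtering.
echo "Invalid read of size 4 at 0x4005A2 (in /usr/lib/libc.so)" | normalize
# prints: Invalid read of size 4 at 0x........ (in ...)
```

The same idea extends to truncating backtraces to a fixed depth, so that extra or missing frames below the interesting ones don't flip a test from pass to fail.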