From: Josef W. <Jos...@gm...> - 2005-06-30 11:59:12
On Thursday 30 June 2005 05:54, Nicholas Nethercote wrote:
> On Thu, 30 Jun 2005, Josef Weidendorfer wrote:
> > Linking with valgrind at build time would make RPMs independent, i.e. a
> > callgrind installation could exist without valgrind at all or with
> > another valgrind release. Similarly, I am currently forced to install
> > into the same prefix, as valgrind has to find the tool.
>
> That's a good point. I was assuming that Callgrind would have to link
> with the already installed Valgrind, but it can have its own copy in the
> RPM.

I thought the idea was to statically link the tool with valgrind at build
time. What else would an RPM have to provide from core valgrind other than
the built executable? In this sense, there is no separate copy in the RPM.
Am I missing something here? Configuration files (suppressions etc.) are
tool dependent, and help files in tool RPMs would cover only the tool.

> This would give you more independence from the main Valgrind releases,
> because you don't necessarily have to update Callgrind when a new Valgrind
> is released

It has always been possible to install a second Valgrind release in some
private path and adjust the callgrind wrapper (which is in $PATH) to use
this second version.

> -- there's no danger of someone having a new Valgrind which
> doesn't match an old Callgrind.

Given that the tool API version checking works correctly -- which AFAIK
has always been the case for stable releases -- this was prevented
successfully.

> (Although if the new Valgrind had a new
> feature that you wanted, you'd have to rebuild and release a new Callgrind
> to take advantage of it.)

Exactly this is the main point: people want callgrind to work with the
newest valgrind. Whenever I did not provide a matching callgrind release
within a few days of a stable valgrind release, I got an increasing number
of mails asking when the new callgrind release would happen.
But releasing a new callgrind version is of course only needed if the major
version of the tool API is increased in a new valgrind release.
Unfortunately, this was almost always the case, and required me to release
an updated tool. AFAIK there was only one release where the major version
was kept -- and that one indeed worked without me doing anything. So the
main problem with the version check was that the tool API simply was not
stable at all. Regarding maintenance effort, my wrapper script could simply
have checked for an exact valgrind version instead -- without introducing
tool API versions at all.

> We also wouldn't have to worry about having a version number for the
> core/tool interface any more, which is nice -- one fewer thing to get
> wrong.

Why? You do not need to do any check at runtime, but a check of the tool
API version at build time is still better than relying on exact valgrind
release numbers. The API would still exist -- and major versions still
make sense.

> It would be more consistent to build every tool into a separate
> executable, so that internal tools and external tools can be treated the
> same way.

I do not understand this. What is the difference between one executable
for multiple tools and a separate executable for each? You could have one
executable with hardlinks pointing to it, and run the appropriate tool
depending on argv[0].

> It would make sense to have a minimal "stage1" which just looks for the
> --tool option and invokes the appropriate executable. This could even
> work for external tools if they were installed in the same place.

The issue here is that you again introduce a dependency of a "standalone"
callgrind installation on an installed valgrind: the minimal stage1. So
why have such a minimal stage1 at all?

Josef