|
From: Julian S. <js...@ac...> - 2005-06-28 10:11:55
|
Some time back in a thread "RFC: libc futures" (31 March 05) it
was suggested that statically linking the core to each tool would
possibly be beneficial. Recently I've been thinking a lot about
overhauling address space management, and as part of that I thought
I'd try the static linking game.
It's really not difficult. gcc needs -static, obviously, and
the magic incantations to set a non-default load address
(-Wl,-defsym,kickstart_base=0x70000000 -Wl,-T,../coregrind/stage2.lds),
load_tool() in m_main.c has to be turned into a no-op, more
or less, and the stuff in m_main that unmaps stage1's padding file
disappears. Once that's done, I got a statically linked stage2+tool
combination that can be started directly; no need for stage1.
I checked that nulgrind and memcheck work OK on both x86-linux
and amd64-linux.
So what are the advantages/disadvantages:
+ Simplicity: less code in m_main; stage1 disappears completely.
stage2 is started directly.
+ Independence: removes dependence on dlopen
+ Robustness: no need for huge mmaps, less likely to go wrong.
+ A big step towards build-time enforcement of no glibc use.
I've been experimenting not only with static linking but
also with passing -nodefaultlibs -lgcc. This means glibc et al
are simply not linked in, and any use of a symbol not supplied
by V causes the link to fail. This doesn't work yet, since we
make quite a lot of use of glibc, but I can see getting rid of
it being on the order of a day's work (apart from
localtime_r(), used for --time-stamp=yes).
- Disk space: usage increases since core is linked with every tool.
- External tools: installing eg Callgrind could be more complex as it
would have to be linked against libcoregrind.a at
installation time, even if installing from RPMs.
- PIE: this would make PIE'd valgrinds impossible. I'm not too
  bothered, since it seems like the main use for PIE is to work
  around the current address-space-layout inflexibilities, which
  are up for review anyway.
- Need a replacement driver: since only one tool at a time is
linked into libcoregrind.a, the --tool= flag is ignored, and so
we'd need a new ultra-trivial driver which inspects that flag
and selects the right executable to start.
Alternatively, adjust the core/tool interface so that multiple
tools can be linked into the core all at once. This sounds
attractive, and could save disk space.
Comments? In particular, are there other bad consequences I haven't
thought of? Also, [GregP] how would this play for making V work
for MacOS?
J
|
|
From: Nicholas N. <nj...@cs...> - 2005-06-28 13:14:44
|
Hi,

I basically like this idea. The things I'm unsure about are below.

> - External tools: installing eg Callgrind could be more complex as it
>   would have to be linked against libcoregrind.a at
>   installation time, even if installing from RPMs.

This is important; lots of people use Callgrind, so we want to make it
possible to do this.

> - PIE: this would make PIE'd valgrinds impossible. I'm not too
>   bothered since it seems like the main use for PIE is to work
>   around the current address-space-layout inflexibilities, which
>   are up for review anyway.

I'd be happier with ditching this once the address-space reworking has
been done and shown to work.

> - Need a replacement driver: since only one tool at a time is
>   linked into libcoregrind.a, the --tool= flag is ignored, and so
>   we'd need a new ultra-trivial driver which inspects that flag
>   and selects the right executable to start.
>
>   Alternatively, adjust the core/tool interface so that multiple
>   tools can be linked into the core all at once. This sounds
>   attractive, and could save disk space.

After the recent changes for 3.0 this is much closer to being possible,
since we're not using predefined names (eg. SK_(instrument)()) for
everything. But it doesn't sound like it would play well with external
tools.

> Comments? In particular, are there other bad consequences I haven't
> thought of? Also, [GregP] how would this play for making V work
> for MacOS?

I assume the LD_PRELOAD modules (eg. vg_preload_core.so,
vgpreload_memcheck.so) are still being used correctly? They must be if
you say Memcheck is working properly.

N
|
From: Greg P. <gp...@us...> - 2005-06-28 18:09:21
|
Julian Seward writes:

> Comments? In particular, are there other bad consequences I haven't
> thought of? Also, [GregP] how would this play for making V work
> for MacOS?

No problem for Mac OS X. In fact, ditching dlopen is likely to be a
bigger win on Mac OS X than elsewhere, because running two copies of the
dynamic loader (one for Valgrind and one for the target) is hard.

--
Greg Parker gp...@us...
|
From: Josef W. <Jos...@gm...> - 2005-06-29 22:37:26
|
On Tuesday 28 June 2005 12:11, Julian Seward wrote:
> - External tools: installing eg Callgrind could be more complex as it
> would have to be linked against libcoregrind.a at
> installation time, even if installing from RPMs.
I don't see real problems here. Finding the correct lib is similar to finding
the headers now.
Linking with valgrind at build time would make RPMs independent, i.e. a
callgrind installation could exist without valgrind at all or with another
valgrind release. Similarly, currently I am forced to install into the same
prefix, as valgrind has to find the tool.
But actually, I do not see any big advantages or disadvantages regarding
external tools.
> Alternatively, adjust the core/tool interface so that multiple
> tools can be linked into the core all at once. This sounds
> attractive, and could save disk space.
For external tools you would still have to provide the library to link
against, i.e. each external tool would get its own executable. As I am used
to installing my wrapper script ("callgrind"), which would become the
executable, AFAICS it would not make a big difference.
Josef
|
|
From: Nicholas N. <nj...@cs...> - 2005-06-30 03:54:08
|
On Thu, 30 Jun 2005, Josef Weidendorfer wrote:
> Linking with valgrind at build time would make RPMs independent, i.e. a
> callgrind installation could exist without valgrind at all or with another
> valgrind release. Similar, currently I am forced to install into the same
> prefix, as valgrind has to find the tool.
That's a good point. I was assuming that Callgrind would have to link
with the already installed Valgrind, but it can have its own copy in the
RPM.
This would give you more independence from the main Valgrind releases,
because you don't necessarily have to update Callgrind when a new Valgrind
is released -- there's no danger of someone having a new Valgrind which
doesn't match an old Callgrind. (Although if the new Valgrind had a new
feature that you wanted, you'd have to rebuild and release a new Callgrind
to take advantage of it.)
We also wouldn't have to worry about having a version number for the
core/tool interface any more, which is nice -- one fewer thing to get
wrong.
>> Alternatively, adjust the core/tool interface so that multiple
>> tools can be linked into the core all at once. This sounds
>> attractive, and could save disk space.
>
> For external tools you still have to provide the library to link against.
> I.e. each external tool would get its own executable. As I am used to installing
> my wrapper script ("callgrind"), which would become the executable, AFAICS it
> would not make a big difference.
It would be more consistent to build every tool into a separate
executable, so that internal tools and external tools can be treated the
same way.
It would make sense to have a minimal "stage1" which just looks for the
--tool option and invokes the appropriate executable. This could even
work for external tools if they were installed in the same place. But
Josef could still use his "callgrind" wrapper script.
After Josef's good comments, and with Greg saying that this will make
things easier on MacOS, I'm all in favour of this idea.
N
|
|
From: Josef W. <Jos...@gm...> - 2005-06-30 11:59:12
|
On Thursday 30 June 2005 05:54, Nicholas Nethercote wrote:

> On Thu, 30 Jun 2005, Josef Weidendorfer wrote:
>> Linking with valgrind at build time would make RPMs independent, i.e. a
>> callgrind installation could exist without valgrind at all or with
>> another valgrind release. Similarly, currently I am forced to install
>> into the same prefix, as valgrind has to find the tool.
>
> That's a good point. I was assuming that Callgrind would have to link
> with the already installed Valgrind, but it can have its own copy in the
> RPM.

I thought the idea was to statically link the tool with valgrind at build
time. What else would an RPM have to provide from core valgrind other than
the built executable? In this sense, there is no copy of its own in the
RPM. Am I missing something here?

Configuration files are tool dependent (suppressions etc.), and help files
in tool RPMs would be only for the tool.

> This would give you more independence from the main Valgrind releases,
> because you don't necessarily have to update Callgrind when a new
> Valgrind is released

It was always possible to install a second Valgrind release in some
private path, and adjust the callgrind wrapper (which is in $PATH) to use
this second version.

> -- there's no danger of someone having a new Valgrind which
> doesn't match an old Callgrind.

Given that the tool API version checking works correctly -- which AFAIK
was always the case for stable releases -- this was prevented
successfully.

> (Although if the new Valgrind had a new feature that you wanted, you'd
> have to rebuild and release a new Callgrind to take advantage of it.)

Exactly this is the main point: people want callgrind working with the
newest valgrind. If I didn't provide a matching callgrind release a few
days after a stable valgrind release, I got an increased number of mails
asking when the new callgrind release would happen.

But releasing a new callgrind version is of course only needed if the
major version of the tool API is increased in a new valgrind release.
Unfortunately, this was almost always the case, and required me to release
an updated tool. AFAIK there was only one time when the major version was
kept -- and it did indeed work without me doing anything. So the main
problem with the version check was that the tool API simply was not stable
at all. Regarding maintenance effort, my wrapper script could simply have
checked for an exact valgrind version instead, without introducing tool
API versions at all.

> We also wouldn't have to worry about having a version number for the
> core/tool interface any more, which is nice -- one fewer thing to get
> wrong.

Why? You do not need to do any check at runtime, but a check of the tool
API version at build time is still better than relying on exact valgrind
release numbers. The API would still exist -- and major versions would
still make sense.

> It would be more consistent to build every tool into a separate
> executable, so that internal tools and external tools can be treated the
> same way.

I do not understand this. What is the difference between one executable
for multiple tools and a separate one for each? You could have one
executable and hardlinks to it, and run the tool depending on argv[0].

> It would make sense to have a minimal "stage1" which just looks for the
> --tool option and invokes the appropriate executable. This could even
> work for external tools if they were installed in the same place.

The issue here is that you again introduce some kind of dependency of a
"standalone" callgrind installation on an installed valgrind: the minimal
stage1. So why have such a minimal stage1 at all?

Josef
|
From: Nicholas N. <nj...@cs...> - 2005-07-01 21:01:44
|
On Thu, 30 Jun 2005, Josef Weidendorfer wrote:

> I thought the idea was to statically link the tool with valgrind at
> build time. What else would an RPM have to provide from core valgrind
> other than the built executable? In this sense, there is no copy of its
> own in the RPM. Am I missing something here?

No, I was confused.

>> -- there's no danger of someone having a new Valgrind which
>> doesn't match an old Callgrind.
>
> Given that the tool API version checking works correctly -- which AFAIK
> was always the case for stable releases -- this was prevented
> successfully.

Sure, it never went wrong, but it could have. With the static linking
proposal there's nothing to go wrong.

>> We also wouldn't have to worry about having a version number for the
>> core/tool interface any more, which is nice -- one fewer thing to get
>> wrong.
>
> Why? You do not need to do any check at runtime, but a check of the tool
> API version at build time is still better than relying on exact valgrind
> release numbers. The API would still exist -- and major versions would
> still make sense.

I wasn't thinking in terms of exact release numbers. I was just thinking
that if the tool and core are compiled together, they match.

>> It would be more consistent to build every tool into a separate
>> executable, so that internal tools and external tools can be treated
>> the same way.
>
> I do not understand this. What is the difference between one executable
> for multiple tools and a separate one for each? You could have one
> executable and hardlinks to it, and run the tool depending on argv[0].

If the tools that come with the Valgrind distribution are all in one
executable, but Callgrind comes in a different executable, there is a lack
of consistency.

>> It would make sense to have a minimal "stage1" which just looks for the
>> --tool option and invokes the appropriate executable. This could even
>> work for external tools if they were installed in the same place.
>
> The issue here is that you again introduce some kind of dependency of a
> "standalone" callgrind installation on an installed valgrind: the
> minimal stage1. So why have such a minimal stage1 at all?

If we preserve the --tool option and want to use it for both internal and
external tools, a minimal stage1 seems unavoidable.

Overall, I think the static linking idea is sound. There are a couple of
minor details to resolve, such as how exactly the chosen tool should get
invoked (via --tool? via some kind of symlink or hardlink?), but they are
not a big problem.

N