From: Tom H. <th...@cy...> - 2004-11-24 03:38:19

Nightly build on standard ( Red Hat 7.2 ) started at 2004-11-24 03:00:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 2 stderr failures, 0 stdout failures =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1

From: Tom H. <to...@co...> - 2004-11-24 03:30:24

Nightly build on dunsmere ( Fedora Core 3 ) started at 2004-11-24 03:20:04 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 12 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-11-24 03:21:02

Nightly build on audi ( Red Hat 9 ) started at 2004-11-24 03:15:03 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 12 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-11-24 03:15:28

Nightly build on ginetta ( Red Hat 8.0 ) started at 2004-11-24 03:10:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_cmov: valgrind ./insn_cmov
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 1 stderr failure, 0 stdout failures =================
memcheck/tests/scalar (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-11-24 03:10:58

Nightly build on alvis ( Red Hat 7.3 ) started at 2004-11-24 03:05:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 2 stderr failures, 0 stdout failures =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1

From: Jeremy F. <je...@go...> - 2004-11-24 00:01:00

On Mon, 2004-11-22 at 16:06 +0000, Nicholas Nethercote wrote:
> I don't think "reasonably sure" is good enough.
At the moment, in Valgrind as it stands, it's "absolutely sure", because
we set the ulimit to prevent the kernel from paying attention to the brk
syscall.
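The ulimit trick can be made concrete with a small sketch. This is an illustrative reconstruction, not Valgrind's actual code; RLIMIT_DATA is the relevant limit because the kernel checks it before honouring brk():

```c
#include <assert.h>
#include <sys/resource.h>
#include <unistd.h>

/* Illustrative sketch (not Valgrind's code): clamp the soft RLIMIT_DATA
 * so the kernel refuses any further brk() growth, which forces the
 * client's malloc to satisfy requests via mmap instead. */
static int disable_brk_growth(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_DATA, &rl) != 0)
        return -1;
    rl.rlim_cur = 0;            /* data segment may not grow at all */
    return setrlimit(RLIMIT_DATA, &rl);
}
```

Only the soft limit is lowered, so the process could in principle raise it again; in Valgrind's position that is fine, since the point is only to keep an unmodified glibc malloc away from brk.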
In the larger scheme of things, memory allocators are *the* most
overridden class of functions in any libc; every malloc debugging
package needs to be able to do it. If we can't do it, then that's a
pretty severe libc bug on that platform.
BSD's libc makes it easy to intercept the brk syscall, which solves the
problem simply. glibc makes it hard, but not impossibly so. The
problem will need to be addressed in some way for each platform.
But the libc question is a digression.
> > But using system libraries wasn't the only reason to disentangle
> > ourselves from the dynamic linker.
>
> How does FV provide that independence?
The client and the Valgrind core are running completely separate
instances of ld.so. The client may not have an ld.so at all, or it may
be a completely different implementation from the core's. The point is
that the core uses ld.so like any other normal program would, and
doesn't rely on it performing special tricks or magic (or at all -
Valgrind could be statically linked if the target required it).
In the original scheme, we were relying on:
* ld.so starting Valgrind "early enough"
* the client and V running ld.so on both the virtual and real CPUs
* blind good luck that these would never happen at the same time
* the client not trashing the ld.so structures
The only reason it worked at all is that we used dlopen(, LD_BIND_NOW)
which stopped V from doing lazy incremental binding as it ran, but it's
putting a lot of faith in the dynamic linker to assume that that means
it will never run code on your behalf.
> > A bounds-limit test for each memory access isn't that expensive, and in
> > 64-bit address spaces, you can make the client address space a power of
> > 2 in size, which simplifies the test. You could also use a v. large
> > redzone to make hits much more unlikely. The segment test is nice
> > because it actually is free, but explicit testing probably isn't that
> > expensive, particularly if the codegen can remove redundant tests, and
> > schedule the tests it does generate appropriately.
>
> I'm not very keen on features that require greatly different mechanisms on
> different architectures.
Me neither. In this particular instance, it is something which is
arch-dependent anyway, and doesn't have widespread implications. For
CPU X, how do we generate a "bounds check pointer Y" operation? It's
just a codegen question. Similarly, creating a redzone is just part of
"How do we lay out a process? What goes where?".
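The power-of-2 simplification mentioned in the quoted text works because an aligned power-of-2 region can be tested with a single mask and compare instead of two comparisons. A sketch, with a purely illustrative base and size:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout: client space is 4 GiB, power-of-2 sized and
 * similarly aligned, starting at CLIENT_BASE. Both values are invented
 * for illustration. */
#define CLIENT_BASE 0x100000000ULL
#define CLIENT_SIZE 0x100000000ULL   /* must be a power of 2 */

/* One AND plus one compare replaces "base <= addr && addr < base+size",
 * which is cheap for the codegen to emit before every client access. */
static int addr_in_client(uint64_t addr)
{
    return (addr & ~(CLIENT_SIZE - 1)) == CLIENT_BASE;
}
```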
The bigger problem is handling shadow memory access. The current scheme
can't be scaled to a larger address space simply - at the very least you
need to add an extra layer of page table. Once you have that, you need
to work out how to abstract "shadow memory access" so that every
tool doesn't need to know about how to do it, which in turn means that
your options are wider.
The nice thing about a large shadow mapping is that it does naturally
scale unchanged from 32-bit to 64-bit address spaces, because we use the
CPU's own mapping hardware to do the expensive/tricky parts. The
downside is that it uses too much virtual address space on 32-bit
machines. The "manual" pagetable scheme is better for virtual address
space use, but it flat out doesn't scale to 64-bits at all.
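For concreteness, here is a minimal sketch of the "manual" pagetable style of shadow lookup (names and chunk sizes are invented; Valgrind's real tables differ). Scaling it past 32 bits is exactly where the extra table level comes in, since a flat top-level array over a 64-bit space is impossible:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define CHUNK_BITS  16                        /* 64 KiB chunks, illustrative */
#define CHUNK_SIZE  (1u << CHUNK_BITS)
#define TOP_ENTRIES (1u << (32 - CHUNK_BITS)) /* covers a 32-bit space */

/* One shadow byte per client byte; chunks come into existence lazily,
 * so the shadow only costs memory where the client has touched pages. */
static uint8_t *shadow_top[TOP_ENTRIES];

static uint8_t *shadow_byte(uint32_t addr)
{
    uint8_t **chunk = &shadow_top[addr >> CHUNK_BITS];
    if (*chunk == NULL)
        *chunk = calloc(CHUNK_SIZE, 1);  /* zeroed = "no permissions" */
    return *chunk + (addr & (CHUNK_SIZE - 1));
}
```

The large-mapping alternative replaces this software walk with the MMU: one mmap'd region and a fixed address arithmetic step, which is why it is fast but eats virtual address space.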
> Julian made a good point about distinguishing between read-only and
> read-write memory with Memcheck/Addrcheck. Also, my proposal doesn't
> preclude Valgrind from keeping its own copy of ld.so, as is done now.
Yeah, that would have to be a prerequisite for it to be a workable
scheme.
I guess you could keep them separate, but you'd still need the stage1/2
system, and a specially linked Valgrind to make sure that as V
initializes it doesn't start occupying the client's address space.
> I rewrote the skip-list the other day to provide a much cleaner
> interface that avoids these strange behaviours.
Oh, good. Can you describe it?
> several of the places that use the
> skip-list functions do so in such a difficult-to-understand way that I was
> unable to determine for a number of them if they were buggy, or doing
> something extremely subtle
Yeah, that needs cleaning up.
> There's also a nasty 7-function cycle in the
> memory allocation stuff which Julian stumbled across
Yep, that stuff is always tricky.
> Of course not. My proposal doesn't reduce the functionality at all,
> except for the strict client/Valgrind separation. What I object to is the
> use of techniques that are clever but fragile. Also, doing work that the
> kernel could do for us (ie. deciding where to put maps) is not good.
Well, map placement is pretty straightforward. It seems to me that if
we can use the CPU's pagetable hardware directly, that's a bigger
complexity/performance win, and if the cost is doing our own placement,
that's a fair tradeoff. Even aside from that, I think pointercheck is a
pretty important thing in its own right.
> Like Julian said, people writing programs that have strict memory limits.
> A couple of people have recently asked for a --mem-limit option, because
> they cannot use ulimit to restrict memory sizes.
Eh? That's different. They want to be able to restrict the client's
memory allocation. That doesn't mean that Valgrind itself is
constrained in how much memory it can use. And as I said to Julian,
embedded systems have physical memory limits, not virtual.
> Of course I can't prove it. But I do think that Valgrind crashes with
> random, unexplained seg faults more than it used to. There are quite a
> lot of bugs in Bugzilla like that. Some of them have been dealt with
> since I made FV more strict about checking the results of mmap(), etc, but
> we still get them.
My concern is that allowing the client to trash Valgrind's memory will
increase the incidence of unexplainable bugs rather than decrease them.
> FV is a nice idea, but
> practice has shown us that it ultimately is flawed. There's no shame in
> that, but it's not a good idea to ignore these flaws.
I agree we should acknowledge and try to fix the problems, but I'm
basically pessimistic. The real problem is that 32-bits is just not
enough address space for us. We need more. After all, both schemes use
the same amount of physical memory; this is just a question of how
virtual address space gets used. With, say, memcheck, a process which
approaches using 1.5Gbytes of memory (in a 3G user space) is going to
run out of memory either way.
I also think the implied process memory model created by the
intermingled scheme presents a lot of problems.
At the moment, the client gets a nice clear piece of address space which
it can do what it likes with: it can create large(-ish) mappings,
knowing that the space is clear; it can scan /proc/self/maps and see
only the mappings it created[*], knowing that it can create mappings in
the gaps. It can know where it has been placed, make assumptions about
what mappings in the address space exist, use MAP_FIXED, munmap,
mprotect without causing us any problems. If it tries to get out of its
address space, it fails exactly as if the kernel had given it a small
address space.
With the intermingled scheme, we can handle the client making lots of
little maps all over the address space, but we're fragmenting the
client's address space. Programs which want to create large mappings
will be thwarted because there just aren't any more holes left in the
address space (these could get very small). What's worse, this could
easily be non-deterministic from run to run, since the kernel could
easily choose new places for the mappings.
If a process does a mmap/munmap/mprotect, we have to make sure it
doesn't hit any Valgrind mappings, and work out what a sane response is
if it does. There's no analogy in the normal operation of the kernel
(since we're effectively creating a discontiguous address space), so we
need to invent a discontiguous process memory model and its semantics.
[ * - not implemented yet ]
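Checking a client request against the core's own mappings reduces to an interval-overlap test against the reserved ranges; a minimal sketch with invented range values:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct { uintptr_t start, end; } Range;   /* half-open [start, end) */

/* Hypothetical list of ranges the core has reserved for itself;
 * the addresses are purely illustrative. */
static const Range vg_ranges[] = {
    { 0xB0000000u, 0xB8000000u },   /* e.g. core text/data */
    { 0xC0000000u, 0xC4000000u },   /* e.g. shadow area */
};

/* A client mmap/mprotect/munmap request is safe only if it misses every
 * reserved range. Two half-open intervals overlap iff each starts
 * before the other ends. */
static int hits_valgrind(uintptr_t start, size_t len)
{
    for (size_t i = 0; i < sizeof vg_ranges / sizeof vg_ranges[0]; i++)
        if (start < vg_ranges[i].end && vg_ranges[i].start < start + len)
            return 1;
    return 0;
}
```

The detection itself is easy; as the paragraph above says, the hard part is deciding what a sane response to a hit is, since the kernel has no precedent for a discontiguous address space.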
So if I can summarize:
FV - pros:
* Valgrind protected from the client
* Client gets clear, flat address space
cons:
* static allocation of the address spaces limits client address
space size
* construction of address space mappings can fail on some systems
* code complexity in managing mappings, and separating address
spaces
Intermingled - pros:
* flexible address space layout handles clients with lots of small
sparse mappings
* no large mmaps or other tricky allocations
cons:
* introduces discontiguous process memory model
* code complexity in protecting valgrind mmaps
* valgrind unprotected from client
* non-deterministic process layout and address space fragmentation
J
From: Jeremy F. <je...@go...> - 2004-11-23 23:57:52

On Sat, 2004-11-20 at 12:08 +0000, Julian Seward wrote:
> I have to disagree. Having our own mini-glibc decouples us from
> the vagaries of what's supplied as libc on, eg, *BSD, Solaris, AIX,
> etc. If we were going to reproduce a large fraction of glibc then
> it would be a liability, but we only use a small amount. I am in
> favour of a "system abstraction layer" to support V's internal
> activities, such as Mozilla's NPR (?) and OOo's SAL. Clearly it
> would be smaller and simpler than either of those, but the
> principle is the same.

Most of what we have in vg_libc is either incredibly generic stuff, like
str*(), or very system-dependent stuff, like syscall interfaces. The
trouble is that every OS has different conventions for how syscalls are
called, and how libc functions are mapped to those syscalls. That's
stuff that the system libc already has and knows how to deal with. We
would just have to replicate it for each system. Our libc needs are
pretty generic: we're not going to have problems with a libc not having
memcpy. I think using the system libcs will present less variation than
the differences in the underlying kernel interfaces which they hide.

> I never got the impression from user feedback that the P-clobbers-V
> problem was significant. What's more, telling people to fix errors
> in the order is necessary even at present: even if P does not trash
> V, errors often form cascades, and it is the first one that needs
> to be fixed first.

The whole point is that Valgrind is for operating on programs which are
assumed to be broken; assumed to be behaving in unpredictable ways. A
program may work fine under memcheck but trash random addresses under
helgrind. If Valgrind can't protect itself from the client, then its
results will always be under a cloud. With pointercheck, I know that
something is either a bug in Valgrind, or a bug in the client, but it
definitely isn't the client trashing Valgrind.

> > Well, hm. If Valgrind is sharing ld.so with the client, then they're
> > not really separate programs at all. If the client screws up the
> > dynamic linker, Valgrind could get hit and crash without being able to
> > report on it at all.
>
> The underlying issue here is that mc/ac access control is too crude.
> For a while I have been thinking about 2 A bits per byte, one for
> read/exec access and one for write. Then executable areas could be
> marked as readonly and the above cannot happen.

I don't see how that would help. If there's one instance of ld.so, then
there's one set of data structures. Sometimes those structures will be
manipulated by the client, with ld.so running on the virtual CPU, and
sometimes running on the real CPU as part of Valgrind. I don't see how
you could set up a permissions system which allows those structures to
be manipulated correctly but never trashed by the client.

> It would also finally
> give us Robert's memory watchpoints for free.

Well, at the cost of increasing the virtual address space pressure,
which apparently is a significant problem.

> I never understood why we care about statically linked executables.
> My view is they are a special-case anomaly which it is not worth supporting.
> No developer doing day-to-day hacking is going to continually be building
> statically linked executables (are they?) IMO just detecting them and
> stopping with a warning is good enough.

They're useful for people doing embedded stuff. You can link an RTOS
with an app into a simulation environment and run it as a normal
process.

> > > - Also, the inflexibility of the memory layout has caused many problems:
> > >   - difficulties for non-standard (ie. 3G:1G) kernels
> > >   - can't run with a virtual memory ulimit (bad esp. for embedded
> > >     developers)
> >
> > Can you explain? What do you mean by "embedded developers"? The
>
> I guess, developers operating in scenarios with small amounts of
> virtual memory?

Embedded systems normally have a problem of not enough physical memory,
which is unaffected by any of these proposals. If they're running Linux
on an x86, then there's no particular reason they'd have limited
virtual space. The virtual ulimit is only useful for stopping a runaway
program from generating a thrash storm, and only under a very limited
number of cases (since it only affects the brk syscall, and not mmap).

> > > - V and P mappings are totally intermingled. We just let the kernel
> > >   mmap things wherever it wants, without trying to enforce any layout
> > >   ourselves. (This precludes big-bang shadow allocation, and thus
> > >   precludes fixed-offset shadow memory addressing being used in the
> > >   future. This does not worry me.) This makes startup much easier,
> > >   since we don't have to be so careful about where things go.
> >
> > ...
> > I guess my OS/kernel background is really making me dislike this idea.
>
> Uh ... you need to say *why* you dislike the idea.

Why? Well, from an OS perspective, having a kernel which protects
itself from buggy application code is what marks the difference between
a real OS and a piece of unreliable junk. Unprotected operating systems
work with the charming naivety that all application code is basically
bug-free and won't cause any damage.

Since in Valgrind we know the client code is buggy, probably with some
kind of memory problem, we know there's a good likelihood that the
client is going to start taking pot-shots at the core. If we're lucky,
it will do it when we're running under addr/memcheck. If not, it will
quietly corrupt things when we're using some other tool. And I know,
heisenbugs which appear under some tools but not others are an
orthogonal problem which is generally unsolvable, but FV makes checking
for the wildest memory access problems pretty cheap, and it also means
that Valgrind's memory allocation patterns have less/no effect on the
client allocation layout.

> The new VCPU (VEX) provides precise (memory) exceptions if you need
> them.

Good.

> :-) the obsolete systems are going to be around for a *long* time
> yet :-) and will probably always be the majority.

Yes, 64-bit machines are already becoming pretty common, and will be a
more attractive machine for hosting Valgrind sessions than 32-bit
machines. In other words, in the total population of systems I agree
with you, but in the world of developers-using-Valgrind, I think 64-bit
systems will be much more common.

J

From: Richard v. d. H. <ric...@mx...> - 2004-11-23 21:23:19

Richard van der Hoff wrote:
> I've been doing some work on tracking down an assertion failure with
> Helgrind, and it turns out to be thrown whenever vgPlain_sprintf() is
> called with a %y conversion - because the %y conversion uses sprintf
> itself.
> ...
> Let's try the attached patch against CVS HEAD - unfortunately there
> are a few more changes, but it does fix it properly in a thread-safe
> kind of way.

Sorry to pester, but would it be possible to have the patch applied, or
at least have some feedback, before it goes stale? I'm aware that
things are busy at the moment, and that whether this code would even be
in the next major release is currently questionable; however, I hope
that this patch is pretty safe and should apply easily at the moment,
and it would make my list of things-to-remember shorter if this fix
made it into CVS.

Cheers,

--
Richard van der Hoff <ric...@mx...>
Systems Analyst
Tel: +44 (0) 845 666 7778
http://www.mxtelecom.com

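The failure mode Richard describes (a conversion that re-enters the formatter while it still owns a shared buffer) can be reproduced in miniature. This toy is an invented illustration, not vgPlain_sprintf() itself:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* A toy formatter with a single static scratch buffer. If a helper
 * conversion calls back into the formatter, the inner call clobbers
 * the outer call's result: the same shape of bug as %y re-entering
 * the sprintf machinery. A per-call buffer avoids it. */
static char scratch[64];

static const char *fmt_num(int n)
{
    snprintf(scratch, sizeof scratch, "<%d>", n);
    return scratch;            /* caller holds a pointer into scratch */
}

static void fmt_pair(char *out, size_t sz, int a, int b)
{
    const char *first  = fmt_num(a);   /* points into scratch          */
    const char *second = fmt_num(b);   /* second call overwrites it!   */
    snprintf(out, sz, "%s %s", first, second);
}
```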
From: Robert W. <rj...@du...> - 2004-11-23 20:29:53

CVS commit by rjwalsh:
Fix fcntl to only check arg3 when it's actually used.
fixes bug 93810.
M +34 -4 coregrind/vg_syscalls.c 1.226
M +5 -2 include/x86-linux/vki_arch.h 1.8
--- valgrind/coregrind/vg_syscalls.c #1.225:1.226
@@ -1927,6 +1927,21 @@ PRE(sys_fcntl, 0)
{
PRINT("sys_fcntl ( %d, %d, %d )", arg1,arg2,arg3);
+ switch(arg2) {
+ case VKI_F_DUPFD:
+ case VKI_F_SETFD:
+ case VKI_F_SETFL:
+ case VKI_F_GETLK:
+ case VKI_F_SETLK:
+ case VKI_F_SETLKW:
+ case VKI_F_SETOWN:
+ case VKI_F_SETSIG:
+ case VKI_F_SETLEASE:
PRE_REG_READ3(long, "fcntl",
unsigned int, fd, unsigned int, cmd, unsigned long, arg);
+ break;
+ default:
+ PRE_REG_READ2(long, "fcntl",
+ unsigned int, fd, unsigned int, cmd);
+ }
if (arg2 == VKI_F_SETLKW)
tst->sys_flags |= MayBlock;
@@ -1976,6 +1991,21 @@ PRE(sys_fcntl64, 0)
{
PRINT("sys_fcntl64 ( %d, %d, %d )", arg1,arg2,arg3);
+ switch(arg2) {
+ case VKI_F_DUPFD:
+ case VKI_F_SETFD:
+ case VKI_F_SETFL:
+ case VKI_F_GETLK:
+ case VKI_F_SETLK:
+ case VKI_F_SETLKW:
+ case VKI_F_SETOWN:
+ case VKI_F_SETSIG:
+ case VKI_F_SETLEASE:
PRE_REG_READ3(long, "fcntl64",
unsigned int, fd, unsigned int, cmd, unsigned long, arg);
+ break;
+ default:
+ PRE_REG_READ2(long, "fcntl64",
+ unsigned int, fd, unsigned int, cmd);
+ }
if (arg2 == VKI_F_SETLKW || arg2 == VKI_F_SETLKW64)
tst->sys_flags |= MayBlock;
--- valgrind/include/x86-linux/vki_arch.h #1.7:1.8
@@ -280,7 +280,10 @@ struct vki_sigcontext {
#define VKI_F_GETFL 3 /* get file->f_flags */
#define VKI_F_SETFL 4 /* set file->f_flags */
-//#define VKI_F_GETLK 5
-//#define VKI_F_SETLK 6
+#define VKI_F_GETLK 5
+#define VKI_F_SETLK 6
#define VKI_F_SETLKW 7
+#define VKI_F_SETOWN 8
+#define VKI_F_SETSIG 10
+#define VKI_F_SETLEASE 1024
#define VKI_F_SETLKW64 14
|
From: Tom H. <to...@co...> - 2004-11-23 03:25:49

Nightly build on dunsmere ( Fedora Core 3 ) started at 2004-11-23 03:20:04 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 12 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-11-23 03:20:50

Nightly build on audi ( Red Hat 9 ) started at 2004-11-23 03:15:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 12 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-11-23 03:12:13

Nightly build on ginetta ( Red Hat 8.0 ) started at 2004-11-23 03:10:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
gcc -Winline -Wall -Wshadow -g -o metadata metadata.o
source='threadederrno.c' object='threadederrno.o' libtool=no \
depfile='.deps/threadederrno.Po' tmpdepfile='.deps/threadederrno.TPo' \
depmode=gcc3 /bin/sh ../../depcomp \
gcc -DHAVE_CONFIG_H -I. -I. -I../.. -I../../include -Winline -Wall -Wshadow -g -c `test -f 'threadederrno.c' || echo './'`threadederrno.c
gcc -Winline -Wall -Wshadow -g -o threadederrno threadederrno.o -lpthread
source='vgtest_ume.c' object='vgtest_ume.o' libtool=no \
depfile='.deps/vgtest_ume.Po' tmpdepfile='.deps/vgtest_ume.TPo' \
depmode=gcc3 /bin/sh ../../depcomp \
gcc -DHAVE_CONFIG_H -I. -I. -I../.. -I../../include -Winline -Wall -Wshadow -g -c `test -f 'vgtest_ume.c' || echo './'`vgtest_ume.c
cc1: No space left on device: error writing to /tmp/cc1ln4hQ.s
make[4]: *** [vgtest_ume.o] Error 1
make[4]: Leaving directory `/tmp/valgrind.18138/valgrind/memcheck/tests'
make[3]: *** [check-am] Error 2
make[3]: Leaving directory `/tmp/valgrind.18138/valgrind/memcheck/tests'
make[2]: *** [check-recursive] Error 1
make[2]: Leaving directory `/tmp/valgrind.18138/valgrind/memcheck/tests'
make[1]: *** [check-recursive] Error 1
make[1]: Leaving directory `/tmp/valgrind.18138/valgrind/memcheck'
make: *** [check-recursive] Error 1

From: Tom H. <th...@cy...> - 2004-11-23 03:08:41

Nightly build on alvis ( Red Hat 7.3 ) started at 2004-11-23 03:05:02 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 2 stderr failures, 0 stdout failures =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2004-11-23 03:04:23

Nightly build on standard ( Red Hat 7.2 ) started at 2004-11-23 03:00:01 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 2 stderr failures, 0 stdout failures =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1

From: Ashley P. <as...@qu...> - 2004-11-22 16:35:12
|
On Mon, 2004-11-22 at 16:12 +0000, Nicholas Nethercote wrote:
> On Sat, 20 Nov 2004, Julian Seward wrote:
>
> > I never understood why we care about statically linked executables.
> > My view is they are a special-case anomaly which it is not worth supporting.
> > No developer doing day-to-day hacking is going to continually be building
> > statically linked executables (are they?)
>
> I don't think that's true. I remember at least one person saying "thank
> you thank you thank you" when support for this was added.

I happen to write and maintain software that doesn't really like static
executables, and there are a surprising number of people who use them.

Ashley,
|
From: Nicholas N. <nj...@ca...> - 2004-11-22 16:13:06
|
On Sat, 20 Nov 2004, Julian Seward wrote:

> I never understood why we care about statically linked executables.
> My view is they are a special-case anomaly which it is not worth supporting.
> No developer doing day-to-day hacking is going to continually be building
> statically linked executables (are they?)

I don't think that's true. I remember at least one person saying "thank
you thank you thank you" when support for this was added.
|
From: Nicholas N. <nj...@ca...> - 2004-11-22 16:06:35
|
On Fri, 19 Nov 2004, Jeremy Fitzhardinge wrote:

> Having our own libc is definitely not a good idea, but we haven't gone
> to much effort to replace it yet. glibc makes it hard to intercept brk
> directly, but it is possible to replace malloc/calloc/realloc/etc and be
> reasonably sure of avoiding the use of brk (particularly if you get the
> kernel to enforce it).

I don't think "reasonably sure" is good enough.

> But using system libraries wasn't the only reason to disentangle
> ourselves from the dynamic linker. The dynamic linker itself is 1) very
> GNU/glibc-specific 2) has changed a lot over the last few years, and
> doesn't seem like stopping, and so 3) depending on it in detail is going
> to continue to be a maintenance and portability problem. We're stuck
> with having to deal with it for the purposes of interception, but it
> would be nice to be independent of it for the basic functioning of
> Valgrind.

How does FV provide that independence?

> A bounds-limit test for each memory access isn't that expensive, and in
> 64-bit address spaces, you can make the client address space a power of
> 2 in size, which simplifies the test. You could also use a v. large
> redzone to make hits much more unlikely. The segment test is nice
> because it actually is free, but explicit testing probably isn't that
> expensive, particularly if the codegen can remove redundant tests, and
> schedule the tests it does generate appropriately.

I'm not very keen on features that require greatly different mechanisms
on different architectures.

> Well, hm. If Valgrind is sharing ld.so with the client, then they're
> not really separate programs at all. If the client screws up the
> dynamic linker, Valgrind could get hit and crash without being able to
> report on it at all.

Julian made a good point about distinguishing between read-only and
read-write memory with Memcheck/Addrcheck. Also, my proposal doesn't
preclude Valgrind from keeping its own copy of ld.so, as is done now.
>> - Code size. FV added a lot of code. Especially keeping track of all the
>> mapped segments (and there are still several nasty bugs in there).
>
> You know, I'm really not sure that it did. I'll agree that the skiplist
> code has been more subtly broken for longer than it should have been,
> but as a generic data structure we should be able to get good use from
> it. And really, the mapped segment code is there to replace the old
> stuff which kept reading /proc/self/map; that was getting to be a pretty
> significant bottleneck and was plain ugly (ie, the mapped segment stuff
> would have been needed anyway, regardless of FV).

Ok, the segment list could be kept with my proposal. The implementation
needs overhauling though. The big problem is that each segment is a
range, but the skip-list's interface only allows for it to be (easily)
treated as a key-value table, rather than a range-value table. And so
various hoops have to be jumped through to account for that -- for
example, SkipList_Find's non-intuitive behaviour that it returns the
matching node, or the previous one if there's no match, or NULL if the
key is below the first on the list; if you want to find the segment that
contains an address, you have to call SkipList_Find and then look at the
returned node to see if the searched-for address is within it. This is
crazy.

I rewrote the skip-list the other day to provide a much cleaner
interface that avoids these strange behaviours. I haven't managed to
integrate it yet, however, because several of the places that use the
skip-list functions do so in such a difficult-to-understand way that I
was unable to determine for a number of them if they were buggy, or
doing something extremely subtle.
There's also a nasty 7-function cycle in the memory allocation stuff
which Julian stumbled across; in obscure circumstances you can get an
infinite loop when the program allocates memory, so Valgrind creates a
new skip-list node, which can require allocating a new superblock, which
requires another skip-list node, etc. (Or something like that, I can't
remember the exact details now.) The segment list should not use the
same allocator as the rest of Valgrind.

> The other large code change is the syscall handling stuff, which is
> independent of FV.

Sure.

> I dunno. Valgrind is a lot more complex now, but it does do a lot more
> stuff. I don't think we're going to return to the halcyon days of 1.0
> simplicity and still manage to keep the functionality.

Of course not. My proposal doesn't reduce the functionality at all,
except for the strict client/Valgrind separation. What I object to is
the use of techniques that are clever but fragile. Also, doing work
that the kernel could do for us (ie. deciding where to put maps) is not
good.

>> - Robustness. FV is generally more fragile; there are more things to
>> get right, and the consequences are bad if they are not right. IMHO
>> we get more random seg fault problems now. A lot have been cleaned up
>> (it was really bad at first), but they still happen.
>
> Yes, but I think that comes with the "doing more stuff". 1.0 would just
> fail outright on a lot of programs. 2.x tries to run them, and
> generally (but not always) succeeds. And again, I don't think this is
> strictly an FV issue.

I agree with that statement for the ProxyLWP stuff -- yes, it's more
complicated, but handles more programs. As for FV... apart from
statically linked binaries, what kinds of programs can we run with it
that we could not run without? My argument is that FV reduced the
number of programs Valgrind could run, since you need a standard-ish
kernel, no virtual memory limit, enough swap space if you have a
non-overcommitting kernel.
And also it runs out of memory earlier than previously due to the
address layout inflexibilities.

> Can you explain? What do you mean by "embedded developers"? The

Like Julian said, people writing programs that have strict memory
limits. A couple of people have recently asked for a --mem-limit
option, because they cannot use ulimit to restrict memory sizes.

>> V has dropped from #6 to #72 highest-rated project at Freshmeat.net over
>> the last year or so; I think the reason for this is that V's "it just
>> works" characteristic has been diminished, due to the robustness and
>> inflexibility problems.
>
> Um, do you have anything to support that? I think the ranking is
> dropping because V is not new anymore, and people are taking it for
> granted. Are we seeing an increase in bug reports disproportionate to
> the number of users?

Of course I can't prove it. But I do think that Valgrind crashes with
random, unexplained seg faults more than it used to. There are quite a
lot of bugs in Bugzilla like that. Some of them have been dealt with
since I made FV more strict about checking the results of mmap(), etc,
but we still get them.

> I'd still like to be able to use C++ internally.

It could certainly be useful in places; some places where tools augment
core data structures with extra info cry out for inheritance. But I'm
happy to not use C++ if it causes too many problems. It's also a
slippery slope if you try to restrict yourself to only a subset of the
language.

> And I really think that the direct mapped shadow memory makes the most
> sense for 64-bit systems, even if it doesn't for 32-bit.

Well, it's only speculation at the moment.

> The increased layout flexibility is the only obvious win to me, and I
> think it costs quite a bit.

I consider it a big win, and the disadvantages not that big. It's the
thin vs. thick model again; with FV we are duplicating the work of the
kernel for memory layout. Just take a look at the syscall wrappers for
mremap and brk.
Yesterday I tried removing the strict partitioning, and the built-in
support for shadow memory (tools allocate shadow memory just with
VG_(get_memory_from_mmap)(), like they used to). I cut about 300 lines
of code with hardly any effort, and that was just a start.

> Well, we still need to keep Valgrind and the client stack separate.
> Valgrind's stack is fixed size, but the client's has to grow. If we use
> the stack the system gave us as the client stack, then we don't need to
> worry about it.

Yes.

>> I think the end result would be simpler, have less code, be more robust,
>> and cause fewer problems for users. Discuss.
>
> I think it's more complicated than that.

How did I come around to this viewpoint? I've been doing a lot of
thinking about memory layout in the last couple of months, mostly trying
to work out how to get the flexibility back while preserving FV's strict
client/Valgrind separation. The thinking has been prompted by the large
number of people who have been having difficulties due to non-3G:1G
kernels, the RH8 over-committing problem, people complaining about
ulimits not working, running out of memory prematurely, etc. There have
been a lot.

I've really been trying to come up with plausible ways to address the
problems within FV, and I have concluded that it can't be done. If you
are willing to abandon the strict client/Valgrind separation, lots of
complication and problems fall away immediately. Also, the difficulties
above that I encountered when trying to fix the skip-list problems made
me quite unhappy; the code has to stay maintainable.

FV is a nice idea, but practice has shown us that it ultimately is
flawed. There's no shame in that, but it's not a good idea to ignore
these flaws.

N
|
From: Tom H. <th...@cy...> - 2004-11-22 04:21:45
|
Nightly build on audi ( Red Hat 9 ) started at 2004-11-22 03:15:12 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 12 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2004-11-22 04:15:16
|
Nightly build on alvis ( Red Hat 7.3 ) started at 2004-11-22 03:05:12 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 2 stderr failures, 0 stdout failures =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1
|
From: <js...@ac...> - 2004-11-22 03:57:09
|
Nightly build on phoenix ( SuSE 9.1 ) started at 2004-11-22 03:50:00 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 186 tests, 5 stderr failures, 0 stdout failures =================
corecheck/tests/as_mmap (stderr)
corecheck/tests/fdleak_fcntl (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/writev (stderr)
memcheck/tests/zeropage (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <to...@co...> - 2004-11-22 03:52:17
|
Nightly build on dunsmere ( Fedora Core 3 ) started at 2004-11-22 03:20:03 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

-- Finished tests in none/tests ----------------------------------------
== 191 tests, 14 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/badjump2 (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2004-11-22 03:21:41
|
Nightly build on ginetta ( Red Hat 8.0 ) started at 2004-11-22 03:10:07 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

insn_cmov: valgrind ./insn_cmov
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 1 stderr failure, 0 stdout failures =================
memcheck/tests/scalar (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2004-11-22 03:06:10
|
Nightly build on standard ( Red Hat 7.2 ) started at 2004-11-22 03:00:12 GMT
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow

insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 191 tests, 2 stderr failures, 0 stdout failures =================
memcheck/tests/scalar (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1
|
From: Robert W. <rj...@du...> - 2004-11-21 06:55:16
|
So, I finally got around to measuring the impact of watchpoints on
execution time. It varies between 5%-8% in the case where the patch is
applied but no watchpoints are defined. When the patch is applied and a
watchpoint is defined, it jumps to 20%-30%, even if the watchpoint is
never triggered. Ouch! I wonder if it's possible to minimize the
impact in any way?

Regards,
Robert.
--
Robert Walsh
Amalgamated Durables, Inc. - "We don't make the things you buy."
Email: rj...@du...
|
From: Robert W. <rj...@du...> - 2004-11-21 06:45:12
|
About the question of "do we use glibc" versus "do we write our own",
have we looked at any existing alternate libc packages out there?
uClibc, diet libc, klibc, etc. If they're as small as they sound (I
haven't really checked) then there's the possibility of just including
the entire package with Valgrind.
Questions to be answered are:
* What's a comprehensive list of existing libc packages?
* Are they licensed appropriately?
* Do they do everything we want?
* Do they do it in a Valgrind-safe way (brk use, etc.)?
* If not, would they be easy to fix up?
* Are they small enough to include lock, stock and barrel in the
Valgrind source tree?
* If not, is it worth having a separate tree to contain them, with
all of the complications that would add to the build process?
* Are they portable?
* If they're already ported to other OSes and platforms, do they
include ones we care about? (FreeBSD, PPC, x86_64, etc.)
* If not, would they be easy to port?
* How would we deal with integrating new releases, submitting
patches back to the maintainers, etc.?
Lots of questions. I haven't thought about the answers yet.
Assuming I've been smoking the crack pipe and this is a silly idea, how
about making a new vglibc directory (sibling of coregrind) and putting
what's currently found in vg_mylibc.c and vg_pthread* in there,
splitting these n-thousand line files up into something more manageable
and building a libvgc.a?
Regards,
Robert.
--
Robert Walsh
Amalgamated Durables, Inc. - "We don't make the things you buy."
Email: rj...@du...
|