From: Nicholas N. <nj...@cs...> - 2005-02-14 23:48:32

CVS commit by nethercote: typo

  M +1 -1  users.html 1.80

--- devel-home/valgrind/users.html  #1.79:1.80
@@ -238,5 +238,5 @@
 <dt><a href="http://lavape.sourceforge.net">Lava</a>
-<dd>An experimental OO programming language implementaion, including a
+<dd>An experimental OO programming language implementation, including a
 structure editor.
 </dl>

From: Greg P. <gp...@us...> - 2005-02-14 22:40:02

Jeremy Fitzhardinge writes:

> On Mon, 2005-02-14 at 11:18 +0000, Julian Seward wrote:
> > On ppc32, and other architectures, which I'm sure we'll
> > eventually get to, we can't implement pointercheck the way we
> > do now anyway.
>
> PPC has its own interesting segmentation magic which might be usable.

I don't think any of these are available on ppc-darwin; all the fun VM toys are reserved for the kernel's use.

> > There are a couple of options for <handle failure now>, which need
> > to be investigated for their impact on code size and the extent to
> > which they inhibit IR optimisation. Either way, they can be expressed
> > purely in IR, so no portability questions there either.
>
> Could we generate the <handle failure> code out of line (after or before
> the BB's main generated code), and change it to
>
>     jnz,pn failure
>
> In either case, on ia32 we could use an interrupt instruction to raise a
> fault (2 byte instruction).

ppc's twi (trap word immediate) instruction is perfect for this purpose. It's basically a conditional interrupt instruction. Java null checks use this, with a signal handler in the JVM.

On the plus side, twi would be in line with the rest of the test, so there would be no "handle failure" block. On the minus side, it's presumably not directly represented in the IR (although "conditional interrupt" is close enough in meaning and might be a reasonable thing to have).

    srlwi r0, rAddr, n          // r0 = addr >> n
    addi  r1, r0, table_offset
    lbzx  r2, r1, rThread       // r2 = thread->table[addr>>n]
    twnei r2, is_client         // die if table byte != client's value
    OK:

-- Greg Parker gp...@us...

From: Jeremy F. <je...@go...> - 2005-02-14 18:04:17

On Mon, 2005-02-14 at 11:18 +0000, Julian Seward wrote:
>
> movl %addr, %tmp
> shrl $n, %tmp
> testb $0, offset(%ebp, %tmp, 1)
> jz OK
> <handle failure somehow>
> OK:

A thought I just had in the shower: if the table entries are actually pages, you can use mprotect to mark them as client-accessible or not. Then the sequence looks like:

    mov   %addr, %tmp
    shrl  $n, %tmp          /* %tmp /= (superblocksize / pagesize) */
    testb $0, table(%tmp)   /* any memory-touching instruction; faults if %addr is bad */

This would use no cache and has no jumps, but it does use TLB entries for the table, so it isn't completely free. But it's worth considering.

J
From: Jeremy F. <je...@go...> - 2005-02-14 17:57:47

On Mon, 2005-02-14 at 11:54 +0000, Julian Seward wrote:
> Frankly I'm unsympathetic to the view that even after a program
> does a bad write which is reported by Memcheck, it should not
> actually trash Valgrind. My view is, once a bad write is
> reported, you can just forget the rest of your run on the basis
> that V _might_ have gotten trashed.

I think that's a bit simplistic. I tend to overlook reports coming from other people's code in my program if I'm fairly sure they're not relevant to the particular problem I'm trying to track down; if I know that they're not trashing Valgrind, then that helps.

I would always run with pointercheck enabled; the only time I disable it is with programs which use ioctls which create mmaps outside the client address space.

> I said as much in the valgrind-1.0 release notes, and I can't
> remember getting complaints about this behaviour.

Yeah, I'm pretty sure that's because people ignored the advice. We regularly get "bug" reports from people saying that Valgrind crashed after hundreds of client warnings.

J
From: Jeremy F. <je...@go...> - 2005-02-14 17:33:53

On Mon, 2005-02-14 at 11:18 +0000, Julian Seward wrote:
> I had a run-in last night with mmap. This, recent amd64
> hackery, and contemplation of ppc and MacOSX, got me thinking
> again about address space layout.
>
> Currently on x86 we divide the address space at some point
> (eg, 0x52000000), force the client to exist below that boundary
> and give all above it to V. This scheme allows pointercheck to
> be implemented for free using x86 segmentation.
>
> On x86 that works well, although it does seem to have given
> various kinds of brittleness when running large processes
> and/or for people with unusual kernel/user address boundaries.

Using -fpie has relieved a lot of that brittleness. We still get constrained by address space if the user boundary is too low, but at least it runs.

> On amd64 that scheme isn't going to work, since all
> client code has to be below 2G. A more flexible layout is
> needed.

That only applies to the main executable. All the shared libraries, ld.so, etc are much higher (~0x3f3b'0000'0000 on my machine). If we can keep generated code within 2G of Valgrind itself, then we can generate fairly compact code; otherwise it will need to do everything with absolute 64-bit pointers.

> On ppc32, and other architectures, which I'm sure we'll
> eventually get to, we can't implement pointercheck the way we
> do now anyway.

PPC has its own interesting segmentation magic which might be usable.

> One observation (unless I am wrong) is that pointercheck
> is valueless when running Memcheck, since Memcheck should be
> able to catch all illegal accesses (if this isn't true then
> Memcheck is broken and we should fix it). That means we
> can omit pointercheck when doing Memcheck and so any
> software-based checking scheme will have zero overhead for
> our most-used tool.

As Tom said, I think this is a bit over-simplified. Besides, I suspect the cost of a pointer check is tiny compared to all the other extra work memcheck does - particularly if the codegen can schedule the check in advance of the actual memory access (and even CSE it for multiple accesses).

> If the table is accessible at a fixed offset from the guest state
> pointer, this becomes four insns on x86:
>
> movl %addr, %tmp
> shrl $n, %tmp
> testb $0, offset(%ebp, %tmp, 1)
> jz OK
> <handle failure somehow>
> OK:
>
> There are a couple of options for <handle failure now>, which need
> to be investigated for their impact on code size and the extent to
> which they inhibit IR optimisation. Either way, they can be expressed
> purely in IR, so no portability questions there either.

Could we generate the <handle failure> code out of line (after or before the BB's main generated code), and change it to

    jnz,pn failure

In either case, on ia32 we could use an interrupt instruction to raise a fault (2 byte instruction).

> Why 64M-sized blocks? 64M is a coarse but just-about-manageable
> granularity. On a 32-bit target, with 64M superblocks, the check
> table has only 64 entries -- that is, 64 bytes -- so accesses to it
> are unlikely to cause significant cache pollution (iow I hope all of
> it will end up permanently in D1).
>
> On a 64-bit target, we clearly cannot have a check table covering
> the entire address space since that would require 2^38 entries.
> However, even a 4096-entry table would cover 256GB of address space,
> which is more than enough for the foreseeable future. The access
> test would become a little more expensive because it would also have
> to test that the upper bits of the address which are not covered by
> the table, are all zero -- assuming the table maps the address
> space of 0 - 256GB and not some other part.
>
> So, what am I missing?

It looks reasonable. For ia32 I would suggest sticking to the current layout, so there would just be a series of fixed, preallocated superblocks. In fact, for any 32-bit implementation I think that's the best way to go. Since there are only 64 superblocks, we can pretty easily determine in advance whether they'll end up being client or Valgrind usable (maybe a few of them would be unallocated).

For 64-bit, there's no reason we couldn't use, say, a 16Gbyte super-page size and have a 16k-entry table for a 48-bit address space. At the moment Linux constrains user-space to 256Gbytes, but after 2.6.1[12] that will be unconstrained (ie, up to 2^47 or so, I think) because it will use a 4-level pagetable structure. At that point, we could put Valgrind way up high and leave the client with a very clear address space.

J
From: Julian S. <js...@ac...> - 2005-02-14 17:33:29

All,

There's been a lot of development work on V recently, not all of it visible. Jeremy has improved the threading situation by finally allowing V to run thread libraries natively, which is a great simplification. I'm working on "Vex", an architecture-neutral replacement for the UCode framework which facilitates porting V away from x86.

It has been a good six months since 2.2 was released and it's now way overdue for another stable release. Many bugs have been fixed, and the new threading stuff really ought to go live sooner rather than later. So we will do a 2.4 release, hopefully within the next month. Jeremy is in charge of stabilising the tree to a point where it is at releasable quality.

2.4 will be the last x86-only valgrind, and the last using the UCode framework. Future valgrinds will be built on Vex, a new library for dynamic translation and instrumentation. Vex is now working well for x86, and amd64 support is starting to work. I will be merging the current cvs tree and the vex development tree. The aim will be to release an alpha grade "technology demonstrator" as soon as amd64-linux support is usable. We will then push towards stabilising the tree for a mid-year valgrind-3.0 release, which should provide Linux support for x86, amd64, and possibly ppc32, all from a unified code base.

J
From: Julian S. <js...@ac...> - 2005-02-14 11:54:36

> Julian Seward <js...@ac...> wrote:
> > On amd64 that scheme isn't going to work, since all
> > client code has to be below 2G. A more flexible layout is
> > needed.
>
> Only for 32 bit client programs presumably. Even then the limit
> is 4G surely?

Well, only on code which is compiled using this "small model" (or whatever it's really called). That can still be 64-bit code; the critical thing is the assumptions made by the compiler+binutils about whether all inter-code offsets can fit in a 32-bit signed integer, or not. At least, this is my understanding of it. 2G, 4G, I'm not sure which. Either way, same principle.

> The advantage of pointercheck when memcheck is used is that although
> memcheck may spot a write to valgrind's memory and complain about it
> the write will still go ahead and corrupt valgrind's data structures
> leading to an obscure crash later on.
>
> With pointercheck the process is aborted immediately with a clear
> message that the client tried to mess with valgrind.

That is true. Well, we could optionally leave pointercheck enabled even with Memcheck if people wanted this.

Frankly I'm unsympathetic to the view that even after a program does a bad write which is reported by Memcheck, it should not actually trash Valgrind. My view is, once a bad write is reported, you can just forget the rest of your run on the basis that V _might_ have gotten trashed. I said as much in the valgrind-1.0 release notes, and I can't remember getting complaints about this behaviour.

> > A check is basically
> >
> > if (table[address >> n])
> > goto invalid-access
>
> Presumably the idea is that valgrind blocks will be marked as
> inaccessible in that table so that attempts by the client to access
> them in any way would fail, thus preserving the current pointercheck
> behaviour.

Duh, sorry, yes, forgot to mention that.

J
From: Tom H. <to...@co...> - 2005-02-14 11:39:28
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> On amd64 that scheme isn't going to work, since all
> client code has to be below 2G. A more flexible layout is
> needed.
Only for 32 bit client programs presumably. Even then the limit
is 4G surely?
> One observation (unless I am wrong) is that pointercheck
> is valueless when running Memcheck, since Memcheck should be
> able to catch all illegal accesses (if this isn't true then
> Memcheck is broken and we should fix it). That means we
> can omit pointercheck when doing Memcheck and so any
> software-based checking scheme will have zero overhead for
> our most-used tool.
The advantage of pointercheck when memcheck is used is that although
memcheck may spot a write to valgrind's memory and complain about it
the write will still go ahead and corrupt valgrind's data structures
leading to an obscure crash later on.
With pointercheck the process is aborted immediately with a clear
message that the client tried to mess with valgrind.
> Access checks are performed using a table of booleans (bytes),
> one for each superblock in the address space (with some qualification
> for 64-bit address spaces, see below). The checking code can easily
> enough be expressed in Vex's IR and turned into efficient machine
> code by Vex's back ends, so there is no portability issue there.
>
> A check is basically
>
> if (table[address >> n])
> goto invalid-access
Presumably the idea is that valgrind blocks will be marked as
inaccessible in that table so that attempts by the client to access
them in any way would fail, thus preserving the current pointercheck
behaviour.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Julian S. <js...@ac...> - 2005-02-14 11:19:17
I had a run-in last night with mmap. This, recent amd64
hackery, and contemplation of ppc and MacOSX, got me thinking
again about address space layout.
Currently on x86 we divide the address space at some point
(eg, 0x52000000), force the client to exist below that boundary
and give all above it to V. This scheme allows pointercheck to
be implemented for free using x86 segmentation.
On x86 that works well, although it does seem to have given
various kinds of brittleness when running large processes
and/or for people with unusual kernel/user address boundaries.
On amd64 that scheme isn't going to work, since all
client code has to be below 2G. A more flexible layout is
needed.
On ppc32, and other architectures, which I'm sure we'll
eventually get to, we can't implement pointercheck the way we
do now anyway.
To support MacOSX and other OSs, a more flexible approach
wouldn't hurt.
I've been thinking about a low-overhead, portable, software
implementation of pointercheck which would allow more flexibility.
One observation (unless I am wrong) is that pointercheck
is valueless when running Memcheck, since Memcheck should be
able to catch all illegal accesses (if this isn't true then
Memcheck is broken and we should fix it). That means we
can omit pointercheck when doing Memcheck and so any
software-based checking scheme will have zero overhead for
our most-used tool.
-------------
The idea is to divide the process' address space up into
large blocks, call them superblocks. Each superblock is
2^n sized and 2^n aligned (huge pages, if you like). Currently
I am thinking of having 64M-sized superblocks.
Each superblock is allocated either to the client, to valgrind,
or is unallocated. Valgrind intercepts and messes with the client's
mmap calls to ensure they fall inside client-allocated superblocks;
dually V ensures all its own code and data, including shadow data,
falls inside its own superblocks. Unallocated superblocks are
handed out on demand. If we need to work around host-specific
constraints (eg amd64 code < 2G) then specific superblock ranges
can be reserved for the host only.
Access checks are performed using a table of booleans (bytes),
one for each superblock in the address space (with some qualification
for 64-bit address spaces, see below). The checking code can easily
enough be expressed in Vex's IR and turned into efficient machine
code by Vex's back ends, so there is no portability issue there.
A check is basically
if (table[address >> n])
goto invalid-access
If the table is accessible at a fixed offset from the guest state
pointer, this becomes four insns on x86:
movl %addr, %tmp
shrl $n, %tmp
testb $0, offset(%ebp, %tmp, 1)
jz OK
<handle failure somehow>
OK:
There are a couple of options for <handle failure now>, which need
to be investigated for their impact on code size and the extent to
which they inhibit IR optimisation. Either way, they can be expressed
purely in IR, so no portability questions there either.
Why 64M-sized blocks? 64M is a coarse but just-about-manageable
granularity. On a 32-bit target, with 64M superblocks, the check
table has only 64 entries -- that is, 64 bytes -- so accesses to it
are unlikely to cause significant cache pollution (iow I hope all of
it will end up permanently in D1).
On a 64-bit target, we clearly cannot have a check table covering
the entire address space since that would require 2^38 entries.
However, even a 4096-entry table would cover 256GB of address space,
which is more than enough for the foreseeable future. The access
test would become a little more expensive because it would also have
to test that the upper bits of the address which are not covered by
the table, are all zero -- assuming the table maps the address
space of 0 - 256GB and not some other part.
So, what am I missing?
J
From: <js...@ac...> - 2005-02-14 04:00:24

Nightly build on phoenix ( SuSE 9.1 ) started at 2005-02-14 03:50:00 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 196 tests, 11 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_fcntl (stderr)
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1

From: Tom H. <to...@co...> - 2005-02-14 03:30:09

Nightly build on dunsmere ( Fedora Core 3 ) started at 2005-02-14 03:20:05 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind --num-callers=4 ./pushpopseg
rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 203 tests, 8 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_supp (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2005-02-14 03:23:28

Nightly build on audi ( Red Hat 9 ) started at 2005-02-14 03:15:02 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind --num-callers=4 ./int
rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind --num-callers=4 ./pushpopseg
rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 202 tests, 6 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2005-02-14 03:18:40

Nightly build on ginetta ( Red Hat 8.0 ) started at 2005-02-14 03:10:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

rm: cannot remove `vgcore.pid*': No such file or directory
(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind --num-callers=4 ./pushpopseg
rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 201 tests, 8 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/threadederrno (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2005-02-14 03:16:03

Nightly build on standard ( Red Hat 7.2 ) started at 2005-02-14 03:00:05 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

(cleanup operation failed: rm vgcore.pid*)
pushpopseg: valgrind --num-callers=4 ./pushpopseg
rcl_assert: valgrind --num-callers=4 ./rcl_assert
seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 201 tests, 9 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1

From: Tom H. <th...@cy...> - 2005-02-14 03:12:08

Nightly build on alvis ( Red Hat 7.3 ) started at 2005-02-14 03:05:13 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 201 tests, 13 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/post-syscall (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/threadederrno (stderr)
make: *** [regtest] Error 1

From: <je...@go...> - 2005-02-14 00:13:31

Quoting Nicholas Nethercote <nj...@cs...>:

> I'm trying to get self-hosting to work on my machine. I just switched to
> recent versions of gcc (3.4.3) and ld (binutils 2.15) which support PIE.
> But once I built Valgrind, I get a seg fault at startup. Diagnostic printfs
> tell me that things are ok until at least just before the call to
> jmp_with_stack() at the end of stage1.c:main2(). But the seg fault
> manifests before the first statement in vg_main.c:main() executes.

What's the fault address? What's the faulting instruction?

> The distro on the machine is an oldish Debian one; in particular the glibc
> version is 2.2.5:

In principle there should be no problem, but it wouldn't surprise me if there were some bug which prevents PIE from working. Or something we haven't taken into account.

J