From: Nuno L. <nun...@sa...> - 2008-04-17 23:40:03
Thanks for the answers!

>> - How did valgrind handle the case when a block is removed from the cache
>> and it has other blocks that jump directly into it (i.e. blocks that were
>> previously patched)? Did valgrind purged the whole cache? Did it scanned
>> the remaining blocks for references and "unpatched" blocks?
>
> Honestly .. I can't remember. But you ask the right questions at least;
> this is the #1 hard problem with chaining. Read the 2.4 sources ...

Ok, so as far as I understand, the cache is divided into 8 sectors, which
are managed in a FIFO way. When the oldest sector is purged, valgrind
searches all the other sectors' blocks for patched jumps and unpatches them.
This can be a good solution if the cache is not purged very often; otherwise
the cost may be prohibitive (whereas adding a little table to each superblock
with back-pointers to the referencing blocks would perform better, as would
Josef's idea in the other e-mail). Do you have any stats to confirm whether
that's the case? I don't know about the perftests, but I assume they are so
small they don't even fill up the whole cache, right?

> btw, for lots of background on code cache management, you might be
> interested to read Kim Hazelwood's PhD thesis:
> http://www.cs.virginia.edu/kim/docs/phd-thesis.pdf

Out of curiosity: why did you change from FIFO to LRU in valgrind 3? I'm
only asking because that thesis suggests that FIFO may be better.

Thanks,
Nuno
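The back-pointer table Nuno suggests can be sketched roughly as follows. This is a hypothetical illustration (the `Block` fields and the fixed `MAX_IN_EDGES` cap are invented for the example, not taken from valgrind's sources): each block records which predecessors were patched to jump into it, so deleting one block costs O(in-degree) unpatching work instead of a scan over the whole code cache.

```c
#define MAX_IN_EDGES 8   /* hypothetical fixed cap per block */

typedef struct Block Block;
struct Block {
    Block *in_edges[MAX_IN_EDGES]; /* blocks whose exits were patched to us */
    int    n_in_edges;
    Block *chained_to;             /* block our exit jump was patched toward */
};

/* Patch 'from' so its exit jumps directly to 'to', recording the
   back-pointer so the edge can be undone later.  Returns 0 on success,
   -1 if 'to' cannot accept more incoming edges (edge stays unchained). */
int chain(Block *from, Block *to)
{
    if (to->n_in_edges == MAX_IN_EDGES)
        return -1;                 /* table full: keep going via dispatcher */
    from->chained_to = to;         /* real code would patch the jump here */
    to->in_edges[to->n_in_edges++] = from;
    return 0;
}

/* Delete 'b': unpatch only the recorded predecessors. */
void unchain_all(Block *b)
{
    for (int i = 0; i < b->n_in_edges; i++)
        b->in_edges[i]->chained_to = 0; /* restore jump-to-dispatcher */
    b->n_in_edges = 0;
}
```

A fixed-size table that simply refuses to chain once full is a common compromise: it bounds the per-block memory overhead while still capturing the hot edges.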
From: Doug E. <dj...@go...> - 2008-04-17 23:14:04
On Fri, Mar 28, 2008 at 11:58 AM, <bar...@gm...> wrote:
> By the way, do you know how gdb handles variable info ? Does it read
> all variable info at once or does it read this information only when
> needed ?

GDB reads the info lazily. The granularity (basically) is at the object
file level.
From: Julian S. <js...@ac...> - 2008-04-17 23:13:59
On Friday 18 April 2008 00:37, Nuno Lopes wrote:
> Ok, so as far as I understood, the cache is divided in 8 sectors, which
> are managed in a FIFO way.

Yes.

> When the oldest sector is purged, it searches all the other sectors'
> blocks for patched jumps and unpatches them.

I'm confused. Are you asking how the current code works, or asking for
comments on proposed changes? At the moment (V 3.X) there is no chaining at
all; all translations are completely independent, and they do not change
once they are made. So right now there is nothing that needs to be done when
throwing away a sector - we just forget it exists, and start filling it up
again from empty. (Not completely true: we need to tell the host machine to
flush its icache, the "icbi" insn on ppc.)

> cost may be prohibitive (where adding a little table to each superblock
> with back-pointers to referencing blocks

I think that is the "standard" solution.

> that the case? I dunno about the perftests, but I assume they are so small
> they don't even fill up the whole cache, right?

The cache is huge - really, very large. It rarely fills up. To fill it up
completely requires (eg) starting OpenOffice and loading a .ppt presentation
and a .doc document, so as to force V to translate all the Powerpoint import
code and all the Word import code. For meaningful testing/evaluation, you
will need to make it much smaller, by changing N_TTES_PER_SECTOR in
m_transtab.c to some small prime number (use auxprogs/primes.c to produce
prime numbers).

> Out of curiosity: why did you change from FIFO to LRU in valgrind 3? I'm
> only asking this because that thesis suggests that FIFO may be better.

V 3 uses FIFO; we haven't used LRU in a very long time. LRU has the
disadvantage that you need to make some kind of mark/stamp every time a
translation is used, which reduces performance. FIFO avoids that.

J
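The sector rotation described above can be illustrated with a toy sketch (names and sizes are invented for the example; the real capacity is governed by N_TTES_PER_SECTOR, as the message notes): sectors are filled in order, and when the current one is full, the oldest is discarded wholesale and refilled from empty.

```c
#define N_SECTORS  8
#define SECTOR_CAP 4   /* tiny, for illustration; real sectors are huge */

typedef struct { int n_used; } Sector;

static Sector sectors[N_SECTORS];
static int current = 0;            /* sector currently being filled */

/* Make room for one translation; rotate FIFO-style when the current
   sector is full.  With round-robin filling, the next sector is the
   oldest one, and its entire contents are simply forgotten. */
int alloc_translation(void)
{
    if (sectors[current].n_used == SECTOR_CAP) {
        current = (current + 1) % N_SECTORS;
        sectors[current].n_used = 0;   /* discard oldest sector wholesale */
    }
    sectors[current].n_used++;
    return current;                    /* which sector we landed in */
}
```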
From: Julian S. <js...@ac...> - 2008-04-17 22:46:58
On Thursday 17 April 2008 22:29, Josef Weidendorfer wrote:
> On Thursday 17 April 2008, Julian Seward wrote:
> > > - How did valgrind handle the case when a block is removed from the
> > > cache and it has other blocks that jump directly into it (i.e. blocks
> > > that were previously patched)? Did valgrind purged the whole cache? Did
> > > it scanned the remaining blocks for references and "unpatched" blocks?
> >
> > Honestly .. I can't remember. But you ask the right questions at least;
> > this is the #1 hard problem with chaining. Read the 2.4 sources ...
To elaborate:
1. This is not only the #1 hard problem, really it's the only hard
problem
2. I'm not claiming the 2.4 solution is a good one. For a start it's
only x86 specific, and now we need something that works well for
ppc too. See below.
> Hmm... Wasn't the code cache partitioned into 8 or 16 parts, and
> filled/flushed in a FIFO like manner?
Yes, 8 parts ("sectors").
> If chaining is allowed only between blocks inside of one part,
Reasonable, but ..
> and all code of a part is always flushed at once, there is no scanning
> needed...
.. this is more of a problem. No problem when throwing away the
complete contents of a sector. But suppose we need to invalidate
only a few translations, then what? This isn't very common on x86/amd64,
but is very common on ppc (ld.so does one invalidate for each symbol
resolved, which means tens of thousands of invalidates for starting
OOo or for a KDE app.)
J
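Josef's restriction - chaining only between blocks inside one sector, so that flushing a whole sector never leaves dangling patched jumps - amounts to a guard like this (hypothetical sketch, not valgrind code):

```c
typedef struct {
    int sector;   /* which code-cache sector the block lives in */
    /* ... translated code, exit stubs, etc. ... */
} Block;

/* Chain only when both blocks die together, i.e. live in the same
   sector.  Cross-sector edges keep going through the dispatcher, so a
   whole-sector flush needs no scanning and no unpatching. */
int may_chain(const Block *from, const Block *to)
{
    return from->sector == to->sector;
}
```

As the message above points out, this handles whole-sector flushes but does nothing for selective invalidation of individual translations within a live sector.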
From: Josef W. <Jos...@gm...> - 2008-04-17 20:29:48
On Thursday 17 April 2008, Julian Seward wrote:
> > - How did valgrind handle the case when a block is removed from the cache
> > and it has other blocks that jump directly into it (i.e. blocks that were
> > previously patched)? Did valgrind purged the whole cache? Did it scanned
> > the remaining blocks for references and "unpatched" blocks?
>
> Honestly .. I can't remember. But you ask the right questions at least;
> this is the #1 hard problem with chaining. Read the 2.4 sources ...

Hmm... Wasn't the code cache partitioned into 8 or 16 parts, and
filled/flushed in a FIFO-like manner? If chaining is allowed only between
blocks inside of one part, and all code of a part is always flushed at once,
there is no scanning needed...

Josef
From: Julian S. <js...@ac...> - 2008-04-17 18:48:21
> > Because .. x86 doesn't contain an instruction "jmp $32-bit-constant",
> > unfortunately. It does contain "jmp $32-bit-offset-relative-to-pc",
> > which is what is normally used (+ there is a short form with an 8-bit
> > signed offset).
>
> You can do it in 32 bit mode can't you? Using FF/4 with a modrm
> that encodes a 32 bit constant? In 64 bit mode that becomes the
> encoding for a 32 bit RIP relative address though.
>
> The same goes for call using the FF/2 opcode.

Nearly, but, alas, no. FF/4 $imm32 is "goto *(void*)$imm32" and not merely
"goto $imm32"; that is, it dereferences its argument. Ditto FF/2.

J
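For reference, the two encodings under discussion can be sketched with a hypothetical byte emitter (illustration only, not vex code): E9 rel32 is the ordinary direct, pc-relative jump, while FF /4 with mod=00, r/m=101 (bytes FF 25 in 32-bit mode) jumps through memory at an absolute 32-bit address - the CPU loads the target from that address, which is exactly the dereference being described.

```c
#include <stdint.h>
#include <string.h>

/* E9 rel32: the ordinary direct, pc-relative "jmp target". */
size_t emit_jmp_rel32(uint8_t *buf, int32_t rel)
{
    buf[0] = 0xE9;
    memcpy(buf + 1, &rel, 4);
    return 5;
}

/* FF /4 with mod=00, r/m=101 (bytes FF 25, 32-bit mode): jump through
   memory at an absolute 32-bit address.  The CPU fetches the target
   from that address, i.e. it dereferences - there is no direct
   "jmp $imm32" form. */
size_t emit_jmp_mem32(uint8_t *buf, uint32_t addr)
{
    buf[0] = 0xFF;
    buf[1] = 0x25;                 /* modrm: mod=00 reg=100(/4) rm=101 */
    memcpy(buf + 2, &addr, 4);
    return 6;
}
```

In 64-bit mode the FF 25 form is reinterpreted as RIP-relative, as noted earlier in the thread.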
From: Filipe C. <fi...@gm...> - 2008-04-17 18:40:32
Hi

On 17 Apr, 2008, at 16:19, Julian Seward wrote:
> Because .. x86 doesn't contain an instruction "jmp $32-bit-constant",
> unfortunately. It does contain "jmp $32-bit-offset-relative-to-pc",
> which is what is normally used (+ there is a short form with an 8-bit
> signed offset).

I see. We forgot to check for that little detail, unfortunately. We'll take
a closer look at the block chaining for valgrind 2, and maybe think about
more optimizations.

Thanks for the reply,

- Filipe Cabecinhas
From: Tom H. <to...@co...> - 2008-04-17 16:05:31
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> On Thursday 17 April 2008 16:58, Filipe Cabecinhas wrote:
>
>> We have noticed that the assembly generated by a "boring goto" has an
>> indirect jump to $dispatcher_addr.
>>
>> example:
>> -- goto {Boring} 0x80483A1:I32
>> movl $0x80483A1,%eax ; movl $dispatcher_addr,%edx ; jmp *%edx
>>
>>
>> Why the indirection? The dispatcher address (vta->dispatch) is known
>> since the start of the translation and (as far as we can tell) isn't
>> changed in runtime. Couldn't we just emit a jmp $dispatcher_addr?
>
> Because .. x86 doesn't contain an instruction "jmp $32-bit-constant",
> unfortunately. It does contain "jmp $32-bit-offset-relative-to-pc",
> which is what is normally used (+ there is a short form with an 8-bit
> signed offset).
You can do it in 32 bit mode can't you? Using FF/4 with a modrm
that encodes a 32 bit constant? In 64 bit mode that becomes the
encoding for a 32 bit RIP relative address though.
The same goes for call using the FF/2 opcode.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Julian S. <js...@ac...> - 2008-04-17 15:44:02
> - How did valgrind handle the case when a block is removed from the cache
> and it has other blocks that jump directly into it (i.e. blocks that were
> previously patched)? Did valgrind purged the whole cache? Did it scanned
> the remaining blocks for references and "unpatched" blocks?

Honestly .. I can't remember. But you ask the right questions at least;
this is the #1 hard problem with chaining. Read the 2.4 sources ...

One problem to be aware of when constantly chaining and unchaining
translations is that it can lead to very expensive Icache-vs-Dcache
interlocks/invalidations by the hardware. Especially on Pentium 4, iirc.

> - Other question is what changes need to be done to the blocks in order to
> implement the block chaining? I assume I would need to add some prologue to
> blocks to check for e.g. thread time-slice end and signal handling stuff?

Yes. I'm sure 2.4.0 has a timeslice-and-event check at the start of each
translation. Try it and see - again, this is beginning to be a long time
ago.

What else: for most blocks, nothing. For blocks translated with
self-modifying-code support (try --smc-check=all), there is also a check at
the start to ensure that a CRC32 of the guest code for the block matches a
CRC32 made at the time the block was translated. If not, the block exits
immediately, requesting a retranslation.

btw, for lots of background on code cache management, you might be
interested to read Kim Hazelwood's PhD thesis:
http://www.cs.virginia.edu/kim/docs/phd-thesis.pdf

J
From: John R.
> We have noticed that the assembly generated by a "boring goto" has an
> indirect jump to $dispatcher_addr.
>
> example:
> -- goto {Boring} 0x80483A1:I32
> movl $0x80483A1,%eax ; movl $dispatcher_addr,%edx ; jmp *%edx
>
>
> Why the indirection? The dispatcher address (vta->dispatch) is known
> since the start of the translation and (as far as we can tell) isn't
> changed in runtime. Couldn't we just emit a jmp $dispatcher_addr?
That would work on i686 but not on x86_64 or powerpc or powerpc64.
Those other architectures have limited displacement for direct branches
and calls.
--
From: Julian S. <js...@ac...> - 2008-04-17 15:24:48
On Thursday 17 April 2008 16:58, Filipe Cabecinhas wrote:
> Hi,
>
> We have noticed that the assembly generated by a "boring goto" has an
> indirect jump to $dispatcher_addr.
>
> example:
> -- goto {Boring} 0x80483A1:I32
> movl $0x80483A1,%eax ; movl $dispatcher_addr,%edx ; jmp *%edx
>
>
> Why the indirection? The dispatcher address (vta->dispatch) is known
> since the start of the translation and (as far as we can tell) isn't
> changed in runtime. Couldn't we just emit a jmp $dispatcher_addr?
Because .. x86 doesn't contain an instruction "jmp $32-bit-constant",
unfortunately. It does contain "jmp $32-bit-offset-relative-to-pc",
which is what is normally used (+ there is a short form with an 8-bit
signed offset).
Unfortunately vex is constrained to generate position independent code,
which means a relative jump can't be used. This makes other parts of
the engineering simpler. Specifically, it means vex can generate the
code somewhere, and valgrind's core (m_translate, m_transtab) can copy
the resulting translation to wherever it likes for long-term storage.
You'll see the same problem for calls to helper functions, since there's
no call-absolute-address insn either. Except worse, since most SBs
instrumented by most tools have multiple helper calls in them.
There are two possible solutions, both kinda ugly and complex
(but standard and well-known):
1. Generate relative jmp/call insns. Also, generate a relocation
table for the translation, so we know how to adjust the offsets
when the translation is moved.
2. Valgrind decides beforehand where the translation will end up,
and vex generates it directly to that address.
(1) is generally more flexible, since it allows moving the translation
as many times as desired over its lifetime. Also (2) is difficult
because in general you might not know where you want the translation
to go until you know how big it is.
J
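Solution (1), the per-translation relocation table, might be sketched like this (a hypothetical illustration, not valgrind's actual data structures): the emitter records the offset of every rel32 field, and after the translation is copied to its long-term home each field is re-biased by the distance moved, since the target stays fixed while the jump's own address changed.

```c
#include <stdint.h>
#include <string.h>

#define MAX_RELOCS 16

typedef struct {
    uint8_t  code[256];               /* the emitted translation */
    size_t   code_len;
    size_t   reloc_offs[MAX_RELOCS];  /* offsets of rel32 fields in code[] */
    int      n_relocs;
} Translation;

/* After the translation is copied from old_base to new_base, every
   pc-relative rel32 field must be re-biased: the target address is
   unchanged, but the jump's own address moved by delta, so the
   relative displacement shrinks by delta. */
void apply_relocs(Translation *t, intptr_t old_base, intptr_t new_base)
{
    intptr_t delta = new_base - old_base;
    for (int i = 0; i < t->n_relocs; i++) {
        int32_t rel;
        memcpy(&rel, t->code + t->reloc_offs[i], 4);
        rel -= (int32_t)delta;
        memcpy(t->code + t->reloc_offs[i], &rel, 4);
    }
}
```

This is the same bookkeeping a linker does for pc-relative relocations, which is why the approach supports moving the translation as many times as desired.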
From: Filipe C. <fi...@gm...> - 2008-04-17 14:58:32
Hi,
We have noticed that the assembly generated by a "boring goto" has an
indirect jump to $dispatcher_addr.
example:
-- goto {Boring} 0x80483A1:I32
movl $0x80483A1,%eax ; movl $dispatcher_addr,%edx ; jmp *%edx
Why the indirection? The dispatcher address (vta->dispatch) is known
since the start of the translation and (as far as we can tell) isn't
changed in runtime. Couldn't we just emit a jmp $dispatcher_addr?
Thanks in advance,
- Filipe Cabecinhas
From: Vladimir S. <vla...@gm...> - 2008-04-17 14:56:38
Hi,

I'm trying to raise a synchronous signal from Valgrind before some specific
detected load, so I can halt before that load and call some function from
the user code. I'm raising the signal with VG_(kill)(VG_(getpid)(),
VKI_SIGUSR1), but it is not served synchronously: my signal handler in the
user code takes a long time to catch the signal. How can I raise a
synchronous signal?

Alternatively, is there any other way to detect some instruction in Valgrind
(somewhere in the middle of a block) and call some function from the user
code before that instruction? The signal handler to be called doesn't have
to be in the user code, but it would contain some calls to the MPI library
(so I don't see where else it could be).

Thanks in advance,
Vladimir Subotic
From: trevor h. <tr...@ly...> - 2008-04-17 12:44:44
Index: coregrind/m_syswrap/syswrap-amd64-linux.c
===================================================================
--- coregrind/m_syswrap/syswrap-amd64-linux.c (revision 7884)
+++ coregrind/m_syswrap/syswrap-amd64-linux.c (working copy)
@@ -1168,7 +1168,7 @@
GENXY(__NR_times, sys_times), // 100
PLAXY(__NR_ptrace, sys_ptrace), // 101
GENX_(__NR_getuid, sys_getuid), // 102
- // (__NR_syslog, sys_syslog), // 103
+ LINXY(__NR_syslog, sys_syslog), // 103
GENX_(__NR_getgid, sys_getgid), // 104
GENX_(__NR_setuid, sys_setuid), // 105
@@ -1253,7 +1253,7 @@
// (__NR_setdomainname, sys_setdomainname), // 171
GENX_(__NR_iopl, sys_iopl), // 172
LINX_(__NR_ioperm, sys_ioperm), // 173
- // (__NR_create_module, sys_ni_syscall), // 174
+ GENX_(__NR_create_module, sys_ni_syscall), // 174
// (__NR_init_module, sys_init_module), // 175
// (__NR_delete_module, sys_delete_module), // 176
Index: README_MISSING_SYSCALL_OR_IOCTL
===================================================================
--- README_MISSING_SYSCALL_OR_IOCTL (revision 7884)
+++ README_MISSING_SYSCALL_OR_IOCTL (working copy)
@@ -46,16 +46,16 @@
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Removing the debug printing clutter, it looks like this:
- PRE(time)
+ PRE(sys_time)
{
/* time_t time(time_t *t); */
PRINT("time ( %p )",arg1);
if (arg1 != (UWord)NULL) {
- PRE_MEM_WRITE( "time", arg1, sizeof(time_t) );
+ PRE_MEM_WRITE( "time", arg1, sizeof(vki_time_t) );
}
}
- POST(time)
+ POST(sys_time)
{
if (arg1 != (UWord)NULL) {
POST_MEM_WRITE( arg1, sizeof(vki_time_t) );
@@ -134,8 +134,7 @@
dependant ones (in syswrap-$(PLATFORM)-linux.c).
The *XY variant if it requires a PRE() and POST() function, and
the *X_ variant if it only requires a PRE()
- function. The 2nd arg of these macros indicate if the syscall
- could possibly block.
+ function.
If you find this difficult, read the wrappers for other syscalls
for ideas. A good tip is to look for the wrapper for a syscall
From: Tom H. <th...@cy...> - 2008-04-17 03:20:20
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2008-04-17 03:15:04 BST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 330 tests, 75 stderr failures, 1 stdout failure, 29 post failures == memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/erringfds (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/lsframe1 (stderr) memcheck/tests/lsframe2 (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/noisy_child (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/varinfo1 (stderr) memcheck/tests/varinfo2 (stderr) memcheck/tests/varinfo3 (stderr) memcheck/tests/varinfo4 (stderr) memcheck/tests/varinfo5 (stderr) memcheck/tests/varinfo6 (stderr) memcheck/tests/x86/bug152022 (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stderr) massif/tests/alloc-fns-A (post) massif/tests/alloc-fns-B (post) massif/tests/basic (post) massif/tests/basic2 (post) massif/tests/big-alloc (post) massif/tests/culling1 (stderr) massif/tests/culling2 (stderr) massif/tests/custom_alloc (post) massif/tests/deep-A (post) massif/tests/deep-B (stderr) 
massif/tests/deep-B (post) massif/tests/deep-C (stderr) massif/tests/deep-C (post) massif/tests/deep-D (post) massif/tests/ignoring (post) massif/tests/insig (post) massif/tests/long-names (post) massif/tests/long-time (post) massif/tests/new-cpp (post) massif/tests/null (post) massif/tests/one (post) massif/tests/overloaded-new (post) massif/tests/peak (post) massif/tests/peak2 (stderr) massif/tests/peak2 (post) massif/tests/realloc (stderr) massif/tests/realloc (post) massif/tests/thresholds_0_0 (post) massif/tests/thresholds_0_10 (post) massif/tests/thresholds_10_0 (post) massif/tests/thresholds_10_10 (post) massif/tests/thresholds_5_0 (post) massif/tests/thresholds_5_10 (post) massif/tests/zero1 (post) massif/tests/zero2 (post) none/tests/blockfault (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/shell (stderr) none/tests/shell_valid1 (stderr) none/tests/shell_valid2 (stderr) none/tests/shell_valid3 (stderr) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/hg06_readshared (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc02_simple_tls (stderr) helgrind/tests/tc03_re_excl (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc12_rwl_trivial (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr) |
From: Tom H. <th...@cy...> - 2008-04-17 03:13:45
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2008-04-17 03:15:04 BST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 326 tests, 75 stderr failures, 1 stdout failure, 29 post failures == memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/erringfds (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/lsframe1 (stderr) memcheck/tests/lsframe2 (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/noisy_child (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/varinfo1 (stderr) memcheck/tests/varinfo2 (stderr) memcheck/tests/varinfo3 (stderr) memcheck/tests/varinfo4 (stderr) memcheck/tests/varinfo5 (stderr) memcheck/tests/varinfo6 (stderr) memcheck/tests/x86/bug152022 (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stderr) massif/tests/alloc-fns-A (post) massif/tests/alloc-fns-B (post) massif/tests/basic (post) massif/tests/basic2 (post) massif/tests/big-alloc (post) massif/tests/culling1 (stderr) massif/tests/culling2 (stderr) massif/tests/custom_alloc (post) massif/tests/deep-A (post) massif/tests/deep-B (stderr) 
massif/tests/deep-B (post) massif/tests/deep-C (stderr) massif/tests/deep-C (post) massif/tests/deep-D (post) massif/tests/ignoring (post) massif/tests/insig (post) massif/tests/long-names (post) massif/tests/long-time (post) massif/tests/new-cpp (post) massif/tests/null (post) massif/tests/one (post) massif/tests/overloaded-new (post) massif/tests/peak (post) massif/tests/peak2 (stderr) massif/tests/peak2 (post) massif/tests/realloc (stderr) massif/tests/realloc (post) massif/tests/thresholds_0_0 (post) massif/tests/thresholds_0_10 (post) massif/tests/thresholds_10_0 (post) massif/tests/thresholds_10_10 (post) massif/tests/thresholds_5_0 (post) massif/tests/thresholds_5_10 (post) massif/tests/zero1 (post) massif/tests/zero2 (post) none/tests/blockfault (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/shell (stderr) none/tests/shell_valid1 (stderr) none/tests/shell_valid2 (stderr) none/tests/shell_valid3 (stderr) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/hg06_readshared (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc02_simple_tls (stderr) helgrind/tests/tc03_re_excl (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc12_rwl_trivial (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr) |
From: Tom H. <th...@cy...> - 2008-04-17 03:06:40
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2008-04-17 03:05:10 BST Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 413 tests, 5 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc22_exit_w_lock (stderr) |
From: Tom H. <th...@cy...> - 2008-04-17 03:04:09
Nightly build on aston ( x86_64, Fedora Core 5 ) started at 2008-04-17 03:20:13 BST Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 419 tests, 8 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/x86/scalar (stderr) none/tests/blockfault (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc08_hbl2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 419 tests, 8 stderr failures, 1 stdout failure, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/x86/scalar (stderr) none/tests/blockfault (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Thu Apr 17 03:43:00 2008 --- new.short Thu Apr 17 04:04:13 2008 *************** *** 8,10 **** ! == 419 tests, 8 stderr failures, 1 stdout failure, 0 post failures == memcheck/tests/malloc_free_fill (stderr) --- 8,10 ---- ! 
== 419 tests, 8 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) *************** *** 15,16 **** --- 15,17 ---- none/tests/mremap2 (stdout) + helgrind/tests/tc08_hbl2 (stdout) helgrind/tests/tc20_verifywrap (stderr) |
From: Tom H. <th...@cy...> - 2008-04-17 02:48:56
Nightly build on trojan ( x86_64, Fedora Core 6 ) started at 2008-04-17 03:25:07 BST Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 417 tests, 8 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/bug133694 (stdout) memcheck/tests/x86/bug133694 (stderr) memcheck/tests/x86/scalar (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 417 tests, 7 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/bug133694 (stdout) memcheck/tests/x86/bug133694 (stderr) memcheck/tests/x86/scalar (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Thu Apr 17 03:34:51 2008 --- new.short Thu Apr 17 03:49:02 2008 *************** *** 8,10 **** ! == 417 tests, 7 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) --- 8,10 ---- ! 
== 417 tests, 8 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) *************** *** 18,19 **** --- 18,20 ---- none/tests/mremap2 (stdout) + helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc20_verifywrap (stderr) |
|
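[Editor's note: the "Difference between 24 hours ago and now" section in these reports has the shape of a context-format diff between the previous and current one-line-per-failure summaries (here named `old.short` and `new.short`, as in the report headers). A minimal sketch of how such a section could be produced, assuming GNU/POSIX `diff` with the standard `-c` (context) option; the file contents below are illustrative, not taken from the actual nightly script:]

```shell
#!/bin/sh
# Sketch: reproduce the context-diff layout seen in the nightly reports.
# The two summary files here are fabricated examples, not real nightly output.
printf '%s\n' \
  '== 2 tests, 1 stderr failures, 0 stdout failures, 0 post failures ==' \
  'memcheck/tests/pointer-trace (stderr)' > old.short
printf '%s\n' \
  '== 2 tests, 2 stderr failures, 0 stdout failures, 0 post failures ==' \
  'memcheck/tests/pointer-trace (stderr)' > new.short

# "diff -c" emits the "*** old / --- new" header, "***************"
# hunk separators, and "!"/"+"/"-" change markers seen above.
# diff exits nonzero when the files differ, so tolerate that status.
diff -c old.short new.short || true
```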
From: Tom H. <th...@cy...> - 2008-04-17 02:40:06
Nightly build on dellow ( x86_64, Fedora 8 ) started at 2008-04-17 03:10:07 BST

Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 413 tests, 8 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)

=================================================
== Results from 24 hours ago ==
=================================================

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 413 tests, 8 stderr failures, 3 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/pth_cvsimple (stdout)
helgrind/tests/tc18_semabuse (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================

*** old.short	Thu Apr 17 03:26:59 2008
--- new.short	Thu Apr 17 03:40:05 2008
***************
*** 8,10 ****
! == 413 tests, 8 stderr failures, 3 stdout failures, 0 post failures ==
  memcheck/tests/pointer-trace (stderr)
--- 8,10 ----
! == 413 tests, 8 stderr failures, 2 stdout failures, 0 post failures ==
  memcheck/tests/pointer-trace (stderr)
***************
*** 15,17 ****
  none/tests/mremap2 (stdout)
- none/tests/pth_cvsimple (stdout)
  helgrind/tests/tc18_semabuse (stderr)
--- 15,16 ----
From: Tom H. <th...@cy...> - 2008-04-17 02:24:59
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2008-04-17 03:00:06 BST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 419 tests, 30 stderr failures, 3 stdout failures, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/amd64/insn_ssse3 (stdout)
none/tests/amd64/insn_ssse3 (stderr)
none/tests/amd64/ssse3_misaligned (stderr)
none/tests/blockfault (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/insn_ssse3 (stdout)
none/tests/x86/insn_ssse3 (stderr)
none/tests/x86/ssse3_misaligned (stderr)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)