From: Jeremy F. <je...@go...> - 2004-08-12 23:48:08
|
On Thu, 2004-08-12 at 23:49 +0100, Nicholas Nethercote wrote:
> Lazy debug reading would be nice if it wasn't too difficult. We had
> that at one point, but changed it; I can't remember why but there was
> a reason.
It was because we started intercepting function calls. The simple thing
was to make all symtab loading eager, though for interception we just
need the symbol table, and not the full debug info. So we could defer
loading file/line info and type info until we actually need it, but
still load the name->address->name mapping eagerly.
J
|
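The split Jeremy describes — eager symbol table for interception, deferred file/line info — can be sketched as follows. This is a toy Python illustration, not Valgrind's actual C internals; all names (`SegInfo`, `load_symtab`, `describe_addr`) and values are made up for the example.

```python
class SegInfo:
    """Toy per-object debug state; names are illustrative only."""
    def __init__(self, filename):
        self.filename = filename
        self.symtab = {}        # name -> address: loaded eagerly
        self.line_info = None   # file/line info: loaded lazily

    def load_symtab(self):
        # Eager part: just enough to intercept functions by name.
        self.symtab = {"pthread_mutex_lock": 0x4000}

    def _ensure_debug_info(self):
        # Lazy part: pretend to parse DWARF line tables on first use.
        if self.line_info is None:
            self.line_info = {0x4000: ("mutex.c", 123)}

    def describe_addr(self, addr):
        self._ensure_debug_info()   # deferred load happens here
        f, line = self.line_info.get(addr, ("???", 0))
        return f"{addr:#x} ({f}:{line})"

si = SegInfo("libdemo.so")
si.load_symtab()                 # interception works immediately
assert si.line_info is None      # debug info not read yet
print(si.describe_addr(0x4000))  # first stack trace pulls it in
```

The point of the shape: the name/address mapping is available as soon as the object is mapped, while the heavier line-table parsing is paid for only if an error report or stack trace actually needs it.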
|
From: Jeremy F. <je...@go...> - 2004-08-12 23:44:33
|
On Thu, 2004-08-12 at 19:07 +0100, Nicholas Nethercote wrote:
> It's been a good thread! Now that it's died down, I feel it's worth
> summarising, so that what we've said and learnt doesn't get lost.
> Basically, there were a few key ideas expressed, and I want to gather
> them all in a coherent whole.
Great summary.
> Jeremy thinks the light parallel pthread state could be fragile, and
> suggested hooking into library function calls and observing the client
> behaviour. For example, if the client calls pthread_mutex_lock(), and
> then that thread ends up blocking in sys_futex() before returning, we
> can deduce that it blocked in a lock. But that could be fragile too.
It would be fragile, but only to the extent of producing false messages
- the program's execution shouldn't change. As opposed to now, where
"fragile" means total breakage.
> Tom thinks the pthread checking could be moved out of core into a
> separate tool [in which case the pthread state is the tool's problem,
> not the core's, just like A bits, V bits, FD tracking, etc; also, the
> assumption of pthreads is then within the tool, not the core].
Oh, I missed this point. I don't think this is a bad idea, but it does
suggest that we might want to be able to load multiple orthogonal tools
at once. It would be impossibly hard to deal with two tools both doing
code instrumentation, but if one tool wants to instrument code and
another wants to hook pthreads, then they should be able to co-exist.
(Even multiple library hooks, like massif+memcheck, might be possible.)
Otherwise, if we factor everything out into separate tools, it will take
more Valgrind passes to get good comprehensive coverage.
> [Summary: it's unclear which is better, and even if one has to be chosen
> exclusively over the other -- isn't the current system kind of a mix?
> As for complexity/code size, with all else being equal (ie. assuming
> they're both feasible) which is better would depend on the size of the
> "generic innards" in the thick model, plus the size of the OS-specific
> parts in each model, plus the number of OSes to which Valgrind is
> ported. And these sizes depend on various characteristics of the OSes
> involved. So there's no clear answer there.]
Right. Some parts of the "thin" model are thicker than others, but I've
tried to make it all as thin as possible.
> Eric suggested an alternative approach to the whole threading/signals
> mess that I didn't really understand [sorry, Eric] but Jeremy didn't
> seem to like it.
My understanding of Eric's suggestion is: go multithreaded and ignore
the problems. That is, run each client thread in its own kernel clone
thread, and let them touch the shadow data without any synchronization.
His logic is that any correct program will already be doing locking or
some other synchronization for its own data, which will naturally also
protect the corresponding shadow data accesses. Incorrect programs with
bad synchronization will update their own memory in a non-deterministic
way, and as a side-effect will cause non-deterministic shadow updates.
He further argues that the only tool which really needs to be resistant
to this is helgrind, since it's explicitly there to fix concurrency
problems.
The idea is appealingly simple, but I have some concerns:
1. some correct programs have non-deterministic memory access
patterns. For example they may have two threads writing to one
location without caring which one "wins", so long as one does.
2. even if the program is incorrect, we want to give the user the
best chance of finding the problem, which is best done by giving
results that are as repeatable as possible
J
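A minimal illustration of the shadow-state coupling at issue here: every client store implies a tool-side shadow update, so under Eric's no-locking scheme a racy client store becomes a racy shadow update too. This is a toy model (one byte of simplified "V bits" per client byte), not memcheck's real representation.

```python
client = bytearray(8)   # the client program's memory
shadow = bytearray(8)   # simplified V bits: 0 = undefined, 1 = defined

def client_store(addr, value):
    # Every client store carries a shadow update with it; in Eric's
    # scheme neither update is protected by any tool-side lock, so
    # whatever ordering the client's threads produce, the shadow
    # state sees the same ordering.
    client[addr] = value
    shadow[addr] = 1

client_store(3, 0xAB)
assert client[3] == 0xAB and shadow[3] == 1
assert shadow[4] == 0   # untouched bytes stay undefined
```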
|
|
From: Nicholas N. <nj...@ca...> - 2004-08-12 22:50:06
|
On Thu, 12 Aug 2004, Eric Estievenart wrote:
> Sorry if I bother you, I didn't mind. Just trying to help
> a bit.
Thanks for the input. I don't think Tom was bothered, he's just direct
when he disagrees with an idea :)
Here's my view on the problem.
The fundamental problem here is that code locations used in stack traces
are expressed as memory addresses, ie. the memory address that an
instruction is loaded at. However, this is not a great way of doing it,
as the addresses can become out of date (if the code is unloaded) or
change (if the code is reloaded).
Ultimately, code locations should be expressed as object code locations
and source code locations, since that's what we're interested in (for
printing the error messages). The source code location is the
file/line/number triple. The object code location differs; for code
that's always mapped into the same place, a code address is ok, because
that's always the same. For shared object code, something like an
offset into the shared object is more informative than the direct
address.
So code locations in stack traces should probably be expressed in this
alternative way that never goes out of date; the tricky part is doing so
in a way such that the size of the stack traces doesn't increase a lot.
Keeping the debug info in memory when code is unloaded doesn't seem like
a good idea.
Lazy debug reading would be nice if it wasn't too difficult. We had
that at one point, but changed it; I can't remember why but there was a
reason.
Incremental DWARF reading would be cool, as it would reduce the amount
of debug info reading, which could help performance and save space.
N
|
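The (object, offset) representation Nicholas suggests might look like this sketch — `durable_loc` and the `Mapping` structure are hypothetical names for the example, not Valgrind's real API.

```python
class Mapping:
    """One loaded object: name, load base, size (illustrative)."""
    def __init__(self, objname, base, size):
        self.objname, self.base, self.size = objname, base, size

def durable_loc(addr, mappings):
    """Translate a raw code address into an (object, offset) pair
    that stays valid across unload/reload."""
    for m in mappings:
        if m.base <= addr < m.base + m.size:
            return (m.objname, addr - m.base)
    return ("<unknown>", addr)   # e.g. the fixed-address main exe

maps = [Mapping("libfoo.so", 0x40000000, 0x10000)]
loc = durable_loc(0x40001234, maps)

# If libfoo.so is later remapped at a different base, the pair still
# names the same instruction, where the raw address would not.
maps2 = [Mapping("libfoo.so", 0x50000000, 0x10000)]
assert durable_loc(0x50001234, maps2) == loc
```

The pair costs two words per frame rather than one, which is exactly the stack-trace size concern raised in the message.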
|
From: <js...@ac...> - 2004-08-12 20:39:28
|
Nightly build on phoenix ( SuSE 9.1 ) started at 2004-08-12 21:32:47 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
sem: valgrind ./sem
semlimit: valgrind ./semlimit
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 171 tests, 4 stderr failures, 0 stdout failures =================
corecheck/tests/as_mmap (stderr)
corecheck/tests/fdleak_fcntl (stderr)
memcheck/tests/writev (stderr)
memcheck/tests/zeropage (stderr)
make: *** [regtest] Error 1
|
|
From: Nicholas N. <nj...@ca...> - 2004-08-12 18:07:47
|
Hi,
It's been a good thread! Now that it's died down, I feel it's worth
summarising, so that what we've said and learnt doesn't get lost.
Basically, there were a few key ideas expressed, and I want to gather
them all in a coherent whole.
I've summarised different topics within the thread below. My additional
comments are in square brackets. My final summary is at the bottom.
Apologies if I misrepresent anyone. Please reply to this if I've made
mistakes, confused or omitted things, or you just have some new ideas :)
N
OS stuff is a problem, arch stuff is not
----------------------------------------
Nick began the thread out of concern that the
scheduling/threads/signals/proxyLWP stuff is a mess: that it's big,
complicated, hard to maintain, and something needs to be done. Jeremy
agreed, and said this is because OS details are not fixed, well-defined
and well documented.
Nick, Jeremy and Julian all agreed that the arch-specific stuff is much
less of a problem in this respect.
[Summary: Everyone seems to agree with these two propositions.]
Dropping sequential execution is very difficult
-----------------------------------------------
Jeremy stated his design goal for the ProxyLWP stuff, which was to use
as much of the kernel machinery as possible so we don't have to emulate
it. In this respect, running the app threads in their own kernel
threads would probably make a lot of things simpler, if other
constraints can be satisfied.
But Julian doesn't think it can be done, because protection of shadow
state would be too expensive, that sequential execution is necessary,
but would be happy to be proven wrong. Tom agreed it would be very
hard.
Jeremy said the only way to do this is either by putting locks around
all the critical sections, or by using lock-free algorithms. The first
is expensive, and the second is v. complex in general. He's been
thinking about adaptive locking, whereby only shared memory is protected
by locks, but it's tricky, and not necessarily a performance win.
[Summary: barring sudden bursts of genius this doesn't look possible.]
Replace pthread interception with clone() interception
------------------------------------------------------
Jeremy suggested dropping all the pthreads support from the core. The
scheduler doesn't really need to know much about a thread's state other
than its running or blocked on some event. This means that to do
threading, all OS ports must do enough to support the native threads
library. This may be complex, but means vg_libpthread.c can be dropped.
Julian voted in favour, and described it in terms of maintaining
single-threadedness by intercepting/simulating at the clone() level,
rather than pthreads level. That also would break the dependence on
pthreads [but see next paragraph].
Tom likes the idea too, and is looking into whether such an approach
tried by Adam Gundy would work. Jeremy suggested it too.
Along with this, Julian also said that for tools like Helgrind, we need
to intercept the relevant functions (pthread_mutex_lock() etc), note
they have happened, but let the native thread library handle it. At
least some simulation of the pthreads machinery is needed to find
errors, but this can be much lighter than the full simulation. [Nb:
assuming pthreads here, and see Tom's point about pthread checking
below]
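The "note the call, then let the native library do the work" idea can be illustrated with a toy wrapper — a Python stand-in for intercepting pthread_mutex_lock(): the checker records the event while delegating to the real lock, so client behaviour is unchanged.

```python
import threading

events = []   # what the checking tool gets to see

class TracedLock:
    """Wrapper lock: record the event, then defer to the native lock."""
    def __init__(self):
        self._real = threading.Lock()

    def acquire(self):
        events.append("lock-acquire")   # tell the tool it happened
        return self._real.acquire()     # real library does the work

    def release(self):
        events.append("lock-release")
        return self._real.release()

l = TracedLock()
l.acquire()
l.release()
assert events == ["lock-acquire", "lock-release"]
```

This is the "much lighter than the full simulation" shape: the tool sees the sequence of locking events without having to implement the mutex semantics itself.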
Jeremy thinks the light parallel pthread state could be fragile, and
suggested hooking into library function calls and observing the client
behaviour. For example, if the client calls pthread_mutex_lock(), and
then that thread ends up blocking in sys_futex() before returning, we
can deduce that it blocked in a lock. But that could be fragile too.
Tom thinks the pthread checking could be moved out of core into a
separate tool [in which case the pthread state is the tool's problem,
not the core's, just like A bits, V bits, FD tracking, etc; also, the
assumption of pthreads is then within the tool, not the core].
Jeremy also said one of the big advantages of dropping the pthreads
stuff is that we can drop all those ThreadStates associated with them,
which will make the thread state machine (which is currently complex and
requires a lot of checking) much more manageable, and this could help fix
some bugs.
[Summary:
- This is a widely supported idea, but the exact details remain to be
seen. Tom is/might be looking into it. See ProxyLWP topic below for
a caveat about 2.4 vs. 2.6 kernels.
- Moving pthread checking out of a core into a separate tool is a good
idea.
]
1:1 vs. N:M Threading models?
-----------------------------
Bob suggested that Linux may one day need to use a multi-level threading
model (multiple user threads per kernel thread, also known as N:M),
although Jeremy noted that Solaris has apparently gone to a 1:1 (one
kernel thread per user thread). Tom noted that Linux's NPTL uses 1:1
too, and Paul seconded it.
Bob also noted that OSes using N:M will cause problems if Valgrind
relies on intercepting clone(); Jeremy agreed, and noted another
approach would be required.
[Summary: 1:1 is here to stay on Linux, but other OSes use N:M, so we
should be aware of this.]
Scheduler as discrete event simulator?
--------------------------------------
Julian suggested that the scheduler could be modelled as a discrete
event simulation, based on a time-ordered queue of future events.
Ie., an abstraction framework based explicitly on the notions of state
and events, and around inheritance/opacity, by which details of events
specific to a specific environment (simulation) can be localised in a
module supporting that environment, instead of being scattered across
the entire code base.
There would be a mix of core events and state, and OS-specific events
and state. The core would handle core stuff, and give the OS-specific
ones to the OS module.
The kinds of events that would be in the queue might be:
* deliver a signal
* signal delivery done (if that makes sense, I suspect not)
* mess with host signals in some way
* check to see if some event has happened by some specific time
* syscall initiation, termination
* run a valgrind thread for a while
* low-level synchronisation events between V-scheduled threads
The aim being to decouple the V core as much as possible from the
env-simulation specifics. This might even allow running eg. a raw ARM
image on an x86 host which expects little or no OS at all, and much of
this would be opaque to the core. [Nb: assumes cross-arch translation is
working...]
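Assuming a time-ordered queue of the event kinds listed above, Julian's model can be sketched with a binary heap. Everything here (event names, the `post` helper) is illustrative only.

```python
import heapq

queue = []   # (time, kind, payload) tuples, ordered by time

def post(time, kind, payload=None):
    heapq.heappush(queue, (time, kind, payload))

# Events of the kinds listed above, posted out of order:
post(5, "deliver-signal", "SIGSEGV for thread 2")
post(1, "run-thread", "thread 1")
post(3, "syscall-complete", "thread 2 read()")

fired = []
while queue:
    time, kind, payload = heapq.heappop(queue)
    # Core events would be handled here; OS-specific ones would be
    # handed opaquely to the OS/environment module.
    fired.append((time, kind))

assert fired == [(1, "run-thread"),
                 (3, "syscall-complete"),
                 (5, "deliver-signal")]
```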
Jeremy thinks the discrete event simulation is largely already there,
that the interface between vg_scheduler and the rest of the OS-interface
code is already pretty thin, and could be cleaned up pretty easily (this
is assuming you ignore the pthreads stuff, which is a lot of the
apparent complexity).
The scheduler loop is basically:
for(ever)
for(each_thread)
if runnable
run it
if (nothing ran)
idle()
And idle() is responsible for collecting async events from elsewhere,
the events being timeouts, signal polling (2.4 kernel only) and kernel
events (eg. thread waking up due to syscall interruption/completion).
Most of the work (which is the part which has Nick worried, and
definitely needs cleaning up) is done by things which directly change
the thread state in response to the event. For syscalls, its just "keep
running normally"; for signals, it loads a whole new CPU context to run
the signal handler.
If we move all the CPU-specific stuff out of vg_scheduler.c (the actual
context-switching machinery) and drop all the libpthread stuff, then
vg_scheduler will essentially be just this event loop, which is nice and
simple.
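The loop above, fleshed out as a runnable toy: thread states and the idle() fallback are simplified stand-ins for the real machinery, with "run it" reduced to consuming one timeslice.

```python
RUNNABLE, BLOCKED, DONE = "runnable", "blocked", "done"

class Thread:
    def __init__(self, name, steps):
        self.name, self.steps, self.state = name, steps, RUNNABLE

def idle(events):
    # Collect async events (timeouts, signal polls, kernel events)
    # and use them to wake blocked threads.
    if events:
        events.pop(0).state = RUNNABLE

def schedule(threads, events):
    trace = []
    while any(t.state != DONE for t in threads):
        ran = False
        for t in threads:
            if t.state == RUNNABLE:
                trace.append(t.name)   # "run it" for one timeslice
                t.steps -= 1
                if t.steps == 0:
                    t.state = DONE
                ran = True
        if not ran:
            idle(events)               # nothing ran: wait for events
    return trace

a, b = Thread("A", 2), Thread("B", 1)
b.state = BLOCKED                      # say B is blocked in a syscall
assert schedule([a, b], [b]) == ["A", "A", "B"]
```

B only runs after the idle() path wakes it, mirroring how syscall completion re-enters the scheduler in the description above.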
Well, the other part is dealing with client requests coming out of the
running client instruction stream. This is also a pretty thin
interface. If it's a client request, then do it; if it's a syscall, then
just call the arch/OS-specific "do syscall" routine, saying "this thread
N has a syscall set up in its context, go do it"; vg_scheduler itself
doesn't need to know any other details.
[Summary: it would be good to make this scheduler-as-state-machine
aspect clearer in the current code, and some of the proposed changes
above could help with that. However, Julian and Jeremy seem to have
different ideas about the level of abstractness involved.]
Architectural approaches: thin model vs. thick model
-----------------------------------------------------
In response to Julian's idea of running ARM code on x86 and the
scheduler-as-discrete-event-simulator idea, Jeremy stated his
assumptions:
1. No cross-system emulation, so
2. Our clients are always native apps which think they're running on
the underlying OS
Because Valgrind is a debugging tool, and should therefore try as hard
as possible to not change the behaviour of the target program.
This leads to two models:
Thin model:
Client application
- - - Valgrind interposer layer - - -
kernel
Thick model:
Client application
- - - - Valgrind client interface - - -
Generic Valgrind innards
- - - - Valgrind kernel interface - - -
kernel
In the thin model, we have a chunk of code which takes application
requests, massage them as little as necessary, and (often) pass them
through to the kernel; and conversely, take kernel events, massage them
a little and pass them to the app. The intent is to expose the raw
kernel behaviour to the application as much as possible, because that's
what it expects to see.
In the thick model, we have a nice chunk of generic code, which doesn't
depend much on OS or CPU services (like address space translation),
because it does it all for itself. But it needs two
interface layers: the top one to make Valgrind's innards look like the
kernel to the application, and the bottom one to make Valgrind's innards
look like the application to the kernel.
Jeremy thinks the thick model will be much more complex than the thin
model.
Julian's not sure, but thinks that the thick model might be better,
because it might make the OS-specific code for each OS small. But he
admits he doesn't know if it's really viable -- whereas at least Jeremy
has demonstrated that his scheme works well at least for Linux.
[Summary: it's unclear which is better, and even if one has to be chosen
exclusively over the other -- isn't the current system kind of a mix?
As for complexity/code size, with all else being equal (ie. assuming
they're both feasible) which is better would depend on the size of the
"generic innards" in the thick model, plus the size of the OS-specific
parts in each model, plus the number of OSes to which Valgrind is
ported. And these sizes depend on various characteristics of the OSes
involved. So there's no clear answer there.]
ProxyLWP shortcomings
---------------------
About proxyLWP, Jeremy said it fell short of his goals. First, 2.4 and
2.6 kernels are handled differently, which is annoying, and that will
only get worse with the clone-level emulation.
Second, there's also just a lot of careful state management to make sure
everything is right: masks, signals and syscall interrupts for the right
threads. For 2.6 the kernel does some of this.
Bob suggested using loadable modules to manage different versions/OSes.
Bits and pieces
---------------
John Carter suggested judicious modification of the kernel to get what we
want, but Jeremy disagreed because it's no help for other OSes.
Rauch asked about porting to Windows. Nick said it would be great to
have, but it would be hard to do cleanly.
Eric suggested an alternative approach to the whole threading/signals
mess that I didn't really understand [sorry, Eric] but Jeremy didn't
seem to like it.
Summary
-------
It's widely agreed that the current OS/threads/signals situation needs
improving.
Everyone seems happy with the idea of dropping pthread-level emulation,
and replacing it with clone()-level emulation. Tom might be looking
into this. This requires a way of intercepting interesting thread
events like mutex locking. Going further than that, ie. using a kernel
thread per user thread, doesn't seem possible. If/when this happens,
moving pthread checking into a separate tool seems like a good idea.
We should be aware of both 1:1 and N:M threading models.
The scheduler can be viewed as a discrete event simulator. This could
be made much more obvious in the current code. The idea could possibly
be taken much further, and provide an architecture for separating
generic and OS-specific events and data, but the best level of
abstraction is unclear.
There are two approaches to architecting this stuff: the thin model,
where we rely on the kernel to do as much as possible for us, and the
thick model, where Valgrind has a generic layer that hides as many of
the OS-specific details as possible.
What now?
---------
Ok, that's all good, but what should we do about it?
Ideally, to resolve the questions above (eg. thick vs. thin model? what
level of abstraction in the scheduler? etc.) we could all sit down and
implement all the alternatives for a number of interesting OSes, and see
which ones work out the best.
Of course, that's not going to happen. To an extent, we can discuss
this forever, but it won't get anywhere without people actually writing
code and trying things. Doug, you have real experience with the FreeBSD
port, do you have any comments about these issues?
Another key issue here which hasn't been touched on much -- what are our
assumptions here? Which OSes are we targeting? That will have a large
impact on the levels of abstraction involved, and which architectural
approaches are most appropriate. Eg. just POSIX? Windows? Allow for
little or no OS, eg. Julian's ARM-on-x86 scenario?
Perhaps some exploratory programming would be useful here? Anyone want
to volunteer?
|
|
From: Tom H. <th...@cy...> - 2004-08-12 03:14:30
|
Nightly build on standard ( Red Hat 7.2 ) started at 2004-08-12 03:00:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
errs1: valgrind -q ./errs1
execve: valgrind -q ./execve
execve2: valgrind -q --trace-children=yes ./execve2
exitprog: valgrind -q ./exitprog
fpeflags: valgrind -q ./fpeflags
fprw: valgrind -q ./fprw
fwrite: valgrind -q ./fwrite
inits: valgrind -q ./inits
inline: valgrind -q ./inline
insn_basic: valgrind -q ./../../none/tests/insn_basic
insn_cmov: valgrind -q ./../../none/tests/insn_cmov
insn_fpu: valgrind -q ./../../none/tests/insn_fpu
insn_mmx: valgrind -q ./../../none/tests/insn_mmx
insn_mmxext: valgrind -q ./../../none/tests/insn_mmxext
insn_sse: valgrind -q ./../../none/tests/insn_sse
malloc1: valgrind -q ./malloc1
malloc2: valgrind -q ./malloc2
malloc3: valgrind -q ./malloc3
Could not read `malloc3.stderr.exp'
make: *** [regtest] Error 2
|
|
From: <js...@ac...> - 2004-08-12 02:58:10
|
Nightly build on nemesis ( SuSE 9.1 ) started at 2004-08-12 03:50:00 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
sem: valgrind ./sem
semlimit: valgrind ./semlimit
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 171 tests, 4 stderr failures, 0 stdout failures =================
corecheck/tests/as_mmap (stderr)
corecheck/tests/fdleak_fcntl (stderr)
memcheck/tests/writev (stderr)
memcheck/tests/zeropage (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <to...@co...> - 2004-08-12 02:26:10
|
Nightly build on dunsmere ( Fedora Core 2 ) started at 2004-08-12 03:20:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 176 tests, 8 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/writev (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2004-08-12 02:20:01
|
Nightly build on audi ( Red Hat 9 ) started at 2004-08-12 03:15:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 176 tests, 8 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2004-08-12 02:13:20
|
Nightly build on ginetta ( Red Hat 8.0 ) started at 2004-08-12 03:10:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
seg_override: valgrind ./seg_override
sem: valgrind ./sem
semlimit: valgrind ./semlimit
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 176 tests, 3 stderr failures, 0 stdout failures =================
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2004-08-12 02:08:18
|
Nightly build on alvis ( Red Hat 7.3 ) started at 2004-08-12 03:05:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 176 tests, 9 stderr failures, 1 stdout failure =================
addrcheck/tests/toobig-allocs (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/brk (stderr)
memcheck/tests/error_counts (stdout)
memcheck/tests/new_nothrow (stderr)
memcheck/tests/toobig-allocs (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
|