From: Tom H. <th...@cy...> - 2004-10-17 15:18:26
CVS commit by thughes:
Fix problems with very long timeouts given when waiting on a mutex or
condition variable. The pthread routines now use a timeout of 0xfffffffe
if the user asks for something longer than that; otherwise we would wrap
around and actually get a much shorter timeout.
The scheduler has also been changed so that it now limits itself to
a timeout of 0x7fffffff when working out how long to poll for. This won't
affect how long a thread actually sleeps for, as we'll just wind up waiting
a bit more on the next pass round the loop.
This fixes bug 76845.
M +24 -4 vg_libpthread.c 1.169
M +7 -2 vg_scheduler.c 1.191
--- valgrind/coregrind/vg_libpthread.c #1.168:1.169
@@ -1356,4 +1356,6 @@ int __pthread_mutex_timedlock(pthread_mu
unsigned long long int ull_ms_now_after_1970;
unsigned long long int ull_ms_end_after_1970;
+ unsigned long long int ull_ms_now;
+ unsigned long long int ull_ms_end;
vg_pthread_mutex_t* vg_mutex;
CONVERT(mutex, mutex, vg_mutex);
@@ -1374,6 +1376,13 @@ int __pthread_mutex_timedlock(pthread_mu
if (ull_ms_end_after_1970 < ull_ms_now_after_1970)
ull_ms_end_after_1970 = ull_ms_now_after_1970;
- ms_end
- = ms_now + (unsigned int)(ull_ms_end_after_1970 - ull_ms_now_after_1970);
+ if (ull_ms_end >= (unsigned long long int)(0xFFFFFFFFUL)) {
+ /* use 0xFFFFFFFEUL because 0xFFFFFFFFUL is reserved for no timeout
+ (the fine difference between a long wait and a possible abort
+ due to a detected deadlock).
+ */
+ ms_end = 0xFFFFFFFEUL;
+ } else {
+ ms_end = (unsigned int)(ull_ms_end);
+ }
VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
VG_USERREQ__PTHREAD_MUTEX_TIMEDLOCK,
@@ -1519,4 +1528,6 @@ int pthread_cond_timedwait ( pthread_con
unsigned long long int ull_ms_now_after_1970;
unsigned long long int ull_ms_end_after_1970;
+ unsigned long long int ull_ms_now;
+ unsigned long long int ull_ms_end;
vg_pthread_mutex_t* vg_mutex;
CONVERT(mutex, mutex, vg_mutex);
@@ -1538,6 +1549,15 @@ int pthread_cond_timedwait ( pthread_con
if (ull_ms_end_after_1970 < ull_ms_now_after_1970)
ull_ms_end_after_1970 = ull_ms_now_after_1970;
- ms_end
- = ms_now + (unsigned int)(ull_ms_end_after_1970 - ull_ms_now_after_1970);
+ ull_ms_now = ((unsigned long long int)(ms_now));
+ ull_ms_end = ull_ms_now + (ull_ms_end_after_1970 - ull_ms_now_after_1970);
+ if (ull_ms_end >= (unsigned long long int)(0xFFFFFFFFUL)) {
+ /* use 0xFFFFFFFEUL because 0xFFFFFFFFUL is reserved for no timeout
+ (the fine difference between a long wait and a possible abort
+ due to a detected deadlock).
+ */
+ ms_end = 0xFFFFFFFEUL;
+ } else {
+ ms_end = (unsigned int)(ull_ms_end);
+ }
VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
VG_USERREQ__PTHREAD_COND_TIMEDWAIT,
--- valgrind/coregrind/vg_scheduler.c #1.190:1.191
@@ -615,6 +615,11 @@ void idle ( void )
if (tp != NULL) {
+ vg_assert(tp->time >= now);
+ /* limit the signed int delta to INT_MAX */
+ if ((tp->time - now) <= 0x7FFFFFFFU) {
delta = tp->time - now;
- vg_assert(delta >= 0);
+ } else {
+ delta = 0x7FFFFFFF;
+ }
}
if (wicked)
From: Tom H. <th...@cy...> - 2004-10-17 15:00:24
CVS commit by thughes:
Implement pthread_mutex_timedlock. This resolves bug 78422.
M +3 -2 core.h 1.37
M +36 -0 vg_libpthread.c 1.168
M +1 -1 vg_libpthread_unimp.c 1.49
M +49 -6 vg_scheduler.c 1.190
--- valgrind/coregrind/core.h #1.36:1.37
@@ -510,6 +510,7 @@ extern Bool VG_(is_empty_arena) ( Arena
#define VG_USERREQ__SET_OR_GET_DETACH 0x3009
-#define VG_USERREQ__PTHREAD_GET_THREADID 0x300B
-#define VG_USERREQ__PTHREAD_MUTEX_LOCK 0x300C
+#define VG_USERREQ__PTHREAD_GET_THREADID 0x300A
+#define VG_USERREQ__PTHREAD_MUTEX_LOCK 0x300B
+#define VG_USERREQ__PTHREAD_MUTEX_TIMEDLOCK 0x300C
#define VG_USERREQ__PTHREAD_MUTEX_TRYLOCK 0x300D
#define VG_USERREQ__PTHREAD_MUTEX_UNLOCK 0x300E
--- valgrind/coregrind/vg_libpthread.c #1.167:1.168
@@ -1348,4 +1348,39 @@ int __pthread_mutex_lock(pthread_mutex_t
+int __pthread_mutex_timedlock(pthread_mutex_t *mutex,
+ const struct timespec *abstime )
+{
+ int res;
+ unsigned int ms_now, ms_end;
+ struct timeval timeval_now;
+ unsigned long long int ull_ms_now_after_1970;
+ unsigned long long int ull_ms_end_after_1970;
+ vg_pthread_mutex_t* vg_mutex;
+ CONVERT(mutex, mutex, vg_mutex);
+
+ VALGRIND_MAGIC_SEQUENCE(ms_now, 0xFFFFFFFF /* default */,
+ VG_USERREQ__READ_MILLISECOND_TIMER,
+ 0, 0, 0, 0);
+ my_assert(ms_now != 0xFFFFFFFF);
+ res = gettimeofday(&timeval_now, NULL);
+ my_assert(res == 0);
+
+ ull_ms_now_after_1970
+ = 1000ULL * ((unsigned long long int)(timeval_now.tv_sec))
+ + ((unsigned long long int)(timeval_now.tv_usec / 1000));
+ ull_ms_end_after_1970
+ = 1000ULL * ((unsigned long long int)(abstime->tv_sec))
+ + ((unsigned long long int)(abstime->tv_nsec / 1000000));
+ if (ull_ms_end_after_1970 < ull_ms_now_after_1970)
+ ull_ms_end_after_1970 = ull_ms_now_after_1970;
+ ms_end
+ = ms_now + (unsigned int)(ull_ms_end_after_1970 - ull_ms_now_after_1970);
+ VALGRIND_MAGIC_SEQUENCE(res, 0 /* default */,
+ VG_USERREQ__PTHREAD_MUTEX_TIMEDLOCK,
+ vg_mutex, ms_end, 0, 0);
+ return res;
+}
+
+
int __pthread_mutex_trylock(pthread_mutex_t *mutex)
{
@@ -3377,4 +3412,5 @@ int __libc_allocate_rtsig (int high)
------------------------------------------------------------------ */
strong_alias(__pthread_mutex_lock, pthread_mutex_lock)
+strong_alias(__pthread_mutex_timedlock, pthread_mutex_timedlock)
strong_alias(__pthread_mutex_trylock, pthread_mutex_trylock)
strong_alias(__pthread_mutex_unlock, pthread_mutex_unlock)
--- valgrind/coregrind/vg_libpthread_unimp.c #1.48:1.49
@@ -137,5 +137,5 @@ void pthread_getcpuclockid ( void ) { u
//void pthread_mutex_init ( void ) { unimp("pthread_mutex_init"); }
//void pthread_mutex_lock ( void ) { unimp("pthread_mutex_lock"); }
-void pthread_mutex_timedlock ( void ) { unimp("pthread_mutex_timedlock"); }
+//void pthread_mutex_timedlock ( void ) { unimp("pthread_mutex_timedlock"); }
//void pthread_mutex_trylock ( void ) { unimp("pthread_mutex_trylock"); }
//void pthread_mutex_unlock ( void ) { unimp("pthread_mutex_unlock"); }
--- valgrind/coregrind/vg_scheduler.c #1.189:1.190
@@ -97,4 +97,5 @@ static Addr __libc_freeres_wrapper;
static void do_client_request ( ThreadId tid, UInt* args );
static void scheduler_sanity ( void );
+static void do_pthread_mutex_timedlock_TIMEOUT ( ThreadId tid );
static void do_pthread_cond_timedwait_TIMEOUT ( ThreadId tid );
static void maybe_rendezvous_joiners_and_joinees ( void );
@@ -668,4 +669,8 @@ void idle ( void )
break;
+ case VgTs_WaitMX:
+ do_pthread_mutex_timedlock_TIMEOUT(tst->tid);
+ break;
+
case VgTs_WaitCV:
do_pthread_cond_timedwait_TIMEOUT(tst->tid);
@@ -767,4 +772,6 @@ VgSchedReturnCode do_scheduler ( Int* ex
if (VG_(threads)[tid_next].status == VgTs_Sleeping
|| VG_(threads)[tid_next].status == VgTs_WaitSys
+ || (VG_(threads)[tid_next].status == VgTs_WaitMX
+ && VG_(threads)[tid_next].awaken_at != 0xFFFFFFFF)
|| (VG_(threads)[tid_next].status == VgTs_WaitCV
&& VG_(threads)[tid_next].awaken_at != 0xFFFFFFFF))
@@ -1886,4 +1893,27 @@ void do__apply_in_new_thread ( ThreadId
/* Helper fns ... */
static
+void do_pthread_mutex_timedlock_TIMEOUT ( ThreadId tid )
+{
+ Char msg_buf[100];
+ vg_pthread_mutex_t* mx;
+
+ vg_assert(VG_(is_valid_tid)(tid)
+ && VG_(threads)[tid].status == VgTs_WaitMX
+ && VG_(threads)[tid].awaken_at != 0xFFFFFFFF);
+ mx = VG_(threads)[tid].associated_mx;
+ vg_assert(mx != NULL);
+
+ VG_(threads)[tid].status = VgTs_Runnable;
+ SET_PTHREQ_RETVAL(tid, ETIMEDOUT); /* pthread_mutex_lock return value */
+ VG_(threads)[tid].associated_mx = NULL;
+
+ if (VG_(clo_trace_pthread_level) >= 1) {
+ VG_(sprintf)(msg_buf, "pthread_mutex_timedlock mx %p: TIMEOUT", mx);
+ print_pthread_event(tid, msg_buf);
+ }
+}
+
+
+static
void release_one_thread_waiting_on_mutex ( vg_pthread_mutex_t* mutex,
Char* caller )
@@ -1933,5 +1963,6 @@ static
void do_pthread_mutex_lock( ThreadId tid,
Bool is_trylock,
- vg_pthread_mutex_t* mutex )
+ vg_pthread_mutex_t* mutex,
+ UInt ms_end )
{
Char msg_buf[100];
@@ -1940,4 +1971,7 @@ void do_pthread_mutex_lock( ThreadId tid
: "pthread_mutex_lock ";
+ /* If ms_end == 0xFFFFFFFF, wait forever (no timeout). Otherwise,
+ ms_end is the ending millisecond. */
+
if (VG_(clo_trace_pthread_level) >= 2) {
VG_(sprintf)(msg_buf, "%s mx %p ...", caller, mutex );
@@ -1986,5 +2020,5 @@ void do_pthread_mutex_lock( ThreadId tid
/* Someone has it already. */
- if ((ThreadId)mutex->__vg_m_owner == tid) {
+ if ((ThreadId)mutex->__vg_m_owner == tid && ms_end == 0xFFFFFFFF) {
/* It's locked -- by me! */
if (mutex->__vg_m_kind == PTHREAD_MUTEX_RECURSIVE_NP) {
@@ -2015,4 +2049,7 @@ void do_pthread_mutex_lock( ThreadId tid
VG_(threads)[tid].status = VgTs_WaitMX;
VG_(threads)[tid].associated_mx = mutex;
+ VG_(threads)[tid].awaken_at = ms_end;
+ if (ms_end != 0xFFFFFFFF)
+ add_timeout(tid, ms_end);
SET_PTHREQ_RETVAL(tid, 0); /* pth_mx_lock success value */
if (VG_(clo_trace_pthread_level) >= 1) {
@@ -2911,9 +2948,13 @@ void do_client_request ( ThreadId tid, U
scheduler checks for that on return from this function. */
case VG_USERREQ__PTHREAD_MUTEX_LOCK:
- do_pthread_mutex_lock( tid, False, (void *)(arg[1]) );
+ do_pthread_mutex_lock( tid, False, (void *)(arg[1]), 0xFFFFFFFF );
+ break;
+
+ case VG_USERREQ__PTHREAD_MUTEX_TIMEDLOCK:
+ do_pthread_mutex_lock( tid, False, (void *)(arg[1]), arg[2] );
break;
case VG_USERREQ__PTHREAD_MUTEX_TRYLOCK:
- do_pthread_mutex_lock( tid, True, (void *)(arg[1]) );
+ do_pthread_mutex_lock( tid, True, (void *)(arg[1]), 0xFFFFFFFF );
break;
@@ -3219,4 +3260,5 @@ void scheduler_sanity ( void )
vg_assert(VG_(threads)[top->tid].awaken_at != top->time ||
VG_(threads)[top->tid].status == VgTs_Sleeping ||
+ VG_(threads)[top->tid].status == VgTs_WaitMX ||
VG_(threads)[top->tid].status == VgTs_WaitCV);
#endif
@@ -3243,5 +3285,6 @@ void scheduler_sanity ( void )
/* 2 */ vg_assert(mx->__vg_m_count > 0);
/* 3 */ vg_assert(VG_(is_valid_tid)((ThreadId)mx->__vg_m_owner));
- /* 4 */ vg_assert((UInt)i != (ThreadId)mx->__vg_m_owner);
+ /* 4 */ vg_assert((UInt)i != (ThreadId)mx->__vg_m_owner ||
+ VG_(threads)[i].awaken_at != 0xFFFFFFFF);
} else
if (VG_(threads)[i].status == VgTs_WaitCV) {
From: Tom H. <th...@cy...> - 2004-10-17 03:08:14
Nightly build on standard ( Red Hat 7.2 ) started at 2004-10-17 02:00:01 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
execve: valgrind ./execve
fcntl_setown: valgrind ./fcntl_setown
floored: valgrind ./floored
fork: valgrind -q ./fork
fpu_lazy_eflags: valgrind ./fpu_lazy_eflags
fucomip: valgrind ./fucomip
gxx304: valgrind ./gxx304
insn_basic: valgrind ./insn_basic
insn_cmov: valgrind ./insn_cmov
insn_fpu: valgrind ./insn_fpu
insn_mmx: valgrind ./insn_mmx
insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
int: valgrind ./int
map_unmap: valgrind ./map_unmap
mq: valgrind ./mq
mremap: valgrind ./mremap
munmap_exe: valgrind ./munmap_exe
Could not read `munmap_exe.stderr.exp'
make: *** [regtest] Error 2
From: <js...@ac...> - 2004-10-17 02:55:37
Nightly build on phoenix ( SuSE 9.1 ) started at 2004-10-17 03:50:00 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
sem: valgrind ./sem
semlimit: valgrind ./semlimit
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 177 tests, 3 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_fcntl (stderr)
memcheck/tests/threadederrno (stdout)
memcheck/tests/threadederrno (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
From: Tom H. <to...@co...> - 2004-10-17 02:25:53
Nightly build on dunsmere ( Fedora Core 2 ) started at 2004-10-17 03:20:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 182 tests, 8 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/writev (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1
From: Tom H. <th...@cy...> - 2004-10-17 02:19:47
Nightly build on audi ( Red Hat 9 ) started at 2004-10-17 03:15:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 182 tests, 9 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
corecheck/tests/pth_cancel2 (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
From: Tom H. <th...@cy...> - 2004-10-17 02:13:36
Nightly build on ginetta ( Red Hat 8.0 ) started at 2004-10-17 03:10:01 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
sem: valgrind ./sem
semlimit: valgrind ./semlimit
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 182 tests, 3 stderr failures, 1 stdout failure =================
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
memcheck/tests/threadederrno (stdout)
memcheck/tests/threadederrno (stderr)
make: *** [regtest] Error 1
From: Tom H. <th...@cy...> - 2004-10-17 02:08:16
Nightly build on alvis ( Red Hat 7.3 ) started at 2004-10-17 03:05:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 182 tests, 9 stderr failures, 2 stdout failures =================
addrcheck/tests/toobig-allocs (stderr)
helgrind/tests/inherit (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/brk (stderr)
memcheck/tests/brk2 (stderr)
memcheck/tests/error_counts (stdout)
memcheck/tests/threadederrno (stdout)
memcheck/tests/threadederrno (stderr)
memcheck/tests/toobig-allocs (stderr)
memcheck/tests/vgtest_ume (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1