From: Bart V. A. <bar...@gm...> - 2008-12-30 16:02:42

On Tue, Dec 30, 2008 at 2:54 PM, Felix Schmidt <fel...@we...> wrote:
> maybe I found an error in m_machine.c.
> In line 248: for (i = (*tid)+1; ....
> Why is there a +1? I think this is a bug.

If you are looking at this code, it is probably because you are using
VG_(thread_stack_reset_iter)() and VG_(thread_stack_next)(). It would help
if you could post the code that calls these functions.

Bart.

From: Felix S. <fel...@we...> - 2008-12-30 13:54:52

Dear all,

Maybe I found an error in m_machine.c. In line 248:

    for (i = (*tid)+1; ....

Why is there a +1? I think this is a bug.

fs

From: Felix S. <fel...@we...> - 2008-12-30 13:52:42

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Dear all,

Maybe I found an error in m_machine.c. In line 248:

    for (i = (*tid)+1; ....

Why is there a +1? I think this is a bug.

fs

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.9 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iEYEARECAAYFAklaJ6IACgkQmH8OAwYoDBmM2gCgyAINaZo1DWQIB/OgLTPFTTUL
fpAAn1H97BEy4gKU8s52EJNvkxkZ9c81
=/uoM
-----END PGP SIGNATURE-----

From: Josef W. <Jos...@gm...> - 2008-12-30 11:07:15

> > Hmm... what goes wrong with these 8 tests? Is this something about
> > the cpuid check?
>
> I'm thinking about trying to improve the regtests; they would be much
> more useful if they were more reliable and less noisy.

Yes. For the profiling tools, I just do not know the best way. For
callgrind, it would be very good to check for the call graph detected in a
given short program, but that depends very much on the compiler and the
runtime system.

> One thing I want to do is work out a good way to attach the diffs from
> the failing tests so that people can investigate problems without having
> to ask the usual "what went wrong with this test?" Because the diffs can
> be quite large, I'd probably do something like take the first N lines
> (for N=100, or N=1000, say) for each .diff file, then tar and bzip them,
> and attach them to the email. I just have to work out how to do
> attachments without requiring external programs...

An alternative to attachments would be for the script to copy the results
of the last regression failures to some web space and provide a link in
the mail.

Josef

From: Bart V. A. <bar...@gm...> - 2008-12-30 11:05:46

On Tue, Dec 30, 2008 at 1:30 AM, Nicholas Nethercote <nj...@cs...> wrote:
> I'm thinking about trying to improve the regtests; they would be much
> more useful if they were more reliable and less noisy.
>
> One thing I want to do is work out a good way to attach the diffs from
> the failing tests so that people can investigate problems without having
> to ask the usual "what went wrong with this test?" Because the diffs can
> be quite large, I'd probably do something like take the first N lines
> (for N=100, or N=1000, say) for each .diff file, then tar and bzip them,
> and attach them to the email. I just have to work out how to do
> attachments without requiring external programs...

Being able to look at the diff output would be a great help. One possible
approach is to include the output of

    head -n 100 */tests/*diff*

in the e-mail sent by the nightly/bin/nightly script.

Bart.

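[Editor's note: Bart's `head -n 100` suggestion combined with the
tar-and-bzip idea from earlier in the thread might look roughly like the
sketch below. The glob, line count, and archive name are assumptions; the
real nightly script may organise its output differently.]

```shell
#!/bin/sh
# Collect the first 100 lines of every .diff left behind by failing
# regression tests, then pack them into one attachment-sized archive.
N=100
OUT=regtest-diffs.tar.bz2
TMP=$(mktemp -d)

for f in */tests/*.diff*; do
    [ -e "$f" ] || continue                  # glob matched nothing
    # Flatten the path so each archive member has a unique, readable name.
    head -n "$N" "$f" > "$TMP/$(printf '%s' "$f" | tr '/' '_')"
done

tar -cjf "$OUT" -C "$TMP" .
rm -rf "$TMP"
```

The resulting tarball could then be attached to the nightly e-mail, or
copied to web space and linked, as Josef suggested.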
From: Bart V. A. <bar...@gm...> - 2008-12-30 10:55:32

On Tue, Dec 30, 2008 at 10:18 AM, Julian Seward <js...@ac...> wrote:
> I've noticed that the most common reason for regtests to fail on
> different distros is trivial differences in stack tracebacks,
> particularly the addition/loss of intermediate frames due to differences
> in inlining. So the test appears to fail for a completely spurious
> reason, when in fact the functionality it was trying to test works
> correctly.
>
> What would help greatly is a more flexible way of specifying the
> acceptable (expected) tracebacks. Something exactly like the current
> suppression syntax, allowing frame-level wildcards and
> function/object-name-level wildcards, would be ideal. So I guess that
> would mean writing a custom C program to do the check, rather than just
> using diff.

Maybe it's a good idea to extend the syntax of the .vgtest files with a
keyword that allows specifying a custom diff program instead of
/usr/bin/diff -- not all regression tests produce output that contains
stack traces.

Bart.

From: Julian S. <js...@ac...> - 2008-12-30 09:46:04

On Tuesday 30 December 2008 00:30:48 Nicholas Nethercote wrote:
> I'm thinking about trying to improve the regtests; they would be much
> more useful if they were more reliable and less noisy.

Yes. That would be great.

> One thing I want to do is work out a good way to attach the diffs from
> the failing tests so that people can investigate problems without having
> to ask the usual "what went wrong with this test?"

That would also be very helpful.

I've noticed that the most common reason for regtests to fail on different
distros is trivial differences in stack tracebacks, particularly the
addition/loss of intermediate frames due to differences in inlining. So
the test appears to fail for a completely spurious reason, when in fact
the functionality it was trying to test works correctly.

What would help greatly is a more flexible way of specifying the
acceptable (expected) tracebacks. Something exactly like the current
suppression syntax, allowing frame-level wildcards and
function/object-name-level wildcards, would be ideal. So I guess that
would mean writing a custom C program to do the check, rather than just
using diff.

J

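[Editor's note: short of a full suppression-style matcher, one cheap way
to make traceback comparisons less brittle is to normalise the traces
before diffing. The fragment below is a hypothetical sketch, not part of
the actual Valgrind test harness; the function name and the trace format
details are illustrative.]

```shell
#!/bin/sh
# filter_trace: normalise Valgrind-style stack traces so that spurious
# differences (here, only hex addresses) do not cause a diff mismatch.
filter_trace() {
    sed -e 's/0x[0-9A-Fa-f][0-9A-Fa-f]*/0x......../g'
}

# Example: the address varies between runs, the rest of the frame does not.
echo '==123==    at 0x4C2B0E0: malloc (vg_replace_malloc.c:195)' | filter_trace
# -> ==123==    at 0x........: malloc (vg_replace_malloc.c:195)
```

Handling the addition/loss of inlined intermediate frames, as Julian
describes, would need something smarter than sed, e.g. a matcher with
frame-level wildcards.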
From: Tom H. <th...@cy...> - 2008-12-30 03:47:48

Nightly build on vauxhall (x86_64, Fedora 10) started at 2008-12-30 03:20:05 GMT
Results differ from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 481 tests, 13 stderr failures, 0 stdout failures, 0 post failures ==
cachegrind/tests/chdir (stderr)
cachegrind/tests/clreq (stderr)
cachegrind/tests/dlclose (stderr)
cachegrind/tests/wrap5 (stderr)
cachegrind/tests/x86/fpu-28-108 (stderr)
callgrind/tests/simwork1 (stderr)
callgrind/tests/simwork2 (stderr)
callgrind/tests/simwork3 (stderr)
exp-ptrcheck/tests/base (stderr)
exp-ptrcheck/tests/preen_invars (stderr)
memcheck/tests/linux-syscalls-2007 (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)

=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 481 tests, 14 stderr failures, 0 stdout failures, 0 post failures ==
cachegrind/tests/chdir (stderr)
cachegrind/tests/clreq (stderr)
cachegrind/tests/dlclose (stderr)
cachegrind/tests/wrap5 (stderr)
cachegrind/tests/x86/fpu-28-108 (stderr)
callgrind/tests/simwork1 (stderr)
callgrind/tests/simwork2 (stderr)
callgrind/tests/simwork3 (stderr)
drd/tests/qt4_mutex (stderr)
exp-ptrcheck/tests/base (stderr)
exp-ptrcheck/tests/preen_invars (stderr)
memcheck/tests/linux-syscalls-2007 (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)

=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short  Tue Dec 30 03:34:02 2008
--- new.short  Tue Dec 30 03:47:41 2008
***************
*** 8,10 ****
! == 481 tests, 14 stderr failures, 0 stdout failures, 0 post failures ==
  cachegrind/tests/chdir (stderr)
--- 8,10 ----
! == 481 tests, 13 stderr failures, 0 stdout failures, 0 post failures ==
  cachegrind/tests/chdir (stderr)
***************
*** 17,19 ****
  callgrind/tests/simwork3 (stderr)
- drd/tests/qt4_mutex (stderr)
  exp-ptrcheck/tests/base (stderr)
--- 17,18 ----

From: Tom H. <th...@cy...> - 2008-12-30 03:47:35

Nightly build on trojan (x86_64, Fedora Core 6) started at 2008-12-30 03:25:05 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 476 tests, 8 stderr failures, 4 stdout failures, 0 post failures ==
exp-ptrcheck/tests/ccc (stderr)
exp-ptrcheck/tests/preen_invars (stderr)
exp-ptrcheck/tests/pth_create (stderr)
exp-ptrcheck/tests/pth_specific (stderr)
helgrind/tests/tc20_verifywrap (stderr)
memcheck/tests/x86/bug133694 (stdout)
memcheck/tests/x86/bug133694 (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/cmdline1 (stdout)
none/tests/cmdline2 (stdout)
none/tests/mremap2 (stdout)

From: Tom H. <th...@cy...> - 2008-12-30 03:36:51

Nightly build on mg (x86_64, Fedora 9) started at 2008-12-30 03:10:08 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 478 tests, 14 stderr failures, 1 stdout failure, 0 post failures ==
cachegrind/tests/chdir (stderr)
cachegrind/tests/clreq (stderr)
cachegrind/tests/dlclose (stderr)
cachegrind/tests/wrap5 (stderr)
cachegrind/tests/x86/fpu-28-108 (stderr)
callgrind/tests/simwork1 (stderr)
callgrind/tests/simwork2 (stderr)
callgrind/tests/simwork3 (stderr)
exp-ptrcheck/tests/ccc (stderr)
exp-ptrcheck/tests/preen_invars (stderr)
exp-ptrcheck/tests/pth_create (stderr)
exp-ptrcheck/tests/pth_specific (stderr)
memcheck/tests/x86/scalar (stderr)
none/tests/blockfault (stderr)
none/tests/mremap2 (stdout)

From: Nicholas N. <nj...@cs...> - 2008-12-30 00:31:03

On Mon, 29 Dec 2008, Josef Weidendorfer wrote:
>> == 481 tests, 13 stderr failures, 0 stdout failures, 0 post failures ==
>> cachegrind/tests/chdir (stderr)
>> cachegrind/tests/clreq (stderr)
>> cachegrind/tests/dlclose (stderr)
>> cachegrind/tests/wrap5 (stderr)
>> cachegrind/tests/x86/fpu-28-108 (stderr)
>> callgrind/tests/simwork1 (stderr)
>> callgrind/tests/simwork2 (stderr)
>> callgrind/tests/simwork3 (stderr)
>
> Hmm... what goes wrong with these 8 tests? Is this something about
> the cpuid check?

I'm thinking about trying to improve the regtests; they would be much more
useful if they were more reliable and less noisy.

One thing I want to do is work out a good way to attach the diffs from the
failing tests so that people can investigate problems without having to
ask the usual "what went wrong with this test?" Because the diffs can be
quite large, I'd probably do something like take the first N lines (for
N=100, or N=1000, say) for each .diff file, then tar and bzip them, and
attach them to the email. I just have to work out how to do attachments
without requiring external programs...

Nick