From: Paul F. <pj...@wa...> - 2023-03-25 20:31:57
On 25-03-23 01:23, Nicholas Nethercote wrote:
> One way to do it is to divide the tests into "must pass on CI" and "the
> rest". I suspect there are plenty of tests that work on all platforms,
> which would give a lot of useful coverage from the start. Over time you
> can hopefully move tests from the second category to the first.
>
> The other way to do it is to divide the tests into "run on CI" and
> "don't run on CI", i.e. exceptions, which does require a mechanism for
> specifying those exceptions. In practice I think this works out much the
> same as the first approach, because a test that consistently fails on
> one platform isn't much use. (In fact, it can have negative value if its
> presence masks new failures in other tests.)
>
> One consequence of all this is that the CI platforms become gospel. E.g.
> if a test passes on CI but fails locally, that's good enough. This is
> fine in practice, assuming the CI platforms are reasonable choices.
>
> Flaky tests can be a problem. For rare failures you can always just
> trigger another CI run. For regular failures you should either fix the
> test or disable it.

Our problems are different from those of most company testing systems that I've used. In a corporate environment I'm used to standardized build and test machines, all running the same OS, using the same compiler, and generally on similar hardware. There, typical examples of flakiness are threading nondeterminism with floating point, use of pointers as keys for ordered collections, and so on. We do have a bit of thread nondeterminism, but our build and test kit is pretty much a random bunch of bits and bobs. Since a large number of our tests deliberately execute UB, it's hard to have a set of deterministic and reliable reference results, and things often change with compiler or OS upgrades.

If we do go for CI (and I'm in favour of it) then I also think that we need some sort of tiering for platforms.
At the moment we have glibc Linux amd64/PPC/s390 and FreeBSD amd64, which are all fairly close to clean (fewer than 5 failures). After that it goes downhill fairly rapidly. Linux aarch64 has 17 errors, mostly related to identifying variables in error messages. Last time I tried Solaris 11.3 there were 20 or so failures, but there are many more on Illumos and Solaris 11.4. Alpine Linux (musl based) is a mess and macOS is still a basket case (counting on you Louis!). So I would say:

Tier 1 - as "officially" supported as we can manage: glibc Linux amd64/PPC/s390 and FreeBSD amd64
Tier 2 - best effort support: glibc Linux aarch64 and FreeBSD x86
Tier 3 - practically unsupported, try to get them to build for releases: all the rest

It's too early to tell how Loongson and riscv64 would fit in if/when they get merged.

A+
Paul
From: Paul F. <pa...@so...> - 2023-03-25 18:56:59
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=3eaac588274bbfecc3df4b73e3f86df8833c7f80

commit 3eaac588274bbfecc3df4b73e3f86df8833c7f80
Author: Paul Floyd <pj...@wa...>
Date:   Sat Mar 25 19:52:41 2023 +0100

    Regtest: clean aligned alloc tests on FreeBSD x86

    Add a filter for size_t (unsigned long on 64bit platforms and
    unsigned int on 32bit ones). Add another expected for x86.

Diff:
---
 memcheck/tests/Makefile.am                          |  4 +++-
 memcheck/tests/filter_size_t                        |  5 ++++
 memcheck/tests/memalign_args.stderr.exp-x86         | 28 ++++++++++++++++++++++
 .../sized_aligned_new_delete_misaligned.vgtest      |  1 +
 4 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/memcheck/tests/Makefile.am b/memcheck/tests/Makefile.am
index 0509d45869..ec16313ddf 100644
--- a/memcheck/tests/Makefile.am
+++ b/memcheck/tests/Makefile.am
@@ -80,7 +80,8 @@ dist_noinst_SCRIPTS = \
 	filter_varinfo3 \
 	filter_memcheck \
 	filter_overlaperror \
-	filter_malloc_free
+	filter_malloc_free \
+	filter_size_t

 noinst_HEADERS = leak.h

@@ -228,6 +229,7 @@ EXTRA_DIST = \
 	memalign_args.stderr.exp-glibc \
 	memalign_args.stderr.exp-ppc64 \
 	memalign_args.stderr.exp-arm \
+	memalign_args.stderr.exp-x86 \
 	memcmptest.stderr.exp memcmptest.stderr.exp2 \
 	memcmptest.stdout.exp memcmptest.vgtest \
 	memmem.stderr.exp memmem.vgtest \
diff --git a/memcheck/tests/filter_size_t b/memcheck/tests/filter_size_t
new file mode 100755
index 0000000000..08386b219c
--- /dev/null
+++ b/memcheck/tests/filter_size_t
@@ -0,0 +1,5 @@
+#! /bin/sh
+
+
+./filter_stderr "$@" |
+sed "s/unsigned int/unsigned long/"
diff --git a/memcheck/tests/memalign_args.stderr.exp-x86 b/memcheck/tests/memalign_args.stderr.exp-x86
new file mode 100644
index 0000000000..1bb553ea6b
--- /dev/null
+++ b/memcheck/tests/memalign_args.stderr.exp-x86
@@ -0,0 +1,28 @@
+Conditional jump or move depends on uninitialised value(s)
+   at 0x........: memalign (vg_replace_malloc.c:...)
+   by 0x........: main (memalign_args.c:19)
+
+Conditional jump or move depends on uninitialised value(s)
+   at 0x........: memalign (vg_replace_malloc.c:...)
+   by 0x........: main (memalign_args.c:19)
+
+Conditional jump or move depends on uninitialised value(s)
+   at 0x........: posix_memalign (vg_replace_malloc.c:...)
+   by 0x........: main (memalign_args.c:23)
+
+Conditional jump or move depends on uninitialised value(s)
+   at 0x........: posix_memalign (vg_replace_malloc.c:...)
+   by 0x........: main (memalign_args.c:23)
+
+Conditional jump or move depends on uninitialised value(s)
+   at 0x........: aligned_alloc (vg_replace_malloc.c:...)
+   by 0x........: main (memalign_args.c:26)
+
+Conditional jump or move depends on uninitialised value(s)
+   at 0x........: aligned_alloc (vg_replace_malloc.c:...)
+   by 0x........: main (memalign_args.c:26)
+
+Conditional jump or move depends on uninitialised value(s)
+   at 0x........: valloc (vg_replace_malloc.c:...)
+   by 0x........: main (memalign_args.c:29)
+
diff --git a/memcheck/tests/sized_aligned_new_delete_misaligned.vgtest b/memcheck/tests/sized_aligned_new_delete_misaligned.vgtest
index fc7b6f4712..13f61924b7 100644
--- a/memcheck/tests/sized_aligned_new_delete_misaligned.vgtest
+++ b/memcheck/tests/sized_aligned_new_delete_misaligned.vgtest
@@ -1,3 +1,4 @@
 prog: sized_aligned_new_delete_misaligned
 prereq: test -e ./sized_aligned_new_delete_misaligned
 vgopts: -q
+stderr_filter: filter_size_t
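[Editor's sketch] The new filter_size_t script works by canonicalizing the spelling of size_t, so a single expected file written for 64-bit targets also matches 32-bit output. A minimal demonstration of the idea follows; the diagnostic line is invented for illustration, not real memcheck output:

```shell
# Rewrite the 32-bit spelling of size_t to the 64-bit one, exactly as
# the filter_size_t script added in this commit does with sed.
# The input line here is a made-up example.
printf 'alignment argument has type size_t (unsigned int)\n' |
sed 's/unsigned int/unsigned long/'
```

In the real test harness this runs after filter_stderr, so all of the usual address and line-number scrubbing happens first.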
From: Nicholas N. <n.n...@gm...> - 2023-03-25 00:25:33
On Fri, 24 Mar 2023 at 22:25, Mark Wielaard <ma...@kl...> wrote:
> We aren't (yet?) using all of them (and some of them would mean moving
> over bugzilla and the mailinglist, which might be controversial). But
> I'll at least add the buildbot CI testers to the website (and we should
> at least make use of the try-branches) this weekend.

Great! I'd be happy to try this out. Though I guess I'd need to do a no-change try run before testing a real change, to give a baseline of expected test failures, right?

Nick
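[Editor's sketch] The no-change baseline idea can be expressed with plain shell tools: record the failure list from a clean try run, then compare a later run against it. The file names and test names below are invented for illustration:

```shell
# Record failing tests from a no-change baseline run, then compare a
# later run against it. Both lists must be sorted for comm(1) to work;
# file and test names are hypothetical.
printf 'none/tests/test_a\nnone/tests/test_b\n' > baseline-failures.txt
printf 'none/tests/test_a\nnone/tests/test_c\n' > current-failures.txt
# comm -13 suppresses lines unique to the baseline and lines common to
# both, leaving only new failures introduced by the change under test.
comm -13 baseline-failures.txt current-failures.txt
```

Here only `none/tests/test_c` is printed: `test_a` fails in both runs (a known failure) and `test_b` was fixed.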
From: Nicholas N. <n.n...@gm...> - 2023-03-25 00:24:13
On Fri, 24 Mar 2023 at 22:52, Mark Wielaard <ma...@kl...> wrote:
> I completely agree with this sentiment. But how do you get there?

Ruthless pragmatism and an incremental approach :)

> And how do you cross the psychological barrier? I mean that it feels like
> cheating to just disable failing or flaky tests.
> They might fail on some, but not all setups. Or they might even be just
> flaky depending on CPU model (I think I saw some failures with an AMD
> Ryzen processor, which succeeded on an Intel Xeon processor).
>
> What should our policy be to get to zero fail?
> Does that mean a test should always pass on any arch/setup?
> Or do we make exceptions for tests that fail on some setups?
> Do we keep an "exception list" based on...?
> What do we do with the "removed" (or excepted) tests?
> Do those turn into high priority bugs instead?
> What about new ports, they often start with a bunch of failing tests.

One way to do it is to divide the tests into "must pass on CI" and "the rest". I suspect there are plenty of tests that work on all platforms, which would give a lot of useful coverage from the start. Over time you can hopefully move tests from the second category to the first.

The other way to do it is to divide the tests into "run on CI" and "don't run on CI", i.e. exceptions, which does require a mechanism for specifying those exceptions. In practice I think this works out much the same as the first approach, because a test that consistently fails on one platform isn't much use. (In fact, it can have negative value if its presence masks new failures in other tests.)

One consequence of all this is that the CI platforms become gospel. E.g. if a test passes on CI but fails locally, that's good enough. This is fine in practice, assuming the CI platforms are reasonable choices.

Flaky tests can be a problem. For rare failures you can always just trigger another CI run. For regular failures you should either fix the test or disable it.

Nick
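[Editor's sketch] The "run on CI / don't run on CI" split needs nothing more elaborate than a per-platform exception list that is subtracted from the full test set. This is a hypothetical mechanism with invented file and test names, not something that exists in the valgrind tree:

```shell
# Drop tests named in a per-platform exception list from the CI run.
# -x -F restricts grep to exact, whole-line fixed-string matches, so
# one test name cannot accidentally exclude another as a substring.
# Both file names and the flaky test name are hypothetical.
printf 'memcheck/tests/flaky_on_arm\n' > ci-exceptions.txt
printf 'memcheck/tests/flaky_on_arm\nmemcheck/tests/memalign_args\n' > all-tests.txt
grep -v -x -F -f ci-exceptions.txt all-tests.txt
```

Everything the grep prints is the "must pass" set; anything in the exception list would then naturally feed Mark's "do those turn into high priority bugs instead?" question.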