From: Julian S. <js...@ac...> - 2006-04-17 23:48:25
|
> So that is really _huge_ for profile data.
> You need a huge amount of executed code with profile data at instruction
> level. So I suspect you are using --dump-instr=yes?
No, only --simulate-cache=yes.
> So I still do not understand
> it. You would need a lot of functions (> 500 000), which is very
> unusual.
Ah .. but I'm profiling memcheck, and memcheck is producing
code with many calls to MC_(helperc_{LOAD,STORE}V*). I know
that memcheck (in the inner valgrind) produces about 80000
translations and so if each translation has eg 5 such calls
then there will be 400000 call points to these functions.
I suspect this is why the profile data is very large and why
kcachegrind is taking a long time.
> > kind of thing before?
>
> No, never such large loading times. With some options, you can "force"
> callgrind to create profiles with function counts in the million range
> (by producing a different "function" for every call trace, eg. with
> --separate-callers=30). This can lead to this huge amount of memory
> consumption; but such profiles are really insane...
>
> > Can you have a look at it if I send you
> > the file?
>
> Sure. If the function count is really huge, I can understand at least the
> memory consumption. Can you check the number of functions in this 75 MB
> profile with "grep ^fn callgrind.out.XXX | wc" ?
sewardj@suse10:~/VgTRUNK$ grep ^fn koffice-started.1 | wc
80660 160583 1891572
> Again: wow. I never thought of using callgrind with self-hosting
> because there is still this design problem that profile counters are never
> freed when discarding code.
Well, self-hosting does not work when the inner valgrind starts to
discard code, so I don't think this is a problem.
> BTW: how much memory is the callgrind process using itself?
After about 10 CPU hours it was at 1600M virtual and about 670M
resident, but my machine only has 1G of memory, so I had to stop it
(though first I did 'callgrind_control -d ...' (very cool btw)).
To stop it I did control-C for the application (kpresenter) and so
the outer valgrind (callgrind) finished normally, and produced a final
dump which is very small compared to the two intermediate ones I did:
-rw------- 1 sewardj users 55936636 2006-04-17 02:40 callgrind.out.24371.1
-rw------- 1 sewardj users 63330227 2006-04-17 13:32 callgrind.out.24371.2
-rw------- 1 sewardj users  1127604 2006-04-17 15:35 callgrind.out.24371
$ for f in callgrind.out.24371* ; do echo $f ; grep ^fn $f | wc ; done
callgrind.out.24371
1736 3070 36549
callgrind.out.24371.1
63941 127124 1483254
callgrind.out.24371.2
61833 123046 1467148
This confused me. I was even more confused to find that kcachegrind
still takes just as long to load the 1M file as the 63M file.
Anyway, after all this, (1) from the intermediate dumps I found that
MC_(realloc) might be a memcheck slow point in realloc-intensive code,
and (2) I restarted the run using cachegrind instead of callgrind, to
see if it can successfully complete the profiling run with the memory
I have available. So far it has been running 6.6 hours.
J
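[Editor's aside: the "grep ^fn" check used in this thread works on any callgrind dump, because every function record in the profile format starts with "fn=". A minimal sketch against a tiny hand-written stand-in file — the file name and contents below are invented for illustration, not taken from the runs discussed above:]

```shell
# Create a tiny stand-in dump; real files are produced by callgrind itself.
cat > callgrind.out.sample <<'EOF'
events: Ir
fn=main
1 10
fn=MC_helper
2 5
EOF

# Count function records, as suggested in the thread.
grep -c '^fn' callgrind.out.sample
# prints: 2
```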
|
|
From: Josef W. <Jos...@gm...> - 2006-04-17 23:05:31
|
[I only saw that this went to the mailing list after replying in private... ]

On Monday 17 April 2006 15:20, you wrote:
> I have been using kcachegrind to profile memcheck starting koffice
> (!)

Wow.

> and this seems to work well. However, callgrind generates
> snapshot files of up to 75M long

So that is really _huge_ for profile data. You need a huge amount of
executed code with profile data at instruction level. So I suspect you
are using --dump-instr=yes?

Still, I have some problem to understand this. 75 MB can mean a lot of
memory consumption for KCachegrind. However, if data is only about
annotating source or assembler, KCachegrind does some lazy construction
of data structures... which means: memory consumption is more or less
dependent on the function count, and not on the absolute number of cost
counters in the profile. So I still do not understand it. You would need
a lot of functions (> 500 000), which is very unusual. I remember
konqueror with around 25 000 unique functions in the profile... But you
are "only" profiling valgrind (but see below).

> , and kcachegrind takes > 30 mins
> CPU time and > 450M memory to load the file. Have you seen this
> kind of thing before?

No, never such large loading times. With some options, you can "force"
callgrind to create profiles with function counts in the million range
(by producing a different "function" for every call trace, eg. with
--separate-callers=30). This can lead to this huge amount of memory
consumption; but such profiles are really insane...

> Can you have a look at it if I send you
> the file?

Sure. If the function count is really huge, I can understand at least
the memory consumption. Can you check the number of functions in this
75 MB profile with "grep ^fn callgrind.out.XXX | wc" ?

> (Note: I just thought you might like to see this. I have been
> using callgrind/kcachegrind for self-hosting

Again: wow. I never thought of using callgrind with self-hosting because
there is still this design problem that profile counters are never freed
when discarding code. You made me aware of this with PPC32, where
dynamic linking is done with self modifying code. I suspect that
valgrind triggers this bug really bad when the translation cache is
flushed and is reused for new instrumented code a few times (similar for
other VMs like Java). Callgrind will never get rid of any counters, but
will present you some kind of superposition of counters for all the code
versions in the translation cache. Maybe this way you get the huge
function count?

BTW: how much memory is the callgrind process using itself?

I suspect that the best way to cope with this would be to handle calls
into anonymous memory (eg. the translation cache?) as ending at the same
(artificial) function, and to not do any cost counting for such code. At
least, KCachegrind can not do even assembler annotation, as for dynamic
generated code, there is no binary to run objdump against. So there is
no point to store costs on instruction granularity for this.

Josef |
|
From: Dave N. <dc...@us...> - 2006-04-17 21:06:07
|
Nightly build on elm3b145 ( SuSE 9.0, IBM PPC970 ) started at 2006-04-17
12:06:33 PDT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 210 tests, 4 stderr failures, 3 stdout failures, 0 posttest failures ==
memcheck/tests/deep_templates (stdout)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/ppc32/mftocrf (stdout)
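[Editor's aside: the summary lines in these nightly reports have a fixed shape, so the counts can be pulled out mechanically when comparing runs. A small sketch using the line from the report above — the sed pattern is mine, not part of the nightly-build scripts:]

```shell
# Extract the stderr-failure count from a regtest summary line.
line='== 210 tests, 4 stderr failures, 3 stdout failures, 0 posttest failures =='
echo "$line" | sed -E 's/.*, ([0-9]+) stderr failures.*/\1/'
# prints: 4
```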
|
|
From: Julian S. <js...@ac...> - 2006-04-17 19:47:54
|
Greetings. It's once again time for the major-release fun 'n' games.
I'm proposing we make a first release candidate on Friday 12 May, with
final release as soon as possible after that, and not later than
Friday 19 May, unless something really nasty turns up.
3.2.0 will support {x86,amd64,ppc32,ppc64}-linux. It will contain
the following major changes:
* faster, lighter memcheck; addrcheck is gone
* all-round performance tuning
* ppc64-linux support
* improved floating point accuracy on ppc32/64-linux
* callgrind has been merged in
* function wrapping support, and MPI wrappers
* support for Fedora Core 5 and SuSE 10.1
* Lackey, the example tool, is much improved
* the usual bunch of bug fixes; see docs/internals/3_1_BUGSTATUS.txt
If you are a packager, or if you want to use Valgrind on obscure
setups, please take the time to build and test the trunk on
platforms that are important to you. The ever-growing set of platforms
on which Valgrind runs inevitably means that some setups
will not be well tested; nobody has the time to try everything.
I (still) believe the code base is in pretty good shape, and I'm
not aware of any major brokenness at this point.
Closer to the time, I'll trawl through regtest failures on machines I
have access to see if there's anything critical there. It would be
helpful if others could do likewise. Also, trawling the bugzilla
reports for serious bugs would be helpful.
J
|
|
From: Dave N. <dc...@us...> - 2006-04-17 19:08:21
|
Nightly build on vervain ( SuSE 9.0, IBM Power5 ) started at 2006-04-17
12:44:06 CDT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 210 tests, 7 stderr failures, 6 stdout failures, 0 posttest failures ==
memcheck/tests/deep_templates (stdout)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/ppc32/jm-vmx (stdout)
none/tests/ppc32/jm-vmx (stderr)
none/tests/ppc32/mftocrf (stdout)
none/tests/ppc32/testVMX (stdout)
none/tests/ppc32/testVMX (stderr)
none/tests/ppc64/jm-vmx (stdout)
none/tests/ppc64/jm-vmx (stderr)
|
|
From: Dave N. <dc...@us...> - 2006-04-17 17:43:39
|
Nightly build on keyser ( SuSE 9.0 IBM Power4 ) started at 2006-04-17
04:39:36 CDT
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 210 tests, 13 stderr failures, 7 stdout failures, 0 posttest failures ==
memcheck/tests/badjump (stderr)
memcheck/tests/deep_templates (stdout)
memcheck/tests/describe-block (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/match-overrun (stderr)
memcheck/tests/supp_unknown (stderr)
none/tests/async-sigs (stdout)
none/tests/async-sigs (stderr)
none/tests/blockfault (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/ppc32/jm-vmx (stdout)
none/tests/ppc32/jm-vmx (stderr)
none/tests/ppc32/mftocrf (stdout)
none/tests/ppc32/testVMX (stdout)
none/tests/ppc32/testVMX (stderr)
none/tests/ppc64/jm-vmx (stdout)
none/tests/ppc64/jm-vmx (stderr)
|
|
From: <js...@ac...> - 2006-04-17 13:27:19
|
Nightly build on minnie ( SuSE 10.0, ppc32 ) started at 2006-04-17 02:00:01 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 204 tests, 12 stderr failures, 6 stdout failures, 0 posttest failures ==
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigaltstack (stderr)
memcheck/tests/stack_changes (stdout)
memcheck/tests/stack_changes (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/ppc32/jm-fp (stdout)
none/tests/ppc32/jm-fp (stderr)
none/tests/ppc32/round (stdout)
none/tests/ppc32/round (stderr)
none/tests/ppc32/test_fx (stdout)
none/tests/ppc32/test_fx (stderr)
none/tests/ppc32/test_gx (stdout)
|
|
From: Julian S. <js...@ac...> - 2006-04-17 13:21:11
|
Hi Josef
I have been using kcachegrind to profile memcheck starting koffice
(!) and this seems to work well. However, callgrind generates
snapshot files of up to 75M long, and kcachegrind takes > 30 mins
CPU time and > 450M memory to load the file. Have you seen this
kind of thing before? Can you have a look at it if I send you
the file?
J
|
|
From: Tom H. <th...@cy...> - 2006-04-17 02:46:22
|
Nightly build on ford ( i686, Fedora Core 4 ) started at 2006-04-17 03:25:04 BST
Results differ from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 237 tests, 7 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/tls (stdout)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
=================================================
== Results from 24 hours ago ==
=================================================
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 237 tests, 8 stderr failures, 0 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
=================================================
== Difference between 24 hours ago and now ==
=================================================
*** old.short Mon Apr 17 03:36:29 2006
--- new.short Mon Apr 17 03:46:16 2006
***************
*** 8,12 ****
! == 237 tests, 8 stderr failures, 0 stdout failures, 0 posttest failures ==
  memcheck/tests/leak-tree (stderr)
- memcheck/tests/mempool (stderr)
  memcheck/tests/pointer-trace (stderr)
--- 8,11 ----
! == 237 tests, 7 stderr failures, 1 stdout failure, 0 posttest failures ==
  memcheck/tests/leak-tree (stderr)
  memcheck/tests/pointer-trace (stderr)
***************
*** 15,16 ****
--- 14,16 ----
  memcheck/tests/x86/scalar_supp (stderr)
+ none/tests/tls (stdout)
  none/tests/x86/faultstatus (stderr)
|
|
From: Tom H. <to...@co...> - 2006-04-17 02:45:27
|
Nightly build on dunsmere ( athlon, Fedora Core 5 ) started at 2006-04-17 03:30:05 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 236 tests, 7 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/mempool (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/sse1_memory (stdout)
memcheck/tests/xml1 (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
|
|
From: Tom H. <th...@cy...> - 2006-04-17 02:34:37
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2006-04-17 03:15:04 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 235 tests, 21 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/addressable (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/erringfds (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/partial_load_dflt (stderr)
memcheck/tests/partial_load_ok (stderr)
memcheck/tests/partiallydefinedeq (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigkill (stderr)
memcheck/tests/stack_changes (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/sse1_memory (stdout)
memcheck/tests/xml1 (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
|
|
From: Tom H. <th...@cy...> - 2006-04-17 02:27:30
|
Nightly build on dellow ( x86_64, Fedora Core 5 ) started at 2006-04-17 03:10:11 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 258 tests, 5 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/sse1_memory (stdout)
memcheck/tests/xml1 (stderr)
none/tests/amd64/faultstatus (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
|
|
From: Tom H. <th...@cy...> - 2006-04-17 02:25:59
|
Nightly build on aston ( x86_64, Fedora Core 3 ) started at 2006-04-17 03:05:11 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 258 tests, 7 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/sse1_memory (stdout)
memcheck/tests/xml1 (stderr)
none/tests/amd64/faultstatus (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
|
|
From: Tom H. <th...@cy...> - 2006-04-17 02:14:21
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2006-04-17 03:00:02 BST
Results unchanged from 24 hours ago
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 258 tests, 7 stderr failures, 1 stdout failure, 0 posttest failures ==
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/x86/sse1_memory (stdout)
none/tests/amd64/faultstatus (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
|
|
From: <js...@ac...> - 2006-04-17 01:31:12
|
Nightly build on phoenix ( SuSE 10.0 ) started at 2006-04-17 03:30:01 BST
Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed
Regression test results follow
== 234 tests, 6 stderr failures, 0 stdout failures, 0 posttest failures ==
memcheck/tests/leak-tree (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/x86/faultstatus (stderr)
none/tests/x86/int (stderr)
|