From: Julian S. <js...@ac...> - 2008-03-28 23:30:20
|
Konstantin wrote:
> I'd like to collect ideas regarding the subject raised today in a
> separate thread: how to decipher Helgrind's reports about 'Possible
> data race'.

That is an excellent question; unfortunately it is not easy to answer.

Let me ask a related question, which in a way chases the problem from the other end: in an ideal world (no constraints on CPU time or memory), what information would make it easy to find the root cause of race reports?

J
|
From: <bar...@gm...> - 2008-03-28 18:58:07
|
On 3/27/08, Julian Seward <js...@ac...> wrote:
> Did you configure firefox like this?
>
> ./configure --prefix=$XYZZYFOOBAR --enable-application=browser \
> --enable-optimize="-O -g"
>
> so that you are building at -O1 instead of -O0? It reduces
> the amount of generated code a lot and so might reduce the amount
> of debug info too. My experience with firefox is that the
> variable info takes "only" about 500MB.
>
> Alternatively, you could just add more swap space. After all, that
> variable info is read in and mostly ignored, so it's OK for the
> process size to get very large if half of it can be swapped out
> and is never referenced again.

Thanks for the hint. Adding -O1 helped but, sorry, not enough. Virtual memory usage of exp-drd is now about 2.2 GB just after having read the variable info -- this is still too much. And yes, I can increase the available virtual memory on my PC, but I can't ask that of all exp-drd users.

By the way, do you know how gdb handles variable info? Does it read all variable info at once, or does it read this information only when needed?

Bart.
|
From: Nuno L. <nun...@sa...> - 2008-03-28 17:56:33
|
Hi,

In the past days I have improved the patches I had provided previously, so this e-mail is a summary of the patches I have for valgrind (for now).

http://web.ist.utl.pt/nuno.lopes/vex_regalloc_eqspill_bugfix.txt
Fix a bug in the register allocator, which assumes that a register modified in the same instruction that triggers the reload isn't dirty. This means that in e.g.:

    addl %vr1, %vr2

if %vr2 has to be reloaded and isn't written before the next spill, it won't get written back to memory (because eq_spill_slot == True).

http://web.ist.utl.pt/nuno.lopes/vex_regalloc_mov_vr.txt
This is an optimization that removes redundant 'movl %vrX, %RR' when %vrX is mapped to the real register %RR.

http://web.ist.utl.pt/nuno.lopes/vex_peephole_optimizations.txt
The peephole optimizer I described earlier. It does some coalescing and removes redundant moves between virtual registers. On code instrumented with memcheck it can reduce the number of reg-allocated instructions by up to 12%.

All the regression tests pass after applying these patches (except those that didn't pass before applying them). I've only tested on an x86 PC, but I'll soon try on an x86_64 and a ppc64.

Regards,
Nuno

P.S.: I have two colleagues who are experimenting with other optimizations as well. They should submit a few patches in the next few days (we hope).
|
From: John R. <joh...@gm...> - 2008-03-28 15:07:22
|
I meant it would generate branches in simple if's with no else case, e.g.:
if(unlikely(cond))
do_something_expensive;
rest_of_function;
would incorrectly turn into:
if(!cond) goto label1;
do_something_expensive;
label1:
rest_of_function
rather than the more optimal:
if(cond) goto label1;
label2:
rest_of_function;
... return
label1:
do_something_expensive;
goto label2;
I can't think of an architecture where it would in general be better
to do it the other way. Maybe in some cases where you can use
conditional execution. Anyway, I'm getting off-topic.
John.
On 28/03/2008, Erik Sandberg <san...@vi...> wrote:
> John Ripley wrote:
> > I'd check that the compiler is using the hint properly. I can't
> > remember which version (somewhere from 4.0-4.1) but I found gcc would
> > actually generate branches for the "likely" path and straight line
> > code for the "unlikely" - completely the opposite of what it should
> > do. In the end I actually added a gcc version check around the macros
> > to invert the sense :)
> >
> I'd check a disassembly to see if the compiler is actually taking
> > your hints properly.
> >
>
>
> I don't know about this specific case, but doesn't "what it should do"
> depend on the target architecture?
>
> If there is an else clause, it seems to me that the GCC behaviour you
> described is desirable, it would save an unconditional jump instruction:
> If the code "if likely(x) foo(); else bar();" is compiled into this
> pseudoassembler,
>
> if x goto FOO
> call bar
> jmp END
> FOO:
> call foo
> END:
>
> then there is an extra jmp in the unlikely branch but not in the likely one.
>
>
> Erik
>
|
|
From: Erik S. <san...@vi...> - 2008-03-28 14:50:50
|
John Ripley wrote:
> I'd check that the compiler is using the hint properly. I can't
> remember which version (somewhere from 4.0-4.1) but I found gcc would
> actually generate branches for the "likely" path and straight line
> code for the "unlikely" - completely the opposite of what it should
> do. In the end I actually added a gcc version check around the macros
> to invert the sense :)
>
> I'd check a disassembly to see if the compiler is actually taking
> your hints properly.

I don't know about this specific case, but doesn't "what it should do" depend on the target architecture?

If there is an else clause, it seems to me that the GCC behaviour you described is desirable: it would save an unconditional jump instruction. If the code "if likely(x) foo(); else bar();" is compiled into this pseudoassembler,

    if x goto FOO
    call bar
    jmp END
FOO:
    call foo
END:

then there is an extra jmp in the unlikely branch but not in the likely one.

Erik
|
From: Konstantin S. <kon...@gm...> - 2008-03-28 11:44:48
|
Hi,
I'd like to collect ideas regarding the subject raised today in a
separate thread: how to decipher Helgrind's reports about 'Possible
data race'.
So, this is the usual format of helgrind report about a race:
- ACCESS_TYPE (read or write)
- memory address ADDR
- thread segment SEG
- thread THR
- stack dump of access ACCESS_CONTEXT
- stack dump of the place where ADDR has been allocated:
ALLOC_CONTEXT (or a name of a global variable).
- stack dump of the place where last consistently used lock was
used: LOCK_CONTEXT
- Previous state OLD_STATE which indicates:
- If there were writes to this memory before (or only reads
happened): R or W
- In what segments and threads were these previous accesses:
(like this: S123/T1 S456/T3 S987/T7)
So, if the race happens on a global var, life is easy: we just check
all uses of this var manually.
If there are too many uses, we can run Helgrind a second time with
--trace-addr=ADDR --trace-level=2 and we get all accesses.
If the race happens on a memory location allocated from heap, and
which is e.g. a field of a structure inside our code, --trace-addr may
not work (in my experience it never works on big apps).
This is because addresses allocated in multi-threaded programs differ
from run to run (idea: hack the allocator to make it more
reproducible; not sure if possible).
In this case VG_USERREQ__HG_TRACE_MEM is useful: we annotate the racy
field with this client request and rerun Helgrind with --trace-level=2
getting all the accesses.
In my experience it helps in ~50% of cases.
I think that sometimes printing the traces annoys the scheduler and
the race gets hidden (idea: instead of printing traces store them
somewhere and print only when showing the race).
Ok, but what shall we do if the race is inside some library code (e.g.
STL)? We can't annotate it...
That's what I do (not perfect and requires a lot of manual work):
- On each segment creation I record the current context (stack dump)
(added ExeContext* field to Segment)
- When printing a race report I also print contexts of all segments in
the OLD_STATE.
It gives me information like this: access to ADDR in thread T1
happened after context C1, access in T2 happened after C2, ...
Usually, C1 and C2 are quite far from the actual access :(
But now I can find the actual access by creating new segments in
random parts of code starting from C1 and C2. (the new segments can be
created by annotating the code with
_VG_USERREQ__HG_PTHREAD_COND_SIGNAL_PRE(0xDEADBEAF))
A long process, I should say... Just yesterday I spent 1.5 hours trying
to understand a particularly nasty race reported inside vector<>.
Does anyone have a better idea?
--kcc
P.S. Julian, the mail to you still bounces:
----- Transcript of session follows -----
.. while talking to open-works.net
>>> DATA
<<< 554 5.7.1 Penalty Box error, please contact the server support to
ensure delivery
|
|
From: Tom H. <th...@cy...> - 2008-03-28 06:00:42
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2008-03-28 03:15:03 GMT Results unchanged from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 330 tests, 76 stderr failures, 1 stdout failure, 29 post failures == memcheck/tests/addressable (stderr) memcheck/tests/badjump (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/erringfds (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-pool-0 (stderr) memcheck/tests/leak-pool-1 (stderr) memcheck/tests/leak-pool-2 (stderr) memcheck/tests/leak-pool-3 (stderr) memcheck/tests/leak-pool-4 (stderr) memcheck/tests/leak-pool-5 (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/long_namespace_xml (stderr) memcheck/tests/lsframe1 (stderr) memcheck/tests/lsframe2 (stderr) memcheck/tests/malloc_free_fill (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/noisy_child (stderr) memcheck/tests/partial_load_dflt (stderr) memcheck/tests/partial_load_ok (stderr) memcheck/tests/partiallydefinedeq (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/sigkill (stderr) memcheck/tests/stack_changes (stderr) memcheck/tests/varinfo1 (stderr) memcheck/tests/varinfo2 (stderr) memcheck/tests/varinfo3 (stderr) memcheck/tests/varinfo4 (stderr) memcheck/tests/varinfo5 (stderr) memcheck/tests/varinfo6 (stderr) memcheck/tests/x86/bug152022 (stderr) memcheck/tests/x86/scalar (stderr) memcheck/tests/x86/scalar_supp (stderr) memcheck/tests/x86/xor-undef-x86 (stderr) memcheck/tests/xml1 (stderr) massif/tests/alloc-fns-A (post) massif/tests/alloc-fns-B (post) massif/tests/basic (post) massif/tests/basic2 (post) massif/tests/big-alloc (post) massif/tests/culling1 (stderr) massif/tests/culling2 (stderr) massif/tests/custom_alloc (post) massif/tests/deep-A (post) massif/tests/deep-B (stderr) 
massif/tests/deep-B (post) massif/tests/deep-C (stderr) massif/tests/deep-C (post) massif/tests/deep-D (post) massif/tests/ignoring (post) massif/tests/insig (post) massif/tests/long-names (post) massif/tests/long-time (post) massif/tests/new-cpp (post) massif/tests/null (post) massif/tests/one (post) massif/tests/overloaded-new (post) massif/tests/peak (post) massif/tests/peak2 (stderr) massif/tests/peak2 (post) massif/tests/realloc (stderr) massif/tests/realloc (post) massif/tests/thresholds_0_0 (post) massif/tests/thresholds_0_10 (post) massif/tests/thresholds_10_0 (post) massif/tests/thresholds_10_10 (post) massif/tests/thresholds_5_0 (post) massif/tests/thresholds_5_10 (post) massif/tests/zero1 (post) massif/tests/zero2 (post) none/tests/blockfault (stderr) none/tests/faultstatus (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/shell (stderr) none/tests/shell_valid1 (stderr) none/tests/shell_valid2 (stderr) none/tests/shell_valid3 (stderr) helgrind/tests/hg01_all_ok (stderr) helgrind/tests/hg02_deadlock (stderr) helgrind/tests/hg03_inherit (stderr) helgrind/tests/hg04_race (stderr) helgrind/tests/hg05_race2 (stderr) helgrind/tests/hg06_readshared (stderr) helgrind/tests/tc01_simple_race (stderr) helgrind/tests/tc02_simple_tls (stderr) helgrind/tests/tc03_re_excl (stderr) helgrind/tests/tc05_simple_race (stderr) helgrind/tests/tc06_two_races (stderr) helgrind/tests/tc07_hbl1 (stderr) helgrind/tests/tc08_hbl2 (stderr) helgrind/tests/tc09_bad_unlock (stderr) helgrind/tests/tc11_XCHG (stderr) helgrind/tests/tc12_rwl_trivial (stderr) helgrind/tests/tc14_laog_dinphils (stderr) helgrind/tests/tc16_byterace (stderr) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc19_shadowmem (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) helgrind/tests/tc23_bogus_condwait (stderr) helgrind/tests/tc24_nonzero_sem (stderr) |
|
From: Tom H. <th...@cy...> - 2008-03-28 04:26:34
|
Nightly build on lloyd ( x86_64, Fedora 7 ) started at 2008-03-28 03:05:06 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 413 tests, 6 stderr failures, 2 stdout failures, 0 post failures ==
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vcpu_fnfns (stdout)
memcheck/tests/x86/scalar (stderr)
none/tests/faultstatus (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
|
From: Tom H. <th...@cy...> - 2008-03-28 03:52:05
|
Nightly build on aston ( x86_64, Fedora Core 5 ) started at 2008-03-28 03:20:08 GMT Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 419 tests, 9 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/x86/scalar (stderr) none/tests/blockfault (stderr) none/tests/faultstatus (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc08_hbl2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 419 tests, 9 stderr failures, 1 stdout failure, 0 post failures == memcheck/tests/malloc_free_fill (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/x86/scalar (stderr) none/tests/blockfault (stderr) none/tests/faultstatus (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Fri Mar 28 03:36:29 2008 --- new.short Fri Mar 28 03:52:10 2008 *************** *** 8,10 **** ! == 419 tests, 9 stderr failures, 1 stdout failure, 0 post failures == memcheck/tests/malloc_free_fill (stderr) --- 8,10 ---- ! 
== 419 tests, 9 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/malloc_free_fill (stderr) *************** *** 16,17 **** --- 16,18 ---- none/tests/mremap2 (stdout) + helgrind/tests/tc08_hbl2 (stdout) helgrind/tests/tc20_verifywrap (stderr) |
|
From: Tom H. <th...@cy...> - 2008-03-28 03:40:06
|
Nightly build on trojan ( x86_64, Fedora Core 6 ) started at 2008-03-28 03:25:05 GMT Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 417 tests, 9 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/bug133694 (stdout) memcheck/tests/x86/bug133694 (stderr) memcheck/tests/x86/scalar (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/faultstatus (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 417 tests, 8 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/bug133694 (stdout) memcheck/tests/x86/bug133694 (stderr) memcheck/tests/x86/scalar (stderr) none/tests/cmdline1 (stdout) none/tests/cmdline2 (stdout) none/tests/faultstatus (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Fri Mar 28 03:32:42 2008 --- new.short Fri Mar 28 03:40:12 2008 *************** *** 8,10 **** ! 
== 417 tests, 8 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) --- 8,10 ---- ! == 417 tests, 9 stderr failures, 5 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) *************** *** 19,20 **** --- 19,21 ---- none/tests/mremap2 (stdout) + helgrind/tests/tc17_sembar (stderr) helgrind/tests/tc20_verifywrap (stderr) |
|
From: Tom H. <th...@cy...> - 2008-03-28 03:34:00
|
Nightly build on dellow ( x86_64, Fedora 8 ) started at 2008-03-28 03:10:04 GMT Results differ from 24 hours ago Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 413 tests, 8 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) none/tests/faultstatus (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Results from 24 hours ago == ================================================= Checking out valgrind source tree ... done Configuring valgrind ... done Building valgrind ... done Running regression tests ... failed Regression test results follow == 413 tests, 8 stderr failures, 3 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) memcheck/tests/vcpu_fnfns (stdout) memcheck/tests/x86/scalar (stderr) none/tests/faultstatus (stderr) none/tests/mremap (stderr) none/tests/mremap2 (stdout) none/tests/pth_cvsimple (stdout) helgrind/tests/tc18_semabuse (stderr) helgrind/tests/tc20_verifywrap (stderr) helgrind/tests/tc21_pthonce (stderr) helgrind/tests/tc22_exit_w_lock (stderr) ================================================= == Difference between 24 hours ago and now == ================================================= *** old.short Fri Mar 28 03:22:05 2008 --- new.short Fri Mar 28 03:33:58 2008 *************** *** 8,10 **** ! == 413 tests, 8 stderr failures, 3 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) --- 8,10 ---- ! 
== 413 tests, 8 stderr failures, 2 stdout failures, 0 post failures == memcheck/tests/pointer-trace (stderr) *************** *** 15,17 **** none/tests/mremap2 (stdout) - none/tests/pth_cvsimple (stdout) helgrind/tests/tc18_semabuse (stderr) --- 15,16 ---- |
|
From: Tom H. <th...@cy...> - 2008-03-28 03:17:06
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2008-03-28 03:00:02 GMT
Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 419 tests, 31 stderr failures, 3 stdout failures, 0 post failures ==
memcheck/tests/malloc_free_fill (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/stack_switch (stderr)
memcheck/tests/varinfo6 (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
none/tests/amd64/insn_ssse3 (stdout)
none/tests/amd64/insn_ssse3 (stderr)
none/tests/amd64/ssse3_misaligned (stderr)
none/tests/blockfault (stderr)
none/tests/faultstatus (stderr)
none/tests/fdleak_fcntl (stderr)
none/tests/mremap (stderr)
none/tests/mremap2 (stdout)
none/tests/x86/insn_ssse3 (stdout)
none/tests/x86/insn_ssse3 (stderr)
none/tests/x86/ssse3_misaligned (stderr)
helgrind/tests/hg01_all_ok (stderr)
helgrind/tests/hg02_deadlock (stderr)
helgrind/tests/hg03_inherit (stderr)
helgrind/tests/hg04_race (stderr)
helgrind/tests/hg05_race2 (stderr)
helgrind/tests/tc01_simple_race (stderr)
helgrind/tests/tc05_simple_race (stderr)
helgrind/tests/tc06_two_races (stderr)
helgrind/tests/tc09_bad_unlock (stderr)
helgrind/tests/tc14_laog_dinphils (stderr)
helgrind/tests/tc16_byterace (stderr)
helgrind/tests/tc17_sembar (stderr)
helgrind/tests/tc19_shadowmem (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc21_pthonce (stderr)
helgrind/tests/tc22_exit_w_lock (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
|
From: <sv...@va...> - 2008-03-28 01:07:32
|
Author: sewardj
Date: 2008-03-28 01:07:37 +0000 (Fri, 28 Mar 2008)
New Revision: 7795
Log:
Ignore .bss symbols in the case where an ELF object's .bss is mapped
r-x only (not writable). (But what could this possibly mean? Just
sounds really bizarre.)
Modified:
branches/HGDEV/coregrind/m_debuginfo/readelf.c
Modified: branches/HGDEV/coregrind/m_debuginfo/readelf.c
===================================================================
--- branches/HGDEV/coregrind/m_debuginfo/readelf.c 2008-03-28 00:17:26 UTC (rev 7794)
+++ branches/HGDEV/coregrind/m_debuginfo/readelf.c 2008-03-28 01:07:37 UTC (rev 7795)
@@ -1393,6 +1393,27 @@
di->bss_avma + di->bss_size - 1);
TRACE_SYMTAB("acquiring .bss bias = %p\n", di->bss_bias);
} else
+
+ /* Now one from the wtf?! department ... */
+ if (inrx && (!inrw) && size >= 0 && !di->bss_present) {
+ /* File contains a .bss, but it got mapped as rx only.
+ This is very strange. For now, just pretend we didn't
+ see it :-) */
+ di->bss_present = False;
+ di->bss_svma = 0;
+ di->bss_avma = 0;
+ di->bss_size = 0;
+ di->bss_bias = 0;
+ bss_align = 0;
+ if (!VG_(clo_xml)) {
+ VG_(message)(Vg_UserMsg, "Warning: the following file's .bss is "
+ "mapped r-x only - ignoring .bss syms");
+ VG_(message)(Vg_UserMsg, " %s", di->filename
+ ? di->filename
+ : (UChar*)"(null?!)" );
+ }
+ } else
+
if ((!inrw) && (!inrx) && size > 0 && !di->bss_present) {
/* File contains a .bss, but it didn't get mapped. Ignore. */
di->bss_present = False;
|
|
From: <sv...@va...> - 2008-03-28 00:17:23
|
Author: sewardj
Date: 2008-03-28 00:17:26 +0000 (Fri, 28 Mar 2008)
New Revision: 7794
Log:
Add notes on how to build OpenOffice from source. A Fun Game!
Modified:
branches/HGDEV/docs/internals/BIG_APP_NOTES.txt
Modified: branches/HGDEV/docs/internals/BIG_APP_NOTES.txt
===================================================================
--- branches/HGDEV/docs/internals/BIG_APP_NOTES.txt 2008-03-27 17:07:50 UTC (rev 7793)
+++ branches/HGDEV/docs/internals/BIG_APP_NOTES.txt 2008-03-28 00:17:26 UTC (rev 7794)
@@ -60,3 +60,40 @@
if (zeroit)
memset(data, 0, bytes);
return data;
+
+
+
+Building OpenOffice 2.4 from source
+-----------------------------------
+
+svn co svn://svn.gnome.org/svn/ooo-build/trunk ooo-build
+
+cd ooo-build
+
+export ARCH_FLAGS="-g -O"
+
+./autogen.sh --with-distro=SUSE-10.2 --with-java=no
+--disable-gstreamer --disable-mono --with-max-jobs=2 --with-num-cpus=2
+
+./download
+
+make
+
+# make now runs the 'inner' configure (of OOo proper) and
+# invariably fails. To fix, install 987,654,321 packages you never
+# heard of before, that OOo absolutely needs, and go back to the
+# autogen step. You probably need to do this ten times or more.
+
+# eventually you might get through the inner configure. After
+# a couple of hours of flat out computation on both cores of
+# a fast Core 2, the build might complete successfully.
+
+# in the likely event of even all that not working, go on to #go-oo
+# at irc.freenode.org and ask questions
+
+# eventually ...
+
+./bin/ooinstall ~/Tools/OOPlay/Inst01
+cd ~/Tools/OOPlay/Inst01
+valgrind -v ./program/soffice.bin
+
|