From: Philippe W. <phi...@sk...> - 2015-06-18 19:52:08
|
On Thu, 2015-06-18 at 14:20 +0800, 王阳 wrote:
> I have done it two times with different options series. The result is
> as follows:
> 1.--tool=helgrind --gen-suppressions=all --stats=yes
> --profile-heap=yes --track-lockorders=no --read-var-info=yes
> --trace-children=yes --error-limit=no --log-file=xxx.log myprog
Ok, the origin of the memory use is related to the number of locks
your application creates and/or keeps locked.
About 50 GB of memory is consumed for hg.ids.4:
> 51,760,182,272 in 341,498: hg.ids.4
This is the 'universe of locksets', i.e. all the locksets that
helgrind has ever seen.
It looks like your application uses a lot of locks
and keeps a lot of locks in the status 'locked', as shown by:
> locks: 1,029,288 acquires, 915,546 releases
From the above, I guess your application has thousands
of locks, and that each thread sometimes has hundreds
of locks held.
Well, if that is the case, I think the helgrind data structures
are not designed for such behaviour :(
There is no garbage collection of the lock set universe.
At this point, I think there are 2 possible approaches:
* implement in helgrind a garbage collection of the univ_lsets
(unclear if this is implementable, and for sure not trivial)
* you change the limits of Valgrind max memory, to e.g. go
to 128G or 256G.
You might refer for that to the revision r13278, which increased
from 32G to 64G. That gives the various constants to multiply by 2
or 4 to go to 128G or 256G.
So, check out the SVN version, then do
svn diff -r13277:13278
and follow that pattern to increase to 128 or 256G
Philippe
|
|
From: 王阳 <412...@qq...> - 2015-06-18 06:32:01
|
Hi Philippe,
> If --track-lockorders=no does not solve the problem,
Yes, it does not work: the speed of memory consumption even increases, and
it breaks the 64G limit quickly.
> can you then re-run with
> and post the result ?
I have done it two times with different option sets. I am sending the
results in two emails because of the email size limit.
The result is as follows:
1.--tool=helgrind --gen-suppressions=all --stats=yes --profile-heap=yes --track-lockorders=no --read-var-info=yes --trace-children=yes --error-limit=no --log-file=xxx.log myprog
sending command v.info stats to pid 19749
--19749-- translate: fast SP updates identified: 0 ( 0.0%)
--19749-- translate: generic_known SP updates identified: 4,510 ( 93.6%)
--19749-- translate: generic_unknown SP updates identified: 306 ( 6.3%)
--19749-- tt/tc: 2,095,227 tt lookups requiring 2,141,230 probes
--19749-- tt/tc: 2,072,199 fast-cache updates, 6 flushes
--19749-- transtab: new 30,619 (860,271 -> 7,678,917; ratio 89:10) [0 scs]
--19749-- transtab: dumped 0 (0 -> ??)
--19749-- transtab: discarded 18 (594 -> ??)
--19749-- scheduler: 265,075,965 event checks.
--19749-- scheduler: 164,155,368 indir transfers, 1,638,407 misses (1 in 100)
--19749-- scheduler: 2,649/21,359,755 major/minor sched events.
--19749-- sanity: 2655 cheap, 54 expensive checks.
--19749-- exectx: 12,289 lists, 10,469 contexts (avg 0 per list)
--19749-- exectx: 9,270,498 searches, 9,276,450 full compares (1,000 per 1000)
--19749-- exectx: 0 cmp2, 8,581 cmp4, 0 cmpAll
--19749-- errormgr: 3 supplist searches, 55 comparisons during search
--19749-- errormgr: 4,586 errlist searches, 8,581 comparisons during search
WordSet "univ_lsets":
addTo 2057968 (113892 uncached)
delFrom 1831092 (228 uncached)
union 0
intersect 1 (0 uncached) [nb. incl isSubsetOf]
minus 0 (0 uncached)
elem 0
doubleton 0
isEmpty 0
isSingleton 0
anyElementOf 0
isSubsetOf 1
dieWS 0
locksets: 113,832 unique lock sets
LockN-to-P map: 1 queries (1 map size)
string table map: 0 queries (0 map size)
locks: 1,029,288 acquires, 915,546 releases
sanity checks: 1
<<< BEGIN libhb stats >>>
secmaps: 71,826 allocd ( 588,398,592 g-a-range)
linesZ: 9,193,728 allocd ( 441,298,944 bytes occupied)
linesF: 422,865 allocd ( 219,889,800 bytes occupied)
secmaps: 0 iterator steppings
secmaps: 34,955,464 searches ( 11,249,487 slow)
cache: 1,317,086,773 totrefs (14,233,749 misses)
cache: 13,278,898 Z-fetch, 954,851 F-fetch
cache: 12,856,393 Z-wback, 1,311,820 F-wback
cache: 19 invals, 18 flushes
cache: 6,553,527,149 arange_New 1,782,704,320 direct-to-Zreps
cline: 14,233,749 normalises
cline: c rds 8/4/2/1: 110,690,124 16,459,300 4,482,262 56,304,543
cline: c wrs 8/4/2/1: 272,628,312 29,239,745 17,673,480 39,104,868
cline: s wrs 8/4/2/1: 765,271,067 1,532,438 2,226,925 1,414,393
cline: s rd1s 63,510, s copy1s 63,510
cline: splits: 8to4 6,452,847 4to2 7,537,034 2to1 7,914,706
cline: pulldowns: 8to4 25,768,693 4to2 17,244,662 2to1 24,124,885
libhb: 183,405,289 msmcread (123,158,943 dragovers)
libhb: 341,272,758 msmcwrite (47,503,666 dragovers)
libhb: 160,362,776 cmpLEQ queries (12,643,035 misses)
libhb: 124,995,684 join2 queries (6,250,924 misses)
libhb: VTSops: tick 1,831,132, join 6,250,924, cmpLEQ 12,643,035
libhb: VTSops: cmp_structural 191,246,876 (176,949,227 slow)
libhb: VTSset: find__or__clone_and_add 8,082,057 (925,673 allocd)
libhb: VTSops: indexAt_SLOW 6
libhb: 673314 entries in vts_table (approximately 16159536 bytes)
libhb: 673314 entries in vts_set
libhb: ctxt__rcdec: 1=160967522(38314009 eq), 2=9, 3=8793505
libhb: ctxt__rcdec: calls 169761036, discards 0
libhb: contextTab: 196613 slots, 3466 max ents
libhb: contextTab: 170662609 queries, 174681088 cmps
<<< END libhb stats >>>
sending command v.info memory aspacemgr to pid 19749
58646061056 bytes have already been allocated.
--19749-- core : 57180975104/57180975104 max/curr mmap'd, 22/23 unsplit/split sb unmmap'd, 52561781768/52561765192 max/curr, 7522704/55692746552 totalloc-blocks/bytes, 305297631 searches 8 rzB
--19749-- dinfo : 580026368/499212288 max/curr mmap'd, 51/31 unsplit/split sb unmmap'd, 424546576/167730720 max/curr, 12146404/1396066272 totalloc-blocks/bytes, 12735394 searches 8 rzB
--19749-- client : 402145280/402145280 max/curr mmap'd, 2/2 unsplit/split sb unmmap'd, 189697952/188416512 max/curr, 7974243/ 558505808 totalloc-blocks/bytes, 7974428 searches 24 rzB
--19749-- demangle: 0/ 0 max/curr mmap'd, 0/0 unsplit/split sb unmmap'd, 0/ 0 max/curr, 0/ 0 totalloc-blocks/bytes, 0 searches 8 rzB
--19749-- ttaux : 1425408/ 987136 max/curr mmap'd, 5/0 unsplit/split sb unmmap'd, 1303280/ 900224 max/curr, 3370/ 2522400 totalloc-blocks/bytes, 3360 searches 8 rzB
-------- Arena "core": 57180975104/57180975104 max/curr mmap'd, 22/23 unsplit/split sb unmmap'd, 52561782008/52561765192 max/curr on_loan 8 rzB --------
51,760,182,272 in 341,498: hg.ids.4
219,921,328 in 32,858: libhb.aFfw.1 (LineF storage)
120,972,208 in 57,714: libhb.event_map_init.4 (oldref tree)
104,788,896 in 2,622: hg_malloc_metadata_pool
96,104,640 in 1,004: libhb.event_map_init.3 (OldRef pools)
56,755,872 in 341,226: hg.lNaw.1
55,645,808 in 538: libhb.event_map_init.1 (RCEC pools)
32,779,504 in 673,315: libhb.vts_tab__do_GC.new_set
25,169,840 in 1: hashtable.resize.1
20,995,456 in 262,371: hg.mk_Lock.1
18,970,400 in 576,604: libhb.vts_set_focaa.1
16,801,792 in 2: libhb.vts_tab__do_GC.new_tab
12,603,904 in 262,372: hg.ids.2
8,412,208 in 262,398: libhb.SO__Alloc.1
3,470,672 in 71,827: libhb.zsm_init.1 (map_shmem)
3,216,640 in 96,710: libhb.vts_tab__do_GC.new_vts
2,097,168 in 1: libhb.libhb_init.1
1,572,912 in 1: libhb.event_map_init.2 (context table)
1,080,408 in 10,469: perm_malloc
98,320 in 1: execontext.reh1
65,536 in 4: libhb.Thr__new.2
36,896 in 22: gdbsrv
6,160 in 1: hashtable.Hc.2
4,000 in 1: hg.ids.1
2,240 in 28: errormgr.losf.1
1,472 in 22: hg.mctCI.1
1,072 in 28: errormgr.losf.2
992 in 39: errormgr.sLTy.1
832 in 28: errormgr.losf.4
768 in 3: errormgr.mre.3
704 in 21: hg.mctCloa.1
608 in 4: hg.mstSs.1
576 in 4: hg.mpttT.1
480 in 2: hg.mLPfLN.1
288 in 6: hg.pSfs.1
256 in 4: hg.mk_Thread.1
240 in 3: errormgr.mre.1
192 in 6: errormgr.sLTy.2
192 in 4: libhb.Thr__new.1
160 in 2: commandline.sua.2
144 in 1: initimg-linux.sce.5
144 in 1: m_cache
128 in 2: libhb.Thr__new.4
128 in 4: stacks.rs.1
128 in 2: commandline.sua.3
128 in 2: hashtable.Hc.1
96 in 6: hg.eWSiLPa
80 in 1: hg.mLPfLN.2
80 in 1: gdbserved_watches
80 in 1: libhb.verydead_thread_table_init.1
64 in 1: main.mpclo.3
64 in 1: options.efn.4
16 in 1: sched_lock
-------- Arena "dinfo": 580026368/499212288 max/curr mmap'd, 51/31 unsplit/split sb unmmap'd, 424546576/167730720 max/curr on_loan 8 rzB --------
51,720,032 in 609,243: di.readdwarf3.mgGX.2
41,726,624 in 98,768: di.storage.avta.2
17,505,424 in 5: di.storage.addInl.1
12,078,432 in 6: di.storage.addLoc.1
7,079,824 in 12: di.readdwarf3.ndrw.4
6,725,712 in 12: di.readdwarf3.ndrw.7 (TyEnt to-keep array)
5,241,120 in 95,776: di.storage.addVar.2
4,713,904 in 50,423: di.storage.avta.1
4,579,296 in 131: di.storage.addStr.1
3,326,080 in 13,086: di.readdwarf3.ptD.struct_type.2
3,121,120 in 96,105: di.readdwarf3.msGX.1
2,512,000 in 16: di.storage.addSym.1
2,186,032 in 29,895: di.readdwarf3.pTD.struct_type.1
1,608,768 in 16: di.storage.finCfSI.1
1,355,216 in 6: di.storage.addLoc.2
485,504 in 25,397: di.readdwarf3.ptD.member.1
348,864 in 4: di.ccCt.2
333,488 in 16: di.storage.finCfSI.2
220,016 in 8,539: di.readdwarf3.pTD.enumerator.1
148,816 in 1,306: di.readdwarf3.ptD.enum_type.1
129,152 in 2,430: di.readdwarf3.ptD.array_type.1
105,808 in 64: di.storage.DiCfSI_m_pool
103,968 in 4,922: di.readdwarf3.ptD.typedef.1
86,224 in 4,062: di.storage.aDntf.1
83,344 in 5,209: di.readdwarf3.ptD.member.2
72,320 in 63: di.storage.addVar.3
51,456 in 24: di.storage.addFnDn.1
19,776 in 209: redir.rnnD.1
14,336 in 16: di.debuginfo.aDI.1
13,760 in 140: redir.ri.1
12,816 in 400: di.readdwarf3.pTD.enum_type.3
5,344 in 253: di.readdwarf3.pTD.enum_type.2
4,608 in 209: redir.rnnD.3
3,440 in 209: redir.rnnD.2
2,784 in 32: di.debuginfo.aDI.3
1,776 in 86: di.readdwarf3.ptD.base_type.1
1,232 in 12: di.storage.addVar.1
640 in 16: di.debuginfo.aDI.2
512 in 16: redir.rnnD.4
448 in 2: di.ccCt.1
416 in 17: di.readdwarf3.ptD.member.3
224 in 12: di.redi.1
64 in 4: di.redi.2
-------- Arena "client": 402145280/402145280 max/curr mmap'd, 2/2 unsplit/split sb unmmap'd, 189697952/188416512 max/curr on_loan 24 rzB --------
188,416,512 in 2,618,098: replacemalloc.cm.1
-------- Arena "demangle": 0/0 max/curr mmap'd, 0/0 unsplit/split sb unmmap'd, 0/0 max/curr on_loan 8 rzB --------
-------- Arena "ttaux": 1425408/987136 max/curr mmap'd, 5/0 unsplit/split sb unmmap'd, 1303280/900224 max/curr on_loan 8 rzB --------
659,456 in 2: transtab.initialiseSector(host_extents)
116,144 in 656: transtab.IEA__add
106,864 in 257: transtab.aECN.1
17,760 in 230: transtab.OEA__add
|
From: Philippe W. <phi...@sk...> - 2015-06-17 20:49:06
|
On Wed, 2015-06-17 at 20:25 +0800, 王阳 wrote:
> > Before fixing the above, what you can do then is to capture
> > the statistics before it crashes, using from a shell:
> >   vgdb v.info stats
> > and post the resulting output
> > (do the above when your application has already consumed a significant
> > memory, but before it has crashed :).
> I took it when myprog under valgrind had used 29.2 GB of memory; I guess
> that is significant enough.
Yes, this is for sure ok.

> WordSet "univ_laog":
> addTo 240996259 (240994769 uncached)
> delFrom 10 (10 uncached)
This (and the LAOG exposition size below) is a possible candidate
for the big memory use.
Can you try to run with
  --track-lockorders=no
so as to disable the LAOG algorithm ?

> union 0
> intersect 0 (0 uncached) [nb. incl isSubsetOf]
> minus 0
> elem 0
> doubleton 15559
> isEmpty 0
> isSingleton 0
> anyElementOf 0
> isSubsetOf 0
> dieWS 240966779
>
> locksets: 15,615 unique lock sets
> univ_laog: 46,445 unique lock sets
> LockN-to-P map: 1 queries (1 map size)
> string table map: 0 queries (0 map size)
> LAOG: 15,557 map size
> LAOG exposition: 120,505,100 map size
This LAOG exposition seems huge.

> locks: 931,132 acquires, 915,607 releases
> sanity checks:
>
> <<< BEGIN libhb stats >>>
> secmaps: 66,706 allocd (546,455,552 g-a-range)
This means that your own memory allocation is not huge
(slightly more than 0.5 GB).

If --track-lockorders=no does not solve the problem,
can you then re-run with
  --stats=yes --profile-heap=yes
and while it runs (and has consumed already significant memory), do
  vgdb -c v.info stats -c v.info memory aspacemgr
and post the result ?

Thanks

Philippe
|
From: 王阳 <412...@qq...> - 2015-06-17 12:41:08
|
Hi Philippe,
> Is your application 'memcheck-clean' ?
Yes, it is. By the way, myprog uses 5 GB of memory without Valgrind, 10 GB
with memcheck, and over 64 GB with helgrind. Under memcheck myprog runs
fine, but under helgrind it crashes as you have said.
> Before fixing the above, what you can do then is to capture
> the statistics before it crashes, using from a shell:
>   vgdb v.info stats
> and post the resulting output
> (do the above when your application has already consumed a significant
> memory, but before it has crashed :).
I took it when myprog under valgrind had used 29.2 GB of memory; I guess
that is significant enough.
sending command v.info stats to pid 28712
--28712-- translate: fast SP updates identified: 0 (0.0%)
--28712-- translate: generic_known SP updates identified: 4,510 (93.6%)
--28712-- translate: generic_unknown SP updates identified: 306 (6.3%)
--28712-- tt/tc: 2,095,337 tt lookups requiring 2,141,437 probes
--28712-- tt/tc: 2,072,309 fast-cache updates, 6 flushes
--28712-- transtab: new 30,620 (860,393 -> 7,679,672; ratio 89:10) [0 scs]
--28712-- transtab: dumped 0 (0 -> ??)
--28712-- transtab: discarded 18 (594 -> ??)
--28712-- scheduler: 264,186,172 event checks.
--28712-- scheduler: 163,765,532 indir transfers, 1,638,538 misses (1 in 99)
--28712-- scheduler: 2,641/21,065,894 major/minor sched events.
--28712-- sanity: 2644 cheap, 53 expensive checks.
--28712-- exectx: 12,289 lists, 10,468 contexts (avg 0 per list)
--28712-- exectx: 9,172,551 searches, 9,179,201 full compares (1,000 per 1000)
--28712-- exectx: 0 cmp2, 8,734 cmp4, 0 cmpAll
--28712-- errormgr: 3 supplist searches, 55 comparisons during search
--28712-- errormgr: 4,705 errlist searches, 8,734 comparisons during search
WordSet "univ_lsets":
addTo 1861656 (15667 uncached)
delFrom 1831214 (228 uncached)
union 0
intersect 1 (0 uncached) [nb. incl isSubsetOf]
minus 0 (0 uncached)
elem 947507
doubleton 0
isEmpty 931132
isSingleton 0
anyElementOf 0
isSubsetOf 1
dieWS 0
Wordset "univ_laog":
addTo 240996259(240994769 uncached)
delForm 10 (10 uncached)
union 0
intersect 0 (0 uncahed ) [nb .incl isSubsetof]
minus 0
elem 0
doubleton 15559
isEmpty 0
isSingleton 0
anyElementOf 0
isSubsetof 0
dieWS 240966779
locksets: 15,615 unique lock sets
univ_laog: 46,445 unique lock sets
LockN-to-P map: 1 queries (1 map size)
string table map: 0 queries (0 map size)
LAOG: 15,557 map size
LAOG exposition: 120,505,100 map size
locks: 931,132 acquires, 915,607 releases
sanity checks:
<<< BEGIN libhb stats >>>
secmaps: 66,706 allocd (546,455,552 g-a-range)
linesZ: 8,538,368 allocd (409,841,664 bytes occupied)
linesF: 531,515 allocd (276,387,800 bytes occupied)
secmaps: 0 iterator steppings
secmaps: 34,659,884 searches (10,913,136 slow)
cache: 1,312,567,566 totrefs (14,120,124 misses)
cache: 13,092,998 Z-fetch, 1,027,126 F-fetch
cache: 12,669,322 Z-wback, 1,385,266 F-wback
cache: 19 invals, 18 flushes
cache: 6,522,063,696 arange_New 1,770,286,400 direct-to-Zreps
cline: 14,120,124 normalises
cline: c rds 8/4/2/1: 110,939,011 16,127,798 4,417,962 56,259,749
cline: c wrs 8/4/2/1: 271,897,522 29,052,560 17,694,752 39,060,366
cline: s wrs 8/4/2/1: 761,884,776 1,532,437 2,226,925 1,414,392
cline: s rd1s 63,510, s copy1s 63,510
cline: splits: 8to4 6,458,760 4to2 7,524,761 2to1 7,814,224
cline: pulldowns: 8to4 25,582,382 4to2 17,336,120 2to1 24,085,196
libhb: 183,323,453 msmcread (122,877,511 dragovers)
libhb: 340,328,522 msmcwrite (47,347,878 dragovers)
libhb: 159,883,155 cmpLEQ queries (12,644,655 misses)
libhb: 124,714,476 join2 queries (6,251,276 misses)
libhb: VTSops: tick 1,831,254, join 6,251,276, cmpLEQ 12,644,655
libhb: VTSops: cmp_structural 195,342,726 (176,646,626 slow)
libhb: VTSset: find__or__clone_and_add 8,082,531 (925,969 allocd)
libhb: VTSops: indexAt_SLOW 6
libhb: 679272 entries in vts_table (approximately 16302528 bytes)
libhb: 679272 entries in vts_set
libhb: ctxt__rcdec: 1=161037815(38320474 eq), 2=9, 3=8292043
libhb: ctxt__rcdec: calls 169329867, discards 0
libhb: contextTab: 196613 slots, 258461 max ents
libhb: contextTab: 170225389 queries, 173948778 cmps
<<< END libhb stats >>>
> --8146-- univ_laog_do_GC exit seen 41326 next gc at cardinality 61780
> --8146-- univ_laog_do_GC enter cardinality 61780
> --8146-- VALGRIND INTERNAL ERROR: Valgrind received a signal 11
> (SIGSEGV) - exiting
> --8146-- si_code = 1; Faulting address: 0x8031AD000; sp: 0x80317db70
> valgrind: the 'impossible' happened:
> Killed by fatal signal
The above means Valgrind crashed, for an undetermined reason
(either an internal bug in Valgrind, or alternatively, a bug
in your application that corrupts the memory).
Is your application 'memcheck-clean' ?
Before fixing the above, what you can do then is to capture
the statistics before it crashes, using from a shell:
vgdb v.info stats
and post the resulting output
(do the above when your application has already consumed a significant
memory, but before it has crashed :).
Philippe |
|
From: Philippe W. <phi...@sk...> - 2015-06-16 19:35:17
|
On Tue, 2015-06-16 at 18:53 +0800, 王阳 wrote:
> --8146-- univ_laog_do_GC exit seen 41326 next gc at cardinality 61780
> --8146-- univ_laog_do_GC enter cardinality 61780
> --8146-- VALGRIND INTERNAL ERROR: Valgrind received a signal 11
> (SIGSEGV) - exiting
> --8146-- si_code = 1; Faulting address: 0x8031AD000; sp: 0x80317db70
> valgrind: the 'impossible' happened:
> Killed by fatal signal
The above means Valgrind crashed, for an undetermined reason
(either an internal bug in Valgrind, or alternatively, a bug
in your application that corrupts the memory).
Is your application 'memcheck-clean' ?

Before fixing the above, what you can do then is to capture
the statistics before it crashes, using from a shell:
  vgdb v.info stats
and post the resulting output
(do the above when your application has already consumed a significant
memory, but before it has crashed :).

Philippe
|
From: Maran P. <mpa...@ca...> - 2015-06-16 18:40:14
|
On Monday 15 June 2015 08:11 PM, Matt Bennett wrote:
> ==3248== Memcheck, a memory error detector
> ==3248== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
> ==3248== Using Valgrind-3.11.0.SVN and LibVEX; rerun with -h for
> copyright info
> ==3248== Command: passwd
> ==3248==
> ==3248== Invalid write of size 8
> ==3248==    at 0x4003028: _dl_start_user (in /lib64/ld-2.16.so)
> ==3248==    by 0x4002FB8: __start (in /lib64/ld-2.16.so)
> ==3248==  Address 0xfff0017e8 is on thread 1's stack
> ==3248==  8 bytes below stack pointer
> ==3248==
This error sounds familiar to me and could most likely be a toolchain
issue. Please update your Octeon SDK to the 3.1.1 release and retry.

--Maran
|
From: 王阳 <412...@qq...> - 2015-06-16 10:53:54
|
Hi Philippe,
> The best way to understand where valgrind/helgrind spends
> memory is to use --stats=yes and post the result here.
> Really, it is really the best way :).
> If it takes too long to reach the end (or it crashes
> before producing the stats),
> you can from the command line use
>   vgdb v.info stats
> to get the needed info while helgrind is running.
I can not post the log directly, so I copied it and write it here, as
follows:

use --stats=yes
valgrind 3.10.1
......
--8146-- libhb: EvM GC: delete generations 129 and below, retaining 505211 entries
--8146-- libhb: EvM GC: delete generations 139 and below, retaining 526605 entries
--8146-- libhb: EvM GC: delete generations 149 and below, retaining 520572 entries
--8146-- libhb: EvM GC: delete generations 159 and below, retaining 504065 entries
--8146-- univ_laog_do_GC enter cardinality 31
--8146-- univ_laog_do_GC exit seen 24 next gc at cardinality 32
--8146-- univ_laog_do_GC enter cardinality 33
--8146-- univ_laog_do_GC exit seen 30 next gc at cardinality 33
....
--8146-- univ_laog_do_GC enter cardinality 61779
--8146-- univ_laog_do_GC exit seen 41326 next gc at cardinality 61780
--8146-- univ_laog_do_GC enter cardinality 61780
--8146-- VALGRIND INTERNAL ERROR: Valgrind received a signal 11 (SIGSEGV) - exiting
--8146-- si_code = 1; Faulting address: 0x8031AD000; sp: 0x80317db70
valgrind: the 'impossible' happened:
Killed by fatal signal

host stacktrace:
==8146==    at 0x3802D180: reclaimSuperblock (m_mallocfree.c:918)
==8146==    by 0x3802F2D0: deferred_reclaimSuperblock (m_mallocfree.c:1939)
==8146==    by 0x3800CA22: delete_WV (hg_wordset.c:204)
==8146==    by 0x3800EC54: vgHelgrind_dieWS (hg_wordset.c:487)
==8146==    by 0x38005816: univ_laog_do_GC (hg_main.c:3454)
|
From: Matt B. <Mat...@al...> - 2015-06-16 03:11:21
|
To get Valgrind working for my MIPS64 Cavium chip so far I have applied a
couple of patches.

1. Initially I was seeing the "Valgrind's memory management: out of
memory:" error. This was due to the PAGE_SIZE in valgrind being incorrect.
To support an 8kB page size I used the patch from here:
https://bugs.kde.org/show_bug.cgi?id=342356. After applying this,
'valgrind -h' worked correctly.

2. Then when I attempted to run valgrind with a program I would see this
error: "valgrind: m_coredump/coredump-elf.c:261 (fill_prstatus): Assertion
'sizeof(*regs) == sizeof(prs->pr_reg)' failed". I believe this is due to
Cavium specific instructions not being supported currently by Valgrind. I
applied the patch from here: https://bugs.kde.org/show_bug.cgi?id=341036

However, now the error no longer occurs but it looks to me like Valgrind is
just hanging, see below:

==3248== Memcheck, a memory error detector
==3248== Copyright (C) 2002-2013, and GNU GPL'd, by Julian Seward et al.
==3248== Using Valgrind-3.11.0.SVN and LibVEX; rerun with -h for copyright info
==3248== Command: passwd
==3248==
==3248== Invalid write of size 8
==3248==    at 0x4003028: _dl_start_user (in /lib64/ld-2.16.so)
==3248==    by 0x4002FB8: __start (in /lib64/ld-2.16.so)
==3248==  Address 0xfff0017e8 is on thread 1's stack
==3248==  8 bytes below stack pointer
==3248==

It stays like this forever. Has anyone encountered this before?
|
From: 王阳 <412...@qq...> - 2015-06-16 03:06:05
|
Hi Philippe,
> memory is to use --stats=yes and post the result here.
> Really, it is really the best way :).
I can not export the log from my PC, because the PC is isolated from the
internet. Can I take a photo of the log for you, or pick out some key
information from the log for you? By the way, the size of an email is
limited to 40KB, am I right? So posting a photo may not work out.
|
From: Matt B. <Mat...@al...> - 2015-06-16 01:32:40
|
The blame lies with me as I never checked what page sizes Valgrind
actually supports. Looking at configure.ac it only supports 4kB, 16kB
and 64kB page sizes. Therefore when I used '--with-pagesize=8' during
compilation it was defaulting to 4kB...
I have manually patched configure.ac to accept 8kB and set VKI_PAGE_SIZE
correctly (via VKI_PAGE_SHIFT and MIPS_PAGE_SHIFT).
I then modified the VKI_PAGE_SIZE and VKI_MAX_PAGE_SIZE assert
statements in coregrind/m_main.c to include 8kB.
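(For concreteness, a sketch of what the 8 kB setting amounts to, following
the VKI_PAGE_SIZE pattern quoted from vki-mips64-linux.h further down this
thread; the value 13 is just the shift for 8192 = 1 << 13, and the
configure.ac plumbing itself is not shown.)

    /* vki-mips64-linux.h pattern (see the grep output quoted below);
       an 8 kB kernel page corresponds to a page shift of 13. */
    #define VKI_PAGE_SHIFT    13                      /* 1 << 13 == 8192 */
    #define VKI_PAGE_SIZE     (1UL << VKI_PAGE_SHIFT)
    #define VKI_PAGE_MASK     (~(VKI_PAGE_SIZE-1))
    #define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE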
Valgrind now appears to run correctly (I can enter 'valgrind -h').
However, if I try to run a program under Valgrind I get another
(unrelated) issue that I will address via a separate thread.
Thanks for your help.
On Mon, 2015-06-15 at 07:01 -0700, John Reiser wrote:
> On 06/14/2015 10:22 PM, Matt Bennett wrote:
> > On Sun, 2015-06-14 at 22:01 -0700, John Reiser wrote:
> >>> mmap(0x802001000, 4194304, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, 0, 0) = -1 EINVAL (Invalid argument)
> >>
> >> What is SHMLBA for this configuration? [grep -sr SHMLBA /usr/include]
> >
>
> > /usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_SHMLBA 0x40000
> > /usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_SHMLBA 0x40000
>
> [Note that 0x40000 is 256KiB, which is rather large and likely to cause
> noticeable fragmentation of address space.]
>
> Notice that all those files are from /usr/include/valgrind/vki/.
> Your compilation environment apparently lacked the definitions
> that correspond to the target hardware [these for x86_64]:
> -----
> /usr/include/bits/shm.h:#define SHM_RND 020000 /* round attach address to SHMLBA */
> /usr/include/bits/shm.h:#define SHMLBA (__getpagesize ())
> /usr/include/linux/shm.h:#define SHM_RND 020000 /* round attach address to SHMLBA boundary */
> /usr/include/asm-generic/shmparam.h:#define SHMLBA PAGE_SIZE /* attach addr a multiple of this */
> -----
>
> >
> >> And on the same topic, what is the page size? [grep -sr PAGE_SIZE /usr/include]
> >
> > /usr/include/valgrind/pub_tool_libcbase.h:#define VG_IS_PAGE_ALIGNED(aaa_p) (0 == (((Addr)(aaa_p)) & ((Addr)(VKI_PAGE_SIZE-1))))
> > /usr/include/valgrind/pub_tool_libcbase.h:#define VG_PGROUNDDN(p) VG_ROUNDDN(p, VKI_PAGE_SIZE)
> > /usr/include/valgrind/pub_tool_libcbase.h:#define VG_PGROUNDUP(p) VG_ROUNDUP(p, VKI_PAGE_SIZE)
>
> > /usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_PAGE_SIZE (1UL << VKI_PAGE_SHIFT)
> > /usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_PAGE_MASK (~(VKI_PAGE_SIZE-1))
> > /usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
> > /usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_PAGE_SIZE (1UL << VKI_PAGE_SHIFT)
> > /usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_PAGE_MASK (~(VKI_PAGE_SIZE-1))
> > /usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
>
> Well, what is VKI_PAGE_SHIFT; and does one of {vki-mips32-linux.h, vki-mips64-linux.h}
> actually apply to the target machine?
>
> >
> >> Every successful mmap() return value happens to be a multiple of 0x2000 (8KiB);
> >> there are no odd multiples of 0x1000 (4KiB).
> >> [Some ARM machines have an SHMLBA of 0x4000 (16KiB) even though the page size is 0x1000 (4KiB).]
> >
> > This is on a MIPS chip, specifically Cavium (cnMIPS). The kernel page size is configured to 8kB and since I didn't configure the page size when
> > cross compiling valgrind it looks like it took the page size of my local machine (4kB). Looking at the upstream source code it appears that
> > 8kB page size isn't currently supported however there are some patches here that may make it work. https://bugs.kde.org/show_bug.cgi?id=342356
> >
> > Either way if this is indeed the issue here perhaps this is something that should be fixed upstream (supporting 8k and 32k pages)?
>
> Some of the blame belongs to you and the marketing/sales/support team
> that sold you the chips. Sometimes inexpensive hardware is *TOO* cheap!
> In this case it looks like Cavium (cnMIPS) is an architectural variant
> that does not have all the properties previously promised by MIPS.
>
> Also, successful cross-compiling requires setting the declarations
> and #includes correctly. Nothing in the valgrind source can prevent
> operator error in this department.
>
> Therefore: arrange for the proper definitions of VKI_PAGE_SHIFT and VKI_SHMLBA,
> and tell us the numerical values. (Write a test program which prints them!)
> If valgrind attempts MAP_FIXED at an address such as 0x802001000 which is not
> a multiple of the hardware page size (apparently 8KiB) then that is a clue
> that the compilation environment is not correct. If necessary, edit the
> definitions by hand _after_ performing "automatic configuration" and
> _before_ compiling.
>
> Look carefully at ARM:
> /usr/include/valgrind/vki/vki-arm-linux.h:#define VKI_SHMLBA (4 * VKI_PAGE_SIZE)
> which does work, and contrast with your case.
>
|
|
From: Philippe W. <phi...@sk...> - 2015-06-15 20:14:31
|
On Mon, 2015-06-15 at 19:23 +0800, 王阳 wrote:
> Hi Philippe,
> I read the valgrind user manual, and there are some hints which I guess
> are related to my problem, as follows.
> Myprog uses tons of mmap and a memory pool, and does not use free/delete
> to give memory back to the pool.
> My question is: if myprog uses a memory pool without free/delete and
> doesn't use VALGRIND_HG_CLEAN_MEMORY, will that lead to myprog doing
> mmap endlessly until it uses 64G of memory?
No, HG_CLEAN_MEMORY is not useful to use less memory.
It is only useful if you recycle memory, and you get false positive race
errors due to this recycling.

The best way to understand where valgrind/helgrind spends
memory is to use --stats=yes and post the result here.
Really, it is really the best way :).

If it takes too long to reach the end (or it crashes
before producing the stats),
you can from the command line use
  vgdb v.info stats
to get the needed info while helgrind is running.

Philippe
|
From: John R. <jr...@bi...> - 2015-06-15 14:01:10
|
On 06/14/2015 10:22 PM, Matt Bennett wrote:
> On Sun, 2015-06-14 at 22:01 -0700, John Reiser wrote:
>>> mmap(0x802001000, 4194304, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, 0, 0) = -1 EINVAL (Invalid argument)
>>
>> What is SHMLBA for this configuration? [grep -sr SHMLBA /usr/include]
>
> /usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_SHMLBA 0x40000
> /usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_SHMLBA 0x40000
[Note that 0x40000 is 256KiB, which is rather large and likely to cause
noticeable fragmentation of address space.]
Notice that all those files are from /usr/include/valgrind/vki/.
Your compilation environment apparently lacked the definitions
that correspond to the target hardware [these for x86_64]:
-----
/usr/include/bits/shm.h:#define SHM_RND 020000 /* round attach address to SHMLBA */
/usr/include/bits/shm.h:#define SHMLBA (__getpagesize ())
/usr/include/linux/shm.h:#define SHM_RND 020000 /* round attach address to SHMLBA boundary */
/usr/include/asm-generic/shmparam.h:#define SHMLBA PAGE_SIZE /* attach addr a multiple of this */
-----
>
>> And on the same topic, what is the page size? [grep -sr PAGE_SIZE /usr/include]
>
> /usr/include/valgrind/pub_tool_libcbase.h:#define VG_IS_PAGE_ALIGNED(aaa_p) (0 == (((Addr)(aaa_p)) & ((Addr)(VKI_PAGE_SIZE-1))))
> /usr/include/valgrind/pub_tool_libcbase.h:#define VG_PGROUNDDN(p) VG_ROUNDDN(p, VKI_PAGE_SIZE)
> /usr/include/valgrind/pub_tool_libcbase.h:#define VG_PGROUNDUP(p) VG_ROUNDUP(p, VKI_PAGE_SIZE)
> /usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_PAGE_SIZE (1UL << VKI_PAGE_SHIFT)
> /usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_PAGE_MASK (~(VKI_PAGE_SIZE-1))
> /usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
> /usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_PAGE_SIZE (1UL << VKI_PAGE_SHIFT)
> /usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_PAGE_MASK (~(VKI_PAGE_SIZE-1))
> /usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
Well, what is VKI_PAGE_SHIFT; and does one of {vki-mips32-linux.h, vki-mips64-linux.h}
actually apply to the target machine?
>
>> Every successful mmap() return value happens to be a multiple of 0x2000 (8KiB);
>> there are no odd multiples of 0x1000 (4KiB).
>> [Some ARM machines have an SHMLBA of 0x4000 (16KiB) even though the page size is 0x1000 (4KiB).]
>
> This is on a MIPS chip, specifically Cavium (cnMIPS). The kernel page size is configured to 8kB and since I didn't configure the page size when
> cross compiling valgrind it looks like it took the page size of my local machine (4kB). Looking at the upstream source code it appears that
> 8kB page size isn't currently supported however there are some patches here that may make it work. https://bugs.kde.org/show_bug.cgi?id=342356
>
> Either way if this is indeed the issue here perhaps this is something that should be fixed upstream (supporting 8k and 32k pages)?
Some of the blame belongs to you and the marketing/sales/support team
that sold you the chips. Sometimes inexpensive hardware is *TOO* cheap!
In this case it looks like Cavium (cnMIPS) is an architectural variant
that does not have all the properties previously promised by MIPS.
Also, successful cross-compiling requires setting the declarations
and #includes correctly. Nothing in the valgrind source can prevent
operator error in this department.
Therefore: arrange for the proper definitions of VKI_PAGE_SHIFT and VKI_SHMLBA,
and tell us the numerical values. (Write a test program which prints them!)
If valgrind attempts MAP_FIXED at an address such as 0x802001000 which is not
a multiple of the hardware page size (apparently 8KiB) then that is a clue
that the compilation environment is not correct. If necessary, edit the
definitions by hand _after_ performing "automatic configuration" and
_before_ compiling.
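(As a concrete version of the test program asked for above, a minimal
sketch; it prints the kernel's view via sysconf and sys/shm.h, since the
VKI_* values live only in valgrind's own headers.)

    #include <stdio.h>
    #include <unistd.h>    /* sysconf */
    #include <sys/shm.h>   /* SHMLBA on Linux */

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        printf("page size: %ld\n", page);
        printf("SHMLBA   : %ld\n", (long)SHMLBA);
        return 0;
    }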
Look carefully at ARM:
/usr/include/valgrind/vki/vki-arm-linux.h:#define VKI_SHMLBA (4 * VKI_PAGE_SIZE)
which does work, and contrast with your case.
--
|
|
From: 王阳 <412...@qq...> - 2015-06-15 11:23:32
|
Hi Philippe,
I read the valgrind user manual, and there are some hints which I guess are
related to my problem, as follows.
Myprog uses tons of mmap and a memory pool, and does not use free/delete to
give memory back to the pool.
My question is: if myprog uses a memory pool without free/delete and doesn't
use VALGRIND_HG_CLEAN_MEMORY, will that lead to myprog doing mmap endlessly
until it uses 64G of memory?
--------------------------------
Avoid memory recycling. If you can't avoid it, you must tell Helgrind what is going on via the
VALGRIND_HG_CLEAN_MEMORY client request (in helgrind.h).
Helgrind is aware of standard heap memory allocation and deallocation that occurs via malloc/free/new/delete
and from entry and exit of stack frames. In particular, when memory is deallocated via free, delete, or
function exit, Helgrind considers that memory clean, so when it is eventually reallocated, its history is irrelevant.
However, it is common practice to implement memory recycling schemes. In these, memory to be freed is not
handed to free/delete, but instead put into a pool of free buffers to be handed out again as required. The
problem is that Helgrind has no way to know that such memory is logically no longer in use, and its history is
irrelevant. Hence you must make that explicit, using the VALGRIND_HG_CLEAN_MEMORY client request to
specify the relevant address ranges. It’s easiest to put these requests into the pool manager code, and use them either when memory is returned to the pool, or is allocated from it.
---------------------------
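(For illustration, a minimal sketch of wiring this client request into a
recycling pool; the pool structure and pool_put function here are
hypothetical, not taken from myprog.)

    #include <stddef.h>
    #include <valgrind/helgrind.h>   /* VALGRIND_HG_CLEAN_MEMORY */

    /* Hypothetical recycling pool: buffers put back here are not free()d,
       so without the client request Helgrind would keep their stale
       access history and could report false races when they are reused. */
    struct pool { void **bufs; size_t n; size_t buf_size; };

    void pool_put(struct pool *p, void *buf)
    {
        /* Declare the buffer's history irrelevant before recycling it. */
        VALGRIND_HG_CLEAN_MEMORY(buf, p->buf_size);
        p->bufs[p->n++] = buf;
    }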
> --28682-- univ_laog_do_GC enter cardinality 9614
> --28682-- univ_laog_do_GC exit seen 6591 next gc at cardinality 9615
yes, one of the things that --stats=yes activates is some information
about laog GC.
>
> why?
> DRD does not have that problem, but DRD's messages are not as accurate
> as helgrind's.
helgrind data structures are different from drd's (a.o. to be able
to give precise information about race conditions).
Normally, --stats=yes should have produced statistics
at the end of your program (if your program exited due to
out of memory). If valgrind itself encountered the oom situation
then it should equally have produced statistics.
Can you post here these statistics ?
If they are not produced (unclear why), then you could instead
regularly run in another window
vgdb -c v.info stats -c v.info memory
to capture stats/memory during the run, before it reaches 64G.
Philippe |
|
From: Mayank K. (mayankum) <may...@ci...> - 2015-06-15 05:47:35
|
I already tried that, but the error changes to:

> error: missing binary operator before token (

Trying to debug what the issue is.

-----Original Message-----
From: Philippe Waroquiers [mailto:phi...@sk...]
Sent: Saturday, June 13, 2015 12:24 PM
To: Mayank Kumar (mayankum)
Cc: val...@li...
Subject: Re: [Valgrind-users] Detecting if a process is running under valgrind

On Fri, 2015-06-12 at 07:23 +0000, Mayank Kumar (mayankum) wrote:
> Hi Users
> I am running valgrind on windriver and trying to use the
> RUNNING_ON_VALGRIND macro to figure out if my process is running under
> valgrind.
> While compiling I get the error:
>
>   error: missing binary operator before token "__extension__"
>
> Is there a way to fix it. I am using gcc compiler 4.3.2
In valgrind.h, there is somewhere the following 3 lines:

#if !defined(__GNUC__)
#  define __extension__ /* */
#endif

You might try to copy the # define line just before including valgrind.h

Philippe
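(A minimal sketch of the workaround Philippe describes, with the # define
copied in just before the include; assumes valgrind.h is installed as
<valgrind/valgrind.h>.)

    #include <stdio.h>

    /* Workaround for non-GNU compilers: define __extension__ away
       before including valgrind.h. */
    #if !defined(__GNUC__)
    # define __extension__ /* */
    #endif
    #include <valgrind/valgrind.h>

    int main(void)
    {
        /* RUNNING_ON_VALGRIND is non-zero when run under valgrind. */
        if (RUNNING_ON_VALGRIND)
            printf("running under valgrind\n");
        else
            printf("running natively\n");
        return 0;
    }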
|
From: Matt B. <Mat...@al...> - 2015-06-15 05:22:39
|
On Sun, 2015-06-14 at 22:01 -0700, John Reiser wrote:
> > mmap(0x802001000, 4194304, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, 0, 0) = -1 EINVAL (Invalid argument)
>
> What is SHMLBA for this configuration? [grep -sr SHMLBA /usr/include]

/usr/include/valgrind/vki/vki-amd64-linux.h:#define VKI_SHMLBA VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-arm-linux.h:#define VKI_SHMLBA (4 * VKI_PAGE_SIZE)
/usr/include/valgrind/vki/vki-arm64-linux.h:#define VKI_SHMLBA (4 * VKI_PAGE_SIZE)
/usr/include/valgrind/vki/vki-darwin.h:#define VKI_SHMLBA SHMLBA
/usr/include/valgrind/vki/vki-linux.h:#define VKI_SHM_RND 020000 /* round attach address to SHMLBA boundary */
/usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_SHMLBA 0x40000
/usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_SHMLBA 0x40000
/usr/include/valgrind/vki/vki-ppc32-linux.h:#define VKI_SHMLBA VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-ppc64-linux.h:#define VKI_SHMLBA VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-s390x-linux.h:#define VKI_SHMLBA VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-x86-linux.h:#define VKI_SHMLBA VKI_PAGE_SIZE

> And on the same topic, what is the page size? [grep -sr PAGE_SIZE /usr/include]

/usr/include/valgrind/pub_tool_libcbase.h:#define VG_IS_PAGE_ALIGNED(aaa_p) (0 == (((Addr)(aaa_p)) & ((Addr)(VKI_PAGE_SIZE-1))))
/usr/include/valgrind/pub_tool_libcbase.h:#define VG_PGROUNDDN(p) VG_ROUNDDN(p, VKI_PAGE_SIZE)
/usr/include/valgrind/pub_tool_libcbase.h:#define VG_PGROUNDUP(p) VG_ROUNDUP(p, VKI_PAGE_SIZE)
/usr/include/valgrind/vki/vki-amd64-linux.h:#define VKI_PAGE_SIZE (1UL << VKI_PAGE_SHIFT)
/usr/include/valgrind/vki/vki-amd64-linux.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-amd64-linux.h:#define VKI_SHMLBA VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-arm-linux.h:#define VKI_PAGE_SIZE (1UL << VKI_PAGE_SHIFT)
/usr/include/valgrind/vki/vki-arm-linux.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-arm-linux.h:#define VKI_SHMLBA (4 * VKI_PAGE_SIZE)
/usr/include/valgrind/vki/vki-arm64-linux.h:extern UWord VKI_PAGE_SIZE;
/usr/include/valgrind/vki/vki-arm64-linux.h:#define VKI_MAX_PAGE_SIZE (1UL << VKI_MAX_PAGE_SHIFT)
/usr/include/valgrind/vki/vki-arm64-linux.h:// shared memory with 64 bit processes, VKI_PAGE_SIZE is good
/usr/include/valgrind/vki/vki-arm64-linux.h:// the old-style 16k value (4 * VKI_PAGE_SIZE) to be safe.
/usr/include/valgrind/vki/vki-arm64-linux.h:#define VKI_SHMLBA (4 * VKI_PAGE_SIZE)
/usr/include/valgrind/vki/vki-darwin.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-darwin.h:#define VKI_PAGE_SIZE PAGE_SIZE
/usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_PAGE_SIZE (1UL << VKI_PAGE_SHIFT)
/usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_PAGE_MASK (~(VKI_PAGE_SIZE-1))
/usr/include/valgrind/vki/vki-mips32-linux.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_PAGE_SIZE (1UL << VKI_PAGE_SHIFT)
/usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_PAGE_MASK (~(VKI_PAGE_SIZE-1))
/usr/include/valgrind/vki/vki-mips64-linux.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-ppc32-linux.h:extern UWord VKI_PAGE_SIZE;
/usr/include/valgrind/vki/vki-ppc32-linux.h:#define VKI_MAX_PAGE_SIZE (1UL << VKI_MAX_PAGE_SHIFT)
/usr/include/valgrind/vki/vki-ppc32-linux.h:#define VKI_SHMLBA VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-ppc64-linux.h:extern UWord VKI_PAGE_SIZE;
/usr/include/valgrind/vki/vki-ppc64-linux.h:#define VKI_MAX_PAGE_SIZE (1UL << VKI_MAX_PAGE_SHIFT)
/usr/include/valgrind/vki/vki-ppc64-linux.h:#define VKI_SHMLBA VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-s390x-linux.h:#define VKI_PAGE_SIZE (1UL << VKI_PAGE_SHIFT)
/usr/include/valgrind/vki/vki-s390x-linux.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-s390x-linux.h:#define VKI_SHMLBA VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-x86-linux.h:#define VKI_PAGE_SIZE (1UL << VKI_PAGE_SHIFT)
/usr/include/valgrind/vki/vki-x86-linux.h:#define VKI_MAX_PAGE_SIZE VKI_PAGE_SIZE
/usr/include/valgrind/vki/vki-x86-linux.h:#define VKI_SHMLBA VKI_PAGE_SIZE

> Every successful mmap() return value happens to be a multiple of 0x2000 (8KiB);
> there are no odd multiples of 0x1000 (4KiB).
> [Some ARM machines have an SHMLBA of 0x4000 (16KiB) even though the page size is 0x1000 (4KiB).]

This is on a MIPS chip, specifically Cavium (cnMIPS). The kernel page size
is configured to 8kB, and since I didn't configure the page size when cross
compiling valgrind it looks like it took the page size of my local machine
(4kB). Looking at the upstream source code it appears that an 8kB page size
isn't currently supported, however there are some patches here that may
make it work: https://bugs.kde.org/show_bug.cgi?id=342356

Either way, if this is indeed the issue here, perhaps this is something
that should be fixed upstream (supporting 8k and 32k pages)?

> Construct a standalone test program whose sole action is that call to mmap().
> Does the mmap() succeed there?
|
From: John R. <jr...@bi...> - 2015-06-15 05:01:27
|
> mmap(0x802001000, 4194304, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, 0, 0) = -1 EINVAL (Invalid argument)

What is SHMLBA for this configuration? [grep -sr SHMLBA /usr/include]

And on the same topic, what is the page size? [grep -sr PAGE_SIZE /usr/include]

Every successful mmap() return value happens to be a multiple of 0x2000 (8KiB);
there are no odd multiples of 0x1000 (4KiB).
[Some ARM machines have an SHMLBA of 0x4000 (16KiB) even though the page size is 0x1000 (4KiB).]

Construct a standalone test program whose sole action is that call to mmap().
Does the mmap() succeed there?

--
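(A sketch of the standalone test John suggests, reproducing the failing
call; fd is -1 here, the portable spelling for MAP_ANONYMOUS. Note that
0x802001000 is 4 KiB-aligned but not 8 KiB-aligned, which matches John's
alignment observation above.)

    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        /* The exact request that strace showed failing with EINVAL. */
        void *p = mmap((void *)0x802001000, 4194304,
                       PROT_READ | PROT_WRITE | PROT_EXEC,
                       MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            perror("mmap");
        else
            printf("mapped at %p\n", p);
        return 0;
    }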
|
From: Matt B. <Mat...@al...> - 2015-06-15 01:44:31
|
Hello,

This is similar to the issue seen here
(http://valgrind.10908.n7.nabble.com/valgrind-out-of-memory-error-td54401.html)
however I have compiled with the n64 ABI and therefore should be supported.

When I attempt to use valgrind I get the following error:

==5133== Valgrind's memory management: out of memory:
==5133==    newSuperblock's request for 4194304 bytes failed.
==5133==    27607040 bytes have already been allocated.
==5133== Valgrind cannot continue. Sorry.

I have run valgrind using strace (entire trace is pasted at the end of the
email) and I can see the execution tripping up on a call to mmap:

mmap(0x802001000, 4194304, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, 0, 0) = -1 EINVAL (Invalid argument)

At this stage I am not sure what is causing this problem. I have attached
'ulimit -a' and 'cat /proc/meminfo' below.

ulimit -a
-f: file size (blocks)        unlimited
-t: cpu time (seconds)        unlimited
-d: data seg size (kb)        unlimited
-s: stack size (kb)           8192
-c: core file size (blocks)   unlimited
-m: resident set size (kb)    unlimited
-l: locked memory (kb)        64
-p: processes                 2987
-n: file descriptors          1024
-v: address space (kb)        unlimited
-w: locks                     unlimited
-e: scheduling priority       0
-r: real-time priority        0

cat /proc/meminfo
MemTotal:         825776 kB
MemFree:          608024 kB
MemAvailable:     677880 kB
Buffers:           16936 kB
Cached:            65360 kB
SwapCached:            0 kB
Active:           140392 kB
Inactive:          53720 kB
Active(anon):     112112 kB
Inactive(anon):     5216 kB
Active(file):      28280 kB
Inactive(file):    48504 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:        111888 kB
Mapped:            41928 kB
Shmem:              5512 kB
Slab:              25256 kB
SReclaimable:       4288 kB
SUnreclaim:        20968 kB
KernelStack:        2512 kB
PageTables:         3336 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:      412888 kB
Committed_AS:     685024 kB
VmallocTotal:    8585740272 kB
VmallocUsed:        1152 kB
VmallocChunk:    8585732560 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       8192 kB

Thanks,
Matt

----------------------------
strace -f valgrind -v -v -v -d -d -d

execve("/usr/bin/valgrind", ["valgrind", "-v", "-v", "-v", "-d", "-d", "-d"], [/* 23 vars */]) = 0
brk(0) = 0x1c292000
uname({sys="Linux", node="awplus", ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xffe8cde000
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=27905, ...}) = 0
mmap(NULL, 27905, PROT_READ, MAP_PRIVATE, 3, 0) = 0xffe8cd6000
close(3) = 0
open("/lib/libpthread.so.0", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\2\1\0\0\0\0\0\0\0\0\0\0\3\0\10\0\0\0\1\0\0\0\0\0\0\233\320"..., 832) = 832
lseek(3, 9208, SEEK_SET) = 9208
read(3, "\0\0\0\4\0\0\0\20\0\0\0\1GNU\0\0\0\0\0\0\0\0\2\0\0\0\6\0\0\0\f", 32) = 32
fstat(3, {st_mode=S_IFREG|0755, st_size=1056879, ...}) = 0
mmap(NULL, 214880, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xffe8c6c000
mprotect(0xffe8c8a000, 65536, PROT_NONE) = 0
mmap(0xffe8c9a000, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x1e000) = 0xffe8c9a000
mmap(0xffe8c9e000, 10080, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xffe8c9e000
close(3) = 0
open("/lib/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\2\1\0\0\0\0\0\0\0\0\0\0\3\0\10\0\0\0\1\0\0\0\0\0\3\262\4"..., 832) = 832
lseek(3, 59824, SEEK_SET) = 59824
read(3, "\0\0\0\4\0\0\0\20\0\0\0\1GNU\0\0\0\0\0\0\0\0\2\0\0\0\6\0\0\0\f", 32) = 32
fstat(3, {st_mode=S_IFREG|0755, st_size=9287062, ...}) = 0
mmap(NULL, 1671152, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xffe8ad4000
mprotect(0xffe8c46000, 98304, PROT_NONE) = 0
mmap(0xffe8c5e000, 49152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x17a000) = 0xffe8c5e000
mmap(0xffe8c6a000, 8176, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xffe8c6a000
close(3) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xffe8cd4000
set_thread_area(0xffe8cdb700) = 0
mprotect(0xffe8c5e000, 24576, PROT_READ) = 0
mprotect(0xffe8c9a000, 8192, PROT_READ) = 0
mprotect(0xffe8ce0000, 8192, PROT_READ) = 0
munmap(0xffe8cd6000, 27905) = 0
set_tid_address(0xffe8cd40d0) = 4948
set_robust_list(0xffe8cd40e0, 24) = 0
futex(0xffffe9abc8, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0xffffe9abc8, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 1, NULL, ffe8cdb700) = -1 EAGAIN (Resource temporarily unavailable)
rt_sigaction(SIGRT_0, {0x800000000, [RT_37 RT_40 RT_41 RT_46 RT_52 RT_55 RT_56 RT_60 RT_62 RT_63 RT_64 RT_65 RT_66 RT_67 RT_68 RT_69 RT_70 RT_71 RT_72], SA_STACK|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|SA_NOCLDWAIT|0xc65700}, NULL, 16) = 0
rt_sigaction(SIGRT_1, {0x1000000800000000, [RT_37 RT_40 RT_41 RT_46 RT_52 RT_55 RT_56 RT_60 RT_62 RT_63 RT_64 RT_65 RT_66 RT_67 RT_68 RT_69 RT_70 RT_71 RT_72], SA_STACK|SA_INTERRUPT|SA_NODEFER|SA_RESETHAND|SA_SIGINFO|SA_NOCLDWAIT|0xc655d0}, NULL, 16) = 0
rt_sigprocmask(SIG_UNBLOCK, [RT_0 RT_1], NULL, 16) = 0
getrlimit(RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}) = 0
getpid() = 4948
write(2, "--4948:1:debuglog DebugLog syste"..., 80--4948:1:debuglog DebugLog system started by Stage 1, level 3 logging requested ) = 80
getpid() = 4948
write(2, "--4948:1:launcher no tool reques"..., 62--4948:1:launcher no tool requested, defaulting to 'memcheck' ) = 62
getpid() = 4948
write(2, "--4948:1:launcher no client spec"..., 77--4948:1:launcher no client specified, defaulting platform to 'mips64-linux' ) = 77
readlink("/proc/self/exe", "/usr/bin/valgrind", 4096) = 17
brk(0) = 0x1c292000
brk(0x1c2b4000) = 0x1c2b4000
getpid() = 4948
write(2, "--4948:1:launcher launching /usr"..., 68--4948:1:launcher launching /usr/lib/valgrind/memcheck-mips64-linux ) = 68
execve("/usr/lib/valgrind/memcheck-mips64-linux", ["valgrind", "-v", "-v", "-v", "-d", "-d", "-d"], [/* 24 vars */]) = 0
getpid() = 4948
write(2, "--4948:1:debuglog DebugLog syste"..., 87--4948:1:debuglog DebugLog system started by Stage 2 (main), level 3 logging requested ) = 87
getpid() = 4948
write(2, "--4948:1:main Welcome to Val"..., 67--4948:1:main Welcome to Valgrind version 3.10.1 debug logging ) = 67
getpid() = 4948
write(2, "--4948:1:main Checking curre"..., 54--4948:1:main Checking current stack is plausible ) = 54
getpid() = 4948
write(2, "--4948:1:main Checking initi"..., 51--4948:1:main Checking initial stack was noted ) = 51
getpid() = 4948
write(2, "--4948:1:main Starting the a"..., 53--4948:1:main Starting the address space manager ) = 53
getpid() = 4948
write(2, "--4948:2:aspacem sp_a"..., 68--4948:2:aspacem sp_at_startup = 0xffff907b80 (supplied) ) = 68
getpid() = 4948
write(2, "--4948:2:aspacem "..., 68--4948:2:aspacem minAddr = 0x0004000000 (computed) ) = 68
getpid() = 4948
write(2, "--4948:2:aspacem "..., 68--4948:2:aspacem maxAddr = 0x0fffffffff (computed) ) = 68
getpid() = 4948
write(2, "--4948:2:aspacem "..., 68--4948:2:aspacem cStart = 0x0004000000 (computed) ) = 68
getpid() = 4948
write(2, "--4948:2:aspacem "..., 68--4948:2:aspacem vStart = 0x0802000000 (computed) ) = 68
getpid() = 4948
write(2, "--4948:2:aspacem suggested_cl"..., 68--4948:2:aspacem suggested_clstack_end = 0x0fff000fff (computed) ) = 68
getpid() = 4948
write(2, "--4948:2:aspacem <<< SHOW_SEG"..., 79--4948:2:aspacem <<< SHOW_SEGMENTS: Initial layout (5 segments, 0 segnames) ) = 79
getpid() = 4948
write(2, "--4948:2:aspacem 0: RSVN 00"..., 74--4948:2:aspacem 0: RSVN 0000000000-0003ffffff 64m ----- SmFixed ) = 74
getpid() = 4948
write(2, "--4948:2:aspacem 1: 00"..., 60--4948:2:aspacem 1: 0004000000-0801ffffff 32736m ) = 60
getpid() = 4948
write(2, "--4948:2:aspacem 2: RSVN 08"..., 74--4948:2:aspacem 2: RSVN 0802000000-0802000fff 4096 ----- SmFixed ) = 74
getpid() = 4948
write(2, "--4948:2:aspacem 3: 08"..., 60--4948:2:aspacem 3: 0802001000-0fffffffff 32735m ) = 60
getpid() = 4948
write(2, "--4948:2:aspacem 4: RSVN 10"..., 80--4948:2:aspacem 4: RSVN 1000000000-ffffffffffffffff 16383e ----- SmFixed ) = 80
getpid() = 4948
write(2, "--4948:2:aspacem >>>\n", 24--4948:2:aspacem >>> ) = 24
getpid() = 4948
write(2, "--4948:2:aspacem Reading /pro"..., 44--4948:2:aspacem Reading /proc/self/maps ) = 44
open("/proc/self/maps", O_RDONLY) = 3
read(3, "38000000-384bc000 r-xp 00010000 "..., 100000) = 441
read(3, "", 99559) = 0
close(3) = 0
getpid() = 4948
write(2, "--4948:2:aspacem <<< SHOW_SEG"..., 90--4948:2:aspacem <<< SHOW_SEGMENTS: With contents of /proc/self/maps (14 segments, 1 se) = 90
write(2, "gnames)\n", 8gnames) ) = 8
getpid() = 4948
write(2, "--4948:2:aspacem ( 0) /usr/li"..., 67--4948:2:aspacem ( 0) /usr/lib64/valgrind/memcheck-mips64-linux ) = 67
getpid() = 4948
write(2, "--4948:2:aspacem 0: RSVN 00"..., 74--4948:2:aspacem 0: RSVN 0000000000-0003ffffff 64m ----- SmFixed ) = 74
getpid() = 4948
write(2, "--4948:2:aspacem 1: 00"..., 60--4948:2:aspacem 1: 0004000000-0037ffffff 832m ) = 60
getpid() = 4948
write(2, "--4948:2:aspacem 2: FILE 00"..., 90--4948:2:aspacem 2: FILE 0038000000-00384bbfff 4964352 r-x-- d=0x100 i=1974 o=6553) = 90
write(2, "6 (0)\n", 86 (0) ) = 8
getpid() = 4948
write(2, "--4948:2:aspacem 3: 00"..., 60--4948:2:aspacem 3: 00384bc000-00384c9fff 57344 ) = 60
getpid() = 4948
write(2, "--4948:2:aspacem 4: FILE 00"..., 90--4948:2:aspacem 4: FILE 00384ca000-00384ebfff 139264 rw--- d=0x100 i=1974 o=5021) = 90
write(2, "696 (0)\n", 8696 (0) ) = 8
getpid() = 4948
write(2, "--4948:2:aspacem 5: ANON 00"..., 66--4948:2:aspacem 5: ANON 00384ec000-0039f1bfff 26m rwx-- ) = 66
getpid() = 4948
write(2, "--4948:2:aspacem 6: 00"..., 60--4948:2:aspacem 6: 0039f1c000-0801ffffff 31872m ) = 60
getpid() = 4948
write(2, "--4948:2:aspacem 7: RSVN 08"..., 74--4948:2:aspacem 7: RSVN 0802000000-0802000fff 4096 ----- SmFixed ) = 74
getpid() = 4948
write(2, "--4948:2:aspacem 8: 08"..., 60--4948:2:aspacem 8: 0802001000-0fffffffff 32735m ) = 60
getpid() = 4948
write(2, "--4948:2:aspacem 9: RSVN 10"..., 74--4948:2:aspacem 9: RSVN 1000000000-ffff8e7fff 983032m ----- SmFixed ) = 74
getpid() = 4948
write(2, "--4948:2:aspacem 10: ANON ff"..., 66--4948:2:aspacem 10: ANON ffff8e8000-ffff909fff 139264 rwx-- ) = 66
getpid() = 4948
write(2, "--4948:2:aspacem 11: RSVN ff"..., 74--4948:2:aspacem 11: RSVN ffff90a000-ffffffdfff 7290880 ----- SmFixed ) = 74
getpid() = 4948
write(2, "--4948:2:aspacem 12: ANON ff"..., 66--4948:2:aspacem 12: ANON ffffffe000-ffffffffff 8192 r-x-- ) = 66
getpid() = 4948
write(2, "--4948:2:aspacem 13: RSVN 10"..., 81--4948:2:aspacem 13: RSVN 10000000000-ffffffffffffffff 16383e ----- SmFixed ) = 81
getpid() = 4948
write(2, "--4948:2:aspacem >>>\n", 24--4948:2:aspacem >>> ) = 24
getpid() = 4948
write(2, "--4948:1:main Address space "..., 51--4948:1:main Address space manager is running ) = 51
getpid() = 4948
write(2, "--4948:1:main Starting the d"..., 54--4948:1:main Starting the dynamic memory manager ) = 54
mmap(0x802001000, 4194304, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, 0, 0) = -1 EINVAL (Invalid argument)
getpid() = 4948
write(2, "--4948:0:aspacem <<< SHOW_SEGME"..., 77--4948:0:aspacem <<< SHOW_SEGMENTS: out_of_memory (14 segments, 1 segnames) ) = 77
getpid() = 4948
write(2, "--4948:0:aspacem ( 0) /usr/lib6"..., 65--4948:0:aspacem ( 0) /usr/lib64/valgrind/memcheck-mips64-linux ) = 65
getpid() = 4948
write(2, "--4948:0:aspacem 0: RSVN 0000"..., 72--4948:0:aspacem 0: RSVN 0000000000-0003ffffff 64m ----- SmFixed ) = 72
getpid() = 4948
write(2, "--4948:0:aspacem 1: 0004"..., 58--4948:0:aspacem 1: 0004000000-0037ffffff 832m ) = 58
getpid() = 4948
write(2, "--4948:0:aspacem 2: FILE 0038"..., 90--4948:0:aspacem 2: FILE 0038000000-00384bbfff 4964352 r-x-- d=0x100 i=1974 o=65536 ) = 90
write(2, " (0)\n", 6 (0) ) = 6
getpid() = 4948
write(2, "--4948:0:aspacem 3: 0038"..., 58--4948:0:aspacem 3: 00384bc000-00384c9fff 57344 ) = 58
getpid() = 4948
write(2, "--4948:0:aspacem 4: FILE 0038"..., 90--4948:0:aspacem 4: FILE 00384ca000-00384ebfff 139264 rw--- d=0x100 i=1974 o=502169) = 90
write(2, "6 (0)\n", 66 (0) ) = 6
getpid() = 4948
write(2, "--4948:0:aspacem 5: ANON 0038"..., 64--4948:0:aspacem 5: ANON 00384ec000-0039f1bfff 26m rwx-- ) = 64
getpid() = 4948
write(2, "--4948:0:aspacem 6: 0039"..., 58--4948:0:aspacem 6: 0039f1c000-0801ffffff 31872m ) = 58
getpid() = 4948
write(2, "--4948:0:aspacem 7: RSVN 0802"..., 72--4948:0:aspacem 7: RSVN 0802000000-0802000fff 4096 ----- SmFixed ) = 72
getpid() = 4948
write(2, "--4948:0:aspacem 8: 0802"..., 58--4948:0:aspacem 8: 0802001000-0fffffffff 32735m ) = 58
getpid() = 4948
write(2, "--4948:0:aspacem 9: RSVN 1000"..., 72--4948:0:aspacem 9: RSVN 1000000000-ffff8e7fff 983032m ----- SmFixed ) = 72
getpid() = 4948
write(2, "--4948:0:aspacem 10: ANON ffff"..., 64--4948:0:aspacem 10: ANON ffff8e8000-ffff909fff 139264 rwx-- ) = 64
getpid() = 4948
write(2, "--4948:0:aspacem 11: RSVN ffff"..., 72--4948:0:aspacem 11: RSVN ffff90a000-ffffffdfff 7290880 ----- SmFixed ) = 72
getpid() = 4948
write(2, "--4948:0:aspacem 12: ANON ffff"..., 64--4948:0:aspacem 12: ANON ffffffe000-ffffffffff 8192 r-x-- ) = 64
getpid() = 4948
write(2, "--4948:0:aspacem 13: RSVN 1000"..., 79--4948:0:aspacem 13: RSVN 10000000000-ffffffffffffffff 16383e ----- SmFixed ) = 79
getpid() = 4948
write(2, "--4948:0:aspacem >>>\n", 22--4948:0:aspacem >>> ) = 22
getpid() = 4948
write(2, "--4948-- core : 0/ "..., 188--4948-- core : 0/ 0 max/curr mmap'd, 0/0 unsplit/split sb unmmap'd, 0/ 0 max/curr, 0/ 0 totalloc-blocks/bytes, 0 searches 8 rzB ) = 188
getpid() = 4948
write(2, "--4948-- dinfo : 0/ "..., 188--4948-- dinfo : 0/ 0 max/curr mmap'd, 0/0 unsplit/split sb unmmap'd, 0/ 0 max/curr, 0/ 0 totalloc-blocks/bytes, 0 searches 8 rzB ) = 188
getpid() = 4948
write(2, "--4948-- (null) : 0/ "..., 188--4948-- (null) : 0/ 0 max/curr mmap'd, 0/0 unsplit/split sb unmmap'd, 0/ 0 max/curr, 0/ 0 totalloc-blocks/bytes, 0 searches 0 rzB ) = 188
getpid() = 4948
write(2, "--4948-- demangle: 0/ "..., 188--4948-- demangle: 0/ 0 max/curr mmap'd, 0/0 unsplit/split sb unmmap'd, 0/ 0 max/curr, 0/ 0 totalloc-blocks/bytes, 0 searches 8 rzB ) = 188
getpid() = 4948
write(2, "--4948-- ttaux : 0/ "..., 188--4948-- ttaux : 0/ 0 max/curr mmap'd, 0/0 unsplit/split sb unmmap'd, 0/ 0 max/curr, 0/ 0 totalloc-blocks/bytes, 0 searches 8 rzB ) = 188
getpid() = 4948
write(2, "--4948-- translate: f"..., 70--4948-- translate: fast SP updates identified: 0 ( --%) ) = 70
getpid() = 4948
write(2, "--4948-- translate: generic_kn"..., 70--4948-- translate: generic_known SP updates identified: 0 ( --%) ) = 70
getpid() = 4948
write(2, "--4948-- translate: generic_unkn"..., 70--4948-- translate: generic_unknown SP updates identified: 0 ( --%) ) = 70
getpid() = 4948
write(2, "--4948-- tt/tc: 0 tt lookups"..., 52--4948-- tt/tc: 0 tt lookups requiring 0 probes ) = 52
getpid() = 4948
write(2, "--4948-- tt/tc: 0 fast-cache"..., 52--4948-- tt/tc: 0 fast-cache updates, 0 flushes ) = 52
getpid() = 4948
write(2, "--4948-- transtab: new 0"..., 62--4948-- transtab: new 0 (0 -> 0; ratio 0:10) [0 scs] ) = 62
getpid() = 4948
write(2, "--4948-- transtab: dumped 0"..., 43--4948-- transtab: dumped 0 (0 -> ??) ) = 43
getpid() = 4948
write(2, "--4948-- transtab: discarded 0"..., 43--4948-- transtab: discarded 0 (0 -> ??) ) = 43
getpid() = 4948
write(2, "--4948-- scheduler: 0 event chec"..., 36--4948-- scheduler: 0 event checks. ) = 36
getpid() = 4948
write(2, "--4948-- scheduler: 0 indir tran"..., 57--4948-- scheduler: 0 indir transfers, 0 misses (1 in 0) ) = 57
getpid() = 4948
write(2, "--4948-- scheduler: 0/0 major/mi"..., 50--4948-- scheduler: 0/0 major/minor sched events. ) = 50
getpid() = 4948
write(2, "--4948-- sanity: 0 cheap, 0 e"..., 49--4948-- sanity: 0 cheap, 0 expensive checks. ) = 49
mmap(0x802001000, 4194304, PROT_READ|PROT_WRITE|PROT_EXEC, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, 0, 0) = -1 EINVAL (Invalid argument)
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
write(2, "==4948== \n==4948== Valgrind'"..., 512==4948==
==4948== Valgrind's memory management: out of memory:
==4948== newSuperblock's request for 4194304 bytes failed.
==4948== 27607040 bytes have already been allocated.
==4948== Valgrind cannot continue. Sorry.
==4948==
==4948== There are several possible reasons for this.
==4948== - You have some kind of memory limit in place. Look at the
==4948== output of 'ulimit -a'. Is there a limit on the size of
==4948== virtual memory or address space?
==4948==
) = 512
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
write(2, " - You have run out of swap spa"..., 512 - You have run out of swap space.
==4948== - Valgrind has a bug. If you think this is the case or you are
==4948== not sure, please let us know and we'll try to fix it.
==4948== Please note that programs can take substantially more memory than
==4948== normal when running under Valgrind tools, eg. up to twice or
==4948== more, depending on the tool. On a 64-bit machine, Valgrind
==4948== should be able to make use of up 32GB memory. On a 32-bit
==4948== machine, Valgrind sho
) = 512
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
getpid() = 4948
write(2, "uld be able to use all the memor"..., 301uld be able to use all the memory available
==4948== to a single process, up to 4GB if that's how you have your
==4948== kernel configured. Most 32-bit Linux setups allow a maximum of
==4948== 3GB per process.
==4948==
==4948== Whatever the reason, Valgrind cannot continue. Sorry.
) = 301
exit_group(1) = ?
+++ exited with 1 +++
From: Philippe W. <phi...@sk...> - 2015-06-14 09:44:39
Thanks for the below info.
Can you post the full stack traces of a mismatch error
(including program counters, etc.)?
Also, can you confirm whether the mismatch errors
appeared with the switch to Valgrind 3.10.1,
or with the switch to gcc 4.8.3?
Thanks
Philippe
On Thu, 2015-06-11 at 23:29 +0100, David Carter wrote:
> For the alloc'd line, Valgrind points at new_allocator.h line 110; the
> de-alloc line is pointed at new_allocator.h line 104. The first looks
> like this in the GCC sources:
>
>
> return static_cast<_Tp*>(::operator new(__n * sizeof(_Tp)));
>
>
> the second looks like this:
>
>
> deallocate(pointer __p, size_type)
> { ::operator delete(__p); }
>
>
> I don't see a mismatch there.
>
>
> The particular use-case I'm looking at is in string_buf, but there are
> literally thousands of others.
>
>
> Regards,
> David.
>
> On Thu, Jun 11, 2015 at 6:58 PM, Philippe Waroquiers
> <phi...@sk...> wrote:
> On Thu, 2015-06-11 at 12:14 +0200, Julian Seward wrote:
> > > I've recently switched over to Valgrind 3.10.1 and I'm now seeing
> > > vast numbers of 'mismatched free/delete' type messages all coming
> > > from std::string shipped with GCC 4.8.3.
> Do you see these 'mismatched free/delete' only with Valgrind 3.10.1?
> I.e. has it appeared with Valgrind 3.10.1, or has it appeared
> with a switch to gcc 4.8.3?
> >
> > I've seen this a lot in the past year when working with Firefox. I believe
> > that it is due to what you could call "differential inlining". I'm not 100%
> > sure of the details, but the general problem is like this:
> >
> > Memcheck intercepts malloc, free, new and delete. It expects memory
> > allocated by malloc to be freed by free and memory allocated by new
> > to be freed by delete (and the same for new[] and delete[]).
> >
> > Imagine now that some C++ header file contains something like this
> >
> > operator new ( size_t n ) { return malloc(n); }
> > operator delete ( void* p ) { free(p); }
> >
> > If g++ decides to inline new but not delete, or the other way round, then
> > the code still works (of course) but from Memcheck's point of view there is
> > a problem. That's because it can't intercept the inlined function -- there
> > is no single piece of code to intercept. So what it ends up seeing is,
> > for example, memory allocated by new (because that isn't inlined) but freed
> > by free (because delete got inlined). So it complains -- incorrectly.
> If the problem is due to "differential inlining", then that should
> be visible in the stack traces of either the "new" or the "delete":
> unless you specify --read-inline-info=no, valgrind stack traces will
> show both the "new" and the "malloc" (or the "delete" and the "free")
> in the stack trace, but with an identical program counter.
>
> >
> > I couldn't figure out any sane way to work around this, so I added a new
> > flag,
>
>
> > --show-mismatched-frees=no|yes [default=yes], to the trunk. This
> > disables allocator-mismatch checking and gets rid of the noise, but it of
> > course also gets rid of the ability to detect genuine mismatch errors.
> What might be done (assuming that the inline info properly shows the
> call to new/delete) is to have an additional check before reporting
> a 'mismatched' error: if the stacktrace contains an inlined call to
> the expected "freeing function", then do not report the error.
> E.g., we might have a third value, --show-mismatched-frees=checkinline,
> which would activate this verification.
> That should remove the false positive errors while keeping the
> true positives. The price to pay will be a translation of
> program counters to function names and a string comparison for
> all cases of such "differential inlining" (and that might have to
> be done both for the alloc and the free stacktrace).
>
> Philippe
>
>
>
>
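For reference, the kind of genuine mismatch this check is meant to catch can be reproduced with a minimal standalone program (an illustrative sketch, not code from this thread):

    // mismatch.cpp -- build with: g++ -g mismatch.cpp -o mismatch
    // then run: valgrind ./mismatch
    #include <cstdlib>

    int main() {
        int* p = new int[4];   // allocated with new[]
        std::free(p);          // freed with free(): Memcheck reports
                               // "Mismatched free() / delete / delete []"
        return 0;
    }

With differential inlining, the same report can appear even when the source pairs new with delete correctly, which is what makes those reports false positives.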
From: John R. <jr...@bi...> - 2015-06-13 21:07:46
> ==31878== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
> ==31878== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info

That version of valgrind is at least 3.5 years old. Upgrade to the current
version 3.10.1. Running an old version gets no sympathy.

> ==31878== Command: /home/caizb/bind/bind/bind-9.9.2-P1/bin/named/.libs/lt-named
> -u bind -t /var/lib/named
> ==31878==
> ==31938== error 2 No such file or directory
> ==31938== mknod /tmp/vgdb-pipe-from-vgdb-to-31938-by-root-on-???
> ==31938== valgrind: fatal error: vgdb FIFOs cannot be created.

Can you run that failing command, such as:
    mknod /tmp/foo-bar-baz p
to make a pipe in /tmp?

Does running the top-level command
    bin/named/named -u bind -t /var/lib/named
by itself work, without valgrind?

Add the valgrind command-line parameter "--trace-syscalls=yes".
Look at the output from running
    strace -f bin/named/named -u bind -t /var/lib/named
and compare it with valgrind's --trace-syscalls=yes.

Look at the output from running
    strace -f valgrind --trace-children=yes bin/named/named -u bind -t /var/lib/named
and compare to the output from omitting the valgrind.

--
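If a shell in the failing environment is awkward to get at, the same FIFO check can be done from a tiny program (a sketch; the path is the same illustrative one used above):

    // fifocheck.cpp -- verifies that a FIFO can be created in /tmp
    #include <cstdio>
    #include <sys/stat.h>
    #include <unistd.h>

    int main() {
        const char* path = "/tmp/foo-bar-baz";
        if (mkfifo(path, 0600) != 0) {   // same operation the vgdb setup needs
            std::perror("mkfifo");
            return 1;
        }
        std::puts("FIFO created OK");
        unlink(path);
        return 0;
    }

One thing worth checking: named's -t option chroots the process, so the /tmp into which the vgdb FIFOs must go may be the one inside /var/lib/named.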
From: Philippe W. <phi...@sk...> - 2015-06-13 19:24:31
On Fri, 2015-06-12 at 07:23 +0000, Mayank Kumar (mayankum) wrote:
> Hi Users
> I am running valgrind on windriver and trying to use the
> RUNNING_ON_VALGRIND macro to figure out if my process is running under
> valgrind.
> While compiling I get the error:
>
>     error: missing binary operator before token "__extension__"
>
> Is there a way to fix it? I am using gcc compiler 4.3.2

In valgrind.h, there are somewhere the following 3 lines:

    #if !defined(__GNUC__)
    # define __extension__ /* */
    #endif

You might try to copy the # define line just before including valgrind.h.

Philippe
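Put together, the suggested workaround looks roughly like this (a sketch; the include path assumes a standard Valgrind install, and neutralizing __extension__ this way is only appropriate if your compiler does not need it):

    // vgcheck.cpp -- illustrative only
    #define __extension__ /* */     // the '# define' line copied from
                                    // valgrind.h, placed before the include
    #include <valgrind/valgrind.h>
    #include <cstdio>

    int main() {
        // RUNNING_ON_VALGRIND evaluates to 0 natively, non-zero under Valgrind
        if (RUNNING_ON_VALGRIND)
            std::printf("running under Valgrind\n");
        else
            std::printf("running natively\n");
        return 0;
    }

Note also that RUNNING_ON_VALGRIND is a runtime check and must be used in ordinary code, not in an #if preprocessor condition; using it in #if is one way to get a "missing binary operator" error.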
From: Aurora C. <ca...@kn...> - 2015-06-13 08:25:57
Hello,

Bind is a DNS system; the program name is named. It is a daemon, and it will
fork child processes and create a few threads. When I use valgrind to check
bind, valgrind outputs a few messages and quits, but named is still running.
Please help me, thanks a lot. The information is as follows:

# valgrind --version
valgrind-3.7.0
# valgrind --trace-children=yes bin/named/named -u bind -t /var/lib/named
==31878== Memcheck, a memory error detector
==31878== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==31878== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==31878== Command: bin/named/named -u bind -t /var/lib/named
==31878==
==31892==
==31892== HEAP SUMMARY:
==31892== in use at exit: 0 bytes in 0 blocks
==31892== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==31892==
==31892== All heap blocks were freed -- no leaks are possible
==31892==
==31892== For counts of detected and suppressed errors, rerun with: -v
==31892== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)
==31921==
==31921== HEAP SUMMARY:
==31921== in use at exit: 0 bytes in 0 blocks
==31921== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==31921==
==31921== All heap blocks were freed -- no leaks are possible
......
==31878== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==31878== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==31878== Command: /home/caizb/bind/bind/bind-9.9.2-P1/bin/named/.libs/lt-named -u bind -t /var/lib/named
==31878==
==31938== error 2 No such file or directory
==31938== mknod /tmp/vgdb-pipe-from-vgdb-to-31938-by-root-on-???
==31938== valgrind: fatal error: vgdb FIFOs cannot be created.
==31878==
==31878== HEAP SUMMARY:
==31878== in use at exit: 323,328 bytes in 15 blocks
==31878== total heap usage: 29 allocs, 14 frees, 325,785 bytes allocated
==31878==
==31878== LEAK SUMMARY:
==31878== definitely lost: 0 bytes in 0 blocks
==31878== indirectly lost: 0 bytes in 0 blocks
==31878== possibly lost: 0 bytes in 0 blocks
==31878== still reachable: 323,328 bytes in 15 blocks
==31878== suppressed: 0 bytes in 0 blocks
==31878== Rerun with --leak-check=full to see details of leaked memory
==31878==
==31878== For counts of detected and suppressed errors, rerun with: -v
==31878== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
==31878== could not unlink /tmp/vgdb-pipe-from-vgdb-to-31878-by-root-on-???
==31878== could not unlink /tmp/vgdb-pipe-to-vgdb-from-31878-by-root-on-???
==31878== could not unlink /tmp/vgdb-pipe-shared-mem-vgdb-31878-by-root-on-???

# ps -ef | grep named
bind 10544 1 99 15:15 ? 02:26:12 /home/bind/bin/named/named -u bind -c etc/bind/named.conf -n 7 -u bind -t /var/lib/named
From: Mayank K. (mayankum) <may...@ci...> - 2015-06-12 07:23:18
Hi Users,

I am running valgrind on windriver and trying to use the RUNNING_ON_VALGRIND
macro to figure out if my process is running under valgrind.
While compiling I get the error:

    error: missing binary operator before token "__extension__"

Is there a way to fix it? I am using gcc compiler 4.3.2.

-Mayank
From: 王阳 <412...@qq...> - 2015-06-12 07:04:03
Hi Philippe.
Thank you for your reply.
> Can you post here these statistics?
Yes, I can, but I will do it tomorrow. A single run takes over 24 hours.
It sounds unbelievable, but it is true.
When I use helgrind to analyse myprog, it runs extremely slowly (more than
1000 times slower; the user guide says it will be 20~50 times slower). So I
want to know why it runs so slowly. When I use memcheck to analyse myprog,
it runs 20~50 times slower, so why can helgrind not do the same, and why
does it use so much memory?
I have printed the memcheck log of myprog below; I wonder whether helgrind
cannot deal with these big memory allocations.
------------------------------
==63283== Warning: set address range perms: large range [0x3a04c000, 0x7a04f000) (defined)
==63283== Warning: set address range perms: large range [0x7a04f040, 0xba052040) (defined)
==63283== Warning: set address range perms: large range [0xd4674000, 0x114677000) (defined)
...
==63283== Warning: set address range perms: large range [0x288ba7000, 0x298baa000) (defined)
I have also attached a picture.
------------------ Original Message ------------------
From: "Philippe Waroquiers" <phi...@sk...>
Sent: Friday, June 12, 2015, 2:45 AM
To: "王阳" <412...@qq...>
Cc: "valgrind-users" <val...@li...>
Subject: Re: [Valgrind-users] Re: helgrind use more than 32G memory.
On Thu, 2015-06-11 at 18:19 +0800, 王阳 wrote:
> --28682-- univ_laog_do_GC enter cardinality 9614
> --28682-- univ_laog_do_GC exit seen 6591 next gc at cardinality 9615
yes, one of the things that --stats=yes activates is some information
about laog GC.
>
> why?
> DRD does not have that problem, but DRD's messages are not as accurate
> as helgrind's.
helgrind's data structures are different from drd's (among other things,
to be able to give precise information about race conditions).
Normally, --stats=yes should have produced statistics
at the end of your program (if your program exited due to
out of memory). If valgrind itself encountered the OOM situation,
then it should equally have produced statistics.
Can you post here these statistics?
If they are not produced (unclear why), then you could instead
regularly run in another window
vgdb -c v.info stats -c v.info memory
to capture stats/memory during the run, before it reaches 64G.
Philippe
From: David C. <dc...@gm...> - 2015-06-11 22:29:55
For the alloc'd line, Valgrind points at new_allocator.h line 110; the
de-alloc line is pointed at new_allocator.h line 104. The first looks like
this in the GCC sources:
return static_cast<_Tp*>(::operator new(__n * sizeof(_Tp)));
the second looks like this:
deallocate(pointer __p, size_type)
{ ::operator delete(__p); }
I don't see a mismatch there.
The particular use-case I'm looking at is in string_buf, but there are
literally thousands of others.
Regards,
David.
On Thu, Jun 11, 2015 at 6:58 PM, Philippe Waroquiers <phi...@sk...> wrote:
> On Thu, 2015-06-11 at 12:14 +0200, Julian Seward wrote:
> > > I've recently switched over to Valgrind 3.10.1 and I'm now seeing vast
> > > numbers of 'mismatched free/delete' type messages all coming from std::string
> > > shipped with GCC 4.8.3.
> Do you see these 'mismatched free/delete' only with Valgrind 3.10.1?
> I.e. has it appeared with Valgrind 3.10.1, or has it appeared
> with a switch to gcc 4.8.3?
> >
> > I've seen this a lot in the past year when working with Firefox. I believe
> > that it is due to what you could call "differential inlining". I'm not 100%
> > sure of the details, but the general problem is like this:
> >
> > Memcheck intercepts malloc, free, new and delete. It expects memory
> > allocated by malloc to be freed by free and memory allocated by new
> > to be freed by delete (and the same for new[] and delete[]).
> >
> > Imagine now that some C++ header file contains something like this
> >
> > operator new ( size_t n ) { return malloc(n); }
> > operator delete ( void* p ) { free(p); }
> >
> > If g++ decides to inline new but not delete, or the other way round, then
> > the code still works (of course) but from Memcheck's point of view there is
> > a problem. That's because it can't intercept the inlined function -- there
> > is no single piece of code to intercept. So what it ends up seeing is,
> > for example, memory allocated by new (because that isn't inlined) but freed
> > by free (because delete got inlined). So it complains -- incorrectly.
> If the problem is due to "differential inlining", then that should
> be visible in the stack traces of either the "new" or the "delete":
> unless you specify --read-inline-info=no, valgrind stack traces will
> show both the "new" and the "malloc" (or the "delete" and the "free")
> in the stack trace, but with an identical program counter.
>
> >
> > I couldn't figure out any sane way to work around this, so I added a new
> > flag,
>
>
> > --show-mismatched-frees=no|yes [default=yes], to the trunk. This
> > disables allocator-mismatch checking and gets rid of the noise, but it of
> > course also gets rid of the ability to detect genuine mismatch errors.
> What might be done (assuming that the inline info properly shows the
> call to new/delete) is to have an additional check before reporting
> a 'mismatched' error: if the stacktrace contains an inlined call to
> the expected "freeing function", then do not report the error.
> E.g., we might have a third value, --show-mismatched-frees=checkinline,
> which would activate this verification.
> That should remove the false positive errors while keeping the
> true positives. The price to pay will be a translation of
> program counters to function names and a string comparison for
> all cases of such "differential inlining" (and that might have to
> be done both for the alloc and the free stacktrace).
>
> Philippe
>
>
>
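A rough sketch of how that checkinline verification could look (names and data structures here are hypothetical, not Valgrind's actual internals):

    // checkinline_sketch.cpp -- illustrative logic only
    #include <cstring>

    // Return true if any (possibly inlined) frame of the stacktrace names
    // the freeing function that would have matched the allocator; in that
    // case the mismatch report would be suppressed as a likely false
    // positive caused by differential inlining.
    bool looks_like_differential_inlining(const char* const* frame_names,
                                          int n_frames,
                                          const char* expected_free_fn) {
        for (int i = 0; i < n_frames; ++i)
            if (std::strstr(frame_names[i], expected_free_fn) != nullptr)
                return true;   // inlined call to the expected freeing fn found
        return false;
    }

    int main() {
        // e.g. free() was intercepted, but the inline info shows an inlined
        // operator delete at the same program counter:
        const char* frames[] = { "free", "operator delete(void*)", "main" };
        return looks_like_differential_inlining(frames, 3, "operator delete")
                   ? 0    // suppress: likely differential inlining
                   : 1;   // report the mismatch
    }

As noted above, the cost is translating program counters to function names and doing the string comparison whenever such a case is hit.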
From: Philippe W. <phi...@sk...> - 2015-06-11 18:45:48
On Thu, 2015-06-11 at 18:19 +0800, 王阳 wrote:
> --28682-- univ_laog_do_GC enter cardinality 9614
> --28682-- univ_laog_do_GC exit seen 6591 next gc at cardinality 9615
yes, one of the things that --stats=yes activates is some information
about laog GC.
>
> why?
> DRD have not that problem ,but DRD 's message is not accurate
> comparing with helgrind.
helgrind's data structures are different from drd's (among other things,
to be able to give precise information about race conditions).
Normally, --stats=yes should have produced statistics
at the end of your program (if your program exited due to
out of memory). If valgrind itself encountered the OOM situation,
then it should equally have produced statistics.
Can you post here these statistics?
If they are not produced (unclear why), then you could instead
regularly run in another window
vgdb -c v.info stats -c v.info memory
to capture stats/memory during the run, before it reaches 64G.
Philippe
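For a long run the same capture can simply be repeated on a timer, e.g. (a sketch; add --pid=<pid> if more than one Valgrind process is running):

    while true; do vgdb -c v.info stats -c v.info memory; sleep 600; done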