From: Benjamin S. <bsc...@vr...> - 2013-11-19 10:21:30
I did start with --profile-heap, and it gave me the following (before I
killed it). The bottom-most entry was growing really fast; the other
entries seemed to stay more or less constant. Attached for reference is a
complete run (I snipped a few outputs because the mailing list doesn't
like messages over 40k).
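For reference, the run producing the output below was roughly the
following (the application name is a placeholder; other flags, including
my suppressions file, are omitted):

  valgrind --tool=helgrind --profile-heap=yes ./myapp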
I think the likelihood of my app being faulty is rather small. It runs
fine standalone, under memcheck, and under drd (drd does show a few
possible races).
Regards
Benjamin
-------- Arena "core": 159383552/159383552 max/curr mmap'd, 0/0
unsplit/split sb unmmap'd, 103453184/103453184 max/curr on_loan 8 rzB
--------
16 in 1: sched_lock
32 in 1: main.mpclo.3
32 in 2: hg.lae.1
32 in 1: stacks.rs.1
32 in 2: hg.lae.2
48 in 1: libhb.Thr__new.1
64 in 2: hg.lae.3
64 in 1: hg.mk_Thread.1
80 in 1: libhb.Thr__new.3 (local_Kws_and_stacks)
80 in 1: commandline.sua.3
80 in 1: libhb.verydead_thread_table_init.1
80 in 1: gdbserved_watches
96 in 2: commandline.sua.2
96 in 2: libhb.Thr__new.4
128 in 1: initimg-linux.sce.5
128 in 2: hashtable.Hc.1
144 in 1: m_cache
208 in 6: errormgr.sLTy.2
496 in 3: hg.lNaw.1
528 in 3: hg.laog__init.2
640 in 5: hg.laog__init.1
736 in 27: errormgr.losf.4
848 in 34: errormgr.sLTy.1
1,040 in 27: errormgr.losf.2
2,160 in 27: errormgr.losf.1
2,224 in 65: libhb.SO__Alloc.1
3,376 in 2: libhb.vts_tab_init.1
3,552 in 66: hg.ids.2
4,000 in 1: hg.ids.1
4,112 in 116: libhb.vts_set_focaa.1
4,304 in 17: hg.ids.5 (univ_laog)
5,264 in 65: hg.mk_Lock.1
5,280 in 44: hg.ids.4
6,160 in 1: hashtable.Hc.2
6,416 in 117: libhb.vts_set_init.1
15,632 in 10: libhb.aFfw.1 (LineF storage)
16,384 in 1: libhb.Thr__new.2
33,056 in 6: gdbsrv
49,216 in 1: execontext.reh1
49,216 in 1: hashtable.resize.1
237,632 in 4,945: hg.new_MallocMeta.1
389,152 in 3,632: perm_malloc
1,572,912 in 1: libhb.event_map_init.2 (context table)
2,097,168 in 1: libhb.libhb_init.1
6,656,864 in 67: libhb.event_map_init.1 (RCEC pools)
15,937,808 in 169: libhb.event_map_init.3 (OldRef pools)
25,078,672 in 11,973: libhb.event_map_init.4 (oldref tree)
51,266,896 in 1,068,037: libhb.zsm_init.1 (map_shmem)
On 11/18/2013 11:55 PM, Philippe Waroquiers wrote:
> On Mon, 2013-11-18 at 11:40 +0100, Benjamin Schindler wrote:
>> Hi
>>
>> I just upgraded to valgrind 3.9.0 (Debian sid backport). Most features
>> work, but helgrind hangs with my application, and starts to fill up
>> memory until the system starts swapping badly. This did not happen with
>> 3.8. I tried with and without the suppressions file I'm using but that
>> doesn't help.
>> The only thing that might be an issue is that the application uses CUDA,
>> and helgrind used to throw a lot of errors concerning CUDA (all of which
>> I dealt with using a suppressions file).
>>
>> Is this something known or is there a way I can debug this further?
> You can try several things to get more info about what is going on:
> * run with --profile-heap=yes
> This will regularly provide details about who allocates what
> * together with the above, you can also use gdb+vgdb to debug
> your app running under valgrind, and use
> monitor v.info memory aspacemgr
> to get memory details at selected places.
>
> It might be interesting to do the above with 3.8.1 and 3.9 and look at
> the differences.
> Note that in 3.9.0, several arenas (kinds of memory
> zones of the valgrind allocator) have been merged together.
>
> Philippe
>
>
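P.S. For completeness, the gdb+vgdb sequence suggested above would be
roughly the following (I haven't gotten to it yet; the application name is
a placeholder):

  valgrind --tool=helgrind --vgdb=yes --vgdb-error=0 ./myapp

and then, in a second terminal:

  gdb ./myapp
  (gdb) target remote | vgdb
  (gdb) monitor v.info memory aspacemgr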
--
Dr. Benjamin Schindler, Software Developer
VRVis Forschungs-GmbH, www.VRVis.at
FN, 195369h, HG Wien
mail: bsc...@vr..., tel +43(0)1 20501 30803