valgrind-users archive: messages per month.

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| 2003 | – | – | 58 | 261 | 169 | 214 | 201 | 219 | 198 | 203 | 241 | 94 |
| 2004 | 137 | 149 | 150 | 193 | 95 | 173 | 137 | 236 | 157 | 150 | 136 | 90 |
| 2005 | 139 | 130 | 274 | 138 | 184 | 152 | 261 | 409 | 239 | 241 | 260 | 137 |
| 2006 | 191 | 142 | 169 | 75 | 141 | 169 | 131 | 141 | 192 | 176 | 142 | 95 |
| 2007 | 98 | 120 | 93 | 96 | 95 | 65 | 62 | 56 | 53 | 95 | 106 | 87 |
| 2008 | 58 | 149 | 175 | 110 | 106 | 72 | 55 | 89 | 26 | 96 | 83 | 93 |
| 2009 | 97 | 106 | 74 | 64 | 115 | 83 | 137 | 103 | 56 | 59 | 61 | 37 |
| 2010 | 94 | 71 | 53 | 105 | 79 | 111 | 110 | 81 | 50 | 82 | 49 | 21 |
| 2011 | 87 | 105 | 108 | 99 | 91 | 94 | 114 | 77 | 58 | 58 | 131 | 62 |
| 2012 | 76 | 93 | 68 | 95 | 62 | 109 | 90 | 87 | 49 | 54 | 66 | 84 |
| 2013 | 67 | 52 | 93 | 65 | 33 | 34 | 52 | 42 | 52 | 48 | 66 | 14 |
| 2014 | 66 | 51 | 34 | 47 | 58 | 27 | 52 | 41 | 78 | 30 | 28 | 26 |
| 2015 | 41 | 42 | 20 | 73 | 31 | 48 | 23 | 55 | 36 | 47 | 48 | 41 |
| 2016 | 32 | 34 | 33 | 22 | 14 | 31 | 29 | 41 | 17 | 27 | 38 | 28 |
| 2017 | 28 | 30 | 16 | 9 | 27 | 57 | 28 | 43 | 31 | 20 | 24 | 18 |
| 2018 | 34 | 50 | 18 | 26 | 13 | 31 | 13 | 11 | 15 | 12 | 18 | 13 |
| 2019 | 12 | 29 | 51 | 22 | 13 | 20 | 13 | 12 | 21 | 6 | 9 | 5 |
| 2020 | 13 | 5 | 25 | 4 | 40 | 27 | 5 | 17 | 21 | 1 | 5 | 15 |
| 2021 | 28 | 6 | 11 | 5 | 7 | 8 | 5 | 5 | 11 | 9 | 10 | 12 |
| 2022 | 7 | 13 | 8 | 7 | 12 | 27 | 14 | 27 | 27 | 17 | 17 | – |
| 2023 | 10 | 18 | 9 | 26 | – | 13 | 18 | 5 | 12 | 16 | 1 | – |
| 2024 | 4 | 3 | 6 | 17 | 2 | 33 | 13 | 1 | 6 | 8 | 6 | 15 |
| 2025 | 5 | 11 | 8 | 20 | 1 | – | – | 9 | 1 | 7 | 1 | – |
From: Sean Au <xu...@gm...> - 2016-01-02 23:32:07
|
Hi there, I saw in the release notes for 3.11.0 that " Preliminary support for Mac OS X 10.11 (El Capitan) has been added." What does this mean? Is there any more information on how to install it on 10.11? I've tried but encountered issues and further searching led me to this link (https://bugs.kde.org/show_bug.cgi?id=348909) where it's been mentioned that: *OS X 10.11 (El Capitan) does not allow user links in /usr/bin. It is cleared from non-Apple things and is further locked down using some new mechanism: I.e. not even root can put links there.* Does anyone know the current status? Cheers, Sean |
|
From: Josef W. <Jos...@gm...> - 2015-12-18 17:33:14
|
On 18.12.2015 at 14:50, Rohit Takhar wrote: > Suppose I have a class |TEST| and there is a function |func()|. How can > I restrict callgrind to get information about this function only. I know > how to do it in C++, but it is not working in python. > I tried following > methods: |--toggle-collect=func| and |--toggle-collect=TEST::func| These are actually too low-level, as they work on binary code, so they would enable/disable collection for specific functions in the Python interpreter itself, not in your script. You could write a Python extension module that lets you call a C function from Python which enables/disables callgrind collection by using the Valgrind client request macro "CALLGRIND_TOGGLE_COLLECT". Then call this function from your Python code whenever you want to toggle the collection state. Josef |
|
From: Rohit T. <roh...@gm...> - 2015-12-18 13:51:05
|
Hi everyone, I have been stuck on this. How can I use '--toggle-collect=function' to get information about a function in Python? Suppose I have a class TEST and there is a function func(). How can I restrict callgrind to get information about this function only? I know how to do it in C++, but it is not working in Python. I tried the following methods: --toggle-collect=func and --toggle-collect=TEST::func Thanks, Rohit |
|
From: pritesh <pri...@gm...> - 2015-12-16 19:20:13
|
Hi, I am trying to run valgrind on the dex2oat command, which converts a dex file to an oat file, on the Android platform with the armv8 arch. However, I see the unhandled-instruction logs below when running valgrind: disInstr(thumb): unhandled instruction: 0xDEFF 0x4EA9 disInstr(thumb): unhandled instruction: 0xDEFF 0x9309 disInstr(thumb): unhandled instruction: 0xDEFF 0x4699 disInstr(thumb): unhandled instruction: 0xDEFF 0xF8DF But the program is not killed. Is there any way to know if these errors are significant? How do I debug this? thanks Pritesh |
|
From: Philippe W. <phi...@sk...> - 2015-12-13 13:35:12
|
On Sun, 2015-12-13 at 20:43 +0800, yoma sophian wrote: > hi Philippe: > I compiled the trunk valgrind with your patch, but I still cannot see the > location of the mmap I put in the C file, even when I set --threshold=0.0. > (I attach the C file and the massif output as well) --threshold indicates the threshold below which massif does not show the details of the memory. Details will be shown in 2 different kinds of snapshots: * peak snapshots * detailed snapshots In your case, I guess that the file you are mmap-ing is too small to be the peak, and you do not have a detailed snapshot being taken between the mmap and the munmap. E.g. try to mmap a much bigger file, to ensure that the peak is reached by the mmap you are doing, and not by any other operation. > BTW, would you please let me know how to compile and run the programs > in massif/tests? In the valgrind top directory, to compile/install locally and run all regtests: ./configure --prefix=`pwd`/Inst make install make regtest Philippe |
|
From: yoma s. <sop...@gm...> - 2015-12-13 12:43:20
|
> I used the git svn instructions for getting the repository. > There should be no difference from the svn command. > Anyway, I will redo it with the svn command. hi Ivo: after using svn instead of git svn, I can successfully compile valgrind trunk. It seems git svn does not handle the external link ^^ >>> >> I think there is a bug in the massif logic to make a peak detailed >>> >> snapshot at the moment of munmap: it should try to produce a peak >>> >> snapshot when releasing the first page of munmap, not when releasing >>> >> the last page. >>> > revision 15745 introduces a test for this case, and fixes the bug. hi Philippe: I compiled the trunk valgrind with your patch, but I still cannot see the location of the mmap I put in the C file, even when I set --threshold=0.0. (I attach the C file and the massif output as well) BTW, would you please let me know how to compile and run the programs in massif/tests? appreciate all your kind help, |
|
From: yoma s. <sop...@gm...> - 2015-12-11 15:00:17
|
hi Ivo: 2015-12-11 22:57 GMT+08:00 Ivo Raisr <iv...@iv...>: > > > 2015-12-11 14:01 GMT+01:00 yoma sophian <sop...@gm...>: >> >> hi Phillippe: >> >> 2015-12-11 6:42 GMT+08:00 Philippe Waroquiers >> <phi...@sk...>: >> > On Mon, 2015-12-07 at 00:05 +0100, Philippe Waroquiers wrote: >> >> I think there is a bug in the massif logic to make a peak detailed >> >> snapshot at the moment of munmap: it should try to produce a peak >> >> snapshot when releasing the first page of munmap, not when releasing >> >> the last page. >> > revision 15745 introduces a test for this case, and fixes the bug. >> >> I checkout 15745 and compile. >> but I bump into below error message >> >> valgrind.git# ./autogen.sh >> running: aclocal >> running: autoheader >> running: automake -a >> Makefile.am:21: error: required directory ./VEX does not exist > > > Why valgrind.git? Valgrind is versioned in SVN. > Anyway, it seems your source tree is incomplete and you did not get > an external repo 'VEX'. See > http://valgrind.org/downloads/repository.html I used the git svn instructions for getting the repository; there should be no difference from the svn command. Anyway, I will redo it with the svn command. thanks for your kind reminder, |
|
From: Ivo R. <iv...@iv...> - 2015-12-11 14:57:33
|
2015-12-11 14:01 GMT+01:00 yoma sophian <sop...@gm...>: > hi Phillippe: > > 2015-12-11 6:42 GMT+08:00 Philippe Waroquiers < > phi...@sk...>: > > On Mon, 2015-12-07 at 00:05 +0100, Philippe Waroquiers wrote: > >> I think there is a bug in the massif logic to make a peak detailed > >> snapshot at the moment of munmap: it should try to produce a peak > >> snapshot when releasing the first page of munmap, not when releasing > >> the last page. > > revision 15745 introduces a test for this case, and fixes the bug. > > I checkout 15745 and compile. > but I bump into below error message > > valgrind.git# ./autogen.sh > running: aclocal > running: autoheader > running: automake -a > Makefile.am:21: error: required directory ./VEX does not exist > Why valgrind.git? Valgrind is versioned in SVN. Anyway, it seems your source tree is incomplete and you did not get an external repo 'VEX'. See http://valgrind.org/downloads/repository.html I, |
|
From: yoma s. <sop...@gm...> - 2015-12-11 13:02:05
|
hi Phillippe: 2015-12-11 6:42 GMT+08:00 Philippe Waroquiers <phi...@sk...>: > On Mon, 2015-12-07 at 00:05 +0100, Philippe Waroquiers wrote: >> I think there is a bug in the massif logic to make a peak detailed >> snapshot at the moment of munmap: it should try to produce a peak >> snapshot when releasing the first page of munmap, not when releasing >> the last page. > revision 15745 introduces a test for this case, and fixes the bug. I checkout 15745 and compile. but I bump into below error message valgrind.git# ./autogen.sh running: aclocal running: autoheader running: automake -a Makefile.am:21: error: required directory ./VEX does not exist error: while running 'automake -a' valgrind.git# Did I miss anything? thanks for your kind help, |
|
From: Philippe W. <phi...@sk...> - 2015-12-10 22:41:14
|
On Mon, 2015-12-07 at 00:05 +0100, Philippe Waroquiers wrote: > I think there is a bug in the massif logic to make a peak detailed > snapshot at the moment of munmap: it should try to produce a peak > snapshot when releasing the first page of munmap, not when releasing > the last page. revision 15745 introduces a test for this case, and fixes the bug. Philippe |
|
From: Philippe W. <phi...@sk...> - 2015-12-10 21:38:58
|
On Thu, 2015-12-10 at 13:20 -0800, Nikolaus Rath wrote: > So the answer is then that massif simply does not support inlined calls? Effectively, today, massif does not support outputting inlined calls. pp_snapshot_SXPt in ms_main.c has to be modified to print inlined function calls. Philippe |
|
From: Nikolaus R. <nr...@tr...> - 2015-12-10 21:20:45
|
On 12/10/2015 01:11 PM, Philippe Waroquiers wrote: > On Thu, 2015-12-10 at 11:06 -0800, Nikolaus Rath wrote: >> On 12/10/2015 10:20 AM, Matthias Schwarzott wrote: >>> Am 09.12.2015 um 22:10 schrieb Nikolaus Rath: >>>> >>>> Yes. But that makes it even more confusing to me: apparently gdb picks >>>> up the debug information about the inlined function just fine - but >>>> valgrind doesn't. >>>> >>>> >>>> I'll update valgrind to most recent and see if I can explicitly disable >>>> inlining. >>>> >>>> >>> Do you do your experiments with massif or with memcheck? >>> >>> If it is massif then read-inline-info defaults to "no". >>> an explicit --read-inline-info=yes could at least make valgrind core >>> save the extra information. > Oops, yes, I forgot that inline info was not activated for massif tool. > >> >> I am using massif - and bingo, using --read-inline-info=yes makes >> everything work. > This is strange. > --read-inline-info=yes will ensure that e.g. 'v.info scheduler' > stacktraces will show inlined calls, or that 'v.info location <address>' > will describe inlined calls. > > But I do not see how massif output would use it: to be used, > a non NULL iipc argument must be given to VG_(describe_IP). > And massif snapshot does pass NULL (see ms_main.c:2149). > > Are you sure the massif output shows the inlined calls ? Oh, no, sorry. I was silently assuming that if 'v.info scheduler' produces the correct output, massif would produce the correct output as well. So the answer is then that massif simply does not support inlined calls? Best, -Nikolaus |
|
From: Philippe W. <phi...@sk...> - 2015-12-10 21:09:59
|
On Thu, 2015-12-10 at 11:06 -0800, Nikolaus Rath wrote: > On 12/10/2015 10:20 AM, Matthias Schwarzott wrote: > > Am 09.12.2015 um 22:10 schrieb Nikolaus Rath: > >> > >> Yes. But that makes it even more confusing to me: apparently gdb picks > >> up the debug information about the inlined function just fine - but > >> valgrind doesn't. > >> > >> > >> I'll update valgrind to most recent and see if I can explicitly disable > >> inlining. > >> > >> > > Do you do your experiments with massif or with memcheck? > > > > If it is massif then read-inline-info defaults to "no". > > an explicit --read-inline-info=yes could at least make valgrind core > > save the extra information. Oops, yes, I forgot that inline info was not activated for massif tool. > > I am using massif - and bingo, using --read-inline-info=yes makes > everything work. This is strange. --read-inline-info=yes will ensure that e.g. 'v.info scheduler' stacktraces will show inlined calls, or that 'v.info location <address>' will describe inlined calls. But I do not see how massif output would use it: to be used, a non NULL iipc argument must be given to VG_(describe_IP). And massif snapshot does pass NULL (see ms_main.c:2149). Are you sure the massif output shows the inlined calls ? Philippe |
|
From: Nikolaus R. <nr...@tr...> - 2015-12-10 19:06:55
|
On 12/10/2015 10:20 AM, Matthias Schwarzott wrote: > On 09.12.2015 at 22:10, Nikolaus Rath wrote: >> >> Yes. But that makes it even more confusing to me: apparently gdb picks >> up the debug information about the inlined function just fine - but >> valgrind doesn't. >> >> >> I'll update valgrind to most recent and see if I can explicitly disable >> inlining. >> >> > Do you do your experiments with massif or with memcheck? > > If it is massif then read-inline-info defaults to "no". > An explicit --read-inline-info=yes could at least make the valgrind core > save the extra information. I am using massif - and bingo, using --read-inline-info=yes makes everything work. (So much time wasted experimenting when a careful study of the manpage would have been enough... grmbl). Thanks! -Niko |
|
From: Matthias S. <zz...@ge...> - 2015-12-10 18:21:23
|
On 09.12.2015 at 22:10, Nikolaus Rath wrote: > > Yes. But that makes it even more confusing to me: apparently gdb picks > up the debug information about the inlined function just fine - but > valgrind doesn't. > > > I'll update valgrind to most recent and see if I can explicitly disable > inlining. > > Do you do your experiments with massif or with memcheck? If it is massif then read-inline-info defaults to "no". An explicit --read-inline-info=yes could at least make the valgrind core save the extra information. But I do not know if massif will care about this information. Regards Matthias |
|
From: Nikolaus R. <nr...@tr...> - 2015-12-09 22:59:20
|
On 12/09/2015 02:15 PM, Philippe Waroquiers wrote: > On Wed, 2015-12-09 at 22:59 +0100, Philippe Waroquiers wrote: > >> If this is not recorded, then it is the valgrind dwarf reader that likely has >> a problem. >> Otherwise, it is the unwinder which does not properly use the inline info. > What you can also do is to use > (gdb) monitor v.info location <address> > > When address is some code, it will print the function/file/line nr > (and if address corresponds to inlined calls, it should describe all > what is inlined). > > For example, for the test memcheck/tests/inlinfo, I obtain: > (gdb) mo v.info loc 0x80486DC > Address 0x80486dc is in the Text segment of memcheck/tests/inlinfo > ==8004== at 0x80486DC: fun_d (inlinfo.c:7) > ==8004== by 0x80486DC: fun_c (inlinfo.c:15) > ==8004== by 0x80486DC: fun_b (inlinfo.c:21) > ==8004== by 0x80486DC: fun_a (inlinfo.c:27) > ==8004== by 0x80486DC: main (inlinfo.c:66) > > If the inline info is recorded and used properly, the above command > should give the inlining and inlined functions for the relevant program counter. Apparently the info is not used correctly then: (gdb) monitor v.info location 0xB99FC6 Address 0xb99fc6 is in the Text segment of /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model ==18278== at 0xB99FC6: taehdf5_mp_h5append_data_double_0d_ (taehdf5.f90:1936) (gdb) Best, -Nikolaus |
|
From: Nikolaus R. <nr...@tr...> - 2015-12-09 22:57:44
|
On 12/09/2015 01:59 PM, Philippe Waroquiers wrote: > On Wed, 2015-12-09 at 13:41 -0800, Nikolaus Rath wrote: >> I believe this is the relevant objdump output - but again I don't understand it. Does it tell you anything? >> >> <1><a8f8c>: Abbrev Number: 42 (DW_TAG_subprogram) >> <a8f8d> DW_AT_decl_line : 1916 >> <a8f8f> DW_AT_decl_file : 1 >> <a8f90> DW_AT_declaration : 1 >> <a8f91> DW_AT_name : (indirect string, offset: 0xa5fc): h5dump_attr_int >> <a8f95> DW_AT_external : 1 >> <a8f96> DW_AT_inline : 1 (inlined) > > So, we have now clearly confirmed that the wrong stacktrace is caused by > valgrind not understanding that h5dump_attr_int has been inlined. > > What you could do is to modify storage.c line 668: put a (1) instead of > the (0) in: > > if (0) VG_(message) > (Vg_DebugMsg, > "addInlInfo: fn %s inlined as addr_lo %#lx,addr_hi %#lx," > "caller fndn_ix %u %s:%d\n", > inlinedfn, addr_lo, addr_hi, fndn_ix, > ML_(fndn_ix2filename) (di, fndn_ix), lineno); > > > This will then trace the inlined call information.
> If this inlined info is properly understood and recorded, you should > see some information produced that tells that h5dump_attr_int > has been inlined around the program counter of: > ==2047== by 0xB99FC6: taehdf5_mp_h5append_data_double_0d_ (taehdf5.f90:1936) > The output is still the same: Thread 1: status = VgTs_Runnable (lwpid 18278) ==18278== at 0xFE9640: H5FL_reg_calloc (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model) ==18278== by 0xF7F3D9: H5A_create (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model) ==18278== by 0xF79500: H5Acreate2 (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model) ==18278== by 0xF682AD: h5acreate_c_ (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model) ==18278== by 0xF626A6: h5a_mp_h5acreate_f_ (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model) ==18278== by 0xB99FC6: taehdf5_mp_h5append_data_double_0d_ (taehdf5.f90:1936) ==18278== by 0xB248E6: plot_m_mp_plots_ (plot_hdf5.f:144) ==18278== by 0xB3B722: lr_mod_m_mp_check_dt_ (LR_model.F:487) ==18278== by 0xB272E3: lr_mod_m_mp_lr_step_ (LR_model.F:252) ==18278== by 0xB261DD: MAIN__ (LR_model.F:544) ==18278== by 0x406E3D: main (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model) client stack range: [0xFFEBFE000 0xFFF000FFF] client SP: 0xFFEC2CF48 Best, -Nikolaus |
|
From: Philippe W. <phi...@sk...> - 2015-12-09 22:13:38
|
On Wed, 2015-12-09 at 22:59 +0100, Philippe Waroquiers wrote: > If this is not recorded, then it is the valgrind dwarf reader that likely has > a problem. > Otherwise, it is the unwinder which does not properly use the inline info. What you can also do is to use (gdb) monitor v.info location <address> When address is some code, it will print the function/file/line nr (and if address corresponds to inlined calls, it should describe all what is inlined). For example, for the test memcheck/tests/inlinfo, I obtain: (gdb) mo v.info loc 0x80486DC Address 0x80486dc is in the Text segment of memcheck/tests/inlinfo ==8004== at 0x80486DC: fun_d (inlinfo.c:7) ==8004== by 0x80486DC: fun_c (inlinfo.c:15) ==8004== by 0x80486DC: fun_b (inlinfo.c:21) ==8004== by 0x80486DC: fun_a (inlinfo.c:27) ==8004== by 0x80486DC: main (inlinfo.c:66) If the inline info is recorded and used properly, the above command should give the inlining and inlined functions for the relevant program counter. Philippe |
|
From: Philippe W. <phi...@sk...> - 2015-12-09 21:57:23
|
On Wed, 2015-12-09 at 13:41 -0800, Nikolaus Rath wrote:
> I believe this is the relevant objdump output - but again I don't understand it. Does it tell you anything?
>
> <1><a8f8c>: Abbrev Number: 42 (DW_TAG_subprogram)
> <a8f8d> DW_AT_decl_line : 1916
> <a8f8f> DW_AT_decl_file : 1
> <a8f90> DW_AT_declaration : 1
> <a8f91> DW_AT_name : (indirect string, offset: 0xa5fc): h5dump_attr_int
> <a8f95> DW_AT_external : 1
> <a8f96> DW_AT_inline : 1 (inlined)
So, we have now clearly confirmed that the wrong stacktrace is caused by
valgrind not understanding that h5dump_attr_int has been inlined.
What you could do is to modify storage.c line 668: put a (1) instead of
the (0) in:
if (0) VG_(message)
(Vg_DebugMsg,
"addInlInfo: fn %s inlined as addr_lo %#lx,addr_hi %#lx,"
"caller fndn_ix %u %s:%d\n",
inlinedfn, addr_lo, addr_hi, fndn_ix,
ML_(fndn_ix2filename) (di, fndn_ix), lineno);
This will then trace the inlined call information.
If this inlined info is properly understood and recorded, you should
see some information produced that tells that h5dump_attr_int
has been inlined around the program counter of:
==2047== by 0xB99FC6: taehdf5_mp_h5append_data_double_0d_ (taehdf5.f90:1936)
If this is not recorded, then it is the valgrind dwarf reader that likely has
a problem.
Otherwise, it is the unwinder which does not properly use the inline info.
Philippe
|
|
From: Nikolaus R. <nr...@tr...> - 2015-12-09 21:41:29
|
On 12/09/2015 12:35 PM, Philippe Waroquiers wrote:
> On Wed, 2015-12-09 at 10:32 -0800, Nikolaus Rath wrote:
>> On 12/09/2015 10:00 AM, Nikolaus Rath wrote:
>
>> Interestingly enough, both stacktraces are incorrect: gdb is
>> missing the call to taehdf5_mp_h5append_data_double_0d_,
>> and valgrind is missing the call to h5dump_attr_int.
>
> The missing valgrind entry has the same "look" as an inlining "not
> understood": we see a function, but with a source that is another
> function.
> Maybe you could try recompiling with an option that fully disables
> inlining?
>
>> This is with valgrind 3.10.0 and gdb 7.7.1 (as above).
>
> At least for valgrind, it would be better to upgrade to the last
> released version (3.11).
Ok, I tried again with version 3.11.0. The problem is still there.
However, I can confirm that it's caused by inlining: if I explicitly disable inlining I can still compile with -O3 and get correct stacktraces.
>> Short of only using -O1 and -O0, is there a way to fix this?
> Without more info on the origin of the wrong stack trace, it is difficult
> to say. You might try using -O1, and then add individual optimisation
> flags one by one, till you see which optimisation flag causes the
> wrong stacktrace (assuming your compiler is like gcc, i.e. has very
> fine grained optimisation control).
>
> Alternatively, you could investigate by using the valgrind monitor
> command (gdb) monitor v.info unwind <addr> <len>
> to investigate the unwind info around the missing stacktrace entry
> and/or by activating the debug trace of valgrind e.g.
> --trace-symtab=no|yes show symbol table details? [no]
> --trace-symtab-patt=<patt> limit debuginfo tracing to obj name <patt>
> --trace-cfi=no|yes show call-frame-info details? [no]
Hmm. Unfortunately the output doesn't tell me anything. Here's what I get:
Correct stacktrace:
Thread 1: status = VgTs_Runnable (lwpid 1654)
==1654== at 0xF1F5D0: H5FL_reg_calloc (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==1654== by 0xEB5369: H5A_create (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==1654== by 0xEAF490: H5Acreate2 (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==1654== by 0xE9E23D: h5acreate_c_ (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==1654== by 0xE98636: h5a_mp_h5acreate_f_ (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==1654== by 0xAE9C68: taehdf5_mp_h5dump_attr_int_ (taehdf5.f90:1936)
==1654== by 0xAE58A3: taehdf5_mp_h5append_data_double_0d_ (taehdf5.f90:4193)
==1654== by 0xA8A063: plot_m_mp_plots_ (plot_hdf5.f:144)
==1654== by 0xAA0578: lr_mod_m_mp_check_dt_ (LR_model.F:487)
==1654== by 0xA8D059: lr_mod_m_mp_lr_step_ (LR_model.F:252)
==1654== by 0xA8BF33: MAIN__ (LR_model.F:544)
==1654== by 0x406E3D: main (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
client stack range: [0xFFEBFE000 0xFFF000FFF] client SP: 0xFFEC2CF88
valgrind stack top usage: 12312 of 1048576
(gdb) monitor v.info unwind 0x0000000000ae9c69
[0xae9c69 .. 0xae9c69]: let cfa=oldBP+16 in RA=*(cfa+-8) SP=cfa+0 BP=*(cfa+-16)
(gdb) monitor v.info unwind 0x0000000000ae58a4
[0xae58a4 .. 0xae58a4]: let cfa=oldBP+16 in RA=*(cfa+-8) SP=cfa+0 BP=*(cfa+-16)
Incorrect stacktrace:
Thread 1: status = VgTs_Runnable (lwpid 2047)
==2047== at 0xFE9640: H5FL_reg_calloc (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==2047== by 0xF7F3D9: H5A_create (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==2047== by 0xF79500: H5Acreate2 (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==2047== by 0xF682AD: h5acreate_c_ (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==2047== by 0xF626A6: h5a_mp_h5acreate_f_ (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==2047== by 0xB99FC6: taehdf5_mp_h5append_data_double_0d_ (taehdf5.f90:1936)
==2047== by 0xB248E6: plot_m_mp_plots_ (plot_hdf5.f:144)
==2047== by 0xB3B722: lr_mod_m_mp_check_dt_ (LR_model.F:487)
==2047== by 0xB272E3: lr_mod_m_mp_lr_step_ (LR_model.F:252)
==2047== by 0xB261DD: MAIN__ (LR_model.F:544)
==2047== by 0x406E3D: main (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
client stack range: [0xFFEBFE000 0xFFF000FFF] client SP: 0xFFEC2CFC8
valgrind stack top usage: 12312 of 1048576
(gdb) monitor v.info unwind 0xF626A6
[0xf626a6 .. 0xf626a6]: let cfa=oldSP+48 in RA=*(cfa+-8) SP=cfa+0 BP=Same
(gdb) monitor v.info unwind 0xB99FC6
[0xb99fc6 .. 0xb99fc6]: let cfa=oldBP+16 in RA=*(cfa+-8) SP=cfa+0 BP=*(cfa+-16)
> and/or by using objdump to look at the dwarf info.
I believe this is the relevant objdump output - but again I don't understand it. Does it tell you anything?
<1><a8f8c>: Abbrev Number: 42 (DW_TAG_subprogram)
<a8f8d> DW_AT_decl_line : 1916
<a8f8f> DW_AT_decl_file : 1
<a8f90> DW_AT_declaration : 1
<a8f91> DW_AT_name : (indirect string, offset: 0xa5fc): h5dump_attr_int
<a8f95> DW_AT_external : 1
<a8f96> DW_AT_inline : 1 (inlined)
Best,
-Nikolaus
|
|
From: Nikolaus R. <nr...@tr...> - 2015-12-09 21:10:19
|
On 12/09/2015 12:44 PM, Philippe Waroquiers wrote: > On Wed, 2015-12-09 at 10:32 -0800, Nikolaus Rath wrote: >> (gdb) bt >> #0 0x0000000001010750 in H5FL_reg_calloc () >> #1 0x0000000000fa64ea in H5A_create () >> #2 0x0000000000fa0611 in H5Acreate2 () >> #3 0x0000000000f8f3be in h5acreate_c_ () >> #4 0x0000000000f897b7 in h5a_mp_h5acreate_f_ () >> #5 0x0000000000b99fc7 in h5dump_attr_int (loc_id=<optimized out>, >> f=<optimized out>, name=..., >> .tmp.NAME.len_V$1086=<optimized out>) >> at /home/nrath/Q2D/utils/src/taehdf5.f90:1936 >> #6 h5append_data_double_0d (group_id=1, >> f=<error reading variable: Cannot access memory at address >> 0xa000008>, name=..., >> .tmp.NAME.len_V$1cd8=272) >> at /home/nrath/Q2D/utils/src/taehdf5.f90:4193 >> #7 0x0000000000b248e7 in plot_m::plots (idt=1) >> at /home/nrath/Q2D/LamyRidge/src/model/plot_hdf5.f:144 >> #8 0x0000000000b3b723 in lr_mod_m::check_dt (idt=1) >> at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:487 >> #9 0x0000000000b272e4 in lr_mod_m::lr_step (idt=1, >> dt_r=<error reading variable: Cannot access memory at address >> 0xa000008>, t_r=0) >> at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:252 >> #10 0x0000000000b261de in lr_model () >> at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:544 >> >> Interestingly enough, but stacktraces are incorrect: gdb is missing >> the call to taehdf5_mp_h5append_data_double_0d_, and valgrind is >> missing the call to h5dump_attr_int. > > The gdb stacktrace contains a call to h5append_data_double_0d (frame 6). Oh, indeed. I missed that because it didn't start with the long hex address. > So, as far as I can see, the gdb stacktrace is similar with -O1 and -O2 > (if we ignore small details like the namespace taehdf5::) Yes. But that makes it even more confusing to me: apparently gdb picks up the debug information about the inlined function just fine - but valgrind doesn't. I'll update valgrind to most recent and see if I can explicitly disable inlining. Thanks, -Nikolaus |
|
From: Philippe W. <phi...@sk...> - 2015-12-09 20:43:22
|
On Wed, 2015-12-09 at 10:32 -0800, Nikolaus Rath wrote: > (gdb) bt > #0 0x0000000001010750 in H5FL_reg_calloc () > #1 0x0000000000fa64ea in H5A_create () > #2 0x0000000000fa0611 in H5Acreate2 () > #3 0x0000000000f8f3be in h5acreate_c_ () > #4 0x0000000000f897b7 in h5a_mp_h5acreate_f_ () > #5 0x0000000000b99fc7 in h5dump_attr_int (loc_id=<optimized out>, > f=<optimized out>, name=..., > .tmp.NAME.len_V$1086=<optimized out>) > at /home/nrath/Q2D/utils/src/taehdf5.f90:1936 > #6 h5append_data_double_0d (group_id=1, > f=<error reading variable: Cannot access memory at address > 0xa000008>, name=..., > .tmp.NAME.len_V$1cd8=272) > at /home/nrath/Q2D/utils/src/taehdf5.f90:4193 > #7 0x0000000000b248e7 in plot_m::plots (idt=1) > at /home/nrath/Q2D/LamyRidge/src/model/plot_hdf5.f:144 > #8 0x0000000000b3b723 in lr_mod_m::check_dt (idt=1) > at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:487 > #9 0x0000000000b272e4 in lr_mod_m::lr_step (idt=1, > dt_r=<error reading variable: Cannot access memory at address > 0xa000008>, t_r=0) > at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:252 > #10 0x0000000000b261de in lr_model () > at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:544 > > Interestingly enough, but stacktraces are incorrect: gdb is missing > the call to taehdf5_mp_h5append_data_double_0d_, and valgrind is > missing the call to h5dump_attr_int. The gdb stacktrace contains a call to h5append_data_double_0d (frame 6). So, as far as I can see, the gdb stacktrace is similar with -O1 and -O2 (if we ignore small details like the namespace taehdf5::) (I know nothing about fortran, so no idea if that is a namespace or whatever). When using gdb, you might maybe find more info using e.g. info frame 5/6/7 Philippe |
|
From: Philippe W. <phi...@sk...> - 2015-12-09 20:33:22
|
On Wed, 2015-12-09 at 10:32 -0800, Nikolaus Rath wrote:
> On 12/09/2015 10:00 AM, Nikolaus Rath wrote:
> Interestingly enough, both stacktraces are incorrect: gdb is missing the call to taehdf5_mp_h5append_data_double_0d_,
> and valgrind is missing the call to h5dump_attr_int.
The missing valgrind entry has the same "look" as inlining that was
"not understood": we see a function, but with a source location that
belongs to another function. Maybe you could try recompiling with an
option that fully disables inlining?
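For reference, a minimal sketch of such a rebuild (the flag names and the build line are assumptions, for a gfortran/GCC-style toolchain; the _mp_ name mangling in the traces suggests Intel ifort, where -inline-level=0 is the rough equivalent):

```shell
# Sketch: rebuild with inlining fully disabled while keeping the
# optimisation level. Flags assume gfortran/GCC; for Intel ifort,
# -inline-level=0 is the rough equivalent.
FFLAGS="-O3 -g -fno-inline -fno-inline-functions"
# gfortran $FFLAGS -c taehdf5.f90    # (actual build line is assumed)
echo "$FFLAGS"
```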
>
> This is with valgrind 3.10.0 and gdb 7.7.1 (as above).
At least for valgrind, it would be better to upgrade to the latest
released version (3.11).
>
> (I also tried compiling with just "-O3" (should be using dwarf-3), "-O3 -gdwarf-4", and just "-O2",
> but the stacktrace difference was there in every case).
If this is related to inlining, then as far as I know, you need at least
dwarf-3.
>
>
> Short of only using -O1 and -O0, is there a way to fix this?
Without more info on the origin of the wrong stack trace, it is
difficult to say. You might try using -O1 and then adding individual
optimisation flags one by one, until you see which flag causes the
wrong stacktrace (assuming your compiler is like gcc, i.e. has very
fine-grained optimisation control).
Alternatively, you could investigate by using the valgrind monitor
command
  (gdb) monitor v.info unwind <addr> <len>
to look at the unwind info around the missing stacktrace entry,
and/or by activating the debug tracing of valgrind, e.g.
--trace-symtab=no|yes show symbol table details? [no]
--trace-symtab-patt=<patt> limit debuginfo tracing to obj name <patt>
--trace-cfi=no|yes show call-frame-info details? [no]
and/or by using objdump to look at the dwarf info.
Philippe
|
|
From: Nikolaus R. <nr...@tr...> - 2015-12-09 18:32:13
|
On 12/09/2015 10:00 AM, Nikolaus Rath wrote:
> Hi Philippe,
>
> I found that I can work around the problem of gdb failing to produce backtraces by compiling with -O0. Switching to -O1 or higher is enough to cause issues. I also experimented with dwarf-2, dwarf-3, and dwarf-4 debug information, but that did not seem to matter.
>
> I tried to narrow down the problem with -O1, -gdwarf-2, a newer valgrind, and a newer gdb:
>
> $ valgrind --tool=massif --vgdb-error=0 ../../Q2D/LamyRidge/src/model/LR_model
> ==4881== Massif, a heap profiler
> ==4881== Copyright (C) 2003-2013, and GNU GPL'd, by Nicholas Nethercote
> ==4881== Using Valgrind-3.10.0 and LibVEX; rerun with -h for copyright info
> ==4881== Command: ../../Q2D/LamyRidge/src/model/LR_model
> [...]
>
> $ gdb ../../Q2D/LamyRidge/src/model/LR_model
> GNU gdb (Debian 7.7.1+dfsg-5) 7.7.1
> Copyright (C) 2014 Free Software Foundation, Inc.
> [...]
> (gdb) target remote | /usr/lib/valgrind/../../bin/vgdb --pid=4881
> Remote debugging using | /usr/lib/valgrind/../../bin/vgdb --pid=4881
> [...]
> (gdb) b taehdf5.f90:1936
> (gdb) c
> (gdb) c
> (gdb) b H5FL_reg_calloc
> (gdb) c
> Continuing.
>
>[...]
>
> So as far as I can tell, valgrind is getting the backtrace right. Is this correct?
>
> If so, I guess the only explanation is that I am not setting the breakpoint at the time where massif takes the snapshot?
Ok, I fell into a trap. I assumed that whatever causes gdb to hang when trying to print a backtrace also causes valgrind to produce wrong stacktraces. But that is not the case.
So, when compiling with -O1 -gdwarf-2, the valgrind and gdb backtraces agree. However, when compiling with -O3 -gdwarf-2, there is a difference:
Valgrind thinks:
(gdb) monitor v.info scheduler
[...]
Thread 1: status = VgTs_Runnable
==5489== at 0x1010750: H5FL_reg_calloc (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==5489== by 0xFA64E9: H5A_create (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==5489== by 0xFA0610: H5Acreate2 (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==5489== by 0xF8F3BD: h5acreate_c_ (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==5489== by 0xF897B6: h5a_mp_h5acreate_f_ (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==5489== by 0xB99FC6: taehdf5_mp_h5append_data_double_0d_ (taehdf5.f90:1936)
==5489== by 0xB248E6: plot_m_mp_plots_ (plot_hdf5.f:144)
==5489== by 0xB3B722: lr_mod_m_mp_check_dt_ (LR_model.F:487)
==5489== by 0xB272E3: lr_mod_m_mp_lr_step_ (LR_model.F:252)
==5489== by 0xB261DD: MAIN__ (LR_model.F:544)
==5489== by 0x406E3D: main (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
client stack range: [0xFFEBFE000 0xFFF000FFF] client SP: 0xFFEC2CFC8
valgrind stack top usage: 12424 of 1048576
But gdb says:
(gdb) bt
#0 0x0000000001010750 in H5FL_reg_calloc ()
#1 0x0000000000fa64ea in H5A_create ()
#2 0x0000000000fa0611 in H5Acreate2 ()
#3 0x0000000000f8f3be in h5acreate_c_ ()
#4 0x0000000000f897b7 in h5a_mp_h5acreate_f_ ()
#5 0x0000000000b99fc7 in h5dump_attr_int (loc_id=<optimized out>, f=<optimized out>, name=...,
.tmp.NAME.len_V$1086=<optimized out>) at /home/nrath/Q2D/utils/src/taehdf5.f90:1936
#6 h5append_data_double_0d (group_id=1,
f=<error reading variable: Cannot access memory at address 0xa000008>, name=...,
.tmp.NAME.len_V$1cd8=272) at /home/nrath/Q2D/utils/src/taehdf5.f90:4193
#7 0x0000000000b248e7 in plot_m::plots (idt=1) at /home/nrath/Q2D/LamyRidge/src/model/plot_hdf5.f:144
#8 0x0000000000b3b723 in lr_mod_m::check_dt (idt=1)
at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:487
#9 0x0000000000b272e4 in lr_mod_m::lr_step (idt=1,
dt_r=<error reading variable: Cannot access memory at address 0xa000008>, t_r=0)
at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:252
#10 0x0000000000b261de in lr_model () at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:544
Interestingly enough, both stacktraces are incorrect: gdb is missing the call to taehdf5_mp_h5append_data_double_0d_, and valgrind is missing the call to h5dump_attr_int.
This is with valgrind 3.10.0 and gdb 7.7.1 (as above).
(I also tried compiling with just "-O3" (should be using dwarf-3), "-O3 -gdwarf-4", and just "-O2", but the stacktrace difference was there in every case).
Short of only using -O1 and -O0, is there a way to fix this?
Best,
-Nikolaus
|
|
From: Nikolaus R. <nr...@tr...> - 2015-12-09 18:00:14
|
Hi Philippe,
I found that I can work around the problem of gdb failing to produce backtraces by compiling with -O0. Switching to -O1 or higher is enough to cause issues. I also experimented with dwarf-2, dwarf-3, and dwarf-4 debug information, but that did not seem to matter.
I tried to narrow down the problem with -O1, -gdwarf-2, a newer valgrind, and a newer gdb:
$ valgrind --tool=massif --vgdb-error=0 ../../Q2D/LamyRidge/src/model/LR_model
==4881== Massif, a heap profiler
==4881== Copyright (C) 2003-2013, and GNU GPL'd, by Nicholas Nethercote
==4881== Using Valgrind-3.10.0 and LibVEX; rerun with -h for copyright info
==4881== Command: ../../Q2D/LamyRidge/src/model/LR_model
[...]
$ gdb ../../Q2D/LamyRidge/src/model/LR_model
GNU gdb (Debian 7.7.1+dfsg-5) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
[...]
(gdb) target remote | /usr/lib/valgrind/../../bin/vgdb --pid=4881
Remote debugging using | /usr/lib/valgrind/../../bin/vgdb --pid=4881
[...]
(gdb) b taehdf5.f90:1936
(gdb) c
(gdb) c
(gdb) b H5FL_reg_calloc
(gdb) c
Continuing.
Breakpoint 2, 0x00000000009a7da0 in H5FL_reg_calloc ()
(gdb) monitor v.info scheduler
[...]
Thread 1: status = VgTs_Runnable
==4881== at 0x9A7DA0: H5FL_reg_calloc (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==4881== by 0x93DB39: H5A_create (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==4881== by 0x937C60: H5Acreate2 (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==4881== by 0x926A0D: h5acreate_c_ (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==4881== by 0x920E06: h5a_mp_h5acreate_f_ (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
==4881== by 0x6CC1E0: taehdf5_mp_h5dump_attr_int_ (taehdf5.f90:1936)
==4881== by 0x6D2EAB: taehdf5_mp_h5append_data_double_0d_ (taehdf5.f90:4193)
==4881== by 0x6A23F6: plot_m_mp_plots_ (plot_hdf5.f:144)
==4881== by 0x6AC10F: lr_mod_m_mp_check_dt_ (LR_model.F:487)
==4881== by 0x6A4E96: lr_mod_m_mp_lr_step_ (LR_model.F:252)
==4881== by 0x6AC38F: MAIN__ (LR_model.F:544)
==4881== by 0x406F3D: main (in /mnt/nfs-home/nrath/Q2D/LamyRidge/src/model/build/LR_model)
client stack range: [0xFFEBFE000 0xFFF000FFF] client SP: 0xFFEC2CF78
valgrind stack top usage: 12424 of 1048576
(gdb) bt
#0 0x00000000009a7da0 in H5FL_reg_calloc ()
#1 0x000000000093db3a in H5A_create ()
#2 0x0000000000937c61 in H5Acreate2 ()
#3 0x0000000000926a0e in h5acreate_c_ ()
#4 0x0000000000920e07 in h5a_mp_h5acreate_f_ ()
#5 0x00000000006cc1e1 in taehdf5::h5dump_attr_int (loc_id=1,
[..hangs for a while...]
f=<error reading variable: Cannot access memory at address 0xa000008>,
name=<error reading variable: Cannot access memory at address 0x832f5f9>, .tmp.NAME.len_V$1086=272)
at /home/nrath/Q2D/utils/src/taehdf5.f90:1936
#6 0x00000000006d2eac in taehdf5::h5append_data_double_0d (group_id=1,
f=<error reading variable: Cannot access memory at address 0xa000008>,
name=<error reading variable: Cannot access memory at address 0x832f5f9>, .tmp.NAME.len_V$1cd8=272)
at /home/nrath/Q2D/utils/src/taehdf5.f90:4193
#7 0x00000000006a23f7 in plot_m::plots (idt=1) at /home/nrath/Q2D/LamyRidge/src/model/plot_hdf5.f:144
#8 0x00000000006ac110 in lr_mod_m::check_dt (idt=1)
at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:487
#9 0x00000000006a4e97 in lr_mod_m::lr_step (idt=1,
dt_r=<error reading variable: Cannot access memory at address 0xa000008>, t_r=0)
at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:252
#10 0x00000000006ac390 in lr_model () at /home/nrath/Q2D/LamyRidge/src/model/LR_model.F:544
So as far as I can tell, valgrind is getting the backtrace right. Is this correct?
If so, I guess the only explanation is that I am not setting the breakpoint at the time where massif takes the snapshot?
Best,
-Nikolaus
|