From: Rayce W. <rwest@g.clemson.edu> - 2020-11-08 18:56:15

Hello,

After researching the GPL some, I'm wondering how it is possible that angr (and pyvex) are allowed to use BSD licenses when they integrate VEX as their lifter and IR, especially since VEX is under the General and not the Lesser GPL. It's not that I wish to cause them any trouble; I've just been considering using VEX for other work myself and, frankly, I wouldn't want to GPL it if I did.

Thanks for any insight!
Rayce
From: LEPAREUR L. <loi...@ce...> - 2020-10-02 16:40:00

On 29/09/2020 at 15:19, Milian Wolff wrote:
> On Tuesday, 29 September 2020 12:19:24 CEST LEPAREUR Loic wrote:
>> On 21/09/2020 at 17:04, Milian Wolff wrote:
>>> On Thursday, 17 September 2020 15:32:04 CEST John Reiser wrote:
>>>> On 2020-09-17 LEPAREUR Loic wrote:
>>>>> Several years ago, I developed a Massif patch to let the client code
>>>>> select which part of the code should be profiled with Massif.
>>>> Today this is much less interesting unless you compare and contrast with
>>>> https://github.com/KDE/heaptrack . Do a web search for "heaptrack vs
>>>> massif". Consider particularly "A Faster Massif" in
>>>> https://milianw.de/tag/heaptrack , and
>>>> https://milianw.de/tag/massif-visualizer .
>>> In other news: I just pushed a bunch of patches to heaptrack git which
>>> allow you to do time-diffing. I.e. you can now select a time range in the
>>> charts and then request filtering to show the delta costs between the two
>>> time points.
>>>
>>> This should allow you to solve your problem nicely.
>>>
>>> Alternatively, you can also runtime-attach heaptrack after you finished
>>> your initialization phase to only record the main computational loop. The
>>> overhead should be minimal or actually close to zero when your main loop
>>> isn't allocating anything.
>>>
>>> Cheers
>> OK, thanks for pointing that out. I didn't know about it, I will give it
>> a try.
>>
>> Heaptrack requires recent versions of KChart and Qt for the GUI and it
>> isn't easy to upgrade them here.
> What do you mean by "recent" - to my knowledge it can build with some-years-
> old versions of both ;-)
>
> But even then, you can just download the AppImage and don't need to install
> anything then:
>
> https://download.kde.org/stable/heaptrack/1.2.0/
>
>> And since you LD_PRELOAD Heaptrack, it
>> would be very easy to add client requests mechanism, but I will try
>> vanilla version first.
> Yes, there's already `heaptrack_api.h`, which could be expanded to add support
> for client markers or similar.
>
> Cheers

Sorry, I only meant "more recent than what I have here". OK, I will try the AppImage; it's nice that one exists.

Loïc
From: John R. <jr...@bi...> - 2020-09-29 21:27:27

On 2020-09-29 Prujan, Michael wrote:
> I'm running valgrind-3.16.1 on an application with DPDK.
>
> I got the following errors:
>
> ERROR: This system does not support "FSGSBASE".

That's correct: the virtual machine that valgrind exports to the app does not support FSGSBASE. The valgrind source says:

    ./VEX/priv/guest_amd64_helpers.c:  /* Don't advertise FSGSBASE support, bit 0 in EBX. */

Perhaps you could develop the support for FSGSBASE and submit the implementation patch?
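For reference, DPDK's startup check essentially probes CPUID leaf 7 (sub-leaf 0), where FSGSBASE is advertised as bit 0 of EBX - the same bit the VEX comment above refers to - and under valgrind that bit reads as 0 even when the host CPU has the feature. A minimal sketch of such a probe, assuming GCC/clang's <cpuid.h>; this is only an illustration of the check, not DPDK's actual code:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        /* CPUID leaf 7, sub-leaf 0: structured extended feature flags. */
        if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
            printf("CPUID leaf 7 not available\n");
            return 1;
        }

        /* FSGSBASE is bit 0 of EBX.  Under valgrind this reads as 0 because
           the simulated CPU does not advertise it. */
        printf("FSGSBASE: %s\n", (ebx & 1u) ? "supported" : "not supported");
        return 0;
    }

Run natively on a CPU like the one whose /proc/cpuinfo is shown above, this prints "supported"; run under valgrind it prints "not supported", which matches the EAL error being reported.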
From: Prujan, M. <Mic...@ve...> - 2020-09-29 20:10:38

Hi,

I'm running valgrind-3.16.1 on an application with DPDK. I got the following errors:

    ERROR: This system does not support "FSGSBASE".
    Please check that RTE_MACHINE is set correctly.
    EAL: FATAL: unsupported cpu type.
    EAL: unsupported cpu type.

However, FSGSBASE exists. cat /proc/cpuinfo:

    flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt mba tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local ibpb ibrs stibp dtherm arat pln pts pku ospke spec_ctrl intel_stibp

I compiled DPDK without that flag... the application is still not running!

Thanks,
Michael.

This electronic message may contain proprietary and confidential information of Verint Systems Inc., its affiliates and/or subsidiaries. The information is intended to be for the use of the individual(s) or entity(ies) named above. If you are not the intended recipient (or authorized to receive this e-mail for the intended recipient), you may not use, copy, disclose or distribute to anyone this message or any information contained in this message. If you have received this electronic message in error, please notify us by replying to this e-mail.
From: Milian W. <ma...@mi...> - 2020-09-29 13:20:44

On Tuesday, 29 September 2020 12:19:24 CEST LEPAREUR Loic wrote:
> On 21/09/2020 at 17:04, Milian Wolff wrote:
> > On Thursday, 17 September 2020 15:32:04 CEST John Reiser wrote:
> >> On 2020-09-17 LEPAREUR Loic wrote:
> >>> Several years ago, I developed a Massif patch to let the client code
> >>> select which part of the code should be profiled with Massif.
> >>
> >> Today this is much less interesting unless you compare and contrast with
> >> https://github.com/KDE/heaptrack . Do a web search for "heaptrack vs
> >> massif". Consider particularly "A Faster Massif" in
> >> https://milianw.de/tag/heaptrack , and
> >> https://milianw.de/tag/massif-visualizer .
> >
> > In other news: I just pushed a bunch of patches to heaptrack git which
> > allow you to do time-diffing. I.e. you can now select a time range in the
> > charts and then request filtering to show the delta costs between the two
> > time points.
> >
> > This should allow you to solve your problem nicely.
> >
> > Alternatively, you can also runtime-attach heaptrack after you finished
> > your initialization phase to only record the main computational loop. The
> > overhead should be minimal or actually close to zero when your main loop
> > isn't allocating anything.
> >
> > Cheers
>
> OK, thanks for pointing that out. I didn't know about it, I will give it
> a try.
>
> Heaptrack requires recent versions of KChart and Qt for the GUI and it
> isn't easy to upgrade them here.

What do you mean by "recent" - to my knowledge it can build with some-years-old versions of both ;-)

But even then, you can just download the AppImage and then you don't need to install anything:

https://download.kde.org/stable/heaptrack/1.2.0/

> And since you LD_PRELOAD Heaptrack, it
> would be very easy to add client requests mechanism, but I will try
> vanilla version first.

Yes, there's already `heaptrack_api.h`, which could be expanded to add support for client markers or similar.

Cheers
--
Milian Wolff
ma...@mi...
http://milianw.de
From: LEPAREUR L. <loi...@ce...> - 2020-09-29 10:19:38

On 21/09/2020 at 17:04, Milian Wolff wrote:
> On Thursday, 17 September 2020 15:32:04 CEST John Reiser wrote:
>> On 2020-09-17 LEPAREUR Loic wrote:
>>> Several years ago, I developed a Massif patch to let the client code
>>> select which part of the code should be profiled with Massif.
>> Today this is much less interesting unless you compare and contrast with
>> https://github.com/KDE/heaptrack . Do a web search for "heaptrack vs
>> massif". Consider particularly "A Faster Massif" in
>> https://milianw.de/tag/heaptrack , and
>> https://milianw.de/tag/massif-visualizer .
> In other news: I just pushed a bunch of patches to heaptrack git which allow
> you to do time-diffing. I.e. you can now select a time range in the charts and
> then request filtering to show the delta costs between the two time points.
>
> This should allow you to solve your problem nicely.
>
> Alternatively, you can also runtime-attach heaptrack after you finished your
> initialization phase to only record the main computational loop. The overhead
> should be minimal or actually close to zero when your main loop isn't
> allocating anything.
>
> Cheers

OK, thanks for pointing that out. I didn't know about it, I will give it a try.

Heaptrack requires recent versions of KChart and Qt for the GUI and it isn't easy to upgrade them here. And since you LD_PRELOAD Heaptrack, it would be very easy to add a client requests mechanism, but I will try the vanilla version first.

Cheers
From: Milian W. <ma...@mi...> - 2020-09-21 15:05:55

On Thursday, 17 September 2020 15:32:04 CEST John Reiser wrote:
> On 2020-09-17 LEPAREUR Loic wrote:
> > Several years ago, I developed a Massif patch to let the client code
> > select which part of the code should be profiled with Massif.
> Today this is much less interesting unless you compare and contrast with
> https://github.com/KDE/heaptrack . Do a web search for "heaptrack vs
> massif". Consider particularly "A Faster Massif" in
> https://milianw.de/tag/heaptrack , and
> https://milianw.de/tag/massif-visualizer .

In other news: I just pushed a bunch of patches to heaptrack git which allow you to do time-diffing. I.e. you can now select a time range in the charts and then request filtering to show the delta costs between the two time points.

This should allow you to solve your problem nicely.

Alternatively, you can also runtime-attach heaptrack after you finished your initialization phase to only record the main computational loop. The overhead should be minimal or actually close to zero when your main loop isn't allocating anything.

Cheers
--
Milian Wolff
ma...@mi...
http://milianw.de
From: John R. <jr...@bi...> - 2020-09-17 13:32:21

On 2020-09-17 LEPAREUR Loic wrote:
> Several years ago, I developed a Massif patch to let the client code select which part of the code should be profiled with Massif.

Today this is much less interesting unless you compare and contrast with https://github.com/KDE/heaptrack . Do a web search for "heaptrack vs massif". Consider particularly "A Faster Massif" in https://milianw.de/tag/heaptrack , and https://milianw.de/tag/massif-visualizer .
From: LEPAREUR L. <loi...@ce...> - 2020-09-17 09:26:57

Hi,

Several years ago, I developed a Massif patch to let the client code select which part of the code should be profiled with Massif. This is particularly useful for scientific simulation codes, which often have two phases during a run:

1) an initialisation phase
2) the main computational loop (heavy CPU load)

During phase 1 the code allocates a lot of memory (reading meshes, creating variables, ...), but the critical part is the main loop: buggy code slightly increases memory consumption, yet the increase due to one iteration of the loop is tiny compared to the total amount of memory allocated before the main loop. Since the memory is freed after the loop, this is not a leak, but the code can still run out of memory because there are thousands and thousands of iterations. To detect the problem with Massif, one has to let the code run many iterations of the loop under valgrind... and that is way too slow.

The patch I've made eases the detection of such a problem within just a few iterations. It consists of two new command line options and three client requests:

--record-from-start=yes/no : disables heap profiling until Massif meets the client request which tells it to start profiling the heap (see below).
--disable-auto-snapshots=yes/no : disables all snapshots except the ones that are explicitly asked for in a client request.

VALGRIND_START_MEM_RECORDING : tells Massif to start heap profiling.
VALGRIND_STOP_MEM_RECORDING : tells Massif to stop heap profiling.
VALGRIND_TAKE_DETAILED_SNAPSHOT : tells Massif to take a detailed snapshot.

So, using VALGRIND_START_MEM_RECORDING just before the main loop and VALGRIND_TAKE_DETAILED_SNAPSHOT at the beginning of the loop, Massif can report exactly what is going on during the main loop. As an example, I made a regression test which simulates that problem (a big initial allocation and then tiny allocations in a loop).
Massif report without the patch: (ASCII graph) a peak of 1.001 MB, completely dominated by the initial allocation, with the x-axis running from 0 to 7.056 MB.

Massif report with the patch: (ASCII graph) a peak of 216 B, showing only the step-wise growth of the loop's own allocations, with the x-axis running from 0 to 4.550 MB.

I've developed the patch in a git branch that I've just rebased on master. I've updated the docs (NEWS, manual, ...) and created a regression test for it in massif/tests. I think this feature could interest other developers of scientific codes (even if, I think, DHAT can now help with such an issue), so if you (valgrind developers) think it could be interesting to take a look at it, let me know: I can send you the patch or do a pull request.

Thanks,
Loïc
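To make the intended workflow concrete, here is a minimal usage sketch. It assumes the patch described above is applied: the three macros are the patch's proposed client requests and do not exist in stock Valgrind, and init_mesh() / one_iteration() are placeholders for the real simulation code.

    /* Sketch only: requires the patched Massif and its client-request
       macros (VALGRIND_START_MEM_RECORDING, VALGRIND_TAKE_DETAILED_SNAPSHOT,
       VALGRIND_STOP_MEM_RECORDING). */

    void init_mesh(void);        /* phase 1: large setup allocations      */
    void one_iteration(void);    /* phase 2: tiny per-iteration allocations */

    void run_simulation(int n_iterations)
    {
        init_mesh();

        VALGRIND_START_MEM_RECORDING;             /* begin heap profiling here */
        for (int i = 0; i < n_iterations; i++) {
            VALGRIND_TAKE_DETAILED_SNAPSHOT;      /* one detailed snapshot per iteration */
            one_iteration();
        }
        VALGRIND_STOP_MEM_RECORDING;              /* stop before teardown/frees */
    }

Run under the patched tool with its new options, e.g. valgrind --tool=massif --record-from-start=no --disable-auto-snapshots=yes, so that only the instrumented region contributes to the report.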
From: Philippe W. <phi...@sk...> - 2020-09-12 14:24:17

On Fri, 2020-09-11 at 12:38 +0200, Milian Wolff wrote:
> And finally, with heaptrack it is also not yet easily doable to get a diff
> between two time stamps. Also a feature I've long thought about implementing,
> but never got around to...

Note that valgrind allows reporting a "delta/diff" memory heap status, e.g. under memcheck. See the memcheck leak_check monitor command.

Philippe
From: Philippe W. <phi...@sk...> - 2020-09-12 14:22:23

On Thu, 2020-09-10 at 15:26 +0200, folkert wrote:
> Hi,
>
> How can I obtain the number of mallocs per type in a time-frame using
> massif? I'm NOT interested in the total in use, I would like to know how
> often type x is allocated between t+1 and t+2.

You could run your application under valgrind + gdb/vgdb. You can then put breakpoints at relevant places to trigger them at t+1 and t+2. You can then e.g. run with memcheck and have delta malloc info reported, using the memcheck monitor command leak_check. This command can show the delta "alloc/free" since the previous call.

Philippe
From: Philippe W. <phi...@sk...> - 2020-09-12 14:18:41

On Tue, 2020-09-08 at 14:09 +0200, Mario Emmenlauer wrote:
> On 08.09.20 12:25, Mario Emmenlauer wrote:
> > On 08.09.20 12:04, Mario Emmenlauer wrote:
> > > The error I get most frequently is (full output attached in log.txt)
> > > ==32== Valgrind's memory management: out of memory:
> > > ==32== newSuperblock's request for 6864695621860790272 bytes failed.
> > > ==32== 114,106,368 bytes have already been mmap-ed ANONYMOUS.
> >
> > Argh! After sending the email, I went through the stack trace for
> > the hundredth time, and spotted the use of "zlib". And indeed, when
> > replacing my own zlib 1.2.11 with the system zlib 1.2.11, valgrind
> > works as expected!
> >
> > Does that make sense? Is zlib used by valgrind itself? And why could
> > my debug build differ (so much) from the system zlib that it breaks
> > valgrind? I double-checked and it's the identical source code from
> > Ubuntu, just missing two or three patches.
>
> So it seems I can (partially) answer my own question: when valgrind
> is used on an executable that links zlib built with -ggdb3, then it
> does not work (due to the aforementioned error). Keeping all other
> debug settings except -ggdb3 still works fine.
>
> I have no clue as to _why_ this may happen, but I hope it can be
> helpful to other people running into the same issue.

zlib is not used by the valgrind tools. In fact, valgrind tools do not use any library (not even libc).

The newSuperblock trace above shows that a *huge* block is requested. As this bug only happens when you use -ggdb3, this is likely a problem in valgrind's debuginfo reader: some debug info generated by -ggdb3 is very probably not handled properly.

I recompiled libz with -ggdb3 but saw no problem when running this lib under valgrind. We might get a clearer idea of what happens on your side by adding some tracing. The best is to file a bug on bugzilla and attach the output of running valgrind with -d -d -d -v -v -v. That might give some information about what is wrong, and possibly some more detailed tracing can then be activated.

Thanks
Philippe
From: folkert <fo...@va...> - 2020-09-12 13:00:52

> Anyhow, it sounds like you are starting to reinvent heaptrack - it does
> exactly the above and then some.

Indeed it does: I looked at it and it is perfect for my use-case. Thank you!
From: Pakize S. <psa...@fa...> - 2020-09-11 16:06:10

Hi,

Somehow I am not able to see error line numbers when I try to test a Microsoft library: https://github.com/microsoft/PQCrypto-SIDH. I compiled the library using the command:

    $ make ARCH=x64 CC=gcc OPT_LEVEL=GENERIC

I also changed the optimization level to -g and added the -static option before compiling. Then I used the command:

    $ valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes --dsymutil=yes ./test_SIKE

with the executable file under PQCrypto-SIDH/sike503/. However, the errors are always reported as "below main", even when I deliberately added some errors to the code myself. The reason could be related to the Makefile of the library. I checked most of the likely causes but I could not find the solution.

Thank you,
Pakize
From: John R. <jr...@bi...> - 2020-09-11 15:11:11

On 2020-09-11 at 10:50 UTC, folkert wrote: [snip]
> This morning I came up with another solution: I made an LD_PRELOAD
> wrapper which counts every malloc call. I decided that the allocated type is
> not really required in my case, but knowing which malloc call did it would
> work as well.

For a random C++ program, most calls to malloc are made by 'operator new' or some closely related flavor. After that comes std::string::alloc<...>. You *really* want 'heaptrack'. Get it (and libunwind)! Use it! Do not waste your time trying to concoct anything else.
From: Milian W. <ma...@mi...> - 2020-09-11 14:54:31

On Friday, 11 September 2020 12:50:18 CEST folkert wrote:
> Hi,
>
> > > How can I obtain the number of mallocs per type in a time-frame using
> > > massif? I'm NOT interested in the total in use, I would like to know how
> > > often type x is allocated between t+1 and t+2.
> >
> > To my knowledge, this data is not recorded by Massif. You could try to
> > have a look at heaptrack [1] instead.
> >
> > [1]: https://invent.kde.org/sdk/heaptrack
> >
> > Note though that on Linux, malloc does not retain any type information. As
> > such, even with heaptrack you cannot easily filter by type. But by
> > leveraging the flamegraph you can often still get a similar intuition
> > based on the callstacks. I have an idea to parse the callstack code lines
> > to find type information from the call to `new` but that's probably quite
> > hairy to get right in practice. Suggestions welcome on how to trace the
> > type information!
> >
> > And finally, with heaptrack it is also not yet easily doable to get a diff
> > between two time stamps. Also a feature I've long thought about
> > implementing, but never got around to...
>
> This morning I came up with another solution: I made an LD_PRELOAD
> wrapper which counts every malloc call. I decided that the allocated type is
> not really required in my case, but knowing which malloc call did it would
> work as well.
> So in malloc() I do __builtin_return_address(0), hash the pointer and
> use that as an index into an array of counters.
>
> This gives me:
>
> pointer          count
> ...
> 000000000041db10 00000000002a427
> 00007ffff4a6c1b1 00000000007f1da
>
> Going from 000000000041db10 to a symbol works fine; the shared library
> (00007ffff4a6c1b1) is a bit troublesome though (disabled
> randomize_va_space and using eu-addr2line).

You have to take the memory mapping of the library into account and subtract that offset from the address.

Anyhow, it sounds like you are starting to reinvent heaptrack - it does exactly the above and then some.

Cheers
--
Milian Wolff
ma...@mi...
http://milianw.de
From: Milian W. <ma...@mi...> - 2020-09-11 10:58:56

On Thursday, 10 September 2020 15:26:37 CEST folkert wrote:
> Hi,
>
> How can I obtain the number of mallocs per type in a time-frame using
> massif? I'm NOT interested in the total in use, I would like to know how
> often type x is allocated between t+1 and t+2.

To my knowledge, this data is not recorded by Massif. You could try to have a look at heaptrack [1] instead.

[1]: https://invent.kde.org/sdk/heaptrack

Note though that on Linux, malloc does not retain any type information. As such, even with heaptrack you cannot easily filter by type. But by leveraging the flamegraph you can often still get a similar intuition based on the callstacks. I have an idea to parse the callstack code lines to find type information from the call to `new`, but that's probably quite hairy to get right in practice. Suggestions welcome on how to trace the type information!

And finally, with heaptrack it is also not yet easily doable to get a diff between two time stamps. Also a feature I've long thought about implementing, but never got around to...

Cheers
--
Milian Wolff
ma...@mi...
http://milianw.de
From: folkert <fo...@va...> - 2020-09-11 10:51:04

Hi,

> > How can I obtain the number of mallocs per type in a time-frame using
> > massif? I'm NOT interested in the total in use, I would like to know how
> > often type x is allocated between t+1 and t+2.
>
> To my knowledge, this data is not recorded by Massif. You could try to have a
> look at heaptrack [1] instead.
>
> [1]: https://invent.kde.org/sdk/heaptrack
>
> Note though that on Linux, malloc does not retain any type information. As
> such, even with heaptrack you cannot easily filter by type. But by leveraging
> the flamegraph you can often still get a similar intuition based on the
> callstacks. I have an idea to parse the callstack code lines to find type
> information from the call to `new` but that's probably quite hairy to get
> right in practice. Suggestions welcome on how to trace the type information!
>
> And finally, with heaptrack it is also not yet easily doable to get a diff
> between two time stamps. Also a feature I've long thought about implementing,
> but never got around to...

This morning I came up with another solution: I made an LD_PRELOAD wrapper which counts every malloc call. I decided that the allocated type is not really required in my case, but knowing which malloc call did it would work as well. So in malloc() I do __builtin_return_address(0), hash the pointer and use that as an index into an array of counters.

This gives me:

    pointer          count
    ...
    000000000041db10 00000000002a427
    00007ffff4a6c1b1 00000000007f1da

Going from 000000000041db10 to a symbol works fine; the shared library (00007ffff4a6c1b1) is a bit troublesome though (disabled randomize_va_space and using eu-addr2line).

Folkert van Heusden
--
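A minimal sketch of the kind of preload counter described above, under stated assumptions: the slot count, hashing and output format are illustrative only, and dladdr() is used here (instead of the manual mapping-offset subtraction Milian mentions) to turn a return address back into a symbol and library name. A real implementation would also want a recursion guard around the dlsym() lookup.

    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define SLOTS 65536                      /* illustrative table size */

    static void *callers[SLOTS];             /* call-site address per slot   */
    static unsigned long counts[SLOTS];      /* malloc calls from that site  */

    void *malloc(size_t size)
    {
        static void *(*real_malloc)(size_t);
        if (!real_malloc)
            real_malloc = (void *(*)(size_t))dlsym(RTLD_NEXT, "malloc");

        /* Key the counter on the caller's return address; collisions
           simply share a slot in this sketch. */
        void *caller = __builtin_return_address(0);
        size_t idx = ((uintptr_t)caller >> 4) % SLOTS;
        callers[idx] = caller;
        __sync_fetch_and_add(&counts[idx], 1);

        return real_malloc(size);
    }

    __attribute__((destructor))
    static void dump_counts(void)
    {
        for (size_t i = 0; i < SLOTS; i++) {
            if (!counts[i])
                continue;
            Dl_info info;
            /* dladdr() accounts for the library's load address, so no
               manual offset arithmetic is needed; the symbol name may
               still be missing for stripped or non-exported call sites. */
            if (dladdr(callers[i], &info) && info.dli_sname)
                fprintf(stderr, "%p %lu %s (%s)\n", callers[i], counts[i],
                        info.dli_sname, info.dli_fname);
            else
                fprintf(stderr, "%p %lu\n", callers[i], counts[i]);
        }
    }

Built with something like gcc -shared -fPIC -o malloccount.so malloccount.c -ldl and loaded via LD_PRELOAD=./malloccount.so ./yourapp (file names illustrative), it dumps the per-call-site counts at exit.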
From: folkert <fo...@va...> - 2020-09-10 13:26:51

Hi,

How can I obtain the number of mallocs per type in a time-frame using massif? I'm NOT interested in the total in use, I would like to know how often type x is allocated between t+1 and t+2.

regards
From: Duc N. <duc...@gm...> - 2020-09-09 12:38:39

Thanks a lot Philippe!

Regarding the code that performs the self-modifying-code check, create_self_checks_as_needed() (inside guest_generic_bb_to_IR.c): I am trying to add another statement that just prints a string when self-modifying code has been detected. To do that, I increased the space allocated for the self-modifying check from 21 statements (i.e., three extents, 7 statements each) to 24 (i.e., three extents, 8 statements each). The newly allocated slot is used to perform a mkIRExprCCall to the print function. However, most of the existing support is for statements that create or modify values, e.g., IRStmt_WrTmp. I would like a mkIRExprCCall that simply calls a helper function (similar to the checksum functions) to print a string when self-modifying code has been detected. Yet if such an expression (or the corresponding function) is not used in a succeeding statement, the helper function is not triggered at run time. Could you please let me know whether there is a way to integrate such a helper function so that it prints a simple string indicating that self-modifying code has been detected by Valgrind?

Thank you in advance for your help!

Best regards,
Duc

On Sun, Aug 30, 2020 at 4:40 PM Philippe Waroquiers <phi...@sk...> wrote:
> Valgrind has a lot of heuristics to optimise the speed of the JIT-ted code.
> One of these heuristics is to chase jumps/calls to a known destination.
>
> This is somewhat similar to the inlining performed by the compiler,
> but performed by valgrind at runtime, when it encounters a new call/jump.
>
> In this case, the function f1 is inlined twice following this heuristic.
> So the second inlining is using the modified function.
>
> If you disable chasing, then the code prints the same value twice:
>
>     valgrind --tool=none --smc-check=none --vex-guest-chase=no ...
>
> produces 4660 as output twice.
>
> Also, if you do a loop
>
>     for (int j = 0; j < 2; j++) {
>         f1();
>         .... here modify f1 code
>     }
>
> then valgrind inserts the code of f1 only once, and it prints the same value
> twice, whatever the parameter --vex-guest-chase.
>
> The code that does the self-modifying-code check is in the function
> needs_self_check in m_translate.c. This function is called by VEX.
>
> Philippe
>
> On Fri, 2020-08-28 at 12:15 +0200, Duc Nguyen wrote:
> > Hello everyone,
> >
> > I am trying the self-modifying-code check of Valgrind but I am not sure
> > if I understand the definition of self-modifying code in Valgrind correctly.
> >
> > I prepared an example (see below) that has a function f1 that is first
> > executed in main and outputs a number (4660). Afterward, two instructions
> > of f1 are modified, and f1 is then executed one more time. It then outputs
> > a number (22068) that is different from the first time.
> >
> > When I run Valgrind with --smc-check=all and --smc-check=none I do not
> > see any difference in the outputs of Valgrind: both times f1 produces the
> > two different numbers (i.e., the self-modifying code runs successfully
> > whether --smc-check is turned on or off).
> >
> > Could someone please let me know if this behavior is expected from Valgrind?
> >
> > I further looked into the source code and found
> > valgrind/VEX/priv/guest_generic_bb_to_IR.c, which generates the code to
> > check. However, I do not know where such a check is executed. It would be
> > great if somebody knows where such a check takes place, and where we can
> > modify the source code to simply say, e.g., "self-modifying code found".
> >
> > Thank you very much in advance.
> >
> > Best regards,
> > Duc
> >
> > =============================
> > Self-modifying-code example
> > -------
> >
> > #include <stdio.h>
> > #include <sys/mman.h>
> > #include <unistd.h>
> >
> > __asm__( ".text" );
> > __asm__( ".align 4096" );
> >
> > void f1( void )
> > {
> >     printf( "%d\n", 0x1234 );
> > }
> > void f2( void ){
> >     printf("this is just a dummy function");
> > }
> >
> > int main( void )
> > {
> >     int rc;
> >     int pagesize;
> >     char *p;
> >     int i;
> >
> >     printf( "f1=0x%08X.\n", f1 );
> >
> >     f1( );
> >
> >     pagesize = sysconf( _SC_PAGE_SIZE );
> >     printf( "pagesize=%d (0x%08X).\n", pagesize, pagesize );
> >     if( pagesize == -1 )
> >         return( 2 );
> >
> >     p = (char*) f1;
> >     rc = mprotect( p, pagesize, PROT_READ | PROT_WRITE | PROT_EXEC );
> >     printf( "rc=%d.\n", rc );
> >     if( rc != 0 )
> >         return( 2 );
> >     printf( "'mprotect()' succeeded.\n" );
> >
> >     for( i = 0; i+1 < (size_t) f2 - (size_t) f1; i++ ) {
> >         if( ((char*) f1)[ i ] == 0x34 && ((char*) f1)[ i+1 ] == 0x12 ) {
> >             ((char*) f1)[ i+1 ] = 0x78;  // here performs self-modifying-code
> >             ((char*) f1)[ i+1 ] = 0x56;  // here performs self-modifying-code
> >         }
> >     }
> >
> >     f1( );  // here the output of f1 will be different from the first f1() call
> >
> >     printf( "Call succeeded.\n" );
> >     return( 0 );
> > }
> >
> > _______________________________________________
> > Valgrind-users mailing list
> > Val...@li...
> > https://lists.sourceforge.net/lists/listinfo/valgrind-users
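On the dead-code point in the question above: a pure mkIRExprCCall whose result feeds no later statement can legitimately be optimised away, whereas VEX's dirty calls (IRStmt_Dirty) exist precisely for helpers that are executed for their side effects. The following is only a rough sketch of that idea, assuming it is added inside VEX (e.g. near the self-check construction in guest_generic_bb_to_IR.c) where the IR constructors and vex_printf are available; the helper name and insertion point are illustrative and not part of any existing patch. It does not implement the "only when the checksum mismatches" condition - it merely shows how a side-effect-only helper can be made to survive in the generated code.

    /* Helper executed at guest run-time purely for its side effect. */
    static void smc_note ( void )
    {
       vex_printf("self-check instrumentation reached\n");
    }

    /* While building the IRSB, emit an unconditional dirty call to it.
       A dirty call is a statement, not an expression, so it is kept even
       though it produces no value that later statements consume. */
    static void add_smc_note ( IRSB* irsb )
    {
       IRDirty* d = unsafeIRDirty_0_N(
                       0 /*regparms*/, "smc_note",
                       (void*)&smc_note, mkIRExprVec_0() );
       addStmtToIRSB( irsb, IRStmt_Dirty(d) );
    }

Making the message fire only when a modification is actually detected would additionally require wiring the call into the exit path taken when the generated checksum comparison fails, which this sketch does not attempt.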
From: Mario E. <ma...@em...> - 2020-09-08 12:10:19

On 08.09.20 12:25, Mario Emmenlauer wrote:
> On 08.09.20 12:04, Mario Emmenlauer wrote:
>> The error I get most frequently is (full output attached in log.txt)
>> ==32== Valgrind's memory management: out of memory:
>> ==32== newSuperblock's request for 6864695621860790272 bytes failed.
>> ==32== 114,106,368 bytes have already been mmap-ed ANONYMOUS.
>
> Argh! After sending the email, I went through the stack trace for
> the hundredth time, and spotted the use of "zlib". And indeed, when
> replacing my own zlib 1.2.11 with the system zlib 1.2.11, valgrind
> works as expected!
>
> Does that make sense? Is zlib used by valgrind itself? And why could
> my debug build differ (so much) from the system zlib that it breaks
> valgrind? I double-checked and it's the identical source code from
> Ubuntu, just missing two or three patches.

So it seems I can (partially) answer my own question: when valgrind is used on an executable that links zlib built with -ggdb3, then it does not work (due to the aforementioned error). Keeping all other debug settings except -ggdb3 still works fine.

I have no clue as to _why_ this may happen, but I hope it can be helpful to other people running into the same issue.

All the best,
Mario Emmenlauer
From: Mario E. <ma...@em...> - 2020-09-08 10:40:07

Dear All,

many years ago I used valgrind frequently and successfully, admittedly without ever giving it much thought! Thanks for the awesome tool. Now I'm setting up a larger CI system and want automatic memcheck runs for our tests. However, in the whole past year I could not get a single successful run, so I must be doing something very wrong. Help would be greatly appreciated :-(

The error I get most frequently is (full output attached in log.txt):

    ==32== Valgrind's memory management: out of memory:
    ==32== newSuperblock's request for 6864695621860790272 bytes failed.
    ==32== 114,106,368 bytes have already been mmap-ed ANONYMOUS.

Here is what I tried so far:
- Versions valgrind-3.13.0 from Ubuntu 18.04 and valgrind-3.16.1 compiled from source
- Executed valgrind in a docker container running Ubuntu 18.04 x86_64 and Ubuntu 20.04 x86_64
- Checked `ulimit -a` in Docker, there are no tight limits
- Tried valgrind with some 50++ different executables, all lead to the same error message
- Tried valgrind outside Docker, which leads to the same error message
- Checked `ulimit -a` outside Docker, there are no tight limits
- Verified that the tests run successfully when _not_ using valgrind

I have also tried valgrind on executables other than our debug builds, and it seems to work there without problems. So maybe the errors are related to how we create debug builds? We make pretty standard debug builds (I assume), with flags -ggdb3 -fno-omit-frame-pointer -O1 -m64 -march=nehalem -mtune=haswell. Are some of these suspicious?

The host machines I have tried are relatively modern desktop computers with 64 GB of RAM and modern Skylake or Ryzen processors. The OS is typically Ubuntu 18.04 or 20.04. I have not set up any tight permission restrictions like SELinux (unless that is the default for Ubuntu).

Any ideas for what I can try are more than appreciated!

All the best,
Mario Emmenlauer
From: Mario E. <ma...@em...> - 2020-09-08 10:25:36

On 08.09.20 12:04, Mario Emmenlauer wrote:
> The error I get most frequently is (full output attached in log.txt)
> ==32== Valgrind's memory management: out of memory:
> ==32== newSuperblock's request for 6864695621860790272 bytes failed.
> ==32== 114,106,368 bytes have already been mmap-ed ANONYMOUS.

Argh! After sending the email, I went through the stack trace for the hundredth time, and spotted the use of "zlib". And indeed, when replacing my own zlib 1.2.11 with the system zlib 1.2.11, valgrind works as expected!

Does that make sense? Is zlib used by valgrind itself? And why could my debug build differ (so much) from the system zlib that it breaks valgrind? I double-checked and it's the identical source code from Ubuntu, just missing two or three patches.

All the best,
Mario Emmenlauer
--
BioDataAnalysis GmbH, Mario Emmenlauer   Tel. Buero: +49-89-74677203
Balanstr. 43   mailto: memmenlauer * biodataanalysis.de
D-81669 München   http://www.biodataanalysis.de/
From: Philippe W. <phi...@sk...> - 2020-08-30 14:41:27

Valgrind has a lot of heuristics to optimise the speed of the JIT-ted code. One of these heuristics is to chase jumps/calls to a known destination.

This is somewhat similar to the inlining performed by the compiler, but performed by valgrind at runtime, when it encounters a new call/jump.

In this case, the function f1 is inlined twice following this heuristic. So the second inlining is using the modified function.

If you disable chasing, then the code prints the same value twice:

    valgrind --tool=none --smc-check=none --vex-guest-chase=no ...

produces 4660 as output twice.

Also, if you do a loop

    for (int j = 0; j < 2; j++) {
        f1();
        .... here modify f1 code
    }

then valgrind inserts the code of f1 only once, and it prints the same value twice, whatever the parameter --vex-guest-chase.

The code that does the self-modifying-code check is in the function needs_self_check in m_translate.c. This function is called by VEX.

Philippe

On Fri, 2020-08-28 at 12:15 +0200, Duc Nguyen wrote:
> Hello everyone,
>
> I am trying the self-modifying-code check of Valgrind but I am not sure
> if I understand the definition of self-modifying code in Valgrind correctly.
>
> I prepared an example (see below) that has a function f1 that is first
> executed in main and outputs a number (4660). Afterward, two instructions
> of f1 are modified, and f1 is then executed one more time. It then outputs
> a number (22068) that is different from the first time.
>
> When I run Valgrind with --smc-check=all and --smc-check=none I do not
> see any difference in the outputs of Valgrind: both times f1 produces the
> two different numbers (i.e., the self-modifying code runs successfully
> whether --smc-check is turned on or off).
>
> Could someone please let me know if this behavior is expected from Valgrind?
>
> I further looked into the source code and found
> valgrind/VEX/priv/guest_generic_bb_to_IR.c, which generates the code to
> check. However, I do not know where such a check is executed. It would be
> great if somebody knows where such a check takes place, and where we can
> modify the source code to simply say, e.g., "self-modifying code found".
>
> Thank you very much in advance.
>
> Best regards,
> Duc
>
> =============================
> Self-modifying-code example
> -------
>
> #include <stdio.h>
> #include <sys/mman.h>
> #include <unistd.h>
>
> __asm__( ".text" );
> __asm__( ".align 4096" );
>
> void f1( void )
> {
>     printf( "%d\n", 0x1234 );
> }
> void f2( void ){
>     printf("this is just a dummy function");
> }
>
> int main( void )
> {
>     int rc;
>     int pagesize;
>     char *p;
>     int i;
>
>     printf( "f1=0x%08X.\n", f1 );
>
>     f1( );
>
>     pagesize = sysconf( _SC_PAGE_SIZE );
>     printf( "pagesize=%d (0x%08X).\n", pagesize, pagesize );
>     if( pagesize == -1 )
>         return( 2 );
>
>     p = (char*) f1;
>     rc = mprotect( p, pagesize, PROT_READ | PROT_WRITE | PROT_EXEC );
>     printf( "rc=%d.\n", rc );
>     if( rc != 0 )
>         return( 2 );
>     printf( "'mprotect()' succeeded.\n" );
>
>     for( i = 0; i+1 < (size_t) f2 - (size_t) f1; i++ ) {
>         if( ((char*) f1)[ i ] == 0x34 && ((char*) f1)[ i+1 ] == 0x12 ) {
>             ((char*) f1)[ i+1 ] = 0x78;  // here performs self-modifying-code
>             ((char*) f1)[ i+1 ] = 0x56;  // here performs self-modifying-code
>         }
>     }
>
>     f1( );  // here the output of f1 will be different from the first f1() call
>
>     printf( "Call succeeded.\n" );
>     return( 0 );
> }
>
> _______________________________________________
> Valgrind-users mailing list
> Val...@li...
> https://lists.sourceforge.net/lists/listinfo/valgrind-users
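A rough sketch of the loop variant Philippe describes, based on the example program in this thread. patch_f1() is simply the program's search-and-patch block wrapped in a function (after the first pass it patches nothing, since the searched-for bytes are gone), and the loop replaces the two separate f1() calls in main(); it is an illustration, not the actual test Philippe ran.

    /* Same search-and-patch loop as in the original example, factored out. */
    static void patch_f1( void )
    {
        int i;
        for( i = 0; i+1 < (size_t) f2 - (size_t) f1; i++ ) {
            if( ((char*) f1)[ i ] == 0x34 && ((char*) f1)[ i+1 ] == 0x12 )
                ((char*) f1)[ i+1 ] = 0x56;
        }
    }

    /* In main(), after the mprotect() call, the call sequence becomes: */
    for( int j = 0; j < 2; j++ ) {
        f1( );
        patch_f1( );   /* modify f1's code between the two calls */
    }

Because f1 is now reached through the loop, valgrind translates it once and reuses the cached translation, so, as Philippe notes, both iterations print the same value regardless of the --vex-guest-chase setting.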
From: Duc N. <duc...@gm...> - 2020-08-28 10:16:21

Hello everyone,

I am trying the self-modifying-code check of Valgrind but I am not sure if I understand the definition of self-modifying code in Valgrind correctly.

I prepared an example (see below) that has a function f1 that is first executed in main and outputs a number (4660). Afterward, two instructions of f1 are modified, and f1 is then executed one more time. It then outputs a number (22068) that is different from the first time.

When I run Valgrind with --smc-check=all and --smc-check=none I do not see any difference in the outputs of Valgrind: both times f1 produces the two different numbers (i.e., the self-modifying code runs successfully whether --smc-check is turned on or off).

Could someone please let me know if this behavior is expected from Valgrind?

I further looked into the source code and found valgrind/VEX/priv/guest_generic_bb_to_IR.c, which generates the code to check. However, I do not know where such a check is executed. It would be great if somebody knows where such a check takes place, and where we can modify the source code to simply say, e.g., "self-modifying code found".

Thank you very much in advance.

Best regards,
Duc

=============================
Self-modifying-code example
-------

#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

__asm__( ".text" );
__asm__( ".align 4096" );

void f1( void )
{
    printf( "%d\n", 0x1234 );
}
void f2( void ){
    printf("this is just a dummy function");
}

int main( void )
{
    int rc;
    int pagesize;
    char *p;
    int i;

    printf( "f1=0x%08X.\n", f1 );

    f1( );

    pagesize = sysconf( _SC_PAGE_SIZE );
    printf( "pagesize=%d (0x%08X).\n", pagesize, pagesize );
    if( pagesize == -1 )
        return( 2 );

    p = (char*) f1;
    rc = mprotect( p, pagesize, PROT_READ | PROT_WRITE | PROT_EXEC );
    printf( "rc=%d.\n", rc );
    if( rc != 0 )
        return( 2 );
    printf( "'mprotect()' succeeded.\n" );

    for( i = 0; i+1 < (size_t) f2 - (size_t) f1; i++ ) {
        if( ((char*) f1)[ i ] == 0x34 && ((char*) f1)[ i+1 ] == 0x12 ) {
            ((char*) f1)[ i+1 ] = 0x78;  // here performs self-modifying-code
            ((char*) f1)[ i+1 ] = 0x56;  // here performs self-modifying-code
        }
    }

    f1( );  // here the output of f1 will be different from the first f1() call

    printf( "Call succeeded.\n" );
    return( 0 );
}