From: Leon P. <leo...@gm...> - 2023-02-27 21:11:24
Hello, all. I am trying to compile Valgrind 3.20.0 on ARMv7 Linux 2.6.37 (not cross-compiling!). At first, compilation produced a lot of errors with binary constants in the form 0bXXXX, but I replaced them with normal numbers and compilation continued. It then failed with:

../coregrind/link_tool_exe_linux 0x58000000 gcc -std=gnu99 -o memcheck-arm-linux -O2 -g -Wall -Wmissing-prototypes -Wshadow -Wpointer-arith -Wstrict-prototypes -Wmissing-declarations -Wcast-qual -Wwrite-strings -Wformat -Wformat-security -finline-functions -fno-stack-protector -fno-strict-aliasing -fno-builtin -marm -mcpu=cortex-a8 -O2 -static -nodefaultlibs -nostartfiles -u _start memcheck_arm_linux-mc_leakcheck.o memcheck_arm_linux-mc_malloc_wrappers.o memcheck_arm_linux-mc_main.o memcheck_arm_linux-mc_main_asm.o memcheck_arm_linux-mc_translate.o memcheck_arm_linux-mc_machine.o memcheck_arm_linux-mc_errors.o ../coregrind/libcoregrind-arm-linux.a ../VEX/libvex-arm-linux.a -lgcc ../coregrind/libgcc-sup-arm-linux.a

../coregrind/link_tool_exe_linux: line 58: use: command not found
../coregrind/link_tool_exe_linux: line 59: use: command not found
../coregrind/link_tool_exe_linux: line 62: die: command not found
../coregrind/link_tool_exe_linux: line 70: syntax error near unexpected token `$ala'
../coregrind/link_tool_exe_linux: line 70: ` if (length($ala) < 3 || index($ala, "0x") != 0);'

Looking into the link_tool_exe_linux script, I must admit that I did not understand a thing about what should be done... :-( Please, help!

Many thanks ahead.
Leon

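The `use: command not found` and `die: command not found` errors above suggest that a Perl script is being interpreted by the shell. As a first diagnostic step (a hedged sketch only; which interpreter the script requests and whether perl is installed on the box are things to verify, not facts stated in this thread):

    $ head -1 ../coregrind/link_tool_exe_linux   # check which interpreter the script asks for
    $ command -v perl && perl --version          # check that that interpreter actually exists
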
From: Eliot M. <mo...@cs...> - 2023-02-25 22:07:25
On 2/26/2023 4:29 AM, Philippe Waroquiers wrote:
> On Fri, 2023-02-24 at 10:42 -0700, User 10482 wrote:
>> Dear All,
>>
>> I am looking to fix a dangling pointer issue and was pleasantly surprised to find the
>> `who-points-at` functionality in valgrind, which tells the stack variable names (assuming
>> --read-var-info=yes) and any addresses on the heap holding the searched address.
>> [link](https://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands)
>>
>> The tool is just splendid, but I wish there were some way to do it recursively on the heap
>> addresses (i.e. who-points-at on the output of the previous who-points-at) until we get the
>> stack variable names holding the dangling pointers; something like how core-analyzer's
>> `ref` command does. [link](https://core-analyzer.sourceforge.net/index_files/Page600.html)
>
> Yes, a recursive who-points-at would be a nice thing to have.
> I have added this to my list of things to do (one day, whenever I have time :().
>
>> On a side note, is there a way to know which variable/type a heap address points to?
>> That would be helpful too.
>
> The only information valgrind has about a (live) heap block is the stack trace that
> allocated it.
> Valgrind does not know the type of the object for which this memory was allocated.
> Unclear to me how that can be implemented (at least without support from the compiler).

I wonder if gdb (or whatever debugger) info about the types of the pointers would allow providing useful information? Presumably that could be had if the executable had the information kept with it and not stripped.

Best - Eliot Moss

From: Philippe W. <phi...@sk...> - 2023-02-25 17:30:15
On Fri, 2023-02-24 at 10:42 -0700, User 10482 wrote:
> Dear All,
>
> I am looking to fix a dangling pointer issue and was pleasantly surprised to find the
> `who-points-at` functionality in valgrind, which tells the stack variable names (assuming
> --read-var-info=yes) and any addresses on the heap holding the searched address.
> [link](https://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands)
>
> The tool is just splendid, but I wish there were some way to do it recursively on the heap
> addresses (i.e. who-points-at on the output of the previous who-points-at) until we get the
> stack variable names holding the dangling pointers; something like how core-analyzer's
> `ref` command does. [link](https://core-analyzer.sourceforge.net/index_files/Page600.html)

Yes, a recursive who-points-at would be a nice thing to have.
I have added this to my list of things to do (one day, whenever I have time :().

> On a side note, is there a way to know which variable/type a heap address points to?
> That would be helpful too.

The only information valgrind has about a (live) heap block is the stack trace that allocated it.
Valgrind does not know the type of the object for which this memory was allocated.
Unclear to me how that can be implemented (at least without support from the compiler).

> Thanks and have a good day!
>
> best,
> Abhi

From: User 1. <abh...@gm...> - 2023-02-24 17:42:55
Dear All,

I am looking to fix a dangling pointer issue and was pleasantly surprised to find the `who-points-at` functionality in valgrind, which tells the stack variable names (assuming --read-var-info=yes) and any addresses on the heap holding the searched address. [link](https://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands)

The tool is just splendid, but I wish there were some way to do it recursively on the heap addresses (i.e. who-points-at on the output of the previous who-points-at) until we get the stack variable names holding the dangling pointers; something like how core-analyzer's `ref` command does. [link](https://core-analyzer.sourceforge.net/index_files/Page600.html)

On a side note, is there a way to know which variable/type a heap address points to? That would be helpful too.

Thanks and have a good day!

best,
Abhi

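For reference, Memcheck spells the monitor command who_points_at, and it can be driven from gdb via vgdb along roughly these lines (a sketch; the program name and the address/length arguments are placeholders, not values from this thread):

    $ valgrind --vgdb=yes --vgdb-error=0 --read-var-info=yes ./myprog
    (in another terminal)
    $ gdb ./myprog
    (gdb) target remote | vgdb
    (gdb) monitor who_points_at 0x4a2b050 8
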
From: John R. <jr...@bi...> - 2023-02-16 15:04:26
$ cat foo.s
.byte 0x66,0xF,0x3A,0x22
.byte 0,0,0,0
#
# For x86_64 on x86_64
#
$ gcc --version
gcc (GCC) 12.2.1 20221121 (Red Hat 12.2.1-4)
$ gcc -c foo.s
$ file foo.o
foo.o: ELF 64-bit LSB relocatable, x86-64, version 1 (SYSV), stripped
$ gdb foo.o
GNU gdb (GDB) Fedora 12.1-2.fc36
(gdb) x/i 0
   0x0:   pinsrd $0x0,(%rax),%xmm0
(gdb)
   0x6:   add    %al,(%rax)
#
# For i686 on x86_64
#
$ gcc -m32 -c foo.s
$ file foo.o
foo.o: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), stripped
$ gdb foo.o
(gdb) x/i 0
   0x0:   pinsrd $0x0,(%eax),%xmm0
(gdb)
   0x6:   add    %al,(%eax)

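Those bytes are the encoding of the SSE4.1 PINSRD instruction, which the report in this thread shows the 32-bit x86 frontend ("vex x86->IR") treating as unhandled. A hypothetical reproducer along the following lines (an illustrative sketch, not the original poster's code) should hit the same unhandled-instruction path when built as 32-bit SSE4.1 code and run under Valgrind:

    /* build: gcc -m32 -msse4.1 -o repro repro.c ; run: valgrind ./repro */
    #include <smmintrin.h>   /* SSE4.1 intrinsics */
    #include <stdio.h>

    int main(void)
    {
        __m128i v = _mm_setzero_si128();
        v = _mm_insert_epi32(v, 42, 0);          /* compiles to pinsrd (66 0F 3A 22) */
        printf("%d\n", _mm_extract_epi32(v, 0)); /* keep v live so it is not optimised away */
        return 0;
    }
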
From: Paul F. <pj...@wa...> - 2023-02-16 06:53:28
> On 16 Feb 2023, at 00:37, Anand K R <kar...@gm...> wrote:
>
> 0x66 0xF 0x3A 0x22

I can't see what that disassembles to. Can you tell us exactly what CPU this is for, and which OS and compiler you are using? Do you get any call stacks (for Valgrind itself or the test exe)? Lastly, can you provide a small reproducer?

My guess is that somehow you are jumping to a memory location that is not on an instruction boundary. This could be caused by something like stack corruption overwriting a function return address.

A+
Paul

From: Anand K R <kar...@gm...> - 2023-02-15 23:37:54
Hi,

I am getting the following error when using valgrind version 20. It terminates with SIGILL. Any help in resolving this would be appreciated.

vex x86->IR: unhandled instruction bytes: 0x66 0xF 0x3A 0x22

Regards,
Anand

From: Ivica B <ibo...@gm...> - 2023-02-08 20:51:17
Thanks a lot :)

On Wed, Feb 8, 2023, 9:11 PM Paul Floyd <pj...@wa...> wrote:
> On 29-01-23 20:50, Ivica B wrote:
> > Hi Paul!
> >
> > I read the info you provided, but none of the programs actually
> > support detecting cache conflicts.
>
> Hi
>
> No, the tools I suggested would only give an indication; you would then
> have to use your code knowledge and maybe some trial and error to make
> improvements.
>
> Coincidentally, about the same time as you asked about this I watched
> your cppcon talk on observability tools. Thumbs up.
>
> I'm looking at your bugzilla patch and will post feedback.
>
> A+
> Paul

From: Paul F. <pj...@wa...> - 2023-02-08 20:10:18
On 29-01-23 20:50, Ivica B wrote:
> Hi Paul!
>
> I read the info you provided, but none of the programs actually
> support detecting cache conflicts.

Hi

No, the tools I suggested would only give an indication; you would then have to use your code knowledge and maybe some trial and error to make improvements.

Coincidentally, about the same time as you asked about this I watched your cppcon talk on observability tools. Thumbs up.

I'm looking at your bugzilla patch and will post feedback.

A+
Paul

From: Philippe W. <phi...@sk...> - 2023-02-08 08:37:59
If you are envisaging modifying valgrind, you could take some inspiration from the way callgrind can dynamically activate/de-activate tracing. See the callgrind manual (command line options and client requests) for more details.

Philippe

On Wed, 2023-02-08 at 17:47 +1100, Eliot Moss wrote:
> On 2/8/2023 4:10 PM, SAI GOVARDHAN M C PES1UG19EC255PESU ECE Student wrote:
> > Hi,
> >
> > We are students working on memory access analysis, using the Lackey tool in Valgrind.
> > Our memory trace results in a large log file, and we need the trace from discrete points of
> > execution (between 40-60%).
> > Instead of logging completely, and splitting manually, is there a way we can modify the Lackey
> > command to pick from a desired point in the execution?
> >
> > For reference, the command we use is
> > $ valgrind --tool=lackey --trace-mem=yes --log-file=/path_to_log ./program
> >
> > We need to modify this command to trace from 40-60% of the program
>
> If you know the approximate number of memory accesses, you could do something
> as simple as:
>
>     valgrind ... | tail -n +XXX | head -n YYY
>
> to start after XXX lines of output and stop after producing YYY lines. You
> could do something more sophisticated using, say, gawk, to trigger on a
> particular address being accessed, e.g., as an instruction fetch.
>
> This will all slow things down a bit, but might accomplish your goals.
>
> I'm not claiming there isn't some sophisticated way to tell valgrind when
> to start tracing, either. Also, nobody is stopping you from customizing
> the tool yourself :-) ... a mere exercise in programming, no?
>
> Best wishes - EM

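As an illustration of the mechanism Philippe refers to (this is callgrind rather than lackey, so it limits profile collection rather than memory tracing, but the on/off pattern is the part worth copying; the three phase functions are placeholders):

    #include <valgrind/callgrind.h>
    #include <stdio.h>

    /* Placeholder phases standing in for the real program's work. */
    static void setup_phase(void)        { puts("setup"); }
    static void region_of_interest(void) { puts("hot region"); }
    static void teardown_phase(void)     { puts("teardown"); }

    int main(void)
    {
        setup_phase();
        CALLGRIND_START_INSTRUMENTATION;   /* begin instrumenting here   */
        region_of_interest();              /* roughly the 40-60% window  */
        CALLGRIND_STOP_INSTRUMENTATION;    /* and stop again             */
        teardown_phase();
        return 0;
    }

Run with valgrind --tool=callgrind --instr-atstart=no ./program so that nothing is instrumented until the first client request.
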
From: Eliot M. <mo...@cs...> - 2023-02-08 06:47:29
On 2/8/2023 4:10 PM, SAI GOVARDHAN M C PES1UG19EC255PESU ECE Student wrote:
> Hi,
>
> We are students working on memory access analysis, using the Lackey tool in Valgrind.
> Our memory trace results in a large log file, and we need the trace from discrete points of
> execution (between 40-60%).
> Instead of logging completely, and splitting manually, is there a way we can modify the Lackey
> command to pick from a desired point in the execution?
>
> For reference, the command we use is
> $ valgrind --tool=lackey --trace-mem=yes --log-file=/path_to_log ./program
>
> We need to modify this command to trace from 40-60% of the program

If you know the approximate number of memory accesses, you could do something as simple as:

    valgrind ... | tail -n +XXX | head -n YYY

to start after XXX lines of output and stop after producing YYY lines. You could do something more sophisticated using, say, gawk, to trigger on a particular address being accessed, e.g., as an instruction fetch.

This will all slow things down a bit, but might accomplish your goals.

I'm not claiming there isn't some sophisticated way to tell valgrind when to start tracing, either. Also, nobody is stopping you from customizing the tool yourself :-) ... a mere exercise in programming, no?

Best wishes - EM

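A concrete version of that suggestion, operating on a trace already written with --log-file (a sketch; the lackey.log name and the 40-60% window are assumptions):

    total=$(wc -l < lackey.log)      # total lines in the trace
    start=$(( total * 40 / 100 ))    # skip the first ~40%
    count=$(( total * 20 / 100 ))    # keep the next ~20%
    tail -n +"$start" lackey.log | head -n "$count" > lackey-40-60.log
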
From: SAI G. M C P. E. S. <sai...@pe...> - 2023-02-08 05:40:47
Hi,

We are students working on memory access analysis, using the Lackey tool in Valgrind. Our memory trace results in a large log file, and we need the trace from discrete points of execution (between 40-60%). Instead of logging completely and splitting manually, is there a way we can modify the Lackey command to pick from a desired point in the execution?

For reference, the command we use is

$ valgrind --tool=lackey --trace-mem=yes --log-file=/path_to_log ./program

We need to modify this command to trace from 40-60% of the program.

Regards

From: Eliot M. <mo...@cs...> - 2023-01-29 20:40:00
On 1/30/2023 7:08 AM, Ivica B wrote:
> Can you please share the instructions on how to do it?
>
> On Sun, Jan 29, 2023, 9:07 PM Eliot Moss <mo...@cs...> wrote:
>
>     I have used lackey to get traces, which I have fed into
>     a cache model to detect conflicts and such. You could
>     also start with the lackey code and build the cache model
>     into the tool (which a student of mine did at one point).

Lackey is one of the built-in valgrind tools. It has instructions. It produces a trace giving one memory access per line, indicating whether the access is an instruction fetch, memory read, memory write, or both read and write, along with the address and size. You write a program to parse that and run your own model of whatever cache you're concerned with. Doing that part is for you to figure out. You do need to know the details of the cache you're going to model.

There may be programs or libraries out there for analyzing address traces, but this would not be the list to find them. Sorry, but I'm not prepared to go through how to code a cache model ... EM

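A minimal parsing sketch for that trace format, assuming the --trace-mem layout of one access per line ("I addr,size" for an instruction fetch, " L addr,size", " S addr,size" and " M addr,size" for loads, stores and modifies); the cache model itself is left as the placeholder Eliot describes:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256], kind;
        unsigned long addr, size;

        while (fgets(line, sizeof line, stdin)) {
            /* Lines that are not accesses (e.g. Valgrind's own "==pid==" output) won't match. */
            if (sscanf(line, " %c %lx,%lu", &kind, &addr, &size) != 3 || !strchr("ILSM", kind))
                continue;
            /* kind: 'I' = fetch, 'L' = load, 'S' = store, 'M' = modify.     */
            /* Feed (kind, addr, size) into the cache model of your choice.  */
        }
        return 0;
    }
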
From: Eliot M. <mo...@cs...> - 2023-01-29 20:25:08
I have used lackey to get traces, which I have fed into a cache model to detect conflicts and such. You could also start with the lackey code and build the cache model into the tool (which a student of mine did at one point).

Regards - Eliot Moss

From: Ivica B <ibo...@gm...> - 2023-01-29 20:09:20
Can you please share the instructions on how to do it?

On Sun, Jan 29, 2023, 9:07 PM Eliot Moss <mo...@cs...> wrote:
> I have used lackey to get traces, which I have fed into
> a cache model to detect conflicts and such. You could
> also start with the lackey code and build the cache model
> into the tool (which a student of mine did at one point).
>
> Regards - Eliot Moss

From: Ivica B <ibo...@gm...> - 2023-01-29 19:51:15
Hi Paul!

I read the info you provided, but none of the programs actually support detecting cache conflicts.

Performance counters can detect cache misses, similarly to cachegrind, but they cannot distinguish between cache misses related to cache conflicts and other cache misses.

pahole is a tool with a completely different purpose, namely detecting padding in data structures. This isn't related to cache conflicts in any way.

DHAT provides useful information by allowing you to assess which data is accessed more frequently, but you need additional data to verify that the hot data is not evicted from the cache too soon.

On Sun, Jan 29, 2023 at 4:25 PM Paul Floyd <pj...@wa...> wrote:
>
> On 29-01-23 14:31, Ivica B wrote:
> > Hi!
> >
> > I am looking for a tool that can detect cache conflicts, but I am not
> > finding any. There are a few that are mostly academic, and thus not
> > maintained. I think it is important for the performance analysis
> > community to have a tool that to some extent can detect cache
> > conflicts. Is it possible to implement support for detecting source
> > code lines where cache conflicts occur? More info on cache conflicts
> > below.
>
> [snip]
>
> I agree that this is an interesting topic. If anyone else has ideas I'm
> all ears.
>
> My recommendations for this are:
>
> 1/ PMU/PMC (performance monitoring unit/counter) event counting tools
> (perf record on Linux, pmcstat on FreeBSD, Oracle Studio collect on
> Solaris, don't know for macOS). These can record events such as cache
> misses with the associated callstacks. You can then use tools like
> HotSpot and perfgrind/kcachegrind (I have used HotSpot but not perfgrind).
>
> The big advantage of this is that the PMCs are part of the hardware and
> the overhead of doing this is minor. The only slight limitation is that
> the number of counters is limited.
>
> 2/ pahole
> https://github.com/acmel/dwarves
> A really nice binary analysis tool. It will analyze your binary (with
> debuginfo) and generate a report for all structures showing holes,
> padding and cache lines. It can even generate modified source with
> members reordered to improve the packing. However, as this is a static
> tool working only on the data structures, it knows nothing about your
> access patterns.
>
> 3/ DHAT
> One of the Valgrind tools. This profiles heap memory. If the block is
> less than 1k it will also generate a kind of ascii-html heat map. That
> map is an aggregate, but you can usually guess which offsets get hit the
> most together.
>
> Cachegrind doesn't really do this with the kind of accuracy that PMCs
> do. It has a reduced model of the cache and has a basic branch
> predictor. I don't know if or how speculative execution affects the
> cache hit rate, but Valgrind doesn't do any of that.
>
> A+
> Paul

From: John R. <jr...@bi...> - 2023-01-29 17:25:43
On 2023-01-29, Paul Floyd wrote:
> My recommendations for this are:
>
> 1/ PMU/PMC (performance monitoring unit/counter) event counting tools
> (perf record on Linux, pmcstat on FreeBSD, Oracle Studio collect on
> Solaris, don't know for macOS). These can record events such as cache
> misses with the associated callstacks. You can then use tools like
> HotSpot and perfgrind/kcachegrind (I have used HotSpot but not perfgrind).
>
> The big advantage of this is that the PMCs are part of the hardware and
> the overhead of doing this is minor. The only slight limitation is that
> the number of counters is limited.

Another disadvantage: the hardware does not know which accesses belong to the target code versus which accesses belong to the code of valgrind itself. Even if the hardware could separate accesses on that basis, it does not know about stack frames. Allocating a stack frame shortly after CALL, and discarding it shortly before RETURN, can be significant reasons for cache misses, either immediately or in the near future.

Then there are system calls, which might significantly alter cache contents. Sometimes the resulting cache misses should be included (they most certainly do affect wall clock time), but in some other cases you may wish that the operating system was ignored.

If the target program uses threads, then using memory for inter-thread communication (semaphore, mutex, pipeline, etc.) becomes another factor.

From: Paul F. <pj...@wa...> - 2023-01-29 15:24:07
On 29-01-23 14:31, Ivica B wrote:
> Hi!
>
> I am looking for a tool that can detect cache conflicts, but I am not
> finding any. There are a few that are mostly academic, and thus not
> maintained. I think it is important for the performance analysis
> community to have a tool that to some extent can detect cache
> conflicts. Is it possible to implement support for detecting source
> code lines where cache conflicts occur? More info on cache conflicts
> below.

[snip]

I agree that this is an interesting topic. If anyone else has ideas I'm all ears.

My recommendations for this are:

1/ PMU/PMC (performance monitoring unit/counter) event counting tools (perf record on Linux, pmcstat on FreeBSD, Oracle Studio collect on Solaris, don't know for macOS). These can record events such as cache misses with the associated callstacks. You can then use tools like HotSpot and perfgrind/kcachegrind (I have used HotSpot but not perfgrind).

The big advantage of this is that the PMCs are part of the hardware and the overhead of doing this is minor. The only slight limitation is that the number of counters is limited.

2/ pahole
https://github.com/acmel/dwarves
A really nice binary analysis tool. It will analyze your binary (with debuginfo) and generate a report for all structures showing holes, padding and cache lines. It can even generate modified source with members reordered to improve the packing. However, as this is a static tool working only on the data structures, it knows nothing about your access patterns.

3/ DHAT
One of the Valgrind tools. This profiles heap memory. If the block is less than 1k it will also generate a kind of ascii-html heat map. That map is an aggregate, but you can usually guess which offsets get hit the most together.

Cachegrind doesn't really do this with the kind of accuracy that PMCs do. It has a reduced model of the cache and has a basic branch predictor. I don't know if or how speculative execution affects the cache hit rate, but Valgrind doesn't do any of that.

A+
Paul

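For concreteness, typical invocations of the three suggestions might look like this (a sketch; the program and structure names are placeholders):

    $ perf record -e cache-misses -g ./myprog && perf report   # 1/ PMC sampling with callstacks
    $ pahole -C my_struct ./myprog                             # 2/ holes, padding and cache-line layout
    $ valgrind --tool=dhat ./myprog                            # 3/ DHAT heap access profile
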
From: Ivica B <ibo...@gm...> - 2023-01-29 13:31:25
Hi!

I am looking for a tool that can detect cache conflicts, but I am not finding any. There are a few that are mostly academic, and thus not maintained. I think it is important for the performance analysis community to have a tool that to some extent can detect cache conflicts. Is it possible to implement support for detecting source code lines where cache conflicts occur? More info on cache conflicts below.

=== What are cache conflicts? ===

A cache conflict happens when a cache line is brought from memory into the cache, but very soon has to be evicted back to main memory because another cache line is mapped to the same entry. The problem with detecting cache conflicts is that it is normal for one cache line to be evicted when it is replaced by another. A cache conflict is therefore an outlier: the cache line spent very little time in the cache before it got evicted.

=== How to detect cache conflicts? ===

As I said, there are a few science papers that talk about it, and there are probably a few different approaches. One approach is to count the amount of time a cache line has been sitting in the cache before it got evicted. For each instruction that causes an eviction, we record how long the evicted cache line spent in the cache, and then build a statistic. Instructions that mostly evict short-lived cache lines are the ones where cache conflicts are most likely to happen.

=========================

Please comment!

Ivica

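A toy version of that statistic might look like the following sketch (not an existing tool: the direct-mapped geometry, the "young" threshold, and reading one data address per line from a lackey-derived trace are all assumptions made for illustration):

    #include <stdio.h>

    #define LINE_BITS 6      /* 64-byte cache lines                       */
    #define NSETS     512    /* 32 KiB direct-mapped, for illustration    */
    #define YOUNG     100    /* "evicted too soon" threshold, in accesses */

    struct line { unsigned long tag; unsigned long fill_time; int valid; };

    static struct line cache[NSETS];
    static unsigned long now, evictions, young_evictions;

    static void touch(unsigned long addr)
    {
        unsigned long lineno = addr >> LINE_BITS;
        struct line *l = &cache[lineno % NSETS];

        now++;
        if (l->valid && l->tag != lineno) {      /* a different line occupies this set */
            evictions++;
            if (now - l->fill_time < YOUNG)      /* short-lived: a likely conflict     */
                young_evictions++;
        }
        if (!l->valid || l->tag != lineno) {     /* (re)fill the set */
            l->tag = lineno;
            l->fill_time = now;
            l->valid = 1;
        }
    }

    int main(void)
    {
        unsigned long addr;

        /* One hex data address per line on stdin, e.g. extracted from a lackey trace. */
        while (scanf("%lx", &addr) == 1)
            touch(addr);

        printf("%lu evictions, %lu of them of lines younger than %d accesses\n",
               evictions, young_evictions, YOUNG);
        return 0;
    }

Attributing the short-lived evictions back to source code lines would additionally require carrying the accessing instruction's address (which lackey also records) through the model.
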
From: Gordon M. <gor...@gm...> - 2023-01-16 22:05:54
On 2023-01-16 13:02, Gordon Messmer wrote:
> Can anyone suggest why valgrind prints so many loss records for this
> particular leak?

Well, now I feel very silly, because these loss records are *not* 100% identical, and valgrind is actually reporting that rpmluaNew makes more than 100 separate allocations. Sorry for the noise.

From: Paul F. <pj...@wa...> - 2023-01-16 22:04:24
On 16-01-23 22:02, Gordon Messmer wrote:
> Can anyone suggest why valgrind prints so many loss records for this
> particular leak?

In my experience, the most likely reason that you are getting a large number of leaks reported by Valgrind is that there is a large number of leaks.

You need more stack depth to see all of the stack. Otherwise you can use gdb and put a breakpoint on malloc to confirm the allocations.

A+
Paul

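In practice those two suggestions would look roughly like this (a sketch; packagekitd and rpmluaNew come from this thread, the rest is an assumption about how the daemon is run):

    $ valgrind --leak-check=full --num-callers=40 ./packagekitd   # deeper stacks in each loss record
    $ gdb ./packagekitd
    (gdb) break rpmluaNew          # or: break malloc
    (gdb) run
    (gdb) backtrace                # inspect each hit to count the allocations
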
From: Gordon M. <gor...@gm...> - 2023-01-16 21:02:23
I'm working on eliminating memory leaks in PackageKit, and I'd like to know more about whether I should suppress one of the results I'm getting.

The code in question is dynamically loaded at runtime, but as far as I know, it's only loaded once and unloaded at exit. When I exit packagekitd, after even a very short run, I get one particular stack over a hundred times in valgrind's output. If I got this stack once, then I would conclude that it was a leak I could ignore: memory allocated for global state one time. But because it's reported repeatedly, I'm not sure how to interpret the output.

The other reason that I find this very strange is that there are actually two mechanisms that should each individually guarantee that this allocation only happens once. The rpm Lua INITSTATE should only call rpmluaNew if the static variable globalLuaState is null, and libdnf calls rpmReadConfigFiles in a g_once_init_enter block.

Can anyone suggest why valgrind prints so many loss records for this particular leak? Links for the two functions that I mentioned follow, along with one of the loss records printed by valgrind.

https://github.com/rpm-software-management/rpm/blob/master/rpmio/rpmlua.c#L93
https://github.com/rpm-software-management/libdnf/blob/dnf-4-master/libdnf/dnf-context.cpp#L400

==49724== 24 bytes in 1 blocks are possibly lost in loss record 1,247 of 4,550
==49724==    at 0x484378A: malloc (vg_replace_malloc.c:392)
==49724==    by 0x484870B: realloc (vg_replace_malloc.c:1451)
==49724==    by 0x14F60600: luaM_malloc_ (lmem.c:192)
==49724==    by 0x14F6B047: UnknownInlinedFun (ltable.c:490)
==49724==    by 0x14F6B047: UnknownInlinedFun (ltable.c:478)
==49724==    by 0x14F6B047: luaH_resize (ltable.c:558)
==49724==    by 0x14F4CE34: lua_createtable (lapi.c:772)
==49724==    by 0x14F68F43: UnknownInlinedFun (loadlib.c:732)
==49724==    by 0x14F68F43: luaopen_package (loadlib.c:740)
==49724==    by 0x14F5A671: UnknownInlinedFun (ldo.c:507)
==49724==    by 0x14F5A671: luaD_precall (ldo.c:573)
==49724==    by 0x14F522D7: UnknownInlinedFun (ldo.c:608)
==49724==    by 0x14F522D7: UnknownInlinedFun (ldo.c:628)
==49724==    by 0x14F522D7: lua_callk (lapi.c:1022)
==49724==    by 0x14F5280B: luaL_requiref (lauxlib.c:976)
==49724==    by 0x14F5D6E3: luaL_openlibs (linit.c:61)
==49724==    by 0x14815163: rpmluaNew (rpmlua.c:128)
==49724==    by 0x14815340: UnknownInlinedFun (rpmlua.c:96)
==49724==    by 0x14815340: rpmluaGetGlobalState (rpmlua.c:93)
==49724==    by 0x14B83E4C: rpmReadConfigFiles (rpmrc.c:1662)
==49724==    by 0x146EA173: dnf_context_globals_init (in /usr/lib64/libdnf.so.2)
==49724==    by 0x1475B155: ??? (in /usr/lib64/libdnf.so.2)
==49724==    by 0x1475B66A: libdnf::getUserAgent(std::map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const&) (in /usr/lib64/libdnf.so.2)
==49724==    by 0x1475BC99: libdnf::getUserAgent[abi:cxx11]() (in /usr/lib64/libdnf.so.2)
==49724==    by 0x146EC07F: ??? (in /usr/lib64/libdnf.so.2)
==49724==    by 0x4A5B0E7: g_type_create_instance (gtype.c:1931)
==49724==    by 0x4A40C1F: g_object_new_internal (gobject.c:2228)
==49724==    by 0x4A42247: g_object_new_with_properties (gobject.c:2391)
==49724==    by 0x4A42FF0: g_object_new (gobject.c:2037)
==49724==    by 0x146F2375: dnf_context_new (in /usr/lib64/libdnf.so.2)
==49724==    by 0x48616BB: pk_backend_ensure_default_dnf_context (pk-backend-dnf.c:225)
==49724==    by 0x486757D: pk_backend_initialize (pk-backend-dnf.c:289)

From: <569...@qq...> - 2022-11-23 11:52:41
I got the reply from the OpenMP team; it said:

"The code you have sent should not cause the issue, as you are not doing any memory allocations. The allocation is coming from a data structure that GCC uses internally to keep track of task dependences. It looks like the data structure is allocated when the OpenMP implementation is initialized and it is not released before the program terminates."

So the code has no issues.

------------------ Original ------------------
From: "Floyd, Paul" <pj...@wa...>
Date: Wed, Nov 23, 2022 07:49 PM
To: "valgrind-users" <val...@li...>
Subject: Re: [Valgrind-users] client program compiled with pie

On 17/11/2022 19:22, Mark Roberts wrote:
> How do I find the loaded address of a client program that was compiled
> with -pie? I.e., how do I map the current execution address - such as
> 0x4021151 - to the address in the elf file - such as 0x1193? With -nopie
> the two are identical.

Hi

Do the address space maps that you get when running with -d do what you want?

A+
Paul

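If that one-time libgomp allocation is considered acceptable, one way to silence it is a Memcheck suppression along these lines (a sketch only; the allocator frame, the leak kind matched and the libgomp object pattern are assumptions to be filled in from the actual loss record, which is not shown in this thread):

    {
       libgomp-task-depend-init
       Memcheck:Leak
       fun:malloc
       ...
       obj:*/libgomp.so*
    }

Saved to a file and passed to valgrind with --suppressions=libgomp.supp.
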
From: Floyd, P. <pj...@wa...> - 2022-11-23 11:49:44
On 17/11/2022 19:22, Mark Roberts wrote:
> How do I find the loaded address of a client program that was compiled
> with -pie? I.e., how do I map the current execution address - such as
> 0x4021151 - to the address in the elf file - such as 0x1193? With -nopie
> the two are identical.

Hi

Do the address space maps that you get when running with -d do what you want?

A+
Paul

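A rough manual cross-check (a sketch; <pid> and myprog are placeholders): for a PIE executable the file address is the runtime address minus the base at which the executable was mapped, which can be read from the process's memory map and compared against the ELF program headers.

    $ grep myprog /proc/<pid>/maps | head -1   # lowest mapping of the executable gives the load base
    $ readelf -lW ./myprog | grep LOAD         # file p_vaddr of each loadable segment
    (file address = runtime address - load base, for the segment containing it)
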
From: <569...@qq...> - 2022-11-23 00:22:40
Hi,

The OS is CentOS 7.6.
The CPU is an ARM Kunpeng 920, and I have tried it on an Intel 8260 with the same result.
gcc 10.2.1, Valgrind 3.16.1; the OpenMP runtime is the default version working with gcc 10.2.1.
You need to run the test several times to get the error.

------------------ Original ------------------
From: "Floyd, Paul" <pj...@wa...>
Date: Tue, Nov 22, 2022 05:02 PM
To: "valgrind-users" <val...@li...>
Subject: Re: [Valgrind-users] A weird memory leak with openmp task depend

Hi

You need to tell us more details

Which OS?
Which version of Valgrind?
What CPU?
Which compiler?
Which version of OMP?

A+
Paul
