From: Tom H. <to...@co...> - 2019-01-25 10:21:46
On 25/01/2019 10:00, Padala Dileep wrote:
> My system has 4GB memory (arm-linux). But when I try to run
> valgrind, the process becomes very slow, and it gets rebooted with a
> message "valgrind: the 'impossible' happened".
>
> How much RAM is expected to be present in the system to run valgrind
> for 15-20 mins at least?

There's no fixed number, because it depends entirely on what you are running valgrind on and which tool you are using. With the default tool (memcheck), which you appear to be using, you will typically need a little over double the memory your program would need without valgrind, because memcheck keeps shadow state recording which bits of your program's memory are initialised.

Tom

--
Tom Hughes (to...@co...)
http://compton.nu/
From: Alan C. <ala...@gm...> - 2019-01-25 10:10:35
I run it often on my Raspberry Pi 3b with 1 GB, never thought about it.

Sent from my Motorola XT1527

On Fri, Jan 25, 2019, 5:02 AM Padala Dileep <pad...@gm...> wrote:
> [original message quoted in full; see Padala Dileep's post below]
From: Padala D. <pad...@gm...> - 2019-01-25 10:00:42
Hi,

My system has 4GB memory (arm-linux). But when I try to run valgrind, the process becomes very slow, and it gets rebooted with a message "valgrind: the 'impossible' happened".

How much RAM is expected to be present in the system to run valgrind for 15-20 mins at least?

If I stop the process before 100% of memory is used up, it dumps the evaluation done up to that point. But that is only for 5 mins of run, and it has not hit all the threads, as the process is running slowly.

==1382== Memcheck, a memory error detector
==1382== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==1382== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==1382== Command: Frm20xd
==1382==
--1382-- VALGRIND INTERNAL ERROR: Valgrind received a signal 11 (SIGSEGV) - exiting
--1382-- si_code=1; Faulting address: 0xFFFF0FE3; sp: 0x426987b0

valgrind: the 'impossible' happened:
   Killed by fatal signal

host stacktrace:
==1382==    at 0x58184470: getUIntLittleEndianly (guest_arm_toIR.c:196)
==1382==    by 0x58184470: disInstr_ARM_WRK.isra.36 (guest_arm_toIR.c:16118)
==1382==    by 0x5818E2B3: disInstr_ARM (guest_arm_toIR.c:23690)
==1382==    by 0x581501D3: bb_to_IR (guest_generic_bb_to_IR.c:365)
==1382==    by 0x581328AF: LibVEX_FrontEnd (main_main.c:560)
==1382==    by 0x58133043: LibVEX_Translate (main_main.c:1185)
==1382==    by 0x58059CBB: vgPlain_translate (m_translate.c:1813)
==1382==    by 0x58095F53: handle_chain_me (scheduler.c:1134)
==1382==    by 0x58097FFF: vgPlain_scheduler (scheduler.c:1483)
==1382==    by 0x580F0F27: thread_wrapper (syswrap-linux.c:103)
==1382==    by 0x580F0F27: run_a_thread_NORETURN (syswrap-linux.c:156)
==1382==    by 0xFFFFFFFF: ???

Once I stop the process before it gets exited like above, it dumps the evaluation it was doing:

==1375== Process terminating with default action of signal 2 (SIGINT)
==1375==    at 0x53B90E8: std::string::_Rep::_S_create(unsigned int, unsigned int, std::allocator<char> const&) (in /lib/libstdc++.so.6.0.17)
==1375==    by 0x7DD155E7: ???
==1375==
==1375== HEAP SUMMARY:
==1375==     in use at exit: 16,558 bytes in 292 blocks
==1375==   total heap usage: 1,449 allocs, 1,157 frees, 53,262 bytes allocated
==1375==
==1375== LEAK SUMMARY:
==1375==    definitely lost: 0 bytes in 0 blocks
==1375==    indirectly lost: 0 bytes in 0 blocks
==1375==      possibly lost: 0 bytes in 0 blocks
==1375==    still reachable: 16,558 bytes in 292 blocks
==1375==                       of which reachable via heuristic:
==1375==                         stdstring : 5,178 bytes in 194 blocks
==1375==         suppressed: 0 bytes in 0 blocks
==1375== Rerun with --leak-check=full to see details of leaked memory
==1375==
==1375== For counts of detected and suppressed errors, rerun with: -v
==1375== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)

Thanks & Regards,
Dileep Padala
From: Sean M. <se...@ro...> - 2019-01-21 16:57:16
On Wed, 16 Jan 2019 03:33:19 +0000, Alexander Aron said:
> [quoted message; see Alexander Aron's post below]

Alexander,

Curious what version of macOS you are using now? Last I tried, valgrind can't even launch TextEdit on 10.12:
<https://bugs.kde.org/show_bug.cgi?id=399504>

I guess you are running trivial examples for demonstration purposes?

Cheers,

--
____________________________________________________________
Sean McBride, B. Eng                 se...@ro...
Rogue Research                       www.rogue-research.com
Mac Software Developer               Montréal, Québec, Canada
From: Alexander A. <am...@uc...> - 2019-01-16 06:07:26
Hello Valgrind Community,

I'm a student working in IT at The University of Chicago, and our team has been tasked with updating our Mac lab machines to Mac OS Mojave. Some of our professors teach Valgrind in various courses as a useful debugging tool to help students catch memory bugs in their programs - I myself have benefited greatly from its use. Currently, the only thing preventing us from completing the Mojave transition is the absence of Valgrind support in Mojave. If I recall correctly, the same thing happened with Apple's release of Sierra a few years ago.

Is there any expected timeline of support or further information anyone can provide me regarding Valgrind on Mojave? Thank you!

Thanks,
Alexander Aron
am...@uc...
University of Chicago '19
B.S. Computer Science
B.A. Economics
From: Nikolaus R. <Nik...@ra...> - 2018-12-28 10:02:37
Hello,

I am struggling to understand a backtrace given to me by Helgrind, because it seems to be impossible:

==16699== Helgrind, a thread error detector
==16699== Copyright (C) 2007-2015, and GNU GPL'd, by OpenWorks LLP et al.
==16699== Using Valgrind-3.12.0.SVN and LibVEX; rerun with -h for copyright info
==16699== Command: ./testfs --noblock /lib mnt
==16699== Parent PID: 3636
[...]
==16699== Lock at 0x6520E10 was first observed
==16699==    at 0x4C3010C: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==16699==    by 0x1146B1: __gthread_mutex_lock (gthr-default.h:748)
==16699==    by 0x1146B1: lock (std_mutex.h:103)
==16699==    by 0x1146B1: lock_guard (std_mutex.h:162)
==16699==    by 0x1146B1: sfs_opendir(fuse_req*, unsigned long, fuse_file_info*) (testfs.cpp:870)
==16699==    by 0x4E5325B: do_opendir (fuse_lowlevel.c:1442)
==16699==    by 0x4E54170: fuse_session_process_buf_int (fuse_lowlevel.c:2579)
==16699==    by 0x4E4FE00: fuse_do_work (fuse_loop_mt.c:163)
==16699==    by 0x4C32D06: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==16699==    by 0x507F493: start_thread (pthread_create.c:333)
==16699==    by 0x5916ACE: clone (clone.S:97)
==16699== Address 0x6520e10 is 16 bytes inside a block of size 56 alloc'd
==16699==    at 0x4C2D8CF: operator new(unsigned long, std::nothrow_t const&) (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==16699==    by 0x1147F0: sfs_opendir(fuse_req*, unsigned long, fuse_file_info*) (testfs.cpp:861)
==16699==    by 0x4E5325B: do_opendir (fuse_lowlevel.c:1442)
==16699==    by 0x4E54170: fuse_session_process_buf_int (fuse_lowlevel.c:2579)
==16699==    by 0x4E4FE00: fuse_do_work (fuse_loop_mt.c:163)
==16699==    by 0x4C32D06: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==16699==    by 0x507F493: start_thread (pthread_create.c:333)
==16699==    by 0x5916ACE: clone (clone.S:97)
==16699== Block was alloc'd by thread #4
==16699==
==16699== Lock at 0x654A250 was first observed
==16699==    at 0x4C3010C: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==16699==    by 0x1146B1: __gthread_mutex_lock (gthr-default.h:748)
==16699==    by 0x1146B1: lock (std_mutex.h:103)
==16699==    by 0x1146B1: lock_guard (std_mutex.h:162)
==16699==    by 0x1146B1: sfs_opendir(fuse_req*, unsigned long, fuse_file_info*) (testfs.cpp:870)
==16699==    by 0x4E5325B: do_opendir (fuse_lowlevel.c:1442)
==16699==    by 0x4E54170: fuse_session_process_buf_int (fuse_lowlevel.c:2579)
==16699==    by 0x4E4FE00: fuse_do_work (fuse_loop_mt.c:163)
==16699==    by 0x4C32D06: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==16699==    by 0x507F493: start_thread (pthread_create.c:333)
==16699==    by 0x5916ACE: clone (clone.S:97)
==16699== Address 0x654a250 is 16 bytes inside a block of size 40 alloc'd
==16699==    at 0x4C2D63F: operator new(unsigned long) (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==16699==    by 0x116979: allocate (new_allocator.h:104)
==16699==    by 0x116979: allocate (alloc_traits.h:436)
==16699==    by 0x116979: _M_allocate_node<const std::piecewise_construct_t&, std::tuple<const std::pair<long unsigned int, long unsigned int>&>, std::tuple<> > (hashtable_policy.h:1947)
==16699==    by 0x116979: operator[] (hashtable_policy.h:595)
==16699==    by 0x116979: operator[] (unordered_map.h:904)
==16699==    by 0x116979: sfs_do_lookup(fuse_req*, unsigned long, char const*, fuse_entry_param*, bool) [clone .constprop.982] (testfs.cpp:525)
==16699==    by 0x116F5D: sfs_do_readdir(fuse_req*, unsigned long, unsigned long, long, fuse_file_info*, int) (testfs.cpp:950)
==16699==    by 0x4E51789: do_readdirplus (fuse_lowlevel.c:1470)
==16699==    by 0x4E54170: fuse_session_process_buf_int (fuse_lowlevel.c:2579)
==16699==    by 0x4E4FE00: fuse_do_work (fuse_loop_mt.c:163)
==16699==    by 0x4C32D06: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==16699==    by 0x507F493: start_thread (pthread_create.c:333)
==16699==    by 0x5916ACE: clone (clone.S:97)
==16699== Block was alloc'd by thread #6
[...]

Note that both locks were first seen at testfs.cpp:870, yet (according to Valgrind) the first lock was allocated at testfs.cpp:861 and the second at testfs.cpp:525. The relevant code in testfs.cpp looks like this:

static void sfs_opendir(fuse_req_t req, fuse_ino_t ino, fuse_file_info *fi) {
    [...]
    auto d = new (nothrow) DirHandle;   // === line 861 ===
    if (d == nullptr) {
        fuse_reply_err(req, ENOMEM);
        return;
    }

    // Make Helgrind happy - it can't know that there's an implicit
    // synchronization due to the fact that other threads cannot
    // access d until we've called fuse_reply_*.
    lock_guard<mutex> g {d->m};         // === line 870 ===

In other words, a lock that is first seen at line 870 cannot have been allocated anywhere other than line 861. It is a fresh, local variable pointing at a newly allocated buffer. Therefore, it seems to me that the backtrace given for the allocation of the second lock cannot possibly be correct. Am I missing something?

Best,
-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

»Time flies like an arrow, fruit flies like a Banana.«
From: Nikolaus R. <Nik...@ra...> - 2018-12-27 21:44:16
Hello,

Can someone help me to understand why there is a potential data race here? Access to the data structure is protected by a mutex, and the error message even seems to report that. So why is this a potential race?

==9352== ----------------------------------------------------------------
==9352==
==9352== Lock at 0x634AF38 was first observed
==9352==    at 0x4C3010C: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==9352==    by 0x11A142: __gthread_mutex_lock (gthr-default.h:748)
==9352==    by 0x11A142: std::mutex::lock() (std_mutex.h:103)
==9352==    by 0x116E14: lock_guard (std_mutex.h:162)
==9352==    by 0x116E14: sfs_do_readdir(fuse_req*, unsigned long, unsigned long, long, fuse_file_info*, int) (steamfs.cpp:916)
==9352==    by 0x4E51259: do_readdirplus (fuse_lowlevel.c:1469)
==9352==    by 0x4E53C20: fuse_session_process_buf_int (fuse_lowlevel.c:2555)
==9352==    by 0x4E4F980: fuse_do_work (fuse_loop_mt.c:160)
==9352==    by 0x4C32D06: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==9352==    by 0x507E493: start_thread (pthread_create.c:333)
==9352==    by 0x5915ACE: clone (clone.S:97)
==9352== Address 0x634af38 is 24 bytes inside a block of size 64 alloc'd
==9352==    at 0x4C2D8CF: operator new(unsigned long, std::nothrow_t const&) (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==9352==    by 0x1146D0: sfs_opendir(fuse_req*, unsigned long, fuse_file_info*) (steamfs.cpp:864)
==9352==    by 0x4E52D0B: do_opendir (fuse_lowlevel.c:1441)
==9352==    by 0x4E53C20: fuse_session_process_buf_int (fuse_lowlevel.c:2555)
==9352==    by 0x4E4F980: fuse_do_work (fuse_loop_mt.c:160)
==9352==    by 0x4C32D06: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==9352==    by 0x507E493: start_thread (pthread_create.c:333)
==9352==    by 0x5915ACE: clone (clone.S:97)
==9352== Block was alloc'd by thread #3
==9352==
==9352== Possible data race during read of size 8 at 0x634AF30 by thread #5
==9352== Locks held: 1, at address 0x634AF38
==9352==    at 0x116E15: sfs_do_readdir(fuse_req*, unsigned long, unsigned long, long, fuse_file_info*, int) (steamfs.cpp:917)
==9352==    by 0x4E51259: do_readdirplus (fuse_lowlevel.c:1469)
==9352==    by 0x4E53C20: fuse_session_process_buf_int (fuse_lowlevel.c:2555)
==9352==    by 0x4E4F980: fuse_do_work (fuse_loop_mt.c:160)
==9352==    by 0x4C32D06: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==9352==    by 0x507E493: start_thread (pthread_create.c:333)
==9352==    by 0x5915ACE: clone (clone.S:97)
==9352==
==9352== This conflicts with a previous write of size 8 by thread #3
==9352== Locks held: 1, at address 0x634AF38
==9352==    at 0x116E81: sfs_do_readdir(fuse_req*, unsigned long, unsigned long, long, fuse_file_info*, int) (steamfs.cpp:938)
==9352==    by 0x4E51259: do_readdirplus (fuse_lowlevel.c:1469)
==9352==    by 0x4E53C20: fuse_session_process_buf_int (fuse_lowlevel.c:2555)
==9352==    by 0x4E4F980: fuse_do_work (fuse_loop_mt.c:160)
==9352==    by 0x4C32D06: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==9352==    by 0x507E493: start_thread (pthread_create.c:333)
==9352==    by 0x5915ACE: clone (clone.S:97)
==9352== Address 0x634af30 is 16 bytes inside a block of size 64 alloc'd
==9352==    at 0x4C2D8CF: operator new(unsigned long, std::nothrow_t const&) (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==9352==    by 0x1146D0: sfs_opendir(fuse_req*, unsigned long, fuse_file_info*) (steamfs.cpp:864)
==9352==    by 0x4E52D0B: do_opendir (fuse_lowlevel.c:1441)
==9352==    by 0x4E53C20: fuse_session_process_buf_int (fuse_lowlevel.c:2555)
==9352==    by 0x4E4F980: fuse_do_work (fuse_loop_mt.c:160)
==9352==    by 0x4C32D06: ??? (in /usr/lib/valgrind/vgpreload_helgrind-amd64-linux.so)
==9352==    by 0x507E493: start_thread (pthread_create.c:333)
==9352==    by 0x5915ACE: clone (clone.S:97)
==9352== Block was alloc'd by thread #3
==9352==
==9352== ----------------------------------------------------------------

Best,
-Nikolaus

--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F

»Time flies like an arrow, fruit flies like a Banana.«
From: Philippe W. <phi...@sk...> - 2018-12-26 13:47:42
On Wed, 2018-12-26 at 10:15 +0000, Kumara, Asanka wrote:
> Hi,
> Is there a way to detect ALL dangling pointers (at exit) in valgrind?
>
> e.g.
>
> char* globalVar;
>
> void main()
> {
>     char* z1 = new char[10];
>     char* z2 = z1;
>     globalVar = z2;
>
>     delete[] z1;
> }
>
> At exit of the program, say that "globalVar" holds a pointer to freed memory.

No, this feature does not exist.

I do not think that it would be very difficult to implement: it would basically be the same algorithm as the leak search, but instead of searching for pointers pointing at allocated blocks, the search would find the pointers pointing at recently freed blocks.

That being said, it means that you would only find the dangling pointers pointing into the list of recently freed blocks, as dimensioned with

    --freelist-vol=<number>    volume of freed blocks queue [20000000]

So you would only have a guarantee of detecting ALL dangling pointers if the free-list volume is big enough to hold all blocks that were freed during the run of the program.

And of course, as valgrind implements an optimistic leak search, you would get an 'optimistic dangling pointer search': on a 64-bit platform, any aligned 8 bytes (e.g. an integer, a char array, ...) might by 'luck' be taken as pointing to a recently freed block, giving a false-positive dangling pointer.

Philippe
From: Kumara, A. <as...@ls...> - 2018-12-26 10:15:26
Hi,

Is there a way to detect ALL dangling pointers (at exit) in valgrind?

e.g.

char* globalVar;

void main()
{
    char* z1 = new char[10];
    char* z2 = z1;
    globalVar = z2;

    delete[] z1;
}

At exit of the program, say that "globalVar" holds a pointer to freed memory.
From: Julian S. <js...@ac...> - 2018-12-18 07:27:01
> Are there any such recommended specs for an "ideal" valgrind running
> machine? Assuming this machine will only run valgrind and nothing else.

Although it runs multithreaded programs (obviously), Valgrind itself is single-threaded. This means it can only make use of one core (really, one hardware thread) per process. Hence a machine with lots of slow cores will be at a relative disadvantage to one with a few fast cores. Having a decent-sized last-level cache is probably a good thing too.

That said, YMMV; you need to measure your own workload and draw your own conclusions. V's internals are complex; there are many paths that are optimised and many that aren't, because they don't seem important. It may also be that having a faster processor doesn't help much (because your-app-on-V is memory bound), or that having a processor with a larger last-level cache doesn't help much (because your-app-on-V fits well enough in a smaller LLC).

I should point out too that, over the years, V has acquired a bunch of machinery for collecting performance data on various aspects of its internals, which is sometimes useful for diagnosing performance problems. You could post some of that data, although that would inevitably mean publicly exposing some details of your presumably proprietary application.

Other things to bear in mind:

* Use the latest release.
* Probably the best-tuned port is x86_64-linux; that may or may not be faster than ports for other targets.
* Make sure that your test cases do a decent amount of work per process start. I've seen cases where a test suite starts thousands of processes and does only a very short test in each process. This puts V at a severe disadvantage, because it spends most of its time repeatedly instrumenting the same code over and over again, and relatively little time running that instrumented code.

J
From: John R. <jr...@bi...> - 2018-12-18 04:48:31
> Are there any such recommended specs for an "ideal" valgrind running
> machine? Assuming this machine will only run valgrind and nothing else.

It varies somewhat by tool: memcheck, drd, massif, ... In general, most important first: real RAM at least 2.5 times the total memory size of the processes being monitored at the same time; data+instruction cache as large as possible (*minimum* 6MB [consumer-grade Intel Core i5]); one real CPU (not hyperthreaded) per process being monitored at the same time, plus one real CPU for OS overhead; and then CPU speed as fast as possible.
From: Wouter V. <wou...@gm...> - 2018-12-17 18:32:45
Hi everybody,

We are running valgrind as part of our code analysis tools. We used to run all our tools on a dedicated machine, but we are now moving to cloud machines to be able to scale up our parallel build capabilities. As part of this effort, I am looking at what kind of machine would be most appropriate for each tool.

For example, I have noticed that running cppcheck with 8 threads or with 16 threads does not bring the improvement we expected. However, running it on a faster CPU does make a big difference.

Are there any such recommended specs for an "ideal" valgrind running machine? Assuming this machine will only run valgrind and nothing else.

Regards,
Wouter Vermeiren
From: Julian S. <js...@ac...> - 2018-12-13 08:08:28
On 13/12/2018 07:53, Chris Teague wrote:
> For some reason, valgrind produces errors on this code with -O0, but
> not with any other settings (-O1, -O2, -O3, -Os, -Ofast).

It might just be the case that gcc is clever enough to know that the memset has no visible effect on the program state, since you're just about to free the memory. So it simply removes the memset:

    memset(p_stack, 'A', stack_size);   <-- redundant
    free(p_stack);

Gcc version 6 acquired a new optimisation, "lifetime dead-store elimination", which is suspiciously similar to the above scenario. So it might be that. But the only way to know for sure is to look at the generated machine code. (Maybe try with -O -fno-lifetime-dse?)

J
From: Julian S. <js...@ac...> - 2018-12-13 07:02:14
On 13/12/2018 01:07, Chris Teague wrote:
> 4. Clear out the thread's stack via memset() to ensure I don't leave
> any interesting data in the heap.

If I had to guess, I'd say it was this. When the stack pointer moves up (to deallocate stuff stored on the stack), Memcheck marks the just-freed area as no-access, so it can detect mistaken attempts to access there later. So I imagine it has done this with your stack too. The result is that when you memset-zero your stack, you got the complaint you got.

If you really want to do that, you can #include <valgrind/memcheck.h> and then use VALGRIND_MAKE_MEM_UNDEFINED(stack, length of stack) to mark the area accessible but containing garbage, before you zero it out.

J
From: Chris T. <chr...@gm...> - 2018-12-13 06:53:54
Thank you both for your insights; they are both helpful and much appreciated.

I think Julian is correct: valgrind doesn't know how to deal with this user-allocated heap, and it isn't OK with the memset() past the end of the stack pointer. Using the memcheck.h API to mark it as undefined will probably work for me.

Based on John's comments, I noticed he was specifying optimization level 1 when compiling ("-O"). I had been using the default ("-O0"). For some reason, valgrind produces errors on this code with -O0, but not with any other settings (-O1, -O2, -O3, -Os, -Ofast). I wonder if the optimizations make it harder for valgrind to detect and mark memory beyond the stack pointer as no-access? Regardless, using anything other than -O0 results in clean output in my test case, so I have a second viable workaround.

Thank you very much for your help!

On Wed, Dec 12, 2018 at 10:43 PM Julian Seward <js...@ac...> wrote:
> [quoted message; see Julian Seward's reply above]
From: Chris T. <chr...@gm...> - 2018-12-13 06:39:40
Strange. I only have an Ubuntu 18.04 machine and the latest valgrind there is 3.13. I downloaded the Fedora 28 docker image and built in there, and it definitely complains. Output below:

[root@4fe07db648b4 home]# gcc p.c -lpthread -o p
[root@4fe07db648b4 home]# ./p
start
thread created
thread running
done
[root@4fe07db648b4 home]# valgrind ./p
==137== Memcheck, a memory error detector
==137== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==137== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==137== Command: ./p
==137==
start
thread created
thread running
==137== Invalid write of size 8
==137==    at 0x4C35E5F: memset (vg_replace_strmem.c:1251)
==137==    by 0x400873: main (in /home/p)
==137==  Address 0x54281d8 is 60,760 bytes inside a block of size 65,536 alloc'd
==137==    at 0x4C2EE0B: malloc (vg_replace_malloc.c:299)
==137==    by 0x4007DF: main (in /home/p)
==137==
==137== Invalid write of size 8
==137==    at 0x4C35E54: memset (vg_replace_strmem.c:1251)
==137==    by 0x400873: main (in /home/p)
==137==  Address 0x54281e0 is 60,768 bytes inside a block of size 65,536 alloc'd
==137==    at 0x4C2EE0B: malloc (vg_replace_malloc.c:299)
==137==    by 0x4007DF: main (in /home/p)
==137==
==137== Invalid write of size 8
==137==    at 0x4C35E57: memset (vg_replace_strmem.c:1251)
==137==    by 0x400873: main (in /home/p)
==137==  Address 0x54281e8 is 60,776 bytes inside a block of size 65,536 alloc'd
==137==    at 0x4C2EE0B: malloc (vg_replace_malloc.c:299)
==137==    by 0x4007DF: main (in /home/p)
==137==
==137== Invalid write of size 8
==137==    at 0x4C35E5B: memset (vg_replace_strmem.c:1251)
==137==    by 0x400873: main (in /home/p)
==137==  Address 0x54281f0 is 60,784 bytes inside a block of size 65,536 alloc'd
==137==    at 0x4C2EE0B: malloc (vg_replace_malloc.c:299)
==137==    by 0x4007DF: main (in /home/p)
==137==
done
==137==
==137== HEAP SUMMARY:
==137==     in use at exit: 0 bytes in 0 blocks
==137==   total heap usage: 3 allocs, 3 frees, 66,832 bytes allocated
==137==
==137== All heap blocks were freed -- no leaks are possible
==137==
==137== For counts of detected and suppressed errors, rerun with: -v
==137== ERROR SUMMARY: 37 errors from 4 contexts (suppressed: 0 from 0)
[root@4fe07db648b4 home]# valgrind --version
valgrind-3.14.0

On Wed, Dec 12, 2018 at 9:36 PM John Reiser <jr...@bi...> wrote:
> [quoted message; see John Reiser's post below]
From: John R. <jr...@bi...> - 2018-12-13 05:34:55
|
> - I have reproduced this on valgrind 3.10 and 3.13

It works correctly (no complaints) on x86_64 under valgrind 3.14 as distributed
by Fedora 28 in valgrind-3.14.0-1.fc28.x86_64.rpm .

$ rpm -q valgrind
valgrind-3.14.0-1.fc28.x86_64
$ type valgrind
valgrind is hashed (/usr/bin/valgrind)
$ valgrind --version
valgrind-3.14.0
$ gcc -g -O pthread_test.c -lpthread
$ valgrind ./a.out
==18824== Memcheck, a memory error detector
==18824== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==18824== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==18824== Command: ./a.out
==18824==
start
thread created
thread running
done
==18824==
==18824== HEAP SUMMARY:
==18824==     in use at exit: 0 bytes in 0 blocks
==18824==   total heap usage: 3 allocs, 3 frees, 66,832 bytes allocated
==18824==
==18824== All heap blocks were freed -- no leaks are possible
==18824==
==18824== For counts of detected and suppressed errors, rerun with: -v
==18824== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
$
|
From: Chris T. <chr...@gm...> - 2018-12-13 00:07:42
|
I am getting confusing output from valgrind when running the memory checker on my pthread code. Specifically, it seems not to like the use of the pthread_attr_setstack() function. I have reduced my problem to a small example, pasted at the end of this message. What I'm trying to do is:

1. Allocate a 64KB buffer for use as the thread stack
2. Call pthread_attr_setstack() to set the stack attributes
3. Call pthread_create(), let the thread run to completion, and then pthread_join()
4. Clear out the thread's stack via memset() to ensure I don't leave any interesting data in the heap
5. free() the thread stack

This all seems to work fine, but valgrind complains that the memset() in step 4 is an invalid write. An example of this error is here:

==26698== Invalid write of size 8
==26698==    at 0x4C3665A: memset (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==26698==    by 0x108A4E: main (in /home/cteague/develop/hello/pthread_test)
==26698==  Address 0x545ae88 is 59,912 bytes inside a block of size 65,536 alloc'd
==26698==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==26698==    by 0x1089B6: main (in /home/cteague/develop/hello/pthread_test)

I am very confused. How can it be an invalid write inside my own block of allocated memory? It looks to me like a problem with valgrind - but my experience has been that valgrind is incorrect far less frequently than I am! My command line is this:

valgrind --leak-check=full ./pthread_test

And my test code is below. I would really appreciate any insight into this problem. A few notes:

- I thought the problem was the "stack guard" in pthread, but using pthread_attr_setguardsize() to set it to 0 did not help.
- Yes, I do need to use pthread_attr_setstack(). I am on an embedded platform where I need to specify in which memory the stack will live.
- I have reproduced this on valgrind 3.10 and 3.13

#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <pthread.h>

static void* thread_start(void *arg)
{
    printf("thread running\n");
    return NULL;
}

int main(int argc, char **argv)
{
    pthread_t thd;
    pthread_attr_t thd_attr;
    size_t stack_size = 64*1024;
    void* p_stack;

    printf("start\n");
    p_stack = malloc(stack_size);
    if (NULL != p_stack) {
        if (0 == pthread_attr_init(&thd_attr)) {
            if (0 == pthread_attr_setstack(&thd_attr, p_stack, stack_size)) {
                if (0 == pthread_create(&thd, &thd_attr, thread_start, NULL)) {
                    printf("thread created\n");
                    pthread_join(thd, NULL);
                }
            }
            pthread_attr_destroy(&thd_attr);
        }
        memset(p_stack, 'A', stack_size);
        free(p_stack);
    }
    printf("done\n");
    return 0;
}
|
From: Philippe W. <phi...@sk...> - 2018-11-20 20:34:46
|
On Tue, 2018-11-20 at 10:35 +0100, Ivo Raisr wrote:
> On Mon, 19 Nov 2018 at 16:53 David Faure <fa...@kd...> wrote:
> >
> > When using vgdb (e.g. `valgrind --vgdb-error=0 myprog`)
> > and there's a valgrind warning for an uninitialized read, on a line like
> > if (a || b)
> >
> > The question then is, of course, whether it was a or b that was
> > uninitialized. If one uses vgdb to print the values of a and b, it won't
> > necessarily be obvious (e.g. two bools, both happen to show as "false", with
> > only one actually uninitialized). This makes me wonder, wouldn't it be
> > possible for vgdb to output a warning when doing "print a" or "print b" from
> > gdb and the value is marked as uninitialized?
> >
> > If I understand the architecture correctly, this should be possible to
> > implement, right?
>
> I do not want to estimate how feasible it would be to implement this feature.
> Patches are welcome, of course.
>
> But you can use an existing feature:
> http://valgrind.org/docs/manual/mc-manual.html#mc-manual.machine
> http://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands
>
> This will give you what (I think) you want.

Yes, using e.g. the monitor command xb lets you look at the V bits: you need to know the address (and length) of the value you want to look at. If a value is in a register, then you have to print the shadow register.

As for automatically printing warnings when GDB prints a 'non-initialised' value: I think this is not very easy, and would likely give many false positives: the valgrind gdbserver has no idea why GDB asks to read some memory and/or the registers. In particular, GDB will very likely often read all the registers (including the ones holding uninitialised values). The GDB protocol also does not allow reading individual bits, so when part of a byte is not initialised (but correctly so, i.e. not used by the program), printing the variable would give a warning.

So, in summary, it would be a nice thing to do, but I see no way to do it properly.

Philippe
|
From: Ivo R. <iv...@iv...> - 2018-11-20 09:36:14
|
On Mon, 19 Nov 2018 at 16:53 David Faure <fa...@kd...> wrote:
>
> When using vgdb (e.g. `valgrind --vgdb-error=0 myprog`)
> and there's a valgrind warning for an uninitialized read, on a line like
> if (a || b)
>
> The question then is, of course, whether it was a or b that was
> uninitialized. If one uses vgdb to print the values of a and b, it won't
> necessarily be obvious (e.g. two bools, both happen to show as "false", with
> only one actually uninitialized). This makes me wonder, wouldn't it be
> possible for vgdb to output a warning when doing "print a" or "print b" from
> gdb and the value is marked as uninitialized?
>
> If I understand the architecture correctly, this should be possible to
> implement, right?

I do not want to estimate how feasible it would be to implement this feature.
Patches are welcome, of course.

But you can use an existing feature:
http://valgrind.org/docs/manual/mc-manual.html#mc-manual.machine
http://valgrind.org/docs/manual/mc-manual.html#mc-manual.monitor-commands

This will give you what (I think) you want.

I.
|
From: David F. <fa...@kd...> - 2018-11-19 15:52:53
|
When using vgdb (e.g. `valgrind --vgdb-error=0 myprog`) and there's a valgrind warning for an uninitialized read, on a line like

if (a || b)

the question then is, of course, whether it was a or b that was uninitialized. If one uses vgdb to print the values of a and b, it won't necessarily be obvious (e.g. two bools, both happen to show as "false", with only one actually uninitialized). This makes me wonder: wouldn't it be possible for vgdb to output a warning when doing "print a" or "print b" from gdb and the value is marked as uninitialized?

If I understand the architecture correctly, this should be possible to implement, right?

--
David Faure | dav...@kd... | Managing Director KDAB France
KDAB (France) S.A.S., a KDAB Group company
Tel. France +33 (0)4 90 84 08 53, http://www.kdab.fr
KDAB - The Qt, C++ and OpenGL Experts
|
From: Tom H. <to...@co...> - 2018-11-06 17:58:23
|
On 06/11/2018 17:50, Rafael Antognolli wrote:

> Oh, so this would be in valgrind's code, right?

Yes.

> I do see VKI_DRM_IOCTL_I915_GEM_MMAP_GTT in the code, but the ioctl
> in question (that I can't find there) is DRM_IOCTL_I915_GEM_MMAP. So
> maybe with that it should work?

Oh, if there's no handling for it at all then it's definitely not going to work ;-)

Tom

--
Tom Hughes (to...@co...)
http://compton.nu/
|
From: Rafael A. <ant...@gm...> - 2018-11-06 17:50:28
|
On Tue, Nov 6, 2018 at 9:40 AM Tom Hughes <to...@co...> wrote:
> On 06/11/2018 17:38, Rafael Antognolli wrote:
> > On Tue, Nov 6, 2018 at 9:25 AM Tom Hughes <to...@co...> wrote:
> > > On 06/11/2018 17:02, Rafael Antognolli wrote:
> > > > My (limited) understanding is that valgrind's mremap doesn't let me
> > > > remap an address that was allocated by the ioctl, since valgrind doesn't
> > > > "own" that memory. Is there some way around this, or is this never
> > > > supposed to work?
> > >
> > > I don't see any reason why it shouldn't IF the ioctl wrapper is
> > > correctly written to update valgrind's internal state and record
> > > the mapping that it creates.
> >
> > By that, do you mean using VALGRIND_MALLOCLIKE_BLOCK and
> > VALGRIND_FREELIKE_BLOCK? If so, that's already happening but no
> > luck so far.
>
> No, I mean that internally valgrind keeps track of the mappings
> in the process address space in the aspacemgr component, and if a
> system call creates a mapping then the post handler for that
> system call needs to update that.

Oh, so this would be in valgrind's code, right?

I do see VKI_DRM_IOCTL_I915_GEM_MMAP_GTT in the code, but the ioctl in question (that I can't find there) is DRM_IOCTL_I915_GEM_MMAP. So maybe with that it should work?

> Tom
>
> --
> Tom Hughes (to...@co...)
> http://compton.nu/
|
From: Tom H. <to...@co...> - 2018-11-06 17:41:05
|
On 06/11/2018 17:38, Rafael Antognolli wrote:
> On Tue, Nov 6, 2018 at 9:25 AM Tom Hughes <to...@co...> wrote:
> > On 06/11/2018 17:02, Rafael Antognolli wrote:
> > > My (limited) understanding is that valgrind's mremap doesn't let me
> > > remap an address that was allocated by the ioctl, since valgrind doesn't
> > > "own" that memory. Is there some way around this, or is this never
> > > supposed to work?
> >
> > I don't see any reason why it shouldn't IF the ioctl wrapper is
> > correctly written to update valgrind's internal state and record
> > the mapping that it creates.
>
> By that, do you mean using VALGRIND_MALLOCLIKE_BLOCK and
> VALGRIND_FREELIKE_BLOCK? If so, that's already happening but no
> luck so far.

No, I mean that internally valgrind keeps track of the mappings in the process address space in the aspacemgr component, and if a system call creates a mapping then the post handler for that system call needs to update that.

Tom

--
Tom Hughes (to...@co...)
http://compton.nu/
|
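[Editor's note: a rough illustration of what Tom describes. This is speculative pseudocode modelled on valgrind's existing DRM ioctl handling in coregrind/m_syswrap; the internal names (ML_(notify_core_and_tool_of_mmap), the VKI_* constants, the struct fields) are assumptions drawn from valgrind's source tree and the i915 uapi headers, not a tested patch:]

```c
/* Pseudocode sketch: a POST(sys_ioctl) case for DRM_IOCTL_I915_GEM_MMAP
 * would have to tell the address-space manager and the running tool
 * about the mapping the kernel just created, roughly like this: */
case VKI_DRM_IOCTL_I915_GEM_MMAP: {
    struct vki_drm_i915_gem_mmap *args =
        (struct vki_drm_i915_gem_mmap *)(Addr)ARG3;
    if (SUCCESS && args != NULL) {
        /* Record the new client mapping in aspacemgr so later
         * mremap/munmap on this range are accepted. */
        ML_(notify_core_and_tool_of_mmap)(
            (Addr)args->addr_ptr, args->size,
            VKI_PROT_READ | VKI_PROT_WRITE,
            VKI_MAP_SHARED, -1, 0);
        /* The kernel wrote the result struct back to the caller. */
        POST_MEM_WRITE(ARG3, sizeof(*args));
    }
    break;
}
```

Without such a handler, valgrind's picture of the address space stays out of sync with the kernel, which is exactly why the mremap is rejected.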
From: Rafael A. <ant...@gm...> - 2018-11-06 17:39:07
|
On Tue, Nov 6, 2018 at 9:25 AM Tom Hughes <to...@co...> wrote:
> On 06/11/2018 17:02, Rafael Antognolli wrote:
> > My (limited) understanding is that valgrind's mremap doesn't let me
> > remap an address that was allocated by the ioctl, since valgrind doesn't
> > "own" that memory. Is there some way around this, or is this never
> > supposed to work?
>
> I don't see any reason why it shouldn't IF the ioctl wrapper is
> correctly written to update valgrind's internal state and record
> the mapping that it creates.

By that, do you mean using VALGRIND_MALLOCLIKE_BLOCK and VALGRIND_FREELIKE_BLOCK? If so, that's already happening but no luck so far. Do you have any pointers or examples of that?

> If the wrapper doesn't do that then you probably have bigger
> problems than whether you can remap it because it will mean
> valgrind's state is out of sync with the kernel.
>
> Tom
>
> --
> Tom Hughes (to...@co...)
> http://compton.nu/
|