From: Tom H. <th...@cy...> - 2004-06-22 22:52:25
|
In message <9cd...@ma...>
Mike Hearn <mik...@gm...> wrote:
> In order to debug the wineserver (which is a regular Linux app),
> valgrind needs to support tkill. I am therefore following the advice
> in the README and asking here to see if anybody can give me some tips
> on how to do it, or if anybody is working on it yet.
The problem with emulating tkill is that under valgrind threads all
run in one process so you can't just pass it on to the kernel.
In fact, where are you getting the TID values from to send the signals
to with tkill? If you're using gettid then that's going to be the first
hurdle you need to overcome as we don't implement that either ;-)
Actually, given the horrible things that wineserver does, are these
threads actually threads in the wineserver, or are they in another
process altogether - i.e. one of the processes running under wine?
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
|
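For reference, neither of the calls under discussion had a glibc wrapper at the time, so clients invoke them through syscall(2). A minimal Linux-only sketch (the helper names are mine, not wineserver's):

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

/* gettid(2): kernel task ID of the calling thread (equal to the PID
 * for the main thread of a single-threaded process). */
static pid_t my_gettid(void)
{
    return (pid_t)syscall(SYS_gettid);
}

/* tkill(2): deliver sig to one specific kernel task rather than the
 * whole thread group; sig == 0 merely checks that the task exists. */
static int my_tkill(pid_t tid, int sig)
{
    return (int)syscall(SYS_tkill, tid, sig);
}
```

Because Valgrind ran all client threads inside a single kernel task, a forwarded SYS_tkill would name a TID that does not exist from the kernel's point of view, which is why both calls have to be emulated rather than passed through.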
|
From: Tom H. <th...@cy...> - 2004-06-22 22:48:07
|
In message <166...@ca...>
Paul Mackerras <pa...@sa...> wrote:
> Nicholas Nethercote writes:
>
> > * What's considered a normal client stack size? Is it reasonable to
> > limit it? If I could safely limit that to say, 8MB, that could be very
> > useful in making limits more flexible. (Even something much bigger, eg.
> > 64MB, would be ok).
>
> 8MB is probably enough for most of the class of applications that
> people use Valgrind on. However, there are numerical apps that want
> 1GB or so.
Can we configure it at run time based on the user's stack-size limit,
or is this something that needs to be fixed at build time?
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
|
|
From: Paul M. <pa...@sa...> - 2004-06-22 22:40:43
|
Nicholas Nethercote writes:

> * What's considered a normal client stack size? Is it reasonable to
> limit it? If I could safely limit that to say, 8MB, that could be very
> useful in making limits more flexible. (Even something much bigger, eg.
> 64MB, would be ok).

8MB is probably enough for most of the class of applications that
people use Valgrind on. However, there are numerical apps that want
1GB or so.

Paul.
|
|
From: Jeremy F. <je...@go...> - 2004-06-22 21:40:12
|
On Tue, 2004-06-22 at 13:03 +0100, Nicholas Nethercote wrote:
> Hi,
>
> I've been looking at ways to improve FV's memory layout, to address bug
> #82301.
>
> There's a comment in coregrind/ume.c that explains things:
>
> CLIENT_BASE        +-------------------------+
>                    | client address space    |
>                    :                         :
>                    :                         :
>                    | client stack            |
> client_end         +-------------------------+
>                    | redzone                 |
> shadow_base        +-------------------------+
>                    |                         |
>                    : shadow memory for skins :
>                    | (may be 0 sized)        |
> shadow_end         +-------------------------+
>                    : gap (may be 0 sized)    :
> valgrind_base      +-------------------------+
>                    | valgrind .so files      |
>                    | and mappings            |
> valgrind_mmap_end  -
>                    | kickstart executable    |
>                    -                         -
>                    | valgrind heap vvvvvvvvv |
> valgrind_end       -                         -
>                    | valgrind stack ^^^^^^^^^|
>                    +-------------------------+
>                    : kernel                  :
>
> Basically, memory is partitioned into 3 parts:
>
> - client space
> - shadow memory space (if needed)
> - valgrind + tool space
>
> The problem is that sometimes one of these spaces gets exhausted when
> there's still plenty of room in the others, which is a shame.
>
> It's not easy to see how to make things more flexible, although I have
> some ideas. I have some questions, most of which are primarily directed
> to Jeremy:
>
> * What's considered a normal client stack size? Is it reasonable to
> limit it? If I could safely limit that to say, 8MB, that could be very
> useful in making limits more flexible. (Even something much bigger, eg.
> 64MB, would be ok).

Mostly it can be pretty small (~8M); I would guess that almost nothing
uses more normally. Some programs may be heavily recursive, or use
large local arrays, and they would need more. You could make this a CLO.

> * CLIENT_SIZE_MULTIPLE is 64M. Why so big? It means the gap between
> shadow memory and valgrind's space is 95MB on my machine, for Memcheck.
> I've reduced it to 4M without any apparent ill-effects, and the gap is
> reduced to 1.5MB, which gives the client an extra 44MB of space to play
> with. As a result, the largest segment I can successfully mmap jumps
> from 235MB to 280MB. Changing this seems like the easiest way to squeeze
> out some more megs from the address space.

There's no particular reason for this number; it was mostly to make the
addresses round for easy reading. Reducing it to 4k should be OK.

> * The diagram above mentions the "valgrind heap". AIUI, Valgrind doesn't
> really have a heap as such, because all its allocations are done out of
> maps. Are those allocations done out of the "valgrind .so files and
> mappings" area? If so, then 128MB for the "kickstart executable, valgrind
> heap, valgrind stack" area seems far more than is necessary. Could it be
> reduced to just big enough for the kickstart executable + valgrind's
> stack, which together should only be a couple of MB? (I tried changing
> this 128MB size but it kept seg-faulting, even when I only shrunk it to
> 112MB, so I think I was not making the change correctly.)

Valgrind does have a heap - it's where VG_(malloc) goes, allocated with
VG_(brk). valgrind_base->valgrind_map_end is used for when V mmaps
files, but not memory allocation. The mmap area only needs to be big
enough to fit one .so at a time when reading symbols.

Note that we don't use glibc's malloc internally. Anything relying on
the "fallback to mmap" behavior is pretty non-portable anyway.

> * On my machine, normally, the heap is roughly 800MB, and the space for
> mmap segments is 2GB. (I compute this from the usual heap start being
> slightly bigger than 0x8048000, and mmap segments usually starting at
> 0x40000000, and the kernel starting at 0xc0000000). So the heap is about
> 2--2.5x smaller than the mmap segment area. But under Valgrind, the
> "heap" (client_base..client_mapbase distance) is set to be 3x the mmap
> segment area (client_mapbase..client_end). Surely the ratios should be
> similar to normal execution? If I change the ratio from 3x to 0.5x, I get
> a heap size of 441MB and mmap-segment size of 882MB, and can mmap 640MB
> segments in Memcheck (up from 235MB). [Actually, it's complicated by
> Memcheck's replacement malloc() not using brk() but rather mmap(). Hmm.]
> There's definitely room for improvement here. Comments?

I picked all the initial numbers out of the air, so this is definitely a
tuning process I expected to happen. I think we can do better for
"typical" programs, but I don't think a static layout can suit everyone.

The other variable is that the kernel might be higher. FC2's user
address space goes up to ffff0000, so there's a lot more space to play
with. In principle it should be easy to build Valgrind to work in this
larger space, but unfortunately it can't be done dynamically, since it
affects the linking address of the code.

J
|
|
From: Nicholas N. <nj...@ca...> - 2004-06-22 16:36:36
|
On Tue, 22 Jun 2004, Nicholas Nethercote wrote:

> * On my machine, normally, the heap is roughly 800MB, and the space for
> mmap segments is 2GB. (I compute this from the usual heap start being
> slightly bigger than 0x8048000, and mmap segments usually starting at
> 0x40000000, and the kernel starting at 0xc0000000). So the heap is about
> 2--2.5x smaller than the mmap segment area. But under Valgrind, the
> "heap" (client_base..client_mapbase distance) is set to be 3x the mmap
> segment area (client_mapbase..client_end). Surely the ratios should be
> similar to normal execution? If I change the ratio from 3x to 0.5x, I get
> a heap size of 441MB and mmap-segment size of 882MB, and can mmap 640MB
> segments in Memcheck (up from 235MB). [Actually, it's complicated by
> Memcheck's replacement malloc() not using brk() but rather mmap(). Hmm.]

Actually, I think this is even better than I first thought. For those
tools that don't replace malloc(), AFAICT the standard malloc() falls
back on mmap() (instead of brk()) for any allocations larger than about
100KB, or if the brk() fails. So reducing the heap size doesn't matter
at all.

And for tools that do replace malloc(), I think V's malloc doesn't do
any malloc mmapping below client_mapbase anyway -- in which case the
client "heap" (client_base..client_mapbase) is barely being used?

N
|
|
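The glibc fallback Nicholas describes is governed by the M_MMAP_THRESHOLD tunable (on the order of 128KB by default in this era): requests above it are served by mmap() instead of extending the brk() heap. A small glibc-specific sketch, with an illustrative helper name:

```c
#include <malloc.h>
#include <stdlib.h>

/* Lower the mmap threshold so that any allocation over 64KB is
 * mmap()-backed rather than brk()-backed, then allocate. */
static void *big_alloc(size_t sz)
{
    /* mallopt() returns nonzero on success. */
    if (mallopt(M_MMAP_THRESHOLD, 64 * 1024) == 0)
        return NULL;
    return malloc(sz);
}
```

As the thread notes, relying on this fallback is non-portable: it is a glibc implementation detail, and Valgrind's internal allocator does not use glibc's malloc at all.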
From: Tom H. <th...@cy...> - 2004-06-22 15:31:20
|
In message <Pin...@sc...>
Bob Friesenhahn <bfr...@si...> wrote:
> On Tue, 22 Jun 2004, Tom Hughes wrote:
>
>> In message <Pin...@sc...>
>> Bob Friesenhahn <bfr...@si...> wrote:
>>
>>> Our software is using POSIX.4 semaphores in shared memory. These
>>> semaphores are initialized by a different application than the one
>>> being tested by valgrind. Valgrind complains vigorously because the
>>> semaphores are not initialized.
>>
>> Which interface is POSIX.4 semaphores? pthread semaphores?
>
> sem_init(), sem_wait(), etc.
Right. They are part of the pthreads interface so valgrind makes
no attempt to support them in process shared mode - all the various
pthreads data types are only supported in thread shared mode to the
extent that they are supported at all in many cases...
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
|
|
From: Bob F. <bfr...@si...> - 2004-06-22 15:20:53
|
On Tue, 22 Jun 2004, Tom Hughes wrote:

> In message <Pin...@sc...>
> Bob Friesenhahn <bfr...@si...> wrote:
>
>> Our software is using POSIX.4 semaphores in shared memory. These
>> semaphores are initialized by a different application than the one
>> being tested by valgrind. Valgrind complains vigorously because the
>> semaphores are not initialized.
>
> Which interface is POSIX.4 semaphores? pthread semaphores?

sem_init(), sem_wait(), etc.

Bob
======================================
Bob Friesenhahn
bfr...@si...
http://www.simplesystems.org/users/bfriesen
|
|
From: Tom H. <th...@cy...> - 2004-06-22 15:07:53
|
In message <Pin...@sc...>
Bob Friesenhahn <bfr...@si...> wrote:
> Our software is using POSIX.4 semaphores in shared memory. These
> semaphores are initialized by a different application than the one
> being tested by valgrind. Valgrind complains vigorously because the
> semaphores are not initialized.
Which interface is POSIX.4 semaphores? pthread semaphores?
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
|
|
From: Bob F. <bfr...@si...> - 2004-06-22 15:07:35
|
Hmmm, it may be that some of these problem semaphores are being
initialized in the program under test, but valgrind doesn't know about
semaphores with a pshared attribute yet:

==3766== sem_init: unsupported pshared value
==3766==    at 0x3C050B17: pthread_error (vg_libpthread.c:380)
==3766==    by 0x3C057320: sem_init (vg_libpthread.c:2718)
==3766==    by 0x81974A8: CTCPCtrl::CTCPCtrl() (TCPCtrl.cc:76)
==3766==    by 0x812684E: CLCD::CLCD() (LCD.cc:49)
==3766==    by 0x8080800: CDisplayManager::CDisplayManager() (DisplayManager.cc:42)
==3766==    by 0x81952FC: CSystemInit::CSystemInit() (SystemInit.cc:127)
==3766==    by 0x819657A: main (SystemMain.cc:49)

Since the content of these shared semaphores will be updated by other
programs, valgrind can't monitor the status of this memory unless it is
enhanced to monitor all programs which access it.

Bob

On Tue, 22 Jun 2004, Bob Friesenhahn wrote:

> Our software is using POSIX.4 semaphores in shared memory. These
> semaphores are initialized by a different application than the one being
> tested by valgrind. Valgrind complains vigorously because the semaphores
> are not initialized.
>
> Is there a way around this problem? Will it help at all if the
> application which initializes the semaphore is also executed under
> valgrind?
>
> It seems like valgrind needs a separate agent program used to collect
> information regarding updates to shared resources so that it can know
> about things like semaphores in shared memory.
>
> Bob

======================================
Bob Friesenhahn
bfr...@si...
http://www.simplesystems.org/users/bfriesen
|
|
From: Bob F. <bfr...@si...> - 2004-06-22 14:52:12
|
Our software is using POSIX.4 semaphores in shared memory. These
semaphores are initialized by a different application than the one being
tested by valgrind. Valgrind complains vigorously because the semaphores
are not initialized.

Is there a way around this problem? Will it help at all if the
application which initializes the semaphore is also executed under
valgrind?

It seems like valgrind needs a separate agent program used to collect
information regarding updates to shared resources so that it can know
about things like semaphores in shared memory.

Bob
======================================
Bob Friesenhahn
bfr...@si...
http://www.simplesystems.org/users/bfriesen
|
|
From: Nicholas N. <nj...@ca...> - 2004-06-22 14:20:25
|
CVS commit by nethercote:

Remove unused #define

M +0 -3 vg_main.c 1.162

--- valgrind/coregrind/vg_main.c #1.161:1.162
@@ -75,7 +75,4 @@
 #endif /* AT_SECURE */

-/* Amount to reserve for Valgrind's internal heap */
-#define VALGRIND_HEAPSIZE (128*1024*1024)
-
 /* Amount to reserve for Valgrind's internal mappings */
 #define VALGRIND_MAPSIZE (128*1024*1024)
|
|
From: Nicholas N. <nj...@ca...> - 2004-06-22 14:18:51
|
CVS commit by nethercote:
Convert VG_(exitcode), a global variable, into a local variable.
M +1 -6 vg_include.h 1.196
M +4 -8 vg_main.c 1.161
M +3 -3 vg_scheduler.c 1.152
--- valgrind/coregrind/vg_include.h #1.195:1.196
@@ -981,5 +981,5 @@ typedef
/* The scheduler. */
-extern VgSchedReturnCode VG_(scheduler) ( void );
+extern VgSchedReturnCode VG_(scheduler) ( Int* exit_code );
extern void VG_(scheduler_init) ( void );
@@ -1475,9 +1475,4 @@ extern UInt VG_(dispatch_ctr);
extern ThreadId VG_(last_run_tid);
-/* This is the argument to __NR_exit() supplied by the first thread to
- call that syscall. We eventually pass that to __NR_exit() for
- real. */
-extern Int VG_(exitcode);
-
/* If we're doing the default action of a fatal signal */
extern jmp_buf VG_(fatal_signal_jmpbuf);
--- valgrind/coregrind/vg_main.c #1.160:1.161
@@ -180,9 +180,4 @@ ThreadId VG_(last_run_tid) = 0;
Bool VG_(logging_to_filedes) = True;
-/* This is the argument to __NR_exit() supplied by the first thread to
- call that syscall. We eventually pass that to __NR_exit() for
- real. */
-Int VG_(exitcode) = 0;
-
/*====================================================================*/
@@ -2634,4 +2629,5 @@ int main(int argc, char **argv)
UInt * client_auxv;
VgSchedReturnCode src;
+ Int exitcode = 0;
vki_rlimit zero = { 0, 0 };
@@ -2977,5 +2973,5 @@ int main(int argc, char **argv)
if (__builtin_setjmp(&VG_(fatal_signal_jmpbuf)) == 0) {
VG_(fatal_signal_set) = True;
- src = VG_(scheduler)();
+ src = VG_(scheduler)( &exitcode );
} else
src = VgSrc_FatalSig;
@@ -3004,5 +3000,5 @@ int main(int argc, char **argv)
VG_(show_all_errors)();
- SK_(fini)( VG_(exitcode) );
+ SK_(fini)( exitcode );
VG_(do_sanity_checks)( True /*include expensive checks*/ );
@@ -3046,5 +3042,5 @@ int main(int argc, char **argv)
the arg to __NR_exit(), so we just do __NR_exit() with
that arg. */
- VG_(exit)( VG_(exitcode) );
+ VG_(exit)( exitcode );
/* NOT ALIVE HERE! */
VG_(core_panic)("entered the afterlife in main() -- ExitSyscall");
--- valgrind/coregrind/vg_scheduler.c #1.151:1.152
@@ -877,5 +877,5 @@ void idle ( void )
* The specified number of basic blocks has gone by.
*/
-VgSchedReturnCode VG_(scheduler) ( void )
+VgSchedReturnCode VG_(scheduler) ( Int* exitcode )
{
ThreadId tid, tid_next;
@@ -957,5 +957,5 @@ VgSchedReturnCode VG_(scheduler) ( void
/* All threads have exited - pretend someone called exit() */
if (n_waiting_for_reaper == n_exists) {
- VG_(exitcode) = 0; /* ? */
+ *exitcode = 0; /* ? */
return VgSrc_ExitSyscall;
}
@@ -1109,5 +1109,5 @@ VgSchedReturnCode VG_(scheduler) ( void
/* If __NR_exit, remember the supplied argument. */
- VG_(exitcode) = VG_(threads)[tid].m_ebx; /* syscall arg1 */
+ *exitcode = VG_(threads)[tid].m_ebx; /* syscall arg1 */
/* Only run __libc_freeres if the tool says it's ok and
|
|
From: Nicholas N. <nj...@ca...> - 2004-06-22 14:10:06
|
CVS commit by nethercote:
Remove a function and global variable no longer needed.
M +0 -6 coregrind/vg_main.c 1.160
M +0 -26 include/vg_skin.h.base 1.22
--- valgrind/coregrind/vg_main.c #1.159:1.160
@@ -180,8 +180,4 @@ ThreadId VG_(last_run_tid) = 0;
Bool VG_(logging_to_filedes) = True;
-/* This Bool is needed by wrappers in vg_clientmalloc.c to decide how
- to behave. Initially we say False. */
-Bool VG_(running_on_simd_CPU) = False;
-
/* This is the argument to __NR_exit() supplied by the first thread to
call that syscall. We eventually pass that to __NR_exit() for
@@ -2976,5 +2972,4 @@ int main(int argc, char **argv)
// Run!
//--------------------------------------------------------------
- VG_(running_on_simd_CPU) = True;
VGP_POPCC(VgpStartup);
VGP_PUSHCC(VgpSched);
@@ -2987,5 +2982,4 @@ int main(int argc, char **argv)
VGP_POPCC(VgpSched);
- VG_(running_on_simd_CPU) = False;
--- valgrind/include/vg_skin.h.base #1.21:1.22
@@ -1811,30 +1811,4 @@
/*====================================================================*/
-/*=== General stuff for replacing functions ===*/
-/*====================================================================*/
-
-/* Some skins need to replace the standard definitions of some functions. */
-
-/* ------------------------------------------------------------------ */
-/* General stuff, for replacing any functions */
-
-/* Is the client running on the simulated CPU or the real one?
-
- Nb: If it is, and you want to call a function to be run on the real CPU,
- use one of the VALGRIND_NON_SIMD_CALL[123] macros in valgrind.h to call it.
-
- Nb: don't forget the function parentheses when using this in a
- condition... write this:
-
- if (VG_(is_running_on_simd_CPU)()) { ... } // calls function
-
- not this:
-
- if (VG_(is_running_on_simd_CPU)) { ... } // address of var!
-*/
-extern Bool VG_(is_running_on_simd_CPU) ( void );
-
-
-/*====================================================================*/
/*=== Specific stuff for replacing malloc() and friends ===*/
/*====================================================================*/
|
|
From: Nicholas N. <nj...@ca...> - 2004-06-22 14:01:46
|
CVS commit by nethercote:

Update .cvsignore files

M +1 -0 corecheck/tests/.cvsignore 1.6
M +1 -0 memcheck/tests/.cvsignore 1.12
M +2 -0 none/tests/.cvsignore 1.16

--- valgrind/corecheck/tests/.cvsignore #1.5:1.6
@@ -5,4 +5,5 @@
 sigkill
 pth_atfork1
+pth_cancel1
 pth_cancel2
 pth_cvsimple
--- valgrind/memcheck/tests/.cvsignore #1.11:1.12
@@ -30,4 +30,5 @@
 memalign_test
 memcmptest
+mempool
 mismatches
 mmaptest
--- valgrind/none/tests/.cvsignore #1.15:1.16
@@ -13,4 +13,5 @@
 discard
 exec-sigmask
+execve
 floored
 fork
@@ -43,4 +44,5 @@
 resolv
 seg_override
+sem
 semlimit
 sha1_test
|
|
From: Nicholas N. <nj...@ca...> - 2004-06-22 14:00:20
|
CVS commit by nethercote:
Slightly disentangle main().
M +15 -13 vg_main.c 1.159 [POSSIBLY UNSAFE: printf]
--- valgrind/coregrind/vg_main.c #1.158:1.159
@@ -1046,9 +1046,4 @@ static Addr setup_client_stack(char **or
cl_esp = ROUNDDN(cl_esp, 16); /* make stack 16 byte aligned */
- if (0)
- printf("stringsize=%d auxsize=%d stacksize=%d\n",
- stringsize, auxsize, stacksize);
-
-
/* base of the string table (aligned) */
stringbase = strtab = (char *)(VG_(client_trampoline_code) - ROUNDUP(stringsize, sizeof(int)));
@@ -1057,4 +1052,11 @@ static Addr setup_client_stack(char **or
VG_(clstk_end) = VG_(client_end);
+ if (0)
+ printf("stringsize=%d auxsize=%d stacksize=%d\n"
+ "clstk_base %x\n"
+ "clstk_end %x\n",
+ stringsize, auxsize, stacksize, VG_(clstk_base), VG_(clstk_end));
+
+
/* ==================== allocate space ==================== */
@@ -1186,4 +1188,8 @@ static Addr setup_client_stack(char **or
vg_assert((strtab-stringbase) == stringsize);
+ /* We know the initial ESP is pointing at argc/argv */
+ VG_(client_argc) = *(Int*)cl_esp;
+ VG_(client_argv) = (Char**)(cl_esp + sizeof(Int));
+
return cl_esp;
}
@@ -1634,6 +1640,5 @@ static void pre_process_cmd_line_options
}
-static void process_cmd_line_options
- ( UInt* client_auxv, Addr esp_at_startup, const char* toolname )
+static void process_cmd_line_options( UInt* client_auxv, const char* toolname )
{
Int i, eventually_log_fd;
@@ -1659,8 +1664,4 @@ static void process_cmd_line_options
}
- /* We know the initial ESP is pointing at argc/argv */
- VG_(client_argc) = *(Int *)esp_at_startup;
- VG_(client_argv) = (Char **)(esp_at_startup + sizeof(Int));
-
for (i = 1; i < VG_(vg_argc); i++) {
@@ -2750,5 +2751,5 @@ int main(int argc, char **argv)
//--------------------------------------------------------------
- // Setup client stack and eip
+ // Setup client stack, eip, and VG_(client_arg[cv])
// p: load_client() [for 'info']
// p: fix_environment() [for 'env']
@@ -2792,4 +2793,5 @@ int main(int argc, char **argv)
// XXX: alternatively, if sk_pre_clo_init does use VG_(malloc)(), is it
// wrong to ignore any segments that might add in parse_procselfmaps?
+ // p: setup_client_stack() [for 'VG_(client_arg[cv]']
//--------------------------------------------------------------
(*toolinfo->sk_pre_clo_init)();
@@ -2812,5 +2814,5 @@ int main(int argc, char **argv)
// p: sk_pre_clo_init [to set 'command_line_options' need]
//--------------------------------------------------------------
- process_cmd_line_options(client_auxv, esp_at_startup, tool);
+ process_cmd_line_options(client_auxv, tool);
//--------------------------------------------------------------
|
|
From: Nicholas N. <nj...@ca...> - 2004-06-22 13:19:28
|
CVS commit by nethercote:

Remove two no-longer-used global vars.

M +0 -5 vg_main.c 1.158

--- valgrind/coregrind/vg_main.c #1.157:1.158
@@ -162,9 +162,4 @@
 Char** VG_(client_envp);
 UInt VG_(sigstack)[VG_SIGSTACK_SIZE_W];

-/* Saving stuff across system calls. */
-__attribute__ ((aligned (16)))
-UInt VG_(real_sse_state_saved_over_syscall)[VG_SIZE_OF_SSESTATE_W];
-Addr VG_(esp_saved_over_syscall);
-
 /* jmp_buf for fatal signals */
 Int VG_(fatal_sigNo) = -1;
|
|
From: Tom H. <th...@cy...> - 2004-06-22 12:18:11
|
In message <Pin...@ye...>
Nicholas Nethercote <nj...@ca...> wrote:
> * What's considered a normal client stack size? Is it reasonable to
> limit it? If I could safely limit that to say, 8MB, that could be very
> useful in making limits more flexible. (Even something much bigger, eg.
> 64MB, would be ok).
Well on my machines my stack size ulimit seems to be 8Mb by default
and it's never given me a problem that I can recall.
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
|
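The 8MB Tom quotes is the soft RLIMIT_STACK resource limit, which is what `ulimit -s` reports (in kilobytes). A sketch of how a run-time default could be derived from it rather than hard-coded at build time (the helper name is illustrative):

```c
#include <sys/resource.h>

/* Return the soft stack-size limit in bytes, or 0 if it cannot be
 * read or is set to "unlimited". */
static unsigned long soft_stack_limit(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) != 0)
        return 0;
    if (rl.rlim_cur == RLIM_INFINITY)
        return 0;  /* "unlimited" - caller must pick its own cap */
    return (unsigned long)rl.rlim_cur;  /* bytes */
}
```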
|
From: Nicholas N. <nj...@ca...> - 2004-06-22 12:03:54
|
Hi,
I've been looking at ways to improve FV's memory layout, to address bug
#82301.
There's a comment in coregrind/ume.c that explains things:
CLIENT_BASE        +-------------------------+
                   | client address space    |
                   :                         :
                   :                         :
                   | client stack            |
client_end         +-------------------------+
                   | redzone                 |
shadow_base        +-------------------------+
                   |                         |
                   : shadow memory for skins :
                   | (may be 0 sized)        |
shadow_end         +-------------------------+
                   : gap (may be 0 sized)    :
valgrind_base      +-------------------------+
                   | valgrind .so files      |
                   | and mappings            |
valgrind_mmap_end  -
                   | kickstart executable    |
                   -                         -
                   | valgrind heap vvvvvvvvv |
valgrind_end       -                         -
                   | valgrind stack ^^^^^^^^^|
                   +-------------------------+
                   : kernel                  :
Basically, memory is partitioned into 3 parts:
- client space
- shadow memory space (if needed)
- valgrind + tool space
The problem is that sometimes one of these spaces gets exhausted when
there's still plenty of room in the others, which is a shame.
It's not easy to see how to make things more flexible, although I have
some ideas. I have some questions, most of which are primarily directed
to Jeremy:
* What's considered a normal client stack size? Is it reasonable to
limit it? If I could safely limit that to say, 8MB, that could be very
useful in making limits more flexible. (Even something much bigger, eg.
64MB, would be ok).
* CLIENT_SIZE_MULTIPLE is 64M. Why so big? It means the gap between
shadow memory and valgrind's space is 95MB on my machine, for Memcheck.
I've reduced it to 4M without any apparent ill-effects, and the gap is
reduced to 1.5MB, which gives the client an extra 44MB of space to play
with. As a result, the largest segment I can successfully mmap jumps
from 235MB to 280MB. Changing this seems like the easiest way to squeeze
out some more megs from the address space.
* The diagram above mentions the "valgrind heap". AIUI, Valgrind doesn't
really have a heap as such, because all its allocations are done out of
maps. Are those allocations done out of the "valgrind .so files and
mappings" area? If so, then 128MB for the "kickstart executable, valgrind
heap, valgrind stack" area seems far more than is necessary. Could it be
reduced to just big enough for the kickstart executable + valgrind's
stack, which together should only be a couple of MB? (I tried changing
this 128MB size but it kept seg-faulting, even when I only shrunk it to
112MB, so I think I was not making the change correctly.)
* On my machine, normally, the heap is roughly 800MB, and the space for
mmap segments is 2GB. (I compute this from the usual heap start being
slightly bigger than 0x8048000, and mmap segments usually starting at
0x40000000, and the kernel starting at 0xc0000000). So the heap is about
2--2.5x smaller than the mmap segment area. But under Valgrind, the
"heap" (client_base..client_mapbase distance) is set to be 3x the mmap
segment area (client_mapbase..client_end). Surely the ratios should be
similar to normal execution? If I change the ratio from 3x to 0.5x, I get
a heap size of 441MB and mmap-segment size of 882MB, and can mmap 640MB
segments in Memcheck (up from 235MB). [Actually, it's complicated by
Memcheck's replacement malloc() not using brk() but rather mmap(). Hmm.]
There's definitely room for improvement here. Comments?
Thanks.
N
|
|
From: Nicholas N. <nj...@ca...> - 2004-06-22 08:53:51
|
Hi,

Looks like someone's managed to run Valgrind programs under GDB:

    www.atomice.com/gdb-valgrind.html

I haven't tried it, but looks interesting.

N
|
|
From: Tom H. <th...@cy...> - 2004-06-22 08:43:35
|
CVS commit by thughes:
Prevent applications from blocking SIGVGINT in the proxy threads.
M +4 -2 vg_proxylwp.c 1.15
--- valgrind/coregrind/vg_proxylwp.c #1.14:1.15
@@ -682,5 +682,6 @@ static Int proxylwp(void *v)
vg_assert(px->state == PXS_SigACK);
appsigmask = req.sigmask;
- VG_(ksigdelset)(&appsigmask, VKI_SIGVGKILL); /* but allow SIGVGKILL to interrupt */
+ VG_(ksigdelset)(&appsigmask, VKI_SIGVGKILL); /* but allow SIGVGKILL */
+ VG_(ksigdelset)(&appsigmask, VKI_SIGVGINT); /* and SIGVGINT to interrupt */
px->state = PXS_WaitReq;
reply.req = PX_BAD; /* don't reply */
@@ -689,5 +690,6 @@ static Int proxylwp(void *v)
case PX_SetSigmask:
appsigmask = req.sigmask;
- VG_(ksigdelset)(&appsigmask, VKI_SIGVGKILL); /* but allow SIGVGKILL to interrupt */
+ VG_(ksigdelset)(&appsigmask, VKI_SIGVGKILL); /* but allow SIGVGKILL */
+ VG_(ksigdelset)(&appsigmask, VKI_SIGVGINT); /* and SIGVGINT to interrupt */
vg_assert(px->state == PXS_WaitReq ||
|
|
From: Tom H. <th...@cy...> - 2004-06-22 08:24:44
|
Nightly build on ginetta ( Red Hat 8.0 ) started at 2004-06-22 03:10:01 BST

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

-- Finished tests in cachegrind/tests ----------------------------------
-- Running tests in corecheck/tests -----------------------------------
as_mmap: valgrind ./as_mmap
as_shm: valgrind ./as_shm
erringfds: valgrind ./erringfds
fdleak_cmsg: valgrind --track-fds=yes ./fdleak_cmsg < /dev/null
fdleak_creat: valgrind --track-fds=yes ./fdleak_creat < /dev/null
fdleak_dup: valgrind --track-fds=yes ./fdleak_dup < /dev/null
fdleak_dup2: valgrind --track-fds=yes ./fdleak_dup2 < /dev/null
fdleak_fcntl: valgrind --track-fds=yes ./fdleak_fcntl < /dev/null
fdleak_ipv4: valgrind --track-fds=yes ./fdleak_ipv4 < /dev/null
fdleak_open: valgrind --track-fds=yes ./fdleak_open < /dev/null
fdleak_pipe: valgrind --track-fds=yes ./fdleak_pipe < /dev/null
fdleak_socketpair: valgrind --track-fds=yes ./fdleak_socketpair < /dev/null
pth_atfork1: valgrind ./pth_atfork1
pth_cancel1: valgrind ./pth_cancel1
*** pth_cancel1 failed (stdout) ***
File pth_cancel2.vgtest not openable
*** pth_cancel1 failed (stderr) ***
make: *** [regtest] Error 2
|
From: Tom H. <to...@co...> - 2004-06-22 02:25:12
Nightly build on dunsmere ( Fedora Core 2 ) started at 2004-06-22 03:20:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 168 tests, 7 stderr failures, 1 stdout failure =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/writev (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1
From: Tom H. <th...@cy...> - 2004-06-22 02:20:37
Nightly build on audi ( Red Hat 9 ) started at 2004-06-22 03:15:05 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 168 tests, 7 stderr failures, 0 stdout failures =================
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_socketpair (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
From: Tom H. <th...@cy...> - 2004-06-22 02:08:33
Nightly build on alvis ( Red Hat 7.3 ) started at 2004-06-22 03:05:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 168 tests, 8 stderr failures, 1 stdout failure =================
helgrind/tests/deadlock (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
memcheck/tests/badfree-2trace (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/brk (stderr)
memcheck/tests/error_counts (stdout)
memcheck/tests/new_nothrow (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
From: Tom H. <th...@cy...> - 2004-06-22 02:06:49
Nightly build on standard ( Red Hat 7.2 ) started at 2004-06-22 03:00:02 BST
Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done
Last 20 lines of log.verbose follow
readline1: valgrind ./readline1
resolv: valgrind ./resolv
seg_override: valgrind ./seg_override
sem: valgrind ./sem
semlimit: valgrind ./semlimit
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 168 tests, 1 stderr failure, 0 stdout failures =================
memcheck/tests/badfree-2trace (stderr)
make: *** [regtest] Error 1