From: Tom H. <th...@cy...> - 2004-03-23 19:52:45
|
In message <108...@as...>
Aleksander Salwa <A....@os...> wrote:
> It removes VG_N_SEMAPHORES limitation.
> Now it uses malloc/free for each sem_init/sem_destroy, as Jeremy
> suggested. I've done some benchmarking - it works even faster than my
> previous version (with pointers to a big global table). It shows again
> that "programmers are notoriously bad at predicting how their programs
> actually perform", as GCC's manual says ;-)
>
> Could someone with write access do a "cvs ci" ?
Looks good to me, so I've added a test case and committed it. Thanks
for the patch.
Tom
--
Tom Hughes (th...@cy...)
Software Engineer, Cyberscience Corporation
http://www.cyberscience.com/
|
|
From: Tom H. <th...@cy...> - 2004-03-23 19:52:09
|
CVS commit by thughes:

Added some extra .cvsignore entries.

M +1 -0 .cvsignore 1.7
M +2 -0 none/tests/.cvsignore 1.14

--- valgrind/.cvsignore #1.6:1.7
@@ -18,4 +18,5 @@
 cachegrind.out.*
 autom4te.cache
+autom4te-*.cache
 valgrind.pc
 .in_place

--- valgrind/none/tests/.cvsignore #1.13:1.14
@@ -41,8 +41,10 @@
 resolv
 seg_override
+semlimit
 sha1_test
 shortpush
 shorts
 smc1
+susphello
 syscall-restart1
 syscall-restart2
|
|
From: Tom H. <th...@cy...> - 2004-03-23 19:49:00
|
CVS commit by thughes:
Commit rewrite of semaphore handling to avoid having a fixed upper
limit. Patch courtesy of Aleksander Salwa <A....@os...>.
A none/tests/semlimit.c 1.1 [no copyright]
A none/tests/semlimit.stderr.exp 1.1
A none/tests/semlimit.stdout.exp 1.1
A none/tests/semlimit.vgtest 1.1
M +0 -3 coregrind/vg_include.h 1.188
M +78 -56 coregrind/vg_libpthread.c 1.149
M +5 -2 none/tests/Makefile.am 1.34
--- valgrind/coregrind/vg_include.h #1.187:1.188
@@ -117,7 +117,4 @@
#define VG_PTHREAD_STACK_SIZE (1 << 20)
-/* Number of entries in the semaphore-remapping table. */
-#define VG_N_SEMAPHORES 50
-
/* Number of entries in the rwlock-remapping table. */
#define VG_N_RWLOCKS 500
--- valgrind/coregrind/vg_libpthread.c #1.148:1.149
@@ -2490,7 +2490,4 @@ pid_t __vfork(void)
#include <semaphore.h>
-/* This is a terrible way to do the remapping. Plan is to import an
- AVL tree at some point. */
-
typedef
struct {
@@ -2502,57 +2499,50 @@ typedef
vg_sem_t;
-static pthread_mutex_t se_remap_mx = PTHREAD_MUTEX_INITIALIZER;
+#define SEM_CHECK_MAGIC 0x5b1d0772
-static int se_remap_used = 0;
-static sem_t* se_remap_orig[VG_N_SEMAPHORES];
-static vg_sem_t se_remap_new[VG_N_SEMAPHORES];
+typedef
+ struct {
+ union {
+ vg_sem_t* p;
+ int i;
+ } shadow;
+ int err_check;
+ }
+ user_sem_t;
-static vg_sem_t* se_remap ( sem_t* orig )
+
+static vg_sem_t* se_new ( sem_t* orig )
{
- int res, i;
- res = __pthread_mutex_lock(&se_remap_mx);
- my_assert(res == 0);
+ user_sem_t* u_sem = (user_sem_t*)orig;
+ vg_sem_t* vg_sem;
- for (i = 0; i < se_remap_used; i++) {
- if (se_remap_orig[i] == orig)
- break;
- }
- if (i == se_remap_used) {
- if (se_remap_used == VG_N_SEMAPHORES) {
- res = pthread_mutex_unlock(&se_remap_mx);
- my_assert(res == 0);
- barf("VG_N_SEMAPHORES is too low. Increase and recompile.");
- }
- se_remap_used++;
- se_remap_orig[i] = orig;
- /* printf("allocated semaphore %d\n", i); */
- }
- res = __pthread_mutex_unlock(&se_remap_mx);
- my_assert(res == 0);
- return &se_remap_new[i];
+ vg_sem = my_malloc(sizeof(vg_sem_t));
+
+ u_sem->shadow.p = vg_sem;
+ u_sem->err_check = u_sem->shadow.i ^ SEM_CHECK_MAGIC;
+
+ return vg_sem;
}
-static void se_unmap( sem_t* orig )
+static vg_sem_t* se_lookup ( sem_t* orig )
{
- int res, i;
- res = __pthread_mutex_lock(&se_remap_mx);
- my_assert(res == 0);
+ user_sem_t* u_sem = (user_sem_t*) orig;
- for (i = 0; i < se_remap_used; i++) {
- if (se_remap_orig[i] == orig)
- break;
- }
- if (i == se_remap_used) {
- res = pthread_mutex_unlock(&se_remap_mx);
- my_assert(res == 0);
- barf("se_unmap: unmapping invalid semaphore");
- } else {
- se_remap_orig[i] = se_remap_orig[--se_remap_used];
- se_remap_orig[se_remap_used] = 0;
- memset(&se_remap_new[se_remap_used], 0,
- sizeof(se_remap_new[se_remap_used]));
- }
- res = pthread_mutex_unlock(&se_remap_mx);
- my_assert(res == 0);
+ if(!u_sem->shadow.p || ((u_sem->shadow.i ^ SEM_CHECK_MAGIC) != u_sem->err_check))
+ return NULL;
+
+ return u_sem->shadow.p;
+}
+
+static void se_free( sem_t* orig )
+{
+ user_sem_t* u_sem = (user_sem_t*) orig;
+
+ my_free(u_sem->shadow.p);
+
+ u_sem->shadow.p = NULL;
+ u_sem->err_check = 0;
+
+ return;
}
@@ -2567,5 +2557,6 @@ int sem_init(sem_t *sem, int pshared, un
return -1;
}
- vg_sem = se_remap(sem);
+ vg_sem = se_new(sem);
+
res = pthread_mutex_init(&vg_sem->se_mx, NULL);
my_assert(res == 0);
@@ -2573,4 +2564,5 @@ int sem_init(sem_t *sem, int pshared, un
my_assert(res == 0);
vg_sem->count = value;
+ vg_sem->waiters = 0;
return 0;
}
@@ -2581,5 +2573,10 @@ int sem_wait ( sem_t* sem )
vg_sem_t* vg_sem;
ensure_valgrind("sem_wait");
- vg_sem = se_remap(sem);
+ vg_sem = se_lookup(sem);
+ if(!vg_sem) {
+ pthread_error("sem_wait: semaphore overwritten or not initialized");
+ *(__errno_location()) = EINVAL;
+ return -1;
+ }
res = __pthread_mutex_lock(&vg_sem->se_mx);
my_assert(res == 0);
@@ -2601,5 +2598,10 @@ int sem_post ( sem_t* sem )
vg_sem_t* vg_sem;
ensure_valgrind("sem_post");
- vg_sem = se_remap(sem);
+ vg_sem = se_lookup(sem);
+ if(!vg_sem) {
+ pthread_error("sem_post: semaphore overwritten or not initialized");
+ *(__errno_location()) = EINVAL;
+ return -1;
+ }
res = __pthread_mutex_lock(&vg_sem->se_mx);
my_assert(res == 0);
@@ -2622,5 +2624,10 @@ int sem_trywait ( sem_t* sem )
vg_sem_t* vg_sem;
ensure_valgrind("sem_trywait");
- vg_sem = se_remap(sem);
+ vg_sem = se_lookup(sem);
+ if(!vg_sem) {
+ pthread_error("sem_trywait: semaphore overwritten or not initialized");
+ *(__errno_location()) = EINVAL;
+ return -1;
+ }
res = __pthread_mutex_lock(&vg_sem->se_mx);
my_assert(res == 0);
@@ -2643,5 +2650,10 @@ int sem_getvalue(sem_t* sem, int * sval)
vg_sem_t* vg_sem;
ensure_valgrind("sem_getvalue");
- vg_sem = se_remap(sem);
+ vg_sem = se_lookup(sem);
+ if(!vg_sem) {
+ pthread_error("sem_getvalue: semaphore overwritten or not initialized");
+ *(__errno_location()) = EINVAL;
+ return -1;
+ }
res = __pthread_mutex_lock(&vg_sem->se_mx);
my_assert(res == 0);
@@ -2659,5 +2671,10 @@ int sem_destroy(sem_t * sem)
int res;
ensure_valgrind("sem_destroy");
- vg_sem = se_remap(sem);
+ vg_sem = se_lookup(sem);
+ if(!vg_sem) {
+ pthread_error("sem_destroy: semaphore overwritten or not initialized");
+ *(__errno_location()) = EINVAL;
+ return -1;
+ }
res = __pthread_mutex_lock(&vg_sem->se_mx);
my_assert(res == 0);
@@ -2675,5 +2692,5 @@ int sem_destroy(sem_t * sem)
res = pthread_mutex_destroy(&vg_sem->se_mx);
my_assert(res == 0);
- se_unmap(sem);
+ se_free(sem);
return 0;
}
@@ -2685,5 +2702,10 @@ int sem_timedwait(sem_t* sem, const stru
vg_sem_t* vg_sem;
ensure_valgrind("sem_timedwait");
- vg_sem = se_remap(sem);
+ vg_sem = se_lookup(sem);
+ if(!vg_sem) {
+ pthread_error("sem_timedwait: semaphore overwritten or not initialized");
+ *(__errno_location()) = EINVAL;
+ return -1;
+ }
res = __pthread_mutex_lock(&vg_sem->se_mx);
my_assert(res == 0);
--- valgrind/none/tests/Makefile.am #1.33:1.34
@@ -46,4 +46,5 @@
seg_override.stderr.exp \
seg_override.stdout.exp seg_override.vgtest \
+ semlimit.stderr.exp semlimit.stdout.exp semlimit.vgtest \
susphello.stdout.exp susphello.stderr.exp susphello.vgtest \
sha1_test.stderr.exp sha1_test.vgtest \
@@ -62,6 +63,6 @@
fucomip $(INSN_TESTS) \
int munmap_exe map_unmap mremap rcl_assert \
- rcrl readline1 resolv seg_override sha1_test shortpush shorts smc1 \
- susphello pth_blockedsig pushpopseg \
+ rcrl readline1 resolv seg_override semlimit sha1_test \
+ shortpush shorts smc1 susphello pth_blockedsig pushpopseg \
syscall-restart1 syscall-restart2 system \
coolo_sigaction gxx304 yield
@@ -109,4 +110,6 @@
resolv_SOURCES = resolv.c
seg_override_SOURCES = seg_override.c
+semlimit_SOURCES = semlimit.c
+semlimit_LDADD = -lpthread
smc1_SOURCES = smc1.c
sha1_test_SOURCES = sha1_test.c
|
|
From: KJK::Hyperion <no...@li...> - 2004-03-23 18:04:14
|
At 08.16 23/03/2004, Jeremy Fitzhardinge wrote:

> Yes, but there is the issue of simply running out of address space. The
> numbers you mention below suggest that there's less than 2G of address
> space for applications under Windows, which means that if the client is
> sharing the address space with shadow data, there is less than 1G for the
> client's own use.

most applications I've seen (including heavyweights like Opera with dozens of tabs and huge link/tab history files) never require more than half of the address space. I've verified this experimentally with a small program I've written that plots the virtual memory map of a given process. In general, the highest portion of the address space is taken by system DLLs and system data such as the PEB and TEBs, the portion slightly below it by other DLLs, and the lowest by nearly everything else (heaps, stacks, mapped shared memory, the main executable, etc.)

They say a picture tells a thousand words, so I've attached a sample output. It shows the memory usage of the aforementioned instance of Opera, 1 pixel per memory page. The colors: black is free memory, yellow is DLL-mapped memory (the yellow bar at the top is the main executable), green is anonymous virtual memory (dark if reserved but not committed) and red is mapped memory (dark if mapped from a file). Addresses increase from top to bottom and from left to right. The lowest address is 0x00010000 and the highest 0x7FFEFFFF.

Anyway, note the *large* black space in the middle. Now, consider that this is an anomaly. The second largest virtual memory space on this machine (the instance of Eudora I'm typing this e-mail in) has *way* more than half of the address space free, in a nice contiguous block in the middle.

Other things to consider are that 1) all DLLs are relocatable, so those yellow bars you see could easily be moved up or down should necessity arise, 2) reserved anonymous memory (dark green) can be considered to all practical effects free (is the shadow memory sparse?) and 3) to 'grind *really* memory-consuming applications you can always boot with the /3GB kernel switch (it does what it sounds like it does).

>> Anyway, how do tools register with the JIT engine so they are called at
>> certain points?
>
> Um, well, they get to instrument the code as it goes through the JIT.

so they are linked statically?

> There's also special callbacks for things like allocations, but the
> majority is done with instrumentation.

hmmm. I'll have to get back at this, when I have some more time

> Julian's internals document is still a reasonable start for the overall
> design, though many of the details have changed. Using --trace-* options
> will give you some idea about what's going on inside. It isn't wildly
> complex, but there are a lot of details.

cool. I'll try as soon as I can

> Well, for each application level thread, Valgrind creates a kernel thread
> in order to deal with blocking syscalls.

perfect

> Well, that's only about 2G. Typically under linux, the client address
> space is from 0-3G (though it can be different for different kernel
> configurations).

it's not the default for Windows, but it's supported. It has a problem in that, even when enabled, a certain flag must be set in the main executable for the address space to be really 3GB, but I know a way to work around that
|
|
From: Aleksander S. <A....@os...> - 2004-03-23 14:21:17
|
Here comes final patch for semaphores :)

It removes VG_N_SEMAPHORES limitation. Now it uses malloc/free for each sem_init/sem_destroy, as Jeremy suggested. I've done some benchmarking - it works even faster than my previous version (with pointers to a big global table). It shows again that "programmers are notoriously bad at predicting how their programs actually perform", as GCC's manual says ;-)

Could someone with write access do a "cvs ci" ?

Best regards,
Aleksander.
|
|
From: Nicholas N. <nj...@ca...> - 2004-03-23 09:05:46
|
On Mon, 22 Mar 2004, Jeremy Fitzhardinge wrote:

> > Anyway, how do tools register with the JIT engine so they are called
> > at certain points?
>
> Um, well, they get to instrument the code as it goes through the JIT.
> There's also special callbacks for things like allocations, but the
> majority is done with instrumentation.

Tools don't need to "register" as such; by choosing the right names for the appropriate functions (eg. the instrumentation function) they get called at the right times. At least, that's how it used to work; recent changes may have affected this, but the basic idea is the same.

> Why do you say that? Memcheck, addrcheck, cachegrind, and helgrind use
> shadow memory a lot (at least every memory access), and making access to
> the shadow any slower would have enormous performance effects.

(Cachegrind doesn't use shadow memory.)

> Julian's internals document is still a reasonable start for the overall
> design, though many of the details have changed. Using --trace-* options
> will give you some idea about what's going on inside. It isn't wildly
> complex, but there are a lot of details.

You could also look at http://www.cl.cam.ac.uk/~njn25/pubs/valgrind2003.ps.gz, which is a bit more recent than the internals document, and is mostly still up-to-date. Also, look at the example skins: "Lackey", and the one in the example/ directory.

N
|
|
From: Jeremy F. <je...@go...> - 2004-03-23 07:16:21
|
Quoting "KJK::Hyperion" <no...@li...>:

> I'm not too worried about memory usage (well, there's the issue of placing
> Valgrind data so that it doesn't conflict with certain non-relocable system
> DLLs... but there should be plenty of room in the middle). Separation of
> address spaces is more a matter of "playing by the rules".

Yes, but there is the issue of simply running out of address space. The numbers you mention below suggest that there's less than 2G of address space for applications under Windows, which means that if the client is sharing the address space with shadow data, there is less than 1G for the client's own use.

> Anyway, how do tools register with the JIT engine so they are called at
> certain points?

Um, well, they get to instrument the code as it goes through the JIT. There's also special callbacks for things like allocations, but the majority is done with instrumentation.

> because the issue now is whether they can store most data in Valgrind's
> process and only require small "registration data" in the client, or not.
> Ideally, all tools (except maybe memcheck) should run in Valgrind's process
> and be called through some form of RPC by the JIT (running in the client),
> so their execution won't interfere with the client

Why do you say that? Memcheck, addrcheck, cachegrind, and helgrind use shadow memory a lot (at least every memory access), and making access to the shadow any slower would have enormous performance effects.

> Apropos, I've downloaded some CVS release, but I have a hard time
> understanding much of it. Basically, all I've understood is that Valgrind
> has its own scheduler. The rest looks pretty obscure. What do you think
> would be the best way to get started on Valgrind internals?

Julian's internals document is still a reasonable start for the overall design, though many of the details have changed. Using --trace-* options will give you some idea about what's going on inside. It isn't wildly complex, but there are a lot of details.

> On an unrelated topic: does the core code depend on GCCisms? I've only seen
> surprisingly little inline assembler, some expression-with-statements,
> noreturn functions and functions with registry parameters - all of which
> have some equivalent in Windows compilers - and playing games with symbol
> names, which doesn't have effect on Win32. Is there much more?

Local functions with lexically-scoped variables are probably the most unportable gcc extension.

> > It is really multithreaded as far as the client is concerned;
>
> *this* is what I'm not sure about. I've read the latest Microsoft SQL
> Server has its own scheduler, and I've read a pretty detailed description
> of it on the weblog of some Microsoft guy. It looks like it can work only
> for very specific operations, like only for file I/O - SQL server can
> afford it because, like all database servers, it's largely self-contained,
> but most applications aren't

Well, for each application level thread, Valgrind creates a kernel thread in order to deal with blocking syscalls. It's just that the application code itself doesn't run in that thread. In other words, Valgrind looks like a multi-threaded program to the kernel, even if it does simple time-slicing within one thread for the client application threading.

> > Hm, that isn't all that high. Does that mean a process has less than 2G
> > of available address space under XP?
>
> maybe I'm confusing addresses. You know, all those hexadecimal digits...
> anyway </me fetches calculator>, the highest user-mode address is reported
> here (Windows 2000) as being 0x7FFEFFFF, meaning 64 Kb are unavailable. The
> shared read-only data begins at 0x7FFE0000, and includes the tick counter,
> some information about the kernel and of course the system call thunk.
> Not sure where the probe address is at, and if its semantics are what I
> believe they are (probably not)

Well, that's only about 2G. Typically under linux, the client address space is from 0-3G (though it can be different for different kernel configurations).

[ sorry about the formatting - nasty webmail ]

J
|
|
From: Jeremy F. <je...@go...> - 2004-03-23 06:56:22
|
Quoting Tom Hughes <th...@cy...>:

> I've come up with a simpler solution for my problem now. I've added
> a redirect from _dl_sysinfo_int80 to the system call routine in
> valgrind's trampoline page, which valgrind will then recognise and
> do its special unwind trick on.

Hah - I was just about to send you that patch to try. Since it's a two-liner, it seems like the right solution.

J
|
|
From: Tom H. <to...@co...> - 2004-03-23 03:23:14
|
Nightly build on dunsmere ( Fedora Core 1 ) started at 2004-03-23 03:20:02 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

readline1: valgrind ./readline1
resolv: valgrind ./resolv
seg_override: valgrind ./seg_override
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests
----------------------------------------
== 150 tests, 2 stderr failures, 1 stdout failure =================
helgrind/tests/inherit (stderr)
memcheck/tests/trivialleak (stderr)
none/tests/exec-sigmask (stdout)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2004-03-23 03:18:25
|
Nightly build on audi ( Red Hat 9 ) started at 2004-03-23 03:15:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

rcrl: valgrind ./rcrl
readline1: valgrind ./readline1
resolv: valgrind ./resolv
seg_override: valgrind ./seg_override
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests
----------------------------------------
== 150 tests, 2 stderr failures, 0 stdout failures =================
helgrind/tests/inherit (stderr)
memcheck/tests/trivialleak (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2004-03-23 03:13:08
|
Nightly build on ginetta ( Red Hat 8.0 ) started at 2004-03-23 03:10:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests
----------------------------------------
== 150 tests, 6 stderr failures, 0 stdout failures =================
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
memcheck/tests/nanoleak (stderr)
memcheck/tests/trivialleak (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2004-03-23 03:08:07
|
Nightly build on alvis ( Red Hat 7.3 ) started at 2004-03-23 03:05:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests
----------------------------------------
== 150 tests, 6 stderr failures, 1 stdout failure =================
helgrind/tests/inherit (stderr)
memcheck/tests/badfree-2trace (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/brk (stderr)
memcheck/tests/error_counts (stdout)
memcheck/tests/new_nothrow (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
|
|
From: Tom H. <th...@cy...> - 2004-03-23 03:06:13
|
Nightly build on standard ( Red Hat 7.2 ) started at 2004-03-23 03:00:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

rcrl: valgrind ./rcrl
readline1: valgrind ./readline1
resolv: valgrind ./resolv
seg_override: valgrind ./seg_override
sha1_test: valgrind ./sha1_test
shortpush: valgrind ./shortpush
shorts: valgrind ./shorts
smc1: valgrind ./smc1
susphello: valgrind ./susphello
syscall-restart1: valgrind ./syscall-restart1
syscall-restart2: valgrind ./syscall-restart2
system: valgrind ./system
yield: valgrind ./yield
-- Finished tests in none/tests
----------------------------------------
== 150 tests, 2 stderr failures, 0 stdout failures =================
helgrind/tests/inherit (stderr)
memcheck/tests/badfree-2trace (stderr)
make: *** [regtest] Error 1
|