From: Nicholas N. <nj...@ca...> - 2005-01-26 22:40:42
|
On Wed, 26 Jan 2005, Jeremy Fitzhardinge wrote:

> I think that's when I used AS_HELP_STRING for the --with[out]-pie
> option. I was just following the manual; I didn't realize it would
> cause a compatibility problem. If dropping that is all it takes to get
> old autoconfs working, then we should.

That sounds right to me.

N
|
From: Jeremy F. <je...@go...> - 2005-01-26 21:55:49
|
On Wed, 2005-01-26 at 11:08 -0800, Robert Walsh wrote:

> I'm of the opinion that we should be trying to make our autoconf stuff
> as simple as possible to avoid situations like this. Maybe by choosing
> a baseline version (perhaps the version that ships with the earliest
> distribution release that we support) and not using any macros that
> don't exist in that version? Or shipping any extra macros above this
> that we need.

I think that's when I used AS_HELP_STRING for the --with[out]-pie option. I was just following the manual; I didn't realize it would cause a compatibility problem. If dropping that is all it takes to get old autoconfs working, then we should. Or is it more complex than that?

[ It's irritating that the autoconf docs don't say when various features were introduced, and which ones will cause backwards-compatibility problems. But I guess that's why I have 2 versions of autoconf and 6 versions of automake installed. ]

J
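[Editor's note: a minimal sketch of the kind of change being discussed — replacing AS_HELP_STRING with a hand-formatted literal help string so that older autoconfs can still process the input. The option name matches the --with[out]-pie option mentioned above, but the macro bodies here are illustrative, not Valgrind's actual configure.in.]

```m4
dnl Instead of the AS_HELP_STRING form, which older autoconfs reject:
dnl   AC_ARG_WITH([pie], [AS_HELP_STRING([--with-pie], [build as PIE])], ...)
dnl write the help text as a plain literal, which any autoconf accepts:
AC_ARG_WITH([pie],
[  --with-pie              build position-independent executables],
[want_pie=$withval],
[want_pie=no])

if test "$want_pie" = "yes"; then
  AC_DEFINE([ENABLE_PIE], [1], [Build as a position-independent executable])
fi
```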
|
From: Bryan O'S. <bo...@se...> - 2005-01-26 19:30:45
|
On Wed, 2005-01-26 at 11:25 -0800, Yuri wrote:

> Isn't autoconf designed that way that only the resulting configure script
> should be shipped and used by the installing party? Configure is designed
> to be as portable as possible, not autoconf.

The problem is that for CVS developers, configure isn't checked into the tree (which is appropriate). Thus you need to have the appropriate version of autoconf to even build the CVS head.

<b
|
From: Yuri <yu...@ts...> - 2005-01-26 19:25:43
|
Robert Walsh wrote:

> (skip)
>
> I'm of the opinion that we should be trying to make our autoconf stuff
> as simple as possible to avoid situations like this. Maybe by choosing
> a baseline version (perhaps the version that ships with the earliest
> distribution release that we support) and not using any macros that
> don't exist in that version? Or shipping any extra macros above this
> that we need.
>
> Thoughts?

Isn't autoconf designed so that only the resulting configure script should be shipped and used by the installing party? Configure is designed to be as portable as possible, not autoconf.

Yuri

-- excerpt from "info autoconf" --

> Autoconf is a tool for producing shell scripts that automatically
> configure software source code packages to adapt to many kinds of
> UNIX-like systems. The configuration scripts produced by Autoconf are
> independent of Autoconf when they are run, so their users do not need
> to have Autoconf.

-- end excerpt --
|
From: Robert W. <rj...@du...> - 2005-01-26 19:08:24
|
I just tried building Valgrind for the folks here at work on our FC1 machine. The build fell over because FC1 doesn't ship with autoconf 2.59, which Valgrind apparently needs. I can't install that version of autoconf very easily on this machine because it's also used as one of our product build machines and we don't want to change the system software.

BTW: this appears to be a relatively new requirement - I built a version on FC1 back in December without any problems.

I'm of the opinion that we should be trying to make our autoconf stuff as simple as possible to avoid situations like this. Maybe by choosing a baseline version (perhaps the version that ships with the earliest distribution release that we support) and not using any macros that don't exist in that version? Or shipping any extra macros above this that we need.

Thoughts?
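[Editor's note: one concrete way to enforce the baseline version proposed here is to declare it in configure.in, so a too-old autoconf fails immediately with a clear message instead of generating a broken configure script. The version number below is purely illustrative.]

```m4
dnl Hypothetical baseline: the autoconf shipped with the oldest
dnl distribution release the project supports.
AC_PREREQ([2.53])
```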
|
From: Jeremy F. <je...@go...> - 2005-01-26 18:49:47
|
On Wed, 2005-01-26 at 19:14 +1100, Eyal Lebedinsky wrote:

> I do not have an easy way to reproduce, however I can provide better logs.
> The attached one shows a successful run of the currently failing program
> followed by one that aborted quietly.
>
> The failure in the ipc area may explain why my program always dies while
> holding a semaphore, which I must remove manually before my system is
> usable again.

Is your program multithreaded? Your trace makes it appear that it dies in the sys_ipc syscall doing a semget, but Valgrind does literally nothing in this case - there's nothing for it to check, so it does nothing.

Could you try again with --trace-signals=yes. And could you please, please, please file a bug and attach these logs to it.

> > There are only a few places within Valgrind where it would quietly die
> > with SIGSEGV without being able to say anything. I'm not sure what it
> > might be in this case.
>
> Hope the above helps, I will turn on my own logging to see what I was
> trying to do at the time of the crash. It is "good" that the crash is
> very consistent now, it used to be very variable before. Yes, I do see
> the cup half full...

Yes, that will definitely help.

J
|
From: Eyal L. <ey...@ey...> - 2005-01-26 08:14:41
|
Jeremy Fitzhardinge wrote:

> On Mon, 2005-01-24 at 21:35 +1100, Eyal Lebedinsky wrote:
>
>> It still crashes, but differently. faultstatus does not fail
>> anymore, but my tests still fail. The pattern is that one program
>> fails with
>>
>> /ssa/builds/20050118g-vgi/bin/showtime: line 81: 10967 Segmentation fault $valgrind_prefix --logfile-fd=9 $orig "$@" 9>>$log
>>
>> The log does not show any errors. Actually, this last run of
>> showtime has not a single line from VG in the log, as if vg
>> itself died quietly.
>>
>> And after this I cannot run anything. This is due to the segfault
>> happening when the failed program was holding a semaphore. This
>> is most unusual because that sem is held for a very brief time,
>> and I do not expect a random crash to happen just then. The fact
>> is that these crashes are very common, actually the earlier
>> problem (the sig11 now patched) was causing exactly the same
>> thing, and I always had to manually remove the sem. Almost as if
>> the fix is not just right.
>
> Well, I did fix a real bug, but there could be another one. Lots of
> bugs have "died with SIGSEGV" as their symptom.
>
> Could you file a bug for this? And if you can give me some way to
> reproduce this, that would be helpful.

I do not have an easy way to reproduce, however I can provide better logs. The attached one shows a successful run of the currently failing program followed by one that aborted quietly.

The failure in the ipc area may explain why my program always dies while holding a semaphore, which I must remove manually before my system is usable again.

> There are only a few places within Valgrind where it would quietly die
> with SIGSEGV without being able to say anything. I'm not sure what it
> might be in this case.

Hope the above helps, I will turn on my own logging to see what I was trying to do at the time of the crash. It is "good" that the crash is very consistent now, it used to be very variable before. Yes, I do see the cup half full...

> J

--
Eyal Lebedinsky (ey...@ey...) <http://samba.org/eyal/>
attach .zip as .dat
|
From: Jeremy F. <je...@go...> - 2005-01-26 05:22:16
|
On Tue, 2005-01-25 at 17:28 -0800, Naveen Kumar wrote:

> --- Jeremy Fitzhardinge <je...@go...> wrote:
>
>> On Tue, 2005-01-25 at 12:32 -0800, Naveen Kumar wrote:
>>
>>> Hi all,
>>> I have finally got to the stage where the following commands
>>>
>>> valgrind
>>> valgrind --help
>>> valgrind --tool=xxxxx
>>> valgrind --tool=xxxxx --help
>>>
>>> work. Some items of note. I am using the following address space mapping
>>>
>>> client_base     9000000  (498MB)
>>> client_mapbase 28280000  (998MB)
>>> client_end     66900000    (1MB)
>>> shadow_base    66A00000 (1684MB)
>>> shadow_end     CFE20000    (1MB)
>>> valgrind_base  D0000000  (255MB)
>>> valgrind_end   DFFFFFFF
>>>
>>> since the stack is positioned below the text segment (for sol-x86) the
>>> whole space above text until kernelbase can be used. kernelbase has
>>> been #defined to 0xE0000000. CLIENT_BASE starts from 0x09000000
>>> instead of 0x00000000 as in Linux-x86. In layout_remaining_space
>>
>> Why start at 0x9000000? client_base is the base of the client address
>> space, so it has to include the client stack. Setting client_base to
>> 0x9000000 implies that it is absolutely forbidden, under any
>> circumstances, for the client to touch memory in the range 0x0-0x9000000.
>
> I tried setting client_base to 0 but got a fault at
> munmap((void*)VG_(client_base), client_size);
>
> Incurred fault #6, FLTBOUNDS %pc = 0x00000000
> siginfo: SIGSEGV SEGV_MAPERR addr=0x00000000
> Received signal #11, SIGSEGV [default]
> siginfo: SIGSEGV SEGV_MAPERR addr=0x00000000
> *** process killed ***
>
> Actually I just looked into it through the debugger and this is what
> happens. Just after completion of the munmap system call, when I try to
> look at the stack this is what I get:
>
> x/10w $esp
> 0x8047ab0: Cannot access memory at address 0x8047ab0
>
> Is it because the stack is getting mapped out?
>
> This is the map of the process (first few entries):
>
> 08046000    8K read/write/exec  [ stack ]
> 08048000  188K read/exec        /export/home/msat/my_local/bin/valgrind
> 08077000   16K read/write       /export/home/msat/my_local/bin/valgrind
> 0807B000  140K read/write       [ heap ]
> 0809E000    8K read/write/exec  [ heap ]
> 66900000 1024K -                [ anon ]
> D0000000  528K read/exec        dev:102,7 ino:181652
> D0084000    8K read/write       dev:102,7 ino:181652
> D0086000 1384K read/write       [ anon ]
> D1000000  576K read/exec        /usr/lib/libc.so.1
> D1090000   64K read/write/exec  /usr/lib/libc.so.1

You need to move the stack too. In Linux, the stack lives at the top of the address space, so stage2 uses it directly. In Solaris, when stage1 sets up the address space for stage2, it needs to allocate memory for the stack in the top part of the address space, and make sure that stage2 starts off using it. This means you need to build the stack from scratch, rather than diddle with the kernel-generated one as in the Linux case.

Once control has been passed to stage2, nothing should be using the original stack, so it should be OK to unmap (and then re-create for the client's use).

J
|
From: <js...@ac...> - 2005-01-26 03:56:56
|
Nightly build on phoenix (SuSE 9.1) started at 2005-01-26 03:50:00 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

-- Finished tests in none/tests ----------------------------------------
== 194 tests, 15 stderr failures, 0 stdout failures =================
corecheck/tests/as_mmap (stderr)
corecheck/tests/fdleak_fcntl (stderr)
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/writev (stderr)
make: *** [regtest] Error 1
|
From: Naveen K. <g_n...@ya...> - 2005-01-26 03:56:33
|
http://story.news.yahoo.com/news?tmpl=story&cid=2265&ncid=738&e=9&u=/infoworld/20050126/tc_infoworld/53597

Finally! This should make the job of porting valgrind to solaris easier.

Naveen

__________________________________
Do you Yahoo!?
Read only the mail you want - Yahoo! Mail SpamGuard.
http://promotions.yahoo.com/new_mail
|
From: Tom H. <to...@co...> - 2005-01-26 03:24:52
|
Nightly build on dunsmere (Fedora Core 3) started at 2005-01-26 03:20:05 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 201 tests, 12 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2005-01-26 03:21:41
|
Nightly build on audi (Red Hat 9) started at 2005-01-26 03:15:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/scalar_exit_group (stderr)
memcheck/tests/scalar_supp (stderr)
memcheck/tests/writev (stderr)
none/tests/tls (stdout)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2005-01-26 03:15:17
|
Nightly build on ginetta (Red Hat 8.0) started at 2005-01-26 03:10:04 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

seg_override: valgrind --num-callers=4 ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 199 tests, 12 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2005-01-26 03:09:10
|
Nightly build on alvis (Red Hat 7.3) started at 2005-01-26 03:05:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 199 tests, 14 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/post-syscall (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1
|
From: Tom H. <th...@cy...> - 2005-01-26 03:05:04
|
Nightly build on standard (Red Hat 7.2) started at 2005-01-26 03:00:03 GMT

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind --num-callers=4 ./yield
-- Finished tests in none/tests ----------------------------------------
== 199 tests, 13 stderr failures, 0 stdout failures =================
helgrind/tests/allok (stderr)
helgrind/tests/deadlock (stderr)
helgrind/tests/inherit (stderr)
helgrind/tests/race (stderr)
helgrind/tests/race2 (stderr)
helgrind/tests/readshared (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/vgtest_ume (stderr)
make: *** [regtest] Error 1
|
From: Naveen K. <g_n...@ya...> - 2005-01-26 01:28:52
|
--- Jeremy Fitzhardinge <je...@go...> wrote:
> On Tue, 2005-01-25 at 12:32 -0800, Naveen Kumar
> wrote:
> > Hi all,
> > I have finally got to the stage where the
> following
> > commands
> >
> > valgrind
> > valgrind --help
> > valgrind --tool=xxxxx
> > valgrind --tool=xxxxx --help
> >
> > work. Some items of note. I am using the following
> > address space mapping
> >
> > client_base 9000000 (498MB)
> > client_mapbase 28280000 (998MB)
> > client_end 66900000 (1MB)
> > shadow_base 66A00000 (1684MB)
> > shadow_end CFE20000 (1MB)
> > valgrind_base D0000000 (255MB)
> > valgrind_end DFFFFFFF
> >
> > since the stack is positioned below the text
> > segment(for sol-x86) the whole space above text
> until
> > kernelbase can be used. kernelbase has been
> #defined
> > to 0xE0000000. CLIENT_BASE starts from 0x09000000
> > instead of 0x00000000 as in Linux-x86. In
> > layout_remaining_space
>
> Why start at 0x9000000? client_base is the base of
> the client address
> space, so it has to include the client stack.
> Setting client_base to
> 0x9000000 implies that it is absolutely forbidden,
> under any
> circumstances, for the client to touch memory in the
> range
> 0x0-0x9000000.
I tried setting client_base to 0 but got a fault at
munmap((void*)VG_(client_base), client_size);
Incurred fault #6, FLTBOUNDS %pc = 0x00000000
siginfo: SIGSEGV SEGV_MAPERR addr=0x00000000
Received signal #11, SIGSEGV [default]
siginfo: SIGSEGV SEGV_MAPERR addr=0x00000000
*** process killed ***
Actually I just looked into it through the debugger
and this is what happens. Just after completion of the
munmap system call when I try to look at the stack
this is what I get
x/10w $esp
0x8047ab0: Cannot access memory at address 0x8047ab0
Is it because the stack is getting mapped out?
This is the map of the process(first few entries)
08046000 8K read/write/exec [ stack ]
08048000 188K read/exec
/export/home/msat/my_local/bin/valgrind
08077000 16K read/write
/export/home/msat/my_local/bin/valgrind
0807B000 140K read/write [ heap ]
0809E000 8K read/write/exec [ heap ]
66900000 1024K - [ anon ]
D0000000 528K read/exec dev:102,7
ino:181652
D0084000 8K read/write dev:102,7
ino:181652
D0086000 1384K read/write [ anon ]
D1000000 576K read/exec /usr/lib/libc.so.1
D1090000 64K read/write/exec /usr/lib/libc.so.1
>
> > The problem I am currently having is that when
> > executing in stage2, the program crashes at
> fprintf(or
> > printf) in load_ELF(ume.c). A printf in vg_main.c
> > shows no problems but a printf at any point in
> > load_ELF gives a seg fault. Any ideas ???
>
> Is it just printf, or any libc function? Is memory
> padded at the time?
> Could it be something trying to allocate and
> failing? What's the fault
> address?
This happens only for printf/fprintf. malloc etc work
fine.
G
__________________________________
Do you Yahoo!?
All your favorites on one personal page Try My Yahoo!
http://my.yahoo.com
|