From: Julian S. <js...@ac...> - 2005-07-19 14:07:31

> I think Valgrind is a fabulous achievement. But regarding the 3.0 line,
> I think it should be made more clear that this version still doesn't
> take advantage of the large address space that is available for
> processes on AMD64 platforms.

There are various limitations etc which will be documented, and this
will be amongst them. It's not a reason for delaying 3.0 though.

J

From: Yeshurun, M. <mei...@in...> - 2005-07-19 13:40:19

I think Valgrind is a fabulous achievement. But regarding the 3.0
line, I think it should be made more clear that this version still
doesn't take advantage of the large address space that is available
for processes on AMD64 platforms. I think this is the main reason why
people are so eagerly waiting for this version.

Regards,
Meir

-----Original Message-----
From: Julian Seward
Sent: Tuesday, July 19, 2005 4:28 PM
To: val...@li...
Subject: [Valgrind-users] Release plan for Valgrind 3.0

[quoted copy of the release-plan announcement omitted; the full
message appears elsewhere in this thread]

From: Julian S. <ju...@va...> - 2005-07-19 13:27:22

Over the past year, a tremendous amount of development effort has gone
into the Valgrind 3 line. It now runs on x86 and amd64 quite usably,
and ppc32 (Linux) is looking promising. There have been a large number
of bug fixes, functionality improvements, and restructuring of the
source code to enhance accessibility and maintainability. A GUI
(Valkyrie) is also under development, and we plan to make coordinated
releases of that along with Valgrind in the future.

The time for a Valgrind-3.0 release draws near. I propose to have a
release candidate (feature freeze) by next Monday 25 July, with a
possible release on Monday 1 Aug.

Valgrind-3.0 is already stable and usable on x86 and amd64. If you
haven't already done so, please check out and build it (easy: see
http://www.valgrind.org/devel/cvs_svn.html), test it on whatever
applications are critical for you, and let us know of any critical
breakage. The more people who do this now, the better quality 3.0
release we will have.

Outstanding issues which I'm aware of, and which need to be fixed, are:

- decide on a version numbering / branch management scheme
- decide about how to handle dependency on libvex
- minor tweaks to XML output and to logfile naming (me)
- fix:
    #88116 (x86, "enter" variant causes assertion)
    #96542 (x86, possible assertions with push variants)
    #87263 (x86, segment stuff)
    #103594 (x86, FICOM)
    INT/INT3 insns (x86)
    Missing 0xA3 insn (amd64)
  All of these I'll chase.
- Update documentation (me, + ??)

I'm sure there are other things I've forgotten/am not aware of.

Much effort recently has gone into making ppc32-linux work well, but
that target is not yet really usable. In particular there are problems
with getting a low noise level from Memcheck, and some difficulties
with floating point. Work to resolve these is in progress, but I do
not know if it will be successful in the limited time before the
release.

We are already overdue for a 3.0 release and I am reluctant to delay
it further in order to have ppc32 support. Therefore I propose to
present 3.0 as a production-quality release for x86 and amd64 only,
and if ppc32 happens to be usable, well that's an extra bonus.
Obviously it would be wonderful to have ppc32 usable too, and I will
endeavour to cause that to be the case.

So:

- pls download, build, test, report critical bugs
- JosefW: what is the calltree status for the 3 line? It would be good
  to ensure that calltree/kcachegrind works well with 3.0.

J

From: Tom H. <to...@co...> - 2005-07-19 13:04:18
In message <200...@or...>
Christoph Bartoschek <bar...@or...> wrote:
>> That's a bit nasty. Can you run with --trace-syscalls=yes and see
>> what the last system call it executed before that assertion was?
>
> Here is the output for with --trace-syscalls=yes
>
> SYSCALL[13897,1]( 9) sys_mmap2 ( 0x31D9A00000, 1087432, 5, 2050, 8, 0 ) -->
> [pre-fail] Failure(0xC)
>
> valgrind: syswrap-main.c:660 (vgPlain_client_syscall): Assertion
> 'eq_SyscallArgs(&sci->args, &sci->orig_args)' failed.
The pre-handler for mmap2 was corrupting the arguments when it
returned a failure, which it shouldn't do. I've committed a fix
for it now.
--
Tom Hughes (to...@co...)
http://www.compton.nu/
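The invariant behind that assertion can be modelled in a few lines. This is a toy sketch with made-up names, not Valgrind's internals: a pre-handler that refuses a syscall must leave the saved argument block bit-identical to the original, which is exactly what `eq_SyscallArgs` checks.

```c
#include <assert.h>
#include <string.h>

/* Toy model of the invariant behind the assertion: if a pre-handler
   fails a syscall, the argument block must still equal the original.
   Names are illustrative only, not Valgrind internals. */
typedef struct { long num; long arg[6]; } SyscallArgs;

static int eq_SyscallArgs(const SyscallArgs *a, const SyscallArgs *b)
{
    return memcmp(a, b, sizeof *a) == 0;
}

/* A buggy pre-handler clobbers the args before failing... */
static int pre_mmap2_buggy(SyscallArgs *args)
{
    args->arg[0] = 0;  /* corrupts the caller's copy */
    return -12;        /* ENOMEM: pre-fail */
}

/* ...a correct one fails without touching them. */
static int pre_mmap2_fixed(const SyscallArgs *args)
{
    (void)args;
    return -12;
}
```

Running the buggy variant trips the equality check; the fixed one does not.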
From: Christoph B. <bar...@or...> - 2005-07-19 12:07:39

> That's a bit nasty. Can you run with --trace-syscalls=yes and see
> what the last system call it executed before that assertion was?

Here is the output with --trace-syscalls=yes:

SYSCALL[13897,1]( 2) ... [async] --> Failure(0x2)
SYSCALL[13897,1]( 2) sys_open ( 0x1191170D(/etc/ld.so.cache), 0 ) --> [async] ...
SYSCALL[13897,1]( 2) ... [async] --> Success(0x8)
SYSCALL[13897,1]( 5) sys_newfstat ( 8, 0x349F77B0 )[sync] --> Success(0x0)
SYSCALL[13897,1]( 9) sys_mmap2 ( 0x0, 137887, 1, 2, 8, 0 )[sync] --> Success(0x34903000)
SYSCALL[13897,1]( 3) sys_close ( 8 )[sync] --> Success(0x0)
SYSCALL[13897,1]( 2) sys_open ( 0x349236F6(/usr/X11R6/lib64/libXcursor.so.1), 0 ) --> [async] ...
SYSCALL[13897,1]( 2) ... [async] --> Success(0x8)
SYSCALL[13897,1]( 0) sys_read ( 8, 0x349F7918, 640 ) --> [async] ...
SYSCALL[13897,1]( 0) ... [async] --> Success(0x280)
SYSCALL[13897,1]( 5) sys_newfstat ( 8, 0x349F77F0 )[sync] --> Success(0x0)
SYSCALL[13897,1]( 9) sys_mmap2 ( 0x31D9A00000, 1087432, 5, 2050, 8, 0 ) --> [pre-fail] Failure(0xC)

valgrind: syswrap-main.c:660 (vgPlain_client_syscall): Assertion
'eq_SyscallArgs(&sci->args, &sci->orig_args)' failed.

...

Christoph Bartoschek

From: Allin C. <cot...@wf...> - 2005-07-19 12:05:10

I've been using valgrind for a couple of years and am very pleased
with it. An invaluable tool, thank you.

One problem I've encountered recently. I have valgrind 2.4.0 installed
on my machine at work (glibc 2.3.5, linux 2.6.12.2) and it works
perfectly. At home I have an older system with glibc 2.3.2 and kernel
2.4.23. I've been running valgrind 2.2.0 on that system with no
problems.

I tried updating to 2.4.0. Valgrind configured, built and installed
OK, but it won't run: it immediately segfaults (even just "valgrind
--version"). (Both systems use gcc 3.4.4.) I tried getting a trace
from gdb but it seems the stack is smashed right away: bt just gives
"???". I tried ensuring that all the files installed by 2.2.0 were
deleted in case some funny mixture of versions was happening, but this
didn't help. Finally, I tried downloading the 2.2.0 source and
rebuilding it: again, 2.2.0 works fine.

Has anyone else seen this with 2.4.0? Does anyone have a suggestion on
how I could produce a more useful report?

--
Allin Cottrell
Department of Economics
Wake Forest University, NC

From: Julian S. <js...@ac...> - 2005-07-19 10:03:17

> Christoph Bartoschek <bar...@or...> wrote:
> > Am Dienstag 19 Juli 2005 11:02 schrieb Tom Hughes:
> >> Unfortunately there is, yes. In fact there tends to be less address
> >> space available on AMD64 than on x86 because of limitations on what
> >> address we can load valgrind at.
> >
> > Hmm, less than on x86? An AMD64 port is fairly useless for me if it
> > cannot run programs which require about 15GB.
>
> The plan is to break the memory up into chunks instead, with each
> chunk being either a client chunk or a valgrind chunk. That way the
> entire address space will be available.

Christoph

Of course this is a limitation we're very aware of. As Tom says there
is a plan to get rid of it. But our time is limited, and rewriting the
address space manager so it works well for both 32- and 64-bit
platforms is a difficult and complicated exercise, and it hasn't
happened yet. I hope to get to it in the next couple of months if
nobody beats me to it.

J

From: Tom H. <to...@co...> - 2005-07-19 09:25:12
In message <200...@or...>
Christoph Bartoschek <bar...@or...> wrote:
> Am Dienstag 19 Juli 2005 11:02 schrieb Tom Hughes:
>> Unfortunately there is, yes. In fact there tends to be less address
>> space available on AMD64 than on x86 because of limitations on what
>> address we can load valgrind at.
>
>
> Hmm, less than on x86? An AMD64 port is fairly useless for me if it cannot
> run programs which require about 15GB.
Yes, valgrind has to load below 0x80000000 because of limitations on
which code models the toolchain supports. On x86 we load it as close
to the kernel as we can, typically around 0xb0000000 or so.
The current scheme partitions the memory space so that everything
below the valgrind binary is for the client and everything above it
for valgrind.
The plan is to break the memory up into chunks instead, with each
chunk being either a client chunk or a valgrind chunk. That way the
entire address space will be available.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Christoph B. <bar...@or...> - 2005-07-19 09:13:30

Am Dienstag 19 Juli 2005 11:02 schrieb Tom Hughes:
> Unfortunately there is, yes. In fact there tends to be less address
> space available on AMD64 than on x86 because of limitations on what
> address we can load valgrind at.

Hmm, less than on x86? An AMD64 port is fairly useless for me if it
cannot run programs which require about 15GB.

Greets,
Christoph Bartoschek

From: Tom H. <to...@co...> - 2005-07-19 09:02:32
In message <200...@or...>
Christoph Bartoschek <bar...@or...> wrote:
> is there a memory limit on the AMD64 in the current development branch of
> valgrind? I have an application which tries to load code (I guess via dlopen)
> and complains about not enough memory.
Unfortunately there is, yes. In fact there tends to be less address
space available on AMD64 than on x86 because of limitations on what
address we can load valgrind at.
We do have a plan to fix this so that the address space does not have
to be rigidly partitioned between valgrind space and client space.
> Additionally the log file says.
> -------------------------------------------------------------------------------------------
> valgrind: syswrap-main.c:658 (vgPlain_client_syscall): Assertion
> 'eq_SyscallArgs(&sci->args, &sci->orig_args)' failed.
That's a bit nasty. Can you run with --trace-syscalls=yes and see
what the last system call it executed before that assertion was?
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Christoph B. <bar...@or...> - 2005-07-19 08:31:32

Hi,

is there a memory limit on the AMD64 in the current development branch
of valgrind? I have an application which tries to load code (I guess
via dlopen) and complains about not enough memory.

[DCL-187]: (E) Failed to load DCM < /fs/data/asiclibs/tech/lin64opt>,
msg=< /fs/data/asiclibs/tech/lin64opt: failed to map segment from
shared object: Cannot allocate memory>.

The process has at that time around 7.8 GB allocated. The machine has
64GB. Additionally the log file says:

-------------------------------------------------------------------------------------------
valgrind: syswrap-main.c:658 (vgPlain_client_syscall): Assertion
'eq_SyscallArgs(&sci->args, &sci->orig_args)' failed.

Note: see also the FAQ.txt in the source distribution.
It contains workarounds to several common problems.

If that doesn't help, please report this bug to: www.valgrind.org
In the bug report, send all the above text, the valgrind version, and
what Linux distro you are using. Thanks.
-----------------------------------------------------------------------------------

But I guess this is a message which is issued after the loading fails.

Greets,
Christoph Bartoschek

From: Julian S. <js...@ac...> - 2005-07-19 07:45:11

> >> (2) if memcpy zeroes out the target before copying. This has been shown
> >> to improve the performance of memcpy on some intel architectures, due to
> >> cache effects. Of course it is fatal if there is any overlap between
> >> source and target. Most memcpy's don't do this kind of trick, but it's
> >> worth keeping in mind.
> >
> > I think it's common on the PPC, because the dcbz instruction zeros a
> > cache line in preparation for the destination write (this is based on 5
> > year old PPC experience, so I don't know what's best on a modern
> > implementation).

Yeh, it allocates the line in L2 without having to fetch it from
memory since we know it's just about to be completely overwritten.
Certainly memset in glibc uses dcbz; not sure about memcpy.

J

From: Nicholas N. <nj...@cs...> - 2005-07-19 03:53:23
On Mon, 11 Jul 2005, Simon Vaillancourt wrote:
> I have noticed the following behaviour with valgrind, if you create a key
> with a destructor and you set it from the main thread:
>
> if (rc=pthread_key_create(&gKey,free))
> {
> perror("pthread_key_create");
> }
> else
> {
> if (rc=pthread_setspecific(gKey,(void*)strdup("I will be reported as
> a leak")))
> {
> perror("pthread_setspecific1");
> }
> }
>
> It will report a leak if the pthread_setspecific() was called from the main
> thread (wrong) but no leak when it's called from a spawned thread (correct).
> I'm not sure it's a valgrind bug, however.
If you are still seeing this can you please enter it as a Bugzilla report
(http://bugs.kde.org/enter_valgrind_bug.cgi) so this doesn't get lost.
It would be very helpful if you could post a complete program that
exhibits the problem. Thanks.
N
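For reference, the behaviour Simon describes can be condensed into a few lines. This is a hypothetical sketch, not a confirmed reproducer from the thread: thread-specific-data destructors run only when a thread exits via pthread_exit(), so if main() returns normally the main thread's value is never handed to free(), and Memcheck reports it as a leak.

```c
#include <assert.h>
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of the reported behaviour (illustrative names,
   not a confirmed test case). The key's destructor is free(), but
   destructors only fire on pthread_exit(); a main() that returns
   normally skips them, so the strdup'd value leaks. */
static pthread_key_t gKey;

/* Create the key with free() as destructor and attach a heap value
   to the calling thread. Returns 0 on success. */
static int set_leaky_value(void)
{
    int rc = pthread_key_create(&gKey, free);
    if (rc != 0)
        return rc;
    return pthread_setspecific(gKey, strdup("I will be reported as a leak"));
}
```

Calling set_leaky_value() from main() and then returning should reproduce the leak report; calling it from a spawned thread that exits normally should not, since the destructor runs at thread exit.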
From: Nicholas N. <nj...@cs...> - 2005-07-19 03:49:09

Paul,

Are you still seeing the problems you reported below? Thanks.

N

On Sat, 2 Jul 2005, Paul Pluzhnikov wrote:
> Today's SVN checkout fails to build on
> Red Hat Enterprise Linux AS release 3 (Taroon Update 2) x86_64 with
> gcc (GCC) 3.2.3 20030502 (Red Hat Linux 3.2.3-39), binutils-2.14.90.0.4-35
>
> $ gcc -pie -m64 ... m_syswrap/libsyswrap.a
> /home/paul/valgrind-3.0-svn/vex/libvex.a -ldl
> /usr/bin/ld: /home/paul/valgrind-3.0-svn/vex/libvex.a(irdefs.o):
> relocation R_X86_64_32S can not be used when making a shared object;
> recompile with -fPIC
> /home/paul/valgrind-3.0-svn/vex/libvex.a: could not read symbols: Bad value
> collect2: ld returned 1 exit status
>
> Removing '-pie' solves that, but the resulting valgrind crashes at
> startup. Adding '-fPIC' to CFLAGS in vex/Makefile also solves this
> problem and produces a working VG.
>
> Every program on this system produces several of these:
> ==15027== Warning: zero-sized CIE/FDE but not at section end in DWARF2
> CFI reading
>
> Finally, Insure++-instrumented binaries trigger:
> valgrind: syswrap-amd64-linux.c:718
> (vgSysWrap_amd64_linux_sys_arch_prctl_before): Assertion 'ARG1 ==
> VKI_ARCH_SET_FS' failed.
>
> On FC2-i686 the same test triggers:
> valgrind: syswrap-main.c:812 (vgPlain_post_syscall): Assertion
> 'eq_SyscallStatus( &sci->status, &test_status )' failed.
>
> The same binary runs fine on FC2-i686 under VG-2.4. This may be a case
> of two memory debuggers fighting each other.
>
> Regards,

From: Nicholas N. <nj...@cs...> - 2005-07-19 03:47:10

On Mon, 13 Jun 2005, Farid Abizeid wrote:
> Hi,
> I recently switched from valgrind 2.0.0 to valgrind 2.4.0.
>
> When I started valgrind with the following options:
> valgrind --skin=none --trace-syscalls=yes -v MyProgram
>
> valgrind worked fine in version 2.0.0, but loading the dynamic
> libraries at startup takes a long, long time in version 2.4.0 (some
> dynamic libraries took around 4 minutes each to load). Also, system
> calls which did not show in 2.0.0 are now showing in 2.4.0.

Do you know if the shared objects have debugging information in the
stabs format? If so, this could be the problem described in bug
#109040 (http://bugs.kde.org/show_bug.cgi?id=109040).

N

From: Nicholas N. <nj...@cs...> - 2005-07-19 01:18:05

On Sat, 9 Jul 2005, Kailash Sethuraman wrote:
> I am working on porting valgrind to netbsd/i386. I actually started
> working with the FreeBSD port of Valgrind 2.x. I have done a bit of
> work on it already and I am currently struggling with the OS
> specifics in coregrind/vg_scheduler.c.
>
> However, I am convinced that I should concentrate on porting the 3.x
> line. As I currently do not have a linux box at hand (I am working on
> that) to test valgrind's development line, I have a few doubts.
>
> I intend to grab a snapshot of valgrind 3.x and work on it, and sync
> semi-regularly with the svn repo. Is the current 3.x line for
> linux/i386 in a good enough state to use as a template for the OS
> port?

Yes. Working from the 3.x repository is the right thing to do.
Syncing semi-regularly is also a good idea.

> Is there anything I should definitely know that's not in the
> documentation before going off on porting it? The internal workings
> of valgrind are still hazy to me.

Working from the FreeBSD port is a good idea, even though Valgrind's
internals have changed a lot since 2.x. Porting Valgrind to a new OS
is not for the faint-hearted, although it has been done successfully
by others :)

Good luck!

N

From: Nicholas N. <nj...@cs...> - 2005-07-19 01:02:29

On Wed, 11 May 2005, Stefan Kost wrote:
> as the list does not like big attachments - here they are as links:
>
> http://www.buzztard.org/files/valgrind-banner.png
> http://www.buzztard.org/files/valgrind-banner2.png
> http://www.buzztard.org/files/Valgrind.cpt
> http://www.buzztard.org/files/Valgrind2.cpt

I've finally added these to the Valgrind site:
www.valgrind.org/gallery/artwork.html. Thank you, Stefan!

N

From: Nicholas N. <nj...@cs...> - 2005-07-19 00:48:35

On Thu, 5 May 2005, Jeremy Fitzhardinge wrote:
>> Maybe it's worth giving more of an explanation here. I've noticed
>> while floating around on the internet that people think valgrind is
>> just being pedantic when giving this warning and target < source.
>> There are two problems:
>>
>> (1) if copying is done from largest address to smallest address. I
>> don't know of any memcpy that does this.
>
> Some optimisation guide (AMD? Via?) recommends it for some cases.
>
>> (2) if memcpy zeroes out the target before copying. This has been
>> shown to improve the performance of memcpy on some intel
>> architectures, due to cache effects. Of course it is fatal if there
>> is any overlap between source and target. Most memcpy's don't do
>> this kind of trick, but it's worth keeping in mind.
>
> I think it's common on the PPC, because the dcbz instruction zeros a
> cache line in preparation for the destination write (this is based on
> 5 year old PPC experience, so I don't know what's best on a modern
> implementation).

I've updated the 3.0 docs for this.

N

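The overlap hazard behind Memcheck's warning is easy to demonstrate. This is my own sketch, not code from the thread: when source and destination overlap, memcpy is undefined behaviour (an implementation copying backwards or pre-zeroing the destination, as discussed above, would corrupt the data), so an in-place shift must use memmove, which is specified to handle overlap.

```c
#include <assert.h>
#include <string.h>

/* Shift the first (len - by) bytes of buf right by 'by' positions.
   buf + by overlaps buf, so this must be memmove, not memcpy:
   memcpy's behaviour is undefined for overlapping regions. */
static void shift_right(char *buf, size_t len, size_t by)
{
    memmove(buf + by, buf, len - by);
}
```

For example, shifting "abcdef" right by 2 copies "abcd" onto positions 2..5, yielding "ababcd"; a backwards-copying or destination-zeroing memcpy could instead destroy the source bytes before they are read.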