From: <sv...@va...> - 2005-06-22 23:49:05

Author: de
Date: 2005-06-23 00:49:00 +0100 (Thu, 23 Jun 2005)
New Revision: 3998

Log:
Removed antediluvian file

Removed:
   trunk/.cvsignore

Deleted: trunk/.cvsignore
===================================================================
--- trunk/.cvsignore	2005-06-22 18:38:41 UTC (rev 3997)
+++ trunk/.cvsignore	2005-06-22 23:49:00 UTC (rev 3998)
@@ -1,22 +0,0 @@
-Makefile.in
-Makefile
-acinclude.m4
-aclocal.m4
-configure
-config.h*
-stamp-h*
-valgrind
-valgrind.spec
-cachegrind
-vg_annotate
-vg_cachegen
-default.supp
-bin
-lib
-include
-share
-cachegrind.out.*
-autom4te.cache
-autom4te-*.cache
-valgrind.pc
-.in_place
From: Paul M. <pa...@sa...> - 2005-06-22 22:27:29

Julian Seward writes:

> Re invalidations, I was going to ask: throwing stuff out of the
> (V's) translation cache is tremendously expensive as it involves
> a linear search of all translations. That means performance will
> suffer badly if the client does a lot of icbis. Is that something
> you noticed to be a problem with 2.4.0-ppc? The translation
> cache stuff could be redesigned (I suppose) to convert invalidation
> cost from O(N) to O(log N) or O(handwaving-hashtable-cost N)
> kinda thing, but that's significant hassle that I'd rather avoid
> if not necessary.

We get a lot of icbis when doing dynamic linking, because the PLT contains code which gets modified when an entry gets resolved, which is done lazily. In other words, if a program calls a function that turns out to be in a shared library, the linker will create a PLT entry for it and make the bl instruction jump to the PLT entry.

The dynamic linker initially sets up the PLT entry to contain instructions to load a constant into r11 (IIRC) and jump to the resolver. The resolver then works out the actual address of the function and modifies the PLT entry to contain instructions to jump directly to the function. It then has to use dcbst and icbi on the PLT entry to make sure the CPU sees the new instructions.

That is good for us because it gives us an explicit signal to go off and invalidate the translation of the PLT entry, but it does mean that icbis are relatively common. We are in the process of changing the way that the PLT works because of the security concerns about having memory that is both writable and executable, so eventually the need for efficient icbis will diminish. But for now they need to go reasonably fast.

I'm not sure whether the cost you're talking about is of the same order as the cost we have in 2.x for invalidating a translation, where we basically had to scan all other translations to find any that had chained to the one we're invalidating and unchain them. That loop was (IIRC) consuming about 90% of the several minutes that it took to start up mozilla, until I changed vg_transtab.c to create, for each translation, a linked list of translations that chained to it. With that the unchaining is very fast (because we know precisely which translations to unchain) and the overhead of icbi became negligible.

Paul.
From: Greg P. <gp...@us...> - 2005-06-22 19:29:16

Julian Seward writes:

> PIE is a GNU-toolchain-ism. Do you really want a system which relies
> on it? For example, what about MacOS -- will that know about PIE?

Mac OS X can build and run position-independent executables. Non-PIE is called "-mdynamic-no-pic" in the compiler flags; I'm not sure which is currently the default. Most executables are built non-PIE for speed.

--
Greg Parker gp...@us...
From: <sv...@va...> - 2005-06-22 18:38:46

Author: njn
Date: 2005-06-22 19:38:41 +0100 (Wed, 22 Jun 2005)
New Revision: 3997

Log:
I forgot to remove this a while back.

Removed:
   trunk/include/pub_basics_asm.h

Deleted: trunk/include/pub_basics_asm.h
===================================================================
--- trunk/include/pub_basics_asm.h	2005-06-22 12:11:42 UTC (rev 3996)
+++ trunk/include/pub_basics_asm.h	2005-06-22 18:38:41 UTC (rev 3997)
@@ -1,49 +0,0 @@
-
-/*--------------------------------------------------------------------*/
-/*--- Header imported directly by every asm file, and indirectly ---*/
-/*--- (via pub_basics.h) by every C file.        pub_basics_asm.h ---*/
-/*--------------------------------------------------------------------*/
-
-/*
-   This file is part of Valgrind, a dynamic binary instrumentation
-   framework.
-
-   Copyright (C) 2000-2005 Julian Seward
-      js...@ac...
-
-   This program is free software; you can redistribute it and/or
-   modify it under the terms of the GNU General Public License as
-   published by the Free Software Foundation; either version 2 of the
-   License, or (at your option) any later version.
-
-   This program is distributed in the hope that it will be useful, but
-   WITHOUT ANY WARRANTY; without even the implied warranty of
-   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-   General Public License for more details.
-
-   You should have received a copy of the GNU General Public License
-   along with this program; if not, write to the Free Software
-   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
-   02111-1307, USA.
-
-   The GNU General Public License is contained in the file COPYING.
-*/
-
-#ifndef __PUB_BASICS_ASM_H
-#define __PUB_BASICS_ASM_H
-
-/* All symbols externally visible from Valgrind are prefixed
-   as specified here to avoid namespace conflict problems.
-*/
-
-#define VGAPPEND(str1,str2) str1##str2
-
-#define VG_(str) VGAPPEND(vgPlain_, str)
-#define VGA_(str) VGAPPEND(vgArch_, str)
-#define VGO_(str) VGAPPEND(vgOS_, str)
-#define VGP_(str) VGAPPEND(vgPlatform_, str)
-
-#endif /* __PUB_BASICS_ASM_H */
-
-/*--------------------------------------------------------------------*/
-/*--- end ---*/
-/*--------------------------------------------------------------------*/
From: Julian S. <js...@ac...> - 2005-06-22 17:38:25

> > - If the user/kernel boundary is higher than expected then
> >   some space that could be given to the client is wasted. Your
> >   planned address space management changes may avoid this.
> >
> > - If the user/kernel boundary is higher than expected then
> >   valgrind may fail to load because the fixed load address
> >   is in kernel space.
>
> We've floated before the idea of building V for multiple load addresses
> for the common configurations -- eg. 2GB, 3GB, 4GB.

We have indeed floated that one before. However, let's assume aspacemgr will get rewritten to give a more flexible address space layout. Then the actual location that V loads at isn't so critical; we can use space both above and below it. The only constraints are (1) that it doesn't intersect with the standard load place for executables (0x8048000, etc) and (2) that deciding exactly where we do want V parked has a bearing on fragmentation of the address space and thus on the size of the largest contiguous block we can hand out. I don't think it's a big deal though.

> I like the idea of PIE, but you're right that it is fragile. Ideally
> everyone would switch over to supporting PIE now (it seems like support is
> slowly increasing) and then we wouldn't have to support non-PIE systems.

PIE is a GNU-toolchain-ism. Do you really want a system which relies on it? For example, what about MacOS -- will that know about PIE? [I don't know].

J
From: Tom H. <to...@co...> - 2005-06-22 16:51:00
In message <Pin...@ch...>
Nicholas Nethercote <nj...@cs...> wrote:
> I see this code in coregrind/m_syswrap/syswrap-linux.c:
>
> POST(sys_futex)
> {
> vg_assert(SUCCESS);
> POST_MEM_WRITE( ARG1, sizeof(int) );
> if (ARG2 == VKI_FUTEX_FD) {
> if (!VG_(fd_allowed)(RES, "futex", tid, True)) {
> VG_(close)(RES);
> SET_STATUS_Failure( VKI_EMFILE );
> } else {
> if (VG_(clo_track_fds))
> VG_(record_fd_open)(tid, RES, VG_(arena_strdup)(VG_AR_CORE, (Char*)ARG1));
> }
> }
>
> For the VG_(record_fd_open)() call, should the 3rd arg really be ARG1?
> That arg is meant to be the pathname of the opened fd. Does ARG1 fit that
> bill? I would have thought we'd need to call VG_(resolve_filename)()
> on RES to get the pathname for the fd, like all the examples in
> syswrap-generic.c.
It looks like nonsense to me. I don't think a futex FD will have
a name associated with it, so resolve_filename probably won't be
able to do much.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Nicholas N. <nj...@cs...> - 2005-06-22 16:46:57
Hi,
Can anyone help with this?
I see this code in coregrind/m_syswrap/syswrap-linux.c:
POST(sys_futex)
{
vg_assert(SUCCESS);
POST_MEM_WRITE( ARG1, sizeof(int) );
if (ARG2 == VKI_FUTEX_FD) {
if (!VG_(fd_allowed)(RES, "futex", tid, True)) {
VG_(close)(RES);
SET_STATUS_Failure( VKI_EMFILE );
} else {
if (VG_(clo_track_fds))
VG_(record_fd_open)(tid, RES, VG_(arena_strdup)(VG_AR_CORE, (Char*)ARG1));
}
}
For the VG_(record_fd_open)() call, should the 3rd arg really be ARG1?
That arg is meant to be the pathname of the opened fd. Does ARG1 fit that
bill? I would have thought we'd need to call VG_(resolve_filename)()
on RES to get the pathname for the fd, like all the examples in
syswrap-generic.c.
I tried to work out who wrote the code, but the SVN history is difficult
to follow because this code got moved about.
Thanks.
Nick
From: Nicholas N. <nj...@cs...> - 2005-06-22 16:36:26

On Wed, 22 Jun 2005, Tom Hughes wrote:

> It's helpful as Paul says in that it allows us to give the maximum
> possible space to the client on 32 bit systems. With a fixed load
> address there are two problems that arise:
>
> - If the user/kernel boundary is higher than expected then
>   some space that could be given to the client is wasted. Your
>   planned address space management changes may avoid this.
>
> - If the user/kernel boundary is higher than expected then
>   valgrind may fail to load because the fixed load address
>   is in kernel space.

We've floated before the idea of building V for multiple load addresses for the common configurations -- eg. 2GB, 3GB, 4GB.

I like the idea of PIE, but you're right that it is fragile. Ideally everyone would switch over to supporting PIE now (it seems like support is slowly increasing) and then we wouldn't have to support non-PIE systems.

N
From: Julian S. <js...@ac...> - 2005-06-22 13:22:58

Thanks for the info.

> The easiest and best way on ppc32 to know the cache line size and
> whether we have altivec is to look in the aux vector.

Fine, ok. That does mean reading the aux vector before initialising Vex, but that's OK.

> The cache line size is also needed for implementing icbi, which ends
> up invalidating cached block translations.

Yeh ... we stared at that one for a while. It seems that icbi can be safely implemented assuming any cache line size which is a power of 2 and >= the real cache line size, so we initially used 256. It just means more translations than necessary get thrown away, which affects performance but not correctness. Unfortunately such an approximation obviously doesn't work with dcbz.

---

Re invalidations, I was going to ask: throwing stuff out of the (V's) translation cache is tremendously expensive as it involves a linear search of all translations. That means performance will suffer badly if the client does a lot of icbis. Is that something you noticed to be a problem with 2.4.0-ppc? The translation cache stuff could be redesigned (I suppose) to convert invalidation cost from O(N) to O(log N) or O(handwaving-hashtable-cost N) kinda thing, but that's significant hassle that I'd rather avoid if not necessary.

J
From: Tom H. <to...@co...> - 2005-06-22 13:16:20
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
>> The main advantage of PIE is that it gives us more space for the
>> client on systems where the kernel gives us more of the address space
>> to play with. On ppc32 systems, in many configurations we only get
>> 2GB address space, but on some machines we get 3GB, and on a ppc64
>> machine we get a full 4GB of address space for 32-bit processes. If
>> we have a fixed base for stage2 we have to pick 0x70000000 and thus
>> waste 2GB of address space on a ppc64 machine (such as my G5).
>
> Just checking my understanding is correct: the fact that a fixed base
> of (say) 0x70000000 for stage2 causes a wastage of address space is
> a consequence only of the current address-space-layout scheme V uses,
> correct? So if I just compile some program with and without PIE
> and run it natively, the amount of address space available to it
> is unaffected by the PIEness, correct?
Absolutely. The reason for the address space wastage is simply that
valgrind currently limits the client to using space below its own
load address.
> Motivation for these questions is that there's been for a while a
> plan to rewrite the address space manage (m_aspacemgr) to be more
> flexible in layout. I want to be sure that if that happens, it will
> render moot the issue of limited address space when PIE is not used.
Exactly - if the address space is broken up into chunks and parcelled
out to valgrind and client as required which I believe was your plan
then the issue is probably moot.
The only limit then would be on the size of the client executable
itself which would probably still need to fit beneath valgrind. Other
memory allocations and shared libraries used by the client could go
anywhere.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Julian S. <js...@ac...> - 2005-06-22 13:12:07

> The main advantage of PIE is that it gives us more space for the
> client on systems where the kernel gives us more of the address space
> to play with. On ppc32 systems, in many configurations we only get
> 2GB address space, but on some machines we get 3GB, and on a ppc64
> machine we get a full 4GB of address space for 32-bit processes. If
> we have a fixed base for stage2 we have to pick 0x70000000 and thus
> waste 2GB of address space on a ppc64 machine (such as my G5).

Just checking my understanding is correct: the fact that a fixed base of (say) 0x70000000 for stage2 causes a wastage of address space is a consequence only of the current address-space-layout scheme V uses, correct? So if I just compile some program with and without PIE and run it natively, the amount of address space available to it is unaffected by the PIEness, correct?

Motivation for these questions is that there's been for a while a plan to rewrite the address space manager (m_aspacemgr) to be more flexible in layout. I want to be sure that if that happens, it will render moot the issue of limited address space when PIE is not used.

J
From: Tom H. <to...@co...> - 2005-06-22 13:09:27
In message <200...@ac...>
Julian Seward <js...@ac...> wrote:
> * PIE is helpful, but not essential, for running Valgrind on
> Valgrind. The alternative is change the hardwired
> stage2 load address for the outer valgrind by hand, which is
> a minor burden for V developers and then only those playing
> self-hosting games.
It's helpful as Paul says in that it allows us to give the maximum
possible space to the client on 32 bit systems. With a fixed load
address there are two problems that arise:
- If the user/kernel boundary is higher than expected then
some space that could be given to the client is wasted. Your
planned address space management changes may avoid this.
- If the user/kernel boundary is higher than expected then
valgrind may fail to load because the fixed load address
is in kernel space.
> * Tom -- you said that PIE was necessary for using large
> address spaces with memcheck on your amd64 distros. Is
> that still the case? Could you clarify this?
That is largely based on this comment (yours, I think) in the
configure script:
# XXX: relocations under amd64's "small model" are 32-bit signed
# quantities; therefore going above 0x7fffffff doesn't work... this is
# a problem.
KICKSTART_BASE="0x70000000"
If we have to keep KICKSTART_BASE that low on amd64 then the client
gets even less space that on most x86 systems unless PIE is used.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Tom H. <to...@co...> - 2005-06-22 13:05:35
In message <170...@ca...>
Paul Mackerras <pa...@sa...> wrote:
> Julian Seward writes:
>
>> * hardware_capabilities. Hmm. What's that used for? Is there
>> any significance to the fact that neither the x86 nor amd64
>> code read that auxv entry?
>
> That's mainly used to tell whether we have altivec or not. I assume
> x86 and amd64 use cpuid to tell whether they have mmx/sse/sse2 etc.
I believe that x86 systems use it to decide which libraries to use
if there is more than one choice - so an i686 library will be
preferred over a standard i386 one if the processor can handle it.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Dirk M. <dm...@gm...> - 2005-06-22 12:48:27

On Wednesday 22 June 2005 12:09, Tom Hughes wrote:

>> please use svn mv instead of mv for preserving history.
>
> I think he did - an svn log on m_ume.c seems to go past the point
> where it was renamed. I think that moves just show up as a delete
> and add combination in the mails.

Weird, then the logging script appears to be broken. I apologize.

Dirk
From: Paul M. <pa...@sa...> - 2005-06-22 12:45:32

Julian Seward writes:

> What problems does PIE actually solve? Do we really need it?
> Are there alternatives? My understanding is

The main advantage of PIE is that it gives us more space for the client on systems where the kernel gives us more of the address space to play with. On ppc32 systems, in many configurations we only get 2GB address space, but on some machines we get 3GB, and on a ppc64 machine we get a full 4GB of address space for 32-bit processes. If we have a fixed base for stage2 we have to pick 0x70000000 and thus waste 2GB of address space on a ppc64 machine (such as my G5). I imagine a similar thing applies for amd64 machines running 32-bit processes.

Paul.
From: Paul M. <pa...@sa...> - 2005-06-22 12:40:15

Julian Seward writes:

> Hi. Lots of ppc32 stuff got folded into the 3 line today.
> My apologies for how long it all took. Thanks for providing
> 2.4.0-ppc as a base, and for your patience. I'm sure a ppc build
> right now won't do anything much, but all the infrastructure is
> in place and I hope it will be up to flying speed pretty shortly.

That's great!

> - in m_main.c, there's a bunch of stuff for finding (the?) vdso
>   address and various address-space-mashing stuff as a result.
>   Is this needed? Neither the x86-linux nor amd64-linux code
>   has anything similar, and it seems to work fine on all tested
>   x86-linux and amd64-linux distros.

Nothing like that should be needed on ppc32. We don't have a VDSO, and when we get one, it will almost certainly end up being mapped by default at the 1MB point.

> - I see that you also extract both the cache_line_size and the
>   hardware_capabilities from the aux vector passed to V, and
>   pass those onwards to the client.
>
>   * cache_line_size is needed for implementing dcbz I see.
>     Is there a reliable way to get that same info without
>     asking the kernel? eg, on x86/amd64 one can do "cpuid"
>     in userspace and get all that stuff. Does ppc have something
>     similar?

Not really. On many ppc32 machines there is a copy of the Open Firmware device tree in /proc/device-tree and it would be possible to go looking in there. Not all machines have OF though.

There is a processor version register (PVR) which has a unique code for each processor implementation. Reading it is privileged, but Linux catches the exception and emulates the instruction for userspace. I don't recommend we do that, though, because it means we need a table of PVR values, which will get out of date (i.e. when you move to a new CPU you expect to need a new kernel but generally not new versions of userland things).

The cache line size is also needed for implementing icbi, which ends up invalidating cached block translations.

> * hardware_capabilities. Hmm. What's that used for? Is there
>   any significance to the fact that neither the x86 nor amd64
>   code read that auxv entry?

That's mainly used to tell whether we have altivec or not. I assume x86 and amd64 use cpuid to tell whether they have mmx/sse/sse2 etc.

The easiest and best way on ppc32 to know the cache line size and whether we have altivec is to look in the aux vector. The code to handle them wouldn't even need to be #ifdef ppc32; on x86 we just wouldn't see those entries (well, we might see the AT_HWCAP, but that wouldn't hurt).

Paul.
From: Julian S. <js...@ac...> - 2005-06-22 12:37:06

Hmm. PIE basically seems like a liability and I'm very tempted to remove it. So far it's been the cause of non-working builds on amd64 and ppc32. More fragility, more hassle, and more build variants which need to be constantly re-validated.

What problems does PIE actually solve? Do we really need it? Are there alternatives? My understanding is:

* PIE is helpful, but not essential, for running Valgrind on Valgrind. The alternative is to change the hardwired stage2 load address for the outer valgrind by hand, which is a minor burden for V developers, and then only for those playing self-hosting games.

* Tom -- you said that PIE was necessary for using large address spaces with memcheck on your amd64 distros. Is that still the case? Could you clarify this?

J
From: <sv...@va...> - 2005-06-22 12:11:47

Author: tom
Date: 2005-06-22 13:11:42 +0100 (Wed, 22 Jun 2005)
New Revision: 3996

Log:
Declare my_sigreturn as static. This is correct in so much as it isn't used anywhere else, but it does cause gcc to issue a warning because it doesn't realise that the assembly code has defined the function.

The reason for changing it to static despite the warning is that when it is declared extern, PIE builds break on amd64 because gcc generates code that does a load from the address of the my_sigreturn symbol to get the address of the function, instead of just computing the address of the symbol. In other words it generates this:

    mov -212(%rip), %rax

to get the address of the function instead of this:

    lea -212(%rip), %rax

Obviously this breaks things because we store the wrong address as the signal restorer when installing the signal handler...

Modified:
   trunk/coregrind/m_signals.c

Modified: trunk/coregrind/m_signals.c
===================================================================
--- trunk/coregrind/m_signals.c	2005-06-21 23:44:58 UTC (rev 3995)
+++ trunk/coregrind/m_signals.c	2005-06-22 12:11:42 UTC (rev 3996)
@@ -395,7 +395,7 @@
 
 // We need two levels of macro-expansion here to convert __NR_rt_sigreturn
 // to a number before converting it to a string... sigh.
-extern void my_sigreturn(void);
+static void my_sigreturn(void);
 
 #if defined(VGP_x86_linux)
 # define _MYSIG(name) \
From: Tom H. <to...@co...> - 2005-06-22 10:09:37
In message <200...@gm...>
Dirk Mueller <dm...@gm...> wrote:
> On Wednesday 22 June 2005 00:47, sv...@va... wrote:
>
>> Added:
>> trunk/coregrind/m_ume.c
>> trunk/coregrind/pub_core_ume.h
>> Removed:
>> trunk/coregrind/ume.c
>> trunk/coregrind/ume.h
>
> please use svn mv instead of mv for preserving history.
I think he did - an svn log on m_ume.c seems to go past the point
where it was renamed. I think that moves just show up as a delete
and add combination in the mails.
Tom
--
Tom Hughes (to...@co...)
http://www.compton.nu/
From: Dirk M. <dm...@gm...> - 2005-06-22 07:58:06

On Wednesday 22 June 2005 00:47, sv...@va... wrote:

> Added:
>   trunk/coregrind/m_ume.c
>   trunk/coregrind/pub_core_ume.h
> Removed:
>   trunk/coregrind/ume.c
>   trunk/coregrind/ume.h

please use svn mv instead of mv for preserving history.
From: Tom H. <th...@cy...> - 2005-06-22 02:58:54
Nightly build on audi ( i686, Red Hat 9 ) started at 2005-06-22 03:25:02 BST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 177 tests, 11 stderr failures, 2 stdout failures =================
memcheck/tests/error_counts (stdout)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/sigaltstack (stderr)
memcheck/tests/xml1 (stderr)
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/pth_cancel1 (stdout)
corecheck/tests/pth_cancel1 (stderr)
corecheck/tests/pth_cancel2 (stderr)
none/tests/faultstatus (stderr)
none/tests/pth_blockedsig (stderr)
none/tests/x86/int (stderr)
From: <js...@ac...> - 2005-06-22 02:42:11
Nightly build on phoenix ( SuSE 9.1 ) started at 2005-06-22 03:30:01 BST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 175 tests, 4 stderr failures, 1 stdout failure =================
memcheck/tests/error_counts (stdout)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-tree (stderr)
none/tests/faultstatus (stderr)
none/tests/x86/int (stderr)
From: Tom H. <to...@co...> - 2005-06-22 02:36:20
Nightly build on dunsmere ( athlon, Fedora Core 3 ) started at 2005-06-22 03:30:03 BST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 178 tests, 8 stderr failures, 2 stdout failures =================
memcheck/tests/error_counts (stdout)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/x86/scalar (stderr)
memcheck/tests/x86/scalar_supp (stderr)
memcheck/tests/xml1 (stderr)
none/tests/faultstatus (stderr)
none/tests/selfrun (stdout)
none/tests/selfrun (stderr)
none/tests/x86/int (stderr)
From: Tom H. <th...@cy...> - 2005-06-22 02:20:40
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2005-06-22 03:15:03 BST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 176 tests, 13 stderr failures, 2 stdout failures =================
memcheck/tests/addressable (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/error_counts (stdout)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/vgtest_ume (stderr)
memcheck/tests/xml1 (stderr)
corecheck/tests/fdleak_cmsg (stderr)
none/tests/faultstatus (stderr)
none/tests/x86/int (stderr)
none/tests/yield (stdout)
From: Tom H. <th...@cy...> - 2005-06-22 02:18:49
Nightly build on dellow ( x86_64, Fedora Core 3 ) started at 2005-06-22 03:10:04 BST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 157 tests, 134 stderr failures, 57 stdout failures =================

memcheck/tests/addressable (stdout)
memcheck/tests/addressable (stderr)
memcheck/tests/badaddrvalue (stdout)
memcheck/tests/badaddrvalue (stderr)
memcheck/tests/badfree-2trace (stderr)
memcheck/tests/badfree (stderr)
memcheck/tests/badjump (stderr)
memcheck/tests/badjump2 (stderr)
memcheck/tests/badloop (stderr)
memcheck/tests/badpoll (stderr)
memcheck/tests/badrw (stderr)
memcheck/tests/brk (stderr)
memcheck/tests/brk2 (stderr)
memcheck/tests/buflen_check (stderr)
memcheck/tests/clientperm (stdout)
memcheck/tests/clientperm (stderr)
memcheck/tests/custom_alloc (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/doublefree (stderr)
memcheck/tests/error_counts (stdout)
memcheck/tests/errs1 (stderr)
memcheck/tests/execve (stderr)
memcheck/tests/execve2 (stderr)
memcheck/tests/exitprog (stderr)
memcheck/tests/fprw (stderr)
memcheck/tests/fwrite (stdout)
memcheck/tests/fwrite (stderr)
memcheck/tests/inits (stderr)
memcheck/tests/inline (stdout)
memcheck/tests/inline (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/leakotron (stdout)
memcheck/tests/malloc1 (stderr)
memcheck/tests/malloc2 (stderr)
memcheck/tests/malloc3 (stdout)
memcheck/tests/malloc3 (stderr)
memcheck/tests/manuel1 (stdout)
memcheck/tests/manuel1 (stderr)
memcheck/tests/manuel2 (stdout)
memcheck/tests/manuel2 (stderr)
memcheck/tests/manuel3 (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/memalign_test (stderr)
memcheck/tests/memcmptest (stdout)
memcheck/tests/memcmptest (stderr)
memcheck/tests/mempool (stderr)
memcheck/tests/mismatches (stderr)
memcheck/tests/nanoleak (stderr)
memcheck/tests/new_override (stdout)
memcheck/tests/new_override (stderr)
memcheck/tests/overlap (stdout)
memcheck/tests/overlap (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/post-syscall (stdout)
memcheck/tests/post-syscall (stderr)
memcheck/tests/realloc3 (stderr)
memcheck/tests/sigaltstack (stderr)
memcheck/tests/signal2 (stdout)
memcheck/tests/signal2 (stderr)
memcheck/tests/sigprocmask (stderr)
memcheck/tests/strchr (stderr)
memcheck/tests/supp2 (stderr)
memcheck/tests/suppfree (stderr)
memcheck/tests/toobig-allocs (stderr)
memcheck/tests/trivialleak (stderr)
memcheck/tests/vgtest_ume (stderr)
memcheck/tests/weirdioctl (stdout)
memcheck/tests/weirdioctl (stderr)
memcheck/tests/writev (stderr)
memcheck/tests/xml1 (stdout)
memcheck/tests/xml1 (stderr)
memcheck/tests/zeropage (stdout)
cachegrind/tests/chdir (stderr)
cachegrind/tests/dlclose (stdout)
cachegrind/tests/dlclose (stderr)
corecheck/tests/as_mmap (stderr)
corecheck/tests/as_shm (stdout)
corecheck/tests/as_shm (stderr)
corecheck/tests/erringfds (stdout)
corecheck/tests/erringfds (stderr)
corecheck/tests/fdleak_cmsg (stderr)
corecheck/tests/fdleak_creat (stderr)
corecheck/tests/fdleak_dup (stderr)
corecheck/tests/fdleak_dup2 (stderr)
corecheck/tests/fdleak_fcntl (stderr)
corecheck/tests/fdleak_ipv4 (stdout)
corecheck/tests/fdleak_ipv4 (stderr)
corecheck/tests/fdleak_open (stderr)
corecheck/tests/fdleak_pipe (stderr)
corecheck/tests/fdleak_socketpair (stderr)
corecheck/tests/pth_atfork1 (stdout)
corecheck/tests/pth_atfork1 (stderr)
corecheck/tests/pth_cancel1 (stdout)
corecheck/tests/pth_cancel1 (stderr)
corecheck/tests/pth_cancel2 (stderr)
corecheck/tests/pth_cvsimple (stdout)
corecheck/tests/pth_cvsimple (stderr)
corecheck/tests/pth_empty (stderr)
corecheck/tests/pth_exit (stderr)
corecheck/tests/pth_exit2 (stderr)
corecheck/tests/pth_mutexspeed (stdout)
corecheck/tests/pth_mutexspeed (stderr)
corecheck/tests/pth_once (stdout)
corecheck/tests/pth_once (stderr)
corecheck/tests/pth_rwlock (stderr)
corecheck/tests/res_search (stdout)
corecheck/tests/sigkill (stderr)
corecheck/tests/stack_changes (stdout)
corecheck/tests/threadederrno (stdout)
corecheck/tests/vgprintf (stdout)
corecheck/tests/vgprintf (stderr)
massif/tests/toobig-allocs (stderr)
massif/tests/true_html (stderr)
massif/tests/true_text (stderr)
lackey/tests/true (stderr)
none/tests/amd64/insn_fpu (stdout)
none/tests/amd64/insn_fpu (stderr)
none/tests/amd64/insn_mmx (stdout)
none/tests/amd64/insn_mmx (stderr)
none/tests/amd64/insn_sse (stdout)
none/tests/amd64/insn_sse (stderr)
none/tests/amd64/insn_sse2 (stdout)
none/tests/amd64/insn_sse2 (stderr)
none/tests/args (stdout)
none/tests/args (stderr)
none/tests/async-sigs (stdout)
none/tests/async-sigs (stderr)
none/tests/bitfield1 (stderr)
none/tests/blockfault (stderr)
none/tests/closeall (stderr)
none/tests/coolo_sigaction (stdout)
none/tests/coolo_sigaction (stderr)
none/tests/coolo_strlen (stderr)
none/tests/discard (stdout)
none/tests/discard (stderr)
none/tests/exec-sigmask (stderr)
none/tests/faultstatus (stderr)
none/tests/fcntl_setown (stderr)
none/tests/floored (stdout)
none/tests/floored (stderr)
none/tests/fork (stdout)
none/tests/fucomip (stderr)
none/tests/gxx304 (stderr)
none/tests/manythreads (stdout)
none/tests/manythreads (stderr)
none/tests/map_unaligned (stderr)
none/tests/map_unmap (stdout)
none/tests/map_unmap (stderr)
none/tests/mq (stderr)
none/tests/mremap (stderr)
none/tests/munmap_exe (stderr)
none/tests/pending (stdout)
none/tests/pending (stderr)
none/tests/pth_blockedsig (stdout)
none/tests/pth_blockedsig (stderr)
none/tests/pth_stackalign (stdout)
none/tests/pth_stackalign (stderr)
none/tests/rcrl (stdout)
none/tests/rcrl (stderr)
none/tests/readline1 (stdout)
none/tests/readline1 (stderr)
none/tests/resolv (stdout)
none/tests/resolv (stderr)
none/tests/rlimit_nofile (stderr)
none/tests/selfrun (stdout)
none/tests/selfrun (stderr)
none/tests/sem (stderr)
none/tests/semlimit (stderr)
none/tests/sha1_test (stderr)
none/tests/shortpush (stderr)
none/tests/shorts (stderr)
none/tests/sigstackgrowth (stdout)
none/tests/sigstackgrowth (stderr)
none/tests/smc1 (stdout)
none/tests/smc1 (stderr)
none/tests/stackgrowth (stdout)
none/tests/stackgrowth (stderr)
none/tests/syscall-restart1 (stderr)
none/tests/syscall-restart2 (stderr)
none/tests/system (stderr)
none/tests/thread-exits (stdout)
none/tests/thread-exits (stderr)
none/tests/threaded-fork (stdout)
none/tests/threaded-fork (stderr)
none/tests/tls (stdout)
none/tests/tls (stderr)
none/tests/yield (stdout)
none/tests/yield (stderr)