From: Tom H. <th...@cy...> - 2005-04-02 02:15:58
|
Nightly build on alvis ( i686, Red Hat 7.3 ) started at 2005-04-02 03:15:02 BST

Checking out vex source tree ... done
Building vex ... failed

Last 20 lines of log.verbose follow

A vex/auxprogs/genoffsets.c
A vex/COPYING
A vex/test_main.h
A vex/Makefile-icc
A vex/Makefile
Checked out revision 1116.
/tmp/valgrind.10791/vex
rm -f priv/ir/irdefs.o priv/ir/irmatch.o priv/ir/iropt.o priv/main/vex_main.o priv/main/vex_globals.o priv/main/vex_util.o priv/host-x86/hdefs.o priv/host-amd64/hdefs.o priv/host-arm/hdefs.o priv/host-ppc32/hdefs.o priv/host-x86/isel.o priv/host-amd64/isel.o priv/host-arm/isel.o priv/host-ppc32/isel.o priv/host-generic/h_generic_regs.o priv/host-generic/h_generic_simd64.o priv/host-generic/reg_alloc2.o priv/guest-generic/g_generic_x87.o priv/guest-x86/ghelpers.o priv/guest-amd64/ghelpers.o priv/guest-arm/ghelpers.o priv/guest-ppc32/ghelpers.o priv/guest-x86/toIR.o priv/guest-amd64/toIR.o priv/guest-arm/toIR.o priv/guest-ppc32/toIR.o libvex.a vex test_main.o \
    priv/main/vex_svnversion.h \
    pub/libvex_guest_offsets.h
rm -f priv/main/vex_svnversion.h
echo -n "\"" > priv/main/vex_svnversion.h
svnversion -n . >> priv/main/vex_svnversion.h
echo "\"" >> priv/main/vex_svnversion.h
gcc -Wall -g -o auxprogs/genoffsets auxprogs/genoffsets.c
./auxprogs/genoffsets > pub/libvex_guest_offsets.h
gcc -g -O -Wall -Wmissing-prototypes -Wshadow -Winline -Wpointer-arith -Wbad-function-cast -Wcast-qual -Wcast-align -Wmissing-declarations -fpie -Ipub -Ipriv -o priv/ir/irdefs.o \
    -c priv/ir/irdefs.c
cc1: unrecognized option `-fpie'
make: *** [priv/ir/irdefs.o] Error 1 |
|
From: Tom H. <th...@cy...> - 2005-04-02 02:15:52
|
Nightly build on standard ( Red Hat 7.2 ) started at 2005-04-02 03:00:01 BST

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

insn_mmxext: valgrind ./insn_mmxext
insn_sse: valgrind ./insn_sse
insn_sse2: (skipping, prereq failed: ../../../tests/cputest x86-sse2)
int: valgrind ./int
pushpopseg: valgrind ./pushpopseg
rcl_assert: valgrind ./rcl_assert
seg_override: valgrind ./seg_override
-- Finished tests in none/tests/x86 ------------------------------------
yield: valgrind ./yield
-- Finished tests in none/tests ----------------------------------------
== 205 tests, 6 stderr failures, 0 stdout failures =================
memcheck/tests/leak-tree (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/vgtest_ume (stderr)
addrcheck/tests/leak-tree (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2005-04-02 02:13:29
|
Nightly build on honda ( x86_64, Fedora Core 3 ) started at 2005-04-02 03:10:05 BST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 165 tests, 85 stderr failures, 22 stdout failures =================

memcheck/tests/addressable (stdout) memcheck/tests/addressable (stderr) memcheck/tests/badaddrvalue (stdout) memcheck/tests/badaddrvalue (stderr) memcheck/tests/badfree-2trace (stderr) memcheck/tests/badfree (stderr) memcheck/tests/badjump (stderr) memcheck/tests/badjump2 (stderr) memcheck/tests/badloop (stderr) memcheck/tests/badpoll (stderr) memcheck/tests/badrw (stderr) memcheck/tests/brk (stderr) memcheck/tests/brk2 (stderr) memcheck/tests/buflen_check (stderr) memcheck/tests/clientperm (stdout) memcheck/tests/clientperm (stderr) memcheck/tests/custom_alloc (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/doublefree (stderr) memcheck/tests/error_counts (stdout) memcheck/tests/errs1 (stderr) memcheck/tests/execve (stderr) memcheck/tests/execve2 (stderr) memcheck/tests/exitprog (stderr) memcheck/tests/fprw (stderr) memcheck/tests/fwrite (stdout) memcheck/tests/fwrite (stderr) memcheck/tests/inits (stderr) memcheck/tests/inline (stdout) memcheck/tests/inline (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/leakotron (stdout) memcheck/tests/malloc1 (stderr) memcheck/tests/malloc2 (stderr) memcheck/tests/malloc3 (stdout) memcheck/tests/malloc3 (stderr) memcheck/tests/manuel1 (stdout) memcheck/tests/manuel1 (stderr) memcheck/tests/manuel2 (stdout) memcheck/tests/manuel2 (stderr) memcheck/tests/manuel3 (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/memalign_test (stderr) memcheck/tests/memcmptest (stdout) memcheck/tests/memcmptest (stderr) 
memcheck/tests/mempool (stderr) memcheck/tests/metadata (stdout) memcheck/tests/metadata (stderr) memcheck/tests/mismatches (stderr) memcheck/tests/nanoleak (stderr) memcheck/tests/new_override (stdout) memcheck/tests/new_override (stderr) memcheck/tests/overlap (stdout) memcheck/tests/overlap (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/post-syscall (stdout) memcheck/tests/post-syscall (stderr) memcheck/tests/realloc3 (stderr) memcheck/tests/scalar (stderr) memcheck/tests/scalar_exit_group (stderr) memcheck/tests/scalar_fork (stderr) memcheck/tests/scalar_supp (stderr) memcheck/tests/scalar_vfork (stderr) memcheck/tests/sigaltstack (stderr) memcheck/tests/signal2 (stdout) memcheck/tests/signal2 (stderr) memcheck/tests/sigprocmask (stderr) memcheck/tests/supp2 (stderr) memcheck/tests/suppfree (stderr) memcheck/tests/threadederrno (stdout) memcheck/tests/toobig-allocs (stderr) memcheck/tests/trivialleak (stderr) memcheck/tests/vgtest_ume (stderr) memcheck/tests/weirdioctl (stdout) memcheck/tests/weirdioctl (stderr) memcheck/tests/writev (stderr) memcheck/tests/zeropage (stdout) addrcheck/tests/addressable (stdout) addrcheck/tests/addressable (stderr) addrcheck/tests/badrw (stderr) addrcheck/tests/fprw (stderr) addrcheck/tests/leak-0 (stderr) addrcheck/tests/leak-cycle (stderr) addrcheck/tests/leak-regroot (stderr) addrcheck/tests/leak-tree (stderr) addrcheck/tests/overlap (stdout) addrcheck/tests/overlap (stderr) addrcheck/tests/toobig-allocs (stderr) cachegrind/tests/chdir (stderr) cachegrind/tests/dlclose (stderr) corecheck/tests/fdleak_cmsg (stderr) corecheck/tests/fdleak_creat (stderr) corecheck/tests/fdleak_dup (stderr) corecheck/tests/fdleak_dup2 (stderr) corecheck/tests/fdleak_fcntl (stderr) corecheck/tests/fdleak_ipv4 (stderr) corecheck/tests/fdleak_open (stderr) corecheck/tests/fdleak_pipe (stderr) corecheck/tests/fdleak_socketpair (stderr) massif/tests/toobig-allocs (stderr) none/tests/blockfault (stderr) none/tests/faultstatus (stderr) 
none/tests/selfrun (stdout) none/tests/selfrun (stderr) |
|
From: Tom H. <th...@cy...> - 2005-04-02 02:11:43
|
Nightly build on alvis ( Red Hat 7.3 ) started at 2005-04-02 03:05:02 BST

Checking out source tree ... done
Configuring ... done
Building ... done
Running regression tests ... done

Last 20 lines of log.verbose follow

== 205 tests, 17 stderr failures, 0 stdout failures =================
memcheck/tests/addressable (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/distinguished-writes (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/pth_once (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/vgtest_ume (stderr)
addrcheck/tests/leak-0 (stderr)
addrcheck/tests/leak-cycle (stderr)
addrcheck/tests/leak-regroot (stderr)
addrcheck/tests/leak-tree (stderr)
make: *** [regtest] Error 1 |
|
From: Tom H. <th...@cy...> - 2005-04-02 02:11:00
|
Nightly build on standard ( i686, Red Hat 7.2 ) started at 2005-04-02 03:10:05 BST

Checking out vex source tree ... done
Building vex ... failed

Last 20 lines of log.verbose follow

A vex/auxprogs/genoffsets.c
A vex/COPYING
A vex/test_main.h
A vex/Makefile-icc
A vex/Makefile
Checked out revision 1116.
/tmp/valgrind.17238/vex
rm -f priv/ir/irdefs.o priv/ir/irmatch.o priv/ir/iropt.o priv/main/vex_main.o priv/main/vex_globals.o priv/main/vex_util.o priv/host-x86/hdefs.o priv/host-amd64/hdefs.o priv/host-arm/hdefs.o priv/host-ppc32/hdefs.o priv/host-x86/isel.o priv/host-amd64/isel.o priv/host-arm/isel.o priv/host-ppc32/isel.o priv/host-generic/h_generic_regs.o priv/host-generic/h_generic_simd64.o priv/host-generic/reg_alloc2.o priv/guest-generic/g_generic_x87.o priv/guest-x86/ghelpers.o priv/guest-amd64/ghelpers.o priv/guest-arm/ghelpers.o priv/guest-ppc32/ghelpers.o priv/guest-x86/toIR.o priv/guest-amd64/toIR.o priv/guest-arm/toIR.o priv/guest-ppc32/toIR.o libvex.a vex test_main.o \
    priv/main/vex_svnversion.h \
    pub/libvex_guest_offsets.h
rm -f priv/main/vex_svnversion.h
echo -n "\"" > priv/main/vex_svnversion.h
svnversion -n . >> priv/main/vex_svnversion.h
echo "\"" >> priv/main/vex_svnversion.h
gcc -Wall -g -o auxprogs/genoffsets auxprogs/genoffsets.c
./auxprogs/genoffsets > pub/libvex_guest_offsets.h
gcc -g -O -Wall -Wmissing-prototypes -Wshadow -Winline -Wpointer-arith -Wbad-function-cast -Wcast-qual -Wcast-align -Wmissing-declarations -fpie -Ipub -Ipriv -o priv/ir/irdefs.o \
    -c priv/ir/irdefs.c
cc1: unrecognized option `-fpie'
make: *** [priv/ir/irdefs.o] Error 1 |
|
From: Tom H. <th...@cy...> - 2005-04-02 02:03:41
|
Nightly build on gill ( x86_64, Fedora Core 2 ) started at 2005-04-02 03:00:03 BST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 165 tests, 84 stderr failures, 23 stdout failures =================

memcheck/tests/addressable (stdout) memcheck/tests/addressable (stderr) memcheck/tests/badaddrvalue (stdout) memcheck/tests/badaddrvalue (stderr) memcheck/tests/badfree-2trace (stderr) memcheck/tests/badfree (stderr) memcheck/tests/badjump (stderr) memcheck/tests/badjump2 (stderr) memcheck/tests/badloop (stderr) memcheck/tests/badpoll (stderr) memcheck/tests/badrw (stderr) memcheck/tests/brk (stderr) memcheck/tests/brk2 (stderr) memcheck/tests/buflen_check (stderr) memcheck/tests/clientperm (stdout) memcheck/tests/clientperm (stderr) memcheck/tests/custom_alloc (stderr) memcheck/tests/describe-block (stderr) memcheck/tests/doublefree (stderr) memcheck/tests/error_counts (stdout) memcheck/tests/errs1 (stderr) memcheck/tests/execve (stderr) memcheck/tests/execve2 (stderr) memcheck/tests/exitprog (stderr) memcheck/tests/fprw (stderr) memcheck/tests/fwrite (stdout) memcheck/tests/fwrite (stderr) memcheck/tests/inits (stderr) memcheck/tests/inline (stdout) memcheck/tests/inline (stderr) memcheck/tests/leak-0 (stderr) memcheck/tests/leak-cycle (stderr) memcheck/tests/leak-regroot (stderr) memcheck/tests/leak-tree (stderr) memcheck/tests/leakotron (stdout) memcheck/tests/malloc1 (stderr) memcheck/tests/malloc2 (stderr) memcheck/tests/malloc3 (stdout) memcheck/tests/malloc3 (stderr) memcheck/tests/manuel1 (stdout) memcheck/tests/manuel1 (stderr) memcheck/tests/manuel2 (stdout) memcheck/tests/manuel2 (stderr) memcheck/tests/manuel3 (stderr) memcheck/tests/match-overrun (stderr) memcheck/tests/memalign_test (stderr) memcheck/tests/memcmptest (stdout) memcheck/tests/memcmptest (stderr) 
memcheck/tests/mempool (stderr) memcheck/tests/metadata (stdout) memcheck/tests/metadata (stderr) memcheck/tests/mismatches (stderr) memcheck/tests/nanoleak (stderr) memcheck/tests/new_override (stdout) memcheck/tests/new_override (stderr) memcheck/tests/overlap (stdout) memcheck/tests/overlap (stderr) memcheck/tests/pointer-trace (stderr) memcheck/tests/post-syscall (stdout) memcheck/tests/post-syscall (stderr) memcheck/tests/realloc3 (stderr) memcheck/tests/scalar (stderr) memcheck/tests/scalar_exit_group (stderr) memcheck/tests/scalar_fork (stderr) memcheck/tests/scalar_supp (stderr) memcheck/tests/scalar_vfork (stderr) memcheck/tests/sigaltstack (stderr) memcheck/tests/signal2 (stdout) memcheck/tests/signal2 (stderr) memcheck/tests/sigprocmask (stderr) memcheck/tests/supp2 (stderr) memcheck/tests/suppfree (stderr) memcheck/tests/threadederrno (stdout) memcheck/tests/toobig-allocs (stderr) memcheck/tests/trivialleak (stderr) memcheck/tests/vgtest_ume (stderr) memcheck/tests/weirdioctl (stdout) memcheck/tests/weirdioctl (stderr) memcheck/tests/writev (stderr) memcheck/tests/zeropage (stdout) addrcheck/tests/addressable (stdout) addrcheck/tests/addressable (stderr) addrcheck/tests/badrw (stderr) addrcheck/tests/fprw (stderr) addrcheck/tests/leak-0 (stderr) addrcheck/tests/leak-cycle (stderr) addrcheck/tests/leak-regroot (stderr) addrcheck/tests/leak-tree (stderr) addrcheck/tests/overlap (stdout) addrcheck/tests/overlap (stderr) addrcheck/tests/toobig-allocs (stderr) cachegrind/tests/chdir (stderr) cachegrind/tests/dlclose (stderr) corecheck/tests/fdleak_cmsg (stderr) corecheck/tests/fdleak_creat (stderr) corecheck/tests/fdleak_dup (stderr) corecheck/tests/fdleak_dup2 (stderr) corecheck/tests/fdleak_fcntl (stderr) corecheck/tests/fdleak_ipv4 (stderr) corecheck/tests/fdleak_open (stderr) corecheck/tests/fdleak_pipe (stderr) corecheck/tests/fdleak_socketpair (stderr) massif/tests/toobig-allocs (stderr) none/tests/faultstatus (stderr) none/tests/selfrun (stdout) 
none/tests/selfrun (stderr) none/tests/yield (stdout) |
|
From: <sv...@va...> - 2005-04-01 23:38:41
|
Author: tom
Date: 2005-04-02 00:38:37 +0100 (Sat, 02 Apr 2005)
New Revision: 3498
Modified:
trunk/coregrind/amd64-linux/syscalls.c
Log:
Yet more amd64 system calls.
Modified: trunk/coregrind/amd64-linux/syscalls.c
===================================================================
--- trunk/coregrind/amd64-linux/syscalls.c 2005-04-01 23:22:36 UTC (rev 3497)
+++ trunk/coregrind/amd64-linux/syscalls.c 2005-04-01 23:38:37 UTC (rev 3498)
@@ -1040,11 +1040,11 @@

GENXY(__NR_getgroups, sys_getgroups), // 115
GENX_(__NR_setgroups, sys_setgroups), // 116
- // (__NR_setresuid, sys_setresuid), // 117
- // (__NR_getresuid, sys_getresuid), // 118
- // (__NR_setresgid, sys_setresgid), // 119
+ LINX_(__NR_setresuid, sys_setresuid), // 117
+ LINXY(__NR_getresuid, sys_getresuid), // 118
+ LINX_(__NR_setresgid, sys_setresgid), // 119

- // (__NR_getresgid, sys_getresgid), // 120
+ LINXY(__NR_getresgid, sys_getresgid), // 120
// (__NR_getpgid, sys_getpgid), // 121
// (__NR_setfsuid, sys_setfsuid), // 122
// (__NR_setfsgid, sys_setfsgid), // 123
@@ -1070,13 +1070,13 @@

// (__NR_getpriority, sys_getpriority), // 140
// (__NR_setpriority, sys_setpriority), // 141
- // (__NR_sched_setparam, sys_sched_setparam), // 142
- // (__NR_sched_getparam, sys_sched_getparam), // 143
- // (__NR_sched_setscheduler, sys_sched_setscheduler), // 144
+ GENXY(__NR_sched_setparam, sys_sched_setparam), // 142
+ GENXY(__NR_sched_getparam, sys_sched_getparam), // 143
+ GENX_(__NR_sched_setscheduler, sys_sched_setscheduler), // 144

- // (__NR_sched_getscheduler, sys_sched_getscheduler), // 145
- // (__NR_sched_get_priority_max, sys_sched_get_priority_max), // 146
- // (__NR_sched_get_priority_min, sys_sched_get_priority_min), // 147
+ GENX_(__NR_sched_getscheduler, sys_sched_getscheduler), // 145
+ GENX_(__NR_sched_get_priority_max, sys_sched_get_priority_max), // 146
+ GENX_(__NR_sched_get_priority_min, sys_sched_get_priority_min), // 147
// (__NR_sched_rr_get_interval, sys_sched_rr_get_interval), // 148
// (__NR_mlock, sys_mlock), // 149

|
|
From: Jeremy F. <je...@go...> - 2005-04-01 23:32:24
|
Nicholas Nethercote wrote:
> And then we wouldn't need dlopen() at all, right?
Well, we'd still need it to get stage2 into the right place, unless we
can use some linker magic to get it there first shot; I've tried doing
this before, but it's very hard to get right, and extremely hard to make
it work over a wide range of toolchain versions (and that's just
binutils). It would also have the same disadvantages as the current
non-PIE case. A dynamically placed stage2 is definitely preferred (at
least for 32-bit platforms; 64-bit should give us enough address space
to be able to place things statically without cramping anyone).
> Hmm, that doesn't sound so bad. One problem is external tools, eg.
> Callgrind. Is there any other way to achieve the core/tool opening
> and connection without using dlopen()?
Create a valgrind-core.o (or .a) and link callgrind against it, to
produce a stage2-callgrind.
J
|
|
From: <sv...@va...> - 2005-04-01 23:22:59
|
Author: tom
Date: 2005-04-02 00:22:36 +0100 (Sat, 02 Apr 2005)
New Revision: 3497
Modified:
trunk/coregrind/amd64-linux/syscalls.c
Log:
More amd64 system calls.
Modified: trunk/coregrind/amd64-linux/syscalls.c
===================================================================
--- trunk/coregrind/amd64-linux/syscalls.c 2005-04-01 20:20:12 UTC (rev 3496)
+++ trunk/coregrind/amd64-linux/syscalls.c 2005-04-01 23:22:36 UTC (rev 3497)
@@ -592,6 +592,20 @@
VG_(generic_PRE_sys_setsockopt)(tid, ARG1,ARG2,ARG3,ARG4,ARG5);
}

+PRE(sys_getsockopt, 0)
+{
+ PRINT("sys_getsockopt ( %d, %d, %d, %p, %p )",ARG1,ARG2,ARG3,ARG4,ARG5);
+ PRE_REG_READ5(long, "getsockopt",
+ int, s, int, level, int, optname,
+ void *, optval, int, *optlen);
+ VG_(generic_PRE_sys_getsockopt)(tid, ARG1,ARG2,ARG3,ARG4,ARG5);
+}
+
+POST(sys_getsockopt)
+{
+ VG_(generic_POST_sys_getsockopt)(tid, RES,ARG1,ARG2,ARG3,ARG4,ARG5);
+}
+
PRE(sys_connect, MayBlock)
{
PRINT("sys_connect ( %d, %p, %d )",ARG1,ARG2,ARG3);
@@ -952,7 +966,7 @@
PLAXY(__NR_socketpair, sys_socketpair), // 53
PLAX_(__NR_setsockopt, sys_setsockopt), // 54

- // (__NR_getsockopt, sys_getsockopt), // 55
+ PLAXY(__NR_getsockopt, sys_getsockopt), // 55
PLAX_(__NR_clone, sys_clone), // 56
GENX_(__NR_fork, sys_fork), // 57
GENX_(__NR_vfork, sys_fork), // 58 treat as fork
@@ -984,14 +998,14 @@

GENX_(__NR_chdir, sys_chdir), // 80
GENX_(__NR_fchdir, sys_fchdir), // 81
- // (__NR_rename, sys_rename), // 82
- // (__NR_mkdir, sys_mkdir), // 83
- // (__NR_rmdir, sys_rmdir), // 84
+ GENX_(__NR_rename, sys_rename), // 82
+ GENX_(__NR_mkdir, sys_mkdir), // 83
+ GENX_(__NR_rmdir, sys_rmdir), // 84

GENXY(__NR_creat, sys_creat), // 85
- // (__NR_link, sys_link), // 86
+ GENX_(__NR_link, sys_link), // 86
GENX_(__NR_unlink, sys_unlink), // 87
- // (__NR_symlink, sys_symlink), // 88
+ GENX_(__NR_symlink, sys_symlink), // 88
GENX_(__NR_readlink, sys_readlink), // 89

GENX_(__NR_chmod, sys_chmod), // 90
@@ -1016,7 +1030,7 @@
GENX_(__NR_setgid, sys_setgid), // 106
GENX_(__NR_geteuid, sys_geteuid), // 107
GENX_(__NR_getegid, sys_getegid), // 108
- // (__NR_setpgid, sys_setpgid), // 109
+ GENX_(__NR_setpgid, sys_setpgid), // 109

GENX_(__NR_getppid, sys_getppid), // 110
GENX_(__NR_getpgrp, sys_getpgrp), // 111
@@ -1025,7 +1039,7 @@
// (__NR_setregid, sys_setregid), // 114

GENXY(__NR_getgroups, sys_getgroups), // 115
- // (__NR_setgroups, sys_setgroups), // 116
+ GENX_(__NR_setgroups, sys_setgroups), // 116
// (__NR_setresuid, sys_setresuid), // 117
// (__NR_getresuid, sys_getresuid), // 118
// (__NR_setresgid, sys_setresgid), // 119
@@ -1045,7 +1059,7 @@
GENX_(__NR_rt_sigsuspend, sys_rt_sigsuspend), // 130
GENXY(__NR_sigaltstack, sys_sigaltstack), // 131
GENX_(__NR_utime, sys_utime), // 132
- // (__NR_mknod, sys_mknod), // 133
+ GENX_(__NR_mknod, sys_mknod), // 133
// (__NR_uselib, sys_uselib), // 134

// (__NR_personality, sys_personality), // 135
@@ -1079,7 +1093,7 @@
// (__NR_adjtimex, sys_adjtimex), // 159

GENX_(__NR_setrlimit, sys_setrlimit), // 160
- // (__NR_chroot, sys_chroot), // 161
+ GENX_(__NR_chroot, sys_chroot), // 161
// (__NR_sync, sys_sync), // 162
// (__NR_acct, sys_acct), // 163
// (__NR_settimeofday, sys_settimeofday), // 164
|
|
From: Nicholas N. <nj...@cs...> - 2005-04-01 22:35:03
|
On Fri, 1 Apr 2005, Jeremy Fitzhardinge wrote:

>> Umm. Well, that's a(nother) good reason not to snarf it as-is :-)
>> But we might still be able to saw off the ldso parts and route any
>> syscall requirements it has via Kal; then it would [in theory :-]
>> still work on non-Linux platforms.
>
> Not really. The dynamic linker is object-file and toolchain dependent,
> so unless we want to force all platforms to use a specific
> toolchain+object file format, we're going to have to work out how to
> live with whatever the platform provides.
>
> The dynamic linking for tools is hardly essential. At worst, we could
> easily link the tool into stage2, and have a stage2 per tool. With the
> cleanup of the tool interface, you could link all the tools into one
> stage2 binary as well.
>
> Loading and locating stage2 would remain a problem, but that's easier
> since it happens before Valgrind takes control, and stage1 can make full
> use of all the system libraries etc.

And then we wouldn't need dlopen() at all, right?

Hmm, that doesn't sound so bad. One problem is external tools, eg.
Callgrind. Is there any other way to achieve the core/tool opening
and connection without using dlopen()?

N |
|
From: Jeremy F. <je...@go...> - 2005-04-01 21:57:58
|
Julian Seward wrote:
>Umm. Well, that's a(nother) good reason not to snarf it as-is :-)
>But we might still be able to saw off the ldso parts and route any
>syscall requirements it has via Kal; then it would [in theory :-]
>still work on non-Linux platforms.
>
>
Not really. The dynamic linker is object-file and toolchain dependent,
so unless we want to force all platforms to use a specific
toolchain+object file format, we're going to have to work out how to
live with whatever the platform provides.
The dynamic linking for tools is hardly essential. At worst, we could
easily link the tool into stage2, and have a stage2 per tool. With the
cleanup of the tool interface, you could link all the tools into one
stage2 binary as well.
Loading and locating stage2 would remain a problem, but that's easier
since it happens before Valgrind takes control, and stage1 can make full
use of all the system libraries etc.
J
|
|
From: <sv...@va...> - 2005-04-01 20:20:16
|
Author: sewardj
Date: 2005-04-01 21:20:12 +0100 (Fri, 01 Apr 2005)
New Revision: 3496
Modified:
trunk/memcheck/mc_translate.c
Log:
Add a missing case. I guess it can't have been wildly popular :-)
Modified: trunk/memcheck/mc_translate.c
===================================================================
--- trunk/memcheck/mc_translate.c 2005-04-01 18:58:09 UTC (rev 3495)
+++ trunk/memcheck/mc_translate.c 2005-04-01 20:20:12 UTC (rev 3496)
@@ -1532,6 +1532,7 @@
case Iop_Yl2xF64:
case Iop_Yl2xp1F64:
case Iop_PRemF64:
+ case Iop_PRem1F64:
case Iop_AtanF64:
case Iop_AddF64:
case Iop_DivF64:
|
|
From: <sv...@va...> - 2005-04-01 20:19:40
|
Author: sewardj
Date: 2005-04-01 21:19:20 +0100 (Fri, 01 Apr 2005)
New Revision: 1116
Modified:
trunk/priv/guest-amd64/toIR.c
trunk/priv/guest-x86/toIR.c
Log:
Remember to clear C2 after fsincos, as that actually makes it work
right with reasonable-sized inputs. This confirms fsincos as the
golden lemon of x87 floating point instructions, since Vex has by now
chomped through vast amounts of floating point code on x86 and this is
the first time this bug has come to light.
Modified: trunk/priv/guest-amd64/toIR.c
===================================================================
--- trunk/priv/guest-amd64/toIR.c 2005-03-30 23:20:47 UTC (rev 1115)
+++ trunk/priv/guest-amd64/toIR.c 2005-04-01 20:19:20 UTC (rev 1116)
@@ -4901,6 +4901,7 @@
//.. put_ST_UNCHECKED(0, unop(Iop_SinF64, mkexpr(a1)));
//.. fp_push();
//.. put_ST(0, unop(Iop_CosF64, mkexpr(a1)));
+//.. clear_C2(); /* HACK */
//.. break;
//.. }
//..
Modified: trunk/priv/guest-x86/toIR.c
===================================================================
--- trunk/priv/guest-x86/toIR.c 2005-03-30 23:20:47 UTC (rev 1115)
+++ trunk/priv/guest-x86/toIR.c 2005-04-01 20:19:20 UTC (rev 1116)
@@ -4112,6 +4112,7 @@
put_ST_UNCHECKED(0, unop(Iop_SinF64, mkexpr(a1)));
fp_push();
put_ST(0, unop(Iop_CosF64, mkexpr(a1)));
+ clear_C2(); /* HACK */
break;
}
|
|
From: <sv...@va...> - 2005-04-01 18:58:30
|
Author: tom
Date: 2005-04-01 19:58:09 +0100 (Fri, 01 Apr 2005)
New Revision: 3495
Modified:
trunk/coregrind/vg_redir.c
Log:
Rework the vsyscall redirections to work in pie code - the old form
seemed to completely confuse the compiler and it was generating
nonsense code to get the address of the replacement routines.
Modified: trunk/coregrind/vg_redir.c
===================================================================
--- trunk/coregrind/vg_redir.c 2005-04-01 08:07:54 UTC (rev 3494)
+++ trunk/coregrind/vg_redir.c 2005-04-01 18:58:09 UTC (rev 3495)
@@ -377,16 +377,21 @@
96 == __NR_gettimeofday
201 == __NR_time
*/
+static void amd64_linux_rerouted__vgettimeofday(void)
+{
asm(
-"amd64_linux_rerouted__vgettimeofday:\n"
" movq $96, %rax\n"
" syscall\n"
-" ret\n"
-"amd64_linux_rerouted__vtime:\n"
+);
+}
+
+static void amd64_linux_rerouted__vtime(void)
+{
+asm(
" movq $201, %rax\n"
" syscall\n"
-" ret\n"
);
+}
#endif

/* If address 'a' is being redirected, return the redirected-to
@@ -399,12 +404,10 @@
/* HACK. Reroute the amd64-linux vsyscalls. This should be moved
out of here into an amd64-linux specific initialisation routine.
*/
- extern void amd64_linux_rerouted__vgettimeofday;
- extern void amd64_linux_rerouted__vtime;
if (a == 0xFFFFFFFFFF600000ULL)
- return (Addr)&amd64_linux_rerouted__vgettimeofday;
+ return (Addr)amd64_linux_rerouted__vgettimeofday;
if (a == 0xFFFFFFFFFF600400ULL)
- return (Addr)&amd64_linux_rerouted__vtime;
+ return (Addr)amd64_linux_rerouted__vtime;
#endif

r = VG_(SkipList_Find_Exact)(&sk_resolved_redir, &a);
|
|
From: Erik A. <and...@co...> - 2005-04-01 17:48:10
|
On Fri Apr 01, 2005 at 06:46:27PM +0100, Julian Seward wrote:

> > It will support amd64 when interested parties do the needed work.
> > There is preliminary architecture support in place in SVN, but
> > support of ldso has not even begun. Lacking an amd64 system, I
> > haven't personally been very motivated.
> > [...]
>
> Hmm, ok. Back to the drawing board. I might stare at the ldso
> stuff to see how hard it would be to get it to work on amd64.
> We're going to need that functionality one way or the other;
> and the good thing about working with uclibc is that we'd only
> presumably have to fill in the amd64-specific holes in ldso;
> the generic infrastructure is already all there. Correct?

Yup

-Erik

--
Erik B. Andersen http://codepoet-consulting.com/
--This message was written using 73% post-consumer electrons-- |
|
From: Julian S. <js...@ac...> - 2005-04-01 17:46:44
|
> A point of pedantry - s/Eric/Erik/ :)

Sorry about that :) -- I noticed it just after I clicked 'send'.

> It will support amd64 when interested parties do the needed work.
> There is preliminary architecture support in place in SVN, but
> support of ldso has not even begun. Lacking an amd64 system, I
> haven't personally been very motivated.
> [...]

Hmm, ok. Back to the drawing board. I might stare at the ldso
stuff to see how hard it would be to get it to work on amd64.
We're going to need that functionality one way or the other;
and the good thing about working with uclibc is that we'd only
presumably have to fill in the amd64-specific holes in ldso;
the generic infrastructure is already all there. Correct?

J |
|
From: Erik A. <and...@co...> - 2005-04-01 17:38:13
On Fri Apr 01, 2005 at 06:18:31PM +0100, Julian Seward wrote:
> Erik
>
> Thanks for the info.
>
> So, do you support amd64 on Linux? It's not clear from what
> you said.

There are a few architecture-specific areas in uClibc. Basic
architecture support means you can statically link applications. That
generally just requires an implementation for crti.o, crtn.o, crt1.o,
and any overrides for generic syscalls that are different for the
architecture (typically just a few syscalls such as brk, setjmp,
longjmp, vfork, clone). This part is done, so statically linked apps
allegedly work. I say allegedly since I lack amd64 hardware and thus
have never tested it.

For more complete architecture support, one should add the needed
bits for libpthread (i.e. testandset and compare_and_swap).

For improved architecture support, one should then add optimized
string functions (i.e. strcpy, memcpy, etc). This is not critical,
but does tend to improve performance. This has not been done for
x86_64.

Finally, the most difficult bit of architecture-specific code is
adding ldso support. This has not been done for x86_64.

-Erik
--
Erik B. Andersen http://codepoet-consulting.com/
--This message was written using 73% post-consumer electrons--
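The two libpthread primitives Erik mentions can be sketched as follows. This is purely illustrative: it uses GCC/Clang atomic builtins rather than the hand-written per-architecture assembly uClibc actually carries, and the exact names and signatures are assumptions, not uClibc's real API.

```c
#include <assert.h>

/* Sketch of the two per-architecture primitives libpthread needs:
   testandset() for spinlocks, and compare_and_swap(). */

static long testandset(volatile long *spinlock)
{
    /* Atomically store 1 into the lock word and return its
       previous value (0 means we acquired the lock). */
    return __sync_lock_test_and_set(spinlock, 1);
}

static int compare_and_swap(volatile long *p, long oldval, long newval)
{
    /* Returns nonzero iff *p was oldval and has been replaced
       by newval in one atomic step. */
    return __sync_bool_compare_and_swap(p, oldval, newval);
}
```

A real port would implement these with the architecture's own instructions (e.g. lock-prefixed cmpxchg on x86_64), but the contract is the same.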
From: Erik A. <and...@co...> - 2005-04-01 17:27:16
On Fri Apr 01, 2005 at 04:03:32PM +0100, Julian Seward wrote:
> On Friday 01 April 2005 14:48, Nicholas Nethercote wrote:
> > On Fri, 1 Apr 2005, Julian Seward wrote:
> > > I had a look at the uCLibc-0.9.27 sources just now, and I must say
> > > that looks promising. I am a little concerned not to see an amd64
> > > directory hanging off uClibc-0.9.27/ldso/ldso; does that mean it
> > > doesn't support amd64?
> >
> > And I believe uCLibc is Linux-only...
>
> Umm. Well, that's a(nother) good reason not to snarf it as-is :-)

It rather depends on your goals. Thus far, Linux only has been
sufficient. The structure is there such that uClibc could be ported,
should interested parties care to do the needed work.

> But we might still be able to saw off the ldso parts and route any
> syscall requirements it has via Kal; then it would [in theory :-]
> still work on non-Linux platforms.

The ldso license is such that you are more than welcome to do so
should you feel the need. Only a few syscalls are needed for ldso,
and they are separated out into a single header file. As long as the
target platform does ELF, uClibc's ldso should work fine.

We've been working on some major changes and cleanup in the last
couple of weeks. Which is to say, half the architectures are
currently broken, but I hope to get it all sorted out in the next
couple of days.

> Eric, what's the situation with uCLibc on amd64? Does/will the ldso
> stuff support amd64? What about ppc32 and ppc64?

A point of pedantry - s/Eric/Erik/ :)

It will support amd64 when interested parties do the needed work.
There is preliminary architecture support in place in SVN, but
support of ldso has not even begun. Lacking an amd64 system, I
haven't personally been very motivated.

Linux/ppc32 works great. As with amd64, I lack a ppc64-capable
system and thus have not pursued the port. A basic static-linking-only
port is really pretty easy to do. Getting ldso working takes a lot
more work, though. Allegedly a contract I am working on will provide
me with an Apple G5 in the next few months, which will no doubt get
me working on that port as time permits.

-Erik
--
Erik B. Andersen http://codepoet-consulting.com/
--This message was written using 73% post-consumer electrons--

From: Julian S. <js...@ac...> - 2005-04-01 17:19:06
Erik

Thanks for the info.

So, do you support amd64 on Linux? It's not clear from what you said.

J

On Friday 01 April 2005 18:14, Erik Andersen wrote:
> On Fri Apr 01, 2005 at 07:48:32AM -0600, Nicholas Nethercote wrote:
> > On Fri, 1 Apr 2005, Julian Seward wrote:
> > > I had a look at the uCLibc-0.9.27 sources just now, and I must say
> > > that looks promising. I am a little concerned not to see an amd64
> > > directory hanging off uClibc-0.9.27/ldso/ldso; does that mean it
> > > doesn't support amd64?
> >
> > And I believe uCLibc is Linux-only...
>
> Currently uCLibc is indeed Linux only. Thus far there has been
> little interest or effort towards supporting alternatives, since
> glibc seems to have that covered. We focus on being small and
> correct. Actually, someone did port to libgloss, so uClibc could
> also run on bare metal, but that work has never been merged.
>
> -Erik
>
> --
> Erik B. Andersen http://codepoet-consulting.com/
> --This message was written using 73% post-consumer electrons--

From: Erik A. <and...@co...> - 2005-04-01 17:15:16
On Fri Apr 01, 2005 at 07:48:32AM -0600, Nicholas Nethercote wrote:
> On Fri, 1 Apr 2005, Julian Seward wrote:
>
> > I had a look at the uCLibc-0.9.27 sources just now, and I must say
> > that looks promising. I am a little concerned not to see an amd64
> > directory hanging off uClibc-0.9.27/ldso/ldso; does that mean it
> > doesn't support amd64?
>
> And I believe uCLibc is Linux-only...

Currently uCLibc is indeed Linux only. Thus far there has been little
interest or effort towards supporting alternatives, since glibc seems
to have that covered. We focus on being small and correct. Actually,
someone did port to libgloss, so uClibc could also run on bare metal,
but that work has never been merged.

-Erik
--
Erik B. Andersen http://codepoet-consulting.com/
--This message was written using 73% post-consumer electrons--

From: Julian S. <js...@ac...> - 2005-04-01 15:04:09
On Friday 01 April 2005 14:48, Nicholas Nethercote wrote:
> On Fri, 1 Apr 2005, Julian Seward wrote:
> > I had a look at the uCLibc-0.9.27 sources just now, and I must say
> > that looks promising. I am a little concerned not to see an amd64
> > directory hanging off uClibc-0.9.27/ldso/ldso; does that mean it
> > doesn't support amd64?
>
> And I believe uCLibc is Linux-only...

Umm. Well, that's a(nother) good reason not to snarf it as-is :-)

But we might still be able to saw off the ldso parts and route any
syscall requirements it has via Kal; then it would [in theory :-]
still work on non-Linux platforms.

Eric, what's the situation with uCLibc on amd64? Does/will the ldso
stuff support amd64? What about ppc32 and ppc64?

J

From: Nicholas N. <nj...@cs...> - 2005-04-01 13:48:39
On Fri, 1 Apr 2005, Julian Seward wrote:
> I had a look at the uCLibc-0.9.27 sources just now, and I must say
> that looks promising. I am a little concerned not to see an amd64
> directory hanging off uClibc-0.9.27/ldso/ldso; does that mean it
> doesn't support amd64?
And I believe uCLibc is Linux-only...
>> 3. allocation
>> 1. Our allocation model is basically the same as libc's, with
>> the added complexity of arenas. We're really only using the
>> arenas as a crude kind of memory profiling mechanism,
>
> The arenas differ also in the size of the allocation redzones. This
> is used by {mem,addr}check -- client allocations come from an arena
> with big redzones, whilst the rest of V uses small or zero redzones
> for size efficiency.
Yes, the redzones are one of the most important reasons for having our own
allocator.
N
From: Julian S. <js...@ac...> - 2005-04-01 13:42:10
> > I've undoubtedly oversimplified the issues involved and missed out
> > some important details. Anyone care to add their two cents? I figure
> > it's a good idea to discuss this and come to some sort of
> > conclusion/agreement before too much implementation effort occurs.
>
> I definitely understand the appeal of 3, and I don't object too strongly
> if we go that way, but I think we should really avoid wasting effort.
> Every line of code is a liability, and if we're going to maintain code,
> it had better pay for itself.
In reality we need a hybrid of 2 (import a different libc) and 3
(roll our own). uCLibc, whilst way smaller than glibc, is still
huge -- importing it all would dwarf our own code base.
Basically the only thing that bothers me is dlopen et al. All
the rest is harmless, unless anyone can find any other large heavy
objects hidden in the undergrowth.
I'm not crazy enough to suggest we should write our own ldso facility.
Far from it. Much better to see if we can easily saw off ldso from
(eg) uCLibc, then ringfence it adequately so as to guarantee it won't
screw us up in some obscure way. Pretty much the same way the
demangler is done.
I had a look at the uCLibc-0.9.27 sources just now, and I must say
that looks promising. I am a little concerned not to see an amd64
directory hanging off uClibc-0.9.27/ldso/ldso; does that mean it
doesn't support amd64?
> 3. allocation
> 1. Our allocation model is basically the same as libc's, with
> the added complexity of arenas. We're really only using the
> arenas as a crude kind of memory profiling mechanism,
The arenas differ also in the size of the allocation redzones. This
is used by {mem,addr}check -- client allocations come from an arena
with big redzones, whilst the rest of V uses small or zero redzones
for size efficiency.
I like having the arenas; it allows us to crudely profile space use,
as you say; it also makes it easier to ensure that misuse of space
acquired from one arena (writing off the end of blocks) does not
screw up stuff in other arenas. I don't know if that's useful or
not in practice.
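The redzone scheme described above can be sketched as a toy allocator. This is purely illustrative, not Valgrind's actual code: the struct shape, the 0xAA fill pattern, and the function names are all invented here.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Each arena carries its own redzone size: the client arena uses big
   redzones (which memcheck can mark unaddressable), while internal
   arenas use small or zero redzones for space efficiency. */
typedef struct { size_t redzone; } Arena;

static void *arena_malloc(Arena *a, size_t n)
{
    /* Allocate payload plus a redzone on each side. */
    uint8_t *p = malloc(n + 2 * a->redzone);
    if (!p) return NULL;
    memset(p, 0xAA, a->redzone);                   /* leading redzone  */
    memset(p + a->redzone + n, 0xAA, a->redzone);  /* trailing redzone */
    return p + a->redzone;                         /* hand out payload */
}

static void arena_free(Arena *a, void *payload)
{
    /* Step back over the leading redzone to recover the real block. */
    free((uint8_t *)payload - a->redzone);
}
```

A real implementation also has to keep payloads suitably aligned and record each block's size; this sketch only shows how per-arena redzone widths fall out of tagging allocations with their arena.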
Yeah, having to pass around arena tags isn't the best. I'm not
convinced the extra hassle is a big deal, though.
> the libc API isn't very good for us.
Sure. I also prefer not to inherit API stupidity from libc if
possible.
J
From: Tom H. <th...@cy...> - 2005-04-01 08:23:08
Nightly build on alvis ( unknown, Red Hat 7.3 ) started at 2005-04-01 09:15:58 BST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 183 tests, 20 stderr failures, 0 stdout failures =================
memcheck/tests/addressable (stderr)
memcheck/tests/describe-block (stderr)
memcheck/tests/leak-0 (stderr)
memcheck/tests/leak-cycle (stderr)
memcheck/tests/leak-regroot (stderr)
memcheck/tests/leak-tree (stderr)
memcheck/tests/match-overrun (stderr)
memcheck/tests/pointer-trace (stderr)
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
memcheck/tests/vgtest_ume (stderr)
addrcheck/tests/leak-0 (stderr)
addrcheck/tests/leak-cycle (stderr)
addrcheck/tests/leak-regroot (stderr)
addrcheck/tests/leak-tree (stderr)
cachegrind/tests/chdir (stderr)
cachegrind/tests/dlclose (stderr)
cachegrind/tests/x86/fpu-28-108 (stderr)
none/tests/faultstatus (stderr)
none/tests/x86/int (stderr)

From: Tom H. <th...@cy...> - 2005-04-01 08:22:02
Nightly build on ginetta ( athlon, Red Hat 8.0 ) started at 2005-04-01 09:15:39 BST

Checking out vex source tree ... done
Building vex ... done
Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow
== 183 tests, 7 stderr failures, 0 stdout failures =================
memcheck/tests/scalar (stderr)
memcheck/tests/threadederrno (stderr)
cachegrind/tests/chdir (stderr)
cachegrind/tests/dlclose (stderr)
cachegrind/tests/x86/fpu-28-108 (stderr)
none/tests/faultstatus (stderr)
none/tests/x86/int (stderr)