From: Andreas A. <ar...@so...> - 2020-03-04 17:23:59
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=91bf2d2d8cbf5942d73dc748150945a5c24ecae2

commit 91bf2d2d8cbf5942d73dc748150945a5c24ecae2
Author: Andreas Arnez <ar...@li...>
Date:   Mon Mar 2 16:22:59 2020 +0100

    Bug 418435 - s390x: Avoid extra value dependency in CLC implementation

    The test memcheck/tests/memcmp currently fails on s390x because it
    yields the expected "conditional jump or move depends on uninitialised
    value(s)" message twice instead of just once.  This is caused by the
    handling of the s390x instruction CLC, see s390_irgen_CLC_EX().  When
    comparing two bytes from the two input strings, the implementation
    uses the comparison result for a conditional branch to the next
    instruction.  But if no further bytes need to be compared, the
    comparison result is also used for generating the resulting condition
    code.

    There are two cases: Either the inputs are equal; then the resulting
    condition code is zero.  This is what happens in the memcmp test case.
    Or the inputs are different; then the resulting condition code is 1 or
    2 if the first or second operand is greater, respectively.

    At least in the first case it is easy to avoid the additional
    dependency, by clearing the condition code explicitly.  Just do this.
Diff:
---
 NEWS                       | 1 +
 VEX/priv/guest_s390_toIR.c | 3 +++
 2 files changed, 4 insertions(+)

diff --git a/NEWS b/NEWS
index d305c83..d8095be 100644
--- a/NEWS
+++ b/NEWS
@@ -129,6 +129,7 @@ n-i-bz  Fix non-glibc build of test suite with s390x_features
 417452  s390_insn_store_emit: dst->tag for HRcVec128
 417578  Add suppressions for glibc DTV leaks
 417906  clone with CLONE_VFORK and no CLONE_VM fails
+418435  s390x: memcmp test yields extra "Conditional jump or move depends on uninitialised value(s)"
 
 Release 3.15.0 (12 April 2019)

diff --git a/VEX/priv/guest_s390_toIR.c b/VEX/priv/guest_s390_toIR.c
index eb3a25b..c27a8d3 100644
--- a/VEX/priv/guest_s390_toIR.c
+++ b/VEX/priv/guest_s390_toIR.c
@@ -12840,6 +12840,9 @@ s390_irgen_CLC_EX(IRTemp length, IRTemp start1, IRTemp start2)
    put_counter_dw0(binop(Iop_Add64, mkexpr(counter), mkU64(1)));
    iterate_if(binop(Iop_CmpNE64, mkexpr(counter), mkexpr(length)));
    put_counter_dw0(mkU64(0));
+
+   /* Equal.  Clear CC, to avoid duplicate dependency on the comparison. */
+   s390_cc_set_val(0);
 }
 
 static void
From: Mark W. <ma...@so...> - 2020-03-04 14:10:25
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=3d6a8157d52f18261f2c1a0888c2cfd3289b371e

commit 3d6a8157d52f18261f2c1a0888c2cfd3289b371e
Author: Mark Wielaard <ma...@kl...>
Date:   Fri Feb 28 13:36:31 2020 +0100

    Add 32bit time64 syscalls for arm, mips32, ppc32 and x86.

    This patch adds syscall wrappers for the following syscalls which use
    a 64bit time_t on 32bit arches: gettime64, settime64,
    clock_getres_time64, clock_nanosleep_time64, timer_gettime64,
    timer_settime64, timerfd_gettime64, timerfd_settime64,
    utimensat_time64, pselect6_time64, ppoll_time64, recvmmsg_time64,
    mq_timedsend_time64, mq_timedreceive_time64, semtimedop_time64,
    rt_sigtimedwait_time64, futex_time64 and
    sched_rr_get_interval_time64.  Still missing are clock_adjtime64 and
    io_pgetevents_time64.

    For the more complicated syscalls futex[_time64], pselect6[_time64]
    and ppoll[_time64] there are shared pre and/or post helper functions.
    Other functions just have their own PRE and POST handler.

    Note that the vki_timespec64 struct really is the struct as used by
    glibc (it internally translates a 32bit timespec struct to a 64bit
    timespec64 struct before passing it to any of the time64 syscalls).
    The kernel uses a 64-bit signed int, but ignores the upper 32 bits of
    the tv_nsec field.  It does always write the full struct, though.  So
    avoiding the check of the padding is only needed for PRE_MEM_READ.
    There are two helpers, pre_read_timespec64 and pre_read_itimerspec64,
    to check the new structs.
https://bugs.kde.org/show_bug.cgi?id=416753

Diff:
---
 NEWS                                       |   1 +
 coregrind/m_syswrap/priv_syswrap-linux.h   |  22 ++
 coregrind/m_syswrap/syswrap-arm-linux.c    |  24 +-
 coregrind/m_syswrap/syswrap-linux.c        | 531 ++++++++++++++++++++++++++---
 coregrind/m_syswrap/syswrap-mips32-linux.c |  24 +-
 coregrind/m_syswrap/syswrap-ppc32-linux.c  |  22 ++
 coregrind/m_syswrap/syswrap-x86-linux.c    |  22 ++
 include/vki/vki-linux.h                    |  26 ++
 8 files changed, 631 insertions(+), 41 deletions(-)

diff --git a/NEWS b/NEWS
index afe8872..d305c83 100644
--- a/NEWS
+++ b/NEWS
@@ -123,6 +123,7 @@ n-i-bz  Add support for the Linux io_uring system calls
 n-i-bz  sys_statx: don't complain if both |filename| and |buf| are NULL.
 n-i-bz  Fix non-glibc build of test suite with s390x_features
 416667  gcc10 ppc64le impossible constraint in 'asm' in test_isa.
+416753  new 32bit time syscalls for 2038+
 417427  commit to fix vki_siginfo_t definition created numerous regression
         errors on ppc64
 417452  s390_insn_store_emit: dst->tag for HRcVec128

diff --git a/coregrind/m_syswrap/priv_syswrap-linux.h b/coregrind/m_syswrap/priv_syswrap-linux.h
index 0cfe782..708e5fd 100644
--- a/coregrind/m_syswrap/priv_syswrap-linux.h
+++ b/coregrind/m_syswrap/priv_syswrap-linux.h
@@ -393,6 +393,28 @@ DECL_TEMPLATE(linux, sys_socketpair);
 DECL_TEMPLATE(linux, sys_kcmp);
 DECL_TEMPLATE(linux, sys_copy_file_range);
 
+/* 64bit time_t syscalls for 32bit arches. */
+DECL_TEMPLATE(linux, sys_clock_gettime64)
+DECL_TEMPLATE(linux, sys_clock_settime64)
+// clock_adjtime64
+DECL_TEMPLATE(linux, sys_clock_getres_time64)
+DECL_TEMPLATE(linux, sys_clock_nanosleep_time64);
+DECL_TEMPLATE(linux, sys_timer_gettime64);
+DECL_TEMPLATE(linux, sys_timer_settime64);
+DECL_TEMPLATE(linux, sys_timerfd_gettime64);
+DECL_TEMPLATE(linux, sys_timerfd_settime64);
+DECL_TEMPLATE(linux, sys_utimensat_time64);
+DECL_TEMPLATE(linux, sys_pselect6_time64);
+DECL_TEMPLATE(linux, sys_ppoll_time64);
+// io_pgetevents_time64
+DECL_TEMPLATE(linux, sys_recvmmsg_time64);
+DECL_TEMPLATE(linux, sys_mq_timedsend_time64);
+DECL_TEMPLATE(linux, sys_mq_timedreceive_time64);
+DECL_TEMPLATE(linux, sys_semtimedop_time64);
+DECL_TEMPLATE(linux, sys_rt_sigtimedwait_time64);
+DECL_TEMPLATE(linux, sys_futex_time64);
+DECL_TEMPLATE(linux, sys_sched_rr_get_interval_time64);
+
 // Some arch specific functions called from syswrap-linux.c
 extern Int do_syscall_clone_x86_linux ( Word (*fn)(void *), void* stack,

diff --git a/coregrind/m_syswrap/syswrap-arm-linux.c b/coregrind/m_syswrap/syswrap-arm-linux.c
index 18468f0..3722cdd 100644
--- a/coregrind/m_syswrap/syswrap-arm-linux.c
+++ b/coregrind/m_syswrap/syswrap-arm-linux.c
@@ -742,7 +742,7 @@ static SyscallTableEntry syscall_main_table[] = {
    LINX_(__NR_sched_get_priority_max, sys_sched_get_priority_max),// 159
    LINX_(__NR_sched_get_priority_min, sys_sched_get_priority_min),// 160
-//zz //LINX?(__NR_sched_rr_get_interval, sys_sched_rr_get_interval), // 161 */*
+   LINXY(__NR_sched_rr_get_interval, sys_sched_rr_get_interval), // 161
    GENXY(__NR_nanosleep, sys_nanosleep), // 162
    GENX_(__NR_mremap, sys_mremap), // 163
    LINX_(__NR_setresuid, sys_setresuid16), // 164
@@ -1020,6 +1020,28 @@ static SyscallTableEntry syscall_main_table[] = {
    LINX_(__NR_pwritev2, sys_pwritev2), // 393
 
    LINXY(__NR_statx, sys_statx), // 397
+
+   LINXY(__NR_clock_gettime64, sys_clock_gettime64), // 403
+   LINX_(__NR_clock_settime64, sys_clock_settime64), // 404
+
+   LINXY(__NR_clock_getres_time64, sys_clock_getres_time64), // 406
+   LINXY(__NR_clock_nanosleep_time64, sys_clock_nanosleep_time64), // 407
+   LINXY(__NR_timer_gettime64, sys_timer_gettime64), // 408
+   LINXY(__NR_timer_settime64, sys_timer_settime64), // 409
+   LINXY(__NR_timerfd_gettime64, sys_timerfd_gettime64),// 410
+   LINXY(__NR_timerfd_settime64, sys_timerfd_settime64),// 411
+   LINX_(__NR_utimensat_time64, sys_utimensat_time64), // 412
+   LINXY(__NR_pselect6_time64, sys_pselect6_time64), // 413
+   LINXY(__NR_ppoll_time64, sys_ppoll_time64), // 414
+
+   LINXY(__NR_recvmmsg_time64, sys_recvmmsg_time64), // 417
+   LINX_(__NR_mq_timedsend_time64, sys_mq_timedsend_time64), // 418
+   LINXY(__NR_mq_timedreceive_time64, sys_mq_timedreceive_time64), // 419
+   LINX_(__NR_semtimedop_time64, sys_semtimedop_time64),// 420
+   LINXY(__NR_rt_sigtimedwait_time64, sys_rt_sigtimedwait_time64), // 421
+   LINXY(__NR_futex_time64, sys_futex_time64), // 422
+   LINXY(__NR_sched_rr_get_interval_time64,
+         sys_sched_rr_get_interval_time64), // 423
 };

diff --git a/coregrind/m_syswrap/syswrap-linux.c b/coregrind/m_syswrap/syswrap-linux.c
index 87334c9..1190a57 100644
--- a/coregrind/m_syswrap/syswrap-linux.c
+++ b/coregrind/m_syswrap/syswrap-linux.c
@@ -1622,7 +1622,23 @@ POST(sys_sendfile64)
    }
 }
 
-PRE(sys_futex)
+static void pre_read_timespec64 (ThreadId tid, const char *msg, UWord arg)
+{
+   struct vki_timespec64 *ts64 = (void *)(Addr)arg;
+   PRE_MEM_READ (msg, (Addr) &ts64->tv_sec, sizeof(vki_time64_t));
+   PRE_MEM_READ (msg, (Addr) &ts64->tv_nsec, sizeof(vki_int32_t));
+}
+
+static void pre_read_itimerspec64 (ThreadId tid, const char *msg, UWord arg)
+{
+   struct vki_itimerspec64 *its64 = (void *)(Addr)arg;
+   pre_read_timespec64 (tid, msg, (UWord) &its64->it_interval);
+   pre_read_timespec64 (tid, msg, (UWord) &its64->it_value);
+}
+
+static void futex_pre_helper ( ThreadId tid, SyscallArgLayout* layout,
+                               SyscallArgs* arrghs, SyscallStatus* status,
+                               UWord* flags, Bool is_time64 )
 {
    /* arg    param  used by ops
@@ -1634,21 +1650,32 @@ PRE(sys_futex)
      ARG5 - u32 *uaddr2     REQUEUE,CMP_REQUEUE
      ARG6 - int val3        CMP_REQUEUE
    */
-   PRINT("sys_futex ( %#" FMT_REGWORD "x, %ld, %ld, %#" FMT_REGWORD
-         "x, %#" FMT_REGWORD "x )", ARG1, SARG2, SARG3, ARG4, ARG5);
+
    switch(ARG2 & ~(VKI_FUTEX_PRIVATE_FLAG|VKI_FUTEX_CLOCK_REALTIME)) {
    case VKI_FUTEX_CMP_REQUEUE:
    case VKI_FUTEX_WAKE_OP:
    case VKI_FUTEX_CMP_REQUEUE_PI:
-      PRE_REG_READ6(long, "futex",
-                    vki_u32 *, futex, int, op, int, val,
-                    struct timespec *, utime, vki_u32 *, uaddr2, int, val3);
+      if (is_time64) {
+         PRE_REG_READ6(long, "futex_time64",
+                       vki_u32 *, futex, int, op, int, val,
+                       struct timespec64 *, utime, vki_u32 *, uaddr2, int, val3);
+      } else {
+         PRE_REG_READ6(long, "futex",
+                       vki_u32 *, futex, int, op, int, val,
+                       struct timespec *, utime, vki_u32 *, uaddr2, int, val3);
+      }
       break;
    case VKI_FUTEX_REQUEUE:
    case VKI_FUTEX_WAIT_REQUEUE_PI:
-      PRE_REG_READ5(long, "futex",
-                    vki_u32 *, futex, int, op, int, val,
-                    struct timespec *, utime, vki_u32 *, uaddr2);
+      if (is_time64) {
+         PRE_REG_READ5(long, "futex_time64",
+                       vki_u32 *, futex, int, op, int, val,
+                       struct timespec64 *, utime, vki_u32 *, uaddr2);
+      } else {
+         PRE_REG_READ5(long, "futex",
+                       vki_u32 *, futex, int, op, int, val,
+                       struct timespec *, utime, vki_u32 *, uaddr2);
+      }
       break;
    case VKI_FUTEX_WAIT_BITSET:
       /* Check that the address at least begins in client-accessible area. */
@@ -1657,15 +1684,27 @@ PRE(sys_futex)
         return;
      }
      if (*(vki_u32 *)(Addr)ARG1 != ARG3) {
-         PRE_REG_READ4(long, "futex",
-                       vki_u32 *, futex, int, op, int, val,
-                       struct timespec *, utime);
+         if (is_time64) {
+            PRE_REG_READ4(long, "futex_time64",
+                          vki_u32 *, futex, int, op, int, val,
+                          struct timespec64 *, utime);
+         } else {
+            PRE_REG_READ4(long, "futex",
+                          vki_u32 *, futex, int, op, int, val,
+                          struct timespec64 *, utime);
+         }
      } else {
         /* Note argument 5 is unused, but argument 6 is used.
            So we cannot just PRE_REG_READ6.  Read argument 6 separately. */
-         PRE_REG_READ4(long, "futex",
-                       vki_u32 *, futex, int, op, int, val,
-                       struct timespec *, utime);
+         if (is_time64) {
+            PRE_REG_READ4(long, "futex_time64",
+                          vki_u32 *, futex, int, op, int, val,
+                          struct timespec64 *, utime);
+         } else {
+            PRE_REG_READ4(long, "futex",
+                          vki_u32 *, futex, int, op, int, val,
+                          struct timespec *, utime);
+         }
         if (VG_(tdict).track_pre_reg_read)
            PRA6("futex",int,val3);
      }
@@ -1679,9 +1718,15 @@ PRE(sys_futex)
      break;
    case VKI_FUTEX_WAIT:
    case VKI_FUTEX_LOCK_PI:
-      PRE_REG_READ4(long, "futex",
-                    vki_u32 *, futex, int, op, int, val,
-                    struct timespec *, utime);
+      if (is_time64) {
+         PRE_REG_READ4(long, "futex_time64",
+                       vki_u32 *, futex, int, op, int, val,
+                       struct timespec64 *, utime);
+      } else {
+         PRE_REG_READ4(long, "futex",
+                       vki_u32 *, futex, int, op, int, val,
+                       struct timespec *, utime);
+      }
      break;
    case VKI_FUTEX_WAKE:
    case VKI_FUTEX_FD:
@@ -1703,8 +1748,14 @@ PRE(sys_futex)
    case VKI_FUTEX_WAIT_BITSET:
    case VKI_FUTEX_WAIT_REQUEUE_PI:
       PRE_MEM_READ( "futex(futex)", ARG1, sizeof(Int) );
-      if (ARG4 != 0)
-         PRE_MEM_READ( "futex(timeout)", ARG4, sizeof(struct vki_timespec) );
+      if (ARG4 != 0) {
+         if (is_time64) {
+            pre_read_timespec64 (tid, "futex_time64(timeout)", ARG4);
+         } else {
+            PRE_MEM_READ( "futex(timeout)", ARG4,
+                          sizeof(struct vki_timespec) );
+         }
+      }
       break;
 
    case VKI_FUTEX_REQUEUE:
@@ -1728,7 +1779,9 @@ PRE(sys_futex)
       break;
    }
 }
-POST(sys_futex)
+
+static void futex_post_helper ( ThreadId tid, SyscallArgs* arrghs,
+                                SyscallStatus* status )
 {
    vg_assert(SUCCESS);
    POST_MEM_WRITE( ARG1, sizeof(int) );
@@ -1743,6 +1796,30 @@ POST(sys_futex)
    }
 }
 
+PRE(sys_futex)
+{
+   PRINT("sys_futex ( %#" FMT_REGWORD "x, %ld, %ld, %#" FMT_REGWORD
+         "x, %#" FMT_REGWORD "x )", ARG1, SARG2, SARG3, ARG4, ARG5);
+   futex_pre_helper (tid, layout, arrghs, status, flags, False);
+}
+
+POST(sys_futex)
+{
+   futex_post_helper (tid, arrghs, status);
+}
+
+PRE(sys_futex_time64)
+{
+   PRINT("sys_futex_time64 ( %#" FMT_REGWORD "x, %ld, %ld, %#" FMT_REGWORD
+         "x, %#" FMT_REGWORD "x )", ARG1, SARG2, SARG3, ARG4, ARG5);
+   futex_pre_helper (tid, layout, arrghs, status, flags, True);
+}
+
+POST(sys_futex_time64)
+{
+   futex_post_helper (tid, arrghs, status);
+}
+
 PRE(sys_set_robust_list)
 {
    PRINT("sys_set_robust_list ( %#" FMT_REGWORD "x, %"
@@ -1785,16 +1862,22 @@ struct pselect_adjusted_sigset {
    vki_sigset_t adjusted_ss;
 };
 
-PRE(sys_pselect6)
+static void pselect6_pre_helper ( ThreadId tid, SyscallArgLayout* layout,
+                                  SyscallArgs* arrghs, SyscallStatus* status,
+                                  UWord* flags, Bool is_time64 )
 {
    *flags |= SfMayBlock | SfPostOnFail;
-   PRINT("sys_pselect6 ( %ld, %#" FMT_REGWORD "x, %#" FMT_REGWORD "x, %#"
-         FMT_REGWORD "x, %#" FMT_REGWORD "x, %#" FMT_REGWORD "x )",
-         SARG1, ARG2, ARG3, ARG4, ARG5, ARG6);
-   PRE_REG_READ6(long, "pselect6",
-                 int, n, vki_fd_set *, readfds, vki_fd_set *, writefds,
-                 vki_fd_set *, exceptfds, struct vki_timeval *, timeout,
-                 void *, sig);
+   if (is_time64) {
+      PRE_REG_READ6(long, "pselect6_time64",
+                    int, n, vki_fd_set *, readfds, vki_fd_set *, writefds,
+                    vki_fd_set *, exceptfds, struct vki_timespec64 *, timeout,
+                    void *, sig);
+   } else {
+      PRE_REG_READ6(long, "pselect6",
+                    int, n, vki_fd_set *, readfds, vki_fd_set *, writefds,
+                    vki_fd_set *, exceptfds, struct vki_timespec *, timeout,
+                    void *, sig);
+   }
    // XXX: this possibly understates how much memory is read.
    if (ARG2 != 0)
       PRE_MEM_READ( "pselect6(readfds)",
@@ -1805,8 +1888,14 @@ PRE(sys_pselect6)
    if (ARG4 != 0)
      PRE_MEM_READ( "pselect6(exceptfds)",
                    ARG4, ARG1/8 /* __FD_SETSIZE/8 */ );
-   if (ARG5 != 0)
-      PRE_MEM_READ( "pselect6(timeout)", ARG5, sizeof(struct vki_timeval) );
+   if (ARG5 != 0) {
+      if (is_time64) {
+         pre_read_timespec64(tid, "pselect6_time64(timeout)", ARG5);
+      } else {
+         PRE_MEM_READ( "pselect6(timeout)", ARG5,
+                       sizeof(struct vki_timespec) );
+      }
+   }
    if (ARG6 != 0) {
       const struct pselect_sized_sigset *pss =
          (struct pselect_sized_sigset *)(Addr)ARG6;
@@ -1834,6 +1923,15 @@
         }
      }
    }
 }
+
+PRE(sys_pselect6)
+{
+   PRINT("sys_pselect6 ( %ld, %#" FMT_REGWORD "x, %#" FMT_REGWORD "x, %#"
+         FMT_REGWORD "x, %#" FMT_REGWORD "x, %#" FMT_REGWORD "x )",
+         SARG1, ARG2, ARG3, ARG4, ARG5, ARG6);
+   pselect6_pre_helper (tid, layout, arrghs, status, flags, False);
+}
+
 POST(sys_pselect6)
 {
    if (ARG6 != 0 && ARG6 != 1) {
@@ -1841,7 +1939,24 @@ POST(sys_pselect6)
    }
 }
 
-PRE(sys_ppoll)
+PRE(sys_pselect6_time64)
+{
+   PRINT("sys_pselect6_time64 ( %ld, %#" FMT_REGWORD "x, %#" FMT_REGWORD "x, %#"
+         FMT_REGWORD "x, %#" FMT_REGWORD "x, %#" FMT_REGWORD "x )",
+         SARG1, ARG2, ARG3, ARG4, ARG5, ARG6);
+   pselect6_pre_helper (tid, layout, arrghs, status, flags, True);
+}
+
+POST(sys_pselect6_time64)
+{
+   if (ARG6 != 0 && ARG6 != 1) {
+      VG_(free)((struct pselect_adjusted_sigset *)(Addr)ARG6);
+   }
+}
+
+static void ppoll_pre_helper ( ThreadId tid, SyscallArgLayout* layout,
+                               SyscallArgs* arrghs, SyscallStatus* status,
+                               UWord* flags, Bool is_time64 )
 {
    UInt i;
    struct vki_pollfd* ufds = (struct vki_pollfd *)(Addr)ARG1;
@@ -1849,10 +1964,17 @@ PRE(sys_ppoll)
    PRINT("sys_ppoll ( %#" FMT_REGWORD "x, %" FMT_REGWORD "u, %#" FMT_REGWORD
          "x, %#" FMT_REGWORD "x, %" FMT_REGWORD "u )\n",
          ARG1, ARG2, ARG3, ARG4, ARG5);
-   PRE_REG_READ5(long, "ppoll",
-                 struct vki_pollfd *, ufds, unsigned int, nfds,
-                 struct vki_timespec *, tsp, vki_sigset_t *, sigmask,
-                 vki_size_t, sigsetsize);
+   if (is_time64) {
+      PRE_REG_READ5(long, "ppoll_time64",
+                    struct vki_pollfd *, ufds, unsigned int, nfds,
+                    struct vki_timespec64 *, tsp, vki_sigset_t *, sigmask,
+                    vki_size_t, sigsetsize);
+   } else {
+      PRE_REG_READ5(long, "ppoll",
+                    struct vki_pollfd *, ufds, unsigned int, nfds,
+                    struct vki_timespec *, tsp, vki_sigset_t *, sigmask,
+                    vki_size_t, sigsetsize);
+   }
 
    for (i = 0; i < ARG2; i++) {
       PRE_MEM_READ( "ppoll(ufds.fd)",
@@ -1863,8 +1985,14 @@
                     (Addr)(&ufds[i].revents), sizeof(ufds[i].revents) );
    }
 
-   if (ARG3)
-      PRE_MEM_READ( "ppoll(tsp)", ARG3, sizeof(struct vki_timespec) );
+   if (ARG3) {
+      if (is_time64) {
+         pre_read_timespec64(tid, "ppoll_time64(tsp)", ARG3);
+      } else {
+         PRE_MEM_READ( "ppoll(tsp)", ARG3,
+                       sizeof(struct vki_timespec) );
+      }
+   }
    if (ARG4 != 0 && sizeof(vki_sigset_t) == ARG5) {
       const vki_sigset_t *guest_sigmask = (vki_sigset_t *)(Addr)ARG4;
       PRE_MEM_READ( "ppoll(sigmask)", ARG4, ARG5);
@@ -1880,7 +2008,8 @@
    }
 }
 
-POST(sys_ppoll)
+static void ppoll_post_helper ( ThreadId tid, SyscallArgs* arrghs,
+                                SyscallStatus* status )
 {
    vg_assert(SUCCESS || FAILURE);
    if (SUCCESS && (RES >= 0)) {
@@ -1894,6 +2023,32 @@
    }
 }
 
+PRE(sys_ppoll)
+{
+   PRINT("sys_ppoll ( %#" FMT_REGWORD "x, %" FMT_REGWORD "u, %#" FMT_REGWORD
+         "x, %#" FMT_REGWORD "x, %" FMT_REGWORD "u )\n",
+         ARG1, ARG2, ARG3, ARG4, ARG5);
+   ppoll_pre_helper (tid, layout, arrghs, status, flags, False);
+}
+
+POST(sys_ppoll)
+{
+   ppoll_post_helper (tid, arrghs, status);
+}
+
+PRE(sys_ppoll_time64)
+{
+   PRINT("sys_ppoll_time64 ( %#" FMT_REGWORD "x, %" FMT_REGWORD
+         "u, %#" FMT_REGWORD "x, %#" FMT_REGWORD "x, %" FMT_REGWORD "u )\n",
+         ARG1, ARG2, ARG3, ARG4, ARG5);
+   ppoll_pre_helper (tid, layout, arrghs, status, flags, False);
+}
+
+POST(sys_ppoll_time64)
+{
+   ppoll_post_helper (tid, arrghs, status);
+}
+
 /* ---------------------------------------------------------------------
    epoll_* wrappers
@@ -2682,6 +2837,25 @@ PRE(sys_mq_timedsend)
    }
 }
 
+PRE(sys_mq_timedsend_time64)
+{
+   *flags |= SfMayBlock;
+   PRINT("sys_mq_timedsend_time64 ( %ld, %#" FMT_REGWORD "x, %" FMT_REGWORD
+         "u, %" FMT_REGWORD "u, %#" FMT_REGWORD "x )",
+         SARG1,ARG2,ARG3,ARG4,ARG5);
+   PRE_REG_READ5(long, "mq_timedsend_time64",
+                 vki_mqd_t, mqdes, const char *, msg_ptr, vki_size_t, msg_len,
+                 unsigned int, msg_prio,
+                 const struct vki_timespec64 *, abs_timeout);
+   if (!ML_(fd_allowed)(ARG1, "mq_timedsend_time64", tid, False)) {
+      SET_STATUS_Failure( VKI_EBADF );
+   } else {
+      PRE_MEM_READ( "mq_timedsend_time64(msg_ptr)", ARG2, ARG3 );
+      if (ARG5 != 0)
+         pre_read_timespec64(tid, "mq_timedsend_time64(abs_timeout)", ARG5);
+   }
+}
+
 PRE(sys_mq_timedreceive)
 {
    *flags |= SfMayBlock;
@@ -2711,6 +2885,35 @@ POST(sys_mq_timedreceive)
      POST_MEM_WRITE( ARG4, sizeof(unsigned int) );
 }
 
+PRE(sys_mq_timedreceive_time64)
+{
+   *flags |= SfMayBlock;
+   PRINT("sys_mq_timedreceive_time64( %ld, %#" FMT_REGWORD "x, %"
+         FMT_REGWORD "u, %#" FMT_REGWORD "x, %#" FMT_REGWORD "x )",
+         SARG1,ARG2,ARG3,ARG4,ARG5);
+   PRE_REG_READ5(ssize_t, "mq_timedreceive_time64",
+                 vki_mqd_t, mqdes, char *, msg_ptr, vki_size_t, msg_len,
+                 unsigned int *, msg_prio,
+                 const struct vki_timespec64 *, abs_timeout);
+   if (!ML_(fd_allowed)(ARG1, "mq_timedreceive_time64", tid, False)) {
+      SET_STATUS_Failure( VKI_EBADF );
+   } else {
+      PRE_MEM_WRITE( "mq_timedreceive_time64(msg_ptr)", ARG2, ARG3 );
+      if (ARG4 != 0)
+         PRE_MEM_WRITE( "mq_timedreceive_time64(msg_prio)",
+                        ARG4, sizeof(unsigned int) );
+      if (ARG5 != 0)
+         pre_read_timespec64(tid, "mq_timedreceive_time64(abs_timeout)", ARG5);
+   }
+}
+
+POST(sys_mq_timedreceive_time64)
+{
+   POST_MEM_WRITE( ARG2, RES );
+   if (ARG4 != 0)
+      POST_MEM_WRITE( ARG4, sizeof(unsigned int) );
+}
+
 PRE(sys_mq_notify)
 {
    PRINT("sys_mq_notify( %ld, %#" FMT_REGWORD "x )", SARG1, ARG2 );
@@ -2761,6 +2964,14 @@ PRE(sys_clock_settime)
    PRE_MEM_READ( "clock_settime(tp)", ARG2, sizeof(struct vki_timespec) );
 }
 
+PRE(sys_clock_settime64)
+{
+   PRINT("sys_clock_settime64( %ld, %#" FMT_REGWORD "x )", SARG1, ARG2);
+   PRE_REG_READ2(long, "clock_settime64",
+                 vki_clockid_t, clk_id, const struct timespec64 *, tp);
+   pre_read_timespec64(tid, "clock_settime64(tp)", ARG2);
+}
+
 PRE(sys_clock_gettime)
 {
    PRINT("sys_clock_gettime( %ld, %#" FMT_REGWORD "x )" , SARG1, ARG2);
@@ -2773,6 +2984,19 @@ POST(sys_clock_gettime)
    POST_MEM_WRITE( ARG2, sizeof(struct vki_timespec) );
 }
 
+PRE(sys_clock_gettime64)
+{
+   PRINT("sys_clock_gettime64( %ld, %#" FMT_REGWORD "x )" , SARG1, ARG2);
+   PRE_REG_READ2(long, "clock_gettime64",
+                 vki_clockid_t, clk_id, struct vki_timespec64 *, tp);
+   PRE_MEM_WRITE ( "clock_gettime64(tp)", ARG2,
+                   sizeof(struct vki_timespec64) );
+}
+POST(sys_clock_gettime64)
+{
+   POST_MEM_WRITE( ARG2, sizeof(struct vki_timespec64) );
+}
+
 PRE(sys_clock_getres)
 {
    PRINT("sys_clock_getres( %ld, %#" FMT_REGWORD "x )" , SARG1, ARG2);
@@ -2789,6 +3013,23 @@ POST(sys_clock_getres)
    POST_MEM_WRITE( ARG2, sizeof(struct vki_timespec) );
 }
 
+PRE(sys_clock_getres_time64)
+{
+   PRINT("sys_clock_getres_time64( %ld, %#" FMT_REGWORD "x )" , SARG1, ARG2);
+   // Nb: we can't use "RES" as the param name because that's a macro
+   // defined above!
+   PRE_REG_READ2(long, "clock_getres_time64",
+                 vki_clockid_t, clk_id, struct vki_timespec64 *, res);
+   if (ARG2 != 0)
+      PRE_MEM_WRITE( "clock_getres_time64(res)", ARG2,
+                     sizeof(struct vki_timespec64) );
+}
+POST(sys_clock_getres_time64)
+{
+   if (ARG2 != 0)
+      POST_MEM_WRITE( ARG2, sizeof(struct vki_timespec64) );
+}
+
 PRE(sys_clock_nanosleep)
 {
    *flags |= SfMayBlock|SfPostOnFail;
@@ -2808,6 +3049,27 @@ POST(sys_clock_nanosleep)
      POST_MEM_WRITE( ARG4, sizeof(struct vki_timespec) );
 }
 
+PRE(sys_clock_nanosleep_time64)
+{
+   *flags |= SfMayBlock|SfPostOnFail;
+   PRINT("sys_clock_nanosleep_time64( %ld, %ld, %#" FMT_REGWORD "x, %#"
+         FMT_REGWORD "x )",
+         SARG1, SARG2, ARG3, ARG4);
+   PRE_REG_READ4(int32_t, "clock_nanosleep_time64",
+                 vki_clockid_t, clkid, int, flags,
+                 const struct vki_timespec64 *, rqtp,
+                 struct vki_timespec64 *, rmtp);
+   pre_read_timespec64(tid, "clock_nanosleep_time64(rqtp)", ARG3);
+   if (ARG4 != 0)
+      PRE_MEM_WRITE( "clock_nanosleep_time64(rmtp)", ARG4,
+                     sizeof(struct vki_timespec64) );
+}
+POST(sys_clock_nanosleep_time64)
+{
+   if (ARG4 != 0 && FAILURE && ERR == VKI_EINTR)
+      POST_MEM_WRITE( ARG4, sizeof(struct vki_timespec64) );
+}
+
 /* ---------------------------------------------------------------------
    timer_* wrappers
    ------------------------------------------------------------------ */
@@ -2859,6 +3121,26 @@ POST(sys_timer_settime)
      POST_MEM_WRITE( ARG4, sizeof(struct vki_itimerspec) );
 }
 
+PRE(sys_timer_settime64)
+{
+   PRINT("sys_timer_settime64( %ld, %ld, %#" FMT_REGWORD "x, %#"
+         FMT_REGWORD "x )", SARG1,SARG2,ARG3,ARG4);
+   PRE_REG_READ4(long, "timer_settime64",
+                 vki_timer_t, timerid, int, flags,
+                 const struct vki_itimerspec64 *, value,
+                 struct vki_itimerspec64 *, ovalue);
+   PRE_MEM_READ( "timer_settime64(value)", ARG3,
+                 sizeof(struct vki_itimerspec64) );
+   if (ARG4 != 0)
+      PRE_MEM_WRITE( "timer_settime64(ovalue)", ARG4,
+                     sizeof(struct vki_itimerspec64) );
+}
+POST(sys_timer_settime64)
+{
+   if (ARG4 != 0)
+      POST_MEM_WRITE( ARG4, sizeof(struct vki_itimerspec64) );
+}
+
 PRE(sys_timer_gettime)
 {
    PRINT("sys_timer_gettime( %ld, %#" FMT_REGWORD "x )", SARG1, ARG2);
@@ -2872,6 +3154,19 @@ POST(sys_timer_gettime)
    POST_MEM_WRITE( ARG2, sizeof(struct vki_itimerspec) );
 }
 
+PRE(sys_timer_gettime64)
+{
+   PRINT("sys_timer_gettime64( %ld, %#" FMT_REGWORD "x )", SARG1, ARG2);
+   PRE_REG_READ2(long, "timer_gettime64",
+                 vki_timer_t, timerid, struct vki_itimerspec64 *, value);
+   PRE_MEM_WRITE( "timer_gettime64(value)", ARG2,
+                  sizeof(struct vki_itimerspec64));
+}
+POST(sys_timer_gettime64)
+{
+   POST_MEM_WRITE( ARG2, sizeof(struct vki_itimerspec64) );
+}
+
 PRE(sys_timer_getoverrun)
 {
    PRINT("sys_timer_getoverrun( %#" FMT_REGWORD "x )", ARG1);
@@ -2978,6 +3273,24 @@ POST(sys_timerfd_gettime)
      POST_MEM_WRITE(ARG2, sizeof(struct vki_itimerspec));
 }
 
+PRE(sys_timerfd_gettime64)
+{
+   PRINT("sys_timerfd_gettime64 ( %ld, %#" FMT_REGWORD "x )", SARG1, ARG2);
+   PRE_REG_READ2(long, "timerfd_gettime64",
+                 int, ufd,
+                 struct vki_itimerspec64*, otmr);
+   if (!ML_(fd_allowed)(ARG1, "timerfd_gettime64", tid, False))
+      SET_STATUS_Failure(VKI_EBADF);
+   else
+      PRE_MEM_WRITE("timerfd_gettime64(result)",
+                    ARG2, sizeof(struct vki_itimerspec64));
+}
+POST(sys_timerfd_gettime64)
+{
+   if (RES == 0)
+      POST_MEM_WRITE(ARG2, sizeof(struct vki_itimerspec64));
+}
+
 PRE(sys_timerfd_settime)
 {
    PRINT("sys_timerfd_settime ( %ld, %ld, %#" FMT_REGWORD "x, %#"
@@ -3006,6 +3319,33 @@ POST(sys_timerfd_settime)
      POST_MEM_WRITE(ARG4, sizeof(struct vki_itimerspec));
 }
 
+PRE(sys_timerfd_settime64)
+{
+   PRINT("sys_timerfd_settime64 ( %ld, %ld, %#" FMT_REGWORD "x, %#"
+         FMT_REGWORD "x )", SARG1, SARG2, ARG3, ARG4);
+   PRE_REG_READ4(long, "timerfd_settime64",
+                 int, ufd,
+                 int, flags,
+                 const struct vki_itimerspec64*, utmr,
+                 struct vki_itimerspec64*, otmr);
+   if (!ML_(fd_allowed)(ARG1, "timerfd_settime64", tid, False))
+      SET_STATUS_Failure(VKI_EBADF);
+   else
+   {
+      pre_read_itimerspec64 (tid, "timerfd_settime64(result)", ARG3);
+      if (ARG4)
+      {
+         PRE_MEM_WRITE("timerfd_settime64(result)",
+                       ARG4, sizeof(struct vki_itimerspec64));
+      }
+   }
+}
+POST(sys_timerfd_settime64)
+{
+   if (RES == 0 && ARG4 != 0)
+      POST_MEM_WRITE(ARG4, sizeof(struct vki_itimerspec64));
+}
+
 /* ---------------------------------------------------------------------
    capabilities wrappers
    ------------------------------------------------------------------ */
@@ -3388,6 +3728,22 @@ POST(sys_sched_rr_get_interval)
    POST_MEM_WRITE(ARG2, sizeof(struct vki_timespec));
 }
 
+PRE(sys_sched_rr_get_interval_time64)
+{
+   PRINT("sys_sched_rr_get_interval_time64 ( %ld, %#" FMT_REGWORD "x )",
+         SARG1, ARG2);
+   PRE_REG_READ2(int, "sched_rr_get_interval_time64",
+                 vki_pid_t, pid,
+                 struct vki_timespec *, tp);
+   PRE_MEM_WRITE("sched_rr_get_interval_time64(timespec)",
+                 ARG2, sizeof(struct vki_timespec64));
+}
+
+POST(sys_sched_rr_get_interval_time64)
+{
+   POST_MEM_WRITE(ARG2, sizeof(struct vki_timespec64));
+}
+
 PRE(sys_sched_setaffinity)
 {
    PRINT("sched_setaffinity ( %ld, %" FMT_REGWORD "u, %#" FMT_REGWORD "x )",
@@ -4115,6 +4471,30 @@ POST(sys_rt_sigtimedwait)
      POST_MEM_WRITE( ARG2, sizeof(vki_siginfo_t) );
 }
 
+PRE(sys_rt_sigtimedwait_time64)
+{
+   *flags |= SfMayBlock;
+   PRINT("sys_rt_sigtimedwait_time64 ( %#" FMT_REGWORD "x, %#"
+         FMT_REGWORD "x, %#" FMT_REGWORD "x, %" FMT_REGWORD "u )",
+         ARG1, ARG2, ARG3, ARG4);
+   PRE_REG_READ4(long, "rt_sigtimedwait_time64",
+                 const vki_sigset_t *, set, vki_siginfo_t *, info,
+                 const struct vki_timespec64 *, timeout,
+                 vki_size_t, sigsetsize);
+   if (ARG1 != 0)
+      PRE_MEM_READ( "rt_sigtimedwait_time64(set)", ARG1, sizeof(vki_sigset_t) );
+   if (ARG2 != 0)
+      PRE_MEM_WRITE( "rt_sigtimedwait_time64(info)", ARG2,
+                     sizeof(vki_siginfo_t) );
+   if (ARG3 != 0)
+      pre_read_timespec64(tid, "rt_sigtimedwait_time64(timeout)", ARG3);
+}
+POST(sys_rt_sigtimedwait_time64)
+{
+   if (ARG2 != 0)
+      POST_MEM_WRITE( ARG2, sizeof(vki_siginfo_t) );
+}
+
 PRE(sys_rt_sigqueueinfo)
 {
    PRINT("sys_rt_sigqueueinfo(%ld, %ld, %#" FMT_REGWORD "x)",
@@ -4563,6 +4943,20 @@ PRE(sys_semtimedop)
    ML_(generic_PRE_sys_semtimedop)(tid, ARG1,ARG2,ARG3,ARG4);
 }
 
+PRE(sys_semtimedop_time64)
+{
+   *flags |= SfMayBlock;
+   PRINT("sys_semtimedop_time64 ( %ld, %#" FMT_REGWORD "x, %"
+         FMT_REGWORD "u, %#" FMT_REGWORD "x )", SARG1, ARG2, ARG3, ARG4);
+   PRE_REG_READ4(long, "semtimedop_time64",
+                 int, semid, struct sembuf *, sops, unsigned, nsoops,
+                 struct vki_timespec64 *, timeout);
+   PRE_MEM_READ( "semtimedop_time64(sops)", ARG1,
+                 ARG2 * sizeof(struct vki_sembuf) );
+   if (ARG3 != 0)
+      pre_read_timespec64(tid, "semtimedop_time64(timeout)", ARG3);
+}
+
 PRE(sys_msgget)
 {
    PRINT("sys_msgget ( %ld, %ld )", SARG1, SARG2);
@@ -5372,6 +5766,36 @@ PRE(sys_utimensat)
    }
 }
 
+PRE(sys_utimensat_time64)
+{
+   PRINT("sys_utimensat_time64 ( %ld, %#" FMT_REGWORD "x(%s), %#"
+         FMT_REGWORD "x, 0x%" FMT_REGWORD "x )",
+         SARG1, ARG2, (HChar*)(Addr)ARG2, ARG3, ARG4);
+   PRE_REG_READ4(long, "utimensat_time64",
+                 int, dfd, char *, filename, struct timespec *, utimes, int, flags);
+   if (ARG2 != 0)
+      PRE_MEM_RASCIIZ( "utimensat_time64(filename)", ARG2 );
+   if (ARG3 != 0) {
+      /* If timespec.tv_nsec has the special value UTIME_NOW or UTIME_OMIT
+         then the tv_sec field is ignored. */
+      struct vki_timespec64 *times = (struct vki_timespec64 *)(Addr)ARG3;
+      PRE_MEM_READ( "utimensat_time64(times[0].tv_nsec)",
+                    (Addr)&times[0].tv_nsec, sizeof(times[0].tv_nsec));
+      PRE_MEM_READ( "utimensat_time64(times[1].tv_nsec)",
+                    (Addr)&times[1].tv_nsec, sizeof(times[1].tv_nsec));
+      if (ML_(safe_to_deref)(times, 2 * sizeof(struct vki_timespec64))) {
+         if (times[0].tv_nsec != VKI_UTIME_NOW
+             && times[0].tv_nsec != VKI_UTIME_OMIT)
+            PRE_MEM_READ( "utimensat_time64(times[0].tv_sec)",
+                          (Addr)&times[0].tv_sec, sizeof(times[0].tv_sec));
+         if (times[1].tv_nsec != VKI_UTIME_NOW
+             && times[1].tv_nsec != VKI_UTIME_OMIT)
+            PRE_MEM_READ( "utimensat_time64(times[1].tv_sec)",
+                          (Addr)&times[1].tv_sec, sizeof(times[1].tv_sec));
+      }
+   }
+}
+
 #if !defined(VGP_nanomips_linux)
 PRE(sys_newfstatat)
 {
@@ -5866,6 +6290,34 @@ POST(sys_recvmmsg)
    ML_(linux_POST_sys_recvmmsg) (tid, RES, ARG1,ARG2,ARG3,ARG4,ARG5);
 }
 
+PRE(sys_recvmmsg_time64)
+{
+   *flags |= SfMayBlock;
+   PRINT("sys_recvmmsg_time64 ( %ld, %#" FMT_REGWORD "x, %ld, %ld, %#"
+         FMT_REGWORD "x )",
+         SARG1, ARG2, SARG3, SARG4, ARG5);
+   PRE_REG_READ5(long, "recvmmsg_time64",
+                 int, s, struct mmsghdr *, mmsg, int, vlen,
+                 int, flags, struct vki_timespec64 *, timeout);
+   struct vki_mmsghdr *mmsg = (struct vki_mmsghdr *)ARG2;
+   HChar name[40];     // large enough
+   UInt i;
+   for (i = 0; i < ARG3; i++) {
+      VG_(sprintf)(name, "mmsg[%u].msg_hdr", i);
+      ML_(generic_PRE_sys_recvmsg)(tid, name, &mmsg[i].msg_hdr);
+      VG_(sprintf)(name, "recvmmsg(mmsg[%u].msg_len)", i);
+      PRE_MEM_WRITE( name, (Addr)&mmsg[i].msg_len, sizeof(mmsg[i].msg_len) );
+   }
+   if (ARG5)
+      pre_read_timespec64(tid, "recvmmsg(timeout)", ARG5);
+}
+
+POST(sys_recvmmsg_time64)
+{
+   /* ARG5 isn't actually used, so just use the generic POST. */
+   ML_(linux_POST_sys_recvmmsg) (tid, RES, ARG1,ARG2,ARG3,ARG4,ARG5);
+}
+
 /* ---------------------------------------------------------------------
    key retention service wrappers
    ------------------------------------------------------------------ */
@@ -12679,3 +13131,4 @@ POST(sys_io_uring_register)
 /*--------------------------------------------------------------------*/
 /*--- end ---*/
 /*--------------------------------------------------------------------*/
+

diff --git a/coregrind/m_syswrap/syswrap-mips32-linux.c b/coregrind/m_syswrap/syswrap-mips32-linux.c
index bac555a..477f599 100644
--- a/coregrind/m_syswrap/syswrap-mips32-linux.c
+++ b/coregrind/m_syswrap/syswrap-mips32-linux.c
@@ -1103,7 +1103,29 @@ static SyscallTableEntry syscall_main_table[] = {
    LINXY (__NR_preadv2, sys_preadv2), // 361
    LINX_ (__NR_pwritev2, sys_pwritev2), // 362
 //..
-   LINXY(__NR_statx, sys_statx) // 366
+   LINXY(__NR_statx, sys_statx), // 366
+
+   LINXY(__NR_clock_gettime64, sys_clock_gettime64), // 403
+   LINX_(__NR_clock_settime64, sys_clock_settime64), // 404
+
+   LINXY(__NR_clock_getres_time64, sys_clock_getres_time64), // 406
+   LINXY(__NR_clock_nanosleep_time64, sys_clock_nanosleep_time64), // 407
+   LINXY(__NR_timer_gettime64, sys_timer_gettime64), // 408
+   LINXY(__NR_timer_settime64, sys_timer_settime64), // 409
+   LINXY(__NR_timerfd_gettime64, sys_timerfd_gettime64), // 410
+   LINXY(__NR_timerfd_settime64, sys_timerfd_settime64), // 411
+   LINX_(__NR_utimensat_time64, sys_utimensat_time64), // 412
+   LINXY(__NR_pselect6_time64, sys_pselect6_time64), // 413
+   LINXY(__NR_ppoll_time64, sys_ppoll_time64), // 414
+
+   LINXY(__NR_recvmmsg_time64, sys_recvmmsg_time64), // 417
+   LINX_(__NR_mq_timedsend_time64, sys_mq_timedsend_time64), // 418
+   LINXY(__NR_mq_timedreceive_time64, sys_mq_timedreceive_time64), // 419
+   LINX_(__NR_semtimedop_time64, sys_semtimedop_time64), // 420
+   LINXY(__NR_rt_sigtimedwait_time64, sys_rt_sigtimedwait_time64), // 421
+   LINXY(__NR_futex_time64, sys_futex_time64),
// 422 + LINXY(__NR_sched_rr_get_interval_time64, + sys_sched_rr_get_interval_time64), // 423 }; SyscallTableEntry* ML_(get_linux_syscall_entry) (UInt sysno) diff --git a/coregrind/m_syswrap/syswrap-ppc32-linux.c b/coregrind/m_syswrap/syswrap-ppc32-linux.c index 484aac2..8f8eec3 100644 --- a/coregrind/m_syswrap/syswrap-ppc32-linux.c +++ b/coregrind/m_syswrap/syswrap-ppc32-linux.c @@ -1022,6 +1022,28 @@ static SyscallTableEntry syscall_table[] = { LINX_(__NR_copy_file_range, sys_copy_file_range), // 379 LINXY(__NR_statx, sys_statx), // 383 + + LINXY(__NR_clock_gettime64, sys_clock_gettime64), // 403 + LINX_(__NR_clock_settime64, sys_clock_settime64), // 404 + + LINXY(__NR_clock_getres_time64, sys_clock_getres_time64), // 406 + LINXY(__NR_clock_nanosleep_time64, sys_clock_nanosleep_time64), // 407 + LINXY(__NR_timer_gettime64, sys_timer_gettime64), // 408 + LINXY(__NR_timer_settime64, sys_timer_settime64), // 409 + LINXY(__NR_timerfd_gettime64, sys_timerfd_gettime64),// 410 + LINXY(__NR_timerfd_settime64, sys_timerfd_settime64),// 411 + LINX_(__NR_utimensat_time64, sys_utimensat_time64), // 412 + LINXY(__NR_pselect6_time64, sys_pselect6_time64), // 413 + LINXY(__NR_ppoll_time64, sys_ppoll_time64), // 414 + + LINXY(__NR_recvmmsg_time64, sys_recvmmsg_time64), // 417 + LINX_(__NR_mq_timedsend_time64, sys_mq_timedsend_time64), // 418 + LINXY(__NR_mq_timedreceive_time64, sys_mq_timedreceive_time64), // 419 + LINX_(__NR_semtimedop_time64, sys_semtimedop_time64),// 420 + LINXY(__NR_rt_sigtimedwait_time64, sys_rt_sigtimedwait_time64), // 421 + LINXY(__NR_futex_time64, sys_futex_time64), // 422 + LINXY(__NR_sched_rr_get_interval_time64, + sys_sched_rr_get_interval_time64), // 423 }; SyscallTableEntry* ML_(get_linux_syscall_entry) ( UInt sysno ) diff --git a/coregrind/m_syswrap/syswrap-x86-linux.c b/coregrind/m_syswrap/syswrap-x86-linux.c index 33d1213..68d24e1 100644 --- a/coregrind/m_syswrap/syswrap-x86-linux.c +++ b/coregrind/m_syswrap/syswrap-x86-linux.c @@ -1618,6 
+1618,28 @@ static SyscallTableEntry syscall_table[] = { /* Explicitly not supported on i386 yet. */ GENX_(__NR_arch_prctl, sys_ni_syscall), // 384 + LINXY(__NR_clock_gettime64, sys_clock_gettime64), // 403 + LINX_(__NR_clock_settime64, sys_clock_settime64), // 404 + + LINXY(__NR_clock_getres_time64, sys_clock_getres_time64), // 406 + LINXY(__NR_clock_nanosleep_time64, sys_clock_nanosleep_time64), // 407 + LINXY(__NR_timer_gettime64, sys_timer_gettime64), // 408 + LINXY(__NR_timer_settime64, sys_timer_settime64), // 409 + LINXY(__NR_timerfd_gettime64, sys_timerfd_gettime64),// 410 + LINXY(__NR_timerfd_settime64, sys_timerfd_settime64),// 411 + LINX_(__NR_utimensat_time64, sys_utimensat_time64), // 412 + LINXY(__NR_pselect6_time64, sys_pselect6_time64), // 413 + LINXY(__NR_ppoll_time64, sys_ppoll_time64), // 414 + + LINXY(__NR_recvmmsg_time64, sys_recvmmsg_time64), // 417 + LINX_(__NR_mq_timedsend_time64, sys_mq_timedsend_time64), // 418 + LINXY(__NR_mq_timedreceive_time64, sys_mq_timedreceive_time64), // 419 + LINX_(__NR_semtimedop_time64, sys_semtimedop_time64),// 420 + LINXY(__NR_rt_sigtimedwait_time64, sys_rt_sigtimedwait_time64), // 421 + LINXY(__NR_futex_time64, sys_futex_time64), // 422 + LINXY(__NR_sched_rr_get_interval_time64, + sys_sched_rr_get_interval_time64), // 423 + LINXY(__NR_io_uring_setup, sys_io_uring_setup), // 425 LINXY(__NR_io_uring_enter, sys_io_uring_enter), // 426 LINXY(__NR_io_uring_register, sys_io_uring_register),// 427 diff --git a/include/vki/vki-linux.h b/include/vki/vki-linux.h index f27161e..6f1100f 100644 --- a/include/vki/vki-linux.h +++ b/include/vki/vki-linux.h @@ -5313,6 +5313,32 @@ struct vki_ptp_pin_desc { #define VKI_PTP_SYS_OFFSET_EXTENDED \ _VKI_IOWR('=', 9, struct vki_ptp_sys_offset_extended) +/* Needed for 64bit time_t on 32bit arches. */ + +typedef vki_int64_t vki_time64_t; + +/* Note that this is the padding used by glibc, the kernel uses + a 64-bit signed int, but is ignoring the upper 32 bits of the + tv_nsec field. 
It does always write the full struct though. + So this is only needed for PRE_MEM_READ. See pre_read_timespec64. */ +struct vki_timespec64 { + vki_time64_t tv_sec; +#if defined(VKI_BIG_ENDIAN) + vki_int32_t tv_pad; + vki_int32_t tv_nsec; +#elif defined(VKI_LITTLE_ENDIAN) + vki_int32_t tv_nsec; + vki_int32_t tv_pad; +#else +#error edit for your odd byteorder. +#endif +}; + +struct vki_itimerspec64 { + struct vki_timespec it_interval; + struct vki_timespec it_value; +}; + /*--------------------------------------------------------------------*/ /*--- end ---*/ /*--------------------------------------------------------------------*/ |
From: Mark W. <ma...@so...> - 2020-03-04 13:25:47
https://sourceware.org/git/gitweb.cgi?p=valgrind.git;h=28371e73dbb9eb6c003455cfb5d5663ca1d465fc

commit 28371e73dbb9eb6c003455cfb5d5663ca1d465fc
Author: Mark Wielaard <ma...@kl...>
Date:   Wed Mar 4 14:23:37 2020 +0100

    Add suppressions for glibc DTV leaks
    
    The glibc DTV (Dynamic Thread Vector) for the main thread is never
    released, not even through __libc_freeres. This causes it to always
    show up as a reachable block when used, and sometimes, when it is
    extended and then reduced, as a possible leak when memcheck cannot
    find a pointer to the start of the block.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1796433
    https://bugzilla.redhat.com/show_bug.cgi?id=1796559
    https://bugs.kde.org/show_bug.cgi?id=417578

Diff:
---
 NEWS              |  1 +
 glibc-2.X.supp.in | 34 ++++++++++++++++++++++++++++++++++
 2 files changed, 35 insertions(+)

diff --git a/NEWS b/NEWS
index 84e0840..afe8872 100644
--- a/NEWS
+++ b/NEWS
@@ -126,6 +126,7 @@
 n-i-bz Fix non-glibc build of test suite with s390x_features
 417427 commit to fix vki_siginfo_t definition created numerous
        regression errors on ppc64
 417452 s390_insn_store_emit: dst->tag for HRcVec128
+417578 Add suppressions for glibc DTV leaks
 417906 clone with CLONE_VFORK and no CLONE_VM fails

diff --git a/glibc-2.X.supp.in b/glibc-2.X.supp.in
index 126e8b3..eeefa39 100644
--- a/glibc-2.X.supp.in
+++ b/glibc-2.X.supp.in
@@ -248,3 +248,37 @@
    Memcheck:Cond
    fun:_dl_runtime_resolve_avx_slow
 }
+
+# The main thread dynamic thread vector, DTV, which contains pointers
+# to thread local variables, isn't freed. There are a couple of call
+# patterns that can cause it to be extended.
+{
+   dtv-addr-tail
+   Memcheck:Leak
+   match-leak-kinds: possible,reachable
+   fun:malloc
+   fun:tls_get_addr_tail*
+   fun:__tls_get_addr
+}
+
+{
+   dtv-addr-resize
+   Memcheck:Leak
+   match-leak-kinds: possible,reachable
+   fun:malloc
+   fun:_dl_resize_dtv
+   fun:_dl_update_slotinfo
+   fun:update_get_addr
+   fun:__tls_get_addr
+}
+
+{
+   dtv-addr-init
+   Memcheck:Leak
+   match-leak-kinds: possible,reachable
+   fun:malloc
+   fun:allocate_dtv_entry
+   fun:allocate_and_init
+   fun:tls_get_addr_tail*
+   fun:__tls_get_addr
+}