From: Christian B. <bor...@de...> - 2011-06-05 20:40:56
Nightly build on fedora390 (Fedora 13/14/15 mix with gcc 3.5.3 on z196 (s390x))

Started at 2011-06-05 22:10:01 CEST
Ended at 2011-06-05 22:40:03 CEST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 476 tests, 6 stderr failures, 0 stdout failures, 0 stderrB failures, 0 stdoutB failures, 0 post failures ==

helgrind/tests/tc06_two_races_xml (stderr)
helgrind/tests/tc20_verifywrap (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
drd/tests/tc04_free_lock (stderr)
drd/tests/tc09_bad_unlock (stderr)
drd/tests/tc23_bogus_condwait (stderr)

From: Christian B. <bor...@de...> - 2011-06-05 20:36:37
Nightly build on sless390 (SUSE Linux Enterprise Server 11 SP1, gcc 4.3.4 on z196 (s390x))

Started at 2011-06-05 22:10:01 CEST
Ended at 2011-06-05 22:36:26 CEST

Results unchanged from 24 hours ago

Checking out valgrind source tree ... done
Configuring valgrind ... done
Building valgrind ... done
Running regression tests ... failed

Regression test results follow

== 476 tests, 6 stderr failures, 0 stdout failures, 3 stderrB failures, 0 stdoutB failures, 0 post failures ==

gdbserver_tests/mcbreak (stderrB)
gdbserver_tests/mcclean_after_fork (stderrB)
gdbserver_tests/mssnapshot (stderrB)
none/tests/faultstatus (stderr)
helgrind/tests/tc06_two_races_xml (stderr)
helgrind/tests/tc23_bogus_condwait (stderr)
drd/tests/tc04_free_lock (stderr)
drd/tests/tc09_bad_unlock (stderr)
drd/tests/tc23_bogus_condwait (stderr)

From: Marcin S. <mar...@gm...> - 2011-06-05 18:45:00
On Sun, Jun 05, 2011 at 07:29:55PM +0100, Tom Hughes wrote:
> On 05/06/11 19:24, Marcin Slusarz wrote:
> > MMapTrace / NVTrace
> >
> > This is a plugin which allows tracing application accesses to mmaped memory.
> > It is a very useful tool in figuring out how closed source drivers program
> > the hardware. It was created by Dave Airlie for tracing ATI drivers and then
> > extended/fixed by others (including me). Now it is mostly used by Nouveau
> > developers (http://nouveau.freedesktop.org).
> >
> > Note that this patch contain nvidia specific code for ioctl tracing, but
> > it's completely optional.
> >
> > Please apply.
>
> As with the other patch, please open a bug and attach the patch.

Done. https://bugs.kde.org/show_bug.cgi?id=274999

From: Marcin S. <mar...@gm...> - 2011-06-05 18:44:32
On Sun, Jun 05, 2011 at 07:28:37PM +0100, Tom Hughes wrote:
> On 05/06/11 19:09, Marcin Slusarz wrote:
> > Please apply.
>
> Please open a ticket in the bug tracker and attach your patch, or it
> will likely get missed.

Done. https://bugs.kde.org/show_bug.cgi?id=274998

From: Tom H. <to...@co...> - 2011-06-05 18:30:06
On 05/06/11 19:24, Marcin Slusarz wrote:
> MMapTrace / NVTrace
>
> This is a plugin which allows tracing application accesses to mmaped memory.
> It is a very useful tool in figuring out how closed source drivers program
> the hardware. It was created by Dave Airlie for tracing ATI drivers and then
> extended/fixed by others (including me). Now it is mostly used by Nouveau
> developers (http://nouveau.freedesktop.org).
>
> Note that this patch contain nvidia specific code for ioctl tracing, but
> it's completely optional.
>
> Please apply.

As with the other patch, please open a bug and attach the patch.

Thanks,

Tom

-- 
Tom Hughes (to...@co...)
http://compton.nu/

From: Tom H. <to...@co...> - 2011-06-05 18:28:42
On 05/06/11 19:09, Marcin Slusarz wrote:
> Please apply.

Please open a ticket in the bug tracker and attach your patch, or it
will likely get missed.

Tom

-- 
Tom Hughes (to...@co...)
http://compton.nu/

From: Marcin S. <mar...@gm...> - 2011-06-05 18:24:47
|
MMapTrace / NVTrace This is a plugin which allows tracing application accesses to mmaped memory. It is a very useful tool in figuring out how closed source drivers program the hardware. It was created by Dave Airlie for tracing ATI drivers and then extended/fixed by others (including me). Now it is mostly used by Nouveau developers (http://nouveau.freedesktop.org). Note that this patch contain nvidia specific code for ioctl tracing, but it's completely optional. Please apply. diff --git a/Makefile.am b/Makefile.am index 6bbe4d4..f906d5f 100644 --- a/Makefile.am +++ b/Makefile.am @@ -8,6 +8,7 @@ TOOLS = memcheck \ callgrind \ massif \ lackey \ + mmt \ none \ helgrind \ drd diff --git a/configure.in b/configure.in index a47fede..770ad32 100644 --- a/configure.in +++ b/configure.in @@ -1986,6 +1986,8 @@ AC_CONFIG_FILES([ massif/ms_print lackey/Makefile lackey/tests/Makefile + mmt/Makefile + mmt/tests/Makefile none/Makefile none/tests/Makefile none/tests/amd64/Makefile diff --git a/mmt/Makefile.am b/mmt/Makefile.am new file mode 100644 index 0000000..d964a5e --- /dev/null +++ b/mmt/Makefile.am @@ -0,0 +1,48 @@ +include $(top_srcdir)/Makefile.tool.am + +noinst_PROGRAMS = mmt-@VGCONF_ARCH_PRI@-@VGCONF_OS@ +if VGCONF_HAVE_PLATFORM_SEC +noinst_PROGRAMS += mmt-@VGCONF_ARCH_SEC@-@VGCONF_OS@ +endif + +MMT_SOURCES_COMMON = mmt_main.c + +mmt_@VGCONF_ARCH_PRI@_@VGCONF_OS@_SOURCES = \ + $(MMT_SOURCES_COMMON) +mmt_@VGCONF_ARCH_PRI@_@VGCONF_OS@_CPPFLAGS = \ + $(AM_CPPFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +mmt_@VGCONF_ARCH_PRI@_@VGCONF_OS@_CFLAGS = \ + $(AM_CFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +mmt_@VGCONF_ARCH_PRI@_@VGCONF_OS@_DEPENDENCIES = \ + $(TOOL_DEPENDENCIES_@VGCONF_PLATFORM_PRI_CAPS@) +mmt_@VGCONF_ARCH_PRI@_@VGCONF_OS@_LDADD = \ + $(TOOL_LDADD_@VGCONF_PLATFORM_PRI_CAPS@) +mmt_@VGCONF_ARCH_PRI@_@VGCONF_OS@_LDFLAGS = \ + $(TOOL_LDFLAGS_@VGCONF_PLATFORM_PRI_CAPS@) +mmt_@VGCONF_ARCH_PRI@_@VGCONF_OS@_LINK = \ + $(top_builddir)/coregrind/link_tool_exe_@VGCONF_OS@ \ + 
@VALT_LOAD_ADDRESS_PRI@ \ + $(LINK) \ + $(mmt_@VGCONF_ARCH_PRI@_@VGCONF_OS@_CFLAGS) \ + $(mmt_@VGCONF_ARCH_PRI@_@VGCONF_OS@_LDFLAGS) + +if VGCONF_HAVE_PLATFORM_SEC +mmt_@VGCONF_ARCH_SEC@_@VGCONF_OS@_SOURCES = \ + $(MMT_SOURCES_COMMON) +mmt_@VGCONF_ARCH_SEC@_@VGCONF_OS@_CPPFLAGS = \ + $(AM_CPPFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) +mmt_@VGCONF_ARCH_SEC@_@VGCONF_OS@_CFLAGS = \ + $(AM_CFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) +mmt_@VGCONF_ARCH_SEC@_@VGCONF_OS@_DEPENDENCIES = \ + $(TOOL_DEPENDENCIES_@VGCONF_PLATFORM_SEC_CAPS@) +mmt_@VGCONF_ARCH_SEC@_@VGCONF_OS@_LDADD = \ + $(TOOL_LDADD_@VGCONF_PLATFORM_SEC_CAPS@) +mmt_@VGCONF_ARCH_SEC@_@VGCONF_OS@_LDFLAGS = \ + $(TOOL_LDFLAGS_@VGCONF_PLATFORM_SEC_CAPS@) +mmt_@VGCONF_ARCH_SEC@_@VGCONF_OS@_LINK = \ + $(top_builddir)/coregrind/link_tool_exe_@VGCONF_OS@ \ + @VALT_LOAD_ADDRESS_SEC@ \ + $(LINK) \ + $(mmt_@VGCONF_ARCH_SEC@_@VGCONF_OS@_CFLAGS) \ + $(mmt_@VGCONF_ARCH_SEC@_@VGCONF_OS@_LDFLAGS) +endif diff --git a/mmt/mmaptest/Makefile b/mmt/mmaptest/Makefile new file mode 100644 index 0000000..33929eb --- /dev/null +++ b/mmt/mmaptest/Makefile @@ -0,0 +1,24 @@ +all: build test + +build: mmaptest32 mmaptest64 + +mmaptest32: mmaptest.c + @gcc -m32 -msse mmaptest.c -o mmaptest32 + +mmaptest64: mmaptest.c + @gcc -m64 mmaptest.c -o mmaptest64 + +clean: + @rm -f mmaptest32 mmaptest64 mmaptest32.out1.tmp mmaptest32.out2.tmp mmaptest64.out1.tmp mmaptest64.out2.tmp + +test: test32 test64 + +test32: mmaptest32 + @../../coregrind/valgrind --tool=mmt --mmt-trace-file=/dev/zero ./mmaptest32 >mmaptest32.out1.tmp 2>mmaptest32.out2.tmp || (cat mmaptest32.out1.tmp && cat mmaptest32.out2.tmp && false) + @cat mmaptest32.out1.tmp | diff -u mmaptest32.out1 - + @cat mmaptest32.out2.tmp | grep "^--" | sed "s/^--[0-9]*-- //" | diff -u mmaptest32.out2 - + +test64: mmaptest64 + @../../coregrind/valgrind --tool=mmt --mmt-trace-file=/dev/zero ./mmaptest64 >mmaptest64.out1.tmp 2>mmaptest64.out2.tmp || (cat mmaptest64.out1.tmp && cat mmaptest64.out2.tmp && false) + @cat 
mmaptest64.out1.tmp | diff -u mmaptest64.out1 - + @cat mmaptest64.out2.tmp | grep "^--" | sed "s/^--[0-9]*-- //" | diff -u mmaptest64.out2 - diff --git a/mmt/mmaptest/mmaptest.c b/mmt/mmaptest/mmaptest.c new file mode 100644 index 0000000..0756b37 --- /dev/null +++ b/mmt/mmaptest/mmaptest.c @@ -0,0 +1,103 @@ +#define _GNU_SOURCE +#include <stdio.h> +#include <unistd.h> +#include <sys/mman.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <stdlib.h> +#include <string.h> + +typedef float v4sf __attribute__((vector_size(16))); + +#define MMAP_LEN 0x1000 +#define STARTING_ADDRESS ((void *)0x77770000) +#define MMAP_OFFSET 0x2000 + +#define NVIDIA_IOCTL_REQUEST 0xc030464e + +unsigned long long data128[] = { + 0x1234567890abcdefULL, 0xdeadbeeffeeddeadULL, + 0x1234567890abcdefULL, 0xdeadbeeffeeddeadULL}; + +int main() +{ + char *ptr; + int fd = open("/dev/zero", O_RDWR); + if (fd < 0) + { + perror("open"); + exit(-1); + } + + unsigned int ioctl_data[16] = { + 0, 0, 0, 0, + 0, 0, 0, 0, + MMAP_OFFSET}; + + ioctl(fd, NVIDIA_IOCTL_REQUEST, ioctl_data); + + ptr = mmap(STARTING_ADDRESS, MMAP_LEN, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, fd, MMAP_OFFSET); + if (ptr == MAP_FAILED) + { + perror("mmap"); + exit(-2); + } + + if (ptr != STARTING_ADDRESS) + { + fprintf(stderr, "failed to mmap at specified address: %p != %p\n", ptr, STARTING_ADDRESS); + exit(-3); + } + + ptr[0x11] = 0x0f; + ptr[0x14] = 0x77; + + ((short *)ptr) [0xC8 / 2] = 0x1234; + ((short *)ptr) [0xCA / 2] = 0x5678; + + ((int *)ptr) [0x190 / 4] = 0x98765432; + ((int *)ptr) [0x194 / 4] = 0xdeadbeef; + + ((long long *)ptr) [0x320 / 8] = 0x1234567890abcdefULL; + ((long long *)ptr) [0x328 / 8] = 0xfedcba9876543210ULL; + + __builtin_ia32_movntq((void *)(ptr + 0x20), data128[0]); + + __builtin_ia32_movntps((void *)(ptr + 0x40), *((v4sf *)data128)); + + printf("%x\n", ptr[0x11]); + printf("%x\n", ptr[0x14]); + printf("%x\n", ((short *)ptr)[0xC8 / 2]); + printf("%x\n", ((short 
*)ptr)[0xCA / 2]); + printf("%x\n", ((int *)ptr)[0x190 / 4]); + printf("%x\n", ((int *)ptr)[0x194 / 4]); + printf("%llx\n", ((long long *)ptr)[0x320 / 8]); + printf("%llx\n", ((long long *)ptr)[0x328 / 8]); + + __builtin_ia32_movntq(data128, *(unsigned long long *)(ptr + 0x20)); + + __builtin_ia32_movntps((float *)data128, *(v4sf *)(ptr + 0x40)); + + ptr = mremap(ptr, MMAP_LEN, MMAP_LEN + 4096, MREMAP_FIXED | MREMAP_MAYMOVE, + STARTING_ADDRESS + 213411 * 4096); + if (ptr == MAP_FAILED) + { + perror("mremap"); + exit(-97); + } + + if (munmap(ptr, MMAP_LEN) < 0) + { + perror("munmap"); + exit(-98); + } + + if (close(fd) < 0) + { + perror("close"); + exit(-99); + } + + return 0; +} diff --git a/mmt/mmaptest/mmaptest32.out1 b/mmt/mmaptest/mmaptest32.out1 new file mode 100644 index 0000000..23bb6f3 --- /dev/null +++ b/mmt/mmaptest/mmaptest32.out1 @@ -0,0 +1,8 @@ +f +77 +1234 +5678 +98765432 +deadbeef +1234567890abcdef +fedcba9876543210 diff --git a/mmt/mmaptest/mmaptest32.out2 b/mmt/mmaptest/mmaptest32.out2 new file mode 100644 index 0000000..86233a7 --- /dev/null +++ b/mmt/mmaptest/mmaptest32.out2 @@ -0,0 +1,28 @@ +got new mmap at 0x77770000, len: 0x00001000, offset: 0x2000, serial: 1 +w 1:0x0011, 0x0f +w 1:0x0014, 0x77 +w 1:0x00c8, 0x1234 +w 1:0x00ca, 0x5678 +w 1:0x0190, 0x98765432 +w 1:0x0194, 0xdeadbeef +w 1:0x0320, 0x90abcdef +w 1:0x0324, 0x12345678 +w 1:0x0328, 0x76543210 +w 1:0x032c, 0xfedcba98 +w 1:0x0020, 0x12345678,0x90abcdef +w 1:0x0040, 0xdeadbeef,0xfeeddead,0x12345678,0x90abcdef +r 1:0x0011, 0x0f +r 1:0x0014, 0x77 +r 1:0x00c8, 0x1234 +r 1:0x00ca, 0x5678 +r 1:0x0190, 0x98765432 +r 1:0x0194, 0xdeadbeef +r 1:0x0324, 0x12345678 +r 1:0x0320, 0x90abcdef +r 1:0x032c, 0xfedcba98 +r 1:0x0328, 0x76543210 +r 1:0x0024, 0x12345678 +r 1:0x0020, 0x90abcdef +r 1:0x0040, 0xdeadbeef,0xfeeddead,0x12345678,0x90abcdef +changed mmap 0x0:0x0 from: (address: 0x77770000, len: 0x00001000), to: (address: 0xAB913000, len: 0x00002000), offset 0x2000, serial 1 +removed mmap 0x0:0x0 for: 
0xAB913000, len: 0x00002000, offset: 0x2000, serial: 1 diff --git a/mmt/mmaptest/mmaptest64.out1 b/mmt/mmaptest/mmaptest64.out1 new file mode 100644 index 0000000..23bb6f3 --- /dev/null +++ b/mmt/mmaptest/mmaptest64.out1 @@ -0,0 +1,8 @@ +f +77 +1234 +5678 +98765432 +deadbeef +1234567890abcdef +fedcba9876543210 diff --git a/mmt/mmaptest/mmaptest64.out2 b/mmt/mmaptest/mmaptest64.out2 new file mode 100644 index 0000000..6ebb0f5 --- /dev/null +++ b/mmt/mmaptest/mmaptest64.out2 @@ -0,0 +1,25 @@ +got new mmap at 0x77770000, len: 0x00001000, offset: 0x2000, serial: 1 +w 1:0x0011, 0x0f +w 1:0x0014, 0x77 +w 1:0x00c8, 0x1234 +w 1:0x00ca, 0x5678 +w 1:0x0190, 0x98765432 +w 1:0x0194, 0xdeadbeef +w 1:0x0320, 0x90abcdef +w 1:0x0324, 0x12345678 +w 1:0x0328, 0x76543210 +w 1:0x032c, 0xfedcba98 +w 1:0x0020, 0x12345678,0x90abcdef +w 1:0x0040, 0xdeadbeef,0xfeeddead,0x12345678,0x90abcdef +r 1:0x0011, 0x0f +r 1:0x0014, 0x77 +r 1:0x00c8, 0x1234 +r 1:0x00ca, 0x5678 +r 1:0x0190, 0x98765432 +r 1:0x0194, 0xdeadbeef +r 1:0x0320, 0x12345678,0x90abcdef +r 1:0x0328, 0xfedcba98,0x76543210 +r 1:0x0020, 0x12345678,0x90abcdef +r 1:0x0040, 0xdeadbeef,0xfeeddead,0x12345678,0x90abcdef +changed mmap 0x0:0x0 from: (address: 0x77770000, len: 0x00001000), to: (address: 0xAB913000, len: 0x00002000), offset 0x2000, serial 1 +removed mmap 0x0:0x0 for: 0xAB913000, len: 0x00002000, offset: 0x2000, serial: 1 diff --git a/mmt/mmt_main.c b/mmt/mmt_main.c new file mode 100644 index 0000000..90d656d --- /dev/null +++ b/mmt/mmt_main.c @@ -0,0 +1,1713 @@ +/*--------------------------------------------------------------------*/ +/*--- nvtrace: mmaptracer tool that tracks NVidia ioctls ---*/ +/*--------------------------------------------------------------------*/ + +/* + Copyright (C) 2006 Dave Airlie + Copyright (C) 2007 Wladimir J. 
van der Laan + Copyright (C) 2009 Marcin Slusarz <mar...@gm...> + + This program is free software; you can redistribute it and/or + modify it under the terms of the GNU General Public License as + published by the Free Software Foundation; either version 2 of the + License, or (at your option) any later version. + + This program is distributed in the hope that it will be useful, but + WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + General Public License for more details. + + You should have received a copy of the GNU General Public License + along with this program; if not, write to the Free Software + Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA + 02111-1307, USA. + + The GNU General Public License is contained in the file COPYING. +*/ + +/* + Vg_UserMsg for important messages + Vg_DebugMsg for memory load/store messages + Vg_DebugExtraMsg for other messages +*/ + +#include "pub_tool_basics.h" +#include "pub_tool_libcprint.h" +#include "pub_tool_libcassert.h" + +#include "pub_tool_tooliface.h" +#include "pub_tool_debuginfo.h" +#include "pub_tool_libcbase.h" +#include "pub_tool_options.h" +#include "pub_tool_machine.h" +#include "pub_tool_threadstate.h" +#include "pub_tool_vki.h" + +#include "pub_tool_vkiscnums.h" +#include "pub_tool_libcfile.h" +#include "pub_tool_mallocfree.h" + +#include "coregrind/pub_core_basics.h" +#include "coregrind/pub_core_libcassert.h" +#include "coregrind/m_syswrap/priv_types_n_macros.h" + +#include <fcntl.h> +#include <string.h> + +#define MAX_REGIONS 100 +#define MAX_TRACE_FILES 10 + +#ifdef __LP64__ +#define MMT_64BIT +#endif + +static struct mmt_mmap_data { + Addr start; + Addr end; + int fd; + Off64T offset; + UInt id; + UWord data1; + UWord data2; +} mmt_mmaps[MAX_REGIONS]; +static int last_region = -1; + +static struct object_type { + UInt id; // type id + char *name; // some name + UInt cargs; // number of constructor args (uint32) +} 
object_types[] = +{ + {0x0000, "NV_CONTEXT_NEW", 0}, + + {0x0004, "NV_PTIMER", 0}, + + {0x0041, "NV_CONTEXT", 0}, + + {0x502d, "NV50_2D", 0}, + {0x902d, "NVC0_2D", 0}, + + {0x5039, "NV50_M2MF", 0}, + {0x9039, "NVC0_M2MF", 0}, + + {0x9068, "NVC0_PEEPHOLE", 0}, + + {0x406e, "NV40_FIFO_DMA", 6}, + + {0x506f, "NV50_FIFO_IB", 6}, + {0x826f, "NV84_FIFO_IB", 6}, + {0x906f, "NVC0_FIFO_IB", 6}, + + {0x5070, "NV84_DISPLAY", 4}, + {0x8270, "NV84_DISPLAY", 4}, + {0x8370, "NVA0_DISPLAY", 4}, + {0x8870, "NV98_DISPLAY", 4}, + {0x8570, "NVA3_DISPLAY", 4}, + + {0x5072, NULL, 8}, + + {0x7476, "NV84_VP", 0}, + + {0x507a, "NV50_DISPLAY_CURSOR", 0}, + {0x827a, "NV84_DISPLAY_CURSOR", 0}, + {0x857a, "NVA3_DISPLAY_CURSOR", 0}, + + {0x507b, "NV50_DISPLAY_OVERLAY", 0}, + {0x827b, "NV84_DISPLAY_OVERLAY", 0}, + {0x857b, "NVA3_DISPLAY_OVERLAY", 0}, + + {0x507c, "NV50_DISPLAY_SYNC_FIFO", 8}, + {0x827c, "NV84_DISPLAY_SYNC_FIFO", 8}, + {0x837c, "NVA0_DISPLAY_SYNC_FIFO", 8}, + {0x857c, "NVA3_DISPLAY_SYNC_FIFO", 8}, + + {0x507d, "NV50_DISPLAY_MASTER_FIFO", 0}, + {0x827d, "NV84_DISPLAY_MASTER_FIFO", 0}, + {0x837d, "NVA0_DISPLAY_MASTER_FIFO", 0}, + {0x887d, "NV98_DISPLAY_MASTER_FIFO", 0}, + {0x857d, "NVA3_DISPLAY_MASTER_FIFO", 0}, + + {0x307e, "NV30_PEEPHOLE", 0}, + + {0x507e, "NV50_DISPLAY_OVERLAY_FIFO", 8}, + {0x827e, "NV84_DISPLAY_OVERLAY_FIFO", 8}, + {0x837e, "NVA0_DISPLAY_OVERLAY_FIFO", 8}, + {0x857e, "NVA3_DISPLAY_OVERLAY_FIFO", 8}, + + {0x0080, "NV_DEVICE", 1}, + {0x2080, "NV_SUBDEVICE_0", 0}, + {0x2081, "NV_SUBDEVICE_1", 0}, + {0x2082, "NV_SUBDEVICE_2", 0}, + {0x2083, "NV_SUBDEVICE_3", 0}, + + {0x5097, "NV50_3D", 0}, + {0x8297, "NV84_3D", 0}, + {0x8397, "NVA0_3D", 0}, + {0x8597, "NVA3_3D", 0}, + {0x8697, "NVAF_3D", 0}, + {0x9097, "NVC0_3D", 0}, + + {0x74b0, "NV84_BSP", 0}, + + {0x88b1, "NV98_BSP", 0}, + {0x85b1, "NVA3_BSP", 0}, + {0x86b1, "NVAF_BSP", 0}, + {0x90b1, "NVC0_BSP", 0}, + + {0x88b2, "NV98_VP", 0}, + {0x85b2, "NVA3_VP", 0}, + {0x90b2, "NVC0_VP", 0}, + + {0x88b3, "NV98_PPP", 0}, + 
{0x85b3, "NVA3_PPP", 0}, + {0x90b3, "NVC0_PPP", 0}, + + {0x88b4, "NV98_CRYPT", 0}, + + {0x85b5, "NVA3_COPY", 0}, + {0x90b5, "NVC0_COPY0", 0}, + + {0x50c0, "NV50_COMPUTE", 0}, + {0x85c0, "NVA3_COMPUTE", 0}, + {0x90c0, "NVC0_COMPUTE", 0}, + + {0x74c1, "NV84_CRYPT", 0}, + + {0x50e0, "NV50_PGRAPH", 0}, + {0x50e2, "NV50_PFIFO", 0}, +}; + +static UInt current_item = 1; + +/* Command line options */ +//UInt mmt_clo_offset = (UInt) -1; +int dump_load = True, dump_store = True; +static int trace_opens = False; + +static struct trace_file { + const char *path; + fd_set fds; +} trace_files[MAX_TRACE_FILES]; +static int trace_all_files = False; + +static int trace_nvidia_ioctls = False; +static int trace_marks = False; +static int trace_mark_fd; +static int trace_mark_cnt = 0; +static fd_set nvidiactl_fds; +static fd_set nvidia0_fds; + +static struct mmt_mmap_data *find_mmap(Addr addr) +{ + struct mmt_mmap_data *region = NULL; + int i; + + for (i = 0; i <= last_region; i++) + { + region = &mmt_mmaps[i]; + if (addr >= region->start && addr < region->end) + return region; + } + + return NULL; +} + +static struct object_type *find_objtype(UInt id) +{ + int i; + int n = sizeof(object_types) / sizeof(struct object_type); + + for (i = 0; i < n; ++i) + if (object_types[i].id == id) + return &object_types[i]; + + return NULL; +} + +static void mydescribe(Addr inst_addr, char *namestr, int len) +{ +#if 0 + const SegInfo *si; + /* Search for it in segments */ + VG_(snprintf) (namestr, len, "@%08x", inst_addr); + for (si = VG_(next_seginfo) (NULL); + si != NULL; si = VG_(next_seginfo) (si)) + { + Addr base = VG_(seginfo_start) (si); + SizeT size = VG_(seginfo_size) (si); + + if (inst_addr >= base && inst_addr < base + size) + { + const UChar *filename = VG_(seginfo_filename) (si); + VG_(snprintf) (namestr, len, "@%08x (%s:%08x)", inst_addr, + filename, inst_addr - base); + + break; + } + } +#else + VG_(strcpy) (namestr, ""); +#endif + +} + +static VG_REGPARM(2) +void trace_store(Addr 
addr, SizeT size, Addr inst_addr, UWord value) +{ + struct mmt_mmap_data *region; + char valstr[64]; + char namestr[256]; + + region = find_mmap(addr); + if (!region) + return; + + switch (size) + { + case 1: + VG_(sprintf) (valstr, "0x%02lx", value); + break; + case 2: + VG_(sprintf) (valstr, "0x%04lx", value); + break; + case 4: + VG_(sprintf) (valstr, "0x%08lx", value); + break; +#ifdef MMT_64BIT + case 8: + VG_(sprintf) (valstr, "0x%08lx,0x%08lx", value >> 32, value & 0xffffffff); + break; +#endif + default: + return; + } + mydescribe(inst_addr, namestr, 256); + + VG_(message) (Vg_DebugMsg, "w %d:0x%04x, %s %s\n", region->id, (unsigned int)(addr - region->start), valstr, namestr); +} + +static VG_REGPARM(2) +void trace_store2(Addr addr, SizeT size, Addr inst_addr, UWord value1, UWord value2) +{ + struct mmt_mmap_data *region; + char valstr[64]; + char namestr[256]; + + region = find_mmap(addr); + if (!region) + return; + + switch (size) + { + case 4: + VG_(sprintf) (valstr, "0x%08lx,0x%08lx", value1, value2); + break; +#ifdef MMT_64BIT + case 8: + VG_(sprintf) (valstr, "0x%08lx,0x%08lx,0x%08lx,0x%08lx", + value1 >> 32, value1 & 0xffffffff, + value2 >> 32, value2 & 0xffffffff); + break; +#endif + default: + return; + } + + mydescribe(inst_addr, namestr, 256); + + VG_(message) (Vg_DebugMsg, "w %d:0x%04x, %s %s\n", region->id, (unsigned int)(addr - region->start), valstr, namestr); +} + +#ifndef MMT_64BIT +static VG_REGPARM(2) +void trace_store4(Addr addr, Addr inst_addr, UWord value1, UWord value2, UWord value3, UWord value4) +{ + struct mmt_mmap_data *region; + char valstr[64]; + char namestr[256]; + + region = find_mmap(addr); + if (!region) + return; + + VG_(sprintf) (valstr, "0x%08lx,0x%08lx,0x%08lx,0x%08lx", value1, value2, value3, value4); + mydescribe(inst_addr, namestr, 256); + + VG_(message) (Vg_DebugMsg, "w %d:0x%04x, %s %s\n", region->id, (unsigned int)(addr - region->start), valstr, namestr); +} +#endif + +static VG_REGPARM(2) +void trace_load(Addr 
addr, SizeT size, UInt inst_addr, UWord value) +{ + struct mmt_mmap_data *region; + char valstr[64]; + char namestr[256]; + + region = find_mmap(addr); + if (!region) + return; + + switch (size) + { + case 1: + VG_(sprintf) (valstr, "0x%02lx", value); + break; + case 2: + VG_(sprintf) (valstr, "0x%04lx", value); + break; + case 4: + VG_(sprintf) (valstr, "0x%08lx", value); + break; +#ifdef MMT_64BIT + case 8: + VG_(sprintf) (valstr, "0x%08lx,0x%08lx", value >> 32, value & 0xffffffff); + break; +#endif + default: + return; + } + mydescribe(inst_addr, namestr, 256); + + VG_(message) (Vg_DebugMsg, "r %d:0x%04x, %s %s\n", region->id, (unsigned int)(addr - region->start), valstr, namestr); +} + +static VG_REGPARM(2) +void trace_load2(Addr addr, SizeT size, UInt inst_addr, UWord value1, UWord value2) +{ + struct mmt_mmap_data *region; + char valstr[64]; + char namestr[256]; + + region = find_mmap(addr); + if (!region) + return; + + switch (size) + { + case 4: + VG_(sprintf) (valstr, "0x%08lx,0x%08lx", value1, value2); + break; +#ifdef MMT_64BIT + case 8: + VG_(sprintf) (valstr, "0x%08lx,0x%08lx,0x%08lx,0x%08lx", + value1 >> 32, value1 & 0xffffffff, + value2 >> 32, value2 & 0xffffffff); + break; +#endif + default: + return; + } + mydescribe(inst_addr, namestr, 256); + + VG_(message) (Vg_DebugMsg, "r %d:0x%04x, %s %s\n", region->id, (unsigned int)(addr - region->start), valstr, namestr); +} + +#ifndef MMT_64BIT +static VG_REGPARM(2) +void trace_load4(Addr addr, SizeT size, UInt inst_addr, UWord value1, UWord value2, UWord value3, UWord value4) +{ + struct mmt_mmap_data *region; + char valstr[64]; + char namestr[256]; + + region = find_mmap(addr); + if (!region) + return; + + VG_(sprintf) (valstr, "0x%08lx,0x%08lx,0x%08lx,0x%08lx", value1, value2, value3, value4); + mydescribe(inst_addr, namestr, 256); + + VG_(message) (Vg_DebugMsg, "r %d:0x%04x, %s %s\n", region->id, (unsigned int)(addr - region->start), valstr, namestr); +} +#endif + +static void add_trace_load1(IRSB *bb, 
IRExpr *addr, Int size, Addr inst_addr, IRExpr *val1) +{ + IRExpr **argv = mkIRExprVec_4(addr, mkIRExpr_HWord(size), + mkIRExpr_HWord(inst_addr), val1); + IRDirty *di = unsafeIRDirty_0_N(2, + "trace_load", + VG_(fnptr_to_fnentry) (trace_load), + argv); + addStmtToIRSB(bb, IRStmt_Dirty(di)); +} + +static void add_trace_load2(IRSB *bb, IRExpr *addr, Int size, Addr inst_addr, IRExpr *val1, IRExpr *val2) +{ + IRExpr **argv = mkIRExprVec_5(addr, mkIRExpr_HWord(size), + mkIRExpr_HWord(inst_addr), val1, val2); + IRDirty *di = unsafeIRDirty_0_N(2, + "trace_load2", + VG_(fnptr_to_fnentry) (trace_load2), + argv); + addStmtToIRSB(bb, IRStmt_Dirty(di)); +} + +#ifndef MMT_64BIT +static void add_trace_load4(IRSB *bb, IRExpr *addr, Int size, Addr inst_addr, IRExpr *val1, IRExpr *val2, IRExpr *val3, IRExpr *val4) +{ + IRExpr **argv = mkIRExprVec_7(addr, mkIRExpr_HWord(size), + mkIRExpr_HWord(inst_addr), val1, val2, val3, val4); + IRDirty *di = unsafeIRDirty_0_N(2, + "trace_load4", + VG_(fnptr_to_fnentry) (trace_load4), + argv); + addStmtToIRSB(bb, IRStmt_Dirty(di)); +} +#endif + +#ifdef MMT_64BIT +static void add_trace_load(IRSB *bb, IRExpr *addr, Int size, Addr inst_addr, IRExpr *data, IRType arg_ty) +{ + IRTemp t; + IRStmt *cast; + IRExpr *data1, *data2; + + switch (arg_ty) + { + case Ity_I8: + t = newIRTemp(bb->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_8Uto64, data)); + + addStmtToIRSB(bb, cast); + data = IRExpr_RdTmp(t); + + add_trace_load1(bb, addr, size, inst_addr, data); + break; + + case Ity_I16: + t = newIRTemp(bb->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_16Uto64, data)); + + addStmtToIRSB(bb, cast); + data = IRExpr_RdTmp(t); + add_trace_load1(bb, addr, size, inst_addr, data); + break; + + case Ity_F32: + // reinterpret as I32 + t = newIRTemp(bb->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_ReinterpF32asI32, data)); + + addStmtToIRSB(bb, cast); + data = IRExpr_RdTmp(t); + + // no break; + case Ity_I32: + t = 
newIRTemp(bb->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_32Uto64, data)); + + addStmtToIRSB(bb, cast); + data = IRExpr_RdTmp(t); + + add_trace_load1(bb, addr, size, inst_addr, data); + break; + + case Ity_F64: + // reinterpret as I64 + t = newIRTemp(bb->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_ReinterpF64asI64, data)); + + addStmtToIRSB(bb, cast); + data = IRExpr_RdTmp(t); + // no break; + + case Ity_I64: + add_trace_load1(bb, addr, size, inst_addr, data); + break; + + case Ity_V128: + // upper 64 + t = newIRTemp(bb->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_V128HIto64, data)); + addStmtToIRSB(bb, cast); + data1 = IRExpr_RdTmp(t); + + // lower 64 + t = newIRTemp(bb->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_V128to64, data)); + addStmtToIRSB(bb, cast); + data2 = IRExpr_RdTmp(t); + + add_trace_load2(bb, addr, sizeofIRType(Ity_I64), inst_addr, data1, data2); + break; + default: + VG_(message) (Vg_UserMsg, "Warning! we missed a read of 0x%08x\n", (UInt) arg_ty); + break; + } +} +#else +static void add_trace_load(IRSB *bb, IRExpr *addr, Int size, Addr inst_addr, IRExpr *data, IRType arg_ty) +{ + IRTemp t; + IRStmt *cast; + IRExpr *data0; + IRExpr *data1, *data2; + IRExpr *data3, *data4; + + switch (arg_ty) + { + case Ity_I8: + t = newIRTemp(bb->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_8Uto32, data)); + + addStmtToIRSB(bb, cast); + data = IRExpr_RdTmp(t); + + add_trace_load1(bb, addr, size, inst_addr, data); + break; + + case Ity_I16: + t = newIRTemp(bb->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_16Uto32, data)); + + addStmtToIRSB(bb, cast); + data = IRExpr_RdTmp(t); + + add_trace_load1(bb, addr, size, inst_addr, data); + break; + + case Ity_F32: + // reinterpret as I32 + t = newIRTemp(bb->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_ReinterpF32asI32, data)); + + addStmtToIRSB(bb, cast); + data = IRExpr_RdTmp(t); + + // no break; + case Ity_I32: + 
add_trace_load1(bb, addr, size, inst_addr, data); + break; + + case Ity_F64: + // reinterpret as I64 + t = newIRTemp(bb->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_ReinterpF64asI64, data)); + + addStmtToIRSB(bb, cast); + data = IRExpr_RdTmp(t); + // no break; + case Ity_I64: + // we cannot pass whole 64-bit value in one parameter, so we split it + + // upper 32 + t = newIRTemp(bb->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64HIto32, data)); + addStmtToIRSB(bb, cast); + data1 = IRExpr_RdTmp(t); + + // lower 32 + t = newIRTemp(bb->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64to32, data)); + addStmtToIRSB(bb, cast); + data2 = IRExpr_RdTmp(t); + + add_trace_load2(bb, addr, sizeofIRType(Ity_I32), inst_addr, data1, data2); + break; + case Ity_V128: + // upper 64 + t = newIRTemp(bb->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_V128HIto64, data)); + addStmtToIRSB(bb, cast); + data0 = IRExpr_RdTmp(t); + + // upper 32 of upper 64 + t = newIRTemp(bb->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64HIto32, data0)); + addStmtToIRSB(bb, cast); + data1 = IRExpr_RdTmp(t); + + // lower 32 of upper 64 + t = newIRTemp(bb->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64to32, data0)); + addStmtToIRSB(bb, cast); + data2 = IRExpr_RdTmp(t); + + // lower 64 + t = newIRTemp(bb->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_V128to64, data)); + addStmtToIRSB(bb, cast); + data0 = IRExpr_RdTmp(t); + + // upper 32 of lower 64 + t = newIRTemp(bb->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64HIto32, data0)); + addStmtToIRSB(bb, cast); + data3 = IRExpr_RdTmp(t); + + // lower 32 of lower 64 + t = newIRTemp(bb->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64to32, data0)); + addStmtToIRSB(bb, cast); + data4 = IRExpr_RdTmp(t); + + add_trace_load4(bb, addr, sizeofIRType(Ity_I32), inst_addr, data1, data2, data3, data4); + break; + default: + VG_(message) (Vg_UserMsg, "Warning! 
we missed a read of 0x%08x\n", (UInt) arg_ty); + break; + } +} +#endif + +static void +add_trace_store1(IRSB *bb, IRExpr *addr, Int size, Addr inst_addr, + IRExpr *data) +{ + IRExpr **argv = mkIRExprVec_4(addr, mkIRExpr_HWord(size), + mkIRExpr_HWord(inst_addr), data); + IRDirty *di = unsafeIRDirty_0_N(2, + "trace_store", + VG_(fnptr_to_fnentry) (trace_store), + argv); + addStmtToIRSB(bb, IRStmt_Dirty(di)); +} + +static void +add_trace_store2(IRSB *bb, IRExpr *addr, Int size, Addr inst_addr, + IRExpr *data1, IRExpr *data2) +{ + IRExpr **argv = mkIRExprVec_5(addr, mkIRExpr_HWord(size), + mkIRExpr_HWord(inst_addr), + data1, data2); + IRDirty *di = unsafeIRDirty_0_N(2, + "trace_store2", + VG_(fnptr_to_fnentry) (trace_store2), + argv); + addStmtToIRSB(bb, IRStmt_Dirty(di)); +} + +#ifndef MMT_64BIT +static void +add_trace_store4(IRSB *bb, IRExpr *addr, Addr inst_addr, + IRExpr *data1, IRExpr *data2, IRExpr *data3, IRExpr *data4) +{ + IRExpr **argv = mkIRExprVec_6(addr, mkIRExpr_HWord(inst_addr), + data1, data2, data3, data4); + IRDirty *di = unsafeIRDirty_0_N(2, + "trace_store4", + VG_(fnptr_to_fnentry) (trace_store4), + argv); + addStmtToIRSB(bb, IRStmt_Dirty(di)); +} +#endif + +#ifdef MMT_64BIT +static void add_trace_store(IRSB *bbOut, IRExpr *destAddr, Addr inst_addr, + IRType arg_ty, IRExpr *data_expr) +{ + IRTemp t = IRTemp_INVALID; + IRStmt *cast = NULL; + IRExpr *data_expr1, *data_expr2; + + Int size = sizeofIRType(arg_ty); + + switch (arg_ty) + { + case Ity_I8: + t = newIRTemp(bbOut->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_8Uto64, data_expr)); + + addStmtToIRSB(bbOut, cast); + data_expr = IRExpr_RdTmp(t); + + add_trace_store1(bbOut, destAddr, size, inst_addr, data_expr); + break; + case Ity_I16: + t = newIRTemp(bbOut->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_16Uto64, data_expr)); + + addStmtToIRSB(bbOut, cast); + data_expr = IRExpr_RdTmp(t); + + add_trace_store1(bbOut, destAddr, size, inst_addr, data_expr); + break; + case 
Ity_F32: + // reinterpret as I32 + t = newIRTemp(bbOut->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_ReinterpF32asI32, data_expr)); + + addStmtToIRSB(bbOut, cast); + data_expr = IRExpr_RdTmp(t); + + // no break; + case Ity_I32: + t = newIRTemp(bbOut->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_32Uto64, data_expr)); + + addStmtToIRSB(bbOut, cast); + data_expr = IRExpr_RdTmp(t); + + add_trace_store1(bbOut, destAddr, size, inst_addr, data_expr); + break; + case Ity_F64: + // reinterpret as I64 + t = newIRTemp(bbOut->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_ReinterpF64asI64, data_expr)); + + addStmtToIRSB(bbOut, cast); + data_expr = IRExpr_RdTmp(t); + // no break; + case Ity_I64: + add_trace_store1(bbOut, destAddr, size, inst_addr, data_expr); + break; + case Ity_V128: + // we cannot pass whole 128-bit value in one parameter, so we split it + + // upper 64 + t = newIRTemp(bbOut->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_V128HIto64, data_expr)); + addStmtToIRSB(bbOut, cast); + data_expr1 = IRExpr_RdTmp(t); + + // lower 64 + t = newIRTemp(bbOut->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_V128to64, data_expr)); + addStmtToIRSB(bbOut, cast); + data_expr2 = IRExpr_RdTmp(t); + + add_trace_store2(bbOut, destAddr, sizeofIRType(Ity_I64), inst_addr, + data_expr1, data_expr2); + + break; + default: + VG_(message) (Vg_UserMsg, "Warning! 
we missed a write of 0x%08x\n", (UInt) arg_ty); + break; + } +} +#else +static void add_trace_store(IRSB *bbOut, IRExpr *destAddr, Addr inst_addr, + IRType arg_ty, IRExpr *data_expr) +{ + IRTemp t = IRTemp_INVALID; + IRStmt *cast = NULL; + IRExpr *data_expr0; + IRExpr *data_expr1, *data_expr2; + IRExpr *data_expr3, *data_expr4; + + Int size = sizeofIRType(arg_ty); + + switch (arg_ty) + { + case Ity_I8: + t = newIRTemp(bbOut->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_8Uto32, data_expr)); + + addStmtToIRSB(bbOut, cast); + data_expr = IRExpr_RdTmp(t); + + add_trace_store1(bbOut, destAddr, size, inst_addr, data_expr); + break; + case Ity_I16: + t = newIRTemp(bbOut->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_16Uto32, data_expr)); + + addStmtToIRSB(bbOut, cast); + data_expr = IRExpr_RdTmp(t); + + add_trace_store1(bbOut, destAddr, size, inst_addr, data_expr); + break; + case Ity_F32: + // reinterpret as I32 + t = newIRTemp(bbOut->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_ReinterpF32asI32, data_expr)); + + addStmtToIRSB(bbOut, cast); + data_expr = IRExpr_RdTmp(t); + + // no break; + case Ity_I32: + add_trace_store1(bbOut, destAddr, size, inst_addr, data_expr); + break; + case Ity_F64: + // reinterpret as I64 + t = newIRTemp(bbOut->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_ReinterpF64asI64, data_expr)); + + addStmtToIRSB(bbOut, cast); + data_expr = IRExpr_RdTmp(t); + // no break; + case Ity_I64: + // we cannot pass whole 64-bit value in one parameter, so we split it + + // upper 32 + t = newIRTemp(bbOut->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64HIto32, data_expr)); + addStmtToIRSB(bbOut, cast); + data_expr1 = IRExpr_RdTmp(t); + + // lower 32 + t = newIRTemp(bbOut->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64to32, data_expr)); + addStmtToIRSB(bbOut, cast); + data_expr2 = IRExpr_RdTmp(t); + + add_trace_store2(bbOut, destAddr, sizeofIRType(Ity_I32), inst_addr, data_expr1, 
data_expr2); + break; + case Ity_V128: + // upper 64 + t = newIRTemp(bbOut->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_V128HIto64, data_expr)); + addStmtToIRSB(bbOut, cast); + data_expr0 = IRExpr_RdTmp(t); + + // upper 32 of upper 64 + t = newIRTemp(bbOut->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64HIto32, data_expr0)); + addStmtToIRSB(bbOut, cast); + data_expr1 = IRExpr_RdTmp(t); + + // lower 32 of upper 64 + t = newIRTemp(bbOut->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64to32, data_expr0)); + addStmtToIRSB(bbOut, cast); + data_expr2 = IRExpr_RdTmp(t); + + // lower 64 + t = newIRTemp(bbOut->tyenv, Ity_I64); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_V128to64, data_expr)); + addStmtToIRSB(bbOut, cast); + data_expr0 = IRExpr_RdTmp(t); + + // upper 32 of lower 64 + t = newIRTemp(bbOut->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64HIto32, data_expr0)); + addStmtToIRSB(bbOut, cast); + data_expr3 = IRExpr_RdTmp(t); + + // lower 32 of lower 64 + t = newIRTemp(bbOut->tyenv, Ity_I32); + cast = IRStmt_WrTmp(t, IRExpr_Unop(Iop_64to32, data_expr0)); + addStmtToIRSB(bbOut, cast); + data_expr4 = IRExpr_RdTmp(t); + + add_trace_store4(bbOut, destAddr, inst_addr, + data_expr1, data_expr2, data_expr3, data_expr4); + + break; + default: + VG_(message) (Vg_UserMsg, "Warning! we missed a write of 0x%08x\n", (UInt) arg_ty); + break; + } +} +#endif + +static IRSB *mmt_instrument(VgCallbackClosure *closure, + IRSB *bbIn, + VexGuestLayout *layout, + VexGuestExtents *vge, + IRType gWordTy, IRType hWordTy) +{ + IRSB *bbOut; + int i = 0; + Addr inst_addr; + + if (gWordTy != hWordTy) + { + /* We don't currently support this case. 
*/ + VG_(tool_panic) ("host/guest word size mismatch"); + } + + /* Set up BB */ + bbOut = deepCopyIRSBExceptStmts(bbIn); + + /* Copy verbatim any IR preamble preceding the first IMark */ + while (i < bbIn->stmts_used && bbIn->stmts[i]->tag != Ist_IMark) + { + addStmtToIRSB(bbOut, bbIn->stmts[i]); + i++; + } + + inst_addr = 0; + + for (; i < bbIn->stmts_used; i++) + { + IRStmt *st = bbIn->stmts[i]; + IRExpr *data_expr; + IRType arg_ty; + + if (!st) + continue; + + if (st->tag == Ist_IMark) + { + inst_addr = st->Ist.IMark.addr; + addStmtToIRSB(bbOut, st); + } + else if (st->tag == Ist_Store && dump_store) + { + data_expr = st->Ist.Store.data; + + arg_ty = typeOfIRExpr(bbIn->tyenv, data_expr); + + add_trace_store(bbOut, st->Ist.Store.addr, inst_addr, + arg_ty, data_expr); + addStmtToIRSB(bbOut, st); + } + else if (st->tag == Ist_WrTmp && dump_load) + { + data_expr = st->Ist.WrTmp.data; + + if (data_expr->tag == Iex_Load) + { + IRTemp dest = st->Ist.WrTmp.tmp; + IRExpr *value; + + addStmtToIRSB(bbOut, st); + + value = IRExpr_RdTmp(dest); + + arg_ty = typeOfIRExpr(bbIn->tyenv, value); + + add_trace_load(bbOut, data_expr->Iex.Load.addr, + sizeofIRType(data_expr->Iex.Load.ty), + inst_addr, value, arg_ty); + } + else + addStmtToIRSB(bbOut, st); + } + else + addStmtToIRSB(bbOut, st); + + } + return bbOut; +} + +#define TF_OPT "--mmt-trace-file=" +#define TN_OPT "--mmt-trace-nvidia-ioctls" +#define TO_OPT "--mmt-trace-all-opens" +#define TA_OPT "--mmt-trace-all-files" +#define TM_OPT "--mmt-trace-marks" + +static Bool mmt_process_cmd_line_option(Char * arg) +{ +// VG_(printf)("arg: %s\n", arg); + if (VG_(strncmp)(arg, TF_OPT, strlen(TF_OPT)) == 0) + { + int i; + for (i = 0; i < MAX_TRACE_FILES; ++i) + if (trace_files[i].path == NULL) + break; + if (i == MAX_TRACE_FILES) + { + VG_(printf)("too many files to trace\n"); + return False; + } + trace_files[i].path = VG_(strdup)("mmt.options-parsing", arg + strlen(TF_OPT)); + return True; + } + else if (VG_(strcmp)(arg, TN_OPT) == 
0) + { + trace_nvidia_ioctls = True; + return True; + } + else if (VG_(strcmp)(arg, TO_OPT) == 0) + { + trace_opens = True; + return True; + } + else if (VG_(strcmp)(arg, TA_OPT) == 0) + { + trace_all_files = True; + return True; + } + else if (VG_(strcmp)(arg, TM_OPT) == 0) + { + trace_marks = True; + return True; + } + + return False; +} + +static void mmt_print_usage(void) +{ + VG_(printf)(" " TF_OPT "path trace loads and stores to memory mapped for\n" + " this file (e.g. /dev/nvidia0) (you can pass \n" + " this option multiple times)\n"); + VG_(printf)(" " TA_OPT " trace loads and store to memory mapped for all files\n"); + VG_(printf)(" " TN_OPT " trace ioctls on /dev/nvidiactl\n"); + VG_(printf)(" " TO_OPT " trace all 'open' syscalls\n"); + VG_(printf)(" " TM_OPT " send mmiotrace marks before and after ioctls\n"); +} + +static void mmt_print_debug_usage(void) +{ +} + +static void mmt_fini(Int exitcode) +{ + if (trace_marks) { + VG_(close)(trace_mark_fd); + } +} + +static void mmt_post_clo_init(void) +{ + if (trace_marks) { + SysRes ff; + ff = VG_(open)("/sys/kernel/debug/tracing/trace_marker", O_WRONLY, 0777); + if (ff._isError) { + VG_(message) (Vg_UserMsg, "Cannot open marker file!\n"); + trace_marks = 0; + } + trace_mark_fd = ff._val; + } +} + +static void dumpmem(char *s, Addr addr, UInt size) +{ + char line[4096]; + int idx = 0; + line[0] = 0; + + UInt i; + if (!addr || (addr & 0xffff0000) == 0xbeef0000) + return; + + size = size / 4; + + for (i = 0; i < size; ++i) + { + if (idx + 11 >= 4095) + break; + VG_(sprintf) (line + idx, "0x%08x ", ((UInt *) addr)[i]); + idx += 11; + } + VG_(message) (Vg_DebugMsg, "%s%s\n", s, line); +} + +static inline unsigned long long mmt_2x4to8(UInt h, UInt l) +{ + return (((unsigned long long)h) << 32) | l; +} + +static void pre_ioctl(ThreadId tid, UWord *args, UInt nArgs) +{ + int fd = args[0]; + UInt id = args[1]; + UInt *data = (UInt *) args[2]; + UWord addr; + UInt obj1, obj2, size; + int i; + + if (!FD_ISSET(fd, 
&nvidiactl_fds) && !FD_ISSET(fd, &nvidia0_fds)) + return; + + if (trace_marks) { + char buf[50]; + VG_(snprintf)(buf, 50, "VG-%d-%d-PRE\n", VG_(getpid)(), trace_mark_cnt); + VG_(write)(trace_mark_fd, buf, VG_(strlen)(buf)); + VG_(message)(Vg_DebugMsg, "MARK: %s", buf); + } + + if ((id & 0x0000FF00) == 0x4600) + { + char line[4096]; + int idx = 0; + + size = ((id & 0x3FFF0000) >> 16) / 4; + VG_(sprintf) (line, "pre_ioctl: fd:%d, id:0x%02x (full:0x%x), data: ", fd, id & 0xFF, id); + idx = strlen(line); + + for (i = 0; i < size; ++i) + { + if (idx + 11 >= 4095) + break; + VG_(sprintf) (line + idx, "0x%08x ", data[i]); + idx += 11; + } + VG_(message) (Vg_DebugMsg, "%s\n", line); + } + else + VG_(message)(Vg_DebugMsg, "pre_ioctl, fd: %d, wrong id:0x%x\n", fd, id); + + switch (id) + { + // 0x23 + // c1d00041 5c000001 00000080 00000000 00000000 00000000 00000000 00000000 + // c1d0004a beef0003 000000ff 00000000 04fe8af8 00000000 00000000 00000000 + case 0xc0204623: + obj1 = data[1]; + VG_(message) (Vg_DebugMsg, "create device object 0x%08x\n", obj1); + + // argument can be a string (7:0, indicating the bus number), but only if + // argument is 0xff + dumpmem("in ", data[4], 0x3C); + + break; + // 0x37 read stuff from video ram? + //case 0xc0204637: + case 0xc0204638: + dumpmem("in ", data[4], data[6]); + break; +#if 1 + case 0xc0204637: + { + UInt *addr2; + dumpmem("in ", data[4], data[6]); + + addr2 = (*(UInt **) (&data[4])); + //if(data[2]==0x14c && addr2[2]) + // dumpmem("in2 ", addr2[2], 0x40); + break; + } +#endif + case 0xc020462a: + VG_(message) (Vg_DebugMsg, "call method 0x%08x:0x%08x\n", data[1], data[2]); + dumpmem("in ", mmt_2x4to8(data[5], data[4]), mmt_2x4to8(data[7], data[6])); + // 0x10000002 + // word 2 is an address + // what is there? 
+ if (data[2] == 0x10000002) + { + UInt *addr2 = (*(UInt **) (&data[4])); + dumpmem("in2 ", addr2[2], 0x3c); + } + break; + + case 0xc040464d: + VG_(message) (Vg_DebugMsg, "in %s\n", *(char **) (&data[6])); + break; + case 0xc028465e: + { + // Copy data from mem to GPU +#if 0 + SysRes ff; + ff = VG_(open) ("dump5e", O_CREAT | O_WRONLY | O_TRUNC, 0777); + if (!ff.isError) + { + VG_(write) (ff.res, (void *) data[6], 0x01000000); + VG_(close) (ff.res); + } +#endif + break; + } + case 0xc0104629: + obj1 = data[1]; + obj2 = data[2]; + VG_(message) (Vg_DebugMsg, "destroy object 0x%08x:0x%08x\n", obj1, obj2); + break; + case 0xc020462b: + { + struct object_type *objtype; + char *name = "???"; + obj1 = data[1]; + obj2 = data[2]; + addr = data[3]; + objtype = find_objtype(addr); + if (objtype && objtype->name) + name = objtype->name; + VG_(message) (Vg_DebugMsg, + "create gpu object 0x%08x:0x%08x type 0x%04lx (%s)\n", + obj1, obj2, addr, name); + if (data[4]) + { + if (objtype) + dumpmem("in ", mmt_2x4to8(data[5], data[4]), objtype->cargs * 4); + else + dumpmem("in ", mmt_2x4to8(data[5], data[4]), 0x40); + } + + break; + } + case 0xc014462d: + obj1 = data[1]; + obj2 = data[2]; + addr = data[3]; + VG_(message) (Vg_DebugMsg, + "create driver object 0x%08x:0x%08x type 0x%04lx\n", obj1, obj2, addr); + break; + + } +} + +static void pre_syscall(ThreadId tid, UInt syscallno, UWord *args, UInt nArgs) +{ + if (syscallno == __NR_ioctl) + pre_ioctl(tid, args, nArgs); +} + +static struct mmt_mmap_data *get_nvidia_mapping(Off64T offset) +{ + struct mmt_mmap_data *region; + int i; + for (i = 0; i <= last_region; ++i) + { + region = &mmt_mmaps[i]; + if (FD_ISSET(region->fd, &nvidia0_fds)) + if (region->offset == offset) + return region; + } + + if (last_region + 1 >= MAX_REGIONS) + { + VG_(message)(Vg_UserMsg, "no space for new mapping!\n"); + return NULL; + } + + region = &mmt_mmaps[++last_region]; + region->id = current_item++; + region->fd = 0; + region->offset = offset; + return 
region; +} + +static inline void free_region(int idx) +{ + if (last_region != idx) + VG_(memmove)(mmt_mmaps + idx, mmt_mmaps + idx + 1, + (last_region - idx) * sizeof(struct mmt_mmap_data)); + VG_(memset)(&mmt_mmaps[last_region--], 0, sizeof(struct mmt_mmap_data)); +} + +static Addr release_nvidia_mapping(Off64T offset) +{ + int i; + for (i = 0; i <= last_region; ++i) + { + struct mmt_mmap_data *region = &mmt_mmaps[i]; + if (FD_ISSET(region->fd, &nvidia0_fds)) + if (region->offset == offset) + { + Addr addr = region->start; + free_region(i); + return addr; + } + } + return 0; +} + +static Addr release_nvidia_mapping2(UWord data1, UWord data2) +{ + int i; + for (i = 0; i <= last_region; ++i) + { + struct mmt_mmap_data *region = &mmt_mmaps[i]; + if (FD_ISSET(region->fd, &nvidia0_fds)) + if (region->data1 == data1 && region->data2 == data2) + { + Addr addr = region->start; + free_region(i); + return addr; + } + } + return 0; +} + +static void post_ioctl(ThreadId tid, UWord *args, UInt nArgs) +{ + int fd = args[0]; + UInt id = args[1]; + UInt *data = (UInt *) args[2]; + UWord addr; + UInt obj1, obj2, size, type; + int i; + struct mmt_mmap_data *region; + + if (!FD_ISSET(fd, &nvidiactl_fds) && !FD_ISSET(fd, &nvidia0_fds)) + return; + + if (trace_marks) { + char buf[50]; + VG_(snprintf)(buf, 50, "VG-%d-%d-POST\n", VG_(getpid)(), trace_mark_cnt++); + VG_(write)(trace_mark_fd, buf, VG_(strlen)(buf)); + VG_(message)(Vg_DebugMsg, "MARK: %s", buf); + } + + if ((id & 0x0000FF00) == 0x4600) + { + char line[4096]; + int idx = 0; + + size = ((id & 0x3FFF0000) >> 16) / 4; + VG_(sprintf) (line, "post_ioctl: fd:%d, id:0x%02x (full:0x%x), data: ", fd, id & 0xFF, id); + idx = strlen(line); + + for (i = 0; i < size; ++i) + { + if (idx + 11 >= 4095) + break; + VG_(sprintf) (line + idx, "0x%08x ", data[i]); + idx += 11; + } + VG_(message) (Vg_DebugMsg, "%s\n", line); + } + else + VG_(message)(Vg_DebugMsg, "post_ioctl, fd: %d, wrong id:0x%x\n", fd, id); + + switch (id) + { + // NVIDIA + 
case 0xc00c4622: // Initialize + obj1 = data[0]; + VG_(message) (Vg_DebugMsg, "created context object 0x%08x\n", obj1); + break; + + case 0xc0204623: + dumpmem("out", data[4], 0x3C); + break; + + case 0xc030464e: // Allocate map for existing object + obj1 = data[1]; + obj2 = data[2]; + addr = data[8]; + VG_(message) (Vg_DebugMsg, "allocate map 0x%08x:0x%08x 0x%08lx\n", obj1, obj2, addr); + + region = get_nvidia_mapping(addr); + if (region) + { + region->data1 = obj1; + region->data2 = obj2; + } + + break; + case 0xc020464f: // Deallocate map for existing object + obj1 = data[1]; + obj2 = data[2]; + addr = data[4]; + /// XXX some currently mapped memory might be orphaned + + if (release_nvidia_mapping(addr)) + VG_(message) (Vg_DebugMsg, "deallocate map 0x%08x:0x%08x 0x%08lx\n", obj1, obj2, addr); + + break; + case 0xc0304627: // 0x27 Allocate map (also create object) + obj1 = data[1]; + obj2 = data[2]; + type = data[3]; + addr = data[6]; + VG_(message) (Vg_DebugMsg, + "create mapped object 0x%08x:0x%08x type=0x%08x 0x%08lx\n", + obj1, obj2, type, addr); + if (addr == 0) + break; + + region = get_nvidia_mapping(addr); + if (region) + { + region->data1 = obj1; + region->data2 = obj2; + } +#if 0 + dumpmem("out ", data[2], 0x40); +#endif + break; + // 0x29 seems to destroy/deallocate + case 0xc0104629: + obj1 = data[1]; + obj2 = data[2]; + /// XXX some currently mapped memory might be orphaned + + { + Addr addr1 = release_nvidia_mapping2(obj1, obj2); + if ((void *)addr1 != NULL) + VG_(message) (Vg_DebugMsg, "deallocate map 0x%08x:0x%08x %p\n", + obj1, obj2, (void *)addr1); + } + break; + // 0x2a read stuff from video ram? 
+ // i 3 pre 2a: c1d00046 c1d00046 02000014 00000000 be88a948 00000000 00000080 00000000 + case 0xc020462a: + dumpmem("out ", mmt_2x4to8(data[5], data[4]), mmt_2x4to8(data[7], data[6])); + + if (data[2] == 0x10000002) + { + UInt *addr2 = (*(UInt **) (&data[4])); + dumpmem("out2 ", addr2[2], 0x3c); + } + break; + // 0x37 read configuration parameter + case 0xc0204638: + dumpmem("out", data[4], data[6]); + break; + case 0xc0204637: + { + UInt *addr2 = (*(UInt **) (&data[4])); + dumpmem("out", data[4], data[6]); + if (data[2] == 0x14c && addr2[2]) + /// List supported object types + dumpmem("out2 ", addr2[2], addr2[0] * 4); + } + break; + case 0xc0384657: // map GPU address + VG_(message) (Vg_DebugMsg, + "gpu map 0x%08x:0x%08x:0x%08x, addr 0x%08x, len 0x%08x\n", + data[1], data[2], data[3], data[10], data[6]); + break; + case 0xc0284658: // unmap GPU address + VG_(message) (Vg_DebugMsg, + "gpu unmap 0x%08x:0x%08x:0x%08x addr 0x%08x\n", data[1], + data[2], data[3], data[6]); + break; + case 0xc0304654: // create DMA object [3] is some kind of flags, [6] is an offset? + VG_(message) (Vg_DebugMsg, + "create dma object 0x%08x, type 0x%08x, parent 0x%08x\n", + data[1], data[2], data[5]); + break; + case 0xc0104659: // bind + VG_(message) (Vg_DebugMsg, "bind 0x%08x 0x%08x\n", data[1], data[2]); + break; + //case 0xc01c4634: + // dumpmem("out", data[4], 0x40); + // break; + // to c1d00046 c1d00046 02000014 00000000, from be88a948 00000000, size 00000080 00000000 + // 2b: c1d00046 5c000001 5c000009 0000506f be88a888 00000000 00000000 00000000 + // same, but other way around? + // i 5 pre 37: c1d00046 5c000001 0000014c 00000000 be88a9c8 00000000 00000010 00000000 + + // 0x23 create first object?? + // 0x2a method call? args/in/out depend + // 0x2b object creation + // c1d00046 beef0003 beef0028 0000307e + // 0x32 gets some value + // 0x37 read from GPU object? 
seems a read, not a write + // 0x4a memory allocation + // 0x4d after opening /dev/nvidiaX + // 0xd2 version id check + // 0x22 initialize (get context) + // 0x54 bind? 0xc0304654 + // 0x57 map to card 0xc0384657 + // 0x58 unmap from card 0xc0284658 + // 0xca ?? + + // These have external pointer: + // 0x2a (offset 0x10, size 0x18) + // 0x2b (offset 0x10, no size specified) + // 0x37 (offset 0x10, size 0x18) + // 0x38 (offset 0x10, size 0x18) + } +} + +static void post_open(ThreadId tid, UWord *args, UInt nArgs, SysRes res) +{ + const char *path = (const char *)args[0]; + int i; + + if (trace_opens) + { + int flags = (int)args[1]; + int mode = (int)args[2]; + VG_(message)(Vg_DebugMsg, "sys_open: %s, flags: 0x%x, mode: 0x%x, ret: %ld\n", path, flags, mode, res._val); + } + if (res._isError) + return; + + if (!trace_all_files) + { + for (i = 0; i < MAX_TRACE_FILES; ++i) + { + const char *path2 = trace_files[i].path; + if (path2 != NULL && VG_(strcmp)(path, path2) == 0) + { + FD_SET(res._val, &trace_files[i].fds); +// VG_(message)(Vg_DebugMsg, "fd %ld connected to %s\n", res._val, path); + break; + } + } + } + + if (trace_nvidia_ioctls) + { + if (VG_(strcmp)(path, "/dev/nvidiactl") == 0) + FD_SET(res._val, &nvidiactl_fds); + else if (VG_(strncmp)(path, "/dev/nvidia", 11) == 0) + FD_SET(res._val, &nvidia0_fds); + } +} + +static void post_close(ThreadId tid, UWord *args, UInt nArgs, SysRes res) +{ + int fd = (int)args[0]; + int i; + + if (!trace_all_files) + for(i = 0; i < MAX_TRACE_FILES; ++i) + { + if (trace_files[i].path != NULL && FD_ISSET(fd, &trace_files[i].fds)) + { + FD_CLR(fd, &trace_files[i].fds); + break; + } + } + + if (trace_nvidia_ioctls) + { + FD_CLR(fd, &nvidiactl_fds); + FD_CLR(fd, &nvidia0_fds); + } +} + +static void post_mmap(ThreadId tid, UWord *args, UInt nArgs, SysRes res, int offset_unit) +{ + void *start = (void *)args[0]; + unsigned long len = args[1]; +// unsigned long prot = args[2]; +// unsigned long flags = args[3]; + unsigned long fd = 
args[4]; + unsigned long offset = args[5]; + int i; + struct mmt_mmap_data *region; + + if (res._isError || (int)fd == -1) + return; + + start = (void *)res._val; + + if (!trace_all_files) + { + for(i = 0; i < MAX_TRACE_FILES; ++i) + { + if (FD_ISSET(fd, &trace_files[i].fds)) + break; + } + if (i == MAX_TRACE_FILES) + { +// VG_(message)(Vg_DebugMsg, "fd %ld not found\n", fd); + return; + } + } + + if (trace_nvidia_ioctls && FD_ISSET(fd, &nvidia0_fds)) + { + for (i = 0; i <= last_region; ++i) + { + region = &mmt_mmaps[i]; + if (region->id > 0 && + (region->fd == fd || region->fd == 0) && //region->fd=0 when created from get_nvidia_mapping + region->offset == offset * offset_unit) + { + region->fd = fd; + region->start = (Addr)start; + region->end = (Addr)(((char *)start) + len); + VG_(message) (Vg_DebugMsg, + "got new mmap for 0x%08lx:0x%08lx at %p, len: 0x%08lx, offset: 0x%llx, serial: %d\n", + region->data1, region->data2, (void *)region->start, len, + region->offset, region->id); + return; + } + } + } + + if (last_region + 1 >= MAX_REGIONS) + { + VG_(message)(Vg_UserMsg, "not enough space for new mmap!\n"); + return; + } + + region = &mmt_mmaps[++last_region]; + + region->fd = fd; + region->id = current_item++; + region->start = (Addr)start; + region->end = (Addr)(((char *)start) + len); + region->offset = offset * offset_unit; + + VG_(message) (Vg_DebugMsg, + "got new mmap at %p, len: 0x%08lx, offset: 0x%llx, serial: %d\n", + (void *)region->start, len, region->offset, region->id); +} + +static void post_munmap(ThreadId tid, UWord *args, UInt nArgs, SysRes res) +{ + void *start = (void *)args[0]; +// unsigned long len = args[1]; + int i; + struct mmt_mmap_data *region; + + if (res._isError) + return; + + for (i = 0; i <= last_region; ++i) + { + region = &mmt_mmaps[i]; + if (region->start == (Addr)start) + { + VG_(message) (Vg_DebugMsg, + "removed mmap 0x%lx:0x%lx for: %p, len: 0x%08lx, offset: 0x%llx, serial: %d\n", + region->data1, region->data2, (void 
*)region->start, + region->end - region->start, region->offset, region->id); + free_region(i); + return; + } + } +} + +static void post_mremap(ThreadId tid, UWord *args, UInt nArgs, SysRes res) +{ + void *start = (void *)args[0]; + unsigned long old_len = args[1]; + unsigned long new_len = args[2]; +// unsigned long flags = args[3]; + int i; + struct mmt_mmap_data *region; + + if (res._isError) + return; + + for (i = 0; i <= last_region; ++i) + { + region = &mmt_mmaps[i]; + if (region->start == (Addr)start) + { + region->start = (Addr) res._val; + region->end = region->start + new_len; + VG_(message) (Vg_DebugMsg, + "changed mmap 0x%lx:0x%lx from: (address: %p, len: 0x%08lx), to: (address: %p, len: 0x%08lx), offset 0x%llx, serial %d\n", + region->data1, region->data2, + start, old_len, + (void *)region->start, region->end - region->start, + region->offset, region->id); + return; + } + } +} + +static void post_syscall(ThreadId tid, UInt syscallno, UWord *args, + UInt nArgs, SysRes res) +{ + if (syscallno == __NR_ioctl) + post_ioctl(tid, args, nArgs); + else if (syscallno == __NR_open) + post_open(tid, args, nArgs, res); + else if (syscallno == __NR_close) + post_close(tid, args, nArgs, res); + else if (syscallno == __NR_mmap) + post_mmap(tid, args, nArgs, res, 1); +#ifndef MMT_64BIT + else if (syscallno == __NR_mmap2) + post_mmap(tid, args, nArgs, res, 4096); +#endif + else if (syscallno == __NR_munmap) + post_munmap(tid, args, nArgs, res); + else if (syscallno == __NR_mremap) + post_mremap(tid, args, nArgs, res); +} + +static void mmt_pre_clo_init(void) +{ + int i; + VG_(details_name) ("mmaptrace"); + VG_(details_version) (NULL); + VG_(details_description) ("an MMAP tracer"); + VG_(details_copyright_author) + ("Copyright (C) 2007,2009, and GNU GPL'd, by Dave Airlie, W.J. 
van der Laan, Marcin Slusarz."); + VG_(details_bug_reports_to) (VG_BUGS_TO); + + VG_(basic_tool_funcs) (mmt_post_clo_init, mmt_instrument, mmt_fini); + + VG_(needs_command_line_options) (mmt_process_cmd_line_option, + mmt_print_usage, + mmt_print_debug_usage); + + VG_(needs_syscall_wrapper) (pre_syscall, post_syscall); + + for (i = 0; i < MAX_TRACE_FILES; ++i) + FD_ZERO(&trace_files[i].fds); + FD_ZERO(&nvidiactl_fds); + FD_ZERO(&nvidia0_fds); +} + +VG_DETERMINE_INTERFACE_VERSION(mmt_pre_clo_init) diff --git a/mmt/tests/Makefile.am b/mmt/tests/Makefile.am new file mode 100644 index 0000000..8b13789 --- /dev/null +++ b/mmt/tests/Makefile.am @@ -0,0 +1 @@ + |
From: Marcin S. <mar...@gm...> - 2011-06-05 18:09:20
Please apply.
Index: include/vki/vki-linux.h
===================================================================
--- include/vki/vki-linux.h (revision 11797)
+++ include/vki/vki-linux.h (working copy)
@@ -2722,6 +2722,586 @@
#define VKI_EV_MAX 0x1f
#define VKI_EV_CNT (VKI_EV_MAX+1)
+
+// drm
+
+// drm_mode.h
+#define VKI_DRM_DISPLAY_MODE_LEN 32
+#define VKI_DRM_PROP_NAME_LEN 32
+
+struct vki_drm_mode_modeinfo {
+ __vki_u32 clock;
+ __vki_u16 hdisplay, hsync_start, hsync_end, htotal, hskew;
+ __vki_u16 vdisplay, vsync_start, vsync_end, vtotal, vscan;
+
+ __vki_u32 vrefresh;
+
+ __vki_u32 flags;
+ __vki_u32 type;
+ char name[VKI_DRM_DISPLAY_MODE_LEN];
+};
+
+struct vki_drm_mode_card_res {
+ __vki_u64 fb_id_ptr;
+ __vki_u64 crtc_id_ptr;
+ __vki_u64 connector_id_ptr;
+ __vki_u64 encoder_id_ptr;
+ __vki_u32 count_fbs;
+ __vki_u32 count_crtcs;
+ __vki_u32 count_connectors;
+ __vki_u32 count_encoders;
+ __vki_u32 min_width, max_width;
+ __vki_u32 min_height, max_height;
+};
+
+struct vki_drm_mode_crtc {
+ __vki_u64 set_connectors_ptr;
+ __vki_u32 count_connectors;
+
+ __vki_u32 crtc_id; /**< Id */
+ __vki_u32 fb_id; /**< Id of framebuffer */
+
+ __vki_u32 x, y; /**< Position on the framebuffer */
+
+ __vki_u32 gamma_size;
+ __vki_u32 mode_valid;
+ struct vki_drm_mode_modeinfo mode;
+};
+
+struct vki_drm_mode_get_encoder {
+ __vki_u32 encoder_id;
+ __vki_u32 encoder_type;
+
+ __vki_u32 crtc_id; /**< Id of crtc */
+
+ __vki_u32 possible_crtcs;
+ __vki_u32 possible_clones;
+};
+
+struct vki_drm_mode_get_connector {
+ __vki_u64 encoders_ptr;
+ __vki_u64 modes_ptr;
+ __vki_u64 props_ptr;
+ __vki_u64 prop_values_ptr;
+
+ __vki_u32 count_modes;
+ __vki_u32 count_props;
+ __vki_u32 count_encoders;
+
+ __vki_u32 encoder_id; /**< Current Encoder */
+ __vki_u32 connector_id; /**< Id */
+ __vki_u32 connector_type;
+ __vki_u32 connector_type_id;
+
+ __vki_u32 connection;
+ __vki_u32 mm_width, mm_height; /**< width/height in millimeters */
+ __vki_u32 subpixel;
+};
+
+struct vki_drm_mode_property_enum {
+ __vki_u64 value;
+ char name[VKI_DRM_PROP_NAME_LEN];
+};
+
+struct vki_drm_mode_get_property {
+ __vki_u64 values_ptr; /* values and blob lengths */
+ __vki_u64 enum_blob_ptr; /* enum and blob id ptrs */
+
+ __vki_u32 prop_id;
+ __vki_u32 flags;
+ char name[VKI_DRM_PROP_NAME_LEN];
+
+ __vki_u32 count_values;
+ __vki_u32 count_enum_blobs;
+};
+
+struct vki_drm_mode_connector_set_property {
+ __vki_u64 value;
+ __vki_u32 prop_id;
+ __vki_u32 connector_id;
+};
+
+struct vki_drm_mode_get_blob {
+ __vki_u32 blob_id;
+ __vki_u32 length;
+ __vki_u64 data;
+};
+
+struct vki_drm_mode_fb_cmd {
+ __vki_u32 fb_id;
+ __vki_u32 width, height;
+ __vki_u32 pitch;
+ __vki_u32 bpp;
+ __vki_u32 depth;
+ /* driver specific handle */
+ __vki_u32 handle;
+};
+
+struct vki_drm_mode_fb_dirty_cmd {
+ __vki_u32 fb_id;
+ __vki_u32 flags;
+ __vki_u32 color;
+ __vki_u32 num_clips;
+ __vki_u64 clips_ptr;
+};
+
+struct vki_drm_mode_mode_cmd {
+ __vki_u32 connector_id;
+ struct vki_drm_mode_modeinfo mode;
+};
+
+#define VKI_DRM_MODE_CURSOR_BO (1<<0)
+#define VKI_DRM_MODE_CURSOR_MOVE (1<<1)
+
+struct vki_drm_mode_cursor {
+ __vki_u32 flags;
+ __vki_u32 crtc_id;
+ __vki_s32 x;
+ __vki_s32 y;
+ __vki_u32 width;
+ __vki_u32 height;
+ /* driver specific handle */
+ __vki_u32 handle;
+};
+
+struct vki_drm_mode_crtc_lut {
+ __vki_u32 crtc_id;
+ __vki_u32 gamma_size;
+
+ /* pointers to arrays */
+ __vki_u64 red;
+ __vki_u64 green;
+ __vki_u64 blue;
+};
+
+struct vki_drm_mode_crtc_page_flip {
+ __vki_u32 crtc_id;
+ __vki_u32 fb_id;
+ __vki_u32 flags;
+ __vki_u32 reserved;
+ __vki_u64 user_data;
+};
+
+/* create a dumb scanout buffer */
+struct vki_drm_mode_create_dumb {
+ __vki_u32 height;
+ __vki_u32 width;
+ __vki_u32 bpp;
+ __vki_u32 flags;
+ /* handle, pitch, size will be returned */
+ __vki_u32 handle;
+ __vki_u32 pitch;
+ __vki_u64 size;
+};
+
+/* set up for mmap of a dumb scanout buffer */
+struct vki_drm_mode_map_dumb {
+ /** Handle for the object being mapped. */
+ __vki_u32 handle;
+ __vki_u32 pad;
+ /**
+ * Fake offset to use for subsequent mmap call
+ *
+ * This is a fixed-size type for 32/64 compatibility.
+ */
+ __vki_u64 offset;
+};
+
+struct vki_drm_mode_destroy_dumb {
+ __vki_u32 handle;
+};
+
+
+// drm.h
+
+typedef unsigned int vki_drm_drawable_t;
+typedef unsigned int vki_drm_magic_t;
+
+struct vki_drm_clip_rect {
+ unsigned short x1;
+ unsigned short y1;
+ unsigned short x2;
+ unsigned short y2;
+};
+
+struct vki_drm_version {
+ int version_major; /**< Major version */
+ int version_minor; /**< Minor version */
+ int version_patchlevel; /**< Patch level */
+ vki_size_t name_len; /**< Length of name buffer */
+ char *name; /**< Name of driver */
+ vki_size_t date_len; /**< Length of date buffer */
+ char *date; /**< User-space buffer to hold date */
+ vki_size_t desc_len; /**< Length of desc buffer */
+ char *desc; /**< User-space buffer to hold desc */
+};
+
+struct vki_drm_unique {
+ vki_size_t unique_len; /**< Length of unique */
+ char *unique; /**< Unique name for driver instantiation */
+};
+
+struct vki_drm_update_draw {
+ vki_drm_drawable_t handle;
+ unsigned int type;
+ unsigned int num;
+ unsigned long long data;
+};
+
+struct vki_drm_auth {
+ vki_drm_magic_t magic;
+};
+
+struct vki_drm_irq_busid {
+ int irq; /**< IRQ number */
+ int busnum; /**< bus number */
+ int devnum; /**< device number */
+ int funcnum; /**< function number */
+};
+
+/** DRM_IOCTL_GEM_CLOSE ioctl argument type */
+struct vki_drm_gem_close {
+ /** Handle of the object to be closed. */
+ __vki_u32 handle;
+ __vki_u32 pad;
+};
+
+/** DRM_IOCTL_GEM_FLINK ioctl argument type */
+struct vki_drm_gem_flink {
+ /** Handle for the object being named */
+ __vki_u32 handle;
+
+ /** Returned global name */
+ __vki_u32 name;
+};
+
+/** DRM_IOCTL_GEM_OPEN ioctl argument type */
+struct vki_drm_gem_open {
+ /** Name of object being opened */
+ __vki_u32 name;
+
+ /** Returned handle for the object */
+ __vki_u32 handle;
+
+ /** Returned size of the object */
+ __vki_u64 size;
+};
+
+// xf86drm.h
+#define VKI_DRM_IOCTL_NR(n) _VKI_IOC_NR(n)
+#define VKI_DRM_IOC_VOID _VKI_IOC_NONE
+#define VKI_DRM_IOC_READ _VKI_IOC_READ
+#define VKI_DRM_IOC_WRITE _VKI_IOC_WRITE
+#define VKI_DRM_IOC_READWRITE (_VKI_IOC_READ|_VKI_IOC_WRITE)
+#define VKI_DRM_IOC(dir, group, nr, size) _VKI_IOC(dir, group, nr, size)
+
+enum vki_drm_vblank_seq_type {
+ VKI_DRM_VBLANK_ABSOLUTE = 0x0, /**< Wait for specific vblank sequence number */
+ VKI_DRM_VBLANK_RELATIVE = 0x1, /**< Wait for given number of vblanks */
+ VKI_DRM_VBLANK_EVENT = 0x4000000, /**< Send event instead of blocking */
+ VKI_DRM_VBLANK_FLIP = 0x8000000, /**< Scheduled buffer swap should flip */
+ VKI_DRM_VBLANK_NEXTONMISS = 0x10000000, /**< If missed, wait for next vblank */
+ VKI_DRM_VBLANK_SECONDARY = 0x20000000, /**< Secondary display controller */
+ VKI_DRM_VBLANK_SIGNAL = 0x40000000 /**< Send signal instead of blocking, unsupported */
+};
+
+struct vki_drm_wait_vblank_request {
+ enum vki_drm_vblank_seq_type type;
+ unsigned int sequence;
+ unsigned long signal;
+};
+
+struct vki_drm_wait_vblank_reply {
+ enum vki_drm_vblank_seq_type type;
+ unsigned int sequence;
+ long tval_sec;
+ long tval_usec;
+};
+
+/**
+ * DRM_IOCTL_WAIT_VBLANK ioctl argument type.
+ *
+ * \sa drmWaitVBlank().
+ */
+union vki_drm_wait_vblank {
+ struct vki_drm_wait_vblank_request request;
+ struct vki_drm_wait_vblank_reply reply;
+};
+
+// drm.h
+typedef unsigned int vki_drm_context_t;
+enum vki_drm_ctx_flags {
+ VKI_DRM_CONTEXT_PRESERVED = 0x01,
+ VKI_DRM_CONTEXT_2DONLY = 0x02
+};
+struct vki_drm_ctx {
+ vki_drm_context_t handle;
+ enum vki_drm_ctx_flags flags;
+};
+
+struct vki_drm_ctx_res {
+ int count;
+ struct vki_drm_ctx *contexts;
+};
+
+/* Because of a bug in libdrm, some ioctl numbers were passed as int; when
+ * cast to unsigned long, the most significant bit was sign-extended into the
+ * upper word.  The kernel's DRM layer only looks at the lower part, so
+ * nobody noticed. */
+#if VG_WORDSIZE == 8
+#define VKI_DRM_IOCTL_DOUBLE(X) case (unsigned long)(int)(X): case X
+#else
+#define VKI_DRM_IOCTL_DOUBLE(X) case X
+#endif
+
+#define VKI_DRM_IOCTL_BASE 'd'
+#define VKI_DRM_IO(nr) _VKI_IO(VKI_DRM_IOCTL_BASE,nr)
+#define VKI_DRM_IOR(nr,type) _VKI_IOR(VKI_DRM_IOCTL_BASE,nr,type)
+#define VKI_DRM_IOW(nr,type) _VKI_IOW(VKI_DRM_IOCTL_BASE,nr,type)
+#define VKI_DRM_IOWR(nr,type) _VKI_IOWR(VKI_DRM_IOCTL_BASE,nr,type)
+
+#define VKI_DRM_IOCTL_VERSION VKI_DRM_IOWR(0x00, struct vki_drm_version)
+#define VKI_DRM_IOCTL_GET_UNIQUE VKI_DRM_IOWR(0x01, struct vki_drm_unique)
+#define VKI_DRM_IOCTL_GET_MAGIC VKI_DRM_IOR( 0x02, struct vki_drm_auth)
+#define VKI_DRM_IOCTL_IRQ_BUSID VKI_DRM_IOWR(0x03, struct vki_drm_irq_busid)
+//#define VKI_DRM_IOCTL_GET_MAP VKI_DRM_IOWR(0x04, struct vki_drm_map)
+//#define VKI_DRM_IOCTL_GET_CLIENT VKI_DRM_IOWR(0x05, struct vki_drm_client)
+//#define VKI_DRM_IOCTL_GET_STATS VKI_DRM_IOR( 0x06, struct vki_drm_stats)
+//#define VKI_DRM_IOCTL_SET_VERSION VKI_DRM_IOWR(0x07, struct vki_drm_set_version)
+//#define VKI_DRM_IOCTL_MODESET_CTL VKI_DRM_IOW(0x08, struct vki_drm_modeset_ctl)
+#define VKI_DRM_IOCTL_GEM_CLOSE VKI_DRM_IOW (0x09, struct vki_drm_gem_close)
+#define VKI_DRM_IOCTL_GEM_FLINK VKI_DRM_IOWR(0x0a, struct vki_drm_gem_flink)
+#define VKI_DRM_IOCTL_GEM_OPEN VKI_DRM_IOWR(0x0b, struct vki_drm_gem_open)
+//#define VKI_DRM_IOCTL_GET_CAP VKI_DRM_IOWR(0x0c, struct vki_drm_get_cap)
+
+//#define VKI_DRM_IOCTL_SET_UNIQUE VKI_DRM_IOW( 0x10, struct vki_drm_unique)
+//#define VKI_DRM_IOCTL_AUTH_MAGIC VKI_DRM_IOW( 0x11, struct vki_drm_auth)
+//#define VKI_DRM_IOCTL_BLOCK VKI_DRM_IOWR(0x12, struct vki_drm_block)
+//#define VKI_DRM_IOCTL_UNBLOCK VKI_DRM_IOWR(0x13, struct vki_drm_block)
+//#define VKI_DRM_IOCTL_CONTROL VKI_DRM_IOW( 0x14, struct vki_drm_control)
+//#define VKI_DRM_IOCTL_ADD_MAP VKI_DRM_IOWR(0x15, struct vki_drm_map)
+//#define VKI_DRM_IOCTL_ADD_BUFS VKI_DRM_IOWR(0x16, struct vki_drm_buf_desc)
+//#define VKI_DRM_IOCTL_MARK_BUFS VKI_DRM_IOW( 0x17, struct vki_drm_buf_desc)
+//#define VKI_DRM_IOCTL_INFO_BUFS VKI_DRM_IOWR(0x18, struct vki_drm_buf_info)
+//#define VKI_DRM_IOCTL_MAP_BUFS VKI_DRM_IOWR(0x19, struct vki_drm_buf_map)
+//#define VKI_DRM_IOCTL_FREE_BUFS VKI_DRM_IOW( 0x1a, struct vki_drm_buf_free)
+
+//#define VKI_DRM_IOCTL_RM_MAP VKI_DRM_IOW( 0x1b, struct vki_drm_map)
+
+//#define VKI_DRM_IOCTL_SET_SAREA_CTX VKI_DRM_IOW( 0x1c, struct vki_drm_ctx_priv_map)
+//#define VKI_DRM_IOCTL_GET_SAREA_CTX VKI_DRM_IOWR(0x1d, struct vki_drm_ctx_priv_map)
+
+#define VKI_DRM_IOCTL_SET_MASTER VKI_DRM_IO(0x1e)
+#define VKI_DRM_IOCTL_DROP_MASTER VKI_DRM_IO(0x1f)
+
+#define VKI_DRM_IOCTL_ADD_CTX VKI_DRM_IOWR(0x20, struct vki_drm_ctx)
+#define VKI_DRM_IOCTL_RM_CTX VKI_DRM_IOWR(0x21, struct vki_drm_ctx)
+#define VKI_DRM_IOCTL_MOD_CTX VKI_DRM_IOW( 0x22, struct vki_drm_ctx)
+#define VKI_DRM_IOCTL_GET_CTX VKI_DRM_IOWR(0x23, struct vki_drm_ctx)
+#define VKI_DRM_IOCTL_SWITCH_CTX VKI_DRM_IOW( 0x24, struct vki_drm_ctx)
+#define VKI_DRM_IOCTL_NEW_CTX VKI_DRM_IOW( 0x25, struct vki_drm_ctx)
+#define VKI_DRM_IOCTL_RES_CTX VKI_DRM_IOWR(0x26, struct vki_drm_ctx_res)
+//#define VKI_DRM_IOCTL_ADD_DRAW VKI_DRM_IOWR(0x27, struct vki_drm_draw)
+//#define VKI_DRM_IOCTL_RM_DRAW VKI_DRM_IOWR(0x28, struct vki_drm_draw)
+//#define VKI_DRM_IOCTL_DMA VKI_DRM_IOWR(0x29, struct vki_drm_dma)
+//#define VKI_DRM_IOCTL_LOCK VKI_DRM_IOW( 0x2a, struct vki_drm_lock)
+//#define VKI_DRM_IOCTL_UNLOCK VKI_DRM_IOW( 0x2b, struct vki_drm_lock)
+//#define VKI_DRM_IOCTL_FINISH VKI_DRM_IOW( 0x2c, struct vki_drm_lock)
+
+//#define VKI_DRM_IOCTL_AGP_ACQUIRE VKI_DRM_IO( 0x30)
+//#define VKI_DRM_IOCTL_AGP_RELEASE VKI_DRM_IO( 0x31)
+//#define VKI_DRM_IOCTL_AGP_ENABLE VKI_DRM_IOW( 0x32, struct vki_drm_agp_mode)
+//#define VKI_DRM_IOCTL_AGP_INFO VKI_DRM_IOR( 0x33, struct vki_drm_agp_info)
+//#define VKI_DRM_IOCTL_AGP_ALLOC VKI_DRM_IOWR(0x34, struct vki_drm_agp_buffer)
+//#define VKI_DRM_IOCTL_AGP_FREE VKI_DRM_IOW( 0x35, struct vki_drm_agp_buffer)
+//#define VKI_DRM_IOCTL_AGP_BIND VKI_DRM_IOW( 0x36, struct vki_drm_agp_binding)
+//#define VKI_DRM_IOCTL_AGP_UNBIND VKI_DRM_IOW( 0x37, struct vki_drm_agp_binding)
+
+//#define VKI_DRM_IOCTL_SG_ALLOC VKI_DRM_IOWR(0x38, struct vki_drm_scatter_gather)
+//#define VKI_DRM_IOCTL_SG_FREE VKI_DRM_IOW( 0x39, struct vki_drm_scatter_gather)
+
+#define VKI_DRM_IOCTL_WAIT_VBLANK VKI_DRM_IOWR(0x3a, union vki_drm_wait_vblank)
+
+#define VKI_DRM_IOCTL_UPDATE_DRAW VKI_DRM_IOW(0x3f, struct vki_drm_update_draw)
+
+#define VKI_DRM_IOCTL_MODE_GETRESOURCES VKI_DRM_IOWR(0xA0, struct vki_drm_mode_card_res)
+#define VKI_DRM_IOCTL_MODE_GETCRTC VKI_DRM_IOWR(0xA1, struct vki_drm_mode_crtc)
+#define VKI_DRM_IOCTL_MODE_SETCRTC VKI_DRM_IOWR(0xA2, struct vki_drm_mode_crtc)
+#define VKI_DRM_IOCTL_MODE_CURSOR VKI_DRM_IOWR(0xA3, struct vki_drm_mode_cursor)
+#define VKI_DRM_IOCTL_MODE_GETGAMMA VKI_DRM_IOWR(0xA4, struct vki_drm_mode_crtc_lut)
+#define VKI_DRM_IOCTL_MODE_SETGAMMA VKI_DRM_IOWR(0xA5, struct vki_drm_mode_crtc_lut)
+#define VKI_DRM_IOCTL_MODE_GETENCODER VKI_DRM_IOWR(0xA6, struct vki_drm_mode_get_encoder)
+#define VKI_DRM_IOCTL_MODE_GETCONNECTOR VKI_DRM_IOWR(0xA7, struct vki_drm_mode_get_connector)
+#define VKI_DRM_IOCTL_MODE_ATTACHMODE VKI_DRM_IOWR(0xA8, struct vki_drm_mode_mode_cmd)
+#define VKI_DRM_IOCTL_MODE_DETACHMODE VKI_DRM_IOWR(0xA9, struct vki_drm_mode_mode_cmd)
+
+#define VKI_DRM_IOCTL_MODE_GETPROPERTY VKI_DRM_IOWR(0xAA, struct vki_drm_mode_get_property)
+#define VKI_DRM_IOCTL_MODE_SETPROPERTY VKI_DRM_IOWR(0xAB, struct vki_drm_mode_connector_set_property)
+#define VKI_DRM_IOCTL_MODE_GETPROPBLOB VKI_DRM_IOWR(0xAC, struct vki_drm_mode_get_blob)
+#define VKI_DRM_IOCTL_MODE_GETFB VKI_DRM_IOWR(0xAD, struct vki_drm_mode_fb_cmd)
+#define VKI_DRM_IOCTL_MODE_ADDFB VKI_DRM_IOWR(0xAE, struct vki_drm_mode_fb_cmd)
+#define VKI_DRM_IOCTL_MODE_RMFB VKI_DRM_IOWR(0xAF, unsigned int)
+#define VKI_DRM_IOCTL_MODE_PAGE_FLIP VKI_DRM_IOWR(0xB0, struct vki_drm_mode_crtc_page_flip)
+#define VKI_DRM_IOCTL_MODE_DIRTYFB VKI_DRM_IOWR(0xB1, struct vki_drm_mode_fb_dirty_cmd)
+
+#define VKI_DRM_IOCTL_MODE_CREATE_DUMB VKI_DRM_IOWR(0xB2, struct vki_drm_mode_create_dumb)
+#define VKI_DRM_IOCTL_MODE_MAP_DUMB VKI_DRM_IOWR(0xB3, struct vki_drm_mode_map_dumb)
+#define VKI_DRM_IOCTL_MODE_DESTROY_DUMB VKI_DRM_IOWR(0xB4, struct vki_drm_mode_destroy_dumb)
+
+#define VKI_DRM_COMMAND_BASE 0x40
+//#define VKI_DRM_COMMAND_END 0xA0
+
+// xf86drm.c: drmCommandWriteRead
+#define VKI_DRM_COMMAND(dir, index, size) \
+ VKI_DRM_IOC(dir, VKI_DRM_IOCTL_BASE, VKI_DRM_COMMAND_BASE + index, size)
+
+#define VKI_DRM_COMMAND_RW(index, size) \
+ VKI_DRM_COMMAND(VKI_DRM_IOC_READ | VKI_DRM_IOC_WRITE, index, size)
+
+#define VKI_DRM_COMMAND_W(index, size) \
+ VKI_DRM_COMMAND(VKI_DRM_IOC_WRITE, index, size)
+
+#define VKI_DRM_COMMAND_R(index, size) \
+ VKI_DRM_COMMAND(VKI_DRM_IOC_READ, index, size)
+
+// nouveau_drm.h
+#define VKI_DRM_NOUVEAU_GETPARAM 0x00
+#define VKI_DRM_NOUVEAU_SETPARAM 0x01
+#define VKI_DRM_NOUVEAU_CHANNEL_ALLOC 0x02
+#define VKI_DRM_NOUVEAU_CHANNEL_FREE 0x03
+#define VKI_DRM_NOUVEAU_GROBJ_ALLOC 0x04
+#define VKI_DRM_NOUVEAU_NOTIFIEROBJ_ALLOC 0x05
+#define VKI_DRM_NOUVEAU_GPUOBJ_FREE 0x06
+#define VKI_DRM_NOUVEAU_GEM_NEW 0x40
+#define VKI_DRM_NOUVEAU_GEM_PUSHBUF 0x41
+#define VKI_DRM_NOUVEAU_GEM_CPU_PREP 0x42
+#define VKI_DRM_NOUVEAU_GEM_CPU_FINI 0x43
+#define VKI_DRM_NOUVEAU_GEM_INFO 0x44
+
+struct vki_drm_nouveau_channel_alloc {
+ __vki_u32 fb_ctxdma_handle;
+ __vki_u32 tt_ctxdma_handle;
+
+ int channel;
+ __vki_u32 pushbuf_domains;
+
+ /* Notifier memory */
+ __vki_u32 notifier_handle;
+
+ /* DRM-enforced subchannel assignments */
+ struct {
+ __vki_u32 handle;
+ __vki_u32 grclass;
+ } subchan[8];
+ __vki_u32 nr_subchan;
+};
+
+struct vki_drm_nouveau_channel_free {
+ int channel;
+};
+
+struct vki_drm_nouveau_grobj_alloc {
+ int channel;
+ __vki_u32 handle;
+ int class;
+};
+
+struct vki_drm_nouveau_notifierobj_alloc {
+ __vki_u32 channel;
+ __vki_u32 handle;
+ __vki_u32 size;
+ __vki_u32 offset;
+};
+
+struct vki_drm_nouveau_gpuobj_free {
+ int channel;
+ __vki_u32 handle;
+};
+
+struct vki_drm_nouveau_getparam {
+ __vki_u64 param;
+ __vki_u64 value;
+};
+
+struct vki_drm_nouveau_setparam {
+ __vki_u64 param;
+ __vki_u64 value;
+};
+
+struct vki_drm_nouveau_gem_info {
+ __vki_u32 handle;
+ __vki_u32 domain;
+ __vki_u64 size;
+ __vki_u64 offset;
+ __vki_u64 map_handle;
+ __vki_u32 tile_mode;
+ __vki_u32 tile_flags;
+};
+
+struct vki_drm_nouveau_gem_new {
+ struct vki_drm_nouveau_gem_info info;
+ __vki_u32 channel_hint;
+ __vki_u32 align;
+};
+
+
+struct vki_drm_nouveau_gem_pushbuf_bo_presumed {
+ __vki_u32 valid;
+ __vki_u32 domain;
+ __vki_u64 offset;
+};
+
+struct vki_drm_nouveau_gem_pushbuf_bo {
+ __vki_u64 user_priv;
+ __vki_u32 handle;
+ __vki_u32 read_domains;
+ __vki_u32 write_domains;
+ __vki_u32 valid_domains;
+ struct vki_drm_nouveau_gem_pushbuf_bo_presumed presumed;
+};
+
+struct vki_drm_nouveau_gem_pushbuf_push {
+ __vki_u32 bo_index;
+ __vki_u32 pad;
+ __vki_u64 offset;
+ __vki_u64 length;
+};
+
+struct vki_drm_nouveau_gem_pushbuf {
+ __vki_u32 channel;
+ __vki_u32 nr_buffers;
+ __vki_u64 buffers;
+ __vki_u32 nr_relocs;
+ __vki_u32 nr_push;
+ __vki_u64 relocs;
+ __vki_u64 push;
+ __vki_u32 suffix0;
+ __vki_u32 suffix1;
+ __vki_u64 vram_available;
+ __vki_u64 gart_available;
+};
+
+struct vki_drm_nouveau_gem_cpu_prep {
+ __vki_u32 handle;
+ __vki_u32 flags;
+};
+
+struct vki_drm_nouveau_gem_cpu_fini {
+ __vki_u32 handle;
+};
+
+#define VKI_DRM_IOCTL_NOUVEAU_GETPARAM VKI_DRM_COMMAND_RW(VKI_DRM_NOUVEAU_GETPARAM, sizeof(struct vki_drm_nouveau_getparam))
+#define VKI_DRM_IOCTL_NOUVEAU_SETPARAM VKI_DRM_COMMAND_RW(VKI_DRM_NOUVEAU_SETPARAM, sizeof(struct vki_drm_nouveau_setparam))
+#define VKI_DRM_IOCTL_NOUVEAU_CHANNEL_ALLOC VKI_DRM_COMMAND_RW(VKI_DRM_NOUVEAU_CHANNEL_ALLOC, sizeof(struct vki_drm_nouveau_channel_alloc))
+#define VKI_DRM_IOCTL_NOUVEAU_CHANNEL_FREE VKI_DRM_COMMAND_W (VKI_DRM_NOUVEAU_CHANNEL_FREE, sizeof(struct vki_drm_nouveau_channel_free))
+#define VKI_DRM_IOCTL_NOUVEAU_GROBJ_ALLOC VKI_DRM_COMMAND_W (VKI_DRM_NOUVEAU_GROBJ_ALLOC, sizeof(struct vki_drm_nouveau_grobj_alloc))
+#define VKI_DRM_IOCTL_NOUVEAU_NOTIFIEROBJ_ALLOC VKI_DRM_COMMAND_RW(VKI_DRM_NOUVEAU_NOTIFIEROBJ_ALLOC, sizeof(struct vki_drm_nouveau_notifierobj_alloc))
+#define VKI_DRM_IOCTL_NOUVEAU_GPUOBJ_FREE VKI_DRM_COMMAND_W (VKI_DRM_NOUVEAU_GPUOBJ_FREE, sizeof(struct vki_drm_nouveau_gpuobj_free))
+#define VKI_DRM_IOCTL_NOUVEAU_GEM_NEW VKI_DRM_COMMAND_RW(VKI_DRM_NOUVEAU_GEM_NEW, sizeof(struct vki_drm_nouveau_gem_new))
+#define VKI_DRM_IOCTL_NOUVEAU_GEM_PUSHBUF VKI_DRM_COMMAND_RW(VKI_DRM_NOUVEAU_GEM_PUSHBUF, sizeof(struct vki_drm_nouveau_gem_pushbuf))
+#define VKI_DRM_IOCTL_NOUVEAU_GEM_CPU_PREP VKI_DRM_COMMAND_W (VKI_DRM_NOUVEAU_GEM_CPU_PREP, sizeof(struct vki_drm_nouveau_gem_cpu_prep))
+#define VKI_DRM_IOCTL_NOUVEAU_GEM_CPU_FINI VKI_DRM_COMMAND_W (VKI_DRM_NOUVEAU_GEM_CPU_FINI, sizeof(struct vki_drm_nouveau_gem_cpu_fini))
+#define VKI_DRM_IOCTL_NOUVEAU_GEM_INFO VKI_DRM_COMMAND_RW(VKI_DRM_NOUVEAU_GEM_INFO, sizeof(struct vki_drm_nouveau_gem_info))
+
#endif // __VKI_LINUX_H
/*--------------------------------------------------------------------*/
Index: coregrind/m_syswrap/syswrap-linux.c
===================================================================
--- coregrind/m_syswrap/syswrap-linux.c (wersja 11797)
+++ coregrind/m_syswrap/syswrap-linux.c (kopia robocza)
@@ -59,6 +59,7 @@
#include "priv_syswrap-generic.h"
#include "priv_syswrap-linux.h"
+#define max(a,b) ((a)>(b)?(a):(b))
// Run a thread from beginning to end and return the thread's
// scheduler-return-code.
@@ -4880,7 +4881,684 @@
sizeof(struct vki_sockaddr));
}
break;
+ case VKI_DRM_IOCTL_VERSION:
+ if (ARG3) {
+ struct vki_drm_version *req = (struct vki_drm_version *)ARG3;
+ PRE_FIELD_WRITE("ioctl(DRM_VERSION).version_major", req->version_major);
+ PRE_FIELD_WRITE("ioctl(DRM_VERSION).version_minor", req->version_minor);
+ PRE_FIELD_WRITE("ioctl(DRM_VERSION).version_patchlevel", req->version_patchlevel);
+
+ PRE_FIELD_READ("ioctl(DRM_VERSION).name_len", req->name_len);
+ PRE_FIELD_READ("ioctl(DRM_VERSION).date_len", req->date_len);
+ PRE_FIELD_READ("ioctl(DRM_VERSION).desc_len", req->desc_len);
+
+ PRE_MEM_WRITE("ioctl(DRM_VERSION).name[]", (Addr)req->name, req->name_len);
+ PRE_MEM_WRITE("ioctl(DRM_VERSION).date[]", (Addr)req->date, req->date_len);
+ PRE_MEM_WRITE("ioctl(DRM_VERSION).desc[]", (Addr)req->desc, req->desc_len);
+
+ PRE_FIELD_WRITE("ioctl(DRM_VERSION).name_len", req->name_len);
+ PRE_FIELD_WRITE("ioctl(DRM_VERSION).date_len", req->date_len);
+ PRE_FIELD_WRITE("ioctl(DRM_VERSION).desc_len", req->desc_len);
+ }
+ break;
+ case VKI_DRM_IOCTL_GET_UNIQUE:
+ if (ARG3) {
+ struct vki_drm_unique *req = (struct vki_drm_unique *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_UNIQUE).unique_len", req->unique_len);
+ if (req->unique_len) {
+ PRE_FIELD_READ("ioctl(DRM_UNIQUE).unique", req->unique);
+ PRE_MEM_WRITE("ioctl(DRM_UNIQUE).unique[]", (Addr)req->unique, req->unique_len);
+ }
+ PRE_FIELD_WRITE("ioctl(DRM_UNIQUE).unique_len", req->unique_len);
+ }
+ break;
+ case VKI_DRM_IOCTL_GET_MAGIC:
+ if (ARG3) {
+ struct vki_drm_auth *req = (struct vki_drm_auth *)ARG3;
+
+ PRE_FIELD_WRITE("ioctl(DRM_GET_MAGIC).magic", req->magic);
+ }
+ break;
+ case VKI_DRM_IOCTL_IRQ_BUSID:
+ if (ARG3) {
+ struct vki_drm_irq_busid *req = (struct vki_drm_irq_busid *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_IRQ_BUSID).busnum", req->busnum);
+ PRE_FIELD_READ("ioctl(DRM_IRQ_BUSID).devnum", req->devnum);
+ PRE_FIELD_READ("ioctl(DRM_IRQ_BUSID).funcnum", req->funcnum);
+ PRE_FIELD_WRITE("ioctl(DRM_IRQ_BUSID).irq", req->irq);
+ }
+ break;
+ case VKI_DRM_IOCTL_GEM_CLOSE:
+ if (ARG3) {
+ struct vki_drm_gem_close *req = (struct vki_drm_gem_close *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_GEM_CLOSE).handle", req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_GEM_FLINK:
+ if (ARG3) {
+ struct vki_drm_gem_flink *req = (struct vki_drm_gem_flink *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_GEM_FLINK).handle", req->handle);
+ PRE_FIELD_WRITE("ioctl(DRM_GEM_FLINK).name", req->name);
+ }
+ break;
+ case VKI_DRM_IOCTL_GEM_OPEN:
+ if (ARG3) {
+ struct vki_drm_gem_open *req = (struct vki_drm_gem_open *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_GEM_OPEN).name", req->name);
+ PRE_FIELD_WRITE("ioctl(DRM_GEM_OPEN).handle", req->handle);
+ PRE_FIELD_WRITE("ioctl(DRM_GEM_OPEN).size", req->size);
+ }
+ break;
+ case VKI_DRM_IOCTL_SET_MASTER:
+ break;
+ case VKI_DRM_IOCTL_DROP_MASTER:
+ break;
+ case VKI_DRM_IOCTL_ADD_CTX:
+ if (ARG3) {
+ struct vki_drm_ctx *req = (struct vki_drm_ctx *)ARG3;
+
+ PRE_FIELD_WRITE("ioctl(DRM_ADD_CTX).handle", req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_RM_CTX:
+ if (ARG3) {
+ struct vki_drm_ctx *req = (struct vki_drm_ctx *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_RM_CTX).handle", req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_MOD_CTX:
+ break;
+ case VKI_DRM_IOCTL_GET_CTX:
+ if (ARG3) {
+ struct vki_drm_ctx *req = (struct vki_drm_ctx *)ARG3;
+
+ PRE_FIELD_WRITE("ioctl(DRM_GET_CTX).flags", req->flags);
+ }
+ break;
+ case VKI_DRM_IOCTL_SWITCH_CTX:
+ if (ARG3) {
+ struct vki_drm_ctx *req = (struct vki_drm_ctx *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_SWITCH_CTX).handle", req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_NEW_CTX:
+ if (ARG3) {
+ struct vki_drm_ctx *req = (struct vki_drm_ctx *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NEW_CTX).handle", req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_RES_CTX:
+ if (ARG3) {
+ struct vki_drm_ctx_res *req = (struct vki_drm_ctx_res *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_RES_CTX).count", req->count);
+ if (req->count) {
+ PRE_FIELD_READ("ioctl(DRM_RES_CTX).contexts", req->contexts);
+
+ PRE_MEM_WRITE("ioctl(DRM_RES_CTX).contexts[]", (Addr)req->contexts,
+ req->count * sizeof (req->contexts[0]));
+ }
+ PRE_FIELD_WRITE("ioctl(DRM_RES_CTX).count", req->count);
+ }
+ break;
+ case VKI_DRM_IOCTL_WAIT_VBLANK:
+ if (ARG3) {
+ union vki_drm_wait_vblank *req = (union vki_drm_wait_vblank *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_WAIT_VBLANK).request.type", req->request.type);
+ PRE_FIELD_READ("ioctl(DRM_WAIT_VBLANK).request.sequence", req->request.sequence);
+
+ if (req->request.type & VKI_DRM_VBLANK_EVENT) {
+ PRE_FIELD_READ("ioctl(DRM_WAIT_VBLANK).request.signal", req->request.signal);
+ } else {
+ PRE_FIELD_WRITE("ioctl(DRM_WAIT_VBLANK).reply.tval_sec", req->reply.tval_sec);
+ PRE_FIELD_WRITE("ioctl(DRM_WAIT_VBLANK).reply.tval_usec", req->reply.tval_usec);
+ }
+
+ PRE_FIELD_WRITE("ioctl(DRM_WAIT_VBLANK).reply.sequence", req->reply.sequence);
+ }
+ break;
+ case VKI_DRM_IOCTL_UPDATE_DRAW:
+ break;
+ case VKI_DRM_IOCTL_MODE_GETRESOURCES:
+ if (ARG3) {
+ struct vki_drm_mode_card_res *req = (struct vki_drm_mode_card_res *)ARG3;
+
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETRESOURCES).min_width", req->min_width);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETRESOURCES).max_width", req->max_width);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETRESOURCES).min_height", req->min_height);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETRESOURCES).max_height", req->max_height);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETRESOURCES).count_fbs", req->count_fbs);
+ if (req->count_fbs) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETRESOURCES).fb_id_ptr", req->fb_id_ptr);
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETRESOURCES).fb_id_ptr[]", (Addr)req->fb_id_ptr,
+ req->count_fbs * sizeof (__vki_u32));
+ }
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETRESOURCES).count_fbs", req->count_fbs);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETRESOURCES).count_crtcs", req->count_crtcs);
+ if (req->count_crtcs) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETRESOURCES).crtc_id_ptr", req->crtc_id_ptr);
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETRESOURCES).crtc_id_ptr[]", (Addr)req->crtc_id_ptr,
+ req->count_crtcs * sizeof (__vki_u32));
+ }
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETRESOURCES).count_crtcs", req->count_crtcs);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETRESOURCES).count_encoders", req->count_encoders);
+ if (req->count_encoders) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETRESOURCES).encoder_id_ptr", req->encoder_id_ptr);
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETRESOURCES).encoder_id_ptr[]", (Addr)req->encoder_id_ptr,
+ req->count_encoders * sizeof (__vki_u32));
+ }
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETRESOURCES).count_encoders", req->count_encoders);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETRESOURCES).count_connectors", req->count_connectors);
+ if (req->count_connectors) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETRESOURCES).connector_id_ptr", req->connector_id_ptr);
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETRESOURCES).connector_id_ptr[]", (Addr)req->connector_id_ptr,
+ req->count_connectors * sizeof (__vki_u32));
+ }
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETRESOURCES).count_connectors", req->count_connectors);
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_GETCRTC):
+ if (ARG3) {
+ struct vki_drm_mode_crtc *req = (struct vki_drm_mode_crtc *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETCRTC).crtc_id", req->crtc_id);
+
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).x", req->x);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).y", req->y);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).gamma_size", req->gamma_size);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).fb_id", req->fb_id);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode_valid", req->mode_valid);
+
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.clock", req->mode.clock);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.hdisplay", req->mode.hdisplay);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.hsync_start", req->mode.hsync_start);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.hsync_end", req->mode.hsync_end);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.htotal", req->mode.htotal);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.hskew", req->mode.hskew);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.vdisplay", req->mode.vdisplay);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.vsync_start", req->mode.vsync_start);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.vsync_end", req->mode.vsync_end);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.vtotal", req->mode.vtotal);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.vscan", req->mode.vscan);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.vrefresh", req->mode.vrefresh);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.flags", req->mode.flags);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.type", req->mode.type);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCRTC).mode.name", req->mode.name);
+
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_SETCRTC):
+ if (ARG3) {
+ struct vki_drm_mode_crtc *req = (struct vki_drm_mode_crtc *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).crtc_id", req->crtc_id);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode_valid", req->mode_valid);
+
+ if (req->mode_valid) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).fb_id", req->fb_id);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.clock", req->mode.clock);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.hdisplay", req->mode.hdisplay);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.hsync_start", req->mode.hsync_start);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.hsync_end", req->mode.hsync_end);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.htotal", req->mode.htotal);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.hskew", req->mode.hskew);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.vdisplay", req->mode.vdisplay);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.vsync_start", req->mode.vsync_start);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.vsync_end", req->mode.vsync_end);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.vtotal", req->mode.vtotal);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.vscan", req->mode.vscan);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.vrefresh", req->mode.vrefresh);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.flags", req->mode.flags);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.type", req->mode.type);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).mode.name", req->mode.name);
+
+ }
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).count_connectors", req->count_connectors);
+ if (req->count_connectors > 0) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).set_connectors_ptr", req->set_connectors_ptr);
+ PRE_MEM_READ("ioctl(DRM_MODE_SETCRTC).set_connectors_ptr[]",
+ (Addr)req->set_connectors_ptr, req->count_connectors * sizeof(__vki_u32));
+ }
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).x", req->x);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETCRTC).y", req->y);
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_CURSOR):
+ if (ARG3) {
+ struct vki_drm_mode_cursor *req = (struct vki_drm_mode_cursor *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_CURSOR).flags", req->flags);
+ if (req->flags)
+ PRE_FIELD_READ("ioctl(DRM_MODE_CURSOR).crtc_id", req->crtc_id);
+
+ if (req->flags & VKI_DRM_MODE_CURSOR_BO) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_CURSOR).handle", req->handle);
+ PRE_FIELD_READ("ioctl(DRM_MODE_CURSOR).width", req->width);
+ PRE_FIELD_READ("ioctl(DRM_MODE_CURSOR).height", req->height);
+ }
+
+ if (req->flags & VKI_DRM_MODE_CURSOR_MOVE) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_CURSOR).x", req->x);
+ PRE_FIELD_READ("ioctl(DRM_MODE_CURSOR).y", req->y);
+ }
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_GETGAMMA):
+ if (ARG3) {
+ struct vki_drm_mode_crtc_lut *req = (struct vki_drm_mode_crtc_lut *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETGAMMA).crtc_id", req->crtc_id);
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETGAMMA).gamma_size", req->gamma_size);
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETGAMMA).red", req->red);
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETGAMMA).green", req->green);
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETGAMMA).blue", req->blue);
+
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETGAMMA).red[]", (Addr)req->red, req->gamma_size * sizeof(__vki_u16));
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETGAMMA).green[]", (Addr)req->green, req->gamma_size * sizeof(__vki_u16));
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETGAMMA).blue[]", (Addr)req->blue, req->gamma_size * sizeof(__vki_u16));
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_SETGAMMA):
+ if (ARG3) {
+ struct vki_drm_mode_crtc_lut *req = (struct vki_drm_mode_crtc_lut *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETGAMMA).crtc_id", req->crtc_id);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETGAMMA).gamma_size", req->gamma_size);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETGAMMA).red", req->red);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETGAMMA).green", req->green);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETGAMMA).blue", req->blue);
+
+ PRE_MEM_READ("ioctl(DRM_MODE_SETGAMMA).red[]", (Addr)req->red, req->gamma_size * sizeof(__vki_u16));
+ PRE_MEM_READ("ioctl(DRM_MODE_SETGAMMA).green[]", (Addr)req->green, req->gamma_size * sizeof(__vki_u16));
+ PRE_MEM_READ("ioctl(DRM_MODE_SETGAMMA).blue[]", (Addr)req->blue, req->gamma_size * sizeof(__vki_u16));
+ }
+ break;
+ case VKI_DRM_IOCTL_MODE_GETENCODER:
+ if (ARG3) {
+ struct vki_drm_mode_get_encoder *req = (struct vki_drm_mode_get_encoder *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETENCODER).encoder_id", req->encoder_id);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETENCODER).crtc_id", req->crtc_id);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETENCODER).encoder_type", req->encoder_type);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETENCODER).encoder_id", req->encoder_id);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETENCODER).possible_clones", req->possible_clones);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETENCODER).possible_crtcs", req->possible_crtcs);
+ }
+ break;
+ case VKI_DRM_IOCTL_MODE_GETCONNECTOR:
+ if (ARG3) {
+ struct vki_drm_mode_get_connector *req = (struct vki_drm_mode_get_connector *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETCONNECTOR).connector_id", req->connector_id);
+
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).connector_id", req->connector_id);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).connector_type", req->connector_type);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).connector_type_id", req->connector_type_id);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).mm_width", req->mm_width);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).mm_height", req->mm_height);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).subpixel", req->subpixel);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).connection", req->connection);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).encoder_id", req->encoder_id);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETCONNECTOR).count_modes", req->count_modes);
+ if (req->count_modes > 0) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETCONNECTOR).modes_ptr", req->modes_ptr);
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETCONNECTOR).modes_ptr[]", (Addr)req->modes_ptr,
+ req->count_modes * sizeof(struct vki_drm_mode_modeinfo));
+ }
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).count_modes", req->count_modes);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETCONNECTOR).count_props", req->count_props);
+ if (req->count_props > 0) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETCONNECTOR).props_ptr", req->props_ptr);
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETCONNECTOR).prop_values_ptr", req->prop_values_ptr);
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETCONNECTOR).props_ptr[]", (Addr)req->props_ptr,
+ req->count_props * sizeof(__vki_u32));
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETCONNECTOR).prop_values_ptr[]", (Addr)req->prop_values_ptr,
+ req->count_props * sizeof(__vki_u64));
+ }
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).count_props", req->count_props);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETCONNECTOR).count_encoders", req->count_encoders);
+ if (req->count_encoders > 0) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETCONNECTOR).encoders_ptr", req->encoders_ptr);
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETCONNECTOR).encoders_ptr[]", (Addr)req->encoders_ptr,
+ req->count_encoders * sizeof(__vki_u32));
+ }
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETCONNECTOR).count_encoders", req->count_encoders);
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_ATTACHMODE):
+ if (ARG3) {
+ struct vki_drm_mode_mode_cmd *req = (struct vki_drm_mode_mode_cmd *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).connector_id", req->connector_id);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.clock", req->mode.clock);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.hdisplay", req->mode.hdisplay);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.hsync_start", req->mode.hsync_start);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.hsync_end", req->mode.hsync_end);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.htotal", req->mode.htotal);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.hskew", req->mode.hskew);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.vdisplay", req->mode.vdisplay);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.vsync_start", req->mode.vsync_start);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.vsync_end", req->mode.vsync_end);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.vtotal", req->mode.vtotal);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.vscan", req->mode.vscan);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.vrefresh", req->mode.vrefresh);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.flags", req->mode.flags);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.type", req->mode.type);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ATTACHMODE).mode.name", req->mode.name);
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_DETACHMODE):
+ if (ARG3) {
+ struct vki_drm_mode_mode_cmd *req = (struct vki_drm_mode_mode_cmd *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).connector_id", req->connector_id);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.clock", req->mode.clock);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.hdisplay", req->mode.hdisplay);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.hsync_start", req->mode.hsync_start);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.hsync_end", req->mode.hsync_end);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.htotal", req->mode.htotal);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.hskew", req->mode.hskew);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.vdisplay", req->mode.vdisplay);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.vsync_start", req->mode.vsync_start);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.vsync_end", req->mode.vsync_end);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.vtotal", req->mode.vtotal);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.vscan", req->mode.vscan);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.vrefresh", req->mode.vrefresh);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.flags", req->mode.flags);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.type", req->mode.type);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DETACHMODE).mode.name", req->mode.name);
+ }
+ break;
+ case VKI_DRM_IOCTL_MODE_GETPROPERTY:
+ if (ARG3) {
+ struct vki_drm_mode_get_property *req = (struct vki_drm_mode_get_property *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETPROPERTY).prop_id", req->prop_id);
+
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETPROPERTY).name", req->name);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETPROPERTY).flags", req->flags);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETPROPERTY).count_values", req->count_values);
+ // TODO: figure out how many bytes kernel is going to write, based on type of property
+ if (req->count_values)
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETPROPERTY).values_ptr", req->values_ptr);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETPROPERTY).count_values", req->count_values);
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETPROPERTY).count_enum_blobs", req->count_enum_blobs);
+ // TODO: as above
+ if (req->count_enum_blobs) {
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETPROPERTY).enum_blob_ptr", req->enum_blob_ptr);
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETPROPERTY).values_ptr", req->values_ptr);
+ }
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETPROPERTY).count_enum_blobs", req->count_enum_blobs);
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_SETPROPERTY):
+ if (ARG3) {
+ struct vki_drm_mode_connector_set_property *req = (struct vki_drm_mode_connector_set_property *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETPROPERTY).connector_id", req->connector_id);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETPROPERTY).prop_id", req->prop_id);
+ PRE_FIELD_READ("ioctl(DRM_MODE_SETPROPERTY).value", req->value);
+ }
+ break;
+ case VKI_DRM_IOCTL_MODE_GETPROPBLOB:
+ if (ARG3) {
+ struct vki_drm_mode_get_blob *req = (struct vki_drm_mode_get_blob *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETPROPBLOB).blob_id", req->blob_id);
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETPROPBLOB).length", req->length);
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETPROPBLOB).data", req->data);
+
+ PRE_MEM_WRITE("ioctl(DRM_MODE_GETPROPBLOB).data[]", (Addr)req->data, req->length);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETPROPBLOB).length", req->length);
+ }
+ break;
+ case VKI_DRM_IOCTL_MODE_GETFB:
+ if (ARG3) {
+ struct vki_drm_mode_fb_cmd *req = (struct vki_drm_mode_fb_cmd *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_GETFB).fb_id", req->fb_id);
+
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETFB).height", req->height);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETFB).width", req->width);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETFB).depth", req->depth);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETFB).bpp", req->bpp);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETFB).pitch", req->pitch);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_GETFB).handle", req->handle);
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_ADDFB):
+ if (ARG3) {
+ struct vki_drm_mode_fb_cmd *req = (struct vki_drm_mode_fb_cmd *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_ADDFB).width", req->width);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ADDFB).height", req->height);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ADDFB).handle", req->handle);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ADDFB).pitch", req->pitch);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ADDFB).bpp", req->bpp);
+ PRE_FIELD_READ("ioctl(DRM_MODE_ADDFB).depth", req->depth);
+
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_ADDFB).fb_id", req->fb_id);
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_RMFB):
+ PRE_MEM_READ("ioctl(DRM_MODE_RMFB)", ARG3, sizeof(unsigned int));
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_PAGE_FLIP):
+ if (ARG3) {
+ struct vki_drm_mode_crtc_page_flip *req = (struct vki_drm_mode_crtc_page_flip *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_PAGE_FLIP).flags", req->flags);
+ PRE_FIELD_READ("ioctl(DRM_MODE_PAGE_FLIP).reserved", req->reserved);
+ PRE_FIELD_READ("ioctl(DRM_MODE_PAGE_FLIP).crtc_id", req->crtc_id);
+ PRE_FIELD_READ("ioctl(DRM_MODE_PAGE_FLIP).fb_id", req->fb_id);
+ PRE_FIELD_READ("ioctl(DRM_MODE_PAGE_FLIP).user_data", req->user_data);
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_DIRTYFB):
+ if (ARG3) {
+ struct vki_drm_mode_fb_dirty_cmd *req = (struct vki_drm_mode_fb_dirty_cmd *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_DIRTYFB).fb_id", req->fb_id);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DIRTYFB).num_clips", req->num_clips);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DIRTYFB).flags", req->flags);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DIRTYFB).color", req->color);
+ PRE_FIELD_READ("ioctl(DRM_MODE_DIRTYFB).clips_ptr", req->clips_ptr);
+ PRE_MEM_READ("ioctl(DRM_MODE_DIRTYFB).clips_ptr[]", (Addr)req->clips_ptr,
+ req->num_clips * sizeof(struct vki_drm_clip_rect));
+ }
+ break;
+ case VKI_DRM_IOCTL_MODE_CREATE_DUMB:
+ if (ARG3) {
+ struct vki_drm_mode_create_dumb *req = (struct vki_drm_mode_create_dumb *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_CREATE_DUMB).width", req->width);
+ PRE_FIELD_READ("ioctl(DRM_MODE_CREATE_DUMB).height", req->height);
+ PRE_FIELD_READ("ioctl(DRM_MODE_CREATE_DUMB).bpp", req->bpp);
+
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_CREATE_DUMB).pitch", req->pitch);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_CREATE_DUMB).size", req->size);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_CREATE_DUMB).handle", req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_MODE_MAP_DUMB:
+ if (ARG3) {
+ struct vki_drm_mode_map_dumb *req = (struct vki_drm_mode_map_dumb *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_MAP_DUMB).handle", req->handle);
+ PRE_FIELD_WRITE("ioctl(DRM_MODE_MAP_DUMB).offset", req->offset);
+ }
+ break;
+ case VKI_DRM_IOCTL_MODE_DESTROY_DUMB:
+ if (ARG3) {
+ struct vki_drm_mode_destroy_dumb *req = (struct vki_drm_mode_destroy_dumb *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_MODE_DESTROY_DUMB).handle", req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_GETPARAM:
+ if (ARG3) {
+ struct vki_drm_nouveau_getparam *req = (struct vki_drm_nouveau_getparam *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GETPARAM).param", req->param);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GETPARAM).value", req->value);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_SETPARAM:
+ if (ARG3) {
+ struct vki_drm_nouveau_setparam *req = (struct vki_drm_nouveau_setparam *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_SETPARAM).param", req->param);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_SETPARAM).value", req->value);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_CHANNEL_ALLOC:
+ if (ARG3) {
+ struct vki_drm_nouveau_channel_alloc *req = (struct vki_drm_nouveau_channel_alloc *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_CHANNEL_ALLOC).fb_ctxdma_handle", req->fb_ctxdma_handle);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_CHANNEL_ALLOC).tt_ctxdma_handle", req->tt_ctxdma_handle);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_CHANNEL_ALLOC).channel", req->channel);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_CHANNEL_ALLOC).pushbuf_domains", req->pushbuf_domains);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_CHANNEL_ALLOC).nr_subchan", req->nr_subchan);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_CHANNEL_ALLOC).subchan[0].handle", req->subchan[0].handle);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_CHANNEL_ALLOC).subchan[0].grclass", req->subchan[0].grclass);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_CHANNEL_ALLOC).notifier_handle", req->notifier_handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_CHANNEL_FREE:
+ if (ARG3) {
+ struct vki_drm_nouveau_channel_free *req = (struct vki_drm_nouveau_channel_free *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_CHANNEL_FREE).channel", req->channel);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_GROBJ_ALLOC:
+ if (ARG3) {
+ struct vki_drm_nouveau_grobj_alloc *req = (struct vki_drm_nouveau_grobj_alloc *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GROBJ_ALLOC).handle", req->handle);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GROBJ_ALLOC).channel", req->channel);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GROBJ_ALLOC).class", req->class);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_NOTIFIEROBJ_ALLOC:
+ if (ARG3) {
+ struct vki_drm_nouveau_notifierobj_alloc *req = (struct vki_drm_nouveau_notifierobj_alloc *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_NOTIFIEROBJ_ALLOC).channel", req->channel);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_NOTIFIEROBJ_ALLOC).handle", req->handle);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_NOTIFIEROBJ_ALLOC).size", req->size);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_NOTIFIEROBJ_ALLOC).offset", req->offset);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_GPUOBJ_FREE:
+ if (ARG3) {
+ struct vki_drm_nouveau_gpuobj_free *req = (struct vki_drm_nouveau_gpuobj_free *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GPUOBJ_FREE).channel", req->channel);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GPUOBJ_FREE).handle", req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_GEM_NEW:
+ if (ARG3) {
+ struct vki_drm_nouveau_gem_new *req = (struct vki_drm_nouveau_gem_new *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_NEW).info.tile_flags", req->info.tile_flags);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_NEW).channel_hint", req->channel_hint);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_NEW).info.size", req->info.size);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_NEW).align", req->align);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_NEW).info.domain", req->info.domain);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_NEW).info.tile_mode", req->info.tile_mode);
+
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_NEW).info.domain", req->info.domain);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_NEW).info.size", req->info.size);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_NEW).info.offset", req->info.offset);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_NEW).info.map_handle", req->info.map_handle);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_NEW).info.tile_mode", req->info.tile_mode);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_NEW).info.tile_flags", req->info.tile_flags);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_NEW).info.handle", req->info.handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_GEM_PUSHBUF:
+ if (ARG3) {
+ struct vki_drm_nouveau_gem_pushbuf *req = (struct vki_drm_nouveau_gem_pushbuf *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).channel", req->channel);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).nr_push", req->nr_push);
+ if (req->nr_push) {
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).push", req->push);
+ PRE_MEM_READ("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).push[]", (Addr)req->push,
+ req->nr_push * sizeof(struct vki_drm_nouveau_gem_pushbuf_push));
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).nr_buffers", req->nr_buffers);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).buffers", req->buffers);
+ if (req->nr_buffers)
+ PRE_MEM_READ("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).buffers[]", (Addr)req->buffers,
+ req->nr_buffers * sizeof(struct vki_drm_nouveau_gem_pushbuf_bo));
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).suffix0", req->suffix0);
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).nr_relocs", req->nr_relocs);
+ }
+
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).vram_available", req->vram_available);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).gart_available", req->gart_available);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).suffix0", req->suffix0);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_PUSHBUF).suffix1", req->suffix1);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_GEM_CPU_PREP:
+ if (ARG3) {
+ struct vki_drm_nouveau_gem_cpu_prep *req = (struct vki_drm_nouveau_gem_cpu_prep *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_CPU_PREP).flags", req->flags);
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_CPU_PREP).handle", req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_GEM_CPU_FINI:
+ if (ARG3) {
+ struct vki_drm_nouveau_gem_cpu_fini *req = (struct vki_drm_nouveau_gem_cpu_fini *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_CPU_FINI).handle", req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_NOUVEAU_GEM_INFO:
+ if (ARG3) {
+ struct vki_drm_nouveau_gem_new *req = (struct vki_drm_nouveau_gem_new *)ARG3;
+
+ PRE_FIELD_READ("ioctl(DRM_NOUVEAU_GEM_INFO).info.handle", req->info.handle);
+
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_INFO).info.domain", req->info.domain);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_INFO).info.size", req->info.size);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_INFO).info.offset", req->info.offset);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_INFO).info.map_handle", req->info.map_handle);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_INFO).info.tile_mode", req->info.tile_mode);
+ PRE_FIELD_WRITE("ioctl(DRM_NOUVEAU_GEM_INFO).info.tile_flags", req->info.tile_flags);
+ }
+ break;
+
default:
/* EVIOC* are variable length and return size written on success */
switch (ARG2 & ~(_VKI_IOC_SIZEMASK << _VKI_IOC_SIZESHIFT)) {
@@ -5692,7 +6370,395 @@
sizeof(struct vki_sockaddr));
}
break;
+ case VKI_DRM_IOCTL_VERSION:
+ if (ARG3) {
+ struct vki_drm_version *req = (struct vki_drm_version *)ARG3;
+ POST_FIELD_WRITE(req->version_major);
+ POST_FIELD_WRITE(req->version_minor);
+ POST_FIELD_WRITE(req->version_patchlevel);
+
+ if (req->name_len && req->name)
+ POST_MEM_WRITE((Addr)req->name, req->name_len);
+ if (req->date_len && req->date)
+ POST_MEM_WRITE((Addr)req->date, req->date_len);
+ if (req->desc_len && req->desc)
+ POST_MEM_WRITE((Addr)req->desc, req->desc_len);
+
+ POST_FIELD_WRITE(req->name_len);
+ POST_FIELD_WRITE(req->date_len);
+ POST_FIELD_WRITE(req->desc_len);
+ }
+ break;
+ case VKI_DRM_IOCTL_GET_UNIQUE:
+ if (ARG3) {
+ struct vki_drm_unique *req = (struct vki_drm_unique *)ARG3;
+
+ if (req->unique_len && req->unique)
+ POST_MEM_WRITE((Addr)req->unique, req->unique_len);
+ POST_FIELD_WRITE(req->unique_len);
+ }
+ break;
+ case VKI_DRM_IOCTL_GET_MAGIC:
+ if (ARG3) {
+ struct vki_drm_auth *req = (struct vki_drm_auth *)ARG3;
+
+ POST_FIELD_WRITE(req->magic);
+ }
+ break;
+ case VKI_DRM_IOCTL_IRQ_BUSID:
+ if (ARG3) {
+ struct vki_drm_irq_busid *req = (struct vki_drm_irq_busid *)ARG3;
+
+ POST_FIELD_WRITE(req->irq);
+ }
+ break;
+ case VKI_DRM_IOCTL_GEM_CLOSE:
+ break;
+ case VKI_DRM_IOCTL_GEM_FLINK:
+ if (ARG3) {
+ struct vki_drm_gem_flink *req = (struct vki_drm_gem_flink *)ARG3;
+
+ POST_FIELD_WRITE(req->name);
+ }
+ break;
+ case VKI_DRM_IOCTL_GEM_OPEN:
+ if (ARG3) {
+ struct vki_drm_gem_open *req = (struct vki_drm_gem_open *)ARG3;
+
+ POST_FIELD_WRITE(req->handle);
+ POST_FIELD_WRITE(req->size);
+ }
+ break;
+ case VKI_DRM_IOCTL_SET_MASTER:
+ break;
+ case VKI_DRM_IOCTL_DROP_MASTER:
+ break;
+ case VKI_DRM_IOCTL_ADD_CTX:
+ if (ARG3) {
+ struct vki_drm_ctx *req = (struct vki_drm_ctx *)ARG3;
+
+ POST_FIELD_WRITE(req->handle);
+ }
+ break;
+ case VKI_DRM_IOCTL_RM_CTX:
+ break;
+ case VKI_DRM_IOCTL_MOD_CTX:
+ break;
+ case VKI_DRM_IOCTL_GET_CTX:
+ if (ARG3) {
+ struct vki_drm_ctx *req = (struct vki_drm_ctx *)ARG3;
+
+ POST_FIELD_WRITE(req->flags);
+ }
+ break;
+ case VKI_DRM_IOCTL_SWITCH_CTX:
+ break;
+ case VKI_DRM_IOCTL_NEW_CTX:
+ break;
+ case VKI_DRM_IOCTL_RES_CTX:
+ if (ARG3) {
+ struct vki_drm_ctx_res *req = (struct vki_drm_ctx_res *)ARG3;
+
+ if (req->count && req->contexts) {
+ POST_MEM_WRITE((Addr)req->contexts,
+ req->count * sizeof (req->contexts[0]));
+ }
+ POST_FIELD_WRITE(req->count);
+ }
+ break;
+ case VKI_DRM_IOCTL_WAIT_VBLANK:
+ if (ARG3) {
+ union vki_drm_wait_vblank *req = (union vki_drm_wait_vblank *)ARG3;
+
+ if (!(req->request.type & VKI_DRM_VBLANK_EVENT)) {
+ POST_FIELD_WRITE(req->reply.tval_sec);
+ POST_FIELD_WRITE(req->reply.tval_usec);
+ }
+
+ POST_FIELD_WRITE(req->reply.sequence);
+ }
+ break;
+ case VKI_DRM_IOCTL_UPDATE_DRAW:
+ break;
+ case VKI_DRM_IOCTL_MODE_GETRESOURCES:
+ if (ARG3) {
+ struct vki_drm_mode_card_res *req = (struct vki_drm_mode_card_res *)ARG3;
+
+ POST_FIELD_WRITE(req->min_width);
+ POST_FIELD_WRITE(req->max_width);
+ POST_FIELD_WRITE(req->min_height);
+ POST_FIELD_WRITE(req->max_height);
+
+ // The checks below are not quite right: the kernel always writes to
+ // count_*, but writes through *_ptr only if the initial count_* was
+ // large enough. The initial count_* is no longer available in the
+ // POST handler, so we cannot tell whether the kernel wrote through
+ // *_ptr; assume a null *_ptr means "query count_* only".
+ POST_FIELD_WRITE(req->count_fbs);
+ if (req->count_fbs && req->fb_id_ptr)
+ POST_MEM_WRITE((Addr)req->fb_id_ptr, req->count_fbs * sizeof (__vki_u32));
+
+ POST_FIELD_WRITE(req->count_crtcs);
+ if (req->count_crtcs && req->crtc_id_ptr)
+ POST_MEM_WRITE((Addr)req->crtc_id_ptr, req->count_crtcs * sizeof (__vki_u32));
+
+ POST_FIELD_WRITE(req->count_encoders);
+ if (req->count_encoders && req->encoder_id_ptr)
+ POST_MEM_WRITE((Addr)req->encoder_id_ptr, req->count_encoders * sizeof (__vki_u32));
+
+ POST_FIELD_WRITE(req->count_connectors);
+ if (req->count_connectors && req->connector_id_ptr)
+ POST_MEM_WRITE((Addr)req->connector_id_ptr, req->count_connectors * sizeof (__vki_u32));
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_GETCRTC):
+ if (ARG3) {
+ struct vki_drm_mode_crtc *req = (struct vki_drm_mode_crtc *)ARG3;
+
+ POST_FIELD_WRITE(req->x);
+ POST_FIELD_WRITE(req->y);
+ POST_FIELD_WRITE(req->gamma_size);
+ POST_FIELD_WRITE(req->fb_id);
+ POST_FIELD_WRITE(req->mode_valid);
+
+ if (req->mode_valid) {
+ POST_FIELD_WRITE(req->mode.clock);
+ POST_FIELD_WRITE(req->mode.hdisplay);
+ POST_FIELD_WRITE(req->mode.hsync_start);
+ POST_FIELD_WRITE(req->mode.hsync_end);
+ POST_FIELD_WRITE(req->mode.htotal);
+ POST_FIELD_WRITE(req->mode.hskew);
+ POST_FIELD_WRITE(req->mode.vdisplay);
+ POST_FIELD_WRITE(req->mode.vsync_start);
+ POST_FIELD_WRITE(req->mode.vsync_end);
+ POST_FIELD_WRITE(req->mode.vtotal);
+ POST_FIELD_WRITE(req->mode.vscan);
+ POST_FIELD_WRITE(req->mode.vrefresh);
+ POST_FIELD_WRITE(req->mode.flags);
+ POST_FIELD_WRITE(req->mode.type);
+ POST_FIELD_WRITE(req->mode.name);
+ }
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_SETCRTC):
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_CURSOR):
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_GETGAMMA):
+ if (ARG3) {
+ struct vki_drm_mode_crtc_lut *req = (struct vki_drm_mode_crtc_lut *)ARG3;
+
+ POST_MEM_WRITE((Addr)req->red, req->gamma_size * sizeof(__vki_u16));
+ POST_MEM_WRITE((Addr)req->green, req->gamma_size * sizeof(__vki_u16));
+ POST_MEM_WRITE((Addr)req->blue, req->gamma_size * sizeof(__vki_u16));
+ }
+ break;
+ VKI_DRM_IOCTL_DOUBLE(VKI_DRM_IOCTL_MODE_SETGAMMA):
+ break;
+ case VKI_DRM_IOCTL_MODE_GETENCODER:
+ if (ARG3) {
+ struct vki_drm_mode_get_encoder *req = (struct vki_drm_mode_get_encoder *)ARG3;
+
+ POST_FIELD_WRITE(req->crtc_id);
+ POST_FIELD_WRITE(req->encoder_type);
+ POST_FIELD_WRITE(req->encoder_id);
+ POST_FIELD_WRITE(req->possible_clones);
+ POST_FIELD_WRITE(req->possible_crtcs);
+ }
+ break;
+ case VKI_DRM_IOCTL_MODE_GETCONNECTOR:
+ if (ARG3) {
+ struct vki_drm_mode_get_connector *req = (struct vki_drm_mode_get_connector *)ARG3;
+
+ POST_FIELD_WRITE(req->connector_id);
+ POST_FIELD_WRITE(req->connector_type);
+ POST_FIELD_WRITE(req->connector_type_id);
+ POST_FIELD_WRITE(req->mm_width);
+ POST_FIELD_WRITE(req->mm_height);
+ POST_FIELD_WRITE(req->subpixel);
+ POST_FIELD_WRITE(req->connection);
+ POST_FIELD_WRITE(req->encoder_id);
+ POST_FIELD_WRITE(req->count_modes);
+ POST_FIELD_WRITE(req->count_props);
+ POST_FIELD_WRITE(req->count_encoders);
+
+ // see comment near VKI_DRM_IOCTL_MODE_GETRESOURCES
+ if (req->count_modes > 0 && req->modes_ptr)
+ POST_MEM_WRITE((Addr)req->modes_ptr, req->count_m...
[truncated message content] |
From: <sv...@va...> - 2011-06-05 18:00:54
Author: sewardj
Date: 2011-06-05 18:56:03 +0100 (Sun, 05 Jun 2011)
New Revision: 2156
Log:
Improvements to code generation for 32 bit instructions. When
appropriate, generate 32 bit add/sub/and/or/xor/cmp, so as to avoid a
bunch of cases where previously values would have been widened to 64
bits, or shifted left 32 bits, before being used. Reduces the size of
the generated code by up to 2.8%.
Modified:
trunk/priv/guest_amd64_helpers.c
trunk/priv/host_amd64_defs.c
trunk/priv/host_amd64_defs.h
trunk/priv/host_amd64_isel.c
Modified: trunk/priv/guest_amd64_helpers.c
===================================================================
--- trunk/priv/guest_amd64_helpers.c 2011-05-29 09:29:18 UTC (rev 2155)
+++ trunk/priv/guest_amd64_helpers.c 2011-06-05 17:56:03 UTC (rev 2156)
@@ -877,6 +877,7 @@
# define unop(_op,_a1) IRExpr_Unop((_op),(_a1))
# define binop(_op,_a1,_a2) IRExpr_Binop((_op),(_a1),(_a2))
# define mkU64(_n) IRExpr_Const(IRConst_U64(_n))
+# define mkU32(_n) IRExpr_Const(IRConst_U32(_n))
# define mkU8(_n) IRExpr_Const(IRConst_U8(_n))
Int i, arity = 0;
@@ -959,34 +960,34 @@
if (isU64(cc_op, AMD64G_CC_OP_SUBL) && isU64(cond, AMD64CondZ)) {
/* long sub/cmp, then Z --> test dst==src */
return unop(Iop_1Uto64,
- binop(Iop_CmpEQ64,
- binop(Iop_Shl64,cc_dep1,mkU8(32)),
- binop(Iop_Shl64,cc_dep2,mkU8(32))));
+ binop(Iop_CmpEQ32,
+ unop(Iop_64to32, cc_dep1),
+ unop(Iop_64to32, cc_dep2)));
}
if (isU64(cc_op, AMD64G_CC_OP_SUBL) && isU64(cond, AMD64CondNZ)) {
/* long sub/cmp, then NZ --> test dst!=src */
return unop(Iop_1Uto64,
- binop(Iop_CmpNE64,
- binop(Iop_Shl64,cc_dep1,mkU8(32)),
- binop(Iop_Shl64,cc_dep2,mkU8(32))));
+ binop(Iop_CmpNE32,
+ unop(Iop_64to32, cc_dep1),
+ unop(Iop_64to32, cc_dep2)));
}
if (isU64(cc_op, AMD64G_CC_OP_SUBL) && isU64(cond, AMD64CondL)) {
/* long sub/cmp, then L (signed less than)
--> test dst <s src */
return unop(Iop_1Uto64,
- binop(Iop_CmpLT64S,
- binop(Iop_Shl64,cc_dep1,mkU8(32)),
- binop(Iop_Shl64,cc_dep2,mkU8(32))));
+ binop(Iop_CmpLT32S,
+ unop(Iop_64to32, cc_dep1),
+ unop(Iop_64to32, cc_dep2)));
}
if (isU64(cc_op, AMD64G_CC_OP_SUBL) && isU64(cond, AMD64CondLE)) {
/* long sub/cmp, then LE (signed less than or equal)
--> test dst <=s src */
return unop(Iop_1Uto64,
- binop(Iop_CmpLE64S,
- binop(Iop_Shl64,cc_dep1,mkU8(32)),
- binop(Iop_Shl64,cc_dep2,mkU8(32))));
+ binop(Iop_CmpLE32S,
+ unop(Iop_64to32, cc_dep1),
+ unop(Iop_64to32, cc_dep2)));
}
if (isU64(cc_op, AMD64G_CC_OP_SUBL) && isU64(cond, AMD64CondNLE)) {
@@ -995,9 +996,9 @@
--> test (dst >s src)
--> test (src <s dst) */
return unop(Iop_1Uto64,
- binop(Iop_CmpLT64S,
- binop(Iop_Shl64,cc_dep2,mkU8(32)),
- binop(Iop_Shl64,cc_dep1,mkU8(32))));
+ binop(Iop_CmpLT32S,
+ unop(Iop_64to32, cc_dep2),
+ unop(Iop_64to32, cc_dep1)));
}
@@ -1005,28 +1006,28 @@
/* long sub/cmp, then BE (unsigned less than or equal)
--> test dst <=u src */
return unop(Iop_1Uto64,
- binop(Iop_CmpLE64U,
- binop(Iop_Shl64,cc_dep1,mkU8(32)),
- binop(Iop_Shl64,cc_dep2,mkU8(32))));
+ binop(Iop_CmpLE32U,
+ unop(Iop_64to32, cc_dep1),
+ unop(Iop_64to32, cc_dep2)));
}
if (isU64(cc_op, AMD64G_CC_OP_SUBL) && isU64(cond, AMD64CondNBE)) {
/* long sub/cmp, then NBE (unsigned greater than)
--> test src <u dst */
/* Note, args are opposite way round from the usual */
return unop(Iop_1Uto64,
- binop(Iop_CmpLT64U,
- binop(Iop_Shl64,cc_dep2,mkU8(32)),
- binop(Iop_Shl64,cc_dep1,mkU8(32))));
+ binop(Iop_CmpLT32U,
+ unop(Iop_64to32, cc_dep2),
+ unop(Iop_64to32, cc_dep1)));
}
if (isU64(cc_op, AMD64G_CC_OP_SUBL) && isU64(cond, AMD64CondS)) {
/* long sub/cmp, then S (negative) --> test (dst-src <s 0) */
return unop(Iop_1Uto64,
- binop(Iop_CmpLT64S,
- binop(Iop_Sub64,
- binop(Iop_Shl64, cc_dep1, mkU8(32)),
- binop(Iop_Shl64, cc_dep2, mkU8(32))),
- mkU64(0)));
+ binop(Iop_CmpLT32S,
+ binop(Iop_Sub32,
+ unop(Iop_64to32, cc_dep1),
+ unop(Iop_64to32, cc_dep2)),
+ mkU32(0)));
}
/*---------------- SUBW ----------------*/
@@ -1126,17 +1127,17 @@
if (isU64(cc_op, AMD64G_CC_OP_LOGICL) && isU64(cond, AMD64CondZ)) {
/* long and/or/xor, then Z --> test dst==0 */
return unop(Iop_1Uto64,
- binop(Iop_CmpEQ64,
- binop(Iop_Shl64,cc_dep1,mkU8(32)),
- mkU64(0)));
+ binop(Iop_CmpEQ32,
+ unop(Iop_64to32, cc_dep1),
+ mkU32(0)));
}
if (isU64(cc_op, AMD64G_CC_OP_LOGICL) && isU64(cond, AMD64CondNZ)) {
/* long and/or/xor, then NZ --> test dst!=0 */
return unop(Iop_1Uto64,
- binop(Iop_CmpNE64,
- binop(Iop_Shl64,cc_dep1,mkU8(32)),
- mkU64(0)));
+ binop(Iop_CmpNE32,
+ unop(Iop_64to32, cc_dep1),
+ mkU32(0)));
}
if (isU64(cc_op, AMD64G_CC_OP_LOGICL) && isU64(cond, AMD64CondLE)) {
@@ -1147,9 +1148,9 @@
the result is <=signed 0. Hence ...
*/
return unop(Iop_1Uto64,
- binop(Iop_CmpLE64S,
- binop(Iop_Shl64,cc_dep1,mkU8(32)),
- mkU64(0)));
+ binop(Iop_CmpLE32S,
+ unop(Iop_64to32, cc_dep1),
+ mkU32(0)));
}
/*---------------- LOGICB ----------------*/
@@ -1214,9 +1215,9 @@
if (isU64(cc_op, AMD64G_CC_OP_DECL) && isU64(cond, AMD64CondZ)) {
/* dec L, then Z --> test dst == 0 */
return unop(Iop_1Uto64,
- binop(Iop_CmpEQ64,
- binop(Iop_Shl64,cc_dep1,mkU8(32)),
- mkU64(0)));
+ binop(Iop_CmpEQ32,
+ unop(Iop_64to32, cc_dep1),
+ mkU32(0)));
}
/*---------------- DECW ----------------*/
@@ -1337,9 +1338,9 @@
if (isU64(cc_op, AMD64G_CC_OP_SUBL)) {
/* C after sub denotes unsigned less than */
return unop(Iop_1Uto64,
- binop(Iop_CmpLT64U,
- binop(Iop_Shl64,cc_dep1,mkU8(32)),
- binop(Iop_Shl64,cc_dep2,mkU8(32))));
+ binop(Iop_CmpLT32U,
+ unop(Iop_64to32, cc_dep1),
+ unop(Iop_64to32, cc_dep2)));
}
if (isU64(cc_op, AMD64G_CC_OP_SUBB)) {
/* C after sub denotes unsigned less than */
@@ -1373,6 +1374,7 @@
# undef unop
# undef binop
# undef mkU64
+# undef mkU32
# undef mkU8
return NULL;
Modified: trunk/priv/host_amd64_defs.c
===================================================================
--- trunk/priv/host_amd64_defs.c 2011-05-29 09:29:18 UTC (rev 2155)
+++ trunk/priv/host_amd64_defs.c 2011-06-05 17:56:03 UTC (rev 2156)
@@ -314,13 +314,16 @@
return op;
}
-void ppAMD64RMI ( AMD64RMI* op ) {
+static void ppAMD64RMI_wrk ( AMD64RMI* op, Bool lo32 ) {
switch (op->tag) {
case Armi_Imm:
vex_printf("$0x%x", op->Armi.Imm.imm32);
return;
- case Armi_Reg:
- ppHRegAMD64(op->Armi.Reg.reg);
+ case Armi_Reg:
+ if (lo32)
+ ppHRegAMD64_lo32(op->Armi.Reg.reg);
+ else
+ ppHRegAMD64(op->Armi.Reg.reg);
return;
case Armi_Mem:
ppAMD64AMode(op->Armi.Mem.am);
@@ -329,6 +332,12 @@
vpanic("ppAMD64RMI");
}
}
+void ppAMD64RMI ( AMD64RMI* op ) {
+ ppAMD64RMI_wrk(op, False/*!lo32*/);
+}
+void ppAMD64RMI_lo32 ( AMD64RMI* op ) {
+ ppAMD64RMI_wrk(op, True/*lo32*/);
+}
/* An AMD64RMI can only be used in a "read" context (what would it mean
to write or modify a literal?) and so we enumerate its registers
@@ -679,6 +688,19 @@
i->Ain.Lea64.dst = dst;
return i;
}
+AMD64Instr* AMD64Instr_Alu32R ( AMD64AluOp op, AMD64RMI* src, HReg dst ) {
+ AMD64Instr* i = LibVEX_Alloc(sizeof(AMD64Instr));
+ i->tag = Ain_Alu32R;
+ i->Ain.Alu32R.op = op;
+ i->Ain.Alu32R.src = src;
+ i->Ain.Alu32R.dst = dst;
+ switch (op) {
+ case Aalu_ADD: case Aalu_SUB: case Aalu_CMP:
+ case Aalu_AND: case Aalu_OR: case Aalu_XOR: break;
+ default: vassert(0);
+ }
+ return i;
+}
AMD64Instr* AMD64Instr_MulL ( Bool syned, AMD64RM* src ) {
AMD64Instr* i = LibVEX_Alloc(sizeof(AMD64Instr));
i->tag = Ain_MulL;
@@ -1083,6 +1105,12 @@
vex_printf(",");
ppHRegAMD64(i->Ain.Lea64.dst);
return;
+ case Ain_Alu32R:
+ vex_printf("%sl ", showAMD64AluOp(i->Ain.Alu32R.op));
+ ppAMD64RMI_lo32(i->Ain.Alu32R.src);
+ vex_printf(",");
+ ppHRegAMD64_lo32(i->Ain.Alu32R.dst);
+ return;
case Ain_MulL:
vex_printf("%cmulq ", i->Ain.MulL.syned ? 's' : 'u');
ppAMD64RM(i->Ain.MulL.src);
@@ -1423,6 +1451,15 @@
addRegUsage_AMD64AMode(u, i->Ain.Lea64.am);
addHRegUse(u, HRmWrite, i->Ain.Lea64.dst);
return;
+ case Ain_Alu32R:
+ vassert(i->Ain.Alu32R.op != Aalu_MOV);
+ addRegUsage_AMD64RMI(u, i->Ain.Alu32R.src);
+ if (i->Ain.Alu32R.op == Aalu_CMP) {
+ addHRegUse(u, HRmRead, i->Ain.Alu32R.dst);
+ return;
+ }
+ addHRegUse(u, HRmModify, i->Ain.Alu32R.dst);
+ return;
case Ain_MulL:
addRegUsage_AMD64RM(u, i->Ain.MulL.src, HRmRead);
addHRegUse(u, HRmModify, hregAMD64_RAX());
@@ -1719,6 +1756,10 @@
mapRegs_AMD64AMode(m, i->Ain.Lea64.am);
mapReg(m, &i->Ain.Lea64.dst);
return;
+ case Ain_Alu32R:
+ mapRegs_AMD64RMI(m, i->Ain.Alu32R.src);
+ mapReg(m, &i->Ain.Alu32R.dst);
+ return;
case Ain_MulL:
mapRegs_AMD64RM(m, i->Ain.MulL.src);
return;
@@ -2586,6 +2627,69 @@
p = doAMode_M(p, i->Ain.Lea64.dst, i->Ain.Lea64.am);
goto done;
+ case Ain_Alu32R:
+ /* ADD/SUB/AND/OR/XOR/CMP */
+ opc = opc_rr = subopc_imm = opc_imma = 0;
+ switch (i->Ain.Alu32R.op) {
+ case Aalu_ADD: opc = 0x03; opc_rr = 0x01;
+ subopc_imm = 0; opc_imma = 0x05; break;
+ case Aalu_SUB: opc = 0x2B; opc_rr = 0x29;
+ subopc_imm = 5; opc_imma = 0x2D; break;
+ case Aalu_AND: opc = 0x23; opc_rr = 0x21;
+ subopc_imm = 4; opc_imma = 0x25; break;
+ case Aalu_XOR: opc = 0x33; opc_rr = 0x31;
+ subopc_imm = 6; opc_imma = 0x35; break;
+ case Aalu_OR: opc = 0x0B; opc_rr = 0x09;
+ subopc_imm = 1; opc_imma = 0x0D; break;
+ case Aalu_CMP: opc = 0x3B; opc_rr = 0x39;
+ subopc_imm = 7; opc_imma = 0x3D; break;
+ default: goto bad;
+ }
+ switch (i->Ain.Alu32R.src->tag) {
+ case Armi_Imm:
+ if (i->Ain.Alu32R.dst == hregAMD64_RAX()
+ && !fits8bits(i->Ain.Alu32R.src->Armi.Imm.imm32)) {
+ goto bad; /* FIXME: awaiting test case */
+ *p++ = toUChar(opc_imma);
+ p = emit32(p, i->Ain.Alu32R.src->Armi.Imm.imm32);
+ } else
+ if (fits8bits(i->Ain.Alu32R.src->Armi.Imm.imm32)) {
+ rex = clearWBit( rexAMode_R( fake(0), i->Ain.Alu32R.dst ) );
+ if (rex != 0x40) *p++ = rex;
+ *p++ = 0x83;
+ p = doAMode_R(p, fake(subopc_imm), i->Ain.Alu32R.dst);
+ *p++ = toUChar(0xFF & i->Ain.Alu32R.src->Armi.Imm.imm32);
+ } else {
+ rex = clearWBit( rexAMode_R( fake(0), i->Ain.Alu32R.dst) );
+ if (rex != 0x40) *p++ = rex;
+ *p++ = 0x81;
+ p = doAMode_R(p, fake(subopc_imm), i->Ain.Alu32R.dst);
+ p = emit32(p, i->Ain.Alu32R.src->Armi.Imm.imm32);
+ }
+ goto done;
+ case Armi_Reg:
+ rex = clearWBit(
+ rexAMode_R( i->Ain.Alu32R.src->Armi.Reg.reg,
+ i->Ain.Alu32R.dst) );
+ if (rex != 0x40) *p++ = rex;
+ *p++ = toUChar(opc_rr);
+ p = doAMode_R(p, i->Ain.Alu32R.src->Armi.Reg.reg,
+ i->Ain.Alu32R.dst);
+ goto done;
+ case Armi_Mem:
+ rex = clearWBit(
+ rexAMode_M( i->Ain.Alu32R.dst,
+ i->Ain.Alu32R.src->Armi.Mem.am) );
+ if (rex != 0x40) *p++ = rex;
+ *p++ = toUChar(opc);
+ p = doAMode_M(p, i->Ain.Alu32R.dst,
+ i->Ain.Alu32R.src->Armi.Mem.am);
+ goto done;
+ default:
+ goto bad;
+ }
+ break;
+
case Ain_MulL:
subopc = i->Ain.MulL.syned ? 5 : 4;
switch (i->Ain.MulL.src->tag) {
Modified: trunk/priv/host_amd64_defs.h
===================================================================
--- trunk/priv/host_amd64_defs.h 2011-05-29 09:29:18 UTC (rev 2155)
+++ trunk/priv/host_amd64_defs.h 2011-06-05 17:56:03 UTC (rev 2156)
@@ -189,7 +189,8 @@
extern AMD64RMI* AMD64RMI_Reg ( HReg );
extern AMD64RMI* AMD64RMI_Mem ( AMD64AMode* );
-extern void ppAMD64RMI ( AMD64RMI* );
+extern void ppAMD64RMI ( AMD64RMI* );
+extern void ppAMD64RMI_lo32 ( AMD64RMI* );
/* --------- Operand, which can be reg or immediate only. --------- */
@@ -359,6 +360,7 @@
Ain_Test64, /* 64-bit test (AND, set flags, discard result) */
Ain_Unary64, /* 64-bit not and neg */
Ain_Lea64, /* 64-bit compute EA into a reg */
+ Ain_Alu32R, /* 32-bit add/sub/and/or/xor/cmp, dst=REG (a la Alu64R) */
Ain_MulL, /* widening multiply */
Ain_Div, /* div and mod */
//.. Xin_Sh3232, /* shldl or shrdl */
@@ -449,6 +451,12 @@
AMD64AMode* am;
HReg dst;
} Lea64;
+ /* 32-bit add/sub/and/or/xor/cmp, dst=REG (a la Alu64R) */
+ struct {
+ AMD64AluOp op;
+ AMD64RMI* src;
+ HReg dst;
+ } Alu32R;
/* 64 x 64 -> 128 bit widening multiply: RDX:RAX = RAX *s/u
r/m64 */
struct {
@@ -676,6 +684,7 @@
extern AMD64Instr* AMD64Instr_Alu64M ( AMD64AluOp, AMD64RI*, AMD64AMode* );
extern AMD64Instr* AMD64Instr_Unary64 ( AMD64UnaryOp op, HReg dst );
extern AMD64Instr* AMD64Instr_Lea64 ( AMD64AMode* am, HReg dst );
+extern AMD64Instr* AMD64Instr_Alu32R ( AMD64AluOp, AMD64RMI*, HReg );
extern AMD64Instr* AMD64Instr_Sh64 ( AMD64ShiftOp, UInt, HReg );
extern AMD64Instr* AMD64Instr_Test64 ( UInt imm32, HReg dst );
extern AMD64Instr* AMD64Instr_MulL ( Bool syned, AMD64RM* );
Modified: trunk/priv/host_amd64_isel.c
===================================================================
--- trunk/priv/host_amd64_isel.c 2011-05-29 09:29:18 UTC (rev 2155)
+++ trunk/priv/host_amd64_isel.c 2011-06-05 17:56:03 UTC (rev 2156)
@@ -1173,19 +1173,11 @@
/* Handle misc other ops. */
if (e->Iex.Binop.op == Iop_Max32U) {
- /* This generates a truly rotten piece of code. Just as well
- it doesn't happen very often. */
- HReg src1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
- HReg src1L = newVRegI(env);
- HReg src2 = iselIntExpr_R(env, e->Iex.Binop.arg2);
- HReg src2L = newVRegI(env);
- HReg dst = newVRegI(env);
- addInstr(env, mk_iMOVsd_RR(src1,dst));
- addInstr(env, mk_iMOVsd_RR(src1,src1L));
- addInstr(env, AMD64Instr_Sh64(Ash_SHL, 32, src1L));
- addInstr(env, mk_iMOVsd_RR(src2,src2L));
- addInstr(env, AMD64Instr_Sh64(Ash_SHL, 32, src2L));
- addInstr(env, AMD64Instr_Alu64R(Aalu_CMP, AMD64RMI_Reg(src2L), src1L));
+ HReg src1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
+ HReg dst = newVRegI(env);
+ HReg src2 = iselIntExpr_R(env, e->Iex.Binop.arg2);
+ addInstr(env, mk_iMOVsd_RR(src1, dst));
+ addInstr(env, AMD64Instr_Alu32R(Aalu_CMP, AMD64RMI_Reg(src2), dst));
addInstr(env, AMD64Instr_CMov64(Acc_B, AMD64RM_Reg(src2), dst));
return dst;
}
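The slimmed-down lowering above works because Iop_Max32U reduces to a single 32-bit compare followed by a conditional move on the below condition (Acc_B); the old route shifted both operands left by 32 just to make a 64-bit CMP usable. A minimal C sketch of the selected instruction sequence — `max32u` is a hypothetical name for illustration, not code from the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the emitted sequence: copy arg1 into dst, do a 32-bit
   unsigned compare against arg2, and conditionally move arg2 into
   dst when dst is below arg2 (cmovb). */
static uint32_t max32u(uint32_t a, uint32_t b)
{
    uint32_t dst = a;        /* movl  a, dst        */
    if (dst < b)             /* cmpl  b, dst        */
        dst = b;             /* cmovb b, dst        */
    return dst;
}
```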
@@ -1422,6 +1414,36 @@
}
}
+ /* 32Uto64( Add32/Sub32/And32/Or32/Xor32(expr32, expr32) )
+ Use 32 bit arithmetic and let the default zero-extend rule
+ do the 32Uto64 for free. */
+ if (e->Iex.Unop.op == Iop_32Uto64 && e->Iex.Unop.arg->tag == Iex_Binop) {
+ IROp opi = e->Iex.Unop.arg->Iex.Binop.op; /* inner op */
+ IRExpr* argL = e->Iex.Unop.arg->Iex.Binop.arg1;
+ IRExpr* argR = e->Iex.Unop.arg->Iex.Binop.arg2;
+ AMD64AluOp aluOp = Aalu_INVALID;
+ switch (opi) {
+ case Iop_Add32: aluOp = Aalu_ADD; break;
+ case Iop_Sub32: aluOp = Aalu_SUB; break;
+ case Iop_And32: aluOp = Aalu_AND; break;
+ case Iop_Or32: aluOp = Aalu_OR; break;
+ case Iop_Xor32: aluOp = Aalu_XOR; break;
+ default: break;
+ }
+ if (aluOp != Aalu_INVALID) {
+ /* For commutative ops we assume any literal values are on
+ the second operand. */
+ HReg dst = newVRegI(env);
+ HReg reg = iselIntExpr_R(env, argL);
+ AMD64RMI* rmi = iselIntExpr_RMI(env, argR);
+ addInstr(env, mk_iMOVsd_RR(reg,dst));
+ addInstr(env, AMD64Instr_Alu32R(aluOp, rmi, dst));
+ return dst;
+ }
+ /* just fall through to normal handling for Iop_32Uto64 */
+ }
+
+ /* Fallback cases */
switch (e->Iex.Unop.op) {
case Iop_32Uto64:
case Iop_32Sto64: {
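The 32Uto64(Add32/Sub32/...) rule added above leans on the amd64 convention that writing a 32-bit ALU result implicitly zero-extends it into the full 64-bit register, so the 32Uto64 comes for free and no separate MovxLQ is needed. A C model of that equivalence — `widen_add` is a hypothetical illustration, not code from the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Model of 32Uto64(Add32(x, y)): the 32-bit addl wraps modulo 2^32,
   and on amd64 the 32-bit result is already zero-extended in the
   destination register. */
static uint64_t widen_add(uint32_t x, uint32_t y)
{
    uint32_t lo32 = x + y;      /* addl: wraps modulo 2^32    */
    return (uint64_t)lo32;      /* zero-extension is implicit */
}
```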
@@ -2176,10 +2198,8 @@
if (e->tag == Iex_Unop
&& e->Iex.Unop.op == Iop_CmpNEZ32) {
HReg r1 = iselIntExpr_R(env, e->Iex.Unop.arg);
- HReg tmp = newVRegI(env);
AMD64RMI* rmi2 = AMD64RMI_Imm(0);
- addInstr(env, AMD64Instr_MovxLQ(False, r1, tmp));
- addInstr(env, AMD64Instr_Alu64R(Aalu_CMP,rmi2,tmp));
+ addInstr(env, AMD64Instr_Alu32R(Aalu_CMP,rmi2,r1));
return Acc_NZ;
}
@@ -2249,25 +2269,6 @@
}
}
- /* CmpEQ32 / CmpNE32 */
- if (e->tag == Iex_Binop
- && (e->Iex.Binop.op == Iop_CmpEQ32
- || e->Iex.Binop.op == Iop_CmpNE32
- || e->Iex.Binop.op == Iop_CasCmpEQ32
- || e->Iex.Binop.op == Iop_CasCmpNE32)) {
- HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
- AMD64RMI* rmi2 = iselIntExpr_RMI(env, e->Iex.Binop.arg2);
- HReg r = newVRegI(env);
- addInstr(env, mk_iMOVsd_RR(r1,r));
- addInstr(env, AMD64Instr_Alu64R(Aalu_XOR,rmi2,r));
- addInstr(env, AMD64Instr_Sh64(Ash_SHL, 32, r));
- switch (e->Iex.Binop.op) {
- case Iop_CmpEQ32: case Iop_CasCmpEQ32: return Acc_Z;
- case Iop_CmpNE32: case Iop_CasCmpNE32: return Acc_NZ;
- default: vpanic("iselCondCode(amd64): CmpXX32");
- }
- }
-
/* CmpNE64(ccall, 64-bit constant) (--smc-check=all optimisation).
Saves a "movq %rax, %tmp" compared to the default route. */
if (e->tag == Iex_Binop
@@ -2312,6 +2313,30 @@
}
}
+ /* Cmp*32*(x,y) */
+ if (e->tag == Iex_Binop
+ && (e->Iex.Binop.op == Iop_CmpEQ32
+ || e->Iex.Binop.op == Iop_CmpNE32
+ || e->Iex.Binop.op == Iop_CmpLT32S
+ || e->Iex.Binop.op == Iop_CmpLT32U
+ || e->Iex.Binop.op == Iop_CmpLE32S
+ || e->Iex.Binop.op == Iop_CmpLE32U
+ || e->Iex.Binop.op == Iop_CasCmpEQ32
+ || e->Iex.Binop.op == Iop_CasCmpNE32)) {
+ HReg r1 = iselIntExpr_R(env, e->Iex.Binop.arg1);
+ AMD64RMI* rmi2 = iselIntExpr_RMI(env, e->Iex.Binop.arg2);
+ addInstr(env, AMD64Instr_Alu32R(Aalu_CMP,rmi2,r1));
+ switch (e->Iex.Binop.op) {
+ case Iop_CmpEQ32: case Iop_CasCmpEQ32: return Acc_Z;
+ case Iop_CmpNE32: case Iop_CasCmpNE32: return Acc_NZ;
+ case Iop_CmpLT32S: return Acc_L;
+ case Iop_CmpLT32U: return Acc_B;
+ case Iop_CmpLE32S: return Acc_LE;
+ case Iop_CmpLE32U: return Acc_BE;
+ default: vpanic("iselCondCode(amd64): CmpXX32");
+ }
+ }
+
ppIRExpr(e);
vpanic("iselCondCode(amd64)");
}
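The new Cmp*32 block issues one 32-bit CMP and then simply selects the matching x86 condition code: Z/NZ for equality, L/LE for signed orderings, B/BE for unsigned. A C model of which condition holds after `cmpl y, x` — `cc_holds` and the `Cond` enum are hypothetical names for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* After "cmpl y, x", each condition code answers one of the Cmp*32
   variants handled in iselCondCode above. */
enum Cond { CC_Z, CC_NZ, CC_L, CC_B, CC_LE, CC_BE };

static int cc_holds(enum Cond cc, uint32_t x, uint32_t y)
{
    int32_t sx = (int32_t)x, sy = (int32_t)y;
    switch (cc) {
        case CC_Z:  return x == y;    /* CmpEQ32             */
        case CC_NZ: return x != y;    /* CmpNE32             */
        case CC_L:  return sx < sy;   /* CmpLT32S (signed)   */
        case CC_B:  return x < y;     /* CmpLT32U (unsigned) */
        case CC_LE: return sx <= sy;  /* CmpLE32S            */
        case CC_BE: return x <= y;    /* CmpLE32U            */
    }
    return 0;
}
```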
From: Josef W. <Jos...@gm...> - 2011-06-05 17:09:03
On Friday 03 June 2011, Julian Seward wrote:
> On Friday, June 03, 2011, Josef Weidendorfer wrote:
> > In real hardware, instead of modulo (which obviously takes a lot of
> > time), you probably would do some hashing of a subset of address bits.
> >
> > We probably need to do something similar. And make this a special case
> > only if needed.
>
> Do you have any suggestion for how to implement such a hash fn?

Hmm. It should approximate "<memory block> % sets", perhaps using a
precalculated 256-byte lookup table "set[<memory block> & 255]", or
"<memory block> - sets * int(<memory block> * 1/sets)". And do "* 1/sets"
with "*(256/sets) >> 8" or by converting to float and back again? I'll have
to check what is fastest. The only thing is that our approximation for sure
will not be the same as what some HW does in that case.

> One good thing is that these strange caches generally appear only
> at L3 or LL,

Yes. In HW, it also adds latency to a critical path, so I suppose it is done
only for shared last-level caches. I suppose it is needed there to be
flexible about the number of LL cache partitions (when you have a tile-based
die layout, an L3 partition would be part of a tile). So if you have a chip
with 6 cores, you would have 6 L3 cache partitions, and get a multiple of 6
for the total number of sets even if each L3 partition has a power-of-2
number of sets. This strange number of sets has the benefit that an access
stream which always competes for the same set (with lots of conflict cases)
is less probable.

> and so the number of references we'd have to process
> is relatively small compared to (eg) D1 or I1. That makes the
> extra costs not so bad.

Probably. I still think we want to fall back to the fast way if we have a
power-of-2 set count even in LL. The best way is to instrument different
handler calls depending on the LL cache configuration, i.e. a duplication of
all the log_ handlers?

> re my earlier suggestion about kludging this by simulating a different
> associativity ..
>
> If I remember Hennessy & Patterson correctly, associativity above 4 makes
> very little difference to the hit rate. That's why I suggested it. eg
>
>    L3: 12MB, 16-way, 64B lines (12288 sets)
>
> If we instead pretended it was 12-way, then there would be 16384 sets.

Perhaps this indeed is the best default behavior. To actually be able to
check this, it would be nice to still enforce 12288 sets if requested.

> I would be happy with such an approximation. I don't know if it's
> always possible though -- need to think about the address arithmetic
> more.

In general, for sure not. But for practical numbers, it could work out.

Josef

> Any preferences (or other proposals)? I don't mind either way. I only
> care that the performance hit is minimal.
>
> J
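One concrete reading of Josef's "<memory block> - sets * int(<memory block> * 1/sets)" idea is a precomputed fixed-point reciprocal: the per-access cost becomes a multiply, a shift, and a multiply-subtract instead of a division. This is only a sketch under assumptions the thread does not fix: the names are invented, a 32.32 reciprocal is used instead of the 8-bit "*(256/sets) >> 8" scale, and the result matches true modulo only for small enough block numbers (for sets = 12288 the rounding error stays zero up to roughly half a million blocks; a real implementation would have to verify the usable range):

```c
#include <assert.h>
#include <stdint.h>

/* Precomputed fixed-point reciprocal of `sets` (hypothetical sketch).
   recip = floor(2^32 / sets) + 1, computed once at cache setup. */
static uint64_t recip;

static void set_model_init(uint32_t sets)
{
    recip = (((uint64_t)1 << 32) / sets) + 1;
}

/* Approximate block % sets with one multiply, one shift and one
   multiply-subtract; exact only while block * rounding_error < 2^32. */
static uint32_t set_of(uint32_t block, uint32_t sets)
{
    uint32_t q = (uint32_t)(((uint64_t)block * recip) >> 32); /* ~ block/sets */
    return block - q * sets;                                  /* ~ block%sets */
}
```

The alternative Josef mentions, a 256-entry table indexed by the low address bits, trades the multiply for a load, at the price of being a genuine hash rather than an approximation of modulo.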
From: <sv...@va...> - 2011-06-05 10:06:40
Author: bart
Date: 2011-06-05 11:01:48 +0100 (Sun, 05 Jun 2011)
New Revision: 11797
Log:
Mention the "3.x" Linux kernel version explicitly in the kernel version configure message
Modified:
trunk/configure.in
Modified: trunk/configure.in
===================================================================
--- trunk/configure.in 2011-06-05 08:51:47 UTC (rev 11796)
+++ trunk/configure.in 2011-06-05 10:01:48 UTC (rev 11797)
@@ -226,8 +226,8 @@
case "${kernel}" in
2.6.*|3.*)
- AC_MSG_RESULT([2.6 family (${kernel})])
- AC_DEFINE([KERNEL_2_6], 1, [Define to 1 if you're using Linux 2.6.x])
+ AC_MSG_RESULT([2.6.x/3.x family (${kernel})])
+ AC_DEFINE([KERNEL_2_6], 1, [Define to 1 if you're using Linux 2.6.x or Linux 3.x])
;;
2.4.*)
From: Tom H. <to...@co...> - 2011-06-05 09:59:17
On 05/06/11 09:51, sv...@va... wrote:
> Make Valgrind build on Linux kernel 3.0 and beyond. Closes #274926. An
> official statement that the Linux kernel 3.0 API and ABI is compatible
> with Linux kernel 2.6 can be found here:
> http://lkml.org/lkml/2011/5/29/204.

I'm not sure that having configure print "2.6 family" is a good idea. I
know the ABI is the same but it will likely cause confusion.

Tom

--
Tom Hughes (to...@co...)
http://compton.nu/
From: <sv...@va...> - 2011-06-05 08:56:38
Author: bart
Date: 2011-06-05 09:51:47 +0100 (Sun, 05 Jun 2011)
New Revision: 11796
Log:
Make Valgrind build on Linux kernel 3.0 and beyond. Closes #274926. An official
statement that the Linux kernel 3.0 API and ABI is compatible with Linux kernel
2.6 can be found here: http://lkml.org/lkml/2011/5/29/204.
Modified:
trunk/configure.in
Modified: trunk/configure.in
===================================================================
--- trunk/configure.in 2011-06-03 23:27:39 UTC (rev 11795)
+++ trunk/configure.in 2011-06-05 08:51:47 UTC (rev 11796)
@@ -225,7 +225,7 @@
kernel=`uname -r`
case "${kernel}" in
- 2.6.*)
+ 2.6.*|3.*)
AC_MSG_RESULT([2.6 family (${kernel})])
AC_DEFINE([KERNEL_2_6], 1, [Define to 1 if you're using Linux 2.6.x])
;;