From: <sv...@va...> - 2013-06-12 21:45:59
|
philippe 2013-06-12 22:45:39 +0100 (Wed, 12 Jun 2013)
New Revision: 13426
Log:
improve --help for --main-stacksize and supported ARM cpu
If the command line option --main-stacksize is not used,
the current ulimit value is used, with a min of 1MB
and a max of 16MB. Document this min/max default formula
in the --help.
Also indicate that Valgrind supports ARMv7
Modified files:
trunk/coregrind/m_main.c
trunk/none/tests/cmdline1.stdout.exp
trunk/none/tests/cmdline2.stdout.exp
Modified: trunk/none/tests/cmdline1.stdout.exp (+1 -1)
===================================================================
--- trunk/none/tests/cmdline1.stdout.exp 2013-06-10 09:34:26 +01:00 (rev 13425)
+++ trunk/none/tests/cmdline1.stdout.exp 2013-06-12 22:45:39 +01:00 (rev 13426)
@@ -47,7 +47,7 @@
--max-stackframe=<number> assume stack switch for SP changes larger
than <number> bytes [2000000]
--main-stacksize=<number> set size of main thread's stack (in bytes)
- [use current 'ulimit' value]
+ [min(max(current 'ulimit' value,1MB),16MB)]
user options for Valgrind tools that replace malloc:
--alignment=<number> set minimum alignment of heap allocations [not used by this tool]
Modified: trunk/coregrind/m_main.c (+2 -1)
===================================================================
--- trunk/coregrind/m_main.c 2013-06-10 09:34:26 +01:00 (rev 13425)
+++ trunk/coregrind/m_main.c 2013-06-12 22:45:39 +01:00 (rev 13426)
@@ -159,7 +159,7 @@
" --max-stackframe=<number> assume stack switch for SP changes larger\n"
" than <number> bytes [2000000]\n"
" --main-stacksize=<number> set size of main thread's stack (in bytes)\n"
-" [use current 'ulimit' value]\n"
+" [min(max(current 'ulimit' value,1MB),16MB)]\n"
"\n"
" user options for Valgrind tools that replace malloc:\n"
" --alignment=<number> set minimum alignment of heap allocations [%s]\n"
@@ -1700,6 +1700,7 @@
VG_(printf)(" * x86 (practically any; Pentium-I or above), "
"AMD Athlon or above)\n");
VG_(printf)(" * AMD Athlon64/Opteron\n");
+ VG_(printf)(" * ARM (armv7)\n");
VG_(printf)(" * PowerPC (most; ppc405 and above)\n");
VG_(printf)(" * System z (64bit only - s390x; z900 and above)\n");
VG_(printf)("\n");
Modified: trunk/none/tests/cmdline2.stdout.exp (+1 -1)
===================================================================
--- trunk/none/tests/cmdline2.stdout.exp 2013-06-10 09:34:26 +01:00 (rev 13425)
+++ trunk/none/tests/cmdline2.stdout.exp 2013-06-12 22:45:39 +01:00 (rev 13426)
@@ -47,7 +47,7 @@
--max-stackframe=<number> assume stack switch for SP changes larger
than <number> bytes [2000000]
--main-stacksize=<number> set size of main thread's stack (in bytes)
- [use current 'ulimit' value]
+ [min(max(current 'ulimit' value,1MB),16MB)]
user options for Valgrind tools that replace malloc:
--alignment=<number> set minimum alignment of heap allocations [not used by this tool]
|
|
From: Philippe W. <phi...@sk...> - 2013-06-12 19:59:41
|
On Wed, 2013-06-12 at 11:08 +0200, Julian Seward wrote:
> I would be happy to make stabs reading work in the new framework; it's
> probably pretty easy.  My concern mostly is to have a way to check I
> didn't break anything.  I suppose one option is to build something with
> gcc-4.7 -gstabs and check that the old and new readers produce the same
> debugging output.
>
> Better suggestions, and/or offers to test it properly, gratefully received.

Build valgrind (or at least its regression tests) with -gstabs, run
"make regtest", and check whether the number of successes is the same
before and after the change?
|
From: Ian C. <Ian...@ci...> - 2013-06-12 13:45:31
|
On Wed, 2013-06-12 at 14:42 +0100, Ian Campbell wrote:
> The following small set of patches updates valgrind for the interface
> changes made in Xen 4.3.
>
> With this and the two xl patches I just sent to the xen-devel list, the
> xl list, create (PV guest) and destroy commands are leak free.

I've also attached these to https://bugs.kde.org/show_bug.cgi?id=321065
(I started doing this before I realised I'd have to add the patches
one by one and spam the various bugzilla subscribers N times. Sorry
about this -- next time I'll attach them as a tarball...)

Ian.
|
From: Ian C. <ian...@ci...> - 2013-06-12 13:43:06
|
From: Andrew Cooper <and...@ci...>
These hypercalls take no parameters and their return value comes straight from
the ioctl() on privcmd. There are no memory reads or writes.
Signed-off-by: Andrew Cooper <and...@ci...>
---
coregrind/m_syswrap/syswrap-xen.c | 9 +++++++++
1 files changed, 9 insertions(+), 0 deletions(-)
diff --git a/coregrind/m_syswrap/syswrap-xen.c b/coregrind/m_syswrap/syswrap-xen.c
index 8a0196f..ce300e8 100644
--- a/coregrind/m_syswrap/syswrap-xen.c
+++ b/coregrind/m_syswrap/syswrap-xen.c
@@ -152,6 +152,10 @@ PRE(memory_op)
break;
}
+ case VKI_XENMEM_get_sharing_freed_pages:
+ case VKI_XENMEM_get_sharing_shared_pages:
+ break;
+
default:
bad_subop(tid, layout, arrghs, status, flags,
"__HYPERVISOR_memory_op", ARG1);
@@ -672,6 +676,11 @@ POST(memory_op)
sizeof(vki_xen_pfn_t) * memory_reservation->nr_extents);
break;
}
+
+ case VKI_XENMEM_get_sharing_freed_pages:
+ case VKI_XENMEM_get_sharing_shared_pages:
+ /* No outputs */
+ break;
}
}
--
1.7.2.5
|
|
From: Ian C. <ian...@ci...> - 2013-06-12 13:43:04
|
From: Andrew Cooper <and...@ci...>
Xen takes a pointer to a sysctl_sched_id struct, and writes a single uint32_t
into it. There are no memory reads, and there is a single memory write.
Signed-off-by: Andrew Cooper <and...@ci...>
---
coregrind/m_syswrap/syswrap-xen.c | 8 ++++++++
include/vki/vki-xen-sysctl.h | 7 ++++++-
2 files changed, 14 insertions(+), 1 deletions(-)
diff --git a/coregrind/m_syswrap/syswrap-xen.c b/coregrind/m_syswrap/syswrap-xen.c
index 61aa1e1..8a0196f 100644
--- a/coregrind/m_syswrap/syswrap-xen.c
+++ b/coregrind/m_syswrap/syswrap-xen.c
@@ -400,6 +400,10 @@ PRE(sysctl) {
}
break;
+ case VKI_XEN_SYSCTL_sched_id:
+ /* No inputs */
+ break;
+
case VKI_XEN_SYSCTL_cpupool_op:
PRE_XEN_SYSCTL_READ(cpupool_op, op);
@@ -791,6 +795,10 @@ POST(sysctl)
}
break;
+ case VKI_XEN_SYSCTL_sched_id:
+ POST_XEN_SYSCTL_WRITE(sched_id, sched_id);
+ break;
+
case VKI_XEN_SYSCTL_cpupool_op:
if (sysctl->u.cpupool_op.op == VKI_XEN_SYSCTL_CPUPOOL_OP_CREATE ||
sysctl->u.cpupool_op.op == VKI_XEN_SYSCTL_CPUPOOL_OP_INFO)
diff --git a/include/vki/vki-xen-sysctl.h b/include/vki/vki-xen-sysctl.h
index 32c8722..453752c 100644
--- a/include/vki/vki-xen-sysctl.h
+++ b/include/vki/vki-xen-sysctl.h
@@ -120,6 +120,11 @@ struct vki_xen_sysctl_physinfo_0000000a {
vki_uint32_t capabilities;
};
+struct vki_xen_sysctl_sched_id {
+ /* OUT variable. */
+ vki_uint32_t sched_id;
+};
+
struct vki_xen_sysctl {
vki_uint32_t cmd;
vki_uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
@@ -130,7 +135,7 @@ struct vki_xen_sysctl {
struct vki_xen_sysctl_physinfo_0000000a physinfo_0000000a;
struct vki_xen_sysctl_topologyinfo topologyinfo;
struct vki_xen_sysctl_numainfo numainfo;
- //struct vki_xen_sysctl_sched_id sched_id;
+ struct vki_xen_sysctl_sched_id sched_id;
//struct vki_xen_sysctl_perfc_op perfc_op;
struct vki_xen_sysctl_getdomaininfolist_00000008 getdomaininfolist_00000008;
struct vki_xen_sysctl_getdomaininfolist_00000009 getdomaininfolist_00000009;
--
1.7.2.5
|
|
From: Ian C. <ian...@ci...> - 2013-06-12 13:43:03
|
New hypercalls:
- VKI_XENMEM_claim_pages
- VKI_XEN_DOMCTL_getnodeaffinity
- VKI_XEN_DOMCTL_setnodeaffinity
Plus placeholders for other new hypercalls which we don't yet support here.
New revision of the sysctl and domctl interfaces, due to the new
outstanding_pages field in physinfo and dominfo.
Xen changed the API, but not the ABI, of cpumasks to a more generic
bitmap. Switch to using the latest names.
---
coregrind/m_syswrap/syswrap-xen.c | 92 +++++++++++++++++++++++++++++++------
include/vki/vki-xen-domctl.h | 42 ++++++++++++++++-
include/vki/vki-xen-memory.h | 1 +
include/vki/vki-xen-sysctl.h | 28 ++++++++++-
include/vki/vki-xen.h | 4 +-
5 files changed, 144 insertions(+), 23 deletions(-)
diff --git a/coregrind/m_syswrap/syswrap-xen.c b/coregrind/m_syswrap/syswrap-xen.c
index be884a1..61aa1e1 100644
--- a/coregrind/m_syswrap/syswrap-xen.c
+++ b/coregrind/m_syswrap/syswrap-xen.c
@@ -104,7 +104,8 @@ PRE(memory_op)
}
case VKI_XENMEM_increase_reservation:
case VKI_XENMEM_decrease_reservation:
- case VKI_XENMEM_populate_physmap: {
+ case VKI_XENMEM_populate_physmap:
+ case VKI_XENMEM_claim_pages: {
struct xen_memory_reservation *memory_reservation =
(struct xen_memory_reservation *)ARG2;
const HChar *which;
@@ -125,6 +126,9 @@ PRE(memory_op)
(Addr)memory_reservation->extent_start.p,
sizeof(vki_xen_pfn_t) * memory_reservation->nr_extents);
break;
+ case VKI_XENMEM_claim_pages:
+ which = "XENMEM_claim_pages";
+ break;
default:
which = "XENMEM_unknown";
break;
@@ -354,6 +358,7 @@ PRE(sysctl) {
{
case 0x00000008:
case 0x00000009:
+ case 0x0000000a:
break;
default:
VG_(dmsg)("WARNING: sysctl version %"PRIx32" not supported\n",
@@ -470,6 +475,7 @@ PRE(domctl)
{
case 0x00000007:
case 0x00000008:
+ case 0x00000009:
break;
default:
VG_(dmsg)("WARNING: domctl version %"PRIx32" not supported\n",
@@ -567,7 +573,17 @@ PRE(domctl)
__PRE_XEN_DOMCTL_READ(setvcpuaffinity, vcpuaffinity, vcpu);
PRE_MEM_READ("XEN_DOMCTL_setvcpuaffinity u.vcpuaffinity.cpumap.bitmap",
(Addr)domctl->u.vcpuaffinity.cpumap.bitmap.p,
- domctl->u.vcpuaffinity.cpumap.nr_cpus / 8);
+ domctl->u.vcpuaffinity.cpumap.nr_bits / 8);
+ break;
+
+ case VKI_XEN_DOMCTL_getnodeaffinity:
+ __PRE_XEN_DOMCTL_READ(nodeaffinity, nodeaffinity, nodemap.nr_bits);
+ break;
+ case VKI_XEN_DOMCTL_setnodeaffinity:
+ __PRE_XEN_DOMCTL_READ(nodeaffinity, nodeaffinity, nodemap.nr_bits);
+ PRE_MEM_READ("XEN_DOMCTL_setnodeaffinity u.nodeaffinity.cpumap.bitmap",
+ (Addr)domctl->u.nodeaffinity.nodemap.bitmap.p,
+ domctl->u.nodeaffinity.nodemap.nr_bits / 8);
break;
case VKI_XEN_DOMCTL_getvcpucontext:
@@ -640,6 +656,7 @@ POST(memory_op)
switch (ARG1) {
case VKI_XENMEM_set_memory_map:
case VKI_XENMEM_decrease_reservation:
+ case VKI_XENMEM_claim_pages:
/* No outputs */
break;
case VKI_XENMEM_increase_reservation:
@@ -743,6 +760,7 @@ POST(sysctl)
{
case 0x00000008:
case 0x00000009:
+ case 0x0000000a:
break;
default:
return;
@@ -787,18 +805,39 @@ POST(sysctl)
break;
case VKI_XEN_SYSCTL_physinfo:
- POST_XEN_SYSCTL_WRITE(physinfo, threads_per_core);
- POST_XEN_SYSCTL_WRITE(physinfo, cores_per_socket);
- POST_XEN_SYSCTL_WRITE(physinfo, nr_cpus);
- POST_XEN_SYSCTL_WRITE(physinfo, max_cpu_id);
- POST_XEN_SYSCTL_WRITE(physinfo, nr_nodes);
- POST_XEN_SYSCTL_WRITE(physinfo, max_node_id);
- POST_XEN_SYSCTL_WRITE(physinfo, cpu_khz);
- POST_XEN_SYSCTL_WRITE(physinfo, total_pages);
- POST_XEN_SYSCTL_WRITE(physinfo, free_pages);
- POST_XEN_SYSCTL_WRITE(physinfo, scrub_pages);
- POST_XEN_SYSCTL_WRITE(physinfo, hw_cap[8]);
- POST_XEN_SYSCTL_WRITE(physinfo, capabilities);
+ switch (sysctl->interface_version)
+ {
+ case 0x00000008:
+ case 0x00000009: /* Unchanged from version 8 */
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, threads_per_core);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, cores_per_socket);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, nr_cpus);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, max_cpu_id);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, nr_nodes);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, max_node_id);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, cpu_khz);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, total_pages);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, free_pages);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, scrub_pages);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, hw_cap[8]);
+ POST_XEN_SYSCTL_WRITE(physinfo_00000008, capabilities);
+ break;
+ case 0x0000000a:
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, threads_per_core);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, cores_per_socket);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, nr_cpus);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, max_cpu_id);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, nr_nodes);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, max_node_id);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, cpu_khz);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, total_pages);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, free_pages);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, scrub_pages);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, outstanding_pages);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, hw_cap[8]);
+ POST_XEN_SYSCTL_WRITE(physinfo_0000000a, capabilities);
+ break;
+ }
break;
case VKI_XEN_SYSCTL_topologyinfo:
@@ -834,6 +873,7 @@ POST(domctl){
switch (domctl->interface_version) {
case 0x00000007:
case 0x00000008:
+ case 0x00000009:
break;
default:
return;
@@ -855,6 +895,7 @@ POST(domctl){
case VKI_XEN_DOMCTL_hypercall_init:
case VKI_XEN_DOMCTL_setvcpuaffinity:
case VKI_XEN_DOMCTL_setvcpucontext:
+ case VKI_XEN_DOMCTL_setnodeaffinity:
case VKI_XEN_DOMCTL_set_cpuid:
case VKI_XEN_DOMCTL_unpausedomain:
/* No output fields */
@@ -908,7 +949,12 @@ POST(domctl){
case VKI_XEN_DOMCTL_getvcpuaffinity:
POST_MEM_WRITE((Addr)domctl->u.vcpuaffinity.cpumap.bitmap.p,
- domctl->u.vcpuaffinity.cpumap.nr_cpus / 8);
+ domctl->u.vcpuaffinity.cpumap.nr_bits / 8);
+ break;
+
+ case VKI_XEN_DOMCTL_getnodeaffinity:
+ POST_MEM_WRITE((Addr)domctl->u.nodeaffinity.nodemap.bitmap.p,
+ domctl->u.nodeaffinity.nodemap.nr_bits / 8);
break;
case VKI_XEN_DOMCTL_getdomaininfo:
@@ -942,6 +988,22 @@ POST(domctl){
POST_XEN_DOMCTL_WRITE(getdomaininfo_00000008, handle);
POST_XEN_DOMCTL_WRITE(getdomaininfo_00000008, cpupool);
break;
+ case 0x00000009:
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, domain);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, flags);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, tot_pages);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, max_pages);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, outstanding_pages);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, shr_pages);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, paged_pages);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, shared_info_frame);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, cpu_time);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, nr_online_vcpus);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, max_vcpu_id);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, ssidref);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, handle);
+ POST_XEN_DOMCTL_WRITE(getdomaininfo_00000009, cpupool);
+ break;
}
break;
case VKI_XEN_DOMCTL_getvcpucontext:
diff --git a/include/vki/vki-xen-domctl.h b/include/vki/vki-xen-domctl.h
index 241c008..815e0a7 100644
--- a/include/vki/vki-xen-domctl.h
+++ b/include/vki/vki-xen-domctl.h
@@ -7,6 +7,7 @@
*
* - 00000007: Xen 4.1
* - 00000008: Xen 4.2
+ * - 00000009: Xen 4.3
*
* When adding a new subop be sure to include the variants used by all
* of the above, both here and in syswrap-xen.c
@@ -57,7 +58,7 @@
#define VKI_XEN_DOMCTL_pin_mem_cacheattr 41
#define VKI_XEN_DOMCTL_set_ext_vcpucontext 42
#define VKI_XEN_DOMCTL_get_ext_vcpucontext 43
-#define VKI_XEN_DOMCTL_set_opt_feature 44
+#define VKI_XEN_DOMCTL_set_opt_feature 44 /*Obsolete IA64 only */
#define VKI_XEN_DOMCTL_test_assign_device 45
#define VKI_XEN_DOMCTL_set_target 46
#define VKI_XEN_DOMCTL_deassign_device 47
@@ -80,6 +81,9 @@
#define VKI_XEN_DOMCTL_set_access_required 64
#define VKI_XEN_DOMCTL_audit_p2m 65
#define VKI_XEN_DOMCTL_set_virq_handler 66
+#define VKI_XEN_DOMCTL_set_broken_page_p2m 67
+#define VKI_XEN_DOMCTL_setnodeaffinity 68
+#define VKI_XEN_DOMCTL_getnodeaffinity 69
#define VKI_XEN_DOMCTL_gdbsx_guestmemio 1000
#define VKI_XEN_DOMCTL_gdbsx_pausevcpu 1001
#define VKI_XEN_DOMCTL_gdbsx_unpausevcpu 1002
@@ -130,9 +134,39 @@ struct vki_xen_domctl_getdomaininfo_00000008 {
typedef struct vki_xen_domctl_getdomaininfo_00000008 vki_xen_domctl_getdomaininfo_00000008_t;
DEFINE_VKI_XEN_GUEST_HANDLE(vki_xen_domctl_getdomaininfo_00000008_t);
+struct vki_xen_domctl_getdomaininfo_00000009 {
+ /* OUT variables. */
+ vki_xen_domid_t domain;
+ vki_uint32_t flags;
+ vki_xen_uint64_aligned_t tot_pages;
+ vki_xen_uint64_aligned_t max_pages;
+ vki_xen_uint64_aligned_t outstanding_pages;
+ vki_xen_uint64_aligned_t shr_pages;
+ vki_xen_uint64_aligned_t paged_pages;
+ vki_xen_uint64_aligned_t shared_info_frame;
+ vki_xen_uint64_aligned_t cpu_time;
+ vki_uint32_t nr_online_vcpus;
+ vki_uint32_t max_vcpu_id;
+ vki_uint32_t ssidref;
+ vki_xen_domain_handle_t handle;
+ vki_uint32_t cpupool;
+};
+typedef struct vki_xen_domctl_getdomaininfo_00000009 vki_xen_domctl_getdomaininfo_00000009_t;
+DEFINE_VKI_XEN_GUEST_HANDLE(vki_xen_domctl_getdomaininfo_00000009_t);
+
+/* Get/set the NUMA node(s) with which the guest has affinity with. */
+/* XEN_DOMCTL_setnodeaffinity */
+/* XEN_DOMCTL_getnodeaffinity */
+struct vki_xen_domctl_nodeaffinity {
+ struct vki_xenctl_bitmap nodemap;/* IN */
+};
+typedef struct vki_xen_domctl_nodeaffinity vki_xen_domctl_nodeaffinity_t;
+DEFINE_VKI_XEN_GUEST_HANDLE(vki_xen_domctl_nodeaffinity_t);
+
+
struct vki_xen_domctl_vcpuaffinity {
vki_uint32_t vcpu; /* IN */
- struct vki_xenctl_cpumap cpumap; /* IN/OUT */
+ struct vki_xenctl_bitmap cpumap; /* IN/OUT */
};
struct vki_xen_domctl_max_mem {
@@ -233,10 +267,12 @@ struct vki_xen_domctl {
struct vki_xen_domctl_createdomain createdomain;
struct vki_xen_domctl_getdomaininfo_00000007 getdomaininfo_00000007;
struct vki_xen_domctl_getdomaininfo_00000008 getdomaininfo_00000008;
+ struct vki_xen_domctl_getdomaininfo_00000009 getdomaininfo_00000009;
//struct vki_xen_domctl_getmemlist getmemlist;
//struct vki_xen_domctl_getpageframeinfo getpageframeinfo;
//struct vki_xen_domctl_getpageframeinfo2 getpageframeinfo2;
//struct vki_xen_domctl_getpageframeinfo3 getpageframeinfo3;
+ struct vki_xen_domctl_nodeaffinity nodeaffinity;
struct vki_xen_domctl_vcpuaffinity vcpuaffinity;
//struct vki_xen_domctl_shadow_op shadow_op;
struct vki_xen_domctl_max_mem max_mem;
@@ -266,7 +302,6 @@ struct vki_xen_domctl {
//struct vki_xen_domctl_ioport_mapping ioport_mapping;
//struct vki_xen_domctl_pin_mem_cacheattr pin_mem_cacheattr;
//struct vki_xen_domctl_ext_vcpucontext ext_vcpucontext;
- //struct vki_xen_domctl_set_opt_feature set_opt_feature;
//struct vki_xen_domctl_set_target set_target;
//struct vki_xen_domctl_subscribe subscribe;
//struct vki_xen_domctl_debug_op debug_op;
@@ -280,6 +315,7 @@ struct vki_xen_domctl {
//struct vki_xen_domctl_audit_p2m audit_p2m;
//struct vki_xen_domctl_set_virq_handler set_virq_handler;
//struct vki_xen_domctl_gdbsx_memio gdbsx_guest_memio;
+ //struct vki_xen_domctl_set_broken_page_p2m set_broken_page_p2m;
//struct vki_xen_domctl_gdbsx_pauseunp_vcpu gdbsx_pauseunp_vcpu;
//struct vki_xen_domctl_gdbsx_domstatus gdbsx_domstatus;
vki_uint8_t pad[128];
diff --git a/include/vki/vki-xen-memory.h b/include/vki/vki-xen-memory.h
index 7de8d33..eac7871 100644
--- a/include/vki/vki-xen-memory.h
+++ b/include/vki/vki-xen-memory.h
@@ -20,6 +20,7 @@
#define VKI_XENMEM_get_pod_target 17
#define VKI_XENMEM_get_sharing_freed_pages 18
#define VKI_XENMEM_get_sharing_shared_pages 19
+#define VKI_XENMEM_claim_pages 24
struct vki_xen_memory_map {
unsigned int nr_entries;
diff --git a/include/vki/vki-xen-sysctl.h b/include/vki/vki-xen-sysctl.h
index c5178d7..32c8722 100644
--- a/include/vki/vki-xen-sysctl.h
+++ b/include/vki/vki-xen-sysctl.h
@@ -7,6 +7,7 @@
*
* - 00000008: Xen 4.1
* - 00000009: Xen 4.2
+ * - 0000000a: Xen 4.3
*
* When adding a new subop be sure to include the variants used by all
* of the above, both here and in syswrap-xen.c
@@ -35,6 +36,7 @@
#define VKI_XEN_SYSCTL_numainfo 17
#define VKI_XEN_SYSCTL_cpupool_op 18
#define VKI_XEN_SYSCTL_scheduler_op 19
+#define VKI_XEN_SYSCTL_coverage_op 20
struct vki_xen_sysctl_getdomaininfolist_00000008 {
/* IN variables. */
@@ -69,7 +71,7 @@ struct vki_xen_sysctl_cpupool_op {
vki_uint32_t domid; /* IN: M */
vki_uint32_t cpu; /* IN: AR */
vki_uint32_t n_dom; /* OUT: I */
- struct vki_xenctl_cpumap cpumap; /* OUT: IF */
+ struct vki_xenctl_bitmap cpumap; /* OUT: IF */
};
struct vki_xen_sysctl_topologyinfo {
@@ -85,7 +87,7 @@ struct vki_xen_sysctl_numainfo {
VKI_XEN_GUEST_HANDLE_64(vki_uint64) node_to_memfree;
VKI_XEN_GUEST_HANDLE_64(vki_uint32) node_to_node_distance;
};
-struct vki_xen_sysctl_physinfo {
+struct vki_xen_sysctl_physinfo_00000008 {
vki_uint32_t threads_per_core;
vki_uint32_t cores_per_socket;
vki_uint32_t nr_cpus; /* # CPUs currently online */
@@ -101,13 +103,31 @@ struct vki_xen_sysctl_physinfo {
vki_uint32_t capabilities;
};
+struct vki_xen_sysctl_physinfo_0000000a {
+ vki_uint32_t threads_per_core;
+ vki_uint32_t cores_per_socket;
+ vki_uint32_t nr_cpus; /* # CPUs currently online */
+ vki_uint32_t max_cpu_id; /* Largest possible CPU ID on this host */
+ vki_uint32_t nr_nodes; /* # nodes currently online */
+ vki_uint32_t max_node_id; /* Largest possible node ID on this host */
+ vki_uint32_t cpu_khz;
+ vki_xen_uint64_aligned_t total_pages;
+ vki_xen_uint64_aligned_t free_pages;
+ vki_xen_uint64_aligned_t scrub_pages;
+ vki_xen_uint64_aligned_t outstanding_pages;
+ vki_uint32_t hw_cap[8];
+
+ vki_uint32_t capabilities;
+};
+
struct vki_xen_sysctl {
vki_uint32_t cmd;
vki_uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
union {
//struct vki_xen_sysctl_readconsole readconsole;
//struct vki_xen_sysctl_tbuf_op tbuf_op;
- struct vki_xen_sysctl_physinfo physinfo;
+ struct vki_xen_sysctl_physinfo_00000008 physinfo_00000008;
+ struct vki_xen_sysctl_physinfo_0000000a physinfo_0000000a;
struct vki_xen_sysctl_topologyinfo topologyinfo;
struct vki_xen_sysctl_numainfo numainfo;
//struct vki_xen_sysctl_sched_id sched_id;
@@ -124,6 +144,8 @@ struct vki_xen_sysctl {
//struct vki_xen_sysctl_lockprof_op lockprof_op;
struct vki_xen_sysctl_cpupool_op cpupool_op;
//struct vki_xen_sysctl_scheduler_op scheduler_op;
+ //struct vki_xen_sysctl_coverage_op coverage_op;
+
vki_uint8_t pad[128];
} u;
};
diff --git a/include/vki/vki-xen.h b/include/vki/vki-xen.h
index ed3cc1b..87fbb4f 100644
--- a/include/vki/vki-xen.h
+++ b/include/vki/vki-xen.h
@@ -71,9 +71,9 @@ __DEFINE_VKI_XEN_GUEST_HANDLE(vki_uint16, vki_uint16_t);
__DEFINE_VKI_XEN_GUEST_HANDLE(vki_uint32, vki_uint32_t);
__DEFINE_VKI_XEN_GUEST_HANDLE(vki_uint64, vki_uint64_t);
-struct vki_xenctl_cpumap {
+struct vki_xenctl_bitmap {
VKI_XEN_GUEST_HANDLE_64(vki_uint8) bitmap;
- vki_uint32_t nr_cpus;
+ vki_uint32_t nr_bits;
};
#include <vki/vki-xen-domctl.h>
--
1.7.2.5
|
|
From: Ian C. <ian...@ci...> - 2013-06-12 13:43:01
|
---
coregrind/m_syswrap/syswrap-linux.c | 22 +++++++++++-----------
1 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/coregrind/m_syswrap/syswrap-linux.c b/coregrind/m_syswrap/syswrap-linux.c
index a42a572..039f8d4 100644
--- a/coregrind/m_syswrap/syswrap-linux.c
+++ b/coregrind/m_syswrap/syswrap-linux.c
@@ -6508,37 +6508,37 @@ PRE(sys_ioctl)
case VKI_XEN_IOCTL_PRIVCMD_MMAP: {
struct vki_xen_privcmd_mmap *args =
(struct vki_xen_privcmd_mmap *)(ARG3);
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAP",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAP(num)",
(Addr)&args->num, sizeof(args->num));
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAP",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAP(dom)",
(Addr)&args->dom, sizeof(args->dom));
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAP",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAP(entry)",
(Addr)args->entry, sizeof(*(args->entry)) * args->num);
break;
}
case VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH: {
struct vki_xen_privcmd_mmapbatch *args =
(struct vki_xen_privcmd_mmapbatch *)(ARG3);
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH(num)",
(Addr)&args->num, sizeof(args->num));
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH(dom)",
(Addr)&args->dom, sizeof(args->dom));
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH(addr)",
(Addr)&args->addr, sizeof(args->addr));
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH(arr)",
(Addr)args->arr, sizeof(*(args->arr)) * args->num);
break;
}
case VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2: {
struct vki_xen_privcmd_mmapbatch_v2 *args =
(struct vki_xen_privcmd_mmapbatch_v2 *)(ARG3);
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2(num)",
(Addr)&args->num, sizeof(args->num));
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2(dom)",
(Addr)&args->dom, sizeof(args->dom));
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2(addr)",
(Addr)&args->addr, sizeof(args->addr));
- PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2",
+ PRE_MEM_READ("VKI_XEN_IOCTL_PRIVCMD_MMAPBATCH_V2(arr)",
(Addr)args->arr, sizeof(*(args->arr)) * args->num);
break;
}
--
1.7.2.5
|
|
From: Ian C. <Ian...@ci...> - 2013-06-12 13:42:44
|
The following small set of patches updates valgrind for the interface
changes made in Xen 4.3.

With this and the two xl patches I just sent to the xen-devel list, the
xl list, create (PV guest) and destroy commands are leak free.

Ian.
|
From: Sebastian F. <seb...@gm...> - 2013-06-12 11:10:17
|
On Mon, Jun 10, 2013 at 2:26 PM, Julian Seward <js...@ac...> wrote:
>
> Does anybody still use either of these debuginfo formats?  Is it
> going to be a big deal if support for them is dropped?

Yes, stabs support is still needed:

1. Not all compilers support DWARF.
2. A lot of tools like IBM's Purify do stabs well but their DWARF
   support sucks. To use both tools on a project we'd have to do two
   builds instead of one if there were no stabs support anymore.
3. A lot of commercial applications and libraries, even those for
   Linux, are using stabs. Don't ask me for the reason, they just do it.

Would a stabs-less valgrind still work if one or more shared libraries
had stabs instead of DWARF?
|
From: Julian S. <js...@ac...> - 2013-06-12 09:08:18
|
On 06/10/2013 02:58 PM, Mark Wielaard wrote:
> On Mon, 2013-06-10 at 14:26 +0200, Julian Seward wrote:
>> Does anybody still use either of these debuginfo formats?  Is it
>> going to be a big deal if support for them is dropped?
>
> I personally think it won't be a big deal, at least not for GNU/Linux
> systems. I don't know of anybody still using DWARF-1 and as far as I
> know no distribution has been using STABS for years. STABS is currently
> still supported in GCC.

I would be happy to make stabs reading work in the new framework; it's
probably pretty easy.  My concern mostly is to have a way to check I
didn't break anything.  I suppose one option is to build something with
gcc-4.7 -gstabs and check that the old and new readers produce the same
debugging output.

Better suggestions, and/or offers to test it properly, gratefully
received.

J