From: Dong, E. <edd...@in...> - 2008-03-20 09:13:47
Jeremy & all: The current Xen kernel code is in arch/x86/xen, but the Xen dynamic irqchip (events.c) is common to other architectures such as IA64. We are in the process of enabling pv_ops for IA64 now and want to reuse the same code. Do we need to move the code to some common place? Suggestions? Thanks, eddie
From: Zhang, X. <xia...@in...> - 2008-03-20 08:13:12
Avi Kivity wrote: > Zhang, Xiantao wrote: >> Avi Kivity wrote: >> > > I see. ./configure --with-patched-kernel should work for that, but I > have no issue with copying include/asm-ia64 either. Copying would be ugly, since it needs extra documentation to describe. If --with-patched-kernel can call a script, that should be fine as well. Xiantao
From: Avi K. <av...@qu...> - 2008-03-20 07:39:28
Zhang, Xiantao wrote: > Avi Kivity wrote: > >> Zhang, Xiantao wrote: >> >>> Avi Kivity wrote: >>> >>> >>>> Zhang, Xiantao wrote: >>>> >>>> >>>>> Hi, Avi >>>>> Currently, make sync in userspace only syncs x86-specific heads >>>>> from kernel source due to hard-coded in Makefile. >>>>> Do you have plan to provide cross-arch support for that? >>>>> >>>>> >>>> No plans. I'll apply patches though. But don't you need kernel >>>> changes which make it impossible to run kvm-ia64 on older kernels? >>>> >>>> >>>> >>>>> Other archs may >>>>> need it for save/restore :) >>>>> >>>>> >>>>> >>>> Save/restore? Don't understand. >>>> >>>> >>> You know, currently make sync would sync header files to userspace >>> from include/asm-x86/, so kvm.h and kvm_host.h are always synced >>> from there for any archs. Since some arch-specific stuff for >>> save/restore should be defined in include/asm-$arch/(kvm.h; >>> kvm_host.h), so ia64 or other archs should need it when they >>> implement save/restore. >>> >> I see. But is 'make sync' actually useful for you? Can you run >> kvm-ia64 on top of 2.6.24, which doesn't include your ia64 core API >> changes? >> > > Now we don't intend to provide support for kernel which is older than > 2.6.24. And we don't want to compile kernel module in userspace. > But at least we need to ensure "make sync" work first, because we need > it to guarantee Qemu to use right header files for its compilation. > Xiantao > I see. ./configure --with-patched-kernel should work for that, but I have no issue with copying include/asm-ia64 either. -- Any sufficiently difficult bug is indistinguishable from a feature. |
From: Zhang, X. <xia...@in...> - 2008-03-20 07:25:00
Avi Kivity wrote: > Zhang, Xiantao wrote: >> Avi Kivity wrote: >> >>> Zhang, Xiantao wrote: >>> >>>> Hi, Avi >>>> Currently, make sync in userspace only syncs x86-specific heads >>>> from kernel source due to hard-coded in Makefile. >>>> Do you have plan to provide cross-arch support for that? >>>> >>> No plans. I'll apply patches though. But don't you need kernel >>> changes which make it impossible to run kvm-ia64 on older kernels? >>> >>> >>>> Other archs may >>>> need it for save/restore :) >>>> >>>> >>> Save/restore? Don't understand. >>> >> >> You know, currently make sync would sync header files to userspace >> from include/asm-x86/, so kvm.h and kvm_host.h are always synced >> from there for any archs. Since some arch-specific stuff for >> save/restore should be defined in include/asm-$arch/(kvm.h; >> kvm_host.h), so ia64 or other archs should need it when they >> implement save/restore. > > I see. But is 'make sync' actually useful for you? Can you run > kvm-ia64 on top of 2.6.24, which doesn't include your ia64 core API > changes? Now we don't intend to provide support for kernel which is older than 2.6.24. And we don't want to compile kernel module in userspace. But at least we need to ensure "make sync" work first, because we need it to guarantee Qemu to use right header files for its compilation. Xiantao |
From: Avi K. <av...@qu...> - 2008-03-20 07:00:05
Zhang, Xiantao wrote: > Avi Kivity wrote: > >> Zhang, Xiantao wrote: >> >>> Hi, Avi >>> Currently, make sync in userspace only syncs x86-specific heads from >>> kernel source due to hard-coded in Makefile. >>> Do you have plan to provide cross-arch support for that? >>> >> No plans. I'll apply patches though. But don't you need kernel >> changes which make it impossible to run kvm-ia64 on older kernels? >> >> >>> Other archs may >>> need it for save/restore :) >>> >>> >> Save/restore? Don't understand. >> > > You know, currently make sync would sync header files to userspace from > include/asm-x86/, so kvm.h and kvm_host.h are always synced from there > for any archs. Since some arch-specific stuff for save/restore should be > defined in include/asm-$arch/(kvm.h; kvm_host.h), so ia64 or other archs > should need it when they implement save/restore. I see. But is 'make sync' actually useful for you? Can you run kvm-ia64 on top of 2.6.24, which doesn't include your ia64 core API changes? Note you will also need to add preempt notifier emulation for older kernels. -- Any sufficiently difficult bug is indistinguishable from a feature. |
From: Zhang, X. <xia...@in...> - 2008-03-20 06:38:05
Avi Kivity wrote: > Zhang, Xiantao wrote: >> Hi, Avi >> Currently, make sync in userspace only syncs x86-specific heads from >> kernel source due to hard-coded in Makefile. >> Do you have plan to provide cross-arch support for that? > > No plans. I'll apply patches though. But don't you need kernel > changes which make it impossible to run kvm-ia64 on older kernels? > >> Other archs may >> need it for save/restore :) >> > > Save/restore? Don't understand. You know, currently make sync would sync header files to userspace from include/asm-x86/, so kvm.h and kvm_host.h are always synced from there for any archs. Since some arch-specific stuff for save/restore should be defined in include/asm-$arch/(kvm.h; kvm_host.h), so ia64 or other archs should need it when they implement save/restore. Xiantao |
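To make the point about arch-specific headers concrete, here is a rough, hypothetical sketch of the kind of userspace-visible definitions an include/asm-ia64/kvm.h could carry for save/restore. The structure name mirrors the x86 header, but the field layout below is invented purely for illustration and is not the real ia64 ABI; it only shows why "make sync" has to copy headers from include/asm-$arch instead of always from include/asm-x86.

/* hypothetical sketch -- not the actual include/asm-ia64/kvm.h */
#ifndef __ASM_IA64_KVM_H
#define __ASM_IA64_KVM_H

#include <linux/types.h>

/*
 * Register state exchanged with userspace (e.g. by KVM_GET_REGS /
 * KVM_SET_REGS) during save/restore.  Every architecture defines its
 * own layout, so userspace must be built against the header for the
 * architecture it actually runs on.
 */
struct kvm_regs {
	__u64 gr[32];	/* general registers (illustrative only) */
	__u64 br[8];	/* branch registers (illustrative only)  */
	__u64 ip;	/* instruction pointer                   */
	__u64 psr;	/* processor status register             */
};

#endif /* __ASM_IA64_KVM_H */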
From: Avi K. <av...@qu...> - 2008-03-20 05:48:40
Zhang, Xiantao wrote: > Hi, Avi > Currently, make sync in userspace only syncs x86-specific heads from > kernel source due to hard-coded in Makefile. > Do you have plan to provide cross-arch support for that? No plans. I'll apply patches though. But don't you need kernel changes which make it impossible to run kvm-ia64 on older kernels? > Other archs may > need it for save/restore :) > Save/restore? Don't understand. -- Any sufficiently difficult bug is indistinguishable from a feature. |
From: Zhang, X. <xia...@in...> - 2008-03-20 02:44:29
From: Xiantao Zhang <xia...@in...>
Date: Thu, 20 Mar 2008 10:17:29 +0800
Subject: [PATCH] kvm:qemu: qemu_system_cpu_hot_add not supported for ia64.

Comment it out first for ia64 build.

Signed-off-by: Xiantao Zhang <xia...@in...>
---
 qemu/hw/acpi.c |    4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/qemu/hw/acpi.c b/qemu/hw/acpi.c
index ae74f32..35641a0 100644
--- a/qemu/hw/acpi.c
+++ b/qemu/hw/acpi.c
@@ -718,7 +718,7 @@ static void disable_processor(struct gpe_regs *g, int cpu)
     g->en |= 1;
     g->down |= (1 << cpu);
 }
-
+#if defined(TARGET_I386) || defined(TARGET_X86_64)
 void qemu_system_cpu_hot_add(int cpu, int state)
 {
     CPUState *env;
@@ -743,7 +743,7 @@ void qemu_system_cpu_hot_add(int cpu, int state)
         disable_processor(&gpe, cpu);
     qemu_set_irq(pm_state->irq, 0);
 }
-
+#endif
 static void enable_device(struct pci_status *p, struct gpe_regs *g, int slot)
 {
     g->sts |= 2;
-- 
1.5.2
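A hedged aside on the design choice: instead of compiling the function out entirely, a non-x86 target could also keep a no-op stub so that callers of qemu_system_cpu_hot_add() still link without per-call-site guards. The sketch below only assumes qemu's usual TARGET_I386/TARGET_X86_64 macros; the x86 branch is a placeholder, not the real implementation.

#include <stdio.h>

#if defined(TARGET_I386) || defined(TARGET_X86_64)
void qemu_system_cpu_hot_add(int cpu, int state)
{
    /* placeholder: the real x86 version toggles ACPI GPE bits and
     * raises an SCI so the guest notices the hot-(un)plug */
    printf("hot-%s cpu %d\n", state ? "add" : "remove", cpu);
}
#else
void qemu_system_cpu_hot_add(int cpu, int state)
{
    /* CPU hot-add is not wired up for this target: quietly ignore */
    (void)cpu;
    (void)state;
}
#endif

int main(void)
{
    qemu_system_cpu_hot_add(1, 1);   /* safe to call on any target */
    return 0;
}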
From: Zhang, X. <xia...@in...> - 2008-03-20 02:36:12
Hi, Avi Currently, make sync in userspace only syncs x86-specific heads from kernel source due to hard-coded in Makefile. Do you have plan to provide cross-arch support for that? Other archs may need it for save/restore :) Thanks Xiantao |
From: Zhang, X. <xia...@in...> - 2008-03-20 02:16:22
Hi, Jes Do you see the commit named "kvm: qemu: live migration for tpr optimization"? I mean this commit :) Xiantao -----Original Message----- From: Jes Sorensen [mailto:je...@sg...] Sent: March 19, 2008 23:38 To: Zhang, Xiantao Subject: Re: [kvm-ia64-devel] kvm-ia64.git is created on master.kernel.org! Zhang, Xiantao wrote: > Maybe you can try 5ffc6784843600e72d78dd5609e0f7861a3f0e2d, it should be > workable . [jes@leavenworth kvm-ia64.git]$ git reset --hard 5ffc6784843600e72d78dd5609e0f7861a3f0e2d fatal: Could not parse object '5ffc6784843600e72d78dd5609e0f7861a3f0e2d'. [jes@leavenworth kvm-ia64.git]$ Hi Xiantao, Did you give me the right commit ID? Cheers, Jes
From: Jes S. <je...@sg...> - 2008-03-19 10:30:38
Zhang, Xiantao wrote: > Since I don't know how old the Qemu you are using, I can't say yes or > no. Generally, we need to use the latest source to build, but seems it > is broken for ia64 due to recent merge with Qemu upstream. > > BTW, Do you see any error info in the shell ? Are you sure the kvm > guest is created or just run qemu with kvm support? You can check it > with dmesg in host. > If a new guest is created, it will generate some debug info in kernel > log. :) > > Maybe you can try 5ffc6784843600e72d78dd5609e0f7861a3f0e2d, it should be > workable . Hi Xiantao, I get this in the dmesg log: device tap0 entered promiscuous mode kvm:vm data base address:0xe000006059000000 kvm: vcpu:e000006059d00000,ivt: 0xd000000000018000 I presume KVM is doing something :-) My QEMU isn't that old, but I'll try and revert to that patch. Thanks, Jes |
From: Zhang, X. <xia...@in...> - 2008-03-19 10:27:47
Jes Sorensen wrote: >>>>>> "Xiantao" == Zhang, Xiantao <xia...@in...> writes: > > Xiantao> Hi, Jes You need to checkout a remote branch with the > Xiantao> following command: git branch --track kvm-ia64-mc4 > Xiantao> origin/kvm-ia64-mc4 git checkout kvm-ia64-mc4 Xiantao > > Gotcha! > > Seems to work! > > However when I launch qemu with the new patches, I just get a white > screen in vncviewer - do I need a new version of qemu to match the new > kvm patches? Hi, Jes Since I don't know how old the Qemu you are using, I can't say yes or no. Generally, we need to use the latest source to build, but seems it is broken for ia64 due to recent merge with Qemu upstream. BTW, Do you see any error info in the shell ? Are you sure the kvm guest is created or just run qemu with kvm support? You can check it with dmesg in host. If a new guest is created, it will generate some debug info in kernel log. :) Maybe you can try 5ffc6784843600e72d78dd5609e0f7861a3f0e2d, it should be workable . Thanks Xiantao |
From: Jes S. <je...@sg...> - 2008-03-19 10:08:25
>>>>> "Xiantao" == Zhang, Xiantao <xia...@in...> writes: Xiantao> Hi, Jes You need to checkout a remote branch with the Xiantao> following command: git branch --track kvm-ia64-mc4 Xiantao> origin/kvm-ia64-mc4 git checkout kvm-ia64-mc4 Xiantao Gotcha! Seems to work! However when I launch qemu with the new patches, I just get a white screen in vncviewer - do I need a new version of qemu to match the new kvm patches? Cheers, Jes |
From: Zhang, X. <xia...@in...> - 2008-03-18 02:07:15
Jes Sorensen wrote: >>>>>> "Xiantao" == Zhang, Xiantao <xia...@in...> writes: > > Xiantao> Hi, guys We have created kvm-ia64.git on master.kernel.org > Xiantao> for open development, and the latest source is also included > Xiantao> in this repository. So you can clone and make contributions > Xiantao> to it now. Cheers!! In this repository, I created the > Xiantao> branch kvm-ia64-mc4 to hold the patchset. Now, the whole > Xiantao> community had better work on the branch together for > Xiantao> reviewing code, doing cleanup, and adding the new features. > Xiantao> If you have any contribution or questions, please feel free > Xiantao> to submit to the kvm-ia64 mailing > Xiantao> > list(https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel). > > Hi Xiantao. > > Just back from vacation for a week, this is great news! > > How do I check out the tree with the ia64 patches in it? If I just do > a clone then I seem to get a copy of Avi's tree. Hi, Jes You need to checkout a remote branch with the following command: git branch --track kvm-ia64-mc4 origin/kvm-ia64-mc4 git checkout kvm-ia64-mc4 Xiantao |
From: Jes S. <je...@sg...> - 2008-03-17 13:59:35
>>>>> "Xiantao" == Zhang, Xiantao <xia...@in...> writes: Xiantao> Hi, guys We have created kvm-ia64.git on master.kernel.org Xiantao> for open development, and the latest source is also included Xiantao> in this repository. So you can clone and make contributions Xiantao> to it now. Cheers!! In this repository, I created the Xiantao> branch kvm-ia64-mc4 to hold the patchset. Now, the whole Xiantao> community had better work on the branch together for Xiantao> reviewing code, doing cleanup, and adding the new features. Xiantao> If you have any contribution or questions, please feel free Xiantao> to submit to the kvm-ia64 mailing Xiantao> list(https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel). Hi Xiantao. Just back from vacation for a week, this is great news! How do I check out the tree with the ia64 patches in it? If I just do a clone then I seem to get a copy of Avi's tree. Thanks, Jes |
From: Tristan G. <tgi...@fr...> - 2008-03-14 05:19:58
On Fri, Mar 14, 2008 at 09:53:49AM +0800, Zhang, Xiantao wrote: > I checked it on the kvm side. It also works well. :) Good news! Thank you for the report. Tristan.
From: Zhang, X. <xia...@in...> - 2008-03-14 01:54:14
I checked it on the kvm side. It also works well. :) Xiantao -----Original Message----- From: xen...@li... [mailto:xen...@li...] On Behalf Of Tristan Gingold Sent: March 13, 2008 20:46 To: Xen-ia64-devel Subject: [Xen-ia64-devel] GFW release Hi, I have just updated the GFW binary. Please test it. If it is OK, it will be used for the official GFW release. [I made this patch before adding INIT support. Should I make a new release?] Tristan. _______________________________________________ Xen-ia64-devel mailing list Xen...@li... http://lists.xensource.com/xen-ia64-devel
From: Zhang, X. <xia...@in...> - 2008-03-12 14:00:02
Akio Takebe wrote: > Hi, Xiantao > >> We have created kvm-ia64.git on master.kernel.org for open >> development, and the latest source is also included in this >> repository. So you can clone and make contributions to it now. >> Cheers!! >> In this repository, I created the branch kvm-ia64-mc4 to hold the >> patchset. Now, the whole community had better work on the branch >> together for reviewing code, doing cleanup, and adding the new >> features. If you have any contribution or questions, please feel >> free to submit to the kvm-ia64 mailing >> list(https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel). > Wow, great! > Can we use the same userspace tree as x86? Yes, but it seems to be broken on the ia64 side due to the latest merge with qemu upstream. > Are save/restore already available? It needs a userspace patch. I enabled save & restore without the dirty-log mechanism, but it breaks after adding dirty logging, so it needs more debugging effort. Xiantao
From: Akio T. <tak...@jp...> - 2008-03-12 10:38:26
Hi, Xiantao >We have created kvm-ia64.git on master.kernel.org for open development, >and the latest source is also included in this repository. So you can >clone and make contributions to it now. Cheers!! >In this repository, I created the branch kvm-ia64-mc4 to hold the >patchset. Now, the whole community had better work on the branch >together for reviewing code, doing cleanup, and adding the new features. >If you have any contribution or questions, please feel free to submit to >the kvm-ia64 mailing >list(https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel). Wow, great! Can we use the same userspace tree as x86? Are save/restore already available? Best Regards, Akio Takebe
From: Zhang, X. <xia...@in...> - 2008-03-12 10:22:54
Hi, guys We have created kvm-ia64.git on master.kernel.org for open development, and the latest source is also included in this repository. So you can clone and make contributions to it now. Cheers!! In this repository, I created the branch kvm-ia64-mc4 to hold the patchset. Now, the whole community had better work on the branch together for reviewing code, doing cleanup, and adding the new features. If you have any contribution or questions, please feel free to submit to the kvm-ia64 mailing list(https://lists.sourceforge.net/lists/listinfo/kvm-ia64-devel). BTW, since the 2.6.26 merge window is coming, we have to prepare a clean and mature tree before it opens. Welcome to join kvm/ia64 development! Thanks for any contributions! Xiantao
From: Hollis B. <ho...@us...> - 2008-03-11 18:25:15
On Fri, 2008-03-07 at 20:52 +0800, Yang, Sheng wrote: > From 98543bb3c3821e5bc9003bb91d7d0c755394ffac Mon Sep 17 00:00:00 2001 > From: Sheng Yang <she...@in...> > Date: Fri, 7 Mar 2008 14:24:32 +0800 > Subject: [PATCH] kvm: qemu: Add option for enable/disable in kernel PIT This patch breaks all non-x86 architectures, since common code now calls functions defined only in libkvm-x86.c. -- Hollis Blanchard IBM Linux Technology Center
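One hedged way to avoid this class of breakage, sketched below, is to give the x86-only entry point a weak fallback in common code, so that libkvm-x86.c overrides it where it exists while other architectures get a clean "not supported" error at runtime instead of a link failure. The function and type names here are assumptions for illustration, not libkvm's actual API.

#include <errno.h>
#include <stddef.h>
#include <stdio.h>

struct kvm_context;              /* opaque stand-in for libkvm's context type */

/* weak default; a strong definition in an x86-only file would override it */
__attribute__((weak)) int kvm_create_pit(struct kvm_context *kvm)
{
    (void)kvm;
    return -ENOSYS;              /* no in-kernel PIT on this architecture */
}

int main(void)
{
    /* common code can call unconditionally and fall back gracefully */
    if (kvm_create_pit(NULL) == -ENOSYS)
        fprintf(stderr, "in-kernel PIT unavailable, using userspace PIT\n");
    return 0;
}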
From: Rami T. <ra...@qu...> - 2008-03-11 15:18:57
We'd like to invite all of you to attend the second annual KVM Forum. Following the success of last year's event, we'd like to keep the format similar. The purpose of the forum is to bring together developers, testers and other technical individuals from within the community to discuss the state of KVM today. We will also review and shape the future roadmap of KVM, examine development challenges, and agree on modes of collaboration needed for common development. The KVM Forum 2008 will also give developers an opportunity to update the community on the work that they are doing and coordinate efforts for the betterment of KVM and Linux virtualization. Please reserve these dates: the event will take place June 11th - 13th at the Marriott Napa Valley, California, USA. For those of you who want to get there earlier, we will be holding a cocktail reception on the evening of June 10th. The registration web site will be up shortly, as will the call for papers. For suggestions and comments, please e-mail kvm...@qu....
From: Jes S. <je...@sg...> - 2008-03-06 15:28:36
>>>>> "Jes" == Jes Sorensen <je...@sg...> writes: Jes> Yes I am using Tristan's firmware. The thing never gets anywhere, Jes> I just get a blank screen :-( Ok, did some more digging. The latest kernel I am able to build myself and boot is 2.6.22. Looks like I need to do some git bisect'ing. Cheers, Jes |
From: Isaku Y. <yam...@va...> - 2008-03-05 18:19:32
Signed-off-by: Isaku Yamahata <yam...@va...> --- arch/ia64/xen/Makefile | 2 +- arch/ia64/xen/hypercall.S | 10 + arch/ia64/xen/irq_xen.c | 435 ++++++++++++++++++++++++++++++++++++++++++++ arch/ia64/xen/irq_xen.h | 8 + arch/ia64/xen/xen_pv_ops.c | 3 + include/asm-ia64/hw_irq.h | 4 + include/asm-ia64/irq.h | 33 ++++ 7 files changed, 494 insertions(+), 1 deletions(-) create mode 100644 arch/ia64/xen/irq_xen.c create mode 100644 arch/ia64/xen/irq_xen.h diff --git a/arch/ia64/xen/Makefile b/arch/ia64/xen/Makefile index 4b1db56..ff7a58d 100644 --- a/arch/ia64/xen/Makefile +++ b/arch/ia64/xen/Makefile @@ -2,7 +2,7 @@ # Makefile for Xen components # -obj-y := xen_pv_ops.o +obj-y := xen_pv_ops.o irq_xen.o obj-$(CONFIG_PARAVIRT_ALT) += paravirt_xen.o privops_asm.o privops_c.o obj-$(CONFIG_PARAVIRT_NOP_B_PATCH) += paravirt_xen.o diff --git a/arch/ia64/xen/hypercall.S b/arch/ia64/xen/hypercall.S index 7c5242b..3fad2fe 100644 --- a/arch/ia64/xen/hypercall.S +++ b/arch/ia64/xen/hypercall.S @@ -123,6 +123,16 @@ END(xen_set_eflag) #endif /* CONFIG_IA32_SUPPORT */ #endif /* ASM_SUPPORTED */ +GLOBAL_ENTRY(xen_send_ipi) + mov r14=r32 + mov r15=r33 + mov r2=0x400 + break 0x1000 + ;; + br.ret.sptk.many rp + ;; +END(xen_send_ipi) + GLOBAL_ENTRY(__hypercall) mov r2=r37 break 0x1000 diff --git a/arch/ia64/xen/irq_xen.c b/arch/ia64/xen/irq_xen.c new file mode 100644 index 0000000..57fab2b --- /dev/null +++ b/arch/ia64/xen/irq_xen.c @@ -0,0 +1,435 @@ +/****************************************************************************** + * arch/ia64/xen/irq_xen.c + * + * Copyright (c) 2006 Isaku Yamahata <yamahata at valinux co jp> + * VA Linux Systems Japan K.K. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <linux/cpu.h> + +#include <xen/events.h> +#include <xen/interface/callback.h> + +#include "irq_xen.h" + +/*************************************************************************** + * pv_irq_ops + * irq operations + */ + +static int +xen_assign_irq_vector(int irq) +{ + struct physdev_irq irq_op; + + irq_op.irq = irq; + if (HYPERVISOR_physdev_op(PHYSDEVOP_alloc_irq_vector, &irq_op)) + return -ENOSPC; + + return irq_op.vector; +} + +static void +xen_free_irq_vector(int vector) +{ + struct physdev_irq irq_op; + + if (vector < IA64_FIRST_DEVICE_VECTOR || + vector > IA64_LAST_DEVICE_VECTOR) + return; + + irq_op.vector = vector; + if (HYPERVISOR_physdev_op(PHYSDEVOP_free_irq_vector, &irq_op)) + printk(KERN_WARNING "%s: xen_free_irq_vecotr fail vector=%d\n", + __func__, vector); +} + + +static DEFINE_PER_CPU(int, timer_irq) = -1; +static DEFINE_PER_CPU(int, ipi_irq) = -1; +static DEFINE_PER_CPU(int, resched_irq) = -1; +static DEFINE_PER_CPU(int, cmc_irq) = -1; +static DEFINE_PER_CPU(int, cmcp_irq) = -1; +static DEFINE_PER_CPU(int, cpep_irq) = -1; +#define NAME_SIZE 15 +static DEFINE_PER_CPU(char[NAME_SIZE], timer_name); +static DEFINE_PER_CPU(char[NAME_SIZE], ipi_name); +static DEFINE_PER_CPU(char[NAME_SIZE], resched_name); +static DEFINE_PER_CPU(char[NAME_SIZE], cmc_name); +static DEFINE_PER_CPU(char[NAME_SIZE], cmcp_name); +static DEFINE_PER_CPU(char[NAME_SIZE], cpep_name); +#undef NAME_SIZE + +struct saved_irq { + unsigned int irq; + struct irqaction *action; +}; +/* 16 should be far optimistic value, since only several percpu irqs + * are registered early. + */ +#define MAX_LATE_IRQ 16 +static struct saved_irq saved_percpu_irqs[MAX_LATE_IRQ]; +static unsigned short late_irq_cnt = 0; +static unsigned short saved_irq_cnt = 0; +static int xen_slab_ready = 0; + +#ifdef CONFIG_SMP +/* Dummy stub. Though we may check RESCHEDULE_VECTOR before __do_IRQ, + * it ends up to issue several memory accesses upon percpu data and + * thus adds unnecessary traffic to other paths. + */ +static irqreturn_t +xen_dummy_handler(int irq, void *dev_id) +{ + + return IRQ_HANDLED; +} + +static struct irqaction xen_resched_irqaction = { + .handler = xen_dummy_handler, + .flags = IRQF_DISABLED, + .name = "resched" +}; + +static struct irqaction xen_tlb_irqaction = { + .handler = xen_dummy_handler, + .flags = IRQF_DISABLED, + .name = "tlb_flush" +}; +#endif + +/* + * This is xen version percpu irq registration, which needs bind + * to xen specific evtchn sub-system. One trick here is that xen + * evtchn binding interface depends on kmalloc because related + * port needs to be freed at device/cpu down. So we cache the + * registration on BSP before slab is ready and then deal them + * at later point. For rest instances happening after slab ready, + * we hook them to xen evtchn immediately. + * + * FIXME: MCA is not supported by far, and thus "nomca" boot param is + * required. 
+ */ +static void +__xen_register_percpu_irq(unsigned int cpu, unsigned int vec, + struct irqaction *action, int save) +{ + irq_desc_t *desc; + int irq = 0; + + if (xen_slab_ready) { + switch (vec) { + case IA64_TIMER_VECTOR: + snprintf(per_cpu(timer_name, cpu), + sizeof(per_cpu(timer_name, cpu)), + "%s%d", action->name, cpu); + irq = bind_virq_to_irqhandler(VIRQ_ITC, cpu, + action->handler, action->flags, + per_cpu(timer_name, cpu), action->dev_id); + per_cpu(timer_irq, cpu) = irq; + break; + case IA64_IPI_RESCHEDULE: + snprintf(per_cpu(resched_name, cpu), + sizeof(per_cpu(resched_name, cpu)), + "%s%d", action->name, cpu); + irq = bind_ipi_to_irqhandler(RESCHEDULE_VECTOR, cpu, + action->handler, action->flags, + per_cpu(resched_name, cpu), action->dev_id); + per_cpu(resched_irq, cpu) = irq; + break; + case IA64_IPI_VECTOR: + snprintf(per_cpu(ipi_name, cpu), + sizeof(per_cpu(ipi_name, cpu)), + "%s%d", action->name, cpu); + irq = bind_ipi_to_irqhandler(IPI_VECTOR, cpu, + action->handler, action->flags, + per_cpu(ipi_name, cpu), action->dev_id); + per_cpu(ipi_irq, cpu) = irq; + break; + case IA64_CMC_VECTOR: + snprintf(per_cpu(cmc_name, cpu), + sizeof(per_cpu(cmc_name, cpu)), + "%s%d", action->name, cpu); + irq = bind_virq_to_irqhandler(VIRQ_MCA_CMC, cpu, + action->handler, + action->flags, + per_cpu(cmc_name, cpu), + action->dev_id); + per_cpu(cmc_irq, cpu) = irq; + break; + case IA64_CMCP_VECTOR: + snprintf(per_cpu(cmcp_name, cpu), + sizeof(per_cpu(cmcp_name, cpu)), + "%s%d", action->name, cpu); + irq = bind_ipi_to_irqhandler(CMCP_VECTOR, cpu, + action->handler, + action->flags, + per_cpu(cmcp_name, cpu), + action->dev_id); + per_cpu(cmcp_irq, cpu) = irq; + break; + case IA64_CPEP_VECTOR: + snprintf(per_cpu(cpep_name, cpu), + sizeof(per_cpu(cpep_name, cpu)), + "%s%d", action->name, cpu); + irq = bind_ipi_to_irqhandler(CPEP_VECTOR, cpu, + action->handler, + action->flags, + per_cpu(cpep_name, cpu), + action->dev_id); + per_cpu(cpep_irq, cpu) = irq; + break; + case IA64_CPE_VECTOR: + case IA64_MCA_RENDEZ_VECTOR: + case IA64_PERFMON_VECTOR: + case IA64_MCA_WAKEUP_VECTOR: + case IA64_SPURIOUS_INT_VECTOR: + /* No need to complain, these aren't supported. */ + break; + default: + printk(KERN_WARNING "Percpu irq %d is unsupported " + "by xen!\n", vec); + break; + } + BUG_ON(irq < 0); + + if (irq > 0) { + /* + * Mark percpu. Without this, migrate_irqs() will + * mark the interrupt for migrations and trigger it + * on cpu hotplug. + */ + desc = irq_desc + irq; + desc->status |= IRQ_PER_CPU; + } + } + + /* For BSP, we cache registered percpu irqs, and then re-walk + * them when initializing APs + */ + if (!cpu && save) { + BUG_ON(saved_irq_cnt == MAX_LATE_IRQ); + saved_percpu_irqs[saved_irq_cnt].irq = vec; + saved_percpu_irqs[saved_irq_cnt].action = action; + saved_irq_cnt++; + if (!xen_slab_ready) + late_irq_cnt++; + } +} + +static void +xen_register_percpu_irq(ia64_vector vec, struct irqaction *action) +{ + __xen_register_percpu_irq(smp_processor_id(), vec, action, 1); +} + +static void +xen_bind_early_percpu_irq(void) +{ + int i; + + xen_slab_ready = 1; + /* There's no race when accessing this cached array, since only + * BSP will face with such step shortly + */ + for (i = 0; i < late_irq_cnt; i++) + __xen_register_percpu_irq(smp_processor_id(), + saved_percpu_irqs[i].irq, + saved_percpu_irqs[i].action, 0); +} + +/* FIXME: There's no obvious point to check whether slab is ready. So + * a hack is used here by utilizing a late time hook. 
+ */ +extern void (*late_time_init)(void); +extern char xen_event_callback; +extern void xen_init_IRQ(void); + +#ifdef CONFIG_HOTPLUG_CPU +static int __devinit +unbind_evtchn_callback(struct notifier_block *nfb, + unsigned long action, void *hcpu) +{ + unsigned int cpu = (unsigned long)hcpu; + + if (action == CPU_DEAD) { + /* Unregister evtchn. */ + if (per_cpu(cpep_irq, cpu) >= 0) { + unbind_from_irqhandler(per_cpu(cpep_irq, cpu), NULL); + per_cpu(cpep_irq, cpu) = -1; + } + if (per_cpu(cmcp_irq, cpu) >= 0) { + unbind_from_irqhandler(per_cpu(cmcp_irq, cpu), NULL); + per_cpu(cmcp_irq, cpu) = -1; + } + if (per_cpu(cmc_irq, cpu) >= 0) { + unbind_from_irqhandler(per_cpu(cmc_irq, cpu), NULL); + per_cpu(cmc_irq, cpu) = -1; + } + if (per_cpu(ipi_irq, cpu) >= 0) { + unbind_from_irqhandler(per_cpu(ipi_irq, cpu), NULL); + per_cpu(ipi_irq, cpu) = -1; + } + if (per_cpu(resched_irq, cpu) >= 0) { + unbind_from_irqhandler(per_cpu(resched_irq, cpu), + NULL); + per_cpu(resched_irq, cpu) = -1; + } + if (per_cpu(timer_irq, cpu) >= 0) { + unbind_from_irqhandler(per_cpu(timer_irq, cpu), NULL); + per_cpu(timer_irq, cpu) = -1; + } + } + return NOTIFY_OK; +} + +static struct notifier_block unbind_evtchn_notifier = { + .notifier_call = unbind_evtchn_callback, + .priority = 0 +}; +#endif + +DECLARE_PER_CPU(int, ipi_to_irq[NR_IPIS]); +void xen_smp_intr_init_early(unsigned int cpu) +{ +#ifdef CONFIG_SMP + unsigned int i; + + for (i = 0; i < saved_irq_cnt; i++) + __xen_register_percpu_irq(cpu, saved_percpu_irqs[i].irq, + saved_percpu_irqs[i].action, 0); +#endif +} + +void xen_smp_intr_init(void) +{ +#ifdef CONFIG_SMP + unsigned int cpu = smp_processor_id(); + struct callback_register event = { + .type = CALLBACKTYPE_event, + .address = (unsigned long)&xen_event_callback, + }; + + if (cpu == 0) { + /* Initialization was already done for boot cpu. */ +#ifdef CONFIG_HOTPLUG_CPU + /* Register the notifier only once. */ + register_cpu_notifier(&unbind_evtchn_notifier); +#endif + return; + } + + /* This should be piggyback when setup vcpu guest context */ + BUG_ON(HYPERVISOR_callback_op(CALLBACKOP_register, &event)); +#endif /* CONFIG_SMP */ +} + +void __init +xen_irq_init(void) +{ + struct callback_register event = { + .type = CALLBACKTYPE_event, + .address = (unsigned long)&xen_event_callback, + }; + + xen_init_IRQ(); + BUG_ON(HYPERVISOR_callback_op(CALLBACKOP_register, &event)); + late_time_init = xen_bind_early_percpu_irq; +} + +void +xen_platform_send_ipi(int cpu, int vector, int delivery_mode, int redirect) +{ + int irq = -1; + +#ifdef CONFIG_SMP + /* TODO: we need to call vcpu_up here */ + if (unlikely(vector == ap_wakeup_vector)) { + /* XXX + * This should be in __cpu_up(cpu) in ia64 smpboot.c + * like x86. But don't want to modify it, + * keep it untouched. 
+ */ + xen_smp_intr_init_early(cpu); + + xen_send_ipi(cpu, vector); + /* vcpu_prepare_and_up(cpu); */ + return; + } +#endif + + switch (vector) { + case IA64_IPI_VECTOR: + irq = per_cpu(ipi_to_irq, cpu)[IPI_VECTOR]; + break; + case IA64_IPI_RESCHEDULE: + irq = per_cpu(ipi_to_irq, cpu)[RESCHEDULE_VECTOR]; + break; + case IA64_CMCP_VECTOR: + irq = per_cpu(ipi_to_irq, cpu)[CMCP_VECTOR]; + break; + case IA64_CPEP_VECTOR: + irq = per_cpu(ipi_to_irq, cpu)[CPEP_VECTOR]; + break; + default: + printk(KERN_WARNING "Unsupported IPI type 0x%x\n", + vector); + irq = 0; + break; + } + + BUG_ON(irq < 0); + notify_remote_via_irq(irq); + return; +} + +static void __init +xen_init_IRQ_early(void) +{ +#ifdef CONFIG_SMP + register_percpu_irq(IA64_IPI_RESCHEDULE, &xen_resched_irqaction); + register_percpu_irq(IA64_IPI_LOCAL_TLB_FLUSH, &xen_tlb_irqaction); +#endif +} + +static void __init +xen_init_IRQ_late(void) +{ +#ifdef CONFIG_XEN_PRIVILEGED_GUEST + if (is_running_on_xen() && !ia64_platform_is("xen")) + xen_irq_init(); +#endif +} + +static void +xen_resend_irq(unsigned int vector) +{ + (void)resend_irq_on_evtchn(vector); +} + +const struct pv_irq_ops xen_irq_ops __initdata = { + .init_IRQ_early = xen_init_IRQ_early, + .init_IRQ_late = xen_init_IRQ_late, + + .assign_irq_vector = xen_assign_irq_vector, + .free_irq_vector = xen_free_irq_vector, + .register_percpu_irq = xen_register_percpu_irq, + + .send_ipi = xen_platform_send_ipi, + .resend_irq = xen_resend_irq, +}; diff --git a/arch/ia64/xen/irq_xen.h b/arch/ia64/xen/irq_xen.h new file mode 100644 index 0000000..a2c3ed9 --- /dev/null +++ b/arch/ia64/xen/irq_xen.h @@ -0,0 +1,8 @@ +#ifndef IRQ_XEN_H +#define IRQ_XEN_H + +extern const struct pv_irq_ops xen_irq_ops __initdata; +extern void xen_smp_intr_init(void); +extern void xen_send_ipi(int cpu, int vec); + +#endif /* IRQ_XEN_H */ diff --git a/arch/ia64/xen/xen_pv_ops.c b/arch/ia64/xen/xen_pv_ops.c index c35bb23..93a5c64 100644 --- a/arch/ia64/xen/xen_pv_ops.c +++ b/arch/ia64/xen/xen_pv_ops.c @@ -35,6 +35,8 @@ #include <asm/xen/hypervisor.h> #include <asm/xen/xencomm.h> +#include "irq_xen.h" + /*************************************************************************** * general info */ @@ -313,4 +315,5 @@ xen_setup_pv_ops(void) pv_info = xen_info; pv_init_ops = xen_init_ops; pv_iosapic_ops = xen_iosapic_ops; + pv_irq_ops = xen_irq_ops; } diff --git a/include/asm-ia64/hw_irq.h b/include/asm-ia64/hw_irq.h index 678efec..80009cd 100644 --- a/include/asm-ia64/hw_irq.h +++ b/include/asm-ia64/hw_irq.h @@ -15,7 +15,11 @@ #include <asm/ptrace.h> #include <asm/smp.h> +#ifndef CONFIG_XEN typedef u8 ia64_vector; +#else +typedef u16 ia64_vector; +#endif /* * 0 special diff --git a/include/asm-ia64/irq.h b/include/asm-ia64/irq.h index a66d268..aead249 100644 --- a/include/asm-ia64/irq.h +++ b/include/asm-ia64/irq.h @@ -14,6 +14,7 @@ #include <linux/types.h> #include <linux/cpumask.h> +#ifndef CONFIG_XEN #define NR_VECTORS 256 #if (NR_VECTORS + 32 * NR_CPUS) < 1024 @@ -21,6 +22,38 @@ #else #define NR_IRQS 1024 #endif +#else +/* + * The flat IRQ space is divided into two regions: + * 1. A one-to-one mapping of real physical IRQs. This space is only used + * if we have physical device-access privilege. This region is at the + * start of the IRQ space so that existing device drivers do not need + * to be modified to translate physical IRQ numbers into our IRQ space. + * 3. A dynamic mapping of inter-domain and Xen-sourced virtual IRQs. These + * are bound using the provided bind/unbind functions. 
+ */ + +#define PIRQ_BASE 0 +#define NR_PIRQS 256 + +#define DYNIRQ_BASE (PIRQ_BASE + NR_PIRQS) +#define NR_DYNIRQS (CONFIG_NR_CPUS * 8) + +#define NR_IRQS (NR_PIRQS + NR_DYNIRQS) +#define NR_IRQ_VECTORS NR_IRQS + +#define pirq_to_irq(_x) ((_x) + PIRQ_BASE) +#define irq_to_pirq(_x) ((_x) - PIRQ_BASE) + +#define dynirq_to_irq(_x) ((_x) + DYNIRQ_BASE) +#define irq_to_dynirq(_x) ((_x) - DYNIRQ_BASE) + +#define RESCHEDULE_VECTOR 0 +#define IPI_VECTOR 1 +#define CMCP_VECTOR 2 +#define CPEP_VECTOR 3 +#define NR_IPIS 4 +#endif /* CONFIG_XEN */ static __inline__ int irq_canonicalize (int irq) -- 1.5.3 |
From: Isaku Y. <yam...@va...> - 2008-03-05 18:19:31
Signed-off-by: Isaku Yamahata <yam...@va...> --- arch/ia64/xen/hypervisor.c | 235 +++++++++++++++++++++++++++++++++++++ include/asm-ia64/xen/hypervisor.h | 194 ++++++++++++++++++++++++++++++ 2 files changed, 429 insertions(+), 0 deletions(-) create mode 100644 arch/ia64/xen/hypervisor.c diff --git a/arch/ia64/xen/hypervisor.c b/arch/ia64/xen/hypervisor.c new file mode 100644 index 0000000..cb4b27f --- /dev/null +++ b/arch/ia64/xen/hypervisor.c @@ -0,0 +1,235 @@ +/****************************************************************************** + * include/asm-ia64/shadow.h + * + * Copyright (c) 2006 Isaku Yamahata <yamahata at valinux co jp> + * VA Linux Systems Japan K.K. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <linux/spinlock.h> +#include <linux/bootmem.h> +#include <linux/module.h> +#include <linux/vmalloc.h> +#include <linux/efi.h> +#include <asm/page.h> +#include <asm/pgalloc.h> +#include <asm/meminit.h> +#include <asm/xen/hypervisor.h> +#include <asm/xen/hypercall.h> +#include <xen/interface/memory.h> + +#include "irq_xen.h" + +struct shared_info *HYPERVISOR_shared_info __read_mostly = + (struct shared_info *)XSI_BASE; +EXPORT_SYMBOL(HYPERVISOR_shared_info); + +DEFINE_PER_CPU(struct vcpu_info *, xen_vcpu); +#ifdef notyet +DEFINE_PER_CPU(struct vcpu_info, xen_vcpu_info); +#endif + +struct start_info *xen_start_info; +EXPORT_SYMBOL(xen_start_info); + +EXPORT_SYMBOL(running_on_xen); + +EXPORT_SYMBOL(__hypercall); + +/* Stolen from arch/x86/xen/enlighten.c */ +/* + * Flag to determine whether vcpu info placement is available on all + * VCPUs. We assume it is to start with, and then set it to zero on + * the first failure. This is because it can succeed on some VCPUs + * and not others, since it can involve hypervisor memory allocation, + * or because the guest failed to guarantee all the appropriate + * constraints on all VCPUs (ie buffer can't cross a page boundary). + * + * Note that any particular CPU may be using a placed vcpu structure, + * but we can only optimise if the all are. 
+ * + * 0: not available, 1: available + */ +#ifdef notyet +static int have_vcpu_info_placement; +#endif + +static void __init xen_vcpu_setup(int cpu) +{ +/* on Xen/IA64 VCPUOP_register_vcpu_info isn't supported */ +#ifdef notyet + struct vcpu_register_vcpu_info info; + int err; + struct vcpu_info *vcpup; +#endif + + /* + * WARNING: + * before changing MAX_VIRT_CPUS, + * check that shared_info fits on a page + */ + BUILD_BUG_ON(sizeof(struct shared_info) > PAGE_SIZE); + per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu]; + +#ifdef notyet + if (!have_vcpu_info_placement) + return; /* already tested, not available */ + + vcpup = &per_cpu(xen_vcpu_info, cpu); + + info.mfn = virt_to_mfn(vcpup); + info.offset = offset_in_page(vcpup); + + printk(KERN_DEBUG + "trying to map vcpu_info %d at %p, mfn %llx, offset %d\n", + cpu, vcpup, info.mfn, info.offset); + + /* Check to see if the hypervisor will put the vcpu_info + structure where we want it, which allows direct access via + a percpu-variable. */ + err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info); + + if (err) { + printk(KERN_DEBUG "register_vcpu_info failed: err=%d\n", err); + have_vcpu_info_placement = 0; + } else { + /* This cpu is using the registered vcpu info, even if + later ones fail to. */ + per_cpu(xen_vcpu, cpu) = vcpup; + + printk(KERN_DEBUG "cpu %d using vcpu_info at %p\n", + cpu, vcpup); + } +#endif +} + +void __init xen_setup_vcpu_info_placement(void) +{ + int cpu; + + for_each_possible_cpu(cpu) + xen_vcpu_setup(cpu); +} + +void __init +xen_setup(char **cmdline_p) +{ + extern void dig_setup(char **cmdline_p); + + if (ia64_platform_is("xen")) + dig_setup(cmdline_p); +} + +void __cpuinit +xen_cpu_init(void) +{ + xen_smp_intr_init(); +} + +/**************************************************************************** + * grant table hack + * cmd: GNTTABOP_xxx + */ + +#include <linux/mm.h> +#include <xen/interface/xen.h> +#include <xen/grant_table.h> + +int arch_gnttab_map_shared(unsigned long *frames, unsigned long nr_gframes, + unsigned long max_nr_gframes, + struct grant_entry **__shared) +{ + *__shared = __va(frames[0] << PAGE_SHIFT); + return 0; +} + +void arch_gnttab_unmap_shared(struct grant_entry *shared, + unsigned long nr_gframes) +{ + /* nothing */ +} + +static void +gnttab_map_grant_ref_pre(struct gnttab_map_grant_ref *uop) +{ + uint32_t flags; + + flags = uop->flags; + + if (flags & GNTMAP_host_map) { + if (flags & GNTMAP_application_map) { + printk(KERN_DEBUG + "GNTMAP_application_map is not supported yet: " + "flags 0x%x\n", flags); + BUG(); + } + if (flags & GNTMAP_contains_pte) { + printk(KERN_DEBUG + "GNTMAP_contains_pte is not supported yet: " + "flags 0x%x\n", flags); + BUG(); + } + } else if (flags & GNTMAP_device_map) { + printk("GNTMAP_device_map is not supported yet 0x%x\n", flags); + BUG(); /* XXX not yet. actually this flag is not used. 
*/ + } else { + BUG(); + } +} + +int +HYPERVISOR_grant_table_op(unsigned int cmd, void *uop, unsigned int count) +{ + if (cmd == GNTTABOP_map_grant_ref) { + unsigned int i; + for (i = 0; i < count; i++) { + gnttab_map_grant_ref_pre( + (struct gnttab_map_grant_ref *)uop + i); + } + } + return xencomm_hypercall_grant_table_op(cmd, uop, count); +} +EXPORT_SYMBOL(HYPERVISOR_grant_table_op); + +/************************************************************************** + * opt feature + */ +void +xen_ia64_enable_opt_feature(void) +{ + /* Enable region 7 identity map optimizations in Xen */ + struct xen_ia64_opt_feature optf; + + optf.cmd = XEN_IA64_OPTF_IDENT_MAP_REG7; + optf.on = XEN_IA64_OPTF_ON; + optf.pgprot = pgprot_val(PAGE_KERNEL); + optf.key = 0; /* No key on linux. */ + HYPERVISOR_opt_feature(&optf); +} + +/************************************************************************** + * suspend/resume + */ +void +xen_post_suspend(int suspend_cancelled) +{ + if (suspend_cancelled) + return; + + xen_ia64_enable_opt_feature(); + /* add more if necessary */ +} diff --git a/include/asm-ia64/xen/hypervisor.h b/include/asm-ia64/xen/hypervisor.h index 78c5635..3c93109 100644 --- a/include/asm-ia64/xen/hypervisor.h +++ b/include/asm-ia64/xen/hypervisor.h @@ -42,9 +42,203 @@ extern const int running_on_xen; # define is_running_on_xen() (1) # else /* CONFIG_VMX_GUEST */ # define is_running_on_xen() (0) +# define HYPERVISOR_ioremap(offset, size) (offset) # endif /* CONFIG_VMX_GUEST */ #endif /* CONFIG_XEN */ +#if defined(CONFIG_XEN) || defined(CONFIG_VMX_GUEST) +#include <linux/types.h> +#include <linux/kernel.h> +#include <linux/version.h> +#include <linux/errno.h> +#include <linux/init.h> +#include <xen/interface/xen.h> +#include <xen/interface/version.h> /* to compile feature.c */ +#include <xen/interface/event_channel.h> +#include <xen/interface/physdev.h> +#include <xen/interface/sched.h> +#include <asm/ptrace.h> +#include <asm/page.h> +#include <asm/percpu.h> +#ifdef CONFIG_XEN +#include <asm/xen/hypercall.h> +#endif + +extern struct shared_info *HYPERVISOR_shared_info; +extern struct start_info *xen_start_info; + +DECLARE_PER_CPU(struct vcpu_info *, xen_vcpu); +void __init xen_setup_vcpu_info_placement(void); +void force_evtchn_callback(void); + +struct vm_struct *xen_alloc_vm_area(unsigned long size); +void xen_free_vm_area(struct vm_struct *area); + +/* Turn jiffies into Xen system time. XXX Implement me. 
*/ +#define jiffies_to_st(j) 0 + +static inline int +HYPERVISOR_yield( + void) +{ + int rc = HYPERVISOR_sched_op(SCHEDOP_yield, NULL); + + return rc; +} + +static inline int +HYPERVISOR_block( + void) +{ + int rc = HYPERVISOR_sched_op(SCHEDOP_block, NULL); + + return rc; +} + +static inline int +HYPERVISOR_shutdown( + unsigned int reason) +{ + struct sched_shutdown sched_shutdown = { + .reason = reason + }; + + int rc = HYPERVISOR_sched_op(SCHEDOP_shutdown, &sched_shutdown); + + return rc; +} + +static inline int +HYPERVISOR_poll( + evtchn_port_t *ports, unsigned int nr_ports, u64 timeout) +{ + struct sched_poll sched_poll = { + .nr_ports = nr_ports, + .timeout = jiffies_to_st(timeout) + }; + + int rc; + + set_xen_guest_handle(sched_poll.ports, ports); + rc = HYPERVISOR_sched_op(SCHEDOP_poll, &sched_poll); + + return rc; +} + +#ifndef CONFIG_VMX_GUEST +/* for drivers/xen/privcmd/privcmd.c */ +#define machine_to_phys_mapping 0 +struct vm_area_struct; +int direct_remap_pfn_range(struct vm_area_struct *vma, + unsigned long address, + unsigned long mfn, + unsigned long size, + pgprot_t prot, + domid_t domid); +struct file; +int privcmd_enforce_singleshot_mapping(struct vm_area_struct *vma); +int privcmd_mmap(struct file *file, struct vm_area_struct *vma); +#define HAVE_ARCH_PRIVCMD_MMAP + +/* for drivers/xen/balloon/balloon.c */ +#ifdef CONFIG_XEN_SCRUB_PAGES +#define scrub_pages(_p, _n) memset((void *)(_p), 0, (_n) << PAGE_SHIFT) +#else +#define scrub_pages(_p, _n) ((void)0) +#endif +#define pte_mfn(_x) pte_pfn(_x) +#define phys_to_machine_mapping_valid(_x) (1) + +void xen_contiguous_bitmap_init(unsigned long end_pfn); +int __xen_create_contiguous_region(unsigned long vstart, unsigned int order, + unsigned int address_bits); +static inline int +xen_create_contiguous_region(unsigned long vstart, + unsigned int order, unsigned int address_bits) +{ + int ret = 0; + if (is_running_on_xen()) { + ret = __xen_create_contiguous_region(vstart, order, + address_bits); + } + return ret; +} + +void __xen_destroy_contiguous_region(unsigned long vstart, unsigned int order); +static inline void +xen_destroy_contiguous_region(unsigned long vstart, unsigned int order) +{ + if (is_running_on_xen()) + __xen_destroy_contiguous_region(vstart, order); +} + +struct page; + +int xen_limit_pages_to_max_mfn(struct page *pages, unsigned int order, + unsigned int address_bits); + +/* For drivers/xen/core/machine_reboot.c */ +#define HAVE_XEN_POST_SUSPEND +void xen_post_suspend(int suspend_cancelled); + +/* For setup_arch() in arch/ia64/kernel/setup.c */ +void xen_ia64_enable_opt_feature(void); +#endif /* !CONFIG_VMX_GUEST */ + +#define __pte_ma(_x) ((pte_t) {(_x)}) /* unmodified use */ +#define mfn_pte(_x, _y) __pte_ma(0) /* unmodified use */ + +/* for netfront.c, netback.c */ +#define MULTI_UVMFLAGS_INDEX 0 /* XXX any value */ + +static inline void +MULTI_update_va_mapping( + struct multicall_entry *mcl, unsigned long va, + pte_t new_val, unsigned long flags) +{ + mcl->op = __HYPERVISOR_update_va_mapping; + mcl->result = 0; +} + +static inline void +MULTI_grant_table_op(struct multicall_entry *mcl, unsigned int cmd, + void *uop, unsigned int count) +{ + mcl->op = __HYPERVISOR_grant_table_op; + mcl->args[0] = cmd; + mcl->args[1] = (unsigned long)uop; + mcl->args[2] = count; +} + +static inline void +MULTI_mmu_update(struct multicall_entry *mcl, struct mmu_update *req, + int count, int *success_count, domid_t domid) +{ + mcl->op = __HYPERVISOR_mmu_update; + mcl->args[0] = (unsigned long)req; + mcl->args[1] = count; + 
mcl->args[2] = (unsigned long)success_count; + mcl->args[3] = domid; +} + +/* + * for blktap.c + * int create_lookup_pte_addr(struct mm_struct *mm, + * unsigned long address, + * uint64_t *ptep); + */ +#define create_lookup_pte_addr(mm, address, ptep) \ + ({ \ + printk(KERN_EMERG \ + "%s:%d " \ + "create_lookup_pte_addr() isn't supported.\n", \ + __func__, __LINE__); \ + BUG(); \ + (-ENOSYS); \ + }) + +#endif /* CONFIG_XEN || CONFIG_VMX_GUEST */ + #ifdef CONFIG_XEN_PRIVILEGED_GUEST #define is_initial_xendomain() \ (is_running_on_xen() ? xen_start_info->flags & SIF_INITDOMAIN : 0) -- 1.5.3 |