From: Robin H. <ho...@sg...> - 2008-01-10 14:50:27
On Thu, Jan 10, 2008 at 03:27:24PM +0200, Avi Kivity wrote:
> Robin Holt wrote:
>>
>>> The patch does enable some nifty things; one example you may be
>>> familiar with is using page migration to move a guest from one numa
>>> node to another.
>>>
>>
>> xpmem allows one MPI rank to "export" its address space, a different
>> MPI rank to "import" that address space, and they share the same
>> pages.  This allows sharing of things like stack and heap space.
>> XPMEM also provides a mechanism to share that PFN information across
>> partition boundaries so the pages become available on a different
>> host.  This, of course, is dependent upon hardware that supports
>> direct access to the memory by the processor.
>>
>
> So this is yet another instance of hardware that has a tlb that needs
> to be kept in sync with the page tables, yes?

Yep, the external TLBs happen to be cpus in a different OS instance,
but you get the idea.

> Excellent, the more users the patch has, the easier it will be to
> justify it.

I think we have another hardware device driver that will use it first.
It is sort of a hardware coprocessor that is available from user space
to do operations against a process's address space.  That driver will
probably be first out the door.

Looking at the mmu_notifiers patch, there are locks held which will
preclude the use of invalidate_page for xpmem.  In that circumstance,
the clearing operation will need to be messaged to the other OS
instance, and that will certainly involve putting the current task to
sleep.  We will work on that detail later.  First, we will focus on
getting the other driver submitted to the community.

Thanks,
Robin
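
P.S. For anyone who hasn't run into XPMEM before, the export/import
flow described above looks roughly like the following from user space.
This is only an illustrative sketch based on the xpmem library; exact
names and signatures come from the xpmem.h of the day and may differ,
and buf/len here just stand for some region of the exporter's address
space.

    /* Sketch only -- error handling omitted; buf/len are assumed
     * to describe the region being shared. */
    #include <xpmem.h>

    /* Rank A (exporter): make a chunk of our address space visible. */
    xpmem_segid_t segid = xpmem_make(buf, len, XPMEM_PERMIT_MODE,
                                     (void *)0600);
    /* ... hand segid to rank B out of band, e.g. over MPI ... */

    /* Rank B (importer): get a handle, then attach the pages. */
    xpmem_apid_t apid = xpmem_get(segid, XPMEM_RDWR,
                                  XPMEM_PERMIT_MODE, NULL);
    struct xpmem_addr xaddr = { .apid = apid, .offset = 0 };
    void *remote = xpmem_attach(xaddr, len, NULL);

    /* remote now maps the same physical pages as rank A's buf. */

    xpmem_detach(remote);
    xpmem_release(apid);
    /* Rank A, when done exporting: */
    xpmem_remove(segid);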
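
P.P.S. To make the locking problem concrete: the invalidate_page hook
has roughly the shape below (the names and signatures changed across
revisions of the patchset, so treat this as a sketch, and the callback
here is a hypothetical xpmem one, not real driver code).  It is invoked
with spinlocks such as the pte lock held, so a driver that has to
message a remote partition and wait for an ack cannot sleep there.

    #include <linux/mmu_notifier.h>

    /* Hypothetical xpmem callback, for illustration only. */
    static void xpmem_invalidate_page(struct mmu_notifier *mn,
                                      struct mm_struct *mm,
                                      unsigned long address)
    {
            /* Called with the pte lock (a spinlock) held, so we
             * must not sleep.  Recalling the page from another
             * partition means sending a message and waiting for
             * the remote OS to ack -- i.e. sleeping -- which is
             * why xpmem cannot use this hook as it stands. */
    }

    static const struct mmu_notifier_ops xpmem_mn_ops = {
            .invalidate_page = xpmem_invalidate_page,
    };

    static struct mmu_notifier xpmem_mn = { .ops = &xpmem_mn_ops };

    /* Registered against the importing task's mm, e.g.: */
    /* mmu_notifier_register(&xpmem_mn, task->mm); */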