From: Laurent D. <ld...@li...> - 2015-03-26 10:37:51
On 26/03/2015 10:43, Ingo Molnar wrote:
>
> * Benjamin Herrenschmidt <be...@ke...> wrote:
>
>> On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
>>> * Ingo Molnar <mi...@ke...> wrote:
>>>
>>>>> +#define __HAVE_ARCH_REMAP
>>>>> +static inline void arch_remap(struct mm_struct *mm,
>>>>> + unsigned long old_start, unsigned long old_end,
>>>>> + unsigned long new_start, unsigned long new_end)
>>>>> +{
>>>>> + /*
>>>>> + * mremap() doesn't allow moving multiple vmas so we can limit the
>>>>> + * check to old_start == vdso_base.
>>>>> + */
>>>>> + if (old_start == mm->context.vdso_base)
>>>>> + mm->context.vdso_base = new_start;
>>>>> +}
>>>>
>>>> mremap() doesn't allow moving multiple vmas, but it allows the
>>>> movement of multi-page vmas and it also allows partial mremap()s,
>>>> where it will split up a vma.
>>>
>>> I.e. mremap() supports the shrinking (and growing) of vmas. In that
>>> case mremap() will unmap the end of the vma and will shrink the
>>> remaining vDSO vma.
>>>
>>> Doesn't that result in a non-working vDSO that should zero out
>>> vdso_base?
>>
>> Right. Now we can't completely prevent the user from shooting itself
>> in the foot I suppose, though there is a legit usage scenario which
>> is to move the vDSO around which it would be nice to support. I
>> think it's reasonable to put the onus on the user here to do the
>> right thing.
>
> I argue we should use the right condition to clear vdso_base: if the
> vDSO gets at least partially unmapped. Otherwise there's little point
> in the whole patch: either correctly track whether the vDSO is OK, or
> don't ...
That's a good option, but it may be hard to achieve in the case where the
vDSO area has been split into multiple pieces.
Not sure there is a right way to handle that; this is a best effort here,
allowing a process to unmap its vDSO and have the sigreturn call done
through the stack area (which it has to make executable).
Anyway I'll dig into that, assuming that the vdso_base pointer should be
cleared if a part of the vDSO is moved or unmapped. The patch will be
larger since I'll have to get the vDSO size, which is private to the
vdso.c file.
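(For illustration, a sketch of that stricter check; it assumes a hypothetical
mm->context.vdso_size field exported from vdso.c, which is not part of the
posted patches:)

static inline void arch_unmap(struct mm_struct *mm,
			      struct vm_area_struct *vma,
			      unsigned long start, unsigned long end)
{
	/* Hypothetical: vdso_size would have to be exported from vdso.c. */
	unsigned long vdso_end = mm->context.vdso_base +
				 mm->context.vdso_size;

	/* Clear the reference if [start, end) touches any vDSO page. */
	if (start < vdso_end && end > mm->context.vdso_base)
		mm->context.vdso_base = 0;
}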
> There's also the question of mprotect(): can users mprotect() the vDSO
> on PowerPC?
Yes, mprotect() on the vDSO is allowed on PowerPC, as it is on x86, and
certainly on all the other architectures.
Furthermore, if it is done on only a part of the vDSO, it splits the
vma...
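(Again for illustration, a standalone sketch rather than code from the thread:
protecting a single page in the middle of a mapping is enough to make the
kernel split the vma.)

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* Three-page mapping standing in for a multi-page vDSO. */
	char *area = mmap(NULL, 3 * page, PROT_READ,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* One vma becomes three: READ / NONE / READ. */
	mprotect(area + page, page, PROT_NONE);

	pause();	/* inspect /proc/<pid>/maps to see the split */
	return 0;
}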
From: Laurent D. <ld...@li...> - 2015-03-26 10:14:10
On 26/03/2015 10:48, Ingo Molnar wrote:
>
> * Benjamin Herrenschmidt <be...@ke...> wrote:
>
>>>> +#define __HAVE_ARCH_REMAP
>>>> +static inline void arch_remap(struct mm_struct *mm,
>>>> + unsigned long old_start, unsigned long old_end,
>>>> + unsigned long new_start, unsigned long new_end)
>>>> +{
>>>> + /*
>>>> + * mremap() doesn't allow moving multiple vmas so we can limit the
>>>> + * check to old_start == vdso_base.
>>>> + */
>>>> + if (old_start == mm->context.vdso_base)
>>>> + mm->context.vdso_base = new_start;
>>>> +}
>>>
>>> mremap() doesn't allow moving multiple vmas, but it allows the
>>> movement of multi-page vmas and it also allows partial mremap()s,
>>> where it will split up a vma.
>>>
>>> In particular, what happens if an mremap() is done with
>>> old_start == vdso_base, but a shorter end than the end of the vDSO?
>>> (i.e. a partial mremap() with fewer pages than the vDSO size)
>>
>> Is there a way to forbid splitting? Does x86 deal with that case at
>> all, or does it not have to for some other reason?
>
> So we use _install_special_mapping() - maybe PowerPC does that too?
> That adds VM_DONTEXPAND which ought to prevent some - but not all - of
> the VM API weirdnesses.
The same is done on PowerPC. So calling mremap() to extend the vDSO
fails, but splitting it or unmapping a part of it is allowed and leads
to an unusable vDSO.
> On x86 we'll just dump core if someone unmaps the vdso.
On PowerPC, you'll get the same result.
Should we prevent the user from breaking its vDSO?
Thanks,
Laurent.
From: Ingo M. <mi...@ke...> - 2015-03-26 09:48:57
* Benjamin Herrenschmidt <be...@ke...> wrote:
> > > +#define __HAVE_ARCH_REMAP
> > > +static inline void arch_remap(struct mm_struct *mm,
> > > + unsigned long old_start, unsigned long old_end,
> > > + unsigned long new_start, unsigned long new_end)
> > > +{
> > > + /*
> > > + * mremap() doesn't allow moving multiple vmas so we can limit the
> > > + * check to old_start == vdso_base.
> > > + */
> > > + if (old_start == mm->context.vdso_base)
> > > + mm->context.vdso_base = new_start;
> > > +}
> >
> > mremap() doesn't allow moving multiple vmas, but it allows the
> > movement of multi-page vmas and it also allows partial mremap()s,
> > where it will split up a vma.
> >
> > In particular, what happens if an mremap() is done with
> > old_start == vdso_base, but a shorter end than the end of the vDSO?
> > (i.e. a partial mremap() with fewer pages than the vDSO size)
>
> Is there a way to forbid splitting? Does x86 deal with that case at
> all, or does it not have to for some other reason?
So we use _install_special_mapping() - maybe PowerPC does that too?
That adds VM_DONTEXPAND which ought to prevent some - but not all - of
the VM API weirdnesses.
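(For reference, that x86-style setup looks roughly like the sketch below, where
vdso_pages, vdso_base and vdso_size stand in for the architecture's real
variables rather than the exact x86 names:)

/* Sketch: mapping a vDSO through the special-mapping API. */
static int map_vdso(struct mm_struct *mm)
{
	/* vdso_pages, vdso_base, vdso_size: assumed arch-private globals. */
	static struct vm_special_mapping vdso_mapping = {
		.name = "[vdso]",
	};
	struct vm_area_struct *vma;

	vdso_mapping.pages = vdso_pages;

	/* _install_special_mapping() internally ORs in VM_DONTEXPAND,
	 * which is what keeps mremap() from growing the vma. */
	vma = _install_special_mapping(mm, vdso_base, vdso_size,
				       VM_READ | VM_EXEC |
				       VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC,
				       &vdso_mapping);
	return IS_ERR(vma) ? PTR_ERR(vma) : 0;
}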
On x86 we'll just dump core if someone unmaps the vdso.
Thanks,
Ingo
From: Ingo M. <mi...@ke...> - 2015-03-26 09:43:43
* Benjamin Herrenschmidt <be...@ke...> wrote:
> On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
> > * Ingo Molnar <mi...@ke...> wrote:
> >
> > > > +#define __HAVE_ARCH_REMAP
> > > > +static inline void arch_remap(struct mm_struct *mm,
> > > > + unsigned long old_start, unsigned long old_end,
> > > > + unsigned long new_start, unsigned long new_end)
> > > > +{
> > > > + /*
> > > > + * mremap() doesn't allow moving multiple vmas so we can limit the
> > > > + * check to old_start == vdso_base.
> > > > + */
> > > > + if (old_start == mm->context.vdso_base)
> > > > + mm->context.vdso_base = new_start;
> > > > +}
> > >
> > > mremap() doesn't allow moving multiple vmas, but it allows the
> > > movement of multi-page vmas and it also allows partial mremap()s,
> > > where it will split up a vma.
> >
> > I.e. mremap() supports the shrinking (and growing) of vmas. In that
> > case mremap() will unmap the end of the vma and will shrink the
> > remaining vDSO vma.
> >
> > Doesn't that result in a non-working vDSO that should zero out
> > vdso_base?
>
> Right. Now we can't completely prevent the user from shooting itself
> in the foot I suppose, though there is a legit usage scenario which
> is to move the vDSO around which it would be nice to support. I
> think it's reasonable to put the onus on the user here to do the
> right thing.
I argue we should use the right condition to clear vdso_base: if the
vDSO gets at least partially unmapped. Otherwise there's little point
in the whole patch: either correctly track whether the vDSO is OK, or
don't ...
There's also the question of mprotect(): can users mprotect() the vDSO
on PowerPC?
Thanks,
Ingo
From: Benjamin H. <be...@ke...> - 2015-03-25 22:56:27
On Wed, 2015-03-25 at 19:36 +0100, Ingo Molnar wrote:
> * Ingo Molnar <mi...@ke...> wrote:
>
> > > +#define __HAVE_ARCH_REMAP
> > > +static inline void arch_remap(struct mm_struct *mm,
> > > + unsigned long old_start, unsigned long old_end,
> > > + unsigned long new_start, unsigned long new_end)
> > > +{
> > > + /*
> > > + * mremap() doesn't allow moving multiple vmas so we can limit the
> > > + * check to old_start == vdso_base.
> > > + */
> > > + if (old_start == mm->context.vdso_base)
> > > + mm->context.vdso_base = new_start;
> > > +}
> >
> > mremap() doesn't allow moving multiple vmas, but it allows the
> > movement of multi-page vmas and it also allows partial mremap()s,
> > where it will split up a vma.
>
> I.e. mremap() supports the shrinking (and growing) of vmas. In that
> case mremap() will unmap the end of the vma and will shrink the
> remaining vDSO vma.
>
> Doesn't that result in a non-working vDSO that should zero out
> vdso_base?
Right. Now we can't completely prevent the user from shooting itself in
the foot I suppose, though there is a legit usage scenario which is to
move the vDSO around which it would be nice to support. I think it's
reasonable to put the onus on the user here to do the right thing.
Cheers,
Ben.
> Thanks,
>
> Ingo
From: Benjamin H. <be...@ke...> - 2015-03-25 21:55:34
On Wed, 2015-03-25 at 19:33 +0100, Ingo Molnar wrote:
> * Laurent Dufour <ld...@li...> wrote:
>
> > +static inline void arch_unmap(struct mm_struct *mm,
> > + struct vm_area_struct *vma,
> > + unsigned long start, unsigned long end)
> > +{
> > + if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
> > + mm->context.vdso_base = 0;
> > +}
>
> So AFAICS PowerPC can have multi-page vDSOs, right?
>
> So what happens if I munmap() the middle or end of the vDSO? The above
> condition only seems to cover unmaps that affect the first page. I
> think 'affects any page' ought to be the right condition? (But I know
> nothing about PowerPC so I might be wrong.)
You are right, we have at least two pages.
>
> > +#define __HAVE_ARCH_REMAP
> > +static inline void arch_remap(struct mm_struct *mm,
> > + unsigned long old_start, unsigned long old_end,
> > + unsigned long new_start, unsigned long new_end)
> > +{
> > + /*
> > + * mremap() doesn't allow moving multiple vmas so we can limit the
> > + * check to old_start == vdso_base.
> > + */
> > + if (old_start == mm->context.vdso_base)
> > + mm->context.vdso_base = new_start;
> > +}
>
> mremap() doesn't allow moving multiple vmas, but it allows the
> movement of multi-page vmas and it also allows partial mremap()s,
> where it will split up a vma.
>
> In particular, what happens if an mremap() is done with
> old_start == vdso_base, but a shorter end than the end of the vDSO?
> (i.e. a partial mremap() with fewer pages than the vDSO size)
Is there a way to forbid splitting? Does x86 deal with that case at all,
or does it not have to for some other reason?
Cheers,
Ben.
> Thanks,
>
> Ingo
From: Ingo M. <mi...@ke...> - 2015-03-25 18:37:02
* Ingo Molnar <mi...@ke...> wrote:
> > +#define __HAVE_ARCH_REMAP
> > +static inline void arch_remap(struct mm_struct *mm,
> > + unsigned long old_start, unsigned long old_end,
> > + unsigned long new_start, unsigned long new_end)
> > +{
> > + /*
> > + * mremap() doesn't allow moving multiple vmas so we can limit the
> > + * check to old_start == vdso_base.
> > + */
> > + if (old_start == mm->context.vdso_base)
> > + mm->context.vdso_base = new_start;
> > +}
>
> mremap() doesn't allow moving multiple vmas, but it allows the
> movement of multi-page vmas and it also allows partial mremap()s,
> where it will split up a vma.
I.e. mremap() supports the shrinking (and growing) of vmas. In that
case mremap() will unmap the end of the vma and will shrink the
remaining vDSO vma.
Doesn't that result in a non-working vDSO that should zero out
vdso_base?
Thanks,
Ingo
From: Ingo M. <mi...@ke...> - 2015-03-25 18:33:28
* Laurent Dufour <ld...@li...> wrote:
> +static inline void arch_unmap(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> + unsigned long start, unsigned long end)
> +{
> + if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
> + mm->context.vdso_base = 0;
> +}
So AFAICS PowerPC can have multi-page vDSOs, right?
So what happens if I munmap() the middle or end of the vDSO? The above
condition only seems to cover unmaps that affect the first page. I
think 'affects any page' ought to be the right condition? (But I know
nothing about PowerPC so I might be wrong.)
> +#define __HAVE_ARCH_REMAP
> +static inline void arch_remap(struct mm_struct *mm,
> + unsigned long old_start, unsigned long old_end,
> + unsigned long new_start, unsigned long new_end)
> +{
> + /*
> + * mremap() doesn't allow moving multiple vmas so we can limit the
> + * check to old_start == vdso_base.
> + */
> + if (old_start == mm->context.vdso_base)
> + mm->context.vdso_base = new_start;
> +}
mremap() doesn't allow moving multiple vmas, but it allows the
movement of multi-page vmas and it also allows partial mremap()s,
where it will split up a vma.
In particular, what happens if an mremap() is done with
old_start == vdso_base, but a shorter end than the end of the vDSO?
(i.e. a partial mremap() with fewer pages than the vDSO size)
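(A quick userspace illustration of such a partial move, a sketch rather than
code from the thread: mremap() happily splits a two-page mapping.)

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);

	/* Two-page mapping standing in for a multi-page vDSO. */
	char *area = mmap(NULL, 2 * page, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	/* Reserve a destination address for the move. */
	char *dest = mmap(NULL, page, PROT_NONE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* Move only the first page: the vma is split and the second
	 * page stays behind at area + page. */
	char *moved = mremap(area, page, page,
			     MREMAP_MAYMOVE | MREMAP_FIXED, dest);

	printf("area=%p moved=%p second page still at %p\n",
	       (void *)area, (void *)moved, (void *)(area + page));
	return 0;
}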
Thanks,
Ingo
From: Laurent D. <ld...@li...> - 2015-03-25 13:54:16
Some processes (CRIU) are moving the vDSO area using the mremap system
call. As a consequence the kernel reference to the vDSO base address is
no longer valid, and the signal return frame built once the vDSO has been
moved does not point to the new sigreturn address.
This patch handles vDSO remapping and unmapping.
Signed-off-by: Laurent Dufour <ld...@li...>
---
arch/powerpc/include/asm/mmu_context.h | 36 +++++++++++++++++++++++++++++++++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 73382eba02dc..7d315c1898d4 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -8,7 +8,6 @@
#include <linux/spinlock.h>
#include <asm/mmu.h>
#include <asm/cputable.h>
-#include <asm-generic/mm_hooks.h>
#include <asm/cputhreads.h>
/*
@@ -109,5 +108,40 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
#endif
}
+static inline void arch_dup_mmap(struct mm_struct *oldmm,
+ struct mm_struct *mm)
+{
+}
+
+static inline void arch_exit_mmap(struct mm_struct *mm)
+{
+}
+
+static inline void arch_unmap(struct mm_struct *mm,
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
+ mm->context.vdso_base = 0;
+}
+
+static inline void arch_bprm_mm_init(struct mm_struct *mm,
+ struct vm_area_struct *vma)
+{
+}
+
+#define __HAVE_ARCH_REMAP
+static inline void arch_remap(struct mm_struct *mm,
+ unsigned long old_start, unsigned long old_end,
+ unsigned long new_start, unsigned long new_end)
+{
+ /*
+ * mremap() doesn't allow moving multiple vmas so we can limit the
+ * check to old_start == vdso_base.
+ */
+ if (old_start == mm->context.vdso_base)
+ mm->context.vdso_base = new_start;
+}
+
#endif /* __KERNEL__ */
#endif /* __ASM_POWERPC_MMU_CONTEXT_H */
--
1.9.1
From: Laurent D. <ld...@li...> - 2015-03-25 13:54:15
Some architectures would like to be notified when a memory area is moved
through the mremap system call.
This patch introduces a new arch_remap mm hook which is placed in the
path of mremap, and is called before the old area is unmapped (and the
arch_unmap hook is called).
The architectures which need to call this hook should define
__HAVE_ARCH_REMAP in their asm/mmu_context.h and provide the arch_remap
service with the following prototype:
void arch_remap(struct mm_struct *mm,
unsigned long old_start, unsigned long old_end,
unsigned long new_start, unsigned long new_end);
Signed-off-by: Laurent Dufour <ld...@li...>
---
mm/mremap.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/mremap.c b/mm/mremap.c
index 57dadc025c64..bafc234db45c 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -25,6 +25,7 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
+#include <asm/mmu_context.h>
#include "internal.h"
@@ -286,8 +287,14 @@ static unsigned long move_vma(struct vm_area_struct *vma,
old_len = new_len;
old_addr = new_addr;
new_addr = -ENOMEM;
- } else if (vma->vm_file && vma->vm_file->f_op->mremap)
- vma->vm_file->f_op->mremap(vma->vm_file, new_vma);
+ } else {
+ if (vma->vm_file && vma->vm_file->f_op->mremap)
+ vma->vm_file->f_op->mremap(vma->vm_file, new_vma);
+#ifdef __HAVE_ARCH_REMAP
+ arch_remap(mm, old_addr, old_addr+old_len,
+ new_addr, new_addr+new_len);
+#endif
+ }
/* Conceal VM_ACCOUNT so old reservation is not undone */
if (vm_flags & VM_ACCOUNT) {
--
1.9.1
From: Laurent D. <ld...@li...> - 2015-03-25 13:54:11
CRIU is recreating the process memory layout by remapping the checkpointee
memory area on top of the current process (criu). This includes remapping
the vDSO to the place it had at checkpoint time.
However some architectures, like powerpc, are keeping a reference to the
vDSO base address to build the signal return stack frame by calling the
vDSO sigreturn service. So once the vDSO has been moved, this reference
is no longer valid and the signal frames built later are not usable.
This patch series introduces a new mm hook 'arch_remap' which is called
when mremap is done and the mm lock is still held. The next patch adds
the vDSO remap and unmap tracking to the powerpc architecture.

Changes in v3:
--------------
- Fixed grammatical error in a comment of the second patch.
  Thanks again, Ingo.

Changes in v2:
--------------
- Following Ingo Molnar's advice, enabling the call to arch_remap through
  the __HAVE_ARCH_REMAP macro. This reduces considerably the first patch.

Laurent Dufour (2):
  mm: Introducing arch_remap hook
  powerpc/mm: Tracking vDSO remap

 arch/powerpc/include/asm/mmu_context.h | 36 +++++++++++++++++++++++++++++++++-
 mm/mremap.c                            | 11 +++++++++--
 2 files changed, 44 insertions(+), 3 deletions(-)

--
1.9.1
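(To make the scenario concrete: the restorer's move boils down to something
like the sketch below. It is simplified, the real CRIU restorer is
considerably more involved, and vdso_now/vdso_then are placeholder names.)

#define _GNU_SOURCE
#include <sys/mman.h>

/* Move the vDSO from where the restored process finds it (vdso_now) back
 * to the address it had at checkpoint time (vdso_then). */
static void *restore_vdso(void *vdso_now, size_t vdso_size, void *vdso_then)
{
	return mremap(vdso_now, vdso_size, vdso_size,
		      MREMAP_MAYMOVE | MREMAP_FIXED, vdso_then);
}

(After such a move, without this series, powerpc's mm->context.vdso_base still
points at the old address, which is exactly the stale sigreturn frame problem
described above.)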
From: Laurent D. <ld...@li...> - 2015-03-25 13:25:34
On 25/03/2015 13:11, Ingo Molnar wrote:
>
> * Laurent Dufour <ld...@li...> wrote:
>
>> Some processes (CRIU) are moving the vDSO area using the mremap system
>> call. As a consequence the kernel reference to the vDSO base address is
>> no longer valid, and the signal return frame built once the vDSO has been
>> moved does not point to the new sigreturn address.
>>
>> This patch handles vDSO remapping and unmapping.
>>
>> Signed-off-by: Laurent Dufour <ld...@li...>
>> ---
>> arch/powerpc/include/asm/mmu_context.h | 36 +++++++++++++++++++++++++++++++++-
>> 1 file changed, 35 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
>> index 73382eba02dc..be5dca3f7826 100644
>> --- a/arch/powerpc/include/asm/mmu_context.h
>> +++ b/arch/powerpc/include/asm/mmu_context.h
>> @@ -8,7 +8,6 @@
>> #include <linux/spinlock.h>
>> #include <asm/mmu.h>
>> #include <asm/cputable.h>
>> -#include <asm-generic/mm_hooks.h>
>> #include <asm/cputhreads.h>
>>
>> /*
>> @@ -109,5 +108,40 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
>> #endif
>> }
>>
>> +static inline void arch_dup_mmap(struct mm_struct *oldmm,
>> + struct mm_struct *mm)
>> +{
>> +}
>> +
>> +static inline void arch_exit_mmap(struct mm_struct *mm)
>> +{
>> +}
>> +
>> +static inline void arch_unmap(struct mm_struct *mm,
>> + struct vm_area_struct *vma,
>> + unsigned long start, unsigned long end)
>> +{
>> + if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
>> + mm->context.vdso_base = 0;
>> +}
>> +
>> +static inline void arch_bprm_mm_init(struct mm_struct *mm,
>> + struct vm_area_struct *vma)
>> +{
>> +}
>> +
>> +#define __HAVE_ARCH_REMAP
>> +static inline void arch_remap(struct mm_struct *mm,
>> + unsigned long old_start, unsigned long old_end,
>> + unsigned long new_start, unsigned long new_end)
>> +{
>> + /*
>> + * mremap don't allow moving multiple vma so we can limit the check
>> + * to old_start == vdso_base.
>
> s/mremap don't allow moving multiple vma
> mremap() doesn't allow moving multiple vmas
>
> right?
Sure you're right.
I'll provide a v3 fixing that comment.
Thanks,
Laurent.
From: Ingo M. <mi...@ke...> - 2015-03-25 12:11:31
* Laurent Dufour <ld...@li...> wrote:
> Some processes (CRIU) are moving the vDSO area using the mremap system
> call. As a consequence the kernel reference to the vDSO base address is
> no longer valid, and the signal return frame built once the vDSO has been
> moved does not point to the new sigreturn address.
>
> This patch handles vDSO remapping and unmapping.
>
> Signed-off-by: Laurent Dufour <ld...@li...>
> ---
> arch/powerpc/include/asm/mmu_context.h | 36 +++++++++++++++++++++++++++++++++-
> 1 file changed, 35 insertions(+), 1 deletion(-)
>
> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
> index 73382eba02dc..be5dca3f7826 100644
> --- a/arch/powerpc/include/asm/mmu_context.h
> +++ b/arch/powerpc/include/asm/mmu_context.h
> @@ -8,7 +8,6 @@
> #include <linux/spinlock.h>
> #include <asm/mmu.h>
> #include <asm/cputable.h>
> -#include <asm-generic/mm_hooks.h>
> #include <asm/cputhreads.h>
>
> /*
> @@ -109,5 +108,40 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
> #endif
> }
>
> +static inline void arch_dup_mmap(struct mm_struct *oldmm,
> + struct mm_struct *mm)
> +{
> +}
> +
> +static inline void arch_exit_mmap(struct mm_struct *mm)
> +{
> +}
> +
> +static inline void arch_unmap(struct mm_struct *mm,
> + struct vm_area_struct *vma,
> + unsigned long start, unsigned long end)
> +{
> + if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
> + mm->context.vdso_base = 0;
> +}
> +
> +static inline void arch_bprm_mm_init(struct mm_struct *mm,
> + struct vm_area_struct *vma)
> +{
> +}
> +
> +#define __HAVE_ARCH_REMAP
> +static inline void arch_remap(struct mm_struct *mm,
> + unsigned long old_start, unsigned long old_end,
> + unsigned long new_start, unsigned long new_end)
> +{
> + /*
> + * mremap don't allow moving multiple vma so we can limit the check
> + * to old_start == vdso_base.
s/mremap don't allow moving multiple vma
mremap() doesn't allow moving multiple vmas
right?
Thanks,
Ingo
From: Laurent D. <ld...@li...> - 2015-03-25 11:07:00
Some processes (CRIU) are moving the vDSO area using the mremap system
call. As a consequence the kernel reference to the vDSO base address is
no longer valid, and the signal return frame built once the vDSO has been
moved does not point to the new sigreturn address.
This patch handles vDSO remapping and unmapping.
Signed-off-by: Laurent Dufour <ld...@li...>
---
arch/powerpc/include/asm/mmu_context.h | 36 +++++++++++++++++++++++++++++++++-
1 file changed, 35 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 73382eba02dc..be5dca3f7826 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -8,7 +8,6 @@
#include <linux/spinlock.h>
#include <asm/mmu.h>
#include <asm/cputable.h>
-#include <asm-generic/mm_hooks.h>
#include <asm/cputhreads.h>
/*
@@ -109,5 +108,40 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
#endif
}
+static inline void arch_dup_mmap(struct mm_struct *oldmm,
+ struct mm_struct *mm)
+{
+}
+
+static inline void arch_exit_mmap(struct mm_struct *mm)
+{
+}
+
+static inline void arch_unmap(struct mm_struct *mm,
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
+ mm->context.vdso_base = 0;
+}
+
+static inline void arch_bprm_mm_init(struct mm_struct *mm,
+ struct vm_area_struct *vma)
+{
+}
+
+#define __HAVE_ARCH_REMAP
+static inline void arch_remap(struct mm_struct *mm,
+ unsigned long old_start, unsigned long old_end,
+ unsigned long new_start, unsigned long new_end)
+{
+ /*
+ * mremap don't allow moving multiple vma so we can limit the check
+ * to old_start == vdso_base.
+ */
+ if (old_start == mm->context.vdso_base)
+ mm->context.vdso_base = new_start;
+}
+
#endif /* __KERNEL__ */
#endif /* __ASM_POWERPC_MMU_CONTEXT_H */
--
1.9.1
From: Laurent D. <ld...@li...> - 2015-03-25 11:06:57
Some architectures would like to be notified when a memory area is moved
through the mremap system call.
This patch introduces a new arch_remap mm hook which is placed in the
path of mremap, and is called before the old area is unmapped (and the
arch_unmap hook is called).
The architectures which need to call this hook should define
__HAVE_ARCH_REMAP in their asm/mmu_context.h and provide the arch_remap
service with the following prototype:
void arch_remap(struct mm_struct *mm,
unsigned long old_start, unsigned long old_end,
unsigned long new_start, unsigned long new_end);
Signed-off-by: Laurent Dufour <ld...@li...>
---
mm/mremap.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/mm/mremap.c b/mm/mremap.c
index 57dadc025c64..bafc234db45c 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -25,6 +25,7 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
+#include <asm/mmu_context.h>
#include "internal.h"
@@ -286,8 +287,14 @@ static unsigned long move_vma(struct vm_area_struct *vma,
old_len = new_len;
old_addr = new_addr;
new_addr = -ENOMEM;
- } else if (vma->vm_file && vma->vm_file->f_op->mremap)
- vma->vm_file->f_op->mremap(vma->vm_file, new_vma);
+ } else {
+ if (vma->vm_file && vma->vm_file->f_op->mremap)
+ vma->vm_file->f_op->mremap(vma->vm_file, new_vma);
+#ifdef __HAVE_ARCH_REMAP
+ arch_remap(mm, old_addr, old_addr+old_len,
+ new_addr, new_addr+new_len);
+#endif
+ }
/* Conceal VM_ACCOUNT so old reservation is not undone */
if (vm_flags & VM_ACCOUNT) {
--
1.9.1
From: Laurent D. <ld...@li...> - 2015-03-25 11:06:56
CRIU is recreating the process memory layout by remapping the checkpointee
memory area on top of the current process (criu). This includes remapping
the vDSO to the place it had at checkpoint time.
However some architectures, like powerpc, are keeping a reference to the
vDSO base address to build the signal return stack frame by calling the
vDSO sigreturn service. So once the vDSO has been moved, this reference
is no longer valid and the signal frames built later are not usable.
This patch series introduces a new mm hook 'arch_remap' which is called
when mremap is done and the mm lock is still held. The next patch adds
the vDSO remap and unmap tracking to the powerpc architecture.

Changes in v2:
--------------
- Following Ingo Molnar's advice, enabling the call to arch_remap through
  the __HAVE_ARCH_REMAP macro. This reduces considerably the first patch.

Laurent Dufour (2):
  mm: Introducing arch_remap hook
  powerpc/mm: Tracking vDSO remap

 arch/powerpc/include/asm/mmu_context.h | 36 +++++++++++++++++++++++++++++++++-
 mm/mremap.c                            | 11 +++++++++--
 2 files changed, 44 insertions(+), 3 deletions(-)

--
1.9.1
From: Andrey R. <a.r...@sa...> - 2015-03-24 15:31:47
Almost all arches define ELF_ET_DYN_BASE as 2/3 of TASK_SIZE. Though it
seems that some architectures do this in a wrong way. The problem is
that 2*TASK_SIZE may overflow 32 bits, so the real ELF_ET_DYN_BASE
becomes wrong. Fix this overflow by dividing TASK_SIZE prior to
multiplying: (TASK_SIZE / 3 * 2)

Signed-off-by: Andrey Ryabinin <a.r...@sa...>
---
 arch/x86/um/asm/elf.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/um/asm/elf.h b/arch/x86/um/asm/elf.h
index 25a1022..0a656b7 100644
--- a/arch/x86/um/asm/elf.h
+++ b/arch/x86/um/asm/elf.h
@@ -210,7 +210,7 @@ extern int elf_core_copy_fpregs(struct task_struct *t, elf_fpregset_t *fpu);

 #define ELF_EXEC_PAGESIZE 4096

-#define ELF_ET_DYN_BASE (2 * TASK_SIZE / 3)
+#define ELF_ET_DYN_BASE (TASK_SIZE / 3 * 2)

 extern long elf_aux_hwcap;
 #define ELF_HWCAP (elf_aux_hwcap)
--
2.3.3
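(The overflow is easy to see with concrete numbers. A sketch, assuming a
32-bit TASK_SIZE of 0xC0000000, i.e. the common 3 GiB split:)

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t task_size = 0xC0000000u;	/* assumed 3 GiB TASK_SIZE */

	/* Old formula: 2 * TASK_SIZE wraps to 0x80000000 in 32 bits,
	 * so the division then yields a bogus base. */
	printf("2 * TASK_SIZE / 3 = %#010x\n", 2 * task_size / 3);

	/* Fixed formula: divide first, so nothing overflows. */
	printf("TASK_SIZE / 3 * 2 = %#010x\n", task_size / 3 * 2);

	return 0;	/* prints 0x2aaaaaaa, then 0x80000000 */
}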
From: Laurent D. <ld...@li...> - 2015-03-23 09:11:39
On 23/03/2015 09:52, Ingo Molnar wrote:
>
> * Laurent Dufour <ld...@li...> wrote:
>
>> Some architectures would like to be notified when a memory area is moved
>> through the mremap system call.
>>
>> This patch introduces a new arch_remap mm hook which is placed in the
>> path of mremap, and is called before the old area is unmapped (and the
>> arch_unmap hook is called).
>>
>> To not break the build, this patch adds the empty hook definition to the
>> architectures that were not using the generic hook's definition.
>>
>> Signed-off-by: Laurent Dufour <ld...@li...>
>> ---
>> arch/s390/include/asm/mmu_context.h | 6 ++++++
>> arch/um/include/asm/mmu_context.h | 5 +++++
>> arch/unicore32/include/asm/mmu_context.h | 6 ++++++
>> arch/x86/include/asm/mmu_context.h | 6 ++++++
>> include/asm-generic/mm_hooks.h | 6 ++++++
>> mm/mremap.c | 9 +++++++--
>> 6 files changed, 36 insertions(+), 2 deletions(-)
>>
>> diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
>> index 8fb3802f8fad..ddd861a490ba 100644
>> --- a/arch/s390/include/asm/mmu_context.h
>> +++ b/arch/s390/include/asm/mmu_context.h
>> @@ -131,4 +131,10 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
>> {
>> }
>>
>> +static inline void arch_remap(struct mm_struct *mm,
>> + unsigned long old_start, unsigned long old_end,
>> + unsigned long new_start, unsigned long new_end)
>> +{
>> +}
>> +
>> #endif /* __S390_MMU_CONTEXT_H */
>> diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
>> index 941527e507f7..f499b017c1f9 100644
>> --- a/arch/um/include/asm/mmu_context.h
>> +++ b/arch/um/include/asm/mmu_context.h
>> @@ -27,6 +27,11 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
>> struct vm_area_struct *vma)
>> {
>> }
>> +static inline void arch_remap(struct mm_struct *mm,
>> + unsigned long old_start, unsigned long old_end,
>> + unsigned long new_start, unsigned long new_end)
>> +{
>> +}
>> /*
>> * end asm-generic/mm_hooks.h functions
>> */
>> diff --git a/arch/unicore32/include/asm/mmu_context.h b/arch/unicore32/include/asm/mmu_context.h
>> index 1cb5220afaf9..39a0a553172e 100644
>> --- a/arch/unicore32/include/asm/mmu_context.h
>> +++ b/arch/unicore32/include/asm/mmu_context.h
>> @@ -97,4 +97,10 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
>> {
>> }
>>
>> +static inline void arch_remap(struct mm_struct *mm,
>> + unsigned long old_start, unsigned long old_end,
>> + unsigned long new_start, unsigned long new_end)
>> +{
>> +}
>> +
>> #endif
>> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
>> index 883f6b933fa4..75cb71f4be1e 100644
>> --- a/arch/x86/include/asm/mmu_context.h
>> +++ b/arch/x86/include/asm/mmu_context.h
>> @@ -172,4 +172,10 @@ static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
>> mpx_notify_unmap(mm, vma, start, end);
>> }
>>
>> +static inline void arch_remap(struct mm_struct *mm,
>> + unsigned long old_start, unsigned long old_end,
>> + unsigned long new_start, unsigned long new_end)
>> +{
>> +}
>> +
>> #endif /* _ASM_X86_MMU_CONTEXT_H */
>
> So instead of spreading these empty prototypes around mmu_context.h
> files, why not add something like this to the PPC definition:
>
> #define __HAVE_ARCH_REMAP
>
> and define the empty prototype for everyone else? It's a bit like how
> the __HAVE_ARCH_PTEP_* namespace works.
>
> That should shrink this patch considerably.
My idea was to mimic the existing MMU hooks' definitions. This new hook
is in keeping with what has been done for arch_dup_mmap, arch_exit_mmap,
arch_unmap and arch_bprm_mm_init.
Do you think there is a need to handle this one differently?
Thanks,
Laurent.
From: Ingo M. <mi...@ke...> - 2015-03-23 08:52:20
* Laurent Dufour <ld...@li...> wrote:
> Some architectures would like to be notified when a memory area is moved
> through the mremap system call.
>
> This patch introduces a new arch_remap mm hook which is placed in the
> path of mremap, and is called before the old area is unmapped (and the
> arch_unmap hook is called).
>
> To not break the build, this patch adds the empty hook definition to the
> architectures that were not using the generic hook's definition.
>
> Signed-off-by: Laurent Dufour <ld...@li...>
> ---
> arch/s390/include/asm/mmu_context.h | 6 ++++++
> arch/um/include/asm/mmu_context.h | 5 +++++
> arch/unicore32/include/asm/mmu_context.h | 6 ++++++
> arch/x86/include/asm/mmu_context.h | 6 ++++++
> include/asm-generic/mm_hooks.h | 6 ++++++
> mm/mremap.c | 9 +++++++--
> 6 files changed, 36 insertions(+), 2 deletions(-)
>
> diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
> index 8fb3802f8fad..ddd861a490ba 100644
> --- a/arch/s390/include/asm/mmu_context.h
> +++ b/arch/s390/include/asm/mmu_context.h
> @@ -131,4 +131,10 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
> {
> }
>
> +static inline void arch_remap(struct mm_struct *mm,
> + unsigned long old_start, unsigned long old_end,
> + unsigned long new_start, unsigned long new_end)
> +{
> +}
> +
> #endif /* __S390_MMU_CONTEXT_H */
> diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
> index 941527e507f7..f499b017c1f9 100644
> --- a/arch/um/include/asm/mmu_context.h
> +++ b/arch/um/include/asm/mmu_context.h
> @@ -27,6 +27,11 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
> struct vm_area_struct *vma)
> {
> }
> +static inline void arch_remap(struct mm_struct *mm,
> + unsigned long old_start, unsigned long old_end,
> + unsigned long new_start, unsigned long new_end)
> +{
> +}
> /*
> * end asm-generic/mm_hooks.h functions
> */
> diff --git a/arch/unicore32/include/asm/mmu_context.h b/arch/unicore32/include/asm/mmu_context.h
> index 1cb5220afaf9..39a0a553172e 100644
> --- a/arch/unicore32/include/asm/mmu_context.h
> +++ b/arch/unicore32/include/asm/mmu_context.h
> @@ -97,4 +97,10 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
> {
> }
>
> +static inline void arch_remap(struct mm_struct *mm,
> + unsigned long old_start, unsigned long old_end,
> + unsigned long new_start, unsigned long new_end)
> +{
> +}
> +
> #endif
> diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
> index 883f6b933fa4..75cb71f4be1e 100644
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -172,4 +172,10 @@ static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
> mpx_notify_unmap(mm, vma, start, end);
> }
>
> +static inline void arch_remap(struct mm_struct *mm,
> + unsigned long old_start, unsigned long old_end,
> + unsigned long new_start, unsigned long new_end)
> +{
> +}
> +
> #endif /* _ASM_X86_MMU_CONTEXT_H */
So instead of spreading these empty prototypes around mmu_context.h
files, why not add something like this to the PPC definition:
#define __HAVE_ARCH_REMAP
and define the empty prototype for everyone else? It's a bit like how
the __HAVE_ARCH_PTEP_* namespace works.
That should shrink this patch considerably.
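(I.e., something along these lines in include/asm-generic/mm_hooks.h. This is
a sketch of the suggestion as stated, not necessarily the form that was
eventually merged:)

#ifndef __HAVE_ARCH_REMAP
static inline void arch_remap(struct mm_struct *mm,
			      unsigned long old_start, unsigned long old_end,
			      unsigned long new_start, unsigned long new_end)
{
}
#endif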
Thanks,
Ingo
From: Richard W. <ri...@no...> - 2015-03-20 23:19:55
On 20/03/2015 16:53, Laurent Dufour wrote:
> Some architectures would like to be notified when a memory area is moved
> through the mremap system call.
>
> This patch introduces a new arch_remap mm hook which is placed in the
> path of mremap, and is called before the old area is unmapped (and the
> arch_unmap hook is called).
>
> To not break the build, this patch adds the empty hook definition to the
> architectures that were not using the generic hook's definition.
Just wanted to point out that I like that new hook, as UserModeLinux can
benefit from it. UML has the concept of stub pages where the UML host
process can inject commands into guest processes. Currently we play nasty
games in the TLB code to make all this work. arch_unmap() could make this
stuff more clear and less error prone.
Thanks,
//richard
From: Laurent D. <ld...@li...> - 2015-03-20 15:53:52
Some processes (CRIU) are moving the vDSO area using the mremap system
call. As a consequence the kernel reference to the vDSO base address is
no longer valid, and the signal return frame built once the vDSO has been
moved does not point to the new sigreturn address.
This patch handles vDSO remapping and unmapping.
Signed-off-by: Laurent Dufour <ld...@li...>
---
arch/powerpc/include/asm/mmu_context.h | 35 +++++++++++++++++++++++++++++++++-
1 file changed, 34 insertions(+), 1 deletion(-)
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 73382eba02dc..ce7fc93518ee 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -8,7 +8,6 @@
#include <linux/spinlock.h>
#include <asm/mmu.h>
#include <asm/cputable.h>
-#include <asm-generic/mm_hooks.h>
#include <asm/cputhreads.h>
/*
@@ -109,5 +108,39 @@ static inline void enter_lazy_tlb(struct mm_struct *mm,
#endif
}
+static inline void arch_dup_mmap(struct mm_struct *oldmm,
+ struct mm_struct *mm)
+{
+}
+
+static inline void arch_exit_mmap(struct mm_struct *mm)
+{
+}
+
+static inline void arch_unmap(struct mm_struct *mm,
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ if (start <= mm->context.vdso_base && mm->context.vdso_base < end)
+ mm->context.vdso_base = 0;
+}
+
+static inline void arch_bprm_mm_init(struct mm_struct *mm,
+ struct vm_area_struct *vma)
+{
+}
+
+static inline void arch_remap(struct mm_struct *mm,
+ unsigned long old_start, unsigned long old_end,
+ unsigned long new_start, unsigned long new_end)
+{
+ /*
+ * mremap don't allow moving multiple vma so we can limit the check
+ * to old_start == vdso_base.
+ */
+ if (old_start == mm->context.vdso_base)
+ mm->context.vdso_base = new_start;
+}
+
#endif /* __KERNEL__ */
#endif /* __ASM_POWERPC_MMU_CONTEXT_H */
--
1.9.1
From: Laurent D. <ld...@li...> - 2015-03-20 15:53:48
Some architectures would like to be notified when a memory area is moved
through the mremap system call.
This patch introduces a new arch_remap mm hook which is placed in the
path of mremap, and is called before the old area is unmapped (and the
arch_unmap hook is called).
To not break the build, this patch adds the empty hook definition to the
architectures that were not using the generic hook's definition.
Signed-off-by: Laurent Dufour <ld...@li...>
---
arch/s390/include/asm/mmu_context.h | 6 ++++++
arch/um/include/asm/mmu_context.h | 5 +++++
arch/unicore32/include/asm/mmu_context.h | 6 ++++++
arch/x86/include/asm/mmu_context.h | 6 ++++++
include/asm-generic/mm_hooks.h | 6 ++++++
mm/mremap.c | 9 +++++++--
6 files changed, 36 insertions(+), 2 deletions(-)
diff --git a/arch/s390/include/asm/mmu_context.h b/arch/s390/include/asm/mmu_context.h
index 8fb3802f8fad..ddd861a490ba 100644
--- a/arch/s390/include/asm/mmu_context.h
+++ b/arch/s390/include/asm/mmu_context.h
@@ -131,4 +131,10 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
{
}
+static inline void arch_remap(struct mm_struct *mm,
+ unsigned long old_start, unsigned long old_end,
+ unsigned long new_start, unsigned long new_end)
+{
+}
+
#endif /* __S390_MMU_CONTEXT_H */
diff --git a/arch/um/include/asm/mmu_context.h b/arch/um/include/asm/mmu_context.h
index 941527e507f7..f499b017c1f9 100644
--- a/arch/um/include/asm/mmu_context.h
+++ b/arch/um/include/asm/mmu_context.h
@@ -27,6 +27,11 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
struct vm_area_struct *vma)
{
}
+static inline void arch_remap(struct mm_struct *mm,
+ unsigned long old_start, unsigned long old_end,
+ unsigned long new_start, unsigned long new_end)
+{
+}
/*
* end asm-generic/mm_hooks.h functions
*/
diff --git a/arch/unicore32/include/asm/mmu_context.h b/arch/unicore32/include/asm/mmu_context.h
index 1cb5220afaf9..39a0a553172e 100644
--- a/arch/unicore32/include/asm/mmu_context.h
+++ b/arch/unicore32/include/asm/mmu_context.h
@@ -97,4 +97,10 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
{
}
+static inline void arch_remap(struct mm_struct *mm,
+ unsigned long old_start, unsigned long old_end,
+ unsigned long new_start, unsigned long new_end)
+{
+}
+
#endif
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 883f6b933fa4..75cb71f4be1e 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -172,4 +172,10 @@ static inline void arch_unmap(struct mm_struct *mm, struct vm_area_struct *vma,
mpx_notify_unmap(mm, vma, start, end);
}
+static inline void arch_remap(struct mm_struct *mm,
+ unsigned long old_start, unsigned long old_end,
+ unsigned long new_start, unsigned long new_end)
+{
+}
+
#endif /* _ASM_X86_MMU_CONTEXT_H */
diff --git a/include/asm-generic/mm_hooks.h b/include/asm-generic/mm_hooks.h
index 866aa461efa5..e507f4783a5b 100644
--- a/include/asm-generic/mm_hooks.h
+++ b/include/asm-generic/mm_hooks.h
@@ -26,4 +26,10 @@ static inline void arch_bprm_mm_init(struct mm_struct *mm,
{
}
+static inline void arch_remap(struct mm_struct *mm,
+ unsigned long old_start, unsigned long old_end,
+ unsigned long new_start, unsigned long new_end)
+{
+}
+
#endif /* _ASM_GENERIC_MM_HOOKS_H */
diff --git a/mm/mremap.c b/mm/mremap.c
index 57dadc025c64..6a409ca09425 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -25,6 +25,7 @@
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
+#include <asm/mmu_context.h>
#include "internal.h"
@@ -286,8 +287,12 @@ static unsigned long move_vma(struct vm_area_struct *vma,
old_len = new_len;
old_addr = new_addr;
new_addr = -ENOMEM;
- } else if (vma->vm_file && vma->vm_file->f_op->mremap)
- vma->vm_file->f_op->mremap(vma->vm_file, new_vma);
+ } else {
+ if (vma->vm_file && vma->vm_file->f_op->mremap)
+ vma->vm_file->f_op->mremap(vma->vm_file, new_vma);
+ arch_remap(mm, old_addr, old_addr+old_len,
+ new_addr, new_addr+new_len);
+ }
/* Conceal VM_ACCOUNT so old reservation is not undone */
if (vm_flags & VM_ACCOUNT) {
--
1.9.1
From: Laurent D. <ld...@li...> - 2015-03-20 15:53:45
CRIU is recreating the process memory layout by remapping the checkpointee
memory area on top of the current process (criu). This includes remapping
the vDSO to the place it had at checkpoint time.
However some architectures, like powerpc, are keeping a reference to the
vDSO base address to build the signal return stack frame by calling the
vDSO sigreturn service. So once the vDSO has been moved, this reference
is no longer valid and the signal frames built later are not usable.
This patch series introduces a new mm hook 'arch_remap' which is called
when mremap is done and the mm lock is still held. The next patch adds
the vDSO remap and unmap tracking to the powerpc architecture.

Laurent Dufour (2):
  mm: Introducing arch_remap hook
  powerpc/mm: Tracking vDSO remap

 arch/powerpc/include/asm/mmu_context.h   | 35 +++++++++++++++++++++++++++++++-
 arch/s390/include/asm/mmu_context.h      |  6 ++++++
 arch/um/include/asm/mmu_context.h        |  5 +++++
 arch/unicore32/include/asm/mmu_context.h |  6 ++++++
 arch/x86/include/asm/mmu_context.h       |  6 ++++++
 include/asm-generic/mm_hooks.h           |  6 ++++++
 mm/mremap.c                              |  9 ++++++--
 7 files changed, 70 insertions(+), 3 deletions(-)

--
1.9.1
From: Alex D. <ale...@gm...> - 2015-03-13 18:17:19
The 'arg' argument to copy_thread() is only ever used when forking a new
kernel thread. Hence, rename it to 'kthread_arg' for clarity (and consistency
with do_fork() and other arch-specific implementations of copy_thread()).
Signed-off-by: Alex Dowad <ale...@gm...>
---
arch/um/kernel/process.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
index f17bca8..80ac9fe 100644
--- a/arch/um/kernel/process.c
+++ b/arch/um/kernel/process.c
@@ -149,8 +149,11 @@ void fork_handler(void)
userspace(&current->thread.regs.regs);
}
+/*
+ * Copy architecture-specific thread state
+ */
int copy_thread(unsigned long clone_flags, unsigned long sp,
- unsigned long arg, struct task_struct * p)
+ unsigned long kthread_arg, struct task_struct *p)
{
void (*handler)(void);
int kthread = current->flags & PF_KTHREAD;
@@ -159,6 +162,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
p->thread = (struct thread_struct) INIT_THREAD;
if (!kthread) {
+ /* user thread */
memcpy(&p->thread.regs.regs, current_pt_regs(),
sizeof(p->thread.regs.regs));
PT_REGS_SET_SYSCALL_RETURN(&p->thread.regs, 0);
@@ -169,9 +173,10 @@ int copy_thread(unsigned long clone_flags, unsigned long sp,
arch_copy_thread(&current->thread.arch, &p->thread.arch);
} else {
+ /* kernel thread */
get_safe_registers(p->thread.regs.regs.gp, p->thread.regs.regs.fp);
p->thread.request.u.thread.proc = (int (*)(void *))sp;
- p->thread.request.u.thread.arg = (void *)arg;
+ p->thread.request.u.thread.arg = (void *)kthread_arg;
handler = new_thread_handler;
}
--
2.0.0.GIT
From: Geert U. <ge...@li...> - 2015-03-12 20:20:08
On Thu, Mar 12, 2015 at 4:24 PM, Christophe Leroy
<chr...@c-...> wrote:
> Two config options exist to define powerpc MPC8xx:
> * CONFIG_PPC_8xx
> * CONFIG_8xx
> In addition, CONFIG_PPC_8xx also defines CONFIG_CPM1 as
> communication co-processor
>
> arch/powerpc/platforms/Kconfig.cputype has contained the following
> comment about CONFIG_8xx item for some years:
> "# this is temp to handle compat with arch=ppc"
>
> It looks like not many places still have that old CONFIG_8xx used,
> so it is likely to be a good time to get rid of it completely?
We also have CONFIG_40x, CONFIG_44x, and CONFIG_4xx.
There's no CONFIG_PPC_4* though.
Do we want (some) consistency?
Gr{oetje,eeting}s,
Geert
--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- ge...@li...
In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds