From: Andy W. <ap...@sh...> - 2004-03-25 16:50:57
Here is the latest incarnation of my hugetlb patches, rediffed against
2.6.5-rc2-bk4, with the addition of 080-mem_acctdom_hugetlb_sysctl which
generalises the sysctl support and uses it for hugetlb.  The overall
problem is described below.  Feedback and testing appreciated.

Cheers.

-apw

HUGETLB Overcommit Handling
---------------------------
When building mappings, the kernel tracks committed but not yet
allocated pages against available memory and swap, preventing memory
allocation problems later.  The introduction of hugetlb pages has
significant ramifications for this accounting, as the pages used to
back them are already removed from the available memory pool.
Currently, mappings involving these pages are still accounted against
the small page pool, leading either to overcommitment of the normal
page pool, or to incorrectly failed hugetlb allocations in the case
where hugetlb memory exceeds the remaining normal pool.  Also, as there
is no commitment tracking on hugetlb pages, it is not possible to
safely fault them on demand, which is a problem for large segments
where the prefault and clear times are excessive.

This patch set attempts to address these issues and to provide a
platform for fixing the remainder.  Firstly, by removing the hugetlb
allocations from the normal page pool.  Secondly, by introducing a
general mechanism for accounting for multiple page pools.  Thirdly, by
implementing and enforcing hugetlb commitments via these pools.

050-mem_acctdom_core:           core changes to create two accounting domains
055-mem_acctdom_arch:           architecture specific changes for above
060-mem_acctdom_commitments:    splits vm_committed into a per domain count
070-mem_acctdom_hugetlb:        use vm_committed to track HUGETLB usage
075-mem_acctdom_hugetlb_arch:   architecture specific changes for above
080-mem_acctdom_hugetlb_sysctl: generalise sysctl parameters and add hugetlb

The first two patches introduce the concept of a split between the
default and hugetlb memory pools and stop the hugetlb pool being
accounted at all.  This is not as clean as I would like, particularly
the need to check against VM_AD_DEFAULT in a few places.  The third
patch splits the vm_committed count into a per domain count and exposes
the domain in the interface.  The fourth and fifth patches convert
hugetlb to use the commitment interfaces exposed above.  The sixth
patch generalises the overcommit mode and ratios to all domains and
adds support for controlling the hugetlb domain with it.

Below is a transcript of a test showing the commitments being applied.
The test attempts to make three 400x2MB page shared memory segments,
with 850 huge pages available.  The main thing to note is the
commitment against the pages at shmget() time.  This is a prerequisite
for reliable accounting under fault-driven page instantiation.
[root@kite apw]# ./tester
kernel.shmmax = 2147483648
kernel.shmall = 2147483648
vm.nr_hugepages = 850
=== FIRST ===
=== before shmget ===
HugePages_Total:   850
HugePages_Free:    850
Hugepagesize:     2048 kB
HugeCommited_AS:     0 kB
=== before shmat ===
HugePages_Total:   850
HugePages_Free:    850
Hugepagesize:     2048 kB
HugeCommited_AS: 819200 kB
test: shmat smp=42200000
=== after shmat ===
HugePages_Total:   850
HugePages_Free:    450
Hugepagesize:     2048 kB
HugeCommited_AS: 819200 kB
=== SECOND ===
=== before shmget ===
HugePages_Total:   850
HugePages_Free:    450
Hugepagesize:     2048 kB
HugeCommited_AS: 819200 kB
=== before shmat ===
HugePages_Total:   850
HugePages_Free:    450
Hugepagesize:     2048 kB
HugeCommited_AS: 1638400 kB
test: shmat smp=42200000
=== after shmat ===
HugePages_Total:   850
HugePages_Free:     50
Hugepagesize:     2048 kB
HugeCommited_AS: 1638400 kB
=== THIRD ===
=== before shmget ===
HugePages_Total:   850
HugePages_Free:     50
Hugepagesize:     2048 kB
HugeCommited_AS: 1638400 kB
test: shmget failed - errno=12
=== before ipcrm -M 0xdead0000 ===
HugePages_Total:   850
HugePages_Free:     50
Hugepagesize:     2048 kB
HugeCommited_AS: 1638400 kB
=== before ipcrm -M 0xdead0001 ===
HugePages_Total:   850
HugePages_Free:    450
Hugepagesize:     2048 kB
HugeCommited_AS: 819200 kB
=== before ipcrm -M 0xdead0002 ===
HugePages_Total:   850
HugePages_Free:    850
Hugepagesize:     2048 kB
HugeCommited_AS:     0 kB
ipcrm: invalid key (0xdead0002)
=== after ===
HugePages_Total:   850
HugePages_Free:    850
Hugepagesize:     2048 kB
HugeCommited_AS:     0 kB
vm.nr_hugepages = 0
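
The tester itself is not included in this posting; a minimal sketch of
a program along these lines (hypothetical code, assuming 2MB huge pages
and that SHM_HUGETLB is visible to userspace) would be:

/* Sketch of a hugetlb shm tester; the real program is not posted.
 * Error handling is trimmed for brevity. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#ifndef SHM_HUGETLB
#define SHM_HUGETLB 04000       /* value from the 2.6 kernel headers */
#endif

#define PAGES 400
#define HPAGE (2UL * 1024 * 1024)

static void meminfo(const char *stage)
{
        char line[128];
        FILE *f = fopen("/proc/meminfo", "r");

        printf("=== %s ===\n", stage);
        while (fgets(line, sizeof(line), f))
                if (strstr(line, "Huge"))       /* hugetlb fields only */
                        fputs(line, stdout);
        fclose(f);
}

int main(void)
{
        int id;
        void *smp;

        meminfo("before shmget");
        id = shmget(0xdead0000, PAGES * HPAGE,
                    SHM_HUGETLB | IPC_CREAT | 0600);
        if (id < 0) {
                printf("test: shmget failed - errno=%d\n", errno);
                return 1;
        }
        meminfo("before shmat");
        smp = shmat(id, NULL, 0);
        printf("test: shmat smp=%p\n", smp);
        meminfo("after shmat");
        return 0;
}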
From: Andy W. <ap...@sh...> - 2004-03-25 16:54:08
[050-mem_acctdom_core] Memory accounting domains (core)

When hugetlb memory is in use we effectively split memory into two
independent and non-overlapping 'page' pools, from which we can
allocate pages and against which we wish to handle commitments.
Currently all allocations are accounted against the normal page pool,
which can lead to false allocation failures.

This patch provides the framework to allow these pools to be treated
separately, preventing allocations in the hugetlb pool from being
accounted against the small page pool.  The hugetlb page pool is not
accounted at all and effectively is treated as being in overcommit
mode.

The patch creates the concept of an accounting domain, against which
pages are to be accounted.  In this implementation there are two
domains: VM_AD_DEFAULT, which is used to account normal small pages in
the normal way, and VM_AD_HUGETLB, which is used to select and identify
VM_HUGETLB pages.  I have not attempted to add any actual accounting
for VM_HUGETLB pages, as currently they are prefaulted and thus there
is always 0 outstanding commitment to track.  Obviously, if hugetlb
were also changed to support demand paging, that would have to change.
---
 fs/exec.c                |    2 +-
 include/linux/mm.h       |    6 ++++++
 include/linux/security.h |   15 ++++++++-------
 kernel/fork.c            |    8 +++++---
 mm/memory.c              |    1 +
 mm/mmap.c                |   18 +++++++++++-------
 mm/mprotect.c            |    5 +++--
 mm/mremap.c              |    4 ++--
 mm/shmem.c               |   10 ++++++----
 mm/swapfile.c            |    2 +-
 security/commoncap.c     |    8 +++++++-
 security/dummy.c         |    8 +++++++-
 security/selinux/hooks.c |    8 +++++++-
 13 files changed, 65 insertions(+), 30 deletions(-)

diff -upN reference/fs/exec.c current/fs/exec.c
--- reference/fs/exec.c	2004-03-11 20:47:24.000000000 +0000
+++ current/fs/exec.c	2004-03-25 15:03:32.000000000 +0000
@@ -409,7 +409,7 @@ int setup_arg_pages(struct linux_binprm
 	if (!mpnt)
 		return -ENOMEM;
 
-	if (security_vm_enough_memory(arg_size >> PAGE_SHIFT)) {
+	if (security_vm_enough_memory(VM_AD_DEFAULT, arg_size >> PAGE_SHIFT)) {
 		kmem_cache_free(vm_area_cachep, mpnt);
 		return -ENOMEM;
 	}
diff -upN reference/include/linux/mm.h current/include/linux/mm.h
--- reference/include/linux/mm.h	2004-03-25 02:43:39.000000000 +0000
+++ current/include/linux/mm.h	2004-03-25 15:03:32.000000000 +0000
@@ -112,6 +112,12 @@ struct vm_area_struct {
 #define VM_HUGETLB	0x00400000	/* Huge TLB Page VM */
 #define VM_NONLINEAR	0x00800000	/* Is non-linear (remap_file_pages) */
 
+/* Memory accounting domains. */
+#define VM_ACCTDOM_NR	2
+#define VM_ACCTDOM(vma)	(!!((vma)->vm_flags & VM_HUGETLB))
+#define VM_AD_DEFAULT	0
+#define VM_AD_HUGETLB	1
+
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
 #endif
diff -upN reference/include/linux/security.h current/include/linux/security.h
--- reference/include/linux/security.h	2004-03-25 02:43:39.000000000 +0000
+++ current/include/linux/security.h	2004-03-25 15:03:32.000000000 +0000
@@ -51,7 +51,7 @@ extern int cap_inode_removexattr(struct
 extern int cap_task_post_setuid (uid_t old_ruid, uid_t old_euid, uid_t old_suid, int flags);
 extern void cap_task_reparent_to_init (struct task_struct *p);
 extern int cap_syslog (int type);
-extern int cap_vm_enough_memory (long pages);
+extern int cap_vm_enough_memory (int domain, long pages);
 
 static inline int cap_netlink_send (struct sk_buff *skb)
 {
@@ -988,7 +988,8 @@ struct swap_info_struct;
  *	@type contains the type of action.
  *	Return 0 if permission is granted.
  * @vm_enough_memory:
- *	Check permissions for allocating a new virtual mapping.
+ *	Check permissions for allocating a new virtual mapping.
+ *	@domain contains the accounting domain.
  *	@pages contains the number of pages.
  *	Return 0 if permission is granted.
  *
@@ -1022,7 +1023,7 @@ struct security_operations {
 	int (*quotactl) (int cmds, int type, int id, struct super_block * sb);
 	int (*quota_on) (struct file * f);
 	int (*syslog) (int type);
-	int (*vm_enough_memory) (long pages);
+	int (*vm_enough_memory) (int domain, long pages);
 
 	int (*bprm_alloc_security) (struct linux_binprm * bprm);
 	void (*bprm_free_security) (struct linux_binprm * bprm);
@@ -1277,9 +1278,9 @@ static inline int security_syslog(int ty
 	return security_ops->syslog(type);
 }
 
-static inline int security_vm_enough_memory(long pages)
+static inline int security_vm_enough_memory(int domain, long pages)
 {
-	return security_ops->vm_enough_memory(pages);
+	return security_ops->vm_enough_memory(domain, pages);
 }
 
 static inline int security_bprm_alloc (struct linux_binprm *bprm)
@@ -1949,9 +1950,9 @@ static inline int security_syslog(int ty
 	return cap_syslog(type);
 }
 
-static inline int security_vm_enough_memory(long pages)
+static inline int security_vm_enough_memory(int domain, long pages)
 {
-	return cap_vm_enough_memory(pages);
+	return cap_vm_enough_memory(domain, pages);
 }
 
 static inline int security_bprm_alloc (struct linux_binprm *bprm)
diff -upN reference/kernel/fork.c current/kernel/fork.c
--- reference/kernel/fork.c	2004-03-11 20:47:29.000000000 +0000
+++ current/kernel/fork.c	2004-03-25 15:03:32.000000000 +0000
@@ -301,9 +301,10 @@ static inline int dup_mmap(struct mm_str
 			continue;
 		if (mpnt->vm_flags & VM_ACCOUNT) {
 			unsigned int len = (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT;
-			if (security_vm_enough_memory(len))
+			if (security_vm_enough_memory(VM_ACCTDOM(mpnt), len))
 				goto fail_nomem;
-			charge += len;
+			if (VM_ACCTDOM(mpnt) == VM_AD_DEFAULT)
+				charge += len;
 		}
 		tmp = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
 		if (!tmp)
@@ -358,7 +359,8 @@ out:
 fail_nomem:
 	retval = -ENOMEM;
 fail:
-	vm_unacct_memory(charge);
+	if (charge)
+		vm_unacct_memory(charge);
 	goto out;
 }
 
 static inline int mm_alloc_pgd(struct mm_struct * mm)
diff -upN reference/mm/memory.c current/mm/memory.c
--- reference/mm/memory.c	2004-03-25 02:43:43.000000000 +0000
+++ current/mm/memory.c	2004-03-25 15:03:32.000000000 +0000
@@ -551,6 +551,7 @@ int unmap_vmas(struct mmu_gather **tlbp,
 		if (end <= vma->vm_start)
 			continue;
 
+		/* We assume that only accountable VMAs are VM_ACCOUNT. */
 		if (vma->vm_flags & VM_ACCOUNT)
 			*nr_accounted += (end - start) >> PAGE_SHIFT;
 
diff -upN reference/mm/mmap.c current/mm/mmap.c
--- reference/mm/mmap.c	2004-03-25 02:43:43.000000000 +0000
+++ current/mm/mmap.c	2004-03-25 15:03:32.000000000 +0000
@@ -490,8 +490,11 @@ unsigned long do_mmap_pgoff(struct file
 	int error;
 	struct rb_node ** rb_link, * rb_parent;
 	unsigned long charged = 0;
+	long acctdom = VM_AD_DEFAULT;
 
 	if (file) {
+		if (is_file_hugepages(file))
+			acctdom = VM_AD_HUGETLB;
 		if (!file->f_op || !file->f_op->mmap)
 			return -ENODEV;
 
@@ -608,7 +611,8 @@ munmap_back:
 	    > current->rlim[RLIMIT_AS].rlim_cur)
 		return -ENOMEM;
 
-	if (!(flags & MAP_NORESERVE) || sysctl_overcommit_memory > 1) {
+	if (acctdom == VM_AD_DEFAULT && (!(flags & MAP_NORESERVE) ||
+	    sysctl_overcommit_memory > 1)) {
 		if (vm_flags & VM_SHARED) {
 			/* Check memory availability in shmem_file_setup? */
 			vm_flags |= VM_ACCOUNT;
@@ -617,7 +621,7 @@ munmap_back:
 			 * Private writable mapping: check memory availability
 			 */
 			charged = len >> PAGE_SHIFT;
-			if (security_vm_enough_memory(charged))
+			if (security_vm_enough_memory(acctdom, charged))
 				return -ENOMEM;
 			vm_flags |= VM_ACCOUNT;
 		}
@@ -926,8 +930,8 @@ int expand_stack(struct vm_area_struct *
 	spin_lock(&vma->vm_mm->page_table_lock);
 	grow = (address - vma->vm_end) >> PAGE_SHIFT;
 
-	/* Overcommit.. */
-	if (security_vm_enough_memory(grow)) {
+	/* Overcommit ... assume stack is in normal memory */
+	if (security_vm_enough_memory(VM_AD_DEFAULT, grow)) {
 		spin_unlock(&vma->vm_mm->page_table_lock);
 		return -ENOMEM;
 	}
@@ -980,8 +984,8 @@ int expand_stack(struct vm_area_struct *
 	spin_lock(&vma->vm_mm->page_table_lock);
 	grow = (vma->vm_start - address) >> PAGE_SHIFT;
 
-	/* Overcommit.. */
-	if (security_vm_enough_memory(grow)) {
+	/* Overcommit ... assume stack is in normal memory */
+	if (security_vm_enough_memory(VM_AD_DEFAULT, grow)) {
 		spin_unlock(&vma->vm_mm->page_table_lock);
 		return -ENOMEM;
 	}
@@ -1378,7 +1382,7 @@ unsigned long do_brk(unsigned long addr,
 	if (mm->map_count > MAX_MAP_COUNT)
 		return -ENOMEM;
 
-	if (security_vm_enough_memory(len >> PAGE_SHIFT))
+	if (security_vm_enough_memory(VM_AD_DEFAULT, len >> PAGE_SHIFT))
 		return -ENOMEM;
 
 	flags = VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
diff -upN reference/mm/mprotect.c current/mm/mprotect.c
--- reference/mm/mprotect.c	2004-03-25 15:03:28.000000000 +0000
+++ current/mm/mprotect.c	2004-03-25 15:03:32.000000000 +0000
@@ -173,9 +173,10 @@ mprotect_fixup(struct vm_area_struct *vm
 	 * a MAP_NORESERVE private mapping to writable will now reserve.
 	 */
 	if (newflags & VM_WRITE) {
-		if (!(vma->vm_flags & (VM_ACCOUNT|VM_WRITE|VM_SHARED))) {
+		if (!(vma->vm_flags & (VM_ACCOUNT|VM_WRITE|VM_SHARED)) &&
+		    VM_ACCTDOM(vma) == VM_AD_DEFAULT) {
 			charged = (end - start) >> PAGE_SHIFT;
-			if (security_vm_enough_memory(charged))
+			if (security_vm_enough_memory(VM_ACCTDOM(vma), charged))
 				return -ENOMEM;
 			newflags |= VM_ACCOUNT;
 		}
diff -upN reference/mm/mremap.c current/mm/mremap.c
--- reference/mm/mremap.c	2004-02-23 18:15:13.000000000 +0000
+++ current/mm/mremap.c	2004-03-25 15:03:32.000000000 +0000
@@ -400,10 +400,10 @@ unsigned long do_mremap(unsigned long ad
 
 	if (vma->vm_flags & VM_ACCOUNT) {
 		charged = (new_len - old_len) >> PAGE_SHIFT;
-		if (security_vm_enough_memory(charged))
+		if (security_vm_enough_memory(VM_ACCTDOM(vma), charged))
 			goto out_nc;
 	}
-
+
 	/* old_len exactly to the end of the area..
 	 * And we're not relocating the area.
 	 */
diff -upN reference/mm/shmem.c current/mm/shmem.c
--- reference/mm/shmem.c	2004-03-25 02:43:43.000000000 +0000
+++ current/mm/shmem.c	2004-03-25 15:03:32.000000000 +0000
@@ -526,7 +526,7 @@ static int shmem_notify_change(struct de
 		 */
 		change = VM_ACCT(attr->ia_size) - VM_ACCT(inode->i_size);
 		if (change > 0) {
-			if (security_vm_enough_memory(change))
+			if (security_vm_enough_memory(VM_AD_DEFAULT, change))
 				return -ENOMEM;
 		} else if (attr->ia_size < inode->i_size) {
 			vm_unacct_memory(-change);
@@ -1187,7 +1187,8 @@ shmem_file_write(struct file *file, cons
 	maxpos = inode->i_size;
 	if (maxpos < pos + count) {
 		maxpos = pos + count;
-		if (security_vm_enough_memory(VM_ACCT(maxpos) - VM_ACCT(inode->i_size))) {
+		if (security_vm_enough_memory(VM_AD_DEFAULT,
+				VM_ACCT(maxpos) - VM_ACCT(inode->i_size))) {
 			err = -ENOMEM;
 			goto out;
 		}
@@ -1551,7 +1552,7 @@ static int shmem_symlink(struct inode *d
 		memcpy(info, symname, len);
 		inode->i_op = &shmem_symlink_inline_operations;
 	} else {
-		if (security_vm_enough_memory(VM_ACCT(1))) {
+		if (security_vm_enough_memory(VM_AD_DEFAULT, VM_ACCT(1))) {
 			iput(inode);
 			return -ENOMEM;
 		}
@@ -1947,7 +1948,8 @@ struct file *shmem_file_setup(char *name
 	if (size > SHMEM_MAX_BYTES)
 		return ERR_PTR(-EINVAL);
 
-	if ((flags & VM_ACCOUNT) && security_vm_enough_memory(VM_ACCT(size)))
+	if ((flags & VM_ACCOUNT) && security_vm_enough_memory(VM_AD_DEFAULT,
+			VM_ACCT(size)))
 		return ERR_PTR(-ENOMEM);
 
 	error = -ENOMEM;
diff -upN reference/mm/swapfile.c current/mm/swapfile.c
--- reference/mm/swapfile.c	2004-03-25 02:43:43.000000000 +0000
+++ current/mm/swapfile.c	2004-03-25 15:03:32.000000000 +0000
@@ -1048,7 +1048,7 @@ asmlinkage long sys_swapoff(const char _
 		swap_list_unlock();
 		goto out_dput;
 	}
-	if (!security_vm_enough_memory(p->pages))
+	if (!security_vm_enough_memory(VM_AD_DEFAULT, p->pages))
 		vm_unacct_memory(p->pages);
 	else {
 		err = -ENOMEM;
diff -upN reference/security/commoncap.c current/security/commoncap.c
--- reference/security/commoncap.c	2004-03-25 02:43:44.000000000 +0000
+++ current/security/commoncap.c	2004-03-25 15:03:32.000000000 +0000
@@ -308,10 +308,16 @@ int cap_syslog (int type)
 * Strict overcommit modes added 2002 Feb 26 by Alan Cox.
 * Additional code 2002 Jul 20 by Robert Love.
 */
-int cap_vm_enough_memory(long pages)
+int cap_vm_enough_memory(int domain, long pages)
 {
 	unsigned long free, allowed;
 
+	/* We only account for the default memory domain, assume overcommit
+	 * for all others.
+	 */
+	if (domain != VM_AD_DEFAULT)
+		return 0;
+
 	vm_acct_memory(pages);
 
 	/*
diff -upN reference/security/dummy.c current/security/dummy.c
--- reference/security/dummy.c	2004-03-25 02:43:44.000000000 +0000
+++ current/security/dummy.c	2004-03-25 15:03:32.000000000 +0000
@@ -109,10 +109,16 @@ static int dummy_syslog (int type)
 * We currently support three overcommit policies, which are set via the
 * vm.overcommit_memory sysctl.  See Documentation/vm/overcommit-accounting
 */
-static int dummy_vm_enough_memory(long pages)
+static int dummy_vm_enough_memory(int domain, long pages)
 {
 	unsigned long free, allowed;
 
+	/* We only account for the default memory domain, assume overcommit
+	 * for all others.
+	 */
+	if (domain != VM_AD_DEFAULT)
+		return 0;
+
 	vm_acct_memory(pages);
 
 	/*
diff -upN reference/security/selinux/hooks.c current/security/selinux/hooks.c
--- reference/security/selinux/hooks.c	2004-03-25 02:43:44.000000000 +0000
+++ current/security/selinux/hooks.c	2004-03-25 15:03:32.000000000 +0000
@@ -1496,12 +1496,18 @@ static int selinux_syslog(int type)
 * Strict overcommit modes added 2002 Feb 26 by Alan Cox.
 * Additional code 2002 Jul 20 by Robert Love.
 */
-static int selinux_vm_enough_memory(long pages)
+static int selinux_vm_enough_memory(int domain, long pages)
 {
 	unsigned long free, allowed;
 	int rc;
 	struct task_security_struct *tsec = current->security;
 
+	/* We only account for the default memory domain, assume overcommit
+	 * for all others.
+	 */
+	if (domain != VM_AD_DEFAULT)
+		return 0;
+
 	vm_acct_memory(pages);
 
 	/*
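
To make the shape of the new interface concrete, here is a stand-alone
user-space model of the two-domain check (illustrative only, not kernel
code; the pool sizes are made-up numbers):

/* User-space model of a per-domain commitment check; mirrors the
 * 0 / -ENOMEM convention of cap_vm_enough_memory(). */
#include <stdio.h>

#define VM_ACCTDOM_NR	2
#define VM_AD_DEFAULT	0
#define VM_AD_HUGETLB	1

static long committed[VM_ACCTDOM_NR];
static long allowed_pages = 1000;	/* default-domain limit (made up) */

static int vm_enough_memory(int domain, long pages)
{
	committed[domain] += pages;
	if (domain != VM_AD_DEFAULT)	/* unaccounted: always succeeds */
		return 0;
	if (committed[domain] <= allowed_pages)
		return 0;
	committed[domain] -= pages;	/* roll back the failed charge */
	return -1;
}

int main(void)
{
	printf("default 600: %d\n", vm_enough_memory(VM_AD_DEFAULT, 600));
	printf("default 600: %d\n", vm_enough_memory(VM_AD_DEFAULT, 600));
	printf("hugetlb 600: %d\n", vm_enough_memory(VM_AD_HUGETLB, 600));
	return 0;
}

Note how the hugetlb domain always succeeds here: exactly as in this
patch, it is effectively in overcommit mode until the later patches add
real commitment tracking for it.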
From: Andy W. <ap...@sh...> - 2004-03-25 16:55:03
[055-mem_acctdom_arch] Memory accounting domains (arch)
---
 ia64/ia32/binfmt_elf32.c  |    3 ++-
 mips/kernel/sysirix.c     |    3 ++-
 s390/kernel/compat_exec.c |    3 ++-
 x86_64/ia32/ia32_binfmt.c |    3 ++-
 4 files changed, 8 insertions(+), 4 deletions(-)

diff -upN reference/arch/ia64/ia32/binfmt_elf32.c current/arch/ia64/ia32/binfmt_elf32.c
--- reference/arch/ia64/ia32/binfmt_elf32.c	2004-03-11 20:47:12.000000000 +0000
+++ current/arch/ia64/ia32/binfmt_elf32.c	2004-03-25 15:03:32.000000000 +0000
@@ -168,7 +168,8 @@ ia32_setup_arg_pages (struct linux_binpr
 	if (!mpnt)
 		return -ENOMEM;
 
-	if (security_vm_enough_memory((IA32_STACK_TOP - (PAGE_MASK & (unsigned long) bprm->p))>>PAGE_SHIFT)) {
+	if (security_vm_enough_memory(VM_AD_DEFAULT, (IA32_STACK_TOP -
+			(PAGE_MASK & (unsigned long) bprm->p))>>PAGE_SHIFT)) {
 		kmem_cache_free(vm_area_cachep, mpnt);
 		return -ENOMEM;
 	}
diff -upN reference/arch/mips/kernel/sysirix.c current/arch/mips/kernel/sysirix.c
--- reference/arch/mips/kernel/sysirix.c	2004-03-11 20:47:13.000000000 +0000
+++ current/arch/mips/kernel/sysirix.c	2004-03-25 15:03:32.000000000 +0000
@@ -578,7 +578,8 @@ asmlinkage int irix_brk(unsigned long br
 	/*
 	 * Check if we have enough memory..
 	 */
-	if (security_vm_enough_memory((newbrk-oldbrk) >> PAGE_SHIFT)) {
+	if (security_vm_enough_memory(VM_AD_DEFAULT,
+			(newbrk-oldbrk) >> PAGE_SHIFT)) {
 		ret = -ENOMEM;
 		goto out;
 	}
diff -upN reference/arch/s390/kernel/compat_exec.c current/arch/s390/kernel/compat_exec.c
--- reference/arch/s390/kernel/compat_exec.c	2004-01-09 06:59:57.000000000 +0000
+++ current/arch/s390/kernel/compat_exec.c	2004-03-25 15:03:32.000000000 +0000
@@ -56,7 +56,8 @@ int setup_arg_pages32(struct linux_binpr
 	if (!mpnt)
 		return -ENOMEM;
 
-	if (security_vm_enough_memory((STACK_TOP - (PAGE_MASK & (unsigned long) bprm->p))>>PAGE_SHIFT)) {
+	if (security_vm_enough_memory(VM_AD_DEFAULT, (STACK_TOP -
+			(PAGE_MASK & (unsigned long) bprm->p))>>PAGE_SHIFT)) {
 		kmem_cache_free(vm_area_cachep, mpnt);
 		return -ENOMEM;
 	}
diff -upN reference/arch/x86_64/ia32/ia32_binfmt.c current/arch/x86_64/ia32/ia32_binfmt.c
--- reference/arch/x86_64/ia32/ia32_binfmt.c	2004-03-25 02:42:14.000000000 +0000
+++ current/arch/x86_64/ia32/ia32_binfmt.c	2004-03-25 15:03:32.000000000 +0000
@@ -344,7 +344,8 @@ int setup_arg_pages(struct linux_binprm
 	if (!mpnt)
 		return -ENOMEM;
 
-	if (security_vm_enough_memory((IA32_STACK_TOP - (PAGE_MASK & (unsigned long) bprm->p))>>PAGE_SHIFT)) {
+	if (security_vm_enough_memory(VM_AD_DEFAULT, (IA32_STACK_TOP -
+			(PAGE_MASK & (unsigned long) bprm->p))>>PAGE_SHIFT)) {
 		kmem_cache_free(vm_area_cachep, mpnt);
 		return -ENOMEM;
 	}
From: Andy W. <ap...@sh...> - 2004-03-25 16:56:07
[060-mem_acctdom_commitments] Split vm_committed_space per domain

Currently only normal page commitments are tracked.  This patch
provides a framework for tracking page commitments in multiple
independent domains.  With this patch vm_committed_space becomes a
per-domain array of counters, indexed by accounting domain.
---
 fs/proc/proc_misc.c      |    2 +-
 include/linux/mm.h       |   13 +++++++++++--
 include/linux/mman.h     |   12 ++++++------
 kernel/fork.c            |    8 +++-----
 mm/memory.c              |   12 +++++++++---
 mm/mmap.c                |   23 ++++++++++++-----------
 mm/mprotect.c            |    5 ++---
 mm/mremap.c              |   12 ++++++------
 mm/nommu.c               |    3 ++-
 mm/shmem.c               |   13 +++++++------
 mm/swap.c                |   17 +++++++++++++----
 mm/swapfile.c            |    4 +++-
 security/commoncap.c     |   10 +++++-----
 security/dummy.c         |   10 +++++-----
 security/selinux/hooks.c |   10 +++++-----
 15 files changed, 90 insertions(+), 64 deletions(-)

diff -upN reference/fs/proc/proc_misc.c current/fs/proc/proc_misc.c
--- reference/fs/proc/proc_misc.c	2004-03-25 15:03:28.000000000 +0000
+++ current/fs/proc/proc_misc.c	2004-03-25 15:03:32.000000000 +0000
@@ -174,7 +174,7 @@ static int meminfo_read_proc(char *page,
 #define K(x) ((x) << (PAGE_SHIFT - 10))
 	si_meminfo(&i);
 	si_swapinfo(&i);
-	committed = atomic_read(&vm_committed_space);
+	committed = atomic_read(&vm_committed_space[VM_AD_DEFAULT]);
 	vmtot = (VMALLOC_END-VMALLOC_START)>>10;
 	vmi = get_vmalloc_info();
diff -upN reference/include/linux/mman.h current/include/linux/mman.h
--- reference/include/linux/mman.h	2004-01-09 06:59:09.000000000 +0000
+++ current/include/linux/mman.h	2004-03-25 15:03:32.000000000 +0000
@@ -12,20 +12,20 @@
 extern int sysctl_overcommit_memory;
 extern int sysctl_overcommit_ratio;
-extern atomic_t vm_committed_space;
+extern atomic_t vm_committed_space[];
 
 #ifdef CONFIG_SMP
-extern void vm_acct_memory(long pages);
+extern void vm_acct_memory(int domain, long pages);
 #else
-static inline void vm_acct_memory(long pages)
+static inline void vm_acct_memory(int domain, long pages)
 {
-	atomic_add(pages, &vm_committed_space);
+	atomic_add(pages, &vm_committed_space[domain]);
 }
 #endif
 
-static inline void vm_unacct_memory(long pages)
+static inline void vm_unacct_memory(int domain, long pages)
 {
-	vm_acct_memory(-pages);
+	vm_acct_memory(domain, -pages);
 }
 
 /*
diff -upN reference/include/linux/mm.h current/include/linux/mm.h
--- reference/include/linux/mm.h	2004-03-25 15:03:32.000000000 +0000
+++ current/include/linux/mm.h	2004-03-25 15:03:32.000000000 +0000
@@ -117,7 +117,16 @@ struct vm_area_struct {
 #define VM_ACCTDOM(vma)	(!!((vma)->vm_flags & VM_HUGETLB))
 #define VM_AD_DEFAULT	0
 #define VM_AD_HUGETLB	1
-
+typedef struct {
+	long vec[VM_ACCTDOM_NR];
+} madv_t;
+#define MADV_NONE { {[0 ... VM_ACCTDOM_NR-1] = 0UL} }
+static inline void madv_add(madv_t *madv, int domain, long size)
+{
+	madv->vec[domain] += size;
+}
+void vm_unacct_memory_domains(madv_t *madv);
+
 #ifndef VM_STACK_DEFAULT_FLAGS		/* arch can override this */
 #define VM_STACK_DEFAULT_FLAGS VM_DATA_DEFAULT_FLAGS
 #endif
@@ -446,7 +455,7 @@ void zap_page_range(struct vm_area_struc
 		unsigned long size);
 int unmap_vmas(struct mmu_gather **tlbp, struct mm_struct *mm,
 		struct vm_area_struct *start_vma, unsigned long start_addr,
-		unsigned long end_addr, unsigned long *nr_accounted);
+		unsigned long end_addr, madv_t *nr_accounted);
 void unmap_page_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		unsigned long address, unsigned long size);
 void clear_page_tables(struct mmu_gather *tlb, unsigned long first, int nr);
diff -upN reference/kernel/fork.c current/kernel/fork.c
--- reference/kernel/fork.c	2004-03-25 15:03:32.000000000 +0000
+++ current/kernel/fork.c	2004-03-25 15:03:32.000000000 +0000
@@ -267,7 +267,7 @@ static inline int dup_mmap(struct mm_str
 	struct vm_area_struct * mpnt, *tmp, **pprev;
 	struct rb_node **rb_link, *rb_parent;
 	int retval;
-	unsigned long charge = 0;
+	madv_t charge = MADV_NONE;
 
 	down_write(&oldmm->mmap_sem);
 	flush_cache_mm(current->mm);
@@ -303,8 +303,7 @@ static inline int dup_mmap(struct mm_str
 			unsigned int len = (mpnt->vm_end - mpnt->vm_start) >> PAGE_SHIFT;
 			if (security_vm_enough_memory(VM_ACCTDOM(mpnt), len))
 				goto fail_nomem;
-			if (VM_ACCTDOM(mpnt) == VM_AD_DEFAULT)
-				charge += len;
+			madv_add(&charge, VM_ACCTDOM(mpnt), len);
 		}
 		tmp = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
 		if (!tmp)
@@ -359,8 +358,7 @@ out:
 fail_nomem:
 	retval = -ENOMEM;
 fail:
-	if (charge)
-		vm_unacct_memory(charge);
+	vm_unacct_memory_domains(&charge);
 	goto out;
 }
 
 static inline int mm_alloc_pgd(struct mm_struct * mm)
diff -upN reference/mm/memory.c current/mm/memory.c
--- reference/mm/memory.c	2004-03-25 15:03:32.000000000 +0000
+++ current/mm/memory.c	2004-03-25 15:03:32.000000000 +0000
@@ -524,7 +524,7 @@ void unmap_page_range(struct mmu_gather
 */
 int unmap_vmas(struct mmu_gather **tlbp, struct mm_struct *mm,
 		struct vm_area_struct *vma, unsigned long start_addr,
-		unsigned long end_addr, unsigned long *nr_accounted)
+		unsigned long end_addr, madv_t *nr_accounted)
 {
 	unsigned long zap_bytes = ZAP_BLOCK_SIZE;
 	unsigned long tlb_start = 0;	/* For tlb_finish_mmu */
@@ -553,7 +553,8 @@ int unmap_vmas(struct mmu_gather **tlbp,
 
 		/* We assume that only accountable VMAs are VM_ACCOUNT. */
 		if (vma->vm_flags & VM_ACCOUNT)
-			*nr_accounted += (end - start) >> PAGE_SHIFT;
+			madv_add(nr_accounted,
+				VM_ACCTDOM(vma), (end - start) >> PAGE_SHIFT);
 
 		ret++;
 		while (start != end) {
@@ -602,7 +603,12 @@ void zap_page_range(struct vm_area_struc
 	struct mm_struct *mm = vma->vm_mm;
 	struct mmu_gather *tlb;
 	unsigned long end = address + size;
-	unsigned long nr_accounted = 0;
+	madv_t nr_accounted = MADV_NONE;
+
+	/* XXX: we seem to avoid thinking about the memory accounting
+	 * for both the hugepages where we don't bother even tracking it and
+	 * in the normal path where we figure it out and do nothing with it??
+	 */
 
 	might_sleep();
diff -upN reference/mm/mmap.c current/mm/mmap.c
--- reference/mm/mmap.c	2004-03-25 15:03:32.000000000 +0000
+++ current/mm/mmap.c	2004-03-25 15:03:32.000000000 +0000
@@ -54,7 +54,8 @@ pgprot_t protection_map[16] = {
 int sysctl_overcommit_memory = 0;	/* default is heuristic overcommit */
 int sysctl_overcommit_ratio = 50;	/* default is 50% */
-atomic_t vm_committed_space = ATOMIC_INIT(0);
+atomic_t vm_committed_space[VM_ACCTDOM_NR] =
+	{ [ 0 ... VM_ACCTDOM_NR-1 ] = ATOMIC_INIT(0) };
 
 EXPORT_SYMBOL(sysctl_overcommit_memory);
 EXPORT_SYMBOL(sysctl_overcommit_ratio);
@@ -611,8 +612,8 @@ munmap_back:
 	    > current->rlim[RLIMIT_AS].rlim_cur)
 		return -ENOMEM;
 
-	if (acctdom == VM_AD_DEFAULT && (!(flags & MAP_NORESERVE) ||
-	    sysctl_overcommit_memory > 1)) {
+	if (!(flags & MAP_NORESERVE) ||
+	    (acctdom == VM_AD_DEFAULT && sysctl_overcommit_memory > 1)) {
 		if (vm_flags & VM_SHARED) {
 			/* Check memory availability in shmem_file_setup? */
 			vm_flags |= VM_ACCOUNT;
@@ -730,7 +731,7 @@ free_vma:
 	kmem_cache_free(vm_area_cachep, vma);
 unacct_error:
 	if (charged)
-		vm_unacct_memory(charged);
+		vm_unacct_memory(acctdom, charged);
 	return error;
 }
@@ -940,7 +941,7 @@ int expand_stack(struct vm_area_struct *
 	    ((vma->vm_mm->total_vm + grow) << PAGE_SHIFT) >
 			current->rlim[RLIMIT_AS].rlim_cur) {
 		spin_unlock(&vma->vm_mm->page_table_lock);
-		vm_unacct_memory(grow);
+		vm_unacct_memory(VM_AD_DEFAULT, grow);
 		return -ENOMEM;
 	}
 	vma->vm_end = address;
@@ -994,7 +995,7 @@ int expand_stack(struct vm_area_struct *
 	    ((vma->vm_mm->total_vm + grow) << PAGE_SHIFT) >
 			current->rlim[RLIMIT_AS].rlim_cur) {
 		spin_unlock(&vma->vm_mm->page_table_lock);
-		vm_unacct_memory(grow);
+		vm_unacct_memory(VM_AD_DEFAULT, grow);
 		return -ENOMEM;
 	}
 	vma->vm_start = address;
@@ -1152,12 +1153,12 @@ static void unmap_region(struct mm_struc
 	unsigned long end)
 {
 	struct mmu_gather *tlb;
-	unsigned long nr_accounted = 0;
+	madv_t nr_accounted = MADV_NONE;
 
 	lru_add_drain();
 	tlb = tlb_gather_mmu(mm, 0);
 	unmap_vmas(&tlb, mm, vma, start, end, &nr_accounted);
-	vm_unacct_memory(nr_accounted);
+	vm_unacct_memory_domains(&nr_accounted);
 
 	if (is_hugepage_only_range(start, end - start))
 		hugetlb_free_pgtables(tlb, prev, start, end);
@@ -1397,7 +1398,7 @@ unsigned long do_brk(unsigned long addr,
 	 */
 	vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
 	if (!vma) {
-		vm_unacct_memory(len >> PAGE_SHIFT);
+		vm_unacct_memory(VM_AD_DEFAULT, len >> PAGE_SHIFT);
 		return -ENOMEM;
 	}
@@ -1430,7 +1431,7 @@ void exit_mmap(struct mm_struct *mm)
 {
 	struct mmu_gather *tlb;
 	struct vm_area_struct *vma;
-	unsigned long nr_accounted = 0;
+	madv_t nr_accounted = MADV_NONE;
 
 	profile_exit_mmap(mm);
 
@@ -1443,7 +1444,7 @@ void exit_mmap(struct mm_struct *mm)
 	/* Use ~0UL here to ensure all VMAs in the mm are unmapped */
 	mm->map_count -= unmap_vmas(&tlb, mm, mm->mmap, 0, ~0UL, &nr_accounted);
-	vm_unacct_memory(nr_accounted);
+	vm_unacct_memory_domains(&nr_accounted);
 	BUG_ON(mm->map_count);	/* This is just debugging */
 	clear_page_tables(tlb, FIRST_USER_PGD_NR, USER_PTRS_PER_PGD);
 	tlb_finish_mmu(tlb, 0, MM_VM_SIZE(mm));
diff -upN reference/mm/mprotect.c current/mm/mprotect.c
--- reference/mm/mprotect.c	2004-03-25 15:03:32.000000000 +0000
+++ current/mm/mprotect.c	2004-03-25 15:03:32.000000000 +0000
@@ -173,8 +173,7 @@ mprotect_fixup(struct vm_area_struct *vm
 	 * a MAP_NORESERVE private mapping to writable will now reserve.
 	 */
 	if (newflags & VM_WRITE) {
-		if (!(vma->vm_flags & (VM_ACCOUNT|VM_WRITE|VM_SHARED)) &&
-		    VM_ACCTDOM(vma) == VM_AD_DEFAULT) {
+		if (!(vma->vm_flags & (VM_ACCOUNT|VM_WRITE|VM_SHARED))) {
 			charged = (end - start) >> PAGE_SHIFT;
 			if (security_vm_enough_memory(VM_ACCTDOM(vma), charged))
 				return -ENOMEM;
@@ -218,7 +217,7 @@ success:
 	return 0;
 
 fail:
-	vm_unacct_memory(charged);
+	vm_unacct_memory(VM_ACCTDOM(vma), charged);
 	return error;
 }
diff -upN reference/mm/mremap.c current/mm/mremap.c
--- reference/mm/mremap.c	2004-03-25 15:03:32.000000000 +0000
+++ current/mm/mremap.c	2004-03-25 15:03:32.000000000 +0000
@@ -401,7 +401,7 @@ unsigned long do_mremap(unsigned long ad
 	if (vma->vm_flags & VM_ACCOUNT) {
 		charged = (new_len - old_len) >> PAGE_SHIFT;
 		if (security_vm_enough_memory(VM_ACCTDOM(vma), charged))
-			goto out_nc;
+			goto out;
 	}
 
 	/* old_len exactly to the end of the area..
 	 * And we're not relocating the area.
@@ -426,7 +426,7 @@ unsigned long do_mremap(unsigned long ad
 				addr + new_len);
 		}
 		ret = addr;
-		goto out;
+		goto out_commited;
 	}
@@ -445,14 +445,14 @@ unsigned long do_mremap(unsigned long ad
 					vma->vm_pgoff, map_flags);
 		ret = new_addr;
 		if (new_addr & ~PAGE_MASK)
-			goto out;
+			goto out_commited;
 
 		ret = move_vma(vma, addr, old_len, new_len, new_addr);
 	}
-out:
+out_commited:
 	if (ret & ~PAGE_MASK)
-		vm_unacct_memory(charged);
-out_nc:
+		vm_unacct_memory(VM_ACCTDOM(vma), charged);
+out:
 	return ret;
 }
diff -upN reference/mm/nommu.c current/mm/nommu.c
--- reference/mm/nommu.c	2004-02-04 15:09:16.000000000 +0000
+++ current/mm/nommu.c	2004-03-25 15:03:32.000000000 +0000
@@ -29,7 +29,8 @@ struct page *mem_map;
 unsigned long max_mapnr;
 unsigned long num_physpages;
 unsigned long askedalloc, realalloc;
-atomic_t vm_committed_space = ATOMIC_INIT(0);
+atomic_t vm_committed_space[VM_ACCTDOM_NR] =
+	{ [ 0 ... VM_ACCTDOM_NR-1 ] = ATOMIC_INIT(0) };
 int sysctl_overcommit_memory;		/* default is heuristic overcommit */
 int sysctl_overcommit_ratio = 50;	/* default is 50% */
 
diff -upN reference/mm/shmem.c current/mm/shmem.c
--- reference/mm/shmem.c	2004-03-25 15:03:32.000000000 +0000
+++ current/mm/shmem.c	2004-03-25 15:03:32.000000000 +0000
@@ -529,7 +529,7 @@ static int shmem_notify_change(struct de
 			if (security_vm_enough_memory(VM_AD_DEFAULT, change))
 				return -ENOMEM;
 		} else if (attr->ia_size < inode->i_size) {
-			vm_unacct_memory(-change);
+			vm_unacct_memory(VM_AD_DEFAULT, -change);
 			/*
 			 * If truncating down to a partial page, then
 			 * if that page is already allocated, hold it
@@ -564,7 +564,7 @@ static int shmem_notify_change(struct de
 	if (page)
 		page_cache_release(page);
 	if (error)
-		vm_unacct_memory(change);
+		vm_unacct_memory(VM_AD_DEFAULT, change);
 	return error;
 }
@@ -578,7 +578,7 @@ static void shmem_delete_inode(struct in
 		list_del(&info->list);
 		spin_unlock(&shmem_ilock);
 		if (info->flags & VM_ACCOUNT)
-			vm_unacct_memory(VM_ACCT(inode->i_size));
+			vm_unacct_memory(VM_AD_DEFAULT, VM_ACCT(inode->i_size));
 		inode->i_size = 0;
 		shmem_truncate(inode);
 	}
@@ -1271,7 +1271,8 @@ shmem_file_write(struct file *file, cons
 	/* Short writes give back address space */
 	if (inode->i_size != maxpos)
-		vm_unacct_memory(VM_ACCT(maxpos) - VM_ACCT(inode->i_size));
+		vm_unacct_memory(VM_AD_DEFAULT, VM_ACCT(maxpos) -
+				VM_ACCT(inode->i_size));
 out:
 	up(&inode->i_sem);
 	return err;
@@ -1558,7 +1559,7 @@ static int shmem_symlink(struct inode *d
 	}
 	error = shmem_getpage(inode, 0, &page, SGP_WRITE, NULL);
 	if (error) {
-		vm_unacct_memory(VM_ACCT(1));
+		vm_unacct_memory(VM_AD_DEFAULT, VM_ACCT(1));
 		iput(inode);
 		return error;
 	}
@@ -1988,7 +1989,7 @@ put_dentry:
 	dput(dentry);
 put_memory:
 	if (flags & VM_ACCOUNT)
-		vm_unacct_memory(VM_ACCT(size));
+		vm_unacct_memory(VM_AD_DEFAULT, VM_ACCT(size));
 	return ERR_PTR(error);
 }
diff -upN reference/mm/swap.c current/mm/swap.c
--- reference/mm/swap.c	2004-03-25 02:43:43.000000000 +0000
+++ current/mm/swap.c	2004-03-25 15:03:32.000000000 +0000
@@ -368,17 +368,18 @@ unsigned int pagevec_lookup(struct pagev
 */
 #define ACCT_THRESHOLD	max(16, NR_CPUS * 2)
 
-static DEFINE_PER_CPU(long, committed_space) = 0;
+/* XXX: zero this????? */
+static DEFINE_PER_CPU(long, committed_space[VM_ACCTDOM_NR]);
 
-void vm_acct_memory(long pages)
+void vm_acct_memory(int domain, long pages)
 {
 	long *local;
 
 	preempt_disable();
-	local = &__get_cpu_var(committed_space);
+	local = &__get_cpu_var(committed_space[domain]);
 	*local += pages;
 	if (*local > ACCT_THRESHOLD || *local < -ACCT_THRESHOLD) {
-		atomic_add(*local, &vm_committed_space);
+		atomic_add(*local, &vm_committed_space[domain]);
 		*local = 0;
 	}
 	preempt_enable();
@@ -416,6 +417,14 @@ static int cpu_swap_callback(struct noti
 #endif /* CONFIG_HOTPLUG_CPU */
 #endif /* CONFIG_SMP */
 
+void vm_unacct_memory_domains(madv_t *adv)
+{
+	if (adv->vec[0])
+		vm_unacct_memory(VM_AD_DEFAULT, adv->vec[0]);
+	if (adv->vec[1])
+		vm_unacct_memory(VM_AD_HUGETLB, adv->vec[1]);
+}
+
 #ifdef CONFIG_SMP
 void percpu_counter_mod(struct percpu_counter *fbc, long amount)
 {
diff -upN reference/mm/swapfile.c current/mm/swapfile.c
--- reference/mm/swapfile.c	2004-03-25 15:03:32.000000000 +0000
+++ current/mm/swapfile.c	2004-03-25 15:03:32.000000000 +0000
@@ -1048,8 +1048,10 @@ asmlinkage long sys_swapoff(const char _
 		swap_list_unlock();
 		goto out_dput;
 	}
+	/* There is an assumption here that we only may have swapped things
+	 * from the default memory accounting domain to this device.
+	 */
 	if (!security_vm_enough_memory(VM_AD_DEFAULT, p->pages))
-		vm_unacct_memory(p->pages);
+		vm_unacct_memory(VM_AD_DEFAULT, p->pages);
 	else {
 		err = -ENOMEM;
 		swap_list_unlock();
diff -upN reference/security/commoncap.c current/security/commoncap.c
--- reference/security/commoncap.c	2004-03-25 15:03:32.000000000 +0000
+++ current/security/commoncap.c	2004-03-25 15:03:32.000000000 +0000
@@ -312,14 +312,14 @@ int cap_vm_enough_memory(int domain, lon
 {
 	unsigned long free, allowed;
 
+	vm_acct_memory(domain, pages);
+
 	/* We only account for the default memory domain, assume overcommit
 	 * for all others.
 	 */
 	if (domain != VM_AD_DEFAULT)
 		return 0;
 
-	vm_acct_memory(pages);
-
 	/*
 	 * Sometimes we want to use more memory than we have
 	 */
@@ -360,17 +360,17 @@ int cap_vm_enough_memory(int domain, lon
 		if (free > pages)
 			return 0;
 
-		vm_unacct_memory(pages);
+		vm_unacct_memory(domain, pages);
 		return -ENOMEM;
 	}
 
 	allowed = totalram_pages * sysctl_overcommit_ratio / 100;
 	allowed += total_swap_pages;
 
-	if (atomic_read(&vm_committed_space) < allowed)
+	if (atomic_read(&vm_committed_space[domain]) < allowed)
 		return 0;
 
-	vm_unacct_memory(pages);
+	vm_unacct_memory(domain, pages);
 	return -ENOMEM;
 }
diff -upN reference/security/dummy.c current/security/dummy.c
--- reference/security/dummy.c	2004-03-25 15:03:32.000000000 +0000
+++ current/security/dummy.c	2004-03-25 15:03:32.000000000 +0000
@@ -113,14 +113,14 @@ static int dummy_vm_enough_memory(int do
 {
 	unsigned long free, allowed;
 
+	vm_acct_memory(domain, pages);
+
 	/* We only account for the default memory domain, assume overcommit
 	 * for all others.
 	 */
 	if (domain != VM_AD_DEFAULT)
 		return 0;
 
-	vm_acct_memory(pages);
-
 	/*
 	 * Sometimes we want to use more memory than we have
 	 */
@@ -148,17 +148,17 @@ static int dummy_vm_enough_memory(int do
 		if (free > pages)
 			return 0;
 
-		vm_unacct_memory(pages);
+		vm_unacct_memory(domain, pages);
 		return -ENOMEM;
 	}
 
 	allowed = totalram_pages * sysctl_overcommit_ratio / 100;
 	allowed += total_swap_pages;
 
-	if (atomic_read(&vm_committed_space) < allowed)
+	if (atomic_read(&vm_committed_space[domain]) < allowed)
 		return 0;
 
-	vm_unacct_memory(pages);
+	vm_unacct_memory(domain, pages);
 	return -ENOMEM;
 }
diff -upN reference/security/selinux/hooks.c current/security/selinux/hooks.c
--- reference/security/selinux/hooks.c	2004-03-25 15:03:32.000000000 +0000
+++ current/security/selinux/hooks.c	2004-03-25 15:03:32.000000000 +0000
@@ -1502,14 +1502,14 @@ static int selinux_vm_enough_memory(int
 	int rc;
 	struct task_security_struct *tsec = current->security;
 
+	vm_acct_memory(domain, pages);
+
 	/* We only account for the default memory domain, assume overcommit
 	 * for all others.
 	 */
 	if (domain != VM_AD_DEFAULT)
 		return 0;
 
-	vm_acct_memory(pages);
-
 	/*
 	 * Sometimes we want to use more memory than we have
 	 */
@@ -1546,17 +1546,17 @@ static int selinux_vm_enough_memory(int
 		if (free > pages)
 			return 0;
 
-		vm_unacct_memory(pages);
+		vm_unacct_memory(domain, pages);
 		return -ENOMEM;
 	}
 
 	allowed = totalram_pages * sysctl_overcommit_ratio / 100;
 	allowed += total_swap_pages;
 
-	if (atomic_read(&vm_committed_space) < allowed)
+	if (atomic_read(&vm_committed_space[domain]) < allowed)
 		return 0;
 
-	vm_unacct_memory(pages);
+	vm_unacct_memory(domain, pages);
 	return -ENOMEM;
 }
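
The subtle part of this patch is the batched per-CPU accounting in
vm_acct_memory().  A stand-alone user-space model of the idea
(single-threaded, with the per-CPU array simulated; the CPU count and
threshold are illustrative, not the kernel's values):

/* Model of batched per-domain commitment accounting: per-CPU deltas
 * are folded into the global counter only once they exceed a
 * threshold, keeping the hot path free of atomic traffic. */
#include <stdio.h>

#define NDOM		2
#define NCPUS		4
#define ACCT_THRESHOLD	16

static long vm_committed[NDOM];		/* the "atomic" globals */
static long local_delta[NCPUS][NDOM];	/* the "per-CPU" batches */

static void acct(int cpu, int dom, long pages)
{
	long *local = &local_delta[cpu][dom];

	*local += pages;
	if (*local > ACCT_THRESHOLD || *local < -ACCT_THRESHOLD) {
		vm_committed[dom] += *local;	/* atomic_add() in the kernel */
		*local = 0;
	}
}

int main(void)
{
	int i;

	for (i = 0; i < 100; i++)
		acct(i % NCPUS, i % NDOM, 5);
	printf("dom0=%ld dom1=%ld (plus un-flushed per-cpu residue)\n",
	       vm_committed[0], vm_committed[1]);
	return 0;
}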
From: Andy W. <ap...@sh...> - 2004-03-25 16:57:08
[070-mem_acctdom_hugetlb] Convert hugetlb to accounting domains (core)
---
 fs/hugetlbfs/inode.c     |   45 ++++++++++++++++++++++++++++++++++++++-------
 include/linux/hugetlb.h  |    5 +++++
 security/commoncap.c     |    9 +++++++++
 security/dummy.c         |    9 +++++++++
 security/selinux/hooks.c |    9 +++++++++
 5 files changed, 70 insertions(+), 7 deletions(-)

diff -upN reference/fs/hugetlbfs/inode.c current/fs/hugetlbfs/inode.c
--- reference/fs/hugetlbfs/inode.c	2004-03-25 02:43:00.000000000 +0000
+++ current/fs/hugetlbfs/inode.c	2004-03-25 15:03:33.000000000 +0000
@@ -26,12 +26,15 @@
 #include <linux/dnotify.h>
 #include <linux/statfs.h>
 #include <linux/security.h>
+#include <linux/mman.h>
 
 #include <asm/uaccess.h>
 
 /* some random number */
 #define HUGETLBFS_MAGIC	0x958458f6
 
+#define VM_ACCT(size)	(PAGE_CACHE_ALIGN(size) >> PAGE_SHIFT)
+
 static struct super_operations hugetlbfs_ops;
 static struct address_space_operations hugetlbfs_aops;
 struct file_operations hugetlbfs_file_operations;
@@ -191,6 +194,7 @@ void truncate_hugepages(struct address_s
 static void hugetlbfs_delete_inode(struct inode *inode)
 {
 	struct hugetlbfs_sb_info *sbinfo = HUGETLBFS_SB(inode->i_sb);
+	long change;
 
 	hlist_del_init(&inode->i_hash);
 	list_del_init(&inode->i_list);
@@ -198,6 +202,9 @@ static void hugetlbfs_delete_inode(struc
 	inodes_stat.nr_inodes--;
 	spin_unlock(&inode_lock);
 
+	change = VM_ACCT(inode->i_size) - VM_ACCT(0);
+	if (change)
+		vm_unacct_memory(VM_AD_HUGETLB, change);
 	if (inode->i_data.nrpages)
 		truncate_hugepages(&inode->i_data, 0);
 
@@ -217,6 +224,7 @@ static void hugetlbfs_forget_inode(struc
 {
 	struct super_block *super_block = inode->i_sb;
 	struct hugetlbfs_sb_info *sbinfo = HUGETLBFS_SB(super_block);
+	long change;
 
 	if (hlist_unhashed(&inode->i_hash))
 		goto out_truncate;
@@ -239,6 +247,9 @@ out_truncate:
 	inode->i_state |= I_FREEING;
 	inodes_stat.nr_inodes--;
 	spin_unlock(&inode_lock);
+	change = VM_ACCT(inode->i_size) - VM_ACCT(0);
+	if (change)
+		vm_unacct_memory(VM_AD_HUGETLB, change);
 	if (inode->i_data.nrpages)
 		truncate_hugepages(&inode->i_data, 0);
 
@@ -312,8 +323,10 @@ static int hugetlb_vmtruncate(struct ino
 	unsigned long pgoff;
 	struct address_space *mapping = inode->i_mapping;
 
+	/*
 	if (offset > inode->i_size)
 		return -EINVAL;
+	*/
 
 	BUG_ON(offset & ~HPAGE_MASK);
 	pgoff = offset >> HPAGE_SHIFT;
@@ -334,6 +347,8 @@ static int hugetlbfs_setattr(struct dent
 	struct inode *inode = dentry->d_inode;
 	int error;
 	unsigned int ia_valid = attr->ia_valid;
+	long change = 0;
+	loff_t csize;
 
 	BUG_ON(!inode);
 
@@ -345,15 +360,27 @@ static int hugetlbfs_setattr(struct dent
 	if (error)
 		goto out;
 	if (ia_valid & ATTR_SIZE) {
+		csize = i_size_read(inode);
 		error = -EINVAL;
-		if (!(attr->ia_size & ~HPAGE_MASK))
-			error = hugetlb_vmtruncate(inode, attr->ia_size);
-		if (error)
+		if (attr->ia_size & ~HPAGE_MASK)
+			goto out;
+		if (attr->ia_size > csize)
 			goto out;
+		change = VM_ACCT(csize) - VM_ACCT(attr->ia_size);
+		if (change)
+			vm_unacct_memory(VM_AD_HUGETLB, change);
+		/* XXX: here we commit to removing the mappings, should we do
+		 * this before we attempt to write the inode or after.  What
+		 * should we do if it fails?
+		 */
+		hugetlb_vmtruncate(inode, attr->ia_size);
 		attr->ia_valid &= ~ATTR_SIZE;
 	}
 	error = inode_setattr(inode, attr);
 out:
+	if (error && change)
+		vm_acct_memory(VM_AD_HUGETLB, change);
+
 	return error;
 }
@@ -710,17 +737,19 @@ struct file *hugetlb_zero_setup(size_t s
 	if (!capable(CAP_IPC_LOCK))
 		return ERR_PTR(-EPERM);
 
-	if (!is_hugepage_mem_enough(size))
+	if (security_vm_enough_memory(VM_AD_HUGETLB, VM_ACCT(size)))
 		return ERR_PTR(-ENOMEM);
-
+
 	root = hugetlbfs_vfsmount->mnt_root;
 	snprintf(buf, 16, "%lu", hugetlbfs_counter());
 	quick_string.name = buf;
 	quick_string.len = strlen(quick_string.name);
 	quick_string.hash = 0;
 	dentry = d_alloc(root, &quick_string);
-	if (!dentry)
-		return ERR_PTR(-ENOMEM);
+	if (!dentry) {
+		error = -ENOMEM;
+		goto out_committed;
+	}
 
 	error = -ENFILE;
 	file = get_empty_filp();
@@ -747,6 +776,8 @@ out_file:
 	put_filp(file);
 out_dentry:
 	dput(dentry);
+out_committed:
+	vm_unacct_memory(VM_AD_HUGETLB, VM_ACCT(size));
 	return ERR_PTR(error);
 }
diff -upN reference/include/linux/hugetlb.h current/include/linux/hugetlb.h
--- reference/include/linux/hugetlb.h	2004-02-23 18:15:09.000000000 +0000
+++ current/include/linux/hugetlb.h	2004-03-25 15:03:33.000000000 +0000
@@ -19,6 +19,7 @@ int hugetlb_prefault(struct address_spac
 void huge_page_release(struct page *);
 int hugetlb_report_meminfo(char *);
 int is_hugepage_mem_enough(size_t);
+unsigned long hugetlb_total_pages(void);
 struct page *follow_huge_addr(struct mm_struct *mm, struct vm_area_struct *vma,
 			unsigned long address, int write);
 struct vm_area_struct *hugepage_vma(struct mm_struct *mm,
@@ -48,6 +49,10 @@ static inline int is_vm_hugetlb_page(str
 {
 	return 0;
 }
+static inline unsigned long hugetlb_total_pages(void)
+{
+	return 0;
+}
 
 #define follow_hugetlb_page(m,v,p,vs,a,b,i)	({ BUG(); 0; })
 #define follow_huge_addr(mm, vma, addr, write)	0
diff -upN reference/security/commoncap.c current/security/commoncap.c
--- reference/security/commoncap.c	2004-03-25 15:03:32.000000000 +0000
+++ current/security/commoncap.c	2004-03-25 15:03:33.000000000 +0000
@@ -22,6 +22,7 @@
 #include <linux/netlink.h>
 #include <linux/ptrace.h>
 #include <linux/xattr.h>
+#include <linux/hugetlb.h>
 
 int cap_capable (struct task_struct *tsk, int cap)
 {
@@ -314,6 +315,13 @@ int cap_vm_enough_memory(int domain, lon
 
 	vm_acct_memory(domain, pages);
 
+	/* Check against the full complement of hugepages, no reserve. */
+	if (domain == VM_AD_HUGETLB) {
+		allowed = hugetlb_total_pages();
+
+		goto check;
+	}
+
 	/* We only account for the default memory domain, assume overcommit
	 * for all others.
	 */
@@ -367,6 +375,7 @@ int cap_vm_enough_memory(int domain, lon
 	allowed = totalram_pages * sysctl_overcommit_ratio / 100;
 	allowed += total_swap_pages;
 
+check:
 	if (atomic_read(&vm_committed_space[domain]) < allowed)
 		return 0;
 
diff -upN reference/security/dummy.c current/security/dummy.c
--- reference/security/dummy.c	2004-03-25 15:03:32.000000000 +0000
+++ current/security/dummy.c	2004-03-25 15:03:33.000000000 +0000
@@ -25,6 +25,7 @@
 #include <linux/netlink.h>
 #include <net/sock.h>
 #include <linux/xattr.h>
+#include <linux/hugetlb.h>
 
 static int dummy_ptrace (struct task_struct *parent, struct task_struct *child)
 {
@@ -115,6 +116,13 @@ static int dummy_vm_enough_memory(int do
 
 	vm_acct_memory(domain, pages);
 
+	/* Check against the full complement of hugepages, no reserve. */
+	if (domain == VM_AD_HUGETLB) {
+		allowed = hugetlb_total_pages();
+
+		goto check;
+	}
+
 	/* We only account for the default memory domain, assume overcommit
	 * for all others.
	 */
@@ -155,6 +163,7 @@ static int dummy_vm_enough_memory(int do
 	allowed = totalram_pages * sysctl_overcommit_ratio / 100;
 	allowed += total_swap_pages;
 
+check:
 	if (atomic_read(&vm_committed_space[domain]) < allowed)
 		return 0;
 
diff -upN reference/security/selinux/hooks.c current/security/selinux/hooks.c
--- reference/security/selinux/hooks.c	2004-03-25 15:03:32.000000000 +0000
+++ current/security/selinux/hooks.c	2004-03-25 15:03:33.000000000 +0000
@@ -59,6 +59,7 @@
 #include <net/af_unix.h>	/* for Unix socket types */
 #include <linux/parser.h>
 #include <linux/nfs_mount.h>
+#include <linux/hugetlb.h>
 
 #include "avc.h"
 #include "objsec.h"
@@ -1504,6 +1505,13 @@ static int selinux_vm_enough_memory(int
 
 	vm_acct_memory(domain, pages);
 
+	/* Check against the full complement of hugepages, no reserve. */
+	if (domain == VM_AD_HUGETLB) {
+		allowed = hugetlb_total_pages();
+
+		goto check;
+	}
+
 	/* We only account for the default memory domain, assume overcommit
	 * for all others.
	 */
@@ -1553,6 +1561,7 @@ static int selinux_vm_enough_memory(int
 	allowed = totalram_pages * sysctl_overcommit_ratio / 100;
 	allowed += total_swap_pages;
 
+check:
 	if (atomic_read(&vm_committed_space[domain]) < allowed)
 		return 0;
 
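
The commitment taken in hugetlb_zero_setup() is what produces the
HugeCommited_AS values in the transcript of the first mail: VM_ACCT()
charges in small-page units.  A worked instance of the arithmetic
(assuming x86 with 4kB base pages and 2MB huge pages):

/* VM_ACCT() converts a byte size into PAGE_SIZE units, so one
 * 400 x 2MB segment is charged as 204800 small pages = 819200 kB,
 * matching the transcript above. */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define HPAGE_SIZE	(2UL * 1024 * 1024)
#define PAGE_ALIGN(x)	(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define VM_ACCT(size)	(PAGE_ALIGN(size) >> PAGE_SHIFT)

int main(void)
{
	unsigned long size = 400 * HPAGE_SIZE;

	printf("committed pages: %lu (%lu kB)\n",
	       VM_ACCT(size), VM_ACCT(size) << (PAGE_SHIFT - 10));
	return 0;
}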
From: Andy W. <ap...@sh...> - 2004-03-25 16:59:01
[075-mem_acctdom_hugetlb_arch] Convert hugetlb to accounting domains (arch)
---
 i386/mm/hugetlbpage.c    |   16 +++++++++++++---
 ia64/mm/hugetlbpage.c    |   16 +++++++++++++---
 ppc64/mm/hugetlbpage.c   |   16 +++++++++++++---
 sparc64/mm/hugetlbpage.c |   16 +++++++++++++---
 4 files changed, 52 insertions(+), 12 deletions(-)

diff -upN reference/arch/i386/mm/hugetlbpage.c current/arch/i386/mm/hugetlbpage.c
--- reference/arch/i386/mm/hugetlbpage.c	2004-01-09 07:00:02.000000000 +0000
+++ current/arch/i386/mm/hugetlbpage.c	2004-03-25 15:03:27.000000000 +0000
@@ -15,7 +15,7 @@
 #include <linux/module.h>
 #include <linux/err.h>
 #include <linux/sysctl.h>
-#include <asm/mman.h>
+#include <linux/mman.h>
 #include <asm/pgalloc.h>
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
@@ -513,13 +513,17 @@ module_init(hugetlb_init);
 
 int hugetlb_report_meminfo(char *buf)
 {
+	int committed = atomic_read(&vm_committed_space[VM_AD_HUGETLB]);
+#define K(x) ((x) << (PAGE_SHIFT - 10))
 	return sprintf(buf,
 			"HugePages_Total: %5lu\n"
 			"HugePages_Free:  %5lu\n"
-			"Hugepagesize:    %5lu kB\n",
+			"Hugepagesize:    %5lu kB\n"
+			"HugeCommited_AS: %8u kB\n",
 			htlbzone_pages,
 			htlbpagemem,
-			HPAGE_SIZE/1024);
+			HPAGE_SIZE/1024,
+			K(committed));
 }
 
 int is_hugepage_mem_enough(size_t size)
@@ -527,6 +531,12 @@ int is_hugepage_mem_enough(size_t size)
 	return (size + ~HPAGE_MASK)/HPAGE_SIZE <= htlbpagemem;
 }
 
+/* Return the number of pages of memory we physically have, in PAGE_SIZE units. */
+unsigned long hugetlb_total_pages(void)
+{
+	return htlbzone_pages * (HPAGE_SIZE / PAGE_SIZE);
+}
+
 /*
  * We cannot handle pagefaults against hugetlb pages at all.  They cause
  * handle_mm_fault() to try to instantiate regular-sized pages in the
diff -upN reference/arch/ia64/mm/hugetlbpage.c current/arch/ia64/mm/hugetlbpage.c
--- reference/arch/ia64/mm/hugetlbpage.c	2004-03-11 20:47:12.000000000 +0000
+++ current/arch/ia64/mm/hugetlbpage.c	2004-03-25 15:03:27.000000000 +0000
@@ -17,7 +17,7 @@
 #include <linux/smp_lock.h>
 #include <linux/slab.h>
 #include <linux/sysctl.h>
-#include <asm/mman.h>
+#include <linux/mman.h>
 #include <asm/pgalloc.h>
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
@@ -576,13 +576,17 @@ __initcall(hugetlb_init);
 
 int hugetlb_report_meminfo(char *buf)
 {
+	int committed = atomic_read(&vm_committed_space[VM_AD_HUGETLB]);
+#define K(x) ((x) << (PAGE_SHIFT - 10))
 	return sprintf(buf,
 			"HugePages_Total: %5lu\n"
 			"HugePages_Free:  %5lu\n"
-			"Hugepagesize:    %5lu kB\n",
+			"Hugepagesize:    %5lu kB\n"
+			"HugeCommited_AS: %8u kB\n",
 			htlbzone_pages,
 			htlbpagemem,
-			HPAGE_SIZE/1024);
+			HPAGE_SIZE/1024,
+			K(committed));
 }
 
 int is_hugepage_mem_enough(size_t size)
@@ -592,6 +596,12 @@ int is_hugepage_mem_enough(size_t size)
 	return 1;
 }
 
+/* Return the number of pages of memory we physically have, in PAGE_SIZE units. */
+unsigned long hugetlb_total_pages(void)
+{
+	return htlbzone_pages * (HPAGE_SIZE / PAGE_SIZE);
+}
+
 static struct page *hugetlb_nopage(struct vm_area_struct * area, unsigned long address, int *unused)
 {
 	BUG();
diff -upN reference/arch/ppc64/mm/hugetlbpage.c current/arch/ppc64/mm/hugetlbpage.c
--- reference/arch/ppc64/mm/hugetlbpage.c	2004-03-11 20:47:14.000000000 +0000
+++ current/arch/ppc64/mm/hugetlbpage.c	2004-03-25 15:03:27.000000000 +0000
@@ -17,7 +17,7 @@
 #include <linux/module.h>
 #include <linux/err.h>
 #include <linux/sysctl.h>
-#include <asm/mman.h>
+#include <linux/mman.h>
 #include <asm/pgalloc.h>
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
@@ -896,13 +896,17 @@ module_init(hugetlb_init);
 
 int hugetlb_report_meminfo(char *buf)
 {
+	int committed = atomic_read(&vm_committed_space[VM_AD_HUGETLB]);
+#define K(x) ((x) << (PAGE_SHIFT - 10))
 	return sprintf(buf,
 			"HugePages_Total: %5d\n"
 			"HugePages_Free:  %5d\n"
-			"Hugepagesize:    %5lu kB\n",
+			"Hugepagesize:    %5lu kB\n"
+			"HugeCommited_AS: %8u kB\n",
 			htlbpage_total,
 			htlbpage_free,
-			HPAGE_SIZE/1024);
+			HPAGE_SIZE/1024,
+			K(committed));
 }
 
 /* This is advisory only, so we can get away with accessing
@@ -912,6 +916,12 @@ int is_hugepage_mem_enough(size_t size)
 	return (size + ~HPAGE_MASK)/HPAGE_SIZE <= htlbpage_free;
 }
 
+/* Return the number of pages of memory we physically have, in PAGE_SIZE units. */
+unsigned long hugetlb_total_pages(void)
+{
+	return htlbpage_total * (HPAGE_SIZE / PAGE_SIZE);
+}
+
 /*
  * We cannot handle pagefaults against hugetlb pages at all.  They cause
  * handle_mm_fault() to try to instantiate regular-sized pages in the
diff -upN reference/arch/sparc64/mm/hugetlbpage.c current/arch/sparc64/mm/hugetlbpage.c
--- reference/arch/sparc64/mm/hugetlbpage.c	2004-01-09 06:59:45.000000000 +0000
+++ current/arch/sparc64/mm/hugetlbpage.c	2004-03-25 15:03:27.000000000 +0000
@@ -13,8 +13,8 @@
 #include <linux/smp_lock.h>
 #include <linux/slab.h>
 #include <linux/sysctl.h>
+#include <linux/mman.h>
 
-#include <asm/mman.h>
 #include <asm/pgalloc.h>
 #include <asm/tlb.h>
 #include <asm/tlbflush.h>
@@ -483,13 +483,17 @@ module_init(hugetlb_init);
 
 int hugetlb_report_meminfo(char *buf)
 {
+	int committed = atomic_read(&vm_committed_space[VM_AD_HUGETLB]);
+#define K(x) ((x) << (PAGE_SHIFT - 10))
 	return sprintf(buf,
 			"HugePages_Total: %5lu\n"
 			"HugePages_Free:  %5lu\n"
-			"Hugepagesize:    %5lu kB\n",
+			"Hugepagesize:    %5lu kB\n"
+			"HugeCommited_AS: %8u kB\n",
 			htlbzone_pages,
 			htlbpagemem,
-			HPAGE_SIZE/1024);
+			HPAGE_SIZE/1024,
+			K(committed));
 }
 
 int is_hugepage_mem_enough(size_t size)
@@ -497,6 +501,12 @@ int is_hugepage_mem_enough(size_t size)
 	return (size + ~HPAGE_MASK)/HPAGE_SIZE <= htlbpagemem;
 }
 
+/* Return the number of pages of memory we physically have, in PAGE_SIZE units. */
+unsigned long hugetlb_total_pages(void)
+{
+	return htlbzone_pages * (HPAGE_SIZE / PAGE_SIZE);
+}
+
 /*
  * We cannot handle pagefaults against hugetlb pages at all.  They cause
  * handle_mm_fault() to try to instantiate regular-sized pages in the
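
A trivial consumer of the new /proc/meminfo field, for testing; this is
roughly what the transcript in the first mail is printing between steps
(the field name is spelled exactly as the patch emits it):

/* Print the hugetlb fields from /proc/meminfo, including the new
 * HugeCommited_AS line added by this patch. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[128];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return 1;
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "Huge", 4))
			fputs(line, stdout);
	fclose(f);
	return 0;
}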
From: Andy W. <ap...@sh...> - 2004-03-25 16:59:34
[080-mem_acctdom_hugetlb_sysctl] --- include/linux/mman.h | 4 ++-- include/linux/sysctl.h | 2 ++ kernel/sysctl.c | 28 ++++++++++++++++++++++------ mm/mmap.c | 11 +++++++---- mm/nommu.c | 8 ++++++-- security/commoncap.c | 19 ++++++++++--------- security/dummy.c | 19 ++++++++++--------- security/selinux/hooks.c | 19 ++++++++++--------- 8 files changed, 69 insertions(+), 41 deletions(-) diff -X /home/apw/lib/vdiff.excl -rupN reference/include/linux/mman.h current/include/linux/mman.h --- reference/include/linux/mman.h 2004-03-25 15:03:32.000000000 +0000 +++ current/include/linux/mman.h 2004-03-25 16:43:46.000000000 +0000 @@ -10,8 +10,8 @@ #define MREMAP_MAYMOVE 1 #define MREMAP_FIXED 2 -extern int sysctl_overcommit_memory; -extern int sysctl_overcommit_ratio; +extern int sysctl_overcommit_memory[]; +extern int sysctl_overcommit_ratio[]; extern atomic_t vm_committed_space[]; #ifdef CONFIG_SMP diff -X /home/apw/lib/vdiff.excl -rupN reference/include/linux/sysctl.h current/include/linux/sysctl.h --- reference/include/linux/sysctl.h 2004-03-11 20:47:28.000000000 +0000 +++ current/include/linux/sysctl.h 2004-03-25 16:45:06.000000000 +0000 @@ -158,6 +158,8 @@ enum VM_SWAPPINESS=19, /* Tendency to steal mapped memory */ VM_LOWER_ZONE_PROTECTION=20,/* Amount of protection of lower zones */ VM_MIN_FREE_KBYTES=21, /* Minimum free kilobytes to maintain */ + VM_OVERCOMMIT_MEMORY_HUGEPAGES=22, /* Turn off the virtual memory safety limit */ + VM_OVERCOMMIT_RATIO_HUGEPAGES=23, /* percent of RAM to allow overcommit in */ }; diff -X /home/apw/lib/vdiff.excl -rupN reference/kernel/sysctl.c current/kernel/sysctl.c --- reference/kernel/sysctl.c 2004-03-25 15:03:28.000000000 +0000 +++ current/kernel/sysctl.c 2004-03-25 16:44:46.000000000 +0000 @@ -50,8 +50,8 @@ /* External variables not in a header file. 
*/ extern int panic_timeout; extern int C_A_D; -extern int sysctl_overcommit_memory; -extern int sysctl_overcommit_ratio; +extern int sysctl_overcommit_memory[]; +extern int sysctl_overcommit_ratio[]; extern int max_threads; extern atomic_t nr_queued_signals; extern int max_queued_signals; @@ -628,16 +628,16 @@ static ctl_table vm_table[] = { { .ctl_name = VM_OVERCOMMIT_MEMORY, .procname = "overcommit_memory", - .data = &sysctl_overcommit_memory, - .maxlen = sizeof(sysctl_overcommit_memory), + .data = &sysctl_overcommit_memory[VM_AD_DEFAULT], + .maxlen = sizeof(sysctl_overcommit_memory[VM_AD_DEFAULT]), .mode = 0644, .proc_handler = &proc_dointvec, }, { .ctl_name = VM_OVERCOMMIT_RATIO, .procname = "overcommit_ratio", - .data = &sysctl_overcommit_ratio, - .maxlen = sizeof(sysctl_overcommit_ratio), + .data = &sysctl_overcommit_ratio[VM_AD_DEFAULT], + .maxlen = sizeof(sysctl_overcommit_ratio[VM_AD_DEFAULT]), .mode = 0644, .proc_handler = &proc_dointvec, }, @@ -715,6 +715,22 @@ static ctl_table vm_table[] = { .mode = 0644, .proc_handler = &hugetlb_sysctl_handler, }, + { + .ctl_name = VM_OVERCOMMIT_MEMORY_HUGEPAGES, + .procname = "overcommit_memory_hugepages", + .data = &sysctl_overcommit_memory[VM_AD_HUGETLB], + .maxlen = sizeof(sysctl_overcommit_memory[VM_AD_HUGETLB]), + .mode = 0644, + .proc_handler = &proc_dointvec, + }, + { + .ctl_name = VM_OVERCOMMIT_RATIO_HUGEPAGES, + .procname = "overcommit_ratio_hugepages", + .data = &sysctl_overcommit_ratio[VM_AD_HUGETLB], + .maxlen = sizeof(sysctl_overcommit_ratio[VM_AD_HUGETLB]), + .mode = 0644, + .proc_handler = &proc_dointvec, + }, #endif { .ctl_name = VM_LOWER_ZONE_PROTECTION, diff -X /home/apw/lib/vdiff.excl -rupN reference/mm/mmap.c current/mm/mmap.c --- reference/mm/mmap.c 2004-03-25 15:03:32.000000000 +0000 +++ current/mm/mmap.c 2004-03-25 17:23:45.000000000 +0000 @@ -52,8 +52,12 @@ pgprot_t protection_map[16] = { __S000, __S001, __S010, __S011, __S100, __S101, __S110, __S111 }; -int sysctl_overcommit_memory = 0; /* default is heuristic overcommit */ -int sysctl_overcommit_ratio = 50; /* default is 50% */ +/* Defaults are: + * VM_AD_DEFAULT heuristic overcommit, ratio 50% + * VM_AD_HUGETLB strict commit, ratio 100% + */ +int sysctl_overcommit_memory[VM_ACCTDOM_NR] = { 0, 0 }; +int sysctl_overcommit_ratio[VM_ACCTDOM_NR] = { 50, 100 }; atomic_t vm_committed_space[VM_ACCTDOM_NR] = { [ 0 ... VM_ACCTDOM_NR-1 ] = ATOMIC_INIT(0) }; @@ -612,8 +616,7 @@ munmap_back: > current->rlim[RLIMIT_AS].rlim_cur) return -ENOMEM; - if (!(flags & MAP_NORESERVE) || - (acctdom == VM_AD_DEFAULT && sysctl_overcommit_memory > 1)) { + if (!(flags & MAP_NORESERVE) || sysctl_overcommit_memory[acctdom] > 1) { if (vm_flags & VM_SHARED) { /* Check memory availability in shmem_file_setup? */ vm_flags |= VM_ACCOUNT; diff -X /home/apw/lib/vdiff.excl -rupN reference/mm/nommu.c current/mm/nommu.c --- reference/mm/nommu.c 2004-03-25 15:03:32.000000000 +0000 +++ current/mm/nommu.c 2004-03-25 17:23:22.000000000 +0000 @@ -31,8 +31,12 @@ unsigned long num_physpages; unsigned long askedalloc, realalloc; atomic_t vm_committed_space[VM_ACCTDOM_NR] = { [ 0 ... 
VM_ACCTDOM_NR-1 ] = ATOMIC_INIT(0) }; -int sysctl_overcommit_memory; /* default is heuristic overcommit */ -int sysctl_overcommit_ratio = 50; /* default is 50% */ +/* Defaults are: + * VM_AD_DEFAULT heuristic overcommit, ratio 50% + * VM_AD_HUGETLB strict commit, ratio 100% + */ +int sysctl_overcommit_memory[VM_ACCTDOM_NR] = { 0, 0 }; +int sysctl_overcommit_ratio[VM_ACCTDOM_NR] = { 50, 100 }; /* * Handle all mappings that got truncated by a "truncate()" diff -X /home/apw/lib/vdiff.excl -rupN reference/security/commoncap.c current/security/commoncap.c --- reference/security/commoncap.c 2004-03-25 15:03:33.000000000 +0000 +++ current/security/commoncap.c 2004-03-25 17:15:17.000000000 +0000 @@ -315,9 +315,16 @@ int cap_vm_enough_memory(int domain, lon vm_acct_memory(domain, pages); + /* + * Sometimes we want to use more memory than we have + */ + if (sysctl_overcommit_memory[domain] == 1) + return 0; + /* Check against the full compliment of hugepages, no reserve. */ if (domain == VM_AD_HUGETLB) { - allowed = hugetlb_total_pages(); + allowed = hugetlb_total_pages() * + sysctl_overcommit_ratio[domain] / 100; goto check; } @@ -328,13 +335,7 @@ int cap_vm_enough_memory(int domain, lon if (domain != VM_AD_DEFAULT) return 0; - /* - * Sometimes we want to use more memory than we have - */ - if (sysctl_overcommit_memory == 1) - return 0; - - if (sysctl_overcommit_memory == 0) { + if (sysctl_overcommit_memory[domain] == 0) { unsigned long n; free = get_page_cache_size(); @@ -372,7 +373,7 @@ int cap_vm_enough_memory(int domain, lon return -ENOMEM; } - allowed = totalram_pages * sysctl_overcommit_ratio / 100; + allowed = totalram_pages * sysctl_overcommit_ratio[domain] / 100; allowed += total_swap_pages; check: diff -X /home/apw/lib/vdiff.excl -rupN reference/security/dummy.c current/security/dummy.c --- reference/security/dummy.c 2004-03-25 15:03:33.000000000 +0000 +++ current/security/dummy.c 2004-03-25 17:16:21.000000000 +0000 @@ -116,9 +116,16 @@ static int dummy_vm_enough_memory(int do vm_acct_memory(domain, pages); + /* + * Sometimes we want to use more memory than we have + */ + if (sysctl_overcommit_memory[domain] == 1) + return 0; + /* Check against the full compliment of hugepages, no reserve. */ if (domain == VM_AD_HUGETLB) { - allowed = hugetlb_total_pages(); + allowed = hugetlb_total_pages() * + sysctl_overcommit_ratio[domain] / 100; goto check; } @@ -129,13 +136,7 @@ static int dummy_vm_enough_memory(int do if (domain != VM_AD_DEFAULT) return 0; - /* - * Sometimes we want to use more memory than we have - */ - if (sysctl_overcommit_memory == 1) - return 0; - - if (sysctl_overcommit_memory == 0) { + if (sysctl_overcommit_memory[domain] == 0) { free = get_page_cache_size(); free += nr_free_pages(); free += nr_swap_pages; @@ -160,7 +161,7 @@ static int dummy_vm_enough_memory(int do return -ENOMEM; } - allowed = totalram_pages * sysctl_overcommit_ratio / 100; + allowed = totalram_pages * sysctl_overcommit_ratio[domain] / 100; allowed += total_swap_pages; check: diff -X /home/apw/lib/vdiff.excl -rupN reference/security/selinux/hooks.c current/security/selinux/hooks.c --- reference/security/selinux/hooks.c 2004-03-25 15:03:33.000000000 +0000 +++ current/security/selinux/hooks.c 2004-03-25 17:16:44.000000000 +0000 @@ -1505,9 +1505,16 @@ static int selinux_vm_enough_memory(int vm_acct_memory(domain, pages); + /* + * Sometimes we want to use more memory than we have + */ + if (sysctl_overcommit_memory[domain] == 1) + return 0; + /* Check against the full compliment of hugepages, no reserve. 
*/ if (domain == VM_AD_HUGETLB) { - allowed = hugetlb_total_pages(); + allowed = hugetlb_total_pages() * + sysctl_overcommit_ratio[domain] / 100; goto check; } @@ -1518,13 +1525,7 @@ static int selinux_vm_enough_memory(int if (domain != VM_AD_DEFAULT) return 0; - /* - * Sometimes we want to use more memory than we have - */ - if (sysctl_overcommit_memory == 1) - return 0; - - if (sysctl_overcommit_memory == 0) { + if (sysctl_overcommit_memory[domain] == 0) { free = get_page_cache_size(); free += nr_free_pages(); free += nr_swap_pages; @@ -1558,7 +1559,7 @@ static int selinux_vm_enough_memory(int return -ENOMEM; } - allowed = totalram_pages * sysctl_overcommit_ratio / 100; + allowed = totalram_pages * sysctl_overcommit_ratio[domain] / 100; allowed += total_swap_pages; check: |
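A note on exercising the new knobs: once this patch is applied, the hugetlb domain gains its own /proc/sys/vm/overcommit_memory_hugepages and /proc/sys/vm/overcommit_ratio_hugepages files (the names come straight from the ctl_table entries above). A minimal, purely illustrative userspace reader:

    #include <stdio.h>
    #include <stdlib.h>

    /* Read one integer-valued sysctl from /proc/sys/vm. */
    static int read_vm_sysctl(const char *name)
    {
        char path[128];
        int val;
        FILE *f;

        snprintf(path, sizeof(path), "/proc/sys/vm/%s", name);
        f = fopen(path, "r");
        if (!f || fscanf(f, "%d", &val) != 1) {
            perror(path);
            exit(1);
        }
        fclose(f);
        return val;
    }

    int main(void)
    {
        /* Per the vm_enough_memory logic above, mode 1 disables the
         * check for a domain; otherwise hugetlb commitments are held
         * to ratio% of the hugetlb pool. */
        printf("overcommit_memory_hugepages = %d\n",
               read_vm_sysctl("overcommit_memory_hugepages"));
        printf("overcommit_ratio_hugepages  = %d\n",
               read_vm_sysctl("overcommit_ratio_hugepages"));
        return 0;
    }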
From: Andrew M. <ak...@os...> - 2004-03-25 21:02:58
|
Andy Whitcroft <ap...@sh...> wrote: > > HUGETLB Overcommit Handling > --------------------------- > When building mappings the kernel tracks committed but not yet > allocated pages against available memory and swap preventing memory > allocation problems later. The introduction of hugetlb pages has > has significant ramifications for this accounting as the pages used > to back them are already removed from the available memory pool. Sorry, but I just don't see why we need all this complexity and generality. If there was any likelihood that there would be additional memory domains in the 2.6 future then OK. But I don't think there will be. We simply need some little old patch which fixes this bug. Such as adding a `vma' arg to vm_enough_memory() and vm_unacct_memory() and doing if (is_vm_hugetlb_page(vma)) return; and - allowed = totalram_pages * sysctl_overcommit_ratio / 100; + allowed = (totalram_pages - htlbpagemem << HPAGE_SHIFT) * + sysctl_overcommit_ratio / 100; in cap_vm_enough_memory(). |
From: Andy W. <ap...@sh...> - 2004-03-25 23:24:00
|
--On 25 March 2004 13:04 -0800 Andrew Morton <ak...@os...> wrote: > Sorry, but I just don't see why we need all this complexity and generality. > > If there was any likelihood that there would be additional memory domains > in the 2.6 future then OK. But I don't think there will be. We simply > need some little old patch which fixes this bug. > > Such as adding a `vma' arg to vm_enough_memory() and vm_unacct_memory() and > doing > > if (is_vm_hugetlb_page(vma)) > return; > > and > > - allowed = totalram_pages * sysctl_overcommit_ratio / 100; > + allowed = (totalram_pages - htlbpagemem << HPAGE_SHIFT) * > + sysctl_overcommit_ratio / 100; > > in cap_vm_enough_memory(). That's pretty much what you get if you only apply the first two patches. Sadly, you can't just pass a vma, as you don't always have one when you are making the decision. For example, when a shm segment is being created you need to commit the memory at that point, but it's not been attached at all so there is no vma to check. That's why I went with an abstract domain. These patches have been tested in isolation and do seem to work. The other patches started out wanting to solve a second issue; the generality seemed to come out naturally. I am not sure how important it is, but when we create a normal shm segment we commit the memory then. For a hugetlb one we only commit the memory when the region is attached the first time, i.e. when the pages are cleared and filled. Also we have no policy control over them. In short, I guess if we are only trying to fix the overcommit crossover between normal and hugetlb memory, then the first two patches should be basically there. Let me know what the decision is and I'll steer the ship in that direction. -apw |
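To make the "no vma at creation time" point concrete, here is a minimal user-level sketch of the sequence being discussed (the SHM_HUGETLB fallback definition is for older headers; the key and segment size echo the tester transcript, and everything else is illustrative):

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    #ifndef SHM_HUGETLB
    #define SHM_HUGETLB 04000                   /* from <linux/shm.h> */
    #endif

    #define SEG_SIZE (400UL * 2 * 1024 * 1024)  /* 400 x 2MB huge pages */

    int main(void)
    {
        /* The commitment has to be checked here: at this point the
         * kernel sees only a size; no vma exists yet for the segment. */
        int id = shmget(0xdead0000, SEG_SIZE,
                        IPC_CREAT | SHM_HUGETLB | 0600);
        if (id < 0) {
            perror("shmget");   /* ENOMEM once committed space runs out */
            return 1;
        }

        /* Only here, at attach time, is a vma actually created. */
        void *p = shmat(id, NULL, 0);
        if (p == (void *)-1)
            perror("shmat");
        return 0;
    }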
From: Andrew M. <ak...@os...> - 2004-03-25 23:49:28
|
Andy Whitcroft <ap...@sh...> wrote: > > --On 25 March 2004 13:04 -0800 Andrew Morton <ak...@os...> wrote: > > > Sorry, but I just don't see why we need all this complexity and generality. > > > > If there was any likelihood that there would be additional memory domains > > in the 2.6 future then OK. But I don't think there will be. We simply > > need some little old patch which fixes this bug. > > > > Such as adding a `vma' arg to vm_enough_memory() and vm_unacct_memory() and > > doing > > > > if (is_vm_hugetlb_page(vma)) > > return; > > > > and > > > > - allowed = totalram_pages * sysctl_overcommit_ratio / 100; > > + allowed = (totalram_pages - htlbpagemem << HPAGE_SHIFT) * > > + sysctl_overcommit_ratio / 100; > > > > in cap_vm_enough_memory(). > > That's pretty much what you get if you only apply the first two patches. Sadly, you can't just pass a vma, as you don't always have one when you are making the decision. For example, when a shm segment is being created you need to commit the memory at that point, but it's not been attached at all so there is no vma to check. That's why I went with an abstract domain. These patches have been tested in isolation and do seem to work. > > The other patches started out wanting to solve a second issue; the generality seemed to come out naturally. I am not sure how important it is, but when we create a normal shm segment we commit the memory then. For a hugetlb one we only commit the memory when the region is attached the first time, i.e. when the pages are cleared and filled. Also we have no policy control over them. > > In short, I guess if we are only trying to fix the overcommit crossover between normal and hugetlb memory, then the first two patches should be basically there. > > Let me know what the decision is and I'll steer the ship in that direction. I think it's simply: - Make normal overcommit logic skip hugepages completely - Teach the overcommit_memory=2 logic that hugepages are basically "pinned", so subtract them from the arithmetic. And that's it. The hugepages are semantically quite different from normal memory (prefaulted, preallocated, unswappable) and we've deliberately avoided pretending otherwise. As for the shm problem, well, perhaps it's best to leave vm_enough_memory() as it is and fix it up in the callers. So most callsites will call: static inline int vm_enough_memory_vma(struct vm_area_struct *vma, unsigned long nr_pages) { if (is_vm_hugetlb_page(vma)) return 0; return vm_enough_memory(nr_pages); } and in do_mmap_pgoff() perhaps we can do: + if (file && !is_file_hugepages(file)) { charged = len >> PAGE_SHIFT; if (security_vm_enough_memory(charged)) return -ENOMEM; + } |
From: Martin J. B. <mb...@ar...> - 2004-03-26 00:19:02
|
> I think it's simply: > > - Make normal overcommit logic skip hugepages completely > > - Teach the overcommit_memory=2 logic that hugepages are basically > "pinned", so subtract them from the arithmetic. > > And that's it. The hugepages are semantically quite different from normal > memory (prefaulted, preallocated, unswappable) and we've deliberately > avoided pretending otherwise. It would be nice (to fix some of the posted problems) if hugepages didn't have to be prefaulted ... if they had their own overcommit pool (that we used whether normal overcommit was on or not), that'd be unnecessary. Specifically: 1) SGI found that requesting oodles of large pages took forever. 2) The NUMA allocation API wants to be able to specify policies, which means not prefaulting them. I'd agree that stopping hugepages from using the main overcommit pool is the first priority, but it'd be nice to go one stage further. M. |
From: Ray B. <ra...@sg...> - 2004-03-28 17:57:24
|
I guess I am missing something entirely here. I've been off making "allocate on fault" hugetlb pages work on 2.4.21 on Altix (that is, after all, the kernel for the production code for Altix at the present time). It's getting close; I'm still working on making fork() work correctly with this, and once that is done I will move it to 2.6 and submit a patch. As I understood this originally, the suggestion was to reserve hugetlb pages at mmap() or shmget() time so that the user would get an -ENOMEM at that time if there aren't enough hugetlb pages to (eventually) satisfy the request, as per the notion that we shouldn't modify the user API due to going with allocate on fault instead of hugetlb_prefault(). Since the reservation belongs to the mapped object (file or segment), I've been storing the current file/segment's reservation in the filesystem-dependent part of the inode. That way, it is easily accessible when the hugetlbfs file or SysV segment is removed and we can reduce the total number of reserved pages by that file's reservation at that time. This also allows us to handle the reservation in the absence of a vma, as per Andy's comment below. Admittedly this doesn't allow one to request that hugetlbpages be overcommitted, or to handle problems caused to the "normal" page overcommit code due to the presence of hugepages. But we figure that anyone that is actually using hugetlb pages is likely to take over almost all of main memory anyway in a single job, so overcommit doesn't make much sense to us. So, am I completely off "in the weeds" on this, or does the above seem like an acceptable, and simple, approach? Andy Whitcroft wrote: > --On 25 March 2004 13:04 -0800 Andrew Morton <ak...@os...> wrote: > > >>Sorry, but I just don't see why we need all this complexity and generality. >> >>If there was any likelihood that there would be additional memory domains >>in the 2.6 future then OK. But I don't think there will be. We simply >>need some little old patch which fixes this bug. >> >>Such as adding a `vma' arg to vm_enough_memory() and vm_unacct_memory() and >>doing >> >> if (is_vm_hugetlb_page(vma)) >> return; >> >>and >> >>- allowed = totalram_pages * sysctl_overcommit_ratio / 100; >>+ allowed = (totalram_pages - htlbpagemem << HPAGE_SHIFT) * >>+ sysctl_overcommit_ratio / 100; >> >>in cap_vm_enough_memory(). > > > That's pretty much what you get if you only apply the first two patches. Sadly, you can't just pass a vma, as you don't always have one when you are making the decision. For example, when a shm segment is being created you need to commit the memory at that point, but it's not been attached at all so there is no vma to check. That's why I went with an abstract domain. These patches have been tested in isolation and do seem to work. > > The other patches started out wanting to solve a second issue; the generality seemed to come out naturally. I am not sure how important it is, but when we create a normal shm segment we commit the memory then. For a hugetlb one we only commit the memory when the region is attached the first time, i.e. when the pages are cleared and filled. Also we have no policy control over them. > > In short, I guess if we are only trying to fix the overcommit crossover between normal and hugetlb memory, then the first two patches should be basically there. > > Let me know what the decision is and I'll steer the ship in that direction.
> > -apw > -- Best Regards, Ray ----------------------------------------------- Ray Bryant 512-453-9679 (work) 512-507-7807 (cell) ra...@sg... ra...@au... The box said: "Requires Windows 98 or better", so I installed Linux. ----------------------------------------------- |
From: Martin J. B. <mb...@ar...> - 2004-03-28 19:10:08
|
> As I understood this originally, the suggestion was to reserve hugetlb > pages at mmap() or shmget() time so that the user would get an -ENOMEM > at that time if there aren't enough hugetlb pages to (eventually) satisfy > the request, as per the notion that we shouldn't modify the user API due > to going with allocate on fault instead of hugetlb_prefault(). Yup, but there were two parts to it: 1. Stop hugepages using the existing overcommit pool for small pages, which breaks small page allocations by prematurely depleting the pool. 2. Give hugepages their own over-commit pool, instead of prefaulting. Personally I think we need both (as you seem to), but (1) is probably more urgent. > Since the reservation belongs to the mapped object (file or segment), > I've been storing the current file/segment's reservation in the > filesystem-dependent part of the inode. That way, it is easily accessible > when the hugetlbfs file or SysV segment is removed and we can reduce > the total number of reserved pages by that file's reservation at that > time. This also allows us to handle the reservation in the absence > of a vma, as per Andy's comment below. Do we need to store it there, or is one central pool number sufficient? I would have thought it was ... > Admittedly this doesn't allow one to request that hugetlbpages be > overcommitted, or to handle problems caused to the "normal" page > overcommit code due to the presence of hugepages. But we figure that > anyone that is actually using hugetlb pages is likely to take over > almost all of main memory anyway in a single job, so overcommit > doesn't make much sense to us. Seeing as you can't swap them, overcommitting makes no sense to me either ;-) M. |
From: Ray B. <ra...@sg...> - 2004-03-28 21:25:34
|
Martin J. Bligh wrote: >>As I understood this originally, the suggestion was to reserve hugetlb >>pages at mmap() or shmget() time so that the user would get an -ENOMEM >>at that time if there aren't enough hugetlb pages to (eventually) satisfy >>the request, as per the notion that we shouldn't modify the user API due >>to going with allocate on fault instead of hugetlb_prefault(). > > > Yup, but there were two parts to it: > > 1. Stop hugepages using the existing overcommit pool for small pages, > which breaks small page allocations by prematurely depleting the pool. > 2. Give hugepages their own over-commit pool, instead of prefaulting. > > Personally I think we need both (as you seem to), but (1) is probably > more urgent. Just to review: even if we allocate hugetlb pages at fault rather than at mmap() time, hugetlb pages are created either at system boot time (kernel parameter "hugepages=") or by setting /proc/sys/vm/nr_hugepages (or by using the corresponding sysctl). Once the set of hugepages is created this way, it is never changed by the act of allocating a huge page to a process. (Changing nr_hugepages can cause the number of unallocated hugetlbpages to be increased or decreased.) The reason for pointing this out (apologies if this was obvious to all) is to emphasize that hugetlbpages are not created at hugetlbpage allocation time (which is now done at mmap() time and we'd like to change it to happen at fault time). So to stop hugepages from using the small page overcommit pool, we just need code in set_hugetlb_mem_size() to reduce the number of hugetlbpages created by that code. As for (2), I'm a little confused there, as later you appear to agree with me that overcommitting hugetlbpages is likely not useful. Is it possible that you meant that there should be a list of hugetlbpages from which all allocations are made? If so, that is the way the code has always worked; step 1 was to create the list of hugetlbpages, and step 2 was to allocate them. (Once again, if this is obvious to all, I apologize and we can dump the last 4 paragraphs into the bit bucket with no known effect on entropy in this universe, at least.) > > >>Since the reservation belongs to the mapped object (file or segment), >>I've been storing the current file/segment's reservation in the >>filesystem-dependent part of the inode. That way, it is easily accessible >>when the hugetlbfs file or SysV segment is removed and we can reduce >>the total number of reserved pages by that file's reservation at that >>time. This also allows us to handle the reservation in the absence >>of a vma, as per Andy's comment below. > > > Do we need to store it there, or is one central pool number sufficient? > I would have thought it was ... > Yes, there is a central pool number indicating how many hugepages are reserved. The question is, when (and how) do you release that reservation? My take is that the reservation is associated with the file (for mmap) or segment for SysV. For example, program A mmap()'s a hugetlbfs file, but only touches part of the pages. Program B then mmap()'s the same file with the same size, etc. When program B does the mmap() the previous reservation should still be in place, right? (The file is persistent in the page cache even if it does not persist over reboot, so the 2nd program is expecting to see the data that the first program put there.) Ditto for a SysV segment. 
So one can't release the reservation when the current process doing the mmap() goes away; one has to release the reservation when the file/segment is deleted. Since both mmap() and shmget() create an inode, and the inode is released by hugetlbfs_drop_inode() and friends, it seemed simplest to put the size of the mapped object's reservation in the inode. The global count of reserved pages (the "central pool number" in your note) is incremented at mmap() time (well, actually done by hugetlbfs_file_mmap() for both mmap() and shmget()) and decremented at hugetlbfs_drop_inode() time. If at mmap() time, incrementing the global reservation count would make the global reserved pages count > the number of hugetlbpages, we fail the mmap() with -ENOMEM. At least that is the way my 2.4.21 code works. Does that make things clearer? > >>Admittedly this doesn't allow one to request that hugetlbpages be >>overcommitted, or to handle problems caused to the "normal" page >>overcommit code due to the presence of hugepages. But we figure that >>anyone that is actually using hugetlb pages is likely to take over >>almost all of main memory anyway in a single job, so overcommit >>doesn't make much sense to us. > > > Seeing as you can't swap them, overcommitting makes no sense to me > either ;-) > > M. -- Best Regards, Ray ----------------------------------------------- Ray Bryant 512-453-9679 (work) 512-507-7807 (cell) ra...@sg... ra...@au... The box said: "Requires Windows 98 or better", so I installed Linux. ----------------------------------------------- |
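A sketch of the bookkeeping Ray describes, under loudly flagged assumptions: every identifier here (htlb_reserve_lock, htlb_reserved, htlb_total, the reserved field in the fs-dependent inode info) is invented for illustration and is not his actual 2.4.21 code. The shape is: one central counter checked and bumped at mmap() time, plus a per-object copy so the right amount can be handed back when the inode finally dies.

    #include <linux/spinlock.h>
    #include <linux/errno.h>

    /* Hypothetical stand-in for the fs-dependent part of the inode. */
    struct hugetlbfs_inode_info_sketch {
        unsigned long reserved;         /* this object's reservation */
    };

    static spinlock_t htlb_reserve_lock = SPIN_LOCK_UNLOCKED;
    static unsigned long htlb_reserved; /* the "central pool number" */
    static unsigned long htlb_total;    /* total configured huge pages */

    /* Called from hugetlbfs_file_mmap() for both mmap() and shmget(). */
    static int htlb_reserve(struct hugetlbfs_inode_info_sketch *info,
                            unsigned long npages)
    {
        int ret = 0;

        spin_lock(&htlb_reserve_lock);
        if (htlb_reserved + npages > htlb_total)
            ret = -ENOMEM;              /* fail the mmap() up front */
        else {
            htlb_reserved += npages;
            info->reserved += npages;
        }
        spin_unlock(&htlb_reserve_lock);
        return ret;
    }

    /* Called when the file or SysV segment is finally deleted, i.e.
     * from hugetlbfs_drop_inode() and friends. */
    static void htlb_unreserve(struct hugetlbfs_inode_info_sketch *info)
    {
        spin_lock(&htlb_reserve_lock);
        htlb_reserved -= info->reserved;
        info->reserved = 0;
        spin_unlock(&htlb_reserve_lock);
    }

Keeping the per-inode copy is what lets the release happen at object deletion rather than at process exit, which is exactly the semantics discussed above.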
From: Martin J. B. <mb...@ar...> - 2004-03-29 16:50:45
|
>> Yup, but there were two parts to it: >> >> 1. Stop hugepages using the existing overcommit pool for small pages, >> which breaks small page allocations by prematurely depleting the pool. >> 2. Give hugepages their own over-commit pool, instead of prefaulting. >> >> Personally I think we need both (as you seem to), but (1) is probably >> more urgent. > > Just to review: even if we allocate hugetlb pages at fault rather than > at mmap() time, hugetlb pages are created either at system boot time > (kernel parameter "hugepages=") or by setting /proc/sys/vm/nr_hugepages > (or by using the corresponding sysctl). Once the set of hugepages is > created this way, it is never changed by the act of allocating a huge > page to a process. (Changing nr_hugepages can cause the number of unallocated > hugetlbpages to be increased or decreased.) Yup. > The reason for pointing this out (apologies if this was obvious to all) > is to emphasize that hugetlbpages are not created at hugetlbpage allocation > time (which is now done at mmap() time and we'd like to change it to happen > at fault time). Yup. > So to stop hugepages from using the small page overcommit pool, we just > need code in set_hugetlb_mem_size() to reduce the number of hugetlbpages > created by that code. I think Andy already fixed that bit, though I'm not sure what method he used. It seems to me (without really looking) that we just need to not decrement the pool size when we map a huge page. > As for (2), I'm a little confused there, as later you appear to agree > with me that overcommitting hugetlbpages is likely not useful. I think I'm just being confusing via sloppy terminology, but we're in resounding agreement in reality ;-) > Is it possible that you meant that there should be a list of hugetlbpages > from which all allocations are made? If so, that is the way the code has > always worked; step 1 was to create the list of hugetlbpages, and step 2 > was to allocate them. I meant if we keep a counter of the number of hugetlb pages available, every time we get a call to allocate them, we can avoid prefault by just decrementing the counter of "available" pages, and fault them in later, just like the existing strict-overcommit code does, and we'll never fail to allocate. If we're doing *strict* NUMA bindings, it does need to be a little more complex, in that things will need to remember which node they're "pre-allocated" from. The fact that the "overcommit" code *prevents* overcommit is probably not helping the discussion's clarity ;-) > (Once again, if this is obvious to all, I apologize and we can dump the last > 4 paragraphs into the bit bucket with no known effect on entropy in this > universe, at least.) Well, above is what *I* meant, and I *think* roughly what you meant. But probably best to clarify ;-) >>> Since the reservation belongs to the mapped object (file or segment), >>> I've been storing the current file/segment's reservation in the >>> filesystem-dependent part of the inode. That way, it is easily accessible >>> when the hugetlbfs file or SysV segment is removed and we can reduce >>> the total number of reserved pages by that file's reservation at that >>> time. This also allows us to handle the reservation in the absence >>> of a vma, as per Andy's comment below. >> >> >> Do we need to store it there, or is one central pool number sufficient? >> I would have thought it was ... > > Yes, there is a central pool number indicating how many hugepages are reserved. > The question is, when (and how) do you release that reservation? 
> My take is that the reservation is associated with the file (for mmap) or segment for SysV. Ah, I see what you mean. You can't really release it at 0 refcount without changing the semantics, in case it's re-used later. Hum. Yes, I see what you mean. > For example, program A mmap()'s a hugetlbfs file, but only touches part of the > pages. Program B then mmap()'s the same file with the same size, etc. When > program B does the mmap() the previous reservation should still be in place, right? > (The file is persistent in the page cache even if it does not persist over reboot, > so the 2nd program is expecting to see the data that the first program put there.) > > Ditto for a SysV segment. Yes. I think Adam's patches in my tree support anon mem_map though. That's going to get rather tricky ... we run into similar problems as objrmap, I think. > So one can't release the reservation when the current process doing the mmap() > goes away; one has to release the reservation when the file/segment is deleted. > Since both mmap() and shmget() create an inode, and the inode is released by > hugetlbfs_drop_inode() and friends, it seemed simplest to put the size of the > mapped object's reservation in the inode. Yup, I'd missed that - thanks for explaining ;-) > The global count of reserved pages (the "central pool number" in your note) > is incremented at mmap() time (well, actually done by hugetlbfs_file_mmap() > for both mmap() and shmget()) and decremented at hugetlbfs_drop_inode() time. > If at mmap() time, incrementing the global reservation count would make the > global reserved pages count > the number of hugetlbpages, we fail the mmap() > with -ENOMEM. > > At least that is the way my 2.4.21 code works. Does that make things clearer? A lot ;-) Thanks, M. |
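Martin's aside about strict NUMA bindings, sketched under the same caveat (all names invented, locking omitted): the single "available" counter becomes per-node, and a reservation must remember which node it came from so the later demand faults draw from the same node's pool.

    #include <linux/mmzone.h>   /* MAX_NUMNODES */
    #include <linux/errno.h>

    static unsigned long htlb_avail_on_node[MAX_NUMNODES];

    /* Reserve npages huge pages from node nid; the caller records nid
     * in the mapping so faults later allocate from that node. */
    static int htlb_reserve_on_node(int nid, unsigned long npages)
    {
        if (htlb_avail_on_node[nid] < npages)
            return -ENOMEM;
        htlb_avail_on_node[nid] -= npages;
        return 0;
    }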
From: Andy W. <ap...@sh...> - 2004-03-29 12:28:42
Attachments:
070-hugetlb_commit.txt
|
--On 28 March 2004 11:10 -0800 "Martin J. Bligh" <mb...@ar...> wrote: > 1. Stop hugepages using the existing overcommit pool for small pages, > which breaks small page allocations by prematurely depleting the pool. > 2. Give hugepages their own over-commit pool, instead of prefaulting. Indeed. The previous patches I submitted only address #1. Attached is another patch which should address #2; it supplies hugetlb commit accounting. This is checked and applied when the segment is created. It also supplements the meminfo information to display this new commitment. The patch only implements strict commitment, but as has been stated here often, it is not clear that overcommit of unswappable memory makes any sense in the absence of demand allocation. When that is implemented, this will likely need a policy. Patch applies on top of my previous patch and has been tested on i386. -apw |
From: Chen, K. W <ken...@in...> - 2004-03-29 20:46:26
|
>>>> Andy Whitcroft wrote on Mon, March 29, 2004 4:30 AM > Indeed. The previous patches I submitted only address #1. Attached is > another patch which should address #2; it supplies hugetlb commit > accounting. This is checked and applied when the segment is created. It > also supplements the meminfo information to display this new commitment. > The patch only implements strict commitment, but as has been stated here > often, it is not clear that overcommit of unswappable memory makes any > sense in the absence of demand allocation. When that is implemented, > this will likely need a policy. > > Patch applies on top of my previous patch and has been tested on i386. +int hugetlbfs_report_meminfo(char *buf) +{ + long htlb = atomic_read(&hugetlb_committed_space); + return sprintf(buf, "HugeCommited_AS: %5lu\n", htlb); +} "HugeCommited_AS", typo?? Should that be double "t"? Also can we print in terms of kB instead of num pages to match all other entries? Something like: htlb<<(PAGE_SHIFT-10)? Overcommit is not checked for hugetlb mmap; is that intentional? - Ken |
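Ken's suggested conversion, spelled out: shifting a page count left by PAGE_SHIFT - 10 multiplies it by PAGE_SIZE/1024, i.e. turns pages into kB. A quick, illustrative userspace sanity check for i386 values (PAGE_SHIFT = 12):

    #include <stdio.h>

    #define PAGE_SHIFT 12               /* i386, 4kB pages */

    int main(void)
    {
        /* One 400 x 2MB segment = 800MB = 204800 small pages. */
        unsigned long pages = 204800;

        /* x << (PAGE_SHIFT - 10) == x * (PAGE_SIZE / 1024) */
        printf("%lu pages = %lu kB\n", pages, pages << (PAGE_SHIFT - 10));
        /* prints: 204800 pages = 819200 kB */
        return 0;
    }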
From: Andy W. <ap...@sh...> - 2004-03-30 12:54:42
Attachments:
070-hugetlb_commit.txt
|
--On 29 March 2004 12:45 -0800 "Chen, Kenneth W" <ken...@in...> wrote: > +int hugetlbfs_report_meminfo(char *buf) > +{ > + long htlb = atomic_read(&hugetlb_committed_space); > + return sprintf(buf, "HugeCommited_AS: %5lu\n", htlb); > +} > > "HugeCommited_AS", typo?? Should that be double "t"? Also can we print > in terms of kB instead of num pages to match all other entries? Something > like: htlb<<(PAGE_SHIFT-10)? Doh and Doh. Yes, we went through a stage where this was in hugetlb pages, but it has ended up in the same units as the small page pool. Attached is a replacement patch with this changed; below is a relative diff against the previous patch. > Overcommit is not checked for hugetlb mmap; is that intentional? > Just to follow up myself, I meant overcommit accounting is not done > for mmap hugetlb page. (typical Monday morning symptom :)) Essentially, hugetlb pages can only be part of a shared mapping in the current implementation. As a result all commitments are made and checked at segment create time. The commitment cannot change. Hope that's what you meant. Martin, perhaps this is a candidate for your -mjb tree? -apw diff -X /home/apw/lib/vdiff.excl -rupN reference/fs/hugetlbfs/inode.c current/fs/hugetlbfs/inode.c --- reference/fs/hugetlbfs/inode.c 2004-03-29 14:05:22.000000000 +0100 +++ current/fs/hugetlbfs/inode.c 2004-03-30 09:52:59.000000000 +0100 @@ -47,8 +47,10 @@ int hugetlb_acct_memory(long delta) int hugetlbfs_report_meminfo(char *buf) { +#define K(x) ((x) << (PAGE_SHIFT - 10)) long htlb = atomic_read(&hugetlb_committed_space); - return sprintf(buf, "HugeCommited_AS: %5lu\n", htlb); + return sprintf(buf, "HugeCommitted_AS: %5lu kB\n", K(htlb)); +#undef K } static struct super_operations hugetlbfs_ops; |
From: Chen, K. W <ken...@in...> - 2004-03-30 20:04:38
|
>>>>> Andy Whitcroft wrote on Tuesday, March 30, 2004 4:58 AM > > > > Just to follow up myself, I meant overcommit accounting is not done > > for mmap hugetlb page. (typical Monday morning symptom :)) > > Essentially, hugetlb pages can only be part of a shared mapping in > the current implementation. As a result all commitments are made > and checked at segment create time. The commitment cannot change. > > Hope that's what you meant. Not quite; I can simply mmap a hugetlbfs-backed file to get hugetlb pages. File expansion is transparent. It gets even trickier with a file that has holes in it. I can do: fd = open("/mnt/htlb/myhtlbfile", O_CREAT|O_RDWR, 0755); mmap(..., fd, offset); Accounting didn't happen in this case (grep Huge /proc/meminfo): HugePages_Total: 10 HugePages_Free: 9 Hugepagesize: 262144 kB HugeCommitted_AS: 0 kB Now if I remove the file "myhtlbfile", accounting is done for inode removal and hugetlb_committed_space underflows. HugePages_Total: 10 HugePages_Free: 10 Hugepagesize: 262144 kB HugeCommitted_AS: 18446744073709289472 kB |
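Ken's reproducer, fleshed out into a complete program for clarity (the path and mode are from his message; the 256MB page size matches his meminfo output; the error handling and explicit unlink are added here and purely illustrative). The point is that this mmap() path never went through the shm-only accounting, so the uncharge at inode-removal time underflows hugetlb_committed_space:

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/mman.h>

    #define HPAGE_SIZE (256UL << 20)    /* Hugepagesize: 262144 kB above */

    int main(void)
    {
        void *p;
        int fd = open("/mnt/htlb/myhtlbfile", O_CREAT | O_RDWR, 0755);

        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* One huge page mapped directly, bypassing shmget(): nothing
         * is charged to hugetlb_committed_space here. */
        p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
                 MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            perror("mmap");
        close(fd);

        /* Removing the file drops the inode; the uncharge done there
         * underflows the (never incremented) committed count. */
        unlink("/mnt/htlb/myhtlbfile");
        return 0;
    }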
From: Andy W. <ap...@sh...> - 2004-03-30 21:50:38
|
--On 30 March 2004 12:04 -0800 "Chen, Kenneth W" <ken...@in...> wrote: > I can do: > fd = open("/mnt/htlb/myhtlbfile", O_CREAT|O_RDWR, 0755); > mmap(..., fd, offset); > > Accounting didn't happen in this case, (grep Huge /proc/meminfo): > > HugePages_Total: 10 > HugePages_Free: 9 > Hugepagesize: 262144 kB > HugeCommitted_AS: 0 kB Oooops. Now I get you. Thanks for pointing that out. More work required. -apw |
From: Andy W. <ap...@sh...> - 2004-03-31 01:49:21
Attachments:
070-hugetlb_commit.txt
|
--On 30 March 2004 22:48 +0100 Andy Whitcroft <ap...@sh...> wrote: >> I can do: >> fd = open("/mnt/htlb/myhtlbfile", O_CREAT|O_RDWR, 0755); >> mmap(..., fd, offset); >> >> Accounting didn't happen in this case (grep Huge /proc/meminfo): O.k. Try this one. Should fix that case. There is some ugliness in there which needs review, but my testing says this works. Thanks for testing. -apw |
From: Chen, K. W <ken...@in...> - 2004-03-31 08:51:58
|
>>>> Andy Whitcroft wrote on Tuesday, March 30, 2004 5:49 PM >>> fd = open("/mnt/htlb/myhtlbfile", O_CREAT|O_RDWR, 0755); >>> mmap(..., fd, offset); >>> >>> Accounting didn't happen in this case (grep Huge /proc/meminfo): > > O.k. Try this one. Should fix that case. There is some ugliness in > there which needs review, but my testing says this works. Under the common case it worked perfectly! But there are always corner cases. I can think of two ugly cases: 1. A very sparse hugetlb file. I can mmap one hugetlb page at offset 512GB. This would account 512GB + 1 hugetlb page as committed_AS. But I only asked for a one-page mapping. One can say it's a feature, but I think it's a bug. 2. There is no error checking (to undo the committed_AS accounting) after hugetlb_prefault(). hugetlb_prefault doesn't always succeed in allocating all the pages the user asked for due to the disk quota limit. It can have a partial allocation which would put committed_AS in a wedged state. - Ken |
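Corner case 1 as a concrete call, for illustration only (a 64-bit box like Ken's ia64 is assumed so the plain mmap() offset fits): one huge page is requested, yet accounting that works from vm_pgoff plus length charges the whole hole as well.

    #include <sys/mman.h>

    #define HPAGE_SIZE (256UL << 20)

    /* fd is an open hugetlbfs file. Only one huge page is mapped, but
     * size-from-offset accounting charges 512GB + HPAGE_SIZE as
     * committed space. */
    static void *map_sparse_page(int fd)
    {
        return mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
                    MAP_SHARED, fd, 512UL << 30);
    }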
From: Andy W. <ap...@sh...> - 2004-03-31 16:17:13
|
--On 31 March 2004 00:51 -0800 "Chen, Kenneth W" <ken...@in...> wrote: >>>>> Andy Whitcroft wrote on Tuesday, March 30, 2004 5:49 PM >>>> fd = open("/mnt/htlb/myhtlbfile", O_CREAT|O_RDWR, 0755); >>>> mmap(..., fd, offset); >>>> >>>> Accounting didn't happen in this case (grep Huge /proc/meminfo): >> >> O.k. Try this one. Should fix that case. There is some ugliness in >> there which needs review, but my testing says this works. > > Under the common case it worked perfectly! But there are always corner cases. > > I can think of two ugly cases: > 1. A very sparse hugetlb file. I can mmap one hugetlb page at offset > 512GB. This would account 512GB + 1 hugetlb page as committed_AS. > But I only asked for a one-page mapping. One can say it's a feature, > but I think it's a bug. Yes. This is true. This is consistent with the preallocation behaviour of shared memory segments, but inconsistent with the behaviour of mmap'ing /dev/zero, which it essentially emulates. This is not trivial to fix as we do not get informed when the unmap occurs. Accounting for normal pages is handled directly by the VM unmap code. I think I have found a way to track these but it does blur the interfaces between the hugetlbfs and hugepage implementations. There are a number of other 'bugs' in the implementation of hugetlb. For example, the MAP_SHARED/MAP_PRIVATE flags are ignored; behaviour is identical in both cases. > 2. There is no error checking (to undo the committed_AS accounting) after > hugetlb_prefault(). hugetlb_prefault doesn't always succeed in allocating > all the pages the user asked for due to the disk quota limit. It can have > a partial allocation which would put committed_AS in a wedged state. True, this needs work on the interface to the quota system in hugetlbfs. We essentially need to check the quota before we attempt to fault any pages. I'll change it around and see how it looks. Expect new patches tomorrow ... -apw |
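The reshuffle being promised might look roughly like this inside hugetlb_prefault() (a sketch only: the two-pass structure and the helper name are invented, while find_get_page(), hugetlb_get_quota() and hugetlb_put_quota() are the real interfaces the patches above already use). Count the pages that will actually be allocated, take the quota for all of them before faulting anything, and the fault loop can no longer fail part way through on quota; i_sem is held across hugetlbfs_file_mmap(), which keeps the two passes from racing with another mapper.

    /* Sketch: pre-charge quota for every page prefault will allocate. */
    static int hugetlb_precharge_quota(struct address_space *mapping,
                                       struct vm_area_struct *vma)
    {
        unsigned long addr;
        long needed = 0, got;

        /* Pass 1: count pages not already instantiated in the file. */
        for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) {
            unsigned long idx = ((addr - vma->vm_start) >> HPAGE_SHIFT)
                + (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT));
            struct page *page = find_get_page(mapping, idx);

            if (page)
                page_cache_release(page);   /* already present */
            else
                needed++;
        }

        /* Pass 2: take the whole quota up front, undoing on failure. */
        for (got = 0; got < needed; got++) {
            if (hugetlb_get_quota(mapping)) {
                while (got--)
                    hugetlb_put_quota(mapping);
                return -ENOMEM;
            }
        }
        return 0;
    }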
From: Andy W. <ap...@sh...> - 2004-04-01 21:17:06
|
--On 31 March 2004 00:51 -0800 "Chen, Kenneth W" <ken...@in...> wrote: > Under common case, worked perfectly! But there are always corner cases. > > I can think of two ugliness: > 1. very sparse hugetlb file. I can mmap one hugetlb page, at offset > 512 GB. This would account 512GB + 1 hugetlb page as committed_AS. > But I only asked for one page mapping. One can say it's a feature, > but I think it's a bug. > > 2. There is no error checking (to undo the committed_AS accounting) after > hugetlb_prefault(). hugetlb_prefault doesn't always succeed in allocat- > ing all the pages user asked for due to disk quota limit. It can have > partial allocation which would put the committed_AS in a wedged state. O.k. Here is the latest version of the hugetlb commitment tracking patch (hugetlb_tracking_R4). This now understands the difference between shm allocated and mmap allocated and handles them differently. This should fix 1. We now handle the commitments correctly under quota failures. Please review. -apw --- arch/i386/mm/hugetlbpage.c | 30 +++++++++++++------ file | 1 fs/hugetlbfs/inode.c | 69 +++++++++++++++++++++++++++++++++++++++++++-- fs/proc/proc_misc.c | 1 include/linux/hugetlb.h | 5 +++ 5 files changed, 93 insertions(+), 13 deletions(-) diff -X /home/apw/lib/vdiff.excl -rupN reference/arch/i386/mm/hugetlbpage.c current/arch/i386/mm/hugetlbpage.c --- reference/arch/i386/mm/hugetlbpage.c 2004-04-01 13:37:14.000000000 +0100 +++ current/arch/i386/mm/hugetlbpage.c 2004-04-01 21:54:54.000000000 +0100 @@ -72,6 +72,7 @@ static struct page *alloc_hugetlb_page(v spin_unlock(&htlbpage_lock); return NULL; } +printk(KERN_WARNING "alloc_hugetlb_page: alloced %08lx\n", (unsigned long) page); htlbpagemem--; spin_unlock(&htlbpage_lock); set_page_count(page, 1); @@ -282,6 +283,7 @@ static void free_huge_page(struct page * INIT_LIST_HEAD(&page->list); +printk(KERN_WARNING "free_huge_page: returned %08lx\n", (unsigned long) page); spin_lock(&htlbpage_lock); enqueue_huge_page(page); htlbpagemem++; @@ -334,6 +336,7 @@ int hugetlb_prefault(struct address_spac struct mm_struct *mm = current->mm; unsigned long addr; int ret = 0; + struct page *page; BUG_ON(vma->vm_start & ~HPAGE_MASK); BUG_ON(vma->vm_end & ~HPAGE_MASK); @@ -342,7 +345,6 @@ int hugetlb_prefault(struct address_spac for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) { unsigned long idx; pte_t *pte = huge_pte_alloc(mm, addr); - struct page *page; if (!pte) { ret = -ENOMEM; @@ -355,30 +357,38 @@ int hugetlb_prefault(struct address_spac + (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT)); page = find_get_page(mapping, idx); if (!page) { - /* charge the fs quota first */ + /* charge against commitment */ + ret = hugetlb_charge_page(vma); + if (ret) + goto out; + /* charge the fs quota */ if (hugetlb_get_quota(mapping)) { ret = -ENOMEM; - goto out; + goto undo_charge; } page = alloc_hugetlb_page(); if (!page) { - hugetlb_put_quota(mapping); ret = -ENOMEM; - goto out; + goto undo_quota; } ret = add_to_page_cache(page, mapping, idx, GFP_ATOMIC); unlock_page(page); - if (ret) { - hugetlb_put_quota(mapping); - free_huge_page(page); - goto out; - } + if (ret) + goto undo_page; } set_huge_pte(mm, vma, page, pte, vma->vm_flags & VM_WRITE); } out: spin_unlock(&mm->page_table_lock); return ret; + +undo_page: + free_huge_page(page); +undo_quota: + hugetlb_put_quota(mapping); +undo_charge: + hugetlb_uncharge_page(vma); + goto out; } static void update_and_free_page(struct page *page) diff -X /home/apw/lib/vdiff.excl -rupN reference/file current/file --- 
reference/file 1970-01-01 01:00:00.000000000 +0100 +++ current/file 2004-04-01 13:37:14.000000000 +0100 @@ -0,0 +1 @@ +this is more text diff -X /home/apw/lib/vdiff.excl -rupN reference/fs/hugetlbfs/inode.c current/fs/hugetlbfs/inode.c --- reference/fs/hugetlbfs/inode.c 2004-03-25 02:43:00.000000000 +0000 +++ current/fs/hugetlbfs/inode.c 2004-04-01 22:41:07.000000000 +0100 @@ -32,6 +32,53 @@ /* some random number */ #define HUGETLBFS_MAGIC 0x958458f6 +#define HUGETLBFS_NOACCT (~0UL) + +atomic_t hugetlb_committed_space = ATOMIC_INIT(0); + +int hugetlb_acct_memory(long delta) +{ +printk(KERN_WARNING "hugetlb_acct_memory: delta<%ld>\n", delta); + atomic_add(delta, &hugetlb_committed_space); + if (delta > 0 && atomic_read(&hugetlb_committed_space) > + hugetlb_total_pages()) { + atomic_add(-delta, &hugetlb_committed_space); + return -ENOMEM; + } + return 0; +} +int hugetlb_charge_page(struct vm_area_struct *vma) +{ + int ret; + + /* if this file is marked for commit on demand then see if we can + * commmit a page, if so account for it against this file. */ + if (vma->vm_file->f_dentry->d_inode->i_blocks != ~0) { + ret = hugetlb_acct_memory(HPAGE_SIZE / PAGE_SIZE); + if (ret) + return ret; + vma->vm_file->f_dentry->d_inode->i_blocks++; + } + return 0; +} +int hugetlb_uncharge_page(struct vm_area_struct *vma) +{ + /* if this file is marked for commit on demand return a page. */ + if (vma->vm_file->f_dentry->d_inode->i_blocks != ~0) { + hugetlb_acct_memory(-(HPAGE_SIZE / PAGE_SIZE)); + vma->vm_file->f_dentry->d_inode->i_blocks--; + } + return 0; +} + +int hugetlbfs_report_meminfo(char *buf) +{ +#define K(x) ((x) << (PAGE_SHIFT - 10)) + long htlb = atomic_read(&hugetlb_committed_space); + return sprintf(buf, "HugeCommitted_AS: %5lu kB\n", K(htlb)); +#undef K +} + static struct super_operations hugetlbfs_ops; static struct address_space_operations hugetlbfs_aops; struct file_operations hugetlbfs_file_operations; @@ -62,11 +109,11 @@ static int hugetlbfs_file_mmap(struct fi vma_len = (loff_t)(vma->vm_end - vma->vm_start); down(&inode->i_sem); + len = vma_len + ((loff_t)vma->vm_pgoff << PAGE_SHIFT); file_accessed(file); vma->vm_flags |= VM_HUGETLB | VM_RESERVED; vma->vm_ops = &hugetlb_vm_ops; ret = hugetlb_prefault(mapping, vma); - len = vma_len + ((loff_t)vma->vm_pgoff << PAGE_SHIFT); if (ret == 0 && inode->i_size < len) inode->i_size = len; up(&inode->i_sem); @@ -200,6 +247,11 @@ static void hugetlbfs_delete_inode(struc if (inode->i_data.nrpages) truncate_hugepages(&inode->i_data, 0); + if (inode->i_blocks != HUGETLBFS_NOACCT) + hugetlb_acct_memory(-(inode->i_blocks * + (HPAGE_SIZE / PAGE_SIZE))); + else + hugetlb_acct_memory(-(inode->i_size / PAGE_SIZE)); security_inode_delete(inode); @@ -241,6 +293,11 @@ out_truncate: spin_unlock(&inode_lock); if (inode->i_data.nrpages) truncate_hugepages(&inode->i_data, 0); + if (inode->i_blocks != HUGETLBFS_NOACCT) + hugetlb_acct_memory(-(inode->i_blocks * + (HPAGE_SIZE / PAGE_SIZE))); + else + hugetlb_acct_memory(-(inode->i_size / PAGE_SIZE)); if (sbinfo->free_inodes >= 0) { spin_lock(&sbinfo->stat_lock); @@ -350,6 +407,10 @@ static int hugetlbfs_setattr(struct dent error = hugetlb_vmtruncate(inode, attr->ia_size); if (error) goto out; + /* We rely on the fact that the sizes are hugepage aligned, + * and that hugetlb_vmtruncate prevents extend. 
*/ + hugetlb_acct_memory((attr->ia_size - i_size_read(inode)) / + PAGE_SIZE); attr->ia_valid &= ~ATTR_SIZE; } error = inode_setattr(inode, attr); @@ -710,8 +771,9 @@ struct file *hugetlb_zero_setup(size_t s if (!capable(CAP_IPC_LOCK)) return ERR_PTR(-EPERM); - if (!is_hugepage_mem_enough(size)) - return ERR_PTR(-ENOMEM); + error = hugetlb_acct_memory(size / PAGE_SIZE); + if (error) + return ERR_PTR(error); root = hugetlbfs_vfsmount->mnt_root; snprintf(buf, 16, "%lu", hugetlbfs_counter()); @@ -736,6 +798,7 @@ struct file *hugetlb_zero_setup(size_t s d_instantiate(dentry, inode); inode->i_size = size; inode->i_nlink = 0; + inode->i_blocks = HUGETLBFS_NOACCT; file->f_vfsmnt = mntget(hugetlbfs_vfsmount); file->f_dentry = dentry; file->f_mapping = inode->i_mapping; diff -X /home/apw/lib/vdiff.excl -rupN reference/fs/proc/proc_misc.c current/fs/proc/proc_misc.c --- reference/fs/proc/proc_misc.c 2004-03-29 12:10:18.000000000 +0100 +++ current/fs/proc/proc_misc.c 2004-04-01 13:37:14.000000000 +0100 @@ -232,6 +232,7 @@ static int meminfo_read_proc(char *page, ); len += hugetlb_report_meminfo(page + len); + len += hugetlbfs_report_meminfo(page + len); return proc_calc_metrics(page, start, off, count, eof, len); #undef K diff -X /home/apw/lib/vdiff.excl -rupN reference/include/linux/hugetlb.h current/include/linux/hugetlb.h --- reference/include/linux/hugetlb.h 2004-03-29 12:10:22.000000000 +0100 +++ current/include/linux/hugetlb.h 2004-04-01 21:56:56.000000000 +0100 @@ -115,11 +115,16 @@ static inline void set_file_hugepages(st { file->f_op = &hugetlbfs_file_operations; } +int hugetlbfs_report_meminfo(char *); +int hugetlb_charge_page(struct vm_area_struct *vma); +int hugetlb_uncharge_page(struct vm_area_struct *vma); + #else /* !CONFIG_HUGETLBFS */ #define is_file_hugepages(file) 0 #define set_file_hugepages(file) BUG() #define hugetlb_zero_setup(size) ERR_PTR(-ENOSYS) +#define hugetlbfs_report_meminfo(buf) 0 #endif /* !CONFIG_HUGETLBFS */ |
From: Andy W. <ap...@sh...> - 2004-04-01 22:56:38
|
--On 01 April 2004 22:15 +0100 Andy Whitcroft <ap...@sh...> wrote: > O.k. Here is the latest version of the hugetlb commitment tracking patch > (hugetlb_tracking_R4). This now understands the difference between shm > allocated and mmap allocated and handles them differently. This should > fix 1. We now handle the commitments correctly under quota failures. Ok. Here is R5, including all of the architectures hooked to the new interface. Plus the spurious debug is gone. -apw --- arch/i386/mm/hugetlbpage.c | 28 +++++++++++------ arch/ia64/mm/hugetlbpage.c | 28 +++++++++++------ arch/ppc64/mm/hugetlbpage.c | 28 +++++++++++------ arch/sh/mm/hugetlbpage.c | 28 +++++++++++------ arch/sparc64/mm/hugetlbpage.c | 28 +++++++++++------ fs/hugetlbfs/inode.c | 66 ++++++++++++++++++++++++++++++++++++++++-- fs/proc/proc_misc.c | 1 include/linux/hugetlb.h | 5 +++ 8 files changed, 160 insertions(+), 52 deletions(-) diff -X /home/apw/lib/vdiff.excl -rupN reference/arch/i386/mm/hugetlbpage.c current/arch/i386/mm/hugetlbpage.c --- reference/arch/i386/mm/hugetlbpage.c 2004-04-02 00:38:24.000000000 +0100 +++ current/arch/i386/mm/hugetlbpage.c 2004-04-01 22:58:48.000000000 +0100 @@ -334,6 +334,7 @@ int hugetlb_prefault(struct address_spac struct mm_struct *mm = current->mm; unsigned long addr; int ret = 0; + struct page *page; BUG_ON(vma->vm_start & ~HPAGE_MASK); BUG_ON(vma->vm_end & ~HPAGE_MASK); @@ -342,7 +343,6 @@ int hugetlb_prefault(struct address_spac for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) { unsigned long idx; pte_t *pte = huge_pte_alloc(mm, addr); - struct page *page; if (!pte) { ret = -ENOMEM; @@ -355,30 +355,38 @@ int hugetlb_prefault(struct address_spac + (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT)); page = find_get_page(mapping, idx); if (!page) { - /* charge the fs quota first */ + /* charge against commitment */ + ret = hugetlb_charge_page(vma); + if (ret) + goto out; + /* charge the fs quota */ if (hugetlb_get_quota(mapping)) { ret = -ENOMEM; - goto out; + goto undo_charge; } page = alloc_hugetlb_page(); if (!page) { - hugetlb_put_quota(mapping); ret = -ENOMEM; - goto out; + goto undo_quota; } ret = add_to_page_cache(page, mapping, idx, GFP_ATOMIC); unlock_page(page); - if (ret) { - hugetlb_put_quota(mapping); - free_huge_page(page); - goto out; - } + if (ret) + goto undo_page; } set_huge_pte(mm, vma, page, pte, vma->vm_flags & VM_WRITE); } out: spin_unlock(&mm->page_table_lock); return ret; + +undo_page: + free_huge_page(page); +undo_quota: + hugetlb_put_quota(mapping); +undo_charge: + hugetlb_uncharge_page(vma); + goto out; } static void update_and_free_page(struct page *page) diff -X /home/apw/lib/vdiff.excl -rupN reference/arch/ia64/mm/hugetlbpage.c current/arch/ia64/mm/hugetlbpage.c --- reference/arch/ia64/mm/hugetlbpage.c 2004-04-02 00:38:24.000000000 +0100 +++ current/arch/ia64/mm/hugetlbpage.c 2004-04-02 00:39:22.000000000 +0100 @@ -352,6 +352,7 @@ int hugetlb_prefault(struct address_spac struct mm_struct *mm = current->mm; unsigned long addr; int ret = 0; + struct page *page; BUG_ON(vma->vm_start & ~HPAGE_MASK); BUG_ON(vma->vm_end & ~HPAGE_MASK); @@ -360,7 +361,6 @@ int hugetlb_prefault(struct address_spac for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) { unsigned long idx; pte_t *pte = huge_pte_alloc(mm, addr); - struct page *page; if (!pte) { ret = -ENOMEM; @@ -373,30 +373,38 @@ int hugetlb_prefault(struct address_spac + (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT)); page = find_get_page(mapping, idx); if (!page) { - /* charge the fs quota 
first */ + /* charge against commitment */ + ret = hugetlb_charge_page(vma); + if (ret) + goto out; + /* charge the fs quota */ if (hugetlb_get_quota(mapping)) { ret = -ENOMEM; - goto out; + goto undo_charge; } page = alloc_hugetlb_page(); if (!page) { - hugetlb_put_quota(mapping); ret = -ENOMEM; - goto out; + goto undo_quota; } ret = add_to_page_cache(page, mapping, idx, GFP_ATOMIC); unlock_page(page); - if (ret) { - hugetlb_put_quota(mapping); - free_huge_page(page); - goto out; - } + if (ret) + goto undo_page; } set_huge_pte(mm, vma, page, pte, vma->vm_flags & VM_WRITE); } out: spin_unlock(&mm->page_table_lock); return ret; + +undo_page: + free_huge_page(page); +undo_quota: + hugetlb_put_quota(mapping); +undo_charge: + hugetlb_uncharge_page(vma); + goto out; } unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr, unsigned long len, diff -X /home/apw/lib/vdiff.excl -rupN reference/arch/ppc64/mm/hugetlbpage.c current/arch/ppc64/mm/hugetlbpage.c --- reference/arch/ppc64/mm/hugetlbpage.c 2004-04-02 00:38:24.000000000 +0100 +++ current/arch/ppc64/mm/hugetlbpage.c 2004-04-02 00:45:10.000000000 +0100 @@ -482,6 +482,7 @@ int hugetlb_prefault(struct address_spac struct mm_struct *mm = current->mm; unsigned long addr; int ret = 0; + struct page *page; WARN_ON(!is_vm_hugetlb_page(vma)); BUG_ON((vma->vm_start % HPAGE_SIZE) != 0); @@ -491,7 +492,6 @@ int hugetlb_prefault(struct address_spac for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) { unsigned long idx; hugepte_t *pte = hugepte_alloc(mm, addr); - struct page *page; BUG_ON(!in_hugepage_area(mm->context, addr)); @@ -506,30 +506,38 @@ int hugetlb_prefault(struct address_spac + (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT)); page = find_get_page(mapping, idx); if (!page) { - /* charge the fs quota first */ + /* charge against commitment */ + ret = hugetlb_charge_page(vma); + if (ret) + goto out; + /* charge the fs quota */ if (hugetlb_get_quota(mapping)) { ret = -ENOMEM; - goto out; + goto undo_charge; } page = alloc_hugetlb_page(); if (!page) { - hugetlb_put_quota(mapping); ret = -ENOMEM; - goto out; + goto undo_quota; } ret = add_to_page_cache(page, mapping, idx, GFP_ATOMIC); unlock_page(page); - if (ret) { - hugetlb_put_quota(mapping); - free_huge_page(page); - goto out; - } + if (ret) + goto undo_page; } setup_huge_pte(mm, page, pte, vma->vm_flags & VM_WRITE); } out: spin_unlock(&mm->page_table_lock); return ret; + +undo_page: + free_huge_page(page); +undo_quota: + hugetlb_put_quota(mapping); +undo_charge: + hugetlb_uncharge_page(vma); + goto out; } /* Because we have an exclusive hugepage region which lies within the diff -X /home/apw/lib/vdiff.excl -rupN reference/arch/sh/mm/hugetlbpage.c current/arch/sh/mm/hugetlbpage.c --- reference/arch/sh/mm/hugetlbpage.c 2004-04-02 00:36:59.000000000 +0100 +++ current/arch/sh/mm/hugetlbpage.c 2004-04-02 00:39:45.000000000 +0100 @@ -313,6 +313,7 @@ int hugetlb_prefault(struct address_spac struct mm_struct *mm = current->mm; unsigned long addr; int ret = 0; + struct page *page; BUG_ON(vma->vm_start & ~HPAGE_MASK); BUG_ON(vma->vm_end & ~HPAGE_MASK); @@ -321,7 +322,6 @@ int hugetlb_prefault(struct address_spac for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) { unsigned long idx; pte_t *pte = huge_pte_alloc(mm, addr); - struct page *page; if (!pte) { ret = -ENOMEM; @@ -334,30 +334,38 @@ int hugetlb_prefault(struct address_spac + (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT)); page = find_get_page(mapping, idx); if (!page) { - /* charge the fs quota 
first */ + /* charge against commitment */ + ret = hugetlb_charge_page(vma); + if (ret) + goto out; + /* charge the fs quota */ if (hugetlb_get_quota(mapping)) { ret = -ENOMEM; - goto out; + goto undo_charge; } page = alloc_hugetlb_page(); if (!page) { - hugetlb_put_quota(mapping); ret = -ENOMEM; - goto out; + goto undo_quota; } ret = add_to_page_cache(page, mapping, idx, GFP_ATOMIC); unlock_page(page); - if (ret) { - hugetlb_put_quota(mapping); - free_huge_page(page); - goto out; - } + if (ret) + goto undo_page; } set_huge_pte(mm, vma, page, pte, vma->vm_flags & VM_WRITE); } out: spin_unlock(&mm->page_table_lock); return ret; + +undo_page: + free_huge_page(page); +undo_quota: + hugetlb_put_quota(mapping); +undo_charge: + hugetlb_uncharge_page(vma); + goto out; } static void update_and_free_page(struct page *page) diff -X /home/apw/lib/vdiff.excl -rupN reference/arch/sparc64/mm/hugetlbpage.c current/arch/sparc64/mm/hugetlbpage.c --- reference/arch/sparc64/mm/hugetlbpage.c 2004-04-02 00:38:24.000000000 +0100 +++ current/arch/sparc64/mm/hugetlbpage.c 2004-04-02 00:39:56.000000000 +0100 @@ -309,6 +309,7 @@ int hugetlb_prefault(struct address_spac struct mm_struct *mm = current->mm; unsigned long addr; int ret = 0; + struct page *page; BUG_ON(vma->vm_start & ~HPAGE_MASK); BUG_ON(vma->vm_end & ~HPAGE_MASK); @@ -317,7 +318,6 @@ int hugetlb_prefault(struct address_spac for (addr = vma->vm_start; addr < vma->vm_end; addr += HPAGE_SIZE) { unsigned long idx; pte_t *pte = huge_pte_alloc(mm, addr); - struct page *page; if (!pte) { ret = -ENOMEM; @@ -330,30 +330,38 @@ int hugetlb_prefault(struct address_spac + (vma->vm_pgoff >> (HPAGE_SHIFT - PAGE_SHIFT)); page = find_get_page(mapping, idx); if (!page) { - /* charge the fs quota first */ + /* charge against commitment */ + ret = hugetlb_charge_page(vma); + if (ret) + goto out; + /* charge the fs quota */ if (hugetlb_get_quota(mapping)) { ret = -ENOMEM; - goto out; + goto undo_charge; } page = alloc_hugetlb_page(); if (!page) { - hugetlb_put_quota(mapping); ret = -ENOMEM; - goto out; + goto undo_quota; } ret = add_to_page_cache(page, mapping, idx, GFP_ATOMIC); unlock_page(page); - if (ret) { - hugetlb_put_quota(mapping); - free_huge_page(page); - goto out; - } + if (ret) + goto undo_page; } set_huge_pte(mm, vma, page, pte, vma->vm_flags & VM_WRITE); } out: spin_unlock(&mm->page_table_lock); return ret; + +undo_page: + free_huge_page(page); +undo_quota: + hugetlb_put_quota(mapping); +undo_charge: + hugetlb_uncharge_page(vma); + goto out; } static void update_and_free_page(struct page *page) diff -X /home/apw/lib/vdiff.excl -rupN reference/fs/hugetlbfs/inode.c current/fs/hugetlbfs/inode.c --- reference/fs/hugetlbfs/inode.c 2004-03-25 02:43:00.000000000 +0000 +++ current/fs/hugetlbfs/inode.c 2004-04-01 23:07:02.000000000 +0100 @@ -32,6 +32,52 @@ /* some random number */ #define HUGETLBFS_MAGIC 0x958458f6 +#define HUGETLBFS_NOACCT (~0UL) + +atomic_t hugetlb_committed_space = ATOMIC_INIT(0); + +int hugetlb_acct_memory(long delta) +{ + atomic_add(delta, &hugetlb_committed_space); + if (delta > 0 && atomic_read(&hugetlb_committed_space) > + hugetlb_total_pages()) { + atomic_add(-delta, &hugetlb_committed_space); + return -ENOMEM; + } + return 0; +} +int hugetlb_charge_page(struct vm_area_struct *vma) +{ + int ret; + + /* if this file is marked for commit on demand then see if we can + * commmit a page, if so account for it against this file. 
*/ + if (vma->vm_file->f_dentry->d_inode->i_blocks != ~0) { + ret = hugetlb_acct_memory(HPAGE_SIZE / PAGE_SIZE); + if (ret) + return ret; + vma->vm_file->f_dentry->d_inode->i_blocks++; + } + return 0; +} +int hugetlb_uncharge_page(struct vm_area_struct *vma) +{ + /* if this file is marked for commit on demand return a page. */ + if (vma->vm_file->f_dentry->d_inode->i_blocks != ~0) { + hugetlb_acct_memory(-(HPAGE_SIZE / PAGE_SIZE)); + vma->vm_file->f_dentry->d_inode->i_blocks--; + } + return 0; +} + +int hugetlbfs_report_meminfo(char *buf) +{ +#define K(x) ((x) << (PAGE_SHIFT - 10)) + long htlb = atomic_read(&hugetlb_committed_space); + return sprintf(buf, "HugeCommitted_AS: %5lu kB\n", K(htlb)); +#undef K +} + static struct super_operations hugetlbfs_ops; static struct address_space_operations hugetlbfs_aops; struct file_operations hugetlbfs_file_operations; @@ -200,6 +246,11 @@ static void hugetlbfs_delete_inode(struc if (inode->i_data.nrpages) truncate_hugepages(&inode->i_data, 0); + if (inode->i_blocks != HUGETLBFS_NOACCT) + hugetlb_acct_memory(-(inode->i_blocks * + (HPAGE_SIZE / PAGE_SIZE))); + else + hugetlb_acct_memory(-(inode->i_size / PAGE_SIZE)); security_inode_delete(inode); @@ -241,6 +292,11 @@ out_truncate: spin_unlock(&inode_lock); if (inode->i_data.nrpages) truncate_hugepages(&inode->i_data, 0); + if (inode->i_blocks != HUGETLBFS_NOACCT) + hugetlb_acct_memory(-(inode->i_blocks * + (HPAGE_SIZE / PAGE_SIZE))); + else + hugetlb_acct_memory(-(inode->i_size / PAGE_SIZE)); if (sbinfo->free_inodes >= 0) { spin_lock(&sbinfo->stat_lock); @@ -350,6 +406,10 @@ static int hugetlbfs_setattr(struct dent error = hugetlb_vmtruncate(inode, attr->ia_size); if (error) goto out; + /* We rely on the fact that the sizes are hugepage aligned, + * and that hugetlb_vmtruncate prevents extend. 
*/ + hugetlb_acct_memory((attr->ia_size - i_size_read(inode)) / + PAGE_SIZE); attr->ia_valid &= ~ATTR_SIZE; } error = inode_setattr(inode, attr); @@ -710,8 +770,9 @@ struct file *hugetlb_zero_setup(size_t s if (!capable(CAP_IPC_LOCK)) return ERR_PTR(-EPERM); - if (!is_hugepage_mem_enough(size)) - return ERR_PTR(-ENOMEM); + error = hugetlb_acct_memory(size / PAGE_SIZE); + if (error) + return ERR_PTR(error); root = hugetlbfs_vfsmount->mnt_root; snprintf(buf, 16, "%lu", hugetlbfs_counter()); @@ -736,6 +797,7 @@ struct file *hugetlb_zero_setup(size_t s d_instantiate(dentry, inode); inode->i_size = size; inode->i_nlink = 0; + inode->i_blocks = HUGETLBFS_NOACCT; file->f_vfsmnt = mntget(hugetlbfs_vfsmount); file->f_dentry = dentry; file->f_mapping = inode->i_mapping; diff -X /home/apw/lib/vdiff.excl -rupN reference/fs/proc/proc_misc.c current/fs/proc/proc_misc.c --- reference/fs/proc/proc_misc.c 2004-04-02 00:37:04.000000000 +0100 +++ current/fs/proc/proc_misc.c 2004-04-01 22:51:19.000000000 +0100 @@ -232,6 +232,7 @@ static int meminfo_read_proc(char *page, ); len += hugetlb_report_meminfo(page + len); + len += hugetlbfs_report_meminfo(page + len); return proc_calc_metrics(page, start, off, count, eof, len); #undef K diff -X /home/apw/lib/vdiff.excl -rupN reference/include/linux/hugetlb.h current/include/linux/hugetlb.h --- reference/include/linux/hugetlb.h 2004-04-02 00:38:24.000000000 +0100 +++ current/include/linux/hugetlb.h 2004-04-01 22:51:19.000000000 +0100 @@ -115,11 +115,16 @@ static inline void set_file_hugepages(st { file->f_op = &hugetlbfs_file_operations; } +int hugetlbfs_report_meminfo(char *); +int hugetlb_charge_page(struct vm_area_struct *vma); +int hugetlb_uncharge_page(struct vm_area_struct *vma); + #else /* !CONFIG_HUGETLBFS */ #define is_file_hugepages(file) 0 #define set_file_hugepages(file) BUG() #define hugetlb_zero_setup(size) ERR_PTR(-ENOSYS) +#define hugetlbfs_report_meminfo(buf) 0 #endif /* !CONFIG_HUGETLBFS */ |
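Finally, a small illustrative check that the new accounting is visible once R5 is applied: the Huge* lines in /proc/meminfo, including the HugeCommitted_AS line added by hugetlbfs_report_meminfo() above, can be watched while segments are created and torn down.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[256];
        FILE *f = fopen("/proc/meminfo", "r");

        if (!f) {
            perror("/proc/meminfo");
            return 1;
        }
        /* Prints HugePages_Total, HugePages_Free, Hugepagesize and,
         * with the patch applied, HugeCommitted_AS. */
        while (fgets(line, sizeof(line), f))
            if (strncmp(line, "Huge", 4) == 0)
                fputs(line, stdout);
        fclose(f);
        return 0;
    }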