[lc-checkins] CVS: linux/mm oom_kill.c,1.7,1.8 memory.c,1.16,1.17 mmap.c,1.5,1.6
From: Rodrigo S. de C. <rc...@us...> - 2002-01-14 12:05:11
Update of /cvsroot/linuxcompressed/linux/mm
In directory usw-pr-cvs1:/tmp/cvs-serv13325/mm

Modified Files:
	memory.c mmap.c
Added Files:
	oom_kill.c
Log Message:
This batch of changes still includes lots of cleanups and code rewrites to make things simpler. A performance increase has been noticed too.

- number_of_pages in comp_cache_t removed. We can check whether there are any fragments via the fragments list.

- vswap: no semaphore is needed. I have no idea why the functions {lock,unlock}_vswap were once added; I can't see why they are needed, so they were removed. The same goes for the real_entry field in struct vswap_address.

- vswap: a new function has been added, namely add_fragment_vswap(), analogous to remove_fragment_vswap(). It's called from get_comp_cache_page() and helps keep things modular.

- vm_enough_memory(): we now take compressed cache space into account when allowing an application to allocate memory. That is done by calling a function named comp_cache_free_space(), which returns, based on estimated_free_space, the number of pages that can still be compressed.

- move_and_fix_fragments() deleted. comp_cache_free() has a new policy: instead of moving data to and fro all the time like before, we free the fragment but leave it in place, waiting to be merged with the free space. It's pretty simple; check the code. The new code has two new functions: merge_right_neighbour() and merge_left_neighbour().

- the fragments list is kept sorted by the offset field, so when freeing we don't have to search for the next and previous fragments every time. Since most of the time it's just a plain list_add_tail() in get_comp_cache_page(), that makes the code simpler and nicer.

- lookup_comp_cache() was partially rewritten, mainly because we no longer sleep to get a lock on the comp_page.

- find_and_lock_comp_page() removed and find_nolock_comp_page() renamed to find_comp_page(). All functions that previously called find_and_lock... 
now call find_comp_page() and lock the comp_page at once with TryLockPage().

- oom_kill() was fixed and takes into account the free space in compressed cache by calling comp_cache_available_space(). That avoids killing an application if we still have space left in the compressed cache.

Index: memory.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/memory.c,v
retrieving revision 1.16
retrieving revision 1.17
diff -C2 -r1.16 -r1.17
*** memory.c	2001/12/21 18:33:11	1.16
--- memory.c	2002/01/14 12:05:08	1.17
***************
*** 1081,1087 ****
  	for (i = 0; i < num; offset++, i++) {
  		/* Ok, do the async read-ahead now */
- 		lock_vswap(SWP_ENTRY(SWP_TYPE(entry), offset));
  		new_page = read_swap_cache_async(SWP_ENTRY(SWP_TYPE(entry), offset));
- 		unlock_vswap(SWP_ENTRY(SWP_TYPE(entry), offset));
  		if (!new_page)
--- 1081,1085 ----
***************
*** 1106,1116 ****
  	spin_unlock(&mm->page_table_lock);
  
- 	lock_vswap(entry);
- 
- 	if (!pte_same(*page_table, orig_pte)) {
- 		unlock_vswap(entry);
- 		return 1;
- 	}
- 
  	page = lookup_comp_cache(entry);
--- 1104,1107 ----
***************
*** 1147,1151 ****
  		unlock_page(page);
  		page_cache_release(page);
- 		unlock_vswap(entry);
  		return 1;
  	}
--- 1138,1141 ----
***************
*** 1155,1159 ****
  	remove_pte_vswap(page_table);
- 	unlock_vswap(entry);
  
  	/* The page isn't present yet, go ahead with the fault. */
--- 1145,1148 ----
Index: mmap.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/mmap.c,v
retrieving revision 1.5
retrieving revision 1.6
diff -C2 -r1.5 -r1.6
*** mmap.c	2001/12/13 19:12:58	1.5
--- mmap.c	2002/01/14 12:05:08	1.6
***************
*** 82,85 ****
--- 82,88 ----
  	free += swapper_space.nrpages;
  
+ 	/* Let's count the free space left in compressed cache */
+ 	free += comp_cache_free_space();
+ 
  	/*
  	 * The code below doesn't account for free space in the inode