linuxcompressed-checkins Mailing List for Linux Compressed Cache (Page 5)
Status: Beta
Brought to you by:
nitin_sf
Archived messages per month:

2001 | Oct (2), Dec (31)
2002 | Jan (28), Feb (50), Mar (29), Apr (6), May (33), Jun (36), Jul (60), Aug (7), Sep (12), Nov (13), Dec (3)
2003 | May (9)
2006 | Jan (13), Feb (4), Mar (4), Apr (1), Jun (22)
From: Rodrigo S. de C. <rc...@us...> - 2002-07-28 15:47:07
|
Update of /cvsroot/linuxcompressed/linux/fs/proc
In directory usw-pr-cvs1:/tmp/cvs-serv26313/fs/proc

Modified Files: proc_misc.c

Log Message:

Features

o First page cache support for preempted kernels is implemented.
o Fragments now have a "count" field that stores the number of references to the fragment, so we need not worry about it being freed in the middle of an operation. This closes a likely source of bugs.

Bug fixes

o Fixed memory accounting for double page sizes. Meminfo was broken for 8K pages.
o truncate_list_comp_pages() could try to truncate fragments that were on the locked_comp_pages list, which is bogus. Only swap buffers are on that list, and they are listed there only for wait_comp_pages().
o When writing out fragments, we ignored the return value, so we could free a fragment (when refilling the swap buffer) even if writepage() had failed. In particular, ramfs, ramdisk and other memory file systems always fail to write out their pages. Now we check whether the swap buffer page has been set dirty (writepage() usually does that after failing to write a page) and, if so, move the fragment back to the dirty list instead of freeing it.
o Fixed a bug that could corrupt the swap buffer list. The variable holding the error code could report failure even when a fragment was found after all, so the caller would back out the writeout operation, leaving the swap buffer locked on the used list, never to be unlocked.
o Writeout statistics are now accounted only for pages actually submitted to IO.
o Fixed a bug that would deadlock a system running comp_cache with page cache support. lookup_comp_pages() may be called from the path __sync_one() -> filemap_fdatasync(), which syncs an inode and keeps it locked while syncing.
However, that same inode can also be in the clear path (the clear_inode() function, called on process exit), which locks the super block and then waits on the inode if it is locked (as it is while syncing). Since the allocation path may write pages, which may need to lock that same super block, the system deadlocks: the super block is held by the exit path just described, so we can neither allocate the page (to finish this function and unlock the inode) nor see the super block released, because the inode never gets unlocked either. The fix is to allocate pages with the GFP_NOFS mask.

Cleanups

o Some functions were renamed.
o Compression algorithms: removed unnecessary allocated data structures, made some structures statically allocated within the algorithms, and converted some statically allocated data to kmalloc().
o Removed /proc/sys/vm/comp_cache/actual_size; it makes no sense with resizing on demand.

Others

o Compressed cache now resizes only on demand.

Index: proc_misc.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/fs/proc/proc_misc.c,v
retrieving revision 1.6
retrieving revision 1.7
diff -C2 -r1.6 -r1.7
*** proc_misc.c 11 Jul 2002 19:08:10 -0000 1.6
--- proc_misc.c 28 Jul 2002 15:47:04 -0000 1.7
***************
*** 183,187 ****
  #ifdef CONFIG_COMP_CACHE
  K(pg_size + num_swapper_fragments - swapper_space.nrpages),
! K(num_comp_pages),
  comp_cache_used_space/1024,
  K(swapper_space.nrpages - num_swapper_fragments),
--- 183,187 ----
  #ifdef CONFIG_COMP_CACHE
  K(pg_size + num_swapper_fragments - swapper_space.nrpages),
! K(num_comp_pages << comp_page_order),
  comp_cache_used_space/1024,
  K(swapper_space.nrpages - num_swapper_fragments),
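The fragment "count" field mentioned under Features is an ordinary reference count: an operation takes a reference before touching a fragment and drops it when done, and the fragment is only released when the count reaches zero. A minimal userspace sketch of that discipline (hypothetical `fragment` type and helper names, not the actual comp_cache structures):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-in for struct comp_cache_fragment. */
struct fragment {
    int count;   /* references currently held on this fragment */
    int freed;   /* set when the fragment is really released */
};

static struct fragment *fragment_alloc(void)
{
    struct fragment *f = calloc(1, sizeof(*f));
    f->count = 1;   /* the creator holds the first reference */
    return f;
}

static void fragment_get(struct fragment *f)
{
    f->count++;
}

/* Drop one reference; returns 1 if this put released the fragment.
 * In the kernel this would call kmem_cache_free() instead of
 * setting a flag. */
static int fragment_put(struct fragment *f)
{
    assert(f->count > 0);
    if (--f->count == 0) {
        f->freed = 1;
        return 1;
    }
    return 0;
}
```

While an operation holds its own reference, a concurrent free request merely drops the count to a nonzero value, so the fragment cannot vanish mid-operation, which is exactly the class of bug the log message describes.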
From: Rodrigo S. de C. <rc...@us...> - 2002-07-18 21:31:12
|
Update of /cvsroot/linuxcompressed/linux/include/linux
In directory usw-pr-cvs1:/tmp/cvs-serv20667/include/linux

Modified Files: comp_cache.h

Log Message:

Feature

o Make resizing (manual, not on demand) work with a preempted kernel. This is a first, very crude implementation. So far, swap cache support and manual resizing work in the tests that have been run.

Cleanups

o Cleanups in the virtual_swap_free() (now __virtual_swap_free()) function.

Index: comp_cache.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v
retrieving revision 1.95
retrieving revision 1.96
diff -C2 -r1.95 -r1.96
*** comp_cache.h 18 Jul 2002 13:32:50 -0000 1.95
--- comp_cache.h 18 Jul 2002 21:31:08 -0000 1.96
***************
*** 2,6 ****
  * linux/mm/comp_cache.h
  *
! * Time-stamp: <2002-07-18 09:47:44 rcastro>
  *
  * Linux Virtual Memory Compressed Cache
--- 2,6 ----
  * linux/mm/comp_cache.h
  *
! * Time-stamp: <2002-07-18 15:45:44 rcastro>
  *
  * Linux Virtual Memory Compressed Cache
***************
*** 324,327 ****
--- 324,328 ----
  int read_comp_cache(struct address_space *, unsigned long, struct page *);
+ int __invalidate_comp_cache(struct address_space *, unsigned long);
  int invalidate_comp_cache(struct address_space *, unsigned long);
  void invalidate_comp_pages(struct address_space *);
***************
*** 406,409 ****
--- 407,412 ----
  #define VSWAP_ALLOCATING ((struct page *) 0xffffffff)
+ extern spinlock_t virtual_swap_list;
+
  #ifdef CONFIG_COMP_CACHE
  #define vswap_info_struct(p) (p == &swap_info[COMP_CACHE_SWP_TYPE])
***************
*** 412,415 ****
--- 415,419 ----
  int virtual_swap_duplicate(swp_entry_t);
+ int __virtual_swap_free(unsigned long);
  int virtual_swap_free(unsigned long);
  swp_entry_t get_virtual_swap_page(void);
***************
*** 427,431 ****
  int vswap_alloc_and_init(struct vswap_address **, unsigned long);
- extern spinlock_t virtual_swap_list;
  #else
--- 431,434 ----
***************
*** 451,455 ****
  /* free.c */
  void comp_cache_free_locked(struct comp_cache_fragment *);
! inline void comp_cache_free(struct comp_cache_fragment *);
  #ifdef CONFIG_COMP_CACHE
--- 454,458 ----
  /* free.c */
  void comp_cache_free_locked(struct comp_cache_fragment *);
! void comp_cache_free(struct comp_cache_fragment *);
  #ifdef CONFIG_COMP_CACHE
From: Rodrigo S. de C. <rc...@us...> - 2002-07-18 21:31:11
|
Update of /cvsroot/linuxcompressed/linux/mm
In directory usw-pr-cvs1:/tmp/cvs-serv20667/mm

Modified Files: swapfile.c

Log Message:

Feature

o Make resizing (manual, not on demand) work with a preempted kernel. This is a first, very crude implementation. So far, swap cache support and manual resizing work in the tests that have been run.

Cleanups

o Cleanups in the virtual_swap_free() (now __virtual_swap_free()) function.

Index: swapfile.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/swapfile.c,v
retrieving revision 1.33
retrieving revision 1.34
diff -C2 -r1.33 -r1.34
*** swapfile.c 17 Jul 2002 20:44:36 -0000 1.33
--- swapfile.c 18 Jul 2002 21:31:08 -0000 1.34
***************
*** 201,210 ****
  {
  if (vswap_info_struct(p))
! goto virtual_swap;
  swap_device_unlock(p);
  swap_list_unlock();
- return;
- virtual_swap:
- spin_unlock(&virtual_swap_list);
  }
--- 201,207 ----
  {
  if (vswap_info_struct(p))
! return;
  swap_device_unlock(p);
  swap_list_unlock();
  }
***************
*** 268,271 ****
--- 265,269 ----
  if (vswap_address[SWP_OFFSET(entry)]->swap_count == 1)
  exclusive = 1;
+ spin_unlock(&virtual_swap_list);
  goto check_exclusive;
  }
***************
*** 344,347 ****
--- 342,346 ----
  if (vswap_address[SWP_OFFSET(entry)]->swap_count == 1)
  exclusive = 1;
+ spin_unlock(&virtual_swap_list);
  goto check_exclusive;
  }
From: Rodrigo S. de C. <rc...@us...> - 2002-07-18 21:31:11
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv20667/mm/comp_cache Modified Files: adaptivity.c aux.c free.c main.c swapin.c vswap.c Log Message: Feature o Make resizing (manual, not on demand) work with a preempted kernel. First and very crude implementation. So far, swap cache support and manual resizing are working in the tests that have been run. Cleanups o Cleanups in virtual_swap_free() (now __virtual_swap_free()) function. Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.36 retrieving revision 1.37 diff -C2 -r1.36 -r1.37 *** adaptivity.c 16 Jul 2002 18:41:55 -0000 1.36 --- adaptivity.c 18 Jul 2002 21:31:08 -0000 1.37 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-07-16 14:03:17 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-07-18 15:44:59 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 116,135 **** struct comp_cache_fragment * fragment; struct vswap_address ** new_vswap_address; - unsigned int failed_alloc = 0; unsigned long index, new_index, vswap_new_num_entries = NUM_VSWAP_ENTRIES; swp_entry_t old_entry, entry; if (!vswap_address) return; ! if (vswap_current_num_entries <= 1.10 * NUM_VSWAP_ENTRIES) return; /* more used entries than the new size? can't shrink */ if (vswap_num_used_entries >= NUM_VSWAP_ENTRIES) ! return; ! ! if (down_trylock(&vswap_resize_semaphore)) ! return; #if 0 --- 116,137 ---- struct comp_cache_fragment * fragment; struct vswap_address ** new_vswap_address; unsigned long index, new_index, vswap_new_num_entries = NUM_VSWAP_ENTRIES; swp_entry_t old_entry, entry; + int failed_alloc = 0, ret; if (!vswap_address) return; ! 
if (down_trylock(&vswap_resize_semaphore)) return; + spin_lock(&virtual_swap_list); + + if (vswap_current_num_entries <= 1.10 * NUM_VSWAP_ENTRIES) + goto out_unlock; + /* more used entries than the new size? can't shrink */ if (vswap_num_used_entries >= NUM_VSWAP_ENTRIES) ! goto out_unlock; #if 0 *************** *** 214,218 **** /* let's fix the ptes */ ! if (!set_pte_list_to_entry(vswap_address[index]->pte_list, old_entry, entry)) goto backout; --- 216,224 ---- /* let's fix the ptes */ ! spin_unlock(&virtual_swap_list); ! ret = set_pte_list_to_entry(vswap_address[index]->pte_list, old_entry, entry); ! spin_lock(&virtual_swap_list); ! ! if (!ret) goto backout; *************** *** 271,282 **** if (vswap_last_used >= vswap_new_num_entries) ! goto out; allocate_new_vswap: new_vswap_address = (struct vswap_address **) vmalloc(vswap_new_num_entries * sizeof(struct vswap_address*)); if (!new_vswap_address) { vswap_failed_alloc = 1; ! goto out; } --- 277,290 ---- if (vswap_last_used >= vswap_new_num_entries) ! goto out_unlock; allocate_new_vswap: + spin_unlock(&virtual_swap_list); new_vswap_address = (struct vswap_address **) vmalloc(vswap_new_num_entries * sizeof(struct vswap_address*)); + spin_lock(&virtual_swap_list); if (!new_vswap_address) { vswap_failed_alloc = 1; ! goto out_unlock; } *************** *** 326,331 **** vswap_last_used = vswap_new_num_entries - 1; vswap_failed_alloc = 0; ! out: ! up(&vswap_resize_semaphore); } --- 334,340 ---- vswap_last_used = vswap_new_num_entries - 1; vswap_failed_alloc = 0; ! out_unlock: ! spin_unlock(&virtual_swap_list); ! up(&vswap_resize_semaphore); } *************** *** 348,359 **** return; /* using vswap_last_used instead of vswap_current_num_entries * forces us to grow the cache even if we started shrinking * it, but one set comp cache to the original size */ if (vswap_last_used >= 0.90 * (NUM_VSWAP_ENTRIES - 1)) ! return; ! ! if (down_trylock(&vswap_resize_semaphore)) ! 
return; #if 0 --- 357,370 ---- return; + if (down_trylock(&vswap_resize_semaphore)) + return; + + spin_lock(&virtual_swap_list); + /* using vswap_last_used instead of vswap_current_num_entries * forces us to grow the cache even if we started shrinking * it, but one set comp cache to the original size */ if (vswap_last_used >= 0.90 * (NUM_VSWAP_ENTRIES - 1)) ! goto out_unlock; #if 0 *************** *** 366,375 **** if (vswap_current_num_entries == vswap_new_num_entries) goto fix_old_vswap; - - new_vswap_address = (struct vswap_address **) vmalloc(vswap_new_num_entries * sizeof(struct vswap_address*)); if (!new_vswap_address) { vswap_failed_alloc = 1; ! goto out; } --- 377,388 ---- if (vswap_current_num_entries == vswap_new_num_entries) goto fix_old_vswap; + spin_unlock(&virtual_swap_list); + new_vswap_address = (struct vswap_address **) vmalloc(vswap_new_num_entries * sizeof(struct vswap_address*)); + spin_lock(&virtual_swap_list); + if (!new_vswap_address) { vswap_failed_alloc = 1; ! goto out_unlock; } *************** *** 415,419 **** vswap_last_used = vswap_new_num_entries - 1; vswap_failed_alloc = 0; ! goto out; fix_old_vswap: --- 428,432 ---- vswap_last_used = vswap_new_num_entries - 1; vswap_failed_alloc = 0; ! goto out_unlock; fix_old_vswap: *************** *** 434,438 **** last_vswap_allocated = vswap_new_num_entries - 1; vswap_last_used = vswap_current_num_entries - 1; ! out: up(&vswap_resize_semaphore); } --- 447,452 ---- last_vswap_allocated = vswap_new_num_entries - 1; vswap_last_used = vswap_current_num_entries - 1; ! out_unlock: ! spin_unlock(&virtual_swap_list); up(&vswap_resize_semaphore); } *************** *** 484,487 **** --- 498,503 ---- * check the comp_page and free it if possible, we don't want to * perform an agressive shrinkage. 
+ * + * caller must hold comp_cache_lock lock */ int *************** *** 491,494 **** --- 507,512 ---- int retval = 0; + spin_lock(&comp_cache_lock); + if (!comp_page->page) BUG(); *************** *** 538,542 **** shrink_fragment_hash_table(); shrink_vswap(); ! return retval; --- 556,561 ---- shrink_fragment_hash_table(); shrink_vswap(); ! out_unlock: ! spin_unlock(&comp_cache_lock); return retval; *************** *** 546,550 **** if (!empty_comp_page || !empty_comp_page->page) ! return retval; lock_page(empty_comp_page->page); --- 565,569 ---- if (!empty_comp_page || !empty_comp_page->page) ! goto out_unlock; lock_page(empty_comp_page->page); *************** *** 553,557 **** if (!list_empty(&(comp_page->fragments))) { UnlockPage(empty_comp_page->page); ! return retval; } --- 572,576 ---- if (!list_empty(&(comp_page->fragments))) { UnlockPage(empty_comp_page->page); ! goto out_unlock; } *************** *** 616,620 **** --- 635,642 ---- struct comp_cache_page * comp_page; struct page * page; + int ret = 0; + spin_lock(&comp_cache_lock); + while (comp_cache_needs_to_grow() && nrpages--) { page = alloc_pages(GFP_ATOMIC, comp_page_order); *************** *** 622,630 **** /* couldn't allocate the page */ if (!page) ! return 0; if (!init_comp_page(&comp_page, page)) { __free_pages(page, comp_page_order); ! return 0; } --- 644,652 ---- /* couldn't allocate the page */ if (!page) ! goto out_unlock; if (!init_comp_page(&comp_page, page)) { __free_pages(page, comp_page_order); ! goto out_unlock; } *************** *** 637,653 **** } if (!comp_cache_needs_to_grow()) { grow_zone_watermarks(); ! goto out; } if (!fragment_failed_alloc && !vswap_failed_alloc) ! return 1; ! out: grow_fragment_hash_table(); grow_vswap(); ! ! return 1; } --- 659,678 ---- } + ret = 1; + if (!comp_cache_needs_to_grow()) { grow_zone_watermarks(); ! goto grow_structures; } if (!fragment_failed_alloc && !vswap_failed_alloc) ! goto out_unlock; ! grow_structures: grow_fragment_hash_table(); grow_vswap(); ! 
out_unlock: ! spin_unlock(&comp_cache_lock); ! return ret; } Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.40 retrieving revision 1.41 diff -C2 -r1.40 -r1.41 *** aux.c 17 Jul 2002 20:44:36 -0000 1.40 --- aux.c 18 Jul 2002 21:31:08 -0000 1.41 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-17 16:06:21 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-18 14:14:42 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 127,130 **** --- 127,131 ---- } + /* unlikely, but list can still be corrupted since it is not protected by any lock */ int set_pte_list_to_entry(struct pte_list * start_pte_list, swp_entry_t old_entry, swp_entry_t entry) Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.42 retrieving revision 1.43 diff -C2 -r1.42 -r1.43 *** free.c 18 Jul 2002 13:32:50 -0000 1.42 --- free.c 18 Jul 2002 21:31:08 -0000 1.43 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-18 10:04:09 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-18 16:20:01 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 139,142 **** --- 139,143 ---- } + /* caller must hold comp_cache_lock lock */ void comp_cache_free_locked(struct comp_cache_fragment * fragment) *************** *** 218,222 **** } ! inline void comp_cache_free(struct comp_cache_fragment * fragment) { struct comp_cache_page * comp_page; --- 219,224 ---- } ! /* caller must hold comp_cache_lock lock */ ! 
void comp_cache_free(struct comp_cache_fragment * fragment) { struct comp_cache_page * comp_page; *************** *** 322,326 **** /* let's proceed to fix swap counter for either entries */ for(; num_freed_ptes > 0; --num_freed_ptes) { ! virtual_swap_free(vswap->offset); swap_duplicate(entry); } --- 324,328 ---- /* let's proceed to fix swap counter for either entries */ for(; num_freed_ptes > 0; --num_freed_ptes) { ! __virtual_swap_free(vswap->offset); swap_duplicate(entry); } *************** *** 338,342 **** __delete_from_swap_cache(swap_cache_page); spin_unlock(&pagecache_lock); ! virtual_swap_free(vswap->offset); add_to_swap_cache(swap_cache_page, entry); --- 340,344 ---- __delete_from_swap_cache(swap_cache_page); spin_unlock(&pagecache_lock); ! __virtual_swap_free(vswap->offset); add_to_swap_cache(swap_cache_page, entry); Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.58 retrieving revision 1.59 diff -C2 -r1.58 -r1.59 *** main.c 17 Jul 2002 21:45:12 -0000 1.58 --- main.c 18 Jul 2002 21:31:08 -0000 1.59 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-07-17 18:28:09 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-07-18 13:19:51 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 127,131 **** if (!dirty) BUG(); ! invalidate_comp_cache(page->mapping, page->index); } --- 127,131 ---- if (!dirty) BUG(); ! __invalidate_comp_cache(page->mapping, page->index); } Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.46 retrieving revision 1.47 diff -C2 -r1.46 -r1.47 *** swapin.c 17 Jul 2002 20:44:36 -0000 1.46 --- swapin.c 18 Jul 2002 21:31:08 -0000 1.47 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! 
* Time-stamp: <2002-07-17 13:58:48 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-07-18 17:59:01 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 19,30 **** int ! invalidate_comp_cache(struct address_space *mapping, unsigned long offset) { struct comp_cache_fragment * fragment; ! int err = find_comp_page(mapping, offset, &fragment); if (!err) comp_cache_free(fragment); return err; } --- 19,40 ---- int ! __invalidate_comp_cache(struct address_space * mapping, unsigned long offset) { struct comp_cache_fragment * fragment; ! int err = find_comp_page(mapping, offset, &fragment); if (!err) comp_cache_free(fragment); + return err; + } + int + invalidate_comp_cache(struct address_space * mapping, unsigned long offset) + { + int err; + + spin_lock(&comp_cache_lock); + err = __invalidate_comp_cache(mapping, offset); + spin_unlock(&comp_cache_lock); return err; } *************** *** 38,43 **** int err = -ENOENT; if (likely(!PageTestandClearCompCache(page))) ! goto out; /* we may have a null page->mapping if the page have been --- 48,55 ---- int err = -ENOENT; + spin_lock(&comp_cache_lock); + if (likely(!PageTestandClearCompCache(page))) ! goto out_unlock; /* we may have a null page->mapping if the page have been *************** *** 47,51 **** if (err) ! goto out; if (CompFragmentTestandClearDirty(fragment)) { --- 59,63 ---- if (err) ! goto out_unlock; if (CompFragmentTestandClearDirty(fragment)) { *************** *** 58,62 **** } comp_cache_free(fragment); ! out: return err; } --- 70,76 ---- } comp_cache_free(fragment); ! ! out_unlock: ! 
spin_unlock(&comp_cache_lock); return err; } *************** *** 139,142 **** --- 153,158 ---- struct list_head * fragment_lh, * tmp_lh; struct comp_cache_fragment * fragment; + + spin_lock(&comp_cache_lock); list_for_each_safe(fragment_lh, tmp_lh, list) { *************** *** 146,149 **** --- 162,167 ---- comp_cache_free(fragment); } + + spin_unlock(&comp_cache_lock); } *************** *** 183,192 **** struct comp_cache_fragment * fragment; if (list_empty(&mapping->dirty_comp_pages)) ! return; page = page_cache_alloc(mapping); if (!page) ! return; if (list_empty(&mapping->dirty_comp_pages)) --- 201,212 ---- struct comp_cache_fragment * fragment; + spin_lock(&comp_cache_lock); + if (list_empty(&mapping->dirty_comp_pages)) ! goto out_unlock; page = page_cache_alloc(mapping); if (!page) ! goto out_unlock; if (list_empty(&mapping->dirty_comp_pages)) *************** *** 221,224 **** --- 241,246 ---- out_release: page_cache_release(page); + out_unlock: + spin_unlock(&comp_cache_lock); } Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.43 retrieving revision 1.44 diff -C2 -r1.43 -r1.44 *** vswap.c 18 Jul 2002 13:32:51 -0000 1.43 --- vswap.c 18 Jul 2002 21:31:08 -0000 1.44 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-18 10:01:50 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-18 17:59:42 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 288,292 **** */ int ! virtual_swap_free(unsigned long offset) { unsigned int swap_count; --- 288,292 ---- */ int ! 
__virtual_swap_free(unsigned long offset) { unsigned int swap_count; *************** *** 323,326 **** --- 323,327 ---- vswap->pte_list = NULL; vswap->swap_cache_page = NULL; + vswap->fragment = NULL; /* if this entry is reserved, it's not on any list (either *************** *** 330,338 **** if (fragment == VSWAP_RESERVED) { vswap_num_reserved_entries--; ! vswap->fragment = NULL; ! list_add(&(vswap->list), &vswap_address_free_head); ! nr_free_vswap++; ! ! return 0; } --- 331,335 ---- if (fragment == VSWAP_RESERVED) { vswap_num_reserved_entries--; ! goto out; } *************** *** 341,359 **** /* remove from used list */ ! list_del_init(&(vswap_address[offset]->list)); nr_used_vswap--; - vswap->fragment = NULL; /* add to to the free list */ list_add(&(vswap->list), &vswap_address_free_head); nr_free_vswap++; ! /* global freeable space */ ! comp_cache_freeable_space += fragment->compressed_size; ! /* whops, it will DEADLOCK when shrinking the vswap table ! * since we hold virtual_swap_list */ comp_cache_free(fragment); ! return 0; } --- 338,380 ---- /* remove from used list */ ! list_del(&(vswap_address[offset]->list)); nr_used_vswap--; + /* global freeable space */ + comp_cache_freeable_space += fragment->compressed_size; + out: /* add to to the free list */ list_add(&(vswap->list), &vswap_address_free_head); nr_free_vswap++; + return 0; + } ! /* caller must hold vswap_list_lock ! * retuns virtual_swap_list unlocked */ ! int ! virtual_swap_free(unsigned long offset) ! { ! struct comp_cache_fragment * fragment; ! int ret; ! ! fragment = vswap_address[offset]->fragment; ! ret = __virtual_swap_free(offset); ! ! if (ret) ! goto out_unlock; ! ! if (fragment == VSWAP_RESERVED) ! goto out_unlock; ! spin_unlock(&virtual_swap_list); ! ! spin_lock(&comp_cache_lock); comp_cache_free(fragment); ! spin_unlock(&comp_cache_lock); ! out: ! return ret; ! out_unlock: ! spin_unlock(&virtual_swap_list); ! goto out; } |
From: Rodrigo S. de C. <rc...@us...> - 2002-07-18 13:32:58
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv19795/mm/comp_cache Modified Files: free.c vswap.c Log Message: Bug fixes o Fixed race between comp_cache_use_address() and do_swap_page(). A pte could be unsafely changed during a page fault service. Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.41 retrieving revision 1.42 diff -C2 -r1.41 -r1.42 *** free.c 18 Jul 2002 11:54:48 -0000 1.41 --- free.c 18 Jul 2002 13:32:50 -0000 1.42 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-18 08:40:22 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-18 10:04:09 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 279,282 **** --- 279,285 ---- old_entry = SWP_ENTRY(COMP_CACHE_SWP_TYPE, vswap->offset); + if (vswap->fault_count) + continue; + if (TryLockPage(vswap->fragment->comp_page->page)) continue; *************** *** 301,307 **** /* set old virtual addressed ptes to the real swap entry */ ret = set_pte_list_to_entry(vswap->pte_list, old_entry, entry); ! if (!ret) goto backout; --- 304,315 ---- /* set old virtual addressed ptes to the real swap entry */ + spin_unlock(&virtual_swap_list); ret = set_pte_list_to_entry(vswap->pte_list, old_entry, entry); + spin_lock(&virtual_swap_list); ! /* if we set all the pte list, but while setting to the new ! * entry, a pte has faulted in, back out the changes so ! * hopefully the page fault can be serviced */ ! if (!ret || vswap->fault_count) goto backout; *************** *** 318,330 **** } - if (vswap->swap_cache_page) { - if (vswap->swap_count != 1) - BUG(); - } - else { - if (vswap->swap_count) - BUG(); - } - /* let's fix swap cache page address (if any) */ if (vswap->swap_cache_page) { --- 326,329 ---- *************** *** 347,351 **** } ! 
if (vswap->swap_count) BUG(); --- 346,354 ---- } ! /* Even if the swap cache page has been removed but the ! * swap_count not yet decremented, the maximum value of ! * swap_count is 1. This vswap entry will be added to free ! * list as soon as swap_count gets to zero. */ ! if (vswap->swap_count > 1) BUG(); Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.42 retrieving revision 1.43 diff -C2 -r1.42 -r1.43 *** vswap.c 17 Jul 2002 20:44:36 -0000 1.42 --- vswap.c 18 Jul 2002 13:32:51 -0000 1.43 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-17 16:48:47 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-18 10:01:50 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 708,711 **** --- 708,730 ---- spin_unlock(&virtual_swap_list); } + + void FASTCALL(get_vswap(swp_entry_t)); + void get_vswap(swp_entry_t entry) { + if (!vswap_address(entry)) + return; + spin_lock(&virtual_swap_list); + vswap_address[SWP_OFFSET(entry)]->fault_count++; + spin_unlock(&virtual_swap_list); + } + + void FASTCALL(put_vswap(swp_entry_t)); + void put_vswap(swp_entry_t entry) { + if (!vswap_address(entry)) + return; + spin_lock(&virtual_swap_list); + vswap_address[SWP_OFFSET(entry)]->fault_count--; + spin_unlock(&virtual_swap_list); + } + /** |
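The get_vswap()/put_vswap() pair pins a vswap entry for the duration of a page-fault service: the fault path bumps fault_count under virtual_swap_list, and comp_cache_use_address() skips (or backs out of) any entry whose fault_count is nonzero. A compact userspace sketch of this pin-and-skip pattern (hypothetical miniature types; a pthread mutex stands in for the virtual_swap_list spinlock):

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical miniature of a vswap entry. */
struct vswap_entry {
    int fault_count;    /* faults currently using this entry */
};

static pthread_mutex_t vswap_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pin the entry across the fault service. */
static void get_vswap(struct vswap_entry *v)
{
    pthread_mutex_lock(&vswap_lock);
    v->fault_count++;
    pthread_mutex_unlock(&vswap_lock);
}

static void put_vswap(struct vswap_entry *v)
{
    pthread_mutex_lock(&vswap_lock);
    v->fault_count--;
    pthread_mutex_unlock(&vswap_lock);
}

/* The reassignment path must leave pinned entries alone, exactly as
 * the free.c hunk does with its 'continue' on vswap->fault_count.
 * Returns 1 if the entry could be safely reassigned. */
static int try_reassign(struct vswap_entry *v)
{
    int done = 0;
    pthread_mutex_lock(&vswap_lock);
    if (v->fault_count == 0)
        done = 1;       /* safe: no fault is mid-flight */
    pthread_mutex_unlock(&vswap_lock);
    return done;
}
```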
From: Rodrigo S. de C. <rc...@us...> - 2002-07-18 13:32:57
|
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv19795/include/linux Modified Files: comp_cache.h Log Message: Bug fixes o Fixed race between comp_cache_use_address() and do_swap_page(). A pte could be unsafely changed during a page fault service. Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.94 retrieving revision 1.95 diff -C2 -r1.94 -r1.95 *** comp_cache.h 17 Jul 2002 20:44:36 -0000 1.94 --- comp_cache.h 18 Jul 2002 13:32:50 -0000 1.95 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-17 16:23:16 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-18 09:47:44 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 422,439 **** extern void FASTCALL(del_swap_cache_page_vswap(struct page *)); extern int FASTCALL(free_pte_list(struct pte_list *, unsigned long)); int vswap_alloc_and_init(struct vswap_address **, unsigned long); - - static inline void get_vswap(swp_entry_t entry) { - if (!vswap_address(entry)) - return; - vswap_address[SWP_OFFSET(entry)]->fault_count++; - } - - static inline void put_vswap(swp_entry_t entry) { - if (!vswap_address(entry)) - return; - vswap_address[SWP_OFFSET(entry)]->fault_count--; - } extern spinlock_t virtual_swap_list; --- 422,429 ---- extern void FASTCALL(del_swap_cache_page_vswap(struct page *)); extern int FASTCALL(free_pte_list(struct pte_list *, unsigned long)); + extern void FASTCALL(get_vswap(swp_entry_t)); + extern void FASTCALL(put_vswap(swp_entry_t)); int vswap_alloc_and_init(struct vswap_address **, unsigned long); extern spinlock_t virtual_swap_list; |
From: Rodrigo S. de C. <rc...@us...> - 2002-07-18 13:32:57
|
Update of /cvsroot/linuxcompressed/linux/mm In directory usw-pr-cvs1:/tmp/cvs-serv19795/mm Modified Files: memory.c Log Message: Bug fixes o Fixed race between comp_cache_use_address() and do_swap_page(). A pte could be unsafely changed during a page fault service. Index: memory.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/memory.c,v retrieving revision 1.34 retrieving revision 1.35 diff -C2 -r1.34 -r1.35 *** memory.c 16 Jul 2002 18:41:55 -0000 1.34 --- memory.c 18 Jul 2002 13:32:50 -0000 1.35 *************** *** 1130,1133 **** --- 1130,1134 ---- int ret = 1; + get_vswap(entry); spin_unlock(&mm->page_table_lock); page = lookup_swap_cache(entry); *************** *** 1135,1145 **** /* perform readahead only if the page is on disk */ if (!in_comp_cache(&swapper_space, entry.val)) { swapin_readahead(entry); /* major fault */ ret = 2; } - get_vswap(entry); page = read_swap_cache_async(entry); - put_vswap(entry); if (!page) { /* --- 1136,1146 ---- /* perform readahead only if the page is on disk */ if (!in_comp_cache(&swapper_space, entry.val)) { + if (vswap_address(entry)) + BUG(); swapin_readahead(entry); /* major fault */ ret = 2; } page = read_swap_cache_async(entry); if (!page) { /* *************** *** 1151,1154 **** --- 1152,1156 ---- retval = pte_same(*page_table, orig_pte) ? -1 : 1; spin_unlock(&mm->page_table_lock); + put_vswap(entry); return retval; } *************** *** 1168,1171 **** --- 1170,1174 ---- unlock_page(page); page_cache_release(page); + put_vswap(entry); return 1; } *************** *** 1194,1197 **** --- 1197,1201 ---- update_mmu_cache(vma, address, pte); spin_unlock(&mm->page_table_lock); + put_vswap(entry); return ret; } |
From: Rodrigo S. de C. <rc...@us...> - 2002-07-18 11:54:52
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv21414/mm/comp_cache Modified Files: free.c swapout.c Log Message: Bug fixes o Fixed potential deadlock in comp_cache_use_address() when calling delete_from_swap_cache() with the virtual_swap_list locked. Cleanup o Cleanup in find_free_swp_buffer() Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.40 retrieving revision 1.41 diff -C2 -r1.40 -r1.41 *** free.c 17 Jul 2002 21:45:12 -0000 1.40 --- free.c 18 Jul 2002 11:54:48 -0000 1.41 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-17 17:51:14 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-18 08:40:22 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 317,320 **** --- 317,329 ---- swap_duplicate(entry); } + + if (vswap->swap_cache_page) { + if (vswap->swap_count != 1) + BUG(); + } + else { + if (vswap->swap_count) + BUG(); + } /* let's fix swap cache page address (if any) */ *************** *** 327,333 **** page_cache_get(swap_cache_page); ! delete_from_swap_cache(swap_cache_page); ! if (add_to_swap_cache(swap_cache_page, entry)) ! BUG(); UnlockPage(swap_cache_page); --- 336,345 ---- page_cache_get(swap_cache_page); ! spin_lock(&pagecache_lock); ! __delete_from_swap_cache(swap_cache_page); ! spin_unlock(&pagecache_lock); ! virtual_swap_free(vswap->offset); ! ! 
add_to_swap_cache(swap_cache_page, entry); UnlockPage(swap_cache_page); Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.64 retrieving revision 1.65 diff -C2 -r1.64 -r1.65 *** swapout.c 17 Jul 2002 21:45:12 -0000 1.64 --- swapout.c 18 Jul 2002 11:54:48 -0000 1.65 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-17 18:26:45 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-17 18:48:02 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 144,148 **** refill_swp_buffer(gfp_mask, priority--); ! /* Failed to get a free swap buffer. Probably gfp_mask does * not allow buffer sync in refill_swp_buffer() function. */ if (list_empty(&swp_free_buffer_head)) { --- 144,150 ---- refill_swp_buffer(gfp_mask, priority--); ! error = -ENOENT; ! ! /* Failed to get a free swap buffer. Probably gfp_mask does * not allow buffer sync in refill_swp_buffer() function. */ if (list_empty(&swp_free_buffer_head)) { *************** *** 153,157 **** /* Fragment totally freed. Free its struct to avoid leakage. */ if (!CompFragmentIO(fragment)) { - error = -ENOENT; kmem_cache_free(fragment_cachep, (fragment)); goto failed; --- 155,158 ---- *************** *** 159,166 **** /* Fragment partially freed (to be merged). Nothing to do. */ ! if (CompFragmentFreed(fragment)) { ! error = -ENOENT; goto failed; - } get_free_buffer: --- 160,165 ---- /* Fragment partially freed (to be merged). Nothing to do. */ ! if (CompFragmentFreed(fragment)) goto failed; get_free_buffer: |
From: Rodrigo S. de C. <rc...@us...> - 2002-07-17 21:45:17
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv9378/mm/comp_cache Modified Files: free.c main.c swapout.c Log Message: Bug fixes o Fix bug introduced in the last changes to the compress_clean_page() function. It wasn't returning any value. o Completed fix for the case where no swap buffer is freed. The fix was incomplete, so it would mess up the lists and discard dirty fragments, which would end up corrupting memory. Now, writeout_fragments() checks the error value of decompress_to_swp_buffer() and handles accordingly for the case where the error == -ENOMEM (no swap buffer available). Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.39 retrieving revision 1.40 diff -C2 -r1.39 -r1.40 *** free.c 17 Jul 2002 20:44:36 -0000 1.39 --- free.c 17 Jul 2002 21:45:12 -0000 1.40 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-17 16:06:56 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-17 17:51:14 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 331,336 **** BUG(); - page_cache_release(swap_cache_page); UnlockPage(swap_cache_page); } --- 331,336 ---- BUG(); UnlockPage(swap_cache_page); + page_cache_release(swap_cache_page); } Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.57 retrieving revision 1.58 diff -C2 -r1.57 -r1.58 *** main.c 17 Jul 2002 20:44:36 -0000 1.57 --- main.c 17 Jul 2002 21:45:12 -0000 1.58 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-07-17 15:14:43 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * !
* Time-stamp: <2002-07-17 18:28:09 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 103,106 **** --- 103,108 ---- ret = compress_page(page, 0, gfp_mask, priority); spin_unlock(&comp_cache_lock); + + return ret; } Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.63 retrieving revision 1.64 diff -C2 -r1.63 -r1.64 *** swapout.c 17 Jul 2002 20:44:36 -0000 1.63 --- swapout.c 17 Jul 2002 21:45:12 -0000 1.64 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-17 12:34:20 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-17 18:26:45 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 125,135 **** * */ ! static struct swp_buffer * ! find_free_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask) { struct page * buffer_page; ! struct list_head * swp_buffer_lh; struct swp_buffer * swp_buffer; ! int priority = 6; if (!fragment) --- 125,135 ---- * */ ! static int ! find_free_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask, struct swp_buffer ** swp_buffer_out) { struct page * buffer_page; ! struct list_head * swp_buffer_lh; struct swp_buffer * swp_buffer; ! int priority = 6, error = 0; if (!fragment) *************** *** 146,154 **** /* Failed to get a free swap buffer. Probably gfp_mask does * not allow buffer sync in refill_swp_buffer() function. */ ! if (list_empty(&swp_free_buffer_head)) goto failed; /* Fragment totally freed. Free its struct to avoid leakage. */ if (!CompFragmentIO(fragment)) { kmem_cache_free(fragment_cachep, (fragment)); goto failed; --- 146,157 ---- /* Failed to get a free swap buffer. Probably gfp_mask does * not allow buffer sync in refill_swp_buffer() function. */ ! if (list_empty(&swp_free_buffer_head)) { ! 
error = -ENOMEM; goto failed; + } /* Fragment totally freed. Free its struct to avoid leakage. */ if (!CompFragmentIO(fragment)) { + error = -ENOENT; kmem_cache_free(fragment_cachep, (fragment)); goto failed; *************** *** 156,161 **** /* Fragment partially freed (to be merged). Nothing to do. */ ! if (CompFragmentFreed(fragment)) goto failed; get_free_buffer: --- 159,166 ---- /* Fragment partially freed (to be merged). Nothing to do. */ ! if (CompFragmentFreed(fragment)) { ! error = -ENOENT; goto failed; + } get_free_buffer: *************** *** 176,201 **** list_add(&buffer_page->list, &fragment->mapping->locked_comp_pages); ! ! return (swp_buffer); ! failed: CompFragmentClearIO(fragment); ! return NULL; } extern void decompress_fragment(struct comp_cache_fragment *, struct page *); ! static struct swp_buffer * ! decompress_to_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask) { struct page * buffer_page; struct swp_buffer * swp_buffer; if (fragment->comp_page->page->buffers) BUG(); - swp_buffer = find_free_swp_buffer(fragment, gfp_mask); ! /* no need for IO any longer */ ! if (!swp_buffer) ! return NULL; buffer_page = swp_buffer->page; --- 181,207 ---- list_add(&buffer_page->list, &fragment->mapping->locked_comp_pages); ! (*swp_buffer_out) = swp_buffer; ! out: ! return error; ! failed: CompFragmentClearIO(fragment); ! goto out; } extern void decompress_fragment(struct comp_cache_fragment *, struct page *); ! static int ! decompress_to_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask, struct swp_buffer ** swp_buffer_out) { struct page * buffer_page; struct swp_buffer * swp_buffer; + int error; if (fragment->comp_page->page->buffers) BUG(); ! error = find_free_swp_buffer(fragment, gfp_mask, &swp_buffer); ! if (error) ! goto out; buffer_page = swp_buffer->page; *************** *** 210,214 **** UnlockPage(fragment->comp_page->page); ! return swp_buffer; } --- 216,222 ---- UnlockPage(fragment->comp_page->page); ! 
(*swp_buffer_out) = swp_buffer; ! out: ! return error; } *************** *** 221,225 **** int (*writepage)(struct page *); struct list_head * fragment_lh; ! int maxscan, nrpages, swap_cache_page; struct comp_cache_fragment * fragment; struct swp_buffer * swp_buffer; --- 229,233 ---- int (*writepage)(struct page *); struct list_head * fragment_lh; ! int maxscan, nrpages, swap_cache_page, error; struct comp_cache_fragment * fragment; struct swp_buffer * swp_buffer; *************** *** 308,316 **** spin_unlock(&comp_cache_lock); - swp_buffer = decompress_to_swp_buffer(fragment, gfp_mask); ! /* no need for IO */ ! if (!swp_buffer) ! goto out; if (!swp_buffer->page->mapping) --- 316,323 ---- spin_unlock(&comp_cache_lock); ! error = decompress_to_swp_buffer(fragment, gfp_mask, &swp_buffer); ! if (error) ! goto failed; if (!swp_buffer->page->mapping) *************** *** 334,337 **** --- 341,359 ---- continue; break; + + failed: + /* ok, freed in the meanwhile */ + if (error == -ENOENT) + goto out; + + /* -ENOMEM - couldn't find a buffer (gfp_mask) */ + if (TryLockPage(fragment->comp_page->page)) + BUG(); + add_fragment_to_lru_queue(fragment); + list_del(&fragment->mapping_list); + list_add(&fragment->mapping_list, &fragment->mapping->dirty_comp_pages); + CompFragmentSetDirty(fragment); + UnlockPage(fragment->comp_page->page); + goto out; } |
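The refactor in this commit replaces a bare NULL return from find_free_swp_buffer() with a 0/negative-errno return plus an out-parameter, so the caller can distinguish "fragment was freed meanwhile" (-ENOENT: just give up) from "no buffer available" (-ENOMEM: move the fragment back to the dirty list). A sketch of the calling convention; the struct and the two condition flags are placeholders, not the kernel types.

```c
#include <errno.h>
#include <stddef.h>

struct swp_buffer { int id; };

static struct swp_buffer the_buffer = { 42 };

/* Returns 0 on success and fills *out; returns a negative errno on
 * failure so the caller can react differently to each cause. */
static int find_free_swp_buffer(int fragment_freed, int have_buffer,
                                struct swp_buffer **out)
{
    if (fragment_freed)
        return -ENOENT;      /* fragment vanished: nothing to write out */
    if (!have_buffer)
        return -ENOMEM;      /* caller should re-dirty the fragment */
    *out = &the_buffer;
    return 0;
}
```

This is why writeout_fragments() in the patch grows a `failed:` label that re-dirties the fragment only on -ENOMEM.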
From: Rodrigo S. de C. <rc...@us...> - 2002-07-17 20:44:40
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv16293/mm/comp_cache Modified Files: aux.c free.c main.c swapin.c swapout.c vswap.c Log Message: Features o First implementation of support for SMP systems. There are only two spinlocks used for that, but the goal at the moment is stability, not performance. With our first tests, it is working without corruption on a system with preempt patch, but only swap cache support (and without resizing compressed cache). Let the first races show up :-) As soon as the whole code is working somewhat well, those global locks will be divided into many other to improve concurrency. Bug fixes o fixed compilation error when compressed cache is disabled Cleanups o removed virtual_swap_count() since it wasn't used (swap_count() isn't used either). Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.39 retrieving revision 1.40 diff -C2 -r1.39 -r1.40 *** aux.c 16 Jul 2002 21:58:08 -0000 1.39 --- aux.c 17 Jul 2002 20:44:36 -0000 1.40 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-16 16:33:09 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-17 16:06:21 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 108,113 **** while (pte_list != failed_pte_list) { mm = ptep_to_mm(pte_list->ptep); ! address = ptep_to_address(pte_list->ptep); ! vma = find_vma(mm, address); --- 108,114 ---- while (pte_list != failed_pte_list) { mm = ptep_to_mm(pte_list->ptep); ! spin_lock(&mm->page_table_lock); ! ! 
address = ptep_to_address(pte_list->ptep); vma = find_vma(mm, address); *************** *** 121,124 **** --- 122,126 ---- set_pte(pte_list->ptep, swp_entry_to_pte(old_entry)); + spin_unlock(&mm->page_table_lock); pte_list = pte_list->next; } *************** *** 135,144 **** pte_list = start_pte_list; ! while (pte_list) { ptep = pte_list->ptep; mm = ptep_to_mm(ptep); address = ptep_to_address(ptep); - vma = find_vma(mm, address); --- 137,147 ---- pte_list = start_pte_list; ! while (pte_list) { ptep = pte_list->ptep; mm = ptep_to_mm(ptep); + spin_lock(&mm->page_table_lock); + address = ptep_to_address(ptep); vma = find_vma(mm, address); *************** *** 151,155 **** set_pte(ptep, swp_entry_to_pte(entry)); ! pte_list = pte_list->next; } --- 154,159 ---- set_pte(ptep, swp_entry_to_pte(entry)); ! ! spin_unlock(&mm->page_table_lock); pte_list = pte_list->next; } *************** *** 158,161 **** --- 162,166 ---- error: + spin_unlock(&mm->page_table_lock); backout_pte_changes(start_pte_list, pte_list, old_entry); return 0; Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.38 retrieving revision 1.39 diff -C2 -r1.38 -r1.39 *** free.c 17 Jul 2002 13:00:58 -0000 1.38 --- free.c 17 Jul 2002 20:44:36 -0000 1.39 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-17 08:49:59 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-17 16:06:56 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 256,264 **** struct list_head * vswap_lh; swp_entry_t old_entry; ! int num_freed_ptes; /* no virtual swap entry with a compressed page */ if (list_empty(&vswap_address_used_head)) ! return 0; vswap_lh = &vswap_address_used_head; --- 256,266 ---- struct list_head * vswap_lh; swp_entry_t old_entry; ! 
int num_freed_ptes, ret = 0; + spin_lock(&virtual_swap_list); + /* no virtual swap entry with a compressed page */ if (list_empty(&vswap_address_used_head)) ! goto out_unlock; vswap_lh = &vswap_address_used_head; *************** *** 294,303 **** /* no page could be locked for changes */ if (vswap_lh == &vswap_address_used_head) ! return 0; fragment = vswap->fragment; /* set old virtual addressed ptes to the real swap entry */ ! if (!set_pte_list_to_entry(vswap->pte_list, old_entry, entry)) goto backout; --- 296,307 ---- /* no page could be locked for changes */ if (vswap_lh == &vswap_address_used_head) ! goto out_unlock; fragment = vswap->fragment; /* set old virtual addressed ptes to the real swap entry */ ! ret = set_pte_list_to_entry(vswap->pte_list, old_entry, entry); ! ! if (!ret) goto backout; *************** *** 339,343 **** add_fragment_to_hash_table(fragment); UnlockPage(fragment->comp_page->page); ! return 1; backout: --- 343,349 ---- add_fragment_to_hash_table(fragment); UnlockPage(fragment->comp_page->page); ! out_unlock: ! spin_unlock(&virtual_swap_list); ! return ret; backout: Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.56 retrieving revision 1.57 diff -C2 -r1.56 -r1.57 *** main.c 16 Jul 2002 21:58:08 -0000 1.56 --- main.c 17 Jul 2002 20:44:36 -0000 1.57 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-07-16 16:35:35 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! 
* Time-stamp: <2002-07-17 15:14:43 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 46,56 **** extern unsigned long num_physpages; - extern struct comp_cache_page * get_comp_cache_page(struct page *, unsigned short, struct comp_cache_fragment **, unsigned int, int); inline void compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask, int priority) { ! int write; write = !!page->buffers; --- 46,58 ---- extern unsigned long num_physpages; extern struct comp_cache_page * get_comp_cache_page(struct page *, unsigned short, struct comp_cache_fragment **, unsigned int, int); + /* ugly global comp_cache_lock (only to start make it SMP-safe) */ + spinlock_t comp_cache_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; + inline void compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask, int priority) { ! int write, ret; write = !!page->buffers; *************** *** 73,80 **** if (page->buffers) BUG(); ! ! /* in the case we fail to compress the page, set the bits back ! * since that's a dirty page */ ! if (compress_page(page, 1, gfp_mask, priority)) return; set_bits_back: --- 75,85 ---- if (page->buffers) BUG(); ! ! spin_lock(&comp_cache_lock); ! ret = compress_page(page, 1, gfp_mask, priority); ! spin_unlock(&comp_cache_lock); ! ! /* failed to compress the dirty page? set the bits back */ ! if (ret) return; set_bits_back: *************** *** 86,89 **** --- 91,96 ---- compress_clean_page(struct page * page, unsigned int gfp_mask, int priority) { + int ret; + if (page->buffers) BUG(); *************** *** 93,97 **** return 1; #endif ! return compress_page(page, 0, gfp_mask, priority); } --- 100,106 ---- return 1; #endif ! spin_lock(&comp_cache_lock); ! ret = compress_page(page, 0, gfp_mask, priority); ! 
spin_unlock(&comp_cache_lock); } Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.45 retrieving revision 1.46 diff -C2 -r1.45 -r1.46 *** swapin.c 16 Jul 2002 18:41:55 -0000 1.45 --- swapin.c 17 Jul 2002 20:44:36 -0000 1.46 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-07-16 14:55:53 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-07-17 13:58:48 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 92,95 **** --- 92,96 ---- int err; + spin_lock(&comp_cache_lock); err = find_comp_page(mapping, offset, &fragment); *************** *** 97,101 **** * had a real address assigned */ if (err) ! goto out; if (!PageLocked(page)) --- 98,102 ---- * had a real address assigned */ if (err) ! goto out_unlock; if (!PageLocked(page)) *************** *** 126,130 **** UnlockPage(page); ! out: return err; } --- 127,132 ---- UnlockPage(page); ! out_unlock: ! spin_unlock(&comp_cache_lock); return err; } Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.62 retrieving revision 1.63 diff -C2 -r1.62 -r1.63 *** swapout.c 17 Jul 2002 13:00:58 -0000 1.62 --- swapout.c 17 Jul 2002 20:44:36 -0000 1.63 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-17 09:42:34 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! 
* Time-stamp: <2002-07-17 12:34:20 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 232,237 **** --- 232,239 ---- while (!list_empty(&lru_queue) && maxscan--) { if (unlikely(current->need_resched)) { + spin_unlock(&comp_cache_lock); __set_current_state(TASK_RUNNING); schedule(); + spin_lock(&comp_cache_lock); } *************** *** 304,308 **** if (swap_cache_page && !swap_duplicate(entry)) BUG(); ! swp_buffer = decompress_to_swp_buffer(fragment, gfp_mask); --- 306,311 ---- if (swap_cache_page && !swap_duplicate(entry)) BUG(); ! ! spin_unlock(&comp_cache_lock); swp_buffer = decompress_to_swp_buffer(fragment, gfp_mask); *************** *** 326,329 **** --- 329,334 ---- swap_free(entry); + spin_lock(&comp_cache_lock); + if (!swp_buffer || --nrpages) continue; *************** *** 452,457 **** --- 457,464 ---- if (unlikely(current->need_resched)) { + spin_unlock(&comp_cache_lock); __set_current_state(TASK_RUNNING); schedule(); + spin_lock(&comp_cache_lock); } *************** *** 488,492 **** --- 495,501 ---- BUG(); + spin_unlock(&comp_cache_lock); new_page = alloc_page(gfp_mask); + spin_lock(&comp_cache_lock); if (!new_page) Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.41 retrieving revision 1.42 diff -C2 -r1.41 -r1.42 *** vswap.c 17 Jul 2002 13:00:58 -0000 1.41 --- vswap.c 17 Jul 2002 20:44:36 -0000 1.42 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-17 08:52:56 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! 
* Time-stamp: <2002-07-17 16:48:47 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 61,64 **** --- 61,67 ---- unsigned long nr_free_vswap = 0, nr_used_vswap = 0; + spinlock_t virtual_swap_list __cacheline_aligned = SPIN_LOCK_UNLOCKED; + + /* the caller must hold the virtual_swap_list lock */ static int comp_cache_vswap_alloc(void) *************** *** 107,112 **** * comp_cache_available_vswap - this function returns 1 if we have any * available vswap entry and also if we can assign any vswap entry. */ ! int comp_cache_available_vswap(void) { unsigned short available_mean_size; --- 110,117 ---- * comp_cache_available_vswap - this function returns 1 if we have any * available vswap entry and also if we can assign any vswap entry. + * + * the caller must hold virtual_swap_list lock */ ! static int comp_cache_available_vswap(void) { unsigned short available_mean_size; *************** *** 131,138 **** if (!vswap_alloc_and_init(vswap_address, last_vswap_allocated + 1)) return 0; ! for (i = last_vswap_allocated + 1; i < vswap_current_num_entries && vswap_address[i]; i++); last_vswap_allocated = i - 1; ! return 1; } --- 136,145 ---- if (!vswap_alloc_and_init(vswap_address, last_vswap_allocated + 1)) return 0; ! ! /* update the last_vswap_allocated variable to the ! * actual last vswap allocated entry */ for (i = last_vswap_allocated + 1; i < vswap_current_num_entries && vswap_address[i]; i++); last_vswap_allocated = i - 1; ! return 1; } *************** *** 160,173 **** * if we have virtual swap entries to be compressed. */ ! inline int comp_cache_available_space(void) { if (comp_cache_available_vswap()) ! return 1; /* can we still compress all these entries? */ if (vswap_num_reserved_entries > 0) ! return 1; ! return 0; } --- 167,187 ---- * if we have virtual swap entries to be compressed. */ ! int comp_cache_available_space(void) { + int ret = 1; + + spin_lock(&virtual_swap_list); + if (comp_cache_available_vswap()) ! 
goto out_unlock; /* can we still compress all these entries? */ if (vswap_num_reserved_entries > 0) ! goto out_unlock; ! ret = 0; ! out_unlock: ! spin_unlock(&virtual_swap_list); ! return ret; } *************** *** 192,201 **** entry.val = 0; if (!vswap_address && !comp_cache_vswap_alloc()) ! return entry; if (!comp_cache_available_vswap()) ! return entry; vswap = list_entry(vswap_address_free_head.next, struct vswap_address, list); --- 206,216 ---- entry.val = 0; + spin_lock(&virtual_swap_list); if (!vswap_address && !comp_cache_vswap_alloc()) ! goto out_unlock; if (!comp_cache_available_vswap()) ! goto out_unlock; vswap = list_entry(vswap_address_free_head.next, struct vswap_address, list); *************** *** 228,237 **** BUG(); return entry; } /** ! * comp_cache_swp_duplicate - swap_duplicate for virtual swap ! * addresses. * @entry: the virtual swap entry which will have its count * incremented --- 243,254 ---- BUG(); + out_unlock: + spin_unlock(&virtual_swap_list); return entry; } /** ! * virtual_swap_duplicate - swap_duplicate for virtual swap addresses. ! * * @entry: the virtual swap entry which will have its count * incremented *************** *** 249,269 **** return 0; if (offset >= vswap_current_num_entries) ! return 0; vswap_address[offset]->swap_count++; return 1; } /** ! * virtual_swap_free - swap_free for virtual swap addresses. @entry: ! * the virtual swap entry which will have its count decremented and ! * possibly the vswap entry freed. ! * ! * This function will decrement the vswap entry counter. If we have ! * had a real swap address assigned, we will call swap_free() for it, ! * since we hold a reference to the real address for every pending ! * pte. If we get to count == 0, the entry will have its struct ! * initalized and be added to the free list. In the case we have a ! * fragment (recall that fragments don't hold references on swap ! * addresses), we will free it too. 
*/ int --- 266,289 ---- return 0; if (offset >= vswap_current_num_entries) ! return 0; ! spin_lock(&virtual_swap_list); vswap_address[offset]->swap_count++; + spin_unlock(&virtual_swap_list); return 1; } /** ! * virtual_swap_free - swap_free() for virtual swap addresses. ! * ! * @entry: the virtual swap entry which will have its count ! * decremented and possibly the vswap entry freed. ! * ! * This function will decrement the vswap entry counter. If we get to ! * swap_count == 0, the entry will have its struct initalized and be ! * added to the free list. In the case we have a fragment (recall that ! * fragments don't hold references on swap addresses), we will free it ! * too. ! * ! * the caller must hold virtual_swap_list lock */ int *************** *** 332,354 **** comp_cache_freeable_space += fragment->compressed_size; comp_cache_free(fragment); return 0; } - /** - * virtual_swap_count - swap_count for virtual swap addresses. - * @entry: virtual swap entry that will be returned its counter. - * - * This function returns the counter for the vswap entry parameter. - */ - int - virtual_swap_count(swp_entry_t entry) - { - unsigned long offset = SWP_OFFSET(entry); - if (!vswap_address[offset]->swap_count) - BUG(); - return (vswap_address[offset]->swap_count); - } - /*** * remove_fragment_vswap - this function tells the vswap entry that it --- 352,361 ---- comp_cache_freeable_space += fragment->compressed_size; + /* whops, it will DEADLOCK when shrinking the vswap table + * since we hold virtual_swap_list */ comp_cache_free(fragment); return 0; } /*** * remove_fragment_vswap - this function tells the vswap entry that it *************** *** 379,386 **** return; offset = SWP_OFFSET(entry); - if (!vswap_address[offset]->swap_count) ! return; if (reserved(offset) || !vswap_address[offset]->fragment) --- 386,394 ---- return; + spin_lock(&virtual_swap_list); + offset = SWP_OFFSET(entry); if (!vswap_address[offset]->swap_count) ! 
goto out_unlock; if (reserved(offset) || !vswap_address[offset]->fragment) *************** *** 397,400 **** --- 405,410 ---- comp_cache_freeable_space += fragment->compressed_size; vswap_num_reserved_entries++; + out_unlock: + spin_unlock(&virtual_swap_list); } *************** *** 423,426 **** --- 433,438 ---- if (!vswap_address(entry)) return; + + spin_lock(&virtual_swap_list); offset = SWP_OFFSET(entry); *************** *** 441,444 **** --- 453,458 ---- comp_cache_freeable_space -= fragment->compressed_size; vswap_num_reserved_entries--; + + spin_unlock(&virtual_swap_list); } *************** *** 452,456 **** * may be on and adds the pte_list to the free list. May also be * called for new pte_list structures which aren't on any list yet. ! * Caller needs to hold the pagemap_lru_list. * * (adapted from Rik van Riel's rmap patch) --- 466,471 ---- * may be on and adds the pte_list to the free list. May also be * called for new pte_list structures which aren't on any list yet. ! * ! * caller needs to hold the pagemap_lru_list. * * (adapted from Rik van Riel's rmap patch) *************** *** 499,503 **** * Returns a pointer to a fresh pte_list structure. Allocates new * pte_list structures as required. ! * Caller needs to hold the pagemap_lru_lock. * * (adapted from Rik van Riel's rmap patch) --- 514,519 ---- * Returns a pointer to a fresh pte_list structure. Allocates new * pte_list structures as required. ! * ! * caller needs to hold the pagemap_lru_lock. * * (adapted from Rik van Riel's rmap patch) *************** *** 551,554 **** --- 567,572 ---- * adressed ptes. * + * caller must hold the mm->page_table_lock. + * * (adapted from Rik van Riel's rmap patch) */ *************** *** 599,602 **** --- 617,622 ---- * entries. * + * caller must hold the mm->page_table_lock. 
+ * * (adapted from Rik van Riel's rmap patch) */ *************** *** 639,642 **** --- 659,664 ---- return; + spin_lock(&virtual_swap_list); + offset = SWP_OFFSET(entry); *************** *** 647,650 **** --- 669,674 ---- vswap_address[offset]->swap_cache_page = page; vswap_num_swap_cache++; + + spin_unlock(&virtual_swap_list); } *************** *** 672,675 **** --- 696,701 ---- return; + spin_lock(&virtual_swap_list); + offset = SWP_OFFSET(entry); *************** *** 679,682 **** --- 705,710 ---- vswap_address[offset]->swap_cache_page = NULL; vswap_num_swap_cache--; + + spin_unlock(&virtual_swap_list); } *************** *** 691,694 **** --- 719,723 ---- * its struct, adding it to the list of free vswap entries. * + * the caller must hold virtual_swap_list lock */ int |
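A pattern repeated throughout this SMP commit: the global comp_cache_lock must be dropped around anything that can block (schedule(), alloc_page() with a sleeping gfp_mask) and re-taken afterwards, at which point the protected state must be revalidated. A minimal sketch with a pthread mutex standing in for the spinlock; the trylock probe inside the "blocking" call just verifies the lock really was released.

```c
#include <pthread.h>

static pthread_mutex_t comp_cache_lock = PTHREAD_MUTEX_INITIALIZER;
static int lock_was_free_during_block;

static void blocking_call(void)      /* stands in for schedule() */
{
    /* succeeds only if the caller genuinely dropped the lock */
    if (pthread_mutex_trylock(&comp_cache_lock) == 0) {
        lock_was_free_during_block = 1;
        pthread_mutex_unlock(&comp_cache_lock);
    }
}

static void shrink_step(void)
{
    pthread_mutex_lock(&comp_cache_lock);
    /* ... scan the LRU queue under the lock ... */
    pthread_mutex_unlock(&comp_cache_lock);  /* drop before sleeping */
    blocking_call();
    pthread_mutex_lock(&comp_cache_lock);    /* re-take; state may have changed */
    /* ... revalidate before continuing the scan ... */
    pthread_mutex_unlock(&comp_cache_lock);
}
```

Sleeping with a spinlock held would stall every other CPU spinning on it (and, with preemption, can deadlock outright), which is why the patch brackets both schedule() and alloc_page() this way.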
From: Rodrigo S. de C. <rc...@us...> - 2002-07-17 20:44:39
|
Update of /cvsroot/linuxcompressed/linux/mm In directory usw-pr-cvs1:/tmp/cvs-serv16293/mm Modified Files: swapfile.c Log Message: Features o First implementation of support for SMP systems. There are only two spinlocks used for that, but the goal at the moment is stability, not performance. With our first tests, it is working without corruption on a system with preempt patch, but only swap cache support (and without resizing compressed cache). Let the first races show up :-) As soon as the whole code is working somewhat well, those global locks will be divided into many other to improve concurrency. Bug fixes o fixed compilation error when compressed cache is disabled Cleanups o removed virtual_swap_count() since it wasn't used (swap_count() isn't used either). Index: swapfile.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swapfile.c,v retrieving revision 1.32 retrieving revision 1.33 diff -C2 -r1.32 -r1.33 *** swapfile.c 17 Jul 2002 13:00:57 -0000 1.32 --- swapfile.c 17 Jul 2002 20:44:36 -0000 1.33 *************** *** 162,166 **** type = SWP_TYPE(entry); if (vswap_address(entry)) ! return &swap_info[type]; if (type >= nr_swapfiles) goto bad_nofile; --- 162,166 ---- type = SWP_TYPE(entry); if (vswap_address(entry)) ! goto virtual_swap; if (type >= nr_swapfiles) goto bad_nofile; *************** *** 192,195 **** --- 192,199 ---- out: return NULL; + virtual_swap: + spin_lock(&virtual_swap_list); + /* it returns a bogus value (not allocated). FIX IT */ + return &swap_info[type]; } *************** *** 197,203 **** { if (vswap_info_struct(p)) ! return; swap_device_unlock(p); swap_list_unlock(); } --- 201,210 ---- { if (vswap_info_struct(p)) ! goto virtual_swap; swap_device_unlock(p); swap_list_unlock(); + return; + virtual_swap: + spin_unlock(&virtual_swap_list); } *************** *** 259,263 **** /* Is the only swap cache user the cache itself? */ if (vswap_address(entry)) { ! 
if (virtual_swap_count(entry) == 1) exclusive = 1; goto check_exclusive; --- 266,270 ---- /* Is the only swap cache user the cache itself? */ if (vswap_address(entry)) { ! if (vswap_address[SWP_OFFSET(entry)]->swap_count == 1) exclusive = 1; goto check_exclusive; *************** *** 335,339 **** retval = 0; if (vswap_address(entry)) { ! if (virtual_swap_count(entry) == 1) exclusive = 1; goto check_exclusive; --- 342,346 ---- retval = 0; if (vswap_address(entry)) { ! if (vswap_address[SWP_OFFSET(entry)]->swap_count == 1) exclusive = 1; goto check_exclusive; *************** *** 1188,1192 **** if (vswap_address(entry)) ! return virtual_swap_duplicate(entry); type = SWP_TYPE(entry); if (type >= nr_swapfiles) --- 1195,1199 ---- if (vswap_address(entry)) ! goto virtual_swap; type = SWP_TYPE(entry); if (type >= nr_swapfiles) *************** *** 1214,1217 **** --- 1221,1227 ---- printk(KERN_ERR "swap_dup: %s%08lx\n", Bad_file, entry.val); goto out; + + virtual_swap: + return virtual_swap_duplicate(entry); } *************** *** 1231,1235 **** goto bad_entry; if (vswap_address(entry)) ! return virtual_swap_count(entry); type = SWP_TYPE(entry); if (type >= nr_swapfiles) --- 1241,1245 ---- goto bad_entry; if (vswap_address(entry)) ! goto virtual_swap; type = SWP_TYPE(entry); if (type >= nr_swapfiles) *************** *** 1257,1260 **** --- 1267,1272 ---- printk(KERN_ERR "swap_count: %s%08lx\n", Unused_offset, entry.val); goto out; + virtual_swap: + return vswap_address[SWP_OFFSET(entry)]->swap_count; } |
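The swapfile.c changes above all follow one dispatch shape: each swap_* helper first tests vswap_address(entry) and branches to a virtual_swap variant before ever touching swap_info[] or the on-disk swap map. A sketch of that shape; the entry encoding and the reserved type value below are invented for illustration and differ from the real vswap_address() test.

```c
/* Hypothetical encoding: one type value reserved for virtual swap. */
#define VSWAP_TYPE 0x7f

struct swp_entry { unsigned type, offset; };

static int vswap_address(struct swp_entry e) { return e.type == VSWAP_TYPE; }

/* Stub backends returning distinct values so the dispatch is visible. */
static int virtual_swap_duplicate(struct swp_entry e) { (void)e; return 2; }
static int real_swap_duplicate(struct swp_entry e)    { (void)e; return 1; }

static int swap_duplicate(struct swp_entry e)
{
    if (vswap_address(e))
        return virtual_swap_duplicate(e);  /* never touches swap_info[] */
    return real_swap_duplicate(e);
}
```

Note the patch itself flags a leftover problem in swap_info_get(): for vswap entries it still returns a swap_info pointer it calls "bogus", marked FIX IT in the diff.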
From: Rodrigo S. de C. <rc...@us...> - 2002-07-17 20:44:39
|
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv16293/include/linux Modified Files: comp_cache.h Log Message: Features o First implementation of support for SMP systems. There are only two spinlocks used for that, but the goal at the moment is stability, not performance. With our first tests, it is working without corruption on a system with preempt patch, but only swap cache support (and without resizing compressed cache). Let the first races show up :-) As soon as the whole code is working somewhat well, those global locks will be divided into many other to improve concurrency. Bug fixes o fixed compilation error when compressed cache is disabled Cleanups o removed virtual_swap_count() since it wasn't used (swap_count() isn't used either). Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.93 retrieving revision 1.94 diff -C2 -r1.93 -r1.94 *** comp_cache.h 17 Jul 2002 13:00:57 -0000 1.93 --- comp_cache.h 17 Jul 2002 20:44:36 -0000 1.94 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-17 08:48:48 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-17 16:23:16 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 352,355 **** --- 352,357 ---- #define COMP_PAGE_SIZE ((comp_page_order + 1) * PAGE_SIZE) + #define comp_cache_used_space ((num_comp_pages * PAGE_SIZE) - comp_cache_free_space) + #define page_to_comp_page(n) ((n) >> comp_page_order) #define comp_page_to_page(n) ((n) << comp_page_order) *************** *** 357,361 **** extern int comp_page_order; extern unsigned long comp_cache_free_space; ! 
#define comp_cache_used_space ((num_comp_pages * PAGE_SIZE) - comp_cache_free_space) #else static inline void comp_cache_init(void) {}; --- 359,363 ---- extern int comp_page_order; extern unsigned long comp_cache_free_space; ! extern spinlock_t comp_cache_lock; #else static inline void comp_cache_init(void) {}; *************** *** 411,421 **** int virtual_swap_duplicate(swp_entry_t); int virtual_swap_free(unsigned long); - int virtual_swap_count(swp_entry_t); swp_entry_t get_virtual_swap_page(void); ! inline int comp_cache_available_space(void); ! ! inline void set_vswap_allocating(swp_entry_t entry); ! inline void clear_vswap_allocating(swp_entry_t entry); extern void FASTCALL(add_pte_vswap(pte_t *, swp_entry_t)); --- 413,419 ---- int virtual_swap_duplicate(swp_entry_t); int virtual_swap_free(unsigned long); swp_entry_t get_virtual_swap_page(void); ! int comp_cache_available_space(void); extern void FASTCALL(add_pte_vswap(pte_t *, swp_entry_t)); *************** *** 438,441 **** --- 436,441 ---- vswap_address[SWP_OFFSET(entry)]->fault_count--; } + + extern spinlock_t virtual_swap_list; #else *************** *** 446,457 **** static inline int virtual_swap_duplicate(swp_entry_t entry) { return 0; }; static inline int virtual_swap_free(unsigned long offset) { return 0; } - static inline int virtual_swap_count(swp_entry_t entry) { return 0; } static inline swp_entry_t get_virtual_swap_page(void) { return (swp_entry_t) { 0 }; } static inline int comp_cache_available_space(void) { return 0; } - static inline void set_vswap_allocating(swp_entry_t entry) { }; - static inline void clear_vswap_allocating(swp_entry_t entry) { }; - static inline void add_pte_vswap(pte_t * ptep, swp_entry_t entry) {}; static inline void remove_pte_vswap(pte_t * ptep) {}; --- 446,453 ---- *************** *** 459,462 **** --- 455,460 ---- static inline void del_swap_cache_page_vswap(struct page * page) {}; static inline int free_pte_list(struct pte_list * pte_list, unsigned long offset) { return 
0; } + static inline void get_vswap(swp_entry_t entry) {}; + static inline void put_vswap(swp_entry_t entry) {}; #endif |
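The `comp_cache_used_space` macro moved in the comp_cache.h hunk above derives used space from the cache's total size and its free-space counter. A standalone rendering with made-up numbers (PAGE_SIZE and the two counters are stand-ins for the kernel globals):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL          /* stand-in for the arch constant */

static unsigned long num_comp_pages = 64;              /* cache size in pages */
static unsigned long comp_cache_free_space = 16 * 4096UL;

/* Same derivation as the macro in this commit: used = total - free. */
#define comp_cache_used_space \
    ((num_comp_pages * PAGE_SIZE) - comp_cache_free_space)
```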
From: Rodrigo S. de C. <rc...@us...> - 2002-07-17 13:01:02
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv577/mm/comp_cache Modified Files: free.c swapout.c vswap.c Log Message: Bug fixes o Fixed bug in find_free_swp_buffer() that would leak fragment structs if the fragment got completely freed while refilling swap buffers. o Fixed bug in find_free_swp_buffer() that would panic in the case it couldn't free any swap buffers because of gfp_mask. In this case, simply return neither decompressing nor writing the dirty fragment. o Fixed bug in comp_cache_swp_duplicate() (now known as virtual_swap_duplicate()) that would cause a kernel BUG if duplicating a freed entry. This scenario may happen in the swapin path code. Cleanups o Renamed comp_cache_swp_{duplicate,free,count} -> virtual_swap_* o Removed useless "nrpages" parameter Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.37 retrieving revision 1.38 diff -C2 -r1.37 -r1.38 *** free.c 16 Jul 2002 21:58:08 -0000 1.37 --- free.c 17 Jul 2002 13:00:58 -0000 1.38 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-16 18:35:21 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-17 08:49:59 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 310,314 **** /* let's proceed to fix swap counter for either entries */ for(; num_freed_ptes > 0; --num_freed_ptes) { ! comp_cache_swp_free(old_entry); swap_duplicate(entry); } --- 310,314 ---- /* let's proceed to fix swap counter for either entries */ for(; num_freed_ptes > 0; --num_freed_ptes) { ! 
virtual_swap_free(vswap->offset); swap_duplicate(entry); } Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.61 retrieving revision 1.62 diff -C2 -r1.61 -r1.62 *** swapout.c 16 Jul 2002 21:58:08 -0000 1.61 --- swapout.c 17 Jul 2002 13:00:58 -0000 1.62 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-16 16:35:08 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-17 09:42:34 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 24,28 **** static int ! refill_swp_buffer(unsigned int gfp_mask, int nrpages, int priority) { struct list_head * swp_buffer_lh; --- 24,28 ---- static int ! refill_swp_buffer(unsigned int gfp_mask, int priority) { struct list_head * swp_buffer_lh; *************** *** 32,40 **** int wait, maxscan; ! maxscan = max(NUM_SWP_BUFFERS/priority, (int) (nrpages * 1.5)); wait = 0; try_again: ! while(--maxscan >= 0 && nrpages && (swp_buffer_lh = swp_used_buffer_head.prev) != &swp_used_buffer_head) { swp_buffer = list_entry(swp_buffer_lh, struct swp_buffer, list); buffer_page = swp_buffer->page; --- 32,40 ---- int wait, maxscan; ! maxscan = NUM_SWP_BUFFERS/priority; wait = 0; try_again: ! while(--maxscan >= 0 && (swp_buffer_lh = swp_used_buffer_head.prev) != &swp_used_buffer_head) { swp_buffer = list_entry(swp_buffer_lh, struct swp_buffer, list); buffer_page = swp_buffer->page; *************** *** 98,109 **** UnlockPage(buffer_page); ! --nrpages; } /* couldn't free any swap buffer? so let's start waiting for * the lock from the locked pages */ ! if (!wait && nrpages > 0) { wait = 1; ! maxscan = max(NUM_SWP_BUFFERS >> 4, (int) (nrpages * 4)); if (unlikely(current->need_resched)) { __set_current_state(TASK_RUNNING); --- 98,109 ---- UnlockPage(buffer_page); ! return 1; } /* couldn't free any swap buffer? 
so let's start waiting for * the lock from the locked pages */ ! if (!wait) { wait = 1; ! maxscan = NUM_SWP_BUFFERS >> 3; if (unlikely(current->need_resched)) { __set_current_state(TASK_RUNNING); *************** *** 112,117 **** goto try_again; } ! ! return (nrpages > 0?0:1); } --- 112,116 ---- goto try_again; } ! return 0; } *************** *** 140,159 **** if (!list_empty(&swp_free_buffer_head)) ! goto get_a_page; while (list_empty(&swp_free_buffer_head) && priority) ! refill_swp_buffer(gfp_mask, 1, priority--); if (list_empty(&swp_free_buffer_head)) ! panic("couldn't free a swap buffer\n"); ! /* has the fragment been totally (!IO) or partially ! * freed (Freed)? no need to swap it out any longer */ ! if (!CompFragmentIO(fragment) || CompFragmentFreed(fragment)) { ! CompFragmentClearIO(fragment); ! return NULL; } ! get_a_page: swp_buffer = list_entry(swp_buffer_lh = swp_free_buffer_head.prev, struct swp_buffer, list); --- 139,163 ---- if (!list_empty(&swp_free_buffer_head)) ! goto get_free_buffer; while (list_empty(&swp_free_buffer_head) && priority) ! refill_swp_buffer(gfp_mask, priority--); + /* Failed to get a free swap buffer. Probably gfp_mask does + * not allow buffer sync in refill_swp_buffer() function. */ if (list_empty(&swp_free_buffer_head)) ! goto failed; ! /* Fragment totally freed. Free its struct to avoid leakage. */ ! if (!CompFragmentIO(fragment)) { ! kmem_cache_free(fragment_cachep, (fragment)); ! goto failed; } ! /* Fragment partially freed (to be merged). Nothing to do. */ ! if (CompFragmentFreed(fragment)) ! goto failed; ! ! 
get_free_buffer: swp_buffer = list_entry(swp_buffer_lh = swp_free_buffer_head.prev, struct swp_buffer, list); *************** *** 174,177 **** --- 178,185 ---- return (swp_buffer); + + failed: + CompFragmentClearIO(fragment); + return NULL; } Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.40 retrieving revision 1.41 diff -C2 -r1.40 -r1.41 *** vswap.c 16 Jul 2002 18:41:55 -0000 1.40 --- vswap.c 17 Jul 2002 13:00:58 -0000 1.41 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-16 14:57:52 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-17 08:52:56 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 238,245 **** */ int ! comp_cache_swp_duplicate(swp_entry_t entry) { unsigned long offset = SWP_OFFSET(entry); - int ret = 0; if (!vswap_address(entry)) --- 238,244 ---- */ int ! virtual_swap_duplicate(swp_entry_t entry) { unsigned long offset = SWP_OFFSET(entry); if (!vswap_address(entry)) *************** *** 248,265 **** BUG(); if (!vswap_address[offset]->swap_count) ! BUG(); if (offset >= vswap_current_num_entries) ! goto out; ! vswap_address[offset]->swap_count++; ! ret = 1; ! out: ! return ret; } /** ! * comp_cache_swp_free - swap_free for virtual swap addresses. ! * @entry: the virtual swap entry which will have its count ! * decremented and possibly the vswap entry freed. * * This function will decrement the vswap entry counter. If we have --- 247,261 ---- BUG(); if (!vswap_address[offset]->swap_count) ! return 0; if (offset >= vswap_current_num_entries) ! return 0; vswap_address[offset]->swap_count++; ! return 1; } /** ! * virtual_swap_free - swap_free for virtual swap addresses. @entry: ! * the virtual swap entry which will have its count decremented and ! * possibly the vswap entry freed. 
* * This function will decrement the vswap entry counter. If we have *************** *** 272,285 **** */ int ! comp_cache_swp_free(swp_entry_t entry) { - unsigned long offset = SWP_OFFSET(entry); unsigned int swap_count; struct comp_cache_fragment * fragment; struct vswap_address * vswap; - if (!vswap_address(entry)) - BUG(); - if (offset >= vswap_current_num_entries) BUG(); --- 268,277 ---- */ int ! virtual_swap_free(unsigned long offset) { unsigned int swap_count; struct comp_cache_fragment * fragment; struct vswap_address * vswap; if (offset >= vswap_current_num_entries) BUG(); *************** *** 345,349 **** /** ! * comp_cache_swp_count - swap_count for virtual swap addresses. * @entry: virtual swap entry that will be returned its counter. * --- 337,341 ---- /** ! * virtual_swap_count - swap_count for virtual swap addresses. * @entry: virtual swap entry that will be returned its counter. * *************** *** 351,364 **** */ int ! comp_cache_swp_count(swp_entry_t entry) { unsigned long offset = SWP_OFFSET(entry); - - if (!vswap_address(entry)) - BUG(); - if (!vswap_address[offset]->swap_count) BUG(); - return (vswap_address[offset]->swap_count); } --- 343,351 ---- */ int ! virtual_swap_count(swp_entry_t entry) { unsigned long offset = SWP_OFFSET(entry); if (!vswap_address[offset]->swap_count) BUG(); return (vswap_address[offset]->swap_count); } |
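The vswap.c hunk above makes virtual_swap_duplicate() return 0 for an already-freed entry instead of calling BUG(), since the swapin path can legitimately hand it a freed entry. A simplified sketch of that behavior (a flat struct table replaces the kernel's pointer array; names are illustrative):

```c
#include <assert.h>

struct vswap_entry { unsigned int swap_count; };

#define VSWAP_ENTRIES 8
static struct vswap_entry vswap_table[VSWAP_ENTRIES];
static unsigned long vswap_current_num_entries = VSWAP_ENTRIES;

/* Duplicate a virtual swap entry: bump its reference count, but fail
 * gracefully (return 0) if the offset is out of range or the entry
 * was freed underneath us, instead of treating that as a kernel bug. */
static int virtual_swap_duplicate(unsigned long offset)
{
    if (offset >= vswap_current_num_entries)
        return 0;
    if (!vswap_table[offset].swap_count)
        return 0;             /* freed entry: possible in the swapin path */
    vswap_table[offset].swap_count++;
    return 1;
}
```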
From: Rodrigo S. de C. <rc...@us...> - 2002-07-17 13:01:02
|
Update of /cvsroot/linuxcompressed/linux/mm In directory usw-pr-cvs1:/tmp/cvs-serv577/mm Modified Files: swapfile.c Log Message: Bug fixes o Fixed bug in find_free_swp_buffer() that would leak fragment structs if the fragment got completely freed while refilling swap buffers. o Fixed bug in find_free_swp_buffer() that would panic in the case it couldn't free any swap buffers because of gfp_mask. In this case, simply return neither decompressing nor writing the dirty fragment. o Fixed bug in comp_cache_swp_duplicate() (now known as virtual_swap_duplicate()) that would cause a kernel BUG if duplicating a freed entry. This scenario may happen in the swapin path code. Cleanups o Renamed comp_cache_swp_{duplicate,free,count} -> virtual_swap_* o Removed useless "nrpages" parameter Index: swapfile.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swapfile.c,v retrieving revision 1.31 retrieving revision 1.32 diff -C2 -r1.31 -r1.32 *** swapfile.c 16 Jul 2002 18:41:55 -0000 1.31 --- swapfile.c 17 Jul 2002 13:00:57 -0000 1.32 *************** *** 208,212 **** if (vswap_info_struct(p)) ! return comp_cache_swp_free(SWP_ENTRY(COMP_CACHE_SWP_TYPE, offset)); count = p->swap_map[offset]; --- 208,212 ---- if (vswap_info_struct(p)) ! return virtual_swap_free(offset); count = p->swap_map[offset]; *************** *** 259,270 **** /* Is the only swap cache user the cache itself? */ if (vswap_address(entry)) { ! if (comp_cache_swp_count(entry) == 1) exclusive = 1; } ! else { ! if (p->swap_map[SWP_OFFSET(entry)] == 1) ! exclusive = 1; ! } ! if (exclusive) { /* Recheck the page count with the pagecache lock held.. */ --- 259,269 ---- /* Is the only swap cache user the cache itself? */ if (vswap_address(entry)) { ! if (virtual_swap_count(entry) == 1) exclusive = 1; + goto check_exclusive; } ! if (p->swap_map[SWP_OFFSET(entry)] == 1) ! exclusive = 1; ! 
check_exclusive: if (exclusive) { /* Recheck the page count with the pagecache lock held.. */ *************** *** 336,347 **** retval = 0; if (vswap_address(entry)) { ! if (comp_cache_swp_count(entry) == 1) exclusive = 1; } ! else { ! if (p->swap_map[SWP_OFFSET(entry)] == 1) ! exclusive = 1; ! } ! if (exclusive) { /* Recheck the page count with the pagecache lock held.. */ --- 335,345 ---- retval = 0; if (vswap_address(entry)) { ! if (virtual_swap_count(entry) == 1) exclusive = 1; + goto check_exclusive; } ! if (p->swap_map[SWP_OFFSET(entry)] == 1) ! exclusive = 1; ! check_exclusive: if (exclusive) { /* Recheck the page count with the pagecache lock held.. */ *************** *** 1190,1194 **** if (vswap_address(entry)) ! return comp_cache_swp_duplicate(entry); type = SWP_TYPE(entry); if (type >= nr_swapfiles) --- 1188,1192 ---- if (vswap_address(entry)) ! return virtual_swap_duplicate(entry); type = SWP_TYPE(entry); if (type >= nr_swapfiles) *************** *** 1233,1237 **** goto bad_entry; if (vswap_address(entry)) ! return comp_cache_swp_count(entry); type = SWP_TYPE(entry); if (type >= nr_swapfiles) --- 1231,1235 ---- goto bad_entry; if (vswap_address(entry)) ! return virtual_swap_count(entry); type = SWP_TYPE(entry); if (type >= nr_swapfiles) |
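The restructuring above replaces the if/else with a shared `check_exclusive` label: the virtual-swap branch runs its own count test and jumps over the swap_map test, and both paths meet at the common recheck. Extracted as a tiny function (the parameters are stand-ins for the kernel state):

```c
#include <assert.h>

/* Is the only user of this swap entry the cache itself?  Mirrors the
 * goto-based control flow of the patched exclusivity checks. */
static int entry_exclusive(int is_virtual, unsigned int vswap_count,
                           unsigned int swap_map_count)
{
    int exclusive = 0;

    if (is_virtual) {
        if (vswap_count == 1)
            exclusive = 1;
        goto check_exclusive;     /* must not consult swap_map */
    }
    if (swap_map_count == 1)
        exclusive = 1;
check_exclusive:
    return exclusive;
}
```

Note the case where the virtual count is 2 but swap_map would say 1: the goto ensures the swap_map test is skipped, which is what the original if/else also guaranteed.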
From: Rodrigo S. de C. <rc...@us...> - 2002-07-17 13:01:01
|
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv577/include/linux Modified Files: comp_cache.h Log Message: Bug fixes o Fixed bug in find_free_swp_buffer() that would leak fragment structs if the fragment got completely freed while refilling swap buffers. o Fixed bug in find_free_swp_buffer() that would panic in the case it couldn't free any swap buffers because of gfp_mask. In this case, simply return neither decompressing nor writing the dirty fragment. o Fixed bug in comp_cache_swp_duplicate() (now known as virtual_swap_duplicate()) that would cause a kernel BUG if duplicating a freed entry. This scenario may happen in the swapin path code. Cleanups o Renamed comp_cache_swp_{duplicate,free,count} -> virtual_swap_* o Removed useless "nrpages" parameter Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.92 retrieving revision 1.93 diff -C2 -r1.92 -r1.93 *** comp_cache.h 16 Jul 2002 21:58:08 -0000 1.92 --- comp_cache.h 17 Jul 2002 13:00:57 -0000 1.93 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-16 16:34:27 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-17 08:48:48 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 409,415 **** #define reserved(offset) (vswap_address[offset]->fragment == VSWAP_RESERVED) ! int comp_cache_swp_duplicate(swp_entry_t); ! int comp_cache_swp_free(swp_entry_t); ! int comp_cache_swp_count(swp_entry_t); swp_entry_t get_virtual_swap_page(void); --- 409,415 ---- #define reserved(offset) (vswap_address[offset]->fragment == VSWAP_RESERVED) ! int virtual_swap_duplicate(swp_entry_t); ! int virtual_swap_free(unsigned long); ! 
int virtual_swap_count(swp_entry_t); swp_entry_t get_virtual_swap_page(void); *************** *** 444,450 **** #define vswap_address(entry) (0) ! static inline int comp_cache_swp_duplicate(swp_entry_t entry) { return 0; }; ! static inline int comp_cache_swp_free(swp_entry_t entry) { return 0; } ! static inline int comp_cache_swp_count(swp_entry_t entry) { return 0; } static inline swp_entry_t get_virtual_swap_page(void) { return (swp_entry_t) { 0 }; } --- 444,450 ---- #define vswap_address(entry) (0) ! static inline int virtual_swap_duplicate(swp_entry_t entry) { return 0; }; ! static inline int virtual_swap_free(unsigned long offset) { return 0; } ! static inline int virtual_swap_count(swp_entry_t entry) { return 0; } static inline swp_entry_t get_virtual_swap_page(void) { return (swp_entry_t) { 0 }; } |
From: Rodrigo S. de C. <rc...@us...> - 2002-07-16 21:58:12
|
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv2686/include/linux Modified Files: comp_cache.h Log Message: Bug fixes o Fixed bug in compact_fragments() which could corrupt the fragments list of a comp cache and also corrupt the comp page data when compacting fragments. Critical bug. Cleanups o Cleaned up comp_cache_free_locked() code o Removed alloc parameter from get_comp_cache_page() (no longer used) Other o /proc/comp_cache_hist now shows up to 6 fragments in a comp page Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.91 retrieving revision 1.92 diff -C2 -r1.91 -r1.92 *** comp_cache.h 16 Jul 2002 18:41:54 -0000 1.91 --- comp_cache.h 16 Jul 2002 21:58:08 -0000 1.92 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-16 14:49:02 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-16 16:34:27 rcastro> * * Linux Virtual Memory Compressed Cache
From: Rodrigo S. de C. <rc...@us...> - 2002-07-16 21:58:11
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv2686/mm/comp_cache Modified Files: aux.c free.c main.c proc.c swapout.c Log Message: Bug fixes o Fixed bug in compact_fragments() which could corrupt the fragments list of a comp cache and also corrupt the comp page data when compacting fragments. Critical bug. Cleanups o Cleanup in comp_cache_free_locked() code o Removed alloc parameter from get_comp_cache_page() (not any longer used) Other o /proc/comp_cache_hist shows up to 6 fragments in a comp page Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.38 retrieving revision 1.39 diff -C2 -r1.38 -r1.39 *** aux.c 16 Jul 2002 18:41:55 -0000 1.38 --- aux.c 16 Jul 2002 21:58:08 -0000 1.39 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-16 14:53:55 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-16 16:33:09 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 232,237 **** break; default: ! if (total_fragments > 4) ! num_fragments[5]++; else num_fragments[total_fragments]++; --- 232,237 ---- break; default: ! if (total_fragments > 6) ! num_fragments[7]++; else num_fragments[total_fragments]++; Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.36 retrieving revision 1.37 diff -C2 -r1.36 -r1.37 *** free.c 16 Jul 2002 18:41:55 -0000 1.36 --- free.c 16 Jul 2002 21:58:08 -0000 1.37 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-16 14:31:04 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! 
* Time-stamp: <2002-07-16 18:35:21 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 75,84 **** num_fragments--; comp_cache_free_space += fragment->compressed_size; - - /*** - * Add the fragment compressed size only to total_free_space - * field since fragments that will be standing to be merged - * cannot be added to free_space field at this moment - */ fragment->comp_page->total_free_space += fragment->compressed_size; } --- 75,78 ---- *************** *** 94,98 **** /* remove all the freed fragments */ ! for_each_fragment(fragment_lh, comp_page) { fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); --- 88,92 ---- /* remove all the freed fragments */ ! for_each_fragment_safe(fragment_lh, tmp_lh, comp_page) { fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); *************** *** 139,142 **** --- 133,140 ---- list_add(fragment_lh, &(comp_page->fragments)); } + + + if (comp_page->free_space != comp_page->total_free_space) + BUG(); } *************** *** 166,174 **** previous_fragment = list_entry(fragment->list.prev, struct comp_cache_fragment, list); /* simple case - no free space * 1 - one not compressed page * 2 - sum of all fragments = COMP_PAGE_SIZE */ if (!comp_page->free_space) { - remove_fragment_from_comp_cache(fragment); comp_page->free_offset = fragment->offset; goto remove; --- 164,173 ---- previous_fragment = list_entry(fragment->list.prev, struct comp_cache_fragment, list); + remove_fragment_from_comp_cache(fragment); + /* simple case - no free space * 1 - one not compressed page * 2 - sum of all fragments = COMP_PAGE_SIZE */ if (!comp_page->free_space) { comp_page->free_offset = fragment->offset; goto remove; *************** *** 177,182 **** /* this fragment has the free space as its left neighbour */ if (comp_page->free_offset + comp_page->free_space == fragment->offset) { - remove_fragment_from_comp_cache(fragment); - merge_right_neighbour(fragment, next_fragment); goto remove; --- 176,179 ---- 
*************** *** 185,190 **** /* this fragment has the free space as its right neighbour */ if (fragment->offset + fragment->compressed_size == comp_page->free_offset) { - remove_fragment_from_comp_cache(fragment); - merge_left_neighbour(fragment, previous_fragment); comp_page->free_offset = fragment->offset; --- 182,185 ---- *************** *** 193,198 **** /* we have used fragment(s) between the free space and the one we want to free */ - remove_fragment_from_comp_cache(fragment); - if (CompFragmentTestandSetFreed(fragment)) BUG(); --- 188,191 ---- Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.55 retrieving revision 1.56 diff -C2 -r1.55 -r1.56 *** main.c 16 Jul 2002 18:41:55 -0000 1.55 --- main.c 16 Jul 2002 21:58:08 -0000 1.56 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-07-16 14:55:38 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-07-16 16:35:35 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 47,51 **** extern unsigned long num_physpages; ! extern struct comp_cache_page * get_comp_cache_page(struct page *, unsigned short, struct comp_cache_fragment **, int, unsigned int, int); inline void --- 47,51 ---- extern unsigned long num_physpages; ! extern struct comp_cache_page * get_comp_cache_page(struct page *, unsigned short, struct comp_cache_fragment **, unsigned int, int); inline void *************** *** 120,124 **** comp_size = compress(current_compressed_page = page, buffer_compressed = (unsigned long *) &buffer_compressed1, &algorithm, dirty); ! 
comp_page = get_comp_cache_page(page, comp_size, &fragment, 1, gfp_mask, priority); /* if comp_page == NULL, get_comp_cache_page() gave up --- 120,124 ---- comp_size = compress(current_compressed_page = page, buffer_compressed = (unsigned long *) &buffer_compressed1, &algorithm, dirty); ! comp_page = get_comp_cache_page(page, comp_size, &fragment, gfp_mask, priority); /* if comp_page == NULL, get_comp_cache_page() gave up Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.20 retrieving revision 1.21 diff -C2 -r1.20 -r1.21 *** proc.c 16 Jul 2002 18:41:55 -0000 1.20 --- proc.c 16 Jul 2002 21:58:08 -0000 1.21 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-07-16 14:55:14 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-07-16 16:32:43 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 398,403 **** #define HIST_PRINTK \ num_fragments[0], num_fragments[1], num_fragments[2], num_fragments[3], \ ! num_fragments[4], num_fragments[5] ! #define HIST_COUNT 6 int --- 398,403 ---- #define HIST_PRINTK \ num_fragments[0], num_fragments[1], num_fragments[2], num_fragments[3], \ ! num_fragments[4], num_fragments[5], num_fragments[6], num_fragments[7] ! #define HIST_COUNT 8 int *************** *** 416,420 **** length = sprintf(page, "compressed cache - free space histogram (free space x number of fragments)\n" ! " total 0f 1f 2f 3f 4f more\n"); memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); --- 416,420 ---- length = sprintf(page, "compressed cache - free space histogram (free space x number of fragments)\n" ! 
" total 0f 1f 2f 3f 4f 5f 6f more\n"); memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); *************** *** 422,426 **** total1 = free_space_count(0, num_fragments); length += sprintf(page + length, ! " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu\n", 0, total1, --- 422,426 ---- total1 = free_space_count(0, num_fragments); length += sprintf(page + length, ! " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", 0, total1, *************** *** 435,439 **** length += sprintf(page + length, ! "%4d - %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu\n", (i-1)*free_space_interval+1, (i+1)*free_space_interval<COMP_PAGE_SIZE?(i+1)*free_space_interval:(int)COMP_PAGE_SIZE, total1 + total2, HIST_PRINTK); --- 435,439 ---- length += sprintf(page + length, ! "%4d - %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", (i-1)*free_space_interval+1, (i+1)*free_space_interval<COMP_PAGE_SIZE?(i+1)*free_space_interval:(int)COMP_PAGE_SIZE, total1 + total2, HIST_PRINTK); Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.60 retrieving revision 1.61 diff -C2 -r1.60 -r1.61 *** swapout.c 16 Jul 2002 18:41:55 -0000 1.60 --- swapout.c 16 Jul 2002 21:58:08 -0000 1.61 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-16 14:56:05 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-16 16:35:08 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 341,348 **** * @compressed_size: size of swap cache page in compressed state * - * @alloc: do we allocate in case the comp_page->page == NULL? Usually - * yes, but in case we are going to store a page from page cache with - * buffers, that's not needed. 
- * * @fragment: variable that will store the fragment to store the * compressed data --- 341,344 ---- *************** *** 350,354 **** * @gfp_mask: we need to know if we can perform IO */ struct comp_cache_page * ! get_comp_cache_page(struct page * page, unsigned short compressed_size, struct comp_cache_fragment ** fragment_out, int alloc, unsigned int gfp_mask, int priority) { struct comp_cache_page * comp_page = NULL, ** hash_table; --- 346,350 ---- * @gfp_mask: we need to know if we can perform IO */ struct comp_cache_page * ! get_comp_cache_page(struct page * page, unsigned short compressed_size, struct comp_cache_fragment ** fragment_out, unsigned int gfp_mask, int priority) { struct comp_cache_page * comp_page = NULL, ** hash_table; *************** *** 384,392 **** if (comp_page->free_space != COMP_PAGE_SIZE) BUG(); ! if (alloc) ! goto alloc_new_page; ! ! remove_comp_page_from_hash_table(comp_page); ! goto check_references; } --- 380,384 ---- if (comp_page->free_space != COMP_PAGE_SIZE) BUG(); ! goto alloc_new_page; } |
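The aux.c and proc.c hunks above widen the /proc/comp_cache_hist histogram so comp pages holding up to 6 fragments are counted individually, with bucket 7 as overflow. The bucketing logic, isolated (the simple switch-free form; the kernel handles cases 0 and 1 separately for other bookkeeping):

```c
#include <assert.h>

#define HIST_COUNT 8                  /* buckets for 0..6 fragments + overflow */
static unsigned long num_fragments[HIST_COUNT];

/* Account one comp page by how many fragments it currently holds;
 * pages with more than 6 fragments share the final bucket. */
static void account_comp_page(int total_fragments)
{
    if (total_fragments > 6)
        num_fragments[7]++;
    else
        num_fragments[total_fragments]++;
}
```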
From: Rodrigo S. de C. <rc...@us...> - 2002-07-16 18:41:59
|
Update of /cvsroot/linuxcompressed/linux/arch/i386 In directory usw-pr-cvs1:/tmp/cvs-serv28423/arch/i386 Modified Files: config.in Log Message: Cleanups o Removed support for pages with buffers o Cleaned up virtual swap code (mainly when freeing an entry or allocating a page to service a page fault) Other o Added help for CONFIG_COMP_DOUBLE_PAGE to Configure.help Index: config.in =================================================================== RCS file: /cvsroot/linuxcompressed/linux/arch/i386/config.in,v retrieving revision 1.20 retrieving revision 1.21 diff -C2 -r1.20 -r1.21 *** config.in 15 Jul 2002 20:52:22 -0000 1.20 --- config.in 16 Jul 2002 18:41:54 -0000 1.21 *************** *** 212,216 **** bool ' Support for Page Cache compression' CONFIG_COMP_PAGE_CACHE bool ' Resize Compressed Cache On Demand' CONFIG_COMP_DEMAND_RESIZE ! dep_bool ' Double Page Size' CONFIG_COMP_DOUBLE_PAGE $CONFIG_COMP_DEMAND_RESIZE fi fi --- 212,216 ---- bool ' Support for Page Cache compression' CONFIG_COMP_PAGE_CACHE bool ' Resize Compressed Cache On Demand' CONFIG_COMP_DEMAND_RESIZE ! bool ' Double Page Size' CONFIG_COMP_DOUBLE_PAGE fi fi
From: Rodrigo S. de C. <rc...@us...> - 2002-07-16 18:41:58
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv28423/mm/comp_cache Modified Files: adaptivity.c aux.c free.c main.c proc.c swapin.c swapout.c vswap.c Log Message: Cleanups o remove support for pages with buffers o virtual swap code (mainly when freeing an entry or allocating a page to service a page fault) Other o Added help for CONFIG_COMP_DOUBLE_PAGE into Configure.help Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.35 retrieving revision 1.36 diff -C2 -r1.35 -r1.36 *** adaptivity.c 15 Jul 2002 20:52:23 -0000 1.35 --- adaptivity.c 16 Jul 2002 18:41:55 -0000 1.36 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-07-15 14:42:43 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-07-16 14:03:17 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 153,171 **** for (index = vswap_last_used; index >= vswap_new_num_entries; index--) { ! /* either this entry has already been freed or hasn't ! * been sucessfully allocated */ ! if (!vswap_address[index]) continue; - /* we are shrinking this vswap table from a function - * which is freeing a vswap entry, so forget this - * entry. The same for the case this entry is in the - * middle of a swapin process (allocating a new - * page) */ - if (freeing(index) || allocating(index)) - continue; - /* unused entry? let's only free it */ ! if (!vswap_address[index]->count) { list_del(&(vswap_address[index]->list)); nr_free_vswap--; --- 153,165 ---- for (index = vswap_last_used; index >= vswap_new_num_entries; index--) { ! /* this vswap entry has already been freed, has been ! * sucessfully allocated or has any page fault being ! * serviced, so we are unable to move it in the vswap ! * table */ ! 
if (!vswap_address[index] || vswap_address[index]->fault_count) continue; /* unused entry? let's only free it */ ! if (!vswap_address[index]->swap_count) { list_del(&(vswap_address[index]->list)); nr_free_vswap--; *************** *** 207,217 **** break; ! if (freeing(new_index)) ! goto next; ! ! if (!vswap_address[new_index]->count) break; - next: new_index--; } --- 201,207 ---- break; ! if (!vswap_address[new_index]->swap_count) break; new_index--; } *************** *** 273,279 **** continue; ! if (!vswap_address[vswap_last_used]->count ! && vswap_last_used >= vswap_new_num_entries ! && !freeing(vswap_last_used)) BUG(); --- 263,268 ---- continue; ! if (!vswap_address[vswap_last_used]->swap_count ! && vswap_last_used >= vswap_new_num_entries) BUG(); Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.37 retrieving revision 1.38 diff -C2 -r1.37 -r1.38 *** aux.c 15 Jul 2002 20:52:23 -0000 1.37 --- aux.c 16 Jul 2002 18:41:55 -0000 1.38 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-15 15:14:01 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! 
* Time-stamp: <2002-07-16 14:53:55 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 210,221 **** #if 0 if (comp_page->page) { - if (PageMappedCompCache(comp_page->page)) { - if (index != 0 && index != free_space_hash_size - 1) - BUG(); - if (!comp_page->page->mapping) - BUG(); - if (page_count(comp_page->page) != 2) - BUG(); - } if (!comp_page->page->buffers && page_count(comp_page->page) != 1) BUG(); --- 210,213 ---- *************** *** 234,244 **** { case 0: - if (!comp_page->page) - num_fragments[7]++; num_fragments[0]++; break; case 1: - if (comp_page->page && PageMappedCompCache(comp_page->page)) - num_fragments[6]++; num_fragments[1]++; break; --- 226,232 ---- Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.35 retrieving revision 1.36 diff -C2 -r1.35 -r1.36 *** free.c 15 Jul 2002 20:52:23 -0000 1.35 --- free.c 16 Jul 2002 18:41:55 -0000 1.36 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-15 14:56:06 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-16 14:31:04 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 309,323 **** goto backout; /* remove all those ptes from vswap struct */ num_freed_ptes = free_pte_list(vswap->pte_list, vswap->offset); - vswap->count -= num_freed_ptes; ! /* let's proceed to fix swap counter for the new entry */ ! for(; num_freed_ptes > 0; num_freed_ptes--) swap_duplicate(entry); ! ! remove_fragment_vswap(fragment); ! remove_fragment_from_hash_table(fragment); ! /* let's fix swap cache page address (if any) */ if (vswap->swap_cache_page) { --- 309,324 ---- goto backout; + remove_fragment_vswap(fragment); + remove_fragment_from_hash_table(fragment); + /* remove all those ptes from vswap struct */ num_freed_ptes = free_pte_list(vswap->pte_list, vswap->offset); ! 
/* let's proceed to fix swap counter for either entries */ ! for(; num_freed_ptes > 0; --num_freed_ptes) { ! comp_cache_swp_free(old_entry); swap_duplicate(entry); ! } ! /* let's fix swap cache page address (if any) */ if (vswap->swap_cache_page) { *************** *** 328,332 **** page_cache_get(swap_cache_page); - vswap->count++; delete_from_swap_cache(swap_cache_page); --- 329,332 ---- *************** *** 334,356 **** BUG(); - vswap->count--; page_cache_release(swap_cache_page); UnlockPage(swap_cache_page); } ! if (vswap->count) ! BUG(); ! ! vswap->count = 1; ! ! if (comp_cache_swp_free(old_entry)) BUG(); fragment->index = entry.val; add_fragment_to_lru_queue(fragment); - add_fragment_to_hash_table(fragment); UnlockPage(fragment->comp_page->page); - return 1; --- 334,349 ---- BUG(); page_cache_release(swap_cache_page); UnlockPage(swap_cache_page); } ! if (vswap->swap_count) BUG(); fragment->index = entry.val; + add_fragment_to_lru_queue(fragment); add_fragment_to_hash_table(fragment); UnlockPage(fragment->comp_page->page); return 1; Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.54 retrieving revision 1.55 diff -C2 -r1.54 -r1.55 *** main.c 15 Jul 2002 20:52:24 -0000 1.54 --- main.c 16 Jul 2002 18:41:55 -0000 1.55 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-07-15 13:45:54 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! 
* Time-stamp: <2002-07-16 14:55:38 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 71,77 **** } - if (PageMappedCompCache(page)) - BUG(); - if (page->buffers) BUG(); --- 71,74 ---- *************** *** 89,96 **** compress_clean_page(struct page * page, unsigned int gfp_mask, int priority) { - /* that should not happen */ - if (PageMappedCompCache(page)) - BUG(); - if (page->buffers) BUG(); --- 86,89 ---- *************** *** 170,294 **** return 1; } - - #ifdef CONFIG_COMP_PAGE_CACHE - void - steal_page_from_comp_cache(struct page * page, struct page * new_page) - { - struct comp_cache_fragment * fragment; - struct comp_cache_page * comp_page; - struct page * old_page; - int locked; - - if (!PageMappedCompCache(page)) - return; - - if (find_comp_page(page->mapping, page->index, &fragment)) - BUG(); - - if (CompFragmentIO(fragment)) - BUG(); - comp_page = fragment->comp_page; - old_page = comp_page->page; - - if (old_page != page) - BUG(); - locked = !TryLockPage(old_page); - - set_comp_page(comp_page, new_page); - - if (new_page) { - /* the reference got in the caller will be drop */ - page_cache_get(new_page); - if (page_count(new_page) != 3) - BUG(); - } - - #if 0 - if (page != new_page) - lru_cache_add(page); - #endif - - comp_cache_free(fragment); - PageClearMappedCompCache(old_page); - - if (locked) - UnlockPage(old_page); - } - - #ifndef CONFIG_COMP_DEMAND_RESIZE - int - comp_cache_try_to_release_page(struct page ** page, int gfp_mask, int priority) - { - struct comp_cache_fragment * fragment; - struct comp_cache_page * comp_page; - unsigned short comp_size; - struct page * old_page; - int ret = 0; - - if (PageCompCache(*page)) - BUG(); - - /* if mapped comp cache pages aren't removed from LRU queues, - * then here we should return 1, otherwise BUG() */ - if (PageMappedCompCache(*page)) - return 1; - - if (page_count(*page) != 3) - return try_to_release_page(*page, gfp_mask); - - if (PageSwapCache(*page)) { - swp_entry_t entry = 
(swp_entry_t) { (*page)->index }; - - if (vswap_address(entry)) - BUG(); - } - - /* could we free the buffer without IO, so why store in - * compressed cache with the buffers? it can be ocasionally - * stored later as a clean page */ - if (try_to_release_page(*page, 0)) - return 1; - - /* it's not mapped by any process, therefore we can trade this - * page with a page reserved for compressed cache use */ - comp_size = PAGE_SIZE; - comp_page = get_comp_cache_page(*page, comp_size, &fragment, 0, gfp_mask, priority); - - if (!comp_page) - return ret; - - /* let's swap the pages */ - old_page = comp_page->page; - set_comp_page(comp_page, (*page)); - - PageSetMappedCompCache(comp_page->page); - - /* whoops, no page to set to (*page), so it's time to leave */ - if (!old_page) - goto out; - - /* no need to call page_cache_get() since we have an extra - * reference got in shrink_cache() that won't be released - * (recall that we are setting *page to a new page) */ - *page = old_page; - if (page_count(old_page) != 1) - BUG(); - if (old_page->mapping) - BUG(); - if (!PageLocked(old_page)) - BUG(); - - UnlockPage(comp_page->page); - page_cache_release(comp_page->page); - ret = 1; - out: - #if 0 - lru_cache_del(comp_page->page); - #endif - comp_cache_update_page_stats(comp_page->page, 0); - return ret; - } - #endif - #endif extern void __init comp_cache_hash_init(void); --- 163,166 ---- Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.19 retrieving revision 1.20 diff -C2 -r1.19 -r1.20 *** proc.c 15 Jul 2002 20:52:24 -0000 1.19 --- proc.c 16 Jul 2002 18:41:55 -0000 1.20 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-07-15 15:13:14 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! 
* Time-stamp: <2002-07-16 14:55:14 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 398,403 **** #define HIST_PRINTK \ num_fragments[0], num_fragments[1], num_fragments[2], num_fragments[3], \ ! num_fragments[4], num_fragments[5], num_fragments[6], num_fragments[7] ! #define HIST_COUNT 8 int --- 398,403 ---- #define HIST_PRINTK \ num_fragments[0], num_fragments[1], num_fragments[2], num_fragments[3], \ ! num_fragments[4], num_fragments[5] ! #define HIST_COUNT 6 int *************** *** 416,420 **** length = sprintf(page, "compressed cache - free space histogram (free space x number of fragments)\n" ! " total 0f 1f 2f 3f 4f more buffers nopg\n"); memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); --- 416,420 ---- length = sprintf(page, "compressed cache - free space histogram (free space x number of fragments)\n" ! " total 0f 1f 2f 3f 4f more\n"); memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); *************** *** 422,426 **** total1 = free_space_count(0, num_fragments); length += sprintf(page + length, ! " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", 0, total1, --- 422,426 ---- total1 = free_space_count(0, num_fragments); length += sprintf(page + length, ! " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu\n", 0, total1, *************** *** 435,439 **** length += sprintf(page + length, ! "%4d - %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", (i-1)*free_space_interval+1, (i+1)*free_space_interval<COMP_PAGE_SIZE?(i+1)*free_space_interval:(int)COMP_PAGE_SIZE, total1 + total2, HIST_PRINTK); --- 435,439 ---- length += sprintf(page + length, ! 
"%4d - %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu\n", (i-1)*free_space_interval+1, (i+1)*free_space_interval<COMP_PAGE_SIZE?(i+1)*free_space_interval:(int)COMP_PAGE_SIZE, total1 + total2, HIST_PRINTK); Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.44 retrieving revision 1.45 diff -C2 -r1.44 -r1.45 *** swapin.c 15 Jul 2002 20:52:24 -0000 1.44 --- swapin.c 16 Jul 2002 18:41:55 -0000 1.45 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-07-15 14:35:45 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-07-16 14:55:53 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 103,109 **** if (TryLockPage(fragment->comp_page->page)) BUG(); ! if (PageMappedCompCache(fragment->comp_page->page)) ! BUG(); ! /* move the fragment to the back of the lru list */ remove_fragment_from_lru_queue(fragment); --- 103,107 ---- if (TryLockPage(fragment->comp_page->page)) BUG(); ! /* move the fragment to the back of the lru list */ remove_fragment_from_lru_queue(fragment); *************** *** 143,159 **** fragment = list_entry(fragment_lh, struct comp_cache_fragment, mapping_list); ! if ((fragment->index >= start) || (partial && (fragment->index + 1) == start)) { ! /*** ! * Only valid if we are invalidating entries ! * in compressed cache. We could invalidate in ! * invalidate_inode_pages() each page, but ! * that would make us search comp cache for ! * every page, which is not wanted ! */ ! if (PageMappedCompCache(fragment->comp_page->page)) ! steal_page_from_comp_cache(fragment->comp_page->page, NULL); ! else ! comp_cache_free(fragment); ! } } } --- 141,146 ---- fragment = list_entry(fragment_lh, struct comp_cache_fragment, mapping_list); ! if ((fragment->index >= start) || (partial && (fragment->index + 1) == start)) ! 
comp_cache_free(fragment); } } *************** *** 207,212 **** hash = page_hash(mapping, fragment->index); - if (PageMappedCompCache(fragment->comp_page->page)) - BUG(); if (!CompFragmentTestandClearDirty(fragment)) BUG(); --- 194,197 ---- Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.59 retrieving revision 1.60 diff -C2 -r1.59 -r1.60 *** swapout.c 15 Jul 2002 20:52:24 -0000 1.59 --- swapout.c 16 Jul 2002 18:41:55 -0000 1.60 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-15 10:06:15 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-16 14:56:05 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 240,281 **** continue; - /* page which has/had buffer? */ - if (PageMappedCompCache(page)) { - list_del(fragment_lh); - page_cache_get(page); - CompFragmentSetIO(fragment); - if (page->buffers && !try_to_release_page(page, gfp_mask)) { - UnlockPage(page); - page_cache_release(page); - CompFragmentClearIO(fragment); - list_add(fragment_lh, &lru_queue); - continue; - } - - if (PageSwapCache(page)) - delete_from_swap_cache(page); - else { - __remove_inode_page(page); - page_cache_release(page); - } - PageClearMappedCompCache(page); - page->flags &= ~((1 << PG_uptodate) | (1 << PG_referenced)); - - if (CompFragmentTestandClearIO(fragment)) - comp_cache_free_locked(fragment); - else - kmem_cache_free(fragment_cachep, (fragment)); - UnlockPage(page); - - if (page_count(page) != 1) - BUG(); - if (PageDirty(page)) - BUG(); - - if (--nrpages) - continue; - break; - } - /* clean page, let's free it */ if (!CompFragmentDirty(fragment)) { --- 240,243 ---- Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.39 retrieving 
revision 1.40 diff -C2 -r1.39 -r1.40 *** vswap.c 15 Jul 2002 20:52:24 -0000 1.39 --- vswap.c 16 Jul 2002 18:41:55 -0000 1.40 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-15 14:24:03 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-16 14:57:52 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 61,110 **** unsigned long nr_free_vswap = 0, nr_used_vswap = 0; - /*** - * Lock this vswap entry since it has a new page being allocated. That - * avoids this entry to be moved either when vswap is shrunk or to - * gain a new real swap entry. This sort of vswap entry does not have - * a swap cache page, so this is the field used to set this flag. - */ - inline void - set_vswap_allocating(swp_entry_t entry) - { - unsigned long offset = SWP_OFFSET(entry); - struct vswap_address * vswap; - - if (!vswap_address(entry)) - return; - if (offset >= vswap_current_num_entries) - BUG(); - vswap = vswap_address[offset]; - - //if (vswap->swap_cache_page) - //BUG(); - - vswap->swap_cache_page = VSWAP_ALLOCATING; - } - - /*** - * Clear the allocating flag of this vswap entry. - */ - inline void - clear_vswap_allocating(swp_entry_t entry) - { - unsigned long offset = SWP_OFFSET(entry); - struct vswap_address * vswap; - - if (!vswap_address(entry)) - return; - if (offset >= vswap_current_num_entries) - BUG(); - vswap = vswap_address[offset]; - - if (vswap->swap_cache_page != VSWAP_ALLOCATING) - //BUG(); - return; - - vswap->swap_cache_page = NULL; - } - static int comp_cache_vswap_alloc(void) --- 61,64 ---- *************** *** 258,265 **** if (vswap_address[offset]->fragment) BUG(); ! if (vswap_address[offset]->count) BUG(); ! vswap_address[offset]->count = 1; vswap_address[offset]->pte_list = NULL; vswap_address[offset]->swap_cache_page = NULL; --- 212,219 ---- if (vswap_address[offset]->fragment) BUG(); ! if (vswap_address[offset]->swap_count) BUG(); ! 
vswap_address[offset]->swap_count = 1; vswap_address[offset]->pte_list = NULL; vswap_address[offset]->swap_cache_page = NULL; *************** *** 293,302 **** if (!vswap_address[offset]) BUG(); ! if (!vswap_address[offset]->count) BUG(); if (offset >= vswap_current_num_entries) goto out; ! vswap_address[offset]->count++; ret = 1; out: --- 247,256 ---- if (!vswap_address[offset]) BUG(); ! if (!vswap_address[offset]->swap_count) BUG(); if (offset >= vswap_current_num_entries) goto out; ! vswap_address[offset]->swap_count++; ret = 1; out: *************** *** 321,325 **** { unsigned long offset = SWP_OFFSET(entry); ! unsigned int count; struct comp_cache_fragment * fragment; struct vswap_address * vswap; --- 275,279 ---- { unsigned long offset = SWP_OFFSET(entry); ! unsigned int swap_count; struct comp_cache_fragment * fragment; struct vswap_address * vswap; *************** *** 337,347 **** BUG(); ! if (!vswap->count) BUG(); ! count = vswap->count; ! if (--count) { ! vswap->count = count; ! return count; } --- 291,301 ---- BUG(); ! if (!vswap->swap_count) BUG(); ! swap_count = vswap->swap_count; ! if (--swap_count) { ! vswap->swap_count = swap_count; ! return swap_count; } *************** *** 354,358 **** BUG(); ! vswap->count = 0; vswap->pte_list = NULL; vswap->swap_cache_page = NULL; --- 308,312 ---- BUG(); ! vswap->swap_count = 0; vswap->pte_list = NULL; vswap->swap_cache_page = NULL; *************** *** 377,391 **** list_del_init(&(vswap_address[offset]->list)); nr_used_vswap--; ! ! vswap->fragment = VSWAP_FREEING; ! comp_cache_freeable_space += fragment->compressed_size; ! ! comp_cache_free(fragment); /* add to to the free list */ list_add(&(vswap->list), &vswap_address_free_head); nr_free_vswap++; ! ! vswap->fragment = NULL; return 0; } --- 331,344 ---- list_del_init(&(vswap_address[offset]->list)); nr_used_vswap--; ! vswap->fragment = NULL; /* add to to the free list */ list_add(&(vswap->list), &vswap_address_free_head); nr_free_vswap++; ! ! 
/* global freeable space */ ! comp_cache_freeable_space += fragment->compressed_size; ! ! comp_cache_free(fragment); return 0; } *************** *** 405,412 **** BUG(); ! if (!vswap_address[offset]->count) BUG(); ! return (vswap_address[offset]->count); } --- 358,365 ---- BUG(); ! if (!vswap_address[offset]->swap_count) BUG(); ! return (vswap_address[offset]->swap_count); } *************** *** 441,447 **** offset = SWP_OFFSET(entry); ! /* if we are freeing this vswap, don't have to worry since it ! * will be handled by comp_cache_swp_free() function */ ! if (freeing(offset)) return; --- 394,398 ---- offset = SWP_OFFSET(entry); ! if (!vswap_address[offset]->swap_count) return; *************** *** 490,494 **** if (!reserved(offset)) BUG(); ! if (!vswap_address[offset]->count) BUG(); --- 441,445 ---- if (!reserved(offset)) BUG(); ! if (!vswap_address[offset]->swap_count) BUG(); *************** *** 762,769 **** vswap_address[offset]->offset = offset; ! vswap_address[offset]->count = 0; vswap_address[offset]->pte_list = NULL; vswap_address[offset]->swap_cache_page = NULL; vswap_address[offset]->fragment = NULL; list_add(&(vswap_address[offset]->list), &vswap_address_free_head); --- 713,721 ---- vswap_address[offset]->offset = offset; ! vswap_address[offset]->swap_count = 0; vswap_address[offset]->pte_list = NULL; vswap_address[offset]->swap_cache_page = NULL; vswap_address[offset]->fragment = NULL; + vswap_address[offset]->fault_count = 0; list_add(&(vswap_address[offset]->list), &vswap_address_free_head); |
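The cleanup above replaces the old single `count` field (and the `freeing()`/`allocating()` marker values) with two explicit counters, `swap_count` and `fault_count`, so the shrink loop in adaptivity.c needs only one test to skip entries it must not touch. A minimal user-space sketch of that loop — the struct and function names here are illustrative stand-ins, not the kernel's actual `vswap_address` machinery:

```c
#include <assert.h>
#include <stddef.h>

/* Each virtual swap entry carries swap_count (pte references) and
 * fault_count (page faults currently being serviced against it). */
struct vswap_entry {
    unsigned int swap_count;
    unsigned int fault_count;
};

/* Frees unused entries in table[new_num..last_used], mirroring the
 * single check that replaced freeing()/allocating():
 *   if (!vswap_address[index] || vswap_address[index]->fault_count)
 *       continue;
 * Returns the number of entries freed. */
static int shrink_vswap(struct vswap_entry **table, int last_used, int new_num)
{
    int index, freed = 0;
    for (index = last_used; index >= new_num; index--) {
        /* entry already freed, never allocated, or busy with a fault */
        if (!table[index] || table[index]->fault_count)
            continue;
        if (!table[index]->swap_count) {
            table[index] = NULL;    /* unused entry: just free it */
            freed++;
        }
        /* entries with swap_count != 0 must be relocated to a lower
         * slot instead; that path is omitted from this sketch */
    }
    return freed;
}
```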
From: Rodrigo S. de C. <rc...@us...> - 2002-07-16 18:41:58
|
Update of /cvsroot/linuxcompressed/linux/Documentation In directory usw-pr-cvs1:/tmp/cvs-serv28423/Documentation Modified Files: Configure.help Log Message: Cleanups o remove support for pages with buffers o virtual swap code (mainly when freeing an entry or allocating a page to service a page fault) Other o Added help for CONFIG_COMP_DOUBLE_PAGE into Configure.help Index: Configure.help =================================================================== RCS file: /cvsroot/linuxcompressed/linux/Documentation/Configure.help,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -r1.8 -r1.9 *** Configure.help 25 Jun 2002 14:34:07 -0000 1.8 --- Configure.help 16 Jul 2002 18:41:54 -0000 1.9 *************** *** 423,427 **** (/proc/sys/vm/comp_cache/size). ! If unsure, say N here. Normal floppy disk support --- 423,443 ---- (/proc/sys/vm/comp_cache/size). ! If unsure, say Y here. ! ! Double Page Size ! CONFIG_COMP_DOUBLE_PAGE ! ! Select this option to make the compressed cache use double-sized pages ! (two contiguous pages) instead of single memory pages. This increases ! the effectiveness of the compressed cache even when the compression ! ratio isn't very good. On i386, compressed cache pages will be ! 8KiB instead of the regular 4KiB. ! ! Note that with double pages, all the values shown in the compressed ! cache initialization info (during boot) and in the ! /proc/sys/vm/comp_cache/{actual_size,size} entries are in units ! of those pages, not of memory pages. ! ! If unsure, say Y here. Normal floppy disk support |
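The motivation stated in the help text — that with single pages "several fragments end up stored alone in a page" — can be illustrated with a toy first-fit calculation (this is only illustrative arithmetic, not the kernel's actual fragment allocator): three compressed fragments slightly larger than half a 4KiB page each need three single pages, but fit together in one 8KiB double page.

```c
#include <assert.h>

/* Count how many compressed-cache pages of the given size are needed
 * to store the fragments, packing each fragment into the current page
 * when it fits and opening a new page otherwise (first-fit). */
static int pages_needed(const int *frag, int n, int page_size)
{
    int pages = 0, room = 0, i;
    for (i = 0; i < n; i++) {
        if (frag[i] > room) {   /* fragment doesn't fit: new page */
            pages++;
            room = page_size;
        }
        room -= frag[i];
    }
    return pages;
}
```

With three 2050-byte fragments this gives 3 pages (12KiB) at a 4096-byte page size but a single 8192-byte double page, which is the space saving the option is after.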
From: Rodrigo S. de C. <rc...@us...> - 2002-07-16 18:41:58
|
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv28423/include/linux Modified Files: comp_cache.h mm.h Log Message: Cleanups o remove support for pages with buffers o virtual swap code (mainly when freeing an entry or allocating a page to service a page fault) Other o Added help for CONFIG_COMP_DOUBLE_PAGE into Configure.help Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.90 retrieving revision 1.91 diff -C2 -r1.90 -r1.91 *** comp_cache.h 15 Jul 2002 20:52:23 -0000 1.90 --- comp_cache.h 16 Jul 2002 18:41:54 -0000 1.91 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-15 13:44:19 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-16 14:49:02 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 364,384 **** #endif - #ifdef CONFIG_COMP_PAGE_CACHE - void steal_page_from_comp_cache(struct page *, struct page *); - #else - static inline void steal_page_from_comp_cache(struct page * page, struct page * new_page) {}; - #endif - - #if defined(CONFIG_COMP_PAGE_CACHE) && !defined(CONFIG_COMP_DEMAND_RESIZE) - int comp_cache_try_to_release_page(struct page **, int, int); - #else - static inline int comp_cache_try_to_release_page(struct page ** page, int gfp_mask, int priority) { return try_to_release_page(*page, gfp_mask); } - #endif - /* vswap.c */ struct vswap_address { struct list_head list; ! unsigned int count; unsigned long offset; --- 364,376 ---- #endif /* vswap.c */ struct vswap_address { struct list_head list; ! /* how many ptes are set to this vswap address */ ! unsigned int swap_count; ! /* number of faults being serviced at a given moment */ ! unsigned int fault_count; ! 
/* offset within the vswap table */ unsigned long offset; *************** *** 409,413 **** #define COMP_CACHE_SWP_TYPE MAX_SWAPFILES #define VSWAP_RESERVED ((struct comp_cache_fragment *) 0xffffffff) - #define VSWAP_FREEING ((struct comp_cache_fragment *) 0xfffffffe) #define VSWAP_ALLOCATING ((struct page *) 0xffffffff) --- 401,404 ---- *************** *** 417,422 **** #define vswap_address(entry) (SWP_TYPE(entry) == COMP_CACHE_SWP_TYPE) #define reserved(offset) (vswap_address[offset]->fragment == VSWAP_RESERVED) - #define freeing(offset) (vswap_address[offset]->fragment == VSWAP_FREEING) - #define allocating(offset) (vswap_address[offset]->swap_cache_page == VSWAP_ALLOCATING) int comp_cache_swp_duplicate(swp_entry_t); --- 408,411 ---- *************** *** 438,441 **** --- 427,441 ---- int vswap_alloc_and_init(struct vswap_address **, unsigned long); + static inline void get_vswap(swp_entry_t entry) { + if (!vswap_address(entry)) + return; + vswap_address[SWP_OFFSET(entry)]->fault_count++; + } + + static inline void put_vswap(swp_entry_t entry) { + if (!vswap_address(entry)) + return; + vswap_address[SWP_OFFSET(entry)]->fault_count--; + } #else Index: mm.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/mm.h,v retrieving revision 1.16 retrieving revision 1.17 diff -C2 -r1.16 -r1.17 *** mm.h 11 Jun 2002 13:20:49 -0000 1.16 --- mm.h 16 Jul 2002 18:41:55 -0000 1.17 *************** *** 332,339 **** #ifdef CONFIG_COMP_CACHE #define PageCompCache(page) test_bit(PG_comp_cache, &(page)->flags) - #define PageMappedCompCache(page) test_bit(PG_mapped_comp_cache, &(page)->flags) #else #define PageCompCache(page) 0 - #define PageMappedCompCache(page) 0 #endif --- 332,337 ---- *************** *** 342,350 **** #define PageTestandSetCompCache(page) test_and_set_bit(PG_comp_cache, &(page)->flags) #define PageTestandClearCompCache(page) test_and_clear_bit(PG_comp_cache, &(page)->flags) - - #define 
PageSetMappedCompCache(page) set_bit(PG_mapped_comp_cache, &(page)->flags) - #define PageClearMappedCompCache(page) clear_bit(PG_mapped_comp_cache, &(page)->flags) - #define PageTestandSetMappedCompCache(page) test_and_set_bit(PG_mapped_comp_cache, &(page)->flags) - #define PageTestandClearMappedCompCache(page) test_and_clear_bit(PG_mapped_comp_cache, &(page)->flags) #define PageActive(page) test_bit(PG_active, &(page)->flags) --- 340,343 ---- |
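The `get_vswap()`/`put_vswap()` inline helpers added to comp_cache.h bracket a page fault in flight — memory.c calls them around `read_swap_cache_async()` — so that the entry's `fault_count` pins it against being moved while the fault is serviced. A self-contained user-space model of that guard (the table layout and `vswap_movable()` name are illustrative; the real helpers take a `swp_entry_t` and first check `vswap_address(entry)`):

```c
#include <assert.h>

#define NUM_ENTRIES 8

struct vswap_entry {
    unsigned int swap_count;   /* pte references to this entry */
    unsigned int fault_count;  /* faults currently being serviced */
};

static struct vswap_entry vswap_table[NUM_ENTRIES];

/* Raise/drop the fault guard, as done around read_swap_cache_async() */
static void get_vswap(unsigned long offset) { vswap_table[offset].fault_count++; }
static void put_vswap(unsigned long offset) { vswap_table[offset].fault_count--; }

/* The resize code may move or free an entry only when no fault is in
 * flight against it. */
static int vswap_movable(unsigned long offset)
{
    return vswap_table[offset].fault_count == 0;
}
```

Because `fault_count` is a counter rather than a flag, nested or concurrent faults against the same entry compose correctly: the entry becomes movable again only after every `get_vswap()` has been matched by a `put_vswap()`.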
From: Rodrigo S. de C. <rc...@us...> - 2002-07-16 18:41:58
|
Update of /cvsroot/linuxcompressed/linux/mm In directory usw-pr-cvs1:/tmp/cvs-serv28423/mm Modified Files: filemap.c memory.c page_alloc.c swap_state.c swapfile.c vmscan.c Log Message: Cleanups o remove support for pages with buffers o virtual swap code (mainly when freeing an entry or allocating a page to service a page fault) Other o Added help for CONFIG_COMP_DOUBLE_PAGE into Configure.help Index: filemap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/filemap.c,v retrieving revision 1.33 retrieving revision 1.34 diff -C2 -r1.33 -r1.34 *** filemap.c 15 Jul 2002 20:52:23 -0000 1.33 --- filemap.c 16 Jul 2002 18:41:55 -0000 1.34 *************** *** 163,173 **** mark_inode_dirty_pages(mapping->host); #ifdef CONFIG_COMP_CACHE ! if (PageTestandClearCompCache(page)) { ! if (PageMappedCompCache(page)) { ! steal_page_from_comp_cache(page, NULL); ! return; ! } invalidate_comp_cache(mapping, page->index); - } #endif } --- 163,168 ---- mark_inode_dirty_pages(mapping->host); #ifdef CONFIG_COMP_CACHE ! if (PageTestandClearCompCache(page)) invalidate_comp_cache(mapping, page->index); #endif } *************** *** 253,264 **** /* - * Let's steal the page from comp cache to be safely removed - * from page cache below. Actually, this page will also be - * used by compressed cache, that's why it's passed as second - * parameter to the function - */ - steal_page_from_comp_cache(page, page); - - /* * We remove the page from the page cache _after_ we have * destroyed all buffer-cache references to it. Otherwise some --- 248,251 ---- *************** *** 1009,1013 **** if (page) { page_cache_get(page); - steal_page_from_comp_cache(page, NULL); if (TryLockPage(page)) { spin_unlock(&pagecache_lock); --- 996,999 ---- *************** *** 1099,1106 **** lru_cache_add(page); #ifdef CONFIG_COMP_PAGE_CACHE ! if (!read_comp_cache(mapping, index, page) && TryLockPage(page)) { ! 
ClearPageUptodate(page); BUG(); - } #endif } --- 1085,1090 ---- lru_cache_add(page); #ifdef CONFIG_COMP_PAGE_CACHE ! if (!read_comp_cache(mapping, index, page) && TryLockPage(page)) BUG(); #endif } *************** *** 1513,1518 **** found_page: page_cache_get(page); - steal_page_from_comp_cache(page, NULL); - spin_unlock(&pagecache_lock); --- 1497,1500 ---- *************** *** 2064,2068 **** if (!page) goto no_cached_page; - steal_page_from_comp_cache(page, NULL); /* --- 2046,2049 ---- *************** *** 2945,2952 **** page_cache_release(cached_page); #ifdef CONFIG_COMP_PAGE_CACHE ! if (page) { ! steal_page_from_comp_cache(page, NULL); flush_comp_cache(page); - } #endif return page; --- 2926,2931 ---- page_cache_release(cached_page); #ifdef CONFIG_COMP_PAGE_CACHE ! if (page) flush_comp_cache(page); #endif return page; Index: memory.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/memory.c,v retrieving revision 1.33 retrieving revision 1.34 diff -C2 -r1.33 -r1.34 *** memory.c 5 Jul 2002 15:21:49 -0000 1.33 --- memory.c 16 Jul 2002 18:41:55 -0000 1.34 *************** *** 1139,1143 **** --- 1139,1145 ---- ret = 2; } + get_vswap(entry); page = read_swap_cache_async(entry); + put_vswap(entry); if (!page) { /* Index: page_alloc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/page_alloc.c,v retrieving revision 1.22 retrieving revision 1.23 diff -C2 -r1.22 -r1.23 *** page_alloc.c 15 Jul 2002 20:52:23 -0000 1.22 --- page_alloc.c 16 Jul 2002 18:41:55 -0000 1.23 *************** *** 92,97 **** if (PageSwapCache(page)) BUG(); - if (PageMappedCompCache(page)) - BUG(); if (PageLocked(page)) BUG(); --- 92,95 ---- Index: swap_state.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swap_state.c,v retrieving revision 1.36 retrieving revision 1.37 diff -C2 -r1.36 -r1.37 *** swap_state.c 15 
Jul 2002 20:52:23 -0000 1.36 --- swap_state.c 16 Jul 2002 18:41:55 -0000 1.37 *************** *** 179,186 **** */ INC_CACHE_INFO(find_total); ! if (found) { INC_CACHE_INFO(find_success); - steal_page_from_comp_cache(found, NULL); - } return found; } --- 179,184 ---- */ INC_CACHE_INFO(find_total); ! if (found) INC_CACHE_INFO(find_success); return found; } *************** *** 205,212 **** */ found_page = find_get_page(&swapper_space, entry.val); ! if (found_page) { ! steal_page_from_comp_cache(found_page, new_page); break; - } /* --- 203,208 ---- */ found_page = find_get_page(&swapper_space, entry.val); ! if (found_page) break; /* *************** *** 214,220 **** */ if (!new_page) { - set_vswap_allocating(entry); new_page = alloc_page(GFP_HIGHUSER); - clear_vswap_allocating(entry); if (!new_page) break; /* Out of memory */ --- 210,214 ---- *************** *** 223,230 **** if (readahead) { found_page = find_get_page(&swapper_space, entry.val); ! if (found_page) { ! steal_page_from_comp_cache(found_page, new_page); break; - } if (in_comp_cache(&swapper_space, entry.val)) return new_page; --- 217,222 ---- if (readahead) { found_page = find_get_page(&swapper_space, entry.val); ! if (found_page) break; if (in_comp_cache(&swapper_space, entry.val)) return new_page; Index: swapfile.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swapfile.c,v retrieving revision 1.30 retrieving revision 1.31 diff -C2 -r1.30 -r1.31 *** swapfile.c 11 Jun 2002 13:20:49 -0000 1.30 --- swapfile.c 16 Jul 2002 18:41:55 -0000 1.31 *************** *** 384,388 **** /* Only cache user (+us), or swap space full? Free it! 
*/ if (page_count(page) - !!page->buffers == 2 || vm_swap_full()) { - steal_page_from_comp_cache(page, NULL); delete_from_swap_cache(page); SetPageDirty(page); --- 384,387 ---- Index: vmscan.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/vmscan.c,v retrieving revision 1.40 retrieving revision 1.41 diff -C2 -r1.40 -r1.41 *** vmscan.c 11 Jul 2002 19:08:11 -0000 1.40 --- vmscan.c 16 Jul 2002 18:41:55 -0000 1.41 *************** *** 443,447 **** page_cache_get(page); ! if (comp_cache_try_to_release_page(&page, gfp_mask, priority)) { if (!page->mapping) { /* --- 443,447 ---- page_cache_get(page); ! if (try_to_release_page(page, gfp_mask)) { if (!page->mapping) { /* *************** *** 486,490 **** * this is the non-racy check for busy page. */ ! if (!page->mapping || !is_page_cache_freeable(page) || PageMappedCompCache(page)) { spin_unlock(&pagecache_lock); UnlockPage(page); --- 486,490 ---- * this is the non-racy check for busy page. */ ! if (!page->mapping || !is_page_cache_freeable(page)) { spin_unlock(&pagecache_lock); UnlockPage(page); |
From: Rodrigo S. de C. <rc...@us...> - 2002-07-15 20:52:27
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv1029/mm/comp_cache Modified Files: adaptivity.c aux.c free.c main.c proc.c swapin.c swapout.c vswap.c Log Message: Feature o Added a feature to enable 8K pages (on i386). This option can only be selected when "Resize Compressed Cache On Demand" is enabled, since the 8K-page support does not handle pages with buffers. The motivation for this feature is to make better use of the space reserved for the compressed cache, since, depending on the compression ratio, several fragments end up stored alone in a page. Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.34 retrieving revision 1.35 diff -C2 -r1.34 -r1.35 *** adaptivity.c 11 Jul 2002 19:08:11 -0000 1.34 --- adaptivity.c 15 Jul 2002 20:52:23 -0000 1.35 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-07-09 16:33:37 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-07-15 14:42:43 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 30,34 **** unsigned int i, new_fragment_hash_bits, new_fragment_hash_order, hash_index; ! new_fragment_hash_size = 3 * num_comp_pages * sizeof(struct comp_cache_fragment *); new_fragment_hash = create_fragment_hash(&new_fragment_hash_size, &new_fragment_hash_bits, &new_fragment_hash_order); --- 30,34 ---- unsigned int i, new_fragment_hash_bits, new_fragment_hash_order, hash_index; ! new_fragment_hash_size = NUM_FRAG_HASH_ENTRIES * sizeof(struct comp_cache_fragment *); new_fragment_hash = create_fragment_hash(&new_fragment_hash_size, &new_fragment_hash_bits, &new_fragment_hash_order); *************** *** 40,44 **** /* if we are growing the hash, but couldn't allocate a bigger * chunk, let's back out and keep the current one */ ! 
if (3 * num_comp_pages > fragment_hash_size && new_fragment_hash_order <= fragment_hash_order) goto free_new_hash; --- 40,44 ---- /* if we are growing the hash, but couldn't allocate a bigger * chunk, let's back out and keep the current one */ ! if (NUM_FRAG_HASH_ENTRIES > fragment_hash_size && new_fragment_hash_order <= fragment_hash_order) goto free_new_hash; *************** *** 451,455 **** static inline void shrink_fragment_hash_table(void) { ! unsigned long new_fragment_hash_size = (3 * num_comp_pages) * sizeof(struct comp_cache_fragment *); /* if we shrink the hash table an order, will the data fit in --- 451,455 ---- static inline void shrink_fragment_hash_table(void) { ! unsigned long new_fragment_hash_size = NUM_FRAG_HASH_ENTRIES * sizeof(struct comp_cache_fragment *); /* if we shrink the hash table an order, will the data fit in *************** *** 528,532 **** BUG(); UnlockPage(empty_comp_page->page); ! page_cache_release(empty_comp_page->page); set_comp_page(empty_comp_page, NULL); --- 528,532 ---- BUG(); UnlockPage(empty_comp_page->page); ! __free_pages(empty_comp_page->page, comp_page_order); set_comp_page(empty_comp_page, NULL); *************** *** 604,608 **** static inline void grow_fragment_hash_table(void) { ! unsigned long new_fragment_hash_size = (3 * num_comp_pages) * sizeof(struct comp_cache_fragment *); /* do we really need a bigger hash table? */ --- 604,608 ---- static inline void grow_fragment_hash_table(void) { ! unsigned long new_fragment_hash_size = NUM_FRAG_HASH_ENTRIES * sizeof(struct comp_cache_fragment *); /* do we really need a bigger hash table? */ *************** *** 629,633 **** while (comp_cache_needs_to_grow() && nrpages--) { ! page = alloc_page(GFP_ATOMIC); /* couldn't allocate the page */ --- 629,633 ---- while (comp_cache_needs_to_grow() && nrpages--) { ! page = alloc_pages(GFP_ATOMIC, comp_page_order); /* couldn't allocate the page */ *************** *** 636,645 **** if (!init_comp_page(&comp_page, page)) { ! 
page_cache_release(page); return 0; } ! comp_cache_freeable_space += PAGE_SIZE; ! comp_cache_free_space += PAGE_SIZE; num_comp_pages++; #if 0 --- 636,645 ---- if (!init_comp_page(&comp_page, page)) { ! __free_pages(page, comp_page_order); return 0; } ! comp_cache_freeable_space += COMP_PAGE_SIZE; ! comp_cache_free_space += COMP_PAGE_SIZE; num_comp_pages++; #if 0 Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.36 retrieving revision 1.37 diff -C2 -r1.36 -r1.37 *** aux.c 11 Jul 2002 19:08:11 -0000 1.36 --- aux.c 15 Jul 2002 20:52:23 -0000 1.37 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-11 15:45:38 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-15 15:14:01 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 80,85 **** if (page) goto out; ! comp_cache_freeable_space -= PAGE_SIZE; ! comp_cache_free_space -= PAGE_SIZE; goto out; } --- 80,85 ---- if (page) goto out; ! comp_cache_freeable_space -= COMP_PAGE_SIZE; ! comp_cache_free_space -= COMP_PAGE_SIZE; goto out; } *************** *** 88,93 **** BUG(); ! comp_cache_freeable_space += PAGE_SIZE; ! comp_cache_free_space += PAGE_SIZE; out: --- 88,93 ---- BUG(); ! comp_cache_freeable_space += COMP_PAGE_SIZE; ! comp_cache_free_space += COMP_PAGE_SIZE; out: *************** *** 226,230 **** if (index != free_space_hash_size - 1) BUG(); ! if (comp_page->free_space != PAGE_SIZE) BUG(); } --- 226,230 ---- if (index != free_space_hash_size - 1) BUG(); ! if (comp_page->free_space != COMP_PAGE_SIZE) BUG(); } *************** *** 235,249 **** case 0: if (!comp_page->page) ! num_fragments[6]++; num_fragments[0]++; break; case 1: if (comp_page->page && PageMappedCompCache(comp_page->page)) ! num_fragments[5]++; num_fragments[1]++; break; default: ! if (total_fragments > 3) ! 
num_fragments[4]++; else num_fragments[total_fragments]++; --- 235,249 ---- case 0: if (!comp_page->page) ! num_fragments[7]++; num_fragments[0]++; break; case 1: if (comp_page->page && PageMappedCompCache(comp_page->page)) ! num_fragments[6]++; num_fragments[1]++; break; default: ! if (total_fragments > 4) ! num_fragments[5]++; else num_fragments[total_fragments]++; *************** *** 256,260 **** unsigned long ! fragmentation_count(int index, unsigned long * frag_space) { struct comp_cache_page * comp_page; struct comp_cache_fragment * fragment; --- 256,260 ---- unsigned long ! fragmentation_count(int index, unsigned long * frag_space, int interval) { struct comp_cache_page * comp_page; struct comp_cache_fragment * fragment; *************** *** 281,285 **** BUG(); ! frag_space[(int) fragmented_space/500]++; } --- 281,285 ---- BUG(); ! frag_space[(int) fragmented_space/interval]++; } *************** *** 518,522 **** } ! if (comp_page->free_space != PAGE_SIZE - used_space) BUG(); --- 518,522 ---- } ! if (comp_page->free_space != COMP_PAGE_SIZE - used_space) BUG(); *************** *** 577,581 **** /* fragment hash table (code heavily based on * page_cache_init():filemap.c */ ! fragment_hash_size = 3 * num_comp_pages * sizeof(struct comp_cache_fragment *); fragment_hash_used = 0; fragment_hash = create_fragment_hash(&fragment_hash_size, &fragment_hash_bits, &fragment_hash_order); --- 577,581 ---- /* fragment hash table (code heavily based on * page_cache_init():filemap.c */ ! fragment_hash_size = NUM_FRAG_HASH_ENTRIES * sizeof(struct comp_cache_fragment *); fragment_hash_used = 0; fragment_hash = create_fragment_hash(&fragment_hash_size, &fragment_hash_bits, &fragment_hash_order); *************** *** 588,593 **** /* inits comp cache free space hash table */ ! free_space_interval = 100 * ((float) PAGE_SIZE)/4096; ! 
free_space_hash_size = (int) (PAGE_SIZE/free_space_interval) + 2; free_space_hash = (struct comp_cache_page **) kmalloc(free_space_hash_size * sizeof(struct comp_cache_page *), GFP_ATOMIC); --- 588,593 ---- /* inits comp cache free space hash table */ ! free_space_interval = 100 * (comp_page_order + 1); ! free_space_hash_size = (int) (PAGE_SIZE/100) + 2; free_space_hash = (struct comp_cache_page **) kmalloc(free_space_hash_size * sizeof(struct comp_cache_page *), GFP_ATOMIC); *************** *** 601,606 **** /* inits comp cache total free space hash table */ ! total_free_space_interval = 100 * ((float) PAGE_SIZE)/4096; ! total_free_space_hash_size = (int) (PAGE_SIZE/free_space_interval) + 2; total_free_space_hash = (struct comp_cache_page **) kmalloc(total_free_space_hash_size * sizeof(struct comp_cache_page *), GFP_ATOMIC); --- 601,606 ---- /* inits comp cache total free space hash table */ ! total_free_space_interval = 100 * (comp_page_order + 1); ! total_free_space_hash_size = (int) (PAGE_SIZE/100) + 2; total_free_space_hash = (struct comp_cache_page **) kmalloc(total_free_space_hash_size * sizeof(struct comp_cache_page *), GFP_ATOMIC); Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.34 retrieving revision 1.35 diff -C2 -r1.34 -r1.35 *** free.c 11 Jul 2002 19:08:11 -0000 1.34 --- free.c 15 Jul 2002 20:52:23 -0000 1.35 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-09 16:34:26 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-15 14:56:06 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 89,93 **** struct comp_cache_fragment * fragment, * min_fragment = NULL; struct list_head * fragment_lh, * tmp_lh, aux_fragment_list; ! 
int min_offset = PAGE_SIZE + 1, num_fragments = 0, next_offset = 0; INIT_LIST_HEAD(&aux_fragment_list); --- 89,93 ---- struct comp_cache_fragment * fragment, * min_fragment = NULL; struct list_head * fragment_lh, * tmp_lh, aux_fragment_list; ! int min_offset = COMP_PAGE_SIZE + 1, num_fragments = 0, next_offset = 0; INIT_LIST_HEAD(&aux_fragment_list); *************** *** 121,125 **** min_fragment->offset = next_offset; ! min_offset = PAGE_SIZE + 1; next_offset += min_fragment->compressed_size; --- 121,125 ---- min_fragment->offset = next_offset; ! min_offset = COMP_PAGE_SIZE + 1; next_offset += min_fragment->compressed_size; *************** *** 152,157 **** if (!comp_page) BUG(); - if (not_compressed(fragment) && comp_page->free_space) - BUG(); /* remove from the free space hash table to update it */ --- 152,155 ---- *************** *** 170,174 **** /* simple case - no free space * 1 - one not compressed page ! * 2 - sum of all fragments = PAGE_SIZE */ if (!comp_page->free_space) { remove_fragment_from_comp_cache(fragment); --- 168,172 ---- /* simple case - no free space * 1 - one not compressed page ! * 2 - sum of all fragments = COMP_PAGE_SIZE */ if (!comp_page->free_space) { remove_fragment_from_comp_cache(fragment); Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.53 retrieving revision 1.54 diff -C2 -r1.53 -r1.54 *** main.c 15 Jul 2002 11:24:27 -0000 1.53 --- main.c 15 Jul 2002 20:52:24 -0000 1.54 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-07-15 08:20:26 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! 
* Time-stamp: <2002-07-15 13:45:54 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 39,42 **** --- 39,48 ---- kmem_cache_t * fragment_cachep; + #ifdef CONFIG_COMP_DOUBLE_PAGE + int comp_page_order = 1; + #else + int comp_page_order = 0; + #endif + extern unsigned long num_physpages; *************** *** 131,135 **** return 0; ! if (fragment->offset + fragment->compressed_size > PAGE_SIZE) BUG(); --- 137,141 ---- return 0; ! if (fragment->offset + fragment->compressed_size > COMP_PAGE_SIZE) BUG(); *************** *** 157,161 **** memcpy(page_address(comp_page->page) + fragment->offset, buffer_compressed , fragment->compressed_size); } else ! memcpy(page_address(comp_page->page), page_address(page), PAGE_SIZE); if (PageTestandSetCompCache(page)) --- 163,167 ---- memcpy(page_address(comp_page->page) + fragment->offset, buffer_compressed , fragment->compressed_size); } else ! memcpy(page_address(comp_page->page) + fragment->offset, page_address(page), PAGE_SIZE); if (PageTestandSetCompCache(page)) *************** *** 300,305 **** return 0; ! (*comp_page)->free_space = PAGE_SIZE; ! (*comp_page)->total_free_space = PAGE_SIZE; (*comp_page)->free_offset = 0; (*comp_page)->page = page; --- 306,310 ---- return 0; ! (*comp_page)->free_space = (*comp_page)->total_free_space = (comp_page_order + 1) * PAGE_SIZE; (*comp_page)->free_offset = 0; (*comp_page)->page = page; *************** *** 319,329 **** #ifdef CONFIG_COMP_DEMAND_RESIZE ! min_num_comp_pages = 48; #else ! min_num_comp_pages = num_physpages * 0.05; #endif if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > num_physpages * 0.5) ! max_num_comp_pages = num_physpages * 0.5; if (!init_num_comp_pages || init_num_comp_pages < min_num_comp_pages || init_num_comp_pages > max_num_comp_pages) --- 324,334 ---- #ifdef CONFIG_COMP_DEMAND_RESIZE ! min_num_comp_pages = page_to_comp_page(48); #else ! 
min_num_comp_pages = page_to_comp_page((unsigned long) (num_physpages * 0.05)); #endif if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > num_physpages * 0.5) ! max_num_comp_pages = page_to_comp_page((unsigned long) (num_physpages * 0.5)); if (!init_num_comp_pages || init_num_comp_pages < min_num_comp_pages || init_num_comp_pages > max_num_comp_pages) *************** *** 336,345 **** "Compressed Cache: maximum size\n" "Compressed Cache: %lu pages = %luKiB\n", ! init_num_comp_pages, (init_num_comp_pages * PAGE_SIZE)/1024, ! max_num_comp_pages, (max_num_comp_pages * PAGE_SIZE)/1024); /* fiz zone watermarks */ comp_cache_init_fix_watermarks(init_num_comp_pages); ! /* create slab caches */ comp_cachep = kmem_cache_create("comp_cache_struct", sizeof(struct comp_cache_page), 0, SLAB_HWCACHE_ALIGN, NULL, NULL); --- 341,350 ---- "Compressed Cache: maximum size\n" "Compressed Cache: %lu pages = %luKiB\n", ! init_num_comp_pages, (init_num_comp_pages * COMP_PAGE_SIZE)/1024, ! max_num_comp_pages, (max_num_comp_pages * COMP_PAGE_SIZE)/1024); /* fiz zone watermarks */ comp_cache_init_fix_watermarks(init_num_comp_pages); ! /* create slab caches */ comp_cachep = kmem_cache_create("comp_cache_struct", sizeof(struct comp_cache_page), 0, SLAB_HWCACHE_ALIGN, NULL, NULL); *************** *** 355,364 **** /* initialize each comp cache entry */ for (i = 0; i < num_comp_pages; i++) { ! page = alloc_page(GFP_KERNEL); if (!init_comp_page(&comp_page, page)) ! page_cache_release(page); } ! comp_cache_free_space = num_comp_pages * PAGE_SIZE; /* initialize our algorithms statistics array */ --- 360,369 ---- /* initialize each comp cache entry */ for (i = 0; i < num_comp_pages; i++) { ! page = alloc_pages(GFP_KERNEL, comp_page_order); if (!init_comp_page(&comp_page, page)) ! __free_pages(page, comp_page_order); } ! 
comp_cache_free_space = num_comp_pages * COMP_PAGE_SIZE; /* initialize our algorithms statistics array */ *************** *** 371,380 **** static int __init comp_cache_size(char *str) { char * endp; #ifdef CONFIG_COMP_DEMAND_RESIZE ! max_num_comp_pages = memparse(str, &endp) >> PAGE_SHIFT; #else ! init_num_comp_pages = memparse(str, &endp) >> PAGE_SHIFT; #endif return 1; --- 376,388 ---- static int __init comp_cache_size(char *str) { + unsigned long nr_pages; char * endp; + nr_pages = memparse(str, &endp) >> (PAGE_SHIFT + comp_page_order); + #ifdef CONFIG_COMP_DEMAND_RESIZE ! max_num_comp_pages = nr_pages; #else ! init_num_comp_pages = nr_pages; #endif return 1; Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.18 retrieving revision 1.19 diff -C2 -r1.18 -r1.19 *** proc.c 11 Jul 2002 19:08:11 -0000 1.18 --- proc.c 15 Jul 2002 20:52:24 -0000 1.19 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-07-11 16:00:00 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-07-15 15:13:14 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 398,403 **** #define HIST_PRINTK \ num_fragments[0], num_fragments[1], num_fragments[2], num_fragments[3], \ ! num_fragments[4], num_fragments[5], num_fragments[6] ! #define HIST_COUNT 7 int --- 398,403 ---- #define HIST_PRINTK \ num_fragments[0], num_fragments[1], num_fragments[2], num_fragments[3], \ ! num_fragments[4], num_fragments[5], num_fragments[6], num_fragments[7] ! #define HIST_COUNT 8 int *************** *** 414,418 **** } ! length = sprintf(page, "compressed cache - free space histogram (free space x number of fragments)\n"); memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); --- 414,420 ---- } ! length = sprintf(page, ! 
"compressed cache - free space histogram (free space x number of fragments)\n" ! " total 0f 1f 2f 3f 4f more buffers nopg\n"); memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); *************** *** 420,449 **** total1 = free_space_count(0, num_fragments); length += sprintf(page + length, ! " total 0f 1f 2f 3f more buffers nopg\n" ! " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", 0, total1, HIST_PRINTK); ! for (i = 1; i < free_space_hash_size - 1; i += 2) { memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); total1 = free_space_count(i, num_fragments); ! total2 = free_space_count(i + 1, num_fragments); length += sprintf(page + length, ! "%4d - %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", ! (i+1)*100-200?:1, (i+1)*100, total1 + total2, HIST_PRINTK); } - memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); - - total1 = free_space_count(free_space_hash_size - 1, num_fragments); - length += sprintf(page + length, - "%4d - %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", - (free_space_hash_size - 2) * 100 + 1, (int) PAGE_SIZE, - total1, - HIST_PRINTK); - vfree(num_fragments); out: --- 422,443 ---- total1 = free_space_count(0, num_fragments); length += sprintf(page + length, ! " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", 0, total1, HIST_PRINTK); ! for (i = 1; i < free_space_hash_size; i += 2) { memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); total1 = free_space_count(i, num_fragments); ! total2 = 0; ! if (i + 1 < free_space_hash_size) ! total2 = free_space_count(i + 1, num_fragments); length += sprintf(page + length, ! "%4d - %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", ! (i-1)*free_space_interval+1, (i+1)*free_space_interval<COMP_PAGE_SIZE?(i+1)*free_space_interval:(int)COMP_PAGE_SIZE, total1 + total2, HIST_PRINTK); } vfree(num_fragments); out: *************** *** 451,455 **** } ! 
#define FRAG_INTERVAL 500 #define FRAG_PRINTK \ frag_space[0], frag_space[1], frag_space[2], frag_space[3], \ --- 445,449 ---- } ! #define FRAG_INTERVAL (500 * (comp_page_order + 1)) #define FRAG_PRINTK \ frag_space[0], frag_space[1], frag_space[2], frag_space[3], \ *************** *** 463,467 **** int length = 0, i; ! frag_space = (unsigned long *) vmalloc((PAGE_SIZE/FRAG_INTERVAL + 1) * sizeof(unsigned long)); if (!frag_space) { --- 457,461 ---- int length = 0, i; ! frag_space = (unsigned long *) vmalloc((COMP_PAGE_SIZE/FRAG_INTERVAL + 1) * sizeof(unsigned long)); if (!frag_space) { *************** *** 472,497 **** length = sprintf(page, "compressed cache - fragmentation histogram (free space x fragmented space)\n" ! " total <500 -1000 -1500 -2000 -2500 -3000 -3500 -4000 -4096\n"); ! ! for (i = 1; i < free_space_hash_size - 1; i += 2) { ! memset((void *) frag_space, 0, (PAGE_SIZE/FRAG_INTERVAL + 1) * sizeof(unsigned long)); ! total1 = fragmentation_count(i, frag_space); ! total2 = fragmentation_count(i + 1, frag_space); length += sprintf(page + length, "%4d - %4d: %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", ! (i+1)*100-200?:1, (i+1)*100, total1 + total2, FRAG_PRINTK); } - memset((void *) frag_space, 0, (PAGE_SIZE/FRAG_INTERVAL + 1)* sizeof(unsigned long)); - - total1 = free_space_count(free_space_hash_size - 1, frag_space); - length += sprintf(page + length, - "%4d - %4d: %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", - (free_space_hash_size - 2) * 100 + 1, (int) PAGE_SIZE, - total1, - FRAG_PRINTK); - vfree(frag_space); out: --- 466,487 ---- length = sprintf(page, "compressed cache - fragmentation histogram (free space x fragmented space)\n" ! " total <%4d", FRAG_INTERVAL); ! for (i = FRAG_INTERVAL * 2; i < COMP_PAGE_SIZE; i += FRAG_INTERVAL) ! length += sprintf(page + length, " -%d", i); ! length += sprintf(page + length, " -%d\n", (int)COMP_PAGE_SIZE); ! ! for (i = 1; i < free_space_hash_size; i += 2) { ! 
memset((void *) frag_space, 0, (COMP_PAGE_SIZE/FRAG_INTERVAL + 1) * sizeof(unsigned long)); ! total1 = fragmentation_count(i, frag_space, FRAG_INTERVAL); ! total2 = 0; ! if (i + 1 < free_space_hash_size) ! total2 = fragmentation_count(i + 1, frag_space, FRAG_INTERVAL); length += sprintf(page + length, "%4d - %4d: %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", ! (i-1)*free_space_interval+1, (i+1)*free_space_interval<COMP_PAGE_SIZE?(i+1)*free_space_interval:(int)COMP_PAGE_SIZE, total1 + total2, FRAG_PRINTK); } vfree(frag_space); out: Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.43 retrieving revision 1.44 diff -C2 -r1.43 -r1.44 *** swapin.c 1 Jul 2002 17:37:30 -0000 1.43 --- swapin.c 15 Jul 2002 20:52:24 -0000 1.44 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-07-01 11:28:19 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-07-15 14:35:45 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 80,84 **** decompress(fragment, page); else ! memcpy(page_address(page), page_address(comp_page->page), PAGE_SIZE); PageSetCompCache(page); --- 80,84 ---- decompress(fragment, page); else ! 
memcpy(page_address(page), page_address(comp_page->page) + fragment->offset, PAGE_SIZE); PageSetCompCache(page); *************** *** 230,237 **** PageClearCompCache(page); __set_page_dirty(page); - page_cache_release(page); - UnlockPage(page); - return; out_release: --- 230,234 ---- Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.58 retrieving revision 1.59 diff -C2 -r1.58 -r1.59 *** swapout.c 11 Jul 2002 19:08:11 -0000 1.58 --- swapout.c 15 Jul 2002 20:52:24 -0000 1.59 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-11 15:33:33 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-15 10:06:15 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 420,424 **** if (!comp_page->page) { ! if (comp_page->free_space != PAGE_SIZE) BUG(); if (alloc) --- 420,424 ---- if (!comp_page->page) { ! if (comp_page->free_space != COMP_PAGE_SIZE) BUG(); if (alloc) Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.38 retrieving revision 1.39 diff -C2 -r1.38 -r1.39 *** vswap.c 25 Jun 2002 14:34:08 -0000 1.38 --- vswap.c 15 Jul 2002 20:52:24 -0000 1.39 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-06-24 18:24:11 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-15 14:24:03 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 79,84 **** vswap = vswap_address[offset]; ! if (vswap->swap_cache_page) ! BUG(); vswap->swap_cache_page = VSWAP_ALLOCATING; --- 79,84 ---- vswap = vswap_address[offset]; ! //if (vswap->swap_cache_page) ! 
//BUG(); vswap->swap_cache_page = VSWAP_ALLOCATING; *************** *** 101,105 **** if (vswap->swap_cache_page != VSWAP_ALLOCATING) ! BUG(); vswap->swap_cache_page = NULL; --- 101,106 ---- if (vswap->swap_cache_page != VSWAP_ALLOCATING) ! //BUG(); ! return; vswap->swap_cache_page = NULL; *************** *** 143,147 **** mean_size = total/NUM_MEAN_PAGES; ! if (mean_size < 0 || mean_size > PAGE_SIZE) BUG(); --- 144,148 ---- mean_size = total/NUM_MEAN_PAGES; ! if (mean_size < 0 || mean_size > COMP_PAGE_SIZE) BUG(); *************** *** 161,165 **** * compressed cache, even if we have to move fragments in * order to make room for any vswap entry */ ! if (vswap_num_reserved_entries > num_comp_pages) return 0; --- 162,166 ---- * compressed cache, even if we have to move fragments in * order to make room for any vswap entry */ ! if (vswap_num_reserved_entries > comp_page_to_page(num_comp_pages)) return 0; *************** *** 190,194 **** available_mean_size = (unsigned short) (comp_cache_freeable_space/num_comp_pages); ! if (available_mean_size > PAGE_SIZE) BUG(); --- 191,195 ---- available_mean_size = (unsigned short) (comp_cache_freeable_space/num_comp_pages); ! if (available_mean_size > COMP_PAGE_SIZE) BUG(); *************** *** 237,241 **** entry.val = 0; ! if (!vswap_address && !comp_cache_vswap_alloc()) return entry; --- 238,242 ---- entry.val = 0; ! if (!vswap_address && !comp_cache_vswap_alloc()) return entry; *************** *** 779,788 **** INIT_LIST_HEAD(&(vswap_address_used_head)); ! comp_cache_freeable_space = PAGE_SIZE * num_comp_pages; last_page_size = (unsigned short *) vmalloc(NUM_MEAN_PAGES * sizeof(unsigned short)); for (i = 0; i < NUM_MEAN_PAGES; i++) ! last_page_size[i] = PAGE_SIZE/2; /* alloc only one page right now to avoid problems when --- 780,789 ---- INIT_LIST_HEAD(&(vswap_address_used_head)); ! 
comp_cache_freeable_space = COMP_PAGE_SIZE * num_comp_pages; last_page_size = (unsigned short *) vmalloc(NUM_MEAN_PAGES * sizeof(unsigned short)); for (i = 0; i < NUM_MEAN_PAGES; i++) ! last_page_size[i] = COMP_PAGE_SIZE/2; /* alloc only one page right now to avoid problems when |
From: Rodrigo S. de C. <rc...@us...> - 2002-07-15 20:52:27
|
Update of /cvsroot/linuxcompressed/linux/arch/i386 In directory usw-pr-cvs1:/tmp/cvs-serv1029/arch/i386 Modified Files: config.in Log Message: Feature o Added a feature to enable 8K pages (on i386). This option can only be selected when "Resize Compressed Cache On Demand" is enabled, since the 8K-page support does not handle pages with buffers. The motivation for this feature is to make better use of the space reserved for the compressed cache, since, depending on the compression ratio, several fragments end up stored alone in a page. Index: config.in =================================================================== RCS file: /cvsroot/linuxcompressed/linux/arch/i386/config.in,v retrieving revision 1.19 retrieving revision 1.20 diff -C2 -r1.19 -r1.20 *** config.in 25 Jun 2002 14:34:07 -0000 1.19 --- config.in 15 Jul 2002 20:52:22 -0000 1.20 *************** *** 212,215 **** --- 212,216 ---- bool ' Support for Page Cache compression' CONFIG_COMP_PAGE_CACHE bool ' Resize Compressed Cache On Demand' CONFIG_COMP_DEMAND_RESIZE + dep_bool ' Double Page Size' CONFIG_COMP_DOUBLE_PAGE $CONFIG_COMP_DEMAND_RESIZE fi fi |
From: Rodrigo S. de C. <rc...@us...> - 2002-07-15 20:52:27
|
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv1029/include/linux Modified Files: comp_cache.h Log Message: Feature o Added a feature to enable 8K pages (on i386). This option can only be selected when "Resize Compressed Cache On Demand" is enabled, since the 8K-page support does not handle pages with buffers. The motivation for this feature is to make better use of the space reserved for the compressed cache, since, depending on the compression ratio, several fragments end up stored alone in a page. Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.89 retrieving revision 1.90 diff -C2 -r1.89 -r1.90 *** comp_cache.h 11 Jul 2002 19:08:10 -0000 1.89 --- comp_cache.h 15 Jul 2002 20:52:23 -0000 1.90 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-11 15:28:35 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-07-15 13:44:19 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 29,39 **** #include <linux/WKcommon.h> ! 
#define COMP_CACHE_VERSION "0.24pre1" /* maximum compressed size of a page */ #define MAX_COMPRESSED_SIZE 4500 extern unsigned long num_comp_pages, num_fragments, num_swapper_fragments, new_num_comp_pages, min_num_comp_pages, max_num_comp_pages, zone_num_comp_pages; *************** *** 353,356 **** --- 351,359 ---- inline int compress_clean_page(struct page *, unsigned int, int); + #define COMP_PAGE_SIZE ((comp_page_order + 1) * PAGE_SIZE) + #define page_to_comp_page(n) ((n) >> comp_page_order) + #define comp_page_to_page(n) ((n) << comp_page_order) + + extern int comp_page_order; extern unsigned long comp_cache_free_space; #define comp_cache_used_space ((num_comp_pages * PAGE_SIZE) - comp_cache_free_space) *************** *** 403,406 **** --- 406,410 ---- extern unsigned short last_page; + #define NUM_VSWAP_ENTRIES (3 * comp_page_to_page(num_comp_pages)) #define COMP_CACHE_SWP_TYPE MAX_SWAPFILES #define VSWAP_RESERVED ((struct comp_cache_fragment *) 0xffffffff) *************** *** 518,521 **** --- 522,527 ---- extern unsigned int fragment_hash_bits; + #define NUM_FRAG_HASH_ENTRIES (3 * comp_page_to_page(num_comp_pages)) + /* hash function adapted from _page_hashfn:pagemap.h since our * parameters for hash table are the same: mapping and index */ *************** *** 569,573 **** unsigned long free_space_count(int, unsigned long *); ! unsigned long fragmentation_count(int, unsigned long *); /* enough memory functions */ --- 575,579 ---- unsigned long free_space_count(int, unsigned long *); ! unsigned long fragmentation_count(int, unsigned long *, int); /* enough memory functions */ |