[lc-checkins] CVS: linux/mm/comp_cache WK4x4.c,1.3,1.4 WKdm.c,1.3,1.4 adaptivity.c,1.37,1.38 aux.c,1
From: Rodrigo S. de C. <rc...@us...> - 2002-07-28 15:47:08
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache
In directory usw-pr-cvs1:/tmp/cvs-serv26313/mm/comp_cache

Modified Files:
	WK4x4.c WKdm.c adaptivity.c aux.c free.c main.c proc.c swapin.c swapout.c vswap.c
Log Message:

Features

o Initial page cache support for preempted kernels is implemented.
o Fragments now have a "count" field that stores the number of references to
  the fragment, so a fragment can no longer be freed in the middle of an
  operation. This closes a likely source of bugs (see the sketch at the end
  of this message).

Bug fixes

o Fixed memory accounting for double page sizes. Meminfo was broken for 8K
  pages.
o truncate_list_comp_pages() could try to truncate fragments that were on the
  locked_comp_pages list, which is bogus: only swap buffer pages are on that
  list, and they are listed there only for wait_comp_pages().
o When writing out fragments, the return value was ignored, so we could end up
  freeing a fragment (when refilling a swap buffer) even if writepage() had
  failed. In particular, ramfs, ramdisk and other memory file systems always
  fail to write out their pages. Now we check whether the swap buffer page has
  been set dirty (writepage() usually does that after failing to write a page)
  and, if so, move the fragment back to the dirty list instead of freeing it.
o Fixed a bug that would corrupt the swap buffer list. A bug in the variable
  holding the error code could report an error even when a fragment was found
  after all, so the caller would back out the writeout operation, leaving the
  swap buffer locked on the used list, where it would never get unlocked.
o Account writeout stats only for pages that have actually been submitted to
  IO.
o Fixed a bug that would deadlock a system running comp_cache with page cache
  support. lookup_comp_pages() may be called from the following code path:
  __sync_one() -> filemap_fdatasync(). This path syncs an inode and keeps it
  locked while it is syncing. However, that very inode can also be in the
  clear path (clear_inode(), called on process exit), which locks the super
  block and then waits for the inode if it is locked (which is the case while
  it is syncing). Since the allocation path may write out pages, which may
  need to lock that same super block, it deadlocks: the super block is held by
  the exit path, so we cannot allocate the page (and thus finish this function
  and unlock the inode), and the super block is never released because the
  inode never gets unlocked either. The fix is to allocate pages with the
  GFP_NOFS mask.

Cleanups

o Some functions were renamed.
o Compression algorithms: removed unnecessary data structures that were being
  allocated, some structures are now allocated statically inside the
  algorithms, and some data that used to be statically allocated is now
  kmalloc()ed.
o Removed /proc/sys/vm/comp_cache/actual_size; it doesn't make sense with
  resizing on demand.

Others

o Compressed cache now only resizes on demand.

Index: WK4x4.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/WK4x4.c,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -r1.3 -r1.4 *** WK4x4.c 19 Jun 2002 12:18:44 -0000 1.3 --- WK4x4.c 28 Jul 2002 15:47:04 -0000 1.4 *************** *** 258,268 **** WK_word* destinationBuffer, unsigned int words, ! void *page) { ! DictionaryElement *dictionary = ((struct comp_alg_data *)page)->dictionary; ! unsigned int *hashTable = ((struct comp_alg_data *)page)->hashLookupTable_WK4x4; ! ! 
/*DictionaryElement dictionary[DICTIONARY_SIZE]; ! unsigned int hashTable [] = HASH_LOOKUP_TABLE_CONTENTS_WK4x4;*/ int wordIndex; unsigned int remainingBits = BITS_PER_WORD; --- 258,265 ---- WK_word* destinationBuffer, unsigned int words, ! struct comp_alg_data * data) { ! DictionaryElement dictionary[DICTIONARY_SIZE]; ! unsigned int hashTable [] = HASH_LOOKUP_TABLE_CONTENTS_WK4x4; int wordIndex; unsigned int remainingBits = BITS_PER_WORD; *************** *** 273,278 **** PRELOAD_DICTIONARY; - //printk("WK4x4\n"); - /* Loop through each word in the source page. */ for (wordIndex = 0; wordIndex < words; wordIndex++) { --- 270,273 ---- *************** *** 418,422 **** WK_word* destinationBuffer, unsigned int words, ! void * page) { /* The dictionary array is divided into sets. Each entry in the dictionary array is really an entry in one of the dictionary's --- 413,418 ---- WK_word* destinationBuffer, unsigned int words, ! struct comp_alg_data * data) ! { /* The dictionary array is divided into sets. Each entry in the dictionary array is really an entry in one of the dictionary's *************** *** 426,433 **** which that set begins in the dictionary array. */ ! /*DictionaryElement dictionary[DICTIONARY_SIZE]; ! unsigned int hashTable [] = HASH_LOOKUP_TABLE_CONTENTS_WK4x4;*/ ! DictionaryElement *dictionary = ((struct comp_alg_data *)page)->dictionary; ! unsigned int *hashTable = ((struct comp_alg_data *)page)->hashLookupTable_WK4x4; unsigned int initialIndexTable [] = INITIAL_INDEX_TABLE_CONTENTS; --- 422,427 ---- which that set begins in the dictionary array. */ ! DictionaryElement dictionary[DICTIONARY_SIZE]; ! unsigned int hashTable [] = HASH_LOOKUP_TABLE_CONTENTS_WK4x4; unsigned int initialIndexTable [] = INITIAL_INDEX_TABLE_CONTENTS; Index: WKdm.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/WKdm.c,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -r1.3 -r1.4 *** WKdm.c 19 Jun 2002 12:18:44 -0000 1.3 --- WKdm.c 28 Jul 2002 15:47:04 -0000 1.4 *************** *** 40,43 **** --- 40,44 ---- /* Included files */ + #include <linux/slab.h> #include <linux/comp_cache.h> #include <linux/WKcommon.h> *************** *** 383,392 **** WK_word* dest_buf, unsigned int num_input_words, ! void *page) { ! /* DictionaryElement dictionary[DICTIONARY_SIZE]; ! char hashLookupTable [] = HASH_LOOKUP_TABLE_CONTENTS; */ ! DictionaryElement *dictionary = ((struct comp_alg_data *)page)->dictionary; ! char *hashLookupTable = ((struct comp_alg_data *)page)->hashLookupTable_WKdm; /* arrays that hold output data in intermediate form during modeling */ --- 384,391 ---- WK_word* dest_buf, unsigned int num_input_words, ! struct comp_alg_data * data) { ! DictionaryElement dictionary[DICTIONARY_SIZE]; ! char hashLookupTable [] = HASH_LOOKUP_TABLE_CONTENTS_WKDM; /* arrays that hold output data in intermediate form during modeling */ *************** *** 396,406 **** * pages larger than 4KB */ ! ! /* WK_word tempTagsArray[300]; tags for everything */ ! /* WK_word tempQPosArray[300]; queue positions for matches */ ! /* WK_word tempLowBitsArray[1200]; low bits for partial matches */ ! WK_word *tempTagsArray = ((struct comp_alg_data *)page)->tempTagsArray; ! WK_word *tempQPosArray = ((struct comp_alg_data *)page)->tempQPosArray; ! WK_word *tempLowBitsArray = ((struct comp_alg_data *)page)->tempLowBitsArray; /* boundary_tmp will be used for keeping track of what's where in --- 395,401 ---- * pages larger than 4KB */ ! 
WK_word * tempTagsArray = data->tempTagsArray; ! WK_word * tempQPosArray = data->tempQPosArray; ! WK_word * tempLowBitsArray = data->tempLowBitsArray; /* boundary_tmp will be used for keeping track of what's where in *************** *** 423,429 **** PRELOAD_DICTIONARY; ! ! //printk("WKDM\n"); ! next_full_patt = dest_buf + TAGS_AREA_OFFSET + (num_input_words / 16); --- 418,422 ---- PRELOAD_DICTIONARY; ! next_full_patt = dest_buf + TAGS_AREA_OFFSET + (num_input_words / 16); *************** *** 621,625 **** } - return ((char *) boundary_tmp - (char *) dest_buf); --- 614,617 ---- *************** *** 638,648 **** WK_word* dest_buf, unsigned int words, ! void *page) { ! ! /*DictionaryElement dictionary[DICTIONARY_SIZE]; ! unsigned int hashLookupTable [] = HASH_LOOKUP_TABLE_CONTENTS_WKDM;*/ ! DictionaryElement *dictionary = ((struct comp_alg_data *)page)->dictionary; ! char *hashLookupTable = ((struct comp_alg_data *)page)->hashLookupTable_WKdm; /* arrays that hold output data in intermediate form during modeling */ --- 630,638 ---- WK_word* dest_buf, unsigned int words, ! struct comp_alg_data * data) ! { + DictionaryElement dictionary[DICTIONARY_SIZE]; + unsigned int hashLookupTable [] = HASH_LOOKUP_TABLE_CONTENTS_WKDM; /* arrays that hold output data in intermediate form during modeling */ *************** *** 652,664 **** * pages larger than 4KB */ ! //WK_word tempTagsArray[300]; /* tags for everything */ ! //WK_word tempQPosArray[300]; /* queue positions for matches */ ! //WK_word tempLowBitsArray[1200]; /* low bits for partial matches */ ! WK_word *tempTagsArray = ((struct comp_alg_data *)page)->tempTagsArray; ! WK_word *tempQPosArray = ((struct comp_alg_data *)page)->tempQPosArray; ! WK_word *tempLowBitsArray = ((struct comp_alg_data *)page)->tempLowBitsArray; PRELOAD_DICTIONARY; #ifdef WK_DEBUG --- 642,658 ---- * pages larger than 4KB */ ! WK_word * tempTagsArray = data->tempTagsArray; ! WK_word * tempQPosArray = data->tempQPosArray; ! WK_word * tempLowBitsArray = data->tempLowBitsArray; PRELOAD_DICTIONARY; + + if (!tempTagsArray) + BUG(); + if (!tempQPosArray) + BUG(); + if (!tempLowBitsArray) + BUG(); #ifdef WK_DEBUG Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.37 retrieving revision 1.38 diff -C2 -r1.37 -r1.38 *** adaptivity.c 18 Jul 2002 21:31:08 -0000 1.37 --- adaptivity.c 28 Jul 2002 15:47:04 -0000 1.38 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-07-18 15:44:59 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-07-26 17:22:32 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 501,505 **** * caller must hold comp_cache_lock lock */ ! int shrink_comp_cache(struct comp_cache_page * comp_page, int check_further) { --- 501,505 ---- * caller must hold comp_cache_lock lock */ ! static int shrink_comp_cache(struct comp_cache_page * comp_page, int check_further) { *************** *** 578,582 **** } - #ifdef CONFIG_COMP_DEMAND_RESIZE /*** * shrink_on_demand(comp_page) - called by comp_cache_free(), it will --- 578,581 ---- *************** *** 606,610 **** return 0; } - #endif #define comp_cache_needs_to_grow() (new_num_comp_pages > num_comp_pages) --- 605,608 ---- *************** *** 630,634 **** } ! int grow_comp_cache(int nrpages) { --- 628,632 ---- } ! 
static int grow_comp_cache(int nrpages) { *************** *** 677,681 **** } - #ifdef CONFIG_COMP_DEMAND_RESIZE /*** * grow_on_demand(void) - called by get_comp_cache_page() when it --- 675,678 ---- *************** *** 703,707 **** return 0; } - #endif void __init --- 700,703 ---- Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.41 retrieving revision 1.42 diff -C2 -r1.41 -r1.42 *** aux.c 18 Jul 2002 21:31:08 -0000 1.41 --- aux.c 28 Jul 2002 15:47:04 -0000 1.42 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-18 14:14:42 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-28 11:55:38 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 268,272 **** fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); ! if (CompFragmentFreed(fragment)) fragmented_space += fragment->compressed_size; } --- 268,272 ---- fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); ! if (fragment_freed(fragment)) fragmented_space += fragment->compressed_size; } *************** *** 427,431 **** } ! /* adapted version of __find_page_nolock:filemap.c */ int FASTCALL(find_comp_page(struct address_space *, unsigned long, struct comp_cache_fragment **)); int find_comp_page(struct address_space * mapping, unsigned long offset, struct comp_cache_fragment ** fragment) --- 427,432 ---- } ! /* adapted version of __find_page_nolock:filemap.c ! * caller must hold comp_cache_hold */ int FASTCALL(find_comp_page(struct address_space *, unsigned long, struct comp_cache_fragment **)); int find_comp_page(struct address_space * mapping, unsigned long offset, struct comp_cache_fragment ** fragment) *************** *** 465,469 **** { struct comp_cache_fragment * fragment; ! return !find_comp_page(mapping, offset, &fragment); } --- 466,475 ---- { struct comp_cache_fragment * fragment; ! int ret; ! ! spin_lock(&comp_cache_lock); ! ret = !find_comp_page(mapping, offset, &fragment); ! spin_unlock(&comp_cache_lock); ! return ret; } Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.43 retrieving revision 1.44 diff -C2 -r1.43 -r1.44 *** free.c 18 Jul 2002 21:31:08 -0000 1.43 --- free.c 28 Jul 2002 15:47:04 -0000 1.44 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-18 16:20:01 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-07-28 10:21:04 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 35,46 **** return; ! if (CompFragmentFreed(right_fragment)) { fragment_to_free->compressed_size += right_fragment->compressed_size; list_del(&(right_fragment->list)); ! ! if (!CompFragmentTestandClearIO(right_fragment)) ! kmem_cache_free(fragment_cachep, (right_fragment)); ! ! } } --- 35,42 ---- return; ! if (fragment_freed(right_fragment)) { fragment_to_free->compressed_size += right_fragment->compressed_size; list_del(&(right_fragment->list)); ! kmem_cache_free(fragment_cachep, (right_fragment)); } } *************** *** 52,63 **** return; ! if (CompFragmentFreed(left_fragment)) { fragment_to_free->offset = left_fragment->offset; fragment_to_free->compressed_size += left_fragment->compressed_size; list_del(&(left_fragment->list)); ! ! 
if (!CompFragmentTestandClearIO(left_fragment)) ! kmem_cache_free(fragment_cachep, (left_fragment)); } } --- 48,57 ---- return; ! if (fragment_freed(left_fragment)) { fragment_to_free->offset = left_fragment->offset; fragment_to_free->compressed_size += left_fragment->compressed_size; list_del(&(left_fragment->list)); ! kmem_cache_free(fragment_cachep, (left_fragment)); } } *************** *** 66,79 **** remove_fragment_from_comp_cache(struct comp_cache_fragment * fragment) { ! remove_fragment_vswap(fragment); ! remove_fragment_from_hash_table(fragment); ! remove_fragment_from_lru_queue(fragment); list_del_init(&fragment->mapping_list); fragment->mapping->nrpages--; if (PageSwapCache(fragment)) num_swapper_fragments--; num_fragments--; comp_cache_free_space += fragment->compressed_size; ! fragment->comp_page->total_free_space += fragment->compressed_size; } --- 60,96 ---- remove_fragment_from_comp_cache(struct comp_cache_fragment * fragment) { ! if (!fragment->mapping) ! BUG(); ! ! /* fragments that have already been submitted to IO have a ! * non-null swp_buffer. Let's warn the swap buffer that this ! * page has been already removed by setting its fragment field ! * to NULL and also let's wait for the IO to finish. */ ! if (fragment->swp_buffer) { ! fragment->swp_buffer->fragment = NULL; ! wait_on_page(fragment->swp_buffer->page); ! } ! ! /* remove from mapping->{clean,dirty}_comp_pages */ list_del_init(&fragment->mapping_list); fragment->mapping->nrpages--; + + /* compressed fragments of swap cache are accounted in + * swapper_space.nrpages, so we need to account them + * separately to display sane values in /proc/meminfo */ if (PageSwapCache(fragment)) num_swapper_fragments--; num_fragments--; + + /* total free space in compressed cache */ comp_cache_free_space += fragment->compressed_size; ! ! /* used to know a fragment with zero count is actually freed ! * or waiting to be freed (like when sleeping to lock its page ! * in comp_cache_free()) */ ! fragment->mapping = NULL; ! ! /* debug */ ! fragment->index = 0; } *************** *** 91,99 **** fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); ! if (CompFragmentFreed(fragment)) { list_del(&(fragment->list)); comp_page->free_space += fragment->compressed_size; ! if (!CompFragmentTestandClearIO(fragment)) ! kmem_cache_free(fragment_cachep, (fragment)); continue; } --- 108,115 ---- fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); ! if (fragment_freed(fragment)) { list_del(&(fragment->list)); comp_page->free_space += fragment->compressed_size; ! kmem_cache_free(fragment_cachep, (fragment)); continue; } *************** *** 140,144 **** /* caller must hold comp_cache_lock lock */ ! void comp_cache_free_locked(struct comp_cache_fragment * fragment) { --- 156,160 ---- /* caller must hold comp_cache_lock lock */ ! 
static void comp_cache_free_locked(struct comp_cache_fragment * fragment) { *************** *** 166,169 **** --- 182,189 ---- remove_fragment_from_comp_cache(fragment); + fragment->comp_page->total_free_space += fragment->compressed_size; + + if (fragment->mapping) + BUG(); /* simple case - no free space *************** *** 188,198 **** } - /* we have used fragment(s) between the free space and the one we want to free */ - if (CompFragmentTestandSetFreed(fragment)) - BUG(); - - /* debug only */ - fragment->mapping = NULL; - merge_right_neighbour(fragment, next_fragment); merge_left_neighbour(fragment, previous_fragment); --- 208,211 ---- *************** *** 209,218 **** comp_page->free_space += fragment->compressed_size; ! /* is this fragment waiting for swap out? let's not free it ! * now, but let's tell swap out path that it does not need IO ! * anymore because it has been freed (maybe due to swapin) */ ! if (!CompFragmentTestandClearIO(fragment)) ! kmem_cache_free(fragment_cachep, (fragment)); ! out: add_comp_page_to_hash_table(comp_page); --- 222,226 ---- comp_page->free_space += fragment->compressed_size; ! kmem_cache_free(fragment_cachep, (fragment)); out: add_comp_page_to_hash_table(comp_page); *************** *** 220,250 **** /* caller must hold comp_cache_lock lock */ ! void comp_cache_free(struct comp_cache_fragment * fragment) { struct comp_cache_page * comp_page; - struct page * page; - int locked = 0; if (!fragment) BUG(); comp_page = fragment->comp_page; - if (comp_page->page) { - locked = !TryLockPage(comp_page->page); - page = comp_page->page; - } comp_cache_free_locked(fragment); ! /* steal the page if we need to shrink the cache. The ! * page will be unlocked in shrink_comp_cache() ! * function */ ! if (locked) { ! #ifdef CONFIG_COMP_DEMAND_RESIZE ! shrink_on_demand(comp_page); ! #else ! shrink_comp_cache(comp_page, 1); ! #endif ! } } --- 228,266 ---- /* caller must hold comp_cache_lock lock */ ! static void comp_cache_free(struct comp_cache_fragment * fragment) { struct comp_cache_page * comp_page; if (!fragment) BUG(); + remove_fragment_vswap(fragment); + remove_fragment_from_hash_table(fragment); + remove_fragment_from_lru_queue(fragment); comp_page = fragment->comp_page; + spin_unlock(&comp_cache_lock); + lock_page(comp_page->page); + spin_lock(&comp_cache_lock); + comp_cache_free_locked(fragment); ! /* steal the page if we need to shrink the cache. The page ! * will be unlocked in shrink_comp_cache() (even if shrinking ! * on demand, shrink_on_demand() will call it anyway) */ ! shrink_on_demand(comp_page); ! } ! ! /* caller must hold comp_cache_lock lock */ ! int ! __comp_cache_free(struct comp_cache_fragment * fragment) { ! int zero; ! ! if (!fragment_count(fragment)) ! BUG(); ! ! if ((zero = put_fragment_testzero(fragment))) ! comp_cache_free(fragment); ! 
return zero; } *************** *** 316,321 **** --- 332,339 ---- goto backout; + spin_lock(&comp_cache_lock); remove_fragment_vswap(fragment); remove_fragment_from_hash_table(fragment); + spin_unlock(&comp_cache_lock); /* remove all those ptes from vswap struct */ *************** *** 357,363 **** --- 375,383 ---- fragment->index = entry.val; + spin_lock(&comp_cache_lock); add_fragment_to_lru_queue(fragment); add_fragment_to_hash_table(fragment); UnlockPage(fragment->comp_page->page); + spin_unlock(&comp_cache_lock); out_unlock: spin_unlock(&virtual_swap_list); Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.59 retrieving revision 1.60 diff -C2 -r1.59 -r1.60 *** main.c 18 Jul 2002 21:31:08 -0000 1.59 --- main.c 28 Jul 2002 15:47:04 -0000 1.60 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-07-18 13:19:51 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-07-28 10:04:28 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 51,112 **** spinlock_t comp_cache_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; ! inline void ! compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask, int priority) ! { ! int write, ret; ! ! write = !!page->buffers; ! #ifdef CONFIG_COMP_PAGE_CACHE ! write |= shmem_page(page); ! #else ! write |= !PageSwapCache(page); ! #endif ! if (write) { ! /* if gfp_mask does not allow us to write out the ! * page, unlock the page and set all the bits back */ ! if (!(gfp_mask & __GFP_FS)) { ! UnlockPage(page); ! goto set_bits_back; ! } ! writepage(page); ! return; ! } ! ! if (page->buffers) ! BUG(); ! ! spin_lock(&comp_cache_lock); ! ret = compress_page(page, 1, gfp_mask, priority); ! spin_unlock(&comp_cache_lock); ! ! /* failed to compress the dirty page? set the bits back */ ! if (ret) ! return; ! set_bits_back: ! SetPageDirty(page); ! ClearPageLaunder(page); ! } ! ! inline int ! compress_clean_page(struct page * page, unsigned int gfp_mask, int priority) ! { ! int ret; ! ! if (page->buffers) ! BUG(); ! ! #ifndef CONFIG_COMP_PAGE_CACHE ! if (!PageSwapCache(page)) ! return 1; ! #endif ! spin_lock(&comp_cache_lock); ! ret = compress_page(page, 0, gfp_mask, priority); ! spin_unlock(&comp_cache_lock); ! ! return ret; ! } ! ! int ! compress_page(struct page * page, int dirty, unsigned int gfp_mask, int priority) { struct comp_cache_page * comp_page; --- 51,56 ---- spinlock_t comp_cache_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; ! static int ! compress_page(struct page * page, int state, unsigned int gfp_mask, int priority) { struct comp_cache_page * comp_page; *************** *** 124,134 **** if (!PageLocked(page)) BUG(); ! if (PageTestandClearCompCache(page)) { ! if (!dirty) BUG(); ! __invalidate_comp_cache(page->mapping, page->index); } ! comp_size = compress(current_compressed_page = page, buffer_compressed = (unsigned long *) &buffer_compressed1, &algorithm, dirty); comp_page = get_comp_cache_page(page, comp_size, &fragment, gfp_mask, priority); --- 68,79 ---- if (!PageLocked(page)) BUG(); ! if (!find_comp_page(page->mapping, page->index, &fragment)) { ! if (!CompFragmentToBeFreed(fragment)) BUG(); ! return 0; } + ! 
comp_size = compress(current_compressed_page = page, buffer_compressed = (unsigned long *) &buffer_compressed1, &algorithm, state); comp_page = get_comp_cache_page(page, comp_size, &fragment, gfp_mask, priority); *************** *** 148,152 **** /* fix mapping stuff */ page->mapping->nrpages++; ! if (!dirty) { list_add(&fragment->mapping_list, &fragment->mapping->clean_comp_pages); goto copy_page; --- 93,97 ---- /* fix mapping stuff */ page->mapping->nrpages++; ! if (state != DIRTY_PAGE) { list_add(&fragment->mapping_list, &fragment->mapping->clean_comp_pages); goto copy_page; *************** *** 164,168 **** if (compressed(fragment)) { if (current_compressed_page != page) ! compress(page, buffer_compressed = (unsigned long *) &buffer_compressed2, &algorithm, dirty); memcpy(page_address(comp_page->page) + fragment->offset, buffer_compressed , fragment->compressed_size); } else --- 109,113 ---- if (compressed(fragment)) { if (current_compressed_page != page) ! compress(page, buffer_compressed = (unsigned long *) &buffer_compressed2, &algorithm, state); memcpy(page_address(comp_page->page) + fragment->offset, buffer_compressed , fragment->compressed_size); } else *************** *** 175,178 **** --- 120,181 ---- } + int + compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask, int priority) + { + int write, ret = 0; + + write = !!page->buffers; + #ifdef CONFIG_COMP_PAGE_CACHE + write |= shmem_page(page); + #else + write |= !PageSwapCache(page); + #endif + if (write) { + /* if gfp_mask does not allow us to write out the + * page, unlock the page and set all the bits back */ + if (!(gfp_mask & __GFP_FS)) + goto set_bits_back; + writepage(page); + return 0; + } + + if (page->buffers) + BUG(); + + spin_lock(&comp_cache_lock); + ret = compress_page(page, DIRTY_PAGE, gfp_mask, priority); + spin_unlock(&comp_cache_lock); + + /* failed to compress the dirty page? 
set the bits back */ + if (!ret) { + set_bits_back: + spin_lock(&pagemap_lru_lock); + SetPageDirty(page); + ClearPageLaunder(page); + UnlockPage(page); + spin_unlock(&pagemap_lru_lock); + } + return ret; + } + + int + compress_clean_page(struct page * page, unsigned int gfp_mask, int priority) + { + int ret; + + if (page->buffers) + BUG(); + + #ifndef CONFIG_COMP_PAGE_CACHE + if (!PageSwapCache(page)) + return 1; + #endif + spin_lock(&comp_cache_lock); + ret = compress_page(page, CLEAN_PAGE, gfp_mask, priority); + spin_unlock(&comp_cache_lock); + + return ret; + } + extern void __init comp_cache_hash_init(void); extern void __init comp_cache_swp_buffer_init(void); *************** *** 206,214 **** int i; - #ifdef CONFIG_COMP_DEMAND_RESIZE min_num_comp_pages = page_to_comp_page(48); - #else - min_num_comp_pages = page_to_comp_page((unsigned long) (num_physpages * 0.05)); - #endif if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > num_physpages * 0.5) --- 209,213 ---- *************** *** 264,272 **** nr_pages = memparse(str, &endp) >> (PAGE_SHIFT + comp_page_order); - #ifdef CONFIG_COMP_DEMAND_RESIZE max_num_comp_pages = nr_pages; - #else - init_num_comp_pages = nr_pages; - #endif return 1; } --- 263,267 ---- Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.21 retrieving revision 1.22 diff -C2 -r1.21 -r1.22 *** proc.c 16 Jul 2002 21:58:08 -0000 1.21 --- proc.c 28 Jul 2002 15:47:04 -0000 1.22 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-07-16 16:32:43 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-07-28 12:01:51 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 32,76 **** static int current_algorithm = 0; ! /* data used for compression */ ! static struct comp_alg_data comp_data; ! static WK_word compresseddata[1200]; ! static WK_word decompresseddata[1200]; ! static WK_word compressedtempTagsArray[300]; ! static WK_word compressedtempQPosArray[300]; ! static WK_word compressedtempLowBitsArray[1200]; ! ! static char compressedhashLookupTable_WKdm [] = HASH_LOOKUP_TABLE_CONTENTS_WKDM; ! static unsigned int compressedhashLookupTable_WK4x4 [] = HASH_LOOKUP_TABLE_CONTENTS_WK4x4; ! static DictionaryElement compresseddictionary[DICTIONARY_SIZE]; ! ! lzo_byte * wrkmem; enum { CC_SIZE=1, ! CC_ACTUAL_SIZE=2, ! CC_ALGORITHM=3 }; ctl_table comp_cache_table[] = { ! {CC_SIZE, "size", &new_num_comp_pages, sizeof(int), ! #ifdef CONFIG_COMP_DEMAND_RESIZE ! 0444, ! #else ! 0644, ! #endif ! NULL, ! &proc_dointvec_minmax, &sysctl_intvec, NULL, &min_num_comp_pages, ! &max_num_comp_pages}, ! {CC_ACTUAL_SIZE, "actual_size", &num_comp_pages, sizeof(int), 0444, NULL, &proc_dointvec}, {CC_ALGORITHM, "algorithm", ¤t_algorithm, sizeof(int), 0644, NULL, ! &proc_dointvec_minmax, &sysctl_intvec, NULL, &algorithm_min, ! &algorithm_max}, {0} }; ! inline void ! comp_cache_update_page_stats(struct page * page, int dirty) { #ifdef CONFIG_COMP_PAGE_CACHE --- 32,54 ---- static int current_algorithm = 0; ! struct comp_alg_data comp_data; ! static spinlock_t comp_data_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; enum { CC_SIZE=1, ! CC_ALGORITHM=2 }; ctl_table comp_cache_table[] = { ! {CC_SIZE, "size", &num_comp_pages, sizeof(int), 0444, NULL, &proc_dointvec}, {CC_ALGORITHM, "algorithm", ¤t_algorithm, sizeof(int), 0644, NULL, ! 
&proc_dointvec_minmax, &sysctl_intvec, NULL, &algorithm_min, &algorithm_max}, {0} }; ! static inline void ! comp_cache_update_page_stats(struct page * page, int state) { #ifdef CONFIG_COMP_PAGE_CACHE *************** *** 80,84 **** #endif compression_algorithms[current_algorithm].stats.comp_swap++; ! if (dirty) compression_algorithms[current_algorithm].stats.comp_dirty++; else --- 58,62 ---- #endif compression_algorithms[current_algorithm].stats.comp_swap++; ! if (state == DIRTY_PAGE) compression_algorithms[current_algorithm].stats.comp_dirty++; else *************** *** 87,91 **** static void ! comp_cache_update_comp_stats(struct stats_page * comp_page_stats, struct page * page, int dirty) { struct comp_alg * algorithm = &compression_algorithms[current_algorithm]; --- 65,69 ---- static void ! comp_cache_update_comp_stats(struct stats_page * comp_page_stats, struct page * page, int state) { struct comp_alg * algorithm = &compression_algorithms[current_algorithm]; *************** *** 113,117 **** stats->comp_cycles_sum += comp_page_stats->comp_cycles; ! comp_cache_update_page_stats(page, dirty); } --- 91,95 ---- stats->comp_cycles_sum += comp_page_stats->comp_cycles; ! comp_cache_update_page_stats(page, state); } *************** *** 180,189 **** static unsigned int ! lzo_wrapper_compress(unsigned long * from, unsigned long * to, unsigned int words, void * page) { int error; lzo_uint out_len; ! error = lzo1x_1_compress((lzo_byte *) from, words * sizeof(unsigned long), (lzo_byte *) to, &out_len, wrkmem); if (error != LZO_E_OK) --- 158,167 ---- static unsigned int ! lzo_wrapper_compress(unsigned long * from, unsigned long * to, unsigned int words, struct comp_alg_data * data) { int error; lzo_uint out_len; ! error = lzo1x_1_compress((lzo_byte *) from, words * sizeof(unsigned long), (lzo_byte *) to, &out_len, data->wrkmem); if (error != LZO_E_OK) *************** *** 194,203 **** static void ! lzo_wrapper_decompress(unsigned long * from, unsigned long * to, unsigned int words, void * page) { ! int error; lzo_uint new_len; ! error = lzo1x_decompress((lzo_byte *) from, ((struct comp_alg_data *) page)->compressed_size, (lzo_byte *) to, &new_len, NULL); if (error != LZO_E_OK || new_len != PAGE_SIZE) { --- 172,181 ---- static void ! lzo_wrapper_decompress(unsigned long * from, unsigned long * to, unsigned int words, struct comp_alg_data * data) { ! int error; lzo_uint new_len; ! error = lzo1x_decompress((lzo_byte *) from, data->compressed_size, (lzo_byte *) to, &new_len, NULL); if (error != LZO_E_OK || new_len != PAGE_SIZE) { *************** *** 208,212 **** int ! compress(struct page * page, void * to, unsigned short * algorithm, int dirty) { struct stats_page comp_page_stats; --- 186,190 ---- int ! compress(struct page * page, void * to, unsigned short * algorithm, int state) { struct stats_page comp_page_stats; *************** *** 222,230 **** } #endif ! START_ZEN_TIME(comp_page_stats.myTimer); ! comp_page_stats.comp_size = compression_algorithms[current_algorithm].comp(from, to, PAGE_SIZE/4, (void *)(&comp_data)); ! STOP_ZEN_TIME(comp_page_stats.myTimer, comp_page_stats.comp_cycles); ! comp_cache_update_comp_stats(&comp_page_stats, page, dirty); *algorithm = current_algorithm; --- 200,210 ---- } #endif ! ! spin_lock(&comp_data_lock); START_ZEN_TIME(comp_page_stats.myTimer); ! comp_page_stats.comp_size = compression_algorithms[current_algorithm].comp(from, to, PAGE_SIZE/4, &comp_data); ! STOP_ZEN_TIME(comp_page_stats.myTimer, comp_page_stats.comp_cycles); ! spin_unlock(&comp_data_lock); ! 
comp_cache_update_comp_stats(&comp_page_stats, page, state); *algorithm = current_algorithm; *************** *** 249,256 **** comp_data.compressed_size = fragment->compressed_size; } ! START_ZEN_TIME(comp_page_stats.myTimer); ! compression_algorithms[algorithm].decomp(from, to, PAGE_SIZE/4, (void *)(&comp_data)); STOP_ZEN_TIME(comp_page_stats.myTimer, comp_page_stats.decomp_cycles); comp_cache_update_decomp_stats(algorithm, &comp_page_stats, fragment); } --- 229,238 ---- comp_data.compressed_size = fragment->compressed_size; } ! ! spin_lock(&comp_data_lock); START_ZEN_TIME(comp_page_stats.myTimer); ! compression_algorithms[algorithm].decomp(from, to, PAGE_SIZE/4, &comp_data); STOP_ZEN_TIME(comp_page_stats.myTimer, comp_page_stats.decomp_cycles); + spin_unlock(&comp_data_lock); comp_cache_update_decomp_stats(algorithm, &comp_page_stats, fragment); } *************** *** 261,275 **** { int i; - - /* initialize our data for the `test' compressed_page */ - comp_data.compressed_data = compresseddata; - comp_data.decompressed_data = decompresseddata; - comp_data.hashLookupTable_WKdm = compressedhashLookupTable_WKdm; - comp_data.hashLookupTable_WK4x4 = compressedhashLookupTable_WK4x4; - comp_data.dictionary = compresseddictionary; - comp_data.tempTagsArray = compressedtempTagsArray; - comp_data.tempQPosArray = compressedtempQPosArray; - comp_data.tempLowBitsArray = compressedtempLowBitsArray; for (i = 0; i < NUM_ALGORITHMS; i++) { memset((void *) &compression_algorithms[i], 0, sizeof(struct stats_summary)); --- 243,261 ---- { int i; + /* data structures for WKdm and WK4x4 */ + comp_data.tempTagsArray = kmalloc(300 * sizeof(WK_word), GFP_ATOMIC); + comp_data.tempQPosArray = kmalloc(300 * sizeof(WK_word), GFP_ATOMIC); + comp_data.tempLowBitsArray = kmalloc(1200 * sizeof(WK_word), GFP_ATOMIC); + + if (!comp_data.tempTagsArray || !comp_data.tempQPosArray || !comp_data.tempLowBitsArray) + panic("comp_cache_algorithms_init(): cannot allocate structures for WKdm/WK4x4"); + + /* data structure (dictionary) for LZO */ + comp_data.wrkmem = (lzo_byte *) kmalloc(LZO1X_1_MEM_COMPRESS, GFP_ATOMIC); + if (!comp_data.wrkmem) + panic("comp_cache_algorithms_init(): cannot allocate dictionary for LZO"); + + /* stats for algorithms */ for (i = 0; i < NUM_ALGORITHMS; i++) { memset((void *) &compression_algorithms[i], 0, sizeof(struct stats_summary)); *************** *** 278,282 **** compression_algorithms[i].stats.decomp_cycles_min = INF; } ! strcpy(compression_algorithms[WKDM_IDX].name, "WKdm"); compression_algorithms[WKDM_IDX].comp = WKdm_compress; --- 264,269 ---- compression_algorithms[i].stats.decomp_cycles_min = INF; } ! ! /* compression algorithms */ strcpy(compression_algorithms[WKDM_IDX].name, "WKdm"); compression_algorithms[WKDM_IDX].comp = WKdm_compress; *************** *** 291,298 **** compression_algorithms[LZO_IDX].decomp = lzo_wrapper_decompress; - wrkmem = (lzo_byte *) kmalloc(LZO1X_1_MEM_COMPRESS, GFP_ATOMIC); - if (!wrkmem) - panic("comp_cache_algorithms_init(): cannot allocate wrkmem (for LZO)"); - if (!current_algorithm || current_algorithm < algorithm_min || current_algorithm > algorithm_max) current_algorithm = WKDM_IDX; --- 278,281 ---- *************** *** 420,423 **** --- 403,408 ---- memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); + spin_lock(&comp_cache_lock); + total1 = free_space_count(0, num_fragments); length += sprintf(page + length, *************** *** 439,443 **** HIST_PRINTK); } ! vfree(num_fragments); out: --- 424,429 ---- HIST_PRINTK); } ! ! 
spin_unlock(&comp_cache_lock); vfree(num_fragments); out: *************** *** 469,474 **** for (i = FRAG_INTERVAL * 2; i < COMP_PAGE_SIZE; i += FRAG_INTERVAL) length += sprintf(page + length, " -%d", i); ! length += sprintf(page + length, " -%d\n", (int)COMP_PAGE_SIZE); for (i = 1; i < free_space_hash_size; i += 2) { memset((void *) frag_space, 0, (COMP_PAGE_SIZE/FRAG_INTERVAL + 1) * sizeof(unsigned long)); --- 455,462 ---- for (i = FRAG_INTERVAL * 2; i < COMP_PAGE_SIZE; i += FRAG_INTERVAL) length += sprintf(page + length, " -%d", i); ! length += sprintf(page + length, " %d\n", (int)COMP_PAGE_SIZE); + spin_lock(&comp_cache_lock); + for (i = 1; i < free_space_hash_size; i += 2) { memset((void *) frag_space, 0, (COMP_PAGE_SIZE/FRAG_INTERVAL + 1) * sizeof(unsigned long)); *************** *** 479,486 **** length += sprintf(page + length, ! "%4d - %4d: %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", (i-1)*free_space_interval+1, (i+1)*free_space_interval<COMP_PAGE_SIZE?(i+1)*free_space_interval:(int)COMP_PAGE_SIZE, total1 + total2, FRAG_PRINTK); } vfree(frag_space); --- 467,476 ---- length += sprintf(page + length, ! "%4d - %4d: %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", (i-1)*free_space_interval+1, (i+1)*free_space_interval<COMP_PAGE_SIZE?(i+1)*free_space_interval:(int)COMP_PAGE_SIZE, total1 + total2, FRAG_PRINTK); } + + spin_unlock(&comp_cache_lock); vfree(frag_space); Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.47 retrieving revision 1.48 diff -C2 -r1.47 -r1.48 *** swapin.c 18 Jul 2002 21:31:08 -0000 1.47 --- swapin.c 28 Jul 2002 15:47:04 -0000 1.48 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-07-18 17:59:01 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-07-27 18:55:37 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 19,39 **** int - __invalidate_comp_cache(struct address_space * mapping, unsigned long offset) - { - struct comp_cache_fragment * fragment; - int err = find_comp_page(mapping, offset, &fragment); - - if (!err) - comp_cache_free(fragment); - return err; - } - - int invalidate_comp_cache(struct address_space * mapping, unsigned long offset) { int err; spin_lock(&comp_cache_lock); ! err = __invalidate_comp_cache(mapping, offset); spin_unlock(&comp_cache_lock); return err; --- 19,31 ---- int invalidate_comp_cache(struct address_space * mapping, unsigned long offset) { + struct comp_cache_fragment * fragment; int err; spin_lock(&comp_cache_lock); ! err = find_comp_page(mapping, offset, &fragment); ! if (!err) ! drop_fragment(fragment); spin_unlock(&comp_cache_lock); return err; *************** *** 69,73 **** __set_page_dirty(page); } ! comp_cache_free(fragment); out_unlock: --- 61,65 ---- __set_page_dirty(page); } ! 
drop_fragment(fragment); out_unlock: *************** *** 83,86 **** --- 75,80 ---- if (!fragment) BUG(); + if (!fragment_count(fragment)) + BUG(); comp_page = fragment->comp_page; if (!comp_page->page) *************** *** 96,103 **** memcpy(page_address(page), page_address(comp_page->page) + fragment->offset, PAGE_SIZE); - PageSetCompCache(page); SetPageUptodate(page); } int read_comp_cache(struct address_space *mapping, unsigned long offset, struct page * page) --- 90,97 ---- memcpy(page_address(page), page_address(comp_page->page) + fragment->offset, PAGE_SIZE); SetPageUptodate(page); } + /* caller may hold pagecache_lock (__find_lock_page()) */ int read_comp_cache(struct address_space *mapping, unsigned long offset, struct page * page) *************** *** 106,109 **** --- 100,106 ---- int err; + if (!PageLocked(page)) + BUG(); + spin_lock(&comp_cache_lock); err = find_comp_page(mapping, offset, &fragment); *************** *** 114,144 **** goto out_unlock; ! if (!PageLocked(page)) ! BUG(); ! if (TryLockPage(fragment->comp_page->page)) BUG(); ! /* move the fragment to the back of the lru list */ remove_fragment_from_lru_queue(fragment); add_fragment_to_lru_queue(fragment); decompress_fragment(fragment, page); ! /* update fault in stats */ comp_cache_update_faultin_stats(fragment); - #ifdef CONFIG_COMP_DEMAND_RESIZE - PageClearCompCache(page); - if (CompFragmentTestandClearDirty(fragment)) __set_page_dirty(page); UnlockPage(fragment->comp_page->page); ! comp_cache_free(fragment); ! #else ! UnlockPage(fragment->comp_page->page); ! #endif ! ! UnlockPage(page); out_unlock: spin_unlock(&comp_cache_lock); --- 111,138 ---- goto out_unlock; ! if (!fragment_count(fragment)) BUG(); + + get_fragment(fragment); ! /* move the fragment to the back of the lru list */ remove_fragment_from_lru_queue(fragment); add_fragment_to_lru_queue(fragment); + + spin_unlock(&comp_cache_lock); + lock_page(fragment->comp_page->page); decompress_fragment(fragment, page); ! spin_lock(&comp_cache_lock); comp_cache_update_faultin_stats(fragment); if (CompFragmentTestandClearDirty(fragment)) __set_page_dirty(page); UnlockPage(fragment->comp_page->page); ! put_fragment(fragment); ! ! drop_fragment(fragment); out_unlock: spin_unlock(&comp_cache_lock); *************** *** 146,150 **** } ! extern struct page * find_and_dirty_page(struct address_space *mapping, unsigned long offset, struct page **hash); static void --- 140,145 ---- } ! #ifdef CONFIG_COMP_PAGE_CACHE ! extern struct page * find_and_dirty_page(struct page * new_page, struct address_space *mapping, unsigned long offset, struct page **hash); static void *************** *** 160,164 **** if ((fragment->index >= start) || (partial && (fragment->index + 1) == start)) ! comp_cache_free(fragment); } --- 155,159 ---- if ((fragment->index >= start) || (partial && (fragment->index + 1) == start)) ! 
drop_fragment(fragment); } *************** *** 166,169 **** --- 161,165 ---- } + /* caller must hold pagecache_lock */ void truncate_comp_pages(struct address_space * mapping, unsigned long start, unsigned partial) *************** *** 171,177 **** truncate_list_comp_pages(&mapping->clean_comp_pages, start, partial); truncate_list_comp_pages(&mapping->dirty_comp_pages, start, partial); - truncate_list_comp_pages(&mapping->locked_comp_pages, start, partial); } void invalidate_comp_pages(struct address_space * mapping) --- 167,173 ---- truncate_list_comp_pages(&mapping->clean_comp_pages, start, partial); truncate_list_comp_pages(&mapping->dirty_comp_pages, start, partial); } + /* caller must hold pagecache_lock */ void invalidate_comp_pages(struct address_space * mapping) *************** *** 180,185 **** } void ! wait_all_comp_pages(struct address_space * mapping) { struct page * page; --- 176,182 ---- } + /* caller must hold pagecache_lock */ void ! wait_comp_pages(struct address_space * mapping) { struct page * page; *************** *** 190,199 **** list_del_init(&page->list); wait_on_page(page); } } void ! lookup_all_comp_pages(struct address_space * mapping) { struct page **hash; --- 187,199 ---- list_del_init(&page->list); + spin_unlock(&pagecache_lock); wait_on_page(page); + spin_lock(&pagecache_lock); } } + /* caller must hold pagecache_lock */ void ! lookup_comp_pages(struct address_space * mapping) { struct page **hash; *************** *** 206,215 **** goto out_unlock; ! page = page_cache_alloc(mapping); if (!page) goto out_unlock; ! if (list_empty(&mapping->dirty_comp_pages)) goto out_release; fragment = list_entry(mapping->dirty_comp_pages.next, struct comp_cache_fragment, mapping_list); --- 206,241 ---- goto out_unlock; ! spin_unlock(&pagecache_lock); ! /*** ! * This function may be called from the following code path: ! * ! * __sync_one() -> filemap_fdatasync() ! * ! * This code path tries to sync an inode (and keeps it locked ! * while it is syncing). However, that inode can be also in ! * the clear path (clear_inode() function, called in the exit ! * process path) which will lock the super block and then wait ! * for the inode, if locked (what happens when syncing it like ! * here). ! * ! * Since the allocation path may write pages, which may need ! * to lock the same super block, it will deadlock, because the ! * super block is locked by the exit path explained above. So, ! * we end up not being able to allocate the page (in order to ! * finish this function and unlock the inode) _and_ the super ! * block won't be unlocked since the inode doesn't get ! * unlocked either. ! * ! * That's why the page must be allocated with GFP_NOFS mask. ! */ ! page = alloc_page(GFP_NOFS); if (!page) goto out_unlock; ! spin_lock(&pagecache_lock); ! if (list_empty(&mapping->dirty_comp_pages)) { ! spin_unlock(&pagecache_lock); goto out_release; + } fragment = list_entry(mapping->dirty_comp_pages.next, struct comp_cache_fragment, mapping_list); *************** *** 221,237 **** list_del(&fragment->mapping_list); list_add(&fragment->mapping_list, &fragment->mapping->clean_comp_pages); ! ! if (add_to_page_cache_unique(page, mapping, fragment->index, hash)) { ! if (!find_and_dirty_page(mapping, fragment->index, hash)) ! BUG(); goto out_release; - } ! if (TryLockPage(fragment->comp_page->page)) ! BUG(); decompress_fragment(fragment, page); UnlockPage(fragment->comp_page->page); ! 
comp_cache_free(fragment); PageClearCompCache(page); --- 247,270 ---- list_del(&fragment->mapping_list); list_add(&fragment->mapping_list, &fragment->mapping->clean_comp_pages); ! ! /* Checks if the page has been added to the page cache and add ! * this new page to the cache if the former condition is ! * false. Dirty the page in the page cache otherwise. */ ! if (find_and_dirty_page(page, mapping, fragment->index, hash)) goto out_release; ! get_fragment(fragment); ! spin_unlock(&pagecache_lock); ! spin_unlock(&comp_cache_lock); ! ! lock_page(fragment->comp_page->page); decompress_fragment(fragment, page); UnlockPage(fragment->comp_page->page); ! spin_lock(&comp_cache_lock); ! put_fragment(fragment); ! ! /* effectively free it */ ! drop_fragment(fragment); PageClearCompCache(page); *************** *** 244,247 **** --- 277,281 ---- spin_unlock(&comp_cache_lock); } + #endif /* Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.65 retrieving revision 1.66 diff -C2 -r1.65 -r1.66 *** swapout.c 18 Jul 2002 11:54:48 -0000 1.65 --- swapout.c 28 Jul 2002 15:47:04 -0000 1.66 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-17 18:48:02 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-07-28 11:33:53 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 23,26 **** --- 23,30 ---- struct list_head swp_free_buffer_head, swp_used_buffer_head; + static spinlock_t swap_buffer_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; + + #define SWP_BUFFER_PRIORITY 6 + static int refill_swp_buffer(unsigned int gfp_mask, int priority) *************** *** 39,102 **** swp_buffer = list_entry(swp_buffer_lh, struct swp_buffer, list); buffer_page = swp_buffer->page; - fragment = swp_buffer->fragment; - list_del(swp_buffer_lh); - list_add(swp_buffer_lh, &swp_used_buffer_head); if (TryLockPage(buffer_page)) { if (!wait) ! continue; ! list_del_init(swp_buffer_lh); lock_page(buffer_page); } ! /* its fragment was added to locked_pages list below, ! * right before being returned to the caller, so let's ! * remove it now from any mapping->*_pages list */ ! list_del(&buffer_page->list); if (buffer_page->buffers) { ! list_del_init(swp_buffer_lh); if (!try_to_free_buffers(buffer_page, gfp_mask)) { - list_add(swp_buffer_lh, &swp_used_buffer_head); - - list_add(&buffer_page->list, &fragment->mapping->locked_comp_pages); UnlockPage(buffer_page); ! continue; } } ! /*** ! * Has the fragment we are swapping out been already ! * freed? Given that we were on IO process, ! * comp_cache_free() didn't free the fragment struct, ! * so let's do it now. ! */ ! if (!CompFragmentTestandClearIO(fragment)) { ! kmem_cache_free(fragment_cachep, (fragment)); ! goto out; } ! /*** ! * In the case it is waiting for merge in ! * comp_cache_free(), we don't have to free it. To be ! * clearer, it has been freed, except its data ! * structure, what will be freed when merged in ! * comp_cache_free() ! */ ! if (CompFragmentFreed(fragment)) ! goto out; ! ! /* it's not swapped out, so let' free it */ ! comp_cache_free(fragment); ! ! out: ! swp_buffer->fragment = NULL; ! ! list_del(swp_buffer_lh); ! list_add_tail(swp_buffer_lh, &swp_free_buffer_head); ! 
UnlockPage(buffer_page); return 1; } --- 43,100 ---- swp_buffer = list_entry(swp_buffer_lh, struct swp_buffer, list); buffer_page = swp_buffer->page; list_del(swp_buffer_lh); if (TryLockPage(buffer_page)) { if (!wait) ! goto add_to_dirty; ! spin_unlock(&swap_buffer_lock); lock_page(buffer_page); + spin_lock(&swap_buffer_lock); } ! /* remove from buffer_page->mapping->locked_comp_pages */ ! list_del_init(&buffer_page->list); if (buffer_page->buffers) { ! spin_unlock(&swap_buffer_lock); if (!try_to_free_buffers(buffer_page, gfp_mask)) { UnlockPage(buffer_page); ! spin_lock(&swap_buffer_lock); } + spin_lock(&swap_buffer_lock); } ! fragment = swp_buffer->fragment; ! ! /* A swap buffer page that has been set to dirty means ! * that the writepage() function failed, so we cannot ! * free the fragment and should simply backout. */ ! if (PageDirty(buffer_page)) { ! if (fragment) { ! spin_lock(&pagecache_lock); ! list_del(&fragment->mapping_list); ! list_add(&fragment->mapping_list, &fragment->mapping->dirty_comp_pages); ! spin_unlock(&pagecache_lock); ! ! CompFragmentSetDirty(fragment); ! } ! goto add_to_free; } ! /* A clean swap buffer page means that the writepage() ! * didn't failed, so we can go on freeing the fragment ! * (if still needed). */ ! spin_lock(&comp_cache_lock); ! if (fragment) { ! fragment->swp_buffer = NULL; ! drop_fragment(fragment); ! } ! spin_unlock(&comp_cache_lock); ! add_to_free: UnlockPage(buffer_page); + list_add_tail(swp_buffer_lh, &swp_free_buffer_head); return 1; + add_to_dirty: + list_add(swp_buffer_lh, &swp_used_buffer_head); } *************** *** 108,112 **** --- 106,112 ---- if (unlikely(current->need_resched)) { __set_current_state(TASK_RUNNING); + spin_unlock(&swap_buffer_lock); schedule(); + spin_lock(&swap_buffer_lock); } goto try_again; *************** *** 123,141 **** * If there's a free buffer page, it will lock the page and * return. Otherwise we may sleep to get the lock. - * */ ! static int ! find_free_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask, struct swp_buffer ** swp_buffer_out) { struct page * buffer_page; struct list_head * swp_buffer_lh; ! struct swp_buffer * swp_buffer; ! int priority = 6, error = 0; if (!fragment) BUG(); ! CompFragmentSetIO(fragment); ! if (!list_empty(&swp_free_buffer_head)) goto get_free_buffer; --- 123,140 ---- * If there's a free buffer page, it will lock the page and * return. Otherwise we may sleep to get the lock. */ ! static struct swp_buffer * ! find_free_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask) { struct page * buffer_page; struct list_head * swp_buffer_lh; ! struct swp_buffer * swp_buffer = NULL; ! int priority = SWP_BUFFER_PRIORITY; if (!fragment) BUG(); ! spin_lock(&swap_buffer_lock); ! if (!list_empty(&swp_free_buffer_head)) goto get_free_buffer; *************** *** 144,165 **** refill_swp_buffer(gfp_mask, priority--); - error = -ENOENT; - /* Failed to get a free swap buffer. Probably gfp_mask does * not allow buffer sync in refill_swp_buffer() function. */ ! if (list_empty(&swp_free_buffer_head)) { ! error = -ENOMEM; ! goto failed; ! } ! ! /* Fragment totally freed. Free its struct to avoid leakage. */ ! if (!CompFragmentIO(fragment)) { ! kmem_cache_free(fragment_cachep, (fragment)); ! goto failed; ! } ! ! /* Fragment partially freed (to be merged). Nothing to do. */ ! if (CompFragmentFreed(fragment)) ! goto failed; get_free_buffer: --- 143,150 ---- refill_swp_buffer(gfp_mask, priority--); /* Failed to get a free swap buffer. 
Probably gfp_mask does * not allow buffer sync in refill_swp_buffer() function. */ ! if (list_empty(&swp_free_buffer_head)) ! goto out_unlock; get_free_buffer: *************** *** 172,206 **** list_del(swp_buffer_lh); - list_add(swp_buffer_lh, &swp_used_buffer_head); swp_buffer->fragment = fragment; buffer_page->index = fragment->index; buffer_page->mapping = fragment->mapping; list_add(&buffer_page->list, &fragment->mapping->locked_comp_pages); ! (*swp_buffer_out) = swp_buffer; ! out: ! return error; ! ! failed: ! CompFragmentClearIO(fragment); ! goto out; } extern void decompress_fragment(struct comp_cache_fragment *, struct page *); ! static int ! decompress_to_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask, struct swp_buffer ** swp_buffer_out) { struct page * buffer_page; struct swp_buffer * swp_buffer; - int error; - - if (fragment->comp_page->page->buffers) - BUG(); ! error = find_free_swp_buffer(fragment, gfp_mask, &swp_buffer); ! if (error) ! goto out; buffer_page = swp_buffer->page; --- 157,191 ---- list_del(swp_buffer_lh); + spin_lock(&comp_cache_lock); swp_buffer->fragment = fragment; + fragment->swp_buffer = swp_buffer; buffer_page->index = fragment->index; buffer_page->mapping = fragment->mapping; + spin_unlock(&comp_cache_lock); + spin_lock(&pagecache_lock); + if (!fragment->mapping) + BUG(); list_add(&buffer_page->list, &fragment->mapping->locked_comp_pages); ! spin_unlock(&pagecache_lock); ! ! list_add(swp_buffer_lh, &swp_used_buffer_head); ! out_unlock: ! spin_unlock(&swap_buffer_lock); ! return swp_buffer; } extern void decompress_fragment(struct comp_cache_fragment *, struct page *); ! static struct swp_buffer * ! decompress_to_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask) { struct page * buffer_page; struct swp_buffer * swp_buffer; ! swp_buffer = find_free_swp_buffer(fragment, gfp_mask); ! if (!swp_buffer) ! return NULL; buffer_page = swp_buffer->page; *************** *** 208,221 **** if (!buffer_page) BUG(); - if (TryLockPage(fragment->comp_page->page)) - BUG(); decompress_fragment(fragment, buffer_page); - buffer_page->flags &= (1 << PG_locked); - UnlockPage(fragment->comp_page->page); ! (*swp_buffer_out) = swp_buffer; ! out: ! return error; } --- 193,203 ---- if (!buffer_page) BUG(); + lock_page(fragment->comp_page->page); decompress_fragment(fragment, buffer_page); UnlockPage(fragment->comp_page->page); ! ! buffer_page->flags &= (1 << PG_locked); ! return swp_buffer; } *************** *** 228,240 **** int (*writepage)(struct page *); struct list_head * fragment_lh; ! int maxscan, nrpages, swap_cache_page, error; struct comp_cache_fragment * fragment; struct swp_buffer * swp_buffer; - struct page * page; swp_entry_t entry; nrpages = SWAP_CLUSTER_MAX; maxscan = max((int) (num_fragments/priority), (int) (nrpages * 1.5)); ! while (!list_empty(&lru_queue) && maxscan--) { if (unlikely(current->need_resched)) { --- 210,223 ---- int (*writepage)(struct page *); struct list_head * fragment_lh; ! int ret, maxscan, nrpages, swap_cache_page; struct comp_cache_fragment * fragment; struct swp_buffer * swp_buffer; swp_entry_t entry; nrpages = SWAP_CLUSTER_MAX; maxscan = max((int) (num_fragments/priority), (int) (nrpages * 1.5)); ! ! ret = 0; ! 
while (!list_empty(&lru_queue) && maxscan--) { if (unlikely(current->need_resched)) { *************** *** 246,251 **** fragment = list_entry(fragment_lh = lru_queue.prev, struct comp_cache_fragment, lru_queue); - - page = fragment->comp_page->page; /* move it to the back of the list */ --- 229,232 ---- *************** *** 253,265 **** list_add(fragment_lh, &lru_queue); - /* page locked */ - if (TryLockPage(page)) - continue; - /* clean page, let's free it */ if (!CompFragmentDirty(fragment)) { ! comp_cache_free_locked(fragment); ! UnlockPage(page); ! if (--nrpages) continue; --- 234,240 ---- list_add(fragment_lh, &lru_queue); /* clean page, let's free it */ if (!CompFragmentDirty(fragment)) { ! drop_fragment(fragment); if (--nrpages) continue; *************** *** 268,275 **** /* we can't perform IO, so we can't go on */ ! if (!(gfp_mask & __GFP_FS)) { ! UnlockPage(page); continue; - } if ((swap_cache_page = PageSwapCache(fragment))) { --- 243,248 ---- /* we can't perform IO, so we can't go on */ ! if (!(gfp_mask & __GFP_FS)) continue; if ((swap_cache_page = PageSwapCache(fragment))) { *************** *** 283,325 **** remove_fragment_from_lru_queue(fragment); - list_del(&... [truncated message content] |
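
For readers following the "count" field item in the log message above, here is a
minimal, self-contained sketch of that reference-counting idea. This is not the
comp_cache implementation: the struct is stripped down to what the example
needs, the helper names (get_fragment(), put_fragment_testzero(),
drop_fragment()) merely mirror identifiers visible in the diff, and the
comp_cache_lock serialization of the real code is only hinted at in comments.

/*
 * Sketch of fragment reference counting: a fragment is only really freed
 * once the last reference is dropped, so it cannot disappear under a path
 * that is still working on it (e.g. while sleeping on a page lock).
 */
#include <stdio.h>
#include <stdlib.h>

struct fragment {
	int count;              /* references held on this fragment */
	size_t compressed_size; /* payload released together with the struct */
};

/* Take a reference so the fragment cannot vanish during an operation. */
static void get_fragment(struct fragment *f)
{
	f->count++;
}

/* Drop a reference; returns nonzero when the last reference went away. */
static int put_fragment_testzero(struct fragment *f)
{
	return --f->count == 0;
}

/* Called only once the count reached zero: actually release the storage. */
static void free_fragment(struct fragment *f)
{
	printf("freeing fragment (%zu bytes)\n", f->compressed_size);
	free(f);
}

/*
 * What callers such as the swap-out or truncate paths use.  In the real
 * code the caller holds comp_cache_lock around this.
 */
static void drop_fragment(struct fragment *f)
{
	if (put_fragment_testzero(f))
		free_fragment(f);
}

int main(void)
{
	struct fragment *f = malloc(sizeof(*f));

	f->count = 1;            /* initial reference held by the cache */
	f->compressed_size = 1024;

	get_fragment(f);         /* e.g. a decompression in progress */
	drop_fragment(f);        /* another path "frees" it meanwhile...   */
	drop_fragment(f);        /* ...but it only goes away on the last put */
	return 0;
}

In the diff itself this shows up as read_comp_cache() and lookup_comp_pages()
taking a reference with get_fragment() before sleeping on the compressed
page's lock, while __comp_cache_free() only performs the real free once
put_fragment_testzero() reports that the last reference is gone.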