[lc-checkins] CVS: linux/mm/comp_cache adaptivity.c,1.39,1.40 aux.c,1.42,1.43 free.c,1.46,1.47 main.
From: Rodrigo S. de C. <rc...@us...> - 2002-09-10 16:44:03
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache
In directory usw-pr-cvs1:/tmp/cvs-serv17835/mm/comp_cache

Modified Files:
	adaptivity.c aux.c free.c main.c proc.c swapin.c swapout.c
Log Message:

New features

o Adaptivity: the greatest feature of this changeset is the adaptivity implementation. The compressed cache now resizes by itself, and it seems to pick a size pretty close to the best size observed in our tests. The policy can be described as follows. Instead of a single LRU queue, we now have two queues, active and inactive, like the LRU queues in the vanilla kernel. The active list holds the pages that would be in memory even if the compressed cache were not used; the inactive list represents the gain from using the compressed cache. If there are many accesses to the active list, we first block growth (on demand) and later shrink the compressed cache; if there are many accesses to the inactive list, we let the cache grow when needed. The active list size is computed from the effective compression ratio (number of fragments / number of memory pages). When shrinking the cache, we try to free a comp page by moving its fragments to other comp pages; if we cannot free a page that way, we free a fragment at the end of the inactive list. (Two short sketches of this policy follow the log message.)

o Compressed swap: all swap cache pages are now swapped out in compressed format. A bit in the swap_map array records whether an entry is compressed, and the compressed size is stored in the entry on disk. Storing the pages in compressed format costs almost nothing, which is why it is the default configuration for compressed cache. (A hedged sketch of the flag follows the log message.)

o Compacted swap: besides swapping out pages in compressed format, we may decrease the number of writeouts by writing many fragments to the same disk block. Since the extra metadata has a memory cost, this is an option to be enabled by the user. It uses two arrays, real_swap (unsigned long array) and real_swap_map (unsigned short array). All the metadata about the fragments in a disk block (offset, size, index) is stored on the block itself. (See the layout sketch after the log message.)

o Clean fragments are no longer decompressed when their data would only be overwritten. We no longer decompress a clean fragment when grabbing a page cache page in __grab_cache_page(): we would decompress the fragment, but its data would not be used (that is why __grab_cache_page() creates a page if one is not found in the page cache). Dirty fragments are still decompressed, but that is a rare situation in the page cache, since most data is written via buffers.

Bug fixes

o Large compressed cache page support did not work for pages larger than 2*PAGE_SIZE (8K). Reason: wrong computation of the comp page size; very simple to fix.

o In /proc/comp_cache_hist, we were showing the number of fragments in a comp page regardless of whether those fragments had been freed. It has been fixed not to count freed fragments.

o We were writing out every dirty page that had buffers. That was a conceptual bug: all swapped-in pages would have buffers, so if they got dirty they would never be added to the compressed cache as dirty; they would be written out first and only then added to the swap cache as clean pages. Now we try to free the buffers, and only if that fails do we write the page out. With this bug the page was still added to the compressed cache, but we were forcing many writes.

Other:

o Removed support for changing compression algorithms online. It was a rarely used option and would introduce a space cost for pages swapped out in compressed format, so it was removed. It also saves some memory, since we now allocate only the data structures used by the selected algorithm. Recall that the algorithm can be set through the compalg= kernel parameter.

o All entries in /proc/sys/vm/comp_cache were removed. Since neither the compression algorithm nor the compressed cache size can be changed any more, a directory in /proc/sys is useless. The compressed cache size can still be checked in /proc/meminfo.

o Info for the compression algorithm is shown even if no page has been compressed.

o There are many code blocks inside "#if 0" that are (or were) being tested.

Cleanups:

o The code that adds a fragment to a comp page's fragment list was split out into a new function.

o decompress() function removed.
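To make the sizing rule concrete, here is a minimal user-space sketch of the balancing step (compare balance_lru_queues() in the adaptivity.c diff below). The types and helpers are simplified stand-ins for the kernel's list_head machinery, so struct lru, lru_add(), lru_del() and balance_lru() are illustrative names, not the project's API.

/*
 * Sketch only: the active list is capped at the number of memory pages
 * the compressed cache actually occupies (num_comp_pages << COMP_PAGE_ORDER).
 * Every fragment beyond that cap is, by definition, a gain from
 * compression and is demoted to the inactive list.
 */
struct fragment {
	struct fragment *prev, *next;	/* LRU links */
	int active;
};

struct lru {
	struct fragment head;		/* sentinel of a circular list */
	unsigned long count;
};

static void lru_del(struct lru *q, struct fragment *f)
{
	f->prev->next = f->next;
	f->next->prev = f->prev;
	q->count--;
}

static void lru_add(struct lru *q, struct fragment *f)	/* add at head */
{
	f->next = q->head.next;
	f->prev = &q->head;
	q->head.next->prev = f;
	q->head.next = f;
	q->count++;
}

static void balance_lru(struct lru *active, struct lru *inactive,
			unsigned long num_comp_pages, int comp_page_order)
{
	unsigned long num_memory_pages = num_comp_pages << comp_page_order;

	while (active->count > num_memory_pages) {
		/* demote the least recently used active fragment */
		struct fragment *f = active->head.prev;

		lru_del(active, f);
		f->active = 0;
		lru_add(inactive, f);
	}
}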
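The grow/shrink trigger can be sketched the same way. This models the variant that is currently enabled (the "#if 1" block in the swapin.c diff below): two consecutive hits on active fragments first block growth and then trigger compaction/shrinking, while a hit on an inactive fragment unblocks growth. shrink_cache_one_step() is a hypothetical stand-in for the real compact_comp_cache()/writeout_fragments() calls.

/* sketch only: access-driven resize trigger */
static int growing_lock;	/* when set, the cache refuses to grow */
static int last_accessed;	/* 1 = last hit was on an active fragment */

static void shrink_cache_one_step(void)
{
	/* stand-in for compact_comp_cache()/writeout_fragments() */
}

static void note_fragment_hit(int active)
{
	if (active) {
		if (last_accessed) {
			if (growing_lock) {
				/* second strike with growth already
				 * blocked: give memory back */
				shrink_cache_one_step();
				growing_lock = 0;
				last_accessed = 0;
				return;
			}
			growing_lock = 1;	/* first strike: stop growing */
		}
		last_accessed = 1;
		return;
	}
	/* inactive hit: the compressed cache is paying off */
	growing_lock = 0;
	last_accessed = 0;
}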
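This changeset does not include the body of set_swap_compressed(), so the following is only a plausible model of the "bit in swap_map" idea: reserving a high bit of the unsigned short swap_map usage counter as the compressed flag. The bit position (SWAP_MAP_COMP) and helper names are assumptions, not code from the tree. When compacted swap is disabled, the on-disk layout is simply the compressed size (an unsigned short) followed by the compressed data, as the non-CONFIG_COMP_SWAP group_fragments() in swapout.c shows.

#include <stdint.h>

#define SWAP_MAP_COMP	0x8000u		/* assumed flag bit */

static uint16_t swap_map[32768];	/* per-entry usage counters */

/* mark a swap entry as holding compressed data (sketch only) */
static void set_swap_compressed(unsigned long offset, int compressed)
{
	if (compressed)
		swap_map[offset] |= SWAP_MAP_COMP;
	else
		swap_map[offset] &= ~SWAP_MAP_COMP;
}

static int swap_entry_compressed(unsigned long offset)
{
	return (swap_map[offset] & SWAP_MAP_COMP) != 0;
}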
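The compacted-swap block layout can be read back with a few memcpy()s. Below is a user-space sketch assuming 4 KiB blocks and 4-byte swap indices (i386 unsigned long): a 4-byte header holds the fragment count and the offset of the tail metadata, the compressed data follows back to back, and the tail holds one 8-byte record (size, offset, index) per fragment, the format documented in group_fragments() in the swapout.c diff. The lookup mirrors get_comp_data() in proc.c; struct frag_meta and find_fragment() are illustrative names.

#include <stdint.h>
#include <string.h>

struct frag_meta {		/* one 8-byte tail record */
	uint16_t size;		/* compressed size of the fragment */
	uint16_t offset;	/* where its data starts in the block */
	uint32_t index;		/* swap index identifying the fragment */
};

/* find a fragment's size/offset by swap index; returns 0 on success */
static int find_fragment(const unsigned char *block, uint32_t index,
			 uint16_t *size, uint16_t *offset)
{
	uint16_t count, meta;

	memcpy(&count, block, 2);	/* header: number of fragments */
	memcpy(&meta, block + 2, 2);	/* header: tail metadata offset */

	while (count--) {
		struct frag_meta m;

		memcpy(&m, block + meta, sizeof(m));
		meta += sizeof(m);
		if (m.index == index) {
			*size = m.size;
			*offset = m.offset;
			return 0;
		}
	}
	return -1;			/* index not in this block */
}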
Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.39 retrieving revision 1.40 diff -C2 -r1.39 -r1.40 *** adaptivity.c 7 Aug 2002 18:30:58 -0000 1.39 --- adaptivity.c 10 Sep 2002 16:43:20 -0000 1.40 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-08-03 12:12:40 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-09-02 18:43:33 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 18,21 **** --- 18,22 ---- static int fragment_failed_alloc = 0, vswap_failed_alloc = 0; unsigned long failed_comp_page_allocs = 0; + int growing_lock = 0; /* semaphore used to avoid two concurrent instances of *************** *** 536,540 **** BUG(); UnlockPage(empty_comp_page->page); ! __free_pages(empty_comp_page->page, comp_page_order); set_comp_page(empty_comp_page, NULL); --- 537,541 ---- BUG(); UnlockPage(empty_comp_page->page); ! __free_pages(empty_comp_page->page, COMP_PAGE_ORDER); set_comp_page(empty_comp_page, NULL); *************** *** 639,643 **** while (comp_cache_needs_to_grow() && nrpages--) { ! page = alloc_pages(GFP_ATOMIC, comp_page_order); /* couldn't allocate the page */ --- 640,644 ---- while (comp_cache_needs_to_grow() && nrpages--) { ! page = alloc_pages(GFP_ATOMIC, COMP_PAGE_ORDER); /* couldn't allocate the page */ *************** *** 648,652 **** if (!init_comp_page(&comp_page, page)) { ! __free_pages(page, comp_page_order); goto out_unlock; } --- 649,653 ---- if (!init_comp_page(&comp_page, page)) { ! 
__free_pages(page, COMP_PAGE_ORDER); goto out_unlock; } *************** *** 692,695 **** --- 693,699 ---- return 0; + if (growing_lock) + return 0; + /* to force the grow_comp_cache() to grow the cache */ new_num_comp_pages = num_comp_pages + 1; *************** *** 704,707 **** --- 708,852 ---- new_num_comp_pages = num_comp_pages; return 0; + } + + void + compact_comp_cache(void) + { + struct comp_cache_page * comp_page, * previous_comp_page = NULL, * new_comp_page, ** hash_table = free_space_hash; + struct comp_cache_fragment * fragment, * new_fragment; + int i; + + next_fragment: + i = free_space_hash_size - 1; + do { + comp_page = hash_table[i--]; + } while(i > 0 && !comp_page); + + if (previous_comp_page && previous_comp_page != comp_page) + return; + + if (!comp_page || TryLockPage(comp_page->page)) + goto writeout; + + if (list_empty(&comp_page->fragments)) { + shrink_on_demand(comp_page); + return; + } + + fragment = list_entry(comp_page->fragments.prev, struct comp_cache_fragment, list); + search_again: + new_comp_page = search_comp_page(free_space_hash, fragment->compressed_size); + + if (new_comp_page && !TryLockPage(new_comp_page->page)) + goto got_page; + + if (hash_table == free_space_hash) { + hash_table = total_free_space_hash; + goto search_again; + } + goto out2_failed; + + got_page: + if (hash_table == total_free_space_hash) + compact_fragments(new_comp_page); + + remove_comp_page_from_hash_table(new_comp_page); + + /* allocate the new fragment */ + new_fragment = alloc_fragment(); + + if (!new_fragment) { + UnlockPage(comp_page->page); + goto out_failed; + } + + new_fragment->index = fragment->index; + new_fragment->mapping = fragment->mapping; + new_fragment->offset = new_comp_page->free_offset; + new_fragment->compressed_size = fragment->compressed_size; + new_fragment->flags = fragment->flags; + new_fragment->comp_page = new_comp_page; + set_fragment_count(new_fragment, fragment_count(fragment)); + + if ((new_fragment->swp_buffer = fragment->swp_buffer)) + new_fragment->swp_buffer->fragment = new_fragment; + + memcpy(page_address(new_comp_page->page) + new_fragment->offset, page_address(comp_page->page) + fragment->offset, fragment->compressed_size); + + previous_comp_page = comp_page; + + UnlockPage(comp_page->page); + if (!drop_fragment(fragment)) { + if (fragment->swp_buffer) + fragment->swp_buffer->fragment = fragment; + kmem_cache_free(fragment_cachep, new_fragment); + goto out_failed; + } + + /* let's update some important fields */ + new_comp_page->free_space -= new_fragment->compressed_size; + new_comp_page->total_free_space -= new_fragment->compressed_size; + new_comp_page->free_offset += new_fragment->compressed_size; + + add_to_comp_page_list(new_comp_page, new_fragment); + add_fragment_vswap(new_fragment); + add_fragment_to_hash_table(new_fragment); + + if (CompFragmentActive(new_fragment)) + add_fragment_to_active_lru_queue(new_fragment); + else + add_fragment_to_inactive_lru_queue(new_fragment); + + if (PageSwapCache(new_fragment)) + num_swapper_fragments++; + num_fragments++; + + new_fragment->mapping->nrpages++; + if (CompFragmentDirty(new_fragment)) + list_add(&new_fragment->mapping_list, &new_fragment->mapping->dirty_comp_pages); + else { + list_add(&new_fragment->mapping_list, &new_fragment->mapping->clean_comp_pages); + num_clean_fragments++; + } + + balance_lru_queues(); + + add_comp_page_to_hash_table(new_comp_page); + UnlockPage(new_comp_page->page); + goto next_fragment; + //return; + + writeout: + writeout_fragments(GFP_KERNEL, 1, 6); + 
return; + + out_failed: + add_comp_page_to_hash_table(new_comp_page); + UnlockPage(new_comp_page->page); + goto writeout; + + out2_failed: + UnlockPage(comp_page->page); + goto writeout; + + } + + void + balance_lru_queues(void) + { + struct comp_cache_fragment * fragment; + unsigned long num_memory_pages; + + /* while condition: + * + * (num_active_fragments * 100)/num_fragments > ((num_comp_pages << COMP_PAGE_ORDER) * 100)/num_fragments + */ + num_memory_pages = (num_comp_pages << COMP_PAGE_ORDER); + while (num_active_fragments > num_memory_pages) { + fragment = list_entry(active_lru_queue.prev, struct comp_cache_fragment, lru_queue); + + remove_fragment_from_lru_queue(fragment); + add_fragment_to_inactive_lru_queue(fragment); + } } Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.42 retrieving revision 1.43 diff -C2 -r1.42 -r1.43 *** aux.c 28 Jul 2002 15:47:04 -0000 1.42 --- aux.c 10 Sep 2002 16:43:20 -0000 1.43 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-28 11:55:38 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-09-02 18:43:50 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 199,202 **** --- 199,203 ---- free_space_count(int index, unsigned long * num_fragments) { struct comp_cache_page * comp_page; + struct comp_cache_fragment * fragment; unsigned long total, total_fragments; struct list_head * fragment_lh; *************** *** 211,216 **** total_fragments = 0; ! for_each_fragment(fragment_lh, comp_page) ! total_fragments++; #if 0 --- 212,221 ---- total_fragments = 0; ! for_each_fragment(fragment_lh, comp_page) { ! fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); ! ! if (!fragment_freed(fragment)) ! total_fragments++; ! } #if 0 *************** *** 328,331 **** --- 333,375 ---- } + void + add_to_comp_page_list(struct comp_cache_page * comp_page, struct comp_cache_fragment * fragment) + { + struct list_head * fragment_lh; + struct comp_cache_fragment * previous_fragment = NULL; + + /* add the fragment to the comp_page list of fragments */ + if (list_empty(&(comp_page->fragments))) { + list_add(&(fragment->list), &(comp_page->fragments)); + return; + } + + previous_fragment = list_entry(comp_page->fragments.prev, struct comp_cache_fragment, list); + + if (previous_fragment->offset + previous_fragment->compressed_size == fragment->offset) { + list_add_tail(&(fragment->list), &(comp_page->fragments)); + return; + } + + /* let's search for the correct place in the comp_page list */ + previous_fragment = NULL; + + for_each_fragment(fragment_lh, comp_page) { + struct comp_cache_fragment * aux_fragment; + + aux_fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); + + if (aux_fragment->offset + aux_fragment->compressed_size > fragment->offset) + break; + + previous_fragment = aux_fragment; + } + + if (previous_fragment) + list_add(&(fragment->list), &(previous_fragment->list)); + else + list_add(&(fragment->list), &(comp_page->fragments)); + } + struct comp_cache_page * search_comp_page(struct comp_cache_page ** hash_table, int free_space) { *************** *** 368,428 **** inline void ! add_fragment_to_lru_queue_tail(struct comp_cache_fragment * fragment) { ! swp_entry_t entry; ! if (!fragment) BUG(); ! #ifdef CONFIG_COMP_PAGE_CACHE ! if (!PageSwapCache(fragment)) { ! 
list_add_tail(&(fragment->lru_queue), &lru_queue); ! return; } ! #endif ! /* swap cache page */ ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; ! list_add_tail(&(fragment->lru_queue), &lru_queue); } inline void ! add_fragment_to_lru_queue(struct comp_cache_fragment * fragment) { ! swp_entry_t entry; ! if (!fragment) BUG(); ! #ifdef CONFIG_COMP_PAGE_CACHE ! if (!PageSwapCache(fragment)) { ! list_add(&(fragment->lru_queue), &lru_queue); ! return; } ! #endif ! /* swap cache page */ ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; ! list_add(&(fragment->lru_queue), &lru_queue); } inline void remove_fragment_from_lru_queue(struct comp_cache_fragment * fragment) { - swp_entry_t entry; - if (!fragment) BUG(); ! #ifdef CONFIG_COMP_PAGE_CACHE ! if (!PageSwapCache(fragment)) { ! list_del_init(&(fragment->lru_queue)); ! return; } ! #endif ! /* swap cache page */ ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; list_del_init(&(fragment->lru_queue)); } --- 412,461 ---- inline void ! add_fragment_to_active_lru_queue(struct comp_cache_fragment * fragment) { if (!fragment) BUG(); ! if (PageSwapCache(fragment)) { ! swp_entry_t entry; ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; } ! ! list_add(&(fragment->lru_queue), &active_lru_queue); ! CompFragmentSetActive(fragment); ! num_active_fragments++; } inline void ! add_fragment_to_inactive_lru_queue(struct comp_cache_fragment * fragment) { if (!fragment) BUG(); ! if (PageSwapCache(fragment)) { ! swp_entry_t entry; ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; } ! ! list_add(&(fragment->lru_queue), &inactive_lru_queue); } inline void remove_fragment_from_lru_queue(struct comp_cache_fragment * fragment) { if (!fragment) BUG(); ! if (PageSwapCache(fragment)) { ! swp_entry_t entry; ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; } ! list_del_init(&(fragment->lru_queue)); + if (CompFragmentTestandClearActive(fragment)) + num_active_fragments--; } *************** *** 588,592 **** /* inits comp cache free space hash table */ ! free_space_interval = 100 * (comp_page_order + 1); free_space_hash_size = (int) (PAGE_SIZE/100) + 2; --- 621,625 ---- /* inits comp cache free space hash table */ ! free_space_interval = 100 * (COMP_PAGE_ORDER + 1); free_space_hash_size = (int) (PAGE_SIZE/100) + 2; *************** *** 601,605 **** /* inits comp cache total free space hash table */ ! total_free_space_interval = 100 * (comp_page_order + 1); total_free_space_hash_size = (int) (PAGE_SIZE/100) + 2; --- 634,638 ---- /* inits comp cache total free space hash table */ ! total_free_space_interval = 100 * (COMP_PAGE_ORDER + 1); total_free_space_hash_size = (int) (PAGE_SIZE/100) + 2; Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.46 retrieving revision 1.47 diff -C2 -r1.46 -r1.47 *** free.c 7 Aug 2002 18:30:58 -0000 1.46 --- free.c 10 Sep 2002 16:43:21 -0000 1.47 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-08-07 12:50:00 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! 
* Time-stamp: <2002-08-21 17:57:52 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 75,78 **** --- 75,80 ---- if (PageSwapCache(fragment)) num_swapper_fragments--; + if (!CompFragmentDirty(fragment)) + num_clean_fragments--; num_fragments--; *************** *** 376,380 **** spin_lock(&comp_cache_lock); ! add_fragment_to_lru_queue(fragment); add_fragment_to_hash_table(fragment); UnlockPage(fragment->comp_page->page); --- 378,382 ---- spin_lock(&comp_cache_lock); ! add_fragment_to_active_lru_queue(fragment); add_fragment_to_hash_table(fragment); UnlockPage(fragment->comp_page->page); Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.63 retrieving revision 1.64 diff -C2 -r1.63 -r1.64 *** main.c 7 Aug 2002 18:30:58 -0000 1.63 --- main.c 10 Sep 2002 16:43:22 -0000 1.64 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-08-07 15:17:28 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-09-04 16:06:25 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 15,19 **** #include <linux/init.h> #include <linux/pagemap.h> - #include <linux/slab.h> #include <asm/page.h> --- 15,18 ---- *************** *** 21,26 **** /* compressed cache control variables */ unsigned long num_comp_pages = 0; - unsigned long num_swapper_fragments = 0; unsigned long num_fragments = 0; unsigned long init_num_comp_pages = 0; --- 20,27 ---- /* compressed cache control variables */ unsigned long num_comp_pages = 0; unsigned long num_fragments = 0; + unsigned long num_swapper_fragments = 0; + unsigned long num_active_fragments = 0; + unsigned long num_clean_fragments = 0; unsigned long init_num_comp_pages = 0; *************** *** 40,49 **** kmem_cache_t * fragment_cachep; - #ifdef CONFIG_COMP_DOUBLE_PAGE - int comp_page_order = 1; - #else - int comp_page_order = 0; - #endif - extern unsigned long num_physpages; extern struct comp_cache_page * get_comp_cache_page(struct page *, unsigned short, struct comp_cache_fragment **, unsigned int, int); --- 41,44 ---- *************** *** 57,66 **** struct comp_cache_page * comp_page; struct comp_cache_fragment * fragment; ! unsigned short comp_size, algorithm; static struct page * current_compressed_page; static char buffer_compressed1[MAX_COMPRESSED_SIZE]; static char buffer_compressed2[MAX_COMPRESSED_SIZE]; ! unsigned long * buffer_compressed; --- 52,61 ---- struct comp_cache_page * comp_page; struct comp_cache_fragment * fragment; ! unsigned short comp_size, comp_offset; static struct page * current_compressed_page; static char buffer_compressed1[MAX_COMPRESSED_SIZE]; static char buffer_compressed2[MAX_COMPRESSED_SIZE]; ! unsigned long * buffer_compressed = NULL; *************** *** 79,83 **** try_again: ! comp_size = compress(current_compressed_page = page, buffer_compressed = (unsigned long *) &buffer_compressed1, &algorithm); comp_page = get_comp_cache_page(page, comp_size, &fragment, gfp_mask, priority); --- 74,84 ---- try_again: ! /* don't compress a page already compressed */ ! if (PageCompressed(page)) ! get_comp_data(page, &comp_size, &comp_offset); ! else ! comp_size = compress(current_compressed_page = page, buffer_compressed = (unsigned long *) &buffer_compressed1, state); ! if (comp_size > PAGE_SIZE) ! BUG(); comp_page = get_comp_cache_page(page, comp_size, &fragment, gfp_mask, priority); *************** *** 93,108 **** BUG(); ! 
set_fragment_algorithm(fragment, algorithm); ! ! /* fix mapping stuff */ page->mapping->nrpages++; if (state != DIRTY_PAGE) { list_add(&fragment->mapping_list, &fragment->mapping->clean_comp_pages); goto copy_page; } ! CompFragmentSetDirty(fragment); list_add(&fragment->mapping_list, &fragment->mapping->dirty_comp_pages); ! /* the inode might have been synced in the meanwhile (if we * slept to get a free comp cache entry above), so dirty it */ --- 94,109 ---- BUG(); ! /* fix mapping stuff - clean fragment */ page->mapping->nrpages++; if (state != DIRTY_PAGE) { list_add(&fragment->mapping_list, &fragment->mapping->clean_comp_pages); + num_clean_fragments++; goto copy_page; } ! ! /* dirty fragment */ CompFragmentSetDirty(fragment); list_add(&fragment->mapping_list, &fragment->mapping->dirty_comp_pages); ! /* the inode might have been synced in the meanwhile (if we * slept to get a free comp cache entry above), so dirty it */ *************** *** 111,117 **** copy_page: if (compressed(fragment)) { if (current_compressed_page != page) { ! comp_size = compress(page, buffer_compressed = (unsigned long *) &buffer_compressed2, &algorithm); if (comp_size != fragment->compressed_size) { UnlockPage(comp_page->page); --- 112,123 ---- copy_page: + if (PageCompressed(page)) { + memcpy(page_address(comp_page->page) + fragment->offset, page_address(page) + comp_offset, comp_size); + goto out; + } + if (compressed(fragment)) { if (current_compressed_page != page) { ! comp_size = compress(page, buffer_compressed = (unsigned long *) &buffer_compressed2, state); if (comp_size != fragment->compressed_size) { UnlockPage(comp_page->page); *************** *** 124,127 **** --- 130,134 ---- memcpy(page_address(comp_page->page) + fragment->offset, page_address(page), PAGE_SIZE); + out: if (PageTestandSetCompCache(page)) BUG(); *************** *** 133,139 **** compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask, int priority) { ! int write, ret = 0; ! ! write = !!page->buffers; #ifdef CONFIG_COMP_PAGE_CACHE write |= shmem_page(page); --- 140,147 ---- compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask, int priority) { ! int write = 0, ret = 0; ! ! if (page->buffers) ! write = !try_to_free_buffers(page, 0); #ifdef CONFIG_COMP_PAGE_CACHE write |= shmem_page(page); *************** *** 193,197 **** extern void __init comp_cache_adaptivity_init(void); ! LIST_HEAD(lru_queue); inline int --- 201,206 ---- extern void __init comp_cache_adaptivity_init(void); ! LIST_HEAD(active_lru_queue); ! LIST_HEAD(inactive_lru_queue); inline int *************** *** 202,206 **** return 0; ! (*comp_page)->free_space = (*comp_page)->total_free_space = (comp_page_order + 1) * PAGE_SIZE; (*comp_page)->free_offset = 0; (*comp_page)->page = page; --- 211,215 ---- return 0; ! (*comp_page)->free_space = (*comp_page)->total_free_space = (COMP_PAGE_ORDER + 1) * PAGE_SIZE; (*comp_page)->free_offset = 0; (*comp_page)->page = page; *************** *** 247,254 **** /* initialize each comp cache entry */ for (i = 0; i < num_comp_pages; i++) { ! page = alloc_pages(GFP_KERNEL, comp_page_order); if (!init_comp_page(&comp_page, page)) ! __free_pages(page, comp_page_order); } comp_cache_free_space = num_comp_pages * COMP_PAGE_SIZE; --- 256,263 ---- /* initialize each comp cache entry */ for (i = 0; i < num_comp_pages; i++) { ! page = alloc_pages(GFP_KERNEL, COMP_PAGE_ORDER); if (!init_comp_page(&comp_page, page)) ! 
__free_pages(page, COMP_PAGE_ORDER); } comp_cache_free_space = num_comp_pages * COMP_PAGE_SIZE; *************** *** 266,270 **** char * endp; ! nr_pages = memparse(str, &endp) >> (PAGE_SHIFT + comp_page_order); max_num_comp_pages = nr_pages; --- 275,279 ---- char * endp; ! nr_pages = memparse(str, &endp) >> (PAGE_SHIFT + COMP_PAGE_ORDER); max_num_comp_pages = nr_pages; Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.25 retrieving revision 1.26 diff -C2 -r1.25 -r1.26 *** proc.c 13 Aug 2002 14:15:20 -0000 1.25 --- proc.c 10 Sep 2002 16:43:23 -0000 1.26 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-08-12 19:19:39 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-09-10 13:27:33 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 50,58 **** decompress_function_t * decomp; struct stats_summary stats; ! } compression_algorithms[NUM_ALGORITHMS]; static int algorithm_min = WKDM_IDX; static int algorithm_max = LZO_IDX; ! static int current_algorithm = 0; static struct comp_alg_data comp_data; --- 50,59 ---- decompress_function_t * decomp; struct stats_summary stats; ! } compression_algorithm; static int algorithm_min = WKDM_IDX; static int algorithm_max = LZO_IDX; ! static int algorithm_idx = 0; ! struct stats_summary * stats = &compression_algorithm.stats; static struct comp_alg_data comp_data; *************** *** 60,112 **** static spinlock_t comp_data_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; ! enum ! { ! CC_SIZE=1, ! CC_ALGORITHM=2 ! }; ! ! ctl_table comp_cache_table[] = { ! {CC_SIZE, "size", &num_comp_pages, sizeof(int), 0444, NULL, &proc_dointvec}, ! {CC_ALGORITHM, "algorithm", ¤t_algorithm, sizeof(int), 0644, NULL, ! &proc_dointvec_minmax, &sysctl_intvec, NULL, &algorithm_min, &algorithm_max}, ! {0} ! }; ! ! int ! get_fragment_algorithm(struct comp_cache_fragment * fragment) ! { ! if (CompFragmentWKdm(fragment)) ! return WKDM_IDX; ! if (CompFragmentWK4x4(fragment)) ! return WK4X4_IDX; ! if (CompFragmentLZO(fragment)) ! return LZO_IDX; ! BUG(); ! return -1; ! } ! ! void ! set_fragment_algorithm(struct comp_cache_fragment * fragment, unsigned short algorithm) ! { ! switch (algorithm) { ! case WKDM_IDX: ! CompFragmentSetWKdm(fragment); ! break; ! case WK4X4_IDX: ! CompFragmentSetWK4x4(fragment); ! break; ! case LZO_IDX: ! CompFragmentSetLZO(fragment); ! break; ! default: ! BUG(); ! } ! } inline void ! comp_cache_update_read_stats(unsigned short algorithm, struct comp_cache_fragment * fragment) { - struct stats_summary * stats = &(compression_algorithms[algorithm].stats); - #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { --- 61,69 ---- static spinlock_t comp_data_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; ! int clean_page_compress_lock = 1; inline void ! comp_cache_update_read_stats(struct comp_cache_fragment * fragment) { #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { *************** *** 119,126 **** inline void ! comp_cache_update_written_stats(unsigned short algorithm, struct comp_cache_fragment * fragment) { - struct stats_summary * stats = &(compression_algorithms[algorithm].stats); - #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { --- 76,81 ---- inline void ! 
comp_cache_update_written_stats(struct comp_cache_fragment * fragment) { #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { *************** *** 133,140 **** static inline void ! comp_cache_update_decomp_stats(unsigned short algorithm, struct comp_cache_fragment * fragment) { - struct stats_summary * stats = &(compression_algorithms[algorithm].stats); - #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { --- 88,93 ---- static inline void ! comp_cache_update_decomp_stats(struct comp_cache_fragment * fragment) { #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { *************** *** 149,154 **** comp_cache_update_comp_stats(unsigned int comp_size, struct page * page) { - struct stats_summary * stats = &(compression_algorithms[current_algorithm].stats); - /* update compressed size statistics */ if (!comp_size) --- 102,105 ---- *************** *** 196,200 **** int ! compress(struct page * page, void * to, unsigned short * algorithm) { unsigned int comp_size; --- 147,151 ---- int ! compress(struct page * page, void * to, int state) { unsigned int comp_size; *************** *** 202,220 **** #if 0 ! /* That's a testing police to compress only swap cache ! * pages. All other pages from page cache will be stored ! * without compression in compressed cache. */ ! if (!PageSwapCache(page)) { ! *algorithm = current_algorithm; ! return PAGE_SIZE; } #endif ! spin_lock(&comp_data_lock); ! comp_size = compression_algorithms[current_algorithm].comp(from, to, PAGE_SIZE/4, &comp_data); spin_unlock(&comp_data_lock); comp_cache_update_comp_stats(comp_size, page); - *algorithm = current_algorithm; if (comp_size > PAGE_SIZE) comp_size = PAGE_SIZE; --- 153,168 ---- #if 0 ! if (state == CLEAN_PAGE && clean_page_compress_lock) { ! comp_size = PAGE_SIZE; ! comp_cache_update_comp_stats(comp_size, page); ! return comp_size; } #endif ! spin_lock(&comp_data_lock); ! comp_size = compression_algorithm.comp(from, to, PAGE_SIZE/4, &comp_data); spin_unlock(&comp_data_lock); comp_cache_update_comp_stats(comp_size, page); if (comp_size > PAGE_SIZE) comp_size = PAGE_SIZE; *************** *** 224,279 **** void ! decompress(struct comp_cache_fragment * fragment, struct page * page, int algorithm) { void * from = page_address(fragment->comp_page->page) + fragment->offset; void * to = page_address(page); spin_lock(&comp_data_lock); comp_data.compressed_size = fragment->compressed_size; ! compression_algorithms[algorithm].decomp(from, to, PAGE_SIZE/4, &comp_data); spin_unlock(&comp_data_lock); ! comp_cache_update_decomp_stats(algorithm, fragment); } void __init comp_cache_algorithms_init(void) { ! int i; ! /* data structures for WKdm and WK4x4 */ ! comp_data.tempTagsArray = kmalloc(300 * sizeof(WK_word), GFP_ATOMIC); ! comp_data.tempQPosArray = kmalloc(300 * sizeof(WK_word), GFP_ATOMIC); ! comp_data.tempLowBitsArray = kmalloc(1200 * sizeof(WK_word), GFP_ATOMIC); ! ! if (!comp_data.tempTagsArray || !comp_data.tempQPosArray || !comp_data.tempLowBitsArray) ! panic("comp_cache_algorithms_init(): cannot allocate structures for WKdm/WK4x4"); ! ! /* data structure (dictionary) for LZO */ ! comp_data.wrkmem = (lzo_byte *) kmalloc(LZO1X_1_MEM_COMPRESS, GFP_ATOMIC); ! if (!comp_data.wrkmem) ! panic("comp_cache_algorithms_init(): cannot allocate dictionary for LZO"); ! ! /* stats for algorithms */ ! for (i = 0; i < NUM_ALGORITHMS; i++) ! memset((void *) &compression_algorithms[i], 0, sizeof(struct stats_summary)); ! /* compression algorithms */ ! strcpy(compression_algorithms[WKDM_IDX].name, "WKdm"); ! 
compression_algorithms[WKDM_IDX].comp = WKdm_compress; ! compression_algorithms[WKDM_IDX].decomp = WKdm_decompress; ! ! strcpy(compression_algorithms[WK4X4_IDX].name, "WK4x4"); ! compression_algorithms[WK4X4_IDX].comp = WK4x4_compress; ! compression_algorithms[WK4X4_IDX].decomp = WK4x4_decompress; ! ! strcpy(compression_algorithms[LZO_IDX].name, "LZO"); ! compression_algorithms[LZO_IDX].comp = lzo_wrapper_compress; ! compression_algorithms[LZO_IDX].decomp = lzo_wrapper_decompress; ! if (!current_algorithm || current_algorithm < algorithm_min || current_algorithm > algorithm_max) ! current_algorithm = WKDM_IDX; ! printk("Compressed Cache: initial compression algorithm: %s\n", compression_algorithms[current_algorithm].name); } --- 172,311 ---- void ! decompress_fragment_to_page(struct comp_cache_fragment * fragment, struct page * page) { + struct comp_cache_page * comp_page; void * from = page_address(fragment->comp_page->page) + fragment->offset; void * to = page_address(page); + if (!fragment) + BUG(); + if (!fragment_count(fragment)) + BUG(); + comp_page = fragment->comp_page; + if (!comp_page->page) + BUG(); + if (!PageLocked(page)) + BUG(); + if (!PageLocked(comp_page->page)) + BUG(); + + SetPageUptodate(page); + + if (!compressed(fragment)) { + copy_page(to, from); + return; + } + + /* regular compressed fragment */ spin_lock(&comp_data_lock); comp_data.compressed_size = fragment->compressed_size; ! compression_algorithm.decomp(from, to, PAGE_SIZE/4, &comp_data); spin_unlock(&comp_data_lock); ! comp_cache_update_decomp_stats(fragment); ! } ! ! #ifdef CONFIG_COMP_SWAP ! void ! get_comp_data(struct page * page, unsigned short * size, unsigned short * offset) ! { ! unsigned short counter, metadata_offset; ! unsigned long fragment_index; ! ! counter = *((unsigned short *) page_address(page)); ! metadata_offset = *((unsigned short *) (page_address(page) + 2)); ! ! fragment_index = 0; ! ! while (counter-- && fragment_index != page->index) { ! fragment_index = *((unsigned long *) (page_address(page) + metadata_offset + 4)); ! metadata_offset += 8; ! } ! ! if (!fragment_index) ! BUG(); ! if (fragment_index != page->index) ! BUG(); ! ! metadata_offset -= 8; ! *size = *((unsigned short *) (page_address(page) + metadata_offset)); ! *offset = *((unsigned short *) (page_address(page) + metadata_offset + 2)); } + #endif + + void + decompress_swap_cache_page(struct page * page) + { + unsigned short comp_size, comp_offset; + + if (!PageLocked(page)) + BUG(); + + spin_lock(&comp_data_lock); + get_comp_data(page, &comp_size, &comp_offset); + + if (comp_size > PAGE_SIZE) + BUG(); + memcpy(page_address(comp_data.decompress_buffer), page_address(page) + comp_offset, comp_size); + + comp_data.compressed_size = comp_size; + compression_algorithm.decomp(page_address(comp_data.decompress_buffer), page_address(page), PAGE_SIZE/4, &comp_data); + + spin_unlock(&comp_data_lock); + + stats->decomp_swap++; + PageClearCompressed(page); + } void __init comp_cache_algorithms_init(void) { ! if (!algorithm_idx || algorithm_idx < algorithm_min || algorithm_idx > algorithm_max) ! algorithm_idx = WKDM_IDX; ! /* data structure for compression algorithms */ ! switch(algorithm_idx) { ! case WKDM_IDX: ! case WK4X4_IDX: ! comp_data.tempTagsArray = kmalloc(300 * sizeof(WK_word), GFP_ATOMIC); ! comp_data.tempQPosArray = kmalloc(300 * sizeof(WK_word), GFP_ATOMIC); ! comp_data.tempLowBitsArray = kmalloc(1200 * sizeof(WK_word), GFP_ATOMIC); ! ! if (!comp_data.tempTagsArray || !comp_data.tempQPosArray || !comp_data.tempLowBitsArray) ! 
panic("comp_cache_algorithms_init(): cannot allocate structures for WKdm/WK4x4"); ! break; ! case LZO_IDX: ! comp_data.wrkmem = (lzo_byte *) kmalloc(LZO1X_1_MEM_COMPRESS, GFP_ATOMIC); ! if (!comp_data.wrkmem) ! panic("comp_cache_algorithms_init(): cannot allocate dictionary for LZO"); ! break; ! } ! comp_data.decompress_buffer = alloc_page(GFP_ATOMIC); ! if (!comp_data.decompress_buffer) ! panic("comp_cache_algorithms_init(): cannot allocate decompression buffer"); ! /* stats for algorithm */ ! memset((void *) &compression_algorithm, 0, sizeof(struct stats_summary)); ! /* compression algorithms */ ! switch(algorithm_idx) { ! case WKDM_IDX: ! strcpy(compression_algorithm.name, "WKdm"); ! compression_algorithm.comp = WKdm_compress; ! compression_algorithm.decomp = WKdm_decompress; ! break; ! case WK4X4_IDX: ! strcpy(compression_algorithm.name, "WK4x4"); ! compression_algorithm.comp = WK4x4_compress; ! compression_algorithm.decomp = WK4x4_decompress; ! break; ! case LZO_IDX: ! strcpy(compression_algorithm.name, "LZO"); ! compression_algorithm.comp = lzo_wrapper_compress; ! compression_algorithm.decomp = lzo_wrapper_decompress; ! break; ! } ! printk("Compressed Cache: compression algorithm: %s\n", compression_algorithm.name); } *************** *** 289,303 **** } - #define current_msg ((algorithm == &compression_algorithms[current_algorithm])?"*":"") #define proportion(part, total) (total?((unsigned int) ((part * 100)/(total))):0) ! void ! print_comp_cache_stats(unsigned short alg_idx, char * page, int * length) { unsigned int compression_ratio_swap, compression_ratio_page, compression_ratio_total; unsigned long long total_sum_comp_pages; unsigned long total_comp_pages; - struct comp_alg * algorithm = &compression_algorithms[alg_idx]; - struct stats_summary * stats = &algorithm->stats; /* swap cache */ --- 321,332 ---- } #define proportion(part, total) (total?((unsigned int) ((part * 100)/(total))):0) ! static void ! print_comp_cache_stats(char * page, int * length) { unsigned int compression_ratio_swap, compression_ratio_page, compression_ratio_total; unsigned long long total_sum_comp_pages; unsigned long total_comp_pages; /* swap cache */ *************** *** 318,334 **** /* total */ ! if (!total_comp_pages) ! return; ! ! compression_ratio_total = ((big_division(total_sum_comp_pages, total_comp_pages)*100)/PAGE_SIZE); *length += sprintf(page + *length, ! " algorithm %s%s\n" " - (C) compressed pages: %8lu (S: %3d%% P: %3d%%)\n" ! " - (D) decompressed pages: %8lu (S: %3d%% P: %3d%%) D/C %3u%%\n" " - (R) read pages: %8lu (S: %3d%% P: %3d%%) R/C: %3u%%\n" ! " - (W) written pages: %8lu (S: %3d%% P: %3d%%) W/C: %3u%% \n" " compression ratio: %8u%% (S: %3u%% P: %3u%%)\n", ! algorithm->name, current_msg, total_comp_pages, proportion(stats->comp_swap, total_comp_pages), --- 347,362 ---- /* total */ ! compression_ratio_total = 0; ! if (total_comp_pages) ! compression_ratio_total = ((big_division(total_sum_comp_pages, total_comp_pages)*100)/PAGE_SIZE); *length += sprintf(page + *length, ! " algorithm %s\n" " - (C) compressed pages: %8lu (S: %3d%% P: %3d%%)\n" ! " - (D) decompressed pages: %8lu (S: %3d%% P: %3d%%) D/C: %3u%%\n" " - (R) read pages: %8lu (S: %3d%% P: %3d%%) R/C: %3u%%\n" ! " - (W) written pages: %8lu (S: %3d%% P: %3d%%) W/C: %3u%%\n" " compression ratio: %8u%% (S: %3u%% P: %3u%%)\n", ! 
compression_algorithm.name, total_comp_pages, proportion(stats->comp_swap, total_comp_pages), *************** *** 337,349 **** proportion(stats->decomp_swap, stats->decomp_swap + stats->decomp_page), proportion(stats->decomp_page, stats->decomp_swap + stats->decomp_page), ! (unsigned int) (((stats->decomp_swap + stats->decomp_page) * 100)/total_comp_pages), stats->read_swap + stats->read_page, proportion(stats->read_swap, stats->read_swap + stats->read_page), proportion(stats->read_page, stats->read_swap + stats->read_page), ! (unsigned int) (((stats->read_swap + stats->read_page) * 100)/total_comp_pages), stats->written_swap + stats->written_page, proportion(stats->written_swap, stats->written_swap + stats->written_page), proportion(stats->written_page, stats->written_swap + stats->written_page), ! (unsigned int) (((stats->written_swap + stats->written_page) * 100)/total_comp_pages), compression_ratio_total, compression_ratio_swap, --- 365,377 ---- proportion(stats->decomp_swap, stats->decomp_swap + stats->decomp_page), proportion(stats->decomp_page, stats->decomp_swap + stats->decomp_page), ! total_comp_pages?((unsigned int) (((stats->decomp_swap + stats->decomp_page) * 100)/total_comp_pages)):0, stats->read_swap + stats->read_page, proportion(stats->read_swap, stats->read_swap + stats->read_page), proportion(stats->read_page, stats->read_swap + stats->read_page), ! total_comp_pages?((unsigned int) (((stats->read_swap + stats->read_page) * 100)/total_comp_pages)):0, stats->written_swap + stats->written_page, proportion(stats->written_swap, stats->written_swap + stats->written_page), proportion(stats->written_page, stats->written_swap + stats->written_page), ! total_comp_pages?((unsigned int) (((stats->written_swap + stats->written_page) * 100)/total_comp_pages)):0, compression_ratio_total, compression_ratio_swap, *************** *** 352,357 **** #define HIST_PRINTK \ ! num_fragments[0], num_fragments[1], num_fragments[2], num_fragments[3], \ ! num_fragments[4], num_fragments[5], num_fragments[6], num_fragments[7] #define HIST_COUNT 8 --- 380,385 ---- #define HIST_PRINTK \ ! array_num_fragments[0], array_num_fragments[1], array_num_fragments[2], array_num_fragments[3], \ ! array_num_fragments[4], array_num_fragments[5], array_num_fragments[6], array_num_fragments[7] #define HIST_COUNT 8 *************** *** 359,368 **** comp_cache_hist_read_proc(char *page, char **start, off_t off, int count, int *eof, void *data) { ! unsigned long * num_fragments, total1, total2; int length = 0, i; ! num_fragments = (unsigned long *) vmalloc(HIST_COUNT * sizeof(unsigned long)); ! if (!num_fragments) { printk("couldn't allocate data structures for free space histogram\n"); goto out; --- 387,396 ---- comp_cache_hist_read_proc(char *page, char **start, off_t off, int count, int *eof, void *data) { ! unsigned long * array_num_fragments, total1, total2; int length = 0, i; ! array_num_fragments = (unsigned long *) vmalloc(HIST_COUNT * sizeof(unsigned long)); ! if (!array_num_fragments) { printk("couldn't allocate data structures for free space histogram\n"); goto out; *************** *** 373,381 **** " total 0f 1f 2f 3f 4f 5f 6f more\n"); ! memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); spin_lock(&comp_cache_lock); ! total1 = free_space_count(0, num_fragments); length += sprintf(page + length, " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", --- 401,410 ---- " total 0f 1f 2f 3f 4f 5f 6f more\n"); ! 
memset((void *) array_num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); spin_lock(&comp_cache_lock); ! total1 = free_space_count(0, array_num_fragments); ! length += sprintf(page + length, "total %lu act %lu pages %lu\n", num_fragments, num_active_fragments, num_comp_pages << COMP_PAGE_ORDER); length += sprintf(page + length, " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", *************** *** 385,393 **** for (i = 1; i < free_space_hash_size; i += 2) { ! memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); ! total1 = free_space_count(i, num_fragments); total2 = 0; if (i + 1 < free_space_hash_size) ! total2 = free_space_count(i + 1, num_fragments); length += sprintf(page + length, --- 414,422 ---- for (i = 1; i < free_space_hash_size; i += 2) { ! memset((void *) array_num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); ! total1 = free_space_count(i, array_num_fragments); total2 = 0; if (i + 1 < free_space_hash_size) ! total2 = free_space_count(i + 1, array_num_fragments); length += sprintf(page + length, *************** *** 398,407 **** spin_unlock(&comp_cache_lock); ! vfree(num_fragments); out: return proc_calc_metrics(page, start, off, count, eof, length); } ! #define FRAG_INTERVAL (500 * (comp_page_order + 1)) #define FRAG_PRINTK \ frag_space[0], frag_space[1], frag_space[2], frag_space[3], \ --- 427,436 ---- spin_unlock(&comp_cache_lock); ! vfree(array_num_fragments); out: return proc_calc_metrics(page, start, off, count, eof, length); } ! #define FRAG_INTERVAL (500 * (COMP_PAGE_ORDER + 1)) #define FRAG_PRINTK \ frag_space[0], frag_space[1], frag_space[2], frag_space[3], \ *************** *** 454,458 **** comp_cache_stat_read_proc(char *page, char **start, off_t off, int count, int *eof, void *data) { ! int length = 0, i; length += sprintf(page + length, --- 483,487 ---- comp_cache_stat_read_proc(char *page, char **start, off_t off, int count, int *eof, void *data) { ! int length = 0; length += sprintf(page + length, *************** *** 463,479 **** #ifdef CONFIG_COMP_PAGE_CACHE " - (P) page cache support enabled\n" - #else - " - (P) page cache support disabled\n" #endif " - maximum used size: %6lu KiB\n" " - comp page size: %6lu KiB\n" " - failed allocations: %6lu\n", ! max_used_num_comp_pages << (comp_page_order + PAGE_SHIFT - 10), ! PAGE_SIZE >> (10 - comp_page_order), failed_comp_page_allocs); ! for (i = 0; i < NUM_ALGORITHMS; i++) ! print_comp_cache_stats(i, page, &length); ! return proc_calc_metrics(page, start, off, count, eof, length); } --- 492,507 ---- #ifdef CONFIG_COMP_PAGE_CACHE " - (P) page cache support enabled\n" #endif + #ifdef CONFIG_COMP_SWAP + " - compressed swap support enabled\n" + #endif " - maximum used size: %6lu KiB\n" " - comp page size: %6lu KiB\n" " - failed allocations: %6lu\n", ! max_used_num_comp_pages << (COMP_PAGE_ORDER + PAGE_SHIFT - 10), ! COMP_PAGE_SIZE, failed_comp_page_allocs); ! print_comp_cache_stats(page, &length); return proc_calc_metrics(page, start, off, count, eof, length); } *************** *** 483,487 **** char * endp; ! current_algorithm = simple_strtoul(str, &endp, 0); return 1; } --- 511,515 ---- char * endp; ! 
algorithm_idx = simple_strtoul(str, &endp, 0); return 1; } Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.53 retrieving revision 1.54 diff -C2 -r1.53 -r1.54 *** swapin.c 7 Aug 2002 18:30:58 -0000 1.53 --- swapin.c 10 Sep 2002 16:43:24 -0000 1.54 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-08-07 10:46:04 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-09-10 10:36:42 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 18,21 **** --- 18,26 ---- #include <asm/uaccess.h> + #define ACTIVE_FRAGMENT 1 + #define INACTIVE_FRAGMENT 0 + + int last_accessed = 0, last_state_accessed = 0; + int invalidate_comp_cache(struct address_space * mapping, unsigned long offset) *************** *** 69,108 **** } ! unsigned short ! decompress_fragment(struct comp_cache_fragment * fragment, struct page * page) ! { ! struct comp_cache_page * comp_page; ! int algorithm = get_fragment_algorithm(fragment); ! ! if (!fragment) ! BUG(); ! if (!fragment_count(fragment)) ! BUG(); ! comp_page = fragment->comp_page; ! if (!comp_page->page) ! BUG(); ! if (!PageLocked(page)) ! BUG(); ! if (!PageLocked(comp_page->page)) ! BUG(); ! ! if (compressed(fragment)) ! decompress(fragment, page, algorithm); ! else ! memcpy(page_address(page), page_address(comp_page->page) + fragment->offset, PAGE_SIZE); ! ! SetPageUptodate(page); ! return algorithm; ! } ! ! extern inline void comp_cache_update_read_stats(unsigned short, struct comp_cache_fragment *); /* caller may hold pagecache_lock (__find_lock_page()) */ int ! read_comp_cache(struct address_space *mapping, unsigned long offset, struct page * page) { struct comp_cache_fragment * fragment; ! unsigned short algorithm; ! int err; if (!PageLocked(page)) --- 74,85 ---- } ! extern inline void comp_cache_update_read_stats(struct comp_cache_fragment *); /* caller may hold pagecache_lock (__find_lock_page()) */ int ! __read_comp_cache(struct address_space *mapping, unsigned long offset, struct page * page, int state) { struct comp_cache_fragment * fragment; ! int err, ratio; if (!PageLocked(page)) *************** *** 119,134 **** if (!fragment_count(fragment)) BUG(); - - get_fragment(fragment); ! /* move the fragment to the back of the lru list */ ! remove_fragment_from_lru_queue(fragment); ! add_fragment_to_lru_queue(fragment); spin_unlock(&comp_cache_lock); lock_page(fragment->comp_page->page); ! algorithm = decompress_fragment(fragment, page); ! comp_cache_update_read_stats(algorithm, fragment); spin_lock(&comp_cache_lock); --- 96,197 ---- if (!fragment_count(fragment)) BUG(); ! 
get_fragment(fragment); + #if 0 + if (CompFragmentDirty(fragment)) { + //if (last_state_accessed > 0) + // last_state_accessed = -1; + //else + last_state_accessed--; + ratio = -3; //-(((num_fragments - num_clean_fragments) * 4)/num_fragments?:0); + if (last_state_accessed < ratio) { + clean_page_compress_lock = 1; + last_state_accessed = 0; + } + goto test_active; + } + + //if (last_state_accessed < 0) + // last_state_accessed = 1; + //else + last_state_accessed++; + ratio = 3; //((num_clean_fragments * 4)/num_fragments?:0); + if (last_state_accessed > ratio) { + clean_page_compress_lock = 0; + last_state_accessed = 0; + } + + test_active: + #endif + + #if 0 + if (!CompFragmentDirty(fragment)) { + last_state_accessed++; + ratio = 3; //((num_clean_fragments * 4)/num_fragments?:0); + if (last_state_accessed > ratio) { + clean_page_compress_lock = 0; + last_state_accessed = 0; + } + #endif + + if (CompFragmentActive(fragment)) {// || !CompFragmentDirty(fragment)) { + if (last_accessed == ACTIVE_FRAGMENT) { + #if 0 + /* -- VERSION 3 -- */ + if (growing_lock) { + compact_comp_cache(); + //writeout_fragments(GFP_KERNEL, 1, SHRINKAGE_PRIORITY); + last_accessed = INACTIVE_FRAGMENT; + goto read; + } + growing_lock = 1; + goto read; + #endif + + #if 1 + /* -- VERSION 2 -- */ + if (growing_lock) { + compact_comp_cache(); + //writeout_fragments(GFP_KERNEL, 1, SHRINKAGE_PRIORITY); + growing_lock = 0; + last_accessed = INACTIVE_FRAGMENT; + goto read; + } + growing_lock = 1; + goto read; + #endif + + #if 0 + /* -- VERSION 1 -- */ + writeout_fragments(GFP_KERNEL, 1, SHRINKAGE_PRIORITY); + growing_lock = 1; + last_accessed = INACTIVE_FRAGMENT; + goto read; + #endif + } + last_accessed = ACTIVE_FRAGMENT; + goto read; + } + + /* inactive fragment */ + growing_lock = 0; + last_accessed = INACTIVE_FRAGMENT; + + read: + /* If only dirty fragments should be returned (when reading + * the page for writing it), free the fragment and return. A + * scenario where that happens is when writing a page: there + * is no point decompressing a clean fragment. */ + if (CompFragmentDirty(fragment) && state == DIRTY_PAGE) { + drop_fragment(fragment); + goto out_unlock; + } + spin_unlock(&comp_cache_lock); lock_page(fragment->comp_page->page); ! decompress_fragment_to_page(fragment, page); ! comp_cache_update_read_stats(fragment); spin_lock(&comp_cache_lock); *************** *** 138,145 **** UnlockPage(fragment->comp_page->page); - put_fragment(fragment); if (!drop_fragment(fragment)) PageSetCompCache(page); out_unlock: spin_unlock(&comp_cache_lock); --- 201,209 ---- UnlockPage(fragment->comp_page->page); + put_fragment(fragment); if (!drop_fragment(fragment)) PageSetCompCache(page); + out_unlock: spin_unlock(&comp_cache_lock); *************** *** 258,262 **** old_page = find_or_add_page(page, mapping, fragment->index); if (!old_page) { ! decompress_fragment(fragment, page); goto free_and_dirty; } --- 322,326 ---- old_page = find_or_add_page(page, mapping, fragment->index); if (!old_page) { ! decompress_fragment_to_page(fragment, page); goto free_and_dirty; } *************** *** 267,275 **** UnlockPage(fragment->comp_page->page); spin_lock(&comp_cache_lock); put_fragment(fragment); /* effectively free it */ if (drop_fragment(fragment)) ! PageClearCompCache(page); spin_unlock(&comp_cache_lock); __set_page_dirty(page); --- 331,340 ---- UnlockPage(fragment->comp_page->page); spin_lock(&comp_cache_lock); + put_fragment(fragment); /* effectively free it */ if (drop_fragment(fragment)) ! 
PageClearCompCache(page); spin_unlock(&comp_cache_lock); __set_page_dirty(page); Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.70 retrieving revision 1.71 diff -C2 -r1.70 -r1.71 *** swapout.c 7 Aug 2002 18:30:58 -0000 1.70 --- swapout.c 10 Sep 2002 16:43:25 -0000 1.71 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-08-07 11:04:43 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-09-10 10:37:03 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 18,25 **** #include <linux/pagemap.h> - extern kmem_cache_t * fragment_cachep; - /* swap buffer */ struct list_head swp_free_buffer_head, swp_used_buffer_head; static spinlock_t swap_buffer_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; --- 18,30 ---- #include <linux/pagemap.h> /* swap buffer */ struct list_head swp_free_buffer_head, swp_used_buffer_head; + #ifdef CONFIG_COMP_SWAP + static struct { + unsigned short size; + unsigned short offset; + unsigned long index; + } grouped_fragments[255]; + #endif static spinlock_t swap_buffer_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; *************** *** 160,163 **** --- 165,170 ---- if (TryLockPage(buffer_page)) BUG(); + if (page_count(buffer_page) != 1) + BUG(); list_del(swp_buffer_lh); *************** *** 183,195 **** } ! extern unsigned short decompress_fragment(struct comp_cache_fragment *, struct page *); ! extern inline void comp_cache_update_written_stats(unsigned short, struct comp_cache_fragment *); static struct swp_buffer * ! decompress_to_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask) { struct page * buffer_page; struct swp_buffer * swp_buffer; ! unsigned short algorithm; ! swp_buffer = find_free_swp_buffer(fragment, gfp_mask); if (!swp_buffer) --- 190,320 ---- } ! extern inline void comp_cache_update_written_stats(struct comp_cache_fragment *); ! extern void set_swap_compressed(swp_entry_t, int); ! ! #ifdef CONFIG_COMP_SWAP ! static void ! group_fragments(struct comp_cache_fragment * fragment, struct page * page) { ! struct list_head * fragment_lh; ! struct comp_cache_fragment * aux_fragment; ! swp_entry_t entry, real_entry; ! unsigned short counter, next_offset, metadata_size; ! ! entry.val = fragment->index; ! real_entry = get_real_swap_page(entry); ! ! if (!real_entry.val) ! BUG(); ! ! /*** ! * Metadata: for each swap block ! * ! * Header: ! * 4 bytes -> ! * number of fragments (unsigned short) ! * offset for fragment metadata (unsigned short) ! * ! * Tail: ! * - for every fragment - ! * 8 bytes -> ! * offset (unsigned short) ! * compressed size (unsigned short) ! * index (unsigned long) ! */ ! metadata_size = 8; ! next_offset = 4; ! ! /* cannot store the fragment in compressed format */ ! if (next_offset + fragment->compressed_size + metadata_size > PAGE_SIZE) { ! set_swap_compressed(entry, 0); ! decompress_fragment_to_page(fragment, page); ! return; ! } ! ! /* prepare header with data from the 1st fragment */ ! set_swap_compressed(entry, 1); ! ! counter = 1; ! grouped_fragments[0].size = fragment->compressed_size; ! grouped_fragments[0].offset = next_offset; ! grouped_fragments[0].index = fragment->index; ! ! memcpy(page_address(page) + next_offset, page_address(fragment->comp_page->page) + fragment->offset, fragment->compressed_size); ! ! next_offset += fragment->compressed_size; ! ! /* try to group other fragments */ ! 
for_each_fragment(fragment_lh, fragment->comp_page) { ! aux_fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); ! ! if (aux_fragment == fragment) ! continue; ! if (!PageSwapCache(aux_fragment)) ! continue; ! if (!CompFragmentDirty(aux_fragment)) ! continue; ! entry.val = aux_fragment->index; ! if (vswap_address(entry)) ! continue; ! if (next_offset + aux_fragment->compressed_size + metadata_size + 8 > PAGE_SIZE) ! continue; ! ! CompFragmentClearDirty(aux_fragment); ! num_clean_fragments++; ! ! set_swap_compressed(entry, 1); ! map_swap(entry, real_entry); ! ! grouped_fragments[counter].size = aux_fragment->compressed_size; ! grouped_fragments[counter].offset = next_offset; ! grouped_fragments[counter].index = aux_fragment->index; ! ! memcpy(page_address(page) + next_offset, page_address(fragment->comp_page->page) + aux_fragment->offset, aux_fragment->compressed_size); ! ! next_offset += aux_fragment->compressed_size; ! metadata_size += 8; ! counter++; ! } ! ! memcpy(page_address(page), &counter, 2); ! memcpy(page_address(page) + 2, &next_offset, 2); ! ! while (counter--) { ! memcpy(page_address(page) + next_offset, &(grouped_fragments[counter].size), 2); ! next_offset += 2; ! memcpy(page_address(page) + next_offset, &(grouped_fragments[counter].offset), 2); ! next_offset += 2; ! memcpy(page_address(page) + next_offset, &(grouped_fragments[counter].index), 4); ! next_offset += 4; ! } ! } ! #else ! static void ! group_fragments(struct comp_cache_fragment * fragment, struct page * page) ! { ! swp_entry_t entry; ! ! /* uncompressed fragments or fragments that cannot have the ! * metadata written together must be decompressed */ ! entry.val = fragment->index; ! if (fragment->compressed_size + sizeof(unsigned short) > PAGE_SIZE) { ! set_swap_compressed(entry, 0); ! decompress_fragment_to_page(fragment, page); ! return; ! } ! ! /* copy the compressed data and metadata */ ! memcpy(page_address(page), &(fragment->compressed_size), sizeof(unsigned short)); ! memcpy(page_address(page) + sizeof(unsigned short), page_address(fragment->comp_page->page) + fragment->offset, fragment->compressed_size); ! set_swap_compressed(entry, 1); ! } ! #endif static struct swp_buffer * ! prepare_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask) { struct page * buffer_page; struct swp_buffer * swp_buffer; ! swp_entry_t entry; ! swp_buffer = find_free_swp_buffer(fragment, gfp_mask); if (!swp_buffer) *************** *** 202,208 **** lock_page(fragment->comp_page->page); ! algorithm = decompress_fragment(fragment, buffer_page); UnlockPage(fragment->comp_page->page); ! comp_cache_update_written_stats(algorithm, fragment); buffer_page->flags &= (1 << PG_locked); --- 327,341 ---- lock_page(fragment->comp_page->page); ! ! /* pages from page cache need to have its data decompressed */ ! if (!PageSwapCache(fragment)) { ! decompress_fragment_to_page(fragment, buffer_page); ! goto out_unlock; ! } ! ! group_fragments(fragment, buffer_page); ! out_unlock: UnlockPage(f... [truncated message content] |