[lc-checkins] CVS: linux/include/linux comp_cache.h,1.100,1.101 mm.h,1.17,1.18 swap.h,1.15,1.16
From: Rodrigo S. de C. <rc...@us...> - 2002-09-10 16:43:41
Update of /cvsroot/linuxcompressed/linux/include/linux
In directory usw-pr-cvs1:/tmp/cvs-serv17835/include/linux

Modified Files:
	comp_cache.h mm.h swap.h
Log Message:

New features

o Adaptivity: the greatest feature of this changeset is the adaptivity
  implementation. The compressed cache now resizes by itself, and it seems
  to be picking a size pretty close to the best size noticed in our tests.
  The policy can be described as follows. Instead of a single LRU queue, we
  now have two queues, active and inactive, like the LRU queues in the
  vanilla kernel. The active list holds the pages that would be in memory
  if the compressed cache were not used, and the inactive list represents
  the gain from using the compressed cache. If there are many accesses to
  the active list, we first block growing (on demand) and later shrink the
  compressed cache; if there are many accesses to the inactive list, we let
  the cache grow when needed. The active list size is computed from the
  effective compression ratio (number of fragments / number of memory
  pages). When shrinking the cache, we try to free a compressed cache page
  by moving its fragments elsewhere. If we cannot free a page that way, we
  free a fragment at the end of the inactive list. (A sketch of this
  balancing policy follows the log message.)

o Compressed swap: all swap cache pages are now swapped out in compressed
  format. A bit in the swap_map array records whether the entry is
  compressed or not, and the compressed size is stored in the entry on
  disk. Since there is almost no cost to storing pages in compressed
  format, this is the default configuration for compressed cache. (A
  sketch of the on-disk layout follows the log message.)

o Compacted swap: besides swapping out pages in compressed format, we may
  decrease the number of writeouts by writing many fragments to the same
  disk block. Since storing the extra metadata has a memory cost, this is
  an option to be enabled by the user. It uses two arrays, real_swap (an
  unsigned long array) and real_swap_map (an unsigned short array). All the
  metadata about the fragments in a disk block (offset, size, index) is
  stored on the block itself.

o Clean fragments are no longer decompressed when their data would only be
  overwritten. We no longer decompress a clean fragment when grabbing a
  page cache page in __grab_cache_page(). We used to decompress the
  fragment even though its data would not be used (that is why
  __grab_cache_page() creates a page if one is not found in the page
  cache). Dirty fragments will still be decompressed, but that is a rare
  situation in the page cache since most data is written via buffers.

Bug fixes

o Large compressed cache page support did not handle pages larger than
  2*PAGE_SIZE (8K). Reason: wrong computation of the comp page size; very
  simple to fix.

o In /proc/comp_cache_hist, we were showing the number of fragments in a
  comp page regardless of whether those fragments had been freed. It has
  been fixed to not count freed fragments.

o We were writing out every dirty page that had buffers. That was a
  conceptual bug: all swapped-in pages would have buffers, so if they got
  dirty they would not be added to the compressed cache as dirty; they
  would be written out first and only then added to the swap cache as
  clean pages. Now we try to free the buffers, and only if that fails do
  we write the page out. With the bug the page was still added to the
  compressed cache, but we were forcing many writes.

Other

o Removed support for changing the compression algorithm online. That was
  a rarely used option and would add a space cost to pages swapped out in
  compressed format. Removing it also saves some memory, since we now
  allocate only the data structures used by the selected algorithm. Recall
  that the algorithm can be set through the compalg= kernel parameter.

o All entries in /proc/sys/vm/comp_cache have been removed. Since neither
  the compression algorithm nor the compressed cache size can be changed
  any longer, a directory in /proc/sys is useless. The compressed cache
  size can still be checked in /proc/meminfo.

o Info for the compression algorithm is shown even if no page has been
  compressed yet.

o There are many code blocks with "#if 0" that are/were being tested.

Cleanups

o The code to add a fragment to a comp page's fragment list was split into
  a new function.

o The decompress() function was removed.
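To make the adaptivity policy above concrete, here is a minimal,
self-contained C sketch of the balancing decision. It is only an
illustration: the real balance_lru_queues(), grow_on_demand() and
shrink_on_demand() are implemented outside these headers and are not part
of this diff, and the active_hits/inactive_hits counters, the 2x threshold
and active_list_target() are hypothetical names invented for the sketch.

/*
 * Sketch of the adaptive resizing policy (not the changeset's code).
 */
#include <stdio.h>

#define COMP_PAGE_ORDER 1                  /* double-sized comp pages (8K) */

static unsigned long num_comp_pages = 300; /* comp pages backing the cache */
static unsigned long active_hits;          /* accesses to the active list   */
static unsigned long inactive_hits;        /* accesses to the inactive list */
static int growing_lock;                   /* set => refuse grow-on-demand  */

/*
 * One reading of "the active list size is computed from the effective
 * compression ratio": the active list holds about as many fragments as
 * the cache's memory could hold uncompressed, i.e. the page frames
 * occupied by the comp pages; everything beyond that is the gain
 * (inactive list).
 */
static unsigned long active_list_target(void)
{
        return num_comp_pages << COMP_PAGE_ORDER;
}

static void shrink_one_comp_page(void)   /* stand-in for shrink_on_demand() */
{
        if (num_comp_pages > 0)
                num_comp_pages--;
}

static void balance_lru_queues_sketch(void)
{
        if (active_hits > inactive_hits) {
                /* Pages that would be resident anyway are being re-touched:
                 * first block growing, and under heavier pressure shrink. */
                growing_lock = 1;
                if (active_hits > 2 * inactive_hits)
                        shrink_one_comp_page();
        } else {
                /* The "gain" pages are being reused, so compression is
                 * paying off: allow the cache to grow on demand again. */
                growing_lock = 0;
        }
        active_hits = inactive_hits = 0;  /* start a new observation window */
}

int main(void)
{
        active_hits = 40;
        inactive_hits = 10;
        balance_lru_queues_sketch();
        printf("growing_lock=%d comp_pages=%lu active_target=%lu\n",
               growing_lock, num_comp_pages, active_list_target());
        return 0;
}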
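The compressed swap format itself is simple. The following sketch shows the
layout implied by the log message and by the !CONFIG_COMP_SWAP
get_comp_data() inline added to comp_cache.h below: the first unsigned
short of the swapped block holds the compressed size and the payload
follows it. decompress_to() is a hypothetical stand-in for the real
algorithm (WKdm/WK4x4/LZO); here it just copies bytes.

#include <stdio.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Mirror of the header's get_comp_data(): size first, data right after. */
static void get_comp_data_sketch(const unsigned char *blk,
                                 unsigned short *size, unsigned short *offset)
{
        memcpy(size, blk, sizeof(unsigned short));
        *offset = sizeof(unsigned short);
}

/* Hypothetical decompressor: pretend the payload was stored as-is. */
static void decompress_to(const unsigned char *src, unsigned short size,
                          unsigned char *dst, unsigned long dst_size)
{
        memcpy(dst, src, size < dst_size ? size : dst_size);
}

/*
 * Roughly what decompress_swap_cache_page() has to do for a swapped-in
 * page with PG_compressed set: recover size/offset, unpack into a page.
 */
static void swapin_unpack(const unsigned char *blk, unsigned char *page)
{
        unsigned short size, offset;

        get_comp_data_sketch(blk, &size, &offset);
        decompress_to(blk + offset, size, page, PAGE_SIZE);
}

int main(void)
{
        unsigned char blk[PAGE_SIZE], page[PAGE_SIZE];
        unsigned short size = 11;

        memcpy(blk, &size, sizeof(size));
        memcpy(blk + sizeof(size), "hello swap", 11);
        swapin_unpack(blk, page);
        printf("%s\n", page);
        return 0;
}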
Index: comp_cache.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v
retrieving revision 1.100
retrieving revision 1.101
diff -C2 -r1.100 -r1.101
*** comp_cache.h	7 Aug 2002 18:30:58 -0000	1.100
--- comp_cache.h	10 Sep 2002 16:43:03 -0000	1.101
***************
*** 2,6 ****
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-08-07 10:51:24 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-09-06 19:25:59 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 17,20 ****
--- 17,21 ----
  #include <linux/compiler.h>
  #include <linux/list.h>
+ #include <linux/slab.h>
  #include <linux/spinlock.h>
  #include <linux/fs.h>
***************
*** 30,40 ****
  #include <linux/minilzo.h>
  
! #define COMP_CACHE_VERSION "0.24pre3"
  
  /* maximum compressed size of a page */
  #define MAX_COMPRESSED_SIZE 4500
  
! extern unsigned long num_comp_pages, num_fragments, num_swapper_fragments, new_num_comp_pages, zone_num_comp_pages;
! extern unsigned long min_num_comp_pages, max_num_comp_pages, max_used_num_comp_pages;
  
  struct pte_list {
--- 31,42 ----
  #include <linux/minilzo.h>
  
! #define COMP_CACHE_VERSION "0.24pre4"
  
  /* maximum compressed size of a page */
  #define MAX_COMPRESSED_SIZE 4500
  
! extern unsigned long num_comp_pages, num_fragments, num_active_fragments, num_swapper_fragments, num_clean_fragments, zone_num_comp_pages;
! extern unsigned long new_num_comp_pages, min_num_comp_pages, max_num_comp_pages, max_used_num_comp_pages;
! extern kmem_cache_t * fragment_cachep;
  
  struct pte_list {
***************
*** 114,119 ****
--- 116,125 ----
  #ifdef CONFIG_COMP_CACHE
  extern unsigned long failed_comp_page_allocs;
+ extern int growing_lock;
+ 
  int grow_on_demand(void);
  int shrink_on_demand(struct comp_cache_page *);
+ void compact_comp_cache(void);
+ void balance_lru_queues(void);
  #else
  static inline int grow_on_demand(void) { return 0; }
***************
*** 124,154 ****
  extern struct list_head swp_free_buffer_head;
  
! /* -- Fragment Flags */
! /* CF_WKdm/CF_WK4x4/CF_LZO: defines the algorithm the fragment has
!  * been compressed (if it's been compressed) */
! #define CF_WKdm	0
! #define CF_WK4x4	1
! #define CF_LZO	2
  
  /* CF_Dirty: is the fragment dirty? */
! #define CF_Dirty	3
! #define CF_ToBeFreed	4
! 
! #define CompFragmentWKdm(fragment) test_bit(CF_WKdm, &(fragment)->flags)
! #define CompFragmentSetWKdm(fragment) set_bit(CF_WKdm, &(fragment)->flags)
! #define CompFragmentTestandSetWKdm(fragment) test_and_set_bit(CF_WKdm, &(fragment)->flags)
! #define CompFragmentClearWKdm(fragment) clear_bit(CF_WKdm, &(fragment)->flags)
! 
! #define CompFragmentWK4x4(fragment) test_bit(CF_WK4x4, &(fragment)->flags)
! #define CompFragmentSetWK4x4(fragment) set_bit(CF_WK4x4, &(fragment)->flags)
! #define CompFragmentTestandSetWK4x4(fragment) test_and_set_bit(CF_WK4x4, &(fragment)->flags)
! #define CompFragmentClearWK4x4(fragment) clear_bit(CF_WK4x4, &(fragment)->flags)
! 
! #define CompFragmentLZO(fragment) test_bit(CF_LZO, &(fragment)->flags)
! #define CompFragmentSetLZO(fragment) set_bit(CF_LZO, &(fragment)->flags)
! #define CompFragmentTestandSetLZO(fragment) test_and_set_bit(CF_LZO, &(fragment)->flags)
! #define CompFragmentClearLZO(fragment) clear_bit(CF_LZO, &(fragment)->flags)
  
  #define CompFragmentDirty(fragment) test_bit(CF_Dirty, &(fragment)->flags)
--- 130,142 ----
  extern struct list_head swp_free_buffer_head;
  
! int writeout_fragments(unsigned int, int, int);
! /* -- Fragment Flags */
  
  /* CF_Dirty: is the fragment dirty? */
! #define CF_Dirty	0
! #define CF_ToBeFreed	1
! #define CF_Active	2
  
  #define CompFragmentDirty(fragment) test_bit(CF_Dirty, &(fragment)->flags)
***************
*** 159,164 ****
--- 147,159 ----
  #define CompFragmentToBeFreed(fragment) test_bit(CF_ToBeFreed, &(fragment)->flags)
+ #define CompFragmentSetToBeFreed(fragment) set_bit(CF_ToBeFreed, &(fragment)->flags)
  #define CompFragmentTestandSetToBeFreed(fragment) test_and_set_bit(CF_ToBeFreed, &(fragment)->flags)
  
+ #define CompFragmentActive(fragment) test_bit(CF_Active, &(fragment)->flags)
+ #define CompFragmentSetActive(fragment) set_bit(CF_Active, &(fragment)->flags)
+ #define CompFragmentTestandSetActive(fragment) test_and_set_bit(CF_Active, &(fragment)->flags)
+ #define CompFragmentTestandClearActive(fragment) test_and_clear_bit(CF_Active, &(fragment)->flags)
+ #define CompFragmentClearActive(fragment) clear_bit(CF_Active, &(fragment)->flags)
+ 
  /* general */
  #define get_fragment(f) do { \
***************
*** 221,225 ****
        /* LZO */
        lzo_byte * wrkmem;
!       unsigned short compressed_size;
  };
--- 216,223 ----
        /* LZO */
        lzo_byte * wrkmem;
!       unsigned short compressed_size;
! 
!       /* Compressed Swap */
!       struct page * decompress_buffer;
  };
***************
*** 236,252 ****
  /* proc.c */
  #ifdef CONFIG_COMP_CACHE
! void set_fragment_algorithm(struct comp_cache_fragment *, unsigned short);
! void decompress(struct comp_cache_fragment *, struct page *, int);
! int compress(struct page *, void *, unsigned short *);
  void __init comp_cache_algorithms_init(void);
  #endif
  
  /* swapin.c */
  #ifdef CONFIG_COMP_CACHE
  extern int FASTCALL(flush_comp_cache(struct page *));
  
! int read_comp_cache(struct address_space *, unsigned long, struct page *);
  int invalidate_comp_cache(struct address_space *, unsigned long);
  void invalidate_comp_pages(struct address_space *);
--- 234,266 ----
  /* proc.c */
  #ifdef CONFIG_COMP_CACHE
! void decompress_fragment_to_page(struct comp_cache_fragment *, struct page *);
! void decompress_swap_cache_page(struct page *);
! int compress(struct page *, void *, int);
  void __init comp_cache_algorithms_init(void);
+ extern int clean_page_compress_lock;
+ #endif
+ 
+ #ifdef CONFIG_COMP_SWAP
+ void get_comp_data(struct page *, unsigned short *, unsigned short *);
+ #else
+ static inline void
+ get_comp_data(struct page * page, unsigned short * size, unsigned short * offset)
+ {
+       *size = *((unsigned short *) page_address(page));
+       *offset = sizeof(unsigned short);
+ }
  #endif
+ 
  /* swapin.c */
  #ifdef CONFIG_COMP_CACHE
  extern int FASTCALL(flush_comp_cache(struct page *));
  
! #define read_comp_cache(mapping, index, page) __read_comp_cache(mapping, index, page, CLEAN_PAGE)
! #define read_dirty_comp_cache(mapping, index, page) __read_comp_cache(mapping, index, page, DIRTY_PAGE)
! 
! int __read_comp_cache(struct address_space *, unsigned long, struct page *, int);
  int invalidate_comp_cache(struct address_space *, unsigned long);
  void invalidate_comp_pages(struct address_space *);
***************
*** 278,288 ****
  #define DIRTY_PAGE 1
  
! #define COMP_PAGE_SIZE ((comp_page_order + 1) * PAGE_SIZE)
  #define comp_cache_used_space ((num_comp_pages * COMP_PAGE_SIZE) - comp_cache_free_space)
  
! #define page_to_comp_page(n) ((n) >> comp_page_order)
! #define comp_page_to_page(n) ((n) << comp_page_order)
  
- extern int comp_page_order;
  extern unsigned long comp_cache_free_space;
  extern spinlock_t comp_cache_lock;
--- 292,307 ----
  #define DIRTY_PAGE 1
  
! #ifdef CONFIG_COMP_DOUBLE_PAGE
! #define COMP_PAGE_ORDER 1
! #else
! #define COMP_PAGE_ORDER 0
! #endif
! 
! #define COMP_PAGE_SIZE (PAGE_SIZE << COMP_PAGE_ORDER)
  #define comp_cache_used_space ((num_comp_pages * COMP_PAGE_SIZE) - comp_cache_free_space)
  
! #define page_to_comp_page(n) ((n) >> COMP_PAGE_ORDER)
! #define comp_page_to_page(n) ((n) << COMP_PAGE_ORDER)
  
  extern unsigned long comp_cache_free_space;
  extern spinlock_t comp_cache_lock;
***************
*** 344,347 ****
--- 363,367 ----
  int virtual_swap_free(unsigned long);
  swp_entry_t get_virtual_swap_page(void);
+ void add_fragment_vswap(struct comp_cache_fragment *);
  int comp_cache_available_space(void);
***************
*** 433,436 ****
--- 453,458 ----
  inline void set_comp_page(struct comp_cache_page *, struct page *);
  inline void check_all_fragments(struct comp_cache_page *);
+ void add_to_comp_page_list(struct comp_cache_page *, struct comp_cache_fragment *);
+ 
  extern struct comp_cache_fragment ** fragment_hash;
***************
*** 486,493 ****
  struct comp_cache_fragment ** create_fragment_hash(unsigned long *, unsigned int *, unsigned int *);
  
! extern struct list_head lru_queue;
! inline void add_fragment_to_lru_queue(struct comp_cache_fragment *);
! inline void add_fragment_to_lru_queue_tail(struct comp_cache_fragment *);
  inline void remove_fragment_from_lru_queue(struct comp_cache_fragment *);
--- 508,517 ----
  struct comp_cache_fragment ** create_fragment_hash(unsigned long *, unsigned int *, unsigned int *);
  
! extern struct list_head active_lru_queue, inactive_lru_queue;
! inline void add_fragment_to_active_lru_queue(struct comp_cache_fragment *);
! inline void add_fragment_to_active_lru_queue_tail(struct comp_cache_fragment *);
! inline void add_fragment_to_inactive_lru_queue(struct comp_cache_fragment *);
! inline void add_fragment_to_inactive_lru_queue_tail(struct comp_cache_fragment *);
  inline void remove_fragment_from_lru_queue(struct comp_cache_fragment *);
***************
*** 505,513 ****
  /* proc.c */
- void print_comp_cache_stats(unsigned short, char *, int *);
  int comp_cache_stat_read_proc(char *, char **, off_t, int, int *, void *);
  int comp_cache_hist_read_proc(char *, char **, off_t, int, int *, void *);
  int comp_cache_frag_read_proc(char *, char **, off_t, int, int *, void *);
- int get_fragment_algorithm(struct comp_cache_fragment *);
--- 529,535 ----

Index: mm.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/mm.h,v
retrieving revision 1.17
retrieving revision 1.18
diff -C2 -r1.17 -r1.18
*** mm.h	16 Jul 2002 18:41:55 -0000	1.17
--- mm.h	10 Sep 2002 16:43:04 -0000	1.18
***************
*** 287,292 ****
  #define PG_launder		15	/* written out by VM pressure.. */
  #define PG_comp_cache		16	/* page with a fragment in compressed cache */
! #define PG_mapped_comp_cache	17	/* page from page cache that is also mapped in
! 					 * the compressed cache */
  
  /* Make it prettier to test the above... */
--- 287,291 ----
  #define PG_launder		15	/* written out by VM pressure.. */
  #define PG_comp_cache		16	/* page with a fragment in compressed cache */
! #define PG_compressed		17	/* swapped in page with compressed data */
  
  /* Make it prettier to test the above... */
***************
*** 332,337 ****
--- 331,338 ----
  #ifdef CONFIG_COMP_CACHE
  #define PageCompCache(page) test_bit(PG_comp_cache, &(page)->flags)
+ #define PageCompressed(page) test_bit(PG_compressed, &(page)->flags)
  #else
  #define PageCompCache(page) 0
+ #define PageCompressed(page) 0
  #endif
***************
*** 340,343 ****
--- 341,349 ----
  #define PageTestandSetCompCache(page) test_and_set_bit(PG_comp_cache, &(page)->flags)
  #define PageTestandClearCompCache(page) test_and_clear_bit(PG_comp_cache, &(page)->flags)
+ 
+ #define PageSetCompressed(page) set_bit(PG_compressed, &(page)->flags)
+ #define PageClearCompressed(page) clear_bit(PG_compressed, &(page)->flags)
+ #define PageTestandSetCompressed(page) test_and_set_bit(PG_compressed, &(page)->flags)
+ #define PageTestandClearCompressed(page) test_and_clear_bit(PG_compressed, &(page)->flags)
  
  #define PageActive(page) test_bit(PG_active, &(page)->flags)

Index: swap.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/swap.h,v
retrieving revision 1.15
retrieving revision 1.16
diff -C2 -r1.15 -r1.16
*** swap.h	1 Jul 2002 17:37:29 -0000	1.15
--- swap.h	10 Sep 2002 16:43:04 -0000	1.16
***************
*** 66,71 ****
--- 66,86 ----
  #define SWAP_CLUSTER_MAX 32
  
+ #ifdef CONFIG_COMP_CACHE
+ #define SWAP_MAP_MAX	0x3fff
+ #define SWAP_MAP_BAD	0x4000
+ #define SWAP_MAP_COMP_BIT	0x8000
+ #define SWAP_MAP_COMP_BIT_MASK	0x7fff
+ #define swap_map_count(swap)	(swap & 0x7fff)
+ #else
  #define SWAP_MAP_MAX	0x7fff
  #define SWAP_MAP_BAD	0x8000
+ #define SWAP_MAP_COMP	0x0000
+ #define swap_map_count(swap)	(swap)
+ #endif
+ 
+ #ifdef CONFIG_COMP_SWAP
+ #define COMP_SWAP_MAP_MAX	0x7fff
+ #define COMP_SWAP_MAP_BAD	0x8000
+ #endif
  
  /*
***************
*** 83,86 ****
--- 98,109 ----
        unsigned int cluster_next;
        unsigned int cluster_nr;
+ #ifdef CONFIG_COMP_SWAP
+       unsigned long * real_swap;
+       unsigned short * real_swap_map;
+       unsigned int real_lowest_bit;
+       unsigned int real_highest_bit;
+       unsigned int real_cluster_next;
+       unsigned int real_cluster_nr;
+ #endif
        int prio;			/* swap priority */
        int pages;
***************
*** 168,171 ****
--- 191,208 ----
  asmlinkage long sys_swapoff(const char *);
  asmlinkage long sys_swapon(const char *, int);
+ 
+ #ifdef CONFIG_COMP_SWAP
+ swp_entry_t get_real_swap_page(swp_entry_t);
+ swp_entry_t get_map(swp_entry_t);
+ 
+ void map_swap(swp_entry_t, swp_entry_t);
+ #endif
+ 
+ #ifdef CONFIG_COMP_CACHE
+ void set_swap_compressed(swp_entry_t, int);
+ int get_swap_compressed(swp_entry_t);
+ #else
+ static inline int get_swap_compressed(swp_entry_t entry) { return 0; }
+ #endif
  
  extern spinlock_t pagemap_lru_lock;
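For reference, the swap.h hunk above only reserves a bit of each swap_map
counter (SWAP_MAP_COMP_BIT) and declares set_swap_compressed() /
get_swap_compressed(); their implementation lives elsewhere (presumably in
the swapfile code) and is not part of this diff. Below is a hypothetical
userspace sketch of how such a bit can be manipulated with the new
definitions, indexing by swap offset instead of swp_entry_t for brevity.

#include <stdio.h>

#define SWAP_MAP_MAX           0x3fff
#define SWAP_MAP_COMP_BIT      0x8000
#define SWAP_MAP_COMP_BIT_MASK 0x7fff
#define swap_map_count(swap)   ((swap) & 0x7fff)

static unsigned short swap_map[128];  /* stand-in for swap_info->swap_map */

static void set_swap_compressed(unsigned long offset, int compressed)
{
        if (compressed)
                swap_map[offset] |= SWAP_MAP_COMP_BIT;
        else
                swap_map[offset] &= SWAP_MAP_COMP_BIT_MASK;
}

static int get_swap_compressed(unsigned long offset)
{
        return (swap_map[offset] & SWAP_MAP_COMP_BIT) != 0;
}

int main(void)
{
        swap_map[3] = 2;              /* entry in use, reference count 2   */
        set_swap_compressed(3, 1);    /* mark it as written out compressed */
        printf("count=%u compressed=%d\n",
               swap_map_count(swap_map[3]), get_swap_compressed(3));
        return 0;
}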