[lc-checkins] CVS: linux/include/linux WK4x4.h,1.1.1.1,1.2 WKdm.h,1.1.1.1,1.2 comp_cache.h,1.96,1.97
From: Rodrigo S. de C. <rc...@us...> - 2002-07-28 15:47:08
Update of /cvsroot/linuxcompressed/linux/include/linux
In directory usw-pr-cvs1:/tmp/cvs-serv26313/include/linux

Modified Files:
	WK4x4.h WKdm.h comp_cache.h 
Log Message:
Features

o First page cache support for preemptible kernels is implemented.
o Fragments have a "count" field that stores the number of references
  to the fragment, so we don't have to worry about it being freed in
  the middle of an operation. That fixes a likely source of bugs.

Bug fixes

o Fixed memory accounting for double page sizes. Meminfo was broken
  for 8K pages.
o truncate_list_comp_pages() could try to truncate fragments that were
  on the locked_comp_pages list, which is bogus. Only swap buffers are
  on that list, and they are listed there only for wait_comp_pages().
o When writing out fragments, we did not check the return value, so we
  could end up freeing a fragment (when refilling a swap buffer) even
  if the writepage failed. In particular, ramfs, ramdisk and other
  memory file systems always fail to write out their pages. Now we
  check whether the swap buffer page has been set dirty (writepage()
  usually does that after failing to write a page) and, if so, move
  the fragment back to the dirty list (and of course do not free it).
o Fixed a bug that would corrupt the swap buffer list. A bug in the
  variable that returned the error code could report an error even if
  a fragment was found after all, so the caller would back out the
  writeout operation, leaving the swap buffer locked on the used list,
  where it would never get unlocked.
o Account writeout stats only for pages that have actually been
  submitted to an IO operation.
o Fixed a bug that would deadlock a system with a compressed cache
  that has page cache support. The lookup_comp_pages() function may be
  called from the following code path: __sync_one() ->
  filemap_fdatasync(). This code path syncs an inode and keeps it
  locked while it is syncing.
  However, that very inode can also be in the clear path (the
  clear_inode() function, called in the process exit path), which
  locks the super block and then waits for the inode if it is locked
  (which is the case for an inode being synced). Since the allocation
  path may write pages, which may need to lock the same super block,
  it deadlocks, because the super block is held by the exit path
  described above. So we end up unable to allocate the page (in order
  to finish this function and unlock the inode) _and_ the super block
  is never unlocked, since the inode never gets unlocked either. The
  fix was to allocate pages with the GFP_NOFS mask.

Cleanups

o Some functions were renamed.
o Compression algorithms: removed unnecessary data structures that
  were being allocated, made some structures statically allocated in
  the algorithms, and some statically allocated data is now
  kmalloc()ed.
o Removed /proc/sys/vm/comp_cache/actual_size; it doesn't make sense
  with resizing on demand.

Others

o The compressed cache now resizes only on demand.

Index: WK4x4.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/WK4x4.h,v
retrieving revision 1.1.1.1
retrieving revision 1.2
diff -C2 -r1.1.1.1 -r1.2
*** WK4x4.h	15 Apr 2001 22:27:56 -0000	1.1.1.1
--- WK4x4.h	28 Jul 2002 15:47:04 -0000	1.2
***************
*** 326,330 ****
  WK_word* destinationBuffer,
  unsigned int words,
! void *page);
  
  /* Given a pointer to a source buffer (sourceBuffer) of compressed
--- 326,330 ----
  WK_word* destinationBuffer,
  unsigned int words,
! struct comp_alg_data * data);
  
  /* Given a pointer to a source buffer (sourceBuffer) of compressed
***************
*** 336,340 ****
  WK_word* destinationPage,
  unsigned int words,
! void * page);
  
  /* Given a pointer to a source buffer (sourceBuffer) of uncompressed
--- 336,340 ----
  WK_word* destinationPage,
  unsigned int words,
! struct comp_alg_data * data);
  
  /* Given a pointer to a source buffer (sourceBuffer) of uncompressed
Index: WKdm.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/WKdm.h,v
retrieving revision 1.1.1.1
retrieving revision 1.2
diff -C2 -r1.1.1.1 -r1.2
*** WKdm.h	15 Apr 2001 22:27:56 -0000	1.1.1.1
--- WKdm.h	28 Jul 2002 15:47:04 -0000	1.2
***************
*** 52,56 ****
  WK_word* destinationBuffer,
  unsigned int words,
! void * page);
  
  /* Given a pointer to a source buffer (sourceBuffer) of compressed
--- 52,56 ----
  WK_word* destinationBuffer,
  unsigned int words,
! struct comp_alg_data * data);
  
  /* Given a pointer to a source buffer (sourceBuffer) of compressed
***************
*** 63,67 ****
  WK_word* destinationPage,
  unsigned int words,
! void * page);
  
  /* Given a pointer to a source buffer (sourceBuffer) of uncompressed
--- 63,67 ----
  WK_word* destinationPage,
  unsigned int words,
! struct comp_alg_data * data);
  
  /* Given a pointer to a source buffer (sourceBuffer) of uncompressed
Index: comp_cache.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v
retrieving revision 1.96
retrieving revision 1.97
diff -C2 -r1.96 -r1.97
*** comp_cache.h	18 Jul 2002 21:31:08 -0000	1.96
--- comp_cache.h	28 Jul 2002 15:47:04 -0000	1.97
***************
*** 2,6 ****
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-07-18 15:45:44 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-07-28 10:08:41 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 28,33 ****
  #include <linux/shmem_fs.h>
  #include <linux/WKcommon.h>
  
! #define COMP_CACHE_VERSION "0.24pre1"
  
  /* maximum compressed size of a page */
--- 28,34 ----
  #include <linux/shmem_fs.h>
  #include <linux/WKcommon.h>
+ #include <linux/minilzo.h>
  
! #define COMP_CACHE_VERSION "0.24pre2"
  
  /* maximum compressed size of a page */
***************
*** 51,57 ****
  	struct list_head mapping_list;
  
  	unsigned long index;
! 	struct address_space *mapping;
  
  	/* offset in the compressed cache we are stored in */
  	unsigned short offset;
--- 52,63 ----
  	struct list_head mapping_list;
  
+ 	/* usage count */
+ 	atomic_t count;
+ 
  	unsigned long index;
! 	struct address_space * mapping;
  
+ 	struct swp_buffer * swp_buffer;
+ 
  	/* offset in the compressed cache we are stored in */
  	unsigned short offset;
***************
*** 104,120 ****
  	((struct swp_buffer *) kmem_cache_alloc(comp_cachep, SLAB_ATOMIC))
  
  extern int shmem_page(struct page * page);
  
  /* adaptivity.c */
  #ifdef CONFIG_COMP_CACHE
- int shrink_comp_cache(struct comp_cache_page *, int);
- int grow_comp_cache(int);
- void adapt_comp_cache(void);
- #else
- static inline int shrink_comp_cache(struct comp_cache_page * comp_page, int check_further) { return 0; }
- static inline void grow_comp_cache(int nrpages) { }
- #endif
- 
- #ifdef CONFIG_COMP_DEMAND_RESIZE
  int grow_on_demand(void);
  int shrink_on_demand(struct comp_cache_page *);
--- 110,133 ----
  	((struct swp_buffer *) kmem_cache_alloc(comp_cachep, SLAB_ATOMIC))
  
+ #define get_fragment(f) do { \
+ 	if (atomic_read(&(f)->count) == 0) \
+ 		BUG(); \
+ 	atomic_inc(&(f)->count); \
+ } while(0);
+ 
+ #define drop_fragment(f) do { \
+ 	if (!CompFragmentTestandSetToBeFreed(f)) \
+ 		put_fragment(f); \
+ } while(0);
+ 
+ #define put_fragment(f) __comp_cache_free(f)
+ #define put_fragment_testzero(f) atomic_dec_and_test(&(f)->count)
+ #define fragment_count(f) atomic_read(&(f)->count)
+ #define set_fragment_count(f,v) atomic_set(&(f)->count, v)
+ 
  extern int shmem_page(struct page * page);
  
  /* adaptivity.c */
  #ifdef CONFIG_COMP_CACHE
  int grow_on_demand(void);
  int shrink_on_demand(struct comp_cache_page *);
***************
*** 129,165 ****
  /* -- Fragment Flags */
  
- /* CF_Freed: when a fragment is going to be submitted to IO, there's a
-  * special case where one must tell swap buffer functions that the
-  * fragment was partially freed, so it does not need IO any longer
-  * (but the struct cannot be freed). In this case, we use Freed
-  * flag */
- #define CF_Freed 0
- 
- /* CF_IO: used to coordinate the swap buffer functions and
-    comp_cache_free() when freeing the fragment. If CF_IO is set and a
-    fragment happens to be freed, its structure will not be freed there,
-    only in find_free_swp_buffer(). */
- #define CF_IO 1
- 
  /* CF_WKdm/CF_WK4x4/CF_LZO: defines the algorithm the fragment has
   * been compressed (if it's been compressed) */
! #define CF_WKdm 2
! #define CF_WK4x4 3
! #define CF_LZO 4
  
  /* CF_Dirty: is the fragment dirty? */
! #define CF_Dirty 5
  
! #define CompFragmentFreed(fragment) test_bit(CF_Freed, &(fragment)->flags)
! #define CompFragmentSetFreed(fragment) set_bit(CF_Freed, &(fragment)->flags)
! #define CompFragmentTestandSetFreed(fragment) test_and_set_bit(CF_Freed, &(fragment)->flags)
! #define CompFragmentTestandClearFreed(fragment) test_and_clear_bit(CF_Freed, &(fragment)->flags)
! #define CompFragmentClearFreed(fragment) clear_bit(CF_Freed, &(fragment)->flags)
! 
! #define CompFragmentIO(fragment) test_bit(CF_IO, &(fragment)->flags)
! #define CompFragmentSetIO(fragment) set_bit(CF_IO, &(fragment)->flags)
! #define CompFragmentTestandSetIO(fragment) test_and_set_bit(CF_IO, &(fragment)->flags)
! #define CompFragmentTestandClearIO(fragment) test_and_clear_bit(CF_IO, &(fragment)->flags)
! #define CompFragmentClearIO(fragment) clear_bit(CF_IO, &(fragment)->flags)
  
  #define CompFragmentWKdm(fragment) test_bit(CF_WKdm, &(fragment)->flags)
--- 142,155 ----
  /* -- Fragment Flags */
  
  /* CF_WKdm/CF_WK4x4/CF_LZO: defines the algorithm the fragment has
   * been compressed (if it's been compressed) */
! #define CF_WKdm 0
! #define CF_WK4x4 1
! #define CF_LZO 2
  
  /* CF_Dirty: is the fragment dirty? */
! #define CF_Dirty 3
! 
! #define CF_ToBeFreed 4
  
  #define CompFragmentWKdm(fragment) test_bit(CF_WKdm, &(fragment)->flags)
***************
*** 184,187 ****
--- 174,180 ----
  #define CompFragmentClearDirty(fragment) clear_bit(CF_Dirty, &(fragment)->flags)
  
+ #define CompFragmentToBeFreed(fragment) test_bit(CF_ToBeFreed, &(fragment)->flags)
+ #define CompFragmentTestandSetToBeFreed(fragment) test_and_set_bit(CF_ToBeFreed, &(fragment)->flags)
+ 
  #define INF 0xffffffff
***************
*** 192,205 ****
  	struct list_head list;
  
! 	struct page * page;			/* page for IO */
! 	struct comp_cache_fragment * fragment;	/* pointer to the fragment we are doing IO */
  };
  
- #define DEBUG_CHECK_COUNT \
- 	if ((page_count(page) - !!page->buffers) != 2) { \
- 		printk("page_count %d page->buffers: %d\n", page_count(page), !!page->buffers); \
- 		BUG(); \
- 	}
- 
  #define NUM_MEAN_PAGES 100
--- 185,192 ----
  	struct list_head list;
  
! 	struct page * page;			/* page for IO */
! 	struct comp_cache_fragment * fragment;	/* pointer to the fragment we are doing IO */
  };
  
  #define NUM_MEAN_PAGES 100
***************
*** 219,232 ****
  #define DISCARD_MARK 0.80
  
- typedef unsigned int (compress_function_t)(unsigned long* source,
- 					   unsigned long* destination,
- 					   unsigned int words,
- 					   void *page);
- 
- typedef void (decompress_function_t)(unsigned long* source,
- 				     unsigned long* destination,
- 				     unsigned int words,
- 				     void *page);
- 
  typedef struct {
  	union {
--- 206,209 ----
  #define DISCARD_MARK 0.80
  
  typedef struct {
  	union {
***************
*** 268,290 ****
  };
  
- struct comp_alg {
- 	char name[6];
- 	compress_function_t * comp;
- 	decompress_function_t * decomp;
- 	struct stats_summary stats;
- };
- 
  struct comp_alg_data {
! 	WK_word *compressed_data;
! 	WK_word *decompressed_data;
! 
! 	WK_word *dictionary;
! 	char *hashLookupTable_WKdm;
! 	unsigned int *hashLookupTable_WK4x4;
  
  	WK_word *tempTagsArray;
  	WK_word *tempQPosArray;
  	WK_word *tempLowBitsArray;
  
! 	unsigned short compressed_size;
  };
--- 245,274 ----
  };
  
  struct comp_alg_data {
! 	/* WKdm and WK4x4 */
  	WK_word *tempTagsArray;
  	WK_word *tempQPosArray;
  	WK_word *tempLowBitsArray;
  
! 	/* LZO */
! 	lzo_byte * wrkmem;
! 
! 	unsigned short compressed_size;
! };
! 
! typedef unsigned int (compress_function_t)(unsigned long* source,
! 					   unsigned long* destination,
! 					   unsigned int words,
! 					   struct comp_alg_data * data);
! 
! typedef void (decompress_function_t)(unsigned long* source,
! 				     unsigned long* destination,
! 				     unsigned int words,
! 				     struct comp_alg_data * data);
! 
! struct comp_alg {
! 	char name[6];
! 	compress_function_t * comp;
! 	decompress_function_t * decomp;
! 	struct stats_summary stats;
  };
***************
*** 324,333 ****
  int read_comp_cache(struct address_space *, unsigned long, struct page *);
- int __invalidate_comp_cache(struct address_space *, unsigned long);
  int invalidate_comp_cache(struct address_space *, unsigned long);
  void invalidate_comp_pages(struct address_space *);
  void truncate_comp_pages(struct address_space *, unsigned long, unsigned);
! void wait_all_comp_pages(struct address_space *);
! void lookup_all_comp_pages(struct address_space *);
  
  #define there_are_dirty_comp_pages(mapping) (!list_empty(&(mapping)->dirty_comp_pages))
  #define there_are_locked_comp_pages(mapping) (!list_empty(&(mapping)->locked_comp_pages))
--- 308,316 ----
  int read_comp_cache(struct address_space *, unsigned long, struct page *);
  int invalidate_comp_cache(struct address_space *, unsigned long);
  void invalidate_comp_pages(struct address_space *);
  void truncate_comp_pages(struct address_space *, unsigned long, unsigned);
! void wait_comp_pages(struct address_space *);
! void lookup_comp_pages(struct address_space *);
  
  #define there_are_dirty_comp_pages(mapping) (!list_empty(&(mapping)->dirty_comp_pages))
  #define there_are_locked_comp_pages(mapping) (!list_empty(&(mapping)->locked_comp_pages))
***************
*** 338,343 ****
  static inline void invalidate_comp_pages(struct address_space * mapping) { };
  static inline void truncate_comp_pages(struct address_space * mapping, unsigned long start, unsigned partial) { };
! static inline void wait_all_comp_pages(struct address_space * mapping) { };
! static inline void lookup_all_comp_pages(struct address_space * mapping) { };
  
  #define there_are_dirty_comp_pages(mapping) 0
  #define there_are_locked_comp_pages(mapping) 0
--- 321,326 ----
  static inline void invalidate_comp_pages(struct address_space * mapping) { };
  static inline void truncate_comp_pages(struct address_space * mapping, unsigned long start, unsigned partial) { };
! static inline void wait_comp_pages(struct address_space * mapping) { };
! static inline void lookup_comp_pages(struct address_space * mapping) { };
  
  #define there_are_dirty_comp_pages(mapping) 0
  #define there_are_locked_comp_pages(mapping) 0
***************
*** 346,357 ****
  /* main.c */
  #ifdef CONFIG_COMP_CACHE
- int compress_page(struct page *, int, unsigned int, int);
  void comp_cache_init(void);
  inline int init_comp_page(struct comp_cache_page **,struct page *);
! inline void compress_dirty_page(struct page *, int (*writepage)(struct page *), unsigned int, int);
! inline int compress_clean_page(struct page *, unsigned int, int);
  
  #define COMP_PAGE_SIZE ((comp_page_order + 1) * PAGE_SIZE)
! #define comp_cache_used_space ((num_comp_pages * PAGE_SIZE) - comp_cache_free_space)
  
  #define page_to_comp_page(n) ((n) >> comp_page_order)
--- 329,342 ----
  /* main.c */
  #ifdef CONFIG_COMP_CACHE
  void comp_cache_init(void);
  inline int init_comp_page(struct comp_cache_page **,struct page *);
! int compress_dirty_page(struct page *, int (*writepage)(struct page *), unsigned int, int);
! int compress_clean_page(struct page *, unsigned int, int);
! 
! #define CLEAN_PAGE 0
! #define DIRTY_PAGE 1
  
  #define COMP_PAGE_SIZE ((comp_page_order + 1) * PAGE_SIZE)
! #define comp_cache_used_space ((num_comp_pages * COMP_PAGE_SIZE) - comp_cache_free_space)
  
  #define page_to_comp_page(n) ((n) >> comp_page_order)
***************
*** 363,367 ****
  #else
  static inline void comp_cache_init(void) {};
! static inline int compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask, int priority) { return writepage(page); }
  static inline int compress_clean_page(struct page * page, unsigned int gfp_mask, int priority) { return 1; }
  #endif
--- 348,352 ----
  #else
  static inline void comp_cache_init(void) {};
! static inline int compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask, int priority) { writepage(page); return 0; }
  static inline int compress_clean_page(struct page * page, unsigned int gfp_mask, int priority) { return 1; }
  #endif
***************
*** 453,460 ****
  /* free.c */
  
! void comp_cache_free_locked(struct comp_cache_fragment *);
! void comp_cache_free(struct comp_cache_fragment *);
  
  #ifdef CONFIG_COMP_CACHE
  
  int comp_cache_use_address(swp_entry_t);
--- 438,447 ----
  /* free.c */
  
! int __comp_cache_free(struct comp_cache_fragment *);
  
  #ifdef CONFIG_COMP_CACHE
  
+ #define fragment_freed(fragment) (!fragment_count(fragment) && !fragment->mapping)
+ 
  int comp_cache_use_address(swp_entry_t);
***************
*** 582,586 ****
  int comp_cache_hist_read_proc(char *, char **, off_t, int, int *, void *);
  int comp_cache_frag_read_proc(char *, char **, off_t, int, int *, void *);
- inline void comp_cache_update_page_stats(struct page *, int);
  
  #endif /* _LINUX_COMP_CACHE_H */
--- 569,572 ----
  int comp_cache_hist_read_proc(char *, char **, off_t, int, int *, void *);
  int comp_cache_frag_read_proc(char *, char **, off_t, int, int *, void *);
  
  #endif /* _LINUX_COMP_CACHE_H */