linuxcompressed-checkins Mailing List for Linux Compressed Cache (Page 7)
Archive (messages per month):
2001: Oct (2), Dec (31)
2002: Jan (28), Feb (50), Mar (29), Apr (6), May (33), Jun (36), Jul (60), Aug (7), Sep (12), Nov (13), Dec (3)
2003: May (9)
2006: Jan (13), Feb (4), Mar (4), Apr (1), Jun (22)
From: Rodrigo S. de C. <rc...@us...> - 2002-06-25 14:34:12
Update of /cvsroot/linuxcompressed/linux/Documentation
In directory usw-pr-cvs1:/tmp/cvs-serv13268/Documentation

Modified Files:
	Configure.help

Log Message:

Feature
o Implemented support for resizing the compressed cache on demand. The
  user defines the maximum compressed cache size and the compressed
  cache will grow up to this size if necessary. Only then will it start
  swapping out fragments. And when compressed cache entries start to
  get empty, their pages are released to the system, decreasing the
  compressed cache size. Still have to solve some issues about resizing
  vswap.
o Changed most of the calls from comp_cache_free_locked() to
  comp_cache_free(), in order to release the page if necessary. Only
  calls from writeout functions were not changed, since we don't want
  to use those pages to shrink the compressed cache.

Bug fixes
o Fixed potential oops in comp_cache_use_address(). If the ptes cannot
  be set to the new address, we would access a null variable (fragment).
o Fixed bug in the swap-in process for virtual swap addresses. While
  allocating a new page, that virtual swap address might become unused
  (it gained a real address or the vswap table got shrunk), which could
  lead to a BUG() in comp_cache_swp_duplicate().

Other
o Some comments added to functions in adaptivity.c
o Updated Configure.help for CONFIG_COMP_CACHE

Index: Configure.help
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/Documentation/Configure.help,v
retrieving revision 1.7
retrieving revision 1.8
diff -C2 -r1.7 -r1.8
*** Configure.help	20 Jun 2002 14:28:48 -0000	1.7
--- Configure.help	25 Jun 2002 14:34:07 -0000	1.8
***************
*** 387,391 ****
  Initial number of pages reserved for compressed cache is set by the
! kernel parameter "compsize=N", where N is a number of memory pages.

  If unsure, say N here.
--- 387,393 ----
  Initial number of pages reserved for compressed cache is set by the
! kernel parameter "compsize=N", where N is a memory size like the
! input accepted by the "mem=" parameter. For example, "compsize=48M"
! sets the initial compressed cache size to 48 megabytes.

  If unsure, say N here.
***************
*** 398,401 ****
--- 400,425 ----
  behaviour. If you don't select this option, compressed cache will
  store only anonymous pages, ie pages not mapped to files.
+
+ If unsure, say N here.
+
+ Resize Compressed Cache On Demand
+ CONFIG_COMP_DEMAND_RESIZE
+
+ Select this option in case you want compressed cache to start with a
+ minimum number of pages and resize on demand. It means that
+ compressed cache will grow up to its maximum size while the system
+ is under memory pressure, and will only start swapping out when it
+ reaches that size. As soon as the reserved pages for compressed
+ cache are no longer used, they are freed to the system, decreasing
+ compressed cache size.
+
+ The maximum size is defined by the very same kernel parameter
+ "compsize=N", where N is a memory size like the input accepted by
+ the "mem=" parameter. For example, "compsize=48M" will set the
+ maximum compressed cache size to 48 megabytes.
+
+ If this option is enabled, the user can no longer change the
+ compressed cache size via the sysctl entry
+ (/proc/sys/vm/comp_cache/size).

  If unsure, say N here.
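The "compsize=" convention this help text documents is plain memparse()
boot-parameter handling; a minimal sketch of the parser (condensed from
the mm/comp_cache/main.c diff in this same checkin, so the names come
from that diff):

    /* "compsize=48M" -> memparse() returns 48 << 20; convert the byte
     * count to a page count. memparse() accepts the K/M/G suffixes,
     * which is exactly the "input accepted by mem=" behaviour. */
    static int __init comp_cache_size(char *str)
    {
            char *endp;

            init_num_comp_pages = memparse(str, &endp) >> PAGE_SHIFT;
            return 1;
    }
    __setup("compsize=", comp_cache_size);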
From: Rodrigo S. de C. <rc...@us...> - 2002-06-25 14:34:12
Update of /cvsroot/linuxcompressed/linux/mm
In directory usw-pr-cvs1:/tmp/cvs-serv13268/mm

Modified Files:
	swap_state.c vmscan.c

Log Message:
(identical to the Documentation/Configure.help checkin above)

Index: swap_state.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/swap_state.c,v
retrieving revision 1.30
retrieving revision 1.31
diff -C2 -r1.30 -r1.31
*** swap_state.c	20 Jun 2002 14:28:49 -0000	1.30
--- swap_state.c	25 Jun 2002 14:34:07 -0000	1.31
***************
*** 214,217 ****
--- 214,218 ----
  	 */
  	if (!new_page) {
+ 		set_vswap_allocating(entry);
  		new_page = alloc_page(GFP_HIGHUSER);
  		if (!new_page)
***************
*** 232,258 ****
  			if (!read_comp_cache(&swapper_space, entry.val, new_page, 1))
  				return new_page;
!
! 			/*
! 			 * vswap address? It's been moved when vswap
! 			 * got shrunk, or gained a real entry and has
! 			 * been swapped out. In either cases, its pte
! 			 * has changed. There's no problem returning a
! 			 * NULL page, mainly when swapping in, since
! 			 * the pte is checked wrt changes. If it's
! 			 * been swapped in when allocating the page
! 			 * above, it will fail to add to swap cache.
! 			 */
! 			if (vswap_address(entry)) {
! 				delete_from_swap_cache(new_page);
! 				break;
! 			}
! 			rw_swap_page(READ, new_page);
  			return new_page;
  		}
  	} while (err != -ENOENT);
!
  	if (new_page)
  		page_cache_release(new_page);
  	return found_page;
  }
--- 233,246 ----
  			if (!read_comp_cache(&swapper_space, entry.val, new_page, 1))
  				return new_page;
! 			if (vswap_address(entry))
! 				BUG();
  			rw_swap_page(READ, new_page);
  			return new_page;
  		}
  	} while (err != -ENOENT);
!
  	if (new_page)
  		page_cache_release(new_page);
+ 	clear_vswap_allocating(entry);
  	return found_page;
  }

Index: vmscan.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/vmscan.c,v
retrieving revision 1.35
retrieving revision 1.36
diff -C2 -r1.35 -r1.36
*** vmscan.c	18 Jun 2002 13:39:33 -0000	1.35
--- vmscan.c	25 Jun 2002 14:34:07 -0000	1.36
***************
*** 636,643 ****
  	nr_pages = shrink_caches(classzone, priority, gfp_mask, nr_pages);
  	if (nr_pages <= 0) {
! 		/* let's steal at most half the pages that has
! 		 * been freed by shrink_caches to grow
! 		 * compressed cache (only for normal zone) */
! 		grow_comp_cache(classzone, SWAP_CLUSTER_MAX/2);
  		return 1;
  	}
--- 636,642 ----
  	nr_pages = shrink_caches(classzone, priority, gfp_mask, nr_pages);
  	if (nr_pages <= 0) {
! #ifndef CONFIG_COMP_DEMAND_RESIZE
! 		grow_comp_cache(SWAP_CLUSTER_MAX/2);
! #endif
  		return 1;
  	}
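The swap_state.c change above is the fix for the swap-in race named in
the log message: alloc_page() may sleep, and while it sleeps the
virtual swap entry can gain a real address or be moved by a vswap
shrink. The new set_vswap_allocating()/clear_vswap_allocating() pair
pins the entry for the duration. A condensed sketch of the pattern
(kernel-context code, not standalone; names come from this checkin):

    if (!new_page) {
            set_vswap_allocating(entry);         /* pin the vswap entry */
            new_page = alloc_page(GFP_HIGHUSER); /* may sleep */
            if (!new_page)
                    break;
    }
    /* ... lookup/read loop: a vswap address surviving to the read
     * path now indicates a real bug, hence the new BUG() call ... */
    clear_vswap_allocating(entry);               /* entry may move again */

On the shrink side (see the adaptivity.c diff below), shrink_vswap()
honours the pin by skipping such entries:
if (freeing(index) || allocating(index)) continue;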
From: Rodrigo S. de C. <rc...@us...> - 2002-06-25 14:34:11
Update of /cvsroot/linuxcompressed/linux/include/linux
In directory usw-pr-cvs1:/tmp/cvs-serv13268/include/linux

Modified Files:
	comp_cache.h

Log Message:
(identical to the Documentation/Configure.help checkin above)

Index: comp_cache.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v
retrieving revision 1.82
retrieving revision 1.83
diff -C2 -r1.82 -r1.83
*** comp_cache.h	20 Jun 2002 14:28:49 -0000	1.82
--- comp_cache.h	25 Jun 2002 14:34:07 -0000	1.83
***************
*** 2,6 ****
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-06-20 11:15:27 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-06-23 12:35:16 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 29,33 ****
  #include <linux/WKcommon.h>

! #define COMP_CACHE_VERSION "0.23pre7"

  /* maximum compressed size of a page */
--- 29,33 ----
  #include <linux/WKcommon.h>

! #define COMP_CACHE_VERSION "0.23pre8"

  /* maximum compressed size of a page */
***************
*** 36,40 ****
  #define NUM_VSWAP_ENTRIES (3 * num_comp_pages)

! extern unsigned long num_comp_pages, num_fragments, num_swapper_fragments, new_num_comp_pages, max_num_comp_pages, zone_num_comp_pages;

  struct pte_list {
--- 36,40 ----
  #define NUM_VSWAP_ENTRIES (3 * num_comp_pages)

! extern unsigned long num_comp_pages, num_fragments, num_swapper_fragments, new_num_comp_pages, min_num_comp_pages, max_num_comp_pages, zone_num_comp_pages;

  struct pte_list {
***************
*** 99,110 ****
  /* adaptivity.c */
  #ifdef CONFIG_COMP_CACHE
! int shrink_comp_cache(struct comp_cache_page *);
! inline void grow_comp_cache(zone_t *, int);
  void adapt_comp_cache(void);
  #else
! static inline int shrink_comp_cache(struct comp_cache_page * comp_page) { return 0; }
  static inline void grow_comp_cache(zone_t * zone, int nr_pages) { }
  #endif

  /* swapout.c */
  extern struct list_head swp_free_buffer_head;
--- 99,118 ----
  /* adaptivity.c */
  #ifdef CONFIG_COMP_CACHE
! int shrink_comp_cache(struct comp_cache_page *, int);
! int grow_comp_cache(int);
  void adapt_comp_cache(void);
  #else
! static inline int shrink_comp_cache(struct comp_cache_page * comp_page, int check_further) { return 0; }
  static inline void grow_comp_cache(zone_t * zone, int nr_pages) { }
  #endif

+ #ifdef CONFIG_COMP_DEMAND_RESIZE
+ int grow_on_demand(void);
+ int shrink_on_demand(struct comp_cache_page *);
+ #else
+ static inline int grow_on_demand(void) { return 0; }
+ static inline int shrink_on_demand(struct comp_cache_page * comp_page) { return 0; }
+ #endif
+
  /* swapout.c */
  extern struct list_head swp_free_buffer_head;
***************
*** 381,390 ****
  #define COMP_CACHE_SWP_TYPE MAX_SWAPFILES

! #define VSWAP_RESERVED ((struct comp_cache_fragment *) 0xffffffff)

  #ifdef CONFIG_COMP_CACHE
  #define vswap_info_struct(p) (p == &swap_info[COMP_CACHE_SWP_TYPE])
  #define vswap_address(entry) (SWP_TYPE(entry) == COMP_CACHE_SWP_TYPE)
! #define reserved(offset) (vswap_address[offset]->fragment == VSWAP_RESERVED)

  int comp_cache_swp_duplicate(swp_entry_t);
--- 389,403 ----
  #define COMP_CACHE_SWP_TYPE MAX_SWAPFILES

! #define VSWAP_RESERVED ((struct comp_cache_fragment *) 0xffffffff)
! #define VSWAP_FREEING ((struct comp_cache_fragment *) 0xfffffffe)
!
! #define VSWAP_ALLOCATING ((struct page *) 0xffffffff)

  #ifdef CONFIG_COMP_CACHE
  #define vswap_info_struct(p) (p == &swap_info[COMP_CACHE_SWP_TYPE])
  #define vswap_address(entry) (SWP_TYPE(entry) == COMP_CACHE_SWP_TYPE)
! #define reserved(offset) (vswap_address[offset]->fragment == VSWAP_RESERVED)
! #define freeing(offset) (vswap_address[offset]->fragment == VSWAP_FREEING)
! #define allocating(offset) (vswap_address[offset]->swap_cache_page == VSWAP_ALLOCATING)

  int comp_cache_swp_duplicate(swp_entry_t);
***************
*** 394,397 ****
--- 407,413 ----

  inline int comp_cache_available_space(void);
+
+ inline void set_vswap_allocating(swp_entry_t entry);
+ inline void clear_vswap_allocating(swp_entry_t entry);

  extern void FASTCALL(add_pte_vswap(pte_t *, swp_entry_t));
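The new VSWAP_FREEING and VSWAP_ALLOCATING values extend the existing
VSWAP_RESERVED trick: a per-entry state is encoded in a pointer field
by storing an address no real object can occupy, so no separate flags
word is needed and each test stays a single pointer compare. A reduced
sketch of the encoding (the real struct vswap_address carries more
fields than the two shown here):

    /* fragment: a real fragment pointer, or RESERVED/FREEING;
     * swap_cache_page: a real page pointer, NULL, or ALLOCATING. */
    #define VSWAP_RESERVED   ((struct comp_cache_fragment *) 0xffffffff)
    #define VSWAP_FREEING    ((struct comp_cache_fragment *) 0xfffffffe)
    #define VSWAP_ALLOCATING ((struct page *) 0xffffffff)

The cost of the trick, visible throughout the diffs in this checkin,
is that every user of these fields must test for the sentinel values
before dereferencing them.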
From: Rodrigo S. de C. <rc...@us...> - 2002-06-25 14:34:11
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache
In directory usw-pr-cvs1:/tmp/cvs-serv13268/mm/comp_cache

Modified Files:
	adaptivity.c free.c main.c swapin.c swapout.c vswap.c

Log Message:
(identical to the Documentation/Configure.help checkin above)

Index: adaptivity.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v
retrieving revision 1.28
retrieving revision 1.29
diff -C2 -r1.28 -r1.29
*** adaptivity.c	20 Jun 2002 14:28:49 -0000	1.28
--- adaptivity.c	25 Jun 2002 14:34:07 -0000	1.29
***************
*** 2,6 ****
   * linux/mm/comp_cache/adaptivity.c
   *
!  * Time-stamp: <2002-06-20 10:59:52 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache/adaptivity.c
   *
!  * Time-stamp: <2002-06-25 10:32:29 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 18,21 ****
--- 18,25 ----
  static int fragment_failed_alloc = 0, vswap_failed_alloc = 0;

+ /* semaphore used to avoid two concurrent instances of
+  * {grow,shrink}_vswap() functions to run together */
+ static struct semaphore vswap_resize_semaphore;
+
  extern void comp_cache_fix_watermarks(int);
***************
*** 25,29 ****
  	unsigned long new_fragment_hash_size;
  	unsigned int i, new_fragment_hash_bits, new_fragment_hash_order, hash_index;
!
  	new_fragment_hash_size = 3 * num_comp_pages * sizeof(struct comp_cache_fragment *);
  	new_fragment_hash = create_fragment_hash(&new_fragment_hash_size, &new_fragment_hash_bits, &new_fragment_hash_order);
--- 29,33 ----
  	unsigned long new_fragment_hash_size;
  	unsigned int i, new_fragment_hash_bits, new_fragment_hash_order, hash_index;
!
  	new_fragment_hash_size = 3 * num_comp_pages * sizeof(struct comp_cache_fragment *);
  	new_fragment_hash = create_fragment_hash(&new_fragment_hash_size, &new_fragment_hash_bits, &new_fragment_hash_order);
***************
*** 77,86 ****
  extern kmem_cache_t * vswap_cachep;

  static int wait_scan = 0;

  /***
! * shrink_vswap(unsigned long) - shrinks vswap adressing table from
! * its current size (vswap_current_num_entries) to NUM_VSWAP_ENTRIES,
! * its new size in function of num_comp_pages.
  *
  * we try to shrink the vswap at once, but that will depend on getting
--- 81,91 ----
  extern kmem_cache_t * vswap_cachep;
+ extern unsigned long nr_free_vswap;

  static int wait_scan = 0;

  /***
! * shrink_vswap(void) - shrinks vswap adressing table from its current
! * size (vswap_current_num_entries) to NUM_VSWAP_ENTRIES, its new size
! * in function of num_comp_pages.
  *
  * we try to shrink the vswap at once, but that will depend on getting
***************
*** 108,119 ****
  */
  void
! shrink_vswap(unsigned long vswap_new_num_entries)
  {
  	struct page * swap_cache_page;
  	struct comp_cache_fragment * fragment;
  	struct vswap_address ** new_vswap_address;
! 	unsigned int total_scan = 0, failed_scan = 0, failed_alloc = 0;
! 	unsigned long index, new_index;
  	swp_entry_t old_entry, entry;

  	if (vswap_current_num_entries <= vswap_new_num_entries)
--- 113,140 ----
  */
  void
! shrink_vswap(void)
  {
  	struct page * swap_cache_page;
  	struct comp_cache_fragment * fragment;
  	struct vswap_address ** new_vswap_address;
! 	unsigned int failed_alloc = 0;
! 	unsigned long index, new_index, vswap_new_num_entries = NUM_VSWAP_ENTRIES;
  	swp_entry_t old_entry, entry;
+
+ 	if (!vswap_address)
+ 		return;
+
+ 	if (vswap_current_num_entries <= 1.10 * NUM_VSWAP_ENTRIES)
+ 		return;
+
+ 	/* more used entries than the new size? can't shrink */
+ 	if (vswap_num_used_entries >= NUM_VSWAP_ENTRIES)
+ 		return;
+
+ 	if (down_trylock(&vswap_resize_semaphore))
+ 		return;
+
+ #if 0
+ 	printk("shrinking\n");
+ #endif

  	if (vswap_current_num_entries <= vswap_new_num_entries)
***************
*** 133,145 ****
  	for (index = vswap_last_used; index >= vswap_new_num_entries; index--) {
! 		/* we have already freed this entry for shrink */
  		if (!vswap_address[index])
  			continue;

  		/* unused entry? let's only free it */
  		if (!vswap_address[index]->count) {
  			list_del(&(vswap_address[index]->list));
  			kmem_cache_free(vswap_cachep, (vswap_address[index]));
  			vswap_address[index] = NULL;
  			continue;
  		}
--- 154,185 ----
  	for (index = vswap_last_used; index >= vswap_new_num_entries; index--) {
! 		/* either this entry has already been freed or hasn't
! 		 * been sucessfully allocated */
  		if (!vswap_address[index])
  			continue;

+ 		/* we are shrinking this vswap table from a function
+ 		 * which is freeing a vswap entry, so forget this
+ 		 * entry. The same for the case this entry is in the
+ 		 * middle of a swapin process (allocating a new
+ 		 * page) */
+ 		if (freeing(index) || allocating(index))
+ 			continue;
+
  		/* unused entry? let's only free it */
  		if (!vswap_address[index]->count) {
  			list_del(&(vswap_address[index]->list));
+ 			nr_free_vswap--;
  			kmem_cache_free(vswap_cachep, (vswap_address[index]));
  			vswap_address[index] = NULL;
+
+ 			/* time to fix the last_vswap_allocated (we
+ 			 * may not reach the point where it will be
+ 			 * updated) */
+ 			if (index <= last_vswap_allocated)
+ 				last_vswap_allocated = index - 1;
+ #if 0
+ 			printk("null %d\n", index);
+ #endif
  			continue;
  		}
***************
*** 164,179 ****
  	 * boundary to link this used entry we are moving
  	 * down */
! 	for (; new_index > 0 && vswap_address[new_index]->count; new_index--);
!
! 	/* we must have a new index, otherwise
! 	 * vswap_needs_to_shrink() is broken */
  	if (!new_index)
! 		BUG();

  	old_entry = SWP_ENTRY(COMP_CACHE_SWP_TYPE, index);
  	entry = SWP_ENTRY(COMP_CACHE_SWP_TYPE, new_index);

- 	total_scan++;
-
  	/* let's fix the ptes */
  	if (!set_pte_list_to_entry(vswap_address[index]->pte_list, old_entry, entry))
--- 204,227 ----
  	 * boundary to link this used entry we are moving
  	 * down */
! 	while (new_index > 0) {
! 		if (!vswap_address[new_index])
! 			break;
!
! 		if (freeing(new_index))
! 			goto next;
!
! 		if (!vswap_address[new_index]->count)
! 			break;
! 	next:
! 		new_index--;
! 	}
!
  	if (!new_index)
! 		goto backout;

  	old_entry = SWP_ENTRY(COMP_CACHE_SWP_TYPE, index);
  	entry = SWP_ENTRY(COMP_CACHE_SWP_TYPE, new_index);

  	/* let's fix the ptes */
  	if (!set_pte_list_to_entry(vswap_address[index]->pte_list, old_entry, entry))
***************
*** 195,205 ****
  	}

! 	list_del(&(vswap_address[new_index]->list));
! 	kmem_cache_free(vswap_cachep, (vswap_address[new_index]));
! 	vswap_address[new_index] = NULL;
!
! 	if (vswap_address[new_index])
! 		BUG();

  	vswap_address[new_index] = vswap_address[index];
  	vswap_address[new_index]->offset = new_index;
--- 243,257 ----
  	}

! 	if (vswap_address[new_index]) {
! 		list_del(&(vswap_address[new_index]->list));
! 		nr_free_vswap--;
! 		kmem_cache_free(vswap_cachep, (vswap_address[new_index]));
! 		vswap_address[new_index] = NULL;
! 	}

+ #if 0
+ 	printk("vswap %lu -> %lu\n", index, new_index);
+ #endif
+
  	vswap_address[new_index] = vswap_address[index];
  	vswap_address[new_index]->offset = new_index;
***************
*** 210,218 ****
  backout:
- 	failed_scan++;
  	if (swap_cache_page)
  		UnlockPage(swap_cache_page);

  	if (fragment && !reserved(index))
  		UnlockPage(fragment->comp_page->page);
  }
--- 262,270 ----
  backout:
  	if (swap_cache_page)
  		UnlockPage(swap_cache_page);

  	if (fragment && !reserved(index))
  		UnlockPage(fragment->comp_page->page);
+ 	break;
  }
***************
*** 222,226 ****
  		continue;

! 	if (!vswap_address[vswap_last_used]->count && vswap_last_used >= vswap_new_num_entries)
  		BUG();
--- 274,280 ----
  		continue;

! 	if (!vswap_address[vswap_last_used]->count
! 	    && vswap_last_used >= vswap_new_num_entries
! 	    && !freeing(vswap_last_used))
  		BUG();
***************
*** 228,239 ****
  	}

! 	if (vswap_last_used >= vswap_new_num_entries) {
! 		/* if we failed all tries to find the vmas, it's
! 		 * better wait for a while before trying again, since
! 		 * the call might be coming from mmput() */
! 		if (total_scan > 0 && total_scan == failed_scan)
! 			wait_scan = total_scan * 2;
! 		return;
! 	}

  allocate_new_vswap:
--- 282,287 ----
  	}

! 	if (vswap_last_used >= vswap_new_num_entries)
! 		goto out;

  allocate_new_vswap:
***************
*** 242,246 ****
  	if (!new_vswap_address) {
  		vswap_failed_alloc = 1;
! 		return;
  	}
--- 290,294 ----
  	if (!new_vswap_address) {
  		vswap_failed_alloc = 1;
! 		goto out;
  	}
***************
*** 290,296 ****
  	vswap_last_used = vswap_new_num_entries - 1;
  	vswap_failed_alloc = 0;
  }

! /* grow_vswap(void) - grows vswap adressing table from its current
  * size (vswap_current_num_entries) to NUM_VSWAP_ENTRIES, its new size
  * in function of num_comp_pages.
--- 338,347 ----
  	vswap_last_used = vswap_new_num_entries - 1;
  	vswap_failed_alloc = 0;
+ out:
+ 	up(&vswap_resize_semaphore);
  }

! /***
! * grow_vswap(void) - grows vswap adressing table from its current
  * size (vswap_current_num_entries) to NUM_VSWAP_ENTRIES, its new size
  * in function of num_comp_pages.
***************
*** 300,308 ****
  * new ones), updating some control variables to conclude.
  */
! void
! grow_vswap(unsigned long vswap_new_num_entries)
  {
  	struct vswap_address ** new_vswap_address;
  	unsigned int i, failed_alloc = 0;

  	if (vswap_last_used >= vswap_new_num_entries - 1)
  		BUG();
--- 351,376 ----
  * new ones), updating some control variables to conclude.
  */
! static void
! grow_vswap(void)
  {
  	struct vswap_address ** new_vswap_address;
  	unsigned int i, failed_alloc = 0;
+ 	unsigned long vswap_new_num_entries = NUM_VSWAP_ENTRIES;
+
+ 	if (!vswap_address)
+ 		return;
+
+ 	/* using vswap_last_used instead of vswap_current_num_entries
+ 	 * forces us to grow the cache even if we started shrinking
+ 	 * it, but one set comp cache to the original size */
+ 	if (vswap_last_used >= 0.90 * (NUM_VSWAP_ENTRIES - 1))
+ 		return;
+
+ 	if (down_trylock(&vswap_resize_semaphore))
+ 		return;
+ #if 0
+ 	printk("growing\n");
+ #endif

  	if (vswap_last_used >= vswap_new_num_entries - 1)
  		BUG();
***************
*** 315,319 ****
  	if (!new_vswap_address) {
  		vswap_failed_alloc = 1;
! 		return;
  	}
--- 383,387 ----
  	if (!new_vswap_address) {
  		vswap_failed_alloc = 1;
! 		goto out;
  	}
***************
*** 359,363 ****
  	vswap_last_used = vswap_new_num_entries - 1;
  	vswap_failed_alloc = 0;
! 	return;

  fix_old_vswap:
--- 427,431 ----
  	vswap_last_used = vswap_new_num_entries - 1;
  	vswap_failed_alloc = 0;
! 	goto out;

  fix_old_vswap:
***************
*** 378,385 ****
  	last_vswap_allocated = vswap_new_num_entries - 1;
  	vswap_last_used = vswap_current_num_entries - 1;
  }

! static inline int
! fragment_hash_needs_to_shrink(void)
  {
  	unsigned long new_fragment_hash_size = (3 * num_comp_pages) * sizeof(struct comp_cache_fragment *);
--- 446,455 ----
  	last_vswap_allocated = vswap_new_num_entries - 1;
  	vswap_last_used = vswap_current_num_entries - 1;
+ out:
+ 	up(&vswap_resize_semaphore);
  }

! static inline void
! shrink_fragment_hash_table(void)
  {
  	unsigned long new_fragment_hash_size = (3 * num_comp_pages) * sizeof(struct comp_cache_fragment *);
***************
*** 387,414 ****
  	 * there? if they won't, no need to shrink the hash table */
  	if ((PAGE_SIZE << (fragment_hash_order - 1)) < new_fragment_hash_size)
! 		return 0;
!
! 	return 1;
! }
!
! static inline int
! vswap_needs_to_shrink(void)
! {
! 	if (!vswap_address)
! 		return 0;
!
! 	if (vswap_current_num_entries <= NUM_VSWAP_ENTRIES)
! 		return 0;
!
! 	/* more used entries than the new size? can't shrink */
! 	if (vswap_num_used_entries >= NUM_VSWAP_ENTRIES)
! 		return 0;
!
! 	/* failed a lot in the last tries? let's wait for a while */
! 	if (wait_scan) {
! 		wait_scan--;
! 		return 0;
! 	}
!
! 	return 1;
  }
--- 457,463 ----
  	 * there? if they won't, no need to shrink the hash table */
  	if ((PAGE_SIZE << (fragment_hash_order - 1)) < new_fragment_hash_size)
! 		return;
!
! 	resize_fragment_hash_table();
  }
***************
*** 425,444 ****
  }

! static inline int
! zone_wrong_watermarks_shrink(void)
  {
! 	return (zone_num_comp_pages > num_comp_pages);
  }

  int
! shrink_comp_cache(struct comp_cache_page * comp_page)
  {
  	struct comp_cache_page * empty_comp_page;
  	int retval = 0;

  	/* if the comp_page is not empty, can't free it */
! 	if (!list_empty(&(comp_page->fragments))) {
  		UnlockPage(comp_page->page);
! 		goto check_shrink;
  	}
--- 474,515 ----
  }

! static inline void
! shrink_zone_watermarks(void)
  {
! 	if (zone_num_comp_pages <= num_comp_pages)
! 		return;
!
! 	comp_cache_fix_watermarks(num_comp_pages);
  }

+ /***
+  * shrink_comp_cache(comp_page, check_further) - given a "comp_page"
+  * entry, check if this page does not have fragments and if the
+  * compressed cache need to be shrunk.
+  *
+  * In the case we can use the comp page to shrink the cache, release
+  * it to the system, fixing all compressed cache data structures.
+  *
+  * @check_further: this parameter is used to distinguish between two
+  * cases where we might be shrinking the case: user input to sysctl
+  * entry or shrinking on demand. In the latter case, we want to simply
+  * check the comp_page and free it if possible, we don't want to
+  * perform an agressive shrinkage.
+  */
  int
! shrink_comp_cache(struct comp_cache_page * comp_page, int check_further)
  {
  	struct comp_cache_page * empty_comp_page;
  	int retval = 0;
+
+ 	if (!comp_page->page)
+ 		BUG();

  	/* if the comp_page is not empty, can't free it */
! 	if (!list_empty(&(comp_page->fragments))) {
  		UnlockPage(comp_page->page);
! 		if (check_further)
! 			goto check_shrink;
! 		goto out;
  	}
***************
*** 466,484 ****
  check_shrink:
! 	if (comp_cache_needs_to_shrink()) {
! 		if (!fragment_failed_alloc && !vswap_failed_alloc)
! 			goto check_empty_pages;
! 	}
! 	else {
! 		if (zone_wrong_watermarks_shrink())
! 			comp_cache_fix_watermarks(num_comp_pages);
  	}

  out:
! 	if (fragment_hash_needs_to_shrink())
! 		resize_fragment_hash_table();
!
! 	if (vswap_needs_to_shrink())
! 		shrink_vswap(NUM_VSWAP_ENTRIES);

  	return retval;
--- 537,551 ----
  check_shrink:
! 	if (!comp_cache_needs_to_shrink()) {
! 		shrink_zone_watermarks();
! 		goto out;
  	}
+
+ 	if (!fragment_failed_alloc && !vswap_failed_alloc)
+ 		goto check_empty_pages;

  out:
! 	shrink_fragment_hash_table();
! 	shrink_vswap();

  	return retval;
***************
*** 502,549 ****
  }

  #define comp_cache_needs_to_grow() (new_num_comp_pages > num_comp_pages)

! static inline int
! fragment_hash_needs_to_grow(void)
  {
  	unsigned long new_fragment_hash_size = (3 * num_comp_pages) * sizeof(struct comp_cache_fragment *);

  	/* do we really need a bigger hash table? */
  	if ((PAGE_SIZE << fragment_hash_order) >= new_fragment_hash_size)
! 		return 0;
!
! 	return 1;
! }
!
! static inline int
! vswap_needs_to_grow(void)
! {
! 	if (!vswap_address)
! 		return 0;
!
! 	/* using vswap_last_used instead of vswap_current_num_entries
! 	 * forces us to grow the cache even if we started shrinking
! 	 * it, but one set comp cache to the original size */
! 	if (vswap_last_used >= NUM_VSWAP_ENTRIES - 1)
! 		return 0;
!
! 	return 1;
  }

! static inline int
! zone_wrong_watermarks_grow(void)
  {
! 	return (zone_num_comp_pages < num_comp_pages);
  }

! inline void
! grow_comp_cache(zone_t * zone, int nr_pages)
  {
  	struct comp_cache_page * comp_page;
  	struct page * page;

- 	/* we only care about the pages freed in normal zone since all
- 	 * the allocations we make are GFP_KERNEL */
- 	if (zone != &(zone->zone_pgdat->node_zones[ZONE_NORMAL]))
- 		return;
-
  	while (comp_cache_needs_to_grow() && nr_pages--) {
  		page = alloc_page(GFP_ATOMIC);
--- 569,630 ----
  }

+ #ifdef CONFIG_COMP_DEMAND_RESIZE
+ /***
+  * shrink_on_demand(comp_page) - called by comp_cache_free(), it will
+  * try to shrink the compressed cache by one entry (comp_page). The
+  * comp_cache_free() function is called by every place that free a
+  * compressed cache fragment but swap out functions.
+  */
+ int
+ shrink_on_demand(struct comp_cache_page * comp_page)
+ {
+ 	if (num_comp_pages == min_num_comp_pages) {
+ 		UnlockPage(comp_page->page);
+ 		return 0;
+ 	}
+
+ 	/* to force the shrink_comp_cache() to grow the cache */
+ 	new_num_comp_pages = num_comp_pages - 1;
+
+ 	if (shrink_comp_cache(comp_page, 0)) {
+ #if 0
+ 		printk("wow, it has shrunk %d\n", num_comp_pages);
+ #endif
+ 		return 1;
+ 	}
+
+ 	new_num_comp_pages = num_comp_pages;
+ 	return 0;
+ }
+ #endif
+
  #define comp_cache_needs_to_grow() (new_num_comp_pages > num_comp_pages)

! static inline void
! grow_fragment_hash_table(void)
  {
  	unsigned long new_fragment_hash_size = (3 * num_comp_pages) * sizeof(struct comp_cache_fragment *);

  	/* do we really need a bigger hash table? */
  	if ((PAGE_SIZE << fragment_hash_order) >= new_fragment_hash_size)
! 		return;
!
! 	resize_fragment_hash_table();
  }

! static inline void
! grow_zone_watermarks(void)
  {
! 	if (zone_num_comp_pages >= num_comp_pages)
! 		return;
!
! 	comp_cache_fix_watermarks(num_comp_pages);
  }

! int
! grow_comp_cache(int nr_pages)
  {
  	struct comp_cache_page * comp_page;
  	struct page * page;

  	while (comp_cache_needs_to_grow() && nr_pages--) {
  		page = alloc_page(GFP_ATOMIC);
***************
*** 551,555 ****
  		/* couldn't allocate the page */
  		if (!page)
! 			return;

  		init_comp_page(&comp_page, page);
--- 632,636 ----
  		/* couldn't allocate the page */
  		if (!page)
! 			return 0;

  		init_comp_page(&comp_page, page);
***************
*** 563,580 ****
  	}

! 	if (comp_cache_needs_to_grow()) {
! 		if (!fragment_failed_alloc && !vswap_failed_alloc)
! 			return;
! 	}
! 	else {
! 		if (zone_wrong_watermarks_grow())
! 			comp_cache_fix_watermarks(num_comp_pages);
  	}

! 	if (fragment_hash_needs_to_grow())
! 		resize_fragment_hash_table();
!
! 	if (vswap_needs_to_grow())
! 		grow_vswap(NUM_VSWAP_ENTRIES);
  }
--- 644,694 ----
  	}

! 	if (!comp_cache_needs_to_grow()) {
! 		grow_zone_watermarks();
! 		goto out;
  	}
+
+ 	if (!fragment_failed_alloc && !vswap_failed_alloc)
+ 		return 1;
+
+ out:
+ 	grow_fragment_hash_table();
+ 	grow_vswap();
!
! 	return 1;
! }
!
! #ifdef CONFIG_COMP_DEMAND_RESIZE
! /***
!  * grow_on_demand(void) - called by get_comp_cache_page() when it
!  * cannot find space in the compressed cache. If compressed cache has
!  * not yet reached the maximum size, we try to grow compressed cache
!  * by one new entry.
!  */
! int
! grow_on_demand(void)
! {
! 	if (num_comp_pages == max_num_comp_pages)
! 		return 0;
!
! 	/* to force the grow_comp_cache() to grow the cache */
! 	new_num_comp_pages = num_comp_pages + 1;
!
! 	if (grow_comp_cache(1)) {
! #if 0
! 		printk("wow, it has grown %d\n", num_comp_pages);
! #endif
! 		return 1;
! 	}
!
! 	new_num_comp_pages = num_comp_pages;
! 	return 0;
! }
! #endif
!
! void __init
! comp_cache_adaptivity_init(void)
! {
! 	init_MUTEX(&vswap_resize_semaphore);
  }

Index: free.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v
retrieving revision 1.32
retrieving revision 1.33
diff -C2 -r1.32 -r1.33
*** free.c	19 Jun 2002 12:18:44 -0000	1.32
--- free.c	25 Jun 2002 14:34:07 -0000	1.33
***************
*** 2,6 ****
   * linux/mm/comp_cache/free.c
   *
!  * Time-stamp: <2002-06-19 08:46:13 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache/free.c
   *
!  * Time-stamp: <2002-06-24 18:13:13 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 166,176 ****
  comp_cache_free(struct comp_cache_fragment * fragment)
  {
  	struct comp_cache_page * comp_page;
! 	int locked;

  	if (!fragment)
  		BUG();
!
  	comp_page = fragment->comp_page;
! 	locked = !TryLockPage(comp_page->page);

  	comp_cache_free_locked(fragment);
--- 166,180 ----
  comp_cache_free(struct comp_cache_fragment * fragment)
  {
  	struct comp_cache_page * comp_page;
! 	struct page * page;
! 	int locked = 0;

  	if (!fragment)
  		BUG();
!
  	comp_page = fragment->comp_page;
!
! 	if (comp_page->page) {
! 		locked = !TryLockPage(comp_page->page);
! 		page = comp_page->page;
! 	}

  	comp_cache_free_locked(fragment);
***************
*** 179,184 ****
  	 * page will be unlocked in shrink_comp_cache()
  	 * function */
! 	if (locked)
! 		shrink_comp_cache(comp_page);
  }
--- 183,193 ----
  	 * page will be unlocked in shrink_comp_cache()
  	 * function */
! 	if (locked) {
! #ifdef CONFIG_COMP_DEMAND_RESIZE
! 		shrink_on_demand(comp_page);
! #else
! 		shrink_comp_cache(comp_page, 1);
! #endif
! 	}
  }
***************
*** 232,235 ****
--- 241,246 ----
  		return 0;

+ 	fragment = vswap->fragment;
+
  	/* set old virtual addressed ptes to the real swap entry */
  	if (!set_pte_list_to_entry(vswap->pte_list, old_entry, entry))
***************
*** 244,248 ****
  	swap_duplicate(entry);

- 	fragment = vswap->fragment;
  	remove_fragment_vswap(fragment);
  	remove_fragment_from_hash_table(fragment);
--- 255,258 ----

Index: main.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v
retrieving revision 1.47
retrieving revision 1.48
diff -C2 -r1.47 -r1.48
*** main.c	20 Jun 2002 14:28:49 -0000	1.47
--- main.c	25 Jun 2002 14:34:07 -0000	1.48
***************
*** 2,6 ****
   * linux/mm/comp_cache/main.c
   *
!  * Time-stamp: <2002-06-20 11:01:24 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache/main.c
   *
!  * Time-stamp: <2002-06-24 18:14:59 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 178,182 ****
  #endif

! 	comp_cache_free_locked(fragment);

  	PageClearMappedCompCache(old_page);
--- 178,182 ----
  #endif

! 	comp_cache_free(fragment);

  	PageClearMappedCompCache(old_page);
***************
*** 264,267 ****
--- 264,268 ----
  extern void __init comp_cache_swp_buffer_init(void);
  extern void __init comp_cache_vswap_init(void);
+ extern void __init comp_cache_adaptivity_init(void);

  LIST_HEAD(lru_queue);
***************
*** 285,290 ****
  	int i;

! 	max_num_comp_pages = num_physpages * 0.5;
  	min_num_comp_pages = num_physpages * 0.05;

  	if (!init_num_comp_pages || init_num_comp_pages < min_num_comp_pages || init_num_comp_pages > max_num_comp_pages)
--- 286,297 ----
  	int i;

! #ifdef CONFIG_COMP_DEMAND_RESIZE
! 	min_num_comp_pages = 48;
! #else
  	min_num_comp_pages = num_physpages * 0.05;
+ #endif
+
+ 	if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > num_physpages * 0.5)
+ 		max_num_comp_pages = num_physpages * 0.5;

  	if (!init_num_comp_pages || init_num_comp_pages < min_num_comp_pages || init_num_comp_pages > max_num_comp_pages)
***************
*** 319,332 ****
  	/* initialize our algorithms statistics array */
  	comp_cache_algorithms_init();
  }

  static int __init
  comp_cache_size(char *str)
  {
  	char * endp;
- 	unsigned long long comp_cache_size;

  	/* size in bytes */
! 	comp_cache_size = memparse(str, &endp);
! 	init_num_comp_pages = comp_cache_size >> PAGE_SHIFT;
!
  	return 1;
  }
--- 326,343 ----
  	/* initialize our algorithms statistics array */
  	comp_cache_algorithms_init();
+
+ 	comp_cache_adaptivity_init();
  }

  static int __init
  comp_cache_size(char *str)
  {
  	char * endp;
!
! #ifdef CONFIG_COMP_DEMAND_RESIZE
! 	max_num_comp_pages = memparse(str, &endp) >> PAGE_SHIFT;
! #else
! 	init_num_comp_pages = memparse(str, &endp) >> PAGE_SHIFT;
! #endif

  	return 1;
  }

Index: swapin.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v
retrieving revision 1.41
retrieving revision 1.42
diff -C2 -r1.41 -r1.42
*** swapin.c	20 Jun 2002 14:28:50 -0000	1.41
--- swapin.c	25 Jun 2002 14:34:08 -0000	1.42
***************
*** 2,6 ****
   * linux/mm/comp_cache/swapin.c
   *
!  * Time-stamp: <2002-06-20 11:00:45 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache/swapin.c
   *
!  * Time-stamp: <2002-06-22 15:19:52 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 120,124 ****
  	__set_page_dirty(page);

! 	comp_cache_free_locked(fragment);
  #endif
--- 120,124 ----
  	__set_page_dirty(page);

! 	comp_cache_free(fragment);
  #endif
***************
*** 223,229 ****
  	if (TryLockPage(fragment->comp_page->page))
  		BUG();
-
  	decompress_fragment(fragment, page);
! 	comp_cache_free_locked(fragment);

  	PageClearCompCache(page);
--- 223,230 ----
  	if (TryLockPage(fragment->comp_page->page))
  		BUG();

  	decompress_fragment(fragment, page);
! 	UnlockPage(fragment->comp_page->page);
!
! 	comp_cache_free(fragment);

  	PageClearCompCache(page);
***************
*** 231,235 ****
  	page_cache_release(page);

- 	UnlockPage(fragment->comp_page->page);
  	UnlockPage(page);
  	return;
--- 232,235 ----

Index: swapout.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v
retrieving revision 1.52
retrieving revision 1.53
diff -C2 -r1.52 -r1.53
*** swapout.c	19 Jun 2002 18:10:20 -0000	1.52
--- swapout.c	25 Jun 2002 14:34:08 -0000	1.53
***************
*** 2,6 ****
   * /mm/comp_cache/swapout.c
   *
!  * Time-stamp: <2002-06-19 11:34:35 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * /mm/comp_cache/swapout.c
   *
!  * Time-stamp: <2002-06-22 14:55:33 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 446,451 ****
--- 446,465 ----
  		}

+ 		/***
+ 		 * We couldn't find a comp page with enough free
+ 		 * space, so let's first check if we are supposed and
+ 		 * are able to grow the compressed cache on demand
+ 		 */
+ 		if (grow_on_demand())
+ 			continue;
+
  		UnlockPage(page);

+ 		/***
+ 		 * We didn't grow the compressed cache, thus it's time
+ 		 * to check if we able to free any fragment which was
+ 		 * waiting for IO completion. If we can't free any
+ 		 * fragment, it's time to write out some fragments.
+ 		 */
  		if (!refill_swp_buffer(gfp_mask, 1, priority))
  			writeout_fragments(gfp_mask, priority--);

Index: vswap.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v
retrieving revision 1.37
retrieving revision 1.38
diff -C2 -r1.37 -r1.38
*** vswap.c	20 Jun 2002 12:33:58 -0000	1.37
--- vswap.c	25 Jun 2002 14:34:08 -0000	1.38
***************
*** 2,6 ****
   * linux/mm/comp_cache/vswap.c
   *
!  * Time-stamp: <2002-06-20 09:04:04 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache/vswap.c
   *
!  * Time-stamp: <2002-06-24 18:24:11 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 59,62 ****
--- 59,109 ----
  unsigned short last_page = 0;

+ unsigned long nr_free_vswap = 0, nr_used_vswap = 0;
+
+ /***
+  * Lock this vswap entry since it has a new page being allocated. That
+  * avoids this entry to be moved either when vswap is shrunk or to
+  * gain a new real swap entry. This sort of vswap entry does not have
+  * a swap cache page, so this is the field used to set this flag.
+  */
+ inline void
+ set_vswap_allocating(swp_entry_t entry)
+ {
+ 	unsigned long offset = SWP_OFFSET(entry);
+ 	struct vswap_address * vswap;
+
+ 	if (!vswap_address(entry))
+ 		return;
+
+ 	if (offset >= vswap_current_num_entries)
+ 		BUG();
+
+ 	vswap = vswap_address[offset];
+
+ 	if (vswap->swap_cache_page)
+ 		BUG();
+
+ 	vswap->swap_cache_page = VSWAP_ALLOCATING;
+ }
+
+ /***
+  * Clear the allocating flag of this vswap entry.
+  */
+ inline void
+ clear_vswap_allocating(swp_entry_t entry)
+ {
+ 	unsigned long offset = SWP_OFFSET(entry);
+ 	struct vswap_address * vswap;
+
+ 	if (!vswap_address(entry))
+ 		return;
+
+ 	if (offset >= vswap_current_num_entries)
+ 		BUG();
+
+ 	vswap = vswap_address[offset];
+
+ 	if (vswap->swap_cache_page != VSWAP_ALLOCATING)
+ 		BUG();
+
+ 	vswap->swap_cache_page = NULL;
+ }
+
  static int
  comp_cache_vswap_alloc(void)
***************
*** 120,124 ****
  	if (list_empty(&vswap_address_free_head)) {
  		/* have all vswap addresses already been allocated? */
! 		if (last_vswap_allocated == NUM_VSWAP_ENTRIES - 1)
  			return 0;
--- 167,171 ----
  	if (list_empty(&vswap_address_free_head)) {
  		/* have all vswap addresses already been allocated? */
! 		if (last_vswap_allocated == vswap_current_num_entries - 1)
  			return 0;
***************
*** 130,134 ****
  		return 0;

! 	for (i = last_vswap_allocated + 1; i < NUM_VSWAP_ENTRIES && vswap_address[i]; i++);

  	last_vswap_allocated = i - 1;
--- 177,181 ----
  		return 0;

! 	for (i = last_vswap_allocated + 1; i < vswap_current_num_entries && vswap_address[i]; i++);

  	last_vswap_allocated = i - 1;
***************
*** 199,202 ****
--- 246,250 ----
  	vswap = list_entry(vswap_address_free_head.next, struct vswap_address, list);
  	list_del_init(vswap_address_free_head.next);
+ 	nr_free_vswap--;

  	type = COMP_CACHE_SWP_TYPE;
***************
*** 275,279 ****
  	struct comp_cache_fragment * fragment;
  	struct vswap_address * vswap;
- 	struct page * page;

  	if (!vswap_address(entry))
--- 323,326 ----
***************
*** 295,299 ****
  	if (--count) {
  		vswap->count = count;
! 		goto out;
  	}
--- 342,346 ----
  	if (--count) {
  		vswap->count = count;
! 		return count;
  	}
***************
*** 310,314 ****
  	vswap->swap_cache_page = NULL;

! 	/* if this entry is reserved, it's not in any list (either
  	 * because it has never had a fragment or the fragment has
  	 * already been remove in remove_fragment_vswap()), so we can
--- 357,361 ----
  	vswap->swap_cache_page = NULL;

! 	/* if this entry is reserved, it's not on any list (either
  	 * because it has never had a fragment or the fragment has
  	 * already been remove in remove_fragment_vswap()), so we can
***************
*** 316,320 ****
  	if (fragment == VSWAP_RESERVED) {
  		vswap_num_reserved_entries--;
! 		goto add_to_free_list;
  	}
--- 363,371 ----
  	if (fragment == VSWAP_RESERVED) {
  		vswap_num_reserved_entries--;
! 		vswap->fragment = NULL;
! 		list_add(&(vswap->list), &vswap_address_free_head);
! 		nr_free_vswap++;
!
! 		return 0;
  	}
***************
*** 322,338 ****
  		BUG();

! 	page = fragment->comp_page->page;
!
! 	if (TryLockPage(page))
! 		BUG();
!
! 	comp_cache_free_locked(fragment);
! 	UnlockPage(page);
!
! 	vswap_num_reserved_entries--;
!
! add_to_free_list:
! 	vswap->fragment = NULL;
  	list_add(&(vswap->list), &vswap_address_free_head);
!
! out:
! 	return count;
  }
--- 373,391 ----
  		BUG();

! 	/* remove from used list */
! 	list_del_init(&(vswap_address[offset]->list));
! 	nr_used_vswap--;
!
! 	vswap->fragment = VSWAP_FREEING;
! 	comp_cache_freeable_space += fragment->compressed_size;
!
! 	comp_cache_free(fragment);
!
! 	/* add to to the free list */
  	list_add(&(vswap->list), &vswap_address_free_head);
! 	nr_free_vswap++;
!
! 	vswap->fragment = NULL;
!
! 	return 0;
  }
***************
*** 386,393 ****
  	offset = SWP_OFFSET(entry);

  	if (reserved(offset) || !vswap_address[offset]->fragment)
  		BUG();
!
  	vswap_address[offset]->fragment = VSWAP_RESERVED;
--- 439,451 ----
  	offset = SWP_OFFSET(entry);
+
+ 	/* if we are freeing this vswap, don't have to worry since it
+ 	 * will be handled by comp_cache_swp_free() function */
+ 	if (freeing(offset))
+ 		return;

  	if (reserved(offset) || !vswap_address[offset]->fragment)
  		BUG();
!
  	vswap_address[offset]->fragment = VSWAP_RESERVED;
***************
*** 396,399 ****
--- 454,458 ----
  	 * address */
  	list_del_init(&(vswap_address[offset]->list));
+ 	nr_used_vswap--;

  	comp_cache_freeable_space += fragment->compressed_size;
***************
*** 439,442 ****
--- 498,502 ----
  	list_add(&(vswap_address[offset]->list), &vswap_address_used_head);
+ 	nr_used_vswap++;

  	comp_cache_freeable_space -= fragment->compressed_size;
***************
*** 642,646 ****
  	offset = SWP_OFFSET(entry);

! 	if (vswap_address[offset]->swap_cache_page)
  		BUG();
--- 702,707 ----
  	offset = SWP_OFFSET(entry);

! 	if (vswap_address[offset]->swap_cache_page &&
! 	    vswap_address[offset]->swap_cache_page != VSWAP_ALLOCATING)
  		BUG();
***************
*** 706,709 ****
--- 767,771 ----
  	list_add(&(vswap_address[offset]->list), &vswap_address_free_head);
+ 	nr_free_vswap++;

  	return 1;
  }
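Pulled out of the hunks above, the demand-resize policy is a pair of
one-page steps hooked into the existing grow/shrink machinery:
get_comp_cache_page() tries grow_on_demand() when it finds no free
space, and comp_cache_free() tries shrink_on_demand() whenever a comp
page empties. A condensed sketch of the two helpers (locking and the
UnlockPage() bookkeeping omitted; both appear in full in the
adaptivity.c diff above):

    int grow_on_demand(void)
    {
            if (num_comp_pages == max_num_comp_pages)
                    return 0;                        /* at the ceiling: caller swaps out */
            new_num_comp_pages = num_comp_pages + 1; /* request exactly one more page */
            if (grow_comp_cache(1))
                    return 1;
            new_num_comp_pages = num_comp_pages;     /* GFP_ATOMIC allocation failed */
            return 0;
    }

    int shrink_on_demand(struct comp_cache_page * comp_page)
    {
            if (num_comp_pages == min_num_comp_pages)
                    return 0;                        /* at the floor: keep the page */
            new_num_comp_pages = num_comp_pages - 1; /* release exactly this page */
            if (shrink_comp_cache(comp_page, 0))     /* check_further == 0: no wider scan */
                    return 1;
            new_num_comp_pages = num_comp_pages;
            return 0;
    }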
From: Rodrigo S. de C. <rc...@us...> - 2002-06-25 14:34:11
Update of /cvsroot/linuxcompressed/linux/arch/i386
In directory usw-pr-cvs1:/tmp/cvs-serv13268/arch/i386

Modified Files:
	config.in

Log Message:
(identical to the Documentation/Configure.help checkin above)

Index: config.in
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/arch/i386/config.in,v
retrieving revision 1.18
retrieving revision 1.19
diff -C2 -r1.18 -r1.19
*** config.in	20 Jun 2002 14:28:49 -0000	1.18
--- config.in	25 Jun 2002 14:34:07 -0000	1.19
***************
*** 211,214 ****
--- 211,215 ----
     if [ "$CONFIG_COMP_CACHE" = "y" ]; then
        bool '  Support for Page Cache compression' CONFIG_COMP_PAGE_CACHE
+       bool '  Resize Compressed Cache On Demand' CONFIG_COMP_DEMAND_RESIZE
     fi
  fi
From: Rodrigo S. de C. <rc...@us...> - 2002-06-20 14:28:53
Update of /cvsroot/linuxcompressed/linux/Documentation
In directory usw-pr-cvs1:/tmp/cvs-serv24634/Documentation

Modified Files:
	Configure.help

Log Message:

Cleanup
o Removed adapt_comp_cache() and all CONFIG_COMP_ADAPTIVITY related
  stuff. That will be replaced by growing/shrinking on demand for the
  moment.

Index: Configure.help
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/Documentation/Configure.help,v
retrieving revision 1.6
retrieving revision 1.7
diff -C2 -r1.6 -r1.7
*** Configure.help	28 Apr 2002 20:51:32 -0000	1.6
--- Configure.help	20 Jun 2002 14:28:48 -0000	1.7
***************
*** 401,415 ****
  If unsure, say N here.

- Automatic adaptivity for compressed cache size
- CONFIG_COMP_ADAPTIVITY
- Select this option in case you want compressed cache to adapt its
- size to the system behaviour. That way, current code will
- automatically compute the cost and benefit of several compressed
- cache sizes, choosing the best size for whole system performance.
-
- This option is still not functional.
-
- If unsure, say N here.
-
  Normal floppy disk support
  CONFIG_BLK_DEV_FD
--- 401,404 ----
From: Rodrigo S. de C. <rc...@us...> - 2002-06-20 14:28:53
Update of /cvsroot/linuxcompressed/linux/include/linux
In directory usw-pr-cvs1:/tmp/cvs-serv24634/include/linux

Modified Files:
	comp_cache.h

Log Message:
(identical to the Documentation cleanup checkin above)

Index: comp_cache.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v
retrieving revision 1.81
retrieving revision 1.82
diff -C2 -r1.81 -r1.82
*** comp_cache.h	20 Jun 2002 12:33:57 -0000	1.81
--- comp_cache.h	20 Jun 2002 14:28:49 -0000	1.82
***************
*** 2,6 ****
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-06-20 08:44:35 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-06-20 11:15:27 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 98,111 ****
  /* adaptivity.c */
- struct preset_comp_cache {
- 	unsigned int size;
- 	int profit;
- };
-
- extern struct preset_comp_cache * preset_comp_cache;
- extern int nr_preset_sizes, current_preset_size;
- extern int latest_uncomp_misses[], latest_miss;
-
  #ifdef CONFIG_COMP_CACHE
  int shrink_comp_cache(struct comp_cache_page *);
--- 98,101 ----
***************
*** 119,123 ****
  /* swapout.c */
  extern struct list_head swp_free_buffer_head;
- extern atomic_t number_of_free_swp_buffers;

  /* -- Fragment Flags */
--- 109,112 ----
***************
*** 345,379 ****
  inline int compress_clean_page(struct page *, unsigned int);

- extern int nr_swap_misses;
- extern int nr_compressed_cache_misses;
  extern unsigned long comp_cache_free_space;

- #define comp_cache_used_space ((num_comp_pages * PAGE_SIZE) - comp_cache_free_space)
-
- #define add_swap_miss() (nr_swap_misses++)
- #define add_compressed_cache_miss() (nr_compressed_cache_misses++)
-
  #else

  static inline void comp_cache_init(void) {};
  static inline int compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask) { return writepage(page); }
  static inline int compress_clean_page(struct page * page, unsigned int gfp_mask) { return 1; }
-
- #define add_swap_miss() (0)
- #define add_compressed_cache_miss() (0)
-
  #endif

  #ifdef CONFIG_COMP_PAGE_CACHE
-
  int comp_cache_try_to_release_page(struct page **, int);
  void steal_page_from_comp_cache(struct page *, struct page *);
-
  #else
-
  static inline int comp_cache_try_to_release_page(struct page ** page, int gfp_mask) { return try_to_release_page(*page, gfp_mask); }
  static inline void steal_page_from_comp_cache(struct page * page, struct page * new_page) {};
-
  #endif
-
  /* vswap.c */
--- 334,352 ----
From: Rodrigo S. de C. <rc...@us...> - 2002-06-20 14:28:53
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv24634/mm/comp_cache Modified Files: adaptivity.c main.c swapin.c Log Message: Cleanup o Removed adapt_comp_cache() and all CONFIG_COMP_ADAPTIVITY related stuff. That will be replaced by growing/shrinking by demand at the moment. Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.27 retrieving revision 1.28 diff -C2 -r1.27 -r1.28 *** adaptivity.c 20 Jun 2002 12:33:58 -0000 1.27 --- adaptivity.c 20 Jun 2002 14:28:49 -0000 1.28 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-06-20 08:57:04 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-06-20 10:59:52 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 18,168 **** static int fragment_failed_alloc = 0, vswap_failed_alloc = 0; - struct preset_comp_cache * preset_comp_cache; - int nr_preset_sizes, current_preset_size; - - static double time_comp = 0.3, time_decomp = 0.2, time_disk_read = 5; - int latest_uncomp_misses[10], latest_miss; - - #define comp_cache_total_space (preset_comp_cache[i].size * PAGE_SIZE) - extern void comp_cache_fix_watermarks(int); - - /*** - * adapt_comp_cache(void) - adapt compressed cache to the recent - * behaviour, resizing it if we would have better performance with - * another size. - * - * TODO - * - make compressed_ratio variable show the actual ratio - * - collect faults by lru region - * - account the number of swap cache pages in active and inactive lists? - */ - void - adapt_comp_cache(void) { - static int nr = 0; - int i, best_size, nr_uncomp_misses, uncomp_size, delta_disk_reads, compress_ratio = 2; - - if (++nr % 100) - return; - - /* decay miss information */ - i = (latest_miss + 1) % 10; - while (i != latest_miss) { - latest_uncomp_misses[i] = 0.8 * latest_uncomp_misses[i]; - i = (i + 1) % 10; - } - latest_uncomp_misses[latest_miss] = nr_compressed_cache_misses + nr_swap_misses; - - for (nr_uncomp_misses = 0, i = 0; i < 10; i++) - nr_uncomp_misses += latest_uncomp_misses[i]; - - latest_miss = (latest_miss + 1) % 10; - - if (!nr_uncomp_misses) - return; - - printk("nr_uncomp_misses %d\n", nr_uncomp_misses); - printk("free space %ld\n", (comp_cache_free_space * 100)/(num_comp_pages * PAGE_SIZE)); - - /* compute costs and benefits - smaller sizes*/ - best_size = current_preset_size; - for (i = current_preset_size; i >= 0; i--) { - double cost, benefit; - int comp_size, delta_real_size; - - comp_size = preset_comp_cache[i].size; - uncomp_size = num_physpages - comp_size; - - delta_real_size = (comp_cache_total_space/compress_ratio); - printk("size %d real size %d used space %ld\n", preset_comp_cache[i].size, delta_real_size, comp_cache_used_space); - - if (comp_cache_used_space < delta_real_size) - delta_disk_reads = 0; - else { - if (comp_cache_used_space > preset_comp_cache[i].size * PAGE_SIZE) { - delta_disk_reads = ((float) comp_size)/preset_comp_cache[current_preset_size].size * nr_compressed_cache_misses; - //printk("disk reads 1 %d\n", delta_disk_reads); - } - else { - delta_disk_reads = ((comp_cache_used_space - delta_real_size) * nr_compressed_cache_misses)/comp_cache_used_space; - //printk("disk reads 2 %d\n", delta_disk_reads); - } - } - - cost = (nr_uncomp_misses * comp_size)/preset_comp_cache[current_preset_size].size; - printk("cost %d\n", (int) 
cost); - cost *= (time_comp + time_decomp); - benefit = delta_disk_reads * (time_disk_read); - printk("cost %d benefit %d\n", (int) cost, (int) benefit); - - preset_comp_cache[i].profit = cost - benefit; - - if (preset_comp_cache[i].profit < preset_comp_cache[best_size].profit) - best_size = i; - - printk("profit %d -> %d (smaller)\n", i, preset_comp_cache[i].profit); - } - - if (comp_cache_free_space > 0.30 * num_comp_pages * PAGE_SIZE) - goto out; - - /* compute costs and benefits - larger sizes*/ - for (i = current_preset_size + 1; i < nr_preset_sizes; i++) { - double cost, benefit; - int comp_size, diff_new_real_old_uncomp, incr_comp_size, scale = 0; - - comp_size = preset_comp_cache[i].size; - uncomp_size = num_physpages - comp_size; - - /* new real memory size in function of the new compressed cache size */ - diff_new_real_old_uncomp = uncomp_size + comp_size/compress_ratio; - /* minus the current uncompressed cache */ - diff_new_real_old_uncomp -= (num_physpages - preset_comp_cache[current_preset_size].size); - - /* unlikely */ - if (diff_new_real_old_uncomp > 0) { - printk("1st case\n"); - scale = 1; - } - - /* we can fill up the new comp cache space */ - incr_comp_size = preset_comp_cache[i].size - preset_comp_cache[current_preset_size].size; - if (swapper_space.nrpages/compress_ratio > incr_comp_size) { - printk("fill up\n"); - scale = 1; - } - - printk("nr_compressed_cache_misses %d\n", nr_compressed_cache_misses); - - if (scale) - delta_disk_reads = (1 - ((float) diff_new_real_old_uncomp/preset_comp_cache[current_preset_size].size)) * nr_compressed_cache_misses; - else { - delta_disk_reads = nr_compressed_cache_misses; - delta_disk_reads += ((((float) swapper_space.nrpages)/compress_ratio - (incr_comp_size + diff_new_real_old_uncomp)) * nr_compressed_cache_misses)/preset_comp_cache[current_preset_size].size; - printk("delta_disk_reads %d\n", delta_disk_reads); - } - - cost = nr_uncomp_misses * ((float) preset_comp_cache[i].size/preset_comp_cache[current_preset_size].size); - cost *= (time_comp + time_decomp); - benefit = delta_disk_reads * (time_disk_read); - printk("cost %d benefit %d\n", (int) cost, (int) benefit); - - preset_comp_cache[i].profit = cost - benefit; - - printk("profit %d -> %d (bigger)\n", i, preset_comp_cache[i].profit); - - if (preset_comp_cache[i].profit < preset_comp_cache[best_size].profit) - best_size = i; - } - - - out: - new_num_comp_pages = preset_comp_cache[best_size].size; - current_preset_size = best_size; - printk("best size %d\n", best_size); - - /* reset stats */ - nr_compressed_cache_misses = nr_swap_misses = 0; - } void --- 18,22 ---- Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.46 retrieving revision 1.47 diff -C2 -r1.46 -r1.47 *** main.c 19 Jun 2002 12:18:44 -0000 1.46 --- main.c 20 Jun 2002 14:28:49 -0000 1.47 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-06-19 08:46:38 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! 
* Time-stamp: <2002-06-20 11:01:24 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 39,45 **** kmem_cache_t * fragment_cachep; - int nr_swap_misses; - int nr_compressed_cache_misses; - extern unsigned long num_physpages; --- 39,42 ---- *************** *** 114,121 **** } - #ifdef CONFIG_COMP_ADAPTIVITY - adapt_comp_cache(); - #endif - comp_size = compress(current_compressed_page = page, buffer_compressed = (unsigned long *) &buffer_compressed1, &algorithm, dirty); comp_page = get_comp_cache_page(page, comp_size, &fragment, dirty, 1, gfp_mask); --- 111,114 ---- *************** *** 295,330 **** min_num_comp_pages = num_physpages * 0.05; - #ifndef CONFIG_COMP_ADAPTIVITY if (!init_num_comp_pages || init_num_comp_pages < min_num_comp_pages || init_num_comp_pages > max_num_comp_pages) - #endif init_num_comp_pages = min_num_comp_pages; new_num_comp_pages = num_comp_pages = init_num_comp_pages; printk("Compressed Cache: %s\n", COMP_CACHE_VERSION); - - /* adaptivity */ - nr_swap_misses = 0; - nr_compressed_cache_misses = 0; - - nr_preset_sizes = 4; - preset_comp_cache = (struct preset_comp_cache *) kmalloc(nr_preset_sizes * sizeof(*preset_comp_cache), GFP_ATOMIC); - - #ifdef CONFIG_COMP_ADAPTIVITY - printk("Compressed Cache: adaptivity\n"); - preset_comp_cache[0].size = num_physpages * 0.05; - preset_comp_cache[1].size = num_physpages * 0.23; - preset_comp_cache[2].size = num_physpages * 0.37; - preset_comp_cache[3].size = num_physpages * 0.50; - - for (i = 0; i < nr_preset_sizes; i++) - printk("Compressed Cache: preset size %d: %u memory pages\n", i, preset_comp_cache[i].size); - - for (i = 0; i < 10; i++) - latest_uncomp_misses[i] = 0; - latest_miss = 0; - #else printk("Compressed Cache: initial size\n" "Compressed Cache: %lu pages = %luKiB\n", init_num_comp_pages, (init_num_comp_pages * PAGE_SIZE)/1024); - #endif /* fiz zone watermarks */ --- 288,298 ---- *************** *** 353,357 **** } - #ifndef CONFIG_COMP_ADAPTIVITY static int __init comp_cache_size(char *str) { --- 321,324 ---- *************** *** 366,370 **** __setup("compsize=", comp_cache_size); - #endif /* --- 333,336 ---- Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.40 retrieving revision 1.41 diff -C2 -r1.40 -r1.41 *** swapin.c 19 Jun 2002 12:18:44 -0000 1.40 --- swapin.c 20 Jun 2002 14:28:50 -0000 1.41 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-06-19 08:47:06 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-06-20 11:00:45 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 99,106 **** goto out; - #ifdef CONFIG_COMP_ADAPTIVITY - adapt_comp_cache(); - #endif - if (!PageLocked(page)) BUG(); --- 99,102 ---- |
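The removed adapt_comp_cache() picked the preset size with the smallest cost minus benefit, where the cost models the extra compression and decompression work a candidate size would add on cache misses and the benefit models the disk reads it would avoid. Below is a minimal userspace C distillation of that model; the constants come from the removed code, but the function shape and names are illustrative, not the kernel's.

#include <stdio.h>

/* Distillation of the cost/benefit model the removed adapt_comp_cache()
 * implemented.  Hypothetical userspace sketch, not kernel code. */
static double profit(unsigned long candidate_size,
                     unsigned long current_size,
                     unsigned long uncomp_misses,
                     unsigned long delta_disk_reads)
{
        /* constants taken from the removed code (arbitrary time units) */
        const double time_comp = 0.3, time_decomp = 0.2, time_disk_read = 5.0;
        double cost, benefit;

        /* misses are scaled by the candidate size relative to the current one */
        cost = (double) uncomp_misses * candidate_size / current_size;
        cost *= time_comp + time_decomp;

        benefit = (double) delta_disk_reads * time_disk_read;

        /* the removed code selected the size with the smallest (cost - benefit) */
        return cost - benefit;
}

int main(void)
{
        printf("profit for doubling the cache: %f\n", profit(2048, 1024, 100, 50));
        return 0;
}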
From: Rodrigo S. de C. <rc...@us...> - 2002-06-20 14:28:52
Update of /cvsroot/linuxcompressed/linux/arch/i386 In directory usw-pr-cvs1:/tmp/cvs-serv24634/arch/i386 Modified Files: config.in Log Message: Cleanup o Removed adapt_comp_cache() and all CONFIG_COMP_ADAPTIVITY-related code. For now, it is replaced by growing and shrinking the cache on demand. Index: config.in =================================================================== RCS file: /cvsroot/linuxcompressed/linux/arch/i386/config.in,v retrieving revision 1.17 retrieving revision 1.18 diff -C2 -r1.17 -r1.18 *** config.in 28 Apr 2002 20:51:32 -0000 1.17 --- config.in 20 Jun 2002 14:28:49 -0000 1.18 *************** *** 209,216 **** if [ "$CONFIG_SMP" != "y" ]; then dep_bool 'Compressed cache (EXPERIMENTAL)' CONFIG_COMP_CACHE $CONFIG_EXPERIMENTAL - define_bool CONFIG_COMP_ADAPTIVITY n if [ "$CONFIG_COMP_CACHE" = "y" ]; then bool ' Support for Page Cache compression' CONFIG_COMP_PAGE_CACHE - bool ' Automatic adaptivity for compressed cache size' CONFIG_COMP_ADAPTIVITY fi fi --- 209,214 ----
From: Rodrigo S. de C. <rc...@us...> - 2002-06-20 14:28:52
Update of /cvsroot/linuxcompressed/linux/mm In directory usw-pr-cvs1:/tmp/cvs-serv24634/mm Modified Files: swap_state.c Log Message: Cleanup o Removed adapt_comp_cache() and all CONFIG_COMP_ADAPTIVITY-related code. For now, it is replaced by growing and shrinking the cache on demand. Index: swap_state.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swap_state.c,v retrieving revision 1.29 retrieving revision 1.30 diff -C2 -r1.29 -r1.30 *** swap_state.c 13 Jun 2002 20:18:31 -0000 1.29 --- swap_state.c 20 Jun 2002 14:28:49 -0000 1.30 *************** *** 230,237 **** err = add_to_swap_cache(new_page, entry); if (!err) { ! if (!read_comp_cache(&swapper_space, entry.val, new_page, 1)) { ! add_compressed_cache_miss(); return new_page; - } /* --- 230,235 ---- err = add_to_swap_cache(new_page, entry); if (!err) { ! if (!read_comp_cache(&swapper_space, entry.val, new_page, 1)) return new_page; /* *************** *** 251,255 **** rw_swap_page(READ, new_page); - add_swap_miss(); return new_page; } --- 249,252 ----
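The swap_state.c hunk above is the swap-in fast path that the removed statistics hooks instrumented: the compressed cache is probed first, and the disk read is issued only on a miss. A rough userspace sketch of that control flow, with hypothetical stubbed-out helpers standing in for the kernel functions:

#include <stdio.h>

/* Stub: returns 0 on a compressed-cache hit, nonzero on a miss,
 * mirroring the kernel convention in the patch above. */
static int read_comp_cache(unsigned long entry, char *page)
{
        (void) entry; (void) page;
        return -1;              /* pretend it is a miss */
}

/* Stub standing in for rw_swap_page(READ, ...). */
static void rw_swap_page_read(unsigned long entry, char *page)
{
        (void) entry; (void) page;
        printf("reading entry %lu from the swap device\n", entry);
}

static void swap_in(unsigned long entry, char *page)
{
        if (!read_comp_cache(entry, page))
                return;         /* hit: page rebuilt from the compressed cache */
        rw_swap_page_read(entry, page);
}

int main(void)
{
        char page[4096];
        swap_in(42, page);
        return 0;
}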
From: Rodrigo S. de C. <rc...@us...> - 2002-06-20 12:34:04
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv25515/include/linux Modified Files: comp_cache.h Log Message: Bug fix o Another fix to the vswap failed-allocation bug fix: it would BUG() if there had been no failed allocation. Also changed last_vswap_allocated from unsigned int to int to handle the case where no vswap entry can be allocated at all. Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.80 retrieving revision 1.81 diff -C2 -r1.80 -r1.81 *** comp_cache.h 19 Jun 2002 19:32:23 -0000 1.80 --- comp_cache.h 20 Jun 2002 12:33:57 -0000 1.81 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-06-19 15:49:17 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-06-20 08:44:35 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 402,406 **** extern unsigned long vswap_num_swap_cache; extern unsigned int vswap_last_used; ! extern unsigned int last_vswap_allocated; extern unsigned short * last_page_size; --- 402,406 ---- extern unsigned long vswap_num_swap_cache; extern unsigned int vswap_last_used; ! extern int last_vswap_allocated; extern unsigned short * last_page_size;
From: Rodrigo S. de C. <rc...@us...> - 2002-06-20 12:34:02
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv25515/mm/comp_cache Modified Files: adaptivity.c vswap.c Log Message: Bug fix o Another fix to the vswap failed-allocation bug fix: it would BUG() if there had been no failed allocation. Also changed last_vswap_allocated from unsigned int to int to handle the case where no vswap entry can be allocated at all. Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.26 retrieving revision 1.27 diff -C2 -r1.26 -r1.27 *** adaptivity.c 19 Jun 2002 19:32:25 -0000 1.26 --- adaptivity.c 20 Jun 2002 12:33:58 -0000 1.27 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-06-19 16:20:02 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-06-20 08:57:04 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 424,427 **** --- 424,430 ---- } + if (!failed_alloc) + last_vswap_allocated = vswap_new_num_entries - 1; + vfree(vswap_address); vswap_address = new_vswap_address; *************** *** 488,492 **** last_vswap_allocated = i - 1; } ! } vfree(vswap_address); vswap_address = new_vswap_address; --- 491,499 ---- last_vswap_allocated = i - 1; } ! } ! ! if (!failed_alloc) ! last_vswap_allocated = vswap_new_num_entries - 1; ! vfree(vswap_address); vswap_address = new_vswap_address; *************** *** 514,517 **** --- 521,526 ---- } + if (!failed_alloc) + last_vswap_allocated = vswap_new_num_entries - 1; vswap_last_used = vswap_current_num_entries - 1; } Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.36 retrieving revision 1.37 diff -C2 -r1.36 -r1.37 *** vswap.c 19 Jun 2002 19:32:25 -0000 1.36 --- vswap.c 20 Jun 2002 12:33:58 -0000 1.37 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-06-19 16:11:47 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-06-20 09:04:04 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 54,58 **** * following position is empty). This index will be used to try to * allocate in case the vswap addresses are over) */ ! unsigned int last_vswap_allocated; unsigned short * last_page_size; --- 54,58 ---- * following position is empty). This index will be used to try to * allocate in case the vswap addresses are over) */ ! int last_vswap_allocated; unsigned short * last_page_size; *************** *** 77,80 **** --- 77,81 ---- vswap_num_swap_cache = 0; + last_vswap_allocated = NUM_VSWAP_ENTRIES - 1; for (i = 0; i < NUM_VSWAP_ENTRIES; i++) { if (!vswap_alloc_and_init(vswap_address, i) && !failed_alloc) {
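The signedness change matters because last_vswap_allocated is set to index - 1 when an allocation fails: if the very first allocation (index 0) fails, an unsigned variable wraps around to a huge value instead of holding the intended -1 sentinel. A tiny standalone demo of the hazard (userspace, not kernel code):

#include <stdio.h>

int main(void)
{
        unsigned int bad = 0;
        int good = 0;

        bad = bad - 1;          /* wraps around to UINT_MAX */
        good = good - 1;        /* stays -1, a usable "nothing allocated" mark */

        printf("unsigned: %u\nsigned:   %d\n", bad, good);
        return 0;
}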
From: Rodrigo S. de C. <rc...@us...> - 2002-06-19 19:32:31
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv23188/mm/comp_cache Modified Files: adaptivity.c vswap.c Log Message: Bug fix o Improved the bug fix for failed vswap allocation for vswap resizing code ({grow,shrink}_vswap). Also fixed a potential bug introduced in the previous bug fix. Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.25 retrieving revision 1.26 diff -C2 -r1.25 -r1.26 *** adaptivity.c 19 Jun 2002 12:18:44 -0000 1.25 --- adaptivity.c 19 Jun 2002 19:32:25 -0000 1.26 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-06-19 08:45:29 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-06-19 16:20:02 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 258,262 **** struct comp_cache_fragment * fragment; struct vswap_address ** new_vswap_address; ! unsigned int total_scan = 0, failed_scan = 0; unsigned long index, new_index; swp_entry_t old_entry, entry; --- 258,262 ---- struct comp_cache_fragment * fragment; struct vswap_address ** new_vswap_address; ! unsigned int total_scan = 0, failed_scan = 0, failed_alloc = 0; unsigned long index, new_index; swp_entry_t old_entry, entry; *************** *** 397,401 **** * be reallocated */ if (!vswap_address[index]) { ! vswap_alloc_and_init(new_vswap_address, index); continue; } --- 397,404 ---- * be reallocated */ if (!vswap_address[index]) { ! if (!vswap_alloc_and_init(new_vswap_address, index) && !failed_alloc) { ! failed_alloc = 1; ! last_vswap_allocated = index - 1; ! } continue; } *************** *** 414,419 **** } ! for (index = vswap_last_used + 1; index < vswap_new_num_entries; index++) ! vswap_alloc_and_init(new_vswap_address, index); vfree(vswap_address); --- 417,426 ---- } ! for (index = vswap_last_used + 1; index < vswap_new_num_entries; index++) { ! if (!vswap_alloc_and_init(new_vswap_address, index) && !failed_alloc) { ! failed_alloc = 1; ! last_vswap_allocated = index - 1; ! } ! } vfree(vswap_address); *************** *** 439,443 **** grow_vswap(unsigned long vswap_new_num_entries) { struct vswap_address ** new_vswap_address; ! unsigned int i; if (vswap_last_used >= vswap_new_num_entries - 1) --- 446,450 ---- grow_vswap(unsigned long vswap_new_num_entries) { struct vswap_address ** new_vswap_address; ! unsigned int i, failed_alloc = 0; if (vswap_last_used >= vswap_new_num_entries - 1) *************** *** 463,467 **** * vswap_last_used that have to be reallocated */ if (!vswap_address[i]) { ! vswap_alloc_and_init(new_vswap_address, i); continue; } --- 470,477 ---- * vswap_last_used that have to be reallocated */ if (!vswap_address[i]) { ! if (!vswap_alloc_and_init(new_vswap_address, i) && !failed_alloc) { ! failed_alloc = 1; ! last_vswap_allocated = i - 1; ! } continue; } *************** *** 473,479 **** * than vswap_new_num_entries - 1, so we have to reallocate the * missing entries */ ! for (i = vswap_last_used + 1; i < vswap_new_num_entries; i++) ! vswap_alloc_and_init(new_vswap_address, i); ! vfree(vswap_address); vswap_address = new_vswap_address; --- 483,492 ---- * than vswap_new_num_entries - 1, so we have to reallocate the * missing entries */ ! for (i = vswap_last_used + 1; i < vswap_new_num_entries; i++) { ! if (!vswap_alloc_and_init(new_vswap_address, i) && !failed_alloc) { ! failed_alloc = 1; ! 
last_vswap_allocated = i - 1; ! } ! } vfree(vswap_address); vswap_address = new_vswap_address; *************** *** 493,498 **** * vswap, only reallocate the empty entries */ for (i = 0; i < vswap_current_num_entries; i++) { ! if (!vswap_address[i]) ! vswap_alloc_and_init(vswap_address, i); } --- 506,515 ---- * vswap, only reallocate the empty entries */ for (i = 0; i < vswap_current_num_entries; i++) { ! if (!vswap_address[i]) { ! if (!vswap_alloc_and_init(vswap_address, i) && !failed_alloc) { ! failed_alloc = 1; ! last_vswap_allocated = i - 1; ! } ! } } Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.35 retrieving revision 1.36 diff -C2 -r1.35 -r1.36 *** vswap.c 19 Jun 2002 18:10:20 -0000 1.35 --- vswap.c 19 Jun 2002 19:32:25 -0000 1.36 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-06-19 14:48:47 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-06-19 16:11:47 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 51,56 **** unsigned int vswap_last_used; ! /* last vswap which has been allocated (this index will be used to try ! * to allocate in case the vswap addresses are over) */ unsigned int last_vswap_allocated; --- 51,57 ---- unsigned int vswap_last_used; ! /* last vswap index which has been allocated contiguously (the ! * following position is empty). This index will be used to try to ! * allocate in case the vswap addresses are over) */ unsigned int last_vswap_allocated; *************** *** 62,65 **** --- 63,67 ---- { unsigned long i; + unsigned int failed_alloc = 0; vswap_cachep = kmem_cache_create("comp_cache_vswap", sizeof(struct vswap_address), 0, SLAB_HWCACHE_ALIGN, NULL, NULL); *************** *** 75,81 **** vswap_num_swap_cache = 0; ! for (i = 0; i < NUM_VSWAP_ENTRIES && vswap_alloc_and_init(vswap_address, i); i++); ! last_vswap_allocated = i - 1; ! return 1; } --- 77,86 ---- vswap_num_swap_cache = 0; ! for (i = 0; i < NUM_VSWAP_ENTRIES; i++) { ! if (!vswap_alloc_and_init(vswap_address, i) && !failed_alloc) { ! failed_alloc = 1; ! last_vswap_allocated = i - 1; ! } ! } return 1; } *************** *** 103,106 **** --- 108,112 ---- comp_cache_available_vswap(void) { unsigned short available_mean_size; + unsigned long i; /* that should avoid problems when looking for a place in *************** *** 115,124 **** if (last_vswap_allocated == NUM_VSWAP_ENTRIES - 1) return 0; ! /* allocate an index that has failed to allocate */ if (!vswap_alloc_and_init(vswap_address, last_vswap_allocated + 1)) return 0; ! last_vswap_allocated++; return 1; } --- 121,135 ---- if (last_vswap_allocated == NUM_VSWAP_ENTRIES - 1) return 0; ! ! if (vswap_address[last_vswap_allocated + 1]) ! BUG(); ! /* allocate an index that has failed to allocate */ if (!vswap_alloc_and_init(vswap_address, last_vswap_allocated + 1)) return 0; ! for (i = last_vswap_allocated + 1; i < NUM_VSWAP_ENTRIES && vswap_address[i]; i++); ! last_vswap_allocated = i - 1; ! return 1; } |
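The invariant behind last_vswap_allocated, as the updated comment in vswap.c puts it, is that it names the last index of the contiguous run of allocated entries starting at zero. The retry path in comp_cache_available_vswap() therefore always probes the first hole, and after filling it re-scans forward past entries that were already allocated out of order. A hypothetical userspace model of that bookkeeping:

#include <stdio.h>

#define NUM_ENTRIES 8

/* Sketch (hypothetical names) of the invariant the fix above maintains:
 * last_allocated is the end of the contiguous allocated run from index 0,
 * so retries always target the first hole. */
static void *slots[NUM_ENTRIES];
static int last_allocated = -1;

static void advance_last_allocated(void)
{
        int i;

        /* skip past any run of already-allocated entries */
        for (i = last_allocated + 1; i < NUM_ENTRIES && slots[i]; i++)
                ;
        last_allocated = i - 1;
}

int main(void)
{
        static int dummy;

        slots[0] = &dummy;              /* index 0 allocated at init */
        slots[2] = &dummy;              /* index 2 allocated out of order */
        advance_last_allocated();       /* run is 0..0 -> last_allocated = 0 */
        printf("last_allocated = %d\n", last_allocated);

        slots[1] = &dummy;              /* a retry fills the hole at index 1 */
        advance_last_allocated();       /* run is now 0..2 -> last_allocated = 2 */
        printf("last_allocated = %d\n", last_allocated);
        return 0;
}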
From: Rodrigo S. de C. <rc...@us...> - 2002-06-19 19:32:30
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv23188/include/linux Modified Files: comp_cache.h Log Message: Bug fix o Improved the fix for failed vswap allocations in the vswap resizing code ({grow,shrink}_vswap). Also fixed a potential bug introduced by the previous fix. Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.79 retrieving revision 1.80 diff -C2 -r1.79 -r1.80 *** comp_cache.h 19 Jun 2002 18:10:20 -0000 1.79 --- comp_cache.h 19 Jun 2002 19:32:23 -0000 1.80 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-06-19 14:41:32 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-06-19 15:49:17 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 402,405 **** --- 402,406 ---- extern unsigned long vswap_num_swap_cache; extern unsigned int vswap_last_used; + extern unsigned int last_vswap_allocated; extern unsigned short * last_page_size;
From: Rodrigo S. de C. <rc...@us...> - 2002-06-19 18:10:25
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv28892/mm/comp_cache Modified Files: swapout.c vswap.c Log Message: Bug fixes o Fixed potential bug in get_comp_cache_page() when adding a fragment to an empty comp page o Fixed a bug that wouldn't allocate some vswap entries if they couldn't be allocated for the first time (in comp_cache_vswap_alloc()). It means that if we allocated only one third of vswap entries, it would oom kill some process, but wouldn't try to allocate the rest of vswap entries later. This bug usually didn't happen before since vswap data structures were allocated at the boot time. Other o Now we don't try to refill swap buffer with many pages in get_comp_cache_page(), but only one. That was done to the scenario where we could refill with few pages, but not the amount we previously set (SWAP_CLUSTER_MAX >> 2), so we end up writing out fragments. Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.51 retrieving revision 1.52 diff -C2 -r1.51 -r1.52 *** swapout.c 19 Jun 2002 12:18:44 -0000 1.51 --- swapout.c 19 Jun 2002 18:10:20 -0000 1.52 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-06-19 08:47:28 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-06-19 11:34:35 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 448,452 **** UnlockPage(page); ! if (!refill_swp_buffer(gfp_mask, SWAP_CLUSTER_MAX >> 2, priority)) writeout_fragments(gfp_mask, priority--); --- 448,452 ---- UnlockPage(page); ! if (!refill_swp_buffer(gfp_mask, 1, priority)) writeout_fragments(gfp_mask, priority--); *************** *** 542,545 **** --- 542,550 ---- /* add the fragment to the comp_page list of fragments */ + if (list_empty(&(comp_page->fragments))) { + list_add(&(fragment->list), &(comp_page->fragments)); + goto out; + } + previous_fragment = list_entry(comp_page->fragments.prev, struct comp_cache_fragment, list); Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.34 retrieving revision 1.35 diff -C2 -r1.34 -r1.35 *** vswap.c 19 Jun 2002 12:18:44 -0000 1.34 --- vswap.c 19 Jun 2002 18:10:20 -0000 1.35 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-06-19 08:47:38 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-06-19 14:48:47 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 24,29 **** struct vswap_address ** vswap_address = NULL; - struct list_head vswap_address_free_head; struct list_head vswap_address_used_head; static struct pte_list * pte_list_freelist = NULL; --- 24,29 ---- struct vswap_address ** vswap_address = NULL; struct list_head vswap_address_used_head; + struct list_head vswap_address_free_head; static struct pte_list * pte_list_freelist = NULL; *************** *** 51,54 **** --- 51,58 ---- unsigned int vswap_last_used; + /* last vswap which has been allocated (this index will be used to try + * to allocate in case the vswap addresses are over) */ + unsigned int last_vswap_allocated; + unsigned short * last_page_size; unsigned short last_page = 0; *************** *** 71,76 **** vswap_num_swap_cache = 0; ! for (i = 0; i < NUM_VSWAP_ENTRIES; i++) ! 
vswap_alloc_and_init(vswap_address, i); return 1; --- 75,80 ---- vswap_num_swap_cache = 0; ! for (i = 0; i < NUM_VSWAP_ENTRIES && vswap_alloc_and_init(vswap_address, i); i++); ! last_vswap_allocated = i - 1; return 1; *************** *** 106,116 **** return 0; ! /* no more free vswap address or too many used entries for the ! * current compressed cache size? so no available space */ ! if (list_empty(&vswap_address_free_head) || vswap_num_used_entries >= NUM_VSWAP_ENTRIES) ! return 0; available_mean_size = (unsigned short) (comp_cache_freeable_space/num_comp_pages); ! if (available_mean_size > PAGE_SIZE) BUG(); --- 110,134 ---- return 0; ! /* no more free vswap address? */ ! if (list_empty(&vswap_address_free_head)) { ! /* have all vswap addresses already been allocated? */ ! if (last_vswap_allocated == NUM_VSWAP_ENTRIES - 1) ! return 0; ! ! /* allocate an index that has failed to allocate */ ! if (!vswap_alloc_and_init(vswap_address, last_vswap_allocated + 1)) ! return 0; ! ! last_vswap_allocated++; ! return 1; ! } + /* or too many used entries for the current compressed cache + * size? so no available space */ + if (vswap_num_used_entries >= NUM_VSWAP_ENTRIES) + return 0; + available_mean_size = (unsigned short) (comp_cache_freeable_space/num_comp_pages); ! if (available_mean_size > PAGE_SIZE) BUG(); *************** *** 662,671 **** * */ ! void vswap_alloc_and_init(struct vswap_address ** vswap_address, unsigned long offset) { vswap_address[offset] = alloc_vswap(); if (!vswap_address[offset]) ! return; vswap_address[offset]->offset = offset; --- 680,689 ---- * */ ! int vswap_alloc_and_init(struct vswap_address ** vswap_address, unsigned long offset) { vswap_address[offset] = alloc_vswap(); if (!vswap_address[offset]) ! return 0; vswap_address[offset]->offset = offset; *************** *** 676,679 **** --- 694,698 ---- list_add(&(vswap_address[offset]->list), &vswap_address_free_head); + return 1; } |
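The get_comp_cache_page() fix above guards the case where a fragment is added to a comp page whose fragment list is empty: the old code looked up the current tail via list_entry() on comp_page->fragments.prev, which on an empty list is the bare list head rather than a real fragment. A self-contained sketch of the same pattern with a minimal list.h-style list (hypothetical names, userspace):

#include <stdio.h>

/* Minimal doubly-linked list in the style of the kernel's list.h. */
struct list_head { struct list_head *next, *prev; };

static void list_init(struct list_head *h) { h->next = h->prev = h; }
static int list_empty(const struct list_head *h) { return h->next == h; }

static void list_add(struct list_head *n, struct list_head *h)
{
        n->next = h->next; n->prev = h;
        h->next->prev = n; h->next = n;
}

struct fragment { struct list_head list; int offset; };

static void add_fragment(struct list_head *fragments, struct fragment *f)
{
        if (list_empty(fragments)) {    /* the special case the fix adds */
                list_add(&f->list, fragments);
                return;
        }
        /* only now is it safe to treat fragments->prev as a real fragment */
        list_add(&f->list, fragments->prev);
}

int main(void)
{
        struct list_head fragments;
        struct fragment a = { .offset = 0 }, b = { .offset = 512 };

        list_init(&fragments);
        add_fragment(&fragments, &a);   /* empty-list path */
        add_fragment(&fragments, &b);   /* append-after-tail path */
        printf("fragments added at offsets %d and %d\n", a.offset, b.offset);
        return 0;
}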
From: Rodrigo S. de C. <rc...@us...> - 2002-06-19 18:10:25
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv28892/include/linux Modified Files: comp_cache.h Log Message: Bug fixes o Fixed a potential bug in get_comp_cache_page() when adding a fragment to an empty comp page o Fixed a bug where some vswap entries would never be allocated if their first allocation attempt failed (in comp_cache_vswap_alloc()). This meant that if only one third of the vswap entries could be allocated, some process would be OOM-killed, but the remaining vswap entries would never be retried later. This bug usually didn't show up before, since the vswap data structures were allocated at boot time. Other o We no longer try to refill the swap buffer with many pages in get_comp_cache_page(), but with only one. This addresses the scenario where we could refill a few pages, but not the amount previously requested (SWAP_CLUSTER_MAX >> 2), and so ended up writing out fragments. Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.78 retrieving revision 1.79 diff -C2 -r1.78 -r1.79 *** comp_cache.h 19 Jun 2002 12:18:43 -0000 1.78 --- comp_cache.h 19 Jun 2002 18:10:20 -0000 1.79 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-06-19 08:59:31 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-06-19 14:41:32 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 427,431 **** extern int FASTCALL(free_pte_list(struct pte_list *, unsigned long)); ! void vswap_alloc_and_init(struct vswap_address **, unsigned long); #else --- 427,431 ---- extern int FASTCALL(free_pte_list(struct pte_list *, unsigned long)); ! int vswap_alloc_and_init(struct vswap_address **, unsigned long); #else
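The swap-buffer change described above trades a batch refill for a single-page one: asking refill_swp_buffer() for just one page means fragments are written out only when even that minimal refill fails, instead of whenever a full SWAP_CLUSTER_MAX >> 2 batch could not be assembled. A stubbed sketch of that fallback policy, with hypothetical userspace stand-ins for the kernel functions:

#include <stdio.h>

/* Stub: pretend only a single buffer page can be refilled right now. */
static int refill_swp_buffer(int nr_pages, int priority)
{
        (void) priority;
        return nr_pages <= 1;
}

static void writeout_fragments(int priority)
{
        printf("writing out fragments at priority %d\n", priority);
}

int main(void)
{
        int priority = 6;

        /* the new policy: request one page; write out only on total failure */
        if (!refill_swp_buffer(1, priority))
                writeout_fragments(priority--);
        else
                printf("refilled one buffer page, no writeout needed\n");
        return 0;
}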
From: Rodrigo S. de C. <rc...@us...> - 2002-06-19 12:18:50
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv15026/include/linux Modified Files: comp_cache.h Log Message: Cleanups o Most of typedefs removed: - comp_cache_t -> struct comp_cache_page - comp_cache_fragment_t -> struct comp_cache_fragment - stats_summary_t -> struct stats_summary - stats_page_t -> struct stats_page - compression_algorithm_t -> struct comp_alg - comp_data_t -> struct comp_alg_data Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.77 retrieving revision 1.78 diff -C2 -r1.77 -r1.78 *** comp_cache.h 18 Jun 2002 13:39:33 -0000 1.77 --- comp_cache.h 19 Jun 2002 12:18:43 -0000 1.78 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-06-18 10:16:07 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-06-19 08:59:31 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 43,47 **** }; ! typedef struct comp_cache_fragment { /* list of fragments in a comp page*/ struct list_head list; --- 43,47 ---- }; ! struct comp_cache_fragment { /* list of fragments in a comp page*/ struct list_head list; *************** *** 64,74 **** unsigned long flags; ! struct comp_cache_struct * comp_page; struct comp_cache_fragment * next_hash; struct comp_cache_fragment ** pprev_hash; ! } comp_cache_fragment_t; ! typedef struct comp_cache_struct { struct page * page; --- 64,74 ---- unsigned long flags; ! struct comp_cache_page * comp_page; struct comp_cache_fragment * next_hash; struct comp_cache_fragment ** pprev_hash; ! }; ! struct comp_cache_page { struct page * page; *************** *** 79,91 **** struct list_head fragments; ! struct comp_cache_struct * next_hash; ! struct comp_cache_struct ** pprev_hash; ! } comp_cache_t; #define alloc_fragment() \ ! ((comp_cache_fragment_t *) kmem_cache_alloc(fragment_cachep, SLAB_ATOMIC)) #define alloc_comp_cache() \ ! ((comp_cache_t *) kmem_cache_alloc(comp_cachep, SLAB_ATOMIC)) #define alloc_vswap() \ --- 79,91 ---- struct list_head fragments; ! struct comp_cache_page * next_hash; ! struct comp_cache_page ** pprev_hash; ! }; #define alloc_fragment() \ ! ((struct comp_cache_fragment *) kmem_cache_alloc(fragment_cachep, SLAB_ATOMIC)) #define alloc_comp_cache() \ ! ((struct comp_cache_page *) kmem_cache_alloc(comp_cachep, SLAB_ATOMIC)) #define alloc_vswap() \ *************** *** 109,117 **** #ifdef CONFIG_COMP_CACHE ! int shrink_comp_cache(comp_cache_t *); inline void grow_comp_cache(zone_t *, int); void adapt_comp_cache(void); #else ! static inline int shrink_comp_cache(comp_cache_t * comp_page) { return 0; } static inline void grow_comp_cache(zone_t * zone, int nr_pages) { } #endif --- 109,117 ---- #ifdef CONFIG_COMP_CACHE ! int shrink_comp_cache(struct comp_cache_page *); inline void grow_comp_cache(zone_t *, int); void adapt_comp_cache(void); #else ! static inline int shrink_comp_cache(struct comp_cache_page * comp_page) { return 0; } static inline void grow_comp_cache(zone_t * zone, int nr_pages) { } #endif *************** *** 187,191 **** struct page * page; /* page for IO */ ! comp_cache_fragment_t * fragment; /* pointer to the fragment we are doing IO */ }; --- 187,191 ---- struct page * page; /* page for IO */ ! 
struct comp_cache_fragment * fragment; /* pointer to the fragment we are doing IO */ }; *************** *** 201,205 **** #define for_each_fragment(p, comp_page) list_for_each(p, &(comp_page->fragments)) ! #define member_offset(key) ((unsigned long) (&((comp_cache_t *)0)->key)) #define apply_key_offset(node, offset) (* (unsigned short *)((char *) node + offset)) --- 201,205 ---- #define for_each_fragment(p, comp_page) list_for_each(p, &(comp_page->fragments)) ! #define member_offset(key) ((unsigned long) (&((struct comp_cache_page *)0)->key)) #define apply_key_offset(node, offset) (* (unsigned short *)((char *) node + offset)) *************** *** 241,245 **** } zenTimerType; ! typedef struct stats_summary_struct { unsigned long long comp_size_sum; unsigned int comp_size_max, comp_size_min; --- 241,245 ---- } zenTimerType; ! struct stats_summary { unsigned long long comp_size_sum; unsigned int comp_size_max, comp_size_min; *************** *** 253,273 **** unsigned long faultin_swap, faultin_page; unsigned long discarded_pages; ! } stats_summary_t; ! typedef struct stats_page_struct { unsigned int comp_size; /* compressed size of a page */ unsigned long comp_cycles; /* cycles taken for compression */ unsigned long decomp_cycles; /* cycles taken for decompression */ zenTimerType myTimer; /* used to calculate the cycles */ ! } stats_page_t; ! typedef struct compression_algorithm_struct { char name[6]; compress_function_t * comp; decompress_function_t * decomp; ! stats_summary_t stats; ! } compression_algorithm_t; ! typedef struct { WK_word *compressed_data; WK_word *decompressed_data; --- 253,273 ---- unsigned long faultin_swap, faultin_page; unsigned long discarded_pages; ! }; ! struct stats_page { unsigned int comp_size; /* compressed size of a page */ unsigned long comp_cycles; /* cycles taken for compression */ unsigned long decomp_cycles; /* cycles taken for decompression */ zenTimerType myTimer; /* used to calculate the cycles */ ! }; ! struct comp_alg { char name[6]; compress_function_t * comp; decompress_function_t * decomp; ! struct stats_summary stats; ! }; ! struct comp_alg_data { WK_word *compressed_data; WK_word *decompressed_data; *************** *** 281,285 **** unsigned short compressed_size; ! } comp_data_t; #define START_ZEN_TIME(userTimer) { \ --- 281,285 ---- unsigned short compressed_size; ! }; #define START_ZEN_TIME(userTimer) { \ *************** *** 303,310 **** #ifdef CONFIG_COMP_CACHE void comp_cache_update_page_comp_stats(struct page *); ! void comp_cache_update_writeout_stats(comp_cache_fragment_t *); ! void comp_cache_update_faultin_stats(comp_cache_fragment_t *); ! void set_fragment_algorithm(comp_cache_fragment_t *, unsigned short); ! void decompress(comp_cache_fragment_t *, struct page *); int compress(struct page *, void *, unsigned short *, int); --- 303,310 ---- #ifdef CONFIG_COMP_CACHE void comp_cache_update_page_comp_stats(struct page *); ! void comp_cache_update_writeout_stats(struct comp_cache_fragment *); ! void comp_cache_update_faultin_stats(struct comp_cache_fragment *); ! void set_fragment_algorithm(struct comp_cache_fragment *, unsigned short); ! void decompress(struct comp_cache_fragment *, struct page *); int compress(struct page *, void *, unsigned short *, int); *************** *** 341,345 **** int compress_page(struct page *, int, unsigned int); void comp_cache_init(void); ! 
inline void init_comp_page(comp_cache_t **,struct page *); inline void compress_dirty_page(struct page *, int (*writepage)(struct page *), unsigned int); inline int compress_clean_page(struct page *, unsigned int); --- 341,345 ---- int compress_page(struct page *, int, unsigned int); void comp_cache_init(void); ! inline void init_comp_page(struct comp_cache_page **,struct page *); inline void compress_dirty_page(struct page *, int (*writepage)(struct page *), unsigned int); inline int compress_clean_page(struct page *, unsigned int); *************** *** 384,388 **** unsigned long offset; ! comp_cache_fragment_t * fragment; struct page * swap_cache_page; --- 384,388 ---- unsigned long offset; ! struct comp_cache_fragment * fragment; struct page * swap_cache_page; *************** *** 407,411 **** #define COMP_CACHE_SWP_TYPE MAX_SWAPFILES ! #define VSWAP_RESERVED ((comp_cache_fragment_t *) 0xffffffff) #ifdef CONFIG_COMP_CACHE --- 407,411 ---- #define COMP_CACHE_SWP_TYPE MAX_SWAPFILES ! #define VSWAP_RESERVED ((struct comp_cache_fragment *) 0xffffffff) #ifdef CONFIG_COMP_CACHE *************** *** 450,455 **** /* free.c */ ! void comp_cache_free_locked(comp_cache_fragment_t *); ! inline void comp_cache_free(comp_cache_fragment_t *); #ifdef CONFIG_COMP_CACHE --- 450,455 ---- /* free.c */ ! void comp_cache_free_locked(struct comp_cache_fragment *); ! inline void comp_cache_free(struct comp_cache_fragment *); #ifdef CONFIG_COMP_CACHE *************** *** 500,507 **** /* aux.c */ unsigned long long big_division(unsigned long long, unsigned long long); ! inline void set_comp_page(comp_cache_t *, struct page *); ! inline void check_all_fragments(comp_cache_t *); ! extern comp_cache_fragment_t ** fragment_hash; extern unsigned long fragment_hash_size; extern unsigned long fragment_hash_used; --- 500,507 ---- /* aux.c */ unsigned long long big_division(unsigned long long, unsigned long long); ! inline void set_comp_page(struct comp_cache_page *, struct page *); ! inline void check_all_fragments(struct comp_cache_page *); ! extern struct comp_cache_fragment ** fragment_hash; extern unsigned long fragment_hash_size; extern unsigned long fragment_hash_used; *************** *** 521,528 **** } ! inline void __add_fragment_to_hash_table(comp_cache_fragment_t **, unsigned int, comp_cache_fragment_t *); ! inline void remove_fragment_from_hash_table(comp_cache_fragment_t *); ! static inline void add_fragment_to_hash_table(comp_cache_fragment_t * fragment) { __add_fragment_to_hash_table(fragment_hash, __fragment_hashfn(fragment->mapping, fragment->index, fragment_hash_size, fragment_hash_bits), fragment); } --- 521,528 ---- } ! inline void __add_fragment_to_hash_table(struct comp_cache_fragment **, unsigned int, struct comp_cache_fragment *); ! inline void remove_fragment_from_hash_table(struct comp_cache_fragment *); ! static inline void add_fragment_to_hash_table(struct comp_cache_fragment * fragment) { __add_fragment_to_hash_table(fragment_hash, __fragment_hashfn(fragment->mapping, fragment->index, fragment_hash_size, fragment_hash_bits), fragment); } *************** *** 541,562 **** } ! inline void add_comp_page_to_hash_table(comp_cache_t *); ! inline void remove_comp_page_from_hash_table(comp_cache_t *); int set_pte_list_to_entry(struct pte_list *, swp_entry_t, swp_entry_t); ! comp_cache_t * search_comp_page_free_space(int); ! comp_cache_fragment_t ** create_fragment_hash(unsigned long *, unsigned int *, unsigned int *); extern struct list_head lru_queue; ! 
inline void add_fragment_to_lru_queue(comp_cache_fragment_t *); ! inline void add_fragment_to_lru_queue_tail(comp_cache_fragment_t *); ! inline void remove_fragment_from_lru_queue(comp_cache_fragment_t *); /* enough memory functions */ #ifdef CONFIG_COMP_CACHE ! extern int FASTCALL(find_comp_page(struct address_space *, unsigned long, comp_cache_fragment_t **)); #else ! static inline int find_comp_page(struct address_space * mapping, unsigned long offset, comp_cache_fragment_t ** fragment) { return -ENOENT; } #endif --- 541,562 ---- } ! inline void add_comp_page_to_hash_table(struct comp_cache_page *); ! inline void remove_comp_page_from_hash_table(struct comp_cache_page *); int set_pte_list_to_entry(struct pte_list *, swp_entry_t, swp_entry_t); ! struct comp_cache_page * search_comp_page_free_space(int); ! struct comp_cache_fragment ** create_fragment_hash(unsigned long *, unsigned int *, unsigned int *); extern struct list_head lru_queue; ! inline void add_fragment_to_lru_queue(struct comp_cache_fragment *); ! inline void add_fragment_to_lru_queue_tail(struct comp_cache_fragment *); ! inline void remove_fragment_from_lru_queue(struct comp_cache_fragment *); /* enough memory functions */ #ifdef CONFIG_COMP_CACHE ! extern int FASTCALL(find_comp_page(struct address_space *, unsigned long, struct comp_cache_fragment **)); #else ! static inline int find_comp_page(struct address_space * mapping, unsigned long offset, struct comp_cache_fragment ** fragment) { return -ENOENT; } #endif |
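The rationale for trading typedefs for bare struct tags follows kernel style: a tag can be forward-declared wherever only a pointer is needed, without pulling in the full definition, and the struct keyword at each use site keeps the kind of type visible. A small illustration with hypothetical names:

#include <stdio.h>

struct comp_fragment;                   /* forward declaration suffices for pointers */
static int fragment_offset(const struct comp_fragment *f);

struct comp_fragment { int offset; };   /* the definition can come later */

static int fragment_offset(const struct comp_fragment *f)
{
        return f->offset;
}

int main(void)
{
        struct comp_fragment f = { 1024 };
        printf("offset = %d\n", fragment_offset(&f));
        return 0;
}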
From: Rodrigo S. de C. <rc...@us...> - 2002-06-19 12:18:49
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv15026/mm/comp_cache Modified Files: WK4x4.c WKdm.c adaptivity.c aux.c free.c main.c proc.c swapin.c swapout.c vswap.c Log Message: Cleanups o Most of typedefs removed: - comp_cache_t -> struct comp_cache_page - comp_cache_fragment_t -> struct comp_cache_fragment - stats_summary_t -> struct stats_summary - stats_page_t -> struct stats_page - compression_algorithm_t -> struct comp_alg - comp_data_t -> struct comp_alg_data Index: WK4x4.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/WK4x4.c,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -r1.2 -r1.3 *** WK4x4.c 19 Dec 2001 20:02:53 -0000 1.2 --- WK4x4.c 19 Jun 2002 12:18:44 -0000 1.3 *************** *** 260,265 **** void *page) { ! DictionaryElement *dictionary = ((comp_data_t *)page)->dictionary; ! unsigned int *hashTable = ((comp_data_t *)page)->hashLookupTable_WK4x4; /*DictionaryElement dictionary[DICTIONARY_SIZE]; --- 260,265 ---- void *page) { ! DictionaryElement *dictionary = ((struct comp_alg_data *)page)->dictionary; ! unsigned int *hashTable = ((struct comp_alg_data *)page)->hashLookupTable_WK4x4; /*DictionaryElement dictionary[DICTIONARY_SIZE]; *************** *** 428,433 **** /*DictionaryElement dictionary[DICTIONARY_SIZE]; unsigned int hashTable [] = HASH_LOOKUP_TABLE_CONTENTS_WK4x4;*/ ! DictionaryElement *dictionary = ((comp_data_t *)page)->dictionary; ! unsigned int *hashTable = ((comp_data_t *)page)->hashLookupTable_WK4x4; unsigned int initialIndexTable [] = INITIAL_INDEX_TABLE_CONTENTS; --- 428,433 ---- /*DictionaryElement dictionary[DICTIONARY_SIZE]; unsigned int hashTable [] = HASH_LOOKUP_TABLE_CONTENTS_WK4x4;*/ ! DictionaryElement *dictionary = ((struct comp_alg_data *)page)->dictionary; ! unsigned int *hashTable = ((struct comp_alg_data *)page)->hashLookupTable_WK4x4; unsigned int initialIndexTable [] = INITIAL_INDEX_TABLE_CONTENTS; Index: WKdm.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/WKdm.c,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -r1.2 -r1.3 *** WKdm.c 19 Dec 2001 20:02:53 -0000 1.2 --- WKdm.c 19 Jun 2002 12:18:44 -0000 1.3 *************** *** 387,392 **** /* DictionaryElement dictionary[DICTIONARY_SIZE]; char hashLookupTable [] = HASH_LOOKUP_TABLE_CONTENTS; */ ! DictionaryElement *dictionary = ((comp_data_t *)page)->dictionary; ! char *hashLookupTable = ((comp_data_t *)page)->hashLookupTable_WKdm; /* arrays that hold output data in intermediate form during modeling */ --- 387,392 ---- /* DictionaryElement dictionary[DICTIONARY_SIZE]; char hashLookupTable [] = HASH_LOOKUP_TABLE_CONTENTS; */ ! DictionaryElement *dictionary = ((struct comp_alg_data *)page)->dictionary; ! char *hashLookupTable = ((struct comp_alg_data *)page)->hashLookupTable_WKdm; /* arrays that hold output data in intermediate form during modeling */ *************** *** 400,406 **** /* WK_word tempQPosArray[300]; queue positions for matches */ /* WK_word tempLowBitsArray[1200]; low bits for partial matches */ ! WK_word *tempTagsArray = ((comp_data_t *)page)->tempTagsArray; ! WK_word *tempQPosArray = ((comp_data_t *)page)->tempQPosArray; ! 
WK_word *tempLowBitsArray = ((comp_data_t *)page)->tempLowBitsArray; /* boundary_tmp will be used for keeping track of what's where in --- 400,406 ---- /* WK_word tempQPosArray[300]; queue positions for matches */ /* WK_word tempLowBitsArray[1200]; low bits for partial matches */ ! WK_word *tempTagsArray = ((struct comp_alg_data *)page)->tempTagsArray; ! WK_word *tempQPosArray = ((struct comp_alg_data *)page)->tempQPosArray; ! WK_word *tempLowBitsArray = ((struct comp_alg_data *)page)->tempLowBitsArray; /* boundary_tmp will be used for keeping track of what's where in *************** *** 642,647 **** /*DictionaryElement dictionary[DICTIONARY_SIZE]; unsigned int hashLookupTable [] = HASH_LOOKUP_TABLE_CONTENTS_WKDM;*/ ! DictionaryElement *dictionary = ((comp_data_t *)page)->dictionary; ! char *hashLookupTable = ((comp_data_t *)page)->hashLookupTable_WKdm; --- 642,647 ---- /*DictionaryElement dictionary[DICTIONARY_SIZE]; unsigned int hashLookupTable [] = HASH_LOOKUP_TABLE_CONTENTS_WKDM;*/ ! DictionaryElement *dictionary = ((struct comp_alg_data *)page)->dictionary; ! char *hashLookupTable = ((struct comp_alg_data *)page)->hashLookupTable_WKdm; *************** *** 655,661 **** //WK_word tempQPosArray[300]; /* queue positions for matches */ //WK_word tempLowBitsArray[1200]; /* low bits for partial matches */ ! WK_word *tempTagsArray = ((comp_data_t *)page)->tempTagsArray; ! WK_word *tempQPosArray = ((comp_data_t *)page)->tempQPosArray; ! WK_word *tempLowBitsArray = ((comp_data_t *)page)->tempLowBitsArray; --- 655,661 ---- //WK_word tempQPosArray[300]; /* queue positions for matches */ //WK_word tempLowBitsArray[1200]; /* low bits for partial matches */ ! WK_word *tempTagsArray = ((struct comp_alg_data *)page)->tempTagsArray; ! WK_word *tempQPosArray = ((struct comp_alg_data *)page)->tempQPosArray; ! WK_word *tempLowBitsArray = ((struct comp_alg_data *)page)->tempLowBitsArray; Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.24 retrieving revision 1.25 diff -C2 -r1.24 -r1.25 *** adaptivity.c 18 Jun 2002 18:04:31 -0000 1.24 --- adaptivity.c 19 Jun 2002 12:18:44 -0000 1.25 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-06-18 13:28:03 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-06-19 08:45:29 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 168,176 **** void resize_fragment_hash_table(void) { ! comp_cache_fragment_t ** new_fragment_hash, * fragment, * next_fragment; unsigned long new_fragment_hash_size; unsigned int i, new_fragment_hash_bits, new_fragment_hash_order, hash_index; ! new_fragment_hash_size = 3 * num_comp_pages * sizeof(comp_cache_fragment_t *); new_fragment_hash = create_fragment_hash(&new_fragment_hash_size, &new_fragment_hash_bits, &new_fragment_hash_order); --- 168,176 ---- void resize_fragment_hash_table(void) { ! struct comp_cache_fragment ** new_fragment_hash, * fragment, * next_fragment; unsigned long new_fragment_hash_size; unsigned int i, new_fragment_hash_bits, new_fragment_hash_order, hash_index; ! 
new_fragment_hash_size = 3 * num_comp_pages * sizeof(struct comp_cache_fragment *); new_fragment_hash = create_fragment_hash(&new_fragment_hash_size, &new_fragment_hash_bits, &new_fragment_hash_order); *************** *** 256,260 **** shrink_vswap(unsigned long vswap_new_num_entries) { struct page * swap_cache_page; ! comp_cache_fragment_t * fragment; struct vswap_address ** new_vswap_address; unsigned int total_scan = 0, failed_scan = 0; --- 256,260 ---- shrink_vswap(unsigned long vswap_new_num_entries) { struct page * swap_cache_page; ! struct comp_cache_fragment * fragment; struct vswap_address ** new_vswap_address; unsigned int total_scan = 0, failed_scan = 0; *************** *** 502,506 **** static inline int fragment_hash_needs_to_shrink(void) { ! unsigned long new_fragment_hash_size = (3 * num_comp_pages) * sizeof(comp_cache_fragment_t *); /* if we shrink the hash table an order, will the data fit in --- 502,506 ---- static inline int fragment_hash_needs_to_shrink(void) { ! unsigned long new_fragment_hash_size = (3 * num_comp_pages) * sizeof(struct comp_cache_fragment *); /* if we shrink the hash table an order, will the data fit in *************** *** 552,558 **** int ! shrink_comp_cache(comp_cache_t * comp_page) { ! comp_cache_t * empty_comp_page; int retval = 0; --- 552,558 ---- int ! shrink_comp_cache(struct comp_cache_page * comp_page) { ! struct comp_cache_page * empty_comp_page; int retval = 0; *************** *** 626,630 **** static inline int fragment_hash_needs_to_grow(void) { ! unsigned long new_fragment_hash_size = (3 * num_comp_pages) * sizeof(comp_cache_fragment_t *); /* do we really need a bigger hash table? */ --- 626,630 ---- static inline int fragment_hash_needs_to_grow(void) { ! unsigned long new_fragment_hash_size = (3 * num_comp_pages) * sizeof(struct comp_cache_fragment *); /* do we really need a bigger hash table? */ *************** *** 658,662 **** grow_comp_cache(zone_t * zone, int nr_pages) { ! comp_cache_t * comp_page; struct page * page; --- 658,662 ---- grow_comp_cache(zone_t * zone, int nr_pages) { ! struct comp_cache_page * comp_page; struct page * page; Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.31 retrieving revision 1.32 diff -C2 -r1.31 -r1.32 *** aux.c 18 Jun 2002 12:47:21 -0000 1.31 --- aux.c 19 Jun 2002 12:18:44 -0000 1.32 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-06-17 16:14:31 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-06-19 08:45:54 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 15,19 **** #include <linux/vmalloc.h> ! comp_cache_fragment_t ** fragment_hash; unsigned long fragment_hash_size; unsigned long fragment_hash_used; --- 15,19 ---- #include <linux/vmalloc.h> ! struct comp_cache_fragment ** fragment_hash; unsigned long fragment_hash_size; unsigned long fragment_hash_used; *************** *** 21,25 **** unsigned int fragment_hash_bits; ! static comp_cache_t ** free_space_hash; unsigned int free_space_hash_size; unsigned int free_space_interval; --- 21,25 ---- unsigned int fragment_hash_bits; ! static struct comp_cache_page ** free_space_hash; unsigned int free_space_hash_size; unsigned int free_space_interval; *************** *** 69,73 **** inline void ! set_comp_page(comp_cache_t * comp_page, struct page * page) { if (!comp_page) --- 69,73 ---- inline void ! 
set_comp_page(struct comp_cache_page * comp_page, struct page * page) { if (!comp_page) *************** *** 159,164 **** inline void ! __add_fragment_to_hash_table(comp_cache_fragment_t ** hash_table, unsigned int hash_index, comp_cache_fragment_t * new_fragment) { ! comp_cache_fragment_t ** fragment; fragment = &hash_table[hash_index]; --- 159,164 ---- inline void ! __add_fragment_to_hash_table(struct comp_cache_fragment ** hash_table, unsigned int hash_index, struct comp_cache_fragment * new_fragment) { ! struct comp_cache_fragment ** fragment; fragment = &hash_table[hash_index]; *************** *** 174,180 **** inline void ! remove_fragment_from_hash_table(comp_cache_fragment_t * fragment) { ! comp_cache_fragment_t *next = fragment->next_hash; ! comp_cache_fragment_t **pprev = fragment->pprev_hash; if (next) --- 174,180 ---- inline void ! remove_fragment_from_hash_table(struct comp_cache_fragment * fragment) { ! struct comp_cache_fragment *next = fragment->next_hash; ! struct comp_cache_fragment **pprev = fragment->pprev_hash; if (next) *************** *** 188,192 **** unsigned long free_space_count(int index, unsigned long * num_fragments) { ! comp_cache_t * comp_page; unsigned long total, total_fragments; struct list_head * fragment_lh; --- 188,192 ---- unsigned long free_space_count(int index, unsigned long * num_fragments) { ! struct comp_cache_page * comp_page; unsigned long total, total_fragments; struct list_head * fragment_lh; *************** *** 252,257 **** inline void ! add_comp_page_to_hash_table(comp_cache_t * new_comp_page) { ! comp_cache_t ** comp_page; comp_page = &free_space_hash[free_space_hashfn(new_comp_page->free_space)]; --- 252,257 ---- inline void ! add_comp_page_to_hash_table(struct comp_cache_page * new_comp_page) { ! struct comp_cache_page ** comp_page; comp_page = &free_space_hash[free_space_hashfn(new_comp_page->free_space)]; *************** *** 265,271 **** inline void ! remove_comp_page_from_hash_table(comp_cache_t * comp_page) { ! comp_cache_t *next = comp_page->next_hash; ! comp_cache_t **pprev = comp_page->pprev_hash; if (next) --- 265,271 ---- inline void ! remove_comp_page_from_hash_table(struct comp_cache_page * comp_page) { ! struct comp_cache_page *next = comp_page->next_hash; ! struct comp_cache_page **pprev = comp_page->pprev_hash; if (next) *************** *** 275,281 **** } ! comp_cache_t * search_comp_page_free_space(int free_space) { ! comp_cache_t * comp_page; int idx, i; --- 275,281 ---- } ! struct comp_cache_page * search_comp_page_free_space(int free_space) { ! struct comp_cache_page * comp_page; int idx, i; *************** *** 309,313 **** inline void ! add_fragment_to_lru_queue_tail(comp_cache_fragment_t * fragment) { swp_entry_t entry; --- 309,313 ---- inline void ! add_fragment_to_lru_queue_tail(struct comp_cache_fragment * fragment) { swp_entry_t entry; *************** *** 328,332 **** inline void ! add_fragment_to_lru_queue(comp_cache_fragment_t * fragment) { swp_entry_t entry; --- 328,332 ---- inline void ! add_fragment_to_lru_queue(struct comp_cache_fragment * fragment) { swp_entry_t entry; *************** *** 347,351 **** inline void ! remove_fragment_from_lru_queue(comp_cache_fragment_t * fragment) { swp_entry_t entry; --- 347,351 ---- inline void ! remove_fragment_from_lru_queue(struct comp_cache_fragment * fragment) { swp_entry_t entry; *************** *** 366,373 **** /* adapted version of __find_page_nolock:filemap.c */ ! int FASTCALL(find_comp_page(struct address_space *, unsigned long, comp_cache_fragment_t **)); ! 
int find_comp_page(struct address_space *mapping, unsigned long offset, comp_cache_fragment_t ** fragment) { ! comp_cache_fragment_t * fhash; int err = -ENOENT; --- 366,373 ---- /* adapted version of __find_page_nolock:filemap.c */ ! int FASTCALL(find_comp_page(struct address_space *, unsigned long, struct comp_cache_fragment **)); ! int find_comp_page(struct address_space *mapping, unsigned long offset, struct comp_cache_fragment ** fragment) { ! struct comp_cache_fragment * fhash; int err = -ENOENT; *************** *** 400,412 **** inline void ! print_all_fragments (comp_cache_t * comp_page) { struct list_head * fragment_lh; ! comp_cache_fragment_t * fragment; printk("DEBUG: fragment List for %08lx\n", (unsigned long) comp_page); for_each_fragment(fragment_lh, comp_page) { ! fragment = list_entry(fragment_lh, comp_cache_fragment_t, list); printk(" %08lx (entry: %08lx offset: %d compressed_size: %d\n", (unsigned long) fragment, fragment->index, fragment->offset, fragment->compressed_size); } --- 400,412 ---- inline void ! print_all_fragments (struct comp_cache_page * comp_page) { struct list_head * fragment_lh; ! struct comp_cache_fragment * fragment; printk("DEBUG: fragment List for %08lx\n", (unsigned long) comp_page); for_each_fragment(fragment_lh, comp_page) { ! fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); printk(" %08lx (entry: %08lx offset: %d compressed_size: %d\n", (unsigned long) fragment, fragment->index, fragment->offset, fragment->compressed_size); } *************** *** 414,420 **** inline void ! check_all_fragments(comp_cache_t * comp_page) { ! comp_cache_fragment_t * fragment, * aux_fragment; struct list_head * fragment_lh, * aux_fragment_lh; int used_space = 0; --- 414,420 ---- inline void ! check_all_fragments(struct comp_cache_page * comp_page) { ! struct comp_cache_fragment * fragment, * aux_fragment; struct list_head * fragment_lh, * aux_fragment_lh; int used_space = 0; *************** *** 425,429 **** for_each_fragment(fragment_lh, comp_page) { ! fragment = list_entry(fragment_lh, comp_cache_fragment_t, list); if (fragment->comp_page != comp_page) --- 425,429 ---- for_each_fragment(fragment_lh, comp_page) { ! fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); if (fragment->comp_page != comp_page) *************** *** 446,453 **** for_each_fragment(fragment_lh, comp_page) { ! fragment = list_entry(fragment_lh, comp_cache_fragment_t, list); for_each_fragment(aux_fragment_lh, comp_page) { ! aux_fragment = list_entry(aux_fragment_lh, comp_cache_fragment_t, list); if (aux_fragment == fragment) --- 446,453 ---- for_each_fragment(fragment_lh, comp_page) { ! fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); for_each_fragment(aux_fragment_lh, comp_page) { ! aux_fragment = list_entry(aux_fragment_lh, struct comp_cache_fragment, list); if (aux_fragment == fragment) *************** *** 472,483 **** } ! comp_cache_fragment_t ** create_fragment_hash(unsigned long * fragment_hash_size, unsigned int * bits, unsigned int * order) { ! comp_cache_fragment_t ** hash_table; for (*order = 0; (PAGE_SIZE << *order) < *fragment_hash_size; (*order)++); do { ! unsigned long tmp = (PAGE_SIZE << *order)/sizeof(comp_cache_fragment_t *); *bits = 0; --- 472,483 ---- } ! struct comp_cache_fragment ** create_fragment_hash(unsigned long * fragment_hash_size, unsigned int * bits, unsigned int * order) { ! struct comp_cache_fragment ** hash_table; for (*order = 0; (PAGE_SIZE << *order) < *fragment_hash_size; (*order)++); do { ! 
unsigned long tmp = (PAGE_SIZE << *order)/sizeof(struct comp_cache_fragment *); *bits = 0; *************** *** 485,489 **** (*bits)++; ! hash_table = (comp_cache_fragment_t **) __get_free_pages(GFP_ATOMIC, *order); } while(hash_table == NULL && --(*order) > 0); --- 485,489 ---- (*bits)++; ! hash_table = (struct comp_cache_fragment **) __get_free_pages(GFP_ATOMIC, *order); } while(hash_table == NULL && --(*order) > 0); *************** *** 491,495 **** if (hash_table) ! memset((void *) hash_table, 0, *fragment_hash_size * sizeof(comp_cache_fragment_t *)); return hash_table; --- 491,495 ---- if (hash_table) ! memset((void *) hash_table, 0, *fragment_hash_size * sizeof(struct comp_cache_fragment *)); return hash_table; *************** *** 501,505 **** /* fragment hash table (code heavily based on * page_cache_init():filemap.c */ ! fragment_hash_size = 3 * num_comp_pages * sizeof(comp_cache_fragment_t *); fragment_hash_used = 0; fragment_hash = create_fragment_hash(&fragment_hash_size, &fragment_hash_bits, &fragment_hash_order); --- 501,505 ---- /* fragment hash table (code heavily based on * page_cache_init():filemap.c */ ! fragment_hash_size = 3 * num_comp_pages * sizeof(struct comp_cache_fragment *); fragment_hash_used = 0; fragment_hash = create_fragment_hash(&fragment_hash_size, &fragment_hash_bits, &fragment_hash_order); *************** *** 515,526 **** free_space_hash_size = (int) (PAGE_SIZE/free_space_interval) + 2; ! free_space_hash = vmalloc(free_space_hash_size * sizeof(comp_cache_t *)); ! printk("Compressed Cache: free space (%u entries = %uB)\n", free_space_hash_size, free_space_hash_size * sizeof(comp_cache_t *)); if (!free_space_hash) panic("comp_cache_hash_init(): couldn't allocate free space hash table\n"); ! memset((void *) free_space_hash, 0, free_space_hash_size * sizeof(comp_cache_t *)); } --- 515,526 ---- free_space_hash_size = (int) (PAGE_SIZE/free_space_interval) + 2; ! free_space_hash = vmalloc(free_space_hash_size * sizeof(struct comp_cache_page *)); ! printk("Compressed Cache: free space (%u entries = %uB)\n", free_space_hash_size, free_space_hash_size * sizeof(struct comp_cache_page *)); if (!free_space_hash) panic("comp_cache_hash_init(): couldn't allocate free space hash table\n"); ! memset((void *) free_space_hash, 0, free_space_hash_size * sizeof(struct comp_cache_page *)); } Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.31 retrieving revision 1.32 diff -C2 -r1.31 -r1.32 *** free.c 13 Jun 2002 20:18:32 -0000 1.31 --- free.c 19 Jun 2002 12:18:44 -0000 1.32 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-06-13 10:37:11 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-06-19 08:46:13 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 19,24 **** extern kmem_cache_t * fragment_cachep; ! extern void remove_fragment_vswap(comp_cache_fragment_t *); ! extern void add_fragment_vswap(comp_cache_fragment_t *); /* is fragment1 the left neighbour of fragment2? */ --- 19,24 ---- extern kmem_cache_t * fragment_cachep; ! extern void remove_fragment_vswap(struct comp_cache_fragment *); ! extern void add_fragment_vswap(struct comp_cache_fragment *); /* is fragment1 the left neighbour of fragment2? */ *************** *** 30,34 **** static inline void ! 
merge_right_neighbour(comp_cache_fragment_t * fragment_to_free, comp_cache_fragment_t * right_fragment) { if (!right_fragment) --- 30,34 ---- static inline void ! merge_right_neighbour(struct comp_cache_fragment * fragment_to_free, struct comp_cache_fragment * right_fragment) { if (!right_fragment) *************** *** 47,51 **** static inline void ! merge_left_neighbour(comp_cache_fragment_t * fragment_to_free, comp_cache_fragment_t * left_fragment) { if (!left_fragment) --- 47,51 ---- static inline void ! merge_left_neighbour(struct comp_cache_fragment * fragment_to_free, struct comp_cache_fragment * left_fragment) { if (!left_fragment) *************** *** 64,68 **** static inline void ! remove_fragment_from_comp_cache(comp_cache_fragment_t * fragment) { remove_fragment_vswap(fragment); --- 64,68 ---- static inline void ! remove_fragment_from_comp_cache(struct comp_cache_fragment * fragment) { remove_fragment_vswap(fragment); *************** *** 78,85 **** void ! comp_cache_free_locked(comp_cache_fragment_t * fragment) { ! comp_cache_t * comp_page; ! comp_cache_fragment_t * next_fragment, * previous_fragment; if (!fragment) --- 78,85 ---- void ! comp_cache_free_locked(struct comp_cache_fragment * fragment) { ! struct comp_cache_page * comp_page; ! struct comp_cache_fragment * next_fragment, * previous_fragment; if (!fragment) *************** *** 98,106 **** next_fragment = NULL; if (fragment->list.next != &(comp_page->fragments)) ! next_fragment = list_entry(fragment->list.next, comp_cache_fragment_t, list); previous_fragment = NULL; if (fragment->list.prev != &(comp_page->fragments)) ! previous_fragment = list_entry(fragment->list.prev, comp_cache_fragment_t, list); /* simple case - no free space --- 98,106 ---- next_fragment = NULL; if (fragment->list.next != &(comp_page->fragments)) ! next_fragment = list_entry(fragment->list.next, struct comp_cache_fragment, list); previous_fragment = NULL; if (fragment->list.prev != &(comp_page->fragments)) ! previous_fragment = list_entry(fragment->list.prev, struct comp_cache_fragment, list); /* simple case - no free space *************** *** 164,169 **** inline void ! comp_cache_free(comp_cache_fragment_t * fragment) { ! comp_cache_t * comp_page; int locked; --- 164,169 ---- inline void ! comp_cache_free(struct comp_cache_fragment * fragment) { ! struct comp_cache_page * comp_page; int locked; *************** *** 188,192 **** comp_cache_use_address(swp_entry_t entry) { ! comp_cache_fragment_t * fragment = NULL; struct vswap_address * vswap; struct list_head * vswap_lh; --- 188,192 ---- comp_cache_use_address(swp_entry_t entry) { ! struct comp_cache_fragment * fragment = NULL; struct vswap_address * vswap; struct list_head * vswap_lh; Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.45 retrieving revision 1.46 diff -C2 -r1.45 -r1.46 *** main.c 18 Jun 2002 12:47:21 -0000 1.45 --- main.c 19 Jun 2002 12:18:44 -0000 1.46 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-06-17 17:47:11 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-06-19 08:46:38 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 44,48 **** extern unsigned long num_physpages; ! 
extern comp_cache_t * get_comp_cache_page(struct page *, unsigned short, comp_cache_fragment_t **, int, int, unsigned int); inline void --- 44,48 ---- extern unsigned long num_physpages; ! extern struct comp_cache_page * get_comp_cache_page(struct page *, unsigned short, struct comp_cache_fragment **, int, int, unsigned int); inline void *************** *** 94,99 **** compress_page(struct page * page, int dirty, unsigned int gfp_mask) { ! comp_cache_t * comp_page; ! comp_cache_fragment_t * fragment; unsigned short comp_size, algorithm; --- 94,99 ---- compress_page(struct page * page, int dirty, unsigned int gfp_mask) { ! struct comp_cache_page * comp_page; ! struct comp_cache_fragment * fragment; unsigned short comp_size, algorithm; *************** *** 151,156 **** steal_page_from_comp_cache(struct page * page, struct page * new_page) { ! comp_cache_fragment_t * fragment; ! comp_cache_t * comp_page; struct page * old_page; int locked; --- 151,156 ---- steal_page_from_comp_cache(struct page * page, struct page * new_page) { ! struct comp_cache_fragment * fragment; ! struct comp_cache_page * comp_page; struct page * old_page; int locked; *************** *** 195,200 **** comp_cache_try_to_release_page(struct page ** page, int gfp_mask) { ! comp_cache_fragment_t * fragment; ! comp_cache_t * comp_page; unsigned short comp_size, dirty; struct page * old_page; --- 195,200 ---- comp_cache_try_to_release_page(struct page ** page, int gfp_mask) { ! struct comp_cache_fragment * fragment; ! struct comp_cache_page * comp_page; unsigned short comp_size, dirty; struct page * old_page; *************** *** 274,278 **** LIST_HEAD(lru_queue); ! inline void init_comp_page(comp_cache_t ** comp_page,struct page * page) { *comp_page = alloc_comp_cache(); (*comp_page)->free_space = PAGE_SIZE; --- 274,278 ---- LIST_HEAD(lru_queue); ! inline void init_comp_page(struct comp_cache_page ** comp_page,struct page * page) { *comp_page = alloc_comp_cache(); (*comp_page)->free_space = PAGE_SIZE; *************** *** 288,292 **** comp_cache_init(void) { ! comp_cache_t * comp_page; struct page * page; int i; --- 288,292 ---- comp_cache_init(void) { ! struct comp_cache_page * comp_page; struct page * page; int i; *************** *** 332,337 **** /* create slab caches */ ! comp_cachep = kmem_cache_create("comp_cache_struct", sizeof(comp_cache_t), 0, SLAB_HWCACHE_ALIGN, NULL, NULL); ! fragment_cachep = kmem_cache_create("comp_cache_frag", sizeof(comp_cache_fragment_t), 0, SLAB_HWCACHE_ALIGN, NULL, NULL); comp_cache_hash_init(); --- 332,337 ---- /* create slab caches */ ! comp_cachep = kmem_cache_create("comp_cache_struct", sizeof(struct comp_cache_page), 0, SLAB_HWCACHE_ALIGN, NULL, NULL); ! fragment_cachep = kmem_cache_create("comp_cache_frag", sizeof(struct comp_cache_fragment), 0, SLAB_HWCACHE_ALIGN, NULL, NULL); comp_cache_hash_init(); Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.15 retrieving revision 1.16 diff -C2 -r1.15 -r1.16 *** proc.c 13 Jun 2002 20:18:33 -0000 1.15 --- proc.c 19 Jun 2002 12:18:44 -0000 1.16 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-06-13 17:04:41 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-06-19 08:59:17 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 27,31 **** extern unsigned long new_num_comp_pages, max_num_comp_pages, min_num_comp_pages; ! 
static compression_algorithm_t compression_algorithms[NUM_ALGORITHMS]; static int algorithm_min = WKDM_IDX; static int algorithm_max = LZO_IDX; --- 27,31 ---- extern unsigned long new_num_comp_pages, max_num_comp_pages, min_num_comp_pages; ! static struct comp_alg compression_algorithms[NUM_ALGORITHMS]; static int algorithm_min = WKDM_IDX; static int algorithm_max = LZO_IDX; *************** *** 33,37 **** /* data used for compression */ ! static comp_data_t comp_data; static WK_word compresseddata[1200]; --- 33,37 ---- /* data used for compression */ ! static struct comp_alg_data comp_data; static WK_word compresseddata[1200]; *************** *** 81,88 **** static void ! comp_cache_update_comp_stats(stats_page_t * comp_page_stats, struct page * page, int dirty) { ! compression_algorithm_t * algorithm = &compression_algorithms[current_algorithm]; ! stats_summary_t * stats = &(algorithm->stats); /* update compressed size statistics */ --- 81,88 ---- static void ! comp_cache_update_comp_stats(struct stats_page * comp_page_stats, struct page * page, int dirty) { ! struct comp_alg * algorithm = &compression_algorithms[current_algorithm]; ! struct stats_summary * stats = &(algorithm->stats); /* update compressed size statistics */ *************** *** 111,118 **** static void ! comp_cache_update_decomp_stats(unsigned short alg_idx, stats_page_t * comp_page_stats, comp_cache_fragment_t * fragment) { ! compression_algorithm_t * algorithm = &compression_algorithms[alg_idx]; ! stats_summary_t * stats = &(algorithm->stats); /* update decomp cycles statistics */ --- 111,118 ---- static void ! comp_cache_update_decomp_stats(unsigned short alg_idx, struct stats_page * comp_page_stats, struct comp_cache_fragment * fragment) { ! struct comp_alg * algorithm = &compression_algorithms[alg_idx]; ! struct stats_summary * stats = &(algorithm->stats); /* update decomp cycles statistics */ *************** *** 134,138 **** void ! comp_cache_update_writeout_stats(comp_cache_fragment_t * fragment) { #ifdef CONFIG_COMP_PAGE_CACHE --- 134,138 ---- void ! comp_cache_update_writeout_stats(struct comp_cache_fragment * fragment) { #ifdef CONFIG_COMP_PAGE_CACHE *************** *** 145,149 **** void ! comp_cache_update_faultin_stats(comp_cache_fragment_t * fragment) { #ifdef CONFIG_COMP_PAGE_CACHE --- 145,149 ---- void ! comp_cache_update_faultin_stats(struct comp_cache_fragment * fragment) { #ifdef CONFIG_COMP_PAGE_CACHE *************** *** 156,160 **** void ! set_fragment_algorithm(comp_cache_fragment_t * fragment, unsigned short algorithm) { switch (algorithm) { --- 156,160 ---- void ! set_fragment_algorithm(struct comp_cache_fragment * fragment, unsigned short algorithm) { switch (algorithm) { *************** *** 193,197 **** lzo_uint new_len; ! error = lzo1x_decompress((lzo_byte *) from, ((comp_data_t *) page)->compressed_size, (lzo_byte *) to, &new_len, NULL); if (error != LZO_E_OK || new_len != PAGE_SIZE) { --- 193,197 ---- lzo_uint new_len; ! error = lzo1x_decompress((lzo_byte *) from, ((struct comp_alg_data *) page)->compressed_size, (lzo_byte *) to, &new_len, NULL); if (error != LZO_E_OK || new_len != PAGE_SIZE) { *************** *** 204,208 **** compress(struct page * page, void * to, unsigned short * algorithm, int dirty) { ! stats_page_t comp_page_stats; void * from = page_address(page); --- 204,208 ---- compress(struct page * page, void * to, unsigned short * algorithm, int dirty) { ! struct stats_page comp_page_stats; void * from = page_address(page); *************** *** 230,236 **** void ! 
decompress(comp_cache_fragment_t * fragment, struct page * page) { ! stats_page_t comp_page_stats; unsigned int algorithm = WKDM_IDX; void * from = page_address(fragment->comp_page->page) + fragment->offset; --- 230,236 ---- void ! decompress(struct comp_cache_fragment * fragment, struct page * page) { ! struct stats_page comp_page_stats; unsigned int algorithm = WKDM_IDX; void * from = page_address(fragment->comp_page->page) + fragment->offset; *************** *** 267,271 **** for (i = 0; i < NUM_ALGORITHMS; i++) { ! memset((void *) &compression_algorithms[i], 0, sizeof(stats_summary_t)); compression_algorithms[i].stats.comp_size_min = INF; compression_algorithms[i].stats.comp_cycles_min = INF; --- 267,271 ---- for (i = 0; i < NUM_ALGORITHMS; i++) { ! memset((void *) &compression_algorithms[i], 0, sizeof(struct stats_summary)); compression_algorithms[i].stats.comp_size_min = INF; compression_algorithms[i].stats.comp_cycles_min = INF; *************** *** 315,320 **** unsigned long total_comp_pages, total_wout_pages, total_decomp_pages, total_faultin_pages; ! compression_algorithm_t * algorithm = &compression_algorithms[alg_idx]; ! stats_summary_t * stats = &algorithm->stats; total_comp_pages = stats->comp_swap + stats->comp_page; --- 315,320 ---- unsigned long total_comp_pages, total_wout_pages, total_decomp_pages, total_faultin_pages; ! struct comp_alg * algorithm = &compression_algorithms[alg_idx]; ! struct stats_summary * stats = &algorithm->stats; total_comp_pages = stats->comp_swap + stats->comp_page; Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.39 retrieving revision 1.40 diff -C2 -r1.39 -r1.40 *** swapin.c 13 Jun 2002 20:18:34 -0000 1.39 --- swapin.c 19 Jun 2002 12:18:44 -0000 1.40 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-06-12 17:05:28 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-06-19 08:47:06 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 21,25 **** invalidate_comp_cache(struct address_space *mapping, unsigned long offset) { ! comp_cache_fragment_t * fragment; int err = find_comp_page(mapping, offset, &fragment); --- 21,25 ---- invalidate_comp_cache(struct address_space *mapping, unsigned long offset) { ! struct comp_cache_fragment * fragment; int err = find_comp_page(mapping, offset, &fragment); *************** *** 35,39 **** int flush_comp_cache(struct page * page) { ! comp_cache_fragment_t * fragment; int err = -ENOENT; --- 35,39 ---- int flush_comp_cache(struct page * page) { ! struct comp_cache_fragment * fragment; int err = -ENOENT; *************** *** 63,69 **** void ! decompress_fragment(comp_cache_fragment_t * fragment, struct page * page) { ! comp_cache_t * comp_page; if (!fragment) --- 63,69 ---- void ! decompress_fragment(struct comp_cache_fragment * fragment, struct page * page) { ! struct comp_cache_page * comp_page; if (!fragment) *************** *** 89,93 **** read_comp_cache(struct address_space *mapping, unsigned long offset, struct page * page, int access) { ! comp_cache_fragment_t * fragment; int err; --- 89,93 ---- read_comp_cache(struct address_space *mapping, unsigned long offset, struct page * page, int access) { ! struct comp_cache_fragment * fragment; int err; *************** *** 142,149 **** { struct list_head * fragment_lh, * tmp_lh; ! 
comp_cache_fragment_t * fragment; list_for_each_safe(fragment_lh, tmp_lh, list) { ! fragment = list_entry(fragment_lh, comp_cache_fragment_t, mapping_list); if ((fragment->index >= start) || (partial && (fragment->index + 1) == start)) { --- 142,149 ---- { struct list_head * fragment_lh, * tmp_lh; ! struct comp_cache_fragment * fragment; list_for_each_safe(fragment_lh, tmp_lh, list) { ! fragment = list_entry(fragment_lh, struct comp_cache_fragment, mapping_list); if ((fragment->index >= start) || (partial && (fragment->index + 1) == start)) { *************** *** 196,200 **** struct page **hash; struct page * page; ! comp_cache_fragment_t * fragment; if (list_empty(&mapping->dirty_comp_pages)) --- 196,200 ---- struct page **hash; struct page * page; ! struct comp_cache_fragment * fragment; if (list_empty(&mapping->dirty_comp_pages)) *************** *** 208,212 **** goto out_release; ! fragment = list_entry(mapping->dirty_comp_pages.next, comp_cache_fragment_t, mapping_list); hash = page_hash(mapping, fragment->index); --- 208,212 ---- goto out_release; ! fragment = list_entry(mapping->dirty_comp_pages.next, struct comp_cache_fragment, mapping_list); hash = page_hash(mapping, fragment->index); Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.50 retrieving revision 1.51 diff -C2 -r1.50 -r1.51 *** swapout.c 18 Jun 2002 12:47:21 -0000 1.50 --- swapout.c 19 Jun 2002 12:18:44 -0000 1.51 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-06-17 17:39:26 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-06-19 08:47:28 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 29,33 **** struct page * buffer_page; struct swp_buffer * swp_buffer; ! comp_cache_fragment_t * fragment; unsigned int gfp_mask_buffer; int wait, maxscan; --- 29,33 ---- struct page * buffer_page; struct swp_buffer * swp_buffer; ! struct comp_cache_fragment * fragment; unsigned int gfp_mask_buffer; int wait, maxscan; *************** *** 123,127 **** */ static struct swp_buffer * ! find_free_swp_buffer(comp_cache_fragment_t * fragment, unsigned int gfp_mask) { struct page * buffer_page; --- 123,127 ---- */ static struct swp_buffer * ! find_free_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask) { struct page * buffer_page; *************** *** 172,179 **** } ! extern void decompress_fragment(comp_cache_fragment_t *, struct page *); static struct swp_buffer * ! decompress_to_swp_buffer(comp_cache_fragment_t * fragment, unsigned int gfp_mask) { struct page * buffer_page; struct swp_buffer * swp_buffer; --- 172,179 ---- } ! extern void decompress_fragment(struct comp_cache_fragment *, struct page *); static struct swp_buffer * ! decompress_to_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask) { struct page * buffer_page; struct swp_buffer * swp_buffer; *************** *** 210,214 **** struct list_head * fragment_lh, * tmp_lh; int maxscan, nrpages, swap_cache_page; ! comp_cache_fragment_t * fragment; struct swp_buffer * swp_buffer; struct page * page; --- 210,214 ---- struct list_head * fragment_lh, * tmp_lh; int maxscan, nrpages, swap_cache_page; ! struct comp_cache_fragment * fragment; struct swp_buffer * swp_buffer; struct page * page; *************** *** 224,228 **** } ! 
fragment = list_entry(fragment_lh = lru_queue.prev, comp_cache_fragment_t, lru_queue); page = fragment->comp_page->page; --- 224,228 ---- } ! fragment = list_entry(fragment_lh = lru_queue.prev, struct comp_cache_fragment, lru_queue); page = fragment->comp_page->page; *************** *** 314,318 **** for_each_fragment(tmp_lh, fragment->comp_page) { if (tmp_lh != fragment_lh) { ! comp_cache_fragment_t * tmp = list_entry(tmp_lh, comp_cache_fragment_t, list); if (!list_empty(&(tmp->lru_queue))) { remove_fragment_from_lru_queue(tmp); --- 314,318 ---- for_each_fragment(tmp_lh, fragment->comp_page) { if (tmp_lh != fragment_lh) { ! struct comp_cache_fragment * tmp = list_entry(tmp_lh, struct comp_cache_fragment, list); if (!list_empty(&(tmp->lru_queue))) { remove_fragment_from_lru_queue(tmp); *************** *** 356,360 **** } ! extern void add_fragment_vswap(comp_cache_fragment_t *); /*** --- 356,360 ---- } ! extern void add_fragment_vswap(struct comp_cache_fragment *); /*** *************** *** 381,389 **** * * @gfp_mask: we need to know if we can perform IO */ ! comp_cache_t * ! get_comp_cache_page(struct page * page, unsigned short compressed_size, comp_cache_fragment_t ** fragment_out, int dirty, int alloc, unsigned int gfp_mask) { ! comp_cache_t * comp_page = NULL; ! comp_cache_fragment_t * fragment = NULL, * previous_fragment = NULL; struct list_head * fragment_lh; struct page * new_page; --- 381,389 ---- * * @gfp_mask: we need to know if we can perform IO */ ! struct comp_cache_page * ! get_comp_cache_page(struct page * page, unsigned short compressed_size, struct comp_cache_fragment ** fragment_out, int dirty, int alloc, unsigned int gfp_mask) { ! struct comp_cache_page * comp_page = NULL; ! struct comp_cache_fragment * fragment = NULL, * previous_fragment = NULL; struct list_head * fragment_lh; struct page * new_page; *************** *** 521,525 **** #if 0 { ! comp_cache_fragment_t * fout; if (!find_comp_page(page->mapping, page->index, &fout)) { --- 521,525 ---- #if 0 { ! struct comp_cache_fragment * fout; if (!find_comp_page(page->mapping, page->index, &fout)) { *************** *** 542,546 **** /* add the fragment to the comp_page list of fragments */ ! previous_fragment = list_entry(comp_page->fragments.prev, comp_cache_fragment_t, list); if (previous_fragment->offset + previous_fragment->compressed_size == fragment->offset) { --- 542,546 ---- /* add the fragment to the comp_page list of fragments */ ! previous_fragment = list_entry(comp_page->fragments.prev, struct comp_cache_fragment, list); if (previous_fragment->offset + previous_fragment->compressed_size == fragment->offset) { *************** *** 553,559 **** for_each_fragment(fragment_lh, comp_page) { ! comp_cache_fragment_t * aux_fragment; ! aux_fragment = list_entry(fragment_lh, comp_cache_fragment_t, list); if (aux_fragment->offset + aux_fragment->compressed_size > fragment->offset) --- 553,559 ---- for_each_fragment(fragment_lh, comp_page) { ! struct comp_cache_fragment * aux_fragment; ! aux_fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); if (aux_fragment->offset + aux_fragment->compressed_size > fragment->offset) Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.33 retrieving revision 1.34 diff -C2 -r1.33 -r1.34 *** vswap.c 18 Jun 2002 18:04:32 -0000 1.33 --- vswap.c 19 Jun 2002 12:18:44 -0000 1.34 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! 
* Time-stamp: <2002-06-18 14:56:27 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-06-19 08:47:38 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 243,247 **** unsigned long offset = SWP_OFFSET(entry); unsigned int count; ! comp_cache_fragment_t * fragment; struct vswap_address * vswap; struct page * page; --- 243,247 ---- unsigned long offset = SWP_OFFSET(entry); unsigned int count; ! struct comp_cache_fragment * fragment; struct vswap_address * vswap; struct page * page; *************** *** 343,347 **** */ inline void ! remove_fragment_vswap(comp_cache_fragment_t * fragment) { swp_entry_t entry; --- 343,347 ---- */ inline void ! remove_fragment_vswap(struct comp_cache_fragment * fragment) { swp_entry_t entry; *************** *** 384,388 **** */ inline void ! add_fragment_vswap(comp_cache_fragment_t * fragment) { swp_entry_t entry; --- 384,388 ---- */ inline void ! add_fragment_vswap(struct comp_cache_fragment * fragment) { swp_entry_t entry; |
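
For readers following the hash.c hunks above: create_fragment_hash() sizes the table in whole pages. It picks the smallest page order covering the requested size, derives the number of hash bits from the pointer-slot count, and retries at smaller orders if the allocation fails. Below is a minimal userspace sketch of that sizing logic, not the project's code: calloc() stands in for __get_free_pages(), create_hash() is a hypothetical name, and the bit-count loop is reconstructed because the diff elides part of it.

#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

static void **create_hash(unsigned long *size, unsigned int *bits,
                          unsigned int *order)
{
        void **table;
        int ord;

        /* smallest order such that (PAGE_SIZE << ord) covers *size */
        for (ord = 0; (PAGE_SIZE << ord) < *size; ord++)
                ;
        do {
                /* how many hash-bucket pointers fit in this span */
                unsigned long tmp = (PAGE_SIZE << ord) / sizeof(void *);

                /* bits needed to index that many buckets */
                *bits = 0;
                while ((tmp >>= 1UL) > 0)
                        (*bits)++;

                /* calloc models a zeroed __get_free_pages() block */
                table = calloc(1, PAGE_SIZE << ord);
        } while (table == NULL && --ord >= 0);   /* fall back to smaller orders */

        if (table) {
                *order = (unsigned int) ord;
                *size = PAGE_SIZE << ord;
        }
        return table;
}

int main(void)
{
        unsigned long size = 3 * 1000 * sizeof(void *);
        unsigned int bits = 0, order = 0;
        void **table = create_hash(&size, &bits, &order);

        if (!table)
                return 1;
        printf("hash table: %lu bytes (order %u), %u hash bits\n",
               size, order, bits);
        free(table);
        return 0;
}

The fallback loop trades table size for allocation success, which matters here: the cache resizes exactly when the system is already under memory pressure.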
From: Rodrigo S. de C. <rc...@us...> - 2002-06-19 12:18:47
Update of /cvsroot/linuxcompressed/linux/mm
In directory usw-pr-cvs1:/tmp/cvs-serv15026/mm

Modified Files:
	filemap.c memory.c
Log Message:
Cleanups

o Most of the typedefs were removed:
  - comp_cache_t -> struct comp_cache_page
  - comp_cache_fragment_t -> struct comp_cache_fragment
  - stats_summary_t -> struct stats_summary
  - stats_page_t -> struct stats_page
  - compression_algorithm_t -> struct comp_alg
  - comp_data_t -> struct comp_alg_data

Index: filemap.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/filemap.c,v
retrieving revision 1.29
retrieving revision 1.30
diff -C2 -r1.29 -r1.30
*** filemap.c	13 Jun 2002 20:18:31 -0000	1.29
--- filemap.c	19 Jun 2002 12:18:43 -0000	1.30
***************
*** 1031,1035 ****
  	if (!page) {
  		if (!cached_page) {
! 			comp_cache_fragment_t * fragment;
  			if (find_comp_page(mapping, offset, &fragment))
  				goto out;
--- 1031,1035 ----
  	if (!page) {
  		if (!cached_page) {
! 			struct comp_cache_fragment * fragment;
  			if (find_comp_page(mapping, offset, &fragment))
  				goto out;
***************
*** 2091,2095 ****
  	in_comp_cache = 0;
  	{
! 		comp_cache_fragment_t * fragment;
  		if (!find_comp_page(mapping, pgoff, &fragment))
  			in_comp_cache = 1;
--- 2091,2095 ----
  	in_comp_cache = 0;
  	{
! 		struct comp_cache_fragment * fragment;
  		if (!find_comp_page(mapping, pgoff, &fragment))
  			in_comp_cache = 1;

Index: memory.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/memory.c,v
retrieving revision 1.30
retrieving revision 1.31
diff -C2 -r1.30 -r1.31
*** memory.c	11 Jun 2002 13:20:49 -0000	1.30
--- memory.c	19 Jun 2002 12:18:43 -0000	1.31
***************
*** 1133,1137 ****
  	page = lookup_swap_cache(entry);
  	if (!page) {
! 		comp_cache_fragment_t * fragment;
  		/* perform readahead only if the page is on disk */
  		if (find_comp_page(&swapper_space, entry.val, &fragment)) {
--- 1133,1137 ----
  	page = lookup_swap_cache(entry);
  	if (!page) {
! 		struct comp_cache_fragment * fragment;
  		/* perform readahead only if the page is on disk */
  		if (find_comp_page(&swapper_space, entry.val, &fragment)) {
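
The rename is purely mechanical because list_entry() only needs the struct tag and the member name to recover the containing object. A self-contained sketch of the pattern follows; the fields shown are simplified stand-ins for the real comp_cache_fragment layout, not its actual definition.

#include <stddef.h>
#include <stdio.h>

struct list_head {
        struct list_head *next, *prev;
};

/* list_entry() is just container_of(): subtract the member offset */
#define list_entry(ptr, type, member) \
        ((type *) ((char *) (ptr) - offsetof(type, member)))

struct comp_cache_fragment {
        unsigned long index;            /* stand-in: page index        */
        unsigned short compressed_size; /* stand-in: stored byte count */
        struct list_head list;          /* links fragment into a page  */
};

int main(void)
{
        struct comp_cache_fragment f = { .index = 42, .compressed_size = 1024 };
        struct list_head *lh = &f.list;

        /* before the cleanup this read:
         *     list_entry(lh, comp_cache_fragment_t, list)
         * after it, the struct tag is spelled out: */
        struct comp_cache_fragment *fragment =
                list_entry(lh, struct comp_cache_fragment, list);

        printf("index=%lu size=%u\n", fragment->index, fragment->compressed_size);
        return 0;
}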
From: Rodrigo S. de C. <rc...@us...> - 2002-06-18 18:04:36
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache
In directory usw-pr-cvs1:/tmp/cvs-serv25032/mm/comp_cache

Modified Files:
	adaptivity.c vswap.c
Log Message:
Bug fixes:

o Fixed a bug in shrink_comp_cache() which would release a NULL page
o "Fixed" a potential bug when no page can be allocated for ptes
  (vswap) by pre-allocating one page when initializing the compressed
  cache
o Fixed a bug which would cause an oops when fixing memory
  watermarks. All zone_balance_* arrays were defined as __initdata,
  so our function could access them after they had been
  deallocated. The fix simply removes the __initdata option from
  their declaration when the compressed cache is enabled.

Other:

o cleanups in adaptivity.c

Index: adaptivity.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v
retrieving revision 1.23
retrieving revision 1.24
diff -C2 -r1.23 -r1.24
*** adaptivity.c	18 Jun 2002 12:47:21 -0000	1.23
--- adaptivity.c	18 Jun 2002 18:04:31 -0000	1.24
***************
*** 2,6 ****
   * linux/mm/comp_cache/adaptivity.c
   *
!  * Time-stamp: <2002-06-17 17:42:23 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache/adaptivity.c
   *
!  * Time-stamp: <2002-06-18 13:28:03 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 203,208 ****
  	free_pages((unsigned long) fragment_hash, fragment_hash_order);
! 
! 	//printk("FRAGMENT HASH TABLE - resized from %lu to %lu\n", fragment_hash_size, new_fragment_hash_size);
  	fragment_hash = new_fragment_hash;
  	fragment_hash_size = new_fragment_hash_size;
--- 203,210 ----
  	free_pages((unsigned long) fragment_hash, fragment_hash_order);
! 
! #if 0
! 	printk("FRAGMENT HASH TABLE - resized from %lu to %lu\n", fragment_hash_size, new_fragment_hash_size);
! #endif
  	fragment_hash = new_fragment_hash;
  	fragment_hash_size = new_fragment_hash_size;
***************
*** 417,422 ****
  	vfree(vswap_address);
  	vswap_address = new_vswap_address;
! 
! 	//printk("VSWAP - resized from %ld to %ld (copied until %d)\n", vswap_current_num_entries, vswap_new_num_entries, vswap_last_used);
  	vswap_current_num_entries = vswap_new_num_entries;
  	vswap_last_used = vswap_new_num_entries - 1;
--- 419,426 ----
  	vfree(vswap_address);
  	vswap_address = new_vswap_address;
! 
! #if 0
! 	printk("VSWAP - resized from %ld to %ld (copied until %d)\n", vswap_current_num_entries, vswap_new_num_entries, vswap_last_used);
! #endif
  	vswap_current_num_entries = vswap_new_num_entries;
  	vswap_last_used = vswap_new_num_entries - 1;
***************
*** 572,579 ****
  	remove_comp_page_from_hash_table(empty_comp_page);
  	UnlockPage(empty_comp_page->page);
  	set_comp_page(empty_comp_page, NULL);
- 
- 	page_cache_release(empty_comp_page->page);
  	kmem_cache_free(comp_cachep, (empty_comp_page));
  	num_comp_pages--;
--- 576,582 ----
  	remove_comp_page_from_hash_table(empty_comp_page);
  	UnlockPage(empty_comp_page->page);
+ 	page_cache_release(empty_comp_page->page);
  	set_comp_page(empty_comp_page, NULL);
  	kmem_cache_free(comp_cachep, (empty_comp_page));
  	num_comp_pages--;
***************
*** 605,609 ****
  	empty_comp_page = search_comp_page_free_space(PAGE_SIZE);
! 	if (!empty_comp_page)
  		return retval;
--- 608,612 ----
  	empty_comp_page = search_comp_page_free_space(PAGE_SIZE);
! 	if (!empty_comp_page || !empty_comp_page->page)
  		return retval;

Index: vswap.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v
retrieving revision 1.32
retrieving revision 1.33
diff -C2 -r1.32 -r1.33
*** vswap.c	18 Jun 2002 13:04:11 -0000	1.32
--- vswap.c	18 Jun 2002 18:04:32 -0000	1.33
***************
*** 2,6 ****
   * linux/mm/comp_cache/vswap.c
   *
!  * Time-stamp: <2002-06-18 09:51:31 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache/vswap.c
   *
!  * Time-stamp: <2002-06-18 14:56:27 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 576,580 ****
  	unsigned long offset;
  	struct pte_list * pte_list;
! 	
  	if (!vswap_address(entry))
  		return;
--- 576,580 ----
  	unsigned long offset;
  	struct pte_list * pte_list;
! 
  	if (!vswap_address(entry))
  		return;
***************
*** 692,695 ****
--- 692,700 ----
  	for (i = 0; i < NUM_MEAN_PAGES; i++)
  		last_page_size[i] = PAGE_SIZE/2;
+ 
+ 	/* alloc only one page right now to avoid problems when
+ 	 * starting using virtual swap address (usually under high
+ 	 * memory pressure) */
+ 	alloc_new_pte_lists();
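
The shrink_comp_cache() hunk above is an ordering fix: the old code cleared the comp_page's page pointer and only then tried to release it, handing page_cache_release() a NULL page. Here is a toy, self-contained model of the corrected ordering; a plain counter stands in for the kernel's page reference count, and release_comp_page() is a hypothetical helper, not a function from the patch.

#include <assert.h>
#include <stdio.h>

/* toy page with a reference count */
struct page {
        int count;
};

/* stand-in for the kernel's page_cache_release(): drop one reference */
static void page_cache_release(struct page *page)
{
        assert(page != NULL);   /* the old code tripped exactly here */
        page->count--;
}

static void release_comp_page(struct page **slot)
{
        /* Buggy order from before the fix:
         *     *slot = NULL;
         *     page_cache_release(*slot);   -- releases a NULL page
         *
         * Fixed order: drop the reference while the pointer is still
         * valid, then clear the slot. */
        page_cache_release(*slot);
        *slot = NULL;
}

int main(void)
{
        struct page p = { .count = 1 };
        struct page *slot = &p;

        release_comp_page(&slot);
        printf("count=%d slot=%p\n", p.count, (void *) slot);
        return 0;
}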
From: Rodrigo S. de C. <rc...@us...> - 2002-06-18 18:04:35
Update of /cvsroot/linuxcompressed/linux/mm
In directory usw-pr-cvs1:/tmp/cvs-serv25032/mm

Modified Files:
	page_alloc.c
Log Message:
Bug fixes:

o Fixed a bug in shrink_comp_cache() which would release a NULL page
o "Fixed" a potential bug when no page can be allocated for ptes
  (vswap) by pre-allocating one page when initializing the compressed
  cache
o Fixed a bug which would cause an oops when fixing memory
  watermarks. All zone_balance_* arrays were defined as __initdata,
  so our function could access them after they had been
  deallocated. The fix simply removes the __initdata option from
  their declaration when the compressed cache is enabled.

Other:

o cleanups in adaptivity.c

Index: page_alloc.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/page_alloc.c,v
retrieving revision 1.20
retrieving revision 1.21
diff -C2 -r1.20 -r1.21
*** page_alloc.c	18 Jun 2002 12:47:21 -0000	1.20
--- page_alloc.c	18 Jun 2002 18:04:31 -0000	1.21
***************
*** 29,35 ****
--- 29,41 ----
  static char *zone_names[MAX_NR_ZONES] = { "DMA", "Normal", "HighMem" };
+ #ifdef CONFIG_COMP_CACHE
+ static int zone_balance_ratio[MAX_NR_ZONES] = { 128, 128, 128, };
+ static int zone_balance_min[MAX_NR_ZONES] = { 20 , 20, 20, };
+ static int zone_balance_max[MAX_NR_ZONES] = { 255 , 255, 255, };
+ #else
  static int zone_balance_ratio[MAX_NR_ZONES] __initdata = { 128, 128, 128, };
  static int zone_balance_min[MAX_NR_ZONES] __initdata = { 20 , 20, 20, };
  static int zone_balance_max[MAX_NR_ZONES] __initdata = { 255 , 255, 255, };
+ #endif
  
  /*
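
The zone_balance_* oops happens because __initdata objects live in a section the kernel discards once boot finishes, while the compressed-cache watermark code runs later. A rough userspace analogy follows; malloc()/free() stand in for the lifetime of the .init.data section, and zone_balance_min_model is a hypothetical name, not the kernel array.

#include <stdio.h>
#include <stdlib.h>

static int *zone_balance_min_model;   /* points into "init-only" data */

static void boot_init(void)
{
        /* simulate an __initdata array that exists only during boot */
        int *initdata = malloc(3 * sizeof(int));

        if (!initdata)
                return;
        initdata[0] = initdata[1] = initdata[2] = 20;
        zone_balance_min_model = initdata;

        /* boot ends: the kernel frees .init.data, modeled by free().
         * In the kernel nothing NULLs the pointer -- it silently
         * dangles, and a later read is the oops this commit fixes. */
        free(initdata);
        zone_balance_min_model = NULL;
}

int main(void)
{
        boot_init();
        if (!zone_balance_min_model)
                printf("init data is gone after boot; drop __initdata "
                       "when the data must outlive initialization\n");
        return 0;
}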
From: Rodrigo S. de C. <rc...@us...> - 2002-06-18 13:39:37
Update of /cvsroot/linuxcompressed/linux/mm
In directory usw-pr-cvs1:/tmp/cvs-serv27511/mm

Modified Files:
	vmscan.c
Log Message:
Bug fix:

o Fixed a bug that would freeze a system where the compressed cache
  is not enabled. The return value of compress_clean_page() was
  wrong, making the shrink_cache() function not free any pages. The
  return value was fixed, but to avoid overhead when the compressed
  cache is disabled, that part of the code in vmscan.c is now also
  guarded by "#ifdef CONFIG_COMP_CACHE".

Index: vmscan.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/vmscan.c,v
retrieving revision 1.34
retrieving revision 1.35
diff -C2 -r1.34 -r1.35
*** vmscan.c	11 Jun 2002 13:20:49 -0000	1.34
--- vmscan.c	18 Jun 2002 13:39:33 -0000	1.35
***************
*** 518,521 ****
--- 518,522 ----
  	}
  
+ #ifdef CONFIG_COMP_CACHE
  	/***
  	 * compress the page if it's a clean page that has not
***************
*** 536,539 ****
--- 537,541 ----
  		spin_lock(&pagemap_lru_lock);
  	}
+ #endif
  
  	/* point of no return */
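
The bug lives in the stub, not in the real implementation: when CONFIG_COMP_CACHE is off, the inline compress_clean_page() must return the value that tells shrink_cache() to keep freeing the page. The exact kernel convention is only implied by the log message, so the toy model below simply assumes nonzero means "page not consumed, go ahead and free"; compress_clean_page_model() is a hypothetical stand-in.

#include <stdio.h>

#ifdef CONFIG_COMP_CACHE
int compress_clean_page_model(int page);   /* real implementation */
#else
/* stub: with the cache disabled, every page must remain freeable */
static inline int compress_clean_page_model(int page)
{
        (void) page;
        return 1;       /* was 0, which made the caller skip every page */
}
#endif

int main(void)
{
        int freed = 0, page;

        /* shrink_cache()-like caller: frees only when the hook says so */
        for (page = 0; page < 8; page++)
                if (compress_clean_page_model(page))
                        freed++;

        printf("freed %d of 8 pages\n", freed);
        return 0;
}

With the old stub returning 0, the loop above frees nothing, which is exactly the "no progress under memory pressure" freeze the commit describes.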
From: Rodrigo S. de C. <rc...@us...> - 2002-06-18 13:39:37
Update of /cvsroot/linuxcompressed/linux/include/linux
In directory usw-pr-cvs1:/tmp/cvs-serv27511/include/linux

Modified Files:
	comp_cache.h
Log Message:
Bug fix:

o Fixed a bug that would freeze a system where the compressed cache
  is not enabled. The return value of compress_clean_page() was
  wrong, making the shrink_cache() function not free any pages. The
  return value was fixed, but to avoid overhead when the compressed
  cache is disabled, that part of the code in vmscan.c is now also
  guarded by "#ifdef CONFIG_COMP_CACHE".

Index: comp_cache.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v
retrieving revision 1.76
retrieving revision 1.77
diff -C2 -r1.76 -r1.77
*** comp_cache.h	18 Jun 2002 13:04:11 -0000	1.76
--- comp_cache.h	18 Jun 2002 13:39:33 -0000	1.77
***************
*** 2,6 ****
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-06-18 09:47:23 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-06-18 10:16:07 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 357,361 ****
  static inline void comp_cache_init(void) {};
  static inline int compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask) { return writepage(page); }
! static inline int compress_clean_page(struct page * page, unsigned int gfp_mask) { return 0; }
  
  #define add_swap_miss() (0)
--- 357,361 ----
  static inline void comp_cache_init(void) {};
  static inline int compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask) { return writepage(page); }
! static inline int compress_clean_page(struct page * page, unsigned int gfp_mask) { return 1; }
  
  #define add_swap_miss() (0)
From: Rodrigo S. de C. <rc...@us...> - 2002-06-18 13:04:14
Update of /cvsroot/linuxcompressed/linux/include/linux
In directory usw-pr-cvs1:/tmp/cvs-serv17972/include/linux

Modified Files:
	comp_cache.h
Log Message:
Bug fixes

o Fixed a potential bug that would panic if the vswap table could
  not be allocated.

Other

o Updated version from 0.23pre6 to 0.23pre7

Index: comp_cache.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v
retrieving revision 1.75
retrieving revision 1.76
diff -C2 -r1.75 -r1.76
*** comp_cache.h	18 Jun 2002 12:47:21 -0000	1.75
--- comp_cache.h	18 Jun 2002 13:04:11 -0000	1.76
***************
*** 2,6 ****
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-06-17 17:39:43 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache.h
   *
!  * Time-stamp: <2002-06-18 09:47:23 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 29,33 ****
  #include <linux/WKcommon.h>
  
! #define COMP_CACHE_VERSION "0.23pre6"
  
  /* maximum compressed size of a page */
--- 29,33 ----
  #include <linux/WKcommon.h>
  
! #define COMP_CACHE_VERSION "0.23pre7"
  
  /* maximum compressed size of a page */
From: Rodrigo S. de C. <rc...@us...> - 2002-06-18 13:04:14
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache
In directory usw-pr-cvs1:/tmp/cvs-serv17972/mm/comp_cache

Modified Files:
	vswap.c
Log Message:
Bug fixes

o Fixed a potential bug that would panic if the vswap table could
  not be allocated.

Other

o Updated version from 0.23pre6 to 0.23pre7

Index: vswap.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v
retrieving revision 1.31
retrieving revision 1.32
diff -C2 -r1.31 -r1.32
*** vswap.c	18 Jun 2002 12:47:21 -0000	1.31
--- vswap.c	18 Jun 2002 13:04:11 -0000	1.32
***************
*** 2,6 ****
   * linux/mm/comp_cache/vswap.c
   *
!  * Time-stamp: <2002-06-17 17:52:54 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
--- 2,6 ----
   * linux/mm/comp_cache/vswap.c
   *
!  * Time-stamp: <2002-06-18 09:51:31 rcastro>
   *
   * Linux Virtual Memory Compressed Cache
***************
*** 54,58 ****
  unsigned short last_page = 0;
  
! static void
  comp_cache_vswap_alloc(void)
  {
--- 54,58 ----
  unsigned short last_page = 0;
  
! static int
  comp_cache_vswap_alloc(void)
  {
***************
*** 64,68 ****
  
  	if (!vswap_address)
! 		panic("comp_cache_vswap_init(): cannot allocate vswap_address");
  
  	vswap_current_num_entries = NUM_VSWAP_ENTRIES;
--- 64,68 ----
  
  	if (!vswap_address)
! 		return 0;
  
  	vswap_current_num_entries = NUM_VSWAP_ENTRIES;
***************
*** 73,76 ****
--- 73,78 ----
  	for (i = 0; i < NUM_VSWAP_ENTRIES; i++)
  		vswap_alloc_and_init(vswap_address, i);
+ 
+ 	return 1;
  }
***************
*** 159,164 ****
  	entry.val = 0;
  
! 	if (!vswap_address)
! 		comp_cache_vswap_alloc();
  
  	if (!comp_cache_available_vswap())
--- 161,166 ----
  	entry.val = 0;
  
! 	if (!vswap_address && !comp_cache_vswap_alloc())
! 		return entry;
  
  	if (!comp_cache_available_vswap())
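
The shape of this fix: the allocator reports failure instead of calling panic(), and the caller backs out with an empty entry. A compact, self-contained sketch of that pattern follows; the names vswap_table_alloc() and get_vswap_entry() and the simplified types are hypothetical stand-ins, not the project's code.

#include <stdio.h>
#include <stdlib.h>

struct vswap_entry { unsigned long val; };

static void *vswap_table;

/* was 'static void' plus panic(); now reports failure with 0 */
static int vswap_table_alloc(size_t bytes)
{
        vswap_table = malloc(bytes);
        if (!vswap_table)
                return 0;
        /* ... initialize the entries here ... */
        return 1;
}

static struct vswap_entry get_vswap_entry(void)
{
        struct vswap_entry entry = { .val = 0 };

        /* mirror of the patched call site: on allocation failure,
         * hand back the empty entry instead of taking the system down */
        if (!vswap_table && !vswap_table_alloc(1 << 20))
                return entry;

        entry.val = 42;         /* stand-in for a real vswap allocation */
        return entry;
}

int main(void)
{
        struct vswap_entry e = get_vswap_entry();

        printf("entry.val = %lu\n", e.val);
        free(vswap_table);
        return 0;
}

Returning an empty entry fits the existing convention in this code, where entry.val == 0 already signals "no vswap address available" to callers.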