Thread: [lc-checkins] CVS: linux/mm/comp_cache aux.c,1.10,1.11 free.c,1.14,1.15 main.c,1.16,1.17 swapin.c,1.
From: Rodrigo S. de C. <rc...@us...> - 2002-01-10 12:39:34
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache
In directory usw-pr-cvs1:/tmp/cvs-serv10033/mm/comp_cache

Modified Files:
	aux.c free.c main.c swapin.c swapout.c vswap.c

Log Message:
This batch of changes includes cleanups and code rewrites. swap_out_fragments() and find_free_swp_buffer() are now much simpler and also more efficient; some preliminary tests showed a performance gain due to these improvements.

- Fragment Freed Bit removed: the current code has been checked, and the few parts of the code where we could sleep with a comp page in locked state were rewritten (mainly the swap-out path). It turns out that we now never free a fragment without locking its page, so we no longer need a bit to mark that a fragment still has to be freed. All the special cases for the Freed bit all over the code were removed.

- Fragment SwapBuffer Bit removed: the swap buffer code changed a lot. We no longer add a "virtual" fragment to the comp cache hash table to block access to fragments that may be in the middle of swap out, so this bit is not needed any more.

- Fragment IO Bit added: instead of adding a "virtual" fragment to the comp cache hash table, we no longer free the fragment being swapped out right after the IO function (rw_swap_page) has been called; we free it only when the IO has finished. Since the IO takes time, the fragment may be freed in the meanwhile (for example because it has been swapped in), so we have to tell comp_cache_free() that this fragment is being written to disk and that its struct must not be freed. comp_cache_free() also clears this bit, so the IO is skipped if it has not yet been submitted; if it has already been submitted, we do not have to free the fragment itself, only its struct.

- find_free_swp_buffer() was completely rewritten. There are now two lists linking all the swap buffer pages: used and free. The comp page field in swp_buffer_t was removed, and no array of swap buffers is needed, since they are linked through these lists (a first step towards dynamic swap buffers, as listed on the todo list). Behaviour-wise, there is no longer a variable counting the free swap buffers. Once all the buffers are in use, we try to move all the unlocked buffers to the free list at once; if we cannot move even one, we wait for one page to have its IO finished and take that swap buffer. The old code did not perform this task as efficiently: every time we needed a free buffer, it checked all the buffers to see whether they were locked. With a used list we avoid that kind of overhead.

- swap_out_fragments() was rewritten too. Part of its code was moved to the new decompress_to_swp_buffer() function. Since we no longer have to worry about the Freed bit, the code is much simpler.

- the find_*_comp_cache() functions no longer have special cases for Freed fragments.

- since there is no number_of_free_swp_buffers variable any more and we do not hold a reference to the swap entry in swap_out_fragments(), there is no need to update swap buffers in end_buffer_io_async().

Two simplified, user-space sketches of the IO-bit handshake and of the new swap buffer lists follow the diff below.

Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.10 retrieving revision 1.11 diff -C2 -r1.10 -r1.11 *** aux.c 2002/01/07 17:48:29 1.10 --- aux.c 2002/01/10 12:39:31 1.11 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-01-07 14:44:05 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * !
* Time-stamp: <2002-01-08 16:33:30 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 172,180 **** return; ! list_del(&(fragment->lru_queue)); } ! comp_cache_t * ! __find_nolock_comp_page(swp_entry_t entry, comp_cache_fragment_t ** fragment_out) { comp_cache_fragment_t * fragment; --- 172,180 ---- return; ! list_del_init(&(fragment->lru_queue)); } ! inline comp_cache_t * ! find_nolock_comp_page(swp_entry_t entry, comp_cache_fragment_t ** fragment_out) { comp_cache_fragment_t * fragment; *************** *** 183,210 **** for (fragment = fragment_hash[fragment_hashfn(entry)]; fragment != NULL; fragment = fragment->next_hash) { ! if (fragment->index == entry.val && !CompFragmentFreed(fragment)) { *fragment_out = fragment; return (fragment->comp_page); } } - return NULL; - } - - inline comp_cache_t * - find_nolock_comp_page(swp_entry_t entry, comp_cache_fragment_t ** fragment_out) - { - comp_cache_t * comp_page; ! comp_page = __find_nolock_comp_page(entry, fragment_out); ! ! if (!comp_page) ! return NULL; ! ! if (CompFragmentSwapBuffer(*fragment_out)) { ! *fragment_out = NULL; ! return NULL; ! } ! ! return comp_page; } --- 183,193 ---- for (fragment = fragment_hash[fragment_hashfn(entry)]; fragment != NULL; fragment = fragment->next_hash) { ! if (fragment->index == entry.val) { *fragment_out = fragment; return (fragment->comp_page); } } ! return NULL; } *************** *** 221,225 **** repeat: ! comp_page = __find_nolock_comp_page(entry, fragment_out); if (comp_page) { page = comp_page->page; --- 204,208 ---- repeat: ! comp_page = find_nolock_comp_page(entry, fragment_out); if (comp_page) { page = comp_page->page; *************** *** 235,252 **** fragment = list_entry(fragment_lh, comp_cache_fragment_t, list); ! /* if it is a SwapBuffer fragment, there will ! * be only one, so we can return, once it can ! * have been used to another entry and ! * therefore the for loop may loop forever */ ! if (CompFragmentSwapBuffer(fragment)) { ! UnlockPage(page); ! page_cache_release(page); ! return NULL; ! } ! ! if (fragment->index == entry.val && !CompFragmentFreed(fragment)) { ! if (aux_fragment) ! BUG(); aux_fragment = fragment; } } --- 218,224 ---- fragment = list_entry(fragment_lh, comp_cache_fragment_t, list); ! if (fragment->index == entry.val) { aux_fragment = fragment; + break; } } *************** *** 361,369 **** BUG(); ! if (aux_fragment->index == fragment->index) { ! if (CompFragmentFreed(aux_fragment) || CompFragmentFreed(fragment)) ! continue; BUG(); - } if (aux_fragment->offset < fragment->offset) { --- 333,338 ---- BUG(); ! if (aux_fragment->index == fragment->index) BUG(); if (aux_fragment->offset < fragment->offset) { Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.14 retrieving revision 1.15 diff -C2 -r1.14 -r1.15 *** free.c 2002/01/07 17:48:29 1.14 --- free.c 2002/01/10 12:39:31 1.15 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-01-07 12:39:59 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! 
* Time-stamp: <2002-01-09 18:08:49 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 38,42 **** struct list_head * fragment_lh, * fragment_to_free_lh; unsigned short offset_from, offset_to, size_to_move; - int freed = 0; if (!comp_page) --- 38,41 ---- *************** *** 55,61 **** BUG(); - if (CompFragmentSwapBuffer(fragment_to_free)) - BUG(); - if (not_compressed(fragment_to_free) && comp_page->free_space) BUG(); --- 54,57 ---- *************** *** 63,69 **** //check_all_fragments(comp_page); - if (CompFragmentFreed(fragment_to_free)) - freed = 1; - next_fragment = NULL; previous_fragment = NULL; --- 59,62 ---- *************** *** 156,161 **** remove_fragment_from_lru_queue(fragment_to_free); - kmem_cache_free(fragment_cachep, (fragment_to_free)); - if (!comp_page->number_of_pages) BUG(); --- 149,152 ---- *************** *** 163,166 **** --- 154,167 ---- comp_page->free_space += fragment_to_free->compressed_size; comp_page->number_of_pages--; + + /* is this fragment waiting for swap out? let's not free it + * now, but let's tell swap out path that it does not need IO + * anymore because it has been freed (maybe due to swapin) */ + if (CompFragmentIO(fragment_to_free)) { + CompFragmentClearIO(fragment_to_free); + return; + } + + kmem_cache_free(fragment_cachep, (fragment_to_free)); } *************** *** 211,224 **** if (comp_page) { ! if (!TryLockPage(comp_page->page)) { ! comp_cache_free(fragment); ! goto assign_address; ! } ! ! if (CompFragmentTestandSetFreed(fragment)) BUG(); } - - assign_address: /* no virtual swap entry with a compressed page */ --- 212,220 ---- if (comp_page) { ! if (TryLockPage(comp_page->page)) BUG(); + + comp_cache_free(fragment); } /* no virtual swap entry with a compressed page */ Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.16 retrieving revision 1.17 diff -C2 -r1.16 -r1.17 *** main.c 2002/01/07 17:48:29 1.16 --- main.c 2002/01/10 12:39:31 1.17 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-01-07 11:44:08 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-01-07 16:08:23 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 182,186 **** min_num_comp_pages = 0; ! printk("Starting compressed cache v0.21pre4 (%lu pages = %luk)\n", max_num_comp_pages, (max_num_comp_pages * PAGE_SIZE)/1024); /* initialize our data for the `test' compressed_page */ --- 182,186 ---- min_num_comp_pages = 0; ! printk("Starting compressed cache v0.21pre5 (%lu pages = %luk)\n", max_num_comp_pages, (max_num_comp_pages * PAGE_SIZE)/1024); /* initialize our data for the `test' compressed_page */ Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.12 retrieving revision 1.13 diff -C2 -r1.12 -r1.13 *** swapin.c 2002/01/04 22:24:07 1.12 --- swapin.c 2002/01/10 12:39:31 1.13 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-01-04 11:44:53 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! 
* Time-stamp: <2002-01-08 11:27:02 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 61,69 **** comp_cache_t * comp_page = fragment->comp_page; - if (CompFragmentFreed(fragment)) { - page_cache_release(uncompressed_page); - return NULL; - } - if (!PageLocked(uncompressed_page)) BUG(); --- 61,64 ---- Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.15 retrieving revision 1.16 diff -C2 -r1.15 -r1.16 *** swapout.c 2002/01/07 17:48:29 1.15 --- swapout.c 2002/01/10 12:39:31 1.16 *************** *** 1,6 **** /* ! * linux/mm/comp_cache/swapout.c * ! * Time-stamp: <2002-01-07 15:33:19 rcastro> * * Linux Virtual Memory Compressed Cache --- 1,6 ---- /* ! * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-01-09 19:05:54 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 21,42 **** /* swap buffer */ ! struct list_head swp_free_buffer_head; ! struct list_head swp_buffer_head; ! struct swp_buffer ** swp_buffer; ! atomic_t number_of_free_swp_buffers; ! static inline void ! free_swp_buffer(struct swp_buffer * swp_buffer) ! { ! comp_cache_fragment_t * fragment; ! ! UnlockPage((swp_buffer)->comp_page->page); ! fragment = list_entry(swp_buffer->comp_page->fragments.next, comp_cache_fragment_t, list); ! list_add(&(fragment->list), &swp_free_buffer_head); ! atomic_inc(&number_of_free_swp_buffers); ! } ! ! #define swp_buffer_not_used(page, fragment) (!PageLocked(page) && list_empty(&fragment->list) && (page_count(page) == 1 + !!page->buffers)) /** --- 21,31 ---- /* swap buffer */ ! struct list_head swp_free_buffer_head, swp_used_buffer_head; ! #define swp_buffer_freed(swp_buffer) \ ! (!PageLocked(swp_buffer->page) && list_empty(&swp_buffer->free_list)) ! #define swp_buffer_used(swp_buffer) \ ! (page_count(swp_buffer->page) > 2 + !!swp_buffer->page->buffers) /** *************** *** 46,133 **** * - return value: pointer to the page which will be returned locked */ static struct swp_buffer * ! find_free_swp_buffer(void) { ! static struct list_head * cur_swp_entry = &swp_buffer_head; ! struct page * buffer_page; ! struct list_head * fragment_lh; ! struct swp_buffer * aux_buffer; ! comp_cache_fragment_t * fragment; ! int i; - /* all swap out buffers are locked for asynchronous write? - * let's wait one of them finish. It is _not_ worth to have - * more buffers in order to avoid waiting for the page lock at - * this moment since we are gonna stall at rw_swap_page_base() - * waiting for the page IO completion anyway. */ if (!list_empty(&swp_free_buffer_head)) goto get_a_page; - - if (!atomic_read(&number_of_free_swp_buffers)) - goto wait_page; ! for (i = 0; i < NUM_SWP_BUFFERS; i++) { ! struct page * page; ! page = swp_buffer[i]->comp_page->page; ! ! fragment_lh = swp_buffer[i]->comp_page->fragments.next; ! fragment = list_entry(fragment_lh, comp_cache_fragment_t, list); ! if (swp_buffer_not_used(page, fragment)) { ! /* remove the fragment from comp_page */ ! remove_fragment_from_hash_table(fragment); ! ! /* add this fragment to free swap buffers list */ ! list_add(&(fragment->list), &swp_free_buffer_head); } } ! if (list_empty(&swp_free_buffer_head)) ! goto wait_page; get_a_page: ! /* list field of struct page is used to implement our free ! * swap buffer page list. To add the page back (when IO is ! * finished), we only need the struct page pointer and ! * swp_buffer_head in order to call list_add() */ ! 
fragment = list_entry(fragment_lh = swp_free_buffer_head.next, comp_cache_fragment_t, list); ! aux_buffer = (struct swp_buffer *) &(fragment->comp_page); ! buffer_page = aux_buffer->comp_page->page; if (TryLockPage(buffer_page)) BUG(); ! out: ! atomic_dec(&number_of_free_swp_buffers); ! /* let's remove this page from free swap buffer pages list */ ! list_del_init(fragment_lh); ! if (!list_empty(fragment_lh)) ! BUG(); ! return (aux_buffer); ! wait_page: ! cur_swp_entry = cur_swp_entry->next; ! if (cur_swp_entry == &swp_buffer_head) ! cur_swp_entry = swp_buffer_head.next; ! ! aux_buffer = list_entry(cur_swp_entry, struct swp_buffer, list); ! fragment_lh = aux_buffer->comp_page->fragments.next; ! fragment = list_entry(fragment_lh, comp_cache_fragment_t, list); ! buffer_page = aux_buffer->comp_page->page; ! lock_page(buffer_page); ! if (list_empty(&fragment->list)) { ! remove_fragment_from_hash_table(fragment); ! list_add(&(fragment->list), &swp_free_buffer_head); ! } ! goto out; } --- 35,153 ---- * - return value: pointer to the page which will be returned locked */ static struct swp_buffer * ! find_free_swp_buffer(comp_cache_fragment_t * fragment) { ! struct page * buffer_page, * page; ! struct list_head * swp_buffer_lh, * tmp_lh; ! struct swp_buffer * swp_buffer; ! int wait; ! ! CompFragmentSetIO(fragment); if (!list_empty(&swp_free_buffer_head)) goto get_a_page; ! wait = 0; ! try_again: ! list_for_each_safe(swp_buffer_lh, tmp_lh, &swp_used_buffer_head) { ! swp_buffer = list_entry(swp_buffer_lh, struct swp_buffer, list); ! if (PageLocked(swp_buffer->page)) { ! if (!wait) ! continue; ! list_del_init(swp_buffer_lh); ! wait_on_page(swp_buffer->page); ! } ! /* has the fragment we are swapping out been swapped ! * in? so let's free only the fragment struct */ ! if (!CompFragmentIO(swp_buffer->fragment)) { ! kmem_cache_free(fragment_cachep, (swp_buffer->fragment)); ! goto out; } + + /* it's not swapped out, so let' free it */ + page = swp_buffer->fragment->comp_page->page; + + if (TryLockPage(page)) + BUG(); + + CompFragmentClearIO(swp_buffer->fragment); + comp_cache_free(swp_buffer->fragment); + + out: + list_del(swp_buffer_lh); + list_add_tail(swp_buffer_lh, &swp_free_buffer_head); + + /* there's no need to swap out the original + * fragment any longer? so, let's forget it */ + if (!CompFragmentIO(fragment)) + return NULL; + + if (wait) + goto get_a_page; } ! /* couldn't free any swap buffer? so let's IO to finish */ ! if (list_empty(&swp_free_buffer_head)) { ! wait = 1; ! goto try_again; ! } get_a_page: ! swp_buffer = list_entry(swp_buffer_lh = swp_free_buffer_head.next, struct swp_buffer, list); ! buffer_page = swp_buffer->page; if (TryLockPage(buffer_page)) BUG(); + + list_del(swp_buffer_lh); + list_add_tail(swp_buffer_lh, &swp_used_buffer_head); ! buffer_page->index = fragment->index; ! swp_buffer->fragment = fragment; ! return (swp_buffer); ! } ! extern void decompress_page(comp_cache_fragment_t *, struct page *); ! static inline struct swp_buffer * decompress_to_swp_buffer(comp_cache_fragment_t * fragment) { ! struct page * buffer_page; ! struct swp_buffer * swp_buffer; ! swp_entry_t entry; ! entry = (swp_entry_t) { fragment->index }; ! swp_buffer = find_free_swp_buffer(fragment); ! /* no need for IO any longer */ ! if (!swp_buffer) ! return NULL; ! ! buffer_page = swp_buffer->page; ! if (!buffer_page) ! BUG(); ! ! if (TryLockPage(fragment->comp_page->page)) ! BUG(); ! #ifdef CONFIG_COMP_SWAP ! 
memcpy(page_address(buffer_page), page_address(fragment->comp_page->page) + fragment->offset, fragment->compressed_size); ! set_comp_swp_entry(entry, compressed(fragment), fragment_algorithm(fragment)); ! ! if (compressed(fragment) != swap_compressed(entry)) ! BUG(); ! ! if (swap_compressed(entry) && fragment_algorithm(fragment) != swap_algorithm(entry)) ! BUG(); ! #else ! decompress_page(fragment, buffer_page); ! #endif ! ! UnlockPage(fragment->comp_page->page); ! return swp_buffer; } *************** *** 137,142 **** extern struct address_space swapper_space; - extern void decompress_page(comp_cache_fragment_t *, struct page *); - /** * - swap_out_fragment - swap out some pages in the lru order until we --- 157,160 ---- *************** *** 147,154 **** struct list_head * fragment_lh, * next_fragment; int maxscan; ! comp_cache_fragment_t * fragment, * aux_fragment; ! comp_cache_t * comp_page = NULL; struct swp_buffer * swp_buffer; ! struct page * buffer_page, * page; swp_entry_t entry; --- 165,171 ---- struct list_head * fragment_lh, * next_fragment; int maxscan; ! comp_cache_fragment_t * fragment; struct swp_buffer * swp_buffer; ! struct page * page; swp_entry_t entry; *************** *** 161,190 **** entry.val = fragment->index; ! comp_page = fragment->comp_page; ! page = comp_page->page; - /* avoid to free this page in locked state (like what - * can be done in shrink_comp_cache) */ - page_cache_get(page); - - if (CompFragmentFreed(fragment)) { - if (!TryLockPage(page)) - comp_cache_free(fragment); - page_cache_release(page); - maxscan++; - continue; - } - if (vswap_address(entry)) BUG(); - if (!comp_page) - BUG(); - - /* this avoids problems if the swap entry is freed in - * middle of rw_swap_page(). This reference to the - * swap entry is released in end_buffer_io_async */ - swap_duplicate(entry); - /* page locked? move it to the back of the list */ if (TryLockPage(page)) { --- 178,186 ---- entry.val = fragment->index; ! page = fragment->comp_page->page; if (vswap_address(entry)) BUG(); /* page locked? move it to the back of the list */ if (TryLockPage(page)) { *************** *** 192,252 **** list_add(fragment_lh, &lru_queue); maxscan++; ! goto freed; } - - swp_buffer = find_free_swp_buffer(); - buffer_page = swp_buffer->comp_page->page; ! if (!buffer_page) ! BUG(); ! /* race: this is not supposed to happen unless we ! * sleep to lock the page in find_free_swp_buffer() */ ! if (CompFragmentFreed(fragment)) { ! free_swp_buffer(swp_buffer); ! comp_cache_free(fragment); ! goto freed; ! } ! #ifdef CONFIG_COMP_SWAP ! memcpy(page_address(buffer_page), page_address(comp_page->page) + fragment->offset, fragment->compressed_size); ! set_comp_swp_entry(entry, compressed(fragment), fragment_algorithm(fragment)); ! ! if (compressed(fragment) != swap_compressed(entry)) ! BUG(); ! ! if (swap_compressed(entry) && fragment_algorithm(fragment) != swap_algorithm(entry)) ! BUG(); ! #else ! decompress_page(fragment, buffer_page); ! #endif ! ! /* adding this aux_fragment to hash table implies that ! * the page will be found by our find* functions. In ! * particular, any function that tries to lock it will ! * sleep until the lock on this page is released. Even ! * though this page will not be returned by any ! * function, the function will only return when the ! * page is unlocked, ie the IO is over and it's safe ! * to the kernel to read the data from disk.*/ ! aux_fragment = list_entry(swp_buffer->comp_page->fragments.next, comp_cache_fragment_t, list); ! aux_fragment->index = entry.val; ! 
add_fragment_to_hash_table(aux_fragment); ! comp_cache_free(fragment); ! ! /* to fake the check present in rw_swap_page, the same ! * way is done in rw_swap_page_nolock() */ ! buffer_page->index = entry.val; ! rw_swap_page(WRITE, buffer_page); ! page_cache_release(page); ! continue; ! ! freed: swap_free(entry); - page_cache_release(page); - continue; } } --- 188,211 ---- list_add(fragment_lh, &lru_queue); maxscan++; ! continue; } ! remove_fragment_from_lru_queue(fragment); ! UnlockPage(page); ! /* avoid to free this entry if we sleep in the ! * function below */ ! swap_duplicate(entry); ! swp_buffer = decompress_to_swp_buffer(fragment); ! /* no need for IO */ ! if (!swp_buffer) ! goto out; ! rw_swap_page(WRITE, swp_buffer->page); ! out: swap_free(entry); } } *************** *** 307,318 **** } ! /* the page is locked, forget about it */ ! if (TryLockPage(comp_page->page)) { ! /* let's try the following page that has ! * free_space bigger than what we need */ ! if (comp_page->free_space < PAGE_SIZE) aux_comp_size = comp_page->free_space + 1; ! continue; } /* remove from free space hash table before update */ --- 266,282 ---- } ! aux_comp_size = 0; ! ! while (comp_page && TryLockPage(comp_page->page)) { ! if (aux_comp_size < comp_page->free_space) aux_comp_size = comp_page->free_space + 1; ! ! do { ! comp_page = comp_page->next_hash; ! } while (comp_page && comp_page->free_space < compressed_size); } + + if (!comp_page) + continue; /* remove from free space hash table before update */ *************** *** 393,405 **** BUG(); - update_fragment: - /* free any freed fragments in this comp_page */ - for_each_fragment_safe(fragment_lh, temp_lh, comp_page) { - fragment = list_entry(fragment_lh, comp_cache_fragment_t, list); - - if (CompFragmentFreed(fragment)) - comp_cache_free_nohash(fragment); - } - /* allocate the new fragment */ fragment = alloc_fragment(); --- 357,360 ---- *************** *** 461,501 **** void __init comp_cache_swp_buffer_init(void) { - comp_cache_fragment_t * fragment; - comp_cache_t * comp_page; struct page * buffer_page; int i; INIT_LIST_HEAD(&swp_free_buffer_head); ! INIT_LIST_HEAD(&swp_buffer_head); - swp_buffer = (struct swp_buffer **) kmalloc(NUM_SWP_BUFFERS * sizeof(struct swp_buffer *), GFP_KERNEL); - atomic_set(&number_of_free_swp_buffers, NUM_SWP_BUFFERS); - for (i = 0; i < NUM_SWP_BUFFERS; i++) { ! swp_buffer[i] = (struct swp_buffer *) kmalloc(sizeof(struct swp_buffer), GFP_KERNEL); ! comp_page = swp_buffer[i]->comp_page = alloc_comp_cache(); ! buffer_page = comp_page->page = alloc_page(GFP_KERNEL); - INIT_LIST_HEAD(&(comp_page->fragments)); - if (!buffer_page) panic("comp_cache_swp_buffer_init(): cannot allocate page"); - PageSetCompCache(buffer_page); buffer_page->mapping = &swapper_space; ! ! fragment = alloc_fragment(); ! fragment->comp_page = comp_page; ! fragment->compressed_size = PAGE_SIZE; ! fragment->flags = 0; ! ! comp_page->fragments.next = &fragment->list; ! comp_page->fragments.prev = &fragment->list; ! ! CompFragmentSetSwapBuffer(fragment); ! ! list_add(&(fragment->list), &swp_free_buffer_head); ! list_add(&(swp_buffer[i]->list), &swp_buffer_head); } } --- 416,436 ---- void __init comp_cache_swp_buffer_init(void) { struct page * buffer_page; + struct swp_buffer * swp_buffer; int i; INIT_LIST_HEAD(&swp_free_buffer_head); ! INIT_LIST_HEAD(&swp_used_buffer_head); for (i = 0; i < NUM_SWP_BUFFERS; i++) { ! swp_buffer = (struct swp_buffer *) kmalloc(sizeof(struct swp_buffer), GFP_KERNEL); ! 
buffer_page = swp_buffer->page = alloc_page(GFP_KERNEL); if (!buffer_page) panic("comp_cache_swp_buffer_init(): cannot allocate page"); buffer_page->mapping = &swapper_space; ! list_add(&(swp_buffer->list), &swp_free_buffer_head); } } Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.14 retrieving revision 1.15 diff -C2 -r1.14 -r1.15 *** vswap.c 2002/01/02 16:59:06 1.14 --- vswap.c 2002/01/10 12:39:31 1.15 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2001-12-31 12:53:42 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-01-08 11:20:31 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 189,197 **** comp_page = fragment->comp_page; ! if (TryLockPage(comp_page->page)) { ! if (CompFragmentTestandSetFreed(fragment)) ! BUG(); ! goto out; ! } comp_cache_free(fragment); --- 189,194 ---- comp_page = fragment->comp_page; ! if (TryLockPage(comp_page->page)) ! BUG(); comp_cache_free(fragment); |
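Sketch 1: the Fragment IO bit handshake. This is a minimal user-space model of the idea described in the log message, not the kernel code: the struct, the io_pending field and the function names are illustrative stand-ins for the real CompFragmentSetIO()/CompFragmentClearIO() flag, comp_cache_free() and the swap-out path shown in the diff above.

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

struct fragment {
	bool io_pending;   /* models the Fragment IO bit */
};

/* Models comp_cache_free(): the real code returns the compressed space to
 * the comp page here.  If the swap-out path has already claimed the
 * fragment for IO, only clear the IO bit and keep the struct alive --
 * the swap-out path will dispose of it later. */
static void fragment_free(struct fragment *f)
{
	if (f->io_pending) {
		f->io_pending = false;   /* tell the writer: no IO needed */
		return;                  /* struct freed by the swap-out path */
	}
	free(f);
}

/* Models the swap-out side: claim the fragment by setting the IO bit,
 * then check the bit again before submitting the write. */
static void swap_out(struct fragment *f)
{
	f->io_pending = true;            /* CompFragmentSetIO() */

	/* the fragment may be swapped in (and freed) in the meanwhile */
	fragment_free(f);                /* simulate a concurrent swap-in */

	if (!f->io_pending) {            /* free cleared the bit: skip IO */
		printf("IO skipped, freeing only the struct\n");
		free(f);
		return;
	}
	printf("submitting write\n");    /* would call rw_swap_page(WRITE) */
}

int main(void)
{
	struct fragment *f = calloc(1, sizeof(*f));
	if (!f)
		return 1;
	swap_out(f);
	return 0;
}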
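Sketch 2: the new free/used swap buffer policy of find_free_swp_buffer() -- free list first, then a non-blocking sweep of the used buffers, then waiting for one IO. The array, the in_use/io_busy flags and get_swp_buffer() are illustrative only; the real code keeps the buffers on the swp_free_buffer_head/swp_used_buffer_head lists and uses the page lock to tell whether a write is still in flight. Dropping the number_of_free_swp_buffers counter is what makes the two-pass sweep necessary: the only way to know whether anything can be reclaimed is to walk the used list once without blocking.

#include <stdio.h>
#include <stdbool.h>

#define NUM_SWP_BUFFERS 4

struct swp_buffer {
	bool in_use;   /* models membership in swp_used_buffer_head */
	bool io_busy;  /* models PageLocked(): write still in flight */
};

static struct swp_buffer buffers[NUM_SWP_BUFFERS];

static void wait_for_io(struct swp_buffer *b)
{
	b->io_busy = false;              /* stands in for wait_on_page() */
}

static struct swp_buffer *get_swp_buffer(void)
{
	struct swp_buffer *victim = NULL;
	int i;

	/* 1) prefer a buffer that is already on the free list */
	for (i = 0; i < NUM_SWP_BUFFERS; i++)
		if (!buffers[i].in_use)
			victim = &buffers[i];

	/* 2) otherwise sweep the used buffers once without blocking,
	 *    reclaiming every one whose IO already finished (the real code
	 *    also disposes of the fragment written out of each one) */
	if (!victim) {
		for (i = 0; i < NUM_SWP_BUFFERS; i++) {
			if (buffers[i].io_busy)
				continue;
			buffers[i].in_use = false;
			victim = &buffers[i];
		}
	}

	/* 3) nothing reclaimable: wait for one write to finish, take it */
	if (!victim) {
		victim = &buffers[0];
		wait_for_io(victim);
	}

	victim->in_use = true;           /* move to the used list */
	victim->io_busy = true;          /* a write will be submitted */
	return victim;
}

int main(void)
{
	int i;

	for (i = 0; i < NUM_SWP_BUFFERS + 2; i++)
		printf("got swap buffer %d\n", (int)(get_swp_buffer() - buffers));
	return 0;
}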