[lc-checkins] CVS: linux/mm page_io.c,1.5,1.6 filemap.c,1.39,1.40 memory.c,1.35,1.36 page_alloc.c,1.
From: Rodrigo S. de C. <rc...@us...> - 2002-09-10 16:43:54
Update of /cvsroot/linuxcompressed/linux/mm
In directory usw-pr-cvs1:/tmp/cvs-serv17835/mm
Modified Files:
filemap.c memory.c page_alloc.c shmem.c swap_state.c
swapfile.c
Added Files:
page_io.c
Log Message:
New features
o Adaptivity: the greatest feature of the changeset is the adaptivity
implementation. The compressed cache now resizes by itself, and it
seems to be picking a size pretty close to the best size observed in
our tests. The policy can be described as follows. Instead of a single
LRU queue, we now have two queues, active and inactive, like the LRU
queues in vanilla. The active list holds the pages that would be in
memory even if the compressed cache were not used, and the inactive
list is the gain from using the compressed cache. If there are many
accesses to the active list, we first block growing (by demand) and
later shrink the compressed cache; if there are many accesses to the
inactive list, we let the cache grow if needed. The active list size
is computed from the effective compression ratio (number of
fragments/number of memory pages). When shrinking the cache, we try to
free a compressed cache page by moving its fragments elsewhere. If we
are unable to free a page that way, we free a fragment at the end of
the inactive list.
o Compressed swap: now all swap cache pages are swapped out in
compressed format. A bit in swap_map array is used to know if the
entry is compressed or not. The compressed size is stored in the entry
on the disk. There is almost no cost to storing the pages in
compressed format, which is why it is the default configuration for
the compressed cache.
o Compacted swap: besides swapping out the pages in compressed format,
we may decrease the number of writeouts by writing many fragments to
the same disk block. Since storing some metadata has a memory cost,
this is an option to be enabled by the user. It uses two arrays,
real_swap (unsigned long array) and real_swap_map (unsigned short
array). All the metadata about the fragments in a disk block (offset,
size, index) is stored on the block itself.
o Clean fragments are no longer decompressed when they would only be
used to write some data. We don't decompress a clean fragment when
grabbing a page cache page in __grab_cache_page() any longer. We would
decompress a fragment, but its data wouldn't be used (that's why
__grab_cache_page() creates a page if one is not found in the page
cache). Dirty fragments will still be decompressed, but that's a rare
situation in the page cache since most data are written via buffers.
Bug fixes
o Support for larger compressed cache pages did not work for pages
larger than 2*PAGE_SIZE (8K). Reason: wrong computation of the comp
page size; very simple to fix.
o In /proc/comp_cache_hist, we were showing the number of fragments in
a comp page even if some of those fragments had been freed. It has
been fixed to exclude freed fragments.
o Writing out every dirty page with buffers. That was a conceptual
bug: all swapped-in pages would have buffers, and if they got dirty,
they would not be added to the compressed cache as dirty; they would
be written out first and only then added to the swap cache as a clean
page. Now we try to free the buffers and, if we are unable to do that,
we write the page out. With this bug, the page was still added to the
compressed cache, but we were forcing many writes.
Other:
o Removed support for changing algorithms online. That was a rarely
used option and would have introduced a space cost for pages swapped
out in compressed format, so it was removed. This also saves some
memory, since we now allocate only the data structures used by the
selected algorithm. Recall that the algorithm can be set through the
compalg= kernel parameter.
o All entries in /proc/sys/vm/comp_cache have been removed. Since
neither the compression algorithm nor the compressed cache size can be
changed any longer, it is useless to have a directory in /proc/sys.
The compressed cache size can still be checked in /proc/meminfo.
o Info about the compression algorithm is now shown even if no page
has been compressed.
o There are many code blocks with "#if 0" that are/were being tested.
Cleanups:
o Code to add a fragment to a comp page's fragment list was split out
into a new function.
o decompress() function removed.
Index: filemap.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/filemap.c,v
retrieving revision 1.39
retrieving revision 1.40
diff -C2 -r1.39 -r1.40
*** filemap.c 1 Aug 2002 14:52:25 -0000 1.39
--- filemap.c 10 Sep 2002 16:43:12 -0000 1.40
***************
*** 2907,2911 ****
#endif
err = filler(data, page);
-
if (err < 0) {
page_cache_release(page);
--- 2907,2910 ----
***************
*** 2978,2982 ****
*cached_page = NULL;
#ifdef CONFIG_COMP_PAGE_CACHE
! read_comp_cache(mapping, index, page);
#endif
}
--- 2977,2981 ----
*cached_page = NULL;
#ifdef CONFIG_COMP_PAGE_CACHE
! read_dirty_comp_cache(mapping, index, page);
#endif
}
Index: memory.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/memory.c,v
retrieving revision 1.35
retrieving revision 1.36
diff -C2 -r1.35 -r1.36
*** memory.c 18 Jul 2002 13:32:50 -0000 1.35
--- memory.c 10 Sep 2002 16:43:12 -0000 1.36
***************
*** 1177,1181 ****
/* The page isn't present yet, go ahead with the fault. */
!
swap_free(entry);
if (vm_swap_full())
--- 1177,1182 ----
/* The page isn't present yet, go ahead with the fault. */
! if (PageCompressed(page))
! decompress_swap_cache_page(page);
swap_free(entry);
if (vm_swap_full())
Index: page_alloc.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/page_alloc.c,v
retrieving revision 1.23
retrieving revision 1.24
diff -C2 -r1.23 -r1.24
*** page_alloc.c 16 Jul 2002 18:41:55 -0000 1.23
--- page_alloc.c 10 Sep 2002 16:43:14 -0000 1.24
***************
*** 98,102 ****
if (PageActive(page))
BUG();
! page->flags &= ~((1<<PG_referenced) | (1<<PG_dirty) | (1<<PG_comp_cache));
if (current->flags & PF_FREE_PAGES)
--- 98,102 ----
if (PageActive(page))
BUG();
! page->flags &= ~((1<<PG_referenced) | (1<<PG_dirty) | (1<<PG_comp_cache) | (1<<PG_compressed));
if (current->flags & PF_FREE_PAGES)
Index: shmem.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/shmem.c,v
retrieving revision 1.21
retrieving revision 1.22
diff -C2 -r1.21 -r1.22
*** shmem.c 5 Jul 2002 15:21:50 -0000 1.21
--- shmem.c 10 Sep 2002 16:43:16 -0000 1.22
***************
*** 386,389 ****
--- 386,391 ----
return 0;
found:
+ if (PageCompressed(page))
+ decompress_swap_cache_page(page);
delete_from_swap_cache(page);
add_to_page_cache(page, info->inode->i_mapping, offset + idx);
***************
*** 558,561 ****
--- 560,565 ----
swap_free(*entry);
*entry = (swp_entry_t) {0};
+ if (PageCompressed(page))
+ decompress_swap_cache_page(page);
delete_from_swap_cache(page);
flags = page->flags & ~((1 << PG_uptodate) | (1 << PG_error) | (1 << PG_referenced) | (1 << PG_arch_1));
Index: swap_state.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/swap_state.c,v
retrieving revision 1.38
retrieving revision 1.39
diff -C2 -r1.38 -r1.39
*** swap_state.c 28 Jul 2002 15:47:04 -0000 1.38
--- swap_state.c 10 Sep 2002 16:43:16 -0000 1.39
***************
*** 242,246 ****
if (vswap_address(entry))
BUG();
!
rw_swap_page(READ, new_page);
return new_page;
--- 242,247 ----
if (vswap_address(entry))
BUG();
! if (get_swap_compressed(entry))
! PageSetCompressed(new_page);
rw_swap_page(READ, new_page);
return new_page;
Index: swapfile.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/swapfile.c,v
retrieving revision 1.34
retrieving revision 1.35
diff -C2 -r1.34 -r1.35
*** swapfile.c 18 Jul 2002 21:31:08 -0000 1.34
--- swapfile.c 10 Sep 2002 16:43:17 -0000 1.35
***************
*** 206,209 ****
--- 206,424 ----
}
+ #ifdef CONFIG_COMP_SWAP
+ void
+ real_swap_free(swp_entry_t entry, int count)
+ {
+ struct swap_info_struct * p;
+ unsigned long type, offset = SWP_OFFSET(entry);
+
+ type = SWP_TYPE(entry);
+ p = type + swap_info;
+
+ if (!p->real_swap_map[offset])
+ BUG();
+
+ if (p->real_swap_map[offset] < COMP_SWAP_MAP_MAX) {
+ p->real_swap_map[offset] -= count;
+ if (!p->real_swap_map[offset]) {
+ if (offset < p->real_lowest_bit)
+ p->real_lowest_bit = offset;
+ if (offset > p->real_highest_bit)
+ p->real_highest_bit = offset;
+ }
+ }
+ }
+
+ void
+ real_swap_duplicate(swp_entry_t entry, int count)
+ {
+ struct swap_info_struct * p;
+ unsigned long offset, type;
+
+ type = SWP_TYPE(entry);
+ p = type + swap_info;
+ offset = SWP_OFFSET(entry);
+
+ if (!p->real_swap_map[offset])
+ BUG();
+ if (count >= COMP_SWAP_MAP_MAX)
+ BUG();
+
+ if (p->real_swap_map[offset] < COMP_SWAP_MAP_MAX - count)
+ p->real_swap_map[offset] += count;
+ else if (p->real_swap_map[offset] <= COMP_SWAP_MAP_MAX)
+ p->real_swap_map[offset] = COMP_SWAP_MAP_MAX;
+ }
+
+ swp_entry_t
+ get_real_swap_page(swp_entry_t entry)
+ {
+ unsigned long offset, real_offset;
+ struct swap_info_struct * p;
+ swp_entry_t real_entry;
+
+ offset = SWP_OFFSET(entry);
+
+ p = swap_info_get(entry);
+
+ if (!p)
+ BUG();
+
+ if (p->real_cluster_nr) {
+ while (p->real_cluster_next <= p->real_highest_bit) {
+ real_offset = p->real_cluster_next++;
+ if (p->real_swap_map[real_offset])
+ continue;
+ p->real_cluster_nr--;
+ goto got_page;
+ }
+ }
+ p->real_cluster_nr = SWAPFILE_CLUSTER;
+
+ /* try to find an empty (even not aligned) cluster. */
+ real_offset = p->real_lowest_bit;
+ check_next_cluster:
+ if (real_offset+SWAPFILE_CLUSTER-1 <= p->real_highest_bit)
+ {
+ int nr;
+ for (nr = real_offset; nr < real_offset+SWAPFILE_CLUSTER; nr++)
+ if (p->real_swap_map[nr])
+ {
+ real_offset = nr+1;
+ goto check_next_cluster;
+ }
+ /* We found a completly empty cluster, so start
+ * using it.
+ */
+ goto got_page;
+ }
+ /* No luck, so now go finegrined as usual. -Andrea */
+ for (real_offset = p->real_lowest_bit; real_offset <= p->real_highest_bit ; real_offset++) {
+ if (p->real_swap_map[real_offset])
+ continue;
+ p->real_lowest_bit = real_offset+1;
+ got_page:
+ if (real_offset == p->real_lowest_bit)
+ p->real_lowest_bit++;
+ if (real_offset == p->real_highest_bit)
+ p->real_highest_bit--;
+ if (p->real_lowest_bit > p->real_highest_bit) {
+ p->real_lowest_bit = p->max;
+ p->real_highest_bit = 0;
+ }
+ real_entry.val = p->real_swap[offset];
+ if (real_entry.val)
+ real_swap_free(real_entry, swap_map_count(p->swap_map[offset]));
+ real_entry = SWP_ENTRY(SWP_TYPE(entry), real_offset);
+ p->real_swap[offset] = real_entry.val;
+ p->real_swap_map[real_offset] = swap_map_count(p->swap_map[offset]);
+ p->real_cluster_next = real_offset+1;
+ swap_info_put(p);
+ return real_entry;
+ }
+ p->real_lowest_bit = p->max;
+ p->real_highest_bit = 0;
+ swap_info_put(p);
+ real_entry.val = 0;
+ return real_entry;
+ }
+
+ swp_entry_t
+ get_map(swp_entry_t entry)
+ {
+ struct swap_info_struct * p;
+ unsigned long offset;
+ swp_entry_t real_entry;
+
+ p = swap_info_get(entry);
+
+ if (!p)
+ BUG();
+
+ offset = SWP_OFFSET(entry);
+ if (offset >= p->max)
+ BUG();
+ real_entry.val = p->real_swap[offset];
+
+ if (!real_entry.val)
+ BUG();
+ swap_info_put(p);
+
+ return real_entry;
+ }
+
+ void
+ map_swap(swp_entry_t entry, swp_entry_t real_entry)
+ {
+ struct swap_info_struct * p;
+ unsigned long offset;
+ swp_entry_t old_entry;
+
+ p = swap_info_get(entry);
+
+ if (!p)
+ BUG();
+
+ offset = SWP_OFFSET(entry);
+ if (offset >= p->max)
+ BUG();
+ old_entry.val = p->real_swap[offset];
+ if (old_entry.val)
+ real_swap_free(old_entry, swap_map_count(p->swap_map[offset]));
+
+ p->real_swap[offset] = real_entry.val;
+
+ real_swap_duplicate(real_entry, swap_map_count(p->swap_map[offset]));
+ swap_info_put(p);
+ }
+ #endif
+
+ #ifdef CONFIG_COMP_CACHE
+ void
+ set_swap_compressed(swp_entry_t entry, int compressed)
+ {
+ struct swap_info_struct * p;
+ unsigned long offset;
+
+ p = swap_info_get(entry);
+
+ if (!p)
+ BUG();
+
+ offset = SWP_OFFSET(entry);
+ if (offset >= p->max)
+ BUG();
+ if (compressed)
+ p->swap_map[offset] |= SWAP_MAP_COMP_BIT;
+ else
+ p->swap_map[offset] &= SWAP_MAP_COMP_BIT_MASK;
+
+ swap_info_put(p);
+ }
+
+ int
+ get_swap_compressed(swp_entry_t entry)
+ {
+ struct swap_info_struct * p;
+ unsigned long offset;
+ int ret = 0;
+
+ p = swap_info_get(entry);
+
+ if (!p)
+ BUG();
+
+ offset = SWP_OFFSET(entry);
+ if (offset >= p->max)
+ BUG();
+ if (p->swap_map[offset] & SWAP_MAP_COMP_BIT)
+ ret = 1;
+ swap_info_put(p);
+
+ return ret;
+ }
+
+ #endif
+
static int swap_entry_free(struct swap_info_struct *p, unsigned long offset)
{
***************
*** 215,221 ****
count = p->swap_map[offset];
! if (count < SWAP_MAP_MAX) {
count--;
! if (!count) {
entry = SWP_ENTRY(p - swap_info, offset);
invalidate_comp_cache(&swapper_space, entry.val);
--- 430,447 ----
count = p->swap_map[offset];
! if (swap_map_count(count) < SWAP_MAP_MAX) {
count--;
! #ifdef CONFIG_COMP_SWAP
! if (p->real_swap[offset]) {
! swp_entry_t real_entry;
! real_entry.val = p->real_swap[offset];
! real_swap_free(real_entry, 1);
! }
! #endif
! if (!swap_map_count(count)) {
! #ifdef CONFIG_COMP_SWAP
! if (p->real_swap[offset])
! p->real_swap[offset] = 0;
! #endif
entry = SWP_ENTRY(p - swap_info, offset);
invalidate_comp_cache(&swapper_space, entry.val);
***************
*** 225,234 ****
p->highest_bit = offset;
nr_swap_pages++;
! count = p->swap_map[offset];
! count--;
}
! p->swap_map[offset] = count;
}
! return count;
}
--- 451,459 ----
p->highest_bit = offset;
nr_swap_pages++;
! count = 0;
}
! p->swap_map[offset] = count;
}
! return swap_map_count(count);
}
***************
*** 268,272 ****
goto check_exclusive;
}
! if (p->swap_map[SWP_OFFSET(entry)] == 1)
exclusive = 1;
check_exclusive:
--- 493,497 ----
goto check_exclusive;
}
! if (swap_map_count(p->swap_map[SWP_OFFSET(entry)]) == 1)
exclusive = 1;
check_exclusive:
***************
*** 345,349 ****
goto check_exclusive;
}
! if (p->swap_map[SWP_OFFSET(entry)] == 1)
exclusive = 1;
check_exclusive:
--- 570,574 ----
goto check_exclusive;
}
! if (swap_map_count(p->swap_map[SWP_OFFSET(entry)]) == 1)
exclusive = 1;
check_exclusive:
***************
*** 353,356 ****
--- 578,583 ----
if (page_count(page) - !!page->buffers == 2) {
__delete_from_swap_cache(page);
+ if (PageCompressed(page))
+ decompress_swap_cache_page(page);
SetPageDirty(page);
retval = 1;
***************
*** 542,546 ****
i = 1;
}
! count = si->swap_map[i];
if (count && count != SWAP_MAP_BAD)
break;
--- 769,773 ----
i = 1;
}
! count = swap_map_count(si->swap_map[i]);
if (count && count != SWAP_MAP_BAD)
break;
***************
*** 643,647 ****
* to search, but use it as a reminder to search shmem.
*/
! swcount = *swap_map;
if (swcount > 1) {
flush_page_to_ram(page);
--- 870,874 ----
* to search, but use it as a reminder to search shmem.
*/
! swcount = swap_map_count(*swap_map);
if (swcount > 1) {
flush_page_to_ram(page);
***************
*** 651,656 ****
unuse_process(start_mm, entry, page);
}
! if (*swap_map > 1) {
! int set_start_mm = (*swap_map >= swcount);
struct list_head *p = &start_mm->mmlist;
struct mm_struct *new_start_mm = start_mm;
--- 878,883 ----
unuse_process(start_mm, entry, page);
}
! if (swap_map_count(*swap_map) > 1) {
! int set_start_mm = (swap_map_count(*swap_map) >= swcount);
struct list_head *p = &start_mm->mmlist;
struct mm_struct *new_start_mm = start_mm;
***************
*** 658,665 ****
spin_lock(&mmlist_lock);
! while (*swap_map > 1 &&
(p = p->next) != &start_mm->mmlist) {
mm = list_entry(p, struct mm_struct, mmlist);
! swcount = *swap_map;
if (mm == &init_mm) {
set_start_mm = 1;
--- 885,892 ----
spin_lock(&mmlist_lock);
! while (swap_map_count(*swap_map) > 1 &&
(p = p->next) != &start_mm->mmlist) {
mm = list_entry(p, struct mm_struct, mmlist);
! swcount = swap_map_count(*swap_map);
if (mm == &init_mm) {
set_start_mm = 1;
***************
*** 667,671 ****
} else
unuse_process(mm, entry, page);
! if (set_start_mm && *swap_map < swcount) {
new_start_mm = mm;
set_start_mm = 0;
--- 894,898 ----
} else
unuse_process(mm, entry, page);
! if (set_start_mm && swap_map_count(*swap_map) < swcount) {
new_start_mm = mm;
set_start_mm = 0;
***************
*** 691,699 ****
* report them; but do report if we reset SWAP_MAP_MAX.
*/
! if (*swap_map == SWAP_MAP_MAX) {
swap_list_lock();
swap_device_lock(si);
nr_swap_pages++;
! *swap_map = 1;
swap_device_unlock(si);
swap_list_unlock();
--- 918,926 ----
* report them; but do report if we reset SWAP_MAP_MAX.
*/
! if (swap_map_count(*swap_map) == SWAP_MAP_MAX) {
swap_list_lock();
swap_device_lock(si);
nr_swap_pages++;
! *swap_map = 1 | (*swap_map & SWAP_MAP_COMP_BIT);
swap_device_unlock(si);
swap_list_unlock();
***************
*** 715,719 ****
* Note shmem_unuse already deleted its from swap cache.
*/
! swcount = *swap_map;
if ((swcount > 0) != PageSwapCache(page))
BUG();
--- 942,946 ----
* Note shmem_unuse already deleted its from swap cache.
*/
! swcount = swap_map_count(*swap_map);
if ((swcount > 0) != PageSwapCache(page))
BUG();
***************
*** 722,725 ****
--- 949,954 ----
lock_page(page);
}
+ if (PageCompressed(page))
+ decompress_swap_cache_page(page);
if (PageSwapCache(page))
delete_from_swap_cache(page);
***************
*** 758,761 ****
--- 987,994 ----
int i, type, prev;
int err;
+ #ifdef CONFIG_COMP_SWAP
+ unsigned long * real_swap;
+ unsigned short * real_swap_map;
+ #endif
if (!capable(CAP_SYS_ADMIN))
***************
*** 830,837 ****
--- 1063,1080 ----
swap_map = p->swap_map;
p->swap_map = NULL;
+ #ifdef CONFIG_COMP_SWAP
+ real_swap = p->real_swap;
+ p->real_swap = NULL;
+ real_swap_map = p->real_swap_map;
+ p->real_swap_map = NULL;
+ #endif
p->flags = 0;
swap_device_unlock(p);
swap_list_unlock();
vfree(swap_map);
+ #ifdef CONFIG_COMP_SWAP
+ vfree(real_swap);
+ vfree(real_swap_map);
+ #endif
err = 0;
***************
*** 867,871 ****
usedswap = 0;
for (j = 0; j < ptr->max; ++j)
! switch (ptr->swap_map[j]) {
case SWAP_MAP_BAD:
case 0:
--- 1110,1114 ----
usedswap = 0;
for (j = 0; j < ptr->max; ++j)
! switch (swap_map_count(ptr->swap_map[j])) {
case SWAP_MAP_BAD:
case 0:
***************
*** 915,918 ****
--- 1158,1165 ----
struct block_device *bdev = NULL;
unsigned short *swap_map;
+ #ifdef CONFIG_COMP_SWAP
+ unsigned long * real_swap;
+ unsigned short * real_swap_map;
+ #endif
if (!capable(CAP_SYS_ADMIN))
***************
*** 939,942 ****
--- 1186,1196 ----
p->highest_bit = 0;
p->cluster_nr = 0;
+ #ifdef CONFIG_COMP_SWAP
+ p->real_swap = NULL;
+ p->real_swap_map = NULL;
+ p->real_lowest_bit = 0;
+ p->real_highest_bit = 0;
+ p->real_cluster_nr = 0;
+ #endif
p->sdev_lock = SPIN_LOCK_UNLOCKED;
p->next = -1;
***************
*** 1039,1042 ****
--- 1293,1308 ----
goto bad_swap;
}
+ #ifdef CONFIG_COMP_SWAP
+ p->real_lowest_bit = p->lowest_bit;
+ p->real_highest_bit = p->highest_bit;
+
+ p->real_swap = vmalloc(maxpages * sizeof(long));
+ p->real_swap_map = vmalloc(maxpages * sizeof(short));
+ if (!p->real_swap || !p->real_swap_map) {
+ error = -ENOMEM;
+ goto bad_swap;
+ }
+ memset(p->real_swap, 0, maxpages * sizeof(long));
+ #endif
for (i = 1 ; i < maxpages ; i++) {
if (test_bit(i,(char *) swap_header))
***************
*** 1045,1048 ****
--- 1311,1322 ----
p->swap_map[i] = SWAP_MAP_BAD;
}
+ #ifdef CONFIG_COMP_SWAP
+ for (i = 1 ; i < maxpages ; i++) {
+ if (test_bit(i,(char *) swap_header))
+ p->real_swap_map[i] = 0;
+ else
+ p->real_swap_map[i] = COMP_SWAP_MAP_BAD;
+ }
+ #endif
break;
***************
*** 1076,1085 ****
error = 0;
memset(p->swap_map, 0, maxpages * sizeof(short));
for (i=0; i<swap_header->info.nr_badpages; i++) {
int page = swap_header->info.badpages[i];
if (page <= 0 || page >= swap_header->info.last_page)
error = -EINVAL;
! else
p->swap_map[page] = SWAP_MAP_BAD;
}
nr_good_pages = swap_header->info.last_page -
--- 1350,1376 ----
error = 0;
memset(p->swap_map, 0, maxpages * sizeof(short));
+ #ifdef CONFIG_COMP_SWAP
+ p->real_lowest_bit = p->lowest_bit;
+ p->real_highest_bit = p->highest_bit;
+
+ p->real_swap = vmalloc(maxpages * sizeof(long));
+ p->real_swap_map = vmalloc(maxpages * sizeof(short));
+ if (!p->real_swap || !p->real_swap_map) {
+ error = -ENOMEM;
+ goto bad_swap;
+ }
+ memset(p->real_swap, 0, maxpages * sizeof(long));
+ memset(p->real_swap_map, 0, maxpages * sizeof(short));
+ #endif
for (i=0; i<swap_header->info.nr_badpages; i++) {
int page = swap_header->info.badpages[i];
if (page <= 0 || page >= swap_header->info.last_page)
error = -EINVAL;
! else {
p->swap_map[page] = SWAP_MAP_BAD;
+ #ifdef CONFIG_COMP_SWAP
+ p->real_swap_map[page] = COMP_SWAP_MAP_BAD;
+ #endif
+ }
}
nr_good_pages = swap_header->info.last_page -
***************
*** 1102,1105 ****
--- 1393,1399 ----
}
p->swap_map[0] = SWAP_MAP_BAD;
+ #ifdef CONFIG_COMP_SWAP
+ p->real_swap_map[0] = COMP_SWAP_MAP_BAD;
+ #endif
swap_list_lock();
swap_device_lock(p);
***************
*** 1136,1139 ****
--- 1430,1437 ----
swap_list_lock();
swap_map = p->swap_map;
+ #ifdef CONFIG_COMP_SWAP
+ real_swap = p->real_swap;
+ real_swap_map = p->real_swap_map;
+ #endif
nd.mnt = p->swap_vfsmnt;
nd.dentry = p->swap_file;
***************
*** 1148,1151 ****
--- 1446,1455 ----
if (swap_map)
vfree(swap_map);
+ #ifdef CONFIG_COMP_SWAP
+ if (real_swap)
+ vfree(real_swap);
+ if (real_swap_map)
+ vfree(real_swap_map);
+ #endif
path_release(&nd);
out:
***************
*** 1167,1171 ****
continue;
for (j = 0; j < swap_info[i].max; ++j) {
! switch (swap_info[i].swap_map[j]) {
case 0:
case SWAP_MAP_BAD:
--- 1471,1475 ----
continue;
for (j = 0; j < swap_info[i].max; ++j) {
! switch (swap_map_count(swap_info[i].swap_map[j])) {
case 0:
case SWAP_MAP_BAD:
***************
*** 1194,1198 ****
if (vswap_address(entry))
! goto virtual_swap;
type = SWP_TYPE(entry);
if (type >= nr_swapfiles)
--- 1498,1502 ----
if (vswap_address(entry))
! return virtual_swap_duplicate(entry);
type = SWP_TYPE(entry);
if (type >= nr_swapfiles)
***************
*** 1203,1216 ****
swap_device_lock(p);
if (offset < p->max && p->swap_map[offset]) {
! if (p->swap_map[offset] < SWAP_MAP_MAX - 1) {
p->swap_map[offset]++;
result = 1;
! } else if (p->swap_map[offset] <= SWAP_MAP_MAX) {
if (swap_overflow++ < 5)
printk(KERN_WARNING "swap_dup: swap entry overflow\n");
! p->swap_map[offset] = SWAP_MAP_MAX;
result = 1;
}
}
swap_device_unlock(p);
out:
--- 1507,1527 ----
swap_device_lock(p);
if (offset < p->max && p->swap_map[offset]) {
! if (swap_map_count(p->swap_map[offset]) < SWAP_MAP_MAX - 1) {
p->swap_map[offset]++;
result = 1;
! } else if (swap_map_count(p->swap_map[offset]) <= SWAP_MAP_MAX) {
if (swap_overflow++ < 5)
printk(KERN_WARNING "swap_dup: swap entry overflow\n");
! p->swap_map[offset] = SWAP_MAP_MAX | (p->swap_map[offset] & SWAP_MAP_COMP_BIT);
result = 1;
}
}
+ #ifdef CONFIG_COMP_SWAP
+ if (p->real_swap[offset]) {
+ swp_entry_t real_entry;
+ real_entry.val = p->real_swap[offset];
+ real_swap_duplicate(real_entry, 1);
+ }
+ #endif
swap_device_unlock(p);
out:
***************
*** 1220,1226 ****
printk(KERN_ERR "swap_dup: %s%08lx\n", Bad_file, entry.val);
goto out;
-
- virtual_swap:
- return virtual_swap_duplicate(entry);
}
--- 1531,1534 ----
***************
*** 1250,1254 ****
if (!p->swap_map[offset])
goto bad_unused;
! retval = p->swap_map[offset];
out:
return retval;
--- 1558,1562 ----
if (!p->swap_map[offset])
goto bad_unused;
! retval = swap_map_count(p->swap_map[offset]);
out:
return retval;
***************
*** 1291,1295 ****
--- 1599,1607 ----
return;
}
+ #ifdef CONFIG_COMP_SWAP
+ if (p->real_swap_map && !p->real_swap_map[*offset]) {
+ #else
if (p->swap_map && !p->swap_map[*offset]) {
+ #endif
printk(KERN_ERR "rw_swap_page: %s%08lx\n", Unused_offset, entry.val);
return;
***************
*** 1335,1339 ****
if (!swapdev->swap_map[toff])
break;
! if (swapdev->swap_map[toff] == SWAP_MAP_BAD)
break;
toff++;
--- 1647,1651 ----
if (!swapdev->swap_map[toff])
break;
! if (swap_map_count(swapdev->swap_map[toff]) == SWAP_MAP_BAD)
break;
toff++;
***************
*** 1343,1344 ****
--- 1655,1657 ----
return ret;
}
+