linuxcompressed-checkins Mailing List for Linux Compressed Cache
From: Rodrigo S. de C. <rc...@us...> - 2003-05-19 01:39:16
Update of /cvsroot/linuxcompressed/linux In directory sc8-pr-cvs1:/tmp/cvs-serv25395 Modified Files: MAINTAINERS Log Message: o Port code to 2.4.20 Bug fix (?) o Changes checks in vswap.c to avoid oopses. It will BUG() instead. Some of the checks were done after the value had been accessed. Note o Virtual swap addresses are temporarily disabled, due to debugging sessions related to the use of swap files instead of swap partitions. Index: MAINTAINERS =================================================================== RCS file: /cvsroot/linuxcompressed/linux/MAINTAINERS,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -r1.2 -r1.3 *** MAINTAINERS 26 Feb 2002 20:59:01 -0000 1.2 --- MAINTAINERS 19 May 2003 01:38:43 -0000 1.3 *************** *** 1,2 **** --- 1,3 ---- + List of maintainers and how to submit kernel changes *************** *** 70,73 **** --- 71,82 ---- should be using that. + 3C359 NETWORK DRIVER + P: Mike Phillips + M: mi...@li... + L: lin...@vg... + L: lin...@li... + W: http://www.linuxtr.net + S: Maintained + 3C501 NETWORK DRIVER P: Alan Cox *************** *** 234,237 **** --- 243,252 ---- S: Maintained + BEFS FILE SYSTEM + P: Will Dyson + M: wi...@cs... + W: http://cs.earlham.edu/~will/software/linux/kernel/BeFS.html + S: Maintained + BERKSHIRE PRODUCTS PC WATCHDOG DRIVER P: Kenji Hollis *************** *** 259,267 **** S: Maintained BTTV VIDEO4LINUX DRIVER P: Gerd Knorr ! M: kr...@go... L: vid...@re... ! W: http://me.in-berlin.de/~kraxel/bttv.html S: Maintained --- 274,288 ---- S: Maintained + BLUETOOTH SUBSYSTEM (PC Card Drivers) + P: Marcel Holtmann + M: ma...@ho... + W: http://www.holtmann.org/linux/bluetooth/ + S: Maintained + BTTV VIDEO4LINUX DRIVER P: Gerd Knorr ! M: kr...@by... L: vid...@re... ! W: http://bytesex.org/bttv/ S: Maintained *************** *** 529,534 **** P: Rui Sousa M: rui...@cl... ! L: emu...@op... ! W: http://opensource.creative.com/ S: Maintained --- 550,555 ---- P: Rui Sousa M: rui...@cl... ! L: emu...@li... ! W: http://sourceforge.net/projects/emu10k1/ S: Maintained *************** *** 552,563 **** EXT2 FILE SYSTEM ! P: Remy Card ! M: Rem...@li... ! L: lin...@vg... S: Maintained EXT3 FILE SYSTEM ! P: Remy Card, Stephen Tweedie ! M: sc...@re..., ak...@zi..., ad...@tu... L: ext...@re... S: Maintained --- 573,582 ---- EXT2 FILE SYSTEM ! L: ext...@li... S: Maintained EXT3 FILE SYSTEM ! P: Stephen Tweedie, Andrew Morton ! M: sc...@re..., ak...@zi..., ad...@cl... L: ext...@re... S: Maintained *************** *** 628,633 **** HFS FILESYSTEM ! P: Adrian Sun ! M: as...@co... L: lin...@vg... S: Maintained --- 647,652 ---- HFS FILESYSTEM ! P: Oliver Neukum ! M: ol...@ne... L: lin...@vg... S: Maintained *************** *** 675,679 **** i386 BOOT CODE P: Riley H. Williams ! M: rh...@me... L: Lin...@vg... S: Maintained --- 694,698 ---- i386 BOOT CODE P: Riley H. Williams ! M: Ri...@Wi... L: Lin...@vg... S: Maintained *************** *** 706,711 **** IBM ServeRAID RAID DRIVER ! P: Keith Mitchell ! M: ips...@us... W: http://www.developer.ibm.com/welcome/netfinity/serveraid.html S: Supported --- 725,732 ---- IBM ServeRAID RAID DRIVER ! P: Jack Hammer ! M: ips...@ad... ! P: David Jeffery ! M: ips...@ad... W: http://www.developer.ibm.com/welcome/netfinity/serveraid.html S: Supported *************** *** 714,723 **** P: Andre Hedrick M: an...@li... ! M: an...@as... ! M: an...@su... L: lin...@vg... W: http://www.kernel.org/pub/linux/kernel/people/hedrick/ W: http://www.linux-ide.org/ ! 
S: Supported IDE/ATAPI CDROM DRIVER --- 735,744 ---- P: Andre Hedrick M: an...@li... ! M: an...@li... L: lin...@vg... W: http://www.kernel.org/pub/linux/kernel/people/hedrick/ W: http://www.linux-ide.org/ ! W: http://www.linuxdiskcert.org/ ! S: Maintained IDE/ATAPI CDROM DRIVER *************** *** 742,755 **** IEEE 1394 SUBSYSTEM ! P: Andreas Bombe ! M: and...@mu... L: lin...@li... W: http://linux1394.sourceforge.net/ S: Maintained - IEEE 1394 AIC5800 DRIVER - L: lin...@li... - S: Orphan - IEEE 1394 OHCI DRIVER P: Ben Collins --- 763,772 ---- IEEE 1394 SUBSYSTEM ! P: Ben Collins ! M: bco...@de... L: lin...@li... W: http://linux1394.sourceforge.net/ S: Maintained IEEE 1394 OHCI DRIVER P: Ben Collins *************** *** 799,802 **** --- 816,825 ---- S: Maintained + IOC3 DRIVER + P: Ralf Baechle + M: ra...@os... + L: lin...@li... + S: Maintained + IP MASQUERADING: P: Juanjo Ciarlante *************** *** 852,855 **** --- 875,885 ---- S: Maintained + JFS FILESYSTEM + P: Dave Kleikamp + M: sh...@au... + L: jfs...@os... + W: http://oss.software.ibm.com/developerworks/opensource/jfs/ + S: Supported + JOYSTICK DRIVER P: Vojtech Pavlik *************** *** 857,861 **** L: lin...@at... W: http://www.suse.cz/development/joystick/ ! S: Supported KERNEL AUTOMOUNTER (AUTOFS) --- 887,891 ---- L: lin...@at... W: http://www.suse.cz/development/joystick/ ! S: Maintained KERNEL AUTOMOUNTER (AUTOFS) *************** *** 926,929 **** --- 956,966 ---- S: Maintained + LINUX FOR 64BIT POWERPC + P: David Engebretsen + M: eng...@us... + W: http://linuxppc64.org + L: lin...@li... + S: Supported + LOGICAL DISK MANAGER SUPPORT (LDM, Windows 2000/XP Dynamic Disks) P: Richard Russon (FlatCap) *************** *** 994,998 **** M: ra...@gn... W: http://oss.sgi.com/mips/mips-howto.html ! L: lin...@os... S: Maintained --- 1031,1035 ---- M: ra...@gn... W: http://oss.sgi.com/mips/mips-howto.html ! L: lin...@li... S: Maintained *************** *** 1049,1066 **** S: Maintained ! NETFILTER P: Rusty Russell - M: ru...@ru... P: Marc Boucher - M: ma...@mb... P: James Morris - M: ja...@in... P: Harald Welte - M: la...@gn... P: Jozsef Kadlecsik ! M: ka...@bl... W: http://www.netfilter.org/ W: http://www.iptables.org/ ! L: net...@li... S: Supported --- 1086,1100 ---- S: Maintained ! NETFILTER/IPTABLES P: Rusty Russell P: Marc Boucher P: James Morris P: Harald Welte P: Jozsef Kadlecsik ! M: cor...@ne... W: http://www.netfilter.org/ W: http://www.iptables.org/ ! L: net...@li... ! L: net...@li... S: Supported *************** *** 1088,1092 **** M: ne...@os... L: lin...@vg... - W: http://www.uk.linux.org/NetNews.html (2.0 only) S: Maintained --- 1122,1125 ---- *************** *** 1094,1099 **** P: David S. Miller M: da...@re... - P: Andi Kleen - M: ak...@mu... P: Alexey Kuznetsov M: ku...@ms... --- 1127,1130 ---- *************** *** 1164,1167 **** --- 1195,1204 ---- S: Maintained + ORINOCO DRIVER + P: David Gibson + M: he...@gi... + W: http://www.ozlabs.org/people/dgibson/dldwd + S: Maintained + PARALLEL PORT SUPPORT P: Phil Blundell *************** *** 1191,1196 **** PCI ID DATABASE ! P: Jens Maurer ! M: jm...@cc... S: Maintained --- 1228,1235 ---- PCI ID DATABASE ! P: Martin Mares ! M: mj...@uc... ! L: pci...@li... ! W: http://pciids.sourceforge.net/ S: Maintained *************** *** 1221,1225 **** PCMCIA SUBSYSTEM P: David Hinds ! M: dh...@ze... L: lin...@vg... W: http://pcmcia-cs.sourceforge.net --- 1260,1264 ---- PCMCIA SUBSYSTEM P: David Hinds ! M: da...@us... L: lin...@vg... 
W: http://pcmcia-cs.sourceforge.net *************** *** 1232,1235 **** --- 1271,1288 ---- S: Maintained + PERMEDIA 3 FRAMEBUFFER DRIVER + P: Romain Dolbeau + M: do...@ir... + L: lin...@li... + W: http://www.irisa.fr/prive/dolbeau/pm3fb/pm3fb.html + S: Maintained + + PHILIPS NINO PALM PC + P: Steven Hill + M: sj...@re... + L: lin...@li... + W: http://www.realitydiluted.com/projects/nino + S: Maintained + PNP SUPPORT P: Tom Lees *************** *** 1240,1243 **** --- 1293,1305 ---- S: Maintained + POWERVR2 FRAMEBUFFER DRIVER + P: M. R. Brown + M: mr...@0x... + P: Paul Mundt + M: le...@0x... + L: lin...@li... + W: http://www.linuxdc.org + S: Maintained + PPP PROTOCOL DRIVERS AND COMPRESSORS P: Paul Mackerras *************** *** 1262,1265 **** --- 1324,1338 ---- S: Maintained + PROMISE PDC202XX IDE CONTROLLER DRIVER + P: Hank Yang + M: su...@pr... [TAIWAN] + P: Jordan Rhody + M: su...@pr... [U.S.A] + P: Jack Hu + M: sup...@pr... [CHINA] + W: http://www.promise.com/support/linux_eng.asp + W: http://www.promise.com.tw/support/linux_eng.asp + S: Maintained + QNX4 FILESYSTEM P: Anders Larsen *************** *** 1312,1315 **** --- 1385,1394 ---- S: Maintained + RME96XX MULTICHANNEL SOUND DRIVER + P: Guenter Geiger + M: ge...@ep... + L: lin...@vg... + S: Maintained + RTLINUX REALTIME LINUX P: Victor Yodaiken *************** *** 1339,1342 **** --- 1418,1426 ---- S: Maintained + SC1200 WDT DRIVER + P: Zwane Mwaikambo + M: zw...@co... + S: Maintained + SCSI CDROM DRIVER P: Jens Axboe *************** *** 1370,1373 **** --- 1454,1464 ---- S: Maintained + SIS 5513 IDE CONTROLLER DRIVER + P: Lionel Bouton + M: Lio...@in... + W: http://inet6.dyn.dhs.org/sponsoring/sis5513/index.html + W: http://gyver.homeip.net/sis5513/index.html + S: Maintained + SIS 900/7016 FAST ETHERNET DRIVER P: Ollie Lho *************** *** 1420,1425 **** L: spa...@vg... L: ult...@vg... - W: http://ultra.linux.cz - W: http://www.geog.ubc.ca/s_linux.html S: Maintained --- 1511,1514 ---- *************** *** 1474,1477 **** --- 1563,1573 ---- S: Maintained + SUPERH WATCHDOG + P: Paul Mundt + M: le...@0x... + L: lin...@li... + W: http://www.linuxsh.org + S: Maintained + SVGA HANDLING P: Martin Mares *************** *** 1485,1495 **** S: Maintained TLAN NETWORK DRIVER ! P: Torben Mathiasen ! M: tor...@co... ! M: to...@ke... ! L: tl...@vu... ! L: lin...@vg... ! W: http://tlan.kernel.dk S: Maintained --- 1581,1610 ---- S: Maintained + TI GRAPH LINK USB (SilverLink) CABLE DRIVER + P: Romain Lievin + M: ro...@lp... + P: Julien Blache + M: jb...@te... + S: Maintained + + TIEMAN VOYAGER USB BRAILLE DISPLAY DRIVER + P: Stephane Dalton + M: sd...@vi... + P: Stéphane Doyon + M: s....@vi... + S: Maintained + + TIEMAN VOYAGER USB BRAILLE DISPLAY DRIVER + P: Stephane Dalton + M: sd...@vi... + P: Stéphane Doyon + M: s....@vi... + S: Maintained + TLAN NETWORK DRIVER ! P: Samuel Chessman ! M: che...@tu... ! L: tla...@li... ! W: http://sourceforge.net/projects/tlan/ S: Maintained *************** *** 1570,1574 **** L: lin...@li... L: lin...@li... ! S: Supported USB BLUETOOTH DRIVER --- 1685,1696 ---- L: lin...@li... L: lin...@li... ! S: Maintained ! ! USB AUERSWALD DRIVER ! P: Wolfgang Muees ! M: wol...@ik... ! L: lin...@li... ! L: lin...@li... ! S: Maintained USB BLUETOOTH DRIVER *************** *** 1587,1590 **** --- 1709,1718 ---- S: Maintained + USB EHCI DRIVER + P: David Brownell + M: dbr...@us... + L: lin...@li... + S: Maintained + USB HID/HIDBP/INPUT DRIVERS P: Vojtech Pavlik *************** *** 1593,1597 **** L: lin...@li... 
W: http://www.suse.cz/development/input/ ! S: Supported USB HUB --- 1721,1725 ---- L: lin...@li... W: http://www.suse.cz/development/input/ ! S: Maintained USB HUB *************** *** 1603,1608 **** USB KAWASAKI LSI DRIVER ! P: Brad Hards ! M: br...@fr... L: lin...@li... L: lin...@li... --- 1731,1736 ---- USB KAWASAKI LSI DRIVER ! P: Oliver Neukum ! M: dr...@ne... L: lin...@li... L: lin...@li... *************** *** 1634,1638 **** USB PEGASUS DRIVER P: Petko Manolov ! M: pe...@dc... L: lin...@li... L: lin...@li... --- 1762,1766 ---- USB PEGASUS DRIVER P: Petko Manolov ! M: pe...@us... L: lin...@li... L: lin...@li... *************** *** 1644,1648 **** L: lin...@li... L: lin...@li... ! S: Supported USB SE401 DRIVER --- 1772,1783 ---- L: lin...@li... L: lin...@li... ! S: Maintained ! ! USB RTL8150 DRIVER ! P: Petko Manolov ! M: pe...@us... ! L: lin...@li... ! L: lin...@li... ! S: Maintained USB SE401 DRIVER *************** *** 1692,1701 **** USB SERIAL KEYSPAN DRIVER ! P: Hugh Blemings ! M: hu...@mi... L: lin...@li... L: lin...@li... S: Maintained - W: http://misc.nu/hugh/keyspan/ USB SUBSYSTEM --- 1827,1836 ---- USB SERIAL KEYSPAN DRIVER ! P: Greg Kroah-Hartman ! M: gr...@kr... L: lin...@li... L: lin...@li... + W: http://www.kroah.com/linux/ S: Maintained USB SUBSYSTEM *************** *** 1715,1718 **** --- 1850,1859 ---- S: Maintained + USB "USBNET" DRIVER + P: David Brownell + M: dbr...@us... + L: lin...@li... + S: Maintained + VFAT FILESYSTEM: P: Gordon Chaffee *************** *** 1735,1742 **** VIDEO FOR LINUX ! P: Alan Cox ! M: Ala...@li... ! W: http://roadrunner.swansea.linux.org.uk/v4l.shtml ! S: Maintained for 2.2 only WAN ROUTER & SANGOMA WANPIPE DRIVERS & API (X.25, FRAME RELAY, PPP, CISCO HDLC) --- 1876,1882 ---- VIDEO FOR LINUX ! P: Gerd Knorr ! M: kr...@by... ! S: Maintained WAN ROUTER & SANGOMA WANPIPE DRIVERS & API (X.25, FRAME RELAY, PPP, CISCO HDLC) *************** *** 1768,1771 **** --- 1908,1918 ---- P: Ingo Molnar M: mi...@re... + S: Maintained + + X86-64 port + P: Andi Kleen + M: ak...@su... + L: di...@x8... + W: http://www.x86-64.org S: Maintained |
Update of /cvsroot/linuxcompressed/linux/mm In directory sc8-pr-cvs1:/tmp/cvs-serv25395/mm Modified Files: Makefile filemap.c memory.c mmap.c oom_kill.c page_alloc.c page_io.c shmem.c swap_state.c swapfile.c vmscan.c Log Message: o Port code to 2.4.20 Bug fix (?) o Changes checks in vswap.c to avoid oopses. It will BUG() instead. Some of the checks were done after the value had been accessed. Note o Virtual swap addresses are temporarily disabled, due to debugging sessions related to the use of swap files instead of swap partitions. Index: Makefile =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/Makefile,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -r1.5 -r1.6 *** Makefile 12 Dec 2001 20:45:46 -0000 1.5 --- Makefile 19 May 2003 01:38:47 -0000 1.6 *************** *** 10,14 **** O_TARGET := mm.o ! export-objs := shmem.o filemap.o obj-y := memory.o mmap.o filemap.o mprotect.o mlock.o mremap.o \ --- 10,14 ---- O_TARGET := mm.o ! export-objs := shmem.o filemap.o memory.o page_alloc.o obj-y := memory.o mmap.o filemap.o mprotect.o mlock.o mremap.o \ Index: filemap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/filemap.c,v retrieving revision 1.42 retrieving revision 1.43 diff -C2 -r1.42 -r1.43 *** filemap.c 29 Nov 2002 21:23:02 -0000 1.42 --- filemap.c 19 May 2003 01:38:47 -0000 1.43 *************** *** 24,28 **** #include <linux/mm.h> #include <linux/iobuf.h> - #include <linux/compiler.h> #include <linux/comp_cache.h> --- 24,27 ---- *************** *** 55,59 **** ! spinlock_t pagecache_lock __cacheline_aligned_in_smp = SPIN_LOCK_UNLOCKED; /* * NOTE: to avoid deadlocking you must never acquire the pagemap_lru_lock --- 54,58 ---- ! spinlock_cacheline_t pagecache_lock_cacheline = {SPIN_LOCK_UNLOCKED}; /* * NOTE: to avoid deadlocking you must never acquire the pagemap_lru_lock *************** *** 65,69 **** * pagecache_lock */ ! spinlock_t pagemap_lru_lock __cacheline_aligned_in_smp = SPIN_LOCK_UNLOCKED; #define CLUSTER_PAGES (1 << page_cluster) --- 64,68 ---- * pagecache_lock */ ! spinlock_cacheline_t pagemap_lru_lock_cacheline = {SPIN_LOCK_UNLOCKED}; #define CLUSTER_PAGES (1 << page_cluster) *************** *** 122,126 **** void __remove_inode_page(struct page *page) { ! if (PageDirty(page)) BUG(); remove_page_from_inode_queue(page); remove_page_from_hash_queue(page); --- 121,126 ---- void __remove_inode_page(struct page *page) { ! if (PageDirty(page) && !PageSwapCache(page)) ! BUG(); remove_page_from_inode_queue(page); remove_page_from_hash_queue(page); *************** *** 156,164 **** if (mapping) { spin_lock(&pagecache_lock); ! list_del(&page->list); ! list_add(&page->list, &mapping->dirty_pages); spin_unlock(&pagecache_lock); ! if (mapping->host) mark_inode_dirty_pages(mapping->host); #ifdef CONFIG_COMP_CACHE --- 156,167 ---- if (mapping) { spin_lock(&pagecache_lock); ! mapping = page->mapping; ! if (mapping) { /* may have been truncated */ ! list_del(&page->list); ! list_add(&page->list, &mapping->dirty_pages); ! } spin_unlock(&pagecache_lock); ! if (mapping && mapping->host) mark_inode_dirty_pages(mapping->host); #ifdef CONFIG_COMP_CACHE *************** *** 582,586 **** while (!list_empty(&mapping->dirty_pages)) { ! struct page *page = list_entry(mapping->dirty_pages.next, struct page, list); list_del(&page->list); --- 585,589 ---- while (!list_empty(&mapping->dirty_pages)) { ! 
struct page *page = list_entry(mapping->dirty_pages.prev, struct page, list); list_del(&page->list); *************** *** 816,819 **** --- 819,882 ---- } + /* + * Knuth recommends primes in approximately golden ratio to the maximum + * integer representable by a machine word for multiplicative hashing. + * Chuck Lever verified the effectiveness of this technique: + * http://www.citi.umich.edu/techreports/reports/citi-tr-00-1.pdf + * + * These primes are chosen to be bit-sparse, that is operations on + * them can use shifts and additions instead of multiplications for + * machines where multiplications are slow. + */ + #if BITS_PER_LONG == 32 + /* 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1 */ + #define GOLDEN_RATIO_PRIME 0x9e370001UL + #elif BITS_PER_LONG == 64 + /* 2^63 + 2^61 - 2^57 + 2^54 - 2^51 - 2^18 + 1 */ + #define GOLDEN_RATIO_PRIME 0x9e37fffffffc0001UL + #else + #error Define GOLDEN_RATIO_PRIME for your wordsize. + #endif + + /* + * In order to wait for pages to become available there must be + * waitqueues associated with pages. By using a hash table of + * waitqueues where the bucket discipline is to maintain all + * waiters on the same queue and wake all when any of the pages + * become available, and for the woken contexts to check to be + * sure the appropriate page became available, this saves space + * at a cost of "thundering herd" phenomena during rare hash + * collisions. + */ + static inline wait_queue_head_t *page_waitqueue(struct page *page) + { + const zone_t *zone = page_zone(page); + wait_queue_head_t *wait = zone->wait_table; + unsigned long hash = (unsigned long)page; + + #if BITS_PER_LONG == 64 + /* Sigh, gcc can't optimise this alone like it does for 32 bits. */ + unsigned long n = hash; + n <<= 18; + hash -= n; + n <<= 33; + hash -= n; + n <<= 3; + hash += n; + n <<= 3; + hash -= n; + n <<= 4; + hash += n; + n <<= 2; + hash += n; + #else + /* On some cpus multiply is faster, on others gcc will do shifts */ + hash *= GOLDEN_RATIO_PRIME; + #endif + hash >>= zone->wait_table_shift; + + return &wait[hash]; + } + /* * Wait for a page to get unlocked. *************** *** 822,832 **** * ie with increased "page->count" so that the page won't * go away during the wait.. */ void ___wait_on_page(struct page *page) { struct task_struct *tsk = current; DECLARE_WAITQUEUE(wait, tsk); ! add_wait_queue(&page->wait, &wait); do { set_task_state(tsk, TASK_UNINTERRUPTIBLE); --- 885,911 ---- * ie with increased "page->count" so that the page won't * go away during the wait.. + * + * The waiting strategy is to get on a waitqueue determined + * by hashing. Waiters will then collide, and the newly woken + * task must then determine whether it was woken for the page + * it really wanted, and go back to sleep on the waitqueue if + * that wasn't it. With the waitqueue semantics, it never leaves + * the waitqueue unless it calls, so the loop moves forward one + * iteration every time there is + * (1) a collision + * and + * (2) one of the colliding pages is woken + * + * This is the thundering herd problem, but it is expected to + * be very rare due to the few pages that are actually being + * waited on at any given time and the quality of the hash function. */ void ___wait_on_page(struct page *page) { + wait_queue_head_t *waitqueue = page_waitqueue(page); struct task_struct *tsk = current; DECLARE_WAITQUEUE(wait, tsk); ! add_wait_queue(waitqueue, &wait); do { set_task_state(tsk, TASK_UNINTERRUPTIBLE); *************** *** 836,852 **** schedule(); } while (PageLocked(page)); ! 
tsk->state = TASK_RUNNING; ! remove_wait_queue(&page->wait, &wait); } void unlock_page(struct page *page) { ! clear_bit(PG_launder, &(page)->flags); smp_mb__before_clear_bit(); if (!test_and_clear_bit(PG_locked, &(page)->flags)) BUG(); smp_mb__after_clear_bit(); ! if (waitqueue_active(&(page)->wait)) ! wake_up(&(page)->wait); } --- 915,946 ---- schedule(); } while (PageLocked(page)); ! __set_task_state(tsk, TASK_RUNNING); ! remove_wait_queue(waitqueue, &wait); } + /* + * unlock_page() is the other half of the story just above + * __wait_on_page(). Here a couple of quick checks are done + * and a couple of flags are set on the page, and then all + * of the waiters for all of the pages in the appropriate + * wait queue are woken. + */ void unlock_page(struct page *page) { ! wait_queue_head_t *waitqueue = page_waitqueue(page); ! ClearPageLaunder(page); smp_mb__before_clear_bit(); if (!test_and_clear_bit(PG_locked, &(page)->flags)) BUG(); smp_mb__after_clear_bit(); ! ! /* ! * Although the default semantics of wake_up() are ! * to wake all, here the specific function is used ! * to make it even more explicit that a number of ! * pages are being waited on here. ! */ ! if (waitqueue_active(waitqueue)) ! wake_up_all(waitqueue); } *************** *** 857,864 **** static void __lock_page(struct page *page) { struct task_struct *tsk = current; DECLARE_WAITQUEUE(wait, tsk); ! add_wait_queue_exclusive(&page->wait, &wait); for (;;) { set_task_state(tsk, TASK_UNINTERRUPTIBLE); --- 951,959 ---- static void __lock_page(struct page *page) { + wait_queue_head_t *waitqueue = page_waitqueue(page); struct task_struct *tsk = current; DECLARE_WAITQUEUE(wait, tsk); ! add_wait_queue_exclusive(waitqueue, &wait); for (;;) { set_task_state(tsk, TASK_UNINTERRUPTIBLE); *************** *** 870,877 **** break; } ! tsk->state = TASK_RUNNING; ! remove_wait_queue(&page->wait, &wait); } - /* --- 965,971 ---- break; } ! __set_task_state(tsk, TASK_RUNNING); ! remove_wait_queue(waitqueue, &wait); } /* *************** *** 1091,1103 **** /* - * Returns locked page at given index in given cache, creating it if needed. - */ - struct page *grab_cache_page(struct address_space *mapping, unsigned long index) - { - return find_or_create_page(mapping, index, mapping->gfp_mask); - } - - - /* * Same as grab_cache_page, but do not wait if the page is unavailable. * This is intended for speculative data generators, where the data can --- 1185,1188 ---- *************** *** 1381,1388 **** * Mark a page as having seen activity. * ! * If it was already so marked, move it ! * to the active queue and drop the referenced ! * bit. Otherwise, just mark it for future ! * action.. */ void mark_page_accessed(struct page *page) --- 1466,1471 ---- * Mark a page as having seen activity. * ! * If it was already so marked, move it to the active queue and drop ! * the referenced bit. Otherwise, just mark it for future action.. */ void mark_page_accessed(struct page *page) *************** *** 1391,1399 **** activate_page(page); ClearPageReferenced(page); ! return; ! } ! ! /* Mark the page referenced, AFTER checking for previous usage.. */ ! SetPageReferenced(page); } --- 1474,1479 ---- activate_page(page); ClearPageReferenced(page); ! } else ! 
SetPageReferenced(page); } *************** *** 1634,1637 **** --- 1714,1718 ---- struct address_space * mapping = filp->f_dentry->d_inode->i_mapping; struct inode * inode = mapping->host; + loff_t size = inode->i_size; new_iobuf = 0; *************** *** 1659,1662 **** --- 1740,1746 ---- goto out_free; + if ((rw == READ) && (offset + count > size)) + count = size - offset; + /* * Flush to disk exclusively the _data_, metadata must remain *************** *** 1689,1692 **** --- 1773,1777 ---- count -= retval; buf += retval; + /* warning: weird semantics here, we're reporting a read behind the end of the file */ progress += retval; } *************** *** 1778,1783 **** size = inode->i_size; if (pos < size) { - if (pos + count > size) - count = size - pos; retval = generic_file_direct_IO(READ, filp, buf, count, pos); if (retval > 0) --- 1863,1866 ---- *************** *** 2307,2310 **** --- 2390,2396 ---- struct file * file = vma->vm_file; + if ( (flags & MS_INVALIDATE) && (vma->vm_flags & VM_LOCKED) ) + return -EBUSY; + if (file && (vma->vm_flags & VM_SHARED)) { ret = filemap_sync(vma, start, end-start, flags); *************** *** 2348,2351 **** --- 2434,2440 ---- if (flags & ~(MS_ASYNC | MS_INVALIDATE | MS_SYNC)) goto out; + if ((flags & MS_ASYNC) && (flags & MS_SYNC)) + goto out; + error = 0; if (end == start) *************** *** 2353,2357 **** /* * If the interval [start,end) covers some unmapped address ranges, ! * just ignore them, but return -EFAULT at the end. */ vma = find_vma(current->mm, start); --- 2442,2446 ---- /* * If the interval [start,end) covers some unmapped address ranges, ! * just ignore them, but return -ENOMEM at the end. */ vma = find_vma(current->mm, start); *************** *** 2359,2368 **** for (;;) { /* Still start < end. */ ! error = -EFAULT; if (!vma) goto out; /* Here start < vma->vm_end. */ if (start < vma->vm_start) { ! unmapped_error = -EFAULT; start = vma->vm_start; } --- 2448,2457 ---- for (;;) { /* Still start < end. */ ! error = -ENOMEM; if (!vma) goto out; /* Here start < vma->vm_end. */ if (start < vma->vm_start) { ! unmapped_error = -ENOMEM; start = vma->vm_start; } *************** *** 2512,2516 **** /* This caps the number of vma's this process can own */ ! if (vma->vm_mm->map_count > MAX_MAP_COUNT) return -ENOMEM; --- 2601,2605 ---- /* This caps the number of vma's this process can own */ ! if (vma->vm_mm->map_count > max_map_count) return -ENOMEM; *************** *** 3077,3081 **** err = -EFBIG; ! if (limit != RLIM_INFINITY) { if (pos >= limit) { send_sig(SIGXFSZ, current, 0); --- 3166,3170 ---- err = -EFBIG; ! 
if (!S_ISBLK(inode->i_mode) && limit != RLIM_INFINITY) { if (pos >= limit) { send_sig(SIGXFSZ, current, 0); Index: memory.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/memory.c,v retrieving revision 1.36 retrieving revision 1.37 diff -C2 -r1.36 -r1.37 *** memory.c 10 Sep 2002 16:43:12 -0000 1.36 --- memory.c 19 May 2003 01:38:48 -0000 1.37 *************** *** 45,48 **** --- 45,49 ---- #include <linux/highmem.h> #include <linux/pagemap.h> + #include <linux/module.h> #include <linux/comp_cache.h> *************** *** 53,56 **** --- 54,58 ---- unsigned long max_mapnr; unsigned long num_physpages; + unsigned long num_mappedpages; void * high_memory; struct page *highmem_start_page; *************** *** 529,532 **** --- 531,536 ---- } + EXPORT_SYMBOL(get_user_pages); + /* * Force in an entire range of pages from the current process's user VA, *************** *** 587,590 **** --- 591,596 ---- * size of the kiobuf, so we have to stop marking pages dirty once the * requested byte count has been reached. + * + * Must be called from process context - set_page_dirty() takes VFS locks. */ *************** *** 604,608 **** if (!PageReserved(page)) ! SetPageDirty(page); remaining -= (PAGE_SIZE - offset); --- 610,614 ---- if (!PageReserved(page)) ! set_page_dirty(page); remaining -= (PAGE_SIZE - offset); *************** *** 1500,1502 **** --- 1506,1529 ---- len, write, 0, NULL, NULL); return ret == len ? 0 : -1; + } + + struct page * vmalloc_to_page(void * vmalloc_addr) + { + unsigned long addr = (unsigned long) vmalloc_addr; + struct page *page = NULL; + pmd_t *pmd; + pte_t *pte; + pgd_t *pgd; + + pgd = pgd_offset_k(addr); + if (!pgd_none(*pgd)) { + pmd = pmd_offset(pgd, addr); + if (!pmd_none(*pmd)) { + pte = pte_offset(pmd, addr); + if (pte_present(*pte)) { + page = pte_page(*pte); + } + } + } + return page; } Index: mmap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/mmap.c,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -r1.8 -r1.9 *** mmap.c 28 Apr 2002 20:51:34 -0000 1.8 --- mmap.c 19 May 2003 01:38:48 -0000 1.9 *************** *** 47,50 **** --- 47,51 ---- int sysctl_overcommit_memory; + int max_map_count = DEFAULT_MAX_MAP_COUNT; /* Check that a process has enough memory to allocate a *************** *** 420,424 **** /* Too many mappings? */ ! if (mm->map_count > MAX_MAP_COUNT) return -ENOMEM; --- 421,425 ---- /* Too many mappings? */ ! if (mm->map_count > max_map_count) return -ENOMEM; *************** *** 485,489 **** /* Clear old maps */ - error = -ENOMEM; munmap_back: vma = find_vma_prepare(mm, addr, &prev, &rb_link, &rb_parent); --- 486,489 ---- *************** *** 555,559 **** * f_op->mmap method. -DaveM */ ! addr = vma->vm_start; vma_link(mm, vma, prev, rb_link, rb_parent); --- 555,582 ---- * f_op->mmap method. -DaveM */ ! if (addr != vma->vm_start) { ! /* ! * It is a bit too late to pretend changing the virtual ! * area of the mapping, we just corrupted userspace ! * in the do_munmap, so FIXME (not in 2.4 to avoid breaking ! * the driver API). ! */ ! struct vm_area_struct * stale_vma; ! /* Since addr changed, we rely on the mmap op to prevent ! * collisions with existing vmas and just use find_vma_prepare ! * to update the tree pointers. ! */ ! addr = vma->vm_start; ! stale_vma = find_vma_prepare(mm, addr, &prev, ! &rb_link, &rb_parent); ! /* ! * Make sure the lowlevel driver did its job right. ! */ ! 
if (unlikely(stale_vma && stale_vma->vm_start < vma->vm_end)) { ! printk(KERN_ERR "buggy mmap operation: [<%p>]\n", ! file ? file->f_op->mmap : NULL); ! BUG(); ! } ! } vma_link(mm, vma, prev, rb_link, rb_parent); *************** *** 926,930 **** /* If we'll make "hole", check the vm areas limit */ if ((mpnt->vm_start < addr && mpnt->vm_end > addr+len) ! && mm->map_count >= MAX_MAP_COUNT) return -ENOMEM; --- 949,953 ---- /* If we'll make "hole", check the vm areas limit */ if ((mpnt->vm_start < addr && mpnt->vm_end > addr+len) ! && mm->map_count >= max_map_count) return -ENOMEM; *************** *** 1047,1051 **** return -ENOMEM; ! if (mm->map_count > MAX_MAP_COUNT) return -ENOMEM; --- 1070,1074 ---- return -ENOMEM; ! if (mm->map_count > max_map_count) return -ENOMEM; *************** *** 1053,1060 **** return -ENOMEM; ! flags = calc_vm_flags(PROT_READ|PROT_WRITE|PROT_EXEC, ! MAP_FIXED|MAP_PRIVATE) | mm->def_flags; ! ! flags |= VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC; /* Can we just expand an old anonymous mapping? */ --- 1076,1080 ---- return -ENOMEM; ! flags = VM_DATA_DEFAULT_FLAGS | mm->def_flags; /* Can we just expand an old anonymous mapping? */ *************** *** 1140,1144 **** mpnt = next; } - flush_tlb_mm(mm); /* This is just debugging */ --- 1160,1163 ---- *************** *** 1147,1150 **** --- 1166,1171 ---- clear_page_tables(mm, FIRST_USER_PGD_NR, USER_PTRS_PER_PGD); + + flush_tlb_mm(mm); } Index: oom_kill.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/oom_kill.c,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -r1.8 -r1.9 *** oom_kill.c 14 Jan 2002 12:05:08 -0000 1.8 --- oom_kill.c 19 May 2003 01:38:48 -0000 1.9 *************** *** 112,117 **** /* * Simple selection loop. We chose the process with the highest ! * number of 'points'. We need the locks to make sure that the ! * list of task structs doesn't change while we look the other way. * * (not docbooked, we don't want this one cluttering up the manual) --- 112,116 ---- /* * Simple selection loop. We chose the process with the highest ! * number of 'points'. We expect the caller will lock the tasklist. * * (not docbooked, we don't want this one cluttering up the manual) *************** *** 123,127 **** struct task_struct *chosen = NULL; - read_lock(&tasklist_lock); for_each_task(p) { if (p->pid) { --- 122,125 ---- *************** *** 133,137 **** } } - read_unlock(&tasklist_lock); return chosen; } --- 131,134 ---- *************** *** 172,176 **** static void oom_kill(void) { ! struct task_struct *p = select_bad_process(), *q; /* Found nothing?!?! Either we hang forever, or we panic. */ --- 169,176 ---- static void oom_kill(void) { ! struct task_struct *p, *q; ! ! read_lock(&tasklist_lock); ! p = select_bad_process(); /* Found nothing?!?! Either we hang forever, or we panic. */ *************** *** 179,185 **** /* kill all processes that share the ->mm (i.e. all threads) */ - read_lock(&tasklist_lock); for_each_task(q) { ! if(q->mm == p->mm) oom_kill_task(q); } read_unlock(&tasklist_lock); --- 179,185 ---- /* kill all processes that share the ->mm (i.e. all threads) */ for_each_task(q) { ! if (q->mm == p->mm) ! oom_kill_task(q); } read_unlock(&tasklist_lock); *************** *** 190,195 **** * for more memory. */ ! current->policy |= SCHED_YIELD; ! schedule(); return; } --- 190,194 ---- * for more memory. */ ! yield(); return; } *************** *** 200,204 **** void out_of_memory(void) { ! 
static unsigned long first, last, count; unsigned long now, since; --- 199,203 ---- void out_of_memory(void) { ! static unsigned long first, last, count, lastkill; unsigned long now, since; *************** *** 243,248 **** --- 242,257 ---- /* + * If we just killed a process, wait a while + * to give that task a chance to exit. This + * avoids killing multiple processes needlessly. + */ + since = now - lastkill; + if (since < HZ*5) + return; + + /* * Ok, really out of memory. Kill something. */ + lastkill = now; oom_kill(); Index: page_alloc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/page_alloc.c,v retrieving revision 1.26 retrieving revision 1.27 diff -C2 -r1.26 -r1.27 *** page_alloc.c 29 Nov 2002 21:23:02 -0000 1.26 --- page_alloc.c 19 May 2003 01:38:48 -0000 1.27 *************** *** 2,5 **** --- 2,8 ---- * linux/mm/page_alloc.c * + * Manages the free list, the system allocates free pages here. + * Note that kmalloc() lives in slab.c + * * Copyright (C) 1991, 1992, 1993, 1994 Linus Torvalds * Swap reorganised 29.12.95, Stephen Tweedie *************** *** 18,22 **** #include <linux/bootmem.h> #include <linux/slab.h> ! #include <linux/compiler.h> #include <linux/comp_cache.h> --- 21,25 ---- #include <linux/bootmem.h> #include <linux/slab.h> ! #include <linux/module.h> #include <linux/comp_cache.h> *************** *** 24,31 **** int nr_active_pages; int nr_inactive_pages; ! struct list_head inactive_list; ! struct list_head active_list; pg_data_t *pgdat_list; static char *zone_names[MAX_NR_ZONES] = { "DMA", "Normal", "HighMem" }; #ifdef CONFIG_COMP_CACHE --- 27,43 ---- int nr_active_pages; int nr_inactive_pages; ! LIST_HEAD(inactive_list); ! LIST_HEAD(active_list); pg_data_t *pgdat_list; + /* + * + * The zone_table array is used to look up the address of the + * struct zone corresponding to a given zone number (ZONE_DMA, + * ZONE_NORMAL, or ZONE_HIGHMEM). + */ + zone_t *zone_table[MAX_NR_ZONES*MAX_NR_NODES]; + EXPORT_SYMBOL(zone_table); + static char *zone_names[MAX_NR_ZONES] = { "DMA", "Normal", "HighMem" }; #ifdef CONFIG_COMP_CACHE *************** *** 40,71 **** /* - * Free_page() adds the page to the free lists. This is optimized for - * fast normal cases (no error jumps taken normally). - * - * The way to optimize jumps for gcc-2.2.2 is to: - * - select the "normal" case and put it inside the if () { XXX } - * - no else-statements if you can avoid them - * - * With the above two rules, you get a straight-line execution path - * for the normal case, giving better asm-code. - */ - - #define memlist_init(x) INIT_LIST_HEAD(x) - #define memlist_add_head list_add - #define memlist_add_tail list_add_tail - #define memlist_del list_del - #define memlist_entry list_entry - #define memlist_next(x) ((x)->next) - #define memlist_prev(x) ((x)->prev) - - /* * Temporary debugging check. */ ! #define BAD_RANGE(zone,x) (((zone) != (x)->zone) || (((x)-mem_map) < (zone)->zone_start_mapnr) || (((x)-mem_map) >= (zone)->zone_start_mapnr+(zone)->size)) /* ! * Buddy system. Hairy. You really aren't expected to understand this * ! * Hint: -mask = 1+~mask */ --- 52,87 ---- /* * Temporary debugging check. */ ! #define BAD_RANGE(zone, page) \ ! ( \ ! (((page) - mem_map) >= ((zone)->zone_start_mapnr+(zone)->size)) \ ! || (((page) - mem_map) < (zone)->zone_start_mapnr) \ ! || ((zone) != page_zone(page)) \ ! ) /* ! * Freeing function for a buddy system allocator. ! * Contrary to prior comments, this is *NOT* hairy, and there ! 
* is no reason for anyone not to understand it. * ! * The concept of a buddy system is to maintain direct-mapped tables ! * (containing bit values) for memory blocks of various "orders". ! * The bottom level table contains the map for the smallest allocatable ! * units of memory (here, pages), and each level above it describes ! * pairs of units from the levels below, hence, "buddies". ! * At a high level, all that happens here is marking the table entry ! * at the bottom level available, and propagating the changes upward ! * as necessary, plus some accounting needed to play nicely with other ! * parts of the VM system. ! * At each level, we keep one bit for each pair of blocks, which ! * is set to 1 iff only one of the pair is allocated. So when we ! * are allocating or freeing one, we can derive the state of the ! * other. That is, if we allocate a small block, and both were ! * free, the remainder of the region must be split into blocks. ! * If a block is freed, and its buddy is also free, then this ! * triggers coalescing into a block of larger size. ! * ! * -- wli */ *************** *** 78,86 **** zone_t *zone; ! /* Yes, think what happens when other parts of the kernel take * a reference to a page in order to pin it for io. -ben */ ! if (PageLRU(page)) lru_cache_del(page); if (page->buffers) --- 94,106 ---- zone_t *zone; ! /* ! * Yes, think what happens when other parts of the kernel take * a reference to a page in order to pin it for io. -ben */ ! if (PageLRU(page)) { ! if (unlikely(in_interrupt())) ! BUG(); lru_cache_del(page); + } if (page->buffers) *************** *** 90,99 **** if (!VALID_PAGE(page)) BUG(); - if (PageSwapCache(page)) - BUG(); if (PageLocked(page)) BUG(); - if (PageLRU(page)) - BUG(); if (PageActive(page)) BUG(); --- 110,115 ---- *************** *** 104,108 **** back_local_freelist: ! zone = page->zone; mask = (~0UL) << order; --- 120,124 ---- back_local_freelist: ! zone = page_zone(page); mask = (~0UL) << order; *************** *** 131,134 **** --- 147,152 ---- /* * Move the buddy up one level. + * This code is taking advantage of the identity: + * -mask = 1+~mask */ buddy1 = base + (page_idx ^ -mask); *************** *** 139,143 **** BUG(); ! memlist_del(&buddy1->list); mask <<= 1; area++; --- 157,161 ---- BUG(); ! list_del(&buddy1->list); mask <<= 1; area++; *************** *** 145,149 **** page_idx &= mask; } ! memlist_add_head(&(base + page_idx)->list, &area->free_list); spin_unlock_irqrestore(&zone->lock, flags); --- 163,167 ---- page_idx &= mask; } ! list_add(&(base + page_idx)->list, &area->free_list); spin_unlock_irqrestore(&zone->lock, flags); *************** *** 175,179 **** high--; size >>= 1; ! memlist_add_head(&(page)->list, &(area)->free_list); MARK_USED(index, high, area); index += size; --- 193,197 ---- high--; size >>= 1; ! list_add(&(page)->list, &(area)->free_list); MARK_USED(index, high, area); index += size; *************** *** 197,209 **** do { head = &area->free_list; ! curr = memlist_next(head); if (curr != head) { unsigned int index; ! page = memlist_entry(curr, struct page, list); if (BAD_RANGE(zone,page)) BUG(); ! memlist_del(curr); index = page - zone->zone_mem_map; if (curr_order != MAX_ORDER-1) --- 215,227 ---- do { head = &area->free_list; ! curr = head->next; if (curr != head) { unsigned int index; ! page = list_entry(curr, struct page, list); if (BAD_RANGE(zone,page)) BUG(); ! 
list_del(curr); index = page - zone->zone_mem_map; if (curr_order != MAX_ORDER-1) *************** *** 253,257 **** current->flags |= PF_MEMALLOC | PF_FREE_PAGES; ! __freed = try_to_free_pages(classzone, gfp_mask, order); current->flags &= ~(PF_MEMALLOC | PF_FREE_PAGES); --- 271,275 ---- current->flags |= PF_MEMALLOC | PF_FREE_PAGES; ! __freed = try_to_free_pages_zone(classzone, gfp_mask); current->flags &= ~(PF_MEMALLOC | PF_FREE_PAGES); *************** *** 269,273 **** do { tmp = list_entry(entry, struct page, list); ! if (tmp->index == order && memclass(tmp->zone, classzone)) { list_del(entry); current->nr_local_pages--; --- 287,291 ---- do { tmp = list_entry(entry, struct page, list); ! if (tmp->index == order && memclass(page_zone(tmp), classzone)) { list_del(entry); current->nr_local_pages--; *************** *** 281,286 **** if (!VALID_PAGE(page)) BUG(); - if (PageSwapCache(page)) - BUG(); if (PageLocked(page)) BUG(); --- 299,302 ---- *************** *** 325,328 **** --- 341,346 ---- zone = zonelist->zones; classzone = *zone; + if (classzone == NULL) + return NULL; min = 1UL << order; for (;;) { *************** *** 408,414 **** /* Yield for kswapd, and try again */ ! current->policy |= SCHED_YIELD; ! __set_current_state(TASK_RUNNING); ! schedule(); goto rebalance; } --- 426,430 ---- /* Yield for kswapd, and try again */ ! yield(); goto rebalance; } *************** *** 457,470 **** unsigned int nr_free_pages (void) { ! unsigned int sum; zone_t *zone; - pg_data_t *pgdat = pgdat_list; ! sum = 0; ! while (pgdat) { ! for (zone = pgdat->node_zones; zone < pgdat->node_zones + MAX_NR_ZONES; zone++) ! sum += zone->free_pages; ! pgdat = pgdat->node_next; ! } return sum; } --- 473,482 ---- unsigned int nr_free_pages (void) { ! unsigned int sum = 0; zone_t *zone; ! for_each_zone(zone) ! sum += zone->free_pages; ! return sum; } *************** *** 475,482 **** unsigned int nr_free_buffer_pages (void) { ! pg_data_t *pgdat = pgdat_list; unsigned int sum = 0; ! do { zonelist_t *zonelist = pgdat->node_zonelists + (GFP_USER & GFP_ZONEMASK); zone_t **zonep = zonelist->zones; --- 487,494 ---- unsigned int nr_free_buffer_pages (void) { ! pg_data_t *pgdat; unsigned int sum = 0; ! for_each_pgdat(pgdat) { zonelist_t *zonelist = pgdat->node_zonelists + (GFP_USER & GFP_ZONEMASK); zone_t **zonep = zonelist->zones; *************** *** 489,495 **** sum += size - high; } ! ! pgdat = pgdat->node_next; ! } while (pgdat); return sum; --- 501,505 ---- sum += size - high; } ! } return sum; *************** *** 499,509 **** unsigned int nr_free_highpages (void) { ! pg_data_t *pgdat = pgdat_list; unsigned int pages = 0; ! while (pgdat) { pages += pgdat->node_zones[ZONE_HIGHMEM].free_pages; ! pgdat = pgdat->node_next; ! } return pages; } --- 509,518 ---- unsigned int nr_free_highpages (void) { ! pg_data_t *pgdat; unsigned int pages = 0; ! for_each_pgdat(pgdat) pages += pgdat->node_zones[ZONE_HIGHMEM].free_pages; ! return pages; } *************** *** 560,565 **** nr = 0; for (;;) { ! curr = memlist_next(curr); ! if (curr == head) break; nr++; --- 569,573 ---- nr = 0; for (;;) { ! if ((curr = curr->next) == head) break; nr++; *************** *** 631,634 **** --- 639,684 ---- } + /* + * Helper functions to size the waitqueue hash table. + * Essentially these want to choose hash table sizes sufficiently + * large so that collisions trying to wait on pages are rare. + * But in fact, the number of active page waitqueues on typical + * systems is ridiculously low, less than 200. 
So this is even + * conservative, even though it seems large. + * + * The constant PAGES_PER_WAITQUEUE specifies the ratio of pages to + * waitqueues, i.e. the size of the waitq table given the number of pages. + */ + #define PAGES_PER_WAITQUEUE 256 + + static inline unsigned long wait_table_size(unsigned long pages) + { + unsigned long size = 1; + + pages /= PAGES_PER_WAITQUEUE; + + while (size < pages) + size <<= 1; + + /* + * Once we have dozens or even hundreds of threads sleeping + * on IO we've got bigger problems than wait queue collision. + * Limit the size of the wait table to a reasonable size. + */ + size = min(size, 4096UL); + + return size; + } + + /* + * This is an integer logarithm so that shifts can be used later + * to extract the more random high bits from the multiplicative + * hash function before the remainder is taken. + */ + static inline unsigned long wait_table_bits(unsigned long size) + { + return ffz(~size); + } + #define LONG_ALIGN(x) (((x)+(sizeof(long))-1)&~((sizeof(long))-1)) *************** *** 682,686 **** unsigned long *zholes_size, struct page *lmem_map) { - struct page *p; unsigned long i, j; unsigned long map_size; --- 732,735 ---- *************** *** 703,709 **** printk("On node %d totalpages: %lu\n", nid, realtotalpages); - INIT_LIST_HEAD(&active_list); - INIT_LIST_HEAD(&inactive_list); - /* * Some architectures (with lots of mem and discontinous memory --- 752,755 ---- *************** *** 725,740 **** pgdat->nr_zones = 0; - /* - * Initially all pages are reserved - free ones are freed - * up by free_all_bootmem() once the early boot process is - * done. - */ - for (p = lmem_map; p < lmem_map + totalpages; p++) { - set_page_count(p, 0); - SetPageReserved(p); - init_waitqueue_head(&p->wait); - memlist_init(&p->list); - } - offset = lmem_map - mem_map; for (j = 0; j < MAX_NR_ZONES; j++) { --- 771,774 ---- *************** *** 743,746 **** --- 777,781 ---- unsigned long size, realsize; + zone_table[nid * MAX_NR_ZONES + j] = zone; realsize = size = zones_size[j]; if (zholes_size) *************** *** 757,760 **** --- 792,809 ---- continue; + /* + * The per-page waitqueue mechanism uses hashed waitqueues + * per zone. + */ + zone->wait_table_size = wait_table_size(size); + zone->wait_table_shift = + BITS_PER_LONG - wait_table_bits(zone->wait_table_size); + zone->wait_table = (wait_queue_head_t *) + alloc_bootmem_node(pgdat, zone->wait_table_size + * sizeof(wait_queue_head_t)); + + for(i = 0; i < zone->wait_table_size; ++i) + init_waitqueue_head(zone->wait_table + i); + pgdat->nr_zones = j+1; *************** *** 775,783 **** printk("BUG: wrong zone alignment, it will crash\n"); for (i = 0; i < size; i++) { struct page *page = mem_map + offset + i; ! page->zone = zone; if (j != ZONE_HIGHMEM) ! page->virtual = __va(zone_start_paddr); zone_start_paddr += PAGE_SIZE; } --- 824,840 ---- printk("BUG: wrong zone alignment, it will crash\n"); + /* + * Initially all pages are reserved - free ones are freed + * up by free_all_bootmem() once the early boot process is + * done. Non-atomic initialization, single-pass. + */ for (i = 0; i < size; i++) { struct page *page = mem_map + offset + i; ! set_page_zone(page, nid * MAX_NR_ZONES + j); ! set_page_count(page, 0); ! SetPageReserved(page); ! INIT_LIST_HEAD(&page->list); if (j != ZONE_HIGHMEM) ! set_page_address(page, __va(zone_start_paddr)); zone_start_paddr += PAGE_SIZE; } *************** *** 787,791 **** unsigned long bitmap_size; ! 
memlist_init(&zone->free_area[i].free_list); if (i == MAX_ORDER-1) { zone->free_area[i].map = NULL; --- 844,848 ---- unsigned long bitmap_size; ! INIT_LIST_HEAD(&zone->free_area[i].free_list); if (i == MAX_ORDER-1) { zone->free_area[i].map = NULL; Index: page_io.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/page_io.c,v retrieving revision 1.6 retrieving revision 1.7 diff -C2 -r1.6 -r1.7 *** page_io.c 10 Sep 2002 16:43:15 -0000 1.6 --- page_io.c 19 May 2003 01:38:49 -0000 1.7 *************** *** 73,81 **** /* block_size == PAGE_SIZE/zones_used */ brw_page(rw, page, dev, zones, block_size); - - /* Note! For consistency we do all of the logic, - * decrementing the page count, and unlocking the page in the - * swap lock map - in the IO completion handler. - */ return 1; } --- 73,76 ---- *************** *** 100,105 **** if (!PageSwapCache(page)) PAGE_BUG(page); - if (page->mapping != &swapper_space) - PAGE_BUG(page); if (!rw_swap_page_base(rw, entry, page)) UnlockPage(page); --- 95,98 ---- *************** *** 117,129 **** if (!PageLocked(page)) PAGE_BUG(page); - if (PageSwapCache(page)) - PAGE_BUG(page); if (page->mapping) PAGE_BUG(page); /* needs sync_page to wait I/O completation */ page->mapping = &swapper_space; ! if (!rw_swap_page_base(rw, entry, page)) ! UnlockPage(page); ! wait_on_page(page); page->mapping = NULL; } --- 110,122 ---- if (!PageLocked(page)) PAGE_BUG(page); if (page->mapping) PAGE_BUG(page); /* needs sync_page to wait I/O completation */ page->mapping = &swapper_space; ! if (rw_swap_page_base(rw, entry, page)) ! lock_page(page); ! if (!block_flushpage(page, 0)) ! PAGE_BUG(page); page->mapping = NULL; + UnlockPage(page); } Index: shmem.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/shmem.c,v retrieving revision 1.22 retrieving revision 1.23 diff -C2 -r1.22 -r1.23 *** shmem.c 10 Sep 2002 16:43:16 -0000 1.22 --- shmem.c 19 May 2003 01:38:49 -0000 1.23 *************** *** 36,39 **** --- 36,47 ---- #define ENTRIES_PER_PAGE (PAGE_CACHE_SIZE/sizeof(unsigned long)) + #define BLOCKS_PER_PAGE (PAGE_CACHE_SIZE/512) + + #define SHMEM_MAX_INDEX (SHMEM_NR_DIRECT + ENTRIES_PER_PAGE * (ENTRIES_PER_PAGE/2) * (ENTRIES_PER_PAGE+1)) + #define SHMEM_MAX_BYTES ((unsigned long long)SHMEM_MAX_INDEX << PAGE_CACHE_SHIFT) + #define VM_ACCT(size) (((size) + PAGE_CACHE_SIZE - 1) >> PAGE_SHIFT) + + /* Pretend that each entry is of this size in directory's i_size */ + #define BOGO_DIRENT_SIZE 20 #define SHMEM_SB(sb) (&sb->u.shmem_sb) *************** *** 43,47 **** static struct file_operations shmem_file_operations; static struct inode_operations shmem_inode_operations; - static struct file_operations shmem_dir_operations; static struct inode_operations shmem_dir_inode_operations; static struct vm_operations_struct shmem_vm_ops; --- 51,54 ---- *************** *** 51,55 **** atomic_t shmem_nrpages = ATOMIC_INIT(0); /* Not used right now */ ! #define BLOCKS_PER_PAGE (PAGE_CACHE_SIZE/512) /* --- 58,62 ---- atomic_t shmem_nrpages = ATOMIC_INIT(0); /* Not used right now */ ! 
static struct page *shmem_getpage_locked(struct shmem_inode_info *, struct inode *, unsigned long); /* *************** *** 128,134 **** * +-> 52-55 */ - - #define SHMEM_MAX_BLOCKS (SHMEM_NR_DIRECT + ENTRIES_PER_PAGE * ENTRIES_PER_PAGE/2*(ENTRIES_PER_PAGE+1)) - static swp_entry_t * shmem_swp_entry (struct shmem_inode_info *info, unsigned long index, unsigned long page) { --- 135,138 ---- *************** *** 183,187 **** swp_entry_t * res; ! if (index >= SHMEM_MAX_BLOCKS) return ERR_PTR(-EFBIG); --- 187,191 ---- swp_entry_t * res; ! if (index >= SHMEM_MAX_INDEX) return ERR_PTR(-EFBIG); *************** *** 315,318 **** --- 319,323 ---- { unsigned long index; + unsigned long partial; unsigned long freed = 0; struct shmem_inode_info * info = SHMEM_I(inode); *************** *** 322,325 **** --- 327,352 ---- spin_lock (&info->lock); index = (inode->i_size + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT; + partial = inode->i_size & ~PAGE_CACHE_MASK; + + if (partial) { + swp_entry_t *entry = shmem_swp_entry(info, index-1, 0); + struct page *page; + /* + * This check is racy: it's faintly possible that page + * was assigned to swap during truncate_inode_pages, + * and now assigned to file; but better than nothing. + */ + if (!IS_ERR(entry) && entry->val) { + spin_unlock(&info->lock); + page = shmem_getpage_locked(info, inode, index-1); + if (!IS_ERR(page)) { + memclear_highpage_flush(page, partial, + PAGE_CACHE_SIZE - partial); + UnlockPage(page); + page_cache_release(page); + } + spin_lock(&info->lock); + } + } while (index < info->next_index) *************** *** 336,344 **** struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb); ! inode->i_size = 0; ! if (inode->i_op->truncate == shmem_truncate){ spin_lock (&shmem_ilock); list_del (&SHMEM_I(inode)->list); spin_unlock (&shmem_ilock); shmem_truncate (inode); } --- 363,371 ---- struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb); ! if (inode->i_op->truncate == shmem_truncate) { spin_lock (&shmem_ilock); list_del (&SHMEM_I(inode)->list); spin_unlock (&shmem_ilock); + inode->i_size = 0; shmem_truncate (inode); } *************** *** 349,374 **** } ! static int shmem_clear_swp (swp_entry_t entry, swp_entry_t *ptr, int size) { swp_entry_t *test; ! for (test = ptr; test < ptr + size; test++) { ! if (test->val == entry.val) { ! swap_free (entry); ! *test = (swp_entry_t) {0}; return test - ptr; - } } return -1; } ! static int shmem_unuse_inode (struct shmem_inode_info *info, swp_entry_t entry, struct page *page) { swp_entry_t *ptr; unsigned long idx; int offset; ! idx = 0; spin_lock (&info->lock); ! offset = shmem_clear_swp (entry, info->i_direct, SHMEM_NR_DIRECT); if (offset >= 0) goto found; --- 376,403 ---- } ! static inline int shmem_find_swp(swp_entry_t entry, swp_entry_t *ptr, swp_entry_t *eptr) ! { swp_entry_t *test; ! for (test = ptr; test < eptr; test++) { ! if (test->val == entry.val) return test - ptr; } return -1; } ! static int shmem_unuse_inode(struct shmem_inode_info *info, swp_entry_t entry, struct page *page) { swp_entry_t *ptr; unsigned long idx; int offset; ! idx = 0; + ptr = info->i_direct; spin_lock (&info->lock); ! offset = info->next_index; ! if (offset > SHMEM_NR_DIRECT) ! offset = SHMEM_NR_DIRECT; ! offset = shmem_find_swp(entry, ptr, ptr + offset); if (offset >= 0) goto found; *************** *** 379,383 **** if (IS_ERR(ptr)) continue; ! offset = shmem_clear_swp (entry, ptr, ENTRIES_PER_PAGE); if (offset >= 0) goto found; --- 408,415 ---- if (IS_ERR(ptr)) continue; ! offset = info->next_index - idx; ! if (offset > ENTRIES_PER_PAGE) ! 
offset = ENTRIES_PER_PAGE; ! offset = shmem_find_swp(entry, ptr, ptr + offset); if (offset >= 0) goto found; *************** *** 387,391 **** found: if (PageCompressed(page)) ! decompress_swap_cache_page(page); delete_from_swap_cache(page); add_to_page_cache(page, info->inode->i_mapping, offset + idx); --- 419,425 ---- found: if (PageCompressed(page)) ! decompress_swap_cache_page(page); ! swap_free(entry); ! ptr[offset] = (swp_entry_t) {0}; delete_from_swap_cache(page); add_to_page_cache(page, info->inode->i_mapping, offset + idx); *************** *** 398,402 **** /* ! * unuse_shmem() search for an eventually swapped out shmem page. */ void shmem_unuse(swp_entry_t entry, struct page *page) --- 432,436 ---- /* ! * shmem_unuse() search for an eventually swapped out shmem page. */ void shmem_unuse(swp_entry_t entry, struct page *page) *************** *** 409,414 **** info = list_entry(p, struct shmem_inode_info, list); ! if (shmem_unuse_inode(info, entry, page)) break; } spin_unlock (&shmem_ilock); --- 443,452 ---- info = list_entry(p, struct shmem_inode_info, list); ! if (info->swapped && shmem_unuse_inode(info, entry, page)) { ! /* move head to start search for next from here */ ! list_del(&shmem_inodes); ! list_add_tail(&shmem_inodes, p); break; + } } spin_unlock (&shmem_ilock); *************** *** 531,535 **** /* Look it up and read it in.. */ ! page = find_get_page(&swapper_space, entry->val); if (!page) { swp_entry_t swap = *entry; --- 569,573 ---- /* Look it up and read it in.. */ ! page = lookup_swap_cache(*entry); if (!page) { swp_entry_t swap = *entry; *************** *** 588,591 **** --- 626,630 ---- return ERR_PTR(-ENOMEM); clear_highpage(page); + flush_dcache_page(page); inode->i_blocks += BLOCKS_PER_PAGE; add_to_page_cache (page, mapping, idx); *************** *** 707,717 **** inode->i_fop = &shmem_file_operations; spin_lock (&shmem_ilock); ! list_add (&SHMEM_I(inode)->list, &shmem_inodes); spin_unlock (&shmem_ilock); break; case S_IFDIR: inode->i_nlink++; inode->i_op = &shmem_dir_inode_operations; ! inode->i_fop = &shmem_dir_operations; break; case S_IFLNK: --- 746,758 ---- inode->i_fop = &shmem_file_operations; spin_lock (&shmem_ilock); ! list_add_tail(&info->list, &shmem_inodes); spin_unlock (&shmem_ilock); break; case S_IFDIR: inode->i_nlink++; + /* Some things misbehave if size == 0 on a directory */ + inode->i_size = 2 * BOGO_DIRENT_SIZE; inode->i_op = &shmem_dir_inode_operations; ! inode->i_fop = &dcache_dir_ops; break; case S_IFLNK: *************** *** 884,888 **** status = -EFAULT; ClearPageUptodate(page); - kunmap(page); goto unlock; } --- 925,928 ---- *************** *** 979,983 **** buf->f_ffree = sbinfo->free_inodes; spin_unlock (&sbinfo->stat_lock); ! buf->f_namelen = 255; return 0; } --- 1019,1023 ---- buf->f_ffree = sbinfo->free_inodes; spin_unlock (&sbinfo->stat_lock); ! 
buf->f_namelen = NAME_MAX; return 0; } *************** *** 1001,1006 **** int error = -ENOSPC; - dir->i_ctime = dir->i_mtime = CURRENT_TIME; if (inode) { d_instantiate(dentry, inode); dget(dentry); /* Extra count - pin the dentry in core */ --- 1041,1047 ---- int error = -ENOSPC; if (inode) { + dir->i_size += BOGO_DIRENT_SIZE; + dir->i_ctime = dir->i_mtime = CURRENT_TIME; d_instantiate(dentry, inode); dget(dentry); /* Extra count - pin the dentry in core */ *************** *** 1035,1038 **** --- 1076,1080 ---- return -EPERM; + dir->i_size += BOGO_DIRENT_SIZE; inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME; inode->i_nlink++; *************** *** 1079,1082 **** --- 1121,1126 ---- { struct inode *inode = dentry->d_inode; + + dir->i_size -= BOGO_DIRENT_SIZE; inode->i_ctime = dir->i_ctime = dir->i_mtime = CURRENT_TIME; inode->i_nlink--; *************** *** 1102,1123 **** static int shmem_rename(struct inode * old_dir, struct dentry *old_dentry, struct inode * new_dir,struct dentry *new_dentry) { ! int error = -ENOTEMPTY; ! if (shmem_empty(new_dentry)) { ! struct inode *inode = new_dentry->d_inode; ! if (inode) { ! inode->i_ctime = CURRENT_TIME; ! inode->i_nlink--; ! dput(new_dentry); ! } ! error = 0; ! old_dentry->d_inode->i_ctime = old_dir->i_ctime = old_dir->i_mtime = CURRENT_TIME; } ! return error; } static int shmem_symlink(struct inode * dir, struct dentry *dentry, const char * symname) { - int error; int len; struct inode *inode; --- 1146,1174 ---- static int shmem_rename(struct inode * old_dir, struct dentry *old_dentry, struct inode * new_dir,struct dentry *new_dentry) { ! struct inode *inode = old_dentry->d_inode; ! int they_are_dirs = S_ISDIR(inode->i_mode); ! if (!shmem_empty(new_dentry)) ! return -ENOTEMPTY; ! ! if (new_dentry->d_inode) { ! (void) shmem_unlink(new_dir, new_dentry); ! if (they_are_dirs) ! old_dir->i_nlink--; ! } else if (they_are_dirs) { ! old_dir->i_nlink--; ! new_dir->i_nlink++; } ! ! old_dir->i_size -= BOGO_DIRENT_SIZE; ! new_dir->i_size += BOGO_DIRENT_SIZE; ! old_dir->i_ctime = old_dir->i_mtime = ! new_dir->i_ctime = new_dir->i_mtime = ! inode->i_ctime = CURRENT_TIME; ! return 0; } static int shmem_symlink(struct inode * dir, struct dentry *dentry, const char * symname) { int len; struct inode *inode; *************** *** 1126,1138 **** struct shmem_inode_info * info; - error = shmem_mknod(dir, dentry, S_IFLNK | S_IRWXUGO, 0); - if (error) - return error; - len = strlen(symname) + 1; if (len > PAGE_CACHE_SIZE) return -ENAMETOOLONG; ! ! inode = dentry->d_inode; info = SHMEM_I(inode); inode->i_size = len-1; --- 1177,1188 ---- struct shmem_inode_info * info; len = strlen(symname) + 1; if (len > PAGE_CACHE_SIZE) return -ENAMETOOLONG; ! ! inode = shmem_get_inode(dir->i_sb, S_IFLNK|S_IRWXUGO, 0); ! if (!inode) ! return -ENOSPC; ! 
info = SHMEM_I(inode); inode->i_size = len-1; *************** *** 1142,1154 **** inode->i_op = &shmem_symlink_inline_operations; } else { - spin_lock (&shmem_ilock); - list_add (&info->list, &shmem_inodes); - spin_unlock (&shmem_ilock); down(&info->sem); page = shmem_getpage_locked(info, inode, 0); if (IS_ERR(page)) { up(&info->sem); return PTR_ERR(page); } kaddr = kmap(page); memcpy(kaddr, symname, len); --- 1192,1206 ---- inode->i_op = &shmem_symlink_inline_operations; } else { down(&info->sem); page = shmem_getpage_locked(info, inode, 0); if (IS_ERR(page)) { up(&info->sem); + iput(inode); return PTR_ERR(page); } + inode->i_op = &shmem_symlink_inode_operations; + spin_lock (&shmem_ilock); + list_add_tail(&info->list, &shmem_inodes); + spin_unlock (&shmem_ilock); kaddr = kmap(page); memcpy(kaddr, symname, len); *************** *** 1158,1164 **** page_cache_release(page); up(&info->sem); - inode->i_op = &shmem_symlink_inode_operations; } dir->i_ctime = dir->i_mtime = CURRENT_TIME; return 0; } --- 1210,1218 ---- page_cache_release(page); up(&info->sem); } + dir->i_size += BOGO_DIRENT_SIZE; dir->i_ctime = dir->i_mtime = CURRENT_TIME; + d_instantiate(dentry, inode); + dget(dentry); return 0; } *************** *** 1321,1325 **** sbinfo->max_inodes = inodes; sbinfo->free_inodes = inodes; ! sb->s_maxbytes = (unsigned long long) SHMEM_MAX_BLOCKS << PAGE_CACHE_SHIFT; sb->s_blocksize = PAGE_CACHE_SIZE; sb->s_blocksize_bits = PAGE_CACHE_SHIFT; --- 1375,1379 ---- sbinfo->max_inodes = inodes; sbinfo->free_inodes = inodes; ! sb->s_maxbytes = SHMEM_MAX_BYTES; sb->s_blocksize = PAGE_CACHE_SIZE; sb->s_blocksize_bits = PAGE_CACHE_SHIFT; *************** *** 1360,1371 **** }; - static struct file_operations shmem_dir_operations = { - read: generic_read_dir, - readdir: dcache_readdir, - #ifdef CONFIG_TMPFS - fsync: shmem_sync_file, - #endif - }; - static struct inode_operations shmem_dir_inode_operations = { #ifdef CONFIG_TMPFS --- 1414,1417 ---- *************** *** 1463,1470 **** int vm_enough_memory(long pages); ! if (size > (unsigned long long) SHMEM_MAX_BLOCKS << PAGE_CACHE_SHIFT) return ERR_PTR(-EINVAL); ! if (!vm_enough_memory((size) >> PAGE_CACHE_SHIFT)) return ERR_PTR(-ENOMEM); --- 1509,1516 ---- int vm_enough_memory(long pages); ! if (size > SHMEM_MAX_BYTES) return ERR_PTR(-EINVAL); ! if (!vm_enough_memory(VM_ACCT(size))) return ERR_PTR(-ENOMEM); *************** *** 1488,1498 **** d_instantiate(dentry, inode); ! dentry->d_inode->i_size = size; ! shmem_truncate(inode); file->f_vfsmnt = mntget(shm_mnt); file->f_dentry = dentry; file->f_op = &shmem_file_operations; file->f_mode = FMODE_WRITE | FMODE_READ; - inode->i_nlink = 0; /* It is unlinked */ return(file); --- 1534,1543 ---- d_instantiate(dentry, inode); ! inode->i_size = size; ! inode->i_nlink = 0; /* It is unlinked */ file->f_vfsmnt = mntget(shm_mnt); file->f_dentry = dentry; file->f_op = &shmem_file_operations; file->f_mode = FMODE_WRITE | FMODE_READ; return(file); *************** *** 1503,1506 **** --- 1548,1552 ---- return ERR_PTR(error); } + /* * shmem_zero_setup - setup a shared anonymous mapping Index: swap_state.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swap_state.c,v retrieving revision 1.42 retrieving revision 1.43 diff -C2 -r1.42 -r1.43 *** swap_state.c 6 Dec 2002 19:29:21 -0000 1.42 --- swap_state.c 19 May 2003 01:38:49 -0000 1.43 *************** *** 127,131 **** BUG(); ! block_flushpage(page, 0); entry.val = page->index; --- 127,132 ---- BUG(); ! 
if (unlikely(!block_flushpage(page, 0))) ! BUG(); /* an anonymous page cannot have page->buffers set */ entry.val = page->index; Index: swapfile.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swapfile.c,v retrieving revision 1.38 retrieving revision 1.39 diff -C2 -r1.38 -r1.39 *** swapfile.c 6 Dec 2002 19:29:21 -0000 1.38 --- swapfile.c 19 May 2003 01:38:49 -0000 1.39 *************** *** 15,19 **** #include <linux/pagemap.h> #include <linux/shm.h> - #include <linux/compiler.h> #include <linux/comp_cache.h> --- 15,18 ---- *************** *** 944,956 **** * Note shmem_unuse already deleted its from swap cache. */ ! swcount = swap_map_count(*swap_map); ! if ((swcount > 0) != PageSwapCache(page)) ! BUG(); ! if ((swcount > 1) && PageDirty(page)) { rw_swap_page(WRITE, page); lock_page(page); } ! if (PageCompressed(page)) ! decompress_swap_cache_page(page); if (PageSwapCache(page)) delete_from_swap_cache(page); --- 943,952 ---- * Note shmem_unuse already deleted its from swap cache. */ ! if ((swap_map_count(*swap_map) > 1) && PageDirty(page) && PageSwapCache(page)) { rw_swap_page(WRITE, page); lock_page(page); } ! if (PageCompressed(page)) ! decompress_swap_cache_page(page); if (PageSwapCache(page)) delete_from_swap_cache(page); Index: vmscan.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/vmscan.c,v retrieving revision 1.44 retrieving revision 1.45 diff -C2 -r1.44 -r1.45 *** vmscan.c 22 Nov 2002 16:01:36 -0000 1.44 --- vmscan.c 19 May 2003 01:38:50 -0000 1.45 *************** *** 2,5 **** --- 2,8 ---- * linux/mm/vmscan.c * + * The pageout daemon, decides which pages to evict (swap out) and + * does the actual work of freeing them. + * * Copyright (C) 1991, 1992, 1993, 1994 Linus Torvalds * *************** *** 21,25 **** #include <linux/highmem.h> #include <linux/file.h> - #include <linux/compiler.h> #include <linux/comp_cache.h> --- 24,27 ---- *************** *** 60,64 **** /* Don't bother replenishing zones not under pressure.. */ ! if (!memclass(page->zone, classzone)) return 0; --- 62,66 ---- /* Don't bother replenishing zones not under pressure.. */ ! if (!memclass(page_zone(page), classzone)) return 0; *************** *** 241,246 **** end = vma->vm_end; ! if (address >= end) ! BUG(); do { count = swap_out_pgd(mm, vma, pgdir, address, end, count, classzone); --- 243,247 ---- end = vma->vm_end; ! BUG_ON(address >= end); do { count = swap_out_pgd(mm, vma, pgdir, address, end, count, classzone); *************** *** 361,368 **** page = list_entry(entry, struct page, lru); ! if (unlikely(!PageLRU(page))) ! BUG(); ! if (unlikely(PageActive(page))) ! BUG(); list_del(entry); --- 362,367 ---- page = list_entry(entry, struct page, lru); ! BUG_ON(!PageLRU(page)); ! BUG_ON(PageActive(page)); list_del(entry); *************** *** 376,380 **** continue; ! if (!memclass(page->zone, classzone)) continue; --- 375,379 ---- continue; ! if (!memclass(page_zone(page), classzone)) continue; *************** *** 643,647 **** } ! int try_to_free_pages(zone_t *classzone, unsigned int gfp_mask, unsigned int order) { int priority = DEF_PRIORITY; --- 642,646 ---- } ! 
int try_to_free_pages_zone(zone_t *classzone, unsigned int gfp_mask) { int priority = DEF_PRIORITY; *************** *** 663,666 **** --- 662,684 ---- } + int try_to_free_pages(unsigned int gfp_mask) + { + pg_data_t *pgdat; + zonelist_t *zonelist; + unsigned long pf_free_pages; + int error = 0; + + pf_free_pages = current->flags & PF_FREE_PAGES; + current->flags &= ~PF_FREE_PAGES; + + for_each_pgdat(pgdat) { + zonelist = pgdat->node_zonelists + (gfp_mask & GFP_ZONEMASK); + error |= try_to_free_pages_zone(zonelist->zones[0], gfp_mask); + } + + current->flags |= pf_free_pages; + return error; + } + DECLARE_WAIT_QUEUE_HEAD(kswapd_wait); *************** *** 689,693 **** if (!zone->need_balance) continue; ! if (!try_to_free_pages(zone, GFP_KSWAPD, 0)) { zone->need_balance = 0; __set_current_state(TASK_INTERRUPTIBLE); --- 707,711 ---- if (!zone->need_balance) continue; ! if (!try_to_free_pages_zone(zone, GFP_KSWAPD)) { zone->need_balance = 0; __set_current_state(TASK_INTERRUPTIBLE); *************** *** 711,718 **** do { need_more_balance = 0; ! pgdat = pgdat_list; ! do need_more_balance |= kswapd_balance_pgdat(pgdat); - while ((pgdat = pgdat->node_next)); } while (need_more_balance); } --- 729,735 ---- do { need_more_balance = 0; ! ! for_each_pgdat(pgdat) need_more_balance |= kswapd_balance_pgdat(pgdat); } while (need_more_balance); } *************** *** 737,746 **** pg_data_t * pgdat; ! pgdat = pgdat_list; ! do { ! if (kswapd_can_sleep_pgdat(pgdat)) ! continue; ! return 0; ! } while ((pgdat = pgdat->node_next)); return 1; --- 754,761 ---- pg_data_t * pgdat; ! for_each_pgdat(pgdat) { ! if (!kswapd_can_sleep_pgdat(pgdat)) ! return 0; ! } return 1; |
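The vmscan.c hunks above replace open-coded walks of pgdat_list with the for_each_pgdat() iterator picked up from the 2.4.20 base. A minimal sketch of the pattern, assuming the 2.4-era definition of the macro (only pgdat_list and node_next are confirmed by the diff):

/* assumed 2.4-era definition: walk the node list until NULL */
#define for_each_pgdat(pgdat) \
	for (pgdat = pgdat_list; pgdat; pgdat = pgdat->node_next)

/* old form: only correct when pgdat_list is known to be non-NULL */
pgdat = pgdat_list;
do {
	need_more_balance |= kswapd_balance_pgdat(pgdat);
} while ((pgdat = pgdat->node_next));

/* new form: the same walk, and it also tolerates an empty list */
for_each_pgdat(pgdat)
	need_more_balance |= kswapd_balance_pgdat(pgdat);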
From: Rodrigo S. de C. <rc...@us...> - 2003-05-19 01:38:55
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory sc8-pr-cvs1:/tmp/cvs-serv25395/mm/comp_cache Modified Files: swapin.c vswap.c Log Message: o Port code to 2.4.20 Bug fix (?) o Changed checks in vswap.c to avoid oopses; it will BUG() instead. Some of the checks were done after the value had been accessed. Note o Virtual swap addresses are temporarily disabled, due to debugging sessions related to the use of swap files instead of swap partitions. Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.56 retrieving revision 1.57 diff -C2 -r1.56 -r1.57 *** swapin.c 6 Dec 2002 19:29:23 -0000 1.56 --- swapin.c 19 May 2003 01:38:50 -0000 1.57 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-12-06 17:15:44 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2003-05-13 18:58:50 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 157,161 **** /* -- version alair1 -- */ ! /* compact_comp_cache(); */ } else { --- 157,161 ---- /* -- version alair1 -- */ ! compact_comp_cache(); } else { Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.47 retrieving revision 1.48 diff -C2 -r1.47 -r1.48 *** vswap.c 6 Dec 2002 19:29:23 -0000 1.47 --- vswap.c 19 May 2003 01:38:51 -0000 1.48 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-12-03 14:27:24 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2003-05-18 16:09:56 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 172,176 **** int ret = 1; ! spin_lock(&virtual_swap_list); if (comp_cache_available_vswap()) --- 172,179 ---- int ret = 1; ! /* TESTE */ ! return 0; ! ! spin_lock(&virtual_swap_list); if (comp_cache_available_vswap()) *************** *** 207,210 **** --- 210,215 ---- entry.val = 0; + /* TESTE */ + return entry; spin_lock(&virtual_swap_list); *************** *** 300,304 **** vswap = vswap_address[offset]; - fragment = vswap->fragment; if (!vswap) --- 305,308 ---- *************** *** 308,311 **** --- 312,316 ---- BUG(); + fragment = vswap->fragment; swap_count = vswap->swap_count; if (--swap_count) { *************** *** 360,366 **** int ret; fragment = vswap_address[offset]->fragment; ! ret = __virtual_swap_free(offset); ! if (ret) goto out_unlock; --- 365,375 ---- int ret; + if (offset >= vswap_current_num_entries) + BUG(); + if (!vswap_address[offset]) + BUG(); fragment = vswap_address[offset]->fragment; ! ! ret = __virtual_swap_free(offset); if (ret) goto out_unlock; |
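The vswap.c hunks above are mostly about ordering: the old code read vswap_address[offset]->fragment before validating the entry, so a bad offset or a NULL entry would oops at the dereference rather than hit a check. A condensed before/after sketch of the reordering (all names as in the patch):

/* before: the value is accessed before it is checked */
vswap = vswap_address[offset];
fragment = vswap->fragment;	/* oopses here if vswap is NULL */
if (!vswap)
	BUG();

/* after: validate first, then access the fields */
if (offset >= vswap_current_num_entries)
	BUG();
vswap = vswap_address[offset];
if (!vswap)
	BUG();
fragment = vswap->fragment;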
From: Rodrigo S. de C. <rc...@us...> - 2002-12-06 22:50:35
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory sc8-pr-cvs1:/tmp/cvs-serv8451/mm/comp_cache Modified Files: adaptivity.c free.c main.c proc.c swapin.c swapout.c vswap.c Log Message: Some races still to be fixed, but we have fixed a bunch of them in this set of changes, including one that would corrupt FSs when used with the preempt patch. Bug fixes o Fixed a bug that might compress a page a second time if it was swapped in while being written out using swap buffers. In this case, a new swap cache page could be compressed and we are not sure the fragment being written out had actually been freed. The fix is to make the swap buffer take a reference on this swap cache page, releasing it when the swap buffer is freed. o Fixed a bug that could submit a read to the disk while the same block is being written by a swap buffer. When writing out the swap buffer, we take a reference on the fragment to prevent it from being released, even if it is swapped in in the meanwhile. o Removed an extra spin_lock()/spin_unlock() on comp_cache_lock in grow_comp_cache(). o Fixed a race in compact_comp_cache() that we were triggering, which would corrupt the fs or return wrong process data, likely to segfault. It usually happened with the preempt patch. When a fragment is relocated to another comp page, the process could be preempted after the fragment is removed from the previous comp page, but before it is added to the next comp page. If this happens, a read operation is submitted to the disk, likely to read bogus data or, if vswap is used, to reach a kernel BUG. To solve this, we add the new fragment to the hash table before the old one is removed. So, if the process is preempted before removing the old fragment, we still have a fragment with its data. This fragment is locked until it gets to a sane state, but it surely avoids a read operation being done. We think it is SMP-safe too, since if a reference to the old fragment is taken after the new fragment is added to the hash table, the old fragment is not freed and we remove the new fragment from the hash table. If the new fragment is referenced, the behaviour is the same as when the process is preempted. o Added spin_lock()/spin_unlock() to the clean page adaptability code to provide concurrency control. o Fixed a bug that would allow setting more than 50% of the memory size as the maximum size of the compressed cache. For example, booting with "mem=16M compsize=12M" would work. Simple fix. o Fixed a bug that would duplicate a real swap entry (for compressed swap) even if the swap entry failed to duplicate. o Although unlikely, nothing prevents a swap entry from being freed while being written out by a swap buffer. Now, besides the reference on the fragment, we hold a reference on the swap entry when writing out a page. Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.44 retrieving revision 1.45 diff -C2 -r1.44 -r1.45 *** adaptivity.c 29 Nov 2002 21:23:03 -0000 1.44 --- adaptivity.c 6 Dec 2002 19:29:21 -0000 1.45 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-11-29 12:05:01 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * !
* Time-stamp: <2002-12-06 09:58:05 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 587,592 **** int retval = 0; - spin_lock(&comp_cache_lock); - page = alloc_pages(GFP_ATOMIC, COMP_PAGE_ORDER); --- 587,590 ---- *************** *** 594,603 **** if (!page) { failed_comp_page_allocs++; ! goto out_unlock; } if (!init_comp_page(&comp_page, page)) { __free_pages(page, COMP_PAGE_ORDER); ! goto out_unlock; } --- 592,601 ---- if (!page) { failed_comp_page_allocs++; ! goto out; } if (!init_comp_page(&comp_page, page)) { __free_pages(page, COMP_PAGE_ORDER); ! goto out; } *************** *** 614,619 **** grow_fragment_hash_table(); grow_vswap(); ! out_unlock: ! spin_unlock(&comp_cache_lock); return retval; } --- 612,616 ---- grow_fragment_hash_table(); grow_vswap(); ! out: return retval; } *************** *** 624,627 **** --- 621,626 ---- * not yet reached the maximum size, we try to grow compressed cache * by one new entry. + * + * caller must hold comp_cache_lock */ int *************** *** 675,678 **** --- 674,678 ---- while (1) { fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); + /* what about count == 2 && swp_buffer != null? */ if (fragment_count(fragment) != 1) { fail = 1; *************** *** 722,727 **** --- 722,734 ---- new_fragment->flags = fragment->flags; new_fragment->comp_page = new_comp_page; + + /* Setting the fragment count to the count of the old + * fragment, we make sure that no reference will be lost. In + * particular, the swap buffer one. */ set_fragment_count(new_fragment, fragment_count(fragment)); + /* If we have a swap buffer, we just set the swap buffer to + * this fragment (the reference will be automatically set + * above). As simple as that. */ if ((new_fragment->swp_buffer = fragment->swp_buffer)) new_fragment->swp_buffer->fragment = new_fragment; *************** *** 731,736 **** --- 738,746 ---- previous_comp_page = comp_page; + add_fragment_to_hash_table(new_fragment); + UnlockPage(comp_page->page); if (!drop_fragment(fragment)) { + remove_fragment_from_hash_table(new_fragment); if (fragment->swp_buffer) fragment->swp_buffer->fragment = fragment; *************** *** 748,752 **** add_to_comp_page_list(new_comp_page, new_fragment); add_fragment_vswap(new_fragment); - add_fragment_to_hash_table(new_fragment); if (CompFragmentActive(new_fragment)) --- 758,761 ---- *************** *** 812,815 **** --- 821,826 ---- struct clean_page_data * clpage; + spin_lock(&comp_cache_lock); + clpage = clean_page_hash[clean_page_hashfn(page->mapping, page->index)]; *************** *** 820,824 **** inside: if (!clpage) ! return; if (clpage->mapping != page->mapping) continue; --- 831,835 ---- inside: if (!clpage) ! goto out_release; if (clpage->mapping != page->mapping) continue; *************** *** 837,840 **** --- 848,854 ---- nr_clean_page_hits = 0; } + + out_release: + spin_unlock(&comp_cache_lock); } *************** *** 845,853 **** unsigned long hash_index; /* allocate a new structure */ clpage = ((struct clean_page_data *) kmem_cache_alloc(clean_page_cachep, SLAB_ATOMIC)); if (unlikely(!clpage)) ! return; clpage->mapping = page->mapping; --- 859,869 ---- unsigned long hash_index; + spin_lock(&comp_cache_lock); + /* allocate a new structure */ clpage = ((struct clean_page_data *) kmem_cache_alloc(clean_page_cachep, SLAB_ATOMIC)); if (unlikely(!clpage)) ! 
goto out_release; clpage->mapping = page->mapping; *************** *** 901,904 **** --- 917,922 ---- if (num_clean_fragments * 10 > num_fragments * 3) compact_comp_cache(); + out_release: + spin_unlock(&comp_cache_lock); } #endif Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.48 retrieving revision 1.49 diff -C2 -r1.48 -r1.49 *** free.c 22 Nov 2002 16:01:37 -0000 1.48 --- free.c 6 Dec 2002 19:29:22 -0000 1.49 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-10-25 11:26:26 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-12-05 19:38:23 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 62,72 **** if (!fragment->mapping) BUG(); - - /* fragments that have already been submitted to IO have a - * non-null swp_buffer. Let's warn the swap buffer that this - * page has been already removed by setting its fragment field - * to NULL. */ - if (fragment->swp_buffer) - fragment->swp_buffer->fragment = NULL; /* compressed fragments of swap cache are accounted in --- 62,65 ---- Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.67 retrieving revision 1.68 diff -C2 -r1.67 -r1.68 *** main.c 26 Nov 2002 21:42:32 -0000 1.67 --- main.c 6 Dec 2002 19:29:22 -0000 1.68 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-11-26 19:32:57 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-12-05 11:35:39 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 206,209 **** --- 206,210 ---- LIST_HEAD(inactive_lru_queue); + /* caller must hold comp_cache_lock spinlock */ inline int init_comp_page(struct comp_cache_page ** comp_page,struct page * page) { *************** *** 234,238 **** min_num_comp_pages = page_to_comp_page(48); ! if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > num_physpages * 0.5) max_num_comp_pages = page_to_comp_page((unsigned long) (num_physpages * 0.5)); --- 235,239 ---- min_num_comp_pages = page_to_comp_page(48); ! if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > page_to_comp_page(num_physpages) * 0.5) max_num_comp_pages = page_to_comp_page((unsigned long) (num_physpages * 0.5)); *************** *** 243,247 **** max_used_num_comp_pages = min_num_comp_pages = num_comp_pages = page_to_comp_page(48); ! if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > num_physpages * 0.5) max_num_comp_pages = page_to_comp_page((unsigned long) (num_physpages * 0.5)); --- 244,248 ---- max_used_num_comp_pages = min_num_comp_pages = num_comp_pages = page_to_comp_page(48); ! 
if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > page_to_comp_page(num_physpages) * 0.5) max_num_comp_pages = page_to_comp_page((unsigned long) (num_physpages * 0.5)); Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.29 retrieving revision 1.30 diff -C2 -r1.29 -r1.30 *** proc.c 22 Nov 2002 16:01:41 -0000 1.29 --- proc.c 6 Dec 2002 19:29:22 -0000 1.30 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-10-21 16:26:52 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-12-01 17:35:44 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 217,220 **** --- 217,225 ---- fragment_index = 0; + if (!counter) + BUG(); + if (metadata_offset > COMP_PAGE_SIZE) + BUG(); + while (counter-- && fragment_index != page->index) { fragment_index = *((unsigned long *) (page_address(page) + metadata_offset + 4)); Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.55 retrieving revision 1.56 diff -C2 -r1.55 -r1.56 *** swapin.c 22 Nov 2002 16:01:42 -0000 1.55 --- swapin.c 6 Dec 2002 19:29:23 -0000 1.56 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-11-21 15:23:30 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-12-06 17:15:44 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 156,161 **** last_accessed = ACTIVE_FRAGMENT; ! /* Ver alair1 */ ! /* compact_comp_cache(); */ } else { --- 156,161 ---- last_accessed = ACTIVE_FRAGMENT; ! /* -- version alair1 -- */ ! /* compact_comp_cache(); */ } else { *************** *** 181,187 **** decompress_fragment_to_page(fragment, page); - comp_cache_update_read_stats(fragment); spin_lock(&comp_cache_lock); if (CompFragmentTestandClearDirty(fragment)) { --- 181,187 ---- decompress_fragment_to_page(fragment, page); spin_lock(&comp_cache_lock); + comp_cache_update_read_stats(fragment); if (CompFragmentTestandClearDirty(fragment)) { *************** *** 189,192 **** --- 189,203 ---- __set_page_dirty(page); } + + /* Swap buffer must know if this fragment was reclaimed. In + * this case, we get a reference on this page for the swap + * buffer, since we want to make sure this page will not get + * compressed while the I/O operation isn't finished. This + * reference will be released when the swap buffer is + * freed. */ + if (fragment->swp_buffer) { + fragment->swp_buffer->swap_cache_page = page; + page_cache_get(page); + } UnlockPage(fragment->comp_page->page); Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.75 retrieving revision 1.76 diff -C2 -r1.75 -r1.76 *** swapout.c 29 Nov 2002 21:23:03 -0000 1.75 --- swapout.c 6 Dec 2002 19:29:23 -0000 1.76 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-11-29 18:09:53 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! 
* Time-stamp: <2002-12-05 17:11:02 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 26,29 **** --- 26,31 ---- unsigned long index; } grouped_fragments[255]; + + static spinlock_t comp_swap_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; #endif *************** *** 36,42 **** { struct list_head * swp_buffer_lh; ! struct page * buffer_page; struct swp_buffer * swp_buffer; struct comp_cache_fragment * fragment; int wait, maxscan; --- 38,45 ---- { struct list_head * swp_buffer_lh; ! struct page * buffer_page, * swap_cache_page; struct swp_buffer * swp_buffer; struct comp_cache_fragment * fragment; + swp_entry_t entry; int wait, maxscan; *************** *** 72,75 **** --- 75,79 ---- fragment = swp_buffer->fragment; + swap_cache_page = swp_buffer->swap_cache_page; /* A swap buffer page that has been set to dirty means *************** *** 78,90 **** if (PageDirty(buffer_page)) { spin_lock(&comp_cache_lock); ! if (fragment) { ! fragment->swp_buffer = NULL; ! spin_lock(&pagecache_lock); ! list_del(&fragment->mapping_list); ! list_add(&fragment->mapping_list, &fragment->mapping->dirty_comp_pages); ! spin_unlock(&pagecache_lock); ! ! CompFragmentSetDirty(fragment); ! } ClearPageDirty(buffer_page); spin_unlock(&comp_cache_lock); --- 82,100 ---- if (PageDirty(buffer_page)) { spin_lock(&comp_cache_lock); ! ! fragment->swp_buffer = NULL; ! put_fragment(fragment); ! ! spin_lock(&pagemap_lru_lock); ! if (swap_cache_page) ! page_cache_release(swap_cache_page); ! spin_unlock(&pagemap_lru_lock); ! ! spin_lock(&pagecache_lock); ! list_del(&fragment->mapping_list); ! list_add(&fragment->mapping_list, &fragment->mapping->dirty_comp_pages); ! spin_unlock(&pagecache_lock); ! ! CompFragmentSetDirty(fragment); ClearPageDirty(buffer_page); spin_unlock(&comp_cache_lock); *************** *** 96,103 **** * (if still needed). */ spin_lock(&comp_cache_lock); ! if (fragment) { ! fragment->swp_buffer = NULL; ! drop_fragment(fragment); ! } spin_unlock(&comp_cache_lock); add_to_free: --- 106,124 ---- * (if still needed). */ spin_lock(&comp_cache_lock); ! ! /* now that the data is actually back stored, release ! * references on the fragment... */ ! fragment->swp_buffer = NULL; ! put_fragment(fragment); ! drop_fragment(fragment); ! spin_lock(&pagemap_lru_lock); ! if (swap_cache_page) ! page_cache_release(swap_cache_page); ! spin_unlock(&pagemap_lru_lock); ! ! /* and on the swap entry. */ ! entry.val = buffer_page->index; ! swap_free(entry); ! spin_unlock(&comp_cache_lock); add_to_free: *************** *** 125,129 **** } ! /** * find_free_swp_buffer - gets a swap buffer page --- 146,162 ---- } ! /** ! * sync_all_swp_buffers - syncs all pending swap buffers. This is done ! * in order to release references on swap entries for swapoff ! * operation. It is the first implementation and I know it can be ! * smarter.* ! */ ! void ! sync_all_swp_buffers() ! { ! while (!list_empty(&swp_used_buffer_head)) ! refill_swp_buffer(GFP_KERNEL, 1); ! } ! /** * find_free_swp_buffer - gets a swap buffer page *************** *** 172,175 **** --- 205,210 ---- spin_lock(&comp_cache_lock); swp_buffer->fragment = fragment; + swp_buffer->swap_cache_page = NULL; + fragment->swp_buffer = swp_buffer; *************** *** 201,204 **** --- 236,241 ---- unsigned short counter, next_offset, metadata_size; + spin_lock(&comp_swap_lock); + entry.val = fragment->index; real_entry = get_real_swap_page(entry); *************** *** 229,233 **** set_swap_compressed(entry, 0); decompress_fragment_to_page(fragment, page); ! 
return; } --- 266,270 ---- set_swap_compressed(entry, 0); decompress_fragment_to_page(fragment, page); ! goto out_release; } *************** *** 288,291 **** --- 325,331 ---- next_offset += 4; } + + out_release: + spin_unlock(&comp_swap_lock); } #else *************** *** 420,434 **** remove_fragment_from_lru_queue(fragment); ! /* avoid to free this entry if we sleep below */ if (swap_cache_page && !swap_duplicate(entry)) ! BUG(); - get_fragment(fragment); spin_unlock(&comp_cache_lock); swp_buffer = prepare_swp_buffer(fragment, gfp_mask); ! if (!swp_buffer) ! goto out; spin_lock(&pagecache_lock); list_del(&fragment->mapping_list); --- 460,506 ---- remove_fragment_from_lru_queue(fragment); ! get_fragment(fragment); ! ! /* we need a reference on the swap counter to perform ! * the I/O (in order to avoid freeing this swap ! * entry). If we can't duplicate the swap entry, the ! * entry has already been freed and this fragment will ! * be probable freed as soon as we release our ! * reference on it */ if (swap_cache_page && !swap_duplicate(entry)) ! goto add_back; spin_unlock(&comp_cache_lock); + /* so far, we have: + * + * (a) a reference on the fragment, so it won't be + * freed until the end of the I/O. This reference + * makes sure that any access to this page will read + * sane data from the fragment. Without it, the system + * could free the fragment in the meanwhile and submit + * a concurrent read operation, returning bogus + * data. This fragment can be freed if reclaimed by + * the system. + * + * (b) and, if swap cache page, a reference on the + * swap entry, so it won't be freed until the end of + * I/O too. This reference is necessary since we want + * to keep the fragment alive in a save way until we + * finish our writeout. If the swap entry is freed, we + * are not safe any longer keeping a fragment set to + * this entry. + * + * thus we can go on and prepare swap buffers. + */ swp_buffer = prepare_swp_buffer(fragment, gfp_mask); ! if (!swp_buffer) { ! if (swap_cache_page) ! swap_free(entry); ! spin_lock(&comp_cache_lock); ! goto add_back; ! } + spin_lock(&comp_cache_lock); spin_lock(&pagecache_lock); list_del(&fragment->mapping_list); *************** *** 439,465 **** num_clean_fragments++; ! writepage = fragment->mapping->a_ops->writepage; ! if (!writepage) BUG(); writepage(swp_buffer->page); nrpages--; - out: - if (swap_cache_page) - swap_free(entry); spin_lock(&comp_cache_lock); - if (!swp_buffer) { - if (likely(list == &inactive_lru_queue)) - add_fragment_to_inactive_lru_queue(fragment); - else - add_fragment_to_active_lru_queue(fragment); - put_fragment(fragment); - goto try_again; - } - - put_fragment(fragment); - if (!nrpages) break; --- 511,525 ---- num_clean_fragments++; ! writepage = fragment->mapping->a_ops->writepage; ! spin_unlock(&comp_cache_lock); ! 
if (!writepage) BUG(); writepage(swp_buffer->page); + nrpages--; spin_lock(&comp_cache_lock); if (!nrpages) break; *************** *** 472,475 **** --- 532,544 ---- spin_lock(&comp_cache_lock); } + continue; + + add_back: + if (likely(list == &inactive_lru_queue)) + add_fragment_to_inactive_lru_queue(fragment); + else + add_fragment_to_active_lru_queue(fragment); + put_fragment(fragment); + goto try_again; } Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.46 retrieving revision 1.47 diff -C2 -r1.46 -r1.47 *** vswap.c 29 Nov 2002 21:23:03 -0000 1.46 --- vswap.c 6 Dec 2002 19:29:23 -0000 1.47 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-11-29 12:05:38 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-12-03 14:27:24 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 171,175 **** comp_cache_available_space(void) { int ret = 1; ! spin_lock(&virtual_swap_list); --- 171,175 ---- comp_cache_available_space(void) { int ret = 1; ! spin_lock(&virtual_swap_list); |
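The relocation race described in the log message comes down to the order of two hash-table operations. A condensed view of the compact_comp_cache() path after this change, taken from the adaptivity.c hunks above and heavily trimmed, with editorial comments:

/* inherit the old count so no reference, e.g. the swap buffer's, is lost */
set_fragment_count(new_fragment, fragment_count(fragment));
if ((new_fragment->swp_buffer = fragment->swp_buffer))
	new_fragment->swp_buffer->fragment = new_fragment;

/* publish the copy BEFORE dropping the original: a process preempted
 * here still finds a fragment with valid data in the hash table */
add_fragment_to_hash_table(new_fragment);

if (!drop_fragment(fragment)) {
	/* the old fragment gained a reference in the meanwhile:
	 * back out the copy and keep using the original */
	remove_fragment_from_hash_table(new_fragment);
	if (fragment->swp_buffer)
		fragment->swp_buffer->fragment = fragment;
}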
From: Rodrigo S. de C. <rc...@us...> - 2002-12-06 22:50:35
|
Update of /cvsroot/linuxcompressed/linux/mm In directory sc8-pr-cvs1:/tmp/cvs-serv8451/mm Modified Files: swap_state.c swapfile.c Log Message: Some races still to be fixed, but we have fixed a bunch of them in this set of changes, including one that would corrupt FSs when used with the preempt patch. Bug fixes o Fixed a bug that might compress a page a second time if it was swapped in while being written out using swap buffers. In this case, a new swap cache page could be compressed and we are not sure the fragment being written out had actually been freed. The fix is to make the swap buffer take a reference on this swap cache page, releasing it when the swap buffer is freed. o Fixed a bug that could submit a read to the disk while the same block is being written by a swap buffer. When writing out the swap buffer, we take a reference on the fragment to prevent it from being released, even if it is swapped in in the meanwhile. o Removed an extra spin_lock()/spin_unlock() on comp_cache_lock in grow_comp_cache(). o Fixed a race in compact_comp_cache() that we were triggering, which would corrupt the fs or return wrong process data, likely to segfault. It usually happened with the preempt patch. When a fragment is relocated to another comp page, the process could be preempted after the fragment is removed from the previous comp page, but before it is added to the next comp page. If this happens, a read operation is submitted to the disk, likely to read bogus data or, if vswap is used, to reach a kernel BUG. To solve this, we add the new fragment to the hash table before the old one is removed. So, if the process is preempted before removing the old fragment, we still have a fragment with its data. This fragment is locked until it gets to a sane state, but it surely avoids a read operation being done. We think it is SMP-safe too, since if a reference to the old fragment is taken after the new fragment is added to the hash table, the old fragment is not freed and we remove the new fragment from the hash table. If the new fragment is referenced, the behaviour is the same as when the process is preempted. o Added spin_lock()/spin_unlock() to the clean page adaptability code to provide concurrency control. o Fixed a bug that would allow setting more than 50% of the memory size as the maximum size of the compressed cache. For example, booting with "mem=16M compsize=12M" would work. Simple fix. o Fixed a bug that would duplicate a real swap entry (for compressed swap) even if the swap entry failed to duplicate. o Although unlikely, nothing prevents a swap entry from being freed while being written out by a swap buffer. Now, besides the reference on the fragment, we hold a reference on the swap entry when writing out a page.
Index: swap_state.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swap_state.c,v retrieving revision 1.41 retrieving revision 1.42 diff -C2 -r1.41 -r1.42 *** swap_state.c 29 Nov 2002 21:23:02 -0000 1.41 --- swap_state.c 6 Dec 2002 19:29:21 -0000 1.42 *************** *** 215,218 **** --- 215,219 ---- } + /* racy - have to fix */ if (readahead) { found_page = find_get_page(&swapper_space, entry.val); *************** *** 234,237 **** --- 235,239 ---- err = add_to_swap_cache(new_page, entry); if (!err) { + /* racy - have to fix */ if (!readahead) { if (!read_comp_cache(&swapper_space, entry.val, new_page)) { Index: swapfile.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swapfile.c,v retrieving revision 1.37 retrieving revision 1.38 diff -C2 -r1.37 -r1.38 *** swapfile.c 29 Nov 2002 21:23:02 -0000 1.37 --- swapfile.c 6 Dec 2002 19:29:21 -0000 1.38 *************** *** 810,813 **** --- 810,815 ---- atomic_inc(&init_mm.mm_users); + sync_all_swp_buffers(); + /* * Keep on scanning until all entries have gone. Usually, *************** *** 1516,1527 **** result = 1; } - } #ifdef CONFIG_COMP_SWAP ! if (p->real_swap[offset]) { ! swp_entry_t real_entry; ! real_entry.val = p->real_swap[offset]; ! real_swap_duplicate(real_entry, 1); } - #endif swap_device_unlock(p); out: --- 1518,1529 ---- result = 1; } #ifdef CONFIG_COMP_SWAP ! if (p->real_swap[offset]) { ! swp_entry_t real_entry; ! real_entry.val = p->real_swap[offset]; ! real_swap_duplicate(real_entry, 1); ! } ! #endif } swap_device_unlock(p); out: |
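The one-line swapfile.c change hides the key invariant: by the time try_to_unuse() starts scanning, no swap buffer may still hold a reference on a swap entry. A condensed view of the two cooperating sides (the swapoff hunk above plus the writeout hunk from the swapout.c diff in the previous message; comments are editorial):

/* swapoff side: flush pending swap buffers first, releasing the
 * swap-entry references they held for in-flight writeouts */
atomic_inc(&init_mm.mm_users);
sync_all_swp_buffers();
/* ... then keep scanning until all entries have gone */

/* writeout side: pin both objects for the duration of the I/O */
get_fragment(fragment);			/* the fragment stays alive  */
if (swap_cache_page && !swap_duplicate(entry))
	goto add_back;			/* entry already freed: back off */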
From: Rodrigo S. de C. <rc...@us...> - 2002-12-06 19:29:56
|
Update of /cvsroot/linuxcompressed/linux/include/linux In directory sc8-pr-cvs1:/tmp/cvs-serv8451/include/linux Modified Files: comp_cache.h Log Message: Some races still to be fixed, but we have fixed a bunch of them in this set of changes, including one that would corrupt FSs when used with the preempt patch. Bug fixes o Fixed a bug that might compress a page a second time if it was swapped in while being written out using swap buffers. In this case, a new swap cache page could be compressed and we are not sure the fragment being written out had actually been freed. The fix is to make the swap buffer take a reference on this swap cache page, releasing it when the swap buffer is freed. o Fixed a bug that could submit a read to the disk while the same block is being written by a swap buffer. When writing out the swap buffer, we take a reference on the fragment to prevent it from being released, even if it is swapped in in the meanwhile. o Removed an extra spin_lock()/spin_unlock() on comp_cache_lock in grow_comp_cache(). o Fixed a race in compact_comp_cache() that we were triggering, which would corrupt the fs or return wrong process data, likely to segfault. It usually happened with the preempt patch. When a fragment is relocated to another comp page, the process could be preempted after the fragment is removed from the previous comp page, but before it is added to the next comp page. If this happens, a read operation is submitted to the disk, likely to read bogus data or, if vswap is used, to reach a kernel BUG. To solve this, we add the new fragment to the hash table before the old one is removed. So, if the process is preempted before removing the old fragment, we still have a fragment with its data. This fragment is locked until it gets to a sane state, but it surely avoids a read operation being done. We think it is SMP-safe too, since if a reference to the old fragment is taken after the new fragment is added to the hash table, the old fragment is not freed and we remove the new fragment from the hash table. If the new fragment is referenced, the behaviour is the same as when the process is preempted. o Added spin_lock()/spin_unlock() to the clean page adaptability code to provide concurrency control. o Fixed a bug that would allow setting more than 50% of the memory size as the maximum size of the compressed cache. For example, booting with "mem=16M compsize=12M" would work. Simple fix. o Fixed a bug that would duplicate a real swap entry (for compressed swap) even if the swap entry failed to duplicate. o Although unlikely, nothing prevents a swap entry from being freed while being written out by a swap buffer. Now, besides the reference on the fragment, we hold a reference on the swap entry when writing out a page. Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.104 retrieving revision 1.105 diff -C2 -r1.104 -r1.105 *** comp_cache.h 26 Nov 2002 21:42:32 -0000 1.104 --- comp_cache.h 6 Dec 2002 19:29:21 -0000 1.105 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-11-26 19:35:01 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * !
* Time-stamp: <2002-12-05 10:11:02 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 175,178 **** --- 175,185 ---- int writeout_fragments(unsigned int, int, int); + #ifdef CONFIG_COMP_CACHE + void sync_all_swp_buffers(void); + #else + static inline void sync_all_swp_buffers() { }; + #endif + + /* -- Fragment Flags */ *************** *** 232,235 **** --- 239,243 ---- struct page * page; /* page for IO */ struct comp_cache_fragment * fragment; /* pointer to the fragment we are doing IO */ + struct page * swap_cache_page; }; |
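With the new field, the buffer descriptor ties together the three objects that must survive the I/O. Presumably this is the struct swp_buffer that swapout.c manipulates (the struct name is not visible in the hunk); a sketch with the lifetime of each field noted in editorial comments, the fields themselves coming from the diff above:

struct swp_buffer {
	struct page *page;		/* page used for the I/O itself */
	struct comp_cache_fragment *fragment;
					/* fragment being written out,
					 * pinned with get_fragment() */
	struct page *swap_cache_page;	/* set on swapin via page_cache_get(),
					 * released when the buffer is freed */
};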
From: Rodrigo S. de C. <rc...@us...> - 2002-11-29 21:23:05
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory sc8-pr-cvs1:/tmp/cvs-serv31487/mm/comp_cache Modified Files: adaptivity.c aux.c swapout.c vswap.c Log Message: Cleanups o New ifdefs to avoid compiling code for clean page adaptability when it is disabled. o Some whitespace changes to make the patch a little shorter. o Removed a variable not used in swapout.c. Bug fixes o Fixed a bug in comp_cache_fix_watermarks() introduced recently. This bug would set the watermark to a huge number (due to negative values), which would end up causing a huge increase in memory pressure. Fortunately, that is only noticeable when the system has a low amount of memory. o The vswap_num_reserved_entries variable was used without being initialized. Depending on the scenario, it could have a bogus value which would screw up compressed cache behaviour. o The total free space hash was sized as a function of free_space_interval: a potential bug if total_free_space_interval is set to a value different from the one in free_space_interval. o If the compressed cache cannot allocate the minimum number of pages in the boot process, the shrinkage code could end up annihilating the compressed cache, because it assumes the compressed cache will always have a number of pages equal to or greater than the minimum size. Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.43 retrieving revision 1.44 diff -C2 -r1.43 -r1.44 *** adaptivity.c 26 Nov 2002 21:52:55 -0000 1.43 --- adaptivity.c 29 Nov 2002 21:23:03 -0000 1.44 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-11-26 19:46:51 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-11-29 12:05:01 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 545,549 **** { /* don't shrink a comp cache that has reached the min size */ ! if (num_comp_pages == min_num_comp_pages) { UnlockPage(comp_page->page); return 0; --- 545,549 ---- { /* don't shrink a comp cache that has reached the min size */ ! if (num_comp_pages <= min_num_comp_pages) { UnlockPage(comp_page->page); return 0; *************** *** 674,678 **** fail = 0; while (1) { ! fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); if (fragment_count(fragment) != 1) { fail = 1; --- 674,678 ---- fail = 0; while (1) { ! fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); if (fragment_count(fragment) != 1) { fail = 1; Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.45 retrieving revision 1.46 diff -C2 -r1.45 -r1.46 *** aux.c 26 Nov 2002 21:42:32 -0000 1.45 --- aux.c 29 Nov 2002 21:23:03 -0000 1.46 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-11-26 19:34:59 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-11-29 09:31:39 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 613,617 **** /* inits comp cache total free space hash table */ total_free_space_interval = 100 * (COMP_PAGE_ORDER + 1); !
total_free_space_hash_size = (int) (COMP_PAGE_SIZE/free_space_interval) + 2; total_free_space_hash = (struct comp_cache_page **) kmalloc(total_free_space_hash_size * sizeof(struct comp_cache_page *), GFP_ATOMIC); --- 613,617 ---- /* inits comp cache total free space hash table */ total_free_space_interval = 100 * (COMP_PAGE_ORDER + 1); ! total_free_space_hash_size = (int) (COMP_PAGE_SIZE/total_free_space_interval) + 2; total_free_space_hash = (struct comp_cache_page **) kmalloc(total_free_space_hash_size * sizeof(struct comp_cache_page *), GFP_ATOMIC); Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.74 retrieving revision 1.75 diff -C2 -r1.74 -r1.75 *** swapout.c 26 Nov 2002 21:42:32 -0000 1.74 --- swapout.c 29 Nov 2002 21:23:03 -0000 1.75 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-11-26 19:33:23 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-11-29 18:09:53 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 499,503 **** struct comp_cache_page * comp_page = NULL, ** hash_table; struct comp_cache_fragment * fragment = NULL; - struct page * new_page; unsigned short aux_comp_size; int maxscan, maxtry; --- 499,502 ---- Index: vswap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/vswap.c,v retrieving revision 1.45 retrieving revision 1.46 diff -C2 -r1.45 -r1.46 *** vswap.c 28 Jul 2002 15:47:04 -0000 1.45 --- vswap.c 29 Nov 2002 21:23:03 -0000 1.46 *************** *** 2,6 **** * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-07-27 11:15:46 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/vswap.c * ! * Time-stamp: <2002-11-29 12:05:38 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 80,83 **** --- 80,84 ---- vswap_last_used = NUM_VSWAP_ENTRIES - 1; vswap_num_used_entries = 0; + vswap_num_reserved_entries = 0; vswap_num_swap_cache = 0; |
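The aux.c fix is easiest to sanity-check with numbers. Sizing the hash from its own interval, and assuming a 4096-byte COMP_PAGE_SIZE (i.e. COMP_PAGE_ORDER == 0, an assumption for the arithmetic only):

total_free_space_interval = 100 * (COMP_PAGE_ORDER + 1);	/* = 100 */
total_free_space_hash_size =
	(int) (COMP_PAGE_SIZE / total_free_space_interval) + 2;	/* = 42 */

/* The old code divided by free_space_interval instead. The two intervals
 * currently happen to be equal, so the bug only bites once one of them
 * is tuned independently of the other. */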
From: Rodrigo S. de C. <rc...@us...> - 2002-11-29 21:23:05
|
Update of /cvsroot/linuxcompressed/linux/mm In directory sc8-pr-cvs1:/tmp/cvs-serv31487/mm Modified Files: filemap.c page_alloc.c swap_state.c swapfile.c Log Message: Cleanups o New ifdefs to avoid compiling code for clean page adaptability when it is disabled. o Some whitespace changes to make the patch a little shorter. o Removed a variable not used in swapout.c. Bug fixes o Fixed a bug in comp_cache_fix_watermarks() introduced recently. This bug would set the watermark to a huge number (due to negative values), which would end up causing a huge increase in memory pressure. Fortunately, that is only noticeable when the system has a low amount of memory. o The vswap_num_reserved_entries variable was used without being initialized. Depending on the scenario, it could have a bogus value which would screw up compressed cache behaviour. o The total free space hash was sized as a function of free_space_interval: a potential bug if total_free_space_interval is set to a value different from the one in free_space_interval. o If the compressed cache cannot allocate the minimum number of pages in the boot process, the shrinkage code could end up annihilating the compressed cache, because it assumes the compressed cache will always have a number of pages equal to or greater than the minimum size. Index: filemap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/filemap.c,v retrieving revision 1.41 retrieving revision 1.42 diff -C2 -r1.41 -r1.42 *** filemap.c 22 Nov 2002 16:01:34 -0000 1.41 --- filemap.c 29 Nov 2002 21:23:02 -0000 1.42 *************** *** 777,783 **** --- 777,785 ---- } } + #ifndef CONFIG_COMP_DIS_CLEAN if (clean_page_compress_lock) hit_clean_page(page); #endif + #endif error = mapping->a_ops->readpage(file, page); page_cache_release(page); *************** *** 1551,1555 **** readpage: ! #ifdef CONFIG_COMP_PAGE_CACHE if (clean_page_compress_lock) hit_clean_page(page); --- 1553,1557 ---- readpage: ! #if defined(CONFIG_COMP_PAGE_CACHE) && !defined(CONFIG_COMP_DIS_CLEAN) if (clean_page_compress_lock) hit_clean_page(page); *************** *** 2109,2113 **** } ! #ifdef CONFIG_COMP_PAGE_CACHE if (clean_page_compress_lock) hit_clean_page(page); --- 2111,2115 ---- } ! #if defined(CONFIG_COMP_PAGE_CACHE) && !defined(CONFIG_COMP_DIS_CLEAN) if (clean_page_compress_lock) hit_clean_page(page); *************** *** 2140,2144 **** } ClearPageError(page); ! #ifdef CONFIG_COMP_PAGE_CACHE if (clean_page_compress_lock) hit_clean_page(page); --- 2142,2146 ---- } ClearPageError(page); ! #if defined(CONFIG_COMP_PAGE_CACHE) && !defined(CONFIG_COMP_DIS_CLEAN) if (clean_page_compress_lock) hit_clean_page(page); Index: page_alloc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/page_alloc.c,v retrieving revision 1.25 retrieving revision 1.26 diff -C2 -r1.25 -r1.26 *** page_alloc.c 22 Nov 2002 16:01:34 -0000 1.25 --- page_alloc.c 29 Nov 2002 21:23:02 -0000 1.26 *************** *** 653,656 **** --- 653,659 ---- zone = contig_page_data.node_zones + ZONE_NORMAL; + if (num_memory_pages > zone->size) + num_memory_pages = zone->size; + /* whoops: that should be zone->size minus zholes.
Since * zholes is always 0 when calling free_area_init_core(), I Index: swap_state.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swap_state.c,v retrieving revision 1.40 retrieving revision 1.41 diff -C2 -r1.40 -r1.41 *** swap_state.c 22 Nov 2002 16:01:35 -0000 1.40 --- swap_state.c 29 Nov 2002 21:23:02 -0000 1.41 *************** *** 244,248 **** if (get_swap_compressed(entry)) PageSetCompressed(new_page); ! #ifdef CONFIG_COMP_PAGE_CACHE if (clean_page_compress_lock) hit_clean_page(new_page); --- 244,248 ---- if (get_swap_compressed(entry)) PageSetCompressed(new_page); ! #if defined(CONFIG_COMP_CACHE) && !defined(CONFIG_COMP_DIS_CLEAN) if (clean_page_compress_lock) hit_clean_page(new_page); Index: swapfile.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swapfile.c,v retrieving revision 1.36 retrieving revision 1.37 diff -C2 -r1.36 -r1.37 *** swapfile.c 22 Nov 2002 16:01:35 -0000 1.36 --- swapfile.c 29 Nov 2002 21:23:02 -0000 1.37 *************** *** 24,30 **** int total_swap_pages; static int swap_overflow; - #ifdef CONFIG_COMP_SWAP - unsigned long max_comp_swap_pages = 0; - #endif static const char Bad_file[] = "Bad swap file entry "; --- 24,27 ---- *************** *** 1350,1353 **** --- 1347,1351 ---- goto bad_swap; } + error = 0; memset(p->swap_map, 0, maxpages * sizeof(short)); *************** *** 1657,1658 **** --- 1655,1658 ---- return ret; } + + |
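The compound guard now repeats at every hit_clean_page() call site. A hypothetical consolidation, not part of the patch (maybe_hit_clean_page is an invented name), that would keep each call site to one line while still compiling the calls out when clean page handling is disabled:

#if defined(CONFIG_COMP_PAGE_CACHE) && !defined(CONFIG_COMP_DIS_CLEAN)
static inline void maybe_hit_clean_page(struct page *page)
{
	if (clean_page_compress_lock)
		hit_clean_page(page);
}
#else
static inline void maybe_hit_clean_page(struct page *page) { }
#endif

Each call site would then read maybe_hit_clean_page(page); regardless of configuration.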
From: Rodrigo S. de C. <rc...@us...> - 2002-11-29 21:23:05
|
Update of /cvsroot/linuxcompressed/linux/fs In directory sc8-pr-cvs1:/tmp/cvs-serv31487/fs Modified Files: buffer.c Log Message: Cleanups o New ifdefs to avoid compiling code for clean page adaptability when it is disabled. o Some whitespace changes to make the patch a little shorter. o Removed a variable not used in swapout.c. Bug fixes o Fixed a bug in comp_cache_fix_watermarks() introduced recently. This bug would set the watermark to a huge number (due to negative values), which would end up causing a huge increase in memory pressure. Fortunately, that is only noticeable when the system has a low amount of memory. o The vswap_num_reserved_entries variable was used without being initialized. Depending on the scenario, it could have a bogus value which would screw up compressed cache behaviour. o The total free space hash was sized as a function of free_space_interval: a potential bug if total_free_space_interval is set to a value different from the one in free_space_interval. o If the compressed cache cannot allocate the minimum number of pages in the boot process, the shrinkage code could end up annihilating the compressed cache, because it assumes the compressed cache will always have a number of pages equal to or greater than the minimum size. Index: buffer.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/fs/buffer.c,v retrieving revision 1.16 retrieving revision 1.17 diff -C2 -r1.16 -r1.17 *** buffer.c 22 Nov 2002 16:01:33 -0000 1.16 --- buffer.c 29 Nov 2002 21:23:02 -0000 1.17 *************** *** 797,800 **** --- 797,801 ---- UnlockPage(page); + return; |
From: Rodrigo S. de C. <rc...@us...> - 2002-11-29 21:23:05
|
Update of /cvsroot/linuxcompressed/linux/include/linux In directory sc8-pr-cvs1:/tmp/cvs-serv31487/include/linux Modified Files: swap.h Log Message: Cleanups o New ifdefs to avoid compiling code for clean page adaptability when it is disabled. o Some whitespace changes to make the patch a little shorter. o Removed a variable not used in swapout.c. Bug fixes o Fixed a bug in comp_cache_fix_watermarks() introduced recently. This bug would set the watermark to a huge number (due to negative values), which would end up causing a huge increase in memory pressure. Fortunately, that is only noticeable when the system has a low amount of memory. o The vswap_num_reserved_entries variable was used without being initialized. Depending on the scenario, it could have a bogus value which would screw up compressed cache behaviour. o The total free space hash was sized as a function of free_space_interval: a potential bug if total_free_space_interval is set to a value different from the one in free_space_interval. o If the compressed cache cannot allocate the minimum number of pages in the boot process, the shrinkage code could end up annihilating the compressed cache, because it assumes the compressed cache will always have a number of pages equal to or greater than the minimum size. Index: swap.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/swap.h,v retrieving revision 1.18 retrieving revision 1.19 diff -C2 -r1.18 -r1.19 *** swap.h 22 Nov 2002 16:01:34 -0000 1.18 --- swap.h 29 Nov 2002 21:23:02 -0000 1.19 *************** *** 179,183 **** extern swp_entry_t get_swap_page(void); extern void get_swaphandle_info(swp_entry_t, unsigned long *, kdev_t *, ! struct inode **); extern int swap_duplicate(swp_entry_t); extern int swap_count(struct page *); --- 179,183 ---- extern swp_entry_t get_swap_page(void); extern void get_swaphandle_info(swp_entry_t, unsigned long *, kdev_t *, ! struct inode **); extern int swap_duplicate(swp_entry_t); extern int swap_count(struct page *); |
From: Rodrigo S. de C. <rc...@us...> - 2002-11-26 21:52:59
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory sc8-pr-cvs1:/tmp/cvs-serv12691/mm/comp_cache Modified Files: adaptivity.c Log Message: Bug fix o Fixed compilation error Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.42 retrieving revision 1.43 diff -C2 -r1.42 -r1.43 *** adaptivity.c 26 Nov 2002 21:42:32 -0000 1.42 --- adaptivity.c 26 Nov 2002 21:52:55 -0000 1.43 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-11-26 19:34:55 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-11-26 19:46:51 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 744,748 **** new_comp_page->free_offset += new_fragment->compressed_size; ! comp_cache_free_space -= compressed_size; add_to_comp_page_list(new_comp_page, new_fragment); --- 744,748 ---- new_comp_page->free_offset += new_fragment->compressed_size; ! comp_cache_free_space -= new_fragment->compressed_size; add_to_comp_page_list(new_comp_page, new_fragment); |
From: Rodrigo S. de C. <rc...@us...> - 2002-11-26 21:42:35
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory sc8-pr-cvs1:/tmp/cvs-serv8394/mm/comp_cache Modified Files: adaptivity.c aux.c main.c swapout.c Log Message: Cleanup o Removed unused CF_End and related macros. o Removed special cases in get_comp_cache_page() where the comp_page structure existed but no page was set to it. o Removed the set_comp_page() function, which is not needed any longer. Bug fixes o Fixed, at least partially, a bug reported by Claudio Martella. The free space in the compressed cache wasn't accounted correctly in compact_comp_cache(). Pages were removed from their original locations, increasing the comp_cache_free_space variable, but the variable was not decreased when they were relocated. o Fixed a bug that would erroneously account the number of allocated pages if any allocation (page or structure of the compressed cache) failed in the boot process. That can be worse when a static compressed cache is used. It would also oops if neither the page nor the fragment structure could be allocated. This bug is fixed too. Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.41 retrieving revision 1.42 diff -C2 -r1.41 -r1.42 *** adaptivity.c 22 Nov 2002 16:01:36 -0000 1.41 --- adaptivity.c 26 Nov 2002 21:42:32 -0000 1.42 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-11-21 17:28:13 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-11-26 19:34:55 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 519,523 **** __free_pages(comp_page->page, COMP_PAGE_ORDER); ! set_comp_page(comp_page, NULL); kmem_cache_free(comp_cachep, comp_page); num_comp_pages--; --- 519,526 ---- __free_pages(comp_page->page, COMP_PAGE_ORDER); ! comp_cache_freeable_space -= COMP_PAGE_SIZE; ! comp_cache_free_space -= COMP_PAGE_SIZE; ! comp_page->page = NULL; ! kmem_cache_free(comp_cachep, comp_page); num_comp_pages--; *************** *** 741,744 **** --- 744,749 ---- new_comp_page->free_offset += new_fragment->compressed_size; + comp_cache_free_space -= compressed_size; + add_to_comp_page_list(new_comp_page, new_fragment); add_fragment_vswap(new_fragment); Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.44 retrieving revision 1.45 diff -C2 -r1.44 -r1.45 *** aux.c 22 Nov 2002 16:01:36 -0000 1.44 --- aux.c 26 Nov 2002 21:42:32 -0000 1.45 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * !
* Time-stamp: <2002-11-26 19:34:59 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 70,96 **** return q; - } - - inline void - set_comp_page(struct comp_cache_page * comp_page, struct page * page) - { - if (!comp_page) - BUG(); - if (comp_page->page) { - if (page) - goto out; - comp_cache_freeable_space -= COMP_PAGE_SIZE; - comp_cache_free_space -= COMP_PAGE_SIZE; - goto out; - } - - if (!page) - BUG(); - - comp_cache_freeable_space += COMP_PAGE_SIZE; - comp_cache_free_space += COMP_PAGE_SIZE; - - out: - comp_page->page = page; } --- 70,73 ---- Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.66 retrieving revision 1.67 diff -C2 -r1.66 -r1.67 *** main.c 22 Nov 2002 16:01:37 -0000 1.66 --- main.c 26 Nov 2002 21:42:32 -0000 1.67 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-10-25 08:54:11 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-11-26 19:32:57 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 37,41 **** unsigned long zone_num_comp_pages = 0; ! unsigned long comp_cache_free_space; kmem_cache_t * comp_cachep; --- 37,41 ---- unsigned long zone_num_comp_pages = 0; ! unsigned long comp_cache_free_space = 0; kmem_cache_t * comp_cachep; *************** *** 226,230 **** struct comp_cache_page * comp_page; struct page * page; ! int i; printk("Compressed Cache: %s\n", COMP_CACHE_VERSION); --- 226,230 ---- struct comp_cache_page * comp_page; struct page * page; ! int i, failed = 0; printk("Compressed Cache: %s\n", COMP_CACHE_VERSION); *************** *** 269,275 **** page = alloc_pages(GFP_KERNEL, COMP_PAGE_ORDER); ! if (!init_comp_page(&comp_page, page)) ! __free_pages(page, COMP_PAGE_ORDER); } comp_cache_free_space = num_comp_pages * COMP_PAGE_SIZE; --- 269,285 ---- page = alloc_pages(GFP_KERNEL, COMP_PAGE_ORDER); ! if (!page) ! goto failed; ! ! if (init_comp_page(&comp_page, page)) ! continue; ! ! __free_pages(page, COMP_PAGE_ORDER); ! failed: ! failed++; } + + num_comp_pages -= failed; + max_used_num_comp_pages -= failed; comp_cache_free_space = num_comp_pages * COMP_PAGE_SIZE; Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.73 retrieving revision 1.74 diff -C2 -r1.73 -r1.74 *** swapout.c 22 Nov 2002 16:01:45 -0000 1.73 --- swapout.c 26 Nov 2002 21:42:32 -0000 1.74 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-10-25 11:26:59 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-11-26 19:33:23 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 529,537 **** break; ! if (!comp_page->page) { ! if (comp_page->free_space != COMP_PAGE_SIZE) ! BUG(); ! goto alloc_new_page; ! } aux_comp_size = 0; --- 529,534 ---- break; ! if (!comp_page->page) ! 
BUG(); aux_comp_size = 0; *************** *** 617,639 **** page_cache_release(page); return comp_page; - - alloc_new_page: - /* remove from free space hash table before update */ - remove_comp_page_from_hash_table(comp_page); - - if (comp_page->page) - BUG(); - - spin_unlock(&comp_cache_lock); - new_page = alloc_page(gfp_mask); - spin_lock(&comp_cache_lock); - - if (!new_page) - goto failed; - - set_comp_page(comp_page, new_page); - - if (TryLockPage(comp_page->page)) - BUG(); check_references: --- 614,617 ---- |
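The boot-path fix in main.c above reads more clearly when untangled from the goto into the loop body. A condensed, behavior-equivalent sketch using the identifiers from the hunk (declarations, locking, and the surrounding function elided; the loop bound is an assumption, one iteration per initial cache page):

    int i, failed = 0;

    for (i = 0; i < num_comp_pages; i++) {
            page = alloc_pages(GFP_KERNEL, COMP_PAGE_ORDER);
            if (!page) {                          /* page allocation failed */
                    failed++;
                    continue;
            }
            if (init_comp_page(&comp_page, page))
                    continue;                     /* success: page joins the cache */
            __free_pages(page, COMP_PAGE_ORDER);  /* struct allocation failed */
            failed++;
    }

    /* Only count the pages we actually got; previously the counters
     * pretended every allocation had succeeded. */
    num_comp_pages -= failed;
    max_used_num_comp_pages -= failed;
    comp_cache_free_space = num_comp_pages * COMP_PAGE_SIZE;

The counter adjustments at the end are taken directly from the diff; they are what keeps a static cache from believing it owns pages it never obtained.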
From: Rodrigo S. de C. <rc...@us...> - 2002-11-26 21:42:34
|
Update of /cvsroot/linuxcompressed/linux/include/linux In directory sc8-pr-cvs1:/tmp/cvs-serv8394/include/linux Modified Files: comp_cache.h Log Message: Cleanup o Removed the unused CF_End flag and its related macros o Removed the special cases in get_comp_cache_page() where the comp_page structure existed but had no page assigned to it o Removed the set_comp_page() function, which is no longer needed Bug fixes o Fixed, at least partially, a bug reported by Claudio Martella: the free space in the compressed cache wasn't accounted for correctly in compact_comp_cache(). Pages were removed from their original locations, which increased the comp_cache_free_space variable, but it was not decreased when they were relocated. o Fixed a bug that miscounted the number of allocated pages when any allocation (a page or a compressed cache structure) failed during boot; this is worse when a static compressed cache is used. The same path would also oops if neither the page nor the fragment structure could be allocated, which is fixed as well. Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.103 retrieving revision 1.104 diff -C2 -r1.103 -r1.104 *** comp_cache.h 22 Nov 2002 16:01:33 -0000 1.103 --- comp_cache.h 26 Nov 2002 21:42:32 -0000 1.104 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-11-21 16:46:32 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-11-26 19:35:01 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 31,35 **** #include <linux/minilzo.h> ! #define COMP_CACHE_VERSION "0.24pre5" /* maximum compressed size of a page */ --- 31,35 ---- #include <linux/minilzo.h> ! #define COMP_CACHE_VERSION "0.24pre6" /* maximum compressed size of a page */ *************** *** 182,186 **** #define CF_ToBeFreed 1 #define CF_Active 2 - #define CF_End 3 #define CompFragmentDirty(fragment) test_bit(CF_Dirty, &(fragment)->flags) --- 182,185 ---- *************** *** 200,209 **** #define CompFragmentClearActive(fragment) clear_bit(CF_Active, &(fragment)->flags) - #define CompFragmentEnd(fragment) test_bit(CF_End, &(fragment)->flags) - #define CompFragmentSetEnd(fragment) set_bit(CF_End, &(fragment)->flags) - #define CompFragmentTestandSetEnd(fragment) test_and_set_bit(CF_End, &(fragment)->flags) - #define CompFragmentTestandClearEnd(fragment) test_and_clear_bit(CF_End, &(fragment)->flags) - #define CompFragmentClearEnd(fragment) clear_bit(CF_End, &(fragment)->flags) - /* general */ #define get_fragment(f) do { \ --- 199,202 ---- *************** *** 504,508 **** /* aux.c */ unsigned long long big_division(unsigned long long, unsigned long long); - inline void set_comp_page(struct comp_cache_page *, struct page *); inline void check_all_fragments(struct comp_cache_page *); void add_to_comp_page_list(struct comp_cache_page *, struct comp_cache_fragment *); --- 497,500 ---- |
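For readers unfamiliar with the CF_* macros being deleted above: they follow the kernel's atomic bit-flag idiom, the same one used for page flags. The sketch below is illustrative only, mirroring the CF_Dirty accessors that remain in comp_cache.h; the bit number 0 for CF_Dirty is an assumption inferred from CF_ToBeFreed being 1 and CF_Active being 2:

    #define CF_Dirty        0       /* bit index into fragment->flags */

    #define CompFragmentDirty(f)             test_bit(CF_Dirty, &(f)->flags)
    #define CompFragmentSetDirty(f)          set_bit(CF_Dirty, &(f)->flags)
    #define CompFragmentClearDirty(f)        clear_bit(CF_Dirty, &(f)->flags)
    #define CompFragmentTestandClearDirty(f) \
            test_and_clear_bit(CF_Dirty, &(f)->flags)

Each accessor expands to an atomic bit operation, so fragment flags can be tested and flipped without holding a spinlock; removing CF_End simply retires bit 3 and its five unused wrappers.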
From: Rodrigo S. de C. <rc...@us...> - 2002-11-22 16:02:09
|
Update of /cvsroot/linuxcompressed/linux/mm In directory sc8-pr-cvs1:/tmp/cvs-serv13256/mm Modified Files: filemap.c page_alloc.c swap_state.c swapfile.c vmscan.c Log Message: Features o New clean page adaptability. This policy disables the compression of clean pages when it is not worth it (i.e., when most clean pages are compressed and then freed without ever being read back from the cache). o Two new configuration options to disable the whole adaptability policy and clean page adaptability separately. They were mostly used for tests, but they may be useful for someone whose compressed cache performs poorly. Bug Fixes o Made the LZO code compile on Athlon systems o __read_comp_cache(): if a dirty fragment was supposed to be freed, it wasn't actually freed, because we forgot to drop a reference on the fragment. Cleanups o Lots, mainly in adaptivity.c Index: filemap.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/filemap.c,v retrieving revision 1.40 retrieving revision 1.41 diff -C2 -r1.40 -r1.41 *** filemap.c 10 Sep 2002 16:43:12 -0000 1.40 --- filemap.c 22 Nov 2002 16:01:34 -0000 1.41 *************** *** 776,780 **** return error; } ! } #endif error = mapping->a_ops->readpage(file, page); --- 776,782 ---- return error; } ! } ! if (clean_page_compress_lock) ! hit_clean_page(page); #endif error = mapping->a_ops->readpage(file, page); *************** *** 1549,1552 **** --- 1551,1558 ---- readpage: + #ifdef CONFIG_COMP_PAGE_CACHE + if (clean_page_compress_lock) + hit_clean_page(page); + #endif /* ... and start the actual read. The read will unlock the page. */ error = mapping->a_ops->readpage(filp, page); *************** *** 2103,2106 **** --- 2109,2116 ---- } + #ifdef CONFIG_COMP_PAGE_CACHE + if (clean_page_compress_lock) + hit_clean_page(page); + #endif if (!mapping->a_ops->readpage(file, page)) { wait_on_page(page); *************** *** 2130,2133 **** --- 2140,2147 ---- } ClearPageError(page); + #ifdef CONFIG_COMP_PAGE_CACHE + if (clean_page_compress_lock) + hit_clean_page(page); + #endif if (!mapping->a_ops->readpage(file, page)) { wait_on_page(page); Index: page_alloc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/page_alloc.c,v retrieving revision 1.24 retrieving revision 1.25 diff -C2 -r1.24 -r1.25 *** page_alloc.c 10 Sep 2002 16:43:14 -0000 1.24 --- page_alloc.c 22 Nov 2002 16:01:34 -0000 1.25 *************** *** 638,662 **** { unsigned long mask; ! int j = ZONE_NORMAL; ! zone_t *zone = contig_page_data.node_zones + j; ! int real_num_comp_pages; ! ! /* the real number of memory pages used by compressed cache */ ! real_num_comp_pages = comp_page_to_page(num_comp_pages); ! zone_num_comp_pages = real_num_comp_pages; ! ! if (real_num_comp_pages > zone->size) ! real_num_comp_pages = zone->size; /* whoops: that should be zone->size minus zholes. Since * zholes is always 0 when calling free_area_init_core(), I * guess we don't have to worry about that now */ ! mask = ((zone->size - real_num_comp_pages)/zone_balance_ratio[j]); ! if (mask < zone_balance_min[j]) ! mask = zone_balance_min[j]; ! else if (mask > zone_balance_max[j]) ! mask = zone_balance_max[j]; zone->pages_min = mask; --- 638,665 ---- { unsigned long mask; ! zone_t *zone; ! int num_memory_pages; ! /* We don't have to worry if we have so much memory that it ! * will always be above the maximum value. As of 2.4.18, this ! * happens when we have 256M, since it always has a ! 
* (zone->size - num_memory_pages) greater than 128M */ ! //if (num_physpages >= 2 * zone_balance_ratio[ZONE_NORMAL] * zone_balance_max[ZONE_NORMAL]) ! //return; + /* the real number of memory pages used by compressed cache */ + zone_num_comp_pages = num_memory_pages = comp_page_to_page(num_comp_pages); + + zone = contig_page_data.node_zones + ZONE_NORMAL; + /* whoops: that should be zone->size minus zholes. Since * zholes is always 0 when calling free_area_init_core(), I * guess we don't have to worry about that now */ ! mask = ((zone->size - num_memory_pages)/zone_balance_ratio[ZONE_NORMAL]); ! if (mask < zone_balance_min[ZONE_NORMAL]) ! mask = zone_balance_min[ZONE_NORMAL]; ! else if (mask > zone_balance_max[ZONE_NORMAL]) ! mask = zone_balance_max[ZONE_NORMAL]; zone->pages_min = mask; *************** *** 664,679 **** zone->pages_high = mask*3; } - - void __init - comp_cache_init_fix_watermarks(int num_comp_pages) - { - zone_t *zone = contig_page_data.node_zones + ZONE_NORMAL; - - printk("Compressed Cache: page watermarks (normal zone)\nCompressed Cache: (%lu, %lu, %lu) -> ", - zone->pages_min, zone->pages_low, zone->pages_high); - comp_cache_fix_watermarks(num_comp_pages); - printk("(%lu, %lu, %lu)\n", zone->pages_min, zone->pages_low, zone->pages_high); - } - #endif --- 667,670 ---- Index: swap_state.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swap_state.c,v retrieving revision 1.39 retrieving revision 1.40 diff -C2 -r1.39 -r1.40 *** swap_state.c 10 Sep 2002 16:43:16 -0000 1.39 --- swap_state.c 22 Nov 2002 16:01:35 -0000 1.40 *************** *** 244,247 **** --- 244,251 ---- if (get_swap_compressed(entry)) PageSetCompressed(new_page); + #ifdef CONFIG_COMP_PAGE_CACHE + if (clean_page_compress_lock) + hit_clean_page(new_page); + #endif rw_swap_page(READ, new_page); return new_page; Index: swapfile.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/swapfile.c,v retrieving revision 1.35 retrieving revision 1.36 diff -C2 -r1.35 -r1.36 *** swapfile.c 10 Sep 2002 16:43:17 -0000 1.35 --- swapfile.c 22 Nov 2002 16:01:35 -0000 1.36 *************** *** 24,27 **** --- 24,30 ---- int total_swap_pages; static int swap_overflow; + #ifdef CONFIG_COMP_SWAP + unsigned long max_comp_swap_pages = 0; + #endif static const char Bad_file[] = "Bad swap file entry "; *************** *** 1347,1351 **** goto bad_swap; } - error = 0; memset(p->swap_map, 0, maxpages * sizeof(short)); --- 1350,1353 ---- *************** *** 1655,1657 **** return ret; } - --- 1657,1658 ---- Index: vmscan.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/vmscan.c,v retrieving revision 1.43 retrieving revision 1.44 diff -C2 -r1.43 -r1.44 *** vmscan.c 31 Jul 2002 12:31:05 -0000 1.43 --- vmscan.c 22 Nov 2002 16:01:36 -0000 1.44 *************** *** 520,523 **** --- 520,533 ---- if (!PageCompCache(page)) { int compressed; + + #ifndef CONFIG_COMP_DIS_CLEAN + /* enable this #if 0 to enable policy that + * stop STORING clean page in compressed + * cache */ + if (clean_page_compress_lock) { + add_clean_page(page); + goto check_freeable; + } + #endif page_cache_get(page); *************** *** 535,539 **** } ! spin_lock(&pagecache_lock); if (!is_page_cache_freeable(page)) { spin_unlock(&pagecache_lock); --- 545,550 ---- } ! spin_lock(&pagecache_lock); ! check_freeable: if (!is_page_cache_freeable(page)) { spin_unlock(&pagecache_lock); |
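The watermark adjustment rewritten in page_alloc.c above boils down to one clamped formula: the normal zone behaves as if the pages taken by the compressed cache did not exist. A standalone rendering of the new comp_cache_fix_watermarks() logic (2.4-era names; the pages_low assignment is the stock 2.4 value and is not itself visible in the hunk):

    zone = contig_page_data.node_zones + ZONE_NORMAL;

    /* size the watermarks against the memory left over after the
     * compressed cache takes its pages */
    mask = (zone->size - num_memory_pages) / zone_balance_ratio[ZONE_NORMAL];
    if (mask < zone_balance_min[ZONE_NORMAL])
            mask = zone_balance_min[ZONE_NORMAL];
    else if (mask > zone_balance_max[ZONE_NORMAL])
            mask = zone_balance_max[ZONE_NORMAL];

    zone->pages_min  = mask;        /* direct reclaim threshold */
    zone->pages_low  = mask * 2;    /* kswapd wakeup threshold */
    zone->pages_high = mask * 3;    /* kswapd sleep threshold */

So growing the compressed cache lowers the zone's watermarks proportionally (never below zone_balance_min), which keeps the kernel from reclaiming aggressively just because the cache occupies part of the zone.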
From: Rodrigo S. de C. <rc...@us...> - 2002-11-22 16:02:07
|
Update of /cvsroot/linuxcompressed/linux/include/linux In directory sc8-pr-cvs1:/tmp/cvs-serv13256/include/linux Modified Files: comp_cache.h lzoconf.h minilzo.h swap.h Added Files: sysctl.h Log Message: Features o New clean page adaptability. This policy disables the compression of clean pages when it is not worth it (i.e., when most clean pages are compressed and then freed without ever being read back from the cache). o Two new configuration options to disable the whole adaptability policy and clean page adaptability separately. They were mostly used for tests, but they may be useful for someone whose compressed cache performs poorly. Bug Fixes o Made the LZO code compile on Athlon systems o __read_comp_cache(): if a dirty fragment was supposed to be freed, it wasn't actually freed, because we forgot to drop a reference on the fragment. Cleanups o Lots, mainly in adaptivity.c Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.102 retrieving revision 1.103 diff -C2 -r1.102 -r1.103 *** comp_cache.h 10 Sep 2002 17:23:55 -0000 1.102 --- comp_cache.h 22 Nov 2002 16:01:33 -0000 1.103 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-09-10 14:05:49 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-11-21 16:46:32 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 31,41 **** #include <linux/minilzo.h> ! #define COMP_CACHE_VERSION "0.24pre4" /* maximum compressed size of a page */ #define MAX_COMPRESSED_SIZE 4500 ! extern unsigned long num_comp_pages, num_fragments, num_active_fragments, num_swapper_fragments, num_clean_fragments, zone_num_comp_pages; ! extern unsigned long new_num_comp_pages, min_num_comp_pages, max_num_comp_pages, max_used_num_comp_pages; extern kmem_cache_t * fragment_cachep; --- 31,46 ---- #include <linux/minilzo.h> ! #define COMP_CACHE_VERSION "0.24pre5" /* maximum compressed size of a page */ #define MAX_COMPRESSED_SIZE 4500 ! /* compressed cache metadata */ ! extern unsigned long num_fragments, num_active_fragments; ! extern unsigned long num_swapper_fragments, num_clean_fragments; ! ! extern unsigned long num_comp_pages, zone_num_comp_pages; ! extern unsigned long min_num_comp_pages, max_num_comp_pages, max_used_num_comp_pages; ! extern kmem_cache_t * fragment_cachep; *************** *** 111,125 **** ((struct swp_buffer *) kmem_cache_alloc(comp_cachep, SLAB_ATOMIC)) ! extern int shmem_page(struct page * page); /* adaptivity.c */ #ifdef CONFIG_COMP_CACHE extern unsigned long failed_comp_page_allocs; ! extern int growing_lock; int grow_on_demand(void); int shrink_on_demand(struct comp_cache_page *); void compact_comp_cache(void); void balance_lru_queues(void); #else static inline int grow_on_demand(void) { return 0; } --- 116,145 ---- ((struct swp_buffer *) kmem_cache_alloc(comp_cachep, SLAB_ATOMIC)) ! extern int shmem_page(struct page *); ! extern void comp_cache_fix_watermarks(int); /* adaptivity.c */ #ifdef CONFIG_COMP_CACHE extern unsigned long failed_comp_page_allocs; ! 
extern int growth_lock; int grow_on_demand(void); int shrink_on_demand(struct comp_cache_page *); void compact_comp_cache(void); + + #ifdef CONFIG_COMP_DIS_CLEAN + static inline void hit_clean_page(struct page * page) { }; + static inline void add_clean_page(struct page * page) { }; + #else + void hit_clean_page(struct page *); + void add_clean_page(struct page *); + #endif + + #ifdef CONFIG_COMP_DIS_ADAPT + static inline void balance_lru_queues(void) { }; + #else void balance_lru_queues(void); + #endif + #else static inline int grow_on_demand(void) { return 0; } *************** *** 127,130 **** --- 147,173 ---- #endif + extern unsigned long clean_page_hash_size; + extern unsigned int clean_page_hash_bits; + + struct clean_page_data { + struct list_head list; + + struct address_space * mapping; + unsigned long index; + + struct clean_page_data * next_hash; + struct clean_page_data ** pprev_hash; + }; + + static inline unsigned long + clean_page_hashfn(struct address_space * mapping, unsigned long index) + { + #define i (((unsigned long) mapping)/(sizeof(struct inode) & ~ (sizeof(struct inode) - 1))) + #define s(x) ((x)+((x) >> clean_page_hash_bits)) + return s(i+index) & (clean_page_hash_size - 1); + #undef i + #undef s + } + /* swapout.c */ extern struct list_head swp_free_buffer_head; *************** *** 139,142 **** --- 182,186 ---- #define CF_ToBeFreed 1 #define CF_Active 2 + #define CF_End 3 #define CompFragmentDirty(fragment) test_bit(CF_Dirty, &(fragment)->flags) *************** *** 150,158 **** #define CompFragmentTestandSetToBeFreed(fragment) test_and_set_bit(CF_ToBeFreed, &(fragment)->flags) ! #define CompFragmentActive(fragment) test_bit(CF_Active, &(fragment)->flags) ! #define CompFragmentSetActive(fragment) set_bit(CF_Active, &(fragment)->flags) ! #define CompFragmentTestandSetActive(fragment) test_and_set_bit(CF_Active, &(fragment)->flags) ! #define CompFragmentTestandClearActive(fragment) test_and_clear_bit(CF_Active, &(fragment)->flags) ! #define CompFragmentClearActive(fragment) clear_bit(CF_Active, &(fragment)->flags) /* general */ --- 194,208 ---- #define CompFragmentTestandSetToBeFreed(fragment) test_and_set_bit(CF_ToBeFreed, &(fragment)->flags) ! #define CompFragmentActive(fragment) test_bit(CF_Active, &(fragment)->flags) ! #define CompFragmentSetActive(fragment) set_bit(CF_Active, &(fragment)->flags) ! #define CompFragmentTestandSetActive(fragment) test_and_set_bit(CF_Active, &(fragment)->flags) ! #define CompFragmentTestandClearActive(fragment) test_and_clear_bit(CF_Active, &(fragment)->flags) ! #define CompFragmentClearActive(fragment) clear_bit(CF_Active, &(fragment)->flags) ! ! #define CompFragmentEnd(fragment) test_bit(CF_End, &(fragment)->flags) ! #define CompFragmentSetEnd(fragment) set_bit(CF_End, &(fragment)->flags) ! #define CompFragmentTestandSetEnd(fragment) test_and_set_bit(CF_End, &(fragment)->flags) ! #define CompFragmentTestandClearEnd(fragment) test_and_clear_bit(CF_End, &(fragment)->flags) ! 
#define CompFragmentClearEnd(fragment) clear_bit(CF_End, &(fragment)->flags) /* general */ *************** *** 233,236 **** --- 283,288 ---- /* proc.c */ + extern int clean_page_compress_lock; + #ifdef CONFIG_COMP_CACHE void decompress_fragment_to_page(struct comp_cache_fragment *, struct page *); *************** *** 239,244 **** void __init comp_cache_algorithms_init(void); - - extern int clean_page_compress_lock; #else static inline void decompress_swap_cache_page(struct page * page) { }; --- 291,294 ---- *************** *** 308,311 **** --- 358,362 ---- extern unsigned long comp_cache_free_space; extern spinlock_t comp_cache_lock; + extern struct comp_cache_fragment * last_checked_inactive; #else static inline void comp_cache_init(void) {}; *************** *** 507,511 **** int set_pte_list_to_entry(struct pte_list *, swp_entry_t, swp_entry_t); ! struct comp_cache_page * search_comp_page(struct comp_cache_page **, int); struct comp_cache_fragment ** create_fragment_hash(unsigned long *, unsigned int *, unsigned int *); --- 558,562 ---- int set_pte_list_to_entry(struct pte_list *, swp_entry_t, swp_entry_t); ! struct comp_cache_page * FASTCALL(search_comp_page(struct comp_cache_page ** hash_table, int free_space)); struct comp_cache_fragment ** create_fragment_hash(unsigned long *, unsigned int *, unsigned int *); Index: lzoconf.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/lzoconf.h,v retrieving revision 1.5 retrieving revision 1.6 diff -C2 -r1.5 -r1.6 *** lzoconf.h 10 Sep 2002 20:19:06 -0000 1.5 --- lzoconf.h 22 Nov 2002 16:01:33 -0000 1.6 *************** *** 41,60 **** # include <config.h> #endif - //#include <limits.h> - #include <linux/kernel.h> #define CHAR_BIT 8 - - #undef UCHAR_MAX #define UCHAR_MAX 255 ! /* For the sake of 16 bit hosts, we may not use -32768 */ ! #define SHRT_MIN (-32767-1) ! #undef SHRT_MAX ! #define SHRT_MAX 32767 - /* Maximum value an `unsigned short int' can hold. (Minimum is 0). */ - #undef USHRT_MAX #define USHRT_MAX 65535 #ifdef __cplusplus --- 41,77 ---- # include <config.h> #endif + /* definitions from limits.h */ #define CHAR_BIT 8 #define UCHAR_MAX 255 ! #ifndef __INT_MAX__ ! #define __INT_MAX__ 2147483647 ! #endif ! #undef INT_MIN ! #define INT_MIN (-INT_MAX-1) ! #undef INT_MAX ! #define INT_MAX __INT_MAX__ ! ! #undef UINT_MAX ! #define UINT_MAX (INT_MAX * 2U + 1) ! ! #ifndef __LONG_MAX__ ! #if defined (__alpha__) || (defined (_ARCH_PPC) && defined (__64BIT__)) || defined (__sparc_v9__) || defined (__sparcv9) ! #define __LONG_MAX__ 9223372036854775807L ! #else ! #define __LONG_MAX__ 2147483647L ! #endif /* __alpha__ || sparc64 */ ! #endif ! #undef LONG_MIN ! #define LONG_MIN (-LONG_MAX-1) ! #undef LONG_MAX ! #define LONG_MAX __LONG_MAX__ ! ! #undef ULONG_MAX ! #define ULONG_MAX (LONG_MAX * 2UL + 1) #define USHRT_MAX 65535 + #define SHRT_MAX 32767 #ifdef __cplusplus Index: minilzo.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/minilzo.h,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -r1.4 -r1.5 *** minilzo.h 29 May 2002 21:28:54 -0000 1.4 --- minilzo.h 22 Nov 2002 16:01:34 -0000 1.5 *************** *** 46,50 **** #undef LZO_HAVE_CONFIG_H ! #include "lzoconf.h" #if !defined(LZO_VERSION) || (LZO_VERSION != MINILZO_VERSION) --- 46,50 ---- #undef LZO_HAVE_CONFIG_H ! 
#include <linux/lzoconf.h> #if !defined(LZO_VERSION) || (LZO_VERSION != MINILZO_VERSION) Index: swap.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/swap.h,v retrieving revision 1.17 retrieving revision 1.18 diff -C2 -r1.17 -r1.18 *** swap.h 10 Sep 2002 17:23:56 -0000 1.17 --- swap.h 22 Nov 2002 16:01:34 -0000 1.18 *************** *** 75,80 **** #define SWAP_MAP_MAX 0x7fff #define SWAP_MAP_BAD 0x8000 - #define SWAP_MAP_COMP 0x0000 #define SWAP_MAP_COMP_BIT 0x0000 #define swap_map_count(swap) (swap) #endif --- 75,80 ---- #define SWAP_MAP_MAX 0x7fff #define SWAP_MAP_BAD 0x8000 #define SWAP_MAP_COMP_BIT 0x0000 + #define SWAP_MAP_COMP_BIT_MASK 0x0000 #define swap_map_count(swap) (swap) #endif *************** *** 179,183 **** extern swp_entry_t get_swap_page(void); extern void get_swaphandle_info(swp_entry_t, unsigned long *, kdev_t *, ! struct inode **); extern int swap_duplicate(swp_entry_t); extern int swap_count(struct page *); --- 179,183 ---- extern swp_entry_t get_swap_page(void); extern void get_swaphandle_info(swp_entry_t, unsigned long *, kdev_t *, ! struct inode **); extern int swap_duplicate(swp_entry_t); extern int swap_count(struct page *); |
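One pattern in the comp_cache.h hunk above deserves a note: when a policy is configured out, its hooks collapse into empty static inlines, so the call sites patched into filemap.c and swap_state.c need no extra #ifdef for this option. Copied in spirit from the diff:

    #ifdef CONFIG_COMP_DIS_CLEAN
    /* clean page adaptability compiled out: hooks cost nothing */
    static inline void hit_clean_page(struct page *page) { }
    static inline void add_clean_page(struct page *page) { }
    #else
    /* real implementations live in mm/comp_cache/adaptivity.c */
    void hit_clean_page(struct page *);
    void add_clean_page(struct page *);
    #endif

The compiler drops the empty inline calls entirely, so a kernel built with CONFIG_COMP_DIS_CLEAN pays nothing for the instrumentation.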
From: Rodrigo S. de C. <rc...@us...> - 2002-11-22 16:02:06
|
Update of /cvsroot/linuxcompressed/linux/fs In directory sc8-pr-cvs1:/tmp/cvs-serv13256/fs Modified Files: buffer.c Log Message: Features o New clean page adaptability. This policy disables the compression of clean pages when it is not worth it (i.e., when most clean pages are compressed and then freed without ever being read back from the cache). o Two new configuration options to disable the whole adaptability policy and clean page adaptability separately. They were mostly used for tests, but they may be useful for someone whose compressed cache performs poorly. Bug Fixes o Made the LZO code compile on Athlon systems o __read_comp_cache(): if a dirty fragment was supposed to be freed, it wasn't actually freed, because we forgot to drop a reference on the fragment. Cleanups o Lots, mainly in adaptivity.c Index: buffer.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/fs/buffer.c,v retrieving revision 1.15 retrieving revision 1.16 diff -C2 -r1.15 -r1.16 *** buffer.c 9 May 2002 12:31:01 -0000 1.15 --- buffer.c 22 Nov 2002 16:01:33 -0000 1.16 *************** *** 797,801 **** UnlockPage(page); - return; --- 797,800 ---- |
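The clean page adaptability described in the log is, at its core, a hit counter over a hash of recently evicted clean pages. A schematic version of the re-enable decision (the full hit_clean_page() appears in the adaptivity.c diff further down this page; the function name below is illustrative, and the old_nr_clean_page_hits bookkeeping is omitted):

    /* Called when a clean page we declined to compress had to be
     * re-read from disk: too many of these means skipping compression
     * is costing more IO than it saves. */
    static void count_clean_page_hit(void)
    {
            nr_clean_page_hits++;
            /* threshold from the real code: hits exceed 10% of the
             * clean page hash size */
            if (nr_clean_page_hits * 10 > clean_page_hash_size) {
                    clean_page_compress_lock = 0;   /* resume compression */
                    nr_clean_page_hits = 0;
            }
    }

clean_page_compress_lock == 1 means clean pages are currently not being compressed; the hit_clean_page() calls inserted before the ->readpage() invocations in filemap.c record exactly these misses.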
From: Rodrigo S. de C. <rc...@us...> - 2002-11-22 16:02:06
|
Update of /cvsroot/linuxcompressed/linux/Documentation In directory sc8-pr-cvs1:/tmp/cvs-serv13256/Documentation Modified Files: Configure.help Log Message: Features o New clean page adaptability. This policy disables the compression of clean pages when it is not worth it (i.e., when most clean pages are compressed and then freed without ever being read back from the cache). o Two new configuration options to disable the whole adaptability policy and clean page adaptability separately. They were mostly used for tests, but they may be useful for someone whose compressed cache performs poorly. Bug Fixes o Made the LZO code compile on Athlon systems o __read_comp_cache(): if a dirty fragment was supposed to be freed, it wasn't actually freed, because we forgot to drop a reference on the fragment. Cleanups o Lots, mainly in adaptivity.c Index: Configure.help =================================================================== RCS file: /cvsroot/linuxcompressed/linux/Documentation/Configure.help,v retrieving revision 1.11 retrieving revision 1.12 diff -C2 -r1.11 -r1.12 *** Configure.help 10 Sep 2002 16:42:54 -0000 1.11 --- Configure.help 22 Nov 2002 16:01:32 -0000 1.12 *************** *** 402,406 **** store only anonymous pages, ie pages not mapped to files. ! If unsure, say N here. Double Page Size --- 402,406 ---- store only anonymous pages, ie pages not mapped to files. ! If unsure, say Y here. Double Page Size *************** *** 421,429 **** CONFIG_COMP_SWAP ! Compressed cache swaps out its fragments (i.e, compressed memory ! pages) in compressed format. If you also want it to swap out them ! clustered to the disk, in order to reduce the writeout traffic, say ! Y here. Note that this option adds some data structures that will ! cost some memory, so if you don't have much, you'd better say N. Normal floppy disk support --- 421,457 ---- CONFIG_COMP_SWAP ! If you want to write many pages together in a block on the swap ! device, say Y here. The compressed cache will keep swapping out the ! pages in compressed form, and will group them to save swap ! space. This is likely to decrease the number of IOs performed on the ! swap device (dependent on the compression ratio). ! ! Notice that this option introduces a memory overhead due to the data ! structures needed for the new swap addressing (dependent on the swap ! space). ! ! Disable Adaptability ! CONFIG_COMP_DIS_ADAPT ! ! Select this option if you want to disable the compressed cache ! adaptability policy. In this case, the compressed cache is known as ! static, because it has a fixed size that does not change at run ! time. When adaptability is disabled, the "compsize=" kernel option ! selects the static compressed cache size rather than the maximum ! size. This option disables clean page adaptability too. ! ! If unsure, say N here. ! ! Disable Clean Page Adaptability ! CONFIG_COMP_DIS_CLEAN ! ! Clean page adaptability attempts to detect when compressing clean ! pages is not worthwhile, and disables that compression. While the ! compression of clean pages is disabled, it keeps track of newly ! evicted clean pages in order to decide when the compressed cache ! should resume compressing them. Say Y here if you want to disable ! clean page adaptability. ! ! If unsure, say N here. Normal floppy disk support |
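To make the compsize= semantics in the help text concrete: with CONFIG_COMP_DIS_ADAPT set, the value becomes the fixed cache size; otherwise it is only a ceiling. The main.c hunk later on this page implements it essentially like this (a condensed sketch; the 48-page floor and the half-of-RAM clamp are from the diff):

    min_num_comp_pages = page_to_comp_page(48);

    /* sanity-clamp the user-supplied maximum (compsize=) */
    if (!max_num_comp_pages ||
        max_num_comp_pages < min_num_comp_pages ||
        max_num_comp_pages > num_physpages * 0.5)
            max_num_comp_pages =
                    page_to_comp_page((unsigned long) (num_physpages * 0.5));

    #ifdef CONFIG_COMP_DIS_ADAPT
    num_comp_pages = max_num_comp_pages;    /* static: boot at the limit */
    #else
    num_comp_pages = min_num_comp_pages;    /* adaptive: start small, grow */
    #endif

So a static cache occupies its full configured size from boot, while an adaptive cache boots at 48 pages and grows or shrinks on demand.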
From: Rodrigo S. de C. <rc...@us...> - 2002-11-22 16:02:06
|
Update of /cvsroot/linuxcompressed/linux/arch/i386 In directory sc8-pr-cvs1:/tmp/cvs-serv13256/arch/i386 Modified Files: config.in Log Message: Features o New clean page adaptability. This policy disables the compression of clean pages when it is not worth it (i.e., when most clean pages are compressed and then freed without ever being read back from the cache). o Two new configuration options to disable the whole adaptability policy and clean page adaptability separately. They were mostly used for tests, but they may be useful for someone whose compressed cache performs poorly. Bug Fixes o Made the LZO code compile on Athlon systems o __read_comp_cache(): if a dirty fragment was supposed to be freed, it wasn't actually freed, because we forgot to drop a reference on the fragment. Cleanups o Lots, mainly in adaptivity.c Index: config.in =================================================================== RCS file: /cvsroot/linuxcompressed/linux/arch/i386/config.in,v retrieving revision 1.23 retrieving revision 1.24 diff -C2 -r1.23 -r1.24 *** config.in 10 Sep 2002 16:42:58 -0000 1.23 --- config.in 22 Nov 2002 16:01:33 -0000 1.24 *************** *** 213,216 **** --- 213,222 ---- bool ' Double Page Size' CONFIG_COMP_DOUBLE_PAGE bool ' Compressed Swap' CONFIG_COMP_SWAP + bool ' Disable Adaptability' CONFIG_COMP_DIS_ADAPT + if [ "$CONFIG_COMP_DIS_ADAPT" = "y" ]; then + define_bool CONFIG_COMP_DIS_CLEAN y + else + bool ' Disable Clean Page Adaptability' CONFIG_COMP_DIS_CLEAN + fi fi fi |
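The config.in rule above encodes a one-way dependency: disabling adaptability as a whole forces clean page adaptability off too, while the reverse remains a free choice. Stated as a compile-time check (an illustrative assertion, not code from the tree):

    #if defined(CONFIG_COMP_DIS_ADAPT) && !defined(CONFIG_COMP_DIS_CLEAN)
    #error CONFIG_COMP_DIS_ADAPT must imply CONFIG_COMP_DIS_CLEAN
    #endif

The define_bool branch guarantees exactly this, which is why the Makefile in the next message can drop adaptivity.o entirely when CONFIG_COMP_DIS_ADAPT=y without leaving the clean page hooks dangling.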
From: Rodrigo S. de C. <rc...@us...> - 2002-11-22 16:01:53
|
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory sc8-pr-cvs1:/tmp/cvs-serv13256/mm/comp_cache Modified Files: Makefile adaptivity.c aux.c free.c main.c minilzo.c proc.c swapin.c swapout.c Log Message: Features o New clean page adaptability. This policy disables the compression of clean pages when it is not worth it (i.e., when most clean pages are compressed and then freed without ever being read back from the cache). o Two new configuration options to disable the whole adaptability policy and clean page adaptability separately. They were mostly used for tests, but they may be useful for someone whose compressed cache performs poorly. Bug Fixes o Made the LZO code compile on Athlon systems o __read_comp_cache(): if a dirty fragment was supposed to be freed, it wasn't actually freed, because we forgot to drop a reference on the fragment. Cleanups o Lots, mainly in adaptivity.c Index: Makefile =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/Makefile,v retrieving revision 1.8 retrieving revision 1.9 diff -C2 -r1.8 -r1.9 *** Makefile 29 May 2002 21:28:54 -0000 1.8 --- Makefile 22 Nov 2002 16:01:36 -0000 1.9 *************** *** 7,11 **** export-objs := swapin.o ! obj-y := main.o vswap.o free.o swapout.o swapin.o adaptivity.o aux.o proc.o WK4x4.o WKdm.o minilzo.o include $(TOPDIR)/Rules.make --- 7,15 ---- export-objs := swapin.o ! obj-y := main.o vswap.o free.o swapout.o swapin.o aux.o proc.o WK4x4.o WKdm.o minilzo.o ! ! ifneq ($(CONFIG_COMP_DIS_ADAPT),y) ! obj-y += adaptivity.o ! endif include $(TOPDIR)/Rules.make Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.40 retrieving revision 1.41 diff -C2 -r1.40 -r1.41 *** adaptivity.c 10 Sep 2002 16:43:20 -0000 1.40 --- adaptivity.c 22 Nov 2002 16:01:36 -0000 1.41 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-09-02 18:43:33 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-11-21 17:28:13 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 17,22 **** extern kmem_cache_t * comp_cachep; static int fragment_failed_alloc = 0, vswap_failed_alloc = 0; ! unsigned long failed_comp_page_allocs = 0; ! int growing_lock = 0; /* semaphore used to avoid two concurrent instances of --- 17,34 ---- extern kmem_cache_t * comp_cachep; static int fragment_failed_alloc = 0, vswap_failed_alloc = 0; ! int growth_lock = 0; ! ! /* clean page hash */ ! kmem_cache_t * clean_page_cachep; ! struct clean_page_data ** clean_page_hash; ! unsigned long clean_page_hash_size; ! unsigned int clean_page_hash_bits; ! ! /* clean page list */ ! LIST_HEAD(clean_page_list); ! unsigned long nr_clean_page_hash = 0; ! ! unsigned long nr_clean_page_hits = 0; ! 
unsigned long old_nr_clean_page_hits = 0; /* semaphore used to avoid two concurrent instances of *************** *** 24,29 **** static struct semaphore vswap_resize_semaphore; - extern void comp_cache_fix_watermarks(int); - void resize_fragment_hash_table(void) { --- 36,39 ---- *************** *** 466,481 **** } - static inline int - comp_cache_needs_to_shrink(void) { - /* obvious condition */ - if (new_num_comp_pages >= num_comp_pages) - return 0; - - if (vswap_num_reserved_entries > new_num_comp_pages) - return 0; - - return 1; - } - static inline void shrink_zone_watermarks(void) --- 476,479 ---- *************** *** 488,510 **** /*** ! * shrink_comp_cache(comp_page, check_further) - given a "comp_page" ! * entry, check if this page does not have fragments and if the ! * compressed cache need to be shrunk. ! * ! * In the case we can use the comp page to shrink the cache, release ! * it to the system, fixing all compressed cache data structures. ! * ! * @check_further: this parameter is used to distinguish between two ! * cases where we might be shrinking the case: user input to sysctl ! * entry or shrinking on demand. In the latter case, we want to simply ! * check the comp_page and free it if possible, we don't want to ! * perform an agressive shrinkage. * * caller must hold comp_cache_lock lock */ static int ! shrink_comp_cache(struct comp_cache_page * comp_page, int check_further) { - struct comp_cache_page * empty_comp_page; int retval = 0; --- 486,499 ---- /*** ! * shrink_comp_cache(comp_page) - given a "comp_page" entry, check if ! * this page does not have fragments, trying to release it to the ! * system in this case. After the page is released, all the compressed ! * cache data structures must be fixed accordingly. * * caller must hold comp_cache_lock lock */ static int ! shrink_comp_cache(struct comp_cache_page * comp_page) { int retval = 0; *************** *** 517,581 **** if (!list_empty(&(comp_page->fragments))) { UnlockPage(comp_page->page); - if (check_further) - goto check_shrink; - goto out; - } - - /* no need to shrink the cache */ - if (!comp_cache_needs_to_shrink()) { - UnlockPage(comp_page->page); goto out; } ! /* we need to shrink and have a empty page, so let's do it */ ! empty_comp_page = comp_page; retval = 1; ! shrink: ! remove_comp_page_from_hash_table(empty_comp_page); ! if (page_count(empty_comp_page->page) != 1) BUG(); ! UnlockPage(empty_comp_page->page); ! __free_pages(empty_comp_page->page, COMP_PAGE_ORDER); ! set_comp_page(empty_comp_page, NULL); ! kmem_cache_free(comp_cachep, (empty_comp_page)); num_comp_pages--; - #if 0 - printk("shrink new %lu real %lu\n", new_num_comp_pages, num_comp_pages); - #endif - - check_shrink: - if (!comp_cache_needs_to_shrink()) { - shrink_zone_watermarks(); - goto out; - } - - if (!fragment_failed_alloc && !vswap_failed_alloc) - goto check_empty_pages; out: shrink_fragment_hash_table(); shrink_vswap(); - out_unlock: spin_unlock(&comp_cache_lock); return retval; - - check_empty_pages: - /* let's look for empty compressed cache entries */ - empty_comp_page = search_comp_page(free_space_hash, PAGE_SIZE); - - if (!empty_comp_page || !empty_comp_page->page) - goto out_unlock; - - lock_page(empty_comp_page->page); - - /* we raced */ - if (!list_empty(&(comp_page->fragments))) { - UnlockPage(empty_comp_page->page); - goto out_unlock; - } - - goto shrink; } --- 506,533 ---- if (!list_empty(&(comp_page->fragments))) { UnlockPage(comp_page->page); goto out; } ! /* we have an empty page, so let's do it */ retval = 1; ! 
remove_comp_page_from_hash_table(comp_page); ! ! if (page_count(comp_page->page) != 1) BUG(); ! UnlockPage(comp_page->page); ! __free_pages(comp_page->page, COMP_PAGE_ORDER); ! set_comp_page(comp_page, NULL); ! kmem_cache_free(comp_cachep, comp_page); num_comp_pages--; + /* only change the zone watermarks if we shrunk the cache */ + shrink_zone_watermarks(); out: shrink_fragment_hash_table(); shrink_vswap(); spin_unlock(&comp_cache_lock); return retval; } *************** *** 589,592 **** --- 541,545 ---- shrink_on_demand(struct comp_cache_page * comp_page) { + /* don't shrink a comp cache that has reached the min size */ if (num_comp_pages == min_num_comp_pages) { UnlockPage(comp_page->page); *************** *** 594,613 **** } ! /* to force the shrink_comp_cache() to grow the cache */ ! new_num_comp_pages = num_comp_pages - 1; ! ! if (shrink_comp_cache(comp_page, 0)) { ! #if 0 ! printk("wow, it has shrunk %d\n", num_comp_pages); ! #endif return 1; - } - - new_num_comp_pages = num_comp_pages; return 0; } - #define comp_cache_needs_to_grow() (new_num_comp_pages > num_comp_pages) - static inline void grow_fragment_hash_table(void) { --- 547,555 ---- } ! if (shrink_comp_cache(comp_page)) return 1; return 0; } static inline void grow_fragment_hash_table(void) { *************** *** 630,682 **** } static int ! grow_comp_cache(int nrpages) { struct comp_cache_page * comp_page; struct page * page; ! int ret = 0; spin_lock(&comp_cache_lock); ! while (comp_cache_needs_to_grow() && nrpages--) { ! page = alloc_pages(GFP_ATOMIC, COMP_PAGE_ORDER); ! ! /* couldn't allocate the page */ ! if (!page) { ! failed_comp_page_allocs++; ! goto out_unlock; ! } ! ! if (!init_comp_page(&comp_page, page)) { ! __free_pages(page, COMP_PAGE_ORDER); ! goto out_unlock; ! } ! ! comp_cache_freeable_space += COMP_PAGE_SIZE; ! comp_cache_free_space += COMP_PAGE_SIZE; ! num_comp_pages++; ! if (num_comp_pages > max_used_num_comp_pages) ! max_used_num_comp_pages = num_comp_pages; ! #if 0 ! printk("grow real %lu\n", num_comp_pages); ! #endif ! } ! ! ret = 1; ! if (!comp_cache_needs_to_grow()) { ! grow_zone_watermarks(); ! goto grow_structures; } ! ! if (!fragment_failed_alloc && !vswap_failed_alloc) goto out_unlock; ! grow_structures: grow_fragment_hash_table(); grow_vswap(); out_unlock: spin_unlock(&comp_cache_lock); ! return ret; } --- 572,617 ---- } + /*** + * grow_comp_cache(void) - try to allocate a compressed cache page + * (may be 1 or 2 memory pages). If it is successful, initialize it, + * adding to the compressed cache. + */ static int ! grow_comp_cache(void) { struct comp_cache_page * comp_page; struct page * page; ! int retval = 0; spin_lock(&comp_cache_lock); ! page = alloc_pages(GFP_ATOMIC, COMP_PAGE_ORDER); ! /* couldn't allocate the page */ ! if (!page) { ! failed_comp_page_allocs++; ! goto out_unlock; } ! ! if (!init_comp_page(&comp_page, page)) { ! __free_pages(page, COMP_PAGE_ORDER); goto out_unlock; + } + + retval = 1; + + comp_cache_freeable_space += COMP_PAGE_SIZE; + comp_cache_free_space += COMP_PAGE_SIZE; + num_comp_pages++; ! if (num_comp_pages > max_used_num_comp_pages) ! max_used_num_comp_pages = num_comp_pages; ! ! grow_zone_watermarks(); grow_fragment_hash_table(); grow_vswap(); out_unlock: spin_unlock(&comp_cache_lock); ! return retval; } *************** *** 690,721 **** grow_on_demand(void) { if (num_comp_pages == max_num_comp_pages) return 0; ! if (growing_lock) return 0; ! /* to force the grow_comp_cache() to grow the cache */ ! new_num_comp_pages = num_comp_pages + 1; ! ! 
if (grow_comp_cache(1)) { ! #if 0 ! printk("wow, it has grown %d\n", num_comp_pages); ! #endif return 1; ! } ! ! new_num_comp_pages = num_comp_pages; return 0; } void compact_comp_cache(void) { ! struct comp_cache_page * comp_page, * previous_comp_page = NULL, * new_comp_page, ** hash_table = free_space_hash; struct comp_cache_fragment * fragment, * new_fragment; ! int i; next_fragment: i = free_space_hash_size - 1; do { --- 625,655 ---- grow_on_demand(void) { + /* don't grow a comp cache that has reached the max size */ if (num_comp_pages == max_num_comp_pages) return 0; ! /* if adaptability policy locked the growth, return */ ! if (growth_lock) return 0; ! if (grow_comp_cache()) return 1; ! return 0; } + #define writeout_one_fragment(gfp_mask) writeout_fragments(gfp_mask, 1, 6) + void compact_comp_cache(void) { ! struct comp_cache_page * comp_page, * previous_comp_page = NULL, * new_comp_page, ** hash_table; struct comp_cache_fragment * fragment, * new_fragment; ! struct list_head * fragment_lh; ! int i, fail; next_fragment: + hash_table = free_space_hash; + i = free_space_hash_size - 1; do { *************** *** 734,740 **** } ! fragment = list_entry(comp_page->fragments.prev, struct comp_cache_fragment, list); search_again: ! new_comp_page = search_comp_page(free_space_hash, fragment->compressed_size); if (new_comp_page && !TryLockPage(new_comp_page->page)) --- 668,692 ---- } ! fragment_lh = comp_page->fragments.prev; ! fail = 0; ! while (1) { ! fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); ! if (fragment_count(fragment) != 1) { ! fail = 1; ! goto next; ! } ! if (!CompFragmentToBeFreed(fragment)) ! break; ! next: ! fragment_lh = fragment_lh->prev; ! if (fragment_lh == &comp_page->fragments) { ! if (fail) ! goto out2_failed; ! UnlockPage(comp_page->page); ! return; ! } ! } search_again: ! new_comp_page = search_comp_page(hash_table, fragment->compressed_size); if (new_comp_page && !TryLockPage(new_comp_page->page)) *************** *** 815,822 **** UnlockPage(new_comp_page->page); goto next_fragment; - //return; writeout: ! writeout_fragments(GFP_KERNEL, 1, 6); return; --- 767,773 ---- UnlockPage(new_comp_page->page); goto next_fragment; writeout: ! 
writeout_one_fragment(GFP_KERNEL); return; *************** *** 829,833 **** UnlockPage(comp_page->page); goto writeout; - } --- 780,783 ---- *************** *** 851,858 **** --- 801,937 ---- } + #ifndef CONFIG_COMP_DIS_CLEAN + void + hit_clean_page(struct page * page) + { + struct clean_page_data * clpage; + + clpage = clean_page_hash[clean_page_hashfn(page->mapping, page->index)]; + + goto inside; + + for (;;) { + clpage = clpage->next_hash; + inside: + if (!clpage) + return; + if (clpage->mapping != page->mapping) + continue; + if (clpage->index == page->index) + break; + } + + /* mark it as hit */ + clpage->mapping = NULL; + nr_clean_page_hits++; + + /* if too many hits, try to store the clean pages */ + if (nr_clean_page_hits * 10 > clean_page_hash_size) { + clean_page_compress_lock = 0; + old_nr_clean_page_hits += nr_clean_page_hits; + nr_clean_page_hits = 0; + } + } + + void + add_clean_page(struct page * page) + { + struct clean_page_data * clpage, **old_clpage; + unsigned long hash_index; + + /* allocate a new structure */ + clpage = ((struct clean_page_data *) kmem_cache_alloc(clean_page_cachep, SLAB_ATOMIC)); + + if (unlikely(!clpage)) + return; + + clpage->mapping = page->mapping; + clpage->index = page->index; + + /* add to hash table...*/ + hash_index = clean_page_hashfn(page->mapping, page->index); + old_clpage = &clean_page_hash[hash_index]; + + if ((clpage->next_hash = *old_clpage)) + (*old_clpage)->pprev_hash = &clpage->next_hash; + + *old_clpage = clpage; + clpage->pprev_hash = old_clpage; + + /* and to the list */ + list_add(&clpage->list, &clean_page_list); + nr_clean_page_hash++; + + if (nr_clean_page_hash > clean_page_hash_size * 2) { + struct clean_page_data *next; + struct clean_page_data **pprev; + + clpage = list_entry(clean_page_list.prev, struct clean_page_data, list); + + /* remove from the list... 
*/ + list_del(clean_page_list.prev); + + if (!clpage->mapping) { + if (old_nr_clean_page_hits) + old_nr_clean_page_hits--; + else + nr_clean_page_hits--; + } + + /* and from the hash table */ + next = clpage->next_hash; + pprev = clpage->pprev_hash; + + if (next) + next->pprev_hash = pprev; + *pprev = next; + clpage->pprev_hash = NULL; + + /* free the old structure */ + kmem_cache_free(clean_page_cachep, clpage); + + nr_clean_page_hash--; + } + + if (num_clean_fragments * 10 > num_fragments * 3) + compact_comp_cache(); + } + #endif + void __init comp_cache_adaptivity_init(void) { + unsigned int order; + init_MUTEX(&vswap_resize_semaphore); + + #ifndef CONFIG_COMP_DIS_CLEAN + /* clean pages hash table */ + clean_page_hash_size = comp_page_to_page(max_num_comp_pages)/7; + + for (order = 0; (PAGE_SIZE << order) < clean_page_hash_size; order++); + + do { + unsigned long tmp = (PAGE_SIZE << order)/sizeof(struct clean_page_data *); + + clean_page_hash_bits = 0; + while((tmp >>= 1UL) != 0UL) + clean_page_hash_bits++; + + clean_page_hash = (struct clean_page_data **) __get_free_pages(GFP_ATOMIC, order); + } while(clean_page_hash == NULL && --order > 0); + + clean_page_hash_size = 1 << clean_page_hash_bits; + + if (!clean_page_hash) + panic("comp_cache_adaptivity_init(): couldn't allocate clean page hash table\n"); + + memset((void *) clean_page_hash, 0, clean_page_hash_size * sizeof(struct clean_page_data *)); + + clean_page_cachep = kmem_cache_create("comp_cache_clean", sizeof(struct clean_page_data), 0, SLAB_HWCACHE_ALIGN, NULL, NULL); + + printk("Compressed Cache: adaptivity\n" + "Compressed Cache: clean page (%lu entries = %luB)\n", clean_page_hash_size, PAGE_SIZE << order); + #endif } Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.43 retrieving revision 1.44 diff -C2 -r1.43 -r1.44 *** aux.c 10 Sep 2002 16:43:20 -0000 1.43 --- aux.c 22 Nov 2002 16:01:36 -0000 1.44 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-09-02 18:43:50 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-10-28 21:13:03 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 373,376 **** --- 373,378 ---- struct comp_cache_page * + FASTCALL(search_comp_page(struct comp_cache_page ** hash_table, int free_space)); + struct comp_cache_page * search_comp_page(struct comp_cache_page ** hash_table, int free_space) { struct comp_cache_page * comp_page; *************** *** 379,412 **** idx = free_space_hashfn(free_space); - if (idx == free_space_hash_size - 1) - goto check_exact_size; - /* first of all let's try to get at once a comp page whose * free space is surely bigger than what need */ ! i = idx + 1; ! do { ! comp_page = hash_table[i++]; ! } while(i < free_space_hash_size && !comp_page); ! ! /* couldn't find a page? let's check the pages whose free ! * space is linked in our hash key entry */ ! if (!comp_page) ! goto check_exact_size; - return comp_page; - - check_exact_size: comp_page = hash_table[idx]; ! if (hash_table == free_space_hash) { ! while (comp_page && comp_page->free_space < free_space) ! comp_page = comp_page->next_hash_fs; ! } ! else { ! while (comp_page && comp_page->total_free_space < free_space) ! comp_page = comp_page->next_hash_tfs; ! } ! ! 
return comp_page; } --- 381,413 ---- idx = free_space_hashfn(free_space); /* first of all let's try to get at once a comp page whose * free space is surely bigger than what need */ ! for (i = idx + 1; i < free_space_hash_size; i++) { ! if (hash_table[i]) ! return hash_table[i]; ! } comp_page = hash_table[idx]; + if (hash_table == free_space_hash) + goto inside_fs; + goto inside_tfs; ! for (;;) { ! comp_page = comp_page->next_hash_fs; ! inside_fs: ! if (!comp_page) ! return NULL; ! if (comp_page->free_space >= free_space) ! return comp_page; ! } ! ! for (;;) { ! comp_page = comp_page->next_hash_tfs; ! inside_tfs: ! if (!comp_page) ! return NULL; ! if (comp_page->total_free_space >= free_space) ! return comp_page; ! } } *************** *** 622,626 **** /* inits comp cache free space hash table */ free_space_interval = 100 * (COMP_PAGE_ORDER + 1); ! free_space_hash_size = (int) (PAGE_SIZE/100) + 2; free_space_hash = (struct comp_cache_page **) kmalloc(free_space_hash_size * sizeof(struct comp_cache_page *), GFP_ATOMIC); --- 623,627 ---- /* inits comp cache free space hash table */ free_space_interval = 100 * (COMP_PAGE_ORDER + 1); ! free_space_hash_size = (int) (COMP_PAGE_SIZE/free_space_interval) + 2; free_space_hash = (struct comp_cache_page **) kmalloc(free_space_hash_size * sizeof(struct comp_cache_page *), GFP_ATOMIC); *************** *** 635,639 **** /* inits comp cache total free space hash table */ total_free_space_interval = 100 * (COMP_PAGE_ORDER + 1); ! total_free_space_hash_size = (int) (PAGE_SIZE/100) + 2; total_free_space_hash = (struct comp_cache_page **) kmalloc(total_free_space_hash_size * sizeof(struct comp_cache_page *), GFP_ATOMIC); --- 636,640 ---- /* inits comp cache total free space hash table */ total_free_space_interval = 100 * (COMP_PAGE_ORDER + 1); ! total_free_space_hash_size = (int) (COMP_PAGE_SIZE/free_space_interval) + 2; total_free_space_hash = (struct comp_cache_page **) kmalloc(total_free_space_hash_size * sizeof(struct comp_cache_page *), GFP_ATOMIC); Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.47 retrieving revision 1.48 diff -C2 -r1.47 -r1.48 *** free.c 10 Sep 2002 16:43:21 -0000 1.47 --- free.c 22 Nov 2002 16:01:37 -0000 1.48 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-08-21 17:57:52 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-10-25 11:26:26 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 248,255 **** comp_cache_free_locked(fragment); ! /* steal the page if we need to shrink the cache. The page ! * will be unlocked in shrink_comp_cache() (even if shrinking ! * on demand, shrink_on_demand() will call it anyway) */ shrink_on_demand(comp_page); } --- 248,261 ---- comp_cache_free_locked(fragment); ! #ifdef CONFIG_COMP_DIS_ADAPT ! UnlockPage(comp_page->page); ! #else ! /* *** adaptability policy *** ! * ! * Release the page to the system if it doesn't have another ! * fragments after the above fragment got just freed. ! 
*/ shrink_on_demand(comp_page); + #endif } Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.65 retrieving revision 1.66 diff -C2 -r1.65 -r1.66 *** main.c 10 Sep 2002 20:19:06 -0000 1.65 --- main.c 22 Nov 2002 16:01:37 -0000 1.66 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-09-10 17:03:33 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-10-25 08:54:11 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 24,33 **** unsigned long num_active_fragments = 0; unsigned long num_clean_fragments = 0; ! unsigned long init_num_comp_pages = 0; ! ! unsigned long new_num_comp_pages = 0; unsigned long max_num_comp_pages = 0; unsigned long min_num_comp_pages = 0; unsigned long max_used_num_comp_pages = 0; --- 24,34 ---- unsigned long num_active_fragments = 0; unsigned long num_clean_fragments = 0; + unsigned long failed_comp_page_allocs = 0; ! /* maximum number of pages that the compressed cache can use */ unsigned long max_num_comp_pages = 0; + /* minimum number of pages that the compressed cache can use */ unsigned long min_num_comp_pages = 0; + /* maximum number of pages ever used by the compressed cache */ unsigned long max_used_num_comp_pages = 0; *************** *** 112,115 **** --- 113,117 ---- copy_page: + /* if the page is already compressed, we just copy it */ if (PageCompressed(page)) { memcpy(page_address(comp_page->page) + fragment->offset, page_address(page) + comp_offset, comp_size); *************** *** 219,224 **** } - extern void __init comp_cache_init_fix_watermarks(int num_comp_pages); - void __init comp_cache_init(void) --- 221,224 ---- *************** *** 228,245 **** int i; ! max_used_num_comp_pages = init_num_comp_pages = min_num_comp_pages = page_to_comp_page(48); if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > num_physpages * 0.5) max_num_comp_pages = page_to_comp_page((unsigned long) (num_physpages * 0.5)); ! new_num_comp_pages = num_comp_pages = init_num_comp_pages; ! ! printk("Compressed Cache: %s\n", COMP_CACHE_VERSION); ! printk("Compressed Cache: maximum size\n" ! "Compressed Cache: %lu pages = %luKiB\n", max_num_comp_pages, (max_num_comp_pages * COMP_PAGE_SIZE) >> 10); /* fiz zone watermarks */ ! comp_cache_init_fix_watermarks(init_num_comp_pages); /* create slab caches */ --- 228,256 ---- int i; ! printk("Compressed Cache: %s\n", COMP_CACHE_VERSION); + #ifdef CONFIG_COMP_DIS_ADAPT + /* static compressed cache */ + min_num_comp_pages = page_to_comp_page(48); + if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > num_physpages * 0.5) max_num_comp_pages = page_to_comp_page((unsigned long) (num_physpages * 0.5)); ! max_used_num_comp_pages = num_comp_pages = max_num_comp_pages; ! printk("Compressed Cache: static size\n"); ! #else ! /* adaptive compressed cache */ ! max_used_num_comp_pages = min_num_comp_pages = num_comp_pages = page_to_comp_page(48); ! ! if (!max_num_comp_pages || max_num_comp_pages < min_num_comp_pages || max_num_comp_pages > num_physpages * 0.5) ! max_num_comp_pages = page_to_comp_page((unsigned long) (num_physpages * 0.5)); ! ! printk("Compressed Cache: maximum size\n"); ! #endif ! printk("Compressed Cache: %lu pages = %luKiB\n", max_num_comp_pages, (max_num_comp_pages * COMP_PAGE_SIZE) >> 10); /* fiz zone watermarks */ ! 
comp_cache_fix_watermarks(num_comp_pages); /* create slab caches */ *************** *** 266,270 **** --- 277,283 ---- comp_cache_algorithms_init(); + #ifndef CONFIG_COMP_DIS_ADAPT comp_cache_adaptivity_init(); + #endif } Index: minilzo.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/minilzo.c,v retrieving revision 1.3 retrieving revision 1.4 diff -C2 -r1.3 -r1.4 *** minilzo.c 10 Sep 2002 20:19:06 -0000 1.3 --- minilzo.c 22 Nov 2002 16:01:40 -0000 1.4 *************** *** 54,61 **** #endif ! #if !defined(LZO_NO_SYS_TYPES_H) ! # include <linux/types.h> ! #endif ! //#include <stdio.h> #ifndef __LZO_CONF_H --- 54,61 ---- #endif ! /* #if !defined(LZO_NO_SYS_TYPES_H) */ ! /* # include <linux/types.h> */ ! /* #endif */ ! /* #include <stdio.h> */ #ifndef __LZO_CONF_H *************** *** 76,80 **** #if !defined(LZO_HAVE_CONFIG_H) ! # include <linux/stddef.h> # include <linux/string.h> # define HAVE_MEMCMP --- 76,80 ---- #if !defined(LZO_HAVE_CONFIG_H) ! # include <stddef.h> # include <linux/string.h> # define HAVE_MEMCMP *************** *** 324,328 **** #if defined(__LZO_DOS16) || defined(__LZO_WIN16) //# include <dos.h> ! # if 1 && defined(__WATCOMC__) //# include <i86.h> __LZO_EXTERN_C unsigned char _HShift; --- 324,328 ---- #if defined(__LZO_DOS16) || defined(__LZO_WIN16) //# include <dos.h> ! #if 1 && defined(__WATCOMC__) //# include <i86.h> __LZO_EXTERN_C unsigned char _HShift; Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.28 retrieving revision 1.29 diff -C2 -r1.28 -r1.29 *** proc.c 12 Sep 2002 15:11:31 -0000 1.28 --- proc.c 22 Nov 2002 16:01:41 -0000 1.29 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-09-12 11:42:20 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-10-21 16:26:52 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 54,58 **** static int algorithm_min = WKDM_IDX; static int algorithm_max = LZO_IDX; ! static int algorithm_idx = 0; struct stats_summary * stats = &compression_algorithm.stats; --- 54,58 ---- static int algorithm_min = WKDM_IDX; static int algorithm_max = LZO_IDX; ! static int algorithm_idx = -1; struct stats_summary * stats = &compression_algorithm.stats; *************** *** 61,65 **** static spinlock_t comp_data_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; ! int clean_page_compress_lock = 1; inline void --- 61,65 ---- static spinlock_t comp_data_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; ! int clean_page_compress_lock = 0; inline void *************** *** 153,157 **** #if 0 ! if (state == CLEAN_PAGE && clean_page_compress_lock) { comp_size = PAGE_SIZE; comp_cache_update_comp_stats(comp_size, page); --- 153,157 ---- #if 0 ! if (state == CLEAN_PAGE && clean_page_compress_lock) { // && (num_clean_fragments * 5 < num_fragments)) { comp_size = PAGE_SIZE; comp_cache_update_comp_stats(comp_size, page); *************** *** 261,265 **** comp_cache_algorithms_init(void) { ! if (!algorithm_idx || algorithm_idx < algorithm_min || algorithm_idx > algorithm_max) algorithm_idx = LZO_IDX; --- 261,265 ---- comp_cache_algorithms_init(void) { ! 
if (algorithm_idx == -1 || algorithm_idx < algorithm_min || algorithm_idx > algorithm_max) algorithm_idx = LZO_IDX; *************** *** 406,410 **** total1 = free_space_count(0, array_num_fragments); - length += sprintf(page + length, "total %lu act %lu pages %lu\n", num_fragments, num_active_fragments, num_comp_pages << COMP_PAGE_ORDER); length += sprintf(page + length, " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", --- 406,409 ---- Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.54 retrieving revision 1.55 diff -C2 -r1.54 -r1.55 *** swapin.c 10 Sep 2002 16:43:24 -0000 1.54 --- swapin.c 22 Nov 2002 16:01:42 -0000 1.55 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-09-10 10:36:42 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-11-21 15:23:30 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 18,25 **** #include <asm/uaccess.h> ! #define ACTIVE_FRAGMENT 1 #define INACTIVE_FRAGMENT 0 ! int last_accessed = 0, last_state_accessed = 0; int --- 18,25 ---- #include <asm/uaccess.h> ! #define ACTIVE_FRAGMENT 1 #define INACTIVE_FRAGMENT 0 ! int last_state_accessed = 0; int *************** *** 60,63 **** --- 60,64 ---- if (CompFragmentTestandClearDirty(fragment)) { + num_clean_fragments++; list_del(&fragment->mapping_list); list_add(&fragment->mapping_list, &fragment->mapping->clean_comp_pages); *************** *** 80,85 **** __read_comp_cache(struct address_space *mapping, unsigned long offset, struct page * page, int state) { struct comp_cache_fragment * fragment; ! int err, ratio; if (!PageLocked(page)) --- 81,87 ---- __read_comp_cache(struct address_space *mapping, unsigned long offset, struct page * page, int state) { + static int adapt_ratio = 2, last_accessed = 0; struct comp_cache_fragment * fragment; ! int err; if (!PageLocked(page)) *************** *** 99,183 **** get_fragment(fragment); ! #if 0 ! if (CompFragmentDirty(fragment)) { ! //if (last_state_accessed > 0) ! // last_state_accessed = -1; ! //else ! last_state_accessed--; ! ratio = -3; //-(((num_fragments - num_clean_fragments) * 4)/num_fragments?:0); ! if (last_state_accessed < ratio) { ! clean_page_compress_lock = 1; ! last_state_accessed = 0; ! } ! goto test_active; ! } ! ! //if (last_state_accessed < 0) ! // last_state_accessed = 1; ! //else ! last_state_accessed++; ! ratio = 3; //((num_clean_fragments * 4)/num_fragments?:0); ! if (last_state_accessed > ratio) { ! clean_page_compress_lock = 0; ! last_state_accessed = 0; ! } ! ! test_active: ! #endif ! ! #if 0 ! if (!CompFragmentDirty(fragment)) { last_state_accessed++; - ratio = 3; //((num_clean_fragments * 4)/num_fragments?:0); - if (last_state_accessed > ratio) { - clean_page_compress_lock = 0; - last_state_accessed = 0; - } #endif - if (CompFragmentActive(fragment)) {// || !CompFragmentDirty(fragment)) { - if (last_accessed == ACTIVE_FRAGMENT) { #if 0 ! /* -- VERSÃO 3 -- */ ! if (growing_lock) { ! compact_comp_cache(); ! //writeout_fragments(GFP_KERNEL, 1, SHRINKAGE_PRIORITY); ! last_accessed = INACTIVE_FRAGMENT; ! goto read; ! } ! growing_lock = 1; ! goto read; #endif ! #if 1 ! /* -- VERSÃO 2 -- */ ! if (growing_lock) { compact_comp_cache(); ! //writeout_fragments(GFP_KERNEL, 1, SHRINKAGE_PRIORITY); ! growing_lock = 0; last_accessed = INACTIVE_FRAGMENT; goto read; } ! growing_lock = 1; ! goto read; ! #endif ! ! #if 0 ! 
/* -- VERSÂO 1 -- */ ! writeout_fragments(GFP_KERNEL, 1, SHRINKAGE_PRIORITY); ! growing_lock = 1; ! last_accessed = INACTIVE_FRAGMENT; goto read; - #endif } last_accessed = ACTIVE_FRAGMENT; - goto read; - } - - /* inactive fragment */ - growing_lock = 0; - last_accessed = INACTIVE_FRAGMENT; read: /* If only dirty fragmenst should be returned (when reading * the page for writing it), free the fragment and return. A --- 101,170 ---- get_fragment(fragment); ! #ifndef CONFIG_COMP_DIS_CLEAN ! /* *** clean fragment policy *** ! * ! * All clean fragments read must account as +1 to ! * last_state_accessed variable. These fragments are only ! * accounted when we are not compressing clean pages ! * (clean_page_compress_lock == 1). ! */ ! if (!CompFragmentDirty(fragment) && !clean_page_compress_lock) last_state_accessed++; #endif #if 0 ! #ifndef CONFIG_COMP_DIS_ADAPT ! /* -- version 4 -- */ ! if (CompFragmentActive(fragment)) { ! /* fragments from compcache active list */ ! last_accessed++; ! if (last_accessed >= adapt_ratio) ! growth_lock = 1; ! if (last_accessed >= 2 * adapt_ratio) { ! compact_comp_cache(); ! ! growth_lock = 0; ! last_accessed = 0; ! } ! } ! else { ! /* fragments from compcache inactive list */ ! last_accessed--; ! if (last_accessed <= (-1 * adapt_ratio)) { ! growth_lock = 0; ! last_accessed = 0; ! } ! } #endif ! #endif ! #if 1 ! #ifndef CONFIG_COMP_DIS_ADAPT ! /* -- version 2 -- */ ! if (CompFragmentActive(fragment)) { ! /* fragments from compcache active list */ ! if (last_accessed == ACTIVE_FRAGMENT) { ! if (growth_lock) { compact_comp_cache(); ! growth_lock = 0; last_accessed = INACTIVE_FRAGMENT; goto read; } ! growth_lock = 1; goto read; } last_accessed = ACTIVE_FRAGMENT; + /* Ver alair1 */ + /* compact_comp_cache(); */ + } + else { + /* fragments from compcache inactive list */ + growth_lock = 0; + last_accessed = INACTIVE_FRAGMENT; + } read: + #endif + #endif /* If only dirty fragmenst should be returned (when reading * the page for writing it), free the fragment and return. A *************** *** 185,188 **** --- 172,176 ---- * is no point decompressing a clean fragment. */ if (CompFragmentDirty(fragment) && state == DIRTY_PAGE) { + put_fragment(fragment); drop_fragment(fragment); goto out_unlock; *************** *** 197,202 **** spin_lock(&comp_cache_lock); ! if (CompFragmentTestandClearDirty(fragment)) ! __set_page_dirty(page); UnlockPage(fragment->comp_page->page); --- 185,192 ---- spin_lock(&comp_cache_lock); ! if (CompFragmentTestandClearDirty(fragment)) { ! num_clean_fragments++; ! __set_page_dirty(page); ! } UnlockPage(fragment->comp_page->page); Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.72 retrieving revision 1.73 diff -C2 -r1.72 -r1.73 *** swapout.c 12 Sep 2002 15:11:31 -0000 1.72 --- swapout.c 22 Nov 2002 16:01:45 -0000 1.73 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-09-12 11:42:33 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-10-25 11:26:59 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 361,372 **** maxscan = max((int) ((num_fragments - num_active_fragments)/priority), (int) (nrpages * 2)); ! if (!list_empty(&inactive_lru_queue)) ! goto scan; ! ! active_list: ! list = &active_lru_queue; ! 
maxscan = max((int) (num_active_fragments/priority), (int) (nrpages * 2)); - scan: while (!list_empty(list) && maxscan--) { fragment = list_entry(fragment_lh = list->prev, struct comp_cache_fragment, lru_queue); --- 361,369 ---- maxscan = max((int) ((num_fragments - num_active_fragments)/priority), (int) (nrpages * 2)); ! if (list_empty(&inactive_lru_queue)) { ! list = &active_lru_queue; ! maxscan = max((int) (num_active_fragments/priority), (int) (nrpages * 2)); ! } while (!list_empty(list) && maxscan--) { fragment = list_entry(fragment_lh = list->prev, struct comp_cache_fragment, lru_queue); *************** *** 381,393 **** /* clean page, let's free it */ if (!CompFragmentDirty(fragment)) { ! #if 0 ! //if (last_state_accessed > 0) ! //last_state_accessed = -1; ! //else ! last_state_accessed--; ! ratio = -2; //(((num_fragments - num_clean_fragments) * 6)/num_fragments?:0); ! if (last_state_accessed < ratio) { ! clean_page_compress_lock = 1; ! last_state_accessed = 0; } #endif --- 378,399 ---- /* clean page, let's free it */ if (!CompFragmentDirty(fragment)) { ! #ifndef CONFIG_COMP_DIS_CLEAN ! /* *** clean fragment policy *** ! * ! * All clean fragments to be freed accounts as ! * -1 to last_state_accessed variable. These ! * fragments are only accounted while we are ! * compressing clean pages ! * (clean_page_compress_lock == 1). ! */ ! if (!clean_page_compress_lock) { ! last_state_accessed--; ! ratio = -((num_clean_fragments * 40)/num_fragments); ! if (ratio > -5) ! ratio = -5; ! if (last_state_accessed < ratio) { ! clean_page_compress_lock = 1; ! last_state_accessed = 0; ! } } #endif *************** *** 399,416 **** goto try_again; } ! ! #if 0 ! //if (last_state_accessed < 0) ! //last_state_accessed = 1; ! //else ! last_state_accessed++; ! ratio = 2; //((num_clean_fragments * 6)/num_fragments?:0); ! if (last_state_accessed > ratio) { ! clean_page_compress_lock = 0; ! last_state_accessed = 0; ! } ! #endif ! ! /* we can't perform IO, so we can't go on */ if (!(gfp_mask & __GFP_FS)) --- 405,409 ---- goto try_again; } ! /* we can't perform IO, so we can't go on */ if (!(gfp_mask & __GFP_FS)) *************** *** 444,447 **** --- 437,441 ---- CompFragmentClearDirty(fragment); + num_clean_fragments++; writepage = fragment->mapping->a_ops->writepage; *************** *** 480,490 **** } - #if 0 - if (nrpages) { - if (list == &inactive_lru_queue && (num_active_fragments * 4 > num_fragments * 3)) - goto active_list; - } - #endif - return (!nrpages); } --- 474,477 ---- *************** *** 524,528 **** --- 511,519 ---- page_cache_get(page); + #ifndef CONFIG_COMP_DIS_ADAPT maxtry = 5; + #else + maxtry = 4; + #endif hash_table = free_space_hash; *************** *** 591,601 **** hash_table = free_space_hash; ! /*** ! * We couldn't find a comp page with enough free ! * space, so let's first check if we are supposed and ! * are able to grow the compressed cache on demand */ if (grow_on_demand()) continue; if (!writeout_fragments(gfp_mask, SWAP_CLUSTER_MAX, priority)) --- 582,595 ---- hash_table = free_space_hash; ! #ifndef CONFIG_COMP_DIS_ADAPT ! /* *** adaptability policy *** ! * ! * We couldn't find a comp page with enough free space ! * to store the new fragment. Let's then check if we ! * are able to grow the compressed cache on demand. */ if (grow_on_demand()) continue; + #endif if (!writeout_fragments(gfp_mask, SWAP_CLUSTER_MAX, priority)) |
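The grow/shrink heuristic this changeset settles on ("version 2" in the swapin.c hunk above) is compact enough to restate on its own. The sketch below is only an illustration: locking and the fragment flag helpers are elided, and on_fragment_read() is a hypothetical wrapper for the code that runs when a page is found in the compressed cache.

    #define ACTIVE_FRAGMENT   1
    #define INACTIVE_FRAGMENT 0

    extern void compact_comp_cache(void);  /* repacks fragments, frees emptied comp pages */

    static int growth_lock = 0;            /* blocks grow_on_demand() */
    static int last_accessed = INACTIVE_FRAGMENT;

    static void on_fragment_read(int fragment_is_active)
    {
            if (!fragment_is_active) {
                    /* Hit in the inactive list: the compressed cache is
                     * holding pages vanilla would already have evicted,
                     * so it is paying off; allow growing on demand. */
                    growth_lock = 0;
                    last_accessed = INACTIVE_FRAGMENT;
                    return;
            }
            /* Hit in the active list: this page would have stayed
             * uncompressed in RAM without a compressed cache. */
            if (last_accessed == ACTIVE_FRAGMENT) {
                    if (growth_lock) {
                            /* third consecutive active hit: shrink */
                            compact_comp_cache();
                            growth_lock = 0;
                            last_accessed = INACTIVE_FRAGMENT;
                            return;
                    }
                    /* second consecutive active hit: stop growing */
                    growth_lock = 1;
                    return;
            }
            last_accessed = ACTIVE_FRAGMENT;
    }

Two consecutive hits on active fragments veto further growth, a third compacts and shrinks the cache, and a single hit on an inactive fragment resets the state, so shrinking is deliberately harder to trigger than growing.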
From: Rodrigo S. de C. <rc...@us...> - 2002-09-12 15:11:34
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv4035/mm/comp_cache Modified Files: proc.c swapout.c Log Message: Cleanup: o Remove a warning due to an unused variable Other: o Make LZO the default compression algorithm Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.27 retrieving revision 1.28 diff -C2 -r1.27 -r1.28 *** proc.c 10 Sep 2002 20:19:06 -0000 1.27 --- proc.c 12 Sep 2002 15:11:31 -0000 1.28 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-09-10 16:57:22 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-09-12 11:42:20 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 262,266 **** { if (!algorithm_idx || algorithm_idx < algorithm_min || algorithm_idx > algorithm_max) ! algorithm_idx = WKDM_IDX; /* data structure for compression algorithms */ --- 262,266 ---- { if (!algorithm_idx || algorithm_idx < algorithm_min || algorithm_idx > algorithm_max) ! algorithm_idx = LZO_IDX; /* data structure for compression algorithms */ Index: swapout.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v retrieving revision 1.71 retrieving revision 1.72 diff -C2 -r1.71 -r1.72 *** swapout.c 10 Sep 2002 16:43:25 -0000 1.71 --- swapout.c 12 Sep 2002 15:11:31 -0000 1.72 *************** *** 2,6 **** * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-09-10 10:37:03 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * /mm/comp_cache/swapout.c * ! * Time-stamp: <2002-09-12 11:42:33 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 315,319 **** struct page * buffer_page; struct swp_buffer * swp_buffer; - swp_entry_t entry; swp_buffer = find_free_swp_buffer(fragment, gfp_mask); --- 315,318 ---- |
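A note on the sentinel this one-liner relies on. The index constants are defined in headers not shown here, so the values below are assumptions; but if WKDM_IDX is 0, the !algorithm_idx test cannot distinguish "no compalg= given" from an explicit WKdm selection, which is presumably why the November diff earlier on this page switches the sentinel to -1. A sketch of the corrected selection:

    enum { WKDM_IDX = 0, WK4X4_IDX = 1, LZO_IDX = 2 };  /* assumed values */

    static int algorithm_idx = -1;  /* -1 = no compalg= on the command line */

    static void pick_default_algorithm(void)
    {
            /* anything out of range, including the -1 sentinel,
             * falls back to LZO, the default as of this commit */
            if (algorithm_idx < WKDM_IDX || algorithm_idx > LZO_IDX)
                    algorithm_idx = LZO_IDX;
    }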
From: Rodrigo S. de C. <rc...@us...> - 2002-09-10 20:19:09
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv23073/mm/comp_cache Modified Files: main.c minilzo.c proc.c Log Message: Bug fixes: o /proc/comp_cache_stat showed the wrong comp page size. Fixed. o Fixed LZO compilation problems when porting the code to 2.4.19. Some cleanups. Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.64 retrieving revision 1.65 diff -C2 -r1.64 -r1.65 *** main.c 10 Sep 2002 16:43:22 -0000 1.64 --- main.c 10 Sep 2002 20:19:06 -0000 1.65 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-09-04 16:06:25 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-09-10 17:03:33 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 238,242 **** printk("Compressed Cache: maximum size\n" "Compressed Cache: %lu pages = %luKiB\n", ! max_num_comp_pages, (max_num_comp_pages * COMP_PAGE_SIZE)/1024); /* fiz zone watermarks */ --- 238,242 ---- printk("Compressed Cache: maximum size\n" "Compressed Cache: %lu pages = %luKiB\n", ! max_num_comp_pages, (max_num_comp_pages * COMP_PAGE_SIZE) >> 10); /* fiz zone watermarks */ Index: minilzo.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/minilzo.c,v retrieving revision 1.2 retrieving revision 1.3 diff -C2 -r1.2 -r1.3 *** minilzo.c 1 Jul 2002 18:16:59 -0000 1.2 --- minilzo.c 10 Sep 2002 20:19:06 -0000 1.3 *************** *** 37,46 **** #define __LZO_IN_MINILZO ! #ifdef MINILZO_HAVE_CONFIG_H ! # include <config.h> ! #endif #undef LZO_HAVE_CONFIG_H #include <linux/minilzo.h> #if !defined(MINILZO_VERSION) || (MINILZO_VERSION != 0x1070) --- 37,48 ---- #define __LZO_IN_MINILZO ! //#ifdef MINILZO_HAVE_CONFIG_H ! //# include <config.h> ! //#endif #undef LZO_HAVE_CONFIG_H #include <linux/minilzo.h> + #include <linux/compiler.h> + #include <asm/page.h> #if !defined(MINILZO_VERSION) || (MINILZO_VERSION != 0x1070) *************** *** 55,59 **** # include <linux/types.h> #endif ! #include <stdio.h> #ifndef __LZO_CONF_H --- 57,61 ---- # include <linux/types.h> #endif ! //#include <stdio.h> #ifndef __LZO_CONF_H *************** *** 67,71 **** #if defined(__BOUNDS_CHECKING_ON) ! # include <unchecked.h> #else # define BOUNDS_CHECKING_OFF_DURING(stmt) stmt --- 69,73 ---- #if defined(__BOUNDS_CHECKING_ON) ! //# include <unchecked.h> #else # define BOUNDS_CHECKING_OFF_DURING(stmt) stmt *************** *** 74,78 **** #if !defined(LZO_HAVE_CONFIG_H) ! # include <stddef.h> # include <linux/string.h> # define HAVE_MEMCMP --- 76,80 ---- #if !defined(LZO_HAVE_CONFIG_H) ! # include <linux/stddef.h> # include <linux/string.h> # define HAVE_MEMCMP *************** *** 84,94 **** # if defined(STDC_HEADERS) # include <linux/string.h> ! # include <stdlib.h> # endif # if defined(HAVE_STDDEF_H) ! # include <stddef.h> # endif # if defined(HAVE_MEMORY_H) ! # include <memory.h> # endif #endif --- 86,96 ---- # if defined(STDC_HEADERS) # include <linux/string.h> ! //# include <stdlib.h> # endif # if defined(HAVE_STDDEF_H) ! # include <linux/stddef.h> # endif # if defined(HAVE_MEMORY_H) ! //# include <memory.h> # endif #endif *************** *** 105,112 **** #if defined(LZO_DEBUG) || !defined(NDEBUG) # if !defined(NO_STDIO_H) ! # include <stdio.h> # endif #endif ! 
#include <assert.h> #if !defined(LZO_UNUSED) --- 107,116 ---- #if defined(LZO_DEBUG) || !defined(NDEBUG) # if !defined(NO_STDIO_H) ! //# include <stdio.h> # endif #endif ! //#include <assert.h> ! ! #define assert(condition) do { if (unlikely(!(condition))) BUG(); } while(0) #if !defined(LZO_UNUSED) *************** *** 319,325 **** #if defined(__LZO_DOS16) || defined(__LZO_WIN16) ! # include <dos.h> # if 1 && defined(__WATCOMC__) ! # include <i86.h> __LZO_EXTERN_C unsigned char _HShift; # define __LZO_HShift _HShift --- 323,329 ---- #if defined(__LZO_DOS16) || defined(__LZO_WIN16) ! //# include <dos.h> # if 1 && defined(__WATCOMC__) ! //# include <i86.h> __LZO_EXTERN_C unsigned char _HShift; # define __LZO_HShift _HShift *************** *** 869,873 **** } ! #include <stdio.h> #if 0 --- 873,877 ---- } ! //#include <stdio.h> #if 0 *************** *** 1305,1309 **** #if !defined(__LZO_IN_MINILZO) ! #include <lzo1x.h> #endif --- 1309,1313 ---- #if !defined(__LZO_IN_MINILZO) ! //#include <lzo1x.h> #endif Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.26 retrieving revision 1.27 diff -C2 -r1.26 -r1.27 *** proc.c 10 Sep 2002 16:43:23 -0000 1.26 --- proc.c 10 Sep 2002 20:19:06 -0000 1.27 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-09-10 13:27:33 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-09-10 16:57:22 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 500,504 **** " - failed allocations: %6lu\n", max_used_num_comp_pages << (COMP_PAGE_ORDER + PAGE_SHIFT - 10), ! COMP_PAGE_SIZE, failed_comp_page_allocs); --- 500,504 ---- " - failed allocations: %6lu\n", max_used_num_comp_pages << (COMP_PAGE_ORDER + PAGE_SHIFT - 10), ! COMP_PAGE_SIZE >> 10, failed_comp_page_allocs); |
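The assert() replacement in the minilzo.c hunk above is a reusable idiom when porting userspace code into the kernel, where <assert.h> is unavailable. Restated on its own, with the same two includes the patch adds (on 2.4, BUG() is pulled in via <asm/page.h>):

    #include <linux/compiler.h>     /* unlikely() */
    #include <asm/page.h>           /* BUG() on 2.4 kernels */

    /* A failed assertion oopses the kernel instead of calling abort(). */
    #undef assert
    #define assert(condition) \
            do { if (unlikely(!(condition))) BUG(); } while (0)

The do { ... } while (0) wrapper keeps the macro usable as a single statement in if/else bodies, and unlikely() moves the failure path off the hot path.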
From: Rodrigo S. de C. <rc...@us...> - 2002-09-10 20:19:09
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv23073/include/linux Modified Files: lzoconf.h Log Message: Bug fixes: o /proc/comp_cache_stat showed the wrong comp page size. Fixed. o Fixed LZO compilation problems when porting the code to 2.4.19. Some cleanups. Index: lzoconf.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/lzoconf.h,v retrieving revision 1.4 retrieving revision 1.5 diff -C2 -r1.4 -r1.5 *** lzoconf.h 29 May 2002 21:28:54 -0000 1.4 --- lzoconf.h 10 Sep 2002 20:19:06 -0000 1.5 *************** *** 41,45 **** # include <config.h> #endif ! #include <limits.h> #ifdef __cplusplus --- 41,60 ---- # include <config.h> #endif ! //#include <limits.h> ! #include <linux/kernel.h> ! ! #define CHAR_BIT 8 ! ! #undef UCHAR_MAX ! #define UCHAR_MAX 255 ! ! /* For the sake of 16 bit hosts, we may not use -32768 */ ! #define SHRT_MIN (-32767-1) ! #undef SHRT_MAX ! #define SHRT_MAX 32767 ! ! /* Maximum value an `unsigned short int' can hold. (Minimum is 0). */ ! #undef USHRT_MAX ! #define USHRT_MAX 65535 #ifdef __cplusplus
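The lzoconf.h hunk above replaces <limits.h>, which cannot be included from kernel code, with hand-rolled constants. The values hold on every Linux target (8-bit char, 16-bit short), which a throwaway userspace program can confirm; this check is purely illustrative and not part of the patch:

    #include <assert.h>
    #include <limits.h>

    int main(void)
    {
            /* the values hard-coded in lzoconf.h */
            assert(CHAR_BIT == 8);
            assert(UCHAR_MAX == 255);
            assert(SHRT_MIN == -32767-1 && SHRT_MAX == 32767);
            assert(USHRT_MAX == 65535);
            return 0;
    }

The (-32767-1) spelling, kept from glibc, avoids the literal -32768, which C90 parses as unary minus applied to a constant that does not fit in a 16-bit int.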
From: Rodrigo S. de C. <rc...@us...> - 2002-09-10 18:21:03
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv19270/include/linux Removed Files: sysctl.h Log Message: Cleanup: o Removed last reference to the old sysctl entry. --- sysctl.h DELETED --- |
From: Rodrigo S. de C. <rc...@us...> - 2002-09-10 17:24:36
Update of /cvsroot/linuxcompressed/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv2841/include/linux Modified Files: comp_cache.h swap.h Log Message: Bug fixes: o Make the code compile with the compressed cache option disabled. Index: comp_cache.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/comp_cache.h,v retrieving revision 1.101 retrieving revision 1.102 diff -C2 -r1.101 -r1.102 *** comp_cache.h 10 Sep 2002 16:43:03 -0000 1.101 --- comp_cache.h 10 Sep 2002 17:23:55 -0000 1.102 *************** *** 2,6 **** * linux/mm/comp_cache.h * ! * Time-stamp: <2002-09-06 19:25:59 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache.h * ! * Time-stamp: <2002-09-10 14:05:49 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 241,244 **** --- 241,246 ---- extern int clean_page_compress_lock; + #else + static inline void decompress_swap_cache_page(struct page * page) { }; #endif Index: swap.h =================================================================== RCS file: /cvsroot/linuxcompressed/linux/include/linux/swap.h,v retrieving revision 1.16 retrieving revision 1.17 diff -C2 -r1.16 -r1.17 *** swap.h 10 Sep 2002 16:43:04 -0000 1.16 --- swap.h 10 Sep 2002 17:23:56 -0000 1.17 *************** *** 73,79 **** #define swap_map_count(swap) (swap & 0x7fff) #else ! #define SWAP_MAP_MAX 0x7fff ! #define SWAP_MAP_BAD 0x8000 ! #define SWAP_MAP_COMP 0x0000 #define swap_map_count(swap) (swap) #endif --- 73,80 ---- #define swap_map_count(swap) (swap & 0x7fff) #else ! #define SWAP_MAP_MAX 0x7fff ! #define SWAP_MAP_BAD 0x8000 ! #define SWAP_MAP_COMP 0x0000 ! #define SWAP_MAP_COMP_BIT 0x0000 #define swap_map_count(swap) (swap) #endif |
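The comp_cache.h fix above uses the standard kernel technique for configuring a feature out: the #else branch supplies an empty static inline with the same signature, so call sites need no #ifdef of their own and the compiler discards the call entirely. A minimal sketch of the pattern; CONFIG_COMP_CACHE is an assumed name for the option's config symbol, which is not visible in this hunk:

    struct page;    /* from <linux/mm.h> in the real header */

    #ifdef CONFIG_COMP_CACHE
    extern void decompress_swap_cache_page(struct page * page);
    #else
    /* feature disabled: empty stub, optimized away at the call site */
    static inline void decompress_swap_cache_page(struct page * page) { }
    #endif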
From: Rodrigo S. de C. <rc...@us...> - 2002-09-10 16:44:03
Update of /cvsroot/linuxcompressed/linux/mm/comp_cache In directory usw-pr-cvs1:/tmp/cvs-serv17835/mm/comp_cache Modified Files: adaptivity.c aux.c free.c main.c proc.c swapin.c swapout.c Log Message: New features o Adaptivity: the greatest feature of the changeset is the adaptivity implementation. Now the compressed cache resizes by itself, and it seems to be picking a size pretty close to the best size noticed in our tests. The policy can be described as follows. Instead of having one LRU queue, we now have two queues, active and inactive, like the LRU queues in vanilla. The active list has the pages that would be in memory if the compressed cache were not used, and the inactive list represents the gain from using the compressed cache. If there are many accesses to the active list, we first block growing on demand and later shrink the compressed cache; if there are many accesses to the inactive list, we let the cache grow if needed. The active list size is computed from the effective compression ratio (number of fragments/number of memory pages). When shrinking the cache, we try to free a compressed cache page by moving its fragments to other places. If unable to free a page that way, we free a fragment at the end of the inactive list. o Compressed swap: now all swap cache pages are swapped out in compressed format. A bit in the swap_map array is used to know whether the entry is compressed or not. The compressed size is stored in the entry on disk. There is almost no cost to storing the pages in compressed format, which is why it is the default configuration for the compressed cache. o Compacted swap: besides swapping out the pages in compressed format, we may decrease the number of writeouts by writing many fragments to the same disk block. Since it has a memory cost to store some metadata, it is an option to be enabled by the user. It uses two arrays, real_swap (unsigned long array) and real_swap_map (unsigned short array). All the metadata about the fragments in a disk block (offset, size, index) are stored on the block itself. o Clean fragments are no longer decompressed when the page is only being grabbed to write some data. We don't decompress a clean fragment when grabbing a page cache page in __grab_cache_page() any longer. We would decompress the fragment, but its data wouldn't be used (that's why __grab_cache_page() creates a page if one is not found in the page cache). Dirty fragments will still be decompressed, but that's a rare situation in the page cache since most data are written via buffers. Bug fixes o Support for larger compressed cache pages did not work for pages larger than 2*PAGE_SIZE (8K). Reason: wrong computation of the comp page size, very simple to fix. o In /proc/comp_cache_hist, we were showing the number of fragments in a comp page even if those fragments had been freed. It has been fixed to not show the freed fragments. o Writing out every dirty page with buffers. That was a conceptual bug: all the swapped-in pages would have buffers, and if they got dirty, they would not be added to the compressed cache as dirty; they would be written out first and only then added to the swap cache as a clean page. Now we try to free the buffers, and only if we are unable to do that do we write the page out. With this bug the page was still added to the compressed cache, but we were forcing many writes. Other: o Removed support for changing algorithms online. That was a rarely used option and would introduce a space cost to pages swapped out in compressed format, so it was removed. 
It also saves some memory space, since we now allocate only the data structure used by the selected algorithm. Recall that the algorithm can be set through the compalg= kernel parameter. o All entries in /proc/sys/vm/comp_cache removed. Since neither the compression algorithm nor the compressed cache size can be changed, it is useless to have a directory in /proc/sys. The compressed cache size can still be checked in /proc/meminfo. o Info for the compression algorithm is shown even if no page has been compressed. o There are many code blocks with "#if 0" that are/were being tested. Cleanups: o The code that adds a fragment to a comp page's fragment list was split out into a new function. o decompress() function removed. Index: adaptivity.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/adaptivity.c,v retrieving revision 1.39 retrieving revision 1.40 diff -C2 -r1.39 -r1.40 *** adaptivity.c 7 Aug 2002 18:30:58 -0000 1.39 --- adaptivity.c 10 Sep 2002 16:43:20 -0000 1.40 *************** *** 2,6 **** * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-08-03 12:12:40 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/adaptivity.c * ! * Time-stamp: <2002-09-02 18:43:33 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 18,21 **** --- 18,22 ---- static int fragment_failed_alloc = 0, vswap_failed_alloc = 0; unsigned long failed_comp_page_allocs = 0; + int growing_lock = 0; /* semaphore used to avoid two concurrent instances of *************** *** 536,540 **** BUG(); UnlockPage(empty_comp_page->page); ! __free_pages(empty_comp_page->page, comp_page_order); set_comp_page(empty_comp_page, NULL); --- 537,541 ---- BUG(); UnlockPage(empty_comp_page->page); ! __free_pages(empty_comp_page->page, COMP_PAGE_ORDER); set_comp_page(empty_comp_page, NULL); *************** *** 639,643 **** while (comp_cache_needs_to_grow() && nrpages--) { ! page = alloc_pages(GFP_ATOMIC, comp_page_order); /* couldn't allocate the page */ --- 640,644 ---- while (comp_cache_needs_to_grow() && nrpages--) { ! page = alloc_pages(GFP_ATOMIC, COMP_PAGE_ORDER); /* couldn't allocate the page */ *************** *** 648,652 **** if (!init_comp_page(&comp_page, page)) { ! 
__free_pages(page, COMP_PAGE_ORDER); goto out_unlock; } *************** *** 692,695 **** --- 693,699 ---- return 0; + if (growing_lock) + return 0; + /* to force the grow_comp_cache() to grow the cache */ new_num_comp_pages = num_comp_pages + 1; *************** *** 704,707 **** --- 708,852 ---- new_num_comp_pages = num_comp_pages; return 0; + } + + void + compact_comp_cache(void) + { + struct comp_cache_page * comp_page, * previous_comp_page = NULL, * new_comp_page, ** hash_table = free_space_hash; + struct comp_cache_fragment * fragment, * new_fragment; + int i; + + next_fragment: + i = free_space_hash_size - 1; + do { + comp_page = hash_table[i--]; + } while(i > 0 && !comp_page); + + if (previous_comp_page && previous_comp_page != comp_page) + return; + + if (!comp_page || TryLockPage(comp_page->page)) + goto writeout; + + if (list_empty(&comp_page->fragments)) { + shrink_on_demand(comp_page); + return; + } + + fragment = list_entry(comp_page->fragments.prev, struct comp_cache_fragment, list); + search_again: + new_comp_page = search_comp_page(free_space_hash, fragment->compressed_size); + + if (new_comp_page && !TryLockPage(new_comp_page->page)) + goto got_page; + + if (hash_table == free_space_hash) { + hash_table = total_free_space_hash; + goto search_again; + } + goto out2_failed; + + got_page: + if (hash_table == total_free_space_hash) + compact_fragments(new_comp_page); + + remove_comp_page_from_hash_table(new_comp_page); + + /* allocate the new fragment */ + new_fragment = alloc_fragment(); + + if (!new_fragment) { + UnlockPage(comp_page->page); + goto out_failed; + } + + new_fragment->index = fragment->index; + new_fragment->mapping = fragment->mapping; + new_fragment->offset = new_comp_page->free_offset; + new_fragment->compressed_size = fragment->compressed_size; + new_fragment->flags = fragment->flags; + new_fragment->comp_page = new_comp_page; + set_fragment_count(new_fragment, fragment_count(fragment)); + + if ((new_fragment->swp_buffer = fragment->swp_buffer)) + new_fragment->swp_buffer->fragment = new_fragment; + + memcpy(page_address(new_comp_page->page) + new_fragment->offset, page_address(comp_page->page) + fragment->offset, fragment->compressed_size); + + previous_comp_page = comp_page; + + UnlockPage(comp_page->page); + if (!drop_fragment(fragment)) { + if (fragment->swp_buffer) + fragment->swp_buffer->fragment = fragment; + kmem_cache_free(fragment_cachep, new_fragment); + goto out_failed; + } + + /* let's update some important fields */ + new_comp_page->free_space -= new_fragment->compressed_size; + new_comp_page->total_free_space -= new_fragment->compressed_size; + new_comp_page->free_offset += new_fragment->compressed_size; + + add_to_comp_page_list(new_comp_page, new_fragment); + add_fragment_vswap(new_fragment); + add_fragment_to_hash_table(new_fragment); + + if (CompFragmentActive(new_fragment)) + add_fragment_to_active_lru_queue(new_fragment); + else + add_fragment_to_inactive_lru_queue(new_fragment); + + if (PageSwapCache(new_fragment)) + num_swapper_fragments++; + num_fragments++; + + new_fragment->mapping->nrpages++; + if (CompFragmentDirty(new_fragment)) + list_add(&new_fragment->mapping_list, &new_fragment->mapping->dirty_comp_pages); + else { + list_add(&new_fragment->mapping_list, &new_fragment->mapping->clean_comp_pages); + num_clean_fragments++; + } + + balance_lru_queues(); + + add_comp_page_to_hash_table(new_comp_page); + UnlockPage(new_comp_page->page); + goto next_fragment; + //return; + + writeout: + writeout_fragments(GFP_KERNEL, 1, 6); + 
return; + + out_failed: + add_comp_page_to_hash_table(new_comp_page); + UnlockPage(new_comp_page->page); + goto writeout; + + out2_failed: + UnlockPage(comp_page->page); + goto writeout; + + } + + void + balance_lru_queues(void) + { + struct comp_cache_fragment * fragment; + unsigned long num_memory_pages; + + /* while condition: + * + * (num_active_fragments * 100)/num_fragments > ((num_comp_pages << COMP_PAGE_ORDER) * 100)/num_fragments + */ + num_memory_pages = (num_comp_pages << COMP_PAGE_ORDER); + while (num_active_fragments > num_memory_pages) { + fragment = list_entry(active_lru_queue.prev, struct comp_cache_fragment, lru_queue); + + remove_fragment_from_lru_queue(fragment); + add_fragment_to_inactive_lru_queue(fragment); + } } Index: aux.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/aux.c,v retrieving revision 1.42 retrieving revision 1.43 diff -C2 -r1.42 -r1.43 *** aux.c 28 Jul 2002 15:47:04 -0000 1.42 --- aux.c 10 Sep 2002 16:43:20 -0000 1.43 *************** *** 2,6 **** * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-07-28 11:55:38 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/aux.c * ! * Time-stamp: <2002-09-02 18:43:50 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 199,202 **** --- 199,203 ---- free_space_count(int index, unsigned long * num_fragments) { struct comp_cache_page * comp_page; + struct comp_cache_fragment * fragment; unsigned long total, total_fragments; struct list_head * fragment_lh; *************** *** 211,216 **** total_fragments = 0; ! for_each_fragment(fragment_lh, comp_page) ! total_fragments++; #if 0 --- 212,221 ---- total_fragments = 0; ! for_each_fragment(fragment_lh, comp_page) { ! fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); ! ! if (!fragment_freed(fragment)) ! total_fragments++; ! } #if 0 *************** *** 328,331 **** --- 333,375 ---- } + void + add_to_comp_page_list(struct comp_cache_page * comp_page, struct comp_cache_fragment * fragment) + { + struct list_head * fragment_lh; + struct comp_cache_fragment * previous_fragment = NULL; + + /* add the fragment to the comp_page list of fragments */ + if (list_empty(&(comp_page->fragments))) { + list_add(&(fragment->list), &(comp_page->fragments)); + return; + } + + previous_fragment = list_entry(comp_page->fragments.prev, struct comp_cache_fragment, list); + + if (previous_fragment->offset + previous_fragment->compressed_size == fragment->offset) { + list_add_tail(&(fragment->list), &(comp_page->fragments)); + return; + } + + /* let's search for the correct place in the comp_page list */ + previous_fragment = NULL; + + for_each_fragment(fragment_lh, comp_page) { + struct comp_cache_fragment * aux_fragment; + + aux_fragment = list_entry(fragment_lh, struct comp_cache_fragment, list); + + if (aux_fragment->offset + aux_fragment->compressed_size > fragment->offset) + break; + + previous_fragment = aux_fragment; + } + + if (previous_fragment) + list_add(&(fragment->list), &(previous_fragment->list)); + else + list_add(&(fragment->list), &(comp_page->fragments)); + } + struct comp_cache_page * search_comp_page(struct comp_cache_page ** hash_table, int free_space) { *************** *** 368,428 **** inline void ! add_fragment_to_lru_queue_tail(struct comp_cache_fragment * fragment) { ! swp_entry_t entry; ! if (!fragment) BUG(); ! #ifdef CONFIG_COMP_PAGE_CACHE ! if (!PageSwapCache(fragment)) { ! 
list_add_tail(&(fragment->lru_queue), &lru_queue); ! return; } ! #endif ! /* swap cache page */ ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; ! list_add_tail(&(fragment->lru_queue), &lru_queue); } inline void ! add_fragment_to_lru_queue(struct comp_cache_fragment * fragment) { ! swp_entry_t entry; ! if (!fragment) BUG(); ! #ifdef CONFIG_COMP_PAGE_CACHE ! if (!PageSwapCache(fragment)) { ! list_add(&(fragment->lru_queue), &lru_queue); ! return; } ! #endif ! /* swap cache page */ ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; ! list_add(&(fragment->lru_queue), &lru_queue); } inline void remove_fragment_from_lru_queue(struct comp_cache_fragment * fragment) { - swp_entry_t entry; - if (!fragment) BUG(); ! #ifdef CONFIG_COMP_PAGE_CACHE ! if (!PageSwapCache(fragment)) { ! list_del_init(&(fragment->lru_queue)); ! return; } ! #endif ! /* swap cache page */ ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; list_del_init(&(fragment->lru_queue)); } --- 412,461 ---- inline void ! add_fragment_to_active_lru_queue(struct comp_cache_fragment * fragment) { if (!fragment) BUG(); ! if (PageSwapCache(fragment)) { ! swp_entry_t entry; ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; } ! ! list_add(&(fragment->lru_queue), &active_lru_queue); ! CompFragmentSetActive(fragment); ! num_active_fragments++; } inline void ! add_fragment_to_inactive_lru_queue(struct comp_cache_fragment * fragment) { if (!fragment) BUG(); ! if (PageSwapCache(fragment)) { ! swp_entry_t entry; ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; } ! ! list_add(&(fragment->lru_queue), &inactive_lru_queue); } inline void remove_fragment_from_lru_queue(struct comp_cache_fragment * fragment) { if (!fragment) BUG(); ! if (PageSwapCache(fragment)) { ! swp_entry_t entry; ! entry.val = fragment->index; ! if (vswap_address(entry)) ! return; } ! list_del_init(&(fragment->lru_queue)); + if (CompFragmentTestandClearActive(fragment)) + num_active_fragments--; } *************** *** 588,592 **** /* inits comp cache free space hash table */ ! free_space_interval = 100 * (comp_page_order + 1); free_space_hash_size = (int) (PAGE_SIZE/100) + 2; --- 621,625 ---- /* inits comp cache free space hash table */ ! free_space_interval = 100 * (COMP_PAGE_ORDER + 1); free_space_hash_size = (int) (PAGE_SIZE/100) + 2; *************** *** 601,605 **** /* inits comp cache total free space hash table */ ! total_free_space_interval = 100 * (comp_page_order + 1); total_free_space_hash_size = (int) (PAGE_SIZE/100) + 2; --- 634,638 ---- /* inits comp cache total free space hash table */ ! total_free_space_interval = 100 * (COMP_PAGE_ORDER + 1); total_free_space_hash_size = (int) (PAGE_SIZE/100) + 2; Index: free.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/free.c,v retrieving revision 1.46 retrieving revision 1.47 diff -C2 -r1.46 -r1.47 *** free.c 7 Aug 2002 18:30:58 -0000 1.46 --- free.c 10 Sep 2002 16:43:21 -0000 1.47 *************** *** 2,6 **** * linux/mm/comp_cache/free.c * ! * Time-stamp: <2002-08-07 12:50:00 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/free.c * ! 
* Time-stamp: <2002-08-21 17:57:52 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 75,78 **** --- 75,80 ---- if (PageSwapCache(fragment)) num_swapper_fragments--; + if (!CompFragmentDirty(fragment)) + num_clean_fragments--; num_fragments--; *************** *** 376,380 **** spin_lock(&comp_cache_lock); ! add_fragment_to_lru_queue(fragment); add_fragment_to_hash_table(fragment); UnlockPage(fragment->comp_page->page); --- 378,382 ---- spin_lock(&comp_cache_lock); ! add_fragment_to_active_lru_queue(fragment); add_fragment_to_hash_table(fragment); UnlockPage(fragment->comp_page->page); Index: main.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/main.c,v retrieving revision 1.63 retrieving revision 1.64 diff -C2 -r1.63 -r1.64 *** main.c 7 Aug 2002 18:30:58 -0000 1.63 --- main.c 10 Sep 2002 16:43:22 -0000 1.64 *************** *** 2,6 **** * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-08-07 15:17:28 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/main.c * ! * Time-stamp: <2002-09-04 16:06:25 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 15,19 **** #include <linux/init.h> #include <linux/pagemap.h> - #include <linux/slab.h> #include <asm/page.h> --- 15,18 ---- *************** *** 21,26 **** /* compressed cache control variables */ unsigned long num_comp_pages = 0; - unsigned long num_swapper_fragments = 0; unsigned long num_fragments = 0; unsigned long init_num_comp_pages = 0; --- 20,27 ---- /* compressed cache control variables */ unsigned long num_comp_pages = 0; unsigned long num_fragments = 0; + unsigned long num_swapper_fragments = 0; + unsigned long num_active_fragments = 0; + unsigned long num_clean_fragments = 0; unsigned long init_num_comp_pages = 0; *************** *** 40,49 **** kmem_cache_t * fragment_cachep; - #ifdef CONFIG_COMP_DOUBLE_PAGE - int comp_page_order = 1; - #else - int comp_page_order = 0; - #endif - extern unsigned long num_physpages; extern struct comp_cache_page * get_comp_cache_page(struct page *, unsigned short, struct comp_cache_fragment **, unsigned int, int); --- 41,44 ---- *************** *** 57,66 **** struct comp_cache_page * comp_page; struct comp_cache_fragment * fragment; ! unsigned short comp_size, algorithm; static struct page * current_compressed_page; static char buffer_compressed1[MAX_COMPRESSED_SIZE]; static char buffer_compressed2[MAX_COMPRESSED_SIZE]; ! unsigned long * buffer_compressed; --- 52,61 ---- struct comp_cache_page * comp_page; struct comp_cache_fragment * fragment; ! unsigned short comp_size, comp_offset; static struct page * current_compressed_page; static char buffer_compressed1[MAX_COMPRESSED_SIZE]; static char buffer_compressed2[MAX_COMPRESSED_SIZE]; ! unsigned long * buffer_compressed = NULL; *************** *** 79,83 **** try_again: ! comp_size = compress(current_compressed_page = page, buffer_compressed = (unsigned long *) &buffer_compressed1, &algorithm); comp_page = get_comp_cache_page(page, comp_size, &fragment, gfp_mask, priority); --- 74,84 ---- try_again: ! /* don't compress a page already compressed */ ! if (PageCompressed(page)) ! get_comp_data(page, &comp_size, &comp_offset); ! else ! comp_size = compress(current_compressed_page = page, buffer_compressed = (unsigned long *) &buffer_compressed1, state); ! if (comp_size > PAGE_SIZE) ! BUG(); comp_page = get_comp_cache_page(page, comp_size, &fragment, gfp_mask, priority); *************** *** 93,108 **** BUG(); ! 
set_fragment_algorithm(fragment, algorithm); ! ! /* fix mapping stuff */ page->mapping->nrpages++; if (state != DIRTY_PAGE) { list_add(&fragment->mapping_list, &fragment->mapping->clean_comp_pages); goto copy_page; } ! CompFragmentSetDirty(fragment); list_add(&fragment->mapping_list, &fragment->mapping->dirty_comp_pages); ! /* the inode might have been synced in the meanwhile (if we * slept to get a free comp cache entry above), so dirty it */ --- 94,109 ---- BUG(); ! /* fix mapping stuff - clean fragment */ page->mapping->nrpages++; if (state != DIRTY_PAGE) { list_add(&fragment->mapping_list, &fragment->mapping->clean_comp_pages); + num_clean_fragments++; goto copy_page; } ! ! /* dirty fragment */ CompFragmentSetDirty(fragment); list_add(&fragment->mapping_list, &fragment->mapping->dirty_comp_pages); ! /* the inode might have been synced in the meanwhile (if we * slept to get a free comp cache entry above), so dirty it */ *************** *** 111,117 **** copy_page: if (compressed(fragment)) { if (current_compressed_page != page) { ! comp_size = compress(page, buffer_compressed = (unsigned long *) &buffer_compressed2, &algorithm); if (comp_size != fragment->compressed_size) { UnlockPage(comp_page->page); --- 112,123 ---- copy_page: + if (PageCompressed(page)) { + memcpy(page_address(comp_page->page) + fragment->offset, page_address(page) + comp_offset, comp_size); + goto out; + } + if (compressed(fragment)) { if (current_compressed_page != page) { ! comp_size = compress(page, buffer_compressed = (unsigned long *) &buffer_compressed2, state); if (comp_size != fragment->compressed_size) { UnlockPage(comp_page->page); *************** *** 124,127 **** --- 130,134 ---- memcpy(page_address(comp_page->page) + fragment->offset, page_address(page), PAGE_SIZE); + out: if (PageTestandSetCompCache(page)) BUG(); *************** *** 133,139 **** compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask, int priority) { ! int write, ret = 0; ! ! write = !!page->buffers; #ifdef CONFIG_COMP_PAGE_CACHE write |= shmem_page(page); --- 140,147 ---- compress_dirty_page(struct page * page, int (*writepage)(struct page *), unsigned int gfp_mask, int priority) { ! int write = 0, ret = 0; ! ! if (page->buffers) ! write = !try_to_free_buffers(page, 0); #ifdef CONFIG_COMP_PAGE_CACHE write |= shmem_page(page); *************** *** 193,197 **** extern void __init comp_cache_adaptivity_init(void); ! LIST_HEAD(lru_queue); inline int --- 201,206 ---- extern void __init comp_cache_adaptivity_init(void); ! LIST_HEAD(active_lru_queue); ! LIST_HEAD(inactive_lru_queue); inline int *************** *** 202,206 **** return 0; ! (*comp_page)->free_space = (*comp_page)->total_free_space = (comp_page_order + 1) * PAGE_SIZE; (*comp_page)->free_offset = 0; (*comp_page)->page = page; --- 211,215 ---- return 0; ! (*comp_page)->free_space = (*comp_page)->total_free_space = (COMP_PAGE_ORDER + 1) * PAGE_SIZE; (*comp_page)->free_offset = 0; (*comp_page)->page = page; *************** *** 247,254 **** /* initialize each comp cache entry */ for (i = 0; i < num_comp_pages; i++) { ! page = alloc_pages(GFP_KERNEL, comp_page_order); if (!init_comp_page(&comp_page, page)) ! __free_pages(page, comp_page_order); } comp_cache_free_space = num_comp_pages * COMP_PAGE_SIZE; --- 256,263 ---- /* initialize each comp cache entry */ for (i = 0; i < num_comp_pages; i++) { ! page = alloc_pages(GFP_KERNEL, COMP_PAGE_ORDER); if (!init_comp_page(&comp_page, page)) ! 
__free_pages(page, COMP_PAGE_ORDER); } comp_cache_free_space = num_comp_pages * COMP_PAGE_SIZE; *************** *** 266,270 **** char * endp; ! nr_pages = memparse(str, &endp) >> (PAGE_SHIFT + comp_page_order); max_num_comp_pages = nr_pages; --- 275,279 ---- char * endp; ! nr_pages = memparse(str, &endp) >> (PAGE_SHIFT + COMP_PAGE_ORDER); max_num_comp_pages = nr_pages; Index: proc.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/proc.c,v retrieving revision 1.25 retrieving revision 1.26 diff -C2 -r1.25 -r1.26 *** proc.c 13 Aug 2002 14:15:20 -0000 1.25 --- proc.c 10 Sep 2002 16:43:23 -0000 1.26 *************** *** 2,6 **** * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-08-12 19:19:39 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/proc.c * ! * Time-stamp: <2002-09-10 13:27:33 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 50,58 **** decompress_function_t * decomp; struct stats_summary stats; ! } compression_algorithms[NUM_ALGORITHMS]; static int algorithm_min = WKDM_IDX; static int algorithm_max = LZO_IDX; ! static int current_algorithm = 0; static struct comp_alg_data comp_data; --- 50,59 ---- decompress_function_t * decomp; struct stats_summary stats; ! } compression_algorithm; static int algorithm_min = WKDM_IDX; static int algorithm_max = LZO_IDX; ! static int algorithm_idx = 0; ! struct stats_summary * stats = &compression_algorithm.stats; static struct comp_alg_data comp_data; *************** *** 60,112 **** static spinlock_t comp_data_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; ! enum ! { ! CC_SIZE=1, ! CC_ALGORITHM=2 ! }; ! ! ctl_table comp_cache_table[] = { ! {CC_SIZE, "size", &num_comp_pages, sizeof(int), 0444, NULL, &proc_dointvec}, ! {CC_ALGORITHM, "algorithm", ¤t_algorithm, sizeof(int), 0644, NULL, ! &proc_dointvec_minmax, &sysctl_intvec, NULL, &algorithm_min, &algorithm_max}, ! {0} ! }; ! ! int ! get_fragment_algorithm(struct comp_cache_fragment * fragment) ! { ! if (CompFragmentWKdm(fragment)) ! return WKDM_IDX; ! if (CompFragmentWK4x4(fragment)) ! return WK4X4_IDX; ! if (CompFragmentLZO(fragment)) ! return LZO_IDX; ! BUG(); ! return -1; ! } ! ! void ! set_fragment_algorithm(struct comp_cache_fragment * fragment, unsigned short algorithm) ! { ! switch (algorithm) { ! case WKDM_IDX: ! CompFragmentSetWKdm(fragment); ! break; ! case WK4X4_IDX: ! CompFragmentSetWK4x4(fragment); ! break; ! case LZO_IDX: ! CompFragmentSetLZO(fragment); ! break; ! default: ! BUG(); ! } ! } inline void ! comp_cache_update_read_stats(unsigned short algorithm, struct comp_cache_fragment * fragment) { - struct stats_summary * stats = &(compression_algorithms[algorithm].stats); - #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { --- 61,69 ---- static spinlock_t comp_data_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED; ! int clean_page_compress_lock = 1; inline void ! comp_cache_update_read_stats(struct comp_cache_fragment * fragment) { #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { *************** *** 119,126 **** inline void ! comp_cache_update_written_stats(unsigned short algorithm, struct comp_cache_fragment * fragment) { - struct stats_summary * stats = &(compression_algorithms[algorithm].stats); - #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { --- 76,81 ---- inline void ! 
comp_cache_update_written_stats(struct comp_cache_fragment * fragment) { #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { *************** *** 133,140 **** static inline void ! comp_cache_update_decomp_stats(unsigned short algorithm, struct comp_cache_fragment * fragment) { - struct stats_summary * stats = &(compression_algorithms[algorithm].stats); - #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { --- 88,93 ---- static inline void ! comp_cache_update_decomp_stats(struct comp_cache_fragment * fragment) { #ifdef CONFIG_COMP_PAGE_CACHE if (!PageSwapCache(fragment)) { *************** *** 149,154 **** comp_cache_update_comp_stats(unsigned int comp_size, struct page * page) { - struct stats_summary * stats = &(compression_algorithms[current_algorithm].stats); - /* update compressed size statistics */ if (!comp_size) --- 102,105 ---- *************** *** 196,200 **** int ! compress(struct page * page, void * to, unsigned short * algorithm) { unsigned int comp_size; --- 147,151 ---- int ! compress(struct page * page, void * to, int state) { unsigned int comp_size; *************** *** 202,220 **** #if 0 ! /* That's a testing police to compress only swap cache ! * pages. All other pages from page cache will be stored ! * without compression in compressed cache. */ ! if (!PageSwapCache(page)) { ! *algorithm = current_algorithm; ! return PAGE_SIZE; } #endif ! spin_lock(&comp_data_lock); ! comp_size = compression_algorithms[current_algorithm].comp(from, to, PAGE_SIZE/4, &comp_data); spin_unlock(&comp_data_lock); comp_cache_update_comp_stats(comp_size, page); - *algorithm = current_algorithm; if (comp_size > PAGE_SIZE) comp_size = PAGE_SIZE; --- 153,168 ---- #if 0 ! if (state == CLEAN_PAGE && clean_page_compress_lock) { ! comp_size = PAGE_SIZE; ! comp_cache_update_comp_stats(comp_size, page); ! return comp_size; } #endif ! spin_lock(&comp_data_lock); ! comp_size = compression_algorithm.comp(from, to, PAGE_SIZE/4, &comp_data); spin_unlock(&comp_data_lock); comp_cache_update_comp_stats(comp_size, page); if (comp_size > PAGE_SIZE) comp_size = PAGE_SIZE; *************** *** 224,279 **** void ! decompress(struct comp_cache_fragment * fragment, struct page * page, int algorithm) { void * from = page_address(fragment->comp_page->page) + fragment->offset; void * to = page_address(page); spin_lock(&comp_data_lock); comp_data.compressed_size = fragment->compressed_size; ! compression_algorithms[algorithm].decomp(from, to, PAGE_SIZE/4, &comp_data); spin_unlock(&comp_data_lock); ! comp_cache_update_decomp_stats(algorithm, fragment); } void __init comp_cache_algorithms_init(void) { ! int i; ! /* data structures for WKdm and WK4x4 */ ! comp_data.tempTagsArray = kmalloc(300 * sizeof(WK_word), GFP_ATOMIC); ! comp_data.tempQPosArray = kmalloc(300 * sizeof(WK_word), GFP_ATOMIC); ! comp_data.tempLowBitsArray = kmalloc(1200 * sizeof(WK_word), GFP_ATOMIC); ! ! if (!comp_data.tempTagsArray || !comp_data.tempQPosArray || !comp_data.tempLowBitsArray) ! panic("comp_cache_algorithms_init(): cannot allocate structures for WKdm/WK4x4"); ! ! /* data structure (dictionary) for LZO */ ! comp_data.wrkmem = (lzo_byte *) kmalloc(LZO1X_1_MEM_COMPRESS, GFP_ATOMIC); ! if (!comp_data.wrkmem) ! panic("comp_cache_algorithms_init(): cannot allocate dictionary for LZO"); ! ! /* stats for algorithms */ ! for (i = 0; i < NUM_ALGORITHMS; i++) ! memset((void *) &compression_algorithms[i], 0, sizeof(struct stats_summary)); ! /* compression algorithms */ ! strcpy(compression_algorithms[WKDM_IDX].name, "WKdm"); ! 
compression_algorithms[WKDM_IDX].comp = WKdm_compress; ! compression_algorithms[WKDM_IDX].decomp = WKdm_decompress; ! ! strcpy(compression_algorithms[WK4X4_IDX].name, "WK4x4"); ! compression_algorithms[WK4X4_IDX].comp = WK4x4_compress; ! compression_algorithms[WK4X4_IDX].decomp = WK4x4_decompress; ! ! strcpy(compression_algorithms[LZO_IDX].name, "LZO"); ! compression_algorithms[LZO_IDX].comp = lzo_wrapper_compress; ! compression_algorithms[LZO_IDX].decomp = lzo_wrapper_decompress; ! if (!current_algorithm || current_algorithm < algorithm_min || current_algorithm > algorithm_max) ! current_algorithm = WKDM_IDX; ! printk("Compressed Cache: initial compression algorithm: %s\n", compression_algorithms[current_algorithm].name); } --- 172,311 ---- void ! decompress_fragment_to_page(struct comp_cache_fragment * fragment, struct page * page) { + struct comp_cache_page * comp_page; void * from = page_address(fragment->comp_page->page) + fragment->offset; void * to = page_address(page); + if (!fragment) + BUG(); + if (!fragment_count(fragment)) + BUG(); + comp_page = fragment->comp_page; + if (!comp_page->page) + BUG(); + if (!PageLocked(page)) + BUG(); + if (!PageLocked(comp_page->page)) + BUG(); + + SetPageUptodate(page); + + if (!compressed(fragment)) { + copy_page(to, from); + return; + } + + /* regular compressed fragment */ spin_lock(&comp_data_lock); comp_data.compressed_size = fragment->compressed_size; ! compression_algorithm.decomp(from, to, PAGE_SIZE/4, &comp_data); spin_unlock(&comp_data_lock); ! comp_cache_update_decomp_stats(fragment); ! } ! ! #ifdef CONFIG_COMP_SWAP ! void ! get_comp_data(struct page * page, unsigned short * size, unsigned short * offset) ! { ! unsigned short counter, metadata_offset; ! unsigned long fragment_index; ! ! counter = *((unsigned short *) page_address(page)); ! metadata_offset = *((unsigned short *) (page_address(page) + 2)); ! ! fragment_index = 0; ! ! while (counter-- && fragment_index != page->index) { ! fragment_index = *((unsigned long *) (page_address(page) + metadata_offset + 4)); ! metadata_offset += 8; ! } ! ! if (!fragment_index) ! BUG(); ! if (fragment_index != page->index) ! BUG(); ! ! metadata_offset -= 8; ! *size = *((unsigned short *) (page_address(page) + metadata_offset)); ! *offset = *((unsigned short *) (page_address(page) + metadata_offset + 2)); } + #endif + + void + decompress_swap_cache_page(struct page * page) + { + unsigned short comp_size, comp_offset; + + if (!PageLocked(page)) + BUG(); + + spin_lock(&comp_data_lock); + get_comp_data(page, &comp_size, &comp_offset); + + if (comp_size > PAGE_SIZE) + BUG(); + memcpy(page_address(comp_data.decompress_buffer), page_address(page) + comp_offset, comp_size); + + comp_data.compressed_size = comp_size; + compression_algorithm.decomp(page_address(comp_data.decompress_buffer), page_address(page), PAGE_SIZE/4, &comp_data); + + spin_unlock(&comp_data_lock); + + stats->decomp_swap++; + PageClearCompressed(page); + } void __init comp_cache_algorithms_init(void) { ! if (!algorithm_idx || algorithm_idx < algorithm_min || algorithm_idx > algorithm_max) ! algorithm_idx = WKDM_IDX; ! /* data structure for compression algorithms */ ! switch(algorithm_idx) { ! case WKDM_IDX: ! case WK4X4_IDX: ! comp_data.tempTagsArray = kmalloc(300 * sizeof(WK_word), GFP_ATOMIC); ! comp_data.tempQPosArray = kmalloc(300 * sizeof(WK_word), GFP_ATOMIC); ! comp_data.tempLowBitsArray = kmalloc(1200 * sizeof(WK_word), GFP_ATOMIC); ! ! if (!comp_data.tempTagsArray || !comp_data.tempQPosArray || !comp_data.tempLowBitsArray) ! 
panic("comp_cache_algorithms_init(): cannot allocate structures for WKdm/WK4x4"); ! break; ! case LZO_IDX: ! comp_data.wrkmem = (lzo_byte *) kmalloc(LZO1X_1_MEM_COMPRESS, GFP_ATOMIC); ! if (!comp_data.wrkmem) ! panic("comp_cache_algorithms_init(): cannot allocate dictionary for LZO"); ! break; ! } ! comp_data.decompress_buffer = alloc_page(GFP_ATOMIC); ! if (!comp_data.decompress_buffer) ! panic("comp_cache_algorithms_init(): cannot allocate decompression buffer"); ! /* stats for algorithm */ ! memset((void *) &compression_algorithm, 0, sizeof(struct stats_summary)); ! /* compression algorithms */ ! switch(algorithm_idx) { ! case WKDM_IDX: ! strcpy(compression_algorithm.name, "WKdm"); ! compression_algorithm.comp = WKdm_compress; ! compression_algorithm.decomp = WKdm_decompress; ! break; ! case WK4X4_IDX: ! strcpy(compression_algorithm.name, "WK4x4"); ! compression_algorithm.comp = WK4x4_compress; ! compression_algorithm.decomp = WK4x4_decompress; ! break; ! case LZO_IDX: ! strcpy(compression_algorithm.name, "LZO"); ! compression_algorithm.comp = lzo_wrapper_compress; ! compression_algorithm.decomp = lzo_wrapper_decompress; ! break; ! } ! printk("Compressed Cache: compression algorithm: %s\n", compression_algorithm.name); } *************** *** 289,303 **** } - #define current_msg ((algorithm == &compression_algorithms[current_algorithm])?"*":"") #define proportion(part, total) (total?((unsigned int) ((part * 100)/(total))):0) ! void ! print_comp_cache_stats(unsigned short alg_idx, char * page, int * length) { unsigned int compression_ratio_swap, compression_ratio_page, compression_ratio_total; unsigned long long total_sum_comp_pages; unsigned long total_comp_pages; - struct comp_alg * algorithm = &compression_algorithms[alg_idx]; - struct stats_summary * stats = &algorithm->stats; /* swap cache */ --- 321,332 ---- } #define proportion(part, total) (total?((unsigned int) ((part * 100)/(total))):0) ! static void ! print_comp_cache_stats(char * page, int * length) { unsigned int compression_ratio_swap, compression_ratio_page, compression_ratio_total; unsigned long long total_sum_comp_pages; unsigned long total_comp_pages; /* swap cache */ *************** *** 318,334 **** /* total */ ! if (!total_comp_pages) ! return; ! ! compression_ratio_total = ((big_division(total_sum_comp_pages, total_comp_pages)*100)/PAGE_SIZE); *length += sprintf(page + *length, ! " algorithm %s%s\n" " - (C) compressed pages: %8lu (S: %3d%% P: %3d%%)\n" ! " - (D) decompressed pages: %8lu (S: %3d%% P: %3d%%) D/C %3u%%\n" " - (R) read pages: %8lu (S: %3d%% P: %3d%%) R/C: %3u%%\n" ! " - (W) written pages: %8lu (S: %3d%% P: %3d%%) W/C: %3u%% \n" " compression ratio: %8u%% (S: %3u%% P: %3u%%)\n", ! algorithm->name, current_msg, total_comp_pages, proportion(stats->comp_swap, total_comp_pages), --- 347,362 ---- /* total */ ! compression_ratio_total = 0; ! if (total_comp_pages) ! compression_ratio_total = ((big_division(total_sum_comp_pages, total_comp_pages)*100)/PAGE_SIZE); *length += sprintf(page + *length, ! " algorithm %s\n" " - (C) compressed pages: %8lu (S: %3d%% P: %3d%%)\n" ! " - (D) decompressed pages: %8lu (S: %3d%% P: %3d%%) D/C: %3u%%\n" " - (R) read pages: %8lu (S: %3d%% P: %3d%%) R/C: %3u%%\n" ! " - (W) written pages: %8lu (S: %3d%% P: %3d%%) W/C: %3u%%\n" " compression ratio: %8u%% (S: %3u%% P: %3u%%)\n", ! 
compression_algorithm.name, total_comp_pages, proportion(stats->comp_swap, total_comp_pages), *************** *** 337,349 **** proportion(stats->decomp_swap, stats->decomp_swap + stats->decomp_page), proportion(stats->decomp_page, stats->decomp_swap + stats->decomp_page), ! (unsigned int) (((stats->decomp_swap + stats->decomp_page) * 100)/total_comp_pages), stats->read_swap + stats->read_page, proportion(stats->read_swap, stats->read_swap + stats->read_page), proportion(stats->read_page, stats->read_swap + stats->read_page), ! (unsigned int) (((stats->read_swap + stats->read_page) * 100)/total_comp_pages), stats->written_swap + stats->written_page, proportion(stats->written_swap, stats->written_swap + stats->written_page), proportion(stats->written_page, stats->written_swap + stats->written_page), ! (unsigned int) (((stats->written_swap + stats->written_page) * 100)/total_comp_pages), compression_ratio_total, compression_ratio_swap, --- 365,377 ---- proportion(stats->decomp_swap, stats->decomp_swap + stats->decomp_page), proportion(stats->decomp_page, stats->decomp_swap + stats->decomp_page), ! total_comp_pages?((unsigned int) (((stats->decomp_swap + stats->decomp_page) * 100)/total_comp_pages)):0, stats->read_swap + stats->read_page, proportion(stats->read_swap, stats->read_swap + stats->read_page), proportion(stats->read_page, stats->read_swap + stats->read_page), ! total_comp_pages?((unsigned int) (((stats->read_swap + stats->read_page) * 100)/total_comp_pages)):0, stats->written_swap + stats->written_page, proportion(stats->written_swap, stats->written_swap + stats->written_page), proportion(stats->written_page, stats->written_swap + stats->written_page), ! total_comp_pages?((unsigned int) (((stats->written_swap + stats->written_page) * 100)/total_comp_pages)):0, compression_ratio_total, compression_ratio_swap, *************** *** 352,357 **** #define HIST_PRINTK \ ! num_fragments[0], num_fragments[1], num_fragments[2], num_fragments[3], \ ! num_fragments[4], num_fragments[5], num_fragments[6], num_fragments[7] #define HIST_COUNT 8 --- 380,385 ---- #define HIST_PRINTK \ ! array_num_fragments[0], array_num_fragments[1], array_num_fragments[2], array_num_fragments[3], \ ! array_num_fragments[4], array_num_fragments[5], array_num_fragments[6], array_num_fragments[7] #define HIST_COUNT 8 *************** *** 359,368 **** comp_cache_hist_read_proc(char *page, char **start, off_t off, int count, int *eof, void *data) { ! unsigned long * num_fragments, total1, total2; int length = 0, i; ! num_fragments = (unsigned long *) vmalloc(HIST_COUNT * sizeof(unsigned long)); ! if (!num_fragments) { printk("couldn't allocate data structures for free space histogram\n"); goto out; --- 387,396 ---- comp_cache_hist_read_proc(char *page, char **start, off_t off, int count, int *eof, void *data) { ! unsigned long * array_num_fragments, total1, total2; int length = 0, i; ! array_num_fragments = (unsigned long *) vmalloc(HIST_COUNT * sizeof(unsigned long)); ! if (!array_num_fragments) { printk("couldn't allocate data structures for free space histogram\n"); goto out; *************** *** 373,381 **** " total 0f 1f 2f 3f 4f 5f 6f more\n"); ! memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); spin_lock(&comp_cache_lock); ! total1 = free_space_count(0, num_fragments); length += sprintf(page + length, " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", --- 401,410 ---- " total 0f 1f 2f 3f 4f 5f 6f more\n"); ! 
memset((void *) array_num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); spin_lock(&comp_cache_lock); ! total1 = free_space_count(0, array_num_fragments); ! length += sprintf(page + length, "total %lu act %lu pages %lu\n", num_fragments, num_active_fragments, num_comp_pages << COMP_PAGE_ORDER); length += sprintf(page + length, " %4d: %7lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu %5lu\n", *************** *** 385,393 **** for (i = 1; i < free_space_hash_size; i += 2) { ! memset((void *) num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); ! total1 = free_space_count(i, num_fragments); total2 = 0; if (i + 1 < free_space_hash_size) ! total2 = free_space_count(i + 1, num_fragments); length += sprintf(page + length, --- 414,422 ---- for (i = 1; i < free_space_hash_size; i += 2) { ! memset((void *) array_num_fragments, 0, HIST_COUNT * sizeof(unsigned long)); ! total1 = free_space_count(i, array_num_fragments); total2 = 0; if (i + 1 < free_space_hash_size) ! total2 = free_space_count(i + 1, array_num_fragments); length += sprintf(page + length, *************** *** 398,407 **** spin_unlock(&comp_cache_lock); ! vfree(num_fragments); out: return proc_calc_metrics(page, start, off, count, eof, length); } ! #define FRAG_INTERVAL (500 * (comp_page_order + 1)) #define FRAG_PRINTK \ frag_space[0], frag_space[1], frag_space[2], frag_space[3], \ --- 427,436 ---- spin_unlock(&comp_cache_lock); ! vfree(array_num_fragments); out: return proc_calc_metrics(page, start, off, count, eof, length); } ! #define FRAG_INTERVAL (500 * (COMP_PAGE_ORDER + 1)) #define FRAG_PRINTK \ frag_space[0], frag_space[1], frag_space[2], frag_space[3], \ *************** *** 454,458 **** comp_cache_stat_read_proc(char *page, char **start, off_t off, int count, int *eof, void *data) { ! int length = 0, i; length += sprintf(page + length, --- 483,487 ---- comp_cache_stat_read_proc(char *page, char **start, off_t off, int count, int *eof, void *data) { ! int length = 0; length += sprintf(page + length, *************** *** 463,479 **** #ifdef CONFIG_COMP_PAGE_CACHE " - (P) page cache support enabled\n" - #else - " - (P) page cache support disabled\n" #endif " - maximum used size: %6lu KiB\n" " - comp page size: %6lu KiB\n" " - failed allocations: %6lu\n", ! max_used_num_comp_pages << (comp_page_order + PAGE_SHIFT - 10), ! PAGE_SIZE >> (10 - comp_page_order), failed_comp_page_allocs); ! for (i = 0; i < NUM_ALGORITHMS; i++) ! print_comp_cache_stats(i, page, &length); ! return proc_calc_metrics(page, start, off, count, eof, length); } --- 492,507 ---- #ifdef CONFIG_COMP_PAGE_CACHE " - (P) page cache support enabled\n" #endif + #ifdef CONFIG_COMP_SWAP + " - compressed swap support enabled\n" + #endif " - maximum used size: %6lu KiB\n" " - comp page size: %6lu KiB\n" " - failed allocations: %6lu\n", ! max_used_num_comp_pages << (COMP_PAGE_ORDER + PAGE_SHIFT - 10), ! COMP_PAGE_SIZE, failed_comp_page_allocs); ! print_comp_cache_stats(page, &length); return proc_calc_metrics(page, start, off, count, eof, length); } *************** *** 483,487 **** char * endp; ! current_algorithm = simple_strtoul(str, &endp, 0); return 1; } --- 511,515 ---- char * endp; ! 
algorithm_idx = simple_strtoul(str, &endp, 0); return 1; } Index: swapin.c =================================================================== RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapin.c,v retrieving revision 1.53 retrieving revision 1.54 diff -C2 -r1.53 -r1.54 *** swapin.c 7 Aug 2002 18:30:58 -0000 1.53 --- swapin.c 10 Sep 2002 16:43:24 -0000 1.54 *************** *** 2,6 **** * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-08-07 10:46:04 rcastro> * * Linux Virtual Memory Compressed Cache --- 2,6 ---- * linux/mm/comp_cache/swapin.c * ! * Time-stamp: <2002-09-10 10:36:42 rcastro> * * Linux Virtual Memory Compressed Cache *************** *** 18,21 **** --- 18,26 ---- #include <asm/uaccess.h> + #define ACTIVE_FRAGMENT 1 + #define INACTIVE_FRAGMENT 0 + + int last_accessed = 0, last_state_accessed = 0; + int invalidate_comp_cache(struct address_space * mapping, unsigned long offset) *************** *** 69,108 **** } ! unsigned short ! decompress_fragment(struct comp_cache_fragment * fragment, struct page * page) ! { ! struct comp_cache_page * comp_page; ! int algorithm = get_fragment_algorithm(fragment); ! ! if (!fragment) ! BUG(); ! if (!fragment_count(fragment)) ! BUG(); ! comp_page = fragment->comp_page; ! if (!comp_page->page) ! BUG(); ! if (!PageLocked(page)) ! BUG(); ! if (!PageLocked(comp_page->page)) ! BUG(); ! ! if (compressed(fragment)) ! decompress(fragment, page, algorithm); ! else ! memcpy(page_address(page), page_address(comp_page->page) + fragment->offset, PAGE_SIZE); ! ! SetPageUptodate(page); ! return algorithm; ! } ! ! extern inline void comp_cache_update_read_stats(unsigned short, struct comp_cache_fragment *); /* caller may hold pagecache_lock (__find_lock_page()) */ int ! read_comp_cache(struct address_space *mapping, unsigned long offset, struct page * page) { struct comp_cache_fragment * fragment; ! unsigned short algorithm; ! int err; if (!PageLocked(page)) --- 74,85 ---- } ! extern inline void comp_cache_update_read_stats(struct comp_cache_fragment *); /* caller may hold pagecache_lock (__find_lock_page()) */ int ! __read_comp_cache(struct address_space *mapping, unsigned long offset, struct page * page, int state) { struct comp_cache_fragment * fragment; ! int err, ratio; if (!PageLocked(page)) *************** *** 119,134 **** if (!fragment_count(fragment)) BUG(); - - get_fragment(fragment); ! /* move the fragment to the back of the lru list */ ! remove_fragment_from_lru_queue(fragment); ! add_fragment_to_lru_queue(fragment); spin_unlock(&comp_cache_lock); lock_page(fragment->comp_page->page); ! algorithm = decompress_fragment(fragment, page); ! comp_cache_update_read_stats(algorithm, fragment); spin_lock(&comp_cache_lock); --- 96,197 ---- if (!fragment_count(fragment)) BUG(); ! 
get_fragment(fragment);
+
+ #if 0
+ if (CompFragmentDirty(fragment)) {
+ //if (last_state_accessed > 0)
+ // last_state_accessed = -1;
+ //else
+ last_state_accessed--;
+ ratio = -3; //-(((num_fragments - num_clean_fragments) * 4)/num_fragments?:0);
+ if (last_state_accessed < ratio) {
+ clean_page_compress_lock = 1;
+ last_state_accessed = 0;
+ }
+ goto test_active;
+ }
+
+ //if (last_state_accessed < 0)
+ // last_state_accessed = 1;
+ //else
+ last_state_accessed++;
+ ratio = 3; //((num_clean_fragments * 4)/num_fragments?:0);
+ if (last_state_accessed > ratio) {
+ clean_page_compress_lock = 0;
+ last_state_accessed = 0;
+ }
+
+ test_active:
+ #endif
+
+ #if 0
+ if (!CompFragmentDirty(fragment)) {
+ last_state_accessed++;
+ ratio = 3; //((num_clean_fragments * 4)/num_fragments?:0);
+ if (last_state_accessed > ratio) {
+ clean_page_compress_lock = 0;
+ last_state_accessed = 0;
+ }
+ #endif
+
+ if (CompFragmentActive(fragment)) {// || !CompFragmentDirty(fragment)) {
+ if (last_accessed == ACTIVE_FRAGMENT) {
+ #if 0
+ /* -- VERSION 3 -- */
+ if (growing_lock) {
+ compact_comp_cache();
+ //writeout_fragments(GFP_KERNEL, 1, SHRINKAGE_PRIORITY);
+ last_accessed = INACTIVE_FRAGMENT;
+ goto read;
+ }
+ growing_lock = 1;
+ goto read;
+ #endif
+
+ #if 1
+ /* -- VERSION 2 -- */
+ if (growing_lock) {
+ compact_comp_cache();
+ //writeout_fragments(GFP_KERNEL, 1, SHRINKAGE_PRIORITY);
+ growing_lock = 0;
+ last_accessed = INACTIVE_FRAGMENT;
+ goto read;
+ }
+ growing_lock = 1;
+ goto read;
+ #endif
+
+ #if 0
+ /* -- VERSION 1 -- */
+ writeout_fragments(GFP_KERNEL, 1, SHRINKAGE_PRIORITY);
+ growing_lock = 1;
+ last_accessed = INACTIVE_FRAGMENT;
+ goto read;
+ #endif
+ }
+ last_accessed = ACTIVE_FRAGMENT;
+ goto read;
+ }
+
+ /* inactive fragment */
+ growing_lock = 0;
+ last_accessed = INACTIVE_FRAGMENT;
+
+ read:
+ /* If only dirty fragments should be returned (when reading
+ * the page in order to write it), free the fragment and return. A
+ * scenario where that happens is when writing a page: there
+ * is no point decompressing a clean fragment. */
+ if (CompFragmentDirty(fragment) && state == DIRTY_PAGE) {
+ drop_fragment(fragment);
+ goto out_unlock;
+ }
+
spin_unlock(&comp_cache_lock);
lock_page(fragment->comp_page->page);
! decompress_fragment_to_page(fragment, page);
! comp_cache_update_read_stats(fragment);
spin_lock(&comp_cache_lock);
*************** *** 138,145 ****
UnlockPage(fragment->comp_page->page);
- put_fragment(fragment);
if (!drop_fragment(fragment))
PageSetCompCache(page);
out_unlock:
spin_unlock(&comp_cache_lock);
--- 201,209 ----
UnlockPage(fragment->comp_page->page);
+ put_fragment(fragment);
if (!drop_fragment(fragment))
PageSetCompCache(page);
+
out_unlock:
spin_unlock(&comp_cache_lock);
*************** *** 258,262 ****
old_page = find_or_add_page(page, mapping, fragment->index);
if (!old_page) {
! decompress_fragment(fragment, page);
goto free_and_dirty;
}
--- 322,326 ----
old_page = find_or_add_page(page, mapping, fragment->index);
if (!old_page) {
! decompress_fragment_to_page(fragment, page);
goto free_and_dirty;
}
*************** *** 267,275 ****
UnlockPage(fragment->comp_page->page);
spin_lock(&comp_cache_lock);
put_fragment(fragment); /* effectively free it */
if (drop_fragment(fragment))
! PageClearCompCache(page);
spin_unlock(&comp_cache_lock);
__set_page_dirty(page);
--- 331,340 ----
UnlockPage(fragment->comp_page->page);
spin_lock(&comp_cache_lock);
+
put_fragment(fragment); /* effectively free it */
if (drop_fragment(fragment))
!
PageClearCompCache(page);
spin_unlock(&comp_cache_lock);
__set_page_dirty(page);
Index: swapout.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/mm/comp_cache/swapout.c,v
retrieving revision 1.70
retrieving revision 1.71
diff -C2 -r1.70 -r1.71
*** swapout.c 7 Aug 2002 18:30:58 -0000 1.70
--- swapout.c 10 Sep 2002 16:43:25 -0000 1.71
*************** *** 2,6 ****
* /mm/comp_cache/swapout.c
*
! * Time-stamp: <2002-08-07 11:04:43 rcastro>
*
* Linux Virtual Memory Compressed Cache
--- 2,6 ----
* /mm/comp_cache/swapout.c
*
! * Time-stamp: <2002-09-10 10:37:03 rcastro>
*
* Linux Virtual Memory Compressed Cache
*************** *** 18,25 ****
#include <linux/pagemap.h>
- extern kmem_cache_t * fragment_cachep;
-
/* swap buffer */
struct list_head swp_free_buffer_head, swp_used_buffer_head;
static spinlock_t swap_buffer_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED;
--- 18,30 ----
#include <linux/pagemap.h>
/* swap buffer */
struct list_head swp_free_buffer_head, swp_used_buffer_head;
+ #ifdef CONFIG_COMP_SWAP
+ static struct {
+ unsigned short size;
+ unsigned short offset;
+ unsigned long index;
+ } grouped_fragments[255];
+ #endif
static spinlock_t swap_buffer_lock __cacheline_aligned = SPIN_LOCK_UNLOCKED;
*************** *** 160,163 ****
--- 165,170 ----
if (TryLockPage(buffer_page))
BUG();
+ if (page_count(buffer_page) != 1)
+ BUG();
list_del(swp_buffer_lh);
*************** *** 183,195 ****
}
! extern unsigned short decompress_fragment(struct comp_cache_fragment *, struct page *);
! extern inline void comp_cache_update_written_stats(unsigned short, struct comp_cache_fragment *);
static struct swp_buffer *
! decompress_to_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask)
{
struct page * buffer_page;
struct swp_buffer * swp_buffer;
! unsigned short algorithm;
!
swp_buffer = find_free_swp_buffer(fragment, gfp_mask);
if (!swp_buffer)
--- 190,320 ----
}
! extern inline void comp_cache_update_written_stats(struct comp_cache_fragment *);
! extern void set_swap_compressed(swp_entry_t, int);
!
! #ifdef CONFIG_COMP_SWAP
! static void
! group_fragments(struct comp_cache_fragment * fragment, struct page * page)
{
! struct list_head * fragment_lh;
! struct comp_cache_fragment * aux_fragment;
! swp_entry_t entry, real_entry;
! unsigned short counter, next_offset, metadata_size;
!
! entry.val = fragment->index;
! real_entry = get_real_swap_page(entry);
!
! if (!real_entry.val)
! BUG();
!
! /***
! * Metadata: for each swap block
! *
! * Header:
! * 4 bytes ->
! * number of fragments (unsigned short)
! * offset for fragment metadata (unsigned short)
! *
! * Tail:
! * - for every fragment -
! * 8 bytes ->
! * compressed size (unsigned short)
! * offset (unsigned short)
! * index (unsigned long)
! */
! metadata_size = 8;
! next_offset = 4;
!
! /* cannot store the fragment in compressed format */
! if (next_offset + fragment->compressed_size + metadata_size > PAGE_SIZE) {
! set_swap_compressed(entry, 0);
! decompress_fragment_to_page(fragment, page);
! return;
! }
!
! /* prepare header with data from the 1st fragment */
! set_swap_compressed(entry, 1);
!
! counter = 1;
! grouped_fragments[0].size = fragment->compressed_size;
! grouped_fragments[0].offset = next_offset;
! grouped_fragments[0].index = fragment->index;
!
! memcpy(page_address(page) + next_offset, page_address(fragment->comp_page->page) + fragment->offset, fragment->compressed_size);
!
! next_offset += fragment->compressed_size;
!
! /* try to group other fragments */
!
for_each_fragment(fragment_lh, fragment->comp_page) {
! aux_fragment = list_entry(fragment_lh, struct comp_cache_fragment, list);
!
! if (aux_fragment == fragment)
! continue;
! if (!PageSwapCache(aux_fragment))
! continue;
! if (!CompFragmentDirty(aux_fragment))
! continue;
! entry.val = aux_fragment->index;
! if (vswap_address(entry))
! continue;
! if (next_offset + aux_fragment->compressed_size + metadata_size + 8 > PAGE_SIZE)
! continue;
!
! CompFragmentClearDirty(aux_fragment);
! num_clean_fragments++;
!
! set_swap_compressed(entry, 1);
! map_swap(entry, real_entry);
!
! grouped_fragments[counter].size = aux_fragment->compressed_size;
! grouped_fragments[counter].offset = next_offset;
! grouped_fragments[counter].index = aux_fragment->index;
!
! memcpy(page_address(page) + next_offset, page_address(fragment->comp_page->page) + aux_fragment->offset, aux_fragment->compressed_size);
!
! next_offset += aux_fragment->compressed_size;
! metadata_size += 8;
! counter++;
! }
!
! memcpy(page_address(page), &counter, 2);
! memcpy(page_address(page) + 2, &next_offset, 2);
!
! while (counter--) {
! memcpy(page_address(page) + next_offset, &(grouped_fragments[counter].size), 2);
! next_offset += 2;
! memcpy(page_address(page) + next_offset, &(grouped_fragments[counter].offset), 2);
! next_offset += 2;
! memcpy(page_address(page) + next_offset, &(grouped_fragments[counter].index), 4);
! next_offset += 4;
! }
! }
! #else
! static void
! group_fragments(struct comp_cache_fragment * fragment, struct page * page)
! {
! swp_entry_t entry;
!
! /* uncompressed fragments or fragments that cannot have the
! * metadata written together must be decompressed */
! entry.val = fragment->index;
! if (fragment->compressed_size + sizeof(unsigned short) > PAGE_SIZE) {
! set_swap_compressed(entry, 0);
! decompress_fragment_to_page(fragment, page);
! return;
! }
!
! /* copy the compressed data and metadata */
! memcpy(page_address(page), &(fragment->compressed_size), sizeof(unsigned short));
! memcpy(page_address(page) + sizeof(unsigned short), page_address(fragment->comp_page->page) + fragment->offset, fragment->compressed_size);
! set_swap_compressed(entry, 1);
! }
! #endif
static struct swp_buffer *
! prepare_swp_buffer(struct comp_cache_fragment * fragment, unsigned int gfp_mask)
{
struct page * buffer_page;
struct swp_buffer * swp_buffer;
! swp_entry_t entry;
!
swp_buffer = find_free_swp_buffer(fragment, gfp_mask);
if (!swp_buffer)
*************** *** 202,208 ****
lock_page(fragment->comp_page->page);
! algorithm = decompress_fragment(fragment, buffer_page);
UnlockPage(fragment->comp_page->page);
! comp_cache_update_written_stats(algorithm, fragment);
buffer_page->flags &= (1 << PG_locked);
--- 327,341 ----
lock_page(fragment->comp_page->page);
!
! /* pages from the page cache need to have their data decompressed */
! if (!PageSwapCache(fragment)) {
! decompress_fragment_to_page(fragment, buffer_page);
! goto out_unlock;
! }
!
! group_fragments(fragment, buffer_page);
! out_unlock:
UnlockPage(f... [truncated message content] |
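For reference, the swap-block layout that group_fragments() writes under CONFIG_COMP_SWAP and get_comp_data() parses can be illustrated with a minimal userspace sketch. This is not code from the patch: pack_one() and find_entry() are hypothetical helpers, only the single-fragment case is shown, and a 4096-byte block with a 4-byte on-disk index field is assumed (the patch memcpy()s 4 bytes of an unsigned long).

/*
 * Sketch of the CONFIG_COMP_SWAP block format described in the diff:
 * a 4-byte header (fragment count, tail offset), compressed data in
 * the middle, and one 8-byte tail entry per fragment (size, offset,
 * index). Compile with any C compiler; runs entirely in userspace.
 */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096

/* Write the header, one fragment's compressed data and one tail entry. */
static void pack_one(unsigned char *block, const unsigned char *data,
                     unsigned short size, unsigned long index)
{
        unsigned short counter = 1;
        unsigned short next_offset = 4;     /* data starts after the header */
        unsigned short meta_offset;
        unsigned int idx32 = (unsigned int) index;   /* 4 bytes, as in the diff */

        memcpy(block + next_offset, data, size);
        meta_offset = next_offset + size;   /* tail follows the data */

        /* header: number of fragments, then offset of the tail metadata */
        memcpy(block, &counter, 2);
        memcpy(block + 2, &meta_offset, 2);

        /* tail entry: compressed size, offset, index (8 bytes total) */
        memcpy(block + meta_offset, &size, 2);
        memcpy(block + meta_offset + 2, &next_offset, 2);
        memcpy(block + meta_offset + 4, &idx32, 4);
}

/* Scan the tail entries for a given index, as get_comp_data() does. */
static int find_entry(const unsigned char *block, unsigned long index,
                      unsigned short *size, unsigned short *offset)
{
        unsigned short counter, meta_offset;
        unsigned int idx32;

        memcpy(&counter, block, 2);
        memcpy(&meta_offset, block + 2, 2);

        while (counter--) {
                memcpy(&idx32, block + meta_offset + 4, 4);
                if (idx32 == (unsigned int) index) {
                        memcpy(size, block + meta_offset, 2);
                        memcpy(offset, block + meta_offset + 2, 2);
                        return 0;
                }
                meta_offset += 8;
        }
        return -1;
}

int main(void)
{
        unsigned char block[BLOCK_SIZE] = { 0 };
        unsigned short size, offset;

        pack_one(block, (const unsigned char *) "compressed bytes", 16, 42);
        if (!find_entry(block, 42, &size, &offset))
                printf("index 42: %u bytes at offset %u\n", size, offset);
        return 0;
}

Writing the tail after the compressed data lets the 4-byte header stay at a fixed position while fragments of varying compressed size fill the middle of the block, which is why group_fragments() can keep appending fragments until data plus metadata would exceed PAGE_SIZE.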