linuxcompressed-checkins Mailing List for Linux Compressed Cache (Page 2)
Status: Beta
Brought to you by: nitin_sf
Messages per month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2001 | | | | | | | | | | 2 | | 31 |
| 2002 | 28 | 50 | 29 | 6 | 33 | 36 | 60 | 7 | 12 | | 13 | 3 |
| 2003 | | | | | 9 | | | | | | | |
| 2006 | 13 | 4 | 4 | 1 | | 22 | | | | | | |
From: Nitin G. <nit...@us...> - 2006-03-10 13:46:26
Update of /cvsroot/linuxcompressed/linux26/include/linux
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv24062/include/linux

Modified Files: page-flags.h
Log Message:
Ver-2.6.16-rc5: Replace page cache radix tree entry with chunk head and restore on lookup

Index: page-flags.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux26/include/linux/page-flags.h,v
retrieving revision 1.4
retrieving revision 1.5
diff -C2 -r1.4 -r1.5
*** page-flags.h  25 Feb 2006 17:19:49 -0000  1.4
--- page-flags.h  10 Mar 2006 13:46:21 -0000  1.5
***************
*** 77,82 ****
  #define PG_uncached 19 /* Page has been mapped as uncached */
! #define PG_will_compress 20 /* Page will be compressed asap */
! #define PG_compressed 21 /* To mark 'chunk_head's */
  /*
--- 77,82 ----
  #define PG_uncached 19 /* Page has been mapped as uncached */
! #define PG_will_compress 20
! #define PG_compressed 21
  /*
From: Nitin G. <nit...@us...> - 2006-03-10 13:46:25
Update of /cvsroot/linuxcompressed/linux26/mm
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv24062/mm

Modified Files: vmscan.c
Added Files: filemap.c
Log Message:
Ver-2.6.16-rc5: Replace page cache radix tree entry with chunk head and restore on lookup

Index: vmscan.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux26/mm/vmscan.c,v
retrieving revision 1.7
retrieving revision 1.8
diff -C2 -r1.7 -r1.8
*** vmscan.c  24 Feb 2006 21:58:33 -0000  1.7
--- vmscan.c  10 Mar 2006 13:46:21 -0000  1.8
***************
*** 312,316 ****
  if (PageDirty(page)) return 0;
! printk("<1>Some page to be added to ccache.\n");
  return 1;
  }
--- 312,316 ----
  if (PageDirty(page)) return 0;
! //printk("<1>Some page to be added to ccache.\n");
  return 1;
  }
***************
*** 321,326 ****
--- 321,329 ----
  struct page *newpage;
  struct address_space *mapping;
+ struct page *ch; // dummy struct page as 'chunk head'
+ //struct chunk_head *ch;
  mapping=page_mapping(page);
+ /*
  printk ("<1> Add to CCache called.\n");
  printk ("<1> orig page->count=%d\n", atomic_read(&page->_count));
***************
*** 328,334 ****
--- 331,339 ----
  printk ("<1> orig page->flags=%u\n", page->flags);
  printk ("<1> orig page->private=%u\n", page->private);
+ */
  newpage=alloc_pages(GFP_KERNEL, 0);
  if (!newpage) goto out;
+ /*
  printk ("<1> NEW page->count=%d\n", atomic_read(&newpage->_count));
  printk ("<1> NEW page->mapcount=%d\n", atomic_read(&newpage->_mapcount));
***************
*** 336,368 ****
  printk ("<1> NEW page->private=%u\n", newpage->private);
  // newly allocated pages give page_count(page) as 1
! //memcpy(page_address(page), page_address(newpage), PAGE_SIZE);
  copy_highpage(newpage, page);
  write_lock_irq(&mapping->tree_lock);
! if (page_count(page) != 2) goto out;
  radix_tree_delete(&mapping->page_tree, page->index);
! radix_tree_insert(&mapping->page_tree, page->index, newpage);
!
! //newpage->mapping=mapping;
! //newpage->flags=page->flags;
! //*newpage=*page;
! //newpage->private=page->private;
! //newpage->lru=page->lru;
  newpage->mapping=mapping;
  newpage->index=page->index;
! // top-eight bits of flags are used for page->zone
! // so only touch lower 24 bits
! //newpage->flags=page->flags;
! // copy lower 24 bits of page->flags to newpage->flags
  flags=(page->flags << 8) >> 8;
  newpage->flags |= flags;
! printk ("<1> Now NEW page->flags=%u\n", newpage->flags);
! printk ("<1> -----=====-----====------====---\n");
  ClearPageReclaim(newpage);
  ClearPageWillCompress(newpage);
! SetPageCompressed(newpage);
  set_page_count(newpage, 1); // only pagecache ref
--- 341,379 ----
  printk ("<1> NEW page->private=%u\n", newpage->private);
  // newly allocated pages give page_count(page) as 1
+ */
! //memcpy(page_address(newpage), page_address(page), PAGE_SIZE);
  copy_highpage(newpage, page);
+ ch = kmalloc(sizeof(struct page), GFP_KERNEL);
+ if (!ch) goto out;
+ write_lock_irq(&mapping->tree_lock);
! if (page_count(page) != 2) goto out_locked;
  radix_tree_delete(&mapping->page_tree, page->index);
! //radix_tree_insert(&mapping->page_tree, page->index, newpage);
! radix_tree_insert(&mapping->page_tree, page->index, ch);
! set_page_private(ch, (unsigned long)newpage);
! SetPageCompressed(ch);
!
! ch->mapping=mapping;
! ch->index=page->index;
! newpage->mapping=mapping;
  newpage->index=page->index;
! /*
!  * top-eight bits of flags are used for page->zone
!  * so only touch lower 24 bits
!  */
! /* copy lower 24 bits of page->flags to newpage->flags */
  flags=(page->flags << 8) >> 8;
  newpage->flags |= flags;
! //printk ("<1> Now NEW page->flags=%u\n", newpage->flags);
! //printk ("<1> -----=====-----====------====---\n");
  ClearPageReclaim(newpage);
  ClearPageWillCompress(newpage);
! //SetPageCompressed(newpage);
  set_page_count(newpage, 1); // only pagecache ref
***************
*** 370,373 ****
--- 381,385 ----
  // as is done in remove_mapping(),
  // before write_unlock_irq page->mapping is set to NULL
+ // so that it can be freed (or else it'll be bad_page())
  page->mapping=NULL;
***************
*** 378,386 ****
  unlock_page(page);
  return 0;
  //__free_page(newpage);
! out:
  write_unlock_irq(&mapping->tree_lock);
  printk("<1>***** PAGE COUNT NOT 2 -- IT IS %d ******\n", page_count(page));
  if (newpage) __free_page(newpage);
--- 390,400 ----
  unlock_page(page);
+ printk("<1> Page added to ccache\n");
  return 0;
  //__free_page(newpage);
! out_locked:
  write_unlock_irq(&mapping->tree_lock);
+ out:
  printk("<1>***** PAGE COUNT NOT 2 -- IT IS %d ******\n", page_count(page));
  if (newpage) __free_page(newpage);
***************
*** 426,434 ****
  }
- //if (!PageDirty(page)) return PAGE_CLEAN;
-
  if (!is_page_cache_freeable(page)) return PAGE_KEEP;
-
  if (!mapping) {
  /*
--- 440,445 ----
***************
*** 539,546 ****
  list_del(&page->lru);
- if (PageCompressed(page)) {
- printk("<1> ##### Compressed page here for page-out!!! #####\n");
- }
-
  if (TestSetPageLocked(page)) goto keep;
--- 550,553 ----
***************
*** 627,631 ****
  * ahead and try to reclaim the page.
  */
-
  if (TestSetPageLocked(page)) goto keep;
--- 634,637 ----
***************
*** 2025,2029 ****
  cond_resched();
! p->flags |= PF_MEMALLOC;
  reclaim_state.reclaimed_slab = 0;
  p->reclaim_state = &reclaim_state;
--- 2031,2040 ----
  cond_resched();
! /*
!  * We need to be able to allocate from the reserves for RECLAIM_SWAP
!  * and we also need to be able to write out pages for RECLAIM_WRITE
!  * and RECLAIM_SWAP.
!  */
! p->flags |= PF_MEMALLOC | PF_SWAPWRITE;
  reclaim_state.reclaimed_slab = 0;
  p->reclaim_state = &reclaim_state;
***************
*** 2049,2057 ****
  */
  shrink_slab(sc.nr_scanned, gfp_mask, order);
- sc.nr_reclaimed = 1; /* Avoid getting the off node timeout */
  }
  p->reclaim_state = NULL;
! current->flags &= ~PF_MEMALLOC;
  if (sc.nr_reclaimed == 0)
--- 2060,2067 ----
  */
  shrink_slab(sc.nr_scanned, gfp_mask, order);
  }
  p->reclaim_state = NULL;
! current->flags &= ~(PF_MEMALLOC | PF_SWAPWRITE);
  if (sc.nr_reclaimed == 0)
From: Nitin G. <nit...@us...> - 2006-02-25 17:19:56
Update of /cvsroot/linuxcompressed/linux26/include/linux
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv25737/linux26/include/linux

Modified Files: page-flags.h
Log Message:
For 2.6.16-rc4: simple page copy and replace in radix tree working

Index: page-flags.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux26/include/linux/page-flags.h,v
retrieving revision 1.3
retrieving revision 1.4
diff -C2 -r1.3 -r1.4
*** page-flags.h  25 Feb 2006 16:36:27 -0000  1.3
--- page-flags.h  25 Feb 2006 17:19:49 -0000  1.4
***************
*** 77,82 ****
  #define PG_uncached 19 /* Page has been mapped as uncached */
! #define PG_will_compress 20
! #define PG_compressed 21
  /*
--- 77,82 ----
  #define PG_uncached 19 /* Page has been mapped as uncached */
! #define PG_will_compress 20 /* Page will be compressed asap */
! #define PG_compressed 21 /* To mark 'chunk_head's */
  /*
From: Nitin G. <nit...@us...> - 2006-02-25 16:36:29
Update of /cvsroot/linuxcompressed/linux26/include/linux
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv5442/include/linux

Added Files: page-flags.h
Log Message:
apply to 2.6.16-rc4: simple page copy and replace in radix tree working
From: Nitin G. <nit...@us...> - 2006-02-24 21:58:36
Update of /cvsroot/linuxcompressed/linux26/include/linux
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv16827/include/linux

Removed Files: ccache.h
Log Message:
for 2.6.16-rc4: simple copy to another page and replace in radix tree working

--- ccache.h DELETED ---
From: Nitin G. <nit...@us...> - 2006-02-24 21:58:36
Update of /cvsroot/linuxcompressed/linux26/mm
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv16827/mm

Modified Files: vmscan.c
Removed Files: filemap.c
Log Message:
for 2.6.16-rc4: simple copy to another page and replace in radix tree working

Index: vmscan.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux26/mm/vmscan.c,v
retrieving revision 1.6
retrieving revision 1.7
diff -C2 -r1.6 -r1.7
*** vmscan.c  23 Jan 2006 20:50:50 -0000  1.6
--- vmscan.c  24 Feb 2006 21:58:33 -0000  1.7
***************
*** 40,45 ****
  #include <linux/swapops.h>
- #include <linux/ccache.h> // for struct chunk_head
-
  /* possible outcome of pageout() */
  typedef enum {
--- 40,43 ----
***************
*** 54,67 ****
  } pageout_t;

[...1453 lines suppressed...]

!  * how many pages were freed in the zone. So we just
!  * shake the slab and then go offnode for a single allocation.
!  *
!  * shrink_slab will free memory on all zones and may take
!  * a long time.
!  */
! shrink_slab(sc.nr_scanned, gfp_mask, order);
! sc.nr_reclaimed = 1; /* Avoid getting the off node timeout */
! }
! p->reclaim_state = NULL;
! current->flags &= ~PF_MEMALLOC;
! if (sc.nr_reclaimed == 0)
! zone->last_unsuccessful_zone_reclaim = jiffies;
! return sc.nr_reclaimed >= nr_pages;
  }
+ #endif
+
--- filemap.c DELETED ---
From: Nitin G. <nit...@us...> - 2006-01-23 20:51:00
Update of /cvsroot/linuxcompressed/linux26/mm
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv19529/mm

Modified Files: filemap.c vmscan.c
Log Message:
Initial (incomplete) implementation (only page cache pages). No compress/decompress - just copy. Compiles cleanly - don't run

Index: filemap.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux26/mm/filemap.c,v
retrieving revision 1.5
retrieving revision 1.6
diff -C2 -r1.5 -r1.6
*** filemap.c  23 Jan 2006 20:45:09 -0000  1.5
--- filemap.c  23 Jan 2006 20:50:50 -0000  1.6
***************
*** 38,41 ****
--- 38,43 ----
  #include <asm/mman.h>
+ #include <linux/ccache.h> // for struct chunk_head
+
  static ssize_t
  generic_file_direct_IO(int rw, struct kiocb *iocb, const struct iovec *iov,
***************
*** 547,551 ****
  unsigned long offset)
  {
! struct page *page;
  read_lock_irq(&mapping->tree_lock);
--- 549,554 ----
  unsigned long offset)
  {
! struct page *page, *newpage;
! struct chunk_head *ch;
  read_lock_irq(&mapping->tree_lock);
***************
*** 557,560 ****
--- 560,622 ----
  read_unlock_irq(&mapping->tree_lock);
  lock_page(page);
+
+ /*
+  * If PageWillCompress is set after radix tree lookup
+  * and after acquiring lock on page, then page must
+  * be in ccache now and thus 'page' now points to
+  * original uncompressed page instead of chunk_head.
+  * So, now invalidate(free) page's entry in ccache
+  * and make radix node point back to this page.
+  */
+ if (PageWillCompress(page)) {
+ /*
+  * This happens if control reaches here during page compression
+  * and before it could be replaced in page cache in place of
+  * original uncompressed page.
+  */
+ write_lock_irq(&mapping->tree_lock);
+ newpage = radix_tree_lookup(&mapping->page_tree, page->index);
+ radix_tree_delete(&mapping->page_tree, page->index);
+ radix_tree_insert(&mapping->page_tree, page->index, page);
+ write_unlock_irq(&mapping->tree_lock);
+
+ ClearPageWillCompress(page);
+
+ ch = (struct chunk_head *)(page_private(newpage));
+ __free_page( ch->chunk );
+ kfree( (struct chunk_head *)(page_private(newpage)) );
+ kfree( newpage );
+ }
+
+ /*
+  * In this case, the 'page' points to 'chunk_head'
+  * instead of an original uncompressed page.
+  */
+ if (PageCompressed(page)) {
+ // get_ccache_page(page);
+ ch = (struct chunk_head *)(page_private(page));
+ newpage = ch->chunk;
+
+ // Restore all fields we backed up in add_to_ccache()
+ *newpage = *page;
+ set_page_private(newpage, ch->orig_private);
+
+ /*
+  * Replace this 'chunk_head' in page cache back to
+  * original uncompressed page (stored in 'chunk')
+  */
+ write_lock_irq(&mapping->tree_lock);
+ radix_tree_delete(&mapping->page_tree, page->index);
+ radix_tree_insert(&mapping->page_tree, page->index, newpage);
+ write_unlock_irq(&mapping->tree_lock);
+
+ // Free metadata info for this page from ccache
+ kfree( (struct chunk_head *)(page_private(page)) );
+ kfree( page ); // this 'page' points to chunk_head
+
+ // Now 'page' points to this just uncompressed page
+ page = newpage;
+ }
+
  read_lock_irq(&mapping->tree_lock);

Index: vmscan.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux26/mm/vmscan.c,v
retrieving revision 1.5
retrieving revision 1.6
diff -C2 -r1.5 -r1.6
*** vmscan.c  23 Jan 2006 20:45:09 -0000  1.5
--- vmscan.c  23 Jan 2006 20:50:50 -0000  1.6
***************
*** 40,43 ****
--- 40,45 ----
  #include <linux/swapops.h>
+ #include <linux/ccache.h> // for struct chunk_head
+
  /* possible outcome of pageout() */
  typedef enum {
***************
*** 52,55 ****
--- 54,67 ----
  } pageout_t;
+
+ /*
+ struct chunk_head {
+ // actually there will be no single chunk;
+ // instead it will have chunk list
+ struct page *orig_page, *chunk;
+ // chunk_head is stored in private field so backup here
+ unsigned long orig_private;
+ };
+ */
+
  struct scan_control {
  /* Ask refill_inactive_zone, or shrink_cache to scan this many pages */
***************
*** 127,130 ****
--- 139,210 ----
  /*
+  * Heuristic to determine if page should go to ccache go here.
+  * Assume page is locked
+  */
+ static int should_add_to_ccache(struct page *page)
+ {
+ if (PagePrivate(page) || PageSwapCache(page))
+ return 0;
+ SetPageWillCompress(page);
+ return 1;
+ }
+
+ /*
+  * Compress the page and add it to ccache.
+  * newpage is container for info to locate page in ccache.
+  */
+ static int add_to_ccache(struct page *page)
+ {
+ struct address_space *mapping;
+ struct chunk_head *ch;
+ struct page *newpage=0, *chunk=0;
+
+ ch = kmalloc(sizeof(struct chunk_head), GFP_KERNEL);
+ if (!ch) goto out;
+
+ newpage = kmalloc(sizeof(struct page), GFP_KERNEL);
+ if (!newpage) goto out;
+
+ chunk = alloc_page(GFP_KERNEL);
+ if (!chunk) goto out;
+
+ ch->orig_page = page;
+ ch->orig_private = page_private(page);
+ ch->chunk = chunk;
+
+ *newpage = *page; // backup all fields in original struct page
+ set_page_private(newpage, (unsigned long)ch);
+ ClearPageWillCompress(newpage);
+
+ // compress(page, dest);
+ memcpy(page_address(chunk), page_address(page), PAGE_SIZE);
+
+ SetPageCompressed(newpage);
+
+ /*
+  * Add newpage to ccache.
+  * Replace entry corres. to 'page' in radix tree to 'newpage'.
+  * Implement a real replace - not remove then add.
+  */
+ mapping = page->mapping;
+ write_lock_irq(&mapping->tree_lock);
+ radix_tree_delete(&mapping->page_tree, page->index);
+ radix_tree_insert(&mapping->page_tree, page->index, newpage);
+ write_unlock_irq(&mapping->tree_lock);
+
+ unlock_page(page);
+ ClearPageWriteback(page);
+ ClearPageReclaim(page);
+
+ return 0; // success
+ out:
+ if (ch) kfree(ch);
+ if (newpage) kfree(newpage);
+ ClearPageWillCompress(page);
+ return 1;
+ }
+
+ /*
  * From 0 .. 100. Higher means more swappy.
  */
***************
*** 317,320 ****
--- 397,401 ----
  static pageout_t pageout(struct page *page, struct address_space *mapping)
  {
+ int error = 0;
  /*
  * If the page is dirty, only perform writeback if that write
***************
*** 350,353 ****
--- 431,441 ----
  return PAGE_KEEP;
  }
+
+ if (PageWillCompress(page)) {
+ SetPageReclaim(page);
+ error = add_to_ccache(page);
+ }
+ if (!error) return PAGE_SUCCESS;
+
  if (mapping->a_ops->writepage == NULL)
  return PAGE_ACTIVATE;
***************
*** 392,396 ****
  int pgactivate = 0;
  int reclaimed = 0;
!
  cond_resched();
--- 480,484 ----
  int pgactivate = 0;
  int reclaimed = 0;
! int ret = 0;
  cond_resched();
***************
*** 420,423 ****
--- 508,518 ----
  goto keep_locked;
+
+ if (PageWillCompress(page)) {
+ ClearPageWillCompress(page);
+ __put_page(page);
+ if (page_count(page) == 1)
+ goto free_it;
+ }
+
  referenced = page_referenced(page, 1);
  /* In active use or really unfreeable? Activate it. */
***************
*** 457,461 ****
  }
! if (PageDirty(page)) {
  if (referenced)
  goto keep_locked;
--- 552,558 ----
  }
! ret = should_add_to_ccache(page);
!
! if (PageDirty(page) || PageWillCompress(page)) {
  if (referenced)
  goto keep_locked;
***************
*** 472,476 ****
  goto activate_locked;
  case PAGE_SUCCESS:
! if (PageWriteback(page) || PageDirty(page))
  goto keep;
  /*
--- 569,588 ----
  goto activate_locked;
  case PAGE_SUCCESS:
! //if (PageWriteback(page) || PageDirty(page))
! /* This can also occur in case of async add_to_ccache() */
! if (PageWriteback(page))
! goto keep;
! /*
!  * Writeback is complete so free it now.
!  * Page has been unlocked in add_to_ccache()
!  */
! if (PageWillCompress(page)) {
! ClearPageWillCompress(page);
! __put_page(page);
! if (page_count(page) == 1)
! goto free_it_unlocked; // i.e. it's already unlocked
! }
! if (PageDirty(page))
  goto keep;
  /*
***************
*** 549,552 ****
--- 661,665 ----
  free_it:
  unlock_page(page);
+ free_it_unlocked:
  reclaimed++;
  if (!pagevec_add(&freed_pvec, page))
From: Nitin G. <nit...@us...> - 2006-01-23 20:50:58
Update of /cvsroot/linuxcompressed/linux26/include/linux
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv19529/include/linux

Added Files: ccache.h
Log Message:
Initial (incomplete) implementation (only page cache pages). No compress/decompress - just copy. Compiles cleanly - don't run
From: Nitin G. <nit...@us...> - 2006-01-23 20:45:19
Update of /cvsroot/linuxcompressed/linux26/mm
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv16913/mm

Added Files: filemap.c vmscan.c
Log Message:
vanilla 2.6.15 files
From: Nitin G. <nit...@us...> - 2006-01-23 20:42:40
Update of /cvsroot/linuxcompressed/linux26/include/linux
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv15956/include/linux

Removed Files: ccache.h page-flags.h
Log Message:
removed files again for proper diffs

--- ccache.h DELETED ---
--- page-flags.h DELETED ---
From: Nitin G. <nit...@us...> - 2006-01-23 20:42:40
Update of /cvsroot/linuxcompressed/linux26/mm
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv15956/mm

Removed Files: filemap.c vmscan.c
Log Message:
removed files again for proper diffs

--- filemap.c DELETED ---
--- vmscan.c DELETED ---
From: Nitin G. <nit...@us...> - 2006-01-23 20:30:39
Update of /cvsroot/linuxcompressed/linux26/mm
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv11398/mm

Added Files: filemap.c vmscan.c
Log Message:
Initial (incomplete) implementation - only page cache now. Compiles cleanly - don't run
From: Nitin G. <nit...@us...> - 2006-01-23 20:30:39
|
Update of /cvsroot/linuxcompressed/linux26/include/linux In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv11398/include/linux Added Files: ccache.h page-flags.h Log Message: Initial (incomplete) implementation-only page cache now. Compiles cleanly - don't run --- NEW FILE --- #ifndef _LINUX_CCACHE_H #define _LINUX_CCACHE_H #include <linux/mm.h> // for struct page struct chunk_head { // actually there will be no single chunk; // instead it will have chunk list struct page *orig_page, *chunk; // chunk_head is stored in private field so backup here unsigned long orig_private; }; #endif /* _LINUX_CCACHE_H */ --- NEW FILE --- /* * Macros for manipulating and testing page->flags */ #ifndef PAGE_FLAGS_H #define PAGE_FLAGS_H #include <linux/percpu.h> #include <linux/cache.h> #include <asm/pgtable.h> /* * Various page->flags bits: * * PG_reserved is set for special pages, which can never be swapped out. Some * of them might not even exist (eg empty_bad_page)... * * The PG_private bitflag is set if page->private contains a valid value. * * During disk I/O, PG_locked is used. This bit is set before I/O and * reset when I/O completes. page_waitqueue(page) is a wait queue of all tasks * waiting for the I/O on this page to complete. * * PG_uptodate tells whether the page's contents is valid. When a read * completes, the page becomes uptodate, unless a disk I/O error happened. * * For choosing which pages to swap out, inode pages carry a PG_referenced bit, * which is set any time the system accesses that page through the (mapping, * index) hash table. This referenced bit, together with the referenced bit * in the page tables, is used to manipulate page->age and move the page across * the active, inactive_dirty and inactive_clean lists. * * Note that the referenced bit, the page->lru list_head and the active, * inactive_dirty and inactive_clean lists are protected by the * zone->lru_lock, and *NOT* by the usual PG_locked bit! 
* * PG_error is set to indicate that an I/O error occurred on this page. * * PG_arch_1 is an architecture specific page state bit. The generic code * guarantees that this bit is cleared for a page when it first is entered into * the page cache. * * PG_highmem pages are not permanently mapped into the kernel virtual address * space, they need to be kmapped separately for doing IO on the pages. The * struct page (these bits with information) are always mapped into kernel * address space... */ /* * Don't use the *_dontuse flags. Use the macros. Otherwise you'll break * locked- and dirty-page accounting. The top eight bits of page->flags are * used for page->zone, so putting flag bits there doesn't work. */ #define PG_locked 0 /* Page is locked. Don't touch. */ #define PG_error 1 #define PG_referenced 2 #define PG_uptodate 3 #define PG_dirty 4 #define PG_lru 5 #define PG_active 6 #define PG_slab 7 /* slab debug (Suparna wants this) */ #define PG_checked 8 /* kill me in 2.5.<early>. */ #define PG_arch_1 9 #define PG_reserved 10 #define PG_private 11 /* Has something at ->private */ #define PG_writeback 12 /* Page is under writeback */ #define PG_nosave 13 /* Used for system suspend/resume */ #define PG_compound 14 /* Part of a compound page */ #define PG_swapcache 15 /* Swap page: swp_entry_t in private */ #define PG_mappedtodisk 16 /* Has blocks allocated on-disk */ #define PG_reclaim 17 /* To be reclaimed asap */ #define PG_nosave_free 18 /* Free, should not be written */ #define PG_uncached 19 /* Page has been mapped as uncached */ #define PG_will_compress 20 /* Page will be compresssed soon */ #define PG_compressed 21 /* Page is in compressed cache */ /* * Global page accounting. One instance per CPU. Only unsigned longs are * allowed. 
*/ struct page_state { unsigned long nr_dirty; /* Dirty writeable pages */ unsigned long nr_writeback; /* Pages under writeback */ unsigned long nr_unstable; /* NFS unstable pages */ unsigned long nr_page_table_pages;/* Pages used for pagetables */ unsigned long nr_mapped; /* mapped into pagetables */ unsigned long nr_slab; /* In slab */ #define GET_PAGE_STATE_LAST nr_slab /* * The below are zeroed by get_page_state(). Use get_full_page_state() * to add up all these. */ unsigned long pgpgin; /* Disk reads */ unsigned long pgpgout; /* Disk writes */ unsigned long pswpin; /* swap reads */ unsigned long pswpout; /* swap writes */ unsigned long pgalloc_high; /* page allocations */ unsigned long pgalloc_normal; unsigned long pgalloc_dma; unsigned long pgfree; /* page freeings */ unsigned long pgactivate; /* pages moved inactive->active */ unsigned long pgdeactivate; /* pages moved active->inactive */ unsigned long pgfault; /* faults (major+minor) */ unsigned long pgmajfault; /* faults (major only) */ unsigned long pgrefill_high; /* inspected in refill_inactive_zone */ unsigned long pgrefill_normal; unsigned long pgrefill_dma; unsigned long pgsteal_high; /* total highmem pages reclaimed */ unsigned long pgsteal_normal; unsigned long pgsteal_dma; unsigned long pgscan_kswapd_high;/* total highmem pages scanned */ unsigned long pgscan_kswapd_normal; unsigned long pgscan_kswapd_dma; unsigned long pgscan_direct_high;/* total highmem pages scanned */ unsigned long pgscan_direct_normal; unsigned long pgscan_direct_dma; unsigned long pginodesteal; /* pages reclaimed via inode freeing */ unsigned long slabs_scanned; /* slab objects scanned */ unsigned long kswapd_steal; /* pages reclaimed by kswapd */ unsigned long kswapd_inodesteal;/* reclaimed via kswapd inode freeing */ unsigned long pageoutrun; /* kswapd's calls to page reclaim */ unsigned long allocstall; /* direct reclaim calls */ unsigned long pgrotated; /* pages rotated to tail of the LRU */ unsigned long nr_bounce; /* 
pages for bounce buffers */ }; extern void get_page_state(struct page_state *ret); extern void get_page_state_node(struct page_state *ret, int node); extern void get_full_page_state(struct page_state *ret); extern unsigned long __read_page_state(unsigned long offset); extern void __mod_page_state(unsigned long offset, unsigned long delta); #define read_page_state(member) \ __read_page_state(offsetof(struct page_state, member)) #define mod_page_state(member, delta) \ __mod_page_state(offsetof(struct page_state, member), (delta)) #define inc_page_state(member) mod_page_state(member, 1UL) #define dec_page_state(member) mod_page_state(member, 0UL - 1) #define add_page_state(member,delta) mod_page_state(member, (delta)) #define sub_page_state(member,delta) mod_page_state(member, 0UL - (delta)) #define mod_page_state_zone(zone, member, delta) \ do { \ unsigned offset; \ if (is_highmem(zone)) \ offset = offsetof(struct page_state, member##_high); \ else if (is_normal(zone)) \ offset = offsetof(struct page_state, member##_normal); \ else \ offset = offsetof(struct page_state, member##_dma); \ __mod_page_state(offset, (delta)); \ } while (0) /* * Manipulation of page state flags */ #define PageWillCompress(page) \ test_bit(PG_will_compress, &(page)->flags) #define SetPageWillCompress(page) \ set_bit(PG_will_compress, &(page)->flags) #define ClearPageWillCompress(page) \ clear_bit(PG_will_compress, &(page)->flags) #define PageCompressed(page) \ test_bit(PG_compressed, &(page)->flags) #define SetPageCompressed(page) \ set_bit(PG_compressed, &(page)->flags) #define ClearPageCompressed(page) \ clear_bit(PG_compressed, &(page)->flags) #define PageLocked(page) \ test_bit(PG_locked, &(page)->flags) #define SetPageLocked(page) \ set_bit(PG_locked, &(page)->flags) #define TestSetPageLocked(page) \ test_and_set_bit(PG_locked, &(page)->flags) #define ClearPageLocked(page) \ clear_bit(PG_locked, &(page)->flags) #define TestClearPageLocked(page) \ test_and_clear_bit(PG_locked, 
&(page)->flags) #define PageError(page) test_bit(PG_error, &(page)->flags) #define SetPageError(page) set_bit(PG_error, &(page)->flags) #define ClearPageError(page) clear_bit(PG_error, &(page)->flags) #define PageReferenced(page) test_bit(PG_referenced, &(page)->flags) #define SetPageReferenced(page) set_bit(PG_referenced, &(page)->flags) #define ClearPageReferenced(page) clear_bit(PG_referenced, &(page)->flags) #define TestClearPageReferenced(page) test_and_clear_bit(PG_referenced, &(page)->flags) #define PageUptodate(page) test_bit(PG_uptodate, &(page)->flags) #ifndef SetPageUptodate #define SetPageUptodate(page) set_bit(PG_uptodate, &(page)->flags) #endif #define ClearPageUptodate(page) clear_bit(PG_uptodate, &(page)->flags) #define PageDirty(page) test_bit(PG_dirty, &(page)->flags) #define SetPageDirty(page) set_bit(PG_dirty, &(page)->flags) #define TestSetPageDirty(page) test_and_set_bit(PG_dirty, &(page)->flags) #define ClearPageDirty(page) clear_bit(PG_dirty, &(page)->flags) #define __ClearPageDirty(page) __clear_bit(PG_dirty, &(page)->flags) #define TestClearPageDirty(page) test_and_clear_bit(PG_dirty, &(page)->flags) #define SetPageLRU(page) set_bit(PG_lru, &(page)->flags) #define PageLRU(page) test_bit(PG_lru, &(page)->flags) #define TestSetPageLRU(page) test_and_set_bit(PG_lru, &(page)->flags) #define TestClearPageLRU(page) test_and_clear_bit(PG_lru, &(page)->flags) #define PageActive(page) test_bit(PG_active, &(page)->flags) #define SetPageActive(page) set_bit(PG_active, &(page)->flags) #define ClearPageActive(page) clear_bit(PG_active, &(page)->flags) #define TestClearPageActive(page) test_and_clear_bit(PG_active, &(page)->flags) #define TestSetPageActive(page) test_and_set_bit(PG_active, &(page)->flags) #define PageSlab(page) test_bit(PG_slab, &(page)->flags) #define SetPageSlab(page) set_bit(PG_slab, &(page)->flags) #define ClearPageSlab(page) clear_bit(PG_slab, &(page)->flags) #define TestClearPageSlab(page) test_and_clear_bit(PG_slab, 
&(page)->flags) #define TestSetPageSlab(page) test_and_set_bit(PG_slab, &(page)->flags) #ifdef CONFIG_HIGHMEM #define PageHighMem(page) is_highmem(page_zone(page)) #else #define PageHighMem(page) 0 /* needed to optimize away at compile time */ #endif #define PageChecked(page) test_bit(PG_checked, &(page)->flags) #define SetPageChecked(page) set_bit(PG_checked, &(page)->flags) #define ClearPageChecked(page) clear_bit(PG_checked, &(page)->flags) #define PageReserved(page) test_bit(PG_reserved, &(page)->flags) #define SetPageReserved(page) set_bit(PG_reserved, &(page)->flags) #define ClearPageReserved(page) clear_bit(PG_reserved, &(page)->flags) #define __ClearPageReserved(page) __clear_bit(PG_reserved, &(page)->flags) #define SetPagePrivate(page) set_bit(PG_private, &(page)->flags) #define ClearPagePrivate(page) clear_bit(PG_private, &(page)->flags) #define PagePrivate(page) test_bit(PG_private, &(page)->flags) #define __SetPagePrivate(page) __set_bit(PG_private, &(page)->flags) #define __ClearPagePrivate(page) __clear_bit(PG_private, &(page)->flags) #define PageWriteback(page) test_bit(PG_writeback, &(page)->flags) #define SetPageWriteback(page) \ do { \ if (!test_and_set_bit(PG_writeback, \ &(page)->flags)) \ inc_page_state(nr_writeback); \ } while (0) #define TestSetPageWriteback(page) \ ({ \ int ret; \ ret = test_and_set_bit(PG_writeback, \ &(page)->flags); \ if (!ret) \ inc_page_state(nr_writeback); \ ret; \ }) #define ClearPageWriteback(page) \ do { \ if (test_and_clear_bit(PG_writeback, \ &(page)->flags)) \ dec_page_state(nr_writeback); \ } while (0) #define TestClearPageWriteback(page) \ ({ \ int ret; \ ret = test_and_clear_bit(PG_writeback, \ &(page)->flags); \ if (ret) \ dec_page_state(nr_writeback); \ ret; \ }) #define PageNosave(page) test_bit(PG_nosave, &(page)->flags) #define SetPageNosave(page) set_bit(PG_nosave, &(page)->flags) #define TestSetPageNosave(page) test_and_set_bit(PG_nosave, &(page)->flags) #define ClearPageNosave(page) 
clear_bit(PG_nosave, &(page)->flags) #define TestClearPageNosave(page) test_and_clear_bit(PG_nosave, &(page)->flags) #define PageNosaveFree(page) test_bit(PG_nosave_free, &(page)->flags) #define SetPageNosaveFree(page) set_bit(PG_nosave_free, &(page)->flags) #define ClearPageNosaveFree(page) clear_bit(PG_nosave_free, &(page)->flags) #define PageMappedToDisk(page) test_bit(PG_mappedtodisk, &(page)->flags) #define SetPageMappedToDisk(page) set_bit(PG_mappedtodisk, &(page)->flags) #define ClearPageMappedToDisk(page) clear_bit(PG_mappedtodisk, &(page)->flags) #define PageReclaim(page) test_bit(PG_reclaim, &(page)->flags) #define SetPageReclaim(page) set_bit(PG_reclaim, &(page)->flags) #define ClearPageReclaim(page) clear_bit(PG_reclaim, &(page)->flags) #define TestClearPageReclaim(page) test_and_clear_bit(PG_reclaim, &(page)->flags) #define PageCompound(page) test_bit(PG_compound, &(page)->flags) #define SetPageCompound(page) set_bit(PG_compound, &(page)->flags) #define ClearPageCompound(page) clear_bit(PG_compound, &(page)->flags) #ifdef CONFIG_SWAP #define PageSwapCache(page) test_bit(PG_swapcache, &(page)->flags) #define SetPageSwapCache(page) set_bit(PG_swapcache, &(page)->flags) #define ClearPageSwapCache(page) clear_bit(PG_swapcache, &(page)->flags) #else #define PageSwapCache(page) 0 #endif #define PageUncached(page) test_bit(PG_uncached, &(page)->flags) #define SetPageUncached(page) set_bit(PG_uncached, &(page)->flags) #define ClearPageUncached(page) clear_bit(PG_uncached, &(page)->flags) struct page; /* forward declaration */ int test_clear_page_dirty(struct page *page); int test_clear_page_writeback(struct page *page); int test_set_page_writeback(struct page *page); static inline void clear_page_dirty(struct page *page) { test_clear_page_dirty(page); } static inline void set_page_writeback(struct page *page) { test_set_page_writeback(page); } #endif /* PAGE_FLAGS_H */ |
From: Nitin G. <nit...@us...> - 2006-01-23 20:22:06
Update of /cvsroot/linuxcompressed/linux26/include/linux
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv7853/linux
Log Message:
Directory /cvsroot/linuxcompressed/linux26/include/linux added to the repository
From: Nitin G. <nit...@us...> - 2006-01-23 20:21:33
Update of /cvsroot/linuxcompressed/linux26/include
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv7767/include
Log Message:
Directory /cvsroot/linuxcompressed/linux26/include added to the repository
Update of /cvsroot/linuxcompressed/linux26/mm
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv6356
Removed Files:
	bootmem.c fadvise.c filemap.c filemap.h filemap_xip.c fremap.c
	highmem.c hugetlb.c internal.h madvise.c memory.c memory_hotplug.c
	mempolicy.c mempool.c mincore.c mlock.c mmap.c mprotect.c mremap.c
	msync.c nommu.c oom_kill.c page-writeback.c page_alloc.c page_io.c
	pdflush.c prio_tree.c readahead.c rmap.c shmem.c slab.c sparse.c
	swap.c swap_state.c swapfile.c thrash.c tiny-shmem.c truncate.c
	vmalloc.c vmscan.c
Log Message:
Removing unnecessary files
--- bootmem.c DELETED ---
--- fadvise.c DELETED ---
--- filemap.c DELETED ---
--- filemap.h DELETED ---
--- filemap_xip.c DELETED ---
--- fremap.c DELETED ---
--- highmem.c DELETED ---
--- hugetlb.c DELETED ---
--- internal.h DELETED ---
--- madvise.c DELETED ---
--- memory.c DELETED ---
--- memory_hotplug.c DELETED ---
--- mempolicy.c DELETED ---
--- mempool.c DELETED ---
--- mincore.c DELETED ---
--- mlock.c DELETED ---
--- mmap.c DELETED ---
--- mprotect.c DELETED ---
--- mremap.c DELETED ---
--- msync.c DELETED ---
--- nommu.c DELETED ---
--- oom_kill.c DELETED ---
--- page-writeback.c DELETED ---
--- page_alloc.c DELETED ---
--- page_io.c DELETED ---
--- pdflush.c DELETED ---
--- prio_tree.c DELETED ---
--- readahead.c DELETED ---
--- rmap.c DELETED ---
--- shmem.c DELETED ---
--- slab.c DELETED ---
--- sparse.c DELETED ---
--- swap.c DELETED ---
--- swap_state.c DELETED ---
--- swapfile.c DELETED ---
--- thrash.c DELETED ---
--- tiny-shmem.c DELETED ---
--- truncate.c DELETED ---
--- vmalloc.c DELETED ---
--- vmscan.c DELETED ---
From: Nitin G. <nit...@us...> - 2006-01-22 08:30:46
Update of /cvsroot/linuxcompressed/linux26/mm
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv4673/mm
Log Message:
Directory /cvsroot/linuxcompressed/linux26/mm added to the repository
From: Nitin G. <nit...@us...> - 2006-01-22 08:25:48
Update of /cvsroot/linuxcompressed/linux26
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv3069
Log Message:
Initial import of linux 2.6.15
Status:
Vendor Tag:	nitin
Release Tags:	start
No conflicts created by this import
***** Bogus filespec: -
***** Bogus filespec: Imported
***** Bogus filespec: sources
From: Nitin G. <nit...@us...> - 2006-01-22 08:07:30
Update of /cvsroot/linuxcompressed/linuxcompressed
In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv28223
Log Message:
Initial import for linux 2.6.15
Status:
Vendor Tag:	nitin
Release Tags:	start
No conflicts created by this import
***** Bogus filespec: -
***** Bogus filespec: Imported
***** Bogus filespec: sources
From: Rodrigo S. de C. <rc...@us...> - 2003-05-19 01:39:20
Update of /cvsroot/linuxcompressed/linux/include/linux
In directory sc8-pr-cvs1:/tmp/cvs-serv25395/include/linux
Modified Files:
fs.h mm.h swap.h sysctl.h
Log Message:
o Port code to 2.4.20
Bug fix (?)
o Changes checks in vswap.c to avoid oopses. It will BUG()
instead. Some of the checks were done after the value had been
accessed.
Note
o Virtual swap addresses are temporarily disabled, due to debugging
sessions related to the use of swap files instead of swap partitions.
Index: fs.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/fs.h,v
retrieving revision 1.4
retrieving revision 1.5
diff -C2 -r1.4 -r1.5
*** fs.h 27 Feb 2002 19:58:51 -0000 1.4
--- fs.h 19 May 2003 01:38:46 -0000 1.5
***************
*** 207,210 ****
--- 207,211 ----
extern void inode_init(unsigned long);
extern void mnt_init(unsigned long);
+ extern void files_init(unsigned long mempages);
/* bh state bits */
***************
*** 218,222 ****
BH_Async, /* 1 if the buffer is under end_buffer_io_async I/O */
BH_Wait_IO, /* 1 if we should write out this buffer */
! BH_launder, /* 1 if we should throttle on this buffer */
BH_JBD, /* 1 if it has an attached journal_head */
--- 219,223 ----
BH_Async, /* 1 if the buffer is under end_buffer_io_async I/O */
BH_Wait_IO, /* 1 if we should write out this buffer */
! BH_Launder, /* 1 if we can throttle on this buffer */
BH_JBD, /* 1 if it has an attached journal_head */
***************
*** 226,229 ****
--- 227,232 ----
};
+ #define MAX_BUF_PER_PAGE (PAGE_CACHE_SIZE / 512)
+
/*
* Try to keep the most commonly used fields in single cache lines (16
***************
*** 280,283 ****
--- 283,287 ----
#define buffer_new(bh) __buffer_state(bh,New)
#define buffer_async(bh) __buffer_state(bh,Async)
+ #define buffer_launder(bh) __buffer_state(bh,Launder)
#define bh_offset(bh) ((unsigned long)(bh)->b_data & ~PAGE_MASK)
***************
*** 556,559 ****
--- 560,571 ----
#define MAX_NON_LFS ((1UL<<31) - 1)
+ /* Page cache limit. The filesystems should put that into their s_maxbytes
+ limits, otherwise bad things can happen in VM. */
+ #if BITS_PER_LONG==32
+ #define MAX_LFS_FILESIZE (((u64)PAGE_CACHE_SIZE << (BITS_PER_LONG-1))-1)
+ #elif BITS_PER_LONG==64
+ #define MAX_LFS_FILESIZE 0x7fffffffffffffff
+ #endif
+
#define FL_POSIX 1
#define FL_FLOCK 2
***************
*** 590,593 ****
--- 602,606 ----
struct fasync_struct * fl_fasync; /* for lease break notifications */
+ unsigned long fl_break_time; /* for nonblocking lease breaks */
union {
***************
*** 859,862 ****
--- 872,879 ----
int (*setattr) (struct dentry *, struct iattr *);
int (*getattr) (struct dentry *, struct iattr *);
+ int (*setxattr) (struct dentry *, const char *, void *, size_t, int);
+ ssize_t (*getxattr) (struct dentry *, const char *, void *, size_t);
+ ssize_t (*listxattr) (struct dentry *, char *, size_t);
+ int (*removexattr) (struct dentry *, const char *);
};
***************
*** 1045,1049 ****
static inline int get_lease(struct inode *inode, unsigned int mode)
{
! if (inode->i_flock && (inode->i_flock->fl_flags & FL_LEASE))
return __get_lease(inode, mode);
return 0;
--- 1062,1066 ----
static inline int get_lease(struct inode *inode, unsigned int mode)
{
! if (inode->i_flock)
return __get_lease(inode, mode);
return 0;
***************
*** 1108,1112 ****
extern int fs_may_remount_ro(struct super_block *);
! extern int try_to_free_buffers(struct page *, unsigned int);
extern void refile_buffer(struct buffer_head * buf);
extern void create_empty_buffers(struct page *, kdev_t, unsigned long);
--- 1125,1129 ----
extern int fs_may_remount_ro(struct super_block *);
! extern int FASTCALL(try_to_free_buffers(struct page *, unsigned int));
extern void refile_buffer(struct buffer_head * buf);
extern void create_empty_buffers(struct page *, kdev_t, unsigned long);
***************
*** 1159,1165 ****
extern void FASTCALL(__mark_buffer_dirty(struct buffer_head *bh));
extern void FASTCALL(mark_buffer_dirty(struct buffer_head *bh));
extern void FASTCALL(buffer_insert_inode_data_queue(struct buffer_head *, struct inode *));
! #define atomic_set_buffer_dirty(bh) test_and_set_bit(BH_Dirty, &(bh)->b_state)
static inline void mark_buffer_async(struct buffer_head * bh, int on)
--- 1176,1186 ----
extern void FASTCALL(__mark_buffer_dirty(struct buffer_head *bh));
extern void FASTCALL(mark_buffer_dirty(struct buffer_head *bh));
+ extern void FASTCALL(buffer_insert_inode_queue(struct buffer_head *, struct inode *));
extern void FASTCALL(buffer_insert_inode_data_queue(struct buffer_head *, struct inode *));
! static inline int atomic_set_buffer_dirty(struct buffer_head *bh)
! {
! return test_and_set_bit(BH_Dirty, &bh->b_state);
! }
static inline void mark_buffer_async(struct buffer_head * bh, int on)
***************
*** 1186,1190 ****
}
- extern void buffer_insert_inode_queue(struct buffer_head *, struct inode *);
static inline void mark_buffer_dirty_inode(struct buffer_head *bh, struct inode *inode)
{
--- 1207,1210 ----
***************
*** 1214,1221 ****
extern int fsync_no_super(kdev_t);
extern void sync_inodes_sb(struct super_block *);
! extern int osync_inode_buffers(struct inode *);
! extern int osync_inode_data_buffers(struct inode *);
! extern int fsync_inode_buffers(struct inode *);
! extern int fsync_inode_data_buffers(struct inode *);
extern int inode_has_buffers(struct inode *);
extern int filemap_fdatasync(struct address_space *);
--- 1234,1246 ----
extern int fsync_no_super(kdev_t);
extern void sync_inodes_sb(struct super_block *);
! extern int fsync_buffers_list(struct list_head *);
! static inline int fsync_inode_buffers(struct inode *inode)
! {
! return fsync_buffers_list(&inode->i_dirty_buffers);
! }
! static inline int fsync_inode_data_buffers(struct inode *inode)
! {
! return fsync_buffers_list(&inode->i_dirty_data_buffers);
! }
extern int inode_has_buffers(struct inode *);
extern int filemap_fdatasync(struct address_space *);
***************
*** 1313,1316 ****
--- 1338,1342 ----
extern int FASTCALL(path_init(const char *, unsigned, struct nameidata *));
extern int FASTCALL(path_walk(const char *, struct nameidata *));
+ extern int FASTCALL(path_lookup(const char *, unsigned, struct nameidata *));
extern int FASTCALL(link_path_walk(const char *, struct nameidata *));
extern void path_release(struct nameidata *);
***************
*** 1371,1374 ****
--- 1397,1402 ----
}
extern int set_blocksize(kdev_t, int);
+ extern int sb_set_blocksize(struct super_block *, int);
+ extern int sb_min_blocksize(struct super_block *, int);
extern struct buffer_head * bread(kdev_t, int, int);
static inline struct buffer_head * sb_bread(struct super_block *sb, int block)
***************
*** 1433,1437 ****
--- 1461,1470 ----
extern int vfs_readdir(struct file *, filldir_t, void *);
+ extern int dcache_dir_open(struct inode *, struct file *);
+ extern int dcache_dir_close(struct inode *, struct file *);
+ extern loff_t dcache_dir_lseek(struct file *, loff_t, int);
+ extern int dcache_dir_fsync(struct file *, struct dentry *, int);
extern int dcache_readdir(struct file *, void *, filldir_t);
+ extern struct file_operations dcache_dir_ops;
extern struct file_system_type *get_fs_type(const char *name);
***************
*** 1454,1462 ****
extern void show_buffers(void);
- extern void mount_root(void);
#ifdef CONFIG_BLK_DEV_INITRD
extern unsigned int real_root_dev;
- extern int change_root(kdev_t, const char *);
#endif
--- 1487,1493 ----
Index: mm.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/mm.h,v
retrieving revision 1.18
retrieving revision 1.19
diff -C2 -r1.18 -r1.19
*** mm.h 10 Sep 2002 16:43:04 -0000 1.18
--- mm.h 19 May 2003 01:38:47 -0000 1.19
***************
*** 16,19 ****
--- 16,20 ----
extern unsigned long max_mapnr;
extern unsigned long num_physpages;
+ extern unsigned long num_mappedpages;
extern void * high_memory;
extern int page_cluster;
***************
*** 160,169 ****
struct list_head lru; /* Pageout list, eg. active_list;
protected by pagemap_lru_lock !! */
- wait_queue_head_t wait; /* Page locked? Stand in line... */
struct page **pprev_hash; /* Complement to *next_hash. */
struct buffer_head * buffers; /* Buffer maps us to a disk block. */
void *virtual; /* Kernel virtual address (NULL if
not kmapped, ie. highmem) */
! struct zone_struct *zone; /* Memory zone we are in. */
} mem_map_t;
--- 161,181 ----
struct list_head lru; /* Pageout list, eg. active_list;
protected by pagemap_lru_lock !! */
struct page **pprev_hash; /* Complement to *next_hash. */
struct buffer_head * buffers; /* Buffer maps us to a disk block. */
+
+ /*
+ * On machines where all RAM is mapped into kernel address space,
+ * we can simply calculate the virtual address. On machines with
+ * highmem some memory is mapped into kernel virtual memory
+ * dynamically, so we need a place to store that address.
+ * Note that this field could be 16 bits on x86 ... ;)
+ *
+ * Architectures with slow multiplication can define
+ * WANT_PAGE_VIRTUAL in asm/page.h
+ */
+ #if defined(CONFIG_HIGHMEM) || defined(WANT_PAGE_VIRTUAL)
void *virtual; /* Kernel virtual address (NULL if
not kmapped, ie. highmem) */
! #endif /* CONFIG_HIGMEM || WANT_PAGE_VIRTUAL */
} mem_map_t;
***************
*** 240,244 ****
* to swap space and (later) to be read back into memory.
* During disk I/O, PG_locked is used. This bit is set before I/O
! * and reset when I/O completes. page->wait is a wait queue of all
* tasks waiting for the I/O on this page to complete.
* PG_uptodate tells whether the page's contents is valid.
--- 252,256 ----
* to swap space and (later) to be read back into memory.
* During disk I/O, PG_locked is used. This bit is set before I/O
! * and reset when I/O completes. page_waitqueue(page) is a wait queue of all
* tasks waiting for the I/O on this page to complete.
* PG_uptodate tells whether the page's contents is valid.
***************
*** 306,309 ****
--- 318,375 ----
#define ClearPageLaunder(page) clear_bit(PG_launder, &(page)->flags)
+ /*
+ * The zone field is never updated after free_area_init_core()
+ * sets it, so none of the operations on it need to be atomic.
+ */
+ #define NODE_SHIFT 4
+ #define ZONE_SHIFT (BITS_PER_LONG - 8)
+
+ struct zone_struct;
+ extern struct zone_struct *zone_table[];
+
+ static inline zone_t *page_zone(struct page *page)
+ {
+ return zone_table[page->flags >> ZONE_SHIFT];
+ }
+
+ static inline void set_page_zone(struct page *page, unsigned long zone_num)
+ {
+ page->flags &= ~(~0UL << ZONE_SHIFT);
+ page->flags |= zone_num << ZONE_SHIFT;
+ }
+
+ /*
+ * In order to avoid #ifdefs within C code itself, we define
+ * set_page_address to a noop for non-highmem machines, where
+ * the field isn't useful.
+ * The same is true for page_address() in arch-dependent code.
+ */
+ #if defined(CONFIG_HIGHMEM) || defined(WANT_PAGE_VIRTUAL)
+
+ #define set_page_address(page, address) \
+ do { \
+ (page)->virtual = (address); \
+ } while(0)
+
+ #else /* CONFIG_HIGHMEM || WANT_PAGE_VIRTUAL */
+ #define set_page_address(page, address) do { } while(0)
+ #endif /* CONFIG_HIGHMEM || WANT_PAGE_VIRTUAL */
+
+ /*
+ * Permanent address of a page. Obviously must never be
+ * called on a highmem page.
+ */
+ #if defined(CONFIG_HIGHMEM) || defined(WANT_PAGE_VIRTUAL)
+
+ #define page_address(page) ((page)->virtual)
+
+ #else /* CONFIG_HIGHMEM || WANT_PAGE_VIRTUAL */
+
+ #define page_address(page) \
+ __va( (((page) - page_zone(page)->zone_mem_map) << PAGE_SHIFT) \
+ + page_zone(page)->zone_start_paddr)
+
+ #endif /* CONFIG_HIGHMEM || WANT_PAGE_VIRTUAL */
+
extern void FASTCALL(set_page_dirty(struct page *));
#ifdef CONFIG_COMP_CACHE
***************
*** 627,630 ****
--- 693,698 ----
extern struct vm_area_struct *find_extend_vma(struct mm_struct *mm, unsigned long addr);
+
+ extern struct page * vmalloc_to_page(void *addr);
#endif /* __KERNEL__ */
Index: swap.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/swap.h,v
retrieving revision 1.19
retrieving revision 1.20
diff -C2 -r1.19 -r1.20
*** swap.h 29 Nov 2002 21:23:02 -0000 1.19
--- swap.h 19 May 2003 01:38:47 -0000 1.20
***************
*** 122,129 ****
extern int nr_active_pages;
extern int nr_inactive_pages;
- extern atomic_t nr_async_pages;
extern atomic_t page_cache_size;
extern atomic_t buffermem_pages;
! extern spinlock_t pagecache_lock;
extern void __remove_inode_page(struct page *);
--- 122,131 ----
extern int nr_active_pages;
extern int nr_inactive_pages;
extern atomic_t page_cache_size;
extern atomic_t buffermem_pages;
!
! extern spinlock_cacheline_t pagecache_lock_cacheline;
! #define pagecache_lock (pagecache_lock_cacheline.lock)
!
extern void __remove_inode_page(struct page *);
***************
*** 146,150 ****
/* linux/mm/vmscan.c */
extern wait_queue_head_t kswapd_wait;
! extern int FASTCALL(try_to_free_pages(zone_t *, unsigned int, unsigned int));
/* linux/mm/page_io.c */
--- 148,153 ----
/* linux/mm/vmscan.c */
extern wait_queue_head_t kswapd_wait;
! extern int FASTCALL(try_to_free_pages_zone(zone_t *, unsigned int));
! extern int FASTCALL(try_to_free_pages(unsigned int));
/* linux/mm/page_io.c */
***************
*** 207,211 ****
#endif
! extern spinlock_t pagemap_lru_lock;
extern void FASTCALL(mark_page_accessed(struct page *));
--- 210,215 ----
#endif
! extern spinlock_cacheline_t pagemap_lru_lock_cacheline;
! #define pagemap_lru_lock pagemap_lru_lock_cacheline.lock
extern void FASTCALL(mark_page_accessed(struct page *));
Index: sysctl.h
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/include/linux/sysctl.h,v
retrieving revision 1.6
retrieving revision 1.7
diff -C2 -r1.6 -r1.7
*** sysctl.h 22 Nov 2002 16:01:34 -0000 1.6
--- sysctl.h 19 May 2003 01:38:47 -0000 1.7
***************
*** 141,147 ****
VM_PGT_CACHE=9, /* struct: Set page table cache parameters */
VM_PAGE_CLUSTER=10, /* int: set number of pages to swap together */
! VM_MIN_READAHEAD=12, /* Min file readahead */
! VM_MAX_READAHEAD=13, /* Max file readahead */
! VM_CTL_COMP_CACHE=14
};
--- 141,148 ----
VM_PGT_CACHE=9, /* struct: Set page table cache parameters */
VM_PAGE_CLUSTER=10, /* int: set number of pages to swap together */
! VM_MAX_MAP_COUNT=11, /* int: Maximum number of active map areas */
! VM_MIN_READAHEAD=12, /* Min file readahead */
! VM_MAX_READAHEAD=13, /* Max file readahead */
! VM_CTL_COMP_CACHE=14
};
***************
*** 206,210 ****
NET_CORE_NO_CONG=14,
NET_CORE_LO_CONG=15,
! NET_CORE_MOD_CONG=16
};
--- 207,212 ----
NET_CORE_NO_CONG=14,
NET_CORE_LO_CONG=15,
! NET_CORE_MOD_CONG=16,
! NET_CORE_DEV_WEIGHT=17
};
***************
*** 291,295 ****
NET_IPV4_NONLOCAL_BIND=88,
NET_IPV4_ICMP_RATELIMIT=89,
! NET_IPV4_ICMP_RATEMASK=90
};
--- 293,298 ----
NET_IPV4_NONLOCAL_BIND=88,
NET_IPV4_ICMP_RATELIMIT=89,
! NET_IPV4_ICMP_RATEMASK=90,
! NET_TCP_TW_REUSE=91
};
***************
*** 336,340 ****
NET_IPV4_CONF_LOG_MARTIANS=11,
NET_IPV4_CONF_TAG=12,
! NET_IPV4_CONF_ARPFILTER=13
};
--- 339,344 ----
NET_IPV4_CONF_LOG_MARTIANS=11,
NET_IPV4_CONF_TAG=12,
! NET_IPV4_CONF_ARPFILTER=13,
! NET_IPV4_CONF_MEDIUM_ID=14,
};
From: Rodrigo S. de C. <rc...@us...> - 2003-05-19 01:39:20
Update of /cvsroot/linuxcompressed/linux/fs/proc
In directory sc8-pr-cvs1:/tmp/cvs-serv25395/fs/proc
Modified Files:
proc_misc.c
Log Message:
o Port code to 2.4.20
Bug fix (?)
o Changes checks in vswap.c to avoid oopses. It will BUG()
instead. Some of the checks were done after the value had been
accessed.
Note
o Virtual swap addresses are temporarily disabled, due to debugging
sessions related to the use of swap files instead of swap partitions.
Index: proc_misc.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/fs/proc/proc_misc.c,v
retrieving revision 1.8
retrieving revision 1.9
diff -C2 -r1.8 -r1.9
*** proc_misc.c 10 Sep 2002 16:43:00 -0000 1.8
--- proc_misc.c 19 May 2003 01:38:46 -0000 1.9
***************
*** 43,47 ****
#include <asm/io.h>
-
#define LOAD_INT(x) ((x) >> FSHIFT)
#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100)
--- 43,46 ----
***************
*** 52,60 ****
* wrappers, but this needs further analysis wrt potential overflows.
*/
#ifdef CONFIG_MODULES
extern int get_module_list(char *);
#endif
extern int get_device_list(char *);
- extern int get_partition_list(char *, char **, off_t, int);
extern int get_filesystem_list(char *);
extern int get_exec_domain_list(char *);
--- 51,60 ----
* wrappers, but this needs further analysis wrt potential overflows.
*/
+ extern int get_hardware_list(char *);
+ extern int get_stram_list(char *);
#ifdef CONFIG_MODULES
extern int get_module_list(char *);
#endif
extern int get_device_list(char *);
extern int get_filesystem_list(char *);
extern int get_exec_domain_list(char *);
***************
*** 67,70 ****
--- 67,91 ----
#endif
+ void proc_sprintf(char *page, off_t *off, int *lenp, const char *format, ...)
+ {
+ int len = *lenp;
+ va_list args;
+
+ /* try to only print whole lines */
+ if (len > PAGE_SIZE-512)
+ return;
+
+ va_start(args, format);
+ len += vsnprintf(page + len, PAGE_SIZE-len, format, args);
+ va_end(args);
+
+ if (len <= *off) {
+ *off -= len;
+ len = 0;
+ }
+
+ *lenp = len;
+ }
+
static int proc_calc_metrics(char *page, char **start, off_t off,
int count, int *eof, int len)
***************
*** 227,230 ****
--- 248,281 ----
};
+ #ifdef CONFIG_PROC_HARDWARE
+ static int hardware_read_proc(char *page, char **start, off_t off,
+ int count, int *eof, void *data)
+ {
+ int len = get_hardware_list(page);
+ return proc_calc_metrics(page, start, off, count, eof, len);
+ }
+ #endif
+
+ #ifdef CONFIG_STRAM_PROC
+ static int stram_read_proc(char *page, char **start, off_t off,
+ int count, int *eof, void *data)
+ {
+ int len = get_stram_list(page);
+ return proc_calc_metrics(page, start, off, count, eof, len);
+ }
+ #endif
+
+ extern struct seq_operations partitions_op;
+ static int partitions_open(struct inode *inode, struct file *file)
+ {
+ return seq_open(file, &partitions_op);
+ }
+ static struct file_operations proc_partitions_operations = {
+ open: partitions_open,
+ read: seq_read,
+ llseek: seq_lseek,
+ release: seq_release,
+ };
+
#ifdef CONFIG_MODULES
static int modules_read_proc(char *page, char **start, off_t off,
***************
*** 248,255 ****
#endif
static int kstat_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
! int i, len;
extern unsigned long total_forks;
unsigned long jif = jiffies;
--- 299,320 ----
#endif
+ extern struct seq_operations slabinfo_op;
+ extern ssize_t slabinfo_write(struct file *, const char *, size_t, loff_t *);
+ static int slabinfo_open(struct inode *inode, struct file *file)
+ {
+ return seq_open(file, &slabinfo_op);
+ }
+ static struct file_operations proc_slabinfo_operations = {
+ open: slabinfo_open,
+ read: seq_read,
+ write: slabinfo_write,
+ llseek: seq_lseek,
+ release: seq_release,
+ };
+
static int kstat_read_proc(char *page, char **start, off_t off,
int count, int *eof, void *data)
{
! int i, len = 0;
extern unsigned long total_forks;
unsigned long jif = jiffies;
***************
*** 269,276 ****
}
! len = sprintf(page, "cpu %u %u %u %lu\n", user, nice, system,
jif * smp_num_cpus - (user + nice + system));
for (i = 0 ; i < smp_num_cpus; i++)
! len += sprintf(page + len, "cpu%d %u %u %u %lu\n",
i,
kstat.per_cpu_user[cpu_logical_map(i)],
--- 334,343 ----
}
! proc_sprintf(page, &off, &len,
! "cpu %u %u %u %lu\n", user, nice, system,
jif * smp_num_cpus - (user + nice + system));
for (i = 0 ; i < smp_num_cpus; i++)
! proc_sprintf(page, &off, &len,
! "cpu%d %u %u %u %lu\n",
i,
kstat.per_cpu_user[cpu_logical_map(i)],
***************
*** 280,284 ****
+ kstat.per_cpu_nice[cpu_logical_map(i)] \
+ kstat.per_cpu_system[cpu_logical_map(i)]));
! len += sprintf(page + len,
"page %u %u\n"
"swap %u %u\n"
--- 347,351 ----
+ kstat.per_cpu_nice[cpu_logical_map(i)] \
+ kstat.per_cpu_system[cpu_logical_map(i)]));
! proc_sprintf(page, &off, &len,
"page %u %u\n"
"swap %u %u\n"
***************
*** 292,299 ****
#if !defined(CONFIG_ARCH_S390)
for (i = 0 ; i < NR_IRQS ; i++)
! len += sprintf(page + len, " %u", kstat_irqs(i));
#endif
! len += sprintf(page + len, "\ndisk_io: ");
for (major = 0; major < DK_MAX_MAJOR; major++) {
--- 359,367 ----
#if !defined(CONFIG_ARCH_S390)
for (i = 0 ; i < NR_IRQS ; i++)
! proc_sprintf(page, &off, &len,
! " %u", kstat_irqs(i));
#endif
! proc_sprintf(page, &off, &len, "\ndisk_io: ");
for (major = 0; major < DK_MAX_MAJOR; major++) {
***************
*** 303,307 ****
kstat.dk_drive_wblk[major][disk];
if (active)
! len += sprintf(page + len,
"(%u,%u):(%u,%u,%u,%u,%u) ",
major, disk,
--- 371,375 ----
kstat.dk_drive_wblk[major][disk];
if (active)
! proc_sprintf(page, &off, &len,
"(%u,%u):(%u,%u,%u,%u,%u) ",
major, disk,
***************
*** 315,319 ****
}
! len += sprintf(page + len,
"\nctxt %u\n"
"btime %lu\n"
--- 383,387 ----
}
! proc_sprintf(page, &off, &len,
"\nctxt %u\n"
"btime %lu\n"
***************
*** 333,344 ****
}
- static int partitions_read_proc(char *page, char **start, off_t off,
- int count, int *eof, void *data)
- {
- int len = get_partition_list(page, start, off, count);
- if (len < count) *eof = 1;
- return len;
- }
-
#if !defined(CONFIG_ARCH_S390)
static int interrupts_read_proc(char *page, char **start, off_t off,
--- 401,404 ----
***************
*** 377,382 ****
int len;
! len = sprintf(page, "%s\n", saved_command_line);
! len = strlen(page);
return proc_calc_metrics(page, start, off, count, eof, len);
}
--- 437,441 ----
int len;
! len = snprintf(page, count, "%s\n", saved_command_line);
return proc_calc_metrics(page, start, off, count, eof, len);
}
***************
*** 486,501 ****
};
- extern struct seq_operations mounts_op;
- static int mounts_open(struct inode *inode, struct file *file)
- {
- return seq_open(file, &mounts_op);
- }
- static struct file_operations proc_mounts_operations = {
- open: mounts_open,
- read: seq_read,
- llseek: seq_lseek,
- release: seq_release,
- };
-
struct proc_dir_entry *proc_root_kcore;
--- 545,548 ----
***************
*** 519,522 ****
--- 566,575 ----
{"meminfo", meminfo_read_proc},
{"version", version_read_proc},
+ #ifdef CONFIG_PROC_HARDWARE
+ {"hardware", hardware_read_proc},
+ #endif
+ #ifdef CONFIG_STRAM_PROC
+ {"stram", stram_read_proc},
+ #endif
#ifdef CONFIG_MODULES
{"modules", modules_read_proc},
***************
*** 529,533 ****
#endif
{"devices", devices_read_proc},
- {"partitions", partitions_read_proc},
#if !defined(CONFIG_ARCH_S390)
{"interrupts", interrupts_read_proc},
--- 582,585 ----
***************
*** 549,558 ****
create_proc_read_entry(p->name, 0, NULL, p->read_proc, NULL);
/* And now for trickier ones */
entry = create_proc_entry("kmsg", S_IRUSR, &proc_root);
if (entry)
entry->proc_fops = &proc_kmsg_operations;
- create_seq_entry("mounts", 0, &proc_mounts_operations);
create_seq_entry("cpuinfo", 0, &proc_cpuinfo_operations);
#ifdef CONFIG_MODULES
create_seq_entry("ksyms", 0, &proc_ksyms_operations);
--- 601,613 ----
create_proc_read_entry(p->name, 0, NULL, p->read_proc, NULL);
+ proc_symlink("mounts", NULL, "self/mounts");
+
/* And now for trickier ones */
entry = create_proc_entry("kmsg", S_IRUSR, &proc_root);
if (entry)
entry->proc_fops = &proc_kmsg_operations;
create_seq_entry("cpuinfo", 0, &proc_cpuinfo_operations);
+ create_seq_entry("partitions", 0, &proc_partitions_operations);
+ create_seq_entry("slabinfo",S_IWUSR|S_IRUGO,&proc_slabinfo_operations);
#ifdef CONFIG_MODULES
create_seq_entry("ksyms", 0, &proc_ksyms_operations);
***************
*** 579,585 ****
}
#endif
- entry = create_proc_read_entry("slabinfo", S_IWUSR | S_IRUGO, NULL,
- slabinfo_read_proc, NULL);
- if (entry)
- entry->write_proc = slabinfo_write_proc;
}
--- 634,636 ----
From: Rodrigo S. de C. <rc...@us...> - 2003-05-19 01:39:20
Update of /cvsroot/linuxcompressed/linux/fs
In directory sc8-pr-cvs1:/tmp/cvs-serv25395/fs
Modified Files:
buffer.c inode.c
Log Message:
o Port code to 2.4.20
Bug fix (?)
o Changes checks in vswap.c to avoid oopses. It will BUG()
instead. Some of the checks were done after the value had been
accessed.
Note
o Virtual swap addresses are temporarily disabled, due to debugging
sessions related to the use of swap files instead of swap partitions.
Index: buffer.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/fs/buffer.c,v
retrieving revision 1.17
retrieving revision 1.18
diff -C2 -r1.17 -r1.18
*** buffer.c 29 Nov 2002 21:23:02 -0000 1.17
--- buffer.c 19 May 2003 01:38:46 -0000 1.18
***************
*** 55,59 ****
#include <asm/mmu_context.h>
- #define MAX_BUF_PER_PAGE (PAGE_CACHE_SIZE / 512)
#define NR_RESERVED (10*MAX_BUF_PER_PAGE)
#define MAX_UNUSED_BUFFERS NR_RESERVED+20 /* don't ever have more than this
--- 55,58 ----
***************
*** 75,79 ****
static struct buffer_head *lru_list[NR_LIST];
! static spinlock_t lru_list_lock __cacheline_aligned_in_smp = SPIN_LOCK_UNLOCKED;
static int nr_buffers_type[NR_LIST];
static unsigned long size_buffers_type[NR_LIST];
--- 74,81 ----
static struct buffer_head *lru_list[NR_LIST];
!
! static spinlock_cacheline_t lru_list_lock_cacheline = {SPIN_LOCK_UNLOCKED};
! #define lru_list_lock lru_list_lock_cacheline.lock
!
static int nr_buffers_type[NR_LIST];
static unsigned long size_buffers_type[NR_LIST];
***************
*** 85,88 ****
--- 87,91 ----
static int grow_buffers(kdev_t dev, unsigned long block, int size);
+ static int osync_buffers_list(struct list_head *);
static void __refile_buffer(struct buffer_head *);
***************
*** 104,108 ****
int nfract; /* Percentage of buffer cache dirty to
activate bdflush */
! int dummy1; /* old "ndirty" */
int dummy2; /* old "nrefill" */
int dummy3; /* unused */
--- 107,112 ----
int nfract; /* Percentage of buffer cache dirty to
activate bdflush */
! int ndirty; /* Maximum number of dirty blocks to write out per
! wake-cycle */
int dummy2; /* old "nrefill" */
int dummy3; /* unused */
***************
*** 111,128 ****
int nfract_sync;/* Percentage of buffer cache dirty to
activate bdflush synchronously */
! int dummy4; /* unused */
int dummy5; /* unused */
} b_un;
unsigned int data[N_PARAM];
! } bdf_prm = {{40, 0, 0, 0, 5*HZ, 30*HZ, 60, 0, 0}};
/* These are the min and max parameter values that we will allow to be assigned */
! int bdflush_min[N_PARAM] = { 0, 10, 5, 25, 0, 1*HZ, 0, 0, 0};
! int bdflush_max[N_PARAM] = {100,50000, 20000, 20000,10000*HZ, 6000*HZ, 100, 0, 0};
void unlock_buffer(struct buffer_head *bh)
{
clear_bit(BH_Wait_IO, &bh->b_state);
! clear_bit(BH_launder, &bh->b_state);
clear_bit(BH_Lock, &bh->b_state);
smp_mb__after_clear_bit();
--- 115,139 ----
int nfract_sync;/* Percentage of buffer cache dirty to
activate bdflush synchronously */
! int nfract_stop_bdflush; /* Percetange of buffer cache dirty to stop bdflush */
int dummy5; /* unused */
} b_un;
unsigned int data[N_PARAM];
! } bdf_prm = {{30, 500, 0, 0, 5*HZ, 30*HZ, 60, 20, 0}};
/* These are the min and max parameter values that we will allow to be assigned */
! int bdflush_min[N_PARAM] = { 0, 1, 0, 0, 0, 1*HZ, 0, 0, 0};
! int bdflush_max[N_PARAM] = {100,50000, 20000, 20000,10000*HZ, 10000*HZ, 100, 100, 0};
void unlock_buffer(struct buffer_head *bh)
{
clear_bit(BH_Wait_IO, &bh->b_state);
! clear_bit(BH_Launder, &bh->b_state);
! /*
! * When a locked buffer is visible to the I/O layer BH_Launder
! * is set. This means before unlocking we must clear BH_Launder,
! * mb() on alpha and then clear BH_Lock, so no reader can see
! * BH_Launder set on an unlocked buffer and then risk to deadlock.
! */
! smp_mb__after_clear_bit();
clear_bit(BH_Lock, &bh->b_state);
smp_mb__after_clear_bit();
***************
*** 132,142 ****
/*
- * Rewrote the wait-routines to use the "new" wait-queue functionality,
- * and getting rid of the cli-sti pairs. The wait-queue routines still
- * need cli-sti, but now it's just a couple of 386 instructions or so.
- *
* Note that the real wait_on_buffer() is an inline function that checks
! * if 'b_wait' is set before calling this, so that the queues aren't set
! * up unnecessarily.
*/
void __wait_on_buffer(struct buffer_head * bh)
--- 143,149 ----
/*
* Note that the real wait_on_buffer() is an inline function that checks
! * that the buffer is locked before calling this, so that unnecessary disk
! * unplugging does not occur.
*/
void __wait_on_buffer(struct buffer_head * bh)
***************
*** 204,208 ****
next = bh->b_next_free;
! if (dev && bh->b_dev != dev)
continue;
if (test_and_set_bit(BH_Lock, &bh->b_state))
--- 211,215 ----
next = bh->b_next_free;
! if (dev != NODEV && bh->b_dev != dev)
continue;
if (test_and_set_bit(BH_Lock, &bh->b_state))
***************
*** 234,241 ****
static void write_unlocked_buffers(kdev_t dev)
{
! do {
spin_lock(&lru_list_lock);
! } while (write_some_buffers(dev));
! run_task_queue(&tq_disk);
}
--- 241,247 ----
static void write_unlocked_buffers(kdev_t dev)
{
! do
spin_lock(&lru_list_lock);
! while (write_some_buffers(dev));
}
***************
*** 262,266 ****
continue;
}
! if (dev && bh->b_dev != dev)
continue;
--- 268,272 ----
continue;
}
! if (dev != NODEV && bh->b_dev != dev)
continue;
***************
*** 275,284 ****
}
- static inline void wait_for_some_buffers(kdev_t dev)
- {
- spin_lock(&lru_list_lock);
- wait_for_buffers(dev, BUF_LOCKED, 1);
- }
-
static int wait_for_locked_buffers(kdev_t dev, int index, int refile)
{
--- 281,284 ----
***************
*** 731,743 ****
static void free_more_memory(void)
{
- zone_t * zone = contig_page_data.node_zonelists[GFP_NOFS & GFP_ZONEMASK].zones[0];
-
balance_dirty();
wakeup_bdflush();
! try_to_free_pages(zone, GFP_NOFS, 0);
run_task_queue(&tq_disk);
! current->policy |= SCHED_YIELD;
! __set_current_state(TASK_RUNNING);
! schedule();
}
--- 731,739 ----
static void free_more_memory(void)
{
balance_dirty();
wakeup_bdflush();
! try_to_free_pages(GFP_NOIO);
run_task_queue(&tq_disk);
! yield();
}
***************
*** 755,758 ****
--- 751,755 ----
struct buffer_head *tmp;
struct page *page;
+ int fullup = 1;
mark_buffer_uptodate(bh, uptodate);
***************
*** 781,786 ****
tmp = bh->b_this_page;
while (tmp != bh) {
! if (buffer_async(tmp) && buffer_locked(tmp))
! goto still_busy;
tmp = tmp->b_this_page;
}
--- 778,786 ----
tmp = bh->b_this_page;
while (tmp != bh) {
! if (buffer_locked(tmp)) {
! if (buffer_async(tmp))
! goto still_busy;
! } else if (!buffer_uptodate(tmp))
! fullup = 0;
tmp = tmp->b_this_page;
}
***************
*** 790,797 ****
/*
! * if none of the buffers had errors then we can set the
! * page uptodate:
*/
! if (!PageError(page))
SetPageUptodate(page);
--- 790,797 ----
/*
! * If none of the buffers had errors and all were uptodate
! * then we can set the page uptodate:
*/
! if (fullup && !PageError(page))
SetPageUptodate(page);
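The effect of the `fullup` flag added above is that the page is marked uptodate only when every buffer completed without error and was itself uptodate. A hypothetical userspace model of that aggregation (not the kernel data structures):

```c
/* Returns nonzero when the page may be marked uptodate: all n
 * buffers are uptodate and no I/O error was recorded on the page. */
static int model_page_uptodate(const int *buf_uptodate, int n, int page_error)
{
    int fullup = 1;
    int i;

    for (i = 0; i < n; i++)
        if (!buf_uptodate[i])
            fullup = 0;
    return fullup && !page_error;
}
```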
***************
*** 805,811 ****
}
! inline void set_buffer_async_io(struct buffer_head *bh) {
! bh->b_end_io = end_buffer_io_async ;
! mark_buffer_async(bh, 1);
}
--- 805,812 ----
}
! inline void set_buffer_async_io(struct buffer_head *bh)
! {
! bh->b_end_io = end_buffer_io_async;
! mark_buffer_async(bh, 1);
}
***************
*** 829,834 ****
* any newly dirty buffers for write.
*/
!
! int fsync_inode_buffers(struct inode *inode)
{
struct buffer_head *bh;
--- 830,834 ----
* any newly dirty buffers for write.
*/
! int fsync_buffers_list(struct list_head *list)
{
struct buffer_head *bh;
***************
*** 840,845 ****
spin_lock(&lru_list_lock);
! while (!list_empty(&inode->i_dirty_buffers)) {
! bh = BH_ENTRY(inode->i_dirty_buffers.next);
list_del(&bh->b_inode_buffers);
if (!buffer_dirty(bh) && !buffer_locked(bh))
--- 840,845 ----
spin_lock(&lru_list_lock);
! while (!list_empty(list)) {
! bh = BH_ENTRY(list->next);
list_del(&bh->b_inode_buffers);
if (!buffer_dirty(bh) && !buffer_locked(bh))
***************
*** 851,854 ****
--- 851,863 ----
get_bh(bh);
spin_unlock(&lru_list_lock);
+ /*
+ * Wait I/O completion before submitting
+ * the buffer, to be sure the write will
+ * be effective on the latest data in
+ * the buffer. (otherwise - if there's old
+ * I/O in flight - write_buffer would become
+ * a noop)
+ */
+ wait_on_buffer(bh);
ll_rw_block(WRITE, 1, &bh);
brelse(bh);
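The `wait_on_buffer()` added before `ll_rw_block()` above matters because submitting a write against a buffer whose old I/O is still in flight would be a no-op, as the new comment notes. A small model of that hazard; all `model_` names are hypothetical:

```c
/* A locked buffer has I/O in flight; writing it then is a no-op. */
struct model_bh { int locked; int dirty; int writes_issued; };

static void model_wait_on_buffer(struct model_bh *bh)
{
    bh->locked = 0;            /* old I/O completes */
}

static void model_write_buffer(struct model_bh *bh)
{
    if (bh->locked)
        return;                /* write silently skipped */
    bh->locked = 1;
    bh->dirty = 0;
    bh->writes_issued++;
}

/* The fixed path: wait first, then submit, so the write always
 * covers the latest data in the buffer. */
static void model_flush_one(struct model_bh *bh)
{
    model_wait_on_buffer(bh);
    model_write_buffer(bh);
}
```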
***************
*** 871,924 ****
spin_unlock(&lru_list_lock);
! err2 = osync_inode_buffers(inode);
!
! if (err)
! return err;
! else
! return err2;
! }
!
! int fsync_inode_data_buffers(struct inode *inode)
! {
! struct buffer_head *bh;
! struct inode tmp;
! int err = 0, err2;
!
! INIT_LIST_HEAD(&tmp.i_dirty_data_buffers);
!
! spin_lock(&lru_list_lock);
!
! while (!list_empty(&inode->i_dirty_data_buffers)) {
! bh = BH_ENTRY(inode->i_dirty_data_buffers.next);
! list_del(&bh->b_inode_buffers);
! if (!buffer_dirty(bh) && !buffer_locked(bh))
! bh->b_inode = NULL;
! else {
! bh->b_inode = &tmp;
! list_add(&bh->b_inode_buffers, &tmp.i_dirty_data_buffers);
! if (buffer_dirty(bh)) {
! get_bh(bh);
! spin_unlock(&lru_list_lock);
! ll_rw_block(WRITE, 1, &bh);
! brelse(bh);
! spin_lock(&lru_list_lock);
! }
! }
! }
!
! while (!list_empty(&tmp.i_dirty_data_buffers)) {
! bh = BH_ENTRY(tmp.i_dirty_data_buffers.prev);
! remove_inode_queue(bh);
! get_bh(bh);
! spin_unlock(&lru_list_lock);
! wait_on_buffer(bh);
! if (!buffer_uptodate(bh))
! err = -EIO;
! brelse(bh);
! spin_lock(&lru_list_lock);
! }
!
! spin_unlock(&lru_list_lock);
! err2 = osync_inode_data_buffers(inode);
if (err)
--- 880,884 ----
spin_unlock(&lru_list_lock);
! err2 = osync_buffers_list(list);
if (err)
***************
*** 934,975 ****
*
* To do O_SYNC writes, just queue the buffer writes with ll_rw_block as
! * you dirty the buffers, and then use osync_inode_buffers to wait for
* completion. Any other dirty buffers which are not yet queued for
* write will not be flushed to disk by the osync.
*/
!
! int osync_inode_buffers(struct inode *inode)
! {
! struct buffer_head *bh;
! struct list_head *list;
! int err = 0;
!
! spin_lock(&lru_list_lock);
!
! repeat:
!
! for (list = inode->i_dirty_buffers.prev;
! bh = BH_ENTRY(list), list != &inode->i_dirty_buffers;
! list = bh->b_inode_buffers.prev) {
! if (buffer_locked(bh)) {
! get_bh(bh);
! spin_unlock(&lru_list_lock);
! wait_on_buffer(bh);
! if (!buffer_uptodate(bh))
! err = -EIO;
! brelse(bh);
! spin_lock(&lru_list_lock);
! goto repeat;
! }
! }
!
! spin_unlock(&lru_list_lock);
! return err;
! }
!
! int osync_inode_data_buffers(struct inode *inode)
{
struct buffer_head *bh;
! struct list_head *list;
int err = 0;
--- 894,905 ----
*
* To do O_SYNC writes, just queue the buffer writes with ll_rw_block as
! * you dirty the buffers, and then use osync_buffers_list to wait for
* completion. Any other dirty buffers which are not yet queued for
* write will not be flushed to disk by the osync.
*/
! static int osync_buffers_list(struct list_head *list)
{
struct buffer_head *bh;
! struct list_head *p;
int err = 0;
***************
*** 977,984 ****
repeat:
!
! for (list = inode->i_dirty_data_buffers.prev;
! bh = BH_ENTRY(list), list != &inode->i_dirty_data_buffers;
! list = bh->b_inode_buffers.prev) {
if (buffer_locked(bh)) {
get_bh(bh);
--- 907,912 ----
repeat:
! list_for_each_prev(p, list) {
! bh = BH_ENTRY(p);
if (buffer_locked(bh)) {
get_bh(bh);
***************
*** 997,1001 ****
}
-
/*
* Invalidate any and all dirty buffers on a given inode. We are
--- 925,928 ----
***************
*** 1032,1037 ****
bh = get_hash_table(dev, block, size);
! if (bh)
return bh;
if (!grow_buffers(dev, block, size))
--- 959,966 ----
bh = get_hash_table(dev, block, size);
! if (bh) {
! touch_buffer(bh);
return bh;
+ }
if (!grow_buffers(dev, block, size))
***************
*** 1048,1052 ****
dirty = size_buffers_type[BUF_DIRTY] >> PAGE_SHIFT;
- dirty += size_buffers_type[BUF_LOCKED] >> PAGE_SHIFT;
tot = nr_free_buffer_pages();
--- 977,980 ----
***************
*** 1065,1068 ****
--- 993,1011 ----
}
+ static int bdflush_stop(void)
+ {
+ unsigned long dirty, tot, dirty_limit;
+
+ dirty = size_buffers_type[BUF_DIRTY] >> PAGE_SHIFT;
+ tot = nr_free_buffer_pages();
+
+ dirty *= 100;
+ dirty_limit = tot * bdf_prm.b_un.nfract_stop_bdflush;
+
+ if (dirty > dirty_limit)
+ return 0;
+ return 1;
+ }
+
/*
* if a new dirty buffer is created we need to balance bdflush.
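The stop test introduced above compares dirty buffer pages, as a percentage of the buffer-cache total, against `nfract_stop_bdflush`. A userspace model of the arithmetic, with illustrative names and values:

```c
/* Return 1 when bdflush may stop: dirty has fallen to at most
 * stop_percent of the total.  Mirrors the cross-multiplied test
 * (dirty * 100 vs tot * stop_percent) so no division is needed. */
static int model_bdflush_stop(unsigned long dirty, unsigned long tot,
                              unsigned int stop_percent)
{
    if (dirty * 100 > tot * stop_percent)
        return 0;              /* still too dirty: keep flushing */
    return 1;
}
```

For example, 30 dirty pages out of 1000 is 3%, which is below a 5% stop threshold, so bdflush may sleep; 80 out of 1000 is 8% and it must keep writing.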
***************
*** 1079,1095 ****
return;
! /* If we're getting into imbalance, start write-out */
! spin_lock(&lru_list_lock);
! write_some_buffers(NODEV);
/*
* And if we're _really_ out of balance, wait for
! * some of the dirty/locked buffers ourselves and
! * start bdflush.
* This will throttle heavy writers.
*/
if (state > 0) {
! wait_for_some_buffers(NODEV);
! wakeup_bdflush();
}
}
--- 1022,1035 ----
return;
! wakeup_bdflush();
/*
* And if we're _really_ out of balance, wait for
! * some of the dirty/locked buffers ourselves.
* This will throttle heavy writers.
*/
if (state > 0) {
! spin_lock(&lru_list_lock);
! write_some_buffers(NODEV);
}
}
***************
*** 1185,1189 ****
bh = getblk(dev, block, size);
- touch_buffer(bh);
if (buffer_uptodate(bh))
return bh;
--- 1125,1128 ----
***************
*** 1274,1287 ****
void set_bh_page (struct buffer_head *bh, struct page *page, unsigned long offset)
{
- bh->b_page = page;
if (offset >= PAGE_SIZE)
BUG();
! if (PageHighMem(page))
! /*
! * This catches illegal uses and preserves the offset:
! */
! bh->b_data = (char *)(0 + offset);
! else
! bh->b_data = page_address(page) + offset;
}
EXPORT_SYMBOL(set_bh_page);
--- 1213,1224 ----
void set_bh_page (struct buffer_head *bh, struct page *page, unsigned long offset)
{
if (offset >= PAGE_SIZE)
BUG();
!
! /*
! * page_address will return NULL anyways for highmem pages
! */
! bh->b_data = page_address(page) + offset;
! bh->b_page = page;
}
EXPORT_SYMBOL(set_bh_page);
***************
*** 1682,1687 ****
--- 1619,1636 ----
* data. If BH_New is set, we know that the block was newly
* allocated in the above loop.
+ *
+ * In detail, the buffer can be new and uptodate because:
+ * 1) hole in uptodate page, get_block(create) allocate the block,
+ * so the buffer is new and additionally we also mark it uptodate
+ * 2) The buffer is not mapped and uptodate due a previous partial read.
+ *
+ * We can always ignore uptodate buffers here, if you mark a buffer
+ * uptodate you must make sure it contains the right data first.
+ *
+ * We must stop the "undo/clear" fixup pass not at the caller "to"
+ * but at the last block that we successfully arrived in the main loop.
*/
bh = head;
+ to = block_start; /* stop at the last successfully handled block */
block_start = 0;
do {
***************
*** 1691,1698 ****
if (block_start >= to)
break;
! if (buffer_new(bh)) {
! if (buffer_uptodate(bh))
! printk(KERN_ERR "%s: zeroing uptodate buffer!\n", __FUNCTION__);
memset(kaddr+block_start, 0, bh->b_size);
set_bit(BH_Uptodate, &bh->b_state);
mark_buffer_dirty(bh);
--- 1640,1646 ----
if (block_start >= to)
break;
! if (buffer_new(bh) && !buffer_uptodate(bh)) {
memset(kaddr+block_start, 0, bh->b_size);
+ flush_dcache_page(page);
set_bit(BH_Uptodate, &bh->b_state);
mark_buffer_dirty(bh);
***************
*** 1817,1823 ****
/* Stage 3: start the IO */
! for (i = 0; i < nr; i++)
! submit_bh(READ, arr[i]);
!
return 0;
}
--- 1765,1776 ----
/* Stage 3: start the IO */
! for (i = 0; i < nr; i++) {
! struct buffer_head * bh = arr[i];
! if (buffer_uptodate(bh))
! end_buffer_io_async(bh, 1);
! else
! submit_bh(READ, bh);
! }
!
return 0;
}
***************
*** 2054,2058 ****
kunmap(page);
! __mark_buffer_dirty(bh);
err = 0;
--- 2007,2016 ----
kunmap(page);
! if (!atomic_set_buffer_dirty(bh)) {
! __mark_dirty(bh);
! buffer_insert_inode_data_queue(bh, inode);
! balance_dirty();
! }
!
err = 0;
***************
*** 2259,2264 ****
*
* The kiobuf must already be locked for IO. IO is submitted
! * asynchronously: you need to check page->locked, page->uptodate, and
! * maybe wait on page->wait.
*
* It is up to the caller to make sure that there are enough blocks
--- 2217,2221 ----
*
* The kiobuf must already be locked for IO. IO is submitted
! * asynchronously: you need to check page->locked and page->uptodate.
*
* It is up to the caller to make sure that there are enough blocks
***************
*** 2393,2398 ****
* Start I/O on a page.
* This function expects the page to be locked and may return
! * before I/O is complete. You then have to check page->locked,
! * page->uptodate, and maybe wait on page->wait.
*
* brw_page() is SMP-safe, although it's being called with the
--- 2350,2355 ----
* Start I/O on a page.
* This function expects the page to be locked and may return
! * before I/O is complete. You then have to check page->locked
! * and page->uptodate.
*
* brw_page() is SMP-safe, although it's being called with the
***************
*** 2595,2602 ****
}
static int sync_page_buffers(struct buffer_head *head)
{
struct buffer_head * bh = head;
! int tryagain = 0;
do {
--- 2552,2590 ----
}
+ /*
+ * The first time the VM inspects a page which has locked buffers, it
+ * will just mark it as needing waiting upon on the scan of the page LRU.
+ * BH_Wait_IO is used for this.
+ *
+ * The second time the VM visits the page, if it still has locked
+ * buffers, it is time to start writing them out. (BH_Wait_IO was set).
+ *
+ * The third time the VM visits the page, if the I/O hasn't completed
+ * then it's time to wait upon writeout. BH_Lock and BH_Launder are
+ * used for this.
+ *
+ * There is also the case of buffers which were locked by someone else
+ * - write(2) callers, bdflush, etc. There can be a huge number of these
+ * and we don't want to just skip them all and fail the page allocation.
+ * We want to be able to wait on these buffers as well.
+ *
+ * The BH_Launder bit is set in submit_bh() to indicate that I/O is
+ * underway against the buffer, doesn't matter who started it - we know
+ * that the buffer will eventually come unlocked, and so it's safe to
+ * wait on it.
+ *
+ * The caller holds the page lock and the caller will free this page
+ * into current->local_page, so by waiting on the page's buffers the
+ * caller is guaranteed to obtain this page.
+ *
+ * sync_page_buffers() will sort-of return true if all the buffers
+ * against this page are freeable, so try_to_free_buffers() should
+ * try to free the page's buffers a second time. This is a bit
+ * broken for blocksize < PAGE_CACHE_SIZE, but not very importantly.
+ */
static int sync_page_buffers(struct buffer_head *head)
{
struct buffer_head * bh = head;
! int tryagain = 1;
do {
***************
*** 2605,2615 ****
/* Don't start IO first time around.. */
! if (!test_and_set_bit(BH_Wait_IO, &bh->b_state))
continue;
/* Second time through we start actively writing out.. */
if (test_and_set_bit(BH_Lock, &bh->b_state)) {
! if (!test_bit(BH_launder, &bh->b_state))
continue;
wait_on_buffer(bh);
tryagain = 1;
--- 2593,2607 ----
/* Don't start IO first time around.. */
! if (!test_and_set_bit(BH_Wait_IO, &bh->b_state)) {
! tryagain = 0;
continue;
+ }
/* Second time through we start actively writing out.. */
if (test_and_set_bit(BH_Lock, &bh->b_state)) {
! if (unlikely(!buffer_launder(bh))) {
! tryagain = 0;
continue;
+ }
wait_on_buffer(bh);
tryagain = 1;
***************
*** 2624,2628 ****
__mark_buffer_clean(bh);
get_bh(bh);
- set_bit(BH_launder, &bh->b_state);
bh->b_end_io = end_buffer_io_sync;
submit_bh(WRITE, bh);
--- 2616,2619 ----
***************
*** 2949,2960 ****
complete((struct completion *)startup);
for (;;) {
CHECK_EMERGENCY_SYNC
! spin_lock(&lru_list_lock);
! if (!write_some_buffers(NODEV) || balance_dirty_state() < 0) {
! wait_for_some_buffers(NODEV);
! interruptible_sleep_on(&bdflush_wait);
}
}
}
--- 2940,2966 ----
complete((struct completion *)startup);
+ /*
+ * FIXME: The ndirty logic here is wrong. It's supposed to
+ * send bdflush back to sleep after writing ndirty buffers.
+ * In fact, the test is wrong so bdflush will in fact
+ * sleep when bdflush_stop() returns true.
+ *
+ * FIXME: If it proves useful to implement ndirty properly,
+ * then perhaps the value of ndirty should be scaled by the
+ * amount of memory in the machine.
+ */
for (;;) {
+ int ndirty = bdf_prm.b_un.ndirty;
+
CHECK_EMERGENCY_SYNC
! while (ndirty > 0) {
! spin_lock(&lru_list_lock);
! if (!write_some_buffers(NODEV))
! break;
! ndirty -= NRSYNC;
}
+ if (ndirty > 0 || bdflush_stop())
+ interruptible_sleep_on(&bdflush_wait);
}
}
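The rewritten main loop writes at most `ndirty` buffers per pass, in `NRSYNC`-sized batches, and goes back to sleep either when the budget is left unspent (nothing remained to write) or when `bdflush_stop()` reports the dirty level is low enough. A rough model of one pass, with an illustrative batch size:

```c
#define MODEL_NRSYNC 32        /* buffers written per batch, illustrative */

/* Returns 1 when bdflush should sleep after this pass. */
static int model_bdflush_pass(int ndirty, int dirty_buffers)
{
    while (ndirty > 0) {
        if (dirty_buffers <= 0)
            break;             /* write_some_buffers() found nothing */
        dirty_buffers -= MODEL_NRSYNC;
        ndirty -= MODEL_NRSYNC;
    }
    /* budget left over means the queue ran dry */
    return ndirty > 0;
}
```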
***************
*** 2985,2990 ****
for (;;) {
- wait_for_some_buffers(NODEV);
-
/* update interval */
interval = bdf_prm.b_un.interval;
--- 2991,2994 ----
***************
*** 3014,3017 ****
--- 3018,3022 ----
#endif
sync_old_buffers();
+ run_task_queue(&tq_disk);
}
}
Index: inode.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/fs/inode.c,v
retrieving revision 1.3
retrieving revision 1.4
diff -C2 -r1.3 -r1.4
*** inode.c 27 Feb 2002 19:58:51 -0000 1.3
--- inode.c 19 May 2003 01:38:46 -0000 1.4
***************
*** 253,257 ****
static inline void sync_one(struct inode *inode, int sync)
{
! if (inode->i_state & I_LOCK) {
__iget(inode);
spin_unlock(&inode_lock);
--- 253,257 ----
static inline void sync_one(struct inode *inode, int sync)
{
! while (inode->i_state & I_LOCK) {
__iget(inode);
spin_unlock(&inode_lock);
***************
*** 259,265 ****
iput(inode);
spin_lock(&inode_lock);
- } else {
- __sync_one(inode, sync);
}
}
--- 259,265 ----
iput(inode);
spin_lock(&inode_lock);
}
+
+ __sync_one(inode, sync);
}
***************
*** 731,736 ****
prune_icache(count);
! kmem_cache_shrink(inode_cachep);
! return 0;
}
--- 731,735 ----
prune_icache(count);
! return kmem_cache_shrink(inode_cachep);
}
***************
*** 1158,1162 ****
} while (inode_hashtable == NULL && --order >= 0);
! printk("Inode-cache hash table entries: %d (order: %ld, %ld bytes)\n",
nr_hash, order, (PAGE_SIZE << order));
--- 1157,1161 ----
} while (inode_hashtable == NULL && --order >= 0);
! printk(KERN_INFO "Inode cache hash table entries: %d (order: %ld, %ld bytes)\n",
nr_hash, order, (PAGE_SIZE << order));
From: Rodrigo S. de C. <rc...@us...> - 2003-05-19 01:39:19
Update of /cvsroot/linuxcompressed/linux/fs/ncpfs
In directory sc8-pr-cvs1:/tmp/cvs-serv25395/fs/ncpfs
Modified Files:
dir.c
Log Message:
o Port code to 2.4.20
Bug fix (?)
o Changes checks in vswap.c to avoid oopses. It will BUG()
instead. Some of the checks were done after the value had been
accessed.
Note
o Virtual swap addresses are temporarily disabled, due to debugging
sessions related to the use of swap files instead of swap partitions.
Index: dir.c
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/fs/ncpfs/dir.c,v
retrieving revision 1.4
retrieving revision 1.5
diff -C2 -r1.4 -r1.5
*** dir.c 5 Jul 2002 15:21:49 -0000 1.4
--- dir.c 19 May 2003 01:38:46 -0000 1.5
***************
*** 76,80 ****
static int ncp_delete_dentry(struct dentry *);
! struct dentry_operations ncp_dentry_operations =
{
d_revalidate: ncp_lookup_validate,
--- 76,80 ----
static int ncp_delete_dentry(struct dentry *);
! static struct dentry_operations ncp_dentry_operations =
{
d_revalidate: ncp_lookup_validate,
***************
*** 84,87 ****
--- 84,94 ----
};
+ struct dentry_operations ncp_root_dentry_operations =
+ {
+ d_hash: ncp_hash_dentry,
+ d_compare: ncp_compare_dentry,
+ d_delete: ncp_delete_dentry,
+ };
+
/*
***************
*** 844,848 ****
(server->m.flags & NCP_MOUNT_EXTRAS) &&
(mode & S_IXUGO))
! attributes |= aSYSTEM;
result = ncp_open_create_file_or_subdir(server, dir, __name,
--- 851,855 ----
(server->m.flags & NCP_MOUNT_EXTRAS) &&
(mode & S_IXUGO))
! attributes |= aSYSTEM | aSHARED;
result = ncp_open_create_file_or_subdir(server, dir, __name,
From: Rodrigo S. de C. <rc...@us...> - 2003-05-19 01:39:18
Update of /cvsroot/linuxcompressed/linux/arch/i386
In directory sc8-pr-cvs1:/tmp/cvs-serv25395/arch/i386
Modified Files:
config.in
Log Message:
o Port code to 2.4.20
Bug fix (?)
o Changes checks in vswap.c to avoid oopses. It will BUG()
instead. Some of the checks were done after the value had been
accessed.
Note
o Virtual swap addresses are temporarily disabled, due to debugging
sessions related to the use of swap files instead of swap partitions.
Index: config.in
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/arch/i386/config.in,v
retrieving revision 1.24
retrieving revision 1.25
diff -C2 -r1.24 -r1.25
*** config.in 22 Nov 2002 16:01:33 -0000 1.24
--- config.in 19 May 2003 01:38:45 -0000 1.25
***************
*** 6,10 ****
define_bool CONFIG_X86 y
- define_bool CONFIG_ISA y
define_bool CONFIG_SBUS n
--- 6,9 ----
***************
*** 43,47 ****
Winchip-2 CONFIG_MWINCHIP2 \
Winchip-2A/Winchip-3 CONFIG_MWINCHIP3D \
! CyrixIII/C3 CONFIG_MCYRIXIII" Pentium-Pro
#
# Define implied options from the CPU selection here
--- 42,46 ----
Winchip-2 CONFIG_MWINCHIP2 \
Winchip-2A/Winchip-3 CONFIG_MWINCHIP3D \
! CyrixIII/VIA-C3/VIA-C5 CONFIG_MCYRIXIII" Pentium-Pro
#
# Define implied options from the CPU selection here
***************
*** 55,58 ****
--- 54,58 ----
define_bool CONFIG_RWSEM_XCHGADD_ALGORITHM n
define_bool CONFIG_X86_PPRO_FENCE y
+ define_bool CONFIG_X86_F00F_WORKS_OK n
else
define_bool CONFIG_X86_WP_WORKS_OK y
***************
*** 70,73 ****
--- 70,74 ----
define_bool CONFIG_X86_ALIGNMENT_16 y
define_bool CONFIG_X86_PPRO_FENCE y
+ define_bool CONFIG_X86_F00F_WORKS_OK n
fi
if [ "$CONFIG_M586" = "y" ]; then
***************
*** 76,79 ****
--- 77,81 ----
define_bool CONFIG_X86_ALIGNMENT_16 y
define_bool CONFIG_X86_PPRO_FENCE y
+ define_bool CONFIG_X86_F00F_WORKS_OK n
fi
if [ "$CONFIG_M586TSC" = "y" ]; then
***************
*** 81,86 ****
define_bool CONFIG_X86_USE_STRING_486 y
define_bool CONFIG_X86_ALIGNMENT_16 y
! define_bool CONFIG_X86_TSC y
define_bool CONFIG_X86_PPRO_FENCE y
fi
if [ "$CONFIG_M586MMX" = "y" ]; then
--- 83,89 ----
define_bool CONFIG_X86_USE_STRING_486 y
define_bool CONFIG_X86_ALIGNMENT_16 y
! define_bool CONFIG_X86_HAS_TSC y
define_bool CONFIG_X86_PPRO_FENCE y
+ define_bool CONFIG_X86_F00F_WORKS_OK n
fi
if [ "$CONFIG_M586MMX" = "y" ]; then
***************
*** 88,130 ****
define_bool CONFIG_X86_USE_STRING_486 y
define_bool CONFIG_X86_ALIGNMENT_16 y
! define_bool CONFIG_X86_TSC y
define_bool CONFIG_X86_GOOD_APIC y
define_bool CONFIG_X86_PPRO_FENCE y
fi
if [ "$CONFIG_M686" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
! define_bool CONFIG_X86_TSC y
define_bool CONFIG_X86_GOOD_APIC y
define_bool CONFIG_X86_PGE y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
define_bool CONFIG_X86_PPRO_FENCE y
fi
if [ "$CONFIG_MPENTIUMIII" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
! define_bool CONFIG_X86_TSC y
define_bool CONFIG_X86_GOOD_APIC y
define_bool CONFIG_X86_PGE y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
fi
if [ "$CONFIG_MPENTIUM4" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 7
! define_bool CONFIG_X86_TSC y
define_bool CONFIG_X86_GOOD_APIC y
define_bool CONFIG_X86_PGE y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
fi
if [ "$CONFIG_MK6" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
define_bool CONFIG_X86_ALIGNMENT_16 y
! define_bool CONFIG_X86_TSC y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
fi
if [ "$CONFIG_MK7" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 6
! define_bool CONFIG_X86_TSC y
define_bool CONFIG_X86_GOOD_APIC y
define_bool CONFIG_X86_USE_3DNOW y
define_bool CONFIG_X86_PGE y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
fi
if [ "$CONFIG_MELAN" = "y" ]; then
--- 91,138 ----
define_bool CONFIG_X86_USE_STRING_486 y
define_bool CONFIG_X86_ALIGNMENT_16 y
! define_bool CONFIG_X86_HAS_TSC y
define_bool CONFIG_X86_GOOD_APIC y
define_bool CONFIG_X86_PPRO_FENCE y
+ define_bool CONFIG_X86_F00F_WORKS_OK n
fi
if [ "$CONFIG_M686" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
! define_bool CONFIG_X86_HAS_TSC y
define_bool CONFIG_X86_GOOD_APIC y
define_bool CONFIG_X86_PGE y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
define_bool CONFIG_X86_PPRO_FENCE y
+ define_bool CONFIG_X86_F00F_WORKS_OK y
fi
if [ "$CONFIG_MPENTIUMIII" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
! define_bool CONFIG_X86_HAS_TSC y
define_bool CONFIG_X86_GOOD_APIC y
define_bool CONFIG_X86_PGE y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
+ define_bool CONFIG_X86_F00F_WORKS_OK y
fi
if [ "$CONFIG_MPENTIUM4" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 7
! define_bool CONFIG_X86_HAS_TSC y
define_bool CONFIG_X86_GOOD_APIC y
define_bool CONFIG_X86_PGE y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
+ define_bool CONFIG_X86_F00F_WORKS_OK y
fi
if [ "$CONFIG_MK6" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
define_bool CONFIG_X86_ALIGNMENT_16 y
! define_bool CONFIG_X86_HAS_TSC y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
fi
if [ "$CONFIG_MK7" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 6
! define_bool CONFIG_X86_HAS_TSC y
define_bool CONFIG_X86_GOOD_APIC y
define_bool CONFIG_X86_USE_3DNOW y
define_bool CONFIG_X86_PGE y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
+ define_bool CONFIG_X86_F00F_WORKS_OK y
fi
if [ "$CONFIG_MELAN" = "y" ]; then
***************
*** 132,139 ****
define_bool CONFIG_X86_USE_STRING_486 y
define_bool CONFIG_X86_ALIGNMENT_16 y
fi
if [ "$CONFIG_MCYRIXIII" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
! define_bool CONFIG_X86_TSC y
define_bool CONFIG_X86_ALIGNMENT_16 y
define_bool CONFIG_X86_USE_3DNOW y
--- 140,148 ----
define_bool CONFIG_X86_USE_STRING_486 y
define_bool CONFIG_X86_ALIGNMENT_16 y
+ define_bool CONFIG_X86_F00F_WORKS_OK y
fi
if [ "$CONFIG_MCYRIXIII" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
! define_bool CONFIG_X86_HAS_TSC y
define_bool CONFIG_X86_ALIGNMENT_16 y
define_bool CONFIG_X86_USE_3DNOW y
***************
*** 142,146 ****
if [ "$CONFIG_MCRUSOE" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
! define_bool CONFIG_X86_TSC y
fi
if [ "$CONFIG_MWINCHIPC6" = "y" ]; then
--- 151,156 ----
if [ "$CONFIG_MCRUSOE" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
! define_bool CONFIG_X86_HAS_TSC y
! define_bool CONFIG_X86_F00F_WORKS_OK y
fi
if [ "$CONFIG_MWINCHIPC6" = "y" ]; then
***************
*** 149,167 ****
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
define_bool CONFIG_X86_OOSTORE y
fi
if [ "$CONFIG_MWINCHIP2" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
define_bool CONFIG_X86_ALIGNMENT_16 y
! define_bool CONFIG_X86_TSC y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
define_bool CONFIG_X86_OOSTORE y
fi
if [ "$CONFIG_MWINCHIP3D" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
define_bool CONFIG_X86_ALIGNMENT_16 y
! define_bool CONFIG_X86_TSC y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
define_bool CONFIG_X86_OOSTORE y
fi
tristate 'Toshiba Laptop support' CONFIG_TOSHIBA
tristate 'Dell laptop support' CONFIG_I8K
--- 159,183 ----
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
define_bool CONFIG_X86_OOSTORE y
+ define_bool CONFIG_X86_F00F_WORKS_OK y
fi
if [ "$CONFIG_MWINCHIP2" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
define_bool CONFIG_X86_ALIGNMENT_16 y
! define_bool CONFIG_X86_HAS_TSC y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
define_bool CONFIG_X86_OOSTORE y
+ define_bool CONFIG_X86_F00F_WORKS_OK y
fi
if [ "$CONFIG_MWINCHIP3D" = "y" ]; then
define_int CONFIG_X86_L1_CACHE_SHIFT 5
define_bool CONFIG_X86_ALIGNMENT_16 y
! define_bool CONFIG_X86_HAS_TSC y
define_bool CONFIG_X86_USE_PPRO_CHECKSUM y
define_bool CONFIG_X86_OOSTORE y
+ define_bool CONFIG_X86_F00F_WORKS_OK y
fi
+
+ bool 'Machine Check Exception' CONFIG_X86_MCE
+
tristate 'Toshiba Laptop support' CONFIG_TOSHIBA
tristate 'Dell laptop support' CONFIG_I8K
***************
*** 175,186 ****
4GB CONFIG_HIGHMEM4G \
64GB CONFIG_HIGHMEM64G" off
! if [ "$CONFIG_HIGHMEM4G" = "y" ]; then
define_bool CONFIG_HIGHMEM y
fi
if [ "$CONFIG_HIGHMEM64G" = "y" ]; then
- define_bool CONFIG_HIGHMEM y
define_bool CONFIG_X86_PAE y
fi
bool 'Math emulation' CONFIG_MATH_EMULATION
bool 'MTRR (Memory Type Range Register) support' CONFIG_MTRR
--- 191,207 ----
4GB CONFIG_HIGHMEM4G \
64GB CONFIG_HIGHMEM64G" off
! if [ "$CONFIG_HIGHMEM4G" = "y" -o "$CONFIG_HIGHMEM64G" = "y" ]; then
define_bool CONFIG_HIGHMEM y
+ else
+ define_bool CONFIG_HIGHMEM n
fi
if [ "$CONFIG_HIGHMEM64G" = "y" ]; then
define_bool CONFIG_X86_PAE y
fi
+ if [ "$CONFIG_HIGHMEM" = "y" ]; then
+ bool 'HIGHMEM I/O support' CONFIG_HIGHIO
+ fi
+
bool 'Math emulation' CONFIG_MATH_EMULATION
bool 'MTRR (Memory Type Range Register) support' CONFIG_MTRR
***************
*** 199,202 ****
--- 220,228 ----
fi
+ bool 'Unsynced TSC support' CONFIG_X86_TSC_DISABLE
+ if [ "$CONFIG_X86_TSC_DISABLE" != "y" -a "$CONFIG_X86_HAS_TSC" = "y" ]; then
+ define_bool CONFIG_X86_TSC y
+ fi
+
if [ "$CONFIG_SMP" = "y" -a "$CONFIG_X86_CMPXCHG" = "y" ]; then
define_bool CONFIG_HAVE_DEC_LOCK y
***************
*** 231,234 ****
--- 257,261 ----
define_bool CONFIG_X86_LOCAL_APIC y
define_bool CONFIG_PCI y
+ define_bool CONFIG_ISA n
else
if [ "$CONFIG_SMP" = "y" ]; then
***************
*** 249,252 ****
--- 276,280 ----
fi
fi
+ bool 'ISA bus support' CONFIG_ISA
fi
***************
*** 379,390 ****
endmenu
! mainmenu_option next_comment
! comment 'Old CD-ROM drivers (not SCSI, not IDE)'
!
! bool 'Support non-SCSI/IDE/ATAPI CDROM drives' CONFIG_CD_NO_IDESCSI
! if [ "$CONFIG_CD_NO_IDESCSI" != "n" ]; then
! source drivers/cdrom/Config.in
fi
- endmenu
#
--- 407,420 ----
endmenu
! if [ "$CONFIG_ISA" = "y" ]; then
! mainmenu_option next_comment
! comment 'Old CD-ROM drivers (not SCSI, not IDE)'
!
! bool 'Support non-SCSI/IDE/ATAPI CDROM drives' CONFIG_CD_NO_IDESCSI
! if [ "$CONFIG_CD_NO_IDESCSI" != "n" ]; then
! source drivers/cdrom/Config.in
! fi
! endmenu
fi
#
***************
*** 423,429 ****
source drivers/usb/Config.in
! if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then
! source net/bluetooth/Config.in
! fi
mainmenu_option next_comment
--- 453,457 ----
source drivers/usb/Config.in
! source net/bluetooth/Config.in
mainmenu_option next_comment
***************
*** 432,435 ****
--- 460,464 ----
bool 'Kernel debugging' CONFIG_DEBUG_KERNEL
if [ "$CONFIG_DEBUG_KERNEL" != "n" ]; then
+ bool ' Check for stack overflows' CONFIG_DEBUG_STACKOVERFLOW
bool ' Debug high memory support' CONFIG_DEBUG_HIGHMEM
bool ' Debug memory allocations' CONFIG_DEBUG_SLAB
***************
*** 437,442 ****
bool ' Magic SysRq key' CONFIG_MAGIC_SYSRQ
bool ' Spinlock debugging' CONFIG_DEBUG_SPINLOCK
! bool ' Verbose BUG() reporting (adds 70K)' CONFIG_DEBUG_BUGVERBOSE
fi
endmenu
--- 466,473 ----
bool ' Magic SysRq key' CONFIG_MAGIC_SYSRQ
bool ' Spinlock debugging' CONFIG_DEBUG_SPINLOCK
! bool ' Compile the kernel with frame pointers' CONFIG_FRAME_POINTER
fi
endmenu
+
+ source lib/Config.in
From: Rodrigo S. de C. <rc...@us...> - 2003-05-19 01:39:18
Update of /cvsroot/linuxcompressed/linux/Documentation
In directory sc8-pr-cvs1:/tmp/cvs-serv25395/Documentation
Modified Files:
Configure.help
Log Message:
o Port code to 2.4.20
Bug fix (?)
o Changes checks in vswap.c to avoid oopses. It will BUG()
instead. Some of the checks were done after the value had been
accessed.
Note
o Virtual swap addresses are temporarily disabled, due to debugging
sessions related to the use of swap files instead of swap partitions.
Index: Configure.help
===================================================================
RCS file: /cvsroot/linuxcompressed/linux/Documentation/Configure.help,v
retrieving revision 1.12
retrieving revision 1.13
diff -C2 -r1.12 -r1.13
*** Configure.help 22 Nov 2002 16:01:32 -0000 1.12
--- Configure.help 19 May 2003 01:38:43 -0000 1.13
***************
*** 3,10 ****
# Steven Cole <mailto:ele...@me...>
#
! # Merged version 2.69: current with 2.4.17-pre8/2.5.1-pre10.
! #
! # This version of the Linux kernel configuration help texts
! # corresponds to kernel versions 2.4.x and 2.5.x.
#
# Translations of this file available on the WWW:
--- 3,7 ----
# Steven Cole <mailto:ele...@me...>
[...7241 lines suppressed...]
+ CONFIG_IT8172_TUNING
+ Say Y here to support tuning the ITE8172's IDE interface. This makes
+ it possible to set DMA channel or PIO operation and the transfer rate.
+
+ Enable protocol mode for the L1 console
+ CONFIG_SERIAL_SGI_L1_PROTOCOL
+ Uses protocol mode instead of raw mode for the level 1 console on the
+ SGI SN (Scalable NUMA) platform for IA64. If you are compiling for
+ an SGI SN box then Y is the recommended value; otherwise say N.
+
+ New bus configuration (EXPERIMENTAL)
+ CONFIG_TULIP_MWI
+ This configures your Tulip card specifically for the card and
+ system cache line size type you are using.
+
+ This is experimental code, not yet tested on many boards.
+
+ If unsure, say N.
#