You can subscribe to this list here.
Message counts by month:

| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2001 |  |  |  |  |  |  |  | 165 | 240 | 424 | 526 | 293 |
| 2002 | 242 | 149 | 143 | 143 | 76 | 59 | 20 | 2 | 49 | 1 | 4 |  |
| 2003 | 1 |  |  | 1 |  |  |  |  |  |  |  |  |
| 2004 |  |  |  | 2 |  |  |  |  | 1 |  |  |  |
| 2008 |  |  |  |  |  |  |  |  |  | 3 |  |  |
| 2009 |  |  |  |  | 1 | 72 | 36 | 9 | 16 | 23 | 9 | 3 |
| 2010 |  | 1 | 35 | 44 | 56 | 71 | 41 | 41 | 22 | 3 | 1 | 1 |
| 2011 |  |  |  |  |  | 1 |  |  |  |  |  |  |
| 2012 | 1 |  |  |  |  |  |  |  |  |  |  | 1 |
| 2013 |  |  |  |  |  |  |  |  | 1 |  |  |  |
| 2014 | 1 |  |  |  |  |  |  |  |  |  |  |  |
| 2015 |  |  |  |  |  |  |  | 1 |  | 1 | 1 |  |
| 2016 |  |  |  |  | 1 |  |  |  |  |  | 1 |  |
| 2017 |  |  | 1 | 1 | 1 |  | 1 |  | 1 |  |  |  |
| 2021 |  |  |  |  |  |  | 1 | 1 | 25 | 105 | 15 |  |
| 2025 | 1 |  |  |  | 4 |  | 1 |  |  |  |  |  |
From: Paul M. <le...@us...> - 2001-10-24 05:28:06
Update of /cvsroot/linux-mips/linux
In directory usw-pr-cvs1:/tmp/cvs-serv20456

Modified Files: Makefile

Log Message: Added a watchdog driver for the vr41xx family.

Index: Makefile
===================================================================
RCS file: /cvsroot/linux-mips/linux/Makefile,v
retrieving revision 1.6
retrieving revision 1.7
diff -u -d -r1.6 -r1.7
--- Makefile 2001/10/19 21:19:37 1.6
+++ Makefile 2001/10/24 05:28:03 1.7
@@ -18,7 +18,7 @@
 HOSTCC = gcc
 HOSTCFLAGS = -Wall -Wstrict-prototypes -O2 -fomit-frame-pointer
-CROSS_COMPILE =
+CROSS_COMPILE = /opt/hardhat/devkit/mips/fp_le/bin/mips_fp_le-
 #
 # Include the make variables (CC, etc...)
From: Paul M. <le...@us...> - 2001-10-24 01:55:22
Update of /cvsroot/linux-mips/linux/include/asm-mips
In directory usw-pr-cvs1:/tmp/cvs-serv6031

Modified Files: korva.h

Log Message: Obvious typo fix. Wonder how this one got by.

Index: korva.h
===================================================================
RCS file: /cvsroot/linux-mips/linux/include/asm-mips/korva.h,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -d -r1.1 -r1.2
--- korva.h 2001/08/23 19:13:56 1.1
+++ korva.h 2001/10/24 01:55:17 1.2
@@ -13,7 +13,7 @@
  ************************************************************************
  */
-#define KORVA_A_GMRdefine KORVA_A_GMR 0xF000 /* R/W General Mode Register */
+#define KORVA_A_GMR 0xF000 /* R/W General Mode Register */
 #define KORVA_A_GSR 0xF004 /* R General Status Register */
 #define KORVA_A_IMR 0xF008 /* R/W Interrupt Mask Register */
 #define KORVA_A_RQU 0xF00C /* R Receive Queue Underrunning */
From: James S. <jsi...@us...> - 2001-10-23 23:51:06
Update of /cvsroot/linux-mips/linux/arch/mips/mm
In directory usw-pr-cvs1:/tmp/cvs-serv23335

Modified Files: tlb-r4k.c

Log Message: Fix a few buglets caused by the recent restructuring.

Index: tlb-r4k.c
===================================================================
RCS file: /cvsroot/linux-mips/linux/arch/mips/mm/tlb-r4k.c,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -d -r1.1 -r1.2
--- tlb-r4k.c 2001/10/22 20:43:28 1.1
+++ tlb-r4k.c 2001/10/23 23:51:03 1.2
@@ -47,7 +47,6 @@
         __save_and_cli(flags);
         /* Save old context and create impossible VPN2 value */
         old_ctx = (get_entryhi() & 0xff);
-        set_entryhi(KSEG0);
         set_entrylo0(0);
         set_entrylo1(0);
         BARRIER;
@@ -56,6 +55,11 @@
         /* Blast 'em all away. */
         while (entry < mips_cpu.tlbsize) {
+                /*
+                 * Make sure all entries differ. If they're not different
+                 * MIPS32 will take revenge ...
+                 */
+                set_entryhi(KSEG0 + entry*0x2000);
                 set_index(entry);
                 BARRIER;
                 tlb_write_indexed();
@@ -116,9 +120,11 @@
                 set_entrylo0(0);
                 set_entrylo1(0);
                 set_entryhi(KSEG0);
-                BARRIER;
-                if(idx < 0)
+                if (idx < 0)
                         continue;
+                BARRIER;
+                /* Make sure all entries differ. */
+                set_entryhi(KSEG0+idx*0x2000);
                 tlb_write_indexed();
                 BARRIER;
         }
@@ -152,9 +158,10 @@
         idx = get_index();
         set_entrylo0(0);
         set_entrylo1(0);
-        set_entryhi(KSEG0);
         if(idx < 0)
                 goto finish;
+        /* Make sure all entries differ. */
+        set_entryhi(KSEG0+idx*0x2000);
         BARRIER;
         tlb_write_indexed();
@@ -326,8 +333,31 @@
         return ret;
 }
 
+static void __init probe_tlb(unsigned long config)
+{
+        unsigned long config1;
+
+        if (!(config & (1 << 31)))
+                /*
+                 * Not a MIPS32 complianant CPU. Config 1 register not
+                 * supported, we assume R4k style. Cpu probing already figured
+                 * out the number of tlb entries.
+                 */
+                return;
+
+        config1 = read_mips32_cp0_config1();
+        if (!((config >> 7) & 3))
+                panic("No MMU present");
+        else
+                mips_cpu.tlbsize = ((config1 >> 25) & 0x3f) + 1;
+
+        printk("Number of TLB entries %d.\n", mips_cpu.tlbsize);
+}
+
 void __init r4k_tlb_init(void)
 {
+        u32 config = read_32bit_cp0_register(CP0_CONFIG);
+
         /*
          * You should never change this register:
          * - On R4600 1.7 the tlbp never hits for pages smaller than
@@ -335,6 +365,7 @@
          * - The entire mm handling assumes the c0_pagemask register to
          * be set for 4kb pages.
          */
+        probe_tlb(config);
         set_pagemask(PM_4K);
         write_32bit_cp0_register(CP0_WIRED, 0);
         temp_tlb_entry = mips_cpu.tlbsize - 1;
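The patch above hinges on two MIPS32 details: every TLB entry that gets written, even an invalid one, must carry a unique VPN2 in EntryHi (hence KSEG0 + index*0x2000, one 8 KiB even/odd page pair per entry), otherwise the core can raise a machine check for duplicate entries; and on MIPS32 the TLB size can be read from Config1, whose bits 30:25 hold "number of entries minus one". The user-space sketch below only models those two calculations; read_config1() is a stand-in for the privileged CP0 read and the sample value is invented, so treat it as an illustration of the bit layout rather than kernel code.

```c
#include <stdio.h>
#include <stdint.h>

#define KSEG0 0x80000000UL

/* Stand-in for the privileged CP0 read of Config1 (register 16, select 1).
 * The value is invented: MMUSize-1 field (bits 30:25) set to 31. */
static uint32_t read_config1(void)
{
    return 31u << 25;
}

/* MIPS32: Config1[30:25] holds "number of TLB entries - 1". */
static unsigned probe_tlb_size(uint32_t config1)
{
    return ((config1 >> 25) & 0x3f) + 1;
}

/* Each invalidated entry gets a distinct VPN2: with 4 KiB pages one EntryHi
 * names an even/odd pair of pages, so stepping by 0x2000 keeps them unique. */
static unsigned long unique_entryhi(unsigned idx)
{
    return KSEG0 + idx * 0x2000UL;
}

int main(void)
{
    unsigned tlbsize = probe_tlb_size(read_config1());
    unsigned i;

    printf("TLB entries: %u\n", tlbsize);
    for (i = 0; i < 4; i++)
        printf("entry %u -> EntryHi 0x%lx\n", i, unique_entryhi(i));
    return 0;
}
```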
From: James S. <jsi...@us...> - 2001-10-23 23:01:50
Update of /cvsroot/linux-mips/linux/arch/mips/cobalt In directory usw-pr-cvs1:/tmp/cvs-serv4820 Modified Files: Makefile pci.c Added Files: pci_fixups.c Log Message: Mirgrating to pci_fixup stuff. --- NEW FILE: pci_fixups.c --- /* * Various broken PCI things on the Qube. * * Copyright 2001, James Simmons, jsi...@tr... * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the * Free Software Foundation; either version 2 of the License, or (at your * option) any later version. * * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * You should have received a copy of the GNU General Public License along * with this program; if not, write to the Free Software Foundation, Inc., * 675 Mass Ave, Cambridge, MA 02139, USA. */ #include <linux/config.h> #ifdef CONFIG_PCI #include <linux/types.h> #include <linux/pci.h> #include <linux/kernel.h> #include <linux/init.h> #include <asm/cobalt/cobalt.h> #include <asm/pci.h> #include <asm/io.h> #undef DEBUG #ifdef DEBUG #define DBG(x...) printk(x) #else #define DBG(x...) #endif static void qube_expansion_slot_bist(void) { unsigned char ctrl; int timeout = 100000; pcibios_read_config_byte(0, (0x0a<<3), PCI_BIST, &ctrl); if (!(ctrl & PCI_BIST_CAPABLE)) return; pcibios_write_config_byte(0, (0x0a<<3), PCI_BIST, ctrl|PCI_BIST_START); do { pcibios_read_config_byte(0, (0x0a<<3), PCI_BIST, &ctrl); if (!(ctrl & PCI_BIST_START)) break; } while(--timeout > 0); if ((timeout <= 0) || (ctrl & PCI_BIST_CODE_MASK)) printk("PCI: Expansion slot card failed BIST with code %x\n", (ctrl & PCI_BIST_CODE_MASK)); } static void qube_expansion_slot_fixup(void) { unsigned long ioaddr_base = 0x10108000; /* It's magic, ask Doug. */ unsigned long memaddr_base = 0x12000000; unsigned short pci_cmd; int i; /* Enable bits in COMMAND so driver can talk to it. */ pcibios_read_config_word(0, (0x0a<<3), PCI_COMMAND, &pci_cmd); pci_cmd |= (PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER); pcibios_write_config_word(0, (0x0a<<3), PCI_COMMAND, pci_cmd); /* Give it a working IRQ. */ pcibios_write_config_byte(0, (0x0a<<3), PCI_INTERRUPT_LINE, 9); /* Fixup base addresses, we only support I/O at the moment. */ for (i = 0; i <= 5; i++) { unsigned int regaddr = (PCI_BASE_ADDRESS_0 + (i * 4)); unsigned int rval, mask, size, alignme, aspace; unsigned long *basep = &ioaddr_base; /* Check type first, punt if non-IO. */ pcibios_read_config_dword(0, (0x0a<<3), regaddr, &rval); aspace = (rval & PCI_BASE_ADDRESS_SPACE); if (aspace != PCI_BASE_ADDRESS_SPACE_IO) basep = &memaddr_base; /* Figure out how much it wants, if anything. */ pcibios_write_config_dword(0, (0x0a<<3), regaddr, 0xffffffff); pcibios_read_config_dword(0, (0x0a<<3), regaddr, &rval); /* Unused? 
*/ if (rval == 0) continue; rval &= PCI_BASE_ADDRESS_IO_MASK; mask = (~rval << 1) | 0x1; size = (mask & rval) & 0xffffffff; alignme = size; if (alignme < 0x400) alignme = 0x400; rval = ((*basep + (alignme - 1)) & ~(alignme - 1)); *basep = (rval + size); pcibios_write_config_dword(0,(0x0a<<3), regaddr, rval | aspace); } qube_expansion_slot_bist(); } #define DEFAULT_BMIBA 0xcc00 /* in case ROM did not init it */ static void qube_raq_via_bmIDE_fixup(struct pci_dev *dev) { unsigned short cfgword; unsigned char lt; unsigned int bmiba; int try_again = 1; /* Enable Bus Mastering and fast back to back. */ pci_read_config_word(dev, PCI_COMMAND, &cfgword); cfgword |= (PCI_COMMAND_FAST_BACK | PCI_COMMAND_MASTER); pci_write_config_word(dev, PCI_COMMAND, cfgword); /* Enable interfaces. ROM only enables primary one. */ { #ifdef CONFIG_BLK_DEV_COBALT_SECONDARY unsigned char iface_enable = 0xb; #else unsigned char iface_enable = 0xa; #endif pci_write_config_byte(dev, 0x40, iface_enable); } /* Set latency timer to reasonable value. */ pci_read_config_byte(dev, PCI_LATENCY_TIMER, <); if (lt < 64) pci_write_config_byte(dev, PCI_LATENCY_TIMER, 64); pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, 7); /* Get the bmiba base address. */ do { pci_read_config_dword(dev, 0x20, &bmiba); bmiba &= 0xfff0; /* extract port base address */ if (bmiba) { break; } else { printk("ide: BM-DMA base register is invalid (0x%08x)\n",bmiba); if (inb(DEFAULT_BMIBA) != 0xff || !try_again) break; printk("ide: setting BM-DMA base register to 0x%08x\n",DEFAULT_BMIBA); pci_write_config_dword(dev, 0x20, DEFAULT_BMIBA | 1); } } while (try_again--); bmiba += 0x10000000; dev->resource[4].start = bmiba; } static void qube_raq_tulip_fixup(struct pci_dev *dev) { unsigned short pci_cmd; extern int cobalt_is_raq; unsigned int tmp; /* Fixup the first tulip located at device PCICONF_ETH0 */ if (dev->devfn == PCI_DEVSHFT(COBALT_PCICONF_ETH0)) { /* * Now tell the Ethernet device that we expect an interrupt at * IRQ 13 and not the default 189. * * The IRQ of the first Tulip is different on Qube and RaQ * hardware except for the weird first RaQ bringup board, */ if (!cobalt_is_raq) { /* All Qube's route this the same way. */ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, COBALT_ETHERNET_IRQ); } else { /* Setup the first Tulip on the RAQ */ #ifndef RAQ_BOARD_1_WITH_HWHACKS pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 4); #else pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 13); #endif } /* Fixup the second tulip located at device PCICONF_ETH1 */ } else if (dev->devfn == PCI_DEVSHFT(COBALT_PCICONF_ETH1)) { /* XXX Check for the second Tulip on the RAQ(Already got it!) */ pci_read_config_dword(dev, PCI_VENDOR_ID, &tmp); if (tmp == 0xffffffff || tmp == 0x00000000) return; /* Enable the second Tulip device. */ pci_read_config_word(dev, PCI_COMMAND, &pci_cmd); pci_cmd |= (PCI_COMMAND_IO | PCI_COMMAND_MASTER); pci_write_config_word(dev, PCI_COMMAND, pci_cmd); /* Give it it's IRQ. */ /* NOTE: RaQ board #1 has a bunch of green wires which swapped * the IRQ line values of Tulip 0 and Tulip 1. All other * boards have eth0=4,eth1=13. -DaveM */ #ifndef RAQ_BOARD_1_WITH_HWHACKS pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 13); #else pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 4); #endif /* And finally, a usable I/O space allocation, right after what * the first Tulip uses. 
*/ pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, 0x10101001); } } static void qube_raq_scsi_fixup(struct pci_dev *dev) { unsigned short pci_cmd; extern int cobalt_is_raq; unsigned int tmp; /* * Tell the SCSI device that we expect an interrupt at * IRQ 7 and not the default 0. */ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, COBALT_SCSI_IRQ); if (cobalt_is_raq) { /* Check for the SCSI on the RAQ */ pci_read_config_dword(dev, PCI_VENDOR_ID, &tmp); if (tmp == 0xffffffff || tmp == 0x00000000) return; /* Enable the device. */ pci_read_config_word(dev, PCI_COMMAND, &pci_cmd); pci_cmd |= (PCI_COMMAND_IO | PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY | PCI_COMMAND_INVALIDATE); pci_write_config_word(dev, PCI_COMMAND, pci_cmd); /* Give it it's IRQ. */ pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 4); /* And finally, a usable I/O space allocation, right after what * the second Tulip uses. */ pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, 0x10102001); pci_write_config_dword(dev, PCI_BASE_ADDRESS_1, 0x00002000); pci_write_config_dword(dev, PCI_BASE_ADDRESS_2, 0x00100000); } } static void qube_raq_galileo_fixup(struct pci_dev *dev) { unsigned short galileo_id; /* Fix PCI latency-timer and cache-line-size values in Galileo * host bridge. */ pci_write_config_byte(dev, PCI_LATENCY_TIMER, 64); pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, 7); /* On all machines prior to Q2, we had the STOP line disconnected * from Galileo to VIA on PCI. The new Galileo does not function * correctly unless we have it connected. * * Therefore we must set the disconnect/retry cycle values to * something sensible when using the new Galileo. */ pci_read_config_word(dev, PCI_REVISION_ID, &galileo_id); galileo_id &= 0xff; /* mask off class info */ if (galileo_id == 0x10) { /* New Galileo, assumes PCI stop line to VIA is connected. */ *((volatile unsigned int *)0xb4000c04) = 0x00004020; } else if (galileo_id == 0x1 || galileo_id == 0x2) { unsigned int timeo; /* XXX WE MUST DO THIS ELSE GALILEO LOCKS UP! -DaveM */ timeo = *((volatile unsigned int *)0xb4000c04); /* Old Galileo, assumes PCI STOP line to VIA is disconnected. */ *((volatile unsigned int *)0xb4000c04) = 0x0000ffff; } } static void qube_pcibios_fixup(struct pci_dev *dev) { extern int cobalt_is_raq; unsigned int tmp; if (!cobalt_is_raq) { /* See if there is a device in the expansion slot, if so * fixup IRQ, fix base addresses, and enable master + * I/O + memory accesses in config space. */ pcibios_read_config_dword(0, 0x0a<<3, PCI_VENDOR_ID, &tmp); if(tmp != 0xffffffff && tmp != 0x00000000) qube_expansion_slot_fixup(); } else { /* And if we are a 2800 we have to setup the expansion slot * too. */ pcibios_read_config_dword(0, 0x0a<<3, PCI_VENDOR_ID, &tmp); if(tmp != 0xffffffff && tmp != 0x00000000) qube_expansion_slot_fixup(); } } struct pci_fixup pcibios_fixups[] = { /* TBD:: Add each device here and divvy up pcibios_fixup */ { PCI_FIXUP_HEADER, PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_1, qube_raq_via_bmIDE_fixup }, { PCI_FIXUP_HEADER, PCI_VENDOR_ID_DEC, PCI_DEVICE_ID_DEC_21142, qube_raq_tulip_fixup }, { PCI_FIXUP_HEADER, PCI_VENDOR_ID_GALILEO, PCI_ANY_ID, qube_raq_galileo_fixup }, /* Not sure about what scsi chips are available on the RAQ, put an entry for all */ { PCI_FIXUP_HEADER, PCI_VENDOR_ID_NCR, PCI_DEVICE_ID_NCR_53C860, qube_raq_scsi_fixup }, { PCI_FIXUP_HEADER, PCI_ANY_ID, PCI_ANY_ID, qube_pcibios_fixup } }; /* * Fixup your resources here, if necessary. *Usually* you * don't have to do anything here. * Called after pcibios_fixup(). 
*/ void __init pcibios_fixup_resources(struct pci_dev *dev) { /* will need to fixup IO resources */ } /* * Any board or system controller fixups go here. * Now, this is called after the pci_auto code (if enabled) and * after the linux pci scan. */ void __init pcibios_fixup(void) { struct pci_dev *dev; pci_for_each_dev(dev) { /* switch (dev->vendor) { case PCI_VENDOR_ID_NEC: switch (dev->device) { case PCI_DEVICE_ID_NEC_VRC4173_BCU: case PCI_DEVICE_ID_NEC_VRC4173_AC97: case PCI_DEVICE_ID_NEC_VRC4173_CARDU: case PCI_DEVICE_ID_NEC_VRC4173_USB: dev->irq = VR4122_IRQ_VRC4173; break; } break; case PCI_VENDOR_ID_MEDIAQ: if (dev->device == PCI_DEVICE_ID_MEDIAQ_MQ200) dev->irq = VR4122_IRQ_MQ200; break; } */ } } /* * This is very board specific. You'll have to look at * each pci device and assign its interrupt number. */ void __init pcibios_fixup_irqs(void) { struct pci_dev *dev; pci_for_each_dev(dev) { } } unsigned int pcibios_assign_all_busses(void) { return 0; } #endif Index: Makefile =================================================================== RCS file: /cvsroot/linux-mips/linux/arch/mips/cobalt/Makefile,v retrieving revision 1.4 retrieving revision 1.5 diff -u -d -r1.4 -r1.5 --- Makefile 2001/10/23 18:08:27 1.4 +++ Makefile 2001/10/23 23:01:44 1.5 @@ -16,7 +16,7 @@ O_TARGET := cobalt.o -obj-y += rtc_qube.o irq.o int-handler.o pci.o pci_ops.o \ +obj-y += rtc_qube.o irq.o int-handler.o pci.o pci_fixups.o pci_ops.o \ reset.o setup.o via.o promcon.o ide.o include $(TOPDIR)/Rules.make Index: pci.c =================================================================== RCS file: /cvsroot/linux-mips/linux/arch/mips/cobalt/pci.c,v retrieving revision 1.4 retrieving revision 1.5 diff -u -d -r1.4 -r1.5 --- pci.c 2001/10/23 18:08:27 1.4 +++ pci.c 2001/10/23 23:01:44 1.5 @@ -17,287 +17,8 @@ #ifdef CONFIG_PCI -#define SELF 0 - extern struct pci_ops qube_pci_ops; -static void qube_expansion_slot_bist(void) -{ - unsigned char ctrl; - int timeout = 100000; - - pcibios_read_config_byte(0, (0x0a<<3), PCI_BIST, &ctrl); - if(!(ctrl & PCI_BIST_CAPABLE)) - return; - - pcibios_write_config_byte(0, (0x0a<<3), PCI_BIST, ctrl|PCI_BIST_START); - do { - pcibios_read_config_byte(0, (0x0a<<3), PCI_BIST, &ctrl); - if(!(ctrl & PCI_BIST_START)) - break; - } while(--timeout > 0); - if((timeout <= 0) || (ctrl & PCI_BIST_CODE_MASK)) - printk("PCI: Expansion slot card failed BIST with code %x\n", - (ctrl & PCI_BIST_CODE_MASK)); -} - -static void qube_expansion_slot_fixup(void) -{ - unsigned short pci_cmd; - unsigned long ioaddr_base = 0x10108000; /* It's magic, ask Doug. */ - unsigned long memaddr_base = 0x12000000; - int i; - - /* Enable bits in COMMAND so driver can talk to it. */ - pcibios_read_config_word(0, (0x0a<<3), PCI_COMMAND, &pci_cmd); - pci_cmd |= (PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER); - pcibios_write_config_word(0, (0x0a<<3), PCI_COMMAND, pci_cmd); - - /* Give it a working IRQ. */ - pcibios_write_config_byte(0, (0x0a<<3), PCI_INTERRUPT_LINE, 9); - - /* Fixup base addresses, we only support I/O at the moment. */ - for(i = 0; i <= 5; i++) { - unsigned int regaddr = (PCI_BASE_ADDRESS_0 + (i * 4)); - unsigned int rval, mask, size, alignme, aspace; - unsigned long *basep = &ioaddr_base; - - /* Check type first, punt if non-IO. */ - pcibios_read_config_dword(0, (0x0a<<3), regaddr, &rval); - aspace = (rval & PCI_BASE_ADDRESS_SPACE); - if(aspace != PCI_BASE_ADDRESS_SPACE_IO) - basep = &memaddr_base; - - /* Figure out how much it wants, if anything. 
*/ - pcibios_write_config_dword(0, (0x0a<<3), regaddr, 0xffffffff); - pcibios_read_config_dword(0, (0x0a<<3), regaddr, &rval); - - /* Unused? */ - if(rval == 0) - continue; - - rval &= PCI_BASE_ADDRESS_IO_MASK; - mask = (~rval << 1) | 0x1; - size = (mask & rval) & 0xffffffff; - alignme = size; - if(alignme < 0x400) - alignme = 0x400; - rval = ((*basep + (alignme - 1)) & ~(alignme - 1)); - *basep = (rval + size); - pcibios_write_config_dword(0,(0x0a<<3), regaddr, rval | aspace); - } - qube_expansion_slot_bist(); -} - -#define DEFAULT_BMIBA 0xcc00 /* in case ROM did not init it */ - -static void qube_raq_via_bmIDE_fixup(struct pci_dev *dev) -{ - unsigned short cfgword; - unsigned char lt; - unsigned int bmiba; - int try_again = 1; - - /* Enable Bus Mastering and fast back to back. */ - pci_read_config_word(dev, PCI_COMMAND, &cfgword); - cfgword |= (PCI_COMMAND_FAST_BACK | PCI_COMMAND_MASTER); - pci_write_config_word(dev, PCI_COMMAND, cfgword); - - /* Enable interfaces. ROM only enables primary one. */ - { -#ifdef CONFIG_BLK_DEV_COBALT_SECONDARY - unsigned char iface_enable = 0xb; -#else - unsigned char iface_enable = 0xa; -#endif - pci_write_config_byte(dev, 0x40, iface_enable); - } - - /* Set latency timer to reasonable value. */ - pci_read_config_byte(dev, PCI_LATENCY_TIMER, <); - if (lt < 64) - pci_write_config_byte(dev, PCI_LATENCY_TIMER, 64); - pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, 7); - - /* Get the bmiba base address. */ - do { - pci_read_config_dword(dev, 0x20, &bmiba); - bmiba &= 0xfff0; /* extract port base address */ - if (bmiba) { - break; - } else { - printk("ide: BM-DMA base register is invalid (0x%08x)\n",bmiba); - if (inb(DEFAULT_BMIBA) != 0xff || !try_again) - break; - printk("ide: setting BM-DMA base register to 0x%08x\n",DEFAULT_BMIBA); - pci_write_config_dword(dev, 0x20, DEFAULT_BMIBA|1); - } - } while (try_again--); - - bmiba += 0x10000000; - - dev->resource[4].start = bmiba; -} - -static void qube_raq_tulip_fixup(struct pci_dev *dev) -{ - unsigned short pci_cmd; - extern int cobalt_is_raq; - unsigned int tmp; - - /* Fixup the first tulip located at device PCICONF_ETH0 */ - if (dev->devfn == PCI_DEVSHFT(COBALT_PCICONF_ETH0)) { - /* - * Now tell the Ethernet device that we expect an interrupt at - * IRQ 13 and not the default 189. - * - * The IRQ of the first Tulip is different on Qube and RaQ - * hardware except for the weird first RaQ bringup board, - */ - if (! cobalt_is_raq) { - /* All Qube's route this the same way. */ - pci_write_config_byte(dev, PCI_INTERRUPT_LINE, - COBALT_ETHERNET_IRQ); - } else { - /* Setup the first Tulip on the RAQ */ -#ifndef RAQ_BOARD_1_WITH_HWHACKS - pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 4); -#else - pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 13); -#endif - } - /* Fixup the second tulip located at device PCICONF_ETH1 */ - } else if (dev->devfn == PCI_DEVSHFT(COBALT_PCICONF_ETH1)) { - /* XXX Check for the second Tulip on the RAQ(Already got it!) */ - pci_read_config_dword(dev, PCI_VENDOR_ID, &tmp); - if(tmp == 0xffffffff || tmp == 0x00000000) - return; - - /* Enable the second Tulip device. */ - pci_read_config_word(dev, PCI_COMMAND, &pci_cmd); - pci_cmd |= (PCI_COMMAND_IO | PCI_COMMAND_MASTER); - pci_write_config_word(dev, PCI_COMMAND, pci_cmd); - - /* Give it it's IRQ. */ - /* NOTE: RaQ board #1 has a bunch of green wires which swapped - * the IRQ line values of Tulip 0 and Tulip 1. All other - * boards have eth0=4,eth1=13. 
-DaveM - */ -#ifndef RAQ_BOARD_1_WITH_HWHACKS - pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 13); -#else - pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 4); -#endif - /* And finally, a usable I/O space allocation, right after what - * the first Tulip uses. - */ - pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, 0x10101001); - } -} - -static void qube_raq_scsi_fixup(struct pci_dev *dev) -{ - unsigned short pci_cmd; - extern int cobalt_is_raq; - unsigned int tmp; - - /* - * Tell the SCSI device that we expect an interrupt at - * IRQ 7 and not the default 0. - */ - pci_write_config_byte(dev, PCI_INTERRUPT_LINE, COBALT_SCSI_IRQ); - - if (cobalt_is_raq) { - /* Check for the SCSI on the RAQ */ - pci_read_config_dword(dev, PCI_VENDOR_ID, &tmp); - if(tmp == 0xffffffff || tmp == 0x00000000) - return; - - /* Enable the device. */ - pci_read_config_word(dev, PCI_COMMAND, &pci_cmd); - - pci_cmd |= (PCI_COMMAND_IO | PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY | PCI_COMMAND_INVALIDATE); - pci_write_config_word(dev, PCI_COMMAND, pci_cmd); - - /* Give it it's IRQ. */ - pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 4); - - /* And finally, a usable I/O space allocation, right after what - * the second Tulip uses. - */ - pci_write_config_dword(dev, PCI_BASE_ADDRESS_0, 0x10102001); - pci_write_config_dword(dev, PCI_BASE_ADDRESS_1, 0x00002000); - pci_write_config_dword(dev, PCI_BASE_ADDRESS_2, 0x00100000); - } -} - -static void qube_raq_galileo_fixup(struct pci_dev *dev) -{ - unsigned short galileo_id; - - /* Fix PCI latency-timer and cache-line-size values in Galileo - * host bridge. - */ - pci_write_config_byte(dev, PCI_LATENCY_TIMER, 64); - pci_write_config_byte(dev, PCI_CACHE_LINE_SIZE, 7); - - /* On all machines prior to Q2, we had the STOP line disconnected - * from Galileo to VIA on PCI. The new Galileo does not function - * correctly unless we have it connected. - * - * Therefore we must set the disconnect/retry cycle values to - * something sensible when using the new Galileo. - */ - pci_read_config_word(dev, PCI_REVISION_ID, &galileo_id); - galileo_id &= 0xff; /* mask off class info */ - if (galileo_id == 0x10) { - /* New Galileo, assumes PCI stop line to VIA is connected. */ - *((volatile unsigned int *)0xb4000c04) = 0x00004020; - } else if (galileo_id == 0x1 || galileo_id == 0x2) { - unsigned int timeo; - /* XXX WE MUST DO THIS ELSE GALILEO LOCKS UP! -DaveM */ - timeo = *((volatile unsigned int *)0xb4000c04); - /* Old Galileo, assumes PCI STOP line to VIA is disconnected. */ - *((volatile unsigned int *)0xb4000c04) = 0x0000ffff; - } -} - -static void -qube_pcibios_fixup(struct pci_dev *dev) -{ - extern int cobalt_is_raq; - unsigned int tmp; - - - if (! cobalt_is_raq) { - /* See if there is a device in the expansion slot, if so - * fixup IRQ, fix base addresses, and enable master + - * I/O + memory accesses in config space. - */ - pcibios_read_config_dword(0, 0x0a<<3, PCI_VENDOR_ID, &tmp); - if(tmp != 0xffffffff && tmp != 0x00000000) - qube_expansion_slot_fixup(); - } else { - /* And if we are a 2800 we have to setup the expansion slot - * too. 
- */ - pcibios_read_config_dword(0, 0x0a<<3, PCI_VENDOR_ID, &tmp); - if(tmp != 0xffffffff && tmp != 0x00000000) - qube_expansion_slot_fixup(); - } -} - -struct pci_fixup pcibios_fixups[] = { - /* TBD:: Add each device here and divvy up pcibios_fixup */ - { PCI_FIXUP_HEADER, PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_82C586_1, qube_raq_via_bmIDE_fixup }, - { PCI_FIXUP_HEADER, PCI_VENDOR_ID_DEC, PCI_DEVICE_ID_DEC_21142, qube_raq_tulip_fixup }, - { PCI_FIXUP_HEADER, PCI_VENDOR_ID_GALILEO, PCI_ANY_ID, qube_raq_galileo_fixup }, - /* Not sure about what scsi chips are available on the RAQ, put an - entry for all */ - { PCI_FIXUP_HEADER, PCI_VENDOR_ID_NCR, PCI_DEVICE_ID_NCR_53C860, qube_raq_scsi_fixup }, - { PCI_FIXUP_HEADER, PCI_ANY_ID, PCI_ANY_ID, qube_pcibios_fixup } -}; - void __init pcibios_init(void) { printk("PCI: Probing PCI hardware\n"); @@ -339,10 +60,5 @@ void __init pcibios_fixup_bus(struct pci_bus *bus) { /* We don't appear to have sub-busses to fixup here */ -} - -unsigned __init int pcibios_assign_all_busses(void) -{ - return 1; } #endif /* CONFIG_PCI */ |
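The expansion-slot fixup in pci_fixups.c sizes each BAR the standard way: write 0xffffffff to the register, read it back, mask off the type bits, and the lowest bit that stuck is the size of the decoded window; the running I/O or memory cursor is then rounded up to that alignment before being programmed back. The stand-alone sketch below reproduces only that arithmetic with a simulated read-back value; simulated_bar_readback() and the sample base address are illustrative, not part of the driver, and the two PCI constants mirror the kernel header.

```c
#include <stdio.h>
#include <stdint.h>

#define PCI_BASE_ADDRESS_SPACE_IO 0x01
#define PCI_BASE_ADDRESS_IO_MASK  (~0x03UL)

/* Simulated read-back after writing 0xffffffff to a 256-byte I/O BAR. */
static uint32_t simulated_bar_readback(void)
{
    return 0xffffff01u; /* bit 0 set: I/O space, 256-byte decode */
}

int main(void)
{
    unsigned long ioaddr_base = 0x10108000UL; /* running allocation cursor */
    uint32_t rval = simulated_bar_readback();
    uint32_t aspace = rval & PCI_BASE_ADDRESS_SPACE_IO;

    /* Same arithmetic as the fixup: the lowest writable bit is the size. */
    rval &= PCI_BASE_ADDRESS_IO_MASK;
    uint32_t mask = (~rval << 1) | 0x1;
    uint32_t size = mask & rval;            /* equivalent to ~rval + 1 here */

    uint32_t alignme = size < 0x400 ? 0x400 : size;
    unsigned long assigned =
        (ioaddr_base + (alignme - 1)) & ~(unsigned long)(alignme - 1);
    ioaddr_base = assigned + size;

    printf("size=%#x assigned=%#lx next cursor=%#lx space=%s\n",
           size, assigned, ioaddr_base, aspace ? "I/O" : "memory");
    return 0;
}
```

Running it prints a 0x100-byte window assigned at 0x10108000, which is exactly what the loop in qube_expansion_slot_fixup() computes for each BAR in turn.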
From: James S. <jsi...@us...> - 2001-10-23 21:52:10
Update of /cvsroot/linux-mips/linux/arch/mips/cobalt In directory usw-pr-cvs1:/tmp/cvs-serv5269 Modified Files: pci_ops.c Log Message: Ah. Much better code. Index: pci_ops.c =================================================================== RCS file: /cvsroot/linux-mips/linux/arch/mips/cobalt/pci_ops.c,v retrieving revision 1.1 retrieving revision 1.2 diff -u -d -r1.1 -r1.2 --- pci_ops.c 2001/10/23 18:08:27 1.1 +++ pci_ops.c 2001/10/23 21:52:05 1.2 @@ -80,16 +80,6 @@ { NULL, NULL, NULL, NULL, NULL} }; -static __inline__ int pci_range_ck(struct pci_dev *dev) -{ - if ((dev->bus->number == 0) - && ((PCI_SLOT (dev->devfn) == 0) - || ((PCI_SLOT (dev->devfn) > 6) - && (PCI_SLOT (dev->devfn) <= 12)))) - return 0; /* OK device number */ - return -1; /* NOT ok device number */ -} - #define PCI_CFG_DATA ((volatile unsigned long *)0xb4000cfc) #define PCI_CFG_CTRL ((volatile unsigned long *)0xb4000cf8) @@ -119,30 +109,34 @@ * if a pci config cycle read fails, the data returned * will be 0xffffffff. */ - return 0; + + if ((dev->bus->number == 0) + && ((PCI_SLOT (dev->devfn) == 0) + || ((PCI_SLOT (dev->devfn) > 6) + && (PCI_SLOT (dev->devfn) <= 12)))) { + /* OK device number */ + if (access_type == PCI_ACCESS_READ) { + PCI_CFG_SET(dev, (where & ~0x3)); + *data = *PCI_CFG_DATA; + } else + *PCI_CFG_DATA = *data; + return 0; + } + return -1; /* NOT ok device number */ } static int read_config_byte (struct pci_dev *dev, int where, u8 *val) { - if (pci_range_ck (dev)) { - *val = 0xff; - return PCIBIOS_DEVICE_NOT_FOUND; - } - PCI_CFG_SET(dev, (where & ~0x3)); - *val = *PCI_CFG_DATA >> ((where & 3) * 8); -/* u32 data = 0; if (config_access(PCI_ACCESS_READ, dev, where, &data)) { *val = 0xff; - return -1; + return PCIBIOS_DEVICE_NOT_FOUND; } - *val = (data >> ((where & 3) << 3)) & 0xff; DBG("cfg read byte: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, *val); -*/ return PCIBIOS_SUCCESSFUL; } @@ -150,59 +144,36 @@ static int read_config_word (struct pci_dev *dev, int where, u16 *val) { - if (where & 0x1) - return PCIBIOS_BAD_REGISTER_NUMBER; - if (pci_range_ck (dev)) { - *val = 0xffff; - return PCIBIOS_DEVICE_NOT_FOUND; - } - PCI_CFG_SET(dev, (where & ~0x3)); - *val = *PCI_CFG_DATA >> ((where & 3) * 8); -/* u32 data = 0; - if (where & 1) + if (where & 0x1) return PCIBIOS_BAD_REGISTER_NUMBER; if (config_access(PCI_ACCESS_READ, dev, where, &data)) { *val = 0xffff; - return -1; + return PCIBIOS_DEVICE_NOT_FOUND; } - *val = (data >> ((where & 3) << 3)) & 0xffff; DBG("cfg read word: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, *val); -*/ return PCIBIOS_SUCCESSFUL; } static int read_config_dword (struct pci_dev *dev, int where, u32 *val) { - if (where & 0x3) - return PCIBIOS_BAD_REGISTER_NUMBER; - if (pci_range_ck (dev)) { - *val = 0xFFFFFFFF; - return PCIBIOS_DEVICE_NOT_FOUND; - } - PCI_CFG_SET(dev, where); - *val = *PCI_CFG_DATA; - return PCIBIOS_SUCCESSFUL; -/* u32 data = 0; - if (where & 3) + if (where & 0x3) return PCIBIOS_BAD_REGISTER_NUMBER; - + if (config_access(PCI_ACCESS_READ, dev, where, &data)) { - *val = 0xffffffff; - return -1; + *val = 0xFFFFFFFF; + return PCIBIOS_DEVICE_NOT_FOUND; } - *val = data; DBG("cfg read dword: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, *val); -*/ return PCIBIOS_SUCCESSFUL; } @@ -210,63 +181,41 @@ static int write_config_byte (struct pci_dev *dev, int where, u8 val) { - unsigned long tmp; - - if (pci_range_ck (dev)) - return PCIBIOS_DEVICE_NOT_FOUND; - PCI_CFG_SET(dev, (where & ~0x3)); - 
tmp = *PCI_CFG_DATA; - tmp &= ~(0xff << ((where & 0x3) * 8)); - tmp |= (val << ((where & 0x3) * 8)); - *PCI_CFG_DATA = tmp; -/* u32 data = 0; - + if (config_access(PCI_ACCESS_READ, dev, where, &data)) - return -1; + return PCIBIOS_DEVICE_NOT_FOUND; data = (data & ~(0xff << ((where & 3) << 3))) | (val << ((where & 3) << 3)); - DBG("cfg write byte: bus %d dev_fn %x where %x: val %x\n", + + DBG("cfg write byte: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, val); - + if (config_access(PCI_ACCESS_WRITE, dev, where, &data)) return -1; -*/ return PCIBIOS_SUCCESSFUL; } static int write_config_word (struct pci_dev *dev, int where, u16 val) { - unsigned long tmp; + u32 data = 0; if (where & 0x1) return PCIBIOS_BAD_REGISTER_NUMBER; - if (pci_range_ck (dev)) - return PCIBIOS_DEVICE_NOT_FOUND; - PCI_CFG_SET(dev, (where & ~0x3)); - tmp = *PCI_CFG_DATA; - tmp &= ~(0xffff << ((where & 0x3) * 8)); - tmp |= (val << ((where & 0x3) * 8)); - *PCI_CFG_DATA = tmp; -/* - u32 data = 0; - if (where & 1) - return PCIBIOS_BAD_REGISTER_NUMBER; - if (config_access(PCI_ACCESS_READ, dev, where, &data)) - return -1; + return PCIBIOS_DEVICE_NOT_FOUND; data = (data & ~(0xffff << ((where & 3) << 3))) | (val << ((where & 3) << 3)); + DBG("cfg write word: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, val); if (config_access(PCI_ACCESS_WRITE, dev, where, &data)) - return -1; -*/ + return -1; return PCIBIOS_SUCCESSFUL; } @@ -275,19 +224,15 @@ { if (where & 0x3) return PCIBIOS_BAD_REGISTER_NUMBER; - if (pci_range_ck (dev)) - return PCIBIOS_DEVICE_NOT_FOUND; - PCI_CFG_SET(dev, where); - *PCI_CFG_DATA = val; -/* - if (where & 3) - return PCIBIOS_BAD_REGISTER_NUMBER; if (config_access(PCI_ACCESS_WRITE, dev, where, &val)) - return -1; - DBG("cfg write dword: bus %d dev_fn %x where %x: val %x\n", + return PCIBIOS_DEVICE_NOT_FOUND; + + DBG("cfg write dword: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, val); -*/ + + if (config_access(PCI_ACCESS_WRITE, dev, where, &val)) + return -1; return PCIBIOS_SUCCESSFUL; } |
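The point of this rewrite is that every pci_ops callback now funnels through one config_access() routine that only ever issues aligned 32-bit configuration cycles: byte and word reads shift and mask the returned dword, byte and word writes do a read-modify-write of the containing dword, and a failed read must come back as all-ones. A minimal sketch of that dispatcher pattern follows, using an in-memory array in place of the Galileo CFG_CTRL/CFG_DATA registers; the hardware access inside config_access() is the only board-specific piece and is deliberately faked here.

```c
#include <stdio.h>
#include <stdint.h>

#define PCI_ACCESS_READ  0
#define PCI_ACCESS_WRITE 1

/* Fake 256-byte configuration space, dword-addressed.  In the real driver
 * this is where CFG_CTRL is programmed and CFG_DATA is read or written. */
static uint32_t cfg_space[64];

/* Single core routine: every access is an aligned 32-bit cycle, which is
 * the shape the rewritten pci_ops funnel everything through. */
static int config_access(int type, unsigned where, uint32_t *data)
{
    if (where >= sizeof(cfg_space))
        return -1;
    if (type == PCI_ACCESS_READ)
        *data = cfg_space[where >> 2];
    else
        cfg_space[where >> 2] = *data;
    return 0;
}

static int read_config_byte(unsigned where, uint8_t *val)
{
    uint32_t data = 0;

    if (config_access(PCI_ACCESS_READ, where, &data)) {
        *val = 0xff;            /* failed config reads must look like all-ones */
        return -1;
    }
    *val = (data >> ((where & 3) << 3)) & 0xff;
    return 0;
}

static int write_config_byte(unsigned where, uint8_t val)
{
    uint32_t data = 0;

    if (config_access(PCI_ACCESS_READ, where, &data))
        return -1;
    /* Read-modify-write: replace only the addressed byte lane. */
    data = (data & ~(0xffu << ((where & 3) << 3))) |
           ((uint32_t)val << ((where & 3) << 3));
    return config_access(PCI_ACCESS_WRITE, where, &data);
}

int main(void)
{
    uint8_t v = 0;

    write_config_byte(0x3d, 0x09);
    read_config_byte(0x3d, &v);
    printf("offset 0x3d reads back 0x%02x\n", v);
    return 0;
}
```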
From: James S. <jsi...@us...> - 2001-10-23 18:08:30
Update of /cvsroot/linux-mips/linux/arch/mips/cobalt In directory usw-pr-cvs1:/tmp/cvs-serv7734 Modified Files: Makefile pci.c Added Files: pci_ops.c Log Message: Gradually migrating to new pci code. --- NEW FILE: pci_ops.c --- /* * Cobalt Cube specific pci support * * Copyright 2001, James Simmons, jsi...@tr... * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the * Free Software Foundation; either version 2 of the License, or (at your * option) any later version. * * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * * You should have received a copy of the GNU General Public License along * with this program; if not, write to the Free Software Foundation, Inc., * 675 Mass Ave, Cambridge, MA 02139, USA. */ #include <linux/config.h> #ifdef CONFIG_PCI #include <linux/types.h> #include <linux/pci.h> #include <linux/kernel.h> #include <linux/init.h> #include <asm/cobalt/cobalt.h> #include <asm/pci_channel.h> #define PCI_ACCESS_READ 0 #define PCI_ACCESS_WRITE 1 #undef DEBUG #ifdef DEBUG #define DBG(x...) printk(x) #else #define DBG(x...) #endif #define MEM_BASE 0x12000000 #define MEM_SIZE 0x02000000 #define IO_BASE 0x10000000 #define IO_SIZE 0x02000000 static struct resource pci_io_resource = { "pci IO space", IO_BASE, IO_BASE + IO_SIZE, IORESOURCE_IO}; static struct resource pci_mem_resource = { "pci memory space", MEM_BASE, MEM_BASE + MEM_SIZE, IORESOURCE_MEM }; extern struct pci_ops qube_pci_ops; /* * The mips_pci_channels array has all descriptors for all * pci bus controllers. Usually on most boards there's only * one pci controller to worry about. * * Note that the '0' and '0xff' below indicate the first * and last "devfn" to scan. You can use these variables * to limit the scan. */ struct pci_channel mips_pci_channels[] = { { &qube_pci_ops, &pci_io_resource, &pci_mem_resource, 0, 0xff }, { NULL, NULL, NULL, NULL, NULL} }; static __inline__ int pci_range_ck(struct pci_dev *dev) { if ((dev->bus->number == 0) && ((PCI_SLOT (dev->devfn) == 0) || ((PCI_SLOT (dev->devfn) > 6) && (PCI_SLOT (dev->devfn) <= 12)))) return 0; /* OK device number */ return -1; /* NOT ok device number */ } #define PCI_CFG_DATA ((volatile unsigned long *)0xb4000cfc) #define PCI_CFG_CTRL ((volatile unsigned long *)0xb4000cf8) #define PCI_CFG_SET(dev,where) \ ((*PCI_CFG_CTRL) = (0x80000000 | (PCI_SLOT ((dev)->devfn) << 11) | \ (PCI_FUNC ((dev)->devfn) << 8) | (where))) /* * Typically there is one core config routine which the pci_ops * functions call. */ static int config_access(unsigned char access_type, struct pci_dev *dev, unsigned char where, u32 *data) { /* * The config routine usually does dword accesses only * The functions calling the routine then have to mask * the returned value. 
*/ /* * IMPORTANT * If a pci config cycle fails, it's *very* important * that if the cycle requested is READ, you set *data * to 0xffffffff. The pci_auto code does not check the * return value of the pci_ops functions. It expects that * if a pci config cycle read fails, the data returned * will be 0xffffffff. */ return 0; } static int read_config_byte (struct pci_dev *dev, int where, u8 *val) { if (pci_range_ck (dev)) { *val = 0xff; return PCIBIOS_DEVICE_NOT_FOUND; } PCI_CFG_SET(dev, (where & ~0x3)); *val = *PCI_CFG_DATA >> ((where & 3) * 8); /* u32 data = 0; if (config_access(PCI_ACCESS_READ, dev, where, &data)) { *val = 0xff; return -1; } *val = (data >> ((where & 3) << 3)) & 0xff; DBG("cfg read byte: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, *val); */ return PCIBIOS_SUCCESSFUL; } static int read_config_word (struct pci_dev *dev, int where, u16 *val) { if (where & 0x1) return PCIBIOS_BAD_REGISTER_NUMBER; if (pci_range_ck (dev)) { *val = 0xffff; return PCIBIOS_DEVICE_NOT_FOUND; } PCI_CFG_SET(dev, (where & ~0x3)); *val = *PCI_CFG_DATA >> ((where & 3) * 8); /* u32 data = 0; if (where & 1) return PCIBIOS_BAD_REGISTER_NUMBER; if (config_access(PCI_ACCESS_READ, dev, where, &data)) { *val = 0xffff; return -1; } *val = (data >> ((where & 3) << 3)) & 0xffff; DBG("cfg read word: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, *val); */ return PCIBIOS_SUCCESSFUL; } static int read_config_dword (struct pci_dev *dev, int where, u32 *val) { if (where & 0x3) return PCIBIOS_BAD_REGISTER_NUMBER; if (pci_range_ck (dev)) { *val = 0xFFFFFFFF; return PCIBIOS_DEVICE_NOT_FOUND; } PCI_CFG_SET(dev, where); *val = *PCI_CFG_DATA; return PCIBIOS_SUCCESSFUL; /* u32 data = 0; if (where & 3) return PCIBIOS_BAD_REGISTER_NUMBER; if (config_access(PCI_ACCESS_READ, dev, where, &data)) { *val = 0xffffffff; return -1; } *val = data; DBG("cfg read dword: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, *val); */ return PCIBIOS_SUCCESSFUL; } static int write_config_byte (struct pci_dev *dev, int where, u8 val) { unsigned long tmp; if (pci_range_ck (dev)) return PCIBIOS_DEVICE_NOT_FOUND; PCI_CFG_SET(dev, (where & ~0x3)); tmp = *PCI_CFG_DATA; tmp &= ~(0xff << ((where & 0x3) * 8)); tmp |= (val << ((where & 0x3) * 8)); *PCI_CFG_DATA = tmp; /* u32 data = 0; if (config_access(PCI_ACCESS_READ, dev, where, &data)) return -1; data = (data & ~(0xff << ((where & 3) << 3))) | (val << ((where & 3) << 3)); DBG("cfg write byte: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, val); if (config_access(PCI_ACCESS_WRITE, dev, where, &data)) return -1; */ return PCIBIOS_SUCCESSFUL; } static int write_config_word (struct pci_dev *dev, int where, u16 val) { unsigned long tmp; if (where & 0x1) return PCIBIOS_BAD_REGISTER_NUMBER; if (pci_range_ck (dev)) return PCIBIOS_DEVICE_NOT_FOUND; PCI_CFG_SET(dev, (where & ~0x3)); tmp = *PCI_CFG_DATA; tmp &= ~(0xffff << ((where & 0x3) * 8)); tmp |= (val << ((where & 0x3) * 8)); *PCI_CFG_DATA = tmp; /* u32 data = 0; if (where & 1) return PCIBIOS_BAD_REGISTER_NUMBER; if (config_access(PCI_ACCESS_READ, dev, where, &data)) return -1; data = (data & ~(0xffff << ((where & 3) << 3))) | (val << ((where & 3) << 3)); DBG("cfg write word: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, val); if (config_access(PCI_ACCESS_WRITE, dev, where, &data)) return -1; */ return PCIBIOS_SUCCESSFUL; } static int write_config_dword(struct pci_dev *dev, int where, u32 val) { if (where & 0x3) return 
PCIBIOS_BAD_REGISTER_NUMBER; if (pci_range_ck (dev)) return PCIBIOS_DEVICE_NOT_FOUND; PCI_CFG_SET(dev, where); *PCI_CFG_DATA = val; /* if (where & 3) return PCIBIOS_BAD_REGISTER_NUMBER; if (config_access(PCI_ACCESS_WRITE, dev, where, &val)) return -1; DBG("cfg write dword: bus %d dev_fn %x where %x: val %x\n", dev->bus->number, dev->devfn, where, val); */ return PCIBIOS_SUCCESSFUL; } struct pci_ops qube_pci_ops = { read_config_byte, read_config_word, read_config_dword, write_config_byte, write_config_word, write_config_dword }; #endif /* CONFIG_PCI */ Index: Makefile =================================================================== RCS file: /cvsroot/linux-mips/linux/arch/mips/cobalt/Makefile,v retrieving revision 1.3 retrieving revision 1.4 diff -u -d -r1.3 -r1.4 --- Makefile 2001/09/13 17:54:57 1.3 +++ Makefile 2001/10/23 18:08:27 1.4 @@ -16,7 +16,7 @@ O_TARGET := cobalt.o -obj-y += rtc_qube.o irq.o int-handler.o pci.o \ +obj-y += rtc_qube.o irq.o int-handler.o pci.o pci_ops.o \ reset.o setup.o via.o promcon.o ide.o include $(TOPDIR)/Rules.make Index: pci.c =================================================================== RCS file: /cvsroot/linux-mips/linux/arch/mips/cobalt/pci.c,v retrieving revision 1.3 retrieving revision 1.4 diff -u -d -r1.3 -r1.4 --- pci.c 2001/09/04 20:12:09 1.3 +++ pci.c 2001/10/23 18:08:27 1.4 @@ -19,6 +19,8 @@ #define SELF 0 +extern struct pci_ops qube_pci_ops; + static void qube_expansion_slot_bist(void) { unsigned char ctrl; @@ -294,110 +296,6 @@ entry for all */ { PCI_FIXUP_HEADER, PCI_VENDOR_ID_NCR, PCI_DEVICE_ID_NCR_53C860, qube_raq_scsi_fixup }, { PCI_FIXUP_HEADER, PCI_ANY_ID, PCI_ANY_ID, qube_pcibios_fixup } -}; - -static __inline__ int pci_range_ck(struct pci_dev *dev) -{ - if ((dev->bus->number == 0) - && ((PCI_SLOT (dev->devfn) == 0) - || ((PCI_SLOT (dev->devfn) > 6) - && (PCI_SLOT (dev->devfn) <= 12)))) - return 0; /* OK device number */ - return -1; /* NOT ok device number */ -} - -#define PCI_CFG_DATA ((volatile unsigned long *)0xb4000cfc) -#define PCI_CFG_CTRL ((volatile unsigned long *)0xb4000cf8) - -#define PCI_CFG_SET(dev,where) \ - ((*PCI_CFG_CTRL) = (0x80000000 | (PCI_SLOT ((dev)->devfn) << 11) | \ - (PCI_FUNC ((dev)->devfn) << 8) | (where))) - -static int qube_pci_read_config_dword(struct pci_dev *dev, int where, u32 *val) -{ - if (where & 0x3) - return PCIBIOS_BAD_REGISTER_NUMBER; - if (pci_range_ck (dev)) { - *val = 0xFFFFFFFF; - return PCIBIOS_DEVICE_NOT_FOUND; - } - PCI_CFG_SET(dev, where); - *val = *PCI_CFG_DATA; - return PCIBIOS_SUCCESSFUL; -} - -static int qube_pci_read_config_word(struct pci_dev *dev, int where, u16 *val) -{ - if (where & 0x1) - return PCIBIOS_BAD_REGISTER_NUMBER; - if (pci_range_ck (dev)) { - *val = 0xffff; - return PCIBIOS_DEVICE_NOT_FOUND; - } - PCI_CFG_SET(dev, (where & ~0x3)); - *val = *PCI_CFG_DATA >> ((where & 3) * 8); - return PCIBIOS_SUCCESSFUL; -} - -static int qube_pci_read_config_byte(struct pci_dev *dev, int where, u8 *val) -{ - if (pci_range_ck (dev)) { - *val = 0xff; - return PCIBIOS_DEVICE_NOT_FOUND; - } - PCI_CFG_SET(dev, (where & ~0x3)); - *val = *PCI_CFG_DATA >> ((where & 3) * 8); - return PCIBIOS_SUCCESSFUL; -} - -static int qube_pci_write_config_dword(struct pci_dev *dev, int where, u32 val) -{ - if (where & 0x3) - return PCIBIOS_BAD_REGISTER_NUMBER; - if (pci_range_ck (dev)) - return PCIBIOS_DEVICE_NOT_FOUND; - PCI_CFG_SET(dev, where); - *PCI_CFG_DATA = val; - return PCIBIOS_SUCCESSFUL; -} - -static int qube_pci_write_config_word(struct pci_dev *dev, int where, u16 val) -{ - unsigned long 
tmp; - - if (where & 0x1) - return PCIBIOS_BAD_REGISTER_NUMBER; - if (pci_range_ck (dev)) - return PCIBIOS_DEVICE_NOT_FOUND; - PCI_CFG_SET(dev, (where & ~0x3)); - tmp = *PCI_CFG_DATA; - tmp &= ~(0xffff << ((where & 0x3) * 8)); - tmp |= (val << ((where & 0x3) * 8)); - *PCI_CFG_DATA = tmp; - return PCIBIOS_SUCCESSFUL; -} - -static int qube_pci_write_config_byte(struct pci_dev *dev, int where, u8 val) -{ - unsigned long tmp; - - if (pci_range_ck (dev)) - return PCIBIOS_DEVICE_NOT_FOUND; - PCI_CFG_SET(dev, (where & ~0x3)); - tmp = *PCI_CFG_DATA; - tmp &= ~(0xff << ((where & 0x3) * 8)); - tmp |= (val << ((where & 0x3) * 8)); - *PCI_CFG_DATA = tmp; - return PCIBIOS_SUCCESSFUL; -} - -struct pci_ops qube_pci_ops = { - qube_pci_read_config_byte, - qube_pci_read_config_word, - qube_pci_read_config_dword, - qube_pci_write_config_byte, - qube_pci_write_config_word, - qube_pci_write_config_dword }; void __init pcibios_init(void) |
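The PCI_CFG_SET macro above implements the classic configuration mechanism on the Galileo host bridge: a type-0 configuration address (enable bit 31, device number in bits 15:11, function in bits 10:8, dword-aligned register in bits 7:2) is written to the control register at 0xb4000cf8, after which the addressed dword is accessible at 0xb4000cfc. The sketch below only builds that address word; it never touches hardware, and the slot/function/register values in main() are arbitrary examples.

```c
#include <stdio.h>
#include <stdint.h>

/* Kseg1 (uncached) addresses of the Galileo config registers used by
 * PCI_CFG_SET; listed for reference only, the sketch never dereferences them. */
#define GT_PCI_CFG_CTRL 0xb4000cf8UL
#define GT_PCI_CFG_DATA 0xb4000cfcUL

/* Build the type-0 configuration address written to CFG_CTRL:
 * bit 31 = enable, bits 15:11 = device (slot), bits 10:8 = function,
 * bits 7:2 = dword-aligned register offset. */
static uint32_t pci_cfg_addr(unsigned slot, unsigned func, unsigned reg)
{
    return 0x80000000u | (slot << 11) | (func << 8) | (reg & ~3u);
}

int main(void)
{
    /* Slot 7, function 0: vendor ID dword (0x00) and the dword
     * holding PCI_INTERRUPT_LINE (0x3c). */
    printf("CFG_CTRL word for reg 0x00: %#x\n", pci_cfg_addr(7, 0, 0x00));
    printf("CFG_CTRL word for reg 0x3c: %#x\n", pci_cfg_addr(7, 0, 0x3c));
    return 0;
}
```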
Update of /cvsroot/linux-mips/linux/arch/mips/mm In directory usw-pr-cvs1:/tmp/cvs-serv13923/mm Modified Files: Makefile loadmmu.c pg-mips32.c pg-r3k.c pg-r4k.S pg-r5432.c pg-rm7k.c tlb-r3k.c Added Files: c-andes.c c-mips32.c c-r3k.c c-r4k.c c-r5432.c c-rm7k.c c-sb1.c c-tx39.c pg-andes.S pg-sb1.c Removed Files: andes.c mips32.c pg-andes.c r2300.c r4xx0.c r5432.c rm7k.c sb1.c Log Message: More berzerking in the cache code. --- NEW FILE: c-andes.c --- /* * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * * Copyright (C) 1997, 1998, 1999 Ralf Baechle (ra...@gn...) * Copyright (C) 1999 Silicon Graphics, Inc. * Copyright (C) 2000 Kanoj Sarcar (ka...@sg...) */ #include <linux/init.h> #include <linux/kernel.h> #include <linux/sched.h> #include <linux/mm.h> #include <asm/page.h> #include <asm/pgtable.h> #include <asm/r10kcache.h> #include <asm/system.h> #include <asm/sgialib.h> #include <asm/mmu_context.h> static int scache_lsz64; static void andes_flush_cache_all(void) { } static void andes_flush_cache_mm(struct mm_struct *mm) { } static void andes_flush_cache_range(struct mm_struct *mm, unsigned long start, unsigned long end) { } static void andes_flush_cache_page(struct vm_area_struct *vma, unsigned long page) { } static void andes_flush_page_to_ram(struct page *page) { } /* Cache operations. These are only used with the virtual memory system, not for non-coherent I/O so it's ok to ignore the secondary caches. */ static void andes_flush_cache_l1(void) { blast_dcache32(); blast_icache64(); } /* * This is only used during initialization time. vmalloc() also calls * this, but that will be changed pretty soon. */ static void andes_flush_cache_l2(void) { switch (sc_lsize()) { case 64: blast_scache64(); break; case 128: blast_scache128(); break; default: printk("Unknown L2 line size\n"); while(1); } } void andes_flush_icache_page(unsigned long page) { if (scache_lsz64) blast_scache64_page(page); else blast_scache128_page(page); } static void andes_flush_cache_sigtramp(unsigned long addr) { protected_writeback_dcache_line(addr & ~(dc_lsize - 1)); protected_flush_icache_line(addr & ~(ic_lsize - 1)); } void __init ld_mmu_andes(void) { printk("CPU revision is: %08x\n", read_32bit_cp0_register(CP0_PRID)); printk("Primary instruction cache %dkb, linesize %d bytes\n", icache_size >> 10, ic_lsize); printk("Primary data cache %dkb, linesize %d bytes\n", dcache_size >> 10, dc_lsize); printk("Secondary cache sized at %ldK, linesize %ld\n", scache_size() >> 10, sc_lsize()); _clear_page = andes_clear_page; _copy_page = andes_copy_page; _flush_cache_all = andes_flush_cache_all; _flush_cache_mm = andes_flush_cache_mm; _flush_cache_page = andes_flush_cache_page; _flush_page_to_ram = andes_flush_page_to_ram; _flush_cache_l1 = andes_flush_cache_l1; _flush_cache_l2 = andes_flush_cache_l2; _flush_cache_sigtramp = andes_flush_cache_sigtramp; switch (sc_lsize()) { case 64: scache_lsz64 = 1; break; case 128: scache_lsz64 = 0; break; default: printk("Unknown L2 line size\n"); while(1); } update_mmu_cache = andes_update_mmu_cache; flush_cache_l1(); } --- NEW FILE: c-mips32.c --- /* * Kevin D. Kissell, ke...@mi... and Carsten Langgaard, car...@mi... * Copyright (C) 2000 MIPS Technologies, Inc. All rights reserved. * * This program is free software; you can distribute it and/or modify it * under the terms of the GNU General Public License (Version 2) as * published by the Free Software Foundation. 
* * This program is distributed in the hope it will be useful, but WITHOUT * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License * for more details. * * You should have received a copy of the GNU General Public License along * with this program; if not, write to the Free Software Foundation, Inc., * 59 Temple Place - Suite 330, Boston MA 02111-1307, USA. * * MIPS32 CPU variant specific MMU/Cache routines. */ #include <linux/init.h> #include <linux/kernel.h> #include <linux/sched.h> #include <linux/mm.h> #include <asm/bootinfo.h> #include <asm/cpu.h> #include <asm/bcache.h> #include <asm/io.h> #include <asm/page.h> #include <asm/pgtable.h> #include <asm/system.h> #include <asm/mmu_context.h> /* CP0 hazard avoidance. */ #define BARRIER __asm__ __volatile__(".set noreorder\n\t" \ "nop; nop; nop; nop; nop; nop;\n\t" \ ".set reorder\n\t") /* Primary cache parameters. */ int icache_size, dcache_size; /* Size in bytes */ int ic_lsize, dc_lsize; /* LineSize in bytes */ /* Secondary cache (if present) parameters. */ unsigned int scache_size, sc_lsize; /* Again, in bytes */ #include <asm/cacheops.h> #include <asm/mips32_cache.h> #undef DEBUG_CACHE /* * Dummy cache handling routines for machines without boardcaches */ static void no_sc_noop(void) {} static struct bcache_ops no_sc_ops = { (void *)no_sc_noop, (void *)no_sc_noop, (void *)no_sc_noop, (void *)no_sc_noop }; struct bcache_ops *bcops = &no_sc_ops; static inline void mips32_flush_cache_all_sc(void) { unsigned long flags; __save_and_cli(flags); blast_dcache(); blast_icache(); blast_scache(); __restore_flags(flags); } static inline void mips32_flush_cache_all_pc(void) { unsigned long flags; __save_and_cli(flags); blast_dcache(); blast_icache(); __restore_flags(flags); } static void mips32_flush_cache_range_sc(struct mm_struct *mm, unsigned long start, unsigned long end) { struct vm_area_struct *vma; unsigned long flags; if(mm->context == 0) return; start &= PAGE_MASK; #ifdef DEBUG_CACHE printk("crange[%d,%08lx,%08lx]", (int)mm->context, start, end); #endif vma = find_vma(mm, start); if(vma) { if(mm->context != current->mm->context) { mips32_flush_cache_all_sc(); } else { pgd_t *pgd; pmd_t *pmd; pte_t *pte; __save_and_cli(flags); while(start < end) { pgd = pgd_offset(mm, start); pmd = pmd_offset(pgd, start); pte = pte_offset(pmd, start); if(pte_val(*pte) & _PAGE_VALID) blast_scache_page(start); start += PAGE_SIZE; } __restore_flags(flags); } } } static void mips32_flush_cache_range_pc(struct mm_struct *mm, unsigned long start, unsigned long end) { if(mm->context != 0) { unsigned long flags; #ifdef DEBUG_CACHE printk("crange[%d,%08lx,%08lx]", (int)mm->context, start, end); #endif __save_and_cli(flags); blast_dcache(); blast_icache(); __restore_flags(flags); } } /* * On architectures like the Sparc, we could get rid of lines in * the cache created only by a certain context, but on the MIPS * (and actually certain Sparc's) we cannot. 
*/ static void mips32_flush_cache_mm_sc(struct mm_struct *mm) { if(mm->context != 0) { #ifdef DEBUG_CACHE printk("cmm[%d]", (int)mm->context); #endif mips32_flush_cache_all_sc(); } } static void mips32_flush_cache_mm_pc(struct mm_struct *mm) { if(mm->context != 0) { #ifdef DEBUG_CACHE printk("cmm[%d]", (int)mm->context); #endif mips32_flush_cache_all_pc(); } } static void mips32_flush_cache_page_sc(struct vm_area_struct *vma, unsigned long page) { struct mm_struct *mm = vma->vm_mm; unsigned long flags; pgd_t *pgdp; pmd_t *pmdp; pte_t *ptep; /* * If ownes no valid ASID yet, cannot possibly have gotten * this page into the cache. */ if (mm->context == 0) return; #ifdef DEBUG_CACHE printk("cpage[%d,%08lx]", (int)mm->context, page); #endif __save_and_cli(flags); page &= PAGE_MASK; pgdp = pgd_offset(mm, page); pmdp = pmd_offset(pgdp, page); ptep = pte_offset(pmdp, page); /* * If the page isn't marked valid, the page cannot possibly be * in the cache. */ if (!(pte_val(*ptep) & _PAGE_VALID)) goto out; /* * Doing flushes for another ASID than the current one is * too difficult since R4k caches do a TLB translation * for every cache flush operation. So we do indexed flushes * in that case, which doesn't overly flush the cache too much. */ if (mm->context != current->active_mm->context) { /* * Do indexed flush, too much work to get the (possible) * tlb refills to work correctly. */ page = (KSEG0 + (page & (scache_size - 1))); blast_dcache_page_indexed(page); blast_scache_page_indexed(page); } else blast_scache_page(page); out: __restore_flags(flags); } static void mips32_flush_cache_page_pc(struct vm_area_struct *vma, unsigned long page) { struct mm_struct *mm = vma->vm_mm; unsigned long flags; pgd_t *pgdp; pmd_t *pmdp; pte_t *ptep; /* * If ownes no valid ASID yet, cannot possibly have gotten * this page into the cache. */ if (mm->context == 0) return; #ifdef DEBUG_CACHE printk("cpage[%d,%08lx]", (int)mm->context, page); #endif __save_and_cli(flags); page &= PAGE_MASK; pgdp = pgd_offset(mm, page); pmdp = pmd_offset(pgdp, page); ptep = pte_offset(pmdp, page); /* * If the page isn't marked valid, the page cannot possibly be * in the cache. */ if (!(pte_val(*ptep) & _PAGE_VALID)) goto out; /* * Doing flushes for another ASID than the current one is * too difficult since Mips32 caches do a TLB translation * for every cache flush operation. So we do indexed flushes * in that case, which doesn't overly flush the cache too much. */ if (mm == current->active_mm) { blast_dcache_page(page); } else { /* Do indexed flush, too much work to get the (possible) * tlb refills to work correctly. */ page = (KSEG0 + (page & (dcache_size - 1))); blast_dcache_page_indexed(page); } out: __restore_flags(flags); } /* If the addresses passed to these routines are valid, they are * either: * * 1) In KSEG0, so we can do a direct flush of the page. * 2) In KSEG2, and since every process can translate those * addresses all the time in kernel mode we can do a direct * flush. * 3) In KSEG1, no flush necessary. */ static void mips32_flush_page_to_ram_sc(struct page *page) { blast_scache_page((unsigned long)page_address(page)); } static void mips32_flush_page_to_ram_pc(struct page *page) { blast_dcache_page((unsigned long)page_address(page)); } static void mips32_flush_icache_page_s(struct vm_area_struct *vma, struct page *page) { /* * We did an scache flush therefore PI is already clean. 
*/ } static void mips32_flush_icache_range(unsigned long start, unsigned long end) { flush_cache_all(); } static void mips32_flush_icache_page(struct vm_area_struct *vma, struct page *page) { int address; if (!(vma->vm_flags & VM_EXEC)) return; address = KSEG0 + ((unsigned long)page_address(page) & PAGE_MASK & (dcache_size - 1)); blast_icache_page_indexed(address); } /* * Writeback and invalidate the primary cache dcache before DMA. */ static void mips32_dma_cache_wback_inv_pc(unsigned long addr, unsigned long size) { unsigned long end, a; unsigned int flags; if (size >= dcache_size) { flush_cache_all(); } else { __save_and_cli(flags); a = addr & ~(dc_lsize - 1); end = (addr + size) & ~(dc_lsize - 1); while (1) { flush_dcache_line(a); /* Hit_Writeback_Inv_D */ if (a == end) break; a += dc_lsize; } __restore_flags(flags); } bc_wback_inv(addr, size); } static void mips32_dma_cache_wback_inv_sc(unsigned long addr, unsigned long size) { unsigned long end, a; if (size >= scache_size) { flush_cache_all(); return; } a = addr & ~(sc_lsize - 1); end = (addr + size) & ~(sc_lsize - 1); while (1) { flush_scache_line(a); /* Hit_Writeback_Inv_SD */ if (a == end) break; a += sc_lsize; } } static void mips32_dma_cache_inv_pc(unsigned long addr, unsigned long size) { unsigned long end, a; unsigned int flags; if (size >= dcache_size) { flush_cache_all(); } else { __save_and_cli(flags); a = addr & ~(dc_lsize - 1); end = (addr + size) & ~(dc_lsize - 1); while (1) { flush_dcache_line(a); /* Hit_Writeback_Inv_D */ if (a == end) break; a += dc_lsize; } __restore_flags(flags); } bc_inv(addr, size); } static void mips32_dma_cache_inv_sc(unsigned long addr, unsigned long size) { unsigned long end, a; if (size >= scache_size) { flush_cache_all(); return; } a = addr & ~(sc_lsize - 1); end = (addr + size) & ~(sc_lsize - 1); while (1) { flush_scache_line(a); /* Hit_Writeback_Inv_SD */ if (a == end) break; a += sc_lsize; } } static void mips32_dma_cache_wback(unsigned long addr, unsigned long size) { panic("mips32_dma_cache called - should not happen.\n"); } /* * While we're protected against bad userland addresses we don't care * very much about what happens in that case. Usually a segmentation * fault will dump the process later on anyway ... */ static void mips32_flush_cache_sigtramp(unsigned long addr) { protected_writeback_dcache_line(addr & ~(dc_lsize - 1)); protected_flush_icache_line(addr & ~(ic_lsize - 1)); } /* Detect and size the various caches. */ static void __init probe_icache(unsigned long config) { unsigned long config1; unsigned int lsize; if (!(config & (1 << 31))) { /* * Not a MIPS32 complainant CPU. * Config 1 register not supported, we assume R4k style. 
*/ icache_size = 1 << (12 + ((config >> 9) & 7)); ic_lsize = 16 << ((config >> 5) & 1); mips_cpu.icache.linesz = ic_lsize; /* * We cannot infer associativity - assume direct map * unless probe template indicates otherwise */ if(!mips_cpu.icache.ways) mips_cpu.icache.ways = 1; mips_cpu.icache.sets = (icache_size / ic_lsize) / mips_cpu.icache.ways; } else { config1 = read_mips32_cp0_config1(); if ((lsize = ((config1 >> 19) & 7))) mips_cpu.icache.linesz = 2 << lsize; else mips_cpu.icache.linesz = lsize; mips_cpu.icache.sets = 64 << ((config1 >> 22) & 7); mips_cpu.icache.ways = 1 + ((config1 >> 16) & 7); ic_lsize = mips_cpu.icache.linesz; icache_size = mips_cpu.icache.sets * mips_cpu.icache.ways * ic_lsize; } printk("Primary instruction cache %dkb, linesize %d bytes (%d ways)\n", icache_size >> 10, ic_lsize, mips_cpu.icache.ways); } static void __init probe_dcache(unsigned long config) { unsigned long config1; unsigned int lsize; if (!(config & (1 << 31))) { /* * Not a MIPS32 complainant CPU. * Config 1 register not supported, we assume R4k style. */ dcache_size = 1 << (12 + ((config >> 6) & 7)); dc_lsize = 16 << ((config >> 4) & 1); mips_cpu.dcache.linesz = dc_lsize; /* * We cannot infer associativity - assume direct map * unless probe template indicates otherwise */ if(!mips_cpu.dcache.ways) mips_cpu.dcache.ways = 1; mips_cpu.dcache.sets = (dcache_size / dc_lsize) / mips_cpu.dcache.ways; } else { config1 = read_mips32_cp0_config1(); if ((lsize = ((config1 >> 10) & 7))) mips_cpu.dcache.linesz = 2 << lsize; else mips_cpu.dcache.linesz= lsize; mips_cpu.dcache.sets = 64 << ((config1 >> 13) & 7); mips_cpu.dcache.ways = 1 + ((config1 >> 7) & 7); dc_lsize = mips_cpu.dcache.linesz; dcache_size = mips_cpu.dcache.sets * mips_cpu.dcache.ways * dc_lsize; } printk("Primary data cache %dkb, linesize %d bytes (%d ways)\n", dcache_size >> 10, dc_lsize, mips_cpu.dcache.ways); } /* If you even _breathe_ on this function, look at the gcc output * and make sure it does not pop things on and off the stack for * the cache sizing loop that executes in KSEG1 space or else * you will crash and burn badly. You have been warned. */ static int __init probe_scache(unsigned long config) { extern unsigned long stext; unsigned long flags, addr, begin, end, pow2; int tmp; if (mips_cpu.scache.flags == MIPS_CACHE_NOT_PRESENT) return 0; tmp = ((config >> 17) & 1); if(tmp) return 0; tmp = ((config >> 22) & 3); switch(tmp) { case 0: sc_lsize = 16; break; case 1: sc_lsize = 32; break; case 2: sc_lsize = 64; break; case 3: sc_lsize = 128; break; } begin = (unsigned long) &stext; begin &= ~((4 * 1024 * 1024) - 1); end = begin + (4 * 1024 * 1024); /* This is such a bitch, you'd think they would make it * easy to do this. Away you daemons of stupidity! */ __save_and_cli(flags); /* Fill each size-multiple cache line with a valid tag. */ pow2 = (64 * 1024); for(addr = begin; addr < end; addr = (begin + pow2)) { unsigned long *p = (unsigned long *) addr; __asm__ __volatile__("nop" : : "r" (*p)); /* whee... */ pow2 <<= 1; } /* Load first line with zero (therefore invalid) tag. 
*/ set_taglo(0); set_taghi(0); __asm__ __volatile__("nop; nop; nop; nop;"); /* avoid the hazard */ __asm__ __volatile__("\n\t.set noreorder\n\t" ".set mips3\n\t" "cache 8, (%0)\n\t" ".set mips0\n\t" ".set reorder\n\t" : : "r" (begin)); __asm__ __volatile__("\n\t.set noreorder\n\t" ".set mips3\n\t" "cache 9, (%0)\n\t" ".set mips0\n\t" ".set reorder\n\t" : : "r" (begin)); __asm__ __volatile__("\n\t.set noreorder\n\t" ".set mips3\n\t" "cache 11, (%0)\n\t" ".set mips0\n\t" ".set reorder\n\t" : : "r" (begin)); /* Now search for the wrap around point. */ pow2 = (128 * 1024); tmp = 0; for(addr = (begin + (128 * 1024)); addr < (end); addr = (begin + pow2)) { __asm__ __volatile__("\n\t.set noreorder\n\t" ".set mips3\n\t" "cache 7, (%0)\n\t" ".set mips0\n\t" ".set reorder\n\t" : : "r" (addr)); __asm__ __volatile__("nop; nop; nop; nop;"); /* hazard... */ if(!get_taglo()) break; pow2 <<= 1; } __restore_flags(flags); addr -= begin; printk("Secondary cache sized at %dK linesize %d bytes.\n", (int) (addr >> 10), sc_lsize); scache_size = addr; return 1; } static void __init setup_noscache_funcs(void) { _clear_page = (void *)mips32_clear_page_dc; _copy_page = (void *)mips32_copy_page_dc; _flush_cache_all = mips32_flush_cache_all_pc; ___flush_cache_all = mips32_flush_cache_all_pc; _flush_cache_mm = mips32_flush_cache_mm_pc; _flush_cache_range = mips32_flush_cache_range_pc; _flush_cache_page = mips32_flush_cache_page_pc; _flush_page_to_ram = mips32_flush_page_to_ram_pc; _flush_icache_page = mips32_flush_icache_page; _dma_cache_wback_inv = mips32_dma_cache_wback_inv_pc; _dma_cache_wback = mips32_dma_cache_wback; _dma_cache_inv = mips32_dma_cache_inv_pc; } static void __init setup_scache_funcs(void) { _flush_cache_all = mips32_flush_cache_all_sc; ___flush_cache_all = mips32_flush_cache_all_sc; _flush_cache_mm = mips32_flush_cache_mm_sc; _flush_cache_range = mips32_flush_cache_range_sc; _flush_cache_page = mips32_flush_cache_page_sc; _flush_page_to_ram = mips32_flush_page_to_ram_sc; _clear_page = (void *)mips32_clear_page_sc; _copy_page = (void *)mips32_copy_page_sc; _flush_icache_page = mips32_flush_icache_page_s; _dma_cache_wback_inv = mips32_dma_cache_wback_inv_sc; _dma_cache_wback = mips32_dma_cache_wback; _dma_cache_inv = mips32_dma_cache_inv_sc; } typedef int (*probe_func_t)(unsigned long); static inline void __init setup_scache(unsigned int config) { probe_func_t probe_scache_kseg1; int sc_present = 0; /* Maybe the cpu knows about a l2 cache? */ probe_scache_kseg1 = (probe_func_t) (KSEG1ADDR(&probe_scache)); sc_present = probe_scache_kseg1(config); if (sc_present) { mips_cpu.scache.linesz = sc_lsize; /* * We cannot infer associativity - assume direct map * unless probe template indicates otherwise */ if(!mips_cpu.scache.ways) mips_cpu.scache.ways = 1; mips_cpu.scache.sets = (scache_size / sc_lsize) / mips_cpu.scache.ways; setup_scache_funcs(); return; } setup_noscache_funcs(); } static void __init probe_tlb(unsigned long config) { unsigned long config1; if (!(config & (1 << 31))) { /* * Not a MIPS32 complainant CPU. * Config 1 register not supported, we assume R4k style. 
*/ mips_cpu.tlbsize = 48; } else { config1 = read_mips32_cp0_config1(); if (!((config >> 7) & 3)) panic("No MMU present"); else mips_cpu.tlbsize = ((config1 >> 25) & 0x3f) + 1; } printk("Number of TLB entries %d.\n", mips_cpu.tlbsize); } void __init ld_mmu_mips32(void) { unsigned long config = read_32bit_cp0_register(CP0_CONFIG); printk("CPU revision is: %08x\n", read_32bit_cp0_register(CP0_PRID)); #ifdef CONFIG_MIPS_UNCACHED change_cp0_config(CONF_CM_CMASK, CONF_CM_UNCACHED); #else change_cp0_config(CONF_CM_CMASK, CONF_CM_CACHABLE_NONCOHERENT); #endif probe_icache(config); probe_dcache(config); setup_scache(config); probe_tlb(config); _flush_cache_sigtramp = mips32_flush_cache_sigtramp; _flush_icache_range = mips32_flush_icache_range; /* Ouch */ __flush_cache_all(); } --- NEW FILE: c-r3k.c --- /* * r2300.c: R2000 and R3000 specific mmu/cache code. * * Copyright (C) 1996 David S. Miller (dm...@en...) * * with a lot of changes to make this thing work for R3000s * Tx39XX R4k style caches added. HK * Copyright (C) 1998, 1999, 2000 Harald Koerfgen * Copyright (C) 1998 Gleb Raiko & Vladimir Roganov */ #include <linux/init.h> #include <linux/kernel.h> #include <linux/sched.h> #include <linux/mm.h> #include <asm/page.h> #include <asm/pgtable.h> #include <asm/mmu_context.h> #include <asm/system.h> #include <asm/isadep.h> #include <asm/io.h> #include <asm/wbflush.h> #include <asm/bootinfo.h> #include <asm/cpu.h> static unsigned long icache_size, dcache_size; /* Size in bytes */ static unsigned long icache_lsize, dcache_lsize; /* Size in bytes */ #undef DEBUG_CACHE unsigned long __init r3k_cache_size(unsigned long ca_flags) { unsigned long flags, status, dummy, size; volatile unsigned long *p; p = (volatile unsigned long *) KSEG0; flags = read_32bit_cp0_register(CP0_STATUS); /* isolate cache space */ write_32bit_cp0_register(CP0_STATUS, (ca_flags|flags)&~ST0_IEC); *p = 0xa5a55a5a; dummy = *p; status = read_32bit_cp0_register(CP0_STATUS); if (dummy != 0xa5a55a5a || (status & ST0_CM)) { size = 0; } else { for (size = 128; size <= 0x40000; size <<= 1) *(p + size) = 0; *p = -1; for (size = 128; (size <= 0x40000) && (*(p + size) == 0); size <<= 1) ; if (size > 0x40000) size = 0; } write_32bit_cp0_register(CP0_STATUS, flags); return size * sizeof(*p); } unsigned long __init r3k_cache_lsize(unsigned long ca_flags) { unsigned long flags, status, lsize, i; volatile unsigned long *p; p = (volatile unsigned long *) KSEG0; flags = read_32bit_cp0_register(CP0_STATUS); /* isolate cache space */ write_32bit_cp0_register(CP0_STATUS, (ca_flags|flags)&~ST0_IEC); for (i = 0; i < 128; i++) *(p + i) = 0; *(volatile unsigned char *)p = 0; for (lsize = 1; lsize < 128; lsize <<= 1) { *(p + lsize); status = read_32bit_cp0_register(CP0_STATUS); if (!(status & ST0_CM)) break; } for (i = 0; i < 128; i += lsize) *(volatile unsigned char *)(p + i) = 0; write_32bit_cp0_register(CP0_STATUS, flags); return lsize * sizeof(*p); } static void __init r3k_probe_cache(void) { dcache_size = r3k_cache_size(ST0_ISC); if (dcache_size) dcache_lsize = r3k_cache_lsize(ST0_ISC); icache_size = r3k_cache_size(ST0_ISC|ST0_SWC); if (icache_size) icache_lsize = r3k_cache_lsize(ST0_ISC|ST0_SWC); } static void r3k_flush_icache_range(unsigned long start, unsigned long end) { unsigned long size, i, flags; volatile unsigned char *p = (char *)start; size = end - start; if (size > icache_size) size = icache_size; flags = read_32bit_cp0_register(CP0_STATUS); /* isolate cache space */ write_32bit_cp0_register(CP0_STATUS, (ST0_ISC|ST0_SWC|flags)&~ST0_IEC); for 
(i = 0; i < size; i += 0x080) { asm ( "sb\t$0, 0x000(%0)\n\t" "sb\t$0, 0x004(%0)\n\t" "sb\t$0, 0x008(%0)\n\t" "sb\t$0, 0x00c(%0)\n\t" "sb\t$0, 0x010(%0)\n\t" "sb\t$0, 0x014(%0)\n\t" "sb\t$0, 0x018(%0)\n\t" "sb\t$0, 0x01c(%0)\n\t" "sb\t$0, 0x020(%0)\n\t" "sb\t$0, 0x024(%0)\n\t" "sb\t$0, 0x028(%0)\n\t" "sb\t$0, 0x02c(%0)\n\t" "sb\t$0, 0x030(%0)\n\t" "sb\t$0, 0x034(%0)\n\t" "sb\t$0, 0x038(%0)\n\t" "sb\t$0, 0x03c(%0)\n\t" "sb\t$0, 0x040(%0)\n\t" "sb\t$0, 0x044(%0)\n\t" "sb\t$0, 0x048(%0)\n\t" "sb\t$0, 0x04c(%0)\n\t" "sb\t$0, 0x050(%0)\n\t" "sb\t$0, 0x054(%0)\n\t" "sb\t$0, 0x058(%0)\n\t" "sb\t$0, 0x05c(%0)\n\t" "sb\t$0, 0x060(%0)\n\t" "sb\t$0, 0x064(%0)\n\t" "sb\t$0, 0x068(%0)\n\t" "sb\t$0, 0x06c(%0)\n\t" "sb\t$0, 0x070(%0)\n\t" "sb\t$0, 0x074(%0)\n\t" "sb\t$0, 0x078(%0)\n\t" "sb\t$0, 0x07c(%0)\n\t" : : "r" (p) ); p += 0x080; } write_32bit_cp0_register(CP0_STATUS, flags); } static void r3k_flush_dcache_range(unsigned long start, unsigned long end) { unsigned long size, i, flags; volatile unsigned char *p = (char *)start; size = end - start; if (size > dcache_size) size = dcache_size; flags = read_32bit_cp0_register(CP0_STATUS); /* isolate cache space */ write_32bit_cp0_register(CP0_STATUS, (ST0_ISC|flags)&~ST0_IEC); for (i = 0; i < size; i += 0x080) { asm ( "sb\t$0, 0x000(%0)\n\t" "sb\t$0, 0x004(%0)\n\t" "sb\t$0, 0x008(%0)\n\t" "sb\t$0, 0x00c(%0)\n\t" "sb\t$0, 0x010(%0)\n\t" "sb\t$0, 0x014(%0)\n\t" "sb\t$0, 0x018(%0)\n\t" "sb\t$0, 0x01c(%0)\n\t" "sb\t$0, 0x020(%0)\n\t" "sb\t$0, 0x024(%0)\n\t" "sb\t$0, 0x028(%0)\n\t" "sb\t$0, 0x02c(%0)\n\t" "sb\t$0, 0x030(%0)\n\t" "sb\t$0, 0x034(%0)\n\t" "sb\t$0, 0x038(%0)\n\t" "sb\t$0, 0x03c(%0)\n\t" "sb\t$0, 0x040(%0)\n\t" "sb\t$0, 0x044(%0)\n\t" "sb\t$0, 0x048(%0)\n\t" "sb\t$0, 0x04c(%0)\n\t" "sb\t$0, 0x050(%0)\n\t" "sb\t$0, 0x054(%0)\n\t" "sb\t$0, 0x058(%0)\n\t" "sb\t$0, 0x05c(%0)\n\t" "sb\t$0, 0x060(%0)\n\t" "sb\t$0, 0x064(%0)\n\t" "sb\t$0, 0x068(%0)\n\t" "sb\t$0, 0x06c(%0)\n\t" "sb\t$0, 0x070(%0)\n\t" "sb\t$0, 0x074(%0)\n\t" "sb\t$0, 0x078(%0)\n\t" "sb\t$0, 0x07c(%0)\n\t" : : "r" (p) ); p += 0x080; } write_32bit_cp0_register(CP0_STATUS,flags); } static inline unsigned long get_phys_page (unsigned long addr, struct mm_struct *mm) { pgd_t *pgd; pmd_t *pmd; pte_t *pte; unsigned long physpage; pgd = pgd_offset(mm, addr); pmd = pmd_offset(pgd, addr); pte = pte_offset(pmd, addr); if ((physpage = pte_val(*pte)) & _PAGE_VALID) return KSEG0ADDR(physpage & PAGE_MASK); return 0; } static inline void r3k_flush_cache_all(void) { r3k_flush_icache_range(KSEG0, KSEG0 + icache_size); } static void r3k_flush_cache_mm(struct mm_struct *mm) { if (mm->context != 0) { #ifdef DEBUG_CACHE printk("cmm[%d]", (int)mm->context); #endif r3k_flush_cache_all(); } } static void r3k_flush_cache_range(struct mm_struct *mm, unsigned long start, unsigned long end) { struct vm_area_struct *vma; if (mm->context == 0) return; start &= PAGE_MASK; #ifdef DEBUG_CACHE printk("crange[%d,%08lx,%08lx]", (int)mm->context, start, end); #endif vma = find_vma(mm, start); if (!vma) return; if (mm->context != current->active_mm->context) { flush_cache_all(); } else { unsigned long flags, physpage; save_and_cli(flags); while (start < end) { if ((physpage = get_phys_page(start, mm))) r3k_flush_icache_range(physpage, physpage + PAGE_SIZE); start += PAGE_SIZE; } restore_flags(flags); } } static void r3k_flush_cache_page(struct vm_area_struct *vma, unsigned long page) { struct mm_struct *mm = vma->vm_mm; if (mm->context == 0) return; #ifdef DEBUG_CACHE printk("cpage[%d,%08lx]", (int)mm->context, page); #endif if 
(vma->vm_flags & VM_EXEC) { unsigned long physpage; if ((physpage = get_phys_page(page, vma->vm_mm))) r3k_flush_icache_range(physpage, physpage + PAGE_SIZE); } } static void r3k_flush_page_to_ram(struct page * page) { /* * Nothing to be done */ } static void r3k_flush_icache_page(struct vm_area_struct *vma, struct page *page) { struct mm_struct *mm = vma->vm_mm; unsigned long physpage; if (mm->context == 0) return; if (!(vma->vm_flags & VM_EXEC)) return; #ifdef DEBUG_CACHE printk("cpage[%d,%08lx]", (int)mm->context, page); #endif physpage = (unsigned long) page_address(page); if (physpage) r3k_flush_icache_range(physpage, physpage + PAGE_SIZE); } static void r3k_flush_cache_sigtramp(unsigned long addr) { unsigned long flags; #ifdef DEBUG_CACHE printk("csigtramp[%08lx]", addr); #endif flags = read_32bit_cp0_register(CP0_STATUS); write_32bit_cp0_register(CP0_STATUS, flags&~ST0_IEC); /* Fill the TLB to avoid an exception with caches isolated. */ asm ( "lw\t$0, 0x000(%0)\n\t" "lw\t$0, 0x004(%0)\n\t" : : "r" (addr) ); write_32bit_cp0_register(CP0_STATUS, (ST0_ISC|ST0_SWC|flags)&~ST0_IEC); asm ( "sb\t$0, 0x000(%0)\n\t" "sb\t$0, 0x004(%0)\n\t" : : "r" (addr) ); write_32bit_cp0_register(CP0_STATUS, flags); } static void r3k_dma_cache_wback_inv(unsigned long start, unsigned long size) { wbflush(); r3k_flush_dcache_range(start, start + size); } void __init ld_mmu_r23000(void) { unsigned long config; printk("CPU revision is: %08x\n", read_32bit_cp0_register(CP0_PRID)); _clear_page = r3k_clear_page; _copy_page = r3k_copy_page; r3k_probe_cache(); _flush_cache_all = r3k_flush_cache_all; ___flush_cache_all = r3k_flush_cache_all; _flush_cache_mm = r3k_flush_cache_mm; _flush_cache_range = r3k_flush_cache_range; _flush_cache_page = r3k_flush_cache_page; _flush_cache_sigtramp = r3k_flush_cache_sigtramp; _flush_page_to_ram = r3k_flush_page_to_ram; _flush_icache_page = r3k_flush_icache_page; _flush_icache_range = r3k_flush_icache_range; _dma_cache_wback_inv = r3k_dma_cache_wback_inv; printk("Primary instruction cache %dkb, linesize %d bytes\n", (int) (icache_size >> 10), (int) icache_lsize); printk("Primary data cache %dkb, linesize %d bytes\n", (int) (dcache_size >> 10), (int) dcache_lsize); } --- NEW FILE: c-r4k.c --- /* * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * * r4xx0.c: R4000 processor variant specific MMU/Cache routines. * * Copyright (C) 1996 David S. Miller (dm...@en...) * Copyright (C) 1997, 1998, 1999, 2000 Ralf Baechle ra...@gn... * * To do: * * - this code is a overbloated pig * - many of the bug workarounds are not efficient at all, but at * least they are functional ... */ #include <linux/init.h> #include <linux/kernel.h> #include <linux/sched.h> [...1538 lines suppressed...] probe_icache(config); probe_dcache(config); setup_scache(config); switch(mips_cpu.cputype) { case CPU_R4600: /* QED style two way caches? */ case CPU_R4700: case CPU_R5000: case CPU_NEVADA: _flush_cache_page = r4k_flush_cache_page_d32i32_r4600; } _flush_cache_sigtramp = r4k_flush_cache_sigtramp; _flush_icache_range = r4k_flush_icache_range; /* Ouch */ if ((read_32bit_cp0_register(CP0_PRID) & 0xfff0) == 0x2020) { _flush_cache_sigtramp = r4600v20k_flush_cache_sigtramp; } __flush_cache_all(); } --- NEW FILE: c-r5432.c --- /* * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. 
* * r5432.c: NEC Vr5432 processor. We cannot use r4xx0.c because of * its unique way-selection method for indexed operations. * * Copyright (C) 1996 David S. Miller (dm...@en...) * Copyright (C) 1997, 1998, 1999, 2000 Ralf Baechle (ra...@gn...) * Copyright (C) 2000 Jun Sun (js...@mv...) * */ #include <linux/init.h> #include <linux/kernel.h> #include <linux/sched.h> #include <linux/mm.h> #include <asm/bcache.h> #include <asm/io.h> #include <asm/page.h> #include <asm/pgtable.h> #include <asm/system.h> #include <asm/bootinfo.h> #include <asm/mmu_context.h> /* CP0 hazard avoidance. */ #define BARRIER __asm__ __volatile__(".set noreorder\n\t" \ "nop; nop; nop; nop; nop; nop;\n\t" \ ".set reorder\n\t") #include <asm/asm.h> #include <asm/cacheops.h> #undef DEBUG_CACHE /* Primary cache parameters. */ static int icache_size, dcache_size; /* Size in bytes */ static int ic_lsize, dc_lsize; /* LineSize in bytes */ /* -------------------------------------------------------------------- */ /* #include <asm/r4kcache.h> */ extern inline void flush_icache_line_indexed(unsigned long addr) { __asm__ __volatile__( ".set noreorder\n\t" ".set mips3\n\t" "cache %1, (%0)\n\t" "cache %1, 1(%0)\n\t" ".set mips0\n\t" ".set reorder" : : "r" (addr), "i" (Index_Invalidate_I)); } extern inline void flush_dcache_line_indexed(unsigned long addr) { __asm__ __volatile__( ".set noreorder\n\t" ".set mips3\n\t" "cache %1, (%0)\n\t" "cache %1, 1(%0)\n\t" ".set mips0\n\t" ".set reorder" : : "r" (addr), "i" (Index_Writeback_Inv_D)); } extern inline void flush_icache_line(unsigned long addr) { __asm__ __volatile__( ".set noreorder\n\t" ".set mips3\n\t" "cache %1, (%0)\n\t" ".set mips0\n\t" ".set reorder" : : "r" (addr), "i" (Hit_Invalidate_I)); } extern inline void flush_dcache_line(unsigned long addr) { __asm__ __volatile__( ".set noreorder\n\t" ".set mips3\n\t" "cache %1, (%0)\n\t" ".set mips0\n\t" ".set reorder" : : "r" (addr), "i" (Hit_Writeback_Inv_D)); } extern inline void invalidate_dcache_line(unsigned long addr) { __asm__ __volatile__( ".set noreorder\n\t" ".set mips3\n\t" "cache %1, (%0)\n\t" ".set mips0\n\t" ".set reorder" : : "r" (addr), "i" (Hit_Invalidate_D)); } /* * The next two are for badland addresses like signal trampolines. 
*/ extern inline void protected_flush_icache_line(unsigned long addr) { __asm__ __volatile__( ".set noreorder\n\t" ".set mips3\n" "1:\tcache %1,(%0)\n" "2:\t.set mips0\n\t" ".set reorder\n\t" ".section\t__ex_table,\"a\"\n\t" STR(PTR)"\t1b,2b\n\t" ".previous" : : "r" (addr), "i" (Hit_Invalidate_I)); } extern inline void protected_writeback_dcache_line(unsigned long addr) { __asm__ __volatile__( ".set noreorder\n\t" ".set mips3\n" "1:\tcache %1,(%0)\n" "2:\t.set mips0\n\t" ".set reorder\n\t" ".section\t__ex_table,\"a\"\n\t" STR(PTR)"\t1b,2b\n\t" ".previous" : : "r" (addr), "i" (Hit_Writeback_D)); } #define cache32_unroll32(base,op) \ __asm__ __volatile__(" \ .set noreorder; \ .set mips3; \ cache %1, 0x000(%0); cache %1, 0x020(%0); \ cache %1, 0x040(%0); cache %1, 0x060(%0); \ cache %1, 0x080(%0); cache %1, 0x0a0(%0); \ cache %1, 0x0c0(%0); cache %1, 0x0e0(%0); \ cache %1, 0x100(%0); cache %1, 0x120(%0); \ cache %1, 0x140(%0); cache %1, 0x160(%0); \ cache %1, 0x180(%0); cache %1, 0x1a0(%0); \ cache %1, 0x1c0(%0); cache %1, 0x1e0(%0); \ cache %1, 0x200(%0); cache %1, 0x220(%0); \ cache %1, 0x240(%0); cache %1, 0x260(%0); \ cache %1, 0x280(%0); cache %1, 0x2a0(%0); \ cache %1, 0x2c0(%0); cache %1, 0x2e0(%0); \ cache %1, 0x300(%0); cache %1, 0x320(%0); \ cache %1, 0x340(%0); cache %1, 0x360(%0); \ cache %1, 0x380(%0); cache %1, 0x3a0(%0); \ cache %1, 0x3c0(%0); cache %1, 0x3e0(%0); \ .set mips0; \ .set reorder" \ : \ : "r" (base), \ "i" (op)); extern inline void blast_dcache32(void) { unsigned long start = KSEG0; unsigned long end = (start + dcache_size/2); while(start < end) { cache32_unroll32(start,Index_Writeback_Inv_D); cache32_unroll32(start+1,Index_Writeback_Inv_D); start += 0x400; } } extern inline void blast_dcache32_page(unsigned long page) { unsigned long start = page; unsigned long end = (start + PAGE_SIZE); while(start < end) { cache32_unroll32(start,Hit_Writeback_Inv_D); start += 0x400; } } extern inline void blast_dcache32_page_indexed(unsigned long page) { unsigned long start = page; unsigned long end = (start + PAGE_SIZE); while(start < end) { cache32_unroll32(start,Index_Writeback_Inv_D); cache32_unroll32(start+1,Index_Writeback_Inv_D); start += 0x400; } } extern inline void blast_icache32(void) { unsigned long start = KSEG0; unsigned long end = (start + icache_size/2); while(start < end) { cache32_unroll32(start,Index_Invalidate_I); cache32_unroll32(start+1,Index_Invalidate_I); start += 0x400; } } extern inline void blast_icache32_page(unsigned long page) { unsigned long start = page; unsigned long end = (start + PAGE_SIZE); while(start < end) { cache32_unroll32(start,Hit_Invalidate_I); start += 0x400; } } extern inline void blast_icache32_page_indexed(unsigned long page) { unsigned long start = page; unsigned long end = (start + PAGE_SIZE); while(start < end) { cache32_unroll32(start,Index_Invalidate_I); cache32_unroll32(start+1,Index_Invalidate_I); start += 0x400; } } /* -------------------------------------------------------------------- */ /* * If you think for one second that this stuff coming up is a lot * of bulky code eating too many kernel cache lines. Think _again_. * * Consider: * 1) Taken branches have a 3 cycle penalty on R4k * 2) The branch itself is a real dead cycle on even R4600/R5000. * 3) Only one of the following variants of each type is even used by * the kernel based upon the cache parameters we detect at boot time. * * QED. 
*/ static inline void r5432_flush_cache_all_d32i32(void) { blast_dcache32(); blast_icache32(); } static void r5432_flush_cache_range_d32i32(struct mm_struct *mm, unsigned long start, unsigned long end) { if (mm->context != 0) { #ifdef DEBUG_CACHE printk("crange[%d,%08lx,%08lx]", (int)mm->context, start, end); #endif blast_dcache32(); blast_icache32(); } } /* * On architectures like the Sparc, we could get rid of lines in * the cache created only by a certain context, but on the MIPS * (and actually certain Sparc's) we cannot. */ static void r5432_flush_cache_mm_d32i32(struct mm_struct *mm) { if (mm->context != 0) { #ifdef DEBUG_CACHE printk("cmm[%d]", (int)mm->context); #endif r5432_flush_cache_all_d32i32(); } } static void r5432_flush_cache_page_d32i32(struct vm_area_struct *vma, unsigned long page) { struct mm_struct *mm = vma->vm_mm; pgd_t *pgdp; pmd_t *pmdp; pte_t *ptep; /* * If ownes no valid ASID yet, cannot possibly have gotten * this page into the cache. */ if (mm->context == 0) return; #ifdef DEBUG_CACHE printk("cpage[%d,%08lx]", (int)mm->context, page); #endif page &= PAGE_MASK; pgdp = pgd_offset(mm, page); pmdp = pmd_offset(pgdp, page); ptep = pte_offset(pmdp, page); /* * If the page isn't marked valid, the page cannot possibly be * in the cache. */ if (!(pte_val(*ptep) & _PAGE_PRESENT)) return; /* * Doing flushes for another ASID than the current one is * too difficult since stupid R4k caches do a TLB translation * for every cache flush operation. So we do indexed flushes * in that case, which doesn't overly flush the cache too much. */ if ((mm == current->active_mm) && (pte_val(*ptep) & _PAGE_VALID)) { blast_dcache32_page(page); } else { /* * Do indexed flush, too much work to get the (possible) * tlb refills to work correctly. */ page = (KSEG0 + (page & (dcache_size - 1))); blast_dcache32_page_indexed(page); } } /* If the addresses passed to these routines are valid, they are * either: * * 1) In KSEG0, so we can do a direct flush of the page. * 2) In KSEG2, and since every process can translate those * addresses all the time in kernel mode we can do a direct * flush. * 3) In KSEG1, no flush necessary. */ static void r5432_flush_page_to_ram_d32(struct page *page) { blast_dcache32_page((unsigned long)page_address(page)); } static void r5432_flush_icache_range(unsigned long start, unsigned long end) { r5432_flush_cache_all_d32i32(); } /* * Ok, this seriously sucks. We use them to flush a user page but don't * know the virtual address, so we have to blast away the whole icache * which is significantly more expensive than the real thing. */ static void r5432_flush_icache_page_i32(struct vm_area_struct *vma, struct page *page) { if (!(vma->vm_flags & VM_EXEC)) return; r5432_flush_cache_all_d32i32(); } /* * Writeback and invalidate the primary cache dcache before DMA. 
*/ static void r5432_dma_cache_wback_inv_pc(unsigned long addr, unsigned long size) { unsigned long end, a; if (size >= dcache_size) { flush_cache_all(); } else { a = addr & ~(dc_lsize - 1); end = (addr + size) & ~(dc_lsize - 1); while (1) { flush_dcache_line(a); /* Hit_Writeback_Inv_D */ if (a == end) break; a += dc_lsize; } } bc_wback_inv(addr, size); } static void r5432_dma_cache_inv_pc(unsigned long addr, unsigned long size) { unsigned long end, a; if (size >= dcache_size) { flush_cache_all(); } else { a = addr & ~(dc_lsize - 1); end = (addr + size) & ~(dc_lsize - 1); while (1) { flush_dcache_line(a); /* Hit_Writeback_Inv_D */ if (a == end) break; a += dc_lsize; } } bc_inv(addr, size); } static void r5432_dma_cache_wback(unsigned long addr, unsigned long size) { panic("r5432_dma_cache called - should not happen.\n"); } /* * While we're protected against bad userland addresses we don't care * very much about what happens in that case. Usually a segmentation * fault will dump the process later on anyway ... */ static void r5432_flush_cache_sigtramp(unsigned long addr) { protected_writeback_dcache_line(addr & ~(dc_lsize - 1)); protected_flush_icache_line(addr & ~(ic_lsize - 1)); } /* Detect and size the various r4k caches. */ static void __init probe_icache(unsigned long config) { icache_size = 1 << (12 + ((config >> 9) & 7)); ic_lsize = 16 << ((config >> 5) & 1); printk("Primary instruction cache %dkb, linesize %d bytes.\n", icache_size >> 10, ic_lsize); } static void __init probe_dcache(unsigned long config) { dcache_size = 1 << (12 + ((config >> 6) & 7)); dc_lsize = 16 << ((config >> 4) & 1); printk("Primary data cache %dkb, linesize %d bytes.\n", dcache_size >> 10, dc_lsize); } void __init ld_mmu_r5432(void) { unsigned long config = read_32bit_cp0_register(CP0_CONFIG); printk("CPU revision is: %08x\n", read_32bit_cp0_register(CP0_PRID)); change_cp0_config(CONF_CM_CMASK, CONF_CM_CACHABLE_NONCOHERENT); probe_icache(config); probe_dcache(config); _clear_page = r5432_clear_page_d32; _copy_page = r5432_copy_page_d32; _flush_cache_all = r5432_flush_cache_all_d32i32; ___flush_cache_all = r5432_flush_cache_all_d32i32; _flush_page_to_ram = r5432_flush_page_to_ram_d32; _flush_cache_mm = r5432_flush_cache_mm_d32i32; _flush_cache_range = r5432_flush_cache_range_d32i32; _flush_cache_page = r5432_flush_cache_page_d32i32; _flush_icache_page = r5432_flush_icache_page_i32; _dma_cache_wback_inv = r5432_dma_cache_wback_inv_pc; _dma_cache_wback = r5432_dma_cache_wback; _dma_cache_inv = r5432_dma_cache_inv_pc; _flush_cache_sigtramp = r5432_flush_cache_sigtramp; _flush_icache_range = r5432_flush_icache_range; /* Ouch */ __flush_cache_all(); } --- NEW FILE: c-rm7k.c --- /* * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * * r4xx0.c: R4000 processor variant specific MMU/Cache routines. * * Copyright (C) 1996 David S. Miller (dm...@en...) * Copyright (C) 1997, 1998 Ralf Baechle ra...@gn... * * To do: * * - this code is a overbloated pig * - many of the bug workarounds are not efficient at all, but at * least they are functional ... */ #include <linux/init.h> #include <linux/kernel.h> #include <linux/sched.h> #include <linux/mm.h> #include <asm/io.h> #include <asm/page.h> #include <asm/pgtable.h> #include <asm/system.h> #include <asm/bootinfo.h> #include <asm/mmu_context.h> /* CP0 hazard avoidance. 
*/ #define BARRIER __asm__ __volatile__(".set noreorder\n\t" \ "nop; nop; nop; nop; nop; nop;\n\t" \ ".set reorder\n\t") /* Primary cache parameters. */ static int icache_size, dcache_size; /* Size in bytes */ #define ic_lsize 32 /* Fixed to 32 byte on RM7000 */ #define dc_lsize 32 /* Fixed to 32 byte on RM7000 */ #define sc_lsize 32 /* Fixed to 32 byte on RM7000 */ #define tc_pagesize (32*128) /* Secondary cache parameters. */ #define scache_size (256*1024) /* Fixed to 256KiB on RM7000 */ #include <asm/cacheops.h> #include <asm/r4kcache.h> int rm7k_tcache_enabled = 0; /* * Not added to asm/r4kcache.h because it seems to be RM7000-specific. */ #define Page_Invalidate_T 0x16 static inline void invalidate_tcache_page(unsigned long addr) { __asm__ __volatile__( ".set\tnoreorder\t\t\t# invalidate_tcache_page\n\t" ".set\tmips3\n\t" "cache\t%1, (%0)\n\t" ".set\tmips0\n\t" ".set\treorder" : : "r" (addr), "i" (Page_Invalidate_T)); } static void __flush_cache_all_d32i32(void) { blast_dcache32(); blast_icache32(); } static inline void rm7k_flush_cache_all_d32i32(void) { /* Yes! Caches that don't suck ... */ } static void rm7k_flush_cache_range_d32i32(struct mm_struct *mm, unsigned long start, unsigned long end) { /* RM7000 caches are sane ... */ } static void rm7k_flush_cache_mm_d32i32(struct mm_struct *mm) { /* RM7000 caches are sane ... */ } static void rm7k_flush_cache_page_d32i32(struct vm_area_struct *vma, unsigned long page) { /* RM7000 caches are sane ... */ } static void rm7k_flush_page_to_ram_d32i32(struct page * page) { /* Yes! Caches that don't suck! */ } static void rm7k_flush_icache_range(unsigned long start, unsigned long end) { /* * FIXME: This is overdoing things and harms performance. */ __flush_cache_all_d32i32(); } static void rm7k_flush_icache_page(struct vm_area_struct *vma, struct page *page) { /* * FIXME: We should not flush the entire cache but establish some * temporary mapping and use hit_invalidate operation to flush out * the line from the cache. */ __flush_cache_all_d32i32(); } /* * Writeback and invalidate the primary cache dcache before DMA. * (XXX These need to be fixed ...) */ static void rm7k_dma_cache_wback_inv(unsigned long addr, unsigned long size) { unsigned long end, a; a = addr & ~(sc_lsize - 1); end = (addr + size) & ~(sc_lsize - 1); while (1) { flush_dcache_line(a); /* Hit_Writeback_Inv_D */ flush_icache_line(a); /* Hit_Invalidate_I */ flush_scache_line(a); /* Hit_Writeback_Inv_SD */ if (a == end) break; a += sc_lsize; } if (!rm7k_tcache_enabled) return; a = addr & ~(tc_pagesize - 1); end = (addr + size) & ~(tc_pagesize - 1); while(1) { invalidate_tcache_page(a); /* Page_Invalidate_T */ if (a == end) break; a += tc_pagesize; } } static void rm7k_dma_cache_inv(unsigned long addr, unsigned long size) { unsigned long end, a; a = addr & ~(sc_lsize - 1); end = (addr + size) & ~(sc_lsize - 1); while (1) { invalidate_dcache_line(a); /* Hit_Invalidate_D */ flush_icache_line(a); /* Hit_Invalidate_I */ invalidate_scache_line(a); /* Hit_Invalidate_SD */ if (a == end) break; a += sc_lsize; } if (!rm7k_tcache_enabled) return; a = addr & ~(tc_pagesize - 1); end = (addr + size) & ~(tc_pagesize - 1); while(1) { invalidate_tcache_page(a); /* Page_Invalidate_T */ if (a == end) break; a += tc_pagesize; } } static void rm7k_dma_cache_wback(unsigned long addr, unsigned long size) { panic("rm7k_dma_cache_wback called - should not happen.\n"); } /* * While we're protected against bad userland addresses we don't care * very much about what happens in that case. 
Usually a segmentation * fault will dump the process later on anyway ... */ static void rm7k_flush_cache_sigtramp(unsigned long addr) { protected_writeback_dcache_line(addr & ~(dc_lsize - 1)); protected_flush_icache_line(addr & ~(ic_lsize - 1)); } /* Detect and size the caches. */ static inline void probe_icache(unsigned long config) { icache_size = 1 << (12 + ((config >> 9) & 7)); printk(KERN_INFO "Primary instruction cache %dKiB.\n", icache_size >> 10); } static inline void probe_dcache(unsigned long config) { dcache_size = 1 << (12 + ((config >> 6) & 7)); printk(KERN_INFO "Primary data cache %dKiB.\n", dcache_size >> 10); } /* * This function is executed in the uncached segment KSEG1. * It must not touch the stack, because the stack pointer still points * into KSEG0. * * Three options: * - Write it in assembly and guarantee that we don't use the stack. * - Disable caching for KSEG0 before calling it. * - Pray that GCC doesn't randomly start using the stack. * * This being Linux, we obviously take the least sane of those options - * following DaveM's lead in r4xx0.c * * It seems we get our kicks from relying on unguaranteed behaviour in GCC */ static __init void setup_scache(void) { int register i; set_cp0_config(1<<3 /* CONF_SE */); set_taglo(0); set_taghi(0); for (i=0; i<scache_size; i+=sc_lsize) { __asm__ __volatile__ ( ".set noreorder\n\t" ".set mips3\n\t" "cache %1, (%0)\n\t" ".set mips0\n\t" ".set reorder" : : "r" (KSEG0ADDR(i)), "i" (Index_Store_Tag_SD)); } } static inline void probe_scache(unsigned long config) { void (*func)(void) = KSEG1ADDR(&setup_scache); if ((config >> 31) & 1) return; printk(KERN_INFO "Secondary cache %dKiB, linesize %d bytes.\n", (scache_size >> 10), sc_lsize); if ((config >> 3) & 1) return; printk(KERN_INFO "Enabling secondary cache..."); func(); printk("Done\n"); } static inline void probe_tcache(unsigned long config) { if ((config >> 17) & 1) return; /* We can't enable the L3 cache yet. There may be board-specific * magic necessary to turn it on, and blindly asking the CPU to * start using it would may give cache errors. * * Also, board-specific knowledge may allow us to use the * CACHE Flash_Invalidate_T instruction if the tag RAM supports * it, and may specify the size of the L3 cache so we don't have * to probe it. */ printk(KERN_INFO "Tertiary cache present, %s enabled\n", config&(1<<12) ? "already" : "not (yet)"); if ((config >> 12) & 1) rm7k_tcache_enabled = 1; } void __init ld_mmu_rm7k(void) { unsigned long config = read_32bit_cp0_register(CP0_CONFIG); unsigned long addr; printk("CPU revision is: %08x\n", read_32bit_cp0_register(CP0_PRID)); change_cp0_config(CONF_CM_CMASK, CONF_CM_UNCACHED); /* RM7000 erratum #31. The icache is screwed at startup. 
*/ set_taglo(0); set_taghi(0); for (addr = KSEG0; addr <= KSEG0 + 4096; addr += ic_lsize) { __asm__ __volatile__ ( ".set noreorder\n\t" ".set mips3\n\t" "cache\t%1, 0(%0)\n\t" "cache\t%1, 0x1000(%0)\n\t" "cache\t%1, 0x2000(%0)\n\t" "cache\t%1, 0x3000(%0)\n\t" "cache\t%2, 0(%0)\n\t" "cache\t%2, 0x1000(%0)\n\t" "cache\t%2, 0x2000(%0)\n\t" "cache\t%2, 0x3000(%0)\n\t" "cache\t%1, 0(%0)\n\t" "cache\t%1, 0x1000(%0)\n\t" "cache\t%1, 0x2000(%0)\n\t" "cache\t%1, 0x3000(%0)\n\t" ".set\tmips0\n\t" ".set\treorder\n\t" : : "r" (addr), "i" (Index_Store_Tag_I), "i" (Fill)); } #ifndef CONFIG_MIPS_UNCACHED change_cp0_config(CONF_CM_CMASK, CONF_CM_CACHABLE_NONCOHERENT); #endif probe_icache(config); probe_dcache(config); probe_scache(config); probe_tcache(config); _clear_page = rm7k_clear_page; _copy_page = rm7k_copy_page; _flush_cache_all = rm7k_flush_cache_all_d32i32; ___flush_cache_all = __flush_cache_all_d32i32; _flush_cache_mm = rm7k_flush_cache_mm_d32i32; _flush_cache_range = rm7k_flush_cache_range_d32i32; _flush_cache_page = rm7k_flush_cache_page_d32i32; _flush_page_to_ram = rm7k_flush_page_to_ram_d32i32; _flush_cache_sigtramp = rm7k_flush_cache_sigtramp; _flush_icache_range = rm7k_flush_icache_range; _flush_icache_page = rm7k_flush_icache_page; _dma_cache_wback_inv = rm7k_dma_cache_wback_inv; _dma_cache_wback = rm7k_dma_cache_wback; _dma_cache_inv = rm7k_dma_cache_inv; __flush_cache_all_d32i32(); } --- NEW FILE: c-sb1.c --- /* * Copyright (C) 1996 David S. Miller (dm...@en...) * Copyright (C) 1997, 2001 Ralf Baechle (ra...@gn...) * Copyright (C) 2000, 2001 Broadcom Corporation * * This program is free software; you can redistribute it and/or * modify it under the terms of the GNU General Public License * as published by the Free Software Foundation; either version 2 * of the License, or (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ /* * In this entire file, I'm not sure what the role of the L2 on the sb1250 * is. Since it is coherent to the system, we should never need to flush * it...right?...right??? -JDC */ #include <asm/mmu_context.h> /* These are probed at ld_mmu time */ static unsigned int icache_size; static unsigned int dcache_size; static unsigned int icache_line_size; static unsigned int dcache_line_size; static unsigned int icache_assoc; static unsigned int dcache_assoc; static unsigned int icache_sets; static unsigned int dcache_sets; static unsigned int tlb_entries; void local_flush_tlb_all(void) { unsigned long flags; unsigned long old_ctx; int entry; __save_and_cli(flags); /* Save old context and create impossible VPN2 value */ old_ctx = (get_entryhi() & 0xff); set_entrylo0(0); set_entrylo1(0); for (entry = 0; entry < tlb_entries; entry++) { set_entryhi(KSEG0 + (PAGE_SIZE << 1) * entry); set_index(entry); tlb_write_indexed(); } set_entryhi(old_ctx); __restore_flags(flags); } /* * The dcache is fully coherent to the system, with one * big caveat: the instruction stream. In other words, * if we miss in the icache, and have dirty data in the * L1 dcache, then we'll go out to memory (or the L2) and * get the not-as-recent data. 
* * So the only time we have to flush the dcache is when * we're flushing the icache. Since the L2 is fully * coherent to everything, including I/O, we never have * to flush it */ static void sb1_flush_cache_all(void) { /* * Haven't worried too much about speed here; given that we're flushing * the icache, the time to invalidate is dwarfed by the time it's going * to take to refill it. Register usage: * * $1 - moving cache index * $2 - set count */ if (icache_sets) { __asm__ __volatile__ ( ".set push \n" ".set noreorder \n" ".set noat \n" ".set mips4 \n" " move $1, %2 \n" /* Start at index 0 */ "1: cache 0, 0($1) \n" /* Invalidate this index */ " addiu %1, %1, -1 \n" /* Decrement loop count */ " bnez %1, 1b \n" /* loop test */ " addu $1, $1, %0 \n" /* Next address JDCXXX - Should be short piped */ ".set pop \n" ::"r" (icache_line_size), "r" (icache_sets * icache_assoc), "r" (KSEG0)); } if (dcache_sets) { __asm__ __volatile__ ( ".set push \n" ".set noreorder \n" ".set noat \n" ".set mips4 \n" " move $1, %2 \n" /* Start at index 0 */ "1: cache 0x1, 0($1) \n" /* WB/Invalidate this index */ " addiu %1, %1, -1 \n" /* Decrement loop count */ " bnez %1, 1b \n" /* loop test */ " addu $1, $1, %0 \n" /* Next address JDCXXX - Should be short piped */ ".set pop \n" : : "r" (dcache_line_size), "r" (dcache_sets * dcache_assoc), "r" (KSEG0)); } } /* * When flushing a range in the icache, we have to first writeback * the dcache for the same range, so new ifetches will see any * data that was dirty in the dcache */ static void sb1_flush_icache_range(unsigned long start, unsigned long end) { /* JDCXXX - Implement me! */ sb1_flush_cache_all(); } static void sb1_flush_cache_mm(struct mm_struct *mm) { /* Don't need to do this, as the dcache is physically tagged */ } static void sb1_flush_cache_range(struct mm_struct *mm, unsigned long start, unsigned long end) { /* Don't need to do this, as the dcache is physically tagged */ } static void sb1_flush_cache_sigtramp(unsigned long page) { /* JDCXXX - Implement me! */ sb1_flush_cache_all(); } /* * This only needs to make sure stores done up to this * point are visible to other agents outside the CPU. Given * the coherent nature of the ZBus, all that's required here is * a sync to make sure the data gets out to the caches and is * visible to an arbitrary A Phase from an external agent * * Actually, I'm not even sure that's necessary; the semantics * of this function aren't clear. If it's supposed to serve as * a memory barrier, this is needed. If it's only meant to * prevent data from being invisible to non-cpu memory accessors * for some indefinite period of time (e.g. in a non-coherent * dcache) then this function would be a complete nop. 
*/ static void sb1_flush_page_to_ram(struct page *page) { __asm__ __volatile__( " sync \n" /* Short pipe */ :::"memory"); } /* Cribbed from the r2300 code */ static void sb1_flush_cache_page(struct vm_area_struct *vma, unsigned long page) { sb1_flush_cache_all(); #if 0 struct mm_struct *mm = vma->vm_mm; unsigned long physpage; /* No icache flush needed without context; */ if (mm->context == 0) return; /* No icache flush needed if the page isn't executable */ if (!(vma->vm_flags & VM_EXEC)) return; physpage = (unsigned long) page_address(page); if (physpage) sb1_flush_icache_range(physpage, physpage + PAGE_SIZE); #endif } /* * Cache set values (from the mips64 spec) * 0 - 64 * 1 - 128 * 2 - 256 * 3 - 512 * 4 - 1024 * 5 - 2048 * 6 - 4096 * 7 - Reserved */ static unsigned int decode_cache_sets(unsigned int config_field) { if (config_field == 7) { /* JDCXXX - Find a graceful way to abort. */ return 0; } return (1<<(config_field + 6)); } /* * Cache line size values (from the mips64 spec) * 0 - No cache present. * 1 - 4 bytes * 2 - 8 bytes * 3 - 16 bytes * 4 - 32 bytes * 5 - 64 bytes * 6 - 128 bytes * 7 - Reserved */ static unsigned int decode_cache_line_size(unsigned int config_field) { if (config_field == 0) { return 0; } else if (config_field == 7) { /* JDCXXX - Find a graceful way to abort. */ return 0; } return (1<<(config_field + 1)); } /* * Relevant bits of the config1 register format (from the MIPS32/MIPS64 specs) * * 24:22 Icache sets per way * 21:19 Icache line size * 18:16 Icache Associativity * 15:13 Dcache sets per way * 12:10 Dcache line size * 9:7 Dcache Associativity */ static void probe_cache_sizes(void) { u32 config1; __asm__ __volatile__( ".set push \n" ".set mips64 \n" " mfc0 %0, $16, 1 \n" /* Get config1 register */ ".set pop \n" :"=r" (config1)); icache_line_size = decode_cache_line_size((config1 >> 19) & 0x7); dcache_line_size = decode_cache_line_size((config1 >> 10) & 0x7); icache_sets = decode_cache_sets((config1 >> 22) & 0x7); dcache_sets = decode_cache_sets((config1 >> 13) & 0x7); icache_assoc = ((config1 >> 16) & 0x7) + 1; dcache_assoc = ((config1 >> 7) & 0x7) + 1; icache_size = icache_line_size * icache_sets * icache_assoc; dcache_size = dcache_line_size * dcache_sets * dcache_assoc; tlb_entries = ((config1 >> 25) & 0x3f) + 1; } /* This is called from loadmmu.c. We have to set up all the memory management function pointers, as well as initialize the caches and tlbs */ void ld_mmu_sb1(void) { probe_cache_sizes(); _clear_page = sb1_clear_page; _copy_page = sb1_copy_page; _flush_cache_all = sb1_flush_cache_all; _flush_cache_mm = sb1_flush_cache_mm; _flush_cache_range = sb1_flush_cache_range; _flush_cache_page = sb1_f... [truncated message content] |
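As a worked illustration of the MIPS32 cache probing in the commit above (the probe_icache()/probe_dcache() routines and the SB1 probe_cache_sizes()): the cache geometry is decoded from the Config1 register fields. The stand-alone sketch below is not part of the patch; it simply runs one hypothetical Config1 value through the same field decoding to show how the sizes printed at boot are derived. The D-cache uses the same pattern at bits 15:13, 12:10 and 9:7.

#include <stdio.h>

/*
 * Stand-alone illustration (not from the patch): decode the MIPS32
 * Config1 I-cache fields the same way probe_icache() above does.
 * Field layout, as used in the code above:
 *   bits 24:22  sets per way   (64 << field)
 *   bits 21:19  line size      (2 << field, 0 means no I-cache)
 *   bits 18:16  associativity minus one
 */
static unsigned int mips32_icache_bytes(unsigned int config1)
{
	unsigned int lsize = (config1 >> 19) & 7;
	unsigned int sets  = 64u << ((config1 >> 22) & 7);
	unsigned int ways  = 1 + ((config1 >> 16) & 7);

	if (!lsize)
		return 0;			/* no I-cache present */
	return sets * ways * (2u << lsize);
}

int main(void)
{
	/* Hypothetical value: IS=2 (256 sets), IL=4 (32-byte lines), IA=3 (4 ways) */
	unsigned int config1 = (2u << 22) | (4u << 19) | (3u << 16);

	/* 256 sets * 4 ways * 32 bytes = 32768 bytes, reported as 32kb at boot */
	printf("icache: %u bytes\n", mips32_icache_bytes(config1));
	return 0;
}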
From: James S. <jsi...@us...> - 2001-10-23 17:20:17
|
Update of /cvsroot/linux-mips/linux/arch/mips/sgi/kernel
In directory usw-pr-cvs1:/tmp/cvs-serv13923/sgi/kernel

Modified Files:
	indy_int.c
Log Message:
More berzerking in the cache code.

Index: indy_int.c
===================================================================
RCS file: /cvsroot/linux-mips/linux/arch/mips/sgi/kernel/indy_int.c,v
retrieving revision 1.2
retrieving revision 1.3
diff -u -d -r1.2 -r1.3
--- indy_int.c	2001/08/25 02:19:27	1.2
+++ indy_int.c	2001/10/23 17:20:14	1.3
@@ -419,8 +419,7 @@
 	irq_enter(cpu, irq);
 	kstat.irqs[0][irq]++;
 
-	printk("Got a bus error IRQ, shouldn't happen yet\n");
-	show_regs(regs);
+	die("Got a bus error IRQ, shouldn't happen yet\n", regs);
 	printk("Spinning...\n");
 	while(1);
 	irq_exit(cpu, irq);
|
From: James S. <jsi...@us...> - 2001-10-23 17:20:16
|
Update of /cvsroot/linux-mips/linux/arch/mips/kernel
In directory usw-pr-cvs1:/tmp/cvs-serv13923/kernel

Modified Files:
	traps.c
Log Message:
More berzerking in the cache code.

Index: traps.c
===================================================================
RCS file: /cvsroot/linux-mips/linux/arch/mips/kernel/traps.c,v
retrieving revision 1.20
retrieving revision 1.21
diff -u -d -r1.20 -r1.21
--- traps.c	2001/10/22 19:16:44	1.20
+++ traps.c	2001/10/23 17:20:14	1.21
@@ -200,6 +200,33 @@
 	}
 }
 
+void show_regs(struct pt_regs * regs)
+{
+	/*
+	 * Saved main processor registers
+	 */
+	printk("$0 : %08x %08lx %08lx %08lx %08lx %08lx %08lx %08lx\n",
+	       0, regs->regs[1], regs->regs[2], regs->regs[3],
+	       regs->regs[4], regs->regs[5], regs->regs[6], regs->regs[7]);
+	printk("$8 : %08lx %08lx %08lx %08lx %08lx %08lx %08lx %08lx\n",
+	       regs->regs[8], regs->regs[9], regs->regs[10], regs->regs[11],
+	       regs->regs[12], regs->regs[13], regs->regs[14], regs->regs[15]);
+	printk("$16: %08lx %08lx %08lx %08lx %08lx %08lx %08lx %08lx\n",
+	       regs->regs[16], regs->regs[17], regs->regs[18], regs->regs[19],
+	       regs->regs[20], regs->regs[21], regs->regs[22], regs->regs[23]);
+	printk("$24: %08lx %08lx %08lx %08lx %08lx %08lx\n",
+	       regs->regs[24], regs->regs[25],
+	       regs->regs[28], regs->regs[29], regs->regs[30], regs->regs[31]);
+	printk("Hi : %016lx\n", regs->hi);
+	printk("Lo : %016lx\n", regs->lo);
+
+	/*
+	 * Saved cp0 registers
+	 */
+	printk("epc : %08lx\nStatus: %08x\nCause : %08x\n",
+	       regs->cp0_epc, regs->cp0_status, regs->cp0_cause);
+}
+
 spinlock_t die_lock;
 
 extern void __die(const char * str, struct pt_regs * regs, const char *where,
|
From: James S. <jsi...@us...> - 2001-10-23 17:14:08
|
Update of /cvsroot/linux-mips/linux/include/asm-mips In directory usw-pr-cvs1:/tmp/cvs-serv11514/asm-mips Modified Files: checksum.h Log Message: Fix IPv6 checksumming bugs. Index: checksum.h =================================================================== RCS file: /cvsroot/linux-mips/linux/include/asm-mips/checksum.h,v retrieving revision 1.3 retrieving revision 1.4 diff -u -d -r1.3 -r1.4 --- checksum.h 2001/10/08 16:18:38 1.3 +++ checksum.h 2001/10/23 17:14:04 1.4 @@ -213,42 +213,44 @@ "lw\t%1, 0(%2)\t\t\t# four words source address\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 4(%2)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 8(%2)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 12(%2)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 0(%3)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 4(%3)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 8(%3)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 12(%3)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" + + "addu\t%0, $1\t\t\t# Add final carry\n\t" ".set\tnoat\n\t" ".set\tnoreorder" : "=r" (sum), "=r" (proto) |
From: James S. <jsi...@us...> - 2001-10-23 17:14:08
|
Update of /cvsroot/linux-mips/linux/include/asm-mips64 In directory usw-pr-cvs1:/tmp/cvs-serv11514/asm-mips64 Modified Files: checksum.h Log Message: Fix IPv6 checksumming bugs. Index: checksum.h =================================================================== RCS file: /cvsroot/linux-mips/linux/include/asm-mips64/checksum.h,v retrieving revision 1.3 retrieving revision 1.4 diff -u -d -r1.3 -r1.4 --- checksum.h 2001/10/04 16:26:39 1.3 +++ checksum.h 2001/10/23 17:14:04 1.4 @@ -193,11 +193,11 @@ } #define _HAVE_ARCH_IPV6_CSUM -static inline unsigned short int csum_ipv6_magic(struct in6_addr *saddr, - struct in6_addr *daddr, - __u32 len, - unsigned short proto, - unsigned int sum) +static __inline__ unsigned short int csum_ipv6_magic(struct in6_addr *saddr, + struct in6_addr *daddr, + __u32 len, + unsigned short proto, + unsigned int sum) { __asm__( ".set\tnoreorder\t\t\t# csum_ipv6_magic\n\t" @@ -211,42 +211,44 @@ "lw\t%1, 0(%2)\t\t\t# four words source address\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 4(%2)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 8(%2)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 12(%2)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 0(%3)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 4(%3)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 8(%3)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" "lw\t%1, 12(%3)\n\t" "addu\t%0, $1\n\t" "addu\t%0, %1\n\t" - "sltu\t$1, %0, $1\n\t" + "sltu\t$1, %0, %1\n\t" + + "addu\t%0, $1\t\t\t# Add final carry\n\t" ".set\tnoat\n\t" ".set\tnoreorder" : "=r" (sum), "=r" (proto) |
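The two checksum.h hunks above make the same pair of changes: the carry out of each 32-bit add is now detected by comparing the new sum against the word just added (sltu $1, %0, %1, where %1 holds the loaded word and %0 the running sum), and the final carry is folded back in before the result is used. A rough, stand-alone C equivalent of the intended arithmetic, for illustration only (it folds each carry immediately instead of deferring it to the next addu as the asm does; the result is the same):

#include <stdint.h>
#include <stdio.h>

/* Add one 32-bit word into a ones'-complement accumulator, wrapping
 * the carry straight back in (end-around carry). */
static uint32_t csum_add32(uint32_t sum, uint32_t word)
{
	sum += word;
	if (sum < word)		/* unsigned overflow => a carry came out */
		sum += 1;	/* fold it back in */
	return sum;
}

/* Fold the 32-bit accumulator down to the final 16-bit checksum. */
static uint16_t csum_fold(uint32_t sum)
{
	sum = (sum & 0xffff) + (sum >> 16);
	sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)~sum;
}

int main(void)
{
	/* Toy example: two words whose sum overflows 32 bits. */
	uint32_t sum = 0;

	sum = csum_add32(sum, 0xffffffffu);
	sum = csum_add32(sum, 0x00000002u);	/* carry must not be lost here */
	printf("folded checksum: 0x%04x\n", csum_fold(sum));
	return 0;
}

Without the end-around carry (the bug being fixed), every overflow silently drops a bit and the IPv6 pseudo-header checksum comes out wrong.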
From: Steve L. <slo...@us...> - 2001-10-22 22:40:01
|
Update of /cvsroot/linux-mips/linux/drivers/sound In directory usw-pr-cvs1:/tmp/cvs-serv342 Modified Files: au1000.c Log Message: changed some variable names and added some convenience variables. Index: au1000.c =================================================================== RCS file: /cvsroot/linux-mips/linux/drivers/sound/au1000.c,v retrieving revision 1.4 retrieving revision 1.5 diff -u -d -r1.4 -r1.5 --- au1000.c 2001/10/19 21:19:39 1.4 +++ au1000.c 2001/10/22 22:39:58 1.5 @@ -92,63 +92,69 @@ #define AC97_EXT_DACS (AC97_EXTID_SDAC | AC97_EXTID_CDAC | AC97_EXTID_LDAC) /* Boot options */ -static int vra = 0; // 0 = no VRA, 1 = use VRA if codec supports it +static int vra = 0; // 0 = no VRA, 1 = use VRA if codec supports it +MODULE_PARM(vra, "i"); +MODULE_PARM_DESC(vra, "if 1 use VRA if codec supports it"); /* --------------------------------------------------------------------- */ [...2244 lines suppressed...] { - char *this_opt; + char* this_opt; if (!options || !*options) return 0; - for (this_opt = strtok(options, ","); this_opt; - this_opt = strtok(NULL, ",")) { + for(this_opt=strtok(options, ","); + this_opt; this_opt=strtok(NULL, ",")) { if (!strncmp(this_opt, "vra", 3)) { vra = 1; } } - + return 1; } |
From: Steve L. <slo...@us...> - 2001-10-22 22:35:53
|
Update of /cvsroot/linux-mips/linux/drivers/char
In directory usw-pr-cvs1:/tmp/cvs-serv31785

Modified Files:
	au1000_gpio.c
Log Message:
Include path for au1000_gpio.h was wrong.

Index: au1000_gpio.c
===================================================================
RCS file: /cvsroot/linux-mips/linux/drivers/char/au1000_gpio.c,v
retrieving revision 1.2
retrieving revision 1.3
diff -u -d -r1.2 -r1.3
--- au1000_gpio.c	2001/10/19 21:19:38	1.2
+++ au1000_gpio.c	2001/10/22 22:35:50	1.3
@@ -35,11 +35,11 @@
 #include <linux/types.h>
 #include <linux/kernel.h>
 #include <linux/miscdevice.h>
+#include <linux/au1000_gpio.h>
 #include <linux/init.h>
 #include <asm/uaccess.h>
 #include <asm/io.h>
 #include <asm/au1000.h>
-#include <asm/au1000_gpio.h>
 
 #define VERSION "0.01"
 
|
From: Steve L. <slo...@us...> - 2001-10-22 22:32:09
|
Update of /cvsroot/linux-mips/linux/drivers/video In directory usw-pr-cvs1:/tmp/cvs-serv31114 Modified Files: Config.in Makefile fbmem.c Added Files: it8181fb.c Log Message: New framebuffer driver for ITE IT8181/PCI card. --- NEW FILE: it8181fb.c --- /* IT8181 console frame buffer driver---it8181fb.c * * Copyright (C) 2001 Integrated Technology Express, Inc. * Copyright (C) 2001 MontaVista Software Inc. * * Initial work by ric...@it... * * Rewritten by MontaVista Software, Inc. * st...@mv... or so...@mv... * * * This program is free software; you can redistribute it and/or modify it * under the terms of the GNU General Public License as published by the * Free Software Foundation; either version 2 of the License, or (at your * option) any later version. * * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN [...1070 lines suppressed...] strcpy(fontname, this_opt+5); } else if (!strncmp(this_opt, "bpp:", 4)) { default_bpp = simple_strtoul(this_opt+4, NULL, 0); } else if (!strncmp(this_opt, "xres:", 5)) { xres = simple_strtoul(this_opt+5, NULL, 0); if (xres == 640) default_res = RES_640x480; else if (xres == 800) default_res = RES_800x600; else if (xres == 1024) default_res = RES_1024x768; } else { mode_option = this_opt; } } return 0; } #endif /* MODULE */ Index: Config.in =================================================================== RCS file: /cvsroot/linux-mips/linux/drivers/video/Config.in,v retrieving revision 1.7 retrieving revision 1.8 diff -u -d -r1.7 -r1.8 --- Config.in 2001/10/19 21:19:39 1.7 +++ Config.in 2001/10/22 22:32:06 1.8 @@ -154,6 +154,8 @@ bool ' Use TFT Panel on Pb1000 (J64)' CONFIG_PB1000_TFT fi fi + + tristate ' ITE IT8181 framebuffer support' CONFIG_FB_IT8181 fi if [ "$ARCH" = "sparc" -o "$ARCH" = "sparc64" ]; then bool ' SBUS and UPA framebuffers' CONFIG_FB_SBUS Index: Makefile =================================================================== RCS file: /cvsroot/linux-mips/linux/drivers/video/Makefile,v retrieving revision 1.6 retrieving revision 1.7 diff -u -d -r1.6 -r1.7 --- Makefile 2001/10/19 21:19:39 1.6 +++ Makefile 2001/10/22 22:32:06 1.7 @@ -114,6 +114,7 @@ obj-$(CONFIG_FB_HIT) += hitfb.o fbgen.o obj-$(CONFIG_FB_E1355) += epson1355fb.o fbgen.o obj-$(CONFIG_FB_E1356) += epson1356fb.o +obj-$(CONFIG_FB_IT8181) += it8181fb.o fbgen.o obj-$(CONFIG_FB_PVR2) += pvr2fb.o obj-$(CONFIG_FB_VOODOO1) += sstfb.o Index: fbmem.c =================================================================== RCS file: /cvsroot/linux-mips/linux/drivers/video/fbmem.c,v retrieving revision 1.9 retrieving revision 1.10 diff -u -d -r1.9 -r1.10 --- fbmem.c 2001/10/22 19:16:45 1.9 +++ fbmem.c 2001/10/22 22:32:06 1.10 @@ -130,6 +130,8 @@ extern int e1355fb_setup(char*); extern int e1356fb_init(void); extern int e1356fb_setup(char*); +extern int it8181fb_init(void); +extern int it8181fb_setup(char*); extern int pvr2fb_init(void); extern int pvr2fb_setup(char*); extern int mq200fb_init(void); @@ -289,6 +291,9 @@ #endif #ifdef CONFIG_FB_E1356 { "e1356fb", e1356fb_init, e1356fb_setup }, +#endif +#ifdef CONFIG_FB_IT8181 + { "it8181fb", it8181fb_init, it8181fb_setup }, #endif #ifdef CONFIG_FB_PVR2 { "pvr2", pvr2fb_init, pvr2fb_setup }, |
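For reference, the tail of it8181fb_setup() shown above parses a comma-separated option string with font:, bpp: and xres: prefixes. The snippet below is a small userspace re-creation of that parsing, only to make the expected option format visible; the exact boot-time syntax (including any video= prefix handling in fbmem.c) is an assumption and should be checked against the full driver source.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	/* Hypothetical option string, e.g. what might follow "it8181fb:" */
	char options[] = "xres:800,bpp:16,font:SUN8x16";
	unsigned long xres = 640, bpp = 8;	/* illustrative defaults only */
	char fontname[40] = "";
	char *this_opt;

	/* Same strtok loop structure as the setup code above. */
	for (this_opt = strtok(options, ","); this_opt;
	     this_opt = strtok(NULL, ",")) {
		if (!strncmp(this_opt, "font:", 5))
			strncpy(fontname, this_opt + 5, sizeof(fontname) - 1);
		else if (!strncmp(this_opt, "bpp:", 4))
			bpp = strtoul(this_opt + 4, NULL, 0);
		else if (!strncmp(this_opt, "xres:", 5))
			xres = strtoul(this_opt + 5, NULL, 0);
	}

	printf("xres=%lu bpp=%lu font=%s\n", xres, bpp, fontname);
	return 0;
}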
From: James S. <jsi...@us...> - 2001-10-22 21:24:47
|
Update of /cvsroot/linux-mips/linux/Documentation/mips/pci
In directory usw-pr-cvs1:/tmp/cvs-serv15229

Modified Files:
	pci_fixups.c
Log Message:
Added needed pcibios_assign_all_busses function example to pci_fixups.c.

Index: pci_fixups.c
===================================================================
RCS file: /cvsroot/linux-mips/linux/Documentation/mips/pci/pci_fixups.c,v
retrieving revision 1.1
retrieving revision 1.2
diff -u -d -r1.1 -r1.2
--- pci_fixups.c	2001/07/13 16:33:37	1.1
+++ pci_fixups.c	2001/10/22 21:24:44	1.2
@@ -64,4 +64,9 @@
 	pci_for_each_dev(dev) {
 	}
 }
+
+unsigned int pcibios_assign_all_busses(void)
+{
+	return 0;
+}
 #endif
|
From: Paul M. <le...@us...> - 2001-10-22 21:04:25
|
Update of /cvsroot/linux-mips/linux/arch/mips64/mm
In directory usw-pr-cvs1:/tmp/cvs-serv8606/arch/mips64/mm

Modified Files:
	fault.c
Log Message:
Don't kill init when OOM.

Index: fault.c
===================================================================
RCS file: /cvsroot/linux-mips/linux/arch/mips64/mm/fault.c,v
retrieving revision 1.4
retrieving revision 1.5
diff -u -d -r1.4 -r1.5
--- fault.c	2001/10/22 21:03:27	1.4
+++ fault.c	2001/10/22 21:04:22	1.5
@@ -154,6 +154,7 @@
 	 * make sure we exit gracefully rather than endlessly redo
 	 * the fault.
 	 */
+survive:
 	switch (handle_mm_fault(mm, vma, address, write)) {
 	case 1:
 		tsk->min_flt++;
@@ -234,6 +235,12 @@
  */
 out_of_memory:
 	up_read(&mm->mmap_sem);
+	if (current->pid == 1) {
+		current->policy |= SCHED_YIELD;
+		schedule();
+		down_read(&mm->mmap_sem);
+		goto survive;
+	}
 	printk("VM: killing process %s\n", tsk->comm);
 	if (user_mode(regs))
 		do_exit(SIGKILL);
|
From: Paul M. <le...@us...> - 2001-10-22 21:04:25
|
Update of /cvsroot/linux-mips/linux/arch/mips/mm In directory usw-pr-cvs1:/tmp/cvs-serv8606/arch/mips/mm Modified Files: fault.c Log Message: Don't kill init when OOM. Index: fault.c =================================================================== RCS file: /cvsroot/linux-mips/linux/arch/mips/mm/fault.c,v retrieving revision 1.3 retrieving revision 1.4 diff -u -d -r1.3 -r1.4 --- fault.c 2001/10/22 20:43:28 1.3 +++ fault.c 2001/10/22 21:04:22 1.4 @@ -100,6 +100,7 @@ * make sure we exit gracefully rather than endlessly redo * the fault. */ +survive: switch (handle_mm_fault(mm, vma, address, write)) { case 1: tsk->min_flt++; @@ -176,6 +177,12 @@ */ out_of_memory: up_read(&mm->mmap_sem); + if (current->pid == 1) { + current->policy |= SCHED_YIELD; + schedule(); + down_read(&mm->mmap_sem); + goto survive; + } printk("VM: killing process %s\n", tsk->comm); if (user_mode(regs)) do_exit(SIGKILL); |
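Both fault.c diffs above add the same escape hatch. Rendered here with comments for readability (a schematic excerpt of the patch above, not standalone code):

out_of_memory:
	up_read(&mm->mmap_sem);          /* drop mmap_sem before sleeping  */
	if (current->pid == 1) {         /* init must never be OOM-killed  */
		current->policy |= SCHED_YIELD;
		schedule();              /* let other tasks free memory    */
		down_read(&mm->mmap_sem);
		goto survive;            /* retry handle_mm_fault()        */
	}
	printk("VM: killing process %s\n", tsk->comm);  /* everyone else dies */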
From: Paul M. <le...@us...> - 2001-10-22 21:03:30
|
Update of /cvsroot/linux-mips/linux/arch/mips64/mm In directory usw-pr-cvs1:/tmp/cvs-serv8354/arch/mips64/mm Added Files: fault.c Log Message: Re-add fault.c |
From: James S. <jsi...@us...> - 2001-10-22 20:59:41
|
Update of /cvsroot/linux-mips/linux/arch/mips/mm In directory usw-pr-cvs1:/tmp/cvs-serv7118 Removed Files: pg-r2300.c Log Message: Gone now. --- pg-r2300.c DELETED --- |
From: James S. <jsi...@us...> - 2001-10-22 20:57:39
|
Update of /cvsroot/linux-mips/linux/arch/mips/mm In directory usw-pr-cvs1:/tmp/cvs-serv6299 Removed Files: pg-r4xx0.S Log Message: Renamed to pg-r4k.S --- pg-r4xx0.S DELETED --- |
From: James S. <jsi...@us...> - 2001-10-22 20:43:32
|
Update of /cvsroot/linux-mips/linux/arch/mips/mm In directory usw-pr-cvs1:/tmp/cvs-serv2286 Added Files: fault.c pg-r3k.c pg-r4k.S tlb-r3k.c tlb-r4k.c tlbex-r3k.S tlbex-r4k.S Log Message: New files for TLB handling. --- NEW FILE: pg-r3k.c --- /* * Copyright (C) 2001 Ralf Baechle (ra...@gn...) */ #include <asm/page.h> /* page functions */ void r3k_clear_page(void * page) { __asm__ __volatile__( ".set\tnoreorder\n\t" ".set\tnoat\n\t" "addiu\t$1,%0,%2\n" "1:\tsw\t$0,(%0)\n\t" "sw\t$0,4(%0)\n\t" "sw\t$0,8(%0)\n\t" "sw\t$0,12(%0)\n\t" "addiu\t%0,32\n\t" "sw\t$0,-16(%0)\n\t" "sw\t$0,-12(%0)\n\t" "sw\t$0,-8(%0)\n\t" "bne\t$1,%0,1b\n\t" "sw\t$0,-4(%0)\n\t" ".set\tat\n\t" ".set\treorder" : "=r" (page) : "0" (page), "I" (PAGE_SIZE) : "memory"); } void r3k_copy_page(void * to, void * from) { unsigned long dummy1, dummy2; unsigned long reg1, reg2, reg3, reg4; __asm__ __volatile__( ".set\tnoreorder\n\t" ".set\tnoat\n\t" "addiu\t$1,%0,%8\n" "1:\tlw\t%2,(%1)\n\t" "lw\t%3,4(%1)\n\t" "lw\t%4,8(%1)\n\t" "lw\t%5,12(%1)\n\t" "sw\t%2,(%0)\n\t" "sw\t%3,4(%0)\n\t" "sw\t%4,8(%0)\n\t" "sw\t%5,12(%0)\n\t" "lw\t%2,16(%1)\n\t" "lw\t%3,20(%1)\n\t" "lw\t%4,24(%1)\n\t" "lw\t%5,28(%1)\n\t" "sw\t%2,16(%0)\n\t" "sw\t%3,20(%0)\n\t" "sw\t%4,24(%0)\n\t" "sw\t%5,28(%0)\n\t" "addiu\t%0,64\n\t" "addiu\t%1,64\n\t" "lw\t%2,-32(%1)\n\t" "lw\t%3,-28(%1)\n\t" "lw\t%4,-24(%1)\n\t" "lw\t%5,-20(%1)\n\t" "sw\t%2,-32(%0)\n\t" "sw\t%3,-28(%0)\n\t" "sw\t%4,-24(%0)\n\t" "sw\t%5,-20(%0)\n\t" "lw\t%2,-16(%1)\n\t" "lw\t%3,-12(%1)\n\t" "lw\t%4,-8(%1)\n\t" "lw\t%5,-4(%1)\n\t" "sw\t%2,-16(%0)\n\t" "sw\t%3,-12(%0)\n\t" "sw\t%4,-8(%0)\n\t" "bne\t$1,%0,1b\n\t" "sw\t%5,-4(%0)\n\t" ".set\tat\n\t" ".set\treorder" : "=r" (dummy1), "=r" (dummy2), "=&r" (reg1), "=&r" (reg2), "=&r" (reg3), "=&r" (reg4) : "0" (to), "1" (from), "I" (PAGE_SIZE)); } --- NEW FILE: pg-r4k.S --- /* * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * * r4xx0.c: R4000 processor variant specific MMU/Cache routines. * * Copyright (C) 1996 David S. Miller (dm...@en...) * Copyright (C) 1997, 1998, 1999, 2000 Ralf Baechle ra...@gn... */ #include <asm/addrspace.h> #include <asm/asm.h> #include <asm/regdef.h> #include <asm/cacheops.h> #include <asm/mipsregs.h> #define PAGE_SIZE 0x1000 .text .set mips3 .set noreorder .set nomacro .set noat /* * Zero an entire page. Basically a simple unrolled loop should do the * job but we want more performance by saving memory bus bandwidth. We * have five flavours of the routine available for: * * - 16byte cachelines and no second level cache * - 32byte cachelines second level cache * - a version which handles the buggy R4600 v1.x * - a version which handles the buggy R4600 v2.0 * - Finally a last version without fancy cache games for the SC and MC * versions of R4000 and R4400. 
*/ LEAF(r4k_clear_page_d16) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_D, (a0) sd zero, (a0) sd zero, 8(a0) cache Create_Dirty_Excl_D, 16(a0) sd zero, 16(a0) sd zero, 24(a0) addiu a0, 64 cache Create_Dirty_Excl_D, -32(a0) sd zero, -32(a0) sd zero, -24(a0) cache Create_Dirty_Excl_D, -16(a0) sd zero, -16(a0) bne AT, a0, 1b sd zero, -8(a0) jr ra END(r4k_clear_page_d16) LEAF(r4k_clear_page_d32) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_D, (a0) sd zero, (a0) sd zero, 8(a0) sd zero, 16(a0) sd zero, 24(a0) addiu a0, 64 cache Create_Dirty_Excl_D, -32(a0) sd zero, -32(a0) sd zero, -24(a0) sd zero, -16(a0) bne AT, a0, 1b sd zero, -8(a0) jr ra END(r4k_clear_page_d32) /* * This flavour of r4k_clear_page is for the R4600 V1.x. Cite from the * IDT R4600 V1.7 errata: * * 18. The CACHE instructions Hit_Writeback_Invalidate_D, Hit_Writeback_D, * Hit_Invalidate_D and Create_Dirty_Excl_D should only be * executed if there is no other dcache activity. If the dcache is * accessed for another instruction immeidately preceding when these * cache instructions are executing, it is possible that the dcache * tag match outputs used by these cache instructions will be * incorrect. These cache instructions should be preceded by at least * four instructions that are not any kind of load or store * instruction. * * This is not allowed: lw * nop * nop * nop * cache Hit_Writeback_Invalidate_D * * This is allowed: lw * nop * nop * nop * nop * cache Hit_Writeback_Invalidate_D */ LEAF(r4k_clear_page_r4600_v1) addiu AT, a0, PAGE_SIZE 1: nop nop nop nop cache Create_Dirty_Excl_D, (a0) sd zero, (a0) sd zero, 8(a0) sd zero, 16(a0) sd zero, 24(a0) addiu a0, 64 nop nop nop cache Create_Dirty_Excl_D, -32(a0) sd zero, -32(a0) sd zero, -24(a0) sd zero, -16(a0) bne AT, a0, 1b sd zero, -8(a0) jr ra END(r4k_clear_page_r4600_v1) LEAF(r4k_clear_page_r4600_v2) mfc0 a1, CP0_STATUS ori AT, a1, 1 xori AT, 1 mtc0 AT, CP0_STATUS nop nop nop .set volatile la AT, KSEG1 lw zero, (AT) .set novolatile addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_D, (a0) sd zero, (a0) sd zero, 8(a0) sd zero, 16(a0) sd zero, 24(a0) addiu a0, 64 cache Create_Dirty_Excl_D, -32(a0) sd zero, -32(a0) sd zero, -24(a0) sd zero, -16(a0) bne AT, a0, 1b sd zero, -8(a0) mfc0 AT, CP0_STATUS # __restore_flags andi a1, 1 ori AT, 1 xori AT, 1 or a1, AT mtc0 a1, CP0_STATUS nop nop nop jr ra END(r4k_clear_page_r4600_v2) /* * The next 4 versions are optimized for all possible scache configurations * of the SC / MC versions of R4000 and R4400 ... * * Todo: For even better performance we should have a routine optimized for * every legal combination of dcache / scache linesize. When I (Ralf) tried * this the kernel crashed shortly after mounting the root filesystem. CPU * bug? Weirdo cache instruction semantics? 
*/ LEAF(r4k_clear_page_s16) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_SD, (a0) sd zero, (a0) sd zero, 8(a0) cache Create_Dirty_Excl_SD, 16(a0) sd zero, 16(a0) sd zero, 24(a0) addiu a0, 64 cache Create_Dirty_Excl_SD, -32(a0) sd zero, -32(a0) sd zero, -24(a0) cache Create_Dirty_Excl_SD, -16(a0) sd zero, -16(a0) bne AT, a0, 1b sd zero, -8(a0) jr ra END(r4k_clear_page_s16) LEAF(r4k_clear_page_s32) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_SD, (a0) sd zero, (a0) sd zero, 8(a0) sd zero, 16(a0) sd zero, 24(a0) addiu a0, 64 cache Create_Dirty_Excl_SD, -32(a0) sd zero, -32(a0) sd zero, -24(a0) sd zero, -16(a0) bne AT, a0, 1b sd zero, -8(a0) jr ra END(r4k_clear_page_s32) LEAF(r4k_clear_page_s64) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_SD, (a0) sd zero, (a0) sd zero, 8(a0) sd zero, 16(a0) sd zero, 24(a0) addiu a0, 64 sd zero, -32(a0) sd zero, -24(a0) sd zero, -16(a0) bne AT, a0, 1b sd zero, -8(a0) jr ra END(r4k_clear_page_s64) LEAF(r4k_clear_page_s128) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_SD, (a0) sd zero, (a0) sd zero, 8(a0) sd zero, 16(a0) sd zero, 24(a0) sd zero, 32(a0) sd zero, 40(a0) sd zero, 48(a0) sd zero, 56(a0) addiu a0, 128 sd zero, -64(a0) sd zero, -56(a0) sd zero, -48(a0) sd zero, -40(a0) sd zero, -32(a0) sd zero, -24(a0) sd zero, -16(a0) bne AT, a0, 1b sd zero, -8(a0) jr ra END(r4k_clear_page_s128) /* * This is still inefficient. We only can do better if we know the * virtual address where the copy will be accessed. */ LEAF(r4k_copy_page_d16) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_D, (a0) lw a3, (a1) lw a2, 4(a1) lw v1, 8(a1) lw v0, 12(a1) sw a3, (a0) sw a2, 4(a0) sw v1, 8(a0) sw v0, 12(a0) cache Create_Dirty_Excl_D, 16(a0) lw a3, 16(a1) lw a2, 20(a1) lw v1, 24(a1) lw v0, 28(a1) sw a3, 16(a0) sw a2, 20(a0) sw v1, 24(a0) sw v0, 28(a0) cache Create_Dirty_Excl_D, 32(a0) addiu a0, 64 addiu a1, 64 lw a3, -32(a1) lw a2, -28(a1) lw v1, -24(a1) lw v0, -20(a1) sw a3, -32(a0) sw a2, -28(a0) sw v1, -24(a0) sw v0, -20(a0) cache Create_Dirty_Excl_D, -16(a0) lw a3, -16(a1) lw a2, -12(a1) lw v1, -8(a1) lw v0, -4(a1) sw a3, -16(a0) sw a2, -12(a0) sw v1, -8(a0) bne AT, a0, 1b sw v0, -4(a0) jr ra END(r4k_copy_page_d16) LEAF(r4k_copy_page_d32) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_D, (a0) lw a3, (a1) lw a2, 4(a1) lw v1, 8(a1) lw v0, 12(a1) sw a3, (a0) sw a2, 4(a0) sw v1, 8(a0) sw v0, 12(a0) lw a3, 16(a1) lw a2, 20(a1) lw v1, 24(a1) lw v0, 28(a1) sw a3, 16(a0) sw a2, 20(a0) sw v1, 24(a0) sw v0, 28(a0) cache Create_Dirty_Excl_D, 32(a0) addiu a0, 64 addiu a1, 64 lw a3, -32(a1) lw a2, -28(a1) lw v1, -24(a1) lw v0, -20(a1) sw a3, -32(a0) sw a2, -28(a0) sw v1, -24(a0) sw v0, -20(a0) lw a3, -16(a1) lw a2, -12(a1) lw v1, -8(a1) lw v0, -4(a1) sw a3, -16(a0) sw a2, -12(a0) sw v1, -8(a0) bne AT, a0, 1b sw v0, -4(a0) jr ra END(r4k_copy_page_d32) /* * Again a special version for the R4600 V1.x */ LEAF(r4k_copy_page_r4600_v1) addiu AT, a0, PAGE_SIZE 1: nop nop nop nop cache Create_Dirty_Excl_D, (a0) lw a3, (a1) lw a2, 4(a1) lw v1, 8(a1) lw v0, 12(a1) sw a3, (a0) sw a2, 4(a0) sw v1, 8(a0) sw v0, 12(a0) lw a3, 16(a1) lw a2, 20(a1) lw v1, 24(a1) lw v0, 28(a1) sw a3, 16(a0) sw a2, 20(a0) sw v1, 24(a0) sw v0, 28(a0) nop nop nop nop cache Create_Dirty_Excl_D, 32(a0) addiu a0, 64 addiu a1, 64 lw a3, -32(a1) lw a2, -28(a1) lw v1, -24(a1) lw v0, -20(a1) sw a3, -32(a0) sw a2, -28(a0) sw v1, -24(a0) sw v0, -20(a0) lw a3, -16(a1) lw a2, -12(a1) lw v1, -8(a1) lw v0, -4(a1) sw a3, -16(a0) sw a2, -12(a0) sw v1, -8(a0) bne AT, a0, 1b sw v0, -4(a0) jr ra 
END(r4k_copy_page_r4600_v1) LEAF(r4k_copy_page_r4600_v2) mfc0 v1, CP0_STATUS ori AT, v1, 1 xori AT, 1 mtc0 AT, CP0_STATUS nop nop nop addiu AT, a0, PAGE_SIZE 1: nop nop nop nop cache Create_Dirty_Excl_D, (a0) lw t1, (a1) lw t0, 4(a1) lw a3, 8(a1) lw a2, 12(a1) sw t1, (a0) sw t0, 4(a0) sw a3, 8(a0) sw a2, 12(a0) lw t1, 16(a1) lw t0, 20(a1) lw a3, 24(a1) lw a2, 28(a1) sw t1, 16(a0) sw t0, 20(a0) sw a3, 24(a0) sw a2, 28(a0) nop nop nop nop cache Create_Dirty_Excl_D, 32(a0) addiu a0, 64 addiu a1, 64 lw t1, -32(a1) lw t0, -28(a1) lw a3, -24(a1) lw a2, -20(a1) sw t1, -32(a0) sw t0, -28(a0) sw a3, -24(a0) sw a2, -20(a0) lw t1, -16(a1) lw t0, -12(a1) lw a3, -8(a1) lw a2, -4(a1) sw t1, -16(a0) sw t0, -12(a0) sw a3, -8(a0) bne AT, a0, 1b sw a2, -4(a0) mfc0 AT, CP0_STATUS # __restore_flags andi v1, 1 ori AT, 1 xori AT, 1 or v1, AT mtc0 v1, CP0_STATUS nop nop nop jr ra END(r4k_copy_page_r4600_v2) /* * These are for R4000SC / R4400MC */ LEAF(r4k_copy_page_s16) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_SD, (a0) lw a3, (a1) lw a2, 4(a1) lw v1, 8(a1) lw v0, 12(a1) sw a3, (a0) sw a2, 4(a0) sw v1, 8(a0) sw v0, 12(a0) cache Create_Dirty_Excl_SD, 16(a0) lw a3, 16(a1) lw a2, 20(a1) lw v1, 24(a1) lw v0, 28(a1) sw a3, 16(a0) sw a2, 20(a0) sw v1, 24(a0) sw v0, 28(a0) cache Create_Dirty_Excl_SD, 32(a0) addiu a0, 64 addiu a1, 64 lw a3, -32(a1) lw a2, -28(a1) lw v1, -24(a1) lw v0, -20(a1) sw a3, -32(a0) sw a2, -28(a0) sw v1, -24(a0) sw v0, -20(a0) cache Create_Dirty_Excl_SD, -16(a0) lw a3, -16(a1) lw a2, -12(a1) lw v1, -8(a1) lw v0, -4(a1) sw a3, -16(a0) sw a2, -12(a0) sw v1, -8(a0) bne AT, a0, 1b sw v0, -4(a0) jr ra END(r4k_copy_page_s16) LEAF(r4k_copy_page_s32) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_SD, (a0) lw a3, (a1) lw a2, 4(a1) lw v1, 8(a1) lw v0, 12(a1) sw a3, (a0) sw a2, 4(a0) sw v1, 8(a0) sw v0, 12(a0) lw a3, 16(a1) lw a2, 20(a1) lw v1, 24(a1) lw v0, 28(a1) sw a3, 16(a0) sw a2, 20(a0) sw v1, 24(a0) sw v0, 28(a0) cache Create_Dirty_Excl_SD, 32(a0) addiu a0, 64 addiu a1, 64 lw a3, -32(a1) lw a2, -28(a1) lw v1, -24(a1) lw v0, -20(a1) sw a3, -32(a0) sw a2, -28(a0) sw v1, -24(a0) sw v0, -20(a0) lw a3, -16(a1) lw a2, -12(a1) lw v1, -8(a1) lw v0, -4(a1) sw a3, -16(a0) sw a2, -12(a0) sw v1, -8(a0) bne AT, a0, 1b sw v0, -4(a0) jr ra END(r4k_copy_page_s32) LEAF(r4k_copy_page_s64) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_SD, (a0) lw a3, (a1) lw a2, 4(a1) lw v1, 8(a1) lw v0, 12(a1) sw a3, (a0) sw a2, 4(a0) sw v1, 8(a0) sw v0, 12(a0) lw a3, 16(a1) lw a2, 20(a1) lw v1, 24(a1) lw v0, 28(a1) sw a3, 16(a0) sw a2, 20(a0) sw v1, 24(a0) sw v0, 28(a0) addiu a0, 64 addiu a1, 64 lw a3, -32(a1) lw a2, -28(a1) lw v1, -24(a1) lw v0, -20(a1) sw a3, -32(a0) sw a2, -28(a0) sw v1, -24(a0) sw v0, -20(a0) lw a3, -16(a1) lw a2, -12(a1) lw v1, -8(a1) lw v0, -4(a1) sw a3, -16(a0) sw a2, -12(a0) sw v1, -8(a0) bne AT, a0, 1b sw v0, -4(a0) jr ra END(r4k_copy_page_s64) LEAF(r4k_copy_page_s128) addiu AT, a0, PAGE_SIZE 1: cache Create_Dirty_Excl_SD, (a0) lw a3, (a1) lw a2, 4(a1) lw v1, 8(a1) lw v0, 12(a1) sw a3, (a0) sw a2, 4(a0) sw v1, 8(a0) sw v0, 12(a0) lw a3, 16(a1) lw a2, 20(a1) lw v1, 24(a1) lw v0, 28(a1) sw a3, 16(a0) sw a2, 20(a0) sw v1, 24(a0) sw v0, 28(a0) lw a3, 32(a1) lw a2, 36(a1) lw v1, 40(a1) lw v0, 44(a1) sw a3, 32(a0) sw a2, 36(a0) sw v1, 40(a0) sw v0, 44(a0) lw a3, 48(a1) lw a2, 52(a1) lw v1, 56(a1) lw v0, 60(a1) sw a3, 48(a0) sw a2, 52(a0) sw v1, 56(a0) sw v0, 60(a0) addiu a0, 128 addiu a1, 128 lw a3, -64(a1) lw a2, -60(a1) lw v1, -56(a1) lw v0, -52(a1) sw a3, -64(a0) sw a2, -60(a0) sw v1, -56(a0) 
sw v0, -52(a0) lw a3, -48(a1) lw a2, -44(a1) lw v1, -40(a1) lw v0, -36(a1) sw a3, -48(a0) sw a2, -44(a0) sw v1, -40(a0) sw v0, -36(a0) lw a3, -32(a1) lw a2, -28(a1) lw v1, -24(a1) lw v0, -20(a1) sw a3, -32(a0) sw a2, -28(a0) sw v1, -24(a0) sw v0, -20(a0) lw a3, -16(a1) lw a2, -12(a1) lw v1, -8(a1) lw v0, -4(a1) sw a3, -16(a0) sw a2, -12(a0) sw v1, -8(a0) bne AT, a0, 1b sw v0, -4(a0) jr ra END(r4k_copy_page_s128) --- NEW FILE: tlb-r3k.c --- /* * r2300.c: R2000 and R3000 specific mmu/cache code. * * Copyright (C) 1996 David S. Miller (dm...@en...) * * with a lot of changes to make this thing work for R3000s * Tx39XX R4k style caches added. HK * Copyright (C) 1998, 1999, 2000 Harald Koerfgen * Copyright (C) 1998 Gleb Raiko & Vladimir Roganov */ #include <linux/init.h> #include <linux/kernel.h> #include <linux/sched.h> #include <linux/mm.h> #include <asm/page.h> #include <asm/pgtable.h> #include <asm/mmu_context.h> #include <asm/system.h> #include <asm/isadep.h> #include <asm/io.h> #include <asm/wbflush.h> #include <asm/bootinfo.h> #include <asm/cpu.h> #undef DEBUG_TLB /* TLB operations. */ void local_flush_tlb_all(void) { unsigned long flags; unsigned long old_ctx; int entry; #ifdef DEBUG_TLB printk("[tlball]"); #endif save_and_cli(flags); old_ctx = (get_entryhi() & 0xfc0); write_32bit_cp0_register(CP0_ENTRYLO0, 0); for (entry = 8; entry < mips_cpu.tlbsize; entry++) { write_32bit_cp0_register(CP0_INDEX, entry << 8); write_32bit_cp0_register(CP0_ENTRYHI, ((entry | 0x80000) << 12)); __asm__ __volatile__("tlbwi"); } set_entryhi(old_ctx); restore_flags(flags); } void local_flush_tlb_mm(struct mm_struct *mm) { if (mm->context != 0) { unsigned long flags; #ifdef DEBUG_TLB printk("[tlbmm<%lu>]", (unsigned long) mm->context); #endif save_and_cli(flags); get_new_cpu_mmu_context(mm, smp_processor_id()); if (mm == current->active_mm) set_entryhi(mm->context & 0xfc0); restore_flags(flags); } } void local_flush_tlb_range(struct mm_struct *mm, unsigned long start, unsigned long end) { if (mm->context != 0) { unsigned long flags; int size; #ifdef DEBUG_TLB printk("[tlbrange<%lu,0x%08lx,0x%08lx>]", (mm->context & 0xfc0), start, end); #endif save_and_cli(flags); size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT; if (size <= mips_cpu.tlbsize) { int oldpid = (get_entryhi() & 0xfc0); int newpid = (mm->context & 0xfc0); start &= PAGE_MASK; end += (PAGE_SIZE - 1); end &= PAGE_MASK; while (start < end) { int idx; set_entryhi(start | newpid); start += PAGE_SIZE; tlb_probe(); idx = get_index(); set_entrylo0(0); set_entryhi(KSEG0); if (idx < 0) continue; tlb_write_indexed(); } set_entryhi(oldpid); } else { get_new_cpu_mmu_context(mm, smp_processor_id()); if (mm == current->active_mm) set_entryhi(mm->context & 0xfc0); } restore_flags(flags); } } void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page) { if (vma->vm_mm->context != 0) { unsigned long flags; int oldpid, newpid, idx; #ifdef DEBUG_TLB printk("[tlbpage<%lu,0x%08lx>]", vma->vm_mm->context, page); #endif newpid = (vma->vm_mm->context & 0xfc0); page &= PAGE_MASK; save_and_cli(flags); oldpid = (get_entryhi() & 0xfc0); set_entryhi(page | newpid); tlb_probe(); idx = get_index(); set_entrylo0(0); set_entryhi(KSEG0); if (idx < 0) goto finish; tlb_write_indexed(); finish: set_entryhi(oldpid); restore_flags(flags); } } void update_mmu_cache(struct vm_area_struct * vma, unsigned long address, pte_t pte) { unsigned long flags; pgd_t *pgdp; pmd_t *pmdp; pte_t *ptep; int idx, pid; /* * Handle debugger faulting in for debugee. 
*/ if (current->active_mm != vma->vm_mm) return; pid = get_entryhi() & 0xfc0; #ifdef DEBUG_TLB if ((pid != (vma->vm_mm->context & 0xfc0)) || (vma->vm_mm->context == 0)) { printk("update_mmu_cache: Wheee, bogus tlbpid mmpid=%lu tlbpid=%d\n", (vma->vm_mm->context & 0xfc0), pid); } #endif save_and_cli(flags); address &= PAGE_MASK; set_entryhi(address | (pid)); pgdp = pgd_offset(vma->vm_mm, address); tlb_probe(); pmdp = pmd_offset(pgdp, address); idx = get_index(); ptep = pte_offset(pmdp, address); set_entrylo0(pte_val(*ptep)); set_entryhi(address | (pid)); if (idx < 0) { tlb_write_random(); #if 0 printk("[MISS]"); #endif } else { tlb_write_indexed(); #if 0 printk("[HIT]"); #endif } set_entryhi(pid); restore_flags(flags); } void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1, unsigned long entryhi, unsigned long pagemask) { unsigned long flags; unsigned long old_ctx; static unsigned long wired = 0; if (wired < 8) { __save_and_cli(flags); old_ctx = get_entryhi() & 0xfc0; set_entrylo0(entrylo0); set_entryhi(entryhi); set_index(wired); wired++; tlb_write_indexed(); set_entryhi(old_ctx); local_flush_tlb_all(); __restore_flags(flags); } } --- NEW FILE: tlb-r4k.c --- /* * This file is subject to the terms and conditions of the GNU General Public * License. See the file "COPYING" in the main directory of this archive * for more details. * * r4xx0.c: R4000 processor variant specific MMU/Cache routines. * * Copyright (C) 1996 David S. Miller (dm...@en...) * Copyright (C) 1997, 1998, 1999, 2000 Ralf Baechle ra...@gn... * * To do: * * - this code is a overbloated pig * - many of the bug workarounds are not efficient at all, but at * least they are functional ... */ #include <linux/init.h> #include <linux/sched.h> #include <linux/mm.h> #include <asm/cpu.h> #include <asm/bootinfo.h> #include <asm/mmu_context.h> #include <asm/pgtable.h> #include <asm/system.h> #undef DEBUG_TLB #undef DEBUG_TLBUPDATE extern char except_vec0_nevada, except_vec0_r4000, except_vec0_r4600; /* CP0 hazard avoidance. */ #define BARRIER __asm__ __volatile__(".set noreorder\n\t" \ "nop; nop; nop; nop; nop; nop;\n\t" \ ".set reorder\n\t") void local_flush_tlb_all(void) { unsigned long flags; unsigned long old_ctx; int entry; #ifdef DEBUG_TLB printk("[tlball]"); #endif __save_and_cli(flags); /* Save old context and create impossible VPN2 value */ old_ctx = (get_entryhi() & 0xff); set_entryhi(KSEG0); set_entrylo0(0); set_entrylo1(0); BARRIER; entry = get_wired(); /* Blast 'em all away. 
*/ while (entry < mips_cpu.tlbsize) { set_index(entry); BARRIER; tlb_write_indexed(); BARRIER; entry++; } BARRIER; set_entryhi(old_ctx); __restore_flags(flags); } void local_flush_tlb_mm(struct mm_struct *mm) { if (mm->context != 0) { unsigned long flags; #ifdef DEBUG_TLB printk("[tlbmm<%d>]", mm->context); #endif __save_and_cli(flags); get_new_cpu_mmu_context(mm, smp_processor_id()); if (mm == current->active_mm) set_entryhi(mm->context & 0xff); __restore_flags(flags); } } void local_flush_tlb_range(struct mm_struct *mm, unsigned long start, unsigned long end) { if (mm->context != 0) { unsigned long flags; int size; #ifdef DEBUG_TLB printk("[tlbrange<%02x,%08lx,%08lx>]", (mm->context & 0xff), start, end); #endif __save_and_cli(flags); size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT; size = (size + 1) >> 1; if (size <= mips_cpu.tlbsize/2) { int oldpid = (get_entryhi() & 0xff); int newpid = (mm->context & 0xff); start &= (PAGE_MASK << 1); end += ((PAGE_SIZE << 1) - 1); end &= (PAGE_MASK << 1); while (start < end) { int idx; set_entryhi(start | newpid); start += (PAGE_SIZE << 1); BARRIER; tlb_probe(); BARRIER; idx = get_index(); set_entrylo0(0); set_entrylo1(0); set_entryhi(KSEG0); BARRIER; if(idx < 0) continue; tlb_write_indexed(); BARRIER; } set_entryhi(oldpid); } else { get_new_cpu_mmu_context(mm, smp_processor_id()); if (mm == current->active_mm) set_entryhi(mm->context & 0xff); } __restore_flags(flags); } } void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page) { if (vma->vm_mm->context != 0) { unsigned long flags; int oldpid, newpid, idx; #ifdef DEBUG_TLB printk("[tlbpage<%d,%08lx>]", vma->vm_mm->context, page); #endif newpid = (vma->vm_mm->context & 0xff); page &= (PAGE_MASK << 1); __save_and_cli(flags); oldpid = (get_entryhi() & 0xff); set_entryhi(page | newpid); BARRIER; tlb_probe(); BARRIER; idx = get_index(); set_entrylo0(0); set_entrylo1(0); set_entryhi(KSEG0); if(idx < 0) goto finish; BARRIER; tlb_write_indexed(); finish: BARRIER; set_entryhi(oldpid); __restore_flags(flags); } } /* We will need multiple versions of update_mmu_cache(), one that just * updates the TLB with the new pte(s), and another which also checks * for the R4k "end of page" hardware bug and does the needy. */ void update_mmu_cache(struct vm_area_struct * vma, unsigned long address, pte_t pte) { unsigned long flags; pgd_t *pgdp; pmd_t *pmdp; pte_t *ptep; int idx, pid; /* * Handle debugger faulting in for debugee. 
*/ if (current->active_mm != vma->vm_mm) return; pid = get_entryhi() & 0xff; #ifdef DEBUG_TLB if((pid != (vma->vm_mm->context & 0xff)) || (vma->vm_mm->context == 0)) { printk("update_mmu_cache: Wheee, bogus tlbpid mmpid=%d tlbpid=%d\n", (int) (vma->vm_mm->context & 0xff), pid); } #endif __save_and_cli(flags); address &= (PAGE_MASK << 1); set_entryhi(address | (pid)); pgdp = pgd_offset(vma->vm_mm, address); BARRIER; tlb_probe(); BARRIER; pmdp = pmd_offset(pgdp, address); idx = get_index(); ptep = pte_offset(pmdp, address); BARRIER; set_entrylo0(pte_val(*ptep++) >> 6); set_entrylo1(pte_val(*ptep) >> 6); set_entryhi(address | (pid)); BARRIER; if (idx < 0) { tlb_write_random(); } else { tlb_write_indexed(); } BARRIER; set_entryhi(pid); BARRIER; __restore_flags(flags); } #if 0 static void r4k_update_mmu_cache_hwbug(struct vm_area_struct * vma, unsigned long address, pte_t pte) { unsigned long flags; pgd_t *pgdp; pmd_t *pmdp; pte_t *ptep; int idx; __save_and_cli(flags); address &= (PAGE_MASK << 1); set_entryhi(address | (get_entryhi() & 0xff)); pgdp = pgd_offset(vma->vm_mm, address); tlb_probe(); pmdp = pmd_offset(pgdp, address); idx = get_index(); ptep = pte_offset(pmdp, address); set_entrylo0(pte_val(*ptep++) >> 6); set_entrylo1(pte_val(*ptep) >> 6); BARRIER; if (idx < 0) tlb_write_random(); else tlb_write_indexed(); BARRIER; __restore_flags(flags); } #endif void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1, unsigned long entryhi, unsigned long pagemask) { unsigned long flags; unsigned long wired; unsigned long old_pagemask; unsigned long old_ctx; __save_and_cli(flags); /* Save old context and create impossible VPN2 value */ old_ctx = get_entryhi() & 0xff; old_pagemask = get_pagemask(); wired = get_wired(); set_wired(wired + 1); set_index(wired); BARRIER; set_pagemask(pagemask); set_entryhi(entryhi); set_entrylo0(entrylo0); set_entrylo1(entrylo1); BARRIER; tlb_write_indexed(); BARRIER; set_entryhi(old_ctx); BARRIER; set_pagemask(old_pagemask); local_flush_tlb_all(); __restore_flags(flags); } /* * Used for loading TLB entries before trap_init() has started, when we * don't actually want to add a wired entry which remains throughout the * lifetime of the system */ static int temp_tlb_entry __initdata; __init int add_temporary_entry(unsigned long entrylo0, unsigned long entrylo1, unsigned long entryhi, unsigned long pagemask) { int ret = 0; unsigned long flags; unsigned long wired; unsigned long old_pagemask; unsigned long old_ctx; __save_and_cli(flags); /* Save old context and create impossible VPN2 value */ old_ctx = get_entryhi() & 0xff; old_pagemask = get_pagemask(); wired = get_wired(); if (--temp_tlb_entry < wired) { printk(KERN_WARNING "No TLB space left for add_temporary_entry\n"); ret = -ENOSPC; goto out; } set_index(temp_tlb_entry); BARRIER; set_pagemask(pagemask); set_entryhi(entryhi); set_entrylo0(entrylo0); set_entrylo1(entrylo1); BARRIER; tlb_write_indexed(); BARRIER; set_entryhi(old_ctx); BARRIER; set_pagemask(old_pagemask); out: __restore_flags(flags); return ret; } void __init r4k_tlb_init(void) { /* * You should never change this register: * - On R4600 1.7 the tlbp never hits for pages smaller than * the value in the c0_pagemask register. * - The entire mm handling assumes the c0_pagemask register to * be set for 4kb pages. 
*/ set_pagemask(PM_4K); write_32bit_cp0_register(CP0_WIRED, 0); temp_tlb_entry = mips_cpu.tlbsize - 1; printk("TLB has %d entries.\n", mips_cpu.tlbsize); local_flush_tlb_all(); if ((mips_cpu.options & MIPS_CPU_4KEX) && (mips_cpu.options & MIPS_CPU_4KTLB)) { if (mips_cpu.cputype == CPU_NEVADA) memcpy((void *)KSEG0, &except_vec0_nevada, 0x80); else if (mips_cpu.cputype == CPU_R4600) memcpy((void *)KSEG0, &except_vec0_r4600, 0x80); else memcpy((void *)KSEG0, &except_vec0_r4000, 0x80); flush_icache_range(KSEG0, KSEG0 + 0x80); } } --- NEW FILE: tlbex-r3k.S --- /* * TLB exception handling code for R2000/R3000. * * Copyright (C) 1994, 1995, 1996 by Ralf Baechle and Andreas Busse * * Multi-CPU abstraction reworking: * Copyright (C) 1996 David S. Miller (dm...@en...) * * Further modifications to make this work: * Copyright (c) 1998 Harald Koerfgen * Copyright (c) 1998, 1999 Gleb Raiko & Vladimir Roganov * Copyright (c) 2001 Ralf Baechle * Copyright (c) 2001 MIPS Technologies, Inc. */ #include <linux/init.h> #include <asm/asm.h> #include <asm/current.h> #include <asm/bootinfo.h> #include <asm/cachectl.h> #include <asm/fpregdef.h> #include <asm/mipsregs.h> #include <asm/page.h> #include <asm/pgtable.h> #include <asm/processor.h> #include <asm/regdef.h> #include <asm/segment.h> #include <asm/stackframe.h> #define TLB_OPTIMIZE /* If you are paranoid, disable this. */ .text .set mips1 .set noreorder __INIT /* TLB refill, R[23]00 version */ LEAF(except_vec0_r2300) .set noat .set mips1 mfc0 k0, CP0_BADVADDR lw k1, pgd_current # get pgd pointer srl k0, k0, 22 sll k0, k0, 2 addu k1, k1, k0 mfc0 k0, CP0_CONTEXT lw k1, (k1) and k0, k0, 0xffc addu k1, k1, k0 lw k0, (k1) nop mtc0 k0, CP0_ENTRYLO0 mfc0 k1, CP0_EPC tlbwr jr k1 rfe END(except_vec0_r2300) __FINIT /* ABUSE of CPP macros 101. */ /* After this macro runs, the pte faulted on is * in register PTE, a ptr into the table in which * the pte belongs is in PTR. */ #define LOAD_PTE(pte, ptr) \ mfc0 pte, CP0_BADVADDR; \ lw ptr, pgd_current; \ srl pte, pte, 22; \ sll pte, pte, 2; \ addu ptr, ptr, pte; \ mfc0 pte, CP0_CONTEXT; \ lw ptr, (ptr); \ andi pte, pte, 0xffc; \ addu ptr, ptr, pte; \ lw pte, (ptr); \ nop; /* This places the even/odd pte pair in the page * table at PTR into ENTRYLO0 and ENTRYLO1 using * TMP as a scratch register. */ #define PTE_RELOAD(ptr) \ lw ptr, (ptr) ; \ nop ; \ mtc0 ptr, CP0_ENTRYLO0; \ nop; #define DO_FAULT(write) \ .set noat; \ .set macro; \ SAVE_ALL; \ mfc0 a2, CP0_BADVADDR; \ STI; \ .set at; \ move a0, sp; \ jal do_page_fault; \ li a1, write; \ j ret_from_exception; \ nop; \ .set noat; \ .set nomacro; /* Check is PTE is present, if not then jump to LABEL. * PTR points to the page table where this PTE is located, * when the macro is done executing PTE will be restored * with it's original value. */ #define PTE_PRESENT(pte, ptr, label) \ andi pte, pte, (_PAGE_PRESENT | _PAGE_READ); \ xori pte, pte, (_PAGE_PRESENT | _PAGE_READ); \ bnez pte, label; \ .set push; \ .set reorder; \ lw pte, (ptr); \ .set pop; /* Make PTE valid, store result in PTR. */ #define PTE_MAKEVALID(pte, ptr) \ ori pte, pte, (_PAGE_VALID | _PAGE_ACCESSED); \ sw pte, (ptr); /* Check if PTE can be written to, if not branch to LABEL. * Regardless restore PTE with value from PTR when done. 
*/ #define PTE_WRITABLE(pte, ptr, label) \ andi pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \ xori pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \ bnez pte, label; \ .set push; \ .set reorder; \ lw pte, (ptr); \ .set pop; /* Make PTE writable, update software status bits as well, * then store at PTR. */ #define PTE_MAKEWRITE(pte, ptr) \ ori pte, pte, (_PAGE_ACCESSED | _PAGE_MODIFIED | \ _PAGE_VALID | _PAGE_DIRTY); \ sw pte, (ptr); /* * The index register may have the probe fail bit set, * because we would trap on access kseg2, i.e. without refill. */ #define TLB_WRITE(reg) \ mfc0 reg, CP0_INDEX; \ nop; \ bltz reg, 1f; \ nop; \ tlbwi; \ j 2f; \ nop; \ 1: tlbwr; \ 2: #define RET(reg) \ mfc0 reg, CP0_EPC; \ nop; \ jr reg; \ rfe .set noreorder .align 5 NESTED(handle_tlbl, PT_SIZE, sp) .set noat #ifdef TLB_OPTIMIZE /* Test present bit in entry. */ LOAD_PTE(k0, k1) tlbp PTE_PRESENT(k0, k1, nopage_tlbl) PTE_MAKEVALID(k0, k1) PTE_RELOAD(k1) TLB_WRITE(k0) RET(k0) nopage_tlbl: #endif DO_FAULT(0) END(handle_tlbl) NESTED(handle_tlbs, PT_SIZE, sp) .set noat #ifdef TLB_OPTIMIZE LOAD_PTE(k0, k1) tlbp # find faulting entry PTE_WRITABLE(k0, k1, nopage_tlbs) PTE_MAKEWRITE(k0, k1) PTE_RELOAD(k1) TLB_WRITE(k0) RET(k0) nopage_tlbs: #endif DO_FAULT(1) END(handle_tlbs) .align 5 NESTED(handle_mod, PT_SIZE, sp) .set noat #ifdef TLB_OPTIMIZE LOAD_PTE(k0, k1) tlbp # find faulting entry andi k0, k0, _PAGE_WRITE beqz k0, nowrite_mod .set push .set reorder lw k0, (k1) .set pop /* Present and writable bits set, set accessed and dirty bits. */ PTE_MAKEWRITE(k0, k1) /* Now reload the entry into the tlb. */ PTE_RELOAD(k1) tlbwi RET(k0) #endif nowrite_mod: DO_FAULT(1) END(handle_mod) --- NEW FILE: tlbex-r4k.S --- /* * TLB exception handling code for r4k. * * Copyright (C) 1994, 1995, 1996 by Ralf Baechle and Andreas Busse * * Multi-cpu abstraction and reworking: * Copyright (C) 1996 David S. Miller (dm...@en...) * * Carsten Langgaard, car...@mi... * Copyright (C) 2000 MIPS Technologies, Inc. All rights reserved. */ #include <linux/init.h> #include <asm/asm.h> #include <asm/current.h> #include <asm/offset.h> #include <asm/bootinfo.h> #include <asm/cachectl.h> #include <asm/fpregdef.h> #include <asm/mipsregs.h> #include <asm/page.h> #include <asm/pgtable.h> #include <asm/processor.h> #include <asm/regdef.h> #include <asm/stackframe.h> #define TLB_OPTIMIZE /* If you are paranoid, disable this. */ __INIT /* * These handlers much be written in a relocatable manner * because based upon the cpu type an arbitrary one of the * following pieces of code will be copied to the KSEG0 * vector location. 
*/ /* TLB refill, EXL == 0, R4xx0, non-R4600 version */ .set noreorder .set noat LEAF(except_vec0_r4000) .set mips3 #ifdef CONFIG_SMP mfc0 k1, CP0_CONTEXT la k0, pgd_current srl k1, 23 sll k1, 2 addu k1, k0, k1 lw k1, (k1) #else lw k1, pgd_current # get pgd pointer #endif mfc0 k0, CP0_BADVADDR # Get faulting address srl k0, k0, 22 # get pgd only bits sll k0, k0, 2 addu k1, k1, k0 # add in pgd offset mfc0 k0, CP0_CONTEXT # get context reg lw k1, (k1) #if defined(CONFIG_CPU_VR41XX) srl k0, k0, 3 # get pte offset #else srl k0, k0, 1 # get pte offset #endif and k0, k0, 0xff8 addu k1, k1, k0 # add in offset lw k0, 0(k1) # get even pte lw k1, 4(k1) # get odd pte srl k0, k0, 6 # convert to entrylo0 mtc0 k0, CP0_ENTRYLO0 # load it srl k1, k1, 6 # convert to entrylo1 mtc0 k1, CP0_ENTRYLO1 # load it b 1f tlbwr # write random tlb entry 1: nop eret # return from trap END(except_vec0_r4000) /* TLB refill, EXL == 0, R4600 version */ LEAF(except_vec0_r4600) .set mips3 mfc0 k0, CP0_BADVADDR srl k0, k0, 22 lw k1, pgd_current # get pgd pointer sll k0, k0, 2 addu k1, k1, k0 mfc0 k0, CP0_CONTEXT lw k1, (k1) srl k0, k0, 1 and k0, k0, 0xff8 addu k1, k1, k0 lw k0, 0(k1) lw k1, 4(k1) srl k0, k0, 6 mtc0 k0, CP0_ENTRYLO0 srl k1, k1, 6 mtc0 k1, CP0_ENTRYLO1 nop tlbwr nop eret END(except_vec0_r4600) /* TLB refill, EXL == 0, R52x0 "Nevada" version */ /* * This version has a bug workaround for the Nevada. It seems * as if under certain circumstances the move from cp0_context * might produce a bogus result when the mfc0 instruction and * it's consumer are in a different cacheline or a load instruction, * probably any memory reference, is between them. This is * potencially slower than the R4000 version, so we use this * special version. */ .set noreorder .set noat LEAF(except_vec0_nevada) .set mips3 mfc0 k0, CP0_BADVADDR # Get faulting address srl k0, k0, 22 # get pgd only bits lw k1, pgd_current # get pgd pointer sll k0, k0, 2 addu k1, k1, k0 # add in pgd offset lw k1, (k1) mfc0 k0, CP0_CONTEXT # get context reg srl k0, k0, 1 # get pte offset and k0, k0, 0xff8 addu k1, k1, k0 # add in offset lw k0, 0(k1) # get even pte lw k1, 4(k1) # get odd pte srl k0, k0, 6 # convert to entrylo0 mtc0 k0, CP0_ENTRYLO0 # load it srl k1, k1, 6 # convert to entrylo1 mtc0 k1, CP0_ENTRYLO1 # load it nop # QED specified nops nop tlbwr # write random tlb entry nop # traditional nop eret # return from trap END(except_vec0_nevada) /* TLB refill, EXL == 0, R4[40]00/R5000 badvaddr hwbug version */ LEAF(except_vec0_r45k_bvahwbug) .set mips3 mfc0 k0, CP0_BADVADDR srl k0, k0, 22 lw k1, pgd_current # get pgd pointer sll k0, k0, 2 addu k1, k1, k0 mfc0 k0, CP0_CONTEXT lw k1, (k1) srl k0, k0, 1 and k0, k0, 0xff8 addu k1, k1, k0 lw k0, 0(k1) lw k1, 4(k1) nop /* XXX */ tlbp srl k0, k0, 6 mtc0 k0, CP0_ENTRYLO0 srl k1, k1, 6 mfc0 k0, CP0_INDEX mtc0 k1, CP0_ENTRYLO1 bltzl k0, 1f tlbwr 1: nop eret END(except_vec0_r45k_bvahwbug) #ifdef CONFIG_SMP /* TLB refill, EXL == 0, R4000 MP badvaddr hwbug version */ LEAF(except_vec0_r4k_mphwbug) .set mips3 mfc0 k0, CP0_BADVADDR srl k0, k0, 22 lw k1, pgd_current # get pgd pointer sll k0, k0, 2 addu k1, k1, k0 mfc0 k0, CP0_CONTEXT lw k1, (k1) srl k0, k0, 1 and k0, k0, 0xff8 addu k1, k1, k0 lw k0, 0(k1) lw k1, 4(k1) nop /* XXX */ tlbp srl k0, k0, 6 mtc0 k0, CP0_ENTRYLO0 srl k1, k1, 6 mfc0 k0, CP0_INDEX mtc0 k1, CP0_ENTRYLO1 bltzl k0, 1f tlbwr 1: nop eret END(except_vec0_r4k_mphwbug) #endif /* TLB refill, EXL == 0, R4000 UP 250MHZ entrylo[01] hwbug version */ LEAF(except_vec0_r4k_250MHZhwbug) .set mips3 mfc0 k0, 
CP0_BADVADDR srl k0, k0, 22 lw k1, pgd_current # get pgd pointer sll k0, k0, 2 addu k1, k1, k0 mfc0 k0, CP0_CONTEXT lw k1, (k1) srl k0, k0, 1 and k0, k0, 0xff8 addu k1, k1, k0 lw k0, 0(k1) lw k1, 4(k1) srl k0, k0, 6 mtc0 zero, CP0_ENTRYLO0 mtc0 k0, CP0_ENTRYLO0 srl k1, k1, 6 mtc0 zero, CP0_ENTRYLO1 mtc0 k1, CP0_ENTRYLO1 b 1f tlbwr 1: nop eret END(except_vec0_r4k_250MHZhwbug) #ifdef CONFIG_SMP /* TLB refill, EXL == 0, R4000 MP 250MHZ entrylo[01]+badvaddr bug version */ LEAF(except_vec0_r4k_MP250MHZhwbug) .set mips3 mfc0 k0, CP0_BADVADDR srl k0, k0, 22 lw k1, pgd_current # get pgd pointer sll k0, k0, 2 addu k1, k1, k0 mfc0 k0, CP0_CONTEXT lw k1, (k1) srl k0, k0, 1 and k0, k0, 0xff8 addu k1, k1, k0 lw k0, 0(k1) lw k1, 4(k1) nop /* XXX */ tlbp srl k0, k0, 6 mtc0 zero, CP0_ENTRYLO0 mtc0 k0, CP0_ENTRYLO0 mfc0 k0, CP0_INDEX srl k1, k1, 6 mtc0 zero, CP0_ENTRYLO1 mtc0 k1, CP0_ENTRYLO1 bltzl k0, 1f tlbwr 1: nop eret END(except_vec0_r4k_MP250MHZhwbug) #endif __FINIT /* * ABUSE of CPP macros 101. * * After this macro runs, the pte faulted on is * in register PTE, a ptr into the table in which * the pte belongs is in PTR. */ #ifdef CONFIG_SMP #define GET_PGD(scratch, ptr) \ mfc0 ptr, CP0_CONTEXT; \ la scratch, pgd_current;\ srl ptr, 23; \ sll ptr, 2; \ addu ptr, scratch, ptr; \ lw ptr, (ptr); #else #define GET_PGD(scratch, ptr) \ lw ptr, pgd_current; #endif #define LOAD_PTE(pte, ptr) \ GET_PGD(pte, ptr) \ mfc0 pte, CP0_BADVADDR; \ srl pte, pte, 22; \ sll pte, pte, 2; \ addu ptr, ptr, pte; \ mfc0 pte, CP0_BADVADDR; \ lw ptr, (ptr); \ srl pte, pte, 10; \ and pte, pte, 0xffc; \ addu ptr, ptr, pte; \ lw pte, (ptr); /* This places the even/odd pte pair in the page * table at PTR into ENTRYLO0 and ENTRYLO1 using * TMP as a scratch register. */ #define PTE_RELOAD(ptr, tmp) \ ori ptr, ptr, 0x4; \ xori ptr, ptr, 0x4; \ lw tmp, 4(ptr); \ lw ptr, 0(ptr); \ srl tmp, tmp, 6; \ mtc0 tmp, CP0_ENTRYLO1; \ srl ptr, ptr, 6; \ mtc0 ptr, CP0_ENTRYLO0; #define DO_FAULT(write) \ .set noat; \ SAVE_ALL; \ mfc0 a2, CP0_BADVADDR; \ STI; \ .set at; \ move a0, sp; \ jal do_page_fault; \ li a1, write; \ j ret_from_exception; \ nop; \ .set noat; /* Check is PTE is present, if not then jump to LABEL. * PTR points to the page table where this PTE is located, * when the macro is done executing PTE will be restored * with it's original value. */ #define PTE_PRESENT(pte, ptr, label) \ andi pte, pte, (_PAGE_PRESENT | _PAGE_READ); \ xori pte, pte, (_PAGE_PRESENT | _PAGE_READ); \ bnez pte, label; \ lw pte, (ptr); /* Make PTE valid, store result in PTR. */ #define PTE_MAKEVALID(pte, ptr) \ ori pte, pte, (_PAGE_VALID | _PAGE_ACCESSED); \ sw pte, (ptr); /* Check if PTE can be written to, if not branch to LABEL. * Regardless restore PTE with value from PTR when done. */ #define PTE_WRITABLE(pte, ptr, label) \ andi pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \ xori pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \ bnez pte, label; \ lw pte, (ptr); /* Make PTE writable, update software status bits as well, * then store at PTR. */ #define PTE_MAKEWRITE(pte, ptr) \ ori pte, pte, (_PAGE_ACCESSED | _PAGE_MODIFIED | \ _PAGE_VALID | _PAGE_DIRTY); \ sw pte, (ptr); .set noreorder /* * From the IDT errata for the QED RM5230 (Nevada), processor revision 1.0: * 2. A timing hazard exists for the TLBP instruction. * * stalling_instruction * TLBP * * The JTLB is being read for the TLBP throughout the stall generated by the * previous instruction. This is not really correct as the stalling instruction * can modify the address used to access the JTLB. 
The failure symptom is that * the TLBP instruction will use an address created for the stalling instruction * and not the address held in C0_ENHI and thus report the wrong results. * * The software work-around is to not allow the instruction preceding the TLBP * to stall - make it an NOP or some other instruction guaranteed not to stall. * * Errata 2 will not be fixed. This errata is also on the R5000. * * As if we MIPS hackers wouldn't know how to nop pipelines happy ... */ #define R5K_HAZARD nop /* * Note for many R4k variants tlb probes cannot be executed out * of the instruction cache else you get bogus results. */ .align 5 NESTED(handle_tlbl, PT_SIZE, sp) .set noat invalid_tlbl: #ifdef TLB_OPTIMIZE /* Test present bit in entry. */ LOAD_PTE(k0, k1) R5K_HAZARD tlbp PTE_PRESENT(k0, k1, nopage_tlbl) PTE_MAKEVALID(k0, k1) PTE_RELOAD(k1, k0) nop b 1f tlbwi 1: nop .set mips3 eret .set mips0 #endif nopage_tlbl: DO_FAULT(0) END(handle_tlbl) .align 5 NESTED(handle_tlbs, PT_SIZE, sp) .set noat #ifdef TLB_OPTIMIZE LOAD_PTE(k0, k1) R5K_HAZARD tlbp # find faulting entry PTE_WRITABLE(k0, k1, nopage_tlbs) PTE_MAKEWRITE(k0, k1) PTE_RELOAD(k1, k0) nop b 1f tlbwi 1: nop .set mips3 eret .set mips0 #endif nopage_tlbs: DO_FAULT(1) END(handle_tlbs) .align 5 NESTED(handle_mod, PT_SIZE, sp) .set noat #ifdef TLB_OPTIMIZE LOAD_PTE(k0, k1) R5K_HAZARD tlbp # find faulting entry andi k0, k0, _PAGE_WRITE beqz k0, nowrite_mod lw k0, (k1) /* Present and writable bits set, set accessed and dirty bits. */ PTE_MAKEWRITE(k0, k1) #if 0 ori k0, k0, (_PAGE_ACCESSED | _PAGE_DIRTY) sw k0, (k1) #endif /* Now reload the entry into the tlb. */ PTE_RELOAD(k1, k0) nop b 1f tlbwi 1: nop .set mips3 eret .set mips0 #endif nowrite_mod: DO_FAULT(1) END(handle_mod) |
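The refill handlers above (except_vec0_r4000 and its bug-workaround variants) all perform the same two-level page-table walk; they differ only in hazard avoidance. A C rendering of that walk may help when reading the assembly. The fake_pgd/fake_ptes tables, the cp0 struct and main() below are hypothetical stand-ins for pgd_current and the CP0 registers so the sketch compiles on its own; only the index arithmetic and the ">> 6" PTE-to-EntryLo conversion mirror the handlers.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT   12
#define PGDIR_SHIFT  22                  /* each pgd entry covers 4 MB */
#define PTRS_PER_PGD 1024
#define PTRS_PER_PTE 1024

static uint32_t  fake_ptes[PTRS_PER_PTE];   /* one pte page             */
static uint32_t *fake_pgd[PTRS_PER_PGD];    /* stands in for pgd_current */

/* Simulated CP0 state; the real handlers use mfc0/mtc0. */
static struct {
	uint32_t badvaddr, entrylo0, entrylo1;
} cp0;

static void tlb_refill(void)
{
	uint32_t *pgd = fake_pgd[cp0.badvaddr >> PGDIR_SHIFT];  /* "srl k0, 22" */

	/*
	 * The assembly takes the pte offset from CP0_CONTEXT ("srl 1; and
	 * 0xff8"), where the hardware has already deposited BadVPN2; the
	 * line below computes the same thing from BadVAddr: the index of
	 * the even pte of the even/odd pair.
	 */
	uint32_t idx = (cp0.badvaddr >> PAGE_SHIFT) & (PTRS_PER_PTE - 1) & ~1u;

	cp0.entrylo0 = pgd[idx] >> 6;       /* even page of the pair */
	cp0.entrylo1 = pgd[idx + 1] >> 6;   /* odd page of the pair  */
	/* the real handler then issues tlbwr and eret */
}

int main(void)
{
	/* Pretend the page pair 0x00402000/0x00403000 is mapped:
	 * pgd slot 1, even/odd ptes at indices 2 and 3 of that pte page. */
	fake_pgd[0x00403000u >> PGDIR_SHIFT] = fake_ptes;
	fake_ptes[2] = 0x11111u << 6;
	fake_ptes[3] = 0x22222u << 6;

	cp0.badvaddr = 0x00403000u;
	tlb_refill();
	printf("entrylo0=%#x entrylo1=%#x\n",
	       (unsigned)cp0.entrylo0, (unsigned)cp0.entrylo1);
	return 0;
}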
From: James S. <jsi...@us...> - 2001-10-22 20:27:47
|
Update of /cvsroot/linux-mips/linux/include/linux In directory usw-pr-cvs1:/tmp/cvs-serv30456 Removed Files: module.h Log Message: No longer needed. --- module.h DELETED --- |
From: Paul M. <pm...@mv...> - 2001-10-22 19:25:31
|
On Mon, Oct 22, 2001 at 12:21:42PM -0700, James Simmons wrote:
> >Modified Files:
> > Makefile andes.c mips32.c r2300.c r4xx0.c r5432.c rm7k.c sb1.c
> >Log Message:
> >Further syncing with OSS 2.4.10. Massive restructuring of TLB handling.
>
> Thanks. I have been playing around with the tulip driver this morning.
> Anyone here want to test out a tulip card on another type of mips device?
> That would be great. I plan to also change the qube over to the new time
> and new pci code. It will give me a better feel for the new code.
>
Just happen to have a tulip card sitting next to me, will stick it in the ITE board and see if it behaves properly.

Regards,

--
Paul Mundt <pm...@mv...> MontaVista Software, Inc. |
From: James S. <jsi...@tr...> - 2001-10-22 19:21:47
|
>Modified Files:
> Makefile andes.c mips32.c r2300.c r4xx0.c r5432.c rm7k.c sb1.c
>Log Message:
>Further syncing with OSS 2.4.10. Massive restructuring of TLB handling.

Thanks. I have been playing around with the tulip driver this morning.
Anyone here want to test out a tulip card on another type of mips device?
That would be great. I plan to also change the qube over to the new time
and new pci code. It will give me a better feel for the new code. |
From: Paul M. <le...@us...> - 2001-10-22 19:20:14
|
Update of /cvsroot/linux-mips/linux/drivers/char In directory usw-pr-cvs1:/tmp/cvs-serv12929 Modified Files: serial_tx3912.c Log Message: Oops.. accidentally nuked the serial.h include. Index: serial_tx3912.c =================================================================== RCS file: /cvsroot/linux-mips/linux/drivers/char/serial_tx3912.c,v retrieving revision 1.4 retrieving revision 1.5 diff -u -d -r1.4 -r1.5 --- serial_tx3912.c 2001/10/22 19:16:45 1.4 +++ serial_tx3912.c 2001/10/22 19:20:09 1.5 @@ -18,6 +18,7 @@ #include <linux/ptrace.h> #include <linux/init.h> #include <linux/console.h> +#include <linux/serial.h> #include <linux/fs.h> #include <linux/mm.h> #include <linux/malloc.h> |