From: Marcelo T. <mto...@re...> - 2008-05-02 17:43:40
Add three PCI bridges to support 128 slots.

Changes since v1:
- Remove I/O address range "support" (so the standard PCI I/O space is used).
- Verify that there are no special quirks for the 82801 PCI bridge.
- Introduce a separate flat IRQ mapping function for non-SPARC targets.
--
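The flat IRQ mapping mentioned above is the same swizzle encoded by the _PRT tables in the patch that follows (slot 0 pin 0 maps to LNKD, slot 1 pin 0 to LNKA, and so on). A minimal Python model of that mapping, with function and variable names of my own choosing for illustration:

```python
# Model of the flat PCI IRQ swizzle mirrored by the _PRT tables below:
# link index = (slot + pin + 3) % 4, where indices 0..3 mean LNKA..LNKD.
LINKS = ["LNKA", "LNKB", "LNKC", "LNKD"]

def pci_slot_link(slot, pin):
    """Return the LNKx interrupt router for a device slot and INTx pin (0-3)."""
    return LINKS[(slot + pin + 3) % 4]
```

For example, `pci_slot_link(0, 0)` yields `"LNKD"` and `pci_slot_link(1, 0)` yields `"LNKA"`, matching the first two _PRT slot entries.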
From: Marcelo T. <mto...@re...> - 2008-05-02 17:43:37
Add 3 PCI bridges to the ACPI table: - Move IRQ routing, slot device and GPE processing to separate files which can be included from acpi-dsdt.dsl. - Add _SUN methods to every slot device so as to avoid collisions in OS handling. - Fix copy&paste typo in slot devices 8/9 and 24/25. This table breaks PCI hotplug for older userspace, hopefully not an issue (trivial enough to upgrade the BIOS). Signed-off-by: Marcelo Tosatti <mto...@re...> Index: kvm-userspace.pci3/bios/acpi-dsdt.dsl =================================================================== --- kvm-userspace.pci3.orig/bios/acpi-dsdt.dsl +++ kvm-userspace.pci3/bios/acpi-dsdt.dsl @@ -208,218 +208,29 @@ DefinitionBlock ( Name (_HID, EisaId ("PNP0A03")) Name (_ADR, 0x00) Name (_UID, 1) - Name(_PRT, Package() { - /* PCI IRQ routing table, example from ACPI 2.0a specification, - section 6.2.8.1 */ - /* Note: we provide the same info as the PCI routing - table of the Bochs BIOS */ - - // PCI Slot 0 - Package() {0x0000ffff, 0, LNKD, 0}, - Package() {0x0000ffff, 1, LNKA, 0}, - Package() {0x0000ffff, 2, LNKB, 0}, - Package() {0x0000ffff, 3, LNKC, 0}, - - // PCI Slot 1 - Package() {0x0001ffff, 0, LNKA, 0}, - Package() {0x0001ffff, 1, LNKB, 0}, - Package() {0x0001ffff, 2, LNKC, 0}, - Package() {0x0001ffff, 3, LNKD, 0}, - - // PCI Slot 2 - Package() {0x0002ffff, 0, LNKB, 0}, - Package() {0x0002ffff, 1, LNKC, 0}, - Package() {0x0002ffff, 2, LNKD, 0}, - Package() {0x0002ffff, 3, LNKA, 0}, - - // PCI Slot 3 - Package() {0x0003ffff, 0, LNKC, 0}, - Package() {0x0003ffff, 1, LNKD, 0}, - Package() {0x0003ffff, 2, LNKA, 0}, - Package() {0x0003ffff, 3, LNKB, 0}, - - // PCI Slot 4 - Package() {0x0004ffff, 0, LNKD, 0}, - Package() {0x0004ffff, 1, LNKA, 0}, - Package() {0x0004ffff, 2, LNKB, 0}, - Package() {0x0004ffff, 3, LNKC, 0}, - - // PCI Slot 5 - Package() {0x0005ffff, 0, LNKA, 0}, - Package() {0x0005ffff, 1, LNKB, 0}, - Package() {0x0005ffff, 2, LNKC, 0}, - Package() {0x0005ffff, 3, LNKD, 0}, - - // PCI Slot 6 - Package() 
{0x0006ffff, 0, LNKB, 0}, - Package() {0x0006ffff, 1, LNKC, 0}, - Package() {0x0006ffff, 2, LNKD, 0}, - Package() {0x0006ffff, 3, LNKA, 0}, - - // PCI Slot 7 - Package() {0x0007ffff, 0, LNKC, 0}, - Package() {0x0007ffff, 1, LNKD, 0}, - Package() {0x0007ffff, 2, LNKA, 0}, - Package() {0x0007ffff, 3, LNKB, 0}, - - // PCI Slot 8 - Package() {0x0008ffff, 0, LNKD, 0}, - Package() {0x0008ffff, 1, LNKA, 0}, - Package() {0x0008ffff, 2, LNKB, 0}, - Package() {0x0008ffff, 3, LNKC, 0}, - - // PCI Slot 9 - Package() {0x0008ffff, 0, LNKA, 0}, - Package() {0x0008ffff, 1, LNKB, 0}, - Package() {0x0008ffff, 2, LNKC, 0}, - Package() {0x0008ffff, 3, LNKD, 0}, - - // PCI Slot 10 - Package() {0x000affff, 0, LNKB, 0}, - Package() {0x000affff, 1, LNKC, 0}, - Package() {0x000affff, 2, LNKD, 0}, - Package() {0x000affff, 3, LNKA, 0}, - - // PCI Slot 11 - Package() {0x000bffff, 0, LNKC, 0}, - Package() {0x000bffff, 1, LNKD, 0}, - Package() {0x000bffff, 2, LNKA, 0}, - Package() {0x000bffff, 3, LNKB, 0}, - - // PCI Slot 12 - Package() {0x000cffff, 0, LNKD, 0}, - Package() {0x000cffff, 1, LNKA, 0}, - Package() {0x000cffff, 2, LNKB, 0}, - Package() {0x000cffff, 3, LNKC, 0}, - - // PCI Slot 13 - Package() {0x000dffff, 0, LNKA, 0}, - Package() {0x000dffff, 1, LNKB, 0}, - Package() {0x000dffff, 2, LNKC, 0}, - Package() {0x000dffff, 3, LNKD, 0}, - - // PCI Slot 14 - Package() {0x000effff, 0, LNKB, 0}, - Package() {0x000effff, 1, LNKC, 0}, - Package() {0x000effff, 2, LNKD, 0}, - Package() {0x000effff, 3, LNKA, 0}, - - // PCI Slot 15 - Package() {0x000fffff, 0, LNKC, 0}, - Package() {0x000fffff, 1, LNKD, 0}, - Package() {0x000fffff, 2, LNKA, 0}, - Package() {0x000fffff, 3, LNKB, 0}, - - // PCI Slot 16 - Package() {0x0010ffff, 0, LNKD, 0}, - Package() {0x0010ffff, 1, LNKA, 0}, - Package() {0x0010ffff, 2, LNKB, 0}, - Package() {0x0010ffff, 3, LNKC, 0}, - - // PCI Slot 17 - Package() {0x0011ffff, 0, LNKA, 0}, - Package() {0x0011ffff, 1, LNKB, 0}, - Package() {0x0011ffff, 2, LNKC, 0}, - Package() 
{0x0011ffff, 3, LNKD, 0}, - - // PCI Slot 18 - Package() {0x0012ffff, 0, LNKB, 0}, - Package() {0x0012ffff, 1, LNKC, 0}, - Package() {0x0012ffff, 2, LNKD, 0}, - Package() {0x0012ffff, 3, LNKA, 0}, - - // PCI Slot 19 - Package() {0x0013ffff, 0, LNKC, 0}, - Package() {0x0013ffff, 1, LNKD, 0}, - Package() {0x0013ffff, 2, LNKA, 0}, - Package() {0x0013ffff, 3, LNKB, 0}, - - // PCI Slot 20 - Package() {0x0014ffff, 0, LNKD, 0}, - Package() {0x0014ffff, 1, LNKA, 0}, - Package() {0x0014ffff, 2, LNKB, 0}, - Package() {0x0014ffff, 3, LNKC, 0}, - - // PCI Slot 21 - Package() {0x0015ffff, 0, LNKA, 0}, - Package() {0x0015ffff, 1, LNKB, 0}, - Package() {0x0015ffff, 2, LNKC, 0}, - Package() {0x0015ffff, 3, LNKD, 0}, - - // PCI Slot 22 - Package() {0x0016ffff, 0, LNKB, 0}, - Package() {0x0016ffff, 1, LNKC, 0}, - Package() {0x0016ffff, 2, LNKD, 0}, - Package() {0x0016ffff, 3, LNKA, 0}, - - // PCI Slot 23 - Package() {0x0017ffff, 0, LNKC, 0}, - Package() {0x0017ffff, 1, LNKD, 0}, - Package() {0x0017ffff, 2, LNKA, 0}, - Package() {0x0017ffff, 3, LNKB, 0}, - - // PCI Slot 24 - Package() {0x0018ffff, 0, LNKD, 0}, - Package() {0x0018ffff, 1, LNKA, 0}, - Package() {0x0018ffff, 2, LNKB, 0}, - Package() {0x0018ffff, 3, LNKC, 0}, - - // PCI Slot 25 - Package() {0x0018ffff, 0, LNKA, 0}, - Package() {0x0018ffff, 1, LNKB, 0}, - Package() {0x0018ffff, 2, LNKC, 0}, - Package() {0x0018ffff, 3, LNKD, 0}, - - // PCI Slot 26 - Package() {0x001affff, 0, LNKB, 0}, - Package() {0x001affff, 1, LNKC, 0}, - Package() {0x001affff, 2, LNKD, 0}, - Package() {0x001affff, 3, LNKA, 0}, - - // PCI Slot 27 - Package() {0x001bffff, 0, LNKC, 0}, - Package() {0x001bffff, 1, LNKD, 0}, - Package() {0x001bffff, 2, LNKA, 0}, - Package() {0x001bffff, 3, LNKB, 0}, - - // PCI Slot 28 - Package() {0x001cffff, 0, LNKD, 0}, - Package() {0x001cffff, 1, LNKA, 0}, - Package() {0x001cffff, 2, LNKB, 0}, - Package() {0x001cffff, 3, LNKC, 0}, - - // PCI Slot 29 - Package() {0x001dffff, 0, LNKA, 0}, - Package() {0x001dffff, 1, LNKB, 
0}, - Package() {0x001dffff, 2, LNKC, 0}, - Package() {0x001dffff, 3, LNKD, 0}, - - // PCI Slot 30 - Package() {0x001effff, 0, LNKB, 0}, - Package() {0x001effff, 1, LNKC, 0}, - Package() {0x001effff, 2, LNKD, 0}, - Package() {0x001effff, 3, LNKA, 0}, - - // PCI Slot 31 - Package() {0x001fffff, 0, LNKC, 0}, - Package() {0x001fffff, 1, LNKD, 0}, - Package() {0x001fffff, 2, LNKA, 0}, - Package() {0x001fffff, 3, LNKB, 0}, - }) + + Include ("acpi-irq-routing.dsl") OperationRegion(PCST, SystemIO, 0xae00, 0x08) Field (PCST, DWordAcc, NoLock, WriteAsZeros) - { + { PCIU, 32, PCID, 32, - } - + } OperationRegion(SEJ, SystemIO, 0xae08, 0x04) Field (SEJ, DWordAcc, NoLock, WriteAsZeros) { B0EJ, 32, } + Device (S0) { // Slot 0 + Name (_ADR, 0x00000000) + Method (_EJ0,1) { + Store(0x1, B0EJ) + Return (0x0) + } + } + Device (S1) { // Slot 1 Name (_ADR, 0x00010000) Method (_EJ0,1) { @@ -436,28 +247,70 @@ DefinitionBlock ( } } - Device (S3) { // Slot 3 + Device (S3) { // Slot 3, PCI-to-PCI bridge Name (_ADR, 0x00030000) - Method (_EJ0,1) { - Store (0x8, B0EJ) - Return (0x0) + Include ("acpi-irq-routing.dsl") + + OperationRegion(PCST, SystemIO, 0xae0c, 0x08) + Field (PCST, DWordAcc, NoLock, WriteAsZeros) + { + PCIU, 32, + PCID, 32, } + + OperationRegion(SEJ, SystemIO, 0xae14, 0x04) + Field (SEJ, DWordAcc, NoLock, WriteAsZeros) + { + B1EJ, 32, + } + + Name (SUN1, 30) + Alias (\_SB.PCI0.S3.B1EJ, BEJ) + Include ("acpi-pci-slots.dsl") } - Device (S4) { // Slot 4 + Device (S4) { // Slot 4, PCI-to-PCI bridge Name (_ADR, 0x00040000) - Method (_EJ0,1) { - Store(0x10, B0EJ) - Return (0x0) + Include ("acpi-irq-routing.dsl") + + OperationRegion(PCST, SystemIO, 0xae18, 0x08) + Field (PCST, DWordAcc, NoLock, WriteAsZeros) + { + PCIU, 32, + PCID, 32, + } + + OperationRegion(SEJ, SystemIO, 0xae20, 0x04) + Field (SEJ, DWordAcc, NoLock, WriteAsZeros) + { + B2EJ, 32, } + + Name (SUN1, 62) + Alias (\_SB.PCI0.S4.B2EJ, BEJ) + Include ("acpi-pci-slots.dsl") } - Device (S5) { // Slot 5 + Device (S5) { // 
Slot 5, PCI-to-PCI bridge Name (_ADR, 0x00050000) - Method (_EJ0,1) { - Store(0x20, B0EJ) - Return (0x0) + Include ("acpi-irq-routing.dsl") + + OperationRegion(PCST, SystemIO, 0xae24, 0x08) + Field (PCST, DWordAcc, NoLock, WriteAsZeros) + { + PCIU, 32, + PCID, 32, } + + OperationRegion(SEJ, SystemIO, 0xae2c, 0x04) + Field (SEJ, DWordAcc, NoLock, WriteAsZeros) + { + B3EJ, 32, + } + + Name (SUN1, 94) + Alias (\_SB.PCI0.S5.B3EJ, BEJ) + Include ("acpi-pci-slots.dsl") } Device (S6) { // Slot 6 @@ -1248,266 +1101,156 @@ DefinitionBlock ( Return(0x01) } Method(_L01) { - /* Up status */ - If (And(\_SB.PCI0.PCIU, 0x2)) { - Notify(\_SB.PCI0.S1, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x4)) { - Notify(\_SB.PCI0.S2, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x8)) { - Notify(\_SB.PCI0.S3, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x10)) { - Notify(\_SB.PCI0.S4, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x20)) { - Notify(\_SB.PCI0.S5, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x40)) { - Notify(\_SB.PCI0.S6, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x80)) { - Notify(\_SB.PCI0.S7, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x0100)) { - Notify(\_SB.PCI0.S8, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x0200)) { - Notify(\_SB.PCI0.S9, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x0400)) { - Notify(\_SB.PCI0.S10, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x0800)) { - Notify(\_SB.PCI0.S11, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x1000)) { - Notify(\_SB.PCI0.S12, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x2000)) { - Notify(\_SB.PCI0.S13, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x4000)) { - Notify(\_SB.PCI0.S14, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x8000)) { - Notify(\_SB.PCI0.S15, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x10000)) { - Notify(\_SB.PCI0.S16, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x20000)) { - Notify(\_SB.PCI0.S17, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x40000)) { - Notify(\_SB.PCI0.S18, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x80000)) { - Notify(\_SB.PCI0.S19, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x100000)) { - Notify(\_SB.PCI0.S20, 0x1) 
- } - - If (And(\_SB.PCI0.PCIU, 0x200000)) { - Notify(\_SB.PCI0.S21, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x400000)) { - Notify(\_SB.PCI0.S22, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x800000)) { - Notify(\_SB.PCI0.S23, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x1000000)) { - Notify(\_SB.PCI0.S24, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x2000000)) { - Notify(\_SB.PCI0.S25, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x4000000)) { - Notify(\_SB.PCI0.S26, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x8000000)) { - Notify(\_SB.PCI0.S27, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x10000000)) { - Notify(\_SB.PCI0.S28, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x20000000)) { - Notify(\_SB.PCI0.S29, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x40000000)) { - Notify(\_SB.PCI0.S30, 0x1) - } - - If (And(\_SB.PCI0.PCIU, 0x80000000)) { - Notify(\_SB.PCI0.S31, 0x1) - } - - /* Down status */ - If (And(\_SB.PCI0.PCID, 0x2)) { - Notify(\_SB.PCI0.S1, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x4)) { - Notify(\_SB.PCI0.S2, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x8)) { - Notify(\_SB.PCI0.S3, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x10)) { - Notify(\_SB.PCI0.S4, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x20)) { - Notify(\_SB.PCI0.S5, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x40)) { - Notify(\_SB.PCI0.S6, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x80)) { - Notify(\_SB.PCI0.S7, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x0100)) { - Notify(\_SB.PCI0.S8, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x0200)) { - Notify(\_SB.PCI0.S9, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x0400)) { - Notify(\_SB.PCI0.S10, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x0800)) { - Notify(\_SB.PCI0.S11, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x1000)) { - Notify(\_SB.PCI0.S12, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x2000)) { - Notify(\_SB.PCI0.S13, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x4000)) { - Notify(\_SB.PCI0.S14, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x8000)) { - Notify(\_SB.PCI0.S15, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x10000)) { - Notify(\_SB.PCI0.S16, 0x3) - } - - If 
(And(\_SB.PCI0.PCID, 0x20000)) { - Notify(\_SB.PCI0.S17, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x40000)) { - Notify(\_SB.PCI0.S18, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x80000)) { - Notify(\_SB.PCI0.S19, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x100000)) { - Notify(\_SB.PCI0.S20, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x200000)) { - Notify(\_SB.PCI0.S21, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x400000)) { - Notify(\_SB.PCI0.S22, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x800000)) { - Notify(\_SB.PCI0.S23, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x1000000)) { - Notify(\_SB.PCI0.S24, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x2000000)) { - Notify(\_SB.PCI0.S25, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x4000000)) { - Notify(\_SB.PCI0.S26, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x8000000)) { - Notify(\_SB.PCI0.S27, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x10000000)) { - Notify(\_SB.PCI0.S28, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x20000000)) { - Notify(\_SB.PCI0.S29, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x40000000)) { - Notify(\_SB.PCI0.S30, 0x3) - } - - If (And(\_SB.PCI0.PCID, 0x80000000)) { - Notify(\_SB.PCI0.S31, 0x3) - } - - Return(0x01) + Alias (\_SB.PCI0.PCIU, UP) + Alias (\_SB.PCI0.PCID, DOWN) + Alias (\_SB.PCI0.S0, S0) + Alias (\_SB.PCI0.S1, S1) + Alias (\_SB.PCI0.S2, S2) + Alias (\_SB.PCI0.S3, S3) + Alias (\_SB.PCI0.S4, S4) + Alias (\_SB.PCI0.S5, S5) + Alias (\_SB.PCI0.S6, S6) + Alias (\_SB.PCI0.S7, S7) + Alias (\_SB.PCI0.S8, S8) + Alias (\_SB.PCI0.S9, S9) + Alias (\_SB.PCI0.S10, S10) + Alias (\_SB.PCI0.S11, S11) + Alias (\_SB.PCI0.S12, S12) + Alias (\_SB.PCI0.S13, S13) + Alias (\_SB.PCI0.S14, S14) + Alias (\_SB.PCI0.S15, S15) + Alias (\_SB.PCI0.S16, S16) + Alias (\_SB.PCI0.S17, S17) + Alias (\_SB.PCI0.S18, S18) + Alias (\_SB.PCI0.S19, S19) + Alias (\_SB.PCI0.S20, S20) + Alias (\_SB.PCI0.S21, S21) + Alias (\_SB.PCI0.S22, S22) + Alias (\_SB.PCI0.S23, S23) + Alias (\_SB.PCI0.S24, S24) + Alias (\_SB.PCI0.S25, S25) + Alias (\_SB.PCI0.S26, S26) + Alias (\_SB.PCI0.S27, S27) + Alias 
(\_SB.PCI0.S28, S28) + Alias (\_SB.PCI0.S29, S29) + Alias (\_SB.PCI0.S30, S30) + Alias (\_SB.PCI0.S31, S31) + Include ("acpi-hotplug-gpe.dsl") + Return (0x01) } Method(_L02) { - Return(0x01) + Alias (\_SB.PCI0.S3.PCIU, UP) + Alias (\_SB.PCI0.S3.PCID, DOWN) + Alias (\_SB.PCI0.S3.S0, S0) + Alias (\_SB.PCI0.S3.S1, S1) + Alias (\_SB.PCI0.S3.S2, S2) + Alias (\_SB.PCI0.S3.S3, S3) + Alias (\_SB.PCI0.S3.S4, S4) + Alias (\_SB.PCI0.S3.S5, S5) + Alias (\_SB.PCI0.S3.S6, S6) + Alias (\_SB.PCI0.S3.S7, S7) + Alias (\_SB.PCI0.S3.S8, S8) + Alias (\_SB.PCI0.S3.S9, S9) + Alias (\_SB.PCI0.S3.S10, S10) + Alias (\_SB.PCI0.S3.S11, S11) + Alias (\_SB.PCI0.S3.S12, S12) + Alias (\_SB.PCI0.S3.S13, S13) + Alias (\_SB.PCI0.S3.S14, S14) + Alias (\_SB.PCI0.S3.S15, S15) + Alias (\_SB.PCI0.S3.S16, S16) + Alias (\_SB.PCI0.S3.S17, S17) + Alias (\_SB.PCI0.S3.S18, S18) + Alias (\_SB.PCI0.S3.S19, S19) + Alias (\_SB.PCI0.S3.S20, S20) + Alias (\_SB.PCI0.S3.S21, S21) + Alias (\_SB.PCI0.S3.S22, S22) + Alias (\_SB.PCI0.S3.S23, S23) + Alias (\_SB.PCI0.S3.S24, S24) + Alias (\_SB.PCI0.S3.S25, S25) + Alias (\_SB.PCI0.S3.S26, S26) + Alias (\_SB.PCI0.S3.S27, S27) + Alias (\_SB.PCI0.S3.S28, S28) + Alias (\_SB.PCI0.S3.S29, S29) + Alias (\_SB.PCI0.S3.S30, S30) + Alias (\_SB.PCI0.S3.S31, S31) + Include ("acpi-hotplug-gpe.dsl") + Return (0x01) } Method(_L03) { - Return(0x01) + Alias (\_SB.PCI0.S4.PCIU, UP) + Alias (\_SB.PCI0.S4.PCID, DOWN) + Alias (\_SB.PCI0.S4.S0, S0) + Alias (\_SB.PCI0.S4.S1, S1) + Alias (\_SB.PCI0.S4.S2, S2) + Alias (\_SB.PCI0.S4.S3, S3) + Alias (\_SB.PCI0.S4.S4, S4) + Alias (\_SB.PCI0.S4.S5, S5) + Alias (\_SB.PCI0.S4.S6, S6) + Alias (\_SB.PCI0.S4.S7, S7) + Alias (\_SB.PCI0.S4.S8, S8) + Alias (\_SB.PCI0.S4.S9, S9) + Alias (\_SB.PCI0.S4.S10, S10) + Alias (\_SB.PCI0.S4.S11, S11) + Alias (\_SB.PCI0.S4.S12, S12) + Alias (\_SB.PCI0.S4.S13, S13) + Alias (\_SB.PCI0.S4.S14, S14) + Alias (\_SB.PCI0.S4.S15, S15) + Alias (\_SB.PCI0.S4.S16, S16) + Alias (\_SB.PCI0.S4.S17, S17) + Alias (\_SB.PCI0.S4.S18, S18) + 
Alias (\_SB.PCI0.S4.S19, S19) + Alias (\_SB.PCI0.S4.S20, S20) + Alias (\_SB.PCI0.S4.S21, S21) + Alias (\_SB.PCI0.S4.S22, S22) + Alias (\_SB.PCI0.S4.S23, S23) + Alias (\_SB.PCI0.S4.S24, S24) + Alias (\_SB.PCI0.S4.S25, S25) + Alias (\_SB.PCI0.S4.S26, S26) + Alias (\_SB.PCI0.S4.S27, S27) + Alias (\_SB.PCI0.S4.S28, S28) + Alias (\_SB.PCI0.S4.S29, S29) + Alias (\_SB.PCI0.S4.S30, S30) + Alias (\_SB.PCI0.S4.S31, S31) + Include ("acpi-hotplug-gpe.dsl") + Return (0x01) } Method(_L04) { - Return(0x01) + Alias (\_SB.PCI0.S5.PCIU, UP) + Alias (\_SB.PCI0.S5.PCID, DOWN) + Alias (\_SB.PCI0.S5.S0, S0) + Alias (\_SB.PCI0.S5.S1, S1) + Alias (\_SB.PCI0.S5.S2, S2) + Alias (\_SB.PCI0.S5.S3, S3) + Alias (\_SB.PCI0.S5.S4, S4) + Alias (\_SB.PCI0.S5.S5, S5) + Alias (\_SB.PCI0.S5.S6, S6) + Alias (\_SB.PCI0.S5.S7, S7) + Alias (\_SB.PCI0.S5.S8, S8) + Alias (\_SB.PCI0.S5.S9, S9) + Alias (\_SB.PCI0.S5.S10, S10) + Alias (\_SB.PCI0.S5.S11, S11) + Alias (\_SB.PCI0.S5.S12, S12) + Alias (\_SB.PCI0.S5.S13, S13) + Alias (\_SB.PCI0.S5.S14, S14) + Alias (\_SB.PCI0.S5.S15, S15) + Alias (\_SB.PCI0.S5.S16, S16) + Alias (\_SB.PCI0.S5.S17, S17) + Alias (\_SB.PCI0.S5.S18, S18) + Alias (\_SB.PCI0.S5.S19, S19) + Alias (\_SB.PCI0.S5.S20, S20) + Alias (\_SB.PCI0.S5.S21, S21) + Alias (\_SB.PCI0.S5.S22, S22) + Alias (\_SB.PCI0.S5.S23, S23) + Alias (\_SB.PCI0.S5.S24, S24) + Alias (\_SB.PCI0.S5.S25, S25) + Alias (\_SB.PCI0.S5.S26, S26) + Alias (\_SB.PCI0.S5.S27, S27) + Alias (\_SB.PCI0.S5.S28, S28) + Alias (\_SB.PCI0.S5.S29, S29) + Alias (\_SB.PCI0.S5.S30, S30) + Alias (\_SB.PCI0.S5.S31, S31) + Include ("acpi-hotplug-gpe.dsl") + Return (0x01) } Method(_L05) { Return(0x01) Index: kvm-userspace.pci3/bios/acpi-hotplug-gpe.dsl =================================================================== --- /dev/null +++ kvm-userspace.pci3/bios/acpi-hotplug-gpe.dsl @@ -0,0 +1,257 @@ + /* Up status */ + If (And(UP, 0x1)) { + Notify(S0, 0x1) + } + + If (And(UP, 0x2)) { + Notify(S1, 0x1) + } + + If (And(UP, 0x4)) { + Notify(S2, 0x1) 
+ } + + If (And(UP, 0x8)) { + Notify(S3, 0x1) + } + + If (And(UP, 0x10)) { + Notify(S4, 0x1) + } + + If (And(UP, 0x20)) { + Notify(S5, 0x1) + } + + If (And(UP, 0x40)) { + Notify(S6, 0x1) + } + + If (And(UP, 0x80)) { + Notify(S7, 0x1) + } + + If (And(UP, 0x0100)) { + Notify(S8, 0x1) + } + + If (And(UP, 0x0200)) { + Notify(S9, 0x1) + } + + If (And(UP, 0x0400)) { + Notify(S10, 0x1) + } + + If (And(UP, 0x0800)) { + Notify(S11, 0x1) + } + + If (And(UP, 0x1000)) { + Notify(S12, 0x1) + } + + If (And(UP, 0x2000)) { + Notify(S13, 0x1) + } + + If (And(UP, 0x4000)) { + Notify(S14, 0x1) + } + + If (And(UP, 0x8000)) { + Notify(S15, 0x1) + } + + If (And(UP, 0x10000)) { + Notify(S16, 0x1) + } + + If (And(UP, 0x20000)) { + Notify(S17, 0x1) + } + + If (And(UP, 0x40000)) { + Notify(S18, 0x1) + } + + If (And(UP, 0x80000)) { + Notify(S19, 0x1) + } + + If (And(UP, 0x100000)) { + Notify(S20, 0x1) + } + + If (And(UP, 0x200000)) { + Notify(S21, 0x1) + } + + If (And(UP, 0x400000)) { + Notify(S22, 0x1) + } + + If (And(UP, 0x800000)) { + Notify(S23, 0x1) + } + + If (And(UP, 0x1000000)) { + Notify(S24, 0x1) + } + + If (And(UP, 0x2000000)) { + Notify(S25, 0x1) + } + + If (And(UP, 0x4000000)) { + Notify(S26, 0x1) + } + + If (And(UP, 0x8000000)) { + Notify(S27, 0x1) + } + + If (And(UP, 0x10000000)) { + Notify(S28, 0x1) + } + + If (And(UP, 0x20000000)) { + Notify(S29, 0x1) + } + + If (And(UP, 0x40000000)) { + Notify(S30, 0x1) + } + + If (And(UP, 0x80000000)) { + Notify(S31, 0x1) + } + + /* Down status */ + If (And(DOWN, 0x1)) { + Notify(S0, 0x3) + } + + If (And(DOWN, 0x2)) { + Notify(S1, 0x3) + } + + If (And(DOWN, 0x4)) { + Notify(S2, 0x3) + } + + If (And(DOWN, 0x8)) { + Notify(S3, 0x3) + } + + If (And(DOWN, 0x10)) { + Notify(S4, 0x3) + } + + If (And(DOWN, 0x20)) { + Notify(S5, 0x3) + } + + If (And(DOWN, 0x40)) { + Notify(S6, 0x3) + } + + If (And(DOWN, 0x80)) { + Notify(S7, 0x3) + } + + If (And(DOWN, 0x0100)) { + Notify(S8, 0x3) + } + + If (And(DOWN, 0x0200)) { + Notify(S9, 0x3) + } + + If 
(And(DOWN, 0x0400)) { + Notify(S10, 0x3) + } + + If (And(DOWN, 0x0800)) { + Notify(S11, 0x3) + } + + If (And(DOWN, 0x1000)) { + Notify(S12, 0x3) + } + + If (And(DOWN, 0x2000)) { + Notify(S13, 0x3) + } + + If (And(DOWN, 0x4000)) { + Notify(S14, 0x3) + } + + If (And(DOWN, 0x8000)) { + Notify(S15, 0x3) + } + + If (And(DOWN, 0x10000)) { + Notify(S16, 0x3) + } + + If (And(DOWN, 0x20000)) { + Notify(S17, 0x3) + } + + If (And(DOWN, 0x40000)) { + Notify(S18, 0x3) + } + + If (And(DOWN, 0x80000)) { + Notify(S19, 0x3) + } + + If (And(DOWN, 0x100000)) { + Notify(S20, 0x3) + } + + If (And(DOWN, 0x200000)) { + Notify(S21, 0x3) + } + + If (And(DOWN, 0x400000)) { + Notify(S22, 0x3) + } + + If (And(DOWN, 0x800000)) { + Notify(S23, 0x3) + } + + If (And(DOWN, 0x1000000)) { + Notify(S24, 0x3) + } + + If (And(DOWN, 0x2000000)) { + Notify(S25, 0x3) + } + + If (And(DOWN, 0x4000000)) { + Notify(S26, 0x3) + } + + If (And(DOWN, 0x8000000)) { + Notify(S27, 0x3) + } + + If (And(DOWN, 0x10000000)) { + Notify(S28, 0x3) + } + + If (And(DOWN, 0x20000000)) { + Notify(S29, 0x3) + } + + If (And(DOWN, 0x40000000)) { + Notify(S30, 0x3) + } + + If (And(DOWN, 0x80000000)) { + Notify(S31, 0x3) + } Index: kvm-userspace.pci3/bios/acpi-irq-routing.dsl =================================================================== --- /dev/null +++ kvm-userspace.pci3/bios/acpi-irq-routing.dsl @@ -0,0 +1,203 @@ + External(LNKA, DeviceObj) + External(LNKB, DeviceObj) + External(LNKC, DeviceObj) + External(LNKD, DeviceObj) + + Name(_PRT, Package() { + /* PCI IRQ routing table, example from ACPI 2.0a specification, + section 6.2.8.1 */ + /* Note: we provide the same info as the PCI routing + table of the Bochs BIOS */ + + // PCI Slot 0 + Package() {0x0000ffff, 0, LNKD, 0}, + Package() {0x0000ffff, 1, LNKA, 0}, + Package() {0x0000ffff, 2, LNKB, 0}, + Package() {0x0000ffff, 3, LNKC, 0}, + + // PCI Slot 1 + Package() {0x0001ffff, 0, LNKA, 0}, + Package() {0x0001ffff, 1, LNKB, 0}, + Package() {0x0001ffff, 2, LNKC, 0}, + 
Package() {0x0001ffff, 3, LNKD, 0}, + + // PCI Slot 2 + Package() {0x0002ffff, 0, LNKB, 0}, + Package() {0x0002ffff, 1, LNKC, 0}, + Package() {0x0002ffff, 2, LNKD, 0}, + Package() {0x0002ffff, 3, LNKA, 0}, + + // PCI Slot 3 + Package() {0x0003ffff, 0, LNKC, 0}, + Package() {0x0003ffff, 1, LNKD, 0}, + Package() {0x0003ffff, 2, LNKA, 0}, + Package() {0x0003ffff, 3, LNKB, 0}, + + // PCI Slot 4 + Package() {0x0004ffff, 0, LNKD, 0}, + Package() {0x0004ffff, 1, LNKA, 0}, + Package() {0x0004ffff, 2, LNKB, 0}, + Package() {0x0004ffff, 3, LNKC, 0}, + + // PCI Slot 5 + Package() {0x0005ffff, 0, LNKA, 0}, + Package() {0x0005ffff, 1, LNKB, 0}, + Package() {0x0005ffff, 2, LNKC, 0}, + Package() {0x0005ffff, 3, LNKD, 0}, + + // PCI Slot 6 + Package() {0x0006ffff, 0, LNKB, 0}, + Package() {0x0006ffff, 1, LNKC, 0}, + Package() {0x0006ffff, 2, LNKD, 0}, + Package() {0x0006ffff, 3, LNKA, 0}, + + // PCI Slot 7 + Package() {0x0007ffff, 0, LNKC, 0}, + Package() {0x0007ffff, 1, LNKD, 0}, + Package() {0x0007ffff, 2, LNKA, 0}, + Package() {0x0007ffff, 3, LNKB, 0}, + + // PCI Slot 8 + Package() {0x0008ffff, 0, LNKD, 0}, + Package() {0x0008ffff, 1, LNKA, 0}, + Package() {0x0008ffff, 2, LNKB, 0}, + Package() {0x0008ffff, 3, LNKC, 0}, + + // PCI Slot 9 + Package() {0x0009ffff, 0, LNKA, 0}, + Package() {0x0009ffff, 1, LNKB, 0}, + Package() {0x0009ffff, 2, LNKC, 0}, + Package() {0x0009ffff, 3, LNKD, 0}, + + // PCI Slot 10 + Package() {0x000affff, 0, LNKB, 0}, + Package() {0x000affff, 1, LNKC, 0}, + Package() {0x000affff, 2, LNKD, 0}, + Package() {0x000affff, 3, LNKA, 0}, + + // PCI Slot 11 + Package() {0x000bffff, 0, LNKC, 0}, + Package() {0x000bffff, 1, LNKD, 0}, + Package() {0x000bffff, 2, LNKA, 0}, + Package() {0x000bffff, 3, LNKB, 0}, + + // PCI Slot 12 + Package() {0x000cffff, 0, LNKD, 0}, + Package() {0x000cffff, 1, LNKA, 0}, + Package() {0x000cffff, 2, LNKB, 0}, + Package() {0x000cffff, 3, LNKC, 0}, + + // PCI Slot 13 + Package() {0x000dffff, 0, LNKA, 0}, + Package() {0x000dffff, 1, LNKB, 
0}, + Package() {0x000dffff, 2, LNKC, 0}, + Package() {0x000dffff, 3, LNKD, 0}, + + // PCI Slot 14 + Package() {0x000effff, 0, LNKB, 0}, + Package() {0x000effff, 1, LNKC, 0}, + Package() {0x000effff, 2, LNKD, 0}, + Package() {0x000effff, 3, LNKA, 0}, + + // PCI Slot 15 + Package() {0x000fffff, 0, LNKC, 0}, + Package() {0x000fffff, 1, LNKD, 0}, + Package() {0x000fffff, 2, LNKA, 0}, + Package() {0x000fffff, 3, LNKB, 0}, + + // PCI Slot 16 + Package() {0x0010ffff, 0, LNKD, 0}, + Package() {0x0010ffff, 1, LNKA, 0}, + Package() {0x0010ffff, 2, LNKB, 0}, + Package() {0x0010ffff, 3, LNKC, 0}, + + // PCI Slot 17 + Package() {0x0011ffff, 0, LNKA, 0}, + Package() {0x0011ffff, 1, LNKB, 0}, + Package() {0x0011ffff, 2, LNKC, 0}, + Package() {0x0011ffff, 3, LNKD, 0}, + + // PCI Slot 18 + Package() {0x0012ffff, 0, LNKB, 0}, + Package() {0x0012ffff, 1, LNKC, 0}, + Package() {0x0012ffff, 2, LNKD, 0}, + Package() {0x0012ffff, 3, LNKA, 0}, + + // PCI Slot 19 + Package() {0x0013ffff, 0, LNKC, 0}, + Package() {0x0013ffff, 1, LNKD, 0}, + Package() {0x0013ffff, 2, LNKA, 0}, + Package() {0x0013ffff, 3, LNKB, 0}, + + // PCI Slot 20 + Package() {0x0014ffff, 0, LNKD, 0}, + Package() {0x0014ffff, 1, LNKA, 0}, + Package() {0x0014ffff, 2, LNKB, 0}, + Package() {0x0014ffff, 3, LNKC, 0}, + + // PCI Slot 21 + Package() {0x0015ffff, 0, LNKA, 0}, + Package() {0x0015ffff, 1, LNKB, 0}, + Package() {0x0015ffff, 2, LNKC, 0}, + Package() {0x0015ffff, 3, LNKD, 0}, + + // PCI Slot 22 + Package() {0x0016ffff, 0, LNKB, 0}, + Package() {0x0016ffff, 1, LNKC, 0}, + Package() {0x0016ffff, 2, LNKD, 0}, + Package() {0x0016ffff, 3, LNKA, 0}, + + // PCI Slot 23 + Package() {0x0017ffff, 0, LNKC, 0}, + Package() {0x0017ffff, 1, LNKD, 0}, + Package() {0x0017ffff, 2, LNKA, 0}, + Package() {0x0017ffff, 3, LNKB, 0}, + + // PCI Slot 24 + Package() {0x0018ffff, 0, LNKD, 0}, + Package() {0x0018ffff, 1, LNKA, 0}, + Package() {0x0018ffff, 2, LNKB, 0}, + Package() {0x0018ffff, 3, LNKC, 0}, + + // PCI Slot 25 + Package() 
{0x0019ffff, 0, LNKA, 0}, + Package() {0x0019ffff, 1, LNKB, 0}, + Package() {0x0019ffff, 2, LNKC, 0}, + Package() {0x0019ffff, 3, LNKD, 0}, + + // PCI Slot 26 + Package() {0x001affff, 0, LNKB, 0}, + Package() {0x001affff, 1, LNKC, 0}, + Package() {0x001affff, 2, LNKD, 0}, + Package() {0x001affff, 3, LNKA, 0}, + + // PCI Slot 27 + Package() {0x001bffff, 0, LNKC, 0}, + Package() {0x001bffff, 1, LNKD, 0}, + Package() {0x001bffff, 2, LNKA, 0}, + Package() {0x001bffff, 3, LNKB, 0}, + + // PCI Slot 28 + Package() {0x001cffff, 0, LNKD, 0}, + Package() {0x001cffff, 1, LNKA, 0}, + Package() {0x001cffff, 2, LNKB, 0}, + Package() {0x001cffff, 3, LNKC, 0}, + + // PCI Slot 29 + Package() {0x001dffff, 0, LNKA, 0}, + Package() {0x001dffff, 1, LNKB, 0}, + Package() {0x001dffff, 2, LNKC, 0}, + Package() {0x001dffff, 3, LNKD, 0}, + + // PCI Slot 30 + Package() {0x001effff, 0, LNKB, 0}, + Package() {0x001effff, 1, LNKC, 0}, + Package() {0x001effff, 2, LNKD, 0}, + Package() {0x001effff, 3, LNKA, 0}, + + // PCI Slot 31 + Package() {0x001fffff, 0, LNKC, 0}, + Package() {0x001fffff, 1, LNKD, 0}, + Package() {0x001fffff, 2, LNKA, 0}, + Package() {0x001fffff, 3, LNKB, 0}, + }) Index: kvm-userspace.pci3/bios/acpi-pci-slots.dsl =================================================================== --- /dev/null +++ kvm-userspace.pci3/bios/acpi-pci-slots.dsl @@ -0,0 +1,385 @@ + Device (S0) { // Slot 0 + Name (_ADR, 0x00000000) + Method (_EJ0,1) { + Store(0x1, BEJ) + Return (0x0) + } + Method(_SUN) { + Add (SUN1, 0, Local0) + Return (Local0) + } + } + + Device (S1) { // Slot 1 + Name (_ADR, 0x00010000) + Method (_EJ0,1) { + Store(0x2, BEJ) + Return (0x0) + } + Method(_SUN) { + Add (SUN1, 1, Local0) + Return (Local0) + } + } + + Device (S2) { // Slot 2 + Name (_ADR, 0x00020000) + Method (_EJ0,1) { + Store(0x4, BEJ) + Return (0x0) + } + Method(_SUN) { + Add (SUN1, 2, Local0) + Return (Local0) + } + } + + Device (S3) { // Slot 3 + Name (_ADR, 0x00030000) + Method (_EJ0,1) { + Store(0x4, BEJ) + 
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 3, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S4) { // Slot 4
+            Name (_ADR, 0x00040000)
+            Method (_EJ0,1) {
+                Store(0x10, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 4, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S5) { // Slot 5
+            Name (_ADR, 0x00050000)
+            Method (_EJ0,1) {
+                Store(0x20, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 5, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S6) { // Slot 6
+            Name (_ADR, 0x00060000)
+            Method (_EJ0,1) {
+                Store(0x40, BEJ)
+                Return (0x0)
+            }
+
+            Method(_SUN) {
+                Add (SUN1, 6, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S7) { // Slot 7
+            Name (_ADR, 0x00070000)
+            Method (_EJ0,1) {
+                Store(0x80, BEJ)
+                Return (0x0)
+            }
+
+            Method(_SUN) {
+                Add (SUN1, 7, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S8) { // Slot 8
+            Name (_ADR, 0x00080000)
+            Method (_EJ0,1) {
+                Store(0x100, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 8, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S9) { // Slot 9
+            Name (_ADR, 0x00090000)
+            Method (_EJ0,1) {
+                Store(0x200, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 9, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S10) { // Slot 10
+            Name (_ADR, 0x000A0000)
+            Method (_EJ0,1) {
+                Store(0x400, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 10, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S11) { // Slot 11
+            Name (_ADR, 0x000B0000)
+            Method (_EJ0,1) {
+                Store(0x800, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 11, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S12) { // Slot 12
+            Name (_ADR, 0x000C0000)
+            Method (_EJ0,1) {
+                Store(0x1000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 12, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S13) { // Slot 13
+            Name (_ADR, 0x000D0000)
+            Method (_EJ0,1) {
+                Store(0x2000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 13, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S14) { // Slot 14
+            Name (_ADR, 0x000E0000)
+            Method (_EJ0,1) {
+                Store(0x4000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 14, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S15) { // Slot 15
+            Name (_ADR, 0x000F0000)
+            Method (_EJ0,1) {
+                Store(0x8000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 15, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S16) { // Slot 16
+            Name (_ADR, 0x00100000)
+            Method (_EJ0,1) {
+                Store(0x10000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 16, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S17) { // Slot 17
+            Name (_ADR, 0x00110000)
+            Method (_EJ0,1) {
+                Store(0x20000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 17, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S18) { // Slot 18
+            Name (_ADR, 0x00120000)
+            Method (_EJ0,1) {
+                Store(0x40000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 18, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S19) { // Slot 19
+            Name (_ADR, 0x00130000)
+            Method (_EJ0,1) {
+                Store(0x80000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 19, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S20) { // Slot 20
+            Name (_ADR, 0x00140000)
+            Method (_EJ0,1) {
+                Store(0x100000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 20, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S21) { // Slot 21
+            Name (_ADR, 0x00150000)
+            Method (_EJ0,1) {
+                Store(0x200000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 21, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S22) { // Slot 22
+            Name (_ADR, 0x00160000)
+            Method (_EJ0,1) {
+                Store(0x400000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 22, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S23) { // Slot 23
+            Name (_ADR, 0x00170000)
+            Method (_EJ0,1) {
+                Store(0x800000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 23, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S24) { // Slot 24
+            Name (_ADR, 0x00180000)
+            Method (_EJ0,1) {
+                Store(0x1000000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 24, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S25) { // Slot 25
+            Name (_ADR, 0x00190000)
+            Method (_EJ0,1) {
+                Store(0x2000000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 25, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S26) { // Slot 26
+            Name (_ADR, 0x001A0000)
+            Method (_EJ0,1) {
+                Store(0x4000000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 26, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S27) { // Slot 27
+            Name (_ADR, 0x001B0000)
+            Method (_EJ0,1) {
+                Store(0x8000000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 27, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S28) { // Slot 28
+            Name (_ADR, 0x001C0000)
+            Method (_EJ0,1) {
+                Store(0x10000000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 28, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S29) { // Slot 29
+            Name (_ADR, 0x001D0000)
+            Method (_EJ0,1) {
+                Store(0x20000000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 29, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S30) { // Slot 30
+            Name (_ADR, 0x001E0000)
+            Method (_EJ0,1) {
+                Store(0x40000000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 30, Local0)
+                Return (Local0)
+            }
+        }
+
+        Device (S31) { // Slot 31
+            Name (_ADR, 0x001F0000)
+            Method (_EJ0,1) {
+                Store(0x80000000, BEJ)
+                Return (0x0)
+            }
+            Method(_SUN) {
+                Add (SUN1, 31, Local0)
+                Return (Local0)
+            }
+        }
--
|
From: Alexander G. <ag...@su...> - 2008-05-03 21:03:29
|
On May 2, 2008, at 7:35 PM, Marcelo Tosatti wrote: > Add 3 PCI bridges to the ACPI table: > - Move IRQ routing, slot device and GPE processing to separate files > which can be included from acpi-dsdt.dsl. > - Add _SUN methods to every slot device so as to avoid collisions > in OS handling. > - Fix copy&paste typo in slot devices 8/9 and 24/25. > > This table breaks PCI hotplug for older userspace, hopefully not an > issue (trivial enough to upgrade the BIOS). > > Signed-off-by: Marcelo Tosatti <mto...@re...> > > Index: kvm-userspace.pci3/bios/acpi-dsdt.dsl > =================================================================== > --- kvm-userspace.pci3.orig/bios/acpi-dsdt.dsl > +++ kvm-userspace.pci3/bios/acpi-dsdt.dsl > @@ -208,218 +208,29 @@ DefinitionBlock ( > Name (_HID, EisaId ("PNP0A03")) > Name (_ADR, 0x00) > Name (_UID, 1) > - Name(_PRT, Package() { > - /* PCI IRQ routing table, example from ACPI 2.0a > specification, > - section 6.2.8.1 */ > - /* Note: we provide the same info as the PCI routing > - table of the Bochs BIOS */ > - > - // PCI Slot 0 > - Package() {0x0000ffff, 0, LNKD, 0}, > - Package() {0x0000ffff, 1, LNKA, 0}, > - Package() {0x0000ffff, 2, LNKB, 0}, > - Package() {0x0000ffff, 3, LNKC, 0}, [ ... snip ... ] > - // PCI Slot 31 > - Package() {0x001fffff, 0, LNKC, 0}, > - Package() {0x001fffff, 1, LNKD, 0}, > - Package() {0x001fffff, 2, LNKA, 0}, > - Package() {0x001fffff, 3, LNKB, 0}, > - }) > + > + Include ("acpi-irq-routing.dsl") > > OperationRegion(PCST, SystemIO, 0xae00, 0x08) > Field (PCST, DWordAcc, NoLock, WriteAsZeros) > - { > + { > PCIU, 32, > PCID, 32, > - } > - > + } Are these whitespace patches supposed to be here? > > OperationRegion(SEJ, SystemIO, 0xae08, 0x04) > Field (SEJ, DWordAcc, NoLock, WriteAsZeros) > { > B0EJ, 32, > } > > + Device (S0) { // Slot 0 > + Name (_ADR, 0x00000000) > + Method (_EJ0,1) { > + Store(0x1, B0EJ) > + Return (0x0) > + } > + } > + I'm having trouble understanding the semantic of the Sx devices here. 
What is this S0, S1 and S2 device? Maybe different names would make everything more understandable. > > Device (S1) { // Slot 1 > Name (_ADR, 0x00010000) > Method (_EJ0,1) { > @@ -436,28 +247,70 @@ DefinitionBlock ( > } > } > > - Device (S3) { // Slot 3 > + Device (S3) { // Slot 3, PCI-to-PCI bridge This device could be called BRI1 for example. That would make reading the DSDT a lot easier. > > Name (_ADR, 0x00030000) > - Method (_EJ0,1) { > - Store (0x8, B0EJ) > - Return (0x0) > + Include ("acpi-irq-routing.dsl") > + > + OperationRegion(PCST, SystemIO, 0xae0c, 0x08) > + Field (PCST, DWordAcc, NoLock, WriteAsZeros) > + { > + PCIU, 32, > + PCID, 32, > } > + > + OperationRegion(SEJ, SystemIO, 0xae14, 0x04) > + Field (SEJ, DWordAcc, NoLock, WriteAsZeros) > + { > + B1EJ, 32, > + } > + > + Name (SUN1, 30) > + Alias (\_SB.PCI0.S3.B1EJ, BEJ) > + Include ("acpi-pci-slots.dsl") [ ... snip ... ] > > Method(_L05) { > Return(0x01) > Index: kvm-userspace.pci3/bios/acpi-hotplug-gpe.dsl > =================================================================== > --- /dev/null > +++ kvm-userspace.pci3/bios/acpi-hotplug-gpe.dsl > @@ -0,0 +1,257 @@ > + /* Up status */ > + If (And(UP, 0x1)) { > + Notify(S0, 0x1) > + } While this is proper syntax I prefer the way Fabrice wrote the tables. Most of his entries were one-lined, even though they wouldn't end up like that when getting decompiled. In this case I'd vote for something like: If (And(UP, 0x1)) { Notify(S0, 0x1) } Which makes things easier to read again. The same goes for a lot of code below that chunk. > > + > + If (And(UP, 0x2)) { > + Notify(S1, 0x1) > + } > + [ ... snip ... ] > Index: kvm-userspace.pci3/bios/acpi-pci-slots.dsl > =================================================================== > --- /dev/null > +++ kvm-userspace.pci3/bios/acpi-pci-slots.dsl > @@ -0,0 +1,385 @@ > + Device (S0) { // Slot 0 > + Name (_ADR, 0x00000000) > + Method (_EJ0,1) { Hmm ... 
I never assumed anything could be wrong here, but doesn't that 1 mean there is one argument to the method? From the ACPI Specification: Method(_EJ0, 1){ //Hot docking support //Arg0: 0=insert, 1=eject So we aren't using this information? What else do we use? Sorry if I missed something. > > + Store(0x1, BEJ) > + Return (0x0) > + } > + Method(_SUN) { > + Add (SUN1, 0, Local0) > + Return (Local0) > + } > + } Same comment here. I don't like copy&paste code that goes over a lot of lines. Can't you simply do some helper methods that do what _EJ0 and _SUN do in a generic manner and Return that? I'd imagine something like: Device (S0) { // Slot 0 Name (_ADR, 0x00000000) Method (_EJ0,1) { Return( GEJ0(0x1) } Method(_SUN) { Return( GSUN(0) } } This looks way easier to read to me and keeps generic things generic and not copy&pasted. Nevertheless this is a nice approach, which will definitely show that we need to think about interrupt routing properly ;-). Alex |
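The generic-helper suggestion above could look something like the following in ASL. This is only a sketch of the review's idea: GEJ0 and GSUN are the hypothetical names proposed by the reviewer (they do not exist in the patch), and BEJ and SUN1 are assumed to be supplied by the including bus scope, as acpi-pci-slots.dsl already expects.

```asl
    // Hypothetical helpers (names from the review, not in the patch).
    // Assumes BEJ (eject register alias) and SUN1 (slot-number base)
    // are defined by the scope that includes this file.
    Method (GEJ0, 1) {          // Arg0: eject bitmask for the slot
        Store (Arg0, BEJ)
        Return (0x0)
    }
    Method (GSUN, 1) {          // Arg0: slot index relative to SUN1
        Add (SUN1, Arg0, Local0)
        Return (Local0)
    }

    Device (S0) {               // Slot 0
        Name (_ADR, 0x00000000)
        Method (_EJ0, 1) { Return (GEJ0 (0x1)) }
        Method (_SUN)    { Return (GSUN (0)) }
    }
```

Each per-slot device then shrinks to four lines, and the eject/slot-number logic lives in one place per bus.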
From: Marcelo T. <mto...@re...> - 2008-05-02 17:43:38
|
Support more than one bus in the ACPI PCI hotplug code. Currently 4 buses are supported, but can be easily extended. Signed-off-by: Marcelo Tosatti <mto...@re...> Index: kvm-userspace.pci3/qemu/hw/acpi.c =================================================================== --- kvm-userspace.pci3.orig/qemu/hw/acpi.c +++ kvm-userspace.pci3/qemu/hw/acpi.c @@ -552,10 +552,11 @@ struct gpe_regs { struct pci_status { uint32_t up; uint32_t down; + unsigned long base; }; static struct gpe_regs gpe; -static struct pci_status pci0_status; +static struct pci_status pci_bus_status[4]; static uint32_t gpe_readb(void *opaque, uint32_t addr) { @@ -625,16 +626,19 @@ static void gpe_writeb(void *opaque, uin static uint32_t pcihotplug_read(void *opaque, uint32_t addr) { - uint32_t val = 0; struct pci_status *g = opaque; - switch (addr) { - case PCI_BASE: + uint32_t val, offset; + + offset = addr - g->base; + switch (offset) { + case 0: val = g->up; break; - case PCI_BASE + 4: + case 4: val = g->down; break; default: + val = 0; break; } @@ -647,11 +651,13 @@ static uint32_t pcihotplug_read(void *op static void pcihotplug_write(void *opaque, uint32_t addr, uint32_t val) { struct pci_status *g = opaque; - switch (addr) { - case PCI_BASE: + uint32_t offset = addr - g->base; + + switch (offset) { + case 0: g->up = val; break; - case PCI_BASE + 4: + case 4: g->down = val; break; } @@ -671,9 +677,13 @@ static uint32_t pciej_read(void *opaque, static void pciej_write(void *opaque, uint32_t addr, uint32_t val) { - int slot = ffs(val) - 1; + struct pci_status *g = opaque; + int slot, bus; - device_hot_remove_success(0, slot); + bus = (g->base - PCI_BASE) / 12; + slot = ffs(val) - 1; + + device_hot_remove_success(bus, slot); #if defined(DEBUG) printf("pciej write %lx <== %d\n", addr, val); @@ -684,17 +694,25 @@ static const char *model; void qemu_system_hot_add_init(const char *cpu_model) { + int i; + register_ioport_write(GPE_BASE, 4, 1, gpe_writeb, &gpe); register_ioport_read(GPE_BASE, 4, 1, 
gpe_readb, &gpe); register_ioport_write(PROC_BASE, 4, 1, gpe_writeb, &gpe); register_ioport_read(PROC_BASE, 4, 1, gpe_readb, &gpe); - register_ioport_write(PCI_BASE, 8, 4, pcihotplug_write, &pci0_status); - register_ioport_read(PCI_BASE, 8, 4, pcihotplug_read, &pci0_status); + for (i = 0; i < 4; i++) { + struct pci_status *pci_status = &pci_bus_status[i]; + unsigned long base = PCI_BASE + (i*12); + + pci_status->base = base; + register_ioport_write(base, 8, 4, pcihotplug_write, pci_status); + register_ioport_read(base, 8, 4, pcihotplug_read, pci_status); + register_ioport_write(base+8, 4, 4, pciej_write, pci_status); + register_ioport_read(base+8, 4, 4, pciej_read, pci_status); + } - register_ioport_write(PCI_EJ_BASE, 4, 4, pciej_write, NULL); - register_ioport_read(PCI_EJ_BASE, 4, 4, pciej_read, NULL); model = cpu_model; } @@ -740,28 +758,34 @@ void qemu_system_cpu_hot_add(int cpu, in } #endif -static void enable_device(struct pci_status *p, struct gpe_regs *g, int slot) +static void enable_device(struct pci_status *p, struct gpe_regs *g, int bus, int slot) { - g->sts |= 2; - g->en |= 2; + int gpe_bit = (1 << (bus+1)); + + g->sts |= gpe_bit; + g->en |= gpe_bit; p->up |= (1 << slot); } -static void disable_device(struct pci_status *p, struct gpe_regs *g, int slot) +static void disable_device(struct pci_status *p, struct gpe_regs *g, int bus, int slot) { - g->sts |= 2; - g->en |= 2; + int gpe_bit = (1 << (bus+1)); + + g->sts |= gpe_bit; + g->en |= gpe_bit; p->down |= (1 << slot); } void qemu_system_device_hot_add(int pcibus, int slot, int state) { + struct pci_status *pci_status = &pci_bus_status[pcibus]; + qemu_set_irq(pm_state->irq, 1); - pci0_status.up = 0; - pci0_status.down = 0; + pci_status->up = 0; + pci_status->down = 0; if (state) - enable_device(&pci0_status, &gpe, slot); + enable_device(pci_status, &gpe, pcibus, slot); else - disable_device(&pci0_status, &gpe, slot); + disable_device(pci_status, &gpe, pcibus, slot); qemu_set_irq(pm_state->irq, 0); } -- |
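The per-bus port arithmetic in the patch above is compact enough to check in isolation. The sketch below is a minimal illustration (not qemu code) of the assumed layout: each hotplug bus owns a 12-byte I/O window starting at PCI_BASE (0xae00 in the DSDT) — 4 bytes of "up" status, 4 bytes of "down" status, then a 4-byte eject register — which is why pciej_write can recover the bus number as (base - PCI_BASE) / 12.

```c
#include <assert.h>

/* Base of the hotplug I/O window, matching the DSDT's
 * OperationRegion(PCST, SystemIO, 0xae00, ...). */
#define PCI_BASE 0xae00UL

/* Start of bus N's 12-byte window (up, down, eject). */
static unsigned long bus_to_base(int bus)
{
    return PCI_BASE + bus * 12;
}

/* Inverse mapping, as used in pciej_write(). */
static int base_to_bus(unsigned long base)
{
    return (int)((base - PCI_BASE) / 12);
}

/* The eject register sits 8 bytes into the window. */
static unsigned long eject_port(int bus)
{
    return bus_to_base(bus) + 8;
}
```

These values line up with the DSDT in the sibling patch: bus 0's status/eject regions at 0xae00/0xae08, and the first bridge's at 0xae0c/0xae14.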
From: Marcelo T. <mto...@re...> - 2008-05-02 17:43:42
|
Initialize and configure 3 PCI bridges (Intel 82801 PCI Bridge rev d9), which have no special handling (quirks) in current Linux versions. IO base/limit registers are initialized with zero and read-only, indicating that the bridge does not support IO address ranges. To avoid potentially breaking SPARC, a separate pci_set_irq function is introduced to handle the flat IRQ space. Signed-off-by: Marcelo Tosatti <mto...@re...> Index: kvm-userspace.pci3/qemu/hw/pci.c =================================================================== --- kvm-userspace.pci3.orig/qemu/hw/pci.c +++ kvm-userspace.pci3/qemu/hw/pci.c @@ -528,6 +528,8 @@ uint32_t pci_data_read(void *opaque, uin /***********************************************************/ /* generic PCI irq support */ +/* SPARC uses the IRQ assigned to the bridge device for its children??? */ +#ifdef TARGET_SPARC /* 0 <= irq_num <= 3. level must be 0 or 1 */ static void pci_set_irq(void *opaque, int irq_num, int level) { @@ -550,6 +552,33 @@ static void pci_set_irq(void *opaque, in bus->irq_count[irq_num] += change; bus->set_irq(bus->irq_opaque, irq_num, bus->irq_count[irq_num] != 0); } +#else +/* 0 <= irq_num <= 3. 
level must be 0 or 1 */ +static void pci_set_irq(void *opaque, int irq_num, int level) +{ + PCIDevice *pci_dev = (PCIDevice *)opaque; + PCIDevice *host_dev; + PCIBus *bus; + int change; + + change = level - pci_dev->irq_state[irq_num]; + if (!change) + return; + + pci_dev->irq_state[irq_num] = level; + host_dev = pci_dev; + for (;;) { + bus = host_dev->bus; + if (bus->set_irq) { + irq_num = bus->map_irq(pci_dev, irq_num); + break; + } + host_dev = bus->parent_dev; + } + bus->irq_count[irq_num] += change; + bus->set_irq(bus->irq_opaque, irq_num, bus->irq_count[irq_num] != 0); +} +#endif /***********************************************************/ /* monitor info on PCI */ @@ -706,11 +735,6 @@ PCIDevice *pci_nic_init(PCIBus *bus, NIC return pci_dev; } -typedef struct { - PCIDevice dev; - PCIBus *bus; -} PCIBridge; - static void pci_bridge_write_config(PCIDevice *d, uint32_t address, uint32_t val, int len) { @@ -725,6 +749,13 @@ static void pci_bridge_write_config(PCID printf ("pci-bridge: %s: Assigned bus %d\n", d->name, s->bus->bus_num); #endif } +#ifdef TARGET_I386 + /* on x86 the bridges do not implement I/O address ranges, I/O base/limit + * registers are read-only and should return 0. 
+ */ + if (address == 0x1c || address == 0x1d) + return; +#endif pci_default_write_config(d, address, val, len); } @@ -755,7 +786,7 @@ PCIDevice *pci_find_device(int bus_num, return NULL; } -PCIBus *pci_bridge_init(PCIBus *bus, int devfn, uint32_t id, +PCIBridge *pci_bridge_init(PCIBus *bus, int devfn, uint32_t id, pci_map_irq_fn map_irq, const char *name) { PCIBridge *s; @@ -778,5 +809,5 @@ PCIBus *pci_bridge_init(PCIBus *bus, int s->dev.config[0x1E] = 0xa0; // secondary status s->bus = pci_register_secondary_bus(&s->dev, map_irq); - return s->bus; + return s; } Index: kvm-userspace.pci3/bios/rombios32.c =================================================================== --- kvm-userspace.pci3.orig/bios/rombios32.c +++ kvm-userspace.pci3/bios/rombios32.c @@ -652,6 +652,30 @@ static void bios_lock_shadow_ram(void) pci_config_writeb(d, 0x59, v); } +static int nr_bridges = 1; +static int current_bridge = 0; + +static void pci_bios_count_p2p(PCIDevice *d) +{ + uint16_t vendor_id, device_id; + + vendor_id = pci_config_readw(d, PCI_VENDOR_ID); + device_id = pci_config_readw(d, PCI_DEVICE_ID); + if (vendor_id == 0x8086 && device_id == 0x244e) + nr_bridges++; +} + +int fls(int i) +{ + int bit; + + for (bit=31; bit >= 0; bit--) + if (i & (1 << bit)) + return bit+1; + + return 0; +} + static void pci_bios_init_bridges(PCIDevice *d) { uint16_t vendor_id, device_id; @@ -681,6 +705,20 @@ static void pci_bios_init_bridges(PCIDev } else if (vendor_id == 0x8086 && device_id == 0x1237) { /* i440 PCI bridge */ bios_shadow_init(d); + } else if (vendor_id == 0x8086 && device_id == 0x244e) { + int len, base; + + len = (0xfebfffff - 0xf0000000) / nr_bridges; + if (len & (len-1)) + len = 1 << fls(len); + + /* memory IO */ + base = (0xf0000000+len) + (current_bridge*len); + base >>= 16; + pci_config_writew(d, 0x20, base); + pci_config_writew(d, 0x22, base); + + current_bridge++; } } @@ -775,6 +813,8 @@ static void pci_bios_init_device(PCIDevi pci_set_io_region_addr(d, 0, 0x80800000); } 
break; + case 0x0604: + break; default: default_map: /* default memory mappings */ @@ -859,6 +899,8 @@ void pci_bios_init(void) if (pci_bios_bigmem_addr < 0x90000000) pci_bios_bigmem_addr = 0x90000000; + pci_for_each_device(pci_bios_count_p2p); + pci_for_each_device(pci_bios_init_bridges); pci_for_each_device(pci_bios_init_device); Index: kvm-userspace.pci3/qemu/hw/piix_pci.c =================================================================== --- kvm-userspace.pci3.orig/qemu/hw/piix_pci.c +++ kvm-userspace.pci3/qemu/hw/piix_pci.c @@ -172,6 +172,7 @@ static int i440fx_load(QEMUFile* f, void PCIBus *i440fx_init(PCIDevice **pi440fx_state, qemu_irq *pic) { PCIBus *b; + PCIBridge *b1, *b2, *b3; PCIDevice *d; I440FXState *s; @@ -203,6 +204,15 @@ PCIBus *i440fx_init(PCIDevice **pi440fx_ d->config[0x72] = 0x02; /* SMRAM */ + b1 = pci_bridge_init(s->bus, 24, 0x8086244e, pci_slot_get_pirq, + "first PCI-to-PCI bridge "); + b2 = pci_bridge_init(s->bus, 32, 0x8086244e, pci_slot_get_pirq, + "second PCI-to-PCI bridge"); + b3 = pci_bridge_init(s->bus, 40, 0x8086244e, pci_slot_get_pirq, + "third PCI-to-PCI bridge"); + b1->dev.config[0x1c] = b2->dev.config[0x1c] = b3->dev.config[0x1c] = 0; + b1->dev.config[0x1d] = b2->dev.config[0x1d] = b3->dev.config[0x1d] = 0; + register_savevm("I440FX", 0, 2, i440fx_save, i440fx_load, d); *pi440fx_state = d; return b; Index: kvm-userspace.pci3/qemu/hw/apb_pci.c =================================================================== --- kvm-userspace.pci3.orig/qemu/hw/apb_pci.c +++ kvm-userspace.pci3/qemu/hw/apb_pci.c @@ -214,7 +214,7 @@ PCIBus *pci_apb_init(target_phys_addr_t APBState *s; PCIDevice *d; int pci_mem_config, pci_mem_data, apb_config, pci_ioport; - PCIBus *secondary; + PCIBridge *secondary; s = qemu_mallocz(sizeof(APBState)); /* Ultrasparc PBM main bus */ @@ -254,7 +254,7 @@ PCIBus *pci_apb_init(target_phys_addr_t /* APB secondary busses */ secondary = pci_bridge_init(s->bus, 8, 0x108e5000, pci_apb_map_irq, "Advanced PCI Bus secondary 
bridge 1"); pci_bridge_init(s->bus, 9, 0x108e5000, pci_apb_map_irq, "Advanced PCI Bus secondary bridge 2"); - return secondary; + return secondary->bus; } Index: kvm-userspace.pci3/qemu/hw/pci.h =================================================================== --- kvm-userspace.pci3.orig/qemu/hw/pci.h +++ kvm-userspace.pci3/qemu/hw/pci.h @@ -70,6 +70,11 @@ struct PCIDevice { int irq_state[4]; }; +typedef struct { + PCIDevice dev; + PCIBus *bus; +} PCIBridge; + PCIDevice *pci_register_device(PCIBus *bus, const char *name, int instance_size, int devfn, PCIConfigReadFunc *config_read, @@ -102,7 +107,7 @@ PCIBus *pci_find_bus(int bus_num); PCIDevice *pci_find_device(int bus_num, int slot); void pci_info(void); -PCIBus *pci_bridge_init(PCIBus *bus, int devfn, uint32_t id, +PCIBridge *pci_bridge_init(PCIBus *bus, int devfn, uint32_t id, pci_map_irq_fn map_irq, const char *name); /* lsi53c895a.c */ -- |
From: Avi K. <av...@qu...> - 2008-05-04 07:56:34
|
Marcelo Tosatti wrote:
> Add three PCI bridges to support 128 slots.
>
> Changes since v1:
> - Remove I/O address range "support" (so standard PCI I/O space is used).
> - Verify that there's no special quirks for 82801 PCI bridge.
> - Introduce separate flat IRQ mapping function for non-SPARC targets.
>

I've cooled off on the 128 slot stuff, mainly because most real hosts don't have them. An unusual configuration will likely lead to problems, as most guest OSes and workloads will not have been tested thoroughly with them.

- it requires a large number of interrupts, which are difficult to provide, and which it is hard to ensure all OSes support. MSI is relatively new.
- if only a few interrupts are available, then each interrupt requires scanning a large number of queues.

If we are to do this, then we need better tests than "80 disks show up".

The alternative approach of having the virtio block device control up to 16 disks allows having those 80 disks with just 5 slots (and 5 interrupts). This is similar to the way traditional SCSI controllers behave, and so should not surprise the guest OS.

--
error compiling committee.c: too many arguments to function
|
From: Alexander G. <ag...@su...> - 2008-05-05 23:16:29
|
On May 4, 2008, at 9:56 AM, Avi Kivity wrote: > Marcelo Tosatti wrote: >> Add three PCI bridges to support 128 slots. >> >> Changes since v1: >> - Remove I/O address range "support" (so standard PCI I/O space is >> used). >> - Verify that there's no special quirks for 82801 PCI bridge. >> - Introduce separate flat IRQ mapping function for non-SPARC targets. >> >> > > I've cooled off on the 128 slot stuff, mainly because most real hosts > don't have them. An unusual configuration will likely lead to > problems > as most guest OSes and workloads will not have been tested thoroughly > with them. This is more of a "let's do this conditionally" than a "let's not do it" reason imho. > - it requires a large number of interrupts, which are difficult to > provide, and which it is hard to ensure all OSes support. MSI is > relatively new. We could just as well extend the device layout to have every device be attached to one virtual IOAPIC pin, so we'd have like 128 / 4 = 32 IOAPICs in the system and one interrupt for each device. > - is only a few interrupts are available, then each interrupt requires > scanning a large number of queues This case should be rare, basically only existent with OSs that don't support APIC properly. > If we are to do this, then we need better tests than "80 disks show > up". True. > The alternative approach of having the virtio block device control > up to > 16 disks allows having those 80 disks with just 5 slots (and 5 > interrupts). This is similar to the way traditional SCSI controllers > behave, and so should not surprise the guest OS. The one thing I'm actually really missing here is use cases. What are we doing this for? And further along the line, are there other approaches to the problems for which this was supposed to be a solution? Maybe someone can raise a case where it's not virtblk / virtnet. Alex |
From: Avi K. <av...@qu...> - 2008-05-06 10:14:35
|
Alexander Graf wrote: >> Marcelo Tosatti wrote: >>> Add three PCI bridges to support 128 slots. >>> >>> Changes since v1: >>> - Remove I/O address range "support" (so standard PCI I/O space is >>> used). >>> - Verify that there's no special quirks for 82801 PCI bridge. >>> - Introduce separate flat IRQ mapping function for non-SPARC targets. >>> >>> >> >> I've cooled off on the 128 slot stuff, mainly because most real hosts >> don't have them. An unusual configuration will likely lead to problems >> as most guest OSes and workloads will not have been tested thoroughly >> with them. > > This is more of a "let's do this conditionally" than a "let's not do > it" reason imho. Yes. More precisely, let's not do it until we're sure it works and performs. I don't think a queue-per-disk approach will perform well, since the queue will always be very short and will not be able to amortize exit costs and ring management overhead very well. >> - it requires a large number of interrupts, which are difficult to >> provide, and which it is hard to ensure all OSes support. MSI is >> relatively new. > > We could just as well extend the device layout to have every device be > attached to one virtual IOAPIC pin, so we'd have like 128 / 4 = 32 > IOAPICs in the system and one interrupt for each device. That's problematic for these reasons: - how many OSes work well with 32 IOAPICs? - at one point, you run out of interrupt vectors (~ 220 per cpu if the OS can allocate per-cpu vectors; otherwise just ~220) - you will have many interrupts fired, each for a single device with a few requests, reducing performance >> - is only a few interrupts are available, then each interrupt requires >> scanning a large number of queues > > This case should be rare, basically only existent with OSs that don't > support APIC properly. > Hopefully. >> The alternative approach of having the virtio block device control up to >> 16 disks allows having those 80 disks with just 5 slots (and 5 >> interrupts). 
This is similar to the way traditional SCSI controllers >> behave, and so should not surprise the guest OS. > > The one thing I'm actually really missing here is use cases. What are > we doing this for? And further along the line, are there other > approaches to the problems for which this was supposed to be a > solution? Maybe someone can raise a case where it's not virtblk / > virtnet. The requirement for lots of storage is a given. There are two ways of doing that, paying a lot of money to EMC or NetApp for a storage controller, or connecting lots of disks directly and doing the storage controller on the OS (what EMC and NetApp do anyway, inside their boxes). zfs is a good example of a use case, and I'd guess databases could use this too if they were able to supply the redundancy. -- Do not meddle in the internals of kernels, for they are subtle and quick to panic. |
From: Anthony L. <an...@co...> - 2008-05-05 22:53:48
|
Avi Kivity wrote: > Marcelo Tosatti wrote: > >> Add three PCI bridges to support 128 slots. >> >> Changes since v1: >> - Remove I/O address range "support" (so standard PCI I/O space is used). >> - Verify that there's no special quirks for 82801 PCI bridge. >> - Introduce separate flat IRQ mapping function for non-SPARC targets. >> >> >> > > I've cooled off on the 128 slot stuff, mainly because most real hosts > don't have them. An unusual configuration will likely lead to problems > as most guest OSes and workloads will not have been tested thoroughly > with them. > > - it requires a large number of interrupts, which are difficult to > provide, and which it is hard to ensure all OSes support. MSI is > relatively new. > - is only a few interrupts are available, then each interrupt requires > scanning a large number of queues > > If we are to do this, then we need better tests than "80 disks show up". > > The alternative approach of having the virtio block device control up to > 16 disks allows having those 80 disks with just 5 slots (and 5 > interrupts). This is similar to the way traditional SCSI controllers > behave, and so should not surprise the guest OS. > If you have a single virtio-blk device that shows up as 8 functions, we could achieve the same thing. We can cheat with the interrupt handlers to avoid cache line bouncing too. Plus, we can use PCI hotplug so we don't have to reinvent a new hotplug mechanism. I'm inclined to think that ring sharing isn't as useful as it seems as long as we don't have indirect scatter gather lists. Regards, Anthony Liguori |
From: Avi K. <av...@qu...> - 2008-05-06 10:01:18
|
Anthony Liguori wrote: > Avi Kivity wrote: >> Marcelo Tosatti wrote: >> >>> Add three PCI bridges to support 128 slots. >>> >>> Changes since v1: >>> - Remove I/O address range "support" (so standard PCI I/O space is >>> used). >>> - Verify that there's no special quirks for 82801 PCI bridge. >>> - Introduce separate flat IRQ mapping function for non-SPARC targets. >>> >>> >> >> I've cooled off on the 128 slot stuff, mainly because most real hosts >> don't have them. An unusual configuration will likely lead to >> problems as most guest OSes and workloads will not have been tested >> thoroughly with them. >> >> - it requires a large number of interrupts, which are difficult to >> provide, and which it is hard to ensure all OSes support. MSI is >> relatively new. >> - is only a few interrupts are available, then each interrupt >> requires scanning a large number of queues >> >> If we are to do this, then we need better tests than "80 disks show up". >> >> The alternative approach of having the virtio block device control up >> to 16 disks allows having those 80 disks with just 5 slots (and 5 >> interrupts). This is similar to the way traditional SCSI controllers >> behave, and so should not surprise the guest OS. >> > > If you have a single virtio-blk device that shows up as 8 functions, > we could achieve the same thing. We can cheat with the interrupt > handlers to avoid cache line bouncing too. You can't cheat on all guests, and even on Linux, it's better to keep on doing what real hardware does than go off on a tangent than no one else uses. You'll have to cheat on ->kick(), too. Virtio needs one exit per O(queue depth). With one spindle per ring, it doesn't make sense to have a queue depth > 4 (or latency goes to hell), so you have many exits. > Plus, we can use PCI hotplug so we don't have to reinvent a new > hotplug mechanism. You can plug disks into a Fibre Channel mesh, so presumably that works on real hardware somehow. 
> > I'm inclined to think that ring sharing isn't as useful as it seems as > long as we don't have indirect scatter gather lists. I agree, but I think that indirect sg is very important for storage: - a long sg list is cheap from the disk's point of view (the seeks are what's expensive) - it is important to keep the queue depth meaningful and small (O(spindles * 3)), as it drastically affects latency -- Do not meddle in the internals of kernels, for they are subtle and quick to panic. |