public inbox for linux-kernel@vger.kernel.org
* [PATCH 1/2] msi: Invert the sense of the MSI enables.
       [not found]                                       ` <20070522204103.134bf5a2@osprey.hogchain.net>
@ 2007-05-25  4:19                                         ` Eric W. Biederman
  2007-05-25  4:26                                           ` [PATCH 2/2] msi: Add support for the Intel chipsets that support MSI Eric W. Biederman
                                                             ` (4 more replies)
  0 siblings, 5 replies; 30+ messages in thread
From: Eric W. Biederman @ 2007-05-25  4:19 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Jay Cliburn, Grzegorz Krzystek, Andrew Morton, Andi Kleen, ninex,
	linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck


Currently we blacklist known bad MSI configurations, which means we
keep getting MSI enabled on chipsets that either do not support MSI
or implement it improperly.  Since the normal IRQ routing
mechanism seems to work even when MSI does not, this is a bad default
and causes non-functioning systems for no good reason.

So this patch inverts the sense of the MSI bus flag to enable
MSI only on known good systems.  I am seeding that list with the set of
chipsets that have an enabled hypertransport MSI mapping capability,
which is as close as I can come to a generic MSI enable.  So for actually
using MSI this patch is a regression, but for just having MSI enabled
in the kernel by default things should just work with this patch
applied.

People can still enable MSI at a per-bus level for testing by writing
to sysfs, so discovering chipsets that actually work (assuming we are
using modular drivers) should be pretty straightforward.
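
As a sketch of the sysfs override referred to here, using the per-bridge
"msi_bus" attribute from drivers/pci/pci-sysfs.c (the bridge address below
is only an example; pick the bridge whose subordinate bus holds the device
under test):

```shell
# Enable MSI on the subordinate bus of the bridge at 0000:00:0e.0
# (illustrative address); with this patch, writing '1' sets
# PCI_BUS_FLAGS_MSI on that bus and writing '0' clears it.
echo 1 > /sys/bus/pci/devices/0000:00:0e.0/msi_bus
# Read back the current state of the flag (prints 1 or 0).
cat /sys/bus/pci/devices/0000:00:0e.0/msi_bus
```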

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
---
 Documentation/MSI-HOWTO.txt |   30 +++++++++++++------
 drivers/pci/msi.c           |   19 ++++++++----
 drivers/pci/pci-sysfs.c     |    6 ++--
 drivers/pci/quirks.c        |   66 ++++---------------------------------------
 include/linux/pci.h         |    2 +-
 5 files changed, 42 insertions(+), 81 deletions(-)

diff --git a/Documentation/MSI-HOWTO.txt b/Documentation/MSI-HOWTO.txt
index 0d82407..81f4e18 100644
--- a/Documentation/MSI-HOWTO.txt
+++ b/Documentation/MSI-HOWTO.txt
@@ -485,21 +485,31 @@ single device.  This may be achieved by either not calling pci_enable_msi()
 or all, or setting the pci_dev->no_msi flag before (most of the time
 in a quirk).
 
-6.2. Disabling MSI below a bridge
-
-The vast majority of MSI quirks are required by PCI bridges not
-being able to route MSI between busses. In this case, MSI have to be
-disabled on all devices behind this bridge. It is achieves by setting
-the PCI_BUS_FLAGS_NO_MSI flag in the pci_bus->bus_flags of the bridge
+6.2. Enabling MSI below a bridge
+
+Despite being standard, mandated by the PCI Express spec, supported
+by plug-in cards, and less hassle to deal with than IRQ routing tables,
+not all hardware supports MSI: not all chipsets enable it, and not all
+motherboards enable MSI support in MSI-capable hardware.  So until
+a hardware-specific test is implemented to check whether MSI is
+supported and enabled on a given piece of hardware, we disable MSI
+support by default.  This at least ensures users will have a working
+system using the older mechanisms.
+
+So to enable MSI support on a particular chipset we need an MSI quirk
+that recognizes the chipset and tests whether MSI is enabled.  In
+this case, MSI has to be enabled on all bridges that are capable of
+transforming MSI messages into irqs.  This is achieved by setting
+the PCI_BUS_FLAGS_MSI flag in the pci_bus->bus_flags of the bridge
 subordinate bus. There is no need to set the same flag on bridges that
-are below the broken bridge. When pci_enable_msi() is called to enable
-MSI on a device, pci_msi_supported() takes care of checking the NO_MSI
-flag in all parent busses of the device.
+are below the working bridge. When pci_enable_msi() is called to enable
+MSI on a device, pci_msi_supported() takes care of checking that
+at least one parent bus supports MSI.
 
 Some bridges actually support dynamic MSI support enabling/disabling
 by changing some bits in their PCI configuration space (especially
 the Hypertransport chipsets such as the nVidia nForce and Serverworks
-HT2000). It may then be required to update the NO_MSI flag on the
+HT2000). It may then be required to update the MSI flag on the
 corresponding devices in the sysfs hierarchy. To enable MSI support
 on device "0000:00:0e", do:
 
diff --git a/drivers/pci/msi.c b/drivers/pci/msi.c
index d9cbd58..000c9ae 100644
--- a/drivers/pci/msi.c
+++ b/drivers/pci/msi.c
@@ -467,15 +467,20 @@ static int pci_msi_check_device(struct pci_dev* dev, int nvec, int type)
 	if (nvec < 1)
 		return -ERANGE;
 
-	/* Any bridge which does NOT route MSI transactions from it's
-	 * secondary bus to it's primary bus must set NO_MSI flag on
-	 * the secondary pci_bus.
-	 * We expect only arch-specific PCI host bus controller driver
-	 * or quirks for specific PCI bridges to be setting NO_MSI.
+	/* For pure PCI bridges, routing MSI traffic is just another
+	 * example of routing a small DMA transaction, so it should
+	 * be no problem.  However, getting MSI traffic from PCI into
+	 * the non-PCI part of the chipset is a problem.  So this
+	 * code checks to see if we have an upstream bus where
+	 * MSI is known to work.
+	 *
+	 * Only non PCI-to-PCI bridges are expected to set this flag.
 	 */
 	for (bus = dev->bus; bus; bus = bus->parent)
-		if (bus->bus_flags & PCI_BUS_FLAGS_NO_MSI)
-			return -EINVAL;
+		if (bus->bus_flags & PCI_BUS_FLAGS_MSI)
+			break;
+	if (!bus)
+		return -EINVAL;
 
 	ret = arch_msi_check_device(dev, nvec, type);
 	if (ret)
diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
index 284e83a..4cebb60 100644
--- a/drivers/pci/pci-sysfs.c
+++ b/drivers/pci/pci-sysfs.c
@@ -160,7 +160,7 @@ msi_bus_show(struct device *dev, struct device_attribute *attr, char *buf)
 		return 0;
 
 	return sprintf (buf, "%u\n",
-			!(pdev->subordinate->bus_flags & PCI_BUS_FLAGS_NO_MSI));
+			!!(pdev->subordinate->bus_flags & PCI_BUS_FLAGS_MSI));
 }
 
 static ssize_t
@@ -178,13 +178,13 @@ msi_bus_store(struct device *dev, struct device_attribute *attr,
 		return count;
 
 	if (*buf == '0') {
-		pdev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
+		pdev->subordinate->bus_flags &= ~PCI_BUS_FLAGS_MSI;
 		dev_warn(&pdev->dev, "forced subordinate bus to not support MSI,"
 			 " bad things could happen.\n");
 	}
 
 	if (*buf == '1') {
-		pdev->subordinate->bus_flags &= ~PCI_BUS_FLAGS_NO_MSI;
+		pdev->subordinate->bus_flags |= PCI_BUS_FLAGS_MSI;
 		dev_warn(&pdev->dev, "forced subordinate bus to support MSI,"
 			 " bad things could happen.\n");
 	}
diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index 6ccc2e9..f60c6c6 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -1625,33 +1625,6 @@ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_NVIDIA,  PCI_DEVICE_ID_NVIDIA_CK804_PCIE,
 			quirk_nvidia_ck804_pcie_aer_ext_cap);
 
 #ifdef CONFIG_PCI_MSI
-/* The Serverworks PCI-X chipset does not support MSI. We cannot easily rely
- * on setting PCI_BUS_FLAGS_NO_MSI in its bus flags because there are actually
- * some other busses controlled by the chipset even if Linux is not aware of it.
- * Instead of setting the flag on all busses in the machine, simply disable MSI
- * globally.
- */
-static void __init quirk_svw_msi(struct pci_dev *dev)
-{
-	pci_no_msi();
-	printk(KERN_WARNING "PCI: MSI quirk detected. MSI deactivated.\n");
-}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_GCNB_LE, quirk_svw_msi);
-
-/* Disable MSI on chipsets that are known to not support it */
-static void __devinit quirk_disable_msi(struct pci_dev *dev)
-{
-	if (dev->subordinate) {
-		printk(KERN_WARNING "PCI: MSI quirk detected. "
-		       "PCI_BUS_FLAGS_NO_MSI set for %s subordinate bus.\n",
-		       pci_name(dev));
-		dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
-	}
-}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_8131_BRIDGE, quirk_disable_msi);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS400_200, quirk_disable_msi);
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, PCI_DEVICE_ID_ATI_RS480, quirk_disable_msi);
-
 /* Go through the list of Hypertransport capabilities and
  * return 1 if a HT MSI capability is found and enabled */
 static int __devinit msi_ht_cap_enabled(struct pci_dev *dev)
@@ -1677,43 +1650,16 @@ static int __devinit msi_ht_cap_enabled(struct pci_dev *dev)
 	return 0;
 }
 
-/* Check the hypertransport MSI mapping to know whether MSI is enabled or not */
+/* Enable MSI on hypertransport chipsets supporting MSI */
 static void __devinit quirk_msi_ht_cap(struct pci_dev *dev)
 {
-	if (dev->subordinate && !msi_ht_cap_enabled(dev)) {
-		printk(KERN_WARNING "PCI: MSI quirk detected. "
-		       "MSI disabled on chipset %s.\n",
-		       pci_name(dev));
-		dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
+	if (dev->subordinate && msi_ht_cap_enabled(dev)) {
+		printk(KERN_INFO "PCI: Enabled MSI on chipset %s.\n",
+			pci_name(dev));
+		dev->subordinate->bus_flags |= PCI_BUS_FLAGS_MSI;
 	}
 }
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE,
-			quirk_msi_ht_cap);
+DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, quirk_msi_ht_cap);
 
-/* The nVidia CK804 chipset may have 2 HT MSI mappings.
- * MSI are supported if the MSI capability set in any of these mappings.
- */
-static void __devinit quirk_nvidia_ck804_msi_ht_cap(struct pci_dev *dev)
-{
-	struct pci_dev *pdev;
-
-	if (!dev->subordinate)
-		return;
 
-	/* check HT MSI cap on this chipset and the root one.
-	 * a single one having MSI is enough to be sure that MSI are supported.
-	 */
-	pdev = pci_get_slot(dev->bus, 0);
-	if (!pdev)
-		return;
-	if (!msi_ht_cap_enabled(dev) && !msi_ht_cap_enabled(pdev)) {
-		printk(KERN_WARNING "PCI: MSI quirk detected. "
-		       "MSI disabled on chipset %s.\n",
-		       pci_name(dev));
-		dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
-	}
-	pci_dev_put(pdev);
-}
-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_CK804_PCIE,
-			quirk_nvidia_ck804_msi_ht_cap);
 #endif /* CONFIG_PCI_MSI */
diff --git a/include/linux/pci.h b/include/linux/pci.h
index fbf3766..468d761 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -111,7 +111,7 @@ enum pcie_reset_state {
 
 typedef unsigned short __bitwise pci_bus_flags_t;
 enum pci_bus_flags {
-	PCI_BUS_FLAGS_NO_MSI = (__force pci_bus_flags_t) 1,
+	PCI_BUS_FLAGS_MSI = (__force pci_bus_flags_t) 1,
 };
 
 struct pci_cap_saved_state {
-- 
1.5.1.1.181.g2de0


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [PATCH 2/2] msi:  Add support for the Intel chipsets that support MSI.
  2007-05-25  4:19                                         ` [PATCH 1/2] msi: Invert the sense of the MSI enables Eric W. Biederman
@ 2007-05-25  4:26                                           ` Eric W. Biederman
  2007-05-25  5:38                                             ` Andi Kleen
  2007-05-25  4:31                                           ` [PATCH 1/2] msi: Invert the sense of the MSI enables Andrew Morton
                                                             ` (3 subsequent siblings)
  4 siblings, 1 reply; 30+ messages in thread
From: Eric W. Biederman @ 2007-05-25  4:26 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Jay Cliburn, Grzegorz Krzystek, Andrew Morton, Andi Kleen, ninex,
	linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck


This patch is the result of a quick survey of the Intel chipset
documents.  I took a quick look in each document to see if the chipset
supported MSI, and if so I looked through to find the vendor and device
id of device 0 function 0 of the chipset and added a quirk for that
device id if it was not a duplicate.

I happen to have one of the systems on the list, so the patch is
tested and seems to work fine.

This patch should be safe.  The anecdotal evidence is that when dealing
with MSI the Intel chipsets just work.  If we find some buggy ones
changing the list won't be hard.

This patch should also serve as an example of how simple it is to
enable MSI on a chipset or platform configuration where it is known
to work.

This patch does not use the defines from pci_ids.h because there are
so many defines and so many duplicate host-bridge device ids in
Intel's documentation that I could not keep any of it straight without
using the raw device ids, which allowed me to order the fixups and
quickly detect duplicates.  Unfortunately the symbolic names were
a confusing layer of abstraction.  I have still updated pci_ids.h
so that it is possible to track the numbers back to their chipset.

Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
---
 drivers/pci/quirks.c    |   27 +++++++++++++++++++++++++++
 include/linux/pci_ids.h |   10 ++++++++++
 2 files changed, 37 insertions(+), 0 deletions(-)

diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
index f60c6c6..40fb499 100644
--- a/drivers/pci/quirks.c
+++ b/drivers/pci/quirks.c
@@ -1661,5 +1661,32 @@ static void __devinit quirk_msi_ht_cap(struct pci_dev *dev)
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, quirk_msi_ht_cap);
 
+static void __devinit quirk_msi_chipset(struct pci_dev *dev)
+{
+	dev->bus->bus_flags |= PCI_BUS_FLAGS_MSI;
+}
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x254C, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2550, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2558, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2560, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2570, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2578, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2580, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2581, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2588, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2590, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x25C8, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x25D0, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x25D4, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2600, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2770, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2774, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2778, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x277C, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x27A0, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2990, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2A00, quirk_msi_chipset);
+DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x359E, quirk_msi_chipset);
+
 
 #endif /* CONFIG_PCI_MSI */
diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
index 62b3e00..71df8c0 100644
--- a/include/linux/pci_ids.h
+++ b/include/linux/pci_ids.h
@@ -2232,11 +2232,20 @@
 #define PCI_DEVICE_ID_INTEL_82865_IG	0x2572
 #define PCI_DEVICE_ID_INTEL_82875_HB	0x2578
 #define PCI_DEVICE_ID_INTEL_82915G_HB	0x2580
+#define PCI_DEVICE_ID_INTEL_82915_HB	0x2581
 #define PCI_DEVICE_ID_INTEL_82915G_IG	0x2582
+#define PCI_DEVICE_ID_INTEL_E7221_HB	0x2588
 #define PCI_DEVICE_ID_INTEL_82915GM_HB	0x2590
 #define PCI_DEVICE_ID_INTEL_82915GM_IG	0x2592
+#define PCI_DEVICE_ID_INTEL_5000P_HB	0x25C8
+#define PCI_DEVICE_ID_INTEL_5000Z_HB	0x25D0
+#define PCI_DEVICE_ID_INTEL_5000V_HB	0x25D4
+#define PCI_DEVICE_ID_INTEL_E8500_HB	0x2600
 #define PCI_DEVICE_ID_INTEL_82945G_HB	0x2770
 #define PCI_DEVICE_ID_INTEL_82945G_IG	0x2772
+#define PCI_DEVICE_ID_INTEL_82955X_HB	0x2774
+#define PCI_DEVICE_ID_INTEL_3000_HB	0x2778
+#define PCI_DEVICE_ID_INTEL_82975X_HB	0x277C
 #define PCI_DEVICE_ID_INTEL_82945GM_HB	0x27A0
 #define PCI_DEVICE_ID_INTEL_82945GM_IG	0x27A2
 #define PCI_DEVICE_ID_INTEL_ICH6_0	0x2640
@@ -2272,6 +2281,7 @@
 #define PCI_DEVICE_ID_INTEL_ICH9_4	0x2914
 #define PCI_DEVICE_ID_INTEL_ICH9_5	0x2915
 #define PCI_DEVICE_ID_INTEL_ICH9_6	0x2930
+#define PCI_DEVICE_ID_INTEL_82946_HB	0x2990
 #define PCI_DEVICE_ID_INTEL_82855PM_HB	0x3340
 #define PCI_DEVICE_ID_INTEL_82830_HB	0x3575
 #define PCI_DEVICE_ID_INTEL_82830_CGC	0x3577
-- 
1.5.1.1.181.g2de0



* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  4:19                                         ` [PATCH 1/2] msi: Invert the sense of the MSI enables Eric W. Biederman
  2007-05-25  4:26                                           ` [PATCH 2/2] msi: Add support for the Intel chipsets that support MSI Eric W. Biederman
@ 2007-05-25  4:31                                           ` Andrew Morton
  2007-05-25  5:20                                             ` Eric W. Biederman
                                                               ` (2 more replies)
  2007-05-25  5:14                                           ` Michael Ellerman
                                                             ` (2 subsequent siblings)
  4 siblings, 3 replies; 30+ messages in thread
From: Andrew Morton @ 2007-05-25  4:31 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Greg Kroah-Hartman, Jay Cliburn, Grzegorz Krzystek, Andi Kleen,
	ninex, linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck, Linus Torvalds

On Thu, 24 May 2007 22:19:09 -0600 ebiederm@xmission.com (Eric W. Biederman) wrote:

> Currently we blacklist known bad MSI configurations, which means we
> keep getting MSI enabled on chipsets that either do not support MSI
> or implement it improperly.  Since the normal IRQ routing
> mechanism seems to work even when MSI does not, this is a bad default
> and causes non-functioning systems for no good reason.
> 
> So this patch inverts the sense of the MSI bus flag to enable
> MSI only on known good systems.  I am seeding that list with the set of
> chipsets that have an enabled hypertransport MSI mapping capability,
> which is as close as I can come to a generic MSI enable.  So for actually
> using MSI this patch is a regression, but for just having MSI enabled
> in the kernel by default things should just work with this patch
> applied.
> 
> People can still enable MSI at a per-bus level for testing by writing
> to sysfs, so discovering chipsets that actually work (assuming we are
> using modular drivers) should be pretty straightforward.

Yup.

Do we have a feel for how much performance we're losing on those
systems which _could_ do MSI, but which will end up defaulting
to not using it?


* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  4:19                                         ` [PATCH 1/2] msi: Invert the sense of the MSI enables Eric W. Biederman
  2007-05-25  4:26                                           ` [PATCH 2/2] msi: Add support for the Intel chipsets that support MSI Eric W. Biederman
  2007-05-25  4:31                                           ` [PATCH 1/2] msi: Invert the sense of the MSI enables Andrew Morton
@ 2007-05-25  5:14                                           ` Michael Ellerman
  2007-05-25  5:59                                             ` Eric W. Biederman
  2007-05-25  6:40                                             ` David Miller
  2007-05-25  5:20                                           ` Greg KH
  2007-05-25 21:47                                           ` Brice Goglin
  4 siblings, 2 replies; 30+ messages in thread
From: Michael Ellerman @ 2007-05-25  5:14 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Greg Kroah-Hartman, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	Andi Kleen, ninex, linux-kernel, linux-pci, David Miller,
	Tony Luck


On Thu, 2007-05-24 at 22:19 -0600, Eric W. Biederman wrote:
> Currently we blacklist known bad MSI configurations, which means we
> keep getting MSI enabled on chipsets that either do not support MSI
> or implement it improperly.  Since the normal IRQ routing
> mechanism seems to work even when MSI does not, this is a bad default
> and causes non-functioning systems for no good reason.
> 
> So this patch inverts the sense of the MSI bus flag to enable
> MSI only on known good systems.  I am seeding that list with the set of
> chipsets that have an enabled hypertransport MSI mapping capability,
> which is as close as I can come to a generic MSI enable.  So for actually
> using MSI this patch is a regression, but for just having MSI enabled
> in the kernel by default things should just work with this patch
> applied.

I guess this is a good idea for random x86 machines. On powerpc I think
we'll just turn it on for every bus, and let the existing per-platform
logic decide.

cheers

-- 
Michael Ellerman
OzLabs, IBM Australia Development Lab

wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)

We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person



* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  4:19                                         ` [PATCH 1/2] msi: Invert the sense of the MSI enables Eric W. Biederman
                                                             ` (2 preceding siblings ...)
  2007-05-25  5:14                                           ` Michael Ellerman
@ 2007-05-25  5:20                                           ` Greg KH
  2007-05-25  5:57                                             ` Eric W. Biederman
  2007-05-25 15:17                                             ` Eric W. Biederman
  2007-05-25 21:47                                           ` Brice Goglin
  4 siblings, 2 replies; 30+ messages in thread
From: Greg KH @ 2007-05-25  5:20 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Jay Cliburn, Grzegorz Krzystek, Andrew Morton, Andi Kleen, ninex,
	linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck

On Thu, May 24, 2007 at 10:19:09PM -0600, Eric W. Biederman wrote:
> 
> Currently we blacklist known bad MSI configurations, which means we
> keep getting MSI enabled on chipsets that either do not support MSI
> or implement it improperly.  Since the normal IRQ routing
> mechanism seems to work even when MSI does not, this is a bad default
> and causes non-functioning systems for no good reason.
> 
> So this patch inverts the sense of the MSI bus flag to enable
> MSI only on known good systems.  I am seeding that list with the set of
> chipsets that have an enabled hypertransport MSI mapping capability,
> which is as close as I can come to a generic MSI enable.  So for actually
> using MSI this patch is a regression, but for just having MSI enabled
> in the kernel by default things should just work with this patch
> applied.
> 
> People can still enable MSI at a per-bus level for testing by writing
> to sysfs, so discovering chipsets that actually work (assuming we are
> using modular drivers) should be pretty straightforward.

Originally I would have thought this would be a good idea, but now that
Vista, which supports MSI, is out, I don't think we are going to need
this in the future.  All new chipsets should support MSI fine, so this
table will only grow in the future, while the blacklist should not need
to have many new entries added to it.

So I don't think this is a good idea, sorry.

greg k-h


* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  4:31                                           ` [PATCH 1/2] msi: Invert the sense of the MSI enables Andrew Morton
@ 2007-05-25  5:20                                             ` Eric W. Biederman
  2007-05-25  5:44                                             ` Grant Grundler
  2007-05-25  5:51                                             ` Andi Kleen
  2 siblings, 0 replies; 30+ messages in thread
From: Eric W. Biederman @ 2007-05-25  5:20 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Greg Kroah-Hartman, Jay Cliburn, Grzegorz Krzystek, Andi Kleen,
	ninex, linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck, Linus Torvalds, Roland Dreier

Andrew Morton <akpm@linux-foundation.org> writes:

> On Thu, 24 May 2007 22:19:09 -0600 ebiederm@xmission.com (Eric W. Biederman)
> wrote:
>
>> Currently we blacklist known bad MSI configurations, which means we
>> keep getting MSI enabled on chipsets that either do not support MSI
>> or implement it improperly.  Since the normal IRQ routing
>> mechanism seems to work even when MSI does not, this is a bad default
>> and causes non-functioning systems for no good reason.
>> 
>> So this patch inverts the sense of the MSI bus flag to enable
>> MSI only on known good systems.  I am seeding that list with the set of
>> chipsets that have an enabled hypertransport MSI mapping capability,
>> which is as close as I can come to a generic MSI enable.  So for actually
>> using MSI this patch is a regression, but for just having MSI enabled
>> in the kernel by default things should just work with this patch
>> applied.
>> 
>> People can still enable MSI at a per-bus level for testing by writing
>> to sysfs, so discovering chipsets that actually work (assuming we are
>> using modular drivers) should be pretty straightforward.
>
> Yup.
>
> Do we have a feel for how much performance we're losing on those
> systems which _could_ do MSI, but which will end up defaulting
> to not using it?

I don't have any good numbers, although it is enough that you can
measure the difference.  I think Roland Dreier and the other IB
guys have actually made the measurements.  For low-end hardware
I expect the performance difference is currently in the noise.

However, because MSI irqs are not shared, a lot of things are
simplified.  In particular it means that you should be able to have an
irq fastpath that does not read the hardware at all; hardware
register reads can be comparatively very slow.

Further, because MSI irqs are never shared and are edge triggered, we
don't run the danger of screaming irqs or of drivers not handling a
shared irq properly.

The big performance win is most likely to be for fast I/O devices
where the multiple IRQs per card will allow the irqs for different
flows of data to be directed to different cpus.

So for the short term I don't think there is much downside in having
MSI disabled.  For the long term I think MSI will be quite useful.

Further, my gut feel is that my two patches together will enable MSI on
most of the x86 chipsets where it currently works, because most
chipsets are either from Intel or are hypertransport based.

Eric


* Re: [PATCH 2/2] msi:  Add support for the Intel chipsets that support MSI.
  2007-05-25  4:26                                           ` [PATCH 2/2] msi: Add support for the Intel chipsets that support MSI Eric W. Biederman
@ 2007-05-25  5:38                                             ` Andi Kleen
  2007-05-25  6:10                                               ` Eric W. Biederman
  0 siblings, 1 reply; 30+ messages in thread
From: Andi Kleen @ 2007-05-25  5:38 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Greg Kroah-Hartman, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	ninex, linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck, len.brown

On Friday 25 May 2007 06:26:50 Eric W. Biederman wrote:
> 
> This patch is the result of a quick survey of the Intel chipset
> documents.  I took a quick look in each document to see if the chipset
> supported MSI, and if so I looked through to find the vendor and device
> id of device 0 function 0 of the chipset and added a quirk for that
> device id if it was not a duplicate.

It would be better to look for any PCI bridge.  Sometimes there are
different PCI bridges around (e.g. external PCI-X bridges on HT systems)
which might need their own quirks.

Also, in the x86 world Microsoft defined an ACPI FADT flag, for Vista,
that indicates MSI doesn't work.  It might make sense to do

if (dmi year >= 2007 && FADT.msi_disable not set) assume it works

> This patch should be safe.  The anecdotal evidence is that when dealing
> with MSI the Intel chipsets just work.  If we find some buggy ones
> changing the list won't be hard.

The FADT bit should probably be checked anyway.

-Andi

 


* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  4:31                                           ` [PATCH 1/2] msi: Invert the sense of the MSI enables Andrew Morton
  2007-05-25  5:20                                             ` Eric W. Biederman
@ 2007-05-25  5:44                                             ` Grant Grundler
  2007-05-25  5:51                                             ` Andi Kleen
  2 siblings, 0 replies; 30+ messages in thread
From: Grant Grundler @ 2007-05-25  5:44 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Eric W. Biederman, Greg Kroah-Hartman, Jay Cliburn,
	Grzegorz Krzystek, Andi Kleen, ninex, linux-kernel, linux-pci,
	Michael Ellerman, David Miller, Tony Luck, Linus Torvalds

On Thu, May 24, 2007 at 09:31:57PM -0700, Andrew Morton wrote:
> On Thu, 24 May 2007 22:19:09 -0600 ebiederm@xmission.com (Eric W. Biederman) wrote:
> 
> > Currently we blacklist known bad MSI configurations, which means we
> > keep getting MSI enabled on chipsets that either do not support MSI
> > or implement it improperly.  Since the normal IRQ routing
> > mechanism seems to work even when MSI does not, this is a bad default
> > and causes non-functioning systems for no good reason.
...
> Yup.
> 
> Do we have a feel for how much performance we're losing on those
> systems which _could_ do MSI, but which will end up defaulting
> to not using it?

Rick Jones (HP, aka Mr Netperf.org) just recently posted some data
that happened to include this comparison.  I've clipped out the two
relevant lines below:

http://lists.openfabrics.org/pipermail/general/2007-May/035709.html


                              Bulk Transfer                  "Latency"
                          Unidir            Bidir
     Card          Mbit/s SDx   SDr   Mbit/s SDx   SDr   Tran/s SDx   SDr
---------------------------------------------------------------------------
  Myri10G IP 9k     9320  0.862 0.949 10950  1.00  0.86   19260 19.67 16.18 *
  Myri10G IP 9k msi 9320  0.449 0.672 10840  0.63  0.62   19430 11.68 11.56

The original posting explains the fields.
SDx (Service Demand on Transmit) is 2x higher with MSI disabled.
SDr (Service Demand on RX) is ~50% higher with MSI disabled.
Ditto for the latency metrics.

ISTR seeing a ~5-10% difference on tg3 NICs and ~20% with PCI-X
infiniband (all on the HP ZX1 chip; the bottleneck was the PCI-X bus).
When I posted a tg3 patch to linux-net (which got rejected because of
tg3 HW bugs), I did NOT include any performance numbers like I thought
I did. :(

hth,
grant


* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  4:31                                           ` [PATCH 1/2] msi: Invert the sense of the MSI enables Andrew Morton
  2007-05-25  5:20                                             ` Eric W. Biederman
  2007-05-25  5:44                                             ` Grant Grundler
@ 2007-05-25  5:51                                             ` Andi Kleen
  2007-05-25 20:16                                               ` Jonathan Lundell
  2 siblings, 1 reply; 30+ messages in thread
From: Andi Kleen @ 2007-05-25  5:51 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Eric W. Biederman, Greg Kroah-Hartman, Jay Cliburn,
	Grzegorz Krzystek, ninex, linux-kernel, linux-pci,
	Michael Ellerman, David Miller, Tony Luck, Linus Torvalds


> Do we have a feel for how much performance we're losing on those
> systems which _could_ do MSI, but which will end up defaulting
> to not using it?

At least on 10GB ethernet it is a significant difference; you usually
cannot go anywhere near line speed without MSI.

I suspect it is visible on high performance / multiple GB NICs too.

-Andi

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  5:20                                           ` Greg KH
@ 2007-05-25  5:57                                             ` Eric W. Biederman
  2007-05-25 15:17                                             ` Eric W. Biederman
  1 sibling, 0 replies; 30+ messages in thread
From: Eric W. Biederman @ 2007-05-25  5:57 UTC (permalink / raw)
  To: Greg KH
  Cc: Jay Cliburn, Grzegorz Krzystek, Andrew Morton, Andi Kleen, ninex,
	linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck

Greg KH <gregkh@suse.de> writes:

> On Thu, May 24, 2007 at 10:19:09PM -0600, Eric W. Biederman wrote:
>> 
>> Currently we blacklist known bad msi configurations which means we
>> keep getting MSI enabled on chipsets that either do not support MSI,
>> or MSI is implemented improperly.  Since the normal IRQ routing
>> mechanism seems to work even when MSI does not, this is a bad default
>> and causes non-functioning systems for no good reason.
>> 
>> So this patch inverts the sense of the MSI bus flag to only enable
>> MSI on known good systems.  I am seeding that list with the set of
>> chipsets with an enabled hypertransport MSI mapping capability.  Which
>> is as close as I can come to a generic MSI enable.  So for actually
>> using MSI this patch is a regression, but for just having MSI enabled
>> in the kernel by default things should just work with this patch
>> applied.
>> 
>> People can still enable MSI on a per bus level for testing by writing
>> to sysfs so discovering chipsets that actually work (assuming we are
>> using modular drivers) should be pretty straight forward.
>
> Originally I would have thought this would be a good idea, but now that
> Vista is out, which supports MSI, I don't think we are going to need
> this in the future.  All new chipsets should support MSI fine and this
> table will only grow in the future, while the blacklist should not need
> to have many new entries added to it.
>
> So I don't think this is a good idea, sorry.

For me seeing is believing.  I don't see that yet.

What I see is myself pointed at a growing pile of bug reports
of chipsets where MSI does not work.

What I see is a steadily growing black list that looks like it should
include every non-Intel x86 pci-express chipset.  Despite the fact
that it requires skill to show a chipset does not support MSI
properly and to generate the patch, and that MSI I/O devices are
still fairly rare.

Meanwhile I have written in a day something that does nearly the
equivalent of our current black list.  A white list looks
much easier to maintain.

Further, if we want to invert the list for a given vendor that we trust,
I don't think it would be hard to write an inverse quirk that enables
MSI on chipsets that are new enough, for some definition of new enough.
In particular it probably makes sense to do that for Intel pci-express
chipsets.


In part I have a problem with the current black list because a number
of the entries appear to be flat out hacks.

When you add to this the fact that MSI can be implemented on simple
pci devices, and that those pci devices can potentially be plugged into
old machines (say someone buys a new pci card as an upgrade), I
don't see how we can successfully set up a black list of every
historical chipset that supports pci.

Further there are a number of outstanding bugs that are there
simply because msi is currently enabled by default on chipsets
that don't support it.


So I see the current situation as a maintenance disaster.  People are
not stepping up to the plate fast enough to grow the black list to the
set of chipsets and motherboards that don't implement MSI, don't
implement MSI correctly, or only sometimes implement MSI correctly.

So Greg, if you want to volunteer to comb through all of the existing
pci express chipsets, figure out what needs to be tested to see
whether they are set up correctly, black list them if they are not
set up correctly, and unconditionally black list them if we don't
support MSI on them, more power to you.

Right now I want code that works, and this is the best path I can
see to that.

Eric

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  5:14                                           ` Michael Ellerman
@ 2007-05-25  5:59                                             ` Eric W. Biederman
  2007-05-25  6:40                                             ` David Miller
  1 sibling, 0 replies; 30+ messages in thread
From: Eric W. Biederman @ 2007-05-25  5:59 UTC (permalink / raw)
  To: michael
  Cc: Greg Kroah-Hartman, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	Andi Kleen, ninex, linux-kernel, linux-pci, David Miller,
	Tony Luck

Michael Ellerman <michael@ellerman.id.au> writes:

> On Thu, 2007-05-24 at 22:19 -0600, Eric W. Biederman wrote:
>> Currently we blacklist known bad msi configurations which means we
>> keep getting MSI enabled on chipsets that either do not support MSI,
>> or MSI is implemented improperly.  Since the normal IRQ routing
>> mechanism seems to work even when MSI does not, this is a bad default
>> and causes non-functioning systems for no good reason.
>> 
>> So this patch inverts the sense of the MSI bus flag to only enable
>> MSI on known good systems.  I am seeding that list with the set of
>> chipsets with an enabled hypertransport MSI mapping capability.  Which
>> is as close as I can come to a generic MSI enable.  So for actually
>> using MSI this patch is a regression, but for just having MSI enabled
>> in the kernel by default things should just work with this patch
>> applied.
>
> I guess this is a good idea for random x86 machines. On powerpc I think
> we'll just turn it on for every bus, and let the existing per-platform
> logic decide.

Yep.  That is pretty much what I expected.

Since you already have to detect how to implement the MSI methods
you need a separate white list anyway.

Just a side note.  This only needs to be enabled for pci root
busses.

Eric

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 2/2] msi:  Add support for the Intel chipsets that support MSI.
  2007-05-25  5:38                                             ` Andi Kleen
@ 2007-05-25  6:10                                               ` Eric W. Biederman
  2007-05-25 14:42                                                 ` Chuck Ebbert
  0 siblings, 1 reply; 30+ messages in thread
From: Eric W. Biederman @ 2007-05-25  6:10 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Greg Kroah-Hartman, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	ninex, linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck, len.brown

Andi Kleen <ak@suse.de> writes:

> On Friday 25 May 2007 06:26:50 Eric W. Biederman wrote:
>> 
>> This patch is the result of a quick survey of the Intel chipset
>> documents.  I took a quick look in the document to see if the chipset
>> supported MSI and if so I looked through to find the vendor and device
>> id of device 0 function 0 of the chipset and added a quirk for that
>> device id if it was not a duplicate. 
>
> It would be better to look for any PCI bridge. Sometimes there are
> different PCI bridges around (e.g. external PCI-X bridges on HT systems) 
> which might need their own quirks

Maybe, but pci-pci bridges should have no problems with MSI traffic,
because at that level MSI is just a normal DMA write going upstream.

The AMD 8131 hypertransport to PCI-X bridge only failed because at the
connection to hypertransport it did not implement the hypertransport
msi mapping capability and the associated logic to convert an MSI
irq into a hypertransport IRQ.

I currently have a quirk that looks for any hypertransport msi mapping
capability and only enables MSI on the downstream bus of the bridge.

> Also in the x86 world Microsoft defined a FADT ACPI flag telling Vista
> that MSI doesn't work. It might make sense to do
>
> if (dmi year >= 2007 && FADT.msi_disable not set) assume it works

Sounds reasonable to me.  If it happens to work reliably.

>> This patch should be safe.  The anecdotal evidence is that when dealing
>> with MSI the Intel chipsets just work.  If we find some buggy ones
>> changing the list won't be hard.
>
> The FADT bit should probably be checked anyway.

Sure, if we have a way to check I have no problem, although I tend to
trust the hardware more.

Eric

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  5:14                                           ` Michael Ellerman
  2007-05-25  5:59                                             ` Eric W. Biederman
@ 2007-05-25  6:40                                             ` David Miller
  1 sibling, 0 replies; 30+ messages in thread
From: David Miller @ 2007-05-25  6:40 UTC (permalink / raw)
  To: michael
  Cc: ebiederm, gregkh, jacliburn, ninex, akpm, ak, ninex, linux-kernel,
	linux-pci, tony.luck

From: Michael Ellerman <michael@ellerman.id.au>
Date: Fri, 25 May 2007 15:14:10 +1000

> On Thu, 2007-05-24 at 22:19 -0600, Eric W. Biederman wrote:
> > Currently we blacklist known bad msi configurations which means we
> > keep getting MSI enabled on chipsets that either do not support MSI,
> > or MSI is implemented improperly.  Since the normal IRQ routing
> > mechanism seems to work even when MSI does not, this is a bad default
> > and causes non-functioning systems for no good reason.
> > 
> > So this patch inverts the sense of the MSI bus flag to only enable
> > MSI on known good systems.  I am seeding that list with the set of
> > chipsets with an enabled hypertransport MSI mapping capability.  Which
> > is as close as I can come to a generic MSI enable.  So for actually
> > using MSI this patch is a regression, but for just having MSI enabled
> > in the kernel by default things should just work with this patch
> > applied.
> 
> I guess this is a good idea for random x86 machines. On powerpc I think
> we'll just turn it on for every bus, and let the existing per-platform
> logic decide.

I think I'll turn it on always on sparc64.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 2/2] msi:  Add support for the Intel chipsets that support MSI.
  2007-05-25  6:10                                               ` Eric W. Biederman
@ 2007-05-25 14:42                                                 ` Chuck Ebbert
  2007-05-25 16:52                                                   ` Eric W. Biederman
  0 siblings, 1 reply; 30+ messages in thread
From: Chuck Ebbert @ 2007-05-25 14:42 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Andi Kleen, Greg Kroah-Hartman, Jay Cliburn, Grzegorz Krzystek,
	Andrew Morton, ninex, linux-kernel, linux-pci, Michael Ellerman,
	David Miller, Tony Luck, len.brown

On 05/25/2007 02:10 AM, Eric W. Biederman wrote:
>>> This patch should be safe.  The anecdotal evidence is that when dealing
>>> with MSI the Intel chipsets just work.  If we find some buggy ones
>>> changing the list won't be hard.
>> The FADT bit should be probably checked anyways.
> 
> Sure if we have a way to check I have no problem, although I tend to
> trust the hardware more.
> 

Already in 2.6.22:

http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=f8993aff8b4de0317c6e081802ca5c86c449fef2
Commit:     f8993aff8b4de0317c6e081802ca5c86c449fef2
Parent:     a23cf14b161b8deeb0f701d577a0e8be6365e247
Author:     Shaohua Li <shaohua.li@intel.com>
AuthorDate: Wed Apr 25 11:05:12 2007 +0800
Committer:  Len Brown <len.brown@intel.com>
CommitDate: Wed Apr 25 01:13:47 2007 -0400

    ACPI: Disable MSI on request of FADT

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  5:20                                           ` Greg KH
  2007-05-25  5:57                                             ` Eric W. Biederman
@ 2007-05-25 15:17                                             ` Eric W. Biederman
  2007-05-25 15:28                                               ` Chuck Ebbert
                                                                 ` (2 more replies)
  1 sibling, 3 replies; 30+ messages in thread
From: Eric W. Biederman @ 2007-05-25 15:17 UTC (permalink / raw)
  To: Greg KH
  Cc: Jay Cliburn, Grzegorz Krzystek, Andrew Morton, Andi Kleen, ninex,
	linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck

Greg KH <gregkh@suse.de> writes:

> Originally I would have thought this would be a good idea, but now that
> Vista is out, which supports MSI, I don't think we are going to need
> this in the future.  All new chipsets should support MSI fine and this
> table will only grow in the future, while the blacklist should not need
> to have many new entries added to it.
>
> So I don't think this is a good idea, sorry.

- The current situation is broken.
- Hardware that is in spec does not require MSI to generate interrupts,
  which leaves enabling MSI optional.

Do you have a better idea to solve the current brokenness?

MSI appears to have enough problems that enabling it in a kernel
that is supposed to run lots of different hardware (like a distro
kernel) is a recipe for disaster.

Eric

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 15:17                                             ` Eric W. Biederman
@ 2007-05-25 15:28                                               ` Chuck Ebbert
  2007-05-25 15:40                                               ` Roland Dreier
  2007-05-25 20:35                                               ` Greg KH
  2 siblings, 0 replies; 30+ messages in thread
From: Chuck Ebbert @ 2007-05-25 15:28 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Greg KH, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	Andi Kleen, ninex, linux-kernel, linux-pci, Michael Ellerman,
	David Miller, Tony Luck

On 05/25/2007 11:17 AM, Eric W. Biederman wrote:
> 
> MSI appears to have enough problems that enabling it in a kernel
> that is supposed to run lots of different hardware (like a distro
> kernel) is a recipe for disaster.

Ubuntu and Fedora have disabled it and added a "pci=msi" option
to enable it if required.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 15:17                                             ` Eric W. Biederman
  2007-05-25 15:28                                               ` Chuck Ebbert
@ 2007-05-25 15:40                                               ` Roland Dreier
  2007-05-25 15:42                                                 ` Roland Dreier
  2007-05-25 20:35                                               ` Greg KH
  2 siblings, 1 reply; 30+ messages in thread
From: Roland Dreier @ 2007-05-25 15:40 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Greg KH, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	Andi Kleen, ninex, linux-kernel, linux-pci, Michael Ellerman,
	David Miller, Tony Luck

 > - In spec hardware does not require MSI to generate interrupts
 >   Which leaves enabling MSI optional.

Actually at least the Qlogic/Pathscale PCI Express ipath adapters
cannot generate INTx interrupts -- they definitely do require MSI to
operate.

 - R.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 15:40                                               ` Roland Dreier
@ 2007-05-25 15:42                                                 ` Roland Dreier
  2007-05-25 16:10                                                   ` Eric W. Biederman
  0 siblings, 1 reply; 30+ messages in thread
From: Roland Dreier @ 2007-05-25 15:42 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Greg KH, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	Andi Kleen, ninex, linux-kernel, linux-pci, Michael Ellerman,
	David Miller, Tony Luck

 >  > - In spec hardware does not require MSI to generate interrupts
 >  >   Which leaves enabling MSI optional.
 > 
 > Actually at least the Qlogic/Pathscale PCI Express ipath adapters
 > cannot generate INTx interrupts -- they definitely do require MSI to
 > operate.

Oh yeah... when I first found out about this, I rechecked the PCI
Express spec and found that in fact legacy INTx interrupts are
optional.  So the ipath adapters that require MSI do conform to the
spec.

 - R.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 15:42                                                 ` Roland Dreier
@ 2007-05-25 16:10                                                   ` Eric W. Biederman
  2007-05-25 20:09                                                     ` David Schwartz
  2007-05-25 20:25                                                     ` Roland Dreier
  0 siblings, 2 replies; 30+ messages in thread
From: Eric W. Biederman @ 2007-05-25 16:10 UTC (permalink / raw)
  To: Roland Dreier
  Cc: Greg KH, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	Andi Kleen, ninex, linux-kernel, linux-pci, Michael Ellerman,
	David Miller, Tony Luck

Roland Dreier <rdreier@cisco.com> writes:

>  >  > - In spec hardware does not require MSI to generate interrupts
>  >  >   Which leaves enabling MSI optional.
>  > 
>  > Actually at least the Qlogic/Pathscale PCI Express ipath adapters
>  > cannot generate INTx interrupts -- they definitely do require MSI to
>  > operate.
>
> Oh yeah... when I first found out about this, I rechecked the PCI
> Express spec and found that in fact legacy INTx interrupts are
> optional.  So the ipath adapters that require MSI do conform to the
> spec.

Hmm...

I find in section 6.1:
> In addition to PCI INTx compatible interrupt emulation, PCI Express
> requires support of MSI or MSI-X or both. 
Which suggests that INTx support is required.

I do not find any wording that suggests the opposite.
I do see it stated that the intent is to EOL support for INTx at
some point.

Where did you see it mentioned that INTx was optional?

I do see it clearly stating that MSI is the preferred mechanism for
pci express.

Eric

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 2/2] msi:  Add support for the Intel chipsets that support MSI.
  2007-05-25 14:42                                                 ` Chuck Ebbert
@ 2007-05-25 16:52                                                   ` Eric W. Biederman
  0 siblings, 0 replies; 30+ messages in thread
From: Eric W. Biederman @ 2007-05-25 16:52 UTC (permalink / raw)
  To: Chuck Ebbert
  Cc: Andi Kleen, Greg Kroah-Hartman, Jay Cliburn, Grzegorz Krzystek,
	Andrew Morton, ninex, linux-kernel, linux-pci, Michael Ellerman,
	David Miller, Tony Luck, len.brown

Chuck Ebbert <cebbert@redhat.com> writes:

> http://git.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=f8993aff8b4de0317c6e081802ca5c86c449fef2
> Commit:     f8993aff8b4de0317c6e081802ca5c86c449fef2
> Parent:     a23cf14b161b8deeb0f701d577a0e8be6365e247
> Author:     Shaohua Li <shaohua.li@intel.com>
> AuthorDate: Wed Apr 25 11:05:12 2007 +0800
> Committer:  Len Brown <len.brown@intel.com>
> CommitDate: Wed Apr 25 01:13:47 2007 -0400
>
>     ACPI: Disable MSI on request of FADT

Interesting.  I'm going to have to read up on this bit.
Perhaps it is enough.

Eric

^ permalink raw reply	[flat|nested] 30+ messages in thread

* RE: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 16:10                                                   ` Eric W. Biederman
@ 2007-05-25 20:09                                                     ` David Schwartz
  2007-05-25 20:25                                                     ` Roland Dreier
  1 sibling, 0 replies; 30+ messages in thread
From: David Schwartz @ 2007-05-25 20:09 UTC (permalink / raw)
  To: linux-kernel@vger.kernel.org


> Hmm...

> I find in section 6.1:
> > In addition to PCI INTx compatible interrupt emulation, PCI Express
> > requires support of MSI or MSI-X or both.

> Which suggests that INTx support is required.

Unfortunately, this can equally well be read to suggest that MSI/MSI-X is
not required, but that MSI/MSI-X is required in addition if you choose to
support PCI INTx. Section 6.1 is ambiguous and you have to look elsewhere to
figure out whether PCI INTx is required or not.

I think the most natural reading is that if you choose to support PCI
INTx compatible interrupt emulation, PCI Express requires support of MSI
or MSI-X or both in addition.

However, other sections are not ambiguous:

"For legacy compatibility, PCI Express provides a PCI INTx emulation
mechanism to signal interrupts to the system interrupt controller
(typically part of the Root Complex).  This mechanism is compatible
with existing PCI software, and provides the same level and type of
service as the corresponding PCI interrupt signaling mechanism and is
independent of system interrupt controller specifics.  This legacy
compatibility mechanism allows boot device support without requiring
complex BIOS-level interrupt configuration/control service stacks.  It
virtualizes PCI physical interrupt signals by using an in-band
signaling mechanism."

This seems to make it pretty clear that if a device requires MSI/MSI-X, it
is broken. Devices are supposed to work even with PCI drivers that are not
smart enough to support MSI/MSI-X.

DS



^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  5:51                                             ` Andi Kleen
@ 2007-05-25 20:16                                               ` Jonathan Lundell
  2007-05-26  6:52                                                 ` Grant Grundler
  0 siblings, 1 reply; 30+ messages in thread
From: Jonathan Lundell @ 2007-05-25 20:16 UTC (permalink / raw)
  To: Andi Kleen
  Cc: Andrew Morton, Eric W. Biederman, Greg Kroah-Hartman, Jay Cliburn,
	Grzegorz Krzystek, ninex, linux-kernel, linux-pci,
	Michael Ellerman, David Miller, Tony Luck, Linus Torvalds

On May 24, 2007, at 10:51 PM, Andi Kleen wrote:

>> Do we have a feel for how much performance we're losing on those
>> systems which _could_ do MSI, but which will end up defaulting
>> to not using it?
>
> At least on 10GB ethernet it is a significant difference; you usually
> cannot go anywhere near line speed without MSI
>
> I suspect it is visible on high performance / multiple GB NICs too.

Why would that be? As the packet rate goes up and NAPI polling kicks  
in, wouldn't MSI make less and less difference?

I like the fact that MSI gives us finer control over CPU affinity  
than many INTx implementations, but that's a different issue.


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 16:10                                                   ` Eric W. Biederman
  2007-05-25 20:09                                                     ` David Schwartz
@ 2007-05-25 20:25                                                     ` Roland Dreier
  1 sibling, 0 replies; 30+ messages in thread
From: Roland Dreier @ 2007-05-25 20:25 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Greg KH, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	Andi Kleen, ninex, linux-kernel, linux-pci, Michael Ellerman,
	David Miller, Tony Luck

 > > In addition to PCI INTx compatible interrupt emulation, PCI Express
 > > requires support of MSI or MSI-X or both. 
 > Which suggests that INTx support is required.
 > 
 > I do not find any wording that suggest the opposite.
 > I do see it stated that it is intended to EOL support for INTx at
 > some point.
 > 
 > Where did you see it mentioned that INTx was optional?

I don't see any requirement that a device that generates MSI
interrupts must also be able to signal the same interrupts via INTx.
The spec explicitly says:

    "All PCI Express device Functions that are capable of generating
    interrupts must support MSI or MSI-X or both."

but there is no corresponding explicit requirement that legacy INTx
mode be supported, so it certainly seems permitted for a device not to
generate INTx interrupts.  In fact as you alluded to, the spec says,

    "The legacy INTx emulation mechanism may be deprecated in a future
    version of this specification."

and I wouldn't think the intention would be for one version of the
spec to *require* something that is planned on being deprecated later.

And the Pathscale guys were pretty confident that their device was
compliant with the PCIe spec.

 - R.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 15:17                                             ` Eric W. Biederman
  2007-05-25 15:28                                               ` Chuck Ebbert
  2007-05-25 15:40                                               ` Roland Dreier
@ 2007-05-25 20:35                                               ` Greg KH
  2007-05-25 21:06                                                 ` Eric W. Biederman
  2007-05-26  6:43                                                 ` Grant Grundler
  2 siblings, 2 replies; 30+ messages in thread
From: Greg KH @ 2007-05-25 20:35 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Jay Cliburn, Grzegorz Krzystek, Andrew Morton, Andi Kleen, ninex,
	linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck

On Fri, May 25, 2007 at 09:17:35AM -0600, Eric W. Biederman wrote:
> Greg KH <gregkh@suse.de> writes:
> 
> > Originally I would have thought this would be a good idea, but now that
> > Vista is out, which supports MSI, I don't think we are going to need
> > this in the future.  All new chipsets should support MSI fine and this
> > table will only grow in the future, while the blacklist should not need
> > to have many new entries added to it.
> >
> > So I don't think this is a good idea, sorry.
> 
> - The current situation is broken
> - In spec hardware does not require MSI to generate interrupts
>   Which leaves enabling MSI optional.
> 
> Do you have a better idea to solve the current brokenness?
> 
> MSI appears to have enough problems that enabling it in a kernel
> that is supposed to run lots of different hardware (like a distro
> kernel) is a recipe for disaster.

Oh, I agree it's a major pain in the ass at times...

But I'm real hesitant to change things this way.  We'll get reports of
people who used to have MSI working, and now it will not (like all AMD
chipsets).  That's a regression...

Perhaps we can trigger off of the same flag that Vista uses like Andi
suggested?  That's one way to "know" that the hardware works, right?

For non-x86 arches, they all seem to want to always enable MSI as they
don't have to deal with as many broken chipsets, if any at all.  So for
them, we'd have to just whitelist the whole arch.  Does that really make
sense?

And again, over time, like years, this list is going to grow way beyond
a manageable thing, especially as any new chipset that comes out in 2009
is going to have working MSI, right?  I think our blacklist is easier to
manage over time, while causing a problem for some users in trying to
determine whether the hardware they currently have is broken.

It's a trade off, and I'd like to choose the one that, over the long
term, causes the least amount of work and is the most maintainable.  I
think the current blacklist meets that goal.

thanks,

greg k-h

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 20:35                                               ` Greg KH
@ 2007-05-25 21:06                                                 ` Eric W. Biederman
  2007-05-25 21:17                                                   ` Roland Dreier
  2007-05-25 21:31                                                   ` Greg KH
  2007-05-26  6:43                                                 ` Grant Grundler
  1 sibling, 2 replies; 30+ messages in thread
From: Eric W. Biederman @ 2007-05-25 21:06 UTC (permalink / raw)
  To: Greg KH
  Cc: Jay Cliburn, Grzegorz Krzystek, Andrew Morton, Andi Kleen, ninex,
	linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck

Greg KH <gregkh@suse.de> writes:
>> MSI appears to have enough problems that enabling it in a kernel
>> that is supposed to run lots of different hardware (like a distro
>> kernel) is a recipe for disaster.
>
> Oh, I agree it's a major pain in the ass at times...
>
> But I'm real hesitant to change things this way.  We'll get reports of
> people who used to have MSI working, and now it will not (like all AMD
> chipsets).  That's a regression...

You saw my quirk that enabled MSI if you happen to have a
hypertransport msi mapping capability.  That should take care of
all hypertransport compliant AMD chipsets.

So at least 90% of what we have now should work.

> Perhaps we can trigger off of the same flag that Vista uses like Andi
> suggested?  That's one way to "know" that the hardware works, right?

A little.  It redefines the problem as squarely a BIOS writer's
problem, and possibly we want to do that in the presence of pci-express.

> For non-x86 arches, they all seem to want to always enable MSI as they
> don't have to deal with as many broken chipsets, if any at all.  So for
> them, we'd have to just whitelist the whole arch.  Does that really make
> sense?

Non-x86 (unless I'm confused) already has white lists or some
equivalent, because it can't use generic code for configuring MSI:
non-x86 does not have a standard MSI target window with a standard
meaning for the bits.  So non-x86 has its own set of arch specific
mechanisms to handle this, and they just don't want generic code
getting in the way.

So it may make sense to make this default x86 specific.

> And again, over time, like years, this list is going to grow way beyond
> a managable thing, especially as any new chipset that comes out in 2009
> is going to have working MSI, right?

I haven't a clue.  I know we are in the teething pain stage
now, which just generally makes things difficult.

> I think our blacklist is easier to
> manage over time, while causing a problem for some users in trying to
> determine their broken hardware that they currently have.

Possibly.  It just doesn't seem straightforward to add something
safely to our blacklist.

> It's a trade off, and I'd like to choose the one that over the long
> term, causes the least amount of work and maintainability.  I think the
> current blacklist meets that goal.

A reasonable goal.  I will come back to this after the long holiday
weekend here and see what makes sense.

I think for most of Intel I can reduce my test to:
if (bus == 0 && device == 0 && function == 0 && vendor == Intel &&
    has a pci express capability) {
	enable_msi_on_all_busses();
}

At which point I don't think we will need to do much maintenance.

Eric


^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 21:06                                                 ` Eric W. Biederman
@ 2007-05-25 21:17                                                   ` Roland Dreier
  2007-05-25 21:31                                                   ` Greg KH
  1 sibling, 0 replies; 30+ messages in thread
From: Roland Dreier @ 2007-05-25 21:17 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Greg KH, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	Andi Kleen, ninex, linux-kernel, linux-pci, Michael Ellerman,
	David Miller, Tony Luck

 > I think for most of Intel I can reduce my test to:
 > If (bus == 0 && device == 0 && function == 0 && vendor == Intel &&
 >     has a pci express capability) {
 > 	Enable msi on all busses().
 > }

MSI was working on every Intel PCI-X chipset I ever saw too...

 - R.


* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 21:06                                                 ` Eric W. Biederman
  2007-05-25 21:17                                                   ` Roland Dreier
@ 2007-05-25 21:31                                                   ` Greg KH
  1 sibling, 0 replies; 30+ messages in thread
From: Greg KH @ 2007-05-25 21:31 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Jay Cliburn, Grzegorz Krzystek, Andrew Morton, Andi Kleen, ninex,
	linux-kernel, linux-pci, Michael Ellerman, David Miller,
	Tony Luck

On Fri, May 25, 2007 at 03:06:22PM -0600, Eric W. Biederman wrote:
> Greg KH <gregkh@suse.de> writes:
> > It's a trade off, and I'd like to choose the one that over the long
> > term, causes the least amount of work and maintainability.  I think the
> > current blacklist meets that goal.
> 
> A reasonable goal.  I will come back to this after the long holiday
> weekend here and see what makes sense.

Ok, if after looking into it some more you still think it's needed,
please resend the patches and I'll reconsider.

thanks,

greg k-h


* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25  4:19                                         ` [PATCH 1/2] msi: Invert the sense of the MSI enables Eric W. Biederman
                                                             ` (3 preceding siblings ...)
  2007-05-25  5:20                                           ` Greg KH
@ 2007-05-25 21:47                                           ` Brice Goglin
  4 siblings, 0 replies; 30+ messages in thread
From: Brice Goglin @ 2007-05-25 21:47 UTC (permalink / raw)
  To: Eric W. Biederman
  Cc: Greg Kroah-Hartman, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	Andi Kleen, ninex, linux-kernel, linux-pci, Michael Ellerman,
	David Miller, Tony Luck

Eric W. Biederman wrote:
> @@ -1677,43 +1650,16 @@ static int __devinit msi_ht_cap_enabled(struct pci_dev *dev)
>  	return 0;
>  }
>  
> -/* Check the hypertransport MSI mapping to know whether MSI is enabled or not */
> +/* Enable MSI on hypertransport chipsets supporting MSI */
>  static void __devinit quirk_msi_ht_cap(struct pci_dev *dev)
>  {
> -	if (dev->subordinate && !msi_ht_cap_enabled(dev)) {
> -		printk(KERN_WARNING "PCI: MSI quirk detected. "
> -		       "MSI disabled on chipset %s.\n",
> -		       pci_name(dev));
> -		dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
> +	if (dev->subordinate && msi_ht_cap_enabled(dev)) {
> +		printk(KERN_INFO "PCI: Enabled MSI on chipset %s.\n",
> +			pci_name(dev));
> +		dev->subordinate->bus_flags |= PCI_BUS_FLAGS_MSI;
>  	}
>  }
> -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_SERVERWORKS, PCI_DEVICE_ID_SERVERWORKS_HT2000_PCIE,
> -			quirk_msi_ht_cap);
> +DECLARE_PCI_FIXUP_FINAL(PCI_ANY_ID, PCI_ANY_ID, quirk_msi_ht_cap);
>  
> -/* The nVidia CK804 chipset may have 2 HT MSI mappings.
> - * MSI are supported if the MSI capability set in any of these mappings.
> - */
> -static void __devinit quirk_nvidia_ck804_msi_ht_cap(struct pci_dev *dev)
> -{
> -	struct pci_dev *pdev;
> -
> -	if (!dev->subordinate)
> -		return;
>  
> -	/* check HT MSI cap on this chipset and the root one.
> -	 * a single one having MSI is enough to be sure that MSI are supported.
> -	 */
> -	pdev = pci_get_slot(dev->bus, 0);
> -	if (!pdev)
> -		return;
> -	if (!msi_ht_cap_enabled(dev) && !msi_ht_cap_enabled(pdev)) {
> -		printk(KERN_WARNING "PCI: MSI quirk detected. "
> -		       "MSI disabled on chipset %s.\n",
> -		       pci_name(dev));
> -		dev->subordinate->bus_flags |= PCI_BUS_FLAGS_NO_MSI;
> -	}
> -	pci_dev_put(pdev);
> -}
> -DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_CK804_PCIE,
> -			quirk_nvidia_ck804_msi_ht_cap);
>   

Are you sure that calling quirk_msi_ht_cap() on all devices will really
replace my nVidia CK804 specific quirk above?

I haven't looked at all this for a while, but if I remember correctly,
the PCI hierarchy with an AMD8131 and a CK804 looks like the following.

-+-[08]-+-0a.0-[09]--
 \-[00]-+-00.0
        +-0e.0-[02]--


The HT MSI mapping of the CK804 may be on either device 00.0 (10de:005e)
or 0e.0 (10de:005d). The devices that are physically behind the CK804
chipset are on bus 02.

To get MSI enabled for these devices, the MSI flag should be set either
on bus 00 (looks impossible here) or on bus 02 (if the HT MSI mapping is
found on 0e.0). However, if the MSI mapping is found on device 00.0, I
don't see how your code could enable MSI for the devices on bus 02.

Brice



* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 20:35                                               ` Greg KH
  2007-05-25 21:06                                                 ` Eric W. Biederman
@ 2007-05-26  6:43                                                 ` Grant Grundler
  1 sibling, 0 replies; 30+ messages in thread
From: Grant Grundler @ 2007-05-26  6:43 UTC (permalink / raw)
  To: Greg KH
  Cc: Eric W. Biederman, Jay Cliburn, Grzegorz Krzystek, Andrew Morton,
	Andi Kleen, ninex, linux-kernel, linux-pci, Michael Ellerman,
	David Miller, Tony Luck

On Fri, May 25, 2007 at 01:35:01PM -0700, Greg KH wrote:
...
> And again, over time, like years, this list is going to grow way beyond
> a manageable thing, especially as any new chipset that comes out in 2009
> is going to have working MSI, right?  I think our blacklist is easier to
> manage over time, while causing a problem for some users in trying to
> determine their broken hardware that they currently have.
> 
> It's a trade off, and I'd like to choose the one that over the long
> term, causes the least amount of work and maintainability.  I think the
> current blacklist meets that goal.

Before you change your mind again, I agree with the above.
I'm convinced MSI support will only get better and not worse.

The reason is that all high-speed interconnects (IB, 10GigE) and many
storage controllers already support MSI because it's more efficient and,
at the very least, will benchmark better.  Server vendors won't be able
to sell their boxes if MSI isn't working.

Desktop has never been as sensitive to IO performance and I don't
know what the situation currently is with laptop gige NICs and
gfx cards (gfx cards use MSI? I don't see it in drivers/video).

thanks,
grant

> thanks,
> 
> greg k-h


* Re: [PATCH 1/2] msi: Invert the sense of the MSI enables.
  2007-05-25 20:16                                               ` Jonathan Lundell
@ 2007-05-26  6:52                                                 ` Grant Grundler
  0 siblings, 0 replies; 30+ messages in thread
From: Grant Grundler @ 2007-05-26  6:52 UTC (permalink / raw)
  To: Jonathan Lundell
  Cc: Andi Kleen, Andrew Morton, Eric W. Biederman, Greg Kroah-Hartman,
	Jay Cliburn, Grzegorz Krzystek, ninex, linux-kernel, linux-pci,
	Michael Ellerman, David Miller, Tony Luck, Linus Torvalds

On Fri, May 25, 2007 at 01:16:57PM -0700, Jonathan Lundell wrote:
> On May 24, 2007, at 10:51 PM, Andi Kleen wrote:
> 
> >>Do we have a feel for how much performace we're losing on those
> >>systems which _could_ do MSI, but which will end up defaulting
> >>to not using it?
> >
> >At least on 10GB ethernet it is a significant difference; you usually
> >cannot go anywhere near line speed without MSI
> >
> >I suspect it is visible on high performance / multiple GB NICs too.
> 
> Why would that be? As the packet rate goes up and NAPI polling kicks  
> in, wouldn't MSI make less and less difference?

CPUs are so fast now that we never even get into polling mode.
So MSI makes even more of a difference.

davem and Jamal Hadi Salim were seeing, years ago already, workloads
where CPU utilization would peak at packet rates just high enough for
NAPI to occasionally be used.  IIRC, Jamal's OLS 2005 or 2006 paper
talks about this behavior.


> I like the fact that MSI gives us finer control over CPU affinity  
> than many INTx implementations, but that's a different issue.

Yes, I agree.

thanks,
grant


end of thread, other threads:[~2007-05-26  6:52 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
     [not found] <200705122146.l4CLkH6q012322@fire-2.osdl.org>
     [not found] ` <m1mz09p1t7.fsf@ebiederm.dsl.xmission.com>
     [not found]   ` <20070513014622.c5702928.akpm@linux-foundation.org>
     [not found]     ` <46470209.9000502@bellsouth.net>
     [not found]       ` <46470515.50000@NineX.eu.org>
     [not found]         ` <464707F7.6080600@bellsouth.net>
     [not found]           ` <m1irawpwy2.fsf@ebiederm.dsl.xmission.com>
     [not found]             ` <20070513204407.7ba35010@osprey.hogchain.net>
     [not found]               ` <4647FA38.3090108@NineX.eu.org>
     [not found]                 ` <46480EA5.40400@NineX.eu.org>
     [not found]                   ` <20070514053406.478bf93f@osprey.hogchain.net>
     [not found]                     ` <m1d513oddj.fsf@ebiederm.dsl.xmission.com>
     [not found]                       ` <20070514093829.377e04bc@osprey.hogchain.net>
     [not found]                         ` <m18xbrnyxi.fsf@ebiederm.dsl.xmission.com>
     [not found]                           ` <20070514160005.627435e3@osprey.hogchain.net>
     [not found]                             ` <m1odkmmft4.fsf@ebiederm.dsl.xmission.com>
     [not found]                               ` <20070515212200.517fcba2@osprey.hogchain.net>
     [not found]                                 ` <m14pmcmz8r.fsf@ebiederm.dsl.xmission.com>
     [not found]                                   ` <20070516185225.3f3ac082@osprey.hogchain.net>
     [not found]                                     ` <m1sl9wl28m.fsf@ebiederm.dsl.xmission.com>
     [not found]                                       ` <20070522204103.134bf5a2@osprey.hogchain.net>
2007-05-25  4:19                                         ` [PATCH 1/2] msi: Invert the sense of the MSI enables Eric W. Biederman
2007-05-25  4:26                                           ` [PATCH 2/2] msi: Add support for the Intel chipsets that support MSI Eric W. Biederman
2007-05-25  5:38                                             ` Andi Kleen
2007-05-25  6:10                                               ` Eric W. Biederman
2007-05-25 14:42                                                 ` Chuck Ebbert
2007-05-25 16:52                                                   ` Eric W. Biederman
2007-05-25  4:31                                           ` [PATCH 1/2] msi: Invert the sense of the MSI enables Andrew Morton
2007-05-25  5:20                                             ` Eric W. Biederman
2007-05-25  5:44                                             ` Grant Grundler
2007-05-25  5:51                                             ` Andi Kleen
2007-05-25 20:16                                               ` Jonathan Lundell
2007-05-26  6:52                                                 ` Grant Grundler
2007-05-25  5:14                                           ` Michael Ellerman
2007-05-25  5:59                                             ` Eric W. Biederman
2007-05-25  6:40                                             ` David Miller
2007-05-25  5:20                                           ` Greg KH
2007-05-25  5:57                                             ` Eric W. Biederman
2007-05-25 15:17                                             ` Eric W. Biederman
2007-05-25 15:28                                               ` Chuck Ebbert
2007-05-25 15:40                                               ` Roland Dreier
2007-05-25 15:42                                                 ` Roland Dreier
2007-05-25 16:10                                                   ` Eric W. Biederman
2007-05-25 20:09                                                     ` David Schwartz
2007-05-25 20:25                                                     ` Roland Dreier
2007-05-25 20:35                                               ` Greg KH
2007-05-25 21:06                                                 ` Eric W. Biederman
2007-05-25 21:17                                                   ` Roland Dreier
2007-05-25 21:31                                                   ` Greg KH
2007-05-26  6:43                                                 ` Grant Grundler
2007-05-25 21:47                                           ` Brice Goglin

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox