* [RFC PATCH v1 0/3] VMD add PCH rootbus support
From: Szymon Durawa @ 2024-10-25 15:01 UTC
To: helgaas
Cc: Szymon Durawa, Dan Williams, Lukas Wunner, linux-pci,
Nirmal Patel, Mariusz Tkaczyk
This patch series implements Intel VMD PCH rootbus support inside the VMD
module. The current implementation allows VMD to take ownership of devices
only on the IOC (Rootbus0). Starting with Intel Arrow Lake, VMD exposes a PCH
rootbus, allowing VMD to own devices on the PCH bus (Rootbus1) as well.
Patch 3: workaround (WA) for PCH bus enumeration. The VMD PCH primary bus
number is 0x80, and that is the correct value. Unfortunately, the
pci_scan_bridge_extend() function marks the setup as broken for a primary
bus number different from zero. As long as VMD was limited to the IOC, the
rootbus primary was always set to 0 and therefore bus->number was irrelevant.
With the PCH rootbus the primary bus is different from 0 (set to 0x80), so
pci_scan_bridge_extend() marks the setup as broken and restarts enumeration.
I don't know whether the pci_scan_bridge_extend() implementation is correct
or whether it can be safely reworked. I propose a workaround which does not
impact the PCI stack: it updates bus->number to the same value as the primary
bus number, which allows the "sensible setup" check in
pci_scan_bridge_extend() to pass.
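For reference, the check in question looks roughly like this (a paraphrased
sketch of pci_scan_bridge_extend() in drivers/pci/probe.c, not the verbatim
code; the exact form varies between kernel versions):

	pci_read_config_dword(dev, PCI_PRIMARY_BUS, &buses);
	primary = buses & 0xff;
	secondary = (buses >> 8) & 0xff;
	subordinate = (buses >> 16) & 0xff;

	/* Check if setup is sensible at all */
	if (!pass &&
	    (primary != bus->number || secondary <= bus->number)) {
		/* ... setup treated as broken, bridge reconfigured ... */
		broken = 1;
	}

With the workaround, bus->number equals the primary bus value (0x80), so the
"primary != bus->number" condition no longer fires for the PCH rootbus.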
From the lspci tool's perspective this workaround is invisible:
# lspci -s 10000:80:1d.4 -v
10000:80:1d.4 PCI bridge: Intel Corporation Device 7f34 (rev 10) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0, IRQ 213
Bus: primary=80, secondary=f1, subordinate=f1, sec-latency=0
lspci tree view:
# lspci -tv
-+-[10000:e0]---01.0-[e2]----00.0 Sandisk Corp Western Digital WD Black SN850X NVMe SSD
+-[10000:80]-+-1d.0 Intel Corporation RST VMD Managed Controller
| \-1d.4-[f1]----00.0 Sandisk Corp PC SN735 NVMe SSD (DRAM-less)
+-[0000:80]-+-12.0 Intel Corporation Device 7f78
[...]
| +-1f.5 Intel Corporation Device 7f24
| \-1f.6 Intel Corporation Device 550d
\-[0000:00]-+-00.0 Intel Corporation Device 7d1a
+-01.0 Intel Corporation RST VMD Managed Controller
This workaround is the main reason this series is sent as an RFC. To move it
forward I need a maintainer decision on whether it is acceptable. If it is
not, it would be great if you could take a look and propose a better
solution.
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Lukas Wunner <lukas@wunner.de>
Cc: linux-pci@vger.kernel.org
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
Szymon Durawa (3):
PCI: vmd: Clean up vmd_enable_domain function
PCI: vmd: Add VMD PCH rootbus support
PCI: vmd: Add WA for VMD PCH rootbus support
drivers/pci/controller/vmd.c | 446 +++++++++++++++++++++++++++--------
1 file changed, 343 insertions(+), 103 deletions(-)
mode change 100644 => 100755 drivers/pci/controller/vmd.c
--
2.39.3
* [RFC PATCH v1 1/3] PCI: vmd: Clean up vmd_enable_domain function
From: Szymon Durawa @ 2024-10-25 15:01 UTC
To: helgaas
Cc: Szymon Durawa, Dan Williams, Lukas Wunner, linux-pci,
Nirmal Patel, Mariusz Tkaczyk
This function is too long and needs to be shortened to make it more readable.
This cleanup is prework for enabling an additional VMD bus range.
It doesn't change the functional behavior of vmd_enable_domain().
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 262 +++++++++++++++++++++--------------
1 file changed, 161 insertions(+), 101 deletions(-)
mode change 100644 => 100755 drivers/pci/controller/vmd.c
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
old mode 100644
new mode 100755
index 264a180403a0..7cce7354b5c2
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -34,6 +34,18 @@
#define MB2_SHADOW_OFFSET 0x2000
#define MB2_SHADOW_SIZE 16
+enum vmd_resource {
+ VMD_RES_CFGBAR = 0,
+ VMD_RES_MBAR_1, /*VMD Resource MemBAR 1 */
+ VMD_RES_MBAR_2, /*VMD Resource MemBAR 2 */
+ VMD_RES_COUNT
+};
+
+enum vmd_rootbus {
+ VMD_BUS_0 = 0,
+ VMD_BUS_COUNT
+};
+
enum vmd_features {
/*
* Device may contain registers which hint the physical location of the
@@ -132,10 +144,10 @@ struct vmd_dev {
struct vmd_irq_list *irqs;
struct pci_sysdata sysdata;
- struct resource resources[3];
+ struct resource resources[VMD_RES_COUNT];
struct irq_domain *irq_domain;
- struct pci_bus *bus;
- u8 busn_start;
+ struct pci_bus *bus[VMD_BUS_COUNT];
+ u8 busn_start[VMD_BUS_COUNT];
u8 first_vec;
char *name;
int instance;
@@ -367,7 +379,7 @@ static void vmd_remove_irq_domain(struct vmd_dev *vmd)
static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
unsigned int devfn, int reg, int len)
{
- unsigned int busnr_ecam = bus->number - vmd->busn_start;
+ unsigned int busnr_ecam = bus->number - vmd->busn_start[VMD_BUS_0];
u32 offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
if (offset + len >= resource_size(&vmd->dev->resource[VMD_CFGBAR]))
@@ -505,7 +517,7 @@ static inline void vmd_acpi_end(void) { }
static void vmd_domain_reset(struct vmd_dev *vmd)
{
- u16 bus, max_buses = resource_size(&vmd->resources[0]);
+ u16 bus, max_buses = resource_size(&vmd->resources[VMD_RES_CFGBAR]);
u8 dev, functions, fn, hdr_type;
char __iomem *base;
@@ -553,8 +565,8 @@ static void vmd_domain_reset(struct vmd_dev *vmd)
static void vmd_attach_resources(struct vmd_dev *vmd)
{
- vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[1];
- vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[2];
+ vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[VMD_RES_MBAR_1];
+ vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[VMD_RES_MBAR_2];
}
static void vmd_detach_resources(struct vmd_dev *vmd)
@@ -644,13 +656,13 @@ static int vmd_get_bus_number_start(struct vmd_dev *vmd)
switch (BUS_RESTRICT_CFG(reg)) {
case 0:
- vmd->busn_start = 0;
+ vmd->busn_start[VMD_BUS_0] = 0;
break;
case 1:
- vmd->busn_start = 128;
+ vmd->busn_start[VMD_BUS_0] = 128;
break;
case 2:
- vmd->busn_start = 224;
+ vmd->busn_start[VMD_BUS_0] = 224;
break;
default:
pci_err(dev, "Unknown Bus Offset Setting (%d)\n",
@@ -767,17 +779,126 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
return 0;
}
-static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+static void vmd_configure_cfgbar(struct vmd_dev *vmd)
{
- struct pci_sysdata *sd = &vmd->sysdata;
- struct resource *res;
+ struct resource *res = &vmd->dev->resource[VMD_CFGBAR];
+
+ vmd->resources[VMD_RES_CFGBAR] = (struct resource){
+ .name = "VMD CFGBAR",
+ .start = vmd->busn_start[VMD_BUS_0],
+ .end = vmd->busn_start[VMD_BUS_0] +
+ (resource_size(res) >> 20) - 1,
+ .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
+ };
+}
+
+/**
+ * vmd_configure_membar - Configure VMD MemBAR register, which points
+ * to MMIO address assigned by the OS or BIOS.
+ * @vmd: the VMD device
+ * @resource_number: resource buffer number to be filled in
+ * @membar_number: number of the MemBAR
+ * @start_offset: 4K aligned offset applied to start of VMD's MEMBAR MMIO space
+ * @end_offset: 4K aligned offset applied to end of VMD's MEMBAR MMIO space
+ * @parent: resource assigned as a parent, may be NULL
+ *
+ * Function fills resource buffer inside the VMD structure.
+ */
+static void vmd_configure_membar(struct vmd_dev *vmd,
+ enum vmd_resource resource_number,
+ u8 membar_number, resource_size_t start_offset,
+ resource_size_t end_offset,
+ struct resource *parent)
+{
+ struct resource *res_parent;
u32 upper_bits;
unsigned long flags;
+ char name[16];
+
+ struct resource *res = &vmd->dev->resource[membar_number];
+
+ upper_bits = upper_32_bits(res->end);
+ flags = res->flags & ~IORESOURCE_SIZEALIGN;
+ if (!upper_bits)
+ flags &= ~IORESOURCE_MEM_64;
+
+ snprintf(name, sizeof(name), "VMD MEMBAR%d", membar_number/2);
+
+ res_parent = parent;
+ if (!res_parent)
+ res_parent = res;
+
+ vmd->resources[resource_number] = (struct resource){
+ .name = name,
+ .start = res->start + start_offset,
+ .end = res->end - end_offset,
+ .flags = flags,
+ .parent = res_parent,
+ };
+}
+
+static void vmd_configure_membar1_membar2(struct vmd_dev *vmd,
+ resource_size_t mbar2_ofs)
+{
+ vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0, 0, NULL);
+ vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2,
+ mbar2_ofs, 0, NULL);
+}
+
+static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
+{
+ struct pci_bus *child;
+ struct pci_dev *dev;
+ int ret;
+
+ vmd_acpi_begin();
+
+ pci_scan_child_bus(bus);
+ vmd_domain_reset(vmd_from_bus(bus));
+
+ /*
+ * When Intel VMD is enabled, the OS does not discover the Root Ports
+ * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies
+ * a reset to the parent of the PCI device supplied as argument. This
+ * is why we pass a child device, so the reset can be triggered at
+ * the Intel bridge level and propagated to all the children in the
+ * hierarchy.
+ */
+ list_for_each_entry(child, &bus->children, node) {
+ if (!list_empty(&child->devices)) {
+ dev = list_first_entry(&child->devices, struct pci_dev,
+ bus_list);
+ ret = pci_reset_bus(dev);
+ if (ret)
+ pci_warn(dev, "can't reset device: %d\n", ret);
+
+ break;
+ }
+ }
+
+ pci_assign_unassigned_bus_resources(bus);
+
+ pci_walk_bus(bus, vmd_pm_enable_quirk, &features);
+
+ /*
+ * VMD root buses are virtual and don't return true on pci_is_pcie()
+ * and will fail pcie_bus_configure_settings() early. It can instead be
+ * run on each of the real root ports.
+ */
+ list_for_each_entry(child, &bus->children, node)
+ pcie_bus_configure_settings(child);
+
+ pci_bus_add_devices(bus);
+
+ vmd_acpi_end();
+}
+
+static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
+{
+ struct pci_sysdata *sd = &vmd->sysdata;
LIST_HEAD(resources);
resource_size_t offset[2] = {0};
resource_size_t membar2_offset = 0x2000;
- struct pci_bus *child;
- struct pci_dev *dev;
int ret;
/*
@@ -807,13 +928,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
return ret;
}
- res = &vmd->dev->resource[VMD_CFGBAR];
- vmd->resources[0] = (struct resource) {
- .name = "VMD CFGBAR",
- .start = vmd->busn_start,
- .end = vmd->busn_start + (resource_size(res) >> 20) - 1,
- .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
- };
+ vmd_configure_cfgbar(vmd);
/*
* If the window is below 4GB, clear IORESOURCE_MEM_64 so we can
@@ -827,36 +942,12 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
*
* The only way we could use a 64-bit non-prefetchable MEMBAR is
* if its address is <4GB so that we can convert it to a 32-bit
- * resource. To be visible to the host OS, all VMD endpoints must
+ * resource. To be visible to the host OS, all VMD endpoints must
* be initially configured by platform BIOS, which includes setting
- * up these resources. We can assume the device is configured
+ * up these resources. We can assume the device is configured
* according to the platform needs.
*/
- res = &vmd->dev->resource[VMD_MEMBAR1];
- upper_bits = upper_32_bits(res->end);
- flags = res->flags & ~IORESOURCE_SIZEALIGN;
- if (!upper_bits)
- flags &= ~IORESOURCE_MEM_64;
- vmd->resources[1] = (struct resource) {
- .name = "VMD MEMBAR1",
- .start = res->start,
- .end = res->end,
- .flags = flags,
- .parent = res,
- };
-
- res = &vmd->dev->resource[VMD_MEMBAR2];
- upper_bits = upper_32_bits(res->end);
- flags = res->flags & ~IORESOURCE_SIZEALIGN;
- if (!upper_bits)
- flags &= ~IORESOURCE_MEM_64;
- vmd->resources[2] = (struct resource) {
- .name = "VMD MEMBAR2",
- .start = res->start + membar2_offset,
- .end = res->end,
- .flags = flags,
- .parent = res,
- };
+ vmd_configure_membar1_membar2(vmd, membar2_offset);
sd->vmd_dev = vmd->dev;
sd->domain = vmd_find_free_domain();
@@ -892,70 +983,39 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
vmd_set_msi_remapping(vmd, false);
}
- pci_add_resource(&resources, &vmd->resources[0]);
- pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
- pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);
+ pci_add_resource(&resources, &vmd->resources[VMD_RES_CFGBAR]);
+ pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_1],
+ offset[0]);
+ pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_2],
+ offset[1]);
- vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
- &vmd_ops, sd, &resources);
- if (!vmd->bus) {
+ vmd->bus[VMD_BUS_0] = pci_create_root_bus(&vmd->dev->dev,
+ vmd->busn_start[VMD_BUS_0],
+ &vmd_ops, sd, &resources);
+ if (!vmd->bus[VMD_BUS_0]) {
pci_free_resource_list(&resources);
vmd_remove_irq_domain(vmd);
return -ENODEV;
}
- vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
- to_pci_host_bridge(vmd->bus->bridge));
+ vmd_copy_host_bridge_flags(
+ pci_find_host_bridge(vmd->dev->bus),
+ to_pci_host_bridge(vmd->bus[VMD_BUS_0]->bridge));
vmd_attach_resources(vmd);
if (vmd->irq_domain)
- dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
+ dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev,
+ vmd->irq_domain);
else
- dev_set_msi_domain(&vmd->bus->dev,
+ dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev,
dev_get_msi_domain(&vmd->dev->dev));
- WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
- "domain"), "Can't create symlink to domain\n");
-
- vmd_acpi_begin();
+ WARN(sysfs_create_link(&vmd->dev->dev.kobj,
+ &vmd->bus[VMD_BUS_0]->dev.kobj, "domain"),
+ "Can't create symlink to domain\n");
- pci_scan_child_bus(vmd->bus);
- vmd_domain_reset(vmd);
+ vmd_bus_enumeration(vmd->bus[VMD_BUS_0], features);
- /* When Intel VMD is enabled, the OS does not discover the Root Ports
- * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies
- * a reset to the parent of the PCI device supplied as argument. This
- * is why we pass a child device, so the reset can be triggered at
- * the Intel bridge level and propagated to all the children in the
- * hierarchy.
- */
- list_for_each_entry(child, &vmd->bus->children, node) {
- if (!list_empty(&child->devices)) {
- dev = list_first_entry(&child->devices,
- struct pci_dev, bus_list);
- ret = pci_reset_bus(dev);
- if (ret)
- pci_warn(dev, "can't reset device: %d\n", ret);
-
- break;
- }
- }
-
- pci_assign_unassigned_bus_resources(vmd->bus);
-
- pci_walk_bus(vmd->bus, vmd_pm_enable_quirk, &features);
-
- /*
- * VMD root buses are virtual and don't return true on pci_is_pcie()
- * and will fail pcie_bus_configure_settings() early. It can instead be
- * run on each of the real root ports.
- */
- list_for_each_entry(child, &vmd->bus->children, node)
- pcie_bus_configure_settings(child);
-
- pci_bus_add_devices(vmd->bus);
-
- vmd_acpi_end();
return 0;
}
@@ -1031,9 +1091,9 @@ static void vmd_remove(struct pci_dev *dev)
{
struct vmd_dev *vmd = pci_get_drvdata(dev);
- pci_stop_root_bus(vmd->bus);
+ pci_stop_root_bus(vmd->bus[VMD_BUS_0]);
sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
- pci_remove_root_bus(vmd->bus);
+ pci_remove_root_bus(vmd->bus[VMD_BUS_0]);
vmd_cleanup_srcu(vmd);
vmd_detach_resources(vmd);
vmd_remove_irq_domain(vmd);
--
2.39.3
* [RFC PATCH v1 2/3] PCI: vmd: Add VMD PCH rootbus support
From: Szymon Durawa @ 2024-10-25 15:01 UTC
To: helgaas
Cc: Szymon Durawa, Dan Williams, Lukas Wunner, linux-pci,
Nirmal Patel, Mariusz Tkaczyk
Starting from Intel Arrow Lake VMD enhacement introduces separate
rotbus for PCH. It means that all 3 MMIO BARs exposed by VMD are
shared now between CPU IOC and PCH. This patch adds PCH bus
enumeration and MMIO management for devices with VMD enhancement
support.
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 176 +++++++++++++++++++++++++++++++++--
1 file changed, 167 insertions(+), 9 deletions(-)
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 7cce7354b5c2..842b70a21325 100755
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -31,6 +31,15 @@
#define PCI_REG_VMLOCK 0x70
#define MB2_SHADOW_EN(vmlock) (vmlock & 0x2)
+#define VMD_PRIMARY_PCH_BUS 0x80
+#define VMD_BUSRANGE0 0xC8
+#define VMD_BUSRANGE1 0xCC
+#define VMD_MEMBAR1_OFFSET 0xD0
+#define VMD_MEMBAR2_OFFSET1 0xD8
+#define VMD_MEMBAR2_OFFSET2 0xDC
+#define VMD_BUS_END(busr) ((busr >> 8) & 0xff)
+#define VMD_BUS_START(busr) (busr & 0x00ff)
+
#define MB2_SHADOW_OFFSET 0x2000
#define MB2_SHADOW_SIZE 16
@@ -38,11 +47,15 @@ enum vmd_resource {
VMD_RES_CFGBAR = 0,
VMD_RES_MBAR_1, /*VMD Resource MemBAR 1 */
VMD_RES_MBAR_2, /*VMD Resource MemBAR 2 */
+ VMD_RES_PCH_CFGBAR,
+ VMD_RES_PCH_MBAR_1, /*VMD Resource PCH MemBAR 1 */
+ VMD_RES_PCH_MBAR_2, /*VMD Resource PCH MemBAR 2 */
VMD_RES_COUNT
};
enum vmd_rootbus {
VMD_BUS_0 = 0,
+ VMD_BUS_1,
VMD_BUS_COUNT
};
@@ -86,6 +99,12 @@ enum vmd_features {
* proper power management of the SoC.
*/
VMD_FEAT_BIOS_PM_QUIRK = (1 << 5),
+
+ /*
+ * Starting from Intel Arrow Lake, VMD devices have their VMD rootports
+ * connected to CPU IOC and PCH rootbuses.
+ */
+ VMD_FEAT_HAS_PCH_ROOTBUS = (1 << 6)
};
#define VMD_BIOS_PM_QUIRK_LTR 0x1003 /* 3145728 ns */
@@ -93,7 +112,8 @@ enum vmd_features {
#define VMD_FEATS_CLIENT (VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | \
VMD_FEAT_HAS_BUS_RESTRICTIONS | \
VMD_FEAT_OFFSET_FIRST_VECTOR | \
- VMD_FEAT_BIOS_PM_QUIRK)
+ VMD_FEAT_BIOS_PM_QUIRK | \
+ VMD_FEAT_HAS_PCH_ROOTBUS)
static DEFINE_IDA(vmd_instance_ida);
@@ -376,6 +396,11 @@ static void vmd_remove_irq_domain(struct vmd_dev *vmd)
}
}
+static inline u8 vmd_has_pch_rootbus(struct vmd_dev *vmd)
+{
+ return vmd->busn_start[VMD_BUS_1] != 0;
+}
+
static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
unsigned int devfn, int reg, int len)
{
@@ -521,6 +546,11 @@ static void vmd_domain_reset(struct vmd_dev *vmd)
u8 dev, functions, fn, hdr_type;
char __iomem *base;
+ if (vmd_has_pch_rootbus(vmd)) {
+ max_buses += resource_size(&vmd->resources[VMD_RES_PCH_CFGBAR]);
+ max_buses += 2;
+ }
+
for (bus = 0; bus < max_buses; bus++) {
for (dev = 0; dev < 32; dev++) {
base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
@@ -645,7 +675,7 @@ static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint,
return 0;
}
-static int vmd_get_bus_number_start(struct vmd_dev *vmd)
+static int vmd_get_bus_number_start(struct vmd_dev *vmd, unsigned long features)
{
struct pci_dev *dev = vmd->dev;
u16 reg;
@@ -664,6 +694,18 @@ static int vmd_get_bus_number_start(struct vmd_dev *vmd)
case 2:
vmd->busn_start[VMD_BUS_0] = 224;
break;
+ case 3:
+ if (!(features & VMD_FEAT_HAS_PCH_ROOTBUS)) {
+ pci_err(dev, "VMD Bus Restriction detected type %d, but PCH Rootbus is not supported, aborting.\n",
+ BUS_RESTRICT_CFG(reg));
+ return -ENODEV;
+ }
+
+ /* IOC start bus */
+ vmd->busn_start[VMD_BUS_0] = 224;
+ /* PCH start bus */
+ vmd->busn_start[VMD_BUS_1] = 225;
+ break;
default:
pci_err(dev, "Unknown Bus Offset Setting (%d)\n",
BUS_RESTRICT_CFG(reg));
@@ -790,6 +832,30 @@ static void vmd_configure_cfgbar(struct vmd_dev *vmd)
(resource_size(res) >> 20) - 1,
.flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
};
+
+ if (vmd_has_pch_rootbus(vmd)) {
+ u16 ioc_range = 0;
+ u16 pch_range = 0;
+
+ pci_read_config_word(vmd->dev, VMD_BUSRANGE0, &ioc_range);
+ pci_read_config_word(vmd->dev, VMD_BUSRANGE1, &pch_range);
+
+ /*
+ * Resize CPU IOC CFGBAR range to make space for PCH owned
+ * devices by adjusting range end with value stored in
+ * VMD_BUSRANGE0 register.
+ */
+ vmd->resources[VMD_RES_CFGBAR].start = VMD_BUS_START(ioc_range);
+ vmd->resources[VMD_RES_CFGBAR].end = VMD_BUS_END(ioc_range);
+
+ vmd->resources[VMD_RES_PCH_CFGBAR] = (struct resource){
+ .name = "VMD CFGBAR PCH",
+ .start = VMD_BUS_START(pch_range),
+ .end = VMD_BUS_END(pch_range),
+ .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
+ .parent = &vmd->resources[VMD_RES_CFGBAR],
+ };
+ }
}
/**
@@ -822,7 +888,8 @@ static void vmd_configure_membar(struct vmd_dev *vmd,
if (!upper_bits)
flags &= ~IORESOURCE_MEM_64;
- snprintf(name, sizeof(name), "VMD MEMBAR%d", membar_number/2);
+ snprintf(name, sizeof(name), "VMD MEMBAR%d %s", membar_number / 2,
+ resource_number > VMD_RES_MBAR_2 ? "PCH" : "");
res_parent = parent;
if (!res_parent)
@@ -840,9 +907,43 @@ static void vmd_configure_membar(struct vmd_dev *vmd,
static void vmd_configure_membar1_membar2(struct vmd_dev *vmd,
resource_size_t mbar2_ofs)
{
- vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0, 0, NULL);
- vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2,
- mbar2_ofs, 0, NULL);
+ if (vmd_has_pch_rootbus(vmd)) {
+ u32 pch_mbar1_ofs = 0;
+ u64 pch_mbar2_ofs = 0;
+ u32 reg;
+
+ pci_read_config_dword(vmd->dev, VMD_MEMBAR1_OFFSET,
+ &pch_mbar1_ofs);
+
+ pci_read_config_dword(vmd->dev, VMD_MEMBAR2_OFFSET1, &reg);
+ pch_mbar2_ofs = reg;
+
+ pci_read_config_dword(vmd->dev, VMD_MEMBAR2_OFFSET2, &reg);
+ pch_mbar2_ofs |= (u64)reg << 32;
+
+ /*
+ * Resize CPU IOC MEMBAR1 and MEMBAR2 ranges to make space
+ * for PCH owned devices by adjusting range end with values
+ * stored in VMD_MEMBAR1_OFFSET and VMD_MEMBAR2_OFFSET registers
+ */
+ vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0,
+ pch_mbar1_ofs, NULL);
+ vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2,
+ mbar2_ofs, pch_mbar2_ofs - mbar2_ofs,
+ NULL);
+
+ vmd_configure_membar(vmd, VMD_RES_PCH_MBAR_1, VMD_MEMBAR1,
+ pch_mbar1_ofs, 0,
+ &vmd->resources[VMD_RES_MBAR_1]);
+ vmd_configure_membar(vmd, VMD_RES_PCH_MBAR_2, VMD_MEMBAR2,
+ mbar2_ofs + pch_mbar2_ofs, 0,
+ &vmd->resources[VMD_RES_MBAR_2]);
+ } else {
+ vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0, 0,
+ NULL);
+ vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2,
+ mbar2_ofs, 0, NULL);
+ }
}
static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
@@ -854,7 +955,9 @@ static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
vmd_acpi_begin();
pci_scan_child_bus(bus);
- vmd_domain_reset(vmd_from_bus(bus));
+
+ if (bus->primary == 0)
+ vmd_domain_reset(vmd_from_bus(bus));
/*
* When Intel VMD is enabled, the OS does not discover the Root Ports
@@ -893,6 +996,47 @@ static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
vmd_acpi_end();
}
+static int vmd_create_pch_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
+ resource_size_t *offset)
+{
+ LIST_HEAD(resources_pch);
+
+ pci_add_resource(&resources_pch, &vmd->resources[VMD_RES_PCH_CFGBAR]);
+ pci_add_resource_offset(&resources_pch,
+ &vmd->resources[VMD_RES_PCH_MBAR_1], offset[0]);
+ pci_add_resource_offset(&resources_pch,
+ &vmd->resources[VMD_RES_PCH_MBAR_2], offset[1]);
+
+ vmd->bus[VMD_BUS_1] = pci_create_root_bus(&vmd->dev->dev,
+ vmd->busn_start[VMD_BUS_1],
+ &vmd_ops, sd, &resources_pch);
+
+ if (!vmd->bus[VMD_BUS_1]) {
+ pci_free_resource_list(&resources_pch);
+ pci_stop_root_bus(vmd->bus[VMD_BUS_1]);
+ pci_remove_root_bus(vmd->bus[VMD_BUS_1]);
+ return -ENODEV;
+ }
+
+ /*
+ * primary bus is not set by pci_create_root_bus(), it is updated here
+ */
+ vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_PCH_BUS;
+
+ vmd_copy_host_bridge_flags(
+ pci_find_host_bridge(vmd->dev->bus),
+ to_pci_host_bridge(vmd->bus[VMD_BUS_1]->bridge));
+
+ if (vmd->irq_domain)
+ dev_set_msi_domain(&vmd->bus[VMD_BUS_1]->dev,
+ vmd->irq_domain);
+ else
+ dev_set_msi_domain(&vmd->bus[VMD_BUS_1]->dev,
+ dev_get_msi_domain(&vmd->dev->dev));
+
+ return 0;
+}
+
static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
{
struct pci_sysdata *sd = &vmd->sysdata;
@@ -923,7 +1067,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
* limits the bus range to between 0-127, 128-255, or 224-255
*/
if (features & VMD_FEAT_HAS_BUS_RESTRICTIONS) {
- ret = vmd_get_bus_number_start(vmd);
+ ret = vmd_get_bus_number_start(vmd, features);
if (ret)
return ret;
}
@@ -1016,6 +1160,16 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
vmd_bus_enumeration(vmd->bus[VMD_BUS_0], features);
+ if (vmd_has_pch_rootbus(vmd)) {
+ ret = vmd_create_pch_bus(vmd, sd, offset);
+ if (ret) {
+ pci_err(vmd->dev, "Can't create PCH bus: %d\n", ret);
+ return ret;
+ }
+
+ vmd_bus_enumeration(vmd->bus[VMD_BUS_1], features);
+ }
+
return 0;
}
@@ -1094,6 +1248,10 @@ static void vmd_remove(struct pci_dev *dev)
pci_stop_root_bus(vmd->bus[VMD_BUS_0]);
sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
pci_remove_root_bus(vmd->bus[VMD_BUS_0]);
+ if (vmd_has_pch_rootbus(vmd)) {
+ pci_stop_root_bus(vmd->bus[VMD_BUS_1]);
+ pci_remove_root_bus(vmd->bus[VMD_BUS_1]);
+ }
vmd_cleanup_srcu(vmd);
vmd_detach_resources(vmd);
vmd_remove_irq_domain(vmd);
@@ -1179,4 +1337,4 @@ module_pci_driver(vmd_drv);
MODULE_AUTHOR("Intel Corporation");
MODULE_DESCRIPTION("Volume Management Device driver");
MODULE_LICENSE("GPL v2");
-MODULE_VERSION("0.6");
+MODULE_VERSION("0.7");
--
2.39.3
* [RFC PATCH v1 3/3] PCI: vmd: Add WA for VMD PCH rootbus support
From: Szymon Durawa @ 2024-10-25 15:01 UTC
To: helgaas
Cc: Szymon Durawa, Dan Williams, Lukas Wunner, linux-pci,
Nirmal Patel, Mariusz Tkaczyk
VMD PCH rootbus primary number is 0x80 and pci_scan_bridge_extend()
cannot assign it as "hard-wired to 0" and marks setup as broken. To
avoid this, PCH bus number has to be the same as PCH primary number.
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 26 ++++++++++++++++++++++++--
1 file changed, 24 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 842b70a21325..bb47e0a76c89 100755
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -404,8 +404,22 @@ static inline u8 vmd_has_pch_rootbus(struct vmd_dev *vmd)
static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
unsigned int devfn, int reg, int len)
{
- unsigned int busnr_ecam = bus->number - vmd->busn_start[VMD_BUS_0];
- u32 offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
+ unsigned char bus_number;
+ unsigned int busnr_ecam;
+ u32 offset;
+
+ /*
+ * VMD WA: for PCH rootbus, bus number is set to VMD_PRIMARY_PCH_BUS
+ * (see comment in vmd_create_pch_bus()) but original value is 0xE1
+ * which is stored in vmd->busn_start[VMD_BUS_1].
+ */
+ if (vmd_has_pch_rootbus(vmd) && bus->number == VMD_PRIMARY_PCH_BUS)
+ bus_number = vmd->busn_start[VMD_BUS_1];
+ else
+ bus_number = bus->number;
+
+ busnr_ecam = bus_number - vmd->busn_start[VMD_BUS_0];
+ offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
if (offset + len >= resource_size(&vmd->dev->resource[VMD_CFGBAR]))
return NULL;
@@ -1023,6 +1037,14 @@ static int vmd_create_pch_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
*/
vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_PCH_BUS;
+ /* This is a workaround for pci_scan_bridge_extend() code.
+ * It assigns setup as broken when primary != bus->number and
+ * for PCH rootbus primary is not "hard-wired to 0".
+ * To avoid this, vmd->bus[VMD_BUS_1]->number and
+ * vmd->bus[VMD_BUS_1]->primary are updated to the same value.
+ */
+ vmd->bus[VMD_BUS_1]->number = VMD_PRIMARY_PCH_BUS;
+
vmd_copy_host_bridge_flags(
pci_find_host_bridge(vmd->dev->bus),
to_pci_host_bridge(vmd->bus[VMD_BUS_1]->bridge));
--
2.39.3
* Re: [RFC PATCH v1 1/3] PCI: vmd: Clean up vmd_enable_domain function
From: Bjorn Helgaas @ 2024-10-28 21:27 UTC
To: Szymon Durawa
Cc: Dan Williams, Lukas Wunner, linux-pci, Nirmal Patel,
Mariusz Tkaczyk
On Fri, Oct 25, 2024 at 05:01:51PM +0200, Szymon Durawa wrote:
> This function is too long and needs to be shortened to make it more readable.
Use the actual function name here instead of "This function".
Include "()" after function names, including in the subject line.
> +enum vmd_resource {
> + VMD_RES_CFGBAR = 0,
> + VMD_RES_MBAR_1, /*VMD Resource MemBAR 1 */
> + VMD_RES_MBAR_2, /*VMD Resource MemBAR 2 */
Add space after "/*".
> @@ -132,10 +144,10 @@ struct vmd_dev {
> struct vmd_irq_list *irqs;
>
> struct pci_sysdata sysdata;
> - struct resource resources[3];
> + struct resource resources[VMD_RES_COUNT];
> struct irq_domain *irq_domain;
> - struct pci_bus *bus;
> - u8 busn_start;
> + struct pci_bus *bus[VMD_BUS_COUNT];
> + u8 busn_start[VMD_BUS_COUNT];
This looks like a combination of things that don't logically need to
be in the same patch, e.g., adding VMD_RES_COUNT (basically cosmetic),
converting "bus" from a scalar to an array (possibly new
functionality?)
> -static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> +static void vmd_configure_cfgbar(struct vmd_dev *vmd)
> {
> - struct pci_sysdata *sd = &vmd->sysdata;
> - struct resource *res;
> + struct resource *res = &vmd->dev->resource[VMD_CFGBAR];
> +
> + vmd->resources[VMD_RES_CFGBAR] = (struct resource){
> + .name = "VMD CFGBAR",
> + .start = vmd->busn_start[VMD_BUS_0],
> + .end = vmd->busn_start[VMD_BUS_0] +
> + (resource_size(res) >> 20) - 1,
> + .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
> + };
> +}
This might need to be split into several patches, each of which moves
some code from vmd_enable_domain() into a helper function, to make
this easily reviewable. I think generally these are simple, obvious
changes, but when they're all collected together in a single patch,
it's hard to tell that.
Bjorn
* Re: [RFC PATCH v1 2/3] PCI: vmd: Add VMD PCH rootbus support
From: Bjorn Helgaas @ 2024-10-28 21:50 UTC
To: Szymon Durawa
Cc: Dan Williams, Lukas Wunner, linux-pci, Nirmal Patel,
Mariusz Tkaczyk
On Fri, Oct 25, 2024 at 05:01:52PM +0200, Szymon Durawa wrote:
> Starting from Intel Arrow Lake VMD enhacement introduces separate
> rotbus for PCH. It means that all 3 MMIO BARs exposed by VMD are
enhancement
root bus
Does VMD still have only 3 MMIO BARs? VMD_RES_PCH_* suggests more
BARs.
> shared now between CPU IOC and PCH. This patch adds PCH bus
> enumeration and MMIO management for devices with VMD enhancement
> support.
s/This patch adds/Add/
We already had bus enumeration and MMIO management.
It'd be nice to have something specific about what changes with PCH.
A different fixed root bus number? Multiple root buses? Additional
BARs in the VMD endpoint?
If possible, describe this in generic PCIe topology terms, not in
Intel-speak (IOC, PCH, etc).
> +#define VMD_PRIMARY_PCH_BUS 0x80
> +#define VMD_BUSRANGE0 0xC8
> +#define VMD_BUSRANGE1 0xCC
> +#define VMD_MEMBAR1_OFFSET 0xD0
> +#define VMD_MEMBAR2_OFFSET1 0xD8
> +#define VMD_MEMBAR2_OFFSET2 0xDC
This file (mostly) uses lower-case hex; match that style.
> +#define VMD_BUS_END(busr) ((busr >> 8) & 0xff)
> +#define VMD_BUS_START(busr) (busr & 0x00ff)
> +
> #define MB2_SHADOW_OFFSET 0x2000
> #define MB2_SHADOW_SIZE 16
>
> @@ -38,11 +47,15 @@ enum vmd_resource {
> VMD_RES_CFGBAR = 0,
> VMD_RES_MBAR_1, /*VMD Resource MemBAR 1 */
> VMD_RES_MBAR_2, /*VMD Resource MemBAR 2 */
> + VMD_RES_PCH_CFGBAR,
> + VMD_RES_PCH_MBAR_1, /*VMD Resource PCH MemBAR 1 */
> + VMD_RES_PCH_MBAR_2, /*VMD Resource PCH MemBAR 2 */
Space after "/*".
> +static inline u8 vmd_has_pch_rootbus(struct vmd_dev *vmd)
> +{
> + return vmd->busn_start[VMD_BUS_1] != 0;
Seems a little weird to learn this by testing whether this kzalloc'ed
field has been set. Could easily save the driver_data pointer or just
the "features" value in struct vmd_dev.
> + case 3:
> + if (!(features & VMD_FEAT_HAS_PCH_ROOTBUS)) {
> + pci_err(dev, "VMD Bus Restriction detected type %d, but PCH Rootbus is not supported, aborting.\n",
> + BUS_RESTRICT_CFG(reg));
> + return -ENODEV;
> + }
> +
> + /* IOC start bus */
> + vmd->busn_start[VMD_BUS_0] = 224;
> + /* PCH start bus */
> + vmd->busn_start[VMD_BUS_1] = 225;
Seems like these magic numbers could have #defines. I see we've been
using 128 and 224 already, and this basically adds 225.
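Untested sketch, with names picked only for illustration:

	#define VMD_BUSN_START_MID	128
	#define VMD_BUSN_START_HIGH	224
	#define VMD_BUSN_START_PCH	225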
> +static int vmd_create_pch_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
> + resource_size_t *offset)
> +{
> + LIST_HEAD(resources_pch);
> +
> + pci_add_resource(&resources_pch, &vmd->resources[VMD_RES_PCH_CFGBAR]);
> + pci_add_resource_offset(&resources_pch,
> + &vmd->resources[VMD_RES_PCH_MBAR_1], offset[0]);
> + pci_add_resource_offset(&resources_pch,
> + &vmd->resources[VMD_RES_PCH_MBAR_2], offset[1]);
> +
> + vmd->bus[VMD_BUS_1] = pci_create_root_bus(&vmd->dev->dev,
> + vmd->busn_start[VMD_BUS_1],
> + &vmd_ops, sd, &resources_pch);
> +
> + if (!vmd->bus[VMD_BUS_1]) {
> + pci_free_resource_list(&resources_pch);
> + pci_stop_root_bus(vmd->bus[VMD_BUS_1]);
> + pci_remove_root_bus(vmd->bus[VMD_BUS_1]);
> + return -ENODEV;
> + }
> +
> + /*
> + * primary bus is not set by pci_create_root_bus(), it is updated here
> + */
> + vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_PCH_BUS;
> +
> + vmd_copy_host_bridge_flags(
> + pci_find_host_bridge(vmd->dev->bus),
> + to_pci_host_bridge(vmd->bus[VMD_BUS_1]->bridge));
> +
> + if (vmd->irq_domain)
> + dev_set_msi_domain(&vmd->bus[VMD_BUS_1]->dev,
> + vmd->irq_domain);
> + else
> + dev_set_msi_domain(&vmd->bus[VMD_BUS_1]->dev,
> + dev_get_msi_domain(&vmd->dev->dev));
> +
> + return 0;
This looks a lot like parts of vmd_enable_domain(). Could this be
factored out into a helper function that could be used for both
VMD_BUS_0 and VMD_BUS_1?
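For example, an untested sketch of the shape I have in mind (the helper name
and signature are only illustrative):

	static struct pci_bus *vmd_create_bus(struct vmd_dev *vmd,
					      struct pci_sysdata *sd,
					      enum vmd_rootbus busnr,
					      struct list_head *resources)
	{
		struct pci_bus *bus;

		bus = pci_create_root_bus(&vmd->dev->dev,
					  vmd->busn_start[busnr],
					  &vmd_ops, sd, resources);
		if (!bus)
			return NULL;

		vmd_copy_host_bridge_flags(
			pci_find_host_bridge(vmd->dev->bus),
			to_pci_host_bridge(bus->bridge));

		if (vmd->irq_domain)
			dev_set_msi_domain(&bus->dev, vmd->irq_domain);
		else
			dev_set_msi_domain(&bus->dev,
					   dev_get_msi_domain(&vmd->dev->dev));

		return bus;
	}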
Why is vmd_attach_resources() different between them? Why is
sysfs_create_link() different?
Bjorn
* Re: [RFC PATCH v1 3/3] PCI: vmd: Add WA for VMD PCH rootbus support
From: Bjorn Helgaas @ 2024-10-28 21:53 UTC
To: Szymon Durawa
Cc: Dan Williams, Lukas Wunner, linux-pci, Nirmal Patel,
Mariusz Tkaczyk
In subject (and the code comment), spell out "WA" (I assume it means
workaround?)
Also make it more specific than "VMD PCH rootbus support," which
doesn't say anything about what the actual issue is.
On Fri, Oct 25, 2024 at 05:01:53PM +0200, Szymon Durawa wrote:
> VMD PCH rootbus primary number is 0x80 and pci_scan_bridge_extend()
> cannot assign it as "hard-wired to 0" and marks setup as broken. To
> avoid this, PCH bus number has to be the same as PCH primary number.
From the cover letter, I infer that whatever the issue is, it doesn't
happen when VMD is integrated into an IOC, but it does happen when VMD
is in a PCH. These details should be in this commit log because they
are relevant to *this* patch, and the cover letter doesn't make it
into git.
Maybe the problem is that some root bus numbers are hardwired to fixed
non-zero values? I thought we already handled that, but perhaps not.
What does the user see without this patch? Some warning? Failure to
enumerate some devices? Include a hint here if possible so users can
find the fix to their issue.
> Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
> Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
> Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
> ---
> drivers/pci/controller/vmd.c | 26 ++++++++++++++++++++++++--
> 1 file changed, 24 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index 842b70a21325..bb47e0a76c89 100755
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -404,8 +404,22 @@ static inline u8 vmd_has_pch_rootbus(struct vmd_dev *vmd)
> static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
> unsigned int devfn, int reg, int len)
> {
> - unsigned int busnr_ecam = bus->number - vmd->busn_start[VMD_BUS_0];
> - u32 offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
> + unsigned char bus_number;
> + unsigned int busnr_ecam;
> + u32 offset;
> +
> + /*
> + * VMD WA: for PCH rootbus, bus number is set to VMD_PRIMARY_PCH_BUS
> + * (see comment in vmd_create_pch_bus()) but original value is 0xE1
> + * which is stored in vmd->busn_start[VMD_BUS_1].
Used 225 elsewhere. Would be nice to at least use the same base (dec
vs hex), and preferably the same #define.
> + */
> + if (vmd_has_pch_rootbus(vmd) && bus->number == VMD_PRIMARY_PCH_BUS)
> + bus_number = vmd->busn_start[VMD_BUS_1];
> + else
> + bus_number = bus->number;
> +
> + busnr_ecam = bus_number - vmd->busn_start[VMD_BUS_0];
> + offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
>
> if (offset + len >= resource_size(&vmd->dev->resource[VMD_CFGBAR]))
> return NULL;
> @@ -1023,6 +1037,14 @@ static int vmd_create_pch_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
> */
> vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_PCH_BUS;
>
> + /* This is a workaround for pci_scan_bridge_extend() code.
Use the prevailing comment style:
/*
* This is ...
> + * It assigns setup as broken when primary != bus->number and
> + * for PCH rootbus primary is not "hard-wired to 0".
> + * To avoid this, vmd->bus[VMD_BUS_1]->number and
> + * vmd->bus[VMD_BUS_1]->primary are updated to the same value.
> + */
> + vmd->bus[VMD_BUS_1]->number = VMD_PRIMARY_PCH_BUS;
> +
> vmd_copy_host_bridge_flags(
> pci_find_host_bridge(vmd->dev->bus),
> to_pci_host_bridge(vmd->bus[VMD_BUS_1]->bridge));
> --
> 2.39.3
>