* [PATCH v3 1/8] PCI: vmd: Add vmd_bus_enumeration()
2024-11-22 8:52 [PATCH v3 0/8] VMD add second rootbus support Szymon Durawa
@ 2024-11-22 8:52 ` Szymon Durawa
2024-11-22 8:52 ` [PATCH v3 2/8] PCI: vmd: Add vmd_configure_cfgbar() Szymon Durawa
` (7 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Szymon Durawa @ 2024-11-22 8:52 UTC (permalink / raw)
To: helgaas
Cc: Szymon Durawa, Bjorn Helgaas, Dan Williams, Lukas Wunner,
linux-pci, Nirmal Patel, Mariusz Tkaczyk
Move the VMD bus enumeration code to a new helper vmd_bus_enumeration().
No functional changes.
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 89 ++++++++++++++++++++----------------
1 file changed, 49 insertions(+), 40 deletions(-)
mode change 100644 => 100755 drivers/pci/controller/vmd.c
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
old mode 100644
new mode 100755
index a726de0af011..fb66910f9204
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -778,6 +778,54 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
return 0;
}
+static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
+{
+ struct pci_bus *child;
+ struct pci_dev *dev;
+ int ret;
+
+ vmd_acpi_begin();
+
+ pci_scan_child_bus(bus);
+ vmd_domain_reset(vmd_from_bus(bus));
+
+ /*
+ * When Intel VMD is enabled, the OS does not discover the Root Ports
+ * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies
+ * a reset to the parent of the PCI device supplied as argument. This
+ * is why we pass a child device, so the reset can be triggered at
+ * the Intel bridge level and propagated to all the children in the
+ * hierarchy.
+ */
+ list_for_each_entry(child, &bus->children, node) {
+ if (!list_empty(&child->devices)) {
+ dev = list_first_entry(&child->devices, struct pci_dev,
+ bus_list);
+ ret = pci_reset_bus(dev);
+ if (ret)
+ pci_warn(dev, "can't reset device: %d\n", ret);
+
+ break;
+ }
+ }
+
+ pci_assign_unassigned_bus_resources(bus);
+
+ pci_walk_bus(bus, vmd_pm_enable_quirk, &features);
+
+ /*
+ * VMD root buses are virtual and don't return true on pci_is_pcie()
+ * and will fail pcie_bus_configure_settings() early. It can instead be
+ * run on each of the real root ports.
+ */
+ list_for_each_entry(child, &bus->children, node)
+ pcie_bus_configure_settings(child);
+
+ pci_bus_add_devices(bus);
+
+ vmd_acpi_end();
+}
+
static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
{
struct pci_sysdata *sd = &vmd->sysdata;
@@ -787,8 +835,6 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
LIST_HEAD(resources);
resource_size_t offset[2] = {0};
resource_size_t membar2_offset = 0x2000;
- struct pci_bus *child;
- struct pci_dev *dev;
int ret;
/*
@@ -928,45 +974,8 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
"domain"), "Can't create symlink to domain\n");
- vmd_acpi_begin();
-
- pci_scan_child_bus(vmd->bus);
- vmd_domain_reset(vmd);
+ vmd_bus_enumeration(vmd->bus, features);
- /* When Intel VMD is enabled, the OS does not discover the Root Ports
- * owned by Intel VMD within the MMCFG space. pci_reset_bus() applies
- * a reset to the parent of the PCI device supplied as argument. This
- * is why we pass a child device, so the reset can be triggered at
- * the Intel bridge level and propagated to all the children in the
- * hierarchy.
- */
- list_for_each_entry(child, &vmd->bus->children, node) {
- if (!list_empty(&child->devices)) {
- dev = list_first_entry(&child->devices,
- struct pci_dev, bus_list);
- ret = pci_reset_bus(dev);
- if (ret)
- pci_warn(dev, "can't reset device: %d\n", ret);
-
- break;
- }
- }
-
- pci_assign_unassigned_bus_resources(vmd->bus);
-
- pci_walk_bus(vmd->bus, vmd_pm_enable_quirk, &features);
-
- /*
- * VMD root buses are virtual and don't return true on pci_is_pcie()
- * and will fail pcie_bus_configure_settings() early. It can instead be
- * run on each of the real root ports.
- */
- list_for_each_entry(child, &vmd->bus->children, node)
- pcie_bus_configure_settings(child);
-
- pci_bus_add_devices(vmd->bus);
-
- vmd_acpi_end();
return 0;
}
--
2.39.3
^ permalink raw reply related [flat|nested] 14+ messages in thread

* [PATCH v3 2/8] PCI: vmd: Add vmd_configure_cfgbar()
2024-11-22 8:52 [PATCH v3 0/8] VMD add second rootbus support Szymon Durawa
2024-11-22 8:52 ` [PATCH v3 1/8] PCI: vmd: Add vmd_bus_enumeration() Szymon Durawa
@ 2024-11-22 8:52 ` Szymon Durawa
2024-11-22 8:52 ` [PATCH v3 3/8] PCI: vmd: Add vmd_configure_membar() and vmd_configure_membar1_membar2() Szymon Durawa
` (6 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Szymon Durawa @ 2024-11-22 8:52 UTC (permalink / raw)
To: helgaas
Cc: Szymon Durawa, Bjorn Helgaas, Dan Williams, Lukas Wunner,
linux-pci, Nirmal Patel, Mariusz Tkaczyk
Move the VMD CFGBAR initialization code to a new helper
vmd_configure_cfgbar(). No functional changes.
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 20 +++++++++++++-------
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index fb66910f9204..bb09114068f5 100755
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -778,6 +778,18 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
return 0;
}
+static void vmd_configure_cfgbar(struct vmd_dev *vmd)
+{
+ struct resource *res = &vmd->dev->resource[VMD_CFGBAR];
+
+ vmd->resources[0] = (struct resource){
+ .name = "VMD CFGBAR",
+ .start = vmd->busn_start,
+ .end = vmd->busn_start + (resource_size(res) >> 20) - 1,
+ .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
+ };
+}
+
static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
{
struct pci_bus *child;
@@ -864,13 +876,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
return ret;
}
- res = &vmd->dev->resource[VMD_CFGBAR];
- vmd->resources[0] = (struct resource) {
- .name = "VMD CFGBAR",
- .start = vmd->busn_start,
- .end = vmd->busn_start + (resource_size(res) >> 20) - 1,
- .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
- };
+ vmd_configure_cfgbar(vmd);
/*
* If the window is below 4GB, clear IORESOURCE_MEM_64 so we can
--
2.39.3
* [PATCH v3 3/8] PCI: vmd: Add vmd_configure_membar() and vmd_configure_membar1_membar2()
2024-11-22 8:52 [PATCH v3 0/8] VMD add second rootbus support Szymon Durawa
2024-11-22 8:52 ` [PATCH v3 1/8] PCI: vmd: Add vmd_bus_enumeration() Szymon Durawa
2024-11-22 8:52 ` [PATCH v3 2/8] PCI: vmd: Add vmd_configure_cfgbar() Szymon Durawa
@ 2024-11-22 8:52 ` Szymon Durawa
2024-11-22 8:52 ` [PATCH v3 4/8] PCI: vmd: Add vmd_create_bus() Szymon Durawa
` (5 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Szymon Durawa @ 2024-11-22 8:52 UTC (permalink / raw)
To: helgaas
Cc: Szymon Durawa, Bjorn Helgaas, Dan Williams, Lukas Wunner,
linux-pci, Nirmal Patel, Mariusz Tkaczyk
Move the MEMBAR1 and MEMBAR2 register initialization code to new helpers
vmd_configure_membar() and vmd_configure_membar1_membar2(). No functional
changes.
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 80 ++++++++++++++++++++++--------------
1 file changed, 50 insertions(+), 30 deletions(-)
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index bb09114068f5..240be800ae96 100755
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -790,6 +790,48 @@ static void vmd_configure_cfgbar(struct vmd_dev *vmd)
};
}
+/*
+ * vmd_configure_membar - Configure VMD MemBAR register, which points
+ * to MMIO address assigned by the OS or BIOS.
+ * @vmd: the VMD device
+ * @resource_number: resource buffer number to be filled in
+ * @membar_number: number of the MemBAR
+ * @start_offset: 4K aligned offset applied to start of VMD's MEMBAR MMIO space
+ * @end_offset: 4K aligned offset applied to end of VMD's MEMBAR MMIO space
+ *
+ * Function fills resource buffer inside the VMD structure.
+ */
+static void vmd_configure_membar(struct vmd_dev *vmd, u8 resource_number,
+ u8 membar_number, resource_size_t start_offset,
+ resource_size_t end_offset)
+{
+ u32 upper_bits;
+ unsigned long flags;
+
+ struct resource *res = &vmd->dev->resource[membar_number];
+
+ upper_bits = upper_32_bits(res->end);
+ flags = res->flags & ~IORESOURCE_SIZEALIGN;
+ if (!upper_bits)
+ flags &= ~IORESOURCE_MEM_64;
+
+ vmd->resources[resource_number] = (struct resource){
+ .name = kasprintf(GFP_KERNEL, "VMD MEMBAR%d",
+ membar_number / 2),
+ .start = res->start + start_offset,
+ .end = res->end - end_offset,
+ .flags = flags,
+ .parent = res,
+ };
+}
+
+static void vmd_configure_membar1_membar2(struct vmd_dev *vmd,
+ resource_size_t mbar2_ofs)
+{
+ vmd_configure_membar(vmd, 1, VMD_MEMBAR1, 0, 0);
+ vmd_configure_membar(vmd, 2, VMD_MEMBAR2, mbar2_ofs, 0);
+}
+
static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
{
struct pci_bus *child;
@@ -841,9 +883,6 @@ static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
{
struct pci_sysdata *sd = &vmd->sysdata;
- struct resource *res;
- u32 upper_bits;
- unsigned long flags;
LIST_HEAD(resources);
resource_size_t offset[2] = {0};
resource_size_t membar2_offset = 0x2000;
@@ -890,36 +929,12 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
*
* The only way we could use a 64-bit non-prefetchable MEMBAR is
* if its address is <4GB so that we can convert it to a 32-bit
- * resource. To be visible to the host OS, all VMD endpoints must
+ * resource. To be visible to the host OS, all VMD endpoints must
* be initially configured by platform BIOS, which includes setting
- * up these resources. We can assume the device is configured
+ * up these resources. We can assume the device is configured
* according to the platform needs.
*/
- res = &vmd->dev->resource[VMD_MEMBAR1];
- upper_bits = upper_32_bits(res->end);
- flags = res->flags & ~IORESOURCE_SIZEALIGN;
- if (!upper_bits)
- flags &= ~IORESOURCE_MEM_64;
- vmd->resources[1] = (struct resource) {
- .name = "VMD MEMBAR1",
- .start = res->start,
- .end = res->end,
- .flags = flags,
- .parent = res,
- };
-
- res = &vmd->dev->resource[VMD_MEMBAR2];
- upper_bits = upper_32_bits(res->end);
- flags = res->flags & ~IORESOURCE_SIZEALIGN;
- if (!upper_bits)
- flags &= ~IORESOURCE_MEM_64;
- vmd->resources[2] = (struct resource) {
- .name = "VMD MEMBAR2",
- .start = res->start + membar2_offset,
- .end = res->end,
- .flags = flags,
- .parent = res,
- };
+ vmd_configure_membar1_membar2(vmd, membar2_offset);
sd->vmd_dev = vmd->dev;
sd->domain = vmd_find_free_domain();
@@ -1060,6 +1075,11 @@ static void vmd_remove(struct pci_dev *dev)
pci_stop_root_bus(vmd->bus);
sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
pci_remove_root_bus(vmd->bus);
+
+ /* CFGBAR is static, does not require releasing memory */
+ kfree(vmd->resources[1].name);
+ kfree(vmd->resources[2].name);
+
vmd_cleanup_srcu(vmd);
vmd_detach_resources(vmd);
vmd_remove_irq_domain(vmd);
--
2.39.3
* [PATCH v3 4/8] PCI: vmd: Add vmd_create_bus()
2024-11-22 8:52 [PATCH v3 0/8] VMD add second rootbus support Szymon Durawa
` (2 preceding siblings ...)
2024-11-22 8:52 ` [PATCH v3 3/8] PCI: vmd: Add vmd_configure_membar() and vmd_configure_membar1_membar2() Szymon Durawa
@ 2024-11-22 8:52 ` Szymon Durawa
2024-11-22 8:52 ` [PATCH v3 5/8] PCI: vmd: Replace hardcoded values with enum and defines Szymon Durawa
` (4 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Szymon Durawa @ 2024-11-22 8:52 UTC (permalink / raw)
To: helgaas
Cc: Szymon Durawa, Bjorn Helgaas, Dan Williams, Lukas Wunner,
linux-pci, Nirmal Patel, Mariusz Tkaczyk
Move the VMD bus initialization code to a new helper vmd_create_bus().
No functional changes.
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 54 +++++++++++++++++++++++-------------
1 file changed, 34 insertions(+), 20 deletions(-)
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 240be800ae96..24ca19a28ba7 100755
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -832,6 +832,36 @@ static void vmd_configure_membar1_membar2(struct vmd_dev *vmd,
vmd_configure_membar(vmd, 2, VMD_MEMBAR2, mbar2_ofs, 0);
}
+static int vmd_create_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
+ resource_size_t *offset)
+{
+ LIST_HEAD(resources);
+
+ pci_add_resource(&resources, &vmd->resources[0]);
+ pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
+ pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);
+
+ vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
+ &vmd_ops, sd, &resources);
+ if (!vmd->bus) {
+ pci_free_resource_list(&resources);
+ vmd_remove_irq_domain(vmd);
+ return -ENODEV;
+ }
+
+ vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
+ to_pci_host_bridge(vmd->bus->bridge));
+
+ vmd_attach_resources(vmd);
+ if (vmd->irq_domain)
+ dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
+ else
+ dev_set_msi_domain(&vmd->bus->dev,
+ dev_get_msi_domain(&vmd->dev->dev));
+
+ return 0;
+}
+
static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
{
struct pci_bus *child;
@@ -883,7 +913,6 @@ static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
{
struct pci_sysdata *sd = &vmd->sysdata;
- LIST_HEAD(resources);
resource_size_t offset[2] = {0};
resource_size_t membar2_offset = 0x2000;
int ret;
@@ -970,28 +999,13 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
vmd_set_msi_remapping(vmd, false);
}
- pci_add_resource(&resources, &vmd->resources[0]);
- pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
- pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);
+ ret = vmd_create_bus(vmd, sd, offset);
- vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
- &vmd_ops, sd, &resources);
- if (!vmd->bus) {
- pci_free_resource_list(&resources);
- vmd_remove_irq_domain(vmd);
- return -ENODEV;
+ if (ret) {
+ pci_err(vmd->dev, "Can't create bus: %d\n", ret);
+ return ret;
}
- vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
- to_pci_host_bridge(vmd->bus->bridge));
-
- vmd_attach_resources(vmd);
- if (vmd->irq_domain)
- dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
- else
- dev_set_msi_domain(&vmd->bus->dev,
- dev_get_msi_domain(&vmd->dev->dev));
-
WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
"domain"), "Can't create symlink to domain\n");
--
2.39.3
* [PATCH v3 5/8] PCI: vmd: Replace hardcoded values with enum and defines
2024-11-22 8:52 [PATCH v3 0/8] VMD add second rootbus support Szymon Durawa
` (3 preceding siblings ...)
2024-11-22 8:52 ` [PATCH v3 4/8] PCI: vmd: Add vmd_create_bus() Szymon Durawa
@ 2024-11-22 8:52 ` Szymon Durawa
2025-05-05 17:33 ` Bjorn Helgaas
2024-11-22 8:52 ` [PATCH v3 6/8] PCI: vmd: Convert bus and busn_start to an array Szymon Durawa
` (3 subsequent siblings)
8 siblings, 1 reply; 14+ messages in thread
From: Szymon Durawa @ 2024-11-22 8:52 UTC (permalink / raw)
To: helgaas
Cc: Szymon Durawa, Bjorn Helgaas, Dan Williams, Lukas Wunner,
linux-pci, Nirmal Patel, Mariusz Tkaczyk
Add an enum vmd_resource type to replace hardcoded values. Add defines
for the VMD bus start numbers based on the VMD bus restriction value. No
functional changes.
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 42 ++++++++++++++++++++++++------------
1 file changed, 28 insertions(+), 14 deletions(-)
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 24ca19a28ba7..1cd55c3686f3 100755
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -23,6 +23,10 @@
#define VMD_MEMBAR1 2
#define VMD_MEMBAR2 4
+#define VMD_RESTRICT_0_BUS_START 0
+#define VMD_RESTRICT_1_BUS_START 128
+#define VMD_RESTRICT_2_BUS_START 224
+
#define PCI_REG_VMCAP 0x40
#define BUS_RESTRICT_CAP(vmcap) (vmcap & 0x1)
#define PCI_REG_VMCONFIG 0x44
@@ -34,6 +38,13 @@
#define MB2_SHADOW_OFFSET 0x2000
#define MB2_SHADOW_SIZE 16
+enum vmd_resource {
+ VMD_RES_CFGBAR = 0, /* VMD Bus0 Config BAR */
+ VMD_RES_MBAR_1, /* VMD Bus0 Resource MemBAR 1 */
+ VMD_RES_MBAR_2, /* VMD Bus0 Resource MemBAR 2 */
+ VMD_RES_COUNT
+};
+
enum vmd_features {
/*
* Device may contain registers which hint the physical location of the
@@ -132,7 +143,7 @@ struct vmd_dev {
struct vmd_irq_list *irqs;
struct pci_sysdata sysdata;
- struct resource resources[3];
+ struct resource resources[VMD_RES_COUNT];
struct irq_domain *irq_domain;
struct pci_bus *bus;
u8 busn_start;
@@ -516,7 +527,7 @@ static inline void vmd_acpi_end(void) { }
static void vmd_domain_reset(struct vmd_dev *vmd)
{
- u16 bus, max_buses = resource_size(&vmd->resources[0]);
+ u16 bus, max_buses = resource_size(&vmd->resources[VMD_RES_CFGBAR]);
u8 dev, functions, fn, hdr_type;
char __iomem *base;
@@ -564,8 +575,8 @@ static void vmd_domain_reset(struct vmd_dev *vmd)
static void vmd_attach_resources(struct vmd_dev *vmd)
{
- vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[1];
- vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[2];
+ vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[VMD_RES_MBAR_1];
+ vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[VMD_RES_MBAR_2];
}
static void vmd_detach_resources(struct vmd_dev *vmd)
@@ -655,13 +666,13 @@ static int vmd_get_bus_number_start(struct vmd_dev *vmd)
switch (BUS_RESTRICT_CFG(reg)) {
case 0:
- vmd->busn_start = 0;
+ vmd->busn_start = VMD_RESTRICT_0_BUS_START;
break;
case 1:
- vmd->busn_start = 128;
+ vmd->busn_start = VMD_RESTRICT_1_BUS_START;
break;
case 2:
- vmd->busn_start = 224;
+ vmd->busn_start = VMD_RESTRICT_2_BUS_START;
break;
default:
pci_err(dev, "Unknown Bus Offset Setting (%d)\n",
@@ -782,7 +793,7 @@ static void vmd_configure_cfgbar(struct vmd_dev *vmd)
{
struct resource *res = &vmd->dev->resource[VMD_CFGBAR];
- vmd->resources[0] = (struct resource){
+ vmd->resources[VMD_RES_CFGBAR] = (struct resource){
.name = "VMD CFGBAR",
.start = vmd->busn_start,
.end = vmd->busn_start + (resource_size(res) >> 20) - 1,
@@ -801,7 +812,8 @@ static void vmd_configure_cfgbar(struct vmd_dev *vmd)
*
* Function fills resource buffer inside the VMD structure.
*/
-static void vmd_configure_membar(struct vmd_dev *vmd, u8 resource_number,
+static void vmd_configure_membar(struct vmd_dev *vmd,
+ enum vmd_resource resource_number,
u8 membar_number, resource_size_t start_offset,
resource_size_t end_offset)
{
@@ -837,9 +849,11 @@ static int vmd_create_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
{
LIST_HEAD(resources);
- pci_add_resource(&resources, &vmd->resources[0]);
- pci_add_resource_offset(&resources, &vmd->resources[1], offset[0]);
- pci_add_resource_offset(&resources, &vmd->resources[2], offset[1]);
+ pci_add_resource(&resources, &vmd->resources[VMD_RES_CFGBAR]);
+ pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_1],
+ offset[0]);
+ pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_2],
+ offset[1]);
vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
&vmd_ops, sd, &resources);
@@ -1091,8 +1105,8 @@ static void vmd_remove(struct pci_dev *dev)
pci_remove_root_bus(vmd->bus);
/* CFGBAR is static, does not require releasing memory */
- kfree(vmd->resources[1].name);
- kfree(vmd->resources[2].name);
+ kfree(vmd->resources[VMD_RES_MBAR_1].name);
+ kfree(vmd->resources[VMD_RES_MBAR_2].name);
vmd_cleanup_srcu(vmd);
vmd_detach_resources(vmd);
--
2.39.3
* Re: [PATCH v3 5/8] PCI: vmd: Replace hardcoded values with enum and defines
2024-11-22 8:52 ` [PATCH v3 5/8] PCI: vmd: Replace hardcoded values with enum and defines Szymon Durawa
@ 2025-05-05 17:33 ` Bjorn Helgaas
0 siblings, 0 replies; 14+ messages in thread
From: Bjorn Helgaas @ 2025-05-05 17:33 UTC (permalink / raw)
To: Szymon Durawa
Cc: Bjorn Helgaas, Dan Williams, Lukas Wunner, linux-pci,
Nirmal Patel, Mariusz Tkaczyk
On Fri, Nov 22, 2024 at 09:52:12AM +0100, Szymon Durawa wrote:
> Add enum vmd_resource type to replace hardcoded values. Add defines for
> vmd bus start number based on VMD restriction value. No functional
> changes.
>
> Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
> Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
> Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
> ---
> drivers/pci/controller/vmd.c | 42 ++++++++++++++++++++++++------------
> 1 file changed, 28 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index 24ca19a28ba7..1cd55c3686f3 100755
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -23,6 +23,10 @@
> #define VMD_MEMBAR1 2
> #define VMD_MEMBAR2 4
>
> +#define VMD_RESTRICT_0_BUS_START 0
> +#define VMD_RESTRICT_1_BUS_START 128
> +#define VMD_RESTRICT_2_BUS_START 224
Oh, you added this in this series! I assumed they were already there.
Please convert these to hex.
* [PATCH v3 6/8] PCI: vmd: Convert bus and busn_start to an array
2024-11-22 8:52 [PATCH v3 0/8] VMD add second rootbus support Szymon Durawa
` (4 preceding siblings ...)
2024-11-22 8:52 ` [PATCH v3 5/8] PCI: vmd: Replace hardcoded values with enum and defines Szymon Durawa
@ 2024-11-22 8:52 ` Szymon Durawa
2024-11-22 8:52 ` [PATCH v3 7/8] PCI: vmd: Add support for second rootbus under VMD Szymon Durawa
` (2 subsequent siblings)
8 siblings, 0 replies; 14+ messages in thread
From: Szymon Durawa @ 2024-11-22 8:52 UTC (permalink / raw)
To: helgaas
Cc: Szymon Durawa, Bjorn Helgaas, Dan Williams, Lukas Wunner,
linux-pci, Nirmal Patel, Mariusz Tkaczyk
Convert bus and busn_start from scalars to arrays to support
multiple VMD buses in the future. No functional changes.
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 49 +++++++++++++++++++++---------------
1 file changed, 29 insertions(+), 20 deletions(-)
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 1cd55c3686f3..6d8397b5ebee 100755
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -45,6 +45,11 @@ enum vmd_resource {
VMD_RES_COUNT
};
+enum vmd_rootbus {
+ VMD_BUS_0 = 0,
+ VMD_BUS_COUNT
+};
+
enum vmd_features {
/*
* Device may contain registers which hint the physical location of the
@@ -145,8 +150,8 @@ struct vmd_dev {
struct pci_sysdata sysdata;
struct resource resources[VMD_RES_COUNT];
struct irq_domain *irq_domain;
- struct pci_bus *bus;
- u8 busn_start;
+ struct pci_bus *bus[VMD_BUS_COUNT];
+ u8 busn_start[VMD_BUS_COUNT];
u8 first_vec;
char *name;
int instance;
@@ -389,7 +394,7 @@ static void vmd_remove_irq_domain(struct vmd_dev *vmd)
static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
unsigned int devfn, int reg, int len)
{
- unsigned int busnr_ecam = bus->number - vmd->busn_start;
+ unsigned int busnr_ecam = bus->number - vmd->busn_start[VMD_BUS_0];
u32 offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
if (offset + len >= resource_size(&vmd->dev->resource[VMD_CFGBAR]))
@@ -666,13 +671,13 @@ static int vmd_get_bus_number_start(struct vmd_dev *vmd)
switch (BUS_RESTRICT_CFG(reg)) {
case 0:
- vmd->busn_start = VMD_RESTRICT_0_BUS_START;
+ vmd->busn_start[VMD_BUS_0] = VMD_RESTRICT_0_BUS_START;
break;
case 1:
- vmd->busn_start = VMD_RESTRICT_1_BUS_START;
+ vmd->busn_start[VMD_BUS_0] = VMD_RESTRICT_1_BUS_START;
break;
case 2:
- vmd->busn_start = VMD_RESTRICT_2_BUS_START;
+ vmd->busn_start[VMD_BUS_0] = VMD_RESTRICT_2_BUS_START;
break;
default:
pci_err(dev, "Unknown Bus Offset Setting (%d)\n",
@@ -795,8 +800,9 @@ static void vmd_configure_cfgbar(struct vmd_dev *vmd)
vmd->resources[VMD_RES_CFGBAR] = (struct resource){
.name = "VMD CFGBAR",
- .start = vmd->busn_start,
- .end = vmd->busn_start + (resource_size(res) >> 20) - 1,
+ .start = vmd->busn_start[VMD_BUS_0],
+ .end = vmd->busn_start[VMD_BUS_0] +
+ (resource_size(res) >> 20) - 1,
.flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
};
}
@@ -855,22 +861,24 @@ static int vmd_create_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_2],
offset[1]);
- vmd->bus = pci_create_root_bus(&vmd->dev->dev, vmd->busn_start,
- &vmd_ops, sd, &resources);
- if (!vmd->bus) {
+ vmd->bus[VMD_BUS_0] = pci_create_root_bus(&vmd->dev->dev,
+ vmd->busn_start[VMD_BUS_0],
+ &vmd_ops, sd, &resources);
+ if (!vmd->bus[VMD_BUS_0]) {
pci_free_resource_list(&resources);
vmd_remove_irq_domain(vmd);
return -ENODEV;
}
- vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
- to_pci_host_bridge(vmd->bus->bridge));
+ vmd_copy_host_bridge_flags(
+ pci_find_host_bridge(vmd->dev->bus),
+ to_pci_host_bridge(vmd->bus[VMD_BUS_0]->bridge));
vmd_attach_resources(vmd);
if (vmd->irq_domain)
- dev_set_msi_domain(&vmd->bus->dev, vmd->irq_domain);
+ dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev, vmd->irq_domain);
else
- dev_set_msi_domain(&vmd->bus->dev,
+ dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev,
dev_get_msi_domain(&vmd->dev->dev));
return 0;
@@ -1020,10 +1028,11 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
return ret;
}
- WARN(sysfs_create_link(&vmd->dev->dev.kobj, &vmd->bus->dev.kobj,
- "domain"), "Can't create symlink to domain\n");
+ WARN(sysfs_create_link(&vmd->dev->dev.kobj,
+ &vmd->bus[VMD_BUS_0]->dev.kobj, "domain"),
+ "Can't create symlink to domain\n");
- vmd_bus_enumeration(vmd->bus, features);
+ vmd_bus_enumeration(vmd->bus[VMD_BUS_0], features);
return 0;
}
@@ -1100,9 +1109,9 @@ static void vmd_remove(struct pci_dev *dev)
{
struct vmd_dev *vmd = pci_get_drvdata(dev);
- pci_stop_root_bus(vmd->bus);
+ pci_stop_root_bus(vmd->bus[VMD_BUS_0]);
sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
- pci_remove_root_bus(vmd->bus);
+ pci_remove_root_bus(vmd->bus[VMD_BUS_0]);
/* CFGBAR is static, does not require releasing memory */
kfree(vmd->resources[VMD_RES_MBAR_1].name);
--
2.39.3
* [PATCH v3 7/8] PCI: vmd: Add support for second rootbus under VMD
2024-11-22 8:52 [PATCH v3 0/8] VMD add second rootbus support Szymon Durawa
` (5 preceding siblings ...)
2024-11-22 8:52 ` [PATCH v3 6/8] PCI: vmd: Convert bus and busn_start to an array Szymon Durawa
@ 2024-11-22 8:52 ` Szymon Durawa
2025-05-03 13:20 ` Manivannan Sadhasivam
2025-05-05 17:31 ` Bjorn Helgaas
2024-11-22 8:52 ` [PATCH v3 8/8] PCI: vmd: Add workaround for bus number hardwired to fixed non-zero value Szymon Durawa
2025-01-27 20:59 ` [PATCH v3 0/8] VMD add second rootbus support Durawa, Szymon
8 siblings, 2 replies; 14+ messages in thread
From: Szymon Durawa @ 2024-11-22 8:52 UTC (permalink / raw)
To: helgaas
Cc: Szymon Durawa, Bjorn Helgaas, Dan Williams, Lukas Wunner,
linux-pci, Nirmal Patel, Mariusz Tkaczyk
Starting with Intel Arrow Lake, a VMD enhancement introduces second root
bus support with a fixed root bus number (0x80). This means all three MMIO
BARs exposed by VMD are now shared between both buses (the current BUS0
and the new BUS1).
Add enumeration of the new BUS1 and divide the MMIO space so it is shared
between both root buses. Due to enumeration issues with a root bus
hardwired to a fixed non-zero value, this patch works together with the
workaround proposed in the next patch. Without that workaround, the user
will not see the devices attached to the BUS1 root bus.
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 208 ++++++++++++++++++++++++++++++-----
1 file changed, 180 insertions(+), 28 deletions(-)
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 6d8397b5ebee..6cd14c28fd4e 100755
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -26,6 +26,7 @@
#define VMD_RESTRICT_0_BUS_START 0
#define VMD_RESTRICT_1_BUS_START 128
#define VMD_RESTRICT_2_BUS_START 224
+#define VMD_RESTRICT_3_BUS_START 225
#define PCI_REG_VMCAP 0x40
#define BUS_RESTRICT_CAP(vmcap) (vmcap & 0x1)
@@ -38,15 +39,33 @@
#define MB2_SHADOW_OFFSET 0x2000
#define MB2_SHADOW_SIZE 16
+#define VMD_PRIMARY_BUS0 0x00
+#define VMD_PRIMARY_BUS1 0x80
+#define VMD_BUSRANGE0 0xc8
+#define VMD_BUSRANGE1 0xcc
+#define VMD_MEMBAR1_OFFSET 0xd0
+#define VMD_MEMBAR2_OFFSET1 0xd8
+#define VMD_MEMBAR2_OFFSET2 0xdc
+#define VMD_BUS_END(busr) ((busr >> 8) & 0xff)
+#define VMD_BUS_START(busr) (busr & 0x00ff)
+
+/*
+ * Add VMD resources for BUS1, it will share the same MMIO space with
+ * previous VMD resources.
+ */
enum vmd_resource {
VMD_RES_CFGBAR = 0, /* VMD Bus0 Config BAR */
VMD_RES_MBAR_1, /* VMD Bus0 Resource MemBAR 1 */
VMD_RES_MBAR_2, /* VMD Bus0 Resource MemBAR 2 */
+ VMD_RES_BUS1_CFGBAR, /* VMD Bus1 Config BAR */
+ VMD_RES_BUS1_MBAR_1, /* VMD Bus1 Resource MemBAR 1 */
+ VMD_RES_BUS1_MBAR_2, /* VMD Bus1 Resource MemBAR 2 */
VMD_RES_COUNT
};
enum vmd_rootbus {
VMD_BUS_0 = 0,
+ VMD_BUS_1,
VMD_BUS_COUNT
};
@@ -90,6 +109,12 @@ enum vmd_features {
* proper power management of the SoC.
*/
VMD_FEAT_BIOS_PM_QUIRK = (1 << 5),
+
+ /*
+ * Starting from Intel Arrow Lake, VMD devices have their VMD rootports
+ * connected to additional BUS1 rootport.
+ */
+ VMD_FEAT_HAS_BUS1_ROOTBUS = (1 << 6)
};
#define VMD_BIOS_PM_QUIRK_LTR 0x1003 /* 3145728 ns */
@@ -97,7 +122,8 @@ enum vmd_features {
#define VMD_FEATS_CLIENT (VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | \
VMD_FEAT_HAS_BUS_RESTRICTIONS | \
VMD_FEAT_OFFSET_FIRST_VECTOR | \
- VMD_FEAT_BIOS_PM_QUIRK)
+ VMD_FEAT_BIOS_PM_QUIRK | \
+ VMD_FEAT_HAS_BUS1_ROOTBUS)
static DEFINE_IDA(vmd_instance_ida);
@@ -155,6 +181,7 @@ struct vmd_dev {
u8 first_vec;
char *name;
int instance;
+ bool bus1_rootbus;
};
static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
@@ -532,11 +559,16 @@ static inline void vmd_acpi_end(void) { }
static void vmd_domain_reset(struct vmd_dev *vmd)
{
- u16 bus, max_buses = resource_size(&vmd->resources[VMD_RES_CFGBAR]);
+ u16 bus, bus_cnt = resource_size(&vmd->resources[VMD_RES_CFGBAR]);
u8 dev, functions, fn, hdr_type;
char __iomem *base;
- for (bus = 0; bus < max_buses; bus++) {
+ if (vmd->bus1_rootbus) {
+ bus_cnt += resource_size(&vmd->resources[VMD_RES_BUS1_CFGBAR]);
+ bus_cnt += 2;
+ }
+
+ for (bus = 0; bus < bus_cnt; bus++) {
for (dev = 0; dev < 32; dev++) {
base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
PCI_DEVFN(dev, 0), 0);
@@ -582,12 +614,24 @@ static void vmd_attach_resources(struct vmd_dev *vmd)
{
vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[VMD_RES_MBAR_1];
vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[VMD_RES_MBAR_2];
+
+ if (vmd->bus1_rootbus) {
+ vmd->resources[VMD_RES_MBAR_1].sibling =
+ &vmd->resources[VMD_RES_BUS1_MBAR_1];
+ vmd->resources[VMD_RES_MBAR_2].sibling =
+ &vmd->resources[VMD_RES_BUS1_MBAR_2];
+ }
}
static void vmd_detach_resources(struct vmd_dev *vmd)
{
vmd->dev->resource[VMD_MEMBAR1].child = NULL;
vmd->dev->resource[VMD_MEMBAR2].child = NULL;
+
+ if (vmd->bus1_rootbus) {
+ vmd->resources[VMD_RES_MBAR_1].sibling = NULL;
+ vmd->resources[VMD_RES_MBAR_2].sibling = NULL;
+ }
}
/*
@@ -660,7 +704,7 @@ static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint,
return 0;
}
-static int vmd_get_bus_number_start(struct vmd_dev *vmd)
+static int vmd_get_bus_number_start(struct vmd_dev *vmd, unsigned long features)
{
struct pci_dev *dev = vmd->dev;
u16 reg;
@@ -679,6 +723,19 @@ static int vmd_get_bus_number_start(struct vmd_dev *vmd)
case 2:
vmd->busn_start[VMD_BUS_0] = VMD_RESTRICT_2_BUS_START;
break;
+ case 3:
+ if (!(features & VMD_FEAT_HAS_BUS1_ROOTBUS)) {
+ pci_err(dev, "VMD Bus Restriction detected type %d, but BUS1 Rootbus is not supported, aborting.\n",
+ BUS_RESTRICT_CFG(reg));
+ return -ENODEV;
+ }
+
+ /* BUS0 start number */
+ vmd->busn_start[VMD_BUS_0] = VMD_RESTRICT_2_BUS_START;
+ /* BUS1 start number */
+ vmd->busn_start[VMD_BUS_1] = VMD_RESTRICT_3_BUS_START;
+ vmd->bus1_rootbus = true;
+ break;
default:
pci_err(dev, "Unknown Bus Offset Setting (%d)\n",
BUS_RESTRICT_CFG(reg));
@@ -805,6 +862,30 @@ static void vmd_configure_cfgbar(struct vmd_dev *vmd)
(resource_size(res) >> 20) - 1,
.flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
};
+
+ if (vmd->bus1_rootbus) {
+ u16 bus0_range = 0;
+ u16 bus1_range = 0;
+
+ pci_read_config_word(vmd->dev, VMD_BUSRANGE0, &bus0_range);
+ pci_read_config_word(vmd->dev, VMD_BUSRANGE1, &bus1_range);
+
+ /*
+ * Resize BUS0 CFGBAR range to make space for BUS1
+ * owned devices by adjusting range end with value stored in
+ * VMD_BUSRANGE0 register.
+ */
+ vmd->resources[VMD_RES_CFGBAR].start = VMD_BUS_START(bus0_range);
+ vmd->resources[VMD_RES_CFGBAR].end = VMD_BUS_END(bus0_range);
+
+ vmd->resources[VMD_RES_BUS1_CFGBAR] = (struct resource){
+ .name = "VMD CFGBAR BUS1",
+ .start = VMD_BUS_START(bus1_range),
+ .end = VMD_BUS_END(bus1_range),
+ .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
+ .parent = res,
+ };
+ }
}
/*
@@ -834,8 +915,9 @@ static void vmd_configure_membar(struct vmd_dev *vmd,
flags &= ~IORESOURCE_MEM_64;
vmd->resources[resource_number] = (struct resource){
- .name = kasprintf(GFP_KERNEL, "VMD MEMBAR%d",
- membar_number / 2),
+ .name = kasprintf(
+ GFP_KERNEL, "VMD MEMBAR%d %s", membar_number / 2,
+ resource_number > VMD_RES_MBAR_2 ? "BUS1" : ""),
.start = res->start + start_offset,
.end = res->end - end_offset,
.flags = flags,
@@ -846,41 +928,80 @@ static void vmd_configure_membar(struct vmd_dev *vmd,
static void vmd_configure_membar1_membar2(struct vmd_dev *vmd,
resource_size_t mbar2_ofs)
{
- vmd_configure_membar(vmd, 1, VMD_MEMBAR1, 0, 0);
- vmd_configure_membar(vmd, 2, VMD_MEMBAR2, mbar2_ofs, 0);
+ if (vmd->bus1_rootbus) {
+ u32 bus1_mbar1_ofs = 0;
+ u64 bus1_mbar2_ofs = 0;
+ u32 reg;
+
+ pci_read_config_dword(vmd->dev, VMD_MEMBAR1_OFFSET,
+ &bus1_mbar1_ofs);
+
+ pci_read_config_dword(vmd->dev, VMD_MEMBAR2_OFFSET1, &reg);
+ bus1_mbar2_ofs = reg;
+
+ pci_read_config_dword(vmd->dev, VMD_MEMBAR2_OFFSET2, &reg);
+ bus1_mbar2_ofs |= (u64)reg << 32;
+
+ /*
+ * Resize BUS MEMBAR1 and MEMBAR2 ranges to make space
+ * for BUS1 owned devices by adjusting range end with values
+ * stored in VMD_MEMBAR1_OFFSET and VMD_MEMBAR2_OFFSET registers
+ */
+ vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0,
+ bus1_mbar1_ofs);
+ vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2,
+ mbar2_ofs, bus1_mbar2_ofs - mbar2_ofs);
+
+ vmd_configure_membar(vmd, VMD_RES_BUS1_MBAR_1, VMD_MEMBAR1,
+ bus1_mbar1_ofs, 0);
+ vmd_configure_membar(vmd, VMD_RES_BUS1_MBAR_2, VMD_MEMBAR2,
+ mbar2_ofs + bus1_mbar2_ofs, 0);
+ } else {
+ vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0, 0);
+ vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2,
+ mbar2_ofs, 0);
+ }
}
-static int vmd_create_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
- resource_size_t *offset)
+static int vmd_create_bus(struct vmd_dev *vmd, enum vmd_rootbus bus_number,
+ struct pci_sysdata *sd, resource_size_t *offset)
{
+ u8 cfgbar = bus_number * 3;
+ u8 membar1 = cfgbar + 1;
+ u8 membar2 = cfgbar + 2;
+ struct pci_bus *vmd_bus;
LIST_HEAD(resources);
- pci_add_resource(&resources, &vmd->resources[VMD_RES_CFGBAR]);
- pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_1],
+ pci_add_resource(&resources, &vmd->resources[cfgbar]);
+ pci_add_resource_offset(&resources, &vmd->resources[membar1],
offset[0]);
- pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_2],
+ pci_add_resource_offset(&resources, &vmd->resources[membar2],
offset[1]);
- vmd->bus[VMD_BUS_0] = pci_create_root_bus(&vmd->dev->dev,
- vmd->busn_start[VMD_BUS_0],
- &vmd_ops, sd, &resources);
- if (!vmd->bus[VMD_BUS_0]) {
+ vmd_bus = pci_create_root_bus(&vmd->dev->dev,
+ vmd->busn_start[bus_number], &vmd_ops, sd,
+ &resources);
+
+ if (!vmd_bus) {
pci_free_resource_list(&resources);
- vmd_remove_irq_domain(vmd);
+
+ if (bus_number == VMD_PRIMARY_BUS0)
+ vmd_remove_irq_domain(vmd);
return -ENODEV;
}
- vmd_copy_host_bridge_flags(
- pci_find_host_bridge(vmd->dev->bus),
- to_pci_host_bridge(vmd->bus[VMD_BUS_0]->bridge));
+ vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
+ to_pci_host_bridge(vmd_bus->bridge));
vmd_attach_resources(vmd);
if (vmd->irq_domain)
- dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev, vmd->irq_domain);
+ dev_set_msi_domain(&vmd_bus->dev, vmd->irq_domain);
else
- dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev,
+ dev_set_msi_domain(&vmd_bus->dev,
dev_get_msi_domain(&vmd->dev->dev));
+ vmd->bus[bus_number] = vmd_bus;
+
return 0;
}
@@ -893,7 +1014,9 @@ static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
vmd_acpi_begin();
pci_scan_child_bus(bus);
- vmd_domain_reset(vmd_from_bus(bus));
+
+ if (bus->primary == VMD_PRIMARY_BUS0)
+ vmd_domain_reset(vmd_from_bus(bus));
/*
* When Intel VMD is enabled, the OS does not discover the Root Ports
@@ -961,7 +1084,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
* limits the bus range to between 0-127, 128-255, or 224-255
*/
if (features & VMD_FEAT_HAS_BUS_RESTRICTIONS) {
- ret = vmd_get_bus_number_start(vmd);
+ ret = vmd_get_bus_number_start(vmd, features);
if (ret)
return ret;
}
@@ -1021,7 +1144,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
vmd_set_msi_remapping(vmd, false);
}
- ret = vmd_create_bus(vmd, sd, offset);
+ ret = vmd_create_bus(vmd, VMD_BUS_0, sd, offset);
if (ret) {
pci_err(vmd->dev, "Can't create bus: %d\n", ret);
@@ -1034,6 +1157,27 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
vmd_bus_enumeration(vmd->bus[VMD_BUS_0], features);
+ if (vmd->bus1_rootbus) {
+ ret = vmd_create_bus(vmd, VMD_BUS_1, sd, offset);
+ if (ret) {
+ pci_err(vmd->dev, "Can't create BUS1: %d\n", ret);
+ return ret;
+ }
+
+ /*
+ * Primary bus is not set by pci_create_root_bus(), it is
+ * updated here
+ */
+ vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_BUS1;
+
+ WARN(sysfs_create_link(&vmd->dev->dev.kobj,
+ &vmd->bus[VMD_BUS_1]->dev.kobj,
+ "domain1"),
+ "Can't create symlink to domain1\n");
+
+ vmd_bus_enumeration(vmd->bus[VMD_BUS_1], features);
+ }
+
return 0;
}
@@ -1113,10 +1257,18 @@ static void vmd_remove(struct pci_dev *dev)
sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
pci_remove_root_bus(vmd->bus[VMD_BUS_0]);
- /* CFGBAR is static, does not require releasing memory */
+ /* CFGBARs are static, do not require releasing memory */
kfree(vmd->resources[VMD_RES_MBAR_1].name);
kfree(vmd->resources[VMD_RES_MBAR_2].name);
+ if (vmd->bus1_rootbus) {
+ pci_stop_root_bus(vmd->bus[VMD_BUS_1]);
+ sysfs_remove_link(&vmd->dev->dev.kobj, "domain1");
+ pci_remove_root_bus(vmd->bus[VMD_BUS_1]);
+ kfree(vmd->resources[VMD_RES_BUS1_MBAR_1].name);
+ kfree(vmd->resources[VMD_RES_BUS1_MBAR_2].name);
+ }
+
vmd_cleanup_srcu(vmd);
vmd_detach_resources(vmd);
vmd_remove_irq_domain(vmd);
@@ -1202,4 +1354,4 @@ module_pci_driver(vmd_drv);
MODULE_AUTHOR("Intel Corporation");
MODULE_DESCRIPTION("Volume Management Device driver");
MODULE_LICENSE("GPL v2");
-MODULE_VERSION("0.6");
+MODULE_VERSION("0.7");
--
2.39.3
^ permalink raw reply related [flat|nested] 14+ messages in thread

* Re: [PATCH v3 7/8] PCI: vmd: Add support for second rootbus under VMD
2024-11-22 8:52 ` [PATCH v3 7/8] PCI: vmd: Add support for second rootbus under VMD Szymon Durawa
@ 2025-05-03 13:20 ` Manivannan Sadhasivam
2025-05-05 17:31 ` Bjorn Helgaas
1 sibling, 0 replies; 14+ messages in thread
From: Manivannan Sadhasivam @ 2025-05-03 13:20 UTC (permalink / raw)
To: Szymon Durawa
Cc: helgaas, Bjorn Helgaas, Dan Williams, Lukas Wunner, linux-pci,
Nirmal Patel, Mariusz Tkaczyk
On Fri, Nov 22, 2024 at 09:52:14AM +0100, Szymon Durawa wrote:
> Starting from Intel Arrow Lake VMD enhancement introduces second rootbus
> support with fixed root bus number (0x80). It means that all 3 MMIO BARs
> exposed by VMD are shared now between both buses (current BUS0 and
> new BUS1).
>
> Add new BUS1 enumeration and divide MMIO space to be shared between
> both rootbuses. Due to enumeration issues with rootbus hardwired to a
> fixed non-zero value, this patch will work with a workaround proposed
> in next patch. Without workaround user won't see attached devices for BUS1
> rootbus.
>
Series mostly looks good to me. But I'm not comfortable with patches 7 and 8 due
to the dependency. Patch 7 is adding the second root bus, but the devices will
only be detected after applying patch 8, which will break bisecting.
Can you try moving patch 8 before 7? Or merge both of them?
- Mani
> Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
> Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
> Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
> ---
> drivers/pci/controller/vmd.c | 208 ++++++++++++++++++++++++++++++-----
> 1 file changed, 180 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index 6d8397b5ebee..6cd14c28fd4e 100755
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -26,6 +26,7 @@
> #define VMD_RESTRICT_0_BUS_START 0
> #define VMD_RESTRICT_1_BUS_START 128
> #define VMD_RESTRICT_2_BUS_START 224
> +#define VMD_RESTRICT_3_BUS_START 225
>
> #define PCI_REG_VMCAP 0x40
> #define BUS_RESTRICT_CAP(vmcap) (vmcap & 0x1)
> @@ -38,15 +39,33 @@
> #define MB2_SHADOW_OFFSET 0x2000
> #define MB2_SHADOW_SIZE 16
>
> +#define VMD_PRIMARY_BUS0 0x00
> +#define VMD_PRIMARY_BUS1 0x80
> +#define VMD_BUSRANGE0 0xc8
> +#define VMD_BUSRANGE1 0xcc
> +#define VMD_MEMBAR1_OFFSET 0xd0
> +#define VMD_MEMBAR2_OFFSET1 0xd8
> +#define VMD_MEMBAR2_OFFSET2 0xdc
> +#define VMD_BUS_END(busr) ((busr >> 8) & 0xff)
> +#define VMD_BUS_START(busr) (busr & 0x00ff)
> +
> +/*
> + * Add VMD resources for BUS1, it will share the same MMIO space with
> + * previous VMD resources.
> + */
> enum vmd_resource {
> VMD_RES_CFGBAR = 0, /* VMD Bus0 Config BAR */
> VMD_RES_MBAR_1, /* VMD Bus0 Resource MemBAR 1 */
> VMD_RES_MBAR_2, /* VMD Bus0 Resource MemBAR 2 */
> + VMD_RES_BUS1_CFGBAR, /* VMD Bus1 Config BAR */
> + VMD_RES_BUS1_MBAR_1, /* VMD Bus1 Resource MemBAR 1 */
> + VMD_RES_BUS1_MBAR_2, /* VMD Bus1 Resource MemBAR 2 */
> VMD_RES_COUNT
> };
>
> enum vmd_rootbus {
> VMD_BUS_0 = 0,
> + VMD_BUS_1,
> VMD_BUS_COUNT
> };
>
> @@ -90,6 +109,12 @@ enum vmd_features {
> * proper power management of the SoC.
> */
> VMD_FEAT_BIOS_PM_QUIRK = (1 << 5),
> +
> + /*
> + * Starting from Intel Arrow Lake, VMD devices have their VMD rootports
> + * connected to additional BUS1 rootport.
> + */
> + VMD_FEAT_HAS_BUS1_ROOTBUS = (1 << 6)
> };
>
> #define VMD_BIOS_PM_QUIRK_LTR 0x1003 /* 3145728 ns */
> @@ -97,7 +122,8 @@ enum vmd_features {
> #define VMD_FEATS_CLIENT (VMD_FEAT_HAS_MEMBAR_SHADOW_VSCAP | \
> VMD_FEAT_HAS_BUS_RESTRICTIONS | \
> VMD_FEAT_OFFSET_FIRST_VECTOR | \
> - VMD_FEAT_BIOS_PM_QUIRK)
> + VMD_FEAT_BIOS_PM_QUIRK | \
> + VMD_FEAT_HAS_BUS1_ROOTBUS)
>
> static DEFINE_IDA(vmd_instance_ida);
>
> @@ -155,6 +181,7 @@ struct vmd_dev {
> u8 first_vec;
> char *name;
> int instance;
> + bool bus1_rootbus;
> };
>
> static inline struct vmd_dev *vmd_from_bus(struct pci_bus *bus)
> @@ -532,11 +559,16 @@ static inline void vmd_acpi_end(void) { }
>
> static void vmd_domain_reset(struct vmd_dev *vmd)
> {
> - u16 bus, max_buses = resource_size(&vmd->resources[VMD_RES_CFGBAR]);
> + u16 bus, bus_cnt = resource_size(&vmd->resources[VMD_RES_CFGBAR]);
> u8 dev, functions, fn, hdr_type;
> char __iomem *base;
>
> - for (bus = 0; bus < max_buses; bus++) {
> + if (vmd->bus1_rootbus) {
> + bus_cnt += resource_size(&vmd->resources[VMD_RES_BUS1_CFGBAR]);
> + bus_cnt += 2;
> + }
> +
> + for (bus = 0; bus < bus_cnt; bus++) {
> for (dev = 0; dev < 32; dev++) {
> base = vmd->cfgbar + PCIE_ECAM_OFFSET(bus,
> PCI_DEVFN(dev, 0), 0);
> @@ -582,12 +614,24 @@ static void vmd_attach_resources(struct vmd_dev *vmd)
> {
> vmd->dev->resource[VMD_MEMBAR1].child = &vmd->resources[VMD_RES_MBAR_1];
> vmd->dev->resource[VMD_MEMBAR2].child = &vmd->resources[VMD_RES_MBAR_2];
> +
> + if (vmd->bus1_rootbus) {
> + vmd->resources[VMD_RES_MBAR_1].sibling =
> + &vmd->resources[VMD_RES_BUS1_MBAR_1];
> + vmd->resources[VMD_RES_MBAR_2].sibling =
> + &vmd->resources[VMD_RES_BUS1_MBAR_2];
> + }
> }
>
> static void vmd_detach_resources(struct vmd_dev *vmd)
> {
> vmd->dev->resource[VMD_MEMBAR1].child = NULL;
> vmd->dev->resource[VMD_MEMBAR2].child = NULL;
> +
> + if (vmd->bus1_rootbus) {
> + vmd->resources[VMD_RES_MBAR_1].sibling = NULL;
> + vmd->resources[VMD_RES_MBAR_2].sibling = NULL;
> + }
> }
>
> /*
> @@ -660,7 +704,7 @@ static int vmd_get_phys_offsets(struct vmd_dev *vmd, bool native_hint,
> return 0;
> }
>
> -static int vmd_get_bus_number_start(struct vmd_dev *vmd)
> +static int vmd_get_bus_number_start(struct vmd_dev *vmd, unsigned long features)
> {
> struct pci_dev *dev = vmd->dev;
> u16 reg;
> @@ -679,6 +723,19 @@ static int vmd_get_bus_number_start(struct vmd_dev *vmd)
> case 2:
> vmd->busn_start[VMD_BUS_0] = VMD_RESTRICT_2_BUS_START;
> break;
> + case 3:
> + if (!(features & VMD_FEAT_HAS_BUS1_ROOTBUS)) {
> + pci_err(dev, "VMD Bus Restriction detected type %d, but BUS1 Rootbus is not supported, aborting.\n",
> + BUS_RESTRICT_CFG(reg));
> + return -ENODEV;
> + }
> +
> + /* BUS0 start number */
> + vmd->busn_start[VMD_BUS_0] = VMD_RESTRICT_2_BUS_START;
> + /* BUS1 start number */
> + vmd->busn_start[VMD_BUS_1] = VMD_RESTRICT_3_BUS_START;
> + vmd->bus1_rootbus = true;
> + break;
> default:
> pci_err(dev, "Unknown Bus Offset Setting (%d)\n",
> BUS_RESTRICT_CFG(reg));
> @@ -805,6 +862,30 @@ static void vmd_configure_cfgbar(struct vmd_dev *vmd)
> (resource_size(res) >> 20) - 1,
> .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
> };
> +
> + if (vmd->bus1_rootbus) {
> + u16 bus0_range = 0;
> + u16 bus1_range = 0;
> +
> + pci_read_config_word(vmd->dev, VMD_BUSRANGE0, &bus0_range);
> + pci_read_config_word(vmd->dev, VMD_BUSRANGE1, &bus1_range);
> +
> + /*
> + * Resize BUS0 CFGBAR range to make space for BUS1
> + * owned devices by adjusting range end with value stored in
> + * VMD_BUSRANGE0 register.
> + */
> + vmd->resources[VMD_RES_CFGBAR].start = VMD_BUS_START(bus0_range);
> + vmd->resources[VMD_RES_CFGBAR].end = VMD_BUS_END(bus0_range);
> +
> + vmd->resources[VMD_RES_BUS1_CFGBAR] = (struct resource){
> + .name = "VMD CFGBAR BUS1",
> + .start = VMD_BUS_START(bus1_range),
> + .end = VMD_BUS_END(bus1_range),
> + .flags = IORESOURCE_BUS | IORESOURCE_PCI_FIXED,
> + .parent = res,
> + };
> + }
> }
>
> /*
> @@ -834,8 +915,9 @@ static void vmd_configure_membar(struct vmd_dev *vmd,
> flags &= ~IORESOURCE_MEM_64;
>
> vmd->resources[resource_number] = (struct resource){
> - .name = kasprintf(GFP_KERNEL, "VMD MEMBAR%d",
> - membar_number / 2),
> + .name = kasprintf(
> + GFP_KERNEL, "VMD MEMBAR%d %s", membar_number / 2,
> + resource_number > VMD_RES_MBAR_2 ? "BUS1" : ""),
> .start = res->start + start_offset,
> .end = res->end - end_offset,
> .flags = flags,
> @@ -846,41 +928,80 @@ static void vmd_configure_membar(struct vmd_dev *vmd,
> static void vmd_configure_membar1_membar2(struct vmd_dev *vmd,
> resource_size_t mbar2_ofs)
> {
> - vmd_configure_membar(vmd, 1, VMD_MEMBAR1, 0, 0);
> - vmd_configure_membar(vmd, 2, VMD_MEMBAR2, mbar2_ofs, 0);
> + if (vmd->bus1_rootbus) {
> + u32 bus1_mbar1_ofs = 0;
> + u64 bus1_mbar2_ofs = 0;
> + u32 reg;
> +
> + pci_read_config_dword(vmd->dev, VMD_MEMBAR1_OFFSET,
> + &bus1_mbar1_ofs);
> +
> + pci_read_config_dword(vmd->dev, VMD_MEMBAR2_OFFSET1, &reg);
> + bus1_mbar2_ofs = reg;
> +
> + pci_read_config_dword(vmd->dev, VMD_MEMBAR2_OFFSET2, &reg);
> + bus1_mbar2_ofs |= (u64)reg << 32;
> +
> + /*
> + * Resize BUS MEMBAR1 and MEMBAR2 ranges to make space
> + * for BUS1 owned devices by adjusting range end with values
> + * stored in VMD_MEMBAR1_OFFSET and VMD_MEMBAR2_OFFSET registers
> + */
> + vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0,
> + bus1_mbar1_ofs);
> + vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2,
> + mbar2_ofs, bus1_mbar2_ofs - mbar2_ofs);
> +
> + vmd_configure_membar(vmd, VMD_RES_BUS1_MBAR_1, VMD_MEMBAR1,
> + bus1_mbar1_ofs, 0);
> + vmd_configure_membar(vmd, VMD_RES_BUS1_MBAR_2, VMD_MEMBAR2,
> + mbar2_ofs + bus1_mbar2_ofs, 0);
> + } else {
> + vmd_configure_membar(vmd, VMD_RES_MBAR_1, VMD_MEMBAR1, 0, 0);
> + vmd_configure_membar(vmd, VMD_RES_MBAR_2, VMD_MEMBAR2,
> + mbar2_ofs, 0);
> + }
> }
>
> -static int vmd_create_bus(struct vmd_dev *vmd, struct pci_sysdata *sd,
> - resource_size_t *offset)
> +static int vmd_create_bus(struct vmd_dev *vmd, enum vmd_rootbus bus_number,
> + struct pci_sysdata *sd, resource_size_t *offset)
> {
> + u8 cfgbar = bus_number * 3;
> + u8 membar1 = cfgbar + 1;
> + u8 membar2 = cfgbar + 2;
> + struct pci_bus *vmd_bus;
> LIST_HEAD(resources);
>
> - pci_add_resource(&resources, &vmd->resources[VMD_RES_CFGBAR]);
> - pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_1],
> + pci_add_resource(&resources, &vmd->resources[cfgbar]);
> + pci_add_resource_offset(&resources, &vmd->resources[membar1],
> offset[0]);
> - pci_add_resource_offset(&resources, &vmd->resources[VMD_RES_MBAR_2],
> + pci_add_resource_offset(&resources, &vmd->resources[membar2],
> offset[1]);
>
> - vmd->bus[VMD_BUS_0] = pci_create_root_bus(&vmd->dev->dev,
> - vmd->busn_start[VMD_BUS_0],
> - &vmd_ops, sd, &resources);
> - if (!vmd->bus[VMD_BUS_0]) {
> + vmd_bus = pci_create_root_bus(&vmd->dev->dev,
> + vmd->busn_start[bus_number], &vmd_ops, sd,
> + &resources);
> +
> + if (!vmd_bus) {
> pci_free_resource_list(&resources);
> - vmd_remove_irq_domain(vmd);
> +
> + if (bus_number == VMD_PRIMARY_BUS0)
> + vmd_remove_irq_domain(vmd);
> return -ENODEV;
> }
>
> - vmd_copy_host_bridge_flags(
> - pci_find_host_bridge(vmd->dev->bus),
> - to_pci_host_bridge(vmd->bus[VMD_BUS_0]->bridge));
> + vmd_copy_host_bridge_flags(pci_find_host_bridge(vmd->dev->bus),
> + to_pci_host_bridge(vmd_bus->bridge));
>
> vmd_attach_resources(vmd);
> if (vmd->irq_domain)
> - dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev, vmd->irq_domain);
> + dev_set_msi_domain(&vmd_bus->dev, vmd->irq_domain);
> else
> - dev_set_msi_domain(&vmd->bus[VMD_BUS_0]->dev,
> + dev_set_msi_domain(&vmd_bus->dev,
> dev_get_msi_domain(&vmd->dev->dev));
>
> + vmd->bus[bus_number] = vmd_bus;
> +
> return 0;
> }
>
> @@ -893,7 +1014,9 @@ static void vmd_bus_enumeration(struct pci_bus *bus, unsigned long features)
> vmd_acpi_begin();
>
> pci_scan_child_bus(bus);
> - vmd_domain_reset(vmd_from_bus(bus));
> +
> + if (bus->primary == VMD_PRIMARY_BUS0)
> + vmd_domain_reset(vmd_from_bus(bus));
>
> /*
> * When Intel VMD is enabled, the OS does not discover the Root Ports
> @@ -961,7 +1084,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> * limits the bus range to between 0-127, 128-255, or 224-255
> */
> if (features & VMD_FEAT_HAS_BUS_RESTRICTIONS) {
> - ret = vmd_get_bus_number_start(vmd);
> + ret = vmd_get_bus_number_start(vmd, features);
> if (ret)
> return ret;
> }
> @@ -1021,7 +1144,7 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> vmd_set_msi_remapping(vmd, false);
> }
>
> - ret = vmd_create_bus(vmd, sd, offset);
> + ret = vmd_create_bus(vmd, VMD_BUS_0, sd, offset);
>
> if (ret) {
> pci_err(vmd->dev, "Can't create bus: %d\n", ret);
> @@ -1034,6 +1157,27 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
>
> vmd_bus_enumeration(vmd->bus[VMD_BUS_0], features);
>
> + if (vmd->bus1_rootbus) {
> + ret = vmd_create_bus(vmd, VMD_BUS_1, sd, offset);
> + if (ret) {
> + pci_err(vmd->dev, "Can't create BUS1: %d\n", ret);
> + return ret;
> + }
> +
> + /*
> + * Primary bus is not set by pci_create_root_bus(), it is
> + * updated here
> + */
> + vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_BUS1;
> +
> + WARN(sysfs_create_link(&vmd->dev->dev.kobj,
> + &vmd->bus[VMD_BUS_1]->dev.kobj,
> + "domain1"),
> + "Can't create symlink to domain1\n");
> +
> + vmd_bus_enumeration(vmd->bus[VMD_BUS_1], features);
> + }
> +
> return 0;
> }
>
> @@ -1113,10 +1257,18 @@ static void vmd_remove(struct pci_dev *dev)
> sysfs_remove_link(&vmd->dev->dev.kobj, "domain");
> pci_remove_root_bus(vmd->bus[VMD_BUS_0]);
>
> - /* CFGBAR is static, does not require releasing memory */
> + /* CFGBARs are static, do not require releasing memory */
> kfree(vmd->resources[VMD_RES_MBAR_1].name);
> kfree(vmd->resources[VMD_RES_MBAR_2].name);
>
> + if (vmd->bus1_rootbus) {
> + pci_stop_root_bus(vmd->bus[VMD_BUS_1]);
> + sysfs_remove_link(&vmd->dev->dev.kobj, "domain1");
> + pci_remove_root_bus(vmd->bus[VMD_BUS_1]);
> + kfree(vmd->resources[VMD_RES_BUS1_MBAR_1].name);
> + kfree(vmd->resources[VMD_RES_BUS1_MBAR_2].name);
> + }
> +
> vmd_cleanup_srcu(vmd);
> vmd_detach_resources(vmd);
> vmd_remove_irq_domain(vmd);
> @@ -1202,4 +1354,4 @@ module_pci_driver(vmd_drv);
> MODULE_AUTHOR("Intel Corporation");
> MODULE_DESCRIPTION("Volume Management Device driver");
> MODULE_LICENSE("GPL v2");
> -MODULE_VERSION("0.6");
> +MODULE_VERSION("0.7");
> --
> 2.39.3
>
>
--
மணிவண்ணன் சதாசிவம்
^ permalink raw reply [flat|nested] 14+ messages in thread

* Re: [PATCH v3 7/8] PCI: vmd: Add support for second rootbus under VMD
2024-11-22 8:52 ` [PATCH v3 7/8] PCI: vmd: Add support for second rootbus under VMD Szymon Durawa
2025-05-03 13:20 ` Manivannan Sadhasivam
@ 2025-05-05 17:31 ` Bjorn Helgaas
1 sibling, 0 replies; 14+ messages in thread
From: Bjorn Helgaas @ 2025-05-05 17:31 UTC (permalink / raw)
To: Szymon Durawa
Cc: Bjorn Helgaas, Dan Williams, Lukas Wunner, linux-pci,
Nirmal Patel, Mariusz Tkaczyk
On Fri, Nov 22, 2024 at 09:52:14AM +0100, Szymon Durawa wrote:
> Starting from Intel Arrow Lake VMD enhancement introduces second rootbus
> support with fixed root bus number (0x80). It means that all 3 MMIO BARs
> exposed by VMD are shared now between both buses (current BUS0 and
> new BUS1).
>
> Add new BUS1 enumeration and divide MMIO space to be shared between
> both rootbuses. Due to enumeration issues with rootbus hardwired to a
> fixed non-zero value, this patch will work with a workaround proposed
> in next patch. Without workaround user won't see attached devices for BUS1
> rootbus.
s/rootbus/root bus/
> Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
> Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
> Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
> ---
> drivers/pci/controller/vmd.c | 208 ++++++++++++++++++++++++++++++-----
> 1 file changed, 180 insertions(+), 28 deletions(-)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index 6d8397b5ebee..6cd14c28fd4e 100755
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -26,6 +26,7 @@
> #define VMD_RESTRICT_0_BUS_START 0
> #define VMD_RESTRICT_1_BUS_START 128
> #define VMD_RESTRICT_2_BUS_START 224
> +#define VMD_RESTRICT_3_BUS_START 225
You're just following the pattern here, which makes sense. But these
are apparently bus numbers, which are typically written in hex, so it
would be nice to convert them all so we don't have to convert.
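For reference, the conversion Bjorn asks for would be a mechanical respelling of the existing decimal values in hex (an illustrative sketch, not part of the posted patch):

```c
/* Hex spellings of the existing decimal bus-start values (illustrative) */
#define VMD_RESTRICT_0_BUS_START 0x00 /* 0 */
#define VMD_RESTRICT_1_BUS_START 0x80 /* 128 */
#define VMD_RESTRICT_2_BUS_START 0xe0 /* 224 */
#define VMD_RESTRICT_3_BUS_START 0xe1 /* 225 */
```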
> #define PCI_REG_VMCAP 0x40
> #define BUS_RESTRICT_CAP(vmcap) (vmcap & 0x1)
> @@ -38,15 +39,33 @@
> #define MB2_SHADOW_OFFSET 0x2000
> #define MB2_SHADOW_SIZE 16
>
> +#define VMD_PRIMARY_BUS0 0x00
> +#define VMD_PRIMARY_BUS1 0x80
The above are bus numbers; the below are register offsets. Would be
nice to separate them with a blank line since they are semantically
different.
I don't understand the difference between VMD_RESTRICT_3_BUS_START and
VMD_PRIMARY_BUS1. Maybe one is the default Primary Bus Number of the
Root Ports after a reset?
> +#define VMD_BUSRANGE0 0xc8
> +#define VMD_BUSRANGE1 0xcc
> +#define VMD_MEMBAR1_OFFSET 0xd0
> +#define VMD_MEMBAR2_OFFSET1 0xd8
> +#define VMD_MEMBAR2_OFFSET2 0xdc
> +#define VMD_BUS_END(busr) ((busr >> 8) & 0xff)
> +#define VMD_BUS_START(busr) (busr & 0x00ff)
Would be nice if VMD_BUS_END/VMD_BUS_START were defined with
GENMASK(); then we could use FIELD_GET() below to extract them.
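A sketch of what that suggestion could look like. GENMASK() and FIELD_GET() here are simplified stand-ins for the kernel's <linux/bits.h> and <linux/bitfield.h> helpers so the snippet is self-contained; the helper names and the sample register value are assumptions for illustration only:

```c
#include <stdint.h>

/* Simplified stand-ins for the kernel's GENMASK()/FIELD_GET() helpers */
#define GENMASK(h, l)      (((~0u) >> (31 - (h))) & ((~0u) << (l)))
#define FIELD_GET(mask, v) (((v) & (mask)) >> __builtin_ctz(mask))

#define VMD_BUS_START_MASK GENMASK(7, 0)  /* bits [7:0]  of VMD_BUSRANGEx */
#define VMD_BUS_END_MASK   GENMASK(15, 8) /* bits [15:8] of VMD_BUSRANGEx */

static inline uint8_t vmd_bus_start(uint16_t busr)
{
	return FIELD_GET(VMD_BUS_START_MASK, busr);
}

static inline uint8_t vmd_bus_end(uint16_t busr)
{
	return FIELD_GET(VMD_BUS_END_MASK, busr);
}
```

With this shape, a bus range register reading 0xe1e0 decodes to start 0xe0 and end 0xe1, matching what VMD_BUS_START()/VMD_BUS_END() compute today.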
Nit: indent these bus numbers and offsets so the values line up like
the other #defines.
> + * Starting from Intel Arrow Lake, VMD devices have their VMD rootports
> + * connected to additional BUS1 rootport.
This doesn't quite make sense. Root Ports can't be connected to
another Root Port. I think you mean "Root Ports on the additional
root bus".
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH v3 8/8] PCI: vmd: Add workaround for bus number hardwired to fixed non-zero value
2024-11-22 8:52 [PATCH v3 0/8] VMD add second rootbus support Szymon Durawa
` (6 preceding siblings ...)
2024-11-22 8:52 ` [PATCH v3 7/8] PCI: vmd: Add support for second rootbus under VMD Szymon Durawa
@ 2024-11-22 8:52 ` Szymon Durawa
2025-05-05 17:31 ` Bjorn Helgaas
2025-01-27 20:59 ` [PATCH v3 0/8] VMD add second rootbus support Durawa, Szymon
8 siblings, 1 reply; 14+ messages in thread
From: Szymon Durawa @ 2024-11-22 8:52 UTC (permalink / raw)
To: helgaas
Cc: Szymon Durawa, Bjorn Helgaas, Dan Williams, Lukas Wunner,
linux-pci, Nirmal Patel, Mariusz Tkaczyk
VMD BUS1 rootbus primary number is 0x80 and pci_scan_bridge_extend()
detects that primary bus number doesn't match the bus it's sitting on.
As a result primary rootbus number is deconfigured in the first pass
of pci_scan_bridge() to be re-assigned to 0x0 in the second pass.
To avoid bus number reconfiguration, BUS1 number has to be the same
as BUS1 primary number.
Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
---
drivers/pci/controller/vmd.c | 30 ++++++++++++++++++++++++++++--
1 file changed, 28 insertions(+), 2 deletions(-)
diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
index 6cd14c28fd4e..3b74cb8dd023 100755
--- a/drivers/pci/controller/vmd.c
+++ b/drivers/pci/controller/vmd.c
@@ -421,8 +421,22 @@ static void vmd_remove_irq_domain(struct vmd_dev *vmd)
static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
unsigned int devfn, int reg, int len)
{
- unsigned int busnr_ecam = bus->number - vmd->busn_start[VMD_BUS_0];
- u32 offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
+ unsigned char bus_number;
+ unsigned int busnr_ecam;
+ u32 offset;
+
+ /*
+ * VMD workaraund: for BUS1, bus->number is set to VMD_PRIMARY_BUS1
+ * (see comment under vmd_create_bus() for BUS1) but original value
+ * is 225 which is stored in vmd->busn_start[VMD_BUS_1].
+ */
+ if (vmd->bus1_rootbus && bus->number == VMD_PRIMARY_BUS1)
+ bus_number = vmd->busn_start[VMD_BUS_1];
+ else
+ bus_number = bus->number;
+
+ busnr_ecam = bus_number - vmd->busn_start[VMD_BUS_0];
+ offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
if (offset + len >= resource_size(&vmd->dev->resource[VMD_CFGBAR]))
return NULL;
@@ -1170,6 +1184,18 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
*/
vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_BUS1;
+ /*
+ * This is a workaround for pci_scan_bridge_extend() code.
+ * It detects that non-zero (0x80) primary bus number doesn't
+ * match the bus it's sitting on. As a result rootbus number is
+ * deconfigured in the first pass of pci_scan_bridge() to be
+ * re-assigned to 0x0 in the second pass.
+ * Update vmd->bus[VMD_BUS_1]->number and
+ * vmd->bus[VMD_BUS_1]->primary to the same value, which
+ * bypasses bus number reconfiguration.
+ */
+ vmd->bus[VMD_BUS_1]->number = VMD_PRIMARY_BUS1;
+
WARN(sysfs_create_link(&vmd->dev->dev.kobj,
&vmd->bus[VMD_BUS_1]->dev.kobj,
"domain1"),
--
2.39.3
^ permalink raw reply related [flat|nested] 14+ messages in thread

* Re: [PATCH v3 8/8] PCI: vmd: Add workaround for bus number hardwired to fixed non-zero value
2024-11-22 8:52 ` [PATCH v3 8/8] PCI: vmd: Add workaround for bus number hardwired to fixed non-zero value Szymon Durawa
@ 2025-05-05 17:31 ` Bjorn Helgaas
0 siblings, 0 replies; 14+ messages in thread
From: Bjorn Helgaas @ 2025-05-05 17:31 UTC (permalink / raw)
To: Szymon Durawa
Cc: Bjorn Helgaas, Dan Williams, Lukas Wunner, linux-pci,
Nirmal Patel, Mariusz Tkaczyk
On Fri, Nov 22, 2024 at 09:52:15AM +0100, Szymon Durawa wrote:
> VMD BUS1 rootbus primary number is 0x80 and pci_scan_bridge_extend()
> detects that primary bus number doesn't match the bus it's sitting on.
> As a result primary rootbus number is deconfigured in the first pass
> of pci_scan_bridge() to be re-assigned to 0x0 in the second pass.
Every bus has a bus number, but when we're talking about the bus
itself, there's only one bus number, so it is not a *primary* number.
"Primary" and "secondary" only apply in the context of a bridge
because it's connected to two buses and we need to distinguish them.
A root bus is created by a host bridge (in this case, a VMD bridge),
and the root bus number is determined by the host bridge. It sounds
like the bus number of the VMD BUS1 root bus is fixed in hardware to
0x80.
I think what you mean is something like:
The VMD BUS1 root bus number is fixed in hardware to 0x80, but after
reset, the default Primary Bus Number of Root Ports on BUS1 is 0x00.
"Primary bus number doesn't match the bus it's sitting on" is a
bit ambiguous because a bus is not a device and a bus does not "sit on
a bus." A Root Port *does* sit on a bus.
The struct pci_bus.primary member is misleading and probably
contributes to confusion here.
s/rootbus/root bus/ throughout
s/rootport/root port/ throughout
s/primary  /primary / (spurious double space)
> To avoid bus number reconfiguration, BUS1 number has to be the same
> as BUS1 primary number.
>
> Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
> Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
> Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
> ---
> drivers/pci/controller/vmd.c | 30 ++++++++++++++++++++++++++++--
> 1 file changed, 28 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
> index 6cd14c28fd4e..3b74cb8dd023 100755
> --- a/drivers/pci/controller/vmd.c
> +++ b/drivers/pci/controller/vmd.c
> @@ -421,8 +421,22 @@ static void vmd_remove_irq_domain(struct vmd_dev *vmd)
> static void __iomem *vmd_cfg_addr(struct vmd_dev *vmd, struct pci_bus *bus,
> unsigned int devfn, int reg, int len)
> {
> - unsigned int busnr_ecam = bus->number - vmd->busn_start[VMD_BUS_0];
> - u32 offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
> + unsigned char bus_number;
> + unsigned int busnr_ecam;
> + u32 offset;
> +
> + /*
> + * VMD workaraund: for BUS1, bus->number is set to VMD_PRIMARY_BUS1
> + * (see comment under vmd_create_bus() for BUS1) but original value
> + * is 225 which is stored in vmd->busn_start[VMD_BUS_1].
s/workaraund/workaround/
There is no comment in vmd_create_bus().
Another case of bus numbers in decimal (225,
VMD_RESTRICT_3_BUS_START), but we're comparing with VMD_PRIMARY_BUS1
(0x80). Needlessly confusing.
> + if (vmd->bus1_rootbus && bus->number == VMD_PRIMARY_BUS1)
> + bus_number = vmd->busn_start[VMD_BUS_1];
> + else
> + bus_number = bus->number;
> +
> + busnr_ecam = bus_number - vmd->busn_start[VMD_BUS_0];
> + offset = PCIE_ECAM_OFFSET(busnr_ecam, devfn, reg);
>
> if (offset + len >= resource_size(&vmd->dev->resource[VMD_CFGBAR]))
> return NULL;
> @@ -1170,6 +1184,18 @@ static int vmd_enable_domain(struct vmd_dev *vmd, unsigned long features)
> */
> vmd->bus[VMD_BUS_1]->primary = VMD_PRIMARY_BUS1;
>
> + /*
> + * This is a workaround for pci_scan_bridge_extend() code.
> + * It detects that non-zero (0x80) primary bus number doesn't
> + * match the bus it's sitting on. As a result rootbus number is
> + * deconfigured in the first pass of pci_scan_bridge() to be
> + * re-assigned to 0x0 in the second pass.
> + * Update vmd->bus[VMD_BUS_1]->number and
> + * vmd->bus[VMD_BUS_1]->primary to the same value, which
> + * bypasses bus number reconfiguration.
If you can include dmesg snippets in the commit log showing how
pci_scan_bridge_extend() and pci_scan_bridge() deal with this, I think
it will help understand this. There might be some improvement we can
make in pci_scan_bridge_extend() (someday, not today).
> + */
> + vmd->bus[VMD_BUS_1]->number = VMD_PRIMARY_BUS1;
> +
> WARN(sysfs_create_link(&vmd->dev->dev.kobj,
> &vmd->bus[VMD_BUS_1]->dev.kobj,
> "domain1"),
> --
> 2.39.3
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH v3 0/8] VMD add second rootbus support
2024-11-22 8:52 [PATCH v3 0/8] VMD add second rootbus support Szymon Durawa
` (7 preceding siblings ...)
2024-11-22 8:52 ` [PATCH v3 8/8] PCI: vmd: Add workaround for bus number hardwired to fixed non-zero value Szymon Durawa
@ 2025-01-27 20:59 ` Durawa, Szymon
8 siblings, 0 replies; 14+ messages in thread
From: Durawa, Szymon @ 2025-01-27 20:59 UTC (permalink / raw)
To: helgaas
Cc: Bjorn Helgaas, Dan Williams, Lukas Wunner, linux-pci,
Nirmal Patel, Mariusz Tkaczyk
On 11/22/2024 9:52 AM, Szymon Durawa wrote:
> This patch series implements second root bus support inside the Intel VMD
> module. The current implementation allows VMD to take ownership of devices
> only on the first bus (Rootbus0). Starting from Intel Arrow Lake, VMD
> exposes a second bus (Rootbus1) so VMD can own devices on this bus as well.
> The VMD MMIO BARs (CFGBAR, MEMBAR1 and MEMBAR2) are now shared between
> Rootbus0 and Rootbus1. The three MMIO BARs must be reconfigured by resizing
> the current MMIO BAR ranges, which makes it possible to find/register VMD
> Rootbus1 and discover devices or root ports under it.
>
> Patches 1 to 6 introduce code refactoring without functional changes.
> Patch 7 implements VMD Rootbus1 support and patch 8 provides a workaround
> for the root bus number being hardwired to a fixed non-zero value. Patch 8
> is necessary for correct enumeration of the devices attached under VMD
> Rootbus1. Without it the user cannot access those devices because they are
> not visible in the system; only drives under VMD Rootbus0 are available to
> the user.
>
> Changes from v1:
> - splitting series into more commits, requested by Bjorn
> - adding helper functions, suggested by Bjorn
> - minor typos and unclear wording updated, suggested by Bjorn
>
> Changes from v2:
> - wording update in commit logs, suggested by Bjorn
>
> Cc: Bjorn Helgaas <bhelgaas@google.com>
> Cc: Dan Williams <dan.j.williams@intel.com>
> Cc: Lukas Wunner <lukas@wunner.de>
> Cc: linux-pci@vger.kernel.org
> Suggested-by: Nirmal Patel <nirmal.patel@linux.intel.com>
> Reviewed-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
> Signed-off-by: Szymon Durawa <szymon.durawa@linux.intel.com>
>
> Szymon Durawa (8):
> PCI: vmd: Add vmd_bus_enumeration()
> PCI: vmd: Add vmd_configure_cfgbar()
> PCI: vmd: Add vmd_configure_membar() and
> vmd_configure_membar1_membar2()
> PCI: vmd: Add vmd_create_bus()
> PCI: vmd: Replace hardcoded values with enum and defines
> PCI: vmd: Convert bus and busn_start to an array
> PCI: vmd: Add support for second rootbus under VMD
> PCI: vmd: Add workaround for bus number hardwired to fixed non-zero
> value
>
> drivers/pci/controller/vmd.c | 470 +++++++++++++++++++++++++++--------
> 1 file changed, 360 insertions(+), 110 deletions(-)
> mode change 100644 => 100755 drivers/pci/controller/vmd.c
Hello Bjorn,
Gentle ping. I addressed all of your previous comments, please take a
look at the v3 series.
Thanks,
Szymon