From: Ben Cheatham <Benjamin.Cheatham@amd.com>
To: <linux-cxl@vger.kernel.org>, <linux-pci@vger.kernel.org>,
<linux-acpi@vger.kernel.org>
Cc: Ben Cheatham <Benjamin.Cheatham@amd.com>
Subject: [PATCH 09/16] cxl/core: Prevent onlining CXL memory behind isolated ports
Date: Wed, 30 Jul 2025 16:47:11 -0500
Message-ID: <20250730214718.10679-10-Benjamin.Cheatham@amd.com>
In-Reply-To: <20250730214718.10679-1-Benjamin.Cheatham@amd.com>

The host will not be able to access CXL memory on devices enabled or
added below an isolated CXL downstream port. Add a check during cxl_mem
probe to prevent adding endpoints below an isolated port, and a check to
prevent CXL region creation for previously disabled devices below an
isolated port.

Signed-off-by: Ben Cheatham <Benjamin.Cheatham@amd.com>
---
drivers/cxl/core/port.c | 28 ++++++++++++++++++++++++++++
drivers/cxl/core/region.c | 3 +++
drivers/cxl/cxl.h | 2 ++
3 files changed, 33 insertions(+)

diff --git a/drivers/cxl/core/port.c b/drivers/cxl/core/port.c
index 90588bf927e0..c9e7bfc082d5 100644
--- a/drivers/cxl/core/port.c
+++ b/drivers/cxl/core/port.c
@@ -869,6 +869,13 @@ static int cxl_port_add(struct cxl_port *port,
 		 */
 		port->reg_map = cxlds->reg_map;
 		port->reg_map.host = &port->dev;
+
+		if (cxl_endpoint_port_isolated(port)) {
+			dev_err(&port->dev,
+				"port is under isolated CXL dport\n");
+			return -EBUSY;
+		}
+
 		cxlmd->endpoint = port;
 	} else if (parent_dport) {
 		rc = dev_set_name(dev, "port%d", port->id);
@@ -1174,6 +1181,27 @@ static void cxl_dport_unlink(void *data)
 	sysfs_remove_link(&port->dev.kobj, link_name);
 }
 
+bool cxl_endpoint_port_isolated(struct cxl_port *ep)
+{
+	struct cxl_dport *iter;
+	u32 status;
+
+	for (iter = ep->parent_dport;
+	     iter && iter->port && !is_cxl_root(iter->port);
+	     iter = iter->port->parent_dport) {
+		if (!iter->regs.isolation)
+			continue;
+
+		status = readl(iter->regs.isolation +
+			       CXL_ISOLATION_STATUS_OFFSET);
+		if (status & CXL_ISOLATION_STATUS_ISOLATED)
+			return true;
+	}
+
+	return false;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_endpoint_port_isolated, "CXL");
+
 struct isolation_intr_data {
 	struct cxl_dport *dport;
 	struct cxl_port *port;
diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
index b94fda6f2e4c..db9ff3b683aa 100644
--- a/drivers/cxl/core/region.c
+++ b/drivers/cxl/core/region.c
@@ -3407,6 +3407,9 @@ static struct cxl_region *construct_region(struct cxl_root_decoder *cxlrd,
 	int rc, part = READ_ONCE(cxled->part);
 	struct cxl_region *cxlr;
 
+	if (cxl_endpoint_port_isolated(cxlmd->endpoint))
+		return ERR_PTR(-EBUSY);
+
 	do {
 		cxlr = __create_region(cxlrd, cxlds->part[part].mode,
 				       atomic_read(&cxlrd->region_id));
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 62b3ed188949..8da1e40ab4e7 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -838,6 +838,8 @@ pci_ers_result_t cxl_error_detected(struct device *dev);
 void cxl_port_cor_error_detected(struct device *dev);
 pci_ers_result_t cxl_port_error_detected(struct device *dev);
 
+bool cxl_endpoint_port_isolated(struct cxl_port *port);
+
 /**
  * struct cxl_endpoint_dvsec_info - Cached DVSEC info
  * @mem_enabled: cached value of mem_enabled in the DVSEC at init time
--
2.34.1