* [PATCH v2 0/4] cxl: Dirty shutdown followups
@ 2025-02-19 2:14 Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 1/4] cxl/pci: Introduce cxl_gpf_get_dvsec() Davidlohr Bueso
` (3 more replies)
0 siblings, 4 replies; 14+ messages in thread
From: Davidlohr Bueso @ 2025-02-19 2:14 UTC (permalink / raw)
To: dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, a.manzanares, fan.ni, anisa.su, dave, linux-cxl
Changes from v1 (https://lore.kernel.org/linux-cxl/20250205040842.1253616-1-dave@stgolabs.net/):
- created a new cxl_gpf_get_dvsec() helper to share for port and dev gpf (DaveJ)
- renamed cxl_dirty_shutdown_state() to cxl_arm_dirty_shutdown() (DaveJ)
- exported the cxl_gpf_get_dvsec() symbol used outside of core (Yi, DaveJ)
- introduced CXL_INVALID_DIRTY_SHUTDOWN_COUNT (DaveJ)
- renamed to cxl_nvdimm_setup_dirty_tracking() and use return statements (DaveJ)
- picked up review tag in patch 4 (DaveJ)
Hi,
Some followup patches to the GPF work. The first two patches address feedback
provided by DaveJ. The third patch adds a $platform/dirty_shutdown sysfs
attribute to expose the count to userspace. The fourth patch adds support
for emulating the Set Shutdown State command in the mock device.
Applies against the -next branch of cxl.git.
Thanks!
Davidlohr Bueso (4):
cxl/pci: Introduce cxl_gpf_get_dvsec()
cxl/pmem: Rename cxl_dirty_shutdown_state()
cxl/pmem: Export dirty shutdown count via sysfs
tools/testing/cxl: Set Shutdown State support
Documentation/ABI/testing/sysfs-bus-cxl | 12 +++
Documentation/driver-api/cxl/maturity-map.rst | 2 +-
drivers/cxl/core/mbox.c | 25 +++++-
drivers/cxl/core/pci.c | 38 ++++++---
drivers/cxl/cxl.h | 3 +
drivers/cxl/cxlmem.h | 17 +++-
drivers/cxl/pmem.c | 77 +++++++++++++++++--
tools/testing/cxl/test/mem.c | 23 ++++++
8 files changed, 175 insertions(+), 22 deletions(-)
--
2.39.5
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/4] cxl/pci: Introduce cxl_gpf_get_dvsec()
2025-02-19 2:14 [PATCH v2 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
@ 2025-02-19 2:14 ` Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 2/4] cxl/pmem: Rename cxl_dirty_shutdown_state() Davidlohr Bueso
` (2 subsequent siblings)
3 siblings, 0 replies; 14+ messages in thread
From: Davidlohr Bueso @ 2025-02-19 2:14 UTC (permalink / raw)
To: dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, a.manzanares, fan.ni, anisa.su, dave, linux-cxl
Add a helper to fetch the Port/Device GPF DVSECs. This is
currently only used for ports, but a later patch exporting the
dirty shutdown count to users will make use of the device one.
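As an aside, the helper's return contract can be sketched in a minimal
userspace model (hypothetical names throughout; find_dvsec_offset() stands in
for pci_find_dvsec_capability(), which the real helper calls): -EINVAL for a
non-PCI device, 0 when the requested GPF DVSEC is absent, else a positive
config-space offset, so callers gate on "dvsec <= 0".

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for pci_find_dvsec_capability(): returns the
 * config-space offset of the requested DVSEC, or 0 when absent. */
static int find_dvsec_offset(int dvsec_present)
{
	return dvsec_present ? 0x100 : 0;
}

/* Sketch of the cxl_gpf_get_dvsec() contract: -EINVAL for a non-PCI
 * device, 0 when the GPF DVSEC is missing, else a positive offset. */
static int gpf_get_dvsec(int dev_is_pci, int dvsec_present)
{
	if (!dev_is_pci)
		return -EINVAL;
	return find_dvsec_offset(dvsec_present);
}
```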
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---
drivers/cxl/core/pci.c | 38 ++++++++++++++++++++++++++++----------
drivers/cxl/cxl.h | 2 ++
2 files changed, 30 insertions(+), 10 deletions(-)
diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
index a5c65f79db18..2226cca3382d 100644
--- a/drivers/cxl/core/pci.c
+++ b/drivers/cxl/core/pci.c
@@ -1072,6 +1072,27 @@ int cxl_pci_get_bandwidth(struct pci_dev *pdev, struct access_coordinate *c)
#define GPF_TIMEOUT_BASE_MAX 2
#define GPF_TIMEOUT_SCALE_MAX 7 /* 10 seconds */
+int cxl_gpf_get_dvsec(struct device *dev, bool port)
+{
+ struct pci_dev *pdev;
+ int dvsec;
+
+ if (!dev_is_pci(dev))
+ return -EINVAL;
+
+ pdev = to_pci_dev(dev);
+ if (!pdev)
+ return -EINVAL;
+
+ dvsec = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_CXL,
+ port ? CXL_DVSEC_PORT_GPF : CXL_DVSEC_DEVICE_GPF);
+ if (!dvsec)
+ pci_warn(pdev, "%s GPF DVSEC not present\n",
+ port ? "Port" : "Device");
+ return dvsec;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_gpf_get_dvsec, "CXL");
+
static int update_gpf_port_dvsec(struct pci_dev *pdev, int dvsec, int phase)
{
u64 base, scale;
@@ -1116,26 +1137,23 @@ int cxl_gpf_port_setup(struct device *dport_dev, struct cxl_port *port)
{
struct pci_dev *pdev;
- if (!dev_is_pci(dport_dev))
- return 0;
-
- pdev = to_pci_dev(dport_dev);
- if (!pdev || !port)
+ if (!port)
return -EINVAL;
if (!port->gpf_dvsec) {
int dvsec;
- dvsec = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_CXL,
- CXL_DVSEC_PORT_GPF);
- if (!dvsec) {
- pci_warn(pdev, "Port GPF DVSEC not present\n");
+ dvsec = cxl_gpf_get_dvsec(dport_dev, true);
+ if (dvsec <= 0)
return -EINVAL;
- }
port->gpf_dvsec = dvsec;
}
+ pdev = to_pci_dev(dport_dev);
+ if (!pdev)
+ return -EINVAL;
+
update_gpf_port_dvsec(pdev, port->gpf_dvsec, 1);
update_gpf_port_dvsec(pdev, port->gpf_dvsec, 2);
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 6baec4ba9141..acbbba41356d 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -901,4 +901,6 @@ bool cxl_endpoint_decoder_reset_detected(struct cxl_port *port);
#define __mock static
#endif
+int cxl_gpf_get_dvsec(struct device *dev, bool port);
+
#endif /* __CXL_H__ */
--
2.39.5
* [PATCH 2/4] cxl/pmem: Rename cxl_dirty_shutdown_state()
2025-02-19 2:14 [PATCH v2 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 1/4] cxl/pci: Introduce cxl_gpf_get_dvsec() Davidlohr Bueso
@ 2025-02-19 2:14 ` Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 4/4] tools/testing/cxl: Set Shutdown State support Davidlohr Bueso
3 siblings, 0 replies; 14+ messages in thread
From: Davidlohr Bueso @ 2025-02-19 2:14 UTC (permalink / raw)
To: dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, a.manzanares, fan.ni, anisa.su, dave, linux-cxl
... to the better-suited 'cxl_arm_dirty_shutdown()'.
Suggested-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---
drivers/cxl/core/mbox.c | 4 ++--
drivers/cxl/cxlmem.h | 2 +-
drivers/cxl/pmem.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index c5eedcae3b02..86d13f4a1c18 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1281,7 +1281,7 @@ int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info)
}
EXPORT_SYMBOL_NS_GPL(cxl_mem_dpa_fetch, "CXL");
-int cxl_dirty_shutdown_state(struct cxl_memdev_state *mds)
+int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
struct cxl_mbox_cmd mbox_cmd;
@@ -1297,7 +1297,7 @@ int cxl_dirty_shutdown_state(struct cxl_memdev_state *mds)
return cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
}
-EXPORT_SYMBOL_NS_GPL(cxl_dirty_shutdown_state, "CXL");
+EXPORT_SYMBOL_NS_GPL(cxl_arm_dirty_shutdown, "CXL");
int cxl_set_timestamp(struct cxl_memdev_state *mds)
{
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 8e1e46c348f5..6d60030139df 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -822,7 +822,7 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
enum cxl_event_log_type type,
enum cxl_event_type event_type,
const uuid_t *uuid, union cxl_event *evt);
-int cxl_dirty_shutdown_state(struct cxl_memdev_state *mds);
+int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds);
int cxl_set_timestamp(struct cxl_memdev_state *mds);
int cxl_poison_state_init(struct cxl_memdev_state *mds);
int cxl_mem_get_poison(struct cxl_memdev *cxlmd, u64 offset, u64 len,
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index a39e2c52d7ab..6b284962592f 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -90,7 +90,7 @@ static int cxl_nvdimm_probe(struct device *dev)
* clear it upon a successful GPF flow. The exception to this
* is upon Viral detection, per CXL 3.2 section 12.4.2.
*/
- if (cxl_dirty_shutdown_state(mds))
+ if (cxl_arm_dirty_shutdown(mds))
dev_warn(dev, "GPF: could not dirty shutdown state\n");
dev_set_drvdata(dev, nvdimm);
--
2.39.5
* [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
2025-02-19 2:14 [PATCH v2 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 1/4] cxl/pci: Introduce cxl_gpf_get_dvsec() Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 2/4] cxl/pmem: Rename cxl_dirty_shutdown_state() Davidlohr Bueso
@ 2025-02-19 2:14 ` Davidlohr Bueso
2025-02-19 2:34 ` Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 4/4] tools/testing/cxl: Set Shutdown State support Davidlohr Bueso
3 siblings, 1 reply; 14+ messages in thread
From: Davidlohr Bueso @ 2025-02-19 2:14 UTC (permalink / raw)
To: dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, a.manzanares, fan.ni, anisa.su, dave, linux-cxl
Similar to how the acpi_nfit driver exports the Optane dirty shutdown count,
introduce:
/sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
The attribute is exposed only under the conditions that 1) dirty shutdown
can be set, 2) the Device GPF DVSEC exists, and 3) the count itself can be
retrieved.
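That three-way gating can be modeled in a small userspace sketch (hypothetical
types; the kernel stores a CXL_INVALID_DIRTY_SHUTDOWN_COUNT sentinel and hides
the attribute via an is_visible() callback when any step fails):

```c
#include <assert.h>

#define INVALID_DIRTY_SHUTDOWN_COUNT (~0ull) /* models -1 stored in a u64 */

/* Hypothetical outcomes of the three probes the patch performs. */
struct dirty_probe {
	int arm_ok;        /* Set Shutdown State command succeeded */
	int dvsec_off;     /* Device GPF DVSEC offset, <= 0 if absent */
	int get_ok;        /* Get Health Info command succeeded */
	unsigned int count;
};

/* Returns what dirty_shutdowns would hold after setup: the retrieved
 * count if all three conditions pass, the invalid sentinel otherwise. */
static unsigned long long setup_dirty_tracking(struct dirty_probe p)
{
	if (!p.arm_ok || p.dvsec_off <= 0 || !p.get_ok)
		return INVALID_DIRTY_SHUTDOWN_COUNT;
	return p.count;
}
```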
Suggested-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---
Documentation/ABI/testing/sysfs-bus-cxl | 12 +++
Documentation/driver-api/cxl/maturity-map.rst | 2 +-
drivers/cxl/core/mbox.c | 21 +++++
drivers/cxl/cxl.h | 1 +
drivers/cxl/cxlmem.h | 15 ++++
drivers/cxl/pmem.c | 77 +++++++++++++++++--
6 files changed, 119 insertions(+), 9 deletions(-)
diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
index 3f5627a1210a..a7491d214098 100644
--- a/Documentation/ABI/testing/sysfs-bus-cxl
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -586,3 +586,15 @@ Description:
See Documentation/ABI/stable/sysfs-devices-node. access0 provides
the number to the closest initiator and access1 provides the
number to the closest CPU.
+
+
+What: /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
+Date: Feb, 2025
+KernelVersion: v6.15
+Contact: linux-cxl@vger.kernel.org
+Description:
+ (RO) The device dirty shutdown count value, which is the number
+ of times the device could have incurred potential data loss.
+ The count is persistent across power loss and wraps back to 0
+ upon overflow. If this file is not present, the device does not
+ have the necessary support for dirty tracking.
diff --git a/Documentation/driver-api/cxl/maturity-map.rst b/Documentation/driver-api/cxl/maturity-map.rst
index 99dd2c841e69..a2288f9df658 100644
--- a/Documentation/driver-api/cxl/maturity-map.rst
+++ b/Documentation/driver-api/cxl/maturity-map.rst
@@ -130,7 +130,7 @@ Mailbox commands
* [0] Switch CCI
* [3] Timestamp
* [1] PMEM labels
-* [1] PMEM GPF / Dirty Shutdown
+* [3] PMEM GPF / Dirty Shutdown
* [0] Scan Media
PMU
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 86d13f4a1c18..f1009a265f9d 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1281,6 +1281,27 @@ int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info)
}
EXPORT_SYMBOL_NS_GPL(cxl_mem_dpa_fetch, "CXL");
+int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count)
+{
+ int rc;
+ struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+ struct cxl_mbox_cmd mbox_cmd;
+ struct cxl_mbox_get_health_info_out hi;
+
+ mbox_cmd = (struct cxl_mbox_cmd) {
+ .opcode = CXL_MBOX_OP_GET_HEALTH_INFO,
+ .size_out = sizeof(hi),
+ .payload_out = &hi,
+ };
+
+ rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
+ if (!rc)
+ *count = le32_to_cpu(hi.dirty_shutdown_cnt);
+
+ return rc;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_get_dirty_count, "CXL");
+
int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index acbbba41356d..4dbf1cc60047 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -542,6 +542,7 @@ struct cxl_nvdimm {
struct device dev;
struct cxl_memdev *cxlmd;
u8 dev_id[CXL_DEV_ID_LEN]; /* for nvdimm, string of 'serial' */
+ u64 dirty_shutdowns;
};
struct cxl_pmem_region_mapping {
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 6d60030139df..03ad3c8ba88d 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -681,6 +681,18 @@ struct cxl_mbox_set_partition_info {
#define CXL_SET_PARTITION_IMMEDIATE_FLAG BIT(0)
+/* Get Health Info Output Payload CXL 3.2 Spec 8.2.10.9.3.1 Table 8-148 */
+struct cxl_mbox_get_health_info_out {
+ u8 health_status;
+ u8 media_status;
+ u8 additional_status;
+ u8 life_used;
+ __le16 device_temperature;
+ __le32 dirty_shutdown_cnt;
+ __le32 corrected_volatile_error_cnt;
+ __le32 corrected_persistent_error_cnt;
+} __packed;
+
/* Set Shutdown State Input Payload CXL 3.2 Spec 8.2.10.9.3.5 Table 8-152 */
struct cxl_mbox_set_shutdown_state_in {
u8 state;
@@ -822,6 +834,7 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
enum cxl_event_log_type type,
enum cxl_event_type event_type,
const uuid_t *uuid, union cxl_event *evt);
+int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count);
int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds);
int cxl_set_timestamp(struct cxl_memdev_state *mds);
int cxl_poison_state_init(struct cxl_memdev_state *mds);
@@ -866,4 +879,6 @@ struct cxl_hdm {
struct seq_file;
struct dentry *cxl_debugfs_create_dir(const char *dir);
void cxl_dpa_debug(struct seq_file *file, struct cxl_dev_state *cxlds);
+
+int cxl_gpf_device(struct cxl_dev_state *cxlds);
#endif /* __CXL_MEM_H__ */
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index 6b284962592f..aee1afe9d287 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -42,15 +42,44 @@ static ssize_t id_show(struct device *dev, struct device_attribute *attr, char *
}
static DEVICE_ATTR_RO(id);
+static ssize_t dirty_shutdown_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+
+ return sysfs_emit(buf, "%lld\n", cxl_nvd->dirty_shutdowns);
+}
+static DEVICE_ATTR_RO(dirty_shutdown);
+
static struct attribute *cxl_dimm_attributes[] = {
&dev_attr_id.attr,
&dev_attr_provider.attr,
+ &dev_attr_dirty_shutdown.attr,
NULL
};
+#define CXL_INVALID_DIRTY_SHUTDOWN_COUNT -1
+static umode_t cxl_dimm_visible(struct kobject *kobj,
+ struct attribute *a, int n)
+{
+ if (a == &dev_attr_dirty_shutdown.attr) {
+ struct device *dev = kobj_to_dev(kobj);
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+
+ if (cxl_nvd->dirty_shutdowns ==
+ CXL_INVALID_DIRTY_SHUTDOWN_COUNT)
+ return 0;
+ }
+
+ return a->mode;
+}
+
static const struct attribute_group cxl_dimm_attribute_group = {
.name = "cxl",
.attrs = cxl_dimm_attributes,
+ .is_visible = cxl_dimm_visible
};
static const struct attribute_group *cxl_dimm_attribute_groups[] = {
@@ -58,6 +87,38 @@ static const struct attribute_group *cxl_dimm_attribute_groups[] = {
NULL
};
+static void cxl_nvdimm_setup_dirty_tracking(struct cxl_nvdimm *cxl_nvd)
+{
+ u32 count;
+ struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+ struct cxl_dev_state *cxlds = cxlmd->cxlds;
+ struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
+ struct device *dev = &cxl_nvd->dev;
+
+ /*
+ * Dirty tracking is enabled and exposed to the user, only when:
+ * - dirty shutdown on the device can be set, and,
+ * - the device has a Device GPF DVSEC (albeit unused), and,
+ * - the Get Health Info cmd can retrieve the device's dirty count.
+ */
+ cxl_nvd->dirty_shutdowns = CXL_INVALID_DIRTY_SHUTDOWN_COUNT;
+
+ if (cxl_arm_dirty_shutdown(mds)) {
+ dev_warn(dev, "GPF: could not set dirty shutdown state\n");
+ return;
+ }
+
+ if (cxl_gpf_get_dvsec(cxlds->dev, false) <= 0)
+ return;
+
+ if (cxl_get_dirty_count(mds, &count)) {
+ dev_warn(dev, "GPF: could not retrieve dirty count\n");
+ return;
+ }
+
+ cxl_nvd->dirty_shutdowns = count;
+}
+
static int cxl_nvdimm_probe(struct device *dev)
{
struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
@@ -78,20 +139,20 @@ static int cxl_nvdimm_probe(struct device *dev)
set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
- nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
- cxl_dimm_attribute_groups, flags,
- cmd_mask, 0, NULL, cxl_nvd->dev_id,
- cxl_security_ops, NULL);
- if (!nvdimm)
- return -ENOMEM;
/*
* Set dirty shutdown now, with the expectation that the device
* clear it upon a successful GPF flow. The exception to this
* is upon Viral detection, per CXL 3.2 section 12.4.2.
*/
- if (cxl_arm_dirty_shutdown(mds))
- dev_warn(dev, "GPF: could not dirty shutdown state\n");
+ cxl_nvdimm_setup_dirty_tracking(cxl_nvd);
+
+ nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
+ cxl_dimm_attribute_groups, flags,
+ cmd_mask, 0, NULL, cxl_nvd->dev_id,
+ cxl_security_ops, NULL);
+ if (!nvdimm)
+ return -ENOMEM;
dev_set_drvdata(dev, nvdimm);
return devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
--
2.39.5
* [PATCH 4/4] tools/testing/cxl: Set Shutdown State support
2025-02-19 2:14 [PATCH v2 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
` (2 preceding siblings ...)
2025-02-19 2:14 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
@ 2025-02-19 2:14 ` Davidlohr Bueso
3 siblings, 0 replies; 14+ messages in thread
From: Davidlohr Bueso @ 2025-02-19 2:14 UTC (permalink / raw)
To: dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, a.manzanares, fan.ni, anisa.su, dave, linux-cxl
Add support to emulate the CXL Set Shutdown State operation.
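The mock handler's payload checks can be sketched standalone (hypothetical
signature; the real handler takes the mock device's private data and a
struct cxl_mbox_cmd): validate the one-byte input payload and the empty
output payload, then latch the requested state.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <stdint.h>

/* Set Shutdown State input payload (CXL 3.2, Table 8-152): one byte. */
struct set_shutdown_state_in {
	uint8_t state;
};

/* Model of the mock handler: reject malformed payload sizes, then
 * record the requested state in the mock device's private data. */
static int mock_set_shutdown_state(size_t size_in, size_t size_out,
				   const struct set_shutdown_state_in *in,
				   int *shutdown_state)
{
	if (size_in != sizeof(*in))
		return -EINVAL;
	if (size_out != 0)
		return -EINVAL;
	*shutdown_state = in->state;
	return 0;
}
```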
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---
tools/testing/cxl/test/mem.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index 495199238335..832680a87c73 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -65,6 +65,10 @@ static struct cxl_cel_entry mock_cel[] = {
.opcode = cpu_to_le16(CXL_MBOX_OP_GET_HEALTH_INFO),
.effect = CXL_CMD_EFFECT_NONE,
},
+ {
+ .opcode = cpu_to_le16(CXL_MBOX_OP_SET_SHUTDOWN_STATE),
+ .effect = POLICY_CHANGE_IMMEDIATE,
+ },
{
.opcode = cpu_to_le16(CXL_MBOX_OP_GET_POISON),
.effect = CXL_CMD_EFFECT_NONE,
@@ -161,6 +165,7 @@ struct cxl_mockmem_data {
u8 event_buf[SZ_4K];
u64 timestamp;
unsigned long sanitize_timeout;
+ int shutdown_state;
};
static struct mock_event_log *event_find_log(struct device *dev, int log_type)
@@ -1088,6 +1093,21 @@ static int mock_health_info(struct cxl_mbox_cmd *cmd)
return 0;
}
+static int mock_set_shutdown_state(struct cxl_mockmem_data *mdata,
+ struct cxl_mbox_cmd *cmd)
+{
+ struct cxl_mbox_set_shutdown_state_in *ss = cmd->payload_in;
+
+ if (cmd->size_in != sizeof(*ss))
+ return -EINVAL;
+
+ if (cmd->size_out != 0)
+ return -EINVAL;
+
+ mdata->shutdown_state = ss->state;
+ return 0;
+}
+
static struct mock_poison {
struct cxl_dev_state *cxlds;
u64 dpa;
@@ -1421,6 +1441,9 @@ static int cxl_mock_mbox_send(struct cxl_mailbox *cxl_mbox,
case CXL_MBOX_OP_PASSPHRASE_SECURE_ERASE:
rc = mock_passphrase_secure_erase(mdata, cmd);
break;
+ case CXL_MBOX_OP_SET_SHUTDOWN_STATE:
+ rc = mock_set_shutdown_state(mdata, cmd);
+ break;
case CXL_MBOX_OP_GET_POISON:
rc = mock_get_poison(cxlds, cmd);
break;
--
2.39.5
* Re: [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
2025-02-19 2:14 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
@ 2025-02-19 2:34 ` Davidlohr Bueso
0 siblings, 0 replies; 14+ messages in thread
From: Davidlohr Bueso @ 2025-02-19 2:34 UTC (permalink / raw)
To: dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, a.manzanares, fan.ni, anisa.su, linux-cxl
On Tue, 18 Feb 2025, Davidlohr Bueso wrote:
>diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
>index 6d60030139df..03ad3c8ba88d 100644
>--- a/drivers/cxl/cxlmem.h
>+++ b/drivers/cxl/cxlmem.h
>@@ -866,4 +879,6 @@ struct cxl_hdm {
> struct seq_file;
> struct dentry *cxl_debugfs_create_dir(const char *dir);
> void cxl_dpa_debug(struct seq_file *file, struct cxl_dev_state *cxlds);
>+
>+int cxl_gpf_device(struct cxl_dev_state *cxlds);
Bleh, this is a leftover from v1 that slipped through. I will send a v3.
Thanks,
Davidlohr
* [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
2025-02-19 6:28 [PATCH v3 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
@ 2025-02-19 6:28 ` Davidlohr Bueso
2025-02-19 16:44 ` Dave Jiang
2025-02-19 21:15 ` Ira Weiny
0 siblings, 2 replies; 14+ messages in thread
From: Davidlohr Bueso @ 2025-02-19 6:28 UTC (permalink / raw)
To: dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, a.manzanares, fan.ni, anisa.su, dave, linux-cxl
Similar to how the acpi_nfit driver exports the Optane dirty shutdown count,
introduce:
/sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
The attribute is exposed only under the conditions that 1) dirty shutdown
can be set, 2) the Device GPF DVSEC exists, and 3) the count itself can be
retrieved.
Suggested-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---
Documentation/ABI/testing/sysfs-bus-cxl | 12 +++
Documentation/driver-api/cxl/maturity-map.rst | 2 +-
drivers/cxl/core/mbox.c | 21 +++++
drivers/cxl/cxl.h | 1 +
drivers/cxl/cxlmem.h | 13 ++++
drivers/cxl/pmem.c | 77 +++++++++++++++++--
6 files changed, 117 insertions(+), 9 deletions(-)
diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
index 3f5627a1210a..a7491d214098 100644
--- a/Documentation/ABI/testing/sysfs-bus-cxl
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -586,3 +586,15 @@ Description:
See Documentation/ABI/stable/sysfs-devices-node. access0 provides
the number to the closest initiator and access1 provides the
number to the closest CPU.
+
+
+What: /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
+Date: Feb, 2025
+KernelVersion: v6.15
+Contact: linux-cxl@vger.kernel.org
+Description:
+ (RO) The device dirty shutdown count value, which is the number
+ of times the device could have incurred potential data loss.
+ The count is persistent across power loss and wraps back to 0
+ upon overflow. If this file is not present, the device does not
+ have the necessary support for dirty tracking.
diff --git a/Documentation/driver-api/cxl/maturity-map.rst b/Documentation/driver-api/cxl/maturity-map.rst
index 99dd2c841e69..a2288f9df658 100644
--- a/Documentation/driver-api/cxl/maturity-map.rst
+++ b/Documentation/driver-api/cxl/maturity-map.rst
@@ -130,7 +130,7 @@ Mailbox commands
* [0] Switch CCI
* [3] Timestamp
* [1] PMEM labels
-* [1] PMEM GPF / Dirty Shutdown
+* [3] PMEM GPF / Dirty Shutdown
* [0] Scan Media
PMU
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 86d13f4a1c18..f1009a265f9d 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1281,6 +1281,27 @@ int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info)
}
EXPORT_SYMBOL_NS_GPL(cxl_mem_dpa_fetch, "CXL");
+int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count)
+{
+ int rc;
+ struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+ struct cxl_mbox_cmd mbox_cmd;
+ struct cxl_mbox_get_health_info_out hi;
+
+ mbox_cmd = (struct cxl_mbox_cmd) {
+ .opcode = CXL_MBOX_OP_GET_HEALTH_INFO,
+ .size_out = sizeof(hi),
+ .payload_out = &hi,
+ };
+
+ rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
+ if (!rc)
+ *count = le32_to_cpu(hi.dirty_shutdown_cnt);
+
+ return rc;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_get_dirty_count, "CXL");
+
int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index acbbba41356d..4dbf1cc60047 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -542,6 +542,7 @@ struct cxl_nvdimm {
struct device dev;
struct cxl_memdev *cxlmd;
u8 dev_id[CXL_DEV_ID_LEN]; /* for nvdimm, string of 'serial' */
+ u64 dirty_shutdowns;
};
struct cxl_pmem_region_mapping {
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 6d60030139df..3b6ef9e936c3 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -681,6 +681,18 @@ struct cxl_mbox_set_partition_info {
#define CXL_SET_PARTITION_IMMEDIATE_FLAG BIT(0)
+/* Get Health Info Output Payload CXL 3.2 Spec 8.2.10.9.3.1 Table 8-148 */
+struct cxl_mbox_get_health_info_out {
+ u8 health_status;
+ u8 media_status;
+ u8 additional_status;
+ u8 life_used;
+ __le16 device_temperature;
+ __le32 dirty_shutdown_cnt;
+ __le32 corrected_volatile_error_cnt;
+ __le32 corrected_persistent_error_cnt;
+} __packed;
+
/* Set Shutdown State Input Payload CXL 3.2 Spec 8.2.10.9.3.5 Table 8-152 */
struct cxl_mbox_set_shutdown_state_in {
u8 state;
@@ -822,6 +834,7 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
enum cxl_event_log_type type,
enum cxl_event_type event_type,
const uuid_t *uuid, union cxl_event *evt);
+int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count);
int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds);
int cxl_set_timestamp(struct cxl_memdev_state *mds);
int cxl_poison_state_init(struct cxl_memdev_state *mds);
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index 6b284962592f..aee1afe9d287 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -42,15 +42,44 @@ static ssize_t id_show(struct device *dev, struct device_attribute *attr, char *
}
static DEVICE_ATTR_RO(id);
+static ssize_t dirty_shutdown_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+
+ return sysfs_emit(buf, "%lld\n", cxl_nvd->dirty_shutdowns);
+}
+static DEVICE_ATTR_RO(dirty_shutdown);
+
static struct attribute *cxl_dimm_attributes[] = {
&dev_attr_id.attr,
&dev_attr_provider.attr,
+ &dev_attr_dirty_shutdown.attr,
NULL
};
+#define CXL_INVALID_DIRTY_SHUTDOWN_COUNT -1
+static umode_t cxl_dimm_visible(struct kobject *kobj,
+ struct attribute *a, int n)
+{
+ if (a == &dev_attr_dirty_shutdown.attr) {
+ struct device *dev = kobj_to_dev(kobj);
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+
+ if (cxl_nvd->dirty_shutdowns ==
+ CXL_INVALID_DIRTY_SHUTDOWN_COUNT)
+ return 0;
+ }
+
+ return a->mode;
+}
+
static const struct attribute_group cxl_dimm_attribute_group = {
.name = "cxl",
.attrs = cxl_dimm_attributes,
+ .is_visible = cxl_dimm_visible
};
static const struct attribute_group *cxl_dimm_attribute_groups[] = {
@@ -58,6 +87,38 @@ static const struct attribute_group *cxl_dimm_attribute_groups[] = {
NULL
};
+static void cxl_nvdimm_setup_dirty_tracking(struct cxl_nvdimm *cxl_nvd)
+{
+ u32 count;
+ struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+ struct cxl_dev_state *cxlds = cxlmd->cxlds;
+ struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
+ struct device *dev = &cxl_nvd->dev;
+
+ /*
+ * Dirty tracking is enabled and exposed to the user, only when:
+ * - dirty shutdown on the device can be set, and,
+ * - the device has a Device GPF DVSEC (albeit unused), and,
+ * - the Get Health Info cmd can retrieve the device's dirty count.
+ */
+ cxl_nvd->dirty_shutdowns = CXL_INVALID_DIRTY_SHUTDOWN_COUNT;
+
+ if (cxl_arm_dirty_shutdown(mds)) {
+ dev_warn(dev, "GPF: could not set dirty shutdown state\n");
+ return;
+ }
+
+ if (cxl_gpf_get_dvsec(cxlds->dev, false) <= 0)
+ return;
+
+ if (cxl_get_dirty_count(mds, &count)) {
+ dev_warn(dev, "GPF: could not retrieve dirty count\n");
+ return;
+ }
+
+ cxl_nvd->dirty_shutdowns = count;
+}
+
static int cxl_nvdimm_probe(struct device *dev)
{
struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
@@ -78,20 +139,20 @@ static int cxl_nvdimm_probe(struct device *dev)
set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
- nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
- cxl_dimm_attribute_groups, flags,
- cmd_mask, 0, NULL, cxl_nvd->dev_id,
- cxl_security_ops, NULL);
- if (!nvdimm)
- return -ENOMEM;
/*
* Set dirty shutdown now, with the expectation that the device
* clear it upon a successful GPF flow. The exception to this
* is upon Viral detection, per CXL 3.2 section 12.4.2.
*/
- if (cxl_arm_dirty_shutdown(mds))
- dev_warn(dev, "GPF: could not dirty shutdown state\n");
+ cxl_nvdimm_setup_dirty_tracking(cxl_nvd);
+
+ nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
+ cxl_dimm_attribute_groups, flags,
+ cmd_mask, 0, NULL, cxl_nvd->dev_id,
+ cxl_security_ops, NULL);
+ if (!nvdimm)
+ return -ENOMEM;
dev_set_drvdata(dev, nvdimm);
return devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
--
2.39.5
* Re: [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
2025-02-19 6:28 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
@ 2025-02-19 16:44 ` Dave Jiang
2025-02-19 21:15 ` Ira Weiny
1 sibling, 0 replies; 14+ messages in thread
From: Dave Jiang @ 2025-02-19 16:44 UTC (permalink / raw)
To: Davidlohr Bueso, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, a.manzanares, fan.ni, anisa.su, linux-cxl
On 2/18/25 11:28 PM, Davidlohr Bueso wrote:
> Similar to how the acpi_nfit driver exports Optane dirty shutdown count,
> introduce:
>
> /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
>
> Under the conditions that 1) dirty shutdown can be set, 2) Device GPF
> DVSEC exists, and 3) the count itself can be retrieved.
>
> Suggested-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
just a couple nits below
> ---
> Documentation/ABI/testing/sysfs-bus-cxl | 12 +++
> Documentation/driver-api/cxl/maturity-map.rst | 2 +-
> drivers/cxl/core/mbox.c | 21 +++++
> drivers/cxl/cxl.h | 1 +
> drivers/cxl/cxlmem.h | 13 ++++
> drivers/cxl/pmem.c | 77 +++++++++++++++++--
> 6 files changed, 117 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
> index 3f5627a1210a..a7491d214098 100644
> --- a/Documentation/ABI/testing/sysfs-bus-cxl
> +++ b/Documentation/ABI/testing/sysfs-bus-cxl
> @@ -586,3 +586,15 @@ Description:
> See Documentation/ABI/stable/sysfs-devices-node. access0 provides
> the number to the closest initiator and access1 provides the
> number to the closest CPU.
> +
> +
> +What: /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
> +Date: Feb, 2025
> +KernelVersion: v6.15
> +Contact: linux-cxl@vger.kernel.org
> +Description:
> + (RO) The device dirty shutdown count value, which is the number
> + of times the device could have incurred potential data loss.
> + The count is persistent across power loss and wraps back to 0
> + upon overflow. If this file is not present, the device does not
> + have the necessary support for dirty tracking.
> diff --git a/Documentation/driver-api/cxl/maturity-map.rst b/Documentation/driver-api/cxl/maturity-map.rst
> index 99dd2c841e69..a2288f9df658 100644
> --- a/Documentation/driver-api/cxl/maturity-map.rst
> +++ b/Documentation/driver-api/cxl/maturity-map.rst
> @@ -130,7 +130,7 @@ Mailbox commands
> * [0] Switch CCI
> * [3] Timestamp
> * [1] PMEM labels
> -* [1] PMEM GPF / Dirty Shutdown
> +* [3] PMEM GPF / Dirty Shutdown
> * [0] Scan Media
>
> PMU
> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> index 86d13f4a1c18..f1009a265f9d 100644
> --- a/drivers/cxl/core/mbox.c
> +++ b/drivers/cxl/core/mbox.c
> @@ -1281,6 +1281,27 @@ int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info)
> }
> EXPORT_SYMBOL_NS_GPL(cxl_mem_dpa_fetch, "CXL");
>
> +int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count)
> +{
> + int rc;
> + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
> + struct cxl_mbox_cmd mbox_cmd;
> + struct cxl_mbox_get_health_info_out hi;
reverse xmas tree pls
> +
> + mbox_cmd = (struct cxl_mbox_cmd) {
> + .opcode = CXL_MBOX_OP_GET_HEALTH_INFO,
> + .size_out = sizeof(hi),
> + .payload_out = &hi,
> + };
> +
> + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
> + if (!rc)
> + *count = le32_to_cpu(hi.dirty_shutdown_cnt);
> +
> + return rc;
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_get_dirty_count, "CXL");
> +
> int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds)
> {
> struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index acbbba41356d..4dbf1cc60047 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -542,6 +542,7 @@ struct cxl_nvdimm {
> struct device dev;
> struct cxl_memdev *cxlmd;
> u8 dev_id[CXL_DEV_ID_LEN]; /* for nvdimm, string of 'serial' */
> + u64 dirty_shutdowns;
> };
>
> struct cxl_pmem_region_mapping {
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index 6d60030139df..3b6ef9e936c3 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -681,6 +681,18 @@ struct cxl_mbox_set_partition_info {
>
> #define CXL_SET_PARTITION_IMMEDIATE_FLAG BIT(0)
>
> +/* Get Health Info Output Payload CXL 3.2 Spec 8.2.10.9.3.1 Table 8-148 */
> +struct cxl_mbox_get_health_info_out {
> + u8 health_status;
> + u8 media_status;
> + u8 additional_status;
> + u8 life_used;
> + __le16 device_temperature;
> + __le32 dirty_shutdown_cnt;
> + __le32 corrected_volatile_error_cnt;
> + __le32 corrected_persistent_error_cnt;
> +} __packed;
> +
> /* Set Shutdown State Input Payload CXL 3.2 Spec 8.2.10.9.3.5 Table 8-152 */
> struct cxl_mbox_set_shutdown_state_in {
> u8 state;
> @@ -822,6 +834,7 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
> enum cxl_event_log_type type,
> enum cxl_event_type event_type,
> const uuid_t *uuid, union cxl_event *evt);
> +int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count);
> int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds);
> int cxl_set_timestamp(struct cxl_memdev_state *mds);
> int cxl_poison_state_init(struct cxl_memdev_state *mds);
> diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
> index 6b284962592f..aee1afe9d287 100644
> --- a/drivers/cxl/pmem.c
> +++ b/drivers/cxl/pmem.c
> @@ -42,15 +42,44 @@ static ssize_t id_show(struct device *dev, struct device_attribute *attr, char *
> }
> static DEVICE_ATTR_RO(id);
>
> +static ssize_t dirty_shutdown_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct nvdimm *nvdimm = to_nvdimm(dev);
> + struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +
> + return sysfs_emit(buf, "%lld\n", cxl_nvd->dirty_shutdowns);
> +}
> +static DEVICE_ATTR_RO(dirty_shutdown);
> +
> static struct attribute *cxl_dimm_attributes[] = {
> &dev_attr_id.attr,
> &dev_attr_provider.attr,
> + &dev_attr_dirty_shutdown.attr,
> NULL
> };
>
> +#define CXL_INVALID_DIRTY_SHUTDOWN_COUNT -1
> +static umode_t cxl_dimm_visible(struct kobject *kobj,
> + struct attribute *a, int n)
> +{
> + if (a == &dev_attr_dirty_shutdown.attr) {
> + struct device *dev = kobj_to_dev(kobj);
> + struct nvdimm *nvdimm = to_nvdimm(dev);
> + struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +
> + if (cxl_nvd->dirty_shutdowns ==
> + CXL_INVALID_DIRTY_SHUTDOWN_COUNT)
> + return 0;
> + }
> +
> + return a->mode;
> +}
> +
> static const struct attribute_group cxl_dimm_attribute_group = {
> .name = "cxl",
> .attrs = cxl_dimm_attributes,
> + .is_visible = cxl_dimm_visible
> };
>
> static const struct attribute_group *cxl_dimm_attribute_groups[] = {
> @@ -58,6 +87,38 @@ static const struct attribute_group *cxl_dimm_attribute_groups[] = {
> NULL
> };
>
> +static void cxl_nvdimm_setup_dirty_tracking(struct cxl_nvdimm *cxl_nvd)
> +{
> + u32 count;
> + struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> + struct cxl_dev_state *cxlds = cxlmd->cxlds;
> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
> + struct device *dev = &cxl_nvd->dev;
reverse xmas tree pls
> +
> + /*
> + * Dirty tracking is enabled and exposed to the user, only when:
> + * - dirty shutdown on the device can be set, and,
> + * - the device has a Device GPF DVSEC (albeit unused), and,
> + * - the Get Health Info cmd can retrieve the device's dirty count.
> + */
> + cxl_nvd->dirty_shutdowns = CXL_INVALID_DIRTY_SHUTDOWN_COUNT;
> +
> + if (cxl_arm_dirty_shutdown(mds)) {
> + dev_warn(dev, "GPF: could not set dirty shutdown state\n");
> + return;
> + }
> +
> + if (cxl_gpf_get_dvsec(cxlds->dev, false) <= 0)
> + return;
> +
> + if (cxl_get_dirty_count(mds, &count)) {
> + dev_warn(dev, "GPF: could not retrieve dirty count\n");
> + return;
> + }
> +
> + cxl_nvd->dirty_shutdowns = count;
> +}
> +
> static int cxl_nvdimm_probe(struct device *dev)
> {
> struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
> @@ -78,20 +139,20 @@ static int cxl_nvdimm_probe(struct device *dev)
> set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
> set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
> set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
> - nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
> - cxl_dimm_attribute_groups, flags,
> - cmd_mask, 0, NULL, cxl_nvd->dev_id,
> - cxl_security_ops, NULL);
> - if (!nvdimm)
> - return -ENOMEM;
>
> /*
> * Set dirty shutdown now, with the expectation that the device
> * clear it upon a successful GPF flow. The exception to this
> * is upon Viral detection, per CXL 3.2 section 12.4.2.
> */
> - if (cxl_arm_dirty_shutdown(mds))
> - dev_warn(dev, "GPF: could not dirty shutdown state\n");
> + cxl_nvdimm_setup_dirty_tracking(cxl_nvd);
> +
> + nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
> + cxl_dimm_attribute_groups, flags,
> + cmd_mask, 0, NULL, cxl_nvd->dev_id,
> + cxl_security_ops, NULL);
> + if (!nvdimm)
> + return -ENOMEM;
>
> dev_set_drvdata(dev, nvdimm);
> return devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
2025-02-19 6:28 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-19 16:44 ` Dave Jiang
@ 2025-02-19 21:15 ` Ira Weiny
1 sibling, 0 replies; 14+ messages in thread
From: Ira Weiny @ 2025-02-19 21:15 UTC (permalink / raw)
To: Davidlohr Bueso, dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, a.manzanares, fan.ni, anisa.su, dave, linux-cxl
Davidlohr Bueso wrote:
[snip]
> +static ssize_t dirty_shutdown_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct nvdimm *nvdimm = to_nvdimm(dev);
> + struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +
> + return sysfs_emit(buf, "%lld\n", cxl_nvd->dirty_shutdowns);
> +}
> +static DEVICE_ATTR_RO(dirty_shutdown);
> +
> static struct attribute *cxl_dimm_attributes[] = {
> &dev_attr_id.attr,
> &dev_attr_provider.attr,
> + &dev_attr_dirty_shutdown.attr,
> NULL
> };
>
> +#define CXL_INVALID_DIRTY_SHUTDOWN_COUNT -1
I think this should be defined as ULLONG_MAX. Just for clarity, since
dirty_shutdowns is u64.
Ira
> +static umode_t cxl_dimm_visible(struct kobject *kobj,
> + struct attribute *a, int n)
> +{
> + if (a == &dev_attr_dirty_shutdown.attr) {
> + struct device *dev = kobj_to_dev(kobj);
> + struct nvdimm *nvdimm = to_nvdimm(dev);
> + struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +
> + if (cxl_nvd->dirty_shutdowns ==
> + CXL_INVALID_DIRTY_SHUTDOWN_COUNT)
> + return 0;
> + }
> +
> + return a->mode;
> +}
> +
[snip]
* [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
2025-02-20 1:36 [PATCH v4 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
@ 2025-02-20 1:36 ` Davidlohr Bueso
2025-02-20 16:11 ` Ira Weiny
2025-02-20 17:29 ` Jonathan Cameron
0 siblings, 2 replies; 14+ messages in thread
From: Davidlohr Bueso @ 2025-02-20 1:36 UTC (permalink / raw)
To: dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, ming.li, a.manzanares, fan.ni, anisa.su, dave,
linux-cxl
Similar to how the acpi_nfit driver exports Optane dirty shutdown count,
introduce:
/sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
Under the conditions that 1) dirty shutdown can be set, 2) Device GPF
DVSEC exists, and 3) the count itself can be retrieved.
Suggested-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
---
Documentation/ABI/testing/sysfs-bus-cxl | 12 +++
Documentation/driver-api/cxl/maturity-map.rst | 2 +-
drivers/cxl/core/mbox.c | 21 +++++
drivers/cxl/cxl.h | 1 +
drivers/cxl/cxlmem.h | 13 +++
drivers/cxl/pmem.c | 79 ++++++++++++++++---
6 files changed, 118 insertions(+), 10 deletions(-)
diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
index 3f5627a1210a..a7491d214098 100644
--- a/Documentation/ABI/testing/sysfs-bus-cxl
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -586,3 +586,15 @@ Description:
See Documentation/ABI/stable/sysfs-devices-node. access0 provides
the number to the closest initiator and access1 provides the
number to the closest CPU.
+
+
+What: /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
+Date: Feb, 2025
+KernelVersion: v6.15
+Contact: linux-cxl@vger.kernel.org
+Description:
+ (RO) The device dirty shutdown count value, which is the number
+ of times the device may have incurred potential data loss.
+ The count is persistent across power loss and wraps back to 0
+ upon overflow. If this file is not present, the device does not
+ have the necessary support for dirty tracking.
diff --git a/Documentation/driver-api/cxl/maturity-map.rst b/Documentation/driver-api/cxl/maturity-map.rst
index 99dd2c841e69..a2288f9df658 100644
--- a/Documentation/driver-api/cxl/maturity-map.rst
+++ b/Documentation/driver-api/cxl/maturity-map.rst
@@ -130,7 +130,7 @@ Mailbox commands
* [0] Switch CCI
* [3] Timestamp
* [1] PMEM labels
-* [1] PMEM GPF / Dirty Shutdown
+* [3] PMEM GPF / Dirty Shutdown
* [0] Scan Media
PMU
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 86d13f4a1c18..6bc398182a5d 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1281,6 +1281,27 @@ int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info)
}
EXPORT_SYMBOL_NS_GPL(cxl_mem_dpa_fetch, "CXL");
+int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count)
+{
+ struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+ struct cxl_mbox_get_health_info_out hi;
+ struct cxl_mbox_cmd mbox_cmd;
+ int rc;
+
+ mbox_cmd = (struct cxl_mbox_cmd) {
+ .opcode = CXL_MBOX_OP_GET_HEALTH_INFO,
+ .size_out = sizeof(hi),
+ .payload_out = &hi,
+ };
+
+ rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
+ if (!rc)
+ *count = le32_to_cpu(hi.dirty_shutdown_cnt);
+
+ return rc;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_get_dirty_count, "CXL");
+
int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 29f2ab0d5bf6..8bdfa536262e 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -542,6 +542,7 @@ struct cxl_nvdimm {
struct device dev;
struct cxl_memdev *cxlmd;
u8 dev_id[CXL_DEV_ID_LEN]; /* for nvdimm, string of 'serial' */
+ u64 dirty_shutdowns;
};
struct cxl_pmem_region_mapping {
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 6d60030139df..3b6ef9e936c3 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -681,6 +681,18 @@ struct cxl_mbox_set_partition_info {
#define CXL_SET_PARTITION_IMMEDIATE_FLAG BIT(0)
+/* Get Health Info Output Payload CXL 3.2 Spec 8.2.10.9.3.1 Table 8-148 */
+struct cxl_mbox_get_health_info_out {
+ u8 health_status;
+ u8 media_status;
+ u8 additional_status;
+ u8 life_used;
+ __le16 device_temperature;
+ __le32 dirty_shutdown_cnt;
+ __le32 corrected_volatile_error_cnt;
+ __le32 corrected_persistent_error_cnt;
+} __packed;
+
/* Set Shutdown State Input Payload CXL 3.2 Spec 8.2.10.9.3.5 Table 8-152 */
struct cxl_mbox_set_shutdown_state_in {
u8 state;
@@ -822,6 +834,7 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
enum cxl_event_log_type type,
enum cxl_event_type event_type,
const uuid_t *uuid, union cxl_event *evt);
+int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count);
int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds);
int cxl_set_timestamp(struct cxl_memdev_state *mds);
int cxl_poison_state_init(struct cxl_memdev_state *mds);
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index 6b284962592f..cb039cfc62cb 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -38,19 +38,48 @@ static ssize_t id_show(struct device *dev, struct device_attribute *attr, char *
struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
struct cxl_dev_state *cxlds = cxl_nvd->cxlmd->cxlds;
- return sysfs_emit(buf, "%lld\n", cxlds->serial);
+ return sysfs_emit(buf, "%llu\n", cxlds->serial);
}
static DEVICE_ATTR_RO(id);
+static ssize_t dirty_shutdown_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+
+ return sysfs_emit(buf, "%lld\n", cxl_nvd->dirty_shutdowns);
+}
+static DEVICE_ATTR_RO(dirty_shutdown);
+
static struct attribute *cxl_dimm_attributes[] = {
&dev_attr_id.attr,
&dev_attr_provider.attr,
+ &dev_attr_dirty_shutdown.attr,
NULL
};
+#define CXL_INVALID_DIRTY_SHUTDOWN_COUNT ULLONG_MAX
+static umode_t cxl_dimm_visible(struct kobject *kobj,
+ struct attribute *a, int n)
+{
+ if (a == &dev_attr_dirty_shutdown.attr) {
+ struct device *dev = kobj_to_dev(kobj);
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+
+ if (cxl_nvd->dirty_shutdowns ==
+ CXL_INVALID_DIRTY_SHUTDOWN_COUNT)
+ return 0;
+ }
+
+ return a->mode;
+}
+
static const struct attribute_group cxl_dimm_attribute_group = {
.name = "cxl",
.attrs = cxl_dimm_attributes,
+ .is_visible = cxl_dimm_visible
};
static const struct attribute_group *cxl_dimm_attribute_groups[] = {
@@ -58,6 +87,38 @@ static const struct attribute_group *cxl_dimm_attribute_groups[] = {
NULL
};
+static void cxl_nvdimm_setup_dirty_tracking(struct cxl_nvdimm *cxl_nvd)
+{
+ struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+ struct cxl_dev_state *cxlds = cxlmd->cxlds;
+ struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
+ struct device *dev = &cxl_nvd->dev;
+ u32 count;
+
+ /*
+ * Dirty tracking is enabled and exposed to the user, only when:
+ * - dirty shutdown on the device can be set, and,
+ * - the device has a Device GPF DVSEC (albeit unused), and,
+ * - the Get Health Info cmd can retrieve the device's dirty count.
+ */
+ cxl_nvd->dirty_shutdowns = CXL_INVALID_DIRTY_SHUTDOWN_COUNT;
+
+ if (cxl_arm_dirty_shutdown(mds)) {
+ dev_warn(dev, "GPF: could not set dirty shutdown state\n");
+ return;
+ }
+
+ if (!cxl_gpf_get_dvsec(cxlds->dev, false))
+ return;
+
+ if (cxl_get_dirty_count(mds, &count)) {
+ dev_warn(dev, "GPF: could not retrieve dirty count\n");
+ return;
+ }
+
+ cxl_nvd->dirty_shutdowns = count;
+}
+
static int cxl_nvdimm_probe(struct device *dev)
{
struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
@@ -78,20 +139,20 @@ static int cxl_nvdimm_probe(struct device *dev)
set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
- nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
- cxl_dimm_attribute_groups, flags,
- cmd_mask, 0, NULL, cxl_nvd->dev_id,
- cxl_security_ops, NULL);
- if (!nvdimm)
- return -ENOMEM;
/*
* Set dirty shutdown now, with the expectation that the device
* clear it upon a successful GPF flow. The exception to this
* is upon Viral detection, per CXL 3.2 section 12.4.2.
*/
- if (cxl_arm_dirty_shutdown(mds))
- dev_warn(dev, "GPF: could not dirty shutdown state\n");
+ cxl_nvdimm_setup_dirty_tracking(cxl_nvd);
+
+ nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
+ cxl_dimm_attribute_groups, flags,
+ cmd_mask, 0, NULL, cxl_nvd->dev_id,
+ cxl_security_ops, NULL);
+ if (!nvdimm)
+ return -ENOMEM;
dev_set_drvdata(dev, nvdimm);
return devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
--
2.39.5
* Re: [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
2025-02-20 1:36 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
@ 2025-02-20 16:11 ` Ira Weiny
2025-02-20 17:29 ` Jonathan Cameron
1 sibling, 0 replies; 14+ messages in thread
From: Ira Weiny @ 2025-02-20 16:11 UTC (permalink / raw)
To: Davidlohr Bueso, dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, ming.li, a.manzanares, fan.ni, anisa.su, dave,
linux-cxl
Davidlohr Bueso wrote:
> Similar to how the acpi_nfit driver exports Optane dirty shutdown count,
> introduce:
>
> /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
>
> Under the conditions that 1) dirty shutdown can be set, 2) Device GPF
> DVSEC exists, and 3) the count itself can be retrieved.
>
> Suggested-by: Dan Williams <dan.j.williams@intel.com>
> Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
[snip]
* Re: [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
2025-02-20 1:36 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-20 16:11 ` Ira Weiny
@ 2025-02-20 17:29 ` Jonathan Cameron
2025-02-20 19:28 ` Davidlohr Bueso
1 sibling, 1 reply; 14+ messages in thread
From: Jonathan Cameron @ 2025-02-20 17:29 UTC (permalink / raw)
To: Davidlohr Bueso
Cc: dave.jiang, dan.j.williams, alison.schofield, ira.weiny,
vishal.l.verma, seven.yi.lee, ming.li, a.manzanares, fan.ni,
anisa.su, linux-cxl
On Wed, 19 Feb 2025 17:36:03 -0800
Davidlohr Bueso <dave@stgolabs.net> wrote:
> Similar to how the acpi_nfit driver exports Optane dirty shutdown count,
> introduce:
>
> /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
>
> Under the conditions that 1) dirty shutdown can be set, 2) Device GPF
> DVSEC exists, and 3) the count itself can be retrieved.
>
> Suggested-by: Dan Williams <dan.j.williams@intel.com>
> Reviewed-by: Dave Jiang <dave.jiang@intel.com>
> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
One trivial thing otherwise
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
> index 6b284962592f..cb039cfc62cb 100644
> --- a/drivers/cxl/pmem.c
> +++ b/drivers/cxl/pmem.c
> @@ -38,19 +38,48 @@ static ssize_t id_show(struct device *dev, struct device_attribute *attr, char *
> struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> struct cxl_dev_state *cxlds = cxl_nvd->cxlmd->cxlds;
>
> - return sysfs_emit(buf, "%lld\n", cxlds->serial);
> + return sysfs_emit(buf, "%llu\n", cxlds->serial);
I guess you 'fixed' the wrong one?
> }
> static DEVICE_ATTR_RO(id);
>
> +static ssize_t dirty_shutdown_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct nvdimm *nvdimm = to_nvdimm(dev);
> + struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +
> + return sysfs_emit(buf, "%lld\n", cxl_nvd->dirty_shutdowns);
It's unsigned so %llu though I hope no one ever tests that by
doing that many dirty shutdowns.
> +}
> +static DEVICE_ATTR_RO(dirty_shutdown);
* Re: [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
2025-02-20 17:29 ` Jonathan Cameron
@ 2025-02-20 19:28 ` Davidlohr Bueso
0 siblings, 0 replies; 14+ messages in thread
From: Davidlohr Bueso @ 2025-02-20 19:28 UTC (permalink / raw)
To: Jonathan Cameron
Cc: dave.jiang, dan.j.williams, alison.schofield, ira.weiny,
vishal.l.verma, seven.yi.lee, ming.li, a.manzanares, fan.ni,
anisa.su, linux-cxl
On Thu, 20 Feb 2025, Jonathan Cameron wrote:
>On Wed, 19 Feb 2025 17:36:03 -0800
>Davidlohr Bueso <dave@stgolabs.net> wrote:
>
>> Similar to how the acpi_nfit driver exports Optane dirty shutdown count,
>> introduce:
>>
>> /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
>>
>> Under the conditions that 1) dirty shutdown can be set, 2) Device GPF
>> DVSEC exists, and 3) the count itself can be retrieved.
>>
>> Suggested-by: Dan Williams <dan.j.williams@intel.com>
>> Reviewed-by: Dave Jiang <dave.jiang@intel.com>
>> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
>One trivial thing otherwise
>Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
>
>
>> diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
>> index 6b284962592f..cb039cfc62cb 100644
>> --- a/drivers/cxl/pmem.c
>> +++ b/drivers/cxl/pmem.c
>> @@ -38,19 +38,48 @@ static ssize_t id_show(struct device *dev, struct device_attribute *attr, char *
>> struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
>> struct cxl_dev_state *cxlds = cxl_nvd->cxlmd->cxlds;
>>
>> - return sysfs_emit(buf, "%lld\n", cxlds->serial);
>> + return sysfs_emit(buf, "%llu\n", cxlds->serial);
>
>I guess you 'fixed' the wrong one?
Yes, I'm an idiot. Will send a hopefully final v5 with this fix.
* [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
2025-02-20 22:02 [PATCH v5 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
@ 2025-02-20 22:02 ` Davidlohr Bueso
0 siblings, 0 replies; 14+ messages in thread
From: Davidlohr Bueso @ 2025-02-20 22:02 UTC (permalink / raw)
To: dave.jiang, dan.j.williams
Cc: jonathan.cameron, alison.schofield, ira.weiny, vishal.l.verma,
seven.yi.lee, ming.li, a.manzanares, fan.ni, anisa.su, dave,
linux-cxl, Jonathan Cameron
Similar to how the acpi_nfit driver exports Optane dirty shutdown count,
introduce:
/sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
Under the conditions that 1) dirty shutdown can be set, 2) Device GPF
DVSEC exists, and 3) the count itself can be retrieved.
Suggested-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
Reviewed-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
---
Documentation/ABI/testing/sysfs-bus-cxl | 12 +++
Documentation/driver-api/cxl/maturity-map.rst | 2 +-
drivers/cxl/core/mbox.c | 21 +++++
drivers/cxl/cxl.h | 1 +
drivers/cxl/cxlmem.h | 13 ++++
drivers/cxl/pmem.c | 77 +++++++++++++++++--
6 files changed, 117 insertions(+), 9 deletions(-)
diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
index 04a880bd1dde..6d911f046a78 100644
--- a/Documentation/ABI/testing/sysfs-bus-cxl
+++ b/Documentation/ABI/testing/sysfs-bus-cxl
@@ -604,3 +604,15 @@ Description:
See Documentation/ABI/stable/sysfs-devices-node. access0 provides
the number to the closest initiator and access1 provides the
number to the closest CPU.
+
+
+What: /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
+Date: Feb, 2025
+KernelVersion: v6.15
+Contact: linux-cxl@vger.kernel.org
+Description:
+ (RO) The device dirty shutdown count value, which is the number
+ of times the device may have incurred potential data loss.
+ The count is persistent across power loss and wraps back to 0
+ upon overflow. If this file is not present, the device does not
+ have the necessary support for dirty tracking.
diff --git a/Documentation/driver-api/cxl/maturity-map.rst b/Documentation/driver-api/cxl/maturity-map.rst
index 99dd2c841e69..a2288f9df658 100644
--- a/Documentation/driver-api/cxl/maturity-map.rst
+++ b/Documentation/driver-api/cxl/maturity-map.rst
@@ -130,7 +130,7 @@ Mailbox commands
* [0] Switch CCI
* [3] Timestamp
* [1] PMEM labels
-* [1] PMEM GPF / Dirty Shutdown
+* [3] PMEM GPF / Dirty Shutdown
* [0] Scan Media
PMU
diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
index 86d13f4a1c18..6bc398182a5d 100644
--- a/drivers/cxl/core/mbox.c
+++ b/drivers/cxl/core/mbox.c
@@ -1281,6 +1281,27 @@ int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info)
}
EXPORT_SYMBOL_NS_GPL(cxl_mem_dpa_fetch, "CXL");
+int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count)
+{
+ struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
+ struct cxl_mbox_get_health_info_out hi;
+ struct cxl_mbox_cmd mbox_cmd;
+ int rc;
+
+ mbox_cmd = (struct cxl_mbox_cmd) {
+ .opcode = CXL_MBOX_OP_GET_HEALTH_INFO,
+ .size_out = sizeof(hi),
+ .payload_out = &hi,
+ };
+
+ rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
+ if (!rc)
+ *count = le32_to_cpu(hi.dirty_shutdown_cnt);
+
+ return rc;
+}
+EXPORT_SYMBOL_NS_GPL(cxl_get_dirty_count, "CXL");
+
int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds)
{
struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
index 29f2ab0d5bf6..8bdfa536262e 100644
--- a/drivers/cxl/cxl.h
+++ b/drivers/cxl/cxl.h
@@ -542,6 +542,7 @@ struct cxl_nvdimm {
struct device dev;
struct cxl_memdev *cxlmd;
u8 dev_id[CXL_DEV_ID_LEN]; /* for nvdimm, string of 'serial' */
+ u64 dirty_shutdowns;
};
struct cxl_pmem_region_mapping {
diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
index 6d60030139df..3b6ef9e936c3 100644
--- a/drivers/cxl/cxlmem.h
+++ b/drivers/cxl/cxlmem.h
@@ -681,6 +681,18 @@ struct cxl_mbox_set_partition_info {
#define CXL_SET_PARTITION_IMMEDIATE_FLAG BIT(0)
+/* Get Health Info Output Payload CXL 3.2 Spec 8.2.10.9.3.1 Table 8-148 */
+struct cxl_mbox_get_health_info_out {
+ u8 health_status;
+ u8 media_status;
+ u8 additional_status;
+ u8 life_used;
+ __le16 device_temperature;
+ __le32 dirty_shutdown_cnt;
+ __le32 corrected_volatile_error_cnt;
+ __le32 corrected_persistent_error_cnt;
+} __packed;
+
/* Set Shutdown State Input Payload CXL 3.2 Spec 8.2.10.9.3.5 Table 8-152 */
struct cxl_mbox_set_shutdown_state_in {
u8 state;
@@ -822,6 +834,7 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
enum cxl_event_log_type type,
enum cxl_event_type event_type,
const uuid_t *uuid, union cxl_event *evt);
+int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count);
int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds);
int cxl_set_timestamp(struct cxl_memdev_state *mds);
int cxl_poison_state_init(struct cxl_memdev_state *mds);
diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
index 5afd9ca9a944..d061fe3d2b86 100644
--- a/drivers/cxl/pmem.c
+++ b/drivers/cxl/pmem.c
@@ -42,15 +42,44 @@ static ssize_t id_show(struct device *dev, struct device_attribute *attr, char *
}
static DEVICE_ATTR_RO(id);
+static ssize_t dirty_shutdown_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+
+ return sysfs_emit(buf, "%llu\n", cxl_nvd->dirty_shutdowns);
+}
+static DEVICE_ATTR_RO(dirty_shutdown);
+
static struct attribute *cxl_dimm_attributes[] = {
&dev_attr_id.attr,
&dev_attr_provider.attr,
+ &dev_attr_dirty_shutdown.attr,
NULL
};
+#define CXL_INVALID_DIRTY_SHUTDOWN_COUNT ULLONG_MAX
+static umode_t cxl_dimm_visible(struct kobject *kobj,
+ struct attribute *a, int n)
+{
+ if (a == &dev_attr_dirty_shutdown.attr) {
+ struct device *dev = kobj_to_dev(kobj);
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
+
+ if (cxl_nvd->dirty_shutdowns ==
+ CXL_INVALID_DIRTY_SHUTDOWN_COUNT)
+ return 0;
+ }
+
+ return a->mode;
+}
+
static const struct attribute_group cxl_dimm_attribute_group = {
.name = "cxl",
.attrs = cxl_dimm_attributes,
+ .is_visible = cxl_dimm_visible
};
static const struct attribute_group *cxl_dimm_attribute_groups[] = {
@@ -58,6 +87,38 @@ static const struct attribute_group *cxl_dimm_attribute_groups[] = {
NULL
};
+static void cxl_nvdimm_arm_dirty_shutdown_tracking(struct cxl_nvdimm *cxl_nvd)
+{
+ struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
+ struct cxl_dev_state *cxlds = cxlmd->cxlds;
+ struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
+ struct device *dev = &cxl_nvd->dev;
+ u32 count;
+
+ /*
+ * Dirty tracking is enabled and exposed to the user, only when:
+ * - dirty shutdown on the device can be set, and,
+ * - the device has a Device GPF DVSEC (albeit unused), and,
+ * - the Get Health Info cmd can retrieve the device's dirty count.
+ */
+ cxl_nvd->dirty_shutdowns = CXL_INVALID_DIRTY_SHUTDOWN_COUNT;
+
+ if (cxl_arm_dirty_shutdown(mds)) {
+ dev_warn(dev, "GPF: could not set dirty shutdown state\n");
+ return;
+ }
+
+ if (!cxl_gpf_get_dvsec(cxlds->dev, false))
+ return;
+
+ if (cxl_get_dirty_count(mds, &count)) {
+ dev_warn(dev, "GPF: could not retrieve dirty count\n");
+ return;
+ }
+
+ cxl_nvd->dirty_shutdowns = count;
+}
+
static int cxl_nvdimm_probe(struct device *dev)
{
struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
@@ -78,20 +139,20 @@ static int cxl_nvdimm_probe(struct device *dev)
set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
- nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
- cxl_dimm_attribute_groups, flags,
- cmd_mask, 0, NULL, cxl_nvd->dev_id,
- cxl_security_ops, NULL);
- if (!nvdimm)
- return -ENOMEM;
/*
* Set dirty shutdown now, with the expectation that the device
* clear it upon a successful GPF flow. The exception to this
* is upon Viral detection, per CXL 3.2 section 12.4.2.
*/
- if (cxl_arm_dirty_shutdown(mds))
- dev_warn(dev, "GPF: could not dirty shutdown state\n");
+ cxl_nvdimm_arm_dirty_shutdown_tracking(cxl_nvd);
+
+ nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
+ cxl_dimm_attribute_groups, flags,
+ cmd_mask, 0, NULL, cxl_nvd->dev_id,
+ cxl_security_ops, NULL);
+ if (!nvdimm)
+ return -ENOMEM;
dev_set_drvdata(dev, nvdimm);
return devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
--
2.39.5
end of thread, other threads:[~2025-02-20 22:08 UTC | newest]
Thread overview: 14+ messages
2025-02-19 2:14 [PATCH v2 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 1/4] cxl/pci: Introduce cxl_gpf_get_dvsec() Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 2/4] cxl/pmem: Rename cxl_dirty_shutdown_state() Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-19 2:34 ` Davidlohr Bueso
2025-02-19 2:14 ` [PATCH 4/4] tools/testing/cxl: Set Shutdown State support Davidlohr Bueso
-- strict thread matches above, loose matches on Subject: below --
2025-02-19 6:28 [PATCH v3 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-19 6:28 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-19 16:44 ` Dave Jiang
2025-02-19 21:15 ` Ira Weiny
2025-02-20 1:36 [PATCH v4 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-20 1:36 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-20 16:11 ` Ira Weiny
2025-02-20 17:29 ` Jonathan Cameron
2025-02-20 19:28 ` Davidlohr Bueso
2025-02-20 22:02 [PATCH v5 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-20 22:02 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso