From: Dave Jiang <dave.jiang@intel.com>
To: Davidlohr Bueso <dave@stgolabs.net>, dan.j.williams@intel.com
Cc: jonathan.cameron@huawei.com, alison.schofield@intel.com,
ira.weiny@intel.com, vishal.l.verma@intel.com,
seven.yi.lee@gmail.com, a.manzanares@samsung.com,
fan.ni@samsung.com, anisa.su@samsung.com,
linux-cxl@vger.kernel.org
Subject: Re: [PATCH 1/2] cxl/pmem: Export dirty shutdown count via sysfs
Date: Mon, 10 Feb 2025 19:09:58 -0700 [thread overview]
Message-ID: <49bf1136-c4b5-4c25-8b3f-2b9f68e23983@intel.com> (raw)
In-Reply-To: <20250205040842.1253616-2-dave@stgolabs.net>
On 2/4/25 9:08 PM, Davidlohr Bueso wrote:
> Similar to how the acpi_nfit driver exports Optane dirty shutdown count,
> introduce:
>
> /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
>
> Under the conditions that 1) dirty shutdown can be set, 2) Device GPF
> DVSEC exists, and 3) the count itself can be retrieved.
>
> Suggested-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
> ---
> Documentation/ABI/testing/sysfs-bus-cxl | 12 ++++
> Documentation/driver-api/cxl/maturity-map.rst | 2 +-
> drivers/cxl/core/mbox.c | 21 ++++++
> drivers/cxl/core/pci.c | 23 +++++++
> drivers/cxl/cxl.h | 1 +
> drivers/cxl/cxlmem.h | 15 ++++
> drivers/cxl/pmem.c | 69 ++++++++++++++++---
> 7 files changed, 134 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
> index 3f5627a1210a..a7491d214098 100644
> --- a/Documentation/ABI/testing/sysfs-bus-cxl
> +++ b/Documentation/ABI/testing/sysfs-bus-cxl
> @@ -586,3 +586,15 @@ Description:
> See Documentation/ABI/stable/sysfs-devices-node. access0 provides
> the number to the closest initiator and access1 provides the
> number to the closest CPU.
> +
> +
> +What: /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
> +Date: Feb, 2025
> +KernelVersion: v6.15
> +Contact: linux-cxl@vger.kernel.org
> +Description:
> + (RO) The device dirty shutdown count value, which is the number
> + of times the device could have incurred in potential data loss.
> + The count is persistent across power loss and wraps back to 0
> + upon overflow. If this file is not present, the device does not
> + have the necessary support for dirty tracking.
> diff --git a/Documentation/driver-api/cxl/maturity-map.rst b/Documentation/driver-api/cxl/maturity-map.rst
> index 99dd2c841e69..a2288f9df658 100644
> --- a/Documentation/driver-api/cxl/maturity-map.rst
> +++ b/Documentation/driver-api/cxl/maturity-map.rst
> @@ -130,7 +130,7 @@ Mailbox commands
> * [0] Switch CCI
> * [3] Timestamp
> * [1] PMEM labels
> -* [1] PMEM GPF / Dirty Shutdown
> +* [3] PMEM GPF / Dirty Shutdown
> * [0] Scan Media
>
> PMU
> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> index 4d22bb731177..d03fc7ed76a8 100644
> --- a/drivers/cxl/core/mbox.c
> +++ b/drivers/cxl/core/mbox.c
> @@ -1290,6 +1290,27 @@ int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info)
> }
> EXPORT_SYMBOL_NS_GPL(cxl_mem_dpa_fetch, "CXL");
>
> +int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count)
> +{
> + int rc;
> + struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
> + struct cxl_mbox_cmd mbox_cmd;
> + struct cxl_mbox_get_health_info_out hi;
> +
> + mbox_cmd = (struct cxl_mbox_cmd) {
> + .opcode = CXL_MBOX_OP_GET_HEALTH_INFO,
> + .size_out = sizeof(hi),
> + .payload_out = &hi,
> + };
> +
> + rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
> + if (!rc)
> + *count = le32_to_cpu(hi.dirty_shutdown_cnt);
> +
> + return rc;
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_get_dirty_count, "CXL");
> +
> int cxl_dirty_shutdown_state(struct cxl_memdev_state *mds)
> {
> struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
> diff --git a/drivers/cxl/core/pci.c b/drivers/cxl/core/pci.c
> index a5c65f79db18..bcf05d010a77 100644
> --- a/drivers/cxl/core/pci.c
> +++ b/drivers/cxl/core/pci.c
> @@ -1141,3 +1141,26 @@ int cxl_gpf_port_setup(struct device *dport_dev, struct cxl_port *port)
>
> return 0;
> }
> +
> +int cxl_gpf_device(struct cxl_dev_state *cxlds)
Maybe turn this into a helper for retrieving the DVSEC and use it in cxl_gpf_port_setup() as well?
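Something like this untested sketch (helper name and placement are only a suggestion):

```c
/* untested sketch -- name is just a suggestion */
static int cxl_gpf_get_dvsec(struct pci_dev *pdev, u16 dvsec_id)
{
	int dvsec;

	dvsec = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_CXL, dvsec_id);
	if (!dvsec) {
		pci_warn(pdev, "GPF DVSEC %#x not present\n", dvsec_id);
		return -EINVAL;
	}

	return dvsec;
}
```

cxl_gpf_device() would then only check the return value, and cxl_gpf_port_setup() could call the same helper with the Port GPF DVSEC id.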
> +{
> + int dvsec;
> + struct device *dev = cxlds->dev;
> + struct pci_dev *pdev;
> +
> + if (!dev_is_pci(dev))
> + return 0;
> +
> + pdev = to_pci_dev(dev);
> + if (!pdev)
> + return -EINVAL;
> +
> + dvsec = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_CXL,
> + CXL_DVSEC_DEVICE_GPF);
> + if (!dvsec) {
> + pci_warn(pdev, "Device GPF DVSEC not present\n");
> + return -EINVAL;
> + }
> +
> + return 0;
> +}
I think this needs to export the symbol so it can be used outside of the CXL core.
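i.e., matching the convention already used for cxl_get_dirty_count() above:

```c
EXPORT_SYMBOL_NS_GPL(cxl_gpf_device, "CXL");
```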
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index 6baec4ba9141..40cc44a18df8 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -542,6 +542,7 @@ struct cxl_nvdimm {
> struct device dev;
> struct cxl_memdev *cxlmd;
> u8 dev_id[CXL_DEV_ID_LEN]; /* for nvdimm, string of 'serial' */
> + long dirty_shutdown;
Maybe consider using u64 and calling it 'dirty_shutdowns'?
> };
>
> struct cxl_pmem_region_mapping {
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index 536cbe521d16..ee0c93fde50c 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -725,6 +725,18 @@ struct cxl_mbox_set_partition_info {
>
> #define CXL_SET_PARTITION_IMMEDIATE_FLAG BIT(0)
>
> +/* Get Health Info Output Payload CXL 3.2 Spec 8.2.10.9.3.1 Table 8-148 */
> +struct cxl_mbox_get_health_info_out {
> + u8 health_status;
> + u8 media_status;
> + u8 additional_status;
> + u8 life_used;
> + __le16 device_temperature;
> + __le32 dirty_shutdown_cnt;
> + __le32 corrected_volatile_error_cnt;
> + __le32 corrected_persistent_error_cnt;
> +} __packed;
> +
> /* Set Shutdown State Input Payload CXL 3.2 Spec 8.2.10.9.3.5 Table 8-152 */
> struct cxl_mbox_set_shutdown_state_in {
> u8 state;
> @@ -866,6 +878,7 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
> enum cxl_event_log_type type,
> enum cxl_event_type event_type,
> const uuid_t *uuid, union cxl_event *evt);
> +int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count);
> int cxl_dirty_shutdown_state(struct cxl_memdev_state *mds);
> int cxl_set_timestamp(struct cxl_memdev_state *mds);
> int cxl_poison_state_init(struct cxl_memdev_state *mds);
> @@ -910,4 +923,6 @@ struct cxl_hdm {
> struct seq_file;
> struct dentry *cxl_debugfs_create_dir(const char *dir);
> void cxl_dpa_debug(struct seq_file *file, struct cxl_dev_state *cxlds);
> +
> +int cxl_gpf_device(struct cxl_dev_state *cxlds);
> #endif /* __CXL_MEM_H__ */
> diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
> index a39e2c52d7ab..d83ecd568a9c 100644
> --- a/drivers/cxl/pmem.c
> +++ b/drivers/cxl/pmem.c
> @@ -42,15 +42,41 @@ static ssize_t id_show(struct device *dev, struct device_attribute *attr, char *
> }
> static DEVICE_ATTR_RO(id);
>
> +static ssize_t dirty_shutdown_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct nvdimm *nvdimm = to_nvdimm(dev);
> + struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +
> + return sysfs_emit(buf, "%ld\n", cxl_nvd->dirty_shutdown);
> +}
> +static DEVICE_ATTR_RO(dirty_shutdown);
> +
> static struct attribute *cxl_dimm_attributes[] = {
> &dev_attr_id.attr,
> &dev_attr_provider.attr,
> + &dev_attr_dirty_shutdown.attr,
> NULL
> };
>
> +static umode_t cxl_dimm_visible(struct kobject *kobj, struct attribute *a, int n)
> +{
> + if (a == &dev_attr_dirty_shutdown.attr) {
> + struct device *dev = kobj_to_dev(kobj);
> + struct nvdimm *nvdimm = to_nvdimm(dev);
> + struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +
> + if (cxl_nvd->dirty_shutdown == -1)
> + return 0;
> + }
> +
> + return a->mode;
> +}
> +
> static const struct attribute_group cxl_dimm_attribute_group = {
> .name = "cxl",
> .attrs = cxl_dimm_attributes,
> + .is_visible = cxl_dimm_visible
> };
>
> static const struct attribute_group *cxl_dimm_attribute_groups[] = {
> @@ -58,6 +84,33 @@ static const struct attribute_group *cxl_dimm_attribute_groups[] = {
> NULL
> };
>
> +static void cxl_nvdimm_setup_dirty_tracking(struct cxl_nvdimm *cxl_nvd)
Please consider renaming this to cxl_nvdimm_arm_dirty_shutdown_tracking().
> +{
> + struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> + struct cxl_dev_state *cxlds = cxlmd->cxlds;
> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
> + struct device *dev = &cxl_nvd->dev;
> +
> + /*
> + * Dirty tracking is enabled and exposed to the user, only when:
> + * - dirty shutdown on the device can be set, and,
> + * - the device has a Device GPF DVSEC (albeit unused), and,
> + * - the Get Health Info cmd can retrieve the device's dirty count.
> + */
> + cxl_nvd->dirty_shutdown = -1;
Perhaps add something like:
#define CXL_INVALID_DIRTY_SHUTDOWN_COUNT -1
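With a named constant along the lines of CXL_INVALID_DIRTY_SHUTDOWN_COUNT (sketch only, name is just a proposal), both the assignment and the visibility check become self-documenting:

```c
/* sketch: the -1 sentinel given a name */
#define CXL_INVALID_DIRTY_SHUTDOWN_COUNT	-1

	/* in the setup path: */
	cxl_nvd->dirty_shutdown = CXL_INVALID_DIRTY_SHUTDOWN_COUNT;

	/* and in cxl_dimm_visible(): */
	if (cxl_nvd->dirty_shutdown == CXL_INVALID_DIRTY_SHUTDOWN_COUNT)
		return 0;
```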
> +
> + if (cxl_dirty_shutdown_state(mds)) {
> + dev_warn(dev, "GPF: could not dirty shutdown state\n");
"could not set dirty shutdown state"
Speaking of which, should we rename cxl_dirty_shutdown_state() to cxl_arm_dirty_shutdown()?
Also, maybe just return here instead of falling through
> + } else if (!cxl_gpf_device(cxlds)) {
Just return here in the fail case.
> + u32 count;
> +
> + if (!cxl_get_dirty_count(mds, &count))
> + cxl_nvd->dirty_shutdown = count;
> + else
> + dev_warn(dev, "GPF: could not retrieve dirty count\n");
Warn and return directly here for the failed case first.
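Putting the suggestions together, i.e. something like this (untested, using the rename suggested earlier):

```c
static void cxl_nvdimm_arm_dirty_shutdown_tracking(struct cxl_nvdimm *cxl_nvd)
{
	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
	struct cxl_dev_state *cxlds = cxlmd->cxlds;
	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
	struct device *dev = &cxl_nvd->dev;
	u32 count;

	cxl_nvd->dirty_shutdown = -1;

	if (cxl_dirty_shutdown_state(mds)) {
		dev_warn(dev, "GPF: could not set dirty shutdown state\n");
		return;
	}

	if (cxl_gpf_device(cxlds))
		return;

	if (cxl_get_dirty_count(mds, &count)) {
		dev_warn(dev, "GPF: could not retrieve dirty count\n");
		return;
	}

	cxl_nvd->dirty_shutdown = count;
}
```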
DJ
> + }
> +}
> +
> static int cxl_nvdimm_probe(struct device *dev)
> {
> struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
> @@ -78,20 +131,20 @@ static int cxl_nvdimm_probe(struct device *dev)
> set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
> set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
> set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
> - nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
> - cxl_dimm_attribute_groups, flags,
> - cmd_mask, 0, NULL, cxl_nvd->dev_id,
> - cxl_security_ops, NULL);
> - if (!nvdimm)
> - return -ENOMEM;
>
> /*
> * Set dirty shutdown now, with the expectation that the device
> * clear it upon a successful GPF flow. The exception to this
> * is upon Viral detection, per CXL 3.2 section 12.4.2.
> */
> - if (cxl_dirty_shutdown_state(mds))
> - dev_warn(dev, "GPF: could not dirty shutdown state\n");
> + cxl_nvdimm_setup_dirty_tracking(cxl_nvd);
> +
> + nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
> + cxl_dimm_attribute_groups, flags,
> + cmd_mask, 0, NULL, cxl_nvd->dev_id,
> + cxl_security_ops, NULL);
> + if (!nvdimm)
> + return -ENOMEM;
>
> dev_set_drvdata(dev, nvdimm);
> return devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);
2025-02-05 4:08 [PATCH 0/2] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-05 4:08 ` [PATCH 1/2] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-07 10:46 ` Yee Li
2025-02-11 2:09 ` Dave Jiang [this message]
2025-02-11 4:50 ` Davidlohr Bueso
2025-02-18 18:58 ` Dave Jiang
2025-02-05 4:08 ` [PATCH 2/2] tools/testing/cxl: Set Shutdown State support Davidlohr Bueso
2025-02-11 2:16 ` Dave Jiang
2025-02-11 4:13 ` Davidlohr Bueso