Linux CXL
From: Dave Jiang <dave.jiang@intel.com>
To: Davidlohr Bueso <dave@stgolabs.net>, dan.j.williams@intel.com
Cc: jonathan.cameron@huawei.com, alison.schofield@intel.com,
	ira.weiny@intel.com, vishal.l.verma@intel.com,
	seven.yi.lee@gmail.com, a.manzanares@samsung.com,
	fan.ni@samsung.com, anisa.su@samsung.com,
	linux-cxl@vger.kernel.org
Subject: Re: [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs
Date: Wed, 19 Feb 2025 09:44:03 -0700	[thread overview]
Message-ID: <6afeaba7-3fbb-45af-b5e6-8d69396c8e54@intel.com> (raw)
In-Reply-To: <20250219062832.237881-4-dave@stgolabs.net>



On 2/18/25 11:28 PM, Davidlohr Bueso wrote:
> Similar to how the acpi_nfit driver exports Optane dirty shutdown count,
> introduce:
> 
>   /sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
> 
> Under the conditions that 1) dirty shutdown can be set, 2) Device GPF
> DVSEC exists, and 3) the count itself can be retrieved.
> 
> Suggested-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>

Reviewed-by: Dave Jiang <dave.jiang@intel.com>

just a couple nits below

> ---
>  Documentation/ABI/testing/sysfs-bus-cxl       | 12 +++
>  Documentation/driver-api/cxl/maturity-map.rst |  2 +-
>  drivers/cxl/core/mbox.c                       | 21 +++++
>  drivers/cxl/cxl.h                             |  1 +
>  drivers/cxl/cxlmem.h                          | 13 ++++
>  drivers/cxl/pmem.c                            | 77 +++++++++++++++++--
>  6 files changed, 117 insertions(+), 9 deletions(-)
> 
> diff --git a/Documentation/ABI/testing/sysfs-bus-cxl b/Documentation/ABI/testing/sysfs-bus-cxl
> index 3f5627a1210a..a7491d214098 100644
> --- a/Documentation/ABI/testing/sysfs-bus-cxl
> +++ b/Documentation/ABI/testing/sysfs-bus-cxl
> @@ -586,3 +586,15 @@ Description:
>  		See Documentation/ABI/stable/sysfs-devices-node. access0 provides
>  		the number to the closest initiator and access1 provides the
>  		number to the closest CPU.
> +
> +
> +What:		/sys/bus/cxl/devices/nvdimm-bridge0/ndbusX/nmemY/cxl/dirty_shutdown
> +Date:		Feb, 2025
> +KernelVersion:	v6.15
> +Contact:	linux-cxl@vger.kernel.org
> +Description:
> +		(RO) The device dirty shutdown count value, which is the number
> +		of times the device could have incurred potential data loss.
> +		The count is persistent across power loss and wraps back to 0
> +		upon overflow. If this file is not present, the device does not
> +		have the necessary support for dirty tracking.
> diff --git a/Documentation/driver-api/cxl/maturity-map.rst b/Documentation/driver-api/cxl/maturity-map.rst
> index 99dd2c841e69..a2288f9df658 100644
> --- a/Documentation/driver-api/cxl/maturity-map.rst
> +++ b/Documentation/driver-api/cxl/maturity-map.rst
> @@ -130,7 +130,7 @@ Mailbox commands
>  * [0] Switch CCI
>  * [3] Timestamp
>  * [1] PMEM labels
> -* [1] PMEM GPF / Dirty Shutdown
> +* [3] PMEM GPF / Dirty Shutdown
>  * [0] Scan Media
>  
>  PMU
> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> index 86d13f4a1c18..f1009a265f9d 100644
> --- a/drivers/cxl/core/mbox.c
> +++ b/drivers/cxl/core/mbox.c
> @@ -1281,6 +1281,27 @@ int cxl_mem_dpa_fetch(struct cxl_memdev_state *mds, struct cxl_dpa_info *info)
>  }
>  EXPORT_SYMBOL_NS_GPL(cxl_mem_dpa_fetch, "CXL");
>  
> +int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count)
> +{
> +	int rc;
> +	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
> +	struct cxl_mbox_cmd mbox_cmd;
> +	struct cxl_mbox_get_health_info_out hi;

reverse xmas tree pls

> +
> +	mbox_cmd = (struct cxl_mbox_cmd) {
> +		.opcode = CXL_MBOX_OP_GET_HEALTH_INFO,
> +		.size_out = sizeof(hi),
> +		.payload_out = &hi,
> +	};
> +
> +	rc = cxl_internal_send_cmd(cxl_mbox, &mbox_cmd);
> +	if (!rc)
> +		*count = le32_to_cpu(hi.dirty_shutdown_cnt);
> +
> +	return rc;
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_get_dirty_count, "CXL");
> +
>  int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds)
>  {
>  	struct cxl_mailbox *cxl_mbox = &mds->cxlds.cxl_mbox;
> diff --git a/drivers/cxl/cxl.h b/drivers/cxl/cxl.h
> index acbbba41356d..4dbf1cc60047 100644
> --- a/drivers/cxl/cxl.h
> +++ b/drivers/cxl/cxl.h
> @@ -542,6 +542,7 @@ struct cxl_nvdimm {
>  	struct device dev;
>  	struct cxl_memdev *cxlmd;
>  	u8 dev_id[CXL_DEV_ID_LEN]; /* for nvdimm, string of 'serial' */
> +	u64 dirty_shutdowns;
>  };
>  
>  struct cxl_pmem_region_mapping {
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index 6d60030139df..3b6ef9e936c3 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -681,6 +681,18 @@ struct cxl_mbox_set_partition_info {
>  
>  #define  CXL_SET_PARTITION_IMMEDIATE_FLAG	BIT(0)
>  
> +/* Get Health Info Output Payload CXL 3.2 Spec 8.2.10.9.3.1 Table 8-148 */
> +struct cxl_mbox_get_health_info_out {
> +	u8 health_status;
> +	u8 media_status;
> +	u8 additional_status;
> +	u8 life_used;
> +	__le16 device_temperature;
> +	__le32 dirty_shutdown_cnt;
> +	__le32 corrected_volatile_error_cnt;
> +	__le32 corrected_persistent_error_cnt;
> +} __packed;
> +
>  /* Set Shutdown State Input Payload CXL 3.2 Spec 8.2.10.9.3.5 Table 8-152 */
>  struct cxl_mbox_set_shutdown_state_in {
>  	u8 state;
> @@ -822,6 +834,7 @@ void cxl_event_trace_record(const struct cxl_memdev *cxlmd,
>  			    enum cxl_event_log_type type,
>  			    enum cxl_event_type event_type,
>  			    const uuid_t *uuid, union cxl_event *evt);
> +int cxl_get_dirty_count(struct cxl_memdev_state *mds, u32 *count);
>  int cxl_arm_dirty_shutdown(struct cxl_memdev_state *mds);
>  int cxl_set_timestamp(struct cxl_memdev_state *mds);
>  int cxl_poison_state_init(struct cxl_memdev_state *mds);
> diff --git a/drivers/cxl/pmem.c b/drivers/cxl/pmem.c
> index 6b284962592f..aee1afe9d287 100644
> --- a/drivers/cxl/pmem.c
> +++ b/drivers/cxl/pmem.c
> @@ -42,15 +42,44 @@ static ssize_t id_show(struct device *dev, struct device_attribute *attr, char *
>  }
>  static DEVICE_ATTR_RO(id);
>  
> +static ssize_t dirty_shutdown_show(struct device *dev,
> +				   struct device_attribute *attr, char *buf)
> +{
> +	struct nvdimm *nvdimm = to_nvdimm(dev);
> +	struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +
> +	return sysfs_emit(buf, "%llu\n", cxl_nvd->dirty_shutdowns);
> +}
> +static DEVICE_ATTR_RO(dirty_shutdown);
> +
>  static struct attribute *cxl_dimm_attributes[] = {
>  	&dev_attr_id.attr,
>  	&dev_attr_provider.attr,
> +	&dev_attr_dirty_shutdown.attr,
>  	NULL
>  };
>  
> +#define CXL_INVALID_DIRTY_SHUTDOWN_COUNT -1
> +static umode_t cxl_dimm_visible(struct kobject *kobj,
> +				struct attribute *a, int n)
> +{
> +	if (a == &dev_attr_dirty_shutdown.attr) {
> +		struct device *dev = kobj_to_dev(kobj);
> +		struct nvdimm *nvdimm = to_nvdimm(dev);
> +		struct cxl_nvdimm *cxl_nvd = nvdimm_provider_data(nvdimm);
> +
> +		if (cxl_nvd->dirty_shutdowns ==
> +		    CXL_INVALID_DIRTY_SHUTDOWN_COUNT)
> +			return 0;
> +	}
> +
> +	return a->mode;
> +}
> +
>  static const struct attribute_group cxl_dimm_attribute_group = {
>  	.name = "cxl",
>  	.attrs = cxl_dimm_attributes,
> +	.is_visible = cxl_dimm_visible
>  };
>  
>  static const struct attribute_group *cxl_dimm_attribute_groups[] = {
> @@ -58,6 +87,38 @@ static const struct attribute_group *cxl_dimm_attribute_groups[] = {
>  	NULL
>  };
>  
> +static void cxl_nvdimm_setup_dirty_tracking(struct cxl_nvdimm *cxl_nvd)
> +{
> +	u32 count;
> +	struct cxl_memdev *cxlmd = cxl_nvd->cxlmd;
> +	struct cxl_dev_state *cxlds = cxlmd->cxlds;
> +	struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
> +	struct device *dev = &cxl_nvd->dev;

reverse xmas tree pls

> +
> +	/*
> +	 * Dirty tracking is enabled and exposed to the user only when:
> +	 *   - dirty shutdown on the device can be set, and,
> +	 *   - the device has a Device GPF DVSEC (albeit unused), and,
> +	 *   - the Get Health Info cmd can retrieve the device's dirty count.
> +	 */
> +	cxl_nvd->dirty_shutdowns = CXL_INVALID_DIRTY_SHUTDOWN_COUNT;
> +
> +	if (cxl_arm_dirty_shutdown(mds)) {
> +		dev_warn(dev, "GPF: could not set dirty shutdown state\n");
> +		return;
> +	}
> +
> +	if (cxl_gpf_get_dvsec(cxlds->dev, false) <= 0)
> +		return;
> +
> +	if (cxl_get_dirty_count(mds, &count)) {
> +		dev_warn(dev, "GPF: could not retrieve dirty count\n");
> +		return;
> +	}
> +
> +	cxl_nvd->dirty_shutdowns = count;
> +}
> +
>  static int cxl_nvdimm_probe(struct device *dev)
>  {
>  	struct cxl_nvdimm *cxl_nvd = to_cxl_nvdimm(dev);
> @@ -78,20 +139,20 @@ static int cxl_nvdimm_probe(struct device *dev)
>  	set_bit(ND_CMD_GET_CONFIG_SIZE, &cmd_mask);
>  	set_bit(ND_CMD_GET_CONFIG_DATA, &cmd_mask);
>  	set_bit(ND_CMD_SET_CONFIG_DATA, &cmd_mask);
> -	nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
> -				 cxl_dimm_attribute_groups, flags,
> -				 cmd_mask, 0, NULL, cxl_nvd->dev_id,
> -				 cxl_security_ops, NULL);
> -	if (!nvdimm)
> -		return -ENOMEM;
>  
>  	/*
>  	 * Set dirty shutdown now, with the expectation that the device
>  	 * clear it upon a successful GPF flow. The exception to this
>  	 * is upon Viral detection, per CXL 3.2 section 12.4.2.
>  	 */
> -	if (cxl_arm_dirty_shutdown(mds))
> -		dev_warn(dev, "GPF: could not dirty shutdown state\n");
> +	cxl_nvdimm_setup_dirty_tracking(cxl_nvd);
> +
> +	nvdimm = __nvdimm_create(cxl_nvb->nvdimm_bus, cxl_nvd,
> +				 cxl_dimm_attribute_groups, flags,
> +				 cmd_mask, 0, NULL, cxl_nvd->dev_id,
> +				 cxl_security_ops, NULL);
> +	if (!nvdimm)
> +		return -ENOMEM;
>  
>  	dev_set_drvdata(dev, nvdimm);
>  	return devm_add_action_or_reset(dev, unregister_nvdimm, nvdimm);


Thread overview: 21+ messages
2025-02-19  6:28 [PATCH v3 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-19  6:28 ` [PATCH 1/4] cxl/pci: Introduce cxl_gpf_get_dvsec() Davidlohr Bueso
2025-02-19 16:27   ` Dave Jiang
2025-02-19 22:54     ` Davidlohr Bueso
2025-02-20  0:55   ` Li Ming
2025-02-19  6:28 ` [PATCH 2/4] cxl/pmem: Rename cxl_dirty_shutdown_state() Davidlohr Bueso
2025-02-19 16:34   ` Dave Jiang
2025-02-20  0:56   ` Li Ming
2025-02-19  6:28 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-19 16:44   ` Dave Jiang [this message]
2025-02-19 21:15   ` Ira Weiny
2025-02-19  6:28 ` [PATCH 4/4] tools/testing/cxl: Set Shutdown State support Davidlohr Bueso
2025-02-19 16:48   ` Dave Jiang
2025-02-20  1:01   ` Li Ming
  -- strict thread matches above, loose matches on Subject: below --
2025-02-20 22:02 [PATCH v5 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-20 22:02 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-20  1:36 [PATCH v4 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-20  1:36 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-20 16:11   ` Ira Weiny
2025-02-20 17:29   ` Jonathan Cameron
2025-02-20 19:28     ` Davidlohr Bueso
2025-02-19  2:14 [PATCH v2 0/4] cxl: Dirty shutdown followups Davidlohr Bueso
2025-02-19  2:14 ` [PATCH 3/4] cxl/pmem: Export dirty shutdown count via sysfs Davidlohr Bueso
2025-02-19  2:34   ` Davidlohr Bueso
