From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Dave Jiang <dave.jiang@intel.com>
Cc: <linux-cxl@vger.kernel.org>, <dan.j.williams@intel.com>,
<ira.weiny@intel.com>, <vishal.l.verma@intel.com>,
<alison.schofield@intel.com>, <dave@stgolabs.net>
Subject: Re: [PATCH v3 2/3] cxl: Fix sysfs export of qos_class for memdev
Date: Mon, 5 Feb 2024 11:40:49 +0000 [thread overview]
Message-ID: <20240205114049.000073e6@Huawei.com> (raw)
In-Reply-To: <20240201214731.1297389-2-dave.jiang@intel.com>
On Thu, 1 Feb 2024 14:47:30 -0700
Dave Jiang <dave.jiang@intel.com> wrote:
> Current implementation exports only to
> /sys/bus/cxl/devices/.../memN/qos_class. With both ram and pmem exposed,
> the second registered sysfs attribute is rejected as duplicate. It's not
> possible to create qos_class under the dev_groups via the driver due to
> the ram and pmem sysfs sub-directories already created by the device sysfs
> groups. Move the ram and pmem qos_class to the device sysfs groups and add
> a call to sysfs_update() after the perf data are validated so the
> qos_class can be visible. The end results should be
> /sys/bus/cxl/devices/.../memN/ram/qos_class and
> /sys/bus/cxl/devices/.../memN/pmem/qos_class.
>
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Some comments inline. The only thing I really care about though is a comment
in the code explaining why sysfs_update_groups() is not appropriate.
> ---
> v3:
> - Updated based on previous patch changes
> ---
> drivers/cxl/core/cdat.c | 1 +
> drivers/cxl/core/memdev.c | 71 +++++++++++++++++++++++++++++++++++++++
> drivers/cxl/cxl.h | 2 ++
> drivers/cxl/mem.c | 36 --------------------
> 4 files changed, 74 insertions(+), 36 deletions(-)
>
> diff --git a/drivers/cxl/core/cdat.c b/drivers/cxl/core/cdat.c
> index 55b82dfd794b..5c93bf9d5253 100644
> --- a/drivers/cxl/core/cdat.c
> +++ b/drivers/cxl/core/cdat.c
> @@ -382,6 +382,7 @@ void cxl_endpoint_parse_cdat(struct cxl_port *port)
>
> cxl_memdev_set_qos_class(cxlds, dsmas_xa);
> cxl_qos_class_verify(cxlmd);
> + cxl_memdev_update_attribute_groups(cxlmd);
> }
> EXPORT_SYMBOL_NS_GPL(cxl_endpoint_parse_cdat, CXL);
>
> diff --git a/drivers/cxl/core/memdev.c b/drivers/cxl/core/memdev.c
> index dae8802ecdb0..ed85096a33fb 100644
> --- a/drivers/cxl/core/memdev.c
> +++ b/drivers/cxl/core/memdev.c
> @@ -447,13 +447,41 @@ static struct attribute *cxl_memdev_attributes[] = {
> NULL,
> };
>
> +static ssize_t pmem_qos_class_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
> + struct cxl_dev_state *cxlds = cxlmd->cxlds;
> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
> +
> + return sysfs_emit(buf, "%d\n", mds->pmem_perf.qos_class);
> +}
> +
> +static struct device_attribute dev_attr_pmem_qos_class =
> + __ATTR(qos_class, 0444, pmem_qos_class_show, NULL);
> +
> static struct attribute *cxl_memdev_pmem_attributes[] = {
> &dev_attr_pmem_size.attr,
> + &dev_attr_pmem_qos_class.attr,
> NULL,
> };
>
> +static ssize_t ram_qos_class_show(struct device *dev,
> + struct device_attribute *attr, char *buf)
> +{
> + struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
> + struct cxl_dev_state *cxlds = cxlmd->cxlds;
> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
> +
> + return sysfs_emit(buf, "%d\n", mds->ram_perf.qos_class);
> +}
> +
> +static struct device_attribute dev_attr_ram_qos_class =
> + __ATTR(qos_class, 0444, ram_qos_class_show, NULL);
> +
> static struct attribute *cxl_memdev_ram_attributes[] = {
> &dev_attr_ram_size.attr,
> + &dev_attr_ram_qos_class.attr,
> NULL,
> };
>
> @@ -477,14 +505,42 @@ static struct attribute_group cxl_memdev_attribute_group = {
> .is_visible = cxl_memdev_visible,
> };
>
> +static umode_t cxl_ram_visible(struct kobject *kobj, struct attribute *a, int n)
> +{
> + struct device *dev = kobj_to_dev(kobj);
> + struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
> +
> + if (a == &dev_attr_ram_qos_class.attr)
> + if (mds->ram_perf.qos_class == CXL_QOS_CLASS_INVALID)
> + return 0;
> +
> + return a->mode;
> +}
> +
> static struct attribute_group cxl_memdev_ram_attribute_group = {
> .name = "ram",
> .attrs = cxl_memdev_ram_attributes,
> + .is_visible = cxl_ram_visible,
> };
>
> +static umode_t cxl_pmem_visible(struct kobject *kobj, struct attribute *a, int n)
> +{
> + struct device *dev = kobj_to_dev(kobj);
> + struct cxl_memdev *cxlmd = to_cxl_memdev(dev);
> + struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlmd->cxlds);
> +
> + if (a == &dev_attr_pmem_qos_class.attr)
> + if (mds->pmem_perf.qos_class == CXL_QOS_CLASS_INVALID)
> + return 0;
> +
> + return a->mode;
> +}
> +
> static struct attribute_group cxl_memdev_pmem_attribute_group = {
> .name = "pmem",
> .attrs = cxl_memdev_pmem_attributes,
> + .is_visible = cxl_pmem_visible,
> };
>
> static umode_t cxl_memdev_security_visible(struct kobject *kobj,
> @@ -519,6 +575,21 @@ static const struct attribute_group *cxl_memdev_attribute_groups[] = {
> NULL,
> };
>
> +void cxl_memdev_update_attribute_groups(struct cxl_memdev *cxlmd)
> +{
> + const struct attribute_group *grp = cxl_memdev_attribute_groups[0];
> +
> + for (int i = 0; grp; i++) {
> + int rc = sysfs_update_group(&cxlmd->dev.kobj, grp);
> +
> + if (rc)
> + dev_dbg(&cxlmd->dev,
> + "Unable to update memdev attribute group.\n");
> + grp = cxl_memdev_attribute_groups[i + 1];
We don't need i explicitly, so it might be cleaner as something like:

	for (const struct attribute_group **grp = &cxl_memdev_attribute_groups[0];
	     *grp; grp++)
		if (sysfs_update_group(&cxlmd->dev.kobj, *grp))
			dev_dbg(...);

Mind you, I'm not that convinced, so it's fine if you want to stick with the
current code :)
I'd also like a comment on why sysfs_update_groups() is not appropriate here.
> + }
> +}
> +EXPORT_SYMBOL_NS_GPL(cxl_memdev_update_attribute_groups, CXL);
> +
> static const struct device_type cxl_memdev_type = {
> .name = "cxl_memdev",
> .release = cxl_memdev_release,
>