From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Dave Jiang <dave.jiang@intel.com>
Cc: <linux-cxl@vger.kernel.org>, <dan.j.williams@intel.com>,
<ira.weiny@intel.com>, <vishal.l.verma@intel.com>,
<alison.schofield@intel.com>
Subject: Re: [PATCH v6 09/11] cxl: Store QTG IDs and related info to the CXL memory device context
Date: Thu, 1 Jun 2023 15:32:14 +0100 [thread overview]
Message-ID: <20230601153214.00003782@Huawei.com> (raw)
In-Reply-To: <168451604884.3470703.10173844932484539394.stgit@djiang5-mobl3>
On Fri, 19 May 2023 10:07:28 -0700
Dave Jiang <dave.jiang@intel.com> wrote:
> Once the QTG ID _DSM is executed successfully, the QTG ID is retrieved from
> the return package. Create a list of entries in the cxl_memdev context and
> store the QTG ID and the associated DPA range. This information can be
> exposed to user space via sysfs in order to help region setup for
> hot-plugged CXL memory devices.
>
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
>
LGTM
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> v6:
> - Store entire QTG ID list
> v4:
> - Remove unused qos_list from cxl_md
> v3:
> - Move back to QTG ID per partition
> ---
> drivers/cxl/core/mbox.c | 1 +
> drivers/cxl/cxlmem.h | 23 +++++++++++++++++++++++
> drivers/cxl/port.c | 38 ++++++++++++++++++++++++++++++++++++++
> 3 files changed, 62 insertions(+)
>
> diff --git a/drivers/cxl/core/mbox.c b/drivers/cxl/core/mbox.c
> index 2c8dc7e2b84d..35941a306ea8 100644
> --- a/drivers/cxl/core/mbox.c
> +++ b/drivers/cxl/core/mbox.c
> @@ -1260,6 +1260,7 @@ struct cxl_dev_state *cxl_dev_state_create(struct device *dev)
> mutex_init(&cxlds->mbox_mutex);
> mutex_init(&cxlds->event.log_lock);
> cxlds->dev = dev;
> + INIT_LIST_HEAD(&cxlds->perf_list);
>
> return cxlds;
> }
> diff --git a/drivers/cxl/cxlmem.h b/drivers/cxl/cxlmem.h
> index a2845a7a69d8..708d60c5ffe1 100644
> --- a/drivers/cxl/cxlmem.h
> +++ b/drivers/cxl/cxlmem.h
> @@ -5,6 +5,7 @@
> #include <uapi/linux/cxl_mem.h>
> #include <linux/cdev.h>
> #include <linux/uuid.h>
> +#include <linux/node.h>
> #include "cxl.h"
>
> /* CXL 2.0 8.2.8.5.1.1 Memory Device Status Register */
> @@ -254,6 +255,21 @@ struct cxl_poison_state {
> struct mutex lock; /* Protect reads of poison list */
> };
>
> +/**
> + * struct perf_prop_entry - performance property entry
> + * @list: list entry
> + * @dpa_range: range for DPA address
> + * @coord: QoS performance data (e.g. latency, bandwidth)
> + * @qos_class: QoS Class cookies
> + */
> +struct perf_prop_entry {
> + struct list_head list;
> + struct range dpa_range;
> + struct access_coordinate coord;
> + /* Do not add members below this, contains flex array */
> + struct qos_class qos_class;
> +};
> +
> /**
> * struct cxl_dev_state - The driver device state
> *
> @@ -292,6 +308,9 @@ struct cxl_poison_state {
> * @event: event log driver state
> * @poison: poison driver state info
> * @mbox_send: @dev specific transport for transmitting mailbox commands
> + * @ram_qos_class: QoS class cookies for volatile region
> + * @pmem_qos_class: QoS class cookies for persistent region
> + * @perf_list: performance data entries list
> *
> * See section 8.2.9.5.2 Capacity Configuration and Label Storage for
> * details on capacity parameters.
> @@ -325,6 +344,10 @@ struct cxl_dev_state {
> u64 next_volatile_bytes;
> u64 next_persistent_bytes;
>
> + struct qos_class *ram_qos_class;
> + struct qos_class *pmem_qos_class;
> + struct list_head perf_list;
> +
> resource_size_t component_reg_phys;
> u64 serial;
>
> diff --git a/drivers/cxl/port.c b/drivers/cxl/port.c
> index 03af92217192..e5d7ad5b1e16 100644
> --- a/drivers/cxl/port.c
> +++ b/drivers/cxl/port.c
> @@ -104,6 +104,42 @@ static int cxl_port_perf_data_calculate(struct cxl_port *port,
> return 0;
> }
>
> +static void cxl_memdev_set_qtg(struct cxl_dev_state *cxlds, struct list_head *dsmas_list)
> +{
> + struct range pmem_range = {
> + .start = cxlds->pmem_res.start,
> + .end = cxlds->pmem_res.end,
> + };
> + struct range ram_range = {
> + .start = cxlds->ram_res.start,
> + .end = cxlds->ram_res.end,
> + };
> + struct perf_prop_entry *perf;
> + struct dsmas_entry *dent;
> +
> + list_for_each_entry(dent, dsmas_list, list) {
> + perf = devm_kzalloc(cxlds->dev,
> + sizeof(*perf) + dent->qos_class->nr * sizeof(int),
> + GFP_KERNEL);
> + if (!perf)
> + return;
> +
> + perf->dpa_range = dent->dpa_range;
> + perf->coord = dent->coord;
> + perf->qos_class = *dent->qos_class;
> + list_add_tail(&perf->list, &cxlds->perf_list);
> +
> + if (resource_size(&cxlds->ram_res) &&
> + range_contains(&ram_range, &dent->dpa_range) &&
> + !cxlds->ram_qos_class)
> + cxlds->ram_qos_class = &perf->qos_class;
> + else if (resource_size(&cxlds->pmem_res) &&
> + range_contains(&pmem_range, &dent->dpa_range) &&
> + !cxlds->pmem_qos_class)
> + cxlds->pmem_qos_class = &perf->qos_class;
> + }
> +}
> +
> static int cxl_switch_port_probe(struct cxl_port *port)
> {
> struct cxl_hdm *cxlhdm;
> @@ -197,6 +233,8 @@ static int cxl_endpoint_port_probe(struct cxl_port *port)
> if (rc)
> dev_dbg(&port->dev,
> "Failed to do perf coord calculations.\n");
> + else
> + cxl_memdev_set_qtg(cxlds, &dsmas_list);
> }
>
> cxl_cdat_dsmas_list_destroy(&dsmas_list);
>
>