From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: Dave Jiang <dave.jiang@intel.com>
Cc: <linux-cxl@vger.kernel.org>,
Dan Williams <dan.j.williams@intel.com>,
"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
<ira.weiny@intel.com>, <vishal.l.verma@intel.com>,
<alison.schofield@intel.com>, <dave@stgolabs.net>
Subject: Re: [PATCH v10 06/22] base/node / acpi: Change 'node_hmem_attrs' to 'access_coordinates'
Date: Wed, 11 Oct 2023 13:57:32 +0100
Message-ID: <20231011135732.00005c35@Huawei.com>
In-Reply-To: <169698633344.1991735.1745201451434063116.stgit@djiang5-mobl3>
On Tue, 10 Oct 2023 18:05:33 -0700
Dave Jiang <dave.jiang@intel.com> wrote:
> Dan Williams suggested changing the struct 'node_hmem_attrs' to
> 'access_coordinates' [1]. The struct is a container of r/w-latency and
> r/w-bandwidth numbers. Moving forward, this container will also be used by
> CXL to store the performance characteristics of each link hop in
> the PCIe/CXL topology. So, whereas node_hmem_attrs describes just the access
> parameters of a memory node, access_coordinates applies more broadly
> to hardware topology characteristics. The observation is that this resembles
> an exercise in having the application identify "where" it falls on a
> spectrum of bandwidth and latency needs. For the tuple of read/write-latency
> and read/write-bandwidth, "coordinates" is not a perfect fit. Sometimes it
> is just conveying values in isolation rather than a "location" relative to
> other performance points, but in the end this data is used to identify the
> performance operation point of a given memory node. [2]
>
> Link: http://lore.kernel.org/r/64471313421f7_1b66294d5@dwillia2-xfh.jf.intel.com.notmuch/
> Link: https://lore.kernel.org/linux-cxl/645e6215ee0de_1e6f2945e@dwillia2-xfh.jf.intel.com.notmuch/
> Suggested-by: Dan Williams <dan.j.williams@intel.com>
> Reviewed-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Whilst the coordinates analogy is a bit weak in my mind, I guess I'll get used
to it.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> drivers/acpi/numa/hmat.c | 20 ++++++++++----------
> drivers/base/node.c | 12 ++++++------
> include/linux/node.h | 8 ++++----
> 3 files changed, 20 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
> index bba268ecd802..f9ff992038fa 100644
> --- a/drivers/acpi/numa/hmat.c
> +++ b/drivers/acpi/numa/hmat.c
> @@ -62,7 +62,7 @@ struct memory_target {
> unsigned int memory_pxm;
> unsigned int processor_pxm;
> struct resource memregions;
> - struct node_hmem_attrs hmem_attrs[2];
> + struct access_coordinate coord[2];
> struct list_head caches;
> struct node_cache_attrs cache_attrs;
> bool registered;
> @@ -227,24 +227,24 @@ static void hmat_update_target_access(struct memory_target *target,
> {
> switch (type) {
> case ACPI_HMAT_ACCESS_LATENCY:
> - target->hmem_attrs[access].read_latency = value;
> - target->hmem_attrs[access].write_latency = value;
> + target->coord[access].read_latency = value;
> + target->coord[access].write_latency = value;
> break;
> case ACPI_HMAT_READ_LATENCY:
> - target->hmem_attrs[access].read_latency = value;
> + target->coord[access].read_latency = value;
> break;
> case ACPI_HMAT_WRITE_LATENCY:
> - target->hmem_attrs[access].write_latency = value;
> + target->coord[access].write_latency = value;
> break;
> case ACPI_HMAT_ACCESS_BANDWIDTH:
> - target->hmem_attrs[access].read_bandwidth = value;
> - target->hmem_attrs[access].write_bandwidth = value;
> + target->coord[access].read_bandwidth = value;
> + target->coord[access].write_bandwidth = value;
> break;
> case ACPI_HMAT_READ_BANDWIDTH:
> - target->hmem_attrs[access].read_bandwidth = value;
> + target->coord[access].read_bandwidth = value;
> break;
> case ACPI_HMAT_WRITE_BANDWIDTH:
> - target->hmem_attrs[access].write_bandwidth = value;
> + target->coord[access].write_bandwidth = value;
> break;
> default:
> break;
> @@ -701,7 +701,7 @@ static void hmat_register_target_cache(struct memory_target *target)
> static void hmat_register_target_perf(struct memory_target *target, int access)
> {
> unsigned mem_nid = pxm_to_node(target->memory_pxm);
> - node_set_perf_attrs(mem_nid, &target->hmem_attrs[access], access);
> + node_set_perf_attrs(mem_nid, &target->coord[access], access);
> }
>
> static void hmat_register_target_devices(struct memory_target *target)
> diff --git a/drivers/base/node.c b/drivers/base/node.c
> index 493d533f8375..cb2b6cc7f6e6 100644
> --- a/drivers/base/node.c
> +++ b/drivers/base/node.c
> @@ -74,14 +74,14 @@ static BIN_ATTR_RO(cpulist, CPULIST_FILE_MAX_BYTES);
> * @dev: Device for this memory access class
> * @list_node: List element in the node's access list
> * @access: The access class rank
> - * @hmem_attrs: Heterogeneous memory performance attributes
> + * @coord: Heterogeneous memory performance coordinates
> */
> struct node_access_nodes {
> struct device dev;
> struct list_head list_node;
> unsigned int access;
> #ifdef CONFIG_HMEM_REPORTING
> - struct node_hmem_attrs hmem_attrs;
> + struct access_coordinate coord;
> #endif
> };
> #define to_access_nodes(dev) container_of(dev, struct node_access_nodes, dev)
> @@ -167,7 +167,7 @@ static ssize_t property##_show(struct device *dev, \
> char *buf) \
> { \
> return sysfs_emit(buf, "%u\n", \
> - to_access_nodes(dev)->hmem_attrs.property); \
> + to_access_nodes(dev)->coord.property); \
> } \
> static DEVICE_ATTR_RO(property)
>
> @@ -187,10 +187,10 @@ static struct attribute *access_attrs[] = {
> /**
> * node_set_perf_attrs - Set the performance values for given access class
> * @nid: Node identifier to be set
> - * @hmem_attrs: Heterogeneous memory performance attributes
> + * @coord: Heterogeneous memory performance coordinates
> * @access: The access class the for the given attributes
> */
> -void node_set_perf_attrs(unsigned int nid, struct node_hmem_attrs *hmem_attrs,
> +void node_set_perf_attrs(unsigned int nid, struct access_coordinate *coord,
> unsigned int access)
> {
> struct node_access_nodes *c;
> @@ -205,7 +205,7 @@ void node_set_perf_attrs(unsigned int nid, struct node_hmem_attrs *hmem_attrs,
> if (!c)
> return;
>
> - c->hmem_attrs = *hmem_attrs;
> + c->coord = *coord;
> for (i = 0; access_attrs[i] != NULL; i++) {
> if (sysfs_add_file_to_group(&c->dev.kobj, access_attrs[i],
> "initiators")) {
> diff --git a/include/linux/node.h b/include/linux/node.h
> index 427a5975cf40..25b66d705ee2 100644
> --- a/include/linux/node.h
> +++ b/include/linux/node.h
> @@ -20,14 +20,14 @@
> #include <linux/list.h>
>
> /**
> - * struct node_hmem_attrs - heterogeneous memory performance attributes
> + * struct access_coordinate - generic performance coordinates container
> *
> * @read_bandwidth: Read bandwidth in MB/s
> * @write_bandwidth: Write bandwidth in MB/s
> * @read_latency: Read latency in nanoseconds
> * @write_latency: Write latency in nanoseconds
> */
> -struct node_hmem_attrs {
> +struct access_coordinate {
> unsigned int read_bandwidth;
> unsigned int write_bandwidth;
> unsigned int read_latency;
> @@ -65,7 +65,7 @@ struct node_cache_attrs {
>
> #ifdef CONFIG_HMEM_REPORTING
> void node_add_cache(unsigned int nid, struct node_cache_attrs *cache_attrs);
> -void node_set_perf_attrs(unsigned int nid, struct node_hmem_attrs *hmem_attrs,
> +void node_set_perf_attrs(unsigned int nid, struct access_coordinate *coord,
> unsigned access);
> #else
> static inline void node_add_cache(unsigned int nid,
> @@ -74,7 +74,7 @@ static inline void node_add_cache(unsigned int nid,
> }
>
> static inline void node_set_perf_attrs(unsigned int nid,
> - struct node_hmem_attrs *hmem_attrs,
> + struct access_coordinate *coord,
> unsigned access)
> {
> }
>
>
>