From: Dave Jiang <dave.jiang@intel.com>
To: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: linux-cxl@vger.kernel.org, linux-acpi@vger.kernel.org,
rafael@kernel.org, bp@alien8.de, dan.j.williams@intel.com,
tony.luck@intel.com, dave@stgolabs.net,
alison.schofield@intel.com, ira.weiny@intel.com
Subject: Re: [RFC PATCH v2 2/5] acpi/hmat / cxl: Add extended linear cache support for CXL
Date: Tue, 3 Dec 2024 16:08:48 -0700 [thread overview]
Message-ID: <8d591c40-7954-441a-886d-8621fac16094@intel.com> (raw)
In-Reply-To: <20241126162301.0000090c@huawei.com>
On 11/26/24 9:23 AM, Jonathan Cameron wrote:
> On Tue, 12 Nov 2024 15:12:34 -0700
> Dave Jiang <dave.jiang@intel.com> wrote:
>
>> The current cxl region size only indicates the size of the CXL memory
>> region without accounting for the extended linear cache size. Retrieve the
>> cache size from HMAT and append that to the cxl region size for the cxl
>> region range that matches the SRAT range that has extended linear cache
>> enabled.
>>
>> The SRAT defines the whole memory range that includes the extended linear
>> cache and the CXL memory region. The new HMAT ECN/ECR to the Memory Side
>> Cache Information Structure defines the size of the extended linear cache
>> and matches the SRAT Memory Affinity Structure by the memory proximity
>> domain. Add a helper to match the cxl range to the SRAT memory
>> range in order to retrieve the cache size.
>>
>> There are several places that check the cxl region range against the
>> decoder range. Use the new helper to compare the two ranges while accounting
>> for the new cache size.
>>
>> Signed-off-by: Dave Jiang <dave.jiang@intel.com>
> Hi Dave,
>
> A few minor comments inline.
>
> Thanks,
>
> Jonathan
>
>> ---
>> drivers/acpi/numa/hmat.c | 44 ++++++++++++++++++++++++
>> drivers/cxl/core/Makefile | 1 +
>> drivers/cxl/core/acpi.c | 11 ++++++
>> drivers/cxl/core/core.h | 3 ++
>> drivers/cxl/core/region.c | 70 ++++++++++++++++++++++++++++++++++++---
>> drivers/cxl/cxl.h | 2 ++
>> include/linux/acpi.h | 19 +++++++++++
>> tools/testing/cxl/Kbuild | 1 +
>> 8 files changed, 147 insertions(+), 4 deletions(-)
>> create mode 100644 drivers/cxl/core/acpi.c
>>
>> diff --git a/drivers/acpi/numa/hmat.c b/drivers/acpi/numa/hmat.c
>> index 39524f36be5b..92b818b72ecc 100644
>> --- a/drivers/acpi/numa/hmat.c
>> +++ b/drivers/acpi/numa/hmat.c
>> @@ -108,6 +108,50 @@ static struct memory_target *find_mem_target(unsigned int mem_pxm)
>> return NULL;
>> }
>>
>> +/**
>> + * hmat_get_extended_linear_cache_size - Retrieve the extended linear cache size
>> + * @backing_res: resource from the backing media
>> + * @nid: node id for the memory region
>> + * @cache_size: (Output) size of extended linear cache.
>> + *
>> + * Return: 0 on success. Errno on failure.
>> + *
>> + */
>> +int hmat_get_extended_linear_cache_size(struct resource *backing_res, int nid,
>> + resource_size_t *cache_size)
>> +{
>> + unsigned int pxm = node_to_pxm(nid);
>> + struct memory_target *target;
>> + struct target_cache *tcache;
>> + bool cache_found = false;
>> + struct resource *res;
>> +
>> + target = find_mem_target(pxm);
>> + if (!target)
>> + return -ENOENT;
>> +
>> + list_for_each_entry(tcache, &target->caches, node) {
>> + if (tcache->cache_attrs.mode == NODE_CACHE_MODE_EXTENDED_LINEAR) {
>
> I'd flip this for slightly better readability.
ok
> if (tcache->cache_attrs.mode != NODE_CACHE_MODE_EXTENDED_LINEAR)
> continue;
>
> res = ...
>
>
>> + res = &target->memregions;
>> + if (!resource_contains(res, backing_res))
>> + continue;
>> +
>> + cache_found = true;
>> + break;
>> + }
>> + }
>> +
>> + if (!cache_found) {
>> + *cache_size = 0;
>> + return 0;
>> + }
>> +
>> + *cache_size = tcache->cache_attrs.size;
>
> Why not set this and return in the loop?
> That way no need to have a local variable.
ok, sketch of both changes below the quoted hunk
>
>> +
>> + return 0;
>> +}
>> +EXPORT_SYMBOL_NS_GPL(hmat_get_extended_linear_cache_size, CXL);
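Combining this with the earlier suggestion to flip the mode check, I think the
loop ends up roughly like this (untested sketch, just to confirm I'm reading
the feedback right):

	list_for_each_entry(tcache, &target->caches, node) {
		if (tcache->cache_attrs.mode != NODE_CACHE_MODE_EXTENDED_LINEAR)
			continue;

		res = &target->memregions;
		if (!resource_contains(res, backing_res))
			continue;

		/* found the extended linear cache for this range */
		*cache_size = tcache->cache_attrs.size;
		return 0;
	}

	/* no matching extended linear cache, report 0 */
	*cache_size = 0;
	return 0;

That also drops the cache_found local entirely.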
>
>> diff --git a/drivers/cxl/core/acpi.c b/drivers/cxl/core/acpi.c
>> new file mode 100644
>> index 000000000000..f13b4dae6ac5
>> --- /dev/null
>> +++ b/drivers/cxl/core/acpi.c
>> @@ -0,0 +1,11 @@
>> +// SPDX-License-Identifier: GPL-2.0-only
>> +/* Copyright(c) 2024 Intel Corporation. All rights reserved. */
>> +#include <linux/acpi.h>
>> +#include "cxl.h"
>> +#include "core.h"
>
> Why do you need the cxl headers? Maybe a forwards def of
> struct resource, but I'm not seeing anything else being needed.
The prototype is declared in core.h, and it seems core.h needs cxl.h. I wonder if core.h should just include cxl.h.
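Concretely, if core.h gained its own #include "cxl.h", acpi.c could probably
shrink to just (untested sketch):

	#include <linux/acpi.h>
	#include "core.h"	/* would now pull in cxl.h */

with the rest of the file unchanged.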
>
>
>> +
>> +int cxl_acpi_get_extended_linear_cache_size(struct resource *backing_res,
>> + int nid, resource_size_t *size)
>> +{
>> + return hmat_get_extended_linear_cache_size(backing_res, nid, size);
>> +}
>
>
>> @@ -3215,6 +3229,42 @@ static int match_region_by_range(struct device *dev, void *data)
>> return rc;
>> }
>>
>> +static int cxl_extended_linear_cache_resize(struct cxl_region *cxlr,
>> + struct resource *res)
>> +{
>> + struct cxl_region_params *p = &cxlr->params;
>> + int nid = phys_to_target_node(res->start);
>> + resource_size_t size, cache_size;
>> + int rc;
>> +
>> + size = resource_size(res);
>> + if (!size)
>> + return -EINVAL;
>> +
>> + rc = cxl_acpi_get_extended_linear_cache_size(res, nid, &cache_size);
>> + if (rc)
>> + return rc;
>> +
>> + if (!cache_size)
>> + return 0;
>> +
>> + if (size != cache_size) {
>> + dev_warn(&cxlr->dev, "Extended Linear Cache is not 1:1, unsupported!");
>> + return -EOPNOTSUPP;
>> + }
>> +
>> + /*
>> + * Move the start of the range to where the cache range starts. The
>> + * implementation assumes that the cache range is in front of the
>> + * CXL range. This is not dictated by the HMAT spec but is how the
>> + * currently known implementation configured.
>
> is configured
will fix
>
>> + */
>> + res->start -= cache_size;
>> + p->cache_size = cache_size;
>> +
>> + return 0;
>> +}
>
>
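To spell out what the resize above is doing with made-up numbers (purely
illustrative, not from a real platform): if SRAT describes a 512GB proximity
domain whose first 256GB is DRAM acting as the extended linear cache and whose
last 256GB is the CXL backing capacity, the region resource initially covers
only the 256GB CXL portion. HMAT reports a 256GB cache, the 1:1 check passes,
res->start is pulled back by 256GB so the resource spans the whole 512GB SRAT
range, and p->cache_size records the 256GB for later address accounting.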