From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: <linux-cxl@vger.kernel.org>, Alejandro Lucero <alucerop@amd.com>,
"Ira Weiny" <ira.weiny@intel.com>,
Dave Jiang <dave.jiang@intel.com>
Subject: Re: [PATCH v3 2/6] cxl: Introduce to_{ram,pmem}_{res,perf}() helpers
Date: Tue, 4 Feb 2025 11:30:01 +0000
Message-ID: <20250204113001.0000228c@huawei.com>
In-Reply-To: <173864305238.668823.16553986866633608541.stgit@dwillia2-xfh.jf.intel.com>
On Mon, 03 Feb 2025 20:24:12 -0800
Dan Williams <dan.j.williams@intel.com> wrote:
> In preparation for consolidating all DPA partition information into an
> array of DPA metadata, introduce helpers that hide the layout of the
> current data. I.e. make the eventual replacement of ->ram_res,
> ->pmem_res, ->ram_perf, and ->pmem_perf with a new DPA metadata array a
> no-op for code paths that consume that information, and reduce the noise
> of follow-on patches.
>
> The end goal is to consolidate all DPA information in 'struct
> cxl_dev_state', but for now the helpers just make it appear that all DPA
> metadata is relative to @cxlds.
>
> As the conversion to generic partition metadata walking is completed,
> these helpers will naturally be eliminated, or reduced in scope.
>
> Cc: Alejandro Lucero <alucerop@amd.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Reviewed-by: Dave Jiang <dave.jiang@intel.com>
> Signed-off-by: Dan Williams <dan.j.williams@intel.com>
One observation inline. I don't care much either way.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
> ---
> drivers/cxl/core/cdat.c | 71 +++++++++++++++++++++++++++---------------
> drivers/cxl/core/hdm.c | 26 ++++++++-------
> drivers/cxl/core/mbox.c | 18 ++++++-----
> drivers/cxl/core/memdev.c | 42 +++++++++++++------------
> drivers/cxl/core/region.c | 10 ++++--
> drivers/cxl/cxlmem.h | 58 ++++++++++++++++++++++++++++++----
> drivers/cxl/mem.c | 2 +
> tools/testing/cxl/test/cxl.c | 25 ++++++++-------
> 8 files changed, 162 insertions(+), 90 deletions(-)
>
> diff --git a/drivers/cxl/core/cdat.c b/drivers/cxl/core/cdat.c
> index 8153f8d83a16..797baad483cb 100644
> --- a/drivers/cxl/core/cdat.c
> +++ b/drivers/cxl/core/cdat.c
> @@ -258,27 +258,36 @@ static void update_perf_entry(struct device *dev, struct dsmas_entry *dent,
> static void cxl_memdev_set_qos_class(struct cxl_dev_state *cxlds,
> struct xarray *dsmas_xa)
> {
> - struct cxl_memdev_state *mds = to_cxl_memdev_state(cxlds);
> struct device *dev = cxlds->dev;
> - struct range pmem_range = {
> - .start = cxlds->pmem_res.start,
> - .end = cxlds->pmem_res.end,
> - };
> - struct range ram_range = {
> - .start = cxlds->ram_res.start,
> - .end = cxlds->ram_res.end,
> - };
> struct dsmas_entry *dent;
> unsigned long index;
> + const struct resource *partition[] = {
> + to_ram_res(cxlds),
> + to_pmem_res(cxlds),
> + };
> + struct cxl_dpa_perf *perf[] = {
> + to_ram_perf(cxlds),
> + to_pmem_perf(cxlds),
> + };
>
> xa_for_each(dsmas_xa, index, dent) {
> - if (resource_size(&cxlds->ram_res) &&
> - range_contains(&ram_range, &dent->dpa_range))
> - update_perf_entry(dev, dent, &mds->ram_perf);
> - else if (resource_size(&cxlds->pmem_res) &&
> - range_contains(&pmem_range, &dent->dpa_range))
> - update_perf_entry(dev, dent, &mds->pmem_perf);
> - else
> + bool found = false;
> +
> + for (int i = 0; i < ARRAY_SIZE(partition); i++) {
> + const struct resource *res = partition[i];
> + struct range range = {
> + .start = res->start,
> + .end = res->end,
> + };
> +
> + if (range_contains(&range, &dent->dpa_range)) {
> + update_perf_entry(dev, dent, perf[i]);
> + found = true;
> + break;
> + }
> + }
> +
> + if (!found)
Could use a check on whether we got to the end of the array, if you expand the
scope of i outside the loop:
if (i == ARRAY_SIZE(partition))
dev_dbg(dev, "no partition for dsmas dpa: %pra\n",
&dent->dpa_range);
I don't care much either way.
> dev_dbg(dev, "no partition for dsmas dpa: %pra\n",
> &dent->dpa_range);
> }
Thread overview: 19+ messages
2025-02-04 4:24 [PATCH v3 0/6] cxl: DPA partition metadata is a mess Dan Williams
2025-02-04 4:24 ` [PATCH v3 1/6] cxl: Remove the CXL_DECODER_MIXED mistake Dan Williams
2025-02-04 17:42 ` Fan Ni
2025-02-04 4:24 ` [PATCH v3 2/6] cxl: Introduce to_{ram,pmem}_{res,perf}() helpers Dan Williams
2025-02-04 11:30 ` Jonathan Cameron [this message]
2025-02-04 17:50 ` Fan Ni
2025-02-04 4:24 ` [PATCH v3 3/6] cxl: Introduce 'struct cxl_dpa_partition' and 'struct cxl_range_info' Dan Williams
2025-02-04 11:50 ` Jonathan Cameron
2025-02-04 18:50 ` Dan Williams
2025-02-04 4:24 ` [PATCH v3 4/6] cxl: Make cxl_dpa_alloc() DPA partition number agnostic Dan Williams
2025-02-04 12:13 ` Jonathan Cameron
2025-02-04 4:24 ` [PATCH v3 5/6] cxl: Kill enum cxl_decoder_mode Dan Williams
2025-02-04 12:23 ` Jonathan Cameron
2025-02-04 18:57 ` Dan Williams
2025-02-04 4:24 ` [PATCH v3 6/6] cxl: Cleanup partition size and perf helpers Dan Williams
2025-02-04 12:32 ` Jonathan Cameron
2025-02-04 20:52 ` Ira Weiny
2025-02-04 10:42 ` [PATCH v3 0/6] cxl: DPA partition metadata is a mess Alejandro Lucero Palau
2025-02-04 21:33 ` Dave Jiang