From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
To: <shiju.jose@huawei.com>
Cc: <rafael@kernel.org>, <bp@alien8.de>, <akpm@linux-foundation.org>,
<rppt@kernel.org>, <dferguson@amperecomputing.com>,
<linux-edac@vger.kernel.org>, <linux-acpi@vger.kernel.org>,
<linux-mm@kvack.org>, <linux-doc@vger.kernel.org>,
<tony.luck@intel.com>, <lenb@kernel.org>, <leo.duran@amd.com>,
<Yazen.Ghannam@amd.com>, <mchehab@kernel.org>,
<linuxarm@huawei.com>, <rientjes@google.com>,
<jiaqiyan@google.com>, <Jon.Grimm@amd.com>,
<dave.hansen@linux.intel.com>, <naoya.horiguchi@nec.com>,
<james.morse@arm.com>, <jthoughton@google.com>,
<somasundaram.a@hpe.com>, <erdemaktas@google.com>,
<pgonda@google.com>, <duenwen@google.com>, <gthelen@google.com>,
<wschwartz@amperecomputing.com>, <wbs@os.amperecomputing.com>,
<nifan.cxl@gmail.com>, <tanxiaofei@huawei.com>,
<prime.zeng@hisilicon.com>, <roberto.sassu@huawei.com>,
<kangkang.shen@futurewei.com>, <wanghuiqiang@huawei.com>
Subject: Re: [PATCH v11 1/3] mm: Add support to retrieve physical address range of memory from the node ID
Date: Tue, 19 Aug 2025 17:54:20 +0100
Message-ID: <20250819175420.00007ce6@huawei.com>
In-Reply-To: <20250812142616.2330-2-shiju.jose@huawei.com>
On Tue, 12 Aug 2025 15:26:13 +0100
<shiju.jose@huawei.com> wrote:
> From: Shiju Jose <shiju.jose@huawei.com>
>
> In numa_memblks, a lookup facility is required to retrieve the
> physical address range of memory in a NUMA node. ACPI RAS2 memory
> features are among the use cases.
>
> Suggested-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> Signed-off-by: Shiju Jose <shiju.jose@huawei.com>
Looks fine to me. Mike, what do you think?
One passing comment inline.
Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com>
> ---
> include/linux/numa.h | 9 +++++++++
> include/linux/numa_memblks.h | 2 ++
> mm/numa.c | 10 ++++++++++
> mm/numa_memblks.c | 37 ++++++++++++++++++++++++++++++++++++
> 4 files changed, 58 insertions(+)
>
> diff --git a/include/linux/numa.h b/include/linux/numa.h
> index e6baaf6051bc..1d1aabebd26b 100644
> --- a/include/linux/numa.h
> +++ b/include/linux/numa.h
> @@ -41,6 +41,10 @@ int memory_add_physaddr_to_nid(u64 start);
> int phys_to_target_node(u64 start);
> #endif
>
> +#ifndef nid_get_mem_physaddr_range
> +int nid_get_mem_physaddr_range(int nid, u64 *start, u64 *end);
> +#endif
> +
> int numa_fill_memblks(u64 start, u64 end);
>
> #else /* !CONFIG_NUMA */
> @@ -63,6 +67,11 @@ static inline int phys_to_target_node(u64 start)
> return 0;
> }
>
> +static inline int nid_get_mem_physaddr_range(int nid, u64 *start, u64 *end)
> +{
> + return 0;
> +}
> +
> static inline void alloc_offline_node_data(int nid) {}
> #endif
>
> diff --git a/include/linux/numa_memblks.h b/include/linux/numa_memblks.h
> index 991076cba7c5..7b32d96d0134 100644
> --- a/include/linux/numa_memblks.h
> +++ b/include/linux/numa_memblks.h
> @@ -55,6 +55,8 @@ extern int phys_to_target_node(u64 start);
> #define phys_to_target_node phys_to_target_node
> extern int memory_add_physaddr_to_nid(u64 start);
> #define memory_add_physaddr_to_nid memory_add_physaddr_to_nid
> +extern int nid_get_mem_physaddr_range(int nid, u64 *start, u64 *end);
> +#define nid_get_mem_physaddr_range nid_get_mem_physaddr_range
> #endif /* CONFIG_NUMA_KEEP_MEMINFO */
>
> #endif /* CONFIG_NUMA_MEMBLKS */
> diff --git a/mm/numa.c b/mm/numa.c
> index 7d5e06fe5bd4..5335af1fefee 100644
> --- a/mm/numa.c
> +++ b/mm/numa.c
> @@ -59,3 +59,13 @@ int phys_to_target_node(u64 start)
> }
> EXPORT_SYMBOL_GPL(phys_to_target_node);
> #endif
> +
> +#ifndef nid_get_mem_physaddr_range
> +int nid_get_mem_physaddr_range(int nid, u64 *start, u64 *end)
> +{
> + pr_info_once("Unknown target phys addr range for node=%d\n", nid);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(nid_get_mem_physaddr_range);
> +#endif
> diff --git a/mm/numa_memblks.c b/mm/numa_memblks.c
> index 541a99c4071a..e1e56b7a3499 100644
> --- a/mm/numa_memblks.c
> +++ b/mm/numa_memblks.c
> @@ -590,4 +590,41 @@ int memory_add_physaddr_to_nid(u64 start)
> }
> EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
>
> +/**
> + * nid_get_mem_physaddr_range - Get the physical address range
> + * of the memblk in the NUMA node.
> + * @nid: NUMA node ID of the memblk
> + * @start: Start address of the memblk
> + * @end: End address of the memblk
> + *
> + * Find the lowest contiguous physical memory address range of the memblk
> + * in the NUMA node with the given nid and return the start and end
> + * addresses.
> + *
> + * RETURNS:
> + * 0 on success, -errno on failure.
> + */
> +int nid_get_mem_physaddr_range(int nid, u64 *start, u64 *end)
> +{
> + struct numa_meminfo *mi = &numa_meminfo;
> + int i;
> +
> + if (!numa_valid_node(nid))
> + return -EINVAL;
> +
> + for (i = 0; i < mi->nr_blks; i++) {
> + if (mi->blk[i].nid == nid) {
> + *start = mi->blk[i].start;
> + /*
> + * Assumption: mi->blk[i].end is the last address
> + * in the range + 1.
This was my fault for asking on internal review if this was documented
anywhere. It's kind of implicitly obvious when reading numa_memblks.c
because there are a bunch of end - 1 prints.
So you can probably drop this comment.
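For anyone reading along, the half-open convention is what those prints
rely on. Something along these lines (paraphrased from memory rather
than quoted, so treat the exact format string as approximate):

	/* ranges are [start, end), hence the end - 1 in the print */
	pr_info("NUMA: node %d [mem %#018Lx-%#018Lx]\n",
		nid, mi->blk[i].start, mi->blk[i].end - 1);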
> + */
> + *end = mi->blk[i].end;
> + return 0;
> + }
> + }
> +
> + return -ENODEV;
> +}
> +EXPORT_SYMBOL_GPL(nid_get_mem_physaddr_range);
> #endif /* CONFIG_NUMA_KEEP_MEMINFO */
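For completeness, here is a rough sketch of how I'd expect a consumer
such as the RAS2 driver to use this. program_scrub_range() is a made-up
helper purely to show where the values end up; note the end - 1 because
end is exclusive:

	/* Hypothetical caller sketch, not taken from the RAS2 patches. */
	static int setup_node_scrub(int nid)
	{
		u64 start, end;
		int ret;

		/* Resolve the physical address range backing this node. */
		ret = nid_get_mem_physaddr_range(nid, &start, &end);
		if (ret)
			return ret;

		/* end is last address + 1, so pass the inclusive end. */
		return program_scrub_range(start, end - 1);
	}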
Thread overview: 13+ messages
2025-08-12 14:26 [PATCH v11 0/3] ACPI: Add support for ACPI RAS2 feature table shiju.jose
2025-08-12 14:26 ` [PATCH v11 1/3] mm: Add support to retrieve physical address range of memory from the node ID shiju.jose
2025-08-19 16:54 ` Jonathan Cameron [this message]
2025-08-20 7:34 ` Mike Rapoport
2025-08-20 8:54 ` Jonathan Cameron
2025-08-20 10:00 ` Shiju Jose
2025-08-20 17:02 ` Mike Rapoport
2025-08-21 9:06 ` Jonathan Cameron
2025-08-21 16:16 ` Luck, Tony
2025-08-24 12:41 ` Mike Rapoport
2025-08-12 14:26 ` [PATCH v11 2/3] ACPI:RAS2: Add ACPI RAS2 driver shiju.jose
2025-08-12 14:26 ` [PATCH v11 3/3] ras: mem: Add memory " shiju.jose
2025-08-19 20:12 ` [PATCH v11 0/3] ACPI: Add support for ACPI RAS2 feature table Daniel Ferguson