From: Alison Schofield <alison.schofield@intel.com>
To: Robert Richter <rrichter@amd.com>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>,
Dave Hansen <dave.hansen@linux.intel.com>,
Andy Lutomirski <luto@kernel.org>,
Peter Zijlstra <peterz@infradead.org>,
Thomas Gleixner <tglx@linutronix.de>,
Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
x86@kernel.org, Dan Williams <dan.j.williams@intel.com>,
linux-acpi@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-cxl@vger.kernel.org,
Derick Marks <derick.w.marks@intel.com>,
"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH v2 1/3] x86/numa: Fix SRAT lookup of CFMWS ranges with numa_fill_memblks()
Date: Thu, 21 Mar 2024 11:39:17 -0700 [thread overview]
Message-ID: <Zfx+1c6RVO6r176O@aschofie-mobl2> (raw)
In-Reply-To: <Zfxmnfj1K0OTk89U@rric.localdomain>
On Thu, Mar 21, 2024 at 05:55:57PM +0100, Robert Richter wrote:
> Alison,
>
> On 20.03.24 10:46:07, Alison Schofield wrote:
> > On Tue, Mar 19, 2024 at 01:00:23PM +0100, Robert Richter wrote:
> > > For configurations that have the kconfig option NUMA_KEEP_MEMINFO
> > > disabled, the SRAT lookup done with numa_fill_memblks() fails
> > > returning NUMA_NO_MEMBLK (-1). An existing SRAT memory range cannot be
> > > found for a CFMWS address range. This causes the addition of a
> > > duplicate numa_memblk with a different node id and a subsequent page
> > > fault and kernel crash during boot.
> > >
> > > numa_fill_memblks() is implemented and used in the init section only.
> > > The option NUMA_KEEP_MEMINFO is only for the case when NUMA data will
> > > be used outside of init. So fix the SRAT lookup by moving
> > > numa_fill_memblks() out of the NUMA_KEEP_MEMINFO block to make it
> > > always available in the init section.
> > >
> > > Note that the issue was initially introduced with [1]. But since
> > > phys_to_target_node() was originally used that returned the valid node
> > > 0, an additional numa_memblk was not added. Though, the node id was
> > > wrong too.
> >
> > Hi Richard,
> >
> > I recall a bit of wrangling w #defines to make ARM64 and LOONGARCH build.
> > I'm seeing an x86 build error today:
> >
> > >> arch/x86/mm/numa.c:957:12: error: redefinition of 'numa_fill_memblks'
> > 957 | int __init numa_fill_memblks(u64 start, u64 end)
> >
> > include/linux/numa.h:40:26: note: previous definition of 'numa_fill_memblks' with type
> > +'int(u64, u64)' {aka 'int(long long unsigned int, long long unsigned int)'}
> > 40 | static inline int __init numa_fill_memblks(u64 start, u64 end)
> > | ^~~~~~~~~~~~~~~~~
> >
> > In addition to what you suggest, would something like this diff below be
> > a useful safety measure to distinguish num_fill_memblks() success (rc:0)
> > and possible non-existence (rc:-1). I don't think it hurts to take a
> > second look using phys_to_target_node() (totally untested)
> >
> > diff --git a/drivers/acpi/numa/srat.c b/drivers/acpi/numa/srat.c
> > index 070a52e4daa8..0c48fe32ced4 100644
> > --- a/drivers/acpi/numa/srat.c
> > +++ b/drivers/acpi/numa/srat.c
> > @@ -437,9 +437,16 @@ static int __init acpi_parse_cfmws(union acpi_subtable_headers *header,
> > * found for any portion of the window to cover the entire
> > * window.
> > */
> > - if (!numa_fill_memblks(start, end))
> > + rc = numa_fill_memblks(start, end);
> > + if (!rc)
> > return 0;
> >
> > + if (rc == NUMA_NO_MEMBLK) {
> > + node = phys_to_target_node(start);
> > + if (node != NUMA_NO_NODE)
> > + return 0;
> > + }
> > +
>
> for non-x86 the numa_add_memblk() function looks good in a way that it
> is able to handle presumable overlapping regions. numa_fill_memblks()
> would just fail then and numa_add_memblk() being called. For x86 we
> need numa_fill_memblks() since x86 specific numa_add_memblk() cannot
> handle the overlapping case.
>
> That said, we do not need the 2nd check. It looks to me that it
> actually breaks non-x86 as the whole block may not be registered (if
> it is larger than anything existing).
>
> For x86 the 2nd check may never happen if numa_fill_memblks() is
> always enabled (which is this patch for).
Hi Robert, (<-- got it right this time ;))
I wasn't thinking of x86, but rather of archs that may not support
numa_fill_memblks() and so return NUMA_NO_MEMBLK (-1) via the
#ifndef numa_fill_memblks stub in include/linux/numa.h.
In those cases, take a second look at phys_to_target_node() before
blindly adding another memblk. Is that the failure signature you
reported here?
I can wait and see your final patch and how the different archs
will handle it. I'm worried that NUMA_NO_MEMBLK is overloaded and
we need to differentiate between archs that don't even look for a
node, versus archs that look but don't find a node.
--Alison
>
> So we should be good without your change.
>
> Thanks,
>
> -Robert
>
> > /* No SRAT description. Create a new node. */
> >
> > --Alison
> >
> > >
> > > [1] fd49f99c1809 ("ACPI: NUMA: Add a node and memblk for each CFMWS not in SRAT")
> > >
> > > Fixes: 8f1004679987 ("ACPI/NUMA: Apply SRAT proximity domain to entire CFMWS window")
> > > Cc: Derick Marks <derick.w.marks@intel.com>
> > > Cc: Dan Williams <dan.j.williams@intel.com>
> > > Cc: Alison Schofield <alison.schofield@intel.com>
> > > Signed-off-by: Robert Richter <rrichter@amd.com>
> > > ---
> > > arch/x86/mm/numa.c | 4 ++--
> > > 1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
> > > index 65e9a6e391c0..ce84ba86e69e 100644
> > > --- a/arch/x86/mm/numa.c
> > > +++ b/arch/x86/mm/numa.c
> > > @@ -929,6 +929,8 @@ int memory_add_physaddr_to_nid(u64 start)
> > > }
> > > EXPORT_SYMBOL_GPL(memory_add_physaddr_to_nid);
> > >
> > > +#endif
> > > +
> > > static int __init cmp_memblk(const void *a, const void *b)
> > > {
> > > const struct numa_memblk *ma = *(const struct numa_memblk **)a;
> > > @@ -1001,5 +1003,3 @@ int __init numa_fill_memblks(u64 start, u64 end)
> > > }
> > > return 0;
> > > }
> > > -
> > > -#endif
> > > --
> > > 2.39.2
> > >
Thread overview: 15+ messages
2024-03-19 12:00 [PATCH v2 0/3] SRAT/CEDT fixes and updates Robert Richter
2024-03-19 12:00 ` [PATCH v2 1/3] x86/numa: Fix SRAT lookup of CFMWS ranges with numa_fill_memblks() Robert Richter
2024-03-19 20:15 ` Dan Williams
2024-03-21 8:09 ` Robert Richter
2024-03-20 17:46 ` Alison Schofield
2024-03-21 16:55 ` Robert Richter
2024-03-21 18:39 ` Alison Schofield [this message]
2024-03-21 22:17 ` Robert Richter
2024-03-19 12:00 ` [PATCH v2 2/3] ACPI/NUMA: Print CXL Early Discovery Table (CEDT) Robert Richter
2024-03-19 20:18 ` Dan Williams
2024-03-20 17:47 ` Alison Schofield
2024-03-19 12:00 ` [PATCH v2 3/3] ACPI/NUMA: Remove architecture dependent remainings Robert Richter
2024-03-19 20:44 ` Dan Williams
2024-03-22 2:12 ` kernel test robot
2024-03-28 16:49 ` Robert Richter