From: Nicholas Piggin <npiggin@gmail.com>
To: Michael Ellerman <mpe@ellerman.id.au>
Cc: linuxppc-dev@ozlabs.org
Subject: Re: [PATCH v2] powerpc/numa: Fix percpu allocations to be NUMA aware
Date: Tue, 6 Jun 2017 20:41:12 +1000
Message-ID: <20170606204112.300af2f1@roar.ozlabs.ibm.com>
In-Reply-To: <1496744637-24585-1-git-send-email-mpe@ellerman.id.au>
On Tue, 6 Jun 2017 20:23:57 +1000
Michael Ellerman <mpe@ellerman.id.au> wrote:
> In commit 8c272261194d ("powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID"), we
> switched to the generic implementation of cpu_to_node(), which uses a percpu
> variable to hold the NUMA node for each CPU.
>
> Unfortunately we neglected to notice that we use cpu_to_node() in the
> allocation of our percpu areas, leading to a chicken-and-egg problem. In
> practice, what happens is that when we are setting up the percpu areas,
> cpu_to_node() reports that all CPUs are on node 0, so we allocate all
> percpu areas on node 0.
>
> This is visible in the dmesg output, as all pcpu allocs being in group 0:
>
> pcpu-alloc: [0] 00 01 02 03 [0] 04 05 06 07
> pcpu-alloc: [0] 08 09 10 11 [0] 12 13 14 15
> pcpu-alloc: [0] 16 17 18 19 [0] 20 21 22 23
> pcpu-alloc: [0] 24 25 26 27 [0] 28 29 30 31
> pcpu-alloc: [0] 32 33 34 35 [0] 36 37 38 39
> pcpu-alloc: [0] 40 41 42 43 [0] 44 45 46 47
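For context, the generic cpu_to_node() that USE_PERCPU_NUMA_NODE_ID selects
is essentially just a percpu read -- roughly this shape (see the
CONFIG_USE_PERCPU_NUMA_NODE_ID path in include/linux/topology.h):

    /* Generic helper: the node id for each CPU lives in a percpu variable. */
    static inline int cpu_to_node(int cpu)
    {
            return per_cpu(numa_node, cpu);
    }

While setup_per_cpu_areas() is still running, that percpu variable has not
been populated with the real node ids yet, so the read reports node 0 for
every CPU -- hence every group landing on node 0 above.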
>
> To fix it we need an early_cpu_to_node() which can run before percpu is set
> up. We already have the numa_cpu_lookup_table we can use, so just plumb it
> in. With the patch, dmesg output shows two groups, 0 and 1:
>
> pcpu-alloc: [0] 00 01 02 03 [0] 04 05 06 07
> pcpu-alloc: [0] 08 09 10 11 [0] 12 13 14 15
> pcpu-alloc: [0] 16 17 18 19 [0] 20 21 22 23
> pcpu-alloc: [1] 24 25 26 27 [1] 28 29 30 31
> pcpu-alloc: [1] 32 33 34 35 [1] 36 37 38 39
> pcpu-alloc: [1] 40 41 42 43 [1] 44 45 46 47
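For reference, the early helper described above boils down to a lookup in
numa_cpu_lookup_table[] with a node-0 fallback. A minimal sketch, assuming
the lookup table has already been populated from the device tree (the exact
code in the patch may differ):

    /* Usable before the percpu areas (and hence percpu numa_node) exist. */
    static inline int early_cpu_to_node(int cpu)
    {
            int nid = numa_cpu_lookup_table[cpu];

            /* Fall back to node 0 if no entry has been recorded yet. */
            return (nid < 0) ? 0 : nid;
    }

setup_per_cpu_areas() and the pcpu distance callback would then use
early_cpu_to_node() rather than cpu_to_node() when choosing a node for each
CPU's percpu area.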
>
> We can also check the data_offset in the paca of various CPUs. With the fix
> we see:
>
> CPU 0: data_offset = 0x0ffe8b0000
> CPU 24: data_offset = 0x1ffe5b0000
>
> And we can see from dmesg that CPU 24 has an allocation on node 1:
>
> node 0: [mem 0x0000000000000000-0x0000000fffffffff]
> node 1: [mem 0x0000001000000000-0x0000001fffffffff]
>
> Cc: stable@vger.kernel.org # v3.16+
> Fixes: 8c272261194d ("powerpc/numa: Enable USE_PERCPU_NUMA_NODE_ID")
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Looks good.
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Thread overview: 3+ messages
2017-06-06 10:23 [PATCH v2] powerpc/numa: Fix percpu allocations to be NUMA aware Michael Ellerman
2017-06-06 10:41 ` Nicholas Piggin [this message]
2017-06-08 4:05 ` [v2] " Michael Ellerman