From: Michael Bringmann <mwb@linux.vnet.ibm.com>
To: linuxppc-dev@lists.ozlabs.org
Cc: Nathan Fontenot <nfont@linux.vnet.ibm.com>,
	John Allen <jallen@linux.vnet.ibm.com>
Subject: Re: [PATCH V13 4/4] powerpc/vphn: Fix numa update end-loop bug
Date: Thu, 7 Sep 2017 15:04:06 -0500	[thread overview]
Message-ID: <756d15f9-fa17-263e-491e-7215f5ec586a@linux.vnet.ibm.com> (raw)
In-Reply-To: <77caa097-9d7e-93aa-bdc5-2a971687ef08@linux.vnet.ibm.com>

Simplest change IMO:

		for_each_cpu(sibling, cpu_sibling_mask(cpu)) {
			ud = &updates[i++];
+			ud->next = &updates[i];
			ud->cpu = sibling;
			ud->new_nid = new_nid;
			ud->old_nid = numa_cpu_lookup_table[sibling];
			cpumask_set_cpu(sibling, &updated_cpus);
-			if (i < weight)
-				ud->next = &updates[i];
		}
		cpu = cpu_last_thread_sibling(cpu);
	}

	if (i)
		updates[i-1].next = NULL;

Link all of the updates together as they are filled in, and NULL the
'next' pointer in the last entry actually used.  No worries about
invalid comparisons against 'weight', and less code.
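
For reference, here is a minimal standalone sketch of that linking
scheme (untested against the kernel; the struct and the "changed cpus"
below are simplified stand-ins for topology_update_data and the
cpu_sibling_mask() iteration, and 'weight' is deliberately larger than
the number of entries filled in, since cpus can be skipped):

	#include <stdio.h>
	#include <stdlib.h>

	struct update {
		struct update *next;
		int cpu;
		int old_nid;
		int new_nid;
	};

	int main(void)
	{
		/* Pretend only 3 cpus changed, though 'weight' slots exist. */
		int changed[] = { 0, 1, 5 };
		int nchanged = 3, weight = 8, i = 0;
		struct update *updates = calloc(weight, sizeof(*updates));
		struct update *ud;

		for (int c = 0; c < nchanged; c++) {
			ud = &updates[i++];
			/* Unconditionally chain each entry to the next slot... */
			ud->next = &updates[i];
			ud->cpu = changed[c];
			ud->old_nid = 0;
			ud->new_nid = 1;
		}

		/* ...then terminate the chain at the last entry actually
		 * filled, so the walk never runs past it. */
		if (i)
			updates[i - 1].next = NULL;

		/* Stops after the third entry, even though five zeroed
		 * slots follow it in the array. */
		for (ud = &updates[0]; ud; ud = ud->next)
			printf("cpu %d: %d -> %d\n",
			       ud->cpu, ud->old_nid, ud->new_nid);

		free(updates);
		return 0;
	}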

Michael


On 09/07/2017 08:35 AM, Nathan Fontenot wrote:
> On 09/06/2017 05:03 PM, Michael Bringmann wrote:
>>
>>
>> On 09/06/2017 09:45 AM, Nathan Fontenot wrote:
>>> On 09/01/2017 10:48 AM, Michael Bringmann wrote:
>>>> powerpc/vphn: On Power systems with shared configurations of CPUs
>>>> and memory, there are some issues with the association of additional
>>>> CPUs and memory to nodes when hot-adding resources.  This patch
>>>> fixes an end-of-updates processing problem observed occasionally
>>>> in numa_update_cpu_topology().
>>>>
>>>> Signed-off-by: Michael Bringmann <mwb@linux.vnet.ibm.com>
>>>> ---
>>>>  arch/powerpc/mm/numa.c |    7 +++++++
>>>>  1 file changed, 7 insertions(+)
>>>>
>>>> diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
>>>> index 3a5b334..fccf23f 100644
>>>> --- a/arch/powerpc/mm/numa.c
>>>> +++ b/arch/powerpc/mm/numa.c
>>>> @@ -1410,6 +1410,13 @@ int numa_update_cpu_topology(bool cpus_locked)
>>>>  		cpu = cpu_last_thread_sibling(cpu);
>>>>  	}
>>>>
>>>> +	/*
>>>> +	 * Prevent processing of 'updates' from overflowing array
>>>> +	 * in cases where last entry filled in a 'next' pointer.
>>>> +	 */
>>>> +	if (i)
>>>> +		updates[i-1].next = NULL;
>>>> +
>>>
>>> This really looks like the bug is in the code above this, where we
>>> fill in the updates array for each of the sibling cpus. The code
>>> there assumes that if the current update entry is not the last one,
>>> there will be more updates, and blindly sets the next pointer.
>>>
>>> Perhaps we could correct the logic in the code that sets the next
>>> pointers. Set the ud pointer to NULL before the outer for_each_cpu()
>>> loop. Then, in the inner for_each_cpu(sibling,...) loop, update the
>>> ud->next pointer as the first operation.
>>>
>>> 		for_each_cpu(sibling, cpu_sibling_mask(cpu)) {
>>> 			if (ud)
>>> 				ud->next = &updates[i];
>>> 			...
>>> 		}
>>>
>>> Obviously untested, but I think this would prevent erroneously
>>> setting the next pointer in the last update entry that is filled out.
>>
>> The above fragment appears to skip initialization of the 'next' pointer
>> in the first element of the 'updates' array.  That would abort subsequent
>> evaluation of the array too soon, I believe.  I would like to take another
>> look at whether the current check 'if (i < weight) ud->next = &updates[i];'
>> is having problems due to i being 0-relative and weight being 1-relative.
> 
> Another thing to keep in mind is that cpus can be skipped by checks earlier
> in the loop. There is no guarantee that we will add 'weight' elements to
> the ud list.
> 
> -Nathan
> 
>>
>>>   
>>> -Nathan
>>
>> Michael
>>
>>>
>>>>  	pr_debug("Topology update for the following CPUs:\n");
>>>>  	if (cpumask_weight(&updated_cpus)) {
>>>>  		for (ud = &updates[0]; ud; ud = ud->next) {
>>>>
>>>
>>
> 
> 

-- 
Michael W. Bringmann
Linux Technology Center
IBM Corporation
Tie-Line  363-5196
External: (512) 286-5196
Cell:       (512) 466-0650
mwb@linux.vnet.ibm.com

Thread overview: 12+ messages
2017-09-01 15:47 [PATCH V13 0/4] powerpc/vphn: Update CPU topology when VPHN enabled Michael Bringmann
2017-09-01 15:48 ` [PATCH V13 1/4] " Michael Bringmann
2017-09-06 14:20   ` Nathan Fontenot
2017-09-01 15:48 ` [PATCH V13 2/4] powerpc/vphn: Improve recognition of PRRN/VPHN Michael Bringmann
2017-09-06 14:24   ` Nathan Fontenot
2017-09-01 15:48 ` [PATCH V13 3/4] powerpc/hotplug: Improve responsiveness of hotplug change Michael Bringmann
2017-09-06 14:33   ` Nathan Fontenot
2017-09-01 15:48 ` [PATCH V13 4/4] powerpc/vphn: Fix numa update end-loop bug Michael Bringmann
2017-09-06 14:45   ` Nathan Fontenot
2017-09-06 22:03     ` Michael Bringmann
2017-09-07 13:35       ` Nathan Fontenot
2017-09-07 20:04         ` Michael Bringmann [this message]
