From: Kamezawa Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
To: Gu Zheng <guz.fnst@cn.fujitsu.com>
Cc: Tejun Heo <tj@kernel.org>, <linux-kernel@vger.kernel.org>,
<laijs@cn.fujitsu.com>, <isimatu.yasuaki@jp.fujitsu.com>,
<tangchen@cn.fujitsu.com>
Subject: Re: [PATCH] workqueue: update numa affinity when node hotplug
Date: Thu, 5 Mar 2015 16:16:19 +0900 [thread overview]
Message-ID: <54F802C3.7070903@jp.fujitsu.com> (raw)
In-Reply-To: <54F7B006.9050203@cn.fujitsu.com>
On 2015/03/05 10:23, Gu Zheng wrote:
> Hi Kamezawa-san,
> On 03/04/2015 01:45 PM, Kamezawa Hiroyuki wrote:
>
>> On 2015/03/03 22:18, Tejun Heo wrote:
>>> Hello, Kame.
>>>
>>> On Tue, Mar 03, 2015 at 03:53:46PM +0900, Kamezawa Hiroyuki wrote:
>>>> relationship between proximity domain and lapic id doesn't change.
>>>> relationship between lapic-id and cpu-id changes.
>>>>
>>>> pxm <-> memory address : no change
>>>> pxm <-> lapicid : no change
>>>> pxm <-> node id : no change
>>>> lapicid <-> cpu id : change.
>>>
>>> So, we're changing the cpu ID to NUMA node mapping because current
>>> NUMA code is ignoring PXM for memoryless nodes? That's it?
>>>
>>
>> For memory-less node case, yes.
>> Another problem is that lapicid <-> cpuid relationship is not persistent.
>>
>>
>>>>>> I personally think the proper fix is building a persistent cpu-id <-> lapicid relationship,
>>>>>> as pxm does, rather than creating a band-aid.
>>>>>
>>>>> Oh if this is possible, I agree that's the right direction too.
>>>>>
>>>>
>>>> Implementation is a bit complicated now :(.
>>>
>>> Ah well, even then, the obviously right thing to do is updating NUMA
>>> code to always keep track of PXM information. We don't really want to
>>> pile NUMA hacks in random users of NUMA code.
>>>
>>
>> We'd like to start from making apicid <-> cpuid persistent because memory-less
>> node case doesn't cause panic.
>>
>> Gu-san, what do you think?
>
> Fine by me. But it seems that the change will break the re-use of freed cpuids when
> hot-adding new cpus. I am afraid it may affect other subsystems, though I cannot
> point to a specific example.
>
That may be a concern. But, IMHO, applications should not rely on the cpuid of a newly added cpu.
If an application cares about the placement of [thread, memory], the OS API should be refreshed
to let the user say "please place [thread, memory] near each other" rather than specifying
[cpu, memory] explicitly. (As in Peter Z.'s sched/numa work.)
If an application cares about the physical placement of hardware (ethernet, SSD, etc.),
it should not depend on cpuid.
Some of the required features may not be achieved yet, but the cpuid of a newly added cpu
should not be a big problem for applications, I think.
Open discussion with the (x86/sched/mm) maintainers will be required, anyway.
I think it's worth trying.
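For what it's worth, the persistent apicid <-> cpuid mapping could be sketched roughly
like this. This is only a userspace illustration of the idea, not kernel code; all names
(map_init, cpuid_for_apicid, MAX_CPUS) are made up for the example:

```c
/* Sketch of a persistent apicid -> cpuid map: once an apicid has been
 * assigned a cpuid, it keeps that cpuid across hot-remove/hot-add, and
 * a freed cpuid is never recycled for a different apicid. */

#define MAX_CPUS 8

static int apicid_of[MAX_CPUS];   /* cpuid -> apicid, -1 if never assigned */
static int nr_assigned;

static void map_init(void)
{
	int i;

	for (i = 0; i < MAX_CPUS; i++)
		apicid_of[i] = -1;
	nr_assigned = 0;
}

/* Return the persistent cpuid for @apicid, assigning a new one on
 * first sight. Returns -1 when the table is full. */
static int cpuid_for_apicid(int apicid)
{
	int i;

	for (i = 0; i < nr_assigned; i++)
		if (apicid_of[i] == apicid)
			return i;        /* seen before: same cpuid again */
	if (nr_assigned >= MAX_CPUS)
		return -1;
	apicid_of[nr_assigned] = apicid;
	return nr_assigned++;
}
```

With this scheme a hot-removed cpu's id stays reserved for its apicid, so
re-adding the same cpu gets the same cpuid back; that is exactly the change
in the cpuid re-use behavior Gu-san mentioned above.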
Thanks,
-Kame
Thread overview: 9+ messages
2015-02-27 10:04 [PATCH] workqueue: update numa affinity when node hotplug Gu Zheng
2015-02-27 11:54 ` Tejun Heo
2015-03-02 8:41 ` Kamezawa Hiroyuki
2015-03-02 16:28 ` Tejun Heo
2015-03-03 6:53 ` Kamezawa Hiroyuki
2015-03-03 13:18 ` Tejun Heo
2015-03-04 5:45 ` Kamezawa Hiroyuki
2015-03-05 1:23 ` Gu Zheng
2015-03-05 7:16 ` Kamezawa Hiroyuki [this message]