From: Tejun Heo <tj@kernel.org>
To: Nish Aravamudan <nish.aravamudan@gmail.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>,
Tony Luck <tony.luck@intel.com>,
linux-ia64@vger.kernel.org,
Nishanth Aravamudan <nacc@linux.vnet.ibm.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Linux Memory Management List <linux-mm@kvack.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
linuxppc-dev@lists.ozlabs.org,
Jiang Liu <jiang.liu@linux.intel.com>,
Wanpeng Li <liwanp@linux.vnet.ibm.com>
Subject: Re: [RFC 0/2] Memoryless nodes and kworker
Date: Fri, 18 Jul 2014 14:58:29 -0400
Message-ID: <20140718185829.GF13012@htj.dyndns.org>
In-Reply-To: <CAOhV88Mby_vrLPtRsRNO724-_ABEL06Fc1mMwjgq7LWw-uxeAw@mail.gmail.com>

Hello,

On Fri, Jul 18, 2014 at 11:47:08AM -0700, Nish Aravamudan wrote:
> Why are any callers of the format kthread_create_on_node(...,
> cpu_to_node(cpu), ...) not using kthread_create_on_cpu(..., cpu, ...)?

Ah, okay - that's because unbound workers are NUMA-node affine, not
CPU affine.
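
For illustration, a minimal sketch of the distinction (my_worker_fn
and the worker_pool details here are placeholders, not the exact
workqueue internals; the two kthread APIs are the real ones):

#include <linux/kthread.h>

/* An unbound worker is tied to a NUMA node, not a single CPU, so
 * the node-based API expresses its affinity directly:
 */
static struct task_struct *make_unbound_worker(struct worker_pool *pool,
					       int id)
{
	return kthread_create_on_node(my_worker_fn, pool, pool->node,
				      "kworker/u%d", id);
}

/* A per-CPU worker, by contrast, really is CPU affine, and
 * kthread_create_on_cpu() passes the cpu through to the name format:
 */
static struct task_struct *make_percpu_worker(struct worker_pool *pool,
					      unsigned int cpu)
{
	return kthread_create_on_cpu(my_worker_fn, pool, cpu, "kworker/%u");
}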
> It seems like an additional reasonable approach would be to provide a
> suitable _cpu() API for the allocators. I'm not sure why saying that
> callers should know about NUMA (in order to call cpu_to_node() in every
> caller) is any better than saying that callers should know about memoryless
> nodes (in order to call cpu_to_mem() in every caller instead) -- when at

It is better because that's what they want to express - "I'm on this
memory node, please allocate memory on or close to it". That's what
the caller cares about. Taking a CPU instead could be an option, but
you'll eventually run into cases where you have to map a NUMA node
id back to one of its CPUs, which will feel at least a bit silly.
There are things which really are per-NUMA-node, not per-CPU.
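
To make that concrete, a hedged sketch (per_node_cache is a made-up
structure; cpu_to_node(), cpu_to_mem() and kmalloc_node() are the
existing kernel helpers under discussion):

#include <linux/slab.h>
#include <linux/topology.h>

/* Express locality in terms of the node itself; the allocator is
 * then the one responsible for falling back if that node happens to
 * be memoryless:
 */
static struct per_node_cache *alloc_cache_for_cpu(unsigned int cpu)
{
	return kmalloc_node(sizeof(struct per_node_cache), GFP_KERNEL,
			    cpu_to_node(cpu));
}

/* cpu_to_mem() answers a different question - "which nearby node
 * actually has memory?" - so using it in every caller spreads
 * knowledge of memoryless nodes across the tree:
 */
static struct per_node_cache *alloc_cache_for_cpu_mem(unsigned int cpu)
{
	return kmalloc_node(sizeof(struct per_node_cache), GFP_KERNEL,
			    cpu_to_mem(cpu));
}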

So, let's please express what needs to be expressed. Massaging
around it can be useful at times but that doesn't seem to be the
case here.

Thanks.
--
tejun

Thread overview: 13+ messages
2014-07-17 23:09 [RFC 0/2] Memoryless nodes and kworker Nishanth Aravamudan
2014-07-17 23:09 ` [RFC 1/2] workqueue: use the nearest NUMA node, not the local one Nishanth Aravamudan
2014-07-17 23:15 ` [RFC 2/2] powerpc: reorder per-cpu NUMA information's initialization Nishanth Aravamudan
2014-07-18 8:11 ` [RFC 1/2] workqueue: use the nearest NUMA node, not the local one Lai Jiangshan
2014-07-18 17:33 ` Nish Aravamudan
2014-07-18 11:20 ` [RFC 0/2] Memoryless nodes and kworker Tejun Heo
2014-07-18 17:42 ` Nish Aravamudan
2014-07-18 18:00 ` Tejun Heo
2014-07-18 18:01 ` Tejun Heo
2014-07-18 18:12 ` Nish Aravamudan
2014-07-18 18:19 ` Tejun Heo
2014-07-18 18:47 ` Nish Aravamudan
2014-07-18 18:58 ` Tejun Heo [this message]