From: Tejun Heo <tj@kernel.org>
To: Nish Aravamudan <nish.aravamudan@gmail.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>,
Tony Luck <tony.luck@intel.com>,
linux-ia64@vger.kernel.org,
Nishanth Aravamudan <nacc@linux.vnet.ibm.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Linux Memory Management List <linux-mm@kvack.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
linuxppc-dev@lists.ozlabs.org,
Jiang Liu <jiang.liu@linux.intel.com>,
Wanpeng Li <liwanp@linux.vnet.ibm.com>
Subject: Re: [RFC 0/2] Memoryless nodes and kworker
Date: Fri, 18 Jul 2014 14:19:47 -0400
Message-ID: <20140718181947.GE13012@htj.dyndns.org>
In-Reply-To: <CAOhV88O03zCsv_3eadEKNv1D1RoBmjWRFNhPjEHawF9s71U0JA@mail.gmail.com>

Hello,

On Fri, Jul 18, 2014 at 11:12:01AM -0700, Nish Aravamudan wrote:
> why aren't these callers using kthread_create_on_cpu()? That API was

It is using that; there are just other data structures involved as well.

> already changed to use cpu_to_mem() [so one change, rather than changes all
> over the kernel source]. We could change it back to cpu_to_node() and push
> down the knowledge about the fallback.

And once it's properly solved, please convert kthread back to using
cpu_to_node() too. We really shouldn't be sprinkling this new, subtly
different variant across the kernel. It's wrong and confusing.
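
To make the distinction concrete, here is a simplified sketch (not the
actual kernel source; the real kthread_create_on_cpu() does additional
per-CPU bookkeeping) of the shape being argued for: the caller keeps
asking for the CPU's own node via cpu_to_node(), and any memoryless-node
fallback stays inside the allocator:

#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/topology.h>	/* cpu_to_node() */

struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
					  void *data, unsigned int cpu,
					  const char *namefmt)
{
	struct task_struct *p;

	/* cpu_to_node(), not cpu_to_mem(): the allocator owns the fallback */
	p = kthread_create_on_node(threadfn, data, cpu_to_node(cpu),
				   namefmt, cpu);
	if (IS_ERR(p))
		return p;

	/* binding/parking of the per-CPU thread is omitted in this sketch */
	kthread_bind(p, cpu);
	return p;
}
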
> Yes, this is a good point. But honestly, we're not really even at the point
> of talking about fallback here; at least in my testing, going off-node at
> all causes SLUB-configured slabs to deactivate, which then leads to an
> explosion in unreclaimable slab memory.

I don't think moving the logic into the allocator proper is a huge
amount of work, and this isn't the first time this subtlety has spilled
out of the allocator proper. Fortunately, it hasn't spread too far yet.
Let's please stop it here. I'm not saying you shouldn't or can't fix
the off-node allocation.
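
For illustration only (a sketch of the idea, not a proposed patch; where
exactly such a check would live in the allocator is precisely what is
being discussed here), "solving it in the allocator proper" could amount
to a single translation step at the top of the allocation path, so that
callers never see a node id with no memory behind it:

#include <linux/mmzone.h>	/* local_memory_node() */
#include <linux/nodemask.h>	/* node_state(), N_MEMORY */
#include <linux/numa.h>		/* NUMA_NO_NODE */

/*
 * Hypothetical helper: map a memoryless node to its nearest node with
 * memory once, inside the allocator, so callers can keep passing
 * cpu_to_node().
 */
static int resolve_alloc_node(int node)
{
	if (node != NUMA_NO_NODE && !node_state(node, N_MEMORY))
		node = local_memory_node(node);
	return node;
}

With something along those lines in place, SLUB would only ever be asked
for nodes that actually have memory, which would sidestep at least the
memoryless-node side of the off-node deactivation described above.
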
Thanks.
--
tejun

Thread overview (13+ messages):
2014-07-17 23:09 [RFC 0/2] Memoryless nodes and kworker Nishanth Aravamudan
2014-07-17 23:09 ` [RFC 1/2] workqueue: use the nearest NUMA node, not the local one Nishanth Aravamudan
2014-07-17 23:15 ` [RFC 2/2] powerpc: reorder per-cpu NUMA information's initialization Nishanth Aravamudan
2014-07-18 8:11 ` [RFC 1/2] workqueue: use the nearest NUMA node, not the local one Lai Jiangshan
2014-07-18 17:33 ` Nish Aravamudan
2014-07-18 11:20 ` [RFC 0/2] Memoryless nodes and kworker Tejun Heo
2014-07-18 17:42 ` Nish Aravamudan
2014-07-18 18:00 ` Tejun Heo
2014-07-18 18:01 ` Tejun Heo
2014-07-18 18:12 ` Nish Aravamudan
2014-07-18 18:19 ` Tejun Heo [this message]
2014-07-18 18:47 ` Nish Aravamudan
2014-07-18 18:58 ` Tejun Heo