From: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Han Pingtian <hanpt@linux.vnet.ibm.com>,
Matt Mackall <mpm@selenic.com>,
David Rientjes <rientjes@google.com>,
Pekka Enberg <penberg@kernel.org>,
Linux Memory Management List <linux-mm@kvack.org>,
Paul Mackerras <paulus@samba.org>, Tejun Heo <tj@kernel.org>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
linuxppc-dev@lists.ozlabs.org, Christoph Lameter <cl@linux.com>,
Wanpeng Li <liwanp@linux.vnet.ibm.com>,
Anton Blanchard <anton@samba.org>
Subject: [RFC PATCH 3/4] Partial revert of 81c98869faa5 ("kthread: ensure locality of task_struct allocations")
Date: Wed, 13 Aug 2014 17:16:29 -0700
Message-ID: <20140814001629.GL11121@linux.vnet.ibm.com>
In-Reply-To: <20140814001301.GI11121@linux.vnet.ibm.com>

After discussions with Tejun, we don't want to spread the use of
cpu_to_mem() (and thus knowledge of allocators/NUMA topology details)
into callers, but would rather ensure the callees correctly handle
memoryless nodes. With the previous patches ("topology: add support for
node_to_mem_node() to determine the fallback node" and "slub: fallback
to node_to_mem_node() node if allocating on memoryless node") adding and
using node_to_mem_node(), we can safely undo part of the change to the
kthread logic from 81c98869faa5 ("kthread: ensure locality of
task_struct allocations").
Signed-off-by: Nishanth Aravamudan <nacc@linux.vnet.ibm.com>
---
diff --git a/kernel/kthread.c b/kernel/kthread.c
index ef483220e855..10e489c448fe 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -369,7 +369,7 @@ struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
 {
 	struct task_struct *p;
 
-	p = kthread_create_on_node(threadfn, data, cpu_to_mem(cpu), namefmt,
+	p = kthread_create_on_node(threadfn, data, cpu_to_node(cpu), namefmt,
 				   cpu);
 	if (IS_ERR(p))
 		return p;
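
To make the revert concrete: with patches 1/4 and 2/4 applied, the allocator
side can translate the node it is handed rather than requiring callers to do
so. A rough sketch of that check follows (illustrative only; the helper name
here is invented, and the real logic lives in slub's allocation paths):

#include <linux/mmzone.h>
#include <linux/numa.h>
#include <linux/topology.h>

/*
 * Illustrative sketch only: if the requested node has no memory, fall
 * back to its designated memory node before consulting per-node
 * freelists. This is why passing cpu_to_node(cpu) from
 * kthread_create_on_cpu() is safe again.
 */
static inline int slab_search_node_sketch(int node)
{
	if (node == NUMA_NO_NODE)
		return numa_mem_id();		/* no preference: local memory node */
	if (!node_present_pages(node))		/* memoryless node */
		return node_to_mem_node(node);	/* fallback helper from patch 1/4 */
	return node;
}
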
Thread overview (10+ messages):
2014-08-14 0:13 [RFC PATCH 0/4] Improve slab consumption with memoryless nodes Nishanth Aravamudan
2014-08-14 0:14 ` [RFC PATCH v3 1/4] topology: add support for node_to_mem_node() to determine the fallback node Nishanth Aravamudan
2014-08-14 14:35 ` Christoph Lameter
2014-08-14 20:06 ` Nishanth Aravamudan
2014-08-22 21:52 ` Nishanth Aravamudan
2014-08-14 0:15 ` [RFC PATCH 2/4] slub: fallback to node_to_mem_node() node if allocating on memoryless node Nishanth Aravamudan
2014-08-14 0:16 ` Nishanth Aravamudan [this message]
2014-08-14 0:17 ` [RFC PATCH 4/4] powerpc: reorder per-cpu NUMA information's initialization Nishanth Aravamudan
2014-08-22 1:10 ` [RFC PATCH 0/4] Improve slab consumption with memoryless nodes Nishanth Aravamudan
2014-08-22 20:32 ` Andrew Morton