From: Christoph Lameter <clameter@sgi.com>
To: Yasunori Goto <y-goto@jp.fujitsu.com>
Cc: Andrew Morton <akpm@osdl.org>,
Hiroyuki KAMEZAWA <kamezawa.hiroyu@jp.fujitsu.com>,
Linux Kernel ML <linux-kernel@vger.kernel.org>,
linux-mm <linux-mm@kvack.org>
Subject: Re: [Patch 002/002] Create/delete kmem_cache_node for SLUB on memory online callback
Date: Thu, 11 Oct 2007 21:09:06 -0700 (PDT)
Message-ID: <Pine.LNX.4.64.0710112056300.1882@schroedinger.engr.sgi.com>
In-Reply-To: <20071012112801.B9A1.Y-GOTO@jp.fujitsu.com>
On Fri, 12 Oct 2007, Yasunori Goto wrote:
> If pages on the new node are available, slub can use them before the
> new kmem_cache_nodes are made. So, this callback should be called
> BEFORE pages on the node are available.
If it's called before pages on the node are available then it must
fall back and cannot use the pages.
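To make that concrete, here is an illustrative fragment only (not from the
patch; "example_alloc", "cachep" and "new_nid" are made-up names):

static void *example_alloc(struct kmem_cache *cachep, int new_nid)
{
	/*
	 * If the notifier ran only after pages on new_nid became usable,
	 * this could already hand out memory from new_nid before its
	 * kmem_cache_node exists.  Run before the pages are available,
	 * the allocation simply falls back to another node.
	 */
	return kmem_cache_alloc_node(cachep, GFP_KERNEL, new_nid);
}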
> +#if defined(CONFIG_NUMA) && defined(CONFIG_MEMORY_HOTPLUG)
> +static int slab_mem_going_offline_callback(void *arg)
> +{
> + struct kmem_cache *s;
> + struct memory_notify *marg = arg;
> + int local_node, offline_node = marg->status_change_nid;
> +
> + if (offline_node < 0)
> + /* node has memory yet. nothing to do. */
Please clarify the comment. This seems to indicate that we should not
do anything because the node still has memory?
Doesn't the node always have memory before offlining?
> + return 0;
> +
> + down_read(&slub_lock);
> + list_for_each_entry(s, &slab_caches, list) {
> + local_node = page_to_nid(virt_to_page(s));
> + if (local_node == offline_node)
> + /* This slub is on the offline node. */
> + return -EBUSY;
> + }
> + up_read(&slub_lock);
So this checks whether any kmem_cache structure is on the node being
offlined? If so, then we cannot offline the node?
> + kmem_cache_shrink_node(s, offline_node);
kmem_cache_shrink(s) would be okay here, I would think. The function is
reasonably fast and offlining is a rare event.
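Roughly what I have in mind, as a sketch only (this leaves out the check
above for kmem_cache structures residing on the node; the names mirror the
patch but the body is my illustration, not the actual change):

static int slab_mem_going_offline_callback(void *arg)
{
	struct kmem_cache *s;

	/* Shrink every cache; offlining is rare enough that this is cheap. */
	down_read(&slub_lock);
	list_for_each_entry(s, &slab_caches, list)
		kmem_cache_shrink(s);
	up_read(&slub_lock);

	return 0;
}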
> +static void slab_mem_offline_callback(void *arg)
We call this after we have established that no kmem_cache structures are
on this node and after we have shrunk the slabs. Is there any guarantee
that no slab operations have occurred since then?
> +{
> + struct kmem_cache_node *n;
> + struct kmem_cache *s;
> + struct memory_notify *marg = arg;
> + int offline_node;
> +
> + offline_node = marg->status_change_nid;
> +
> + if (offline_node < 0)
> + /* node has memory yet. nothing to do. */
> + return;
Does this mean that the node still has memory?
> + down_read(&slub_lock);
> + list_for_each_entry(s, &slab_caches, list) {
> + n = get_node(s, offline_node);
> + if (n) {
> + /*
> + * if n->nr_slabs > 0, offline_pages() must be fail,
> + * because the node is used by slub yet.
> + */
It may be clearer to say:
"If nr_slabs > 0 then slabs still exist on the node that is going down.
We were unable to free them so we must fail."
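Putting that wording in context, roughly (again a sketch; the tear-down
after the check is my guess at the rest of the hunk, not quoted from the
patch):

static void slab_mem_offline_callback(void *arg)
{
	struct kmem_cache_node *n;
	struct kmem_cache *s;
	struct memory_notify *marg = arg;
	int offline_node = marg->status_change_nid;

	if (offline_node < 0)
		/* The node still has normal memory; nothing to do. */
		return;

	down_read(&slub_lock);
	list_for_each_entry(s, &slab_caches, list) {
		n = get_node(s, offline_node);
		if (n) {
			/*
			 * If nr_slabs > 0 then slabs still exist on the
			 * node that is going down.  We were unable to
			 * free them so we must fail.
			 */
			BUG_ON(atomic_long_read(&n->nr_slabs));

			/* Detach and free the per node structure. */
			s->node[offline_node] = NULL;
			kmem_cache_free(kmalloc_caches, n);
		}
	}
	up_read(&slub_lock);
}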
> +static int slab_mem_going_online_callback(void *arg)
> +{
> + struct kmem_cache_node *n;
> + struct kmem_cache *s;
> + struct memory_notify *marg = arg;
> + int nid = marg->status_change_nid;
> +
> + /* If the node already has memory, then nothing is necessary. */
> + if (nid < 0)
> + return 0;
The node must already have memory???? Or have we already brought up the node?
> + /*
> + * New memory will be onlined on the node which has no memory so far.
> + * New kmem_cache_node is necssary for it.
"We are bringing a node online. No memory is available yet. We must
allocate a kmem_cache_node structure in order to bring the node online." ?
> + */
> + down_read(&slub_lock);
> + list_for_each_entry(s, &slab_caches, list) {
> + /*
> + * XXX: The new node's memory can't be allocated yet,
> + * kmem_cache_node will be allocated other node.
> + */
"kmem_cache_alloc node will fallback to other nodes since memory is
not yet available from the node that is brought up.¨ ?
> + n = kmem_cache_alloc(kmalloc_caches, GFP_KERNEL);
> + if (!n)
> + return -ENOMEM;
> + init_kmem_cache_node(n);
> + s->node[nid] = n;
> + }
> + up_read(&slub_lock);
> +
> + return 0;
> +}
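For completeness, the online path with the comments reworded as above
(again only a sketch of what I mean, not the patch itself; I also drop
slub_lock on the error path here):

static int slab_mem_going_online_callback(void *arg)
{
	struct kmem_cache_node *n;
	struct kmem_cache *s;
	struct memory_notify *marg = arg;
	int nid = marg->status_change_nid;

	/*
	 * If the node already has memory then its kmem_cache_node
	 * structures exist already; nothing is necessary.
	 */
	if (nid < 0)
		return 0;

	/*
	 * We are bringing a node online.  No memory is available yet.
	 * We must allocate a kmem_cache_node structure for each cache
	 * in order to bring the node online.
	 */
	down_read(&slub_lock);
	list_for_each_entry(s, &slab_caches, list) {
		/*
		 * kmem_cache_alloc_node would fall back to other nodes
		 * since memory is not yet available from the node that
		 * is brought up.
		 */
		n = kmem_cache_alloc(kmalloc_caches, GFP_KERNEL);
		if (!n) {
			up_read(&slub_lock);
			return -ENOMEM;
		}
		init_kmem_cache_node(n);
		s->node[nid] = n;
	}
	up_read(&slub_lock);

	return 0;
}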