From: Larry Woodman <lwoodman@redhat.com>
To: linux-mm@kvack.org
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
Andrew Morton <akpm@linux-foundation.org>,
Rik van Riel <riel@redhat.com>,
Motohiro Kosaki <mkosaki@redhat.com>
Subject: [PATCH -mm] do_migrate_pages() calls migrate_to_node() even if task is already on a correct node
Date: Thu, 22 Mar 2012 14:14:23 -0400 [thread overview]
Message-ID: <4F6B6BFF.1020701@redhat.com> (raw)
[-- Attachment #1: Type: text/plain, Size: 1233 bytes --]
While moving tasks between cpusets I noticed some strange behavior. Specifically, if the nodes of the
destination cpuset are a subset of the nodes of the source cpuset, do_migrate_pages() will move pages that are
already on a node in the destination cpuset. This happens because do_migrate_pages() does not check whether each
node in the source nodemask is also in the destination nodemask before calling migrate_to_node(). If we add that
check and skip any source node that is already in the destination, we no longer migrate pages that do not need
to be moved.
Adding a little debug printk to migrate_to_node() shows this. Without this change, migrating tasks from a cpuset
containing nodes 0-7 to a cpuset containing nodes 3-4 migrates pages from ALL the source nodes, even those that
are in both the source and destination nodemasks:
Migrating 7 to 4
Migrating 6 to 3
Migrating 5 to 4
Migrating 4 to 3
Migrating 1 to 4
Migrating 3 to 4
Migrating 0 to 3
Migrating 2 to 3
With this change we only migrate from nodes that are not in the destination nodemask:
Migrating 7 to 4
Migrating 6 to 3
Migrating 5 to 4
Migrating 2 to 3
Migrating 1 to 4
Migrating 0 to 3
Signed-off-by: Larry Woodman <lwoodman@redhat.com>
[-- Attachment #2: upstream-do_migrate_pages.patch --]
[-- Type: text/plain, Size: 407 bytes --]
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 47296fe..2bd13e9 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -1012,6 +1012,9 @@ int do_migrate_pages(struct mm_struct *mm,
 		int dest = 0;
 		for_each_node_mask(s, tmp) {
+			/* no need to move if it's already there */
+			if (node_isset(s, *to_nodes))
+				continue;
 			d = node_remap(s, *from_nodes, *to_nodes);
 			if (s == d)
 				continue;
Thread overview: 19+ messages
2012-03-22 18:14 Larry Woodman [this message]
2012-03-22 18:22 ` [PATCH -mm] do_migrate_pages() calls migrate_to_node() even if task is already on a correct node Rik van Riel
2012-03-22 18:45 ` KOSAKI Motohiro
2012-03-22 18:47 ` Rik van Riel
2012-03-22 18:49 ` Larry Woodman
2012-03-22 18:51 ` Christoph Lameter
2012-03-22 19:07 ` Larry Woodman
2012-03-22 19:30 ` Christoph Lameter
2012-03-29 18:00 ` Larry Woodman
2012-03-29 19:43 ` KOSAKI Motohiro
2012-03-29 20:01 ` Larry Woodman
2012-03-30 16:15 ` Christoph Lameter
2012-03-30 17:30 ` Larry Woodman
2012-03-30 20:49 ` Christoph Lameter
2012-03-22 19:36 ` Valdis.Kletnieks
2012-03-22 19:41 ` Christoph Lameter
2012-03-22 20:14 ` Larry Woodman
2012-03-23 1:21 ` Valdis.Kletnieks
2012-03-22 19:07 ` KOSAKI Motohiro