From: Peter Zijlstra <peterz@infradead.org>
To: Hillf Danton <dhillf@gmail.com>
Cc: LKML <linux-kernel@vger.kernel.org>, Ingo Molnar <mingo@elte.hu>,
Mike Galbraith <efault@gmx.de>,
Yong Zhang <yong.zhang0@gmail.com>
Subject: Re: [PATCH] sched: fix constructing the span cpu mask of sched domain
Date: Tue, 10 May 2011 10:32:09 +0200
Message-ID: <1305016329.2914.22.camel@laptop>
In-Reply-To: <BANLkTi=qPWxRAa6+dT3ohEP6Z=0v+e4EXA@mail.gmail.com>
On Thu, 2011-05-05 at 20:53 +0800, Hillf Danton wrote:
> For a given node, when constructing the cpumask for its sched_domain
> to span, if there is no best node available after searching, further
> effort can be saved, based on a small change in the return value of
> find_next_best_node().
>
> Signed-off-by: Hillf Danton <dhillf@gmail.com>
> ---
>
> --- a/kernel/sched.c 2011-04-27 11:48:50.000000000 +0800
> +++ b/kernel/sched.c 2011-05-05 20:44:52.000000000 +0800
> @@ -6787,7 +6787,7 @@ init_sched_build_groups(const struct cpu
> */
> static int find_next_best_node(int node, nodemask_t *used_nodes)
> {
> - int i, n, val, min_val, best_node = 0;
> + int i, n, val, min_val, best_node = -1;
>
> min_val = INT_MAX;
>
> @@ -6811,7 +6811,8 @@ static int find_next_best_node(int node,
> }
> }
>
> - node_set(best_node, *used_nodes);
> + if (best_node != -1)
> + node_set(best_node, *used_nodes);
> return best_node;
> }
>
> @@ -6837,7 +6838,8 @@ static void sched_domain_node_span(int n
>
> for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
> int next_node = find_next_best_node(node, &used_nodes);
> -
> + if (next_node < 0)
> + break;
> cpumask_or(span, span, cpumask_of_node(next_node));
> }
> }
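
For illustration, here is a minimal user-space sketch of the patched
flow; the node count, the toy distance table and the simplified search
loop are assumptions for illustration only, not the kernel's actual
code or topology data.

/*
 * Minimal user-space sketch of the patched logic.  The node count,
 * the toy distance table and the simplified search loop are
 * assumptions for illustration, not the kernel's code or data.
 */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

#define NR_NODES		4
#define SD_NODES_PER_DOMAIN	16

static const int node_distance[NR_NODES][NR_NODES] = {
	{ 10, 20, 30, 30 },
	{ 20, 10, 30, 30 },
	{ 30, 30, 10, 20 },
	{ 30, 30, 20, 10 },
};

/* Mirrors the patched find_next_best_node(): -1 when nothing is left. */
static int find_next_best_node(int node, bool *used_nodes)
{
	int i, val, min_val = INT_MAX, best_node = -1;

	for (i = 0; i < NR_NODES; i++) {
		if (used_nodes[i])
			continue;
		val = node_distance[node][i];
		if (val < min_val) {
			min_val = val;
			best_node = i;
		}
	}

	if (best_node != -1)
		used_nodes[best_node] = true;
	return best_node;
}

/*
 * Mirrors the loop in sched_domain_node_span(), printing node numbers
 * instead of or-ing cpumasks.
 */
int main(void)
{
	bool used_nodes[NR_NODES] = { false };
	int node = 0, i;

	used_nodes[node] = true;
	printf("span: node %d", node);

	for (i = 1; i < SD_NODES_PER_DOMAIN; i++) {
		int next_node = find_next_best_node(node, used_nodes);

		if (next_node < 0)	/* the new early exit */
			break;
		printf(" %d", next_node);
	}
	printf("\n");
	return 0;
}

With only four toy nodes the loop stops after the fourth pick instead
of repeatedly re-selecting and or-ing in node 0, which is the wasted
effort the change avoids.
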
If you're interested in this area of the scheduler, you might want to
have a poke at:
http://marc.info/?l=linux-kernel&m=130218515520540
That tries to rewrite the CONFIG_NUMA support for the sched_domain stuff
to create domains based on node_distance(), to better reflect the
actual machine topology.
As stated, that patch is currently very broken, mostly because the
topologies encountered don't map onto non-overlapping trees. I've not
yet come up with a way to deal with that, but we surely need something
along those lines: the current scheme of grouping 16 nodes plus one
group spanning all nodes simply doesn't work well for today's machines,
now that NUMA is both common and the inter-node latencies are more
relevant.
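
To make that a bit more concrete, here is a rough user-space sketch of
building per-level spans from node_distance(), under an assumed 4-node
ring topology; the distance table is made up and the patch linked above
is structured quite differently. Every distinct distance becomes a
level, and a node's span at a level is all nodes within that distance;
the overlapping spans at the middle level are exactly the part that
refuses to map onto non-overlapping trees.

/*
 * Rough sketch of building per-level spans from node_distance().
 * The 4-node ring distance table is an assumption for illustration;
 * the referenced patch is structured quite differently.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_NODES	4

static const int node_distance[NR_NODES][NR_NODES] = {
	{ 10, 20, 30, 20 },
	{ 20, 10, 20, 30 },
	{ 30, 20, 10, 20 },
	{ 20, 30, 20, 10 },
};

int main(void)
{
	int levels[NR_NODES * NR_NODES];
	int nr_levels = 0;
	int i, j, l;

	/* Every distinct distance becomes a candidate domain level. */
	for (i = 0; i < NR_NODES; i++) {
		for (j = 0; j < NR_NODES; j++) {
			int d = node_distance[i][j];
			bool seen = false;

			for (l = 0; l < nr_levels; l++)
				if (levels[l] == d)
					seen = true;
			if (!seen)
				levels[nr_levels++] = d;
		}
	}

	/* Sort the levels ascending (tiny insertion sort). */
	for (i = 1; i < nr_levels; i++) {
		int d = levels[i];

		for (j = i - 1; j >= 0 && levels[j] > d; j--)
			levels[j + 1] = levels[j];
		levels[j + 1] = d;
	}

	/*
	 * A node's span at a given level is every node within that
	 * distance.  Note how the spans of neighbouring nodes overlap
	 * at the middle level, so they don't form a single tree.
	 */
	for (l = 0; l < nr_levels; l++) {
		printf("level %d (distance <= %d):\n", l, levels[l]);
		for (i = 0; i < NR_NODES; i++) {
			printf("  node %d spans:", i);
			for (j = 0; j < NR_NODES; j++)
				if (node_distance[i][j] <= levels[l])
					printf(" %d", j);
			printf("\n");
		}
	}
	return 0;
}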