From: Nick Piggin <nickpiggin@yahoo.com.au>
To: Nathan Lynch <nathanl@austin.ibm.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>,
linux-kernel <linux-kernel@vger.kernel.org>
Subject: [PATCH 1/3] sched: trivial changes
Date: Wed, 08 Sep 2004 22:50:03 +1000
Message-ID: <413EFFFB.5050902@yahoo.com.au>
[-- Attachment #1: Type: text/plain, Size: 51 bytes --]
Couple of trivial changes before the hotplug stuff.
[-- Attachment #2: sched-trivial.patch --]
[-- Type: text/x-patch, Size: 2070 bytes --]
Make a definition static and slightly sanitize ifdefs.
Signed-off-by: Nick Piggin <nickpiggin@yahoo.com.au>
---
linux-2.6-npiggin/kernel/sched.c | 16 ++++++++--------
1 files changed, 8 insertions(+), 8 deletions(-)
diff -puN kernel/sched.c~sched-trivial kernel/sched.c
--- linux-2.6/kernel/sched.c~sched-trivial 2004-09-08 21:01:53.000000000 +1000
+++ linux-2.6-npiggin/kernel/sched.c 2004-09-08 22:39:38.000000000 +1000
@@ -247,9 +247,7 @@ struct sched_group {
/*
* CPU power of this group, SCHED_LOAD_SCALE being max power for a
- * single CPU. This should be read only (except for setup). Although
- * it will need to be written to at cpu hot(un)plug time, perhaps the
- * cpucontrol semaphore will provide enough exclusion?
+ * single CPU. This is read only (except for setup, hotplug CPU).
*/
unsigned long cpu_power;
};
@@ -4053,7 +4051,8 @@ static void cpu_attach_domain(struct sch
* in arch code. That defines the number of nearby nodes in a node's top
* level scheduling domain.
*/
-#if defined(CONFIG_NUMA) && defined(SD_NODES_PER_DOMAIN)
+#ifdef CONFIG_NUMA
+#ifdef SD_NODES_PER_DOMAIN
/**
* find_next_best_node - find the next node to include in a sched_domain
* @node: node whose sched_domain we're building
@@ -4100,7 +4099,7 @@ static int __init find_next_best_node(in
* should be one that prevents unnecessary balancing, but also spreads tasks
* out optimally.
*/
-cpumask_t __init sched_domain_node_span(int node)
+static cpumask_t __init sched_domain_node_span(int node)
{
int i;
cpumask_t span;
@@ -4119,12 +4118,13 @@ cpumask_t __init sched_domain_node_span(
return span;
}
-#else /* CONFIG_NUMA && SD_NODES_PER_DOMAIN */
-cpumask_t __init sched_domain_node_span(int node)
+#else /* SD_NODES_PER_DOMAIN */
+static cpumask_t __init sched_domain_node_span(int node)
{
return cpu_possible_map;
}
-#endif /* CONFIG_NUMA && SD_NODES_PER_DOMAIN */
+#endif /* SD_NODES_PER_DOMAIN */
+#endif /* CONFIG_NUMA */
#ifdef CONFIG_SCHED_SMT
static DEFINE_PER_CPU(struct sched_domain, cpu_domains);
_