public inbox for linux-kernel@vger.kernel.org
* [PATCH] [sched] Move cpu masks from kernel/sched.c into kernel/cpu.c
@ 2008-05-27 22:06 Max Krasnyansky
  2008-05-27 22:06 ` [PATCH] [sched] Fixed CPU hotplug and sched domain handling Max Krasnyansky
  2008-05-29  4:40 ` [PATCH] [sched] Move cpu masks from kernel/sched.c into kernel/cpu.c Paul Jackson
  0 siblings, 2 replies; 8+ messages in thread
From: Max Krasnyansky @ 2008-05-27 22:06 UTC (permalink / raw)
  To: mingo; +Cc: pj, a.p.zijlstra, linux-kernel, menage, rostedt, Max Krasnyansky

kernel/cpu.c is a more logical place for these cpu masks, since they do not
have much to do with the scheduler these days.

Signed-off-by: Max Krasnyansky <maxk@qualcomm.com>
---
 kernel/Makefile |    4 ++--
 kernel/cpu.c    |   24 ++++++++++++++++++++++++
 kernel/sched.c  |   18 ------------------
 3 files changed, 26 insertions(+), 20 deletions(-)

diff --git a/kernel/Makefile b/kernel/Makefile
index 6c584c5..bb8da0a 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -3,7 +3,7 @@
 #
 
 obj-y     = sched.o fork.o exec_domain.o panic.o printk.o profile.o \
-	    exit.o itimer.o time.o softirq.o resource.o \
+	    cpu.o exit.o itimer.o time.o softirq.o resource.o \
 	    sysctl.o capability.o ptrace.o timer.o user.o \
 	    signal.o sys.o kmod.o workqueue.o pid.o \
 	    rcupdate.o extable.o params.o posix-timers.o \
@@ -27,7 +27,7 @@ obj-$(CONFIG_RT_MUTEXES) += rtmutex.o
 obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o
 obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o
 obj-$(CONFIG_GENERIC_ISA_DMA) += dma.o
-obj-$(CONFIG_SMP) += cpu.o spinlock.o
+obj-$(CONFIG_SMP) += spinlock.o
 obj-$(CONFIG_DEBUG_SPINLOCK) += spinlock.o
 obj-$(CONFIG_PROVE_LOCKING) += spinlock.o
 obj-$(CONFIG_UID16) += uid16.o
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 2eff3f6..6aa48cf 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -15,6 +15,28 @@
 #include <linux/stop_machine.h>
 #include <linux/mutex.h>
 
+/*
+ * Represents all cpu's present in the system
+ * In systems capable of hotplug, this map could dynamically grow
+ * as new cpu's are detected in the system via any platform specific
+ * method, such as ACPI for e.g.
+ */
+cpumask_t cpu_present_map __read_mostly;
+EXPORT_SYMBOL(cpu_present_map);
+
+#ifndef CONFIG_SMP
+
+/*
+ * Represents all cpu's that are currently online.
+ */
+cpumask_t cpu_online_map __read_mostly = CPU_MASK_ALL;
+EXPORT_SYMBOL(cpu_online_map);
+
+cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL;
+EXPORT_SYMBOL(cpu_possible_map);
+
+#else /* CONFIG_SMP */
+
 /* Serializes the updates to cpu_online_map, cpu_present_map */
 static DEFINE_MUTEX(cpu_add_remove_lock);
 
@@ -413,3 +435,5 @@ out:
 	cpu_maps_update_done();
 }
 #endif /* CONFIG_PM_SLEEP_SMP */
+
+#endif /* CONFIG_SMP */
diff --git a/kernel/sched.c b/kernel/sched.c
index 8dcdec6..9694570 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4850,24 +4850,6 @@ asmlinkage long sys_sched_setaffinity(pid_t pid, unsigned int len,
 	return sched_setaffinity(pid, new_mask);
 }
 
-/*
- * Represents all cpu's present in the system
- * In systems capable of hotplug, this map could dynamically grow
- * as new cpu's are detected in the system via any platform specific
- * method, such as ACPI for e.g.
- */
-
-cpumask_t cpu_present_map __read_mostly;
-EXPORT_SYMBOL(cpu_present_map);
-
-#ifndef CONFIG_SMP
-cpumask_t cpu_online_map __read_mostly = CPU_MASK_ALL;
-EXPORT_SYMBOL(cpu_online_map);
-
-cpumask_t cpu_possible_map __read_mostly = CPU_MASK_ALL;
-EXPORT_SYMBOL(cpu_possible_map);
-#endif
-
 long sched_getaffinity(pid_t pid, cpumask_t *mask)
 {
 	struct task_struct *p;
-- 
1.5.4.5



* [PATCH] [sched] Fixed CPU hotplug and sched domain handling
  2008-05-27 22:06 [PATCH] [sched] Move cpu masks from kernel/sched.c into kernel/cpu.c Max Krasnyansky
@ 2008-05-27 22:06 ` Max Krasnyansky
  2008-05-27 22:06   ` [PATCH] [sched] Give cpusets exclusive control over sched domains (ie remove cpu_isolated_map) Max Krasnyansky
  2008-05-27 22:31   ` [PATCH] [sched] Fixed CPU hotplug and sched domain handling Max Krasnyanskiy
  2008-05-29  4:40 ` [PATCH] [sched] Move cpu masks from kernel/sched.c into kernel/cpu.c Paul Jackson
  1 sibling, 2 replies; 8+ messages in thread
From: Max Krasnyansky @ 2008-05-27 22:06 UTC (permalink / raw)
  To: mingo; +Cc: pj, a.p.zijlstra, linux-kernel, menage, rostedt, Max Krasnyansky

The first issue is that we are leaking doms_cur. It is allocated in
arch_init_sched_domains(), which is called for every hotplug event, so we
keep reallocating doms_cur without ever freeing it. This patch introduces
a free_sched_domains() function that cleans things up.

The second issue is that sched domains created by cpusets are completely
destroyed by CPU hotplug events. On every hotplug event the scheduler
attaches all CPUs to the NULL domain and then puts them all back into a
single default domain, thereby destroying the domains created by cpusets
via partition_sched_domains(). The solution is simple: when cpusets are
enabled, the scheduler should not create the default domain and should
instead let cpusets do it, which is exactly what this patch does.

Signed-off-by: Max Krasnyansky <maxk@qualcomm.com>
---
 kernel/cpuset.c |    6 ++++++
 kernel/sched.c  |   22 ++++++++++++++++++++++
 2 files changed, 28 insertions(+), 0 deletions(-)

diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index a1b61f4..29c6304 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -1789,6 +1789,12 @@ static void common_cpu_mem_hotplug_unplug(void)
 	top_cpuset.mems_allowed = node_states[N_HIGH_MEMORY];
 	scan_for_empty_cpusets(&top_cpuset);
 
+	/*
+	 * Scheduler destroys domains on hotplug events.
+	 * Rebuild them based on the current settings.
+	 */
+	rebuild_sched_domains();
+
 	cgroup_unlock();
 }
 
diff --git a/kernel/sched.c b/kernel/sched.c
index 9694570..5ebf6a7 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6837,6 +6837,18 @@ void __attribute__((weak)) arch_update_cpu_topology(void)
 }
 
 /*
+ * Free current domain masks.
+ * Called after all cpus are attached to NULL domain.
+ */
+static void free_sched_domains(void)
+{
+	ndoms_cur = 0;
+	if (doms_cur != &fallback_doms)
+		kfree(doms_cur);
+	doms_cur = &fallback_doms;
+}
+
+/*
  * Set up scheduler domains and groups. Callers must hold the hotplug lock.
  * For now this just excludes isolated cpus, but could be used to
  * exclude other special cases in the future.
@@ -6956,6 +6968,7 @@ int arch_reinit_sched_domains(void)
 
 	get_online_cpus();
 	detach_destroy_domains(&cpu_online_map);
+	free_sched_domains();
 	err = arch_init_sched_domains(&cpu_online_map);
 	put_online_cpus();
 
@@ -7040,6 +7053,7 @@ static int update_sched_domains(struct notifier_block *nfb,
 	case CPU_DOWN_PREPARE:
 	case CPU_DOWN_PREPARE_FROZEN:
 		detach_destroy_domains(&cpu_online_map);
+		free_sched_domains();
 		return NOTIFY_OK;
 
 	case CPU_UP_CANCELED:
@@ -7058,8 +7072,16 @@ static int update_sched_domains(struct notifier_block *nfb,
 		return NOTIFY_DONE;
 	}
 
+#ifndef CONFIG_CPUSETS
+	/* 
+	 * Create default domain partitioning if cpusets are disabled.
+	 * Otherwise we let cpusets rebuild the domains based on the 
+	 * current setup.
+	 */
+
 	/* The hotplug lock is already held by cpu_up/cpu_down */
 	arch_init_sched_domains(&cpu_online_map);
+#endif
 
 	return NOTIFY_OK;
 }
-- 
1.5.4.5



* [PATCH] [sched] Give cpusets exclusive control over sched domains (ie remove cpu_isolated_map)
  2008-05-27 22:06 ` [PATCH] [sched] Fixed CPU hotplug and sched domain handling Max Krasnyansky
@ 2008-05-27 22:06   ` Max Krasnyansky
  2008-05-29  5:30     ` Paul Jackson
  2008-05-27 22:31   ` [PATCH] [sched] Fixed CPU hotplug and sched domain handling Max Krasnyanskiy
  1 sibling, 1 reply; 8+ messages in thread
From: Max Krasnyansky @ 2008-05-27 22:06 UTC (permalink / raw)
  To: mingo; +Cc: pj, a.p.zijlstra, linux-kernel, menage, rostedt, Max Krasnyansky

Ingo and Peter have mentioned several times that cpu_isolated_map is a horrible
hack, so let's get rid of it.

cpu_isolated_map controls which CPUs are subject to scheduler load balancing:
CPUs set in that map are put into the NULL scheduler domain and excluded from
load balancing. The cpusets subsystem provides the same functionality in a much
more flexible and dynamic way; scheduler load balancing can be disabled or
enabled either system-wide or per cpuset.

This patch gives cpusets exclusive control over the scheduler domains.
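
As a hedged illustration of the per-cpuset replacement for isolcpus= (the
mount point and cpuset.* file names are assumptions based on the setup shown
later in this thread and may differ by kernel version):

```shell
# Sketch only: assumes cpusets are mounted at /dev/cgroup.
cd /dev/cgroup

# Turn off load balancing in the root set, then confine balancing to a
# child set covering cpus 0-3; cpus 4-7 are left unbalanced ("isolated").
echo 0 > cpuset.sched_load_balance
mkdir boot
echo 0-3 > boot/cpuset.cpus
echo 1   > boot/cpuset.sched_load_balance
```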

Signed-off-by: Max Krasnyansky <maxk@qualcomm.com>
---
 Documentation/cpusets.txt |    3 +--
 kernel/sched.c            |   34 +++++-----------------------------
 2 files changed, 6 insertions(+), 31 deletions(-)

diff --git a/Documentation/cpusets.txt b/Documentation/cpusets.txt
index ad2bb3b..d8b269a 100644
--- a/Documentation/cpusets.txt
+++ b/Documentation/cpusets.txt
@@ -382,8 +382,7 @@ Put simply, it costs less to balance between two smaller sched domains
 than one big one, but doing so means that overloads in one of the
 two domains won't be load balanced to the other one.
 
-By default, there is one sched domain covering all CPUs, except those
-marked isolated using the kernel boot time "isolcpus=" argument.
+By default, there is one sched domain covering all CPUs.
 
 This default load balancing across all CPUs is not well suited for
 the following two situations:
diff --git a/kernel/sched.c b/kernel/sched.c
index 5ebf6a7..e2eb2be 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -6206,24 +6206,6 @@ cpu_attach_domain(struct sched_domain *sd, struct root_domain *rd, int cpu)
 	rcu_assign_pointer(rq->sd, sd);
 }
 
-/* cpus with isolated domains */
-static cpumask_t cpu_isolated_map = CPU_MASK_NONE;
-
-/* Setup the mask of cpus configured for isolated domains */
-static int __init isolated_cpu_setup(char *str)
-{
-	int ints[NR_CPUS], i;
-
-	str = get_options(str, ARRAY_SIZE(ints), ints);
-	cpus_clear(cpu_isolated_map);
-	for (i = 1; i <= ints[0]; i++)
-		if (ints[i] < NR_CPUS)
-			cpu_set(ints[i], cpu_isolated_map);
-	return 1;
-}
-
-__setup("isolcpus=", isolated_cpu_setup);
-
 /*
  * init_sched_build_groups takes the cpumask we wish to span, and a pointer
  * to a function which identifies what group(along with sched group) a CPU
@@ -6850,8 +6832,6 @@ static void free_sched_domains(void)
 
 /*
  * Set up scheduler domains and groups. Callers must hold the hotplug lock.
- * For now this just excludes isolated cpus, but could be used to
- * exclude other special cases in the future.
  */
 static int arch_init_sched_domains(const cpumask_t *cpu_map)
 {
@@ -6862,7 +6842,7 @@ static int arch_init_sched_domains(const cpumask_t *cpu_map)
 	doms_cur = kmalloc(sizeof(cpumask_t), GFP_KERNEL);
 	if (!doms_cur)
 		doms_cur = &fallback_doms;
-	cpus_andnot(*doms_cur, *cpu_map, cpu_isolated_map);
+	*doms_cur = *cpu_map;
 	err = build_sched_domains(doms_cur);
 	register_sched_domain_sysctl();
 
@@ -6923,7 +6903,7 @@ void partition_sched_domains(int ndoms_new, cpumask_t *doms_new)
 	if (doms_new == NULL) {
 		ndoms_new = 1;
 		doms_new = &fallback_doms;
-		cpus_andnot(doms_new[0], cpu_online_map, cpu_isolated_map);
+		*doms_new = cpu_online_map;
 	}
 
 	/* Destroy deleted domains */
@@ -7088,19 +7068,15 @@ static int update_sched_domains(struct notifier_block *nfb,
 
 void __init sched_init_smp(void)
 {
-	cpumask_t non_isolated_cpus;
-
 	get_online_cpus();
 	arch_init_sched_domains(&cpu_online_map);
-	cpus_andnot(non_isolated_cpus, cpu_possible_map, cpu_isolated_map);
-	if (cpus_empty(non_isolated_cpus))
-		cpu_set(smp_processor_id(), non_isolated_cpus);
 	put_online_cpus();
+
 	/* XXX: Theoretical race here - CPU may be hotplugged now */
 	hotcpu_notifier(update_sched_domains, 0);
 
-	/* Move init over to a non-isolated CPU */
-	if (set_cpus_allowed(current, non_isolated_cpus) < 0)
+	/* Update init's affinity mask */
+	if (set_cpus_allowed(current, cpu_online_map) < 0)
 		BUG();
 	sched_init_granularity();
 }
-- 
1.5.4.5



* Re: [PATCH] [sched] Fixed CPU hotplug and sched domain handling
  2008-05-27 22:06 ` [PATCH] [sched] Fixed CPU hotplug and sched domain handling Max Krasnyansky
  2008-05-27 22:06   ` [PATCH] [sched] Give cpusets exclusive control over sched domains (ie remove cpu_isolated_map) Max Krasnyansky
@ 2008-05-27 22:31   ` Max Krasnyanskiy
  1 sibling, 0 replies; 8+ messages in thread
From: Max Krasnyanskiy @ 2008-05-27 22:31 UTC (permalink / raw)
  To: mingo; +Cc: pj, a.p.zijlstra, linux-kernel, menage, rostedt

Max Krasnyansky wrote:
> The first issue is that we are leaking doms_cur. It is allocated in
> arch_init_sched_domains(), which is called for every hotplug event, so we
> keep reallocating doms_cur without ever freeing it. This patch introduces
> a free_sched_domains() function that cleans things up.
> 
> The second issue is that sched domains created by cpusets are completely
> destroyed by CPU hotplug events. On every hotplug event the scheduler
> attaches all CPUs to the NULL domain and then puts them all back into a
> single default domain, thereby destroying the domains created by cpusets
> via partition_sched_domains(). The solution is simple: when cpusets are
> enabled, the scheduler should not create the default domain and should
> instead let cpusets do it, which is exactly what this patch does.

Here is more info on this, with debug logs.

Here is the initial cpuset setup:
cpus 0-3 balanced, cpus 4-7 not balanced

cd /dev/cgroup
echo 0 > cpuset.sched_load_balance
mkdir boot
echo 0-3 > boot/cpuset.cpus
echo 1   > boot/cpuset.sched_load_balance
...

-----
CPU0 attaching NULL sched-domain.
CPU1 attaching NULL sched-domain.
CPU2 attaching NULL sched-domain.
CPU3 attaching NULL sched-domain.
CPU4 attaching NULL sched-domain.
CPU5 attaching NULL sched-domain.
CPU6 attaching NULL sched-domain.
CPU7 attaching NULL sched-domain.
CPU0 attaching sched-domain:
  domain 0: span 0f
   groups: 01 02 04 08
CPU1 attaching sched-domain:
  domain 0: span 0f
   groups: 02 04 08 01
CPU2 attaching sched-domain:
  domain 0: span 0f
   groups: 04 08 01 02
CPU3 attaching sched-domain:
  domain 0: span 0f
   groups: 08 01 02 04
-----

Looks good so far.
Now let's bring cpu7 offline (echo 0 > /sys/devices/system/cpu/cpu7/online)

-----
CPU0 attaching NULL sched-domain.
CPU1 attaching NULL sched-domain.
CPU2 attaching NULL sched-domain.
CPU3 attaching NULL sched-domain.
CPU4 attaching NULL sched-domain.
CPU5 attaching NULL sched-domain.
CPU6 attaching NULL sched-domain.
CPU7 attaching NULL sched-domain.
CPU 7 is now offline
CPU0 attaching sched-domain:
  domain 0: span 11
   groups: 01 10
   domain 1: span 7f
    groups: 11 22 44 08
CPU1 attaching sched-domain:
  domain 0: span 22
   groups: 02 20
   domain 1: span 7f
    groups: 22 44 08 11
CPU2 attaching sched-domain:
  domain 0: span 44
   groups: 04 40
   domain 1: span 7f
    groups: 44 08 11 22
CPU3 attaching sched-domain:
  domain 0: span 7f
   groups: 08 11 22 44
CPU4 attaching sched-domain:
  domain 0: span 11
   groups: 10 01
   domain 1: span 7f
    groups: 11 22 44 08
CPU5 attaching sched-domain:
  domain 0: span 22
   groups: 20 02
   domain 1: span 7f
    groups: 22 44 08 11
CPU6 attaching sched-domain:
  domain 0: span 44
   groups: 40 04
   domain 1: span 7f
    groups: 44 08 11 22
----

All cpus are now in a single domain.
The same thing happens when cpu7 comes back online.

----
CPU0 attaching NULL sched-domain.
CPU1 attaching NULL sched-domain.
CPU2 attaching NULL sched-domain.
CPU3 attaching NULL sched-domain.
CPU4 attaching NULL sched-domain.
CPU5 attaching NULL sched-domain.
CPU6 attaching NULL sched-domain.
Booting processor 7/8 APIC 0x7
Initializing CPU#7
Calibrating delay using timer specific routine.. 4655.39 BogoMIPS (lpj=9310785)
CPU: L1 I cache: 32K, L1 D cache: 32K
CPU: L2 cache: 6144K
CPU: Physical Processor ID: 1
CPU: Processor Core ID: 3
Intel(R) Xeon(R) CPU           E5410  @ 2.33GHz stepping 06
checking TSC synchronization [CPU#3 -> CPU#7]: passed.
CPU0 attaching sched-domain:
  domain 0: span 11
   groups: 01 10
   domain 1: span ff
    groups: 11 22 44 88
CPU1 attaching sched-domain:
  domain 0: span 22
   groups: 02 20
   domain 1: span ff
    groups: 22 44 88 11
CPU2 attaching sched-domain:
  domain 0: span 44
   groups: 04 40
   domain 1: span ff
    groups: 44 88 11 22
CPU3 attaching sched-domain:
  domain 0: span 88
   groups: 08 80
   domain 1: span ff
    groups: 88 11 22 44
CPU4 attaching sched-domain:
  domain 0: span 11
   groups: 10 01
   domain 1: span ff
    groups: 11 22 44 88
CPU5 attaching sched-domain:
  domain 0: span 22
   groups: 20 02
   domain 1: span ff
    groups: 22 44 88 11
CPU6 attaching sched-domain:
  domain 0: span 44
   groups: 40 04
   domain 1: span ff
    groups: 44 88 11 22
CPU7 attaching sched-domain:
  domain 0: span 88
   groups: 80 08
   domain 1: span ff
    groups: 88 11 22 44
----

As if cpusets did not exist :).
With the patch we now do the right thing when cpus go offline and come back online.

----
CPU0 attaching NULL sched-domain.
CPU1 attaching NULL sched-domain.
CPU2 attaching NULL sched-domain.
CPU3 attaching NULL sched-domain.
CPU4 attaching NULL sched-domain.
CPU5 attaching NULL sched-domain.
CPU6 attaching NULL sched-domain.
CPU7 attaching NULL sched-domain.
CPU0 attaching sched-domain:
  domain 0: span 0f
   groups: 01 02 04 08
CPU1 attaching sched-domain:
  domain 0: span 0f
   groups: 02 04 08 01
CPU2 attaching sched-domain:
  domain 0: span 0f
   groups: 04 08 01 02
CPU3 attaching sched-domain:
  domain 0: span 0f
   groups: 08 01 02 04

CPU0 attaching NULL sched-domain.
CPU1 attaching NULL sched-domain.
CPU2 attaching NULL sched-domain.
CPU3 attaching NULL sched-domain.
CPU4 attaching NULL sched-domain.
CPU5 attaching NULL sched-domain.
CPU6 attaching NULL sched-domain.
CPU7 attaching NULL sched-domain.
CPU0 attaching sched-domain:
  domain 0: span 0f
   groups: 01 02 04 08
CPU1 attaching sched-domain:
  domain 0: span 0f
   groups: 02 04 08 01
CPU2 attaching sched-domain:
  domain 0: span 0f
   groups: 04 08 01 02
CPU3 attaching sched-domain:
  domain 0: span 0f
   groups: 08 01 02 04
CPU 7 is now offline

CPU0 attaching NULL sched-domain.
CPU1 attaching NULL sched-domain.
CPU2 attaching NULL sched-domain.
CPU3 attaching NULL sched-domain.
CPU4 attaching NULL sched-domain.
CPU5 attaching NULL sched-domain.
CPU6 attaching NULL sched-domain.
CPU0 attaching sched-domain:
  domain 0: span 0f
   groups: 01 02 04 08
CPU1 attaching sched-domain:
  domain 0: span 0f
   groups: 02 04 08 01
CPU2 attaching sched-domain:
  domain 0: span 0f
   groups: 04 08 01 02
CPU3 attaching sched-domain:
  domain 0: span 0f
   groups: 08 01 02 04
Booting processor 7/8 APIC 0x7
Initializing CPU#7
Calibrating delay using timer specific routine.. 4655.37 BogoMIPS (lpj=9310749)
CPU: L1 I cache: 32K, L1 D cache: 32K
CPU: L2 cache: 6144K
CPU: Physical Processor ID: 1
CPU: Processor Core ID: 3
Intel(R) Xeon(R) CPU           E5410  @ 2.33GHz stepping 06
checking TSC synchronization [CPU#3 -> CPU#7]: passed.

Max


* Re: [PATCH] [sched] Move cpu masks from kernel/sched.c into kernel/cpu.c
  2008-05-27 22:06 [PATCH] [sched] Move cpu masks from kernel/sched.c into kernel/cpu.c Max Krasnyansky
  2008-05-27 22:06 ` [PATCH] [sched] Fixed CPU hotplug and sched domain handling Max Krasnyansky
@ 2008-05-29  4:40 ` Paul Jackson
  2008-05-29 16:30   ` Max Krasnyanskiy
  1 sibling, 1 reply; 8+ messages in thread
From: Paul Jackson @ 2008-05-29  4:40 UTC (permalink / raw)
  To: Max Krasnyansky; +Cc: mingo, a.p.zijlstra, linux-kernel, menage, rostedt, maxk

Max,

Does this patch mean that some 2K to 3K bytes of kernel text
will be added to non-SMP systems, due to kernel/cpu.o moving
from CONFIG_SMP only to always being included?

If so, then I'd think that the folks who obsess with small
non-SMP kernels might not like this change.

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@sgi.com> 1.940.382.4214


* Re: [PATCH] [sched] Give cpusets exclusive control over sched domains (ie remove cpu_isolated_map)
  2008-05-27 22:06   ` [PATCH] [sched] Give cpusets exclusive control over sched domains (ie remove cpu_isolated_map) Max Krasnyansky
@ 2008-05-29  5:30     ` Paul Jackson
  2008-05-29 16:36       ` Max Krasnyanskiy
  0 siblings, 1 reply; 8+ messages in thread
From: Paul Jackson @ 2008-05-29  5:30 UTC (permalink / raw)
  To: Max Krasnyansky; +Cc: mingo, a.p.zijlstra, linux-kernel, menage, rostedt, maxk

Max wrote:
> -marked isolated using the kernel boot time "isolcpus=" argument.

Yeah ... a hack ... but I suspect some folks use it.

I'm reluctant to discard features visible to users, unless
they are getting in the way of more serious stuff.

I'd have figured that this hack was not all that much of
a pain to the kernel code.

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@sgi.com> 1.940.382.4214


* Re: [PATCH] [sched] Move cpu masks from kernel/sched.c into kernel/cpu.c
  2008-05-29  4:40 ` [PATCH] [sched] Move cpu masks from kernel/sched.c into kernel/cpu.c Paul Jackson
@ 2008-05-29 16:30   ` Max Krasnyanskiy
  0 siblings, 0 replies; 8+ messages in thread
From: Max Krasnyanskiy @ 2008-05-29 16:30 UTC (permalink / raw)
  To: Paul Jackson; +Cc: mingo, a.p.zijlstra, linux-kernel, menage, rostedt

Paul Jackson wrote:
> Max,
> 
> Does this patch mean that some 2K to 3K bytes of kernel text
> will be added to non-SMP systems, due to kernel/cpu.o moving
> from CONFIG_SMP only to always being included?
> 
> If so, then I'd think that the folks who obsess with small
> non-SMP kernels might not like this change.
> 

non-SMP? Do people still build those :)?
Just kidding.

Hmm, why would it be any different?
I just built a UP kernel with and without the patch:

size vmlinux

before
    text	   data	    bss	    dec	    hex	filename
3313797	 307060	 310352	3931209	 3bfc49	vmlinux

after
    text	   data	    bss	    dec	    hex	filename
3313797	 307060	 310352	3931209	 3bfc49	vmlinux

I think we're good here.

Max


* Re: [PATCH] [sched] Give cpusets exclusive control over sched domains (ie remove cpu_isolated_map)
  2008-05-29  5:30     ` Paul Jackson
@ 2008-05-29 16:36       ` Max Krasnyanskiy
  0 siblings, 0 replies; 8+ messages in thread
From: Max Krasnyanskiy @ 2008-05-29 16:36 UTC (permalink / raw)
  To: Paul Jackson; +Cc: mingo, a.p.zijlstra, linux-kernel, menage, rostedt

Paul Jackson wrote:
> Max wrote:
>> -marked isolated using the kernel boot time "isolcpus=" argument.
> 
> Yeah ... a hack ... but I suspect some folks use it.
> 
> I'm reluctant to discard features visible to users, unless
> they are getting in the way of more serious stuff.
> 
> I'd have figured that this hack was not all that much of
> a pain to the kernel code.
> 

I bet it will be tempting to extend it for other uses, just as it was
tempting for me :). It looks just like any other cpu_*_map.

We could emulate the isolcpus= boot option (the only user-visible interface)
via cpusets, I suppose, but I'd rather not. Since we do not plan on
supporting it, I'd say let's get rid of it.

Max





