* [patch] sched: remove scheduler locking from arch code
@ 2005-03-26 13:26 Nick Piggin
From: Nick Piggin @ 2005-03-26 13:26 UTC (permalink / raw)
To: linux-arch, Ingo Molnar, David S. Miller, Russell King
[-- Attachment #1: Type: text/plain, Size: 499 bytes --]
Hi,
Last time I sent in this patch, David raised the valid issue that
using bitops instead of the "switch_lock" could actually be slower
than the lock itself. So instead I'm using a simple int for the
flag now, which surely has to be quicker than the spinlock.
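To illustrate (a sketch only, not part of the patch; the helper names
here are made up, and the real flag and its ordering are in the diff
below):

/*
 * set_bit() is an atomic read-modify-write - bus-locked on x86 -
 * while storing to an int is a single plain store.  The ordering the
 * scheduler needs is supplied separately by an explicit smp_wmb().
 */
static inline void mark_oncpu_store(task_t *p)
{
	p->oncpu = 1;				/* plain store, no bus lock */
}

static inline void mark_oncpu_bitop(task_t *p)
{
	set_bit(0, &p->thread_info->flags);	/* locked RMW, slower */
}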
I've had this around for a while, but it isn't too widely tested
outside the usual suspects. Seems to work OK on ia64 though, which
uses unlocked context switches. The only other architecture with its own
locking scheme is ARM, which isn't tested.
Any comments?
[-- Attachment #2: task-running-flag.patch --]
[-- Type: text/plain, Size: 15801 bytes --]
Instead of requiring architecture code to interact with the scheduler's
locking implementation, provide a couple of defines the architecture can
use to request runqueue-unlocked context switches, and to ask for
interrupts to be enabled over the context switch.
Also replaces the "switch_lock" used by these architectures with an oncpu
flag (note: a plain int, not a potentially slow bitflag). This eliminates
one bus-locked memory operation per context switch, and simplifies the
task_running function.
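The arch-side opt-in then reduces to a define or two in the arch's
system.h (sketch; the real hunks are in the diff below):

/* Run the context switch with the runqueue lock dropped: */
#define __ARCH_WANT_UNLOCKED_CTXSW

/*
 * Additionally run it with interrupts enabled (ARM).  Via the
 * linux/sched.h hunk below, this implies __ARCH_WANT_UNLOCKED_CTXSW.
 */
#define __ARCH_WANT_INTERRUPTS_ON_CTXSW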
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c 2005-03-26 19:24:10.000000000 +1100
+++ linux-2.6/kernel/sched.c 2005-03-27 00:25:49.000000000 +1100
@@ -269,14 +269,71 @@ static DEFINE_PER_CPU(struct runqueue, r
#define task_rq(p) cpu_rq(task_cpu(p))
#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
-/*
- * Default context-switch locking:
- */
#ifndef prepare_arch_switch
-# define prepare_arch_switch(rq, next) do { } while (0)
-# define finish_arch_switch(rq, next) spin_unlock_irq(&(rq)->lock)
-# define task_running(rq, p) ((rq)->curr == (p))
+# define prepare_arch_switch(next) do { } while (0)
+#endif
+#ifndef finish_arch_switch
+# define finish_arch_switch(prev) do { } while (0)
+#endif
+
+#ifndef __ARCH_WANT_UNLOCKED_CTXSW
+static inline int task_running(runqueue_t *rq, task_t *p)
+{
+ return rq->curr == p;
+}
+
+static inline void prepare_lock_switch(runqueue_t *rq, task_t *next)
+{
+}
+
+static inline void finish_lock_switch(runqueue_t *rq, task_t *prev)
+{
+ spin_unlock_irq(&rq->lock);
+}
+
+#else /* __ARCH_WANT_UNLOCKED_CTXSW */
+static inline int task_running(runqueue_t *rq, task_t *p)
+{
+#ifdef CONFIG_SMP
+ return p->oncpu;
+#else
+ return rq->curr == p;
+#endif
+}
+
+static inline void prepare_lock_switch(runqueue_t *rq, task_t *next)
+{
+#ifdef CONFIG_SMP
+ /*
+ * We can optimise this out completely for !SMP, because the
+ * SMP rebalancing from interrupt is the only thing that cares
+ * here.
+ */
+ next->oncpu = 1;
+#endif
+#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
+ spin_unlock_irq(&rq->lock);
+#else
+ spin_unlock(&rq->lock);
#endif
+}
+
+static inline void finish_lock_switch(runqueue_t *rq, task_t *prev)
+{
+#ifdef CONFIG_SMP
+ /*
+ * After ->oncpu is cleared, the task can be moved to a different CPU.
+ * We must ensure this doesn't happen until the switch is completely
+ * finished.
+ */
+ smp_wmb();
+ prev->oncpu = 0;
+#endif
+#ifndef __ARCH_WANT_INTERRUPTS_ON_CTXSW
+ local_irq_enable();
+#endif
+}
+#endif /* __ARCH_WANT_UNLOCKED_CTXSW */
/*
* task_rq_lock - lock the runqueue a given task resides on and disable
@@ -1197,17 +1254,14 @@ void fastcall sched_fork(task_t *p)
p->state = TASK_RUNNING;
INIT_LIST_HEAD(&p->run_list);
p->array = NULL;
- spin_lock_init(&p->switch_lock);
#ifdef CONFIG_SCHEDSTATS
memset(&p->sched_info, 0, sizeof(p->sched_info));
#endif
+#if defined(CONFIG_SMP) && defined(__ARCH_WANT_UNLOCKED_CTXSW)
+ p->oncpu = 0;
+#endif
#ifdef CONFIG_PREEMPT
- /*
- * During context-switch we hold precisely one spinlock, which
- * schedule_tail drops. (in the common case it's this_rq()->lock,
- * but it also can be p->switch_lock.) So we compensate with a count
- * of 1. Also, we want to start with kernel preemption disabled.
- */
+ /* Want to start with kernel preemption disabled. */
p->thread_info->preempt_count = 1;
#endif
/*
@@ -1389,22 +1443,40 @@ void fastcall sched_exit(task_t * p)
}
/**
+ * prepare_task_switch - prepare to switch tasks
+ * @rq: the runqueue preparing to switch
+ * @next: the task we are going to switch to.
+ *
+ * This is called with the rq lock held and interrupts off. It must
+ * be paired with a subsequent finish_task_switch after the context
+ * switch.
+ *
+ * prepare_task_switch sets up locking and calls architecture specific
+ * hooks.
+ */
+static inline void prepare_task_switch(runqueue_t *rq, task_t *next)
+{
+ prepare_lock_switch(rq, next);
+ prepare_arch_switch(next);
+}
+
+/**
* finish_task_switch - clean up after a task-switch
* @prev: the thread we just switched away from.
*
- * We enter this with the runqueue still locked, and finish_arch_switch()
- * will unlock it along with doing any other architecture-specific cleanup
- * actions.
+ * finish_task_switch must be called after the context switch, paired
+ * with a prepare_task_switch call before the context switch.
+ * finish_task_switch will reconcile locking set up by prepare_task_switch,
+ * and do any other architecture-specific cleanup actions.
*
* Note that we may have delayed dropping an mm in context_switch(). If
* so, we finish that here outside of the runqueue lock. (Doing it
* with the lock held can cause deadlocks; see schedule() for
* details.)
*/
-static inline void finish_task_switch(task_t *prev)
+static inline void finish_task_switch(runqueue_t *rq, task_t *prev)
__releases(rq->lock)
{
- runqueue_t *rq = this_rq();
struct mm_struct *mm = rq->prev_mm;
unsigned long prev_task_flags;
@@ -1422,7 +1494,8 @@ static inline void finish_task_switch(ta
* Manfred Spraul <manfred@colorfullife.com>
*/
prev_task_flags = prev->flags;
- finish_arch_switch(rq, prev);
+ finish_arch_switch(prev);
+ finish_lock_switch(rq, prev);
if (mm)
mmdrop(mm);
if (unlikely(prev_task_flags & PF_DEAD))
@@ -1436,8 +1509,12 @@ static inline void finish_task_switch(ta
asmlinkage void schedule_tail(task_t *prev)
__releases(rq->lock)
{
- finish_task_switch(prev);
-
+ runqueue_t *rq = this_rq();
+ finish_task_switch(rq, prev);
+#ifdef __ARCH_WANT_UNLOCKED_CTXSW
+ /* In this case, finish_task_switch does not reenable preemption */
+ preempt_enable();
+#endif
if (current->set_child_tid)
put_user(current->pid, current->set_child_tid);
}
@@ -2820,11 +2897,15 @@ switch_tasks:
rq->curr = next;
++*switch_count;
- prepare_arch_switch(rq, next);
+ prepare_task_switch(rq, next);
prev = context_switch(rq, prev, next);
barrier();
-
- finish_task_switch(prev);
+ /*
+ * this_rq must be evaluated again because prev may have moved
+ * CPUs since it called schedule(), thus the 'rq' on its stack
+ * frame will be invalid.
+ */
+ finish_task_switch(this_rq(), prev);
} else
spin_unlock_irq(&rq->lock);
@@ -4075,6 +4156,9 @@ void __devinit init_idle(task_t *idle, i
spin_lock_irqsave(&rq->lock, flags);
rq->curr = rq->idle = idle;
+#if defined(CONFIG_SMP) && defined(__ARCH_WANT_UNLOCKED_CTXSW)
+ idle->oncpu = 1;
+#endif
set_tsk_need_resched(idle);
spin_unlock_irqrestore(&rq->lock, flags);
Index: linux-2.6/include/linux/sched.h
===================================================================
--- linux-2.6.orig/include/linux/sched.h 2005-03-26 19:24:10.000000000 +1100
+++ linux-2.6/include/linux/sched.h 2005-03-26 23:55:46.000000000 +1100
@@ -357,6 +357,11 @@ struct signal_struct {
#endif
};
+/* Context switch must be unlocked if interrupts are to be enabled */
+#ifdef __ARCH_WANT_INTERRUPTS_ON_CTXSW
+# define __ARCH_WANT_UNLOCKED_CTXSW
+#endif
+
/*
* Bits in flags field of signal_struct.
*/
@@ -582,6 +587,9 @@ struct task_struct {
int lock_depth; /* Lock depth */
+#if defined(CONFIG_SMP) && defined(__ARCH_WANT_UNLOCKED_CTXSW)
+ int oncpu;
+#endif
int prio, static_prio;
struct list_head run_list;
prio_array_t *array;
@@ -700,8 +708,6 @@ struct task_struct {
spinlock_t alloc_lock;
/* Protection of proc_dentry: nesting proc_lock, dcache_lock, write_lock_irq(&tasklist_lock); */
spinlock_t proc_lock;
-/* context-switch lock */
- spinlock_t switch_lock;
/* journalling filesystem info */
void *journal_info;
Index: linux-2.6/include/linux/init_task.h
===================================================================
--- linux-2.6.orig/include/linux/init_task.h 2005-03-26 19:24:10.000000000 +1100
+++ linux-2.6/include/linux/init_task.h 2005-03-26 23:55:46.000000000 +1100
@@ -74,14 +74,13 @@ extern struct group_info init_groups;
.usage = ATOMIC_INIT(2), \
.flags = 0, \
.lock_depth = -1, \
- .prio = MAX_PRIO-20, \
- .static_prio = MAX_PRIO-20, \
+ .prio = MAX_PRIO-29, \
+ .static_prio = MAX_PRIO-29, \
.policy = SCHED_NORMAL, \
.cpus_allowed = CPU_MASK_ALL, \
.mm = NULL, \
.active_mm = &init_mm, \
.run_list = LIST_HEAD_INIT(tsk.run_list), \
- .time_slice = HZ, \
.tasks = LIST_HEAD_INIT(tsk.tasks), \
.ptrace_children= LIST_HEAD_INIT(tsk.ptrace_children), \
.ptrace_list = LIST_HEAD_INIT(tsk.ptrace_list), \
@@ -108,7 +107,6 @@ extern struct group_info init_groups;
.blocked = {{0}}, \
.alloc_lock = SPIN_LOCK_UNLOCKED, \
.proc_lock = SPIN_LOCK_UNLOCKED, \
- .switch_lock = SPIN_LOCK_UNLOCKED, \
.journal_info = NULL, \
.cpu_timers = INIT_CPU_TIMERS(tsk.cpu_timers), \
}
Index: linux-2.6/include/asm-ia64/system.h
===================================================================
--- linux-2.6.orig/include/asm-ia64/system.h 2005-03-26 19:24:10.000000000 +1100
+++ linux-2.6/include/asm-ia64/system.h 2005-03-26 23:55:46.000000000 +1100
@@ -183,8 +183,6 @@ do { \
#ifdef __KERNEL__
-#define prepare_to_switch() do { } while(0)
-
#ifdef CONFIG_IA32_SUPPORT
# define IS_IA32_PROCESS(regs) (ia64_psr(regs)->is != 0)
#else
@@ -274,13 +272,7 @@ extern void ia64_load_extra (struct task
* of that CPU which will not be released, because there we wait for the
* tasklist_lock to become available.
*/
-#define prepare_arch_switch(rq, next) \
-do { \
- spin_lock(&(next)->switch_lock); \
- spin_unlock(&(rq)->lock); \
-} while (0)
-#define finish_arch_switch(rq, prev) spin_unlock_irq(&(prev)->switch_lock)
-#define task_running(rq, p) ((rq)->curr == (p) || spin_is_locked(&(p)->switch_lock))
+#define __ARCH_WANT_UNLOCKED_CTXSW
#define ia64_platform_is(x) (strcmp(x, platform_name) == 0)
Index: linux-2.6/include/asm-mips/system.h
===================================================================
--- linux-2.6.orig/include/asm-mips/system.h 2005-03-26 19:24:10.000000000 +1100
+++ linux-2.6/include/asm-mips/system.h 2005-03-26 23:55:46.000000000 +1100
@@ -422,16 +422,10 @@ extern void __die_if_kernel(const char *
extern int stop_a_enabled;
/*
- * Taken from include/asm-ia64/system.h; prevents deadlock on SMP
+ * See include/asm-ia64/system.h; prevents deadlock on SMP
* systems.
*/
-#define prepare_arch_switch(rq, next) \
-do { \
- spin_lock(&(next)->switch_lock); \
- spin_unlock(&(rq)->lock); \
-} while (0)
-#define finish_arch_switch(rq, prev) spin_unlock_irq(&(prev)->switch_lock)
-#define task_running(rq, p) ((rq)->curr == (p) || spin_is_locked(&(p)->switch_lock))
+#define __ARCH_WANT_UNLOCKED_CTXSW
#define arch_align_stack(x) (x)
Index: linux-2.6/include/asm-s390/system.h
===================================================================
--- linux-2.6.orig/include/asm-s390/system.h 2005-03-26 19:24:10.000000000 +1100
+++ linux-2.6/include/asm-s390/system.h 2005-03-26 23:55:46.000000000 +1100
@@ -103,29 +103,18 @@ static inline void restore_access_regs(u
prev = __switch_to(prev,next); \
} while (0)
-#define prepare_arch_switch(rq, next) do { } while(0)
-#define task_running(rq, p) ((rq)->curr == (p))
-
#ifdef CONFIG_VIRT_CPU_ACCOUNTING
extern void account_user_vtime(struct task_struct *);
extern void account_system_vtime(struct task_struct *);
-
-#define finish_arch_switch(rq, prev) do { \
- set_fs(current->thread.mm_segment); \
- spin_unlock(&(rq)->lock); \
- account_system_vtime(prev); \
- local_irq_enable(); \
-} while (0)
-
#else
+#define account_system_vtime(prev) do { } while (0)
+#endif
#define finish_arch_switch(rq, prev) do { \
set_fs(current->thread.mm_segment); \
- spin_unlock_irq(&(rq)->lock); \
+ account_system_vtime(prev); \
} while (0)
-#endif
-
#define nop() __asm__ __volatile__ ("nop")
#define xchg(ptr,x) \
Index: linux-2.6/include/asm-sparc/system.h
===================================================================
--- linux-2.6.orig/include/asm-sparc/system.h 2005-03-26 19:24:10.000000000 +1100
+++ linux-2.6/include/asm-sparc/system.h 2005-03-26 23:55:46.000000000 +1100
@@ -101,7 +101,7 @@ extern void fpsave(unsigned long *fpregs
* SWITCH_ENTER and SWITH_DO_LAZY_FPU do not work yet (e.g. SMP does not work)
* XXX WTF is the above comment? Found in late teen 2.4.x.
*/
-#define prepare_arch_switch(rq, next) do { \
+#define prepare_arch_switch(next) do { \
__asm__ __volatile__( \
".globl\tflush_patch_switch\nflush_patch_switch:\n\t" \
"save %sp, -0x40, %sp; save %sp, -0x40, %sp; save %sp, -0x40, %sp\n\t" \
@@ -109,8 +109,6 @@ extern void fpsave(unsigned long *fpregs
"save %sp, -0x40, %sp\n\t" \
"restore; restore; restore; restore; restore; restore; restore"); \
} while(0)
-#define finish_arch_switch(rq, next) spin_unlock_irq(&(rq)->lock)
-#define task_running(rq, p) ((rq)->curr == (p))
/* Much care has gone into this code, do not touch it.
*
Index: linux-2.6/include/asm-sparc64/system.h
===================================================================
--- linux-2.6.orig/include/asm-sparc64/system.h 2005-03-26 19:24:10.000000000 +1100
+++ linux-2.6/include/asm-sparc64/system.h 2005-03-26 23:55:46.000000000 +1100
@@ -139,19 +139,13 @@ extern void __flushw_user(void);
#define flush_user_windows flushw_user
#define flush_register_windows flushw_all
-#define prepare_arch_switch(rq, next) \
-do { spin_lock(&(next)->switch_lock); \
- spin_unlock(&(rq)->lock); \
+/* Don't hold the runqueue lock over context switch */
+#define __ARCH_WANT_UNLOCKED_CTXSW
+#define prepare_arch_switch(next) \
+do { \
flushw_all(); \
} while (0)
-#define finish_arch_switch(rq, prev) \
-do { spin_unlock_irq(&(prev)->switch_lock); \
-} while (0)
-
-#define task_running(rq, p) \
- ((rq)->curr == (p) || spin_is_locked(&(p)->switch_lock))
-
/* See what happens when you design the chip correctly?
*
* We tell gcc we clobber all non-fixed-usage registers except
Index: linux-2.6/include/asm-arm/system.h
===================================================================
--- linux-2.6.orig/include/asm-arm/system.h 2005-03-26 19:24:10.000000000 +1100
+++ linux-2.6/include/asm-arm/system.h 2005-03-26 23:55:46.000000000 +1100
@@ -141,34 +141,12 @@ extern unsigned int user_debug;
#define set_wmb(var, value) do { var = value; wmb(); } while (0)
#define nop() __asm__ __volatile__("mov\tr0,r0\t@ nop\n\t");
-#ifdef CONFIG_SMP
/*
- * Define our own context switch locking. This allows us to enable
- * interrupts over the context switch, otherwise we end up with high
- * interrupt latency. The real problem area is switch_mm() which may
- * do a full cache flush.
+ * switch_mm() may do a full cache flush over the context switch,
+ * so enable interrupts over the context switch to avoid high
+ * latency.
*/
-#define prepare_arch_switch(rq,next) \
-do { \
- spin_lock(&(next)->switch_lock); \
- spin_unlock_irq(&(rq)->lock); \
-} while (0)
-
-#define finish_arch_switch(rq,prev) \
- spin_unlock(&(prev)->switch_lock)
-
-#define task_running(rq,p) \
- ((rq)->curr == (p) || spin_is_locked(&(p)->switch_lock))
-#else
-/*
- * Our UP-case is more simple, but we assume knowledge of how
- * spin_unlock_irq() and friends are implemented. This avoids
- * us needlessly decrementing and incrementing the preempt count.
- */
-#define prepare_arch_switch(rq,next) local_irq_enable()
-#define finish_arch_switch(rq,prev) spin_unlock(&(rq)->lock)
-#define task_running(rq,p) ((rq)->curr == (p))
-#endif
+#define __ARCH_WANT_INTERRUPTS_ON_CTXSW
/*
* switch_to(prev, next) should switch from task `prev' to `next'
* [patch][RFC] sched: optimise/cleanup resched_task, idle routines
@ 2005-03-26 13:34 ` Nick Piggin
From: Nick Piggin @ 2005-03-26 13:34 UTC (permalink / raw)
To: linux-arch; +Cc: Ingo Molnar, Benjamin Herrenschmidt
[-- Attachment #1: Type: text/plain, Size: 568 bytes --]
Had this around for a while too. Quite a bit of arch work to get
through here, and I haven't done most of it (I probably butchered
ppc64, too).
It does seem to work OK on i386 and ia64, though. I'd like to get
confirmation that my approach is correct before putting more work
into it.
It can save locked memory ops and IPIs during scheduling operations.
I don't have access to any big systems that might benefit most from
the improved scalability, but it is worth a % or so on a dual Xeon
system here (on a schedule-happy workload).
But am I horribly misinformed? ;)
[-- Attachment #2: sched-resched-opt.patch --]
[-- Type: text/plain, Size: 21957 bytes --]
Make some changes to the NEED_RESCHED and POLLING_NRFLAG handling to reduce
confusion, and make their semantics rigid. Also have preempt explicitly
disabled in idle routines. This improves the efficiency of resched_task and
some cpu_idle routines.
* In resched_task:
- TIF_NEED_RESCHED is only cleared with the task's runqueue lock held,
and as we hold it during resched_task, there is no need for an atomic
test-and-set there. (The only time this may prevent an IPI is when the
task's quantum expires in the timer interrupt - a race too rare to be
worth handling, given the cost.)
- If TIF_NEED_RESCHED is already set, then we don't need to do anything.
It won't get unset until the task gets schedule()d off.
- If we are running on the same CPU as the task we resched, then set
TIF_NEED_RESCHED and no further action is required.
- If we are running on another CPU, and TIF_POLLING_NRFLAG is *not* set
after TIF_NEED_RESCHED has been set, then we need to send an IPI.
Using these rules, we are able to remove the test-and-set operation in
resched_task, and make the previously vague semantics of POLLING_NRFLAG
clear; the sketch below condenses the waker/idle pairing.
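Condensed from the hunks below, the pairing looks like this (sketch
only; names as in the patch):

/* Waker side (the new resched_task), runqueue lock held: */
	set_tsk_thread_flag(p, TIF_NEED_RESCHED);
	smp_mb();	/* NEED_RESCHED store before POLLING_NRFLAG load */
	if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
		smp_send_reschedule(task_cpu(p));

/* Idle side, before a halt that needs an interrupt to wake up: */
	clear_thread_flag(TIF_POLLING_NRFLAG);
	smp_mb__after_clear_bit();	/* clear visible before re-check */
	while (!need_resched())
		safe_halt();
	set_thread_flag(TIF_POLLING_NRFLAG);

Either the waker still sees POLLING_NRFLAG set (in which case the idle
side's re-check must see NEED_RESCHED), or it sees the flag clear and
sends the IPI; the barriers rule out both sides missing each other.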
* In idle routines:
- Enter cpu_idle with preempt disabled. When the need_resched() condition
becomes true, explicitly call schedule(). This makes things a bit clearer
(IMO), but I haven't updated all architectures yet.
- Many do a test-and-clear of TIF_NEED_RESCHED for some reason. According
to the resched_task rules, this isn't needed (and actually breaks the
assumption that TIF_NEED_RESCHED is only cleared with the runqueue lock
held). So remove that. Generally one less locked memory op when switching
to the idle thread.
- Many idle routines clear TIF_POLLING_NRFLAG, and only set it in the
innermost polling idle loops. The above resched_task semantics allow it
to remain set until just before the final need_resched() check that
precedes a halt requiring an interrupt wakeup.
Many idle routines simply never enter such a halt, so POLLING_NRFLAG can
always be left set, completely eliminating resched IPIs when rescheduling
the idle task.
The window in which POLLING_NRFLAG is set can thus be widened, further
reducing the chance of resched IPIs; a condensed sketch of the resulting
loop shape follows.
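Condensed from the i386 cpu_idle hunk below (sketch only; idle() stands
for the pm_idle/default_idle routine):

void cpu_idle(void)
{
	set_thread_flag(TIF_POLLING_NRFLAG);	/* polling by default */
	while (1) {
		/* idle() polls or halts until need_resched() is true */
		idle();
		preempt_enable_no_resched();
		schedule();
		preempt_disable();	/* idle runs with preempt off */
	}
}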
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/kernel/sched.c 2005-03-27 00:22:34.000000000 +1100
@@ -796,21 +796,28 @@ static void deactivate_task(struct task_
#ifdef CONFIG_SMP
static void resched_task(task_t *p)
{
- int need_resched, nrpolling;
+ int cpu;
assert_spin_locked(&task_rq(p)->lock);
- /* minimise the chance of sending an interrupt to poll_idle() */
- nrpolling = test_tsk_thread_flag(p,TIF_POLLING_NRFLAG);
- need_resched = test_and_set_tsk_thread_flag(p,TIF_NEED_RESCHED);
- nrpolling |= test_tsk_thread_flag(p,TIF_POLLING_NRFLAG);
+ if (test_tsk_thread_flag(p, TIF_NEED_RESCHED))
+ return;
+
+ set_tsk_thread_flag(p, TIF_NEED_RESCHED);
- if (!need_resched && !nrpolling && (task_cpu(p) != smp_processor_id()))
- smp_send_reschedule(task_cpu(p));
+ cpu = task_cpu(p);
+ if (cpu == smp_processor_id())
+ return;
+
+ /* NEED_RESCHED must be visible before we test POLLING_NRFLAG */
+ smp_mb();
+ if (!test_tsk_thread_flag(p, TIF_POLLING_NRFLAG))
+ smp_send_reschedule(cpu);
}
#else
static inline void resched_task(task_t *p)
{
+ assert_spin_locked(&task_rq(p)->lock);
set_tsk_need_resched(p);
}
#endif
Index: linux-2.6/arch/i386/kernel/process.c
===================================================================
--- linux-2.6.orig/arch/i386/kernel/process.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/arch/i386/kernel/process.c 2005-03-27 00:22:34.000000000 +1100
@@ -95,14 +95,19 @@ EXPORT_SYMBOL(enable_hlt);
*/
void default_idle(void)
{
+ local_irq_enable();
+
if (!hlt_counter && boot_cpu_data.hlt_works_ok) {
- local_irq_disable();
- if (!need_resched())
+ clear_thread_flag(TIF_POLLING_NRFLAG);
+ smp_mb__after_clear_bit();
+ while (!need_resched()) {
+ local_irq_disable();
safe_halt();
- else
- local_irq_enable();
+ }
+ set_thread_flag(TIF_POLLING_NRFLAG);
} else {
- cpu_relax();
+ while (!need_resched())
+ cpu_relax();
}
}
@@ -113,29 +118,14 @@ void default_idle(void)
*/
static void poll_idle (void)
{
- int oldval;
-
local_irq_enable();
- /*
- * Deal with another CPU just having chosen a thread to
- * run here:
- */
- oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED);
-
- if (!oldval) {
- set_thread_flag(TIF_POLLING_NRFLAG);
- asm volatile(
- "2:"
- "testl %0, %1;"
- "rep; nop;"
- "je 2b;"
- : : "i"(_TIF_NEED_RESCHED), "m" (current_thread_info()->flags));
-
- clear_thread_flag(TIF_POLLING_NRFLAG);
- } else {
- set_need_resched();
- }
+ asm volatile(
+ "2:"
+ "testl %0, %1;"
+ "rep; nop;"
+ "je 2b;"
+ : : "i"(_TIF_NEED_RESCHED), "m" (current_thread_info()->flags));
}
/*
@@ -146,25 +136,27 @@ static void poll_idle (void)
*/
void cpu_idle (void)
{
- int cpu = _smp_processor_id();
+ int cpu = smp_processor_id();
+ set_thread_flag(TIF_POLLING_NRFLAG);
/* endless idle loop with no priority at all */
while (1) {
- while (!need_resched()) {
- void (*idle)(void);
+ void (*idle)(void);
- if (cpu_isset(cpu, cpu_idle_map))
- cpu_clear(cpu, cpu_idle_map);
- rmb();
- idle = pm_idle;
+ if (cpu_isset(cpu, cpu_idle_map))
+ cpu_clear(cpu, cpu_idle_map);
+ rmb();
+ idle = pm_idle;
- if (!idle)
- idle = default_idle;
+ if (!idle)
+ idle = default_idle;
- irq_stat[cpu].idle_timestamp = jiffies;
- idle();
- }
+ irq_stat[cpu].idle_timestamp = jiffies;
+ idle();
+
+ preempt_enable_no_resched();
schedule();
+ preempt_disable();
}
}
@@ -195,15 +187,12 @@ static void mwait_idle(void)
{
local_irq_enable();
- if (!need_resched()) {
- set_thread_flag(TIF_POLLING_NRFLAG);
- do {
- __monitor((void *)&current_thread_info()->flags, 0, 0);
- if (need_resched())
- break;
- __mwait(0, 0);
- } while (!need_resched());
- clear_thread_flag(TIF_POLLING_NRFLAG);
+ while (!need_resched()) {
+ __monitor((void *)&current_thread_info()->flags, 0, 0);
+ smp_mb();
+ if (need_resched())
+ break;
+ __mwait(0, 0);
}
}
Index: linux-2.6/init/main.c
===================================================================
--- linux-2.6.orig/init/main.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/init/main.c 2005-03-27 00:22:34.000000000 +1100
@@ -382,7 +382,7 @@ static void noinline rest_init(void)
kernel_thread(init, NULL, CLONE_FS | CLONE_SIGHAND);
numa_default_policy();
unlock_kernel();
- preempt_enable_no_resched();
+ /* Don't re-enable preemption */
cpu_idle();
}
Index: linux-2.6/arch/i386/kernel/apm.c
===================================================================
--- linux-2.6.orig/arch/i386/kernel/apm.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/arch/i386/kernel/apm.c 2005-03-27 00:22:34.000000000 +1100
@@ -767,8 +767,20 @@ static int set_system_power_state(u_shor
static int apm_do_idle(void)
{
u32 eax;
+ u8 ret;
+ int idled = 0;
- if (apm_bios_call_simple(APM_FUNC_IDLE, 0, 0, &eax)) {
+ clear_thread_flag(TIF_POLLING_NRFLAG);
+ smp_mb__after_clear_bit();
+ if (!need_resched()) {
+ idled = 1;
+ ret = apm_bios_call_simple(APM_FUNC_IDLE, 0, 0, &eax);
+ }
+ set_thread_flag(TIF_POLLING_NRFLAG);
+ if (!idled)
+ return 0;
+
+ if (ret) {
static unsigned long t;
/* This always fails on some SMP boards running UP kernels.
Index: linux-2.6/drivers/acpi/processor_idle.c
===================================================================
--- linux-2.6.orig/drivers/acpi/processor_idle.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/drivers/acpi/processor_idle.c 2005-03-27 00:22:34.000000000 +1100
@@ -162,6 +162,14 @@ acpi_processor_power_activate (
return;
}
+static void acpi_safe_halt (void)
+{
+ clear_thread_flag(TIF_POLLING_NRFLAG);
+ smp_mb__after_clear_bit();
+ while (!need_resched())
+ safe_halt();
+ set_thread_flag(TIF_POLLING_NRFLAG);
+}
static void acpi_processor_idle (void)
{
@@ -171,7 +179,7 @@ static void acpi_processor_idle (void)
int sleep_ticks = 0;
u32 t1, t2 = 0;
- pr = processors[_smp_processor_id()];
+ pr = processors[smp_processor_id()];
if (!pr)
return;
@@ -191,8 +199,13 @@ static void acpi_processor_idle (void)
}
cx = pr->power.state;
- if (!cx)
- goto easy_out;
+ if (!cx) {
+ if (pm_idle_save)
+ pm_idle_save();
+ else
+ acpi_safe_halt();
+ return;
+ }
/*
* Check BM Activity
@@ -272,7 +285,8 @@ static void acpi_processor_idle (void)
if (pm_idle_save)
pm_idle_save();
else
- safe_halt();
+ acpi_safe_halt();
+
/*
* TBD: Can't get time duration while in C1, as resumes
* go to an ISR rather than here. Need to instrument
@@ -384,16 +398,6 @@ end:
*/
if (next_state != pr->power.state)
acpi_processor_power_activate(pr, next_state);
-
- return;
-
- easy_out:
- /* do C1 instead of busy loop */
- if (pm_idle_save)
- pm_idle_save();
- else
- safe_halt();
- return;
}
Index: linux-2.6/arch/i386/kernel/smpboot.c
===================================================================
--- linux-2.6.orig/arch/i386/kernel/smpboot.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/arch/i386/kernel/smpboot.c 2005-03-27 00:22:34.000000000 +1100
@@ -414,6 +414,8 @@ static int cpucount;
*/
static void __init start_secondary(void *unused)
{
+ preempt_disable();
+
/*
* Dont put anything before smp_callin(), SMP
* booting is too fragile that we want to limit the
Index: linux-2.6/arch/x86_64/kernel/process.c
===================================================================
--- linux-2.6.orig/arch/x86_64/kernel/process.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/arch/x86_64/kernel/process.c 2005-03-27 00:22:34.000000000 +1100
@@ -84,12 +84,19 @@ EXPORT_SYMBOL(enable_hlt);
*/
void default_idle(void)
{
+ local_irq_enable();
+
if (!atomic_read(&hlt_counter)) {
- local_irq_disable();
- if (!need_resched())
+ clear_thread_flag(TIF_POLLING_NRFLAG);
+ smp_mb__after_clear_bit();
+ while (!need_resched()) {
+ local_irq_disable();
safe_halt();
- else
- local_irq_enable();
+ }
+ set_thread_flag(TIF_POLLING_NRFLAG);
+ } else {
+ while (!need_resched())
+ cpu_relax();
}
}
@@ -100,29 +107,16 @@ void default_idle(void)
*/
static void poll_idle (void)
{
- int oldval;
-
local_irq_enable();
- /*
- * Deal with another CPU just having chosen a thread to
- * run here:
- */
- oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED);
-
- if (!oldval) {
- set_thread_flag(TIF_POLLING_NRFLAG);
- asm volatile(
- "2:"
- "testl %0,%1;"
- "rep; nop;"
- "je 2b;"
- : :
- "i" (_TIF_NEED_RESCHED),
- "m" (current_thread_info()->flags));
- } else {
- set_need_resched();
- }
+ asm volatile(
+ "2:"
+ "testl %0,%1;"
+ "rep; nop;"
+ "je 2b;"
+ : :
+ "i" (_TIF_NEED_RESCHED),
+ "m" (current_thread_info()->flags));
}
@@ -151,21 +145,23 @@ EXPORT_SYMBOL_GPL(cpu_idle_wait);
void cpu_idle (void)
{
int cpu = smp_processor_id();
+ set_thread_flag(TIF_POLLING_NRFLAG);
/* endless idle loop with no priority at all */
while (1) {
- while (!need_resched()) {
- void (*idle)(void);
+ void (*idle)(void);
- if (cpu_isset(cpu, cpu_idle_map))
- cpu_clear(cpu, cpu_idle_map);
- rmb();
- idle = pm_idle;
- if (!idle)
- idle = default_idle;
- idle();
- }
+ if (cpu_isset(cpu, cpu_idle_map))
+ cpu_clear(cpu, cpu_idle_map);
+ rmb();
+ idle = pm_idle;
+ if (!idle)
+ idle = default_idle;
+ idle();
+
+ preempt_enable_no_resched();
schedule();
+ preempt_disable();
}
}
@@ -180,15 +176,12 @@ static void mwait_idle(void)
{
local_irq_enable();
- if (!need_resched()) {
- set_thread_flag(TIF_POLLING_NRFLAG);
- do {
- __monitor((void *)&current_thread_info()->flags, 0, 0);
- if (need_resched())
- break;
- __mwait(0, 0);
- } while (!need_resched());
- clear_thread_flag(TIF_POLLING_NRFLAG);
+ while (!need_resched()) {
+ __monitor((void *)&current_thread_info()->flags, 0, 0);
+ smp_mb();
+ if (need_resched())
+ break;
+ __mwait(0, 0);
}
}
Index: linux-2.6/arch/ppc64/kernel/idle.c
===================================================================
--- linux-2.6.orig/arch/ppc64/kernel/idle.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/arch/ppc64/kernel/idle.c 2005-03-27 00:22:34.000000000 +1100
@@ -77,6 +77,8 @@ static int iSeries_idle(void)
long oldval;
unsigned long CTRL;
+ set_thread_flag(TIF_POLLING_NRFLAG);
+
/* ensure iSeries run light will be out when idle */
clear_thread_flag(TIF_RUN_LIGHT);
CTRL = mfspr(CTRLF);
@@ -86,32 +88,21 @@ static int iSeries_idle(void)
lpaca = get_paca();
while (1) {
- if (lpaca->lppaca.shared_proc) {
- if (ItLpQueue_isLpIntPending(lpaca->lpqueue_ptr))
- process_iSeries_events();
- if (!need_resched())
- yield_shared_processor();
- } else {
- oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED);
-
- if (!oldval) {
- set_thread_flag(TIF_POLLING_NRFLAG);
-
- while (!need_resched()) {
- HMT_medium();
- if (ItLpQueue_isLpIntPending(lpaca->lpqueue_ptr))
- process_iSeries_events();
- HMT_low();
- }
-
+ while (!need_resched()) {
+ HMT_low();
+ if (ItLpQueue_isLpIntPending(lpaca->lpqueue_ptr)) {
HMT_medium();
- clear_thread_flag(TIF_POLLING_NRFLAG);
- } else {
- set_need_resched();
+ process_iSeries_events();
+ HMT_low();
}
+ if (lpaca->lppaca.shared_proc)
+ yield_shared_processor();
}
+ HMT_medium();
+ preempt_enable_no_resched();
schedule();
+ preempt_disable();
}
return 0;
@@ -123,30 +114,23 @@ static int default_idle(void)
{
long oldval;
unsigned int cpu = smp_processor_id();
-
+ set_thread_flag(TIF_POLLING_NRFLAG);
+
while (1) {
- oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED);
-
- if (!oldval) {
- set_thread_flag(TIF_POLLING_NRFLAG);
-
- while (!need_resched() && !cpu_is_offline(cpu)) {
- barrier();
- /*
- * Go into low thread priority and possibly
- * low power mode.
- */
- HMT_low();
- HMT_very_low();
- }
-
- HMT_medium();
- clear_thread_flag(TIF_POLLING_NRFLAG);
- } else {
- set_need_resched();
+ while (!need_resched() && !cpu_is_offline(cpu)) {
+ barrier();
+ /*
+ * Go into low thread priority and possibly
+ * low power mode.
+ */
+ HMT_low();
+ HMT_very_low();
}
+ HMT_medium();
+ preempt_enable_no_resched();
schedule();
+ preempt_disable();
if (cpu_is_offline(cpu) && system_state == SYSTEM_RUNNING)
cpu_die();
}
@@ -166,6 +150,7 @@ int dedicated_idle(void)
unsigned long *smt_snooze_delay = &__get_cpu_var(smt_snooze_delay);
unsigned int cpu = smp_processor_id();
+ set_thread_flag(TIF_POLLING_NRFLAG);
ppaca = &paca[cpu ^ 1];
while (1) {
@@ -175,66 +160,67 @@ int dedicated_idle(void)
*/
lpaca->lppaca.idle = 1;
- oldval = test_and_clear_thread_flag(TIF_NEED_RESCHED);
- if (!oldval) {
- set_thread_flag(TIF_POLLING_NRFLAG);
- start_snooze = __get_tb() +
+ start_snooze = __get_tb() +
*smt_snooze_delay * tb_ticks_per_usec;
- while (!need_resched() && !cpu_is_offline(cpu)) {
- /*
- * Go into low thread priority and possibly
- * low power mode.
- */
- HMT_low();
- HMT_very_low();
- if (*smt_snooze_delay == 0 ||
- __get_tb() < start_snooze)
- continue;
+ while (!need_resched() && !cpu_is_offline(cpu)) {
+ /*
+ * Go into low thread priority and possibly
+ * low power mode.
+ */
+ HMT_low();
+ HMT_very_low();
- HMT_medium();
+ if (*smt_snooze_delay == 0 || __get_tb() < start_snooze)
+ continue;
- if (!(ppaca->lppaca.idle)) {
- local_irq_disable();
+ HMT_medium();
- /*
- * We are about to sleep the thread
- * and so wont be polling any
- * more.
- */
- clear_thread_flag(TIF_POLLING_NRFLAG);
-
- /*
- * SMT dynamic mode. Cede will result
- * in this thread going dormant, if the
- * partner thread is still doing work.
- * Thread wakes up if partner goes idle,
- * an interrupt is presented, or a prod
- * occurs. Returning from the cede
- * enables external interrupts.
- */
- if (!need_resched())
- cede_processor();
- else
- local_irq_enable();
- } else {
- /*
- * Give the HV an opportunity at the
- * processor, since we are not doing
- * any work.
- */
- poll_pending();
- }
- }
+ if (!(ppaca->lppaca.idle)) {
+ local_irq_disable();
- clear_thread_flag(TIF_POLLING_NRFLAG);
- } else {
- set_need_resched();
+ /*
+ * We are about to sleep the thread
+ * and so wont be polling any
+ * more.
+ */
+ clear_thread_flag(TIF_POLLING_NRFLAG);
+
+ /*
+ * Must have TIF_POLLING_NRFLAG clear visible
+ * before checking need_resched
+ */
+ smp_mb__after_clear_bit();
+
+ /*
+ * SMT dynamic mode. Cede will result
+ * in this thread going dormant, if the
+ * partner thread is still doing work.
+ * Thread wakes up if partner goes idle,
+ * an interrupt is presented, or a prod
+ * occurs. Returning from the cede
+ * enables external interrupts.
+ */
+ if (!need_resched())
+ cede_processor();
+ else
+ local_irq_enable();
+ set_thread_flag(TIF_POLLING_NRFLAG);
+ } else {
+ /*
+ * Give the HV an opportunity at the
+ * processor, since we are not doing
+ * any work.
+ */
+ poll_pending();
+ }
}
HMT_medium();
lpaca->lppaca.idle = 0;
+ preempt_enable_no_resched();
schedule();
+ preempt_disable();
if (cpu_is_offline(cpu) && system_state == SYSTEM_RUNNING)
cpu_die();
}
@@ -245,6 +231,7 @@ static int shared_idle(void)
{
struct paca_struct *lpaca = get_paca();
unsigned int cpu = smp_processor_id();
+ set_thread_flag(TIF_POLLING_NRFLAG);
while (1) {
/*
@@ -256,6 +243,9 @@ static int shared_idle(void)
while (!need_resched() && !cpu_is_offline(cpu)) {
local_irq_disable();
+ clear_thread_flag(TIF_POLLING_NRFLAG);
+ smp_mb__after_clear_bit();
+
/*
* Yield the processor to the hypervisor. We return if
* an external interrupt occurs (which are driven prior
@@ -270,11 +260,14 @@ static int shared_idle(void)
cede_processor();
else
local_irq_enable();
+ set_thread_flag(TIF_POLLING_NRFLAG);
}
HMT_medium();
lpaca->lppaca.idle = 0;
+ preempt_enable_no_resched();
schedule();
+ preempt_disable();
if (cpu_is_offline(smp_processor_id()) &&
system_state == SYSTEM_RUNNING)
cpu_die();
@@ -289,10 +282,12 @@ static int native_idle(void)
{
while(1) {
/* check CPU type here */
- if (!need_resched())
+ while (!need_resched())
power4_idle();
- if (need_resched())
- schedule();
+
+ preempt_enable_no_resched();
+ schedule();
+ preempt_disable();
if (cpu_is_offline(_smp_processor_id()) &&
system_state == SYSTEM_RUNNING)
Index: linux-2.6/arch/ia64/kernel/process.c
===================================================================
--- linux-2.6.orig/arch/ia64/kernel/process.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/arch/ia64/kernel/process.c 2005-03-27 00:22:34.000000000 +1100
@@ -188,11 +188,16 @@ default_idle (void)
{
unsigned long pmu_active = ia64_getreg(_IA64_REG_PSR) & (IA64_PSR_PP | IA64_PSR_UP);
- while (!need_resched())
- if (pal_halt && !pmu_active)
+ if (pal_halt && !pmu_active) {
+ clear_thread_flag(TIF_POLLING_NRFLAG);
+ smp_mb__after_clear_bit();
+ while (!need_resched())
safe_halt();
- else
+ set_thread_flag(TIF_POLLING_NRFLAG);
+ } else {
+ while (!need_resched())
cpu_relax();
+ }
}
#ifdef CONFIG_HOTPLUG_CPU
@@ -251,35 +256,36 @@ cpu_idle (void)
{
void (*mark_idle)(int) = ia64_mark_idle;
int cpu = smp_processor_id();
+ set_thread_flag(TIF_POLLING_NRFLAG);
/* endless idle loop with no priority at all */
while (1) {
+ void (*idle)(void);
+ if (cpu_isset(cpu, cpu_idle_map))
+ cpu_clear(cpu, cpu_idle_map);
+ rmb();
+ idle = pm_idle;
+ if (!idle)
+ idle = default_idle;
+
+ if (!need_resched()) {
#ifdef CONFIG_SMP
- if (!need_resched())
min_xtp();
#endif
- while (!need_resched()) {
- void (*idle)(void);
-
if (mark_idle)
(*mark_idle)(1);
-
- if (cpu_isset(cpu, cpu_idle_map))
- cpu_clear(cpu, cpu_idle_map);
- rmb();
- idle = pm_idle;
- if (!idle)
- idle = default_idle;
(*idle)();
- }
-
- if (mark_idle)
- (*mark_idle)(0);
-
+ if (mark_idle)
+ (*mark_idle)(0);
#ifdef CONFIG_SMP
- normal_xtp();
+ normal_xtp();
#endif
+ }
+
+ preempt_enable_no_resched();
schedule();
+ preempt_disable();
+
check_pgt_cache();
if (cpu_is_offline(smp_processor_id()))
play_dead();
Index: linux-2.6/arch/ia64/kernel/smpboot.c
===================================================================
--- linux-2.6.orig/arch/ia64/kernel/smpboot.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/arch/ia64/kernel/smpboot.c 2005-03-27 00:22:34.000000000 +1100
@@ -343,6 +343,8 @@ smp_callin (void)
int __devinit
start_secondary (void *unused)
{
+ preempt_disable();
+
/* Early console may use I/O ports */
ia64_set_kr(IA64_KR_IO_BASE, __pa(ia64_iobase));
Index: linux-2.6/arch/ppc64/kernel/smp.c
===================================================================
--- linux-2.6.orig/arch/ppc64/kernel/smp.c 2005-03-27 00:20:25.000000000 +1100
+++ linux-2.6/arch/ppc64/kernel/smp.c 2005-03-27 00:22:34.000000000 +1100
@@ -561,7 +561,10 @@ int __devinit __cpu_up(unsigned int cpu)
/* Activate a secondary processor. */
int __devinit start_secondary(void *unused)
{
- unsigned int cpu = smp_processor_id();
+ unsigned int cpu;
+
+ preempt_disable();
+ cpu = smp_processor_id();
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
* Re: [patch] sched: remove scheduler locking from arch code
@ 2005-03-26 21:36 ` David S. Miller
From: David S. Miller @ 2005-03-26 21:36 UTC (permalink / raw)
To: Nick Piggin; +Cc: linux-arch, mingo, rmk
On Sun, 27 Mar 2005 00:26:50 +1100
Nick Piggin <nickpiggin@yahoo.com.au> wrote:
> Any comments?
Looks fine to me.
* Re: [patch] sched: remove scheduler locking from arch code
@ 2005-03-27 3:03 ` Nick Piggin
From: Nick Piggin @ 2005-03-27 3:03 UTC (permalink / raw)
To: Ingo Molnar; +Cc: linux-arch, David S. Miller, Russell King
Nick Piggin wrote:
> Index: linux-2.6/include/linux/init_task.h
> ===================================================================
> --- linux-2.6.orig/include/linux/init_task.h 2005-03-26 19:24:10.000000000 +1100
> +++ linux-2.6/include/linux/init_task.h 2005-03-26 23:55:46.000000000 +1100
> @@ -74,14 +74,13 @@ extern struct group_info init_groups;
> .usage = ATOMIC_INIT(2), \
> .flags = 0, \
> .lock_depth = -1, \
> - .prio = MAX_PRIO-20, \
> - .static_prio = MAX_PRIO-20, \
> + .prio = MAX_PRIO-29, \
> + .static_prio = MAX_PRIO-29, \
> .policy = SCHED_NORMAL, \
> .cpus_allowed = CPU_MASK_ALL, \
> .mm = NULL, \
> .active_mm = &init_mm, \
> .run_list = LIST_HEAD_INIT(tsk.run_list), \
> - .time_slice = HZ, \
> .tasks = LIST_HEAD_INIT(tsk.tasks), \
> .ptrace_children= LIST_HEAD_INIT(tsk.ptrace_children), \
> .ptrace_list = LIST_HEAD_INIT(tsk.ptrace_list), \
This hunk crept in from another patch. I'll fix that up.