Subject: [PATCH -tip] sched: use need_resched
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: Ingo Molnar, LKML
Date: 2009-03-06 10:24 UTC

Impact: cleanup

Use need_resched() instead of unlikely(test_thread_flag(TIF_NEED_RESCHED)).

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
diff --git a/kernel/sched.c b/kernel/sched.c
index e1f676e..78f4848 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4610,12 +4610,11 @@ need_resched_nonpreemptible:
 
 asmlinkage void __sched schedule(void)
 {
-need_resched:
-	preempt_disable();
-	__schedule();
-	preempt_enable_no_resched();
-	if (unlikely(test_thread_flag(TIF_NEED_RESCHED)))
-		goto need_resched;
+	do {
+		preempt_disable();
+		__schedule();
+		preempt_enable_no_resched();
+	} while (need_resched());
 }
 EXPORT_SYMBOL(schedule);
 
@@ -4707,7 +4706,7 @@ asmlinkage void __sched preempt_schedule(void)
 	 * between schedule and now.
 	 */
 	barrier();
-	} while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
+	} while (need_resched());
 }
 EXPORT_SYMBOL(preempt_schedule);
 
@@ -4736,7 +4735,7 @@ asmlinkage void __sched preempt_schedule_irq(void)
 	 * between schedule and now.
 	 */
 	barrier();
-	} while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
+	} while (need_resched());
 }
 #endif /* CONFIG_PREEMPT */
 
diff --git a/lib/kernel_lock.c b/lib/kernel_lock.c
index 01a3c22..39f1029 100644
--- a/lib/kernel_lock.c
+++ b/lib/kernel_lock.c
@@ -39,7 +39,7 @@ static __cacheline_aligned_in_smp DEFINE_SPINLOCK(kernel_flag);
 int __lockfunc __reacquire_kernel_lock(void)
 {
 	while (!_raw_spin_trylock(&kernel_flag)) {
-		if (test_thread_flag(TIF_NEED_RESCHED))
+		if (need_resched())
 			return -EAGAIN;
 		cpu_relax();
 	}
Subject: Re: [PATCH -tip] sched: use need_resched
From: Ingo Molnar
To: Lai Jiangshan
Cc: LKML, Peter Zijlstra
Date: 2009-03-06 10:49 UTC

* Lai Jiangshan <laijs@cn.fujitsu.com> wrote:

> Impact: cleanup
>
> Use need_resched() instead of unlikely(test_thread_flag(TIF_NEED_RESCHED)).
>
> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>

Looks good - but it does not apply cleanly to the latest scheduler tree:

  http://people.redhat.com/mingo/tip.git/README

Could you please send a merged up patch, and also make sure there's no
other TIF_NEED_RESCHED usage in kernel/sched.c that could be converted
to need_resched()?

Thanks,

	Ingo
Subject: [PATCH -tip] sched: cleanup for TIF_NEED_RESCHED
From: Lai Jiangshan <laijs@cn.fujitsu.com>
To: Ingo Molnar
Cc: LKML, Peter Zijlstra
Date: 2009-03-06 11:40 UTC

Ingo Molnar wrote:
> Could you please send a merged up patch, and also make sure there's no
> other TIF_NEED_RESCHED usage in kernel/sched.c that could be converted
> to need_resched()?

From: Lai Jiangshan <laijs@cn.fujitsu.com>

Impact: cleanup

Use test_tsk_need_resched(), set_tsk_need_resched() and need_resched()
instead of open-coding TIF_NEED_RESCHED.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
---
diff --git a/kernel/sched.c b/kernel/sched.c
index 63e8414..81b7c8b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1210,10 +1210,10 @@ static void resched_task(struct task_struct *p)
 
 	assert_spin_locked(&task_rq(p)->lock);
 
-	if (unlikely(test_tsk_thread_flag(p, TIF_NEED_RESCHED)))
+	if (test_tsk_need_resched(p))
 		return;
 
-	set_tsk_thread_flag(p, TIF_NEED_RESCHED);
+	set_tsk_need_resched(p);
 
 	cpu = task_cpu(p);
 	if (cpu == smp_processor_id())
@@ -1269,7 +1269,7 @@ void wake_up_idle_cpu(int cpu)
 	 * lockless. The worst case is that the other CPU runs the
	 * idle task through an additional NOOP schedule()
 	 */
-	set_tsk_thread_flag(rq->idle, TIF_NEED_RESCHED);
+	set_tsk_need_resched(rq->idle);
 
 	/* NEED_RESCHED must be visible before we test polling */
 	smp_mb();
@@ -4795,12 +4795,11 @@ need_resched_nonpreemptible:
 
 asmlinkage void __sched schedule(void)
 {
-need_resched:
-	preempt_disable();
-	__schedule();
-	preempt_enable_no_resched();
-	if (unlikely(test_thread_flag(TIF_NEED_RESCHED)))
-		goto need_resched;
+	do {
+		preempt_disable();
+		__schedule();
+		preempt_enable_no_resched();
+	} while (need_resched());
 }
 EXPORT_SYMBOL(schedule);
 
@@ -4892,7 +4891,7 @@ asmlinkage void __sched preempt_schedule(void)
 	 * between schedule and now.
 	 */
 	barrier();
-	} while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
+	} while (need_resched());
 }
 EXPORT_SYMBOL(preempt_schedule);
 
@@ -4921,7 +4920,7 @@ asmlinkage void __sched preempt_schedule_irq(void)
 	 * between schedule and now.
 	 */
 	barrier();
-	} while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
+	} while (need_resched());
 }
 #endif /* CONFIG_PREEMPT */
 
diff --git a/lib/kernel_lock.c b/lib/kernel_lock.c
index 01a3c22..39f1029 100644
--- a/lib/kernel_lock.c
+++ b/lib/kernel_lock.c
@@ -39,7 +39,7 @@ static __cacheline_aligned_in_smp DEFINE_SPINLOCK(kernel_flag);
 int __lockfunc __reacquire_kernel_lock(void)
 {
 	while (!_raw_spin_trylock(&kernel_flag)) {
-		if (test_thread_flag(TIF_NEED_RESCHED))
+		if (need_resched())
 			return -EAGAIN;
 		cpu_relax();
 	}
Subject: [tip:sched/cleanups] sched: TIF_NEED_RESCHED -> need_reshed() cleanup
From: Lai Jiangshan
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, laijs, mingo
Date: 2009-03-06 11:51 UTC

Commit-ID:  5ed0cec0ac5f1b3759bdbe4d9df32ee4ff8afb5a
Gitweb:     http://git.kernel.org/tip/5ed0cec0ac5f1b3759bdbe4d9df32ee4ff8afb5a
Author:     Lai Jiangshan <laijs@cn.fujitsu.com>
AuthorDate: Fri, 6 Mar 2009 19:40:20 +0800
Commit:     Ingo Molnar <mingo@elte.hu>
CommitDate: Fri, 6 Mar 2009 12:48:55 +0100

sched: TIF_NEED_RESCHED -> need_reshed() cleanup

Impact: cleanup

Use test_tsk_need_resched(), set_tsk_need_resched() and need_resched()
instead of open-coding TIF_NEED_RESCHED.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <49B10BA4.9070209@cn.fujitsu.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 kernel/sched.c    |   10 +++++-----
 lib/kernel_lock.c |    2 +-
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 8b92f40..e0fa739 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1189,10 +1189,10 @@ static void resched_task(struct task_struct *p)
 
 	assert_spin_locked(&task_rq(p)->lock);
 
-	if (unlikely(test_tsk_thread_flag(p, TIF_NEED_RESCHED)))
+	if (test_tsk_need_resched(p))
 		return;
 
-	set_tsk_thread_flag(p, TIF_NEED_RESCHED);
+	set_tsk_need_resched(p);
 
 	cpu = task_cpu(p);
 	if (cpu == smp_processor_id())
@@ -1248,7 +1248,7 @@ void wake_up_idle_cpu(int cpu)
 	 * lockless. The worst case is that the other CPU runs the
	 * idle task through an additional NOOP schedule()
 	 */
-	set_tsk_thread_flag(rq->idle, TIF_NEED_RESCHED);
+	set_tsk_need_resched(rq->idle);
 
 	/* NEED_RESCHED must be visible before we test polling */
 	smp_mb();
@@ -4740,7 +4740,7 @@ asmlinkage void __sched preempt_schedule(void)
 	 * between schedule and now.
 	 */
 	barrier();
-	} while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
+	} while (need_resched());
 }
 EXPORT_SYMBOL(preempt_schedule);
 
@@ -4769,7 +4769,7 @@ asmlinkage void __sched preempt_schedule_irq(void)
 	 * between schedule and now.
 	 */
 	barrier();
-	} while (unlikely(test_thread_flag(TIF_NEED_RESCHED)));
+	} while (need_resched());
 }
 #endif /* CONFIG_PREEMPT */
 
diff --git a/lib/kernel_lock.c b/lib/kernel_lock.c
index 01a3c22..39f1029 100644
--- a/lib/kernel_lock.c
+++ b/lib/kernel_lock.c
@@ -39,7 +39,7 @@ static __cacheline_aligned_in_smp DEFINE_SPINLOCK(kernel_flag);
 int __lockfunc __reacquire_kernel_lock(void)
 {
 	while (!_raw_spin_trylock(&kernel_flag)) {
-		if (test_thread_flag(TIF_NEED_RESCHED))
+		if (need_resched())
 			return -EAGAIN;
 		cpu_relax();
 	}