* [BUG] 2.6.27-rc5 couldn't boot on tulsa machine randomly
From: Zhang, Yanmin @ 2008-09-09 3:26 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar; +Cc: LKML
On my tulsa x86-64 machine, kernel 2.6.27-rc5 randomly fails to boot.
Basically, __enable_runtime forgets to reset rt_rq->rt_throttled to 0.
As each cpu comes up, its per-cpu migration_thread is created; it runs so fast
that it sometimes marks the corresponding rt_rq->rt_throttled as 1 almost
immediately. After all cpus are up, via the calling chain below,
sched_init_smp => arch_init_sched_domains => build_sched_domains => ...
=> cpu_attach_domain => rq_attach_root => set_rq_online => ... => __enable_runtime,
__enable_runtime is called against every rt_rq again, so rt_rq->rt_time is reset to
0 while rt_rq->rt_throttled might still be 1. After that, do_sched_rt_period_timer
never resets it, and no RT task can be scheduled on that cpu anymore. Note that
migration_thread, which is woken up whenever a task is migrated to another cpu, is
itself an RT task.
The patch below fixes it, against 2.6.27-rc5.
Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
---
diff -Nraup linux-2.6.27-rc5/kernel/sched_rt.c linux-2.6.27-rc5_fix/kernel/sched_rt.c
--- linux-2.6.27-rc5/kernel/sched_rt.c 2008-09-09 11:06:43.000000000 +0800
+++ linux-2.6.27-rc5_fix/kernel/sched_rt.c 2008-09-09 11:13:04.000000000 +0800
@@ -350,6 +350,7 @@ static void __enable_runtime(struct rq *
 		spin_lock(&rt_rq->rt_runtime_lock);
 		rt_rq->rt_runtime = rt_b->rt_runtime;
 		rt_rq->rt_time = 0;
+		rt_rq->rt_throttled = 0;
 		spin_unlock(&rt_rq->rt_runtime_lock);
 		spin_unlock(&rt_b->rt_runtime_lock);
 	}
* Re: [BUG] 2.6.27-rc5 couldn't boot on tulsa machine randomly
From: Zhang, Yanmin @ 2008-09-10 9:12 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Ingo Molnar, LKML
On Tue, 2008-09-09 at 11:26 +0800, Zhang, Yanmin wrote:
> On my tulsa x86-64 machine, kernel 2.6.27-rc5 randomly fails to boot.
>
> Basically, function __enable_runtime forgets to reset rt_rq->rt_throttled to 0.
Peter,
Is there any issue with the patch?
I tested 2.6.27-rc6 and it still couldn't boot on my tulsa machine. With my patch,
the kernel boots.
Yanmin
> As each cpu comes up, its per-cpu migration_thread is created; it runs so fast
> that it sometimes marks the corresponding rt_rq->rt_throttled as 1 almost
> immediately. After all cpus are up, via the calling chain below,
> sched_init_smp => arch_init_sched_domains => build_sched_domains => ...
> => cpu_attach_domain => rq_attach_root => set_rq_online => ... => __enable_runtime,
> __enable_runtime is called against every rt_rq again, so rt_rq->rt_time is reset to
> 0 while rt_rq->rt_throttled might still be 1. After that, do_sched_rt_period_timer
> never resets it, and no RT task can be scheduled on that cpu anymore. Note that
> migration_thread, which is woken up whenever a task is migrated to another cpu, is
> itself an RT task.
>
> The patch below fixes it, against 2.6.27-rc5.
>
> Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
>
> ---
>
> diff -Nraup linux-2.6.27-rc5/kernel/sched_rt.c linux-2.6.27-rc5_fix/kernel/sched_rt.c
> --- linux-2.6.27-rc5/kernel/sched_rt.c 2008-09-09 11:06:43.000000000 +0800
> +++ linux-2.6.27-rc5_fix/kernel/sched_rt.c 2008-09-09 11:13:04.000000000 +0800
> @@ -350,6 +350,7 @@ static void __enable_runtime(struct rq *
>  		spin_lock(&rt_rq->rt_runtime_lock);
>  		rt_rq->rt_runtime = rt_b->rt_runtime;
>  		rt_rq->rt_time = 0;
> +		rt_rq->rt_throttled = 0;
>  		spin_unlock(&rt_rq->rt_runtime_lock);
>  		spin_unlock(&rt_b->rt_runtime_lock);
>  	}
>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
* Re: [BUG] 2.6.27-rc5 couldn't boot on tulsa machine randomly
From: Peter Zijlstra @ 2008-09-10 9:19 UTC (permalink / raw)
To: Zhang, Yanmin; +Cc: Ingo Molnar, LKML
On Wed, 2008-09-10 at 17:12 +0800, Zhang, Yanmin wrote:
> On Tue, 2008-09-09 at 11:26 +0800, Zhang, Yanmin wrote:
> > On my tulsa x86-64 machine, kernel 2.6.27-rc5 randomly fails to boot.
> >
> > Basically, function __enable_runtime forgets to reset rt_rq->rt_throttled to 0.
> Peter,
>
> Is there any issue with the patch?
No, just got lost in my inbox due to me getting distracted at the wrong
moment, sorry!
> I tested 2.6.27-rc6 and it still couldn't boot on my tulsa machine. With my patch,
> the kernel boots.
> > As each cpu comes up, its per-cpu migration_thread is created; it runs so fast
> > that it sometimes marks the corresponding rt_rq->rt_throttled as 1 almost
> > immediately. After all cpus are up, via the calling chain below,
> > sched_init_smp => arch_init_sched_domains => build_sched_domains => ...
> > => cpu_attach_domain => rq_attach_root => set_rq_online => ... => __enable_runtime,
> > __enable_runtime is called against every rt_rq again, so rt_rq->rt_time is reset to
> > 0 while rt_rq->rt_throttled might still be 1. After that, do_sched_rt_period_timer
> > never resets it, and no RT task can be scheduled on that cpu anymore. Note that
> > migration_thread, which is woken up whenever a task is migrated to another cpu, is
> > itself an RT task.
> >
> > The patch below fixes it, against 2.6.27-rc5.
> >
> > Signed-off-by: Zhang Yanmin <yanmin_zhang@linux.intel.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Ingo, please push to Linus.
> > ---
> >
> > diff -Nraup linux-2.6.27-rc5/kernel/sched_rt.c linux-2.6.27-rc5_fix/kernel/sched_rt.c
> > --- linux-2.6.27-rc5/kernel/sched_rt.c 2008-09-09 11:06:43.000000000 +0800
> > +++ linux-2.6.27-rc5_fix/kernel/sched_rt.c 2008-09-09 11:13:04.000000000 +0800
> > @@ -350,6 +350,7 @@ static void __enable_runtime(struct rq *
> >  		spin_lock(&rt_rq->rt_runtime_lock);
> >  		rt_rq->rt_runtime = rt_b->rt_runtime;
> >  		rt_rq->rt_time = 0;
> > +		rt_rq->rt_throttled = 0;
> >  		spin_unlock(&rt_rq->rt_runtime_lock);
> >  		spin_unlock(&rt_b->rt_runtime_lock);
> >  	}
> >
> >
* Re: [BUG] 2.6.27-rc5 couldn't boot on tulsa machine randomly
From: Ingo Molnar @ 2008-09-10 12:25 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Zhang, Yanmin, LKML
* Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
> On Wed, 2008-09-10 at 17:12 +0800, Zhang, Yanmin wrote:
> > On Tue, 2008-09-09 at 11:26 +0800, Zhang, Yanmin wrote:
> > > On my tulsa x86-64 machine, kernel 2.6.27-rc5 randomly fails to boot.
> > >
> > > Basically, function __enable_runtime forgets to reset rt_rq->rt_throttled to 0.
> > Peter,
> >
> > Is there any issue with the patch?
>
> No, just got lost in my inbox due to me getting distracted at the
> wrong moment, sorry!
there's no issue with the patch; it was already queued up in
tip/sched/urgent, for -rc7.
Ingo