From: Peter Zijlstra <peterz@infradead.org>
To: Yong Zhang <yong.zhang0@gmail.com>
Cc: samu.p.onkalo@nokia.com, mingo@elte.hu,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
tglx <tglx@linutronix.de>
Subject: Re: Bug in scheduler when using rt_mutex
Date: Tue, 18 Jan 2011 14:35:46 +0100 [thread overview]
Message-ID: <1295357746.30950.681.camel@laptop> (raw)
In-Reply-To: <AANLkTikXf-OMEBx-UZktxooOs+YqUkSE9doyEZopjzOx@mail.gmail.com>
On Tue, 2011-01-18 at 16:59 +0800, Yong Zhang wrote:
> On Tue, Jan 18, 2011 at 4:23 PM, Onkalo Samu <samu.p.onkalo@nokia.com> wrote:
> > On Mon, 2011-01-17 at 17:00 +0100, ext Peter Zijlstra wrote:
> >> On Mon, 2011-01-17 at 16:42 +0200, Onkalo Samu wrote:
> >> >
> >> > Failure case:
> >> > - user process locks rt_mutex
> >> > - and goes to sleep (wait_for_completion etc.)
> >> > - user process is dequeued to sleep state
> >> > -> vruntime is not updated in dequeue_entity
> >> >
> >>
> >> Does the below (completely untested) patch help?
> >
> > Unfortunately no.
>
> It couldn't work because place_entity() will not place it
> backwards.
Ah indeed, I was somehow assuming it was way left, but that is not at
all true. Something like the below, then...
---
Subject: sched: Fix switched_to_fair()
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Mon Jan 17 17:03:27 CET 2011
When a task is placed back into fair_sched_class, we must update its
placement, since we don't know how long it's been gone; hence its
vruntime is stale and cannot be trusted.

There is also a case where the task was moved out of fair_sched_class
while in a blocked state and moved back while it is running. This
causes an imbalance between DEQUEUE_SLEEP and ENQUEUE_WAKEUP for the
fair class and leaves vruntime way out there (due to the min_vruntime
adjustment).
Also update sysrq-n to call the ->switched_{to,from}() methods.
Reported-by: Onkalo Samu <samu.p.onkalo@nokia.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
kernel/sched.c | 4 ++++
kernel/sched_fair.c | 16 ++++++++++++++++
2 files changed, 20 insertions(+)
Index: linux-2.6/kernel/sched.c
===================================================================
--- linux-2.6.orig/kernel/sched.c
+++ linux-2.6/kernel/sched.c
@@ -8106,6 +8106,8 @@ EXPORT_SYMBOL(__might_sleep);
 #ifdef CONFIG_MAGIC_SYSRQ
 static void normalize_task(struct rq *rq, struct task_struct *p)
 {
+	struct sched_class *prev_class = p->sched_class;
+	int old_prio = p->prio;
 	int on_rq;
 
 	on_rq = p->se.on_rq;
@@ -8116,6 +8118,8 @@ static void normalize_task(struct rq
 		activate_task(rq, p, 0);
 		resched_task(rq->curr);
 	}
+
+	check_class_changed(rq, p, prev_class, old_prio, task_current(rq, p));
 }
 
 void normalize_rt_tasks(void)
Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -4075,6 +4075,22 @@ static void prio_changed_fair(struct rq
 static void switched_to_fair(struct rq *rq, struct task_struct *p,
 			     int running)
 {
+	struct sched_entity *se = &p->se;
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+	if (se->on_rq && cfs_rq->curr != se)
+		__dequeue_entity(cfs_rq, se);
+
+	/*
+	 * se->vruntime can be completely out there, there is no telling
+	 * how long this task was !fair and on what CPU if any it became
+	 * !fair. Therefore, reset it to a known, reasonable value.
+	 */
+	se->vruntime = cfs_rq->min_vruntime;
+
+	if (se->on_rq && cfs_rq->curr != se)
+		__enqueue_entity(cfs_rq, se);
+
 	/*
 	 * We were most likely switched from sched_rt, so
 	 * kick off the schedule if running, otherwise just see