From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <54F57FAD.4020306@arm.com>
Date: Tue, 03 Mar 2015 09:32:29 +0000
From: Juri Lelli
To: Wanpeng Li, Ingo Molnar, Peter Zijlstra
CC: linux-kernel@vger.kernel.org, Kirill Tkhai
Subject: Re: [PATCH RESEND] sched/deadline: fix pull if dl task who's prio changed is not on queue
In-Reply-To: <1424918285-94574-1-git-send-email-wanpeng.li@linux.intel.com>

Hi,

[+Kirill]

On 26/02/2015 02:38, Wanpeng Li wrote:
> A DL task that is not on the queue cannot also be the curr task at the
> same time. In addition, pulling when the priority of a task that is not
> on the queue changes doesn't make any sense.
>
> This patch fixes it by not pulling if the DL task whose priority changed
> is not on the queue.
>

So, this is something that was already raised by Kirill, but I always
forgot to fix :/. Thanks for reminding me!
I have this fix:

>From 8fcb04eee2d76042970e9561d253d1bc1fe4cc2b Mon Sep 17 00:00:00 2001
From: Juri Lelli
Date: Tue, 3 Mar 2015 07:51:56 +0000
Subject: [PATCH] sched/deadline: cleanup prio_changed_dl()

rq->curr task can't be in "dequeued" state in prio_changed_dl()
(the only place we can have that is __schedule()), but it can be
throttled, in which case we shouldn't do balancing.

Also modify the "else" branch, which is dead code (switched_to_dl()
is not interested in dequeued tasks and we are not interested in
balancing in this case), to take into account updates to non-running
tasks.

Signed-off-by: Juri Lelli
Suggested-by: Kirill Tkhai
Suggested-by: Wanpeng Li
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: linux-kernel@vger.kernel.org
---
 kernel/sched/deadline.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index cfa45c1..9d83748 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1759,7 +1759,10 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
 static void prio_changed_dl(struct rq *rq, struct task_struct *p,
 			    int oldprio)
 {
-	if (task_on_rq_queued(p) || rq->curr == p) {
+	if (!on_dl_rq(&p->dl))
+		return;
+
+	if (rq->curr == p) {
 #ifdef CONFIG_SMP
 		/*
 		 * This might be too much, but unfortunately
@@ -1786,8 +1789,15 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
 		 */
 		resched_curr(rq);
 #endif /* CONFIG_SMP */
-	} else
-		switched_to_dl(rq, p);
+	} else {
+		/*
+		 * This task is not running, so if its deadline is
+		 * now more imminent than that of the current running
+		 * task then reschedule.
+		 */
+		if (dl_time_before(p->dl.deadline, rq->curr->dl.deadline))
+			resched_curr(rq);
+	}
 }
 
 const struct sched_class dl_sched_class = {
-- 
2.3.0

> Signed-off-by: Wanpeng Li
> ---
>  kernel/sched/deadline.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index 49f92c8..ca391c0 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1728,7 +1728,10 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
>  static void prio_changed_dl(struct rq *rq, struct task_struct *p,
>  			    int oldprio)
>  {
> -	if (task_on_rq_queued(p) || rq->curr == p) {
> +	if (!task_on_rq_queued(p))
> +		return;
> +
> +	if (rq->curr == p) {
>  #ifdef CONFIG_SMP
>  	/*
>  	 * This might be too much, but unfortunately
>