Date: Mon, 16 Mar 2015 20:09:31 +0800
From: Wanpeng Li
To: Ingo Molnar
Cc: Wanpeng Li, Peter Zijlstra, Juri Lelli, linux-kernel@vger.kernel.org
Subject: Re: [PATCH RESEND v10] sched/deadline: support dl task migration during cpu hotplug
Message-ID: <20150316120931.GA4809@kernel>
References: <1426231647-11966-1-git-send-email-wanpeng.li@linux.intel.com>
In-Reply-To: <1426231647-11966-1-git-send-email-wanpeng.li@linux.intel.com>

Ping Ingo,

On Fri, Mar 13, 2015 at 03:27:27PM +0800, Wanpeng Li wrote:
>I observe that a dl task can't be migrated to other cpus during cpu hotplug;
>in addition, the task may or may not run again when the cpu is added back.
>The root cause I found is that the dl task is throttled and removed from its
>dl rq after consuming all of its budget, so the stop task can't pick it up
>from the dl rq and migrate it to other cpus during hotplug.
>
>The method to reproduce:
>schedtool -E -t 50000:100000 -e ./test
>Actually test is just a simple for loop. Then observe which cpu the test
>task is on.
>echo 0 > /sys/devices/system/cpu/cpuN/online
>
>This patch adds dl task migration during cpu hotplug: when the dl timer
>fires and the current rq is offline, find the most suitable later-deadline
>rq; if no suitable later-deadline rq can be found, fall back to any eligible
>online cpu so that the deadline task comes back to us, and the push/pull
>mechanism should then move it around properly.
>
>Suggested-and-acked-by: Juri Lelli
>Signed-off-by: Wanpeng Li
>---
> kernel/sched/deadline.c | 54 +++++++++++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 54 insertions(+)
>
>diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>index 5cb5c9c..db457b9 100644
>--- a/kernel/sched/deadline.c
>+++ b/kernel/sched/deadline.c
>@@ -492,6 +492,7 @@ static int start_dl_timer(struct sched_dl_entity *dl_se, bool boosted)
> 	return hrtimer_active(&dl_se->dl_timer);
> }
> 
>+static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq);
> /*
>  * This is the bandwidth enforcement timer callback. If here, we know
>  * a task is not on its dl_rq, since the fact that the timer was running
>@@ -537,6 +538,59 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
> 	update_rq_clock(rq);
> 
> 	/*
>+	 * So if we find that the rq the task was on is no longer
>+	 * available, we need to select a new rq.
>+	 */
>+	if (unlikely(!rq->online)) {
>+		struct rq *later_rq = NULL;
>+		bool fallback = false;
>+
>+		later_rq = find_lock_later_rq(p, rq);
>+
>+		if (!later_rq) {
>+			int cpu;
>+
>+			/*
>+			 * If cannot preempt any rq, fallback to pick any
>+			 * online cpu.
>+			 */
>+			fallback = true;
>+			cpu = cpumask_any_and(cpu_active_mask,
>+					      tsk_cpus_allowed(p));
>+			if (cpu >= nr_cpu_ids) {
>+				if (dl_bandwidth_enabled()) {
>+					/*
>+					 * Fail to find any suitable cpu.
>+					 * The task will never come back!
>+					 */
>+					WARN_ON(1);
>+					goto unlock;
>+				} else {
>+					/*
>+					 * If admission control is disabled we
>+					 * try a little harder to let the task
>+					 * run.
>+					 */
>+					cpu = cpumask_any(cpu_active_mask);
>+				}
>+			}
>+			later_rq = cpu_rq(cpu);
>+			double_lock_balance(rq, later_rq);
>+		}
>+
>+		deactivate_task(rq, p, 0);
>+		set_task_cpu(p, later_rq->cpu);
>+		activate_task(later_rq, p, ENQUEUE_REPLENISH);
>+
>+		if (!fallback)
>+			resched_curr(later_rq);
>+
>+		double_unlock_balance(rq, later_rq);
>+
>+		goto unlock;
>+	}
>+
>+	/*
> 	 * If the throttle happened during sched-out; like:
> 	 *
> 	 *	schedule()
>-- 
>1.9.1
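
For anyone who wants to try the reproducer from the changelog: the "test"
binary is described only as "just a simple for loop", so the sketch below is
a guess at what it looks like (file name and contents are illustrative, not
the actual reproducer):

	/* test.c - illustrative guess at the "simple for loop" workload */
	int main(void)
	{
		for (;;)
			;	/* spin, so the task keeps consuming its dl runtime budget */
	}

Build it (e.g. gcc -o test test.c), start it with the schedtool command from
the changelog, note which cpu the task ends up on, then offline that cpu via
/sys/devices/system/cpu/cpuN/online and check whether the task shows up on
another online cpu.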