From: tip-bot for Peter Zijlstra <tipbot@zytor.com>
To: linux-tip-commits@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, hpa@zytor.com, mingo@kernel.org,
	peterz@infradead.org, tglx@linutronix.de
Subject: [tip:sched/core] sched/deadline: Fix hotplug admission control
Date: Mon, 13 Jan 2014 07:55:18 -0800
Message-ID: <tip-de212f18e92c952533d57c5510d2790199c75734@git.kernel.org>
In-Reply-To: <20131220171343.GL2480@laptop.programming.kicks-ass.net>

Commit-ID:  de212f18e92c952533d57c5510d2790199c75734
Gitweb:     http://git.kernel.org/tip/de212f18e92c952533d57c5510d2790199c75734
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Thu, 19 Dec 2013 11:54:45 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 13 Jan 2014 13:47:25 +0100

sched/deadline: Fix hotplug admission control

The current hotplug admission control is broken because:

  CPU_DYING -> migration_call() -> migrate_tasks() -> __migrate_task()

cannot fail and hard-assumes it _will_ move all tasks off the dying
CPU; if it fails to do so, hotplug breaks.

The much simpler solution is a DOWN_PREPARE handler that fails the
hotplug operation when removing one CPU would leave less capacity than
the total allocated deadline bandwidth.

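For reference, the test the new handler relies on, __dl_overflow() (not
part of this diff), boils down to checking whether the remaining active
CPUs still provide enough capacity for the deadline bandwidth that has
already been admitted. A minimal sketch of that check, using a
hypothetical stand-in struct and treating bw == -1 as "no limit":

  /* Illustrative sketch only -- not the exact kernel code or types. */
  struct dl_bw_sketch {
  	u64 bw;		/* max -deadline bandwidth per CPU, -1 == unlimited */
  	u64 total_bw;	/* bandwidth already allocated to -deadline tasks   */
  };

  static bool dl_capacity_overflow(struct dl_bw_sketch *dl_b, int cpus_left)
  {
  	if (dl_b->bw == (u64)-1)
  		return false;	/* admission control disabled, cannot overflow */

  	/* Overflow when the remaining CPUs can no longer carry the
  	 * bandwidth already granted to -deadline tasks. */
  	return (u64)cpus_left * dl_b->bw < dl_b->total_bw;
  }

In the DOWN_PREPARE path below, dl_bw_cpus() is called after
set_cpu_active(cpu, false), so the CPU count already excludes the CPU
being removed; an overflow then vetoes the hotplug operation with -EBUSY.
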
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20131220171343.GL2480@laptop.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c | 83 +++++++++++++++++++++--------------------------------
 1 file changed, 32 insertions(+), 51 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1d33eb8..a549d9a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1887,9 +1887,15 @@ inline struct dl_bw *dl_bw_of(int i)
 	return &cpu_rq(i)->rd->dl_bw;
 }
 
-static inline int __dl_span_weight(struct rq *rq)
+static inline int dl_bw_cpus(int i)
 {
-	return cpumask_weight(rq->rd->span);
+	struct root_domain *rd = cpu_rq(i)->rd;
+	int cpus = 0;
+
+	for_each_cpu_and(i, rd->span, cpu_active_mask)
+		cpus++;
+
+	return cpus;
 }
 #else
 inline struct dl_bw *dl_bw_of(int i)
@@ -1897,7 +1903,7 @@ inline struct dl_bw *dl_bw_of(int i)
 	return &cpu_rq(i)->dl.dl_bw;
 }
 
-static inline int __dl_span_weight(struct rq *rq)
+static inline int dl_bw_cpus(int i)
 {
 	return 1;
 }
@@ -1938,8 +1944,7 @@ static int dl_overflow(struct task_struct *p, int policy,
 	u64 period = attr->sched_period;
 	u64 runtime = attr->sched_runtime;
 	u64 new_bw = dl_policy(policy) ? to_ratio(period, runtime) : 0;
-	int cpus = __dl_span_weight(task_rq(p));
-	int err = -1;
+	int cpus, err = -1;
 
 	if (new_bw == p->dl.dl_bw)
 		return 0;
@@ -1950,6 +1955,7 @@ static int dl_overflow(struct task_struct *p, int policy,
 	 * allocated bandwidth of the container.
 	 */
 	raw_spin_lock(&dl_b->lock);
+	cpus = dl_bw_cpus(task_cpu(p));
 	if (dl_policy(policy) && !task_has_dl_policy(p) &&
 	    !__dl_overflow(dl_b, cpus, 0, new_bw)) {
 		__dl_add(dl_b, new_bw);
@@ -4522,42 +4528,6 @@ out:
 EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
 
 /*
- * When dealing with a -deadline task, we have to check if moving it to
- * a new CPU is possible or not. In fact, this is only true iff there
- * is enough bandwidth available on such CPU, otherwise we want the
- * whole migration procedure to fail over.
- */
-static inline
-bool set_task_cpu_dl(struct task_struct *p, unsigned int cpu)
-{
-	struct dl_bw *dl_b = dl_bw_of(task_cpu(p));
-	struct dl_bw *cpu_b = dl_bw_of(cpu);
-	int ret = 1;
-	u64 bw;
-
-	if (dl_b == cpu_b)
-		return 1;
-
-	raw_spin_lock(&dl_b->lock);
-	raw_spin_lock(&cpu_b->lock);
-
-	bw = cpu_b->bw * cpumask_weight(cpu_rq(cpu)->rd->span);
-	if (dl_bandwidth_enabled() &&
-	    bw < cpu_b->total_bw + p->dl.dl_bw) {
-		ret = 0;
-		goto unlock;
-	}
-	dl_b->total_bw -= p->dl.dl_bw;
-	cpu_b->total_bw += p->dl.dl_bw;
-
-unlock:
-	raw_spin_unlock(&cpu_b->lock);
-	raw_spin_unlock(&dl_b->lock);
-
-	return ret;
-}
-
-/*
  * Move (not current) task off this cpu, onto dest cpu. We're doing
  * this because either it can't run here any more (set_cpus_allowed()
  * away from this CPU, or CPU going down), or because we're
@@ -4589,13 +4559,6 @@ static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu)
 		goto fail;
 
 	/*
-	 * If p is -deadline, proceed only if there is enough
-	 * bandwidth available on dest_cpu
-	 */
-	if (unlikely(dl_task(p)) && !set_task_cpu_dl(p, dest_cpu))
-		goto fail;
-
-	/*
 	 * If we're not on a rq, the next wake-up will ensure we're
 	 * placed properly.
 	 */
@@ -5052,13 +5015,31 @@ static int sched_cpu_active(struct notifier_block *nfb,
 static int sched_cpu_inactive(struct notifier_block *nfb,
 					unsigned long action, void *hcpu)
 {
+	unsigned long flags;
+	long cpu = (long)hcpu;
+
 	switch (action & ~CPU_TASKS_FROZEN) {
 	case CPU_DOWN_PREPARE:
-		set_cpu_active((long)hcpu, false);
+		set_cpu_active(cpu, false);
+
+		/* explicitly allow suspend */
+		if (!(action & CPU_TASKS_FROZEN)) {
+			struct dl_bw *dl_b = dl_bw_of(cpu);
+			bool overflow;
+			int cpus;
+
+			raw_spin_lock_irqsave(&dl_b->lock, flags);
+			cpus = dl_bw_cpus(cpu);
+			overflow = __dl_overflow(dl_b, cpus, 0, 0);
+			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+
+			if (overflow)
+				return notifier_from_errno(-EBUSY);
+		}
 		return NOTIFY_OK;
-	default:
-		return NOTIFY_DONE;
 	}
+
+	return NOTIFY_DONE;
 }
 
 static int __init migration_init(void)

Thread overview: 71+ messages
2013-12-17 12:27 [PATCH 00/13] sched, deadline: patches Peter Zijlstra
2013-12-17 12:27 ` [PATCH 01/13] sched: Add 3 new scheduler syscalls to support an extended scheduling parameters ABI Peter Zijlstra
2014-01-21 14:36   ` Michael Kerrisk
2014-01-21 15:38     ` Peter Zijlstra
2014-01-21 15:46       ` Peter Zijlstra
2014-01-21 16:02         ` Steven Rostedt
2014-01-21 16:06           ` Peter Zijlstra
2014-01-21 16:46             ` Juri Lelli
2014-02-14 14:13       ` Michael Kerrisk (man-pages)
2014-02-14 16:19         ` Peter Zijlstra
2014-02-15 12:52           ` Ingo Molnar
2014-02-17 13:20           ` Michael Kerrisk (man-pages)
2014-04-09  9:25             ` sched_{set,get}attr() manpage Peter Zijlstra
2014-04-09 15:19               ` Henrik Austad
2014-04-09 15:42                 ` Peter Zijlstra
2014-04-10  7:47                   ` Juri Lelli
2014-04-10  9:59                     ` Claudio Scordino
2014-04-27 15:47                   ` Michael Kerrisk (man-pages)
2014-04-27 19:34                     ` Peter Zijlstra
2014-04-27 19:45                       ` Steven Rostedt
2014-04-28  7:39                       ` Juri Lelli
2014-04-28  8:18             ` Peter Zijlstra
2014-04-29 13:08               ` Michael Kerrisk (man-pages)
2014-04-29 14:22                 ` Peter Zijlstra
2014-04-29 16:04                 ` Peter Zijlstra
2014-04-30 11:09                   ` Michael Kerrisk (man-pages)
2014-04-30 12:35                     ` Peter Zijlstra
2014-04-30 13:09                     ` Peter Zijlstra
2014-05-03 10:43                       ` Juri Lelli
2014-05-05  6:55                         ` Michael Kerrisk (man-pages)
2014-05-05  7:21                           ` Peter Zijlstra
2014-05-05  7:41                             ` Michael Kerrisk (man-pages)
2014-05-05  7:47                               ` Peter Zijlstra
2014-05-05  9:53                                 ` Michael Kerrisk (man-pages)
2014-05-06  8:16                             ` Peter Zijlstra
2014-05-09  8:23                               ` Michael Kerrisk (man-pages)
2014-05-09  8:53                                 ` Peter Zijlstra
2014-05-09  9:26                                   ` Michael Kerrisk (man-pages)
2014-05-19 13:06                                   ` [tip:sched/core] sched: Disallow sched_attr::sched_policy < 0 tip-bot for Peter Zijlstra
2014-05-22 12:25                                   ` tip-bot for Peter Zijlstra
2014-02-21 20:32           ` [tip:sched/urgent] sched: Add 'flags' argument to sched_{set, get}attr() syscalls tip-bot for Peter Zijlstra
2014-01-26  9:48   ` [PATCH 01/13] sched: Add 3 new scheduler syscalls to support an extended scheduling parameters ABI Geert Uytterhoeven
2013-12-17 12:27 ` [PATCH 02/13] sched: SCHED_DEADLINE structures & implementation Peter Zijlstra
2013-12-17 12:27 ` [PATCH 03/13] sched: SCHED_DEADLINE SMP-related data structures & logic Peter Zijlstra
2013-12-17 12:27 ` [PATCH 04/13] [PATCH 05/13] sched: SCHED_DEADLINE avg_update accounting Peter Zijlstra
2013-12-17 12:27 ` [PATCH 05/13] sched: Add period support for -deadline tasks Peter Zijlstra
2013-12-17 12:27 ` [PATCH 06/13] [PATCH 07/13] sched: Add latency tracing " Peter Zijlstra
2013-12-17 12:27 ` [PATCH 07/13] rtmutex: Turn the plist into an rb-tree Peter Zijlstra
2013-12-17 12:27 ` [PATCH 08/13] sched: Drafted deadline inheritance logic Peter Zijlstra
2013-12-17 12:27 ` [PATCH 09/13] sched: Add bandwidth management for sched_dl Peter Zijlstra
2013-12-18 16:55   ` Peter Zijlstra
2013-12-20 17:13     ` Peter Zijlstra
2013-12-20 17:37       ` Steven Rostedt
2013-12-20 17:42         ` Peter Zijlstra
2013-12-20 18:23           ` Steven Rostedt
2013-12-20 18:26             ` Steven Rostedt
2013-12-20 21:44             ` Peter Zijlstra
2013-12-20 23:29               ` Steven Rostedt
2013-12-21 10:05                 ` Peter Zijlstra
2013-12-21 17:26                   ` Peter Zijlstra
2014-01-13 15:55       ` tip-bot for Peter Zijlstra [this message]
2013-12-17 12:27 ` [PATCH 10/13] sched: speed up -dl pushes with a push-heap Peter Zijlstra
2013-12-17 12:27 ` [PATCH 11/13] sched: Remove sched_setscheduler2() Peter Zijlstra
2013-12-17 12:27 ` [PATCH 12/13] sched, deadline: Fixup the smp-affinity mask tests Peter Zijlstra
2013-12-17 12:27 ` [PATCH 13/13] sched, deadline: Remove the sysctl_sched_dl knobs Peter Zijlstra
2013-12-17 20:17 ` [PATCH] sched, deadline: Properly initialize def_dl_bandwidth lock Steven Rostedt
2013-12-18 10:01   ` Peter Zijlstra
2013-12-20 13:51 ` [PATCH 00/13] sched, deadline: patches Juri Lelli
2013-12-20 14:28   ` Steven Rostedt
2013-12-20 14:51   ` Peter Zijlstra
2013-12-20 15:19     ` Steven Rostedt
