* [PATCH v2 2/7] sched/deadline: make init_sched_dl_class() __init
2015-05-13 6:01 [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
@ 2015-05-13 6:01 ` Wanpeng Li
2015-06-19 18:01 ` [tip:sched/core] sched/deadline: Make " tip-bot for Wanpeng Li
2015-05-13 6:01 ` [PATCH v2 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target Wanpeng Li
` (6 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2015-05-13 6:01 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li
It's a bootstrap function; make init_sched_dl_class() __init.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
kernel/sched/deadline.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index f09f3ad..4303af2 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1686,7 +1686,7 @@ static void rq_offline_dl(struct rq *rq)
cpudl_clear_freecpu(&rq->rd->cpudl, rq->cpu);
}
-void init_sched_dl_class(void)
+void __init init_sched_dl_class(void)
{
unsigned int i;
--
1.9.1
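For readers unfamiliar with the annotation: __init (from include/linux/init.h) places a function in the .init.text section, which the kernel frees once boot completes, so one-shot setup such as init_sched_dl_class() costs no memory afterwards. Below is a minimal userspace model of the mechanism; the macro is a simplified stand-in (the real one also adds __cold and related attributes), and the freeing of the section is not modeled.

```c
#include <stdbool.h>

/* Simplified stand-in for the kernel's __init: only the section
 * placement that makes the code discardable after boot is kept. */
#define __init __attribute__((__section__(".init.text")))

static bool dl_class_ready;

/* Model of a one-shot bootstrap function: it runs exactly once at
 * boot and is never needed again, hence safe to discard. */
static void __init init_sched_dl_class_model(void)
{
	dl_class_ready = true;
}

/* Accessor so callers can observe the effect of the bootstrap. */
static bool sched_dl_class_initialized(void)
{
	return dl_class_ready;
}
```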
* [tip:sched/core] sched/deadline: Make init_sched_dl_class() __init
2015-05-13 6:01 ` [PATCH v2 2/7] sched/deadline: make init_sched_dl_class() __init Wanpeng Li
@ 2015-06-19 18:01 ` tip-bot for Wanpeng Li
0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Wanpeng Li @ 2015-06-19 18:01 UTC (permalink / raw)
To: linux-tip-commits
Cc: torvalds, juri.lelli, akpm, peterz, tglx, hpa, bp, linux-kernel,
wanpeng.li, mingo
Commit-ID: a6c0e746fb8f4ea6508f274314378325a6e1ec9b
Gitweb: http://git.kernel.org/tip/a6c0e746fb8f4ea6508f274314378325a6e1ec9b
Author: Wanpeng Li <wanpeng.li@linux.intel.com>
AuthorDate: Wed, 13 May 2015 14:01:02 +0800
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 19 Jun 2015 10:06:46 +0200
sched/deadline: Make init_sched_dl_class() __init
It's a bootstrap function; make init_sched_dl_class() __init.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1431496867-4194-2-git-send-email-wanpeng.li@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/deadline.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 9cbe1c7..1c4bc31 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1685,7 +1685,7 @@ static void rq_offline_dl(struct rq *rq)
cpudl_clear_freecpu(&rq->rd->cpudl, rq->cpu);
}
-void init_sched_dl_class(void)
+void __init init_sched_dl_class(void)
{
unsigned int i;
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
* [PATCH v2 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target
2015-05-13 6:01 [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
2015-05-13 6:01 ` [PATCH v2 2/7] sched/deadline: make init_sched_dl_class() __init Wanpeng Li
@ 2015-05-13 6:01 ` Wanpeng Li
2015-06-19 18:02 ` [tip:sched/core] sched/deadline: Reduce " tip-bot for Wanpeng Li
2015-05-13 6:01 ` [PATCH v2 4/7] sched/deadline: reschedule if stop task slip in after pull operations Wanpeng Li
` (5 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2015-05-13 6:01 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li
This patch adds a check that prevents futile attempts to move dl tasks to
a CPU which already has active tasks of equal or earlier deadline. This
mirrors commit 80e3d87b2c55 ("sched/rt: Reduce rq lock contention by
eliminating locking of non-feasible target") for the rt class.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
v1 -> v2:
* add check in find_lock_later_rq()
kernel/sched/deadline.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 4303af2..e49b1e6 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1012,7 +1012,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int sd_flag, int flags)
(p->nr_cpus_allowed > 1)) {
int target = find_later_rq(p);
- if (target != -1)
+ if (target != -1 &&
+ dl_time_before(p->dl.deadline,
+ cpu_rq(target)->dl.earliest_dl.curr))
cpu = target;
}
rcu_read_unlock();
@@ -1360,6 +1362,17 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
later_rq = cpu_rq(cpu);
+ if (!dl_time_before(task->dl.deadline,
+ later_rq->dl.earliest_dl.curr)) {
+ /*
+ * Target rq has tasks of equal or earlier deadline,
+ * retrying does not release any lock and is unlikely
+ * to yield a different result.
+ */
+ later_rq = NULL;
+ break;
+ }
+
/* Retry if something changed. */
if (double_lock_balance(rq, later_rq)) {
if (unlikely(task_rq(task) != rq ||
--
1.9.1
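The feasibility test added above hinges on dl_time_before(), the kernel's wrap-safe deadline comparison from kernel/sched/sched.h. The sketch below reproduces that comparison and distills the added condition into a helper; target_is_feasible() is a hypothetical name for the check, not a kernel function.

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint64_t u64;
typedef int64_t s64;

/* Wrap-safe "a is earlier than b": the signed difference stays
 * negative even when the u64 clock has wrapped around. Mirrors the
 * definition in kernel/sched/sched.h. */
static inline bool dl_time_before(u64 a, u64 b)
{
	return (s64)(a - b) < 0;
}

/* Hypothetical helper naming the condition added in both hunks:
 * locking a target rq is only worthwhile if our deadline is strictly
 * earlier than the earliest deadline already queued there; otherwise
 * the task could not run there anyway. */
static bool target_is_feasible(u64 task_deadline, u64 target_earliest_dl)
{
	return dl_time_before(task_deadline, target_earliest_dl);
}
```

Note that equal deadlines are deliberately non-feasible: the !dl_time_before() bail-out in find_lock_later_rq() fires for targets whose earliest deadline is equal as well as earlier.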
* [tip:sched/core] sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible target
2015-05-13 6:01 ` [PATCH v2 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target Wanpeng Li
@ 2015-06-19 18:02 ` tip-bot for Wanpeng Li
0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Wanpeng Li @ 2015-06-19 18:02 UTC (permalink / raw)
To: linux-tip-commits
Cc: torvalds, wanpeng.li, hpa, linux-kernel, juri.lelli, peterz, tglx,
akpm, mingo, bp
Commit-ID: 9d514262425691dddf942edea8bc9919e66fe140
Gitweb: http://git.kernel.org/tip/9d514262425691dddf942edea8bc9919e66fe140
Author: Wanpeng Li <wanpeng.li@linux.intel.com>
AuthorDate: Wed, 13 May 2015 14:01:03 +0800
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 19 Jun 2015 10:06:46 +0200
sched/deadline: Reduce rq lock contention by eliminating locking of non-feasible target
This patch adds a check that prevents futile attempts to move DL tasks
to a CPU which already has active tasks of equal or earlier deadline.
This mirrors commit 80e3d87b2c55 ("sched/rt: Reduce rq lock contention
by eliminating locking of non-feasible target") for the rt class.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1431496867-4194-3-git-send-email-wanpeng.li@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/deadline.c | 15 ++++++++++++++-
1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 1c4bc31..98f7871 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1012,7 +1012,9 @@ select_task_rq_dl(struct task_struct *p, int cpu, int sd_flag, int flags)
(p->nr_cpus_allowed > 1)) {
int target = find_later_rq(p);
- if (target != -1)
+ if (target != -1 &&
+ dl_time_before(p->dl.deadline,
+ cpu_rq(target)->dl.earliest_dl.curr))
cpu = target;
}
rcu_read_unlock();
@@ -1359,6 +1361,17 @@ static struct rq *find_lock_later_rq(struct task_struct *task, struct rq *rq)
later_rq = cpu_rq(cpu);
+ if (!dl_time_before(task->dl.deadline,
+ later_rq->dl.earliest_dl.curr)) {
+ /*
+ * Target rq has tasks of equal or earlier deadline,
+ * retrying does not release any lock and is unlikely
+ * to yield a different result.
+ */
+ later_rq = NULL;
+ break;
+ }
+
/* Retry if something changed. */
if (double_lock_balance(rq, later_rq)) {
if (unlikely(task_rq(task) != rq ||
--
* [PATCH v2 4/7] sched/deadline: reschedule if stop task slip in after pull operations
2015-05-13 6:01 [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
2015-05-13 6:01 ` [PATCH v2 2/7] sched/deadline: make init_sched_dl_class() __init Wanpeng Li
2015-05-13 6:01 ` [PATCH v2 3/7] sched/deadline: reduce rq lock contention by eliminating locking of non-feasible target Wanpeng Li
@ 2015-05-13 6:01 ` Wanpeng Li
2015-05-29 14:16 ` Peter Zijlstra
2015-05-13 6:01 ` [PATCH v2 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration Wanpeng Li
` (4 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2015-05-13 6:01 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li
pull_dl_task() can drop (and re-acquire) rq->lock; this means a stop task
can slip in, in which case we need to reschedule. This patch adds that
reschedule when the scenario occurs.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
kernel/sched/deadline.c | 16 +++++++++++++++-
1 file changed, 15 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index e49b1e6..7d4c4fc 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1750,7 +1750,13 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
return;
- if (pull_dl_task(rq))
+ /*
+ * pull_dl_task() can drop (and re-acquire) rq->lock; this
+ * means a stop task can slip in, in which case we need to
+ * reschedule.
+ */
+ if (pull_dl_task(rq) ||
+ (rq->stop && task_on_rq_queued(rq->stop)))
resched_curr(rq);
}
@@ -1797,6 +1803,14 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
pull_dl_task(rq);
/*
+ * pull_dl_task() can drop (and re-acquire) rq->lock; this
+ * means a stop task can slip in, in which case we need to
+ * reschedule.
+ */
+ if (rq->stop && task_on_rq_queued(rq->stop))
+ resched_curr(rq);
+
+ /*
* If we now have a earlier deadline task than p,
* then reschedule, provided p is still on this
* runqueue.
--
1.9.1
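The condition the patch adds can be modeled in isolation. The structs below are stand-ins, not the kernel's (the real struct rq has hundreds of fields); they capture only the two inputs to the decision: whether pull_dl_task() pulled something, and whether a stop task is now queued. As the review that follows points out, the stop-task half turned out to be redundant, since the stop task's wakeup already performs a preemption check.

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-ins for the scheduler structures involved. */
struct task_model { bool on_rq; };
struct rq_model {
	struct task_model *stop;	/* per-cpu stop task, if any */
};

static bool task_on_rq_queued(struct task_model *p)
{
	return p->on_rq;
}

/* The check switched_from_dl() gains in this patch: reschedule if we
 * pulled a DL task, or if a stop task slipped onto the rq while
 * pull_dl_task() had rq->lock dropped. */
static bool need_resched_after_pull(bool pulled, struct rq_model *rq)
{
	return pulled || (rq->stop && task_on_rq_queued(rq->stop));
}
```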
* Re: [PATCH v2 4/7] sched/deadline: reschedule if stop task slip in after pull operations
2015-05-13 6:01 ` [PATCH v2 4/7] sched/deadline: reschedule if stop task slip in after pull operations Wanpeng Li
@ 2015-05-29 14:16 ` Peter Zijlstra
2015-05-31 0:06 ` Wanpeng Li
0 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-05-29 14:16 UTC (permalink / raw)
To: Wanpeng Li; +Cc: Ingo Molnar, Juri Lelli, linux-kernel
On Wed, May 13, 2015 at 02:01:04PM +0800, Wanpeng Li wrote:
> pull_dl_task can drop (and re-acquire) rq->lock, this means a stop task
> can slip in, in which case we need to reschedule. This patch add the
> reschedule when the scenario occurs.
>
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
> kernel/sched/deadline.c | 16 +++++++++++++++-
> 1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
> index e49b1e6..7d4c4fc 100644
> --- a/kernel/sched/deadline.c
> +++ b/kernel/sched/deadline.c
> @@ -1750,7 +1750,13 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
> if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
> return;
>
> - if (pull_dl_task(rq))
> + /*
> + * pull_dl_task() can drop (and re-acquire) rq->lock; this
> + * means a stop task can slip in, in which case we need to
> + * reschedule.
> + */
> + if (pull_dl_task(rq) ||
> + (rq->stop && task_on_rq_queued(rq->stop)))
> resched_curr(rq);
But, the waking of the stop task will already have done the preemption
check and won (obviously). So the wakeup should already have done the
resched_curr().
So why?
* Re: [PATCH v2 4/7] sched/deadline: reschedule if stop task slip in after pull operations
2015-05-29 14:16 ` Peter Zijlstra
@ 2015-05-31 0:06 ` Wanpeng Li
0 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2015-05-31 0:06 UTC (permalink / raw)
To: Peter Zijlstra, Wanpeng Li; +Cc: Ingo Molnar, Juri Lelli, linux-kernel
On 5/29/15 10:16 PM, Peter Zijlstra wrote:
> On Wed, May 13, 2015 at 02:01:04PM +0800, Wanpeng Li wrote:
>> pull_dl_task can drop (and re-acquire) rq->lock, this means a stop task
>> can slip in, in which case we need to reschedule. This patch add the
>> reschedule when the scenario occurs.
>>
>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>> ---
>> kernel/sched/deadline.c | 16 +++++++++++++++-
>> 1 file changed, 15 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>> index e49b1e6..7d4c4fc 100644
>> --- a/kernel/sched/deadline.c
>> +++ b/kernel/sched/deadline.c
>> @@ -1750,7 +1750,13 @@ static void switched_from_dl(struct rq *rq, struct task_struct *p)
>> if (!task_on_rq_queued(p) || rq->dl.dl_nr_running)
>> return;
>>
>> - if (pull_dl_task(rq))
>> + /*
>> + * pull_dl_task() can drop (and re-acquire) rq->lock; this
>> + * means a stop task can slip in, in which case we need to
>> + * reschedule.
>> + */
>> + if (pull_dl_task(rq) ||
>> + (rq->stop && task_on_rq_queued(rq->stop)))
>> resched_curr(rq);
> But, the waking of the stop task will already have done the preemption
> check and won (obviously). So the wakeup should already have done the
> resched_curr().
Indeed, thanks for your pointing out. :)
>
> So why?
>
* [PATCH v2 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration
2015-05-13 6:01 [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
` (2 preceding siblings ...)
2015-05-13 6:01 ` [PATCH v2 4/7] sched/deadline: reschedule if stop task slip in after pull operations Wanpeng Li
@ 2015-05-13 6:01 ` Wanpeng Li
2015-06-19 18:02 ` [tip:sched/core] sched/deadline: Drop duplicate init_sched_dl_class() declaration tip-bot for Wanpeng Li
2015-05-13 6:01 ` [PATCH v2 6/7] sched/core: remove superfluous resetting of dl_throttled flag Wanpeng Li
` (3 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2015-05-13 6:01 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li
There are two init_sched_dl_class() declarations; this patch drops
the duplicate.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
v1 -> v2:
* trim the changelog
kernel/sched/sched.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d854555..d62b288 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1290,7 +1290,6 @@ extern void update_max_interval(void);
extern void init_sched_dl_class(void);
extern void init_sched_rt_class(void);
extern void init_sched_fair_class(void);
-extern void init_sched_dl_class(void);
extern void resched_curr(struct rq *rq);
extern void resched_cpu(int cpu);
--
1.9.1
* [tip:sched/core] sched/deadline: Drop duplicate init_sched_dl_class() declaration
2015-05-13 6:01 ` [PATCH v2 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration Wanpeng Li
@ 2015-06-19 18:02 ` tip-bot for Wanpeng Li
0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Wanpeng Li @ 2015-06-19 18:02 UTC (permalink / raw)
To: linux-tip-commits
Cc: wanpeng.li, bp, akpm, mingo, torvalds, juri.lelli, tglx, hpa,
peterz, linux-kernel
Commit-ID: 178a4d23e4e6a0a90b086dad86697676b49db60a
Gitweb: http://git.kernel.org/tip/178a4d23e4e6a0a90b086dad86697676b49db60a
Author: Wanpeng Li <wanpeng.li@linux.intel.com>
AuthorDate: Wed, 13 May 2015 14:01:05 +0800
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 19 Jun 2015 10:06:47 +0200
sched/deadline: Drop duplicate init_sched_dl_class() declaration
There are two init_sched_dl_class() declarations, this patch drops
the duplicate.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1431496867-4194-5-git-send-email-wanpeng.li@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/sched.h | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index d854555..d62b288 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1290,7 +1290,6 @@ extern void update_max_interval(void);
extern void init_sched_dl_class(void);
extern void init_sched_rt_class(void);
extern void init_sched_fair_class(void);
-extern void init_sched_dl_class(void);
extern void resched_curr(struct rq *rq);
extern void resched_cpu(int cpu);
--
* [PATCH v2 6/7] sched/core: remove superfluous resetting of dl_throttled flag
2015-05-13 6:01 [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
` (3 preceding siblings ...)
2015-05-13 6:01 ` [PATCH v2 5/7] sched/deadline: drop duplicate init_sched_dl_class declaration Wanpeng Li
@ 2015-05-13 6:01 ` Wanpeng Li
2015-06-19 18:02 ` [tip:sched/core] sched: Remove superfluous resetting of the p-> " tip-bot for Wanpeng Li
2015-05-13 6:01 ` [PATCH v2 7/7] sched/rt: reschedule if stop/dl task slip in after pull operations Wanpeng Li
` (2 subsequent siblings)
7 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2015-05-13 6:01 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li
Resetting the dl_throttled flag in rt_mutex_setprio() (for a task that
is going to be boosted) is superfluous, as the natural place to do so
is in replenish_dl_entity(). If the task is on the runqueue and is
boosted by a DL task, it will be enqueued back with the
ENQUEUE_REPLENISH flag set, which guarantees that dl_throttled is reset
in replenish_dl_entity(). This patch drops the resetting of the
throttled status in rt_mutex_setprio().
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
v1 -> v2:
* rewrite patch subject and changelog
kernel/sched/core.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 355f953..a9fd4916 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3087,7 +3087,6 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
if (!dl_prio(p->normal_prio) ||
(pi_task && dl_entity_preempt(&pi_task->dl, &p->dl))) {
p->dl.dl_boosted = 1;
- p->dl.dl_throttled = 0;
enqueue_flag = ENQUEUE_REPLENISH;
} else
p->dl.dl_boosted = 0;
--
1.9.1
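The argument can be traced with a toy model. Names mirror the kernel, but the bodies are illustrative stubs and the ENQUEUE_REPLENISH value here is arbitrary; the point is purely control flow: every boost path that sets ENQUEUE_REPLENISH ends in replenish_dl_entity(), which already clears dl_throttled, so the explicit clear in rt_mutex_setprio() did nothing.

```c
#include <stdbool.h>

#define ENQUEUE_REPLENISH 0x01	/* illustrative value, not the kernel's */

struct sched_dl_entity_model {
	bool dl_throttled;
	bool dl_boosted;
};

/* Stub: the real replenish_dl_entity() also refills dl_runtime and
 * advances the deadline; clearing dl_throttled is the part relevant
 * to this patch. */
static void replenish_dl_entity(struct sched_dl_entity_model *dl)
{
	dl->dl_throttled = false;
}

/* Stub enqueue path: with ENQUEUE_REPLENISH set, the entity is
 * replenished on its way back onto the runqueue. */
static void enqueue_task_dl_model(struct sched_dl_entity_model *dl, int flags)
{
	if (flags & ENQUEUE_REPLENISH)
		replenish_dl_entity(dl);
}
```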
* [tip:sched/core] sched: Remove superfluous resetting of the p->dl_throttled flag
2015-05-13 6:01 ` [PATCH v2 6/7] sched/core: remove superfluous resetting of dl_throttled flag Wanpeng Li
@ 2015-06-19 18:02 ` tip-bot for Wanpeng Li
0 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Wanpeng Li @ 2015-06-19 18:02 UTC (permalink / raw)
To: linux-tip-commits
Cc: mingo, tglx, linux-kernel, wanpeng.li, peterz, torvalds,
juri.lelli, bp, akpm, hpa
Commit-ID: 6713c3aa7f63626c0cecf9c509fb48d885b2dd12
Gitweb: http://git.kernel.org/tip/6713c3aa7f63626c0cecf9c509fb48d885b2dd12
Author: Wanpeng Li <wanpeng.li@linux.intel.com>
AuthorDate: Wed, 13 May 2015 14:01:06 +0800
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 19 Jun 2015 10:06:47 +0200
sched: Remove superfluous resetting of the p->dl_throttled flag
Resetting the p->dl_throttled flag in rt_mutex_setprio() (for a task that is going
to be boosted) is superfluous, as the natural place to do so is in
replenish_dl_entity().
If the task is on the runqueue and is boosted by a DL task, it will be enqueued
back with the ENQUEUE_REPLENISH flag set, which guarantees that dl_throttled is
reset in replenish_dl_entity().
This patch drops the resetting of throttled status in function rt_mutex_setprio().
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1431496867-4194-6-git-send-email-wanpeng.li@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/core.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1428c7c..10338ce 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3099,7 +3099,6 @@ void rt_mutex_setprio(struct task_struct *p, int prio)
if (!dl_prio(p->normal_prio) ||
(pi_task && dl_entity_preempt(&pi_task->dl, &p->dl))) {
p->dl.dl_boosted = 1;
- p->dl.dl_throttled = 0;
enqueue_flag = ENQUEUE_REPLENISH;
} else
p->dl.dl_boosted = 0;
--
* [PATCH v2 7/7] sched/rt: reschedule if stop/dl task slip in after pull operations
2015-05-13 6:01 [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
` (4 preceding siblings ...)
2015-05-13 6:01 ` [PATCH v2 6/7] sched/core: remove superfluous resetting of dl_throttled flag Wanpeng Li
@ 2015-05-13 6:01 ` Wanpeng Li
2015-05-29 14:34 ` Peter Zijlstra
2015-05-19 0:23 ` [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
2015-06-19 18:01 ` [tip:sched/core] sched/deadline: Optimize pull_dl_task() tip-bot for Wanpeng Li
7 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2015-05-13 6:01 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra; +Cc: Juri Lelli, linux-kernel, Wanpeng Li
pull_rt_task() can drop (and re-acquire) rq->lock; this means a dl or
stop task can slip in, in which case we need to reschedule. This patch
adds that reschedule when the scenario occurs.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
kernel/sched/rt.c | 19 ++++++++++++++++++-
1 file changed, 18 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 560d2fa..8c948bf 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2136,7 +2136,14 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
return;
- if (pull_rt_task(rq))
+ /*
+ * pull_rt_task() can drop (and re-acquire) rq->lock; this
+ * means a dl or stop task can slip in, in which case we need
+ * to reschedule.
+ */
+ if (pull_rt_task(rq) ||
+ (unlikely((rq->stop && task_on_rq_queued(rq->stop)) ||
+ rq->dl.dl_nr_running)))
resched_curr(rq);
}
@@ -2197,6 +2204,16 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
*/
if (oldprio < p->prio)
pull_rt_task(rq);
+
+ /*
+ * pull_rt_task() can drop (and re-acquire) rq->lock; this
+ * means a dl or stop task can slip in, in which case we need
+ * to reschedule.
+ */
+ if (unlikely((rq->stop && task_on_rq_queued(rq->stop)) ||
+ rq->dl.dl_nr_running))
+ resched_curr(rq);
+
/*
* If there's a higher priority task waiting to run
* then reschedule. Note, the above pull_rt_task
--
1.9.1
* Re: [PATCH v2 7/7] sched/rt: reschedule if stop/dl task slip in after pull operations
2015-05-13 6:01 ` [PATCH v2 7/7] sched/rt: reschedule if stop/dl task slip in after pull operations Wanpeng Li
@ 2015-05-29 14:34 ` Peter Zijlstra
2015-05-31 0:15 ` Wanpeng Li
0 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2015-05-29 14:34 UTC (permalink / raw)
To: Wanpeng Li; +Cc: Ingo Molnar, Juri Lelli, linux-kernel
On Wed, May 13, 2015 at 02:01:07PM +0800, Wanpeng Li wrote:
> pull_rt_task() can drop (and re-acquire) rq->lock, this means a dl
> or stop task can slip in, in which case need to reschedule. This
> patch add the reschedule when the scenario occurs.
>
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
> kernel/sched/rt.c | 19 ++++++++++++++++++-
> 1 file changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 560d2fa..8c948bf 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -2136,7 +2136,14 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
> if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
> return;
>
> - if (pull_rt_task(rq))
> + /*
> + * pull_rt_task() can drop (and re-acquire) rq->lock; this
> + * means a dl or stop task can slip in, in which case we need
> + * to reschedule.
> + */
> + if (pull_rt_task(rq) ||
> + (unlikely((rq->stop && task_on_rq_queued(rq->stop)) ||
> + rq->dl.dl_nr_running)))
> resched_curr(rq);
> }
Same as before; why is the normal wakeup preemption check not working?
* Re: [PATCH v2 7/7] sched/rt: reschedule if stop/dl task slip in after pull operations
2015-05-29 14:34 ` Peter Zijlstra
@ 2015-05-31 0:15 ` Wanpeng Li
0 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2015-05-31 0:15 UTC (permalink / raw)
To: Peter Zijlstra, Wanpeng Li; +Cc: Ingo Molnar, Juri Lelli, linux-kernel
On 5/29/15 10:34 PM, Peter Zijlstra wrote:
> On Wed, May 13, 2015 at 02:01:07PM +0800, Wanpeng Li wrote:
>> pull_rt_task() can drop (and re-acquire) rq->lock, this means a dl
>> or stop task can slip in, in which case need to reschedule. This
>> patch add the reschedule when the scenario occurs.
>>
>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>> ---
>> kernel/sched/rt.c | 19 ++++++++++++++++++-
>> 1 file changed, 18 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
>> index 560d2fa..8c948bf 100644
>> --- a/kernel/sched/rt.c
>> +++ b/kernel/sched/rt.c
>> @@ -2136,7 +2136,14 @@ static void switched_from_rt(struct rq *rq, struct task_struct *p)
>> if (!task_on_rq_queued(p) || rq->rt.rt_nr_running)
>> return;
>>
>> - if (pull_rt_task(rq))
>> + /*
>> + * pull_rt_task() can drop (and re-acquire) rq->lock; this
>> + * means a dl or stop task can slip in, in which case we need
>> + * to reschedule.
>> + */
>> + if (pull_rt_task(rq) ||
>> + (unlikely((rq->stop && task_on_rq_queued(rq->stop)) ||
>> + rq->dl.dl_nr_running)))
>> resched_curr(rq);
>> }
> Same as before; why is the normal wakeup preemption check not working?
I will drop these two patches and send out v3. :)
Regards,
Wanpeng Li
* Re: [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm
2015-05-13 6:01 [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
` (5 preceding siblings ...)
2015-05-13 6:01 ` [PATCH v2 7/7] sched/rt: reschedule if stop/dl task slip in after pull operations Wanpeng Li
@ 2015-05-19 0:23 ` Wanpeng Li
2015-05-26 2:17 ` Wanpeng Li
2015-06-19 18:01 ` [tip:sched/core] sched/deadline: Optimize pull_dl_task() tip-bot for Wanpeng Li
7 siblings, 1 reply; 18+ messages in thread
From: Wanpeng Li @ 2015-05-19 0:23 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Ingo Molnar, Juri Lelli, linux-kernel, Wanpeng Li
Ping Peterz for this patchset, :)
On Wed, May 13, 2015 at 02:01:01PM +0800, Wanpeng Li wrote:
>Function pick_next_earliest_dl_task is used to pick earliest and pushable
>dl task from overloaded cpus in pull algorithm, however, it traverses
>runqueue rbtree instead of pushable task rbtree which is also ordered by
>tasks' deadlines. This will result in getting no candidates from overloaded
>cpus if all the dl tasks on the overloaded cpus are pinned. This patch fix
>it by traversing pushable task rbtree which is also ordered by tasks'
>deadlines.
>
>Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>---
> kernel/sched/deadline.c | 29 ++++++++++++++++++++++++++++-
> 1 file changed, 28 insertions(+), 1 deletion(-)
>
>diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>index 890ce95..f09f3ad 100644
>--- a/kernel/sched/deadline.c
>+++ b/kernel/sched/deadline.c
>@@ -1230,6 +1230,33 @@ next_node:
> return NULL;
> }
>
>+/*
>+ * Return the earliest pushable rq's task, which is suitable to be executed
>+ * on the cpu, NULL otherwise
>+ */
>+static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq,
>+ int cpu)
>+{
>+ struct rb_node *next_node = rq->dl.pushable_dl_tasks_leftmost;
>+ struct task_struct *p = NULL;
>+
>+ if (!has_pushable_dl_tasks(rq))
>+ return NULL;
>+
>+next_node:
>+ if (next_node) {
>+ p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
>+
>+ if (pick_dl_task(rq, p, cpu))
>+ return p;
>+
>+ next_node = rb_next(next_node);
>+ goto next_node;
>+ }
>+
>+ return NULL;
>+}
>+
> static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
>
> static int find_later_rq(struct task_struct *task)
>@@ -1514,7 +1541,7 @@ static int pull_dl_task(struct rq *this_rq)
> if (src_rq->dl.dl_nr_running <= 1)
> goto skip;
>
>- p = pick_next_earliest_dl_task(src_rq, this_cpu);
>+ p = pick_earliest_pushable_dl_task(src_rq, this_cpu);
>
> /*
> * We found a task to be pulled if:
>--
>1.9.1
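The goto-based walk in the quoted hunk is a linear scan of the pushable tree in deadline order. The same traversal written as a plain loop, with a linked list standing in for the rbtree's rb_next() ordering and runnable_on() standing in for pick_dl_task() (both names and the allowed_cpu field are stand-ins for this sketch):

```c
#include <stddef.h>
#include <stdbool.h>

/* Stand-in node: in the kernel this is a task_struct linked into the
 * pushable_dl_tasks rbtree, sorted by deadline. allowed_cpu < 0 means
 * the task may run anywhere (a stand-in for the cpumask test that
 * pick_dl_task() really performs). */
struct pushable_node {
	int allowed_cpu;
	struct pushable_node *next;	/* deadline order, like rb_next() */
};

static bool runnable_on(const struct pushable_node *n, int cpu)
{
	return n->allowed_cpu < 0 || n->allowed_cpu == cpu;
}

/* Equivalent of pick_earliest_pushable_dl_task(): the first node, in
 * deadline order, that may run on @cpu; NULL if there is none. */
static struct pushable_node *
pick_earliest_pushable(struct pushable_node *leftmost, int cpu)
{
	for (struct pushable_node *n = leftmost; n; n = n->next)
		if (runnable_on(n, cpu))
			return n;
	return NULL;
}
```

Scanning the pushable tree rather than the runqueue tree is what fixes the pinned-task case: pinned tasks are never in the pushable tree, so they no longer mask pullable candidates behind them.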
* Re: [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm
2015-05-19 0:23 ` [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
@ 2015-05-26 2:17 ` Wanpeng Li
0 siblings, 0 replies; 18+ messages in thread
From: Wanpeng Li @ 2015-05-26 2:17 UTC (permalink / raw)
To: Peter Zijlstra, Ingo Molnar; +Cc: Juri Lelli, linux-kernel, Wanpeng Li
Ping,
On Tue, May 19, 2015 at 08:23:15AM +0800, Wanpeng Li wrote:
>Ping Peterz for this patchset, :)
>On Wed, May 13, 2015 at 02:01:01PM +0800, Wanpeng Li wrote:
>>Function pick_next_earliest_dl_task is used to pick earliest and pushable
>>dl task from overloaded cpus in pull algorithm, however, it traverses
>>runqueue rbtree instead of pushable task rbtree which is also ordered by
>>tasks' deadlines. This will result in getting no candidates from overloaded
>>cpus if all the dl tasks on the overloaded cpus are pinned. This patch fix
>>it by traversing pushable task rbtree which is also ordered by tasks'
>>deadlines.
>>
>>Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>>---
>> kernel/sched/deadline.c | 29 ++++++++++++++++++++++++++++-
>> 1 file changed, 28 insertions(+), 1 deletion(-)
>>
>>diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>>index 890ce95..f09f3ad 100644
>>--- a/kernel/sched/deadline.c
>>+++ b/kernel/sched/deadline.c
>>@@ -1230,6 +1230,33 @@ next_node:
>> return NULL;
>> }
>>
>>+/*
>>+ * Return the earliest pushable rq's task, which is suitable to be executed
>>+ * on the cpu, NULL otherwise
>>+ */
>>+static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq,
>>+ int cpu)
>>+{
>>+ struct rb_node *next_node = rq->dl.pushable_dl_tasks_leftmost;
>>+ struct task_struct *p = NULL;
>>+
>>+ if (!has_pushable_dl_tasks(rq))
>>+ return NULL;
>>+
>>+next_node:
>>+ if (next_node) {
>>+ p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
>>+
>>+ if (pick_dl_task(rq, p, cpu))
>>+ return p;
>>+
>>+ next_node = rb_next(next_node);
>>+ goto next_node;
>>+ }
>>+
>>+ return NULL;
>>+}
>>+
>> static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
>>
>> static int find_later_rq(struct task_struct *task)
>>@@ -1514,7 +1541,7 @@ static int pull_dl_task(struct rq *this_rq)
>> if (src_rq->dl.dl_nr_running <= 1)
>> goto skip;
>>
>>- p = pick_next_earliest_dl_task(src_rq, this_cpu);
>>+ p = pick_earliest_pushable_dl_task(src_rq, this_cpu);
>>
>> /*
>> * We found a task to be pulled if:
>>--
>>1.9.1
* [tip:sched/core] sched/deadline: Optimize pull_dl_task()
2015-05-13 6:01 [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
` (6 preceding siblings ...)
2015-05-19 0:23 ` [PATCH v2 1/7] sched/deadline: fix try to pull pinned dl tasks in pull algorithm Wanpeng Li
@ 2015-06-19 18:01 ` tip-bot for Wanpeng Li
7 siblings, 0 replies; 18+ messages in thread
From: tip-bot for Wanpeng Li @ 2015-06-19 18:01 UTC (permalink / raw)
To: linux-tip-commits
Cc: mingo, tglx, linux-kernel, peterz, torvalds, juri.lelli, hpa,
akpm, wanpeng.li, bp
Commit-ID: 8b5e770ed7c05a65ffd2d33a83c14572696236dc
Gitweb: http://git.kernel.org/tip/8b5e770ed7c05a65ffd2d33a83c14572696236dc
Author: Wanpeng Li <wanpeng.li@linux.intel.com>
AuthorDate: Wed, 13 May 2015 14:01:01 +0800
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 19 Jun 2015 10:06:45 +0200
sched/deadline: Optimize pull_dl_task()
pull_dl_task() uses pick_next_earliest_dl_task() to select a migration
candidate; this is sub-optimal since the next earliest task -- as per
the regular runqueue -- might not be migratable at all. This could
result in iterating the entire runqueue looking for a task.
Instead iterate the pushable queue -- this queue only contains tasks
that have at least 2 cpus set in their cpus_allowed mask.
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
[ Improved the changelog. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juri Lelli <juri.lelli@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1431496867-4194-1-git-send-email-wanpeng.li@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
kernel/sched/deadline.c | 28 +++++++++++++++++++++++++++-
1 file changed, 27 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 890ce95..9cbe1c7 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1230,6 +1230,32 @@ next_node:
return NULL;
}
+/*
+ * Return the earliest pushable rq's task, which is suitable to be executed
+ * on the CPU, NULL otherwise:
+ */
+static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu)
+{
+ struct rb_node *next_node = rq->dl.pushable_dl_tasks_leftmost;
+ struct task_struct *p = NULL;
+
+ if (!has_pushable_dl_tasks(rq))
+ return NULL;
+
+next_node:
+ if (next_node) {
+ p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
+
+ if (pick_dl_task(rq, p, cpu))
+ return p;
+
+ next_node = rb_next(next_node);
+ goto next_node;
+ }
+
+ return NULL;
+}
+
static DEFINE_PER_CPU(cpumask_var_t, local_cpu_mask_dl);
static int find_later_rq(struct task_struct *task)
@@ -1514,7 +1540,7 @@ static int pull_dl_task(struct rq *this_rq)
if (src_rq->dl.dl_nr_running <= 1)
goto skip;
- p = pick_next_earliest_dl_task(src_rq, this_cpu);
+ p = pick_earliest_pushable_dl_task(src_rq, this_cpu);
/*
* We found a task to be pulled if:
--