* Re: [patch] sched,rt: __always_inline preemptible_lazy()
@ 2016-02-22 3:36 Hillf Danton
2016-02-22 4:03 ` Mike Galbraith
0 siblings, 1 reply; 5+ messages in thread
From: Hillf Danton @ 2016-02-22 3:36 UTC (permalink / raw)
To: Mike Galbraith
Cc: 'Sebastian Andrzej Siewior', 'Thomas Gleixner',
'LKML', 'linux-rt-users'
>
> homer: # nm kernel/sched/core.o|grep preemptible_lazy
> 00000000000000b5 t preemptible_lazy
>
> echo wakeup_rt > current_tracer ==> Welcome to infinity.
>
> Signed-off-bx: Mike Galbraith <umgwanakikbuti@gmail.com>
> ---
Fat finger?
BTW, would you please provide a better description of the
problem this patch is trying to address/fix?
Hillf
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3469,7 +3469,7 @@ static void __sched notrace preempt_sche
> * set by a RT task. Oterwise we try to avoid beeing scheduled out as long as
> * preempt_lazy_count counter >0.
> */
> -static int preemptible_lazy(void)
> +static __always_inline int preemptible_lazy(void)
> {
> if (test_thread_flag(TIF_NEED_RESCHED))
> return 1;
* Re: [patch] sched,rt: __always_inline preemptible_lazy()
2016-02-22 3:36 [patch] sched,rt: __always_inline preemptible_lazy() Hillf Danton
@ 2016-02-22 4:03 ` Mike Galbraith
2016-02-22 5:16 ` Hillf Danton
2016-02-26 10:17 ` Sebastian Andrzej Siewior
0 siblings, 2 replies; 5+ messages in thread
From: Mike Galbraith @ 2016-02-22 4:03 UTC (permalink / raw)
To: Hillf Danton
Cc: 'Sebastian Andrzej Siewior', 'Thomas Gleixner',
'LKML', 'linux-rt-users'
On Mon, 2016-02-22 at 11:36 +0800, Hillf Danton wrote:
> >
> > homer: # nm kernel/sched/core.o|grep preemptible_lazy
> > 00000000000000b5 t preemptible_lazy
> >
> > echo wakeup_rt > current_tracer ==> Welcome to infinity.
> >
> > Signed-off-bx: Mike Galbraith <umgwanakikbuti@gmail.com>
> > ---
>
> Fat finger?
Yeah, my fingers don't take direction all that well.
> BTW, would you please provide a better description of the
> problem this patch is trying to address/fix?
Ok, I thought it was clear what happens.
sched,rt: __always_inline preemptible_lazy()
Functions called within a notrace function must either also be
notrace or be inlined, lest recursion blow the stack.
homer: # nm kernel/sched/core.o|grep preemptible_lazy
00000000000000b5 t preemptible_lazy
echo wakeup_rt > current_tracer ==> Welcome to infinity.
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3469,7 +3469,7 @@ static void __sched notrace preempt_sche
* set by a RT task. Oterwise we try to avoid beeing scheduled out as long as
* preempt_lazy_count counter >0.
*/
-static int preemptible_lazy(void)
+static __always_inline int preemptible_lazy(void)
{
if (test_thread_flag(TIF_NEED_RESCHED))
return 1;
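A minimal, compilable sketch of the difference the fix relies on (hypothetical
file inline_demo.c, not kernel code): a plain static helper may still be
emitted out of line and so keeps a traceable call site, while a forced-inline
helper leaves nothing for the tracer to hook.  Build with
"gcc -O0 -c inline_demo.c" and inspect with nm, the same check run on core.o
above.

/* inline_demo.c - illustrative only; the macro below spells out the gcc
 * attribute that the kernel's __always_inline expands to. */
#define my_always_inline inline __attribute__((__always_inline__))

static int plain_helper(int x)                   /* may be kept out of line */
{
        return x + 1;
}

static my_always_inline int forced_helper(int x) /* never emits a symbol */
{
        return x + 1;
}

int caller(int x)
{
        /* nm inline_demo.o shows "t plain_helper" but no forced_helper,
         * mirroring the "t preemptible_lazy" seen in core.o above. */
        return plain_helper(x) + forced_helper(x);
}

An out-of-line preemptible_lazy() keeps its mcount/fentry call site, so the
wakeup_rt tracer can recurse back into preempt_schedule_notrace() from inside
its own helper; forcing the inline removes the call site entirely.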
* Re: [patch] sched,rt: __always_inline preemptible_lazy()
2016-02-22 4:03 ` Mike Galbraith
@ 2016-02-22 5:16 ` Hillf Danton
2016-02-26 10:17 ` Sebastian Andrzej Siewior
1 sibling, 0 replies; 5+ messages in thread
From: Hillf Danton @ 2016-02-22 5:16 UTC (permalink / raw)
To: 'Mike Galbraith'
Cc: 'Sebastian Andrzej Siewior', 'Thomas Gleixner',
'LKML', 'linux-rt-users'
>
> sched,rt: __always_inline preemptible_lazy()
>
> Functions called within a notrace function must either also be
> notrace or be inlined, lest recursion blow the stack.
>
> homer: # nm kernel/sched/core.o|grep preemptible_lazy
> 00000000000000b5 t preemptible_lazy
>
> echo wakeup_rt > current_tracer ==> Welcome to infinity.
>
> Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
> ---
Thank you, Mike.
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3469,7 +3469,7 @@ static void __sched notrace preempt_sche
> * set by a RT task. Oterwise we try to avoid beeing scheduled out as long as
> * preempt_lazy_count counter >0.
> */
> -static int preemptible_lazy(void)
> +static __always_inline int preemptible_lazy(void)
> {
> if (test_thread_flag(TIF_NEED_RESCHED))
> return 1;
* Re: [patch] sched,rt: __always_inline preemptible_lazy()
2016-02-22 4:03 ` Mike Galbraith
2016-02-22 5:16 ` Hillf Danton
@ 2016-02-26 10:17 ` Sebastian Andrzej Siewior
1 sibling, 0 replies; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-02-26 10:17 UTC (permalink / raw)
To: Mike Galbraith
Cc: Hillf Danton, 'Thomas Gleixner', 'LKML',
'linux-rt-users'
* Mike Galbraith | 2016-02-22 05:03:05 [+0100]:
>sched,rt: __always_inline preemptible_lazy()
>Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
applied.
Sebastian
* [rfc patch v4.4-rt2] sched: fix up preempt lazy forward port
@ 2016-01-18 9:08 Mike Galbraith
2016-01-18 20:18 ` Sebastian Andrzej Siewior
0 siblings, 1 reply; 5+ messages in thread
From: Mike Galbraith @ 2016-01-18 9:08 UTC (permalink / raw)
To: Thomas Gleixner, Sebastian Andrzej Siewior; +Cc: LKML, linux-rt-users
pre:
preempt lazy enabled
homer:/root # taskset -c 7 pipe-test 1
2.618038 usecs/loop -- avg 2.618038 763.9 KHz
2.620658 usecs/loop -- avg 2.618300 763.9 KHz
2.618775 usecs/loop -- avg 2.618347 763.8 KHz
2.618819 usecs/loop -- avg 2.618395 763.8 KHz
2.619269 usecs/loop -- avg 2.618482 763.8 KHz
tbench
Throughput 2612.9 MB/sec 8 procs
preempt lazy disabled
homer:/root # taskset -c 7 pipe-test 1
1.771921 usecs/loop -- avg 1.771921 1128.7 KHz
1.782686 usecs/loop -- avg 1.772998 1128.0 KHz
1.785444 usecs/loop -- avg 1.774242 1127.2 KHz
1.787388 usecs/loop -- avg 1.775557 1126.4 KHz
1.770772 usecs/loop -- avg 1.775078 1126.7 KHz
tbench
Throughput 2626.91 MB/sec 8 procs
post:
preempt lazy enabled
homer:/root # taskset -c 7 pipe-test 1
1.485592 usecs/loop -- avg 1.485592 1346.3 KHz
1.489640 usecs/loop -- avg 1.485997 1345.9 KHz
1.488325 usecs/loop -- avg 1.486230 1345.7 KHz
1.484632 usecs/loop -- avg 1.486070 1345.8 KHz
1.484889 usecs/loop -- avg 1.485952 1345.9 KHz
tbench
Throughput 3091.84 MB/sec 8 procs
preempt lazy disabled
homer:/root # taskset -c 7 pipe-test 1
1.579723 usecs/loop -- avg 1.579723 1266.0 KHz
1.562026 usecs/loop -- avg 1.577953 1267.5 KHz
1.546090 usecs/loop -- avg 1.574767 1270.0 KHz
1.543852 usecs/loop -- avg 1.571675 1272.5 KHz
1.546313 usecs/loop -- avg 1.569139 1274.6 KHz
tbench
Throughput 2649.65 MB/sec 8 procs
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
---
arch/x86/entry/common.c | 4 ++--
kernel/sched/core.c | 9 +++++++++
2 files changed, 11 insertions(+), 2 deletions(-)
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -220,14 +220,14 @@ long syscall_trace_enter(struct pt_regs
#define EXIT_TO_USERMODE_LOOP_FLAGS \
(_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
- _TIF_NEED_RESCHED | _TIF_USER_RETURN_NOTIFY)
+ _TIF_NEED_RESCHED_MASK | _TIF_USER_RETURN_NOTIFY)
static void exit_to_usermode_loop(struct pt_regs *regs, u32 cached_flags)
{
/*
* In order to return to user mode, we need to have IRQs off with
* none of _TIF_SIGPENDING, _TIF_NOTIFY_RESUME, _TIF_USER_RETURN_NOTIFY,
- * _TIF_UPROBE, or _TIF_NEED_RESCHED set. Several of these flags
+ * _TIF_UPROBE, or _TIF_NEED_RESCHED_MASK set. Several of these flags
* can be set at any time on preemptable kernels if we have IRQs on,
* so we need to loop. Disabling preemption wouldn't help: doing the
* work to clear some of the flags can sleep.
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3542,6 +3542,15 @@ asmlinkage __visible void __sched notrac
if (likely(!preemptible()))
return;
+#ifdef CONFIG_PREEMPT_LAZY
+ /*
+ * Check for lazy preemption
+ */
+ if (current_thread_info()->preempt_lazy_count &&
+ !test_thread_flag(TIF_NEED_RESCHED))
+ return;
+#endif
+
preempt_schedule_common();
}
NOKPROBE_SYMBOL(preempt_schedule);
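For readers without an -rt tree at hand, the _TIF_NEED_RESCHED_MASK used in
the hunk above is, roughly, the following.  The real definition lives in the
arch thread_info header of the PREEMPT_LAZY series; treat the exact spelling
here as an assumption, not a quote:

/* sketch of the -rt definition (assumed form) */
#ifdef CONFIG_PREEMPT_LAZY
# define _TIF_NEED_RESCHED_MASK (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY)
#else
# define _TIF_NEED_RESCHED_MASK (_TIF_NEED_RESCHED)
#endif

With that, the exit-to-usermode loop also fires when only the lazy flag is
set, which is what the _TIF_ALLWORK_MASK based loop covered in v4.1 (see
Sebastian's reply below).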
* Re: [rfc patch v4.4-rt2] sched: fix up preempt lazy forward port
2016-01-18 9:08 [rfc patch v4.4-rt2] sched: fix up preempt lazy forward port Mike Galbraith
@ 2016-01-18 20:18 ` Sebastian Andrzej Siewior
2016-01-19 2:29 ` Mike Galbraith
0 siblings, 1 reply; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-01-18 20:18 UTC (permalink / raw)
To: Mike Galbraith; +Cc: Thomas Gleixner, LKML, linux-rt-users
* Mike Galbraith | 2016-01-18 10:08:23 [+0100]:
>--- a/arch/x86/entry/common.c
>+++ b/arch/x86/entry/common.c
>@@ -220,14 +220,14 @@ long syscall_trace_enter(struct pt_regs
>
> #define EXIT_TO_USERMODE_LOOP_FLAGS \
> (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | _TIF_UPROBE | \
>- _TIF_NEED_RESCHED | _TIF_USER_RETURN_NOTIFY)
>+ _TIF_NEED_RESCHED_MASK | _TIF_USER_RETURN_NOTIFY)
>
If I read this right, the loop where this define is used checked
_TIF_ALLWORK_MASK in v4.1, of which _TIF_NEED_RESCHED_MASK was part.
Adding this will reassemble the old behaviour.
…
>--- a/kernel/sched/core.c
>+++ b/kernel/sched/core.c
>@@ -3542,6 +3542,15 @@ asmlinkage __visible void __sched notrac
> if (likely(!preemptible()))
> return;
>
>+#ifdef CONFIG_PREEMPT_LAZY
>+ /*
>+ * Check for lazy preemption
>+ */
>+ if (current_thread_info()->preempt_lazy_count &&
>+ !test_thread_flag(TIF_NEED_RESCHED))
>+ return;
>+#endif
>+
And this is a new piece. So you forbid tasks from leaving the CPU if
lazy_count > 0. Let me look closer at why this is happening and whether
this is v4.1 … v4.4 or not.
> preempt_schedule_common();
> }
> NOKPROBE_SYMBOL(preempt_schedule);
Sebastian
* Re: [rfc patch v4.4-rt2] sched: fix up preempt lazy forward port
2016-01-18 20:18 ` Sebastian Andrzej Siewior
@ 2016-01-19 2:29 ` Mike Galbraith
2016-01-21 12:54 ` Sebastian Andrzej Siewior
0 siblings, 1 reply; 5+ messages in thread
From: Mike Galbraith @ 2016-01-19 2:29 UTC (permalink / raw)
To: Sebastian Andrzej Siewior; +Cc: Thomas Gleixner, LKML, linux-rt-users
On Mon, 2016-01-18 at 21:18 +0100, Sebastian Andrzej Siewior wrote:
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -3542,6 +3542,15 @@ asmlinkage __visible void __sched notrac
> > if (likely(!preemptible()))
> > return;
> >
> > +#ifdef CONFIG_PREEMPT_LAZY
> > + /*
> > + * Check for lazy preemption
> > + */
> > + if (current_thread_info()->preempt_lazy_count &&
> > + !test_thread_flag(TIF_NEED_RESCHED))
> > + return;
> > +#endif
> > +
>
> And this is a new piece. So you forbid that tasks leave the CPU if
> lazy_count > 0. Let me look closed why this is happening and if this is
> v4.1 … v4.4 or not.
We should probably just add the lazy bits to preemptible().
-Mike
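A sketch of what "add the lazy bits to preemptible()" could look like.  This
is hypothetical, not what was merged (the applied fix keeps preemptible()
untouched and adds a separate preemptible_lazy() helper, see the next
message), and the helper names below are made up:

/* hypothetical alternative, not the applied patch */
#ifdef CONFIG_PREEMPT_LAZY
# define preemptible_lazy_ok()                                  \
        (test_thread_flag(TIF_NEED_RESCHED) ||                  \
         !current_thread_info()->preempt_lazy_count)
#else
# define preemptible_lazy_ok()  1
#endif

/* "preemptible including the lazy bits", as suggested above */
#define preemptible_rt()        (preemptible() && preemptible_lazy_ok())

Folding the check into preemptible() itself would touch every user of that
macro, while the helper the merged patch adds only affects the two
preempt_schedule paths.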
* Re: [rfc patch v4.4-rt2] sched: fix up preempt lazy forward port
2016-01-19 2:29 ` Mike Galbraith
@ 2016-01-21 12:54 ` Sebastian Andrzej Siewior
2016-02-21 15:11 ` [patch] sched,rt: __always_inline preemptible_lazy() Mike Galbraith
0 siblings, 1 reply; 5+ messages in thread
From: Sebastian Andrzej Siewior @ 2016-01-21 12:54 UTC (permalink / raw)
To: Mike Galbraith; +Cc: Thomas Gleixner, LKML, linux-rt-users
* Mike Galbraith | 2016-01-19 03:29:57 [+0100]:
>> And this is a new piece. So you forbid that tasks leave the CPU if
>> lazy_count > 0. Let me look closed why this is happening and if this is
>> v4.1 … v4.4 or not.
>
>We should probably just add the lazy bits to preemptible().
Subject: preempt-lazy: Add the lazy-preemption check to preempt_schedule()
Probably during the rebase onto v4.1 this check got moved into the less commonly used
preempt_schedule_notrace(). This patch ensures that both functions use it.
Reported-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
kernel/sched/core.c | 36 ++++++++++++++++++++++++++++--------
1 file changed, 28 insertions(+), 8 deletions(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3461,6 +3461,30 @@ static void __sched notrace preempt_sche
} while (need_resched());
}
+#ifdef CONFIG_PREEMPT_LAZY
+/*
+ * If TIF_NEED_RESCHED is then we allow to be scheduled away since this is
+ * set by a RT task. Oterwise we try to avoid beeing scheduled out as long as
+ * preempt_lazy_count counter >0.
+ */
+static int preemptible_lazy(void)
+{
+ if (test_thread_flag(TIF_NEED_RESCHED))
+ return 1;
+ if (current_thread_info()->preempt_lazy_count)
+ return 0;
+ return 1;
+}
+
+#else
+
+static int preemptible_lazy(void)
+{
+ return 1;
+}
+
+#endif
+
#ifdef CONFIG_PREEMPT
/*
* this is the entry point to schedule() from in-kernel preemption
@@ -3475,6 +3499,8 @@ asmlinkage __visible void __sched notrac
*/
if (likely(!preemptible()))
return;
+ if (!preemptible_lazy())
+ return;
preempt_schedule_common();
}
@@ -3501,15 +3527,9 @@ asmlinkage __visible void __sched notrac
if (likely(!preemptible()))
return;
-
-#ifdef CONFIG_PREEMPT_LAZY
- /*
- * Check for lazy preemption
- */
- if (current_thread_info()->preempt_lazy_count &&
- !test_thread_flag(TIF_NEED_RESCHED))
+ if (!preemptible_lazy())
return;
-#endif
+
do {
preempt_disable_notrace();
/*
* [patch] sched,rt: __always_inline preemptible_lazy()
2016-01-21 12:54 ` Sebastian Andrzej Siewior
@ 2016-02-21 15:11 ` Mike Galbraith
0 siblings, 0 replies; 5+ messages in thread
From: Mike Galbraith @ 2016-02-21 15:11 UTC (permalink / raw)
To: Sebastian Andrzej Siewior; +Cc: Thomas Gleixner, LKML, linux-rt-users
homer: # nm kernel/sched/core.o|grep preemptible_lazy
00000000000000b5 t preemptible_lazy
echo wakeup_rt > current_tracer ==> Welcome to infinity.
Signed-off-bx: Mike Galbraith <umgwanakikbuti@gmail.com>
---
kernel/sched/core.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3469,7 +3469,7 @@ static void __sched notrace preempt_sche
* set by a RT task. Oterwise we try to avoid beeing scheduled out as long as
* preempt_lazy_count counter >0.
*/
-static int preemptible_lazy(void)
+static __always_inline int preemptible_lazy(void)
{
if (test_thread_flag(TIF_NEED_RESCHED))
return 1;
end of thread, other threads: [~2016-02-26 10:17 UTC | newest]
Thread overview: 5+ messages
2016-02-22 3:36 [patch] sched,rt: __always_inline preemptible_lazy() Hillf Danton
2016-02-22 4:03 ` Mike Galbraith
2016-02-22 5:16 ` Hillf Danton
2016-02-26 10:17 ` Sebastian Andrzej Siewior
-- strict thread matches above, loose matches on Subject: below --
2016-01-18 9:08 [rfc patch v4.4-rt2] sched: fix up preempt lazy forward port Mike Galbraith
2016-01-18 20:18 ` Sebastian Andrzej Siewior
2016-01-19 2:29 ` Mike Galbraith
2016-01-21 12:54 ` Sebastian Andrzej Siewior
2016-02-21 15:11 ` [patch] sched,rt: __always_inline preemptible_lazy() Mike Galbraith