* WARN_ON_ONCE(in_nmi()) hit in irq_work_queue_on
From: Dave Jones @ 2014-08-06 16:21 UTC
To: Linux Kernel
Cc: Peter Zijlstra

WARNING: CPU: 3 PID: 18062 at kernel/irq_work.c:72 irq_work_queue_on+0x11e/0x140()
CPU: 3 PID: 18062 Comm: trinity-subchil Not tainted 3.16.0+ #34
 0000000000000009 00000000903774d1 ffff880244e06c00 ffffffff9a7f1e37
 0000000000000000 ffff880244e06c38 ffffffff9a0791dd ffff880244fce180
 0000000000000003 ffff880244e06d58 ffff880244e06ef8 0000000000000000
Call Trace:
 <NMI>  [<ffffffff9a7f1e37>] dump_stack+0x4e/0x7a
 [<ffffffff9a0791dd>] warn_slowpath_common+0x7d/0xa0
 [<ffffffff9a07930a>] warn_slowpath_null+0x1a/0x20
 [<ffffffff9a17ca1e>] irq_work_queue_on+0x11e/0x140
 [<ffffffff9a10a2c7>] tick_nohz_full_kick_cpu+0x57/0x90
 [<ffffffff9a186cd5>] __perf_event_overflow+0x275/0x350
 [<ffffffff9a184f80>] ? perf_event_task_disable+0xa0/0xa0
 [<ffffffff9a01a4cf>] ? x86_perf_event_set_period+0xbf/0x150
 [<ffffffff9a187934>] perf_event_overflow+0x14/0x20
 [<ffffffff9a020386>] intel_pmu_handle_irq+0x206/0x410
 [<ffffffff9a0b54d3>] ? arch_vtime_task_switch+0x63/0x130
 [<ffffffff9a01937b>] perf_event_nmi_handler+0x2b/0x50
 [<ffffffff9a007b72>] nmi_handle+0xd2/0x390
 [<ffffffff9a007aa5>] ? nmi_handle+0x5/0x390
 [<ffffffff9a0d131b>] ? lock_release+0xab/0x330
 [<ffffffff9a008062>] default_do_nmi+0x72/0x1c0
 [<ffffffff9a0c925f>] ? cpuacct_account_field+0xcf/0x200
 [<ffffffff9a008268>] do_nmi+0xb8/0x100
 [<ffffffff9a7ff66a>] end_repeat_nmi+0x1e/0x2e
 [<ffffffff9a0c925f>] ? cpuacct_account_field+0xcf/0x200
 [<ffffffff9a0d131b>] ? lock_release+0xab/0x330
 [<ffffffff9a0d131b>] ? lock_release+0xab/0x330
 [<ffffffff9a0d131b>] ? lock_release+0xab/0x330
 <<EOE>>  [<ffffffff9a0c9277>] cpuacct_account_field+0xe7/0x200
 [<ffffffff9a0c9195>] ? cpuacct_account_field+0x5/0x200
 [<ffffffff9a0b4d18>] account_system_time+0x98/0x1a0
 [<ffffffff9a0b4e4e>] __vtime_account_system+0x2e/0x40
 [<ffffffff9a0b5419>] vtime_user_enter+0x59/0x90
 [<ffffffff9a18e183>] ? context_tracking_user_enter+0xd3/0x1f0
 [<ffffffff9a18e183>] context_tracking_user_enter+0xd3/0x1f0
 [<ffffffff9a013815>] syscall_trace_leave+0xa5/0x210
 [<ffffffff9a7fd8bf>] int_check_syscall_exit_work+0x34/0x3d
---[ end trace 73831bdc3ef3ba75 ]---
* Re: WARN_ON_ONCE(in_nmi()) hit in irq_work_queue_on
From: Peter Zijlstra @ 2014-08-06 16:46 UTC
To: Dave Jones, Linux Kernel
Cc: Frederic Weisbecker

On Wed, Aug 06, 2014 at 12:21:13PM -0400, Dave Jones wrote:
> WARNING: CPU: 3 PID: 18062 at kernel/irq_work.c:72 irq_work_queue_on+0x11e/0x140()
> CPU: 3 PID: 18062 Comm: trinity-subchil Not tainted 3.16.0+ #34
> [ full trace trimmed; see the report above ]

Urgh, Frederic, any idea how that happened?
* Re: WARN_ON_ONCE(in_nmi()) hit in irq_work_queue_on
From: Peter Zijlstra @ 2014-08-06 16:56 UTC
To: Dave Jones, Linux Kernel
Cc: Frederic Weisbecker

On Wed, Aug 06, 2014 at 06:46:33PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 06, 2014 at 12:21:13PM -0400, Dave Jones wrote:
> > WARNING: CPU: 3 PID: 18062 at kernel/irq_work.c:72 irq_work_queue_on+0x11e/0x140()
> > [ full trace trimmed; see the report above ]
>
> Urgh, Frederic, any idea how that happened?

Sigh, that's d84153d6c96f61a, so that's been there a while, and been
broken equally long.

So this is where we run a low period (!freq) hardware event on a
nohz_full cpu or so? And because it throttles, we need to kick the tick
into action to unthrottle it.

I suppose there's a good reason I never build with that nohz_full
nonsense enabled :/

Not sure how we should go fix that; you can't just issue random IPIs
from NMI context.
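The constraint Peter points out here — NMI code cannot take locks or send arbitrary IPIs — is usually handled by having the NMI-level path do nothing but record a request in lock-free state, leaving the real work to a later, safe context. A minimal userspace sketch of that pattern follows; the names are hypothetical and C11 atomics stand in for the kernel's primitives:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical flag: "the tick needs a kick". Zero-initialized. */
static atomic_bool kick_pending;

/* Runs in the restricted ("NMI-like") context: it may only touch
 * lock-free state, so it merely records that work is needed. */
static void nmi_level_handler(void)
{
    atomic_store_explicit(&kick_pending, true, memory_order_release);
}

/* Runs later in a safe context (e.g. IRQ-work): consumes the flag
 * exactly once and would perform the real work there. */
static bool process_deferred_kick(void)
{
    bool expected = true;
    if (atomic_compare_exchange_strong(&kick_pending, &expected, false))
        return true;   /* we owned the flag; do the real kick here */
    return false;      /* nothing was pending */
}
```

The compare-and-swap ensures the flag is consumed by exactly one processing pass even if the handler fires again concurrently.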
* Re: WARN_ON_ONCE(in_nmi()) hit in irq_work_queue_on
From: Peter Zijlstra @ 2014-08-06 17:01 UTC
To: Dave Jones, Linux Kernel
Cc: Frederic Weisbecker

> Sigh, that's d84153d6c96f61a, so that's been there a while, and been
> broken equally long.
>
> So this is where we run a low period (!freq) hardware event on a
> nohz_full cpu or so? And because it throttles, we need to kick the tick
> into action to unthrottle it.
>
> I suppose there's a good reason I never build with that nohz_full
> nonsense enabled :/
>
> Not sure how we should go fix that; you can't just issue random IPIs
> from NMI context.

OK, thinking one more second would've done it, how about so?

---
 include/linux/perf_event.h | 7 ++++---
 kernel/events/core.c       | 8 +++++++-
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index 707617a8c0f6..177411e3ffc4 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -421,9 +421,10 @@ struct perf_event {
 	struct fasync_struct		*fasync;
 
 	/* delayed work for NMIs and such */
-	int				pending_wakeup;
-	int				pending_kill;
-	int				pending_disable;
+	int				pending_kill : 16;
+	int				pending_wakeup : 1;
+	int				pending_disable : 1;
+	int				pending_nohz_kick : 1;
 	struct irq_work			pending;
 
 	atomic_t			event_limit;
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 1cf24b3e42ec..e95fca20e26f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -4258,6 +4258,11 @@ static void perf_pending_event(struct irq_work *entry)
 		event->pending_wakeup = 0;
 		perf_event_wakeup(event);
 	}
+
+	if (event->pending_nohz_kick) {
+		event->pending_nohz_kick = 0;
+		tick_nohz_full_kick();
+	}
 }
 
 /*
@@ -5431,7 +5436,8 @@ static int __perf_event_overflow(struct perf_event *event,
 			__this_cpu_inc(perf_throttled_count);
 			hwc->interrupts = MAX_INTERRUPTS;
 			perf_log_throttle(event, 0);
-			tick_nohz_full_kick();
+			event->pending_nohz_kick = 1;
+			irq_work_queue(&event->pending);
 			ret = 1;
 		}
 	}
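The structure of this patch can be illustrated outside the kernel: the pending bits live next to each other in the event, the NMI path only sets a bit and queues the event's existing irq_work, and a single deferred handler clears and acts on each bit. A rough userspace model, with hypothetical names (`kicks_delivered` stands in for the real `tick_nohz_full_kick()` call):

```c
#include <assert.h>

/* Hypothetical stand-in for the perf_event pending state: several
 * "work needed" bits packed next to the kill signal number, as in
 * the patch above. */
struct pending_state {
    int kill      : 16;
    int wakeup    : 1;
    int disable   : 1;
    int nohz_kick : 1;
};

static int kicks_delivered;

/* Models perf_pending_event(): runs from IRQ-work context and handles
 * whichever bits were set, clearing each one before acting on it. */
static void process_pending(struct pending_state *p)
{
    if (p->disable) {
        p->disable = 0;
        /* would disable the event here */
    }
    if (p->wakeup) {
        p->wakeup = 0;
        /* would wake up waiters here */
    }
    if (p->nohz_kick) {
        p->nohz_kick = 0;
        kicks_delivered++;   /* models tick_nohz_full_kick() */
    }
}
```

Clearing each bit before acting mirrors the kernel code: a new request arriving while the handler runs is not lost, it just re-queues the work.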
* Re: WARN_ON_ONCE(in_nmi()) hit in irq_work_queue_on
From: Frederic Weisbecker @ 2014-08-06 23:44 UTC
To: Peter Zijlstra
Cc: Dave Jones, Linux Kernel

On Wed, Aug 06, 2014 at 07:01:51PM +0200, Peter Zijlstra wrote:
> > Not sure how we should go fix that; you can't just issue random IPIs
> > from NMI context.
>
> OK, thinking one more second would've done it, how about so?
>
> [ patch trimmed; it sets event->pending_nohz_kick and queues the
>   event's own irq_work, which then calls tick_nohz_full_kick() ]

In fact the problem has arisen since the recent irq work patches I did.
There I changed tick_nohz_full_kick() to use irq_work_queue_on() instead
of irq_work_queue(), so it has become NMI-unsafe by accident.

So I'd rather suggest this instead of queuing two levels of irq_work:

diff --git a/include/linux/tick.h b/include/linux/tick.h
index 8a4987f..fed88b5 100644
--- a/include/linux/tick.h
+++ b/include/linux/tick.h
@@ -181,13 +181,8 @@ static inline bool tick_nohz_full_cpu(int cpu)
 
 extern void tick_nohz_init(void);
 extern void __tick_nohz_full_check(void);
+extern void tick_nohz_full_kick(void);
 extern void tick_nohz_full_kick_cpu(int cpu);
-
-static inline void tick_nohz_full_kick(void)
-{
-	tick_nohz_full_kick_cpu(smp_processor_id());
-}
-
 extern void tick_nohz_full_kick_all(void);
 extern void __tick_nohz_task_switch(struct task_struct *tsk);
 #else
diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
index d4ccb96..8e0d347 100644
--- a/kernel/time/tick-sched.c
+++ b/kernel/time/tick-sched.c
@@ -225,6 +225,14 @@ static DEFINE_PER_CPU(struct irq_work, nohz_full_kick_work) = {
 	.func = nohz_full_kick_work_func,
 };
 
+void tick_nohz_full_kick(void)
+{
+	if (!tick_nohz_full_cpu(smp_processor_id()))
+		return;
+
+	irq_work_queue(&__get_cpu_var(nohz_full_kick_work));
+}
+
 /*
  * Kick the CPU if it's full dynticks in order to force it to
  * re-evaluate its dependency on the tick and restart it if necessary.
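What makes plain `irq_work_queue()` safe from NMI context, where `irq_work_queue_on()` is not, is that claiming the local work item is a single lock-free compare-and-swap followed by a self-IPI — no locks, no cross-CPU signalling path. A simplified userspace model of the claim step, with hypothetical names and C11 atomics in place of the kernel's cmpxchg:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

enum { IRQ_WORK_PENDING = 1 };

struct work_item {
    atomic_int flags;
};

static int self_ipis_sent;

/* Models irq_work_queue()'s claim step. The claim is one cmpxchg, so
 * it is safe even if this call interrupted another claim on the same
 * item: exactly one caller wins the PENDING bit and raises the
 * (strictly local) interrupt. */
static bool claim_and_queue(struct work_item *work)
{
    int expected = 0;
    if (!atomic_compare_exchange_strong(&work->flags, &expected,
                                        IRQ_WORK_PENDING))
        return false;        /* already claimed, nothing to do */
    self_ipis_sent++;        /* stands in for arch_irq_work_raise() */
    return true;
}
```

Queuing the same item twice before it runs is therefore harmless, which is exactly what an NMI interrupting an in-progress queue operation requires.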
* Re: WARN_ON_ONCE(in_nmi()) hit in irq_work_queue_on
From: Peter Zijlstra @ 2014-08-07 8:13 UTC
To: Frederic Weisbecker
Cc: Dave Jones, Linux Kernel

On Thu, Aug 07, 2014 at 01:44:58AM +0200, Frederic Weisbecker wrote:
> In fact the problem has arisen since the recent irq work patches I did.

No, those just added the WARN; previously we sent the resched IPI, and
that's equally wrong from NMI context.

> There I changed tick_nohz_full_kick() to use irq_work_queue_on() instead
> of irq_work_queue(), so it has become NMI-unsafe by accident.
>
> So I'd rather suggest this instead of queuing two levels of irq_work:
> [ patch trimmed; it reimplements tick_nohz_full_kick() on top of a
>   per-cpu irq_work queued with irq_work_queue() ]

Indeed, that's better. Thanks!
* Re: WARN_ON_ONCE(in_nmi()) hit in irq_work_queue_on
From: Frederic Weisbecker @ 2014-08-07 12:58 UTC
To: Peter Zijlstra
Cc: Dave Jones, Linux Kernel

On Thu, Aug 07, 2014 at 10:13:21AM +0200, Peter Zijlstra wrote:
> On Thu, Aug 07, 2014 at 01:44:58AM +0200, Frederic Weisbecker wrote:
> > In fact the problem has arisen since the recent irq work patches I did.
>
> No, those just added the WARN; previously we sent the resched IPI, and
> that's equally wrong from NMI context.

Well, the scheduler IPI was rather used for remote kicks before we had
remote irq work. That includes local kicks as well, since the caller
could need to kick anywhere, as in inc_nr_running().

But for strictly local kicks, as in perf, we were using
tick_nohz_full_kick(), which has been using irq work for a while. It got
broken when we changed it to call irq_work_queue_on() instead of
irq_work_queue().
* Re: WARN_ON_ONCE(in_nmi()) hit in irq_work_queue_on
From: Peter Zijlstra @ 2014-08-07 13:15 UTC
To: Frederic Weisbecker
Cc: Dave Jones, Linux Kernel

On Thu, Aug 07, 2014 at 02:58:55PM +0200, Frederic Weisbecker wrote:
> Well, the scheduler IPI was rather used for remote kicks before we had
> remote irq work. That includes local kicks as well, since the caller
> could need to kick anywhere, as in inc_nr_running().
>
> But for strictly local kicks, as in perf, we were using
> tick_nohz_full_kick(), which has been using irq work for a while. It got
> broken when we changed it to call irq_work_queue_on() instead of
> irq_work_queue().

OK, clearly I made a mess of things in my head ;-)
* Re: WARN_ON_ONCE(in_nmi()) hit in irq_work_queue_on
From: Frederic Weisbecker @ 2014-08-07 13:18 UTC
To: Peter Zijlstra
Cc: Dave Jones, Linux Kernel

On Thu, Aug 07, 2014 at 03:15:36PM +0200, Peter Zijlstra wrote:
> OK, clearly I made a mess of things in my head ;-)

I'm a bit responsible for that mess, since I created quite some flavours
of nohz kicks all around :) But now they should be more unified.
Thread overview: 9+ messages
2014-08-06 16:21 WARN_ON_ONCE(in_nmi()) hit in irq_work_queue_on -- Dave Jones
2014-08-06 16:46 ` Peter Zijlstra
2014-08-06 16:56   ` Peter Zijlstra
2014-08-06 17:01     ` Peter Zijlstra
2014-08-06 23:44       ` Frederic Weisbecker
2014-08-07  8:13         ` Peter Zijlstra
2014-08-07 12:58           ` Frederic Weisbecker
2014-08-07 13:15             ` Peter Zijlstra
2014-08-07 13:18               ` Frederic Weisbecker