* lockdep and preemptoff tracer are fighting again.
@ 2009-01-22 20:40 Steven Rostedt
2009-01-22 21:08 ` Peter Zijlstra
0 siblings, 1 reply; 5+ messages in thread
From: Steven Rostedt @ 2009-01-22 20:40 UTC (permalink / raw)
To: Ingo Molnar, Peter Zijlstra; +Cc: LKML
Hey guys, I can consistently hit this bug when running the preempt tracer:
------------[ cut here ]------------
WARNING: at kernel/lockdep.c:2899 check_flags+0x154/0x18b()
Hardware name: Precision WorkStation 470
Modules linked in: radeon drm autofs4 hidp rfcomm l2cap bluetooth sunrpc
nf_conntrack_netbios_ns ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 xt_state
iptable_filter ip_tables ip6t_REJECT xt_tcpudp ip6table_filter ip6_tables
x_tables ipv6 sbs sbshc battery ac snd_intel8x0 snd_ac97_codec sg ac97_bus
snd_seq_dummy snd_seq_oss snd_seq_midi_event floppy snd_seq snd_seq_device
snd_pcm_oss ide_cd_mod snd_mixer_oss cdrom snd_pcm e1000 serio_raw snd_timer
snd i2c_i801 button soundcore ata_generic i2c_core iTCO_wdt snd_page_alloc
e752x_edac iTCO_vendor_support shpchp edac_core pcspkr dm_snapshot dm_zero
dm_mirror dm_region_hash dm_log dm_mod ata_piix libata sd_mod scsi_mod ext3
jbd ehci_hcd ohci_hcd uhci_hcd
Pid: 3855, comm: sshd Not tainted 2.6.29-rc2-tip #366
Call Trace:
[<ffffffff80245e9f>] warn_slowpath+0xd8/0xf7
[<ffffffff80297154>] ? ring_buffer_unlock_commit+0x24/0xa3
[<ffffffff80299501>] ? trace_function+0xad/0xbc
[<ffffffff8025c1ff>] ? remove_wait_queue+0x4d/0x52
[<ffffffff8029e5dc>] ? trace_preempt_on+0x113/0x130
[<ffffffff8029e4ba>] ? check_critical_timing+0x12e/0x13d
[<ffffffff8025c1ff>] ? remove_wait_queue+0x4d/0x52
[<ffffffff8029f75b>] ? stack_trace_call+0x249/0x25d
[<ffffffff802da06e>] ? fput+0x4/0x1c
[<ffffffff802e7edc>] ? free_poll_entry+0x26/0x2a
[<ffffffff802da06e>] ? fput+0x4/0x1c
[<ffffffff8020c2d6>] ? ftrace_call+0x5/0x2b
[<ffffffff8029f75b>] ? stack_trace_call+0x249/0x25d
[<ffffffff80543dec>] ? _spin_lock_irqsave+0xb/0x59
[<ffffffff802699bf>] check_flags+0x154/0x18b
[<ffffffff8026de66>] lock_acquire+0x41/0xa9
[<ffffffff80543dfd>] ? _spin_lock_irqsave+0x1c/0x59
[<ffffffff80543e27>] _spin_lock_irqsave+0x46/0x59
[<ffffffff8029519c>] ? ring_buffer_reset_cpu+0x31/0x6b
[<ffffffff8029519c>] ring_buffer_reset_cpu+0x31/0x6b
[<ffffffff80299ec6>] tracing_reset+0x46/0x9b
[<ffffffff8029e33f>] trace_preempt_off+0x100/0x14d
[<ffffffff8024b491>] ? local_bh_disable+0x12/0x14
[<ffffffff8024b44f>] ? __local_bh_disable+0xc0/0xf0
[<ffffffff8024b491>] ? local_bh_disable+0x12/0x14
[<ffffffff80543b95>] ? _spin_lock_bh+0x16/0x4c
[<ffffffff80546df1>] add_preempt_count+0x12d/0x132
[<ffffffff8024b44f>] __local_bh_disable+0xc0/0xf0
[<ffffffff8024b491>] local_bh_disable+0x12/0x14
[<ffffffff80543b95>] _spin_lock_bh+0x16/0x4c
[<ffffffff804ab49a>] lock_sock_nested+0x28/0xe5
[<ffffffff80292c90>] ? ftrace_list_func+0x24/0x39
[<ffffffff8020c2d6>] ? ftrace_call+0x5/0x2b
[<ffffffff804eff87>] tcp_sendmsg+0x27/0xac2
[<ffffffff803556c7>] ? cap_socket_sendmsg+0x4/0xd
[<ffffffff80292c90>] ? ftrace_list_func+0x24/0x39
[<ffffffff8020c2d6>] ? ftrace_call+0x5/0x2b
[<ffffffff804a82b0>] sock_aio_write+0x109/0x11d
[<ffffffff8029f75b>] ? stack_trace_call+0x249/0x25d
[<ffffffff8020c2d6>] ? ftrace_call+0x5/0x2b
[<ffffffff802d8881>] do_sync_write+0xf0/0x137
[<ffffffff8025c002>] ? autoremove_wake_function+0x0/0x3d
[<ffffffff8020c2d6>] ? ftrace_call+0x5/0x2b
[<ffffffff803553ca>] ? cap_file_permission+0x9/0xd
[<ffffffff80353c88>] ? security_file_permission+0x16/0x18
[<ffffffff802d921c>] vfs_write+0x103/0x17d
[<ffffffff802d978f>] sys_write+0x4e/0x8c
[<ffffffff8020c64b>] system_call_fastpath+0x16/0x1b
---[ end trace 713cc9df66b54d6e ]---
The cause is simple. The following happens:
local_bh_disable is called, which calls __local_bh_disable, which does an
add_preempt_count(SOFTIRQ_OFFSET).
Thus, add_preempt_count adds the SOFTIRQ_OFFSET to the preempt_count of
current, and then calls trace_preempt_off.
This goes into the preempt tracer which calls start_critical_timing, and
this will reset the ring buffer for the CPU, because this is the start of
the trace.
ring_buffer_reset_cpu() calls spin_lock_irqsave() which eventually calls
spin_acquire which is lock_acquire in lockdep.
lock_acquire calls check_flags which performs this check:
if (!hardirq_count()) {
if (softirq_count())
DEBUG_LOCKS_WARN_ON(current->softirqs_enabled);
else
DEBUG_LOCKS_WARN_ON(!current->softirqs_enabled);
}
With this:
#define hardirq_count() (preempt_count() & HARDIRQ_MASK)
#define softirq_count() (preempt_count() & SOFTIRQ_MASK)
The hardirq_count() returns false, but the softirq_count() returns true and
softirqs_enabled is also still true. The problem lies in local_bh_disable:
static void __local_bh_disable(unsigned long ip)
{
unsigned long flags;
WARN_ON_ONCE(in_irq());
raw_local_irq_save(flags);
add_preempt_count(SOFTIRQ_OFFSET); <-- here softirq_count is true
/*
* Were softirqs turned off above:
*/
if (softirq_count() == SOFTIRQ_OFFSET)
trace_softirqs_off(ip); <-- here softirqs_enabled is false
raw_local_irq_restore(flags);
}
If we call into lockdep between softirq_count == true and
softirqs_enabled == false, we hit the WARN_ON.
The trace_softirqs_off() sets softirqs_enabled to false. But because the
tracer calls into lockdep between the two updates, we hit this warning.
If we try to swap the trace_softirqs_off with the add_preempt_count, we hit
another warning: a check in the trace_softirqs_off code that makes sure
softirq_count is already true.
We need a way for lockdep and the preempt tracer to talk to each other, so
that lockdep knows it should not warn here.
Any ideas?
-- Steve
* Re: lockdep and preemptoff tracer are fighting again.
2009-01-22 20:40 lockdep and preemptoff tracer are fighting again Steven Rostedt
@ 2009-01-22 21:08 ` Peter Zijlstra
2009-01-22 22:27 ` Steven Rostedt
0 siblings, 1 reply; 5+ messages in thread
From: Peter Zijlstra @ 2009-01-22 21:08 UTC (permalink / raw)
To: Steven Rostedt; +Cc: Ingo Molnar, LKML
On Thu, 2009-01-22 at 15:40 -0500, Steven Rostedt wrote:
>
> Hey guys, I can consistently hit this bug when running the preempt tracer:
>
> ------------[ cut here ]------------
> WARNING: at kernel/lockdep.c:2899 check_flags+0x154/0x18b()
> Hardware name: Precision WorkStation 470
> Modules linked in: radeon drm autofs4 hidp rfcomm l2cap bluetooth sunrpc
> nf_conntrack_netbios_ns ipt_REJECT nf_conntrack_ipv4 nf_defrag_ipv4 xt_state
> iptable_filter ip_tables ip6t_REJECT xt_tcpudp ip6table_filter ip6_tables
> x_tables ipv6 sbs sbshc battery ac snd_intel8x0 snd_ac97_codec sg ac97_bus
> snd_seq_dummy snd_seq_oss snd_seq_midi_event floppy snd_seq snd_seq_device
> snd_pcm_oss ide_cd_mod snd_mixer_oss cdrom snd_pcm e1000 serio_raw snd_timer
> snd i2c_i801 button soundcore ata_generic i2c_core iTCO_wdt snd_page_alloc
> e752x_edac iTCO_vendor_support shpchp edac_core pcspkr dm_snapshot dm_zero
> dm_mirror dm_region_hash dm_log dm_mod ata_piix libata sd_mod scsi_mod ext3
> jbd ehci_hcd ohci_hcd uhci_hcd
> Pid: 3855, comm: sshd Not tainted 2.6.29-rc2-tip #366
> Call Trace:
> [<ffffffff80245e9f>] warn_slowpath+0xd8/0xf7
> [<ffffffff80297154>] ? ring_buffer_unlock_commit+0x24/0xa3
> [<ffffffff80299501>] ? trace_function+0xad/0xbc
> [<ffffffff8025c1ff>] ? remove_wait_queue+0x4d/0x52
> [<ffffffff8029e5dc>] ? trace_preempt_on+0x113/0x130
> [<ffffffff8029e4ba>] ? check_critical_timing+0x12e/0x13d
> [<ffffffff8025c1ff>] ? remove_wait_queue+0x4d/0x52
> [<ffffffff8029f75b>] ? stack_trace_call+0x249/0x25d
> [<ffffffff802da06e>] ? fput+0x4/0x1c
> [<ffffffff802e7edc>] ? free_poll_entry+0x26/0x2a
> [<ffffffff802da06e>] ? fput+0x4/0x1c
> [<ffffffff8020c2d6>] ? ftrace_call+0x5/0x2b
> [<ffffffff8029f75b>] ? stack_trace_call+0x249/0x25d
> [<ffffffff80543dec>] ? _spin_lock_irqsave+0xb/0x59
> [<ffffffff802699bf>] check_flags+0x154/0x18b
> [<ffffffff8026de66>] lock_acquire+0x41/0xa9
> [<ffffffff80543dfd>] ? _spin_lock_irqsave+0x1c/0x59
> [<ffffffff80543e27>] _spin_lock_irqsave+0x46/0x59
> [<ffffffff8029519c>] ? ring_buffer_reset_cpu+0x31/0x6b
> [<ffffffff8029519c>] ring_buffer_reset_cpu+0x31/0x6b
> [<ffffffff80299ec6>] tracing_reset+0x46/0x9b
> [<ffffffff8029e33f>] trace_preempt_off+0x100/0x14d
> [<ffffffff8024b491>] ? local_bh_disable+0x12/0x14
> [<ffffffff8024b44f>] ? __local_bh_disable+0xc0/0xf0
> [<ffffffff8024b491>] ? local_bh_disable+0x12/0x14
> [<ffffffff80543b95>] ? _spin_lock_bh+0x16/0x4c
> [<ffffffff80546df1>] add_preempt_count+0x12d/0x132
> [<ffffffff8024b44f>] __local_bh_disable+0xc0/0xf0
> [<ffffffff8024b491>] local_bh_disable+0x12/0x14
> [<ffffffff80543b95>] _spin_lock_bh+0x16/0x4c
> [<ffffffff804ab49a>] lock_sock_nested+0x28/0xe5
> [<ffffffff80292c90>] ? ftrace_list_func+0x24/0x39
> [<ffffffff8020c2d6>] ? ftrace_call+0x5/0x2b
> [<ffffffff804eff87>] tcp_sendmsg+0x27/0xac2
> [<ffffffff803556c7>] ? cap_socket_sendmsg+0x4/0xd
> [<ffffffff80292c90>] ? ftrace_list_func+0x24/0x39
> [<ffffffff8020c2d6>] ? ftrace_call+0x5/0x2b
> [<ffffffff804a82b0>] sock_aio_write+0x109/0x11d
> [<ffffffff8029f75b>] ? stack_trace_call+0x249/0x25d
> [<ffffffff8020c2d6>] ? ftrace_call+0x5/0x2b
> [<ffffffff802d8881>] do_sync_write+0xf0/0x137
> [<ffffffff8025c002>] ? autoremove_wake_function+0x0/0x3d
> [<ffffffff8020c2d6>] ? ftrace_call+0x5/0x2b
> [<ffffffff803553ca>] ? cap_file_permission+0x9/0xd
> [<ffffffff80353c88>] ? security_file_permission+0x16/0x18
> [<ffffffff802d921c>] vfs_write+0x103/0x17d
> [<ffffffff802d978f>] sys_write+0x4e/0x8c
> [<ffffffff8020c64b>] system_call_fastpath+0x16/0x1b
> ---[ end trace 713cc9df66b54d6e ]---
>
>
> The cause is simple. The following happens:
>
> local_bh_disable is called, which calls __local_bh_disable, which does an
> add_preempt_count(SOFTIRQ_OFFSET).
>
> Thus, add_preempt_count adds the SOFTIRQ_OFFSET to the preempt_count of
> current, and then calls trace_preempt_off.
>
> This goes into the preempt tracer which calls start_critical_timing, and
> this will reset the ring buffer for the CPU, because this is the start of
> the trace.
>
> ring_buffer_reset_cpu() calls spin_lock_irqsave() which eventually calls
> spin_acquire which is lock_acquire in lockdep.
>
> lock_acquire calls check_flags which performs this check:
>
> if (!hardirq_count()) {
> if (softirq_count())
> DEBUG_LOCKS_WARN_ON(current->softirqs_enabled);
> else
> DEBUG_LOCKS_WARN_ON(!current->softirqs_enabled);
> }
>
> With this:
>
> #define hardirq_count() (preempt_count() & HARDIRQ_MASK)
> #define softirq_count() (preempt_count() & SOFTIRQ_MASK)
>
>
> The hardirq_count() returns false, but the softirq_count() returns true and
> softirqs_enabled is also still true. The problem lies in local_bh_disable:
>
> static void __local_bh_disable(unsigned long ip)
> {
> unsigned long flags;
>
> WARN_ON_ONCE(in_irq());
>
> raw_local_irq_save(flags);
> add_preempt_count(SOFTIRQ_OFFSET); <-- here softirq_count is true
> /*
> * Were softirqs turned off above:
> */
> if (softirq_count() == SOFTIRQ_OFFSET)
> trace_softirqs_off(ip); <-- here softirqs_enabled is false
> raw_local_irq_restore(flags);
> }
>
> If we call into lockdep between softirq_count == true and
> softirqs_enabled == false, we hit the WARN_ON.
>
> The trace_softirqs_off() sets softirqs_enabled to false. But because the
> tracer calls into lockdep between the two updates, we hit this warning.
>
> If we try to swap the trace_softirqs_off with the add_preempt_count, we hit
> another warning: a check in the trace_softirqs_off code that makes sure
> softirq_count is already true.
>
> We need a way for lockdep and the preempt tracer to talk to each other, so
> that lockdep knows it should not warn here.
something like so?
__local_bh_disable()
{
unsigned long flags;
raw_local_irq_save(flags);
/*
* comment explaining why add_preempt_count() doesn't work
*/
preempt_count() += SOFTIRQ_OFFSET;
if (softirq_count() == SOFTIRQ_OFFSET)
trace_softirqs_off(ip);
if (preempt_count() == SOFTIRQ_OFFSET)
trace_preempt_off(CALLER_ADDR0, ...);
raw_local_irq_restore(flags);
}
* Re: lockdep and preemptoff tracer are fighting again.
2009-01-22 21:08 ` Peter Zijlstra
@ 2009-01-22 22:27 ` Steven Rostedt
2009-01-23 0:27 ` [PATCH] trace, lockdep: manual preempt count adding for local_bh_disable Steven Rostedt
0 siblings, 1 reply; 5+ messages in thread
From: Steven Rostedt @ 2009-01-22 22:27 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Ingo Molnar, LKML
On Thu, 22 Jan 2009, Peter Zijlstra wrote:
> On Thu, 2009-01-22 at 15:40 -0500, Steven Rostedt wrote:
> >
> > Hey guys, I can consistently hit this bug when running the preempt tracer:
> >
[ shortened ]
> > ------------[ cut here ]------------
> > WARNING: at kernel/lockdep.c:2899 check_flags+0x154/0x18b()
> > Call Trace:
> > [<ffffffff802699bf>] check_flags+0x154/0x18b
> > [<ffffffff8026de66>] lock_acquire+0x41/0xa9
> > [<ffffffff80543e27>] _spin_lock_irqsave+0x46/0x59
> > [<ffffffff8029519c>] ring_buffer_reset_cpu+0x31/0x6b
> > [<ffffffff80299ec6>] tracing_reset+0x46/0x9b
> > [<ffffffff8029e33f>] trace_preempt_off+0x100/0x14d
> > [<ffffffff80546df1>] add_preempt_count+0x12d/0x132
> > [<ffffffff8024b44f>] __local_bh_disable+0xc0/0xf0
> > [<ffffffff8024b491>] local_bh_disable+0x12/0x14
> > [<ffffffff80543b95>] _spin_lock_bh+0x16/0x4c
> > [<ffffffff804ab49a>] lock_sock_nested+0x28/0xe5
> > [<ffffffff804eff87>] tcp_sendmsg+0x27/0xac2
> > [<ffffffff804a82b0>] sock_aio_write+0x109/0x11d
> > [<ffffffff802d8881>] do_sync_write+0xf0/0x137
> > [<ffffffff802d921c>] vfs_write+0x103/0x17d
> > [<ffffffff802d978f>] sys_write+0x4e/0x8c
> > [<ffffffff8020c64b>] system_call_fastpath+0x16/0x1b
> > ---[ end trace 713cc9df66b54d6e ]---
> >
> >
> > The cause is simple. The following happens:
> >
> > lock_acquire calls check_flags which performs this check:
> >
> > if (!hardirq_count()) {
> > if (softirq_count())
> > DEBUG_LOCKS_WARN_ON(current->softirqs_enabled);
> > else
> > DEBUG_LOCKS_WARN_ON(!current->softirqs_enabled);
> > }
> >
>
> __local_bh_disable()
> {
> unsigned long flags;
>
> raw_local_irq_save(flags);
>
> /*
> * comment explaining why add_preempt_count() doesn't work
> */
> preempt_count() += SOFTIRQ_OFFSET;
> if (softirq_count() == SOFTIRQ_OFFSET)
> trace_softirqs_off(ip);
> if (preempt_count() == SOFTIRQ_OFFSET)
> trace_preempt_off(CALLER_ADDR0, ...);
> raw_local_irq_restore(flags);
> }
Almost ;-)
With the above I get:
------------[ cut here ]------------
WARNING: at kernel/lockdep.c:2887 check_flags+0xba/0x18b()
Call Trace:
[<ffffffff80245ee7>] warn_slowpath+0xd8/0xf7
[<ffffffff802699a1>] check_flags+0xba/0x18b
[<ffffffff8026dee2>] lock_acquire+0x41/0xa9
[<ffffffff80543c3f>] _spin_lock_bh+0x40/0x4c
[<ffffffff804ab51a>] lock_sock_nested+0x28/0xe5
[<ffffffff804f0007>] tcp_sendmsg+0x27/0xac2
[<ffffffff804a8330>] sock_aio_write+0x109/0x11d
[<ffffffff802d88fd>] do_sync_write+0xf0/0x137
[<ffffffff802d9298>] vfs_write+0x103/0x17d
[<ffffffff802d980b>] sys_write+0x4e/0x8c
[<ffffffff8020c64b>] system_call_fastpath+0x16/0x1b
---[ end trace 5057c94a6be2ce17 ]---
Which is hitting:
if (irqs_disabled_flags(flags)) {
if (DEBUG_LOCKS_WARN_ON(current->hardirqs_enabled)) {
printk("possible reason: unannotated irqs-off.\n");
}
} else {
Which is caused by the raw_local_irq_save(flags) in __local_bh_disable ;-)
The solution is to move the "if (preempt_count() == SOFTIRQ_OFFSET)" check
outside the raw_local_irq_restore(flags). There should be nothing wrong with
doing this outside the irqs-disabled region; the check in add_preempt_count
does not disable interrupts either.
Patch soon on its way, Thanks!
-- Steve
* [PATCH] trace, lockdep: manual preempt count adding for local_bh_disable
2009-01-22 22:27 ` Steven Rostedt
@ 2009-01-23 0:27 ` Steven Rostedt
2009-01-23 10:11 ` Ingo Molnar
0 siblings, 1 reply; 5+ messages in thread
From: Steven Rostedt @ 2009-01-23 0:27 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Ingo Molnar, Andrew Morton, LKML
The following patch is in:
git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace.git
branch: tip/devel
Steven Rostedt (1):
trace, lockdep: manual preempt count adding for local_bh_disable
----
include/linux/sched.h | 2 ++
kernel/sched.c | 8 ++++----
kernel/softirq.c | 13 ++++++++++++-
3 files changed, 18 insertions(+), 5 deletions(-)
---------------------------
commit f635d8460f544ffa64c8456f53356c28960ee46f
Author: Steven Rostedt <srostedt@redhat.com>
Date: Thu Jan 22 19:01:40 2009 -0500
trace, lockdep: manual preempt count adding for local_bh_disable
Impact: fix to preempt trace triggering lockdep check_flag failure
In local_bh_disable, the use of add_preempt_count causes the
preempt tracer to start recording the time preemption is off.
But because it already modified the preempt_count to show
softirqs disabled, and before it called the lockdep code to
handle this, it causes a state that lockdep can not handle.
The preempt tracer will reset the ring buffer on start of a trace,
and the ring buffer reset code does a spin_lock_irqsave. This
calls into lockdep and lockdep will fail when it detects the
invalid state of having softirqs disabled but the internal
current->softirqs_enabled is still set.
The fix is to manually add the SOFTIRQ_OFFSET to preempt count
and call the preempt tracer code outside the lockdep critical
area.
Thanks to Peter Zijlstra for suggesting this solution.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 5305e61..8545057 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -138,6 +138,8 @@ extern unsigned long nr_uninterruptible(void);
extern unsigned long nr_active(void);
extern unsigned long nr_iowait(void);
+extern unsigned long get_parent_ip(unsigned long addr);
+
struct seq_file;
struct cfs_rq;
struct task_group;
diff --git a/kernel/sched.c b/kernel/sched.c
index d7ae5f4..440a6b1 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4559,10 +4559,7 @@ void scheduler_tick(void)
#endif
}
-#if defined(CONFIG_PREEMPT) && (defined(CONFIG_DEBUG_PREEMPT) || \
- defined(CONFIG_PREEMPT_TRACER))
-
-static inline unsigned long get_parent_ip(unsigned long addr)
+unsigned long get_parent_ip(unsigned long addr)
{
if (in_lock_functions(addr)) {
addr = CALLER_ADDR2;
@@ -4572,6 +4569,9 @@ static inline unsigned long get_parent_ip(unsigned long addr)
return addr;
}
+#if defined(CONFIG_PREEMPT) && (defined(CONFIG_DEBUG_PREEMPT) || \
+ defined(CONFIG_PREEMPT_TRACER))
+
void __kprobes add_preempt_count(int val)
{
#ifdef CONFIG_DEBUG_PREEMPT
diff --git a/kernel/softirq.c b/kernel/softirq.c
index 7e93870..3dd0d13 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -21,6 +21,7 @@
#include <linux/freezer.h>
#include <linux/kthread.h>
#include <linux/rcupdate.h>
+#include <linux/ftrace.h>
#include <linux/smp.h>
#include <linux/tick.h>
@@ -79,13 +80,23 @@ static void __local_bh_disable(unsigned long ip)
WARN_ON_ONCE(in_irq());
raw_local_irq_save(flags);
- add_preempt_count(SOFTIRQ_OFFSET);
+ /*
+ * The preempt tracer hooks into add_preempt_count and will break
+ * lockdep because it calls back into lockdep after SOFTIRQ_OFFSET
+ * is set and before current->softirq_enabled is cleared.
+ * We must manually increment preempt_count here and manually
+ * call the trace_preempt_off later.
+ */
+ preempt_count() += SOFTIRQ_OFFSET;
/*
* Were softirqs turned off above:
*/
if (softirq_count() == SOFTIRQ_OFFSET)
trace_softirqs_off(ip);
raw_local_irq_restore(flags);
+
+ if (preempt_count() == SOFTIRQ_OFFSET)
+ trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
}
#else /* !CONFIG_TRACE_IRQFLAGS */
static inline void __local_bh_disable(unsigned long ip)
* Re: [PATCH] trace, lockdep: manual preempt count adding for local_bh_disable
2009-01-23 0:27 ` [PATCH] trace, lockdep: manual preempt count adding for local_bh_disable Steven Rostedt
@ 2009-01-23 10:11 ` Ingo Molnar
0 siblings, 0 replies; 5+ messages in thread
From: Ingo Molnar @ 2009-01-23 10:11 UTC (permalink / raw)
To: Steven Rostedt; +Cc: Peter Zijlstra, Andrew Morton, LKML
* Steven Rostedt <rostedt@goodmis.org> wrote:
>
> The following patch is in:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-2.6-trace.git
>
> branch: tip/devel
>
>
> Steven Rostedt (1):
> trace, lockdep: manual preempt count adding for local_bh_disable
>
> ----
> include/linux/sched.h | 2 ++
> kernel/sched.c | 8 ++++----
> kernel/softirq.c | 13 ++++++++++++-
> 3 files changed, 18 insertions(+), 5 deletions(-)
pulled, thanks Steve!
Ingo