* [BUG RT] - rcupreempt.c:133 on 2.6.23-rc1-rt7
  From: Steven Rostedt @ 2007-08-05  4:51 UTC
  To: Paul E. McKenney; +Cc: Ingo Molnar, Thomas Gleixner, RT, LKML

Why is it that every time I go to write examples for my chapter in a book,
I hit a bug!  I got this on bootup of my Thinkpad G41 running SMP.

Installing knfsd (copyright (C) 1996 okir@monad.swb.de).
WARNING: at kernel/rcupreempt.c:133 __rcu_read_lock()
 [<c010557a>] show_trace_log_lvl+0x35/0x54
 [<c01061bd>] show_trace+0x2c/0x2e
 [<c01061e8>] dump_stack+0x29/0x2b
 [<c01660d9>] __rcu_read_lock+0x13f/0x14e
 [<c02f61fd>] ip_local_deliver+0x73/0x2b6
 [<c02f5e74>] ip_rcv+0x2d8/0x5ee
 [<c02d4324>] netif_receive_skb+0x2b7/0x3fc
 [<c02d68b4>] process_backlog+0xb0/0x148
 [<c02d6b6a>] net_rx_action+0xe0/0x1dd
 [<c01311a8>] ksoftirqd+0x126/0x240
 [<c013f9d2>] kthread+0x44/0x69
 [<c0105147>] kernel_thread_helper+0x7/0x10
 =======================

I don't have time to look further now, and it's not easily reproducible
(well, it happened once out of two boots).  If you need me to look
further, or need a config or dmesg (I have both), then just give me a
holler.

-- Steve
* Re: [BUG RT] - rcupreempt.c:133 on 2.6.23-rc1-rt7
  From: Steven Rostedt @ 2007-08-05  5:05 UTC
  To: Paul E. McKenney; +Cc: Ingo Molnar, Thomas Gleixner, RT, LKML

On Sun, 2007-08-05 at 00:51 -0400, Steven Rostedt wrote:
> I don't have time to look further now, and it's something that isn't
> easily reproducible (Well, it happened once out of two boots). If you
> need me to look further, or need a config or dmesg (I have both), then
> just give me a holler.

Silly me.  FYI, I was running with !PREEMPT_RT, but with hard and soft
irqs as threads.  Must have copied the wrong config over :-/

-- Steve
* Re: [BUG RT] - rcupreempt.c:133 on 2.6.23-rc1-rt7
  From: Ingo Molnar @ 2007-08-05  6:59 UTC
  To: Steven Rostedt; +Cc: Paul E. McKenney, Thomas Gleixner, RT, LKML

* Steven Rostedt <rostedt@goodmis.org> wrote:

> Silly me. FYI, I was running with !PREEMPT_RT, but with hard and soft
> irqs as threads. Must have copied the wrong config over :-/

it's still not supposed to happen ... rcu read lock nesting that deep?

	Ingo
* Re: [BUG RT] - rcupreempt.c:133 on 2.6.23-rc1-rt7
  From: Steven Rostedt @ 2007-08-05 14:24 UTC
  To: Ingo Molnar; +Cc: Paul E. McKenney, Thomas Gleixner, RT, LKML

On Sun, 5 Aug 2007, Ingo Molnar wrote:

> it's still not supposed to happen ... rcu read lock nesting that deep?

The code on line 133 is:

	WARN_ON_ONCE(current->rcu_read_lock_nesting > NR_CPUS);

I have NR_CPUS set to 2, since the box I'm running this on only has 2
CPUs and I see no reason to waste more data structures.

Is rcu read lock nesting deeper than 2?

-- Steve
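[Editor's note: to see why a bound tied to NR_CPUS trips so easily on a
2-CPU box, here is a minimal user-space model of the per-task counter.
The names echo the kernel's rcu_read_lock_nesting, but this is purely an
illustrative sketch; the real locking, flip counters, and IRQ handling
are elided.]

```c
#include <assert.h>

/* Toy model of the per-task RCU read-side nesting counter.  MODEL_NR_CPUS
 * mirrors the NR_CPUS=2 configuration from the bug report. */
#define MODEL_NR_CPUS 2

static int rcu_read_lock_nesting;
static int warned;

static void model_rcu_read_lock(void)
{
	rcu_read_lock_nesting++;
	/* This is where the report's WARN_ON_ONCE would fire: any call
	 * chain more than MODEL_NR_CPUS read-side sections deep trips it,
	 * even though such nesting is perfectly legal. */
	if (rcu_read_lock_nesting > MODEL_NR_CPUS)
		warned = 1;
}

static void model_rcu_read_unlock(void)
{
	rcu_read_lock_nesting--;
}
```

With NR_CPUS set to 2, a third nested read lock (easy to hit in the
networking receive path shown in the trace) is already "too deep".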
* Re: [BUG RT] - rcupreempt.c:133 on 2.6.23-rc1-rt7
  From: Paul E. McKenney @ 2007-08-05 15:04 UTC
  To: Steven Rostedt; +Cc: Ingo Molnar, Thomas Gleixner, RT, LKML

On Sun, Aug 05, 2007 at 10:24:15AM -0400, Steven Rostedt wrote:

> The code on line 133 is:
>
> 	WARN_ON_ONCE(current->rcu_read_lock_nesting > NR_CPUS);
>
> I have NR_CPUS set to 2 since the box I'm running this on only has
> 2 cpus and I see no reason to waste more data structures.
>
> Is rcu read lock nesting deeper than 2?

In networking, I would not be at all surprised, given things like
fib_trie and netfilter usage.  In addition, if rcu_read_lock() is called
from hardirq or NMI/SMI, it is necessary to add the nesting levels in
these environments as well.  In any case, rcu_read_lock() is freely
nestable, so there is no penalty for nesting pretty deeply.

I must have missed this WARN_ON_ONCE() being added to rcu_read_lock() --
I did ack Daniel Walker's check for negative values of
rcu_read_lock_nesting in rcu_read_unlock(), but saw no upper-limit
checks.

So, are you running into a situation where rcu_read_lock_nesting is
growing unboundedly?  I would not expect the per-task nesting level to
normally be a function of the number of CPUs -- unless one was doing
some sort of nested scan of RCU-protected per-CPU data structures or
some such.

So if you are adding this to your local build as a debug check, I would
suggest a fixed limit -- but would -not- suggest putting such a check
into a production build, at least not for a small limit.

						Thanx, Paul
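[Editor's note: Paul's point is that the per-task nesting depth tracks the
depth of the call chain -- e.g. ip_rcv calling into a fib_trie lookup
calling into netfilter, each taking its own read-side lock -- not the CPU
count.  A sketch of that behavior, with all names invented for the model:]

```c
#include <assert.h>

/* Per-task nesting counter: goes up and down with balanced lock/unlock
 * pairs, and its peak value is set by how deep the call chain nests. */
static int nesting;
static int max_nesting;

static void read_lock(void)
{
	nesting++;
	if (nesting > max_nesting)
		max_nesting = nesting;
}

static void read_unlock(void)
{
	nesting--;
}

/* Each level of a call chain takes its own nested read-side lock,
 * standing in for ip_rcv -> fib lookup -> netfilter -> ... */
static void walk(int depth)
{
	if (depth == 0)
		return;
	read_lock();
	walk(depth - 1);
	read_unlock();
}
```

A chain ten calls deep peaks at a nesting of ten and unwinds cleanly to
zero -- legal and penalty-free, which is why a small fixed bound makes
sense only as a debug sanity check.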
* [PATCH RT] put in a relatively high number for rcu read lock upper limit
  From: Steven Rostedt @ 2007-08-05 15:35 UTC
  To: paulmck; +Cc: Ingo Molnar, Thomas Gleixner, RT, LKML

Paul and Ingo,

Should we just remove the upper limit check, or is something like this
patch sound?

-- Steve

When DEBUG_KERNEL is set, place an upper bound on the rcu read lock
nesting depth, set to 100.  If nesting goes that deep, a one-time
warning will print.

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>

Index: linux-2.6.23-rc1-rt7/kernel/rcupreempt.c
===================================================================
--- linux-2.6.23-rc1-rt7.orig/kernel/rcupreempt.c	2007-08-05 11:25:38.000000000 -0400
+++ linux-2.6.23-rc1-rt7/kernel/rcupreempt.c	2007-08-05 11:30:33.000000000 -0400
@@ -50,6 +50,14 @@
 #include <linux/cpumask.h>
 #include <linux/rcupreempt_trace.h>
 
+#ifdef CONFIG_DEBUG_KERNEL
+/* Picking 100 as a high enough limit on rcu read lock nesting. */
+# define rcu_read_lock_check_upper_limit() \
+	WARN_ON_ONCE(current->rcu_read_lock_nesting > 100);
+#else
+# define rcu_read_lock_check_upper_limit() do { } while (0)
+#endif
+
 /*
  * PREEMPT_RCU data structures.
  */
@@ -129,9 +137,9 @@ void __rcu_read_lock(void)
 			atomic_inc(current->rcu_flipctr2);
 			smp_mb__after_atomic_inc();  /* might optimize out... */
 		}
-	} else {
-		WARN_ON_ONCE(current->rcu_read_lock_nesting > NR_CPUS);
-	}
+	} else
+		rcu_read_lock_check_upper_limit();
+
 	local_irq_restore(oldirq);
 }
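[Editor's note: the patch's pattern -- a debug check that compiles away
entirely unless a debug option is set, and that warns at most once -- can
be sketched in user space.  MODEL_DEBUG stands in for CONFIG_DEBUG_KERNEL
and the warn-once bookkeeping is simplified to a pair of counters; none
of this is the kernel's actual WARN_ON_ONCE machinery.]

```c
#include <assert.h>

/* Debug-gated, warn-once upper-limit check, modeled after the patch. */
#define MODEL_DEBUG 1
#define RCU_NESTING_LIMIT 100

static int warned_once;	/* one-shot latch, like WARN_ON_ONCE's flag */
static int warn_count;	/* how many warnings actually fired */

#if MODEL_DEBUG
# define check_upper_limit(nesting) \
	do { \
		if ((nesting) > RCU_NESTING_LIMIT && !warned_once) { \
			warned_once = 1; \
			warn_count++; \
		} \
	} while (0)
#else
/* With debugging off, the check costs nothing at all. */
# define check_upper_limit(nesting) do { } while (0)
#endif
```

A depth within the limit never warns, and repeated over-limit depths warn
exactly once -- matching Paul's advice that this belongs in debug builds,
not production ones.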
* Re: [PATCH RT] put in a relatively high number for rcu read lock upper limit
  From: Ingo Molnar @ 2007-08-05 17:53 UTC
  To: Steven Rostedt; +Cc: paulmck, Thomas Gleixner, RT, LKML

* Steven Rostedt <rostedt@goodmis.org> wrote:

> Paul and Ingo,
>
> Should we just remove the upper limit check, or is something like this
> patch sound?

i've changed the limit to 30 (the same depth limit is used by lockdep).

beyond that we could get stack overflow, etc.

	Ingo
* Re: [PATCH RT] put in a relatively high number for rcu read lock upper limit
  From: Steven Rostedt @ 2007-08-05 17:58 UTC
  To: Ingo Molnar; +Cc: paulmck, Thomas Gleixner, RT, LKML

On Sun, 5 Aug 2007, Ingo Molnar wrote:

> i've changed the limit to 30 (the same depth limit is used by lockdep).
>
> beyond that we could get stack overflow, etc.

Great!  Thanks Ingo,

-- Steve
* Re: [PATCH RT] put in a relatively high number for rcu read lock upper limit
  From: Paul E. McKenney @ 2007-08-06  3:20 UTC
  To: Ingo Molnar; +Cc: Steven Rostedt, Thomas Gleixner, RT, LKML

On Sun, Aug 05, 2007 at 07:53:10PM +0200, Ingo Molnar wrote:

> i've changed the limit to 30 (the same depth limit is used by lockdep).
>
> beyond that we could get stack overflow, etc.

Works for me!

						Thanx, Paul
* Re: [BUG RT] - rcupreempt.c:133 on 2.6.23-rc1-rt7
  From: Ingo Molnar @ 2007-08-05 15:26 UTC
  To: Steven Rostedt; +Cc: Paul E. McKenney, Thomas Gleixner, RT, LKML

* Steven Rostedt <rostedt@goodmis.org> wrote:

> The code on line 133 is:
>
> 	WARN_ON_ONCE(current->rcu_read_lock_nesting > NR_CPUS);
>
> I have NR_CPUS set to 2 since the box I'm running this on only has 2
> cpus and I see no reason to waste more data structures.
>
> Is rcu read lock nesting deeper than 2?

ah, silly me - that should indeed be something fixed like 128.

	Ingo