From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Steven Rostedt <rostedt@goodmis.org>
Cc: Ingo Molnar <mingo@elte.hu>, Thomas Gleixner <tglx@linutronix.de>,
RT <linux-rt-users@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [BUG RT] - rcupreempt.c:133 on 2.6.23-rc1-rt7
Date: Sun, 5 Aug 2007 08:04:49 -0700
Message-ID: <20070805150449.GA19418@linux.vnet.ibm.com>
In-Reply-To: <Pine.LNX.4.58.0708051019560.8421@gandalf.stny.rr.com>
On Sun, Aug 05, 2007 at 10:24:15AM -0400, Steven Rostedt wrote:
>
> --
>
> On Sun, 5 Aug 2007, Ingo Molnar wrote:
>
> >
> > * Steven Rostedt <rostedt@goodmis.org> wrote:
> >
> > > > I don't have time to look further now, and it's something that isn't
> > > > easily reproducible (Well, it happened once out of two boots). If
> > > > you need me to look further, or need a config or dmesg (I have
> > > > both), then just give me a holler.
> > >
> > > Silly me. FYI, I was running with !PREEMPT_RT, but with Hard and
> > > Softirqs as threads. Must have copied the wrong config over :-/
> >
> > it's still not supposed to happen ... rcu read lock nesting that deep?
> >
>
> The code on line 133 is:
>
> WARN_ON_ONCE(current->rcu_read_lock_nesting > NR_CPUS);
>
> I have NR_CPUS set to 2 since the box I'm running this on only has
> 2 cpus and I see no reason to waste more data structures.
>
> Is rcu read lock nesting deeper than 2?
In networking, I would not be at all surprised, given things like fib_trie
and netfilter usage. In addition, if rcu_read_lock() is called from
hardirq or NMI/SMI, it is necessary to add the nesting levels in these
environments as well. In any case, rcu_read_lock() is freely nestable,
so there is no penalty for nesting pretty deeply. I must have missed this
WARN_ON_ONCE() being added to rcu_read_lock() -- I did ack Daniel Walker's
check for negative values of rcu_read_lock_nesting in rcu_read_unlock(),
but saw no upper-limit checks.
So, are you running into a situation where rcu_read_lock_nesting is
growing unboundedly?
I would not expect the per-task nesting level to normally be a function
of the number of CPUs -- unless one was doing some sort of nested scan
of RCU-protected per-CPU data structures or some such. So if you are
adding this to your local build as a debug check, I would suggest a fixed
limit -- but would -not- suggest putting such a check into a production
build, at least not for a small limit.
Thanx, Paul
Thread overview: 10+ messages
2007-08-05 4:51 [BUG RT] - rcupreempt.c:133 on 2.6.23-rc1-rt7 Steven Rostedt
2007-08-05 5:05 ` Steven Rostedt
2007-08-05 6:59 ` Ingo Molnar
2007-08-05 14:24 ` Steven Rostedt
2007-08-05 15:04 ` Paul E. McKenney [this message]
2007-08-05 15:35 ` [PATCH RT] put in a relatively high number for rcu read lock upper limit Steven Rostedt
2007-08-05 17:53 ` Ingo Molnar
2007-08-05 17:58 ` Steven Rostedt
2007-08-06 3:20 ` Paul E. McKenney
2007-08-05 15:26 ` [BUG RT] - rcupreempt.c:133 on 2.6.23-rc1-rt7 Ingo Molnar