From: Frederic Weisbecker <fweisbec@gmail.com>
To: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>,
linux-kernel@vger.kernel.org,
Dipankar Sarma <dipankar@in.ibm.com>,
Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@elte.hu>,
Peter Zijlstra <a.p.zijlstra@chello.nl>,
Lai Jiangshan <laijs@cn.fujitsu.com>
Subject: Re: linux-next-20110923: warning kernel/rcutree.c:1833
Date: Mon, 3 Oct 2011 19:11:44 +0200
Message-ID: <20111003171140.GF1835@somewhere>
In-Reply-To: <20111003162221.GB2403@linux.vnet.ibm.com>

On Mon, Oct 03, 2011 at 09:22:21AM -0700, Paul E. McKenney wrote:
> On Mon, Oct 03, 2011 at 02:59:03PM +0200, Frederic Weisbecker wrote:
> > On Sun, Oct 02, 2011 at 05:28:32PM -0700, Paul E. McKenney wrote:
> > > On Mon, Oct 03, 2011 at 12:50:22AM +0200, Frederic Weisbecker wrote:
> > > > On Fri, Sep 30, 2011 at 12:24:38PM -0700, Paul E. McKenney wrote:
> > > > > @@ -328,11 +326,11 @@ static int rcu_implicit_offline_qs(struct rcu_data *rdp)
> > > > > return 1;
> > > > > }
> > > > >
> > > > > - /* If preemptible RCU, no point in sending reschedule IPI. */
> > > > > - if (rdp->preemptible)
> > > > > - return 0;
> > > > > -
> > > > > - /* The CPU is online, so send it a reschedule IPI. */
> > > > > + /*
> > > > > + * The CPU is online, so send it a reschedule IPI. This forces
> > > > > + * it through the scheduler, and (inefficiently) also handles cases
> > > > > + * where idle loops fail to inform RCU about the CPU being idle.
> > > > > + */
> > > >
> > > > If the idle loop forgets to call rcu_idle_enter() before going to
> > > > sleep, I don't think it's a good idea to try to cure that situation
> > > > by forcing a quiescent state remotely. It may make things worse,
> > > > because we would no longer notice the missing rcu_idle_enter() call
> > > > that the RCU stall detector would otherwise report to us.
> > > >
> > > > Also, I don't think that works. If the task doesn't have
> > > > TIF_RESCHED set, it won't go through the scheduler on irq exit:
> > > > smp_send_reschedule() doesn't set that flag, and scheduler_ipi()
> > > > returns right away if no wakeup is pending.
> > > >
> > > > So, other than briefly waking the idle loop, which then goes back
> > > > to sleep, nothing happens.
> > > >
> > > > Or am I missing something?
> > >
> > > Hmmm... Seems like the IPIs aren't helping in any case, then?
> >
> > I thought it was there for !PREEMPT cases where the task has TIF_RESCHED
> > but takes too much time to find an opportunity to go to sleep.
>
> Indeed, and it might be worth leaving in for that.
Now I realize it's not even helpful in that case. If a task spends a
long time in the kernel without calling schedule(), an IPI won't help:
under !PREEMPT the irq exit path doesn't preempt even with TIF_RESCHED
set, so the task keeps running until it reaches an explicit scheduling
point anyway.
No, the current call looks useless to me :)
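
To illustrate why, here is a simplified sketch of scheduler_ipi()'s
early return (not the exact source of that era; pending_remote_wakeups()
is a stand-in name for the real "any queued wakeups on rq->wake_list?"
check):

	void scheduler_ipi(void)
	{
		/*
		 * No remote wakeup queued for this CPU: return without
		 * setting TIF_RESCHED, so the irq exit path won't call
		 * schedule() either.
		 */
		if (!pending_remote_wakeups(this_rq()))
			return;

		/* ... otherwise process the queued remote wakeups ... */
	}

The IPI alone neither sets TIF_RESCHED nor calls schedule(), so the
interrupted task just resumes where it was.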
> > > I suppose that I could do an smp_call_function_single(), which then
> > > did a set_need_resched()...
> > >
> > > But this is a separate issue that I need to deal with. That said, any
> > > suggestions are welcome!
> >
> > Note you can't call smp_call_function_*() while irqs are disabled.
>
> Sigh! This isn't the first time this year that I have forgotten that,
> is it?
>
> > Perhaps you need something like kernel/sched.c:resched_cpu().
> > This adds some rq->lock contention though.
>
> This would happen infrequently, and could be made even more
> infrequent. But I wonder what happens when you do this to a CPU
> that is running the idle task? Seems like it should work normally,
> but...
That should work as well. But I don't think we should set TIF_RESCHED
and send an IPI to a remote CPU that is running idle. If there is a
missing rcu_idle_enter() call, we should report it (RCU stall) and fix
it, not try to cure the consequences. Sending an IPI would only make
such bugs harder to find.
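
For reference, resched_cpu() looks roughly like this (simplified from
memory of kernel/sched.c, details may differ):

	void resched_cpu(int cpu)
	{
		struct rq *rq = cpu_rq(cpu);
		unsigned long flags;

		/* The rq->lock contention mentioned above. */
		if (!raw_spin_trylock_irqsave(&rq->lock, flags))
			return;
		/* Sets TIF_RESCHED and IPIs the CPU if it is remote. */
		resched_task(cpu_curr(cpu));
		raw_spin_unlock_irqrestore(&rq->lock, flags);
	}

Unlike a bare smp_send_reschedule(), this sets TIF_RESCHED first, so
the target really does go through schedule() on irq exit. Which is
exactly why I'd rather not aim it at a CPU that is supposed to be idle.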