From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Nick Piggin <npiggin@suse.de>
Cc: linux-kernel@vger.kernel.org, mingo@elte.hu,
laijs@cn.fujitsu.com, dipankar@in.ibm.com,
akpm@linux-foundation.org, mathieu.desnoyers@polymtl.ca,
josh@joshtriplett.org, dvhltc@us.ibm.com, niv@us.ibm.com,
tglx@linutronix.de, peterz@infradead.org, rostedt@goodmis.org,
Valdis.Kletnieks@vt.edu, dhowells@redhat.com,
jens.axboe@oracle.com
Subject: Re: [PATCH tip/core/rcu 2/6] rcu: prevent RCU IPI storms in presence of high call_rcu() load
Date: Thu, 15 Oct 2009 09:07:17 -0700 [thread overview]
Message-ID: <20091015160717.GB6706@linux.vnet.ibm.com> (raw)
In-Reply-To: <20091015110404.GC3127@wotan.suse.de>
On Thu, Oct 15, 2009 at 01:04:04PM +0200, Nick Piggin wrote:
> Testing this on top of my vfs-scale patches, a 64-way 32-node ia64
> system runs the parallel open/close microbenchmark ~30 times faster
> and is scaling linearly now. Single thread performance is not
> noticeably changed. Thanks very much Paul.
Good to hear, thank you for giving it a go!
Thanx, Paul
> On Wed, Oct 14, 2009 at 10:15:55AM -0700, Paul E. McKenney wrote:
> > From: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >
> > As the number of callbacks on a given CPU rises, invoke
> > force_quiescent_state() only once per qhimark callbacks
> > (defaults to 10,000), and even then only if no other CPU has invoked
> > force_quiescent_state() in the meantime.
> >
> > Reported-by: Nick Piggin <npiggin@suse.de>
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > ---
> > kernel/rcutree.c | 29 ++++++++++++++++++++++++-----
> > kernel/rcutree.h | 4 ++++
> > 2 files changed, 28 insertions(+), 5 deletions(-)
> >
> > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > index 705f02a..ddbf111 100644
> > --- a/kernel/rcutree.c
> > +++ b/kernel/rcutree.c
> > @@ -958,7 +958,7 @@ static void rcu_offline_cpu(int cpu)
> > * Invoke any RCU callbacks that have made it to the end of their grace
> > * period. Throttle as specified by rdp->blimit.
> > */
> > -static void rcu_do_batch(struct rcu_data *rdp)
> > +static void rcu_do_batch(struct rcu_state *rsp, struct rcu_data *rdp)
> > {
> > unsigned long flags;
> > struct rcu_head *next, *list, **tail;
> > @@ -1011,6 +1011,13 @@ static void rcu_do_batch(struct rcu_data *rdp)
> > if (rdp->blimit == LONG_MAX && rdp->qlen <= qlowmark)
> > rdp->blimit = blimit;
> >
> > + /* Reset ->qlen_last_fqs_check trigger if enough CBs have drained. */
> > + if (rdp->qlen == 0 && rdp->qlen_last_fqs_check != 0) {
> > + rdp->qlen_last_fqs_check = 0;
> > + rdp->n_force_qs_snap = rsp->n_force_qs;
> > + } else if (rdp->qlen < rdp->qlen_last_fqs_check - qhimark)
> > + rdp->qlen_last_fqs_check = rdp->qlen;
> > +
> > local_irq_restore(flags);
> >
> > /* Re-raise the RCU softirq if there are callbacks remaining. */
> > @@ -1224,7 +1231,7 @@ __rcu_process_callbacks(struct rcu_state *rsp, struct rcu_data *rdp)
> > }
> >
> > /* If there are callbacks ready, invoke them. */
> > - rcu_do_batch(rdp);
> > + rcu_do_batch(rsp, rdp);
> > }
> >
> > /*
> > @@ -1288,10 +1295,20 @@ __call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu),
> > rcu_start_gp(rsp, nestflag); /* releases rnp_root->lock. */
> > }
> >
> > - /* Force the grace period if too many callbacks or too long waiting. */
> > - if (unlikely(++rdp->qlen > qhimark)) {
> > + /*
> > + * Force the grace period if too many callbacks or too long waiting.
> > + * Enforce hysteresis, and don't invoke force_quiescent_state()
> > + * if some other CPU has recently done so. Also, don't bother
> > + * invoking force_quiescent_state() if the newly enqueued callback
> > + * is the only one waiting for a grace period to complete.
> > + */
> > + if (unlikely(++rdp->qlen > rdp->qlen_last_fqs_check + qhimark)) {
> > rdp->blimit = LONG_MAX;
> > - force_quiescent_state(rsp, 0);
> > + if (rsp->n_force_qs == rdp->n_force_qs_snap &&
> > + *rdp->nxttail[RCU_DONE_TAIL] != head)
> > + force_quiescent_state(rsp, 0);
> > + rdp->n_force_qs_snap = rsp->n_force_qs;
> > + rdp->qlen_last_fqs_check = rdp->qlen;
> > } else if ((long)(ACCESS_ONCE(rsp->jiffies_force_qs) - jiffies) < 0)
> > force_quiescent_state(rsp, 1);
> > local_irq_restore(flags);
> > @@ -1523,6 +1540,8 @@ rcu_init_percpu_data(int cpu, struct rcu_state *rsp, int preemptable)
> > rdp->beenonline = 1; /* We have now been online. */
> > rdp->preemptable = preemptable;
> > rdp->passed_quiesc_completed = lastcomp - 1;
> > + rdp->qlen_last_fqs_check = 0;
> > + rdp->n_force_qs_snap = rsp->n_force_qs;
> > rdp->blimit = blimit;
> > spin_unlock(&rnp->lock); /* irqs remain disabled. */
> >
> > diff --git a/kernel/rcutree.h b/kernel/rcutree.h
> > index b40ac57..599161f 100644
> > --- a/kernel/rcutree.h
> > +++ b/kernel/rcutree.h
> > @@ -167,6 +167,10 @@ struct rcu_data {
> > struct rcu_head *nxtlist;
> > struct rcu_head **nxttail[RCU_NEXT_SIZE];
> > long qlen; /* # of queued callbacks */
> > + long qlen_last_fqs_check;
> > + /* qlen at last check for QS forcing */
> > + unsigned long n_force_qs_snap;
> > + /* did other CPU force QS recently? */
> > long blimit; /* Upper limit on a processed batch */
> >
> > #ifdef CONFIG_NO_HZ
> > --
> > 1.5.2.5