From: Pablo Neira Ayuso <pablo@netfilter.org>
To: Simon Horman <horms@verge.net.au>
Cc: Eric Dumazet <eric.dumazet@gmail.com>,
	Julian Anastasov <ja@ssi.bg>, Ingo Molnar <mingo@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	"Paul E. McKenney" <paulmck@linux.vnet.ibm.com>,
	lvs-devel@vger.kernel.org, netdev@vger.kernel.org,
	netfilter-devel@vger.kernel.org, linux-kernel@vger.kernel.org,
	Dipankar Sarma <dipankar@in.ibm.com>
Subject: Re: [PATCH ipvs-next v3 1/2] sched: add cond_resched_rcu() helper
Date: Thu, 23 May 2013 13:30:19 +0200
Message-ID: <20130523113019.GE22553@localhost>
In-Reply-To: <1369201832-17163-2-git-send-email-horms@verge.net.au>

On Wed, May 22, 2013 at 02:50:31PM +0900, Simon Horman wrote:
> This is intended for use in loops that read data protected by RCU and
> may have a large number of iterations.  One such example is dumping the
> list of connections known to IPVS: ip_vs_conn_array() and
> ip_vs_conn_seq_next().
> 
> The benefit in the CONFIG_PREEMPT_RCU=y case is that we save CPU cycles
> by moving rcu_read_lock and rcu_read_unlock out of large loops, while
> still allowing the current task to be preempted after every loop
> iteration in the CONFIG_PREEMPT_RCU=n case.
> 
> The call to cond_resched() is not needed when CONFIG_PREEMPT_RCU=y.
> Thanks to Paul E. McKenney for explaining this, and for the final
> version, which performs the sleeping-context check under
> CONFIG_DEBUG_ATOMIC_SLEEP=y in all possible configurations.
> 
> The function can be empty in the CONFIG_PREEMPT_RCU=y case:
> rcu_read_lock and rcu_read_unlock are not needed there, because the
> task can be preempted on indication from the scheduler while inside an
> RCU read-side critical section.  Thanks to Peter Zijlstra for catching
> this and for his help in trying a solution that changes __might_sleep.
> 
> Initial cond_resched_rcu_lock() function suggested by Eric Dumazet.
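
For reference, a minimal sketch of a helper consistent with the
description above, followed by an illustrative bucket-walking dump loop
in the spirit of ip_vs_conn_array().  The walker's names
(dump_all_conns, my_conn, my_conn_tab, MY_CONN_TAB_SIZE, dump_one_conn)
are hypothetical, not the actual IPVS code.  When CONFIG_PREEMPT_RCU=y
and CONFIG_DEBUG_ATOMIC_SLEEP=n, the helper body compiles away entirely:

static inline void cond_resched_rcu(void)
{
#if defined(CONFIG_DEBUG_ATOMIC_SLEEP) || !defined(CONFIG_PREEMPT_RCU)
	/* Briefly leave the RCU read-side critical section so that
	 * cond_resched() runs in a context where sleeping is legal;
	 * with CONFIG_DEBUG_ATOMIC_SLEEP=y this also exercises the
	 * might-sleep check in every configuration.
	 */
	rcu_read_unlock();
	cond_resched();
	rcu_read_lock();
#endif
}

/* Illustrative dump loop: hold the RCU read lock across the whole
 * table walk, with a reschedule point between hash buckets.  Walking
 * by bucket index, rather than holding entry pointers across the
 * call, tolerates the brief lock drop in the CONFIG_PREEMPT_RCU=n
 * case.
 */
static void dump_all_conns(void)
{
	struct my_conn *cp;
	int idx;

	rcu_read_lock();
	for (idx = 0; idx < MY_CONN_TAB_SIZE; idx++) {
		hlist_for_each_entry_rcu(cp, &my_conn_tab[idx], c_list)
			dump_one_conn(cp);	/* per-entry work */
		cond_resched_rcu();
	}
	rcu_read_unlock();
}

The key property is that callers must tolerate the RCU critical section
being broken at the cond_resched_rcu() call, which is why the sketch
reschedules between buckets rather than between entries.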

Applied, thanks.

