From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
To: Nick Piggin <npiggin@gmail.com>
Cc: Jeff Moyer <jmoyer@redhat.com>, Jan Kara <jack@suse.cz>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [patch] fs: aio fix rcu lookup
Date: Wed, 19 Jan 2011 20:03:08 -0800
Message-ID: <20110120040308.GD8476@linux.vnet.ibm.com>
In-Reply-To: <AANLkTikTJ-eCR2M73=G4q=0AAsb9aVosrFdwP+uY7enB@mail.gmail.com>

On Thu, Jan 20, 2011 at 08:20:00AM +1100, Nick Piggin wrote:
> On Thu, Jan 20, 2011 at 8:03 AM, Jeff Moyer <jmoyer@redhat.com> wrote:
> > Nick Piggin <npiggin@gmail.com> writes:
> >
> >> On Thu, Jan 20, 2011 at 7:32 AM, Jeff Moyer <jmoyer@redhat.com> wrote:
> >>> Nick Piggin <npiggin@gmail.com> writes:
> >>>
> >>>> On Thu, Jan 20, 2011 at 6:46 AM, Jeff Moyer <jmoyer@redhat.com> wrote:
> >>>>> Jeff Moyer <jmoyer@redhat.com> writes:
> >>>>>
> >>>>>> Jan Kara <jack@suse.cz> writes:
> >>>>>>
> >>>>>>>  But there's the second race I describe making it possible
> >>>>>>> for new IO to be created after io_destroy() has waited for all IO to
> >>>>>>> finish...
> >>>>>>
> >>>>>> Can't that be solved by introducing memory barriers around the accesses
> >>>>>> to ->dead?
> >>>>>
> >>>>> Upon further consideration, I don't think so.
> >>>>>
> >>>>> Given the options, I think adding the synchronize_rcu() to the io_destroy
> >>>>> path is the best way forward.  You're already waiting for a bunch of
> >>>>> queued I/O to finish, so there is no guarantee that the call will
> >>>>> finish quickly anyway.
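
(For illustration, a minimal sketch of what adding synchronize_rcu() to the
io_destroy path could look like. The struct and helpers below are simplified,
hypothetical stand-ins, not the actual fs/aio.c code.)

/* Sketch only: simplified stand-in for struct kioctx. */
struct kioctx {
	unsigned long		user_id;	/* id handed to userspace */
	int			dead;		/* set so new lookups fail */
	struct hlist_node	list;		/* published on mm->ioctx_list */
};

static void io_destroy(struct kioctx *ctx)
{
	ctx->dead = 1;			/* lookups must now skip ctx */
	hlist_del_rcu(&ctx->list);	/* unpublish the ctx */

	/*
	 * Wait for all pre-existing RCU readers: after this returns,
	 * no lookup that could still find ctx, or that could have
	 * missed ->dead, is in flight.
	 */
	synchronize_rcu();

	wait_for_outstanding_reqs(ctx);	/* hypothetical helper */
	kfree(ctx);
}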
> >>>>
> >>>> I think synchronize_rcu() is not something to sprinkle around outside
> >>>> very slow paths. It can be done without synchronize_rcu.
> >>>
> >>> I'm not sure I understand what you're saying.  Do you mean to imply that
> >>> io_destroy is not a very slow path?  Because it is.  I prefer a solution
> >>> that doesn't re-architect things in order to solve a theoretical issue
> >>> that's never been observed.
> >>
> >> Even something that happens once per process lifetime, like fork/exit,
> >> is not necessarily suitable for RCU.
> >
> > Now you've really lost me.  ;-)  Processes which utilize the in-kernel
> > aio interface typically create an ioctx at process startup, use that for
> > submitting all of their io, then destroy it on exit.  Think of a
> > database.  Every time you call io_submit, you're doing a lookup of the
> > ioctx.
> >
> >> I don't know exactly how all programs use io_destroy -- of the small
> >> number that do, probably an even smaller number would care here. But I
> >> don't think it simplifies things enough to use synchronize_rcu for it.
> >
> > Above it sounded like you didn't think AIO should be using RCU at all.
> 
> synchronize_rcu of course, not RCU (typo).

I think that Nick is suggesting that call_rcu() be used instead.
Perhaps also very sparing use of synchronize_rcu_expedited(), which
is faster than synchronize_rcu() but uses more CPU time.
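
(Roughly, the freeing side could then look like the sketch below, again with
hypothetical names, and assuming a struct rcu_head embedded in the kioctx.
Note that call_rcu() only defers the kfree() past current readers; closing
the lookup race still relies on readers checking ->dead under
rcu_read_lock().)

static void free_ioctx_rcu(struct rcu_head *head)
{
	struct kioctx *ctx = container_of(head, struct kioctx, rcu);

	kfree(ctx);
}

static void io_destroy(struct kioctx *ctx)
{
	ctx->dead = 1;			/* fail new lookups */
	hlist_del_rcu(&ctx->list);	/* unpublish the ctx */
	wait_for_outstanding_reqs(ctx);	/* hypothetical helper */

	/* Defer the free past all current RCU readers, without blocking. */
	call_rcu(&ctx->rcu, free_ioctx_rcu);
}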

							Thanx, Paul

> > Here it sounds like you are just against synchronize_rcu.  Which is it?
> > And if the latter, then please tell me in what cases you feel one would
> > be justified in calling synchronize_rcu.  For now, I simply disagree
> > with you.  As I said before, you're already potentially waiting for disk
> > I/O to complete.  It doesn't get much worse than that for latency.
> 
> I think synchronize_rcu should firstly not be used unless it gives a good
> simplification or a speedup in a fastpath.
> 
> When that is satisfied, then it is a question of exactly what kind of slow
> path it should be used in. I don't think it should be used in process-
> synchronous code (e.g. syscalls) except for error cases, resource
> exhaustion, and management syscalls (like module unload).
> 
> For example "it's waiting for IO anyway" is not a good reason, IMO.
> Firstly because it may not be waiting for a 10ms disk IO, it may be
> waiting for anything up to an in-RAM device. Secondly because it
> could be quite slow depending on the RCU model used.
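
(For context, the RCU-protected lookup being discussed looks roughly like
this, simplified from the fs/aio.c of the time and written with the current
three-argument hlist_for_each_entry_rcu() form, with get_ioctx() standing in
for the real reference-taking. The question in this subthread is what
io_destroy() must do so that a lookup like this cannot still return a live
reference after io_destroy() has finished waiting.)

static struct kioctx *lookup_ioctx(struct mm_struct *mm,
				   unsigned long ctx_id)
{
	struct kioctx *ctx;

	rcu_read_lock();
	hlist_for_each_entry_rcu(ctx, &mm->ioctx_list, list) {
		if (ctx->user_id == ctx_id && !ctx->dead) {
			get_ioctx(ctx);	/* ref taken while RCU pins ctx */
			rcu_read_unlock();
			return ctx;
		}
	}
	rcu_read_unlock();
	return NULL;
}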

Thread overview: 32+ messages
2011-01-14  1:35 [patch] fs: aio fix rcu lookup Nick Piggin
2011-01-14 14:52 ` Jeff Moyer
2011-01-14 15:00   ` Nick Piggin
2011-01-17 19:07     ` Jeff Moyer
2011-01-17 23:24       ` Nick Piggin
2011-01-18 17:21         ` Jeff Moyer
2011-01-18 19:01         ` Jan Kara
2011-01-18 22:17           ` Nick Piggin
2011-01-18 23:00             ` Jeff Moyer
2011-01-18 23:05               ` Nick Piggin
2011-01-18 23:52             ` Jan Kara
2011-01-19  0:20               ` Nick Piggin
2011-01-19 13:21                 ` Jan Kara
2011-01-19 16:03                   ` Nick Piggin
2011-01-19 16:50                     ` Jan Kara
2011-01-19 17:37                       ` Nick Piggin
2011-01-20 20:21                         ` Jan Kara
2011-01-19 19:13                   ` Jeff Moyer
2011-01-19 19:46                     ` Jeff Moyer
2011-01-19 20:18                       ` Nick Piggin
2011-01-19 20:32                         ` Jeff Moyer
2011-01-19 20:45                           ` Nick Piggin
2011-01-19 21:03                             ` Jeff Moyer
2011-01-19 21:20                               ` Nick Piggin
2011-01-20  4:03                                 ` Paul E. McKenney [this message]
2011-01-20 18:31                                   ` Nick Piggin
2011-01-20 20:02                                     ` Paul E. McKenney
2011-01-20 20:15                                       ` Eric Dumazet
2011-01-21 21:22                                         ` Paul E. McKenney
2011-01-20 20:16                                     ` Jan Kara
2011-01-20 21:16                                       ` Jeff Moyer
2011-02-01 16:24                                       ` Jan Kara
