From: Jeff Moyer <jmoyer@redhat.com>
To: Nick Piggin <npiggin@gmail.com>
Cc: Jan Kara <jack@suse.cz>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-fsdevel <linux-fsdevel@vger.kernel.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [patch] fs: aio fix rcu lookup
Date: Tue, 18 Jan 2011 18:00:14 -0500
Message-ID: <x49mxmx6cn5.fsf@segfault.boston.devel.redhat.com>
In-Reply-To: <AANLkTin1+B_1CV-weALdS8EYJO60BJd0b7AGWzc0wrWr@mail.gmail.com> (Nick Piggin's message of "Wed, 19 Jan 2011 09:17:23 +1100")

Nick Piggin <npiggin@gmail.com> writes:

> On Wed, Jan 19, 2011 at 6:01 AM, Jan Kara <jack@suse.cz> wrote:
>>  Hi,
>>
>> On Tue 18-01-11 10:24:24, Nick Piggin wrote:
>>> On Tue, Jan 18, 2011 at 6:07 AM, Jeff Moyer <jmoyer@redhat.com> wrote:
>>> > Nick Piggin <npiggin@gmail.com> writes:
>>> >> Do you agree with the theoretical problem? I haven't tried to
>>> >> write a racer to break it yet. Inserting a delay before the
>>> >> get_ioctx might do the trick.
>>> >
>>> > I'm not convinced, no.  The last reference to the kioctx is always
>>> > held by the process and is released either in the exit_aio path or
>>> > via sys_io_destroy.  In both cases we cancel all outstanding AIOs,
>>> > then wait for them all to complete before dropping the final
>>> > reference to the context.
>>>
>>> That wouldn't appear to prevent a concurrent thread from doing an
>>> io operation that requires an ioctx lookup and ending up with the
>>> last reference after the io_cancel thread drops its ref.
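
To spell out the window being described here (the free path at the end is
an assumption on my part about how the RCU-side teardown works, not
something traced through the code):

/*
 * io_submit() / lookup path            io_destroy() / exit_aio path
 * ------------------------------------ ------------------------------------
 * rcu_read_lock()
 * finds ctx on the list,
 * sees ctx->dead == 0
 *                                      sets ctx->dead = 1, unlinks ctx
 *                                      cancels and waits for in-flight AIOs
 *                                      drops what it believes is the last
 *                                      reference; the actual free is
 *                                      presumably deferred by RCU
 * get_ioctx(ctx)     <-- bumps a refcount that may already be zero
 * rcu_read_unlock()
 * returns a ctx whose teardown has already run
 */
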
>>>
>>> > So, while I agree that what you wrote is better, I remain
>>> > unconvinced that it solves a real-world problem.  Feel free to
>>> > push it in as a cleanup, though.
>>>
>>> Well, I think it has to be technically correct first. If there is
>>> indeed a guaranteed reference somehow, it just needs a comment.
>>  Hmm, the code in io_destroy() indeed looks fishy. We delete the ioctx
>> from the hash table and set ioctx->dead, which is supposed to stop
>> lookup_ioctx() from finding it (see the !ctx->dead check in
>> lookup_ioctx()). There's even a comment in io_destroy() saying:
>>        /*
>>         * Wake up any waiters.  The setting of ctx->dead must be seen
>>         * by other CPUs at this point.  Right now, we rely on the
>>         * locking done by the above calls to ensure this consistency.
>>         */
>> But since lookup_ioctx() is called without any lock or barrier, nothing
>> really seems to prevent the list traversal and the ioctx->dead test from
>> happening before io_destroy(), and the get_ioctx() from happening after
>> io_destroy().
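
For concreteness, the lookup side being described is shaped roughly like
this (a paraphrase from memory of the fs/aio.c of that era, not a verbatim
quote, so the exact list and field names may be off):

static struct kioctx *lookup_ioctx(unsigned long ctx_id)
{
        struct mm_struct *mm = current->mm;
        struct hlist_node *n;
        struct kioctx *ctx, *ret = NULL;

        rcu_read_lock();
        hlist_for_each_entry_rcu(ctx, n, &mm->ioctx_list, list) {
                /*
                 * Nothing orders this ->dead test against io_destroy();
                 * it can observe dead == 0 ...
                 */
                if (ctx->user_id == ctx_id && !ctx->dead) {
                        /*
                         * ... and by the time we get here, io_destroy()
                         * may already have run and dropped its references.
                         */
                        get_ioctx(ctx);
                        ret = ctx;
                        break;
                }
        }
        rcu_read_unlock();
        return ret;
}
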
>>
>> But wouldn't the right fix be to call synchronize_rcu() in io_destroy()?
>> Because with your fix we could still return a 'dead' ioctx, and I don't
>> think we are supposed to do that...
>
> With my fix we won't oops. I was a bit concerned about ->dead, yes,
> but I don't know what semantics it is intended to have there.
>
> A synchronize_rcu() in io_destroy() only waits until lookup_ioctx()
> drops the rcu_read_lock(); by that point the lookup may already have
> taken its reference and returned the ioctx.
>
> The dead=1 in io_destroy indeed doesn't guarantee a whole lot.
> Anyone know?
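
FWIW, the usual way to close that window in an RCU lookup is to make the
reference grab itself conditional, i.e. fail the lookup once the refcount
has already hit zero. A minimal sketch of that shape (the generic idiom,
not a claim about what the posted patch does):

        hlist_for_each_entry_rcu(ctx, n, &mm->ioctx_list, list) {
                if (ctx->user_id != ctx_id || ctx->dead)
                        continue;
                /*
                 * Only take a reference if the count has not already
                 * dropped to zero; if it has, the destroy path owns the
                 * ctx and it must not be handed out.
                 */
                if (atomic_inc_not_zero(&ctx->users))
                        ret = ctx;
                break;
        }

For that to be safe, the destroy side has to free the kioctx via RCU
(call_rcu() or after a grace period) so the memory stays valid for the
duration of the reader's rcu_read_lock().
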

See the comment above io_destroy() for starters.  Note that RCU was
bolted on later, and I believe that ->dead has nothing to do with the
RCU-ification.
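
For reference, the destroy side goes roughly like this (again paraphrased
from memory, so treat the details as approximate):

        spin_lock(&mm->ioctx_lock);
        ctx->dead = 1;
        hlist_del_rcu(&ctx->list);
        spin_unlock(&mm->ioctx_lock);

        aio_cancel_all(ctx);            /* cancel outstanding requests */
        wait_for_all_aios(ctx);         /* wait for them to drain */

        /* "The setting of ctx->dead must be seen by other CPUs ..." */
        wake_up_all(&ctx->wait);
        put_ioctx(ctx);

So the only ordering ->dead gets comes from mm->ioctx_lock and from
whatever the cancel/wait calls happen to take, and the RCU lookup takes
none of those locks, which fits the view that ->dead predates, and is
independent of, the RCU conversion.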

Cheers,
Jeff

Thread overview: 32+ messages
2011-01-14  1:35 [patch] fs: aio fix rcu lookup Nick Piggin
2011-01-14 14:52 ` Jeff Moyer
2011-01-14 15:00   ` Nick Piggin
2011-01-17 19:07     ` Jeff Moyer
2011-01-17 23:24       ` Nick Piggin
2011-01-18 17:21         ` Jeff Moyer
2011-01-18 19:01         ` Jan Kara
2011-01-18 22:17           ` Nick Piggin
2011-01-18 23:00             ` Jeff Moyer [this message]
2011-01-18 23:05               ` Nick Piggin
2011-01-18 23:52             ` Jan Kara
2011-01-19  0:20               ` Nick Piggin
2011-01-19 13:21                 ` Jan Kara
2011-01-19 16:03                   ` Nick Piggin
2011-01-19 16:50                     ` Jan Kara
2011-01-19 17:37                       ` Nick Piggin
2011-01-20 20:21                         ` Jan Kara
2011-01-19 19:13                   ` Jeff Moyer
2011-01-19 19:46                     ` Jeff Moyer
2011-01-19 20:18                       ` Nick Piggin
2011-01-19 20:32                         ` Jeff Moyer
2011-01-19 20:45                           ` Nick Piggin
2011-01-19 21:03                             ` Jeff Moyer
2011-01-19 21:20                               ` Nick Piggin
2011-01-20  4:03                                 ` Paul E. McKenney
2011-01-20 18:31                                   ` Nick Piggin
2011-01-20 20:02                                     ` Paul E. McKenney
2011-01-20 20:15                                       ` Eric Dumazet
2011-01-21 21:22                                         ` Paul E. McKenney
2011-01-20 20:16                                     ` Jan Kara
2011-01-20 21:16                                       ` Jeff Moyer
2011-02-01 16:24                                       ` Jan Kara
