linux-nfs.vger.kernel.org archive mirror
From: NeilBrown <neilb@suse.de>
To: "Myklebust, Trond" <Trond.Myklebust@netapp.com>
Cc: NFS <linux-nfs@vger.kernel.org>
Subject: Re: [PATCH/RFC] - hard-to-hit race in xprtsock.
Date: Wed, 30 Oct 2013 17:02:50 +1100
Message-ID: <20131030170250.702da2c3@notabene.brown>
In-Reply-To: <1383058955.7805.2.camel@leira.trondhjem.org>

On Tue, 29 Oct 2013 15:02:36 +0000 "Myklebust, Trond"
<Trond.Myklebust@netapp.com> wrote:

> On Tue, 2013-10-29 at 17:42 +1100, NeilBrown wrote:
> > We have a customer who hit a rare race in sunrpc (in a 3.0 based kernel,
> > but the relevant code doesn't seem to have changed much).
> > 
> > The thread that crashed was in 
> >   xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested.
> > 
> > 'sock' in this last function is NULL.
> > 
> > The only way I can imagine this happening is if some other thread called
> > 
> >  xs_close -> xs_reset_transport -> sock_release -> inet_release
> > 
> > in a very small window a moment earlier.
> > 
> > As far as I can tell, xs_close is only called with XPRT_LOCKED set.
> > 
> > xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set too, which would
> > exclude the two from running at the same time.
> > 
> > 
> > However xs_tcp_schedule_linger_timeout can schedule the thread which runs
> > xs_tcp_setup_socket without first claiming XPRT_LOCKED.
> > So I assume that is what is happening.
> > 
> > I imagine some race between the client closing the socket and the socket
> > state changing to TCP_FIN_WAIT1, which somehow lets the two threads run
> > concurrently.
> > 
> > I wonder if it might make sense to always abort 'connect_worker' in
> > xs_close()?
> > I think the connect_worker really mustn't be running or queued at this point,
> > so cancelling it is either a no-op, or vitally important.
> > 
> > So: does the following patch seem reasonable?  If so I'll submit it properly
> > with a coherent description etc.
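
For concreteness, the proposal above amounts to roughly the sketch below.  This
is illustrative only (not the actual patch from my first mail) and it assumes
struct sock_xprt still carries its connect worker as the delayed_work
'connect_worker', with xs_{tcp,udp}_setup_socket as the work functions:

static void xs_close(struct rpc_xprt *xprt)
{
	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);

	/* The connect worker must not be running or queued once we start
	 * releasing the socket, so this is either a no-op or essential.
	 */
	cancel_delayed_work_sync(&transport->connect_worker);

	xs_reset_transport(transport);

	/* ... the rest of the existing xs_close() teardown is unchanged ... */
}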
> 
> Hi Neil,
> 
> Will that do the right thing if the connect_worker and close are running
> on the same rpciod thread? I think it should, but I never manage to keep
> 100% up to date with the ever changing semantics of
> cancel_delayed_work_sync() and friends...
> 
> Cheers,
>   Trond

Thanks for asking that!  I had the exact same concern when I first conceived
the patch.

I managed to convince myself that there isn't a problem as long as
xs_tcp_setup_socket never calls into xs_close.
Otherwise the worst case is that one thread running xs_close could block
while some other thread runs xs_{tcp,udp}_setup_socket.
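
To spell out the workqueue assumption, here is a toy sketch (made-up names,
nothing to do with the sunrpc code itself) of why sync-cancelling a *different*
work item from inside a work function is fine, while sync-cancelling your own
item would not be:

#include <linux/workqueue.h>

static void setup_like_fn(struct work_struct *w)
{
	/* Stands in for xs_{tcp,udp}_setup_socket(): may run for a while. */
}
static DECLARE_DELAYED_WORK(setup_like, setup_like_fn);

static void close_like_fn(struct work_struct *w)
{
	/* Stands in for a close path running on rpciod.  As I understand the
	 * current workqueue semantics: if setup_like is only queued it is
	 * removed without ever running; if it is running on another worker
	 * we sleep until it finishes.  Either way we return, because we
	 * never wait on the work item we are currently executing.  Calling
	 * cancel_delayed_work_sync() on close_like itself from here would
	 * deadlock.
	 */
	cancel_delayed_work_sync(&setup_like);
}
static DECLARE_DELAYED_WORK(close_like, close_like_fn);

Which is just the worst case described above: xs_close blocks briefly while
another rpciod worker finishes the setup function.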

Thanks,
NeilBrown

Thread overview: 4+ messages
2013-10-29  6:42 [PATCH/RFC] - hard-to-hit race in xprtsock NeilBrown
2013-10-29 15:02 ` Myklebust, Trond
2013-10-30  6:02   ` NeilBrown [this message]
2013-10-30 15:12     ` Myklebust, Trond
