linux-nfs.vger.kernel.org archive mirror
* [PATCH/RFC] - hard-to-hit race in xprtsock.
@ 2013-10-29  6:42 NeilBrown
  2013-10-29 15:02 ` Myklebust, Trond
  0 siblings, 1 reply; 4+ messages in thread
From: NeilBrown @ 2013-10-29  6:42 UTC (permalink / raw)
  To: NFS



We have a customer who hit a rare race in sunrpc (in a 3.0 based kernel,
but the relevant code doesn't seem to have changed much).

The thread that crashed was in 
  xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested.

'sock' in this last function is NULL.

The only way I can imagine this happening is if some other thread called

 xs_close -> xs_reset_transport -> sock_release -> inet_release

in a very small window a moment earlier.

As far as I can tell, xs_close is only called with XPRT_LOCKED set.

xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set too, which would
exclude the two from running at the same time.


However, xs_tcp_schedule_linger_timeout can schedule the work item that runs
xs_tcp_setup_socket without first claiming XPRT_LOCKED.
So I assume that is what is happening.

I imagine some race between the client closing the socket and the
TCP_FIN_WAIT1 state change being reported, somehow leaving the two threads
racing.

I wonder if it might make sense to always abort 'connect_worker' in
xs_close()?
I think the connect_worker really mustn't be running or queued at this point,
so cancelling it is either a no-op, or vitally important.
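
For illustration, here is the ordering I have in mind as a simplified sketch
(not the actual sunrpc code; the demo_* names are invented): cancel the
delayed connect worker synchronously before releasing the socket it
dereferences.

#include <linux/kernel.h>
#include <linux/net.h>
#include <linux/workqueue.h>

struct demo_transport {
	struct socket		*sock;		/* what the worker dereferences */
	struct delayed_work	connect_worker;
};

static void demo_connect_worker(struct work_struct *work)
{
	struct demo_transport *t =
		container_of(work, struct demo_transport, connect_worker.work);

	/* Would oops here if the socket had already been released. */
	if (t->sock)
		pr_info("connecting via socket %p\n", t->sock);
}

static void demo_init(struct demo_transport *t, struct socket *sock)
{
	t->sock = sock;
	INIT_DELAYED_WORK(&t->connect_worker, demo_connect_worker);
}

static void demo_close(struct demo_transport *t)
{
	/*
	 * Either a no-op (worker neither queued nor running) or vital:
	 * wait for the worker so it can never touch t->sock afterwards.
	 */
	cancel_delayed_work_sync(&t->connect_worker);

	if (t->sock) {
		sock_release(t->sock);
		t->sock = NULL;
	}
}

That is the ordering the first hunk below adds to xs_close(); the second hunk
drops the now-redundant call from xs_destroy().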

So: does the following patch seem reasonable?  If so I'll submit it properly
with a coherent description etc.

Thanks,
NeilBrown



diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index ee03d35..b19ba53 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -835,6 +835,8 @@ static void xs_close(struct rpc_xprt *xprt)
 
 	dprintk("RPC:       xs_close xprt %p\n", xprt);
 
+	cancel_delayed_work_sync(&transport->connect_worker);
+
 	xs_reset_transport(transport);
 	xprt->reestablish_timeout = 0;
 
@@ -869,12 +871,8 @@ static void xs_local_destroy(struct rpc_xprt *xprt)
  */
 static void xs_destroy(struct rpc_xprt *xprt)
 {
-	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
-
 	dprintk("RPC:       xs_destroy xprt %p\n", xprt);
 
-	cancel_delayed_work_sync(&transport->connect_worker);
-
 	xs_local_destroy(xprt);
 }
 



* Re: [PATCH/RFC] - hard-to-hit race in xprtsock.
  2013-10-29  6:42 [PATCH/RFC] - hard-to-hit race in xprtsock NeilBrown
@ 2013-10-29 15:02 ` Myklebust, Trond
  2013-10-30  6:02   ` NeilBrown
  0 siblings, 1 reply; 4+ messages in thread
From: Myklebust, Trond @ 2013-10-29 15:02 UTC (permalink / raw)
  To: NeilBrown; +Cc: NFS

On Tue, 2013-10-29 at 17:42 +1100, NeilBrown wrote:
> We have a customer who hit a rare race in sunrpc (in a 3.0 based kernel,
> but the relevant code doesn't seem to have changed much).
> 
> The thread that crashed was in 
>   xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested.
> 
> 'sock' in this last function is NULL.
> 
> The only way I can imagine this happening is if some other thread called
> 
>  xs_close -> xs_reset_transport -> sock_release -> inet_release
> 
> in a very small window a moment earlier.
> 
> As far as I can tell, xs_close is only called with XPRT_LOCKED set.
> 
> xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set too, which would
> exclude the two from running at the same time.
> 
> 
> However, xs_tcp_schedule_linger_timeout can schedule the work item that runs
> xs_tcp_setup_socket without first claiming XPRT_LOCKED.
> So I assume that is what is happening.
> 
> I imagine some race between the client closing the socket and the
> TCP_FIN_WAIT1 state change being reported, somehow leaving the two threads
> racing.
> 
> I wonder if it might make sense to always abort 'connect_worker' in
> xs_close()?
> I think the connect_worker really mustn't be running or queued at this point,
> so cancelling it is either a no-op, or vitally important.
> 
> So: does the following patch seem reasonable?  If so I'll submit it properly
> with a coherent description etc.

Hi Neil,

Will that do the right thing if the connect_worker and close are running
on the same rpciod thread? I think it should, but I never manage to keep
100% up to date with the ever-changing semantics of
cancel_delayed_work_sync() and friends...

Cheers,
  Trond
-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com


* Re: [PATCH/RFC] - hard-to-hit race in xprtsock.
  2013-10-29 15:02 ` Myklebust, Trond
@ 2013-10-30  6:02   ` NeilBrown
  2013-10-30 15:12     ` Myklebust, Trond
  0 siblings, 1 reply; 4+ messages in thread
From: NeilBrown @ 2013-10-30  6:02 UTC (permalink / raw)
  To: Myklebust, Trond; +Cc: NFS


On Tue, 29 Oct 2013 15:02:36 +0000 "Myklebust, Trond"
<Trond.Myklebust@netapp.com> wrote:

> On Tue, 2013-10-29 at 17:42 +1100, NeilBrown wrote:
> > We have a customer who hit a rare race in sunrpc (in a 3.0 based kernel,
> > but the relevant code doesn't seem to have changed much).
> > 
> > The thread that crashed was in 
> >   xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested.
> > 
> > 'sock' in this last function is NULL.
> > 
> > The only way I can imagine this happening is if some other thread called
> > 
> >  xs_close -> xs_reset_transport -> sock_release -> inet_release
> > 
> > in a very small window a moment earlier.
> > 
> > As far as I can tell, xs_close is only called with XPRT_LOCKED set.
> > 
> > xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set too, which would
> > exclude the two from running at the same time.
> > 
> > 
> > However, xs_tcp_schedule_linger_timeout can schedule the work item that runs
> > xs_tcp_setup_socket without first claiming XPRT_LOCKED.
> > So I assume that is what is happening.
> > 
> > I imagine some race between the client closing the socket and the
> > TCP_FIN_WAIT1 state change being reported, somehow leaving the two threads
> > racing.
> > 
> > I wonder if it might make sense to always abort 'connect_worker' in
> > xs_close()?
> > I think the connect_worker really mustn't be running or queued at this point,
> > so cancelling it is either a no-op, or vitally important.
> > 
> > So: does the following patch seem reasonable?  If so I'll submit it properly
> > with a coherent description etc.
> 
> Hi Neil,
> 
> Will that do the right thing if the connect_worker and close are running
> on the same rpciod thread? I think it should, but I never manage to keep
> 100% up to date with the ever-changing semantics of
> cancel_delayed_work_sync() and friends...
> 
> Cheers,
>   Trond

Thanks for asking that!  I had the exact same concern when I first conceived
the patch.

I managed to convince myself that there isn't a problem as long as
xs_tcp_setup_socket never calls into xs_close.
Other than that, the worst case is that one thread running xs_close could
block while some other thread runs xs_{tcp,udp}_setup_socket.
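
To spell it out with the demo_* sketch from my first mail (still illustrative
only; demo_connect_failed() is a made-up stand-in for whatever error path
might tempt the worker to tear the transport down):

static bool demo_connect_failed(struct demo_transport *t)
{
	return false;	/* stand-in; real code would inspect the connect result */
}

static void demo_connect_worker(struct work_struct *work)
{
	struct demo_transport *t =
		container_of(work, struct demo_transport, connect_worker.work);

	if (demo_connect_failed(t)) {
		/*
		 * Must NOT call demo_close(t) from here: demo_close() does
		 * cancel_delayed_work_sync(&t->connect_worker), which would
		 * wait for this very work item to return -- a self-deadlock.
		 * Any close has to be deferred to a different context.
		 */
	}

	/*
	 * A close racing on another rpciod thread is fine: its
	 * cancel_delayed_work_sync() just blocks until we return,
	 * and only then releases t->sock.
	 */
}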

Thanks,
NeilBrown


* Re: [PATCH/RFC] - hard-to-hit race in xprtsock.
  2013-10-30  6:02   ` NeilBrown
@ 2013-10-30 15:12     ` Myklebust, Trond
  0 siblings, 0 replies; 4+ messages in thread
From: Myklebust, Trond @ 2013-10-30 15:12 UTC (permalink / raw)
  To: NeilBrown; +Cc: NFS

On Wed, 2013-10-30 at 17:02 +1100, NeilBrown wrote:
> On Tue, 29 Oct 2013 15:02:36 +0000 "Myklebust, Trond"
> <Trond.Myklebust@netapp.com> wrote:
> 
> > On Tue, 2013-10-29 at 17:42 +1100, NeilBrown wrote:
> > > We have a customer who hit a rare race in sunrpc (in a 3.0 based kernel,
> > > but the relevant code doesn't seem to have changed much).
> > > 
> > > The thread that crashed was in 
> > >   xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested.
> > > 
> > > 'sock' in this last function is NULL.
> > > 
> > > The only way I can imagine this happening is if some other thread called
> > > 
> > >  xs_close -> xs_reset_transport -> sock_release -> inet_release
> > > 
> > > in a very small window a moment earlier.
> > > 
> > > As far as I can tell, xs_close is only called with XPRT_LOCKED set.
> > > 
> > > xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set too, which would
> > > exclude the two from running at the same time.
> > > 
> > > 
> > > However, xs_tcp_schedule_linger_timeout can schedule the work item that runs
> > > xs_tcp_setup_socket without first claiming XPRT_LOCKED.
> > > So I assume that is what is happening.
> > > 
> > > I imagine some race between the client closing the socket and the
> > > TCP_FIN_WAIT1 state change being reported, somehow leaving the two threads
> > > racing.
> > > 
> > > I wonder if it might make sense to always abort 'connect_worker' in
> > > xs_close()?
> > > I think the connect_worker really mustn't be running or queued at this point,
> > > so cancelling it is either a no-op, or vitally important.
> > > 
> > > So: does the following patch seem reasonable?  If so I'll submit it properly
> > > with a coherent description etc.
> > 
> > Hi Neil,
> > 
> > Will that do the right thing if the connect_worker and close are running
> > on the same rpciod thread? I think it should, but I never manage to keep
> > 100% up to date with the ever-changing semantics of
> > cancel_delayed_work_sync() and friends...
> > 
> > Cheers,
> >   Trond
> 
> Thanks for asking that!  I had the exact same concern when I first conceived
> the patch.
> 
> I managed to convince myself that there isn't a problem as long as
> xs_tcp_setup_socket never calls into xs_close.
> Other than that, the worst case is that one thread running xs_close could
> block while some other thread runs xs_{tcp,udp}_setup_socket.

OK. Let's go with that then. Could you please resend as a formal patch?

Cheers,
  Trond
-- 
Trond Myklebust
Linux NFS client maintainer

NetApp
Trond.Myklebust@netapp.com
www.netapp.com
