From: NeilBrown <neilb@suse.de>
To: NFS <linux-nfs@vger.kernel.org>
Subject: [PATCH/RFC] - hard-to-hit race in xprtsock.
Date: Tue, 29 Oct 2013 17:42:04 +1100 [thread overview]
Message-ID: <20131029174204.7f6578d4@notabene.brown> (raw)
We have a customer who hit a rare race in sunrpc (on a 3.0-based kernel,
though the relevant code doesn't seem to have changed much since).
The thread that crashed was in
xs_tcp_setup_socket -> inet_stream_connect -> lock_sock_nested.
'sock' in this last function is NULL.
The only way I can imagine this happening is if some other thread called
xs_close -> xs_reset_transport -> sock_release -> inet_release
in a very small window a moment earlier.
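
That window can be sketched as a sequential userspace model (illustrative only; `model_transport`, `model_close` and `model_connect_worker` are made-up stand-ins for the real `sock_xprt`, `xs_reset_transport()` and `xs_tcp_setup_socket()`, and the real failure is a NULL passed to lock_sock_nested rather than a clean return):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace model of the crash window -- not kernel code. */
struct model_sock { int fd; };
struct model_transport { struct model_sock *sock; };

/* xs_reset_transport() analogue: clear the socket pointer.
 * The real code also does sock_release(), freeing the socket. */
static void model_close(struct model_transport *t)
{
	t->sock = NULL;
}

/* connect-worker analogue: returns 0 if it would have
 * dereferenced NULL (the lock_sock_nested crash case). */
static int model_connect_worker(struct model_transport *t)
{
	struct model_sock *s = t->sock;
	if (s == NULL)
		return 0;
	return 1;	/* normal path: socket still present */
}
```

If `model_close()` runs before the worker looks at the transport, the worker sees a NULL socket, which is the shape of the crash above.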
As far as I can tell, xs_close is only called with XPRT_LOCKED set.
xs_tcp_setup_socket is mostly scheduled with XPRT_LOCKED set too, which would
exclude the two from running at the same time.
However xs_tcp_schedule_linger_timeout can schedule the thread which runs
xs_tcp_setup_socket without first claiming XPRT_LOCKED.
So I assume that is what is happening.
I imagine some race between the client closing the socket and the socket's
state changing to TCP_FIN_WAIT1, leaving the two threads running concurrently.
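
The exclusion convention can be modelled in userspace C (a minimal sketch; `claim_xprt_lock`/`release_xprt_lock` and the `atomic_flag` are illustrative stand-ins for `test_and_set_bit(XPRT_LOCKED, &xprt->state)`, not the kernel API):

```c
#include <assert.h>
#include <stdatomic.h>

/* Userspace model of the XPRT_LOCKED convention.  Both xs_close()
 * and the normally-scheduled connect worker claim the lock first,
 * so they cannot overlap; a path that queues the worker without
 * claiming it re-opens the race. */
static atomic_flag xprt_locked = ATOMIC_FLAG_INIT;

/* returns 1 if we got the lock, 0 if another thread holds it */
static int claim_xprt_lock(void)
{
	return !atomic_flag_test_and_set(&xprt_locked);
}

static void release_xprt_lock(void)
{
	atomic_flag_clear(&xprt_locked);
}
```

A second claimant fails until the first releases, which is exactly the mutual exclusion the linger-timeout path appears to skip.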
I wonder if it might make sense to always abort 'connect_worker' in
xs_close()?
I think the connect_worker really mustn't be running or queued at that point,
so cancelling it is either a no-op or vitally important.
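
The "no-op or vitally important" point can be sketched as a userspace model of the three states cancel_delayed_work_sync() can find the work item in (the `model_work`/`cancel_sync` names and the enum are illustrative, not the workqueue API):

```c
#include <assert.h>

/* Userspace model of cancel_delayed_work_sync() semantics. */
enum work_state { WORK_IDLE, WORK_QUEUED, WORK_RUNNING };

struct model_work {
	enum work_state state;
};

/* returns 1 if a queued or running work item had to be stopped */
static int cancel_sync(struct model_work *w)
{
	switch (w->state) {
	case WORK_IDLE:
		return 0;		/* the no-op case */
	case WORK_QUEUED:
		w->state = WORK_IDLE;	/* vital: it never runs against
					 * the soon-to-be-freed socket */
		return 1;
	case WORK_RUNNING:
		/* the real call sleeps here until the work fn returns,
		 * so teardown cannot proceed underneath it */
		w->state = WORK_IDLE;
		return 1;
	}
	return 0;
}
```

Either way, once cancel_sync() returns, the caller knows the worker cannot touch the transport again.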
So: does the following patch seem reasonable? If so I'll submit it properly
with a coherent description etc.
Thanks,
NeilBrown
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index ee03d35..b19ba53 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -835,6 +835,8 @@ static void xs_close(struct rpc_xprt *xprt)
 	dprintk("RPC: xs_close xprt %p\n", xprt);
+	cancel_delayed_work_sync(&transport->connect_worker);
+
 	xs_reset_transport(transport);
 	xprt->reestablish_timeout = 0;
@@ -869,12 +871,8 @@ static void xs_local_destroy(struct rpc_xprt *xprt)
  */
 static void xs_destroy(struct rpc_xprt *xprt)
 {
-	struct sock_xprt *transport = container_of(xprt, struct sock_xprt, xprt);
-
 	dprintk("RPC: xs_destroy xprt %p\n", xprt);
-	cancel_delayed_work_sync(&transport->connect_worker);
-
 	xs_local_destroy(xprt);
 }