linux-nfs.vger.kernel.org archive mirror
* server tcp performance patches
@ 2011-04-06 23:06 J. Bruce Fields
  0 siblings, 0 replies; 2+ messages in thread
From: J. Bruce Fields @ 2011-04-06 23:06 UTC (permalink / raw)
  To: linux-nfs

We previously attempted to turn on autotuning of nfsd's tcp receive
buffers, enabling better performance on large bandwidth-delay-product
networks, but ran into some regressions on local gigabit networks.

At the time, Trond proposed modifying the server to receive partial rpc
calls as they arrive instead of waiting for the entire request to
arrive, reasoning that this would a) free up receive buffer space
sooner, allowing the server to advertise a larger window earlier, and b)
avoid a theoretical deadlock possible if the receive buffer ever fell
below the minimum required to hold a request.

That seemed to solve the observed regression, but we didn't completely
understand why, so I put off applying the patches.

I still don't completely understand the cause of the original
regression, but it seems like a reasonable thing to do anyway, and
solves an immediate problem, so I've updated Trond's original patch (and
split it up slightly)--the main changes required were just to adapt to
the 4.1 backchannel reply receive code.

I'm considering queueing this up for 2.6.40.

--b.


* Re: server tcp performance patches
@ 2011-04-10 16:34 J. Bruce Fields
  0 siblings, 0 replies; 2+ messages in thread
From: J. Bruce Fields @ 2011-04-10 16:34 UTC (permalink / raw)
  To: linux-nfs

> I'm considering queueing this up for 2.6.40.

Done, but with the following fix.

--b.

commit 8985ef0b8af895c3b85a8c1b7108e0169fcbd20b
Author: J. Bruce Fields <bfields@redhat.com>
Date:   Sat Apr 9 10:03:10 2011 -0400

    svcrpc: complete svsk processing on cb receive failure
    
    Currently when there's some failure to receive a callback (because we
    couldn't find a matching xid, for example), we exit svc_recv with
    sk_tcplen still set but without any pages saved with the socket.  This
    will cause a crash later in svc_tcp_restore_pages.
    
    Instead, make sure we reset that tcp information whether the callback
    receive failed or succeeded.
    
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>

diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
index 213dea8..af04f77 100644
--- a/net/sunrpc/svcsock.c
+++ b/net/sunrpc/svcsock.c
@@ -1143,11 +1143,8 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
 
 	p = (__be32 *)rqstp->rq_arg.head[0].iov_base;
 	calldir = p[1];
-	if (calldir) {
+	if (calldir)
 		len = receive_cb_reply(svsk, rqstp);
-		if (len < 0)
-			goto error;
-	}
 
 	/* Reset TCP read info */
 	svsk->sk_reclen = 0;
@@ -1156,6 +1153,8 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
 	if (svc_recv_available(svsk) > sizeof(rpc_fraghdr))
 		set_bit(XPT_DATA, &svsk->sk_xprt.xpt_flags);
 
+	if (len < 0)
+		goto error;
 
 	svc_xprt_copy_addrs(rqstp, &svsk->sk_xprt);
 	if (serv->sv_stats)
