From: "J. Bruce Fields" <bfields@fieldses.org>
To: Chuck Lever <chuck.lever@oracle.com>
Cc: linux-nfs@vger.kernel.org
Subject: Re: [PATCH] svcrdma: Select NFSv4.1 backchannel transport based on forward channel
Date: Thu, 17 Jul 2014 14:36:21 -0400
Message-ID: <20140717183621.GA30442@fieldses.org>
In-Reply-To: <20140716193542.7847.95868.stgit@klimt.1015granger.net>

On Wed, Jul 16, 2014 at 03:38:32PM -0400, Chuck Lever wrote:
> The current code always selects XPRT_TRANSPORT_BC_TCP for the back
> channel, even when the forward channel was not TCP (e.g., RDMA). When
> a 4.1 mount is attempted with RDMA, the server panics in the TCP BC
> code when trying to send CB_NULL.
> 
> Instead, construct the transport protocol number from the forward
> channel transport OR'd with XPRT_TRANSPORT_BC. Transports that do
> not support bi-directional RPC will not have registered a "BC"
> transport, causing create_backchannel_client() to fail immediately.
> 
> Fixes: https://bugzilla.linux-nfs.org/show_bug.cgi?id=265
> Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
> ---
> Hi Bruce-
> 
> What do you think of this approach?

OK by me.  (So clients use a separate TCP connection for the
backchannel?)

--b.

> 
> 
>  fs/nfsd/nfs4callback.c                   |    3 ++-
>  include/linux/sunrpc/svc_xprt.h          |    1 +
>  net/sunrpc/svcsock.c                     |    2 ++
>  net/sunrpc/xprt.c                        |    2 +-
>  net/sunrpc/xprtrdma/svc_rdma_transport.c |    1 +
>  5 files changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
> index 2c73cae..0f23ad0 100644
> --- a/fs/nfsd/nfs4callback.c
> +++ b/fs/nfsd/nfs4callback.c
> @@ -689,7 +689,8 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c
>  		clp->cl_cb_session = ses;
>  		args.bc_xprt = conn->cb_xprt;
>  		args.prognumber = clp->cl_cb_session->se_cb_prog;
> -		args.protocol = XPRT_TRANSPORT_BC_TCP;
> +		args.protocol = conn->cb_xprt->xpt_class->xcl_ident |
> +				XPRT_TRANSPORT_BC;
>  		args.authflavor = ses->se_cb_sec.flavor;
>  	}
>  	/* Create RPC client */
> diff --git a/include/linux/sunrpc/svc_xprt.h b/include/linux/sunrpc/svc_xprt.h
> index 7235040..5d9d6f8 100644
> --- a/include/linux/sunrpc/svc_xprt.h
> +++ b/include/linux/sunrpc/svc_xprt.h
> @@ -33,6 +33,7 @@ struct svc_xprt_class {
>  	struct svc_xprt_ops	*xcl_ops;
>  	struct list_head	xcl_list;
>  	u32			xcl_max_payload;
> +	int			xcl_ident;
>  };
>  
>  /*
> diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
> index b507cd3..b2437ee 100644
> --- a/net/sunrpc/svcsock.c
> +++ b/net/sunrpc/svcsock.c
> @@ -692,6 +692,7 @@ static struct svc_xprt_class svc_udp_class = {
>  	.xcl_owner = THIS_MODULE,
>  	.xcl_ops = &svc_udp_ops,
>  	.xcl_max_payload = RPCSVC_MAXPAYLOAD_UDP,
> +	.xcl_ident = XPRT_TRANSPORT_UDP,
>  };
>  
>  static void svc_udp_init(struct svc_sock *svsk, struct svc_serv *serv)
> @@ -1292,6 +1293,7 @@ static struct svc_xprt_class svc_tcp_class = {
>  	.xcl_owner = THIS_MODULE,
>  	.xcl_ops = &svc_tcp_ops,
>  	.xcl_max_payload = RPCSVC_MAXPAYLOAD_TCP,
> +	.xcl_ident = XPRT_TRANSPORT_TCP,
>  };
>  
>  void svc_init_xprt_sock(void)
> diff --git a/net/sunrpc/xprt.c b/net/sunrpc/xprt.c
> index c3b2b33..51c6316 100644
> --- a/net/sunrpc/xprt.c
> +++ b/net/sunrpc/xprt.c
> @@ -1306,7 +1306,7 @@ struct rpc_xprt *xprt_create_transport(struct xprt_create *args)
>  		}
>  	}
>  	spin_unlock(&xprt_list_lock);
> -	printk(KERN_ERR "RPC: transport (%d) not supported\n", args->ident);
> +	dprintk("RPC: transport (%d) not supported\n", args->ident);
>  	return ERR_PTR(-EIO);
>  
>  found:
> diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
> index e7323fb..06a5d92 100644
> --- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
> +++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
> @@ -92,6 +92,7 @@ struct svc_xprt_class svc_rdma_class = {
>  	.xcl_owner = THIS_MODULE,
>  	.xcl_ops = &svc_rdma_ops,
>  	.xcl_max_payload = RPCSVC_MAXPAYLOAD_TCP,
> +	.xcl_ident = XPRT_TRANSPORT_RDMA,
>  };
>  
>  struct svc_rdma_op_ctxt *svc_rdma_get_context(struct svcxprt_rdma *xprt)
> 

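For reference, the composed protocol number works out as below. This is
a minimal userspace sketch, not the kernel code: backchannel_ident() is
a made-up stand-in for the patch's xcl_ident | XPRT_TRANSPORT_BC
expression, and the numeric values are assumed to mirror
include/linux/sunrpc/xprt.h of this era (the kernel defines
XPRT_TRANSPORT_BC as (1 << 31); the sketch uses 1U << 31 so the shift
stays well-defined in plain C):

#include <stdio.h>

/* Values assumed to mirror include/linux/sunrpc/xprt.h. */
#define XPRT_TRANSPORT_BC    (1U << 31)  /* "backchannel" flag */
#define XPRT_TRANSPORT_UDP   17U         /* IPPROTO_UDP */
#define XPRT_TRANSPORT_TCP   6U          /* IPPROTO_TCP */
#define XPRT_TRANSPORT_RDMA  256U

/* Hypothetical stand-in for the patch's
 * conn->cb_xprt->xpt_class->xcl_ident | XPRT_TRANSPORT_BC expression. */
static unsigned int backchannel_ident(unsigned int forward_ident)
{
	return forward_ident | XPRT_TRANSPORT_BC;
}

int main(void)
{
	/* TCP forward channel: 0x80000006, i.e. XPRT_TRANSPORT_BC_TCP,
	 * for which a backchannel transport is registered. */
	printf("tcp  -> %#x\n", backchannel_ident(XPRT_TRANSPORT_TCP));

	/* RDMA forward channel: 0x80000100.  No "BC RDMA" transport is
	 * registered, so the ident lookup in xprt_create_transport()
	 * matches nothing. */
	printf("rdma -> %#x\n", backchannel_ident(XPRT_TRANSPORT_RDMA));
	return 0;
}

Because no "BC RDMA" transport class is registered here, the composed
ident for an RDMA forward channel matches no entry on xprt_list, so
xprt_create_transport() returns ERR_PTR(-EIO) and
create_backchannel_client() fails cleanly, rather than the server
panicking in the TCP backchannel send path.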