Linux NFS development
From: Chuck Lever <chuck.lever@oracle.com>
To: anna.schumaker@netapp.com
Cc: linux-rdma@vger.kernel.org, linux-nfs@vger.kernel.org
Subject: [PATCH v3 02/16] xprtrdma: Re-arm after missed events
Date: Fri, 16 Oct 2015 09:24:16 -0400	[thread overview]
Message-ID: <20151016132416.6819.91661.stgit@oracle-122.nfsv4bat.org> (raw)
In-Reply-To: <20151016131958.6819.98407.stgit@oracle-122.nfsv4bat.org>

ib_req_notify_cq(IB_CQ_REPORT_MISSED_EVENTS) returns a positive
value if WCs were added to a CQ after the last completion upcall
but before the CQ has been re-armed.

Commit 7f23f6f6e388 ("xprtrmda: Reduce lock contention in
completion handlers") assumed that when ib_req_notify_cq() returned
a positive RC, the CQ had also been successfully re-armed, making
it safe to return control to the provider without losing any
completion signals. That is an invalid assumption.

Change both completion handlers to continue polling while
ib_req_notify_cq() returns a positive value.

Fixes: 7f23f6f6e388 ("xprtrmda: Reduce lock contention in ...")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Devesh Sharma <devesh.sharma@avagotech.com>
Tested-by: Devesh Sharma <devesh.sharma@avagotech.com>
---
 net/sunrpc/xprtrdma/verbs.c |   66 +++++++------------------------------------
 1 file changed, 10 insertions(+), 56 deletions(-)

diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
index 8a477e2..c713909 100644
--- a/net/sunrpc/xprtrdma/verbs.c
+++ b/net/sunrpc/xprtrdma/verbs.c
@@ -179,38 +179,17 @@ rpcrdma_sendcq_poll(struct ib_cq *cq, struct rpcrdma_ep *ep)
 	return 0;
 }
 
-/*
- * Handle send, fast_reg_mr, and local_inv completions.
- *
- * Send events are typically suppressed and thus do not result
- * in an upcall. Occasionally one is signaled, however. This
- * prevents the provider's completion queue from wrapping and
- * losing a completion.
+/* Handle provider send completion upcalls.
  */
 static void
 rpcrdma_sendcq_upcall(struct ib_cq *cq, void *cq_context)
 {
 	struct rpcrdma_ep *ep = (struct rpcrdma_ep *)cq_context;
-	int rc;
-
-	rc = rpcrdma_sendcq_poll(cq, ep);
-	if (rc) {
-		dprintk("RPC:       %s: ib_poll_cq failed: %i\n",
-			__func__, rc);
-		return;
-	}
 
-	rc = ib_req_notify_cq(cq,
-			IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS);
-	if (rc == 0)
-		return;
-	if (rc < 0) {
-		dprintk("RPC:       %s: ib_req_notify_cq failed: %i\n",
-			__func__, rc);
-		return;
-	}
-
-	rpcrdma_sendcq_poll(cq, ep);
+	do {
+		rpcrdma_sendcq_poll(cq, ep);
+	} while (ib_req_notify_cq(cq, IB_CQ_NEXT_COMP |
+				  IB_CQ_REPORT_MISSED_EVENTS) > 0);
 }
 
 static void
@@ -274,42 +253,17 @@ out_schedule:
 	return rc;
 }
 
-/*
- * Handle receive completions.
- *
- * It is reentrant but processes single events in order to maintain
- * ordering of receives to keep server credits.
- *
- * It is the responsibility of the scheduled tasklet to return
- * recv buffers to the pool. NOTE: this affects synchronization of
- * connection shutdown. That is, the structures required for
- * the completion of the reply handler must remain intact until
- * all memory has been reclaimed.
+/* Handle provider receive completion upcalls.
  */
 static void
 rpcrdma_recvcq_upcall(struct ib_cq *cq, void *cq_context)
 {
 	struct rpcrdma_ep *ep = (struct rpcrdma_ep *)cq_context;
-	int rc;
-
-	rc = rpcrdma_recvcq_poll(cq, ep);
-	if (rc) {
-		dprintk("RPC:       %s: ib_poll_cq failed: %i\n",
-			__func__, rc);
-		return;
-	}
 
-	rc = ib_req_notify_cq(cq,
-			IB_CQ_NEXT_COMP | IB_CQ_REPORT_MISSED_EVENTS);
-	if (rc == 0)
-		return;
-	if (rc < 0) {
-		dprintk("RPC:       %s: ib_req_notify_cq failed: %i\n",
-			__func__, rc);
-		return;
-	}
-
-	rpcrdma_recvcq_poll(cq, ep);
+	do {
+		rpcrdma_recvcq_poll(cq, ep);
+	} while (ib_req_notify_cq(cq, IB_CQ_NEXT_COMP |
+				  IB_CQ_REPORT_MISSED_EVENTS) > 0);
 }
 
 static void


Thread overview: 20+ messages
2015-10-16 13:23 [PATCH v3 00/16] NFS/RDMA patches for merging into v4.4 Chuck Lever
2015-10-16 13:24 ` [PATCH v3 01/16] xprtrdma: Enable swap-on-NFS/RDMA Chuck Lever
2015-10-16 13:24 ` Chuck Lever [this message]
2015-10-16 13:24 ` [PATCH v3 03/16] xprtrdma: Prevent loss of completion signals Chuck Lever
2015-10-16 13:24 ` [PATCH v3 04/16] xprtrdma: Refactor reply handler error handling Chuck Lever
2015-10-16 13:24 ` [PATCH v3 05/16] xprtrdma: Replace send and receive arrays Chuck Lever
2015-10-16 13:24 ` [PATCH v3 06/16] xprtrdma: Use workqueue to process RPC/RDMA replies Chuck Lever
2015-10-16 13:24 ` [PATCH v3 07/16] xprtrdma: Remove reply tasklet Chuck Lever
2015-10-16 13:25 ` [PATCH v3 08/16] xprtrdma: Saving IRQs no longer needed for rb_lock Chuck Lever
2015-10-16 13:25 ` [PATCH v3 09/16] SUNRPC: Abstract backchannel operations Chuck Lever
2015-10-16 13:25 ` [PATCH v3 10/16] xprtrdma: Pre-allocate backward rpc_rqst and send/receive buffers Chuck Lever
2015-10-16 13:25 ` [PATCH v3 11/16] xprtrdma: Pre-allocate Work Requests for backchannel Chuck Lever
2015-10-16 13:25 ` [PATCH v3 12/16] xprtrdma: Add support for sending backward direction RPC replies Chuck Lever
2015-10-16 13:25 ` [PATCH v3 13/16] xprtrdma: Handle incoming backward direction RPC calls Chuck Lever
2015-10-16 13:25 ` [PATCH v3 14/16] svcrdma: Add backward direction service for RPC/RDMA transport Chuck Lever
2015-10-16 13:26 ` [PATCH v3 15/16] SUNRPC: Remove the TCP-only restriction in bc_svc_process() Chuck Lever
2015-10-16 13:26 ` [PATCH v3 16/16] NFS: Enable client side NFSv4.1 backchannel to use other transports Chuck Lever
     [not found]   ` <CAHQdGtQ+iUnuxFzZ6kOHaC=EGj1ptB_P6odciBe3MnuSZ4PBiA@mail.gmail.com>
2015-10-23 21:30     ` Trond Myklebust
2015-10-23 21:49     ` Chuck Lever
2015-10-19 18:58 ` [PATCH v3 00/16] NFS/RDMA patches for merging into v4.4 Anna Schumaker
