From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: <leonro@nvidia.com>, <jgg@nvidia.com>,
<linux-nvme@lists.infradead.org>, <linux-rdma@vger.kernel.org>,
<chuck.lever@oracle.com>
Cc: <oren@nvidia.com>, <israelr@nvidia.com>, <maorg@nvidia.com>,
<yishaih@nvidia.com>, <hch@lst.de>, <bvanassche@acm.org>,
<shiraz.saleem@intel.com>, <edumazet@google.com>,
Max Gurtovoy <mgurtovoy@nvidia.com>
Subject: [PATCH v1 0/6] Last WQE Reached event treatment
Date: Tue, 18 Jun 2024 03:10:28 +0300
Message-ID: <20240618001034.22681-1-mgurtovoy@nvidia.com>
Hi Jason/Leon/Sagi,
This series adds support for draining a QP that is associated with an
SRQ (Shared Receive Queue).

A resource leak can occur if the Last WQE Reached event is not handled:
after such a QP moves to the Error state, receive WQEs it has already
fetched from the SRQ can still complete, so per-queue resources must
not be freed until the event indicates that the last of them has
finished.
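For context, the current upstream behavior (paraphrased from
drivers/infiniband/core/verbs.c) simply skips the recv-queue drain for
SRQ-attached QPs, because ib_drain_rq() works by posting a marker recv
WR to the QP's own receive queue and waiting for its flush completion,
and a QP that receives through an SRQ has no receive queue of its own
to post to:

void ib_drain_qp(struct ib_qp *qp)
{
	ib_drain_sq(qp);

	/* The recv side is skipped entirely when an SRQ is attached:
	 * there is nowhere to post a marker WR, so the caller gets no
	 * guarantee that in-flight SRQ recv completions are done.
	 */
	if (!qp->srq)
		ib_drain_rq(qp);
}

The only reliable signal that the HCA has finished consuming SRQ WQEs
on behalf of a QP in the Error state is the Last WQE Reached affiliated
asynchronous event (IB_EVENT_QP_LAST_WQE_REACHED).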
This series is based on an old series I sent back in 2018. This time I
used a different approach and handled the event in the RDMA core, as
was suggested in the discussion on the mailing list.
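In rough terms the idea looks like this (a simplified sketch; the
srq_drain_done completion and how the async event path finds it are
illustrative, not the exact code of patch 1):

#include <rdma/ib_verbs.h>
#include <linux/completion.h>

/* Sketch only: drain the recv side of an SRQ-attached QP by waiting
 * for the Last WQE Reached event instead of a marker WR completion.
 */
static void ib_drain_qp_srq(struct ib_qp *qp,
			    struct completion *srq_drain_done)
{
	struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
	int ret;

	/* Moving the QP to Error makes the HCA flush outstanding WQEs
	 * and, once the last SRQ WQE fetched by this QP has completed,
	 * generate the Last WQE Reached asynchronous event.
	 */
	ret = ib_modify_qp(qp, &attr, IB_QP_STATE);
	if (WARN_ONCE(ret, "failed to drain QP: %d\n", ret))
		return;

	/* Completed (hypothetically) by the core's async event
	 * dispatch on IB_EVENT_QP_LAST_WQE_REACHED for this QP.
	 */
	wait_for_completion(srq_drain_done);
}

With something along these lines inside the core drain helpers, the
per-ULP handling of the event becomes redundant, which is what patches
2-6 remove.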
I've updated the RDMA ULPs accordingly. Most of the changes were
trivial, except for IPoIB, which was handling the Last WQE Reached
event in the ULP itself.
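The ULP-side deletions all follow the same shape: each driver's QP
event handler had a branch along these lines (a generic illustration
with hypothetical names, not the exact code of any one driver), and
with the core waiting for the event itself, the branch and the
signaling it performs go away:

#include <rdma/ib_verbs.h>
#include <linux/completion.h>

struct ulp_queue {			/* hypothetical ULP type */
	struct completion last_wqe_done;
};

/* Generic illustration of the pattern removed from the ULPs. */
static void ulp_qp_event_handler(struct ib_event *event, void *context)
{
	struct ulp_queue *queue = context;

	switch (event->event) {
	case IB_EVENT_QP_LAST_WQE_REACHED:
		/* Tell the teardown path it may now flush and free
		 * recv resources shared with the SRQ.
		 */
		complete(&queue->last_wqe_done);
		break;
	default:
		break;
	}
}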
I've tested this series with NVMf/RDMA on RoCE.
Max Gurtovoy (6):
IB/core: add support for draining Shared receive queues
IB/isert: remove the handling of last WQE reached event
RDMA/srpt: remove the handling of last WQE reached event
nvmet-rdma: remove the handling of last WQE reached event
svcrdma: remove the handling of last WQE reached event
RDMA/IPoIB: remove the handling of last WQE reached event
drivers/infiniband/core/verbs.c | 83 +++++++++++++++++++++++-
drivers/infiniband/ulp/ipoib/ipoib.h | 33 +---------
drivers/infiniband/ulp/ipoib/ipoib_cm.c | 71 ++------------------
drivers/infiniband/ulp/isert/ib_isert.c | 3 -
drivers/infiniband/ulp/srpt/ib_srpt.c | 5 --
drivers/nvme/target/rdma.c | 4 --
include/rdma/ib_verbs.h | 2 +
net/sunrpc/xprtrdma/svc_rdma_transport.c | 1 -
8 files changed, 92 insertions(+), 110 deletions(-)
--
2.18.1
Thread overview: 19+ messages
2024-06-18 0:10 Max Gurtovoy [this message]
2024-06-18 0:10 ` [PATCH 1/6] IB/core: add support for draining Shared receive queues Max Gurtovoy
2024-06-18 16:07 ` Bart Van Assche
2024-06-19 9:14 ` Sagi Grimberg
2024-06-19 11:12 ` Max Gurtovoy
2024-06-19 9:09 ` Sagi Grimberg
2024-06-19 11:16 ` Max Gurtovoy
2024-06-18 0:10 ` [PATCH 2/6] IB/isert: remove the handling of last WQE reached event Max Gurtovoy
2024-06-19 9:16 ` Sagi Grimberg
2024-06-19 15:25 ` Max Gurtovoy
2024-06-18 0:10 ` [PATCH 3/6] RDMA/srpt: " Max Gurtovoy
2024-06-18 16:08 ` Bart Van Assche
2024-06-18 0:10 ` [PATCH 4/6] nvmet-rdma: " Max Gurtovoy
2024-06-18 0:10 ` [PATCH 5/6] svcrdma: " Max Gurtovoy
2024-06-18 15:12 ` Chuck Lever
2024-06-18 0:10 ` [PATCH 6/6] RDMA/IPoIB: " Max Gurtovoy
2024-06-19 9:18 ` Sagi Grimberg
2024-06-19 9:25 ` Leon Romanovsky
2024-06-23 13:03 ` [PATCH v1 0/6] Last WQE Reached event treatment Zhu Yanjun