From: Yuval Shaia <yuval.shaia-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org>
To: Parav Pandit <parav-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
Cc: hch-jcswGhMUV9g@public.gmane.org,
sagi-NQWnxTmZq1alnMjI0IkVqw@public.gmane.org,
linux-nvme-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r@public.gmane.org,
linux-rdma-u79uwXL29TY76Z2rM5mHXA@public.gmane.org,
dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org
Subject: Re: [PATCHv1] nvmet-rdma: Fix missing dma sync to nvme data structures
Date: Mon, 16 Jan 2017 22:31:33 +0200
Message-ID: <20170116203132.GB7384@yuval-lap>
In-Reply-To: <1484597945-31143-1-git-send-email-parav-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
On Mon, Jan 16, 2017 at 02:19:05PM -0600, Parav Pandit wrote:
> This patch performs dma sync operations on nvme_command,
> inline page(s) and nvme_completion.
>
> nvme_command and write cmd inline data is synced
> (a) on receiving of the recv queue completion for cpu access.
> (b) before posting recv wqe back to rdma adapter for device access.
>
> nvme_completion is synced
> (a) on receiving send completion for nvme_completion for cpu access.
> (b) before posting send wqe to rdma adapter for device access.
>
> This patch is generated for git://git.infradead.org/nvme-fabrics.git
> Branch: nvmf-4.10
>
> Signed-off-by: Parav Pandit <parav-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> Reviewed-by: Max Gurtovoy <maxg-VPRAkNaXOzVWk0Htik3J/w@public.gmane.org>
> ---
> drivers/nvme/target/rdma.c | 25 +++++++++++++++++++++++++
> 1 file changed, 25 insertions(+)
>
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index 6c1c368..fe7e257 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -437,6 +437,14 @@ static int nvmet_rdma_post_recv(struct nvmet_rdma_device *ndev,
> struct nvmet_rdma_cmd *cmd)
> {
> struct ib_recv_wr *bad_wr;
> + int i;
> +
> + for (i = 0; i < 2; i++) {
> + if (cmd->sge[i].length)
> + ib_dma_sync_single_for_device(ndev->device,
> + cmd->sge[i].addr, cmd->sge[i].length,
> + DMA_FROM_DEVICE);
> + }
Aren't we trying to get rid of all these ib_dma_* wrappers?
>
> if (ndev->srq)
> return ib_post_srq_recv(ndev->srq, &cmd->wr, &bad_wr);
> @@ -507,6 +515,10 @@ static void nvmet_rdma_send_done(struct ib_cq *cq, struct ib_wc *wc)
> struct nvmet_rdma_rsp *rsp =
> container_of(wc->wr_cqe, struct nvmet_rdma_rsp, send_cqe);
>
> + ib_dma_sync_single_for_cpu(rsp->queue->dev->device,
> + rsp->send_sge.addr, rsp->send_sge.length,
> + DMA_TO_DEVICE);
> +
> nvmet_rdma_release_rsp(rsp);
>
> if (unlikely(wc->status != IB_WC_SUCCESS &&
> @@ -538,6 +550,11 @@ static void nvmet_rdma_queue_response(struct nvmet_req *req)
> first_wr = &rsp->send_wr;
>
> nvmet_rdma_post_recv(rsp->queue->dev, rsp->cmd);
> +
> + ib_dma_sync_single_for_device(rsp->queue->dev->device,
> + rsp->send_sge.addr, rsp->send_sge.length,
> + DMA_TO_DEVICE);
> +
> if (ib_post_send(cm_id->qp, first_wr, &bad_wr)) {
> pr_err("sending cmd response failed\n");
> nvmet_rdma_release_rsp(rsp);
> @@ -692,12 +709,20 @@ static bool nvmet_rdma_execute_command(struct nvmet_rdma_rsp *rsp)
> static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
> struct nvmet_rdma_rsp *cmd)
> {
> + int i;
> u16 status;
>
> cmd->queue = queue;
> cmd->n_rdma = 0;
> cmd->req.port = queue->port;
>
> + for (i = 0; i < 2; i++) {
> + if (cmd->cmd->sge[i].length)
> + ib_dma_sync_single_for_cpu(queue->dev->device,
> + cmd->cmd->sge[i].addr, cmd->cmd->sge[i].length,
> + DMA_FROM_DEVICE);
> + }
> +
> if (!nvmet_req_init(&cmd->req, &queue->nvme_cq,
> &queue->nvme_sq, &nvmet_rdma_ops))
> return;
> --
> 1.8.3.1
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
> the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
Thread overview: 7+ messages
2017-01-16 20:19 [PATCHv1] nvmet-rdma: Fix missing dma sync to nvme data structures Parav Pandit
2017-01-16 20:31 ` Yuval Shaia [this message]
2017-01-16 20:51   ` Parav Pandit
2017-01-16 21:12     ` Sagi Grimberg
2017-01-16 23:17       ` Parav Pandit
2017-01-17  8:07         ` Sagi Grimberg
2017-01-17 16:03           ` Parav Pandit