linux-nvme.lists.infradead.org archive mirror
* [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path
@ 2019-04-08 15:39 Max Gurtovoy
  2019-04-08 15:39 ` [PATCH 2/2] nvmet: rename nvme_completion instances from rsp to cqe Max Gurtovoy
                   ` (2 more replies)
  0 siblings, 3 replies; 9+ messages in thread
From: Max Gurtovoy @ 2019-04-08 15:39 UTC (permalink / raw)


Initialize it during command allocation.

Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Stephen Bates <sbates@raithlin.com>
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/rdma.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index ef893ad..b727521 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -373,6 +373,7 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
 	if (ib_dma_mapping_error(ndev->device, r->send_sge.addr))
 		goto out_free_rsp;
 
+	r->req.p2p_client = &ndev->device->dev;
 	r->send_sge.length = sizeof(*r->req.rsp);
 	r->send_sge.lkey = ndev->pd->local_dma_lkey;
 
@@ -763,8 +764,6 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
 		cmd->send_sge.addr, cmd->send_sge.length,
 		DMA_TO_DEVICE);
 
-	cmd->req.p2p_client = &queue->dev->device->dev;
-
 	if (!nvmet_req_init(&cmd->req, &queue->nvme_cq,
 			&queue->nvme_sq, &nvmet_rdma_ops))
 		return;
-- 
1.8.3.1
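The replies below debate whether this micro-optimization is worth a patch; the underlying pattern — hoisting a store that is invariant for the device's lifetime out of the per-I/O handler and into one-time allocation — can be sketched in self-contained C. All names here (`dummy_device`, `dummy_rsp`, `alloc_rsp`, `handle_command`) are illustrative stand-ins for this sketch, not the driver's actual types:

```c
#include <stdlib.h>

/* Hypothetical miniature of the nvmet-rdma change: a response context
 * whose device back-pointer (the p2p_client analogue) never changes
 * after setup, so it is assigned once at allocation time instead of
 * on every command in the fast path. */

struct dummy_device {
	int id;
};

struct dummy_rsp {
	struct dummy_device *p2p_client;	/* invariant per device */
	size_t len;				/* varies per I/O */
};

/* Slow path: runs once per response context, at queue setup. */
static struct dummy_rsp *alloc_rsp(struct dummy_device *dev)
{
	struct dummy_rsp *r = calloc(1, sizeof(*r));

	if (!r)
		return NULL;
	r->p2p_client = dev;	/* set once, never rewritten */
	return r;
}

/* Fast path: runs per I/O; touches only per-I/O state. */
static void handle_command(struct dummy_rsp *r, size_t len)
{
	r->len = len;	/* no p2p_client assignment here anymore */
}
```

The split mirrors the diff above: `alloc_rsp()` corresponds to `nvmet_rdma_alloc_rsp()` (one-time setup) and `handle_command()` to `nvmet_rdma_handle_command()` (per-I/O), so the invariant store is paid once per context rather than once per command.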


* [PATCH 2/2] nvmet: rename nvme_completion instances from rsp to cqe
  2019-04-08 15:39 [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path Max Gurtovoy
@ 2019-04-08 15:39 ` Max Gurtovoy
  2019-04-08 19:04   ` Chaitanya Kulkarni
  2019-04-09 10:10   ` Christoph Hellwig
  2019-04-08 16:04 ` [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path Logan Gunthorpe
  2019-04-09 10:10 ` Christoph Hellwig
  2 siblings, 2 replies; 9+ messages in thread
From: Max Gurtovoy @ 2019-04-08 15:39 UTC (permalink / raw)


Use NVMe spec naming to improve code readability.

Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
---
 drivers/nvme/target/core.c        | 22 +++++++++++-----------
 drivers/nvme/target/fabrics-cmd.c | 16 ++++++++--------
 drivers/nvme/target/fc.c          |  2 +-
 drivers/nvme/target/loop.c        |  6 +++---
 drivers/nvme/target/nvmet.h       |  4 ++--
 drivers/nvme/target/rdma.c        | 18 +++++++++---------
 drivers/nvme/target/tcp.c         |  8 ++++----
 7 files changed, 38 insertions(+), 38 deletions(-)

diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
index b3e765a..6feef3f 100644
--- a/drivers/nvme/target/core.c
+++ b/drivers/nvme/target/core.c
@@ -644,7 +644,7 @@ static void nvmet_update_sq_head(struct nvmet_req *req)
 		} while (cmpxchg(&req->sq->sqhd, old_sqhd, new_sqhd) !=
 					old_sqhd);
 	}
-	req->rsp->sq_head = cpu_to_le16(req->sq->sqhd & 0x0000FFFF);
+	req->cqe->sq_head = cpu_to_le16(req->sq->sqhd & 0x0000FFFF);
 }
 
 static void nvmet_set_error(struct nvmet_req *req, u16 status)
@@ -653,7 +653,7 @@ static void nvmet_set_error(struct nvmet_req *req, u16 status)
 	struct nvme_error_slot *new_error_slot;
 	unsigned long flags;
 
-	req->rsp->status = cpu_to_le16(status << 1);
+	req->cqe->status = cpu_to_le16(status << 1);
 
 	if (!ctrl || req->error_loc == NVMET_NO_ERROR_LOC)
 		return;
@@ -673,15 +673,15 @@ static void nvmet_set_error(struct nvmet_req *req, u16 status)
 	spin_unlock_irqrestore(&ctrl->error_lock, flags);
 
 	/* set the more bit for this request */
-	req->rsp->status |= cpu_to_le16(1 << 14);
+	req->cqe->status |= cpu_to_le16(1 << 14);
 }
 
 static void __nvmet_req_complete(struct nvmet_req *req, u16 status)
 {
 	if (!req->sq->sqhd_disabled)
 		nvmet_update_sq_head(req);
-	req->rsp->sq_id = cpu_to_le16(req->sq->qid);
-	req->rsp->command_id = req->cmd->common.command_id;
+	req->cqe->sq_id = cpu_to_le16(req->sq->qid);
+	req->cqe->command_id = req->cmd->common.command_id;
 
 	if (unlikely(status))
 		nvmet_set_error(req, status);
@@ -838,8 +838,8 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
 	req->sg = NULL;
 	req->sg_cnt = 0;
 	req->transfer_len = 0;
-	req->rsp->status = 0;
-	req->rsp->sq_head = 0;
+	req->cqe->status = 0;
+	req->cqe->sq_head = 0;
 	req->ns = NULL;
 	req->error_loc = NVMET_NO_ERROR_LOC;
 	req->error_slba = 0;
@@ -1066,7 +1066,7 @@ u16 nvmet_ctrl_find_get(const char *subsysnqn, const char *hostnqn, u16 cntlid,
 	if (!subsys) {
 		pr_warn("connect request for invalid subsystem %s!\n",
 			subsysnqn);
-		req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
+		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
 		return NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
 	}
 
@@ -1087,7 +1087,7 @@ u16 nvmet_ctrl_find_get(const char *subsysnqn, const char *hostnqn, u16 cntlid,
 
 	pr_warn("could not find controller %d for subsys %s / host %s\n",
 		cntlid, subsysnqn, hostnqn);
-	req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
+	req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
 	status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
 
 out:
@@ -1185,7 +1185,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
 	if (!subsys) {
 		pr_warn("connect request for invalid subsystem %s!\n",
 			subsysnqn);
-		req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
+		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
 		goto out;
 	}
 
@@ -1194,7 +1194,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
 	if (!nvmet_host_allowed(subsys, hostnqn)) {
 		pr_info("connect by host %s for subsystem %s not allowed\n",
 			hostnqn, subsysnqn);
-		req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(hostnqn);
+		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(hostnqn);
 		up_read(&nvmet_config_sem);
 		status = NVME_SC_CONNECT_INVALID_HOST | NVME_SC_DNR;
 		goto out_put_subsystem;
diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
index 3a76ebc..3b9f79a 100644
--- a/drivers/nvme/target/fabrics-cmd.c
+++ b/drivers/nvme/target/fabrics-cmd.c
@@ -72,7 +72,7 @@ static void nvmet_execute_prop_get(struct nvmet_req *req)
 			offsetof(struct nvmf_property_get_command, attrib);
 	}
 
-	req->rsp->result.u64 = cpu_to_le64(val);
+	req->cqe->result.u64 = cpu_to_le64(val);
 	nvmet_req_complete(req, status);
 }
 
@@ -124,7 +124,7 @@ static u16 nvmet_install_queue(struct nvmet_ctrl *ctrl, struct nvmet_req *req)
 
 	if (c->cattr & NVME_CONNECT_DISABLE_SQFLOW) {
 		req->sq->sqhd_disabled = true;
-		req->rsp->sq_head = cpu_to_le16(0xffff);
+		req->cqe->sq_head = cpu_to_le16(0xffff);
 	}
 
 	if (ctrl->ops->install_queue) {
@@ -158,7 +158,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
 		goto out;
 
 	/* zero out initial completion result, assign values as needed */
-	req->rsp->result.u32 = 0;
+	req->cqe->result.u32 = 0;
 
 	if (c->recfmt != 0) {
 		pr_warn("invalid connect version (%d).\n",
@@ -172,7 +172,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
 		pr_warn("connect attempt for invalid controller ID %#x\n",
 			d->cntlid);
 		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
-		req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
+		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
 		goto out;
 	}
 
@@ -195,7 +195,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
 
 	pr_info("creating controller %d for subsystem %s for NQN %s.\n",
 		ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn);
-	req->rsp->result.u16 = cpu_to_le16(ctrl->cntlid);
+	req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
 
 out:
 	kfree(d);
@@ -222,7 +222,7 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
 		goto out;
 
 	/* zero out initial completion result, assign values as needed */
-	req->rsp->result.u32 = 0;
+	req->cqe->result.u32 = 0;
 
 	if (c->recfmt != 0) {
 		pr_warn("invalid connect version (%d).\n",
@@ -240,14 +240,14 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
 	if (unlikely(qid > ctrl->subsys->max_qid)) {
 		pr_warn("invalid queue id (%d)\n", qid);
 		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
-		req->rsp->result.u32 = IPO_IATTR_CONNECT_SQE(qid);
+		req->cqe->result.u32 = IPO_IATTR_CONNECT_SQE(qid);
 		goto out_ctrl_put;
 	}
 
 	status = nvmet_install_queue(ctrl, req);
 	if (status) {
 		/* pass back cntlid that had the issue of installing queue */
-		req->rsp->result.u16 = cpu_to_le16(ctrl->cntlid);
+		req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
 		goto out_ctrl_put;
 	}
 
diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
index 98b7b1f..6733673 100644
--- a/drivers/nvme/target/fc.c
+++ b/drivers/nvme/target/fc.c
@@ -2187,7 +2187,7 @@ enum {
 	}
 
 	fod->req.cmd = &fod->cmdiubuf.sqe;
-	fod->req.rsp = &fod->rspiubuf.cqe;
+	fod->req.cqe = &fod->rspiubuf.cqe;
 	fod->req.port = tgtport->pe->port;
 
 	/* clear any response payload */
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index b9f623a..a3ae491 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -18,7 +18,7 @@
 struct nvme_loop_iod {
 	struct nvme_request	nvme_req;
 	struct nvme_command	cmd;
-	struct nvme_completion	rsp;
+	struct nvme_completion	cqe;
 	struct nvmet_req	req;
 	struct nvme_loop_queue	*queue;
 	struct work_struct	work;
@@ -94,7 +94,7 @@ static void nvme_loop_queue_response(struct nvmet_req *req)
 {
 	struct nvme_loop_queue *queue =
 		container_of(req->sq, struct nvme_loop_queue, nvme_sq);
-	struct nvme_completion *cqe = req->rsp;
+	struct nvme_completion *cqe = req->cqe;
 
 	/*
 	 * AEN requests are special as they don't time out and can
@@ -207,7 +207,7 @@ static int nvme_loop_init_iod(struct nvme_loop_ctrl *ctrl,
 		struct nvme_loop_iod *iod, unsigned int queue_idx)
 {
 	iod->req.cmd = &iod->cmd;
-	iod->req.rsp = &iod->rsp;
+	iod->req.cqe = &iod->cqe;
 	iod->queue = &ctrl->queues[queue_idx];
 	INIT_WORK(&iod->work, nvme_loop_execute_work);
 	return 0;
diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
index 51e49ef..53d5a44 100644
--- a/drivers/nvme/target/nvmet.h
+++ b/drivers/nvme/target/nvmet.h
@@ -284,7 +284,7 @@ struct nvmet_fabrics_ops {
 
 struct nvmet_req {
 	struct nvme_command	*cmd;
-	struct nvme_completion	*rsp;
+	struct nvme_completion	*cqe;
 	struct nvmet_sq		*sq;
 	struct nvmet_cq		*cq;
 	struct nvmet_ns		*ns;
@@ -322,7 +322,7 @@ struct nvmet_req {
 
 static inline void nvmet_set_result(struct nvmet_req *req, u32 result)
 {
-	req->rsp->result.u32 = cpu_to_le32(result);
+	req->cqe->result.u32 = cpu_to_le32(result);
 }
 
 /*
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index b727521..36d906a 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -160,7 +160,7 @@ static inline bool nvmet_rdma_need_data_out(struct nvmet_rdma_rsp *rsp)
 {
 	return !nvme_is_write(rsp->req.cmd) &&
 		rsp->req.transfer_len &&
-		!rsp->req.rsp->status &&
+		!rsp->req.cqe->status &&
 		!(rsp->flags & NVMET_RDMA_REQ_INLINE_DATA);
 }
 
@@ -364,17 +364,17 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
 		struct nvmet_rdma_rsp *r)
 {
 	/* NVMe CQE / RDMA SEND */
-	r->req.rsp = kmalloc(sizeof(*r->req.rsp), GFP_KERNEL);
-	if (!r->req.rsp)
+	r->req.cqe = kmalloc(sizeof(*r->req.cqe), GFP_KERNEL);
+	if (!r->req.cqe)
 		goto out;
 
-	r->send_sge.addr = ib_dma_map_single(ndev->device, r->req.rsp,
-			sizeof(*r->req.rsp), DMA_TO_DEVICE);
+	r->send_sge.addr = ib_dma_map_single(ndev->device, r->req.cqe,
+			sizeof(*r->req.cqe), DMA_TO_DEVICE);
 	if (ib_dma_mapping_error(ndev->device, r->send_sge.addr))
 		goto out_free_rsp;
 
 	r->req.p2p_client = &ndev->device->dev;
-	r->send_sge.length = sizeof(*r->req.rsp);
+	r->send_sge.length = sizeof(*r->req.cqe);
 	r->send_sge.lkey = ndev->pd->local_dma_lkey;
 
 	r->send_cqe.done = nvmet_rdma_send_done;
@@ -389,7 +389,7 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
 	return 0;
 
 out_free_rsp:
-	kfree(r->req.rsp);
+	kfree(r->req.cqe);
 out:
 	return -ENOMEM;
 }
@@ -398,8 +398,8 @@ static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev,
 		struct nvmet_rdma_rsp *r)
 {
 	ib_dma_unmap_single(ndev->device, r->send_sge.addr,
-				sizeof(*r->req.rsp), DMA_TO_DEVICE);
-	kfree(r->req.rsp);
+				sizeof(*r->req.cqe), DMA_TO_DEVICE);
+	kfree(r->req.cqe);
 }
 
 static int
diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
index ad0df78..0d7df13 100644
--- a/drivers/nvme/target/tcp.c
+++ b/drivers/nvme/target/tcp.c
@@ -161,14 +161,14 @@ static inline bool nvmet_tcp_has_data_in(struct nvmet_tcp_cmd *cmd)
 
 static inline bool nvmet_tcp_need_data_in(struct nvmet_tcp_cmd *cmd)
 {
-	return nvmet_tcp_has_data_in(cmd) && !cmd->req.rsp->status;
+	return nvmet_tcp_has_data_in(cmd) && !cmd->req.cqe->status;
 }
 
 static inline bool nvmet_tcp_need_data_out(struct nvmet_tcp_cmd *cmd)
 {
 	return !nvme_is_write(cmd->req.cmd) &&
 		cmd->req.transfer_len > 0 &&
-		!cmd->req.rsp->status;
+		!cmd->req.cqe->status;
 }
 
 static inline bool nvmet_tcp_has_inline_data(struct nvmet_tcp_cmd *cmd)
@@ -377,7 +377,7 @@ static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd)
 	pdu->hdr.plen =
 		cpu_to_le32(pdu->hdr.hlen + hdgst +
 				cmd->req.transfer_len + ddgst);
-	pdu->command_id = cmd->req.rsp->command_id;
+	pdu->command_id = cmd->req.cqe->command_id;
 	pdu->data_length = cpu_to_le32(cmd->req.transfer_len);
 	pdu->data_offset = cpu_to_le32(cmd->wbytes_done);
 
@@ -1206,7 +1206,7 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue,
 			sizeof(*c->rsp_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
 	if (!c->rsp_pdu)
 		goto out_free_cmd;
-	c->req.rsp = &c->rsp_pdu->cqe;
+	c->req.cqe = &c->rsp_pdu->cqe;
 
 	c->data_pdu = page_frag_alloc(&queue->pf_cache,
 			sizeof(*c->data_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
-- 
1.8.3.1
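As a rough illustration of why the rename reads better: in NVMe terminology the target answers each Submission Queue Entry (SQE) with a Completion Queue Entry (CQE), and `struct nvme_completion` is exactly that CQE. A minimal stand-alone mock (the `mock_*` names are invented for this sketch, not kernel types):

```c
#include <stdint.h>

/* Stand-in for struct nvme_completion: the CQE the target builds. */
struct mock_completion {
	uint16_t status;
	uint16_t command_id;
};

/* Stand-in for struct nvmet_req after the rename: the member that
 * points at the completion entry is called "cqe", not "rsp". */
struct mock_req {
	struct mock_completion *cqe;
};

static void mock_set_status(struct mock_req *req, uint16_t status)
{
	/* Status code occupies bits 15:1 of the CQE status field;
	 * bit 0 is the phase tag, hence the shift (as in
	 * nvmet_set_error() above). */
	req->cqe->status = (uint16_t)(status << 1);
}
```

With the member named `cqe`, an expression like `req->cqe->status` unambiguously means "the status field of the completion entry", whereas the old `req->rsp->status` could be misread as some transport-level response state.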


* [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path
  2019-04-08 15:39 [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path Max Gurtovoy
  2019-04-08 15:39 ` [PATCH 2/2] nvmet: rename nvme_completion instances from rsp to cqe Max Gurtovoy
@ 2019-04-08 16:04 ` Logan Gunthorpe
  2019-04-08 16:26   ` Max Gurtovoy
  2019-04-09 10:10 ` Christoph Hellwig
  2 siblings, 1 reply; 9+ messages in thread
From: Logan Gunthorpe @ 2019-04-08 16:04 UTC (permalink / raw)




On 2019-04-08 9:39 a.m., Max Gurtovoy wrote:
> Initialize it during command allocation.


It would be nice to hear some justification for this change in the
commit message, but the change itself appears to be fine to me:

Reviewed-by: Logan Gunthorpe <logang@deltatee.com>

> Cc: Logan Gunthorpe <logang@deltatee.com>
> Cc: Stephen Bates <sbates@raithlin.com>
> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
> ---
>  drivers/nvme/target/rdma.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
> index ef893ad..b727521 100644
> --- a/drivers/nvme/target/rdma.c
> +++ b/drivers/nvme/target/rdma.c
> @@ -373,6 +373,7 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
>  	if (ib_dma_mapping_error(ndev->device, r->send_sge.addr))
>  		goto out_free_rsp;
>  
> +	r->req.p2p_client = &ndev->device->dev;
>  	r->send_sge.length = sizeof(*r->req.rsp);
>  	r->send_sge.lkey = ndev->pd->local_dma_lkey;
>  
> @@ -763,8 +764,6 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
>  		cmd->send_sge.addr, cmd->send_sge.length,
>  		DMA_TO_DEVICE);
>  
> -	cmd->req.p2p_client = &queue->dev->device->dev;
> -
>  	if (!nvmet_req_init(&cmd->req, &queue->nvme_cq,
>  			&queue->nvme_sq, &nvmet_rdma_ops))
>  		return;
> 


* [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path
  2019-04-08 16:04 ` [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path Logan Gunthorpe
@ 2019-04-08 16:26   ` Max Gurtovoy
  2019-04-08 16:35     ` Logan Gunthorpe
  0 siblings, 1 reply; 9+ messages in thread
From: Max Gurtovoy @ 2019-04-08 16:26 UTC (permalink / raw)



On 4/8/2019 7:04 PM, Logan Gunthorpe wrote:
>
> On 2019-04-08 9:39 a.m., Max Gurtovoy wrote:
>> Initialize it during command allocation.
>
> It would be nice to hear some justification for this change in the
> commit message, but the change itself appears to be fine to me:

well, it's written in the subject and it's a pretty small change so there
is not much to say about it.

Just set this ptr once instead of every IO.

>
> Reviewed-by: Logan Gunthorpe <logang@deltatee.com>
>
>> Cc: Logan Gunthorpe <logang@deltatee.com>
>> Cc: Stephen Bates <sbates@raithlin.com>
>> Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
>> ---
>>   drivers/nvme/target/rdma.c | 3 +--
>>   1 file changed, 1 insertion(+), 2 deletions(-)
>>
>> diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
>> index ef893ad..b727521 100644
>> --- a/drivers/nvme/target/rdma.c
>> +++ b/drivers/nvme/target/rdma.c
>> @@ -373,6 +373,7 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
>>   	if (ib_dma_mapping_error(ndev->device, r->send_sge.addr))
>>   		goto out_free_rsp;
>>   
>> +	r->req.p2p_client = &ndev->device->dev;
>>   	r->send_sge.length = sizeof(*r->req.rsp);
>>   	r->send_sge.lkey = ndev->pd->local_dma_lkey;
>>   
>> @@ -763,8 +764,6 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
>>   		cmd->send_sge.addr, cmd->send_sge.length,
>>   		DMA_TO_DEVICE);
>>   
>> -	cmd->req.p2p_client = &queue->dev->device->dev;
>> -
>>   	if (!nvmet_req_init(&cmd->req, &queue->nvme_cq,
>>   			&queue->nvme_sq, &nvmet_rdma_ops))
>>   		return;
>>


* [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path
  2019-04-08 16:26   ` Max Gurtovoy
@ 2019-04-08 16:35     ` Logan Gunthorpe
  2019-04-08 16:59       ` Max Gurtovoy
  0 siblings, 1 reply; 9+ messages in thread
From: Logan Gunthorpe @ 2019-04-08 16:35 UTC (permalink / raw)




On 2019-04-08 10:26 a.m., Max Gurtovoy wrote:
> 
> On 4/8/2019 7:04 PM, Logan Gunthorpe wrote:
>>
>> On 2019-04-08 9:39 a.m., Max Gurtovoy wrote:
>>> Initialize it during command allocation.
>>
>> It would be nice to hear some justification for this change in the
>> commit message, but the change itself appears to be fine to me:
> 
> well, it's written in the subject and it's a pretty small change so there
> is not much to say about it.
> 
> Just set this ptr once instead of every IO.

But is this just an optimization-without-measuring from code review? Or
did I actually cause a measurable performance regression? What actually
prompted the change?

Just to be clear: I'm fine with the change and am happy it's being done;
I'd just prefer to see more detail in the commit message.

Thanks,

Logan


* [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path
  2019-04-08 16:35     ` Logan Gunthorpe
@ 2019-04-08 16:59       ` Max Gurtovoy
  0 siblings, 0 replies; 9+ messages in thread
From: Max Gurtovoy @ 2019-04-08 16:59 UTC (permalink / raw)



On 4/8/2019 7:35 PM, Logan Gunthorpe wrote:
>
> On 2019-04-08 10:26 a.m., Max Gurtovoy wrote:
>> On 4/8/2019 7:04 PM, Logan Gunthorpe wrote:
>>> On 2019-04-08 9:39 a.m., Max Gurtovoy wrote:
>>>> Initialize it during command allocation.
>>> It would be nice to hear some justification for this change in the
>>> commit message, but the change itself appears to be fine to me:
>> well, it's written in the subject and it's a pretty small change so there
>> is not much to say about it.
>>
>> Just set this ptr once instead of every IO.
> But is this just an optimization-without-measuring from code review? Or
> did I actually cause a measurable performance regression? What actually
> prompted the change?

The first one :)

I don't think that one assignment can cause a regression, but I also
think that we shouldn't do redundant operations on the fast path.

These can accumulate in the future...


>
> Just to be clear: I'm fine with the change and am happy it's being done;
> I'd just prefer to see more detail in the commit message.

I guess it's possible.

Let's get a review for the second patch and I'll send V2.


>
> Thanks,
>
> Logan


* [PATCH 2/2] nvmet: rename nvme_completion instances from rsp to cqe
  2019-04-08 15:39 ` [PATCH 2/2] nvmet: rename nvme_completion instances from rsp to cqe Max Gurtovoy
@ 2019-04-08 19:04   ` Chaitanya Kulkarni
  2019-04-09 10:10   ` Christoph Hellwig
  1 sibling, 0 replies; 9+ messages in thread
From: Chaitanya Kulkarni @ 2019-04-08 19:04 UTC (permalink / raw)


Looks good.

Reviewed-by: Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>

On 4/8/19, 8:41 AM, "Linux-nvme on behalf of Max Gurtovoy" <linux-nvme-bounces@lists.infradead.org on behalf of maxg@mellanox.com> wrote:

    Use NVMe spec naming to improve code readability.
    
    Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
    ---
     drivers/nvme/target/core.c        | 22 +++++++++++-----------
     drivers/nvme/target/fabrics-cmd.c | 16 ++++++++--------
     drivers/nvme/target/fc.c          |  2 +-
     drivers/nvme/target/loop.c        |  6 +++---
     drivers/nvme/target/nvmet.h       |  4 ++--
     drivers/nvme/target/rdma.c        | 18 +++++++++---------
     drivers/nvme/target/tcp.c         |  8 ++++----
     7 files changed, 38 insertions(+), 38 deletions(-)
    
    diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
    index b3e765a..6feef3f 100644
    --- a/drivers/nvme/target/core.c
    +++ b/drivers/nvme/target/core.c
    @@ -644,7 +644,7 @@ static void nvmet_update_sq_head(struct nvmet_req *req)
     		} while (cmpxchg(&req->sq->sqhd, old_sqhd, new_sqhd) !=
     					old_sqhd);
     	}
    -	req->rsp->sq_head = cpu_to_le16(req->sq->sqhd & 0x0000FFFF);
    +	req->cqe->sq_head = cpu_to_le16(req->sq->sqhd & 0x0000FFFF);
     }
     
     static void nvmet_set_error(struct nvmet_req *req, u16 status)
    @@ -653,7 +653,7 @@ static void nvmet_set_error(struct nvmet_req *req, u16 status)
     	struct nvme_error_slot *new_error_slot;
     	unsigned long flags;
     
    -	req->rsp->status = cpu_to_le16(status << 1);
    +	req->cqe->status = cpu_to_le16(status << 1);
     
     	if (!ctrl || req->error_loc == NVMET_NO_ERROR_LOC)
     		return;
    @@ -673,15 +673,15 @@ static void nvmet_set_error(struct nvmet_req *req, u16 status)
     	spin_unlock_irqrestore(&ctrl->error_lock, flags);
     
     	/* set the more bit for this request */
    -	req->rsp->status |= cpu_to_le16(1 << 14);
    +	req->cqe->status |= cpu_to_le16(1 << 14);
     }
     
     static void __nvmet_req_complete(struct nvmet_req *req, u16 status)
     {
     	if (!req->sq->sqhd_disabled)
     		nvmet_update_sq_head(req);
    -	req->rsp->sq_id = cpu_to_le16(req->sq->qid);
    -	req->rsp->command_id = req->cmd->common.command_id;
    +	req->cqe->sq_id = cpu_to_le16(req->sq->qid);
    +	req->cqe->command_id = req->cmd->common.command_id;
     
     	if (unlikely(status))
     		nvmet_set_error(req, status);
    @@ -838,8 +838,8 @@ bool nvmet_req_init(struct nvmet_req *req, struct nvmet_cq *cq,
     	req->sg = NULL;
     	req->sg_cnt = 0;
     	req->transfer_len = 0;
    -	req->rsp->status = 0;
    -	req->rsp->sq_head = 0;
    +	req->cqe->status = 0;
    +	req->cqe->sq_head = 0;
     	req->ns = NULL;
     	req->error_loc = NVMET_NO_ERROR_LOC;
     	req->error_slba = 0;
    @@ -1066,7 +1066,7 @@ u16 nvmet_ctrl_find_get(const char *subsysnqn, const char *hostnqn, u16 cntlid,
     	if (!subsys) {
     		pr_warn("connect request for invalid subsystem %s!\n",
     			subsysnqn);
    -		req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
    +		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
     		return NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
     	}
     
    @@ -1087,7 +1087,7 @@ u16 nvmet_ctrl_find_get(const char *subsysnqn, const char *hostnqn, u16 cntlid,
     
     	pr_warn("could not find controller %d for subsys %s / host %s\n",
     		cntlid, subsysnqn, hostnqn);
    -	req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
    +	req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
     	status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
     
     out:
    @@ -1185,7 +1185,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
     	if (!subsys) {
     		pr_warn("connect request for invalid subsystem %s!\n",
     			subsysnqn);
    -		req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
    +		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(subsysnqn);
     		goto out;
     	}
     
    @@ -1194,7 +1194,7 @@ u16 nvmet_alloc_ctrl(const char *subsysnqn, const char *hostnqn,
     	if (!nvmet_host_allowed(subsys, hostnqn)) {
     		pr_info("connect by host %s for subsystem %s not allowed\n",
     			hostnqn, subsysnqn);
    -		req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(hostnqn);
    +		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(hostnqn);
     		up_read(&nvmet_config_sem);
     		status = NVME_SC_CONNECT_INVALID_HOST | NVME_SC_DNR;
     		goto out_put_subsystem;
    diff --git a/drivers/nvme/target/fabrics-cmd.c b/drivers/nvme/target/fabrics-cmd.c
    index 3a76ebc..3b9f79a 100644
    --- a/drivers/nvme/target/fabrics-cmd.c
    +++ b/drivers/nvme/target/fabrics-cmd.c
    @@ -72,7 +72,7 @@ static void nvmet_execute_prop_get(struct nvmet_req *req)
     			offsetof(struct nvmf_property_get_command, attrib);
     	}
     
    -	req->rsp->result.u64 = cpu_to_le64(val);
    +	req->cqe->result.u64 = cpu_to_le64(val);
     	nvmet_req_complete(req, status);
     }
     
    @@ -124,7 +124,7 @@ static u16 nvmet_install_queue(struct nvmet_ctrl *ctrl, struct nvmet_req *req)
     
     	if (c->cattr & NVME_CONNECT_DISABLE_SQFLOW) {
     		req->sq->sqhd_disabled = true;
    -		req->rsp->sq_head = cpu_to_le16(0xffff);
    +		req->cqe->sq_head = cpu_to_le16(0xffff);
     	}
     
     	if (ctrl->ops->install_queue) {
    @@ -158,7 +158,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
     		goto out;
     
     	/* zero out initial completion result, assign values as needed */
    -	req->rsp->result.u32 = 0;
    +	req->cqe->result.u32 = 0;
     
     	if (c->recfmt != 0) {
     		pr_warn("invalid connect version (%d).\n",
    @@ -172,7 +172,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
     		pr_warn("connect attempt for invalid controller ID %#x\n",
     			d->cntlid);
     		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
    -		req->rsp->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
    +		req->cqe->result.u32 = IPO_IATTR_CONNECT_DATA(cntlid);
     		goto out;
     	}
     
    @@ -195,7 +195,7 @@ static void nvmet_execute_admin_connect(struct nvmet_req *req)
     
     	pr_info("creating controller %d for subsystem %s for NQN %s.\n",
     		ctrl->cntlid, ctrl->subsys->subsysnqn, ctrl->hostnqn);
    -	req->rsp->result.u16 = cpu_to_le16(ctrl->cntlid);
    +	req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
     
     out:
     	kfree(d);
    @@ -222,7 +222,7 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
     		goto out;
     
     	/* zero out initial completion result, assign values as needed */
    -	req->rsp->result.u32 = 0;
    +	req->cqe->result.u32 = 0;
     
     	if (c->recfmt != 0) {
     		pr_warn("invalid connect version (%d).\n",
    @@ -240,14 +240,14 @@ static void nvmet_execute_io_connect(struct nvmet_req *req)
     	if (unlikely(qid > ctrl->subsys->max_qid)) {
     		pr_warn("invalid queue id (%d)\n", qid);
     		status = NVME_SC_CONNECT_INVALID_PARAM | NVME_SC_DNR;
    -		req->rsp->result.u32 = IPO_IATTR_CONNECT_SQE(qid);
    +		req->cqe->result.u32 = IPO_IATTR_CONNECT_SQE(qid);
     		goto out_ctrl_put;
     	}
     
     	status = nvmet_install_queue(ctrl, req);
     	if (status) {
     		/* pass back cntlid that had the issue of installing queue */
    -		req->rsp->result.u16 = cpu_to_le16(ctrl->cntlid);
    +		req->cqe->result.u16 = cpu_to_le16(ctrl->cntlid);
     		goto out_ctrl_put;
     	}
     
    diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
    index 98b7b1f..6733673 100644
    --- a/drivers/nvme/target/fc.c
    +++ b/drivers/nvme/target/fc.c
    @@ -2187,7 +2187,7 @@ enum {
     	}
     
     	fod->req.cmd = &fod->cmdiubuf.sqe;
    -	fod->req.rsp = &fod->rspiubuf.cqe;
    +	fod->req.cqe = &fod->rspiubuf.cqe;
     	fod->req.port = tgtport->pe->port;
     
     	/* clear any response payload */
    diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
    index b9f623a..a3ae491 100644
    --- a/drivers/nvme/target/loop.c
    +++ b/drivers/nvme/target/loop.c
    @@ -18,7 +18,7 @@
     struct nvme_loop_iod {
     	struct nvme_request	nvme_req;
     	struct nvme_command	cmd;
    -	struct nvme_completion	rsp;
    +	struct nvme_completion	cqe;
     	struct nvmet_req	req;
     	struct nvme_loop_queue	*queue;
     	struct work_struct	work;
    @@ -94,7 +94,7 @@ static void nvme_loop_queue_response(struct nvmet_req *req)
     {
     	struct nvme_loop_queue *queue =
     		container_of(req->sq, struct nvme_loop_queue, nvme_sq);
    -	struct nvme_completion *cqe = req->rsp;
    +	struct nvme_completion *cqe = req->cqe;
     
     	/*
     	 * AEN requests are special as they don't time out and can
    @@ -207,7 +207,7 @@ static int nvme_loop_init_iod(struct nvme_loop_ctrl *ctrl,
     		struct nvme_loop_iod *iod, unsigned int queue_idx)
     {
     	iod->req.cmd = &iod->cmd;
    -	iod->req.rsp = &iod->rsp;
    +	iod->req.cqe = &iod->cqe;
     	iod->queue = &ctrl->queues[queue_idx];
     	INIT_WORK(&iod->work, nvme_loop_execute_work);
     	return 0;
    diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h
    index 51e49ef..53d5a44 100644
    --- a/drivers/nvme/target/nvmet.h
    +++ b/drivers/nvme/target/nvmet.h
    @@ -284,7 +284,7 @@ struct nvmet_fabrics_ops {
     
     struct nvmet_req {
     	struct nvme_command	*cmd;
    -	struct nvme_completion	*rsp;
    +	struct nvme_completion	*cqe;
     	struct nvmet_sq		*sq;
     	struct nvmet_cq		*cq;
     	struct nvmet_ns		*ns;
    @@ -322,7 +322,7 @@ struct nvmet_req {
     
     static inline void nvmet_set_result(struct nvmet_req *req, u32 result)
     {
    -	req->rsp->result.u32 = cpu_to_le32(result);
    +	req->cqe->result.u32 = cpu_to_le32(result);
     }
     
     /*
    diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
    index b727521..36d906a 100644
    --- a/drivers/nvme/target/rdma.c
    +++ b/drivers/nvme/target/rdma.c
    @@ -160,7 +160,7 @@ static inline bool nvmet_rdma_need_data_out(struct nvmet_rdma_rsp *rsp)
     {
     	return !nvme_is_write(rsp->req.cmd) &&
     		rsp->req.transfer_len &&
    -		!rsp->req.rsp->status &&
    +		!rsp->req.cqe->status &&
     		!(rsp->flags & NVMET_RDMA_REQ_INLINE_DATA);
     }
     
    @@ -364,17 +364,17 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
     		struct nvmet_rdma_rsp *r)
     {
     	/* NVMe CQE / RDMA SEND */
    -	r->req.rsp = kmalloc(sizeof(*r->req.rsp), GFP_KERNEL);
    -	if (!r->req.rsp)
    +	r->req.cqe = kmalloc(sizeof(*r->req.cqe), GFP_KERNEL);
    +	if (!r->req.cqe)
     		goto out;
     
    -	r->send_sge.addr = ib_dma_map_single(ndev->device, r->req.rsp,
    -			sizeof(*r->req.rsp), DMA_TO_DEVICE);
    +	r->send_sge.addr = ib_dma_map_single(ndev->device, r->req.cqe,
    +			sizeof(*r->req.cqe), DMA_TO_DEVICE);
     	if (ib_dma_mapping_error(ndev->device, r->send_sge.addr))
     		goto out_free_rsp;
     
     	r->req.p2p_client = &ndev->device->dev;
    -	r->send_sge.length = sizeof(*r->req.rsp);
    +	r->send_sge.length = sizeof(*r->req.cqe);
     	r->send_sge.lkey = ndev->pd->local_dma_lkey;
     
     	r->send_cqe.done = nvmet_rdma_send_done;
    @@ -389,7 +389,7 @@ static int nvmet_rdma_alloc_rsp(struct nvmet_rdma_device *ndev,
     	return 0;
     
     out_free_rsp:
    -	kfree(r->req.rsp);
    +	kfree(r->req.cqe);
     out:
     	return -ENOMEM;
     }
    @@ -398,8 +398,8 @@ static void nvmet_rdma_free_rsp(struct nvmet_rdma_device *ndev,
     		struct nvmet_rdma_rsp *r)
     {
     	ib_dma_unmap_single(ndev->device, r->send_sge.addr,
    -				sizeof(*r->req.rsp), DMA_TO_DEVICE);
    -	kfree(r->req.rsp);
    +				sizeof(*r->req.cqe), DMA_TO_DEVICE);
    +	kfree(r->req.cqe);
     }
     
     static int
    diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
    index ad0df78..0d7df13 100644
    --- a/drivers/nvme/target/tcp.c
    +++ b/drivers/nvme/target/tcp.c
    @@ -161,14 +161,14 @@ static inline bool nvmet_tcp_has_data_in(struct nvmet_tcp_cmd *cmd)
     
     static inline bool nvmet_tcp_need_data_in(struct nvmet_tcp_cmd *cmd)
     {
    -	return nvmet_tcp_has_data_in(cmd) && !cmd->req.rsp->status;
    +	return nvmet_tcp_has_data_in(cmd) && !cmd->req.cqe->status;
     }
     
     static inline bool nvmet_tcp_need_data_out(struct nvmet_tcp_cmd *cmd)
     {
     	return !nvme_is_write(cmd->req.cmd) &&
     		cmd->req.transfer_len > 0 &&
    -		!cmd->req.rsp->status;
    +		!cmd->req.cqe->status;
     }
     
     static inline bool nvmet_tcp_has_inline_data(struct nvmet_tcp_cmd *cmd)
    @@ -377,7 +377,7 @@ static void nvmet_setup_c2h_data_pdu(struct nvmet_tcp_cmd *cmd)
     	pdu->hdr.plen =
     		cpu_to_le32(pdu->hdr.hlen + hdgst +
     				cmd->req.transfer_len + ddgst);
    -	pdu->command_id = cmd->req.rsp->command_id;
    +	pdu->command_id = cmd->req.cqe->command_id;
     	pdu->data_length = cpu_to_le32(cmd->req.transfer_len);
     	pdu->data_offset = cpu_to_le32(cmd->wbytes_done);
     
    @@ -1206,7 +1206,7 @@ static int nvmet_tcp_alloc_cmd(struct nvmet_tcp_queue *queue,
     			sizeof(*c->rsp_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
     	if (!c->rsp_pdu)
     		goto out_free_cmd;
    -	c->req.rsp = &c->rsp_pdu->cqe;
    +	c->req.cqe = &c->rsp_pdu->cqe;
     
     	c->data_pdu = page_frag_alloc(&queue->pf_cache,
     			sizeof(*c->data_pdu) + hdgst, GFP_KERNEL | __GFP_ZERO);
    -- 
    1.8.3.1
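For reference, the error-unwind shape of nvmet_rdma_alloc_rsp() that both patches touch can be sketched in plain userspace C. Everything below (struct fake_rsp, dma_map_ok, force_map_fail) is a hypothetical stand-in invented for illustration; these are not kernel or RDMA verbs APIs:

```c
#include <stdlib.h>
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-ins for illustration only -- not kernel APIs. */
struct fake_cqe { int status; };
struct fake_rsp { struct fake_cqe *cqe; };

/* Toggle to simulate ib_dma_map_single() failing. */
static int force_map_fail;

/* Stand-in for the DMA-mapping step; returns nonzero on success. */
static int dma_map_ok(void *buf)
{
	return buf != NULL && !force_map_fail;
}

/*
 * Mirrors the shape of nvmet_rdma_alloc_rsp(): allocate the CQE buffer,
 * then map it; if mapping fails, unwind the allocation via the goto
 * labels before returning -ENOMEM.
 */
static int alloc_rsp(struct fake_rsp *r)
{
	r->cqe = malloc(sizeof(*r->cqe));
	if (!r->cqe)
		goto out;

	if (!dma_map_ok(r->cqe))
		goto out_free_cqe;

	return 0;

out_free_cqe:
	free(r->cqe);
	r->cqe = NULL;
out:
	return -ENOMEM;
}
```

The labels release resources in reverse order of acquisition, which is also why patch 1/2 could move the p2p_client assignment here: it runs once at allocation time, after the mapping is known to have succeeded, instead of on every command in the fast path.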
    
    
    _______________________________________________
    Linux-nvme mailing list
    Linux-nvme at lists.infradead.org
    http://lists.infradead.org/mailman/listinfo/linux-nvme
    


* [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path
  2019-04-08 15:39 [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path Max Gurtovoy
  2019-04-08 15:39 ` [PATCH 2/2] nvmet: rename nvme_completion instances from rsp to cqe Max Gurtovoy
  2019-04-08 16:04 ` [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path Logan Gunthorpe
@ 2019-04-09 10:10 ` Christoph Hellwig
  2 siblings, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2019-04-09 10:10 UTC (permalink / raw)


Applied to nvme-5.2.


* [PATCH 2/2] nvmet: rename nvme_completion instances from rsp to cqe
  2019-04-08 15:39 ` [PATCH 2/2] nvmet: rename nvme_completion instances from rsp to cqe Max Gurtovoy
  2019-04-08 19:04   ` Chaitanya Kulkarni
@ 2019-04-09 10:10   ` Christoph Hellwig
  1 sibling, 0 replies; 9+ messages in thread
From: Christoph Hellwig @ 2019-04-09 10:10 UTC (permalink / raw)


Applied to nvme-5.2.  I hope this isn't going to create a lot of
conflicts...


end of thread, other threads:[~2019-04-09 10:10 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-08 15:39 [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path Max Gurtovoy
2019-04-08 15:39 ` [PATCH 2/2] nvmet: rename nvme_completion instances from rsp to cqe Max Gurtovoy
2019-04-08 19:04   ` Chaitanya Kulkarni
2019-04-09 10:10   ` Christoph Hellwig
2019-04-08 16:04 ` [PATCH 1/2] nvmet-rdma: remove p2p_client initialization from fast-path Logan Gunthorpe
2019-04-08 16:26   ` Max Gurtovoy
2019-04-08 16:35     ` Logan Gunthorpe
2019-04-08 16:59       ` Max Gurtovoy
2019-04-09 10:10 ` Christoph Hellwig

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).