public inbox for linux-nvme@lists.infradead.org
 help / color / mirror / Atom feed
* [PATCH V2 0/8] nvme-core: trivial cleanups
@ 2023-03-27  6:04 Chaitanya Kulkarni
  2023-03-27  6:04 ` [PATCH V2 1/8] nvme-core: remove unnecessary else Chaitanya Kulkarni
                   ` (8 more replies)
  0 siblings, 9 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-27  6:04 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

Hi,

Small code cleanups for nvme-core; the last patch fixes the really long
argument list of __nvme_submit_sync_cmd().

Please have a closer look at patches 3 and 8. There are no functional
changes in any of these patches.

V2 passes blktests, see the log below.

-ck

v1->v2:

1. Update Reviewed-by tags (Sagi)
2. Rename struct nvme_submit_cmd_data to nvme_submit_cmd_args. (Sagi)
3. Remove patch #1 (Sagi)

Chaitanya Kulkarni (8):
  nvme-core: remove unnecessary else
  nvme-core: remove extra line at end of function
  nvme-core: code cleanup for __nvme_check_ready()
  nvme-core: use normal pattern
  nvme-core: open code nvme_delete_ctrl_sync()
  nvme-core: cleanup for nvme_set_latency_tolerance
  nvme-core: remove unnecessary else
  nvme-core: fix nvme_submit_sync_cmd() args

 drivers/nvme/host/auth.c    |  13 +++-
 drivers/nvme/host/core.c    | 148 +++++++++++++++++++++---------------
 drivers/nvme/host/fabrics.c |  60 ++++++++++++---
 drivers/nvme/host/nvme.h    |  16 +++-
 4 files changed, 158 insertions(+), 79 deletions(-)
blktests (master) # ./check nvme; nvme_trtype=tcp ./check nvme
nvme/002 (create many subsystems and test discovery)         [passed]
    runtime  18.950s  ...  19.715s
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
    runtime  10.099s  ...  10.095s
nvme/004 (test nvme and nvmet UUID NS descriptors)           [passed]
    runtime  1.471s  ...  1.546s
nvme/005 (reset local loopback target)                       [passed]
    runtime  6.825s  ...  6.919s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
    runtime  0.068s  ...  0.074s
nvme/007 (create an NVMeOF target with a file-backed ns)     [passed]
    runtime  0.044s  ...  0.048s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
    runtime  1.476s  ...  1.518s
nvme/009 (create an NVMeOF host with a file-backed ns)       [passed]
    runtime  1.444s  ...  1.475s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  7.449s  ...  7.685s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  60.217s  ...  78.689s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime    ...  12.459s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  104.896s  ...  79.450s
nvme/014 (flush a NVMeOF block device-backed ns)             [passed]
    runtime  10.311s  ...  9.759s
nvme/015 (unit test for NVMe flush for file backed ns)       [passed]
    runtime  9.617s  ...  8.870s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [passed]
    runtime  13.747s  ...  15.775s
nvme/017 (create/delete many file-ns and test discovery)     [passed]
    runtime  14.690s  ...  13.533s
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed]
    runtime  1.524s  ...  1.490s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed]
    runtime  1.592s  ...  1.470s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed]
    runtime  1.536s  ...  1.467s
nvme/021 (test NVMe list command on NVMeOF file-backed ns)   [passed]
    runtime  1.472s  ...  1.461s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns)  [passed]
    runtime  6.758s  ...  6.827s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed]
    runtime  1.449s  ...  1.485s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed]
    runtime  1.440s  ...  1.556s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed]
    runtime  1.434s  ...  1.455s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed]
    runtime  1.429s  ...  1.460s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed]
    runtime  1.446s  ...  1.461s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed]
    runtime  1.450s  ...  1.470s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed]
    runtime  1.589s  ...  1.709s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed]
    runtime  0.226s  ...  0.237s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed]
    runtime  53.920s  ...  54.157s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed]
    runtime  0.028s  ...  0.026s
nvme/002 (create many subsystems and test discovery)         [not run]
    runtime  19.715s  ...  
    nvme_trtype=tcp is not supported in this test
nvme/003 (test if we're sending keep-alives to a discovery controller) [passed]
    runtime  10.095s  ...  10.110s
nvme/004 (test nvme and nvmet UUID NS descriptors)           [passed]
    runtime  1.546s  ...  1.158s
nvme/005 (reset local loopback target)                       [passed]
    runtime  6.919s  ...  6.211s
nvme/006 (create an NVMeOF target with a block device-backed ns) [passed]
    runtime  0.074s  ...  0.073s
nvme/007 (create an NVMeOF target with a file-backed ns)     [passed]
    runtime  0.048s  ...  0.049s
nvme/008 (create an NVMeOF host with a block device-backed ns) [passed]
    runtime  1.518s  ...  1.160s
nvme/009 (create an NVMeOF host with a file-backed ns)       [passed]
    runtime  1.475s  ...  1.152s
nvme/010 (run data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  7.685s  ...  15.872s
nvme/011 (run data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  78.689s  ...  67.854s
nvme/012 (run mkfs and data verification fio job on NVMeOF block device-backed ns) [passed]
    runtime  12.459s  ...  20.635s
nvme/013 (run mkfs and data verification fio job on NVMeOF file-backed ns) [passed]
    runtime  79.450s  ...  73.919s
nvme/014 (flush a NVMeOF block device-backed ns)             [passed]
    runtime  9.759s  ...  8.695s
nvme/015 (unit test for NVMe flush for file backed ns)       [passed]
    runtime  8.870s  ...  8.553s
nvme/016 (create/delete many NVMeOF block device-backed ns and test discovery) [not run]
    runtime  15.775s  ...  
    nvme_trtype=tcp is not supported in this test
nvme/017 (create/delete many file-ns and test discovery)     [not run]
    runtime  13.533s  ...  
    nvme_trtype=tcp is not supported in this test
nvme/018 (unit test NVMe-oF out of range access on a file backend) [passed]
    runtime  1.490s  ...  1.144s
nvme/019 (test NVMe DSM Discard command on NVMeOF block-device ns) [passed]
    runtime  1.470s  ...  1.159s
nvme/020 (test NVMe DSM Discard command on NVMeOF file-backed ns) [passed]
    runtime  1.467s  ...  1.136s
nvme/021 (test NVMe list command on NVMeOF file-backed ns)   [passed]
    runtime  1.461s  ...  1.133s
nvme/022 (test NVMe reset command on NVMeOF file-backed ns)  [passed]
    runtime  6.827s  ...  6.162s
nvme/023 (test NVMe smart-log command on NVMeOF block-device ns) [passed]
    runtime  1.485s  ...  1.164s
nvme/024 (test NVMe smart-log command on NVMeOF file-backed ns) [passed]
    runtime  1.556s  ...  1.130s
nvme/025 (test NVMe effects-log command on NVMeOF file-backed ns) [passed]
    runtime  1.455s  ...  1.141s
nvme/026 (test NVMe ns-descs command on NVMeOF file-backed ns) [passed]
    runtime  1.460s  ...  1.137s
nvme/027 (test NVMe ns-rescan command on NVMeOF file-backed ns) [passed]
    runtime  1.461s  ...  1.143s
nvme/028 (test NVMe list-subsys command on NVMeOF file-backed ns) [passed]
    runtime  1.470s  ...  1.138s
nvme/029 (test userspace IO via nvme-cli read/write interface) [passed]
    runtime  1.709s  ...  1.271s
nvme/030 (ensure the discovery generation counter is updated appropriately) [passed]
    runtime  0.237s  ...  0.146s
nvme/031 (test deletion of NVMeOF controllers immediately after setup) [passed]
    runtime  54.157s  ...  50.809s
nvme/038 (test deletion of NVMeOF subsystem without enabling) [passed]
    runtime  0.026s  ...  0.029s
blktests (master) # 

-- 
2.29.0



^ permalink raw reply	[flat|nested] 15+ messages in thread

* [PATCH V2 1/8] nvme-core: remove unnecessary else
  2023-03-27  6:04 [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
@ 2023-03-27  6:04 ` Chaitanya Kulkarni
  2023-03-28  0:40   ` Christoph Hellwig
  2023-03-27  6:04 ` [PATCH V2 2/8] nvme-core: remove extra line at end of function Chaitanya Kulkarni
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-27  6:04 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

By returning early we can safely remove the unnecessary else branch.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index d4be525f8100..71e2f6723590 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4475,9 +4475,10 @@ static void nvme_scan_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 	if (ns) {
 		nvme_validate_ns(ns, &info);
 		nvme_put_ns(ns);
-	} else {
-		nvme_alloc_ns(ctrl, &info);
+		return;
 	}
+
+	nvme_alloc_ns(ctrl, &info);
 }
 
 static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 2/8] nvme-core: remove extra line at end of function
  2023-03-27  6:04 [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
  2023-03-27  6:04 ` [PATCH V2 1/8] nvme-core: remove unnecessary else Chaitanya Kulkarni
@ 2023-03-27  6:04 ` Chaitanya Kulkarni
  2023-03-27  6:04 ` [PATCH V2 3/8] nvme-core: code cleanup for __nvme_check_ready() Chaitanya Kulkarni
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-27  6:04 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

Remove the extra line at the end of the function.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 71e2f6723590..725ffa66fa6a 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4496,7 +4496,6 @@ static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
 
 	list_for_each_entry_safe(ns, next, &rm_list, list)
 		nvme_ns_remove(ns);
-
 }
 
 static int nvme_scan_ns_list(struct nvme_ctrl *ctrl)
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 3/8] nvme-core: code cleanup for __nvme_check_ready()
  2023-03-27  6:04 [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
  2023-03-27  6:04 ` [PATCH V2 1/8] nvme-core: remove unnecessary else Chaitanya Kulkarni
  2023-03-27  6:04 ` [PATCH V2 2/8] nvme-core: remove extra line at end of function Chaitanya Kulkarni
@ 2023-03-27  6:04 ` Chaitanya Kulkarni
  2023-03-27 11:15   ` Pankaj Raghav
  2023-03-27  6:04 ` [PATCH V2 4/8] nvme-core: use normal pattern Chaitanya Kulkarni
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-27  6:04 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

We can avoid nested indentation by returning early if the controller is
not a fabrics controller. Additionally, instead of chaining comparisons,
we can use a switch statement to determine whether the required command
is connect/auth_send/auth_recv. This also helps to reduce the length of
the lines and makes the code easier to read, since we don't have to
verify three different comparisons. Lastly, fix the spelling of
"appropriate" in the second comment.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/core.c | 41 +++++++++++++++++++++++-----------------
 1 file changed, 24 insertions(+), 17 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 725ffa66fa6a..5a12be27bea7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -721,25 +721,32 @@ bool __nvme_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
 	if (rq->q == ctrl->admin_q && (req->flags & NVME_REQ_USERCMD))
 		return false;
 
-	if (ctrl->ops->flags & NVME_F_FABRICS) {
-		/*
-		 * Only allow commands on a live queue, except for the connect
-		 * command, which is require to set the queue live in the
-		 * appropinquate states.
-		 */
-		switch (ctrl->state) {
-		case NVME_CTRL_CONNECTING:
-			if (blk_rq_is_passthrough(rq) && nvme_is_fabrics(req->cmd) &&
-			    (req->cmd->fabrics.fctype == nvme_fabrics_type_connect ||
-			     req->cmd->fabrics.fctype == nvme_fabrics_type_auth_send ||
-			     req->cmd->fabrics.fctype == nvme_fabrics_type_auth_receive))
+	if (!(ctrl->ops->flags & NVME_F_FABRICS))
+		return queue_live;
+
+	/*
+	 * Only allow commands on a live queue, except for connect/auth_send/
+	 * auth_recv commands, which are required to set the queue live in the
+	 * appropriate states.
+	 */
+	switch (ctrl->state) {
+	case NVME_CTRL_CONNECTING:
+		if (blk_rq_is_passthrough(rq) &&
+		    nvme_is_fabrics(req->cmd)) {
+			switch (req->cmd->fabrics.fctype) {
+			case nvme_fabrics_type_connect:
+			case nvme_fabrics_type_auth_send:
+			case nvme_fabrics_type_auth_receive:
 				return true;
-			break;
-		default:
-			break;
-		case NVME_CTRL_DEAD:
-			return false;
+			default:
+				break;
+			}
 		}
+		break;
+	default:
+		break;
+	case NVME_CTRL_DEAD:
+		return false;
 	}
 
 	return queue_live;
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 4/8] nvme-core: use normal pattern
  2023-03-27  6:04 [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (2 preceding siblings ...)
  2023-03-27  6:04 ` [PATCH V2 3/8] nvme-core: code cleanup for __nvme_check_ready() Chaitanya Kulkarni
@ 2023-03-27  6:04 ` Chaitanya Kulkarni
  2023-03-28  0:42   ` Christoph Hellwig
  2023-03-27  6:04 ` [PATCH V2 5/8] nvme-core: open code nvme_delete_ctrl_sync() Chaitanya Kulkarni
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-27  6:04 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

Although return is allowed in a switch ... case when the function
returns void, it creates confusion for future code about which pattern
is desirable for switch ... case. Replace the returns in the switch
with break, which is the standard pattern for switch ... case.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5a12be27bea7..cf9469e486ec 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -403,13 +403,13 @@ void nvme_complete_rq(struct request *req)
 	switch (nvme_decide_disposition(req)) {
 	case COMPLETE:
 		nvme_end_req(req);
-		return;
+		break;
 	case RETRY:
 		nvme_retry_req(req);
-		return;
+		break;
 	case FAILOVER:
 		nvme_failover_req(req);
-		return;
+		break;
 	case AUTHENTICATE:
 #ifdef CONFIG_NVME_AUTH
 		queue_work(nvme_wq, &ctrl->dhchap_auth_work);
@@ -417,7 +417,7 @@ void nvme_complete_rq(struct request *req)
 #else
 		nvme_end_req(req);
 #endif
-		return;
+		break;
 	}
 }
 EXPORT_SYMBOL_GPL(nvme_complete_rq);
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 5/8] nvme-core: open code nvme_delete_ctrl_sync()
  2023-03-27  6:04 [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (3 preceding siblings ...)
  2023-03-27  6:04 ` [PATCH V2 4/8] nvme-core: use normal pattern Chaitanya Kulkarni
@ 2023-03-27  6:04 ` Chaitanya Kulkarni
  2023-03-27  6:04 ` [PATCH V2 6/8] nvme-core: cleanup for nvme_set_latency_tolerance Chaitanya Kulkarni
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-27  6:04 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

nvme_delete_ctrl_sync() has only one caller, nvme_sysfs_delete(). Just
open code the function in the caller.

Also, add a meaningful comment, since we no longer have the function
name to indicate synchronous deletion of the controller.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index cf9469e486ec..c4372855924d 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -237,18 +237,6 @@ int nvme_delete_ctrl(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvme_delete_ctrl);
 
-static void nvme_delete_ctrl_sync(struct nvme_ctrl *ctrl)
-{
-	/*
-	 * Keep a reference until nvme_do_delete_ctrl() complete,
-	 * since ->delete_ctrl can free the controller.
-	 */
-	nvme_get_ctrl(ctrl);
-	if (nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING))
-		nvme_do_delete_ctrl(ctrl);
-	nvme_put_ctrl(ctrl);
-}
-
 static blk_status_t nvme_error_status(u16 status)
 {
 	switch (status & 0x7ff) {
@@ -3591,8 +3579,18 @@ static ssize_t nvme_sysfs_delete(struct device *dev,
 {
 	struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
 
-	if (device_remove_file_self(dev, attr))
-		nvme_delete_ctrl_sync(ctrl);
+	if (!device_remove_file_self(dev, attr))
+		return count;
+
+	/*
+	 * Delete the controller synchronously but keep a reference until
+	 * nvme_do_delete_ctrl() completes, since ->delete_ctrl can free the
+	 * controller.
+	 */
+	nvme_get_ctrl(ctrl);
+	if (nvme_change_ctrl_state(ctrl, NVME_CTRL_DELETING))
+		nvme_do_delete_ctrl(ctrl);
+	nvme_put_ctrl(ctrl);
 	return count;
 }
 static DEVICE_ATTR(delete_controller, S_IWUSR, NULL, nvme_sysfs_delete);
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 6/8] nvme-core: cleanup for nvme_set_latency_tolerance
  2023-03-27  6:04 [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (4 preceding siblings ...)
  2023-03-27  6:04 ` [PATCH V2 5/8] nvme-core: open code nvme_delete_ctrl_sync() Chaitanya Kulkarni
@ 2023-03-27  6:04 ` Chaitanya Kulkarni
  2023-03-27  6:04 ` [PATCH V2 7/8] nvme-core: remove unnecessary else Chaitanya Kulkarni
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-27  6:04 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

Remove the extra blank line before the default case in the switch and
add a break at the end of the default case, which is standard practice.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c4372855924d..bc1091953562 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2594,9 +2594,9 @@ static void nvme_set_latency_tolerance(struct device *dev, s32 val)
 	case PM_QOS_LATENCY_ANY:
 		latency = U64_MAX;
 		break;
-
 	default:
 		latency = val;
+		break;
 	}
 
 	if (ctrl->ps_max_latency_us != latency) {
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 7/8] nvme-core: remove unnecessary else
  2023-03-27  6:04 [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (5 preceding siblings ...)
  2023-03-27  6:04 ` [PATCH V2 6/8] nvme-core: cleanup for nvme_set_latency_tolerance Chaitanya Kulkarni
@ 2023-03-27  6:04 ` Chaitanya Kulkarni
  2023-03-27  6:04 ` [PATCH V2 8/8] nvme-core: fix nvme_submit_sync_cmd() args Chaitanya Kulkarni
  2023-03-28  6:12 ` [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
  8 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-27  6:04 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

There is no need for the else when a direct return is used at the
end of the function.

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
---
 drivers/nvme/host/core.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index bc1091953562..fda851e8b7c6 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3418,8 +3418,8 @@ static inline struct nvme_ns_head *dev_to_ns_head(struct device *dev)
 
 	if (disk->fops == &nvme_bdev_ops)
 		return nvme_get_ns_from_dev(dev)->head;
-	else
-		return disk->private_data;
+
+	return disk->private_data;
 }
 
 static ssize_t wwid_show(struct device *dev, struct device_attribute *attr,
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH V2 8/8] nvme-core: fix nvme_submit_sync_cmd() args
  2023-03-27  6:04 [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (6 preceding siblings ...)
  2023-03-27  6:04 ` [PATCH V2 7/8] nvme-core: remove unnecessary else Chaitanya Kulkarni
@ 2023-03-27  6:04 ` Chaitanya Kulkarni
  2023-03-28  0:43   ` Christoph Hellwig
  2023-03-28  6:12 ` [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
  8 siblings, 1 reply; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-27  6:04 UTC (permalink / raw)
  To: linux-nvme; +Cc: kbusch, hch, sagi, hare, Chaitanya Kulkarni

The __nvme_submit_sync_cmd() function currently has a long list of
eight arguments, which makes it difficult to read and debug,
particularly since the function is called from many parts of the
codebase. This complexity only grows when the functionality is
extended and more arguments are added.

To address this issue, create a new structure called
nvme_submit_cmd_args and move all the existing arguments into it. We
can then pass the structure as a single argument to
__nvme_submit_sync_cmd(). This simplifies the code, makes it more
readable, and provides a way to add future arguments to the structure
without increasing the number of function arguments.

This pattern of using a structure for arguments is commonly used in the
block layer code:

blk_mq_rq_cache_fill()
	struct blk_mq_alloc_data data = { ... }
	__blk_mq_alloc_request(data)
		blk_mq_rq_ctx_init(data)

blk_mq_alloc_and_init_hctx()
	blk_mq_alloc_request_hctx()
		struct blk_mq_alloc_data data = { ... }
		blk_mq_rq_ctx_init(data)

Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
---
 drivers/nvme/host/auth.c    | 13 ++++++--
 drivers/nvme/host/core.c    | 61 +++++++++++++++++++++++++------------
 drivers/nvme/host/fabrics.c | 60 ++++++++++++++++++++++++++++--------
 drivers/nvme/host/nvme.h    | 16 +++++++---
 4 files changed, 112 insertions(+), 38 deletions(-)

diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
index ea16a0aba679..a18c0c7ce0b7 100644
--- a/drivers/nvme/host/auth.c
+++ b/drivers/nvme/host/auth.c
@@ -64,6 +64,15 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
 	struct nvme_command cmd = {};
 	blk_mq_req_flags_t flags = nvme_auth_flags_from_qid(qid);
 	struct request_queue *q = nvme_auth_queue_from_qid(ctrl, qid);
+	struct nvme_submit_cmd_args d = {
+		.q = q,
+		.cmd = &cmd,
+		.buffer = data,
+		.bufflen = data_len,
+		.qid = qid == 0 ? NVME_QID_ANY : qid,
+		.at_head = false,
+		.flags = flags,
+	};
 	int ret;
 
 	cmd.auth_common.opcode = nvme_fabrics_command;
@@ -78,9 +87,7 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
 		cmd.auth_receive.al = cpu_to_le32(data_len);
 	}
 
-	ret = __nvme_submit_sync_cmd(q, &cmd, NULL, data, data_len,
-				     qid == 0 ? NVME_QID_ANY : qid,
-				     0, flags);
+	ret = __nvme_submit_sync_cmd(&d);
 	if (ret > 0)
 		dev_warn(ctrl->device,
 			"qid %d auth_send failed with status %d\n", qid, ret);
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index fda851e8b7c6..105c5655410f 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1025,32 +1025,30 @@ EXPORT_SYMBOL_NS_GPL(nvme_execute_rq, NVME_TARGET_PASSTHRU);
  * Returns 0 on success.  If the result is negative, it's a Linux error code;
  * if the result is positive, it's an NVM Express status code
  */
-int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
-		union nvme_result *result, void *buffer, unsigned bufflen,
-		int qid, int at_head, blk_mq_req_flags_t flags)
+int __nvme_submit_sync_cmd(struct nvme_submit_cmd_args *d)
 {
 	struct request *req;
 	int ret;
 
-	if (qid == NVME_QID_ANY)
-		req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
+	if (d->qid == NVME_QID_ANY)
+		req = blk_mq_alloc_request(d->q, nvme_req_op(d->cmd), d->flags);
 	else
-		req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
-						qid - 1);
+		req = blk_mq_alloc_request_hctx(d->q, nvme_req_op(d->cmd), d->flags,
+						d->qid - 1);
 
 	if (IS_ERR(req))
 		return PTR_ERR(req);
-	nvme_init_request(req, cmd);
+	nvme_init_request(req, d->cmd);
 
-	if (buffer && bufflen) {
-		ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
+	if (d->buffer && d->bufflen) {
+		ret = blk_rq_map_kern(d->q, req, d->buffer, d->bufflen, GFP_KERNEL);
 		if (ret)
 			goto out;
 	}
 
-	ret = nvme_execute_rq(req, at_head);
-	if (result && ret >= 0)
-		*result = nvme_req(req)->result;
+	ret = nvme_execute_rq(req, d->at_head);
+	if (d->result && ret >= 0)
+		*(d->result) = nvme_req(req)->result;
  out:
 	blk_mq_free_request(req);
 	return ret;
@@ -1060,8 +1058,16 @@ EXPORT_SYMBOL_GPL(__nvme_submit_sync_cmd);
 int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		void *buffer, unsigned bufflen)
 {
-	return __nvme_submit_sync_cmd(q, cmd, NULL, buffer, bufflen,
-			NVME_QID_ANY, 0, 0);
+	struct nvme_submit_cmd_args d = {
+		.q = q,
+		.cmd = cmd,
+		.buffer = buffer,
+		.bufflen = bufflen,
+		.qid = NVME_QID_ANY,
+		.at_head = false,
+		.flags = 0,
+	};
+	return __nvme_submit_sync_cmd(&d);
 }
 EXPORT_SYMBOL_GPL(nvme_submit_sync_cmd);
 
@@ -1480,14 +1486,23 @@ static int nvme_features(struct nvme_ctrl *dev, u8 op, unsigned int fid,
 {
 	union nvme_result res = { 0 };
 	struct nvme_command c = { };
+	struct nvme_submit_cmd_args d = {
+		.q = dev->admin_q,
+		.cmd = &c,
+		.result = &res,
+		.buffer = buffer,
+		.bufflen = buflen,
+		.qid = NVME_QID_ANY,
+		.at_head = false,
+		.flags = 0,
+	};
 	int ret;
 
 	c.features.opcode = op;
 	c.features.fid = cpu_to_le32(fid);
 	c.features.dword11 = cpu_to_le32(dword11);
 
-	ret = __nvme_submit_sync_cmd(dev->admin_q, &c, &res,
-			buffer, buflen, NVME_QID_ANY, 0, 0);
+	ret = __nvme_submit_sync_cmd(&d);
 	if (ret >= 0 && result)
 		*result = le32_to_cpu(res.u32);
 	return ret;
@@ -2209,6 +2224,15 @@ static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t l
 {
 	struct nvme_ctrl *ctrl = data;
 	struct nvme_command cmd = { };
+	struct nvme_submit_cmd_args d = {
+		.q = ctrl->admin_q,
+		.cmd = &cmd,
+		.buffer = buffer,
+		.bufflen = len,
+		.qid = NVME_QID_ANY,
+		.at_head = true,
+		.flags = 0,
+	};
 
 	if (send)
 		cmd.common.opcode = nvme_admin_security_send;
@@ -2218,8 +2242,7 @@ static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t l
 	cmd.common.cdw10 = cpu_to_le32(((u32)secp) << 24 | ((u32)spsp) << 8);
 	cmd.common.cdw11 = cpu_to_le32(len);
 
-	return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len,
-			NVME_QID_ANY, 1, 0);
+	return __nvme_submit_sync_cmd(&d);
 }
 
 static void nvme_configure_opal(struct nvme_ctrl *ctrl, bool was_suspended)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index bbaa04a0c502..4036da49a18d 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -146,14 +146,21 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
 {
 	struct nvme_command cmd = { };
 	union nvme_result res;
+	struct nvme_submit_cmd_args d = {
+		.q = ctrl->fabrics_q,
+		.cmd = &cmd,
+		.result = &res,
+		.qid = NVME_QID_ANY,
+		.at_head = false,
+		.flags = 0,
+	};
 	int ret;
 
 	cmd.prop_get.opcode = nvme_fabrics_command;
 	cmd.prop_get.fctype = nvme_fabrics_type_property_get;
 	cmd.prop_get.offset = cpu_to_le32(off);
 
-	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0,
-			NVME_QID_ANY, 0, 0);
+	ret = __nvme_submit_sync_cmd(&d);
 
 	if (ret >= 0)
 		*val = le64_to_cpu(res.u64);
@@ -191,6 +198,14 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
 {
 	struct nvme_command cmd = { };
 	union nvme_result res;
+	struct nvme_submit_cmd_args d = {
+		.q = ctrl->fabrics_q,
+		.cmd = &cmd,
+		.result = &res,
+		.qid = NVME_QID_ANY,
+		.at_head = false,
+		.flags = 0,
+	};
 	int ret;
 
 	cmd.prop_get.opcode = nvme_fabrics_command;
@@ -198,8 +213,7 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
 	cmd.prop_get.attrib = 1;
 	cmd.prop_get.offset = cpu_to_le32(off);
 
-	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0,
-			NVME_QID_ANY, 0, 0);
+	ret = __nvme_submit_sync_cmd(&d);
 
 	if (ret >= 0)
 		*val = le64_to_cpu(res.u64);
@@ -235,6 +249,13 @@ EXPORT_SYMBOL_GPL(nvmf_reg_read64);
 int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
 {
 	struct nvme_command cmd = { };
+	struct nvme_submit_cmd_args d = {
+		.q = ctrl->fabrics_q,
+		.cmd = &cmd,
+		.qid = NVME_QID_ANY,
+		.at_head = false,
+		.flags = 0,
+	};
 	int ret;
 
 	cmd.prop_set.opcode = nvme_fabrics_command;
@@ -243,8 +264,7 @@ int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
 	cmd.prop_set.offset = cpu_to_le32(off);
 	cmd.prop_set.value = cpu_to_le64(val);
 
-	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, NULL, NULL, 0,
-			NVME_QID_ANY, 0, 0);
+	ret = __nvme_submit_sync_cmd(&d);
 	if (unlikely(ret))
 		dev_err(ctrl->device,
 			"Property Set error: %d, offset %#x\n",
@@ -374,6 +394,15 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
 	struct nvme_command cmd = { };
 	union nvme_result res;
 	struct nvmf_connect_data *data;
+	struct nvme_submit_cmd_args d = {
+		.q = ctrl->fabrics_q,
+		.cmd = &cmd,
+		.result = &res,
+		.bufflen = sizeof(*data),
+		.qid = NVME_QID_ANY,
+		.at_head = true,
+		.flags = BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT,
+	};
 	int ret;
 	u32 result;
 
@@ -394,14 +423,13 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
 	if (!data)
 		return -ENOMEM;
 
+	d.buffer = data;
 	uuid_copy(&data->hostid, &ctrl->opts->host->id);
 	data->cntlid = cpu_to_le16(0xffff);
 	strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
 	strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
 
-	ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res,
-			data, sizeof(*data), NVME_QID_ANY, 1,
-			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+	ret = __nvme_submit_sync_cmd(&d);
 	if (ret) {
 		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
 				       &cmd, data);
@@ -465,6 +493,15 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
 	struct nvme_command cmd = { };
 	struct nvmf_connect_data *data;
 	union nvme_result res;
+	struct nvme_submit_cmd_args d = {
+		.q = ctrl->connect_q,
+		.cmd = &cmd,
+		.result = &res,
+		.bufflen = sizeof(*data),
+		.qid = qid,
+		.at_head = true,
+		.flags = BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT,
+	};
 	int ret;
 	u32 result;
 
@@ -480,14 +517,13 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
 	if (!data)
 		return -ENOMEM;
 
+	d.buffer = data;
 	uuid_copy(&data->hostid, &ctrl->opts->host->id);
 	data->cntlid = cpu_to_le16(ctrl->cntlid);
 	strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
 	strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
 
-	ret = __nvme_submit_sync_cmd(ctrl->connect_q, &cmd, &res,
-			data, sizeof(*data), qid, 1,
-			BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+	ret = __nvme_submit_sync_cmd(&d);
 	if (ret) {
 		nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
 				       &cmd, data);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bf46f122e9e1..0b0b6e0468aa 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -811,12 +811,20 @@ static inline bool nvme_is_unique_nsid(struct nvme_ctrl *ctrl,
 		(ctrl->ctratt & NVME_CTRL_CTRATT_NVM_SETS);
 }
 
+struct nvme_submit_cmd_args {
+	struct request_queue *q;
+	struct nvme_command *cmd;
+	union nvme_result *result;
+	void *buffer;
+	unsigned int bufflen;
+	int qid;
+	bool at_head;
+	blk_mq_req_flags_t flags;
+};
+
 int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
 		void *buf, unsigned bufflen);
-int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
-		union nvme_result *result, void *buffer, unsigned bufflen,
-		int qid, int at_head,
-		blk_mq_req_flags_t flags);
+int __nvme_submit_sync_cmd(struct nvme_submit_cmd_args *d);
 int nvme_set_features(struct nvme_ctrl *dev, unsigned int fid,
 		      unsigned int dword11, void *buffer, size_t buflen,
 		      u32 *result);
-- 
2.29.0



^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH V2 3/8] nvme-core: code cleanup for __nvme_check_ready()
  2023-03-27  6:04 ` [PATCH V2 3/8] nvme-core: code cleanup for __nvme_check_ready() Chaitanya Kulkarni
@ 2023-03-27 11:15   ` Pankaj Raghav
  0 siblings, 0 replies; 15+ messages in thread
From: Pankaj Raghav @ 2023-03-27 11:15 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, kbusch, hch, sagi, hare, p.raghav

>  		}
> +		break;
> +	default:
> +		break;
> +	case NVME_CTRL_DEAD:
> +		return false;
While you are at it, doesn't it make sense to have default as the last
switch case instead of NVME_CTRL_DEAD? It looks a bit strange to have
something else after the default case.
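[Editorial sketch] The ordering Pankaj suggests can be illustrated with a
minimal, self-contained C sketch; the state names and function below are
invented stand-ins, not the kernel's nvme code:

```c
#include <stdbool.h>

/* Hypothetical controller states, loosely mirroring the nvme ones. */
enum ctrl_state { CTRL_LIVE, CTRL_RESETTING, CTRL_DEAD };

/*
 * "default" placed as the final label, so no case follows it --
 * the ordering Pankaj proposes for __nvme_check_ready().
 */
static bool ctrl_is_usable(enum ctrl_state state)
{
	switch (state) {
	case CTRL_LIVE:
		return true;
	case CTRL_DEAD:
		return false;
	default:	/* conventionally last */
		return false;
	}
}
```

The language allows labels in any order; putting default last is purely a
readability convention.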



* Re: [PATCH V2 1/8] nvme-core: remove unnecessary else
  2023-03-27  6:04 ` [PATCH V2 1/8] nvme-core: remove unnecessary else Chaitanya Kulkarni
@ 2023-03-28  0:40   ` Christoph Hellwig
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2023-03-28  0:40 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, kbusch, hch, sagi, hare

On Sun, Mar 26, 2023 at 11:04:11PM -0700, Chaitanya Kulkarni wrote:
> By returning early we can safely remove the unnecessary else part.
> 
> Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com>
> Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
> ---
>  drivers/nvme/host/core.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)

.. and add a line while creating churn?



* Re: [PATCH V2 4/8] nvme-core: use normal pattern
  2023-03-27  6:04 ` [PATCH V2 4/8] nvme-core: use normal pattern Chaitanya Kulkarni
@ 2023-03-28  0:42   ` Christoph Hellwig
  0 siblings, 0 replies; 15+ messages in thread
From: Christoph Hellwig @ 2023-03-28  0:42 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, kbusch, hch, sagi, hare

On Sun, Mar 26, 2023 at 11:04:14PM -0700, Chaitanya Kulkarni wrote:
> Although return is allowed in the switch ... case when function is
> returing void, it creates confusion for future code which is desirable
> pattern to use for switch ... case. Repalce return in the switch with
> break that is standard pattern for the switch ... case.

I don't really see any point in doing that.  An early return makes it
clear what is going on here, and the logic is so that we really
should never have code after the switch as the calls inside the
switch statement "dispose" of the command, so anything done there
would cause use after free issues.
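[Editorial sketch] Christoph's use-after-free argument can be made concrete
with a small, self-contained C sketch; the request type and functions are
invented stand-ins, not the real nvme code:

```c
#include <stdlib.h>

/* Hypothetical request object "disposed of" inside the switch. */
struct request { int op; };

static void complete_request(struct request *rq)
{
	free(rq);	/* disposal: rq is dead after this call */
}

/*
 * Every case hands off the request, so returning directly from the
 * switch guarantees nothing can touch rq afterwards. Replacing the
 * returns with breaks would funnel control past the switch, where any
 * later code would be a use-after-free waiting to happen.
 */
static int dispatch(struct request *rq)
{
	switch (rq->op) {
	case 0:
		complete_request(rq);
		return 0;	/* early return: rq already freed */
	default:
		complete_request(rq);
		return -1;
	}
	/* intentionally no code here */
}
```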



* Re: [PATCH V2 8/8] nvme-core: fix nvme_submit_sync_cmd() args
  2023-03-27  6:04 ` [PATCH V2 8/8] nvme-core: fix nvme_submit_sync_cmd() args Chaitanya Kulkarni
@ 2023-03-28  0:43   ` Christoph Hellwig
  2023-03-28  5:46     ` Chaitanya Kulkarni
  0 siblings, 1 reply; 15+ messages in thread
From: Christoph Hellwig @ 2023-03-28  0:43 UTC (permalink / raw)
  To: Chaitanya Kulkarni; +Cc: linux-nvme, kbusch, hch, sagi, hare

On Sun, Mar 26, 2023 at 11:04:18PM -0700, Chaitanya Kulkarni wrote:
> 4 files changed, 112 insertions(+), 38 deletions(-)

This:

 - adds a shit ton of code
 - makes it harder to use the function
 - creates tons of churn

Why?



* Re: [PATCH V2 8/8] nvme-core: fix nvme_submit_sync_cmd() args
  2023-03-28  0:43   ` Christoph Hellwig
@ 2023-03-28  5:46     ` Chaitanya Kulkarni
  0 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-28  5:46 UTC (permalink / raw)
  To: Christoph Hellwig
  Cc: linux-nvme@lists.infradead.org, kbusch@kernel.org,
	Chaitanya Kulkarni, sagi@grimberg.me, hare@suse.de

On 3/27/23 17:43, Christoph Hellwig wrote:
> On Sun, Mar 26, 2023 at 11:04:18PM -0700, Chaitanya Kulkarni wrote:
>> 4 files changed, 112 insertions(+), 38 deletions(-)
> This:
>
>   - adds a shit ton of code

agree, but it is similar to what we do for the nvme_command structure
all over the host-side core and fabrics code, nothing new:
nvme_identify_ns_descs()
nvme_identify_ns()
nvme_ns_info_from_id_cs_indep()
nvme_init_ms()
nvme_get_log()
nvme_init_non_mdts_limits()
nvme_scan_ns_list()
nvme_identify_ctrl()
nvmf_reg_read32()
nvmf_reg_read64()
nvmf_reg_write32()
nvmf_connect_admin_queue()
nvmf_connect_io_queue()

>   - makes it harder to use the function

we already pass the nvme_command struct the same way, initializing
its members first. This patch does the same without inventing
anything new, similar to the block layer code (mentioned in the
commit log) and the host-side core/fabrics code for nvme_command
listed above.
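[Editorial sketch] The argument-struct pattern ck describes can be shown in
a minimal userspace C sketch; all names below are stand-ins for the real
nvme types, assumed purely for illustration:

```c
#include <stddef.h>
#include <stdbool.h>

/*
 * Stand-in for struct nvme_submit_cmd_args: the long parameter list
 * becomes one struct filled via designated initializers, so call
 * sites only name the members they care about.
 */
struct submit_args {
	void *cmd;
	void *buffer;
	unsigned int bufflen;
	int qid;
	bool at_head;
};

static int submit_sync_cmd(const struct submit_args *d)
{
	if (!d->cmd)
		return -1;
	return d->qid;	/* placeholder for real submission logic */
}

static int caller(void)
{
	int cmd = 0;
	struct submit_args d = {
		.cmd = &cmd,
		.qid = 5,
		.at_head = true,
		/* unset members are implicitly zero/NULL */
	};
	return submit_sync_cmd(&d);
}
```

The trade-off debated in this thread is visible here: call sites become
self-documenting, at the cost of an extra struct definition and more lines
per call.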

>   - creates tons of churn
>
> Why?

in general there is opposition even to 6 args; when I sent out a
patch series for __nvme_submit_sync_cmd() it had 8!

it currently has a long list of arguments, and people have already
complained about that on the mailing list. If someone wants to add
more argument(s), what is the right way of doing that, since this
patch is not acceptable? please explain ...

if you still think keeping the long argument list is the better way
to go then feel free to drop this, no biggie ..

-ck




* Re: [PATCH V2 0/8] nvme-core: trivial cleanups
  2023-03-27  6:04 [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
                   ` (7 preceding siblings ...)
  2023-03-27  6:04 ` [PATCH V2 8/8] nvme-core: fix nvme_submit_sync_cmd() args Chaitanya Kulkarni
@ 2023-03-28  6:12 ` Chaitanya Kulkarni
  8 siblings, 0 replies; 15+ messages in thread
From: Chaitanya Kulkarni @ 2023-03-28  6:12 UTC (permalink / raw)
  To: hch@lst.de
  Cc: kbusch@kernel.org, linux-nvme@lists.infradead.org,
	Chaitanya Kulkarni, sagi@grimberg.me, hare@suse.de

On 3/26/23 23:04, Chaitanya Kulkarni wrote:
> Hi,
>
> Small code cleanups for nvme-core; the last patch fixes the really
> long argument list of __nvme_submit_sync_cmd().
>
> Please have a closer look at patch 3 and 8. There are no functional
> changes in any of these patches.
>
> V2 is passing the blktests, see log below.
>
>

feel free to drop these patches; while reading the code I felt they
would help, so I sent them. But please consider the last patch, or
suggest an alternative ...

-ck




end of thread, other threads:[~2023-03-28  6:12 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz follow: Atom feed
2023-03-27  6:04 [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
2023-03-27  6:04 ` [PATCH V2 1/8] nvme-core: remove unnecessary else Chaitanya Kulkarni
2023-03-28  0:40   ` Christoph Hellwig
2023-03-27  6:04 ` [PATCH V2 2/8] nvme-core: remvoe extra line at end of function Chaitanya Kulkarni
2023-03-27  6:04 ` [PATCH V2 3/8] nvme-core: code cleanup for __nvme_check_ready() Chaitanya Kulkarni
2023-03-27 11:15   ` Pankaj Raghav
2023-03-27  6:04 ` [PATCH V2 4/8] nvme-core: use normal pattern Chaitanya Kulkarni
2023-03-28  0:42   ` Christoph Hellwig
2023-03-27  6:04 ` [PATCH V2 5/8] nvme-core: open code nvme_delete_ctrl_sync() Chaitanya Kulkarni
2023-03-27  6:04 ` [PATCH V2 6/8] nvme-core: cleanup for nvme_set_latency_tolerance Chaitanya Kulkarni
2023-03-27  6:04 ` [PATCH V2 7/8] nvme-core: remove unneacessary else Chaitanya Kulkarni
2023-03-27  6:04 ` [PATCH V2 8/8] nvme-core: fix nvme_submit_sync_cmd() args Chaitanya Kulkarni
2023-03-28  0:43   ` Christoph Hellwig
2023-03-28  5:46     ` Chaitanya Kulkarni
2023-03-28  6:12 ` [PATCH V2 0/8] nvme-core: trivial cleanups Chaitanya Kulkarni
