* [PATCH 0/3] nvme: rework __nvme_submit_sync_cmd()
@ 2023-02-08 8:49 Hannes Reinecke
2023-02-08 8:49 ` [PATCH 1/3] nvme: split __nvme_submit_sync_cmd() Hannes Reinecke
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Hannes Reinecke @ 2023-02-08 8:49 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke
Hi all,
as Jens and hch complained about the argument list to __nvme_submit_sync_cmd()
getting too long, here is a patchset to clean things up.
The first patch splits __nvme_alloc_rq() off from __nvme_submit_sync_cmd(),
which reduces the number of arguments; it is also a nice cleanup, as the new
function can be used to replace some open-coded versions.
The second patch introduces a 'retry' argument to __nvme_alloc_rq(), to
indicate whether retries for this command should be allowed.
And the third is another cleanup: the 'at_head' argument to __nvme_submit_sync_cmd()
really should be a boolean, as it can take only two values and is being
converted into a boolean within __nvme_submit_sync_cmd() anyway.
As usual, comments and reviews are welcome.
Hannes Reinecke (3):
nvme: split __nvme_submit_sync_cmd()
nvme: retry internal commands if DNR status bit is not set
nvme: make 'at_head' parameter for __nvme_submit_sync_cmd() boolean
drivers/nvme/host/auth.c | 10 ++++--
drivers/nvme/host/core.c | 65 ++++++++++++++++++++++------------
drivers/nvme/host/fabrics.c | 46 +++++++++++++++++-------
drivers/nvme/host/ioctl.c | 2 +-
drivers/nvme/host/nvme.h | 11 +++---
drivers/nvme/host/pci.c | 9 +++--
drivers/nvme/target/passthru.c | 3 +-
7 files changed, 96 insertions(+), 50 deletions(-)
--
2.35.3
^ permalink raw reply [flat|nested] 11+ messages in thread
* [PATCH 1/3] nvme: split __nvme_submit_sync_cmd()
2023-02-08 8:49 [PATCH 0/3] nvme: rework __nvme_submit_sync_cmd() Hannes Reinecke
@ 2023-02-08 8:49 ` Hannes Reinecke
2023-02-08 14:10 ` Kanchan Joshi
2023-02-08 8:49 ` [PATCH 2/3] nvme: retry internal commands if DNR status bit is not set Hannes Reinecke
2023-02-08 8:49 ` [PATCH 3/3] nvme: make 'at_head' parameter for __nvme_submit_sync_cmd() boolean Hannes Reinecke
2 siblings, 1 reply; 11+ messages in thread
From: Hannes Reinecke @ 2023-02-08 8:49 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke
Split a __nvme_alloc_rq() function from __nvme_submit_sync_cmd()
to reduce the number of arguments.
Signed-off-by: Hannes Reinecke <hare@suse.de>
---
drivers/nvme/host/auth.c | 9 ++++--
drivers/nvme/host/core.c | 59 ++++++++++++++++++++++------------
drivers/nvme/host/fabrics.c | 42 ++++++++++++++++++------
drivers/nvme/host/nvme.h | 8 ++---
drivers/nvme/host/pci.c | 8 ++---
drivers/nvme/target/passthru.c | 3 +-
6 files changed, 85 insertions(+), 44 deletions(-)
diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
index 787537454f7f..cbb6f1cb2046 100644
--- a/drivers/nvme/host/auth.c
+++ b/drivers/nvme/host/auth.c
@@ -62,6 +62,7 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
struct nvme_command cmd = {};
blk_mq_req_flags_t flags = nvme_auth_flags_from_qid(qid);
struct request_queue *q = nvme_auth_queue_from_qid(ctrl, qid);
+ struct request *req;
int ret;
cmd.auth_common.opcode = nvme_fabrics_command;
@@ -76,9 +77,11 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
cmd.auth_receive.al = cpu_to_le32(data_len);
}
- ret = __nvme_submit_sync_cmd(q, &cmd, NULL, data, data_len,
- qid == 0 ? NVME_QID_ANY : qid,
- 0, flags);
+ req = __nvme_alloc_rq(q, &cmd, qid == 0 ? NVME_QID_ANY : qid, flags);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(q, req, NULL, data, data_len, 0);
if (ret > 0)
dev_warn(ctrl->device,
"qid %d auth_send failed with status %d\n", qid, ret);
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index d1c9402389f9..4e0f61f27823 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1017,26 +1017,32 @@ int nvme_execute_rq(struct request *rq, bool at_head)
}
EXPORT_SYMBOL_NS_GPL(nvme_execute_rq, NVME_TARGET_PASSTHRU);
-/*
- * Returns 0 on success. If the result is negative, it's a Linux error code;
- * if the result is positive, it's an NVM Express status code
- */
-int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
- union nvme_result *result, void *buffer, unsigned bufflen,
- int qid, int at_head, blk_mq_req_flags_t flags)
+struct request *__nvme_alloc_rq(struct request_queue *q,
+ struct nvme_command *cmd, int qid,
+ blk_mq_req_flags_t flags)
{
struct request *req;
- int ret;
if (qid == NVME_QID_ANY)
req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
else
req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
qid - 1);
+ if (!IS_ERR(req))
+ nvme_init_request(req, cmd);
- if (IS_ERR(req))
- return PTR_ERR(req);
- nvme_init_request(req, cmd);
+ return req;
+}
+EXPORT_SYMBOL_GPL(__nvme_alloc_rq);
+
+/*
+ * Returns 0 on success. If the result is negative, it's a Linux error code;
+ * if the result is positive, it's an NVM Express status code
+ */
+int __nvme_submit_sync_cmd(struct request_queue *q, struct request *req,
+ union nvme_result *result, void *buffer, unsigned bufflen, int at_head)
+{
+ int ret;
if (buffer && bufflen) {
ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
@@ -1056,8 +1062,13 @@ EXPORT_SYMBOL_GPL(__nvme_submit_sync_cmd);
int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
void *buffer, unsigned bufflen)
{
- return __nvme_submit_sync_cmd(q, cmd, NULL, buffer, bufflen,
- NVME_QID_ANY, 0, 0);
+ struct request *req;
+
+ req = __nvme_alloc_rq(q, cmd, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ return __nvme_submit_sync_cmd(q, req, NULL, buffer, bufflen, 0);
}
EXPORT_SYMBOL_GPL(nvme_submit_sync_cmd);
@@ -1199,15 +1210,14 @@ static void nvme_keep_alive_work(struct work_struct *work)
return;
}
- rq = blk_mq_alloc_request(ctrl->admin_q, nvme_req_op(&ctrl->ka_cmd),
- BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+ rq = __nvme_alloc_rq(ctrl->admin_q, &ctrl->ka_cmd, NVME_QID_ANY,
+ BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
if (IS_ERR(rq)) {
/* allocation failure, reset the controller */
dev_err(ctrl->device, "keep-alive failed: %ld\n", PTR_ERR(rq));
nvme_reset_ctrl(ctrl);
return;
}
- nvme_init_request(rq, &ctrl->ka_cmd);
rq->timeout = ctrl->kato * HZ;
rq->end_io = nvme_keep_alive_end_io;
@@ -1475,6 +1485,7 @@ static int nvme_ns_info_from_id_cs_indep(struct nvme_ctrl *ctrl,
static int nvme_features(struct nvme_ctrl *dev, u8 op, unsigned int fid,
unsigned int dword11, void *buffer, size_t buflen, u32 *result)
{
+ struct request *req;
union nvme_result res = { 0 };
struct nvme_command c = { };
int ret;
@@ -1483,8 +1494,12 @@ static int nvme_features(struct nvme_ctrl *dev, u8 op, unsigned int fid,
c.features.fid = cpu_to_le32(fid);
c.features.dword11 = cpu_to_le32(dword11);
- ret = __nvme_submit_sync_cmd(dev->admin_q, &c, &res,
- buffer, buflen, NVME_QID_ANY, 0, 0);
+ req = __nvme_alloc_rq(dev->admin_q, &c, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(dev->admin_q, req, &res,
+ buffer, buflen, 0);
if (ret >= 0 && result)
*result = le32_to_cpu(res.u32);
return ret;
@@ -2206,6 +2221,7 @@ static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t l
{
struct nvme_ctrl *ctrl = data;
struct nvme_command cmd = { };
+ struct request *req;
if (send)
cmd.common.opcode = nvme_admin_security_send;
@@ -2215,8 +2231,11 @@ static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t l
cmd.common.cdw10 = cpu_to_le32(((u32)secp) << 24 | ((u32)spsp) << 8);
cmd.common.cdw11 = cpu_to_le32(len);
- return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len,
- NVME_QID_ANY, 1, 0);
+ req = __nvme_alloc_rq(ctrl->admin_q, &cmd, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ return __nvme_submit_sync_cmd(ctrl->admin_q, req, NULL, buffer, len, 1);
}
static void nvme_configure_opal(struct nvme_ctrl *ctrl, bool was_suspended)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index bbaa04a0c502..f8e623279b75 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -144,6 +144,7 @@ EXPORT_SYMBOL_GPL(nvmf_get_address);
*/
int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
{
+ struct request *req;
struct nvme_command cmd = { };
union nvme_result res;
int ret;
@@ -152,8 +153,11 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
cmd.prop_get.fctype = nvme_fabrics_type_property_get;
cmd.prop_get.offset = cpu_to_le32(off);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0,
- NVME_QID_ANY, 0, 0);
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, &res, NULL, 0, 0);
if (ret >= 0)
*val = le64_to_cpu(res.u64);
@@ -189,6 +193,7 @@ EXPORT_SYMBOL_GPL(nvmf_reg_read32);
*/
int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
{
+ struct request *req;
struct nvme_command cmd = { };
union nvme_result res;
int ret;
@@ -198,8 +203,11 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
cmd.prop_get.attrib = 1;
cmd.prop_get.offset = cpu_to_le32(off);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0,
- NVME_QID_ANY, 0, 0);
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, &res, NULL, 0, 0);
if (ret >= 0)
*val = le64_to_cpu(res.u64);
@@ -234,6 +242,7 @@ EXPORT_SYMBOL_GPL(nvmf_reg_read64);
*/
int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
{
+ struct request *req;
struct nvme_command cmd = { };
int ret;
@@ -243,8 +252,11 @@ int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
cmd.prop_set.offset = cpu_to_le32(off);
cmd.prop_set.value = cpu_to_le64(val);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, NULL, NULL, 0,
- NVME_QID_ANY, 0, 0);
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, NULL, NULL, 0, 0);
if (unlikely(ret))
dev_err(ctrl->device,
"Property Set error: %d, offset %#x\n",
@@ -374,6 +386,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
struct nvme_command cmd = { };
union nvme_result res;
struct nvmf_connect_data *data;
+ struct request *req;
int ret;
u32 result;
@@ -399,9 +412,13 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res,
- data, sizeof(*data), NVME_QID_ANY, 1,
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY,
BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, &res,
+ data, sizeof(*data), 1);
if (ret) {
nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
&cmd, data);
@@ -465,6 +482,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
struct nvme_command cmd = { };
struct nvmf_connect_data *data;
union nvme_result res;
+ struct request *req;
int ret;
u32 result;
@@ -485,9 +503,13 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
- ret = __nvme_submit_sync_cmd(ctrl->connect_q, &cmd, &res,
- data, sizeof(*data), qid, 1,
+ req = __nvme_alloc_rq(ctrl->connect_q, &cmd, qid,
BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(ctrl->connect_q, req, &res,
+ data, sizeof(*data), 1);
if (ret) {
nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
&cmd, data);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bf46f122e9e1..0ae5ef8b217f 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -813,10 +813,10 @@ static inline bool nvme_is_unique_nsid(struct nvme_ctrl *ctrl,
int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
void *buf, unsigned bufflen);
-int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
- union nvme_result *result, void *buffer, unsigned bufflen,
- int qid, int at_head,
- blk_mq_req_flags_t flags);
+struct request *__nvme_alloc_rq(struct request_queue *q,
+ struct nvme_command *cmd, int qid, blk_mq_req_flags_t flags);
+int __nvme_submit_sync_cmd(struct request_queue *q, struct request *req, union nvme_result *result,
+ void *buffer, unsigned bufflen, int at_head);
int nvme_set_features(struct nvme_ctrl *dev, unsigned int fid,
unsigned int dword11, void *buffer, size_t buflen,
u32 *result);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index a331fbfa9a66..8e3bae7d489b 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1376,13 +1376,12 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req)
nvme_get_opcode_str(nvme_req(req)->cmd->common.opcode),
nvmeq->qid);
- abort_req = blk_mq_alloc_request(dev->ctrl.admin_q, nvme_req_op(&cmd),
- BLK_MQ_REQ_NOWAIT);
+ abort_req = __nvme_alloc_rq(dev->ctrl.admin_q, &cmd,
+ NVME_QID_ANY, BLK_MQ_REQ_NOWAIT);
if (IS_ERR(abort_req)) {
atomic_inc(&dev->ctrl.abort_limit);
return BLK_EH_RESET_TIMER;
}
- nvme_init_request(abort_req, &cmd);
abort_req->end_io = abort_endio;
abort_req->end_io_data = NULL;
@@ -2391,10 +2390,9 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
cmd.delete_queue.opcode = opcode;
cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
- req = blk_mq_alloc_request(q, nvme_req_op(&cmd), BLK_MQ_REQ_NOWAIT);
+ req = __nvme_alloc_rq(q, &cmd, NVME_QID_ANY, BLK_MQ_REQ_NOWAIT);
if (IS_ERR(req))
return PTR_ERR(req);
- nvme_init_request(req, &cmd);
if (opcode == nvme_admin_delete_cq)
req->end_io = nvme_del_cq_end;
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 511c980d538d..0dae824a2d05 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -316,12 +316,11 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
timeout = nvmet_req_subsys(req)->admin_timeout;
}
- rq = blk_mq_alloc_request(q, nvme_req_op(req->cmd), 0);
+ rq = __nvme_alloc_rq(q, req->cmd, NVME_QID_ANY, 0);
if (IS_ERR(rq)) {
status = NVME_SC_INTERNAL;
goto out_put_ns;
}
- nvme_init_request(rq, req->cmd);
if (timeout)
rq->timeout = timeout;
--
2.35.3
* [PATCH 2/3] nvme: retry internal commands if DNR status bit is not set
2023-02-08 8:49 [PATCH 0/3] nvme: rework __nvme_submit_sync_cmd() Hannes Reinecke
2023-02-08 8:49 ` [PATCH 1/3] nvme: split __nvme_submit_sync_cmd() Hannes Reinecke
@ 2023-02-08 8:49 ` Hannes Reinecke
2023-02-08 14:02 ` Kanchan Joshi
2023-02-08 8:49 ` [PATCH 3/3] nvme: make 'at_head' parameter for __nvme_submit_sync_cmd() boolean Hannes Reinecke
2 siblings, 1 reply; 11+ messages in thread
From: Hannes Reinecke @ 2023-02-08 8:49 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke
Add a 'retry' argument to __nvme_alloc_rq() to instruct
the function to not set the FAILFAST_DRIVER bit for the command,
causing it to be retried in nvme_decide_disposition() if the DNR
bit is not set in the command result.
And modify the authentication code to allow for retries.
Signed-off-by: Hannes Reinecke <hare@suse.de>
---
drivers/nvme/host/auth.c | 3 ++-
drivers/nvme/host/core.c | 18 ++++++++++--------
drivers/nvme/host/fabrics.c | 10 +++++-----
drivers/nvme/host/ioctl.c | 2 +-
drivers/nvme/host/nvme.h | 5 +++--
drivers/nvme/host/pci.c | 5 +++--
drivers/nvme/target/passthru.c | 2 +-
7 files changed, 25 insertions(+), 20 deletions(-)
diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
index cbb6f1cb2046..50cd45e92998 100644
--- a/drivers/nvme/host/auth.c
+++ b/drivers/nvme/host/auth.c
@@ -77,7 +77,8 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
cmd.auth_receive.al = cpu_to_le32(data_len);
}
- req = __nvme_alloc_rq(q, &cmd, qid == 0 ? NVME_QID_ANY : qid, flags);
+ req = __nvme_alloc_rq(q, &cmd, qid == 0 ? NVME_QID_ANY : qid,
+ flags, true);
if (IS_ERR(req))
return PTR_ERR(req);
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 4e0f61f27823..e7c954eae9c7 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -663,7 +663,8 @@ static inline void nvme_clear_nvme_request(struct request *req)
}
/* initialize a passthrough request */
-void nvme_init_request(struct request *req, struct nvme_command *cmd)
+void nvme_init_request(struct request *req, struct nvme_command *cmd,
+ bool retry)
{
if (req->q->queuedata)
req->timeout = NVME_IO_TIMEOUT;
@@ -673,7 +674,8 @@ void nvme_init_request(struct request *req, struct nvme_command *cmd)
/* passthru commands should let the driver set the SGL flags */
cmd->common.flags &= ~NVME_CMD_SGL_ALL;
- req->cmd_flags |= REQ_FAILFAST_DRIVER;
+ if (!retry)
+ req->cmd_flags |= REQ_FAILFAST_DRIVER;
if (req->mq_hctx->type == HCTX_TYPE_POLL)
req->cmd_flags |= REQ_POLLED;
nvme_clear_nvme_request(req);
@@ -1019,7 +1021,7 @@ EXPORT_SYMBOL_NS_GPL(nvme_execute_rq, NVME_TARGET_PASSTHRU);
struct request *__nvme_alloc_rq(struct request_queue *q,
struct nvme_command *cmd, int qid,
- blk_mq_req_flags_t flags)
+ blk_mq_req_flags_t flags, bool retry)
{
struct request *req;
@@ -1029,7 +1031,7 @@ struct request *__nvme_alloc_rq(struct request_queue *q,
req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
qid - 1);
if (!IS_ERR(req))
- nvme_init_request(req, cmd);
+ nvme_init_request(req, cmd, retry);
return req;
}
@@ -1064,7 +1066,7 @@ int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
{
struct request *req;
- req = __nvme_alloc_rq(q, cmd, NVME_QID_ANY, 0);
+ req = __nvme_alloc_rq(q, cmd, NVME_QID_ANY, 0, false);
if (IS_ERR(req))
return PTR_ERR(req);
@@ -1211,7 +1213,7 @@ static void nvme_keep_alive_work(struct work_struct *work)
}
rq = __nvme_alloc_rq(ctrl->admin_q, &ctrl->ka_cmd, NVME_QID_ANY,
- BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+ BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT, false);
if (IS_ERR(rq)) {
/* allocation failure, reset the controller */
dev_err(ctrl->device, "keep-alive failed: %ld\n", PTR_ERR(rq));
@@ -1494,7 +1496,7 @@ static int nvme_features(struct nvme_ctrl *dev, u8 op, unsigned int fid,
c.features.fid = cpu_to_le32(fid);
c.features.dword11 = cpu_to_le32(dword11);
- req = __nvme_alloc_rq(dev->admin_q, &c, NVME_QID_ANY, 0);
+ req = __nvme_alloc_rq(dev->admin_q, &c, NVME_QID_ANY, 0, false);
if (IS_ERR(req))
return PTR_ERR(req);
@@ -2231,7 +2233,7 @@ static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t l
cmd.common.cdw10 = cpu_to_le32(((u32)secp) << 24 | ((u32)spsp) << 8);
cmd.common.cdw11 = cpu_to_le32(len);
- req = __nvme_alloc_rq(ctrl->admin_q, &cmd, NVME_QID_ANY, 0);
+ req = __nvme_alloc_rq(ctrl->admin_q, &cmd, NVME_QID_ANY, 0, false);
if (IS_ERR(req))
return PTR_ERR(req);
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index f8e623279b75..d999364af43d 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -153,7 +153,7 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
cmd.prop_get.fctype = nvme_fabrics_type_property_get;
cmd.prop_get.offset = cpu_to_le32(off);
- req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0);
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0, false);
if (IS_ERR(req))
return PTR_ERR(req);
@@ -203,7 +203,7 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
cmd.prop_get.attrib = 1;
cmd.prop_get.offset = cpu_to_le32(off);
- req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0);
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0, false);
if (IS_ERR(req))
return PTR_ERR(req);
@@ -252,7 +252,7 @@ int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
cmd.prop_set.offset = cpu_to_le32(off);
cmd.prop_set.value = cpu_to_le64(val);
- req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0);
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0, false);
if (IS_ERR(req))
return PTR_ERR(req);
@@ -413,7 +413,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY,
- BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+ BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT, false);
if (IS_ERR(req))
return PTR_ERR(req);
@@ -504,7 +504,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
req = __nvme_alloc_rq(ctrl->connect_q, &cmd, qid,
- BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+ BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT, false);
if (IS_ERR(req))
return PTR_ERR(req);
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index 723e7d5b778f..11f03a0696c2 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -154,7 +154,7 @@ static struct request *nvme_alloc_user_request(struct request_queue *q,
req = blk_mq_alloc_request(q, nvme_req_op(cmd) | rq_flags, blk_flags);
if (IS_ERR(req))
return req;
- nvme_init_request(req, cmd);
+ nvme_init_request(req, cmd, false);
nvme_req(req)->flags |= NVME_REQ_USERCMD;
return req;
}
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 0ae5ef8b217f..624f1879ee20 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -774,7 +774,8 @@ static inline enum req_op nvme_req_op(struct nvme_command *cmd)
}
#define NVME_QID_ANY -1
-void nvme_init_request(struct request *req, struct nvme_command *cmd);
+void nvme_init_request(struct request *req, struct nvme_command *cmd,
+ bool retry);
void nvme_cleanup_cmd(struct request *req);
blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req);
blk_status_t nvme_fail_nonready_command(struct nvme_ctrl *ctrl,
@@ -814,7 +815,7 @@ static inline bool nvme_is_unique_nsid(struct nvme_ctrl *ctrl,
int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
void *buf, unsigned bufflen);
struct request *__nvme_alloc_rq(struct request_queue *q,
- struct nvme_command *cmd, int qid, blk_mq_req_flags_t flags);
+ struct nvme_command *cmd, int qid, blk_mq_req_flags_t flags, bool retry);
int __nvme_submit_sync_cmd(struct request_queue *q, struct request *req, union nvme_result *result,
void *buffer, unsigned bufflen, int at_head);
int nvme_set_features(struct nvme_ctrl *dev, unsigned int fid,
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 8e3bae7d489b..f8ff52ad472e 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1377,7 +1377,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req)
nvmeq->qid);
abort_req = __nvme_alloc_rq(dev->ctrl.admin_q, &cmd,
- NVME_QID_ANY, BLK_MQ_REQ_NOWAIT);
+ NVME_QID_ANY, BLK_MQ_REQ_NOWAIT, false);
if (IS_ERR(abort_req)) {
atomic_inc(&dev->ctrl.abort_limit);
return BLK_EH_RESET_TIMER;
@@ -2390,7 +2390,8 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
cmd.delete_queue.opcode = opcode;
cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
- req = __nvme_alloc_rq(q, &cmd, NVME_QID_ANY, BLK_MQ_REQ_NOWAIT);
+ req = __nvme_alloc_rq(q, &cmd, NVME_QID_ANY,
+ BLK_MQ_REQ_NOWAIT, false);
if (IS_ERR(req))
return PTR_ERR(req);
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 0dae824a2d05..c00ab5ac54db 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -316,7 +316,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
timeout = nvmet_req_subsys(req)->admin_timeout;
}
- rq = __nvme_alloc_rq(q, req->cmd, NVME_QID_ANY, 0);
+ rq = __nvme_alloc_rq(q, req->cmd, NVME_QID_ANY, 0, false);
if (IS_ERR(rq)) {
status = NVME_SC_INTERNAL;
goto out_put_ns;
--
2.35.3
* [PATCH 3/3] nvme: make 'at_head' parameter for __nvme_submit_sync_cmd() boolean
2023-02-08 8:49 [PATCH 0/3] nvme: rework __nvme_submit_sync_cmd() Hannes Reinecke
2023-02-08 8:49 ` [PATCH 1/3] nvme: split __nvme_submit_sync_cmd() Hannes Reinecke
2023-02-08 8:49 ` [PATCH 2/3] nvme: retry internal commands if DNR status bit is not set Hannes Reinecke
@ 2023-02-08 8:49 ` Hannes Reinecke
2 siblings, 0 replies; 11+ messages in thread
From: Hannes Reinecke @ 2023-02-08 8:49 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke
Use a boolean for the 'at_head' parameter in __nvme_submit_sync_cmd()
to improve readability and consistency; it's being converted into a
boolean anyway.
Signed-off-by: Hannes Reinecke <hare@suse.de>
---
drivers/nvme/host/auth.c | 2 +-
drivers/nvme/host/core.c | 8 ++++----
drivers/nvme/host/fabrics.c | 10 +++++-----
drivers/nvme/host/nvme.h | 2 +-
4 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
index 50cd45e92998..4f287806571a 100644
--- a/drivers/nvme/host/auth.c
+++ b/drivers/nvme/host/auth.c
@@ -82,7 +82,7 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
if (IS_ERR(req))
return PTR_ERR(req);
- ret = __nvme_submit_sync_cmd(q, req, NULL, data, data_len, 0);
+ ret = __nvme_submit_sync_cmd(q, req, NULL, data, data_len, false);
if (ret > 0)
dev_warn(ctrl->device,
"qid %d auth_send failed with status %d\n", qid, ret);
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e7c954eae9c7..b8020c3ff13e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1042,7 +1042,7 @@ EXPORT_SYMBOL_GPL(__nvme_alloc_rq);
* if the result is positive, it's an NVM Express status code
*/
int __nvme_submit_sync_cmd(struct request_queue *q, struct request *req,
- union nvme_result *result, void *buffer, unsigned bufflen, int at_head)
+ union nvme_result *result, void *buffer, unsigned bufflen, bool at_head)
{
int ret;
@@ -1070,7 +1070,7 @@ int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
if (IS_ERR(req))
return PTR_ERR(req);
- return __nvme_submit_sync_cmd(q, req, NULL, buffer, bufflen, 0);
+ return __nvme_submit_sync_cmd(q, req, NULL, buffer, bufflen, false);
}
EXPORT_SYMBOL_GPL(nvme_submit_sync_cmd);
@@ -1501,7 +1501,7 @@ static int nvme_features(struct nvme_ctrl *dev, u8 op, unsigned int fid,
return PTR_ERR(req);
ret = __nvme_submit_sync_cmd(dev->admin_q, req, &res,
- buffer, buflen, 0);
+ buffer, buflen, false);
if (ret >= 0 && result)
*result = le32_to_cpu(res.u32);
return ret;
@@ -2237,7 +2237,7 @@ static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t l
if (IS_ERR(req))
return PTR_ERR(req);
- return __nvme_submit_sync_cmd(ctrl->admin_q, req, NULL, buffer, len, 1);
+ return __nvme_submit_sync_cmd(ctrl->admin_q, req, NULL, buffer, len, true);
}
static void nvme_configure_opal(struct nvme_ctrl *ctrl, bool was_suspended)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index d999364af43d..ce56ff85d23c 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -157,7 +157,7 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
if (IS_ERR(req))
return PTR_ERR(req);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, &res, NULL, 0, 0);
+ ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, &res, NULL, 0, false);
if (ret >= 0)
*val = le64_to_cpu(res.u64);
@@ -207,7 +207,7 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
if (IS_ERR(req))
return PTR_ERR(req);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, &res, NULL, 0, 0);
+ ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, &res, NULL, 0, false);
if (ret >= 0)
*val = le64_to_cpu(res.u64);
@@ -256,7 +256,7 @@ int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
if (IS_ERR(req))
return PTR_ERR(req);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, NULL, NULL, 0, 0);
+ ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, NULL, NULL, 0, false);
if (unlikely(ret))
dev_err(ctrl->device,
"Property Set error: %d, offset %#x\n",
@@ -418,7 +418,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
return PTR_ERR(req);
ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, req, &res,
- data, sizeof(*data), 1);
+ data, sizeof(*data), true);
if (ret) {
nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
&cmd, data);
@@ -509,7 +509,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
return PTR_ERR(req);
ret = __nvme_submit_sync_cmd(ctrl->connect_q, req, &res,
- data, sizeof(*data), 1);
+ data, sizeof(*data), true);
if (ret) {
nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
&cmd, data);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 624f1879ee20..f506d0db3c06 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -817,7 +817,7 @@ int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
struct request *__nvme_alloc_rq(struct request_queue *q,
struct nvme_command *cmd, int qid, blk_mq_req_flags_t flags, bool retry);
int __nvme_submit_sync_cmd(struct request_queue *q, struct request *req, union nvme_result *result,
- void *buffer, unsigned bufflen, int at_head);
+ void *buffer, unsigned bufflen, bool at_head);
int nvme_set_features(struct nvme_ctrl *dev, unsigned int fid,
unsigned int dword11, void *buffer, size_t buflen,
u32 *result);
--
2.35.3
* Re: [PATCH 2/3] nvme: retry internal commands if DNR status bit is not set
2023-02-08 8:49 ` [PATCH 2/3] nvme: retry internal commands if DNR status bit is not set Hannes Reinecke
@ 2023-02-08 14:02 ` Kanchan Joshi
2023-02-08 14:58 ` Hannes Reinecke
0 siblings, 1 reply; 11+ messages in thread
From: Kanchan Joshi @ 2023-02-08 14:02 UTC (permalink / raw)
To: Hannes Reinecke; +Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, linux-nvme
On Wed, Feb 08, 2023 at 09:49:38AM +0100, Hannes Reinecke wrote:
>Add a 'retry' argument to __nvme_alloc_rq() to instruct
>the function to not set the FAILFAST_DRIVER bit for the command,
>causing it to be retried in nvme_decide_disposition() if the DNR
>bit is not set in the command result.
>And modify the authentication code to allow for retries.
This new argument is passed as true in only one place.
Would it be better to handle that case in the caller itself,
by clearing the REQ_FAILFAST_DRIVER bit from req->cmd_flags in the auth code?
* Re: [PATCH 1/3] nvme: split __nvme_submit_sync_cmd()
2023-02-08 8:49 ` [PATCH 1/3] nvme: split __nvme_submit_sync_cmd() Hannes Reinecke
@ 2023-02-08 14:10 ` Kanchan Joshi
2023-02-08 15:03 ` Hannes Reinecke
0 siblings, 1 reply; 11+ messages in thread
From: Kanchan Joshi @ 2023-02-08 14:10 UTC (permalink / raw)
To: Hannes Reinecke; +Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, linux-nvme
On Wed, Feb 08, 2023 at 09:49:37AM +0100, Hannes Reinecke wrote:
>Split a __nvme_alloc_rq() function from __nvme_submit_sync_cmd()
>to reduce the number of arguments.
>
>Signed-off-by: Hannes Reinecke <hare@suse.de>
>---
> drivers/nvme/host/auth.c | 9 ++++--
> drivers/nvme/host/core.c | 59 ++++++++++++++++++++++------------
> drivers/nvme/host/fabrics.c | 42 ++++++++++++++++++------
> drivers/nvme/host/nvme.h | 8 ++---
> drivers/nvme/host/pci.c | 8 ++---
> drivers/nvme/target/passthru.c | 3 +-
> 6 files changed, 85 insertions(+), 44 deletions(-)
>
>diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
>index 787537454f7f..cbb6f1cb2046 100644
>--- a/drivers/nvme/host/auth.c
>+++ b/drivers/nvme/host/auth.c
>@@ -62,6 +62,7 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
> struct nvme_command cmd = {};
> blk_mq_req_flags_t flags = nvme_auth_flags_from_qid(qid);
> struct request_queue *q = nvme_auth_queue_from_qid(ctrl, qid);
>+ struct request *req;
> int ret;
>
> cmd.auth_common.opcode = nvme_fabrics_command;
>@@ -76,9 +77,11 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
> cmd.auth_receive.al = cpu_to_le32(data_len);
> }
>
>- ret = __nvme_submit_sync_cmd(q, &cmd, NULL, data, data_len,
>- qid == 0 ? NVME_QID_ANY : qid,
>- 0, flags);
>+ req = __nvme_alloc_rq(q, &cmd, qid == 0 ? NVME_QID_ANY : qid, flags);
>+ if (IS_ERR(req))
>+ return PTR_ERR(req);
>+
>+ ret = __nvme_submit_sync_cmd(q, req, NULL, data, data_len, 0);
The first argument 'q' of the call above can also be dropped.
* Re: [PATCH 2/3] nvme: retry internal commands if DNR status bit is not set
2023-02-08 14:02 ` Kanchan Joshi
@ 2023-02-08 14:58 ` Hannes Reinecke
0 siblings, 0 replies; 11+ messages in thread
From: Hannes Reinecke @ 2023-02-08 14:58 UTC (permalink / raw)
To: Kanchan Joshi; +Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, linux-nvme
On 2/8/23 15:02, Kanchan Joshi wrote:
> On Wed, Feb 08, 2023 at 09:49:38AM +0100, Hannes Reinecke wrote:
>> Add a 'retry' argument to __nvme_alloc_rq() to instruct
>> the function to not set the FAILFAST_DRIVER bit for the command,
>> causing it to be retried in nvme_decide_disposition() if the DNR
>> bit is not set in the command result.
>> And modify the authentication code to allow for retries.
>
>
> This new argument is set to true in only one place.
> Would it be better to handle that case at the call site itself,
> by clearing the REQ_FAILFAST_DRIVER bit from req->cmd_flags in the auth code?
>
Good point. Will be resending.
Cheers,
Hannes
* Re: [PATCH 1/3] nvme: split __nvme_submit_sync_cmd()
2023-02-08 14:10 ` Kanchan Joshi
@ 2023-02-08 15:03 ` Hannes Reinecke
0 siblings, 0 replies; 11+ messages in thread
From: Hannes Reinecke @ 2023-02-08 15:03 UTC (permalink / raw)
To: Kanchan Joshi; +Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, linux-nvme
On 2/8/23 15:10, Kanchan Joshi wrote:
> On Wed, Feb 08, 2023 at 09:49:37AM +0100, Hannes Reinecke wrote:
>> Split a __nvme_alloc_rq() function from __nvme_submit_sync_cmd()
>> to reduce the number of arguments.
>>
>> Signed-off-by: Hannes Reinecke <hare@suse.de>
>> ---
>> drivers/nvme/host/auth.c | 9 ++++--
>> drivers/nvme/host/core.c | 59 ++++++++++++++++++++++------------
>> drivers/nvme/host/fabrics.c | 42 ++++++++++++++++++------
>> drivers/nvme/host/nvme.h | 8 ++---
>> drivers/nvme/host/pci.c | 8 ++---
>> drivers/nvme/target/passthru.c | 3 +-
>> 6 files changed, 85 insertions(+), 44 deletions(-)
>>
>> diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
>> index 787537454f7f..cbb6f1cb2046 100644
>> --- a/drivers/nvme/host/auth.c
>> +++ b/drivers/nvme/host/auth.c
>> @@ -62,6 +62,7 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl,
>> int qid,
>> struct nvme_command cmd = {};
>> blk_mq_req_flags_t flags = nvme_auth_flags_from_qid(qid);
>> struct request_queue *q = nvme_auth_queue_from_qid(ctrl, qid);
>> + struct request *req;
>> int ret;
>>
>> cmd.auth_common.opcode = nvme_fabrics_command;
>> @@ -76,9 +77,11 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl,
>> int qid,
>> cmd.auth_receive.al = cpu_to_le32(data_len);
>> }
>>
>> - ret = __nvme_submit_sync_cmd(q, &cmd, NULL, data, data_len,
>> - qid == 0 ? NVME_QID_ANY : qid,
>> - 0, flags);
>> + req = __nvme_alloc_rq(q, &cmd, qid == 0 ? NVME_QID_ANY : qid,
>> flags);
>> + if (IS_ERR(req))
>> + return PTR_ERR(req);
>> +
>> + ret = __nvme_submit_sync_cmd(q, req, NULL, data, data_len, 0);
>
> The first argument 'q' of the call above can also be dropped.
>
Indeed.
Cheers,
Hannes
* [PATCH 1/3] nvme: split __nvme_submit_sync_cmd()
2023-02-08 15:17 [PATCHv2 0/3] nvme: rework __nvme_submit_sync_cmd() Hannes Reinecke
@ 2023-02-08 15:17 ` Hannes Reinecke
2023-02-09 5:33 ` Christoph Hellwig
0 siblings, 1 reply; 11+ messages in thread
From: Hannes Reinecke @ 2023-02-08 15:17 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme, Hannes Reinecke
Split a __nvme_alloc_rq() function from __nvme_submit_sync_cmd()
to reduce the number of arguments.
Signed-off-by: Hannes Reinecke <hare@suse.de>
---
drivers/nvme/host/auth.c | 9 +++--
drivers/nvme/host/core.c | 60 ++++++++++++++++++++++------------
drivers/nvme/host/fabrics.c | 40 +++++++++++++++++------
drivers/nvme/host/nvme.h | 8 ++---
drivers/nvme/host/pci.c | 8 ++---
drivers/nvme/target/passthru.c | 3 +-
6 files changed, 83 insertions(+), 45 deletions(-)
diff --git a/drivers/nvme/host/auth.c b/drivers/nvme/host/auth.c
index 787537454f7f..dcea6873ca2c 100644
--- a/drivers/nvme/host/auth.c
+++ b/drivers/nvme/host/auth.c
@@ -62,6 +62,7 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
struct nvme_command cmd = {};
blk_mq_req_flags_t flags = nvme_auth_flags_from_qid(qid);
struct request_queue *q = nvme_auth_queue_from_qid(ctrl, qid);
+ struct request *req;
int ret;
cmd.auth_common.opcode = nvme_fabrics_command;
@@ -76,9 +77,11 @@ static int nvme_auth_submit(struct nvme_ctrl *ctrl, int qid,
cmd.auth_receive.al = cpu_to_le32(data_len);
}
- ret = __nvme_submit_sync_cmd(q, &cmd, NULL, data, data_len,
- qid == 0 ? NVME_QID_ANY : qid,
- 0, flags);
+ req = __nvme_alloc_rq(q, &cmd, qid == 0 ? NVME_QID_ANY : qid, flags);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(req, NULL, data, data_len, 0);
if (ret > 0)
dev_warn(ctrl->device,
"qid %d auth_send failed with status %d\n", qid, ret);
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index d1c9402389f9..278f7aa674c6 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1017,29 +1017,35 @@ int nvme_execute_rq(struct request *rq, bool at_head)
}
EXPORT_SYMBOL_NS_GPL(nvme_execute_rq, NVME_TARGET_PASSTHRU);
-/*
- * Returns 0 on success. If the result is negative, it's a Linux error code;
- * if the result is positive, it's an NVM Express status code
- */
-int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
- union nvme_result *result, void *buffer, unsigned bufflen,
- int qid, int at_head, blk_mq_req_flags_t flags)
+struct request *__nvme_alloc_rq(struct request_queue *q,
+ struct nvme_command *cmd, int qid,
+ blk_mq_req_flags_t flags)
{
struct request *req;
- int ret;
if (qid == NVME_QID_ANY)
req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
else
req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
qid - 1);
+ if (!IS_ERR(req))
+ nvme_init_request(req, cmd);
- if (IS_ERR(req))
- return PTR_ERR(req);
- nvme_init_request(req, cmd);
+ return req;
+}
+EXPORT_SYMBOL_GPL(__nvme_alloc_rq);
+
+/*
+ * Returns 0 on success. If the result is negative, it's a Linux error code;
+ * if the result is positive, it's an NVM Express status code
+ */
+int __nvme_submit_sync_cmd(struct request *req, union nvme_result *result,
+ void *buffer, unsigned bufflen, int at_head)
+{
+ int ret;
if (buffer && bufflen) {
- ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
+ ret = blk_rq_map_kern(req->q, req, buffer, bufflen, GFP_KERNEL);
if (ret)
goto out;
}
@@ -1056,8 +1062,13 @@ EXPORT_SYMBOL_GPL(__nvme_submit_sync_cmd);
int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
void *buffer, unsigned bufflen)
{
- return __nvme_submit_sync_cmd(q, cmd, NULL, buffer, bufflen,
- NVME_QID_ANY, 0, 0);
+ struct request *req;
+
+ req = __nvme_alloc_rq(q, cmd, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ return __nvme_submit_sync_cmd(req, NULL, buffer, bufflen, 0);
}
EXPORT_SYMBOL_GPL(nvme_submit_sync_cmd);
@@ -1199,15 +1210,14 @@ static void nvme_keep_alive_work(struct work_struct *work)
return;
}
- rq = blk_mq_alloc_request(ctrl->admin_q, nvme_req_op(&ctrl->ka_cmd),
- BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+ rq = __nvme_alloc_rq(ctrl->admin_q, &ctrl->ka_cmd, NVME_QID_ANY,
+ BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
if (IS_ERR(rq)) {
/* allocation failure, reset the controller */
dev_err(ctrl->device, "keep-alive failed: %ld\n", PTR_ERR(rq));
nvme_reset_ctrl(ctrl);
return;
}
- nvme_init_request(rq, &ctrl->ka_cmd);
rq->timeout = ctrl->kato * HZ;
rq->end_io = nvme_keep_alive_end_io;
@@ -1475,6 +1485,7 @@ static int nvme_ns_info_from_id_cs_indep(struct nvme_ctrl *ctrl,
static int nvme_features(struct nvme_ctrl *dev, u8 op, unsigned int fid,
unsigned int dword11, void *buffer, size_t buflen, u32 *result)
{
+ struct request *req;
union nvme_result res = { 0 };
struct nvme_command c = { };
int ret;
@@ -1483,8 +1494,11 @@ static int nvme_features(struct nvme_ctrl *dev, u8 op, unsigned int fid,
c.features.fid = cpu_to_le32(fid);
c.features.dword11 = cpu_to_le32(dword11);
- ret = __nvme_submit_sync_cmd(dev->admin_q, &c, &res,
- buffer, buflen, NVME_QID_ANY, 0, 0);
+ req = __nvme_alloc_rq(dev->admin_q, &c, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(req, &res, buffer, buflen, 0);
if (ret >= 0 && result)
*result = le32_to_cpu(res.u32);
return ret;
@@ -2206,6 +2220,7 @@ static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t l
{
struct nvme_ctrl *ctrl = data;
struct nvme_command cmd = { };
+ struct request *req;
if (send)
cmd.common.opcode = nvme_admin_security_send;
@@ -2215,8 +2230,11 @@ static int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t l
cmd.common.cdw10 = cpu_to_le32(((u32)secp) << 24 | ((u32)spsp) << 8);
cmd.common.cdw11 = cpu_to_le32(len);
- return __nvme_submit_sync_cmd(ctrl->admin_q, &cmd, NULL, buffer, len,
- NVME_QID_ANY, 1, 0);
+ req = __nvme_alloc_rq(ctrl->admin_q, &cmd, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ return __nvme_submit_sync_cmd(req, NULL, buffer, len, 1);
}
static void nvme_configure_opal(struct nvme_ctrl *ctrl, bool was_suspended)
diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index bbaa04a0c502..7d2fd4b83bbc 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -144,6 +144,7 @@ EXPORT_SYMBOL_GPL(nvmf_get_address);
*/
int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
{
+ struct request *req;
struct nvme_command cmd = { };
union nvme_result res;
int ret;
@@ -152,8 +153,11 @@ int nvmf_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
cmd.prop_get.fctype = nvme_fabrics_type_property_get;
cmd.prop_get.offset = cpu_to_le32(off);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0,
- NVME_QID_ANY, 0, 0);
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(req, &res, NULL, 0, 0);
if (ret >= 0)
*val = le64_to_cpu(res.u64);
@@ -189,6 +193,7 @@ EXPORT_SYMBOL_GPL(nvmf_reg_read32);
*/
int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
{
+ struct request *req;
struct nvme_command cmd = { };
union nvme_result res;
int ret;
@@ -198,8 +203,11 @@ int nvmf_reg_read64(struct nvme_ctrl *ctrl, u32 off, u64 *val)
cmd.prop_get.attrib = 1;
cmd.prop_get.offset = cpu_to_le32(off);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res, NULL, 0,
- NVME_QID_ANY, 0, 0);
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(req, &res, NULL, 0, 0);
if (ret >= 0)
*val = le64_to_cpu(res.u64);
@@ -234,6 +242,7 @@ EXPORT_SYMBOL_GPL(nvmf_reg_read64);
*/
int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
{
+ struct request *req;
struct nvme_command cmd = { };
int ret;
@@ -243,8 +252,11 @@ int nvmf_reg_write32(struct nvme_ctrl *ctrl, u32 off, u32 val)
cmd.prop_set.offset = cpu_to_le32(off);
cmd.prop_set.value = cpu_to_le64(val);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, NULL, NULL, 0,
- NVME_QID_ANY, 0, 0);
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY, 0);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(req, NULL, NULL, 0, 0);
if (unlikely(ret))
dev_err(ctrl->device,
"Property Set error: %d, offset %#x\n",
@@ -374,6 +386,7 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
struct nvme_command cmd = { };
union nvme_result res;
struct nvmf_connect_data *data;
+ struct request *req;
int ret;
u32 result;
@@ -399,9 +412,12 @@ int nvmf_connect_admin_queue(struct nvme_ctrl *ctrl)
strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
- ret = __nvme_submit_sync_cmd(ctrl->fabrics_q, &cmd, &res,
- data, sizeof(*data), NVME_QID_ANY, 1,
+ req = __nvme_alloc_rq(ctrl->fabrics_q, &cmd, NVME_QID_ANY,
BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(req, &res, data, sizeof(*data), 1);
if (ret) {
nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
&cmd, data);
@@ -465,6 +481,7 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
struct nvme_command cmd = { };
struct nvmf_connect_data *data;
union nvme_result res;
+ struct request *req;
int ret;
u32 result;
@@ -485,9 +502,12 @@ int nvmf_connect_io_queue(struct nvme_ctrl *ctrl, u16 qid)
strncpy(data->subsysnqn, ctrl->opts->subsysnqn, NVMF_NQN_SIZE);
strncpy(data->hostnqn, ctrl->opts->host->nqn, NVMF_NQN_SIZE);
- ret = __nvme_submit_sync_cmd(ctrl->connect_q, &cmd, &res,
- data, sizeof(*data), qid, 1,
+ req = __nvme_alloc_rq(ctrl->connect_q, &cmd, qid,
BLK_MQ_REQ_RESERVED | BLK_MQ_REQ_NOWAIT);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = __nvme_submit_sync_cmd(req, &res, data, sizeof(*data), 1);
if (ret) {
nvmf_log_connect_error(ctrl, ret, le32_to_cpu(res.u32),
&cmd, data);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index bf46f122e9e1..d10f930d1edb 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -813,10 +813,10 @@ static inline bool nvme_is_unique_nsid(struct nvme_ctrl *ctrl,
int nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
void *buf, unsigned bufflen);
-int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd,
- union nvme_result *result, void *buffer, unsigned bufflen,
- int qid, int at_head,
- blk_mq_req_flags_t flags);
+struct request *__nvme_alloc_rq(struct request_queue *q,
+ struct nvme_command *cmd, int qid, blk_mq_req_flags_t flags);
+int __nvme_submit_sync_cmd(struct request *req, union nvme_result *result,
+ void *buffer, unsigned bufflen, int at_head);
int nvme_set_features(struct nvme_ctrl *dev, unsigned int fid,
unsigned int dword11, void *buffer, size_t buflen,
u32 *result);
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index a331fbfa9a66..8e3bae7d489b 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -1376,13 +1376,12 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req)
nvme_get_opcode_str(nvme_req(req)->cmd->common.opcode),
nvmeq->qid);
- abort_req = blk_mq_alloc_request(dev->ctrl.admin_q, nvme_req_op(&cmd),
- BLK_MQ_REQ_NOWAIT);
+ abort_req = __nvme_alloc_rq(dev->ctrl.admin_q, &cmd,
+ NVME_QID_ANY, BLK_MQ_REQ_NOWAIT);
if (IS_ERR(abort_req)) {
atomic_inc(&dev->ctrl.abort_limit);
return BLK_EH_RESET_TIMER;
}
- nvme_init_request(abort_req, &cmd);
abort_req->end_io = abort_endio;
abort_req->end_io_data = NULL;
@@ -2391,10 +2390,9 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode)
cmd.delete_queue.opcode = opcode;
cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid);
- req = blk_mq_alloc_request(q, nvme_req_op(&cmd), BLK_MQ_REQ_NOWAIT);
+ req = __nvme_alloc_rq(q, &cmd, NVME_QID_ANY, BLK_MQ_REQ_NOWAIT);
if (IS_ERR(req))
return PTR_ERR(req);
- nvme_init_request(req, &cmd);
if (opcode == nvme_admin_delete_cq)
req->end_io = nvme_del_cq_end;
diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c
index 511c980d538d..0dae824a2d05 100644
--- a/drivers/nvme/target/passthru.c
+++ b/drivers/nvme/target/passthru.c
@@ -316,12 +316,11 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req)
timeout = nvmet_req_subsys(req)->admin_timeout;
}
- rq = blk_mq_alloc_request(q, nvme_req_op(req->cmd), 0);
+ rq = __nvme_alloc_rq(q, req->cmd, NVME_QID_ANY, 0);
if (IS_ERR(rq)) {
status = NVME_SC_INTERNAL;
goto out_put_ns;
}
- nvme_init_request(rq, req->cmd);
if (timeout)
rq->timeout = timeout;
--
2.35.3
* Re: [PATCH 1/3] nvme: split __nvme_submit_sync_cmd()
2023-02-08 15:17 ` [PATCH 1/3] nvme: split __nvme_submit_sync_cmd() Hannes Reinecke
@ 2023-02-09 5:33 ` Christoph Hellwig
2023-02-09 7:58 ` Hannes Reinecke
0 siblings, 1 reply; 11+ messages in thread
From: Christoph Hellwig @ 2023-02-09 5:33 UTC (permalink / raw)
To: Hannes Reinecke; +Cc: Christoph Hellwig, Sagi Grimberg, Keith Busch, linux-nvme
On Wed, Feb 08, 2023 at 04:17:18PM +0100, Hannes Reinecke wrote:
> Split a __nvme_alloc_rq() function from __nvme_submit_sync_cmd()
> to reduce the number of arguments.
But now everyone has to call both?
> 6 files changed, 83 insertions(+), 45 deletions(-)
.. and the code is a lot longer. So this doesn't really seem like
much of a win?
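The objection can be made concrete with a userspace sketch: the split turns one call into a mandatory alloc-then-submit pair, with every caller repeating the IS_ERR()/PTR_ERR() check. The helpers below are simplified stand-ins for the kernel's error-pointer machinery, not the real <linux/err.h> definitions:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-ins for the kernel's <linux/err.h> helpers. */
#define MAX_ERRNO 4095
#define ERR_PTR(e) ((void *)(intptr_t)(e))
#define PTR_ERR(p) ((long)(intptr_t)(p))
#define IS_ERR(p)  ((uintptr_t)(p) >= (uintptr_t)-MAX_ERRNO)

struct request { int dummy; };

/* Step 1: allocation, returning an error pointer on failure. */
static struct request *alloc_rq(int fail)
{
	if (fail)
		return ERR_PTR(-ENOMEM);
	return malloc(sizeof(struct request));
}

/* Step 2: submission consumes the request; 0 means success. */
static int submit_sync_cmd(struct request *req)
{
	free(req);
	return 0;
}

/* After the split, every call site repeats this two-step pattern. */
static int caller(int fail_alloc)
{
	struct request *req = alloc_rq(fail_alloc);

	if (IS_ERR(req))
		return PTR_ERR(req);
	return submit_sync_cmd(req);
}
```

Compared to the pre-split code, each caller gains the IS_ERR() check and a second function call, which is where the extra lines in the diffstat come from.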
> +struct request *__nvme_alloc_rq(struct request_queue *q,
> + struct nvme_command *cmd, int qid,
> + blk_mq_req_flags_t flags)
Why the double underscore prefix? Why _rq instead of _request
like blk_mq_alloc_request and nvme_init_request?
> {
> struct request *req;
>
> if (qid == NVME_QID_ANY)
> req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
> else
> req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
> qid - 1);
> + if (!IS_ERR(req))
> + nvme_init_request(req, cmd);
>
> + return req;
And I'd at very least split out the qid case as that is substantially
different and only used in very specific places.
> + */
> +int __nvme_submit_sync_cmd(struct request *req, union nvme_result *result,
> + void *buffer, unsigned bufflen, int at_head)
> +{
> + int ret;
>
> if (buffer && bufflen) {
> - ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
> + ret = blk_rq_map_kern(req->q, req, buffer, bufflen, GFP_KERNEL);
> if (ret)
> goto out;
This new __nvme_submit_sync_cmd now consumes the request, which
is an odd calling convention.
What do you think about:
- passing the "union nvme_result *result" to nvme_execute_rq, and do the
conditional assignment to it there, where it fits along with all the
status management
- make blk_rq_map_kern handle a NULL kbuf gracefully instead of doing
that in a lot of the callers
- just open code __nvme_submit_sync_cmd using these building blocks.
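The second suggestion, making blk_rq_map_kern() tolerate a NULL buffer, moves the "if (buffer && bufflen)" guard out of every caller and into the callee. A toy stand-in for that pattern (map_kern() and the types here are not the real block-layer API):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

struct request {
	char data[64];
	size_t len;
};

/*
 * Stand-in for a NULL-tolerant blk_rq_map_kern(): with the suggested
 * change, the callee treats a missing buffer as "nothing to map" and
 * succeeds, so callers can drop their "if (buffer && bufflen)" guards.
 */
static int map_kern(struct request *req, const void *buf, size_t len)
{
	if (!buf || !len)
		return 0;
	if (len > sizeof(req->data))
		return -EINVAL;
	memcpy(req->data, buf, len);
	req->len = len;
	return 0;
}

/* An open-coded submit path can then call it unconditionally. */
static int submit(struct request *req, const void *buf, size_t len)
{
	return map_kern(req, buf, len);
}
```

An open-coded __nvme_submit_sync_cmd built from such building blocks would shrink back toward the pre-split line count.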
* Re: [PATCH 1/3] nvme: split __nvme_submit_sync_cmd()
2023-02-09 5:33 ` Christoph Hellwig
@ 2023-02-09 7:58 ` Hannes Reinecke
0 siblings, 0 replies; 11+ messages in thread
From: Hannes Reinecke @ 2023-02-09 7:58 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Sagi Grimberg, Keith Busch, linux-nvme
On 2/9/23 06:33, Christoph Hellwig wrote:
> On Wed, Feb 08, 2023 at 04:17:18PM +0100, Hannes Reinecke wrote:
>> Split a __nvme_alloc_rq() function from __nvme_submit_sync_cmd()
>> to reduce the number of arguments.
>
> But now everyone has to call both?
>
Yes, that's the downside of it.
>> 6 files changed, 83 insertions(+), 45 deletions(-)
>
> .. and the code is a lot longer. So this doesn't really seem like
> much of a win?
>> +struct request *__nvme_alloc_rq(struct request_queue *q,
>> + struct nvme_command *cmd, int qid,
>> + blk_mq_req_flags_t flags)
>
> Why the double underscore prefix? Why _rq instead of _request
> like blk_mq_alloc_request and nvme_init_request?
>
Ok, will be renaming to nvme_alloc_request().
>> {
>> struct request *req;
>>
>> if (qid == NVME_QID_ANY)
>> req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags);
>> else
>> req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags,
>> qid - 1);
>> + if (!IS_ERR(req))
>> + nvme_init_request(req, cmd);
>>
>> + return req;
>
> And I'd at very least split out the qid case as that is substantially
> different and only used in very specific places.
>
>> + */
>> +int __nvme_submit_sync_cmd(struct request *req, union nvme_result *result,
>> + void *buffer, unsigned bufflen, int at_head)
>> +{
>> + int ret;
>>
>> if (buffer && bufflen) {
>> - ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL);
>> + ret = blk_rq_map_kern(req->q, req, buffer, bufflen, GFP_KERNEL);
>> if (ret)
>> goto out;
>
> This new __nvme_submit_sync_cmd now consumes the request, which
> is an odd calling convention.
>
> What do you think about:
>
> - passing the "union nvme_result *result" to nvme_execute_rq, and do the
> conditional assignment to it there, where it fits along with all the
> status management
> - make blk_rq_map_kern handle a NULL kbuf gracefully instead of doing
> that in a lot of the callers
> - just open code __nvme_submit_sync_cmd using these building blocks.
Hmm. Let me see how this pans out.
Cheers,
Hannes
--
Dr. Hannes Reinecke Kernel Storage Architect
hare@suse.de +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Ivo Totev, Andrew
Myers, Andrew McDonald, Martje Boudien Moerman
Thread overview: 11+ messages
2023-02-08 8:49 [PATCH 0/3] nvme: rework __nvme_submit_sync_cmd() Hannes Reinecke
2023-02-08 8:49 ` [PATCH 1/3] nvme: split __nvme_submit_sync_cmd() Hannes Reinecke
2023-02-08 14:10 ` Kanchan Joshi
2023-02-08 15:03 ` Hannes Reinecke
2023-02-08 8:49 ` [PATCH 2/3] nvme: retry internal commands if DNR status bit is not set Hannes Reinecke
2023-02-08 14:02 ` Kanchan Joshi
2023-02-08 14:58 ` Hannes Reinecke
2023-02-08 8:49 ` [PATCH 3/3] nvme: make 'at_head' parameter for __nvme_submit_sync_cmd() boolean Hannes Reinecke
2023-02-08 15:17 [PATCHv2 0/3] nvme: rework __nvme_submit_sync_cmd() Hannes Reinecke
2023-02-08 15:17 ` [PATCH 1/3] nvme: split __nvme_submit_sync_cmd() Hannes Reinecke
2023-02-09 5:33 ` Christoph Hellwig
2023-02-09 7:58 ` Hannes Reinecke