* queue ready fixes and cleanups
From: Christoph Hellwig @ 2018-06-11 15:46 UTC
This is based on the patch Hannes sent last week.
* [PATCH 1/4] nvme-fabrics: refactor queue ready check
From: Christoph Hellwig @ 2018-06-11 15:46 UTC

Move the is_connected check to the fibre channel transport, as it has no
meaning for other transports. To facilitate this split out a new
nvmf_fail_nonready_command helper that is called by the transport when
it is asked to handle a command on a queue that is not ready.

Signed-off-by: Christoph Hellwig <hch at lst.de>
---
 drivers/nvme/host/fabrics.c | 60 ++++++++++++++++---------------------
 drivers/nvme/host/fabrics.h |  5 ++--
 drivers/nvme/host/fc.c      |  9 +++---
 drivers/nvme/host/rdma.c    |  7 ++---
 drivers/nvme/target/loop.c  |  7 ++---
 5 files changed, 39 insertions(+), 49 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index fa32c1216409..72d290c71a1e 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -536,30 +536,35 @@ static struct nvmf_transport_ops *nvmf_lookup_transport(
         return NULL;
 }

-blk_status_t nvmf_check_if_ready(struct nvme_ctrl *ctrl, struct request *rq,
-                bool queue_live, bool is_connected)
+/*
+ * For something we're not in a state to send to the device the default action
+ * is to busy it and retry it after the controller state is recovered. However,
+ * anything marked for failfast or nvme multipath is immediately failed.
+ *
+ * Note: commands used to initialize the controller will be marked for failfast.
+ * Note: nvme cli/ioctl commands are marked for failfast.
+ */
+blk_status_t nvmf_fail_nonready_command(struct request *rq)
+{
+        if (!blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
+                return BLK_STS_RESOURCE;
+        nvme_req(rq)->status = NVME_SC_ABORT_REQ;
+        return BLK_STS_IOERR;
+}
+EXPORT_SYMBOL_GPL(nvmf_fail_nonready_command);
+
+bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+                bool queue_live)
 {
         struct nvme_command *cmd = nvme_req(rq)->cmd;

-        if (likely(ctrl->state == NVME_CTRL_LIVE && is_connected))
-                return BLK_STS_OK;
+        if (likely(ctrl->state == NVME_CTRL_LIVE))
+                return true;

         switch (ctrl->state) {
         case NVME_CTRL_NEW:
         case NVME_CTRL_CONNECTING:
         case NVME_CTRL_DELETING:
-                /*
-                 * This is the case of starting a new or deleting an association
-                 * but connectivity was lost before it was fully created or torn
-                 * down. We need to error the commands used to initialize the
-                 * controller so the reconnect can go into a retry attempt. The
-                 * commands should all be marked REQ_FAILFAST_DRIVER, which will
-                 * hit the reject path below. Anything else will be queued while
-                 * the state settles.
-                 */
-                if (!is_connected)
-                        break;
-
                 /*
                  * If queue is live, allow only commands that are internally
                  * generated pass through. These are commands on the admin
@@ -567,7 +572,7 @@ blk_status_t nvmf_check_if_ready(struct nvme_ctrl *ctrl, struct request *rq,
                  * ioctl admin cmds received while initializing.
                  */
                 if (queue_live && !(nvme_req(rq)->flags & NVME_REQ_USERCMD))
-                        return BLK_STS_OK;
+                        return true;

                 /*
                  * If the queue is not live, allow only a connect command. This
@@ -577,26 +582,13 @@ blk_status_t nvmf_check_if_ready(struct nvme_ctrl *ctrl, struct request *rq,
                 if (!queue_live && blk_rq_is_passthrough(rq) &&
                     cmd->common.opcode == nvme_fabrics_command &&
                     cmd->fabrics.fctype == nvme_fabrics_type_connect)
-                        return BLK_STS_OK;
-                break;
+                        return true;
+                return false;
         default:
-                break;
+                return false;
         }
-
-        /*
-         * Any other new io is something we're not in a state to send to the
-         * device. Default action is to busy it and retry it after the
-         * controller state is recovered. However, anything marked for failfast
-         * or nvme multipath is immediately failed. Note: commands used to
-         * initialize the controller will be marked for failfast.
-         * Note: nvme cli/ioctl commands are marked for failfast.
-         */
-        if (!blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
-                return BLK_STS_RESOURCE;
-        nvme_req(rq)->status = NVME_SC_ABORT_REQ;
-        return BLK_STS_IOERR;
 }
-EXPORT_SYMBOL_GPL(nvmf_check_if_ready);
+EXPORT_SYMBOL_GPL(nvmf_check_ready);

 static const match_table_t opt_tokens = {
         { NVMF_OPT_TRANSPORT,           "transport=%s" },
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 7491a0bbf711..2b7033d95cbd 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -162,7 +162,8 @@ void nvmf_unregister_transport(struct nvmf_transport_ops *ops);
 void nvmf_free_options(struct nvmf_ctrl_options *opts);
 int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size);
 bool nvmf_should_reconnect(struct nvme_ctrl *ctrl);
-blk_status_t nvmf_check_if_ready(struct nvme_ctrl *ctrl,
-        struct request *rq, bool queue_live, bool is_connected);
+blk_status_t nvmf_fail_nonready_command(struct request *rq);
+bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+                bool queue_live);

 #endif /* _NVME_FABRICS_H */
diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
index 0bad65803271..7a4afbd22b8a 100644
--- a/drivers/nvme/host/fc.c
+++ b/drivers/nvme/host/fc.c
@@ -2279,14 +2279,13 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
         struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
         struct nvme_command *sqe = &cmdiu->sqe;
         enum nvmefc_fcp_datadir io_dir;
+        bool queue_ready = test_bit(NVME_FC_Q_LIVE, &queue->flags);
         u32 data_len;
         blk_status_t ret;

-        ret = nvmf_check_if_ready(&queue->ctrl->ctrl, rq,
-                test_bit(NVME_FC_Q_LIVE, &queue->flags),
-                ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE);
-        if (unlikely(ret))
-                return ret;
+        if (ctrl->rport->remoteport.port_state != FC_OBJSTATE_ONLINE ||
+            nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
+                return nvmf_fail_nonready_command(rq);

         ret = nvme_setup_cmd(ns, rq, sqe);
         if (ret)
diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
index 2aba03876d84..981a942be681 100644
--- a/drivers/nvme/host/rdma.c
+++ b/drivers/nvme/host/rdma.c
@@ -1613,15 +1613,14 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx,
         struct nvme_rdma_qe *sqe = &req->sqe;
         struct nvme_command *c = sqe->data;
         struct ib_device *dev;
+        bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);
         blk_status_t ret;
         int err;

         WARN_ON_ONCE(rq->tag < 0);

-        ret = nvmf_check_if_ready(&queue->ctrl->ctrl, rq,
-                test_bit(NVME_RDMA_Q_LIVE, &queue->flags), true);
-        if (unlikely(ret))
-                return ret;
+        if (!nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
+                return nvmf_fail_nonready_command(rq);

         dev = queue->device->dev;
         ib_dma_sync_single_for_cpu(dev, sqe->dma,
diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
index 1304ec3a7ede..d8d91f04bd7e 100644
--- a/drivers/nvme/target/loop.c
+++ b/drivers/nvme/target/loop.c
@@ -158,12 +158,11 @@ static blk_status_t nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx,
         struct nvme_loop_queue *queue = hctx->driver_data;
         struct request *req = bd->rq;
         struct nvme_loop_iod *iod = blk_mq_rq_to_pdu(req);
+        bool queue_ready = test_bit(NVME_LOOP_Q_LIVE, &queue->flags);
         blk_status_t ret;

-        ret = nvmf_check_if_ready(&queue->ctrl->ctrl, req,
-                test_bit(NVME_LOOP_Q_LIVE, &queue->flags), true);
-        if (unlikely(ret))
-                return ret;
+        if (!nvmf_check_ready(&queue->ctrl->ctrl, req, queue_ready))
+                return nvmf_fail_nonready_command(req);

         ret = nvme_setup_cmd(ns, req, &iod->cmd);
         if (ret)
--
2.17.1
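Condensed, every transport's ->queue_rq() entry now takes the same shape.
The sketch below restates that pattern with hypothetical xxx_* placeholder
names (it is not a symbol-for-symbol excerpt from the series, and it shows
the check with the negation that the FC hunk above is missing - see the
review further down the thread):

        /*
         * Sketch of the queue_rq entry pattern after this patch; "xxx"
         * stands in for a transport (fc/rdma/loop), and
         * xxx_transport_connected() represents any transport-specific
         * connectivity test (e.g. the FC rport state check; rdma and
         * loop have none and skip it).
         */
        static blk_status_t xxx_queue_rq(struct blk_mq_hw_ctx *hctx,
                        const struct blk_mq_queue_data *bd)
        {
                struct xxx_queue *queue = hctx->driver_data;
                struct request *rq = bd->rq;
                bool queue_ready = test_bit(XXX_Q_LIVE, &queue->flags);

                /* not ready: busy or fail the request in common code */
                if (!xxx_transport_connected(queue) ||
                    !nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
                        return nvmf_fail_nonready_command(rq);

                /* ... normal nvme_setup_cmd() and submission path ... */
                return BLK_STS_OK;
        }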
* [PATCH 1/4] nvme-fabrics: refactor queue ready check
From: Johannes Thumshirn @ 2018-06-12 8:23 UTC

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn at suse.de>
--
Johannes Thumshirn                                          Storage
jthumshirn at suse.de                              +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850
* [PATCH 1/4] nvme-fabrics: refactor queue ready check
From: James Smart @ 2018-06-12 21:45 UTC

Looking through this - I don't see anything wrong with the is_connected
movement or the splitting of the if_ready check into a 2-part
implementation.

However, I disagree with the existing base, which added the DELETING state
to the same logic block as NEW and CONNECTING: the qlive state may not yet
be accurate for the controller state, thus we're allowing io when we
shouldn't be.

For example: nvme_delete_ctrl() changes state, then schedules delete_work,
and delete_work won't change the queue_live state until
ctrl->ops->delete_ctrl() is called, which isn't until after
nvme_stop_ctrl() and nvme_remove_namespaces() are done... so there's a
workq scheduling window where we allow lots more regular io through, and it
keeps going through even as we're stopping the controller. That's live io
that may need to finish (with no assist by the transport to stop/abort it)
before the delete can take place.

Additionally, if a user-mode delete request is made while in the middle of
a RECONNECT, and the io the if_ready is looking at is part of the
reconnect, we're allowing the connect command to go out/complete even
though we're in the middle of deleting it. May well be fine - but I don't
know why we're letting all this io through simply because we didn't want to
add a check for a property_set() on a reset or delete.

-- james

On 6/11/2018 8:46 AM, Christoph Hellwig wrote:
> Move the is_connected check to the fibre channel transport, as it has no
> meaning for other transports. To facilitate this split out a new
> nvmf_fail_nonready_command helper that is called by the transport when
> it is asked to handle a command on a queue that is not ready.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> ---
>  drivers/nvme/host/fabrics.c | 60 ++++++++++++++++---------------------
>  drivers/nvme/host/fabrics.h |  5 ++--
>  drivers/nvme/host/fc.c      |  9 +++---
>  drivers/nvme/host/rdma.c    |  7 ++---
>  drivers/nvme/target/loop.c  |  7 ++---
>  5 files changed, 39 insertions(+), 49 deletions(-)
>
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index fa32c1216409..72d290c71a1e 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -536,30 +536,35 @@ static struct nvmf_transport_ops *nvmf_lookup_transport(
>          return NULL;
>  }
>
> -blk_status_t nvmf_check_if_ready(struct nvme_ctrl *ctrl, struct request *rq,
> -                bool queue_live, bool is_connected)
> +/*
> + * For something we're not in a state to send to the device the default action
> + * is to busy it and retry it after the controller state is recovered. However,
> + * anything marked for failfast or nvme multipath is immediately failed.
> + *
> + * Note: commands used to initialize the controller will be marked for failfast.
> + * Note: nvme cli/ioctl commands are marked for failfast.
> + */
> +blk_status_t nvmf_fail_nonready_command(struct request *rq)
> +{
> +        if (!blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH))
> +                return BLK_STS_RESOURCE;
> +        nvme_req(rq)->status = NVME_SC_ABORT_REQ;
> +        return BLK_STS_IOERR;
> +}
> +EXPORT_SYMBOL_GPL(nvmf_fail_nonready_command);
> +
> +bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
> +                bool queue_live)
>  {
>          struct nvme_command *cmd = nvme_req(rq)->cmd;
>
> -        if (likely(ctrl->state == NVME_CTRL_LIVE && is_connected))
> -                return BLK_STS_OK;
> +        if (likely(ctrl->state == NVME_CTRL_LIVE))
> +                return true;
>
>          switch (ctrl->state) {
>          case NVME_CTRL_NEW:
>          case NVME_CTRL_CONNECTING:
>          case NVME_CTRL_DELETING:
> -                /*
> -                 * This is the case of starting a new or deleting an association
> -                 * but connectivity was lost before it was fully created or torn
> -                 * down. We need to error the commands used to initialize the
> -                 * controller so the reconnect can go into a retry attempt. The
> -                 * commands should all be marked REQ_FAILFAST_DRIVER, which will
> -                 * hit the reject path below. Anything else will be queued while
> -                 * the state settles.
> -                 */
> -                if (!is_connected)
> -                        break;
> -
>                  /*
>                   * If queue is live, allow only commands that are internally
>                   * generated pass through. These are commands on the admin
> @@ -567,7 +572,7 @@ blk_status_t nvmf_check_if_ready(struct nvme_ctrl *ctrl, struct request *rq,
>                   * ioctl admin cmds received while initializing.
>                   */
>                  if (queue_live && !(nvme_req(rq)->flags & NVME_REQ_USERCMD))
> -                        return BLK_STS_OK;
> +                        return true;
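Spelled out, the window being described looks roughly like this (a sketch
assuming the 4.18-era delete path in drivers/nvme/host/core.c; exact call
sites may differ):

        /*
         * nvme_delete_ctrl()
         *   -> controller state set to NVME_CTRL_DELETING
         *      (the ready check now lets regular io through)
         *   -> queue_work() for delete_work           <-- window opens
         * ... arbitrary scheduling delay: queues still marked live ...
         * delete_work
         *   -> nvme_stop_ctrl(), nvme_remove_namespaces()
         *   -> ctrl->ops->delete_ctrl()               <-- queues finally
         *      torn down and marked not-live           -- window closes
         */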
* [PATCH 1/4] nvme-fabrics: refactor queue ready check
From: Christoph Hellwig @ 2018-06-13 8:16 UTC

On Tue, Jun 12, 2018@02:45:16PM -0700, James Smart wrote:
> Additionally, if a user-mode delete request is made while in the middle of
> a RECONNECT, and the io the if_ready is looking at is part of the
> reconnect, we're allowing the connect command to go out/complete even
> though we're in the middle of deleting it. May well be fine - but I don't
> know why we're letting all this io through simply because we didn't want
> to add a check for a property_set() on a reset or delete.

so what about something like this on top of the series (untested so far):


diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index dc87719a5c9e..a36a74b2cb8e 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -556,24 +556,31 @@ EXPORT_SYMBOL_GPL(nvmf_fail_nonready_command);
 bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
                 bool queue_live)
 {
-        if (ctrl->state == NVME_CTRL_DEAD)
-                return false;
+        struct nvme_request *req = nvme_req(rq);

         /*
-         * If the queue is not live the only thing allowed is a connect
-         * command to actually set the queue live.
+         * If we are in some state of setup or teardown only allow
+         * internally generated commands.
         */
-        if (!queue_live &&
-            (nvme_req(rq)->cmd->common.opcode != nvme_fabrics_command ||
-             nvme_req(rq)->cmd->fabrics.fctype != nvme_fabrics_type_connect))
+        if (!blk_rq_is_passthrough(rq) || (req->flags & NVME_REQ_USERCMD))
                 return false;

         /*
-         * If we are in some state of setup or teardown only allow
-         * internally generated commands.
+         * Only allow commands on a live queue, except for the connect command,
+         * which is required to set the queue live in the appropriate states.
         */
-        return blk_rq_is_passthrough(rq) &&
-                !(nvme_req(rq)->flags & NVME_REQ_USERCMD);
+        switch (ctrl->state) {
+        case NVME_CTRL_DEAD:
+                return false;
+        case NVME_CTRL_NEW:
+        case NVME_CTRL_CONNECTING:
+                if (req->cmd->common.opcode == nvme_fabrics_command &&
+                    req->cmd->fabrics.fctype == nvme_fabrics_type_connect)
+                        return true;
+                /*FALLTHRU*/
+        default:
+                return queue_live;
+        }
 }
 EXPORT_SYMBOL_GPL(__nvmf_check_ready);
* [PATCH 1/4] nvme-fabrics: refactor queue ready check
From: James Smart @ 2018-06-13 15:54 UTC

When I finally finished the review of your last patch
([PATCH 4/4] nvme-fabrics: reverse polarity in __nvmf_check_ready),
I think you had addressed this issue, so the patch below isn't necessary.

-- james

On 6/13/2018 1:16 AM, Christoph Hellwig wrote:
> On Tue, Jun 12, 2018@02:45:16PM -0700, James Smart wrote:
>> Additionally, if a user-mode delete request is made while in the middle
>> of a RECONNECT, and the io the if_ready is looking at is part of the
>> reconnect, we're allowing the connect command to go out/complete even
>> though we're in the middle of deleting it. May well be fine - but I
>> don't know why we're letting all this io through simply because we
>> didn't want to add a check for a property_set() on a reset or delete.
> so what about something like this on top of the series (untested so far):
>
>
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index dc87719a5c9e..a36a74b2cb8e 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -556,24 +556,31 @@ EXPORT_SYMBOL_GPL(nvmf_fail_nonready_command);
>  bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
>                  bool queue_live)
>  {
> -        if (ctrl->state == NVME_CTRL_DEAD)
> -                return false;
> +        struct nvme_request *req = nvme_req(rq);
>
>          /*
> -         * If the queue is not live the only thing allowed is a connect
> -         * command to actually set the queue live.
> +         * If we are in some state of setup or teardown only allow
> +         * internally generated commands.
>          */
> -        if (!queue_live &&
> -            (nvme_req(rq)->cmd->common.opcode != nvme_fabrics_command ||
> -             nvme_req(rq)->cmd->fabrics.fctype != nvme_fabrics_type_connect))
> +        if (!blk_rq_is_passthrough(rq) || (req->flags & NVME_REQ_USERCMD))
>                  return false;
>
>          /*
> -         * If we are in some state of setup or teardown only allow
> -         * internally generated commands.
> +         * Only allow commands on a live queue, except for the connect command,
> +         * which is required to set the queue live in the appropriate states.
>          */
> -        return blk_rq_is_passthrough(rq) &&
> -                !(nvme_req(rq)->flags & NVME_REQ_USERCMD);
> +        switch (ctrl->state) {
> +        case NVME_CTRL_DEAD:
> +                return false;
> +        case NVME_CTRL_NEW:
> +        case NVME_CTRL_CONNECTING:
> +                if (req->cmd->common.opcode == nvme_fabrics_command &&
> +                    req->cmd->fabrics.fctype == nvme_fabrics_type_connect)
> +                        return true;
> +                /*FALLTHRU*/
> +        default:
> +                return queue_live;
> +        }
>  }
>  EXPORT_SYMBOL_GPL(__nvmf_check_ready);
* [PATCH 1/4] nvme-fabrics: refactor queue ready check
From: Christoph Hellwig @ 2018-06-14 12:19 UTC

On Wed, Jun 13, 2018@08:54:25AM -0700, James Smart wrote:
> When I finally finished the review of your last patch
> ([PATCH 4/4] nvme-fabrics: reverse polarity in __nvmf_check_ready),
> I think you had addressed this issue, so the patch below isn't necessary.

Even with patch 4 we'd still allow kernel-generated connect commands in
non-new, non-connecting states. We should not generally send them, so I
think the series as-is should be ok, but if we touch all this code we
might as well handle that case as well.
* [PATCH 1/4] nvme-fabrics: refactor queue ready check
From: James Smart @ 2018-06-13 19:48 UTC

On 6/11/2018 8:46 AM, Christoph Hellwig wrote:
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 0bad65803271..7a4afbd22b8a 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -2279,14 +2279,13 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx,
>          struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
>          struct nvme_command *sqe = &cmdiu->sqe;
>          enum nvmefc_fcp_datadir io_dir;
> +        bool queue_ready = test_bit(NVME_FC_Q_LIVE, &queue->flags);
>          u32 data_len;
>          blk_status_t ret;
>
> -        ret = nvmf_check_if_ready(&queue->ctrl->ctrl, rq,
> -                test_bit(NVME_FC_Q_LIVE, &queue->flags),
> -                ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE);
> -        if (unlikely(ret))
> -                return ret;
> +        if (ctrl->rport->remoteport.port_state != FC_OBJSTATE_ONLINE ||
> +            nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
> +                return nvmf_fail_nonready_command(rq);
>
>          ret = nvme_setup_cmd(ns, rq, sqe);
>          if (ret)

This is missing the ! on the nvmf_check_ready() call. Should be:

+        if (ctrl->rport->remoteport.port_state != FC_OBJSTATE_ONLINE ||
+            !nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
+                return nvmf_fail_nonready_command(rq);

The other transports had it right.

-- james
* [PATCH 2/4] nvme-fabrics: handle queue ready fast path inline
From: Christoph Hellwig @ 2018-06-11 15:46 UTC

Avoid a function call for the queue live fast path.

Signed-off-by: Christoph Hellwig <hch at lst.de>
---
 drivers/nvme/host/fabrics.c |  7 ++-----
 drivers/nvme/host/fabrics.h | 10 +++++++++-
 2 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 72d290c71a1e..6b4e253b9347 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -553,14 +553,11 @@ blk_status_t nvmf_fail_nonready_command(struct request *rq)
 }
 EXPORT_SYMBOL_GPL(nvmf_fail_nonready_command);

-bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
                 bool queue_live)
 {
         struct nvme_command *cmd = nvme_req(rq)->cmd;

-        if (likely(ctrl->state == NVME_CTRL_LIVE))
-                return true;
-
         switch (ctrl->state) {
         case NVME_CTRL_NEW:
         case NVME_CTRL_CONNECTING:
@@ -588,7 +585,7 @@ bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
                 return false;
         }
 }
-EXPORT_SYMBOL_GPL(nvmf_check_ready);
+EXPORT_SYMBOL_GPL(__nvmf_check_ready);

 static const match_table_t opt_tokens = {
         { NVMF_OPT_TRANSPORT,           "transport=%s" },
diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 2b7033d95cbd..2ea949a3868c 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -163,7 +163,15 @@ void nvmf_free_options(struct nvmf_ctrl_options *opts);
 int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size);
 bool nvmf_should_reconnect(struct nvme_ctrl *ctrl);
 blk_status_t nvmf_fail_nonready_command(struct request *rq);
-bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
                 bool queue_live);

+static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
+                bool queue_live)
+{
+        if (likely(ctrl->state == NVME_CTRL_LIVE))
+                return true;
+        return __nvmf_check_ready(ctrl, rq, queue_live);
+}
+
 #endif /* _NVME_FABRICS_H */
--
2.17.1
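The header change above is the standard kernel fast-path/slow-path split:
a static inline in the header handles the common case with a single
comparison, and only the rare states pay for an out-of-line call. A
self-contained restatement of the idiom (hypothetical foo_* names, not
symbols from the patch):

        #include <stdbool.h>

        #define likely(x) __builtin_expect(!!(x), 1)

        enum foo_state { FOO_COMMON, FOO_SETUP, FOO_TEARDOWN };
        struct foo { enum foo_state state; };

        /* Out of line: only reached for the rare states. */
        bool __foo_check(struct foo *f)
        {
                return f->state == FOO_SETUP;   /* placeholder cold-path logic */
        }

        /* Hot path: one compare, no function call in the common case. */
        static inline bool foo_check(struct foo *f)
        {
                if (likely(f->state == FOO_COMMON))
                        return true;
                return __foo_check(f);
        }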
* [PATCH 2/4] nvme-fabrics: handle queue ready fast path inline
From: Johannes Thumshirn @ 2018-06-12 8:24 UTC

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn at suse.de>

Although I wouldn't mind squashing it into the previous patch.
--
Johannes Thumshirn                                          Storage
jthumshirn at suse.de                              +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850
* [PATCH 2/4] nvme-fabrics: handle queue ready fast path inline
From: James Smart @ 2018-06-12 21:47 UTC

On 6/11/2018 8:46 AM, Christoph Hellwig wrote:
> Avoid a function call for the queue live fast path.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> ---
>  drivers/nvme/host/fabrics.c |  7 ++-----
>  drivers/nvme/host/fabrics.h | 10 +++++++++-
>  2 files changed, 11 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 72d290c71a1e..6b4e253b9347 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
>

Reviewed-by: James Smart <james.smart at broadcom.com>
* [PATCH 3/4] nvme-fabrics: handle the admin-only case properly in nvmf_check_ready
From: Christoph Hellwig @ 2018-06-11 15:46 UTC

In the ADMIN_ONLY state we don't have any I/O queues, but we should accept
all admin commands without further checks.

Signed-off-by: Christoph Hellwig <hch at lst.de>
---
 drivers/nvme/host/fabrics.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
index 2ea949a3868c..e1818a27aa2d 100644
--- a/drivers/nvme/host/fabrics.h
+++ b/drivers/nvme/host/fabrics.h
@@ -169,7 +169,8 @@ bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
 static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
                 bool queue_live)
 {
-        if (likely(ctrl->state == NVME_CTRL_LIVE))
+        if (likely(ctrl->state == NVME_CTRL_LIVE ||
+                   ctrl->state == NVME_CTRL_ADMIN_ONLY))
                 return true;
         return __nvmf_check_ready(ctrl, rq, queue_live);
 }
--
2.17.1
* [PATCH 3/4] nvme-fabrics: handle the admin-only case properly in nvmf_check_ready
From: Johannes Thumshirn @ 2018-06-12 8:25 UTC

Same here. Anyways,
Reviewed-by: Johannes Thumshirn <jthumshirn at suse.de>
--
Johannes Thumshirn                                          Storage
jthumshirn at suse.de                              +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850
* [PATCH 3/4] nvme-fabrics: handle the admin-only case properly in nvmf_check_ready
From: James Smart @ 2018-06-12 21:48 UTC

On 6/11/2018 8:46 AM, Christoph Hellwig wrote:
> In the ADMIN_ONLY state we don't have any I/O queues, but we should accept
> all admin commands without further checks.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> ---
>  drivers/nvme/host/fabrics.h | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
> index 2ea949a3868c..e1818a27aa2d 100644
> --- a/drivers/nvme/host/fabrics.h
> +++ b/drivers/nvme/host/fabrics.h
> @@ -169,7 +169,8 @@ bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
>  static inline bool nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
>                  bool queue_live)
>  {
> -        if (likely(ctrl->state == NVME_CTRL_LIVE))
> +        if (likely(ctrl->state == NVME_CTRL_LIVE ||
> +                   ctrl->state == NVME_CTRL_ADMIN_ONLY))
>                  return true;
>          return __nvmf_check_ready(ctrl, rq, queue_live);
>  }

True, although there was no validation of it being an admin cmd. But it
reiterates my statement that NVME_CTRL_ADMIN_ONLY is just a variant of
NVME_CTRL_LIVE.

Reviewed-by: James Smart <james.smart at broadcom.com>
* [PATCH 3/4] nvme-fabrics: handle the admin-only case properly in nvmf_check_ready
From: Christoph Hellwig @ 2018-06-13 7:59 UTC

On Tue, Jun 12, 2018@02:48:33PM -0700, James Smart wrote:
> True, although there was no validation of it being an admin cmd. But it
> reiterates my statement that NVME_CTRL_ADMIN_ONLY is just a variant of
> NVME_CTRL_LIVE.

I have to admit that so far the NVME_CTRL_ADMIN_ONLY state has created way
too much trouble. I wonder if we can find a better fix for the original
scanning problem that it solved, e.g. using the number of queues variable?
* [PATCH 4/4] nvme-fabrics: reverse polarity in __nvmf_check_ready
From: Christoph Hellwig @ 2018-06-11 15:46 UTC

All non-live, non-dead states have the same command filters. So just check
for the dead state first, and then apply our filters to catch all other
"special" states automatically.

Signed-off-by: Christoph Hellwig <hch at lst.de>
---
 drivers/nvme/host/fabrics.c | 42 ++++++++++++++-----------------------
 1 file changed, 16 insertions(+), 26 deletions(-)

diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
index 6b4e253b9347..dc87719a5c9e 100644
--- a/drivers/nvme/host/fabrics.c
+++ b/drivers/nvme/host/fabrics.c
@@ -556,34 +556,24 @@ EXPORT_SYMBOL_GPL(nvmf_fail_nonready_command);
 bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
                 bool queue_live)
 {
-        struct nvme_command *cmd = nvme_req(rq)->cmd;
-
-        switch (ctrl->state) {
-        case NVME_CTRL_NEW:
-        case NVME_CTRL_CONNECTING:
-        case NVME_CTRL_DELETING:
-                /*
-                 * If queue is live, allow only commands that are internally
-                 * generated pass through. These are commands on the admin
-                 * queue to initialize the controller. This will reject any
-                 * ioctl admin cmds received while initializing.
-                 */
-                if (queue_live && !(nvme_req(rq)->flags & NVME_REQ_USERCMD))
-                        return true;
-
-                /*
-                 * If the queue is not live, allow only a connect command. This
-                 * will reject any ioctl admin cmd as well as initialization
-                 * commands if the controller reverted the queue to non-live.
-                 */
-                if (!queue_live && blk_rq_is_passthrough(rq) &&
-                    cmd->common.opcode == nvme_fabrics_command &&
-                    cmd->fabrics.fctype == nvme_fabrics_type_connect)
-                        return true;
+        if (ctrl->state == NVME_CTRL_DEAD)
                 return false;
-        default:
+
+        /*
+         * If the queue is not live the only thing allowed is a connect
+         * command to actually set the queue live.
+         */
+        if (!queue_live &&
+            (nvme_req(rq)->cmd->common.opcode != nvme_fabrics_command ||
+             nvme_req(rq)->cmd->fabrics.fctype != nvme_fabrics_type_connect))
                 return false;
-        }
+
+        /*
+         * If we are in some state of setup or teardown only allow
+         * internally generated commands.
+         */
+        return blk_rq_is_passthrough(rq) &&
+                !(nvme_req(rq)->flags & NVME_REQ_USERCMD);
 }
 EXPORT_SYMBOL_GPL(__nvmf_check_ready);
--
2.17.1
* [PATCH 4/4] nvme-fabrics: reverse polarity in __nvmf_check_ready
From: Johannes Thumshirn @ 2018-06-12 8:26 UTC

Looks good,
Reviewed-by: Johannes Thumshirn <jthumshirn at suse.de>
--
Johannes Thumshirn                                          Storage
jthumshirn at suse.de                              +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850
* [PATCH 4/4] nvme-fabrics: reverse polarity in __nvmf_check_ready
From: Hannes Reinecke @ 2018-06-12 11:08 UTC

On Mon, 11 Jun 2018 17:46:47 +0200
Christoph Hellwig <hch@lst.de> wrote:

> All non-live, non-dead states have the same command filters. So just
> check for the dead state first, and then apply our filters to catch all
> other "special" states automatically.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> ---
>  drivers/nvme/host/fabrics.c | 42 ++++++++++++++-----------------------
>  1 file changed, 16 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 6b4e253b9347..dc87719a5c9e 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -556,34 +556,24 @@ EXPORT_SYMBOL_GPL(nvmf_fail_nonready_command);
>  bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
>                  bool queue_live)
>  {
> -        struct nvme_command *cmd = nvme_req(rq)->cmd;
> -
> -        switch (ctrl->state) {
> -        case NVME_CTRL_NEW:
> -        case NVME_CTRL_CONNECTING:
> -        case NVME_CTRL_DELETING:
> -                /*
> -                 * If queue is live, allow only commands that are internally
> -                 * generated pass through. These are commands on the admin
> -                 * queue to initialize the controller. This will reject any
> -                 * ioctl admin cmds received while initializing.
> -                 */
> -                if (queue_live && !(nvme_req(rq)->flags & NVME_REQ_USERCMD))
> -                        return true;
> -
> -                /*
> -                 * If the queue is not live, allow only a connect command. This
> -                 * will reject any ioctl admin cmd as well as initialization
> -                 * commands if the controller reverted the queue to non-live.
> -                 */
> -                if (!queue_live && blk_rq_is_passthrough(rq) &&
> -                    cmd->common.opcode == nvme_fabrics_command &&
> -                    cmd->fabrics.fctype == nvme_fabrics_type_connect)
> -                        return true;
> +        if (ctrl->state == NVME_CTRL_DEAD)
>                  return false;
> -        default:
> +
> +        /*
> +         * If the queue is not live the only thing allowed is a connect
> +         * command to actually set the queue live.
> +         */
> +        if (!queue_live &&
> +            (nvme_req(rq)->cmd->common.opcode != nvme_fabrics_command ||
> +             nvme_req(rq)->cmd->fabrics.fctype != nvme_fabrics_type_connect))
>                  return false;
> -        }
> +
> +        /*
> +         * If we are in some state of setup or teardown only allow
> +         * internally generated commands.
> +         */
> +        return blk_rq_is_passthrough(rq) &&
> +                !(nvme_req(rq)->flags & NVME_REQ_USERCMD);
>  }
>  EXPORT_SYMBOL_GPL(__nvmf_check_ready);

Is that really correct?
We're squashing the 'queue_live' check with the check for the 'connect'
command, meaning we will be submitting non-connect commands even on a
non-live queue.

I'd rather split that check into two if-clauses like:

        if (!queue_live) {
                if (nvme_req(rq)->cmd....)
                        return false;
                return true;
        }

to improve readability (and possibly correctness).

Cheers,

Hannes
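Spelled out with the connect check from the patch above, the split Hannes
is suggesting would presumably read as follows (a completion of his elided
condition, not code posted in the thread):

        /* Hypothetical completion of the elided condition above. */
        if (!queue_live) {
                if (nvme_req(rq)->cmd->common.opcode != nvme_fabrics_command ||
                    nvme_req(rq)->cmd->fabrics.fctype != nvme_fabrics_type_connect)
                        return false;
                return true;
        }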
* [PATCH 4/4] nvme-fabrics: reverse polarity in __nvmf_check_ready
From: James Smart @ 2018-06-12 22:22 UTC

On 6/12/2018 4:08 AM, Hannes Reinecke wrote:
> On Mon, 11 Jun 2018 17:46:47 +0200
> Christoph Hellwig <hch@lst.de> wrote:
>> All non-live, non-dead states have the same command filters. So just
>> check for the dead state first, and then apply our filters to catch all
>> other "special" states automatically.
>>
>> Signed-off-by: Christoph Hellwig <hch at lst.de>
>> ---
>>  drivers/nvme/host/fabrics.c | 42 ++++++++++++++-----------------------
>>  1 file changed, 16 insertions(+), 26 deletions(-)
>>
>> [full diff quoted upthread]
>>
> Is that really correct?
> We're squashing the 'queue_live' check with the check for the 'connect'
> command, meaning we will be submitting non-connect commands even on a
> non-live queue.

I think it is. The queue_live=true check isn't necessarily valid; it is
only valid when the controller state is also LIVE/ADMIN_ONLY, and that
should have been picked off/allowed prior to reaching this point. So, to be
in this section, you are something other than LIVE. The queue_live=false
case will always be a valid one, so using that check with the connect type
is good. If it was the admin queue that had an invalid=true case, then it's
highly unlikely anything is going to be generated via the core code without
being an ioctl command, and the ioctl commands should fail the
!NVME_REQ_USERCMD check.

So for any other admin command, given the serialized nature of controller
init, it should arrive after the admin queue's connect has been done, so
the queue is live, and any command other than an ioctl would be one
generated to init the controller (or to do the property_set on a
reset/delete).

-- james
* [PATCH 4/4] nvme-fabrics: reverse polarity in __nvmf_check_ready
From: James Smart @ 2018-06-12 22:05 UTC

On 6/11/2018 8:46 AM, Christoph Hellwig wrote:
> All non-live, non-dead states have the same command filters. So just
> check for the dead state first, and then apply our filters to catch all
> other "special" states automatically.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>
> ---
>  drivers/nvme/host/fabrics.c | 42 ++++++++++++++-----------------------
>  1 file changed, 16 insertions(+), 26 deletions(-)
>
> diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
> index 6b4e253b9347..dc87719a5c9e 100644
> --- a/drivers/nvme/host/fabrics.c
> +++ b/drivers/nvme/host/fabrics.c
> @@ -556,34 +556,24 @@ EXPORT_SYMBOL_GPL(nvmf_fail_nonready_command);
>  bool __nvmf_check_ready(struct nvme_ctrl *ctrl, struct request *rq,
>                  bool queue_live)
>  {
> -        struct nvme_command *cmd = nvme_req(rq)->cmd;
> -
> -        switch (ctrl->state) {
> -        case NVME_CTRL_NEW:
> -        case NVME_CTRL_CONNECTING:
> -        case NVME_CTRL_DELETING:
> -                /*
> -                 * If queue is live, allow only commands that are internally
> -                 * generated pass through. These are commands on the admin
> -                 * queue to initialize the controller. This will reject any
> -                 * ioctl admin cmds received while initializing.
> -                 */
> -                if (queue_live && !(nvme_req(rq)->flags & NVME_REQ_USERCMD))
> -                        return true;
> -
> -                /*
> -                 * If the queue is not live, allow only a connect command. This
> -                 * will reject any ioctl admin cmd as well as initialization
> -                 * commands if the controller reverted the queue to non-live.
> -                 */
> -                if (!queue_live && blk_rq_is_passthrough(rq) &&
> -                    cmd->common.opcode == nvme_fabrics_command &&
> -                    cmd->fabrics.fctype == nvme_fabrics_type_connect)
> -                        return true;
> +        if (ctrl->state == NVME_CTRL_DEAD)
>                  return false;
> -        default:
> +
> +        /*
> +         * If the queue is not live the only thing allowed is a connect
> +         * command to actually set the queue live.
> +         */
> +        if (!queue_live &&
> +            (nvme_req(rq)->cmd->common.opcode != nvme_fabrics_command ||
> +             nvme_req(rq)->cmd->fabrics.fctype != nvme_fabrics_type_connect))
>                  return false;
> -        }

ok - so what still gets through to the next check are cases where
queue_live=true is inconsistent with the controller state - e.g. std io or
ioctls received while the workqueue item is queued, and prior to the
transport finally marking the queue not-live somewhere in the workqueue
item code flow.

> +
> +        /*
> +         * If we are in some state of setup or teardown only allow
> +         * internally generated commands.
> +         */
> +        return blk_rq_is_passthrough(rq) &&
> +                !(nvme_req(rq)->flags & NVME_REQ_USERCMD);
>  }
>  EXPORT_SYMBOL_GPL(__nvmf_check_ready);

and this (nice!) second check filters out those standard io's as well as
any ioctls, which leaves only those things generated by core that should be
for reset or initialization.

Looks good.

-- james

Reviewed-by: James Smart <james.smart at broadcom.com>
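Summing up the thread's reading of the reworked check: once the inline
wrapper from patches 2/4 and 3/4 has short-circuited the LIVE/ADMIN_ONLY
case, the decision order in __nvmf_check_ready() comes out roughly as below
(a condensed restatement, not code from the series):

        /*
         * Rough decision table for __nvmf_check_ready() after patch 4/4:
         *
         *   controller DEAD                  -> false (command failed or
         *                                      requeued by the caller)
         *   !queue_live && not a connect cmd -> false (only connect may
         *                                      revive a non-live queue)
         *   internal passthrough command     -> true  (init/teardown
         *                                      commands proceed)
         *   user ioctl or filesystem io      -> false (held back or failed
         *                                      until the state settles)
         */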