From: James Smart <jsmart833426@gmail.com>
To: Mohamed Khalfella <mkhalfella@purestorage.com>,
Justin Tee <justin.tee@broadcom.com>,
Naresh Gottumukkala <nareshgottumukkala83@gmail.com>,
Paul Ely <paul.ely@broadcom.com>,
Chaitanya Kulkarni <kch@nvidia.com>,
Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
Keith Busch <kbusch@kernel.org>, Sagi Grimberg <sagi@grimberg.me>
Cc: Aaron Dailey <adailey@purestorage.com>,
Randy Jennings <randyj@purestorage.com>,
Dhaval Giani <dgiani@purestorage.com>,
Hannes Reinecke <hare@suse.de>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org,
jsmart833426@gmail.com
Subject: Re: [PATCH v2 12/14] nvme-fc: Decouple error recovery from controller reset
Date: Tue, 3 Feb 2026 11:19:28 -0800 [thread overview]
Message-ID: <383bbbe9-6cf5-465c-8811-0dddce34f883@gmail.com> (raw)
In-Reply-To: <20260130223531.2478849-13-mkhalfella@purestorage.com>
On 1/30/2026 2:34 PM, Mohamed Khalfella wrote:
> nvme_fc_error_recovery() called from nvme_fc_timeout() while controller
> in CONNECTING state results in deadlock reported in link below. Update
> nvme_fc_timeout() to schedule error recovery to avoid the deadlock.
>
> Previous to this change if controller was LIVE error recovery resets
> the controller and this does not match nvme-tcp and nvme-rdma.
It is not intended to match tcp/rdma. Using the reset path was done to
avoid duplicating the paths that tear down the association. FC, given
we interact with an HBA for device and io state and have a lot of async
io completions, requires a lot more work than the straight data structure
teardown in rdma/tcp.
I agree with wanting to change the execution thread to avoid the deadlock.
> Decouple error recovery from controller reset to match other fabric transports.
> Link: https://lore.kernel.org/all/20250529214928.2112990-1-mkhalfella@purestorage.com/
> Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com>
> ---
> drivers/nvme/host/fc.c | 94 ++++++++++++++++++------------------------
> 1 file changed, 41 insertions(+), 53 deletions(-)
>
> diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
> index 6948de3f438a..f8f6071b78ed 100644
> --- a/drivers/nvme/host/fc.c
> +++ b/drivers/nvme/host/fc.c
> @@ -227,6 +227,8 @@ static DEFINE_IDA(nvme_fc_ctrl_cnt);
> static struct device *fc_udev_device;
>
> static void nvme_fc_complete_rq(struct request *rq);
> +static void nvme_fc_start_ioerr_recovery(struct nvme_fc_ctrl *ctrl,
> + char *errmsg);
>
> /* *********************** FC-NVME Port Management ************************ */
>
> @@ -788,7 +790,7 @@ nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl)
> "Reconnect", ctrl->cnum);
>
> set_bit(ASSOC_FAILED, &ctrl->flags);
> - nvme_reset_ctrl(&ctrl->ctrl);
> + nvme_fc_start_ioerr_recovery(ctrl, "Connectivity Loss");
> }
>
> /**
> @@ -985,7 +987,7 @@ fc_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
> static void nvme_fc_ctrl_put(struct nvme_fc_ctrl *);
> static int nvme_fc_ctrl_get(struct nvme_fc_ctrl *);
>
> -static void nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg);
> +static void nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl);
>
> static void
> __nvme_fc_finish_ls_req(struct nvmefc_ls_req_op *lsop)
> @@ -1567,9 +1569,8 @@ nvme_fc_ls_disconnect_assoc(struct nvmefc_ls_rcv_op *lsop)
> * for the association have been ABTS'd by
> * nvme_fc_delete_association().
> */
> -
> - /* fail the association */
> - nvme_fc_error_recovery(ctrl, "Disconnect Association LS received");
> + nvme_fc_start_ioerr_recovery(ctrl,
> + "Disconnect Association LS received");
>
> /* release the reference taken by nvme_fc_match_disconn_ls() */
> nvme_fc_ctrl_put(ctrl);
> @@ -1871,7 +1872,7 @@ nvme_fc_ctrl_ioerr_work(struct work_struct *work)
> struct nvme_fc_ctrl *ctrl =
> container_of(work, struct nvme_fc_ctrl, ioerr_work);
>
> - nvme_fc_error_recovery(ctrl, "transport detected io error");
> + nvme_fc_error_recovery(ctrl);
hmm.. not sure how I feel about this. There is at least a break in reset
processing that is no longer present - e.g. previously the queued
ioerr_work would then queue reset_work, whereas this effectively calls
the reset_work handler directly. I assume it should be ok.
> }
>
> /*
> @@ -1892,6 +1893,17 @@ char *nvme_fc_io_getuuid(struct nvmefc_fcp_req *req)
> }
> EXPORT_SYMBOL_GPL(nvme_fc_io_getuuid);
>
> +static void nvme_fc_start_ioerr_recovery(struct nvme_fc_ctrl *ctrl,
> + char *errmsg)
> +{
> + if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
> + return;
> +
> + dev_warn(ctrl->ctrl.device, "NVME-FC{%d}: starting error recovery %s\n",
> + ctrl->cnum, errmsg);
> + queue_work(nvme_reset_wq, &ctrl->ioerr_work);
> +}
> +
Disagree with this.
The clause in error_recovery around the CONNECTING state is pretty
important: it terminates io occurring during connect/reconnect, where the
ctrl state should not change. We don't want start_ioerr making it RESETTING.
This should be reworked.
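Roughly, I'd expect the CONNECTING clause to stay at the front of the new
entry point, something along these lines (untested sketch, just to show
the shape I mean):

```c
static void nvme_fc_start_ioerr_recovery(struct nvme_fc_ctrl *ctrl,
		char *errmsg)
{
	/*
	 * An error while CONNECTING must not change the ctrl state.
	 * Abort the outstanding ios and let the create_association
	 * error path resolve things, as error_recovery used to do.
	 */
	if (nvme_ctrl_state(&ctrl->ctrl) == NVME_CTRL_CONNECTING) {
		__nvme_fc_abort_outstanding_ios(ctrl, true);
		dev_warn(ctrl->ctrl.device,
			"NVME-FC{%d}: transport error during (re)connect\n",
			ctrl->cnum);
		return;
	}

	/* only the first error transitions the ctrl to RESETTING */
	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING))
		return;

	dev_warn(ctrl->ctrl.device,
		"NVME-FC{%d}: starting error recovery: %s\n",
		ctrl->cnum, errmsg);
	queue_work(nvme_reset_wq, &ctrl->ioerr_work);
}
```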
> static void
> nvme_fc_fcpio_done(struct nvmefc_fcp_req *req)
> {
> @@ -2049,9 +2061,8 @@ nvme_fc_fcpio_done(struct nvmefc_fcp_req *req)
> nvme_fc_complete_rq(rq);
>
> check_error:
> - if (terminate_assoc &&
> - nvme_ctrl_state(&ctrl->ctrl) != NVME_CTRL_RESETTING)
> - queue_work(nvme_reset_wq, &ctrl->ioerr_work);
> + if (terminate_assoc)
> + nvme_fc_start_ioerr_recovery(ctrl, "io error");
this is ok. start_ioerr_recovery will bail out when the ctrl is already
RESETTING, since the state change fails. So this is a little cleaner.
> }
>
> static int
> @@ -2495,39 +2506,6 @@ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
> nvme_unquiesce_admin_queue(&ctrl->ctrl);
> }
>
> -static void
> -nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
> -{
> - enum nvme_ctrl_state state = nvme_ctrl_state(&ctrl->ctrl);
> -
> - /*
> - * if an error (io timeout, etc) while (re)connecting, the remote
> - * port requested terminating of the association (disconnect_ls)
> - * or an error (timeout or abort) occurred on an io while creating
> - * the controller. Abort any ios on the association and let the
> - * create_association error path resolve things.
> - */
> - if (state == NVME_CTRL_CONNECTING) {
> - __nvme_fc_abort_outstanding_ios(ctrl, true);
> - dev_warn(ctrl->ctrl.device,
> - "NVME-FC{%d}: transport error during (re)connect\n",
> - ctrl->cnum);
> - return;
> - }
This logic needs to be preserved. It's no longer part of
nvme_fc_start_ioerr_recovery(). Failures during CONNECTING should not be
"fenced"; they should fail immediately.
> -
> - /* Otherwise, only proceed if in LIVE state - e.g. on first error */
> - if (state != NVME_CTRL_LIVE)
> - return;
This was to filter out multiple requests of the reset. I guess start_ioerr
now gets the same effect when the attempt to set the state to RESETTING
fails because it's already RESETTING.
There is a small difference here: the existing code avoids doing the
ctrl reset if the controller is NEW, whereas start_ioerr will change the
ctrl to RESETTING. I'm not sure how much of an impact that is.
> -
> - dev_warn(ctrl->ctrl.device,
> - "NVME-FC{%d}: transport association event: %s\n",
> - ctrl->cnum, errmsg);
> - dev_warn(ctrl->ctrl.device,
> - "NVME-FC{%d}: resetting controller\n", ctrl->cnum);
I haven't paid much attention to the exact wording, but keeping the
transport messages for these cases is very useful for diagnosis.
> -
> - nvme_reset_ctrl(&ctrl->ctrl);
> -}
> -
> static enum blk_eh_timer_return nvme_fc_timeout(struct request *rq)
> {
> struct nvme_fc_fcp_op *op = blk_mq_rq_to_pdu(rq);
> @@ -2536,24 +2514,14 @@ static enum blk_eh_timer_return nvme_fc_timeout(struct request *rq)
> struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu;
> struct nvme_command *sqe = &cmdiu->sqe;
>
> - /*
> - * Attempt to abort the offending command. Command completion
> - * will detect the aborted io and will fail the connection.
> - */
> dev_info(ctrl->ctrl.device,
> "NVME-FC{%d.%d}: io timeout: opcode %d fctype %d (%s) w10/11: "
> "x%08x/x%08x\n",
> ctrl->cnum, qnum, sqe->common.opcode, sqe->fabrics.fctype,
> nvme_fabrics_opcode_str(qnum, sqe),
> sqe->common.cdw10, sqe->common.cdw11);
> - if (__nvme_fc_abort_op(ctrl, op))
> - nvme_fc_error_recovery(ctrl, "io timeout abort failed");
>
> - /*
> - * the io abort has been initiated. Have the reset timer
> - * restarted and the abort completion will complete the io
> - * shortly. Avoids a synchronous wait while the abort finishes.
> - */
> + nvme_fc_start_ioerr_recovery(ctrl, "io timeout");
Why get rid of the abort logic?
Note: the error recovery/controller reset was only called when the abort
failed.
I believe you should continue to abort the op. The fence logic will
kick in when the op completes later (along with the other io completions).
If nothing else, it allows a hw resource to be freed up.
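i.e. keep the abort in the timeout handler and only start recovery when
the abort can't be issued, roughly (untested, just illustrating):

```c
	/*
	 * Attempt to abort the offending command. This frees the HBA
	 * exchange even if recovery is already underway; when the
	 * aborted io completes, nvme_fc_fcpio_done() will start the
	 * ioerr recovery for us.
	 */
	if (__nvme_fc_abort_op(ctrl, op))
		nvme_fc_start_ioerr_recovery(ctrl, "io timeout abort failed");

	return BLK_EH_RESET_TIMER;
```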
> return BLK_EH_RESET_TIMER;
> }
>
> @@ -3352,6 +3320,26 @@ nvme_fc_reset_ctrl_work(struct work_struct *work)
> }
> }
>
> +static void
> +nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl)
> +{
> + nvme_stop_keep_alive(&ctrl->ctrl);
Curious, why did the stop_keep_alive() call get added to this?
Doesn't hurt.
I assume it was due to the other transports having it, as they originally
called stop_ctrl but then moved to stop_keep_alive. Shouldn't this be
followed by flush_work(&ctrl->ctrl.async_event_work)?
> + nvme_stop_ctrl(&ctrl->ctrl);
> +
> + /* will block while waiting for io to terminate */
> + nvme_fc_delete_association(ctrl);
> +
> + /* Do not reconnect if controller is being deleted */
> + if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING))
> + return;
> +
> + if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE) {
> + queue_delayed_work(nvme_wq, &ctrl->connect_work, 0);
> + return;
> + }
> +
> + nvme_fc_reconnect_or_delete(ctrl, -ENOTCONN);
> +}
This code and that in nvme_fc_reset_ctrl_work() need to be collapsed
into a common helper function invoked by the 2 routines. That also
addresses the missing flush_delayed_work() in this routine.
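Something like this, as a starting point (name and exact ordering are
just a suggestion, untested):

```c
static void
nvme_fc_ctrl_recover(struct nvme_fc_ctrl *ctrl)
{
	nvme_stop_keep_alive(&ctrl->ctrl);
	flush_work(&ctrl->ctrl.async_event_work);
	nvme_stop_ctrl(&ctrl->ctrl);

	/* will block while waiting for io to terminate */
	nvme_fc_delete_association(ctrl);

	/* do not reconnect if the controller is being deleted */
	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING))
		return;

	if (ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE)
		queue_delayed_work(nvme_wq, &ctrl->connect_work, 0);
	else
		nvme_fc_reconnect_or_delete(ctrl, -ENOTCONN);
}
```

and both nvme_fc_error_recovery() and nvme_fc_reset_ctrl_work() become
thin wrappers around it.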
>
> static const struct nvme_ctrl_ops nvme_fc_ctrl_ops = {
> .name = "fc",
-- James
(new email address. can always reach me at james.smart@broadcom.com as well)