From: Mohamed Khalfella <mkhalfella@purestorage.com>
To: Hannes Reinecke <hare@suse.de>
Cc: Justin Tee <justin.tee@broadcom.com>,
Naresh Gottumukkala <nareshgottumukkala83@gmail.com>,
Paul Ely <paul.ely@broadcom.com>,
Chaitanya Kulkarni <kch@nvidia.com>,
Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
Keith Busch <kbusch@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
Aaron Dailey <adailey@purestorage.com>,
Randy Jennings <randyj@purestorage.com>,
Dhaval Giani <dgiani@purestorage.com>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 10/14] nvme-tcp: Use CCR to recover controller that hits an error
Date: Tue, 3 Feb 2026 13:24:09 -0800
Message-ID: <20260203212409.GG3729-mkhalfella@purestorage.com>
In-Reply-To: <48a05027-9ca2-4e84-a7ac-946391ed1e26@suse.de>
On Tue 2026-02-03 06:34:51 +0100, Hannes Reinecke wrote:
> On 1/30/26 23:34, Mohamed Khalfella wrote:
> > An alive nvme controller that hits an error now will move to FENCING
> > state instead of RESETTING state. ctrl->fencing_work attempts CCR to
> > terminate inflight IOs. If CCR succeeds, switch to FENCED -> RESETTING
> > and continue error recovery as usual. If CCR fails, the behavior depends
> > on whether the subsystem supports CQT or not. If CQT is not supported
> > then reset the controller immediately as if CCR succeeded in order to
> > maintain the current behavior. If CQT is supported, switch to time-based
> > recovery. The scheduled ctrl->fenced_work resets the controller when
> > time-based recovery finishes.
> >
> > Either ctrl->err_work or ctrl->reset_work can run after a controller is
> > fenced. Flush the fencing work when either one runs.
> >
> > Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com>
> > ---
> > drivers/nvme/host/tcp.c | 62 ++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 61 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> > index 69cb04406b47..af8d3b36a4bb 100644
> > --- a/drivers/nvme/host/tcp.c
> > +++ b/drivers/nvme/host/tcp.c
> > @@ -193,6 +193,8 @@ struct nvme_tcp_ctrl {
> > struct sockaddr_storage src_addr;
> > struct nvme_ctrl ctrl;
> >
> > + struct work_struct fencing_work;
> > + struct delayed_work fenced_work;
> > struct work_struct err_work;
> > struct delayed_work connect_work;
> > struct nvme_tcp_request async_req;
> > @@ -611,6 +613,12 @@ static void nvme_tcp_init_recv_ctx(struct nvme_tcp_queue *queue)
> >
> > static void nvme_tcp_error_recovery(struct nvme_ctrl *ctrl)
> > {
> > + if (nvme_change_ctrl_state(ctrl, NVME_CTRL_FENCING)) {
> > + dev_warn(ctrl->device, "starting controller fencing\n");
> > + queue_work(nvme_wq, &to_tcp_ctrl(ctrl)->fencing_work);
> > + return;
> > + }
> > +
>
> Don't you need to flush any outstanding 'fenced_work' queue items here
> before calling 'queue_work()'?
I do not think we need to flush ctrl->fencing_work. It cannot be running
at this time.
>
> > if (!nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> > return;
> >
> > @@ -2470,12 +2478,59 @@ static void nvme_tcp_reconnect_ctrl_work(struct work_struct *work)
> > nvme_tcp_reconnect_or_remove(ctrl, ret);
> > }
> >
> > +static void nvme_tcp_fenced_work(struct work_struct *work)
> > +{
> > + struct nvme_tcp_ctrl *tcp_ctrl = container_of(to_delayed_work(work),
> > + struct nvme_tcp_ctrl, fenced_work);
> > + struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
> > +
> > + nvme_change_ctrl_state(ctrl, NVME_CTRL_FENCED);
> > + if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> > + queue_work(nvme_reset_wq, &tcp_ctrl->err_work);
> > +}
> > +
> > +static void nvme_tcp_fencing_work(struct work_struct *work)
> > +{
> > + struct nvme_tcp_ctrl *tcp_ctrl = container_of(work,
> > + struct nvme_tcp_ctrl, fencing_work);
> > + struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
> > + unsigned long rem;
> > +
> > + rem = nvme_fence_ctrl(ctrl);
> > + if (!rem)
> > + goto done;
> > +
> > + if (!ctrl->cqt) {
> > + dev_info(ctrl->device,
> > + "CCR failed, CQT not supported, skip time-based recovery\n");
> > + goto done;
> > + }
> > +
>
> As mentioned, cqt handling should be part of another patchset.
Let us suppose we drop CQT from this patchset:
- How would we calculate the CCR time budget? Currently it is
  calculated by nvme_fence_timeout_ms().
- What should we do if CCR fails? Retry requests immediately?
> > + dev_info(ctrl->device,
> > + "CCR failed, switch to time-based recovery, timeout = %ums\n",
> > + jiffies_to_msecs(rem));
> > + queue_delayed_work(nvme_wq, &tcp_ctrl->fenced_work, rem);
> > + return;
> > +
>
> Why do you need the 'fenced' workqueue at all? All it does is queing yet
> another workqueue item, which certainly can be done from the 'fencing'
> workqueue directly, no?
It is possible to drop ctrl->fenced_work and requeue ctrl->fencing_work
as delayed work to implement the request hold time. If we do that, then
we need to modify nvme_tcp_fencing_work() to tell whether it is being
called for 'fencing' or 'fenced'. The first version of this patch used a
controller flag, RECOVERED, for that, and it was suggested to use a
separate work item to simplify the logic and drop the controller flag.
>
> > +done:
> > + nvme_change_ctrl_state(ctrl, NVME_CTRL_FENCED);
> > + if (nvme_change_ctrl_state(ctrl, NVME_CTRL_RESETTING))
> > + queue_work(nvme_reset_wq, &tcp_ctrl->err_work);
> > +}
> > +
> > +static void nvme_tcp_flush_fencing_work(struct nvme_ctrl *ctrl)
> > +{
> > + flush_work(&to_tcp_ctrl(ctrl)->fencing_work);
> > + flush_delayed_work(&to_tcp_ctrl(ctrl)->fenced_work);
> > +}
> > +
> > static void nvme_tcp_error_recovery_work(struct work_struct *work)
> > {
> > struct nvme_tcp_ctrl *tcp_ctrl = container_of(work,
> > struct nvme_tcp_ctrl, err_work);
> > struct nvme_ctrl *ctrl = &tcp_ctrl->ctrl;
> >
> > + nvme_tcp_flush_fencing_work(ctrl);
>
> Why not 'fenced_work' ?
You mean rename nvme_tcp_flush_fencing_work() to
nvme_tcp_flush_fenced_work()?
If yes, then I can do that if you think it makes more sense.
>
> > if (nvme_tcp_key_revoke_needed(ctrl))
> > nvme_auth_revoke_tls_key(ctrl);
> > nvme_stop_keep_alive(ctrl);
> > @@ -2518,6 +2573,7 @@ static void nvme_reset_ctrl_work(struct work_struct *work)
> > container_of(work, struct nvme_ctrl, reset_work);
> > int ret;
> >
> > + nvme_tcp_flush_fencing_work(ctrl);
>
> Same.
>
> > if (nvme_tcp_key_revoke_needed(ctrl))
> > nvme_auth_revoke_tls_key(ctrl);
> > nvme_stop_ctrl(ctrl);
> > @@ -2643,13 +2699,15 @@ static enum blk_eh_timer_return nvme_tcp_timeout(struct request *rq)
> > struct nvme_tcp_cmd_pdu *pdu = nvme_tcp_req_cmd_pdu(req);
> > struct nvme_command *cmd = &pdu->cmd;
> > int qid = nvme_tcp_queue_id(req->queue);
> > + enum nvme_ctrl_state state;
> >
> > dev_warn(ctrl->device,
> > "I/O tag %d (%04x) type %d opcode %#x (%s) QID %d timeout\n",
> > rq->tag, nvme_cid(rq), pdu->hdr.type, cmd->common.opcode,
> > nvme_fabrics_opcode_str(qid, cmd), qid);
> >
> > - if (nvme_ctrl_state(ctrl) != NVME_CTRL_LIVE) {
> > + state = nvme_ctrl_state(ctrl);
> > + if (state != NVME_CTRL_LIVE && state != NVME_CTRL_FENCING) {
>
> 'FENCED' too, presumably?
I do not think it makes a difference here. FENCED and RESETTING are
almost the same states.
>
> > /*
> > * If we are resetting, connecting or deleting we should
> > * complete immediately because we may block controller
> > @@ -2904,6 +2962,8 @@ static struct nvme_tcp_ctrl *nvme_tcp_alloc_ctrl(struct device *dev,
> >
> > INIT_DELAYED_WORK(&ctrl->connect_work,
> > nvme_tcp_reconnect_ctrl_work);
> > + INIT_DELAYED_WORK(&ctrl->fenced_work, nvme_tcp_fenced_work);
> > + INIT_WORK(&ctrl->fencing_work, nvme_tcp_fencing_work);
> > INIT_WORK(&ctrl->err_work, nvme_tcp_error_recovery_work);
> > INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work);
> >
>
> Here you are calling CCR whenever error recovery is triggered.
> This will cause CCR to be send from a command timeout, which is
> technically wrong (CCR should be send when the KATO timeout expires,
> not when a command timout expires). Both could be vastly different.
KATO is driven by the host. What does "KATO expires" mean here?
I think KATO expiry is more applicable to the target, no? A KATO
timeout signals that the target is not reachable or that something is
wrong with the target?
>
> So I'd prefer to have CCR send whenever KATO timeout triggers, and
> lease to current command timeout mechanism in place.
Assuming we use CCR only when a KATO request times out, what should we
do when we hit other errors? nvme_tcp_error_recovery() is called from
many places to handle errors, and it effectively resets the controller.
What should this function do if not trigger CCR?
>
> Cheers,
>
> Hannes
> --
> Dr. Hannes Reinecke Kernel Storage Architect
> hare@suse.de +49 911 74053 688
> SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
> HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich