public inbox for linux-nvme@lists.infradead.org
From: Mohamed Khalfella <mkhalfella@purestorage.com>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: Chaitanya Kulkarni <kch@nvidia.com>,
	Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Keith Busch <kbusch@kernel.org>,
	Aaron Dailey <adailey@purestorage.com>,
	Randy Jennings <randyj@purestorage.com>,
	John Meneghini <jmeneghi@redhat.com>,
	Hannes Reinecke <hare@suse.de>,
	linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 08/14] nvme: Implement cross-controller reset recovery
Date: Fri, 30 Jan 2026 14:01:19 -0800	[thread overview]
Message-ID: <20260130220119.GE1710902-mkhalfella@purestorage.com> (raw)
In-Reply-To: <cdd44f55-cc5a-4ebe-aaba-6201674b9326@grimberg.me>

On Sun 2026-01-04 23:39:35 +0200, Sagi Grimberg wrote:
> 
> 
> On 01/01/2026 1:43, Mohamed Khalfella wrote:
> > On Sat 2025-12-27 12:14:11 +0200, Sagi Grimberg wrote:
> >>
> >> On 26/11/2025 4:11, Mohamed Khalfella wrote:
> >>> A host that has more than one path connecting to an nvme subsystem
> >>> typically has an nvme controller associated with every path. This is
> >>> mostly applicable to nvmeof. If one path goes down, inflight IOs on that
> >>> path should not be retried immediately on another path because this
> >>> could lead to data corruption as described in TP4129. TP8028 defines
> >>> cross-controller reset mechanism that can be used by host to terminate
> >>> IOs on the failed path using one of the remaining healthy paths. Only
> >>> after IOs are terminated, or long enough time passes as defined by
> >>> TP4129, inflight IOs should be retried on another path. Implement core
> >>> cross-controller reset shared logic to be used by the transports.
> >>>
> >>> Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com>
> >>> ---
> >>>    drivers/nvme/host/constants.c |   1 +
> >>>    drivers/nvme/host/core.c      | 133 ++++++++++++++++++++++++++++++++++
> >>>    drivers/nvme/host/nvme.h      |  10 +++
> >>>    3 files changed, 144 insertions(+)
> >>>
> >>> diff --git a/drivers/nvme/host/constants.c b/drivers/nvme/host/constants.c
> >>> index dc90df9e13a2..f679efd5110e 100644
> >>> --- a/drivers/nvme/host/constants.c
> >>> +++ b/drivers/nvme/host/constants.c
> >>> @@ -46,6 +46,7 @@ static const char * const nvme_admin_ops[] = {
> >>>    	[nvme_admin_virtual_mgmt] = "Virtual Management",
> >>>    	[nvme_admin_nvme_mi_send] = "NVMe Send MI",
> >>>    	[nvme_admin_nvme_mi_recv] = "NVMe Receive MI",
> >>> +	[nvme_admin_cross_ctrl_reset] = "Cross Controller Reset",
> >>>    	[nvme_admin_dbbuf] = "Doorbell Buffer Config",
> >>>    	[nvme_admin_format_nvm] = "Format NVM",
> >>>    	[nvme_admin_security_send] = "Security Send",
> >>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> >>> index f5b84bc327d3..f38b70ca9cee 100644
> >>> --- a/drivers/nvme/host/core.c
> >>> +++ b/drivers/nvme/host/core.c
> >>> @@ -554,6 +554,138 @@ void nvme_cancel_admin_tagset(struct nvme_ctrl *ctrl)
> >>>    }
> >>>    EXPORT_SYMBOL_GPL(nvme_cancel_admin_tagset);
> >>>    
> >>> +static struct nvme_ctrl *nvme_find_ccr_ctrl(struct nvme_ctrl *ictrl,
> >>> +					    u32 min_cntlid)
> >>> +{
> >>> +	struct nvme_subsystem *subsys = ictrl->subsys;
> >>> +	struct nvme_ctrl *sctrl;
> >>> +	unsigned long flags;
> >>> +
> >>> +	mutex_lock(&nvme_subsystems_lock);
> >> This looks like the wrong lock to take here?
> > This is similar to nvme_validate_cntlid()?
> > What is the correct lock to use?
> 
> Not really, its only because it is called from nvme_init_subsystem which 
> spans
> subsystems.

Okay. I will use this lock for now. If this is not the right lock to use,
please point me to the right one.

> 
> >
> >>> +	list_for_each_entry(sctrl, &subsys->ctrls, subsys_entry) {
> >>> +		if (sctrl->cntlid < min_cntlid)
> >>> +			continue;
> >> The use of min_cntlid is not clear to me.
> >>
> >>> +
> >>> +		if (atomic_dec_if_positive(&sctrl->ccr_limit) < 0)
> >>> +			continue;
> >>> +
> >>> +		spin_lock_irqsave(&sctrl->lock, flags);
> >>> +		if (sctrl->state != NVME_CTRL_LIVE) {
> >>> +			spin_unlock_irqrestore(&sctrl->lock, flags);
> >>> +			atomic_inc(&sctrl->ccr_limit);
> >>> +			continue;
> >>> +		}
> >>> +
> >>> +		/*
> >>> +		 * We got a good candidate source controller that is locked and
> >>> +		 * LIVE. However, no guarantee sctrl will not be deleted after
> >>> +		 * sctrl->lock is released. Get a ref of both sctrl and admin_q
> >>> +		 * so they do not disappear until we are done with them.
> >>> +		 */
> >>> +		WARN_ON_ONCE(!blk_get_queue(sctrl->admin_q));
> >>> +		nvme_get_ctrl(sctrl);
> >>> +		spin_unlock_irqrestore(&sctrl->lock, flags);
> >>> +		goto found;
> >>> +	}
> >>> +	sctrl = NULL;
> >>> +found:
> >>> +	mutex_unlock(&nvme_subsystems_lock);
> >>> +	return sctrl;
> >>> +}
> >>> +
> >>> +static int nvme_issue_wait_ccr(struct nvme_ctrl *sctrl, struct nvme_ctrl *ictrl)
> >>> +{
> >>> +	unsigned long flags, tmo, remain;
> >>> +	struct nvme_ccr_entry ccr = { };
> >>> +	union nvme_result res = { 0 };
> >>> +	struct nvme_command c = { };
> >>> +	u32 result;
> >>> +	int ret = 0;
> >>> +
> >>> +	init_completion(&ccr.complete);
> >>> +	ccr.ictrl = ictrl;
> >>> +
> >>> +	spin_lock_irqsave(&sctrl->lock, flags);
> >>> +	list_add_tail(&ccr.list, &sctrl->ccrs);
> >>> +	spin_unlock_irqrestore(&sctrl->lock, flags);
> >>> +
> >>> +	c.ccr.opcode = nvme_admin_cross_ctrl_reset;
> >>> +	c.ccr.ciu = ictrl->ciu;
> >>> +	c.ccr.icid = cpu_to_le16(ictrl->cntlid);
> >>> +	c.ccr.cirn = cpu_to_le64(ictrl->cirn);
> >>> +	ret = __nvme_submit_sync_cmd(sctrl->admin_q, &c, &res,
> >>> +				     NULL, 0, NVME_QID_ANY, 0);
> >>> +	if (ret)
> >>> +		goto out;
> >>> +
> >>> +	result = le32_to_cpu(res.u32);
> >>> +	if (result & 0x01) /* Immediate Reset */
> >>> +		goto out;
> >>> +
> >>> +	tmo = msecs_to_jiffies(max(ictrl->cqt, ictrl->kato * 1000));
> >>> +	remain = wait_for_completion_timeout(&ccr.complete, tmo);
> >>> +	if (!remain)
> >> I think remain is redundant here.
> > Deleted 'remain'.
> >
> >>> +		ret = -EAGAIN;
> >>> +out:
> >>> +	spin_lock_irqsave(&sctrl->lock, flags);
> >>> +	list_del(&ccr.list);
> >>> +	spin_unlock_irqrestore(&sctrl->lock, flags);
> >>> +	return ccr.ccrs == 1 ? 0 : ret;
> >> Why would you still return 0 and not EAGAIN? you expired on timeout but
> >> still
> >> return success if you have ccrs=1? btw you have ccrs in the ccr struct
> >> and in the controller
> >> as a list. Lets rename to distinguish the two.
> > True, we did expire timeout here. However, after we removed the ccr
> > entry we found that it was marked as completed. We return success in
> > this case even though we hit timeout.
> 
> When does this happen? Why is it worth having the code non-intuitive for
> something that effectively never happens (unless I'm missing something?)

Agreed. It is a very low-probability case, so I deleted the check for
this condition.

> 
> >
> > Renamed ctrl->ccrs to ctrl->ccr_list.
> >
> >>> +}
> >>> +
> >>> +unsigned long nvme_recover_ctrl(struct nvme_ctrl *ictrl)
> >>> +{
> >> I'd call it nvme_fence_controller()
> > Okay. I will do that. I will also rename the controller state FENCING.
> >
> >>> +	unsigned long deadline, now, timeout;
> >>> +	struct nvme_ctrl *sctrl;
> >>> +	u32 min_cntlid = 0;
> >>> +	int ret;
> >>> +
> >>> +	timeout = nvme_recovery_timeout_ms(ictrl);
> >>> +	dev_info(ictrl->device, "attempting CCR, timeout %lums\n", timeout);
> >>> +
> >>> +	now = jiffies;
> >>> +	deadline = now + msecs_to_jiffies(timeout);
> >>> +	while (time_before(now, deadline)) {
> >>> +		sctrl = nvme_find_ccr_ctrl(ictrl, min_cntlid);
> >>> +		if (!sctrl) {
> >>> +			/* CCR failed, switch to time-based recovery */
> >>> +			return deadline - now;
> >> It is not clear what is the return code semantics of this function.
> >> How about making it success/failure and have the caller choose what to do?
> > The function returns 0 on success. On failure it returns the time in
> > jiffies to hold requests for before they are canceled. On failure the
> > returned time is essentially the hold time defined in TP4129 minus the
> > time it took to attempt CCR.
> 
> I think it would be cleaner to simple have this function return status 
> code and
> have the caller worry about time spent.

nvme_fence_ctrl() needs to track the time itself. It has to know how
much time was spent attempting CCR in order to decide whether to keep
trying CCR or give up.
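The return-value semantics can be sketched as a minimal userspace model,
assuming hypothetical names: fence_ctrl() stands in for nvme_fence_ctrl(),
and the fixed 30ms cost per attempt and the nr_ctrls parameter are invented
for illustration, not taken from the patch.

```c
#include <assert.h>

/* Sketch: returns 0 when fencing succeeded (or the deadline passed,
 * meaning the hold time already elapsed); otherwise returns the
 * remaining hold time to apply before cancelling requests. */
static long fence_ctrl(long timeout_ms, int nr_ctrls)
{
	long now = 0, deadline = timeout_ms;
	int attempt = 0;

	while (now < deadline) {
		if (++attempt > nr_ctrls)
			return deadline - now; /* no candidates left: time-based recovery */
		now += 30;                     /* each CCR attempt costs ~30ms here */
		if (attempt >= 3)              /* pretend the 3rd controller succeeds */
			return 0;
	}
	return 0; /* deadline reached: hold time already elapsed */
}
```

The point is that the caller never needs its own clock: on failure the
function hands back exactly how much hold time remains.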

> 
> >
> >>> +		}
> >>> +
> >>> +		ret = nvme_issue_wait_ccr(sctrl, ictrl);
> >>> +		atomic_inc(&sctrl->ccr_limit);
> >> inc after you wait for the ccr? shouldn't this be before?
> > I think it should be after we wait for CCR. sctrl->ccr_limit is the
> > number of concurrent CCRs the controller supports. Only after we are
> > done with CCR on this controller we increment it.
> 
> Maybe it should be folded into nvme_issue_wait_ccr for symmetry?

Done.
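The folded version could look roughly like this; a userspace model with a
plain C11 atomic standing in for sctrl->ccr_limit, and issue_wait_ccr() as
a hypothetical stand-in for nvme_issue_wait_ccr():

```c
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int ccr_limit = 2; /* controller supports 2 concurrent CCRs */

/* Kernel-style atomic_dec_if_positive(): decrement only if the result
 * would stay non-negative; return the new value, or -1 if it would not. */
static int dec_if_positive(atomic_int *v)
{
	int old = atomic_load(v);

	while (old > 0) {
		if (atomic_compare_exchange_weak(v, &old, old - 1))
			return old - 1;
	}
	return -1;
}

/* Both the slot acquire and the release live in one function, so a
 * caller cannot forget to return the slot on any exit path. */
static bool issue_wait_ccr(void)
{
	if (dec_if_positive(&ccr_limit) < 0)
		return false; /* controller already at its CCR limit */
	/* ... submit the CCR admin command and wait for completion ... */
	atomic_fetch_add(&ccr_limit, 1); /* release only after the wait */
	return true;
}
```

Releasing after the wait keeps ccr_limit an accurate count of CCRs still
outstanding on the source controller, which is what the limit guards.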

> 
> >
> >>> +
> >>> +		if (!ret) {
> >>> +			dev_info(ictrl->device, "CCR succeeded using %s\n",
> >>> +				 dev_name(sctrl->device));
> >>> +			blk_put_queue(sctrl->admin_q);
> >>> +			nvme_put_ctrl(sctrl);
> >>> +			return 0;
> >>> +		}
> >>> +
> >>> +		/* Try another controller */
> >>> +		min_cntlid = sctrl->cntlid + 1;
> >> OK, I see why min_cntlid is used. That is very non-intuitive.
> >>
> >> I'm wondering if it will be simpler to take one-shot at ccr and
> >> if it fails fallback to crt. I mean, if the sctrl is alive, and it was
> >> unable
> >> to reset the ictrl in time, how would another ctrl do a better job here?
> > We need to attempt CCR from multiple controllers for reason explained in
> > another response. As you figured out min_cntlid is needed in order to
> > not loop controller list forever. Do you have a better idea?
> 
> No, just know that I don't like it very much :)
> 
> >
> >>> +		blk_put_queue(sctrl->admin_q);
> >>> +		nvme_put_ctrl(sctrl);
> >>> +		now = jiffies;
> >>> +	}
> >>> +
> >>> +	dev_info(ictrl->device, "CCR reached timeout, call it done\n");
> >>> +	return 0;
> >>> +}
> >>> +EXPORT_SYMBOL_GPL(nvme_recover_ctrl);
> >>> +
> >>> +void nvme_end_ctrl_recovery(struct nvme_ctrl *ctrl)
> >>> +{
> >>> +	unsigned long flags;
> >>> +
> >>> +	spin_lock_irqsave(&ctrl->lock, flags);
> >>> +	WRITE_ONCE(ctrl->state, NVME_CTRL_RESETTING);
> >> This needs to be a proper state transition.
> > We do not want to have proper transition from RECOVERING to RESETTING.
> > The reason is that we do not want the controller to be reset while it is
> > being recovered/fenced because requests should not be canceled. One way
> > to keep the transitions in nvme_change_ctrl_state() is to use two
> > states. Say FENCING and FENCED.
> >
> > The allowed transitions are
> >
> > - LIVE -> FENCING
> > - FENCING -> FENCED
> > - FENCED -> (RESETTING, DELETING)
> >
> > This will also get rid of NVME_CTRL_RECOVERED
> >
> > Does this sound good?
> 
> We could do what failfast is doing, in case we get transition FENCING -> 
> RESETTING/DELETING we flush
> the fence_work...

Yes. This is what v2 does.
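Roughly, v2's handling can be modeled like this; the state names and the
flush flag are userspace stand-ins for the real controller state machine
and flush_work() on the fence work, not the actual kernel code:

```c
#include <stdbool.h>

enum ctrl_state { LIVE, FENCING, RESETTING, DELETING };

static bool fence_flushed;

/* flush_work() stand-in: in the kernel this would wait for the
 * in-flight fence work to finish before the transition proceeds. */
static void flush_fence_work(void)
{
	fence_flushed = true;
}

/* Only LIVE -> FENCING and FENCING -> RESETTING/DELETING are allowed;
 * leaving FENCING flushes the fence work first, so requests are not
 * cancelled while the controller is still being fenced. */
static bool change_state(enum ctrl_state *st, enum ctrl_state new_st)
{
	bool ok = (*st == LIVE && new_st == FENCING) ||
		  (*st == FENCING &&
		   (new_st == RESETTING || new_st == DELETING));

	if (!ok)
		return false;
	if (*st == FENCING)
		flush_fence_work();
	*st = new_st;
	return true;
}
```

This mirrors what failfast does with failfast_work: the transition itself
stays simple, and the synchronization lives in the flush.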


