From: Mohamed Khalfella <mkhalfella@purestorage.com>
To: Hannes Reinecke <hare@suse.de>
Cc: Justin Tee <justin.tee@broadcom.com>,
Naresh Gottumukkala <nareshgottumukkala83@gmail.com>,
Paul Ely <paul.ely@broadcom.com>,
Chaitanya Kulkarni <kch@nvidia.com>,
Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
Keith Busch <kbusch@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
Aaron Dailey <adailey@purestorage.com>,
Randy Jennings <randyj@purestorage.com>,
Dhaval Giani <dgiani@purestorage.com>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 01/14] nvmet: Rapid Path Failure Recovery set controller identify fields
Date: Tue, 3 Feb 2026 10:14:57 -0800
Message-ID: <20260203181457.GA3729-mkhalfella@purestorage.com>
In-Reply-To: <59a8d510-d06d-4d35-b911-c758c184df52@suse.de>
On Tue 2026-02-03 04:03:22 +0100, Hannes Reinecke wrote:
> On 1/30/26 23:34, Mohamed Khalfella wrote:
> > TP8028 Rapid Path Failure Recovery defined new fields in controller
> > identify response. The newly defined fields are:
> >
> > - CIU (Controller Instance Uniquifier): an 8-bit non-zero value that
> > is assigned a random value when the controller is first created. The
> > value is incremented every time the RDY bit in the CSTS register is
> > asserted.
> > - CIRN (Controller Instance Random Number): a 64-bit random value that
> > is generated when the controller is created. CIRN is regenerated every
> > time the RDY bit in the CSTS register is asserted.
> > - CCRL (Cross-Controller Reset Limit): an 8-bit value that defines the
> > maximum number of in-progress controller reset operations. CCRL is
> > hardcoded to 4 as recommended by TP8028.
> >
> > TP4129 KATO Corrections and Clarifications defined CQT (Command Quiesce
> > Time), which is used along with KATO (Keep Alive Timeout) to set an
> > upper time limit for attempting Cross-Controller Recovery. For an NVMe
> > subsystem, CQT is set to 0 by default to keep the current behavior. The
> > value can be set from configfs if needed.
> >
> > Make the new fields available for IO controllers only since TP8028 is
> > not very useful for discovery controllers.
> >
> > Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com>
> > ---
> > drivers/nvme/target/admin-cmd.c | 6 ++++++
> > drivers/nvme/target/configfs.c | 31 +++++++++++++++++++++++++++++++
> > drivers/nvme/target/core.c | 12 ++++++++++++
> > drivers/nvme/target/nvmet.h | 4 ++++
> > include/linux/nvme.h | 15 ++++++++++++---
> > 5 files changed, 65 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/nvme/target/admin-cmd.c b/drivers/nvme/target/admin-cmd.c
> > index 3da31bb1183e..ade1145df72d 100644
> > --- a/drivers/nvme/target/admin-cmd.c
> > +++ b/drivers/nvme/target/admin-cmd.c
> > @@ -696,6 +696,12 @@ static void nvmet_execute_identify_ctrl(struct nvmet_req *req)
> >
> > id->cntlid = cpu_to_le16(ctrl->cntlid);
> > id->ver = cpu_to_le32(ctrl->subsys->ver);
> > + if (!nvmet_is_disc_subsys(ctrl->subsys)) {
> > + id->cqt = cpu_to_le16(ctrl->cqt);
> > + id->ciu = ctrl->ciu;
> > + id->cirn = cpu_to_le64(ctrl->cirn);
> > + id->ccrl = NVMF_CCR_LIMIT;
> > + }
> >
> > /* XXX: figure out what to do about RTD3R/RTD3 */
> > id->oaes = cpu_to_le32(NVMET_AEN_CFG_OPTIONAL);
> > diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
> > index e44ef69dffc2..035f6e75a818 100644
> > --- a/drivers/nvme/target/configfs.c
> > +++ b/drivers/nvme/target/configfs.c
> > @@ -1636,6 +1636,36 @@ static ssize_t nvmet_subsys_attr_pi_enable_store(struct config_item *item,
> > CONFIGFS_ATTR(nvmet_subsys_, attr_pi_enable);
> > #endif
> >
> > +static ssize_t nvmet_subsys_attr_cqt_show(struct config_item *item,
> > + char *page)
> > +{
> > + return snprintf(page, PAGE_SIZE, "%u\n", to_subsys(item)->cqt);
> > +}
> > +
> > +static ssize_t nvmet_subsys_attr_cqt_store(struct config_item *item,
> > + const char *page, size_t cnt)
> > +{
> > + struct nvmet_subsys *subsys = to_subsys(item);
> > + struct nvmet_ctrl *ctrl;
> > + u16 cqt;
> > +
> > + if (sscanf(page, "%hu\n", &cqt) != 1)
> > + return -EINVAL;
> > +
> > + down_write(&nvmet_config_sem);
> > + if (subsys->cqt == cqt)
> > + goto out;
> > +
> > + subsys->cqt = cqt;
> > + /* Force reconnect */
> > + list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry)
> > + ctrl->ops->delete_ctrl(ctrl);
> > +out:
> > + up_write(&nvmet_config_sem);
> > + return cnt;
> > +}
> > +CONFIGFS_ATTR(nvmet_subsys_, attr_cqt);
> > +
> > static ssize_t nvmet_subsys_attr_qid_max_show(struct config_item *item,
> > char *page)
> > {
> > @@ -1676,6 +1706,7 @@ static struct configfs_attribute *nvmet_subsys_attrs[] = {
> > &nvmet_subsys_attr_attr_vendor_id,
> > &nvmet_subsys_attr_attr_subsys_vendor_id,
> > &nvmet_subsys_attr_attr_model,
> > + &nvmet_subsys_attr_attr_cqt,
> > &nvmet_subsys_attr_attr_qid_max,
> > &nvmet_subsys_attr_attr_ieee_oui,
> > &nvmet_subsys_attr_attr_firmware,
>
> I do think that TP8028 (ie the CQT definitions) is somewhat independent
> of CCR. So I'm not sure if they should be integrated in this patchset;
> personally I would prefer to have it moved to another patchset.
Agreed that CQT is not directly related to CCR from the target
perspective. But there is a relationship when it comes to how the
initiator uses CQT to calculate the time budget for CCR. As you know, on
the host side, if CCR fails and CQT is supported, the requests need to
be held for a certain amount of time before they are retried. So the CQT
value is needed, and that is why I included it in this patchset.
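To make that budget concrete, here is a minimal C sketch of the
host-side logic described above: when CQT is 0 no hold is imposed,
otherwise failed requests are held for up to KATO + CQT before retry.
The struct and helper names below are illustrative only, not taken from
the patch:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch (not from the patch): how an initiator might
 * derive the request hold time after a failed CCR attempt. A CQT of 0
 * means the target does not require holding requests, so failover
 * proceeds with no added delay (the legacy behavior).
 */
struct ccr_budget {
	uint16_t kato_ms;	/* Keep Alive Timeout, milliseconds */
	uint16_t cqt_ms;	/* Command Quiesce Time, milliseconds */
};

/* Upper bound on the time spent holding requests after a failed CCR. */
static inline unsigned int ccr_hold_time_ms(const struct ccr_budget *b)
{
	if (!b->cqt_ms)		/* CQT == 0: keep no-delay failover */
		return 0;
	return (unsigned int)b->kato_ms + b->cqt_ms;
}
```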
>
> > diff --git a/drivers/nvme/target/core.c b/drivers/nvme/target/core.c
> > index cc88e5a28c8a..0d2a1206e08f 100644
> > --- a/drivers/nvme/target/core.c
> > +++ b/drivers/nvme/target/core.c
> > @@ -1393,6 +1393,10 @@ static void nvmet_start_ctrl(struct nvmet_ctrl *ctrl)
> > return;
> > }
> >
> > + if (!nvmet_is_disc_subsys(ctrl->subsys)) {
> > + ctrl->ciu = ((u8)(ctrl->ciu + 1)) ? : 1;
> > + ctrl->cirn = get_random_u64();
> > + }
> > ctrl->csts = NVME_CSTS_RDY;
> >
> > /*
> > @@ -1661,6 +1665,12 @@ struct nvmet_ctrl *nvmet_alloc_ctrl(struct nvmet_alloc_ctrl_args *args)
> > }
> > ctrl->cntlid = ret;
> >
> > + if (!nvmet_is_disc_subsys(ctrl->subsys)) {
> > + ctrl->cqt = subsys->cqt;
> > + ctrl->ciu = get_random_u8() ? : 1;
> > + ctrl->cirn = get_random_u64();
> > + }
> > +
> > /*
> > * Discovery controllers may use some arbitrary high value
> > * in order to cleanup stale discovery sessions
> > @@ -1853,10 +1863,12 @@ struct nvmet_subsys *nvmet_subsys_alloc(const char *subsysnqn,
> >
> > switch (type) {
> > case NVME_NQN_NVME:
> > + subsys->cqt = NVMF_CQT_MS;
> > subsys->max_qid = NVMET_NR_QUEUES;
> > break;
>
> And I would not set the CQT default here.
> Thing is, implementing CQT to the letter would inflict a CQT delay
> during failover for _every_ installation, thereby resulting in a
> regression to previous implementations where we would fail over
> with _no_ delay.
> So again, we should make it a different patchset.
CQT defaults to 0 to avoid introducing a surprise delay. The initiator
will skip holding requests if it sees CQT set to 0.
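For reference, the CIU update quoted above, `((u8)(ctrl->ciu + 1)) ? : 1`,
keeps the 8-bit value non-zero by wrapping 255 -> 1 and skipping 0. A
small sketch of that semantics (the helper name is ours, not from the
patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the CIU increment from the quoted hunks: CIU is an 8-bit
 * value that must stay non-zero, so the increment wraps 255 -> 1,
 * skipping 0. Mirrors the ((u8)(ctrl->ciu + 1)) ? : 1 expression.
 */
static inline uint8_t ciu_advance(uint8_t ciu)
{
	uint8_t next = (uint8_t)(ciu + 1);

	return next ? next : 1;	/* 0 is reserved; wrap to 1 */
}
```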