From: Mohamed Khalfella <mkhalfella@purestorage.com>
To: Randy Jennings <randyj@purestorage.com>
Cc: Chaitanya Kulkarni <kch@nvidia.com>,
Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
Keith Busch <kbusch@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
Aaron Dailey <adailey@purestorage.com>,
John Meneghini <jmeneghi@redhat.com>,
Hannes Reinecke <hare@suse.de>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 06/14] nvme: Rapid Path Failure Recovery read controller identify fields
Date: Fri, 2 Jan 2026 11:06:57 -0800
Message-ID: <20260102190657.GT3864520-mkhalfella@purestorage.com>
In-Reply-To: <20251231222637.GL3864520-mkhalfella@purestorage.com>
On Wed 2025-12-31 14:26:39 -0800, Mohamed Khalfella wrote:
> On Thu 2025-12-18 07:22:41 -0800, Randy Jennings wrote:
> > On Tue, Nov 25, 2025 at 6:13 PM Mohamed Khalfella
> > <mkhalfella@purestorage.com> wrote:
> > >
> > > TP2028 Rapid path failure added new fields to controller identify
> > TP8028
>
> Fixed.
>
> > > response. Read CIU (Controller Instance Uniquifier), CIRN (Controller
> > > Instance Random Number), and CCRL (Cross-Controller Reset Limit) from
> > > controller identify response. Expose CIU and CIRN as sysfs attributes
> > > so the values can be used directly by the user if needed.
> > >
> > > TP4129 KATO Corrections and Clarifications defined CQT (Command Quiesce
> > > Time) which is used along with KATO (Keep Alive Timeout) to set an upper
> > > limite for attempting Cross-Controller Recovery.
> > "limite" -> "limit"
>
> Fixed.
>
> > >
> > > Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com>
> > > ---
> > > drivers/nvme/host/core.c | 5 +++++
> > > drivers/nvme/host/nvme.h | 11 +++++++++++
> > > drivers/nvme/host/sysfs.c | 23 +++++++++++++++++++++++
> > > 3 files changed, 39 insertions(+)
> > >
> > > diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> > > index fa4181d7de73..aa007a7b9606 100644
> > > --- a/drivers/nvme/host/core.c
> > > +++ b/drivers/nvme/host/core.c
> > > @@ -3572,12 +3572,17 @@ static int nvme_init_identify(struct nvme_ctrl *ctrl)
> > > ctrl->crdt[1] = le16_to_cpu(id->crdt2);
> > > ctrl->crdt[2] = le16_to_cpu(id->crdt3);
> > >
> > > + ctrl->ciu = id->ciu;
> > > + ctrl->cirn = le64_to_cpu(id->cirn);
> > > + atomic_set(&ctrl->ccr_limit, id->ccrl);
> > Seems like it would be good for the target & init to use the same
> > name for these fields. I have a preference for these over
> > instance_uniquifier and random because they are more concise, but
> > the preference is not strong.
>
> The field names in the spec are concise, but they are also cryptic.
>
> >
> > > +
> > > ctrl->oacs = le16_to_cpu(id->oacs);
> > > ctrl->oncs = le16_to_cpu(id->oncs);
> > > ctrl->mtfa = le16_to_cpu(id->mtfa);
> > > ctrl->oaes = le32_to_cpu(id->oaes);
> > > ctrl->wctemp = le16_to_cpu(id->wctemp);
> > > ctrl->cctemp = le16_to_cpu(id->cctemp);
> > > + ctrl->cqt = le16_to_cpu(id->cqt);
> > >
> > > atomic_set(&ctrl->abort_limit, id->acl + 1);
> > > ctrl->vwc = id->vwc;
> > I cannot discern an ordering to the attributes set here. Any
> > particular reason, you placed cqt away from the others you added?
>
> No reason. Moved ctrl->cqt initialization up with other fields.
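
For reference, the reordered hunk in the next version should look
roughly like this (sketch only, not the exact final diff):

	/* TP8028 and TP4129 fields grouped together */
	ctrl->ciu = id->ciu;
	ctrl->cirn = le64_to_cpu(id->cirn);
	ctrl->cqt = le16_to_cpu(id->cqt);
	atomic_set(&ctrl->ccr_limit, id->ccrl);

	ctrl->oacs = le16_to_cpu(id->oacs);
	...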
>
> >
> > > diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
> > > index 29430949ce2f..ae36249ad61e 100644
> > > --- a/drivers/nvme/host/sysfs.c
> > > +++ b/drivers/nvme/host/sysfs.c
> > > @@ -388,6 +388,27 @@ nvme_show_int_function(queue_count);
> > > nvme_show_int_function(sqsize);
> > > nvme_show_int_function(kato);
> > >
> > > +static ssize_t nvme_sysfs_uniquifier_show(struct device *dev,
> > > + struct device_attribute *attr,
> > > + char *buf)
> > > +{
> > > + struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
> > > +
> > > + return sysfs_emit(buf, "%02x\n", ctrl->ciu);
> > > +}
> > > +static DEVICE_ATTR(uniquifier, S_IRUGO, nvme_sysfs_uniquifier_show, NULL);
> > > +
> > > +static ssize_t nvme_sysfs_random_show(struct device *dev,
> > > + struct device_attribute *attr,
> > > + char *buf)
> > > +{
> > > + struct nvme_ctrl *ctrl = dev_get_drvdata(dev);
> > > +
> > > + return sysfs_emit(buf, "%016llx\n", ctrl->cirn);
> > > +}
> > > +static DEVICE_ATTR(random, S_IRUGO, nvme_sysfs_random_show, NULL);
> > > +
> > > +
> > > static ssize_t nvme_sysfs_delete(struct device *dev,
> > > struct device_attribute *attr, const char *buf,
> > > size_t count)
> > > @@ -734,6 +755,8 @@ static struct attribute *nvme_dev_attrs[] = {
> > > &dev_attr_numa_node.attr,
> > > &dev_attr_queue_count.attr,
> > > &dev_attr_sqsize.attr,
> > > + &dev_attr_uniquifier.attr,
> > > + &dev_attr_random.attr,
> > > &dev_attr_hostnqn.attr,
> > > &dev_attr_hostid.attr,
> > > &dev_attr_ctrl_loss_tmo.attr,
> > > --
> > > 2.51.2
> > >
> >
> > These are the names used in the target code (uniquifier & random).
> > I'd rather have them match (identify structure will have the spec's
> > abbreviations; ctrl & debugfs/sysfs for target & initiator either be
> > ciu/cirn or uniquifier/random).
>
> I think it matters for sysfs attributes. I do not know the right thing
> to do. Should we use spec names like "cirn" or call it "random"?
Now that I think about it more, sticking to the spec's abbreviations
makes more sense here. Names like "random" and "uniquifier" in sysfs are
not descriptive enough. I changed the struct member names and the sysfs
and debugfs file names to match the spec.
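
For reference, the renamed sysfs attributes should end up looking
roughly like this (untested sketch following the existing style in
sysfs.c, not the final diff):

static ssize_t nvme_sysfs_ciu_show(struct device *dev,
				   struct device_attribute *attr,
				   char *buf)
{
	struct nvme_ctrl *ctrl = dev_get_drvdata(dev);

	/* CIU: Controller Instance Uniquifier, single byte from Identify */
	return sysfs_emit(buf, "%02x\n", ctrl->ciu);
}
static DEVICE_ATTR(ciu, S_IRUGO, nvme_sysfs_ciu_show, NULL);

static ssize_t nvme_sysfs_cirn_show(struct device *dev,
				    struct device_attribute *attr,
				    char *buf)
{
	struct nvme_ctrl *ctrl = dev_get_drvdata(dev);

	/* CIRN: Controller Instance Random Number, 64-bit value */
	return sysfs_emit(buf, "%016llx\n", ctrl->cirn);
}
static DEVICE_ATTR(cirn, S_IRUGO, nvme_sysfs_cirn_show, NULL);

with matching &dev_attr_ciu.attr and &dev_attr_cirn.attr entries in
nvme_dev_attrs[], and the same names used for the nvmet debugfs files.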
>
> >
> > But this is small stuff.
> >
> > Reviewed-by: Randy Jennings <randyj@purestorage.com>