From: Mohamed Khalfella <mkhalfella@purestorage.com>
To: Hannes Reinecke <hare@suse.de>
Cc: Justin Tee <justin.tee@broadcom.com>,
Naresh Gottumukkala <nareshgottumukkala83@gmail.com>,
Paul Ely <paul.ely@broadcom.com>,
Chaitanya Kulkarni <kch@nvidia.com>, Jens Axboe <axboe@kernel.dk>,
Keith Busch <kbusch@kernel.org>, Sagi Grimberg <sagi@grimberg.me>,
James Smart <jsmart833426@gmail.com>,
Aaron Dailey <adailey@purestorage.com>,
Randy Jennings <randyj@purestorage.com>,
Dhaval Giani <dgiani@purestorage.com>,
linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4 09/15] nvme: Implement cross-controller reset completion
Date: Tue, 7 Apr 2026 12:09:40 -0700 [thread overview]
Message-ID: <20260407190940.GF2861-mkhalfella@purestorage.com> (raw)
In-Reply-To: <019cf04f-8988-46fd-aecd-0f77ac5f8b8a@suse.de>
On Tue 2026-04-07 07:48:50 +0200, Hannes Reinecke wrote:
> On 3/31/26 18:55, Mohamed Khalfella wrote:
> > On Mon 2026-03-30 12:53:07 +0200, Hannes Reinecke wrote:
> >> On 3/28/26 01:43, Mohamed Khalfella wrote:
> >>> An nvme source controller that issues CCR command expects to receive an
> >>> NVME_AER_NOTICE_CCR_COMPLETED when pending CCR succeeds or fails. Add
> >>> sctrl->ccr_work to read NVME_LOG_CCR logpage and wakeup any thread
> >>> waiting on CCR completion.
> >>>
> >>> Signed-off-by: Mohamed Khalfella <mkhalfella@purestorage.com>
> >>> ---
> >>> drivers/nvme/host/core.c | 49 +++++++++++++++++++++++++++++++++++++++-
> >>> drivers/nvme/host/nvme.h | 1 +
> >>> 2 files changed, 49 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> >>> index 5603ae36444f..793f203bfc38 100644
> >>> --- a/drivers/nvme/host/core.c
> >>> +++ b/drivers/nvme/host/core.c
> >>> @@ -1920,7 +1920,8 @@ EXPORT_SYMBOL_GPL(nvme_set_queue_count);
> >>>
> >>> #define NVME_AEN_SUPPORTED \
> >>> (NVME_AEN_CFG_NS_ATTR | NVME_AEN_CFG_FW_ACT | \
> >>> - NVME_AEN_CFG_ANA_CHANGE | NVME_AEN_CFG_DISC_CHANGE)
> >>> + NVME_AEN_CFG_ANA_CHANGE | NVME_AEN_CFG_CCR_COMPLETE | \
> >>> + NVME_AEN_CFG_DISC_CHANGE)
> >>>
> >>> static void nvme_enable_aen(struct nvme_ctrl *ctrl)
> >>> {
> >>> @@ -4873,6 +4874,47 @@ static void nvme_get_fw_slot_info(struct nvme_ctrl *ctrl)
> >>> kfree(log);
> >>> }
> >>>
> >>> +static void nvme_ccr_work(struct work_struct *work)
> >>> +{
> >>> + struct nvme_ctrl *ctrl = container_of(work, struct nvme_ctrl, ccr_work);
> >>> + struct nvme_ccr_entry *ccr;
> >>> + struct nvme_ccr_log_entry *entry;
> >>> + struct nvme_ccr_log *log;
> >>> + unsigned long flags;
> >>> + int ret, i;
> >>> +
> >>> + log = kmalloc(sizeof(*log), GFP_KERNEL);
> >>> + if (!log)
> >>> + return;
> >>> +
> >>> + ret = nvme_get_log(ctrl, 0, NVME_LOG_CCR, 0x01,
> >>> + 0x00, log, sizeof(*log), 0);
> >>> + if (ret)
> >>> + goto out;
> >>> +
> >>> + spin_lock_irqsave(&ctrl->lock, flags);
> >>> + for (i = 0; i < le16_to_cpu(log->ne); i++) {
> >>> + entry = &log->entries[i];
> >>> + if (entry->ccrs == NVME_CCR_STATUS_IN_PROGRESS)
> >>> + continue;
> >>> +
> >>> + list_for_each_entry(ccr, &ctrl->ccr_list, list) {
> >>> + struct nvme_ctrl *ictrl = ccr->ictrl;
> >>> +
> >>> + if (ictrl->cntlid != le16_to_cpu(entry->icid) ||
> >>> + ictrl->ciu != entry->ciu)
> >>> + continue;
> >>> +
> >>> + /* Complete matching entry */
> >>> + ccr->ccrs = entry->ccrs;
> >>> + complete(&ccr->complete);
> >>> + }
> >>> + }
> >>> + spin_unlock_irqrestore(&ctrl->lock, flags);
> >>> +out:
> >>> + kfree(log);
> >>> +}
> >>> +
> >>> static void nvme_fw_act_work(struct work_struct *work)
> >>> {
> >>> struct nvme_ctrl *ctrl = container_of(work,
> >>> @@ -4949,6 +4991,9 @@ static bool nvme_handle_aen_notice(struct nvme_ctrl *ctrl, u32 result)
> >>> case NVME_AER_NOTICE_DISC_CHANGED:
> >>> ctrl->aen_result = result;
> >>> break;
> >>> + case NVME_AER_NOTICE_CCR_COMPLETED:
> >>> + queue_work(nvme_wq, &ctrl->ccr_work);
> >>> + break;
> >>> default:
> >>> dev_warn(ctrl->device, "async event result %08x\n", result);
> >>> }
> >>> @@ -5144,6 +5189,7 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl)
> >>> nvme_stop_failfast_work(ctrl);
> >>> flush_work(&ctrl->async_event_work);
> >>> cancel_work_sync(&ctrl->fw_act_work);
> >>> + cancel_work_sync(&ctrl->ccr_work);
> >>> if (ctrl->ops->stop_ctrl)
> >>> ctrl->ops->stop_ctrl(ctrl);
> >>> }
> >>> @@ -5267,6 +5313,7 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
> >>> ctrl->quirks = quirks;
> >>> ctrl->numa_node = NUMA_NO_NODE;
> >>> INIT_WORK(&ctrl->scan_work, nvme_scan_work);
> >>> + INIT_WORK(&ctrl->ccr_work, nvme_ccr_work);
> >>> INIT_WORK(&ctrl->async_event_work, nvme_async_event_work);
> >>> INIT_WORK(&ctrl->fw_act_work, nvme_fw_act_work);
> >>> INIT_WORK(&ctrl->delete_work, nvme_delete_ctrl_work);
> >>> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> >>> index f2bcff9ccd25..776ee8aa5a93 100644
> >>> --- a/drivers/nvme/host/nvme.h
> >>> +++ b/drivers/nvme/host/nvme.h
> >>> @@ -419,6 +419,7 @@ struct nvme_ctrl {
> >>> struct nvme_effects_log *effects;
> >>> struct xarray cels;
> >>> struct work_struct scan_work;
> >>> + struct work_struct ccr_work;
> >>> struct work_struct async_event_work;
> >>> struct delayed_work ka_work;
> >>> struct delayed_work failfast_work;
> >>
> >> Hmm. The 'nvme_fence_ctrl' operation introduced in the previous patch
> >> is synchronous, yet in this patch we're looking a a log page to figure
> >> out if the cross-controller reset is complete.
> >> Which is slightly irritating.
> >> Wouldn't it be better to make the 'nvme_fence_ctrl' operation
> >> asynchronous, and then have a separate function to wait for the fence
> >> operation to complete (which then could look at log pages etc)?
> >
> > True, nvme_fence_ctrl() is synchronous, but it runs from ctrl->fencing_work.
> > What is it that you find irritating about nvme_fence_ctrl()?
> >
>
> Thing is, in order to make nvme_fence_ctrl() synchronous we have to
> wait for the operation itself (which is asynchronous) to complete.
> And that wait is itself implemented with a wait queue.
> So we have a workqueue item calling nvme_fence_ctrl(), which then
> waits on a completion.
> And then (if the IRS bit is not set) it waits yet again for the log
> page check.
There is no point in checking the CCR logpage before the AEN arrives. Sure,
we could implement some sort of polling, but I do not think that is the
right approach.
>
> I think we could simplify this by simply making nvme_fence_ctrl()
> asynchronous, which could do away with all the workqueue handling.
I am not sure I understand exactly how nvme_fence_ctrl() can be made
asynchronous. Can you provide example code?
Thread overview: 30+ messages
2026-03-28 0:43 [PATCH v4 00/15] TP8028 Rapid Path Failure Recovery Mohamed Khalfella
2026-03-28 0:43 ` [PATCH v4 01/15] nvmet: Rapid Path Failure Recovery set controller identify fields Mohamed Khalfella
2026-03-30 10:37 ` Hannes Reinecke
2026-03-28 0:43 ` [PATCH v4 02/15] nvmet/debugfs: Export controller CIU and CIRN via debugfs Mohamed Khalfella
2026-03-28 0:43 ` [PATCH v4 03/15] nvmet: Implement CCR nvme command Mohamed Khalfella
2026-03-30 10:45 ` Hannes Reinecke
2026-03-31 16:38 ` Mohamed Khalfella
2026-04-07 5:40 ` Hannes Reinecke
2026-03-28 0:43 ` [PATCH v4 04/15] nvmet: Implement CCR logpage Mohamed Khalfella
2026-03-28 0:43 ` [PATCH v4 05/15] nvmet: Send an AEN on CCR completion Mohamed Khalfella
2026-03-28 0:43 ` [PATCH v4 06/15] nvme: Rapid Path Failure Recovery read controller identify fields Mohamed Khalfella
2026-03-28 0:43 ` [PATCH v4 07/15] nvme: Introduce FENCING and FENCED controller states Mohamed Khalfella
2026-03-30 10:46 ` Hannes Reinecke
2026-03-28 0:43 ` [PATCH v4 08/15] nvme: Implement cross-controller reset recovery Mohamed Khalfella
2026-03-30 10:50 ` Hannes Reinecke
2026-03-31 16:47 ` Mohamed Khalfella
2026-04-07 5:39 ` Hannes Reinecke
2026-04-07 20:46 ` Mohamed Khalfella
2026-03-28 0:43 ` [PATCH v4 09/15] nvme: Implement cross-controller reset completion Mohamed Khalfella
2026-03-30 10:53 ` Hannes Reinecke
2026-03-31 16:55 ` Mohamed Khalfella
2026-04-07 5:48 ` Hannes Reinecke
2026-04-07 19:09 ` Mohamed Khalfella [this message]
2026-03-28 0:43 ` [PATCH v4 10/15] nvme-tcp: Use CCR to recover controller that hits an error Mohamed Khalfella
2026-03-30 11:00 ` Hannes Reinecke
2026-03-28 0:43 ` [PATCH v4 11/15] nvme-rdma: " Mohamed Khalfella
2026-03-28 0:43 ` [PATCH v4 12/15] nvme-fc: Refactor IO error recovery Mohamed Khalfella
2026-03-28 0:43 ` [PATCH v4 13/15] nvme-fc: Use CCR to recover controller that hits an error Mohamed Khalfella
2026-03-28 0:43 ` [PATCH v4 14/15] nvme-fc: Hold inflight requests while in FENCING state Mohamed Khalfella
2026-03-28 0:43 ` [PATCH v4 15/15] nvme-fc: Do not cancel requests in io taget before it is initialized Mohamed Khalfella