From: Nilay Shroff <nilay@linux.ibm.com>
To: Keith Busch <kbusch@kernel.org>
Cc: linux-nvme@lists.infradead.org, hch@lst.de, sagi@grimberg.me,
gjoyce@linux.ibm.com, axboe@fb.com
Subject: Re: [PATCH v3 1/1] nvme-pci : Fix EEH failure on ppc after subsystem reset
Date: Sat, 22 Jun 2024 20:37:02 +0530 [thread overview]
Message-ID: <653be192-cfa7-45d6-adf0-2a863960630a@linux.ibm.com> (raw)
In-Reply-To: <ZnWsXmRdCCgMwL5V@kbusch-mbp>
On 6/21/24 22:07, Keith Busch wrote:
> On Tue, Jun 04, 2024 at 02:40:04PM +0530, Nilay Shroff wrote:
>> When executed, the NVMe subsystem reset command may cause the NVMe
>> adapter to lose communication with the kernel. Today the only ways
>> to recover the adapter are to re-enumerate the PCI bus, hot-plug the
>> NVMe disk, or reboot the OS.
>>
>> The PPC architecture supports a mechanism called EEH (enhanced error
>> handling) which allows PCI bus errors to be cleared and a PCI card to
>> be reset, without having to physically hot-plug the NVMe disk or
>> reboot the OS.
>>
>> In the current implementation, when the user executes the nvme
>> subsystem reset command and the kernel loses communication with the
>> NVMe adapter, subsequent reads/writes to the PCIe config space of
>> the device fail. Failing to read/write PCI config space makes the
>> NVMe driver assume permanent loss of communication with the device,
>> so the driver marks the NVMe controller dead and frees all resources
>> associated with that controller. Once the NVMe controller is dead,
>> EEH recovery cannot succeed.
>>
>> This patch fixes the issue: if communication with the NVMe adapter
>> is lost after the user executes the subsystem reset command and EEH
>> recovery is initiated, we allow the EEH recovery to make forward
>> progress, giving the EEH thread a fair chance to recover the
>> adapter. If the EEH thread cannot recover the adapter communication,
>> it sets the PCI channel state of the erring adapter to "permanent
>> failure" and removes the device.
>
> I think the driver is trying to do too much here by trying to handle the
> subsystem reset inline with the reset request. It would surely fail, but
> that was the idea because we had been expecting pciehp to re-enumerate.
> But there are other possibilities, like your EEH, or others like DPC and
> AER could do different handling instead of a bus rescan. So, perhaps the
> problem is just the subsystem reset handling. Maybe just don't proceed
> with the reset handling, and it'll be fine?
>
> ---
> diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
> index 68b400f9c42d5..97ed33d9046d4 100644
> --- a/drivers/nvme/host/nvme.h
> +++ b/drivers/nvme/host/nvme.h
> @@ -646,9 +646,12 @@ static inline void nvme_should_fail(struct request *req) {}
>
> bool nvme_wait_reset(struct nvme_ctrl *ctrl);
> int nvme_try_sched_reset(struct nvme_ctrl *ctrl);
> +bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
> + enum nvme_ctrl_state new_state);
>
> static inline int nvme_reset_subsystem(struct nvme_ctrl *ctrl)
> {
> + u32 val;
> int ret;
>
> if (!ctrl->subsystem)
> @@ -657,10 +660,10 @@ static inline int nvme_reset_subsystem(struct nvme_ctrl *ctrl)
> return -EBUSY;
>
> ret = ctrl->ops->reg_write32(ctrl, NVME_REG_NSSR, 0x4E564D65);
> - if (ret)
> - return ret;
> -
> - return nvme_try_sched_reset(ctrl);
> + nvme_change_ctrl_state(ctrl, NVME_CTRL_LIVE);
> + if (!ret)
> + ctrl->ops->reg_read32(ctrl, NVME_REG_CSTS, &val);
> + return ret;
> }
>
> /*
> @@ -786,8 +789,6 @@ blk_status_t nvme_host_path_error(struct request *req);
> bool nvme_cancel_request(struct request *req, void *data);
> void nvme_cancel_tagset(struct nvme_ctrl *ctrl);
> void nvme_cancel_admin_tagset(struct nvme_ctrl *ctrl);
> -bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
> - enum nvme_ctrl_state new_state);
> int nvme_disable_ctrl(struct nvme_ctrl *ctrl, bool shutdown);
> int nvme_enable_ctrl(struct nvme_ctrl *ctrl);
> int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
> --
>
This is a nice idea! These changes look good. I tested this on powerpc with
EEH and observed that, after an nvme subsystem-reset, EEH is able to recover the disk.
I also tested it on a platform which *doesn't* support EEH or PCI error recovery,
and on that platform the nvme disk falls through to the dead state, as expected.
So I think you may submit a formal patch with this change.
Thanks,
--Nilay
Thread overview: 11+ messages
2024-06-04 9:10 [PATCH v3 0/1] nvme-pci: recover from NVM subsystem reset Nilay Shroff
2024-06-04 9:10 ` [PATCH v3 1/1] nvme-pci : Fix EEH failure on ppc after " Nilay Shroff
2024-06-10 12:32 ` Maurizio Lombardi
2024-06-12 11:07 ` Nilay Shroff
2024-06-12 13:10 ` Maurizio Lombardi
2024-06-12 17:07 ` Nilay Shroff
2024-06-13 7:02 ` Maurizio Lombardi
2024-06-14 9:51 ` Hannes Reinecke
2024-06-21 16:37 ` Keith Busch
2024-06-22 15:07 ` Nilay Shroff [this message]
2024-06-24 16:07 ` Keith Busch