From: Hannes Reinecke <hare@suse.de>
To: Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>
Cc: Keith Busch <keith.busch@wdc.com>,
linux-nvme@lists.infradead.org, Keith Busch <kbusch@kernel.org>,
Daniel Wagner <daniel.wagner@suse.de>
Subject: Re: [PATCHv3] nvme-mpath: delete disk after last connection
Date: Thu, 6 May 2021 10:42:59 +0200 [thread overview]
Message-ID: <96fbc678-a7e8-aaf3-5f15-8c866e683afe@suse.de> (raw)
In-Reply-To: <20210506074341.GC14615@lst.de>
On 5/6/21 9:43 AM, Christoph Hellwig wrote:
> On Tue, May 04, 2021 at 12:54:14PM -0700, Sagi Grimberg wrote:
>> Yes, I'm not sure I understand your comment Christoph. This addresses an
>> issue with mdraid where hot unplug+replug does not restore the device to
>> the raid group (pci and fabrics alike), where before multipath this used
>> to work.
>>
>> queue_if_no_path is a dm-multipath feature so I'm not entirely clear
>> what is the concern? mdraid on nvme (pci/fabrics) used to work a certain
>> way, with the introduction of nvme-mpath the behavior was broken (as far
>> as I understand from Hannes).
>
> AFAIK that specific mdraid behavior is also fixed by the uevent patch
> he sent.
>
It is most emphatically _NOT_.
These two patches are complementary.
To rephrase: with the current behaviour MD is completely hosed once an
NVMe-oF device gets removed after ctrl_loss_tmo kicks in.
And _nothing_ will fix that except a system reboot.
_That_ is the issue this patch fixes.
The other patch for sending the uevent is just to tell MD that recovery
can start. But recovery _cannot_ start without this patch.
>
> I really do not think we should change the mpath behaviors years after
> first adding it.
>
But only because no one ever tested MD on nvme-multipath.
It has been broken since day 1.
Cheers,
Hannes
--
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                       +49 911 74053 688
SUSE Software Solutions Germany GmbH, 90409 Nürnberg
GF: F. Imendörffer, HRB 36809 (AG Nürnberg)
_______________________________________________
Linux-nvme mailing list
Linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme
Thread overview: 21+ messages
2021-05-01 12:04 [PATCHv3] nvme-mpath: delete disk after last connection Hannes Reinecke
2021-05-04 8:54 ` Christoph Hellwig
2021-05-04 13:40 ` Hannes Reinecke
2021-05-04 19:54 ` Sagi Grimberg
2021-05-05 15:26 ` Keith Busch
2021-05-05 16:15 ` Hannes Reinecke
2021-05-05 20:40 ` Sagi Grimberg
2021-05-06 2:50 ` Keith Busch
2021-05-06 6:13 ` Hannes Reinecke
2021-05-06 7:43 ` Christoph Hellwig
2021-05-06 8:42 ` Hannes Reinecke [this message]
2021-05-06 9:47 ` Sagi Grimberg
2021-05-06 12:08 ` Christoph Hellwig
2021-05-06 15:54 ` Hannes Reinecke
2021-05-07 6:46 ` Christoph Hellwig
2021-05-07 17:02 ` Hannes Reinecke
2021-05-07 17:20 ` Sagi Grimberg
2021-05-10 6:23 ` Christoph Hellwig
2021-05-10 13:01 ` Hannes Reinecke
2021-05-10 13:57 ` Hannes Reinecke
2021-05-10 14:48 ` Hannes Reinecke