From: Hannes Reinecke <hare@suse.de>
To: Christoph Hellwig <hch@lst.de>
Cc: Sagi Grimberg <sagi@grimberg.me>,
Keith Busch <keith.busch@wdc.com>,
linux-nvme@lists.infradead.org, Keith Busch <kbusch@kernel.org>,
Daniel Wagner <daniel.wagner@suse.de>
Subject: Re: [PATCHv3] nvme-mpath: delete disk after last connection
Date: Mon, 10 May 2021 16:48:36 +0200
Message-ID: <deb2beb3-5aec-b439-3ce3-6c04f7badc45@suse.de>
In-Reply-To: <20210510062346.GA30116@lst.de>
On 5/10/21 8:23 AM, Christoph Hellwig wrote:
> On Fri, May 07, 2021 at 07:02:52PM +0200, Hannes Reinecke wrote:
>> On 5/7/21 8:46 AM, Christoph Hellwig wrote:
>>> On Thu, May 06, 2021 at 05:54:29PM +0200, Hannes Reinecke wrote:
>>>> PCI and fabrics have different defaults; for PCI the device goes away if
>>>> the last path (ie the controller) goes away, for fabrics it doesn't if the
>>>> device is mounted.
>>>
>>> Err, no. For fabrics we reconnect for a while, but otherwise the
>>> behavior is the same right now.
>>>
>> No, that is not the case.
>>
>> When a PCI NVMe device with CMIC=0 is removed (via PCI hotplug, say), the
>> nvme device is completely removed, irrespective of whether it's mounted or
>> not.
>> When the _same_ PCI device with CMIC=1 is removed, the nvme device (ie the
>> ns_head) will _stay_ when mounted (as the refcount is not zero).
>
> Yes. But that has nothing to do with fabrics as you claimed above, but
> with whether the subsystem supports multiple controllers (and thus
> shared namespaces) or not.
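
To make the distinction concrete, here is a minimal userspace sketch of
the refcounting behaviour in question. The names (ns_head, remove_path,
and so on) are illustrative only, not the actual drivers/nvme/host
identifiers, and the model is deliberately simplified.

/*
 * Toy model: one reference per live path, plus one per opener
 * (e.g. a mounted filesystem). Not the real kernel code.
 */
#include <stdio.h>
#include <stdbool.h>

struct ns_head {
	int refcount;		/* paths + openers */
	int nr_paths;		/* live controllers exposing this namespace */
	bool disk_deleted;
};

static void free_ns_head(struct ns_head *head)
{
	(void)head;		/* nothing dynamic in this toy model */
	printf("ns_head freed\n");
}

static void put_ns_head(struct ns_head *head)
{
	if (--head->refcount == 0)
		free_ns_head(head);
}

/* A path (controller) goes away. */
static void remove_path(struct ns_head *head)
{
	head->nr_paths--;
	/*
	 * The point of the patch: once the last path is gone, delete
	 * the disk even if an opener still holds a reference; only the
	 * final put frees the structure itself.
	 */
	if (head->nr_paths == 0 && !head->disk_deleted) {
		head->disk_deleted = true;
		printf("last path gone: disk deleted\n");
	}
	put_ns_head(head);
}

int main(void)
{
	/* CMIC=1 case: shared namespace, one path, one opener (mount) */
	struct ns_head head = { .refcount = 2, .nr_paths = 1 };

	remove_path(&head);	/* disk goes away immediately ...          */
	put_ns_head(&head);	/* ... the struct only when the mount ends */
	return 0;
}

With the current code the opener's reference keeps the disk around after
the last path is gone; the patch deletes the disk once nr_paths drops to
zero and leaves only the final put to free the structure.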
>
So, I seem to have reproduced the issue with the latest nvme-5.13.
The failure pattern is slightly different, so I think I've been able to
solve it in a slightly less controversial manner.
Patch to follow.
Cheers,
Hannes
--
Dr. Hannes Reinecke                     Kernel Storage Architect
hare@suse.de                            +49 911 74053 688
SUSE Software Solutions Germany GmbH, 90409 Nürnberg
GF: F. Imendörffer, HRB 36809 (AG Nürnberg)
Thread overview: 21+ messages
2021-05-01 12:04 [PATCHv3] nvme-mpath: delete disk after last connection Hannes Reinecke
2021-05-04 8:54 ` Christoph Hellwig
2021-05-04 13:40 ` Hannes Reinecke
2021-05-04 19:54 ` Sagi Grimberg
2021-05-05 15:26 ` Keith Busch
2021-05-05 16:15 ` Hannes Reinecke
2021-05-05 20:40 ` Sagi Grimberg
2021-05-06 2:50 ` Keith Busch
2021-05-06 6:13 ` Hannes Reinecke
2021-05-06 7:43 ` Christoph Hellwig
2021-05-06 8:42 ` Hannes Reinecke
2021-05-06 9:47 ` Sagi Grimberg
2021-05-06 12:08 ` Christoph Hellwig
2021-05-06 15:54 ` Hannes Reinecke
2021-05-07 6:46 ` Christoph Hellwig
2021-05-07 17:02 ` Hannes Reinecke
2021-05-07 17:20 ` Sagi Grimberg
2021-05-10 6:23 ` Christoph Hellwig
2021-05-10 13:01 ` Hannes Reinecke
2021-05-10 13:57 ` Hannes Reinecke
2021-05-10 14:48 ` Hannes Reinecke [this message]