From: Klaus Jensen <its@irrelevant.dk>
To: Hannes Reinecke <hare@suse.de>
Cc: Daniel Wagner <wagi@monom.org>, Kevin Wolf <kwolf@redhat.com>,
qemu-block@nongnu.org, qemu-devel@nongnu.org,
Keith Busch <keith.busch@wdc.com>
Subject: Re: [PATCH] hw/block/nvme: re-enable NVMe PCI hotplug
Date: Wed, 12 Oct 2022 10:06:07 +0200 [thread overview]
Message-ID: <Y0Z1b8C0zi3Kjw9G@cormorant.local> (raw)
In-Reply-To: <deb27a4f-a053-40b8-b46b-5b4dbd4674a5@suse.de>
[-- Attachment #1: Type: text/plain, Size: 3579 bytes --]
On Oct 12 08:24, Hannes Reinecke wrote:
> On 10/10/22 19:01, Daniel Wagner wrote:
> > On Tue, May 11, 2021 at 06:12:47PM +0200, Hannes Reinecke wrote:
> > > On 5/11/21 6:03 PM, Klaus Jensen wrote:
> > > > On May 11 16:54, Hannes Reinecke wrote:
> > > > > On 5/11/21 3:37 PM, Klaus Jensen wrote:
> > > > > > On May 11 15:12, Hannes Reinecke wrote:
> > > > > > > On 5/11/21 2:22 PM, Klaus Jensen wrote:
> > > > > [ .. ]
> > > > > > > > The hotplug fix looks good - I'll post a series that
> > > > > > > > tries to integrate
> > > > > > > > both.
> > > > > > > >
> > > > > > > Ta.
> > > > > > >
> > > > > > > The more I think about it, the more I think we should be looking into
> > > > > > > reparenting the namespaces to the subsystem.
> > > > > > > That would have the _immediate_ benefit that 'device_del' and
> > > > > > > 'device_add' becomes symmetric (ie one doesn't have to do a separate
> > > > > > > 'device_add nvme-ns'), as the nvme namespace is not affected by the
> > > > > > > hotplug event.
> > > > > > >
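(To illustrate the asymmetry with made-up ids: today, deleting the
controller also tears down its child nvme-ns devices, so bringing it
back takes two steps on the monitor, e.g.

  (qemu) device_del nvme1
  (qemu) device_add nvme,id=nvme1,serial=deadbeef
  (qemu) device_add nvme-ns,bus=nvme1,drive=nvm-1,nsid=1

assuming the controller's NvmeBus is named after its id. With the
namespaces parented to the subsystem, the separate 'device_add nvme-ns'
would not be needed.)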
> > > > > >
> > > > > > I have that working, but I'm struggling with a QEMU API technicality in
> > > > > > that I apparently cannot simply move the NvmeBus creation to the
> > > > > > nvme-subsys device. For some reason the bus is not available for the
> > > > > > nvme-ns devices. That is, if one does something like this:
> > > > > >
> > > > > > -device nvme-subsys,...
> > > > > > -device nvme-ns,...
> > > > > >
> > > > > > Then I get an error that "no 'nvme-bus' bus found for device 'nvme-ns'".
> > > > > > This is probably just me not grok'ing the qdev well enough, so I'll keep
> > > > > > trying to fix that. What works now is to have the regular setup:
> > > > > >
> > > > > _Normally_ the parent device spawns a bus named after its 'id', so the
> > > > > syntax should be
> > > > >
> > > > > -device nvme-subsys,id=subsys1,...
> > > > > -device nvme-ns,bus=subsys1,...
> > > >
> > > > Yeah, I know, I just oversimplified the example. This *is* how I wanted
> > > > it to work ;)
> > > >
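(Spelled out with illustrative ids and a dummy drive, the topology under
discussion would look something like

  -drive file=nvm-1.img,if=none,format=raw,id=nvm-1
  -device nvme-subsys,id=subsys1
  -device nvme-ns,bus=subsys1,drive=nvm-1,nsid=1
  -device nvme,serial=deadbeef,subsys=subsys1

i.e. the namespace is parented to the subsystem rather than to a
controller. Note that this is the syntax being proposed here, not
necessarily what ended up upstream.)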
> > > > >
> > > > > As for the nvme device I would initially expose any namespace from the
> > > > > subsystem to the controller; the nvme spec has some concept of 'active'
> > > > > or 'inactive' namespaces which would allow us to blank out individual
> > > > > namespaces on a per-controller basis, but I fear that's not easy to
> > > > > model with qdev and the structure above.
> > > > >
> > > >
> > > > The nvme-ns device already supports the boolean 'detached' parameter to
> > > > support the concept of an inactive namespace.
> > > >
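(For reference, with illustrative ids that would be something like

  -device nvme-ns,bus=nvme1,drive=nvm-1,nsid=1,detached=true

so the namespace starts out inactive on that controller.)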
> > > Yeah, but that doesn't really work if we move the namespace to the
> > > subsystem; the 'detached' parameter is for the controller<->namespace
> > > relationship.
> > > That's why I said we need some sort of NSID map for the controller,
> > > such that the controller knows which NSIDs to access.
> > > I guess we can copy the trick with the NSID array, and reverse the operation
> > > we have now wrt subsystem; keep the main NSID array in the subsystem, and
> > > per-controller NSID arrays holding those which can be accessed.
> > >
> > > And ignore the commandline for now; figure that one out later.
> > >
> [..]
> >
> > Sorry to ask, but has there been any progress on this topic? I just ran
> > into the same issue: adding an nvme device at runtime does not show any
> > namespaces.
> >
> I _thought_ that the pci hotplug fixes have been merged into upstream
> qemu by now. Klaus?
>
Yup. It's all upstream.
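For the runtime case Daniel describes, the flow is roughly (names are
illustrative, assuming the machine was started with an nvme-subsys and
its namespaces already defined):

  (qemu) device_add nvme,id=nvme1,serial=deadbeef,subsys=subsys1

and the namespaces attached to subsys1 should then show up on the
hotplugged controller.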
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 488 bytes --]