From: "Daniel P. Berrangé" <berrange@redhat.com>
To: "Annie.li" <annie.li@oracle.com>
Cc: Ani Sinha <ani@anisinha.ca>,
"imammedo@redhat.com" <imammedo@redhat.com>,
jusual@redhat.com, qemu-devel@nongnu.org, kraxel@redhat.com
Subject: Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
Date: Tue, 9 Nov 2021 18:32:21 +0000 [thread overview]
Message-ID: <YYq+tc4dxqSAjRCR@redhat.com> (raw)
In-Reply-To: <59be6397-57f8-cd0e-2762-0a3f8b9b4a05@oracle.com>
On Tue, Nov 09, 2021 at 12:01:30PM -0500, Annie.li wrote:
> On 11/9/2021 6:19 AM, Daniel P. Berrangé wrote:
> > On Tue, Nov 09, 2021 at 04:40:10PM +0530, Ani Sinha wrote:
> > > On Tue, Nov 9, 2021 at 3:23 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
> > > > On Tue, Nov 09, 2021 at 12:41:39PM +0530, Ani Sinha wrote:
> > > > > +gerd
> > > > >
> > > > > On Mon, 8 Nov 2021, Annie.li wrote:
> > > > >
> > > > > > Update:
> > > > > >
> > > > > > I've tested a q35 guest w/o OVMF, and ACPI PCI hot-plugging works well
> > > > > > there. This issue seems to happen only in q35 guests w/ OVMF.
> > > > > >
> > > > > > It looks like there is already a bug filed against this hotplug issue in
> > > > > > q35 guests w/ OVMF:
> > > > > >
> > > > > > https://bugzilla.redhat.com/show_bug.cgi?id=2004829
> > > > > >
> > > > > > In this bug, it is recommended to add "-global
> > > > > > ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off" to the qemu command
> > > > > > line for 6.1. However, with this option (i.e. PCIe native hotplug) there
> > > > > > are still various issues. For example, a deleted virtio_blk device still
> > > > > > shows up in the Device Manager in a Windows q35 guest, and re-scanning for
> > > > > > new hardware there takes forever. This means both PCIe native hotplug and
> > > > > > ACPI hotplug have issues in q35 guests.
> > > > > This is sad.
> > > > >
> > > > > > Per the comments in this bug, changes in both OVMF and QEMU are necessary
> > > > > > to support ACPI hot plug in q35 guests. The fixes will likely be available
> > > > > > in QEMU 6.2.0.
> > > > > So we are in soft code freeze for 6.2:
> > > > > https://wiki.qemu.org/Planning/6.2
> > > > >
> > > > > I am curious about Gerd's comment #10:
> > > > > "The 6.2 rebase should make hotplug work
> > > > > again with the default configuration."
> > > > >
> > > > > Sadly I have not seen any public discussion on what we want to do
> > > > > for the issues with acpi hotplug for bridges in q35.
> > > > I raised one of the problems a week ago and there's a promised
> > > > fix
> > > >
> > > > https://lists.gnu.org/archive/html/qemu-devel/2021-11/msg00558.html
> > > So https://gitlab.com/qemu-project/qemu/-/issues/641 is the same as
> > > https://bugzilla.redhat.com/show_bug.cgi?id=2006409
> > >
> > > isn't it?
> > Yes, one upstream, one downstream.
>
> Thanks for the info.
>
> So q35 guests with either OVMF or SeaBIOS have different ACPI hotplug issues
> in QEMU 6.1.
>
> As Ani mentioned earlier, QEMU 6.2 is in soft code freeze.
> Today (Nov 9) is the date of the hard feature freeze.
>
> I suppose this means neither the fix for the SeaBIOS issue nor the feature to
> cooperate with the coming change in OVMF will happen in 6.2?
Patches are allowed if they're bug fixes. If a change requires coordination
with an OVMF change too though, I think that's going to be difficult to
justify.
Our fallback option is to revert to native hotplug out of the box for the
QEMU machine types in 6.2.
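For reference, reverting to native hotplug per-VM can be done with the -global
flag mentioned in the BZ above. A minimal sketch of a q35 invocation follows;
the disk image path, port and device IDs are placeholders, not from this thread:

```shell
# Sketch: start a q35 guest with ACPI PCI hotplug on the ICH9 LPC bridge
# disabled, so the guest falls back to PCIe native hotplug. The disk image
# path and the "port1"/"disk1" IDs are illustrative placeholders.
qemu-system-x86_64 \
    -machine q35 \
    -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \
    -device pcie-root-port,id=port1 \
    -drive if=none,id=disk1,file=/path/to/disk.img \
    -device virtio-blk-pci,bus=port1,drive=disk1
```

With the property left at its 6.1 default of "on", the guest is offered ACPI
hotplug instead, which is the mode exhibiting the OVMF issue discussed above.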
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
Thread overview: 11+ messages
2021-11-01 14:06 Failure of hot plugging secondary virtio_blk into q35 Windows 2019 Annie.li
2021-11-08 18:53 ` Annie.li
2021-11-08 22:56 ` Annie.li
2021-11-09 7:11 ` Ani Sinha
2021-11-09 9:52 ` Daniel P. Berrangé
2021-11-09 11:10 ` Ani Sinha
2021-11-09 11:19 ` Daniel P. Berrangé
2021-11-09 17:01 ` Annie.li
2021-11-09 18:32 ` Daniel P. Berrangé [this message]
2021-11-10 18:12 ` Annie.li
2021-11-19 21:29 ` Annie.li