From: "Annie.li" <annie.li@oracle.com>
To: qemu-devel@nongnu.org, jusual@redhat.com
Cc: ani@anisinha.ca, "imammedo@redhat.com" <imammedo@redhat.com>
Subject: Re: Failure of hot plugging secondary virtio_blk into q35 Windows 2019
Date: Mon, 8 Nov 2021 13:53:50 -0500
Message-ID: <31c8012c-234f-2bb8-7db2-f7fee7bd311f@oracle.com>
In-Reply-To: <57170d20-635b-95fd-171e-e84de0d2d84e@oracle.com>
Hi Julia,

Not sure if you've noticed my previous email...

After switching from PCIe native hotplug to ACPI PCI hotplug (with the
patches at
https://lists.gnu.org/archive/html/qemu-devel/2021-07/msg03306.html),
I've run into the secondary virtio_blk hotplug issue in a Windows q35
guest.

Now it seems a Linux q35 guest also runs into issues when a
virtio_blk/virtio_net device is hot-plugged: both devices fail to get
proper BAR memory assigned. After hot-plugging the virtio_blk device,
dmesg shows the following error:
[ 111.131377] pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000
[ 111.131815] pci 0000:03:00.0: reg 0x14: [mem 0x00000000-0x00000fff]
[ 111.132206] pci 0000:03:00.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
[ 111.135050] pci 0000:03:00.0: BAR 4: no space for [mem size 0x00004000 64bit pref]
[ 111.135053] pci 0000:03:00.0: BAR 4: failed to assign [mem size 0x00004000 64bit pref]
[ 111.135055] pci 0000:03:00.0: BAR 1: no space for [mem size 0x00001000]
[ 111.135056] pci 0000:03:00.0: BAR 1: failed to assign [mem size 0x00001000]
[ 111.136332] virtio-pci 0000:03:00.0: virtio_pci: leaving for legacy driver
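FWIW, from inside the Linux guest the failed assignments can be
double-checked with standard tools, e.g. (the bus number 03 matches
this particular log and will differ on other setups):

  lspci -vv -s 03:00.0                    # BAR 1/BAR 4 show as unassigned
  grep -i "pci bus 0000:03" /proc/iomem   # memory windows of the root port, if any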
After hot-plugging the virtio_net device, dmesg shows the following error:
[ 144.932161] pci 0000:04:00.0: [1af4:1041] type 00 class 0x020000
[ 144.932613] pci 0000:04:00.0: reg 0x14: [mem 0x00000000-0x00000fff]
[ 144.932999] pci 0000:04:00.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
[ 144.933093] pci 0000:04:00.0: reg 0x30: [mem 0x00000000-0x0003ffff pref]
[ 144.935734] pci 0000:04:00.0: BAR 6: no space for [mem size 0x00040000 pref]
[ 144.935737] pci 0000:04:00.0: BAR 6: failed to assign [mem size 0x00040000 pref]
[ 144.935739] pci 0000:04:00.0: BAR 4: no space for [mem size 0x00004000 64bit pref]
[ 144.935741] pci 0000:04:00.0: BAR 4: failed to assign [mem size 0x00004000 64bit pref]
[ 144.935743] pci 0000:04:00.0: BAR 1: no space for [mem size 0x00001000]
[ 144.935744] pci 0000:04:00.0: BAR 1: failed to assign [mem size 0x00001000]
[ 144.937163] virtio-pci 0000:04:00.0: virtio_pci: leaving for legacy driver
This error in the Linux guest looks similar to the one I posted for the
Windows guest earlier: there are memory conflicts among these
hot-plugged devices.
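The conflict can also be seen from the QEMU monitor, e.g. with:

  (qemu) info mtree
  (qemu) info pci

"info mtree" dumps the memory-region hierarchy (as quoted below) and
"info pci" shows the per-device BAR assignments.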
I am using Linux host version 5.15.0, QEMU version 6.1.0 (patched with
https://patchwork.kernel.org/project/qemu-devel/patch/20210916132838.3469580-3-ani@anisinha.ca/),
and OVMF version stable202108.
If I switch from ACPI PCI hotplug back to PCIe native hotplug with the
following option, these errors are gone:
-global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off
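For reference, a minimal sketch of how I pass it (the memory size here
is only an example, and the rest of my command line is trimmed):

  qemu-system-x86_64 -machine q35 -m 4096 \
      -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \
      -device pcie-root-port,port=2,chassis=2,id=pciroot2,bus=pcie.0,addr=0x2,multifunction=on \
      ...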
However, with PCIe native hotplug we have been seeing other
hotplug/unplug failures; for example, deleted virtio disks still show
up in the q35 Windows guest.

Has anyone seen this kind of error with ACPI PCI hotplug in q35 guests
with the latest QEMU version?
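For anyone trying to reproduce, the hotplug part boils down to the
following HMP commands (the qcow2 file names are placeholders):

  drive_add auto file=disk1.qcow2,format=qcow2,if=none,id=drive10,cache=none
  device_add virtio-blk-pci,drive=drive10,id=block-disk10,bus=pciroot2
  drive_add auto file=disk2.qcow2,format=qcow2,if=none,id=drive11,cache=none
  device_add virtio-blk-pci,drive=drive11,id=block-disk11,bus=pciroot3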
Thanks
Annie
On 11/1/2021 10:06 AM, Annie.li wrote:
> Hello,
>
> I've found an issue when hot-plugging the secondary virtio_blk device
> into a q35 Windows (2019) guest with upstream QEMU 6.1.0 (+1 patch). The
> first disk can be hot-plugged successfully.
>
> The QEMU options for the PCIe root ports are:
>
> -device pcie-root-port,port=2,chassis=2,id=pciroot2,bus=pcie.0,addr=0x2,multifunction=on \
> -device pcie-root-port,port=3,chassis=3,id=pciroot3,bus=pcie.0,addr=0x3,multifunction=on \
> -device pcie-root-port,port=4,chassis=4,id=pciroot4,bus=pcie.0,addr=0x4,multifunction=on \
> -device pcie-root-port,port=5,chassis=5,id=pciroot5,bus=pcie.0,addr=0x5,multifunction=on \
> -device pcie-root-port,port=6,chassis=6,id=pciroot6,bus=pcie.0,addr=0x6,multifunction=on \
>
> The commands to hot-plug the 1st virtio_blk disk are below; the 1st
> virtio_blk ends up in PCI slot 0 (PCI bus 1, device 0, function 0).
>
> drive_add auto file=block_10.qcow2,format=qcow2,if=none,id=drive10,cache=none
>
> device_add virtio-blk-pci,drive=drive10,id=block-disk10,bus=pciroot2
>
> Following is the related "info mtree" output after the 1st virtio_blk
> device is hot-plugged:
>
> memory-region: pci_bridge_pci
>   0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>     00000000febff000-00000000febfffff (prio 1, i/o): virtio-blk-pci-msix
>       00000000febff000-00000000febff01f (prio 0, i/o): msix-table
>       00000000febff800-00000000febff807 (prio 0, i/o): msix-pba
>     0000000fffffc000-0000000fffffffff (prio 1, i/o): virtio-pci
>       0000000fffffc000-0000000fffffcfff (prio 0, i/o): virtio-pci-common
>       0000000fffffd000-0000000fffffdfff (prio 0, i/o): virtio-pci-isr
>       0000000fffffe000-0000000fffffefff (prio 0, i/o): virtio-pci-device
>       0000000ffffff000-0000000fffffffff (prio 0, i/o): virtio-pci-notify
>
> memory-region: pci_bridge_pci
>   0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
>   0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
>   0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
>   0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> Right after the secondary virtio_blk device is hot-plugged, a yellow
> mark shows up on the first virtio_blk device in the Windows guest. The
> 2nd virtio_blk sits in PCI slot 0 (PCI bus 2, device 0, function 0).
> The debug log of the Windows virtio_blk driver shows that a
> "ScsiStopAdapter" adapter control operation is triggered first, and
> then "StorSurpriseRemoval". From the following "info mtree" output, it
> seems the 2nd virtio_blk device occupies the same memory resources as
> the 1st virtio_blk device above. Maybe this causes the failure of the
> 1st virtio_blk device, after which the system assumes it was
> surprise-removed?
>
> The commands to hot-plug the 2nd virtio_blk disk:
>
> drive_add auto file=block_11.qcow2,format=qcow2,if=none,id=drive11,cache=none
>
> device_add virtio-blk-pci,drive=drive11,id=block-disk11,bus=pciroot3
>
> Following is the related "info mtree" output after the 2nd virtio_blk
> device is hot-plugged:
>
> memory-region: pci_bridge_pci
>   0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
>   0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>     00000000febff000-00000000febfffff (prio 1, i/o): virtio-blk-pci-msix
>       00000000febff000-00000000febff01f (prio 0, i/o): msix-table
>       00000000febff800-00000000febff807 (prio 0, i/o): msix-pba
>     0000000fffffc000-0000000fffffffff (prio 1, i/o): virtio-pci
>       0000000fffffc000-0000000fffffcfff (prio 0, i/o): virtio-pci-common
>       0000000fffffd000-0000000fffffdfff (prio 0, i/o): virtio-pci-isr
>       0000000fffffe000-0000000fffffefff (prio 0, i/o): virtio-pci-device
>       0000000ffffff000-0000000fffffffff (prio 0, i/o): virtio-pci-notify
>
> memory-region: pci_bridge_pci
>   0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
>   0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
> memory-region: pci_bridge_pci
>   0000000000000000-ffffffffffffffff (prio 0, i/o): pci_bridge_pci
>
>
> Note: I've patched upstream QEMU 6.1.0 with the following patch:
>
> https://patchwork.kernel.org/project/qemu-devel/patch/20210916132838.3469580-3-ani@anisinha.ca/
>
>
> The acpi-pci-hotplug memory region is laid out as expected:
>
> 0000000000000cc0-0000000000000cd7 (prio 0, i/o): acpi-pci-hotplug
> 0000000000000cd8-0000000000000ce3 (prio 0, i/o): acpi-mem-hotplug
>
> Thanks
>
> Annie
>