* [Qemu-devel] [PATCH 1/2] return available ioeventfds count in kvm_has_many_ioeventfds()
@ 2012-03-13 10:35 Amos Kong
0 siblings, 0 replies; 24+ messages in thread
From: Amos Kong @ 2012-03-13 10:35 UTC (permalink / raw)
To: aliguori, stefanha, kvm, mtosatti, qemu-devel, avi
Older kernels limit the KVM io bus to 6 devices.
This patch makes kvm_has_many_ioeventfds() return the
number of available ioeventfds. ioeventfd will be disabled
if fewer than 7 ioeventfds are available.
Signed-off-by: Amos Kong <akong@redhat.com>
---
hw/virtio-pci.c | 2 +-
kvm-all.c | 9 +++------
2 files changed, 4 insertions(+), 7 deletions(-)
diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index a0fb7c1..d63f303 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -678,7 +678,7 @@ void virtio_init_pci(VirtIOPCIProxy *proxy, VirtIODevice *vdev)
pci_register_bar(&proxy->pci_dev, 0, PCI_BASE_ADDRESS_SPACE_IO,
&proxy->bar);
- if (!kvm_has_many_ioeventfds()) {
+ if (kvm_has_many_ioeventfds() != 7) {
proxy->flags &= ~VIRTIO_PCI_FLAG_USE_IOEVENTFD;
}
diff --git a/kvm-all.c b/kvm-all.c
index 3c6b4f0..d12694b 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -78,7 +78,6 @@ struct KVMState
int pit_in_kernel;
int pit_state2;
int xsave, xcrs;
- int many_ioeventfds;
int irqchip_inject_ioctl;
#ifdef KVM_CAP_IRQ_ROUTING
struct kvm_irq_routing *irq_routes;
@@ -510,8 +509,8 @@ static int kvm_check_many_ioeventfds(void)
}
}
- /* Decide whether many devices are supported or not */
- ret = i == ARRAY_SIZE(ioeventfds);
+ /* If i equals 7, many devices are supported */
+ ret = i;
while (i-- > 0) {
kvm_set_ioeventfd_pio_word(ioeventfds[i], 0, i, false);
@@ -1078,8 +1077,6 @@ int kvm_init(void)
kvm_state = s;
memory_listener_register(&kvm_memory_listener, NULL);
- s->many_ioeventfds = kvm_check_many_ioeventfds();
-
cpu_interrupt_handler = kvm_handle_interrupt;
return 0;
@@ -1407,7 +1404,7 @@ int kvm_has_many_ioeventfds(void)
if (!kvm_enabled()) {
return 0;
}
- return kvm_state->many_ioeventfds;
+ return kvm_check_many_ioeventfds();
}
int kvm_has_gsi_routing(void)
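For reference, a sketch of the whole probe after this patch, reconstructed
from the hunks above (the unchanged lines are assumed from qemu of this
era, so details such as the #ifdef guards may differ):

static int kvm_check_many_ioeventfds(void)
{
    int ioeventfds[7];
    int i, ret = 0;

    /* Register probe ioeventfds until the kernel refuses one */
    for (i = 0; i < ARRAY_SIZE(ioeventfds); i++) {
        ioeventfds[i] = eventfd(0, EFD_CLOEXEC);
        if (ioeventfds[i] < 0) {
            break;
        }
        ret = kvm_set_ioeventfd_pio_word(ioeventfds[i], 0, i, true);
        if (ret < 0) {
            close(ioeventfds[i]);
            break;
        }
    }

    /* If i equals 7, many devices are supported */
    ret = i;

    /* Deregister and free the probe ioeventfds again */
    while (i-- > 0) {
        kvm_set_ioeventfd_pio_word(ioeventfds[i], 0, i, false);
        close(ioeventfds[i]);
    }
    return ret;
}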
* [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
@ 2012-03-13 10:42 Amos Kong
2012-03-13 10:42 ` [Qemu-devel] [PATCH 1/2] return available ioeventfds count in kvm_has_many_ioeventfds() Amos Kong
` (3 more replies)
0 siblings, 4 replies; 24+ messages in thread
From: Amos Kong @ 2012-03-13 10:42 UTC (permalink / raw)
To: aliguori, stefanha, kvm, mtosatti, qemu-devel, avi
When booting a guest with 232 virtio-blk disks, qemu aborts because
it fails to allocate an ioeventfd. This patchset changes
kvm_has_many_ioeventfds() and checks whether an available ioeventfd
exists. If not, virtio-pci falls back to userspace and does not use
ioeventfd for io notification.
---
Amos Kong (2):
return available ioeventfds count in kvm_has_many_ioeventfds()
virtio-pci: fallback to userspace when there are not enough available ioeventfds
hw/virtio-pci.c | 5 ++++-
kvm-all.c | 9 +++------
2 files changed, 7 insertions(+), 7 deletions(-)
--
Amos Kong
* [Qemu-devel] [PATCH 1/2] return available ioeventfds count in kvm_has_many_ioeventfds()
2012-03-13 10:42 [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd Amos Kong
@ 2012-03-13 10:42 ` Amos Kong
2012-03-13 11:50 ` Jan Kiszka
2012-03-13 10:42 ` [Qemu-devel] [PATCH 2/2] virtio-pci: fallback to userspace when there are not enough available ioeventfds Amos Kong
` (2 subsequent siblings)
3 siblings, 1 reply; 24+ messages in thread
From: Amos Kong @ 2012-03-13 10:42 UTC (permalink / raw)
To: aliguori, stefanha, kvm, mtosatti, qemu-devel, avi
Older kernels limit the KVM io bus to 6 devices.
This patch makes kvm_has_many_ioeventfds() return the
number of available ioeventfds. ioeventfd will be disabled
if fewer than 7 ioeventfds are available.
Signed-off-by: Amos Kong <akong@redhat.com>
---
hw/virtio-pci.c | 2 +-
kvm-all.c | 9 +++------
2 files changed, 4 insertions(+), 7 deletions(-)
diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index a0fb7c1..d63f303 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -678,7 +678,7 @@ void virtio_init_pci(VirtIOPCIProxy *proxy, VirtIODevice *vdev)
pci_register_bar(&proxy->pci_dev, 0, PCI_BASE_ADDRESS_SPACE_IO,
&proxy->bar);
- if (!kvm_has_many_ioeventfds()) {
+ if (kvm_has_many_ioeventfds() != 7) {
proxy->flags &= ~VIRTIO_PCI_FLAG_USE_IOEVENTFD;
}
diff --git a/kvm-all.c b/kvm-all.c
index 3c6b4f0..d12694b 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -78,7 +78,6 @@ struct KVMState
int pit_in_kernel;
int pit_state2;
int xsave, xcrs;
- int many_ioeventfds;
int irqchip_inject_ioctl;
#ifdef KVM_CAP_IRQ_ROUTING
struct kvm_irq_routing *irq_routes;
@@ -510,8 +509,8 @@ static int kvm_check_many_ioeventfds(void)
}
}
- /* Decide whether many devices are supported or not */
- ret = i == ARRAY_SIZE(ioeventfds);
+ /* If i equals 7, many devices are supported */
+ ret = i;
while (i-- > 0) {
kvm_set_ioeventfd_pio_word(ioeventfds[i], 0, i, false);
@@ -1078,8 +1077,6 @@ int kvm_init(void)
kvm_state = s;
memory_listener_register(&kvm_memory_listener, NULL);
- s->many_ioeventfds = kvm_check_many_ioeventfds();
-
cpu_interrupt_handler = kvm_handle_interrupt;
return 0;
@@ -1407,7 +1404,7 @@ int kvm_has_many_ioeventfds(void)
if (!kvm_enabled()) {
return 0;
}
- return kvm_state->many_ioeventfds;
+ return kvm_check_many_ioeventfds();
}
int kvm_has_gsi_routing(void)
* [Qemu-devel] [PATCH 2/2] virtio-pci: fallback to userspace when there are not enough available ioeventfds
2012-03-13 10:42 [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd Amos Kong
2012-03-13 10:42 ` [Qemu-devel] [PATCH 1/2] return available ioeventfds count in kvm_has_many_ioeventfds() Amos Kong
@ 2012-03-13 10:42 ` Amos Kong
2012-03-13 11:23 ` [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd Stefan Hajnoczi
2012-03-14 9:22 ` Avi Kivity
3 siblings, 0 replies; 24+ messages in thread
From: Amos Kong @ 2012-03-13 10:42 UTC (permalink / raw)
To: aliguori, stefanha, kvm, mtosatti, qemu-devel, avi
Qemu already supports multi-function devices, and pci-bridge will
allow even more pci devices, but the number of iobus devices in the
kernel is limited. If there are not enough available ioeventfds,
clear the VIRTIO_PCI_FLAG_USE_IOEVENTFD bit so that virtio-pci
falls back to userspace.
Signed-off-by: Amos Kong <akong@redhat.com>
---
hw/virtio-pci.c | 3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
index d63f303..d15b11b 100644
--- a/hw/virtio-pci.c
+++ b/hw/virtio-pci.c
@@ -322,6 +322,9 @@ static void virtio_ioport_write(void *opaque, uint32_t addr, uint32_t val)
virtio_set_status(vdev, val & 0xFF);
if (val & VIRTIO_CONFIG_S_DRIVER_OK) {
+ if (kvm_has_many_ioeventfds() == 0) {
+ proxy->flags &= ~VIRTIO_PCI_FLAG_USE_IOEVENTFD;
+ }
virtio_pci_start_ioeventfd(proxy);
}
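For context, clearing the flag takes effect because
virtio_pci_start_ioeventfd() returns early when it is unset; a rough
sketch of that guard (paraphrased, the exact field names are assumed
from qemu of this era):

static void virtio_pci_start_ioeventfd(VirtIOPCIProxy *proxy)
{
    /* With VIRTIO_PCI_FLAG_USE_IOEVENTFD cleared, keep using the
     * userspace (vmexit-driven) notification path. */
    if (!(proxy->flags & VIRTIO_PCI_FLAG_USE_IOEVENTFD) ||
        proxy->ioeventfd_started) {
        return;
    }
    /* ... otherwise assign one host notifier per virtqueue ... */
}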
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-13 10:42 [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd Amos Kong
2012-03-13 10:42 ` [Qemu-devel] [PATCH 1/2] return available ioeventfds count in kvm_has_many_ioeventfds() Amos Kong
2012-03-13 10:42 ` [Qemu-devel] [PATCH 2/2] virtio-pci: fallback to userspace when there is no enough available ioeventfd Amos Kong
@ 2012-03-13 11:23 ` Stefan Hajnoczi
2012-03-13 11:51 ` Amos Kong
2012-03-14 9:22 ` Avi Kivity
3 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2012-03-13 11:23 UTC (permalink / raw)
To: Amos Kong; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, avi
On Tue, Mar 13, 2012 at 10:42 AM, Amos Kong <akong@redhat.com> wrote:
> When booting a guest with 232 virtio-blk disks, qemu aborts because
> it fails to allocate an ioeventfd. This patchset changes
> kvm_has_many_ioeventfds() and checks whether an available ioeventfd
> exists. If not, virtio-pci falls back to userspace and does not use
> ioeventfd for io notification.
Please explain how it fails with 232 devices. Where does it abort and why?
hw/virtio-pci.c:virtio_pci_start_ioeventfd() fails "gracefully" when
virtio_pci_set_host_notifier_internal()'s event_notifier_init() call
fails. (This might be because we've hit our file descriptor rlimit.)
Perhaps the problem is that we've exceeded the kvm.ko io device limit?
I guess that is now handled by the new memory region API and we need
to handle failure gracefully there too.
Either way, I don't think that using kvm_has_many_ioeventfds() is the
right answer.
Stefan
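For illustration, the "graceful" path described above, paraphrased from
qemu's hw/virtio-pci.c of this era (helper names and the VirtIOPCIProxy
fields are assumptions; only the call chain is confirmed by the
backtrace in the next message):

static int virtio_pci_set_host_notifier_internal(VirtIOPCIProxy *proxy,
                                                 int n, bool assign)
{
    VirtQueue *vq = virtio_get_queue(proxy->vdev, n);
    EventNotifier *notifier = virtio_queue_get_host_notifier(vq);
    int r;

    if (assign) {
        r = event_notifier_init(notifier, 1);
        if (r < 0) {
            /* graceful: e.g. the fd rlimit was hit; the error
             * propagates up and ioeventfd is not used for this queue */
            error_report("unable to init event notifier: %d", r);
            return r;
        }
        /* ... but if the kvm io bus itself is full, this call ends in
         * kvm_io_ioeventfd_add(), which abort()s -- see the backtrace
         * in the next message */
        memory_region_add_eventfd(&proxy->bar, VIRTIO_PCI_QUEUE_NOTIFY, 2,
                                  true, n,
                                  event_notifier_get_fd(notifier));
    }
    return 0;
}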
* Re: [Qemu-devel] [PATCH 1/2] return available ioeventfds count in kvm_has_many_ioeventfds()
2012-03-13 10:42 ` [Qemu-devel] [PATCH 1/2] return available ioeventfds count in kvm_has_many_ioeventfds() Amos Kong
@ 2012-03-13 11:50 ` Jan Kiszka
2012-03-13 12:00 ` Amos Kong
0 siblings, 1 reply; 24+ messages in thread
From: Jan Kiszka @ 2012-03-13 11:50 UTC (permalink / raw)
To: Amos Kong; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, avi
Please tag uq/master patches with "PATCH uq/master".
On 2012-03-13 11:42, Amos Kong wrote:
> Older kernels limit the KVM io bus to 6 devices.
> This patch makes kvm_has_many_ioeventfds() return the
> number of available ioeventfds. ioeventfd will be disabled
> if fewer than 7 ioeventfds are available.
>
> Signed-off-by: Amos Kong <akong@redhat.com>
> ---
> hw/virtio-pci.c | 2 +-
> kvm-all.c | 9 +++------
> 2 files changed, 4 insertions(+), 7 deletions(-)
>
> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> index a0fb7c1..d63f303 100644
> --- a/hw/virtio-pci.c
> +++ b/hw/virtio-pci.c
> @@ -678,7 +678,7 @@ void virtio_init_pci(VirtIOPCIProxy *proxy, VirtIODevice *vdev)
> pci_register_bar(&proxy->pci_dev, 0, PCI_BASE_ADDRESS_SPACE_IO,
> &proxy->bar);
>
> - if (!kvm_has_many_ioeventfds()) {
> + if (kvm_has_many_ioeventfds() != 7) {
> proxy->flags &= ~VIRTIO_PCI_FLAG_USE_IOEVENTFD;
> }
>
> diff --git a/kvm-all.c b/kvm-all.c
> index 3c6b4f0..d12694b 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -78,7 +78,6 @@ struct KVMState
> int pit_in_kernel;
> int pit_state2;
> int xsave, xcrs;
> - int many_ioeventfds;
> int irqchip_inject_ioctl;
> #ifdef KVM_CAP_IRQ_ROUTING
> struct kvm_irq_routing *irq_routes;
> @@ -510,8 +509,8 @@ static int kvm_check_many_ioeventfds(void)
> }
> }
>
> - /* Decide whether many devices are supported or not */
> - ret = i == ARRAY_SIZE(ioeventfds);
> + /* If i equals 7, many devices are supported */
> + ret = i;
>
> while (i-- > 0) {
> kvm_set_ioeventfd_pio_word(ioeventfds[i], 0, i, false);
> @@ -1078,8 +1077,6 @@ int kvm_init(void)
> kvm_state = s;
> memory_listener_register(&kvm_memory_listener, NULL);
>
> - s->many_ioeventfds = kvm_check_many_ioeventfds();
> -
> cpu_interrupt_handler = kvm_handle_interrupt;
>
> return 0;
> @@ -1407,7 +1404,7 @@ int kvm_has_many_ioeventfds(void)
> if (!kvm_enabled()) {
> return 0;
> }
> - return kvm_state->many_ioeventfds;
> + return kvm_check_many_ioeventfds();
And why are you dropping the caching of the kvm_check_many_ioeventfds()
return value? Is kvm_has_many_ioeventfds not used outside init scopes?
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-13 11:23 ` [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd Stefan Hajnoczi
@ 2012-03-13 11:51 ` Amos Kong
2012-03-13 14:30 ` Stefan Hajnoczi
0 siblings, 1 reply; 24+ messages in thread
From: Amos Kong @ 2012-03-13 11:51 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: aliguori, stefanha, kvm, Michael S. Tsirkin, mtosatti, qemu-devel,
avi
[-- Attachment #1: Type: text/plain, Size: 5028 bytes --]
On 13/03/12 19:23, Stefan Hajnoczi wrote:
> On Tue, Mar 13, 2012 at 10:42 AM, Amos Kong <akong@redhat.com> wrote:
>> When booting a guest with 232 virtio-blk disks, qemu aborts because
>> it fails to allocate an ioeventfd. This patchset changes
>> kvm_has_many_ioeventfds() and checks whether an available ioeventfd
>> exists. If not, virtio-pci falls back to userspace and does not use
>> ioeventfd for io notification.
Hi Stefan,
> Please explain how it fails with 232 devices. Where does it abort and why?
(gdb) bt
#0 0x00007ffff48c8885 in raise () from /lib64/libc.so.6
#1 0x00007ffff48ca065 in abort () from /lib64/libc.so.6
#2 0x00007ffff7e89a3d in kvm_io_ioeventfd_add (section=0x7fffbfbf5610,
match_data=true, data=0, fd=461) at /home/devel/qemu/kvm-all.c:778
#3 0x00007ffff7e89b3f in kvm_eventfd_add (listener=0x7ffff82ebe80,
section=0x7fffbfbf5610, match_data=true, data=0, fd=461) at
/home/devel/qemu/kvm-all.c:802
#4 0x00007ffff7e9bcf7 in address_space_add_del_ioeventfds
(as=0x7ffff8b278a0, fds_new=0x7fffb80106f0, fds_new_nb=201,
fds_old=0x7fffb800db20, fds_old_nb=200)
at /home/devel/qemu/memory.c:612
#5 0x00007ffff7e9c04f in address_space_update_ioeventfds
(as=0x7ffff8b278a0) at /home/devel/qemu/memory.c:645
#6 0x00007ffff7e9caa0 in address_space_update_topology
(as=0x7ffff8b278a0) at /home/devel/qemu/memory.c:726
#7 0x00007ffff7e9cb95 in memory_region_update_topology
(mr=0x7fffdeb179b0) at /home/devel/qemu/memory.c:746
#8 0x00007ffff7e9e802 in memory_region_add_eventfd (mr=0x7fffdeb179b0,
addr=16, size=2, match_data=true, data=0, fd=461) at
/home/devel/qemu/memory.c:1220
#9 0x00007ffff7d9e832 in virtio_pci_set_host_notifier_internal
(proxy=0x7fffdeb175a0, n=0, assign=true) at
/home/devel/qemu/hw/virtio-pci.c:175
#10 0x00007ffff7d9ea5f in virtio_pci_start_ioeventfd
(proxy=0x7fffdeb175a0) at /home/devel/qemu/hw/virtio-pci.c:230
#11 0x00007ffff7d9ee51 in virtio_ioport_write (opaque=0x7fffdeb175a0,
addr=18, val=7) at /home/devel/qemu/hw/virtio-pci.c:325
#12 0x00007ffff7d9f37b in virtio_pci_config_writeb
(opaque=0x7fffdeb175a0, addr=18, val=7) at
/home/devel/qemu/hw/virtio-pci.c:457
#13 0x00007ffff7e9ac23 in memory_region_iorange_write
(iorange=0x7fffb8005cc0, offset=18, width=1, data=7) at
/home/devel/qemu/memory.c:427
#14 0x00007ffff7e857e2 in ioport_writeb_thunk (opaque=0x7fffb8005cc0,
addr=61970, data=7) at /home/devel/qemu/ioport.c:212
#15 0x00007ffff7e85197 in ioport_write (index=0, address=61970, data=7)
at /home/devel/qemu/ioport.c:83
#16 0x00007ffff7e85d9a in cpu_outb (addr=61970, val=7 '\a') at
/home/devel/qemu/ioport.c:289
#17 0x00007ffff7e8a70a in kvm_handle_io (port=61970,
data=0x7ffff7c11000, direction=1, size=1, count=1) at
/home/devel/qemu/kvm-all.c:1123
#18 0x00007ffff7e8ad0a in kvm_cpu_exec (env=0x7fffc1688010) at
/home/devel/qemu/kvm-all.c:1271
#19 0x00007ffff7e595fc in qemu_kvm_cpu_thread_fn (arg=0x7fffc1688010) at
/home/devel/qemu/cpus.c:733
#20 0x00007ffff63687f1 in start_thread () from /lib64/libpthread.so.0
#21 0x00007ffff497b92d in clone () from /lib64/libc.so.6
(gdb) frame 2
#2 0x00007ffff7e89a3d in kvm_io_ioeventfd_add (section=0x7fffbfbf5610,
match_data=true, data=0, fd=461) at /home/devel/qemu/kvm-all.c:778
778 abort();
(gdb) l
773 assert(match_data && section->size == 2);
774
775 r = kvm_set_ioeventfd_pio_word(fd,
section->offset_within_address_space,
776 data, true);
777 if (r < 0) {
778 abort();
779 }
780 }
781
782 static void kvm_io_ioeventfd_del(MemoryRegionSection *section,
(gdb) p r
$1 = -28
-28 -> -ENOSPC
Older kernels limited the number of iobus devices to 200.
The limit was increased to 300 by commit 2b3c246a
("KVM: Make coalesced mmio use a device per zone"):
include/linux/kvm_host.h:
struct kvm_io_bus {
        ...
#define NR_IOBUS_DEVS 300
        struct kvm_io_range range[NR_IOBUS_DEVS];
};
I hit this problem with a kernel that still has the 200 iobus device
limit (qemu cmdline attached). Each virtio-blk device allocates one
ioeventfd for io notification, so a guest with 232 multi-function
disks (plus the other io devices on the bus) exceeds the 200 slots,
and qemu aborts with the ENOSPC error.
> hw/virtio-pci.c:virtio_pci_start_ioeventfd() fails "gracefully" when
> virtio_pci_set_host_notifier_internal()'s event_notifier_init() call
> fails. (This might be because we've hit our file descriptor rlimit.)
>
> Perhaps the problem is that we've exceeded the kvm.ko io device limit?
Yes. Actually I had already increased the limit to 1000 in kvm
upstream:
http://git.kernel.org/?p=virt/kvm/kvm.git;a=commitdiff;h=29f3ec59a0d175d1b2976131feb7553ec4baa678
But with pci-bridge even that limit can be exceeded, so we need to
handle the ENOSPC condition; falling back to userspace is better
than aborting.
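For illustration, a non-aborting variant of the kvm-all.c hook shown in
the backtrace above (a sketch only, using the signature visible in
frame 2; this is not the fix proposed by this series):

static void kvm_io_ioeventfd_add(MemoryRegionSection *section,
                                 bool match_data, uint64_t data, int fd)
{
    int r;

    assert(match_data && section->size == 2);

    r = kvm_set_ioeventfd_pio_word(fd, section->offset_within_address_space,
                                   data, true);
    if (r < 0) {
        /* -ENOSPC: the kvm io bus is full. Report and carry on with
         * userspace notification instead of abort()ing the VM. */
        fprintf(stderr, "kvm: ioeventfd registration failed: %s\n",
                strerror(-r));
    }
}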
> I guess that is now handled by the new memory region API and we need
> to handle failure gracefully there too.
> Either way, I don't think that using kvm_has_many_ioeventfds() is the
> right answer.
--
Amos.
[-- Attachment #2: qemu-cmdline --]
[-- Type: text/plain, Size: 40132 bytes --]
gdb --args /home/devel/qemu/x86_64-softmmu/qemu-system-x86_64 -enable-kvm -monitor unix:/tmp/m,nowait,server -vga qxl -m 3G -smp 2,sockets=2,cores=1,threads=1 -name rhel6 -uuid 745fe449-aac8-29f1-0c2d-5042a707263b -boot c -drive file=/home/r.qcow2,if=none,id=drive-ide0-0-0,format=qcow2,cache=none,aio=threads -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -net none -spice disable-ticketing,port=5914 -drive file=images/u0,if=none,id=drive-virtio0-0-0,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-0,id=virti0-0-0,multifunction=on,addr=0x03.0 -drive file=images/u1,if=none,id=drive-virtio0-0-1,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-1,id=virti0-0-1,multifunction=on,addr=0x03.1 -drive file=images/u2,if=none,id=drive-virtio0-0-2,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-2,id=virti0-0-2,multifunction=on,addr=0x03.2 -drive file=images/u3,if=none,id=drive-virtio0-0-3,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-3,id=virti0-0-3,multifunction=on,addr=0x03.3 -drive file=images/u4,if=none,id=drive-virtio0-0-4,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-4,id=virti0-0-4,multifunction=on,addr=0x03.4 -drive file=images/u5,if=none,id=drive-virtio0-0-5,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-5,id=virti0-0-5,multifunction=on,addr=0x03.5 -drive file=images/u6,if=none,id=drive-virtio0-0-6,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-6,id=virti0-0-6,multifunction=on,addr=0x03.6 -drive file=images/u7,if=none,id=drive-virtio0-0-7,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-7,id=virti0-0-7,multifunction=on,addr=0x03.7 -drive file=images/u8,if=none,id=drive-virtio0-0-8,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-8,id=virti0-0-8,multifunction=on,addr=0x04.0 -drive file=images/u9,if=none,id=drive-virtio0-0-9,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-9,id=virti0-0-9,multifunction=on,addr=0x04.1 -drive file=images/u10,if=none,id=drive-virtio0-0-10,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-10,id=virti0-0-10,multifunction=on,addr=0x04.2 -drive file=images/u11,if=none,id=drive-virtio0-0-11,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-11,id=virti0-0-11,multifunction=on,addr=0x04.3 -drive file=images/u12,if=none,id=drive-virtio0-0-12,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-12,id=virti0-0-12,multifunction=on,addr=0x04.4 -drive file=images/u13,if=none,id=drive-virtio0-0-13,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-13,id=virti0-0-13,multifunction=on,addr=0x04.5 -drive file=images/u14,if=none,id=drive-virtio0-0-14,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-14,id=virti0-0-14,multifunction=on,addr=0x04.6 -drive file=images/u15,if=none,id=drive-virtio0-0-15,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-15,id=virti0-0-15,multifunction=on,addr=0x04.7 -drive file=images/u16,if=none,id=drive-virtio0-0-16,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-16,id=virti0-0-16,multifunction=on,addr=0x05.0 -drive file=images/u17,if=none,id=drive-virtio0-0-17,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-17,id=virti0-0-17,multifunction=on,addr=0x05.1 -drive file=images/u18,if=none,id=drive-virtio0-0-18,format=qcow2,cache=none -device 
virtio-blk-pci,drive=drive-virtio0-0-18,id=virti0-0-18,multifunction=on,addr=0x05.2 -drive file=images/u19,if=none,id=drive-virtio0-0-19,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-19,id=virti0-0-19,multifunction=on,addr=0x05.3 -drive file=images/u20,if=none,id=drive-virtio0-0-20,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-20,id=virti0-0-20,multifunction=on,addr=0x05.4 -drive file=images/u21,if=none,id=drive-virtio0-0-21,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-21,id=virti0-0-21,multifunction=on,addr=0x05.5 -drive file=images/u22,if=none,id=drive-virtio0-0-22,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-22,id=virti0-0-22,multifunction=on,addr=0x05.6 -drive file=images/u23,if=none,id=drive-virtio0-0-23,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-23,id=virti0-0-23,multifunction=on,addr=0x05.7 -drive file=images/u24,if=none,id=drive-virtio0-0-24,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-24,id=virti0-0-24,multifunction=on,addr=0x06.0 -drive file=images/u25,if=none,id=drive-virtio0-0-25,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-25,id=virti0-0-25,multifunction=on,addr=0x06.1 -drive file=images/u26,if=none,id=drive-virtio0-0-26,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-26,id=virti0-0-26,multifunction=on,addr=0x06.2 -drive file=images/u27,if=none,id=drive-virtio0-0-27,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-27,id=virti0-0-27,multifunction=on,addr=0x06.3 -drive file=images/u28,if=none,id=drive-virtio0-0-28,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-28,id=virti0-0-28,multifunction=on,addr=0x06.4 -drive file=images/u29,if=none,id=drive-virtio0-0-29,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-29,id=virti0-0-29,multifunction=on,addr=0x06.5 -drive file=images/u30,if=none,id=drive-virtio0-0-30,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-30,id=virti0-0-30,multifunction=on,addr=0x06.6 -drive file=images/u31,if=none,id=drive-virtio0-0-31,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-31,id=virti0-0-31,multifunction=on,addr=0x06.7 -drive file=images/u32,if=none,id=drive-virtio0-0-32,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-32,id=virti0-0-32,multifunction=on,addr=0x07.0 -drive file=images/u33,if=none,id=drive-virtio0-0-33,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-33,id=virti0-0-33,multifunction=on,addr=0x07.1 -drive file=images/u34,if=none,id=drive-virtio0-0-34,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-34,id=virti0-0-34,multifunction=on,addr=0x07.2 -drive file=images/u35,if=none,id=drive-virtio0-0-35,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-35,id=virti0-0-35,multifunction=on,addr=0x07.3 -drive file=images/u36,if=none,id=drive-virtio0-0-36,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-36,id=virti0-0-36,multifunction=on,addr=0x07.4 -drive file=images/u37,if=none,id=drive-virtio0-0-37,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-37,id=virti0-0-37,multifunction=on,addr=0x07.5 -drive file=images/u38,if=none,id=drive-virtio0-0-38,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-38,id=virti0-0-38,multifunction=on,addr=0x07.6 -drive file=images/u39,if=none,id=drive-virtio0-0-39,format=qcow2,cache=none -device 
virtio-blk-pci,drive=drive-virtio0-0-39,id=virti0-0-39,multifunction=on,addr=0x07.7 -drive file=images/u40,if=none,id=drive-virtio0-0-40,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-40,id=virti0-0-40,multifunction=on,addr=0x08.0 -drive file=images/u41,if=none,id=drive-virtio0-0-41,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-41,id=virti0-0-41,multifunction=on,addr=0x08.1 -drive file=images/u42,if=none,id=drive-virtio0-0-42,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-42,id=virti0-0-42,multifunction=on,addr=0x08.2 -drive file=images/u43,if=none,id=drive-virtio0-0-43,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-43,id=virti0-0-43,multifunction=on,addr=0x08.3 -drive file=images/u44,if=none,id=drive-virtio0-0-44,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-44,id=virti0-0-44,multifunction=on,addr=0x08.4 -drive file=images/u45,if=none,id=drive-virtio0-0-45,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-45,id=virti0-0-45,multifunction=on,addr=0x08.5 -drive file=images/u46,if=none,id=drive-virtio0-0-46,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-46,id=virti0-0-46,multifunction=on,addr=0x08.6 -drive file=images/u47,if=none,id=drive-virtio0-0-47,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-47,id=virti0-0-47,multifunction=on,addr=0x08.7 -drive file=images/u48,if=none,id=drive-virtio0-0-48,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-48,id=virti0-0-48,multifunction=on,addr=0x09.0 -drive file=images/u49,if=none,id=drive-virtio0-0-49,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-49,id=virti0-0-49,multifunction=on,addr=0x09.1 -drive file=images/u50,if=none,id=drive-virtio0-0-50,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-50,id=virti0-0-50,multifunction=on,addr=0x09.2 -drive file=images/u51,if=none,id=drive-virtio0-0-51,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-51,id=virti0-0-51,multifunction=on,addr=0x09.3 -drive file=images/u52,if=none,id=drive-virtio0-0-52,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-52,id=virti0-0-52,multifunction=on,addr=0x09.4 -drive file=images/u53,if=none,id=drive-virtio0-0-53,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-53,id=virti0-0-53,multifunction=on,addr=0x09.5 -drive file=images/u54,if=none,id=drive-virtio0-0-54,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-54,id=virti0-0-54,multifunction=on,addr=0x09.6 -drive file=images/u55,if=none,id=drive-virtio0-0-55,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-55,id=virti0-0-55,multifunction=on,addr=0x09.7 -drive file=images/u56,if=none,id=drive-virtio0-0-56,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-56,id=virti0-0-56,multifunction=on,addr=0x0a.0 -drive file=images/u57,if=none,id=drive-virtio0-0-57,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-57,id=virti0-0-57,multifunction=on,addr=0x0a.1 -drive file=images/u58,if=none,id=drive-virtio0-0-58,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-58,id=virti0-0-58,multifunction=on,addr=0x0a.2 -drive file=images/u59,if=none,id=drive-virtio0-0-59,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-59,id=virti0-0-59,multifunction=on,addr=0x0a.3 -drive file=images/u60,if=none,id=drive-virtio0-0-60,format=qcow2,cache=none -device 
virtio-blk-pci,drive=drive-virtio0-0-60,id=virti0-0-60,multifunction=on,addr=0x0a.4 -drive file=images/u61,if=none,id=drive-virtio0-0-61,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-61,id=virti0-0-61,multifunction=on,addr=0x0a.5 -drive file=images/u62,if=none,id=drive-virtio0-0-62,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-62,id=virti0-0-62,multifunction=on,addr=0x0a.6 -drive file=images/u63,if=none,id=drive-virtio0-0-63,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-63,id=virti0-0-63,multifunction=on,addr=0x0a.7 -drive file=images/u64,if=none,id=drive-virtio0-0-64,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-64,id=virti0-0-64,multifunction=on,addr=0x0b.0 -drive file=images/u65,if=none,id=drive-virtio0-0-65,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-65,id=virti0-0-65,multifunction=on,addr=0x0b.1 -drive file=images/u66,if=none,id=drive-virtio0-0-66,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-66,id=virti0-0-66,multifunction=on,addr=0x0b.2 -drive file=images/u67,if=none,id=drive-virtio0-0-67,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-67,id=virti0-0-67,multifunction=on,addr=0x0b.3 -drive file=images/u68,if=none,id=drive-virtio0-0-68,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-68,id=virti0-0-68,multifunction=on,addr=0x0b.4 -drive file=images/u69,if=none,id=drive-virtio0-0-69,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-69,id=virti0-0-69,multifunction=on,addr=0x0b.5 -drive file=images/u70,if=none,id=drive-virtio0-0-70,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-70,id=virti0-0-70,multifunction=on,addr=0x0b.6 -drive file=images/u71,if=none,id=drive-virtio0-0-71,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-71,id=virti0-0-71,multifunction=on,addr=0x0b.7 -drive file=images/u72,if=none,id=drive-virtio0-0-72,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-72,id=virti0-0-72,multifunction=on,addr=0x0c.0 -drive file=images/u73,if=none,id=drive-virtio0-0-73,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-73,id=virti0-0-73,multifunction=on,addr=0x0c.1 -drive file=images/u74,if=none,id=drive-virtio0-0-74,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-74,id=virti0-0-74,multifunction=on,addr=0x0c.2 -drive file=images/u75,if=none,id=drive-virtio0-0-75,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-75,id=virti0-0-75,multifunction=on,addr=0x0c.3 -drive file=images/u76,if=none,id=drive-virtio0-0-76,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-76,id=virti0-0-76,multifunction=on,addr=0x0c.4 -drive file=images/u77,if=none,id=drive-virtio0-0-77,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-77,id=virti0-0-77,multifunction=on,addr=0x0c.5 -drive file=images/u78,if=none,id=drive-virtio0-0-78,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-78,id=virti0-0-78,multifunction=on,addr=0x0c.6 -drive file=images/u79,if=none,id=drive-virtio0-0-79,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-79,id=virti0-0-79,multifunction=on,addr=0x0c.7 -drive file=images/u80,if=none,id=drive-virtio0-0-80,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-80,id=virti0-0-80,multifunction=on,addr=0x0d.0 -drive file=images/u81,if=none,id=drive-virtio0-0-81,format=qcow2,cache=none -device 
virtio-blk-pci,drive=drive-virtio0-0-81,id=virti0-0-81,multifunction=on,addr=0x0d.1 -drive file=images/u82,if=none,id=drive-virtio0-0-82,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-82,id=virti0-0-82,multifunction=on,addr=0x0d.2 -drive file=images/u83,if=none,id=drive-virtio0-0-83,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-83,id=virti0-0-83,multifunction=on,addr=0x0d.3 -drive file=images/u84,if=none,id=drive-virtio0-0-84,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-84,id=virti0-0-84,multifunction=on,addr=0x0d.4 -drive file=images/u85,if=none,id=drive-virtio0-0-85,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-85,id=virti0-0-85,multifunction=on,addr=0x0d.5 -drive file=images/u86,if=none,id=drive-virtio0-0-86,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-86,id=virti0-0-86,multifunction=on,addr=0x0d.6 -drive file=images/u87,if=none,id=drive-virtio0-0-87,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-87,id=virti0-0-87,multifunction=on,addr=0x0d.7 -drive file=images/u88,if=none,id=drive-virtio0-0-88,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-88,id=virti0-0-88,multifunction=on,addr=0x0e.0 -drive file=images/u89,if=none,id=drive-virtio0-0-89,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-89,id=virti0-0-89,multifunction=on,addr=0x0e.1 -drive file=images/u90,if=none,id=drive-virtio0-0-90,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-90,id=virti0-0-90,multifunction=on,addr=0x0e.2 -drive file=images/u91,if=none,id=drive-virtio0-0-91,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-91,id=virti0-0-91,multifunction=on,addr=0x0e.3 -drive file=images/u92,if=none,id=drive-virtio0-0-92,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-92,id=virti0-0-92,multifunction=on,addr=0x0e.4 -drive file=images/u93,if=none,id=drive-virtio0-0-93,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-93,id=virti0-0-93,multifunction=on,addr=0x0e.5 -drive file=images/u94,if=none,id=drive-virtio0-0-94,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-94,id=virti0-0-94,multifunction=on,addr=0x0e.6 -drive file=images/u95,if=none,id=drive-virtio0-0-95,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-95,id=virti0-0-95,multifunction=on,addr=0x0e.7 -drive file=images/u96,if=none,id=drive-virtio0-0-96,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-96,id=virti0-0-96,multifunction=on,addr=0x0f.0 -drive file=images/u97,if=none,id=drive-virtio0-0-97,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-97,id=virti0-0-97,multifunction=on,addr=0x0f.1 -drive file=images/u98,if=none,id=drive-virtio0-0-98,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-98,id=virti0-0-98,multifunction=on,addr=0x0f.2 -drive file=images/u99,if=none,id=drive-virtio0-0-99,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-99,id=virti0-0-99,multifunction=on,addr=0x0f.3 -drive file=images/u100,if=none,id=drive-virtio0-0-100,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-100,id=virti0-0-100,multifunction=on,addr=0x0f.4 -drive file=images/u101,if=none,id=drive-virtio0-0-101,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-101,id=virti0-0-101,multifunction=on,addr=0x0f.5 -drive file=images/u102,if=none,id=drive-virtio0-0-102,format=qcow2,cache=none 
-device virtio-blk-pci,drive=drive-virtio0-0-102,id=virti0-0-102,multifunction=on,addr=0x0f.6 -drive file=images/u103,if=none,id=drive-virtio0-0-103,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-103,id=virti0-0-103,multifunction=on,addr=0x0f.7 -drive file=images/u104,if=none,id=drive-virtio0-0-104,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-104,id=virti0-0-104,multifunction=on,addr=0x10.0 -drive file=images/u105,if=none,id=drive-virtio0-0-105,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-105,id=virti0-0-105,multifunction=on,addr=0x10.1 -drive file=images/u106,if=none,id=drive-virtio0-0-106,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-106,id=virti0-0-106,multifunction=on,addr=0x10.2 -drive file=images/u107,if=none,id=drive-virtio0-0-107,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-107,id=virti0-0-107,multifunction=on,addr=0x10.3 -drive file=images/u108,if=none,id=drive-virtio0-0-108,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-108,id=virti0-0-108,multifunction=on,addr=0x10.4 -drive file=images/u109,if=none,id=drive-virtio0-0-109,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-109,id=virti0-0-109,multifunction=on,addr=0x10.5 -drive file=images/u110,if=none,id=drive-virtio0-0-110,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-110,id=virti0-0-110,multifunction=on,addr=0x10.6 -drive file=images/u111,if=none,id=drive-virtio0-0-111,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-111,id=virti0-0-111,multifunction=on,addr=0x10.7 -drive file=images/u112,if=none,id=drive-virtio0-0-112,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-112,id=virti0-0-112,multifunction=on,addr=0x11.0 -drive file=images/u113,if=none,id=drive-virtio0-0-113,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-113,id=virti0-0-113,multifunction=on,addr=0x11.1 -drive file=images/u114,if=none,id=drive-virtio0-0-114,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-114,id=virti0-0-114,multifunction=on,addr=0x11.2 -drive file=images/u115,if=none,id=drive-virtio0-0-115,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-115,id=virti0-0-115,multifunction=on,addr=0x11.3 -drive file=images/u116,if=none,id=drive-virtio0-0-116,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-116,id=virti0-0-116,multifunction=on,addr=0x11.4 -drive file=images/u117,if=none,id=drive-virtio0-0-117,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-117,id=virti0-0-117,multifunction=on,addr=0x11.5 -drive file=images/u118,if=none,id=drive-virtio0-0-118,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-118,id=virti0-0-118,multifunction=on,addr=0x11.6 -drive file=images/u119,if=none,id=drive-virtio0-0-119,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-119,id=virti0-0-119,multifunction=on,addr=0x11.7 -drive file=images/u120,if=none,id=drive-virtio0-0-120,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-120,id=virti0-0-120,multifunction=on,addr=0x12.0 -drive file=images/u121,if=none,id=drive-virtio0-0-121,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-121,id=virti0-0-121,multifunction=on,addr=0x12.1 -drive file=images/u122,if=none,id=drive-virtio0-0-122,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-122,id=virti0-0-122,multifunction=on,addr=0x12.2 
-drive file=images/u123,if=none,id=drive-virtio0-0-123,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-123,id=virti0-0-123,multifunction=on,addr=0x12.3 -drive file=images/u124,if=none,id=drive-virtio0-0-124,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-124,id=virti0-0-124,multifunction=on,addr=0x12.4 -drive file=images/u125,if=none,id=drive-virtio0-0-125,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-125,id=virti0-0-125,multifunction=on,addr=0x12.5 -drive file=images/u126,if=none,id=drive-virtio0-0-126,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-126,id=virti0-0-126,multifunction=on,addr=0x12.6 -drive file=images/u127,if=none,id=drive-virtio0-0-127,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-127,id=virti0-0-127,multifunction=on,addr=0x12.7 -drive file=images/u128,if=none,id=drive-virtio0-0-128,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-128,id=virti0-0-128,multifunction=on,addr=0x13.0 -drive file=images/u129,if=none,id=drive-virtio0-0-129,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-129,id=virti0-0-129,multifunction=on,addr=0x13.1 -drive file=images/u130,if=none,id=drive-virtio0-0-130,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-130,id=virti0-0-130,multifunction=on,addr=0x13.2 -drive file=images/u131,if=none,id=drive-virtio0-0-131,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-131,id=virti0-0-131,multifunction=on,addr=0x13.3 -drive file=images/u132,if=none,id=drive-virtio0-0-132,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-132,id=virti0-0-132,multifunction=on,addr=0x13.4 -drive file=images/u133,if=none,id=drive-virtio0-0-133,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-133,id=virti0-0-133,multifunction=on,addr=0x13.5 -drive file=images/u134,if=none,id=drive-virtio0-0-134,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-134,id=virti0-0-134,multifunction=on,addr=0x13.6 -drive file=images/u135,if=none,id=drive-virtio0-0-135,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-135,id=virti0-0-135,multifunction=on,addr=0x13.7 -drive file=images/u136,if=none,id=drive-virtio0-0-136,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-136,id=virti0-0-136,multifunction=on,addr=0x14.0 -drive file=images/u137,if=none,id=drive-virtio0-0-137,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-137,id=virti0-0-137,multifunction=on,addr=0x14.1 -drive file=images/u138,if=none,id=drive-virtio0-0-138,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-138,id=virti0-0-138,multifunction=on,addr=0x14.2 -drive file=images/u139,if=none,id=drive-virtio0-0-139,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-139,id=virti0-0-139,multifunction=on,addr=0x14.3 -drive file=images/u140,if=none,id=drive-virtio0-0-140,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-140,id=virti0-0-140,multifunction=on,addr=0x14.4 -drive file=images/u141,if=none,id=drive-virtio0-0-141,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-141,id=virti0-0-141,multifunction=on,addr=0x14.5 -drive file=images/u142,if=none,id=drive-virtio0-0-142,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-142,id=virti0-0-142,multifunction=on,addr=0x14.6 -drive file=images/u143,if=none,id=drive-virtio0-0-143,format=qcow2,cache=none -device 
virtio-blk-pci,drive=drive-virtio0-0-143,id=virti0-0-143,multifunction=on,addr=0x14.7 -drive file=images/u144,if=none,id=drive-virtio0-0-144,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-144,id=virti0-0-144,multifunction=on,addr=0x15.0 -drive file=images/u145,if=none,id=drive-virtio0-0-145,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-145,id=virti0-0-145,multifunction=on,addr=0x15.1 -drive file=images/u146,if=none,id=drive-virtio0-0-146,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-146,id=virti0-0-146,multifunction=on,addr=0x15.2 -drive file=images/u147,if=none,id=drive-virtio0-0-147,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-147,id=virti0-0-147,multifunction=on,addr=0x15.3 -drive file=images/u148,if=none,id=drive-virtio0-0-148,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-148,id=virti0-0-148,multifunction=on,addr=0x15.4 -drive file=images/u149,if=none,id=drive-virtio0-0-149,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-149,id=virti0-0-149,multifunction=on,addr=0x15.5 -drive file=images/u150,if=none,id=drive-virtio0-0-150,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-150,id=virti0-0-150,multifunction=on,addr=0x15.6 -drive file=images/u151,if=none,id=drive-virtio0-0-151,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-151,id=virti0-0-151,multifunction=on,addr=0x15.7 -drive file=images/u152,if=none,id=drive-virtio0-0-152,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-152,id=virti0-0-152,multifunction=on,addr=0x16.0 -drive file=images/u153,if=none,id=drive-virtio0-0-153,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-153,id=virti0-0-153,multifunction=on,addr=0x16.1 -drive file=images/u154,if=none,id=drive-virtio0-0-154,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-154,id=virti0-0-154,multifunction=on,addr=0x16.2 -drive file=images/u155,if=none,id=drive-virtio0-0-155,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-155,id=virti0-0-155,multifunction=on,addr=0x16.3 -drive file=images/u156,if=none,id=drive-virtio0-0-156,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-156,id=virti0-0-156,multifunction=on,addr=0x16.4 -drive file=images/u157,if=none,id=drive-virtio0-0-157,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-157,id=virti0-0-157,multifunction=on,addr=0x16.5 -drive file=images/u158,if=none,id=drive-virtio0-0-158,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-158,id=virti0-0-158,multifunction=on,addr=0x16.6 -drive file=images/u159,if=none,id=drive-virtio0-0-159,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-159,id=virti0-0-159,multifunction=on,addr=0x16.7 -drive file=images/u160,if=none,id=drive-virtio0-0-160,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-160,id=virti0-0-160,multifunction=on,addr=0x17.0 -drive file=images/u161,if=none,id=drive-virtio0-0-161,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-161,id=virti0-0-161,multifunction=on,addr=0x17.1 -drive file=images/u162,if=none,id=drive-virtio0-0-162,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-162,id=virti0-0-162,multifunction=on,addr=0x17.2 -drive file=images/u163,if=none,id=drive-virtio0-0-163,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-163,id=virti0-0-163,multifunction=on,addr=0x17.3 -drive 
file=images/u164,if=none,id=drive-virtio0-0-164,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-164,id=virti0-0-164,multifunction=on,addr=0x17.4 -drive file=images/u165,if=none,id=drive-virtio0-0-165,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-165,id=virti0-0-165,multifunction=on,addr=0x17.5 -drive file=images/u166,if=none,id=drive-virtio0-0-166,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-166,id=virti0-0-166,multifunction=on,addr=0x17.6 -drive file=images/u167,if=none,id=drive-virtio0-0-167,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-167,id=virti0-0-167,multifunction=on,addr=0x17.7 -drive file=images/u168,if=none,id=drive-virtio0-0-168,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-168,id=virti0-0-168,multifunction=on,addr=0x18.0 -drive file=images/u169,if=none,id=drive-virtio0-0-169,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-169,id=virti0-0-169,multifunction=on,addr=0x18.1 -drive file=images/u170,if=none,id=drive-virtio0-0-170,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-170,id=virti0-0-170,multifunction=on,addr=0x18.2 -drive file=images/u171,if=none,id=drive-virtio0-0-171,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-171,id=virti0-0-171,multifunction=on,addr=0x18.3 -drive file=images/u172,if=none,id=drive-virtio0-0-172,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-172,id=virti0-0-172,multifunction=on,addr=0x18.4 -drive file=images/u173,if=none,id=drive-virtio0-0-173,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-173,id=virti0-0-173,multifunction=on,addr=0x18.5 -drive file=images/u174,if=none,id=drive-virtio0-0-174,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-174,id=virti0-0-174,multifunction=on,addr=0x18.6 -drive file=images/u175,if=none,id=drive-virtio0-0-175,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-175,id=virti0-0-175,multifunction=on,addr=0x18.7 -drive file=images/u176,if=none,id=drive-virtio0-0-176,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-176,id=virti0-0-176,multifunction=on,addr=0x19.0 -drive file=images/u177,if=none,id=drive-virtio0-0-177,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-177,id=virti0-0-177,multifunction=on,addr=0x19.1 -drive file=images/u178,if=none,id=drive-virtio0-0-178,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-178,id=virti0-0-178,multifunction=on,addr=0x19.2 -drive file=images/u179,if=none,id=drive-virtio0-0-179,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-179,id=virti0-0-179,multifunction=on,addr=0x19.3 -drive file=images/u180,if=none,id=drive-virtio0-0-180,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-180,id=virti0-0-180,multifunction=on,addr=0x19.4 -drive file=images/u181,if=none,id=drive-virtio0-0-181,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-181,id=virti0-0-181,multifunction=on,addr=0x19.5 -drive file=images/u182,if=none,id=drive-virtio0-0-182,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-182,id=virti0-0-182,multifunction=on,addr=0x19.6 -drive file=images/u183,if=none,id=drive-virtio0-0-183,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-183,id=virti0-0-183,multifunction=on,addr=0x19.7 -drive file=images/u184,if=none,id=drive-virtio0-0-184,format=qcow2,cache=none -device 
virtio-blk-pci,drive=drive-virtio0-0-184,id=virti0-0-184,multifunction=on,addr=0x1a.0 -drive file=images/u185,if=none,id=drive-virtio0-0-185,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-185,id=virti0-0-185,multifunction=on,addr=0x1a.1 -drive file=images/u186,if=none,id=drive-virtio0-0-186,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-186,id=virti0-0-186,multifunction=on,addr=0x1a.2 -drive file=images/u187,if=none,id=drive-virtio0-0-187,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-187,id=virti0-0-187,multifunction=on,addr=0x1a.3 -drive file=images/u188,if=none,id=drive-virtio0-0-188,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-188,id=virti0-0-188,multifunction=on,addr=0x1a.4 -drive file=images/u189,if=none,id=drive-virtio0-0-189,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-189,id=virti0-0-189,multifunction=on,addr=0x1a.5 -drive file=images/u190,if=none,id=drive-virtio0-0-190,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-190,id=virti0-0-190,multifunction=on,addr=0x1a.6 -drive file=images/u191,if=none,id=drive-virtio0-0-191,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-191,id=virti0-0-191,multifunction=on,addr=0x1a.7 -drive file=images/u192,if=none,id=drive-virtio0-0-192,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-192,id=virti0-0-192,multifunction=on,addr=0x1b.0 -drive file=images/u193,if=none,id=drive-virtio0-0-193,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-193,id=virti0-0-193,multifunction=on,addr=0x1b.1 -drive file=images/u194,if=none,id=drive-virtio0-0-194,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-194,id=virti0-0-194,multifunction=on,addr=0x1b.2 -drive file=images/u195,if=none,id=drive-virtio0-0-195,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-195,id=virti0-0-195,multifunction=on,addr=0x1b.3 -drive file=images/u196,if=none,id=drive-virtio0-0-196,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-196,id=virti0-0-196,multifunction=on,addr=0x1b.4 -drive file=images/u197,if=none,id=drive-virtio0-0-197,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-197,id=virti0-0-197,multifunction=on,addr=0x1b.5 -drive file=images/u198,if=none,id=drive-virtio0-0-198,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-198,id=virti0-0-198,multifunction=on,addr=0x1b.6 -drive file=images/u199,if=none,id=drive-virtio0-0-199,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-199,id=virti0-0-199,multifunction=on,addr=0x1b.7 -drive file=images/u200,if=none,id=drive-virtio0-0-200,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-200,id=virti0-0-200,multifunction=on,addr=0x1c.0 -drive file=images/u201,if=none,id=drive-virtio0-0-201,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-201,id=virti0-0-201,multifunction=on,addr=0x1c.1 -drive file=images/u202,if=none,id=drive-virtio0-0-202,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-202,id=virti0-0-202,multifunction=on,addr=0x1c.2 -drive file=images/u203,if=none,id=drive-virtio0-0-203,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-203,id=virti0-0-203,multifunction=on,addr=0x1c.3 -drive file=images/u204,if=none,id=drive-virtio0-0-204,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-204,id=virti0-0-204,multifunction=on,addr=0x1c.4 -drive 
file=images/u205,if=none,id=drive-virtio0-0-205,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-205,id=virti0-0-205,multifunction=on,addr=0x1c.5 -drive file=images/u206,if=none,id=drive-virtio0-0-206,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-206,id=virti0-0-206,multifunction=on,addr=0x1c.6 -drive file=images/u207,if=none,id=drive-virtio0-0-207,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-207,id=virti0-0-207,multifunction=on,addr=0x1c.7 -drive file=images/u208,if=none,id=drive-virtio0-0-208,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-208,id=virti0-0-208,multifunction=on,addr=0x1d.0 -drive file=images/u209,if=none,id=drive-virtio0-0-209,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-209,id=virti0-0-209,multifunction=on,addr=0x1d.1 -drive file=images/u210,if=none,id=drive-virtio0-0-210,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-210,id=virti0-0-210,multifunction=on,addr=0x1d.2 -drive file=images/u211,if=none,id=drive-virtio0-0-211,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-211,id=virti0-0-211,multifunction=on,addr=0x1d.3 -drive file=images/u212,if=none,id=drive-virtio0-0-212,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-212,id=virti0-0-212,multifunction=on,addr=0x1d.4 -drive file=images/u213,if=none,id=drive-virtio0-0-213,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-213,id=virti0-0-213,multifunction=on,addr=0x1d.5 -drive file=images/u214,if=none,id=drive-virtio0-0-214,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-214,id=virti0-0-214,multifunction=on,addr=0x1d.6 -drive file=images/u215,if=none,id=drive-virtio0-0-215,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-215,id=virti0-0-215,multifunction=on,addr=0x1d.7 -drive file=images/u216,if=none,id=drive-virtio0-0-216,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-216,id=virti0-0-216,multifunction=on,addr=0x1e.0 -drive file=images/u217,if=none,id=drive-virtio0-0-217,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-217,id=virti0-0-217,multifunction=on,addr=0x1e.1 -drive file=images/u218,if=none,id=drive-virtio0-0-218,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-218,id=virti0-0-218,multifunction=on,addr=0x1e.2 -drive file=images/u219,if=none,id=drive-virtio0-0-219,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-219,id=virti0-0-219,multifunction=on,addr=0x1e.3 -drive file=images/u220,if=none,id=drive-virtio0-0-220,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-220,id=virti0-0-220,multifunction=on,addr=0x1e.4 -drive file=images/u221,if=none,id=drive-virtio0-0-221,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-221,id=virti0-0-221,multifunction=on,addr=0x1e.5 -drive file=images/u222,if=none,id=drive-virtio0-0-222,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-222,id=virti0-0-222,multifunction=on,addr=0x1e.6 -drive file=images/u223,if=none,id=drive-virtio0-0-223,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-223,id=virti0-0-223,multifunction=on,addr=0x1e.7 -drive file=images/u224,if=none,id=drive-virtio0-0-224,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-224,id=virti0-0-224,multifunction=on,addr=0x1f.0 -drive file=images/u225,if=none,id=drive-virtio0-0-225,format=qcow2,cache=none -device 
virtio-blk-pci,drive=drive-virtio0-0-225,id=virti0-0-225,multifunction=on,addr=0x1f.1 -drive file=images/u226,if=none,id=drive-virtio0-0-226,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-226,id=virti0-0-226,multifunction=on,addr=0x1f.2 -drive file=images/u227,if=none,id=drive-virtio0-0-227,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-227,id=virti0-0-227,multifunction=on,addr=0x1f.3 -drive file=images/u228,if=none,id=drive-virtio0-0-228,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-228,id=virti0-0-228,multifunction=on,addr=0x1f.4 -drive file=images/u229,if=none,id=drive-virtio0-0-229,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-229,id=virti0-0-229,multifunction=on,addr=0x1f.5 -drive file=images/u230,if=none,id=drive-virtio0-0-230,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-230,id=virti0-0-230,multifunction=on,addr=0x1f.6 -drive file=images/u231,if=none,id=drive-virtio0-0-231,format=qcow2,cache=none -device virtio-blk-pci,drive=drive-virtio0-0-231,id=virti0-0-231,multifunction=on,addr=0x1f.7
* Re: [Qemu-devel] [PATCH 1/2] return available ioeventfds count in kvm_has_many_ioeventfds()
2012-03-13 11:50 ` Jan Kiszka
@ 2012-03-13 12:00 ` Amos Kong
2012-03-13 12:24 ` Jan Kiszka
0 siblings, 1 reply; 24+ messages in thread
From: Amos Kong @ 2012-03-13 12:00 UTC (permalink / raw)
To: Jan Kiszka; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, avi
On 13/03/12 19:50, Jan Kiszka wrote:
> Please tag uq/master patches with "PATCH uq/master".
>
> On 2012-03-13 11:42, Amos Kong wrote:
>> Older kernels have a 6 device limit on the KVM io bus.
>> This patch makes kvm_has_many_ioeventfds() return available
>> ioeventfd count. ioeventfd will be disabled if there is
>> no 7 available ioeventfds.
>>
>> Signed-off-by: Amos Kong <akong@redhat.com>
>> ---
>> hw/virtio-pci.c | 2 +-
>> kvm-all.c | 9 +++------
>> 2 files changed, 4 insertions(+), 7 deletions(-)
>>
>> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
>> index a0fb7c1..d63f303 100644
>> --- a/hw/virtio-pci.c
>> +++ b/hw/virtio-pci.c
>> @@ -678,7 +678,7 @@ void virtio_init_pci(VirtIOPCIProxy *proxy, VirtIODevice *vdev)
>> pci_register_bar(&proxy->pci_dev, 0, PCI_BASE_ADDRESS_SPACE_IO,
>> &proxy->bar);
>>
>> - if (!kvm_has_many_ioeventfds()) {
>> + if (kvm_has_many_ioeventfds() != 7) {
>> proxy->flags &= ~VIRTIO_PCI_FLAG_USE_IOEVENTFD;
>> }
>>
>> diff --git a/kvm-all.c b/kvm-all.c
>> index 3c6b4f0..d12694b 100644
>> --- a/kvm-all.c
>> +++ b/kvm-all.c
>> @@ -78,7 +78,6 @@ struct KVMState
>> int pit_in_kernel;
>> int pit_state2;
>> int xsave, xcrs;
>> - int many_ioeventfds;
>> int irqchip_inject_ioctl;
>> #ifdef KVM_CAP_IRQ_ROUTING
>> struct kvm_irq_routing *irq_routes;
>> @@ -510,8 +509,8 @@ static int kvm_check_many_ioeventfds(void)
>> }
>> }
>>
>> - /* Decide whether many devices are supported or not */
>> - ret = i == ARRAY_SIZE(ioeventfds);
>> + /* If i equals to 7, many devices are supported */
>> + ret = i;
>>
>> while (i-- > 0) {
>> kvm_set_ioeventfd_pio_word(ioeventfds[i], 0, i, false);
>> @@ -1078,8 +1077,6 @@ int kvm_init(void)
>> kvm_state = s;
>> memory_listener_register(&kvm_memory_listener, NULL);
>>
>> - s->many_ioeventfds = kvm_check_many_ioeventfds();
>> -
>> cpu_interrupt_handler = kvm_handle_interrupt;
>>
>> return 0;
>> @@ -1407,7 +1404,7 @@ int kvm_has_many_ioeventfds(void)
>> if (!kvm_enabled()) {
>> return 0;
>> }
>> - return kvm_state->many_ioeventfds;
>> + return kvm_check_many_ioeventfds();
>
> And why are you dropping the caching of the kvm_check_many_ioeventfds()
> return value? Is kvm_has_many_ioeventfds not used outside init scopes?
Hi Jan,
Previously, kvm_state->many_ioeventfds was only updated once at startup.
I want to call kvm_check_many_ioeventfds() each time before starting
ioeventfd, to check whether an available ioeventfd still exists.
--
Amos.
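(For context: the probe being discussed registers throwaway ioeventfds until
the kernel refuses, counts how many it accepted, then tears everything down
again. A minimal sketch, assuming the kvm_set_ioeventfd_pio_word() wrapper
quoted in the diff above and eventfd(2) from <sys/eventfd.h>; the real
kvm_check_many_ioeventfds() differs in details:

/* Sketch of the probe: register dummy ioeventfds until the kernel
 * refuses, remember how many it accepted, then unwind everything. */
static int probe_available_ioeventfds(void)
{
    int ioeventfds[7];
    int i, ret;

    for (i = 0; i < 7; i++) {
        ioeventfds[i] = eventfd(0, EFD_CLOEXEC);
        if (ioeventfds[i] < 0) {
            break;
        }
        ret = kvm_set_ioeventfd_pio_word(ioeventfds[i], 0, i, true);
        if (ret < 0) {
            close(ioeventfds[i]);
            break;
        }
    }

    ret = i;  /* number of ioeventfds the kernel accepted */

    while (i-- > 0) {
        kvm_set_ioeventfd_pio_word(ioeventfds[i], 0, i, false);
        close(ioeventfds[i]);
    }
    return ret;
}
)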
* Re: [Qemu-devel] [PATCH 1/2] return available ioeventfds count in kvm_has_many_ioeventfds()
2012-03-13 12:00 ` Amos Kong
@ 2012-03-13 12:24 ` Jan Kiszka
2012-03-13 13:05 ` Amos Kong
0 siblings, 1 reply; 24+ messages in thread
From: Jan Kiszka @ 2012-03-13 12:24 UTC (permalink / raw)
To: Amos Kong
Cc: aliguori@us.ibm.com, stefanha@linux.vnet.ibm.com,
kvm@vger.kernel.org, mtosatti@redhat.com, qemu-devel@nongnu.org,
avi@redhat.com
On 2012-03-13 13:00, Amos Kong wrote:
> On 13/03/12 19:50, Jan Kiszka wrote:
>> Please tag uq/master patches with "PATCH uq/master".
>>
>> On 2012-03-13 11:42, Amos Kong wrote:
>>> Older kernels have a 6 device limit on the KVM io bus.
>>> This patch makes kvm_has_many_ioeventfds() return available
>>> ioeventfd count. ioeventfd will be disabled if there is
>>> no 7 available ioeventfds.
>>>
> >>> Signed-off-by: Amos Kong <akong@redhat.com>
>>> ---
>>> hw/virtio-pci.c | 2 +-
>>> kvm-all.c | 9 +++------
>>> 2 files changed, 4 insertions(+), 7 deletions(-)
>>>
>>> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
>>> index a0fb7c1..d63f303 100644
>>> --- a/hw/virtio-pci.c
>>> +++ b/hw/virtio-pci.c
>>> @@ -678,7 +678,7 @@ void virtio_init_pci(VirtIOPCIProxy *proxy, VirtIODevice *vdev)
>>> pci_register_bar(&proxy->pci_dev, 0, PCI_BASE_ADDRESS_SPACE_IO,
>>> &proxy->bar);
>>>
>>> - if (!kvm_has_many_ioeventfds()) {
>>> + if (kvm_has_many_ioeventfds() != 7) {
> >>> proxy->flags &= ~VIRTIO_PCI_FLAG_USE_IOEVENTFD;
>>> }
>>>
>>> diff --git a/kvm-all.c b/kvm-all.c
>>> index 3c6b4f0..d12694b 100644
>>> --- a/kvm-all.c
>>> +++ b/kvm-all.c
>>> @@ -78,7 +78,6 @@ struct KVMState
>>> int pit_in_kernel;
>>> int pit_state2;
>>> int xsave, xcrs;
>>> - int many_ioeventfds;
>>> int irqchip_inject_ioctl;
>>> #ifdef KVM_CAP_IRQ_ROUTING
>>> struct kvm_irq_routing *irq_routes;
>>> @@ -510,8 +509,8 @@ static int kvm_check_many_ioeventfds(void)
>>> }
>>> }
>>>
>>> - /* Decide whether many devices are supported or not */
>>> - ret = i == ARRAY_SIZE(ioeventfds);
>>> + /* If i equals to 7, many devices are supported */
>>> + ret = i;
>>>
> >>> while (i-- > 0) {
>>> kvm_set_ioeventfd_pio_word(ioeventfds[i], 0, i, false);
>>> @@ -1078,8 +1077,6 @@ int kvm_init(void)
>>> kvm_state = s;
>>> memory_listener_register(&kvm_memory_listener, NULL);
>>>
>>> - s->many_ioeventfds = kvm_check_many_ioeventfds();
>>> -
>>> cpu_interrupt_handler = kvm_handle_interrupt;
>>>
>>> return 0;
>>> @@ -1407,7 +1404,7 @@ int kvm_has_many_ioeventfds(void)
>>> if (!kvm_enabled()) {
>>> return 0;
>>> }
>>> - return kvm_state->many_ioeventfds;
>>> + return kvm_check_many_ioeventfds();
>>
>> And why are you dropping the caching of the kvm_check_many_ioeventfds()
>> return value? Is kvm_has_many_ioeventfds not used outside init scopes?
>
> Hi Jan,
>
> Previously, kvm_state->many_ioeventfds was only updated once at startup.
And the additional use case in patch 2 is not a hot path either, right?
> I want to call kvm_check_many_ioeventfds() each time before starting
> ioeventfd, to check whether an available ioeventfd still exists.
OK, but then kvm_has_many_ioeventfds is not the right name for this
function anymore. Call it "kvm_get_available_ioeventfds" or so, but not
in a way that implies a boolean return value.
Jan
--
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux
* Re: [Qemu-devel] [PATCH 1/2] return available ioeventfds count in kvm_has_many_ioeventfds()
2012-03-13 12:24 ` Jan Kiszka
@ 2012-03-13 13:05 ` Amos Kong
0 siblings, 0 replies; 24+ messages in thread
From: Amos Kong @ 2012-03-13 13:05 UTC (permalink / raw)
To: Jan Kiszka; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, avi
----- Original Message -----
> On 2012-03-13 13:00, Amos Kong wrote:
> > On 13/03/12 19:50, Jan Kiszka wrote:
> >> Please tag uq/master patches with "PATCH uq/master".
> >>
> >> On 2012-03-13 11:42, Amos Kong wrote:
> >>> Older kernels have a 6 device limit on the KVM io bus.
> >>> This patch makes kvm_has_many_ioeventfds() return available
> >>> ioeventfd count. ioeventfd will be disabled if there is
> >>> no 7 available ioeventfds.
> >>>
> >>> Signed-off-by: Amos Kong <akong@redhat.com>
> >>> ---
> >>> hw/virtio-pci.c | 2 +-
> >>> kvm-all.c | 9 +++------
> >>> 2 files changed, 4 insertions(+), 7 deletions(-)
> >>>
> >>> diff --git a/hw/virtio-pci.c b/hw/virtio-pci.c
> >>> index a0fb7c1..d63f303 100644
> >>> --- a/hw/virtio-pci.c
> >>> +++ b/hw/virtio-pci.c
> >>> @@ -678,7 +678,7 @@ void virtio_init_pci(VirtIOPCIProxy *proxy,
> >>> VirtIODevice *vdev)
> >>> pci_register_bar(&proxy->pci_dev, 0,
> >>> PCI_BASE_ADDRESS_SPACE_IO,
> >>> &proxy->bar);
> >>>
> >>> - if (!kvm_has_many_ioeventfds()) {
> >>> + if (kvm_has_many_ioeventfds() != 7) {
> >>> proxy->flags &= ~VIRTIO_PCI_FLAG_USE_IOEVENTFD;
> >>> }
> >>>
> >>> diff --git a/kvm-all.c b/kvm-all.c
> >>> index 3c6b4f0..d12694b 100644
> >>> --- a/kvm-all.c
> >>> +++ b/kvm-all.c
> >>> @@ -78,7 +78,6 @@ struct KVMState
> >>> int pit_in_kernel;
> >>> int pit_state2;
> >>> int xsave, xcrs;
> >>> - int many_ioeventfds;
> >>> int irqchip_inject_ioctl;
> >>> #ifdef KVM_CAP_IRQ_ROUTING
> >>> struct kvm_irq_routing *irq_routes;
> >>> @@ -510,8 +509,8 @@ static int kvm_check_many_ioeventfds(void)
> >>> }
> >>> }
> >>>
> >>> - /* Decide whether many devices are supported or not */
> >>> - ret = i == ARRAY_SIZE(ioeventfds);
> >>> + /* If i equals to 7, many devices are supported */
> >>> + ret = i;
> >>>
> >>> while (i-- > 0) {
> >>> kvm_set_ioeventfd_pio_word(ioeventfds[i], 0, i, false);
> >>> @@ -1078,8 +1077,6 @@ int kvm_init(void)
> >>> kvm_state = s;
> >>> memory_listener_register(&kvm_memory_listener, NULL);
> >>>
> >>> - s->many_ioeventfds = kvm_check_many_ioeventfds();
> >>> -
> >>> cpu_interrupt_handler = kvm_handle_interrupt;
> >>>
> >>> return 0;
> >>> @@ -1407,7 +1404,7 @@ int kvm_has_many_ioeventfds(void)
> >>> if (!kvm_enabled()) {
> >>> return 0;
> >>> }
> >>> - return kvm_state->many_ioeventfds;
> >>> + return kvm_check_many_ioeventfds();
> >>
> >> And why are you dropping the caching of the
> >> kvm_check_many_ioeventfds()
> >> return value? Is kvm_has_many_ioeventfds not used outside init
> >> scopes?
> >
> > Hi Jan,
> >
> > Previously, kvm_state->many_ioeventfds was only updated once at startup.
>
> And the additional use case in patch 2 is not a hot path either, right?
Hi Jan,
It's not hot; it's called while initializing virtio-pci devices.
> > I want to call kvm_check_many_ioeventfds() each time before starting
> > ioeventfd, to check whether an available ioeventfd still exists.
>
> OK, but then kvm_has_many_ioeventfds is not the right name for this
> function anymore. Call it "kvm_get_available_ioeventfds" or so, but
> not in a way that implies a boolean return value.
Yeah, will fix it in v2.
Amos.
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-13 11:51 ` Amos Kong
@ 2012-03-13 14:30 ` Stefan Hajnoczi
2012-03-13 14:47 ` Amos Kong
0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2012-03-13 14:30 UTC (permalink / raw)
To: Amos Kong
Cc: aliguori, stefanha, kvm, Michael S. Tsirkin, mtosatti, qemu-devel,
avi
On Tue, Mar 13, 2012 at 11:51 AM, Amos Kong <akong@redhat.com> wrote:
> On 13/03/12 19:23, Stefan Hajnoczi wrote:
>>
>> On Tue, Mar 13, 2012 at 10:42 AM, Amos Kong <akong@redhat.com> wrote:
>>>
>>> Boot up guest with 232 virtio-blk disk, qemu will abort for fail to
>>> allocate ioeventfd. This patchset changes kvm_has_many_ioeventfds(),
>>> and check if available ioeventfd exists. If not, virtio-pci will
>>> fallback to userspace, and don't use ioeventfd for io notification.
>
>
> Hi Stefan,
>
>
>> Please explain how it fails with 232 devices. Where does it abort and
>> why?
>
>
> (gdb) bt
> #0 0x00007ffff48c8885 in raise () from /lib64/libc.so.6
> #1 0x00007ffff48ca065 in abort () from /lib64/libc.so.6
> #2 0x00007ffff7e89a3d in kvm_io_ioeventfd_add (section=0x7fffbfbf5610,
> match_data=true, data=0, fd=461) at /home/devel/qemu/kvm-all.c:778
> #3 0x00007ffff7e89b3f in kvm_eventfd_add (listener=0x7ffff82ebe80,
> section=0x7fffbfbf5610, match_data=true, data=0, fd=461) at
> /home/devel/qemu/kvm-all.c:802
> #4 0x00007ffff7e9bcf7 in address_space_add_del_ioeventfds
> (as=0x7ffff8b278a0, fds_new=0x7fffb80106f0, fds_new_nb=201,
> fds_old=0x7fffb800db20, fds_old_nb=200)
> at /home/devel/qemu/memory.c:612
> #5 0x00007ffff7e9c04f in address_space_update_ioeventfds
> (as=0x7ffff8b278a0) at /home/devel/qemu/memory.c:645
> #6 0x00007ffff7e9caa0 in address_space_update_topology (as=0x7ffff8b278a0)
> at /home/devel/qemu/memory.c:726
> #7 0x00007ffff7e9cb95 in memory_region_update_topology (mr=0x7fffdeb179b0)
> at /home/devel/qemu/memory.c:746
> #8 0x00007ffff7e9e802 in memory_region_add_eventfd (mr=0x7fffdeb179b0,
> addr=16, size=2, match_data=true, data=0, fd=461) at
> /home/devel/qemu/memory.c:1220
> #9 0x00007ffff7d9e832 in virtio_pci_set_host_notifier_internal
> (proxy=0x7fffdeb175a0, n=0, assign=true) at
> /home/devel/qemu/hw/virtio-pci.c:175
> #10 0x00007ffff7d9ea5f in virtio_pci_start_ioeventfd (proxy=0x7fffdeb175a0)
> at /home/devel/qemu/hw/virtio-pci.c:230
> #11 0x00007ffff7d9ee51 in virtio_ioport_write (opaque=0x7fffdeb175a0,
> addr=18, val=7) at /home/devel/qemu/hw/virtio-pci.c:325
> #12 0x00007ffff7d9f37b in virtio_pci_config_writeb (opaque=0x7fffdeb175a0,
> addr=18, val=7) at /home/devel/qemu/hw/virtio-pci.c:457
> #13 0x00007ffff7e9ac23 in memory_region_iorange_write
> (iorange=0x7fffb8005cc0, offset=18, width=1, data=7) at
> /home/devel/qemu/memory.c:427
> #14 0x00007ffff7e857e2 in ioport_writeb_thunk (opaque=0x7fffb8005cc0,
> addr=61970, data=7) at /home/devel/qemu/ioport.c:212
> #15 0x00007ffff7e85197 in ioport_write (index=0, address=61970, data=7) at
> /home/devel/qemu/ioport.c:83
> #16 0x00007ffff7e85d9a in cpu_outb (addr=61970, val=7 '\a') at
> /home/devel/qemu/ioport.c:289
> #17 0x00007ffff7e8a70a in kvm_handle_io (port=61970, data=0x7ffff7c11000,
> direction=1, size=1, count=1) at /home/devel/qemu/kvm-all.c:1123
> #18 0x00007ffff7e8ad0a in kvm_cpu_exec (env=0x7fffc1688010) at
> /home/devel/qemu/kvm-all.c:1271
> #19 0x00007ffff7e595fc in qemu_kvm_cpu_thread_fn (arg=0x7fffc1688010) at
> /home/devel/qemu/cpus.c:733
> #20 0x00007ffff63687f1 in start_thread () from /lib64/libpthread.so.0
> #21 0x00007ffff497b92d in clone () from /lib64/libc.so.6
> (gdb) frame 2
> #2 0x00007ffff7e89a3d in kvm_io_ioeventfd_add (section=0x7fffbfbf5610,
> match_data=true, data=0, fd=461) at /home/devel/qemu/kvm-all.c:778
> 778 abort();
> (gdb) l
> 773 assert(match_data && section->size == 2);
> 774
> 775 r = kvm_set_ioeventfd_pio_word(fd,
> section->offset_within_address_space,
> 776 data, true);
> 777 if (r < 0) {
> 778 abort();
> 779 }
This is where graceful fallback code needs to be added. I have added
Avi because I'm not very familiar with the new memory API and how it
does ioeventfd.
Basically we need a way for ioeventfd to fail if we hit rlimit or the
in-kernel io bus device limit. Suggestions?
(The reason I don't like using has_many_ioeventfds() is because it's
ugly and inefficient to open and then close file descriptors -
especially in a multithreaded program where file descriptor
availability can change while we're executing.)
Stefan
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-13 14:30 ` Stefan Hajnoczi
@ 2012-03-13 14:47 ` Amos Kong
2012-03-13 16:36 ` Stefan Hajnoczi
0 siblings, 1 reply; 24+ messages in thread
From: Amos Kong @ 2012-03-13 14:47 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: aliguori, stefanha, kvm, Michael S. Tsirkin, mtosatti, qemu-devel,
avi
----- Original Message -----
> On Tue, Mar 13, 2012 at 11:51 AM, Amos Kong <akong@redhat.com> wrote:
> > On 13/03/12 19:23, Stefan Hajnoczi wrote:
> >>
> >> On Tue, Mar 13, 2012 at 10:42 AM, Amos Kong <akong@redhat.com>
> >> wrote:
> >>>
> >>> Boot up guest with 232 virtio-blk disk, qemu will abort for fail
> >>> to
> >>> allocate ioeventfd. This patchset changes
> >>> kvm_has_many_ioeventfds(),
> >>> and check if available ioeventfd exists. If not, virtio-pci will
> >>> fallback to userspace, and don't use ioeventfd for io
> >>> notification.
> >
> >
> > Hi Stefan,
> >
> >
> >> Please explain how it fails with 232 devices. Where does it abort
> >> and
> >> why?
> >
> >
> > (gdb) bt
> > #0 0x00007ffff48c8885 in raise () from /lib64/libc.so.6
> > #1 0x00007ffff48ca065 in abort () from /lib64/libc.so.6
> > #2 0x00007ffff7e89a3d in kvm_io_ioeventfd_add
> > (section=0x7fffbfbf5610,
> > match_data=true, data=0, fd=461) at /home/devel/qemu/kvm-all.c:778
> > #3 0x00007ffff7e89b3f in kvm_eventfd_add (listener=0x7ffff82ebe80,
> > section=0x7fffbfbf5610, match_data=true, data=0, fd=461) at
> > /home/devel/qemu/kvm-all.c:802
> > #4 0x00007ffff7e9bcf7 in address_space_add_del_ioeventfds
> > (as=0x7ffff8b278a0, fds_new=0x7fffb80106f0, fds_new_nb=201,
> > fds_old=0x7fffb800db20, fds_old_nb=200)
> > at /home/devel/qemu/memory.c:612
> > #5 0x00007ffff7e9c04f in address_space_update_ioeventfds
> > (as=0x7ffff8b278a0) at /home/devel/qemu/memory.c:645
> > #6 0x00007ffff7e9caa0 in address_space_update_topology
> > (as=0x7ffff8b278a0)
> > at /home/devel/qemu/memory.c:726
> > #7 0x00007ffff7e9cb95 in memory_region_update_topology
> > (mr=0x7fffdeb179b0)
> > at /home/devel/qemu/memory.c:746
> > #8 0x00007ffff7e9e802 in memory_region_add_eventfd
> > (mr=0x7fffdeb179b0,
> > addr=16, size=2, match_data=true, data=0, fd=461) at
> > /home/devel/qemu/memory.c:1220
> > #9 0x00007ffff7d9e832 in virtio_pci_set_host_notifier_internal
> > (proxy=0x7fffdeb175a0, n=0, assign=true) at
> > /home/devel/qemu/hw/virtio-pci.c:175
> > #10 0x00007ffff7d9ea5f in virtio_pci_start_ioeventfd
> > (proxy=0x7fffdeb175a0)
> > at /home/devel/qemu/hw/virtio-pci.c:230
> > #11 0x00007ffff7d9ee51 in virtio_ioport_write
> > (opaque=0x7fffdeb175a0,
> > addr=18, val=7) at /home/devel/qemu/hw/virtio-pci.c:325
> > #12 0x00007ffff7d9f37b in virtio_pci_config_writeb
> > (opaque=0x7fffdeb175a0,
> > addr=18, val=7) at /home/devel/qemu/hw/virtio-pci.c:457
> > #13 0x00007ffff7e9ac23 in memory_region_iorange_write
> > (iorange=0x7fffb8005cc0, offset=18, width=1, data=7) at
> > /home/devel/qemu/memory.c:427
> > #14 0x00007ffff7e857e2 in ioport_writeb_thunk
> > (opaque=0x7fffb8005cc0,
> > addr=61970, data=7) at /home/devel/qemu/ioport.c:212
> > #15 0x00007ffff7e85197 in ioport_write (index=0, address=61970,
> > data=7) at
> > /home/devel/qemu/ioport.c:83
> > #16 0x00007ffff7e85d9a in cpu_outb (addr=61970, val=7 '\a') at
> > /home/devel/qemu/ioport.c:289
> > #17 0x00007ffff7e8a70a in kvm_handle_io (port=61970,
> > data=0x7ffff7c11000,
> > direction=1, size=1, count=1) at /home/devel/qemu/kvm-all.c:1123
> > #18 0x00007ffff7e8ad0a in kvm_cpu_exec (env=0x7fffc1688010) at
> > /home/devel/qemu/kvm-all.c:1271
> > #19 0x00007ffff7e595fc in qemu_kvm_cpu_thread_fn
> > (arg=0x7fffc1688010) at
> > /home/devel/qemu/cpus.c:733
> > #20 0x00007ffff63687f1 in start_thread () from
> > /lib64/libpthread.so.0
> > #21 0x00007ffff497b92d in clone () from /lib64/libc.so.6
> > (gdb) frame 2
> > #2 0x00007ffff7e89a3d in kvm_io_ioeventfd_add
> > (section=0x7fffbfbf5610,
> > match_data=true, data=0, fd=461) at /home/devel/qemu/kvm-all.c:778
> > 778 abort();
> > (gdb) l
> > 773 assert(match_data && section->size == 2);
> > 774
> > 775 r = kvm_set_ioeventfd_pio_word(fd,
> > section->offset_within_address_space,
> > 776 data, true);
> > 777 if (r < 0) {
> > 778 abort();
> > 779 }
>
> This is where graceful fallback code needs to be added. I have added
> Avi because I'm not very familiar with the new memory API and how it
> does ioeventfd.
Hi, Stefan
I thought of a fix here, but virtio_pci_start_ioeventfd() could not know that the allocation failed.
diff --git a/kvm-all.c b/kvm-all.c
index 77eadf6..7157e78 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -771,6 +771,8 @@ static void kvm_io_ioeventfd_add(MemoryRegionSection *sectio
     r = kvm_set_ioeventfd_pio_word(fd, section->offset_within_address_space,
                                    data, true);
+    if (r == -ENOSPC)
+        return;
     if (r < 0) {
         abort();
     }
> Basically we need a way for ioeventfd to fail if we hit rlimit or the
> in-kernel io bus device limit. Suggestions?
> (The reason I don't like using has_many_ioeventfds() is because it's
> ugly and inefficient to open and then close file descriptors -
> especially in a multithreaded program where file descriptor
> availability can change while we're executing.)
I admit it's too ugly ;)
but I want to keep it for older kernels (where the iobus limit is 6).
Can we remove the check for old kernels (iobus limit of 6)?
Users can disable ioeventfd through the qemu cmdline:
virtio-net-pci.ioeventfd=on/off
virtio-blk-pci.ioeventfd=on/off
Amos
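(For context, the per-device fallback that patch 2 aims for boils down to one
check in virtio_init_pci(). A sketch: kvm_get_available_ioeventfds() here
stands for the renamed probe Jan asked for, and treating "no free slot" as
the trigger is an assumption of this sketch, not the final patch:

/* Sketch: if the probe reports no free ioeventfd slot, clear the flag
 * so this device falls back to userspace io notification. */
if (kvm_get_available_ioeventfds() <= 0) {
    proxy->flags &= ~VIRTIO_PCI_FLAG_USE_IOEVENTFD;
}
)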
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-13 14:47 ` Amos Kong
@ 2012-03-13 16:36 ` Stefan Hajnoczi
2012-03-14 0:30 ` Amos Kong
0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2012-03-13 16:36 UTC (permalink / raw)
To: Amos Kong
Cc: aliguori, stefanha, kvm, Michael S. Tsirkin, mtosatti, qemu-devel,
avi
On Tue, Mar 13, 2012 at 2:47 PM, Amos Kong <akong@redhat.com> wrote:
> ----- Original Message -----
>> On Tue, Mar 13, 2012 at 11:51 AM, Amos Kong <akong@redhat.com> wrote:
>> > On 13/03/12 19:23, Stefan Hajnoczi wrote:
>> >>
>> >> On Tue, Mar 13, 2012 at 10:42 AM, Amos Kong <akong@redhat.com>
>> >> wrote:
>> >>>
>> >>> Boot up guest with 232 virtio-blk disk, qemu will abort for fail
>> >>> to
>> >>> allocate ioeventfd. This patchset changes
>> >>> kvm_has_many_ioeventfds(),
>> >>> and check if available ioeventfd exists. If not, virtio-pci will
>> >>> fallback to userspace, and don't use ioeventfd for io
>> >>> notification.
>> >
>> >
>> > Hi Stefan,
>> >
>> >
>> >> Please explain how it fails with 232 devices. Where does it abort
>> >> and
>> >> why?
>> >
>> >
>> > (gdb) bt
>> > #0 0x00007ffff48c8885 in raise () from /lib64/libc.so.6
>> > #1 0x00007ffff48ca065 in abort () from /lib64/libc.so.6
>> > #2 0x00007ffff7e89a3d in kvm_io_ioeventfd_add
>> > (section=0x7fffbfbf5610,
>> > match_data=true, data=0, fd=461) at /home/devel/qemu/kvm-all.c:778
>> > #3 0x00007ffff7e89b3f in kvm_eventfd_add (listener=0x7ffff82ebe80,
>> > section=0x7fffbfbf5610, match_data=true, data=0, fd=461) at
>> > /home/devel/qemu/kvm-all.c:802
>> > #4 0x00007ffff7e9bcf7 in address_space_add_del_ioeventfds
>> > (as=0x7ffff8b278a0, fds_new=0x7fffb80106f0, fds_new_nb=201,
>> > fds_old=0x7fffb800db20, fds_old_nb=200)
>> > at /home/devel/qemu/memory.c:612
>> > #5 0x00007ffff7e9c04f in address_space_update_ioeventfds
>> > (as=0x7ffff8b278a0) at /home/devel/qemu/memory.c:645
>> > #6 0x00007ffff7e9caa0 in address_space_update_topology
>> > (as=0x7ffff8b278a0)
>> > at /home/devel/qemu/memory.c:726
>> > #7 0x00007ffff7e9cb95 in memory_region_update_topology
>> > (mr=0x7fffdeb179b0)
>> > at /home/devel/qemu/memory.c:746
>> > #8 0x00007ffff7e9e802 in memory_region_add_eventfd
>> > (mr=0x7fffdeb179b0,
>> > addr=16, size=2, match_data=true, data=0, fd=461) at
>> > /home/devel/qemu/memory.c:1220
>> > #9 0x00007ffff7d9e832 in virtio_pci_set_host_notifier_internal
>> > (proxy=0x7fffdeb175a0, n=0, assign=true) at
>> > /home/devel/qemu/hw/virtio-pci.c:175
>> > #10 0x00007ffff7d9ea5f in virtio_pci_start_ioeventfd
>> > (proxy=0x7fffdeb175a0)
>> > at /home/devel/qemu/hw/virtio-pci.c:230
>> > #11 0x00007ffff7d9ee51 in virtio_ioport_write
>> > (opaque=0x7fffdeb175a0,
>> > addr=18, val=7) at /home/devel/qemu/hw/virtio-pci.c:325
>> > #12 0x00007ffff7d9f37b in virtio_pci_config_writeb
>> > (opaque=0x7fffdeb175a0,
>> > addr=18, val=7) at /home/devel/qemu/hw/virtio-pci.c:457
>> > #13 0x00007ffff7e9ac23 in memory_region_iorange_write
>> > (iorange=0x7fffb8005cc0, offset=18, width=1, data=7) at
>> > /home/devel/qemu/memory.c:427
>> > #14 0x00007ffff7e857e2 in ioport_writeb_thunk
>> > (opaque=0x7fffb8005cc0,
>> > addr=61970, data=7) at /home/devel/qemu/ioport.c:212
>> > #15 0x00007ffff7e85197 in ioport_write (index=0, address=61970,
>> > data=7) at
>> > /home/devel/qemu/ioport.c:83
>> > #16 0x00007ffff7e85d9a in cpu_outb (addr=61970, val=7 '\a') at
>> > /home/devel/qemu/ioport.c:289
>> > #17 0x00007ffff7e8a70a in kvm_handle_io (port=61970,
>> > data=0x7ffff7c11000,
>> > direction=1, size=1, count=1) at /home/devel/qemu/kvm-all.c:1123
>> > #18 0x00007ffff7e8ad0a in kvm_cpu_exec (env=0x7fffc1688010) at
>> > /home/devel/qemu/kvm-all.c:1271
>> > #19 0x00007ffff7e595fc in qemu_kvm_cpu_thread_fn
>> > (arg=0x7fffc1688010) at
>> > /home/devel/qemu/cpus.c:733
>> > #20 0x00007ffff63687f1 in start_thread () from
>> > /lib64/libpthread.so.0
>> > #21 0x00007ffff497b92d in clone () from /lib64/libc.so.6
>> > (gdb) frame 2
>> > #2 0x00007ffff7e89a3d in kvm_io_ioeventfd_add
>> > (section=0x7fffbfbf5610,
>> > match_data=true, data=0, fd=461) at /home/devel/qemu/kvm-all.c:778
>> > 778 abort();
>> > (gdb) l
>> > 773 assert(match_data && section->size == 2);
>> > 774
>> > 775 r = kvm_set_ioeventfd_pio_word(fd,
>> > section->offset_within_address_space,
>> > 776 data, true);
>> > 777 if (r < 0) {
>> > 778 abort();
>> > 779 }
>>
>> This is where graceful fallback code needs to be added. I have added
>> Avi because I'm not very familiar with the new memory API and how it
>> does ioeventfd.
>
> Hi, Stefan
>
> I thought of a fix here, but virtio_pci_start_ioeventfd() could not know that the allocation failed.
>
> diff --git a/kvm-all.c b/kvm-all.c
> index 77eadf6..7157e78 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -771,6 +771,8 @@ static void kvm_io_ioeventfd_add(MemoryRegionSection *sectio
>
> r = kvm_set_ioeventfd_pio_word(fd, section->offset_within_address_space,
> data, true);
> + if (r == -ENOSPC)
> + return;
The caller needs a way to detect the failure.
> if (r < 0) {
> abort();
> }
>
>
>
>> Basically we need a way for ioeventfd to fail if we hit rlimit or the
>> in-kernel io bus device limit. Suggestions?
>
>
>
>> (The reason I don't like using has_many_ioeventfds() is because it's
>> ugly and inefficient to open and then close file descriptors -
>> especially in a multithreaded program where file descriptor
>> availability can change while we're executing.)
>
> I admit it's too ugly ;)
> but I want to keep it for older kernels (where the iobus limit is 6).
>
> Can we remove the check for old kernels (iobus limit of 6)?
> Users can disable ioeventfd through the qemu cmdline:
> virtio-net-pci.ioeventfd=on/off
> virtio-blk-pci.ioeventfd=on/off
Why do you want to remove the iobus limit 6 check? The point of that
check is to allow vhost-net to always work since it requires an
ioeventfd. Userspace virtio devices, on the other hand, can
gracefully fall back to non-ioeventfd.
Stefan
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-13 16:36 ` Stefan Hajnoczi
@ 2012-03-14 0:30 ` Amos Kong
2012-03-14 8:57 ` Stefan Hajnoczi
0 siblings, 1 reply; 24+ messages in thread
From: Amos Kong @ 2012-03-14 0:30 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: aliguori, stefanha, kvm, Michael S. Tsirkin, mtosatti, qemu-devel,
avi
----- Original Message -----
> On Tue, Mar 13, 2012 at 2:47 PM, Amos Kong <akong@redhat.com> wrote:
...
Hi, Stefan
> > diff --git a/kvm-all.c b/kvm-all.c
> > index 77eadf6..7157e78 100644
> > --- a/kvm-all.c
> > +++ b/kvm-all.c
> > @@ -771,6 +771,8 @@ static void
> > kvm_io_ioeventfd_add(MemoryRegionSection *sectio
> >
> > r = kvm_set_ioeventfd_pio_word(fd,
> > section->offset_within_address_space,
> > data, true);
> > + if (r == -ENOSPC)
> > + return;
>
> The caller needs a way to detect the failure.
Yes, that's about the memory API.
> > if (r < 0) {
> > abort();
> > }
> >
> >
> >
> >> Basically we need a way for ioeventfd to fail if we hit rlimit or
> >> the
> >> in-kernel io bus device limit. Suggestions?
> >
> >
> >
> >> (The reason I don't like using has_many_ioeventfds() is because
> >> it's
> >> ugly and inefficient to open and then close file descriptors -
> >> especially in a multithreaded program where file descriptor
> >> availability can change while we're executing.)
> >
> > I admit it's too ugly ;)
> > but I want to keep it for older kernels (where the iobus limit is 6).
> >
> > Can we remove the check for old kernels (iobus limit of 6)?
> > Users can disable ioeventfd through the qemu cmdline:
> > virtio-net-pci.ioeventfd=on/off
> > virtio-blk-pci.ioeventfd=on/off
>
> Why do you want to remove the iobus limit 6 check? The point of that
> check is to allow vhost-net to always work since it requires an
> ioeventfd.
### -device virtio-blk-pci,ioeventfd=off,drive=drive-virtio0-0-0,id=id1 -drive ...
This blk dev will not use ioeventfd for io notification.
### -device virtio-net-pci,netdev=he,ioeventfd=off -netdev tap,id=he
This net dev will not use ioeventfd.
### -device virtio-net-pci,netdev=he,vhost=on -netdev tap,id=he
This dev will take 2 ioeventfds (servicing notifications from the rx/tx queues).
The ioeventfd parameter is a way for the user to limit ioeventfd usage.
### qemu-kvm -net none -device virtio-blk-pci,ioeventfd=on,drive=drive-virtio0-0-0,id=id1 -drive ...
Some users don't use networking and need better disk io performance,
but we cannot satisfy them if we keep the check for 6 iobus devices.
The strategy should be decided by the user.
> Userspace virtio devices, on the other hand, can
> gracefully fall back to non-ioeventfd.
This is the expected behavior, not an abort.
Thanks, Amos.
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-14 0:30 ` Amos Kong
@ 2012-03-14 8:57 ` Stefan Hajnoczi
0 siblings, 0 replies; 24+ messages in thread
From: Stefan Hajnoczi @ 2012-03-14 8:57 UTC (permalink / raw)
To: Amos Kong
Cc: aliguori, stefanha, kvm, Michael S. Tsirkin, mtosatti, qemu-devel,
avi
On Wed, Mar 14, 2012 at 12:30 AM, Amos Kong <akong@redhat.com> wrote:
> ----- Original Message -----
>> On Tue, Mar 13, 2012 at 2:47 PM, Amos Kong <akong@redhat.com> wrote:
>
> ...
>
> Hi, Stefan
>
>> > diff --git a/kvm-all.c b/kvm-all.c
>> > index 77eadf6..7157e78 100644
>> > --- a/kvm-all.c
>> > +++ b/kvm-all.c
>> > @@ -771,6 +771,8 @@ static void
>> > kvm_io_ioeventfd_add(MemoryRegionSection *sectio
>> >
>> > r = kvm_set_ioeventfd_pio_word(fd,
>> > section->offset_within_address_space,
>> > data, true);
>> > + if (r == -ENOSPC)
>> > + return;
>>
>> The caller needs a way to detect the failure.
>
> Yes, about memory API
>
>> > if (r < 0) {
>> > abort();
>> > }
>> >
>> >
>> >
>> >> Basically we need a way for ioeventfd to fail if we hit rlimit or
>> >> the
>> >> in-kernel io bus device limit. Suggestions?
>> >
>> >
>> >
>> >> (The reason I don't like using has_many_ioeventfds() is because
>> >> it's
>> >> ugly and inefficient to open and then close file descriptors -
>> >> especially in a multithreaded program where file descriptor
>> >> availability can change while we're executing.)
>> >
>> > I admit it's too ugly ;)
>> > but I want to keep it for older kernels (where the iobus limit is 6).
>> >
>> > Can we remove the check for old kernels (iobus limit of 6)?
>> > Users can disable ioeventfd through the qemu cmdline:
>> > virtio-net-pci.ioeventfd=on/off
>> > virtio-blk-pci.ioeventfd=on/off
>>
>> Why do you want to remove the iobus limit 6 check? The point of that
>> check is to allow vhost-net to always work since it requires an
>> ioeventfd.
>
>
> ### -device virtio-blk-pci,ioeventfd=off,drive=drive-virtio0-0-0,id=id1 -drive ...
> This blk dev will not use ioeventfd for io notification.
>
> ### -device virtio-net-pci,netdev=he,ioeventfd=off -netdev tap,id=he
> This net dev will not use ioeventfd.
>
> ### -device virtio-net-pci,netdev=he,vhost=on -netdev tap,id=he
> This dev will take 2 ioeventfds (servicing notifications from the rx/tx queues).
>
> The ioeventfd parameter is a way for the user to limit ioeventfd usage.
>
>
> ### qemu-kvm -net none -device virtio-blk-pci,ioeventfd=on,drive=drive-virtio0-0-0,id=id1 -drive ...
> Some users don't use networking and need better disk io performance,
> but we cannot satisfy them if we keep the check for 6 iobus devices.
>
> The strategy should be decided by the user.
We're discussing the wrong thing here; the 6-ioeventfd workaround is
unrelated to the problem you are trying to solve.
Graceful fallback for ioeventfd failure previously existed but it
seems the new memory API broke it. The thing that needs fixing is the
regression caused by the new memory API code.
An error code path needs to be added to the memory API ioeventfd code
so that virtio-pci.c can deal with failure and QEMU no longer aborts.
Stefan
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-13 10:42 [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd Amos Kong
` (2 preceding siblings ...)
2012-03-13 11:23 ` [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd Stefan Hajnoczi
@ 2012-03-14 9:22 ` Avi Kivity
2012-03-14 9:59 ` Stefan Hajnoczi
3 siblings, 1 reply; 24+ messages in thread
From: Avi Kivity @ 2012-03-14 9:22 UTC (permalink / raw)
To: Amos Kong; +Cc: aliguori, mtosatti, stefanha, kvm, qemu-devel
On 03/13/2012 12:42 PM, Amos Kong wrote:
> Boot up guest with 232 virtio-blk disk, qemu will abort for fail to
> allocate ioeventfd. This patchset changes kvm_has_many_ioeventfds(),
> and check if available ioeventfd exists. If not, virtio-pci will
> fallback to userspace, and don't use ioeventfd for io notification.
How about an alternative way of solving this, within the memory core:
trap those writes in qemu and write to the ioeventfd yourself. This way
ioeventfds work even without kvm:
core: create eventfd
core: install handler for memory address that writes to ioeventfd
kvm (optional): install kernel handler for ioeventfd
even if the third step fails, the ioeventfd still works, it's just slower.
--
error compiling committee.c: too many arguments to function
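(Concretely, the userspace leg of this scheme is just a memory write handler
that signals the eventfd itself. The handler below is an illustrative sketch;
its name and exact shape are assumptions, though the signature mirrors the
write callbacks the 2012-era memory API used:

/* Sketch: fallback doorbell handler.  When no kernel ioeventfd slot is
 * free, the core traps the guest's write and kicks the eventfd from
 * userspace, so the consumer of the eventfd keeps working. */
static void core_eventfd_write(void *opaque, target_phys_addr_t addr,
                               uint64_t data, unsigned size)
{
    int fd = *(int *)opaque;
    uint64_t value = 1;   /* eventfd writes are 8-byte counter adds */

    if (write(fd, &value, sizeof(value)) < 0) {
        perror("eventfd write");
    }
}
)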
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-14 9:22 ` Avi Kivity
@ 2012-03-14 9:59 ` Stefan Hajnoczi
2012-03-14 10:05 ` Avi Kivity
0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2012-03-14 9:59 UTC (permalink / raw)
To: Avi Kivity; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, Amos Kong
On Wed, Mar 14, 2012 at 9:22 AM, Avi Kivity <avi@redhat.com> wrote:
> On 03/13/2012 12:42 PM, Amos Kong wrote:
>> Boot up guest with 232 virtio-blk disk, qemu will abort for fail to
>> allocate ioeventfd. This patchset changes kvm_has_many_ioeventfds(),
>> and check if available ioeventfd exists. If not, virtio-pci will
>> fallback to userspace, and don't use ioeventfd for io notification.
>
> How about an alternative way of solving this, within the memory core:
> trap those writes in qemu and write to the ioeventfd yourself. This way
> ioeventfds work even without kvm:
>
>
> core: create eventfd
> core: install handler for memory address that writes to ioeventfd
> kvm (optional): install kernel handler for ioeventfd
>
> even if the third step fails, the ioeventfd still works, it's just slower.
That approach will penalize guests with large numbers of disks - they
see an extra switch to vcpu thread instead of kvm.ko -> iothread. It
seems okay provided we can solve the limit in the kernel once and for
all by introducing a more dynamic data structure for in-kernel
devices. That way future kernels will never hit an arbitrary limit
below their file descriptor rlimit.
Is there some reason why kvm.ko must use a fixed size array? Would it
be possible to use a tree (maybe with a cache for recent lookups)?
Stefan
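(For context, the kernel-side registration that produces the failure looked
roughly like this at the time; a from-memory sketch rather than the exact
source, with NR_IOBUS_DEVS being the fixed limit under discussion:

/* virt/kvm/kvm_main.c, sketch: the io bus is a fixed-size array,
 * so registering one more device past the limit fails. */
int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx,
                            gpa_t addr, int len, struct kvm_io_device *dev)
{
    struct kvm_io_bus *bus = kvm->buses[bus_idx];

    if (bus->dev_count > NR_IOBUS_DEVS - 1)
        return -ENOSPC;

    /* ... insert into the sorted range array that bsearch uses ... */
    return 0;
}
)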
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-14 9:59 ` Stefan Hajnoczi
@ 2012-03-14 10:05 ` Avi Kivity
2012-03-14 10:39 ` Stefan Hajnoczi
0 siblings, 1 reply; 24+ messages in thread
From: Avi Kivity @ 2012-03-14 10:05 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, Amos Kong
On 03/14/2012 11:59 AM, Stefan Hajnoczi wrote:
> On Wed, Mar 14, 2012 at 9:22 AM, Avi Kivity <avi@redhat.com> wrote:
> > On 03/13/2012 12:42 PM, Amos Kong wrote:
> >> Boot up guest with 232 virtio-blk disk, qemu will abort for fail to
> >> allocate ioeventfd. This patchset changes kvm_has_many_ioeventfds(),
> >> and check if available ioeventfd exists. If not, virtio-pci will
> >> fallback to userspace, and don't use ioeventfd for io notification.
> >
> > How about an alternative way of solving this, within the memory core:
> > trap those writes in qemu and write to the ioeventfd yourself. This way
> > ioeventfds work even without kvm:
> >
> >
> > core: create eventfd
> > core: install handler for memory address that writes to ioeventfd
> > kvm (optional): install kernel handler for ioeventfd
> >
> > even if the third step fails, the ioeventfd still works, it's just slower.
>
> That approach will penalize guests with large numbers of disks - they
> see an extra switch to vcpu thread instead of kvm.ko -> iothread.
It's only a failure path. The normal path is expected to have a kvm
ioeventfd installed.
> It
> seems okay provided we can solve the limit in the kernel once and for
> all by introducing a more dynamic data structure for in-kernel
> devices. That way future kernels will never hit an arbitrary limit
> below their file descriptor rlimit.
>
> Is there some reason why kvm.ko must use a fixed size array? Would it
> be possible to use a tree (maybe with a cache for recent lookups)?
It does use bsearch today IIRC. We'll expand the limit, but there must
be a limit, and qemu must be prepared to deal with it.
--
error compiling committee.c: too many arguments to function
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-14 10:05 ` Avi Kivity
@ 2012-03-14 10:39 ` Stefan Hajnoczi
2012-03-14 10:46 ` Avi Kivity
0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2012-03-14 10:39 UTC (permalink / raw)
To: Avi Kivity; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, Amos Kong
On Wed, Mar 14, 2012 at 10:05 AM, Avi Kivity <avi@redhat.com> wrote:
> On 03/14/2012 11:59 AM, Stefan Hajnoczi wrote:
>> On Wed, Mar 14, 2012 at 9:22 AM, Avi Kivity <avi@redhat.com> wrote:
>> > On 03/13/2012 12:42 PM, Amos Kong wrote:
>> >> Boot up guest with 232 virtio-blk disk, qemu will abort for fail to
>> >> allocate ioeventfd. This patchset changes kvm_has_many_ioeventfds(),
>> >> and check if available ioeventfd exists. If not, virtio-pci will
>> >> fallback to userspace, and don't use ioeventfd for io notification.
>> >
>> > How about an alternative way of solving this, within the memory core:
>> > trap those writes in qemu and write to the ioeventfd yourself. This way
>> > ioeventfds work even without kvm:
>> >
>> >
>> > core: create eventfd
>> > core: install handler for memory address that writes to ioeventfd
>> > kvm (optional): install kernel handler for ioeventfd
>> >
>> > even if the third step fails, the ioeventfd still works, it's just slower.
>>
>> That approach will penalize guests with large numbers of disks - they
>> see an extra switch to vcpu thread instead of kvm.ko -> iothread.
>
> It's only a failure path. The normal path is expected to have a kvm
> ioeventfd installed.
It's the normal path when you attach >232 virtio-blk devices to a
guest (or 300 in the future).
>> It
>> seems okay provided we can solve the limit in the kernel once and for
>> all by introducing a more dynamic data structure for in-kernel
>> devices. That way future kernels will never hit an arbitrary limit
>> below their file descriptor rlimit.
>>
>> Is there some reason why kvm.ko must use a fixed size array? Would it
>> be possible to use a tree (maybe with a cache for recent lookups)?
>
> It does use bsearch today IIRC. We'll expand the limit, but there must
> be a limit, and qemu must be prepared to deal with it.
Shouldn't the limit be the file descriptor rlimit? If userspace
cannot create more eventfds then it cannot set up more ioeventfds.
I agree there always needs to be an error path because there is a
finite resource (either file descriptors or in-kernel device slots).
Stefan
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-14 10:39 ` Stefan Hajnoczi
@ 2012-03-14 10:46 ` Avi Kivity
2012-03-14 11:46 ` Stefan Hajnoczi
0 siblings, 1 reply; 24+ messages in thread
From: Avi Kivity @ 2012-03-14 10:46 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, Amos Kong
On 03/14/2012 12:39 PM, Stefan Hajnoczi wrote:
> On Wed, Mar 14, 2012 at 10:05 AM, Avi Kivity <avi@redhat.com> wrote:
> > On 03/14/2012 11:59 AM, Stefan Hajnoczi wrote:
> >> On Wed, Mar 14, 2012 at 9:22 AM, Avi Kivity <avi@redhat.com> wrote:
> >> > On 03/13/2012 12:42 PM, Amos Kong wrote:
> >> >> Boot up guest with 232 virtio-blk disk, qemu will abort for fail to
> >> >> allocate ioeventfd. This patchset changes kvm_has_many_ioeventfds(),
> >> >> and check if available ioeventfd exists. If not, virtio-pci will
> >> >> fallback to userspace, and don't use ioeventfd for io notification.
> >> >
> >> > How about an alternative way of solving this, within the memory core:
> >> > trap those writes in qemu and write to the ioeventfd yourself. This way
> >> > ioeventfds work even without kvm:
> >> >
> >> >
> >> > core: create eventfd
> >> > core: install handler for memory address that writes to ioeventfd
> >> > kvm (optional): install kernel handler for ioeventfd
> >> >
> >> > even if the third step fails, the ioeventfd still works, it's just slower.
> >>
> >> That approach will penalize guests with large numbers of disks - they
> >> see an extra switch to vcpu thread instead of kvm.ko -> iothread.
> >
> > It's only a failure path. The normal path is expected to have a kvm
> > ioeventfd installed.
>
> It's the normal path when you attach >232 virtio-blk devices to a
> guest (or 300 in the future).
Well, there's nothing we can do about it.
We'll increase the limit of course, but old kernels will remain out
there. The right fix is virtio-scsi anyway.
> >> It
> >> seems okay provided we can solve the limit in the kernel once and for
> >> all by introducing a more dynamic data structure for in-kernel
> >> devices. That way future kernels will never hit an arbitrary limit
> >> below their file descriptor rlimit.
> >>
> >> Is there some reason why kvm.ko must use a fixed size array? Would it
> >> be possible to use a tree (maybe with a cache for recent lookups)?
> >
> > It does use bsearch today IIRC. We'll expand the limit, but there must
> > be a limit, and qemu must be prepared to deal with it.
>
> Shouldn't the limit be the file descriptor rlimit? If userspace
> cannot create more eventfds then it cannot set up more ioeventfds.
You can use the same eventfd for multiple ioeventfds. If you mean to
slave kvm's ioeventfd limit to the number of files the process can have,
that's a good idea. Surely an ioeventfd occupies fewer resources than an
open file.
--
error compiling committee.c: too many arguments to function
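(As an illustration: memory_region_add_eventfd(), visible in the backtrace
earlier with fd=461 as its last argument, takes the fd from the caller, so
nothing stops two registrations from sharing one. The addresses and data
values below are made up:

/* Sketch: one eventfd backing two ioeventfd registrations. */
int fd = eventfd(0, EFD_CLOEXEC);

memory_region_add_eventfd(mr, 0x10, 2, true, 0, fd);  /* queue 0 notify */
memory_region_add_eventfd(mr, 0x18, 2, true, 1, fd);  /* queue 1 notify */
)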
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-14 10:46 ` Avi Kivity
@ 2012-03-14 11:46 ` Stefan Hajnoczi
2012-03-16 8:59 ` Amos Kong
0 siblings, 1 reply; 24+ messages in thread
From: Stefan Hajnoczi @ 2012-03-14 11:46 UTC (permalink / raw)
To: Avi Kivity; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, Amos Kong
On Wed, Mar 14, 2012 at 10:46 AM, Avi Kivity <avi@redhat.com> wrote:
> On 03/14/2012 12:39 PM, Stefan Hajnoczi wrote:
>> On Wed, Mar 14, 2012 at 10:05 AM, Avi Kivity <avi@redhat.com> wrote:
>> > On 03/14/2012 11:59 AM, Stefan Hajnoczi wrote:
>> >> On Wed, Mar 14, 2012 at 9:22 AM, Avi Kivity <avi@redhat.com> wrote:
>> >> > On 03/13/2012 12:42 PM, Amos Kong wrote:
>> >> >> Boot up guest with 232 virtio-blk disk, qemu will abort for fail to
>> >> >> allocate ioeventfd. This patchset changes kvm_has_many_ioeventfds(),
>> >> >> and check if available ioeventfd exists. If not, virtio-pci will
>> >> >> fallback to userspace, and don't use ioeventfd for io notification.
>> >> >
>> >> > How about an alternative way of solving this, within the memory core:
>> >> > trap those writes in qemu and write to the ioeventfd yourself. This way
>> >> > ioeventfds work even without kvm:
>> >> >
>> >> >
>> >> > core: create eventfd
>> >> > core: install handler for memory address that writes to ioeventfd
>> >> > kvm (optional): install kernel handler for ioeventfd
>> >> >
>> >> > even if the third step fails, the ioeventfd still works, it's just slower.
>> >>
>> >> That approach will penalize guests with large numbers of disks - they
>> >> see an extra switch to vcpu thread instead of kvm.ko -> iothread.
>> >
>> > It's only a failure path. The normal path is expected to have a kvm
>> > ioeventfd installed.
>>
>> It's the normal path when you attach >232 virtio-blk devices to a
>> guest (or 300 in the future).
>
> Well, there's nothing we can do about it.
>
> We'll increase the limit of course, but old kernels will remain out
> there. The right fix is virtio-scsi anyway.
>
>> >> It
>> >> seems okay provided we can solve the limit in the kernel once and for
>> >> all by introducing a more dynamic data structure for in-kernel
>> >> devices. That way future kernels will never hit an arbitrary limit
>> >> below their file descriptor rlimit.
>> >>
>> >> Is there some reason why kvm.ko must use a fixed size array? Would it
>> >> be possible to use a tree (maybe with a cache for recent lookups)?
>> >
>> > It does use bsearch today IIRC. We'll expand the limit, but there must
>> > be a limit, and qemu must be prepared to deal with it.
>>
>> Shouldn't the limit be the file descriptor rlimit? If userspace
>> cannot create more eventfds then it cannot set up more ioeventfds.
>
> You can use the same eventfd for multiple ioeventfds. If you mean to
> slave kvm's ioeventfd limit to the number of files the process can have,
> that's a good idea. Surely an ioeventfd occupies less resources than an
> open file.
Yes.
Ultimately I guess you're right in that we still need to have an error
path and virtio-scsi will reduce the pressure on I/O eventfds for
storage.
Stefan
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-14 11:46 ` Stefan Hajnoczi
@ 2012-03-16 8:59 ` Amos Kong
2012-03-19 8:21 ` Stefan Hajnoczi
2012-03-19 10:11 ` Avi Kivity
0 siblings, 2 replies; 24+ messages in thread
From: Amos Kong @ 2012-03-16 8:59 UTC (permalink / raw)
To: Stefan Hajnoczi; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, Avi Kivity
On 14/03/12 19:46, Stefan Hajnoczi wrote:
> On Wed, Mar 14, 2012 at 10:46 AM, Avi Kivity <avi@redhat.com> wrote:
>> On 03/14/2012 12:39 PM, Stefan Hajnoczi wrote:
>>> On Wed, Mar 14, 2012 at 10:05 AM, Avi Kivity <avi@redhat.com> wrote:
>>>> On 03/14/2012 11:59 AM, Stefan Hajnoczi wrote:
>>>>> On Wed, Mar 14, 2012 at 9:22 AM, Avi Kivity <avi@redhat.com> wrote:
>>>>>> On 03/13/2012 12:42 PM, Amos Kong wrote:
>>>>>>> Boot up guest with 232 virtio-blk disk, qemu will abort for fail to
>>>>>>> allocate ioeventfd. This patchset changes kvm_has_many_ioeventfds(),
>>>>>>> and check if available ioeventfd exists. If not, virtio-pci will
>>>>>>> fallback to userspace, and don't use ioeventfd for io notification.
>>>>>>
>>>>>> How about an alternative way of solving this, within the memory core:
>>>>>> trap those writes in qemu and write to the ioeventfd yourself. This way
>>>>>> ioeventfds work even without kvm:
>>>>>>
>>>>>>
>>>>>> core: create eventfd
>>>>>> core: install handler for memory address that writes to ioeventfd
>>>>>> kvm (optional): install kernel handler for ioeventfd
Can you give some detail about this? I'm not familiar with the Memory API.
btw, can we fix this problem by replacing abort() with an error note?
virtio-pci will automatically fall back to userspace.
diff --git a/kvm-all.c b/kvm-all.c
index 3c6b4f0..cf23dbf 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -749,7 +749,8 @@ static void kvm_mem_ioeventfd_add(MemoryRegionSection *section,
     r = kvm_set_ioeventfd_mmio_long(fd, section->offset_within_address_space,
                                     data, true);
     if (r < 0) {
-        abort();
+        fprintf(stderr, "%s: unable to map ioeventfd: %s.\nFallback to "
+                "userspace (slower).\n", __func__, strerror(-r));
     }
 }
@@ -775,7 +776,8 @@ static void kvm_io_ioeventfd_add(MemoryRegionSection *section,
     r = kvm_set_ioeventfd_pio_word(fd, section->offset_within_address_space,
                                    data, true);
     if (r < 0) {
-        abort();
+        fprintf(stderr, "%s: unable to map ioeventfd: %s.\nFallback to "
+                "userspace (slower).\n", __func__, strerror(-r));
     }
 }
>>>>>> even if the third step fails, the ioeventfd still works, it's just slower.
>>>>>
>>>>> That approach will penalize guests with large numbers of disks - they
>>>>> see an extra switch to vcpu thread instead of kvm.ko -> iothread.
>>>>
>>>> It's only a failure path. The normal path is expected to have a kvm
>>>> ioeventfd installed.
>>>
>>> It's the normal path when you attach >232 virtio-blk devices to a
>>> guest (or 300 in the future).
>>
>> Well, there's nothing we can do about it.
>>
>> We'll increase the limit of course, but old kernels will remain out
>> there. The right fix is virtio-scsi anyway.
>>
>>>>> It
>>>>> seems okay provided we can solve the limit in the kernel once and for
>>>>> all by introducing a more dynamic data structure for in-kernel
>>>>> devices. That way future kernels will never hit an arbitrary limit
>>>>> below their file descriptor rlimit.
>>>>>
>>>>> Is there some reason why kvm.ko must use a fixed size array? Would it
>>>>> be possible to use a tree (maybe with a cache for recent lookups)?
>>>>
>>>> It does use bsearch today IIRC. We'll expand the limit, but there must
>>>> be a limit, and qemu must be prepared to deal with it.
>>>
>>> Shouldn't the limit be the file descriptor rlimit? If userspace
>>> cannot create more eventfds then it cannot set up more ioeventfds.
>>
>> You can use the same eventfd for multiple ioeventfds. If you mean to
>> slave kvm's ioeventfd limit to the number of files the process can have,
>> that's a good idea. Surely an ioeventfd occupies less resources than an
>> open file.
>
> Yes.
>
> Ultimately I guess you're right in that we still need to have an error
> path and virtio-scsi will reduce the pressure on I/O eventfds for
> storage.
>
> Stefan
--
Amos.
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-16 8:59 ` Amos Kong
@ 2012-03-19 8:21 ` Stefan Hajnoczi
2012-03-19 10:11 ` Avi Kivity
1 sibling, 0 replies; 24+ messages in thread
From: Stefan Hajnoczi @ 2012-03-19 8:21 UTC (permalink / raw)
To: Amos Kong; +Cc: aliguori, stefanha, kvm, mtosatti, qemu-devel, Avi Kivity
On Fri, Mar 16, 2012 at 04:59:35PM +0800, Amos Kong wrote:
> On 14/03/12 19:46, Stefan Hajnoczi wrote:
> >On Wed, Mar 14, 2012 at 10:46 AM, Avi Kivity <avi@redhat.com> wrote:
> >>On 03/14/2012 12:39 PM, Stefan Hajnoczi wrote:
> >>>On Wed, Mar 14, 2012 at 10:05 AM, Avi Kivity <avi@redhat.com> wrote:
> >>>>On 03/14/2012 11:59 AM, Stefan Hajnoczi wrote:
> >>>>>On Wed, Mar 14, 2012 at 9:22 AM, Avi Kivity <avi@redhat.com> wrote:
> >>>>>>On 03/13/2012 12:42 PM, Amos Kong wrote:
> >>>>>>>Boot up guest with 232 virtio-blk disk, qemu will abort for fail to
> >>>>>>>allocate ioeventfd. This patchset changes kvm_has_many_ioeventfds(),
> >>>>>>>and check if available ioeventfd exists. If not, virtio-pci will
> >>>>>>>fallback to userspace, and don't use ioeventfd for io notification.
> >>>>>>
> >>>>>>How about an alternative way of solving this, within the memory core:
> >>>>>>trap those writes in qemu and write to the ioeventfd yourself. This way
> >>>>>>ioeventfds work even without kvm:
> >>>>>>
> >>>>>>
> >>>>>> core: create eventfd
> >>>>>> core: install handler for memory address that writes to ioeventfd
> >>>>>> kvm (optional): install kernel handler for ioeventfd
>
> Can you give some detail about this? I'm not familiar with the Memory API.
>
>
> btw, can we fix this problem by replacing abort() with an error note?
> virtio-pci will automatically fall back to userspace.
>
> diff --git a/kvm-all.c b/kvm-all.c
> index 3c6b4f0..cf23dbf 100644
> --- a/kvm-all.c
> +++ b/kvm-all.c
> @@ -749,7 +749,8 @@ static void
> kvm_mem_ioeventfd_add(MemoryRegionSection *section,
> r = kvm_set_ioeventfd_mmio_long(fd,
> section->offset_within_address_space,
> data, true);
> if (r < 0) {
> - abort();
> + fprintf(stderr, "%s: unable to map ioeventfd: %s.\nFallback to "
> + "userspace (slower).\n", __func__, strerror(-r));
The challenge is propagating the error code. If virtio-pci.c doesn't
know that ioeventfd has failed, then it's not possible to fall back to a
userspace handler.
I believe Avi's suggestion is to put the fallback code into the KVM
memory API implementation so that virtio-pci.c doesn't need to know that
ioeventfd failed at all.
Stefan
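(One shape the propagation could take, sketched against the
kvm_io_ioeventfd_add() quoted above. Making the callback return an error
instead of aborting is exactly the signature change that would have to ripple
through the listener API, so this is a sketch, not the actual interface:

/* Sketch: report -ENOSPC to the caller instead of aborting, so
 * virtio-pci can drop back to a userspace handler for this region. */
static int kvm_io_ioeventfd_add(MemoryRegionSection *section,
                                bool match_data, uint64_t data, int fd)
{
    int r = kvm_set_ioeventfd_pio_word(fd,
                section->offset_within_address_space, data, true);

    if (r == -ENOSPC) {
        return r;      /* no free slot: caller falls back to userspace */
    }
    if (r < 0) {
        abort();       /* anything else is still a hard bug */
    }
    return 0;
}
)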
* Re: [Qemu-devel] [PATCH 0/2] virtio-pci: fix abort when fail to allocate ioeventfd
2012-03-16 8:59 ` Amos Kong
2012-03-19 8:21 ` Stefan Hajnoczi
@ 2012-03-19 10:11 ` Avi Kivity
1 sibling, 0 replies; 24+ messages in thread
From: Avi Kivity @ 2012-03-19 10:11 UTC (permalink / raw)
To: Amos Kong; +Cc: aliguori, stefanha, kvm, Stefan Hajnoczi, mtosatti, qemu-devel
On 03/16/2012 10:59 AM, Amos Kong wrote:
>
> Can you give some detail about this? I'm not familiar with the Memory API.
Well, there's a huge amount of detail needed here. The idea is that
memory_region_add_eventfd() will always work, with or without kvm, and
even if kvm is enabled but we run out of ioeventfds.
One way to do this is to implement core_eventfd_add() in exec.c. This
is unlikely to be easy, however.
>
> btw, can we fix this problem by replacing abort() with an error note?
> virtio-pci will automatically fall back to userspace.
But other users will silently break; we need to audit all other users of
ioeventfd, for example ivshmem.
--
error compiling committee.c: too many arguments to function