* bad virtio disk performance
From: Lucas Nussbaum @ 2009-04-27 15:12 UTC
To: kvm
Hi,
I'm experiencing bad disk I/O performance using virtio disks.
I'm using Linux 2.6.29 (host & guest), kvm 84 userspace.
On the host, and in a non-virtio guest, I get ~120 MB/s when writing
with dd (the disks are fast RAID0 SAS disks).
In a guest with a virtio disk, I get at most ~32 MB/s.
The rest of the setup is the same. For reference, I'm running kvm -drive
file=/tmp/debian-amd64.img,if=virtio.
Is such performance expected? What should I check?
Thank you,
--
| Lucas Nussbaum
| lucas@lucas-nussbaum.net http://www.lucas-nussbaum.net/ |
| jabber: lucas@nussbaum.fr GPG: 1024D/023B3F4F |
* Re: bad virtio disk performance
From: john cooper @ 2009-04-27 17:36 UTC
To: Lucas Nussbaum; +Cc: kvm, john.cooper
Lucas Nussbaum wrote:
> Hi,
>
> I'm experiencing bad disk I/O performance using virtio disks.
>
> I'm using Linux 2.6.29 (host & guest), kvm 84 userspace.
> On the host, and in a non-virtio guest, I get ~120 MB/s when writing
> with dd (the disks are fast RAID0 SAS disks).
Could you provide details on the exact type and size of the I/O load
you were creating with dd? The full qemu command-line invocation in
both cases would also be useful.
> In a guest with a virtio disk, I get at most ~32 MB/s.
Which non-virtio interface was used for the
comparison?
> The rest of the setup is the same. For reference, I'm running kvm -drive
> file=/tmp/debian-amd64.img,if=virtio.
>
> Is such performance expected? What should I check?
Not expected; something is awry.
blktrace(8) run on the host will shed some light
on the type of i/o requests issued by qemu in both
cases.
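Something along these lines should work -- sda here is just a
stand-in for whatever host device actually backs the image file:

# btrace /dev/sda > trace.txt
(run the dd test in the guest, then interrupt btrace)

Or, to keep the raw data around for later analysis:

# blktrace -d /dev/sda -o run1
# blkparse -i run1 | less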
-john
--
john.cooper@third-harmonic.com
* Re: bad virtio disk performance
From: Lucas Nussbaum @ 2009-04-27 18:28 UTC
To: john cooper; +Cc: kvm, john.cooper
On 27/04/09 at 13:36 -0400, john cooper wrote:
> Lucas Nussbaum wrote:
>> Hi,
>>
>> I'm experiencing bad disk I/O performance using virtio disks.
>>
>> I'm using Linux 2.6.29 (host & guest), kvm 84 userspace.
>> On the host, and in a non-virtio guest, I get ~120 MB/s when writing
>> with dd (the disks are fast RAID0 SAS disks).
>
> Could you provide details on the exact type and size of the I/O
> load you were creating with dd?
I tried various block sizes. An example invocation:
dd if=/dev/zero of=foo bs=4096 count=262144 conv=fsync
(126 MB/s without virtio, 32 MB/s with virtio.)
> The full qemu command-line invocation in both cases would also be
> useful.
non-virtio:
kvm -drive file=/tmp/debian-amd64.img,if=scsi,cache=writethrough -net
nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
/boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
root=/dev/sda1 ro console=tty0 console=ttyS0,9600,8n1
virtio:
kvm -drive file=/tmp/debian-amd64.img,if=virtio,cache=writethrough -net
nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
/boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
root=/dev/vda1 ro console=tty0 console=ttyS0,9600,8n1
>> In a guest with a virtio disk, I get at most ~32 MB/s.
>
> Which non-virtio interface was used for the
> comparison?
if=ide
I got the same performance with if=scsi
>> The rest of the setup is the same. For reference, I'm running kvm -drive
>> file=/tmp/debian-amd64.img,if=virtio.
>>
>> Is such performance expected? What should I check?
>
> Not expected; something is awry.
>
> blktrace(8) run on the host will shed some light
> on the type of i/o requests issued by qemu in both
> cases.
Ah, I found something interesting. btrace summary after writing a 1 GB
file:
--- without virtio:
Total (8,5):
 Reads Queued:           0,        0KiB   Writes Queued:      272259,  1089MiB
 Read Dispatches:        0,        0KiB   Write Dispatches:     9769,  1089MiB
 Reads Requeued:         0                Writes Requeued:         0
 Reads Completed:        0,        0KiB   Writes Completed:     9769,  1089MiB
 Read Merges:            0,        0KiB   Write Merges:       262490,  1049MiB
 IO unplugs:         45973                Timer unplugs:          30
--- with virtio:
Total (8,5):
 Reads Queued:           1,        4KiB   Writes Queued:      430734,  1776MiB
 Read Dispatches:        1,        4KiB   Write Dispatches:   196143,  1776MiB
 Reads Requeued:         0                Writes Requeued:         0
 Reads Completed:        1,        4KiB   Writes Completed:   196143,  1776MiB
 Read Merges:            0,        0KiB   Write Merges:       234578, 938488KiB
 IO unplugs:        301311                Timer unplugs:          25
(I re-ran the test twice, got similar results)
So, apparently, with virtio, a lot more data is being written to
disk. The underlying filesystem is ext3, mounted at /tmp; it only
contains the VM image file. Another difference is that, with virtio,
the I/O was spread equally over all 4 CPUs, whereas without virtio,
CPU0 and CPU1 did all the work.
In the virtio log, I also see a (null) process doing a lot of writes.
I uploaded the logs to http://blop.info/bazaar/virtio/, if you want to
take a look.
Thank you,
- Lucas
* Re: bad virtio disk performance
From: john cooper @ 2009-04-27 23:40 UTC
To: Lucas Nussbaum; +Cc: john cooper, kvm, john.cooper
Lucas Nussbaum wrote:
> On 27/04/09 at 13:36 -0400, john cooper wrote:
>> Lucas Nussbaum wrote:
>
> non-virtio:
> kvm -drive file=/tmp/debian-amd64.img,if=scsi,cache=writethrough -net
> nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
> /boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
> root=/dev/sda1 ro console=tty0 console=ttyS0,9600,8n1
>
> virtio:
> kvm -drive file=/tmp/debian-amd64.img,if=virtio,cache=writethrough -net
> nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
> /boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
> root=/dev/vda1 ro console=tty0 console=ttyS0,9600,8n1
>
One suggestion would be to use a separate drive
for the virtio vs. non-virtio comparison to avoid
a Heisenberg effect.
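Concretely -- assuming a second disk is available, say mounted at
/mnt/disk2 (both the path and the device below are illustrative) --
put the image on the second disk and trace that device, so the trace
output isn't written to the disk under test:

kvm -drive file=/mnt/disk2/debian-amd64.img,if=virtio,cache=writethrough ...
# btrace /dev/sdb > /tmp/trace.txt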
>
> So, apparently, with virtio, a lot more data is being written to
> disk. The underlying filesystem is ext3, mounted at /tmp; it only
> contains the VM image file. Another difference is that, with virtio,
> the I/O was spread equally over all 4 CPUs, whereas without virtio,
> CPU0 and CPU1 did all the work.
> In the virtio log, I also see a (null) process doing a lot of writes.
Can't say what is causing that -- only took a look
at the short logs. However the isolation suggested
above may help factor that out if you need to
pursue this path.
>
> I uploaded the logs to http://blop.info/bazaar/virtio/, if you want to
> take a look.
In the virtio case i/o is being issued from multiple
threads. You could be hitting the cfq close-cooperator
bug we saw as late as 2.6.28.
A quick test to rule this in/out would be to change
the block scheduler to other than cfq for the host
device where the backing image resides -- in your
case the host device containing /tmp/debian-amd64.img.
E.g. for an image on /dev/sda1 (the scheduler knob is per-disk, hence sda):
# cat /sys/block/sda/queue/scheduler
noop anticipatory deadline [cfq]
# echo deadline > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler
noop anticipatory [deadline] cfq
And re-run your test to see if this brings
virtio vs. non-virtio closer to the expected
performance.
-john
--
john.cooper@redhat.com
* Re: bad virtio disk performance
From: Lucas Nussbaum @ 2009-04-28 10:56 UTC
To: john cooper; +Cc: john cooper, kvm
On 27/04/09 at 19:40 -0400, john cooper wrote:
> Lucas Nussbaum wrote:
>> On 27/04/09 at 13:36 -0400, john cooper wrote:
>>> Lucas Nussbaum wrote:
>>
>> non-virtio:
>> kvm -drive file=/tmp/debian-amd64.img,if=scsi,cache=writethrough -net
>> nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
>> /boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
>> root=/dev/sda1 ro console=tty0 console=ttyS0,9600,8n1
>>
>> virtio:
>> kvm -drive file=/tmp/debian-amd64.img,if=virtio,cache=writethrough -net
>> nic,macaddr=00:16:3e:5a:28:1,model=e1000 -net tap -nographic -kernel
>> /boot/vmlinuz-2.6.29 -initrd /boot/initrd.img-2.6.29 -append
>> root=/dev/vda1 ro console=tty0 console=ttyS0,9600,8n1
>>
> One suggestion would be to use a separate drive
> for the virtio vs. non-virtio comparison to avoid
> a Heisenberg effect.
I don't have another drive available, but tried to output the trace over
the network. Results were the same.
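(Roughly like this, so the trace data never touches the disk under
test -- the remote host name and path are illustrative:

# btrace /dev/sda | ssh otherhost 'cat > /tmp/trace.txt'
)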
>> So, apparently, with virtio, a lot more data is being written to
>> disk. The underlying filesystem is ext3, mounted at /tmp; it only
>> contains the VM image file. Another difference is that, with virtio,
>> the I/O was spread equally over all 4 CPUs, whereas without virtio,
>> CPU0 and CPU1 did all the work.
>> In the virtio log, I also see a (null) process doing a lot of writes.
> Can't say what is causing that -- only took a look
> at the short logs. However the isolation suggested
> above may help factor that out if you need to
> pursue this path.
>>
>> I uploaded the logs to http://blop.info/bazaar/virtio/, if you want to
>> take a look.
> In the virtio case i/o is being issued from multiple
> threads. You could be hitting the cfq close-cooperator
> bug we saw as late as 2.6.28.
>
> A quick test to rule this in/out would be to change
> the block scheduler to other than cfq for the host
> device where the backing image resides -- in your
> case the host device containing /tmp/debian-amd64.img.
>
> E.g. for an image on /dev/sda1 (the scheduler knob is per-disk, hence sda):
>
> # cat /sys/block/sda/queue/scheduler
> noop anticipatory deadline [cfq]
> # echo deadline > /sys/block/sda/queue/scheduler
> # cat /sys/block/sda/queue/scheduler
> noop anticipatory [deadline] cfq
I tried that (also with noop and anticipatory), but it didn't result
in any improvement.
I then upgraded to kvm-85 (both the host kernel modules and the
userspace), and re-ran the tests. Performance is better (about 85 MB/s),
but still very far from the non-virtio case.
Any other suggestions?
--
| Lucas Nussbaum
| lucas@lucas-nussbaum.net http://www.lucas-nussbaum.net/ |
| jabber: lucas@nussbaum.fr GPG: 1024D/023B3F4F |
* Re: bad virtio disk performance
From: Lucas Nussbaum @ 2009-04-28 11:48 UTC
To: john cooper; +Cc: john cooper, kvm
On 28/04/09 at 12:56 +0200, Lucas Nussbaum wrote:
> I then upgraded to kvm-85 (both the host kernel modules and the
> userspace), and re-ran the tests. Performance is better (about 85 MB/s),
> but still very far from the non-virtio case.
I forgot to mention that the strangest result I got was the total amount
of write blocks queued (as measured by blktrace). I was writing a 1 GB
file to disk, which resulted in:
- 1 GB of write blocks queued without virtio
- ~1.7 GB of write blocks queued with virtio on kvm 84
- ~1.4 GB of write blocks queued with virtio on kvm 85
I don't understand why kvm with virtio writes "more blocks than
necessary", but that could explain the performance difference.
--
| Lucas Nussbaum
| lucas@lucas-nussbaum.net http://www.lucas-nussbaum.net/ |
| jabber: lucas@nussbaum.fr GPG: 1024D/023B3F4F |
* Re: bad virtio disk performance
From: Avi Kivity @ 2009-04-28 11:55 UTC
To: Lucas Nussbaum; +Cc: john cooper, john cooper, kvm
Lucas Nussbaum wrote:
> On 28/04/09 at 12:56 +0200, Lucas Nussbaum wrote:
>
>> I then upgraded to kvm-85 (both the host kernel modules and the
>> userspace), and re-ran the tests. Performance is better (about 85 MB/s),
>> but still very far from the non-virtio case.
>>
>
> I forgot to mention that the strangest result I got was the total amount
> of write blocks queued (as measured by blktrace). I was writing a 1 GB
> file to disk, which resulted in:
>
> - 1 GB of write blocks queued without virtio
> - ~1.7 GB of write blocks queued with virtio on kvm 84
> - ~1.4 GB of write blocks queued with virtio on kvm 85
>
> I don't understand why kvm with virtio writes "more blocks than
> necessary", but that could explain the performance difference.
>
Are these numbers repeatable?
Try increasing the virtio queue depth. See the call to
virtio_add_queue() in qemu/hw/virtio-blk.c.
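From memory the call looks something like this (verify against your
tree; the 128 below is the ring size to experiment with):

/* qemu/hw/virtio-blk.c, in virtio_blk_init() -- roughly: */
s->vq = virtio_add_queue(&s->vdev, 128, virtio_blk_handle_output);
/* try bumping 128 to e.g. 256 or 512 and rebuilding */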
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
* Re: bad virtio disk performance
From: Lucas Nussbaum @ 2009-04-28 14:35 UTC
To: Avi Kivity; +Cc: john cooper, john cooper, kvm
On 28/04/09 at 14:55 +0300, Avi Kivity wrote:
> Lucas Nussbaum wrote:
>> On 28/04/09 at 12:56 +0200, Lucas Nussbaum wrote:
>>
>>> I then upgraded to kvm-85 (both the host kernel modules and the
>>> userspace), and re-ran the tests. Performance is better (about 85 MB/s),
>>> but still very far from the non-virtio case.
>>>
>>
>> I forgot to mention that the strangest result I got was the total amount
>> of write blocks queued (as measured by blktrace). I was writing a 1 GB
>> file to disk, which resulted in:
>>
>> - 1 GB of write blocks queued without virtio
>> - ~1.7 GB of write blocks queued with virtio on kvm 84
>> - ~1.4 GB of write blocks queued with virtio on kvm 85
>>
>> I don't understand why kvm with virtio writes "more blocks than
>> necessary", but that could explain the performance difference.
>
> Are these numbers repeatable?
The fact that more data than necessary is written to disk with virtio is
reproducible. The exact amount of additional data varies between runs.
> Try increasing the virtio queue depth. See the call to
> virtio_add_queue() in qemu/hw/virtio-blk.c.
It doesn't seem to change the performance I get (but since the
performance itself varies a lot, it's difficult to tell).
Some example data points, writing a 500 MiB file:
1st run, with virtio queue length = 512
- total size of write req queued: 874568 KiB
- 55 MB/s
2nd run, with virtio queue length = 128
- total size of write req queued: 694328 KiB
- 86 MB/s
--
| Lucas Nussbaum
| lucas@lucas-nussbaum.net http://www.lucas-nussbaum.net/ |
| jabber: lucas@nussbaum.fr GPG: 1024D/023B3F4F |