* Virtual drives performance
From: TooMeeK @ 2011-09-06 22:25 UTC (permalink / raw)
To: KVM list
Hello,
I'm trying to set up a fast virtualized Samba file server. I'm using
Debian Squeeze 64-bit as the hypervisor.
First, I created mirrored storage on the hypervisor from a single 600 GB
partition (yes, that's correct - I currently have only one drive), details:
sudo mdadm --detail /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Thu Jul 28 20:07:00 2011
Raid Level : raid1
Array Size : 664187352 (633.42 GiB 680.13 GB)
Used Dev Size : 664187352 (633.42 GiB 680.13 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Thu Jul 28 22:07:10 2011
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Name : Server:3 (local to host Server)
UUID : 87184170:2d9102b1:ca16a5d7:1f23fe2e
Events : 3276
Number Major Minor RaidDevice State
0 8 23 0 active sync /dev/sdb7
1 0 0 1 removed
The partition type is Linux RAID autodetect, and this drive can do
80 MB/s sequential write and 100 MB/s sequential read.
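For what it's worth, a rough sketch of how such sequential figures can be sanity-checked with dd. This version writes a scratch file under /tmp so it is safe to run anywhere; the path and sizes are arbitrary, and on the real device you would read from /dev/md3 with iflag=direct instead:

```shell
# Sequential write: dd reports the achieved throughput on its last line.
# conv=fdatasync forces the data to disk before dd exits, so the figure
# is not inflated by the page cache.
dd if=/dev/zero of=/tmp/seqtest.bin bs=64k count=1024 conv=fdatasync

# Sequential read of the same file; against a raw device you would add
# iflag=direct to bypass the page cache (e.g. dd if=/dev/md3 ... iflag=direct).
dd if=/tmp/seqtest.bin of=/dev/null bs=64k
```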
QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5)
Next, I've tried the following combinations with virt-manager 0.8.4 (from
the XML of the VM):
1. On a Debian VM with virtio drivers for both storage and NIC:
<disk type='block' device='disk'>
<source dev='/dev/md3'/>
<target dev='vdb' bus='virtio'/>
partition type used in guest: EXT4
result: poor performance, 9-10 MB/s sequential copy via SMB
2. On a Debian VM with virtio drivers:
<disk type='block' device='disk' cache='writeback'>
<source dev='/dev/md3'/>
<target dev='vdb' bus='virtio'/>
partition type used: EXT4
result: poor performance, 10-15 MB/s sequential copy via SMB
3. Direct-attached partition to a FreeBSD VM without virtio support (e1000
NIC and SCSI disk):
<disk type='block' device='disk' cache='writeback'>
<source dev='/dev/md3'/>
<target dev='sdb' bus='scsi'/>
partition type used: ZFS
result: poor performance, 20-25 MB/s sequential copy via SMB
4. Direct-attached whole physical disk to a FreeBSD VM (/dev/sdc, 2.5")
partition type used in VM: ZFS
result: good performance, 60 MB/s sequential copy via SMB
I find that it's not possible to directly attach a PARTITION from the host
to a VM using virt-manager; this only works for a whole device (like /dev/sdc).
Anyway, any storage advice for performance?
* Re: Virtual drives performance
From: martin f krafft @ 2011-09-07 5:50 UTC (permalink / raw)
To: TooMeeK; +Cc: KVM list
also sprach TooMeeK <toomeek_85@o2.pl> [2011.09.07.0025 +0200]:
> Partition type is Linux RAID autodetect
This is not really necessary, presuming you are doing what everyone does
nowadays and assembling the RAID from the initrd or the init scripts.
> I find that way it's not possible to direct attach PARTITION from
> host to VM using virt-manager, this only works for whole device (like
> /dev/sdc).
You could/should be using LVM instead of partitions, IMHO.
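For illustration, a sketch of what that could look like - after something like `pvcreate /dev/md3; vgcreate vg0 /dev/md3; lvcreate -L 100G -n vm_samba vg0` (the volume group and LV names here are invented), the guest can reference the LV directly:

```xml
<disk type='block' device='disk'>
  <source dev='/dev/vg0/vm_samba'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```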
> Anyway, any storage advices for performance?
Unfortunately, no. I have seen similar things happen. I think the
problem is Linux I/O. Your drive may be able to handle high,
sustained read/write rates, but the Linux I/O scheduler, in
combination with random-access reads/writes, makes those raw figures
unattainable in practice.
The only ways I have found to decently increase drive performance:
- 10k and 15k drives
- an external NAS or fibre-channel SAN, which does not use Linux
Yes, it's sad, and I would love to be proven wrong!
--
martin | http://madduck.net/ | http://two.sentenc.es/
"if java had true garbage collection,
most programs would delete themselves upon execution."
-- robert sewell
spamtraps: madduck.bogus@madduck.net
* Re: Virtual drives performance
From: Stefan Hajnoczi @ 2011-09-07 7:52 UTC (permalink / raw)
To: TooMeeK; +Cc: KVM list
On Tue, Sep 6, 2011 at 11:25 PM, TooMeeK <toomeek_85@o2.pl> wrote:
> First, I created mirrored storage in hypervisor from one 600-gig partition
> (yes, that's correct - I have only one drive currently), details:
> sudo mdadm --detail /dev/md3
> /dev/md3:
> Version : 1.2
> Creation Time : Thu Jul 28 20:07:00 2011
> Raid Level : raid1
> Array Size : 664187352 (633.42 GiB 680.13 GB)
> Used Dev Size : 664187352 (633.42 GiB 680.13 GB)
> Raid Devices : 2
> Total Devices : 1
> Persistence : Superblock is persistent
>
> Update Time : Thu Jul 28 22:07:10 2011
> State : clean, degraded
> Active Devices : 1
> Working Devices : 1
> Failed Devices : 0
> Spare Devices : 0
>
> Name : Server:3 (local to host Server)
> UUID : 87184170:2d9102b1:ca16a5d7:1f23fe2e
> Events : 3276
>
> Number Major Minor RaidDevice State
> 0 8 23 0 active sync /dev/sdb7
> 1 0 0 1 removed
>
> Partition type is Linux RAID autodetect and this drive can do 80MB/s write
> and 100 MB/s read seq.
How did you measure those figures?
To double-check sequential read throughput on the host:
# dd if=/dev/md3 of=/dev/null bs=64k count=16384 iflag=direct
The SMB results don't help narrow down a disk I/O problem. To collect
comparable sequential read throughput inside the guest:
# dd if=/dev/vda of=/dev/null bs=64k count=16384 iflag=direct
> QEMU PC emulator version 0.12.5 (qemu-kvm-0.12.5)
Try qemu-kvm 0.15.
> Next, I've tried following combinations with virt-manager 0.8.4 (from XML of
> VM):
> 1.on Debian VM with virtio drivers for both storage and NIC:
> <disk type='block' device='disk'>
cache='none'
> <source dev='/dev/md3'/>
> <target dev='vdb' bus='virtio'/>
You can enable Linux AIO, which typically performs better than the
default io="threads":
<driver name="qemu" type="raw" io="native"/>
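Putting the two suggestions together, the disk stanza might look like this (the device path is from the original report; the exact attribute set is a sketch):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/md3'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```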
Stefan
* Re: Virtual drives performance
From: Asdo @ 2011-09-07 9:46 UTC (permalink / raw)
To: 'KVM-ML (kvm@vger.kernel.org)'
On 09/07/11 00:25, TooMeeK wrote:
>
> Next, I've tried following combinations with virt-manager 0.8.4 (from
> XML of VM):
> 1.on Debian VM with virtio drivers for both storage and NIC:
> <disk type='block' device='disk'>
> <source dev='/dev/md3'/>
> <target dev='vdb' bus='virtio'/>
> partition type used in guest: EXT4
> result: poor performance, 9-10MB/s sequentional copy via SMB
> 2.on Debian VM with virtio drivers:
> <disk type='block' device='disk' cache='writeback'>
> <source dev='/dev/md3'/>
> <target dev='vdb' bus='virtio'/>
> partition type used: EXT4
> result: poor performance, 10-15MB/s sequentional copy via SMB
> 3.Direct attached partition to FreeBSD VM without virtio support
> (e1000 NIC and SCSI disk):
> <disk type='block' device='disk' cache='writeback'>
Shouldn't you have used cache=none when using virtio?
http://www.linux-kvm.org/page/Tuning_KVM
How is the performance with cache=none?
Also note that when you don't specify it, I think the default is not
none - maybe it's writethrough, I don't remember.
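As an aside, libvirt expects the cache mode as an attribute of the <driver> element rather than of <disk> itself, so a cache= attribute placed directly on <disk> may simply be ignored; a minimal sketch of the expected form:

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/md3'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```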
* Re: Virtual drives performance
From: Virtbie @ 2011-09-07 14:38 UTC (permalink / raw)
To: 'KVM-ML (kvm@vger.kernel.org)'
On 09/07/11 00:25, TooMeeK wrote:
>
> Next, I've tried following combinations with virt-manager 0.8.4 (from
> XML of VM):
> 1.on Debian VM with virtio drivers for both storage and NIC:
> <disk type='block' device='disk'>
> <source dev='/dev/md3'/>
> <target dev='vdb' bus='virtio'/>
> partition type used in guest: EXT4
> result: poor performance, 9-10MB/s sequentional copy via SMB
> 2.on Debian VM with virtio drivers:
> <disk type='block' device='disk' cache='writeback'>
> <source dev='/dev/md3'/>
> <target dev='vdb' bus='virtio'/>
> partition type used: EXT4
> result: poor performance, 10-15MB/s sequentional copy via SMB
> 3.Direct attached partition to FreeBSD VM without virtio support
> (e1000 NIC and SCSI disk):
> <disk type='block' device='disk' cache='writeback'>
Shouldn't you have used cache=none when using virtio?
http://www.linux-kvm.org/page/Tuning_KVM
How is the performance with cache=none?
Also note that when you don't specify it, I think the default is not
none - maybe it's writethrough, I don't remember.