* kvm-83 write performance raw
@ 2009-03-02 20:11 Malinka Rellikwodahs
2009-03-02 20:35 ` Anthony Liguori
2009-03-02 20:53 ` Mark van Walraven
0 siblings, 2 replies; 16+ messages in thread
From: Malinka Rellikwodahs @ 2009-03-02 20:11 UTC (permalink / raw)
To: Mailing Lists
Host: 2.6.27-gentoo-r8 AMD Opteron 2378
Guest: Windows XP SP2
When running with a raw disk image, either as a file or on an LVM VG, I'm
getting very low write performance (5-10 MB/s). However, when using a
qcow2-format disk image the write speed is much better (~30 MB/s), which
is consistent with a very similar setup running kvm-68. Unfortunately,
when running the test with qcow2 the system becomes unresponsive for a
brief time during the test.
kvm -localtime -m 1024 -drive file=<diskimage>,boot=on
where <diskimage> is either the qcow2-format image or the path to the LV
The host is running raid5 and drbd (drive replication software), but the
host itself is performing well, and avoiding the drbd layer in the guest
does not improve performance, while running on qcow2 does.
Any thoughts/suggestions of what could be wrong or what to do to fix this?
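For anyone reproducing this, a minimal sketch of how the two image types being compared might be prepared. The paths and the 10G size are made up for illustration, and qemu-img here is assumed to be whatever ships with the kvm tree in use:

```shell
# Raw image: a plain (sparse) file; every guest sector maps 1:1 into the file.
dd if=/dev/zero of=winxp.raw bs=1 count=0 seek=10G 2>/dev/null

# qcow2 image: space is allocated on demand, so writes hit the host
# filesystem in a different pattern than raw does.
# qemu-img create -f qcow2 winxp.qcow2 10G

# Both boot the same way, e.g.:
#   kvm -localtime -m 1024 -drive file=winxp.raw,boot=on
ls -l winxp.raw
```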
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 20:11 kvm-83 write performance raw Malinka Rellikwodahs
@ 2009-03-02 20:35 ` Anthony Liguori
2009-03-02 20:37 ` Malinka Rellikwodahs
2009-03-02 20:53 ` Mark van Walraven
1 sibling, 1 reply; 16+ messages in thread
From: Anthony Liguori @ 2009-03-02 20:35 UTC (permalink / raw)
To: AelMalinka; +Cc: Mailing Lists
Malinka Rellikwodahs wrote:
> Host: 2.6.27-gentoo-r8 AMD Opteron 2378
> Guest: Windows XP SP2
>
> when running with a raw disk image as a file or a raw disk image on an
> lvm vg, I'm getting very low performance on write (5-10 MB/s) however
> when using qcow2 format disk image the write speed is much better
> (~30MB/s), which is consistent with a very similar setup running
> kvm-68. Unfortunately when running the test with qcow2 the system
> becomes unresponsive for a brief time during the test.
>
> kvm -localtime -m 1024 -drive file=<diskimage>,boot=on
>
> where diskimage is either the qcow2 format image, or the path to the lv
>
What version of kvm is this? Is it kvm-68? You'll have better luck
with something newer than that.
Regards,
Anthony Liguori
> The host is running raid5 and drbd (drive replication software),
> however performance on the host is performing well and avoiding the
> drbd layer in the guest does not improve performance, but running on
> qcow2 does.
>
> Any thoughts/suggestions of what could be wrong or what to do to fix this?
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 20:35 ` Anthony Liguori
@ 2009-03-02 20:37 ` Malinka Rellikwodahs
2009-03-02 20:39 ` Malinka Rellikwodahs
0 siblings, 1 reply; 16+ messages in thread
From: Malinka Rellikwodahs @ 2009-03-02 20:37 UTC (permalink / raw)
To: Anthony Liguori; +Cc: Mailing Lists
kvm-83
On Mon, Mar 2, 2009 at 15:35, Anthony Liguori <anthony@codemonkey.ws> wrote:
> Malinka Rellikwodahs wrote:
>>
>> Host: 2.6.27-gentoo-r8 AMD Opteron 2378
>> Guest: Windows XP SP2
>>
>> when running with a raw disk image as a file or a raw disk image on an
>> lvm vg, I'm getting very low performance on write (5-10 MB/s) however
>> when using qcow2 format disk image the write speed is much better
>> (~30MB/s), which is consistent with a very similar setup running
>> kvm-68. Unfortunately when running the test with qcow2 the system
>> becomes unresponsive for a brief time during the test.
>>
>> kvm -localtime -m 1024 -drive file=<diskimage>,boot=on
>>
>> where diskimage is either the qcow2 format image, or the path to the lv
>>
>
> What version of kvm is this? Is it kvm-68? You'll have better luck with
> something newer than that.
>
> Regards,
>
> Anthony Liguori
>
>> The host is running raid5 and drbd (drive replication software),
>> however performance on the host is performing well and avoiding the
>> drbd layer in the guest does not improve performance, but running on
>> qcow2 does.
>>
>> Any thoughts/suggestions of what could be wrong or what to do to fix this?
>>
>
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 20:37 ` Malinka Rellikwodahs
@ 2009-03-02 20:39 ` Malinka Rellikwodahs
2009-03-02 21:22 ` Anthony Liguori
0 siblings, 1 reply; 16+ messages in thread
From: Malinka Rellikwodahs @ 2009-03-02 20:39 UTC (permalink / raw)
To: Anthony Liguori; +Cc: Mailing Lists
> On Mon, Mar 2, 2009 at 15:35, Anthony Liguori <anthony@codemonkey.ws> wrote:
>> Malinka Rellikwodahs wrote:
>>>
>>> Host: 2.6.27-gentoo-r8 AMD Opteron 2378
>>> Guest: Windows XP SP2
>>>
>>> when running with a raw disk image as a file or a raw disk image on an
>>> lvm vg, I'm getting very low performance on write (5-10 MB/s) however
>>> when using qcow2 format disk image the write speed is much better
>>> (~30MB/s), which is consistent with a very similar setup running
>>> kvm-68. Unfortunately when running the test with qcow2 the system
>>> becomes unresponsive for a brief time during the test.
>>>
>>> kvm -localtime -m 1024 -drive file=<diskimage>,boot=on
>>>
>>> where diskimage is either the qcow2 format image, or the path to the lv
>>>
>>
>> What version of kvm is this? Is it kvm-68? You'll have better luck with
>> something newer than that.
kvm-83 is the one with the problem; kvm-68 is working correctly.
>>
>> Regards,
>>
>> Anthony Liguori
>>
>>> The host is running raid5 and drbd (drive replication software),
>>> however performance on the host is performing well and avoiding the
>>> drbd layer in the guest does not improve performance, but running on
>>> qcow2 does.
>>>
>>> Any thoughts/suggestions of what could be wrong or what to do to fix this?
>>>
>>
>>
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 20:11 kvm-83 write performance raw Malinka Rellikwodahs
2009-03-02 20:35 ` Anthony Liguori
@ 2009-03-02 20:53 ` Mark van Walraven
2009-03-02 21:00 ` Malinka Rellikwodahs
2009-03-04 22:28 ` Paolo Pedaletti
1 sibling, 2 replies; 16+ messages in thread
From: Mark van Walraven @ 2009-03-02 20:53 UTC (permalink / raw)
To: Malinka Rellikwodahs; +Cc: kvm
On Mon, Mar 02, 2009 at 03:11:59PM -0500, Malinka Rellikwodahs wrote:
> when running with a raw disk image as a file or a raw disk image on an
> lvm vg, I'm getting very low performance on write (5-10 MB/s) however
> when using qcow2 format disk image the write speed is much better
> (~30MB/s), which is consistent with a very similar setup running
> kvm-68. Unfortunately when running the test with qcow2 the system
> becomes unresponsive for a brief time during the test.
> The host is running raid5 and drbd (drive replication software),
> however performance on the host is performing well and avoiding the
> drbd layer in the guest does not improve performance, but running on
> qcow2 does.
>
> Any thoughts/suggestions of what could be wrong or what to do to fix this?
RAID1 has *much* better write performance. With striping RAIDs, alignment
is important. RAID controllers sometimes introduce hidden alignment
offsets. Excessive read-ahead is a waste of time with a lot of small
random I/O, which is what I see mostly with guests on flat disk images.
With LVM, it pays to make sure the LVs are aligned to the disk. I prefer
boundaries at multiples of at least 64 sectors, which makes the LVM
overhead virtually disappear. I align the guest filesystems too, when
I can.
I don't think DRBD has an effect on alignment, but you might look at
keeping the metadata on another drive.
Block - rather than file - images are much faster.
Hope this helps,
Mark.
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 20:53 ` Mark van Walraven
@ 2009-03-02 21:00 ` Malinka Rellikwodahs
2009-03-03 15:13 ` Nikola Ciprich
2009-03-04 22:28 ` Paolo Pedaletti
1 sibling, 1 reply; 16+ messages in thread
From: Malinka Rellikwodahs @ 2009-03-02 21:00 UTC (permalink / raw)
To: Mark van Walraven; +Cc: kvm
On Mon, Mar 2, 2009 at 15:53, Mark van Walraven <markv@netvalue.net.nz> wrote:
> On Mon, Mar 02, 2009 at 03:11:59PM -0500, Malinka Rellikwodahs wrote:
>> when running with a raw disk image as a file or a raw disk image on an
>> lvm vg, I'm getting very low performance on write (5-10 MB/s) however
>> when using qcow2 format disk image the write speed is much better
>> (~30MB/s), which is consistent with a very similar setup running
>> kvm-68. Unfortunately when running the test with qcow2 the system
>> becomes unresponsive for a brief time during the test.
>
>> The host is running raid5 and drbd (drive replication software),
>> however performance on the host is performing well and avoiding the
>> drbd layer in the guest does not improve performance, but running on
>> qcow2 does.
>>
>> Any thoughts/suggestions of what could be wrong or what to do to fix this?
>
> RAID1 has *much* better write performance. With striping RAIDs, alignment
> is important. RAID controllers sometimes introduce hidden alignment
> offsets. Excessive read-ahead is a waste of time with a lot of small
> random I/O, which is what I see mostly with guests on flat disk images.
>
> With LVM, it pays to make sure the LVs are aligned to the disk. I prefer
> boundaries with multiples of at least 64-sectors, which makes the LVM
> overhead virtually disappear. I align the guest filesystems too, when
> I can.
>
> I don't think DRBD has an effect on alignment, but you might look at
> keeping the metadata on another drive.
>
> Block - rather than file - images are much faster.
>
> Hope this helps,
It does. However, unless I'm missing something, the performance is not
being lost in the lvm/raid/drbd config, because I'm using the same setup
for other data partitions on the host, and write performance to those
drives is just fine.
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 20:39 ` Malinka Rellikwodahs
@ 2009-03-02 21:22 ` Anthony Liguori
2009-03-02 21:39 ` Malinka Rellikwodahs
0 siblings, 1 reply; 16+ messages in thread
From: Anthony Liguori @ 2009-03-02 21:22 UTC (permalink / raw)
To: AelMalinka; +Cc: Mailing Lists
Malinka Rellikwodahs wrote:
>>>
>>> What version of kvm is this? Is it kvm-68? You'll have better luck with
>>> something newer than that.
>>>
>
> kvm-83 is the one with the problem, kvm-68 is working correctly.
>
kvm-68 and qcow2 both use cache=writeback by default, which is less safe
than cache=writethrough, which is now the default.
But performance shouldn't be as bad as you're seeing.
Regards,
Anthony Liguori
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 21:22 ` Anthony Liguori
@ 2009-03-02 21:39 ` Malinka Rellikwodahs
2009-03-02 22:21 ` Anthony Liguori
2009-03-09 15:09 ` Avi Kivity
0 siblings, 2 replies; 16+ messages in thread
From: Malinka Rellikwodahs @ 2009-03-02 21:39 UTC (permalink / raw)
To: Anthony Liguori; +Cc: Mailing Lists
On Mon, Mar 2, 2009 at 16:22, Anthony Liguori <anthony@codemonkey.ws> wrote:
> Malinka Rellikwodahs wrote:
>>>>
>>>> What version of kvm is this? Is it kvm-68? You'll have better luck
>>>> with
>>>> something newer than that.
>>>>
>>
>> kvm-83 is the one with the problem, kvm-68 is working correctly.
>>
>
> kvm-68 and qcow2 both use cache=writeback by default which is less safe than
> cache=writethrough which is now the default.
>
> But performance shouldn't be as bad as you're seeing.
Running the kvm-84 install on the qcow2 image with kvm -m 1024 -drive
file=qcow,boot=on,cache=writethrough, I get write performance similar to
raw.
So it looks like there's a big performance hit on this setup with cache
set to writethrough. Any ideas where to look for that?
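For reference, all the cache modes under discussion are selected the same way on the -drive option. A quick sketch that just prints the three variants (the image path is hypothetical):

```shell
img=winxp.qcow2   # hypothetical image path
# writethrough: the newer default, safe, but every write goes to the disk.
# writeback:    the old kvm-68/qcow2 default, fast but loose with consistency.
# none:         O_DIRECT, bypasses the host page cache entirely.
for mode in writethrough writeback none; do
  echo kvm -m 1024 -drive "file=${img},boot=on,cache=${mode}"
done
```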
> Regards,
>
> Anthony Liguori
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 21:39 ` Malinka Rellikwodahs
@ 2009-03-02 22:21 ` Anthony Liguori
2009-03-09 15:09 ` Avi Kivity
1 sibling, 0 replies; 16+ messages in thread
From: Anthony Liguori @ 2009-03-02 22:21 UTC (permalink / raw)
To: AelMalinka; +Cc: Mailing Lists
Malinka Rellikwodahs wrote:
> On Mon, Mar 2, 2009 at 16:22, Anthony Liguori <anthony@codemonkey.ws> wrote:
>
>> Malinka Rellikwodahs wrote:
>>
>>>>> What version of kvm is this? Is it kvm-68? You'll have better luck
>>>>> with
>>>>> something newer than that.
>>>>>
>>>>>
>>> kvm-83 is the one with the problem, kvm-68 is working correctly.
>>>
>>>
>> kvm-68 and qcow2 both use cache=writeback by default which is less safe than
>> cache=writethrough which is now the default.
>>
>> But performance shouldn't be as bad as you're seeing.
>>
>
> Running the kvm-84 install on the qcow image as kvm -m 1024 -drive
> file=qcow,boot=on,cache=writethrough, I get similar performance to the
> raw performance.
>
> So it looks like with cache set to writethrough there's a big
> performance hit on this setup, any ideas where to look for that?
>
cache=writeback is "fake" performance. In theory, it should be better
than native because it's being loose with data consistency. So the real
question is why performance is so bad.
It's probably got something to do with IDE. It could be that kvm-84's
IDE predates Avi's AIO implementation, such that IDE requests are being
split up into something small. They should be submitted in parallel, but
maybe that's not working out well with your disk setup.
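A back-of-envelope sketch of why the splitting would matter. The per-request overhead and the disk speed below are invented numbers, purely to illustrate the shape of the cost, not measurements of this setup:

```shell
overhead_us=200        # assumed fixed cost per I/O request (microseconds)
xfer_us=10000          # assumed time to move 1 MiB at ~100 MiB/s

# One 1 MiB request vs. the same data as 256 serial 4 KiB requests:
single=$(( overhead_us + xfer_us ))
split=$(( 256 * overhead_us + xfer_us ))

echo "single request:      ${single} us"
echo "256 serial requests: ${split} us"   # the per-request cost dominates
```

With these made-up numbers the serialized small requests come out roughly 6x slower for the same amount of data, which is the general pattern one would expect if requests are being split and not submitted in parallel.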
Regards,
Anthony Liguori
>> Regards,
>>
>> Anthony Liguori
>>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 21:00 ` Malinka Rellikwodahs
@ 2009-03-03 15:13 ` Nikola Ciprich
[not found] ` <aa2a0fc0903051110q528da32ek17b0f6468d0f15ff@mail.gmail.com>
0 siblings, 1 reply; 16+ messages in thread
From: Nikola Ciprich @ 2009-03-03 15:13 UTC (permalink / raw)
To: Malinka Rellikwodahs; +Cc: Mark van Walraven, kvm, nikola.ciprich
Hi,
I think DRBD *MIGHT* be Your problem anyway...
Can You try repeating Your measurements with the
no-disk-barrier, no-disk-flushes, no-disk-drain
options for Your drbd devices and report the results?
nik
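For reference, these are per-resource disk-section options; a hypothetical drbd.conf fragment showing where they go (the resource name is made up, and the keywords are from the drbd 8.3 branch, so they may not exist in older releases):

```
resource r0 {
  disk {
    no-disk-barrier;   # don't use write barriers for ordering
    no-disk-flushes;   # don't issue disk-cache flushes
    no-disk-drain;     # don't drain the request queue between writes
  }
  # other resource settings omitted
}
```

Disabling all three trades data safety for speed, so it is normally only appropriate with a battery-backed controller.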
On Mon, Mar 02, 2009 at 04:00:57PM -0500, Malinka Rellikwodahs wrote:
> On Mon, Mar 2, 2009 at 15:53, Mark van Walraven <markv@netvalue.net.nz> wrote:
> > On Mon, Mar 02, 2009 at 03:11:59PM -0500, Malinka Rellikwodahs wrote:
> >> when running with a raw disk image as a file or a raw disk image on an
> >> lvm vg, I'm getting very low performance on write (5-10 MB/s) however
> >> when using qcow2 format disk image the write speed is much better
> >> (~30MB/s), which is consistent with a very similar setup running
> >> kvm-68. Unfortunately when running the test with qcow2 the system
> >> becomes unresponsive for a brief time during the test.
> >
> >> The host is running raid5 and drbd (drive replication software),
> >> however performance on the host is performing well and avoiding the
> >> drbd layer in the guest does not improve performance, but running on
> >> qcow2 does.
> >>
> >> Any thoughts/suggestions of what could be wrong or what to do to fix this?
> >
> > RAID1 has *much* better write performance. With striping RAIDs, alignment
> > is important. RAID controllers sometimes introduce hidden alignment
> > offsets. Excessive read-ahead is a waste of time with a lot of small
> > random I/O, which is what I see mostly with guests on flat disk images.
> >
> > With LVM, it pays to make sure the LVs are aligned to the disk. I prefer
> > boundaries with multiples of at least 64-sectors, which makes the LVM
> > overhead virtually disappear. I align the guest filesystems too, when
> > I can.
> >
> > I don't think DRBD has an effect on alignment, but you might look at
> > keeping the metadata on another drive.
> >
> > Block - rather than file - images are much faster.
> >
> > Hope this helps,
>
> It does, however unless I'm missing something the performance is being
> lost not in the lvm/raid/drbd config, because I'm using the same setup
> for other partitions which are used for data on the host and write
> performance to those drives is just fine.
>
--
-------------------------------------
Nikola CIPRICH
LinuxBox.cz, s.r.o.
28. rijna 168, 709 01 Ostrava
tel.: +420 596 603 142
fax: +420 596 621 273
mobil: +420 777 093 799
www.linuxbox.cz
mobil servis: +420 737 238 656
email servis: servis@linuxbox.cz
-------------------------------------
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 20:53 ` Mark van Walraven
2009-03-02 21:00 ` Malinka Rellikwodahs
@ 2009-03-04 22:28 ` Paolo Pedaletti
2009-03-05 2:09 ` Mark van Walraven
1 sibling, 1 reply; 16+ messages in thread
From: Paolo Pedaletti @ 2009-03-04 22:28 UTC (permalink / raw)
Cc: kvm
Ciao Mark,
> RAID1 has *much* better write performance. With striping RAIDs, alignment
> is important. RAID controllers sometimes introduce hidden alignment
> offsets. Excessive read-ahead is a waste of time with a lot of small
> random I/O, which is what I see mostly with guests on flat disk images.
OK, I can understand this,
but on a big multimedia-file partition a suitable read-ahead could
be useful (set with blockdev).
> With LVM, it pays to make sure the LVs are aligned to the disk. I prefer
> boundaries with multiples of at least 64-sectors, which makes the LVM
> overhead virtually disappear. I align the guest filesystems too, when
> I can.
I use LVM extensively, so can you explain how to achieve alignment
between LVM and the filesystem? And how to check it?
I have found this interesting:
http://www.mail-archive.com/linux-raid@vger.kernel.org/msg09685.html
http://kerneltrap.org/mailarchive/linux-raid/2008/12/1/4272764
http://blog.endpoint.com/2008/09/filesystem-io-what-we-presented.html
http://lonesysadmin.net/2009/01/02/how-to-grow-linux-virtual-disks-in-vmware/
(useful even for kvm users :-)
http://orezpraw.com/blog/your-filesystem-starts-where
http://www.issociate.de/board/post/464221/stride_/_stripe_alignment_on_LVM_?.html
http://www.ocztechnologyforum.com/forum/showpost.php?p=335049&postcount=134
http://thunk.org/tytso/blog/2009/02/20/aligning-filesystems-to-an-ssds-erase-block-size/
I've posted these links because:
1) I didn't know about this alignment problem
2) LVM is suggested as the preferred/best solution instead of a qcow2 file image
3) filesystem performance may not be related to the kvm driver
4) I still have to read those posts and understand them :-)
thank you...
--
Paolo Pedaletti
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-04 22:28 ` Paolo Pedaletti
@ 2009-03-05 2:09 ` Mark van Walraven
0 siblings, 0 replies; 16+ messages in thread
From: Mark van Walraven @ 2009-03-05 2:09 UTC (permalink / raw)
To: Paolo Pedaletti; +Cc: kvm
Hi Paolo,
Sorry, list - getting a bit off-topic, but I'll include it because it
might be of general interest for kvm users ...
On Wed, Mar 04, 2009 at 11:28:18PM +0100, Paolo Pedaletti wrote:
> ok, I can understand this
> but on a big multimedia-file partition an "opportune" read-ahead could
> be useful (to set with blockdev)
Sure. Adjust and measure for your average and worst-case workload.
I expected a moderate read-ahead to help on the storage serving my kvm
hosts, but in practice it caused painful latency spikes.
> I use LVM extensively so can you explain how can you achieve alignments
> between lvm and filesistem? and how to check it?
Your links contain good material on this. My comments are:
When you can, don't use a partition table but make the whole disk a PV.
Otherwise, watch that your partitions are properly aligned.
Use '--metadatasize 250k' arguments with pvcreate (the size is always
rounded up to the next 64KB boundary, so 250k ends up as 256KB;
'--metadatasize 256k' would result in 320KB).
'pvs -o+pe_start' and 'dmsetup table' will show your PV and LV offsets.
If you use image files, you probably don't want them to have holes in
them, or they will likely fragment as the holes are filled. I expect
qcow2 images fragment internally? Read-ahead on a fragmented image file
will really hurt.
Ext2 doesn't seem very sensitive to alignment. I haven't played with
aligning ext3's journal. (Speculation: a deliberately-wrong stride could
be interesting if inode lookups are a seek away from their data block
and your RAID is clever about splitting seeks between mirror drives.)
RAID controllers can have their own sector offsets and read-aheads.
Using disk type='block' avoids the host filesystem overhead altogether.
Regards,
Mark.
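The metadatasize behaviour described above can be sketched as follows. The pvcreate/pvs commands need root and a real device, so they are shown as comments (the device name is hypothetical); only the rounding arithmetic actually runs:

```shell
# Set up and inspect a PV with an explicit metadata size:
#   pvcreate --metadatasize 250k /dev/sdX
#   pvs -o+pe_start /dev/sdX      # expect pe_start = 256.00k
#
# Per the behaviour described above, the metadata area is bumped to the
# *next* 64 KiB boundary, even when the requested size is already an
# exact multiple:
next_64k() { echo $(( ( $1 / 64 + 1 ) * 64 )); }

next_64k 250   # -> 256: first data extent lands on a 256 KiB boundary
next_64k 256   # -> 320: an exact multiple still gets bumped up
```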
^ permalink raw reply [flat|nested] 16+ messages in thread
* Fwd: kvm-83 write performance raw
[not found] ` <aa2a0fc0903051110q528da32ek17b0f6468d0f15ff@mail.gmail.com>
@ 2009-03-05 19:11 ` Malinka Rellikwodahs
2009-03-06 0:14 ` Malinka Rellikwodahs
2009-03-09 15:40 ` Fwd: " Nikola Ciprich
0 siblings, 2 replies; 16+ messages in thread
From: Malinka Rellikwodahs @ 2009-03-05 19:11 UTC (permalink / raw)
To: Mailing Lists
On Tue, Mar 3, 2009 at 10:13, Nikola Ciprich <extmaillist@linuxbox.cz> wrote:
>
> Hi,
> I think DRBD *MIGHT* be Your problem anyways...
> Can You try repeating Your measurements with
> no-disk-barrier, no-disk-flushes, no-disk-drain
> options for Your drbd devices and report the results?
> nik
I'm running DRBD 8.0.14 (latest stable), and it appears that the
no-disk-drain and no-disk-barrier options aren't available. However,
with no-disk-flushes, write performance to the drbd volumes (other than
the kvm volume) is the same, and write performance in kvm is also
unchanged (~10 MB/s in Windows, ~30 MB/s in Linux).
>
> On Mon, Mar 02, 2009 at 04:00:57PM -0500, Malinka Rellikwodahs wrote:
> > On Mon, Mar 2, 2009 at 15:53, Mark van Walraven <markv@netvalue.net.nz> wrote:
> > > On Mon, Mar 02, 2009 at 03:11:59PM -0500, Malinka Rellikwodahs wrote:
> > >> when running with a raw disk image as a file or a raw disk image on an
> > >> lvm vg, I'm getting very low performance on write (5-10 MB/s) however
> > >> when using qcow2 format disk image the write speed is much better
> > >> (~30MB/s), which is consistent with a very similar setup running
> > >> kvm-68. Unfortunately when running the test with qcow2 the system
> > >> becomes unresponsive for a brief time during the test.
> > >
> > >> The host is running raid5 and drbd (drive replication software),
> > >> however performance on the host is performing well and avoiding the
> > >> drbd layer in the guest does not improve performance, but running on
> > >> qcow2 does.
> > >>
> > >> Any thoughts/suggestions of what could be wrong or what to do to fix this?
> > >
> > > RAID1 has *much* better write performance. With striping RAIDs, alignment
> > > is important. RAID controllers sometimes introduce hidden alignment
> > > offsets. Excessive read-ahead is a waste of time with a lot of small
> > > random I/O, which is what I see mostly with guests on flat disk images.
> > >
> > > With LVM, it pays to make sure the LVs are aligned to the disk. I prefer
> > > boundaries with multiples of at least 64-sectors, which makes the LVM
> > > overhead virtually disappear. I align the guest filesystems too, when
> > > I can.
> > >
> > > I don't think DRBD has an effect on alignment, but you might look at
> > > keeping the metadata on another drive.
> > >
> > > Block - rather than file - images are much faster.
> > >
> > > Hope this helps,
> >
> > It does, however unless I'm missing something the performance is being
> > lost not in the lvm/raid/drbd config, because I'm using the same setup
> > for other partitions which are used for data on the host and write
> > performance to those drives is just fine.
> >
>
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-05 19:11 ` Fwd: " Malinka Rellikwodahs
@ 2009-03-06 0:14 ` Malinka Rellikwodahs
2009-03-09 15:40 ` Fwd: " Nikola Ciprich
1 sibling, 0 replies; 16+ messages in thread
From: Malinka Rellikwodahs @ 2009-03-06 0:14 UTC (permalink / raw)
To: Mailing Lists
The results below are the averages over 5 runs of the times returned by
dd if=/dev/zero of=test bs=512M count=2:
7.607s 134.6 MiB/s raid5 + lvm + ext3
29.69s 34.5 MiB/s raid5 + lvm + drbd + ext3
36.28s 28.2 MiB/s raid5 + lvm + kvm + ext3
80.05s 12.8 MiB/s raid5 + lvm + drbd + kvm + ext3
which leads me to believe the major cause here is that drbd and kvm are
not working well together. I'm not surprised by the major performance
drop-off when adding drbd; however, I am moderately surprised to see the
same thing when adding kvm. kvm-84 was used in the tests.
As a note, the last test (using both drbd and kvm) produced the message
"Clocksource tsc unstable (delta = 4691311571 ns)", making me wonder
whether the timings were negatively affected by clock skew.
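The five-run averaging can be scripted; a sketch with a much smaller bs/count than the real test so it completes quickly. The output filename is arbitrary, conv=fsync is added so the data actually reaches storage before timing stops, and a GNU userland (date +%s%N, seq) is assumed:

```shell
runs=5
bs=1M
count=8
out=ddtest.bin
bytes=$(( 8 * 1024 * 1024 ))   # count x bs in bytes

total_ns=0
for i in $(seq "$runs"); do
  start=$(date +%s%N)
  dd if=/dev/zero of="$out" bs="$bs" count="$count" conv=fsync 2>/dev/null
  total_ns=$(( total_ns + $(date +%s%N) - start ))
done
rm -f "$out"

# Average MiB/s across all runs (integer arithmetic, good enough here).
mibs=$(( bytes * runs * 1000000000 / total_ns / 1048576 ))
echo "average: ${mibs} MiB/s over ${runs} runs of ${count}x${bs}"
```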
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: kvm-83 write performance raw
2009-03-02 21:39 ` Malinka Rellikwodahs
2009-03-02 22:21 ` Anthony Liguori
@ 2009-03-09 15:09 ` Avi Kivity
1 sibling, 0 replies; 16+ messages in thread
From: Avi Kivity @ 2009-03-09 15:09 UTC (permalink / raw)
To: AelMalinka; +Cc: Anthony Liguori, Mailing Lists
Malinka Rellikwodahs wrote:
> Running the kvm-84 install on the qcow image as kvm -m 1024 -drive
> file=qcow,boot=on,cache=writethrough, I get similar performance to the
> raw performance.
>
> So it looks like with cache set to writethrough there's a big
> performance hit on this setup, any ideas where to look for that?
>
>
Unrelated, but you'll probably improve raw performance somewhat by using
cache=none.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 16+ messages in thread
* Re: Fwd: kvm-83 write performance raw
2009-03-05 19:11 ` Fwd: " Malinka Rellikwodahs
2009-03-06 0:14 ` Malinka Rellikwodahs
@ 2009-03-09 15:40 ` Nikola Ciprich
1 sibling, 0 replies; 16+ messages in thread
From: Nikola Ciprich @ 2009-03-09 15:40 UTC (permalink / raw)
To: Malinka Rellikwodahs; +Cc: Mailing Lists, nikola.ciprich
Hi, sorry for such a late reply.
Well, we're using the drbd 8.3 branch (it's already stable), and disabling
barriers (and all the other cache-sync mechanisms) gave us quite a huge
speed boost. Of course, it's only really safe if You have a battery-backed
RAID adapter (and disk caches disabled).
n.
On Thu, Mar 05, 2009 at 02:11:39PM -0500, Malinka Rellikwodahs wrote:
> On Tue, Mar 3, 2009 at 10:13, Nikola Ciprich <extmaillist@linuxbox.cz> wrote:
> >
> > Hi,
> > I think DRBD *MIGHT* be Your problem anyways...
> > Can You try repeating Your measurements with
> > no-disk-barrier, no-disk-flushes, no-disk-drain
> > options for Your drbd devices and report the results?
> > nik
>
>
> I'm running DRBD 8.0.14 (latest stable) and it appears that
> no-disk-drain and no-disk-barrier options aren't available, however
> with no-disk-flushes write performance to the drbd volumes (other than
> the kvm volume) is the same and write performance in kvm is also
> unchanged (~10MB/s in windows, ~30MB/s in Linux)
>
> >
> > On Mon, Mar 02, 2009 at 04:00:57PM -0500, Malinka Rellikwodahs wrote:
> > > On Mon, Mar 2, 2009 at 15:53, Mark van Walraven <markv@netvalue.net.nz> wrote:
> > > > On Mon, Mar 02, 2009 at 03:11:59PM -0500, Malinka Rellikwodahs wrote:
> > > >> when running with a raw disk image as a file or a raw disk image on an
> > > >> lvm vg, I'm getting very low performance on write (5-10 MB/s) however
> > > >> when using qcow2 format disk image the write speed is much better
> > > >> (~30MB/s), which is consistent with a very similar setup running
> > > >> kvm-68. Unfortunately when running the test with qcow2 the system
> > > >> becomes unresponsive for a brief time during the test.
> > > >
> > > >> The host is running raid5 and drbd (drive replication software),
> > > >> however performance on the host is performing well and avoiding the
> > > >> drbd layer in the guest does not improve performance, but running on
> > > >> qcow2 does.
> > > >>
> > > >> Any thoughts/suggestions of what could be wrong or what to do to fix this?
> > > >
> > > > RAID1 has *much* better write performance. With striping RAIDs, alignment
> > > > is important. RAID controllers sometimes introduce hidden alignment
> > > > offsets. Excessive read-ahead is a waste of time with a lot of small
> > > > random I/O, which is what I see mostly with guests on flat disk images.
> > > >
> > > > With LVM, it pays to make sure the LVs are aligned to the disk. I prefer
> > > > boundaries with multiples of at least 64-sectors, which makes the LVM
> > > > overhead virtually disappear. I align the guest filesystems too, when
> > > > I can.
> > > >
> > > > I don't think DRBD has an effect on alignment, but you might look at
> > > > keeping the metadata on another drive.
> > > >
> > > > Block - rather than file - images are much faster.
> > > >
> > > > Hope this helps,
> > >
> > > It does, however unless I'm missing something the performance is being
> > > lost not in the lvm/raid/drbd config, because I'm using the same setup
> > > for other partitions which are used for data on the host and write
> > > performance to those drives is just fine.
> > >
> >
>
--
-------------------------------------
Nikola CIPRICH
LinuxBox.cz, s.r.o.
28. rijna 168, 709 01 Ostrava
tel.: +420 596 603 142
fax: +420 596 621 273
mobil: +420 777 093 799
www.linuxbox.cz
mobil servis: +420 737 238 656
email servis: servis@linuxbox.cz
-------------------------------------
^ permalink raw reply [flat|nested] 16+ messages in thread
end of thread, other threads:[~2009-03-09 15:38 UTC | newest]
Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-03-02 20:11 kvm-83 write performance raw Malinka Rellikwodahs
2009-03-02 20:35 ` Anthony Liguori
2009-03-02 20:37 ` Malinka Rellikwodahs
2009-03-02 20:39 ` Malinka Rellikwodahs
2009-03-02 21:22 ` Anthony Liguori
2009-03-02 21:39 ` Malinka Rellikwodahs
2009-03-02 22:21 ` Anthony Liguori
2009-03-09 15:09 ` Avi Kivity
2009-03-02 20:53 ` Mark van Walraven
2009-03-02 21:00 ` Malinka Rellikwodahs
2009-03-03 15:13 ` Nikola Ciprich
[not found] ` <aa2a0fc0903051110q528da32ek17b0f6468d0f15ff@mail.gmail.com>
2009-03-05 19:11 ` Fwd: " Malinka Rellikwodahs
2009-03-06 0:14 ` Malinka Rellikwodahs
2009-03-09 15:40 ` Fwd: " Nikola Ciprich
2009-03-04 22:28 ` Paolo Pedaletti
2009-03-05 2:09 ` Mark van Walraven
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox