public inbox for kvm@vger.kernel.org
 help / color / mirror / Atom feed
* Shouldn't cache=none be the default for drives?
@ 2010-04-07 14:39 Troels Arvin
  2010-04-07 15:17 ` Gordan Bobic
  2010-04-08  5:07 ` Thomas Mueller
  0 siblings, 2 replies; 7+ messages in thread
From: Troels Arvin @ 2010-04-07 14:39 UTC (permalink / raw)
  To: kvm

Hello,

I'm conducting some performance tests with KVM-virtualized CentOSes. One 
thing I noticed is that guest I/O performance seems to be significantly 
better for virtio-based block devices ("drive"s) if the cache=none 
argument is used. (This was with a rather powerful storage system 
backend which is hard to saturate.)

So: Why isn't cache=none the default for drives?
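
(For reference, the cache mode is a per-drive option on the qemu-kvm 
command line; a minimal sketch, with a hypothetical volume path:)

```shell
# Hypothetical guest: raw LVM volume attached as a virtio disk,
# with the host page cache bypassed via cache=none.
qemu-kvm -m 2048 -smp 2 \
    -drive file=/dev/vg0/guest0,if=virtio,format=raw,cache=none
```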

-- 
Troels

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Shouldn't cache=none be the default for drives?
  2010-04-07 14:39 Shouldn't cache=none be the default for drives? Troels Arvin
@ 2010-04-07 15:17 ` Gordan Bobic
  2010-04-08  5:07 ` Thomas Mueller
  1 sibling, 0 replies; 7+ messages in thread
From: Gordan Bobic @ 2010-04-07 15:17 UTC (permalink / raw)
  To: kvm

Troels Arvin wrote:
> Hello,
> 
> I'm conducting some performance tests with KVM-virtualized CentOSes. One 
> thing I noticed is that guest I/O performance seems to be significantly 
> better for virtio-based block devices ("drive"s) if the cache=none 
> argument is used. (This was with a rather powerful storage system 
> backend which is hard to saturate.)
> 
> So: Why isn't cache=none the default for drives?

Is that the right question? Or is the right question "Why is cache=none 
faster?"

What did you use to measure the performance? I have found in the past 
that the virtio block device was slower than IDE block device emulation.

Gordan

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Shouldn't cache=none be the default for drives?
  2010-04-07 14:39 Shouldn't cache=none be the default for drives? Troels Arvin
  2010-04-07 15:17 ` Gordan Bobic
@ 2010-04-08  5:07 ` Thomas Mueller
  2010-04-08  6:05   ` Michael Tokarev
  1 sibling, 1 reply; 7+ messages in thread
From: Thomas Mueller @ 2010-04-08  5:07 UTC (permalink / raw)
  To: kvm

On Wed, 07 Apr 2010 16:39:41 +0200, Troels Arvin wrote:

> Hello,
> 
> I'm conducting some performance tests with KVM-virtualized CentOSes. One
> thing I noticed is that guest I/O performance seems to be significantly
> better for virtio-based block devices ("drive"s) if the cache=none
> argument is used. (This was with a rather powerful storage system
> backend which is hard to saturate.)
> 
> So: Why isn't cache=none the default for drives?

A while ago I suffered poor performance with virtio and a Win2008 guest. 

This helped a lot:

I enabled the "deadline" block scheduler instead of the default "cfq" on the 
host system. Tested with: host Debian with the deadline scheduler, guest 
Win2008 with virtio and cache=none. (Measured a boost from 26 MB/s to 
50 MB/s.) Maybe this is also true for Linux/Linux.

I expect that the "noop" scheduler would be a good choice for Linux guests.
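
(For anyone wanting to reproduce this: the scheduler can be switched per 
block device at runtime through sysfs; the sketch below assumes the guest 
images live on sda and requires root.)

```shell
# Show the schedulers the host offers; the active one is in brackets.
cat /sys/block/sda/queue/scheduler
# Switch the device backing the guest images to deadline.
echo deadline > /sys/block/sda/queue/scheduler
```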

- Thomas



^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Shouldn't cache=none be the default for drives?
  2010-04-08  5:07 ` Thomas Mueller
@ 2010-04-08  6:05   ` Michael Tokarev
  2010-04-08  6:09     ` Thomas Mueller
  2010-04-08 10:08     ` Christoph Hellwig
  0 siblings, 2 replies; 7+ messages in thread
From: Michael Tokarev @ 2010-04-08  6:05 UTC (permalink / raw)
  To: Thomas Mueller; +Cc: kvm

08.04.2010 09:07, Thomas Mueller wrote:
[]
> This helped a lot:
>
> I enabled the "deadline" block scheduler instead of the default "cfq" on the
> host system. Tested with: host Debian with the deadline scheduler, guest
> Win2008 with virtio and cache=none. (Measured a boost from 26 MB/s to
> 50 MB/s.) Maybe this is also true for Linux/Linux.
>
> I expect that the "noop" scheduler would be a good choice for Linux guests.

Hmm.   I wonder why it helped.  In theory, the host scheduler should not
change anything in the cache=none case, at least for raw partitions or
LVM volumes.  This is because with cache=none, the virtual disk
image is opened with the O_DIRECT flag, which means all I/O bypasses the
host scheduler and buffer cache.

I tried a few quick tests here -- with LVM volumes it makes no
measurable difference.  But if the guest disk images are on
plain files (also raw), the scheduler makes some difference, and
indeed deadline works better.  Maybe you were testing with
plain files instead of block devices?
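
(The userspace-visible half of this can be sketched with dd, which exposes 
the same flag -- scratch file name hypothetical. With oflag=direct the 
write skips the host page cache, which is exactly what cache=none requests 
for the guest image; whether it also skips the scheduler is the open 
question here.)

```shell
# Direct I/O may be refused (EINVAL) on filesystems such as tmpfs,
# so treat a failure of the second dd as informative rather than fatal.
dd if=/dev/zero of=odirect-test.img bs=1M count=16 status=none              # buffered: via page cache
dd if=/dev/zero of=odirect-test.img bs=1M count=16 oflag=direct status=none \
    || echo "O_DIRECT not supported on this filesystem"
rm -f odirect-test.img
```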

Thanks!

/mjt

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Shouldn't cache=none be the default for drives?
  2010-04-08  6:05   ` Michael Tokarev
@ 2010-04-08  6:09     ` Thomas Mueller
  2010-04-08  6:23       ` Thomas Mueller
  2010-04-08 10:08     ` Christoph Hellwig
  1 sibling, 1 reply; 7+ messages in thread
From: Thomas Mueller @ 2010-04-08  6:09 UTC (permalink / raw)
  To: kvm

On Thu, 08 Apr 2010 10:05:09 +0400, Michael Tokarev wrote:

> 08.04.2010 09:07, Thomas Mueller wrote: []
>> This helped a lot:
>>
>> I enabled the "deadline" block scheduler instead of the default "cfq" on
>> the host system. Tested with: host Debian with the deadline scheduler,
>> guest Win2008 with virtio and cache=none. (Measured a boost from 26 MB/s
>> to 50 MB/s.) Maybe this is also true for Linux/Linux.
>>
>> I expect that the "noop" scheduler would be a good choice for Linux guests.
> 
> Hmm.   I wonder why it helped.  In theory, the host scheduler should not
> change anything in the cache=none case, at least for raw partitions or
> LVM volumes.  This is because with cache=none, the virtual disk image is
> opened with the O_DIRECT flag, which means all I/O bypasses the host
> scheduler and buffer cache.
> 
> I tried a few quick tests here -- with LVM volumes it makes no
> measurable difference.  But if the guest disk images are on plain files
> (also raw), the scheduler makes some difference, and indeed deadline
> works better.  Maybe you were testing with plain files instead of block
> devices?

ah yes, qcow2 images. 

- Thomas


^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Shouldn't cache=none be the default for drives?
  2010-04-08  6:09     ` Thomas Mueller
@ 2010-04-08  6:23       ` Thomas Mueller
  0 siblings, 0 replies; 7+ messages in thread
From: Thomas Mueller @ 2010-04-08  6:23 UTC (permalink / raw)
  To: kvm

On Thu, 08 Apr 2010 06:09:05 +0000, Thomas Mueller wrote:

> On Thu, 08 Apr 2010 10:05:09 +0400, Michael Tokarev wrote:
> 
>> 08.04.2010 09:07, Thomas Mueller wrote: []
>>> This helped a lot:
>>>
>>> I enabled the "deadline" block scheduler instead of the default "cfq" on
>>> the host system. Tested with: host Debian with the deadline scheduler,
>>> guest Win2008 with virtio and cache=none. (Measured a boost from 26 MB/s
>>> to 50 MB/s.) Maybe this is also true for Linux/Linux.
>>>
>>> I expect that the "noop" scheduler would be a good choice for Linux guests.
>> 
>> Hmm.   I wonder why it helped.  In theory, the host scheduler should not
>> change anything in the cache=none case, at least for raw partitions or
>> LVM volumes.  This is because with cache=none, the virtual disk image is
>> opened with the O_DIRECT flag, which means all I/O bypasses the host
>> scheduler and buffer cache.
>> 
>> I tried a few quick tests here -- with LVM volumes it makes no
>> measurable difference.  But if the guest disk images are on plain files
>> (also raw), the scheduler makes some difference, and indeed deadline
>> works better.  Maybe you were testing with plain files instead of block
>> devices?
> 
> ah yes, qcow2 images.

... but does the scheduler really know about O_DIRECT? Isn't O_DIRECT 
meant to bypass only the buffers (i.e. "don't return from the write before 
it has really hit the disk")? My understanding is that the scheduler is a 
layer further down the stack. But I'm only guessing - I'm not a kernel 
hacker. :)

- Thomas  



^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Shouldn't cache=none be the default for drives?
  2010-04-08  6:05   ` Michael Tokarev
  2010-04-08  6:09     ` Thomas Mueller
@ 2010-04-08 10:08     ` Christoph Hellwig
  1 sibling, 0 replies; 7+ messages in thread
From: Christoph Hellwig @ 2010-04-08 10:08 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: Thomas Mueller, kvm

On Thu, Apr 08, 2010 at 10:05:09AM +0400, Michael Tokarev wrote:
> LVM volumes.  This is because with cache=none, the virtual disk
> image is opened with O_DIRECT flag, which means all I/O bypasses
> host scheduler and buffer cache.

O_DIRECT does not bypass the I/O scheduler, only the page cache.


^ permalink raw reply	[flat|nested] 7+ messages in thread

end of thread, other threads:[~2010-04-08 10:08 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2010-04-07 14:39 Shouldn't cache=none be the default for drives? Troels Arvin
2010-04-07 15:17 ` Gordan Bobic
2010-04-08  5:07 ` Thomas Mueller
2010-04-08  6:05   ` Michael Tokarev
2010-04-08  6:09     ` Thomas Mueller
2010-04-08  6:23       ` Thomas Mueller
2010-04-08 10:08     ` Christoph Hellwig

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox