* Poor performance with qemu
@ 2010-03-29 19:21 Markus Suvanto
0 siblings, 0 replies; 12+ messages in thread
From: Markus Suvanto @ 2010-03-29 19:21 UTC (permalink / raw)
To: diegocg; +Cc: linux-btrfs
>Hi, I'm using KVM, and the virtual disk (a 20 GB file using the "raw"
>qemu format according to virt-manager and, of course, placed on a btrfs
>filesystem, running the latest mainline git) is awfully slow, no matter
>what OS is running inside the VM. The PCBSD installer says it's copying
>data at a 40-50 KB/s rate. Is someone using KVM and having better numbers
>than me? How can I help to debug this workload?
Maybe cache=writeback helps?
-drive file=image.raw,cache=writeback
-Markus
^ permalink raw reply [flat|nested] 12+ messages in thread
* Poor performance with qemu
@ 2010-03-28 15:18 Diego Calleja
2010-03-30 12:56 ` Chris Mason
0 siblings, 1 reply; 12+ messages in thread
From: Diego Calleja @ 2010-03-28 15:18 UTC (permalink / raw)
To: linux-btrfs
Hi, I'm using KVM, and the virtual disk (a 20 GB file using the "raw"
qemu format according to virt-manager and, of course, placed on a btrfs
filesystem, running the latest mainline git) is awfully slow, no matter
what OS is running inside the VM. The PCBSD installer says it's copying
data at a 40-50 KB/s rate. Is someone using KVM and having better numbers
than me? How can I help to debug this workload?
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Poor performance with qemu
2010-03-28 15:18 Diego Calleja
@ 2010-03-30 12:56 ` Chris Mason
2010-04-08 14:58 ` Avi Kivity
0 siblings, 1 reply; 12+ messages in thread
From: Chris Mason @ 2010-03-30 12:56 UTC (permalink / raw)
To: Diego Calleja; +Cc: linux-btrfs
On Sun, Mar 28, 2010 at 05:18:03PM +0200, Diego Calleja wrote:
> Hi, I'm using KVM, and the virtual disk (a 20 GB file using the "raw"
> qemu format according to virt-manager and, of course, placed on a btrfs
> filesystem, running the latest mainline git) is awfully slow, no matter
> what OS is running inside the VM. The PCBSD installer says it's copying
> data at a 40-50 KB/s rate. Is someone using KVM and having better numbers
> than me? How can I help to debug this workload?
The problem is that qemu uses O_SYNC by default, which makes btrfs do
log commits for every write.
Once the O_DIRECT read patch is in, you can switch to that, or tell qemu
to use a writeback cache instead.
-chris
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Poor performance with qemu
2010-03-30 12:56 ` Chris Mason
@ 2010-04-08 14:58 ` Avi Kivity
2010-04-08 15:21 ` Gordan Bobic
2010-04-08 15:26 ` Chris Mason
0 siblings, 2 replies; 12+ messages in thread
From: Avi Kivity @ 2010-04-08 14:58 UTC (permalink / raw)
To: Chris Mason, Diego Calleja, linux-btrfs
On 03/30/2010 03:56 PM, Chris Mason wrote:
> On Sun, Mar 28, 2010 at 05:18:03PM +0200, Diego Calleja wrote:
>
>> Hi, I'm using KVM, and the virtual disk (a 20 GB file using the "raw"
>> qemu format according to virt-manager and, of course, placed on a btrfs
>> filesystem, running the latest mainline git) is awfully slow, no matter
>> what OS is running inside the VM. The PCBSD installer says it's copying
>> data at a 40-50 KB/s rate. Is someone using KVM and having better numbers
>> than me? How can I help to debug this workload?
>>
> The problem is that qemu uses O_SYNC by default, which makes btrfs do
> log commits for every write.
>
Problem is, btrfs takes the 50 KB/s guest rate and inflates it to
something much larger (megabytes/sec). Are there plans to reduce the
amount of O_SYNC overhead writes?
I saw this too, but with 2.6.31 or 2.6.32 IIRC.
> Once the O_DIRECT read patch is in, you can switch to that, or tell qemu
> to use a writeback cache instead.
>
Even with writeback qemu will issue a lot of fsyncs.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Poor performance with qemu
2010-04-08 14:58 ` Avi Kivity
@ 2010-04-08 15:21 ` Gordan Bobic
2010-04-08 15:26 ` Chris Mason
1 sibling, 0 replies; 12+ messages in thread
From: Gordan Bobic @ 2010-04-08 15:21 UTC (permalink / raw)
To: linux-btrfs
Avi Kivity wrote:
> On 03/30/2010 03:56 PM, Chris Mason wrote:
>> On Sun, Mar 28, 2010 at 05:18:03PM +0200, Diego Calleja wrote:
>>
>>> Hi, I'm using KVM, and the virtual disk (a 20 GB file using the "raw"
>>> qemu format according to virt-manager and, of course, placed on a btrfs
>>> filesystem, running the latest mainline git) is awfully slow, no matter
>>> what OS is running inside the VM. The PCBSD installer says it's copying
>>> data at a 40-50 KB/s rate. Is someone using KVM and having better
>>> numbers
>>> than me? How can I help to debug this workload?
>>>
>> The problem is that qemu uses O_SYNC by default, which makes btrfs do
>> log commits for every write.
>>
>
> Problem is, btrfs takes the 50 KB/s guest rate and inflates it to
> something much larger (megabytes/sec). Are there plans to reduce the
> amount of O_SYNC overhead writes?
>
> I saw this too, but with 2.6.31 or 2.6.32 IIRC.
>
>> Once the O_DIRECT read patch is in, you can switch to that, or tell qemu
>> to use a writeback cache instead.
>>
>
> Even with writeback qemu will issue a lot of fsyncs.
>
My understanding was that with cache=writeback qemu shouldn't issue any
fsyncs at all, but I could be wrong.
Gordan
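[Editorial note: the disagreement here comes down to how qemu's cache= modes of that era mapped to open(2) flags on the host image file. The sketch below is from memory and may differ between qemu versions; treat it as an assumption, not a definitive reference.]

```c
/* Approximate mapping (qemu circa 2010, raw-posix backend):
 *
 *   cache=writethrough (default)  -> O_SYNC
 *       host page cache used, but every write is synchronous
 *   cache=none                    -> O_DIRECT
 *       bypasses the host page cache entirely
 *   cache=writeback               -> neither flag
 *       host page cache used; data is flushed only when qemu calls
 *       fsync/fdatasync, e.g. for a guest flush or qcow2 metadata
 *
 * So cache=writeback does not mean "no fsyncs ever": writes are
 * cached, but explicit flush requests are still honored.
 */
```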
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Poor performance with qemu
2010-04-08 14:58 ` Avi Kivity
2010-04-08 15:21 ` Gordan Bobic
@ 2010-04-08 15:26 ` Chris Mason
2010-04-08 15:28 ` Avi Kivity
2010-04-08 15:32 ` Christoph Hellwig
1 sibling, 2 replies; 12+ messages in thread
From: Chris Mason @ 2010-04-08 15:26 UTC (permalink / raw)
To: Avi Kivity; +Cc: Diego Calleja, linux-btrfs
On Thu, Apr 08, 2010 at 05:58:17PM +0300, Avi Kivity wrote:
> On 03/30/2010 03:56 PM, Chris Mason wrote:
> >On Sun, Mar 28, 2010 at 05:18:03PM +0200, Diego Calleja wrote:
> >>Hi, I'm using KVM, and the virtual disk (a 20 GB file using the "raw"
> >>qemu format according to virt-manager and, of course, placed on a btrfs
> >>filesystem, running the latest mainline git) is awfully slow, no matter
> >>what OS is running inside the VM. The PCBSD installer says it's copying
> >>data at a 40-50 KB/s rate. Is someone using KVM and having better numbers
> >>than me? How can I help to debug this workload?
> >The problem is that qemu uses O_SYNC by default, which makes btrfs do
> >log commits for every write.
>
> Problem is, btrfs takes the 50 KB/s guest rate and inflates it to
> something much larger (megabytes/sec). Are there plans to reduce
> the amount of O_SYNC overhead writes?
>
> I saw this too, but with 2.6.31 or 2.6.32 IIRC.
With O_DIRECT the writeback rates are very reasonable. I'll work up a
way to pass the barrier down from the guest to btrfs to force logging of
updated metadata when required.
>
> >Once the O_DIRECT read patch is in, you can switch to that, or tell qemu
> >to use a writeback cache instead.
>
> Even with writeback qemu will issue a lot of fsyncs.
Oh, I didn't see that when I was testing, when does it fsync?
-chris
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Poor performance with qemu
2010-04-08 15:26 ` Chris Mason
@ 2010-04-08 15:28 ` Avi Kivity
2010-04-08 15:32 ` Chris Mason
2010-04-08 15:34 ` Christoph Hellwig
2010-04-08 15:32 ` Christoph Hellwig
1 sibling, 2 replies; 12+ messages in thread
From: Avi Kivity @ 2010-04-08 15:28 UTC (permalink / raw)
To: Chris Mason, Diego Calleja, linux-btrfs
On 04/08/2010 06:26 PM, Chris Mason wrote:
>>
>>> Once the O_DIRECT read patch is in, you can switch to that, or tell qemu
>>> to use a writeback cache instead.
>>>
>> Even with writeback qemu will issue a lot of fsyncs.
>>
> Oh, I didn't see that when I was testing, when does it fsync?
>
When it updates qcow2 metadata or when the guest issues a barrier. It's
relatively new. I have a patch that introduces cache=volatile somewhere.
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Poor performance with qemu
2010-04-08 15:28 ` Avi Kivity
@ 2010-04-08 15:32 ` Chris Mason
2010-04-08 15:34 ` Christoph Hellwig
1 sibling, 0 replies; 12+ messages in thread
From: Chris Mason @ 2010-04-08 15:32 UTC (permalink / raw)
To: Avi Kivity; +Cc: Diego Calleja, linux-btrfs
On Thu, Apr 08, 2010 at 06:28:54PM +0300, Avi Kivity wrote:
> On 04/08/2010 06:26 PM, Chris Mason wrote:
> >>
> >>>Once the O_DIRECT read patch is in, you can switch to that, or tell qemu
> >>>to use a writeback cache instead.
> >>Even with writeback qemu will issue a lot of fsyncs.
> >Oh, I didn't see that when I was testing, when does it fsync?
>
> When it updates qcow2 metadata or when the guest issues a barrier.
> It's relatively new. I have a patch that introduces cache=volatile
> somewhere.
Ok, that's actually perfect. I'll retest with that if I can get
qemu-git to work here.
-chris
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Poor performance with qemu
2010-04-08 15:28 ` Avi Kivity
2010-04-08 15:32 ` Chris Mason
@ 2010-04-08 15:34 ` Christoph Hellwig
2010-04-08 15:36 ` Avi Kivity
1 sibling, 1 reply; 12+ messages in thread
From: Christoph Hellwig @ 2010-04-08 15:34 UTC (permalink / raw)
To: Avi Kivity; +Cc: Chris Mason, Diego Calleja, linux-btrfs
On Thu, Apr 08, 2010 at 06:28:54PM +0300, Avi Kivity wrote:
> When it updates qcow2 metadata or when the guest issues a barrier. It's
> relatively new. I have a patch that introduces cache=volatile somewhere.
qcow2 does not issue any fsyncs by itself, it only passes through the
guest's ones. The only other places issuing fsyncs are committing a COW
image back to the base image, and on migration.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Poor performance with qemu
2010-04-08 15:34 ` Christoph Hellwig
@ 2010-04-08 15:36 ` Avi Kivity
2010-04-08 15:39 ` Christoph Hellwig
0 siblings, 1 reply; 12+ messages in thread
From: Avi Kivity @ 2010-04-08 15:36 UTC (permalink / raw)
To: Christoph Hellwig; +Cc: Chris Mason, Diego Calleja, linux-btrfs
On 04/08/2010 06:34 PM, Christoph Hellwig wrote:
> On Thu, Apr 08, 2010 at 06:28:54PM +0300, Avi Kivity wrote:
>
>> When it updates qcow2 metadata or when the guest issues a barrier. It's
>> relatively new. I have a patch that introduces cache=volatile somewhere.
>>
> qcow2 does not issue any fsyncs by itself, it only passes through the
> guest's ones. The only other places issuing fsyncs are committing a COW
> image back to the base image, and on migration.
>
Shouldn't it do that then? What's the point of fsyncing guest data if
qcow2 metadata is volatile?
--
error compiling committee.c: too many arguments to function
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Poor performance with qemu
2010-04-08 15:36 ` Avi Kivity
@ 2010-04-08 15:39 ` Christoph Hellwig
0 siblings, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2010-04-08 15:39 UTC (permalink / raw)
To: Avi Kivity; +Cc: Christoph Hellwig, Chris Mason, Diego Calleja, linux-btrfs
On Thu, Apr 08, 2010 at 06:36:15PM +0300, Avi Kivity wrote:
> Shouldn't it do that then? What's the point of fsyncing guest data if
> qcow2 metadata is volatile?
Not my territory - but in the end getting qcow2 as-is solid in the face
of crashes will be an uphill battle - I'd rather recommend not using it.
^ permalink raw reply [flat|nested] 12+ messages in thread
* Re: Poor performance with qemu
2010-04-08 15:26 ` Chris Mason
2010-04-08 15:28 ` Avi Kivity
@ 2010-04-08 15:32 ` Christoph Hellwig
1 sibling, 0 replies; 12+ messages in thread
From: Christoph Hellwig @ 2010-04-08 15:32 UTC (permalink / raw)
To: Chris Mason, Avi Kivity, Diego Calleja, linux-btrfs
On Thu, Apr 08, 2010 at 11:26:15AM -0400, Chris Mason wrote:
> With O_DIRECT the writeback rates are very reasonable. I'll work up a
> way to pass the barrier down from the guest to btrfs to force logging of
> updated metadata when required.
Barriers are implemented in the guest kernel using queue drains and
cache flush commands. Qemu maps the cache flush to fdatasync.
> > >Once the O_DIRECT read patch is in, you can switch to that, or tell qemu
> > >to use a writeback cache instead.
> >
> > Even with writeback qemu will issue a lot of fsyncs.
>
> Oh, I didn't see that when I was testing, when does it fsync?
When the guest issues a barrier (and it's actually an fdatasync).
^ permalink raw reply [flat|nested] 12+ messages in thread
end of thread, other threads:[~2010-04-08 15:39 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2010-03-29 19:21 Poor performance with qemu Markus Suvanto
-- strict thread matches above, loose matches on Subject: below --
2010-03-28 15:18 Diego Calleja
2010-03-30 12:56 ` Chris Mason
2010-04-08 14:58 ` Avi Kivity
2010-04-08 15:21 ` Gordan Bobic
2010-04-08 15:26 ` Chris Mason
2010-04-08 15:28 ` Avi Kivity
2010-04-08 15:32 ` Chris Mason
2010-04-08 15:34 ` Christoph Hellwig
2010-04-08 15:36 ` Avi Kivity
2010-04-08 15:39 ` Christoph Hellwig
2010-04-08 15:32 ` Christoph Hellwig