From: Christopher Pereira <kripper@imatronix.cl>
To: Kevin Wolf <kwolf@redhat.com>
Cc: qemu-devel@nongnu.org, qemu-block@nongnu.org
Subject: Re: qcow2 performance: read-only IO on the guest generates high write IO on the host
Date: Thu, 9 Sep 2021 07:23:56 -0300 [thread overview]
Message-ID: <d28a48c8-48af-e5c7-b333-071f648b7b79@imatronix.cl> (raw)
In-Reply-To: <YSUSNCR6kZVnCBKF@redhat.com>
On 24-08-2021 11:37, Kevin Wolf wrote:
> [ Cc: qemu-block ]
>
> Am 11.08.2021 um 13:36 hat Christopher Pereira geschrieben:
>> Hi,
>>
>> I'm reading a directory with 5,000,000 files (2.4 GB) inside a guest using
>> "find | grep -c".
>>
>> On the host I saw high write IO (40 MB/s!) for over an hour, observed with
>> virt-top.
>>
>> I later repeated the read-only operation inside the guest and no additional
>> data was written on the host. This time the operation took only a few seconds.
>>
>> I believe QEMU was creating some kind of cache or metadata map the first
>> time I accessed the inodes.
> No, at least in theory, QEMU shouldn't allocate anything when you're
> just reading.
Hmm...interesting.
> Are you sure that this isn't activity coming from your guest OS?
Yes. iotop was showing only read IOs on the guest, while on the host iotop
and virt-top were showing heavy write IOs for hours.
Stopping the "find" command on the guest also stopped the write IOs on
the host.
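For what it's worth, one way to confirm the writes really come from the QEMU process rather than something else on the host is to sample its I/O counters in /proc. A minimal sketch (the VM name "myguest" and the sampling interval are placeholders for the real setup):

```shell
# Sample the QEMU process's cumulative I/O counters from /proc.
# 'myguest' is a placeholder for the actual VM name.
QEMU_PID=$(pgrep -f 'qemu-system.*myguest')
for i in 1 2 3; do
    # write_bytes counts bytes the process asked the kernel to write out
    grep -E '^(read_bytes|write_bytes)' "/proc/$QEMU_PID/io"
    sleep 5
done
```

If write_bytes keeps climbing while the guest only reads, the writes are attributable to QEMU itself.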
>> But I wonder why the cache or metadata map wasn't available the first time
>> and why QEMU had to recreate it?
>>
>> The VM has "compressed base <- snap 1" and base was converted without
>> prealloc.
>>
>> Is it because we created the base using convert without metadata prealloc
>> and so the metadata map got lost?
>>
>> I will do some experiments soon using convert + metadata prealloc and
>> will probably find out myself, but I will be happy to read your comments
>> and gain some additional insights.
>> If the problem persists, I will try again without compression.
> What were the results of your experiments? Is the behaviour related to
> any of these options?
I will do the experiments and report back.
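Roughly, the plan is something like the following, without compression for now (image file names are placeholders):

```shell
# Rebuild the base with preallocated qcow2 metadata
# (old-base.qcow2 / base.qcow2 / snap1.qcow2 are placeholder names).
qemu-img convert -O qcow2 -o preallocation=metadata old-base.qcow2 base.qcow2

# Recreate the snapshot on top of the new base
qemu-img create -f qcow2 -b base.qcow2 -F qcow2 snap1.qcow2
```

Then repeat the "find" workload in the guest and compare the host-side write IO against the non-preallocated base.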
It's also strange that the second time I ran the "find" command, I
saw no more write IOs and it took only seconds instead of hours.
I was assuming QEMU was creating some kind of map or cache on the
snapshot for the content present in the base, but now I'm even more curious.
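Next time I'll also sample whether the snapshot file itself grows during the first run. A rough sketch (snap1.qcow2 is an assumed file name; qemu-img info with -U/--force-share could additionally report allocation on the live image):

```shell
# Sample the on-disk size of the active snapshot while the guest
# runs "find"; snap1.qcow2 is a placeholder name.
for i in 1 2 3; do
    du --block-size=1M snap1.qcow2
    sleep 60
done
```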
Thread overview: 3+ messages
2021-08-11 11:36 qcow2 performance: read-only IO on the guest generates high write IO on the host Christopher Pereira
2021-08-24 15:37 ` Kevin Wolf
2021-09-09 10:23 ` Christopher Pereira [this message]