From: Stefan Hajnoczi <stefanha@gmail.com>
To: Alberto Garcia <berto@igalia.com>
Cc: Kevin Wolf <kwolf@redhat.com>,
Stefan Hajnoczi <stefanha@redhat.com>,
qemu-devel <qemu-devel@nongnu.org>,
qemu block <qemu-block@nongnu.org>
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH] qcow2: do lazy allocation of the L2 cache
Date: Fri, 24 Apr 2015 13:37:21 +0100
Message-ID: <CAJSP0QU4_oL78aAsf_--+2kTAj_uLuGuCiHL81N3wiLzPjmJCg@mail.gmail.com>
In-Reply-To: <w51d22tye7i.fsf@maestria.local.igalia.com>
On Fri, Apr 24, 2015 at 12:10 PM, Alberto Garcia <berto@igalia.com> wrote:
> On Fri 24 Apr 2015 11:52:14 AM CEST, Kevin Wolf <kwolf@redhat.com> wrote:
>
>>> The posix_memalign() call wastes memory. I compared:
>>>
>>> posix_memalign(&memptr, 65536, 2560 * 65536);
>>> memset(memptr, 0, 2560 * 65536);
>>>
>>> with:
>>>
>>> for (i = 0; i < 2560; i++) {
>>> posix_memalign(&memptr, 65536, 65536);
>>> memset(memptr, 0, 65536);
>>> }
>>
>> 64k alignment is too much, in practice you need 512b or 4k, which
>> probably wastes a lot less memory.
>
> My tests were with 512b and 4k and the overhead was around 4k per page
> (hence 2560 * 4 = 10240, the 10MB I was talking about).
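
(As a side note, here is a rough standalone version of that comparison. It is
only a sketch, not the program actually used: the sizes and the 64k alignment
are taken from the snippets quoted above, and reading VmRSS from
/proc/self/status is just a convenient way to eyeball the overhead, which will
vary with the allocator and the requested alignment.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return the process's resident set size in KiB (Linux-specific). */
static long vm_rss_kib(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[256];
    long kib = -1;

    while (f && fgets(line, sizeof(line), f)) {
        if (sscanf(line, "VmRSS: %ld kB", &kib) == 1) {
            break;
        }
    }
    if (f) {
        fclose(f);
    }
    return kib;
}

int main(int argc, char **argv)
{
    void *memptr;
    long before = vm_rss_kib();
    int i;

    if (argc > 1 && !strcmp(argv[1], "big")) {
        /* one allocation covering all 2560 entries */
        if (posix_memalign(&memptr, 65536, 2560 * 65536) == 0) {
            memset(memptr, 0, 2560 * 65536);
        }
    } else {
        /* one allocation per entry; never freed, as in the test above */
        for (i = 0; i < 2560; i++) {
            if (posix_memalign(&memptr, 65536, 65536) == 0) {
                memset(memptr, 0, 65536);
            }
        }
    }

    printf("VmRSS grew by %ld KiB\n", vm_rss_kib() - before);
    return 0;
}

Running it once with "big" and once without should make the per-allocation
bookkeeping overhead visible.
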
>
>> But I just looked at the qcow2 cache code and you're right anyway.
>> Allocating one big block instead of many small allocations in a loop
>> looks like a good idea either way.
>
> The problem is that I had the idea to make the cache dynamic.
>
> Consider the scenario [A] <- [B], with a virtual size of 1TB and [B] a
> newly created snapshot. The L2 cache size is 128MB for each image, you
> read a lot of data from the disk and the cache from [A] starts to fill
> up (at this point [B] is mostly empty so you get all the data from
> [A]). Then you start to write data into [B], and now its L2 cache starts
> to fill up as well.
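
(Side note for anyone following along: with the default 64 KiB cluster size
each 8-byte L2 entry maps one cluster, so fully covering a 1 TB image takes
1 TB / 64 KiB * 8 bytes = 128 MB of L2 tables, which is where that per-image
figure comes from.)
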
>
> After a while you're going to have lots of cache entries in [A] that are
> not needed anymore because now the data for those clusters is in [B].
>
> I think it would be nice to have a way to free unused cache entries
> after a while.
Do you think mmap plus a periodic timer would work?
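
To make that more concrete, something along these lines is what I have in
mind. This is only a sketch with made-up names and a flat per-entry timestamp
array, not actual QEMU code; the real thing would hook into the existing
qcow2 cache structures and QEMU's timers rather than plain time():

#define _DEFAULT_SOURCE
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

#define ENTRY_SIZE   65536       /* one cached L2 table */
#define NUM_ENTRIES  2048        /* 128 MB of cache in total */
#define IDLE_SECONDS 60

struct l2_cache {
    uint8_t *base;                 /* anonymous mapping backing all entries */
    time_t last_use[NUM_ENTRIES];  /* 0 means the entry holds no data */
};

static int l2_cache_init(struct l2_cache *c)
{
    c->base = mmap(NULL, (size_t)NUM_ENTRIES * ENTRY_SIZE,
                   PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    memset(c->last_use, 0, sizeof(c->last_use));
    return c->base == MAP_FAILED ? -1 : 0;
}

/* Touching an entry records when it was last needed. */
static uint8_t *l2_cache_entry(struct l2_cache *c, int i)
{
    c->last_use[i] = time(NULL);
    return c->base + (size_t)i * ENTRY_SIZE;
}

/* Run periodically from a timer: give idle entries' pages back to the
 * kernel.  The addresses stay valid; the next access faults in zero
 * pages, so the cache must treat a released entry as a miss and reload
 * the L2 table from the image. */
static void l2_cache_release_idle(struct l2_cache *c)
{
    time_t now = time(NULL);
    int i;

    for (i = 0; i < NUM_ENTRIES; i++) {
        if (c->last_use[i] && now - c->last_use[i] > IDLE_SECONDS) {
            madvise(c->base + (size_t)i * ENTRY_SIZE, ENTRY_SIZE,
                    MADV_DONTNEED);
            c->last_use[i] = 0;
        }
    }
}

The memory footprint would then track the working set instead of only ever
growing, at the cost of re-reading L2 tables that have gone idle.
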
I'm hesitant about changes like this because they make QEMU more
complex, slow down the guest, and make the memory footprint volatile.
But if a simple solution addresses the problem, then I'd be happy.
Stefan