From: Alberto Garcia <berto@igalia.com>
To: qemu-devel@nongnu.org
Cc: qemu-block@nongnu.org, Kevin Wolf <kwolf@redhat.com>,
Max Reitz <mreitz@redhat.com>, Alberto Garcia <berto@igalia.com>
Subject: [Qemu-devel] [PATCH RFC 0/1] Allow storing the qcow2 L2 cache in disk
Date: Fri, 9 Dec 2016 15:47:03 +0200
Message-ID: <cover.1481290956.git.berto@igalia.com>
Hi all,
as we all know, one of the main things that can make the qcow2 format
slow is the need to load entries from the L2 table in order to map a
guest offset (on the virtual disk) to a host offset (on the qcow2
image).
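To make that concrete, the mapping is a simple two-level lookup. The
snippet below is only a sketch of the arithmetic described in the qcow2
spec (not QEMU code), assuming the default 64KB clusters and 8-byte L2
entries:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CLUSTER_BITS 16                   /* 64KB clusters */
    #define L2_BITS      (CLUSTER_BITS - 3)   /* 8192 8-byte entries per L2 table */

    int main(void)
    {
        uint64_t guest_offset = 0x123456789ULL;   /* arbitrary example offset */

        uint64_t in_cluster = guest_offset & ((1ULL << CLUSTER_BITS) - 1);
        uint64_t l2_index   = (guest_offset >> CLUSTER_BITS) & ((1ULL << L2_BITS) - 1);
        uint64_t l1_index   = guest_offset >> (CLUSTER_BITS + L2_BITS);

        /* The L1 entry selects an L2 table, the L2 entry gives the host
         * cluster, and in_cluster is the position inside that cluster. */
        printf("L1 %" PRIu64 " / L2 %" PRIu64 " / offset %" PRIu64 "\n",
               l1_index, l2_index, in_cluster);
        return 0;
    }

The point is that every read or write needs the relevant L2 entry, so if
it is not cached we pay an extra disk access per request.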
We have an L2 cache to deal with this, and as long as the cache is big
enough the performance is comparable to that of a raw image.
For large qcow2 images the amount of RAM we need in order to cache all
L2 tables can be big (128MB per TB of disk image if we're using the
default cluster size of 64KB). In order to solve this problem we have
a setting that allows the user to clean unused cache entries after a
certain interval of time. This works fine most of the time, although
we can still have peaks of RAM usage if there's a lot of I/O going on
in one or more VMs.
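For reference, the 128MB/TB figure follows directly from the format:
with 64KB clusters each L2 table is itself one cluster and holds 8192
8-byte entries, so each table maps 512MB of guest data:

    L2 entry size:        8 bytes
    L2 table size:        1 cluster = 64KB  ->  8192 entries
    Guest data per table: 8192 * 64KB = 512MB
    Tables per TB:        1TB / 512MB = 2048
    L2 metadata per TB:   2048 * 64KB = 128MB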
In some scenarios, however, there's an alternative: if the qcow2 image
is stored on a slow backend (e.g. an HDD), we could save memory by
putting the L2 cache on a faster one (an SSD) instead of in RAM.
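One way to do that (this is only a sketch of the general idea, not
necessarily what the patch does; the helper and its parameters are made
up) is to back the cache tables with an mmap'ed file on the fast device
instead of anonymous memory, so the kernel keeps only the hot part of
the cache in RAM and pages the rest out to the file:

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical helper: allocate 'size' bytes of cache backing either
     * in RAM or in a file on a fast disk (e.g. an SSD). Sketch only. */
    static void *alloc_cache_backing(const char *path, size_t size)
    {
        if (!path) {
            return malloc(size);            /* plain RAM, as today */
        }

        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
        if (fd < 0) {
            return NULL;
        }
        if (ftruncate(fd, size) < 0) {
            close(fd);
            return NULL;
        }

        /* Cache data lives in the page cache backed by this file, so the
         * kernel can write cold pages out instead of keeping them in RAM. */
        void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);                          /* the mapping keeps the file alive */
        return p == MAP_FAILED ? NULL : p;
    }

The appeal of something along these lines is that the rest of the cache
code doesn't need to care where the memory comes from.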
I have been running some tests with exactly that scenario and the
results look good: storing the cache on disk gives roughly the same
performance as storing it in memory.
|---------------------+------------+------+------------+--------|
|                     |  Random 4k reads  | Sequential 4k reads |
|                     | Throughput | IOPS | Throughput |  IOPS  |
|---------------------+------------+------+------------+--------|
| Cache in memory/SSD |   406 KB/s |   99 |    84 MB/s |  21000 |
| Default cache (1MB) |   200 KB/s |   60 |    83 MB/s |  21000 |
| No cache            |   200 KB/s |   49 |    56 MB/s |  14000 |
|---------------------+------------+------+------------+--------|
I'm including the patch that I used to get these results. This is the
simplest approach that I could think of.
Opinions, questions?
Thanks,
Berto
Alberto Garcia (1):
qcow2: Allow storing the qcow2 L2 cache in disk
block/qcow2-cache.c | 56 +++++++++++++++++++++++++++++++++++++++++++++--------
block/qcow2.c | 11 +++++++++--
block/qcow2.h | 3 ++-
3 files changed, 59 insertions(+), 11 deletions(-)
--
2.10.2
Thread overview (16+ messages):
2016-12-09 13:47 Alberto Garcia [this message]
2016-12-09 13:47 ` [Qemu-devel] [PATCH RFC 1/1] qcow2: Allow storing the qcow2 L2 cache in disk Alberto Garcia
2016-12-09 14:05 ` [Qemu-devel] [PATCH RFC 0/1] " no-reply
2016-12-09 14:18 ` Kevin Wolf
2016-12-09 15:00 ` Alberto Garcia
2016-12-12 12:10 ` Kevin Wolf
2016-12-09 14:21 ` Max Reitz
2016-12-12 14:13 ` Alberto Garcia
2016-12-13 8:02 ` Max Reitz
2016-12-13 10:16 ` Fam Zheng
2016-12-13 12:29 ` Kevin Wolf
2016-12-13 13:04 ` Fam Zheng
2016-12-13 12:55 ` Alberto Garcia
2016-12-13 13:44 ` Max Reitz
2016-12-13 15:38 ` Alberto Garcia
2016-12-12 16:53 ` [Qemu-devel] [Qemu-block] " Stefan Hajnoczi