From: Stefan Hajnoczi <stefanha@gmail.com>
To: Alberto Garcia <berto@igalia.com>
Cc: qemu-devel@nongnu.org, Kevin Wolf <kwolf@redhat.com>,
	qemu-block@nongnu.org, Max Reitz <mreitz@redhat.com>
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH RFC 0/1] Allow storing the qcow2 L2 cache in disk
Date: Mon, 12 Dec 2016 16:53:00 +0000
Message-ID: <20161212165300.GO4074@stefanha-x1.localdomain>
In-Reply-To: <cover.1481290956.git.berto@igalia.com>

On Fri, Dec 09, 2016 at 03:47:03PM +0200, Alberto Garcia wrote:
> As we all know, one of the main things that can make the qcow2 format
> slow is the need to load entries from the L2 table in order to map a
> guest offset (on the virtual disk) to a host offset (on the qcow2
> image).
> 
> We have an L2 cache to deal with this, and as long as the cache is big
> enough then the performance is comparable to that of a raw image.
> 
> For large qcow2 images the amount of RAM we need in order to cache all
> L2 tables can be big: each 8-byte L2 entry maps one cluster, so with
> the default 64KB cluster size that works out to 128MB of cache per TB
> of disk image. In order to mitigate this we have
> a setting that allows the user to clean unused cache entries after a
> certain interval of time. This works fine most of the time, although
> we can still have peaks of RAM usage if there's a lot of I/O going on
> in one or more VMs.
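> 
> For example, both knobs are exposed as qcow2 driver options on the
> command line (the numbers here are just illustrative):
> 
>     -drive file=hd.qcow2,l2-cache-size=8M,cache-clean-interval=900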
> 
> In some scenarios, however, there's an alternative: if the qcow2
> image is stored on a slow backend (e.g. an HDD), we can save memory
> by putting the L2 cache on a faster one (an SSD) instead of keeping
> it in RAM.
> 
> I have been running some tests with exactly that scenario and the
> results look good: storing the cache on disk gives roughly the same
> performance as storing it in memory.
> 
> |---------------------+-------------------+---------------------|
> |                     | Random 4k reads   | Sequential 4k reads |
> |                     | Throughput | IOPS | Throughput |  IOPS  |
> |---------------------+------------+------+------------+--------|
> | Cache in memory/SSD | 406 KB/s   |   99 | 84 MB/s    |  21000 |
> | Default cache (1MB) | 200 KB/s   |   60 | 83 MB/s    |  21000 |
> | No cache            | 200 KB/s   |   49 | 56 MB/s    |  14000 |
> |---------------------+------------+------+------------+--------|
> 
> I'm including the patch that I used to get these results. This is the
> simplest approach that I could think of.
> 
> Opinions, questions?

The root of the performance problem is the L2 table on-disk format,
which also happens to be used as the in-memory L2 table format.  It does
not scale to large disk images.

The simplest tweak is to use larger cluster sizes.  64 KB has been the
default for a long time and it may be time to evaluate the performance
effects of increasing it.  I suspect this doesn't solve the problem by
itself, though; ultimately we need to decouple metadata scalability
from the cluster size...
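
For reference, that knob already exists at image creation time (qcow2
accepts cluster sizes up to 2 MB):

qemu-img create -f qcow2 -o cluster_size=1M test.qcow2 1T

With 1 MB clusters a 1 TB image needs only 8 MB of L2 metadata
(8 bytes per cluster), at the cost of coarser allocation and
copy-on-write granularity.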

Is it time for a new on-disk representation?  Modern file systems seem
to use extent trees instead of offset tables.  That brings a lot of
complication, because a good B-tree implementation would require quite
a lot of new code.
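
To make that concrete, a single extent record maps a whole run of
contiguous clusters, so metadata grows with the number of allocated
ranges rather than with the image size.  A minimal sketch (illustrative
only, not a proposed qcow2 structure):

typedef struct {
    uint64_t guest_offset;  /* start of the range on the virtual disk */
    uint64_t host_offset;   /* where the range starts in the image file */
    uint64_t length;        /* bytes covered by this single record */
} Extent;

A B-tree keyed on guest_offset would then locate the covering extent
in O(log n) steps no matter how large the image is.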

Maybe a more modest change to the on-disk representation could solve
most of the performance problem.  In a very sparsely allocated L2 table
something like run-length encoding is more space-efficient than an
offset table.  In a very densely allocated L2 table it may be possible
to choose a "base offset" and then use much smaller offset entries
relative to the base.  For example:

typedef struct {
    uint64_t base_offset;   /* absolute host offset the entries are relative to */
    uint16_t rel_offset[];  /* in cluster units; covers 4 GB with 64 KB clusters */
} L2TableRelative;

/* Mapping L2 index i back to an absolute host offset (the cast guards
 * against integer overflow if cluster_size is a plain int): */
uint64_t offset = l2->base_offset + (uint64_t)l2->rel_offset[i] * cluster_size;
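
The sparse case could be sketched in a similar spirit as a sorted list
of runs (again, just illustrative names):

typedef struct {
    uint32_t guest_index;   /* first guest cluster covered by this run */
    uint32_t nb_clusters;   /* length of the run, in clusters */
    uint64_t host_offset;   /* host offset of the run's first cluster */
} L2Run;

An L2 table with only a handful of allocated ranges then costs a few
16-byte records instead of 8 bytes for every cluster it covers.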

A final option is to leave the on-disk representation alone but
convert to an efficient in-memory representation when loading from disk.
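
For example, the loader could coalesce contiguous entries into an
in-memory extent list.  A sketch, assuming a plain uint64_t offset
table where zero means unallocated; ExtentList, Extent and
extent_list_append are hypothetical helpers:

static void l2_to_extents(const uint64_t *l2, int n, uint64_t cluster_size,
                          ExtentList *out)
{
    int i = 0;
    while (i < n) {
        if (l2[i] == 0) {           /* unallocated cluster, skip it */
            i++;
            continue;
        }
        Extent e = {
            .guest_offset = (uint64_t)i * cluster_size,
            .host_offset  = l2[i],
            .length       = cluster_size,
        };
        /* Grow the extent while the host clusters stay contiguous. */
        while (i + 1 < n && l2[i + 1] == l2[i] + cluster_size) {
            e.length += cluster_size;
            i++;
        }
        extent_list_append(out, &e);
        i++;
    }
}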

Stefan
