From: Max Reitz <mreitz@redhat.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: qemu-block@nongnu.org, Kevin Wolf <kwolf@redhat.com>,
qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [RFC] qemu-img: Drop BLK_ZERO from convert
Date: Wed, 28 Feb 2018 21:11:32 +0100
Message-ID: <2fddac03-cde4-8d83-7651-a3928398e909@redhat.com>
In-Reply-To: <f0f0f6a0-bf09-8561-5c81-87d38ebef168@redhat.com>
On 2018-02-28 19:08, Max Reitz wrote:
> On 2018-02-27 17:17, Stefan Hajnoczi wrote:
>> On Mon, Feb 26, 2018 at 06:03:13PM +0100, Max Reitz wrote:
>>> There are filesystems (among which is tmpfs) that have a hard time
>>> reporting allocation status. That is definitely a bug in them.
>>>
>>> However, there is no good reason why qemu-img convert should query the
>>> allocation status in the first place. It does zero detection by itself
>>> anyway, so we can detect unallocated areas ourselves.
>>>
>>> Furthermore, if a filesystem driver has any sense, reading unallocated
>>> data should take just as much time as lseek(SEEK_DATA) + memset(). So
>>> the only overhead we introduce by dropping the manual lseek() call is a
>>> memset() in the driver and a buffer_is_zero() in qemu-img, both of which
>>> should be relatively quick.
>>
>> This makes sense. Which file systems did you test this patch on?
>
> On tmpfs and xfs, so far.
>
>> XFS, ext4, and tmpfs would be a good minimal test set to prove the
>> patch. Perhaps with two input files:
>> 1. A file that is mostly filled with data.
>> 2. A file that is only sparsely populated with data.
>
> And probably with vmdk, which (by default) forbids querying any areas
> larger than 64 kB.
>
>> The time taken should be comparable with the time before this patch.
>
> Yep, I'll do some benchmarks.
And the results are in. I've created 2 GB images in various formats on
various filesystems; into each, I've either written 64 kB of data every
32 MB ("sparse") or written everything except for 64 kB every 32 MB
("full"). Then I've converted them to null-co:// and measured the
(real) time through "time". (Script is attached.)
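Roughly, each case does something like the following (just a sketch;
the mount point, the position of the 64 kB within each 32 MB chunk,
and the exact null-co invocation are my assumptions here -- the
attached test.sh is what actually ran):

  #!/bin/bash
  # Benchmark sketch: $1 = image format (raw/qcow2/vmdk),
  # $2 = fill mode (sparse/full)
  fmt=$1
  mode=$2
  img=/mnt/test/test.$fmt

  qemu-img create -f "$fmt" "$img" 2G

  for ((ofs = 0; ofs < 2 * 1024 * 1024 * 1024; ofs += 32 * 1024 * 1024)); do
      if [ "$mode" = sparse ]; then
          # "sparse": 64 kB of data at the start of every 32 MB chunk
          qemu-io -f "$fmt" -c "write $ofs 64k" "$img" > /dev/null
      else
          # "full": the whole 32 MB chunk except for those 64 kB
          qemu-io -f "$fmt" \
              -c "write $((ofs + 65536)) $((32 * 1024 * 1024 - 65536))" \
              "$img" > /dev/null
      fi
  done

  # Convert into the null-co driver (which discards all writes);
  # -n skips target creation, and --target-image-opts lets us size
  # the null device to cover the 2 GB input.
  time qemu-img convert -n --target-image-opts -f "$fmt" "$img" \
      driver=null-co,size=2147483648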
I've attached the raw results from before and after this patch.
Usually I did six runs for each case and dropped the most extreme
outlier -- except for full vmdk images, where I did only one run per
case because creating those images takes a very long time.
Here are the differences from before to after:
sparse raw on tmpfs: + 19 % (436 ms to 520 ms)
sparse qcow2 on tmpfs: - 31 % (435 ms to 301 ms)
sparse vmdk on tmpfs: + 37 % (214 ms to 294 ms)
sparse raw on xfs: + 69 % (452 ms to 762 ms)
sparse qcow2 on xfs: - 34 % (462 ms to 304 ms)
sparse vmdk on xfs: + 42 % (210 ms to 298 ms)
sparse raw on ext4: +360 % (144 ms to 655 ms)
sparse qcow2 on ext4: +120 % (147 ms to 330 ms)
sparse vmdk on ext4: + 16 % (253 ms to 293 ms)
full raw on tmpfs: - 9 % (437 ms to 398 ms)
full qcow2 on tmpfs: - 75 % (1.63 s to 403 ms)
full vmdk on tmpfs: -100 % (10 min to 767 ms)
full raw on xfs: - 1 % (407 ms to 404 ms, insignificant)
full qcow2 on xfs: - 1 % (410 ms to 404 ms, insignificant)
full vmdk on xfs: - 33 % (1.05 s to 695 ms)
full raw on ext4: - 2 % (308 ms to 301 ms, insignificant)
full qcow2 on ext4: + 2 % (307 ms to 312 ms, insignificant)
full vmdk on ext4: - 76 % (3.53 s to 839 ms)
So... It's more extreme than I had hoped, that's for sure. What I
conclude from this is:
(1) This patch is generally good for nearly fully allocated images. In
the worst case (on well-behaving filesystems with well-optimized image
formats) it changes nothing. In the best case, conversion time is
reduced drastically.
(2) For sparse raw images, this is absolutely devastating. Reading them
now takes more than twice as long (on ext4), or nearly twice as long
(on xfs), as reading a fully allocated image. So much for "if a
filesystem driver has any sense".
(2a) It might be worth noting that on xfs, reading the sparse file took
longer even before this patch...
(3) qcow2 is different: It benefits from this patch on tmpfs and xfs
(note that reading a sparse qcow2 file took longer than reading a full
qcow2 file before this patch!), but it gets pretty much destroyed on
ext4, too.
(4) As for sparse vmdk images... Reading them takes longer, but it's
still faster than reading full vmdk images, so that's not horrible.
So there we are. I was wrong about filesystem drivers having any sense,
so this patch can indeed have a hugely negative impact.
I would argue that conversion time for full images is more important,
because that's probably the main use case; but the thing is that here
this patch only helps for tmpfs and vmdk. We don't care too much about
vmdk, and the fact that tmpfs takes so long simply is a bug.
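(For the record, the allocation information itself is easy to inspect:
qemu-img map goes through the same lseek(SEEK_HOLE)/lseek(SEEK_DATA)
path that convert used before this patch. A sketch with a made-up file
name -- note that on tmpfs the information does come back correctly,
obtaining it is just slow for large files:

  $ truncate -s 2G /tmp/sparse.raw
  $ qemu-img map --output=json -f raw /tmp/sparse.raw
  [{ "start": 0, "length": 2147483648, "depth": 0,
     "zero": true, "data": false}]

The exact fields vary by version; the point is that the hole is
reported as "data": false.)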
I guess the xfs/tmpfs results would still be in a range where they are
barely acceptable (because it's mainly a qcow2 vs. raw tradeoff), but
the ext4 horrors probably make this patch a no-go in its current form.
In any case it's interesting to see that even the current qemu-img
convert takes longer to read sparsely allocated qcow2/raw files from xfs
than fully allocated images...
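You can actually watch convert issue those queries. With something like
the following (file names made up), strace counts the lseek() calls;
before this patch, expect roughly one SEEK_DATA/SEEK_HOLE pair per
extent of the source:

  strace -f -e trace=lseek -c \
      qemu-img convert -O raw sparse.raw out.raw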
So I guess I'll re-send this patch where the change is done only for
-S 0.
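That is, the block-status query would be skipped only when zero
detection is disabled anyway, i.e. for invocations like this one (file
names made up):

  # -S 0 tells qemu-img convert not to scan the input for zero or
  # unallocated areas at all (the destination ends up fully
  # allocated), so querying allocation status buys nothing there.
  qemu-img convert -S 0 -O qcow2 input.img output.qcow2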
Max
[-- Attachment #1.2: test.sh --]
[-- Attachment #1.3: test-before --]
Testing: raw on tmpfs (sparse): 0.436 ±0.012
real 0.426 0.444 0.447 0.442 (0.500) 0.421
Testing: qcow2 on tmpfs (sparse): 0.435 ±0.018
real 0.421 0.459 0.448 0.430 (0.776) 0.415
Testing: vmdk on tmpfs (sparse): 0.214 ±0.004
real 0.216 0.218 0.210 0.215 (0.222) 0.209
Testing: raw on xfs (sparse): 0.452 ±0.028
real 0.430 0.468 0.493 0.442 (0.640) 0.428
Testing: qcow2 on xfs (sparse): 0.462 ±0.016
real 0.445 0.475 0.478 0.445 0.466 (0.437)
Testing: vmdk on xfs (sparse): 0.210 ±0.005
real 0.214 0.213 (0.222) 0.203 0.207 0.213
Testing: raw on ext4 (sparse): 0.144 ±0.023
real 0.119 0.181 0.142 0.143 0.137 (0.296)
Testing: qcow2 on ext4 (sparse): 0.147 ±0.016
real 0.139 0.138 0.142 0.175 0.140 (0.301)
Testing: vmdk on ext4 (sparse): 0.253 ±0.004
real 0.247 (0.273) 0.257 0.252 0.254 0.254
Testing: raw on tmpfs (full): 0.437 ±0.012
real (0.477) 0.436 0.434 0.432 0.457 0.424
Testing: qcow2 on tmpfs (full): 1.630 ±0.021
real 1.618 1.631 (1.684) 1.625 1.665 1.612
Testing: vmdk on tmpfs (full): 612.379 (n=1)
real 612.379
Testing: raw on xfs (full): 0.407 ±0.011
real 0.400 0.413 0.402 (0.495) 0.424 0.397
Testing: qcow2 on xfs (full): 0.410 ±0.009
real 0.403 0.406 0.413 (0.485) 0.423 0.403
Testing: vmdk on xfs (full): 1.048 (n=1)
real 1.048
Testing: raw on ext4 (full): 0.308 ±0.007
real 0.299 (0.419) 0.315 0.311 0.301 0.314
Testing: qcow2 on ext4 (full): 0.307 ±0.003
real 0.307 0.306 (0.490) 0.308 0.303 0.312
Testing: vmdk on ext4 (full): 3.528 (n=1)
real 3.528
[-- Attachment #1.4: test-after --]
Testing: raw on tmpfs (sparse): 0.520 ±0.022
real 0.496 0.535 0.536 0.538 (0.569) 0.495
Testing: qcow2 on tmpfs (sparse): 0.301 ±0.008
real 0.298 0.297 (0.338) 0.292 0.307 0.312
Testing: vmdk on tmpfs (sparse): 0.294 ±0.002
real 0.295 0.294 0.292 (0.287) 0.293 0.297
Testing: raw on xfs (sparse): 0.762 ±0.026
real (0.713) 0.770 0.769 0.759 0.790 0.720
Testing: qcow2 on xfs (sparse): 0.304 ±0.003
real 0.309 (0.312) 0.301 0.302 0.306 0.304
Testing: vmdk on xfs (sparse): 0.298 ±0.009
real 0.294 0.312 0.293 0.301 0.290 (0.345)
Testing: raw on ext4 (sparse): 0.655 ±0.011
real 0.664 (0.624) 0.663 0.639 0.648 0.659
Testing: qcow2 on ext4 (sparse): 0.330 ±0.035
real 0.293 0.333 (0.391) 0.374 0.353 0.297
Testing: vmdk on ext4 (sparse): 0.293 ±0.004
real 0.290 0.289 0.293 0.297 (0.285) 0.296
Testing: raw on tmpfs (full): 0.398 ±0.004
real 0.397 (0.406) 0.396 0.397 0.394 0.404
Testing: qcow2 on tmpfs (full): 0.403 ±0.004
real 0.403 0.408 0.406 0.399 (0.538) 0.400
Testing: vmdk on tmpfs (full): 0.767 (n=1)
real 0.767
Testing: raw on xfs (full): 0.404 ±0.004
real 0.411 0.402 0.401 0.402 0.402 (0.421)
Testing: qcow2 on xfs (full): 0.404 ±0.004
real 0.402 (0.415) 0.410 0.401 0.401 0.404
Testing: vmdk on xfs (full): 0.695 (n=1)
real 0.695
Testing: raw on ext4 (full): 0.301 ±0.004
real 0.304 0.303 0.305 0.299 0.296 (0.322)
Testing: qcow2 on ext4 (full): 0.312 ±0.008
real (0.325) 0.310 0.319 0.304 0.305 0.321
Testing: vmdk on ext4 (full): 0.839 (n=1)
real 0.839