From: Kevin Wolf <kwolf@redhat.com>
To: Max Reitz <mreitz@redhat.com>
Cc: Anton Nefedov <anton.nefedov@virtuozzo.com>,
Alberto Garcia <berto@igalia.com>,
"qemu-block@nongnu.org" <qemu-block@nongnu.org>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>,
Stefan Hajnoczi <stefanha@redhat.com>
Subject: Re: [RFC 0/3] block/file-posix: Work around XFS bug
Date: Fri, 25 Oct 2019 16:35:18 +0200 [thread overview]
Message-ID: <20191025143518.GH7275@localhost.localdomain> (raw)
In-Reply-To: <0f75cbcf-e6c7-c74c-972b-22e7760a8b5c@redhat.com>
On 25.10.2019 at 16:19, Max Reitz wrote:
> On 25.10.19 15:56, Vladimir Sementsov-Ogievskiy wrote:
> > 25.10.2019 16:40, Vladimir Sementsov-Ogievskiy wrote:
> >> 25.10.2019 12:58, Max Reitz wrote:
> >>> Hi,
> >>>
> >>> It seems to me that there is a bug in Linux’s XFS kernel driver, as
> >>> I’ve explained here:
> >>>
> >>> https://lists.nongnu.org/archive/html/qemu-block/2019-10/msg01429.html
> >>>
> >>> In combination with our commit c8bb23cbdbe32f, this may lead to guest
> >>> data corruption when using qcow2 images on XFS with aio=native.
> >>>
> >>> We can’t wait until the XFS kernel driver is fixed; we should work
> >>> around the problem ourselves.
> >>>
> >>> This is an RFC for three reasons:
> >>> (1) I don’t know whether this is the right way to address the issue,
> >>> (2) Ideally, we should detect whether the XFS kernel driver is fixed and
> >>> if so stop applying the workaround.
> >>> I don’t know how we would go about this, so this series doesn’t do
> >>> it. (Hence it’s an RFC.)
> >>> (3) Perhaps it’s a bit of a layering violation to let the file-posix
> >>> driver access and modify a BdrvTrackedRequest object.
> >>>
> >>> As for how we can address the issue, I see three ways:
> >>> (1) The one presented in this series: On XFS with aio=native, we extend
> >>> tracked requests for post-EOF fallocate() calls (i.e., write-zero
> >>> operations) to reach until infinity (INT64_MAX in practice), mark
> >>> them serializing and wait for other conflicting requests.
> >>>
> >>> Advantages:
> >>> + Limits the impact to very specific cases
> >>> (And that means it wouldn’t hurt too much to keep this workaround
> >>> even when the XFS driver has been fixed)
> >>> + Works around the bug where it happens, namely in file-posix
> >>>
> >>> Disadvantages:
> >>> - A bit complex
> >>> - A bit of a layering violation (should file-posix have access to
> >>> tracked requests?)
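
To make the idea of (1) concrete: a post-EOF write-zeroes request is widened so that it conflicts with every later request, which then has to wait for it. The following is only a toy sketch with made-up names, not QEMU's BdrvTrackedRequest code or the actual patch:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Toy stand-in for a tracked request; not QEMU's BdrvTrackedRequest. */
    typedef struct ToyRequest {
        int64_t offset;
        int64_t bytes;
        bool serialising;
    } ToyRequest;

    /* Widen a post-EOF write-zeroes request to reach "infinity" and mark it
     * serialising, so every overlapping request has to wait for it. */
    static void toy_extend_post_eof_zeroes(ToyRequest *req)
    {
        req->bytes = INT64_MAX - req->offset;
        req->serialising = true;
    }

    /* Overlap test; bytes was chosen above so offset + bytes cannot overflow. */
    static bool toy_requests_conflict(const ToyRequest *a, const ToyRequest *b)
    {
        return a->offset < b->offset + b->bytes &&
               b->offset < a->offset + a->bytes;
    }

    int main(void)
    {
        ToyRequest zeroes = { .offset = 1 * 1024 * 1024, .bytes = 64 * 1024 };
        ToyRequest write  = { .offset = 8 * 1024 * 1024, .bytes = 4 * 1024 };

        toy_extend_post_eof_zeroes(&zeroes);
        /* Prints 1: the widened zero request now conflicts with the later write. */
        printf("conflict: %d\n", toy_requests_conflict(&zeroes, &write));
        return 0;
    }
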
> >>>
> >>> (2) Always skip qcow2’s handle_alloc_space() on XFS. The XFS bug only
> >>> becomes visible due to that function: I don’t think qcow2 writes
> >>> zeroes in any other I/O path, and raw images are fixed in size so
> >>> post-EOF writes won’t happen.
> >>>
> >>> Advantages:
> >>> + Maybe simpler, depending on how difficult it is to handle the
> >>> layering violation
> >>> + Also fixes the performance problem of handle_alloc_space() being
> >>> slow on ppc64+XFS.
> >>>
> >>> Disadvantages:
> >>> - Huge layering violation because qcow2 would need to know whether
> >>> the image is stored on XFS or not.
> >>> - We’d definitely want to skip this workaround when the XFS driver
> >>> has been fixed, so we need some method to find out whether it has
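
As an aside, one way a block driver could find out whether an image file lives on XFS (not necessarily what this series or qcow2 would end up doing) is fstatfs() plus the XFS superblock magic; a minimal sketch:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/vfs.h>    /* fstatfs(), struct statfs */
    #include <unistd.h>

    #ifndef XFS_SUPER_MAGIC
    #define XFS_SUPER_MAGIC 0x58465342  /* "XFSB", as in linux/magic.h */
    #endif

    /* Returns 1 if fd is on XFS, 0 if not, -errno on failure. */
    static int fd_is_on_xfs(int fd)
    {
        struct statfs sfs;

        if (fstatfs(fd, &sfs) < 0) {
            return -errno;
        }
        return sfs.f_type == XFS_SUPER_MAGIC;
    }

    int main(int argc, char **argv)
    {
        int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        printf("on XFS: %d\n", fd_is_on_xfs(fd));
        close(fd);
        return 0;
    }
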
> >>>
> >>> (3) Drop handle_alloc_space(), i.e. revert c8bb23cbdbe32f.
> >>> To my knowledge I’m the only one who has provided any benchmarks for
> >>> this commit, and even then I was a bit skeptical because it performs
> >>> well in some cases and badly in others. I concluded that it’s
> >>> probably worth it because the “some cases” are more likely to occur.
> >>>
> >>> Now we have this problem of corruption here (granted due to a bug in
> >>> the XFS driver), and another report of massively degraded
> >>> performance on ppc64
> >>> (https://bugzilla.redhat.com/show_bug.cgi?id=1745823 – sorry, a
> >>> private BZ; I hate that :-/ The report is about 40 % worse
> >>> performance for an in-guest fio write benchmark.)
> >>>
> >>> So I have to ask what the justification for keeping c8bb23cbdbe32f
> >>> is. How much does it actually increase performance? (On
> >>> non-(ppc64+XFS) machines, obviously.)
> >>>
> >>> Advantages:
> >>> + Trivial
> >>> + No layering violations
> >>> + We wouldn’t need to keep track of whether the kernel bug has been
> >>> fixed or not
> >>> + Fixes the ppc64+XFS performance problem
> >>>
> >>> Disadvantages:
> >>> - Reverts cluster allocation performance to pre-c8bb23cbdbe32f
> >>> levels, whatever that means
> >>>
> >>> So this is the main reason this is an RFC: What should we do? Is (1)
> >>> really the best choice?
> >>>
> >>>
> >>> In any case, I’ve run the test case I showed in
> >>> https://lists.nongnu.org/archive/html/qemu-block/2019-10/msg01282.html
> >>> more than ten times with this series applied and the installation
> >>> succeeded every time. (Without this series, it fails like every other
> >>> time.)
> >>>
> >>>
> >>
> >> Hi!
> >>
> >> First, many thanks for your investigation!
> >>
> >> We need the c8bb23cbdbe3 patch because we use 1M clusters, and zeroing a whole 1M
> >> cluster takes significant time.
> >>
> >> I've tested a bit:
> >>
> >> test:
> >> for img in /ssd/test.img /test.img; do
> >>     for cl in 64K 1M; do
> >>         for step in 4K 64K 1M; do
> >>             ./qemu-img create -f qcow2 -o cluster_size=$cl $img 15G > /dev/null
> >>             printf '%-15s%-7s%-10s : ' $img cl=$cl step=$step
> >>             ./qemu-img bench -c $((15 * 1024)) -n -s 4K -S $step -t none -w $img \
> >>                 | tail -1 | awk '{print $4}'
> >>         done
> >>     done
> >> done
> >>
> >> on master:
> >>
> >> /ssd/test.img cl=64K step=4K : 0.291
> >> /ssd/test.img cl=64K step=64K : 0.813
> >> /ssd/test.img cl=64K step=1M : 2.799
> >> /ssd/test.img cl=1M step=4K : 0.217
> >> /ssd/test.img cl=1M step=64K : 0.332
> >> /ssd/test.img cl=1M step=1M : 0.685
> >> /test.img cl=64K step=4K : 1.751
> >> /test.img cl=64K step=64K : 14.811
> >> /test.img cl=64K step=1M : 18.321
> >> /test.img cl=1M step=4K : 0.759
> >> /test.img cl=1M step=64K : 13.574
> >> /test.img cl=1M step=1M : 28.970
> >>
> >> rerun on master:
> >>
> >> /ssd/test.img cl=64K step=4K : 0.295
> >> /ssd/test.img cl=64K step=64K : 0.803
> >> /ssd/test.img cl=64K step=1M : 2.921
> >> /ssd/test.img cl=1M step=4K : 0.233
> >> /ssd/test.img cl=1M step=64K : 0.321
> >> /ssd/test.img cl=1M step=1M : 0.762
> >> /test.img cl=64K step=4K : 1.873
> >> /test.img cl=64K step=64K : 15.621
> >> /test.img cl=64K step=1M : 18.428
> >> /test.img cl=1M step=4K : 0.883
> >> /test.img cl=1M step=64K : 13.484
> >> /test.img cl=1M step=1M : 26.244
> >>
> >>
> >> on master + revert c8bb23cbdbe32f5c326
> >>
> >> /ssd/test.img cl=64K step=4K : 0.395
> >> /ssd/test.img cl=64K step=64K : 4.231
> >> /ssd/test.img cl=64K step=1M : 5.598
> >> /ssd/test.img cl=1M step=4K : 0.352
> >> /ssd/test.img cl=1M step=64K : 2.519
> >> /ssd/test.img cl=1M step=1M : 38.919
> >> /test.img cl=64K step=4K : 1.758
> >> /test.img cl=64K step=64K : 9.838
> >> /test.img cl=64K step=1M : 13.384
> >> /test.img cl=1M step=4K : 1.849
> >> /test.img cl=1M step=64K : 19.405
> >> /test.img cl=1M step=1M : 157.090
> >>
> >> rerun:
> >>
> >> /ssd/test.img cl=64K step=4K : 0.407
> >> /ssd/test.img cl=64K step=64K : 3.325
> >> /ssd/test.img cl=64K step=1M : 5.641
> >> /ssd/test.img cl=1M step=4K : 0.346
> >> /ssd/test.img cl=1M step=64K : 2.583
> >> /ssd/test.img cl=1M step=1M : 39.692
> >> /test.img cl=64K step=4K : 1.727
> >> /test.img cl=64K step=64K : 10.058
> >> /test.img cl=64K step=1M : 13.441
> >> /test.img cl=1M step=4K : 1.926
> >> /test.img cl=1M step=64K : 19.738
> >> /test.img cl=1M step=1M : 158.268
> >>
> >>
> >> So it's obvious that c8bb23cbdbe32f5c326 matters a lot for 1M cluster size, even on a
> >> rotational disk, which means the earlier assumption that handle_alloc_space() should only
> >> be called for SSDs is wrong; we need smarter heuristics...
> >>
> >> So, I'd prefer (1) or (2).
>
> OK. I wonder whether that problem would go away with Berto’s subcluster
> series, though.
>
> > About the degradation in some cases: I think the problem is that one (slightly
> > larger) write may be faster than fast-write-zeroes plus a small write, as the
> > latter means an additional metadata write. That is expected for small clusters
> > in conjunction with a rotational disk, but the actual limit depends on the
> > specific disk. So I think a possible solution is to sometimes run with
> > handle_alloc_space() and sometimes without, remember the time and length of
> > each request, and derive a dynamic limit from that...
>
> Maybe make a decision based both on the ratio of data size to COW area
> length (only invoke handle_alloc_space() under a certain threshold), and
> the absolute COW area length (always invoke it above a certain
> threshold, unless the ratio doesn’t allow it)?
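
Concretely, such a check could look roughly like the sketch below; the function name and all threshold values are invented for illustration and would need real tuning (perhaps dynamically, along the lines Vladimir suggests above):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Invented tuning values; only the shape of the check matters. */
    #define RATIO_LIMIT       8             /* data must be < 1/8 of the COW area     */
    #define RATIO_HARD_LIMIT  2             /* ...or < 1/2 if the COW area is "large" */
    #define LARGE_COW_BYTES   (512 * 1024)  /* threshold for a "large" COW area       */

    /*
     * Decide whether to zero the whole cluster (handle_alloc_space()-style)
     * instead of explicitly writing out the COW regions.  data_bytes is the
     * guest payload, cow_bytes the copy-on-write area around it.
     */
    static bool should_zero_whole_cluster(uint64_t data_bytes, uint64_t cow_bytes)
    {
        if (cow_bytes == 0) {
            return false;                   /* nothing to zero anyway */
        }
        if (data_bytes * RATIO_LIMIT <= cow_bytes) {
            return true;                    /* ratio rule */
        }
        if (cow_bytes >= LARGE_COW_BYTES &&
            data_bytes * RATIO_HARD_LIMIT <= cow_bytes) {
            return true;                    /* large COW area, ratio still acceptable */
        }
        return false;
    }

    int main(void)
    {
        /* 4K write into a 1M cluster vs. 512K write into a 1M cluster */
        printf("%d %d\n",
               should_zero_whole_cluster(4 * 1024, 1020 * 1024),
               should_zero_whole_cluster(512 * 1024, 512 * 1024));
        return 0;
    }
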
I'm not sure that I would like this level of complexity in this code
path...
Kevin