linux-btrfs.vger.kernel.org archive mirror
From: Eryu Guan <guan@eryu.me>
To: fdmanana@kernel.org
Cc: fstests@vger.kernel.org, linux-btrfs@vger.kernel.org,
	Filipe Manana <fdmanana@suse.com>
Subject: Re: [PATCH] btrfs: check that cloning an inline extent does not lead to data loss
Date: Sun, 19 Apr 2020 23:34:41 +0800	[thread overview]
Message-ID: <20200419153441.GG388005@desktop> (raw)
In-Reply-To: <20200406105134.2233-1-fdmanana@kernel.org>

On Mon, Apr 06, 2020 at 11:51:34AM +0100, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
> 
> We have a bug in the current kernel merge window (for 5.7) that results
> in data loss when cloning an inline extent into a new file (or an empty
> file). This change adds a test for that case to the existing test case
> btrfs/205, since that is the test case covering all the supported
> scenarios for cloning inline extents in btrfs.
> 
> The btrfs patch for the linux kernel that fixes the regression has the
> following subject:
> 
>   "Btrfs: fix lost i_size update after cloning inline extent"

The patch has been merged now; I've updated the commit log to reference
the fix by its commit id:

4fdb688c7071 ("btrfs: fix lost i_size update after cloning inline extent")

Thanks,
Eryu

> 
> Signed-off-by: Filipe Manana <fdmanana@suse.com>
> ---
>  tests/btrfs/205     | 13 +++++++++++++
>  tests/btrfs/205.out | 32 ++++++++++++++++++++++++++++++++
>  2 files changed, 45 insertions(+)
> 
> diff --git a/tests/btrfs/205 b/tests/btrfs/205
> index 9bec2bfa..66355678 100755
> --- a/tests/btrfs/205
> +++ b/tests/btrfs/205
> @@ -128,6 +128,18 @@ run_tests()
>  
>      echo "File bar6 digest = $(_md5_checksum $SCRATCH_MNT/bar6)"
>  
> +    # File foo3 has a single inline extent of 500 bytes.
> +    echo "Creating file foo3"
> +    $XFS_IO_PROG -f -c "pwrite -S 0xbf 0 500" $SCRATCH_MNT/foo3 | _filter_xfs_io
> +
> +    # File bar7 is an empty file with no extents.
> +    touch $SCRATCH_MNT/bar7
> +
> +    echo "Cloning foo3 into bar7"
> +    $XFS_IO_PROG -c "reflink $SCRATCH_MNT/foo3" $SCRATCH_MNT/bar7 | _filter_xfs_io
> +
> +    echo "File bar7 digest = $(_md5_checksum $SCRATCH_MNT/bar7)"
> +
>      # Unmount and mount again the filesystem. We want to verify the reflink
>      # operations were durably persisted.
>      _scratch_cycle_mount
> @@ -139,6 +151,7 @@ run_tests()
>      echo "File bar4 digest = $(_md5_checksum $SCRATCH_MNT/bar4)"
>      echo "File bar5 digest = $(_md5_checksum $SCRATCH_MNT/bar5)"
>      echo "File bar6 digest = $(_md5_checksum $SCRATCH_MNT/bar6)"
> +    echo "File bar7 digest = $(_md5_checksum $SCRATCH_MNT/bar7)"
>  }
>  
>  _scratch_mkfs "-O ^no-holes" >>$seqres.full 2>&1
> diff --git a/tests/btrfs/205.out b/tests/btrfs/205.out
> index 948e0634..d9932fc0 100644
> --- a/tests/btrfs/205.out
> +++ b/tests/btrfs/205.out
> @@ -52,6 +52,13 @@ Cloning foo1 into bar6
>  linked 131072/131072 bytes at offset 0
>  XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>  File bar6 digest = 4b48829714d20a4e73a0cf1565270076
> +Creating file foo3
> +wrote 500/500 bytes at offset 0
> +XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +Cloning foo3 into bar7
> +linked 0/0 bytes at offset 0
> +XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +File bar7 digest = 67679afda6f846539ca7138452de0171
>  File digests after mounting again the filesystem:
>  File bar1 digest = e9d03fb5fff30baf3c709f2384dfde67
>  File bar2 digest = 85678cf32ed48f92ca42ad06d0b63f2a
> @@ -59,6 +66,7 @@ File bar3 digest = 85678cf32ed48f92ca42ad06d0b63f2a
>  File bar4 digest = 4b48829714d20a4e73a0cf1565270076
>  File bar5 digest = 4b48829714d20a4e73a0cf1565270076
>  File bar6 digest = 4b48829714d20a4e73a0cf1565270076
> +File bar7 digest = 67679afda6f846539ca7138452de0171
>  
>  Testing with -o compress
>  
> @@ -112,6 +120,13 @@ Cloning foo1 into bar6
>  linked 131072/131072 bytes at offset 0
>  XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>  File bar6 digest = 4b48829714d20a4e73a0cf1565270076
> +Creating file foo3
> +wrote 500/500 bytes at offset 0
> +XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +Cloning foo3 into bar7
> +linked 0/0 bytes at offset 0
> +XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +File bar7 digest = 67679afda6f846539ca7138452de0171
>  File digests after mounting again the filesystem:
>  File bar1 digest = e9d03fb5fff30baf3c709f2384dfde67
>  File bar2 digest = 85678cf32ed48f92ca42ad06d0b63f2a
> @@ -119,6 +134,7 @@ File bar3 digest = 85678cf32ed48f92ca42ad06d0b63f2a
>  File bar4 digest = 4b48829714d20a4e73a0cf1565270076
>  File bar5 digest = 4b48829714d20a4e73a0cf1565270076
>  File bar6 digest = 4b48829714d20a4e73a0cf1565270076
> +File bar7 digest = 67679afda6f846539ca7138452de0171
>  
>  Testing with -o nodatacow
>  
> @@ -172,6 +188,13 @@ Cloning foo1 into bar6
>  linked 131072/131072 bytes at offset 0
>  XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>  File bar6 digest = 4b48829714d20a4e73a0cf1565270076
> +Creating file foo3
> +wrote 500/500 bytes at offset 0
> +XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +Cloning foo3 into bar7
> +linked 0/0 bytes at offset 0
> +XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +File bar7 digest = 67679afda6f846539ca7138452de0171
>  File digests after mounting again the filesystem:
>  File bar1 digest = e9d03fb5fff30baf3c709f2384dfde67
>  File bar2 digest = 85678cf32ed48f92ca42ad06d0b63f2a
> @@ -179,6 +202,7 @@ File bar3 digest = 85678cf32ed48f92ca42ad06d0b63f2a
>  File bar4 digest = 4b48829714d20a4e73a0cf1565270076
>  File bar5 digest = 4b48829714d20a4e73a0cf1565270076
>  File bar6 digest = 4b48829714d20a4e73a0cf1565270076
> +File bar7 digest = 67679afda6f846539ca7138452de0171
>  
>  Testing with -O no-holes
>  
> @@ -232,6 +256,13 @@ Cloning foo1 into bar6
>  linked 131072/131072 bytes at offset 0
>  XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
>  File bar6 digest = 4b48829714d20a4e73a0cf1565270076
> +Creating file foo3
> +wrote 500/500 bytes at offset 0
> +XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +Cloning foo3 into bar7
> +linked 0/0 bytes at offset 0
> +XXX Bytes, X ops; XX:XX:XX.X (XXX YYY/sec and XXX ops/sec)
> +File bar7 digest = 67679afda6f846539ca7138452de0171
>  File digests after mounting again the filesystem:
>  File bar1 digest = e9d03fb5fff30baf3c709f2384dfde67
>  File bar2 digest = 85678cf32ed48f92ca42ad06d0b63f2a
> @@ -239,3 +270,4 @@ File bar3 digest = 85678cf32ed48f92ca42ad06d0b63f2a
>  File bar4 digest = 4b48829714d20a4e73a0cf1565270076
>  File bar5 digest = 4b48829714d20a4e73a0cf1565270076
>  File bar6 digest = 4b48829714d20a4e73a0cf1565270076
> +File bar7 digest = 67679afda6f846539ca7138452de0171
> -- 
> 2.11.0
> 
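As an aside for reviewers: the bar7 digest in the golden output should simply be the md5 of foo3's content, i.e. 500 bytes of 0xbf, since the test expects the clone to carry the data over intact. Assuming a standard md5sum, that value can be sanity-checked locally without a btrfs scratch mount:

```shell
# Generate 500 bytes of 0xbf (octal 277) and hash them;
# this should match the "File bar7 digest" lines in 205.out.
head -c 500 /dev/zero | tr '\0' '\277' | md5sum
```

If the clone preserves both the data and i_size correctly, bar7 contains exactly this content, so the digest stays stable across the remount.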

Thread overview: 3+ messages
2020-04-06 10:51 [PATCH] btrfs: check that cloning an inline extent does not lead to data loss fdmanana
2020-04-10 19:22 ` Josef Bacik
2020-04-19 15:34 ` Eryu Guan [this message]