public inbox for linux-btrfs@vger.kernel.org
From: Lukas Straub <lukasstraub2@web.de>
To: Qu Wenruo <wqu@suse.com>
Cc: linux-btrfs@vger.kernel.org, David Sterba <dsterba@suse.cz>
Subject: Re: [PATCH 00/12] btrfs: introduce write-intent bitmaps for RAID56
Date: Wed, 13 Jul 2022 16:18:35 +0000
Message-ID: <20220713161835.63ad1b00@gecko>
In-Reply-To: <cover.1657171615.git.wqu@suse.com>


Hello Qu,

I think I mentioned it elsewhere, but could this be made generic across
all raid levels (e.g. raid1/raid10 too)? That way, the bitmap could also
be used to fix corruption of nocow files on btrfs raid. IMHO this is
very important, since openSUSE and Fedora use nocow for certain files
(databases, VM images) by default, and currently anyone using btrfs raid
there will be shot in the foot by this.

More comments below.

On Thu,  7 Jul 2022 13:32:25 +0800
Qu Wenruo <wqu@suse.com> wrote:
> [...]
> [OBJECTIVE]
> 
> This patchset will introduce a btrfs specific write-intent bitmap.
> 
> The bitmap will be located at physical offset 1MiB of each device, and
> its content is identical on all devices.
> 
> When there is a RAID56 write (currently all RAID56 writes, _including
> full stripe writes_), the write-intent bitmap will be updated and
> flushed to all writable devices before the real bios are submitted to
> the disks.

You'll need to update the bitmap even for full stripe writes. I don't
know btrfs internals well, but this example should apply:

1. Power loss happens during a full stripe write. If the bitmap wasn't
set, the whole stripe will contain inconsistent data:

	0		32K		64K
Disk 1	|iiiiiiiiiiiiiiiiiiiiiiiiiiiiiii| (data stripe)
Disk 2  |iiiiiiiiiiiiiiiiiiiiiiiiiiiiiii| (data stripe)
Disk 3	|iiiiiiiiiiiiiiiiiiiiiiiiiiiiiii| (parity stripe)

2. A partial stripe write happens, updating only one data strip + parity:

	0		32K		64K
Disk 1	|XXiiiiiiiiiiiiiiiiiiiiiiiiiiiii| (data stripe)
Disk 2  |iiiiiiiiiiiiiiiiiiiiiiiiiiiiiii| (data stripe)
Disk 3	|XXiiiiiiiiiiiiiiiiiiiiiiiiiiiii| (parity stripe)

3. We lose Disk 1. We try to recover Disk 1's data using Disk 2's data
+ parity. Because Disk 2 is inconsistent, we get invalid data.

Thus, we need to scrub the stripe even after a full stripe write to
prevent this.

> So even if a power loss happens, at the next mount time we know which
> full stripes need to be checked, and can start a scrub on the involved
> logical bytenr ranges.
> 
> [...]
> 
> [BITMAPS DESIGN]
> 
> The bitmaps on-disk format looks like this:
> 
>  [ super ][ entry 1 ][ entry 2 ] ... [entry N]
>  |<---------  super::size (4K) ------------->|
> 
> The super block records how many entries are in use.
> 
> Each entry is 128 bits (16 bytes) in size, containing one u64 for the
> bytenr and one u64 for the bitmap.
> 
> All utilized entries are sorted by bytenr, and no bits can overlap.
> 
> The blocksize is now fixed to BTRFS_STRIPE_LEN (64KiB), so each entry
> can contain at most 4MiB, and the whole bitmaps can contain 224 entries.

IMHO we can go much larger; mdraid for example uses a bitmap chunk size
of 64MiB by default. Sure, we'll scrub many unrelated stripes on
recovery, but write performance will be better.

Regards,
Lukas Straub

> For the worst case, it can contain 14MiB of dirty ranges.
> (1 bit set per bitmap; this also means 2-disk RAID5 or 3-disk RAID6.)
> 
> For the best case, it can contain 896MiB dirty ranges.
> (all bits set per bitmap)
> 
> [WHY NOT BTRFS BTREE]
> 
> Current write-intent structure needs two features:
> 
> - Its data needs to survive across stripe boundaries
>   Normally this means the write-intent btree needs to act like a proper
>   tree root, with METADATA_ITEMs for all its tree blocks.
> 
> - Its data must be updated outside of a transaction
>   Currently only the log tree can do such a thing, but unfortunately
>   the log tree cannot survive across a transaction boundary.
> 
> Thus a write-intent btree can only meet one of the two requirements,
> and is not a suitable solution here.
> 
> [TESTING AND BENCHMARK]
> 
> For the performance benchmark, unfortunately I don't have 3 HDDs to
> test with. Will do the benchmark after securing enough hardware.
> 
> For testing, it survives the volume/raid/dev-replace test groups with
> no write-intent bitmap leakage.
> 
> Unfortunately there is still a warning triggered in btrfs/070, which is
> still under investigation; hopefully it is a false alert in the bitmap
> clearing path.
> 
> [TODO]
> - Scrub refactor to allow us to do proper recovery at mount time
>   Need to change scrub interface to scrub based on logical bytenr.
> 
>   This can be a super big work, thus currently we will focus only on
>   RAID56 new scrub interface for write-intent recovery only.
> 
> - Extra optimizations
>   * Skip full stripe writes
>   * Enlarge the window between btrfs_write_intent_mark_dirty() and
>     btrfs_write_intent_writeback()
>     So that we can merge more dirty bits and cause fewer bitmap
>     writebacks
> 
> - Proper performance benchmark
>   Needs hardware/bare-metal VMs, since I don't have any physical
>   machine large enough to contain 3 3.5" HDDs.
> 
> 
> Qu Wenruo (12):
>   btrfs: introduce new compat RO flag, EXTRA_SUPER_RESERVED
>   btrfs: introduce a new experimental compat RO flag,
>     WRITE_INTENT_BITMAP
>   btrfs: introduce the on-disk format of btrfs write intent bitmaps
>   btrfs: load/create write-intent bitmaps at mount time
>   btrfs: write-intent: write the newly created bitmaps to all disks
>   btrfs: write-intent: introduce an internal helper to set bits for a
>     range.
>   btrfs: write-intent: introduce an internal helper to clear bits for a
>     range.
>   btrfs: selftests: add selftests for write-intent bitmaps
>   btrfs: write back write intent bitmap after barrier_all_devices()
>   btrfs: update and writeback the write-intent bitmap for RAID56 write.
>   btrfs: raid56: clear write-intent bimaps when a full stripe finishes.
>   btrfs: warn and clear bitmaps if there is dirty bitmap at mount time
> 
>  fs/btrfs/Makefile                           |   5 +-
>  fs/btrfs/ctree.h                            |  24 +-
>  fs/btrfs/disk-io.c                          |  54 ++
>  fs/btrfs/raid56.c                           |  16 +
>  fs/btrfs/sysfs.c                            |   2 +
>  fs/btrfs/tests/btrfs-tests.c                |   4 +
>  fs/btrfs/tests/btrfs-tests.h                |   2 +
>  fs/btrfs/tests/write-intent-bitmaps-tests.c | 247 ++++++
>  fs/btrfs/volumes.c                          |  34 +-
>  fs/btrfs/write-intent.c                     | 903 ++++++++++++++++++++
>  fs/btrfs/write-intent.h                     | 303 +++++++
>  fs/btrfs/zoned.c                            |   8 +
>  include/uapi/linux/btrfs.h                  |  17 +
>  13 files changed, 1610 insertions(+), 9 deletions(-)
>  create mode 100644 fs/btrfs/tests/write-intent-bitmaps-tests.c
>  create mode 100644 fs/btrfs/write-intent.c
>  create mode 100644 fs/btrfs/write-intent.h
> 




