public inbox for linux-btrfs@vger.kernel.org
From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH v4 0/9] btrfs: refactor scrub entrances for each profile
Date: Tue, 15 Mar 2022 15:02:43 +0800	[thread overview]
Message-ID: <260f924b-e434-c49b-0c39-a09dbf61ac19@suse.com> (raw)
In-Reply-To: <cover.1646984153.git.wqu@suse.com>



On 2022/3/11 15:38, Qu Wenruo wrote:
> This patchset is cherry-picked from my github repo:
> https://github.com/adam900710/linux/tree/refactor_scrub

Just to mention, the branch got an update now that misc-next has merged 
the renaming part.

The good news is that this entrance refactor still applies without 
conflicts, so there is no update to this patchset.

Thanks,
Qu
> 
> [Changelog]
> v2:
> - Rebased to latest misc-next
> 
> - Fix several uninitialized variables in the 2nd and 3rd patch
>    This is because @physical, @physical_end and @offset are also used for
>    zoned device sync.
> 
>    Initialize those values early to fix the problem.
> 
> v3:
> - Add two patches to better split cleanups from refactors
>    One to change the timing of initialization of @physical and
>    @physical_end
> 
>    One to remove dead non-RAID56 branches after making scrub_stripe() to
>    work on RAID56 only.
> 
> - Fix an unfinished comment in scrub_simple_mirror()
> 
> v4:
> - Rebased after scrub renaming patchset
>    Only minor conflicts.
> 
> - Fix uninitialized variable in patch 6 and 7
>    Now there should be no uninitialized variables even if only patches
>    1~6 are applied.
> 
> [CRAP-BUT-IT-WORKS(TM)]
> 
> Scrub is one of the areas we seldom touch, because:
> 
> - It's a mess
>    Just check scrub_stripe() function.
>    It's a function scrubbing a stripe for *all* profiles.
> 
>    It's near 400 lines for a single complex function, with double while()
>    loop and several different jumps inside the loop.
> 
>    Not to mention the lack of comments for various structures.
> 
>    Code like this should not, and would not, get merged under our
>    current coding standards.
> 
> - It just works
>    I have hit more than 10 bugs during development, and I almost wanted
>    to give up the refactor: even though the existing code is crap, it
>    works, passing the existing scrub/replace test group.
>    Meanwhile, no matter how small a change I made, my code kept failing
>    those same tests.
> 
> [REFACTOR-IDEA]
> 
> The core idea here is to get rid of the one-size-fits-all solution of
> scrub_stripe().
> 
> Instead, we explicitly separate the scrub into 3 groups (the idea is
> from my btrfs-fuse project):
> 
> - Simple-mirror based profiles
>    This includes SINGLE/DUP/RAID1/RAID1C* profiles.
>    They have no stripe, and their repair is purely mirror based.
> 
> - Simple-stripe based profiles
>    This includes RAID0/RAID10 profiles.
>    They are just simple stripes (without P/Q or rotation), with extra
>    mirrors to support their repair.
> 
> - RAID56
>    The most complex profiles, they have extra P/Q, and have rotation.
> 
> [REFACTOR-IMPLEMENTATION]
> 
> So we have 3 entrances for all those supported profiles:
> 
> - scrub_simple_mirror()
>    For SINGLE/DUP/RAID1/RAID1C* profiles.
>    Just go through each extent and scrub the extent.
> 
> - scrub_simple_stripe()
>    For RAID0/RAID10 profiles.
>    We go through each data stripe first, then inside each data stripe
>    we call scrub_simple_mirror(), since after the stripe split RAID0 is
>    just SINGLE and RAID10 is just RAID1.
> 
> - scrub_stripe() untouched for RAID56
>    RAID56 still has all the complex things to do, but the work can
>    still be split into two types (already done by the original code):
> 
>    * data stripes
>      They are no different in the verification part; RAID56 is just
>      SINGLE if we ignore the repair path.
>      It's only in the repair path that our paths diverge.
> 
>      So we can reuse scrub_simple_mirror() again.
> 
>    * P/Q stripes
>      They already have a dedicated function handling the case.
> 
> With all these refactors, although we have several more functions, we
> get rid of:
> 
> - A double while () loop
> - Several jumps inside the double loop
> - Complex calculation to try to fit all profiles
> 
> And we get:
> 
> - Better comments
> - More dedicated functions
> - A better basis for further refactors
> 
> [FUTURE CLEANUPS]
> - Refactor scrub_pages/scrub_parity/... structures
> - Further cleanup RAID56 codes
> 
> 
> Qu Wenruo (9):
>    btrfs: calculate @physical_end using @dev_extent_len directly in
>      scrub_stripe()
>    btrfs: introduce a helper to locate an extent item
>    btrfs: introduce dedicated helper to scrub simple-mirror based range
>    btrfs: introduce dedicated helper to scrub simple-stripe based range
>    btrfs: scrub: cleanup the non-RAID56 branches in scrub_stripe()
>    btrfs: use scrub_simple_mirror() to handle RAID56 data stripe scrub
>    btrfs: refactor scrub_raid56_parity()
>    btrfs: use find_first_extent_item() to replace the open-coded extent
>      item search
>    btrfs: move scrub_remap_extent() call into scrub_extent() with more
>      comments
> 
>   fs/btrfs/scrub.c | 1037 +++++++++++++++++++++++++---------------------
>   1 file changed, 559 insertions(+), 478 deletions(-)
> 



Thread overview: 16+ messages
2022-03-11  7:38 [PATCH v4 0/9] btrfs: refactor scrub entrances for each profile Qu Wenruo
2022-03-11  7:38 ` [PATCH v4 1/9] btrfs: calculate @physical_end using @dev_extent_len directly in scrub_stripe() Qu Wenruo
2022-03-23  6:26   ` Christoph Hellwig
2022-03-11  7:38 ` [PATCH v4 2/9] btrfs: introduce a helper to locate an extent item Qu Wenruo
2022-03-23  6:27   ` Christoph Hellwig
2022-03-23  7:01     ` Qu Wenruo
2022-03-11  7:38 ` [PATCH v4 3/9] btrfs: introduce dedicated helper to scrub simple-mirror based range Qu Wenruo
2022-03-11  7:38 ` [PATCH v4 4/9] btrfs: introduce dedicated helper to scrub simple-stripe " Qu Wenruo
2022-03-11  7:38 ` [PATCH v4 5/9] btrfs: scrub: cleanup the non-RAID56 branches in scrub_stripe() Qu Wenruo
2022-03-11  7:38 ` [PATCH v4 6/9] btrfs: use scrub_simple_mirror() to handle RAID56 data stripe scrub Qu Wenruo
2022-03-11  7:38 ` [PATCH v4 7/9] btrfs: refactor scrub_raid56_parity() Qu Wenruo
2022-03-11  7:38 ` [PATCH v4 8/9] btrfs: use find_first_extent_item() to replace the open-coded extent item search Qu Wenruo
2022-03-11  7:38 ` [PATCH v4 9/9] btrfs: move scrub_remap_extent() call into scrub_extent() with more comments Qu Wenruo
2022-03-15  7:02 ` Qu Wenruo [this message]
2022-04-20 21:16   ` [PATCH v4 0/9] btrfs: refactor scrub entrances for each profile David Sterba
2022-04-27 19:10     ` David Sterba
