linux-btrfs.vger.kernel.org archive mirror
From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs@vger.kernel.org
Subject: Re: [PATCH v2 0/5] btrfs: scrub: make scrub use less memory for metadata scrub
Date: Fri, 5 Aug 2022 14:32:48 +0800	[thread overview]
Message-ID: <29229e8b-d4db-ed4c-1e06-90ab9b43a206@suse.com> (raw)
In-Reply-To: <cover.1658215183.git.wqu@suse.com>

Ping?

This series will start a new wave of changes moving various members
from scrub_sector to scrub_block (per-sector to per-block), which will
further reduce memory usage.

Thus it would be better to get this series merged before those newer
changes arrive.

Thanks,
Qu

On 2022/7/19 15:24, Qu Wenruo wrote:
> [Changelog]
> v2:
> - Rebased to latest misc-next
>    The conflicts are mostly from member renaming.
>    Re-ran the tests (both the scrub and replace groups) on aarch64 with
>    64K page size and on x86_64.
> 
> 
> Although btrfs scrub works for subpage from day one, it has a small
> pitfall:
> 
>    Scrub will always allocate a full page for each sector.
> 
> This causes increased memory usage; although not a big deal, it's
> still not ideal.
> 
> The patchset will change the behavior by integrating all pages into
> scrub_block::pages[], instead of using scrub_sector::page.
> 
> Now scrub_sector will no longer hold a page pointer, but uses its
> logical bytenr to calculate which page and page range it should use.
> 
> Unfortunately this only reduces memory usage for metadata scrub, which
> works in units of nodesize.
> 
> For the best case, 64K node size with 64K page size, we waste no memory
> to scrub one tree block.
> 
> For the worst case, 4K node size with 64K page size, we are no worse
> than the existing behavior (still one 64K page for the tree block).
> 
> For the default case (16K nodesize), we use one 64K page, compared to
> 4x64K pages previously.
> 
> For data scrubbing, we use sector size, thus it makes no difference.
> In the future, we may want to enlarge the data scrub size so that
> subpage can waste less memory.
> 
> [PATCHSET STRUCTURE]
> The first 3 patches are just cleanups, mostly to make scrub_sector
> allocation much easier.
> 
> The 4th patch is to introduce the new page array for sblock, and
> the last one to completely remove the usage of scrub_sector::page.
> 
> Qu Wenruo (5):
>    btrfs: scrub: use pointer array to replace @sblocks_for_recheck
>    btrfs: extract the initialization of scrub_block into a helper
>      function
>    btrfs: extract the allocation and initialization of scrub_sector into
>      a helper
>    btrfs: scrub: introduce scrub_block::pages for more efficient memory
>      usage for subpage
>    btrfs: scrub: remove scrub_sector::page and use scrub_block::pages
>      instead
> 
>   fs/btrfs/scrub.c | 398 +++++++++++++++++++++++++++++++----------------
>   1 file changed, 266 insertions(+), 132 deletions(-)
> 


Thread overview: 9+ messages
2022-07-19  7:24 [PATCH v2 0/5] btrfs: scrub: make scrub use less memory for metadata scrub Qu Wenruo
2022-07-19  7:24 ` [PATCH v2 1/5] btrfs: scrub: use pointer array to replace @sblocks_for_recheck Qu Wenruo
2022-07-19  7:24 ` [PATCH v2 2/5] btrfs: extract the initialization of scrub_block into a helper function Qu Wenruo
2022-07-19  7:24 ` [PATCH v2 3/5] btrfs: extract the allocation and initialization of scrub_sector into a helper Qu Wenruo
2022-07-19  7:24 ` [PATCH v2 4/5] btrfs: scrub: introduce scrub_block::pages for more efficient memory usage for subpage Qu Wenruo
2022-07-26 18:08   ` David Sterba
2022-07-19  7:24 ` [PATCH v2 5/5] btrfs: scrub: remove scrub_sector::page and use scrub_block::pages instead Qu Wenruo
2022-08-05  6:32 ` Qu Wenruo [this message]
2022-09-06 16:52   ` [PATCH v2 0/5] btrfs: scrub: make scrub use less memory for metadata scrub David Sterba
