linux-btrfs.vger.kernel.org archive mirror
* [PATCH 0/5] btrfs: scrub: make scrub uses less memory for metadata scrub
@ 2022-03-02  8:44 Qu Wenruo
  2022-03-07 16:36 ` David Sterba
  2022-05-27 15:37 ` David Sterba
  0 siblings, 2 replies; 5+ messages in thread
From: Qu Wenruo @ 2022-03-02  8:44 UTC (permalink / raw)
  To: linux-btrfs

Although btrfs scrub has supported subpage since day one, it has a small
pitfall:

  Scrub always allocates a full page for each sector.

This increases memory usage; while not a big deal, it is still not
ideal.

This patchset changes the behavior by putting all pages into
scrub_block::pages[], instead of using scrub_sector::page.

A scrub_sector no longer holds a page pointer; instead it uses its
logical bytenr to calculate which page, and which range inside that
page, it should use.
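
To illustrate the idea, here is a standalone sketch (my own illustration,
not the patch code; the function names, field layout and the 64K page
size are assumptions for the example only). A sector locates its backing
entry in scrub_block::pages[] and the offset inside that page purely from
the distance between its logical bytenr and the block's logical start:

#include <stdio.h>
#include <stdint.h>

#define EXAMPLE_PAGE_SIZE   (64 * 1024)   /* 64K pages, e.g. 64K-page arm64/ppc64 */

/* Which entry of scrub_block::pages[] backs this sector. */
static unsigned int sector_page_index(uint64_t block_logical,
                                      uint64_t sector_logical)
{
        return (sector_logical - block_logical) / EXAMPLE_PAGE_SIZE;
}

/* Byte offset of the sector inside that page. */
static unsigned int sector_page_offset(uint64_t block_logical,
                                       uint64_t sector_logical)
{
        return (sector_logical - block_logical) % EXAMPLE_PAGE_SIZE;
}

int main(void)
{
        uint64_t block = 128 * 1024 * 1024;     /* tree block starts at 128MiB */
        uint64_t sector = block + 20 * 1024;    /* its 6th 4K sector */

        /* Prints "pages[0], offset 20480" for this layout. */
        printf("pages[%u], offset %u\n",
               sector_page_index(block, sector),
               sector_page_offset(block, sector));
        return 0;
}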

Unfortunately this only reduces memory usage for metadata scrub, which
works in nodesize units.

In the best case, 64K nodesize with 64K page size, we waste no memory
scrubbing one tree block.

In the worst case, 4K nodesize with 64K page size, we are no worse off
than the existing behavior (still one 64K page for the tree block).

In the default case (16K nodesize), we use one 64K page, compared to
4x 64K pages previously (one page per 4K sector).
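
To put rough numbers on the default case (my own back-of-the-envelope
arithmetic, assuming 4K sectorsize and 64K pages):

  sectors per 16K tree block         = 16K / 4K            = 4
  old allocation (one page/sector)   = 4 * 64K             = 256K
  new allocation (shared pages)      = round_up(16K, 64K)  = 64K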

Data scrubbing uses sectorsize, so it sees no difference. More work is
needed on the data scrubbing side to properly handle multiple sectors
for non-RAID56 profiles.

The patchset requires the rename patchset.
(https://lore.kernel.org/linux-btrfs/cover.1645530899.git.wqu@suse.com/)

If David is not happy with the big change again, at least the first 3
patches can be taken as cleanups.

Qu Wenruo (5):
  btrfs: scrub: use pointer array to replace @sblocks_for_recheck
  btrfs: extract the initialization of scrub_block into a helper
    function
  btrfs: extract the allocation and initialization of scrub_sector into
    a helper
  btrfs: scrub: introduce scrub_block::pages for more efficient memory
    usage for subpage
  btrfs: scrub: remove scrub_sector::page and use scrub_block::pages
    instead

 fs/btrfs/scrub.c | 399 ++++++++++++++++++++++++++++++++---------------
 1 file changed, 270 insertions(+), 129 deletions(-)

-- 
2.35.1



* Re: [PATCH 0/5] btrfs: scrub: make scrub uses less memory for metadata scrub
  2022-03-02  8:44 [PATCH 0/5] btrfs: scrub: make scrub uses less memory for metadata scrub Qu Wenruo
@ 2022-03-07 16:36 ` David Sterba
  2022-03-08  3:49   ` Qu Wenruo
  2022-05-27 15:37 ` David Sterba
  1 sibling, 1 reply; 5+ messages in thread
From: David Sterba @ 2022-03-07 16:36 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Wed, Mar 02, 2022 at 04:44:03PM +0800, Qu Wenruo wrote:
> Although btrfs scrub has supported subpage since day one, it has a small
> pitfall:
> 
>   Scrub always allocates a full page for each sector.
> 
> This increases memory usage; while not a big deal, it is still not
> ideal.
> 
> This patchset changes the behavior by putting all pages into
> scrub_block::pages[], instead of using scrub_sector::page.
> 
> A scrub_sector no longer holds a page pointer; instead it uses its
> logical bytenr to calculate which page, and which range inside that
> page, it should use.
> 
> Unfortunately this only reduces memory usage for metadata scrub, which
> works in nodesize units.
> 
> In the best case, 64K nodesize with 64K page size, we waste no memory
> scrubbing one tree block.
> 
> In the worst case, 4K nodesize with 64K page size, we are no worse off
> than the existing behavior (still one 64K page for the tree block).
> 
> In the default case (16K nodesize), we use one 64K page, compared to
> 4x 64K pages previously (one page per 4K sector).
> 
> Data scrubbing uses sectorsize, so it sees no difference. More work is
> needed on the data scrubbing side to properly handle multiple sectors
> for non-RAID56 profiles.
> 
> The patchset requires the rename patchset.
> (https://lore.kernel.org/linux-btrfs/cover.1645530899.git.wqu@suse.com/)
> 
> If David is not happy with the big change again, at least the first 3
> patches can be taken as cleanups.
> 
> Qu Wenruo (5):
>   btrfs: scrub: use pointer array to replace @sblocks_for_recheck
>   btrfs: extract the initialization of scrub_block into a helper
>     function
>   btrfs: extract the allocation and initialization of scrub_sector into
>     a helper
>   btrfs: scrub: introduce scrub_block::pages for more efficient memory
>     usage for subpage
>   btrfs: scrub: remove scrub_sector::page and use scrub_block::pages
>     instead

Added to for-next as topic branch for now.


* Re: [PATCH 0/5] btrfs: scrub: make scrub uses less memory for metadata scrub
  2022-03-07 16:36 ` David Sterba
@ 2022-03-08  3:49   ` Qu Wenruo
  2022-03-14 19:41     ` David Sterba
  0 siblings, 1 reply; 5+ messages in thread
From: Qu Wenruo @ 2022-03-08  3:49 UTC (permalink / raw)
  To: dsterba, Qu Wenruo, linux-btrfs



On 2022/3/8 00:36, David Sterba wrote:
> On Wed, Mar 02, 2022 at 04:44:03PM +0800, Qu Wenruo wrote:
>> Although btrfs scrub has supported subpage since day one, it has a small
>> pitfall:
>>
>>   Scrub always allocates a full page for each sector.
>>
>> This increases memory usage; while not a big deal, it is still not
>> ideal.
>>
>> This patchset changes the behavior by putting all pages into
>> scrub_block::pages[], instead of using scrub_sector::page.
>>
>> A scrub_sector no longer holds a page pointer; instead it uses its
>> logical bytenr to calculate which page, and which range inside that
>> page, it should use.
>>
>> Unfortunately this only reduces memory usage for metadata scrub, which
>> works in nodesize units.
>>
>> In the best case, 64K nodesize with 64K page size, we waste no memory
>> scrubbing one tree block.
>>
>> In the worst case, 4K nodesize with 64K page size, we are no worse off
>> than the existing behavior (still one 64K page for the tree block).
>>
>> In the default case (16K nodesize), we use one 64K page, compared to
>> 4x 64K pages previously (one page per 4K sector).
>>
>> Data scrubbing uses sectorsize, so it sees no difference. More work is
>> needed on the data scrubbing side to properly handle multiple sectors
>> for non-RAID56 profiles.
>>
>> The patchset requires the rename patchset.
>> (https://lore.kernel.org/linux-btrfs/cover.1645530899.git.wqu@suse.com/)
>>
>> If David is not happy with the big change again, at least the first 3
>> patches can be taken as cleanups.
>>
>> Qu Wenruo (5):
>>    btrfs: scrub: use pointer array to replace @sblocks_for_recheck
>>    btrfs: extract the initialization of scrub_block into a helper
>>      function
>>    btrfs: extract the allocation and initialization of scrub_sector into
>>      a helper
>>    btrfs: scrub: introduce scrub_block::pages for more efficient memory
>>      usage for subpage
>>    btrfs: scrub: remove scrub_sector::page and use scrub_block::pages
>>      instead
>
> Added to for-next as topic branch for now.

I guess you replied to the wrong patch?

The for-next branch only contains the scrub entrance refactor v3.

Neither the renaming nor the subpage optimization.

Thanks,
Qu


* Re: [PATCH 0/5] btrfs: scrub: make scrub uses less memory for metadata scrub
  2022-03-08  3:49   ` Qu Wenruo
@ 2022-03-14 19:41     ` David Sterba
  0 siblings, 0 replies; 5+ messages in thread
From: David Sterba @ 2022-03-14 19:41 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: dsterba, Qu Wenruo, linux-btrfs

On Tue, Mar 08, 2022 at 11:49:08AM +0800, Qu Wenruo wrote:
> >> The patchset requires the rename patchset.
> >> (https://lore.kernel.org/linux-btrfs/cover.1645530899.git.wqu@suse.com/)
> >>
> >> If David is not happy with the big change again, at least the first 3
> >> patches can be taken as cleanups.
> >>
> >> Qu Wenruo (5):
> >>    btrfs: scrub: use pointer array to replace @sblocks_for_recheck
> >>    btrfs: extract the initialization of scrub_block into a helper
> >>      function
> >>    btrfs: extract the allocation and initialization of scrub_sector into
> >>      a helper
> >>    btrfs: scrub: introduce scrub_block::pages for more efficient memory
> >>      usage for subpage
> >>    btrfs: scrub: remove scrub_sector::page and use scrub_block::pages
> >>      instead
> >
> > Added to for-next as topic branch for now.
> 
> I guess you replied to the wrong patch?

Yeah.

> The for-next branch only contains the scrub entrance refactor v3.
> 
> Neither the renaming nor the subpage optimization.

The scrub renaming is in misc-next; please refresh this patchset,
thanks.


* Re: [PATCH 0/5] btrfs: scrub: make scrub uses less memory for metadata scrub
  2022-03-02  8:44 [PATCH 0/5] btrfs: scrub: make scrub uses less memory for metadata scrub Qu Wenruo
  2022-03-07 16:36 ` David Sterba
@ 2022-05-27 15:37 ` David Sterba
  1 sibling, 0 replies; 5+ messages in thread
From: David Sterba @ 2022-05-27 15:37 UTC (permalink / raw)
  To: Qu Wenruo; +Cc: linux-btrfs

On Wed, Mar 02, 2022 at 04:44:03PM +0800, Qu Wenruo wrote:
> Although btrfs scrub has supported subpage since day one, it has a small
> pitfall:
> 
>   Scrub always allocates a full page for each sector.
> 
> This increases memory usage; while not a big deal, it is still not
> ideal.
> 
> This patchset changes the behavior by putting all pages into
> scrub_block::pages[], instead of using scrub_sector::page.
> 
> A scrub_sector no longer holds a page pointer; instead it uses its
> logical bytenr to calculate which page, and which range inside that
> page, it should use.
> 
> Unfortunately this only reduces memory usage for metadata scrub, which
> works in nodesize units.
> 
> In the best case, 64K nodesize with 64K page size, we waste no memory
> scrubbing one tree block.
> 
> In the worst case, 4K nodesize with 64K page size, we are no worse off
> than the existing behavior (still one 64K page for the tree block).
> 
> In the default case (16K nodesize), we use one 64K page, compared to
> 4x 64K pages previously (one page per 4K sector).
> 
> Data scrubbing uses sectorsize, so it sees no difference. More work is
> needed on the data scrubbing side to properly handle multiple sectors
> for non-RAID56 profiles.
> 
> The patchset requires the rename patchset.
> (https://lore.kernel.org/linux-btrfs/cover.1645530899.git.wqu@suse.com/)
> 
> If David is not happy with the big change again, at least the first 3
> patches can be taken as cleanups.
> 
> Qu Wenruo (5):
>   btrfs: scrub: use pointer array to replace @sblocks_for_recheck
>   btrfs: extract the initialization of scrub_block into a helper
>     function
>   btrfs: extract the allocation and initialization of scrub_sector into
>     a helper
>   btrfs: scrub: introduce scrub_block::pages for more efficient memory
>     usage for subpage
>   btrfs: scrub: remove scrub_sector::page and use scrub_block::pages
>     instead

This patchset still seems relevant, but does not apply after recent
changes. Please refresh and resend, thanks.

