Linux Btrfs filesystem development
From: Qu Wenruo <quwenruo.btrfs@gmx.com>
To: dsterba@suse.cz
Cc: Qu Wenruo <wqu@suse.com>, linux-btrfs@vger.kernel.org
Subject: Re: [PATCH] btrfs: shrink the size of btrfs_bio
Date: Tue, 9 Dec 2025 07:23:54 +1030	[thread overview]
Message-ID: <b77a5a1f-3c8c-4a43-bd1a-bc392baeecee@gmx.com> (raw)
In-Reply-To: <20251208204420.GD4859@twin.jikos.cz>



On 2025/12/9 07:14, David Sterba wrote:
> On Tue, Dec 09, 2025 at 06:56:47AM +1030, Qu Wenruo wrote:
>>
>>
>> On 2025/12/9 05:49, David Sterba wrote:
>>> On Fri, Dec 05, 2025 at 06:34:30PM +1030, Qu Wenruo wrote:
>>>> This is done by:
>>>>
>>>> - Shrink the size of btrfs_bio::mirror_num
>>>>     From a 32-bit unsigned int to an 8-bit u8.
>>>
>>> What is the explanation for this? IIRC the mirror num on raid56 refers
>>> to the device index,
>>
>> You're right, u8 cannot cover the max number of devices for RAID6.
>> (RAID5 only has two mirrors: mirror 0 means reading from the data
>> stripes, mirror 1 means rebuilding from the other data and the P stripe.)
>>
>> BTRFS_MAX_DEVICES() is around 500 for the default 16K node size, which
>> is already beyond 255.
>>
>> Although in the real world it will hardly go to that extreme, without
>> proper rejection/sanity checks we cannot do the shrink now.
>>
>> I'd like to limit the device number to something more realistic.
>> Would a device limit of 32 be enough for both RAID5 and RAID6?
>> (And maybe apply this limit to RAID10/RAID0 too?)
>>
>> Or someone would prefer more devices?
> 
> I'd rather not add such an artificial limit; I find 32 too small anyway.
> Using, say, 200+ devices will likely hit other boundaries, like fitting
> items into some structures, or performance limits, but this does not
> justify shrinking a data structure member to u8/1 byte.
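
For context, a minimal sketch of the field in question and why u8 is too
small for the RAID56 case discussed above. Illustrative only, not the
actual struct btrfs_bio layout:

  #include <linux/types.h>	/* u8/u16 */

  /* Illustrative sketch, not the real struct btrfs_bio. */
  struct btrfs_bio_sketch {
          /*
           * For RAID1/DUP profiles mirror_num is a small copy number,
           * but on RAID56 it maps to a stripe/device index within the
           * chunk, so its range scales with the number of devices in a
           * chunk (~500 at the default 16K node size, as noted above).
           */
          u16 mirror_num;	/* u8 would overflow past 255 devices per chunk */
  };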

By limiting I mean limiting the number of devices per chunk, not the
total number of devices in the filesystem.

We can still have any number of devices overall (no real limit), but a
RAID0/RAID10/RAID5/RAID6 chunk shouldn't span that many devices anyway.

With that limit, things will work like this:

  The fs has 64/128 or whatever number of devices, but when allocating a
  RAID0/5/6 chunk, only 32 devices can be added to that chunk.

This should not make any practical difference, as a 32-device stripe is
already so wide that adding more devices to a RAID0 brings no real gain.
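
Roughly the kind of clamp I have in mind, as a sketch only (the name
BTRFS_MAX_DEVS_PER_CHUNK is made up here, and the real chunk allocator
in volumes.c is structured differently):

  /* Sketch: cap striped profiles at a saner per-chunk device count.
   * Assumes the BTRFS_BLOCK_GROUP_* flags from the btrfs UAPI headers
   * and the kernel min() helper.
   */
  #define BTRFS_MAX_DEVS_PER_CHUNK	32	/* illustrative value */

  static int clamp_chunk_ndevs(u64 type, int ndevs)
  {
          const u64 striped = BTRFS_BLOCK_GROUP_RAID0 |
                              BTRFS_BLOCK_GROUP_RAID10 |
                              BTRFS_BLOCK_GROUP_RAID5 |
                              BTRFS_BLOCK_GROUP_RAID6;

          /* Other profiles (SINGLE/DUP/RAID1*) already have small widths. */
          if (type & striped)
                  return min(ndevs, BTRFS_MAX_DEVS_PER_CHUNK);
          return ndevs;
  }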

> 
> With u16 and 16K devices this sounds future-proof enough, and we may use
> u16 in the structures to save bytes (although it generates slightly worse
> code).

16K devices in a single chunk is already impossible. I'm fine with u16,
but I would really prefer saner default limits.
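
For the record, the rough arithmetic behind the ~500 figure above (sizes
quoted from memory, so treat them as approximate):

  /*
   * A chunk item must fit into one leaf.  Assuming
   * sizeof(struct btrfs_stripe) == 32 and a chunk item header of ~80
   * bytes, a 16K leaf holds roughly
   *
   *     (16384 - leaf/item overhead - 80) / 32 + 1  ~=  500 stripes,
   *
   * so a single chunk tops out around 500 devices at the default node
   * size.  16K devices per chunk can never happen, and u16 leaves
   * plenty of headroom.
   */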

Thanks,
Qu

