From: Wang Yugui <wangyugui@e16-tech.com>
To: dsterba@suse.cz, Stefan Roesch <shr@fb.com>,
linux-btrfs@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH v6 1/4] btrfs: store chunk size in space-info struct.
Date: Fri, 29 Jul 2022 12:44:53 +0800
Message-ID: <20220729124452.A58B.409509F4@e16-tech.com>
In-Reply-To: <20220729111400.658E.409509F4@e16-tech.com>
Hi,
I'm sorry, this folded patch is wrong, because init_alloc_chunk_ctl() is not
called during the btrfs mount stage.
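
For reference, my rough understanding of the call path, paraphrased from
fs/btrfs/volumes.c (the exact chain may differ):

	btrfs_create_chunk()	/* runs on chunk allocation, not at mount */
	  -> init_alloc_chunk_ctl()
	       -> init_alloc_chunk_ctl_policy_regular()

With btrfs_update_space_info_chunk_size() moved into init_alloc_chunk_ctl(),
space_info->chunk_size would stay unset between mount and the first chunk
allocation, so the sysfs knob would report a wrong value until then.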
Best Regards
Wang Yugui (wangyugui@e16-tech.com)
2022/07/29
> Hi,
>
> attachment file (folded-v3.patch):
>
> changes since v2:
> move the call to btrfs_update_space_info_chunk_size() into init_alloc_chunk_ctl();
> drop the added check of BTRFS_MAX_DEVS_SYS_CHUNK, since that check is already done
>
> changes since v1:
> move ASSERT(space_info) into btrfs_update_space_info_chunk_size();
>
> Best Regards
> Wang Yugui (wangyugui@e16-tech.com)
> 2022/07/29
>
> > Hi,
> >
> > attachment file (folded-v2.patch):
> >
> > changes:
> > move ASSERT(space_info) into btrfs_update_space_info_chunk_size();
> >
> > Best Regards
> > Wang Yugui (wangyugui@e16-tech.com)
> > 2022/07/29
> >
> > > Hi,
> > >
> > > I tried to fold my fix/comment into the attachment file (folded.patch).
> > >
> > > Only the following is new; the rest is the original feature or refactoring.
> > >
> > > +	if (ctl->max_stripe_size > ctl->max_chunk_size)
> > > +		ctl->max_stripe_size = ctl->max_chunk_size;
> > >
> > > Best Regards
> > > Wang Yugui (wangyugui@e16-tech.com)
> > > 2022/07/29
> > >
> > > > On Tue, Jul 26, 2022 at 06:25:38AM +0800, Wang Yugui wrote:
> > > > > > On Sat, Jul 23, 2022 at 07:49:37AM +0800, Wang Yugui wrote:
> > > > > > > In this patch, the max chunk size is changed from
> > > > > > > BTRFS_MAX_DATA_CHUNK_SIZE (10G) to SZ_1G without any comment?
> > > > > >
> > > > > > The patch hasn't been merged; the change from 1G to 10G won't happen
> > > > > > without proper evaluation. The sysfs knob is available for users who want
> > > > > > to test it or know that the non-default value works in their
> > > > > > environment.
> > > > >
> > > > > this patch is now in misc-next (5.19.0-rc8 based, previously 5.19.0-rc7 based).
> > > > >
> > > > > 5.19.0-rc8 based:
> > > > > f6fca3917b4d btrfs: store chunk size in space-info struct
> > > > >
> > > > > The sysfs knob shows that the current default chunk size is 1G, not 10G
> > > > > as in the older version.
> > > >
> > > > So there are two things regarding chunk size: the default size, and the fact
> > > > that it's settable by the user (with some limitations). I was replying to the
> > > > default size change, while you are concerned about the max_chunk_size.
> > > >
> > > > You're right that the value changed in the patch, but as I read the code it
> > > > should not have any effect. When a user sets a value in
> > > > btrfs_chunk_size_store(), it is limited inside the sysfs handler to 10G.
> > > > There are also various adjustments when the chunk size is initialized
> > > > (init_alloc_chunk_ctl_policy_regular()).
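
(The sysfs limiting mentioned above is, paraphrased from memory rather than
copied from the tree, roughly:

	/* btrfs_chunk_size_store(), rough paraphrase */
	val = memparse(buf, NULL);
	if (!val || val > BTRFS_MAX_DATA_CHUNK_SIZE)	/* reject anything above 10G */
		return -EINVAL;
	btrfs_update_space_info_chunk_size(space_info, val);

so whatever a user writes there cannot end up above the 10G limit.)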
> > > >
> > > > The only difference I can see comparing master and misc-next is in
> > > > decide_stripe_size_regular()
> > > >
> > > > 5259 /*
> > > > 5260 * Use the number of data stripes to figure out how big this chunk is
> > > > 5261 * really going to be in terms of logical address space, and compare
> > > > 5262 * that answer with the max chunk size. If it's higher, we try to
> > > > 5263 * reduce stripe_size.
> > > > 5264 */
> > > > 5265 if (ctl->stripe_size * data_stripes > ctl->max_chunk_size) {
> > > > ^^^^
> > > > 5266 /*
> > > > 5267 * Reduce stripe_size, round it up to a 16MB boundary again and
> > > > 5268 * then use it, unless it ends up being even bigger than the
> > > > 5269 * previous value we had already.
> > > > 5270 */
> > > > 5271 ctl->stripe_size = min(round_up(div_u64(ctl->max_chunk_size,
> > > > 5272 data_stripes), SZ_16M),
> > > > 5273 ctl->stripe_size);
> > > > 5274 }
> > > >
> > > > Here it could lead to a different stripe_size when max_chunk_size is 1G vs
> > > > 10G, though the other adjustments could change the upper value.
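
To put numbers on the above (4-device raid0 data chunk, devices large enough
that stripe_size starts at the 1G max_stripe_size; my arithmetic only, not
tested):

	data_stripes = 4, stripe_size = 1G
	max_chunk_size = 10G: 4 * 1G <= 10G, no reduction             -> 4G chunk
	max_chunk_size =  1G: 4 * 1G >  1G,
	    stripe_size = round_up(1G / 4, 16M) = 256M                -> 1G chunk

So the 1G value does shrink both stripe_size and the resulting data chunk in
this case.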
> > >
> >
>
Thread overview: 15+ messages
2021-12-03 22:04 [PATCH v6 0/4] btrfs: sysfs: set / query btrfs chunk size Stefan Roesch
2021-12-03 22:04 ` [PATCH v6 1/4] btrfs: store chunk size in space-info struct Stefan Roesch
2022-07-22 23:49 ` Wang Yugui
2022-07-25 13:41 ` David Sterba
2022-07-25 22:25 ` Wang Yugui
2022-07-26 17:08 ` David Sterba
2022-07-29 1:23 ` Wang Yugui
2022-07-29 2:05 ` Wang Yugui
2022-07-29 3:14 ` Wang Yugui
2022-07-29 4:44 ` Wang Yugui [this message]
2022-08-18 6:32 ` Qu Wenruo
2021-12-03 22:04 ` [PATCH v6 2/4] btrfs: expose chunk size in sysfs Stefan Roesch
2021-12-03 22:04 ` [PATCH v6 3/4] btrfs: add force_chunk_alloc sysfs entry to force allocation Stefan Roesch
2021-12-03 22:04 ` [PATCH v6 4/4] btrfs: increase metadata alloc size to 5GB for volumes > 50GB Stefan Roesch
2022-08-18 10:40 ` [PATCH v6 0/4] btrfs: sysfs: set / query btrfs chunk size Qu Wenruo