* [PATCH 0/4] btrfs: some cleanups to the block group size class code
@ 2026-01-20 12:25 fdmanana
2026-01-20 12:25 ` [PATCH 1/4] btrfs: make load_block_group_size_class() return void fdmanana
` (6 more replies)
0 siblings, 7 replies; 14+ messages in thread
From: fdmanana @ 2026-01-20 12:25 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
Simple changes, details in the change logs.
Filipe Manana (4):
btrfs: make load_block_group_size_class() return void
btrfs: allocate path in load_block_group_size_class()
btrfs: don't pass block group argument to load_block_group_size_class()
btrfs: assert block group is locked in btrfs_use_block_group_size_class()
fs/btrfs/block-group.c | 52 ++++++++++++++++++++++--------------------
1 file changed, 27 insertions(+), 25 deletions(-)
--
2.47.2
^ permalink raw reply [flat|nested] 14+ messages in thread
* [PATCH 1/4] btrfs: make load_block_group_size_class() return void
2026-01-20 12:25 [PATCH 0/4] btrfs: some cleanups to the block group size class code fdmanana
@ 2026-01-20 12:25 ` fdmanana
2026-01-20 12:25 ` [PATCH 2/4] btrfs: allocate path in load_block_group_size_class() fdmanana
` (5 subsequent siblings)
6 siblings, 0 replies; 14+ messages in thread
From: fdmanana @ 2026-01-20 12:25 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
There's no point in returning anything since determining and setting a
size class for a block group is an optimization, not something critical.
The only caller of load_block_group_size_class() (the caching thread)
does not do anything with the return value anyway, exactly because having
a size class is just an optimization and it can always be set later when
adding reserved bytes to a block group (btrfs_add_reserved_bytes()).
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/block-group.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 4fc4d49910bf..343c29344484 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -673,27 +673,29 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
* 3, we can either read every file extent, or admit that this is best effort
* anyway and try to stay fast.
*
- * Returns: 0 on success, negative error code on error.
+ * No errors are returned since failing to determine the size class is not a
+ * critical error, size classes are just an optimization.
*/
-static int load_block_group_size_class(struct btrfs_caching_control *caching_ctl,
- struct btrfs_block_group *block_group)
+static void load_block_group_size_class(struct btrfs_caching_control *caching_ctl,
+ struct btrfs_block_group *block_group)
{
struct btrfs_fs_info *fs_info = block_group->fs_info;
struct btrfs_key key;
int i;
u64 min_size = block_group->length;
enum btrfs_block_group_size_class size_class = BTRFS_BG_SZ_NONE;
- int ret;
if (!btrfs_block_group_should_use_size_class(block_group))
- return 0;
+ return;
lockdep_assert_held(&caching_ctl->mutex);
lockdep_assert_held_read(&fs_info->commit_root_sem);
for (i = 0; i < 5; ++i) {
+ int ret;
+
ret = sample_block_group_extent_item(caching_ctl, block_group, i, 5, &key);
if (ret < 0)
- goto out;
+ return;
if (ret > 0)
continue;
min_size = min_t(u64, min_size, key.offset);
@@ -704,8 +706,6 @@ static int load_block_group_size_class(struct btrfs_caching_control *caching_ctl
block_group->size_class = size_class;
spin_unlock(&block_group->lock);
}
-out:
- return ret;
}
static int load_extent_tree_free(struct btrfs_caching_control *caching_ctl)
--
2.47.2
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 2/4] btrfs: allocate path in load_block_group_size_class()
2026-01-20 12:25 [PATCH 0/4] btrfs: some cleanups to the block group size class code fdmanana
2026-01-20 12:25 ` [PATCH 1/4] btrfs: make load_block_group_size_class() return void fdmanana
@ 2026-01-20 12:25 ` fdmanana
2026-01-20 20:46 ` Qu Wenruo
2026-01-20 12:25 ` [PATCH 3/4] btrfs: don't pass block group argument to load_block_group_size_class() fdmanana
` (4 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: fdmanana @ 2026-01-20 12:25 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
Instead of allocating and freeing a path in every iteration of
load_block_group_size_class(), through its helper function
sample_block_group_extent_item(), allocate the path in the former and
pass it to the latter.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/block-group.c | 32 +++++++++++++++++---------------
1 file changed, 17 insertions(+), 15 deletions(-)
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 343c29344484..a7828673be39 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -579,24 +579,24 @@ int btrfs_add_new_free_space(struct btrfs_block_group *block_group, u64 start,
* @index: the integral step through the block group to grab from
* @max_index: the granularity of the sampling
* @key: return value parameter for the item we find
+ * @path: path to use for searching in the extent tree
*
* Pre-conditions on indices:
* 0 <= index <= max_index
* 0 < max_index
*
- * Returns: 0 on success, 1 if the search didn't yield a useful item, negative
- * error code on error.
+ * Returns: 0 on success, 1 if the search didn't yield a useful item.
*/
static int sample_block_group_extent_item(struct btrfs_caching_control *caching_ctl,
struct btrfs_block_group *block_group,
int index, int max_index,
- struct btrfs_key *found_key)
+ struct btrfs_key *found_key,
+ struct btrfs_path *path)
{
struct btrfs_fs_info *fs_info = block_group->fs_info;
struct btrfs_root *extent_root;
u64 search_offset;
const u64 search_end = btrfs_block_group_end(block_group);
- BTRFS_PATH_AUTO_FREE(path);
struct btrfs_key search_key;
int ret = 0;
@@ -606,17 +606,9 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
lockdep_assert_held(&caching_ctl->mutex);
lockdep_assert_held_read(&fs_info->commit_root_sem);
- path = btrfs_alloc_path();
- if (!path)
- return -ENOMEM;
-
extent_root = btrfs_extent_root(fs_info, max_t(u64, block_group->start,
BTRFS_SUPER_INFO_OFFSET));
- path->skip_locking = true;
- path->search_commit_root = true;
- path->reada = READA_FORWARD;
-
search_offset = index * div_u64(block_group->length, max_index);
search_key.objectid = block_group->start + search_offset;
search_key.type = BTRFS_EXTENT_ITEM_KEY;
@@ -679,6 +671,7 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
static void load_block_group_size_class(struct btrfs_caching_control *caching_ctl,
struct btrfs_block_group *block_group)
{
+ BTRFS_PATH_AUTO_FREE(path);
struct btrfs_fs_info *fs_info = block_group->fs_info;
struct btrfs_key key;
int i;
@@ -688,14 +681,23 @@ static void load_block_group_size_class(struct btrfs_caching_control *caching_ct
if (!btrfs_block_group_should_use_size_class(block_group))
return;
+ path = btrfs_alloc_path();
+ if (!path)
+ return;
+
+ path->skip_locking = true;
+ path->search_commit_root = true;
+ path->reada = READA_FORWARD;
+
lockdep_assert_held(&caching_ctl->mutex);
lockdep_assert_held_read(&fs_info->commit_root_sem);
for (i = 0; i < 5; ++i) {
int ret;
- ret = sample_block_group_extent_item(caching_ctl, block_group, i, 5, &key);
- if (ret < 0)
- return;
+ ret = sample_block_group_extent_item(caching_ctl, block_group,
+ i, 5, &key, path);
+ btrfs_release_path(path);
+ ASSERT(ret >= 0);
if (ret > 0)
continue;
min_size = min_t(u64, min_size, key.offset);
--
2.47.2
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 3/4] btrfs: don't pass block group argument to load_block_group_size_class()
2026-01-20 12:25 [PATCH 0/4] btrfs: some cleanups to the block group size class code fdmanana
2026-01-20 12:25 ` [PATCH 1/4] btrfs: make load_block_group_size_class() return void fdmanana
2026-01-20 12:25 ` [PATCH 2/4] btrfs: allocate path in load_block_group_size_class() fdmanana
@ 2026-01-20 12:25 ` fdmanana
2026-01-20 12:25 ` [PATCH 4/4] btrfs: assert block group is locked in btrfs_use_block_group_size_class() fdmanana
` (3 subsequent siblings)
6 siblings, 0 replies; 14+ messages in thread
From: fdmanana @ 2026-01-20 12:25 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
There's no need to pass the block group since we can extract it from
the given caching control structure. Same goes for its helper function
sample_block_group_extent_item().
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/block-group.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index a7828673be39..82c488a4b54c 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -575,7 +575,7 @@ int btrfs_add_new_free_space(struct btrfs_block_group *block_group, u64 start,
/*
* Get an arbitrary extent item index / max_index through the block group
*
- * @block_group the block group to sample from
+ * @caching_ctl the caching control containing the block group to sample from
* @index: the integral step through the block group to grab from
* @max_index: the granularity of the sampling
* @key: return value parameter for the item we find
@@ -588,11 +588,11 @@ int btrfs_add_new_free_space(struct btrfs_block_group *block_group, u64 start,
* Returns: 0 on success, 1 if the search didn't yield a useful item.
*/
static int sample_block_group_extent_item(struct btrfs_caching_control *caching_ctl,
- struct btrfs_block_group *block_group,
int index, int max_index,
struct btrfs_key *found_key,
struct btrfs_path *path)
{
+ struct btrfs_block_group *block_group = caching_ctl->block_group;
struct btrfs_fs_info *fs_info = block_group->fs_info;
struct btrfs_root *extent_root;
u64 search_offset;
@@ -668,10 +668,10 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
* No errors are returned since failing to determine the size class is not a
* critical error, size classes are just an optimization.
*/
-static void load_block_group_size_class(struct btrfs_caching_control *caching_ctl,
- struct btrfs_block_group *block_group)
+static void load_block_group_size_class(struct btrfs_caching_control *caching_ctl)
{
BTRFS_PATH_AUTO_FREE(path);
+ struct btrfs_block_group *block_group = caching_ctl->block_group;
struct btrfs_fs_info *fs_info = block_group->fs_info;
struct btrfs_key key;
int i;
@@ -694,8 +694,7 @@ static void load_block_group_size_class(struct btrfs_caching_control *caching_ct
for (i = 0; i < 5; ++i) {
int ret;
- ret = sample_block_group_extent_item(caching_ctl, block_group,
- i, 5, &key, path);
+ ret = sample_block_group_extent_item(caching_ctl, i, 5, &key, path);
btrfs_release_path(path);
ASSERT(ret >= 0);
if (ret > 0)
@@ -864,7 +863,7 @@ static noinline void caching_thread(struct btrfs_work *work)
mutex_lock(&caching_ctl->mutex);
down_read(&fs_info->commit_root_sem);
- load_block_group_size_class(caching_ctl, block_group);
+ load_block_group_size_class(caching_ctl);
if (btrfs_test_opt(fs_info, SPACE_CACHE)) {
ret = load_free_space_cache(block_group);
if (ret == 1) {
--
2.47.2
^ permalink raw reply related [flat|nested] 14+ messages in thread
* [PATCH 4/4] btrfs: assert block group is locked in btrfs_use_block_group_size_class()
2026-01-20 12:25 [PATCH 0/4] btrfs: some cleanups to the block group size class code fdmanana
` (2 preceding siblings ...)
2026-01-20 12:25 ` [PATCH 3/4] btrfs: don't pass block group argument to load_block_group_size_class() fdmanana
@ 2026-01-20 12:25 ` fdmanana
2026-01-21 4:18 ` David Sterba
2026-01-20 13:16 ` [PATCH 0/4] btrfs: some cleanups to the block group size class code Johannes Thumshirn
` (2 subsequent siblings)
6 siblings, 1 reply; 14+ messages in thread
From: fdmanana @ 2026-01-20 12:25 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
It's supposed to be called with the block group locked, in order to read
and set its size_class member, so assert it's locked.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/block-group.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 82c488a4b54c..2efeb5daf45c 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -4749,6 +4749,7 @@ int btrfs_use_block_group_size_class(struct btrfs_block_group *bg,
enum btrfs_block_group_size_class size_class,
bool force_wrong_size_class)
{
+ lockdep_assert_held(bg->lock);
ASSERT(size_class != BTRFS_BG_SZ_NONE);
/* The new allocation is in the right size class, do nothing */
--
2.47.2
^ permalink raw reply related [flat|nested] 14+ messages in thread
* Re: [PATCH 0/4] btrfs: some cleanups to the block group size class code
2026-01-20 12:25 [PATCH 0/4] btrfs: some cleanups to the block group size class code fdmanana
` (3 preceding siblings ...)
2026-01-20 12:25 ` [PATCH 4/4] btrfs: assert block group is locked in btrfs_use_block_group_size_class() fdmanana
@ 2026-01-20 13:16 ` Johannes Thumshirn
2026-01-20 16:48 ` David Sterba
2026-01-20 20:47 ` Qu Wenruo
6 siblings, 0 replies; 14+ messages in thread
From: Johannes Thumshirn @ 2026-01-20 13:16 UTC (permalink / raw)
To: fdmanana@kernel.org, linux-btrfs@vger.kernel.org
looks good to me
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
for the series
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 0/4] btrfs: some cleanups to the block group size class code
2026-01-20 12:25 [PATCH 0/4] btrfs: some cleanups to the block group size class code fdmanana
` (4 preceding siblings ...)
2026-01-20 13:16 ` [PATCH 0/4] btrfs: some cleanups to the block group size class code Johannes Thumshirn
@ 2026-01-20 16:48 ` David Sterba
2026-01-20 20:47 ` Qu Wenruo
6 siblings, 0 replies; 14+ messages in thread
From: David Sterba @ 2026-01-20 16:48 UTC (permalink / raw)
To: fdmanana; +Cc: linux-btrfs
On Tue, Jan 20, 2026 at 12:25:50PM +0000, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> Simple changes, details in the change logs.
>
> Filipe Manana (4):
> btrfs: make load_block_group_size_class() return void
> btrfs: allocate path in load_block_group_size_class()
> btrfs: don't pass block group argument to load_block_group_size_class()
> btrfs: assert block group is locked in btrfs_use_block_group_size_class()
Reviewed-by: David Sterba <dsterba@suse.com>
Thanks.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/4] btrfs: allocate path in load_block_group_size_class()
2026-01-20 12:25 ` [PATCH 2/4] btrfs: allocate path in load_block_group_size_class() fdmanana
@ 2026-01-20 20:46 ` Qu Wenruo
2026-01-21 4:24 ` David Sterba
0 siblings, 1 reply; 14+ messages in thread
From: Qu Wenruo @ 2026-01-20 20:46 UTC (permalink / raw)
To: fdmanana, linux-btrfs
On 2026/1/20 22:55, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> Instead of allocating and freeing a path in every iteration of
> load_block_group_size_class(), through its helper function
> sample_block_group_extent_item(), allocate the path in the former and
> pass it to the latter.
>
> Signed-off-by: Filipe Manana <fdmanana@suse.com>
> ---
> fs/btrfs/block-group.c | 32 +++++++++++++++++---------------
> 1 file changed, 17 insertions(+), 15 deletions(-)
>
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index 343c29344484..a7828673be39 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -579,24 +579,24 @@ int btrfs_add_new_free_space(struct btrfs_block_group *block_group, u64 start,
> * @index: the integral step through the block group to grab from
> * @max_index: the granularity of the sampling
> * @key: return value parameter for the item we find
> + * @path: path to use for searching in the extent tree
> *
> * Pre-conditions on indices:
> * 0 <= index <= max_index
> * 0 < max_index
> *
> - * Returns: 0 on success, 1 if the search didn't yield a useful item, negative
> - * error code on error.
> + * Returns: 0 on success, 1 if the search didn't yield a useful item.
> */
> static int sample_block_group_extent_item(struct btrfs_caching_control *caching_ctl,
> struct btrfs_block_group *block_group,
> int index, int max_index,
> - struct btrfs_key *found_key)
> + struct btrfs_key *found_key,
> + struct btrfs_path *path)
> {
> struct btrfs_fs_info *fs_info = block_group->fs_info;
> struct btrfs_root *extent_root;
> u64 search_offset;
> const u64 search_end = btrfs_block_group_end(block_group);
> - BTRFS_PATH_AUTO_FREE(path);
> struct btrfs_key search_key;
> int ret = 0;
>
> @@ -606,17 +606,9 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
> lockdep_assert_held(&caching_ctl->mutex);
> lockdep_assert_held_read(&fs_info->commit_root_sem);
>
> - path = btrfs_alloc_path();
> - if (!path)
> - return -ENOMEM;
> -
> extent_root = btrfs_extent_root(fs_info, max_t(u64, block_group->start,
> BTRFS_SUPER_INFO_OFFSET));
>
> - path->skip_locking = true;
> - path->search_commit_root = true;
> - path->reada = READA_FORWARD;
> -
> search_offset = index * div_u64(block_group->length, max_index);
> search_key.objectid = block_group->start + search_offset;
> search_key.type = BTRFS_EXTENT_ITEM_KEY;
> @@ -679,6 +671,7 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
> static void load_block_group_size_class(struct btrfs_caching_control *caching_ctl,
> struct btrfs_block_group *block_group)
> {
> + BTRFS_PATH_AUTO_FREE(path);
> struct btrfs_fs_info *fs_info = block_group->fs_info;
> struct btrfs_key key;
> int i;
> @@ -688,14 +681,23 @@ static void load_block_group_size_class(struct btrfs_caching_control *caching_ct
> if (!btrfs_block_group_should_use_size_class(block_group))
> return;
>
> + path = btrfs_alloc_path();
> + if (!path)
> + return;
Considering the function is only called inside a workqueue, we can avoid
a memory allocation by using on-stack path, which also reduces one error
path.
Although this is a pretty minor optimization, the patch still looks good
to me.
Thanks,
Qu
> +
> + path->skip_locking = true;
> + path->search_commit_root = true;
> + path->reada = READA_FORWARD;
> +
> lockdep_assert_held(&caching_ctl->mutex);
> lockdep_assert_held_read(&fs_info->commit_root_sem);
> for (i = 0; i < 5; ++i) {
> int ret;
>
> - ret = sample_block_group_extent_item(caching_ctl, block_group, i, 5, &key);
> - if (ret < 0)
> - return;
> + ret = sample_block_group_extent_item(caching_ctl, block_group,
> + i, 5, &key, path);
> + btrfs_release_path(path);
> + ASSERT(ret >= 0);
> if (ret > 0)
> continue;
> min_size = min_t(u64, min_size, key.offset);
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 0/4] btrfs: some cleanups to the block group size class code
2026-01-20 12:25 [PATCH 0/4] btrfs: some cleanups to the block group size class code fdmanana
` (5 preceding siblings ...)
2026-01-20 16:48 ` David Sterba
@ 2026-01-20 20:47 ` Qu Wenruo
6 siblings, 0 replies; 14+ messages in thread
From: Qu Wenruo @ 2026-01-20 20:47 UTC (permalink / raw)
To: fdmanana, linux-btrfs
On 2026/1/20 22:55, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> Simple changes, details in the change logs.
Reviewed-by: Qu Wenruo <wqu@suse.com>
>
> Filipe Manana (4):
> btrfs: make load_block_group_size_class() return void
> btrfs: allocate path in load_block_group_size_class()
Just a minor comment on that we can use on-stack path, but it's more
like personal preference.
Thanks,
Qu
> btrfs: don't pass block group argument to load_block_group_size_class()
> btrfs: assert block group is locked in btrfs_use_block_group_size_class()
>
> fs/btrfs/block-group.c | 52 ++++++++++++++++++++++--------------------
> 1 file changed, 27 insertions(+), 25 deletions(-)
>
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 4/4] btrfs: assert block group is locked in btrfs_use_block_group_size_class()
2026-01-20 12:25 ` [PATCH 4/4] btrfs: assert block group is locked in btrfs_use_block_group_size_class() fdmanana
@ 2026-01-21 4:18 ` David Sterba
0 siblings, 0 replies; 14+ messages in thread
From: David Sterba @ 2026-01-21 4:18 UTC (permalink / raw)
To: fdmanana; +Cc: linux-btrfs
On Tue, Jan 20, 2026 at 12:25:54PM +0000, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> It's supposed to be called with the block group locked, in order to read
> and set its size_class member, so assert it's locked.
>
> Signed-off-by: Filipe Manana <fdmanana@suse.com>
> ---
> fs/btrfs/block-group.c | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> index 82c488a4b54c..2efeb5daf45c 100644
> --- a/fs/btrfs/block-group.c
> +++ b/fs/btrfs/block-group.c
> @@ -4749,6 +4749,7 @@ int btrfs_use_block_group_size_class(struct btrfs_block_group *bg,
> enum btrfs_block_group_size_class size_class,
> bool force_wrong_size_class)
> {
> + lockdep_assert_held(bg->lock);
lockdep_assert_held(&bg->lock);
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/4] btrfs: allocate path in load_block_group_size_class()
2026-01-20 20:46 ` Qu Wenruo
@ 2026-01-21 4:24 ` David Sterba
2026-01-21 4:39 ` Qu Wenruo
0 siblings, 1 reply; 14+ messages in thread
From: David Sterba @ 2026-01-21 4:24 UTC (permalink / raw)
To: Qu Wenruo; +Cc: fdmanana, linux-btrfs
On Wed, Jan 21, 2026 at 07:16:06AM +1030, Qu Wenruo wrote:
> On 2026/1/20 22:55, fdmanana@kernel.org wrote:
> > From: Filipe Manana <fdmanana@suse.com>
> >
> > Instead of allocating and freeing a path in every iteration of
> > load_block_group_size_class(), through its helper function
> > sample_block_group_extent_item(), allocate the path in the former and
> > pass it to the latter.
> >
> > Signed-off-by: Filipe Manana <fdmanana@suse.com>
> > ---
> > fs/btrfs/block-group.c | 32 +++++++++++++++++---------------
> > 1 file changed, 17 insertions(+), 15 deletions(-)
> >
> > diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> > index 343c29344484..a7828673be39 100644
> > --- a/fs/btrfs/block-group.c
> > +++ b/fs/btrfs/block-group.c
> > @@ -579,24 +579,24 @@ int btrfs_add_new_free_space(struct btrfs_block_group *block_group, u64 start,
> > * @index: the integral step through the block group to grab from
> > * @max_index: the granularity of the sampling
> > * @key: return value parameter for the item we find
> > + * @path: path to use for searching in the extent tree
> > *
> > * Pre-conditions on indices:
> > * 0 <= index <= max_index
> > * 0 < max_index
> > *
> > - * Returns: 0 on success, 1 if the search didn't yield a useful item, negative
> > - * error code on error.
> > + * Returns: 0 on success, 1 if the search didn't yield a useful item.
> > */
> > static int sample_block_group_extent_item(struct btrfs_caching_control *caching_ctl,
> > struct btrfs_block_group *block_group,
> > int index, int max_index,
> > - struct btrfs_key *found_key)
> > + struct btrfs_key *found_key,
> > + struct btrfs_path *path)
> > {
> > struct btrfs_fs_info *fs_info = block_group->fs_info;
> > struct btrfs_root *extent_root;
> > u64 search_offset;
> > const u64 search_end = btrfs_block_group_end(block_group);
> > - BTRFS_PATH_AUTO_FREE(path);
> > struct btrfs_key search_key;
> > int ret = 0;
> >
> > @@ -606,17 +606,9 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
> > lockdep_assert_held(&caching_ctl->mutex);
> > lockdep_assert_held_read(&fs_info->commit_root_sem);
> >
> > - path = btrfs_alloc_path();
> > - if (!path)
> > - return -ENOMEM;
> > -
> > extent_root = btrfs_extent_root(fs_info, max_t(u64, block_group->start,
> > BTRFS_SUPER_INFO_OFFSET));
> >
> > - path->skip_locking = true;
> > - path->search_commit_root = true;
> > - path->reada = READA_FORWARD;
> > -
> > search_offset = index * div_u64(block_group->length, max_index);
> > search_key.objectid = block_group->start + search_offset;
> > search_key.type = BTRFS_EXTENT_ITEM_KEY;
> > @@ -679,6 +671,7 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
> > static void load_block_group_size_class(struct btrfs_caching_control *caching_ctl,
> > struct btrfs_block_group *block_group)
> > {
> > + BTRFS_PATH_AUTO_FREE(path);
> > struct btrfs_fs_info *fs_info = block_group->fs_info;
> > struct btrfs_key key;
> > int i;
> > @@ -688,14 +681,23 @@ static void load_block_group_size_class(struct btrfs_caching_control *caching_ct
> > if (!btrfs_block_group_should_use_size_class(block_group))
> > return;
> >
> > + path = btrfs_alloc_path();
> > + if (!path)
> > + return;
>
> Considering the function is only called inside a workqueue, we can avoid
> a memory allocation by using on-stack path, which also reduces one error
> path.
As a generic pattern we could switch to on-stack variables for functions
called from workqueues, but it may not be obvious that it's OK to do
that (unlike, e.g., the compression functions).
But I'd like to have an assertion or a debug warning for that; not sure
exactly how to do it, maybe there's something in the task_struct.
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/4] btrfs: allocate path in load_block_group_size_class()
2026-01-21 4:24 ` David Sterba
@ 2026-01-21 4:39 ` Qu Wenruo
2026-01-21 10:42 ` Filipe Manana
0 siblings, 1 reply; 14+ messages in thread
From: Qu Wenruo @ 2026-01-21 4:39 UTC (permalink / raw)
To: dsterba; +Cc: fdmanana, linux-btrfs
On 2026/1/21 14:54, David Sterba wrote:
> On Wed, Jan 21, 2026 at 07:16:06AM +1030, Qu Wenruo wrote:
>> On 2026/1/20 22:55, fdmanana@kernel.org wrote:
>>> From: Filipe Manana <fdmanana@suse.com>
>>>
>>> Instead of allocating and freeing a path in every iteration of
>>> load_block_group_size_class(), through its helper function
>>> sample_block_group_extent_item(), allocate the path in the former and
>>> pass it to the latter.
>>>
>>> Signed-off-by: Filipe Manana <fdmanana@suse.com>
>>> ---
>>> fs/btrfs/block-group.c | 32 +++++++++++++++++---------------
>>> 1 file changed, 17 insertions(+), 15 deletions(-)
>>>
>>> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
>>> index 343c29344484..a7828673be39 100644
>>> --- a/fs/btrfs/block-group.c
>>> +++ b/fs/btrfs/block-group.c
>>> @@ -579,24 +579,24 @@ int btrfs_add_new_free_space(struct btrfs_block_group *block_group, u64 start,
>>> * @index: the integral step through the block group to grab from
>>> * @max_index: the granularity of the sampling
>>> * @key: return value parameter for the item we find
>>> + * @path: path to use for searching in the extent tree
>>> *
>>> * Pre-conditions on indices:
>>> * 0 <= index <= max_index
>>> * 0 < max_index
>>> *
>>> - * Returns: 0 on success, 1 if the search didn't yield a useful item, negative
>>> - * error code on error.
>>> + * Returns: 0 on success, 1 if the search didn't yield a useful item.
>>> */
>>> static int sample_block_group_extent_item(struct btrfs_caching_control *caching_ctl,
>>> struct btrfs_block_group *block_group,
>>> int index, int max_index,
>>> - struct btrfs_key *found_key)
>>> + struct btrfs_key *found_key,
>>> + struct btrfs_path *path)
>>> {
>>> struct btrfs_fs_info *fs_info = block_group->fs_info;
>>> struct btrfs_root *extent_root;
>>> u64 search_offset;
>>> const u64 search_end = btrfs_block_group_end(block_group);
>>> - BTRFS_PATH_AUTO_FREE(path);
>>> struct btrfs_key search_key;
>>> int ret = 0;
>>>
>>> @@ -606,17 +606,9 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
>>> lockdep_assert_held(&caching_ctl->mutex);
>>> lockdep_assert_held_read(&fs_info->commit_root_sem);
>>>
>>> - path = btrfs_alloc_path();
>>> - if (!path)
>>> - return -ENOMEM;
>>> -
>>> extent_root = btrfs_extent_root(fs_info, max_t(u64, block_group->start,
>>> BTRFS_SUPER_INFO_OFFSET));
>>>
>>> - path->skip_locking = true;
>>> - path->search_commit_root = true;
>>> - path->reada = READA_FORWARD;
>>> -
>>> search_offset = index * div_u64(block_group->length, max_index);
>>> search_key.objectid = block_group->start + search_offset;
>>> search_key.type = BTRFS_EXTENT_ITEM_KEY;
>>> @@ -679,6 +671,7 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
>>> static void load_block_group_size_class(struct btrfs_caching_control *caching_ctl,
>>> struct btrfs_block_group *block_group)
>>> {
>>> + BTRFS_PATH_AUTO_FREE(path);
>>> struct btrfs_fs_info *fs_info = block_group->fs_info;
>>> struct btrfs_key key;
>>> int i;
>>> @@ -688,14 +681,23 @@ static void load_block_group_size_class(struct btrfs_caching_control *caching_ct
>>> if (!btrfs_block_group_should_use_size_class(block_group))
>>> return;
>>>
>>> + path = btrfs_alloc_path();
>>> + if (!path)
>>> + return;
>>
>> Considering the function is only called inside a workqueue, we can avoid
>> a memory allocation by using on-stack path, which also reduces one error
>> path.
>
> As a generic pattern we could switch to on-stack variables for the
> functions called from workqueues but it may not be obvious that it's OK
> to do that (unlike eg. the compression functions).
>
> But I'd like to have an assertion or a debug warning for that, not sure
> how exactly to do it, maybe something is in the task_struct.
>
I was looking into that during async csum development, but I didn't find
a good way to determine whether we're in a workqueue.
The closest thing I found is in_task(), which cannot distinguish between
workqueue context and a regular userspace task that has entered the
kernel.
Maybe there is some way of poking into the current task structure, but I
didn't find a straightforward helper.
Thanks,
Qu
^ permalink raw reply [flat|nested] 14+ messages in thread
* Re: [PATCH 2/4] btrfs: allocate path in load_block_group_size_class()
2026-01-21 4:39 ` Qu Wenruo
@ 2026-01-21 10:42 ` Filipe Manana
2026-01-21 21:58 ` David Sterba
0 siblings, 1 reply; 14+ messages in thread
From: Filipe Manana @ 2026-01-21 10:42 UTC (permalink / raw)
To: Qu Wenruo; +Cc: dsterba, linux-btrfs
On Wed, Jan 21, 2026 at 4:40 AM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
>
>
>
> On 2026/1/21 14:54, David Sterba wrote:
> > On Wed, Jan 21, 2026 at 07:16:06AM +1030, Qu Wenruo wrote:
> >> On 2026/1/20 22:55, fdmanana@kernel.org wrote:
> >>> From: Filipe Manana <fdmanana@suse.com>
> >>>
> >>> Instead of allocating and freeing a path in every iteration of
> >>> load_block_group_size_class(), through its helper function
> >>> sample_block_group_extent_item(), allocate the path in the former and
> >>> pass it to the latter.
> >>>
> >>> Signed-off-by: Filipe Manana <fdmanana@suse.com>
> >>> ---
> >>> fs/btrfs/block-group.c | 32 +++++++++++++++++---------------
> >>> 1 file changed, 17 insertions(+), 15 deletions(-)
> >>>
> >>> diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
> >>> index 343c29344484..a7828673be39 100644
> >>> --- a/fs/btrfs/block-group.c
> >>> +++ b/fs/btrfs/block-group.c
> >>> @@ -579,24 +579,24 @@ int btrfs_add_new_free_space(struct btrfs_block_group *block_group, u64 start,
> >>> * @index: the integral step through the block group to grab from
> >>> * @max_index: the granularity of the sampling
> >>> * @key: return value parameter for the item we find
> >>> + * @path: path to use for searching in the extent tree
> >>> *
> >>> * Pre-conditions on indices:
> >>> * 0 <= index <= max_index
> >>> * 0 < max_index
> >>> *
> >>> - * Returns: 0 on success, 1 if the search didn't yield a useful item, negative
> >>> - * error code on error.
> >>> + * Returns: 0 on success, 1 if the search didn't yield a useful item.
> >>> */
> >>> static int sample_block_group_extent_item(struct btrfs_caching_control *caching_ctl,
> >>> struct btrfs_block_group *block_group,
> >>> int index, int max_index,
> >>> - struct btrfs_key *found_key)
> >>> + struct btrfs_key *found_key,
> >>> + struct btrfs_path *path)
> >>> {
> >>> struct btrfs_fs_info *fs_info = block_group->fs_info;
> >>> struct btrfs_root *extent_root;
> >>> u64 search_offset;
> >>> const u64 search_end = btrfs_block_group_end(block_group);
> >>> - BTRFS_PATH_AUTO_FREE(path);
> >>> struct btrfs_key search_key;
> >>> int ret = 0;
> >>>
> >>> @@ -606,17 +606,9 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
> >>> lockdep_assert_held(&caching_ctl->mutex);
> >>> lockdep_assert_held_read(&fs_info->commit_root_sem);
> >>>
> >>> - path = btrfs_alloc_path();
> >>> - if (!path)
> >>> - return -ENOMEM;
> >>> -
> >>> extent_root = btrfs_extent_root(fs_info, max_t(u64, block_group->start,
> >>> BTRFS_SUPER_INFO_OFFSET));
> >>>
> >>> - path->skip_locking = true;
> >>> - path->search_commit_root = true;
> >>> - path->reada = READA_FORWARD;
> >>> -
> >>> search_offset = index * div_u64(block_group->length, max_index);
> >>> search_key.objectid = block_group->start + search_offset;
> >>> search_key.type = BTRFS_EXTENT_ITEM_KEY;
> >>> @@ -679,6 +671,7 @@ static int sample_block_group_extent_item(struct btrfs_caching_control *caching_
> >>> static void load_block_group_size_class(struct btrfs_caching_control *caching_ctl,
> >>> struct btrfs_block_group *block_group)
> >>> {
> >>> + BTRFS_PATH_AUTO_FREE(path);
> >>> struct btrfs_fs_info *fs_info = block_group->fs_info;
> >>> struct btrfs_key key;
> >>> int i;
> >>> @@ -688,14 +681,23 @@ static void load_block_group_size_class(struct btrfs_caching_control *caching_ct
> >>> if (!btrfs_block_group_should_use_size_class(block_group))
> >>> return;
> >>>
> >>> + path = btrfs_alloc_path();
> >>> + if (!path)
> >>> + return;
> >>
> >> Considering the function is only called inside a workqueue, we can avoid
> >> a memory allocation by using an on-stack path, which also removes one
> >> error path.
> >
> > As a generic pattern we could switch to on-stack variables for the
> > functions called from workqueues but it may not be obvious that it's OK
> > to do that (unlike eg. the compression functions).
> >
> > But I'd like to have an assertion or a debug warning for that, not sure
> > how exactly to do it, maybe something is in the task_struct.
> >
>
> I was looking into that during async csum development, but I didn't find
> a good way to determine if we're in a workqueue.
We can do this and it works:
/*
* Since we run in workqueue context, we allocate the path on the stack to
* avoid a memory allocation failure, as stack usage in a workqueue task
* is shallow.
*/
ASSERT(current_work() == &caching_ctl->work.normal_work);
I'm adding that to the patch and will push into for-next soon.
> The closest one I found is in_task(), which cannot distinguish between
> workqueue context and a regular user space task that has entered the kernel.
>
> Maybe there is some way of poking into the current task_struct, but I
> didn't find a straightforward helper.
>
> Thanks,
> Qu
>
* Re: [PATCH 2/4] btrfs: allocate path in load_block_group_size_class()
2026-01-21 10:42 ` Filipe Manana
@ 2026-01-21 21:58 ` David Sterba
0 siblings, 0 replies; 14+ messages in thread
From: David Sterba @ 2026-01-21 21:58 UTC (permalink / raw)
To: Filipe Manana; +Cc: Qu Wenruo, dsterba, linux-btrfs
On Wed, Jan 21, 2026 at 10:42:23AM +0000, Filipe Manana wrote:
> On Wed, Jan 21, 2026 at 4:40 AM Qu Wenruo <quwenruo.btrfs@gmx.com> wrote:
> > > how exactly to do it, maybe something is in the task_struct.
> > >
> >
> > I was looking into that during async csum development, but I didn't find
> > a good way to determine if we're in a workqueue.
>
> We can do this and it works:
>
> /*
> * Since we run in workqueue context, we allocate the path on the stack to
> * avoid a memory allocation failure, as stack usage in a workqueue task
> * is shallow.
> */
> ASSERT(current_work() == &caching_ctl->work.normal_work);
Great, thanks. I was hoping for something less verbose that we could wrap
into e.g. ASSERT_WORKQUEUE(...), and for different types we could use
_Generic. But we can rethink it once we have more instances and see a
pattern.
end of thread, other threads:[~2026-01-21 21:58 UTC | newest]
Thread overview: 14+ messages
2026-01-20 12:25 [PATCH 0/4] btrfs: some cleanups to the block group size class code fdmanana
2026-01-20 12:25 ` [PATCH 1/4] btrfs: make load_block_group_size_class() return void fdmanana
2026-01-20 12:25 ` [PATCH 2/4] btrfs: allocate path in load_block_group_size_class() fdmanana
2026-01-20 20:46 ` Qu Wenruo
2026-01-21 4:24 ` David Sterba
2026-01-21 4:39 ` Qu Wenruo
2026-01-21 10:42 ` Filipe Manana
2026-01-21 21:58 ` David Sterba
2026-01-20 12:25 ` [PATCH 3/4] btrfs: don't pass block group argument to load_block_group_size_class() fdmanana
2026-01-20 12:25 ` [PATCH 4/4] btrfs: assert block group is locked in btrfs_use_block_group_size_class() fdmanana
2026-01-21 4:18 ` David Sterba
2026-01-20 13:16 ` [PATCH 0/4] btrfs: some cleanups to the block group size class code Johannes Thumshirn
2026-01-20 16:48 ` David Sterba
2026-01-20 20:47 ` Qu Wenruo