* [PATCH 0/2] btrfs: adjust reservation sizes for block group item updates/inserts
@ 2023-09-28 10:12 fdmanana
2023-09-28 10:12 ` [PATCH 1/2] btrfs: stop reserving excessive space for block group item updates fdmanana
` (2 more replies)
0 siblings, 3 replies; 4+ messages in thread
From: fdmanana @ 2023-09-28 10:12 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
The following patches adjust how we calculate the reservation size for
block group item insertions and updates, so that we stop reserving and
accounting excessive space for these operations, especially when the
free space tree is being used (the default nowadays). More details in
the changelogs.
Filipe Manana (2):
btrfs: stop reserving excessive space for block group item updates
btrfs: stop reserving excessive space for block group item insertions
fs/btrfs/block-group.c | 17 +++++-----
fs/btrfs/delayed-ref.c | 70 ++++++++++++++++++++++++++++++++++++++++++
fs/btrfs/delayed-ref.h | 4 +++
fs/btrfs/disk-io.c | 2 +-
fs/btrfs/transaction.c | 2 +-
5 files changed, 85 insertions(+), 10 deletions(-)
--
2.40.1
* [PATCH 1/2] btrfs: stop reserving excessive space for block group item updates
2023-09-28 10:12 [PATCH 0/2] btrfs: adjust reservation sizes for block group item updates/inserts fdmanana
@ 2023-09-28 10:12 ` fdmanana
2023-09-28 10:12 ` [PATCH 2/2] btrfs: stop reserving excessive space for block group item insertions fdmanana
2023-10-02 11:23 ` [PATCH 0/2] btrfs: adjust reservation sizes for block group item updates/inserts David Sterba
2 siblings, 0 replies; 4+ messages in thread
From: fdmanana @ 2023-09-28 10:12 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
Space for block group item updates, necessary after allocating or
deallocating an extent from a block group, is reserved in the delayed
refs block reserve. Currently we do this by incrementing the transaction
handle's delayed_ref_updates counter and then calling
btrfs_update_delayed_refs_rsv(), which will increase the size of the
delayed refs block reserve by an amount that corresponds to the same
amount we use for delayed refs, given by btrfs_calc_delayed_ref_bytes().
That is excessive because it corresponds to the amount of space needed
to insert one item in a btree (btrfs_calc_insert_metadata_size()),
doubled when the free space tree feature is enabled. All we need is the
amount given by btrfs_calc_metadata_size(), since we only update an
existing block group item in the extent tree (or in the block group
tree, if that feature is enabled) and never touch the free space tree.
With btrfs_calc_metadata_size() we reserve a quarter of the space when
the free space tree is used and half the space when it is not, putting
less pressure on space reservation.
So add helpers to reserve and release space for block group item
updates, using btrfs_calc_metadata_size() to compute the amount.
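For illustration, below is a small userspace sketch of the arithmetic
above. It is not kernel code: it assumes that btrfs_calc_metadata_size()
accounts nodesize * BTRFS_MAX_LEVEL bytes per item, that
btrfs_calc_insert_metadata_size() doubles that, and that
btrfs_calc_delayed_ref_bytes() doubles the insert size again when the
free space tree is enabled; the 16 KiB nodesize is only a common default
used as an example, not a value taken from this patch.

  #include <stdio.h>

  #define BTRFS_MAX_LEVEL 8

  /* Space for updating one existing item (btrfs_calc_metadata_size()). */
  static unsigned long long calc_metadata_size(unsigned long long nodesize,
  					       unsigned int num_items)
  {
  	return nodesize * BTRFS_MAX_LEVEL * num_items;
  }

  /* Space for inserting one item (btrfs_calc_insert_metadata_size()). */
  static unsigned long long calc_insert_metadata_size(unsigned long long nodesize,
  						      unsigned int num_items)
  {
  	return 2 * calc_metadata_size(nodesize, num_items);
  }

  /* Per delayed ref amount (btrfs_calc_delayed_ref_bytes()). */
  static unsigned long long calc_delayed_ref_bytes(unsigned long long nodesize,
  						   unsigned int num_refs,
  						   int free_space_tree)
  {
  	unsigned long long bytes = calc_insert_metadata_size(nodesize, num_refs);

  	return free_space_tree ? 2 * bytes : bytes;
  }

  int main(void)
  {
  	const unsigned long long nodesize = 16 * 1024;	/* assumed default */

  	/* What was reserved before for 1 block group item update. */
  	printf("old (fst):    %llu\n", calc_delayed_ref_bytes(nodesize, 1, 1));
  	printf("old (no fst): %llu\n", calc_delayed_ref_bytes(nodesize, 1, 0));
  	/* What is reserved now: a plain item update, no new leaves/nodes. */
  	printf("new:          %llu\n", calc_metadata_size(nodesize, 1));
  	return 0;
  }

With those assumptions it prints 524288 bytes (512 KiB) for the old
reservation with the free space tree, 262144 bytes (256 KiB) without it,
and 131072 bytes (128 KiB) for the new reservation, which is where the
quarter and half figures above come from.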
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/block-group.c | 12 +++++++-----
fs/btrfs/delayed-ref.c | 35 +++++++++++++++++++++++++++++++++++
fs/btrfs/delayed-ref.h | 2 ++
fs/btrfs/disk-io.c | 2 +-
4 files changed, 45 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 6e2a4000bfe0..9d17b0580fbf 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -1286,7 +1286,7 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
/* Once for the lookup reference */
btrfs_put_block_group(block_group);
if (remove_rsv)
- btrfs_delayed_refs_rsv_release(fs_info, 1, 0);
+ btrfs_dec_delayed_refs_rsv_bg_updates(fs_info);
btrfs_free_path(path);
return ret;
}
@@ -3369,7 +3369,7 @@ int btrfs_start_dirty_block_groups(struct btrfs_trans_handle *trans)
if (should_put)
btrfs_put_block_group(cache);
if (drop_reserve)
- btrfs_delayed_refs_rsv_release(fs_info, 1, 0);
+ btrfs_dec_delayed_refs_rsv_bg_updates(fs_info);
/*
* Avoid blocking other tasks for too long. It might even save
* us from writing caches for block groups that are going to be
@@ -3516,7 +3516,7 @@ int btrfs_write_dirty_block_groups(struct btrfs_trans_handle *trans)
/* If its not on the io list, we need to put the block group */
if (should_put)
btrfs_put_block_group(cache);
- btrfs_delayed_refs_rsv_release(fs_info, 1, 0);
+ btrfs_dec_delayed_refs_rsv_bg_updates(fs_info);
spin_lock(&cur_trans->dirty_bgs_lock);
}
spin_unlock(&cur_trans->dirty_bgs_lock);
@@ -3545,6 +3545,7 @@ int btrfs_update_block_group(struct btrfs_trans_handle *trans,
struct btrfs_block_group *cache;
u64 old_val;
bool reclaim = false;
+ bool bg_already_dirty = true;
int factor;
/* Block accounting for super block */
@@ -3613,7 +3614,7 @@ int btrfs_update_block_group(struct btrfs_trans_handle *trans,
spin_lock(&trans->transaction->dirty_bgs_lock);
if (list_empty(&cache->dirty_list)) {
list_add_tail(&cache->dirty_list, &trans->transaction->dirty_bgs);
- trans->delayed_ref_updates++;
+ bg_already_dirty = false;
btrfs_get_block_group(cache);
}
spin_unlock(&trans->transaction->dirty_bgs_lock);
@@ -3633,7 +3634,8 @@ int btrfs_update_block_group(struct btrfs_trans_handle *trans,
btrfs_put_block_group(cache);
/* Modified block groups are accounted for in the delayed_refs_rsv. */
- btrfs_update_delayed_refs_rsv(trans);
+ if (!bg_already_dirty)
+ btrfs_inc_delayed_refs_rsv_bg_updates(info);
return 0;
}
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index 25d0cdf85a91..a7feef155ded 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -125,6 +125,41 @@ void btrfs_update_delayed_refs_rsv(struct btrfs_trans_handle *trans)
trans->delayed_ref_csum_deletions = 0;
}
+/*
+ * Adjust the size of the delayed refs block reserve for 1 block group item
+ * update.
+ */
+void btrfs_inc_delayed_refs_rsv_bg_updates(struct btrfs_fs_info *fs_info)
+{
+ struct btrfs_block_rsv *delayed_rsv = &fs_info->delayed_refs_rsv;
+
+ spin_lock(&delayed_rsv->lock);
+ /*
+ * Updating a block group item does not result in new nodes/leaves and
+ * does not require changing the free space tree, only the extent tree
+ * or the block group tree, so this is all we need.
+ */
+ delayed_rsv->size += btrfs_calc_metadata_size(fs_info, 1);
+ delayed_rsv->full = false;
+ spin_unlock(&delayed_rsv->lock);
+}
+
+/*
+ * Adjust the size of the delayed refs block reserve to release space for 1
+ * block group item update.
+ */
+void btrfs_dec_delayed_refs_rsv_bg_updates(struct btrfs_fs_info *fs_info)
+{
+ struct btrfs_block_rsv *delayed_rsv = &fs_info->delayed_refs_rsv;
+ const u64 num_bytes = btrfs_calc_metadata_size(fs_info, 1);
+ u64 released;
+
+ released = btrfs_block_rsv_release(fs_info, delayed_rsv, num_bytes, NULL);
+ if (released > 0)
+ trace_btrfs_space_reservation(fs_info, "delayed_refs_rsv",
+ 0, released, 0);
+}
+
/*
* Transfer bytes to our delayed refs rsv.
*
diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h
index 783f84c9f2f4..3d2c455fd9b0 100644
--- a/fs/btrfs/delayed-ref.h
+++ b/fs/btrfs/delayed-ref.h
@@ -420,6 +420,8 @@ int btrfs_check_delayed_seq(struct btrfs_fs_info *fs_info, u64 seq);
void btrfs_delayed_refs_rsv_release(struct btrfs_fs_info *fs_info, int nr_refs, int nr_csums);
void btrfs_update_delayed_refs_rsv(struct btrfs_trans_handle *trans);
+void btrfs_inc_delayed_refs_rsv_bg_updates(struct btrfs_fs_info *fs_info);
+void btrfs_dec_delayed_refs_rsv_bg_updates(struct btrfs_fs_info *fs_info);
int btrfs_delayed_refs_rsv_refill(struct btrfs_fs_info *fs_info,
enum btrfs_reserve_flush_enum flush);
void btrfs_migrate_to_delayed_refs_rsv(struct btrfs_fs_info *fs_info,
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index dc577b3c53f6..f0864d016ceb 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -4785,7 +4785,7 @@ void btrfs_cleanup_dirty_bgs(struct btrfs_transaction *cur_trans,
spin_unlock(&cur_trans->dirty_bgs_lock);
btrfs_put_block_group(cache);
- btrfs_delayed_refs_rsv_release(fs_info, 1, 0);
+ btrfs_dec_delayed_refs_rsv_bg_updates(fs_info);
spin_lock(&cur_trans->dirty_bgs_lock);
}
spin_unlock(&cur_trans->dirty_bgs_lock);
--
2.40.1
* [PATCH 2/2] btrfs: stop reserving excessive space for block group item insertions
2023-09-28 10:12 [PATCH 0/2] btrfs: adjust reservation sizes for block group item updates/inserts fdmanana
2023-09-28 10:12 ` [PATCH 1/2] btrfs: stop reserving excessive space for block group item updates fdmanana
@ 2023-09-28 10:12 ` fdmanana
2023-10-02 11:23 ` [PATCH 0/2] btrfs: adjust reservation sizes for block group item updates/inserts David Sterba
2 siblings, 0 replies; 4+ messages in thread
From: fdmanana @ 2023-09-28 10:12 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
Space for block group item insertions, necessary after allocating a new
block group, is reserved in the delayed refs block reserve. Currently we
do this by incrementing the transaction handle's delayed_ref_updates
counter and then calling btrfs_update_delayed_refs_rsv(), which will
increase the size of the delayed refs block reserve by an amount that
corresponds to the same amount we use for delayed refs, given by
btrfs_calc_delayed_ref_bytes().
That is excessive because it corresponds to the amount of space needed
to insert one item in a btree (btrfs_calc_insert_metadata_size()),
doubled when the free space tree feature is enabled. All we need is the
amount given by btrfs_calc_insert_metadata_size(), since we only insert
a block group item in the extent tree (or in the block group tree, if
that feature is enabled) and do not need to change the free space tree.
With btrfs_calc_insert_metadata_size() we reserve half the space when
the free space tree is used, putting less pressure on space reservation.
So add helpers to reserve and release space for block group item
insertions, using btrfs_calc_insert_metadata_size() to compute the
amount.
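As a rough illustration, here is the same arithmetic applied to an
insertion, under the same assumptions as the sketch in patch 1
(nodesize * BTRFS_MAX_LEVEL bytes per item for
btrfs_calc_metadata_size(), doubled for insertions, doubled again by
btrfs_calc_delayed_ref_bytes() when the free space tree is enabled; the
16 KiB nodesize is an assumed default, not a value from this patch):

  #include <stdio.h>

  #define BTRFS_MAX_LEVEL 8

  int main(void)
  {
  	const unsigned long long nodesize = 16 * 1024;	/* assumed default */
  	/* Space to insert 1 item, as btrfs_calc_insert_metadata_size() computes it. */
  	const unsigned long long insert = 2 * nodesize * BTRFS_MAX_LEVEL;

  	/* Old reservation: btrfs_calc_delayed_ref_bytes() with the free space tree. */
  	printf("old: %llu\n", 2 * insert);
  	/* New reservation: just the block group item insertion. */
  	printf("new: %llu\n", insert);
  	return 0;
  }

With those numbers the old reservation is 524288 bytes (512 KiB) and the
new one 262144 bytes (256 KiB), i.e. half the space when the free space
tree is used.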
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/block-group.c | 5 ++---
fs/btrfs/delayed-ref.c | 35 +++++++++++++++++++++++++++++++++++
fs/btrfs/delayed-ref.h | 2 ++
fs/btrfs/transaction.c | 2 +-
4 files changed, 40 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 9d17b0580fbf..6e5dc68ff661 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -2709,7 +2709,7 @@ void btrfs_create_pending_block_groups(struct btrfs_trans_handle *trans)
/* Already aborted the transaction if it failed. */
next:
- btrfs_delayed_refs_rsv_release(fs_info, 1, 0);
+ btrfs_dec_delayed_refs_rsv_bg_inserts(fs_info);
list_del_init(&block_group->bg_list);
clear_bit(BLOCK_GROUP_FLAG_NEW, &block_group->runtime_flags);
}
@@ -2819,8 +2819,7 @@ struct btrfs_block_group *btrfs_make_block_group(struct btrfs_trans_handle *tran
#endif
list_add_tail(&cache->bg_list, &trans->new_bgs);
- trans->delayed_ref_updates++;
- btrfs_update_delayed_refs_rsv(trans);
+ btrfs_inc_delayed_refs_rsv_bg_inserts(fs_info);
set_avail_alloc_bits(fs_info, type);
return cache;
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index a7feef155ded..f1e99d57d866 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -125,6 +125,41 @@ void btrfs_update_delayed_refs_rsv(struct btrfs_trans_handle *trans)
trans->delayed_ref_csum_deletions = 0;
}
+/*
+ * Adjust the size of the delayed refs block reserve for 1 block group item
+ * insertion, used after allocating a block group.
+ */
+void btrfs_inc_delayed_refs_rsv_bg_inserts(struct btrfs_fs_info *fs_info)
+{
+ struct btrfs_block_rsv *delayed_rsv = &fs_info->delayed_refs_rsv;
+
+ spin_lock(&delayed_rsv->lock);
+ /*
+ * Inserting a block group item does not require changing the free space
+ * tree, only the extent tree or the block group tree, so this is all we
+ * need.
+ */
+ delayed_rsv->size += btrfs_calc_insert_metadata_size(fs_info, 1);
+ delayed_rsv->full = false;
+ spin_unlock(&delayed_rsv->lock);
+}
+
+/*
+ * Adjust the size of the delayed refs block reserve to release space for 1
+ * block group item insertion.
+ */
+void btrfs_dec_delayed_refs_rsv_bg_inserts(struct btrfs_fs_info *fs_info)
+{
+ struct btrfs_block_rsv *delayed_rsv = &fs_info->delayed_refs_rsv;
+ const u64 num_bytes = btrfs_calc_insert_metadata_size(fs_info, 1);
+ u64 released;
+
+ released = btrfs_block_rsv_release(fs_info, delayed_rsv, num_bytes, NULL);
+ if (released > 0)
+ trace_btrfs_space_reservation(fs_info, "delayed_refs_rsv",
+ 0, released, 0);
+}
+
/*
* Adjust the size of the delayed refs block reserve for 1 block group item
* update.
diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h
index 3d2c455fd9b0..d8bfa6f03976 100644
--- a/fs/btrfs/delayed-ref.h
+++ b/fs/btrfs/delayed-ref.h
@@ -420,6 +420,8 @@ int btrfs_check_delayed_seq(struct btrfs_fs_info *fs_info, u64 seq);
void btrfs_delayed_refs_rsv_release(struct btrfs_fs_info *fs_info, int nr_refs, int nr_csums);
void btrfs_update_delayed_refs_rsv(struct btrfs_trans_handle *trans);
+void btrfs_inc_delayed_refs_rsv_bg_inserts(struct btrfs_fs_info *fs_info);
+void btrfs_dec_delayed_refs_rsv_bg_inserts(struct btrfs_fs_info *fs_info);
void btrfs_inc_delayed_refs_rsv_bg_updates(struct btrfs_fs_info *fs_info);
void btrfs_dec_delayed_refs_rsv_bg_updates(struct btrfs_fs_info *fs_info);
int btrfs_delayed_refs_rsv_refill(struct btrfs_fs_info *fs_info,
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index c05c2cd84688..89a5df3dd2d0 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -2126,7 +2126,7 @@ static void btrfs_cleanup_pending_block_groups(struct btrfs_trans_handle *trans)
struct btrfs_block_group *block_group, *tmp;
list_for_each_entry_safe(block_group, tmp, &trans->new_bgs, bg_list) {
- btrfs_delayed_refs_rsv_release(fs_info, 1, 0);
+ btrfs_dec_delayed_refs_rsv_bg_inserts(fs_info);
list_del_init(&block_group->bg_list);
}
}
--
2.40.1
* Re: [PATCH 0/2] btrfs: adjust reservation sizes for block group item updates/inserts
2023-09-28 10:12 [PATCH 0/2] btrfs: adjust reservation sizes for block group item updates/inserts fdmanana
2023-09-28 10:12 ` [PATCH 1/2] btrfs: stop reserving excessive space for block group item updates fdmanana
2023-09-28 10:12 ` [PATCH 2/2] btrfs: stop reserving excessive space for block group item insertions fdmanana
@ 2023-10-02 11:23 ` David Sterba
2 siblings, 0 replies; 4+ messages in thread
From: David Sterba @ 2023-10-02 11:23 UTC (permalink / raw)
To: fdmanana; +Cc: linux-btrfs
On Thu, Sep 28, 2023 at 11:12:48AM +0100, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> The following patches adjust how we calculate the reservation size for
> block group item insertions and updates, so that we stop reserving and
> accounting excessive space for these operations, especially when the
> free space tree is being used (the default nowadays). More details in
> the changelogs.
>
> Filipe Manana (2):
> btrfs: stop reserving excessive space for block group item updates
> btrfs: stop reserving excessive space for block group item insertions
Added to misc-next, thanks.