* [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups
@ 2023-05-29 15:16 fdmanana
2023-05-29 15:16 ` [PATCH 01/11] btrfs: reorder some members of struct btrfs_delayed_ref_head fdmanana
` (11 more replies)
0 siblings, 12 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:16 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
This brings an optimization for delayed ref heads and several cleanups.
These came out of some other work around delayed refs, but since they are
independent of that work, and trivial, I'm sending them out separately.
More details in the changelogs.
Filipe Manana (11):
btrfs: reorder some members of struct btrfs_delayed_ref_head
btrfs: remove unused is_head field from struct btrfs_delayed_ref_node
btrfs: remove pointless in_tree field from struct btrfs_delayed_ref_node
btrfs: use a bool to track qgroup record insertion when adding ref head
btrfs: make insert_delayed_ref() return a bool instead of an int
btrfs: get rid of label and goto at insert_delayed_ref()
btrfs: assert correct lock is held at btrfs_select_ref_head()
btrfs: use bool type for delayed ref head fields that are used as booleans
btrfs: use a single switch statement when initializing delayed ref head
btrfs: remove unnecessary prototype declarations at disk-io.c
btrfs: make btrfs_destroy_delayed_refs() return void
fs/btrfs/delayed-ref.c | 110 ++++++++++++++++++++---------------------
fs/btrfs/delayed-ref.h | 25 +++++-----
fs/btrfs/disk-io.c | 19 ++-----
fs/btrfs/extent-tree.c | 11 ++---
4 files changed, 77 insertions(+), 88 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH 01/11] btrfs: reorder some members of struct btrfs_delayed_ref_head
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
@ 2023-05-29 15:16 ` fdmanana
2023-05-29 15:16 ` [PATCH 02/11] btrfs: remove unused is_head field from struct btrfs_delayed_ref_node fdmanana
` (10 subsequent siblings)
11 siblings, 0 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:16 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
Currently struct btrfs_delayed_ref_head has its 'bytenr' and 'href_node'
members in different cache lines (even on a release, non-debug, kernel).
This is not optimal because when iterating the red black tree of delayed
ref heads to insert a new delayed ref head (htree_insert()) we have to pull
in 2 cache lines for each delayed ref head we find along the path: one for
the tree node (struct rb_node) and another one for the 'bytenr' field. The
same applies when searching for an existing delayed ref head
(find_ref_head()).
On a release (non-debug) kernel, the structure also has two 4 bytes holes,
which makes it 8 bytes longer than necessary. Its current layout is the
following:
struct btrfs_delayed_ref_head {
u64 bytenr; /* 0 8 */
u64 num_bytes; /* 8 8 */
refcount_t refs; /* 16 4 */
/* XXX 4 bytes hole, try to pack */
struct mutex mutex; /* 24 32 */
spinlock_t lock; /* 56 4 */
/* XXX 4 bytes hole, try to pack */
/* --- cacheline 1 boundary (64 bytes) --- */
struct rb_root_cached ref_tree; /* 64 16 */
struct list_head ref_add_list; /* 80 16 */
struct rb_node href_node __attribute__((__aligned__(8))); /* 96 24 */
struct btrfs_delayed_extent_op * extent_op; /* 120 8 */
/* --- cacheline 2 boundary (128 bytes) --- */
int total_ref_mod; /* 128 4 */
int ref_mod; /* 132 4 */
unsigned int must_insert_reserved:1; /* 136: 0 4 */
unsigned int is_data:1; /* 136: 1 4 */
unsigned int is_system:1; /* 136: 2 4 */
unsigned int processing:1; /* 136: 3 4 */
/* size: 144, cachelines: 3, members: 15 */
/* sum members: 128, holes: 2, sum holes: 8 */
/* sum bitfield members: 4 bits (0 bytes) */
/* padding: 4 */
/* bit_padding: 28 bits */
/* forced alignments: 1 */
/* last cacheline: 16 bytes */
} __attribute__((__aligned__(8)));
This change reorders the 'href_node' and 'refs' members so that we have
the 'href_node' in the same cache line as the 'bytenr' field, while also
eliminating the two holes and reducing the structure size from 144 bytes
down to 136 bytes, so we can now have 30 ref heads per 4K page (on x86_64)
instead of 28. The new structure layout after this change is now:
struct btrfs_delayed_ref_head {
u64 bytenr; /* 0 8 */
u64 num_bytes; /* 8 8 */
struct rb_node href_node __attribute__((__aligned__(8))); /* 16 24 */
struct mutex mutex; /* 40 32 */
/* --- cacheline 1 boundary (64 bytes) was 8 bytes ago --- */
refcount_t refs; /* 72 4 */
spinlock_t lock; /* 76 4 */
struct rb_root_cached ref_tree; /* 80 16 */
struct list_head ref_add_list; /* 96 16 */
struct btrfs_delayed_extent_op * extent_op; /* 112 8 */
int total_ref_mod; /* 120 4 */
int ref_mod; /* 124 4 */
/* --- cacheline 2 boundary (128 bytes) --- */
unsigned int must_insert_reserved:1; /* 128: 0 4 */
unsigned int is_data:1; /* 128: 1 4 */
unsigned int is_system:1; /* 128: 2 4 */
unsigned int processing:1; /* 128: 3 4 */
/* size: 136, cachelines: 3, members: 15 */
/* padding: 4 */
/* bit_padding: 28 bits */
/* forced alignments: 1 */
/* last cacheline: 8 bytes */
} __attribute__((__aligned__(8)));
Running the following fs_mark test shows a significant improvement.
$ cat test.sh
#!/bin/bash
# 15G null block device
DEV=/dev/nullb0
MNT=/mnt/nullb0
FILES=100000
THREADS=$(nproc --all)
FILE_SIZE=0
echo "performance" | \
tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
mkfs.btrfs -f $DEV
mount -o ssd $DEV $MNT
OPTS="-S 0 -L 5 -n $FILES -s $FILE_SIZE -t $THREADS -k"
for ((i = 1; i <= $THREADS; i++)); do
OPTS="$OPTS -d $MNT/d$i"
done
fs_mark $OPTS
umount $MNT
Before this change:
FSUse% Count Size Files/sec App Overhead
10 1200000 0 112631.3 11928055
16 2400000 0 189943.8 12140777
23 3600000 0 150719.2 13178480
50 4800000 0 99137.3 12504293
53 6000000 0 111733.9 12670836
Total files/sec: 664165.5
After this change:
FSUse% Count Size Files/sec App Overhead
10 1200000 0 148589.5 11565889
16 2400000 0 227743.8 11561596
23 3600000 0 191590.5 12550755
30 4800000 0 179812.3 12629610
53 6000000 0 92471.4 12352383
Total files/sec: 840207.5
Measuring the execution times of htree_insert(), in nanoseconds, during
those fs_mark runs:
Before this change:
Range: 0.000 - 940647.000; Mean: 619.733; Median: 548.000; Stddev: 1834.231
Percentiles: 90th: 980.000; 95th: 1208.000; 99th: 2090.000
0.000 - 6.384: 257 |
6.384 - 26.259: 977 |
26.259 - 99.635: 4963 |
99.635 - 370.526: 136800 #############
370.526 - 1370.603: 566110 #####################################################
1370.603 - 5062.704: 24945 ##
5062.704 - 18693.248: 944 |
18693.248 - 69014.670: 211 |
69014.670 - 254791.959: 30 |
254791.959 - 940647.000: 4 |
After this change:
Range: 0.000 - 299200.000; Mean: 587.754; Median: 542.000; Stddev: 1030.422
Percentiles: 90th: 918.000; 95th: 1113.000; 99th: 1987.000
0.000 - 5.585: 163 |
5.585 - 20.678: 452 |
20.678 - 70.369: 1806 |
70.369 - 233.965: 26268 ####
233.965 - 772.564: 333519 #####################################################
772.564 - 2545.771: 91820 ###############
2545.771 - 8383.615: 2238 |
8383.615 - 27603.280: 170 |
27603.280 - 90879.297: 68 |
90879.297 - 299200.000: 12 |
Mean, percentiles and maximum times are all better, and the standard
deviation is lower.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/delayed-ref.h | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h
index b54261fe509b..9b28e800a604 100644
--- a/fs/btrfs/delayed-ref.h
+++ b/fs/btrfs/delayed-ref.h
@@ -70,20 +70,26 @@ struct btrfs_delayed_extent_op {
struct btrfs_delayed_ref_head {
u64 bytenr;
u64 num_bytes;
- refcount_t refs;
+ /*
+ * For insertion into struct btrfs_delayed_ref_root::href_root.
+ * Keep it in the same cache line as 'bytenr' for more efficient
+ * searches in the rbtree.
+ */
+ struct rb_node href_node;
/*
* the mutex is held while running the refs, and it is also
* held when checking the sum of reference modifications.
*/
struct mutex mutex;
+ refcount_t refs;
+
+ /* Protects 'ref_tree' and 'ref_add_list'. */
spinlock_t lock;
struct rb_root_cached ref_tree;
/* accumulate add BTRFS_ADD_DELAYED_REF nodes to this ref_add_list. */
struct list_head ref_add_list;
- struct rb_node href_node;
-
struct btrfs_delayed_extent_op *extent_op;
/*
--
2.34.1
* [PATCH 02/11] btrfs: remove unused is_head field from struct btrfs_delayed_ref_node
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
2023-05-29 15:16 ` [PATCH 01/11] btrfs: reorder some members of struct btrfs_delayed_ref_head fdmanana
@ 2023-05-29 15:16 ` fdmanana
2023-05-29 15:16 ` [PATCH 03/11] btrfs: remove pointless in_tree " fdmanana
` (9 subsequent siblings)
11 siblings, 0 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:16 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
The 'is_head' field of struct btrfs_delayed_ref_node is no longer used
after commit d278850eff30 ("btrfs: remove delayed_ref_node from ref_head"),
so remove it.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/delayed-ref.c | 1 -
fs/btrfs/delayed-ref.h | 2 --
2 files changed, 3 deletions(-)
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index 0b32432d7d56..be1d18ec5cef 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -853,7 +853,6 @@ static void init_delayed_ref_common(struct btrfs_fs_info *fs_info,
ref->num_bytes = num_bytes;
ref->ref_mod = 1;
ref->action = action;
- ref->is_head = 0;
ref->in_tree = 1;
ref->seq = seq;
ref->type = ref_type;
diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h
index 9b28e800a604..77b3e735772d 100644
--- a/fs/btrfs/delayed-ref.h
+++ b/fs/btrfs/delayed-ref.h
@@ -48,8 +48,6 @@ struct btrfs_delayed_ref_node {
unsigned int action:8;
unsigned int type:8;
- /* is this node still in the rbtree? */
- unsigned int is_head:1;
unsigned int in_tree:1;
};
--
2.34.1
* [PATCH 03/11] btrfs: remove pointless in_tree field from struct btrfs_delayed_ref_node
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
2023-05-29 15:16 ` [PATCH 01/11] btrfs: reorder some members of struct btrfs_delayed_ref_head fdmanana
2023-05-29 15:16 ` [PATCH 02/11] btrfs: remove unused is_head field from struct btrfs_delayed_ref_node fdmanana
@ 2023-05-29 15:16 ` fdmanana
2023-05-29 15:17 ` [PATCH 04/11] btrfs: use a bool to track qgroup record insertion when adding ref head fdmanana
` (8 subsequent siblings)
11 siblings, 0 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:16 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
The 'in_tree' field is really not needed in struct btrfs_delayed_ref_node,
as we can check whether a reference is in the tree or not simply by
checking its red black tree node member with RB_EMPTY_NODE(), as when we
remove it from the tree we always call RB_CLEAR_NODE(). So remove that
field and use RB_EMPTY_NODE().
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/delayed-ref.c | 2 --
fs/btrfs/delayed-ref.h | 3 +--
fs/btrfs/disk-io.c | 1 -
fs/btrfs/extent-tree.c | 1 -
4 files changed, 1 insertion(+), 6 deletions(-)
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index be1d18ec5cef..d6dce5792c0f 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -407,7 +407,6 @@ static inline void drop_delayed_ref(struct btrfs_delayed_ref_root *delayed_refs,
RB_CLEAR_NODE(&ref->ref_node);
if (!list_empty(&ref->add_list))
list_del(&ref->add_list);
- ref->in_tree = 0;
btrfs_put_delayed_ref(ref);
atomic_dec(&delayed_refs->num_entries);
}
@@ -853,7 +852,6 @@ static void init_delayed_ref_common(struct btrfs_fs_info *fs_info,
ref->num_bytes = num_bytes;
ref->ref_mod = 1;
ref->action = action;
- ref->in_tree = 1;
ref->seq = seq;
ref->type = ref_type;
RB_CLEAR_NODE(&ref->ref_node);
diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h
index 77b3e735772d..9b103bf27185 100644
--- a/fs/btrfs/delayed-ref.h
+++ b/fs/btrfs/delayed-ref.h
@@ -48,7 +48,6 @@ struct btrfs_delayed_ref_node {
unsigned int action:8;
unsigned int type:8;
- unsigned int in_tree:1;
};
struct btrfs_delayed_extent_op {
@@ -341,7 +340,7 @@ static inline void btrfs_put_delayed_ref(struct btrfs_delayed_ref_node *ref)
{
WARN_ON(refcount_read(&ref->refs) == 0);
if (refcount_dec_and_test(&ref->refs)) {
- WARN_ON(ref->in_tree);
+ WARN_ON(!RB_EMPTY_NODE(&ref->ref_node));
switch (ref->type) {
case BTRFS_TREE_BLOCK_REF_KEY:
case BTRFS_SHARED_BLOCK_REF_KEY:
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 96f144094af6..793f7e48d62e 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -4849,7 +4849,6 @@ static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
while ((n = rb_first_cached(&head->ref_tree)) != NULL) {
ref = rb_entry(n, struct btrfs_delayed_ref_node,
ref_node);
- ref->in_tree = 0;
rb_erase_cached(&ref->ref_node, &head->ref_tree);
RB_CLEAR_NODE(&ref->ref_node);
if (!list_empty(&ref->add_list))
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 4f7ac5a5d29e..0aa775ac31e4 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -1913,7 +1913,6 @@ static int btrfs_run_delayed_refs_for_head(struct btrfs_trans_handle *trans,
return -EAGAIN;
}
- ref->in_tree = 0;
rb_erase_cached(&ref->ref_node, &locked_ref->ref_tree);
RB_CLEAR_NODE(&ref->ref_node);
if (!list_empty(&ref->add_list))
--
2.34.1
* [PATCH 04/11] btrfs: use a bool to track qgroup record insertion when adding ref head
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
` (2 preceding siblings ...)
2023-05-29 15:16 ` [PATCH 03/11] btrfs: remove pointless in_tree " fdmanana
@ 2023-05-29 15:17 ` fdmanana
2023-05-29 15:17 ` [PATCH 05/11] btrfs: make insert_delayed_ref() return a bool instead of an int fdmanana
` (7 subsequent siblings)
11 siblings, 0 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:17 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
We are using an integer as a boolean to track the qgroup record insertion
status when adding a delayed reference head. Since all we need is a
boolean, switch the type from int to bool to make it more obvious.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/delayed-ref.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index d6dce5792c0f..0bcec428766c 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -762,11 +762,11 @@ static noinline struct btrfs_delayed_ref_head *
add_delayed_ref_head(struct btrfs_trans_handle *trans,
struct btrfs_delayed_ref_head *head_ref,
struct btrfs_qgroup_extent_record *qrecord,
- int action, int *qrecord_inserted_ret)
+ int action, bool *qrecord_inserted_ret)
{
struct btrfs_delayed_ref_head *existing;
struct btrfs_delayed_ref_root *delayed_refs;
- int qrecord_inserted = 0;
+ bool qrecord_inserted = false;
delayed_refs = &trans->transaction->delayed_refs;
@@ -776,7 +776,7 @@ add_delayed_ref_head(struct btrfs_trans_handle *trans,
delayed_refs, qrecord))
kfree(qrecord);
else
- qrecord_inserted = 1;
+ qrecord_inserted = true;
}
trace_add_delayed_ref_head(trans->fs_info, head_ref, action);
@@ -872,7 +872,7 @@ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans,
struct btrfs_delayed_ref_head *head_ref;
struct btrfs_delayed_ref_root *delayed_refs;
struct btrfs_qgroup_extent_record *record = NULL;
- int qrecord_inserted;
+ bool qrecord_inserted;
bool is_system;
int action = generic_ref->action;
int level = generic_ref->tree_ref.level;
@@ -965,7 +965,7 @@ int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans,
struct btrfs_delayed_ref_head *head_ref;
struct btrfs_delayed_ref_root *delayed_refs;
struct btrfs_qgroup_extent_record *record = NULL;
- int qrecord_inserted;
+ bool qrecord_inserted;
int action = generic_ref->action;
int ret;
u64 bytenr = generic_ref->bytenr;
--
2.34.1
* [PATCH 05/11] btrfs: make insert_delayed_ref() return a bool instead of an int
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
` (3 preceding siblings ...)
2023-05-29 15:17 ` [PATCH 04/11] btrfs: use a bool to track qgroup record insertion when adding ref head fdmanana
@ 2023-05-29 15:17 ` fdmanana
2023-05-29 15:17 ` [PATCH 06/11] btrfs: get rid of label and goto at insert_delayed_ref() fdmanana
` (6 subsequent siblings)
11 siblings, 0 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:17 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
insert_delayed_ref() can only return 0 or 1, to indicate whether the given
delayed reference was added to the head reference or merged into an
existing delayed ref, respectively. So just make it return a boolean
instead.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/delayed-ref.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index 0bcec428766c..c3da7c3185de 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -555,16 +555,17 @@ void btrfs_delete_ref_head(struct btrfs_delayed_ref_root *delayed_refs,
/*
* Helper to insert the ref_node to the tail or merge with tail.
*
- * Return 0 for insert.
- * Return >0 for merge.
+ * Return false if the ref was inserted.
+ * Return true if the ref was merged into an existing one (and therefore can be
+ * freed by the caller).
*/
-static int insert_delayed_ref(struct btrfs_delayed_ref_root *root,
- struct btrfs_delayed_ref_head *href,
- struct btrfs_delayed_ref_node *ref)
+static bool insert_delayed_ref(struct btrfs_delayed_ref_root *root,
+ struct btrfs_delayed_ref_head *href,
+ struct btrfs_delayed_ref_node *ref)
{
struct btrfs_delayed_ref_node *exist;
int mod;
- int ret = 0;
+ bool ret = false;
spin_lock(&href->lock);
exist = tree_insert(&href->ref_tree, ref);
@@ -572,7 +573,7 @@ static int insert_delayed_ref(struct btrfs_delayed_ref_root *root,
goto inserted;
/* Now we are sure we can merge */
- ret = 1;
+ ret = true;
if (exist->action == ref->action) {
mod = ref->ref_mod;
} else {
@@ -874,9 +875,9 @@ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans,
struct btrfs_qgroup_extent_record *record = NULL;
bool qrecord_inserted;
bool is_system;
+ bool merged;
int action = generic_ref->action;
int level = generic_ref->tree_ref.level;
- int ret;
u64 bytenr = generic_ref->bytenr;
u64 num_bytes = generic_ref->len;
u64 parent = generic_ref->parent;
@@ -932,7 +933,7 @@ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans,
head_ref = add_delayed_ref_head(trans, head_ref, record,
action, &qrecord_inserted);
- ret = insert_delayed_ref(delayed_refs, head_ref, &ref->node);
+ merged = insert_delayed_ref(delayed_refs, head_ref, &ref->node);
spin_unlock(&delayed_refs->lock);
/*
@@ -944,7 +945,7 @@ int btrfs_add_delayed_tree_ref(struct btrfs_trans_handle *trans,
trace_add_delayed_tree_ref(fs_info, &ref->node, ref,
action == BTRFS_ADD_DELAYED_EXTENT ?
BTRFS_ADD_DELAYED_REF : action);
- if (ret > 0)
+ if (merged)
kmem_cache_free(btrfs_delayed_tree_ref_cachep, ref);
if (qrecord_inserted)
@@ -967,7 +968,7 @@ int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans,
struct btrfs_qgroup_extent_record *record = NULL;
bool qrecord_inserted;
int action = generic_ref->action;
- int ret;
+ bool merged;
u64 bytenr = generic_ref->bytenr;
u64 num_bytes = generic_ref->len;
u64 parent = generic_ref->parent;
@@ -1024,7 +1025,7 @@ int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans,
head_ref = add_delayed_ref_head(trans, head_ref, record,
action, &qrecord_inserted);
- ret = insert_delayed_ref(delayed_refs, head_ref, &ref->node);
+ merged = insert_delayed_ref(delayed_refs, head_ref, &ref->node);
spin_unlock(&delayed_refs->lock);
/*
@@ -1036,7 +1037,7 @@ int btrfs_add_delayed_data_ref(struct btrfs_trans_handle *trans,
trace_add_delayed_data_ref(trans->fs_info, &ref->node, ref,
action == BTRFS_ADD_DELAYED_EXTENT ?
BTRFS_ADD_DELAYED_REF : action);
- if (ret > 0)
+ if (merged)
kmem_cache_free(btrfs_delayed_data_ref_cachep, ref);
--
2.34.1
* [PATCH 06/11] btrfs: get rid of label and goto at insert_delayed_ref()
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
` (4 preceding siblings ...)
2023-05-29 15:17 ` [PATCH 05/11] btrfs: make insert_delayed_ref() return a bool instead of an int fdmanana
@ 2023-05-29 15:17 ` fdmanana
2023-05-29 15:17 ` [PATCH 07/11] btrfs: assert correct lock is held at btrfs_select_ref_head() fdmanana
` (5 subsequent siblings)
11 siblings, 0 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:17 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
At insert_delayed_ref() there's no point in having a label and goto for the
case where we were able to insert the delayed ref. We can just move the
code under the label into the if statement's body and return immediately.
There is also no need to track the return value in a variable; we can just
return a literal true or false value directly. So do those changes.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/delayed-ref.c | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index c3da7c3185de..e4579e66a57a 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -565,15 +565,18 @@ static bool insert_delayed_ref(struct btrfs_delayed_ref_root *root,
{
struct btrfs_delayed_ref_node *exist;
int mod;
- bool ret = false;
spin_lock(&href->lock);
exist = tree_insert(&href->ref_tree, ref);
- if (!exist)
- goto inserted;
+ if (!exist) {
+ if (ref->action == BTRFS_ADD_DELAYED_REF)
+ list_add_tail(&ref->add_list, &href->ref_add_list);
+ atomic_inc(&root->num_entries);
+ spin_unlock(&href->lock);
+ return false;
+ }
/* Now we are sure we can merge */
- ret = true;
if (exist->action == ref->action) {
mod = ref->ref_mod;
} else {
@@ -600,13 +603,7 @@ static bool insert_delayed_ref(struct btrfs_delayed_ref_root *root,
if (exist->ref_mod == 0)
drop_delayed_ref(root, href, exist);
spin_unlock(&href->lock);
- return ret;
-inserted:
- if (ref->action == BTRFS_ADD_DELAYED_REF)
- list_add_tail(&ref->add_list, &href->ref_add_list);
- atomic_inc(&root->num_entries);
- spin_unlock(&href->lock);
- return ret;
+ return true;
}
/*
--
2.34.1
* [PATCH 07/11] btrfs: assert correct lock is held at btrfs_select_ref_head()
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
` (5 preceding siblings ...)
2023-05-29 15:17 ` [PATCH 06/11] btrfs: get rid of label and goto at insert_delayed_ref() fdmanana
@ 2023-05-29 15:17 ` fdmanana
2023-05-29 15:17 ` [PATCH 08/11] btrfs: use bool type for delayed ref head fields that are used as booleans fdmanana
` (4 subsequent siblings)
11 siblings, 0 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:17 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
The function btrfs_select_ref_head() iterates over the red black tree of
delayed reference heads, which is protected by the spinlock in the delayed
refs root. The function doesn't take the lock, it's taken by its single
caller, btrfs_obtain_ref_head(), because it needs to call that function
and btrfs_delayed_ref_lock() in the same critical section (delimited by
that spinlock). So assert at btrfs_select_ref_head() that we are holding
the expected lock.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/delayed-ref.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index e4579e66a57a..c61af9012a82 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -506,6 +506,7 @@ struct btrfs_delayed_ref_head *btrfs_select_ref_head(
{
struct btrfs_delayed_ref_head *head;
+ lockdep_assert_held(&delayed_refs->lock);
again:
head = find_ref_head(delayed_refs, delayed_refs->run_delayed_start,
true);
--
2.34.1
* [PATCH 08/11] btrfs: use bool type for delayed ref head fields that are used as booleans
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
` (6 preceding siblings ...)
2023-05-29 15:17 ` [PATCH 07/11] btrfs: assert correct lock is held at btrfs_select_ref_head() fdmanana
@ 2023-05-29 15:17 ` fdmanana
2023-05-29 15:17 ` [PATCH 09/11] btrfs: use a single switch statement when initializing delayed ref head fdmanana
` (3 subsequent siblings)
11 siblings, 0 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:17 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
There's no point in having several fields defined as 1 bit unsigned int
bitfields in struct btrfs_delayed_ref_head when we can instead use a bool
type: it makes the code a bit more readable and it doesn't change the
structure size. So switch them to proper booleans.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/delayed-ref.c | 12 ++++++------
fs/btrfs/delayed-ref.h | 8 ++++----
fs/btrfs/extent-tree.c | 10 +++++-----
3 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index c61af9012a82..bf8c2ac6c95e 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -531,7 +531,7 @@ struct btrfs_delayed_ref_head *btrfs_select_ref_head(
href_node);
}
- head->processing = 1;
+ head->processing = true;
WARN_ON(delayed_refs->num_heads_ready == 0);
delayed_refs->num_heads_ready--;
delayed_refs->run_delayed_start = head->bytenr +
@@ -549,7 +549,7 @@ void btrfs_delete_ref_head(struct btrfs_delayed_ref_root *delayed_refs,
RB_CLEAR_NODE(&head->href_node);
atomic_dec(&delayed_refs->num_entries);
delayed_refs->num_heads--;
- if (head->processing == 0)
+ if (!head->processing)
delayed_refs->num_heads_ready--;
}
@@ -697,7 +697,7 @@ static void init_delayed_ref_head(struct btrfs_delayed_ref_head *head_ref,
bool is_system)
{
int count_mod = 1;
- int must_insert_reserved = 0;
+ bool must_insert_reserved = false;
/* If reserved is provided, it must be a data extent. */
BUG_ON(!is_data && reserved);
@@ -722,9 +722,9 @@ static void init_delayed_ref_head(struct btrfs_delayed_ref_head *head_ref,
* BTRFS_ADD_DELAYED_REF because other special casing is not required.
*/
if (action == BTRFS_ADD_DELAYED_EXTENT)
- must_insert_reserved = 1;
+ must_insert_reserved = true;
else
- must_insert_reserved = 0;
+ must_insert_reserved = false;
refcount_set(&head_ref->refs, 1);
head_ref->bytenr = bytenr;
@@ -736,7 +736,7 @@ static void init_delayed_ref_head(struct btrfs_delayed_ref_head *head_ref,
head_ref->ref_tree = RB_ROOT_CACHED;
INIT_LIST_HEAD(&head_ref->ref_add_list);
RB_CLEAR_NODE(&head_ref->href_node);
- head_ref->processing = 0;
+ head_ref->processing = false;
head_ref->total_ref_mod = count_mod;
spin_lock_init(&head_ref->lock);
mutex_init(&head_ref->mutex);
diff --git a/fs/btrfs/delayed-ref.h b/fs/btrfs/delayed-ref.h
index 9b103bf27185..b8e14b0ba5f1 100644
--- a/fs/btrfs/delayed-ref.h
+++ b/fs/btrfs/delayed-ref.h
@@ -116,10 +116,10 @@ struct btrfs_delayed_ref_head {
* we need to update the in ram accounting to properly reflect
* the free has happened.
*/
- unsigned int must_insert_reserved:1;
- unsigned int is_data:1;
- unsigned int is_system:1;
- unsigned int processing:1;
+ bool must_insert_reserved;
+ bool is_data;
+ bool is_system;
+ bool processing;
};
struct btrfs_delayed_tree_ref {
diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
index 0aa775ac31e4..ffd4198c19b0 100644
--- a/fs/btrfs/extent-tree.c
+++ b/fs/btrfs/extent-tree.c
@@ -1497,7 +1497,7 @@ static int __btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
static int run_delayed_data_ref(struct btrfs_trans_handle *trans,
struct btrfs_delayed_ref_node *node,
struct btrfs_delayed_extent_op *extent_op,
- int insert_reserved)
+ bool insert_reserved)
{
int ret = 0;
struct btrfs_delayed_data_ref *ref;
@@ -1647,7 +1647,7 @@ static int run_delayed_extent_op(struct btrfs_trans_handle *trans,
static int run_delayed_tree_ref(struct btrfs_trans_handle *trans,
struct btrfs_delayed_ref_node *node,
struct btrfs_delayed_extent_op *extent_op,
- int insert_reserved)
+ bool insert_reserved)
{
int ret = 0;
struct btrfs_delayed_tree_ref *ref;
@@ -1687,7 +1687,7 @@ static int run_delayed_tree_ref(struct btrfs_trans_handle *trans,
static int run_one_delayed_ref(struct btrfs_trans_handle *trans,
struct btrfs_delayed_ref_node *node,
struct btrfs_delayed_extent_op *extent_op,
- int insert_reserved)
+ bool insert_reserved)
{
int ret = 0;
@@ -1897,7 +1897,7 @@ static int btrfs_run_delayed_refs_for_head(struct btrfs_trans_handle *trans,
struct btrfs_delayed_ref_root *delayed_refs;
struct btrfs_delayed_extent_op *extent_op;
struct btrfs_delayed_ref_node *ref;
- int must_insert_reserved = 0;
+ bool must_insert_reserved;
int ret;
delayed_refs = &trans->transaction->delayed_refs;
@@ -1939,7 +1939,7 @@ static int btrfs_run_delayed_refs_for_head(struct btrfs_trans_handle *trans,
* spin lock.
*/
must_insert_reserved = locked_ref->must_insert_reserved;
- locked_ref->must_insert_reserved = 0;
+ locked_ref->must_insert_reserved = false;
extent_op = locked_ref->extent_op;
locked_ref->extent_op = NULL;
--
2.34.1
* [PATCH 09/11] btrfs: use a single switch statement when initializing delayed ref head
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
` (7 preceding siblings ...)
2023-05-29 15:17 ` [PATCH 08/11] btrfs: use bool type for delayed ref head fields that are used as booleans fdmanana
@ 2023-05-29 15:17 ` fdmanana
2023-05-29 15:17 ` [PATCH 10/11] btrfs: remove unnecessary prototype declarations at disk-io.c fdmanana
` (2 subsequent siblings)
11 siblings, 0 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:17 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
At init_delayed_ref_head(), we are using two separate if statements to
check the delayed ref head action, and initializing 'must_insert_reserved'
to false twice: once when the variable is declared and once again in an
else branch.
Make this simpler and more straightforward by using a single switch
statement, moving the comment about the drop action to the corresponding
switch case to make it clearer, and eliminating the duplicated
initialization of 'must_insert_reserved' to false.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/delayed-ref.c | 44 +++++++++++++++++++++++-------------------
1 file changed, 24 insertions(+), 20 deletions(-)
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index bf8c2ac6c95e..6a13cf00218b 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -702,29 +702,33 @@ static void init_delayed_ref_head(struct btrfs_delayed_ref_head *head_ref,
/* If reserved is provided, it must be a data extent. */
BUG_ON(!is_data && reserved);
- /*
- * The head node stores the sum of all the mods, so dropping a ref
- * should drop the sum in the head node by one.
- */
- if (action == BTRFS_UPDATE_DELAYED_HEAD)
+ switch (action) {
+ case BTRFS_UPDATE_DELAYED_HEAD:
count_mod = 0;
- else if (action == BTRFS_DROP_DELAYED_REF)
+ break;
+ case BTRFS_DROP_DELAYED_REF:
+ /*
+ * The head node stores the sum of all the mods, so dropping a ref
+ * should drop the sum in the head node by one.
+ */
count_mod = -1;
-
- /*
- * BTRFS_ADD_DELAYED_EXTENT means that we need to update the reserved
- * accounting when the extent is finally added, or if a later
- * modification deletes the delayed ref without ever inserting the
- * extent into the extent allocation tree. ref->must_insert_reserved
- * is the flag used to record that accounting mods are required.
- *
- * Once we record must_insert_reserved, switch the action to
- * BTRFS_ADD_DELAYED_REF because other special casing is not required.
- */
- if (action == BTRFS_ADD_DELAYED_EXTENT)
+ break;
+ case BTRFS_ADD_DELAYED_EXTENT:
+ /*
+ * BTRFS_ADD_DELAYED_EXTENT means that we need to update the
+ * reserved accounting when the extent is finally added, or if a
+ * later modification deletes the delayed ref without ever
+ * inserting the extent into the extent allocation tree.
+ * ref->must_insert_reserved is the flag used to record that
+ * accounting mods are required.
+ *
+ * Once we record must_insert_reserved, switch the action to
+ * BTRFS_ADD_DELAYED_REF because other special casing is not
+ * required.
+ */
must_insert_reserved = true;
- else
- must_insert_reserved = false;
+ break;
+ }
refcount_set(&head_ref->refs, 1);
head_ref->bytenr = bytenr;
--
2.34.1
* [PATCH 10/11] btrfs: remove unnecessary prototype declarations at disk-io.c
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
` (8 preceding siblings ...)
2023-05-29 15:17 ` [PATCH 09/11] btrfs: use a single switch statement when initializing delayed ref head fdmanana
@ 2023-05-29 15:17 ` fdmanana
2023-05-29 15:17 ` [PATCH 11/11] btrfs: make btrfs_destroy_delayed_refs() return void fdmanana
2023-05-30 15:04 ` [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups David Sterba
11 siblings, 0 replies; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:17 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
A few static functions at disk-io.c have forward declarations of their
prototypes, but these are not needed because all those functions are
defined before they are called, so remove the declarations.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/disk-io.c | 9 ---------
1 file changed, 9 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 793f7e48d62e..fb7ec47f21f1 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -60,15 +60,6 @@
BTRFS_SUPER_FLAG_METADUMP |\
BTRFS_SUPER_FLAG_METADUMP_V2)
-static void btrfs_destroy_ordered_extents(struct btrfs_root *root);
-static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
- struct btrfs_fs_info *fs_info);
-static void btrfs_destroy_delalloc_inodes(struct btrfs_root *root);
-static int btrfs_destroy_marked_extents(struct btrfs_fs_info *fs_info,
- struct extent_io_tree *dirty_pages,
- int mark);
-static int btrfs_destroy_pinned_extent(struct btrfs_fs_info *fs_info,
- struct extent_io_tree *pinned_extents);
static int btrfs_cleanup_transaction(struct btrfs_fs_info *fs_info);
static void btrfs_error_commit_super(struct btrfs_fs_info *fs_info);
--
2.34.1
* [PATCH 11/11] btrfs: make btrfs_destroy_delayed_refs() return void
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
` (9 preceding siblings ...)
2023-05-29 15:17 ` [PATCH 10/11] btrfs: remove unnecessary prototype declarations at disk-io.c fdmanana
@ 2023-05-29 15:17 ` fdmanana
2023-05-30 15:03 ` David Sterba
2023-05-30 15:04 ` [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups David Sterba
11 siblings, 1 reply; 15+ messages in thread
From: fdmanana @ 2023-05-29 15:17 UTC (permalink / raw)
To: linux-btrfs
From: Filipe Manana <fdmanana@suse.com>
btrfs_destroy_delayed_refs() always returns 0 and its single caller does
not even check its return value, so make it return void.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
---
fs/btrfs/disk-io.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index fb7ec47f21f1..02e9004f79dc 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -4809,13 +4809,12 @@ static void btrfs_destroy_all_ordered_extents(struct btrfs_fs_info *fs_info)
btrfs_wait_ordered_roots(fs_info, U64_MAX, 0, (u64)-1);
}
-static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
- struct btrfs_fs_info *fs_info)
+static void btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
+ struct btrfs_fs_info *fs_info)
{
struct rb_node *node;
struct btrfs_delayed_ref_root *delayed_refs;
struct btrfs_delayed_ref_node *ref;
- int ret = 0;
delayed_refs = &trans->delayed_refs;
@@ -4823,7 +4822,7 @@ static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
if (atomic_read(&delayed_refs->num_entries) == 0) {
spin_unlock(&delayed_refs->lock);
btrfs_debug(fs_info, "delayed_refs has NO entry");
- return ret;
+ return;
}
while ((node = rb_first_cached(&delayed_refs->href_root)) != NULL) {
@@ -4884,8 +4883,6 @@ static int btrfs_destroy_delayed_refs(struct btrfs_transaction *trans,
btrfs_qgroup_destroy_extent_records(trans);
spin_unlock(&delayed_refs->lock);
-
- return ret;
}
static void btrfs_destroy_delalloc_inodes(struct btrfs_root *root)
--
2.34.1
* Re: [PATCH 11/11] btrfs: make btrfs_destroy_delayed_refs() return void
2023-05-29 15:17 ` [PATCH 11/11] btrfs: make btrfs_destroy_delayed_refs() return void fdmanana
@ 2023-05-30 15:03 ` David Sterba
2023-05-30 16:01 ` Filipe Manana
0 siblings, 1 reply; 15+ messages in thread
From: David Sterba @ 2023-05-30 15:03 UTC (permalink / raw)
To: fdmanana; +Cc: linux-btrfs
On Mon, May 29, 2023 at 04:17:07PM +0100, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> btrfs_destroy_delayed_refs() always returns 0 and its single caller does
> not even check its return value, so make it return void.
A function can return void if none of its callees return an error,
directly or indirectly, there are no BUG_ONs left to be turned into
proper error handling, and there's no missing error handling.
There's still:
4610 cache = btrfs_lookup_block_group(fs_info, head->bytenr);
4611 BUG_ON(!cache);
and calling
btrfs_error_unpin_extent_range
unpin_extent_range
cache = btrfs_lookup_block_group()
BUG_ON(!cache)
If a function like btrfs_cleanup_one_transaction has limited options how
to handle errors then we can ignore them there but at least a comment
would be good that we're doing that intentionally.
This case is a bit special because there's only one caller so we know
the context and btrfs_destroy_delayed_refs() should eventually return
void but I'd rather do that as the last step after the call graph is
audited for proper error handling.
* Re: [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
` (10 preceding siblings ...)
2023-05-29 15:17 ` [PATCH 11/11] btrfs: make btrfs_destroy_delayed_refs() return void fdmanana
@ 2023-05-30 15:04 ` David Sterba
11 siblings, 0 replies; 15+ messages in thread
From: David Sterba @ 2023-05-30 15:04 UTC (permalink / raw)
To: fdmanana; +Cc: linux-btrfs
On Mon, May 29, 2023 at 04:16:56PM +0100, fdmanana@kernel.org wrote:
> From: Filipe Manana <fdmanana@suse.com>
>
> This brings an optimization for delayed ref heads and several cleanups.
> These come out while doing some other work around delayed refs, but as
> these are independent of that other work, and trivial, I'm sending these
> out separately. More details on the changelogs.
>
> Filipe Manana (11):
> btrfs: reorder some members of struct btrfs_delayed_ref_head
> btrfs: remove unused is_head field from struct btrfs_delayed_ref_node
> btrfs: remove pointless in_tree field from struct btrfs_delayed_ref_node
> btrfs: use a bool to track qgroup record insertion when adding ref head
> btrfs: make insert_delayed_ref() return a bool instead of an int
> btrfs: get rid of label and goto at insert_delayed_ref()
> btrfs: assert correct lock is held at btrfs_select_ref_head()
> btrfs: use bool type for delayed ref head fields that are used as booleans
> btrfs: use a single switch statement when initializing delayed ref head
> btrfs: remove unnecessary prototype declarations at disk-io.c
The above added to misc-next, thanks.
> btrfs: make btrfs_destroy_delayed_refs() return void
Commented why not merged for now.
* Re: [PATCH 11/11] btrfs: make btrfs_destroy_delayed_refs() return void
2023-05-30 15:03 ` David Sterba
@ 2023-05-30 16:01 ` Filipe Manana
0 siblings, 0 replies; 15+ messages in thread
From: Filipe Manana @ 2023-05-30 16:01 UTC (permalink / raw)
To: dsterba; +Cc: linux-btrfs
On Tue, May 30, 2023 at 4:10 PM David Sterba <dsterba@suse.cz> wrote:
>
> On Mon, May 29, 2023 at 04:17:07PM +0100, fdmanana@kernel.org wrote:
> > From: Filipe Manana <fdmanana@suse.com>
> >
> > btrfs_destroy_delayed_refs() always returns 0 and its single caller does
> > not even check its return value, so make it return void.
>
> Function can return void if none of its callees return an error,
> directly or indirectly, there are no BUG_ONs left to be turned to
> proper error handling or there's no missing error handling.
>
> There's still:
>
> 4610 cache = btrfs_lookup_block_group(fs_info, head->bytenr);
> 4611 BUG_ON(!cache);
>
> and calling
>
> btrfs_error_unpin_extent_range
> unpin_extent_range
> cache = btrfs_lookup_block_group()
> BUG_ON(!cache)
>
> If a function like btrfs_cleanup_one_transaction has limited options how
> to handle errors then we can ignore them there but at least a comment
> would be good that we're doing that intentionally.
>
> This case is a bit special because there's only one caller so we know
> the context and btrfs_destroy_delayed_refs() should eventually return
> void but I'd rather do that as the last step after the call graph is
> audited for proper error handling.
What possible error handling are you expecting?
This is the transaction abort path; we have no way of dealing with
errors, so every cleanup of resources is best effort.
Thanks.
Thread overview: 15+ messages
2023-05-29 15:16 [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups fdmanana
2023-05-29 15:16 ` [PATCH 01/11] btrfs: reorder some members of struct btrfs_delayed_ref_head fdmanana
2023-05-29 15:16 ` [PATCH 02/11] btrfs: remove unused is_head field from struct btrfs_delayed_ref_node fdmanana
2023-05-29 15:16 ` [PATCH 03/11] btrfs: remove pointless in_tree " fdmanana
2023-05-29 15:17 ` [PATCH 04/11] btrfs: use a bool to track qgroup record insertion when adding ref head fdmanana
2023-05-29 15:17 ` [PATCH 05/11] btrfs: make insert_delayed_ref() return a bool instead of an int fdmanana
2023-05-29 15:17 ` [PATCH 06/11] btrfs: get rid of label and goto at insert_delayed_ref() fdmanana
2023-05-29 15:17 ` [PATCH 07/11] btrfs: assert correct lock is held at btrfs_select_ref_head() fdmanana
2023-05-29 15:17 ` [PATCH 08/11] btrfs: use bool type for delayed ref head fields that are used as booleans fdmanana
2023-05-29 15:17 ` [PATCH 09/11] btrfs: use a single switch statement when initializing delayed ref head fdmanana
2023-05-29 15:17 ` [PATCH 10/11] btrfs: remove unnecessary prototype declarations at disk-io.c fdmanana
2023-05-29 15:17 ` [PATCH 11/11] btrfs: make btrfs_destroy_delayed_refs() return void fdmanana
2023-05-30 15:03 ` David Sterba
2023-05-30 16:01 ` Filipe Manana
2023-05-30 15:04 ` [PATCH 00/11] btrfs: some delayed refs optimizations and cleanups David Sterba