* [PATCH v3 0/6] reduce boilerplate code within btrfs
@ 2024-12-17 21:58 Qu Wenruo
From: Qu Wenruo @ 2024-12-17 21:58 UTC (permalink / raw)
To: linux-btrfs
[Changelog]
v3:
- Fix the error handling in the 2nd patch

- Fix the backref walk failure
  There were two bugs in the 3rd patch:
  * Bad parent/ref pointer update
    They were only updated once at the beginning, while they should
    only be updated when an existing node is found.
  * Order change for prelim_ref
    prelim_ref_compare() requires the first parameter to be the
    existing ref and the second to be the new one, which is the
    opposite of the new rb_find_add_cached() order.
  Fix both and remove unnecessary variables.

- Add named parameters for the cmp() function used in rb_find_add_cached()
  This gives a more explicit distinction between the new node and the
  existing node, and should reduce confusion about the order.

- Unify the parameters of the cmp() functions
  All internal entry structures now use the corresponding "new_/exist_"
  prefixes.

- Make all parameters of cmp() const

v2 (by Roger L. Beckermeyer III):
- Fix the error handling in the 2nd patch
  It was still not done correctly

- Add Acked-by tag from Peter
The goal of this patch series is to reduce boilerplate code within
btrfs. To accomplish this, rb_find_add_cached() is added to
include/linux/rbtree.h, and the open-coded rbtree insertion loops in
btrfs are replaced with it.
changelog:
- updated if() statements to utilize newer error checking
- resolved lock error on 0002
- edited title of patches to utilize "update" instead of "edit"
- added Acked-by from Peter Zijlstra to 0001
- eliminated extra variables throughout the patch series
Roger L. Beckermeyer III (6):
rbtree: add rb_find_add_cached() to rbtree.h
btrfs: update btrfs_add_block_group_cache() to use rb helper
btrfs: update prelim_ref_insert() to use rb helpers
btrfs: update __btrfs_add_delayed_item() to use rb helper
btrfs: update btrfs_add_chunk_map() to use rb helpers
btrfs: update tree_insert() to use rb helpers
fs/btrfs/backref.c | 79 ++++++++++++++++++++--------------------
fs/btrfs/block-group.c | 46 +++++++++++------------
fs/btrfs/delayed-inode.c | 43 ++++++++++------------
fs/btrfs/delayed-ref.c | 45 +++++++++--------------
fs/btrfs/volumes.c | 41 +++++++++++----------
include/linux/rbtree.h | 37 +++++++++++++++++++
6 files changed, 156 insertions(+), 135 deletions(-)
--
2.47.1
* [PATCH v3 1/6] rbtree: add rb_find_add_cached() to rbtree.h
From: Qu Wenruo @ 2024-12-17 21:58 UTC (permalink / raw)
To: linux-btrfs; +Cc: Roger L. Beckermeyer III, Josef Bacik, Peter Zijlstra (Intel)
From: "Roger L. Beckermeyer III" <beckerlee3@gmail.com>
Add rb_find_add_cached() as a helper function for use with red-black
trees. It is used in btrfs to reduce boilerplate code.

Since it is a new helper, its cmp() function requires both parameters
to be const rb_node pointers.
Suggested-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Roger L. Beckermeyer III <beckerlee3@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
include/linux/rbtree.h | 37 +++++++++++++++++++++++++++++++++++++
1 file changed, 37 insertions(+)
diff --git a/include/linux/rbtree.h b/include/linux/rbtree.h
index 7c173aa64e1e..8d2ba3749866 100644
--- a/include/linux/rbtree.h
+++ b/include/linux/rbtree.h
@@ -210,6 +210,43 @@ rb_add(struct rb_node *node, struct rb_root *tree,
rb_insert_color(node, tree);
}
+/**
+ * rb_find_add_cached() - find equivalent @node in @tree, or add @node
+ * @node: node to look-for / insert
+ * @tree: tree to search / modify
+ * @cmp: operator defining the node order
+ *
+ * Returns the rb_node matching @node, or NULL when no match is found and @node
+ * is inserted.
+ */
+static __always_inline struct rb_node *
+rb_find_add_cached(struct rb_node *node, struct rb_root_cached *tree,
+ int (*cmp)(const struct rb_node *new, const struct rb_node *exist))
+{
+ bool leftmost = true;
+ struct rb_node **link = &tree->rb_root.rb_node;
+ struct rb_node *parent = NULL;
+ int c;
+
+ while (*link) {
+ parent = *link;
+ c = cmp(node, parent);
+
+ if (c < 0) {
+ link = &parent->rb_left;
+ } else if (c > 0) {
+ link = &parent->rb_right;
+ leftmost = false;
+ } else {
+ return parent;
+ }
+ }
+
+ rb_link_node(node, parent, link);
+ rb_insert_color_cached(node, tree, leftmost);
+ return NULL;
+}
+
/**
* rb_find_add() - find equivalent @node in @tree, or add @node
* @node: node to look-for / insert
--
2.47.1
* [PATCH v3 2/6] btrfs: update btrfs_add_block_group_cache() to use rb helper
From: Qu Wenruo @ 2024-12-17 21:58 UTC (permalink / raw)
To: linux-btrfs; +Cc: Roger L. Beckermeyer III, Josef Bacik
From: "Roger L. Beckermeyer III" <beckerlee3@gmail.com>
Update fs/btrfs/block-group.c to use rb_find_add_cached().
Suggested-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Roger L. Beckermeyer III <beckerlee3@gmail.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/block-group.c | 46 ++++++++++++++++++++----------------------
1 file changed, 22 insertions(+), 24 deletions(-)
diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
index 5be029734cfa..39881d66cfa0 100644
--- a/fs/btrfs/block-group.c
+++ b/fs/btrfs/block-group.c
@@ -173,43 +173,41 @@ void btrfs_put_block_group(struct btrfs_block_group *cache)
}
}
+static int btrfs_bg_start_cmp(const struct rb_node *new,
+ const struct rb_node *exist)
+{
+ const struct btrfs_block_group *new_bg =
+ rb_entry(new, struct btrfs_block_group, cache_node);
+ const struct btrfs_block_group *exist_bg =
+ rb_entry(exist, struct btrfs_block_group, cache_node);
+
+ if (new_bg->start < exist_bg->start)
+ return -1;
+ if (new_bg->start > exist_bg->start)
+ return 1;
+ return 0;
+}
+
/*
* This adds the block group to the fs_info rb tree for the block group cache
*/
static int btrfs_add_block_group_cache(struct btrfs_fs_info *info,
struct btrfs_block_group *block_group)
{
- struct rb_node **p;
- struct rb_node *parent = NULL;
- struct btrfs_block_group *cache;
- bool leftmost = true;
+ struct rb_node *exist;
+ int ret = 0;
ASSERT(block_group->length != 0);
write_lock(&info->block_group_cache_lock);
- p = &info->block_group_cache_tree.rb_root.rb_node;
-
- while (*p) {
- parent = *p;
- cache = rb_entry(parent, struct btrfs_block_group, cache_node);
- if (block_group->start < cache->start) {
- p = &(*p)->rb_left;
- } else if (block_group->start > cache->start) {
- p = &(*p)->rb_right;
- leftmost = false;
- } else {
- write_unlock(&info->block_group_cache_lock);
- return -EEXIST;
- }
- }
-
- rb_link_node(&block_group->cache_node, parent, p);
- rb_insert_color_cached(&block_group->cache_node,
- &info->block_group_cache_tree, leftmost);
+ exist = rb_find_add_cached(&block_group->cache_node,
+ &info->block_group_cache_tree, btrfs_bg_start_cmp);
+ if (exist)
+ ret = -EEXIST;
write_unlock(&info->block_group_cache_lock);
- return 0;
+ return ret;
}
/*
--
2.47.1
* [PATCH v3 3/6] btrfs: update prelim_ref_insert() to use rb helpers
From: Qu Wenruo @ 2024-12-17 21:58 UTC (permalink / raw)
To: linux-btrfs; +Cc: Roger L. Beckermeyer III
From: "Roger L. Beckermeyer III" <beckerlee3@gmail.com>
Update prelim_ref_insert() to use rb_find_add_cached().

One special note: the existing prelim_ref_compare() is called with the
first parameter being the existing ref in the rbtree, but
rb_find_add_cached() expects the cmp() function's first parameter to be
the to-be-added node. The new helper prelim_ref_rb_add_cmp() adapts to
this new order.
Signed-off-by: Roger L. Beckermeyer III <beckerlee3@gmail.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/backref.c | 79 +++++++++++++++++++++++-----------------------
1 file changed, 39 insertions(+), 40 deletions(-)
diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
index 6d9f39c1d89c..3d3923cfc357 100644
--- a/fs/btrfs/backref.c
+++ b/fs/btrfs/backref.c
@@ -250,6 +250,21 @@ static int prelim_ref_compare(const struct prelim_ref *ref1,
return 0;
}
+static int prelim_ref_rb_add_cmp(const struct rb_node *new,
+ const struct rb_node *exist)
+{
+ const struct prelim_ref *ref_new =
+ rb_entry(new, struct prelim_ref, rbnode);
+ const struct prelim_ref *ref_exist =
+ rb_entry(exist, struct prelim_ref, rbnode);
+
+ /*
+ * prelim_ref_compare() expects the first parameter as the existing one,
+ * different from the rb_find_add_cached() order.
+ */
+ return prelim_ref_compare(ref_exist, ref_new);
+}
+
static void update_share_count(struct share_check *sc, int oldcount,
int newcount, const struct prelim_ref *newref)
{
@@ -278,55 +293,39 @@ static void prelim_ref_insert(const struct btrfs_fs_info *fs_info,
struct share_check *sc)
{
struct rb_root_cached *root;
- struct rb_node **p;
- struct rb_node *parent = NULL;
- struct prelim_ref *ref;
- int result;
- bool leftmost = true;
+ struct rb_node *exist;
root = &preftree->root;
- p = &root->rb_root.rb_node;
+ exist = rb_find_add_cached(&newref->rbnode, root, prelim_ref_rb_add_cmp);
+ if (exist) {
+ struct prelim_ref *ref = rb_entry(exist, struct prelim_ref, rbnode);
+ /* Identical refs, merge them and free @newref */
+ struct extent_inode_elem *eie = ref->inode_list;
- while (*p) {
- parent = *p;
- ref = rb_entry(parent, struct prelim_ref, rbnode);
- result = prelim_ref_compare(ref, newref);
- if (result < 0) {
- p = &(*p)->rb_left;
- } else if (result > 0) {
- p = &(*p)->rb_right;
- leftmost = false;
- } else {
- /* Identical refs, merge them and free @newref */
- struct extent_inode_elem *eie = ref->inode_list;
+ while (eie && eie->next)
+ eie = eie->next;
- while (eie && eie->next)
- eie = eie->next;
-
- if (!eie)
- ref->inode_list = newref->inode_list;
- else
- eie->next = newref->inode_list;
- trace_btrfs_prelim_ref_merge(fs_info, ref, newref,
- preftree->count);
- /*
- * A delayed ref can have newref->count < 0.
- * The ref->count is updated to follow any
- * BTRFS_[ADD|DROP]_DELAYED_REF actions.
- */
- update_share_count(sc, ref->count,
- ref->count + newref->count, newref);
- ref->count += newref->count;
- free_pref(newref);
- return;
- }
+ if (!eie)
+ ref->inode_list = newref->inode_list;
+ else
+ eie->next = newref->inode_list;
+ trace_btrfs_prelim_ref_merge(fs_info, ref, newref,
+ preftree->count);
+ /*
+ * A delayed ref can have newref->count < 0.
+ * The ref->count is updated to follow any
+ * BTRFS_[ADD|DROP]_DELAYED_REF actions.
+ */
+ update_share_count(sc, ref->count,
+ ref->count + newref->count, newref);
+ ref->count += newref->count;
+ free_pref(newref);
+ return;
}
update_share_count(sc, 0, newref->count, newref);
preftree->count++;
trace_btrfs_prelim_ref_insert(fs_info, newref, NULL, preftree->count);
- rb_link_node(&newref->rbnode, parent, p);
- rb_insert_color_cached(&newref->rbnode, root, leftmost);
}
/*
--
2.47.1
* [PATCH v3 4/6] btrfs: update __btrfs_add_delayed_item() to use rb helper
From: Qu Wenruo @ 2024-12-17 21:58 UTC (permalink / raw)
To: linux-btrfs; +Cc: Roger L. Beckermeyer III
From: "Roger L. Beckermeyer III" <beckerlee3@gmail.com>
Update __btrfs_add_delayed_item() to use rb_find_add_cached().
Signed-off-by: Roger L. Beckermeyer III <beckerlee3@gmail.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/delayed-inode.c | 43 ++++++++++++++++++----------------------
1 file changed, 19 insertions(+), 24 deletions(-)
diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
index 508bdbae29a0..60a6866a6cd9 100644
--- a/fs/btrfs/delayed-inode.c
+++ b/fs/btrfs/delayed-inode.c
@@ -366,40 +366,35 @@ static struct btrfs_delayed_item *__btrfs_lookup_delayed_item(
return NULL;
}
+static int btrfs_delayed_item_cmp(const struct rb_node *new,
+ const struct rb_node *exist)
+{
+ const struct btrfs_delayed_item *new_item =
+ rb_entry(new, struct btrfs_delayed_item, rb_node);
+ const struct btrfs_delayed_item *exist_item =
+ rb_entry(exist, struct btrfs_delayed_item, rb_node);
+
+ if (new_item->index < exist_item->index)
+ return -1;
+ if (new_item->index > exist_item->index)
+ return 1;
+ return 0;
+}
+
static int __btrfs_add_delayed_item(struct btrfs_delayed_node *delayed_node,
struct btrfs_delayed_item *ins)
{
- struct rb_node **p, *node;
- struct rb_node *parent_node = NULL;
struct rb_root_cached *root;
- struct btrfs_delayed_item *item;
- bool leftmost = true;
+ struct rb_node *exist;
if (ins->type == BTRFS_DELAYED_INSERTION_ITEM)
root = &delayed_node->ins_root;
else
root = &delayed_node->del_root;
- p = &root->rb_root.rb_node;
- node = &ins->rb_node;
-
- while (*p) {
- parent_node = *p;
- item = rb_entry(parent_node, struct btrfs_delayed_item,
- rb_node);
-
- if (item->index < ins->index) {
- p = &(*p)->rb_right;
- leftmost = false;
- } else if (item->index > ins->index) {
- p = &(*p)->rb_left;
- } else {
- return -EEXIST;
- }
- }
-
- rb_link_node(node, parent_node, p);
- rb_insert_color_cached(node, root, leftmost);
+ exist = rb_find_add_cached(&ins->rb_node, root, btrfs_delayed_item_cmp);
+ if (exist)
+ return -EEXIST;
if (ins->type == BTRFS_DELAYED_INSERTION_ITEM &&
ins->index >= delayed_node->index_cnt)
--
2.47.1
* [PATCH v3 5/6] btrfs: update btrfs_add_chunk_map() to use rb helpers
From: Qu Wenruo @ 2024-12-17 21:58 UTC (permalink / raw)
To: linux-btrfs; +Cc: Roger L. Beckermeyer III
From: "Roger L. Beckermeyer III" <beckerlee3@gmail.com>
Update btrfs_add_chunk_map() to use rb_find_add_cached().
Signed-off-by: Roger L. Beckermeyer III <beckerlee3@gmail.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/volumes.c | 41 +++++++++++++++++++++--------------------
1 file changed, 21 insertions(+), 20 deletions(-)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index d32913c51d69..c8b079ad1dfa 100644
--- a/fs/btrfs/volumes.c
+++ b/fs/btrfs/volumes.c
@@ -5514,33 +5514,34 @@ void btrfs_remove_chunk_map(struct btrfs_fs_info *fs_info, struct btrfs_chunk_ma
btrfs_free_chunk_map(map);
}
+static int btrfs_chunk_map_cmp(const struct rb_node *new,
+ const struct rb_node *exist)
+{
+ const struct btrfs_chunk_map *new_map =
+ rb_entry(new, struct btrfs_chunk_map, rb_node);
+ const struct btrfs_chunk_map *exist_map =
+ rb_entry(exist, struct btrfs_chunk_map, rb_node);
+
+ if (new_map->start == exist_map->start)
+ return 0;
+ if (new_map->start < exist_map->start)
+ return -1;
+ return 1;
+}
+
EXPORT_FOR_TESTS
int btrfs_add_chunk_map(struct btrfs_fs_info *fs_info, struct btrfs_chunk_map *map)
{
- struct rb_node **p;
- struct rb_node *parent = NULL;
- bool leftmost = true;
+ struct rb_node *exist;
write_lock(&fs_info->mapping_tree_lock);
- p = &fs_info->mapping_tree.rb_root.rb_node;
- while (*p) {
- struct btrfs_chunk_map *entry;
+ exist = rb_find_add_cached(&map->rb_node, &fs_info->mapping_tree,
+ btrfs_chunk_map_cmp);
- parent = *p;
- entry = rb_entry(parent, struct btrfs_chunk_map, rb_node);
-
- if (map->start < entry->start) {
- p = &(*p)->rb_left;
- } else if (map->start > entry->start) {
- p = &(*p)->rb_right;
- leftmost = false;
- } else {
- write_unlock(&fs_info->mapping_tree_lock);
- return -EEXIST;
- }
+ if (exist) {
+ write_unlock(&fs_info->mapping_tree_lock);
+ return -EEXIST;
}
- rb_link_node(&map->rb_node, parent, p);
- rb_insert_color_cached(&map->rb_node, &fs_info->mapping_tree, leftmost);
chunk_map_device_set_bits(map, CHUNK_ALLOCATED);
chunk_map_device_clear_bits(map, CHUNK_TRIMMED);
write_unlock(&fs_info->mapping_tree_lock);
--
2.47.1
* [PATCH v3 6/6] btrfs: update tree_insert() to use rb helpers
From: Qu Wenruo @ 2024-12-17 21:58 UTC (permalink / raw)
To: linux-btrfs; +Cc: Roger L. Beckermeyer III
From: "Roger L. Beckermeyer III" <beckerlee3@gmail.com>
Update tree_insert() to use rb_find_add_cached(), adding cmp_refs_node()
as its comparison function.

While at it, also make comp_data_refs() and comp_refs() accept both
parameters as const.
Signed-off-by: Roger L. Beckermeyer III <beckerlee3@gmail.com>
Reviewed-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/delayed-ref.c | 45 +++++++++++++++++-------------------------
1 file changed, 18 insertions(+), 27 deletions(-)
diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
index 30f7079fa28e..98c5b61dabe8 100644
--- a/fs/btrfs/delayed-ref.c
+++ b/fs/btrfs/delayed-ref.c
@@ -268,8 +268,8 @@ int btrfs_delayed_refs_rsv_refill(struct btrfs_fs_info *fs_info,
/*
* compare two delayed data backrefs with same bytenr and type
*/
-static int comp_data_refs(struct btrfs_delayed_ref_node *ref1,
- struct btrfs_delayed_ref_node *ref2)
+static int comp_data_refs(const struct btrfs_delayed_ref_node *ref1,
+ const struct btrfs_delayed_ref_node *ref2)
{
if (ref1->data_ref.objectid < ref2->data_ref.objectid)
return -1;
@@ -282,8 +282,8 @@ static int comp_data_refs(struct btrfs_delayed_ref_node *ref1,
return 0;
}
-static int comp_refs(struct btrfs_delayed_ref_node *ref1,
- struct btrfs_delayed_ref_node *ref2,
+static int comp_refs(const struct btrfs_delayed_ref_node *ref1,
+ const struct btrfs_delayed_ref_node *ref2,
bool check_seq)
{
int ret = 0;
@@ -317,34 +317,25 @@ static int comp_refs(struct btrfs_delayed_ref_node *ref1,
return 0;
}
+static int cmp_refs_node(const struct rb_node *new, const struct rb_node *exist)
+{
+ const struct btrfs_delayed_ref_node *new_node =
+ rb_entry(new, struct btrfs_delayed_ref_node, ref_node);
+ const struct btrfs_delayed_ref_node *exist_node =
+ rb_entry(exist, struct btrfs_delayed_ref_node, ref_node);
+
+ return comp_refs(new_node, exist_node, true);
+}
+
static struct btrfs_delayed_ref_node* tree_insert(struct rb_root_cached *root,
struct btrfs_delayed_ref_node *ins)
{
- struct rb_node **p = &root->rb_root.rb_node;
struct rb_node *node = &ins->ref_node;
- struct rb_node *parent_node = NULL;
- struct btrfs_delayed_ref_node *entry;
- bool leftmost = true;
+ struct rb_node *exist;
- while (*p) {
- int comp;
-
- parent_node = *p;
- entry = rb_entry(parent_node, struct btrfs_delayed_ref_node,
- ref_node);
- comp = comp_refs(ins, entry, true);
- if (comp < 0) {
- p = &(*p)->rb_left;
- } else if (comp > 0) {
- p = &(*p)->rb_right;
- leftmost = false;
- } else {
- return entry;
- }
- }
-
- rb_link_node(node, parent_node, p);
- rb_insert_color_cached(node, root, leftmost);
+ exist = rb_find_add_cached(node, root, cmp_refs_node);
+ if (exist)
+ return rb_entry(exist, struct btrfs_delayed_ref_node, ref_node);
return NULL;
}
--
2.47.1
* Re: [PATCH v3 0/6] reduce boilerplate code within btrfs
From: David Sterba @ 2024-12-23 19:54 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Wed, Dec 18, 2024 at 08:28:49AM +1030, Qu Wenruo wrote:
Reviewed-by: David Sterba <dsterba@suse.com>