linux-btrfs.vger.kernel.org archive mirror
* [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c
@ 2023-04-19 21:23 Josef Bacik
  2023-04-19 21:23 ` [PATCH 01/18] btrfs-progs: sync and stub-out tree-mod-log.h Josef Bacik
                   ` (17 more replies)
  0 siblings, 18 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:23 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

Hello,

These are a bunch of changes that sync various API and structure differences
that exist between btrfs-progs and the kernel.  Most of them are small, but
the last patch syncs tree-checker.[ch].  That file was mostly left intact;
however, there is a slight change to disable some of the checking for tools
like fsck or btrfs-image.

This series depends on
	btrfs-progs: prep work for syncing files into kernel-shared
	btrfs-progs: sync basic code from the kernel
	btrfs-progs: prep work for syncing ctree.c

Thanks,

Josef

Josef Bacik (18):
  btrfs-progs: sync and stub-out tree-mod-log.h
  btrfs-progs: add btrfs_root_id helper
  btrfs-progs: remove root argument from free_extent and inc_extent_ref
  btrfs-progs: pass root_id for btrfs_free_tree_block
  btrfs-progs: add a free_extent_buffer_stale helper
  btrfs-progs: add btrfs_is_testing helper
  btrfs-progs: add accounting_lock to btrfs_root
  btrfs-progs: update read_tree_block to match the kernel definition
  btrfs-progs: make reada_for_search static
  btrfs-progs: sync btrfs_path fields with the kernel
  btrfs-progs: update arguments of find_extent_buffer
  btrfs-progs: add btrfs_readahead_node_child helper
  btrfs-progs: add an atomic arg to btrfs_buffer_uptodate
  btrfs-progs: add a btrfs_read_extent_buffer helper
  btrfs-progs: add BTRFS_STRIPE_LEN_SHIFT definition
  btrfs-progs: rename btrfs_check_* to __btrfs_check_*
  btrfs-progs: change btrfs_check_chunk_valid to match the kernel
    version
  btrfs-progs: sync tree-checker.[ch]

 Makefile                         |    1 +
 btrfs-corrupt-block.c            |    8 +-
 btrfs-find-root.c                |    2 +-
 check/clear-cache.c              |    5 +-
 check/main.c                     |   28 +-
 check/mode-common.c              |    4 +-
 check/mode-lowmem.c              |   31 +-
 check/qgroup-verify.c            |    3 +-
 check/repair.c                   |   13 +-
 cmds/inspect-dump-tree.c         |   12 +-
 cmds/inspect-tree-stats.c        |    4 +-
 cmds/rescue.c                    |    3 +-
 cmds/restore.c                   |   11 +-
 image/main.c                     |   25 +-
 include/kerncompat.h             |   10 +
 kernel-shared/backref.c          |    6 +-
 kernel-shared/ctree.c            |  222 +---
 kernel-shared/ctree.h            |   77 +-
 kernel-shared/disk-io.c          |   92 +-
 kernel-shared/disk-io.h          |   16 +-
 kernel-shared/extent-tree.c      |   33 +-
 kernel-shared/extent_io.c        |   26 +-
 kernel-shared/extent_io.h        |    4 +-
 kernel-shared/free-space-cache.c |    5 +-
 kernel-shared/print-tree.c       |    4 +-
 kernel-shared/tree-checker.c     | 2064 ++++++++++++++++++++++++++++++
 kernel-shared/tree-checker.h     |   72 ++
 kernel-shared/tree-mod-log.h     |   96 ++
 kernel-shared/volumes.c          |  136 +-
 kernel-shared/volumes.h          |    5 +-
 tune/change-uuid.c               |    2 +-
 31 files changed, 2521 insertions(+), 499 deletions(-)
 create mode 100644 kernel-shared/tree-checker.c
 create mode 100644 kernel-shared/tree-checker.h
 create mode 100644 kernel-shared/tree-mod-log.h

-- 
2.40.0


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [PATCH 01/18] btrfs-progs: sync and stub-out tree-mod-log.h
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
@ 2023-04-19 21:23 ` Josef Bacik
  2023-04-19 21:23 ` [PATCH 02/18] btrfs-progs: add btrfs_root_id helper Josef Bacik
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:23 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

In order to sync ctree.c we need definitions for the tree-mod-log
machinery.  However, we don't need any of the code, since we don't do
live backref lookups in btrfs-progs, so simply sync the header file and
stub out all the helpers.
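
As an illustration (not part of the patch), a kernel-style call site
compiled against these stubs collapses to a no-op that always reports
success:

  #include "kernel-shared/tree-mod-log.h"

  /* Hypothetical caller: with the stubs above nothing is recorded and
   * the call always succeeds. */
  static int record_key_replace(struct extent_buffer *eb, int slot)
  {
          return btrfs_tree_mod_log_insert_key(eb, slot,
                                               BTRFS_MOD_LOG_KEY_REPLACE);
  }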

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 kernel-shared/tree-mod-log.h | 96 ++++++++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)
 create mode 100644 kernel-shared/tree-mod-log.h

diff --git a/kernel-shared/tree-mod-log.h b/kernel-shared/tree-mod-log.h
new file mode 100644
index 00000000..922862b2
--- /dev/null
+++ b/kernel-shared/tree-mod-log.h
@@ -0,0 +1,96 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef BTRFS_TREE_MOD_LOG_H
+#define BTRFS_TREE_MOD_LOG_H
+
+#include "ctree.h"
+
+/* Represents a tree mod log user. */
+struct btrfs_seq_list {
+	struct list_head list;
+	u64 seq;
+};
+
+#define BTRFS_SEQ_LIST_INIT(name) { .list = LIST_HEAD_INIT((name).list), .seq = 0 }
+#define BTRFS_SEQ_LAST            ((u64)-1)
+
+enum btrfs_mod_log_op {
+	BTRFS_MOD_LOG_KEY_REPLACE,
+	BTRFS_MOD_LOG_KEY_ADD,
+	BTRFS_MOD_LOG_KEY_REMOVE,
+	BTRFS_MOD_LOG_KEY_REMOVE_WHILE_FREEING,
+	BTRFS_MOD_LOG_KEY_REMOVE_WHILE_MOVING,
+	BTRFS_MOD_LOG_MOVE_KEYS,
+	BTRFS_MOD_LOG_ROOT_REPLACE,
+};
+
+static inline u64 btrfs_get_tree_mod_seq(struct btrfs_fs_info *fs_info,
+					 struct btrfs_seq_list *elem)
+{
+	return 0;
+}
+
+static inline void btrfs_put_tree_mod_seq(struct btrfs_fs_info *fs_info,
+					  struct btrfs_seq_list *elem)
+{
+}
+
+static inline int btrfs_tree_mod_log_insert_root(struct extent_buffer *old_root,
+						 struct extent_buffer *new_root,
+						 bool log_removal)
+{
+	return 0;
+}
+
+static inline int btrfs_tree_mod_log_insert_key(struct extent_buffer *eb, int slot,
+						enum btrfs_mod_log_op op)
+{
+	return 0;
+}
+
+static inline int btrfs_tree_mod_log_free_eb(struct extent_buffer *eb)
+{
+	return 0;
+}
+
+static inline struct extent_buffer *btrfs_tree_mod_log_rewind(struct btrfs_fs_info *fs_info,
+							      struct btrfs_path *path,
+							      struct extent_buffer *eb,
+							      u64 time_seq)
+{
+	return NULL;
+}
+
+static inline struct extent_buffer *btrfs_get_old_root(struct btrfs_root *root,
+						       u64 time_seq)
+{
+	return NULL;
+}
+
+static inline int btrfs_old_root_level(struct btrfs_root *root, u64 time_seq)
+{
+	return btrfs_header_level(root->node);
+}
+
+static inline int btrfs_tree_mod_log_eb_copy(struct extent_buffer *dst,
+					     struct extent_buffer *src,
+					     unsigned long dst_offset,
+					     unsigned long src_offset,
+					     int nr_items)
+{
+	return 0;
+}
+
+static inline int btrfs_tree_mod_log_insert_move(struct extent_buffer *eb,
+						 int dst_slot, int src_slot,
+						 int nr_items)
+{
+	return 0;
+}
+
+static inline u64 btrfs_tree_mod_log_lowest_seq(struct btrfs_fs_info *fs_info)
+{
+	return 0;
+}
+
+#endif
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 02/18] btrfs-progs: add btrfs_root_id helper
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
  2023-04-19 21:23 ` [PATCH 01/18] btrfs-progs: sync and stub-out tree-mod-log.h Josef Bacik
@ 2023-04-19 21:23 ` Josef Bacik
  2023-04-19 21:23 ` [PATCH 03/18] btrfs-progs: remove root argument from free_extent and inc_extent_ref Josef Bacik
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:23 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

This exists in the kernel and is used throughout ctree.c.  Sync this
helper to make syncing ctree.c easier.
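
For illustration, a hypothetical call site before and after (the helper
is a trivial wrapper around root->root_key.objectid):

  /* Before: open-coded access to the root key. */
  static bool is_fs_tree_old(const struct btrfs_root *root)
  {
          return root->root_key.objectid == BTRFS_FS_TREE_OBJECTID;
  }

  /* After: the kernel-style helper, so synced ctree.c code compiles as is. */
  static bool is_fs_tree(const struct btrfs_root *root)
  {
          return btrfs_root_id(root) == BTRFS_FS_TREE_OBJECTID;
  }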

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 kernel-shared/ctree.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h
index 655b714f..d5cd7803 100644
--- a/kernel-shared/ctree.h
+++ b/kernel-shared/ctree.h
@@ -476,6 +476,11 @@ struct btrfs_root {
 	struct rb_node rb_node;
 };
 
+static inline u64 btrfs_root_id(const struct btrfs_root *root)
+{
+	return root->root_key.objectid;
+}
+
 static inline u32 BTRFS_MAX_ITEM_SIZE(const struct btrfs_fs_info *info)
 {
 	return BTRFS_LEAF_DATA_SIZE(info) - sizeof(struct btrfs_item);
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 03/18] btrfs-progs: remove root argument from free_extent and inc_extent_ref
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
  2023-04-19 21:23 ` [PATCH 01/18] btrfs-progs: sync and stub-out tree-mod-log.h Josef Bacik
  2023-04-19 21:23 ` [PATCH 02/18] btrfs-progs: add btrfs_root_id helper Josef Bacik
@ 2023-04-19 21:23 ` Josef Bacik
  2023-04-19 21:23 ` [PATCH 04/18] btrfs-progs: pass root_id for btrfs_free_tree_block Josef Bacik
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:23 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

Neither of these actually needs the root argument; we provide all the
information for the ref through the other arguments we pass in.  Remove
the root argument from both of them.  Both have to change in the same
patch because the __btrfs_mod_ref helper picks one or the other
function for processing reference updates.
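
A condensed sketch of that pattern (hypothetical wrapper, argument names
shortened) showing why the two signatures have to stay identical:

  static int mod_ref_sketch(struct btrfs_trans_handle *trans, int inc,
                            u64 bytenr, u64 num_bytes, u64 parent,
                            u64 ref_root, u64 owner, u64 offset)
  {
          int (*process_func)(struct btrfs_trans_handle *trans,
                              u64, u64, u64, u64, u64, u64);

          /* One pointer, two possible targets: both must share a signature. */
          process_func = inc ? btrfs_inc_extent_ref : btrfs_free_extent;
          return process_func(trans, bytenr, num_bytes, parent, ref_root,
                              owner, offset);
  }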

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 check/clear-cache.c              |  5 +++--
 check/main.c                     | 10 ++++------
 check/mode-lowmem.c              | 10 +++++-----
 kernel-shared/ctree.c            | 29 +++++++++++++----------------
 kernel-shared/ctree.h            |  2 --
 kernel-shared/extent-tree.c      | 24 ++++++++++--------------
 kernel-shared/free-space-cache.c |  5 ++---
 7 files changed, 37 insertions(+), 48 deletions(-)

diff --git a/check/clear-cache.c b/check/clear-cache.c
index 5ffdd430..031379ce 100644
--- a/check/clear-cache.c
+++ b/check/clear-cache.c
@@ -513,8 +513,9 @@ int truncate_free_ino_items(struct btrfs_root *root)
 			extent_offset = found_key.offset -
 					btrfs_file_extent_offset(leaf, fi);
 			UASSERT(extent_offset == 0);
-			ret = btrfs_free_extent(trans, root, extent_disk_bytenr,
-						extent_num_bytes, 0, root->objectid,
+			ret = btrfs_free_extent(trans, extent_disk_bytenr,
+						extent_num_bytes, 0,
+						root->objectid,
 						BTRFS_FREE_INO_OBJECTID, 0);
 			if (ret < 0) {
 				btrfs_abort_transaction(trans, ret);
diff --git a/check/main.c b/check/main.c
index 09805511..275f912b 100644
--- a/check/main.c
+++ b/check/main.c
@@ -3586,7 +3586,7 @@ static int repair_btree(struct btrfs_root *root,
 		 * return value is not concerned.
 		 */
 		btrfs_release_path(&path);
-		ret = btrfs_free_extent(trans, root, offset,
+		ret = btrfs_free_extent(trans, offset,
 				gfs_info->nodesize, 0,
 				root->root_key.objectid, level - 1, 0);
 		cache = next_cache_extent(cache);
@@ -6861,9 +6861,8 @@ static int record_extent(struct btrfs_trans_handle *trans,
 			 * just makes the backref allocator create a data
 			 * backref
 			 */
-			ret = btrfs_inc_extent_ref(trans, extent_root,
-						   rec->start, rec->max_size,
-						   parent,
+			ret = btrfs_inc_extent_ref(trans, rec->start,
+						   rec->max_size, parent,
 						   dback->root,
 						   parent ?
 						   BTRFS_FIRST_FREE_OBJECTID :
@@ -6890,8 +6889,7 @@ static int record_extent(struct btrfs_trans_handle *trans,
 		else
 			parent = 0;
 
-		ret = btrfs_inc_extent_ref(trans, extent_root,
-					   rec->start, rec->max_size,
+		ret = btrfs_inc_extent_ref(trans, rec->start, rec->max_size,
 					   parent, tback->root, 0, 0);
 		fprintf(stderr,
 "adding new tree backref on start %llu len %llu parent %llu root %llu\n",
diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c
index 0bc95930..10f86161 100644
--- a/check/mode-lowmem.c
+++ b/check/mode-lowmem.c
@@ -755,8 +755,8 @@ static int repair_tree_block_ref(struct btrfs_root *root,
 		parent = nrefs->bytenr[level + 1];
 
 	/* increase the ref */
-	ret = btrfs_inc_extent_ref(trans, extent_root, bytenr, node_size,
-			parent, root->objectid, level, 0);
+	ret = btrfs_inc_extent_ref(trans, bytenr, node_size, parent,
+				   root->objectid, level, 0);
 
 	nrefs->refs[level]++;
 out:
@@ -3335,7 +3335,7 @@ static int repair_extent_data_item(struct btrfs_root *root,
 		btrfs_release_path(&path);
 	}
 
-	ret = btrfs_inc_extent_ref(trans, root, disk_bytenr, num_bytes, parent,
+	ret = btrfs_inc_extent_ref(trans, disk_bytenr, num_bytes, parent,
 				   root->objectid,
 		   parent ? BTRFS_FIRST_FREE_OBJECTID : fi_key.objectid,
 				   offset);
@@ -4132,8 +4132,8 @@ static int repair_extent_item(struct btrfs_path *path, u64 bytenr, u64
 		goto out;
 	}
 	/* delete the backref */
-	ret = btrfs_free_extent(trans, gfs_info->fs_root, bytenr,
-			num_bytes, parent, root_objectid, owner, offset);
+	ret = btrfs_free_extent(trans, bytenr, num_bytes, parent, root_objectid,
+				owner, offset);
 	if (!ret)
 		printf("Delete backref in extent [%llu %llu]\n",
 		       bytenr, num_bytes);
diff --git a/kernel-shared/ctree.c b/kernel-shared/ctree.c
index da9282b9..c3e9830a 100644
--- a/kernel-shared/ctree.c
+++ b/kernel-shared/ctree.c
@@ -491,8 +491,8 @@ static int __btrfs_cow_block(struct btrfs_trans_handle *trans,
 		root->node = cow;
 		extent_buffer_get(cow);
 
-		btrfs_free_extent(trans, root, buf->start, buf->len,
-				  0, root->root_key.objectid, level, 0);
+		btrfs_free_extent(trans, buf->start, buf->len, 0,
+				  root->root_key.objectid, level, 0);
 		free_extent_buffer(buf);
 		add_root_to_dirty_list(root);
 	} else {
@@ -504,8 +504,8 @@ static int __btrfs_cow_block(struct btrfs_trans_handle *trans,
 		btrfs_mark_buffer_dirty(parent);
 		WARN_ON(btrfs_header_generation(parent) != trans->transid);
 
-		btrfs_free_extent(trans, root, buf->start, buf->len,
-				  0, root->root_key.objectid, level, 0);
+		btrfs_free_extent(trans, buf->start, buf->len, 0,
+				  root->root_key.objectid, level, 0);
 	}
 	if (!list_empty(&buf->recow)) {
 		list_del_init(&buf->recow);
@@ -942,9 +942,8 @@ static int balance_level(struct btrfs_trans_handle *trans,
 
 		root_sub_used(root, mid->len);
 
-		ret = btrfs_free_extent(trans, root, mid->start, mid->len,
-					0, root->root_key.objectid,
-					level, 0);
+		ret = btrfs_free_extent(trans, mid->start, mid->len, 0,
+					root->root_key.objectid, level, 0);
 		/* once for the root ptr */
 		free_extent_buffer(mid);
 		return ret;
@@ -999,10 +998,9 @@ static int balance_level(struct btrfs_trans_handle *trans,
 				ret = wret;
 
 			root_sub_used(root, blocksize);
-			wret = btrfs_free_extent(trans, root, bytenr,
-						 blocksize, 0,
-						 root->root_key.objectid,
-						 level, 0);
+			wret = btrfs_free_extent(trans, bytenr, blocksize, 0,
+						 root->root_key.objectid, level,
+						 0);
 			if (wret)
 				ret = wret;
 		} else {
@@ -1047,9 +1045,8 @@ static int balance_level(struct btrfs_trans_handle *trans,
 			ret = wret;
 
 		root_sub_used(root, blocksize);
-		wret = btrfs_free_extent(trans, root, bytenr, blocksize,
-					 0, root->root_key.objectid,
-					 level, 0);
+		wret = btrfs_free_extent(trans, bytenr, blocksize, 0,
+					 root->root_key.objectid, level, 0);
 		if (wret)
 			ret = wret;
 	} else {
@@ -2956,8 +2953,8 @@ static noinline int btrfs_del_leaf(struct btrfs_trans_handle *trans,
 
 	root_sub_used(root, leaf->len);
 
-	ret = btrfs_free_extent(trans, root, leaf->start, leaf->len,
-				0, root->root_key.objectid, 0, 0);
+	ret = btrfs_free_extent(trans, leaf->start, leaf->len, 0,
+				root->root_key.objectid, 0, 0);
 	return ret;
 }
 
diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h
index d5cd7803..2f41b58d 100644
--- a/kernel-shared/ctree.h
+++ b/kernel-shared/ctree.h
@@ -875,12 +875,10 @@ int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
 			  struct extent_buffer *buf,
 			  u64 parent, int last_ref);
 int btrfs_free_extent(struct btrfs_trans_handle *trans,
-		      struct btrfs_root *root,
 		      u64 bytenr, u64 num_bytes, u64 parent,
 		      u64 root_objectid, u64 owner, u64 offset);
 void btrfs_finish_extent_commit(struct btrfs_trans_handle *trans);
 int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
-			 struct btrfs_root *root,
 			 u64 bytenr, u64 num_bytes, u64 parent,
 			 u64 root_objectid, u64 owner, u64 offset);
 int btrfs_update_extent_ref(struct btrfs_trans_handle *trans,
diff --git a/kernel-shared/extent-tree.c b/kernel-shared/extent-tree.c
index 4dfb35d4..8d4483cd 100644
--- a/kernel-shared/extent-tree.c
+++ b/kernel-shared/extent-tree.c
@@ -1242,11 +1242,10 @@ static int remove_extent_backref(struct btrfs_trans_handle *trans,
 }
 
 int btrfs_inc_extent_ref(struct btrfs_trans_handle *trans,
-			 struct btrfs_root *root,
 			 u64 bytenr, u64 num_bytes, u64 parent,
 			 u64 root_objectid, u64 owner, u64 offset)
 {
-	struct btrfs_root *extent_root = btrfs_extent_root(root->fs_info,
+	struct btrfs_root *extent_root = btrfs_extent_root(trans->fs_info,
 							   bytenr);
 	struct btrfs_path *path;
 	struct extent_buffer *leaf;
@@ -1467,7 +1466,6 @@ static int __btrfs_mod_ref(struct btrfs_trans_handle *trans,
 	int level;
 	int ret = 0;
 	int (*process_func)(struct btrfs_trans_handle *trans,
-			    struct btrfs_root *root,
 			    u64, u64, u64, u64, u64, u64);
 
 	ref_root = btrfs_header_owner(buf);
@@ -1504,9 +1502,8 @@ static int __btrfs_mod_ref(struct btrfs_trans_handle *trans,
 
 			num_bytes = btrfs_file_extent_disk_num_bytes(buf, fi);
 			key.offset -= btrfs_file_extent_offset(buf, fi);
-			ret = process_func(trans, root, bytenr, num_bytes,
-					   parent, ref_root, key.objectid,
-					   key.offset);
+			ret = process_func(trans, bytenr, num_bytes, parent,
+					   ref_root, key.objectid, key.offset);
 			if (ret) {
 				WARN_ON(1);
 				goto fail;
@@ -1514,8 +1511,8 @@ static int __btrfs_mod_ref(struct btrfs_trans_handle *trans,
 		} else {
 			bytenr = btrfs_node_blockptr(buf, i);
 			num_bytes = root->fs_info->nodesize;
-			ret = process_func(trans, root, bytenr, num_bytes,
-					   parent, ref_root, level - 1, 0);
+			ret = process_func(trans, bytenr, num_bytes, parent,
+					   ref_root, level - 1, 0);
 			if (ret) {
 				WARN_ON(1);
 				goto fail;
@@ -2148,7 +2145,7 @@ int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
 			  struct extent_buffer *buf,
 			  u64 parent, int last_ref)
 {
-	return btrfs_free_extent(trans, root, buf->start, buf->len, parent,
+	return btrfs_free_extent(trans, buf->start, buf->len, parent,
 				 root->root_key.objectid,
 				 btrfs_header_level(buf), 0);
 }
@@ -2158,13 +2155,12 @@ int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
  */
 
 int btrfs_free_extent(struct btrfs_trans_handle *trans,
-		      struct btrfs_root *root,
 		      u64 bytenr, u64 num_bytes, u64 parent,
 		      u64 root_objectid, u64 owner, u64 offset)
 {
 	int ret;
 
-	WARN_ON(num_bytes < root->fs_info->sectorsize);
+	WARN_ON(num_bytes < trans->fs_info->sectorsize);
 	/*
 	 * tree log blocks never actually go into the extent allocation
 	 * tree, just update pinning info and exit early.
@@ -2579,8 +2575,8 @@ struct extent_buffer *btrfs_alloc_tree_block(struct btrfs_trans_handle *trans,
 
 	buf = btrfs_find_create_tree_block(root->fs_info, ins.objectid);
 	if (!buf) {
-		btrfs_free_extent(trans, root, ins.objectid, ins.offset,
-				  0, root->root_key.objectid, level, 0);
+		btrfs_free_extent(trans, ins.objectid, ins.offset, 0,
+				  root->root_key.objectid, level, 0);
 		BUG_ON(1);
 		return ERR_PTR(-ENOMEM);
 	}
@@ -3723,7 +3719,7 @@ static int __btrfs_record_file_extent(struct btrfs_trans_handle *trans,
 	btrfs_set_stack_inode_nbytes(inode, nbytes);
 	btrfs_release_path(path);
 
-	ret = btrfs_inc_extent_ref(trans, root, extent_bytenr, extent_num_bytes,
+	ret = btrfs_inc_extent_ref(trans, extent_bytenr, extent_num_bytes,
 				   0, root->root_key.objectid, objectid,
 				   file_pos - extent_offset);
 	if (ret)
diff --git a/kernel-shared/free-space-cache.c b/kernel-shared/free-space-cache.c
index 83897f10..7bd76e39 100644
--- a/kernel-shared/free-space-cache.c
+++ b/kernel-shared/free-space-cache.c
@@ -982,9 +982,8 @@ int btrfs_clear_free_space_cache(struct btrfs_trans_handle *trans,
 		disk_bytenr = btrfs_file_extent_disk_bytenr(node, fi);
 		disk_num_bytes = btrfs_file_extent_disk_num_bytes(node, fi);
 
-		ret = btrfs_free_extent(trans, tree_root, disk_bytenr,
-					disk_num_bytes, 0, tree_root->objectid,
-					ino, key.offset);
+		ret = btrfs_free_extent(trans, disk_bytenr, disk_num_bytes, 0,
+					tree_root->objectid, ino, key.offset);
 		if (ret < 0) {
 			error("failed to remove backref for disk bytenr %llu: %d",
 			      disk_bytenr, ret);
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 04/18] btrfs-progs: pass root_id for btrfs_free_tree_block
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (2 preceding siblings ...)
  2023-04-19 21:23 ` [PATCH 03/18] btrfs-progs: remove root argument from free_extent and inc_extent_ref Josef Bacik
@ 2023-04-19 21:23 ` Josef Bacik
  2023-04-19 21:23 ` [PATCH 05/18] btrfs-progs: add a free_extent_buffer_stale helper Josef Bacik
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:23 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

In the kernel we pass in the root_id for btrfs_free_tree_block instead
of the root itself.  Update the btrfs-progs version of the helper to
match what we do in the kernel.
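
A hedged example of a caller after this change (hypothetical helper,
mirroring the hunks below):

  /* Drop the last reference to a root's node, passing the objectid. */
  static int drop_root_node(struct btrfs_trans_handle *trans,
                            struct btrfs_root *root)
  {
          return btrfs_free_tree_block(trans, btrfs_root_id(root),
                                       root->node, 0 /* parent */,
                                       1 /* last_ref */);
  }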

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 cmds/rescue.c               | 3 ++-
 kernel-shared/ctree.h       | 6 ++----
 kernel-shared/disk-io.c     | 2 +-
 kernel-shared/extent-tree.c | 9 +++------
 4 files changed, 8 insertions(+), 12 deletions(-)

diff --git a/cmds/rescue.c b/cmds/rescue.c
index b84166ea..5551374d 100644
--- a/cmds/rescue.c
+++ b/cmds/rescue.c
@@ -343,7 +343,8 @@ static int clear_uuid_tree(struct btrfs_fs_info *fs_info)
 	ret = btrfs_clear_buffer_dirty(uuid_root->node);
 	if (ret < 0)
 		goto out;
-	ret = btrfs_free_tree_block(trans, uuid_root, uuid_root->node, 0, 1);
+	ret = btrfs_free_tree_block(trans, btrfs_root_id(uuid_root),
+				    uuid_root->node, 0, 1);
 	if (ret < 0)
 		goto out;
 	free_extent_buffer(uuid_root->node);
diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h
index 2f41b58d..c892d707 100644
--- a/kernel-shared/ctree.h
+++ b/kernel-shared/ctree.h
@@ -870,10 +870,8 @@ int btrfs_inc_ref(struct btrfs_trans_handle *trans, struct btrfs_root *root,
 		  struct extent_buffer *buf, int record_parent);
 int btrfs_dec_ref(struct btrfs_trans_handle *trans, struct btrfs_root *root,
 		  struct extent_buffer *buf, int record_parent);
-int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
-			  struct btrfs_root *root,
-			  struct extent_buffer *buf,
-			  u64 parent, int last_ref);
+int btrfs_free_tree_block(struct btrfs_trans_handle *trans, u64 root_id,
+			  struct extent_buffer *buf, u64 parent, int last_ref);
 int btrfs_free_extent(struct btrfs_trans_handle *trans,
 		      u64 bytenr, u64 num_bytes, u64 parent,
 		      u64 root_objectid, u64 owner, u64 offset);
diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c
index 7bbcc381..9d93f331 100644
--- a/kernel-shared/disk-io.c
+++ b/kernel-shared/disk-io.c
@@ -2301,7 +2301,7 @@ int btrfs_delete_and_free_root(struct btrfs_trans_handle *trans,
 	ret = btrfs_clear_buffer_dirty(root->node);
 	if (ret)
 		return ret;
-	ret = btrfs_free_tree_block(trans, root, root->node, 0, 1);
+	ret = btrfs_free_tree_block(trans, btrfs_root_id(root), root->node, 0, 1);
 	if (ret)
 		return ret;
 	if (is_global_root(root))
diff --git a/kernel-shared/extent-tree.c b/kernel-shared/extent-tree.c
index 8d4483cd..5c33fd53 100644
--- a/kernel-shared/extent-tree.c
+++ b/kernel-shared/extent-tree.c
@@ -2140,13 +2140,10 @@ fail:
 	return ret;
 }
 
-int btrfs_free_tree_block(struct btrfs_trans_handle *trans,
-			  struct btrfs_root *root,
-			  struct extent_buffer *buf,
-			  u64 parent, int last_ref)
+int btrfs_free_tree_block(struct btrfs_trans_handle *trans, u64 root_id,
+			  struct extent_buffer *buf, u64 parent, int last_ref)
 {
-	return btrfs_free_extent(trans, buf->start, buf->len, parent,
-				 root->root_key.objectid,
+	return btrfs_free_extent(trans, buf->start, buf->len, parent, root_id,
 				 btrfs_header_level(buf), 0);
 }
 
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 05/18] btrfs-progs: add a free_extent_buffer_stale helper
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (3 preceding siblings ...)
  2023-04-19 21:23 ` [PATCH 04/18] btrfs-progs: pass root_id for btrfs_free_tree_block Josef Bacik
@ 2023-04-19 21:23 ` Josef Bacik
  2023-04-19 21:23 ` [PATCH 06/18] btrfs-progs: add btrfs_is_testing helper Josef Bacik
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:23 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

This does exactly what free_extent_buffer_nocache does, but we call
btrfs_free_extent_buffer_stale in the kernel code, so add this extra
helper.  Once the kernel code is sync'ed we can get rid of the old
helper.
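
For illustration, the error-path pattern the synced code will use (a
sketch, not part of the patch):

  static int check_eb_sketch(struct extent_buffer *eb)
  {
          if (!extent_buffer_uptodate(eb)) {
                  /* Drop our ref and evict the stale buffer from the
                   * cache, just like free_extent_buffer_nocache does. */
                  free_extent_buffer_stale(eb);
                  return -EIO;
          }
          return 0;
  }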

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 kernel-shared/extent_io.c | 5 +++++
 kernel-shared/extent_io.h | 1 +
 2 files changed, 6 insertions(+)

diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c
index 4dff81bd..992b5f35 100644
--- a/kernel-shared/extent_io.c
+++ b/kernel-shared/extent_io.c
@@ -204,6 +204,11 @@ void free_extent_buffer_nocache(struct extent_buffer *eb)
 	free_extent_buffer_internal(eb, 1);
 }
 
+void free_extent_buffer_stale(struct extent_buffer *eb)
+{
+	free_extent_buffer_internal(eb, 1);
+}
+
 struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
 					 u64 bytenr, u32 blocksize)
 {
diff --git a/kernel-shared/extent_io.h b/kernel-shared/extent_io.h
index 09f3c82a..e4da3c57 100644
--- a/kernel-shared/extent_io.h
+++ b/kernel-shared/extent_io.h
@@ -104,6 +104,7 @@ struct extent_buffer *alloc_dummy_extent_buffer(struct btrfs_fs_info *fs_info,
 						u64 bytenr, u32 blocksize);
 void free_extent_buffer(struct extent_buffer *eb);
 void free_extent_buffer_nocache(struct extent_buffer *eb);
+void free_extent_buffer_stale(struct extent_buffer *eb);
 int memcmp_extent_buffer(const struct extent_buffer *eb, const void *ptrv,
 			 unsigned long start, unsigned long len);
 void read_extent_buffer(const struct extent_buffer *eb, void *dst,
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 06/18] btrfs-progs: add btrfs_is_testing helper
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (4 preceding siblings ...)
  2023-04-19 21:23 ` [PATCH 05/18] btrfs-progs: add a free_extent_buffer_stale helper Josef Bacik
@ 2023-04-19 21:23 ` Josef Bacik
  2023-04-19 21:23 ` [PATCH 07/18] btrfs-progs: add accounting_lock to btrfs_root Josef Bacik
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:23 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

This is sprinkled throughout the kernel code for the in-kernel self
tests.  Add the helper to btrfs-progs to make it easier to sync the
kernel code.
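
An illustrative guard in the kernel style (hypothetical function); with
the stub returning false the self-test branch is simply dead code in
btrfs-progs:

  static int do_some_io_sketch(struct btrfs_fs_info *fs_info)
  {
          /* Kernel code short-circuits hardware access under self-tests;
           * in btrfs-progs this branch is never taken. */
          if (btrfs_is_testing(fs_info))
                  return 0;

          /* ... real work would go here ... */
          return 0;
  }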

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 kernel-shared/ctree.h | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h
index c892d707..26171288 100644
--- a/kernel-shared/ctree.h
+++ b/kernel-shared/ctree.h
@@ -389,6 +389,11 @@ static inline bool btrfs_is_zoned(const struct btrfs_fs_info *fs_info)
 	return fs_info->zoned != 0;
 }
 
+static inline bool btrfs_is_testing(const struct btrfs_fs_info *fs_info)
+{
+	return false;
+}
+
 /*
  * The state of btrfs root
  */
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 07/18] btrfs-progs: add accounting_lock to btrfs_root
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (5 preceding siblings ...)
  2023-04-19 21:23 ` [PATCH 06/18] btrfs-progs: add btrfs_is_testing helper Josef Bacik
@ 2023-04-19 21:23 ` Josef Bacik
  2023-04-19 21:23 ` [PATCH 08/18] btrfs-progs: update read_tree_block to match the kernel definition Josef Bacik
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:23 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

This is used in the kernel to protect the used count of a btrfs_root.
Sync it to btrfs-progs to allow us to sync ctree.c into btrfs-progs.
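
For context, a sketch of the kernel's root_sub_used() pattern this field
makes portable (assuming the no-op userspace spinlock wrappers from
kerncompat.h):

  static void root_sub_used_sketch(struct btrfs_root *root, u32 size)
  {
          /* The lock only matters in the kernel; here it compiles away. */
          spin_lock(&root->accounting_lock);
          btrfs_set_root_used(&root->root_item,
                              btrfs_root_used(&root->root_item) - size);
          spin_unlock(&root->accounting_lock);
  }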

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 kernel-shared/ctree.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h
index 26171288..50f97533 100644
--- a/kernel-shared/ctree.h
+++ b/kernel-shared/ctree.h
@@ -479,6 +479,8 @@ struct btrfs_root {
 	/* the dirty list is only used by non-reference counted roots */
 	struct list_head dirty_list;
 	struct rb_node rb_node;
+
+	spinlock_t accounting_lock;
 };
 
 static inline u64 btrfs_root_id(const struct btrfs_root *root)
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 08/18] btrfs-progs: update read_tree_block to match the kernel definition
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (6 preceding siblings ...)
  2023-04-19 21:23 ` [PATCH 07/18] btrfs-progs: add accounting_lock to btrfs_root Josef Bacik
@ 2023-04-19 21:23 ` Josef Bacik
  2023-04-19 21:24 ` [PATCH 09/18] btrfs-progs: make reada_for_search static Josef Bacik
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:23 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

The in-kernel version of read_tree_block adds some extra sanity checks
to make sure we don't return blocks that don't match what we expect:
the owning root, the level, and the expected first key.  We don't
actually do these checks in btrfs-progs; however, the kernel code we're
going to sync will expect this calling convention, so update the helper
to match the in-kernel code and then update all the callers.
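
A hedged example of the new calling convention (hypothetical caller; the
extra expectations are accepted but not yet verified in progs, and NULL
skips the first-key check):

  static struct extent_buffer *read_child_sketch(struct extent_buffer *parent,
                                                 int slot)
  {
          struct btrfs_fs_info *fs_info = parent->fs_info;

          return read_tree_block(fs_info, btrfs_node_blockptr(parent, slot),
                                 btrfs_header_owner(parent),
                                 btrfs_node_ptr_generation(parent, slot),
                                 btrfs_header_level(parent) - 1, NULL);
  }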

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 btrfs-corrupt-block.c      |  8 +++++---
 btrfs-find-root.c          |  2 +-
 check/main.c               |  9 ++++++---
 check/mode-common.c        |  4 ++--
 check/mode-lowmem.c        | 12 +++++++-----
 check/qgroup-verify.c      |  3 ++-
 check/repair.c             |  8 ++++++--
 cmds/inspect-dump-tree.c   | 12 +++++++-----
 cmds/inspect-tree-stats.c  |  4 +++-
 cmds/restore.c             |  6 ++++--
 image/main.c               | 11 ++++++-----
 kernel-shared/backref.c    |  6 ++++--
 kernel-shared/ctree.c      |  4 +++-
 kernel-shared/disk-io.c    |  8 +++++---
 kernel-shared/disk-io.h    |  5 +++--
 kernel-shared/print-tree.c |  4 +++-
 tune/change-uuid.c         |  2 +-
 17 files changed, 68 insertions(+), 40 deletions(-)

diff --git a/btrfs-corrupt-block.c b/btrfs-corrupt-block.c
index 35933854..98cfe598 100644
--- a/btrfs-corrupt-block.c
+++ b/btrfs-corrupt-block.c
@@ -166,7 +166,7 @@ static int corrupt_keys_in_block(struct btrfs_fs_info *fs_info, u64 bytenr)
 {
 	struct extent_buffer *eb;
 
-	eb = read_tree_block(fs_info, bytenr, 0);
+	eb = read_tree_block(fs_info, bytenr, 0, 0, 0, NULL);
 	if (!extent_buffer_uptodate(eb))
 		return -EIO;;
 
@@ -296,7 +296,9 @@ static void btrfs_corrupt_extent_tree(struct btrfs_trans_handle *trans,
 		struct extent_buffer *next;
 
 		next = read_tree_block(fs_info, btrfs_node_blockptr(eb, i),
-				       btrfs_node_ptr_generation(eb, i));
+				       btrfs_header_owner(eb),
+				       btrfs_node_ptr_generation(eb, i),
+				       btrfs_header_level(eb) - 1, NULL);
 		if (!extent_buffer_uptodate(next))
 			continue;
 		btrfs_corrupt_extent_tree(trans, root, next);
@@ -860,7 +862,7 @@ static int corrupt_metadata_block(struct btrfs_fs_info *fs_info, u64 block,
 		return -EINVAL;
 	}
 
-	eb = read_tree_block(fs_info, block, 0);
+	eb = read_tree_block(fs_info, block, 0, 0, 0, NULL);
 	if (!extent_buffer_uptodate(eb)) {
 		error("couldn't read in tree block %s", field);
 		return -EINVAL;
diff --git a/btrfs-find-root.c b/btrfs-find-root.c
index 9d7296c3..398d7f21 100644
--- a/btrfs-find-root.c
+++ b/btrfs-find-root.c
@@ -199,7 +199,7 @@ int btrfs_find_root_search(struct btrfs_fs_info *fs_info,
 		for (offset = chunk_offset;
 		     offset < chunk_offset + chunk_size;
 		     offset += nodesize) {
-			eb = read_tree_block(fs_info, offset, 0);
+			eb = read_tree_block(fs_info, offset, 0, 0, 0, NULL);
 			if (!eb || IS_ERR(eb))
 				continue;
 			ret = add_eb_to_result(eb, result, nodesize, filter,
diff --git a/check/main.c b/check/main.c
index 275f912b..610c3091 100644
--- a/check/main.c
+++ b/check/main.c
@@ -1898,7 +1898,9 @@ static int walk_down_tree(struct btrfs_root *root, struct btrfs_path *path,
 		if (!next || !btrfs_buffer_uptodate(next, ptr_gen)) {
 			free_extent_buffer(next);
 			reada_walk_down(root, cur, path->slots[*level]);
-			next = read_tree_block(gfs_info, bytenr, ptr_gen);
+			next = read_tree_block(gfs_info, bytenr,
+					       btrfs_header_owner(cur), ptr_gen,
+					       *level - 1, NULL);
 			if (!extent_buffer_uptodate(next)) {
 				struct btrfs_key node_key;
 
@@ -6269,7 +6271,7 @@ static int run_next_block(struct btrfs_root *root,
 	}
 
 	/* fixme, get the real parent transid */
-	buf = read_tree_block(gfs_info, bytenr, gen);
+	buf = read_tree_block(gfs_info, bytenr, 0, gen, 0, NULL);
 	if (!extent_buffer_uptodate(buf)) {
 		record_bad_block_io(extent_cache, bytenr, size);
 		goto out;
@@ -8615,7 +8617,8 @@ static int deal_root_from_list(struct list_head *list,
 		rec = list_entry(list->next,
 				 struct root_item_record, list);
 		last = 0;
-		buf = read_tree_block(gfs_info, rec->bytenr, 0);
+		buf = read_tree_block(gfs_info, rec->bytenr, rec->objectid, 0,
+				      rec->level, NULL);
 		if (!extent_buffer_uptodate(buf)) {
 			free_extent_buffer(buf);
 			ret = -EIO;
diff --git a/check/mode-common.c b/check/mode-common.c
index 394c35fe..a38d2afc 100644
--- a/check/mode-common.c
+++ b/check/mode-common.c
@@ -132,7 +132,7 @@ static int check_prealloc_shared_data_ref(u64 parent, u64 disk_bytenr)
 	int i;
 	int ret = 0;
 
-	eb = read_tree_block(gfs_info, parent, 0);
+	eb = read_tree_block(gfs_info, parent, 0, 0, 0, NULL);
 	if (!extent_buffer_uptodate(eb)) {
 		ret = -EIO;
 		goto out;
@@ -1127,7 +1127,7 @@ int get_extent_item_generation(u64 bytenr, u64 *gen_ret)
 	    BTRFS_EXTENT_FLAG_TREE_BLOCK) {
 		struct extent_buffer *eb;
 
-		eb = read_tree_block(gfs_info, bytenr, 0);
+		eb = read_tree_block(gfs_info, bytenr, 0, 0, 0, NULL);
 		if (extent_buffer_uptodate(eb)) {
 			*gen_ret = btrfs_header_generation(eb);
 			ret = 0;
diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c
index 10f86161..83b86e63 100644
--- a/check/mode-lowmem.c
+++ b/check/mode-lowmem.c
@@ -3748,7 +3748,7 @@ static int query_tree_block_level(u64 bytenr)
 	btrfs_release_path(&path);
 
 	/* Get level from tree block as an alternative source */
-	eb = read_tree_block(gfs_info, bytenr, transid);
+	eb = read_tree_block(gfs_info, bytenr, 0, transid, 0, NULL);
 	if (!extent_buffer_uptodate(eb)) {
 		free_extent_buffer(eb);
 		return -EIO;
@@ -3800,7 +3800,7 @@ static int check_tree_block_backref(u64 root_id, u64 bytenr, int level)
 	}
 
 	/* Read out the tree block to get item/node key */
-	eb = read_tree_block(gfs_info, bytenr, 0);
+	eb = read_tree_block(gfs_info, bytenr, root_id, 0, 0, NULL);
 	if (!extent_buffer_uptodate(eb)) {
 		err |= REFERENCER_MISSING;
 		free_extent_buffer(eb);
@@ -3899,7 +3899,7 @@ static int check_shared_block_backref(u64 parent, u64 bytenr, int level)
 	int found_parent = 0;
 	int i;
 
-	eb = read_tree_block(gfs_info, parent, 0);
+	eb = read_tree_block(gfs_info, parent, 0, 0, 0, NULL);
 	if (!extent_buffer_uptodate(eb))
 		goto out;
 
@@ -4072,7 +4072,7 @@ static int check_shared_data_backref(u64 parent, u64 bytenr)
 	int found_parent = 0;
 	int i;
 
-	eb = read_tree_block(gfs_info, parent, 0);
+	eb = read_tree_block(gfs_info, parent, 0, 0, 0, NULL);
 	if (!extent_buffer_uptodate(eb))
 		goto out;
 
@@ -5046,7 +5046,9 @@ static int walk_down_tree(struct btrfs_root *root, struct btrfs_path *path,
 		if (!next || !btrfs_buffer_uptodate(next, ptr_gen)) {
 			free_extent_buffer(next);
 			reada_walk_down(root, cur, path->slots[*level]);
-			next = read_tree_block(gfs_info, bytenr, ptr_gen);
+			next = read_tree_block(gfs_info, bytenr,
+					       btrfs_header_owner(cur),
+					       ptr_gen, *level - 1, NULL);
 			if (!extent_buffer_uptodate(next)) {
 				struct btrfs_key node_key;
 
diff --git a/check/qgroup-verify.c b/check/qgroup-verify.c
index db49e3c9..1a62009b 100644
--- a/check/qgroup-verify.c
+++ b/check/qgroup-verify.c
@@ -720,7 +720,8 @@ static int travel_tree(struct btrfs_fs_info *info, struct btrfs_root *root,
 //	printf("travel_tree: bytenr: %llu\tnum_bytes: %llu\tref_parent: %llu\n",
 //	       bytenr, num_bytes, ref_parent);
 
-	eb = read_tree_block(info, bytenr, 0);
+	eb = read_tree_block(info, bytenr, btrfs_root_id(root), 0,
+			     0, NULL);
 	if (!extent_buffer_uptodate(eb))
 		return -EIO;
 
diff --git a/check/repair.c b/check/repair.c
index 07c432b3..ec8b0196 100644
--- a/check/repair.c
+++ b/check/repair.c
@@ -108,7 +108,9 @@ static int traverse_tree_blocks(struct extent_io_tree *tree,
 			 * in, but for now this doesn't actually use the root so
 			 * just pass in extent_root.
 			 */
-			tmp = read_tree_block(fs_info, bytenr, 0);
+			tmp = read_tree_block(fs_info, bytenr, key.objectid, 0,
+					      btrfs_disk_root_level(eb, ri),
+					      NULL);
 			if (!extent_buffer_uptodate(tmp)) {
 				fprintf(stderr, "Error reading root block\n");
 				return -EIO;
@@ -133,7 +135,9 @@ static int traverse_tree_blocks(struct extent_io_tree *tree,
 				continue;
 			}
 
-			tmp = read_tree_block(fs_info, bytenr, 0);
+			tmp = read_tree_block(fs_info, bytenr,
+					      btrfs_header_owner(eb), 0,
+					      level - 1, NULL);
 			if (!extent_buffer_uptodate(tmp)) {
 				fprintf(stderr, "Error reading tree block\n");
 				return -EIO;
diff --git a/cmds/inspect-dump-tree.c b/cmds/inspect-dump-tree.c
index 4c93056b..7c524b04 100644
--- a/cmds/inspect-dump-tree.c
+++ b/cmds/inspect-dump-tree.c
@@ -58,9 +58,10 @@ static void print_extents(struct extent_buffer *eb)
 
 	nr = btrfs_header_nritems(eb);
 	for (i = 0; i < nr; i++) {
-		next = read_tree_block(fs_info,
-				btrfs_node_blockptr(eb, i),
-				btrfs_node_ptr_generation(eb, i));
+		next = read_tree_block(fs_info, btrfs_node_blockptr(eb, i),
+				       btrfs_header_owner(eb),
+				       btrfs_node_ptr_generation(eb, i),
+				       btrfs_header_level(eb) - 1, NULL);
 		if (!extent_buffer_uptodate(next))
 			continue;
 		if (btrfs_is_leaf(next) && btrfs_header_level(eb) != 1) {
@@ -288,7 +289,7 @@ static int dump_print_tree_blocks(struct btrfs_fs_info *fs_info,
 			goto next;
 		}
 
-		eb = read_tree_block(fs_info, bytenr, 0);
+		eb = read_tree_block(fs_info, bytenr, 0, 0, 0, NULL);
 		if (!extent_buffer_uptodate(eb)) {
 			error("failed to read tree block %llu", bytenr);
 			ret = -EIO;
@@ -625,7 +626,8 @@ again:
 
 			offset = btrfs_item_ptr_offset(leaf, slot);
 			read_extent_buffer(leaf, &ri, offset, sizeof(ri));
-			buf = read_tree_block(info, btrfs_root_bytenr(&ri), 0);
+			buf = read_tree_block(info, btrfs_root_bytenr(&ri),
+					      key.objectid, 0, 0, NULL);
 			if (!extent_buffer_uptodate(buf))
 				goto next;
 			if (tree_id && found_key.objectid != tree_id) {
diff --git a/cmds/inspect-tree-stats.c b/cmds/inspect-tree-stats.c
index 08be1686..716aa008 100644
--- a/cmds/inspect-tree-stats.c
+++ b/cmds/inspect-tree-stats.c
@@ -153,7 +153,9 @@ static int walk_nodes(struct btrfs_root *root, struct btrfs_path *path,
 		path->slots[level] = i;
 		if ((level - 1) > 0 || find_inline) {
 			tmp = read_tree_block(root->fs_info, cur_blocknr,
-					      btrfs_node_ptr_generation(b, i));
+					      btrfs_header_owner(b),
+					      btrfs_node_ptr_generation(b, i),
+					      level - 1, NULL);
 			if (!extent_buffer_uptodate(tmp)) {
 				error("failed to read blocknr %llu",
 					btrfs_node_blockptr(b, i));
diff --git a/cmds/restore.c b/cmds/restore.c
index c38cb541..72fc7a07 100644
--- a/cmds/restore.c
+++ b/cmds/restore.c
@@ -1260,7 +1260,8 @@ static struct btrfs_root *open_fs(const char *dev, u64 root_location,
 			root_location = btrfs_super_root(fs_info->super_copy);
 		generation = btrfs_super_generation(fs_info->super_copy);
 		root->node = read_tree_block(fs_info, root_location,
-					     generation);
+					     btrfs_root_id(root), generation,
+					     0, NULL);
 		if (!extent_buffer_uptodate(root->node)) {
 			error("opening tree root failed");
 			close_ctree(root);
@@ -1527,7 +1528,8 @@ static int cmd_restore(const struct cmd_struct *cmd, int argc, char **argv)
 
 	if (fs_location != 0) {
 		free_extent_buffer(root->node);
-		root->node = read_tree_block(root->fs_info, fs_location, 0);
+		root->node = read_tree_block(root->fs_info, fs_location, 0, 0,
+					     0, NULL);
 		if (!extent_buffer_uptodate(root->node)) {
 			error("failed to read fs location");
 			ret = 1;
diff --git a/image/main.c b/image/main.c
index ae7acb96..92b0dbfa 100644
--- a/image/main.c
+++ b/image/main.c
@@ -707,7 +707,8 @@ static int flush_pending(struct metadump_struct *md, int done)
 			u64 this_read = min((u64)md->root->fs_info->nodesize,
 					size);
 
-			eb = read_tree_block(md->root->fs_info, start, 0);
+			eb = read_tree_block(md->root->fs_info, start, 0, 0, 0,
+					     NULL);
 			if (!extent_buffer_uptodate(eb)) {
 				free(async->buffer);
 				free(async);
@@ -811,7 +812,7 @@ static int copy_tree_blocks(struct btrfs_root *root, struct extent_buffer *eb,
 				continue;
 			ri = btrfs_item_ptr(eb, i, struct btrfs_root_item);
 			bytenr = btrfs_disk_root_bytenr(eb, ri);
-			tmp = read_tree_block(fs_info, bytenr, 0);
+			tmp = read_tree_block(fs_info, bytenr, 0, 0, 0, NULL);
 			if (!extent_buffer_uptodate(tmp)) {
 				error("unable to read log root block");
 				return -EIO;
@@ -822,7 +823,7 @@ static int copy_tree_blocks(struct btrfs_root *root, struct extent_buffer *eb,
 				return ret;
 		} else {
 			bytenr = btrfs_node_blockptr(eb, i);
-			tmp = read_tree_block(fs_info, bytenr, 0);
+			tmp = read_tree_block(fs_info, bytenr, 0, 0, 0, NULL);
 			if (!extent_buffer_uptodate(tmp)) {
 				error("unable to read log root block");
 				return -EIO;
@@ -2697,7 +2698,7 @@ static int iter_tree_blocks(struct btrfs_fs_info *fs_info,
 				continue;
 			ri = btrfs_item_ptr(eb, i, struct btrfs_root_item);
 			bytenr = btrfs_disk_root_bytenr(eb, ri);
-			tmp = read_tree_block(fs_info, bytenr, 0);
+			tmp = read_tree_block(fs_info, bytenr, 0, 0, 0, NULL);
 			if (!extent_buffer_uptodate(tmp)) {
 				error("unable to read log root block");
 				return -EIO;
@@ -2708,7 +2709,7 @@ static int iter_tree_blocks(struct btrfs_fs_info *fs_info,
 				return ret;
 		} else {
 			bytenr = btrfs_node_blockptr(eb, i);
-			tmp = read_tree_block(fs_info, bytenr, 0);
+			tmp = read_tree_block(fs_info, bytenr, 0, 0, 0, NULL);
 			if (!extent_buffer_uptodate(tmp)) {
 				error("unable to read log root block");
 				return -EIO;
diff --git a/kernel-shared/backref.c b/kernel-shared/backref.c
index 897cd089..3b979430 100644
--- a/kernel-shared/backref.c
+++ b/kernel-shared/backref.c
@@ -461,7 +461,8 @@ static int __add_missing_keys(struct btrfs_fs_info *fs_info,
 		ASSERT(!ref->parent);
 		ASSERT(!ref->key_for_search.type);
 		BUG_ON(!ref->wanted_disk_byte);
-		eb = read_tree_block(fs_info, ref->wanted_disk_byte, 0);
+		eb = read_tree_block(fs_info, ref->wanted_disk_byte,
+				     ref->root_id, 0, ref->level - 1, NULL);
 		if (!extent_buffer_uptodate(eb)) {
 			free_extent_buffer(eb);
 			return -EIO;
@@ -823,7 +824,8 @@ static int find_parent_nodes(struct btrfs_trans_handle *trans,
 			    ref->level == 0) {
 				struct extent_buffer *eb;
 
-				eb = read_tree_block(fs_info, ref->parent, 0);
+				eb = read_tree_block(fs_info, ref->parent, 0,
+						     0, ref->level, NULL);
 				if (!extent_buffer_uptodate(eb)) {
 					free_extent_buffer(eb);
 					ret = -EIO;
diff --git a/kernel-shared/ctree.c b/kernel-shared/ctree.c
index c3e9830a..35133268 100644
--- a/kernel-shared/ctree.c
+++ b/kernel-shared/ctree.c
@@ -874,7 +874,9 @@ struct extent_buffer *read_node_slot(struct btrfs_fs_info *fs_info,
 		return NULL;
 
 	ret = read_tree_block(fs_info, btrfs_node_blockptr(parent, slot),
-		       btrfs_node_ptr_generation(parent, slot));
+			      btrfs_header_owner(parent),
+			      btrfs_node_ptr_generation(parent, slot),
+			      level - 1, NULL);
 	if (!extent_buffer_uptodate(ret))
 		return ERR_PTR(-EIO);
 
diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c
index 9d93f331..688b1c8e 100644
--- a/kernel-shared/disk-io.c
+++ b/kernel-shared/disk-io.c
@@ -337,8 +337,9 @@ int read_whole_eb(struct btrfs_fs_info *info, struct extent_buffer *eb, int mirr
 	return 0;
 }
 
-struct extent_buffer* read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
-		u64 parent_transid)
+struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
+				      u64 owner_root, u64 parent_transid,
+				      int level, struct btrfs_key *first_key)
 {
 	int ret;
 	struct extent_buffer *eb;
@@ -510,7 +511,8 @@ static int read_root_node(struct btrfs_fs_info *fs_info,
 			  struct btrfs_root *root, u64 bytenr, u64 gen,
 			  int level)
 {
-	root->node = read_tree_block(fs_info, bytenr, gen);
+	root->node = read_tree_block(fs_info, bytenr, btrfs_root_id(root),
+				     gen, level, NULL);
 	if (!extent_buffer_uptodate(root->node))
 		goto err;
 	if (btrfs_header_level(root->node) != level) {
diff --git a/kernel-shared/disk-io.h b/kernel-shared/disk-io.h
index 2424060d..f349b3ef 100644
--- a/kernel-shared/disk-io.h
+++ b/kernel-shared/disk-io.h
@@ -138,8 +138,9 @@ static inline u64 btrfs_sb_offset(int mirror)
 struct btrfs_device;
 
 int read_whole_eb(struct btrfs_fs_info *info, struct extent_buffer *eb, int mirror);
-struct extent_buffer* read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
-		u64 parent_transid);
+struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
+				      u64 owner_root, u64 parent_transid,
+				      int level, struct btrfs_key *first_key);
 
 void readahead_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
 			  u64 parent_transid);
diff --git a/kernel-shared/print-tree.c b/kernel-shared/print-tree.c
index d536b2ff..6cdfdef7 100644
--- a/kernel-shared/print-tree.c
+++ b/kernel-shared/print-tree.c
@@ -1557,7 +1557,9 @@ static void dfs_print_children(struct extent_buffer *root_eb, unsigned int mode)
 
 	for (i = 0; i < nr; i++) {
 		next = read_tree_block(fs_info, btrfs_node_blockptr(root_eb, i),
-				btrfs_node_ptr_generation(root_eb, i));
+				       btrfs_header_owner(root_eb),
+				       btrfs_node_ptr_generation(root_eb, i),
+				       root_eb_level, NULL);
 		if (!extent_buffer_uptodate(next)) {
 			fprintf(stderr, "failed to read %llu in tree %llu\n",
 				btrfs_node_blockptr(root_eb, i),
diff --git a/tune/change-uuid.c b/tune/change-uuid.c
index 628a1bba..dae41056 100644
--- a/tune/change-uuid.c
+++ b/tune/change-uuid.c
@@ -111,7 +111,7 @@ static int change_extent_tree_uuid(struct btrfs_fs_info *fs_info, uuid_t new_fsi
 			goto next;
 
 		bytenr = key.objectid;
-		eb = read_tree_block(fs_info, bytenr, 0);
+		eb = read_tree_block(fs_info, bytenr, 0, 0, 0, NULL);
 		if (IS_ERR(eb)) {
 			error("failed to read tree block: %llu", bytenr);
 			ret = PTR_ERR(eb);
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 09/18] btrfs-progs: make reada_for_search static
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (7 preceding siblings ...)
  2023-04-19 21:23 ` [PATCH 08/18] btrfs-progs: update read_tree_block to match the kernel definition Josef Bacik
@ 2023-04-19 21:24 ` Josef Bacik
  2023-04-19 21:24 ` [PATCH 10/18] btrfs-progs: sync btrfs_path fields with the kernel Josef Bacik
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:24 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

We were using this in cmds/restore.c; however, it only does anything if
path->reada is set, and we don't set that in cmds/restore.c.  Remove
this usage of reada_for_search and make the function static.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 cmds/restore.c        | 5 -----
 kernel-shared/ctree.c | 5 +++--
 kernel-shared/ctree.h | 2 --
 3 files changed, 3 insertions(+), 9 deletions(-)

diff --git a/cmds/restore.c b/cmds/restore.c
index 72fc7a07..9fe7b4d2 100644
--- a/cmds/restore.c
+++ b/cmds/restore.c
@@ -267,9 +267,6 @@ again:
 			continue;
 		}
 
-		if (path->reada)
-			reada_for_search(fs_info, path, level, slot, 0);
-
 		next = read_node_slot(fs_info, c, slot);
 		if (extent_buffer_uptodate(next))
 			break;
@@ -284,8 +281,6 @@ again:
 		path->slots[level] = 0;
 		if (!level)
 			break;
-		if (path->reada)
-			reada_for_search(fs_info, path, level, 0, 0);
 		next = read_node_slot(fs_info, next, 0);
 		if (!extent_buffer_uptodate(next))
 			goto again;
diff --git a/kernel-shared/ctree.c b/kernel-shared/ctree.c
index 35133268..3e1085a0 100644
--- a/kernel-shared/ctree.c
+++ b/kernel-shared/ctree.c
@@ -1205,8 +1205,9 @@ static int noinline push_nodes_for_insert(struct btrfs_trans_handle *trans,
 /*
  * readahead one full node of leaves
  */
-void reada_for_search(struct btrfs_fs_info *fs_info, struct btrfs_path *path,
-		      int level, int slot, u64 objectid)
+static void reada_for_search(struct btrfs_fs_info *fs_info,
+			     struct btrfs_path *path, int level, int slot,
+			     u64 objectid)
 {
 	struct extent_buffer *node;
 	struct btrfs_disk_key disk_key;
diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h
index 50f97533..2237f3ef 100644
--- a/kernel-shared/ctree.h
+++ b/kernel-shared/ctree.h
@@ -929,8 +929,6 @@ int btrfs_del_ptr(struct btrfs_root *root, struct btrfs_path *path,
 		int level, int slot);
 enum btrfs_tree_block_status btrfs_check_node(struct extent_buffer *buf);
 enum btrfs_tree_block_status btrfs_check_leaf(struct extent_buffer *buf);
-void reada_for_search(struct btrfs_fs_info *fs_info, struct btrfs_path *path,
-		      int level, int slot, u64 objectid);
 struct extent_buffer *read_node_slot(struct btrfs_fs_info *fs_info,
 				   struct extent_buffer *parent, int slot);
 int btrfs_previous_item(struct btrfs_root *root,
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 10/18] btrfs-progs: sync btrfs_path fields with the kernel
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (8 preceding siblings ...)
  2023-04-19 21:24 ` [PATCH 09/18] btrfs-progs: make reada_for_search static Josef Bacik
@ 2023-04-19 21:24 ` Josef Bacik
  2023-04-19 21:24 ` [PATCH 11/18] btrfs-progs: update arguments of find_extent_buffer Josef Bacik
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:24 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

When we sync ctree.c into btrfs-progs we're going to need a bunch of
flags and fields that exist in the kernel's btrfs_path but do not exist
in btrfs-progs.  Sync these changes into btrfs-progs to enable us to
sync ctree.c.
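
Illustrative only (not from the patch): the new bit-fields let synced
kernel code tune a lookup the same way it does in the kernel, even
though several of them are no-ops in userspace:

  static void init_scan_path_sketch(struct btrfs_path *path)
  {
          memset(path, 0, sizeof(*path));
          /* Kernel semantics; locking and commit-root handling are
           * largely no-ops in userspace. */
          path->search_commit_root = 1;
          path->skip_locking = 1;
          path->reada = READA_FORWARD_ALWAYS;
  }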

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 kernel-shared/ctree.h | 41 ++++++++++++++++++++++++++++++++++++-----
 1 file changed, 36 insertions(+), 5 deletions(-)

diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h
index 2237f3ef..20c9edc6 100644
--- a/kernel-shared/ctree.h
+++ b/kernel-shared/ctree.h
@@ -129,14 +129,32 @@ static inline u32 __BTRFS_LEAF_DATA_SIZE(u32 nodesize)
  * The slots array records the index of the item or block pointer
  * used while walking the tree.
  */
-enum { READA_NONE = 0, READA_BACK, READA_FORWARD };
+enum {
+	READA_NONE,
+	READA_BACK,
+	READA_FORWARD,
+	/*
+	 * Similar to READA_FORWARD but unlike it:
+	 *
+	 * 1) It will trigger readahead even for leaves that are not close to
+	 *    each other on disk;
+	 * 2) It also triggers readahead for nodes;
+	 * 3) During a search, even when a node or leaf is already in memory, it
+	 *    will still trigger readahead for other nodes and leaves that follow
+	 *    it.
+	 *
+	 * This is meant to be used only when we know we are iterating over the
+	 * entire tree or a very large part of it.
+	 */
+	READA_FORWARD_ALWAYS,
+};
+
 struct btrfs_path {
 	struct extent_buffer *nodes[BTRFS_MAX_LEVEL];
 	int slots[BTRFS_MAX_LEVEL];
-#if 0
 	/* The kernel locking scheme is not done in userspace. */
 	int locks[BTRFS_MAX_LEVEL];
-#endif
+
 	signed char reada;
 	/* keep some upper locks as we walk down */
 	u8 lowest_level;
@@ -145,8 +163,21 @@ struct btrfs_path {
 	 * set by btrfs_split_item, tells search_slot to keep all locks
 	 * and to force calls to keep space in the nodes
 	 */
-	u8 search_for_split;
-	u8 skip_check_block;
+	unsigned int search_for_split:1;
+	unsigned int keep_locks:1;
+	unsigned int skip_locking:1;
+	unsigned int search_commit_root:1;
+	unsigned int need_commit_sem:1;
+	unsigned int skip_release_on_error:1;
+	/*
+	 * Indicate that new item (btrfs_search_slot) is extending already
+	 * existing item and ins_len contains only the data size and not item
+	 * header (ie. sizeof(struct btrfs_item) is not included).
+	 */
+	unsigned int search_for_extension:1;
+	/* Stop search if any locks need to be taken (for read) */
+	unsigned int nowait:1;
+	unsigned int skip_check_block:1;
 };
 
 #define BTRFS_MAX_EXTENT_ITEM_SIZE(r) \
-- 
2.40.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH 11/18] btrfs-progs: update arguments of find_extent_buffer
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (9 preceding siblings ...)
  2023-04-19 21:24 ` [PATCH 10/18] btrfs-progs: sync btrfs_path fields with the kernel Josef Bacik
@ 2023-04-19 21:24 ` Josef Bacik
  2023-04-19 21:24 ` [PATCH 12/18] btrfs-progs: add btrfs_readahead_node_child helper Josef Bacik
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:24 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

In the kernel we only take a bytenr for this as the extent buffer cache
is indexed on bytenr.  Since we're passing in the btrfs_fs_info we can
simply use the ->nodesize for the blocksize, and drop the blocksize
argument completely.  This brings us into parity with the kernel, which
will allow the syncing of ctree.c.
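
A hedged example of a caller after this change (hypothetical helper; the
blocksize now comes from fs_info->nodesize inside the lookup):

  static bool block_is_cached_sketch(struct btrfs_fs_info *fs_info, u64 bytenr)
  {
          struct extent_buffer *eb = find_extent_buffer(fs_info, bytenr);

          if (!eb)
                  return false;
          /* Drop the reference the lookup took. */
          free_extent_buffer(eb);
          return true;
  }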

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 kernel-shared/disk-io.c   | 2 +-
 kernel-shared/extent_io.c | 7 ++++---
 kernel-shared/extent_io.h | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c
index 688b1c8e..3b3188da 100644
--- a/kernel-shared/disk-io.c
+++ b/kernel-shared/disk-io.c
@@ -228,7 +228,7 @@ static int csum_tree_block(struct btrfs_fs_info *fs_info,
 struct extent_buffer *btrfs_find_tree_block(struct btrfs_fs_info *fs_info,
 					    u64 bytenr, u32 blocksize)
 {
-	return find_extent_buffer(fs_info, bytenr, blocksize);
+	return find_extent_buffer(fs_info, bytenr);
 }
 
 struct extent_buffer* btrfs_find_create_tree_block(
diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c
index 992b5f35..d6705e37 100644
--- a/kernel-shared/extent_io.c
+++ b/kernel-shared/extent_io.c
@@ -210,14 +210,15 @@ void free_extent_buffer_stale(struct extent_buffer *eb)
 }
 
 struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
-					 u64 bytenr, u32 blocksize)
+					 u64 bytenr)
 {
 	struct extent_buffer *eb = NULL;
 	struct cache_extent *cache;
 
-	cache = lookup_cache_extent(&fs_info->extent_cache, bytenr, blocksize);
+	cache = lookup_cache_extent(&fs_info->extent_cache, bytenr,
+				    fs_info->nodesize);
 	if (cache && cache->start == bytenr &&
-	    cache->size == blocksize) {
+	    cache->size == fs_info->nodesize) {
 		eb = container_of(cache, struct extent_buffer, cache_node);
 		list_move_tail(&eb->lru, &fs_info->lru);
 		eb->refs++;
diff --git a/kernel-shared/extent_io.h b/kernel-shared/extent_io.h
index e4da3c57..b4c2ac97 100644
--- a/kernel-shared/extent_io.h
+++ b/kernel-shared/extent_io.h
@@ -94,7 +94,7 @@ static inline int extent_buffer_uptodate(struct extent_buffer *eb)
 }
 
 struct extent_buffer *find_extent_buffer(struct btrfs_fs_info *fs_info,
-					 u64 bytenr, u32 blocksize);
+					 u64 bytenr);
 struct extent_buffer *find_first_extent_buffer(struct btrfs_fs_info *fs_info,
 					       u64 start);
 struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
-- 
2.40.0


* [PATCH 12/18] btrfs-progs: add btrfs_readahead_node_child helper
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (10 preceding siblings ...)
  2023-04-19 21:24 ` [PATCH 11/18] btrfs-progs: update arguments of find_extent_buffer Josef Bacik
@ 2023-04-19 21:24 ` Josef Bacik
  2023-04-19 21:24 ` [PATCH 13/18] btrfs-progs: add an atomic arg to btrfs_buffer_uptodate Josef Bacik
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:24 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

This exists in the kernel as a wrapper around readahead_tree_block and
is used extensively in the kernel's ctree.c.  Sync this helper so that
we can easily sync ctree.c.
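
As a rough illustration (not part of this patch, assuming @node is the
parent extent buffer), the kernel-style usage is to readahead all of a
node's children before descending into them:

	int nritems = btrfs_header_nritems(node);
	int slot;

	/* queue readahead for every child pointer of @node */
	for (slot = 0; slot < nritems; slot++)
		btrfs_readahead_node_child(node, slot);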

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 kernel-shared/extent_io.c | 14 ++++++++++++++
 kernel-shared/extent_io.h |  1 +
 2 files changed, 15 insertions(+)

diff --git a/kernel-shared/extent_io.c b/kernel-shared/extent_io.c
index d6705e37..105b5ec8 100644
--- a/kernel-shared/extent_io.c
+++ b/kernel-shared/extent_io.c
@@ -651,3 +651,17 @@ void write_extent_buffer_fsid(const struct extent_buffer *eb, const void *srcv)
 {
 	write_extent_buffer(eb, srcv, btrfs_header_fsid(), BTRFS_FSID_SIZE);
 }
+
+/*
+ * btrfs_readahead_node_child - readahead a node's child block
+ * @node:	parent node we're reading from
+ * @slot:	slot in the parent node for the child we want to read
+ *
+ * A helper for readahead_tree_block, we simply read the bytenr pointed at the
+ * slot in the node provided.
+ */
+void btrfs_readahead_node_child(struct extent_buffer *node, int slot)
+{
+	readahead_tree_block(node->fs_info, btrfs_node_blockptr(node, slot),
+			     btrfs_node_ptr_generation(node, slot));
+}
diff --git a/kernel-shared/extent_io.h b/kernel-shared/extent_io.h
index b4c2ac97..a1cda3a5 100644
--- a/kernel-shared/extent_io.h
+++ b/kernel-shared/extent_io.h
@@ -137,5 +137,6 @@ void extent_buffer_bitmap_set(struct extent_buffer *eb, unsigned long start,
 void extent_buffer_init_cache(struct btrfs_fs_info *fs_info);
 void extent_buffer_free_cache(struct btrfs_fs_info *fs_info);
 void write_extent_buffer_fsid(const struct extent_buffer *eb, const void *srcv);
+void btrfs_readahead_node_child(struct extent_buffer *node, int slot);
 
 #endif
-- 
2.40.0


* [PATCH 13/18] btrfs-progs: add an atomic arg to btrfs_buffer_uptodate
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (11 preceding siblings ...)
  2023-04-19 21:24 ` [PATCH 12/18] btrfs-progs: add btrfs_readahead_node_child helper Josef Bacik
@ 2023-04-19 21:24 ` Josef Bacik
  2023-04-19 21:24 ` [PATCH 14/18] btrfs-progs: add a btrfs_read_extent_buffer helper Josef Bacik
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:24 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

In the kernel this extra argument indicates that the caller is in an
atomic context and thus can't lock the io_tree when checking the
transid of an extent buffer.  This isn't necessary in btrfs-progs, but
to allow for easier syncing of ctree.c add this argument to our copy of
btrfs_buffer_uptodate.
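
In btrfs-progs the new argument is always 0; a hypothetical call site
(sketch only, assuming eb and parent_transid are in scope) just grows
the extra parameter:

	/* userspace has no io_tree locking, so @atomic is always 0 here */
	if (!btrfs_buffer_uptodate(eb, parent_transid, 0)) {
		/* ... fall back to reading the block from disk ... */
	}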

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 check/main.c                | 2 +-
 check/mode-lowmem.c         | 2 +-
 kernel-shared/disk-io.c     | 9 +++++----
 kernel-shared/disk-io.h     | 3 ++-
 kernel-shared/extent-tree.c | 2 +-
 5 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/check/main.c b/check/main.c
index 610c3091..f15272bf 100644
--- a/check/main.c
+++ b/check/main.c
@@ -1895,7 +1895,7 @@ static int walk_down_tree(struct btrfs_root *root, struct btrfs_path *path,
 		}
 
 		next = btrfs_find_tree_block(gfs_info, bytenr, gfs_info->nodesize);
-		if (!next || !btrfs_buffer_uptodate(next, ptr_gen)) {
+		if (!next || !btrfs_buffer_uptodate(next, ptr_gen, 0)) {
 			free_extent_buffer(next);
 			reada_walk_down(root, cur, path->slots[*level]);
 			next = read_tree_block(gfs_info, bytenr,
diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c
index 83b86e63..fb294c90 100644
--- a/check/mode-lowmem.c
+++ b/check/mode-lowmem.c
@@ -5043,7 +5043,7 @@ static int walk_down_tree(struct btrfs_root *root, struct btrfs_path *path,
 		}
 
 		next = btrfs_find_tree_block(gfs_info, bytenr, gfs_info->nodesize);
-		if (!next || !btrfs_buffer_uptodate(next, ptr_gen)) {
+		if (!next || !btrfs_buffer_uptodate(next, ptr_gen, 0)) {
 			free_extent_buffer(next);
 			reada_walk_down(root, cur, path->slots[*level]);
 			next = read_tree_block(gfs_info, bytenr,
diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c
index 3b3188da..29fe9027 100644
--- a/kernel-shared/disk-io.c
+++ b/kernel-shared/disk-io.c
@@ -246,7 +246,7 @@ void readahead_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
 	struct btrfs_device *device;
 
 	eb = btrfs_find_tree_block(fs_info, bytenr, fs_info->nodesize);
-	if (!(eb && btrfs_buffer_uptodate(eb, parent_transid)) &&
+	if (!(eb && btrfs_buffer_uptodate(eb, parent_transid, 0)) &&
 	    !btrfs_map_block(fs_info, READ, bytenr, &length, &multi, 0,
 			     NULL)) {
 		device = multi->stripes[0].dev;
@@ -367,7 +367,7 @@ struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
 	if (!eb)
 		return ERR_PTR(-ENOMEM);
 
-	if (btrfs_buffer_uptodate(eb, parent_transid))
+	if (btrfs_buffer_uptodate(eb, parent_transid, 0))
 		return eb;
 
 	num_copies = btrfs_num_copies(fs_info, eb->start, eb->len);
@@ -478,7 +478,7 @@ int write_tree_block(struct btrfs_trans_handle *trans,
 		BUG();
 	}
 
-	if (trans && !btrfs_buffer_uptodate(eb, trans->transid))
+	if (trans && !btrfs_buffer_uptodate(eb, trans->transid, 0))
 		BUG();
 
 	btrfs_clear_header_flag(eb, BTRFS_HEADER_FLAG_CSUM_NEW);
@@ -2262,7 +2262,8 @@ void btrfs_mark_buffer_dirty(struct extent_buffer *eb)
 	set_extent_buffer_dirty(eb);
 }
 
-int btrfs_buffer_uptodate(struct extent_buffer *buf, u64 parent_transid)
+int btrfs_buffer_uptodate(struct extent_buffer *buf, u64 parent_transid,
+			  int atomic)
 {
 	int ret;
 
diff --git a/kernel-shared/disk-io.h b/kernel-shared/disk-io.h
index f349b3ef..ed7f9259 100644
--- a/kernel-shared/disk-io.h
+++ b/kernel-shared/disk-io.h
@@ -201,7 +201,8 @@ struct btrfs_root *btrfs_read_fs_root_no_cache(struct btrfs_fs_info *fs_info,
 					       struct btrfs_key *location);
 int btrfs_free_fs_root(struct btrfs_root *root);
 void btrfs_mark_buffer_dirty(struct extent_buffer *buf);
-int btrfs_buffer_uptodate(struct extent_buffer *buf, u64 parent_transid);
+int btrfs_buffer_uptodate(struct extent_buffer *buf, u64 parent_transid,
+			  int atomic);
 int btrfs_set_buffer_uptodate(struct extent_buffer *buf);
 int btrfs_csum_data(struct btrfs_fs_info *fs_info, u16 csum_type, const u8 *data,
 		    u8 *out, size_t len);
diff --git a/kernel-shared/extent-tree.c b/kernel-shared/extent-tree.c
index 5c33fd53..062ff4a7 100644
--- a/kernel-shared/extent-tree.c
+++ b/kernel-shared/extent-tree.c
@@ -1893,7 +1893,7 @@ static int pin_down_bytes(struct btrfs_trans_handle *trans, u64 bytenr,
 	 * reuse anything from the tree log root because
 	 * it has tiny sub-transactions.
 	 */
-	if (btrfs_buffer_uptodate(buf, 0)) {
+	if (btrfs_buffer_uptodate(buf, 0, 0)) {
 		u64 header_owner = btrfs_header_owner(buf);
 		u64 header_transid = btrfs_header_generation(buf);
 		if (header_owner != BTRFS_TREE_LOG_OBJECTID &&
-- 
2.40.0


* [PATCH 14/18] btrfs-progs: add a btrfs_read_extent_buffer helper
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (12 preceding siblings ...)
  2023-04-19 21:24 ` [PATCH 13/18] btrfs-progs: add an atomic arg to btrfs_buffer_uptodate Josef Bacik
@ 2023-04-19 21:24 ` Josef Bacik
  2023-04-19 21:24 ` [PATCH 15/18] btrfs-progs: add BTRFS_STRIPE_LEN_SHIFT definition Josef Bacik
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:24 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

This exists in the kernel to read an extent buffer that we may have
already looked up and initialized.  Create this helper by extracting
the existing code from read_tree_block and making read_tree_block call
it.  This gives us the helper we need to sync ctree.c into btrfs-progs,
while keeping the btrfs-progs behavior unchanged.
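
The split lets a caller that already holds an extent buffer read it
directly.  A rough sketch of the pattern (assuming fs_info, bytenr,
parent_transid, level and first_key are in scope):

	struct extent_buffer *eb;
	int ret;

	eb = btrfs_find_create_tree_block(fs_info, bytenr);
	if (!eb)
		return -ENOMEM;

	if (!btrfs_buffer_uptodate(eb, parent_transid, 0)) {
		ret = btrfs_read_extent_buffer(eb, parent_transid, level,
					       first_key);
		if (ret) {
			free_extent_buffer_nocache(eb);
			return ret;
		}
	}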

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 kernel-shared/disk-io.c | 72 ++++++++++++++++++++++++-----------------
 kernel-shared/disk-io.h |  2 ++
 2 files changed, 45 insertions(+), 29 deletions(-)

diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c
index 29fe9027..6e810bd1 100644
--- a/kernel-shared/disk-io.c
+++ b/kernel-shared/disk-io.c
@@ -337,39 +337,18 @@ int read_whole_eb(struct btrfs_fs_info *info, struct extent_buffer *eb, int mirr
 	return 0;
 }
 
-struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
-				      u64 owner_root, u64 parent_transid,
-				      int level, struct btrfs_key *first_key)
+int btrfs_read_extent_buffer(struct extent_buffer *eb, u64 parent_transid,
+			     int level, struct btrfs_key *first_key)
 {
+	struct btrfs_fs_info *fs_info = eb->fs_info;
 	int ret;
-	struct extent_buffer *eb;
 	u64 best_transid = 0;
-	u32 sectorsize = fs_info->sectorsize;
 	int mirror_num = 1;
 	int good_mirror = 0;
 	int candidate_mirror = 0;
 	int num_copies;
 	int ignore = 0;
 
-	/*
-	 * Don't even try to create tree block for unaligned tree block
-	 * bytenr.
-	 * Such unaligned tree block will free overlapping extent buffer,
-	 * causing use-after-free bugs for fuzzed images.
-	 */
-	if (bytenr < sectorsize || !IS_ALIGNED(bytenr, sectorsize)) {
-		error("tree block bytenr %llu is not aligned to sectorsize %u",
-		      bytenr, sectorsize);
-		return ERR_PTR(-EIO);
-	}
-
-	eb = btrfs_find_create_tree_block(fs_info, bytenr);
-	if (!eb)
-		return ERR_PTR(-ENOMEM);
-
-	if (btrfs_buffer_uptodate(eb, parent_transid, 0))
-		return eb;
-
 	num_copies = btrfs_num_copies(fs_info, eb->start, eb->len);
 	while (1) {
 		ret = read_whole_eb(fs_info, eb, mirror_num);
@@ -396,7 +375,7 @@ struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
 				ret = btrfs_check_leaf(eb);
 			if (!ret || candidate_mirror == mirror_num) {
 				btrfs_set_buffer_uptodate(eb);
-				return eb;
+				return 0;
 			}
 			if (candidate_mirror <= 0)
 				candidate_mirror = mirror_num;
@@ -439,12 +418,47 @@ struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
 			continue;
 		}
 	}
+	return ret;
+}
+
+struct extent_buffer *read_tree_block(struct btrfs_fs_info *fs_info, u64 bytenr,
+				      u64 owner_root, u64 parent_transid,
+				      int level, struct btrfs_key *first_key)
+{
+	int ret;
+	struct extent_buffer *eb;
+	u32 sectorsize = fs_info->sectorsize;
+
 	/*
-	 * We failed to read this tree block, it be should deleted right now
-	 * to avoid stale cache populate the cache.
+	 * Don't even try to create tree block for unaligned tree block
+	 * bytenr.
+	 * Such unaligned tree block will free overlapping extent buffer,
+	 * causing use-after-free bugs for fuzzed images.
 	 */
-	free_extent_buffer_nocache(eb);
-	return ERR_PTR(ret);
+	if (bytenr < sectorsize || !IS_ALIGNED(bytenr, sectorsize)) {
+		error("tree block bytenr %llu is not aligned to sectorsize %u",
+		      bytenr, sectorsize);
+		return ERR_PTR(-EIO);
+	}
+
+	eb = btrfs_find_create_tree_block(fs_info, bytenr);
+	if (!eb)
+		return ERR_PTR(-ENOMEM);
+
+	if (btrfs_buffer_uptodate(eb, parent_transid, 0))
+		return eb;
+
+	ret = btrfs_read_extent_buffer(eb, parent_transid, level, first_key);
+	if (ret) {
+		/*
+		 * We failed to read this tree block, it should be deleted right
+		 * now to avoid populating the cache with a stale buffer.
+		 */
+		free_extent_buffer_nocache(eb);
+		return ERR_PTR(ret);
+	}
+
+	return eb;
 }
 
 int write_and_map_eb(struct btrfs_fs_info *fs_info, struct extent_buffer *eb)
diff --git a/kernel-shared/disk-io.h b/kernel-shared/disk-io.h
index ed7f9259..4c63a4a8 100644
--- a/kernel-shared/disk-io.h
+++ b/kernel-shared/disk-io.h
@@ -233,6 +233,8 @@ int btrfs_global_root_insert(struct btrfs_fs_info *fs_info,
 int btrfs_find_and_setup_root(struct btrfs_root *tree_root,
 			      struct btrfs_fs_info *fs_info,
 			      u64 objectid, struct btrfs_root *root);
+int btrfs_read_extent_buffer(struct extent_buffer *eb, u64 parent_transid,
+			     int level, struct btrfs_key *first_key);
 
 static inline struct btrfs_root *btrfs_block_group_root(
 						struct btrfs_fs_info *fs_info)
-- 
2.40.0


* [PATCH 15/18] btrfs-progs: add BTRFS_STRIPE_LEN_SHIFT definition
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (13 preceding siblings ...)
  2023-04-19 21:24 ` [PATCH 14/18] btrfs-progs: add a btrfs_read_extent_buffer helper Josef Bacik
@ 2023-04-19 21:24 ` Josef Bacik
  2023-04-19 21:24 ` [PATCH 16/18] btrfs-progs: rename btrfs_check_* to __btrfs_check_* Josef Bacik
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:24 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

This is used by tree-checker.c, so sync this into volumes.h to make it
easier to sync tree-checker.c.
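
For context, the shift is just log2 of the existing constant
(BTRFS_STRIPE_LEN is SZ_64K, i.e. 1 << 16), so stripe arithmetic can
use shifts and masks instead of divisions.  Illustrative only, assuming
a u64 @offset:

	u64 stripe_nr = offset >> BTRFS_STRIPE_LEN_SHIFT;  /* offset / 64K */
	u64 stripe_off = offset & (BTRFS_STRIPE_LEN - 1);  /* offset % 64K */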

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 kernel-shared/volumes.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel-shared/volumes.h b/kernel-shared/volumes.h
index 6e9103a9..206eab77 100644
--- a/kernel-shared/volumes.h
+++ b/kernel-shared/volumes.h
@@ -24,6 +24,7 @@
 #include "kernel-lib/sizes.h"
 
 #define BTRFS_STRIPE_LEN	SZ_64K
+#define BTRFS_STRIPE_LEN_SHIFT	(16)
 
 struct btrfs_device {
 	struct list_head dev_list;
-- 
2.40.0


* [PATCH 16/18] btrfs-progs: rename btrfs_check_* to __btrfs_check_*
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (14 preceding siblings ...)
  2023-04-19 21:24 ` [PATCH 15/18] btrfs-progs: add BTRFS_STRIPE_LEN_SHIFT definition Josef Bacik
@ 2023-04-19 21:24 ` Josef Bacik
  2023-04-19 21:24 ` [PATCH 17/18] btrfs-progs: change btrfs_check_chunk_valid to match the kernel version Josef Bacik
  2023-04-19 21:24 ` [PATCH 18/18] btrfs-progs: sync tree-checker.[ch] Josef Bacik
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:24 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

These helpers are called __btrfs_check_* in the kernel as they return
the special enum to indicate what part of the leaf/node failed.  Rename
the uses in btrfs-progs to match the kernel naming convention to make it
easier to sync that code.
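
For illustration only (no such wrapper is added by this patch), the
naming convention separates the detailed checker from a plain pass/fail
result roughly like this:

	/* __btrfs_check_leaf() reports *which* check failed ... */
	enum btrfs_tree_block_status status = __btrfs_check_leaf(leaf);

	/* ... while ordinary callers only care about pass/fail */
	if (status != BTRFS_TREE_BLOCK_CLEAN)
		return -EUCLEAN;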

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 check/repair.c          | 4 ++--
 kernel-shared/ctree.c   | 8 ++++----
 kernel-shared/ctree.h   | 4 ++--
 kernel-shared/disk-io.c | 4 ++--
 4 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/check/repair.c b/check/repair.c
index ec8b0196..b323ad3e 100644
--- a/check/repair.c
+++ b/check/repair.c
@@ -311,9 +311,9 @@ enum btrfs_tree_block_status btrfs_check_block_for_repair(struct extent_buffer *
 	enum btrfs_tree_block_status status;
 
 	if (btrfs_is_leaf(eb))
-		status = btrfs_check_leaf(eb);
+		status = __btrfs_check_leaf(eb);
 	else
-		status = btrfs_check_node(eb);
+		status = __btrfs_check_node(eb);
 
 	if (status == BTRFS_TREE_BLOCK_CLEAN)
 		return status;
diff --git a/kernel-shared/ctree.c b/kernel-shared/ctree.c
index 3e1085a0..66f44879 100644
--- a/kernel-shared/ctree.c
+++ b/kernel-shared/ctree.c
@@ -616,7 +616,7 @@ static void generic_err(const struct extent_buffer *buf, int slot,
 	fprintf(stderr, "\n");
 }
 
-enum btrfs_tree_block_status btrfs_check_node(struct extent_buffer *node)
+enum btrfs_tree_block_status __btrfs_check_node(struct extent_buffer *node)
 {
 	struct btrfs_fs_info *fs_info = node->fs_info;
 	unsigned long nr = btrfs_header_nritems(node);
@@ -677,7 +677,7 @@ fail:
 	return ret;
 }
 
-enum btrfs_tree_block_status btrfs_check_leaf(struct extent_buffer *leaf)
+enum btrfs_tree_block_status __btrfs_check_leaf(struct extent_buffer *leaf)
 {
 	struct btrfs_fs_info *fs_info = leaf->fs_info;
 	/* No valid key type is 0, so all key should be larger than this key */
@@ -789,9 +789,9 @@ static int noinline check_block(struct btrfs_fs_info *fs_info,
 	if (path->skip_check_block)
 		return 0;
 	if (level == 0)
-		ret = btrfs_check_leaf(path->nodes[0]);
+		ret = __btrfs_check_leaf(path->nodes[0]);
 	else
-		ret = btrfs_check_node(path->nodes[level]);
+		ret = __btrfs_check_node(path->nodes[level]);
 	if (ret == BTRFS_TREE_BLOCK_CLEAN)
 		return 0;
 	return -EIO;
diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h
index 20c9edc6..237f530d 100644
--- a/kernel-shared/ctree.h
+++ b/kernel-shared/ctree.h
@@ -958,8 +958,8 @@ int btrfs_convert_one_bg(struct btrfs_trans_handle *trans, u64 bytenr);
 int btrfs_comp_cpu_keys(const struct btrfs_key *k1, const struct btrfs_key *k2);
 int btrfs_del_ptr(struct btrfs_root *root, struct btrfs_path *path,
 		int level, int slot);
-enum btrfs_tree_block_status btrfs_check_node(struct extent_buffer *buf);
-enum btrfs_tree_block_status btrfs_check_leaf(struct extent_buffer *buf);
+enum btrfs_tree_block_status __btrfs_check_node(struct extent_buffer *buf);
+enum btrfs_tree_block_status __btrfs_check_leaf(struct extent_buffer *buf);
 struct extent_buffer *read_node_slot(struct btrfs_fs_info *fs_info,
 				   struct extent_buffer *parent, int slot);
 int btrfs_previous_item(struct btrfs_root *root,
diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c
index 6e810bd1..4950c685 100644
--- a/kernel-shared/disk-io.c
+++ b/kernel-shared/disk-io.c
@@ -370,9 +370,9 @@ int btrfs_read_extent_buffer(struct extent_buffer *eb, u64 parent_transid,
 			 * btrfs ins dump-tree.
 			 */
 			if (btrfs_header_level(eb))
-				ret = btrfs_check_node(eb);
+				ret = __btrfs_check_node(eb);
 			else
-				ret = btrfs_check_leaf(eb);
+				ret = __btrfs_check_leaf(eb);
 			if (!ret || candidate_mirror == mirror_num) {
 				btrfs_set_buffer_uptodate(eb);
 				return 0;
-- 
2.40.0


* [PATCH 17/18] btrfs-progs: change btrfs_check_chunk_valid to match the kernel version
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (15 preceding siblings ...)
  2023-04-19 21:24 ` [PATCH 16/18] btrfs-progs: rename btrfs_check_* to __btrfs_check_* Josef Bacik
@ 2023-04-19 21:24 ` Josef Bacik
  2023-04-19 21:24 ` [PATCH 18/18] btrfs-progs: sync tree-checker.[ch] Josef Bacik
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:24 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

In btrfs-progs, btrfs_check_chunk_valid validates the leaf item (its
size and placement) as well as the chunk itself.  In the kernel the
leaf-level checks are handled separately as part of the read, and the
chunk checker only validates the chunk.  Change the btrfs-progs version
to match the in-kernel version for now, to make syncing the in-kernel
code easier.
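
A caller now passes only the leaf, the chunk and the logical offset,
and fs_info is taken from leaf->fs_info internally.  Sketch of an
updated call site (assuming leaf, slot and key are in scope), mirroring
the hunks below:

	struct btrfs_chunk *chunk;
	int ret;

	chunk = btrfs_item_ptr(leaf, slot, struct btrfs_chunk);
	ret = btrfs_check_chunk_valid(leaf, chunk, key->offset);
	if (ret < 0)
		return ret;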

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 check/main.c            |  3 +--
 check/mode-lowmem.c     |  6 ++----
 kernel-shared/volumes.c | 46 ++++-------------------------------------
 kernel-shared/volumes.h |  6 ++----
 4 files changed, 9 insertions(+), 52 deletions(-)

diff --git a/check/main.c b/check/main.c
index f15272bf..f9055f7a 100644
--- a/check/main.c
+++ b/check/main.c
@@ -5329,8 +5329,7 @@ static int process_chunk_item(struct cache_tree *chunk_cache,
 	 * wrong onwer(3) out of chunk tree, to pass both chunk tree check
 	 * and owner<->key_type check.
 	 */
-	ret = btrfs_check_chunk_valid(gfs_info, eb, chunk, slot,
-				      key->offset);
+	ret = btrfs_check_chunk_valid(eb, chunk, key->offset);
 	if (ret < 0) {
 		error("chunk(%llu, %llu) is not valid, ignore it",
 		      key->offset, btrfs_chunk_length(eb, chunk));
diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c
index fb294c90..7a57f99a 100644
--- a/check/mode-lowmem.c
+++ b/check/mode-lowmem.c
@@ -4470,8 +4470,7 @@ static int check_dev_extent_item(struct extent_buffer *eb, int slot)
 
 	l = path.nodes[0];
 	chunk = btrfs_item_ptr(l, path.slots[0], struct btrfs_chunk);
-	ret = btrfs_check_chunk_valid(gfs_info, l, chunk, path.slots[0],
-				      chunk_key.offset);
+	ret = btrfs_check_chunk_valid(l, chunk, chunk_key.offset);
 	if (ret < 0)
 		goto out;
 
@@ -4702,8 +4701,7 @@ static int check_chunk_item(struct extent_buffer *eb, int slot)
 	chunk = btrfs_item_ptr(eb, slot, struct btrfs_chunk);
 	length = btrfs_chunk_length(eb, chunk);
 	chunk_end = chunk_key.offset + length;
-	ret = btrfs_check_chunk_valid(gfs_info, eb, chunk, slot,
-				      chunk_key.offset);
+	ret = btrfs_check_chunk_valid(eb, chunk, chunk_key.offset);
 	if (ret < 0) {
 		error("chunk[%llu %llu) is invalid", chunk_key.offset,
 			chunk_end);
diff --git a/kernel-shared/volumes.c b/kernel-shared/volumes.c
index 1e2c8895..14fcefee 100644
--- a/kernel-shared/volumes.c
+++ b/kernel-shared/volumes.c
@@ -2090,33 +2090,19 @@ static struct btrfs_device *fill_missing_device(u64 devid)
  * slot == -1: SYSTEM chunk
  * return -EIO on error, otherwise return 0
  */
-int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info,
-			    struct extent_buffer *leaf,
-			    struct btrfs_chunk *chunk,
-			    int slot, u64 logical)
+int btrfs_check_chunk_valid(struct extent_buffer *leaf,
+			    struct btrfs_chunk *chunk, u64 logical)
 {
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
 	u64 length;
 	u64 stripe_len;
 	u16 num_stripes;
 	u16 sub_stripes;
 	u64 type;
-	u32 chunk_ondisk_size;
 	u32 sectorsize = fs_info->sectorsize;
 	int min_devs;
 	int table_sub_stripes;
 
-	/*
-	 * Basic chunk item size check.  Note that btrfs_chunk already contains
-	 * one stripe, so no "==" check.
-	 */
-	if (slot >= 0 &&
-	    btrfs_item_size(leaf, slot) < sizeof(struct btrfs_chunk)) {
-		error("invalid chunk item size, have %u expect [%zu, %u)",
-			btrfs_item_size(leaf, slot),
-			sizeof(struct btrfs_chunk),
-			BTRFS_LEAF_DATA_SIZE(fs_info));
-		return -EUCLEAN;
-	}
 	length = btrfs_chunk_length(leaf, chunk);
 	stripe_len = btrfs_chunk_stripe_len(leaf, chunk);
 	num_stripes = btrfs_chunk_num_stripes(leaf, chunk);
@@ -2128,13 +2114,6 @@ int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info,
 			num_stripes);
 		return -EUCLEAN;
 	}
-	if (slot >= 0 && btrfs_chunk_item_size(num_stripes) !=
-	    btrfs_item_size(leaf, slot)) {
-		error("invalid chunk item size, have %u expect %lu",
-			btrfs_item_size(leaf, slot),
-			btrfs_chunk_item_size(num_stripes));
-		return -EUCLEAN;
-	}
 
 	/*
 	 * These valid checks may be insufficient to cover every corner cases.
@@ -2156,11 +2135,6 @@ int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info,
 		error("invalid chunk stripe length: %llu", stripe_len);
 		return -EIO;
 	}
-	/* Check on chunk item type */
-	if (slot == -1 && (type & BTRFS_BLOCK_GROUP_SYSTEM) == 0) {
-		error("invalid chunk type %llu", type);
-		return -EIO;
-	}
 	if (type & ~(BTRFS_BLOCK_GROUP_TYPE_MASK |
 		     BTRFS_BLOCK_GROUP_PROFILE_MASK)) {
 		error("unrecognized chunk type: %llu",
@@ -2183,18 +2157,6 @@ int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info,
 		return -EIO;
 	}
 
-	chunk_ondisk_size = btrfs_chunk_item_size(num_stripes);
-	/*
-	 * Btrfs_chunk contains at least one stripe, and for sys_chunk
-	 * it can't exceed the system chunk array size
-	 * For normal chunk, it should match its chunk item size.
-	 */
-	if (num_stripes < 1 ||
-	    (slot == -1 && chunk_ondisk_size > BTRFS_SYSTEM_CHUNK_ARRAY_SIZE) ||
-	    (slot >= 0 && chunk_ondisk_size > btrfs_item_size(leaf, slot))) {
-		error("invalid num_stripes: %u", num_stripes);
-		return -EIO;
-	}
 	/*
 	 * Device number check against profile
 	 */
@@ -2243,7 +2205,7 @@ static int read_one_chunk(struct btrfs_fs_info *fs_info, struct btrfs_key *key,
 	length = btrfs_chunk_length(leaf, chunk);
 	num_stripes = btrfs_chunk_num_stripes(leaf, chunk);
 	/* Validation check */
-	ret = btrfs_check_chunk_valid(fs_info, leaf, chunk, slot, logical);
+	ret = btrfs_check_chunk_valid(leaf, chunk, logical);
 	if (ret) {
 		error("%s checksums match, but it has an invalid chunk, %s",
 		      (slot == -1) ? "Superblock" : "Metadata",
diff --git a/kernel-shared/volumes.h b/kernel-shared/volumes.h
index 206eab77..84fd6617 100644
--- a/kernel-shared/volumes.h
+++ b/kernel-shared/volumes.h
@@ -294,10 +294,8 @@ int write_raid56_with_parity(struct btrfs_fs_info *info,
 			     struct extent_buffer *eb,
 			     struct btrfs_multi_bio *multi,
 			     u64 stripe_len, u64 *raid_map);
-int btrfs_check_chunk_valid(struct btrfs_fs_info *fs_info,
-			    struct extent_buffer *leaf,
-			    struct btrfs_chunk *chunk,
-			    int slot, u64 logical);
+int btrfs_check_chunk_valid(struct extent_buffer *leaf,
+			    struct btrfs_chunk *chunk, u64 logical);
 u64 btrfs_stripe_length(struct btrfs_fs_info *fs_info,
 			struct extent_buffer *leaf,
 			struct btrfs_chunk *chunk);
-- 
2.40.0


* [PATCH 18/18] btrfs-progs: sync tree-checker.[ch]
  2023-04-19 21:23 [PATCH 00/18] btrfs-progs: more prep work for syncing ctree.c Josef Bacik
                   ` (16 preceding siblings ...)
  2023-04-19 21:24 ` [PATCH 17/18] btrfs-progs: change btrfs_check_chunk_valid to match the kernel version Josef Bacik
@ 2023-04-19 21:24 ` Josef Bacik
  17 siblings, 0 replies; 19+ messages in thread
From: Josef Bacik @ 2023-04-19 21:24 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

This syncs tree-checker.c from the kernel.  The main modification is
the addition of an open_ctree flag to skip the deeper leaf checks,
plumbed through tree-checker.c.  We need this for tools like fsck or
btrfs-image that must work with slightly corrupted file systems, where
the full checks would prevent us from looking at the corrupted blocks
at all.
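
Tools that must tolerate damaged metadata opt in via the new flag when
opening the filesystem, e.g. (sketch based on the image/main.c hunks
below):

	root = open_ctree(input, 0, OPEN_CTREE_ALLOW_TRANSID_MISMATCH |
			  OPEN_CTREE_SKIP_LEAF_ITEM_CHECKS);
	if (!root) {
		error("open ctree failed");
		return -EIO;
	}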

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 Makefile                     |    1 +
 check/main.c                 |    4 +-
 check/mode-lowmem.c          |    1 +
 check/repair.c               |    1 +
 image/main.c                 |   14 +-
 include/kerncompat.h         |   10 +
 kernel-shared/ctree.c        |  180 +--
 kernel-shared/ctree.h        |   14 +-
 kernel-shared/disk-io.c      |    3 +
 kernel-shared/disk-io.h      |    6 +
 kernel-shared/tree-checker.c | 2064 ++++++++++++++++++++++++++++++++++
 kernel-shared/tree-checker.h |   72 ++
 kernel-shared/volumes.c      |   96 +-
 kernel-shared/volumes.h      |    2 -
 14 files changed, 2173 insertions(+), 295 deletions(-)
 create mode 100644 kernel-shared/tree-checker.c
 create mode 100644 kernel-shared/tree-checker.h

diff --git a/Makefile b/Makefile
index 8001f46a..6806d347 100644
--- a/Makefile
+++ b/Makefile
@@ -187,6 +187,7 @@ objects = \
 	kernel-shared/print-tree.o	\
 	kernel-shared/root-tree.o	\
 	kernel-shared/transaction.o	\
+	kernel-shared/tree-checker.o	\
 	kernel-shared/ulist.o	\
 	kernel-shared/uuid-tree.o	\
 	kernel-shared/volumes.o	\
diff --git a/check/main.c b/check/main.c
index f9055f7a..8714c213 100644
--- a/check/main.c
+++ b/check/main.c
@@ -64,6 +64,7 @@
 #include "check/clear-cache.h"
 #include "kernel-shared/uapi/btrfs.h"
 #include "kernel-lib/bitops.h"
+#include "kernel-shared/tree-checker.h"
 
 /* Global context variables */
 struct btrfs_fs_info *gfs_info;
@@ -9996,7 +9997,8 @@ static int cmd_check(const struct cmd_struct *cmd, int argc, char **argv)
 	int qgroups_repaired = 0;
 	int qgroup_verify_ret;
 	unsigned ctree_flags = OPEN_CTREE_EXCLUSIVE |
-			       OPEN_CTREE_ALLOW_TRANSID_MISMATCH;
+			       OPEN_CTREE_ALLOW_TRANSID_MISMATCH |
+			       OPEN_CTREE_SKIP_LEAF_ITEM_CHECKS;
 	int force = 0;
 
 	while(1) {
diff --git a/check/mode-lowmem.c b/check/mode-lowmem.c
index 7a57f99a..1614c065 100644
--- a/check/mode-lowmem.c
+++ b/check/mode-lowmem.c
@@ -38,6 +38,7 @@
 #include "check/repair.h"
 #include "check/mode-common.h"
 #include "check/mode-lowmem.h"
+#include "kernel-shared/tree-checker.h"
 
 static u64 last_allocated_chunk;
 static u64 total_used = 0;
diff --git a/check/repair.c b/check/repair.c
index b323ad3e..b73f9518 100644
--- a/check/repair.c
+++ b/check/repair.c
@@ -29,6 +29,7 @@
 #include "kernel-shared/disk-io.h"
 #include "common/extent-cache.h"
 #include "check/repair.h"
+#include "kernel-shared/tree-checker.h"
 
 int opt_check_repair = 0;
 
diff --git a/image/main.c b/image/main.c
index 92b0dbfa..856e313f 100644
--- a/image/main.c
+++ b/image/main.c
@@ -1025,7 +1025,8 @@ static int create_metadump(const char *input, FILE *out, int num_threads,
 	int ret;
 	int err = 0;
 
-	root = open_ctree(input, 0, OPEN_CTREE_ALLOW_TRANSID_MISMATCH);
+	root = open_ctree(input, 0, OPEN_CTREE_ALLOW_TRANSID_MISMATCH |
+			  OPEN_CTREE_SKIP_LEAF_ITEM_CHECKS);
 	if (!root) {
 		error("open ctree failed");
 		return -EIO;
@@ -2798,7 +2799,7 @@ static int restore_metadump(const char *input, FILE *out, int old_restore,
 
 		ocf.filename = target;
 		ocf.flags = OPEN_CTREE_WRITES | OPEN_CTREE_RESTORE |
-			    OPEN_CTREE_PARTIAL;
+			    OPEN_CTREE_PARTIAL | OPEN_CTREE_SKIP_LEAF_ITEM_CHECKS;
 		info = open_ctree_fs_info(&ocf);
 		if (!info) {
 			error("open ctree failed");
@@ -2864,7 +2865,8 @@ static int restore_metadump(const char *input, FILE *out, int old_restore,
 					  OPEN_CTREE_PARTIAL |
 					  OPEN_CTREE_WRITES |
 					  OPEN_CTREE_NO_DEVICES |
-					  OPEN_CTREE_ALLOW_TRANSID_MISMATCH);
+					  OPEN_CTREE_ALLOW_TRANSID_MISMATCH |
+					  OPEN_CTREE_SKIP_LEAF_ITEM_CHECKS);
 		if (!root) {
 			error("open ctree failed in %s", target);
 			ret = -EIO;
@@ -2883,7 +2885,8 @@ static int restore_metadump(const char *input, FILE *out, int old_restore,
 
 		if (!info) {
 			root = open_ctree_fd(fileno(out), target, 0,
-					     OPEN_CTREE_ALLOW_TRANSID_MISMATCH);
+					     OPEN_CTREE_ALLOW_TRANSID_MISMATCH |
+					     OPEN_CTREE_SKIP_LEAF_ITEM_CHECKS);
 			if (!root) {
 				error("open ctree failed in %s", target);
 				ret = -EIO;
@@ -3226,7 +3229,8 @@ int BOX_MAIN(image)(int argc, char *argv[])
 		int i;
 
 		ocf.filename = target;
-		ocf.flags = OPEN_CTREE_PARTIAL | OPEN_CTREE_RESTORE;
+		ocf.flags = OPEN_CTREE_PARTIAL | OPEN_CTREE_RESTORE |
+			OPEN_CTREE_SKIP_LEAF_ITEM_CHECKS;
 		info = open_ctree_fs_info(&ocf);
 		if (!info) {
 			error("open ctree failed at %s", target);
diff --git a/include/kerncompat.h b/include/kerncompat.h
index 28e9f443..7472ff75 100644
--- a/include/kerncompat.h
+++ b/include/kerncompat.h
@@ -86,6 +86,7 @@
 #define _RET_IP_ 0
 #define TASK_UNINTERRUPTIBLE 0
 #define SLAB_MEM_SPREAD 0
+#define ALLOW_ERROR_INJECTION(a, b)
 
 #ifndef ULONG_MAX
 #define ULONG_MAX       (~0UL)
@@ -418,6 +419,15 @@ do {					\
 	__ret_warn_on;					\
 })
 
+#define WARN(c, msg...) ({				\
+	int __ret_warn_on = !!(c);			\
+	if (__ret_warn_on)				\
+		printf(msg);				\
+	__ret_warn_on;					\
+})
+
+#define IS_ENABLED(c) 0
+
 #define container_of(ptr, type, member) ({                      \
         const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
 	        (type *)( (char *)__mptr - offsetof(type,member) );})
diff --git a/kernel-shared/ctree.c b/kernel-shared/ctree.c
index 66f44879..d5a1f90b 100644
--- a/kernel-shared/ctree.c
+++ b/kernel-shared/ctree.c
@@ -28,6 +28,7 @@
 #include "kernel-lib/sizes.h"
 #include "kernel-shared/volumes.h"
 #include "check/repair.h"
+#include "tree-checker.h"
 
 static int split_node(struct btrfs_trans_handle *trans, struct btrfs_root
 		      *root, struct btrfs_path *path, int level);
@@ -602,185 +603,6 @@ static inline unsigned int leaf_data_end(const struct extent_buffer *leaf)
 	return btrfs_item_offset(leaf, nr - 1);
 }
 
-static void generic_err(const struct extent_buffer *buf, int slot,
-			const char *fmt, ...)
-{
-	va_list args;
-
-	fprintf(stderr, "corrupt %s: root=%lld block=%llu slot=%d, ",
-		btrfs_header_level(buf) == 0 ? "leaf": "node",
-		btrfs_header_owner(buf), btrfs_header_bytenr(buf), slot);
-	va_start(args, fmt);
-	vfprintf(stderr, fmt, args);
-	va_end(args);
-	fprintf(stderr, "\n");
-}
-
-enum btrfs_tree_block_status __btrfs_check_node(struct extent_buffer *node)
-{
-	struct btrfs_fs_info *fs_info = node->fs_info;
-	unsigned long nr = btrfs_header_nritems(node);
-	struct btrfs_key key, next_key;
-	int slot;
-	int level = btrfs_header_level(node);
-	u64 bytenr;
-	enum btrfs_tree_block_status ret = BTRFS_TREE_BLOCK_INVALID_NRITEMS;
-
-	if (level <= 0 || level >= BTRFS_MAX_LEVEL) {
-		generic_err(node, 0,
-			"invalid level for node, have %d expect [1, %d]",
-			level, BTRFS_MAX_LEVEL - 1);
-		ret = BTRFS_TREE_BLOCK_INVALID_LEVEL;
-		goto fail;
-	}
-	if (nr == 0 || nr > BTRFS_NODEPTRS_PER_BLOCK(fs_info)) {
-		generic_err(node, 0,
-"corrupt node: root=%llu block=%llu, nritems too %s, have %lu expect range [1,%u]",
-			   btrfs_header_owner(node), node->start,
-			   nr == 0 ? "small" : "large", nr,
-			   BTRFS_NODEPTRS_PER_BLOCK(fs_info));
-		ret = BTRFS_TREE_BLOCK_INVALID_NRITEMS;
-		goto fail;
-	}
-
-	for (slot = 0; slot < nr - 1; slot++) {
-		bytenr = btrfs_node_blockptr(node, slot);
-		btrfs_node_key_to_cpu(node, &key, slot);
-		btrfs_node_key_to_cpu(node, &next_key, slot + 1);
-
-		if (!bytenr) {
-			generic_err(node, slot,
-				"invalid NULL node pointer");
-			ret = BTRFS_TREE_BLOCK_INVALID_BLOCKPTR;
-			goto fail;
-		}
-		if (!IS_ALIGNED(bytenr, fs_info->sectorsize)) {
-			generic_err(node, slot,
-			"unaligned pointer, have %llu should be aligned to %u",
-				bytenr, fs_info->sectorsize);
-			ret = BTRFS_TREE_BLOCK_INVALID_BLOCKPTR;
-			goto fail;
-		}
-
-		if (btrfs_comp_cpu_keys(&key, &next_key) >= 0) {
-			generic_err(node, slot,
-	"bad key order, current (%llu %u %llu) next (%llu %u %llu)",
-				key.objectid, key.type, key.offset,
-				next_key.objectid, next_key.type,
-				next_key.offset);
-			ret = BTRFS_TREE_BLOCK_BAD_KEY_ORDER;
-			goto fail;
-		}
-	}
-	ret = BTRFS_TREE_BLOCK_CLEAN;
-fail:
-	return ret;
-}
-
-enum btrfs_tree_block_status __btrfs_check_leaf(struct extent_buffer *leaf)
-{
-	struct btrfs_fs_info *fs_info = leaf->fs_info;
-	/* No valid key type is 0, so all key should be larger than this key */
-	struct btrfs_key prev_key = {0, 0, 0};
-	struct btrfs_key key;
-	u32 nritems = btrfs_header_nritems(leaf);
-	int slot;
-	int ret;
-
-	if (btrfs_header_level(leaf) != 0) {
-		generic_err(leaf, 0,
-			"invalid level for leaf, have %d expect 0",
-			btrfs_header_level(leaf));
-		ret = BTRFS_TREE_BLOCK_INVALID_LEVEL;
-		goto fail;
-	}
-
-	if (nritems == 0)
-		return 0;
-
-	/*
-	 * Check the following things to make sure this is a good leaf, and
-	 * leaf users won't need to bother with similar sanity checks:
-	 *
-	 * 1) key ordering
-	 * 2) item offset and size
-	 *    No overlap, no hole, all inside the leaf.
-	 * 3) item content
-	 *    If possible, do comprehensive sanity check.
-	 *    NOTE: All checks must only rely on the item data itself.
-	 */
-	for (slot = 0; slot < nritems; slot++) {
-		u32 item_end_expected;
-		u64 item_data_end;
-
-		btrfs_item_key_to_cpu(leaf, &key, slot);
-
-		/* Make sure the keys are in the right order */
-		if (btrfs_comp_cpu_keys(&prev_key, &key) >= 0) {
-			generic_err(leaf, slot,
-	"bad key order, prev (%llu %u %llu) current (%llu %u %llu)",
-				prev_key.objectid, prev_key.type,
-				prev_key.offset, key.objectid, key.type,
-				key.offset);
-			ret = BTRFS_TREE_BLOCK_BAD_KEY_ORDER;
-			goto fail;
-		}
-
-		item_data_end = (u64)btrfs_item_offset(leaf, slot) +
-				btrfs_item_size(leaf, slot);
-		/*
-		 * Make sure the offset and ends are right, remember that the
-		 * item data starts at the end of the leaf and grows towards the
-		 * front.
-		 */
-		if (slot == 0)
-			item_end_expected = BTRFS_LEAF_DATA_SIZE(fs_info);
-		else
-			item_end_expected = btrfs_item_offset(leaf,
-								 slot - 1);
-		if (item_data_end != item_end_expected) {
-			generic_err(leaf, slot,
-				"unexpected item end, have %llu expect %u",
-				item_data_end, item_end_expected);
-			ret = BTRFS_TREE_BLOCK_INVALID_OFFSETS;
-			goto fail;
-		}
-
-		/*
-		 * Check to make sure that we don't point outside of the leaf,
-		 * just in case all the items are consistent to each other, but
-		 * all point outside of the leaf.
-		 */
-		if (item_data_end > BTRFS_LEAF_DATA_SIZE(fs_info)) {
-			generic_err(leaf, slot,
-			"slot end outside of leaf, have %llu expect range [0, %u]",
-				item_data_end, BTRFS_LEAF_DATA_SIZE(fs_info));
-			ret = BTRFS_TREE_BLOCK_INVALID_OFFSETS;
-			goto fail;
-		}
-
-		/* Also check if the item pointer overlaps with btrfs item. */
-		if (btrfs_item_ptr_offset(leaf, slot) <
-		    btrfs_item_nr_offset(leaf, slot) + sizeof(struct btrfs_item)) {
-			generic_err(leaf, slot,
-		"slot overlaps with its data, item end %lu data start %lu",
-				btrfs_item_nr_offset(leaf, slot) +
-				sizeof(struct btrfs_item),
-				btrfs_item_ptr_offset(leaf, slot));
-			ret = BTRFS_TREE_BLOCK_INVALID_OFFSETS;
-			goto fail;
-		}
-
-		prev_key.objectid = key.objectid;
-		prev_key.type = key.type;
-		prev_key.offset = key.offset;
-	}
-
-	ret = BTRFS_TREE_BLOCK_CLEAN;
-fail:
-	return ret;
-}
-
 static int noinline check_block(struct btrfs_fs_info *fs_info,
 				struct btrfs_path *path, int level)
 {
diff --git a/kernel-shared/ctree.h b/kernel-shared/ctree.h
index 237f530d..5eba9c14 100644
--- a/kernel-shared/ctree.h
+++ b/kernel-shared/ctree.h
@@ -185,17 +185,6 @@ struct btrfs_path {
 					sizeof(struct btrfs_item))
 #define BTRFS_MAX_EXTENT_SIZE		128UL * 1024 * 1024
 
-enum btrfs_tree_block_status {
-	BTRFS_TREE_BLOCK_CLEAN,
-	BTRFS_TREE_BLOCK_INVALID_NRITEMS,
-	BTRFS_TREE_BLOCK_INVALID_PARENT_KEY,
-	BTRFS_TREE_BLOCK_BAD_KEY_ORDER,
-	BTRFS_TREE_BLOCK_INVALID_LEVEL,
-	BTRFS_TREE_BLOCK_INVALID_FREE_SPACE,
-	BTRFS_TREE_BLOCK_INVALID_OFFSETS,
-	BTRFS_TREE_BLOCK_INVALID_BLOCKPTR,
-};
-
 /*
  * We don't want to overwrite 1M at the beginning of device, even though
  * there is our 1st superblock at 64k. Some possible reasons:
@@ -373,6 +362,7 @@ struct btrfs_fs_info {
 	unsigned int finalize_on_close:1;
 	unsigned int hide_names:1;
 	unsigned int allow_transid_mismatch:1;
+	unsigned int skip_leaf_item_checks:1;
 
 	int transaction_aborted;
 	int force_csum_type;
@@ -958,8 +948,6 @@ int btrfs_convert_one_bg(struct btrfs_trans_handle *trans, u64 bytenr);
 int btrfs_comp_cpu_keys(const struct btrfs_key *k1, const struct btrfs_key *k2);
 int btrfs_del_ptr(struct btrfs_root *root, struct btrfs_path *path,
 		int level, int slot);
-enum btrfs_tree_block_status __btrfs_check_node(struct extent_buffer *buf);
-enum btrfs_tree_block_status __btrfs_check_leaf(struct extent_buffer *buf);
 struct extent_buffer *read_node_slot(struct btrfs_fs_info *fs_info,
 				   struct extent_buffer *parent, int slot);
 int btrfs_previous_item(struct btrfs_root *root,
diff --git a/kernel-shared/disk-io.c b/kernel-shared/disk-io.c
index 4950c685..536b7119 100644
--- a/kernel-shared/disk-io.c
+++ b/kernel-shared/disk-io.c
@@ -37,6 +37,7 @@
 #include "common/device-scan.h"
 #include "common/device-utils.h"
 #include "crypto/hash.h"
+#include "tree-checker.h"
 
 /* specified errno for check_tree_block */
 #define BTRFS_BAD_BYTENR		(-1)
@@ -1503,6 +1504,8 @@ static struct btrfs_fs_info *__open_ctree_fd(int fp, struct open_ctree_flags *oc
 		fs_info->hide_names = 1;
 	if (flags & OPEN_CTREE_ALLOW_TRANSID_MISMATCH)
 		fs_info->allow_transid_mismatch = 1;
+	if (flags & OPEN_CTREE_SKIP_LEAF_ITEM_CHECKS)
+		fs_info->skip_leaf_item_checks = 1;
 
 	if ((flags & OPEN_CTREE_RECOVER_SUPER)
 	     && (flags & OPEN_CTREE_TEMPORARY_SUPER)) {
diff --git a/kernel-shared/disk-io.h b/kernel-shared/disk-io.h
index 4c63a4a8..6baa4a80 100644
--- a/kernel-shared/disk-io.h
+++ b/kernel-shared/disk-io.h
@@ -98,6 +98,12 @@ enum btrfs_open_ctree_flags {
 	 * stored in the csum tree during conversion.
 	 */
 	OPEN_CTREE_SKIP_CSUM_CHECK	= (1U << 16),
+
+	/*
+	 * Allow certain commands like check/restore to ignore more structure
+	 * specific checks and only do the superficial checks.
+	 */
+	OPEN_CTREE_SKIP_LEAF_ITEM_CHECKS	= (1U << 17),
 };
 
 /*
diff --git a/kernel-shared/tree-checker.c b/kernel-shared/tree-checker.c
new file mode 100644
index 00000000..4f38942a
--- /dev/null
+++ b/kernel-shared/tree-checker.c
@@ -0,0 +1,2064 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) Qu Wenruo 2017.  All rights reserved.
+ */
+
+/*
+ * The module is used to catch unexpected/corrupted tree block data.
+ * Such behavior can be caused either by a fuzzed image or bugs.
+ *
+ * The objective is to do leaf/node validation checks when tree block is read
+ * from disk, and check *every* possible member, so other code won't
+ * need to check them again.
+ *
+ * Due to the potential and unwanted damage, every checker needs to be
+ * carefully reviewed, otherwise it may prevent the mount of valid images.
+ */
+
+#include "kerncompat.h"
+#include "kernel-lib/overflow.h"
+#include "kernel-lib/bitops.h"
+#include "common/internal.h"
+#include <sys/stat.h>
+#include <linux/types.h>
+#include <linux/limits.h>
+#include "messages.h"
+#include "ctree.h"
+#include "tree-checker.h"
+#include "disk-io.h"
+#include "compression.h"
+#include "volumes.h"
+#include "misc.h"
+#include "accessors.h"
+#include "file-item.h"
+
+/*
+ * btrfs_inode_item stores flags in a u64, btrfs_inode stores them in two
+ * separate u32s. These two functions convert between the two representations.
+ *
+ * MODIFIED:
+ *  - Declared these here since this is the only place they're used currently.
+ */
+static inline u64 btrfs_inode_combine_flags(u32 flags, u32 ro_flags)
+{
+	return (flags | ((u64)ro_flags << 32));
+}
+
+static inline void btrfs_inode_split_flags(u64 inode_item_flags,
+					   u32 *flags, u32 *ro_flags)
+{
+	*flags = (u32)inode_item_flags;
+	*ro_flags = (u32)(inode_item_flags >> 32);
+}
+
+/*
+ * Error message should follow the following format:
+ * corrupt <type>: <identifier>, <reason>[, <bad_value>]
+ *
+ * @type:	leaf or node
+ * @identifier:	the necessary info to locate the leaf/node.
+ * 		It's recommended to decode key.objectid/offset if it's
+ * 		meaningful.
+ * @reason:	describe the error
+ * @bad_value:	optional, it's recommended to output bad value and its
+ *		expected value (range).
+ *
+ * Since comma is used to separate the components, only space is allowed
+ * inside each component.
+ */
+
+/*
+ * Append generic "corrupt leaf/node root=%llu block=%llu slot=%d: " to @fmt.
+ * Allows callers to customize the output.
+ */
+__printf(3, 4)
+__cold
+static void generic_err(const struct extent_buffer *eb, int slot,
+			const char *fmt, ...)
+{
+	const struct btrfs_fs_info *fs_info = eb->fs_info;
+	struct va_format vaf;
+	va_list args;
+
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	btrfs_crit(fs_info,
+		"corrupt %s: root=%llu block=%llu slot=%d, %pV",
+		btrfs_header_level(eb) == 0 ? "leaf" : "node",
+		btrfs_header_owner(eb), btrfs_header_bytenr(eb), slot, &vaf);
+	va_end(args);
+}
+
+/*
+ * Customized reporter for extent data item, since its key objectid and
+ * offset has its own meaning.
+ */
+__printf(3, 4)
+__cold
+static void file_extent_err(const struct extent_buffer *eb, int slot,
+			    const char *fmt, ...)
+{
+	const struct btrfs_fs_info *fs_info = eb->fs_info;
+	struct btrfs_key key;
+	struct va_format vaf;
+	va_list args;
+
+	btrfs_item_key_to_cpu(eb, &key, slot);
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	btrfs_crit(fs_info,
+	"corrupt %s: root=%llu block=%llu slot=%d ino=%llu file_offset=%llu, %pV",
+		btrfs_header_level(eb) == 0 ? "leaf" : "node",
+		btrfs_header_owner(eb), btrfs_header_bytenr(eb), slot,
+		key.objectid, key.offset, &vaf);
+	va_end(args);
+}
+
+/*
+ * Return 0 if the btrfs_file_extent_##name is aligned to @alignment
+ * Else return 1
+ */
+#define CHECK_FE_ALIGNED(leaf, slot, fi, name, alignment)		      \
+({									      \
+	if (unlikely(!IS_ALIGNED(btrfs_file_extent_##name((leaf), (fi)),      \
+				 (alignment))))				      \
+		file_extent_err((leaf), (slot),				      \
+	"invalid %s for file extent, have %llu, should be aligned to %u",     \
+			(#name), btrfs_file_extent_##name((leaf), (fi)),      \
+			(alignment));					      \
+	(!IS_ALIGNED(btrfs_file_extent_##name((leaf), (fi)), (alignment)));   \
+})
+
+static u64 file_extent_end(struct extent_buffer *leaf,
+			   struct btrfs_key *key,
+			   struct btrfs_file_extent_item *extent)
+{
+	u64 end;
+	u64 len;
+
+	if (btrfs_file_extent_type(leaf, extent) == BTRFS_FILE_EXTENT_INLINE) {
+		len = btrfs_file_extent_ram_bytes(leaf, extent);
+		end = ALIGN(key->offset + len, leaf->fs_info->sectorsize);
+	} else {
+		len = btrfs_file_extent_num_bytes(leaf, extent);
+		end = key->offset + len;
+	}
+	return end;
+}
+
+/*
+ * Customized report for dir_item, the only new important information is
+ * key->objectid, which represents inode number
+ */
+__printf(3, 4)
+__cold
+static void dir_item_err(const struct extent_buffer *eb, int slot,
+			 const char *fmt, ...)
+{
+	const struct btrfs_fs_info *fs_info = eb->fs_info;
+	struct btrfs_key key;
+	struct va_format vaf;
+	va_list args;
+
+	btrfs_item_key_to_cpu(eb, &key, slot);
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	btrfs_crit(fs_info,
+		"corrupt %s: root=%llu block=%llu slot=%d ino=%llu, %pV",
+		btrfs_header_level(eb) == 0 ? "leaf" : "node",
+		btrfs_header_owner(eb), btrfs_header_bytenr(eb), slot,
+		key.objectid, &vaf);
+	va_end(args);
+}
+
+/*
+ * This function checks prev_key->objectid, to ensure current key and prev_key
+ * share the same objectid as inode number.
+ *
+ * This is to detect missing INODE_ITEM in subvolume trees.
+ *
+ * Return true if everything is OK or we don't need to check.
+ * Return false if anything is wrong.
+ */
+static bool check_prev_ino(struct extent_buffer *leaf,
+			   struct btrfs_key *key, int slot,
+			   struct btrfs_key *prev_key)
+{
+	/* No prev key, skip check */
+	if (slot == 0)
+		return true;
+
+	/* Only these key->types needs to be checked */
+	ASSERT(key->type == BTRFS_XATTR_ITEM_KEY ||
+	       key->type == BTRFS_INODE_REF_KEY ||
+	       key->type == BTRFS_DIR_INDEX_KEY ||
+	       key->type == BTRFS_DIR_ITEM_KEY ||
+	       key->type == BTRFS_EXTENT_DATA_KEY);
+
+	/*
+	 * Only subvolume trees along with their reloc trees need this check.
+	 * Things like log tree doesn't follow this ino requirement.
+	 */
+	if (!is_fstree(btrfs_header_owner(leaf)))
+		return true;
+
+	if (key->objectid == prev_key->objectid)
+		return true;
+
+	/* Error found */
+	dir_item_err(leaf, slot,
+		"invalid previous key objectid, have %llu expect %llu",
+		prev_key->objectid, key->objectid);
+	return false;
+}
+static int check_extent_data_item(struct extent_buffer *leaf,
+				  struct btrfs_key *key, int slot,
+				  struct btrfs_key *prev_key)
+{
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
+	struct btrfs_file_extent_item *fi;
+	u32 sectorsize = fs_info->sectorsize;
+	u32 item_size = btrfs_item_size(leaf, slot);
+	u64 extent_end;
+
+	if (unlikely(!IS_ALIGNED(key->offset, sectorsize))) {
+		file_extent_err(leaf, slot,
+"unaligned file_offset for file extent, have %llu should be aligned to %u",
+			key->offset, sectorsize);
+		return -EUCLEAN;
+	}
+
+	/*
+	 * Previous key must have the same key->objectid (ino).
+	 * It can be XATTR_ITEM, INODE_ITEM or just another EXTENT_DATA.
+	 * But if objectids mismatch, it means we have a missing
+	 * INODE_ITEM.
+	 */
+	if (unlikely(!check_prev_ino(leaf, key, slot, prev_key)))
+		return -EUCLEAN;
+
+	fi = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);
+
+	/*
+	 * Make sure the item contains at least inline header, so the file
+	 * extent type is not some garbage.
+	 */
+	if (unlikely(item_size < BTRFS_FILE_EXTENT_INLINE_DATA_START)) {
+		file_extent_err(leaf, slot,
+				"invalid item size, have %u expect [%zu, %u)",
+				item_size, BTRFS_FILE_EXTENT_INLINE_DATA_START,
+				SZ_4K);
+		return -EUCLEAN;
+	}
+	if (unlikely(btrfs_file_extent_type(leaf, fi) >=
+		     BTRFS_NR_FILE_EXTENT_TYPES)) {
+		file_extent_err(leaf, slot,
+		"invalid type for file extent, have %u expect range [0, %u]",
+			btrfs_file_extent_type(leaf, fi),
+			BTRFS_NR_FILE_EXTENT_TYPES - 1);
+		return -EUCLEAN;
+	}
+
+	/*
+	 * Support for new compression/encryption must introduce incompat flag,
+	 * and must be caught in open_ctree().
+	 */
+	if (unlikely(btrfs_file_extent_compression(leaf, fi) >=
+		     BTRFS_NR_COMPRESS_TYPES)) {
+		file_extent_err(leaf, slot,
+	"invalid compression for file extent, have %u expect range [0, %u]",
+			btrfs_file_extent_compression(leaf, fi),
+			BTRFS_NR_COMPRESS_TYPES - 1);
+		return -EUCLEAN;
+	}
+	if (unlikely(btrfs_file_extent_encryption(leaf, fi))) {
+		file_extent_err(leaf, slot,
+			"invalid encryption for file extent, have %u expect 0",
+			btrfs_file_extent_encryption(leaf, fi));
+		return -EUCLEAN;
+	}
+	if (btrfs_file_extent_type(leaf, fi) == BTRFS_FILE_EXTENT_INLINE) {
+		/* Inline extent must have 0 as key offset */
+		if (unlikely(key->offset)) {
+			file_extent_err(leaf, slot,
+		"invalid file_offset for inline file extent, have %llu expect 0",
+				key->offset);
+			return -EUCLEAN;
+		}
+
+		/* Compressed inline extent has no on-disk size, skip it */
+		if (btrfs_file_extent_compression(leaf, fi) !=
+		    BTRFS_COMPRESS_NONE)
+			return 0;
+
+		/* Uncompressed inline extent size must match item size */
+		if (unlikely(item_size != BTRFS_FILE_EXTENT_INLINE_DATA_START +
+					  btrfs_file_extent_ram_bytes(leaf, fi))) {
+			file_extent_err(leaf, slot,
+	"invalid ram_bytes for uncompressed inline extent, have %u expect %llu",
+				item_size, BTRFS_FILE_EXTENT_INLINE_DATA_START +
+				btrfs_file_extent_ram_bytes(leaf, fi));
+			return -EUCLEAN;
+		}
+		return 0;
+	}
+
+	/* Regular or preallocated extent has fixed item size */
+	if (unlikely(item_size != sizeof(*fi))) {
+		file_extent_err(leaf, slot,
+	"invalid item size for reg/prealloc file extent, have %u expect %zu",
+			item_size, sizeof(*fi));
+		return -EUCLEAN;
+	}
+	if (unlikely(CHECK_FE_ALIGNED(leaf, slot, fi, ram_bytes, sectorsize) ||
+		     CHECK_FE_ALIGNED(leaf, slot, fi, disk_bytenr, sectorsize) ||
+		     CHECK_FE_ALIGNED(leaf, slot, fi, disk_num_bytes, sectorsize) ||
+		     CHECK_FE_ALIGNED(leaf, slot, fi, offset, sectorsize) ||
+		     CHECK_FE_ALIGNED(leaf, slot, fi, num_bytes, sectorsize)))
+		return -EUCLEAN;
+
+	/* Catch extent end overflow */
+	if (unlikely(check_add_overflow(btrfs_file_extent_num_bytes(leaf, fi),
+					key->offset, &extent_end))) {
+		file_extent_err(leaf, slot,
+	"extent end overflow, have file offset %llu extent num bytes %llu",
+				key->offset,
+				btrfs_file_extent_num_bytes(leaf, fi));
+		return -EUCLEAN;
+	}
+
+	/*
+	 * Check that no two consecutive file extent items, in the same leaf,
+	 * present ranges that overlap each other.
+	 */
+	if (slot > 0 &&
+	    prev_key->objectid == key->objectid &&
+	    prev_key->type == BTRFS_EXTENT_DATA_KEY) {
+		struct btrfs_file_extent_item *prev_fi;
+		u64 prev_end;
+
+		prev_fi = btrfs_item_ptr(leaf, slot - 1,
+					 struct btrfs_file_extent_item);
+		prev_end = file_extent_end(leaf, prev_key, prev_fi);
+		if (unlikely(prev_end > key->offset)) {
+			file_extent_err(leaf, slot - 1,
+"file extent end range (%llu) goes beyond start offset (%llu) of the next file extent",
+					prev_end, key->offset);
+			return -EUCLEAN;
+		}
+	}
+
+	return 0;
+}
+
+static int check_csum_item(struct extent_buffer *leaf, struct btrfs_key *key,
+			   int slot, struct btrfs_key *prev_key)
+{
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
+	u32 sectorsize = fs_info->sectorsize;
+	const u32 csumsize = fs_info->csum_size;
+
+	if (unlikely(key->objectid != BTRFS_EXTENT_CSUM_OBJECTID)) {
+		generic_err(leaf, slot,
+		"invalid key objectid for csum item, have %llu expect %llu",
+			key->objectid, BTRFS_EXTENT_CSUM_OBJECTID);
+		return -EUCLEAN;
+	}
+	if (unlikely(!IS_ALIGNED(key->offset, sectorsize))) {
+		generic_err(leaf, slot,
+	"unaligned key offset for csum item, have %llu should be aligned to %u",
+			key->offset, sectorsize);
+		return -EUCLEAN;
+	}
+	if (unlikely(!IS_ALIGNED(btrfs_item_size(leaf, slot), csumsize))) {
+		generic_err(leaf, slot,
+	"unaligned item size for csum item, have %u should be aligned to %u",
+			btrfs_item_size(leaf, slot), csumsize);
+		return -EUCLEAN;
+	}
+	if (slot > 0 && prev_key->type == BTRFS_EXTENT_CSUM_KEY) {
+		u64 prev_csum_end;
+		u32 prev_item_size;
+
+		prev_item_size = btrfs_item_size(leaf, slot - 1);
+		prev_csum_end = (prev_item_size / csumsize) * sectorsize;
+		prev_csum_end += prev_key->offset;
+		if (unlikely(prev_csum_end > key->offset)) {
+			generic_err(leaf, slot - 1,
+"csum end range (%llu) goes beyond the start range (%llu) of the next csum item",
+				    prev_csum_end, key->offset);
+			return -EUCLEAN;
+		}
+	}
+	return 0;
+}
+
+/* Inode item error output has the same format as dir_item_err() */
+#define inode_item_err(eb, slot, fmt, ...)			\
+	dir_item_err(eb, slot, fmt, __VA_ARGS__)
+
+static int check_inode_key(struct extent_buffer *leaf, struct btrfs_key *key,
+			   int slot)
+{
+	struct btrfs_key item_key;
+	bool is_inode_item;
+
+	btrfs_item_key_to_cpu(leaf, &item_key, slot);
+	is_inode_item = (item_key.type == BTRFS_INODE_ITEM_KEY);
+
+	/* For XATTR_ITEM, location key should be all 0 */
+	if (item_key.type == BTRFS_XATTR_ITEM_KEY) {
+		if (unlikely(key->objectid != 0 || key->type != 0 ||
+			     key->offset != 0))
+			return -EUCLEAN;
+		return 0;
+	}
+
+	if (unlikely((key->objectid < BTRFS_FIRST_FREE_OBJECTID ||
+		      key->objectid > BTRFS_LAST_FREE_OBJECTID) &&
+		     key->objectid != BTRFS_ROOT_TREE_DIR_OBJECTID &&
+		     key->objectid != BTRFS_FREE_INO_OBJECTID)) {
+		if (is_inode_item) {
+			generic_err(leaf, slot,
+	"invalid key objectid: has %llu expect %llu or [%llu, %llu] or %llu",
+				key->objectid, BTRFS_ROOT_TREE_DIR_OBJECTID,
+				BTRFS_FIRST_FREE_OBJECTID,
+				BTRFS_LAST_FREE_OBJECTID,
+				BTRFS_FREE_INO_OBJECTID);
+		} else {
+			dir_item_err(leaf, slot,
+"invalid location key objectid: has %llu expect %llu or [%llu, %llu] or %llu",
+				key->objectid, BTRFS_ROOT_TREE_DIR_OBJECTID,
+				BTRFS_FIRST_FREE_OBJECTID,
+				BTRFS_LAST_FREE_OBJECTID,
+				BTRFS_FREE_INO_OBJECTID);
+		}
+		return -EUCLEAN;
+	}
+	if (unlikely(key->offset != 0)) {
+		if (is_inode_item)
+			inode_item_err(leaf, slot,
+				       "invalid key offset: has %llu expect 0",
+				       key->offset);
+		else
+			dir_item_err(leaf, slot,
+				"invalid location key offset:has %llu expect 0",
+				key->offset);
+		return -EUCLEAN;
+	}
+	return 0;
+}
+
+static int check_root_key(struct extent_buffer *leaf, struct btrfs_key *key,
+			  int slot)
+{
+	struct btrfs_key item_key;
+	bool is_root_item;
+
+	btrfs_item_key_to_cpu(leaf, &item_key, slot);
+	is_root_item = (item_key.type == BTRFS_ROOT_ITEM_KEY);
+
+	/* No such tree id */
+	if (unlikely(key->objectid == 0)) {
+		if (is_root_item)
+			generic_err(leaf, slot, "invalid root id 0");
+		else
+			dir_item_err(leaf, slot,
+				     "invalid location key root id 0");
+		return -EUCLEAN;
+	}
+
+	/* DIR_ITEM/INDEX/INODE_REF is not allowed to point to non-fs trees */
+	if (unlikely(!is_fstree(key->objectid) && !is_root_item)) {
+		dir_item_err(leaf, slot,
+		"invalid location key objectid, have %llu expect [%llu, %llu]",
+				key->objectid, BTRFS_FIRST_FREE_OBJECTID,
+				BTRFS_LAST_FREE_OBJECTID);
+		return -EUCLEAN;
+	}
+
+	/*
+	 * ROOT_ITEM with non-zero offset means this is a snapshot, created at
+	 * @offset transid.
+	 * Furthermore, for location key in DIR_ITEM, its offset is always -1.
+	 *
+	 * So here we only check offset for reloc tree whose key->offset must
+	 * be a valid tree.
+	 */
+	if (unlikely(key->objectid == BTRFS_TREE_RELOC_OBJECTID &&
+		     key->offset == 0)) {
+		generic_err(leaf, slot, "invalid root id 0 for reloc tree");
+		return -EUCLEAN;
+	}
+	return 0;
+}
+
+static int check_dir_item(struct extent_buffer *leaf,
+			  struct btrfs_key *key, struct btrfs_key *prev_key,
+			  int slot)
+{
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
+	struct btrfs_dir_item *di;
+	u32 item_size = btrfs_item_size(leaf, slot);
+	u32 cur = 0;
+
+	if (unlikely(!check_prev_ino(leaf, key, slot, prev_key)))
+		return -EUCLEAN;
+
+	di = btrfs_item_ptr(leaf, slot, struct btrfs_dir_item);
+	while (cur < item_size) {
+		struct btrfs_key location_key;
+		u32 name_len;
+		u32 data_len;
+		u32 max_name_len;
+		u32 total_size;
+		u32 name_hash;
+		u8 dir_type;
+		int ret;
+
+		/* header itself should not cross item boundary */
+		if (unlikely(cur + sizeof(*di) > item_size)) {
+			dir_item_err(leaf, slot,
+		"dir item header crosses item boundary, have %zu boundary %u",
+				cur + sizeof(*di), item_size);
+			return -EUCLEAN;
+		}
+
+		/* Location key check */
+		btrfs_dir_item_key_to_cpu(leaf, di, &location_key);
+		if (location_key.type == BTRFS_ROOT_ITEM_KEY) {
+			ret = check_root_key(leaf, &location_key, slot);
+			if (unlikely(ret < 0))
+				return ret;
+		} else if (location_key.type == BTRFS_INODE_ITEM_KEY ||
+			   location_key.type == 0) {
+			ret = check_inode_key(leaf, &location_key, slot);
+			if (unlikely(ret < 0))
+				return ret;
+		} else {
+			dir_item_err(leaf, slot,
+			"invalid location key type, have %u, expect %u or %u",
+				     location_key.type, BTRFS_ROOT_ITEM_KEY,
+				     BTRFS_INODE_ITEM_KEY);
+			return -EUCLEAN;
+		}
+
+		/* dir type check */
+		dir_type = btrfs_dir_ftype(leaf, di);
+		if (unlikely(dir_type >= BTRFS_FT_MAX)) {
+			dir_item_err(leaf, slot,
+			"invalid dir item type, have %u expect [0, %u)",
+				dir_type, BTRFS_FT_MAX);
+			return -EUCLEAN;
+		}
+
+		if (unlikely(key->type == BTRFS_XATTR_ITEM_KEY &&
+			     dir_type != BTRFS_FT_XATTR)) {
+			dir_item_err(leaf, slot,
+		"invalid dir item type for XATTR key, have %u expect %u",
+				dir_type, BTRFS_FT_XATTR);
+			return -EUCLEAN;
+		}
+		if (unlikely(dir_type == BTRFS_FT_XATTR &&
+			     key->type != BTRFS_XATTR_ITEM_KEY)) {
+			dir_item_err(leaf, slot,
+			"xattr dir type found for non-XATTR key");
+			return -EUCLEAN;
+		}
+		if (dir_type == BTRFS_FT_XATTR)
+			max_name_len = XATTR_NAME_MAX;
+		else
+			max_name_len = BTRFS_NAME_LEN;
+
+		/* Name/data length check */
+		name_len = btrfs_dir_name_len(leaf, di);
+		data_len = btrfs_dir_data_len(leaf, di);
+		if (unlikely(name_len > max_name_len)) {
+			dir_item_err(leaf, slot,
+			"dir item name len too long, have %u max %u",
+				name_len, max_name_len);
+			return -EUCLEAN;
+		}
+		if (unlikely(name_len + data_len > BTRFS_MAX_XATTR_SIZE(fs_info))) {
+			dir_item_err(leaf, slot,
+			"dir item name and data len too long, have %u max %u",
+				name_len + data_len,
+				BTRFS_MAX_XATTR_SIZE(fs_info));
+			return -EUCLEAN;
+		}
+
+		if (unlikely(data_len && dir_type != BTRFS_FT_XATTR)) {
+			dir_item_err(leaf, slot,
+			"dir item with invalid data len, have %u expect 0",
+				data_len);
+			return -EUCLEAN;
+		}
+
+		total_size = sizeof(*di) + name_len + data_len;
+
+		/* header and name/data should not cross item boundary */
+		if (unlikely(cur + total_size > item_size)) {
+			dir_item_err(leaf, slot,
+		"dir item data crosses item boundary, have %u boundary %u",
+				cur + total_size, item_size);
+			return -EUCLEAN;
+		}
+
+		/*
+		 * Special check for XATTR/DIR_ITEM, as key->offset is name
+		 * hash, should match its name
+		 */
+		if (key->type == BTRFS_DIR_ITEM_KEY ||
+		    key->type == BTRFS_XATTR_ITEM_KEY) {
+			char namebuf[max(BTRFS_NAME_LEN, XATTR_NAME_MAX)];
+
+			read_extent_buffer(leaf, namebuf,
+					(unsigned long)(di + 1), name_len);
+			name_hash = btrfs_name_hash(namebuf, name_len);
+			if (unlikely(key->offset != name_hash)) {
+				dir_item_err(leaf, slot,
+		"name hash mismatch with key, have 0x%016x expect 0x%016llx",
+					name_hash, key->offset);
+				return -EUCLEAN;
+			}
+		}
+		cur += total_size;
+		di = (struct btrfs_dir_item *)((void *)di + total_size);
+	}
+	return 0;
+}
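+
+/*
+ * Illustrative sketch (not part of the synced kernel code): the name hash
+ * relation verified above means a DIR_ITEM lookup key can be derived from
+ * the parent inode number and the entry name alone.  The helper name is
+ * hypothetical.
+ */
+static inline void example_dir_item_key(u64 parent_ino, const char *name,
+					int name_len, struct btrfs_key *key)
+{
+	key->objectid = parent_ino;
+	key->type = BTRFS_DIR_ITEM_KEY;
+	/* Must match the btrfs_name_hash() comparison in check_dir_item() */
+	key->offset = btrfs_name_hash(name, name_len);
+}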
+
+__printf(3, 4)
+__cold
+static void block_group_err(const struct extent_buffer *eb, int slot,
+			    const char *fmt, ...)
+{
+	const struct btrfs_fs_info *fs_info = eb->fs_info;
+	struct btrfs_key key;
+	struct va_format vaf;
+	va_list args;
+
+	btrfs_item_key_to_cpu(eb, &key, slot);
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	btrfs_crit(fs_info,
+	"corrupt %s: root=%llu block=%llu slot=%d bg_start=%llu bg_len=%llu, %pV",
+		btrfs_header_level(eb) == 0 ? "leaf" : "node",
+		btrfs_header_owner(eb), btrfs_header_bytenr(eb), slot,
+		key.objectid, key.offset, &vaf);
+	va_end(args);
+}
+
+static int check_block_group_item(struct extent_buffer *leaf,
+				  struct btrfs_key *key, int slot)
+{
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
+	struct btrfs_block_group_item bgi;
+	u32 item_size = btrfs_item_size(leaf, slot);
+	u64 chunk_objectid;
+	u64 flags;
+	u64 type;
+
+	/*
+	 * Here we don't really care about alignment since extent allocator can
+	 * handle it.  We care more about the size.
+	 */
+	if (unlikely(key->offset == 0)) {
+		block_group_err(leaf, slot,
+				"invalid block group size 0");
+		return -EUCLEAN;
+	}
+
+	if (unlikely(item_size != sizeof(bgi))) {
+		block_group_err(leaf, slot,
+			"invalid item size, have %u expect %zu",
+				item_size, sizeof(bgi));
+		return -EUCLEAN;
+	}
+
+	read_extent_buffer(leaf, &bgi, btrfs_item_ptr_offset(leaf, slot),
+			   sizeof(bgi));
+	chunk_objectid = btrfs_stack_block_group_chunk_objectid(&bgi);
+	if (btrfs_fs_incompat(fs_info, EXTENT_TREE_V2)) {
+		/*
+		 * We don't init the nr_global_roots until we load the global
+		 * roots, so this could be 0 at mount time.  If it's 0 we'll
+		 * just assume we're fine, and later we'll check against our
+		 * actual value.
+		 */
+		if (unlikely(fs_info->nr_global_roots &&
+			     chunk_objectid >= fs_info->nr_global_roots)) {
+			block_group_err(leaf, slot,
+	"invalid block group global root id, have %llu, needs to be <= %llu",
+					chunk_objectid,
+					fs_info->nr_global_roots);
+			return -EUCLEAN;
+		}
+	} else if (unlikely(chunk_objectid != BTRFS_FIRST_CHUNK_TREE_OBJECTID)) {
+		block_group_err(leaf, slot,
+		"invalid block group chunk objectid, have %llu expect %llu",
+				btrfs_stack_block_group_chunk_objectid(&bgi),
+				BTRFS_FIRST_CHUNK_TREE_OBJECTID);
+		return -EUCLEAN;
+	}
+
+	if (unlikely(btrfs_stack_block_group_used(&bgi) > key->offset)) {
+		block_group_err(leaf, slot,
+			"invalid block group used, have %llu expect [0, %llu)",
+				btrfs_stack_block_group_used(&bgi), key->offset);
+		return -EUCLEAN;
+	}
+
+	flags = btrfs_stack_block_group_flags(&bgi);
+	if (unlikely(hweight64(flags & BTRFS_BLOCK_GROUP_PROFILE_MASK) > 1)) {
+		block_group_err(leaf, slot,
+"invalid profile flags, have 0x%llx (%lu bits set) expect no more than 1 bit set",
+			flags & BTRFS_BLOCK_GROUP_PROFILE_MASK,
+			hweight64(flags & BTRFS_BLOCK_GROUP_PROFILE_MASK));
+		return -EUCLEAN;
+	}
+
+	type = flags & BTRFS_BLOCK_GROUP_TYPE_MASK;
+	if (unlikely(type != BTRFS_BLOCK_GROUP_DATA &&
+		     type != BTRFS_BLOCK_GROUP_METADATA &&
+		     type != BTRFS_BLOCK_GROUP_SYSTEM &&
+		     type != (BTRFS_BLOCK_GROUP_METADATA |
+			      BTRFS_BLOCK_GROUP_DATA))) {
+		block_group_err(leaf, slot,
+"invalid type, have 0x%llx (%lu bits set) expect either 0x%llx, 0x%llx, 0x%llx or 0x%llx",
+			type, hweight64(type),
+			BTRFS_BLOCK_GROUP_DATA, BTRFS_BLOCK_GROUP_METADATA,
+			BTRFS_BLOCK_GROUP_SYSTEM,
+			BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_DATA);
+		return -EUCLEAN;
+	}
+	return 0;
+}
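+
+/*
+ * Illustrative sketch (not part of the synced kernel code): the flag rules
+ * enforced above, restated on their own: at most one profile bit (zero bits
+ * means SINGLE) plus one of the accepted type combinations.
+ */
+static inline bool example_bg_flags_valid(u64 flags)
+{
+	u64 profile = flags & BTRFS_BLOCK_GROUP_PROFILE_MASK;
+	u64 type = flags & BTRFS_BLOCK_GROUP_TYPE_MASK;
+
+	if (hweight64(profile) > 1)
+		return false;
+	return type == BTRFS_BLOCK_GROUP_DATA ||
+	       type == BTRFS_BLOCK_GROUP_METADATA ||
+	       type == BTRFS_BLOCK_GROUP_SYSTEM ||
+	       type == (BTRFS_BLOCK_GROUP_METADATA | BTRFS_BLOCK_GROUP_DATA);
+}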
+
+__printf(4, 5)
+__cold
+static void chunk_err(const struct extent_buffer *leaf,
+		      const struct btrfs_chunk *chunk, u64 logical,
+		      const char *fmt, ...)
+{
+	const struct btrfs_fs_info *fs_info = leaf->fs_info;
+	bool is_sb;
+	struct va_format vaf;
+	va_list args;
+	int i;
+	int slot = -1;
+
+	/* Only superblock eb is able to have such a small offset */
+	is_sb = (leaf->start == BTRFS_SUPER_INFO_OFFSET);
+
+	if (!is_sb) {
+		/*
+		 * Get the slot number by iterating through all slots; this
+		 * provides better readability.
+		 */
+		for (i = 0; i < btrfs_header_nritems(leaf); i++) {
+			if (btrfs_item_ptr_offset(leaf, i) ==
+					(unsigned long)chunk) {
+				slot = i;
+				break;
+			}
+		}
+	}
+	va_start(args, fmt);
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	if (is_sb)
+		btrfs_crit(fs_info,
+		"corrupt superblock syschunk array: chunk_start=%llu, %pV",
+			   logical, &vaf);
+	else
+		btrfs_crit(fs_info,
+	"corrupt leaf: root=%llu block=%llu slot=%d chunk_start=%llu, %pV",
+			   BTRFS_CHUNK_TREE_OBJECTID, leaf->start, slot,
+			   logical, &vaf);
+	va_end(args);
+}
+
+/*
+ * The common chunk check which could also work on super block sys chunk array.
+ *
+ * Return -EUCLEAN if anything is corrupted.
+ * Return 0 if everything is OK.
+ */
+int btrfs_check_chunk_valid(struct extent_buffer *leaf,
+			    struct btrfs_chunk *chunk, u64 logical)
+{
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
+	u64 length;
+	u64 chunk_end;
+	u64 stripe_len;
+	u16 num_stripes;
+	u16 sub_stripes;
+	u64 type;
+	u64 features;
+	bool mixed = false;
+	int raid_index;
+	int nparity;
+	int ncopies;
+
+	length = btrfs_chunk_length(leaf, chunk);
+	stripe_len = btrfs_chunk_stripe_len(leaf, chunk);
+	num_stripes = btrfs_chunk_num_stripes(leaf, chunk);
+	sub_stripes = btrfs_chunk_sub_stripes(leaf, chunk);
+	type = btrfs_chunk_type(leaf, chunk);
+	raid_index = btrfs_bg_flags_to_raid_index(type);
+	ncopies = btrfs_raid_array[raid_index].ncopies;
+	nparity = btrfs_raid_array[raid_index].nparity;
+
+	if (unlikely(!num_stripes)) {
+		chunk_err(leaf, chunk, logical,
+			  "invalid chunk num_stripes, have %u", num_stripes);
+		return -EUCLEAN;
+	}
+	if (unlikely(num_stripes < ncopies)) {
+		chunk_err(leaf, chunk, logical,
+			  "invalid chunk num_stripes < ncopies, have %u < %d",
+			  num_stripes, ncopies);
+		return -EUCLEAN;
+	}
+	if (unlikely(nparity && num_stripes == nparity)) {
+		chunk_err(leaf, chunk, logical,
+			  "invalid chunk num_stripes == nparity, have %u == %d",
+			  num_stripes, nparity);
+		return -EUCLEAN;
+	}
+	if (unlikely(!IS_ALIGNED(logical, fs_info->sectorsize))) {
+		chunk_err(leaf, chunk, logical,
+		"invalid chunk logical, have %llu should aligned to %u",
+			  logical, fs_info->sectorsize);
+		return -EUCLEAN;
+	}
+	if (unlikely(btrfs_chunk_sector_size(leaf, chunk) != fs_info->sectorsize)) {
+		chunk_err(leaf, chunk, logical,
+			  "invalid chunk sectorsize, have %u expect %u",
+			  btrfs_chunk_sector_size(leaf, chunk),
+			  fs_info->sectorsize);
+		return -EUCLEAN;
+	}
+	if (unlikely(!length || !IS_ALIGNED(length, fs_info->sectorsize))) {
+		chunk_err(leaf, chunk, logical,
+			  "invalid chunk length, have %llu", length);
+		return -EUCLEAN;
+	}
+	if (unlikely(check_add_overflow(logical, length, &chunk_end))) {
+		chunk_err(leaf, chunk, logical,
+"invalid chunk logical start and length, have logical start %llu length %llu",
+			  logical, length);
+		return -EUCLEAN;
+	}
+	if (unlikely(!is_power_of_2(stripe_len) || stripe_len != BTRFS_STRIPE_LEN)) {
+		chunk_err(leaf, chunk, logical,
+			  "invalid chunk stripe length: %llu",
+			  stripe_len);
+		return -EUCLEAN;
+	}
+	/*
+	 * We artificially limit the chunk size, so that the number of stripes
+	 * inside a chunk can be fit into a U32.  The current limit (256G) is
+	 * way too large for real world usage anyway, and it's also much larger
+	 * than our existing limit (10G).
+	 *
+	 * Thus it should be a good way to catch obvious bitflips.
+	 */
+	if (unlikely(length >= ((u64)U32_MAX << BTRFS_STRIPE_LEN_SHIFT))) {
+		chunk_err(leaf, chunk, logical,
+			  "chunk length too large: have %llu limit %llu",
+			  length, (u64)U32_MAX << BTRFS_STRIPE_LEN_SHIFT);
+		return -EUCLEAN;
+	}
+	if (unlikely(type & ~(BTRFS_BLOCK_GROUP_TYPE_MASK |
+			      BTRFS_BLOCK_GROUP_PROFILE_MASK))) {
+		chunk_err(leaf, chunk, logical,
+			  "unrecognized chunk type: 0x%llx",
+			  ~(BTRFS_BLOCK_GROUP_TYPE_MASK |
+			    BTRFS_BLOCK_GROUP_PROFILE_MASK) &
+			  btrfs_chunk_type(leaf, chunk));
+		return -EUCLEAN;
+	}
+
+	if (unlikely(!has_single_bit_set(type & BTRFS_BLOCK_GROUP_PROFILE_MASK) &&
+		     (type & BTRFS_BLOCK_GROUP_PROFILE_MASK) != 0)) {
+		chunk_err(leaf, chunk, logical,
+		"invalid chunk profile flag: 0x%llx, expect 0 or 1 bit set",
+			  type & BTRFS_BLOCK_GROUP_PROFILE_MASK);
+		return -EUCLEAN;
+	}
+	if (unlikely((type & BTRFS_BLOCK_GROUP_TYPE_MASK) == 0)) {
+		chunk_err(leaf, chunk, logical,
+	"missing chunk type flag, have 0x%llx one bit must be set in 0x%llx",
+			  type, BTRFS_BLOCK_GROUP_TYPE_MASK);
+		return -EUCLEAN;
+	}
+
+	if (unlikely((type & BTRFS_BLOCK_GROUP_SYSTEM) &&
+		     (type & (BTRFS_BLOCK_GROUP_METADATA |
+			      BTRFS_BLOCK_GROUP_DATA)))) {
+		chunk_err(leaf, chunk, logical,
+			  "system chunk with data or metadata type: 0x%llx",
+			  type);
+		return -EUCLEAN;
+	}
+
+	features = btrfs_super_incompat_flags(fs_info->super_copy);
+	if (features & BTRFS_FEATURE_INCOMPAT_MIXED_GROUPS)
+		mixed = true;
+
+	if (!mixed) {
+		if (unlikely((type & BTRFS_BLOCK_GROUP_METADATA) &&
+			     (type & BTRFS_BLOCK_GROUP_DATA))) {
+			chunk_err(leaf, chunk, logical,
+			"mixed chunk type in non-mixed mode: 0x%llx", type);
+			return -EUCLEAN;
+		}
+	}
+
+	if (unlikely((type & BTRFS_BLOCK_GROUP_RAID10 &&
+		      sub_stripes != btrfs_raid_array[BTRFS_RAID_RAID10].sub_stripes) ||
+		     (type & BTRFS_BLOCK_GROUP_RAID1 &&
+		      num_stripes != btrfs_raid_array[BTRFS_RAID_RAID1].devs_min) ||
+		     (type & BTRFS_BLOCK_GROUP_RAID1C3 &&
+		      num_stripes != btrfs_raid_array[BTRFS_RAID_RAID1C3].devs_min) ||
+		     (type & BTRFS_BLOCK_GROUP_RAID1C4 &&
+		      num_stripes != btrfs_raid_array[BTRFS_RAID_RAID1C4].devs_min) ||
+		     (type & BTRFS_BLOCK_GROUP_RAID5 &&
+		      num_stripes < btrfs_raid_array[BTRFS_RAID_RAID5].devs_min) ||
+		     (type & BTRFS_BLOCK_GROUP_RAID6 &&
+		      num_stripes < btrfs_raid_array[BTRFS_RAID_RAID6].devs_min) ||
+		     (type & BTRFS_BLOCK_GROUP_DUP &&
+		      num_stripes != btrfs_raid_array[BTRFS_RAID_DUP].dev_stripes) ||
+		     ((type & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 &&
+		      num_stripes != btrfs_raid_array[BTRFS_RAID_SINGLE].dev_stripes))) {
+		chunk_err(leaf, chunk, logical,
+			"invalid num_stripes:sub_stripes %u:%u for profile %llu",
+			num_stripes, sub_stripes,
+			type & BTRFS_BLOCK_GROUP_PROFILE_MASK);
+		return -EUCLEAN;
+	}
+
+	return 0;
+}
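+
+/*
+ * Illustrative usage sketch (not part of the synced kernel code): since the
+ * logical start is passed in explicitly, the checker above can also be run
+ * on the superblock sys_chunk_array, where each packed chunk is preceded by
+ * a btrfs_disk_key carrying the logical start in its offset, roughly:
+ *
+ *	btrfs_disk_key_to_cpu(&key, disk_key);
+ *	ret = btrfs_check_chunk_valid(sb_eb, chunk, key.offset);
+ */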
+
+/*
+ * Enhanced version of chunk item checker.
+ *
+ * The common btrfs_check_chunk_valid() doesn't check item size since it needs
+ * to work on super block sys_chunk_array which doesn't have full item ptr.
+ */
+static int check_leaf_chunk_item(struct extent_buffer *leaf,
+				 struct btrfs_chunk *chunk,
+				 struct btrfs_key *key, int slot)
+{
+	int num_stripes;
+
+	if (unlikely(btrfs_item_size(leaf, slot) < sizeof(struct btrfs_chunk))) {
+		chunk_err(leaf, chunk, key->offset,
+			"invalid chunk item size: have %u expect [%zu, %u)",
+			btrfs_item_size(leaf, slot),
+			sizeof(struct btrfs_chunk),
+			BTRFS_LEAF_DATA_SIZE(leaf->fs_info));
+		return -EUCLEAN;
+	}
+
+	num_stripes = btrfs_chunk_num_stripes(leaf, chunk);
+	/* Let btrfs_check_chunk_valid() handle this error type */
+	if (num_stripes == 0)
+		goto out;
+
+	if (unlikely(btrfs_chunk_item_size(num_stripes) !=
+		     btrfs_item_size(leaf, slot))) {
+		chunk_err(leaf, chunk, key->offset,
+			"invalid chunk item size: have %u expect %lu",
+			btrfs_item_size(leaf, slot),
+			btrfs_chunk_item_size(num_stripes));
+		return -EUCLEAN;
+	}
+out:
+	return btrfs_check_chunk_valid(leaf, chunk, key->offset);
+}
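+
+/*
+ * Illustrative sketch (not part of the synced kernel code): the exact size
+ * compared above follows from the on-disk layout, where struct btrfs_chunk
+ * already embeds the first stripe.  Assumes num_stripes >= 1, which the
+ * caller above has already ensured.
+ */
+static inline u32 example_chunk_item_size(u16 num_stripes)
+{
+	return sizeof(struct btrfs_chunk) +
+	       (num_stripes - 1) * sizeof(struct btrfs_stripe);
+}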
+
+__printf(3, 4)
+__cold
+static void dev_item_err(const struct extent_buffer *eb, int slot,
+			 const char *fmt, ...)
+{
+	struct btrfs_key key;
+	struct va_format vaf;
+	va_list args;
+
+	btrfs_item_key_to_cpu(eb, &key, slot);
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	btrfs_crit(eb->fs_info,
+	"corrupt %s: root=%llu block=%llu slot=%d devid=%llu %pV",
+		btrfs_header_level(eb) == 0 ? "leaf" : "node",
+		btrfs_header_owner(eb), btrfs_header_bytenr(eb), slot,
+		key.objectid, &vaf);
+	va_end(args);
+}
+
+static int check_dev_item(struct extent_buffer *leaf,
+			  struct btrfs_key *key, int slot)
+{
+	struct btrfs_dev_item *ditem;
+	const u32 item_size = btrfs_item_size(leaf, slot);
+
+	if (unlikely(key->objectid != BTRFS_DEV_ITEMS_OBJECTID)) {
+		dev_item_err(leaf, slot,
+			     "invalid objectid: has=%llu expect=%llu",
+			     key->objectid, BTRFS_DEV_ITEMS_OBJECTID);
+		return -EUCLEAN;
+	}
+
+	if (unlikely(item_size != sizeof(*ditem))) {
+		dev_item_err(leaf, slot, "invalid item size: has %u expect %zu",
+			     item_size, sizeof(*ditem));
+		return -EUCLEAN;
+	}
+
+	ditem = btrfs_item_ptr(leaf, slot, struct btrfs_dev_item);
+	if (unlikely(btrfs_device_id(leaf, ditem) != key->offset)) {
+		dev_item_err(leaf, slot,
+			     "devid mismatch: key has=%llu item has=%llu",
+			     key->offset, btrfs_device_id(leaf, ditem));
+		return -EUCLEAN;
+	}
+
+	/*
+	 * For device total_bytes, we don't have a reliable way to check it, as
+	 * it can be 0 for device removal. Device size check can only be done
+	 * by dev extents check.
+	 */
+	if (unlikely(btrfs_device_bytes_used(leaf, ditem) >
+		     btrfs_device_total_bytes(leaf, ditem))) {
+		dev_item_err(leaf, slot,
+			     "invalid bytes used: have %llu expect [0, %llu]",
+			     btrfs_device_bytes_used(leaf, ditem),
+			     btrfs_device_total_bytes(leaf, ditem));
+		return -EUCLEAN;
+	}
+	/*
+	 * Remaining members like io_align/type/gen/dev_group aren't really
+	 * utilized.  Skip them to make later usage of them easier.
+	 */
+	return 0;
+}
+
+static int check_inode_item(struct extent_buffer *leaf,
+			    struct btrfs_key *key, int slot)
+{
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
+	struct btrfs_inode_item *iitem;
+	u64 super_gen = btrfs_super_generation(fs_info->super_copy);
+	u32 valid_mask = (S_IFMT | S_ISUID | S_ISGID | S_ISVTX | 0777);
+	const u32 item_size = btrfs_item_size(leaf, slot);
+	u32 mode;
+	int ret;
+	u32 flags;
+	u32 ro_flags;
+
+	ret = check_inode_key(leaf, key, slot);
+	if (unlikely(ret < 0))
+		return ret;
+
+	if (unlikely(item_size != sizeof(*iitem))) {
+		generic_err(leaf, slot, "invalid item size: has %u expect %zu",
+			    item_size, sizeof(*iitem));
+		return -EUCLEAN;
+	}
+
+	iitem = btrfs_item_ptr(leaf, slot, struct btrfs_inode_item);
+
+	/* Here we use super block generation + 1 to handle log tree */
+	if (unlikely(btrfs_inode_generation(leaf, iitem) > super_gen + 1)) {
+		inode_item_err(leaf, slot,
+			"invalid inode generation: has %llu expect (0, %llu]",
+			       btrfs_inode_generation(leaf, iitem),
+			       super_gen + 1);
+		return -EUCLEAN;
+	}
+	/* Note for ROOT_TREE_DIR_ITEM, mkfs could set its transid 0 */
+	if (unlikely(btrfs_inode_transid(leaf, iitem) > super_gen + 1)) {
+		inode_item_err(leaf, slot,
+			"invalid inode transid: has %llu expect [0, %llu]",
+			       btrfs_inode_transid(leaf, iitem), super_gen + 1);
+		return -EUCLEAN;
+	}
+
+	/*
+	 * For size and nbytes it's better not to be too strict, as for a dir
+	 * item its size/nbytes can easily be wrong without affecting
+	 * anything in the fs. So here we skip the check.
+	 */
+	mode = btrfs_inode_mode(leaf, iitem);
+	if (unlikely(mode & ~valid_mask)) {
+		inode_item_err(leaf, slot,
+			       "unknown mode bit detected: 0x%x",
+			       mode & ~valid_mask);
+		return -EUCLEAN;
+	}
+
+	/*
+	 * S_IFMT is not bit mapped so we can't completely rely on
+	 * is_power_of_2/has_single_bit_set, but it can save us from checking
+	 * FIFO/CHR/DIR/REG.  We only need to check BLK, LNK and SOCK.
+	 */
+	if (!has_single_bit_set(mode & S_IFMT)) {
+		if (unlikely(!S_ISLNK(mode) && !S_ISBLK(mode) && !S_ISSOCK(mode))) {
+			inode_item_err(leaf, slot,
+			"invalid mode: has 0%o expect valid S_IF* bit(s)",
+				       mode & S_IFMT);
+			return -EUCLEAN;
+		}
+	}
+	if (unlikely(S_ISDIR(mode) && btrfs_inode_nlink(leaf, iitem) > 1)) {
+		inode_item_err(leaf, slot,
+		       "invalid nlink: has %u expect no more than 1 for dir",
+			btrfs_inode_nlink(leaf, iitem));
+		return -EUCLEAN;
+	}
+	btrfs_inode_split_flags(btrfs_inode_flags(leaf, iitem), &flags, &ro_flags);
+	if (unlikely(flags & ~BTRFS_INODE_FLAG_MASK)) {
+		inode_item_err(leaf, slot,
+			       "unknown incompat flags detected: 0x%x", flags);
+		return -EUCLEAN;
+	}
+	if (unlikely(!sb_rdonly(fs_info->sb) &&
+		     (ro_flags & ~BTRFS_INODE_RO_FLAG_MASK))) {
+		inode_item_err(leaf, slot,
+			"unknown ro-compat flags detected on writeable mount: 0x%x",
+			ro_flags);
+		return -EUCLEAN;
+	}
+	return 0;
+}
+
+static int check_root_item(struct extent_buffer *leaf, struct btrfs_key *key,
+			   int slot)
+{
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
+	struct btrfs_root_item ri = { 0 };
+	const u64 valid_root_flags = BTRFS_ROOT_SUBVOL_RDONLY |
+				     BTRFS_ROOT_SUBVOL_DEAD;
+	int ret;
+
+	ret = check_root_key(leaf, key, slot);
+	if (unlikely(ret < 0))
+		return ret;
+
+	if (unlikely(btrfs_item_size(leaf, slot) != sizeof(ri) &&
+		     btrfs_item_size(leaf, slot) !=
+		     btrfs_legacy_root_item_size())) {
+		generic_err(leaf, slot,
+			    "invalid root item size, have %u expect %zu or %u",
+			    btrfs_item_size(leaf, slot), sizeof(ri),
+			    btrfs_legacy_root_item_size());
+		return -EUCLEAN;
+	}
+
+	/*
+	 * For legacy root item, the members starting at generation_v2 will be
+	 * all filled with 0.
+	 * And since we allow generation_v2 as 0, it will still pass the check.
+	 */
+	read_extent_buffer(leaf, &ri, btrfs_item_ptr_offset(leaf, slot),
+			   btrfs_item_size(leaf, slot));
+
+	/* Generation related */
+	if (unlikely(btrfs_root_generation(&ri) >
+		     btrfs_super_generation(fs_info->super_copy) + 1)) {
+		generic_err(leaf, slot,
+			"invalid root generation, have %llu expect (0, %llu]",
+			    btrfs_root_generation(&ri),
+			    btrfs_super_generation(fs_info->super_copy) + 1);
+		return -EUCLEAN;
+	}
+	if (unlikely(btrfs_root_generation_v2(&ri) >
+		     btrfs_super_generation(fs_info->super_copy) + 1)) {
+		generic_err(leaf, slot,
+		"invalid root v2 generation, have %llu expect (0, %llu]",
+			    btrfs_root_generation_v2(&ri),
+			    btrfs_super_generation(fs_info->super_copy) + 1);
+		return -EUCLEAN;
+	}
+	if (unlikely(btrfs_root_last_snapshot(&ri) >
+		     btrfs_super_generation(fs_info->super_copy) + 1)) {
+		generic_err(leaf, slot,
+		"invalid root last_snapshot, have %llu expect (0, %llu]",
+			    btrfs_root_last_snapshot(&ri),
+			    btrfs_super_generation(fs_info->super_copy) + 1);
+		return -EUCLEAN;
+	}
+
+	/* Alignment and level check */
+	if (unlikely(!IS_ALIGNED(btrfs_root_bytenr(&ri), fs_info->sectorsize))) {
+		generic_err(leaf, slot,
+		"invalid root bytenr, have %llu expect to be aligned to %u",
+			    btrfs_root_bytenr(&ri), fs_info->sectorsize);
+		return -EUCLEAN;
+	}
+	if (unlikely(btrfs_root_level(&ri) >= BTRFS_MAX_LEVEL)) {
+		generic_err(leaf, slot,
+			    "invalid root level, have %u expect [0, %u]",
+			    btrfs_root_level(&ri), BTRFS_MAX_LEVEL - 1);
+		return -EUCLEAN;
+	}
+	if (unlikely(btrfs_root_drop_level(&ri) >= BTRFS_MAX_LEVEL)) {
+		generic_err(leaf, slot,
+			    "invalid root level, have %u expect [0, %u]",
+			    btrfs_root_drop_level(&ri), BTRFS_MAX_LEVEL - 1);
+		return -EUCLEAN;
+	}
+
+	/* Flags check */
+	if (unlikely(btrfs_root_flags(&ri) & ~valid_root_flags)) {
+		generic_err(leaf, slot,
+			    "invalid root flags, have 0x%llx expect mask 0x%llx",
+			    btrfs_root_flags(&ri), valid_root_flags);
+		return -EUCLEAN;
+	}
+	return 0;
+}
+
+__printf(3,4)
+__cold
+static void extent_err(const struct extent_buffer *eb, int slot,
+		       const char *fmt, ...)
+{
+	struct btrfs_key key;
+	struct va_format vaf;
+	va_list args;
+	u64 bytenr;
+	u64 len;
+
+	btrfs_item_key_to_cpu(eb, &key, slot);
+	bytenr = key.objectid;
+	if (key.type == BTRFS_METADATA_ITEM_KEY ||
+	    key.type == BTRFS_TREE_BLOCK_REF_KEY ||
+	    key.type == BTRFS_SHARED_BLOCK_REF_KEY)
+		len = eb->fs_info->nodesize;
+	else
+		len = key.offset;
+	va_start(args, fmt);
+
+	vaf.fmt = fmt;
+	vaf.va = &args;
+
+	btrfs_crit(eb->fs_info,
+	"corrupt %s: block=%llu slot=%d extent bytenr=%llu len=%llu %pV",
+		btrfs_header_level(eb) == 0 ? "leaf" : "node",
+		eb->start, slot, bytenr, len, &vaf);
+	va_end(args);
+}
+
+static int check_extent_item(struct extent_buffer *leaf,
+			     struct btrfs_key *key, int slot,
+			     struct btrfs_key *prev_key)
+{
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
+	struct btrfs_extent_item *ei;
+	bool is_tree_block = false;
+	unsigned long ptr;	/* Current pointer inside inline refs */
+	unsigned long end;	/* Extent item end */
+	const u32 item_size = btrfs_item_size(leaf, slot);
+	u64 flags;
+	u64 generation;
+	u64 total_refs;		/* Total refs in btrfs_extent_item */
+	u64 inline_refs = 0;	/* found total inline refs */
+
+	if (unlikely(key->type == BTRFS_METADATA_ITEM_KEY &&
+		     !btrfs_fs_incompat(fs_info, SKINNY_METADATA))) {
+		generic_err(leaf, slot,
+"invalid key type, METADATA_ITEM type invalid when SKINNY_METADATA feature disabled");
+		return -EUCLEAN;
+	}
+	/* key->objectid is the bytenr for both key types */
+	if (unlikely(!IS_ALIGNED(key->objectid, fs_info->sectorsize))) {
+		generic_err(leaf, slot,
+		"invalid key objectid, have %llu expect to be aligned to %u",
+			   key->objectid, fs_info->sectorsize);
+		return -EUCLEAN;
+	}
+
+	/* key->offset is tree level for METADATA_ITEM_KEY */
+	if (unlikely(key->type == BTRFS_METADATA_ITEM_KEY &&
+		     key->offset >= BTRFS_MAX_LEVEL)) {
+		extent_err(leaf, slot,
+			   "invalid tree level, have %llu expect [0, %u]",
+			   key->offset, BTRFS_MAX_LEVEL - 1);
+		return -EUCLEAN;
+	}
+
+	/*
+	 * EXTENT/METADATA_ITEM consists of:
+	 * 1) One btrfs_extent_item
+	 *    Records the total refs, type and generation of the extent.
+	 *
+	 * 2) One btrfs_tree_block_info (for EXTENT_ITEM and tree backref only)
+	 *    Records the first key and level of the tree block.
+	 *
+	 * 3) Zero or more btrfs_extent_inline_ref(s)
+	 *    Each inline ref has one btrfs_extent_inline_ref that shows:
+	 *    2.1) The ref type, one of the 4
+	 *         TREE_BLOCK_REF	Tree block only
+	 *         SHARED_BLOCK_REF	Tree block only
+	 *         EXTENT_DATA_REF	Data only
+	 *         SHARED_DATA_REF	Data only
+	 *    2.2) Ref type specific data
+	 *         Either using btrfs_extent_inline_ref::offset, or specific
+	 *         data structure.
+	 */
+	if (unlikely(item_size < sizeof(*ei))) {
+		extent_err(leaf, slot,
+			   "invalid item size, have %u expect [%zu, %u)",
+			   item_size, sizeof(*ei),
+			   BTRFS_LEAF_DATA_SIZE(fs_info));
+		return -EUCLEAN;
+	}
+	end = item_size + btrfs_item_ptr_offset(leaf, slot);
+
+	/* Checks against extent_item */
+	ei = btrfs_item_ptr(leaf, slot, struct btrfs_extent_item);
+	flags = btrfs_extent_flags(leaf, ei);
+	total_refs = btrfs_extent_refs(leaf, ei);
+	generation = btrfs_extent_generation(leaf, ei);
+	if (unlikely(generation >
+		     btrfs_super_generation(fs_info->super_copy) + 1)) {
+		extent_err(leaf, slot,
+			   "invalid generation, have %llu expect (0, %llu]",
+			   generation,
+			   btrfs_super_generation(fs_info->super_copy) + 1);
+		return -EUCLEAN;
+	}
+	if (unlikely(!has_single_bit_set(flags & (BTRFS_EXTENT_FLAG_DATA |
+						  BTRFS_EXTENT_FLAG_TREE_BLOCK)))) {
+		extent_err(leaf, slot,
+		"invalid extent flag, have 0x%llx expect 1 bit set in 0x%llx",
+			flags, BTRFS_EXTENT_FLAG_DATA |
+			BTRFS_EXTENT_FLAG_TREE_BLOCK);
+		return -EUCLEAN;
+	}
+	is_tree_block = !!(flags & BTRFS_EXTENT_FLAG_TREE_BLOCK);
+	if (is_tree_block) {
+		if (unlikely(key->type == BTRFS_EXTENT_ITEM_KEY &&
+			     key->offset != fs_info->nodesize)) {
+			extent_err(leaf, slot,
+				   "invalid extent length, have %llu expect %u",
+				   key->offset, fs_info->nodesize);
+			return -EUCLEAN;
+		}
+	} else {
+		if (unlikely(key->type != BTRFS_EXTENT_ITEM_KEY)) {
+			extent_err(leaf, slot,
+			"invalid key type, have %u expect %u for data backref",
+				   key->type, BTRFS_EXTENT_ITEM_KEY);
+			return -EUCLEAN;
+		}
+		if (unlikely(!IS_ALIGNED(key->offset, fs_info->sectorsize))) {
+			extent_err(leaf, slot,
+			"invalid extent length, have %llu expect aligned to %u",
+				   key->offset, fs_info->sectorsize);
+			return -EUCLEAN;
+		}
+		if (unlikely(flags & BTRFS_BLOCK_FLAG_FULL_BACKREF)) {
+			extent_err(leaf, slot,
+			"invalid extent flag, data has full backref set");
+			return -EUCLEAN;
+		}
+	}
+	ptr = (unsigned long)(struct btrfs_extent_item *)(ei + 1);
+
+	/* Check the special case of btrfs_tree_block_info */
+	if (is_tree_block && key->type != BTRFS_METADATA_ITEM_KEY) {
+		struct btrfs_tree_block_info *info;
+
+		info = (struct btrfs_tree_block_info *)ptr;
+		if (unlikely(btrfs_tree_block_level(leaf, info) >= BTRFS_MAX_LEVEL)) {
+			extent_err(leaf, slot,
+			"invalid tree block info level, have %u expect [0, %u]",
+				   btrfs_tree_block_level(leaf, info),
+				   BTRFS_MAX_LEVEL - 1);
+			return -EUCLEAN;
+		}
+		ptr = (unsigned long)(struct btrfs_tree_block_info *)(info + 1);
+	}
+
+	/* Check inline refs */
+	while (ptr < end) {
+		struct btrfs_extent_inline_ref *iref;
+		struct btrfs_extent_data_ref *dref;
+		struct btrfs_shared_data_ref *sref;
+		u64 dref_offset;
+		u64 inline_offset;
+		u8 inline_type;
+
+		if (unlikely(ptr + sizeof(*iref) > end)) {
+			extent_err(leaf, slot,
+"inline ref item overflows extent item, ptr %lu iref size %zu end %lu",
+				   ptr, sizeof(*iref), end);
+			return -EUCLEAN;
+		}
+		iref = (struct btrfs_extent_inline_ref *)ptr;
+		inline_type = btrfs_extent_inline_ref_type(leaf, iref);
+		inline_offset = btrfs_extent_inline_ref_offset(leaf, iref);
+		if (unlikely(ptr + btrfs_extent_inline_ref_size(inline_type) > end)) {
+			extent_err(leaf, slot,
+"inline ref item overflows extent item, ptr %lu iref size %u end %lu",
+				   ptr, inline_type, end);
+			return -EUCLEAN;
+		}
+
+		switch (inline_type) {
+		/* inline_offset is subvolid of the owner, no need to check */
+		case BTRFS_TREE_BLOCK_REF_KEY:
+			inline_refs++;
+			break;
+		/* Contains parent bytenr */
+		case BTRFS_SHARED_BLOCK_REF_KEY:
+			if (unlikely(!IS_ALIGNED(inline_offset,
+						 fs_info->sectorsize))) {
+				extent_err(leaf, slot,
+		"invalid tree parent bytenr, have %llu expect aligned to %u",
+					   inline_offset, fs_info->sectorsize);
+				return -EUCLEAN;
+			}
+			inline_refs++;
+			break;
+		/*
+		 * Contains owner subvolid, owner key objectid, adjusted offset.
+		 * The only obvious corruption can happen in that offset.
+		 */
+		case BTRFS_EXTENT_DATA_REF_KEY:
+			dref = (struct btrfs_extent_data_ref *)(&iref->offset);
+			dref_offset = btrfs_extent_data_ref_offset(leaf, dref);
+			if (unlikely(!IS_ALIGNED(dref_offset,
+						 fs_info->sectorsize))) {
+				extent_err(leaf, slot,
+		"invalid data ref offset, have %llu expect aligned to %u",
+					   dref_offset, fs_info->sectorsize);
+				return -EUCLEAN;
+			}
+			inline_refs += btrfs_extent_data_ref_count(leaf, dref);
+			break;
+		/* Contains parent bytenr and ref count */
+		case BTRFS_SHARED_DATA_REF_KEY:
+			sref = (struct btrfs_shared_data_ref *)(iref + 1);
+			if (unlikely(!IS_ALIGNED(inline_offset,
+						 fs_info->sectorsize))) {
+				extent_err(leaf, slot,
+		"invalid data parent bytenr, have %llu expect aligned to %u",
+					   inline_offset, fs_info->sectorsize);
+				return -EUCLEAN;
+			}
+			inline_refs += btrfs_shared_data_ref_count(leaf, sref);
+			break;
+		default:
+			extent_err(leaf, slot, "unknown inline ref type: %u",
+				   inline_type);
+			return -EUCLEAN;
+		}
+		ptr += btrfs_extent_inline_ref_size(inline_type);
+	}
+	/* No padding is allowed */
+	if (unlikely(ptr != end)) {
+		extent_err(leaf, slot,
+			   "invalid extent item size, padding bytes found");
+		return -EUCLEAN;
+	}
+
+	/* Finally, check the inline refs against total refs */
+	if (unlikely(inline_refs > total_refs)) {
+		extent_err(leaf, slot,
+			"invalid extent refs, have %llu expect >= inline %llu",
+			   total_refs, inline_refs);
+		return -EUCLEAN;
+	}
+
+	if ((prev_key->type == BTRFS_EXTENT_ITEM_KEY) ||
+	    (prev_key->type == BTRFS_METADATA_ITEM_KEY)) {
+		u64 prev_end = prev_key->objectid;
+
+		if (prev_key->type == BTRFS_METADATA_ITEM_KEY)
+			prev_end += fs_info->nodesize;
+		else
+			prev_end += prev_key->offset;
+
+		if (unlikely(prev_end > key->objectid)) {
+			extent_err(leaf, slot,
+	"previous extent [%llu %u %llu] overlaps current extent [%llu %u %llu]",
+				   prev_key->objectid, prev_key->type,
+				   prev_key->offset, key->objectid, key->type,
+				   key->offset);
+			return -EUCLEAN;
+		}
+	}
+
+	return 0;
+}
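+
+/*
+ * Illustrative sketch (not part of the synced kernel code): for a skinny
+ * METADATA_ITEM the layout walked above is just the extent item followed by
+ * its inline refs, so the expected item size for @nr_refs inline
+ * TREE_BLOCK_REF/SHARED_BLOCK_REF references is:
+ */
+static inline u32 example_skinny_metadata_item_size(int nr_refs)
+{
+	return sizeof(struct btrfs_extent_item) +
+	       nr_refs * btrfs_extent_inline_ref_size(BTRFS_TREE_BLOCK_REF_KEY);
+}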
+
+static int check_simple_keyed_refs(struct extent_buffer *leaf,
+				   struct btrfs_key *key, int slot)
+{
+	u32 expect_item_size = 0;
+
+	if (key->type == BTRFS_SHARED_DATA_REF_KEY)
+		expect_item_size = sizeof(struct btrfs_shared_data_ref);
+
+	if (unlikely(btrfs_item_size(leaf, slot) != expect_item_size)) {
+		generic_err(leaf, slot,
+		"invalid item size, have %u expect %u for key type %u",
+			    btrfs_item_size(leaf, slot),
+			    expect_item_size, key->type);
+		return -EUCLEAN;
+	}
+	if (unlikely(!IS_ALIGNED(key->objectid, leaf->fs_info->sectorsize))) {
+		generic_err(leaf, slot,
+"invalid key objectid for shared block ref, have %llu expect aligned to %u",
+			    key->objectid, leaf->fs_info->sectorsize);
+		return -EUCLEAN;
+	}
+	if (unlikely(key->type != BTRFS_TREE_BLOCK_REF_KEY &&
+		     !IS_ALIGNED(key->offset, leaf->fs_info->sectorsize))) {
+		extent_err(leaf, slot,
+		"invalid tree parent bytenr, have %llu expect aligned to %u",
+			   key->offset, leaf->fs_info->sectorsize);
+		return -EUCLEAN;
+	}
+	return 0;
+}
+
+static int check_extent_data_ref(struct extent_buffer *leaf,
+				 struct btrfs_key *key, int slot)
+{
+	struct btrfs_extent_data_ref *dref;
+	unsigned long ptr = btrfs_item_ptr_offset(leaf, slot);
+	const unsigned long end = ptr + btrfs_item_size(leaf, slot);
+
+	if (unlikely(btrfs_item_size(leaf, slot) % sizeof(*dref) != 0)) {
+		generic_err(leaf, slot,
+	"invalid item size, have %u expect aligned to %zu for key type %u",
+			    btrfs_item_size(leaf, slot),
+			    sizeof(*dref), key->type);
+		return -EUCLEAN;
+	}
+	if (unlikely(!IS_ALIGNED(key->objectid, leaf->fs_info->sectorsize))) {
+		generic_err(leaf, slot,
+"invalid key objectid for shared block ref, have %llu expect aligned to %u",
+			    key->objectid, leaf->fs_info->sectorsize);
+		return -EUCLEAN;
+	}
+	for (; ptr < end; ptr += sizeof(*dref)) {
+		u64 offset;
+
+		/*
+		 * We cannot check the extent_data_ref hash due to possible
+		 * overflow from the leaf due to hash collisions.
+		 */
+		dref = (struct btrfs_extent_data_ref *)ptr;
+		offset = btrfs_extent_data_ref_offset(leaf, dref);
+		if (unlikely(!IS_ALIGNED(offset, leaf->fs_info->sectorsize))) {
+			extent_err(leaf, slot,
+	"invalid extent data backref offset, have %llu expect aligned to %u",
+				   offset, leaf->fs_info->sectorsize);
+			return -EUCLEAN;
+		}
+	}
+	return 0;
+}
+
+#define inode_ref_err(eb, slot, fmt, args...)			\
+	inode_item_err(eb, slot, fmt, ##args)
+static int check_inode_ref(struct extent_buffer *leaf,
+			   struct btrfs_key *key, struct btrfs_key *prev_key,
+			   int slot)
+{
+	struct btrfs_inode_ref *iref;
+	unsigned long ptr;
+	unsigned long end;
+
+	if (unlikely(!check_prev_ino(leaf, key, slot, prev_key)))
+		return -EUCLEAN;
+	/* namelen can't be 0, so item_size == sizeof() is also invalid */
+	if (unlikely(btrfs_item_size(leaf, slot) <= sizeof(*iref))) {
+		inode_ref_err(leaf, slot,
+			"invalid item size, have %u expect (%zu, %u)",
+			btrfs_item_size(leaf, slot),
+			sizeof(*iref), BTRFS_LEAF_DATA_SIZE(leaf->fs_info));
+		return -EUCLEAN;
+	}
+
+	ptr = btrfs_item_ptr_offset(leaf, slot);
+	end = ptr + btrfs_item_size(leaf, slot);
+	while (ptr < end) {
+		u16 namelen;
+
+		if (unlikely(ptr + sizeof(iref) > end)) {
+			inode_ref_err(leaf, slot,
+			"inode ref overflow, ptr %lu end %lu inode_ref_size %zu",
+				ptr, end, sizeof(iref));
+			return -EUCLEAN;
+		}
+
+		iref = (struct btrfs_inode_ref *)ptr;
+		namelen = btrfs_inode_ref_name_len(leaf, iref);
+		if (unlikely(ptr + sizeof(*iref) + namelen > end)) {
+			inode_ref_err(leaf, slot,
+				"inode ref overflow, ptr %lu end %lu namelen %u",
+				ptr, end, namelen);
+			return -EUCLEAN;
+		}
+
+		/*
+		 * NOTE: In theory we should record all found index numbers
+		 * to find any duplicated indexes, but that will be too time
+		 * consuming for inodes with too many hard links.
+		 */
+		ptr += sizeof(*iref) + namelen;
+	}
+	return 0;
+}
+
+/*
+ * Common point to switch the item-specific validation.
+ */
+static enum btrfs_tree_block_status check_leaf_item(struct extent_buffer *leaf,
+						    struct btrfs_key *key,
+						    int slot,
+						    struct btrfs_key *prev_key)
+{
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
+	struct btrfs_chunk *chunk;
+	int ret = 0;
+
+	if (fs_info->skip_leaf_item_checks)
+		return 0;
+
+	switch (key->type) {
+	case BTRFS_EXTENT_DATA_KEY:
+		ret = check_extent_data_item(leaf, key, slot, prev_key);
+		break;
+	case BTRFS_EXTENT_CSUM_KEY:
+		ret = check_csum_item(leaf, key, slot, prev_key);
+		break;
+	case BTRFS_DIR_ITEM_KEY:
+	case BTRFS_DIR_INDEX_KEY:
+	case BTRFS_XATTR_ITEM_KEY:
+		ret = check_dir_item(leaf, key, prev_key, slot);
+		break;
+	case BTRFS_INODE_REF_KEY:
+		ret = check_inode_ref(leaf, key, prev_key, slot);
+		break;
+	case BTRFS_BLOCK_GROUP_ITEM_KEY:
+		ret = check_block_group_item(leaf, key, slot);
+		break;
+	case BTRFS_CHUNK_ITEM_KEY:
+		chunk = btrfs_item_ptr(leaf, slot, struct btrfs_chunk);
+		ret = check_leaf_chunk_item(leaf, chunk, key, slot);
+		break;
+	case BTRFS_DEV_ITEM_KEY:
+		ret = check_dev_item(leaf, key, slot);
+		break;
+	case BTRFS_INODE_ITEM_KEY:
+		ret = check_inode_item(leaf, key, slot);
+		break;
+	case BTRFS_ROOT_ITEM_KEY:
+		ret = check_root_item(leaf, key, slot);
+		break;
+	case BTRFS_EXTENT_ITEM_KEY:
+	case BTRFS_METADATA_ITEM_KEY:
+		ret = check_extent_item(leaf, key, slot, prev_key);
+		break;
+	case BTRFS_TREE_BLOCK_REF_KEY:
+	case BTRFS_SHARED_DATA_REF_KEY:
+	case BTRFS_SHARED_BLOCK_REF_KEY:
+		ret = check_simple_keyed_refs(leaf, key, slot);
+		break;
+	case BTRFS_EXTENT_DATA_REF_KEY:
+		ret = check_extent_data_ref(leaf, key, slot);
+		break;
+	}
+
+	if (ret)
+		return BTRFS_TREE_BLOCK_INVALID_ITEM;
+	return BTRFS_TREE_BLOCK_CLEAN;
+}
+
+enum btrfs_tree_block_status __btrfs_check_leaf(struct extent_buffer *leaf)
+{
+	struct btrfs_fs_info *fs_info = leaf->fs_info;
+	/* No valid key type is 0, so all keys should be larger than this key */
+	struct btrfs_key prev_key = {0, 0, 0};
+	struct btrfs_key key;
+	u32 nritems = btrfs_header_nritems(leaf);
+	int slot;
+	bool check_item_data = btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_WRITTEN);
+
+	if (unlikely(btrfs_header_level(leaf) != 0)) {
+		generic_err(leaf, 0,
+			"invalid level for leaf, have %d expect 0",
+			btrfs_header_level(leaf));
+		return BTRFS_TREE_BLOCK_INVALID_LEVEL;
+	}
+
+	/*
+	 * MODIFIED:
+	 *  - We need to skip the below checks for the temporary fs state during
+	 *    mkfs or --init-extent-tree.
+	 */
+	if (nritems == 0 &&
+	    (btrfs_super_magic(fs_info->super_copy) == BTRFS_MAGIC_TEMPORARY ||
+	     fs_info->skip_leaf_item_checks))
+		return BTRFS_TREE_BLOCK_CLEAN;
+
+	/*
+	 * Extent buffers from a relocation tree have an owner field that
+	 * corresponds to the subvolume tree they are based on. So just from an
+	 * extent buffer alone we can not find out what is the id of the
+	 * corresponding subvolume tree, so we can not figure out if the extent
+	 * buffer corresponds to the root of the relocation tree or not. So
+	 * skip this check for relocation trees.
+	 */
+	if (nritems == 0 && !btrfs_header_flag(leaf, BTRFS_HEADER_FLAG_RELOC)) {
+		u64 owner = btrfs_header_owner(leaf);
+
+		/* These trees must never be empty */
+		if (unlikely(owner == BTRFS_ROOT_TREE_OBJECTID ||
+			     owner == BTRFS_CHUNK_TREE_OBJECTID ||
+			     owner == BTRFS_DEV_TREE_OBJECTID ||
+			     owner == BTRFS_FS_TREE_OBJECTID ||
+			     owner == BTRFS_DATA_RELOC_TREE_OBJECTID)) {
+			generic_err(leaf, 0,
+			"invalid root, root %llu must never be empty",
+				    owner);
+			return BTRFS_TREE_BLOCK_INVALID_NRITEMS;
+		}
+
+		/* Unknown tree */
+		if (unlikely(owner == 0)) {
+			generic_err(leaf, 0,
+				"invalid owner, root 0 is not defined");
+			return BTRFS_TREE_BLOCK_INVALID_OWNER;
+		}
+
+		/* EXTENT_TREE_V2 can have empty extent trees. */
+		if (btrfs_fs_incompat(fs_info, EXTENT_TREE_V2))
+			return BTRFS_TREE_BLOCK_CLEAN;
+
+		if (unlikely(owner == BTRFS_EXTENT_TREE_OBJECTID)) {
+			generic_err(leaf, 0,
+			"invalid root, root %llu must never be empty",
+				    owner);
+			return BTRFS_TREE_BLOCK_INVALID_NRITEMS;
+		}
+
+		return BTRFS_TREE_BLOCK_CLEAN;
+	}
+
+	if (unlikely(nritems == 0))
+		return BTRFS_TREE_BLOCK_CLEAN;
+
+	/*
+	 * Check the following things to make sure this is a good leaf, and
+	 * leaf users won't need to bother with similar sanity checks:
+	 *
+	 * 1) key ordering
+	 * 2) item offset and size
+	 *    No overlap, no hole, all inside the leaf.
+	 * 3) item content
+	 *    If possible, do comprehensive sanity check.
+	 *    NOTE: All checks must only rely on the item data itself.
+	 */
+	for (slot = 0; slot < nritems; slot++) {
+		u32 item_end_expected;
+		u64 item_data_end;
+
+		btrfs_item_key_to_cpu(leaf, &key, slot);
+
+		/* Make sure the keys are in the right order */
+		if (unlikely(btrfs_comp_cpu_keys(&prev_key, &key) >= 0)) {
+			generic_err(leaf, slot,
+	"bad key order, prev (%llu %u %llu) current (%llu %u %llu)",
+				prev_key.objectid, prev_key.type,
+				prev_key.offset, key.objectid, key.type,
+				key.offset);
+			return BTRFS_TREE_BLOCK_BAD_KEY_ORDER;
+		}
+
+		item_data_end = (u64)btrfs_item_offset(leaf, slot) +
+				btrfs_item_size(leaf, slot);
+		/*
+		 * Make sure the offset and ends are right, remember that the
+		 * item data starts at the end of the leaf and grows towards the
+		 * front.
+		 */
+		if (slot == 0)
+			item_end_expected = BTRFS_LEAF_DATA_SIZE(fs_info);
+		else
+			item_end_expected = btrfs_item_offset(leaf,
+								 slot - 1);
+		if (unlikely(item_data_end != item_end_expected)) {
+			generic_err(leaf, slot,
+				"unexpected item end, have %llu expect %u",
+				item_data_end, item_end_expected);
+			return BTRFS_TREE_BLOCK_INVALID_OFFSETS;
+		}
+
+		/*
+		 * Check to make sure that we don't point outside of the leaf,
+		 * just in case all the items are consistent to each other, but
+		 * all point outside of the leaf.
+		 */
+		if (unlikely(item_data_end > BTRFS_LEAF_DATA_SIZE(fs_info))) {
+			generic_err(leaf, slot,
+			"slot end outside of leaf, have %llu expect range [0, %u]",
+				item_data_end, BTRFS_LEAF_DATA_SIZE(fs_info));
+			return BTRFS_TREE_BLOCK_INVALID_OFFSETS;
+		}
+
+		/* Also check if the item pointer overlaps with btrfs item. */
+		if (unlikely(btrfs_item_ptr_offset(leaf, slot) <
+			     btrfs_item_nr_offset(leaf, slot) + sizeof(struct btrfs_item))) {
+			generic_err(leaf, slot,
+		"slot overlaps with its data, item end %lu data start %lu",
+				btrfs_item_nr_offset(leaf, slot) +
+				sizeof(struct btrfs_item),
+				btrfs_item_ptr_offset(leaf, slot));
+			return BTRFS_TREE_BLOCK_INVALID_OFFSETS;
+		}
+
+		/*
+		 * We only want to do this if WRITTEN is set, otherwise the leaf
+		 * may be in some intermediate state and won't appear valid.
+		 */
+		if (check_item_data) {
+			enum btrfs_tree_block_status ret;
+
+			/*
+			 * Check if the item size and content meet other
+			 * criteria
+			 */
+			ret = check_leaf_item(leaf, &key, slot, &prev_key);
+			if (unlikely(ret != BTRFS_TREE_BLOCK_CLEAN))
+				return ret;
+		}
+
+		prev_key.objectid = key.objectid;
+		prev_key.type = key.type;
+		prev_key.offset = key.offset;
+	}
+
+	return BTRFS_TREE_BLOCK_CLEAN;
+}
+
+int btrfs_check_leaf(struct extent_buffer *leaf)
+{
+	enum btrfs_tree_block_status ret;
+
+	ret = __btrfs_check_leaf(leaf);
+	if (unlikely(ret != BTRFS_TREE_BLOCK_CLEAN))
+		return -EUCLEAN;
+	return 0;
+}
+ALLOW_ERROR_INJECTION(btrfs_check_leaf, ERRNO);
+
+enum btrfs_tree_block_status __btrfs_check_node(struct extent_buffer *node)
+{
+	struct btrfs_fs_info *fs_info = node->fs_info;
+	unsigned long nr = btrfs_header_nritems(node);
+	struct btrfs_key key, next_key;
+	int slot;
+	int level = btrfs_header_level(node);
+	u64 bytenr;
+
+	if (unlikely(level <= 0 || level >= BTRFS_MAX_LEVEL)) {
+		generic_err(node, 0,
+			"invalid level for node, have %d expect [1, %d]",
+			level, BTRFS_MAX_LEVEL - 1);
+		return BTRFS_TREE_BLOCK_INVALID_LEVEL;
+	}
+	if (unlikely(nr == 0 || nr > BTRFS_NODEPTRS_PER_BLOCK(fs_info))) {
+		btrfs_crit(fs_info,
+"corrupt node: root=%llu block=%llu, nritems too %s, have %lu expect range [1,%u]",
+			   btrfs_header_owner(node), node->start,
+			   nr == 0 ? "small" : "large", nr,
+			   BTRFS_NODEPTRS_PER_BLOCK(fs_info));
+		return BTRFS_TREE_BLOCK_INVALID_NRITEMS;
+	}
+
+	for (slot = 0; slot < nr - 1; slot++) {
+		bytenr = btrfs_node_blockptr(node, slot);
+		btrfs_node_key_to_cpu(node, &key, slot);
+		btrfs_node_key_to_cpu(node, &next_key, slot + 1);
+
+		if (unlikely(!bytenr)) {
+			generic_err(node, slot,
+				"invalid NULL node pointer");
+			return BTRFS_TREE_BLOCK_INVALID_BLOCKPTR;
+		}
+		if (unlikely(!IS_ALIGNED(bytenr, fs_info->sectorsize))) {
+			generic_err(node, slot,
+			"unaligned pointer, have %llu should be aligned to %u",
+				bytenr, fs_info->sectorsize);
+			return BTRFS_TREE_BLOCK_INVALID_BLOCKPTR;
+		}
+
+		if (unlikely(btrfs_comp_cpu_keys(&key, &next_key) >= 0)) {
+			generic_err(node, slot,
+	"bad key order, current (%llu %u %llu) next (%llu %u %llu)",
+				key.objectid, key.type, key.offset,
+				next_key.objectid, next_key.type,
+				next_key.offset);
+			return BTRFS_TREE_BLOCK_BAD_KEY_ORDER;
+		}
+	}
+	return BTRFS_TREE_BLOCK_CLEAN;
+}
+
+int btrfs_check_node(struct extent_buffer *node)
+{
+	enum btrfs_tree_block_status ret;
+
+	ret = __btrfs_check_node(node);
+	if (unlikely(ret != BTRFS_TREE_BLOCK_CLEAN))
+		return -EUCLEAN;
+	return 0;
+}
+ALLOW_ERROR_INJECTION(btrfs_check_node, ERRNO);
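+
+/*
+ * Illustrative sketch (not part of the synced kernel code): a caller that
+ * just read an extent buffer would typically pick the checker based on the
+ * header level.
+ */
+static inline int example_check_tree_block(struct extent_buffer *eb)
+{
+	if (btrfs_header_level(eb) == 0)
+		return btrfs_check_leaf(eb);
+	return btrfs_check_node(eb);
+}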
+
+int btrfs_check_eb_owner(const struct extent_buffer *eb, u64 root_owner)
+{
+	const bool is_subvol = is_fstree(root_owner);
+	const u64 eb_owner = btrfs_header_owner(eb);
+
+	/*
+	 * Skip dummy fs, as selftests don't create unique ebs for each dummy
+	 * root.
+	 *
+	 * MODIFIED:
+	 *  - The ->fs_state member doesn't exist in btrfs-progs yet.
+	 *
+	if (test_bit(BTRFS_FS_STATE_DUMMY_FS_INFO, &eb->fs_info->fs_state))
+		return 0;
+	*/
+
+	/*
+	 * There are several call sites (backref walking, qgroup, and data
+	 * reloc) passing 0 as @root_owner, as they are not holding the
+	 * tree root.  In that case, we can not do a reliable ownership check,
+	 * so just exit.
+	 */
+	if (root_owner == 0)
+		return 0;
+	/*
+	 * These trees use key.offset as their owner, our callers don't have
+	 * the extra capacity to pass key.offset here.  So we just skip them.
+	 */
+	if (root_owner == BTRFS_TREE_LOG_OBJECTID ||
+	    root_owner == BTRFS_TREE_RELOC_OBJECTID)
+		return 0;
+
+	if (!is_subvol) {
+		/* For non-subvolume trees, the eb owner should match root owner */
+		if (unlikely(root_owner != eb_owner)) {
+			btrfs_crit(eb->fs_info,
+"corrupted %s, root=%llu block=%llu owner mismatch, have %llu expect %llu",
+				btrfs_header_level(eb) == 0 ? "leaf" : "node",
+				root_owner, btrfs_header_bytenr(eb), eb_owner,
+				root_owner);
+			return -EUCLEAN;
+		}
+		return 0;
+	}
+
+	/*
+	 * For subvolume trees, owners can mismatch, but they should all belong
+	 * to subvolume trees.
+	 */
+	if (unlikely(is_subvol != is_fstree(eb_owner))) {
+		btrfs_crit(eb->fs_info,
+"corrupted %s, root=%llu block=%llu owner mismatch, have %llu expect [%llu, %llu]",
+			btrfs_header_level(eb) == 0 ? "leaf" : "node",
+			root_owner, btrfs_header_bytenr(eb), eb_owner,
+			BTRFS_FIRST_FREE_OBJECTID, BTRFS_LAST_FREE_OBJECTID);
+		return -EUCLEAN;
+	}
+	return 0;
+}
+
+int btrfs_verify_level_key(struct extent_buffer *eb, int level,
+			   struct btrfs_key *first_key, u64 parent_transid)
+{
+	struct btrfs_fs_info *fs_info = eb->fs_info;
+	int found_level;
+	struct btrfs_key found_key;
+	int ret;
+
+	found_level = btrfs_header_level(eb);
+	if (found_level != level) {
+		WARN(IS_ENABLED(CONFIG_BTRFS_DEBUG),
+		     KERN_ERR "BTRFS: tree level check failed\n");
+		btrfs_err(fs_info,
+"tree level mismatch detected, bytenr=%llu level expected=%u has=%u",
+			  eb->start, level, found_level);
+		return -EIO;
+	}
+
+	if (!first_key)
+		return 0;
+
+	/*
+	 * For live tree blocks (new tree blocks in the current transaction),
+	 * we need proper lock context to avoid races, which is impossible here.
+	 * So we only check tree blocks which are read from disk, whose
+	 * generation <= fs_info->last_trans_committed.
+	 */
+	if (btrfs_header_generation(eb) > fs_info->last_trans_committed)
+		return 0;
+
+	/* We have @first_key, so this @eb must have at least one item */
+	if (btrfs_header_nritems(eb) == 0) {
+		btrfs_err(fs_info,
+		"invalid tree nritems, bytenr=%llu nritems=0 expect >0",
+			  eb->start);
+		WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
+		return -EUCLEAN;
+	}
+
+	if (found_level)
+		btrfs_node_key_to_cpu(eb, &found_key, 0);
+	else
+		btrfs_item_key_to_cpu(eb, &found_key, 0);
+	ret = btrfs_comp_cpu_keys(first_key, &found_key);
+
+	if (ret) {
+		WARN(IS_ENABLED(CONFIG_BTRFS_DEBUG),
+		     KERN_ERR "BTRFS: tree first key check failed\n");
+		btrfs_err(fs_info,
+"tree first key mismatch detected, bytenr=%llu parent_transid=%llu key expected=(%llu,%u,%llu) has=(%llu,%u,%llu)",
+			  eb->start, parent_transid, first_key->objectid,
+			  first_key->type, first_key->offset,
+			  found_key.objectid, found_key.type,
+			  found_key.offset);
+	}
+	return ret;
+}
diff --git a/kernel-shared/tree-checker.h b/kernel-shared/tree-checker.h
new file mode 100644
index 00000000..9c4ba01a
--- /dev/null
+++ b/kernel-shared/tree-checker.h
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) Qu Wenruo 2017.  All rights reserved.
+ */
+
+#ifndef BTRFS_TREE_CHECKER_H
+#define BTRFS_TREE_CHECKER_H
+
+#include "uapi/btrfs_tree.h"
+
+struct extent_buffer;
+struct btrfs_chunk;
+
+/* All the extra info needed to verify the parentness of a tree block. */
+struct btrfs_tree_parent_check {
+	/*
+	 * The owner check against the tree block.
+	 *
+	 * Can be 0 to skip the owner check.
+	 */
+	u64 owner_root;
+
+	/*
+	 * Expected transid, can be 0 to skip the check, but such a skip
+	 * should only be utilized for backref walk related code.
+	 */
+	u64 transid;
+
+	/*
+	 * The expected first key.
+	 *
+	 * This check can be skipped if @has_first_key is false; such a skip
+	 * can happen for cases where we don't have the parent node key,
+	 * e.g. reading the tree root, doing backref walk.
+	 */
+	struct btrfs_key first_key;
+	bool has_first_key;
+
+	/* The expected level. Should always be set. */
+	u8 level;
+};
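+
+/*
+ * Illustrative sketch (not part of the synced code): a reader that knows the
+ * expected owner, transid, level and first key of a child block could fill
+ * the structure like this before handing it to the read path.
+ */
+static inline void example_init_parent_check(struct btrfs_tree_parent_check *check,
+					     u64 owner_root, u64 transid,
+					     const struct btrfs_key *first_key,
+					     u8 level)
+{
+	*check = (struct btrfs_tree_parent_check){ 0 };
+	check->owner_root = owner_root;
+	check->transid = transid;
+	if (first_key) {
+		check->first_key = *first_key;
+		check->has_first_key = true;
+	}
+	check->level = level;
+}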
+
+enum btrfs_tree_block_status {
+	BTRFS_TREE_BLOCK_CLEAN,
+	BTRFS_TREE_BLOCK_INVALID_NRITEMS,
+	BTRFS_TREE_BLOCK_INVALID_PARENT_KEY,
+	BTRFS_TREE_BLOCK_BAD_KEY_ORDER,
+	BTRFS_TREE_BLOCK_INVALID_LEVEL,
+	BTRFS_TREE_BLOCK_INVALID_FREE_SPACE,
+	BTRFS_TREE_BLOCK_INVALID_OFFSETS,
+	BTRFS_TREE_BLOCK_INVALID_BLOCKPTR,
+	BTRFS_TREE_BLOCK_INVALID_ITEM,
+	BTRFS_TREE_BLOCK_INVALID_OWNER,
+};
+
+/*
+ * Exported simply for btrfs-progs which wants to have the
+ * btrfs_tree_block_status return codes.
+ */
+enum btrfs_tree_block_status __btrfs_check_leaf(struct extent_buffer *leaf);
+enum btrfs_tree_block_status __btrfs_check_node(struct extent_buffer *node);
+
+int btrfs_check_leaf(struct extent_buffer *leaf);
+int btrfs_check_node(struct extent_buffer *node);
+
+int btrfs_check_chunk_valid(struct extent_buffer *leaf,
+			    struct btrfs_chunk *chunk, u64 logical);
+int btrfs_check_eb_owner(const struct extent_buffer *eb, u64 root_owner);
+int btrfs_verify_level_key(struct extent_buffer *eb, int level,
+			   struct btrfs_key *first_key, u64 parent_transid);
+
+#endif
diff --git a/kernel-shared/volumes.c b/kernel-shared/volumes.c
index 14fcefee..fff49a06 100644
--- a/kernel-shared/volumes.c
+++ b/kernel-shared/volumes.c
@@ -32,6 +32,7 @@
 #include "common/utils.h"
 #include "common/device-utils.h"
 #include "kernel-lib/raid56.h"
+#include "tree-checker.h"
 
 const struct btrfs_raid_attr btrfs_raid_array[BTRFS_NR_RAID_TYPES] = {
 	[BTRFS_RAID_RAID10] = {
@@ -2086,101 +2087,6 @@ static struct btrfs_device *fill_missing_device(u64 devid)
 	return device;
 }
 
-/*
- * slot == -1: SYSTEM chunk
- * return -EIO on error, otherwise return 0
- */
-int btrfs_check_chunk_valid(struct extent_buffer *leaf,
-			    struct btrfs_chunk *chunk, u64 logical)
-{
-	struct btrfs_fs_info *fs_info = leaf->fs_info;
-	u64 length;
-	u64 stripe_len;
-	u16 num_stripes;
-	u16 sub_stripes;
-	u64 type;
-	u32 sectorsize = fs_info->sectorsize;
-	int min_devs;
-	int table_sub_stripes;
-
-	length = btrfs_chunk_length(leaf, chunk);
-	stripe_len = btrfs_chunk_stripe_len(leaf, chunk);
-	num_stripes = btrfs_chunk_num_stripes(leaf, chunk);
-	sub_stripes = btrfs_chunk_sub_stripes(leaf, chunk);
-	type = btrfs_chunk_type(leaf, chunk);
-
-	if (num_stripes == 0) {
-		error("invalid num_stripes, have %u expect non-zero",
-			num_stripes);
-		return -EUCLEAN;
-	}
-
-	/*
-	 * These valid checks may be insufficient to cover every corner cases.
-	 */
-	if (!IS_ALIGNED(logical, sectorsize)) {
-		error("invalid chunk logical %llu",  logical);
-		return -EIO;
-	}
-	if (btrfs_chunk_sector_size(leaf, chunk) != sectorsize) {
-		error("invalid chunk sectorsize %llu",
-		      (unsigned long long)btrfs_chunk_sector_size(leaf, chunk));
-		return -EIO;
-	}
-	if (!length || !IS_ALIGNED(length, sectorsize)) {
-		error("invalid chunk length %llu",  length);
-		return -EIO;
-	}
-	if (stripe_len != BTRFS_STRIPE_LEN) {
-		error("invalid chunk stripe length: %llu", stripe_len);
-		return -EIO;
-	}
-	if (type & ~(BTRFS_BLOCK_GROUP_TYPE_MASK |
-		     BTRFS_BLOCK_GROUP_PROFILE_MASK)) {
-		error("unrecognized chunk type: %llu",
-		      ~(BTRFS_BLOCK_GROUP_TYPE_MASK |
-			BTRFS_BLOCK_GROUP_PROFILE_MASK) & type);
-		return -EIO;
-	}
-	if (!(type & BTRFS_BLOCK_GROUP_TYPE_MASK)) {
-		error("missing chunk type flag: %llu", type);
-		return -EIO;
-	}
-	if (!(is_power_of_2(type & BTRFS_BLOCK_GROUP_PROFILE_MASK) ||
-	      (type & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0)) {
-		error("conflicting chunk type detected: %llu", type);
-		return -EIO;
-	}
-	if ((type & BTRFS_BLOCK_GROUP_PROFILE_MASK) &&
-	    !is_power_of_2(type & BTRFS_BLOCK_GROUP_PROFILE_MASK)) {
-		error("conflicting chunk profile detected: %llu", type);
-		return -EIO;
-	}
-
-	/*
-	 * Device number check against profile
-	 */
-	min_devs = btrfs_bg_type_to_devs_min(type);
-	table_sub_stripes = btrfs_bg_type_to_sub_stripes(type);
-	if ((type & BTRFS_BLOCK_GROUP_RAID10 && (sub_stripes != table_sub_stripes ||
-		  !IS_ALIGNED(num_stripes, sub_stripes))) ||
-	    (type & BTRFS_BLOCK_GROUP_RAID1 && num_stripes < min_devs) ||
-	    (type & BTRFS_BLOCK_GROUP_RAID1C3 && num_stripes < min_devs) ||
-	    (type & BTRFS_BLOCK_GROUP_RAID1C4 && num_stripes < min_devs) ||
-	    (type & BTRFS_BLOCK_GROUP_RAID5 && num_stripes < min_devs) ||
-	    (type & BTRFS_BLOCK_GROUP_RAID6 && num_stripes < min_devs) ||
-	    (type & BTRFS_BLOCK_GROUP_DUP && num_stripes > 2) ||
-	    ((type & BTRFS_BLOCK_GROUP_PROFILE_MASK) == 0 &&
-	     num_stripes != 1)) {
-		error("Invalid num_stripes:sub_stripes %u:%u for profile %llu",
-		      num_stripes, sub_stripes,
-		      type & BTRFS_BLOCK_GROUP_PROFILE_MASK);
-		return -EIO;
-	}
-
-	return 0;
-}
-
 /*
  * Slot is used to verify the chunk item is valid
  *
diff --git a/kernel-shared/volumes.h b/kernel-shared/volumes.h
index 84fd6617..ab5ac402 100644
--- a/kernel-shared/volumes.h
+++ b/kernel-shared/volumes.h
@@ -294,8 +294,6 @@ int write_raid56_with_parity(struct btrfs_fs_info *info,
 			     struct extent_buffer *eb,
 			     struct btrfs_multi_bio *multi,
 			     u64 stripe_len, u64 *raid_map);
-int btrfs_check_chunk_valid(struct extent_buffer *leaf,
-			    struct btrfs_chunk *chunk, u64 logical);
 u64 btrfs_stripe_length(struct btrfs_fs_info *fs_info,
 			struct extent_buffer *leaf,
 			struct btrfs_chunk *chunk);
-- 
2.40.0

