public inbox for linux-btrfs@vger.kernel.org
 help / color / mirror / Atom feed
* [PATCH v4 0/3] btrfs: simplify extent buffer writeback
@ 2025-04-24 18:32 Josef Bacik
  2025-04-24 18:32 ` [PATCH v4 1/3] btrfs: convert the buffer_radix to an xarray Josef Bacik
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Josef Bacik @ 2025-04-24 18:32 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

v1: https://lore.kernel.org/all/cover.1744822090.git.josef@toxicpanda.com/
v2: https://lore.kernel.org/all/cover.1744840038.git.josef@toxicpanda.com/
v3: https://lore.kernel.org/all/cover.1744984487.git.josef@toxicpanda.com/

v3->v4:
- Addressed the various comments from Filipe.
- Added more comments to the more subtle xarray usages to help explain what's
  going on.

v2->v3:
- Fixed a spot where I didn't use xa_unlock_irq(), per Daniel's review.
- Changed the name of the radix tree to buffer_tree, per Daniel's review.

v1->v2:
- Even though the xarray underpins the radix tree, the two don't quite work the
  same, so the xarray functions can't be used on the radix tree directly.
  Added a patch to convert the buffer_radix to buffer_xarray.

--- Original email ---

Hello,

We currently have two different paths for writing out extent buffers: a subpage
path and a normal path.  This has resulted in subtle bugs in the subpage code
that took us a while to figure out.  Additionally we have a complex interaction
of: get the folio, find the eb, check if we already started writing that eb
out, then write out the eb.

We already have a radix tree for our extent buffers, so we can use it much like
the pagecache uses its radix tree: tag the buffers with DIRTY when they're
dirty, and WRITEBACK when we start writing them out.
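As a rough illustration of the tagging idea, per-entry tag bits let writeback find dirty buffers directly instead of walking folios. This is a hypothetical userspace sketch, not the kernel xarray API: the flat slot array, the tag constants, and find_tagged() are all invented here for illustration.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical model: each slot carries tag bits, analogous to xarray marks. */
enum { TAG_DIRTY = 1 << 0, TAG_WRITEBACK = 1 << 1 };

#define NSLOTS 8

struct tag_tree {
	void *entries[NSLOTS];
	unsigned int tags[NSLOTS];
};

/* Return the next entry at or after *idx carrying the tag; advance *idx. */
static void *find_tagged(struct tag_tree *t, size_t *idx, unsigned int tag)
{
	while (*idx < NSLOTS) {
		size_t i = (*idx)++;
		if (t->entries[i] && (t->tags[i] & tag))
			return t->entries[i];
	}
	return NULL;
}
```

The real xarray keeps these marks inside the tree nodes, so a tagged lookup skips whole untagged subtrees rather than scanning every slot.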

The unfortunate part is we have to re-implement folio_batch for extent buffers,
so that's where most of the new code comes from.  The good part is we are now
down to a single path for writing out extent buffers, which is way simpler, and
in fact quite a bit faster now that we don't have all of these folio->eb
transitions to deal with.

I ran this through fsperf on a VM with 8 CPUs and 16GiB of RAM.  I used
smallfiles100k, but reduced the files to 1k to make it run faster.  The
results are as follows, with the statistically significant improvements
marked with *; there were no regressions.  fsperf was run with -n 10 for
both runs, so the baseline is the average of 10 runs and the test is the
average of 10 runs.

smallfiles100k results
      metric           baseline       current        stdev            diff
================================================================================
avg_commit_ms               68.58         58.44          3.35   -14.79% *
commits                    270.60        254.70         16.24    -5.88%
dev_read_iops                  48            48             0     0.00%
dev_read_kbytes              1044          1044             0     0.00%
dev_write_iops          866117.90     850028.10      14292.20    -1.86%
dev_write_kbytes      10939976.40   10605701.20     351330.32    -3.06%
elapsed                     49.30            33          1.64   -33.06% *
end_state_mount_ns    41251498.80   35773220.70    2531205.32   -13.28% *
end_state_umount_ns      1.90e+09      1.50e+09   14186226.85   -21.38% *
max_commit_ms                 139        111.60          9.72   -19.71% *
sys_cpu                      4.90          3.86          0.88   -21.29%
write_bw_bytes        42935768.20   64318451.10    1609415.05    49.80% *
write_clat_ns_mean      366431.69     243202.60      14161.98   -33.63% *
write_clat_ns_p50        49203.20         20992        264.40   -57.34% *
write_clat_ns_p99          827392     653721.60      65904.74   -20.99% *
write_io_kbytes           2035940       2035940             0     0.00%
write_iops               10482.37      15702.75        392.92    49.80% *
write_lat_ns_max         1.01e+08      90516129    3910102.06   -10.29% *
write_lat_ns_mean       366556.19     243308.48      14154.51   -33.62% *

As you can see we get about a 33% decrease in runtime, with a 50%
throughput increase, which is pretty significant.  Thanks,

Josef

Josef Bacik (3):
  btrfs: convert the buffer_radix to an xarray
  btrfs: set DIRTY and WRITEBACK tags on the buffer_tree
  btrfs: use buffer radix for extent buffer writeback operations

 fs/btrfs/disk-io.c           |  15 +-
 fs/btrfs/extent_io.c         | 588 ++++++++++++++++++-----------------
 fs/btrfs/extent_io.h         |   2 +
 fs/btrfs/fs.h                |   4 +-
 fs/btrfs/tests/btrfs-tests.c |  28 +-
 fs/btrfs/transaction.c       |   5 +-
 fs/btrfs/zoned.c             |  16 +-
 7 files changed, 331 insertions(+), 327 deletions(-)

-- 
2.48.1


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH v4 1/3] btrfs: convert the buffer_radix to an xarray
  2025-04-24 18:32 [PATCH v4 0/3] btrfs: simplify extent buffer writeback Josef Bacik
@ 2025-04-24 18:32 ` Josef Bacik
  2025-04-25  9:24   ` Filipe Manana
  2025-04-24 18:32 ` [PATCH v4 2/3] btrfs: set DIRTY and WRITEBACK tags on the buffer_tree Josef Bacik
  2025-04-24 18:32 ` [PATCH v4 3/3] btrfs: use buffer radix for extent buffer writeback operations Josef Bacik
  2 siblings, 1 reply; 6+ messages in thread
From: Josef Bacik @ 2025-04-24 18:32 UTC (permalink / raw)
  To: linux-btrfs, kernel-team

In order to fully utilize xarray tagging to improve writeback we need to
convert the buffer_radix to a proper xarray.  This conversion is relatively
straightforward, as the radix tree code uses the xarray underneath.  Using the
xarray directly lets us drop quite a lot of code.

Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/disk-io.c           |  15 ++-
 fs/btrfs/extent_io.c         | 211 +++++++++++++++++------------------
 fs/btrfs/fs.h                |   4 +-
 fs/btrfs/tests/btrfs-tests.c |  28 ++---
 fs/btrfs/zoned.c             |  16 +--
 5 files changed, 124 insertions(+), 150 deletions(-)

diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 59da809b7d57..24c08eb86b7b 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -2762,10 +2762,22 @@ static int __cold init_tree_roots(struct btrfs_fs_info *fs_info)
 	return ret;
 }
 
+/*
+ * lockdep gets confused between our buffer_tree which requires IRQ locking
+ * because we modify marks in the IRQ context, and our delayed inode xarray
+ * which doesn't have these requirements. Use a class key so lockdep doesn't get
+ * them mixed up.
+ */
+static struct lock_class_key buffer_xa_class;
+
 void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
 {
 	INIT_RADIX_TREE(&fs_info->fs_roots_radix, GFP_ATOMIC);
-	INIT_RADIX_TREE(&fs_info->buffer_radix, GFP_ATOMIC);
+
+	/* Use the same flags as mapping->i_pages. */
+	xa_init_flags(&fs_info->buffer_tree, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ACCOUNT);
+	lockdep_set_class(&fs_info->buffer_tree.xa_lock, &buffer_xa_class);
+
 	INIT_LIST_HEAD(&fs_info->trans_list);
 	INIT_LIST_HEAD(&fs_info->dead_roots);
 	INIT_LIST_HEAD(&fs_info->delayed_iputs);
@@ -2777,7 +2789,6 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
 	spin_lock_init(&fs_info->delayed_iput_lock);
 	spin_lock_init(&fs_info->defrag_inodes_lock);
 	spin_lock_init(&fs_info->super_lock);
-	spin_lock_init(&fs_info->buffer_lock);
 	spin_lock_init(&fs_info->unused_bgs_lock);
 	spin_lock_init(&fs_info->treelog_bg_lock);
 	spin_lock_init(&fs_info->zone_active_bgs_lock);
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 6cfd286b8bbc..4f861a8ff695 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1893,19 +1893,24 @@ static void set_btree_ioerr(struct extent_buffer *eb)
  * context.
  */
 static struct extent_buffer *find_extent_buffer_nolock(
-		const struct btrfs_fs_info *fs_info, u64 start)
+		struct btrfs_fs_info *fs_info, u64 start)
 {
+	XA_STATE(xas, &fs_info->buffer_tree, start >> fs_info->sectorsize_bits);
 	struct extent_buffer *eb;
 
+	/*
+	 * We open code xa_load() here because we need to be holding the rcu
+	 * lock when we access the eb.
+	 */
 	rcu_read_lock();
-	eb = radix_tree_lookup(&fs_info->buffer_radix,
-			       start >> fs_info->sectorsize_bits);
-	if (eb && atomic_inc_not_zero(&eb->refs)) {
-		rcu_read_unlock();
-		return eb;
-	}
+	do {
+		eb = xas_load(&xas);
+	} while (xas_retry(&xas, eb));
+
+	if (eb && !atomic_inc_not_zero(&eb->refs))
+		eb = NULL;
 	rcu_read_unlock();
-	return NULL;
+	return eb;
 }
 
 static void end_bbio_meta_write(struct btrfs_bio *bbio)
@@ -2769,11 +2774,10 @@ static void detach_extent_buffer_folio(const struct extent_buffer *eb, struct fo
 
 	if (!btrfs_meta_is_subpage(fs_info)) {
 		/*
-		 * We do this since we'll remove the pages after we've
-		 * removed the eb from the radix tree, so we could race
-		 * and have this page now attached to the new eb.  So
-		 * only clear folio if it's still connected to
-		 * this eb.
+		 * We do this since we'll remove the pages after we've removed
+		 * the eb from the xarray, so we could race and have this page
+		 * now attached to the new eb.  So only clear folio if it's
+		 * still connected to this eb.
 		 */
 		if (folio_test_private(folio) && folio_get_private(folio) == eb) {
 			BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
@@ -2938,9 +2942,9 @@ static void check_buffer_tree_ref(struct extent_buffer *eb)
 {
 	int refs;
 	/*
-	 * The TREE_REF bit is first set when the extent_buffer is added
-	 * to the radix tree. It is also reset, if unset, when a new reference
-	 * is created by find_extent_buffer.
+	 * The TREE_REF bit is first set when the extent_buffer is added to the
+	 * xarray. It is also reset, if unset, when a new reference is created
+	 * by find_extent_buffer.
 	 *
 	 * It is only cleared in two cases: freeing the last non-tree
 	 * reference to the extent_buffer when its STALE bit is set or
@@ -2952,13 +2956,12 @@ static void check_buffer_tree_ref(struct extent_buffer *eb)
 	 * conditions between the calls to check_buffer_tree_ref in those
 	 * codepaths and clearing TREE_REF in try_release_extent_buffer.
 	 *
-	 * The actual lifetime of the extent_buffer in the radix tree is
-	 * adequately protected by the refcount, but the TREE_REF bit and
-	 * its corresponding reference are not. To protect against this
-	 * class of races, we call check_buffer_tree_ref from the codepaths
-	 * which trigger io. Note that once io is initiated, TREE_REF can no
-	 * longer be cleared, so that is the moment at which any such race is
-	 * best fixed.
+	 * The actual lifetime of the extent_buffer in the xarray is adequately
+	 * protected by the refcount, but the TREE_REF bit and its corresponding
+	 * reference are not. To protect against this class of races, we call
+	 * check_buffer_tree_ref from the codepaths which trigger io. Note that
+	 * once io is initiated, TREE_REF can no longer be cleared, so that is
+	 * the moment at which any such race is best fixed.
 	 */
 	refs = atomic_read(&eb->refs);
 	if (refs >= 2 && test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
@@ -3022,23 +3025,26 @@ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info,
 		return ERR_PTR(-ENOMEM);
 	eb->fs_info = fs_info;
 again:
-	ret = radix_tree_preload(GFP_NOFS);
-	if (ret) {
-		exists = ERR_PTR(ret);
+	xa_lock_irq(&fs_info->buffer_tree);
+	exists = __xa_cmpxchg(&fs_info->buffer_tree,
+			      start >> fs_info->sectorsize_bits, NULL, eb,
+			      GFP_NOFS);
+	if (xa_is_err(exists)) {
+		ret = xa_err(exists);
+		xa_unlock_irq(&fs_info->buffer_tree);
+		btrfs_release_extent_buffer(eb);
+		return ERR_PTR(ret);
+	}
+	if (exists) {
+		if (!atomic_inc_not_zero(&exists->refs)) {
+			/* The extent buffer is being freed, retry. */
+			xa_unlock_irq(&fs_info->buffer_tree);
+			goto again;
+		}
+		xa_unlock_irq(&fs_info->buffer_tree);
 		goto free_eb;
 	}
-	spin_lock(&fs_info->buffer_lock);
-	ret = radix_tree_insert(&fs_info->buffer_radix,
-				start >> fs_info->sectorsize_bits, eb);
-	spin_unlock(&fs_info->buffer_lock);
-	radix_tree_preload_end();
-	if (ret == -EEXIST) {
-		exists = find_extent_buffer(fs_info, start);
-		if (exists)
-			goto free_eb;
-		else
-			goto again;
-	}
+	xa_unlock_irq(&fs_info->buffer_tree);
 	check_buffer_tree_ref(eb);
 
 	return eb;
@@ -3059,9 +3065,9 @@ static struct extent_buffer *grab_extent_buffer(struct btrfs_fs_info *fs_info,
 	lockdep_assert_held(&folio->mapping->i_private_lock);
 
 	/*
-	 * For subpage case, we completely rely on radix tree to ensure we
-	 * don't try to insert two ebs for the same bytenr.  So here we always
-	 * return NULL and just continue.
+	 * For subpage case, we completely rely on xarray to ensure we don't try
+	 * to insert two ebs for the same bytenr.  So here we always return NULL
+	 * and just continue.
 	 */
 	if (btrfs_meta_is_subpage(fs_info))
 		return NULL;
@@ -3194,7 +3200,7 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i,
 	/*
 	 * To inform we have an extra eb under allocation, so that
 	 * detach_extent_buffer_page() won't release the folio private when the
-	 * eb hasn't been inserted into radix tree yet.
+	 * eb hasn't been inserted into the xarray yet.
 	 *
 	 * The ref will be decreased when the eb releases the page, in
 	 * detach_extent_buffer_page().  Thus needs no special handling in the
@@ -3328,10 +3334,10 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
 
 		/*
 		 * We can't unlock the pages just yet since the extent buffer
-		 * hasn't been properly inserted in the radix tree, this
-		 * opens a race with btree_release_folio which can free a page
-		 * while we are still filling in all pages for the buffer and
-		 * we could crash.
+		 * hasn't been properly inserted in the xarray, this opens a
+		 * race with btree_release_folio which can free a page while we
+		 * are still filling in all pages for the buffer and we could
+		 * crash.
 		 */
 	}
 	if (uptodate)
@@ -3340,23 +3346,25 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
 	if (page_contig)
 		eb->addr = folio_address(eb->folios[0]) + offset_in_page(eb->start);
 again:
-	ret = radix_tree_preload(GFP_NOFS);
-	if (ret)
+	xa_lock_irq(&fs_info->buffer_tree);
+	existing_eb = __xa_cmpxchg(&fs_info->buffer_tree,
+				   start >> fs_info->sectorsize_bits, NULL, eb,
+				   GFP_NOFS);
+	if (xa_is_err(existing_eb)) {
+		ret = xa_err(existing_eb);
+		xa_unlock_irq(&fs_info->buffer_tree);
 		goto out;
-
-	spin_lock(&fs_info->buffer_lock);
-	ret = radix_tree_insert(&fs_info->buffer_radix,
-				start >> fs_info->sectorsize_bits, eb);
-	spin_unlock(&fs_info->buffer_lock);
-	radix_tree_preload_end();
-	if (ret == -EEXIST) {
-		ret = 0;
-		existing_eb = find_extent_buffer(fs_info, start);
-		if (existing_eb)
-			goto out;
-		else
-			goto again;
 	}
+	if (existing_eb) {
+		if (!atomic_inc_not_zero(&existing_eb->refs)) {
+			xa_unlock_irq(&fs_info->buffer_tree);
+			goto again;
+		}
+		xa_unlock_irq(&fs_info->buffer_tree);
+		goto out;
+	}
+	xa_unlock_irq(&fs_info->buffer_tree);
+
 	/* add one reference for the tree */
 	check_buffer_tree_ref(eb);
 
@@ -3426,10 +3434,19 @@ static int release_extent_buffer(struct extent_buffer *eb)
 
 		spin_unlock(&eb->refs_lock);
 
-		spin_lock(&fs_info->buffer_lock);
-		radix_tree_delete_item(&fs_info->buffer_radix,
-				       eb->start >> fs_info->sectorsize_bits, eb);
-		spin_unlock(&fs_info->buffer_lock);
+		/*
+		 * We're erasing, theoretically there will be no allocations, so
+		 * just use GFP_ATOMIC.
+		 *
+		 * We use cmpxchg instead of erase because we do not know if
+		 * this eb is actually in the tree or not, we could be cleaning
+		 * up an eb that we allocated but never inserted into the tree.
+		 * Thus use cmpxchg to remove it from the tree if it is there,
+		 * or leave the other entry if this isn't in the tree.
+		 */
+		xa_cmpxchg_irq(&fs_info->buffer_tree,
+			       eb->start >> fs_info->sectorsize_bits, eb, NULL,
+			       GFP_ATOMIC);
 
 		btrfs_leak_debug_del_eb(eb);
 		/* Should be safe to release folios at this point. */
@@ -4260,44 +4277,6 @@ void memmove_extent_buffer(const struct extent_buffer *dst,
 	}
 }
 
-#define GANG_LOOKUP_SIZE	16
-static struct extent_buffer *get_next_extent_buffer(
-		const struct btrfs_fs_info *fs_info, struct folio *folio, u64 bytenr)
-{
-	struct extent_buffer *gang[GANG_LOOKUP_SIZE];
-	struct extent_buffer *found = NULL;
-	u64 folio_start = folio_pos(folio);
-	u64 cur = folio_start;
-
-	ASSERT(in_range(bytenr, folio_start, PAGE_SIZE));
-	lockdep_assert_held(&fs_info->buffer_lock);
-
-	while (cur < folio_start + PAGE_SIZE) {
-		int ret;
-		int i;
-
-		ret = radix_tree_gang_lookup(&fs_info->buffer_radix,
-				(void **)gang, cur >> fs_info->sectorsize_bits,
-				min_t(unsigned int, GANG_LOOKUP_SIZE,
-				      PAGE_SIZE / fs_info->nodesize));
-		if (ret == 0)
-			goto out;
-		for (i = 0; i < ret; i++) {
-			/* Already beyond page end */
-			if (gang[i]->start >= folio_start + PAGE_SIZE)
-				goto out;
-			/* Found one */
-			if (gang[i]->start >= bytenr) {
-				found = gang[i];
-				goto out;
-			}
-		}
-		cur = gang[ret - 1]->start + gang[ret - 1]->len;
-	}
-out:
-	return found;
-}
-
 static int try_release_subpage_extent_buffer(struct folio *folio)
 {
 	struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
@@ -4306,21 +4285,31 @@ static int try_release_subpage_extent_buffer(struct folio *folio)
 	int ret;
 
 	while (cur < end) {
+		XA_STATE(xas, &fs_info->buffer_tree,
+			 cur >> fs_info->sectorsize_bits);
 		struct extent_buffer *eb = NULL;
 
 		/*
 		 * Unlike try_release_extent_buffer() which uses folio private
-		 * to grab buffer, for subpage case we rely on radix tree, thus
-		 * we need to ensure radix tree consistency.
+		 * to grab buffer, for subpage case we rely on xarray, thus we
+		 * need to ensure xarray tree consistency.
 		 *
-		 * We also want an atomic snapshot of the radix tree, thus go
+		 * We also want an atomic snapshot of the xarray tree, thus go
 		 * with spinlock rather than RCU.
+		 *
+		 * We open code xa_load() here because we need to be holding the
+		 * xa lock while we're accessing the eb. We could technically
+		 * use xa_load() while holding the lock since it just does an
+		 * rcu_read_lock(), but that would be a bit of a waste.
 		 */
-		spin_lock(&fs_info->buffer_lock);
-		eb = get_next_extent_buffer(fs_info, folio, cur);
+		xa_lock_irq(&fs_info->buffer_tree);
+		do {
+			eb = xas_find(&xas, end >> fs_info->sectorsize_bits);
+		} while (xas_retry(&xas, eb));
+
 		if (!eb) {
 			/* No more eb in the page range after or at cur */
-			spin_unlock(&fs_info->buffer_lock);
+			xa_unlock_irq(&fs_info->buffer_tree);
 			break;
 		}
 		cur = eb->start + eb->len;
@@ -4332,10 +4321,10 @@ static int try_release_subpage_extent_buffer(struct folio *folio)
 		spin_lock(&eb->refs_lock);
 		if (atomic_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) {
 			spin_unlock(&eb->refs_lock);
-			spin_unlock(&fs_info->buffer_lock);
+			xa_unlock_irq(&fs_info->buffer_tree);
 			break;
 		}
-		spin_unlock(&fs_info->buffer_lock);
+		xa_unlock_irq(&fs_info->buffer_tree);
 
 		/*
 		 * If tree ref isn't set then we know the ref on this eb is a
diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
index bcca43046064..ed02d276d908 100644
--- a/fs/btrfs/fs.h
+++ b/fs/btrfs/fs.h
@@ -776,10 +776,8 @@ struct btrfs_fs_info {
 
 	struct btrfs_delayed_root *delayed_root;
 
-	/* Extent buffer radix tree */
-	spinlock_t buffer_lock;
 	/* Entries are eb->start / sectorsize */
-	struct radix_tree_root buffer_radix;
+	struct xarray buffer_tree;
 
 	/* Next backup root to be overwritten */
 	int backup_root_index;
diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c
index 02a915eb51fb..b576897d71cc 100644
--- a/fs/btrfs/tests/btrfs-tests.c
+++ b/fs/btrfs/tests/btrfs-tests.c
@@ -157,9 +157,9 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize)
 
 void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info)
 {
-	struct radix_tree_iter iter;
-	void **slot;
 	struct btrfs_device *dev, *tmp;
+	struct extent_buffer *eb;
+	unsigned long index;
 
 	if (!fs_info)
 		return;
@@ -169,25 +169,13 @@ void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info)
 
 	test_mnt->mnt_sb->s_fs_info = NULL;
 
-	spin_lock(&fs_info->buffer_lock);
-	radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter, 0) {
-		struct extent_buffer *eb;
-
-		eb = radix_tree_deref_slot_protected(slot, &fs_info->buffer_lock);
-		if (!eb)
-			continue;
-		/* Shouldn't happen but that kind of thinking creates CVE's */
-		if (radix_tree_exception(eb)) {
-			if (radix_tree_deref_retry(eb))
-				slot = radix_tree_iter_retry(&iter);
-			continue;
-		}
-		slot = radix_tree_iter_resume(slot, &iter);
-		spin_unlock(&fs_info->buffer_lock);
-		free_extent_buffer_stale(eb);
-		spin_lock(&fs_info->buffer_lock);
+	xa_lock_irq(&fs_info->buffer_tree);
+	xa_for_each(&fs_info->buffer_tree, index, eb) {
+		xa_unlock_irq(&fs_info->buffer_tree);
+		free_extent_buffer(eb);
+		xa_lock_irq(&fs_info->buffer_tree);
 	}
-	spin_unlock(&fs_info->buffer_lock);
+	xa_unlock_irq(&fs_info->buffer_tree);
 
 	btrfs_mapping_tree_free(fs_info);
 	list_for_each_entry_safe(dev, tmp, &fs_info->fs_devices->devices,
diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
index 7b30700ec930..4b59bc480663 100644
--- a/fs/btrfs/zoned.c
+++ b/fs/btrfs/zoned.c
@@ -2171,27 +2171,15 @@ static void wait_eb_writebacks(struct btrfs_block_group *block_group)
 {
 	struct btrfs_fs_info *fs_info = block_group->fs_info;
 	const u64 end = block_group->start + block_group->length;
-	struct radix_tree_iter iter;
 	struct extent_buffer *eb;
-	void __rcu **slot;
+	unsigned long index, start = block_group->start >> fs_info->sectorsize_bits;
 
 	rcu_read_lock();
-	radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter,
-				 block_group->start >> fs_info->sectorsize_bits) {
-		eb = radix_tree_deref_slot(slot);
-		if (!eb)
-			continue;
-		if (radix_tree_deref_retry(eb)) {
-			slot = radix_tree_iter_retry(&iter);
-			continue;
-		}
-
+	xa_for_each_start(&fs_info->buffer_tree, index, eb, start) {
 		if (eb->start < block_group->start)
 			continue;
 		if (eb->start >= end)
 			break;
-
-		slot = radix_tree_iter_resume(slot, &iter);
 		rcu_read_unlock();
 		wait_on_extent_buffer_writeback(eb);
 		rcu_read_lock();
-- 
2.48.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [PATCH v4 2/3] btrfs: set DIRTY and WRITEBACK tags on the buffer_tree
  2025-04-24 18:32 [PATCH v4 0/3] btrfs: simplify extent buffer writeback Josef Bacik
  2025-04-24 18:32 ` [PATCH v4 1/3] btrfs: convert the buffer_radix to an xarray Josef Bacik
@ 2025-04-24 18:32 ` Josef Bacik
  2025-04-24 18:32 ` [PATCH v4 3/3] btrfs: use buffer radix for extent buffer writeback operations Josef Bacik
  2 siblings, 0 replies; 6+ messages in thread
From: Josef Bacik @ 2025-04-24 18:32 UTC (permalink / raw)
  To: linux-btrfs, kernel-team; +Cc: Filipe Manana

In preparation for changing how we do writeout of extent buffers, start
tagging the extent buffer xarray with DIRTY and WRITEBACK to make it
easier to find extent buffers that are in either state.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/extent_io.c | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 4f861a8ff695..65a769329981 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1801,8 +1801,19 @@ static noinline_for_stack bool lock_extent_buffer_for_io(struct extent_buffer *e
 	 */
 	spin_lock(&eb->refs_lock);
 	if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)) {
+		XA_STATE(xas, &fs_info->buffer_tree,
+			 eb->start >> fs_info->sectorsize_bits);
+		unsigned long flags;
+
 		set_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags);
 		spin_unlock(&eb->refs_lock);
+
+		xas_lock_irqsave(&xas, flags);
+		xas_load(&xas);
+		xas_set_mark(&xas, PAGECACHE_TAG_WRITEBACK);
+		xas_clear_mark(&xas, PAGECACHE_TAG_DIRTY);
+		xas_unlock_irqrestore(&xas, flags);
+
 		btrfs_set_header_flag(eb, BTRFS_HEADER_FLAG_WRITTEN);
 		percpu_counter_add_batch(&fs_info->dirty_metadata_bytes,
 					 -eb->len,
@@ -1888,6 +1899,30 @@ static void set_btree_ioerr(struct extent_buffer *eb)
 	}
 }
 
+static void buffer_tree_set_mark(const struct extent_buffer *eb, xa_mark_t mark)
+{
+	struct btrfs_fs_info *fs_info = eb->fs_info;
+	XA_STATE(xas, &fs_info->buffer_tree, eb->start >> fs_info->sectorsize_bits);
+	unsigned long flags;
+
+	xas_lock_irqsave(&xas, flags);
+	xas_load(&xas);
+	xas_set_mark(&xas, mark);
+	xas_unlock_irqrestore(&xas, flags);
+}
+
+static void buffer_tree_clear_mark(const struct extent_buffer *eb, xa_mark_t mark)
+{
+	struct btrfs_fs_info *fs_info = eb->fs_info;
+	XA_STATE(xas, &fs_info->buffer_tree, eb->start >> fs_info->sectorsize_bits);
+	unsigned long flags;
+
+	xas_lock_irqsave(&xas, flags);
+	xas_load(&xas);
+	xas_clear_mark(&xas, mark);
+	xas_unlock_irqrestore(&xas, flags);
+}
+
 /*
  * The endio specific version which won't touch any unsafe spinlock in endio
  * context.
@@ -1925,6 +1960,7 @@ static void end_bbio_meta_write(struct btrfs_bio *bbio)
 		btrfs_meta_folio_clear_writeback(fi.folio, eb);
 	}
 
+	buffer_tree_clear_mark(eb, PAGECACHE_TAG_WRITEBACK);
 	clear_bit(EXTENT_BUFFER_WRITEBACK, &eb->bflags);
 	smp_mb__after_atomic();
 	wake_up_bit(&eb->bflags, EXTENT_BUFFER_WRITEBACK);
@@ -3547,6 +3583,7 @@ void btrfs_clear_buffer_dirty(struct btrfs_trans_handle *trans,
 	if (!test_and_clear_bit(EXTENT_BUFFER_DIRTY, &eb->bflags))
 		return;
 
+	buffer_tree_clear_mark(eb, PAGECACHE_TAG_DIRTY);
 	percpu_counter_add_batch(&fs_info->dirty_metadata_bytes, -eb->len,
 				 fs_info->dirty_metadata_batch);
 
@@ -3595,6 +3632,7 @@ void set_extent_buffer_dirty(struct extent_buffer *eb)
 			folio_lock(eb->folios[0]);
 		for (int i = 0; i < num_extent_folios(eb); i++)
 			btrfs_meta_folio_set_dirty(eb->folios[i], eb);
+		buffer_tree_set_mark(eb, PAGECACHE_TAG_DIRTY);
 		if (subpage)
 			folio_unlock(eb->folios[0]);
 		percpu_counter_add_batch(&eb->fs_info->dirty_metadata_bytes,
-- 
2.48.1


^ permalink raw reply related	[flat|nested] 6+ messages in thread

* [PATCH v4 3/3] btrfs: use buffer radix for extent buffer writeback operations
  2025-04-24 18:32 [PATCH v4 0/3] btrfs: simplify extent buffer writeback Josef Bacik
  2025-04-24 18:32 ` [PATCH v4 1/3] btrfs: convert the buffer_radix to an xarray Josef Bacik
  2025-04-24 18:32 ` [PATCH v4 2/3] btrfs: set DIRTY and WRITEBACK tags on the buffer_tree Josef Bacik
@ 2025-04-24 18:32 ` Josef Bacik
  2025-04-25  9:27   ` Filipe Manana
  2 siblings, 1 reply; 6+ messages in thread
From: Josef Bacik @ 2025-04-24 18:32 UTC (permalink / raw)
  To: linux-btrfs, kernel-team; +Cc: Filipe Manana

Currently we have this ugly back and forth with the btree writeback where we
find the folio, find the eb associated with that folio, and then attempt to
write it back.  This results in two different paths for subpage ebs and
>= pagesize ebs.

Clean this up by adding our own infrastructure around looking up tagged ebs
and writing them out directly.  This allows us to unify the subpage and
>= pagesize IO paths, resulting in a much cleaner writeback path for extent
buffers.
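The resulting control flow reduces to: gather a batch of tagged ebs, write each one out, repeat until the range is exhausted. A simplified userspace model of that loop follows; the flat slot array, integer "ebs", and every function name here are illustrative only, not the kernel xarray or extent buffer API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NSLOTS 16
#define BATCH_MAX 4

struct model_tree {
	int entries[NSLOTS];	/* 0 means an empty slot */
	bool dirty[NSLOTS];
};

struct model_batch {
	unsigned int nr, cur;
	int items[BATCH_MAX];
};

/* Collect up to BATCH_MAX dirty entries in [*start, end]; advance *start. */
static unsigned int gather_dirty(struct model_tree *t, size_t *start,
				 size_t end, struct model_batch *b)
{
	b->nr = 0;
	b->cur = 0;
	while (*start <= end && *start < NSLOTS && b->nr < BATCH_MAX) {
		size_t i = (*start)++;
		if (t->entries[i] && t->dirty[i])
			b->items[b->nr++] = t->entries[i];
	}
	return b->nr;
}

static int model_batch_next(struct model_batch *b)
{
	return b->cur < b->nr ? b->items[b->cur++] : 0;
}

/* The unified loop: batch, write, repeat until the range is exhausted. */
static unsigned int writeback_range(struct model_tree *t, size_t start, size_t end)
{
	struct model_batch batch;
	unsigned int written = 0;
	int item;

	while (gather_dirty(t, &start, end, &batch)) {
		while ((item = model_batch_next(&batch)) != 0) {
			t->dirty[item - 1] = false;	/* "write it out" */
			written++;
		}
	}
	return written;
}
```

One loop serves both subpage and >= pagesize ebs because the lookup is keyed by eb start offset rather than by folio, which is exactly what the tag scan in the real patch provides.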

I ran this through fsperf on a VM with 8 CPUs and 16GiB of RAM.  I used
smallfiles100k, but reduced the files to 1k to make it run faster.  The
results are as follows, with the statistically significant improvements
marked with *; there were no regressions.  fsperf was run with -n 10 for
both runs, so the baseline is the average of 10 runs and the test is the
average of 10 runs.

smallfiles100k results
      metric           baseline       current        stdev            diff
================================================================================
avg_commit_ms               68.58         58.44          3.35   -14.79% *
commits                    270.60        254.70         16.24    -5.88%
dev_read_iops                  48            48             0     0.00%
dev_read_kbytes              1044          1044             0     0.00%
dev_write_iops          866117.90     850028.10      14292.20    -1.86%
dev_write_kbytes      10939976.40   10605701.20     351330.32    -3.06%
elapsed                     49.30            33          1.64   -33.06% *
end_state_mount_ns    41251498.80   35773220.70    2531205.32   -13.28% *
end_state_umount_ns      1.90e+09      1.50e+09   14186226.85   -21.38% *
max_commit_ms                 139        111.60          9.72   -19.71% *
sys_cpu                      4.90          3.86          0.88   -21.29%
write_bw_bytes        42935768.20   64318451.10    1609415.05    49.80% *
write_clat_ns_mean      366431.69     243202.60      14161.98   -33.63% *
write_clat_ns_p50        49203.20         20992        264.40   -57.34% *
write_clat_ns_p99          827392     653721.60      65904.74   -20.99% *
write_io_kbytes           2035940       2035940             0     0.00%
write_iops               10482.37      15702.75        392.92    49.80% *
write_lat_ns_max         1.01e+08      90516129    3910102.06   -10.29% *
write_lat_ns_mean       366556.19     243308.48      14154.51   -33.62% *

As you can see we get about a 33% decrease in runtime, with a 50%
throughput increase, which is pretty significant.

Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
---
 fs/btrfs/extent_io.c   | 339 ++++++++++++++++++++---------------------
 fs/btrfs/extent_io.h   |   2 +
 fs/btrfs/transaction.c |   5 +-
 3 files changed, 169 insertions(+), 177 deletions(-)

diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 65a769329981..6082f37cfca3 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1923,6 +1923,111 @@ static void buffer_tree_clear_mark(const struct extent_buffer *eb, xa_mark_t mar
 	xas_unlock_irqrestore(&xas, flags);
 }
 
+static void buffer_tree_tag_for_writeback(struct btrfs_fs_info *fs_info,
+					    unsigned long start, unsigned long end)
+{
+	XA_STATE(xas, &fs_info->buffer_tree, start);
+	unsigned int tagged = 0;
+	void *eb;
+
+	xas_lock_irq(&xas);
+	xas_for_each_marked(&xas, eb, end, PAGECACHE_TAG_DIRTY) {
+		xas_set_mark(&xas, PAGECACHE_TAG_TOWRITE);
+		if (++tagged % XA_CHECK_SCHED)
+			continue;
+		xas_pause(&xas);
+		xas_unlock_irq(&xas);
+		cond_resched();
+		xas_lock_irq(&xas);
+	}
+	xas_unlock_irq(&xas);
+}
+
+struct eb_batch {
+	unsigned int nr;
+	unsigned int cur;
+	struct extent_buffer *ebs[PAGEVEC_SIZE];
+};
+
+static inline bool eb_batch_add(struct eb_batch *batch, struct extent_buffer *eb)
+{
+	batch->ebs[batch->nr++] = eb;
+	return (batch->nr < PAGEVEC_SIZE);
+}
+
+static inline void eb_batch_init(struct eb_batch *batch)
+{
+	batch->nr = 0;
+	batch->cur = 0;
+}
+
+static inline struct extent_buffer *eb_batch_next(struct eb_batch *batch)
+{
+	if (batch->cur >= batch->nr)
+		return NULL;
+	return batch->ebs[batch->cur++];
+}
+
+static inline void eb_batch_release(struct eb_batch *batch)
+{
+	for (unsigned int i = 0; i < batch->nr; i++)
+		free_extent_buffer(batch->ebs[i]);
+	eb_batch_init(batch);
+}
+
+static inline struct extent_buffer *find_get_eb(struct xa_state *xas, unsigned long max,
+						xa_mark_t mark)
+{
+	struct extent_buffer *eb;
+
+retry:
+	eb = xas_find_marked(xas, max, mark);
+
+	if (xas_retry(xas, eb))
+		goto retry;
+
+	if (!eb)
+		return NULL;
+
+	if (!atomic_inc_not_zero(&eb->refs))
+		goto reset;
+
+	if (unlikely(eb != xas_reload(xas))) {
+		free_extent_buffer(eb);
+		goto reset;
+	}
+
+	return eb;
+reset:
+	xas_reset(xas);
+	goto retry;
+}
+
+static unsigned int buffer_tree_get_ebs_tag(struct btrfs_fs_info *fs_info,
+					    unsigned long *start,
+					    unsigned long end, xa_mark_t tag,
+					    struct eb_batch *batch)
+{
+	XA_STATE(xas, &fs_info->buffer_tree, *start);
+	struct extent_buffer *eb;
+
+	rcu_read_lock();
+	while ((eb = find_get_eb(&xas, end, tag)) != NULL) {
+		if (!eb_batch_add(batch, eb)) {
+			*start = (eb->start + eb->len) >> fs_info->sectorsize_bits;
+			goto out;
+		}
+	}
+	if (end == ULONG_MAX)
+		*start = ULONG_MAX;
+	else
+		*start = end + 1;
+out:
+	rcu_read_unlock();
+
+	return batch->nr;
+}
+
 /*
  * The endio specific version which won't touch any unsafe spinlock in endio
  * context.
@@ -2032,163 +2137,38 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
 }
 
 /*
- * Submit one subpage btree page.
+ * Wait for all eb writeback in the given range to finish.
  *
- * The main difference to submit_eb_page() is:
- * - Page locking
- *   For subpage, we don't rely on page locking at all.
- *
- * - Flush write bio
- *   We only flush bio if we may be unable to fit current extent buffers into
- *   current bio.
- *
- * Return >=0 for the number of submitted extent buffers.
- * Return <0 for fatal error.
+ * @fs_info:	The fs_info for this file system.
+ * @start:	The offset of the range to start waiting on writeback.
+ * @end:	The end of the range, inclusive. This is meant to be used in
+ *		conjunction with wait_marked_extents, so this will usually be
+ *		the_next_eb->start - 1.
  */
-static int submit_eb_subpage(struct folio *folio, struct writeback_control *wbc)
+void btrfs_btree_wait_writeback_range(struct btrfs_fs_info *fs_info, u64 start,
+				      u64 end)
 {
-	struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
-	int submitted = 0;
-	u64 folio_start = folio_pos(folio);
-	int bit_start = 0;
-	int sectors_per_node = fs_info->nodesize >> fs_info->sectorsize_bits;
-	const unsigned int blocks_per_folio = btrfs_blocks_per_folio(fs_info, folio);
+	struct eb_batch batch;
+	unsigned long start_index = start >> fs_info->sectorsize_bits;
+	unsigned long end_index = end >> fs_info->sectorsize_bits;
 
-	/* Lock and write each dirty extent buffers in the range */
-	while (bit_start < blocks_per_folio) {
-		struct btrfs_subpage *subpage = folio_get_private(folio);
+	eb_batch_init(&batch);
+	while (start_index <= end_index) {
 		struct extent_buffer *eb;
-		unsigned long flags;
-		u64 start;
+		unsigned int nr_ebs;
 
-		/*
-		 * Take private lock to ensure the subpage won't be detached
-		 * in the meantime.
-		 */
-		spin_lock(&folio->mapping->i_private_lock);
-		if (!folio_test_private(folio)) {
-			spin_unlock(&folio->mapping->i_private_lock);
+		nr_ebs = buffer_tree_get_ebs_tag(fs_info, &start_index,
+						 end_index,
+						 PAGECACHE_TAG_WRITEBACK,
+						 &batch);
+		if (!nr_ebs)
 			break;
-		}
-		spin_lock_irqsave(&subpage->lock, flags);
-		if (!test_bit(bit_start + btrfs_bitmap_nr_dirty * blocks_per_folio,
-			      subpage->bitmaps)) {
-			spin_unlock_irqrestore(&subpage->lock, flags);
-			spin_unlock(&folio->mapping->i_private_lock);
-			bit_start += sectors_per_node;
-			continue;
-		}
 
-		start = folio_start + bit_start * fs_info->sectorsize;
-		bit_start += sectors_per_node;
-
-		/*
-		 * Here we just want to grab the eb without touching extra
-		 * spin locks, so call find_extent_buffer_nolock().
-		 */
-		eb = find_extent_buffer_nolock(fs_info, start);
-		spin_unlock_irqrestore(&subpage->lock, flags);
-		spin_unlock(&folio->mapping->i_private_lock);
-
-		/*
-		 * The eb has already reached 0 refs thus find_extent_buffer()
-		 * doesn't return it. We don't need to write back such eb
-		 * anyway.
-		 */
-		if (!eb)
-			continue;
-
-		if (lock_extent_buffer_for_io(eb, wbc)) {
-			write_one_eb(eb, wbc);
-			submitted++;
-		}
-		free_extent_buffer(eb);
+		while ((eb = eb_batch_next(&batch)) != NULL)
+			wait_on_extent_buffer_writeback(eb);
+		eb_batch_release(&batch);
+		cond_resched();
 	}
-	return submitted;
-}
-
-/*
- * Submit all page(s) of one extent buffer.
- *
- * @page:	the page of one extent buffer
- * @eb_context:	to determine if we need to submit this page, if current page
- *		belongs to this eb, we don't need to submit
- *
- * The caller should pass each page in their bytenr order, and here we use
- * @eb_context to determine if we have submitted pages of one extent buffer.
- *
- * If we have, we just skip until we hit a new page that doesn't belong to
- * current @eb_context.
- *
- * If not, we submit all the page(s) of the extent buffer.
- *
- * Return >0 if we have submitted the extent buffer successfully.
- * Return 0 if we don't need to submit the page, as it's already submitted by
- * previous call.
- * Return <0 for fatal error.
- */
-static int submit_eb_page(struct folio *folio, struct btrfs_eb_write_context *ctx)
-{
-	struct writeback_control *wbc = ctx->wbc;
-	struct address_space *mapping = folio->mapping;
-	struct extent_buffer *eb;
-	int ret;
-
-	if (!folio_test_private(folio))
-		return 0;
-
-	if (btrfs_meta_is_subpage(folio_to_fs_info(folio)))
-		return submit_eb_subpage(folio, wbc);
-
-	spin_lock(&mapping->i_private_lock);
-	if (!folio_test_private(folio)) {
-		spin_unlock(&mapping->i_private_lock);
-		return 0;
-	}
-
-	eb = folio_get_private(folio);
-
-	/*
-	 * Shouldn't happen and normally this would be a BUG_ON but no point
-	 * crashing the machine for something we can survive anyway.
-	 */
-	if (WARN_ON(!eb)) {
-		spin_unlock(&mapping->i_private_lock);
-		return 0;
-	}
-
-	if (eb == ctx->eb) {
-		spin_unlock(&mapping->i_private_lock);
-		return 0;
-	}
-	ret = atomic_inc_not_zero(&eb->refs);
-	spin_unlock(&mapping->i_private_lock);
-	if (!ret)
-		return 0;
-
-	ctx->eb = eb;
-
-	ret = btrfs_check_meta_write_pointer(eb->fs_info, ctx);
-	if (ret) {
-		if (ret == -EBUSY)
-			ret = 0;
-		free_extent_buffer(eb);
-		return ret;
-	}
-
-	if (!lock_extent_buffer_for_io(eb, wbc)) {
-		free_extent_buffer(eb);
-		return 0;
-	}
-	/* Implies write in zoned mode. */
-	if (ctx->zoned_bg) {
-		/* Mark the last eb in the block group. */
-		btrfs_schedule_zone_finish_bg(ctx->zoned_bg, eb);
-		ctx->zoned_bg->meta_write_pointer += eb->len;
-	}
-	write_one_eb(eb, wbc);
-	free_extent_buffer(eb);
-	return 1;
 }
 
 int btree_write_cache_pages(struct address_space *mapping,
@@ -2199,25 +2179,27 @@ int btree_write_cache_pages(struct address_space *mapping,
 	int ret = 0;
 	int done = 0;
 	int nr_to_write_done = 0;
-	struct folio_batch fbatch;
-	unsigned int nr_folios;
-	pgoff_t index;
-	pgoff_t end;		/* Inclusive */
+	struct eb_batch batch;
+	unsigned int nr_ebs;
+	unsigned long index;
+	unsigned long end;
 	int scanned = 0;
 	xa_mark_t tag;
 
-	folio_batch_init(&fbatch);
+	eb_batch_init(&batch);
 	if (wbc->range_cyclic) {
-		index = mapping->writeback_index; /* Start from prev offset */
+		index = (mapping->writeback_index << PAGE_SHIFT) >> fs_info->sectorsize_bits;
 		end = -1;
+
 		/*
 		 * Start from the beginning does not need to cycle over the
 		 * range, mark it as scanned.
 		 */
 		scanned = (index == 0);
 	} else {
-		index = wbc->range_start >> PAGE_SHIFT;
-		end = wbc->range_end >> PAGE_SHIFT;
+		index = wbc->range_start >> fs_info->sectorsize_bits;
+		end = wbc->range_end >> fs_info->sectorsize_bits;
+
 		scanned = 1;
 	}
 	if (wbc->sync_mode == WB_SYNC_ALL)
@@ -2227,31 +2209,40 @@ int btree_write_cache_pages(struct address_space *mapping,
 	btrfs_zoned_meta_io_lock(fs_info);
 retry:
 	if (wbc->sync_mode == WB_SYNC_ALL)
-		tag_pages_for_writeback(mapping, index, end);
+		buffer_tree_tag_for_writeback(fs_info, index, end);
 	while (!done && !nr_to_write_done && (index <= end) &&
-	       (nr_folios = filemap_get_folios_tag(mapping, &index, end,
-					    tag, &fbatch))) {
-		unsigned i;
+	       (nr_ebs = buffer_tree_get_ebs_tag(fs_info, &index, end, tag,
+						 &batch))) {
+		struct extent_buffer *eb;
 
-		for (i = 0; i < nr_folios; i++) {
-			struct folio *folio = fbatch.folios[i];
+		while ((eb = eb_batch_next(&batch)) != NULL) {
+			ctx.eb = eb;
 
-			ret = submit_eb_page(folio, &ctx);
-			if (ret == 0)
+			ret = btrfs_check_meta_write_pointer(eb->fs_info, &ctx);
+			if (ret) {
+				if (ret == -EBUSY)
+					ret = 0;
+				if (ret) {
+					done = 1;
+					break;
+				}
+				free_extent_buffer(eb);
 				continue;
-			if (ret < 0) {
-				done = 1;
-				break;
 			}
 
-			/*
-			 * the filesystem may choose to bump up nr_to_write.
-			 * We have to make sure to honor the new nr_to_write
-			 * at any time
-			 */
-			nr_to_write_done = wbc->nr_to_write <= 0;
+			if (!lock_extent_buffer_for_io(eb, wbc))
+				continue;
+
+			/* Implies write in zoned mode. */
+			if (ctx.zoned_bg) {
+				/* Mark the last eb in the block group. */
+				btrfs_schedule_zone_finish_bg(ctx.zoned_bg, eb);
+				ctx.zoned_bg->meta_write_pointer += eb->len;
+			}
+			write_one_eb(eb, wbc);
 		}
-		folio_batch_release(&fbatch);
+		nr_to_write_done = wbc->nr_to_write <= 0;
+		eb_batch_release(&batch);
 		cond_resched();
 	}
 	if (!scanned && !done) {
diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
index b344162f790c..b7c1cd0b3c20 100644
--- a/fs/btrfs/extent_io.h
+++ b/fs/btrfs/extent_io.h
@@ -240,6 +240,8 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
 int btrfs_writepages(struct address_space *mapping, struct writeback_control *wbc);
 int btree_write_cache_pages(struct address_space *mapping,
 			    struct writeback_control *wbc);
+void btrfs_btree_wait_writeback_range(struct btrfs_fs_info *fs_info, u64 start,
+				      u64 end);
 void btrfs_readahead(struct readahead_control *rac);
 int set_folio_extent_mapped(struct folio *folio);
 void clear_folio_extent_mapped(struct folio *folio);
diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
index 39e48bf610a1..a538a85ac2bd 100644
--- a/fs/btrfs/transaction.c
+++ b/fs/btrfs/transaction.c
@@ -1155,7 +1155,7 @@ int btrfs_write_marked_extents(struct btrfs_fs_info *fs_info,
 		if (!ret)
 			ret = filemap_fdatawrite_range(mapping, start, end);
 		if (!ret && wait_writeback)
-			ret = filemap_fdatawait_range(mapping, start, end);
+			btrfs_btree_wait_writeback_range(fs_info, start, end);
 		btrfs_free_extent_state(cached_state);
 		if (ret)
 			break;
@@ -1175,7 +1175,6 @@ int btrfs_write_marked_extents(struct btrfs_fs_info *fs_info,
 static int __btrfs_wait_marked_extents(struct btrfs_fs_info *fs_info,
 				       struct extent_io_tree *dirty_pages)
 {
-	struct address_space *mapping = fs_info->btree_inode->i_mapping;
 	struct extent_state *cached_state = NULL;
 	u64 start = 0;
 	u64 end;
@@ -1196,7 +1195,7 @@ static int __btrfs_wait_marked_extents(struct btrfs_fs_info *fs_info,
 		if (ret == -ENOMEM)
 			ret = 0;
 		if (!ret)
-			ret = filemap_fdatawait_range(mapping, start, end);
+			btrfs_btree_wait_writeback_range(fs_info, start, end);
 		btrfs_free_extent_state(cached_state);
 		if (ret)
 			break;
-- 
2.48.1



* Re: [PATCH v4 1/3] btrfs: convert the buffer_radix to an xarray
  2025-04-24 18:32 ` [PATCH v4 1/3] btrfs: convert the buffer_radix to an xarray Josef Bacik
@ 2025-04-25  9:24   ` Filipe Manana
  0 siblings, 0 replies; 6+ messages in thread
From: Filipe Manana @ 2025-04-25  9:24 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs, kernel-team

On Thu, Apr 24, 2025 at 7:34 PM Josef Bacik <josef@toxicpanda.com> wrote:
>
> In order to fully utilize xarray tagging to improve writeback we need to
> convert the buffer_radix to a proper xarray.  This conversion is
> relatively straightforward as the radix code uses the xarray underneath.
> Using xarray directly allows for quite a lot less code.
>
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>
> ---
>  fs/btrfs/disk-io.c           |  15 ++-
>  fs/btrfs/extent_io.c         | 211 +++++++++++++++++------------------
>  fs/btrfs/fs.h                |   4 +-
>  fs/btrfs/tests/btrfs-tests.c |  28 ++---
>  fs/btrfs/zoned.c             |  16 +--
>  5 files changed, 124 insertions(+), 150 deletions(-)
>
> diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
> index 59da809b7d57..24c08eb86b7b 100644
> --- a/fs/btrfs/disk-io.c
> +++ b/fs/btrfs/disk-io.c
> @@ -2762,10 +2762,22 @@ static int __cold init_tree_roots(struct btrfs_fs_info *fs_info)
>         return ret;
>  }
>
> +/*
> + * lockdep gets confused between our buffer_tree which requires IRQ locking
> + * because we modify marks in the IRQ context, and our delayed inode xarray
> + * which doesn't have these requirements. Use a class key so lockdep doesn't get
> + * them mixed up.
> + */
> +static struct lock_class_key buffer_xa_class;
> +
>  void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
>  {
>         INIT_RADIX_TREE(&fs_info->fs_roots_radix, GFP_ATOMIC);
> -       INIT_RADIX_TREE(&fs_info->buffer_radix, GFP_ATOMIC);
> +
> +       /* Use the same flags as mapping->i_pages. */
> +       xa_init_flags(&fs_info->buffer_tree, XA_FLAGS_LOCK_IRQ | XA_FLAGS_ACCOUNT);
> +       lockdep_set_class(&fs_info->buffer_tree.xa_lock, &buffer_xa_class);
> +
>         INIT_LIST_HEAD(&fs_info->trans_list);
>         INIT_LIST_HEAD(&fs_info->dead_roots);
>         INIT_LIST_HEAD(&fs_info->delayed_iputs);
> @@ -2777,7 +2789,6 @@ void btrfs_init_fs_info(struct btrfs_fs_info *fs_info)
>         spin_lock_init(&fs_info->delayed_iput_lock);
>         spin_lock_init(&fs_info->defrag_inodes_lock);
>         spin_lock_init(&fs_info->super_lock);
> -       spin_lock_init(&fs_info->buffer_lock);
>         spin_lock_init(&fs_info->unused_bgs_lock);
>         spin_lock_init(&fs_info->treelog_bg_lock);
>         spin_lock_init(&fs_info->zone_active_bgs_lock);
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 6cfd286b8bbc..4f861a8ff695 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -1893,19 +1893,24 @@ static void set_btree_ioerr(struct extent_buffer *eb)
>   * context.
>   */
>  static struct extent_buffer *find_extent_buffer_nolock(
> -               const struct btrfs_fs_info *fs_info, u64 start)
> +               struct btrfs_fs_info *fs_info, u64 start)
>  {
> +       XA_STATE(xas, &fs_info->buffer_tree, start >> fs_info->sectorsize_bits);
>         struct extent_buffer *eb;
>
> +       /*
> +        * We open code xa_load() here because we need to be holding the rcu
> +        * lock when we access the eb.
> +        */
>         rcu_read_lock();
> -       eb = radix_tree_lookup(&fs_info->buffer_radix,
> -                              start >> fs_info->sectorsize_bits);
> -       if (eb && atomic_inc_not_zero(&eb->refs)) {
> -               rcu_read_unlock();
> -               return eb;
> -       }
> +       do {
> +               eb = xas_load(&xas);
> +       } while (xas_retry(&xas, eb));
> +
> +       if (eb && !atomic_inc_not_zero(&eb->refs))
> +               eb = NULL;
>         rcu_read_unlock();
> -       return NULL;
> +       return eb;
>  }
>
>  static void end_bbio_meta_write(struct btrfs_bio *bbio)
> @@ -2769,11 +2774,10 @@ static void detach_extent_buffer_folio(const struct extent_buffer *eb, struct fo
>
>         if (!btrfs_meta_is_subpage(fs_info)) {
>                 /*
> -                * We do this since we'll remove the pages after we've
> -                * removed the eb from the radix tree, so we could race
> -                * and have this page now attached to the new eb.  So
> -                * only clear folio if it's still connected to
> -                * this eb.
> +                * We do this since we'll remove the pages after we've removed
> +                * the eb from the xarray, so we could race and have this page
> +                * now attached to the new eb.  So only clear folio if it's
> +                * still connected to this eb.
>                  */
>                 if (folio_test_private(folio) && folio_get_private(folio) == eb) {
>                         BUG_ON(test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags));
> @@ -2938,9 +2942,9 @@ static void check_buffer_tree_ref(struct extent_buffer *eb)
>  {
>         int refs;
>         /*
> -        * The TREE_REF bit is first set when the extent_buffer is added
> -        * to the radix tree. It is also reset, if unset, when a new reference
> -        * is created by find_extent_buffer.
> +        * The TREE_REF bit is first set when the extent_buffer is added to the
> +        * xarray. It is also reset, if unset, when a new reference is created
> +        * by find_extent_buffer.
>          *
>          * It is only cleared in two cases: freeing the last non-tree
>          * reference to the extent_buffer when its STALE bit is set or
> @@ -2952,13 +2956,12 @@ static void check_buffer_tree_ref(struct extent_buffer *eb)
>          * conditions between the calls to check_buffer_tree_ref in those
>          * codepaths and clearing TREE_REF in try_release_extent_buffer.
>          *
> -        * The actual lifetime of the extent_buffer in the radix tree is
> -        * adequately protected by the refcount, but the TREE_REF bit and
> -        * its corresponding reference are not. To protect against this
> -        * class of races, we call check_buffer_tree_ref from the codepaths
> -        * which trigger io. Note that once io is initiated, TREE_REF can no
> -        * longer be cleared, so that is the moment at which any such race is
> -        * best fixed.
> +        * The actual lifetime of the extent_buffer in the xarray is adequately
> +        * protected by the refcount, but the TREE_REF bit and its corresponding
> +        * reference are not. To protect against this class of races, we call
> +        * check_buffer_tree_ref from the codepaths which trigger io. Note that
> +        * once io is initiated, TREE_REF can no longer be cleared, so that is
> +        * the moment at which any such race is best fixed.
>          */
>         refs = atomic_read(&eb->refs);
>         if (refs >= 2 && test_bit(EXTENT_BUFFER_TREE_REF, &eb->bflags))
> @@ -3022,23 +3025,26 @@ struct extent_buffer *alloc_test_extent_buffer(struct btrfs_fs_info *fs_info,
>                 return ERR_PTR(-ENOMEM);
>         eb->fs_info = fs_info;
>  again:
> -       ret = radix_tree_preload(GFP_NOFS);
> -       if (ret) {
> -               exists = ERR_PTR(ret);
> +       xa_lock_irq(&fs_info->buffer_tree);
> +       exists = __xa_cmpxchg(&fs_info->buffer_tree,
> +                             start >> fs_info->sectorsize_bits, NULL, eb,
> +                             GFP_NOFS);
> +       if (xa_is_err(exists)) {
> +               ret = xa_err(exists);
> +               xa_unlock_irq(&fs_info->buffer_tree);
> +               btrfs_release_extent_buffer(eb);
> +               return ERR_PTR(ret);
> +       }
> +       if (exists) {
> +               if (!atomic_inc_not_zero(&exists->refs)) {
> +                       /* The extent buffer is being freed, retry. */
> +                       xa_unlock_irq(&fs_info->buffer_tree);
> +                       goto again;
> +               }
> +               xa_unlock_irq(&fs_info->buffer_tree);
>                 goto free_eb;
>         }
> -       spin_lock(&fs_info->buffer_lock);
> -       ret = radix_tree_insert(&fs_info->buffer_radix,
> -                               start >> fs_info->sectorsize_bits, eb);
> -       spin_unlock(&fs_info->buffer_lock);
> -       radix_tree_preload_end();
> -       if (ret == -EEXIST) {
> -               exists = find_extent_buffer(fs_info, start);
> -               if (exists)
> -                       goto free_eb;
> -               else
> -                       goto again;
> -       }
> +       xa_unlock_irq(&fs_info->buffer_tree);
>         check_buffer_tree_ref(eb);
>
>         return eb;
> @@ -3059,9 +3065,9 @@ static struct extent_buffer *grab_extent_buffer(struct btrfs_fs_info *fs_info,
>         lockdep_assert_held(&folio->mapping->i_private_lock);
>
>         /*
> -        * For subpage case, we completely rely on radix tree to ensure we
> -        * don't try to insert two ebs for the same bytenr.  So here we always
> -        * return NULL and just continue.
> +        * For subpage case, we completely rely on xarray to ensure we don't try
> +        * to insert two ebs for the same bytenr.  So here we always return NULL
> +        * and just continue.
>          */
>         if (btrfs_meta_is_subpage(fs_info))
>                 return NULL;
> @@ -3194,7 +3200,7 @@ static int attach_eb_folio_to_filemap(struct extent_buffer *eb, int i,
>         /*
>          * To inform we have an extra eb under allocation, so that
>          * detach_extent_buffer_page() won't release the folio private when the
> -        * eb hasn't been inserted into radix tree yet.
> +        * eb hasn't been inserted into the xarray yet.
>          *
>          * The ref will be decreased when the eb releases the page, in
>          * detach_extent_buffer_page().  Thus needs no special handling in the
> @@ -3328,10 +3334,10 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
>
>                 /*
>                  * We can't unlock the pages just yet since the extent buffer
> -                * hasn't been properly inserted in the radix tree, this
> -                * opens a race with btree_release_folio which can free a page
> -                * while we are still filling in all pages for the buffer and
> -                * we could crash.
> +                * hasn't been properly inserted in the xarray, this opens a
> +                * race with btree_release_folio which can free a page while we
> +                * are still filling in all pages for the buffer and we could
> +                * crash.
>                  */
>         }
>         if (uptodate)
> @@ -3340,23 +3346,25 @@ struct extent_buffer *alloc_extent_buffer(struct btrfs_fs_info *fs_info,
>         if (page_contig)
>                 eb->addr = folio_address(eb->folios[0]) + offset_in_page(eb->start);
>  again:
> -       ret = radix_tree_preload(GFP_NOFS);
> -       if (ret)
> +       xa_lock_irq(&fs_info->buffer_tree);
> +       existing_eb = __xa_cmpxchg(&fs_info->buffer_tree,
> +                                  start >> fs_info->sectorsize_bits, NULL, eb,
> +                                  GFP_NOFS);
> +       if (xa_is_err(existing_eb)) {
> +               ret = xa_err(existing_eb);
> +               xa_unlock_irq(&fs_info->buffer_tree);
>                 goto out;
> -
> -       spin_lock(&fs_info->buffer_lock);
> -       ret = radix_tree_insert(&fs_info->buffer_radix,
> -                               start >> fs_info->sectorsize_bits, eb);
> -       spin_unlock(&fs_info->buffer_lock);
> -       radix_tree_preload_end();
> -       if (ret == -EEXIST) {
> -               ret = 0;
> -               existing_eb = find_extent_buffer(fs_info, start);
> -               if (existing_eb)
> -                       goto out;
> -               else
> -                       goto again;
>         }
> +       if (existing_eb) {
> +               if (!atomic_inc_not_zero(&existing_eb->refs)) {
> +                       xa_unlock_irq(&fs_info->buffer_tree);
> +                       goto again;
> +               }
> +               xa_unlock_irq(&fs_info->buffer_tree);
> +               goto out;
> +       }
> +       xa_unlock_irq(&fs_info->buffer_tree);
> +
>         /* add one reference for the tree */
>         check_buffer_tree_ref(eb);
>
> @@ -3426,10 +3434,19 @@ static int release_extent_buffer(struct extent_buffer *eb)
>
>                 spin_unlock(&eb->refs_lock);
>
> -               spin_lock(&fs_info->buffer_lock);
> -               radix_tree_delete_item(&fs_info->buffer_radix,
> -                                      eb->start >> fs_info->sectorsize_bits, eb);
> -               spin_unlock(&fs_info->buffer_lock);
> +               /*
> +                * We're erasing, theoretically there will be no allocations, so
> +                * just use GFP_ATOMIC.
> +                *
> +                * We use cmpxchg instead of erase because we do not know if
> +                * this eb is actually in the tree or not, we could be cleaning
> +                * up an eb that we allocated but never inserted into the tree.
> +                * Thus use cmpxchg to remove it from the tree if it is there,
> +                * or leave the other entry if this isn't in the tree.
> +                */
> +               xa_cmpxchg_irq(&fs_info->buffer_tree,
> +                              eb->start >> fs_info->sectorsize_bits, eb, NULL,
> +                              GFP_ATOMIC);

So I think you haven't seen my last reply:

https://lore.kernel.org/linux-btrfs/CAL3q7H6Nbar_o0GVGuTr5BVmCRsDUgAJfnOz-hSi5OEi86jejg@mail.gmail.com/

My concern was about whether storing a NULL releases space in the xarray.
Checking the xarray documentation confirms that storing a NULL does
release space, except if the xarray was defined with the XA_FLAGS_ALLOC
flag, which is not the case here, so we're fine.
Perhaps also add a comment noting that storing NULL releases space.

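The semantics in question can be summarised as follows (pseudocode paraphrasing the xarray documentation, not code from the patch):

```
xa_store(&xa, index, NULL, gfp)

    /* plain xarray: behaves like xa_erase(&xa, index);
     * emptied nodes can be freed, so space is released */

    /* XA_FLAGS_ALLOC xarray: a stored NULL is a real
     * (reserved) entry and keeps its slot allocated */
```
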
Anyway, overall it looks good, so:

Reviewed-by: Filipe Manana <fdmanana@suse.com>

Thanks.


>
>                 btrfs_leak_debug_del_eb(eb);
>                 /* Should be safe to release folios at this point. */
> @@ -4260,44 +4277,6 @@ void memmove_extent_buffer(const struct extent_buffer *dst,
>         }
>  }
>
> -#define GANG_LOOKUP_SIZE       16
> -static struct extent_buffer *get_next_extent_buffer(
> -               const struct btrfs_fs_info *fs_info, struct folio *folio, u64 bytenr)
> -{
> -       struct extent_buffer *gang[GANG_LOOKUP_SIZE];
> -       struct extent_buffer *found = NULL;
> -       u64 folio_start = folio_pos(folio);
> -       u64 cur = folio_start;
> -
> -       ASSERT(in_range(bytenr, folio_start, PAGE_SIZE));
> -       lockdep_assert_held(&fs_info->buffer_lock);
> -
> -       while (cur < folio_start + PAGE_SIZE) {
> -               int ret;
> -               int i;
> -
> -               ret = radix_tree_gang_lookup(&fs_info->buffer_radix,
> -                               (void **)gang, cur >> fs_info->sectorsize_bits,
> -                               min_t(unsigned int, GANG_LOOKUP_SIZE,
> -                                     PAGE_SIZE / fs_info->nodesize));
> -               if (ret == 0)
> -                       goto out;
> -               for (i = 0; i < ret; i++) {
> -                       /* Already beyond page end */
> -                       if (gang[i]->start >= folio_start + PAGE_SIZE)
> -                               goto out;
> -                       /* Found one */
> -                       if (gang[i]->start >= bytenr) {
> -                               found = gang[i];
> -                               goto out;
> -                       }
> -               }
> -               cur = gang[ret - 1]->start + gang[ret - 1]->len;
> -       }
> -out:
> -       return found;
> -}
> -
>  static int try_release_subpage_extent_buffer(struct folio *folio)
>  {
>         struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
> @@ -4306,21 +4285,31 @@ static int try_release_subpage_extent_buffer(struct folio *folio)
>         int ret;
>
>         while (cur < end) {
> +               XA_STATE(xas, &fs_info->buffer_tree,
> +                        cur >> fs_info->sectorsize_bits);
>                 struct extent_buffer *eb = NULL;
>
>                 /*
>                  * Unlike try_release_extent_buffer() which uses folio private
> -                * to grab buffer, for subpage case we rely on radix tree, thus
> -                * we need to ensure radix tree consistency.
> +                * to grab buffer, for subpage case we rely on xarray, thus we
> +                * need to ensure xarray tree consistency.
>                  *
> -                * We also want an atomic snapshot of the radix tree, thus go
> +                * We also want an atomic snapshot of the xarray tree, thus go
>                  * with spinlock rather than RCU.
> +                *
> +                * We open code xa_load() here because we need to be holding the
> +                * xa lock while we're accessing the eb. We could technically
> +                * use xa_load() while holding the lock since it just does an
> +                * rcu_read_lock(), but that would be a bit of a waste.
>                  */
> -               spin_lock(&fs_info->buffer_lock);
> -               eb = get_next_extent_buffer(fs_info, folio, cur);
> +               xa_lock_irq(&fs_info->buffer_tree);
> +               do {
> +                       eb = xas_find(&xas, end >> fs_info->sectorsize_bits);
> +               } while (xas_retry(&xas, eb));
> +
>                 if (!eb) {
>                         /* No more eb in the page range after or at cur */
> -                       spin_unlock(&fs_info->buffer_lock);
> +                       xa_unlock_irq(&fs_info->buffer_tree);
>                         break;
>                 }
>                 cur = eb->start + eb->len;
> @@ -4332,10 +4321,10 @@ static int try_release_subpage_extent_buffer(struct folio *folio)
>                 spin_lock(&eb->refs_lock);
>                 if (atomic_read(&eb->refs) != 1 || extent_buffer_under_io(eb)) {
>                         spin_unlock(&eb->refs_lock);
> -                       spin_unlock(&fs_info->buffer_lock);
> +                       xa_unlock_irq(&fs_info->buffer_tree);
>                         break;
>                 }
> -               spin_unlock(&fs_info->buffer_lock);
> +               xa_unlock_irq(&fs_info->buffer_tree);
>
>                 /*
>                  * If tree ref isn't set then we know the ref on this eb is a
> diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
> index bcca43046064..ed02d276d908 100644
> --- a/fs/btrfs/fs.h
> +++ b/fs/btrfs/fs.h
> @@ -776,10 +776,8 @@ struct btrfs_fs_info {
>
>         struct btrfs_delayed_root *delayed_root;
>
> -       /* Extent buffer radix tree */
> -       spinlock_t buffer_lock;
>         /* Entries are eb->start / sectorsize */
> -       struct radix_tree_root buffer_radix;
> +       struct xarray buffer_tree;
>
>         /* Next backup root to be overwritten */
>         int backup_root_index;
> diff --git a/fs/btrfs/tests/btrfs-tests.c b/fs/btrfs/tests/btrfs-tests.c
> index 02a915eb51fb..b576897d71cc 100644
> --- a/fs/btrfs/tests/btrfs-tests.c
> +++ b/fs/btrfs/tests/btrfs-tests.c
> @@ -157,9 +157,9 @@ struct btrfs_fs_info *btrfs_alloc_dummy_fs_info(u32 nodesize, u32 sectorsize)
>
>  void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info)
>  {
> -       struct radix_tree_iter iter;
> -       void **slot;
>         struct btrfs_device *dev, *tmp;
> +       struct extent_buffer *eb;
> +       unsigned long index;
>
>         if (!fs_info)
>                 return;
> @@ -169,25 +169,13 @@ void btrfs_free_dummy_fs_info(struct btrfs_fs_info *fs_info)
>
>         test_mnt->mnt_sb->s_fs_info = NULL;
>
> -       spin_lock(&fs_info->buffer_lock);
> -       radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter, 0) {
> -               struct extent_buffer *eb;
> -
> -               eb = radix_tree_deref_slot_protected(slot, &fs_info->buffer_lock);
> -               if (!eb)
> -                       continue;
> -               /* Shouldn't happen but that kind of thinking creates CVE's */
> -               if (radix_tree_exception(eb)) {
> -                       if (radix_tree_deref_retry(eb))
> -                               slot = radix_tree_iter_retry(&iter);
> -                       continue;
> -               }
> -               slot = radix_tree_iter_resume(slot, &iter);
> -               spin_unlock(&fs_info->buffer_lock);
> -               free_extent_buffer_stale(eb);
> -               spin_lock(&fs_info->buffer_lock);
> +       xa_lock_irq(&fs_info->buffer_tree);
> +       xa_for_each(&fs_info->buffer_tree, index, eb) {
> +               xa_unlock_irq(&fs_info->buffer_tree);
> +               free_extent_buffer(eb);
> +               xa_lock_irq(&fs_info->buffer_tree);
>         }
> -       spin_unlock(&fs_info->buffer_lock);
> +       xa_unlock_irq(&fs_info->buffer_tree);
>
>         btrfs_mapping_tree_free(fs_info);
>         list_for_each_entry_safe(dev, tmp, &fs_info->fs_devices->devices,
> diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
> index 7b30700ec930..4b59bc480663 100644
> --- a/fs/btrfs/zoned.c
> +++ b/fs/btrfs/zoned.c
> @@ -2171,27 +2171,15 @@ static void wait_eb_writebacks(struct btrfs_block_group *block_group)
>  {
>         struct btrfs_fs_info *fs_info = block_group->fs_info;
>         const u64 end = block_group->start + block_group->length;
> -       struct radix_tree_iter iter;
>         struct extent_buffer *eb;
> -       void __rcu **slot;
> +       unsigned long index, start = block_group->start >> fs_info->sectorsize_bits;
>
>         rcu_read_lock();
> -       radix_tree_for_each_slot(slot, &fs_info->buffer_radix, &iter,
> -                                block_group->start >> fs_info->sectorsize_bits) {
> -               eb = radix_tree_deref_slot(slot);
> -               if (!eb)
> -                       continue;
> -               if (radix_tree_deref_retry(eb)) {
> -                       slot = radix_tree_iter_retry(&iter);
> -                       continue;
> -               }
> -
> +       xa_for_each_start(&fs_info->buffer_tree, index, eb, start) {
>                 if (eb->start < block_group->start)
>                         continue;
>                 if (eb->start >= end)
>                         break;
> -
> -               slot = radix_tree_iter_resume(slot, &iter);
>                 rcu_read_unlock();
>                 wait_on_extent_buffer_writeback(eb);
>                 rcu_read_lock();
> --
> 2.48.1
>
>

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: [PATCH v4 3/3] btrfs: use buffer radix for extent buffer writeback operations
  2025-04-24 18:32 ` [PATCH v4 3/3] btrfs: use buffer radix for extent buffer writeback operations Josef Bacik
@ 2025-04-25  9:27   ` Filipe Manana
  0 siblings, 0 replies; 6+ messages in thread
From: Filipe Manana @ 2025-04-25  9:27 UTC (permalink / raw)
  To: Josef Bacik; +Cc: linux-btrfs, kernel-team, Filipe Manana

On Thu, Apr 24, 2025 at 7:34 PM Josef Bacik <josef@toxicpanda.com> wrote:
>
> Currently we have this ugly back and forth with the btree writeback
> where we find the folio, find the eb associated with that folio, and
> then attempt to writeback.  This results in two different paths for
> subpage eb's and >= pagesize eb's.
>
> Clean this up by adding our own infrastructure around looking up tagged
> eb's and writing the eb's out directly.  This allows us to unify the
> subpage and >= pagesize IO paths, resulting in a much cleaner writeback
> path for extent buffers.
>
> I ran this through fsperf on a VM with 8 CPUs and 16GiB of RAM.  I used
> smallfiles100k, but reduced the files to 1k to make it run faster.  The
> results are as follows, with statistically significant improvements
> marked with *; there were no regressions.  fsperf was run with -n 10 for
> both runs, so the baseline is the average of 10 runs and the test is the
> average of 10 runs.
>
> smallfiles100k results
>       metric           baseline       current        stdev            diff
> ================================================================================
> avg_commit_ms               68.58         58.44          3.35   -14.79% *
> commits                    270.60        254.70         16.24    -5.88%
> dev_read_iops                  48            48             0     0.00%
> dev_read_kbytes              1044          1044             0     0.00%
> dev_write_iops          866117.90     850028.10      14292.20    -1.86%
> dev_write_kbytes      10939976.40   10605701.20     351330.32    -3.06%
> elapsed                     49.30            33          1.64   -33.06% *
> end_state_mount_ns    41251498.80   35773220.70    2531205.32   -13.28% *
> end_state_umount_ns      1.90e+09      1.50e+09   14186226.85   -21.38% *
> max_commit_ms                 139        111.60          9.72   -19.71% *
> sys_cpu                      4.90          3.86          0.88   -21.29%
> write_bw_bytes        42935768.20   64318451.10    1609415.05    49.80% *
> write_clat_ns_mean      366431.69     243202.60      14161.98   -33.63% *
> write_clat_ns_p50        49203.20         20992        264.40   -57.34% *
> write_clat_ns_p99          827392     653721.60      65904.74   -20.99% *
> write_io_kbytes           2035940       2035940             0     0.00%
> write_iops               10482.37      15702.75        392.92    49.80% *
> write_lat_ns_max         1.01e+08      90516129    3910102.06   -10.29% *
> write_lat_ns_mean       366556.19     243308.48      14154.51   -33.62% *
>
> As you can see we get about a 33% decrease in runtime and a 50%
> throughput increase, which is pretty significant.
>
> Reviewed-by: Filipe Manana <fdmanana@suse.com>
> Signed-off-by: Josef Bacik <josef@toxicpanda.com>

Btw, the subject still says "buffer radix", should be changed to
"buffer xarray" when added to for-next.

> ---
>  fs/btrfs/extent_io.c   | 339 ++++++++++++++++++++---------------------
>  fs/btrfs/extent_io.h   |   2 +
>  fs/btrfs/transaction.c |   5 +-
>  3 files changed, 169 insertions(+), 177 deletions(-)
>
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 65a769329981..6082f37cfca3 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -1923,6 +1923,111 @@ static void buffer_tree_clear_mark(const struct extent_buffer *eb, xa_mark_t mar
>         xas_unlock_irqrestore(&xas, flags);
>  }
>
> +static void buffer_tree_tag_for_writeback(struct btrfs_fs_info *fs_info,
> +                                           unsigned long start, unsigned long end)
> +{
> +       XA_STATE(xas, &fs_info->buffer_tree, start);
> +       unsigned int tagged = 0;
> +       void *eb;
> +
> +       xas_lock_irq(&xas);
> +       xas_for_each_marked(&xas, eb, end, PAGECACHE_TAG_DIRTY) {
> +               xas_set_mark(&xas, PAGECACHE_TAG_TOWRITE);
> +               if (++tagged % XA_CHECK_SCHED)
> +                       continue;
> +               xas_pause(&xas);
> +               xas_unlock_irq(&xas);
> +               cond_resched();
> +               xas_lock_irq(&xas);
> +       }
> +       xas_unlock_irq(&xas);
> +}
> +
> +struct eb_batch {
> +       unsigned int nr;
> +       unsigned int cur;
> +       struct extent_buffer *ebs[PAGEVEC_SIZE];
> +};
> +
> +static inline bool eb_batch_add(struct eb_batch *batch, struct extent_buffer *eb)
> +{
> +       batch->ebs[batch->nr++] = eb;
> +       return (batch->nr < PAGEVEC_SIZE);
> +}
> +
> +static inline void eb_batch_init(struct eb_batch *batch)
> +{
> +       batch->nr = 0;
> +       batch->cur = 0;
> +}
> +
> +static inline struct extent_buffer *eb_batch_next(struct eb_batch *batch)
> +{
> +       if (batch->cur >= batch->nr)
> +               return NULL;
> +       return batch->ebs[batch->cur++];
> +}
> +
> +static inline void eb_batch_release(struct eb_batch *batch)
> +{
> +       for (unsigned int i = 0; i < batch->nr; i++)
> +               free_extent_buffer(batch->ebs[i]);
> +       eb_batch_init(batch);
> +}
> +
> +static inline struct extent_buffer *find_get_eb(struct xa_state *xas, unsigned long max,
> +                                               xa_mark_t mark)
> +{
> +       struct extent_buffer *eb;
> +
> +retry:
> +       eb = xas_find_marked(xas, max, mark);
> +
> +       if (xas_retry(xas, eb))
> +               goto retry;
> +
> +       if (!eb)
> +               return NULL;
> +
> +       if (!atomic_inc_not_zero(&eb->refs))
> +               goto reset;
> +
> +       if (unlikely(eb != xas_reload(xas))) {
> +               free_extent_buffer(eb);
> +               goto reset;
> +       }
> +
> +       return eb;
> +reset:
> +       xas_reset(xas);
> +       goto retry;
> +}
> +
> +static unsigned int buffer_tree_get_ebs_tag(struct btrfs_fs_info *fs_info,
> +                                           unsigned long *start,
> +                                           unsigned long end, xa_mark_t tag,
> +                                           struct eb_batch *batch)
> +{
> +       XA_STATE(xas, &fs_info->buffer_tree, *start);
> +       struct extent_buffer *eb;
> +
> +       rcu_read_lock();
> +       while ((eb = find_get_eb(&xas, end, tag)) != NULL) {
> +               if (!eb_batch_add(batch, eb)) {
> +                       *start = (eb->start + eb->len) >> fs_info->sectorsize_bits;
> +                       goto out;
> +               }
> +       }
> +       if (end == ULONG_MAX)
> +               *start = ULONG_MAX;
> +       else
> +               *start = end + 1;
> +out:
> +       rcu_read_unlock();
> +
> +       return batch->nr;
> +}
> +
>  /*
>   * The endio specific version which won't touch any unsafe spinlock in endio
>   * context.
> @@ -2032,163 +2137,38 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
>  }
>
>  /*
> - * Submit one subpage btree page.
> + * Wait for all eb writeback in the given range to finish.
>   *
> - * The main difference to submit_eb_page() is:
> - * - Page locking
> - *   For subpage, we don't rely on page locking at all.
> - *
> - * - Flush write bio
> - *   We only flush bio if we may be unable to fit current extent buffers into
> - *   current bio.
> - *
> - * Return >=0 for the number of submitted extent buffers.
> - * Return <0 for fatal error.
> + * @fs_info:   The fs_info for this file system.
> + * @start:     The offset of the range to start waiting on writeback.
> + * @end:       The end of the range, inclusive. This is meant to be used in
> + *             conjunction with wait_marked_extents, so this will usually be
> + *             the_next_eb->start - 1.
>   */
> -static int submit_eb_subpage(struct folio *folio, struct writeback_control *wbc)
> +void btrfs_btree_wait_writeback_range(struct btrfs_fs_info *fs_info, u64 start,
> +                                     u64 end)
>  {
> -       struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
> -       int submitted = 0;
> -       u64 folio_start = folio_pos(folio);
> -       int bit_start = 0;
> -       int sectors_per_node = fs_info->nodesize >> fs_info->sectorsize_bits;
> -       const unsigned int blocks_per_folio = btrfs_blocks_per_folio(fs_info, folio);
> +       struct eb_batch batch;
> +       unsigned long start_index = start >> fs_info->sectorsize_bits;
> +       unsigned long end_index = end >> fs_info->sectorsize_bits;
>
> -       /* Lock and write each dirty extent buffers in the range */
> -       while (bit_start < blocks_per_folio) {
> -               struct btrfs_subpage *subpage = folio_get_private(folio);
> +       eb_batch_init(&batch);
> +       while (start_index <= end_index) {
>                 struct extent_buffer *eb;
> -               unsigned long flags;
> -               u64 start;
> +               unsigned int nr_ebs;
>
> -               /*
> -                * Take private lock to ensure the subpage won't be detached
> -                * in the meantime.
> -                */
> -               spin_lock(&folio->mapping->i_private_lock);
> -               if (!folio_test_private(folio)) {
> -                       spin_unlock(&folio->mapping->i_private_lock);
> +               nr_ebs = buffer_tree_get_ebs_tag(fs_info, &start_index,
> +                                                end_index,
> +                                                PAGECACHE_TAG_WRITEBACK,
> +                                                &batch);
> +               if (!nr_ebs)
>                         break;
> -               }
> -               spin_lock_irqsave(&subpage->lock, flags);
> -               if (!test_bit(bit_start + btrfs_bitmap_nr_dirty * blocks_per_folio,
> -                             subpage->bitmaps)) {
> -                       spin_unlock_irqrestore(&subpage->lock, flags);
> -                       spin_unlock(&folio->mapping->i_private_lock);
> -                       bit_start += sectors_per_node;
> -                       continue;
> -               }
>
> -               start = folio_start + bit_start * fs_info->sectorsize;
> -               bit_start += sectors_per_node;
> -
> -               /*
> -                * Here we just want to grab the eb without touching extra
> -                * spin locks, so call find_extent_buffer_nolock().
> -                */
> -               eb = find_extent_buffer_nolock(fs_info, start);
> -               spin_unlock_irqrestore(&subpage->lock, flags);
> -               spin_unlock(&folio->mapping->i_private_lock);
> -
> -               /*
> -                * The eb has already reached 0 refs thus find_extent_buffer()
> -                * doesn't return it. We don't need to write back such eb
> -                * anyway.
> -                */
> -               if (!eb)
> -                       continue;
> -
> -               if (lock_extent_buffer_for_io(eb, wbc)) {
> -                       write_one_eb(eb, wbc);
> -                       submitted++;
> -               }
> -               free_extent_buffer(eb);
> +               while ((eb = eb_batch_next(&batch)) != NULL)
> +                       wait_on_extent_buffer_writeback(eb);
> +               eb_batch_release(&batch);
> +               cond_resched();
>         }
> -       return submitted;
> -}
> -
> -/*
> - * Submit all page(s) of one extent buffer.
> - *
> - * @page:      the page of one extent buffer
> - * @eb_context:        to determine if we need to submit this page, if current page
> - *             belongs to this eb, we don't need to submit
> - *
> - * The caller should pass each page in their bytenr order, and here we use
> - * @eb_context to determine if we have submitted pages of one extent buffer.
> - *
> - * If we have, we just skip until we hit a new page that doesn't belong to
> - * current @eb_context.
> - *
> - * If not, we submit all the page(s) of the extent buffer.
> - *
> - * Return >0 if we have submitted the extent buffer successfully.
> - * Return 0 if we don't need to submit the page, as it's already submitted by
> - * previous call.
> - * Return <0 for fatal error.
> - */
> -static int submit_eb_page(struct folio *folio, struct btrfs_eb_write_context *ctx)
> -{
> -       struct writeback_control *wbc = ctx->wbc;
> -       struct address_space *mapping = folio->mapping;
> -       struct extent_buffer *eb;
> -       int ret;
> -
> -       if (!folio_test_private(folio))
> -               return 0;
> -
> -       if (btrfs_meta_is_subpage(folio_to_fs_info(folio)))
> -               return submit_eb_subpage(folio, wbc);
> -
> -       spin_lock(&mapping->i_private_lock);
> -       if (!folio_test_private(folio)) {
> -               spin_unlock(&mapping->i_private_lock);
> -               return 0;
> -       }
> -
> -       eb = folio_get_private(folio);
> -
> -       /*
> -        * Shouldn't happen and normally this would be a BUG_ON but no point
> -        * crashing the machine for something we can survive anyway.
> -        */
> -       if (WARN_ON(!eb)) {
> -               spin_unlock(&mapping->i_private_lock);
> -               return 0;
> -       }
> -
> -       if (eb == ctx->eb) {
> -               spin_unlock(&mapping->i_private_lock);
> -               return 0;
> -       }
> -       ret = atomic_inc_not_zero(&eb->refs);
> -       spin_unlock(&mapping->i_private_lock);
> -       if (!ret)
> -               return 0;
> -
> -       ctx->eb = eb;
> -
> -       ret = btrfs_check_meta_write_pointer(eb->fs_info, ctx);
> -       if (ret) {
> -               if (ret == -EBUSY)
> -                       ret = 0;
> -               free_extent_buffer(eb);
> -               return ret;
> -       }
> -
> -       if (!lock_extent_buffer_for_io(eb, wbc)) {
> -               free_extent_buffer(eb);
> -               return 0;
> -       }
> -       /* Implies write in zoned mode. */
> -       if (ctx->zoned_bg) {
> -               /* Mark the last eb in the block group. */
> -               btrfs_schedule_zone_finish_bg(ctx->zoned_bg, eb);
> -               ctx->zoned_bg->meta_write_pointer += eb->len;
> -       }
> -       write_one_eb(eb, wbc);
> -       free_extent_buffer(eb);
> -       return 1;
>  }
>
>  int btree_write_cache_pages(struct address_space *mapping,
> @@ -2199,25 +2179,27 @@ int btree_write_cache_pages(struct address_space *mapping,
>         int ret = 0;
>         int done = 0;
>         int nr_to_write_done = 0;
> -       struct folio_batch fbatch;
> -       unsigned int nr_folios;
> -       pgoff_t index;
> -       pgoff_t end;            /* Inclusive */
> +       struct eb_batch batch;
> +       unsigned int nr_ebs;
> +       unsigned long index;
> +       unsigned long end;
>         int scanned = 0;
>         xa_mark_t tag;
>
> -       folio_batch_init(&fbatch);
> +       eb_batch_init(&batch);
>         if (wbc->range_cyclic) {
> -               index = mapping->writeback_index; /* Start from prev offset */
> +               index = (mapping->writeback_index << PAGE_SHIFT) >> fs_info->sectorsize_bits;
>                 end = -1;
> +
>                 /*
>                  * Start from the beginning does not need to cycle over the
>                  * range, mark it as scanned.
>                  */
>                 scanned = (index == 0);
>         } else {
> -               index = wbc->range_start >> PAGE_SHIFT;
> -               end = wbc->range_end >> PAGE_SHIFT;
> +               index = wbc->range_start >> fs_info->sectorsize_bits;
> +               end = wbc->range_end >> fs_info->sectorsize_bits;
> +
>                 scanned = 1;
>         }
>         if (wbc->sync_mode == WB_SYNC_ALL)
> @@ -2227,31 +2209,40 @@ int btree_write_cache_pages(struct address_space *mapping,
>         btrfs_zoned_meta_io_lock(fs_info);
>  retry:
>         if (wbc->sync_mode == WB_SYNC_ALL)
> -               tag_pages_for_writeback(mapping, index, end);
> +               buffer_tree_tag_for_writeback(fs_info, index, end);
>         while (!done && !nr_to_write_done && (index <= end) &&
> -              (nr_folios = filemap_get_folios_tag(mapping, &index, end,
> -                                           tag, &fbatch))) {
> -               unsigned i;
> +              (nr_ebs = buffer_tree_get_ebs_tag(fs_info, &index, end, tag,
> +                                                &batch))) {
> +               struct extent_buffer *eb;
>
> -               for (i = 0; i < nr_folios; i++) {
> -                       struct folio *folio = fbatch.folios[i];
> +               while ((eb = eb_batch_next(&batch)) != NULL) {
> +                       ctx.eb = eb;
>
> -                       ret = submit_eb_page(folio, &ctx);
> -                       if (ret == 0)
> +                       ret = btrfs_check_meta_write_pointer(eb->fs_info, &ctx);
> +                       if (ret) {
> +                               if (ret == -EBUSY)
> +                                       ret = 0;
> +                               if (ret) {
> +                                       done = 1;
> +                                       break;
> +                               }
> +                               free_extent_buffer(eb);
>                                 continue;
> -                       if (ret < 0) {
> -                               done = 1;
> -                               break;
>                         }
>
> -                       /*
> -                        * the filesystem may choose to bump up nr_to_write.
> -                        * We have to make sure to honor the new nr_to_write
> -                        * at any time
> -                        */
> -                       nr_to_write_done = wbc->nr_to_write <= 0;
> +                       if (!lock_extent_buffer_for_io(eb, wbc))
> +                               continue;
> +
> +                       /* Implies write in zoned mode. */
> +                       if (ctx.zoned_bg) {
> +                               /* Mark the last eb in the block group. */
> +                               btrfs_schedule_zone_finish_bg(ctx.zoned_bg, eb);
> +                               ctx.zoned_bg->meta_write_pointer += eb->len;
> +                       }
> +                       write_one_eb(eb, wbc);
>                 }
> -               folio_batch_release(&fbatch);
> +               nr_to_write_done = wbc->nr_to_write <= 0;
> +               eb_batch_release(&batch);
>                 cond_resched();
>         }
>         if (!scanned && !done) {
> diff --git a/fs/btrfs/extent_io.h b/fs/btrfs/extent_io.h
> index b344162f790c..b7c1cd0b3c20 100644
> --- a/fs/btrfs/extent_io.h
> +++ b/fs/btrfs/extent_io.h
> @@ -240,6 +240,8 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
>  int btrfs_writepages(struct address_space *mapping, struct writeback_control *wbc);
>  int btree_write_cache_pages(struct address_space *mapping,
>                             struct writeback_control *wbc);
> +void btrfs_btree_wait_writeback_range(struct btrfs_fs_info *fs_info, u64 start,
> +                                     u64 end);
>  void btrfs_readahead(struct readahead_control *rac);
>  int set_folio_extent_mapped(struct folio *folio);
>  void clear_folio_extent_mapped(struct folio *folio);
> diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
> index 39e48bf610a1..a538a85ac2bd 100644
> --- a/fs/btrfs/transaction.c
> +++ b/fs/btrfs/transaction.c
> @@ -1155,7 +1155,7 @@ int btrfs_write_marked_extents(struct btrfs_fs_info *fs_info,
>                 if (!ret)
>                         ret = filemap_fdatawrite_range(mapping, start, end);
>                 if (!ret && wait_writeback)
> -                       ret = filemap_fdatawait_range(mapping, start, end);
> +                       btrfs_btree_wait_writeback_range(fs_info, start, end);
>                 btrfs_free_extent_state(cached_state);
>                 if (ret)
>                         break;
> @@ -1175,7 +1175,6 @@ int btrfs_write_marked_extents(struct btrfs_fs_info *fs_info,
>  static int __btrfs_wait_marked_extents(struct btrfs_fs_info *fs_info,
>                                        struct extent_io_tree *dirty_pages)
>  {
> -       struct address_space *mapping = fs_info->btree_inode->i_mapping;
>         struct extent_state *cached_state = NULL;
>         u64 start = 0;
>         u64 end;
> @@ -1196,7 +1195,7 @@ static int __btrfs_wait_marked_extents(struct btrfs_fs_info *fs_info,
>                 if (ret == -ENOMEM)
>                         ret = 0;
>                 if (!ret)
> -                       ret = filemap_fdatawait_range(mapping, start, end);
> +                       btrfs_btree_wait_writeback_range(fs_info, start, end);
>                 btrfs_free_extent_state(cached_state);
>                 if (ret)
>                         break;
> --
> 2.48.1
>
>


Thread overview: 6+ messages
2025-04-24 18:32 [PATCH v4 0/3] btrfs: simplify extent buffer writeback Josef Bacik
2025-04-24 18:32 ` [PATCH v4 1/3] btrfs: convert the buffer_radix to an xarray Josef Bacik
2025-04-25  9:24   ` Filipe Manana
2025-04-24 18:32 ` [PATCH v4 2/3] btrfs: set DIRTY and WRITEBACK tags on the buffer_tree Josef Bacik
2025-04-24 18:32 ` [PATCH v4 3/3] btrfs: use buffer radix for extent buffer writeback operations Josef Bacik
2025-04-25  9:27   ` Filipe Manana
