* [PATCH v2 1/8] btrfs: prevent inline data extents read from touching blocks beyond its range
2025-02-27 5:54 [PATCH v2 0/8] btrfs: make subpage handling be feature full Qu Wenruo
@ 2025-02-27 5:54 ` Qu Wenruo
2025-02-27 5:54 ` [PATCH v2 2/8] btrfs: subpage: do not hold subpage spin lock when clearing folio writeback Qu Wenruo
` (7 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Qu Wenruo @ 2025-02-27 5:54 UTC (permalink / raw)
To: linux-btrfs; +Cc: Filipe Manana
Currently reading an inline data extent will zero out all the remaining
range in the page.
This is not yet causing problems even for block size < page size
(subpage) cases because:
1) An inline data extent always starts at file offset 0
Meaning at page read, we always read the inline extent first, before
any other blocks in the page. Then later blocks are properly read out
and re-fill the zeroed out ranges.
2) Currently btrfs will read out the whole page if a buffered write is
not page aligned
So a page is either fully uptodate at buffered write time (covers the
whole page), or we will read out the whole page first.
Meaning there is nothing to lose for such an inline extent read.
But it's still not ideal:
- We're zeroing out the page twice
One done by read_inline_extent()/uncompress_inline(), one done by
btrfs_do_readpage() for ranges beyond i_size.
- We're touching blocks that don't belong to the inline extent
In the incoming patches, we can have a partially uptodate folio, where
some dirty blocks exist while the folio is not fully uptodate:
The page size is 16K and block size is 4K:
0 4K 8K 12K 16K
| | |/////////| |
And range [8K, 12K) is dirtied by a buffered write, the remaining
blocks are not uptodate.
If range [0, 4K) contains an inline data extent, and we try to read
the whole page, the current behavior will overwrite range [8K, 12K)
with zero and cause data loss.
So to make the behavior more consistent, and in preparation for future
changes, limit inline data extent reads to only zero out the range
inside the first block, not the whole page.
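The limited zeroing can be sketched in userspace (the buffer sizes and the helper name here are illustrative, not the kernel API):

```c
#include <string.h>

#define SKETCH_PAGE_SIZE   16384u
#define SKETCH_SECTORSIZE   4096u

/*
 * Userspace sketch of the fixed behavior: copy the inline extent into
 * the page buffer, then zero only the rest of the first block.  Blocks
 * beyond the first are left untouched, so any dirty data there
 * survives.
 */
static void read_inline_sketch(unsigned char *page, const unsigned char *data,
			       unsigned int copy_size)
{
	/* Mirrors min_t(u64, sectorsize, ram_bytes) after the fix. */
	if (copy_size > SKETCH_SECTORSIZE)
		copy_size = SKETCH_SECTORSIZE;
	memcpy(page, data, copy_size);
	if (copy_size < SKETCH_SECTORSIZE)
		memset(page + copy_size, 0, SKETCH_SECTORSIZE - copy_size);
	/* Range [SKETCH_SECTORSIZE, SKETCH_PAGE_SIZE) is not touched. */
}
```

With the old PAGE_SIZE-based clamp, the memset would have wiped everything up to SKETCH_PAGE_SIZE, destroying any dirty blocks after the first one.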
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/inode.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index c432ccfba56e..f06b1c78c399 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -6788,6 +6788,7 @@ static noinline int uncompress_inline(struct btrfs_path *path,
{
int ret;
struct extent_buffer *leaf = path->nodes[0];
+ const u32 sectorsize = leaf->fs_info->sectorsize;
char *tmp;
size_t max_size;
unsigned long inline_size;
@@ -6804,7 +6805,7 @@ static noinline int uncompress_inline(struct btrfs_path *path,
read_extent_buffer(leaf, tmp, ptr, inline_size);
- max_size = min_t(unsigned long, PAGE_SIZE, max_size);
+ max_size = min_t(unsigned long, sectorsize, max_size);
ret = btrfs_decompress(compress_type, tmp, folio, 0, inline_size,
max_size);
@@ -6816,14 +6817,15 @@ static noinline int uncompress_inline(struct btrfs_path *path,
* cover that region here.
*/
- if (max_size < PAGE_SIZE)
- folio_zero_range(folio, max_size, PAGE_SIZE - max_size);
+ if (max_size < sectorsize)
+ folio_zero_range(folio, max_size, sectorsize - max_size);
kfree(tmp);
return ret;
}
static int read_inline_extent(struct btrfs_path *path, struct folio *folio)
{
+ const u32 sectorsize = path->nodes[0]->fs_info->sectorsize;
struct btrfs_file_extent_item *fi;
void *kaddr;
size_t copy_size;
@@ -6838,14 +6840,14 @@ static int read_inline_extent(struct btrfs_path *path, struct folio *folio)
if (btrfs_file_extent_compression(path->nodes[0], fi) != BTRFS_COMPRESS_NONE)
return uncompress_inline(path, folio, fi);
- copy_size = min_t(u64, PAGE_SIZE,
+ copy_size = min_t(u64, sectorsize,
btrfs_file_extent_ram_bytes(path->nodes[0], fi));
kaddr = kmap_local_folio(folio, 0);
read_extent_buffer(path->nodes[0], kaddr,
btrfs_file_extent_inline_start(fi), copy_size);
kunmap_local(kaddr);
- if (copy_size < PAGE_SIZE)
- folio_zero_range(folio, copy_size, PAGE_SIZE - copy_size);
+ if (copy_size < sectorsize)
+ folio_zero_range(folio, copy_size, sectorsize - copy_size);
return 0;
}
--
2.48.1
* [PATCH v2 2/8] btrfs: subpage: do not hold subpage spin lock when clearing folio writeback
From: Qu Wenruo @ 2025-02-27 5:54 UTC (permalink / raw)
To: linux-btrfs; +Cc: stable
[BUG]
When testing subpage block size btrfs (block size < page size), I hit
the following spin lock hang on x86_64, with the experimental 2K block
size support:
<TASK>
_raw_spin_lock_irq+0x2f/0x40
wait_subpage_spinlock+0x69/0x80 [btrfs]
btrfs_release_folio+0x46/0x70 [btrfs]
folio_unmap_invalidate+0xcb/0x250
folio_end_writeback+0x127/0x1b0
btrfs_subpage_clear_writeback+0xef/0x140 [btrfs]
end_bbio_data_write+0x13a/0x3c0 [btrfs]
btrfs_bio_end_io+0x6f/0xc0 [btrfs]
process_one_work+0x156/0x310
worker_thread+0x252/0x390
? __pfx_worker_thread+0x10/0x10
kthread+0xef/0x250
? finish_task_switch.isra.0+0x8a/0x250
? __pfx_kthread+0x10/0x10
ret_from_fork+0x34/0x50
? __pfx_kthread+0x10/0x10
ret_from_fork_asm+0x1a/0x30
</TASK>
[CAUSE]
It's a self-deadlock with the following sequence:
btrfs_subpage_clear_writeback()
|- spin_lock_irqsave(&subpage->lock);
|- folio_end_writeback()
|- folio_end_dropbehind_write()
|- folio_unmap_invalidate()
|- btrfs_release_folio()
|- wait_subpage_spinlock()
|- spin_lock_irq(&subpage->lock);
!! DEADLOCK !!
We're trying to acquire the same spin lock already held by ourselves.
[FIX]
Move the folio_end_writeback() call out of the spin lock critical
section.
And since we no longer have both the bitmap operations and the
writeback flag clearing happening inside the critical section, we must
do extra checks to make sure only the last caller clearing the
writeback bitmap can clear the folio writeback flag.
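The shape of the fix is the classic decide-under-lock, act-outside-lock pattern. A userspace sketch of just that pattern, using a plain mutex in place of the kernel spin lock and a hypothetical bitmap plus stub callback (none of these names are the kernel API):

```c
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical per-folio state: one writeback bit per block. */
static pthread_mutex_t sketch_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long writeback_bitmap;
static int end_writeback_calls;

/*
 * Stub for folio_end_writeback(), which in the kernel may re-enter
 * btrfs and deadlock if called while sketch_lock is held.
 */
static void folio_end_writeback_stub(void)
{
	end_writeback_calls++;
}

static void clear_writeback_sketch(unsigned long mask)
{
	bool was_writeback;
	bool last = false;

	pthread_mutex_lock(&sketch_lock);
	was_writeback = (writeback_bitmap != 0);
	writeback_bitmap &= ~mask;
	/* Only the caller clearing the last bit may end writeback. */
	if (writeback_bitmap == 0 && was_writeback)
		last = true;
	pthread_mutex_unlock(&sketch_lock);
	/* The potentially re-entrant call happens outside the lock. */
	if (last)
		folio_end_writeback_stub();
}
```

The `was_writeback` check is what keeps the callback from firing twice once the call has moved outside the critical section.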
Fixes: 3470da3b7d87 ("btrfs: subpage: introduce helpers for writeback status")
Cc: stable@vger.kernel.org # 5.15+
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/subpage.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
index ebb40f506921..bedb5fac579b 100644
--- a/fs/btrfs/subpage.c
+++ b/fs/btrfs/subpage.c
@@ -466,15 +466,21 @@ void btrfs_subpage_clear_writeback(const struct btrfs_fs_info *fs_info,
struct btrfs_subpage *subpage = folio_get_private(folio);
unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
writeback, start, len);
+ bool was_writeback;
+ bool last = false;
unsigned long flags;
spin_lock_irqsave(&subpage->lock, flags);
+ was_writeback = !subpage_test_bitmap_all_zero(fs_info, folio, writeback);
bitmap_clear(subpage->bitmaps, start_bit, len >> fs_info->sectorsize_bits);
- if (subpage_test_bitmap_all_zero(fs_info, folio, writeback)) {
+ if (subpage_test_bitmap_all_zero(fs_info, folio, writeback) &&
+ was_writeback) {
ASSERT(folio_test_writeback(folio));
- folio_end_writeback(folio);
+ last = true;
}
spin_unlock_irqrestore(&subpage->lock, flags);
+ if (last)
+ folio_end_writeback(folio);
}
void btrfs_subpage_set_ordered(const struct btrfs_fs_info *fs_info,
--
2.48.1
* Re: [PATCH v2 2/8] btrfs: subpage: do not hold subpage spin lock when clearing folio writeback
From: Filipe Manana @ 2025-02-27 12:42 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs, stable
On Thu, Feb 27, 2025 at 5:56 AM Qu Wenruo <wqu@suse.com> wrote:
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Looks good, thanks.
* [PATCH v2 3/8] btrfs: fix the qgroup data free range for inline data extents
From: Qu Wenruo @ 2025-02-27 5:54 UTC (permalink / raw)
To: linux-btrfs; +Cc: Filipe Manana
Inside __cow_file_range_inline(), since the inlined data no longer
takes any data space, we need to free up the reserved space.
However the code is still using the old page size == sector size
assumption, and will not handle the subpage case well.
Thankfully it is not going to cause any problems because we have two
extra safety nets:
- Inline data extent creation is disabled for sector size < page size
cases for now
But it won't stay that way for long.
- btrfs_qgroup_free_data() will only clear ranges which are already
reserved
So even if we pass a range larger than what we need, it should still
be fine, especially since there is only reserved space for a single
block at file offset 0 for an inline data extent.
But just for the sake of consistency, fix the call site to use
sectorsize instead of page size.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/inode.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index f06b1c78c399..52802a3a078c 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -672,7 +672,7 @@ static noinline int __cow_file_range_inline(struct btrfs_inode *inode,
* And at reserve time, it's always aligned to page size, so
* just free one page here.
*/
- btrfs_qgroup_free_data(inode, NULL, 0, PAGE_SIZE, NULL);
+ btrfs_qgroup_free_data(inode, NULL, 0, fs_info->sectorsize, NULL);
btrfs_free_path(path);
btrfs_end_transaction(trans);
return ret;
--
2.48.1
* [PATCH v2 4/8] btrfs: introduce a read path dedicated extent lock helper
From: Qu Wenruo @ 2025-02-27 5:54 UTC (permalink / raw)
To: linux-btrfs
Currently we're using btrfs_lock_and_flush_ordered_range() for both
btrfs_read_folio() and btrfs_readahead(), but it has one critical
problem for future subpage optimizations:
- It will call btrfs_start_ordered_extent() to writeback the involved
folios
But remember we're calling btrfs_lock_and_flush_ordered_range() at
read paths, meaning the folio is already locked by read path.
If we really trigger writeback for those already locked folios, this
will lead to a deadlock, since writeback cannot get the folio lock.
Such a deadlock is currently prevented by the fact that btrfs always
keeps a dirty folio uptodate, by either dirtying all blocks of the
folio, or reading the whole folio before dirtying.
To prepare for the incoming patch which allows btrfs to skip full folio
read if the buffered write is block aligned, we have to start by solving
the possible deadlock first.
Instead of blindly calling btrfs_start_ordered_extent(), introduce a
new helper, which is smarter in the following ways:
- Only wait and flush the ordered extent if
* The folio doesn't even have private set
* Part of the blocks of the ordered extent are not uptodate
This can happen by:
* The folio writeback finished, then get invalidated.
There are a lot of reasons that a folio can get invalidated,
from memory pressure to direct IO (which invalidates all folios
of the range).
But the OE has not yet finished
We have to wait for the ordered extent, as the OE may contain
to-be-inserted data checksum.
Without waiting, our read can fail due to the missing csum.
But either way, the OE should not need any extra flush inside the
locked folio range.
- Skip the ordered extent completely if
* All the blocks are dirty
This happens when OE creation is caused by a folio writeback whose
file offset is before our folio.
E.g. 16K page size and 4K block size
0 8K 16K 24K 32K
|//////////////||///////| |
The writeback of folio 0 created an OE for range [0, 24K), but since
folio 16K is not fully uptodate, a read is triggered for folio 16K.
The writeback will never happen (we're holding the folio lock for
read), nor will the OE finish.
Thus we must skip the range.
* All the blocks are uptodate
This happens when the writeback finished, but OE not yet finished.
Since the blocks are already uptodate, we can skip the OE range.
The new helper, lock_extents_for_read(), will loop over the target
range:
1) Lock the full range
2) If there is no ordered extent in the remaining range, exit
3) If there is an ordered extent that we can skip
Skip to the end of the OE, and continue checking
We do not trigger writeback nor wait for the OE.
4) If there is an ordered extent that we can not skip
Unlock the whole extent range and start the ordered extent.
And also update btrfs_start_ordered_extent() to add two more parameters:
@nowriteback_start and @nowriteback_len, to prevent triggering flush for
a certain range.
This will allow us to handle the following case properly in the future:
16K page size, 4K btrfs block size:
0 4K 8K 12K 16K 20K 24K 28K 32K
|/////////////////////////////||////////////////| | |
|<-------------------- OE 2 ------------------->| |< OE 1 >|
The folio has been written back before, thus we have an OE at
[28K, 32K).
Although OE 1 has finished its IO, it is not yet removed from the
IO tree.
The folio got invalidated after writeback completed and before the
ordered extent finished.
And [16K, 24K) range is dirty and uptodate, caused by a block aligned
buffered write (and future enhancements allowing btrfs to skip full
folio read for such case).
But writeback for folio 0 has begun, thus it generated OE 2, covering
range [0, 24K).
Since the full folio 16K is not uptodate, if we want to read the folio,
the existing btrfs_lock_and_flush_ordered_range() will deadlock, by:
btrfs_read_folio()
| Folio 16K is already locked
|- btrfs_lock_and_flush_ordered_range()
|- btrfs_start_ordered_extent() for range [16K, 24K)
|- filemap_fdatawrite_range() for range [16K, 24K)
|- extent_write_cache_pages()
folio_lock() on folio 16K, deadlock.
But now we will have the following sequence:
btrfs_read_folio()
| Folio 16K is already locked
|- lock_extents_for_read()
|- can_skip_ordered_extent() for range [16K, 24K)
| Returned true, the range [16K, 24K) will be skipped.
|- can_skip_ordered_extent() for range [28K, 32K)
| Returned false.
|- btrfs_start_ordered_extent() for range [28K, 32K) with
[16K, 32K) as no writeback range
No writeback for folio 16K will be triggered.
And there will be no more possible deadlock on the same folio.
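The no-writeback range handling in btrfs_start_ordered_extent_nowriteback() amounts to punching a hole in the flushed range. A small userspace sketch of just that range arithmetic (struct and function names are illustrative; ranges are inclusive, matching the [start, end] convention of filemap_fdatawrite_range() in the diff below):

```c
struct sketch_range {
	unsigned long long start;	/* inclusive */
	unsigned long long end;		/* inclusive */
};

/*
 * Compute which subranges of [start, end] would be flushed, given a
 * no-writeback window [nw_start, nw_start + nw_len).  nw_len == 0
 * means no window, flush everything.  Returns the number of ranges
 * written to out[] (0, 1 or 2).
 */
static int sketch_flush_ranges(unsigned long long start,
			       unsigned long long end,
			       unsigned long long nw_start,
			       unsigned int nw_len,
			       struct sketch_range out[2])
{
	int n = 0;

	if (!nw_len) {
		out[n++] = (struct sketch_range){ start, end };
		return n;
	}
	if (start < nw_start)
		out[n++] = (struct sketch_range){ start, nw_start - 1 };
	if (nw_start + nw_len < end)
		out[n++] = (struct sketch_range){ nw_start + nw_len, end };
	return n;
}
```

In the example above, starting OE 1 ([28K, 32K)) with [16K, 32K) as the no-writeback window yields zero flush ranges, which is exactly why folio 16K never gets written back under its own read lock.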
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/extent_io.c | 187 +++++++++++++++++++++++++++++++++++++++-
fs/btrfs/ordered-data.c | 23 +++--
fs/btrfs/ordered-data.h | 8 +-
3 files changed, 210 insertions(+), 8 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 7b0aa332aedc..3968ecbb727d 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -1081,6 +1081,189 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
return 0;
}
+/*
+ * Check if we can skip waiting for the @ordered extent covering the block
+ * at @fileoff.
+ *
+ * @fileoff: Both input and output.
+ * Input as the file offset where the check should start at.
+ * Output as where the next check should start at,
+ * if the function returns true.
+ *
+ * Return true if we can skip to @fileoff. The caller needs to check
+ * the new @fileoff value to make sure it covers the full range, before
+ * skipping the full OE.
+ *
+ * Return false if we must wait for the ordered extent.
+ */
+static bool can_skip_one_ordered_range(struct btrfs_inode *inode,
+ struct btrfs_ordered_extent *ordered,
+ u64 *fileoff)
+{
+ const struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ struct folio *folio;
+ const u32 blocksize = fs_info->sectorsize;
+ u64 cur = *fileoff;
+ bool ret;
+
+ folio = filemap_get_folio(inode->vfs_inode.i_mapping,
+ cur >> PAGE_SHIFT);
+
+ /*
+ * We should have locked the folio(s) for range [start, end], thus
+ * there must be a folio and it must be locked.
+ */
+ ASSERT(!IS_ERR(folio));
+ ASSERT(folio_test_locked(folio));
+
+ /*
+ * We have several cases for the folio and OE combination:
+ *
+ * 1) Folio has no private flag
+ * The OE has all its IO done but not yet finished, and folio got
+ * invalidated.
+ *
+ * Have to wait for the OE to finish, as it may contain the
+ * to-be-inserted data checksum.
+ * Without the data checksum inserted into the csum tree, read
+ * will just fail with missing csum.
+ */
+ if (!folio_test_private(folio)) {
+ ret = false;
+ goto out;
+ }
+
+ /*
+ * 2) The first block is DIRTY.
+ *
+ * This means the OE is created by some other folios whose file pos is
+ * before us. And since we are holding the folio lock, the writeback of
+ * this folio can not start.
+ *
+ * We must skip the whole OE, because it will never start until
+ * we finished our folio read and unlocked the folio.
+ */
+ if (btrfs_folio_test_dirty(fs_info, folio, cur, blocksize)) {
+ u64 range_len = min(folio_pos(folio) + folio_size(folio),
+ ordered->file_offset + ordered->num_bytes) - cur;
+
+ ret = true;
+ /*
+ * At least inside the folio, all the remaining blocks should
+ * also be dirty.
+ */
+ ASSERT(btrfs_folio_test_dirty(fs_info, folio, cur, range_len));
+ *fileoff = ordered->file_offset + ordered->num_bytes;
+ goto out;
+ }
+
+ /*
+ * 3) The first block is uptodate.
+ *
+ * At least the first block can be skipped, but we are still
+ * not fully sure. E.g. if the OE has some other folios in
+ * the range that can not be skipped.
+ * So we return true and update @fileoff to the OE/folio boundary.
+ */
+ if (btrfs_folio_test_uptodate(fs_info, folio, cur, blocksize)) {
+ u64 range_len = min(folio_pos(folio) + folio_size(folio),
+ ordered->file_offset + ordered->num_bytes) - cur;
+
+ /*
+ * The whole range to the OE end or folio boundary should also
+ * be uptodate.
+ */
+ ASSERT(btrfs_folio_test_uptodate(fs_info, folio, cur, range_len));
+ ret = true;
+ *fileoff = cur + range_len;
+ goto out;
+ }
+
+ /*
+ * 4) The first block is not uptodate.
+ *
+ * This means the folio is invalidated after the writeback is finished,
+ * but by some other operations (e.g. block aligned buffered write) the
+ * folio is inserted into filemap.
+ * Very much the same as case 1).
+ */
+ ret = false;
+out:
+ folio_put(folio);
+ return ret;
+}
+
+static bool can_skip_ordered_extent(struct btrfs_inode *inode,
+ struct btrfs_ordered_extent *ordered,
+ u64 start, u64 end)
+{
+ const u64 range_end = min(end, ordered->file_offset + ordered->num_bytes - 1);
+ u64 cur = max(start, ordered->file_offset);
+
+ while (cur < range_end) {
+ bool can_skip;
+
+ can_skip = can_skip_one_ordered_range(inode, ordered, &cur);
+ if (!can_skip)
+ return false;
+ }
+ return true;
+}
+
+/*
+ * To make sure we get a stable view of extent maps for the involved range.
+ * This is for folio read paths (read and readahead), thus involved range
+ * should have all the folios locked.
+ */
+static void lock_extents_for_read(struct btrfs_inode *inode, u64 start, u64 end,
+ struct extent_state **cached_state)
+{
+ u64 cur_pos;
+
+ /* Caller must provide a valid @cached_state. */
+ ASSERT(cached_state);
+
+ /*
+ * The range must at least be page aligned, as all read paths
+ * are folio based.
+ */
+ ASSERT(IS_ALIGNED(start, PAGE_SIZE));
+ ASSERT(IS_ALIGNED(end + 1, PAGE_SIZE));
+
+again:
+ lock_extent(&inode->io_tree, start, end, cached_state);
+ cur_pos = start;
+ while (cur_pos < end) {
+ struct btrfs_ordered_extent *ordered;
+
+ ordered = btrfs_lookup_ordered_range(inode, cur_pos,
+ end - cur_pos + 1);
+ /*
+ * No ordered extents in the range, and we hold the
+ * extent lock, no one can modify the extent maps
+ * in the range, we're safe to return.
+ */
+ if (!ordered)
+ break;
+
+ /* Check if we can skip waiting for the whole OE. */
+ if (can_skip_ordered_extent(inode, ordered, start, end)) {
+ cur_pos = min(ordered->file_offset + ordered->num_bytes,
+ end + 1);
+ btrfs_put_ordered_extent(ordered);
+ continue;
+ }
+
+ /* Now wait for the OE to finish. */
+ unlock_extent(&inode->io_tree, start, end,
+ cached_state);
+ btrfs_start_ordered_extent_nowriteback(ordered, start, end + 1 - start);
+ btrfs_put_ordered_extent(ordered);
+ /* We have unlocked the whole range, restart from the beginning. */
+ goto again;
+ }
+}
+
int btrfs_read_folio(struct file *file, struct folio *folio)
{
struct btrfs_inode *inode = folio_to_inode(folio);
@@ -1091,7 +1274,7 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
struct extent_map *em_cached = NULL;
int ret;
- btrfs_lock_and_flush_ordered_range(inode, start, end, &cached_state);
+ lock_extents_for_read(inode, start, end, &cached_state);
ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
unlock_extent(&inode->io_tree, start, end, &cached_state);
@@ -2380,7 +2563,7 @@ void btrfs_readahead(struct readahead_control *rac)
struct extent_map *em_cached = NULL;
u64 prev_em_start = (u64)-1;
- btrfs_lock_and_flush_ordered_range(inode, start, end, &cached_state);
+ lock_extents_for_read(inode, start, end, &cached_state);
while ((folio = readahead_folio(rac)) != NULL)
btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
index 4aca7475fd82..fd33217e4b27 100644
--- a/fs/btrfs/ordered-data.c
+++ b/fs/btrfs/ordered-data.c
@@ -842,10 +842,12 @@ void btrfs_wait_ordered_roots(struct btrfs_fs_info *fs_info, u64 nr,
/*
* Start IO and wait for a given ordered extent to finish.
*
- * Wait on page writeback for all the pages in the extent and the IO completion
- * code to insert metadata into the btree corresponding to the extent.
+ * Wait on page writeback for all the pages in the extent but not in
+ * [@nowriteback_start, @nowriteback_start + @nowriteback_len) and the
+ * IO completion code to insert metadata into the btree corresponding to the extent.
*/
-void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry)
+void btrfs_start_ordered_extent_nowriteback(struct btrfs_ordered_extent *entry,
+ u64 nowriteback_start, u32 nowriteback_len)
{
u64 start = entry->file_offset;
u64 end = start + entry->num_bytes - 1;
@@ -865,8 +867,19 @@ void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry)
* start IO on any dirty ones so the wait doesn't stall waiting
* for the flusher thread to find them
*/
- if (!test_bit(BTRFS_ORDERED_DIRECT, &entry->flags))
- filemap_fdatawrite_range(inode->vfs_inode.i_mapping, start, end);
+ if (!test_bit(BTRFS_ORDERED_DIRECT, &entry->flags)) {
+ if (!nowriteback_len) {
+ filemap_fdatawrite_range(inode->vfs_inode.i_mapping, start, end);
+ } else {
+ if (start < nowriteback_start)
+ filemap_fdatawrite_range(inode->vfs_inode.i_mapping, start,
+ nowriteback_start - 1);
+ if (nowriteback_start + nowriteback_len < end)
+ filemap_fdatawrite_range(inode->vfs_inode.i_mapping,
+ nowriteback_start + nowriteback_len,
+ end);
+ }
+ }
if (!freespace_inode)
btrfs_might_wait_for_event(inode->root->fs_info, btrfs_ordered_extent);
diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
index be36083297a7..1e6b0b182b29 100644
--- a/fs/btrfs/ordered-data.h
+++ b/fs/btrfs/ordered-data.h
@@ -192,7 +192,13 @@ void btrfs_add_ordered_sum(struct btrfs_ordered_extent *entry,
struct btrfs_ordered_sum *sum);
struct btrfs_ordered_extent *btrfs_lookup_ordered_extent(struct btrfs_inode *inode,
u64 file_offset);
-void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry);
+void btrfs_start_ordered_extent_nowriteback(struct btrfs_ordered_extent *entry,
+ u64 nowriteback_start, u32 nowriteback_len);
+static inline void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry)
+{
+ return btrfs_start_ordered_extent_nowriteback(entry, 0, 0);
+}
+
int btrfs_wait_ordered_range(struct btrfs_inode *inode, u64 start, u64 len);
struct btrfs_ordered_extent *
btrfs_lookup_first_ordered_extent(struct btrfs_inode *inode, u64 file_offset);
--
2.48.1
* Re: [PATCH v2 4/8] btrfs: introduce a read path dedicated extent lock helper
From: Filipe Manana @ 2025-02-27 12:47 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Thu, Feb 27, 2025 at 5:56 AM Qu Wenruo <wqu@suse.com> wrote:
>
> Currently we're using btrfs_lock_and_flush_ordered_range() for both
> btrfs_read_folio() and btrfs_readahead(), but it has one critical
> problem for future subpage optimizations:
>
> - It will call btrfs_start_ordered_extent() to writeback the involved
> folios
>
> But remember we're calling btrfs_lock_and_flush_ordered_range() at
> read paths, meaning the folio is already locked by read path.
>
> If we really trigger writeback for those already locked folios, this
> will lead to a deadlock and writeback can not get the folio lock.
>
> Such dead lock is prevented by the fact that btrfs always keeps a
> dirty folio also uptodate, by either dirtying all blocks of the folio,
> or read the whole folio before dirtying.
>
> To prepare for the incoming patch which allows btrfs to skip full folio
> read if the buffered write is block aligned, we have to start by solving
> the possible deadlock first.
>
> Instead of blindly calling btrfs_start_ordered_extent(), introduce a
> newer helper, which is smarter in the following ways:
>
> - Only wait and flush the ordered extent if
> * The folio doesn't even have private set
> * Part of the blocks of the ordered extent are not uptodate
>
> This can happen by:
> * The folio writeback finished, then get invalidated.
> There are a lot of reason that a folio can get invalidated,
> from memory pressure to direct IO (which invalidates all folios
> of the range).
> But OE not yet finished
>
> We have to wait for the ordered extent, as the OE may contain
> to-be-inserted data checksum.
> Without waiting, our read can fail due to the missing csum.
>
> But either way, the OE should not need any extra flush inside the
> locked folio range.
>
> - Skip the ordered extent completely if
> * All the blocks are dirty
> This happens when OE creation is caused by a folio writeback whose
> file offset is before our folio.
>
> E.g. 16K page size and 4K block size
>
> 0 8K 16K 24K 32K
> |//////////////||///////| |
>
> The writeback of folio 0 created an OE for range [0, 24K), but since
> folio 16K is not fully uptodate, a read is triggered for folio 16K.
>
> The writeback will never happen (we're holding the folio lock for
> read), nor will the OE finish.
>
> Thus we must skip the range.
>
> * All the blocks are uptodate
> This happens when the writeback finished, but OE not yet finished.
>
> Since the blocks are already uptodate, we can skip the OE range.
>
> The newer helper, lock_extents_for_read() will do a loop for the target
> range by:
>
> 1) Lock the full range
>
> 2) If there is no ordered extent in the remaining range, exit
>
> 3) If there is an ordered extent that we can skip
> Skip to the end of the OE, and continue checking
> We do not trigger writeback nor wait for the OE.
>
> 4) If there is an ordered extent that we can not skip
> Unlock the whole extent range and start the ordered extent.
>
> And also update btrfs_start_ordered_extent() to add two more parameters:
> @nowriteback_start and @nowriteback_len, to prevent triggering flush for
> a certain range.
>
> This will allow us to handle the following case properly in the future:
>
> 16K page size, 4K btrfs block size:
>
> 0 4K 8K 12K 16K 20K 24K 28K 32K
> |/////////////////////////////||////////////////| | |
> |<-------------------- OE 2 ------------------->| |< OE 1 >|
>
> The folio has been written back before, thus we have an OE at
> [28K, 32K).
> Although the OE 1 finished its IO, the OE is not yet removed from IO
> tree.
> The folio got invalidated after writeback completed and before the
> ordered extent finished.
>
> And [16K, 24K) range is dirty and uptodate, caused by a block aligned
> buffered write (and future enhancements allowing btrfs to skip full
> folio read for such case).
> But writeback for folio 0 has begun, thus it generated OE 2, covering
> range [0, 24K).
>
> Since the full folio 16K is not uptodate, if we want to read the folio,
> the existing btrfs_lock_and_flush_ordered_range() will deadlock:
>
> btrfs_read_folio()
> | Folio 16K is already locked
> |- btrfs_lock_and_flush_ordered_range()
> |- btrfs_start_ordered_extent() for range [16K, 24K)
> |- filemap_fdatawrite_range() for range [16K, 24K)
> |- extent_write_cache_pages()
> folio_lock() on folio 16K, deadlock.
>
> But now we will have the following sequence:
>
> btrfs_read_folio()
> | Folio 16K is already locked
> |- lock_extents_for_read()
> |- can_skip_ordered_extent() for range [16K, 24K)
> | Returned true, the range [16K, 24K) will be skipped.
> |- can_skip_ordered_extent() for range [28K, 32K)
> | Returned false.
> |- btrfs_start_ordered_extent() for range [28K, 32K) with
> [16K, 32K) as no writeback range
> No writeback for folio 16K will be triggered.
>
> And there will be no more possible deadlock on the same folio.
>
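[Editor's note: the per-block skip decision described above can be sketched as a small user-space model. This is illustrative only; the enum, struct, and function names below are invented for the sketch and are not the kernel API.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of the per-block decision, mirroring cases 1)-4). */
enum block_state { NO_FOLIO_PRIVATE, BLOCK_DIRTY, BLOCK_UPTODATE, BLOCK_STALE };

struct oe { uint64_t file_offset; uint64_t num_bytes; };

/*
 * Returns true if the block at *fileoff lets us skip ahead; on success
 * *fileoff is advanced past the skippable range (the whole OE for dirty
 * blocks, the OE/folio boundary for uptodate blocks). @folio_end is the
 * position just past the folio containing *fileoff.
 */
static bool can_skip_one(enum block_state state, const struct oe *oe,
			 uint64_t folio_end, uint64_t *fileoff)
{
	uint64_t oe_end = oe->file_offset + oe->num_bytes;

	switch (state) {
	case BLOCK_DIRTY:
		/*
		 * The OE was created by an earlier folio; it cannot make
		 * progress while we hold this folio lock, skip it entirely.
		 */
		*fileoff = oe_end;
		return true;
	case BLOCK_UPTODATE:
		/*
		 * Skip to the nearer of OE end and folio end; later folios
		 * must be checked separately.
		 */
		*fileoff = folio_end < oe_end ? folio_end : oe_end;
		return true;
	default:
		/*
		 * No folio private, or folio invalidated after writeback:
		 * must wait for the OE (csums may not be inserted yet).
		 */
		return false;
	}
}
```

For the 16K-page example above (OE [0, 24K), check starting at 16K), a dirty first block skips straight to 24K.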
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
> fs/btrfs/extent_io.c | 187 +++++++++++++++++++++++++++++++++++++++-
> fs/btrfs/ordered-data.c | 23 +++--
> fs/btrfs/ordered-data.h | 8 +-
> 3 files changed, 210 insertions(+), 8 deletions(-)
>
> diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
> index 7b0aa332aedc..3968ecbb727d 100644
> --- a/fs/btrfs/extent_io.c
> +++ b/fs/btrfs/extent_io.c
> @@ -1081,6 +1081,189 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
> return 0;
> }
>
> +/*
> + * Check if we can skip waiting for the @ordered extent covering the block
> + * at @fileoff.
> + *
> + * @fileoff: Both input and output.
> + * Input as the file offset where the check should start at.
> + * Output as where the next check should start at,
> + * if the function returns true.
> + *
> + * Return true if we can skip to @fileoff. The caller needs to check
> + * the new @fileoff value to make sure it covers the full range, before
> + * skipping the full OE.
> + *
> + * Return false if we must wait for the ordered extent.
> + */
> +static bool can_skip_one_ordered_range(struct btrfs_inode *inode,
> + struct btrfs_ordered_extent *ordered,
> + u64 *fileoff)
> +{
> + const struct btrfs_fs_info *fs_info = inode->root->fs_info;
> + struct folio *folio;
> + const u32 blocksize = fs_info->sectorsize;
> + u64 cur = *fileoff;
> + bool ret;
> +
> + folio = filemap_get_folio(inode->vfs_inode.i_mapping,
> + cur >> PAGE_SHIFT);
> +
> + /*
> + * We should have locked the folio(s) for range [start, end], thus
> + * there must be a folio and it must be locked.
> + */
> + ASSERT(!IS_ERR(folio));
> + ASSERT(folio_test_locked(folio));
> +
> + /*
> + * We have several cases for the folio and OE combination:
> + *
> + * 1) Folio has no private flag
> + * The OE has all its IO done but not yet finished, and folio got
> + * invalidated.
> + *
> + * Have to wait for the OE to finish, as it may contain the
> + * to-be-inserted data checksum.
> + * Without the data checksum inserted into the csum tree, read
> + * will just fail with missing csum.
> + */
> + if (!folio_test_private(folio)) {
> + ret = false;
> + goto out;
> + }
> +
> + /*
> + * 2) The first block is DIRTY.
> + *
> + * This means the OE was created by some other folio whose file position is
> + * before ours. And since we are holding the folio lock, the writeback of
> + * this folio cannot start.
> + *
> + * We must skip the whole OE, because it will never start until
> + * we finish our folio read and unlock the folio.
> + */
> + if (btrfs_folio_test_dirty(fs_info, folio, cur, blocksize)) {
> + u64 range_len = min(folio_pos(folio) + folio_size(folio),
> + ordered->file_offset + ordered->num_bytes) - cur;
> +
> + ret = true;
> + /*
> + * At least inside the folio, all the remaining blocks should
> + * also be dirty.
> + */
> + ASSERT(btrfs_folio_test_dirty(fs_info, folio, cur, range_len));
> + *fileoff = ordered->file_offset + ordered->num_bytes;
> + goto out;
> + }
> +
> + /*
> + * 3) The first block is uptodate.
> + *
> + * At least the first block can be skipped, but we are still
> + * not fully sure, e.g. the OE may cover other folios in the
> + * range that can not be skipped.
> + * So we return true and update @fileoff to the OE/folio boundary.
> + */
> + if (btrfs_folio_test_uptodate(fs_info, folio, cur, blocksize)) {
> + u64 range_len = min(folio_pos(folio) + folio_size(folio),
> + ordered->file_offset + ordered->num_bytes) - cur;
> +
> + /*
> + * The whole range to the OE end or folio boundary should also
> + * be uptodate.
> + */
> + ASSERT(btrfs_folio_test_uptodate(fs_info, folio, cur, range_len));
> + ret = true;
> + *fileoff = cur + range_len;
> + goto out;
> + }
> +
> + /*
> + * 4) The first block is not uptodate.
> + *
> + * This means the folio was invalidated after the writeback finished,
> + * but was later re-inserted into the filemap by some other operation
> + * (e.g. a block aligned buffered write).
> + * Very much the same as case 1).
> + */
> + ret = false;
> +out:
> + folio_put(folio);
> + return ret;
> +}
> +
> +static bool can_skip_ordered_extent(struct btrfs_inode *inode,
> + struct btrfs_ordered_extent *ordered,
> + u64 start, u64 end)
> +{
> + const u64 range_end = min(end, ordered->file_offset + ordered->num_bytes - 1);
> + u64 cur = max(start, ordered->file_offset);
> +
> + while (cur < range_end) {
> + bool can_skip;
> +
> + can_skip = can_skip_one_ordered_range(inode, ordered, &cur);
> + if (!can_skip)
> + return false;
> + }
> + return true;
> +}
> +
> +/*
> + * To make sure we get a stable view of extent maps for the involved range.
> + * This is for folio read paths (read and readahead), thus involved range
> + * should have all the folios locked.
> + */
> +static void lock_extents_for_read(struct btrfs_inode *inode, u64 start, u64 end,
> + struct extent_state **cached_state)
> +{
> + u64 cur_pos;
> +
> + /* Caller must provide a valid @cached_state. */
> + ASSERT(cached_state);
> +
> + /*
> + * The range must at least be page aligned, as all read paths
> + * are folio based.
> + */
> + ASSERT(IS_ALIGNED(start, PAGE_SIZE));
> + ASSERT(IS_ALIGNED(end + 1, PAGE_SIZE));
> +
> +again:
> + lock_extent(&inode->io_tree, start, end, cached_state);
> + cur_pos = start;
> + while (cur_pos < end) {
> + struct btrfs_ordered_extent *ordered;
> +
> + ordered = btrfs_lookup_ordered_range(inode, cur_pos,
> + end - cur_pos + 1);
> + /*
> + * No ordered extents in the range, and we hold the
> + * extent lock, no one can modify the extent maps
> + * in the range, we're safe to return.
> + */
> + if (!ordered)
> + break;
> +
> + /* Check if we can skip waiting for the whole OE. */
> + if (can_skip_ordered_extent(inode, ordered, start, end)) {
> + cur_pos = min(ordered->file_offset + ordered->num_bytes,
> + end + 1);
> + btrfs_put_ordered_extent(ordered);
> + continue;
> + }
> +
> + /* Now wait for the OE to finish. */
> + unlock_extent(&inode->io_tree, start, end,
> + cached_state);
Btw, this fits all in one line, making things more readable.
Can be done when committed to for-next.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Looks good, thanks.
> + btrfs_start_ordered_extent_nowriteback(ordered, start, end + 1 - start);
> + btrfs_put_ordered_extent(ordered);
> + /* We have unlocked the whole range, restart from the beginning. */
> + goto again;
> + }
> +}
> +
> int btrfs_read_folio(struct file *file, struct folio *folio)
> {
> struct btrfs_inode *inode = folio_to_inode(folio);
> @@ -1091,7 +1274,7 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
> struct extent_map *em_cached = NULL;
> int ret;
>
> - btrfs_lock_and_flush_ordered_range(inode, start, end, &cached_state);
> + lock_extents_for_read(inode, start, end, &cached_state);
> ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
> unlock_extent(&inode->io_tree, start, end, &cached_state);
>
> @@ -2380,7 +2563,7 @@ void btrfs_readahead(struct readahead_control *rac)
> struct extent_map *em_cached = NULL;
> u64 prev_em_start = (u64)-1;
>
> - btrfs_lock_and_flush_ordered_range(inode, start, end, &cached_state);
> + lock_extents_for_read(inode, start, end, &cached_state);
>
> while ((folio = readahead_folio(rac)) != NULL)
> btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
> diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
> index 4aca7475fd82..fd33217e4b27 100644
> --- a/fs/btrfs/ordered-data.c
> +++ b/fs/btrfs/ordered-data.c
> @@ -842,10 +842,12 @@ void btrfs_wait_ordered_roots(struct btrfs_fs_info *fs_info, u64 nr,
> /*
> * Start IO and wait for a given ordered extent to finish.
> *
> - * Wait on page writeback for all the pages in the extent and the IO completion
> - * code to insert metadata into the btree corresponding to the extent.
> + * Wait on page writeback for all the pages in the extent but not in
> + * [@nowriteback_start, @nowriteback_start + @nowriteback_len) and the
> + * IO completion code to insert metadata into the btree corresponding to the extent.
> */
> -void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry)
> +void btrfs_start_ordered_extent_nowriteback(struct btrfs_ordered_extent *entry,
> + u64 nowriteback_start, u32 nowriteback_len)
> {
> u64 start = entry->file_offset;
> u64 end = start + entry->num_bytes - 1;
> @@ -865,8 +867,19 @@ void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry)
> * start IO on any dirty ones so the wait doesn't stall waiting
> * for the flusher thread to find them
> */
> - if (!test_bit(BTRFS_ORDERED_DIRECT, &entry->flags))
> - filemap_fdatawrite_range(inode->vfs_inode.i_mapping, start, end);
> + if (!test_bit(BTRFS_ORDERED_DIRECT, &entry->flags)) {
> + if (!nowriteback_len) {
> + filemap_fdatawrite_range(inode->vfs_inode.i_mapping, start, end);
> + } else {
> + if (start < nowriteback_start)
> + filemap_fdatawrite_range(inode->vfs_inode.i_mapping, start,
> + nowriteback_start - 1);
> + if (nowriteback_start + nowriteback_len < end)
> + filemap_fdatawrite_range(inode->vfs_inode.i_mapping,
> + nowriteback_start + nowriteback_len,
> + end);
> + }
> + }
>
> if (!freespace_inode)
> btrfs_might_wait_for_event(inode->root->fs_info, btrfs_ordered_extent);
> diff --git a/fs/btrfs/ordered-data.h b/fs/btrfs/ordered-data.h
> index be36083297a7..1e6b0b182b29 100644
> --- a/fs/btrfs/ordered-data.h
> +++ b/fs/btrfs/ordered-data.h
> @@ -192,7 +192,13 @@ void btrfs_add_ordered_sum(struct btrfs_ordered_extent *entry,
> struct btrfs_ordered_sum *sum);
> struct btrfs_ordered_extent *btrfs_lookup_ordered_extent(struct btrfs_inode *inode,
> u64 file_offset);
> -void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry);
> +void btrfs_start_ordered_extent_nowriteback(struct btrfs_ordered_extent *entry,
> + u64 nowriteback_start, u32 nowriteback_len);
> +static inline void btrfs_start_ordered_extent(struct btrfs_ordered_extent *entry)
> +{
> + return btrfs_start_ordered_extent_nowriteback(entry, 0, 0);
> +}
> +
> int btrfs_wait_ordered_range(struct btrfs_inode *inode, u64 start, u64 len);
> struct btrfs_ordered_extent *
> btrfs_lookup_first_ordered_extent(struct btrfs_inode *inode, u64 file_offset);
> --
> 2.48.1
>
>
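[Editor's note: the flush-range splitting introduced in btrfs_start_ordered_extent_nowriteback() above can be modeled as a small pure function. This is a sketch, not the kernel code; the function name is invented, and it assumes inclusive range ends, matching filemap_fdatawrite_range().]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Flush [start, end] except [nw_start, nw_start + nw_len). Writes up to
 * two sub-ranges (inclusive ends) into r_start/r_end and returns how many.
 */
static int split_flush_ranges(uint64_t start, uint64_t end,
			      uint64_t nw_start, uint32_t nw_len,
			      uint64_t r_start[2], uint64_t r_end[2])
{
	int n = 0;

	if (nw_len == 0) {
		/* No excluded range, flush everything. */
		r_start[0] = start;
		r_end[0] = end;
		return 1;
	}
	if (start < nw_start) {
		/* Part before the no-writeback range. */
		r_start[n] = start;
		r_end[n] = nw_start - 1;
		n++;
	}
	if (nw_start + nw_len < end) {
		/* Part after the no-writeback range. */
		r_start[n] = nw_start + nw_len;
		r_end[n] = end;
		n++;
	}
	return n;
}
```

For the example in the commit message (OE 2 covering [0, 24K), no-writeback range [16K, 32K)), only [0, 16K) gets flushed, so the locked folio at 16K is never touched.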
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH v2 5/8] btrfs: make btrfs_do_readpage() to do block-by-block read
2025-02-27 5:54 [PATCH v2 0/8] btrfs: make subpage handling be feature full Qu Wenruo
` (3 preceding siblings ...)
2025-02-27 5:54 ` [PATCH v2 4/8] btrfs: introduce a read path dedicated extent lock helper Qu Wenruo
@ 2025-02-27 5:54 ` Qu Wenruo
2025-02-27 12:48 ` Filipe Manana
2025-02-27 5:54 ` [PATCH v2 6/8] btrfs: allow buffered write to avoid full page read if it's block aligned Qu Wenruo
` (3 subsequent siblings)
8 siblings, 1 reply; 16+ messages in thread
From: Qu Wenruo @ 2025-02-27 5:54 UTC (permalink / raw)
To: linux-btrfs
Currently, if a btrfs filesystem has its block size (previously called
sector size) smaller than the page size, btrfs_do_readpage() will handle
the range extent by extent. This is good for performance, as it doesn't
need to look up the same extent map again and again.
(Although get_extent_map() already does an extra cached em check, the
optimization is not that obvious.)
This is totally fine and is a valid optimization, but it relies on the
assumption that there is no partially uptodate range in the page.
Meanwhile there is an incoming feature, requiring btrfs to skip the full
page read if a buffered write range covers a full block but not a full
page.
In that case, we can have a page that is partially uptodate, and the
current per-extent lookup cannot handle such a case.
So here we change btrfs_do_readpage() to do a block-by-block read. This
simplifies the following things:
- Remove the need for @iosize variable
Because we just use sectorsize as our increment.
- Remove @pg_offset, and calculate it inside the loop when needed
It's just offset_in_folio().
- Use a for() loop instead of a while() loop
This will slightly reduce the read performance for subpage cases, but for
the future where we need to skip already uptodate blocks, it should still
be worthwhile.
For block size == page size, this brings no performance change.
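[Editor's note: the new iteration pattern can be sketched in user space as follows. This is illustrative only; count_blocks() is an invented name and offset_in_folio() is open-coded here.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Walk a folio one block at a time, deriving the in-folio offset from the
 * current file position instead of maintaining a separate pg_offset
 * counter. Returns the number of blocks visited.
 */
static int count_blocks(uint64_t folio_pos, uint64_t folio_size,
			uint32_t blocksize)
{
	const uint64_t end = folio_pos + folio_size - 1;
	int nr = 0;

	for (uint64_t cur = folio_pos; cur <= end; cur += blocksize) {
		/* Equivalent of offset_in_folio(folio, cur). */
		uint64_t pg_offset = cur - folio_pos;

		assert(pg_offset < folio_size);
		nr++;
	}
	return nr;
}
```

With a 64K folio and 4K blocks this visits 16 blocks, matching the subpage case the patch targets; with block size == page size it degenerates to a single iteration.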
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/extent_io.c | 38 ++++++++++++--------------------------
1 file changed, 12 insertions(+), 26 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 3968ecbb727d..2abf489e1a9b 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -942,14 +942,11 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
u64 start = folio_pos(folio);
const u64 end = start + PAGE_SIZE - 1;
- u64 cur = start;
u64 extent_offset;
u64 last_byte = i_size_read(inode);
struct extent_map *em;
int ret = 0;
- size_t pg_offset = 0;
- size_t iosize;
- size_t blocksize = fs_info->sectorsize;
+ const size_t blocksize = fs_info->sectorsize;
ret = set_folio_extent_mapped(folio);
if (ret < 0) {
@@ -960,24 +957,23 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
if (folio_contains(folio, last_byte >> PAGE_SHIFT)) {
size_t zero_offset = offset_in_folio(folio, last_byte);
- if (zero_offset) {
- iosize = folio_size(folio) - zero_offset;
- folio_zero_range(folio, zero_offset, iosize);
- }
+ if (zero_offset)
+ folio_zero_range(folio, zero_offset,
+ folio_size(folio) - zero_offset);
}
bio_ctrl->end_io_func = end_bbio_data_read;
begin_folio_read(fs_info, folio);
- while (cur <= end) {
+ for (u64 cur = start; cur <= end; cur += blocksize) {
enum btrfs_compression_type compress_type = BTRFS_COMPRESS_NONE;
+ unsigned long pg_offset = offset_in_folio(folio, cur);
bool force_bio_submit = false;
u64 disk_bytenr;
u64 block_start;
ASSERT(IS_ALIGNED(cur, fs_info->sectorsize));
if (cur >= last_byte) {
- iosize = folio_size(folio) - pg_offset;
- folio_zero_range(folio, pg_offset, iosize);
- end_folio_read(folio, true, cur, iosize);
+ folio_zero_range(folio, pg_offset, end - cur + 1);
+ end_folio_read(folio, true, cur, end - cur + 1);
break;
}
em = get_extent_map(BTRFS_I(inode), folio, cur, end - cur + 1, em_cached);
@@ -991,8 +987,6 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
compress_type = extent_map_compression(em);
- iosize = min(extent_map_end(em) - cur, end - cur + 1);
- iosize = ALIGN(iosize, blocksize);
if (compress_type != BTRFS_COMPRESS_NONE)
disk_bytenr = em->disk_bytenr;
else
@@ -1050,18 +1044,13 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
/* we've found a hole, just zero and go on */
if (block_start == EXTENT_MAP_HOLE) {
- folio_zero_range(folio, pg_offset, iosize);
-
- end_folio_read(folio, true, cur, iosize);
- cur = cur + iosize;
- pg_offset += iosize;
+ folio_zero_range(folio, pg_offset, blocksize);
+ end_folio_read(folio, true, cur, blocksize);
continue;
}
/* the get_extent function already copied into the folio */
if (block_start == EXTENT_MAP_INLINE) {
- end_folio_read(folio, true, cur, iosize);
- cur = cur + iosize;
- pg_offset += iosize;
+ end_folio_read(folio, true, cur, blocksize);
continue;
}
@@ -1072,12 +1061,9 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
if (force_bio_submit)
submit_one_bio(bio_ctrl);
- submit_extent_folio(bio_ctrl, disk_bytenr, folio, iosize,
+ submit_extent_folio(bio_ctrl, disk_bytenr, folio, blocksize,
pg_offset);
- cur = cur + iosize;
- pg_offset += iosize;
}
-
return 0;
}
--
2.48.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [PATCH v2 5/8] btrfs: make btrfs_do_readpage() to do block-by-block read
2025-02-27 5:54 ` [PATCH v2 5/8] btrfs: make btrfs_do_readpage() to do block-by-block read Qu Wenruo
@ 2025-02-27 12:48 ` Filipe Manana
0 siblings, 0 replies; 16+ messages in thread
From: Filipe Manana @ 2025-02-27 12:48 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Thu, Feb 27, 2025 at 5:56 AM Qu Wenruo <wqu@suse.com> wrote:
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Looks good, thanks.
--
Filipe David Manana,
“Whether you think you can, or you think you can't — you're right.”
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH v2 6/8] btrfs: allow buffered write to avoid full page read if it's block aligned
2025-02-27 5:54 [PATCH v2 0/8] btrfs: make subpage handling be feature full Qu Wenruo
` (4 preceding siblings ...)
2025-02-27 5:54 ` [PATCH v2 5/8] btrfs: make btrfs_do_readpage() to do block-by-block read Qu Wenruo
@ 2025-02-27 5:54 ` Qu Wenruo
2025-02-27 5:54 ` [PATCH v2 7/8] btrfs: allow inline data extents creation if block size < page size Qu Wenruo
` (2 subsequent siblings)
8 siblings, 0 replies; 16+ messages in thread
From: Qu Wenruo @ 2025-02-27 5:54 UTC (permalink / raw)
To: linux-btrfs; +Cc: Filipe Manana
[BUG]
Since the support of block size (sector size) < page size for btrfs,
test case generic/563 fails with 4K block size and 64K page size:
--- tests/generic/563.out 2024-04-25 18:13:45.178550333 +0930
+++ /home/adam/xfstests-dev/results//generic/563.out.bad 2024-09-30 09:09:16.155312379 +0930
@@ -3,7 +3,8 @@
read is in range
write is in range
write -> read/write
-read is in range
+read has value of 8388608
+read is NOT in range -33792 .. 33792
write is in range
...
[CAUSE]
The test case creates an 8MiB file, then does buffered writes into the
8MiB range using a 4K block size, to overwrite the whole file.
On 4K page sized systems, since the write range covers the full block and
page, btrfs will not bother reading the page, just like what XFS and EXT4
do.
But on 64K page sized systems, although the 4K sized write is still block
aligned, it's not page aligned any more, thus btrfs will read the full
page, which will be accounted by cgroup and fail the test.
The test case itself expects that such a 4K block aligned write should not
trigger any read.
Such expected behavior is an optimization to reduce folio reads when
possible, and unfortunately btrfs did not implement that optimization.
[FIX]
To skip the full page read, we need to make the following modifications:
- Do not trigger full page read as long as the buffered write is block
aligned
This is pretty simple, done by modifying the check inside
prepare_uptodate_folio().
- Skip already uptodate blocks during full page read
Otherwise we can hit the following data corruption:
0 32K 64K
|///////| |
Where the file range [0, 32K) is dirtied by buffered write, the
remaining range [32K, 64K) is not.
When reading the full page, since [0,32K) is only dirtied but not
written back, there is no data extent map for it, but a hole covering
[0, 64K).
If we continue reading the full page range [0, 64K), the dirtied range
will be filled with 0 (since there is only a hole covering the whole
range).
This causes the dirtied range to get lost.
With this optimization, btrfs can pass generic/563 even if the page size
is larger than fs block size.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/extent_io.c | 4 ++++
fs/btrfs/file.c | 5 +++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 2abf489e1a9b..68030630222d 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -976,6 +976,10 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
end_folio_read(folio, true, cur, end - cur + 1);
break;
}
+ if (btrfs_folio_test_uptodate(fs_info, folio, cur, blocksize)) {
+ end_folio_read(folio, true, cur, blocksize);
+ continue;
+ }
em = get_extent_map(BTRFS_I(inode), folio, cur, end - cur + 1, em_cached);
if (IS_ERR(em)) {
end_folio_read(folio, false, cur, end + 1 - cur);
diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
index e87d4a37c929..008299217432 100644
--- a/fs/btrfs/file.c
+++ b/fs/btrfs/file.c
@@ -804,14 +804,15 @@ static int prepare_uptodate_folio(struct inode *inode, struct folio *folio, u64
{
u64 clamp_start = max_t(u64, pos, folio_pos(folio));
u64 clamp_end = min_t(u64, pos + len, folio_pos(folio) + folio_size(folio));
+ const u32 blocksize = inode_to_fs_info(inode)->sectorsize;
int ret = 0;
if (folio_test_uptodate(folio))
return 0;
if (!force_uptodate &&
- IS_ALIGNED(clamp_start, PAGE_SIZE) &&
- IS_ALIGNED(clamp_end, PAGE_SIZE))
+ IS_ALIGNED(clamp_start, blocksize) &&
+ IS_ALIGNED(clamp_end, blocksize))
return 0;
ret = btrfs_read_folio(NULL, folio);
--
2.48.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH v2 7/8] btrfs: allow inline data extents creation if block size < page size
2025-02-27 5:54 [PATCH v2 0/8] btrfs: make subpage handling be feature full Qu Wenruo
` (5 preceding siblings ...)
2025-02-27 5:54 ` [PATCH v2 6/8] btrfs: allow buffered write to avoid full page read if it's block aligned Qu Wenruo
@ 2025-02-27 5:54 ` Qu Wenruo
2025-02-27 5:54 ` [PATCH v2 8/8] btrfs: remove the subpage related warning message Qu Wenruo
2025-02-27 11:16 ` [PATCH v2 0/8] btrfs: make subpage handling be feature full David Sterba
8 siblings, 0 replies; 16+ messages in thread
From: Qu Wenruo @ 2025-02-27 5:54 UTC (permalink / raw)
To: linux-btrfs; +Cc: Filipe Manana
Previously, inline data extent creation was disabled if the block size
(previously called sector size) is smaller than the page size, for the
following reasons:
- Possible mixed inline and regular data extents
However this is also the same if the block size matches the page size,
thus we do not treat mixed inline and regular extents as an error.
And the chance of causing mixed inline and regular data extents is not
even increased; it has the same requirement (a compressed inline data
extent covering the whole first block, followed by regular extents).
- Unable to handle async/inline delalloc range for block size < page
size cases
This is already fixed since commit 1d2fbb7f1f9e ("btrfs: allow
compression even if the range is not page aligned").
This was the major technical blocker, but it is no longer one.
With the major technical blockage removed, we can enable inline data
extent creation regardless of the block size or page size, allowing btrfs
to have the same capability for all block sizes.
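[Editor's note: the inline-eligibility checks visible in the diff below can be sketched as a small predicate. This is a partial model only; the real can_cow_file_range_inline() has additional conditions not shown in this hunk.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * After the patch removes the sectorsize != PAGE_SIZE bail-out, the
 * remaining visible constraints are: the extent must start at file
 * offset 0, and inline extents are limited to sectorsize.
 */
static bool can_inline(uint64_t offset, uint64_t size, uint32_t sectorsize)
{
	if (offset != 0)
		return false;
	/* Inline extents are limited to sectorsize. */
	if (size > sectorsize)
		return false;
	return true;
}
```

Note the checks depend only on sectorsize, not on PAGE_SIZE, which is why the same logic now applies to subpage setups.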
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/inode.c | 13 -------------
1 file changed, 13 deletions(-)
diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
index 52802a3a078c..c325185bb134 100644
--- a/fs/btrfs/inode.c
+++ b/fs/btrfs/inode.c
@@ -566,19 +566,6 @@ static bool can_cow_file_range_inline(struct btrfs_inode *inode,
if (offset != 0)
return false;
- /*
- * Due to the page size limit, for subpage we can only trigger the
- * writeback for the dirty sectors of page, that means data writeback
- * is doing more writeback than what we want.
- *
- * This is especially unexpected for some call sites like fallocate,
- * where we only increase i_size after everything is done.
- * This means we can trigger inline extent even if we didn't want to.
- * So here we skip inline extent creation completely.
- */
- if (fs_info->sectorsize != PAGE_SIZE)
- return false;
-
/* Inline extents are limited to sectorsize. */
if (size > fs_info->sectorsize)
return false;
--
2.48.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [PATCH v2 8/8] btrfs: remove the subpage related warning message
2025-02-27 5:54 [PATCH v2 0/8] btrfs: make subpage handling be feature full Qu Wenruo
` (6 preceding siblings ...)
2025-02-27 5:54 ` [PATCH v2 7/8] btrfs: allow inline data extents creation if block size < page size Qu Wenruo
@ 2025-02-27 5:54 ` Qu Wenruo
2025-02-27 11:16 ` [PATCH v2 0/8] btrfs: make subpage handling be feature full David Sterba
8 siblings, 0 replies; 16+ messages in thread
From: Qu Wenruo @ 2025-02-27 5:54 UTC (permalink / raw)
To: linux-btrfs; +Cc: Filipe Manana
Since the initial enablement of block size < page size support for
btrfs in v5.15, we have hit several milestones for block size < page
size (subpage) support:
- RAID56 subpage support
In v5.19
- Refactored scrub support to support subpage better
In v6.4
- Block perfect (previously required page aligned ranges) compressed write
In v6.13
- Various error handling fixes involving subpage
In v6.14
Finally, the only missing feature was the pretty simple and harmless
inline data extent creation, just added in the previous patches.
Now that btrfs has all of its features ready for both regular and subpage
cases, there is no reason to output a warning about experimental subpage
support, and we can finally remove it.
Acked-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/disk-io.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index a799216aa264..c0b40dedceb5 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3414,11 +3414,6 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
*/
fs_info->max_inline = min_t(u64, fs_info->max_inline, fs_info->sectorsize);
- if (sectorsize < PAGE_SIZE)
- btrfs_warn(fs_info,
- "read-write for sector size %u with page size %lu is experimental",
- sectorsize, PAGE_SIZE);
-
ret = btrfs_init_workqueues(fs_info);
if (ret)
goto fail_sb_buffer;
--
2.48.1
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [PATCH v2 0/8] btrfs: make subpage handling be feature full
2025-02-27 5:54 [PATCH v2 0/8] btrfs: make subpage handling be feature full Qu Wenruo
` (7 preceding siblings ...)
2025-02-27 5:54 ` [PATCH v2 8/8] btrfs: remove the subpage related warning message Qu Wenruo
@ 2025-02-27 11:16 ` David Sterba
2025-02-28 3:14 ` Qu Wenruo
8 siblings, 1 reply; 16+ messages in thread
From: David Sterba @ 2025-02-27 11:16 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Thu, Feb 27, 2025 at 04:24:38PM +1030, Qu Wenruo wrote:
> [CHANGELOG]
> v2:
> - Add a new bug fix which is exposed by recent 2K block size tests on
> x86_64
> It's a self-deadlock where folio_end_writeback() is called with the
> subpage lock held, and folio_end_writeback() will eventually call
> btrfs_release_folio() and try to lock the same spin lock.
>
> Since the introduction of btrfs subpage support in v5.15, there have
> been quite a few limitations:
>
> - No RAID56 support
> Added in v5.19.
>
> - No memory efficient scrub support
> Added in v6.4.
>
> - No block perfect compressed write support
>   Previously btrfs required the dirty range to be fully page aligned, or
>   it would skip compression completely.
>
> Added in v6.13.
>
> - Various subpage related error handling fixes
> Added in v6.14.
>
> - No support to create inline data extent
> - No partial uptodate page support
>   This is a long-standing optimization supported by EXT4/XFS and
>   is required to pass generic/563 with subpage block size.
>
> The last two are addressed in this patchset.
That's great, thank you very much. I think not all patches have a
reviewed-by tag, but I'd like to get this into for-next very soon. The
next release is rc5 and we should have all features in. This also means
the 2K nodesize/block size, so we can give it more testing.
* Re: [PATCH v2 0/8] btrfs: make subpage handling be feature full
2025-02-27 11:16 ` [PATCH v2 0/8] btrfs: make subpage handling be feature full David Sterba
@ 2025-02-28 3:14 ` Qu Wenruo
2025-02-28 12:42 ` David Sterba
0 siblings, 1 reply; 16+ messages in thread
From: Qu Wenruo @ 2025-02-28 3:14 UTC (permalink / raw)
To: dsterba; +Cc: linux-btrfs
On 2025/2/27 21:46, David Sterba wrote:
> On Thu, Feb 27, 2025 at 04:24:38PM +1030, Qu Wenruo wrote:
>> [CHANGELOG]
>> v2:
>> - Add a new bug fix, exposed by recent 2K block size tests on
>>   x86_64.
>>   It's a self-deadlock where folio_end_writeback() is called with the
>>   subpage spin lock held, and folio_end_writeback() will eventually call
>>   btrfs_release_folio() and try to lock the same spin lock.
>>
>> Since the introduction of btrfs subpage support in v5.15, there have
>> been quite a few limitations:
>>
>> - No RAID56 support
>> Added in v5.19.
>>
>> - No memory efficient scrub support
>> Added in v6.4.
>>
>> - No block perfect compressed write support
>>   Previously btrfs required the dirty range to be fully page aligned, or
>>   it would skip compression completely.
>>
>> Added in v6.13.
>>
>> - Various subpage related error handling fixes
>> Added in v6.14.
>>
>> - No support to create inline data extent
>> - No partial uptodate page support
>>   This is a long-standing optimization supported by EXT4/XFS and
>>   is required to pass generic/563 with subpage block size.
>>
>> The last two are addressed in this patchset.
>
> That's great, thank you very much. I think not all patches have a
> reviewed-by tag, but I'd like to get this into for-next very soon. The
> next release is rc5 and we should have all features in. This also means
> the 2K nodesize/block size, so we can give it more testing.
Now pushed to for-next, with all patches reviewed (or acked) by Filipe.
Many thanks to Filipe for the review!

For the 2K one, since it's just two small patches, I'm also fine with
pushing them now.
Just do not forget that we still need the progs patches, and there are a
dozen known failures from fstests that I'm not yet able to address any
time soon.
Thanks,
Qu
* Re: [PATCH v2 0/8] btrfs: make subpage handling be feature full
2025-02-28 3:14 ` Qu Wenruo
@ 2025-02-28 12:42 ` David Sterba
2025-03-02 22:48 ` Qu Wenruo
0 siblings, 1 reply; 16+ messages in thread
From: David Sterba @ 2025-02-28 12:42 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs
On Fri, Feb 28, 2025 at 01:44:04PM +1030, Qu Wenruo wrote:
> For the 2K one, since it's just two small patches, I'm also fine with
> pushing them now.
> Just do not forget that we still need the progs patches, and there are a
> dozen known failures from fstests that I'm not yet able to address any
> time soon.
Yeah, the mkfs support can go into the next minor progs release. About
the status, we can print a warning and document it. No need to focus on
fixing the fstests failures; I think stress testing will be sufficient
for now.
* Re: [PATCH v2 0/8] btrfs: make subpage handling be feature full
2025-02-28 12:42 ` David Sterba
@ 2025-03-02 22:48 ` Qu Wenruo
0 siblings, 0 replies; 16+ messages in thread
From: Qu Wenruo @ 2025-03-02 22:48 UTC (permalink / raw)
To: dsterba, Qu Wenruo; +Cc: linux-btrfs
On 2025/2/28 23:12, David Sterba wrote:
> On Fri, Feb 28, 2025 at 01:44:04PM +1030, Qu Wenruo wrote:
>> For the 2K one, since it's just two small patches, I'm also fine with
>> pushing them now.
>> Just do not forget that we still need the progs patches, and there are a
>> dozen known failures from fstests that I'm not yet able to address any
>> time soon.
>
> Yeah, the mkfs support can go into the next minor progs release. About
> the status, we can print a warning and document it. No need to focus on
> fixing the fstests failures; I think stress testing will be sufficient
> for now.
>
It turns out that this will not be smooth sailing.

There is a huge conflict between our async extent and subpage handling
for writeback.

Our async extent mechanism can mark a folio writeback at any time (at
submission time we keep the range locked, and let the compression work
happen in another thread).
If we have two async extents inside the same folio, we will have the
following race (64K page size, 4K fs block size):
0 32K 64K
|<- AE 1 ->|<- AE 2 ->|
Thread A (AE 1) | Thread B (AE 2)
--------------------------------------+------------------------------
submit_one_async_extent() |
|- process_one_folio() |
|- subpage_set_writeback() |
|
/* IO finished */ |
end_compressed_writeback() |
|- btrfs_folio_clear_writeback() |
| /* this is the last writeback |
| holder, should end the folio |
| writeback flag */ |
|- last = true |
| | submit_one_async_extent()
| | |- process_one_folio()
| | |- subpage_set_writeback()
| |
| | /* IO finished */
| | end_compressed_writeback()
| | |- btrfs_folio_clear_writeback()
| | | /* Again the last holder */
| | |- last = true
|- folio_end_writeback() | |- folio_end_writeback()
This leads to two threads calling folio_end_writeback() on the same
folio, which will eventually trigger VM_BUG_ON_FOLIO() or other problems.

Furthermore, we can not rely on folio->private for anything after the
folio_end_writeback() call, because that call may unmap/invalidate the
folio.
What's worse, iomap's extra writeback accounting won't help either.

Iomap holds one extra writeback count before submitting the blocks
inside the folio, then drops that count after all blocks have been
marked writeback (submitted).

That solution requires all the blocks inside the folio to be submitted
and marked writeback at the same time, but our async extents break that
requirement completely.
So far I have no better solution than to disable block-perfect
compression first, then introduce the same extra-count solution as
iomap.

The proper fix is not only the iomap-style accounting, but also making
the async extent submission itself mark the folios writeback.
That will be quite some work (part of the iomap migration plan).
Thanks,
Qu