* [PATCH 0/4] btrfs: bs > ps support preparation
@ 2025-09-01 5:24 Qu Wenruo
2025-09-01 5:24 ` [PATCH 1/4] btrfs: support all block sizes which is no larger than page size Qu Wenruo
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: Qu Wenruo @ 2025-09-01 5:24 UTC (permalink / raw)
To: linux-btrfs; +Cc: linux-fsdevel
Some extra small and safe bs > ps support preparation patches, mostly
focusing on the bio_vec iteration code and the cached order/bit members.

The difference this time is that I can run some early local bs > ps tests,
so basic file read/write and csum verification all work properly.

The tricky part is still compression, and possibly some other
functionality.
Qu Wenruo (4):
btrfs: support all block sizes which is no larger than page size
btrfs: cache max and min order inside btrfs_fs_info
btrfs: replace single page bio_iter_iovec() usage
btrfs: replace bio_for_each_segment usage
fs/btrfs/bio.c | 3 ++-
fs/btrfs/btrfs_inode.h | 6 +++---
fs/btrfs/compression.c | 3 +--
fs/btrfs/disk-io.c | 2 ++
fs/btrfs/file-item.c | 13 +++++++------
fs/btrfs/fs.c | 4 ++++
fs/btrfs/fs.h | 8 +++++---
fs/btrfs/raid56.c | 10 +++++-----
8 files changed, 29 insertions(+), 20 deletions(-)
--
2.50.1
* [PATCH 1/4] btrfs: support all block sizes which is no larger than page size
2025-09-01 5:24 [PATCH 0/4] btrfs: bs > ps support preparation Qu Wenruo
@ 2025-09-01 5:24 ` Qu Wenruo
2025-09-01 5:24 ` [PATCH 2/4] btrfs: cache max and min order inside btrfs_fs_info Qu Wenruo
` (2 subsequent siblings)
3 siblings, 0 replies; 8+ messages in thread
From: Qu Wenruo @ 2025-09-01 5:24 UTC (permalink / raw)
To: linux-btrfs; +Cc: linux-fsdevel
Currently, if block size < page size, btrfs supports only a single
configuration: 4K.

This is mostly to reduce the number of test configurations, as 4K is going
to be the default block size for all architectures.

However, no other major filesystem has such an artificial limit on the
supported block size, and some already support block sizes larger than the
page size.

Since btrfs subpage block support has been around for a long time, it is
time to enable support for all block sizes <= page size.

So, for experimental builds, enable support for all block sizes as long as
they are no larger than the page size.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/fs.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/fs/btrfs/fs.c b/fs/btrfs/fs.c
index 335209fe3734..014fb8b12f96 100644
--- a/fs/btrfs/fs.c
+++ b/fs/btrfs/fs.c
@@ -78,6 +78,10 @@ bool __attribute_const__ btrfs_supported_blocksize(u32 blocksize)
if (blocksize == PAGE_SIZE || blocksize == SZ_4K || blocksize == BTRFS_MIN_BLOCKSIZE)
return true;
+#ifdef CONFIG_BTRFS_EXPERIMENTAL
+ if (blocksize <= PAGE_SIZE)
+ return true;
+#endif
return false;
}
--
2.50.1
* [PATCH 2/4] btrfs: cache max and min order inside btrfs_fs_info
2025-09-01 5:24 [PATCH 0/4] btrfs: bs > ps support preparation Qu Wenruo
2025-09-01 5:24 ` [PATCH 1/4] btrfs: support all block sizes which is no larger than page size Qu Wenruo
@ 2025-09-01 5:24 ` Qu Wenruo
2025-09-01 18:10 ` David Sterba
2025-09-02 3:46 ` kernel test robot
2025-09-01 5:24 ` [PATCH 3/4] btrfs: replace single page bio_iter_iovec() usage Qu Wenruo
2025-09-01 5:24 ` [PATCH 4/4] btrfs: replace bio_for_each_segment usage Qu Wenruo
3 siblings, 2 replies; 8+ messages in thread
From: Qu Wenruo @ 2025-09-01 5:24 UTC (permalink / raw)
To: linux-btrfs; +Cc: linux-fsdevel
Inside btrfs_fs_info we cache several bit shifts, such as sectorsize_bits.

Apply the same to the max and min folio orders, so that the calculation can
be skipped every time a mapping order needs to be set.

Furthermore, all those sectorsize/nodesize shifts, along with the new
min/max folio orders, have a very limited value range by their nature.
E.g. the blocksize bits can be at most ilog2(64K), which is 16, and for a
4K page size with a 64K block size (bs > ps) the minimal folio order is
only 4.

None of those numbers can exceed U8_MAX, so there is no need to use u32
for them. Use u8 for those members to save memory.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/btrfs_inode.h | 6 +++---
fs/btrfs/disk-io.c | 2 ++
fs/btrfs/fs.h | 8 +++++---
3 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
index df3445448b7d..a9d6e1bfebae 100644
--- a/fs/btrfs/btrfs_inode.h
+++ b/fs/btrfs/btrfs_inode.h
@@ -527,14 +527,14 @@ static inline void btrfs_update_inode_mapping_flags(struct btrfs_inode *inode)
static inline void btrfs_set_inode_mapping_order(struct btrfs_inode *inode)
{
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
/* Metadata inode should not reach here. */
ASSERT(is_data_inode(inode));
/* We only allow BITS_PER_LONGS blocks for each bitmap. */
#ifdef CONFIG_BTRFS_EXPERIMENTAL
- mapping_set_folio_order_range(inode->vfs_inode.i_mapping, 0,
- ilog2(((BITS_PER_LONG << inode->root->fs_info->sectorsize_bits)
- >> PAGE_SHIFT)));
+ mapping_set_folio_order_range(inode->vfs_inode.i_mapping, fs_info->block_min_order,
+ fs_info->block_max_order);
#endif
}
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index 7b06bbc40898..a2eba8bc4336 100644
--- a/fs/btrfs/disk-io.c
+++ b/fs/btrfs/disk-io.c
@@ -3383,6 +3383,8 @@ int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_device
fs_info->nodesize_bits = ilog2(nodesize);
fs_info->sectorsize = sectorsize;
fs_info->sectorsize_bits = ilog2(sectorsize);
+ fs_info->block_min_order = ilog2(round_up(sectorsize, PAGE_SIZE) >> PAGE_SHIFT);
+ fs_info->block_max_order = ilog2((BITS_PER_LONG << fs_info->sectorsize_bits) >> PAGE_SHIFT);
fs_info->csums_per_leaf = BTRFS_MAX_ITEM_SIZE(fs_info) / fs_info->csum_size;
fs_info->stripesize = stripesize;
fs_info->fs_devices->fs_info = fs_info;
diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
index 5f0b185a7f21..412d3eb30b73 100644
--- a/fs/btrfs/fs.h
+++ b/fs/btrfs/fs.h
@@ -820,11 +820,13 @@ struct btrfs_fs_info {
struct mutex reclaim_bgs_lock;
/* Cached block sizes */
- u32 nodesize;
- u32 nodesize_bits;
u32 sectorsize;
+ u32 nodesize;
/* ilog2 of sectorsize, use to avoid 64bit division */
- u32 sectorsize_bits;
+ u8 sectorsize_bits;
+ u8 nodesize_bits;
+ u8 block_min_order;
+ u8 block_max_order;
u32 csum_size;
u32 csums_per_leaf;
u32 stripesize;
--
2.50.1
* [PATCH 3/4] btrfs: replace single page bio_iter_iovec() usage
2025-09-01 5:24 [PATCH 0/4] btrfs: bs > ps support preparation Qu Wenruo
2025-09-01 5:24 ` [PATCH 1/4] btrfs: support all block sizes which is no larger than page size Qu Wenruo
2025-09-01 5:24 ` [PATCH 2/4] btrfs: cache max and min order inside btrfs_fs_info Qu Wenruo
@ 2025-09-01 5:24 ` Qu Wenruo
2025-09-01 6:25 ` Qu Wenruo
2025-09-01 5:24 ` [PATCH 4/4] btrfs: replace bio_for_each_segment usage Qu Wenruo
3 siblings, 1 reply; 8+ messages in thread
From: Qu Wenruo @ 2025-09-01 5:24 UTC (permalink / raw)
To: linux-btrfs; +Cc: linux-fsdevel
There are several functions inside btrfs calling bio_iter_iovec(),
mostly to do a block-by-block iteration on a bio.
- btrfs_check_read_bio()
- btrfs_decompress_buf2page()
- index_one_bio() from raid56
However that helper is single-page based, meaning it will never return a
bv_len larger than PAGE_SIZE. For now this is fine, as we only support
bs <= ps.

But for the incoming bs > ps support, we want a bv_len larger than
PAGE_SIZE, so that the bio_vec covers a full block, not just part of the
large folio backing the block.

In fact, the call site inside btrfs_check_read_bio() triggers the ASSERT()
inside btrfs_data_csum_ok() when bs > ps support is enabled, as
bio_iter_iovec() returns a bv_len of 4K while the block size is larger
than 4K.

Replace those call sites with mp_bvec_iter_bvec(), which returns the full
length from the bi_io_vec array. All call sites already do an extra loop
inside the bvec range for bs < ps support, so they are fine.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/bio.c | 3 ++-
fs/btrfs/compression.c | 3 +--
fs/btrfs/raid56.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
index ea7f7a17a3d5..f7aea4310dd6 100644
--- a/fs/btrfs/bio.c
+++ b/fs/btrfs/bio.c
@@ -277,8 +277,9 @@ static void btrfs_check_read_bio(struct btrfs_bio *bbio, struct btrfs_device *de
bbio->bio.bi_status = BLK_STS_OK;
while (iter->bi_size) {
- struct bio_vec bv = bio_iter_iovec(&bbio->bio, *iter);
+ struct bio_vec bv = mp_bvec_iter_bvec(bbio->bio.bi_io_vec, *iter);
+ ASSERT(bv.bv_len >= sectorsize && IS_ALIGNED(bv.bv_len, sectorsize));
bv.bv_len = min(bv.bv_len, sectorsize);
if (status || !btrfs_data_csum_ok(bbio, dev, offset, &bv))
fbio = repair_one_sector(bbio, offset, &bv, fbio);
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 068339e86123..8b415c780ba8 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -1227,14 +1227,13 @@ int btrfs_decompress_buf2page(const char *buf, u32 buf_len,
cur_offset = decompressed;
/* The main loop to do the copy */
while (cur_offset < decompressed + buf_len) {
- struct bio_vec bvec;
+ struct bio_vec bvec = mp_bvec_iter_bvec(orig_bio->bi_io_vec, orig_bio->bi_iter);
size_t copy_len;
u32 copy_start;
/* Offset inside the full decompressed extent */
u32 bvec_offset;
void *kaddr;
- bvec = bio_iter_iovec(orig_bio, orig_bio->bi_iter);
/*
* cb->start may underflow, but subtracting that value can still
* give us correct offset inside the full decompressed extent.
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index 3ff2bedfb3a4..df48dd6c3f54 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1214,7 +1214,7 @@ static void index_one_bio(struct btrfs_raid_bio *rbio, struct bio *bio)
while (iter.bi_size) {
unsigned int index = (offset >> sectorsize_bits);
struct sector_ptr *sector = &rbio->bio_sectors[index];
- struct bio_vec bv = bio_iter_iovec(bio, iter);
+ struct bio_vec bv = mp_bvec_iter_bvec(bio->bi_io_vec, iter);
sector->has_paddr = true;
sector->paddr = bvec_phys(&bv);
--
2.50.1
* [PATCH 4/4] btrfs: replace bio_for_each_segment usage
2025-09-01 5:24 [PATCH 0/4] btrfs: bs > ps support preparation Qu Wenruo
` (2 preceding siblings ...)
2025-09-01 5:24 ` [PATCH 3/4] btrfs: replace single page bio_iter_iovec() usage Qu Wenruo
@ 2025-09-01 5:24 ` Qu Wenruo
3 siblings, 0 replies; 8+ messages in thread
From: Qu Wenruo @ 2025-09-01 5:24 UTC (permalink / raw)
To: linux-btrfs; +Cc: linux-fsdevel
Inside btrfs we have some call sites using bio_for_each_segment() and
bio_for_each_segment_all().
They are fine for now, as we only support bs <= ps, so a block always fits
inside a single page and the single-page bvecs still cover full blocks.

However, for the incoming bs > ps support, a block can cross several pages
(although those pages are still physically contiguous, as such a block is
backed by a large folio). In that case the single-page iterators cannot
handle such blocks.

Replace the following call sites with the bio_for_each_bvec*() helpers:

- btrfs_csum_one_bio()
  This one is critical for basic uncompressed writes in the bs > ps case.
  Otherwise it would use the content of a single page to calculate the
  checksum, instead of the full block (which crosses multiple pages).

- set_bio_pages_uptodate()
- verify_bio_data_sectors()
  These are mostly fine even with the old single-page interface, as they
  do not look at bv_len at all.
  But it is still worth replacing them, as the new multi-page helpers save
  some bytes of stack memory.
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
fs/btrfs/file-item.c | 13 +++++++------
fs/btrfs/raid56.c | 8 ++++----
2 files changed, 11 insertions(+), 10 deletions(-)
diff --git a/fs/btrfs/file-item.c b/fs/btrfs/file-item.c
index 4dd3d8a02519..bb08b27983a7 100644
--- a/fs/btrfs/file-item.c
+++ b/fs/btrfs/file-item.c
@@ -775,6 +775,7 @@ int btrfs_csum_one_bio(struct btrfs_bio *bbio)
SHASH_DESC_ON_STACK(shash, fs_info->csum_shash);
struct bio *bio = &bbio->bio;
struct btrfs_ordered_sum *sums;
+ const u32 blocksize = fs_info->sectorsize;
char *data;
struct bvec_iter iter;
struct bio_vec bvec;
@@ -799,16 +800,16 @@ int btrfs_csum_one_bio(struct btrfs_bio *bbio)
shash->tfm = fs_info->csum_shash;
- bio_for_each_segment(bvec, bio, iter) {
- blockcount = BTRFS_BYTES_TO_BLKS(fs_info,
- bvec.bv_len + fs_info->sectorsize
- - 1);
+ bio_for_each_bvec(bvec, bio, iter) {
+ ASSERT(bvec.bv_len >= blocksize);
+ ASSERT(IS_ALIGNED(bvec.bv_len, blocksize));
+ blockcount = BTRFS_BYTES_TO_BLKS(fs_info, bvec.bv_len);
for (i = 0; i < blockcount; i++) {
data = bvec_kmap_local(&bvec);
crypto_shash_digest(shash,
- data + (i * fs_info->sectorsize),
- fs_info->sectorsize,
+ data + (i << fs_info->sectorsize_bits),
+ blocksize,
sums->sums + index);
kunmap_local(data);
index += fs_info->csum_size;
diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
index df48dd6c3f54..2c810fe96bdf 100644
--- a/fs/btrfs/raid56.c
+++ b/fs/btrfs/raid56.c
@@ -1513,11 +1513,11 @@ static void set_bio_pages_uptodate(struct btrfs_raid_bio *rbio, struct bio *bio)
{
const u32 sectorsize = rbio->bioc->fs_info->sectorsize;
struct bio_vec *bvec;
- struct bvec_iter_all iter_all;
+ int i;
ASSERT(!bio_flagged(bio, BIO_CLONED));
- bio_for_each_segment_all(bvec, bio, iter_all) {
+ bio_for_each_bvec_all(bvec, bio, i) {
struct sector_ptr *sector;
phys_addr_t paddr = bvec_phys(bvec);
@@ -1574,7 +1574,7 @@ static void verify_bio_data_sectors(struct btrfs_raid_bio *rbio,
struct btrfs_fs_info *fs_info = rbio->bioc->fs_info;
int total_sector_nr = get_bio_sector_nr(rbio, bio);
struct bio_vec *bvec;
- struct bvec_iter_all iter_all;
+ int i;
/* No data csum for the whole stripe, no need to verify. */
if (!rbio->csum_bitmap || !rbio->csum_buf)
@@ -1584,7 +1584,7 @@ static void verify_bio_data_sectors(struct btrfs_raid_bio *rbio,
if (total_sector_nr >= rbio->nr_data * rbio->stripe_nsectors)
return;
- bio_for_each_segment_all(bvec, bio, iter_all) {
+ bio_for_each_bvec_all(bvec, bio, i) {
void *kaddr;
kaddr = bvec_kmap_local(bvec);
--
2.50.1
* Re: [PATCH 3/4] btrfs: replace single page bio_iter_iovec() usage
2025-09-01 5:24 ` [PATCH 3/4] btrfs: replace single page bio_iter_iovec() usage Qu Wenruo
@ 2025-09-01 6:25 ` Qu Wenruo
0 siblings, 0 replies; 8+ messages in thread
From: Qu Wenruo @ 2025-09-01 6:25 UTC (permalink / raw)
To: linux-btrfs; +Cc: linux-fsdevel
On 2025/9/1 14:54, Qu Wenruo wrote:
> There are several functions inside btrfs calling bio_iter_iovec(),
> mostly to do a block-by-block iteration on a bio.
>
> - btrfs_check_read_bio()
> - btrfs_decompress_buf2page()
> - index_one_bio() from raid56
>
> However that helper is single-page based, meaning it will never return a
> bv_len larger than PAGE_SIZE. For now this is fine, as we only support
> bs <= ps.
>
> But for the incoming bs > ps support, we want a bv_len larger than
> PAGE_SIZE, so that the bio_vec covers a full block, not just part of the
> large folio backing the block.
>
> In fact, the call site inside btrfs_check_read_bio() triggers the ASSERT()
> inside btrfs_data_csum_ok() when bs > ps support is enabled, as
> bio_iter_iovec() returns a bv_len of 4K while the block size is larger
> than 4K.
>
> Replace those call sites with mp_bvec_iter_bvec(), which returns the full
> length from the bi_io_vec array. All call sites already do an extra loop
> inside the bvec range for bs < ps support, so they are fine.
>
> Signed-off-by: Qu Wenruo <wqu@suse.com>
> ---
> fs/btrfs/bio.c | 3 ++-
> fs/btrfs/compression.c | 3 +--
> fs/btrfs/raid56.c | 2 +-
> 3 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
> index ea7f7a17a3d5..f7aea4310dd6 100644
> --- a/fs/btrfs/bio.c
> +++ b/fs/btrfs/bio.c
> @@ -277,8 +277,9 @@ static void btrfs_check_read_bio(struct btrfs_bio *bbio, struct btrfs_device *de
> bbio->bio.bi_status = BLK_STS_OK;
>
> while (iter->bi_size) {
> - struct bio_vec bv = bio_iter_iovec(&bbio->bio, *iter);
> + struct bio_vec bv = mp_bvec_iter_bvec(bbio->bio.bi_io_vec, *iter);
This multi-page conversion is going to hit the VM_BUG_ON() when
btrfs_data_csum_ok() gets a csum mismatch and has to call memzero_bvec(),
which is a single-page-only helper.

I'm wondering what the proper handling for a multi-page bvec is.

Since we're inside one multi-page bvec, all the pages in the bvec should
be physically contiguous. But can highmem sneak into a bvec?

If not, memzero_page()'s check looks like a little overkill.
And if a highmem page can sneak in, we will need a loop to
map/zero/unmap...
Thanks,
Qu
>
> + ASSERT(bv.bv_len >= sectorsize && IS_ALIGNED(bv.bv_len, sectorsize));
> bv.bv_len = min(bv.bv_len, sectorsize);
> if (status || !btrfs_data_csum_ok(bbio, dev, offset, &bv))
> fbio = repair_one_sector(bbio, offset, &bv, fbio);
> diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
> index 068339e86123..8b415c780ba8 100644
> --- a/fs/btrfs/compression.c
> +++ b/fs/btrfs/compression.c
> @@ -1227,14 +1227,13 @@ int btrfs_decompress_buf2page(const char *buf, u32 buf_len,
> cur_offset = decompressed;
> /* The main loop to do the copy */
> while (cur_offset < decompressed + buf_len) {
> - struct bio_vec bvec;
> + struct bio_vec bvec = mp_bvec_iter_bvec(orig_bio->bi_io_vec, orig_bio->bi_iter);
> size_t copy_len;
> u32 copy_start;
> /* Offset inside the full decompressed extent */
> u32 bvec_offset;
> void *kaddr;
>
> - bvec = bio_iter_iovec(orig_bio, orig_bio->bi_iter);
> /*
> * cb->start may underflow, but subtracting that value can still
> * give us correct offset inside the full decompressed extent.
> diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
> index 3ff2bedfb3a4..df48dd6c3f54 100644
> --- a/fs/btrfs/raid56.c
> +++ b/fs/btrfs/raid56.c
> @@ -1214,7 +1214,7 @@ static void index_one_bio(struct btrfs_raid_bio *rbio, struct bio *bio)
> while (iter.bi_size) {
> unsigned int index = (offset >> sectorsize_bits);
> struct sector_ptr *sector = &rbio->bio_sectors[index];
> - struct bio_vec bv = bio_iter_iovec(bio, iter);
> + struct bio_vec bv = mp_bvec_iter_bvec(bio->bi_io_vec, iter);
>
> sector->has_paddr = true;
> sector->paddr = bvec_phys(&bv);
* Re: [PATCH 2/4] btrfs: cache max and min order inside btrfs_fs_info
2025-09-01 5:24 ` [PATCH 2/4] btrfs: cache max and min order inside btrfs_fs_info Qu Wenruo
@ 2025-09-01 18:10 ` David Sterba
2025-09-02 3:46 ` kernel test robot
1 sibling, 0 replies; 8+ messages in thread
From: David Sterba @ 2025-09-01 18:10 UTC (permalink / raw)
To: Qu Wenruo; +Cc: linux-btrfs, linux-fsdevel
On Mon, Sep 01, 2025 at 02:54:04PM +0930, Qu Wenruo wrote:
> Inside btrfs_fs_info we cache several bit shifts, such as sectorsize_bits.
>
> Apply the same to the max and min folio orders, so that the calculation can
> be skipped every time a mapping order needs to be set.
>
> Furthermore, all those sectorsize/nodesize shifts, along with the new
> min/max folio orders, have a very limited value range by their nature.
> E.g. the blocksize bits can be at most ilog2(64K), which is 16, and for a
> 4K page size with a 64K block size (bs > ps) the minimal folio order is
> only 4.
>
> None of those numbers can exceed U8_MAX, so there is no need to use u32
> for them. Use u8 for those members to save memory.
The reason for u32 is that it generates slightly better assembly code. We
don't need to save every byte in fs_info, so please keep these u32.
* Re: [PATCH 2/4] btrfs: cache max and min order inside btrfs_fs_info
2025-09-01 5:24 ` [PATCH 2/4] btrfs: cache max and min order inside btrfs_fs_info Qu Wenruo
2025-09-01 18:10 ` David Sterba
@ 2025-09-02 3:46 ` kernel test robot
1 sibling, 0 replies; 8+ messages in thread
From: kernel test robot @ 2025-09-02 3:46 UTC (permalink / raw)
To: Qu Wenruo, linux-btrfs; +Cc: oe-kbuild-all, linux-fsdevel
Hi Qu,
kernel test robot noticed the following build warnings:
[auto build test WARNING on kdave/for-next]
[also build test WARNING on next-20250901]
[cannot apply to linus/master v6.17-rc4]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Qu-Wenruo/btrfs-support-all-block-sizes-which-is-no-larger-than-page-size/20250901-132648
base: https://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux.git for-next
patch link: https://lore.kernel.org/r/d1a3793b551f0a6ccaf8907cc5aa06d8f5b3d5c2.1756703958.git.wqu%40suse.com
patch subject: [PATCH 2/4] btrfs: cache max and min order inside btrfs_fs_info
config: x86_64-buildonly-randconfig-004-20250902 (https://download.01.org/0day-ci/archive/20250902/202509021022.B3V4xUho-lkp@intel.com/config)
compiler: gcc-12 (Debian 12.2.0-14+deb12u1) 12.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250902/202509021022.B3V4xUho-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202509021022.B3V4xUho-lkp@intel.com/
All warnings (new ones prefixed by >>):
In file included from fs/btrfs/extent_map.c:10:
fs/btrfs/btrfs_inode.h: In function 'btrfs_set_inode_mapping_order':
>> fs/btrfs/btrfs_inode.h:530:31: warning: unused variable 'fs_info' [-Wunused-variable]
530 | struct btrfs_fs_info *fs_info = inode->root->fs_info;
| ^~~~~~~
--
In file included from fs/btrfs/tests/../transaction.h:15,
from fs/btrfs/tests/delayed-refs-tests.c:4:
fs/btrfs/tests/../btrfs_inode.h: In function 'btrfs_set_inode_mapping_order':
>> fs/btrfs/tests/../btrfs_inode.h:530:31: warning: unused variable 'fs_info' [-Wunused-variable]
530 | struct btrfs_fs_info *fs_info = inode->root->fs_info;
| ^~~~~~~
vim +/fs_info +530 fs/btrfs/btrfs_inode.h
527
528 static inline void btrfs_set_inode_mapping_order(struct btrfs_inode *inode)
529 {
> 530 struct btrfs_fs_info *fs_info = inode->root->fs_info;
531 /* Metadata inode should not reach here. */
532 ASSERT(is_data_inode(inode));
533
534 /* We only allow BITS_PER_LONGS blocks for each bitmap. */
535 #ifdef CONFIG_BTRFS_EXPERIMENTAL
536 mapping_set_folio_order_range(inode->vfs_inode.i_mapping, fs_info->block_min_order,
537 fs_info->block_max_order);
538 #endif
539 }
540
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki