From: "Darrick J. Wong" <djwong@kernel.org>
To: Qu Wenruo <quwenruo.btrfs@gmx.com>
Cc: Zhang Yi <yi.zhang@huaweicloud.com>, Qu Wenruo <wqu@suse.com>,
Theodore Ts'o <tytso@mit.edu>,
linux-ext4 <linux-ext4@vger.kernel.org>,
linux-btrfs <linux-btrfs@vger.kernel.org>,
"linux-fsdevel@vger.kernel.org" <linux-fsdevel@vger.kernel.org>
Subject: Re: Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases)
Date: Mon, 11 Aug 2025 08:49:35 -0700 [thread overview]
Message-ID: <20250811154935.GD7942@frogsfrogsfrogs> (raw)
In-Reply-To: <15a4c437-d276-4503-9e30-4d48f5b7a4ff@gmx.com>
On Sun, Aug 10, 2025 at 07:36:48AM +0930, Qu Wenruo wrote:
>
>
> 在 2025/8/9 18:39, Zhang Yi 写道:
> > On 2025/8/9 6:11, Qu Wenruo wrote:
> > > 在 2025/8/8 21:46, Theodore Ts'o 写道:
> > > > On Fri, Aug 08, 2025 at 06:20:56PM +0930, Qu Wenruo wrote:
> > > > >
> > > > > 在 2025/8/8 17:22, Qu Wenruo 写道:
> > > > > > Hi,
> > > > > >
> > > > > > [BACKGROUND]
> > > > > > Recently I'm testing btrfs with 16KiB block size.
> > > > > >
> > > > > > Currently btrfs is artificially limiting subpage block size to 4K.
> > > > > > But there is a simple patch to change it to support all block sizes <=
> > > > > > page size in my branch:
> > > > > >
> > > > > > https://github.com/adam900710/linux/tree/larger_bs_support
> > > > > >
> > > > > > [IOMAP WARNING]
> > > > > > And I'm running into a very weird kernel warning at btrfs/136, with 16K
> > > > > > block size and 64K page size.
> > > > > >
> > > > > > The problem is, the warning happens with ext3 (using the ext4
> > > > > > module) with a 16K block size, and no btrfs is involved at all.
> > > >
> > > >
> > > > Thanks for the bug report! This looks like an issue with using
> > > > indirect block-mapped files with a 16k block size. I tried your
> > > > reproducer using a 1k block size on an x86_64 system, which is how I
> > > > test problems caused by block size < page size. It didn't
> > > > reproduce there, so it looks like it really needs a 16k block size.
> > > >
> > > > Can you say something about what system you were running your
> > > > testing on --- was it an arm64 system or a powerpc64 system (the two
> > > > most common systems with page size > 4k)? (I assume you're not
> > > > trying to do this on an Itanic. :-) And was the page size 16k or 64k?
> > >
> > > The architecture is aarch64, the host board is a Rock5B (cheap and fast enough), and the test machine is a VM on that board, with OVMF as the UEFI firmware.
> > >
> > > The kernel is configured to use a 64K page size, and the *ext3* filesystem is using a 16K block size.
> > >
> > > So far I have tried the following combinations with a 64K page size and ext3; the results are:
> > >
> > > - 2K block size
> > > - 4K block size
> > > All fine
> > >
> > > - 8K block size
> > > - 16K block size
> > > All the same kernel warning and never ending fsstress
> > >
> > > - 32K block size
> > > - 64K block size
> > > All fine
> > >
> > > I am as surprised as you that not all subpage block sizes have problems -- only 2 of the less common combinations failed.
> > >
> > > And the most common ones (4K and page size) are all fine.
> > >
> > > Finally, if using ext4 instead of ext3, all the combinations above are fine again.
> > >
> > > So I ran out of ideas why only 2 block sizes fail here...
> > >
> >
> > This issue is caused by an overflow in the calculation of the hole's
> > length at the fourth indirection level for non-extent inodes. For a
> > filesystem with a 4KB block size, the calculation does not overflow.
> > For a 64KB block size, the queried position never reaches the fourth
> > level, so this issue only occurs on filesystems with an 8KB or 16KB
> > block size.
> >
> > Hi, Wenruo, could you try the following fix?
> >
> > diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
> > index 7de327fa7b1c..d45124318200 100644
> > --- a/fs/ext4/indirect.c
> > +++ b/fs/ext4/indirect.c
> > @@ -539,7 +539,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
> > int indirect_blks;
> > int blocks_to_boundary = 0;
> > int depth;
> > - int count = 0;
> > + u64 count = 0;
> > ext4_fsblk_t first_block = 0;
> >
> > trace_ext4_ind_map_blocks_enter(inode, map->m_lblk, map->m_len, flags);
> > @@ -588,7 +588,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
> > count++;
> > /* Fill in size of a hole we found */
> > map->m_pblk = 0;
> > - map->m_len = min_t(unsigned int, map->m_len, count);
> > + map->m_len = umin(map->m_len, count);
> > goto cleanup;
> > }
>
> It indeed solves the problem.
>
> Tested-by: Qu Wenruo <wqu@suse.com>
Can we get the relevant chunks of this test turned into a tests/ext4/
fstest so that the ext4 developers have a regression test that doesn't
require setting up btrfs, please?
--D
> Thanks,
> Qu
>
> > Thanks,
> > Yi.
> >
>
>
Thread overview: 9+ messages
2025-08-08 7:52 Ext4 iomap warning during btrfs/136 (yes, it's from btrfs test cases) Qu Wenruo
2025-08-08 8:50 ` Qu Wenruo
2025-08-08 12:16 ` Theodore Ts'o
2025-08-08 22:11 ` Qu Wenruo
2025-08-09 9:09 ` Zhang Yi
2025-08-09 22:06 ` Qu Wenruo
2025-08-11 15:49 ` Darrick J. Wong [this message]
2025-08-11 22:14 ` Qu Wenruo
2025-08-12 16:48 ` Darrick J. Wong