From: libaokun@huaweicloud.com
To: linux-ext4@vger.kernel.org
Cc: tytso@mit.edu, adilger.kernel@dilger.ca, jack@suse.cz,
linux-kernel@vger.kernel.org, kernel@pankajraghav.com,
mcgrof@kernel.org, ebiggers@kernel.org, willy@infradead.org,
yi.zhang@huawei.com, yangerkun@huawei.com,
chengzhihao1@huawei.com, libaokun1@huawei.com,
libaokun@huaweicloud.com
Subject: [PATCH v3 13/24] ext4: support large block size in ext4_mb_init_cache()
Date: Tue, 11 Nov 2025 22:26:23 +0800
Message-ID: <20251111142634.3301616-14-libaokun@huaweicloud.com>
In-Reply-To: <20251111142634.3301616-1-libaokun@huaweicloud.com>
From: Baokun Li <libaokun1@huawei.com>
Currently, ext4_mb_init_cache() uses blocks_per_page to derive the group
numbers and block offsets covered by a folio from its index. However, when
the block size is larger than PAGE_SIZE, blocks_per_page becomes zero,
leading to a potential division-by-zero bug.

Since we now have the folio, we know its exact size. This allows us to
convert {blocks, groups}_per_page to {blocks, groups}_per_folio, thus
supporting block sizes greater than the page size.
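
[Editorial illustration, not part of the patch: a minimal userspace sketch of
the per-folio arithmetic, assuming a 4K base page, a 64K block size and a 64K
folio. The first_block line mirrors what EXT4_PG_TO_LBLK(inode, folio->index)
from patch 10/24 is assumed to compute, i.e. the folio's byte offset divided
by the block size; the values and names here are illustrative only.]

    /* Hypothetical sketch; names mirror ext4_mb_init_cache() but this is
     * plain userspace C, not kernel code. */
    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
            unsigned long page_size  = 4096;   /* assumed 4K base page */
            unsigned long blocksize  = 65536;  /* 64K block, BS > PS   */
            unsigned long folio_size = 65536;  /* smallest folio that fits one block */
            unsigned long index      = 16;     /* folio->index, i.e. byte offset 64K */

            unsigned long old_blocks_per_page = page_size / blocksize;             /* 0 */
            unsigned long blocks_per_folio    = folio_size / blocksize;            /* 1 */
            unsigned long groups_per_folio    = DIV_ROUND_UP(blocks_per_folio, 2); /* 1 */
            unsigned long first_block = index * page_size / blocksize;             /* 1 */
            unsigned long first_group = first_block / 2;                           /* 0 */

            printf("old=%lu bpf=%lu gpf=%lu first_block=%lu first_group=%lu\n",
                   old_blocks_per_page, blocks_per_folio, groups_per_folio,
                   first_block, first_group);
            return 0;
    }

With the old page-based math, blocks_per_page is 0 in this configuration, so
first_group and first_block collapse to 0 for every folio; with the
folio-based math each 64K folio maps exactly one bitmap or buddy block of the
correct group.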
Signed-off-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
---
fs/ext4/mballoc.c | 44 ++++++++++++++++++++------------------------
1 file changed, 20 insertions(+), 24 deletions(-)
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index b454a41dd6c1..3f10c64ab2b1 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -1329,26 +1329,25 @@ static void mb_regenerate_buddy(struct ext4_buddy *e4b)
* block bitmap and buddy information. The information are
* stored in the inode as
*
- * { page }
+ * { folio }
* [ group 0 bitmap][ group 0 buddy] [group 1][ group 1]...
*
*
* one block each for bitmap and buddy information.
- * So for each group we take up 2 blocks. A page can
- * contain blocks_per_page (PAGE_SIZE / blocksize) blocks.
- * So it can have information regarding groups_per_page which
- * is blocks_per_page/2
+ * So for each group we take up 2 blocks. A folio can
+ * contain blocks_per_folio (folio_size / blocksize) blocks.
+ * So it can have information regarding groups_per_folio which
+ * is blocks_per_folio/2
*
* Locking note: This routine takes the block group lock of all groups
- * for this page; do not hold this lock when calling this routine!
+ * for this folio; do not hold this lock when calling this routine!
*/
-
static int ext4_mb_init_cache(struct folio *folio, char *incore, gfp_t gfp)
{
ext4_group_t ngroups;
unsigned int blocksize;
- int blocks_per_page;
- int groups_per_page;
+ int blocks_per_folio;
+ int groups_per_folio;
int err = 0;
int i;
ext4_group_t first_group, group;
@@ -1365,27 +1364,24 @@ static int ext4_mb_init_cache(struct folio *folio, char *incore, gfp_t gfp)
sb = inode->i_sb;
ngroups = ext4_get_groups_count(sb);
blocksize = i_blocksize(inode);
- blocks_per_page = PAGE_SIZE / blocksize;
+ blocks_per_folio = folio_size(folio) / blocksize;
+ WARN_ON_ONCE(!blocks_per_folio);
+ groups_per_folio = DIV_ROUND_UP(blocks_per_folio, 2);
mb_debug(sb, "init folio %lu\n", folio->index);
- groups_per_page = blocks_per_page >> 1;
- if (groups_per_page == 0)
- groups_per_page = 1;
-
/* allocate buffer_heads to read bitmaps */
- if (groups_per_page > 1) {
- i = sizeof(struct buffer_head *) * groups_per_page;
+ if (groups_per_folio > 1) {
+ i = sizeof(struct buffer_head *) * groups_per_folio;
bh = kzalloc(i, gfp);
if (bh == NULL)
return -ENOMEM;
} else
bh = &bhs;
- first_group = folio->index * blocks_per_page / 2;
-
/* read all groups the folio covers into the cache */
- for (i = 0, group = first_group; i < groups_per_page; i++, group++) {
+ first_group = EXT4_PG_TO_LBLK(inode, folio->index) / 2;
+ for (i = 0, group = first_group; i < groups_per_folio; i++, group++) {
if (group >= ngroups)
break;
@@ -1393,7 +1389,7 @@ static int ext4_mb_init_cache(struct folio *folio, char *incore, gfp_t gfp)
if (!grinfo)
continue;
/*
- * If page is uptodate then we came here after online resize
+ * If folio is uptodate then we came here after online resize
* which added some new uninitialized group info structs, so
* we must skip all initialized uptodate buddies on the folio,
* which may be currently in use by an allocating task.
@@ -1413,7 +1409,7 @@ static int ext4_mb_init_cache(struct folio *folio, char *incore, gfp_t gfp)
}
/* wait for I/O completion */
- for (i = 0, group = first_group; i < groups_per_page; i++, group++) {
+ for (i = 0, group = first_group; i < groups_per_folio; i++, group++) {
int err2;
if (!bh[i])
@@ -1423,8 +1419,8 @@ static int ext4_mb_init_cache(struct folio *folio, char *incore, gfp_t gfp)
err = err2;
}
- first_block = folio->index * blocks_per_page;
- for (i = 0; i < blocks_per_page; i++) {
+ first_block = EXT4_PG_TO_LBLK(inode, folio->index);
+ for (i = 0; i < blocks_per_folio; i++) {
group = (first_block + i) >> 1;
if (group >= ngroups)
break;
@@ -1501,7 +1497,7 @@ static int ext4_mb_init_cache(struct folio *folio, char *incore, gfp_t gfp)
out:
if (bh) {
- for (i = 0; i < groups_per_page; i++)
+ for (i = 0; i < groups_per_folio; i++)
brelse(bh[i]);
if (bh != &bhs)
kfree(bh);
--
2.46.1