From: Baokun Li <libaokun1@huawei.com>
To: <linux-ext4@vger.kernel.org>
Cc: <tytso@mit.edu>, <adilger.kernel@dilger.ca>, <jack@suse.cz>,
<linux-kernel@vger.kernel.org>, <ojaswin@linux.ibm.com>,
<julia.lawall@inria.fr>, <yi.zhang@huawei.com>,
<yangerkun@huawei.com>, <libaokun1@huawei.com>,
<libaokun@huaweicloud.com>, <stable@vger.kernel.org>
Subject: [PATCH v3 11/17] ext4: fix largest free orders lists corruption on mb_optimize_scan switch
Date: Mon, 14 Jul 2025 21:03:21 +0800
Message-ID: <20250714130327.1830534-12-libaokun1@huawei.com>
In-Reply-To: <20250714130327.1830534-1-libaokun1@huawei.com>

The grp->bb_largest_free_order field is updated regardless of whether
mb_optimize_scan is enabled. When mb_optimize_scan is repeatedly enabled
and disabled via remount, grp->bb_largest_free_order can therefore fall
out of sync with the s_mb_largest_free_orders list the group actually
sits on.

For example, suppose mb_optimize_scan is initially enabled, the group's
largest free order is 3, and the group is on s_mb_largest_free_orders[3].
mb_optimize_scan is then disabled via remount, and block allocations
lower the recorded largest free order to 2; the order lists are not
touched while the option is off, so the group stays on
s_mb_largest_free_orders[3]. Finally, mb_optimize_scan is re-enabled via
remount, and further block allocations lower the largest free order to 1.
At this point the group is removed from s_mb_largest_free_orders[3]
while holding s_mb_largest_free_orders_locks[2]. This lock mismatch can
corrupt the list.

To fix this, whenever grp->bb_largest_free_order changes, always attempt
to remove the group from its old order list, but insert it into the new
order list only if mb_optimize_scan is enabled. This prevents the lock
inconsistency and keeps the data in the order lists reliable.
Fixes: 196e402adf2e ("ext4: improve cr 0 / cr 1 group scanning")
CC: stable@vger.kernel.org
Suggested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Baokun Li <libaokun1@huawei.com>
---
fs/ext4/mballoc.c | 33 ++++++++++++++-------------------
1 file changed, 14 insertions(+), 19 deletions(-)
diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
index 72b20fc52bbf..fada0d1b3fdb 100644
--- a/fs/ext4/mballoc.c
+++ b/fs/ext4/mballoc.c
@@ -1152,33 +1152,28 @@ static void
 mb_set_largest_free_order(struct super_block *sb, struct ext4_group_info *grp)
 {
 	struct ext4_sb_info *sbi = EXT4_SB(sb);
-	int i;
+	int new, old = grp->bb_largest_free_order;
 
-	for (i = MB_NUM_ORDERS(sb) - 1; i >= 0; i--)
-		if (grp->bb_counters[i] > 0)
+	for (new = MB_NUM_ORDERS(sb) - 1; new >= 0; new--)
+		if (grp->bb_counters[new] > 0)
 			break;
+
 	/* No need to move between order lists? */
-	if (!test_opt2(sb, MB_OPTIMIZE_SCAN) ||
-	    i == grp->bb_largest_free_order) {
-		grp->bb_largest_free_order = i;
+	if (new == old)
 		return;
-	}
 
-	if (grp->bb_largest_free_order >= 0) {
-		write_lock(&sbi->s_mb_largest_free_orders_locks[
-					      grp->bb_largest_free_order]);
+	if (old >= 0 && !list_empty(&grp->bb_largest_free_order_node)) {
+		write_lock(&sbi->s_mb_largest_free_orders_locks[old]);
 		list_del_init(&grp->bb_largest_free_order_node);
-		write_unlock(&sbi->s_mb_largest_free_orders_locks[
-					      grp->bb_largest_free_order]);
+		write_unlock(&sbi->s_mb_largest_free_orders_locks[old]);
 	}
-	grp->bb_largest_free_order = i;
-	if (grp->bb_largest_free_order >= 0 && grp->bb_free) {
-		write_lock(&sbi->s_mb_largest_free_orders_locks[
-					      grp->bb_largest_free_order]);
+
+	grp->bb_largest_free_order = new;
+	if (test_opt2(sb, MB_OPTIMIZE_SCAN) && new >= 0 && grp->bb_free) {
+		write_lock(&sbi->s_mb_largest_free_orders_locks[new]);
 		list_add_tail(&grp->bb_largest_free_order_node,
-		      &sbi->s_mb_largest_free_orders[grp->bb_largest_free_order]);
-		write_unlock(&sbi->s_mb_largest_free_orders_locks[
-					      grp->bb_largest_free_order]);
+			      &sbi->s_mb_largest_free_orders[new]);
+		write_unlock(&sbi->s_mb_largest_free_orders_locks[new]);
 	}
 }
 
--
2.46.1