From: Zhang Yi <yi.zhang@huawei.com>
To: Baokun Li <libaokun1@huawei.com>, <linux-ext4@vger.kernel.org>
Cc: <tytso@mit.edu>, <adilger.kernel@dilger.ca>, <jack@suse.cz>,
<linux-kernel@vger.kernel.org>, <ojaswin@linux.ibm.com>,
<julia.lawall@inria.fr>, <yangerkun@huawei.com>,
<libaokun@huaweicloud.com>
Subject: Re: [PATCH v3 00/17] ext4: better scalability for ext4 block allocation
Date: Tue, 15 Jul 2025 09:11:59 +0800
Message-ID: <277b45e3-173d-4cf4-b044-7c25cd42e41b@huawei.com>
In-Reply-To: <20250714130327.1830534-1-libaokun1@huawei.com>

On 2025/7/14 21:03, Baokun Li wrote:
> Changes since v2:
> * Collect RVB from Jan Kara. (Thanks for your review!)
> * Add patch 2.
> * Patch 4: Switched to READ_ONCE/WRITE_ONCE (a clear win for the
> single-process case) over smp_load_acquire/smp_store_release (which
> gave only a slight multi-process gain); a rough sketch follows this
> list. (Suggested by Jan Kara)
> * Patch 5: The number of global goals is now set to the lesser of the CPU
> count and one-fourth of the group count. This prevents setting too
> many goals for small filesystems, which would lead to file dispersion.
> (Suggested by Jan Kara)
> * Patch 5: Directly use kfree() to release s_mb_last_groups instead of
> kvfree(). (Suggested by Julia Lawall)
> * Patch 11: Even without mb_optimize_scan enabled, we now always attempt
> to remove the group from the old order list. (Suggested by Jan Kara)
> * Patch 14-16: Added comments for clarity, refined logic, and removed
> obsolete variables.
> * Update performance test results and indicate raw disk write bandwidth.
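>
> A rough sketch of the Patch 4 change, for readers unfamiliar with the
> trade-off (field and struct names follow the existing mballoc code, but
> the snippet is illustrative rather than the actual diff):
>
> ```c
> /* Before: both sides serialized on s_md_lock just to pass a hint. */
> spin_lock(&sbi->s_md_lock);
> sbi->s_mb_last_group = ac->ac_f_ex.fe_group;
> spin_unlock(&sbi->s_md_lock);
>
> /*
>  * After: the value is only a best-effort starting hint, so annotated
>  * plain accesses are enough.  READ_ONCE()/WRITE_ONCE() stop the
>  * compiler from tearing or fusing the accesses, without the cost of a
>  * lock or of acquire/release barriers.
>  */
> WRITE_ONCE(sbi->s_mb_last_group, ac->ac_f_ex.fe_group);  /* writer */
> group = READ_ONCE(sbi->s_mb_last_group);                  /* reader */
> ```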
>
> Thanks to Honza for your suggestions!

This is a nice improvement! Overall, the series looks good to me!
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
>
> v2: https://lore.kernel.org/r/20250623073304.3275702-1-libaokun1@huawei.com
>
> Changes since v1:
> * Patch 1: Prioritize checking if a group is busy to avoid unnecessary
> checks and buddy loading. (Thanks to Ojaswin for the suggestion!)
> * Patch 4: Using multiple global goals instead of moving the goal to the
> inode level. (Thanks to Honza for the suggestion!)
> * Collect RVB from Jan Kara and Ojaswin Mujoo. (Thanks for your review!)
> * Add patches 2, 3, and 7-16.
> * Due to the change of test server, the relevant test data was refreshed.
>
> v1: https://lore.kernel.org/r/20250523085821.1329392-1-libaokun@huaweicloud.com
>
> Since servers have more and more CPUs, and we're running more containers
> on them, we've been using will-it-scale to test how well ext4 scales. The
> fallocate2 test (append 8KB to 1MB, truncate to 0, repeat), run
> concurrently in 64 containers, revealed significant contention in block
> allocation/freeing, leading to a much lower average fallocate OPS than a
> single container achieves (see below).
>
> Containers |    1   |   2   |   4   |   8   |  16   |  32  |  64
> -----------|--------|-------|-------|-------|-------|------|------
> Avg. OPS   | 295287 | 70665 | 33865 | 19387 | 10104 | 5588 | 3588
>
> Under this test scenario, the primary operations are block allocation
> (fallocate) and block deallocation (truncate). The main bottlenecks for
> these operations are the group lock and s_md_lock. Therefore, this patch
> series primarily focuses on optimizing the code related to these two locks.
>
> The following is a brief overview of the patches, see the patches for
> more details.
>
> Patch 1: Adds ext4_try_lock_group() so that busy groups can be skipped,
> taking advantage of the large number of ext4 groups.
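>
> A minimal sketch of the idea (the helper name matches the patch, but the
> body here is illustrative, built on the existing ext4_group_lock_ptr()
> helper):
>
> ```c
> /*
>  * Try to take the block group lock without spinning.  A caller that
>  * gets false simply moves on, since ext4 has plenty of groups to
>  * choose from and another one is likely to be idle.
>  */
> static inline bool ext4_try_lock_group(struct super_block *sb,
>                                        ext4_group_t group)
> {
>         return spin_trylock(ext4_group_lock_ptr(sb, group));
> }
> ```
>
> In the group scan loop, a busy group is then skipped with a plain
> continue instead of being waited on, and ext4_unlock_group() is called
> as before once the group has been scanned.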
>
> Patch 2: Separates stream goal hits from s_bal_goals in preparation for
> cleanup of s_mb_last_start.
>
> Patches 3-5: Split stream allocation's global goal into multiple goals and
> remove the unnecessary and expensive s_md_lock.
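>
> A sketch of the multiple-goals idea (the array name, its sizing, and
> picking a slot by CPU are illustrative; the patches decide the exact
> layout):
>
> ```c
> /*
>  * Instead of one s_mb_last_group hint guarded by s_md_lock, keep a
>  * small array of hints so that concurrent stream allocations spread
>  * over different slots.  The slot count is capped so that small
>  * filesystems do not scatter files across too many goals.
>  */
> unsigned int nr_goals = max_t(unsigned int, 1,
>                               min_t(unsigned int, nr_cpu_ids,
>                                     ext4_get_groups_count(sb) / 4));
> ext4_group_t *last_groups = kmalloc_array(nr_goals, sizeof(*last_groups),
>                                           GFP_KERNEL);
>
> /* writer: remember where this slot's last stream allocation landed */
> WRITE_ONCE(last_groups[raw_smp_processor_id() % nr_goals],
>            ac->ac_f_ex.fe_group);
>
> /* reader: start the next stream allocation from the same slot */
> group = READ_ONCE(last_groups[raw_smp_processor_id() % nr_goals]);
> ```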
>
> Patches 6-7: minor cleanups
>
> Patch 8: Converts s_mb_free_pending to atomic_t and uses memory barriers
> for consistency, instead of relying on the expensive s_md_lock.
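>
> A small sketch of what the conversion looks like (the clusters variable
> and the barrier placement are illustrative; see the patch for the exact
> pairing):
>
> ```c
> /* update side: track freed-but-not-yet-reusable clusters locklessly */
> atomic_add(clusters, &sbi->s_mb_free_pending);
> /* pairs with the reader below so the counter update is not reordered
>  * past the free-extent bookkeeping it describes (illustrative) */
> smp_mb__after_atomic();
>
> /* commit side: the clusters become usable again */
> atomic_sub(clusters, &sbi->s_mb_free_pending);
>
> /* read side: cheap check with no s_md_lock needed */
> if (atomic_read(&sbi->s_mb_free_pending) == 0)
>         return;         /* nothing pending in this transaction */
> ```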
>
> Patch 9: When inserting free extents, we now first attempt to merge them
> with already-inserted extents, to reduce s_md_lock contention.
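>
> A sketch of the merge-before-insert idea (field names follow
> struct ext4_free_data in fs/ext4/mballoc.h, but the helper itself is
> illustrative):
>
> ```c
> /*
>  * If the extent being freed is physically adjacent to an entry already
>  * queued for the same group and transaction, extend that entry instead
>  * of inserting a new node, keeping the tree small and the time spent
>  * under the lock short.
>  */
> static bool ext4_try_merge_freed(struct ext4_free_data *entry,
>                                  struct ext4_free_data *new_entry)
> {
>         if (entry->efd_tid != new_entry->efd_tid ||
>             entry->efd_group != new_entry->efd_group)
>                 return false;
>         if (entry->efd_start_cluster + entry->efd_count ==
>             new_entry->efd_start_cluster) {
>                 entry->efd_count += new_entry->efd_count;     /* append */
>                 return true;
>         }
>         if (new_entry->efd_start_cluster + new_entry->efd_count ==
>             entry->efd_start_cluster) {
>                 entry->efd_start_cluster = new_entry->efd_start_cluster;
>                 entry->efd_count += new_entry->efd_count;     /* prepend */
>                 return true;
>         }
>         return false;
> }
> ```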
>
> Patch 10: Updates bb_avg_fragment_size_order to -1 when a group is out of
> free blocks, eliminating efficiency-impacting "zombie groups."
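>
> Roughly, the fix amounts to the following (field names follow
> struct ext4_group_info; the locking around the list update is elided):
>
> ```c
> /*
>  * Once the last free block of a group is allocated, drop the group
>  * from the average-fragment-size list and reset its cached order, so
>  * scans of these lists never revisit groups that cannot satisfy any
>  * request ("zombie groups").
>  */
> if (grp->bb_free == 0 && grp->bb_avg_fragment_size_order != -1) {
>         list_del_init(&grp->bb_avg_fragment_size_node);
>         grp->bb_avg_fragment_size_order = -1;
> }
> ```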
>
> Patch 11: Fixes potential corruption of the largest-free-order lists when
> the mb_optimize_scan mount option is switched on or off.
>
> Patches 12-17: Convert mb_optimize_scan's existing unordered list traversal
> to ordered xarrays, thereby reducing contention between block allocation
> and freeing, much as the linear traversal already does.
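>
> A sketch of the ordered, linear-like scan over one order's xarray (the
> xarray field name and the "not found" sentinel are illustrative; the
> real patches split this across several helpers):
>
> ```c
> static ext4_group_t ext4_mb_scan_order_xa(struct ext4_sb_info *sbi,
>                                           struct ext4_allocation_context *ac,
>                                           int order, ext4_group_t start,
>                                           enum criteria cr)
> {
>         struct ext4_group_info *grp;
>         unsigned long idx;
>
>         /* first pass: from the goal group towards the end of the xarray */
>         xa_for_each_start(&sbi->s_mb_largest_free_orders[order],
>                           idx, grp, start) {
>                 if (ext4_mb_good_group(ac, idx, cr))
>                         return idx;
>         }
>
>         /* wrap around: scan the groups that come before the goal */
>         if (start) {
>                 xa_for_each_range(&sbi->s_mb_largest_free_orders[order],
>                                   idx, grp, 0, start - 1) {
>                         if (ext4_mb_good_group(ac, idx, cr))
>                                 return idx;
>                 }
>         }
>
>         return UINT_MAX;        /* sentinel: no suitable group found */
> }
> ```
>
> Because the xarray is indexed by group number, the scan proceeds in
> ascending group order, much like the linear traversal, rather than in
> whatever order groups happen to sit on a list.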
>
> "kvm-xfstests -c ext4/all -g auto" has been executed with no new failures.
>
> Here are some performance test data for your reference:
>
> Test: Running will-it-scale/fallocate2 on CPU-bound containers.
> Observation: Average fallocate operations per container per second.
>
> |CPU: Kunpeng 920 | P80 | P1 |
> |Memory: 512GB |------------------------|-------------------------|
> |960GB SSD (0.5GB/s)| base | patched | base | patched |
> |-------------------|-------|----------------|--------|----------------|
> |mb_optimize_scan=0 | 2667 | 20049 (+651%) | 314065 | 316724 (+0.8%) |
> |mb_optimize_scan=1 | 2643 | 19342 (+631%) | 316344 | 328324 (+3.7%) |
>
> |CPU: AMD 9654 * 2 | P96 | P1 |
> |Memory: 1536GB |------------------------|-------------------------|
> |960GB SSD (1GB/s) | base | patched | base | patched |
> |-------------------|-------|----------------|--------|----------------|
> |mb_optimize_scan=0 | 3450 | 52125 (+1410%) | 205851 | 215136 (+4.5%) |
> |mb_optimize_scan=1 | 3209 | 50331 (+1468%) | 207373 | 209431 (+0.9%) |
>
> Tests also evaluated this patch set's impact on fragmentation: a minor
> increase in free space fragmentation for multi-process workloads, but a
> significant decrease in file fragmentation:
>
> Test Script:
> ```shell
> #!/bin/bash
>
> dir="/tmp/test"
> disk="/dev/sda"
>
> mkdir -p $dir
>
> for scan in 0 1 ; do
> mkfs.ext4 -F -E lazy_itable_init=0,lazy_journal_init=0 \
> -O orphan_file $disk 200G
> mount -o mb_optimize_scan=$scan $disk $dir
>
> fio -directory=$dir -direct=1 -iodepth 128 -thread -ioengine=falloc \
> -rw=write -bs=4k -fallocate=none -numjobs=64 -file_append=1 \
> -size=1G -group_reporting -name=job1 -cpus_allowed_policy=split
>
> e2freefrag $disk
> e4defrag -c $dir # Without the patch, this could take 5-6 hours.
> filefrag ${dir}/job* | awk '{print $2}' | \
> awk '{sum+=$1} END {print sum/NR}'
> umount $dir
> done
> ```
>
> Test results:
> -------------------------------------------------------------|
> | base | patched |
> -------------------------|--------|--------|--------|--------|
> mb_optimize_scan | linear |opt_scan| linear |opt_scan|
> -------------------------|--------|--------|--------|--------|
> bw(MiB/s) | 217 | 217 | 5718 | 5626 |
> -------------------------|-----------------------------------|
> Avg. free extent size(KB)| 1943732| 1943732| 1316212| 1171208|
> Num. free extent | 71 | 71 | 105 | 118 |
> -------------------------------------------------------------|
> Avg. extents per file | 261967 | 261973 | 588 | 570 |
> Avg. size per extent(KB) | 4 | 4 | 1780 | 1837 |
> Fragmentation score | 100 | 100 | 2 | 2 |
> -------------------------------------------------------------|
>
> Comments and questions are, as always, welcome.
>
> Thanks,
> Baokun
>
> Baokun Li (17):
> ext4: add ext4_try_lock_group() to skip busy groups
> ext4: separate stream goal hits from s_bal_goals for better tracking
> ext4: remove unnecessary s_mb_last_start
> ext4: remove unnecessary s_md_lock on update s_mb_last_group
> ext4: utilize multiple global goals to reduce contention
> ext4: get rid of some obsolete EXT4_MB_HINT flags
> ext4: fix typo in CR_GOAL_LEN_SLOW comment
> ext4: convert sbi->s_mb_free_pending to atomic_t
> ext4: merge freed extent with existing extents before insertion
> ext4: fix zombie groups in average fragment size lists
> ext4: fix largest free orders lists corruption on mb_optimize_scan
> switch
> ext4: factor out __ext4_mb_scan_group()
> ext4: factor out ext4_mb_might_prefetch()
> ext4: factor out ext4_mb_scan_group()
> ext4: convert free groups order lists to xarrays
> ext4: refactor choose group to scan group
> ext4: implement linear-like traversal across order xarrays
>
> fs/ext4/balloc.c | 2 +-
> fs/ext4/ext4.h | 61 +--
> fs/ext4/mballoc.c | 895 ++++++++++++++++++++----------------
> fs/ext4/mballoc.h | 9 +-
> include/trace/events/ext4.h | 3 -
> 5 files changed, 534 insertions(+), 436 deletions(-)
>