From: Chao Yu via Linux-f2fs-devel <linux-f2fs-devel@lists.sourceforge.net>
To: Huang Jianan <huangjianan@xiaomi.com>,
"linux-f2fs-devel@lists.sourceforge.net"
<linux-f2fs-devel@lists.sourceforge.net>,
"jaegeuk@kernel.org" <jaegeuk@kernel.org>
Cc: Sheng Yong <shengyong1@xiaomi.com>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
Wang Hui <wanghui33@xiaomi.com>
Subject: Re: [f2fs-dev] [External Mail]Re: [PATCH v3] f2fs: avoid splitting bio when reading multiple pages
Date: Mon, 30 Jun 2025 19:23:35 +0800
Message-ID: <18cd79ac-b7c6-4d7c-a322-d98c194656ee@kernel.org>
In-Reply-To: <b76e5aaa-edb2-4a4d-a6a8-72f6e975f398@xiaomi.com>
On 6/25/25 17:50, Huang Jianan wrote:
> On 2025/6/25 17:48, Jianan Huang wrote:
>> On 2025/6/25 16:45, Chao Yu wrote:
>>>
>>> On 6/25/25 14:49, Jianan Huang wrote:
>>>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>>>> to the nr_vecs limit, the compressed pages will be split into multiple
>>>> bios and then merged by the block layer. In this case, nr_cpages should
>>>> be used to pre-allocate bvecs.
>>>> To handle this, align max_nr_pages to cluster_size, which should be
>>>> enough for all compressed pages.
>>>>
>>>> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
>>>> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
>>>> ---
>>>> Changes since v2:
>>>> - Initialize index only for compressed files.
>>>> Changes since v1:
>>>> - Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
>>>>
>>>> fs/f2fs/data.c | 12 ++++++++++--
>>>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>>> index 31e892842625..d071d9f6a811 100644
>>>> --- a/fs/f2fs/data.c
>>>> +++ b/fs/f2fs/data.c
>>>> @@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>>>>                  }
>>>>
>>>>                  if (!bio) {
>>>> -                        bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>>>> +                        bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
>>>
>>> Jianan,
>>>
>>> Another case:
>>>
>>> read pages #0,1,2,3 from blocks #1000,1001,1002, cluster_size=4.
>>>
>>> nr_pages=4
>>> max_nr_pages=round_up(0+4,4)-round_down(0,4)=4
>>>
>>> f2fs_mpage_readpages() calls f2fs_read_multi_pages() when nr_pages=1; at
>>> that time, max_nr_pages equals 1 as well.
>>>
>>> f2fs_grab_read_bio(..., 1 - 0, ...) allocates a bio w/ 1 vec of capacity;
>>> however, we need at least 3 vecs to merge all cpages, right?
>>>
>>
>> Hi, Chao,
>>
>> If we don't align nr_pages, then when entering f2fs_read_multi_pages,
>> we have nr_pages pages left, which belong to other clusters.
>> If this is the last page, we can simply pass nr_pages = 0.
>>
>> When allocating the bio, we need:
>> 1. The cpages remaining in the current cluster, which should be
>> (nr_cpages - i).
>> 2. The maximum cpages remaining in other clusters, which should be
>> max(nr_pages, cc->nr_cpages).
>>
>
> align(nr_pages, cc->nr_cpages), sorry for this.
>
>> So (nr_cpages - i) + max(nr_pages, nr_cpages) should be enough for all
>> vecs?
Jianan,

What about getting rid of the below change and just passing max_nr_pages to
f2fs_read_multi_pages? It may waste a little bio vector space, but it is
safer to reserve enough margin.
+ for (; nr_pages; nr_pages--, max_nr_pages--) {
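
Something like the below untested sketch on top of your v3 (only the
decrement is dropped, so the aligned max_nr_pages stays constant and the
whole margin is available when grabbing the bio):

-        for (; nr_pages; nr_pages--, max_nr_pages--) {
+        for (; nr_pages; nr_pages--) {

f2fs_read_multi_pages() would then always see the aligned max_nr_pages.
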
Thanks,
>>
>> Thanks,
>>
>>
>>> Thanks,
>>>
>>>>                                          f2fs_ra_op_flags(rac),
>>>>                                          folio->index, for_write);
>>>>                  if (IS_ERR(bio)) {
>>>> @@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>>          unsigned max_nr_pages = nr_pages;
>>>>          int ret = 0;
>>>>
>>>> +#ifdef CONFIG_F2FS_FS_COMPRESSION
>>>> +        if (f2fs_compressed_file(inode)) {
>>>> +                index = rac ? readahead_index(rac) : folio->index;
>>>> +                max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
>>>> +                                round_down(index, cc.cluster_size);
>>>> +        }
>>>> +#endif
>>>> +
>>>>          map.m_pblk = 0;
>>>>          map.m_lblk = 0;
>>>>          map.m_len = 0;
>>>> @@ -2385,7 +2393,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>>          map.m_seg_type = NO_CHECK_TYPE;
>>>>          map.m_may_create = false;
>>>>
>>>> -        for (; nr_pages; nr_pages--) {
>>>> +        for (; nr_pages; nr_pages--, max_nr_pages--) {
>>>>                  if (rac) {
>>>>                          folio = readahead_folio(rac);
>>>>                          prefetchw(&folio->flags);
>>>
>>
>