* [PATCH v3] f2fs: avoid splitting bio when reading multiple pages
@ 2025-06-25 6:49 Jianan Huang
2025-06-25 8:45 ` Chao Yu
0 siblings, 1 reply; 6+ messages in thread
From: Jianan Huang @ 2025-06-25 6:49 UTC (permalink / raw)
To: linux-f2fs-devel, chao, jaegeuk
Cc: wanghui33, linux-kernel, Jianan Huang, Sheng Yong
When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
to the nr_vecs limit, the compressed pages will be split into multiple
bios and then merged back at the block layer. In this case, nr_cpages
should be used to pre-allocate bvecs.
To handle this, align max_nr_pages to cluster_size, which should be
enough to cover all compressed pages.
Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
---
Changes since v2:
- Initialize index only for compressed files.
Changes since v1:
- Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
fs/f2fs/data.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 31e892842625..d071d9f6a811 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
}
if (!bio) {
- bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
+ bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
f2fs_ra_op_flags(rac),
folio->index, for_write);
if (IS_ERR(bio)) {
@@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode *inode,
unsigned max_nr_pages = nr_pages;
int ret = 0;
+#ifdef CONFIG_F2FS_FS_COMPRESSION
+ if (f2fs_compressed_file(inode)) {
+ index = rac ? readahead_index(rac) : folio->index;
+ max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
+ round_down(index, cc.cluster_size);
+ }
+#endif
+
map.m_pblk = 0;
map.m_lblk = 0;
map.m_len = 0;
@@ -2385,7 +2393,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
map.m_seg_type = NO_CHECK_TYPE;
map.m_may_create = false;
- for (; nr_pages; nr_pages--) {
+ for (; nr_pages; nr_pages--, max_nr_pages--) {
if (rac) {
folio = readahead_folio(rac);
prefetchw(&folio->flags);
--
2.43.0
* Re: [PATCH v3] f2fs: avoid splitting bio when reading multiple pages
2025-06-25 6:49 [PATCH v3] f2fs: avoid splitting bio when reading multiple pages Jianan Huang
@ 2025-06-25 8:45 ` Chao Yu
2025-06-25 9:48 ` [External Mail]Re: " Huang Jianan
From: Chao Yu @ 2025-06-25 8:45 UTC (permalink / raw)
To: Jianan Huang, linux-f2fs-devel, jaegeuk
Cc: chao, wanghui33, linux-kernel, Sheng Yong
On 6/25/25 14:49, Jianan Huang wrote:
> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
> to the nr_vecs limit, the compressed pages will be split into multiple
> bios and then merged at the block level. In this case, nr_cpages should
> be used to pre-allocate bvecs.
> To handle this case, align max_nr_pages to cluster_size, which should be
> enough for all compressed pages.
>
> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
> ---
> Changes since v2:
> - Initialize index only for compressed files.
> Changes since v1:
> - Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
>
> fs/f2fs/data.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 31e892842625..d071d9f6a811 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
> }
>
> if (!bio) {
> - bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
> + bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
Jianan,
Another case:
read page #0,1,2,3 from block #1000,1001,1002, cluster_size=4.
nr_pages=4
max_nr_pages=round_up(0+4,4)-round_down(0,4)=4
f2fs_mpage_readpages() calls f2fs_read_multi_pages() when nr_pages=1; at
that time, max_nr_pages equals 1 as well.
f2fs_grab_read_bio(..., 1 - 0, ...) allocates a bio w/ 1 vec capacity; however,
we need at least 3 vecs to merge all cpages, right?
Thanks,
> f2fs_ra_op_flags(rac),
> folio->index, for_write);
> if (IS_ERR(bio)) {
> @@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode *inode,
> unsigned max_nr_pages = nr_pages;
> int ret = 0;
>
> +#ifdef CONFIG_F2FS_FS_COMPRESSION
> + if (f2fs_compressed_file(inode)) {
> + index = rac ? readahead_index(rac) : folio->index;
> + max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
> + round_down(index, cc.cluster_size);
> + }
> +#endif
> +
> map.m_pblk = 0;
> map.m_lblk = 0;
> map.m_len = 0;
> @@ -2385,7 +2393,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
> map.m_seg_type = NO_CHECK_TYPE;
> map.m_may_create = false;
>
> - for (; nr_pages; nr_pages--) {
> + for (; nr_pages; nr_pages--, max_nr_pages--) {
> if (rac) {
> folio = readahead_folio(rac);
> prefetchw(&folio->flags);
* Re: [External Mail]Re: [PATCH v3] f2fs: avoid splitting bio when reading multiple pages
2025-06-25 8:45 ` Chao Yu
@ 2025-06-25 9:48 ` Huang Jianan
2025-06-25 9:50 ` Huang Jianan
From: Huang Jianan @ 2025-06-25 9:48 UTC (permalink / raw)
To: Chao Yu, linux-f2fs-devel@lists.sourceforge.net,
jaegeuk@kernel.org
Cc: Wang Hui, linux-kernel@vger.kernel.org, Sheng Yong
On 2025/6/25 16:45, Chao Yu wrote:
>
> On 6/25/25 14:49, Jianan Huang wrote:
>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>> to the nr_vecs limit, the compressed pages will be split into multiple
>> bios and then merged at the block level. In this case, nr_cpages should
>> be used to pre-allocate bvecs.
>> To handle this case, align max_nr_pages to cluster_size, which should be
>> enough for all compressed pages.
>>
>> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
>> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
>> ---
>> Changes since v2:
>> - Initialize index only for compressed files.
>> Changes since v1:
>> - Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
>>
>> fs/f2fs/data.c | 12 ++++++++++--
>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>> index 31e892842625..d071d9f6a811 100644
>> --- a/fs/f2fs/data.c
>> +++ b/fs/f2fs/data.c
>> @@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>> }
>>
>> if (!bio) {
>> - bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>> + bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
>
> Jianan,
>
> Another case:
>
> read page #0,1,2,3 from block #1000,1001,1002, cluster_size=4.
>
> nr_pages=4
> max_nr_pages=round_up(0+4,4)-round_down(0,4)=4
>
> f2fs_mpage_readpages() calls f2fs_read_multi_pages() when nr_pages=1, at
> that time, max_nr_pages equals to 1 as well.
>
> f2fs_grab_read_bio(..., 1 - 0,...) allocate bio w/ 1 vec capacity, however,
> we need at least 3 vecs to merge all cpages, right?
>
Hi, Chao,
If we don't align nr_pages, then when entering f2fs_read_multi_pages,
we have nr_pages pages left, which belong to other clusters.
If this is the last page, we can simply pass nr_pages = 0.
When allocating the bio, we need:
1. The cpages remaining in the current cluster, which should be
(nr_cpages - i).
2. The maximum cpages remaining in other clusters, which should be
max(nr_pages, cc->nr_cpages).
So (nr_cpages - i) + max(nr_pages, nr_cpages) should be enough for all
vecs?
Thanks,
> Thanks,
>
>> f2fs_ra_op_flags(rac),
>> folio->index, for_write);
>> if (IS_ERR(bio)) {
>> @@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode *inode,
>> unsigned max_nr_pages = nr_pages;
>> int ret = 0;
>>
>> +#ifdef CONFIG_F2FS_FS_COMPRESSION
>> + if (f2fs_compressed_file(inode)) {
>> + index = rac ? readahead_index(rac) : folio->index;
>> + max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
>> + round_down(index, cc.cluster_size);
>> + }
>> +#endif
>> +
>> map.m_pblk = 0;
>> map.m_lblk = 0;
>> map.m_len = 0;
>> @@ -2385,7 +2393,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>> map.m_seg_type = NO_CHECK_TYPE;
>> map.m_may_create = false;
>>
>> - for (; nr_pages; nr_pages--) {
>> + for (; nr_pages; nr_pages--, max_nr_pages--) {
>> if (rac) {
>> folio = readahead_folio(rac);
>> prefetchw(&folio->flags);
>
* Re: [External Mail]Re: [PATCH v3] f2fs: avoid splitting bio when reading multiple pages
2025-06-25 9:48 ` [External Mail]Re: " Huang Jianan
@ 2025-06-25 9:50 ` Huang Jianan
2025-06-30 11:23 ` Chao Yu
From: Huang Jianan @ 2025-06-25 9:50 UTC (permalink / raw)
To: Chao Yu, linux-f2fs-devel@lists.sourceforge.net,
jaegeuk@kernel.org
Cc: Wang Hui, linux-kernel@vger.kernel.org, Sheng Yong
On 2025/6/25 17:48, Jianan Huang wrote:
> On 2025/6/25 16:45, Chao Yu wrote:
>>
>> On 6/25/25 14:49, Jianan Huang wrote:
>>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>>> to the nr_vecs limit, the compressed pages will be split into multiple
>>> bios and then merged at the block level. In this case, nr_cpages should
>>> be used to pre-allocate bvecs.
>>> To handle this case, align max_nr_pages to cluster_size, which should be
>>> enough for all compressed pages.
>>>
>>> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
>>> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
>>> ---
>>> Changes since v2:
>>> - Initialize index only for compressed files.
>>> Changes since v1:
>>> - Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
>>>
>>> fs/f2fs/data.c | 12 ++++++++++--
>>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>> index 31e892842625..d071d9f6a811 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx
>>> *cc, struct bio **bio_ret,
>>> }
>>>
>>> if (!bio) {
>>> - bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>>> + bio = f2fs_grab_read_bio(inode, blkaddr,
>>> nr_pages - i,
>>
>> Jianan,
>>
>> Another case:
>>
>> read page #0,1,2,3 from block #1000,1001,1002, cluster_size=4.
>>
>> nr_pages=4
>> max_nr_pages=round_up(0+4,4)-round_down(0,4)=4
>>
>> f2fs_mpage_readpages() calls f2fs_read_multi_pages() when nr_pages=1, at
>> that time, max_nr_pages equals to 1 as well.
>>
>> f2fs_grab_read_bio(..., 1 - 0,...) allocate bio w/ 1 vec capacity,
>> however,
>> we need at least 3 vecs to merge all cpages, right?
>>
>
> Hi, chao,
>
> If we don't align nr_pages, then when entering f2fs_read_multi_pages,
> we have nr_pages pages left, which belong to other clusters.
> If this is the last page, we can simply pass nr_pages = 0.
>
> When allocating bio, we need:
> 1. The cpages remaining in the current cluster, which should be
> (nr_cpages - i).
> 2. The maximum cpages remaining in other clusters, which should be
> max(nr_pages, cc->nr_cpages).
>
I meant align(nr_pages, cc->nr_cpages), sorry for this.
> So (nr_cpages - i) + max(nr_pages, nr_cpages), should be enough for all
> vecs?
>
> Thanks,
>
>
>> Thanks,
>>
>>> f2fs_ra_op_flags(rac),
>>> folio->index, for_write);
>>> if (IS_ERR(bio)) {
>>> @@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode
>>> *inode,
>>> unsigned max_nr_pages = nr_pages;
>>> int ret = 0;
>>>
>>> +#ifdef CONFIG_F2FS_FS_COMPRESSION
>>> + if (f2fs_compressed_file(inode)) {
>>> + index = rac ? readahead_index(rac) : folio->index;
>>> + max_nr_pages = round_up(index + nr_pages,
>>> cc.cluster_size) -
>>> + round_down(index, cc.cluster_size);
>>> + }
>>> +#endif
>>> +
>>> map.m_pblk = 0;
>>> map.m_lblk = 0;
>>> map.m_len = 0;
>>> @@ -2385,7 +2393,7 @@ static int f2fs_mpage_readpages(struct inode
>>> *inode,
>>> map.m_seg_type = NO_CHECK_TYPE;
>>> map.m_may_create = false;
>>>
>>> - for (; nr_pages; nr_pages--) {
>>> + for (; nr_pages; nr_pages--, max_nr_pages--) {
>>> if (rac) {
>>> folio = readahead_folio(rac);
>>> prefetchw(&folio->flags);
>>
>
* Re: [External Mail]Re: [PATCH v3] f2fs: avoid splitting bio when reading multiple pages
2025-06-25 9:50 ` Huang Jianan
@ 2025-06-30 11:23 ` Chao Yu
2025-06-30 12:58 ` Huang Jianan
From: Chao Yu @ 2025-06-30 11:23 UTC (permalink / raw)
To: Huang Jianan, linux-f2fs-devel@lists.sourceforge.net,
jaegeuk@kernel.org
Cc: chao, Wang Hui, linux-kernel@vger.kernel.org, Sheng Yong
On 6/25/25 17:50, Huang Jianan wrote:
> On 2025/6/25 17:48, Jianan Huang wrote:
>> On 2025/6/25 16:45, Chao Yu wrote:
>>>
>>> On 6/25/25 14:49, Jianan Huang wrote:
>>>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>>>> to the nr_vecs limit, the compressed pages will be split into multiple
>>>> bios and then merged at the block level. In this case, nr_cpages should
>>>> be used to pre-allocate bvecs.
>>>> To handle this case, align max_nr_pages to cluster_size, which should be
>>>> enough for all compressed pages.
>>>>
>>>> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
>>>> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
>>>> ---
>>>> Changes since v2:
>>>> - Initialize index only for compressed files.
>>>> Changes since v1:
>>>> - Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
>>>>
>>>> fs/f2fs/data.c | 12 ++++++++++--
>>>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>>> index 31e892842625..d071d9f6a811 100644
>>>> --- a/fs/f2fs/data.c
>>>> +++ b/fs/f2fs/data.c
>>>> @@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx
>>>> *cc, struct bio **bio_ret,
>>>> }
>>>>
>>>> if (!bio) {
>>>> - bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>>>> + bio = f2fs_grab_read_bio(inode, blkaddr,
>>>> nr_pages - i,
>>>
>>> Jianan,
>>>
>>> Another case:
>>>
>>> read page #0,1,2,3 from block #1000,1001,1002, cluster_size=4.
>>>
>>> nr_pages=4
>>> max_nr_pages=round_up(0+4,4)-round_down(0,4)=4
>>>
>>> f2fs_mpage_readpages() calls f2fs_read_multi_pages() when nr_pages=1, at
>>> that time, max_nr_pages equals to 1 as well.
>>>
>>> f2fs_grab_read_bio(..., 1 - 0,...) allocate bio w/ 1 vec capacity,
>>> however,
>>> we need at least 3 vecs to merge all cpages, right?
>>>
>>
>> Hi, chao,
>>
>> If we don't align nr_pages, then when entering f2fs_read_multi_pages,
>> we have nr_pages pages left, which belong to other clusters.
>> If this is the last page, we can simply pass nr_pages = 0.
>>
>> When allocating bio, we need:
>> 1. The cpages remaining in the current cluster, which should be
>> (nr_cpages - i).
>> 2. The maximum cpages remaining in other clusters, which should be
>> max(nr_pages, cc->nr_cpages).
>>
>
> align(nr_pages, cc->nr_cpages), sorry for this.
>
>> So (nr_cpages - i) + max(nr_pages, nr_cpages), should be enough for all
>> vecs?
Jianan,
What about getting rid of the change below and just passing max_nr_pages to
f2fs_read_multi_pages? It may waste a little bio vector space,
but it is safer to reserve enough margin.
+ for (; nr_pages; nr_pages--, max_nr_pages--) {
Thanks,
>>
>> Thanks,
>>
>>
>>> Thanks,
>>>
>>>> f2fs_ra_op_flags(rac),
>>>> folio->index, for_write);
>>>> if (IS_ERR(bio)) {
>>>> @@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode
>>>> *inode,
>>>> unsigned max_nr_pages = nr_pages;
>>>> int ret = 0;
>>>>
>>>> +#ifdef CONFIG_F2FS_FS_COMPRESSION
>>>> + if (f2fs_compressed_file(inode)) {
>>>> + index = rac ? readahead_index(rac) : folio->index;
>>>> + max_nr_pages = round_up(index + nr_pages,
>>>> cc.cluster_size) -
>>>> + round_down(index, cc.cluster_size);
>>>> + }
>>>> +#endif
>>>> +
>>>> map.m_pblk = 0;
>>>> map.m_lblk = 0;
>>>> map.m_len = 0;
>>>> @@ -2385,7 +2393,7 @@ static int f2fs_mpage_readpages(struct inode
>>>> *inode,
>>>> map.m_seg_type = NO_CHECK_TYPE;
>>>> map.m_may_create = false;
>>>>
>>>> - for (; nr_pages; nr_pages--) {
>>>> + for (; nr_pages; nr_pages--, max_nr_pages--) {
>>>> if (rac) {
>>>> folio = readahead_folio(rac);
>>>> prefetchw(&folio->flags);
>>>
>>
>
* Re: [External Mail]Re: [PATCH v3] f2fs: avoid splitting bio when reading multiple pages
2025-06-30 11:23 ` Chao Yu
@ 2025-06-30 12:58 ` Huang Jianan
From: Huang Jianan @ 2025-06-30 12:58 UTC (permalink / raw)
To: Chao Yu, linux-f2fs-devel@lists.sourceforge.net,
jaegeuk@kernel.org
Cc: Wang Hui, linux-kernel@vger.kernel.org, Sheng Yong
On 2025/6/30 19:23, Chao Yu wrote:
>
> On 6/25/25 17:50, Huang Jianan wrote:
>> On 2025/6/25 17:48, Jianan Huang wrote:
>>> On 2025/6/25 16:45, Chao Yu wrote:
>>>>
>>>> On 6/25/25 14:49, Jianan Huang wrote:
>>>>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>>>>> to the nr_vecs limit, the compressed pages will be split into multiple
>>>>> bios and then merged at the block level. In this case, nr_cpages should
>>>>> be used to pre-allocate bvecs.
>>>>> To handle this case, align max_nr_pages to cluster_size, which should be
>>>>> enough for all compressed pages.
>>>>>
>>>>> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
>>>>> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
>>>>> ---
>>>>> Changes since v2:
>>>>> - Initialize index only for compressed files.
>>>>> Changes since v1:
>>>>> - Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
>>>>>
>>>>> fs/f2fs/data.c | 12 ++++++++++--
>>>>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>>>>
>>>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>>>> index 31e892842625..d071d9f6a811 100644
>>>>> --- a/fs/f2fs/data.c
>>>>> +++ b/fs/f2fs/data.c
>>>>> @@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx
>>>>> *cc, struct bio **bio_ret,
>>>>> }
>>>>>
>>>>> if (!bio) {
>>>>> - bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>>>>> + bio = f2fs_grab_read_bio(inode, blkaddr,
>>>>> nr_pages - i,
>>>>
>>>> Jianan,
>>>>
>>>> Another case:
>>>>
>>>> read page #0,1,2,3 from block #1000,1001,1002, cluster_size=4.
>>>>
>>>> nr_pages=4
>>>> max_nr_pages=round_up(0+4,4)-round_down(0,4)=4
>>>>
>>>> f2fs_mpage_readpages() calls f2fs_read_multi_pages() when nr_pages=1, at
>>>> that time, max_nr_pages equals to 1 as well.
>>>>
>>>> f2fs_grab_read_bio(..., 1 - 0,...) allocate bio w/ 1 vec capacity,
>>>> however,
>>>> we need at least 3 vecs to merge all cpages, right?
>>>>
>>>
>>> Hi, chao,
>>>
>>> If we don't align nr_pages, then when entering f2fs_read_multi_pages,
>>> we have nr_pages pages left, which belong to other clusters.
>>> If this is the last page, we can simply pass nr_pages = 0.
>>>
>>> When allocating bio, we need:
>>> 1. The cpages remaining in the current cluster, which should be
>>> (nr_cpages - i).
>>> 2. The maximum cpages remaining in other clusters, which should be
>>> max(nr_pages, cc->nr_cpages).
>>>
>>
>> align(nr_pages, cc->nr_cpages), sorry for this.
>>
>>> So (nr_cpages - i) + max(nr_pages, nr_cpages), should be enough for all
>>> vecs?
>
> Jianan,
>
> What about getting rid of below change? and just passing max_nr_pages to
> f2fs_read_multi_pages? Maybe there is a little bit waste for bio vector space,
> but it will be more safe to reserve enough margin.
>
> + for (; nr_pages; nr_pages--, max_nr_pages--) {
>
LGTM, updated in v4.
Thanks,
> Thanks,
>
>>>
>>> Thanks,
>>>
>>>
>>>> Thanks,
>>>>
>>>>> f2fs_ra_op_flags(rac),
>>>>> folio->index, for_write);
>>>>> if (IS_ERR(bio)) {
>>>>> @@ -2376,6 +2376,14 @@ static int f2fs_mpage_readpages(struct inode
>>>>> *inode,
>>>>> unsigned max_nr_pages = nr_pages;
>>>>> int ret = 0;
>>>>>
>>>>> +#ifdef CONFIG_F2FS_FS_COMPRESSION
>>>>> + if (f2fs_compressed_file(inode)) {
>>>>> + index = rac ? readahead_index(rac) : folio->index;
>>>>> + max_nr_pages = round_up(index + nr_pages,
>>>>> cc.cluster_size) -
>>>>> + round_down(index, cc.cluster_size);
>>>>> + }
>>>>> +#endif
>>>>> +
>>>>> map.m_pblk = 0;
>>>>> map.m_lblk = 0;
>>>>> map.m_len = 0;
>>>>> @@ -2385,7 +2393,7 @@ static int f2fs_mpage_readpages(struct inode
>>>>> *inode,
>>>>> map.m_seg_type = NO_CHECK_TYPE;
>>>>> map.m_may_create = false;
>>>>>
>>>>> - for (; nr_pages; nr_pages--) {
>>>>> + for (; nr_pages; nr_pages--, max_nr_pages--) {
>>>>> if (rac) {
>>>>> folio = readahead_folio(rac);
>>>>> prefetchw(&folio->flags);
>>>>
>>>
>>
>