linux-f2fs-devel.lists.sourceforge.net archive mirror
* [f2fs-dev] [PATCH] f2fs: avoid splitting bio when reading multiple pages
@ 2025-06-17  5:55 Jianan Huang via Linux-f2fs-devel
  2025-06-17 11:37 ` Chao Yu via Linux-f2fs-devel
  2025-06-18  8:17 ` [f2fs-dev] [PATCH v2] " Jianan Huang via Linux-f2fs-devel
  0 siblings, 2 replies; 8+ messages in thread
From: Jianan Huang via Linux-f2fs-devel @ 2025-06-17  5:55 UTC (permalink / raw)
  To: linux-f2fs-devel, chao, jaegeuk; +Cc: Sheng Yong, wanghui33

When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
to the nr_vecs limit, the compressed pages will be split into multiple
bios and then merged at the block level. In this case, nr_cpages should
be used to pre-allocate bvecs.

Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
---
 fs/f2fs/data.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 31e892842625..c7773b09d83f 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2303,7 +2303,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
 		}
 
 		if (!bio) {
-			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
+			bio = f2fs_grab_read_bio(inode, blkaddr,
+					max(nr_pages, cc->nr_cpages) - i,
 					f2fs_ra_op_flags(rac),
 					folio->index, for_write);
 			if (IS_ERR(bio)) {
@@ -2373,7 +2374,6 @@ static int f2fs_mpage_readpages(struct inode *inode,
 	pgoff_t index;
 #endif
 	unsigned nr_pages = rac ? readahead_count(rac) : 1;
-	unsigned max_nr_pages = nr_pages;
 	int ret = 0;
 
 	map.m_pblk = 0;
@@ -2400,7 +2400,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
 		/* there are remained compressed pages, submit them */
 		if (!f2fs_cluster_can_merge_page(&cc, index)) {
 			ret = f2fs_read_multi_pages(&cc, &bio,
-						max_nr_pages,
+						nr_pages,
 						&last_block_in_bio,
 						rac, false);
 			f2fs_destroy_compress_ctx(&cc, false);
@@ -2432,7 +2432,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
 read_single_page:
 #endif
 
-		ret = f2fs_read_single_page(inode, folio, max_nr_pages, &map,
+		ret = f2fs_read_single_page(inode, folio, nr_pages, &map,
 					&bio, &last_block_in_bio, rac);
 		if (ret) {
 #ifdef CONFIG_F2FS_FS_COMPRESSION
@@ -2450,7 +2450,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
 			/* last page */
 			if (nr_pages == 1 && !f2fs_cluster_is_empty(&cc)) {
 				ret = f2fs_read_multi_pages(&cc, &bio,
-							max_nr_pages,
+							nr_pages,
 							&last_block_in_bio,
 							rac, false);
 				f2fs_destroy_compress_ctx(&cc, false);
-- 
2.43.0
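
To make the sizing issue concrete, here is a small user-space sketch (assumed
numbers, not the kernel API) of how an under-sized bvec reservation splits a
single cluster read across several bios, and how the max() sizing above
avoids the split:

#include <stdio.h>

int main(void)
{
	unsigned nr_cpages = 3;	/* compressed blocks in the cluster */
	unsigned nr_pages = 1;	/* pages the caller asked to read */

	/* old sizing: each bio is allocated with only nr_pages vecs */
	unsigned bios = (nr_cpages + nr_pages - 1) / nr_pages;
	printf("%u vec(s) per bio -> %u bios for one cluster\n",
	       nr_pages, bios);

	/* patched sizing: max(nr_pages, nr_cpages) vecs for the first bio */
	unsigned vecs = nr_pages > nr_cpages ? nr_pages : nr_cpages;
	bios = (nr_cpages + vecs - 1) / vecs;
	printf("%u vec(s) per bio -> %u bio(s)\n", vecs, bios);
	return 0;
}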




* Re: [f2fs-dev] [PATCH] f2fs: avoid splitting bio when reading multiple pages
  2025-06-17  5:55 [f2fs-dev] [PATCH] f2fs: avoid splitting bio when reading multiple pages Jianan Huang via Linux-f2fs-devel
@ 2025-06-17 11:37 ` Chao Yu via Linux-f2fs-devel
  2025-06-17 13:13   ` Sheng Yong
  2025-06-18  8:17 ` [f2fs-dev] [PATCH v2] " Jianan Huang via Linux-f2fs-devel
  1 sibling, 1 reply; 8+ messages in thread
From: Chao Yu via Linux-f2fs-devel @ 2025-06-17 11:37 UTC (permalink / raw)
  To: Jianan Huang, linux-f2fs-devel, jaegeuk; +Cc: Sheng Yong, wanghui33

On 6/17/25 13:55, Jianan Huang wrote:
> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
> to the nr_vecs limit, the compressed pages will be split into multiple
> bios and then merged at the block level. In this case, nr_cpages should
> be used to pre-allocate bvecs.
> 
> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
> ---
>  fs/f2fs/data.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 31e892842625..c7773b09d83f 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2303,7 +2303,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>  		}
>  
>  		if (!bio) {
> -			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
> +			bio = f2fs_grab_read_bio(inode, blkaddr,
> +					max(nr_pages, cc->nr_cpages) - i,

Hi Jianan,

e.g.

The user wants to read pages [1, 5]:
pages #1,2,3,4 are located in compressed blocks #1000,1001,1003,
page #5 is located in compressed blocks #1004,1005.

It submits the first bio w/ blocks #1000,1001.
It then allocates the second bio w/ a size of max(nr_pages=1, nr_cpages=3) - 2 = 1 ?
However, block #1003 and blocks #1004,1005 can be read in one bio, so we
should allocate a larger bio for the last run of contiguous blocks, which
crosses clusters.
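
In run-length terms (a user-space sketch of the example above, reusing its
block numbers), blocks #1003,1004,1005 form one contiguous run that crosses
the cluster boundary, so the ideal minimum is two bios:

#include <stdio.h>

int main(void)
{
	/* cpages of the first cluster, then of the second cluster */
	unsigned blk[] = { 1000, 1001, 1003, 1004, 1005 };
	unsigned n = sizeof(blk) / sizeof(blk[0]), i, runs = 1;

	for (i = 1; i < n; i++)
		if (blk[i] != blk[i - 1] + 1)
			runs++;	/* a physical gap forces a new bio anyway */

	/* two runs: {1000,1001} and {1003,1004,1005} */
	printf("%u contiguous runs -> ideal minimum of %u bios\n", runs, runs);
	return 0;
}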

>  					f2fs_ra_op_flags(rac),
>  					folio->index, for_write);
>  			if (IS_ERR(bio)) {
> @@ -2373,7 +2374,6 @@ static int f2fs_mpage_readpages(struct inode *inode,
>  	pgoff_t index;
>  #endif
>  	unsigned nr_pages = rac ? readahead_count(rac) : 1;
> -	unsigned max_nr_pages = nr_pages;

Maybe we can align both the start and end of the read range w/ cluster_size,
use them for the max_nr_pages calculation, and then pass it to
f2fs_read_{multi,single}_pages(), something like this?

max_nr_pages = round_up(end_idx, cluster_size) -
		round_down(start_idx, cluster_size);

Its size should always cover the size of all cpages and/or rpages.

Thanks,

>  	int ret = 0;
>  
>  	map.m_pblk = 0;
> @@ -2400,7 +2400,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>  		/* there are remained compressed pages, submit them */
>  		if (!f2fs_cluster_can_merge_page(&cc, index)) {
>  			ret = f2fs_read_multi_pages(&cc, &bio,
> -						max_nr_pages,
> +						nr_pages,
>  						&last_block_in_bio,
>  						rac, false);
>  			f2fs_destroy_compress_ctx(&cc, false);
> @@ -2432,7 +2432,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>  read_single_page:
>  #endif
>  
> -		ret = f2fs_read_single_page(inode, folio, max_nr_pages, &map,
> +		ret = f2fs_read_single_page(inode, folio, nr_pages, &map,
>  					&bio, &last_block_in_bio, rac);
>  		if (ret) {
>  #ifdef CONFIG_F2FS_FS_COMPRESSION
> @@ -2450,7 +2450,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>  			/* last page */
>  			if (nr_pages == 1 && !f2fs_cluster_is_empty(&cc)) {
>  				ret = f2fs_read_multi_pages(&cc, &bio,
> -							max_nr_pages,
> +							nr_pages,
>  							&last_block_in_bio,
>  							rac, false);
>  				f2fs_destroy_compress_ctx(&cc, false);




* Re: [f2fs-dev] [PATCH] f2fs: avoid splitting bio when reading multiple pages
  2025-06-17 11:37 ` Chao Yu via Linux-f2fs-devel
@ 2025-06-17 13:13   ` Sheng Yong
  2025-06-17 13:43     ` [f2fs-dev] [External Mail]Re: " Huang Jianan via Linux-f2fs-devel
  2025-06-24 13:56     ` [f2fs-dev] " Chao Yu via Linux-f2fs-devel
  0 siblings, 2 replies; 8+ messages in thread
From: Sheng Yong @ 2025-06-17 13:13 UTC (permalink / raw)
  To: Chao Yu, Jianan Huang, linux-f2fs-devel, jaegeuk; +Cc: shengyong1, wanghui33

On 6/17/25 19:37, Chao Yu via Linux-f2fs-devel wrote:
> On 6/17/25 13:55, Jianan Huang wrote:
>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>> to the nr_vecs limit, the compressed pages will be split into multiple
>> bios and then merged at the block level. In this case, nr_cpages should
>> be used to pre-allocate bvecs.
>>
>> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
>> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
>> ---
>>   fs/f2fs/data.c | 10 +++++-----
>>   1 file changed, 5 insertions(+), 5 deletions(-)
>>
>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>> index 31e892842625..c7773b09d83f 100644
>> --- a/fs/f2fs/data.c
>> +++ b/fs/f2fs/data.c
>> @@ -2303,7 +2303,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>>   		}
>>   
>>   		if (!bio) {
>> -			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>> +			bio = f2fs_grab_read_bio(inode, blkaddr,
>> +					max(nr_pages, cc->nr_cpages) - i,
> 
> Hi Jianan,
> 
> e.g.
> 
> User wants to read page [1, 5],
> page #1,2,3,4 locates in compressed block #1000,1001,1003,
> page #5 locate in compressed block #1004,1005
> 
> It submits first bio w/ block #1000,1001
> It allocates second bio w/ size of max(nr_pages=1, nr_cpages=3) - 2 = 1 ?
> However block #1003 and block #1004,1005 can be readed in one bio, we
> should allocate larger bio for last continuous blocks which cross clusters.

Hi, Chao,

I think `max(nr_pages, cc->nr_cpages) - i` can reserve enough vectors in
the bio for later reads. IIUC, the case above is:

read pages #1,2,3,4 at blkaddr #1000,1001,1003:
   * nr_pages=5, cpages=3, for the first bio1, vec=max(5,3)-0=5 (2 vecs are used)
                           for the second bio2, vec=max(5,3)-2=3 (1 vec is used)
read page #5 at blkaddr #1004,1005, prev bio2 is still available
   * nr_pages=1, cpages=2, prev bio2, 2 vecs left
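
That accounting can be checked mechanically. Below is a user-space simulation
of the walkthrough above (assumed numbers and helper names, not the kernel
implementation); it ends with exactly two bios:

#include <stdio.h>

#define MAX(a, b) ((a) > (b) ? (a) : (b))

static unsigned bio_cap, bio_used, last_blk, bios;

/* "allocate" a bio with nvecs pre-allocated bvec slots */
static void grab_bio(unsigned nvecs)
{
	bio_cap = nvecs;
	bio_used = 0;
	bios++;
}

int main(void)
{
	/* cluster A: cpages at #1000,1001,1003; cluster B: #1004,1005 */
	unsigned cluster_blk[2][3] = { { 1000, 1001, 1003 }, { 1004, 1005, 0 } };
	unsigned nr_cpages[2] = { 3, 2 };
	unsigned rpages[2] = { 4, 1 };	/* requested pages per cluster */
	unsigned nr_pages = 5, c, i;

	for (c = 0; c < 2; c++) {
		for (i = 0; i < nr_cpages[c]; i++) {
			unsigned blk = cluster_blk[c][i];

			/* submit on a gap, or when the vec budget runs out */
			if (bio_cap && (blk != last_blk + 1 || bio_used == bio_cap))
				bio_cap = 0;
			if (!bio_cap)
				grab_bio(MAX(nr_pages, nr_cpages[c]) - i);
			bio_used++;
			last_blk = blk;
		}
		nr_pages -= rpages[c];
	}
	printf("total bios: %u\n", bios);	/* prints: total bios: 2 */
	return 0;
}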


For case: page #1,2,3,4 at compressed blkaddr #1000,1001,1003
           page #5,6,7,8 at compressed blkaddr #1004,1005,1006
If we are reading pages [1, 5], we could do the calculation as follows?

     max_nr_pages=align(nr_pages, cluster_size)
     max(max_nr_pages, cc->nr_cpages) - i


thanks,
shengyong
> 
>>   					f2fs_ra_op_flags(rac),
>>   					folio->index, for_write);
>>   			if (IS_ERR(bio)) {
>> @@ -2373,7 +2374,6 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>   	pgoff_t index;
>>   #endif
>>   	unsigned nr_pages = rac ? readahead_count(rac) : 1;
>> -	unsigned max_nr_pages = nr_pages;
> 
> Maybe we can align both start and end of read range w/ cluster_size, and use
> start and end for max_nr_pages calculation, then pass it to
> f2fs_read_{multi,single}_pages(), something like this?
> 
> max_nr_pages = round_up(end_idx, cluster_size) -
> 		round_down(start_idx, cluster_size);
> 
> Its size should always cover size of all cpage and/or rpage.
> 
> Thanks,
> 
>>   	int ret = 0;
>>   
>>   	map.m_pblk = 0;
>> @@ -2400,7 +2400,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>   		/* there are remained compressed pages, submit them */
>>   		if (!f2fs_cluster_can_merge_page(&cc, index)) {
>>   			ret = f2fs_read_multi_pages(&cc, &bio,
>> -						max_nr_pages,
>> +						nr_pages,
>>   						&last_block_in_bio,
>>   						rac, false);
>>   			f2fs_destroy_compress_ctx(&cc, false);
>> @@ -2432,7 +2432,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>   read_single_page:
>>   #endif
>>   
>> -		ret = f2fs_read_single_page(inode, folio, max_nr_pages, &map,
>> +		ret = f2fs_read_single_page(inode, folio, nr_pages, &map,
>>   					&bio, &last_block_in_bio, rac);
>>   		if (ret) {
>>   #ifdef CONFIG_F2FS_FS_COMPRESSION
>> @@ -2450,7 +2450,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>   			/* last page */
>>   			if (nr_pages == 1 && !f2fs_cluster_is_empty(&cc)) {
>>   				ret = f2fs_read_multi_pages(&cc, &bio,
>> -							max_nr_pages,
>> +							nr_pages,
>>   							&last_block_in_bio,
>>   							rac, false);
>>   				f2fs_destroy_compress_ctx(&cc, false);
> 
> 
> 

* Re: [f2fs-dev] [External Mail]Re: [PATCH] f2fs: avoid splitting bio when reading multiple pages
  2025-06-17 13:13   ` Sheng Yong
@ 2025-06-17 13:43     ` Huang Jianan via Linux-f2fs-devel
  2025-06-24 13:56     ` [f2fs-dev] " Chao Yu via Linux-f2fs-devel
  1 sibling, 0 replies; 8+ messages in thread
From: Huang Jianan via Linux-f2fs-devel @ 2025-06-17 13:43 UTC (permalink / raw)
  To: Sheng Yong, Chao Yu, linux-f2fs-devel@lists.sourceforge.net,
	jaegeuk@kernel.org
  Cc: 盛勇, 王辉

On 2025/6/17 21:13, Sheng Yong wrote:
> 
> On 6/17/25 19:37, Chao Yu via Linux-f2fs-devel wrote:
>> On 6/17/25 13:55, Jianan Huang wrote:
>>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>>> to the nr_vecs limit, the compressed pages will be split into multiple
>>> bios and then merged at the block level. In this case, nr_cpages should
>>> be used to pre-allocate bvecs.
>>>
>>> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
>>> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
>>> ---
>>>   fs/f2fs/data.c | 10 +++++-----
>>>   1 file changed, 5 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>> index 31e892842625..c7773b09d83f 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -2303,7 +2303,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>>>              }
>>>
>>>              if (!bio) {
>>> -                    bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>>> +                    bio = f2fs_grab_read_bio(inode, blkaddr,
>>> +                                    max(nr_pages, cc->nr_cpages) - i,
>>
>> Hi Jianan,
>>
>> e.g.
>>
>> The user wants to read pages [1, 5]:
>> pages #1,2,3,4 are located in compressed blocks #1000,1001,1003,
>> page #5 is located in compressed blocks #1004,1005.
>>
>> It submits the first bio w/ blocks #1000,1001.
>> It then allocates the second bio w/ a size of max(nr_pages=1, nr_cpages=3) - 2 = 1 ?
>> However, block #1003 and blocks #1004,1005 can be read in one bio, so we
>> should allocate a larger bio for the last run of contiguous blocks, which
>> crosses clusters.
> 
> Hi, Chao,
> 
> I think `max(nr_pages, cc->nr_cpages) - i` can reserve enough vectors in
> the bio for later reads. IIUC, the case above is:
> 
> read pages #1,2,3,4 at blkaddr #1000,1001,1003:
>    * nr_pages=5, cpages=3, for the first bio1, vec=max(5,3)-0=5 (2 vecs are used)
>                            for the second bio2, vec=max(5,3)-2=3 (1 vec is used)
> read page #5 at blkaddr #1004,1005, prev bio2 is still available
>    * nr_pages=1, cpages=2, prev bio2, 2 vecs left
> 
> 
> For case: page #1,2,3,4 at compressed blkaddr #1000,1001,1003
>            page #5,6,7,8 at compressed blkaddr #1004,1005,1006
> If we are reading pages [1, 5], we could do the calculation as follows?
> 
>      max_nr_pages=align(nr_pages, cluster_size)
>      max(max_nr_pages, cc->nr_cpages) - i
> 

f2fs_read_multi_pages only processes the current cluster, so there are
two cases:
1. When all the remaining pages belong to the current cluster,
(nr_cpages - i) is enough for all vecs.
2. When there are pages from other clusters yet to be processed,
(align(nr_pages, cluster_size) - i) is enough for all vecs.
For case 1, nr_cpages can never exceed align(nr_pages, cluster_size).
Therefore, max is not needed.

Thanks,
Jianan

> 
> thanks,
> shengyong
>>
>>>                                      f2fs_ra_op_flags(rac),
>>>                                      folio->index, for_write);
>>>                      if (IS_ERR(bio)) {
>>> @@ -2373,7 +2374,6 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>      pgoff_t index;
>>>   #endif
>>>      unsigned nr_pages = rac ? readahead_count(rac) : 1;
>>> -    unsigned max_nr_pages = nr_pages;
>>
>> Maybe we can align both the start and end of the read range w/ cluster_size,
>> use them for the max_nr_pages calculation, and then pass it to
>> f2fs_read_{multi,single}_pages(), something like this?
>>
>> max_nr_pages = round_up(end_idx, cluster_size) -
>>               round_down(start_idx, cluster_size);
>>
>> Its size should always cover the size of all cpages and/or rpages.
>>
>> Thanks,
>>
>>>      int ret = 0;
>>>
>>>      map.m_pblk = 0;
>>> @@ -2400,7 +2400,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>              /* there are remained compressed pages, submit them */
>>>              if (!f2fs_cluster_can_merge_page(&cc, index)) {
>>>                      ret = f2fs_read_multi_pages(&cc, &bio,
>>> -                                            max_nr_pages,
>>> +                                            nr_pages,
>>>                                              &last_block_in_bio,
>>>                                              rac, false);
>>>                      f2fs_destroy_compress_ctx(&cc, false);
>>> @@ -2432,7 +2432,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>   read_single_page:
>>>   #endif
>>>
>>> -            ret = f2fs_read_single_page(inode, folio, max_nr_pages, &map,
>>> +            ret = f2fs_read_single_page(inode, folio, nr_pages, &map,
>>>                                      &bio, &last_block_in_bio, rac);
>>>              if (ret) {
>>>   #ifdef CONFIG_F2FS_FS_COMPRESSION
>>> @@ -2450,7 +2450,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>                      /* last page */
>>>                      if (nr_pages == 1 && !f2fs_cluster_is_empty(&cc)) {
>>>                              ret = f2fs_read_multi_pages(&cc, &bio,
>>> -                                                    max_nr_pages,
>>> +                                                    nr_pages,
>>>                                                      &last_block_in_bio,
>>>                                                      rac, false);
>>>                              f2fs_destroy_compress_ctx(&cc, false);
>>
>>
>>

* [f2fs-dev] [PATCH v2] f2fs: avoid splitting bio when reading multiple pages
  2025-06-17  5:55 [f2fs-dev] [PATCH] f2fs: avoid splitting bio when reading multiple pages Jianan Huang via Linux-f2fs-devel
  2025-06-17 11:37 ` Chao Yu via Linux-f2fs-devel
@ 2025-06-18  8:17 ` Jianan Huang via Linux-f2fs-devel
  2025-06-25  3:05   ` Huang Jianan via Linux-f2fs-devel
  2025-06-25  3:26   ` Chao Yu via Linux-f2fs-devel
  1 sibling, 2 replies; 8+ messages in thread
From: Jianan Huang via Linux-f2fs-devel @ 2025-06-18  8:17 UTC (permalink / raw)
  To: linux-f2fs-devel, chao, jaegeuk; +Cc: Sheng Yong, linux-kernel, wanghui33

When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
to the nr_vecs limit, the compressed pages will be split into multiple
bios and then merged at the block level. In this case, more bvecs need
to be pre-allocated to cover the compressed pages.

To handle this, align max_nr_pages to cluster_size, which should be
enough for all compressed pages.

Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
---
Changes since v1:
- Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.

 fs/f2fs/data.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
index 31e892842625..2d948586fea0 100644
--- a/fs/f2fs/data.c
+++ b/fs/f2fs/data.c
@@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
 		}
 
 		if (!bio) {
-			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
+			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
 					f2fs_ra_op_flags(rac),
 					folio->index, for_write);
 			if (IS_ERR(bio)) {
@@ -2370,12 +2370,18 @@ static int f2fs_mpage_readpages(struct inode *inode,
 		.nr_cpages = 0,
 	};
 	pgoff_t nc_cluster_idx = NULL_CLUSTER;
-	pgoff_t index;
+	pgoff_t index = rac ? readahead_index(rac) : folio->index;
 #endif
 	unsigned nr_pages = rac ? readahead_count(rac) : 1;
 	unsigned max_nr_pages = nr_pages;
 	int ret = 0;
 
+#ifdef CONFIG_F2FS_FS_COMPRESSION
+	if (f2fs_compressed_file(inode))
+		max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
+				round_down(index, cc.cluster_size);
+#endif
+
 	map.m_pblk = 0;
 	map.m_lblk = 0;
 	map.m_len = 0;
@@ -2385,7 +2391,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
 	map.m_seg_type = NO_CHECK_TYPE;
 	map.m_may_create = false;
 
-	for (; nr_pages; nr_pages--) {
+	for (; nr_pages; nr_pages--, max_nr_pages--) {
 		if (rac) {
 			folio = readahead_folio(rac);
 			prefetchw(&folio->flags);
-- 
2.43.0
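
A quick numeric check of the new sizing against the example discussed
earlier in this thread (a user-space sketch; cluster_size is assumed to be
4 and, as in f2fs, a power of two): a read of pages [1, 5] now reserves
enough vecs for every cpage the read can touch, no matter how few rpages
remain when a new bio is grabbed:

#include <stdio.h>

#define CLUSTER_SIZE 4	/* assumed cluster size, a power of two */

static unsigned round_up(unsigned x)
{
	return (x + CLUSTER_SIZE - 1) & ~(CLUSTER_SIZE - 1);
}

static unsigned round_down(unsigned x)
{
	return x & ~(CLUSTER_SIZE - 1);
}

int main(void)
{
	unsigned index = 1, nr_pages = 5;	/* reading pages 1..5 */
	unsigned max_nr_pages = round_up(index + nr_pages) - round_down(index);

	/* 8 vecs: both clusters touched by the read, in full */
	printf("max_nr_pages = %u\n", max_nr_pages);
	return 0;
}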




* Re: [f2fs-dev] [PATCH] f2fs: avoid splitting bio when reading multiple pages
  2025-06-17 13:13   ` Sheng Yong
  2025-06-17 13:43     ` [f2fs-dev] [External Mail]Re: " Huang Jianan via Linux-f2fs-devel
@ 2025-06-24 13:56     ` Chao Yu via Linux-f2fs-devel
  1 sibling, 0 replies; 8+ messages in thread
From: Chao Yu via Linux-f2fs-devel @ 2025-06-24 13:56 UTC (permalink / raw)
  To: Sheng Yong, Jianan Huang, linux-f2fs-devel, jaegeuk; +Cc: shengyong1, wanghui33

On 2025/6/17 21:13, Sheng Yong wrote:
> On 6/17/25 19:37, Chao Yu via Linux-f2fs-devel wrote:
>> On 6/17/25 13:55, Jianan Huang wrote:
>>> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
>>> to the nr_vecs limit, the compressed pages will be split into multiple
>>> bios and then merged at the block level. In this case, nr_cpages should
>>> be used to pre-allocate bvecs.
>>>
>>> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
>>> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
>>> ---
>>>   fs/f2fs/data.c | 10 +++++-----
>>>   1 file changed, 5 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
>>> index 31e892842625..c7773b09d83f 100644
>>> --- a/fs/f2fs/data.c
>>> +++ b/fs/f2fs/data.c
>>> @@ -2303,7 +2303,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>>>           }
>>>           if (!bio) {
>>> -            bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
>>> +            bio = f2fs_grab_read_bio(inode, blkaddr,
>>> +                    max(nr_pages, cc->nr_cpages) - i,
>>
>> Hi Jianan,
>>
>> e.g.
>>
>> The user wants to read pages [1, 5]:
>> pages #1,2,3,4 are located in compressed blocks #1000,1001,1003,
>> page #5 is located in compressed blocks #1004,1005.
>>
>> It submits the first bio w/ blocks #1000,1001.
>> It then allocates the second bio w/ a size of max(nr_pages=1, nr_cpages=3) - 2 = 1 ?
>> However, block #1003 and blocks #1004,1005 can be read in one bio, so we
>> should allocate a larger bio for the last run of contiguous blocks, which
>> crosses clusters.
> 
> Hi, Chao,
> 
> I think `max(nr_pages, cc->nr_cpages) - i` can reserve enough vectors in
> the bio for later reads. IIUC, the case above is:
> 
> read pages #1,2,3,4 at blkaddr #1000,1001,1003:
>    * nr_pages=5, cpages=3, for the first bio1, vec=max(5,3)-0=5 (2 vecs are used)
>                            for the second bio2, vec=max(5,3)-2=3 (1 vec is used)

Hi Yong,

Sorry for the delay.

About the second bio2 (vec=max(5,3)-2=3): in this patch we pass
nr_pages instead of max_nr_pages, and nr_pages will have decreased to 1
rather than 5 by the time we allocate the new bio, because we add pages
#1,2,3,4 to the decompress ctx first and only then read the cluster
entirely; at that time, nr_pages is 1, leaving only max(1, 3) - 2 = 1
vec for bio2? IIRC.

Thanks,

> read page #5 at blkaddr #1004,1005, prev bio2 is still available
>    * nr_pages=1, cpages=2, prev bio2, 2 vecs left
> 
> 
> For case: page #1,2,3,4 at compressed blkaddr #1000,1001,1003
>            page #5,6,7,8 at compressed blkaddr #1004,1005,1006
> If we are reading pages [1, 5], we could do the calculation as follows?
> 
>      max_nr_pages=align(nr_pages, cluster_size)
>      max(max_nr_pages, cc->nr_cpages) - i
> 
> 
> thanks,
> shengyong
>>
>>>                       f2fs_ra_op_flags(rac),
>>>                       folio->index, for_write);
>>>               if (IS_ERR(bio)) {
>>> @@ -2373,7 +2374,6 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>       pgoff_t index;
>>>   #endif
>>>       unsigned nr_pages = rac ? readahead_count(rac) : 1;
>>> -    unsigned max_nr_pages = nr_pages;
>>
>> Maybe we can align both the start and end of the read range w/ cluster_size,
>> use them for the max_nr_pages calculation, and then pass it to
>> f2fs_read_{multi,single}_pages(), something like this?
>>
>> max_nr_pages = round_up(end_idx, cluster_size) -
>>         round_down(start_idx, cluster_size);
>>
>> Its size should always cover the size of all cpages and/or rpages.
>>
>> Thanks,
>>
>>>       int ret = 0;
>>>       map.m_pblk = 0;
>>> @@ -2400,7 +2400,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>           /* there are remained compressed pages, submit them */
>>>           if (!f2fs_cluster_can_merge_page(&cc, index)) {
>>>               ret = f2fs_read_multi_pages(&cc, &bio,
>>> -                        max_nr_pages,
>>> +                        nr_pages,
>>>                           &last_block_in_bio,
>>>                           rac, false);
>>>               f2fs_destroy_compress_ctx(&cc, false);
>>> @@ -2432,7 +2432,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>   read_single_page:
>>>   #endif
>>> -        ret = f2fs_read_single_page(inode, folio, max_nr_pages, &map,
>>> +        ret = f2fs_read_single_page(inode, folio, nr_pages, &map,
>>>                       &bio, &last_block_in_bio, rac);
>>>           if (ret) {
>>>   #ifdef CONFIG_F2FS_FS_COMPRESSION
>>> @@ -2450,7 +2450,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>>>               /* last page */
>>>               if (nr_pages == 1 && !f2fs_cluster_is_empty(&cc)) {
>>>                   ret = f2fs_read_multi_pages(&cc, &bio,
>>> -                            max_nr_pages,
>>> +                            nr_pages,
>>>                               &last_block_in_bio,
>>>                               rac, false);
>>>                   f2fs_destroy_compress_ctx(&cc, false);
>>
>>
>>

* Re: [f2fs-dev] [PATCH v2] f2fs: avoid splitting bio when reading multiple pages
  2025-06-18  8:17 ` [f2fs-dev] [PATCH v2] " Jianan Huang via Linux-f2fs-devel
@ 2025-06-25  3:05   ` Huang Jianan via Linux-f2fs-devel
  2025-06-25  3:26   ` Chao Yu via Linux-f2fs-devel
  1 sibling, 0 replies; 8+ messages in thread
From: Huang Jianan via Linux-f2fs-devel @ 2025-06-25  3:05 UTC (permalink / raw)
  To: linux-f2fs-devel@lists.sourceforge.net, chao@kernel.org,
	jaegeuk@kernel.org
  Cc: 盛勇, linux-kernel@vger.kernel.org, 王辉

On 2025/6/18 16:17, Jianan Huang wrote:
> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
> to the nr_vecs limit, the compressed pages will be split into multiple
> bios and then merged at the block level. In this case, more bvecs need
> to be pre-allocated to cover the compressed pages.
>
> To handle this, align max_nr_pages to cluster_size, which should be
> enough for all compressed pages.
>
> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
> ---
> Changes since v1:
> - Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
>
>   fs/f2fs/data.c | 12 +++++++++---
>   1 file changed, 9 insertions(+), 3 deletions(-)
>
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 31e892842625..2d948586fea0 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>   		}
>   
>   		if (!bio) {
> -			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
> +			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
>   					f2fs_ra_op_flags(rac),
>   					folio->index, for_write);
>   			if (IS_ERR(bio)) {
> @@ -2370,12 +2370,18 @@ static int f2fs_mpage_readpages(struct inode *inode,
>   		.nr_cpages = 0,
>   	};
>   	pgoff_t nc_cluster_idx = NULL_CLUSTER;
> -	pgoff_t index;
> +	pgoff_t index = rac ? readahead_index(rac) : folio->index;
>   #endif
>   	unsigned nr_pages = rac ? readahead_count(rac) : 1;
>   	unsigned max_nr_pages = nr_pages;
>   	int ret = 0;
>   
> +#ifdef CONFIG_F2FS_FS_COMPRESSION
> +	if (f2fs_compressed_file(inode))
> +		max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
> +				round_down(index, cc.cluster_size);
> +#endif
> +
>   	map.m_pblk = 0;
>   	map.m_lblk = 0;
>   	map.m_len = 0;
> @@ -2385,7 +2391,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>   	map.m_seg_type = NO_CHECK_TYPE;
>   	map.m_may_create = false;
>   
> -	for (; nr_pages; nr_pages--) {
> +	for (; nr_pages; nr_pages--, max_nr_pages--) {
>   		if (rac) {
>   			folio = readahead_folio(rac);
>   			prefetchw(&folio->flags);

ping~



* Re: [f2fs-dev] [PATCH v2] f2fs: avoid splitting bio when reading multiple pages
  2025-06-18  8:17 ` [f2fs-dev] [PATCH v2] " Jianan Huang via Linux-f2fs-devel
  2025-06-25  3:05   ` Huang Jianan via Linux-f2fs-devel
@ 2025-06-25  3:26   ` Chao Yu via Linux-f2fs-devel
  1 sibling, 0 replies; 8+ messages in thread
From: Chao Yu via Linux-f2fs-devel @ 2025-06-25  3:26 UTC (permalink / raw)
  To: Jianan Huang, linux-f2fs-devel, jaegeuk
  Cc: Sheng Yong, linux-kernel, wanghui33

On 6/18/25 16:17, Jianan Huang wrote:
> When fewer pages are read, nr_pages may be smaller than nr_cpages. Due
> to the nr_vecs limit, the compressed pages will be split into multiple
> bios and then merged at the block level. In this case, more bvecs need
> to be pre-allocated to cover the compressed pages.
> 
> To handle this, align max_nr_pages to cluster_size, which should be
> enough for all compressed pages.
> 
> Signed-off-by: Jianan Huang <huangjianan@xiaomi.com>
> Signed-off-by: Sheng Yong <shengyong1@xiaomi.com>
> ---
> Changes since v1:
> - Use aligned nr_pages instead of nr_cpages to pre-allocate bvecs.
> 
>  fs/f2fs/data.c | 12 +++++++++---
>  1 file changed, 9 insertions(+), 3 deletions(-)
> 
> diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
> index 31e892842625..2d948586fea0 100644
> --- a/fs/f2fs/data.c
> +++ b/fs/f2fs/data.c
> @@ -2303,7 +2303,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
>  		}
>  
>  		if (!bio) {
> -			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages,
> +			bio = f2fs_grab_read_bio(inode, blkaddr, nr_pages - i,
>  					f2fs_ra_op_flags(rac),
>  					folio->index, for_write);
>  			if (IS_ERR(bio)) {
> @@ -2370,12 +2370,18 @@ static int f2fs_mpage_readpages(struct inode *inode,
>  		.nr_cpages = 0,
>  	};
>  	pgoff_t nc_cluster_idx = NULL_CLUSTER;
> -	pgoff_t index;
> +	pgoff_t index = rac ? readahead_index(rac) : folio->index;

For non-compressed files, it's redundant.

>  #endif
>  	unsigned nr_pages = rac ? readahead_count(rac) : 1;
>  	unsigned max_nr_pages = nr_pages;
>  	int ret = 0;
>  
> +#ifdef CONFIG_F2FS_FS_COMPRESSION
> +	if (f2fs_compressed_file(inode))

index = rac ? readahead_index(rac) : folio->index;

Thanks,

> +		max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
> +				round_down(index, cc.cluster_size);
> +#endif
> +
>  	map.m_pblk = 0;
>  	map.m_lblk = 0;
>  	map.m_len = 0;
> @@ -2385,7 +2391,7 @@ static int f2fs_mpage_readpages(struct inode *inode,
>  	map.m_seg_type = NO_CHECK_TYPE;
>  	map.m_may_create = false;
>  
> -	for (; nr_pages; nr_pages--) {
> +	for (; nr_pages; nr_pages--, max_nr_pages--) {
>  		if (rac) {
>  			folio = readahead_folio(rac);
>  			prefetchw(&folio->flags);
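
Taken together, the two comments above appear to ask for something like the
following shape for v3 (a sketch of the suggested change, not a tested
patch):

#ifdef CONFIG_F2FS_FS_COMPRESSION
	if (f2fs_compressed_file(inode)) {
		/* compute the index only when it is actually used */
		index = rac ? readahead_index(rac) : folio->index;
		max_nr_pages = round_up(index + nr_pages, cc.cluster_size) -
				round_down(index, cc.cluster_size);
	}
#endif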



