linux-mm.kvack.org archive mirror
* [PATCH 0/6] Convert several functions in page_io.c to use a folio
@ 2023-07-17 13:25 Peng Zhang
  2023-07-17 13:25 ` [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read() Peng Zhang
                   ` (5 more replies)
  0 siblings, 6 replies; 22+ messages in thread
From: Peng Zhang @ 2023-07-17 13:25 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: willy, sidhartha.kumar, akpm, wangkefeng.wang, sunnanyong,
	ZhangPeng

From: ZhangPeng <zhangpeng362@huawei.com>

This patch series converts several functions in page_io.c to use a
folio, which can remove several implicit calls to compound_head().
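[Editor's note] The pattern behind this series can be sketched outside the kernel as follows. The structs and helpers below are simplified stand-ins for the real `struct page`/`struct folio`, `compound_head()` and `page_folio()`, not the kernel definitions: each page-based flag operation must re-normalize a possibly-tail page to its head, while folio-based operations start from a head page and skip that step.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's struct page / struct folio:
 * a tail page records its head; a "folio" always refers to a head page. */
struct page {
	struct page *head;   /* NULL if this page is itself a head page */
	int uptodate;
};

struct folio {
	struct page *page;   /* always a head page */
};

/* Analogue of compound_head(): normalize any page to its head. */
static struct page *compound_head(struct page *page)
{
	return page->head ? page->head : page;
}

/* Analogue of page_folio(): one normalization, done once up front. */
static struct folio page_folio(struct page *page)
{
	struct folio folio = { compound_head(page) };
	return folio;
}

/* A page-based flag op re-normalizes on every call ... */
static void SetPageUptodate(struct page *page)
{
	compound_head(page)->uptodate = 1;
}

/* ... while the folio-based op does not need compound_head() at all. */
static void folio_mark_uptodate(struct folio *folio)
{
	folio->page->uptodate = 1;
}
```

Converting a function that touches several flags therefore drops one implicit head lookup per flag operation, which is the saving each commit message counts.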

ZhangPeng (6):
  mm/page_io: use a folio in __end_swap_bio_read()
  mm/page_io: use a folio in sio_read_complete()
  mm/page_io: use a folio in swap_writepage_bdev_sync()
  mm/page_io: use a folio in swap_writepage_bdev_async()
  mm/page_io: convert count_swpout_vm_event() to take in a folio
  mm/page_io: convert bio_associate_blkg_from_page() to take in a folio

 mm/page_io.c | 56 +++++++++++++++++++++++++++-------------------------
 1 file changed, 29 insertions(+), 27 deletions(-)

-- 
2.25.1



^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read()
  2023-07-17 13:25 [PATCH 0/6] Convert several functions in page_io.c to use a folio Peng Zhang
@ 2023-07-17 13:25 ` Peng Zhang
  2023-07-17 13:34   ` Matthew Wilcox
  2023-07-17 13:25 ` [PATCH 2/6] mm/page_io: use a folio in sio_read_complete() Peng Zhang
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 22+ messages in thread
From: Peng Zhang @ 2023-07-17 13:25 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: willy, sidhartha.kumar, akpm, wangkefeng.wang, sunnanyong,
	ZhangPeng

From: ZhangPeng <zhangpeng362@huawei.com>

Saves three implicit calls to compound_head().

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
 mm/page_io.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index 684cd3c7b59b..ebf431e5f538 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -58,18 +58,18 @@ static void end_swap_bio_write(struct bio *bio)
 
 static void __end_swap_bio_read(struct bio *bio)
 {
-	struct page *page = bio_first_page_all(bio);
+	struct folio *folio = page_folio(bio_first_page_all(bio));
 
 	if (bio->bi_status) {
-		SetPageError(page);
-		ClearPageUptodate(page);
+		folio_set_error(folio);
+		folio_clear_uptodate(folio);
 		pr_alert_ratelimited("Read-error on swap-device (%u:%u:%llu)\n",
 				     MAJOR(bio_dev(bio)), MINOR(bio_dev(bio)),
 				     (unsigned long long)bio->bi_iter.bi_sector);
 	} else {
-		SetPageUptodate(page);
+		folio_mark_uptodate(folio);
 	}
-	unlock_page(page);
+	folio_unlock(folio);
 }
 
 static void end_swap_bio_read(struct bio *bio)
-- 
2.25.1




* [PATCH 2/6] mm/page_io: use a folio in sio_read_complete()
  2023-07-17 13:25 [PATCH 0/6] Convert several functions in page_io.c to use a folio Peng Zhang
  2023-07-17 13:25 ` [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read() Peng Zhang
@ 2023-07-17 13:25 ` Peng Zhang
  2023-07-17 13:40   ` Matthew Wilcox
  2023-07-17 13:25 ` [PATCH 3/6] mm/page_io: use a folio in swap_writepage_bdev_sync() Peng Zhang
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 22+ messages in thread
From: Peng Zhang @ 2023-07-17 13:25 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: willy, sidhartha.kumar, akpm, wangkefeng.wang, sunnanyong,
	ZhangPeng

From: ZhangPeng <zhangpeng362@huawei.com>

Saves three implicit calls to compound_head().

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
 mm/page_io.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index ebf431e5f538..438d0c7c2194 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -406,19 +406,19 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
 
 	if (ret == sio->len) {
 		for (p = 0; p < sio->pages; p++) {
-			struct page *page = sio->bvec[p].bv_page;
+			struct folio *folio = page_folio(sio->bvec[p].bv_page);
 
-			SetPageUptodate(page);
-			unlock_page(page);
+			folio_mark_uptodate(folio);
+			folio_unlock(folio);
 		}
 		count_vm_events(PSWPIN, sio->pages);
 	} else {
 		for (p = 0; p < sio->pages; p++) {
-			struct page *page = sio->bvec[p].bv_page;
+			struct folio *folio = page_folio(sio->bvec[p].bv_page);
 
-			SetPageError(page);
-			ClearPageUptodate(page);
-			unlock_page(page);
+			folio_set_error(folio);
+			folio_clear_uptodate(folio);
+			folio_unlock(folio);
 		}
 		pr_alert_ratelimited("Read-error on swap-device\n");
 	}
-- 
2.25.1




* [PATCH 3/6] mm/page_io: use a folio in swap_writepage_bdev_sync()
  2023-07-17 13:25 [PATCH 0/6] Convert several functions in page_io.c to use a folio Peng Zhang
  2023-07-17 13:25 ` [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read() Peng Zhang
  2023-07-17 13:25 ` [PATCH 2/6] mm/page_io: use a folio in sio_read_complete() Peng Zhang
@ 2023-07-17 13:25 ` Peng Zhang
  2023-07-17 14:00   ` Matthew Wilcox
  2023-07-17 13:26 ` [PATCH 4/6] mm/page_io: use a folio in swap_writepage_bdev_async() Peng Zhang
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 22+ messages in thread
From: Peng Zhang @ 2023-07-17 13:25 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: willy, sidhartha.kumar, akpm, wangkefeng.wang, sunnanyong,
	ZhangPeng

From: ZhangPeng <zhangpeng362@huawei.com>

Saves one implicit call to compound_head().

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
 mm/page_io.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index 438d0c7c2194..f24f814b8dca 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -334,6 +334,7 @@ static void swap_writepage_bdev_sync(struct page *page,
 {
 	struct bio_vec bv;
 	struct bio bio;
+	struct folio *folio = page_folio(page);
 
 	bio_init(&bio, sis->bdev, &bv, 1,
 		 REQ_OP_WRITE | REQ_SWAP | wbc_to_write_flags(wbc));
@@ -343,8 +344,8 @@ static void swap_writepage_bdev_sync(struct page *page,
 	bio_associate_blkg_from_page(&bio, page);
 	count_swpout_vm_event(page);
 
-	set_page_writeback(page);
-	unlock_page(page);
+	folio_start_writeback(folio);
+	folio_unlock(folio);
 
 	submit_bio_wait(&bio);
 	__end_swap_bio_write(&bio);
-- 
2.25.1




* [PATCH 4/6] mm/page_io: use a folio in swap_writepage_bdev_async()
  2023-07-17 13:25 [PATCH 0/6] Convert several functions in page_io.c to use a folio Peng Zhang
                   ` (2 preceding siblings ...)
  2023-07-17 13:25 ` [PATCH 3/6] mm/page_io: use a folio in swap_writepage_bdev_sync() Peng Zhang
@ 2023-07-17 13:26 ` Peng Zhang
  2023-07-17 14:01   ` Matthew Wilcox
  2023-07-17 13:26 ` [PATCH 5/6] mm/page_io: convert count_swpout_vm_event() to take in a folio Peng Zhang
  2023-07-17 13:26 ` [PATCH 6/6] mm/page_io: convert bio_associate_blkg_from_page() " Peng Zhang
  5 siblings, 1 reply; 22+ messages in thread
From: Peng Zhang @ 2023-07-17 13:26 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: willy, sidhartha.kumar, akpm, wangkefeng.wang, sunnanyong,
	ZhangPeng

From: ZhangPeng <zhangpeng362@huawei.com>

Saves one implicit call to compound_head().

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
 mm/page_io.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index f24f814b8dca..84cc9e652451 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -355,6 +355,7 @@ static void swap_writepage_bdev_async(struct page *page,
 		struct writeback_control *wbc, struct swap_info_struct *sis)
 {
 	struct bio *bio;
+	struct folio *folio = page_folio(page);
 
 	bio = bio_alloc(sis->bdev, 1,
 			REQ_OP_WRITE | REQ_SWAP | wbc_to_write_flags(wbc),
@@ -365,8 +366,8 @@ static void swap_writepage_bdev_async(struct page *page,
 
 	bio_associate_blkg_from_page(bio, page);
 	count_swpout_vm_event(page);
-	set_page_writeback(page);
-	unlock_page(page);
+	folio_start_writeback(folio);
+	folio_unlock(folio);
 	submit_bio(bio);
 }
 
-- 
2.25.1




* [PATCH 5/6] mm/page_io: convert count_swpout_vm_event() to take in a folio
  2023-07-17 13:25 [PATCH 0/6] Convert several functions in page_io.c to use a folio Peng Zhang
                   ` (3 preceding siblings ...)
  2023-07-17 13:26 ` [PATCH 4/6] mm/page_io: use a folio in swap_writepage_bdev_async() Peng Zhang
@ 2023-07-17 13:26 ` Peng Zhang
  2023-07-17 13:48   ` Matthew Wilcox
  2023-07-17 13:26 ` [PATCH 6/6] mm/page_io: convert bio_associate_blkg_from_page() " Peng Zhang
  5 siblings, 1 reply; 22+ messages in thread
From: Peng Zhang @ 2023-07-17 13:26 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: willy, sidhartha.kumar, akpm, wangkefeng.wang, sunnanyong,
	ZhangPeng

From: ZhangPeng <zhangpeng362@huawei.com>

Convert count_swpout_vm_event() to take in a folio. We can remove five
implicit calls to compound_head() by taking in a folio.

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
 mm/page_io.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index 84cc9e652451..66e8403bf96e 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -208,13 +208,13 @@ int swap_writepage(struct page *page, struct writeback_control *wbc)
 	return 0;
 }
 
-static inline void count_swpout_vm_event(struct page *page)
+static inline void count_swpout_vm_event(struct folio *folio)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if (unlikely(PageTransHuge(page)))
+	if (unlikely(folio_test_large(folio)))
 		count_vm_event(THP_SWPOUT);
 #endif
-	count_vm_events(PSWPOUT, thp_nr_pages(page));
+	count_vm_events(PSWPOUT, folio_nr_pages(folio));
 }
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
@@ -283,7 +283,7 @@ static void sio_write_complete(struct kiocb *iocb, long ret)
 		}
 	} else {
 		for (p = 0; p < sio->pages; p++)
-			count_swpout_vm_event(sio->bvec[p].bv_page);
+			count_swpout_vm_event(page_folio(sio->bvec[p].bv_page));
 	}
 
 	for (p = 0; p < sio->pages; p++)
@@ -342,7 +342,7 @@ static void swap_writepage_bdev_sync(struct page *page,
 	__bio_add_page(&bio, page, thp_size(page), 0);
 
 	bio_associate_blkg_from_page(&bio, page);
-	count_swpout_vm_event(page);
+	count_swpout_vm_event(folio);
 
 	folio_start_writeback(folio);
 	folio_unlock(folio);
@@ -365,7 +365,7 @@ static void swap_writepage_bdev_async(struct page *page,
 	__bio_add_page(bio, page, thp_size(page), 0);
 
 	bio_associate_blkg_from_page(bio, page);
-	count_swpout_vm_event(page);
+	count_swpout_vm_event(folio);
 	folio_start_writeback(folio);
 	folio_unlock(folio);
 	submit_bio(bio);
-- 
2.25.1




* [PATCH 6/6] mm/page_io: convert bio_associate_blkg_from_page() to take in a folio
  2023-07-17 13:25 [PATCH 0/6] Convert several functions in page_io.c to use a folio Peng Zhang
                   ` (4 preceding siblings ...)
  2023-07-17 13:26 ` [PATCH 5/6] mm/page_io: convert count_swpout_vm_event() to take in a folio Peng Zhang
@ 2023-07-17 13:26 ` Peng Zhang
  2023-07-17 13:46   ` Matthew Wilcox
  5 siblings, 1 reply; 22+ messages in thread
From: Peng Zhang @ 2023-07-17 13:26 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: willy, sidhartha.kumar, akpm, wangkefeng.wang, sunnanyong,
	ZhangPeng

From: ZhangPeng <zhangpeng362@huawei.com>

Convert bio_associate_blkg_from_page() to take in a folio. We can remove
two implicit calls to compound_head() by taking in a folio.

Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
 mm/page_io.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/page_io.c b/mm/page_io.c
index 66e8403bf96e..3c1fede46bd9 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -218,12 +218,12 @@ static inline void count_swpout_vm_event(struct folio *folio)
 }
 
 #if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
+static void bio_associate_blkg_from_page(struct bio *bio, struct folio *folio)
 {
 	struct cgroup_subsys_state *css;
 	struct mem_cgroup *memcg;
 
-	memcg = page_memcg(page);
+	memcg = folio_memcg(folio);
 	if (!memcg)
 		return;
 
@@ -233,7 +233,7 @@ static void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
 	rcu_read_unlock();
 }
 #else
-#define bio_associate_blkg_from_page(bio, page)		do { } while (0)
+#define bio_associate_blkg_from_page(bio, folio)		do { } while (0)
 #endif /* CONFIG_MEMCG && CONFIG_BLK_CGROUP */
 
 struct swap_iocb {
@@ -341,7 +341,7 @@ static void swap_writepage_bdev_sync(struct page *page,
 	bio.bi_iter.bi_sector = swap_page_sector(page);
 	__bio_add_page(&bio, page, thp_size(page), 0);
 
-	bio_associate_blkg_from_page(&bio, page);
+	bio_associate_blkg_from_page(&bio, folio);
 	count_swpout_vm_event(folio);
 
 	folio_start_writeback(folio);
@@ -364,7 +364,7 @@ static void swap_writepage_bdev_async(struct page *page,
 	bio->bi_end_io = end_swap_bio_write;
 	__bio_add_page(bio, page, thp_size(page), 0);
 
-	bio_associate_blkg_from_page(bio, page);
+	bio_associate_blkg_from_page(bio, folio);
 	count_swpout_vm_event(folio);
 	folio_start_writeback(folio);
 	folio_unlock(folio);
-- 
2.25.1




* Re: [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read()
  2023-07-17 13:25 ` [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read() Peng Zhang
@ 2023-07-17 13:34   ` Matthew Wilcox
  2023-07-18 12:56     ` zhangpeng (AS)
  0 siblings, 1 reply; 22+ messages in thread
From: Matthew Wilcox @ 2023-07-17 13:34 UTC (permalink / raw)
  To: Peng Zhang
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong

On Mon, Jul 17, 2023 at 09:25:57PM +0800, Peng Zhang wrote:
> +++ b/mm/page_io.c
> @@ -58,18 +58,18 @@ static void end_swap_bio_write(struct bio *bio)
>  
>  static void __end_swap_bio_read(struct bio *bio)
>  {
> -	struct page *page = bio_first_page_all(bio);
> +	struct folio *folio = page_folio(bio_first_page_all(bio));

Should we have a bio_first_folio_all()?  It wouldn't save any calls to
compound_head(), but it's slightly easier to use.
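[Editor's note] One plausible shape for such a helper, sketched with simplified stand-in types rather than the real `struct bio`/`struct page`/`struct folio`: it just composes the existing first-page lookup with the page-to-folio normalization, so call sites get a folio directly.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for struct bio / struct page / struct folio,
 * only to show the shape of the proposed helper. */
struct page { struct page *head; };   /* NULL if already a head page */
struct folio { struct page *page; };
struct bio { struct page *first_page; };

static struct page *compound_head(struct page *p)
{
	return p->head ? p->head : p;
}

static struct page *bio_first_page_all(struct bio *bio)
{
	return bio->first_page;
}

/* The suggested helper: same normalization as
 * page_folio(bio_first_page_all(bio)), slightly easier call site. */
static struct folio bio_first_folio_all(struct bio *bio)
{
	struct folio folio = { compound_head(bio_first_page_all(bio)) };
	return folio;
}
```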

>  	if (bio->bi_status) {
> -		SetPageError(page);
> -		ClearPageUptodate(page);
> +		folio_set_error(folio);

I appreciate this is a 1:1 conversion, but maybe we could think about
this a bit.  Is there anybody who checks the
PageError()/folio_test_error() for this page/folio?

> +		folio_clear_uptodate(folio);

Similarly ... surely the folio is already !uptodate, so we don't need to
clear the flag?




* Re: [PATCH 2/6] mm/page_io: use a folio in sio_read_complete()
  2023-07-17 13:25 ` [PATCH 2/6] mm/page_io: use a folio in sio_read_complete() Peng Zhang
@ 2023-07-17 13:40   ` Matthew Wilcox
  2023-07-18 12:58     ` zhangpeng (AS)
  2023-07-20  5:22     ` Christoph Hellwig
  0 siblings, 2 replies; 22+ messages in thread
From: Matthew Wilcox @ 2023-07-17 13:40 UTC (permalink / raw)
  To: Peng Zhang
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong, Kent Overstreet

On Mon, Jul 17, 2023 at 09:25:58PM +0800, Peng Zhang wrote:
> +++ b/mm/page_io.c
> @@ -406,19 +406,19 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
>  
>  	if (ret == sio->len) {
>  		for (p = 0; p < sio->pages; p++) {
> -			struct page *page = sio->bvec[p].bv_page;
> +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
>  
> -			SetPageUptodate(page);
> -			unlock_page(page);
> +			folio_mark_uptodate(folio);
> +			folio_unlock(folio);
>  		}

I'm kind of shocked this works today.  Usually bvecs coalesce adjacent
pages into a single entry, so you need to use a real iterator like
bio_for_each_folio_all() to extract individual pages from a bvec.
Maybe the sio bvec is constructed inefficiently.

I think Kent had some bvec folio iterators in progress?

>  		count_vm_events(PSWPIN, sio->pages);
>  	} else {
>  		for (p = 0; p < sio->pages; p++) {
> -			struct page *page = sio->bvec[p].bv_page;
> +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
>  
> -			SetPageError(page);
> -			ClearPageUptodate(page);
> -			unlock_page(page);
> +			folio_set_error(folio);
> +			folio_clear_uptodate(folio);
> +			folio_unlock(folio);

Similar questions to the last patch -- who checks the error flag on this
page/folio, and isn't the folio already !uptodate?

>  		}
>  		pr_alert_ratelimited("Read-error on swap-device\n");
>  	}
> -- 
> 2.25.1
> 



* Re: [PATCH 6/6] mm/page_io: convert bio_associate_blkg_from_page() to take in a folio
  2023-07-17 13:26 ` [PATCH 6/6] mm/page_io: convert bio_associate_blkg_from_page() " Peng Zhang
@ 2023-07-17 13:46   ` Matthew Wilcox
  0 siblings, 0 replies; 22+ messages in thread
From: Matthew Wilcox @ 2023-07-17 13:46 UTC (permalink / raw)
  To: Peng Zhang
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong

On Mon, Jul 17, 2023 at 09:26:02PM +0800, Peng Zhang wrote:
> Convert bio_associate_blkg_from_page() to take in a folio. We can remove
> two implicit calls to compound_head() by taking in a folio.
> 
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>



* Re: [PATCH 5/6] mm/page_io: convert count_swpout_vm_event() to take in a folio
  2023-07-17 13:26 ` [PATCH 5/6] mm/page_io: convert count_swpout_vm_event() to take in a folio Peng Zhang
@ 2023-07-17 13:48   ` Matthew Wilcox
  2023-07-18 12:56     ` zhangpeng (AS)
  0 siblings, 1 reply; 22+ messages in thread
From: Matthew Wilcox @ 2023-07-17 13:48 UTC (permalink / raw)
  To: Peng Zhang
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong

On Mon, Jul 17, 2023 at 09:26:01PM +0800, Peng Zhang wrote:
> -static inline void count_swpout_vm_event(struct page *page)
> +static inline void count_swpout_vm_event(struct folio *folio)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -	if (unlikely(PageTransHuge(page)))
> +	if (unlikely(folio_test_large(folio)))
>  		count_vm_event(THP_SWPOUT);
>  #endif

Since this is a THP_SWPOUT event, we should do this as:

-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if (unlikely(PageTransHuge(page)))
+	if (folio_test_pmd_mappable(folio))
		count_vm_event(THP_SWPOUT);
-#endif
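[Editor's note] The behavioural difference between the two predicates can be illustrated with stand-ins: a folio of order N spans 2^N pages, `folio_test_large()` is true for any multi-page folio, while `folio_test_pmd_mappable()` is true only for folios at least PMD-sized (in the real kernel the comparison is against HPAGE_PMD_ORDER; the order 9 below is the x86-64 value with 4KB pages, assumed here purely for illustration).

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative only: assume a PMD maps 2^9 pages (x86-64, 4KB pages). */
#define PMD_ORDER 9

struct folio { unsigned int order; };

/* True for any multi-page folio, however small. */
static bool folio_test_large(const struct folio *folio)
{
	return folio->order > 0;
}

/* True only for folios large enough to be mapped by a PMD entry,
 * which is what a THP_SWPOUT event is actually counting. */
static bool folio_test_pmd_mappable(const struct folio *folio)
{
	return folio->order >= PMD_ORDER;
}
```

A 4-page folio is "large" but not PMD-mappable, so counting THP_SWPOUT on `folio_test_large()` would over-count once swap supports such intermediate sizes.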




* Re: [PATCH 3/6] mm/page_io: use a folio in swap_writepage_bdev_sync()
  2023-07-17 13:25 ` [PATCH 3/6] mm/page_io: use a folio in swap_writepage_bdev_sync() Peng Zhang
@ 2023-07-17 14:00   ` Matthew Wilcox
  0 siblings, 0 replies; 22+ messages in thread
From: Matthew Wilcox @ 2023-07-17 14:00 UTC (permalink / raw)
  To: Peng Zhang
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong

On Mon, Jul 17, 2023 at 09:25:59PM +0800, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
> 
> Saves one implicit call to compound_head().
> 
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>



* Re: [PATCH 4/6] mm/page_io: use a folio in swap_writepage_bdev_async()
  2023-07-17 13:26 ` [PATCH 4/6] mm/page_io: use a folio in swap_writepage_bdev_async() Peng Zhang
@ 2023-07-17 14:01   ` Matthew Wilcox
  0 siblings, 0 replies; 22+ messages in thread
From: Matthew Wilcox @ 2023-07-17 14:01 UTC (permalink / raw)
  To: Peng Zhang
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong

On Mon, Jul 17, 2023 at 09:26:00PM +0800, Peng Zhang wrote:
> From: ZhangPeng <zhangpeng362@huawei.com>
> 
> Saves one implicit call to compound_head().
> 
> Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>



* Re: [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read()
  2023-07-17 13:34   ` Matthew Wilcox
@ 2023-07-18 12:56     ` zhangpeng (AS)
  2023-07-18 16:16       ` Matthew Wilcox
  0 siblings, 1 reply; 22+ messages in thread
From: zhangpeng (AS) @ 2023-07-18 12:56 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong

On 2023/7/17 21:34, Matthew Wilcox wrote:

> On Mon, Jul 17, 2023 at 09:25:57PM +0800, Peng Zhang wrote:
>> +++ b/mm/page_io.c
>> @@ -58,18 +58,18 @@ static void end_swap_bio_write(struct bio *bio)
>>   
>>   static void __end_swap_bio_read(struct bio *bio)
>>   {
>> -	struct page *page = bio_first_page_all(bio);
>> +	struct folio *folio = page_folio(bio_first_page_all(bio));
> Should we have a bio_first_folio_all()?  It wouldn't save any calls to
> compound_head(), but it's slightly easier to use.

Yes, I'll convert bio_first_page_all() to bio_first_folio_all() in a v2.
Thanks.

>>   	if (bio->bi_status) {
>> -		SetPageError(page);
>> -		ClearPageUptodate(page);
>> +		folio_set_error(folio);
> I appreciate this is a 1:1 conversion, but maybe we could think about
> this a bit.  Is there anybody who checks the
> PageError()/folio_test_error() for this page/folio?

Maybe wait_dev_supers() checks the PageError() after write_dev_supers()
in fs/btrfs/disk-io.c?

>> +		folio_clear_uptodate(folio);
> Similarly ... surely the folio is already !uptodate, so we don't need to
> clear the flag?

Indeed, the folio is already !uptodate. I'll drop this line in a v2.
Thanks for your feedback.

-- 
Best Regards,
Peng




* Re: [PATCH 5/6] mm/page_io: convert count_swpout_vm_event() to take in a folio
  2023-07-17 13:48   ` Matthew Wilcox
@ 2023-07-18 12:56     ` zhangpeng (AS)
  0 siblings, 0 replies; 22+ messages in thread
From: zhangpeng (AS) @ 2023-07-18 12:56 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong

On 2023/7/17 21:48, Matthew Wilcox wrote:

> On Mon, Jul 17, 2023 at 09:26:01PM +0800, Peng Zhang wrote:
>> -static inline void count_swpout_vm_event(struct page *page)
>> +static inline void count_swpout_vm_event(struct folio *folio)
>>   {
>>   #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> -	if (unlikely(PageTransHuge(page)))
>> +	if (unlikely(folio_test_large(folio)))
>>   		count_vm_event(THP_SWPOUT);
>>   #endif
> Since this is a THP_SWPOUT event, we should do this as:
>
> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -	if (unlikely(PageTransHuge(page)))
> +	if (folio_test_pmd_mappable(folio))
> 		count_vm_event(THP_SWPOUT);
> -#endif

Agreed. I'll convert PageTransHuge to folio_test_pmd_mappable.

-- 
Best Regards,
Peng




* Re: [PATCH 2/6] mm/page_io: use a folio in sio_read_complete()
  2023-07-17 13:40   ` Matthew Wilcox
@ 2023-07-18 12:58     ` zhangpeng (AS)
  2023-07-18 16:21       ` Matthew Wilcox
  2023-07-20  5:22     ` Christoph Hellwig
  1 sibling, 1 reply; 22+ messages in thread
From: zhangpeng (AS) @ 2023-07-18 12:58 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong, Kent Overstreet

On 2023/7/17 21:40, Matthew Wilcox wrote:

> On Mon, Jul 17, 2023 at 09:25:58PM +0800, Peng Zhang wrote:
>> +++ b/mm/page_io.c
>> @@ -406,19 +406,19 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
>>   
>>   	if (ret == sio->len) {
>>   		for (p = 0; p < sio->pages; p++) {
>> -			struct page *page = sio->bvec[p].bv_page;
>> +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
>>   
>> -			SetPageUptodate(page);
>> -			unlock_page(page);
>> +			folio_mark_uptodate(folio);
>> +			folio_unlock(folio);
>>   		}
> I'm kind of shocked this works today.  Usually bvecs coalesce adjacent
> pages into a single entry, so you need to use a real iterator like
> bio_for_each_folio_all() to extract individual pages from a bvec.
> Maybe the sio bvec is constructed inefficiently.
>
> I think Kent had some bvec folio iterators in progress?

I'll convert bio_first_page_all() to bio_first_folio_all() in a v2.

>>   		count_vm_events(PSWPIN, sio->pages);
>>   	} else {
>>   		for (p = 0; p < sio->pages; p++) {
>> -			struct page *page = sio->bvec[p].bv_page;
>> +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
>>   
>> -			SetPageError(page);
>> -			ClearPageUptodate(page);
>> -			unlock_page(page);
>> +			folio_set_error(folio);
>> +			folio_clear_uptodate(folio);
>> +			folio_unlock(folio);
> Similar questions to the last patch -- who checks the error flag on this
> page/folio, and isn't the folio already !uptodate?

Yes, the folio is already !uptodate. I'll drop this line.
Maybe wait_dev_supers() in fs/btrfs/disk-io.c checks the PageError()?

>>   		}
>>   		pr_alert_ratelimited("Read-error on swap-device\n");
>>   	}
>> -- 
>> 2.25.1
>>
>
-- 
Best Regards,
Peng




* Re: [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read()
  2023-07-18 12:56     ` zhangpeng (AS)
@ 2023-07-18 16:16       ` Matthew Wilcox
  2023-07-19  7:46         ` zhangpeng (AS)
  2023-07-20  5:21         ` Christoph Hellwig
  0 siblings, 2 replies; 22+ messages in thread
From: Matthew Wilcox @ 2023-07-18 16:16 UTC (permalink / raw)
  To: zhangpeng (AS)
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong

On Tue, Jul 18, 2023 at 08:56:16PM +0800, zhangpeng (AS) wrote:
> > >   	if (bio->bi_status) {
> > > -		SetPageError(page);
> > > -		ClearPageUptodate(page);
> > > +		folio_set_error(folio);
> > I appreciate this is a 1:1 conversion, but maybe we could think about
> > this a bit.  Is there anybody who checks the
> > PageError()/folio_test_error() for this page/folio?
> 
> Maybe wait_dev_supers() checks the PageError() after write_dev_supers()
> in fs/btrfs/disk-io.c?

How does _this_ folio end up in btrfs's write_dev_supers()?  This is a
swap read.  The only folios which are swapped are anonymous and tmpfs.
btrfs takes care of doing its own I/O.  wait_dev_supers() is looking
for the error set in btrfs_end_super_write() which is the completion
routine for write_dev_supers().  The pages involved there are attached
to a btrfs address_space, not shmem or anon.




* Re: [PATCH 2/6] mm/page_io: use a folio in sio_read_complete()
  2023-07-18 12:58     ` zhangpeng (AS)
@ 2023-07-18 16:21       ` Matthew Wilcox
  2023-07-19  7:47         ` zhangpeng (AS)
  0 siblings, 1 reply; 22+ messages in thread
From: Matthew Wilcox @ 2023-07-18 16:21 UTC (permalink / raw)
  To: zhangpeng (AS)
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong, Kent Overstreet

On Tue, Jul 18, 2023 at 08:58:17PM +0800, zhangpeng (AS) wrote:
> On 2023/7/17 21:40, Matthew Wilcox wrote:
> 
> > On Mon, Jul 17, 2023 at 09:25:58PM +0800, Peng Zhang wrote:
> > > +++ b/mm/page_io.c
> > > @@ -406,19 +406,19 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
> > >   	if (ret == sio->len) {
> > >   		for (p = 0; p < sio->pages; p++) {
> > > -			struct page *page = sio->bvec[p].bv_page;
> > > +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
> > > -			SetPageUptodate(page);
> > > -			unlock_page(page);
> > > +			folio_mark_uptodate(folio);
> > > +			folio_unlock(folio);
> > >   		}
> > I'm kind of shocked this works today.  Usually bvecs coalesce adjacent
> > pages into a single entry, so you need to use a real iterator like
> > bio_for_each_folio_all() to extract individual pages from a bvec.
> > Maybe the sio bvec is constructed inefficiently.
> > 
> > I think Kent had some bvec folio iterators in progress?
> 
> I'll convert bio_first_page_all() to bio_first_folio_all() in a v2.

That isn't my point at all.  What I'm saying is that when you call a
function like bio_add_folio(), if @folio is physically adjacent to the
immediately prior folio already in the bio, it will extend the bv_len
instead of adding the new folio to the bvec.  Maybe there's nothing like
that for sio.
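[Editor's note] The coalescing behaviour being described can be sketched with a simplified vector type (not the real `struct bio_vec` or `bio_add_folio()` code): when the incoming range is physically contiguous with the last entry, the length is extended instead of a new slot being consumed, so a per-entry loop like the one in sio_read_complete() would no longer see one entry per page.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for a bio_vec: a (start, len) range of
 * physical addresses. */
struct vec {
	unsigned long start;
	unsigned long len;
};

/* Sketch of the coalescing rule: if the new range begins exactly where
 * the last entry ends, extend that entry's length instead of adding a
 * new one.  Returns the new entry count. */
static size_t vec_add(struct vec *v, size_t n,
		      unsigned long start, unsigned long len)
{
	if (n > 0 && v[n - 1].start + v[n - 1].len == start) {
		v[n - 1].len += len;   /* coalesce: no new entry */
		return n;
	}
	v[n].start = start;
	v[n].len = len;
	return n + 1;
}
```

With this rule, two physically adjacent pages collapse into a single entry, which is why indexing `sio->bvec[p]` one page at a time only works if the sio vector is built without any such merging.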



* Re: [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read()
  2023-07-18 16:16       ` Matthew Wilcox
@ 2023-07-19  7:46         ` zhangpeng (AS)
  2023-07-20  5:21         ` Christoph Hellwig
  1 sibling, 0 replies; 22+ messages in thread
From: zhangpeng (AS) @ 2023-07-19  7:46 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong

On 2023/7/19 0:16, Matthew Wilcox wrote:

> On Tue, Jul 18, 2023 at 08:56:16PM +0800, zhangpeng (AS) wrote:
>>>>    	if (bio->bi_status) {
>>>> -		SetPageError(page);
>>>> -		ClearPageUptodate(page);
>>>> +		folio_set_error(folio);
>>> I appreciate this is a 1:1 conversion, but maybe we could think about
>>> this a bit.  Is there anybody who checks the
>>> PageError()/folio_test_error() for this page/folio?
>> Maybe wait_dev_supers() checks the PageError() after write_dev_supers()
>> in fs/btrfs/disk-io.c?
> How does _this_ folio end up in btrfs's write_dev_supers()?  This is a
> swap read.  The only folios which are swapped are anonymous and tmpfs.
> btrfs takes care of doing its own I/O.  wait_dev_supers() is looking
> for the error set in btrfs_end_super_write() which is the completion
> routine for write_dev_supers().  The pages involved there are attached
> to a btrfs address_space, not shmem or anon.

Thanks for your explanation!

Then I think nobody checks the PageError()/folio_test_error() for the page
in patch 1 and patch 2. I'll delete SetPageError() in a v2.

-- 
Best Regards,
Peng




* Re: [PATCH 2/6] mm/page_io: use a folio in sio_read_complete()
  2023-07-18 16:21       ` Matthew Wilcox
@ 2023-07-19  7:47         ` zhangpeng (AS)
  0 siblings, 0 replies; 22+ messages in thread
From: zhangpeng (AS) @ 2023-07-19  7:47 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: linux-mm, linux-kernel, sidhartha.kumar, akpm, wangkefeng.wang,
	sunnanyong, Kent Overstreet

On 2023/7/19 0:21, Matthew Wilcox wrote:

> On Tue, Jul 18, 2023 at 08:58:17PM +0800, zhangpeng (AS) wrote:
>> On 2023/7/17 21:40, Matthew Wilcox wrote:
>>
>>> On Mon, Jul 17, 2023 at 09:25:58PM +0800, Peng Zhang wrote:
>>>> +++ b/mm/page_io.c
>>>> @@ -406,19 +406,19 @@ static void sio_read_complete(struct kiocb *iocb, long ret)
>>>>    	if (ret == sio->len) {
>>>>    		for (p = 0; p < sio->pages; p++) {
>>>> -			struct page *page = sio->bvec[p].bv_page;
>>>> +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
>>>> -			SetPageUptodate(page);
>>>> -			unlock_page(page);
>>>> +			folio_mark_uptodate(folio);
>>>> +			folio_unlock(folio);
>>>>    		}
>>> I'm kind of shocked this works today.  Usually bvecs coalesce adjacent
>>> pages into a single entry, so you need to use a real iterator like
>>> bio_for_each_folio_all() to extract individual pages from a bvec.
>>> Maybe the sio bvec is constructed inefficiently.
>>>
>>> I think Kent had some bvec folio iterators in progress?
>> I'll convert bio_first_page_all() to bio_first_folio_all() in a v2.
> That isn't my point at all.  What I'm saying is that when you call a
> function like bio_add_folio(), if @folio is physically adjacent to the
> immediately prior folio already in the bio, it will extend the bv_len
> instead of adding the new folio to the bvec.  Maybe there's nothing like
> that for sio.

Thanks for your reply! I got it.

-- 
Best Regards,
Peng




* Re: [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read()
  2023-07-18 16:16       ` Matthew Wilcox
  2023-07-19  7:46         ` zhangpeng (AS)
@ 2023-07-20  5:21         ` Christoph Hellwig
  1 sibling, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2023-07-20  5:21 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: zhangpeng (AS), linux-mm, linux-kernel, sidhartha.kumar, akpm,
	wangkefeng.wang, sunnanyong

On Tue, Jul 18, 2023 at 05:16:15PM +0100, Matthew Wilcox wrote:
> How does _this_ folio end up in btrfs's write_dev_supers()?  This is a
> swap read.  The only folios which are swapped are anonymous and tmpfs.
> btrfs takes care of doing its own I/O.  wait_dev_supers() is looking
> for the error set in btrfs_end_super_write() which is the completion
> routine for write_dev_supers().  The pages involved there are attached
> to a btrfs address_space, not shmem or anon.

It actually operates on the block_device inode.  That does not matter
for this series, but complicates things in other ways.



* Re: [PATCH 2/6] mm/page_io: use a folio in sio_read_complete()
  2023-07-17 13:40   ` Matthew Wilcox
  2023-07-18 12:58     ` zhangpeng (AS)
@ 2023-07-20  5:22     ` Christoph Hellwig
  1 sibling, 0 replies; 22+ messages in thread
From: Christoph Hellwig @ 2023-07-20  5:22 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Peng Zhang, linux-mm, linux-kernel, sidhartha.kumar, akpm,
	wangkefeng.wang, sunnanyong, Kent Overstreet

On Mon, Jul 17, 2023 at 02:40:24PM +0100, Matthew Wilcox wrote:
> >  		for (p = 0; p < sio->pages; p++) {
> > -			struct page *page = sio->bvec[p].bv_page;
> > +			struct folio *folio = page_folio(sio->bvec[p].bv_page);
> >  
> > -			SetPageUptodate(page);
> > -			unlock_page(page);
> > +			folio_mark_uptodate(folio);
> > +			folio_unlock(folio);
> >  		}
> 
> I'm kind of shocked this works today.  Usually bvecs coalesce adjacent
> pages into a single entry, so you need to use a real iterator like
> bio_for_each_folio_all() to extract individual pages from a bvec.
> Maybe the sio bvec is constructed inefficiently.

sio_read_complete is a kiocb.ki_complete handler.  There is no
coalesce going on for ITER_BVEC iov_iters, which share nothing
but the underlying data structure with the block I/O path.
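A userspace model of that contrast, with invented names (mini_sio and
friends are illustrative stand-ins, not the kernel's swap_iocb): the
sio path fills one entry per page with no adjacency check, so on full
success the completion handler can safely treat each entry as exactly
one page to mark uptodate and unlock.

```c
#include <assert.h>

#define MODEL_PAGE_SIZE 4096u
#define MODEL_MAX_PAGES 8

struct mini_sio {
	unsigned int bv_len[MODEL_MAX_PAGES];	/* stand-in for bvec[p].bv_len */
	int pages;
	long len;				/* total bytes submitted */
};

/* Build the request one entry per page — no coalescing, unlike the
 * bio path's bio_add_folio()-style helpers. */
static void sio_add_pages(struct mini_sio *sio, int n)
{
	for (int p = 0; p < n; p++)
		sio->bv_len[sio->pages++] = MODEL_PAGE_SIZE;
	sio->len = (long)n * MODEL_PAGE_SIZE;
}

/* Completion: on full success (ret == sio->len), walk the entries
 * one page at a time; returns the number "unlocked", -1 on error. */
static int sio_complete_pages(struct mini_sio *sio, long ret)
{
	int done = 0;

	if (ret != sio->len)
		return -1;
	for (int p = 0; p < sio->pages; p++)
		if (sio->bv_len[p] == MODEL_PAGE_SIZE)
			done++;	/* mark this single page uptodate, unlock */
	return done;
}
```

Under this model every entry stays page-sized, so the per-entry loop in
sio_read_complete() is sound even without a folio-aware iterator.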




end of thread, other threads:[~2023-07-20  5:22 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-07-17 13:25 [PATCH 0/6] Convert several functions in page_io.c to use a folio Peng Zhang
2023-07-17 13:25 ` [PATCH 1/6] mm/page_io: use a folio in __end_swap_bio_read() Peng Zhang
2023-07-17 13:34   ` Matthew Wilcox
2023-07-18 12:56     ` zhangpeng (AS)
2023-07-18 16:16       ` Matthew Wilcox
2023-07-19  7:46         ` zhangpeng (AS)
2023-07-20  5:21         ` Christoph Hellwig
2023-07-17 13:25 ` [PATCH 2/6] mm/page_io: use a folio in sio_read_complete() Peng Zhang
2023-07-17 13:40   ` Matthew Wilcox
2023-07-18 12:58     ` zhangpeng (AS)
2023-07-18 16:21       ` Matthew Wilcox
2023-07-19  7:47         ` zhangpeng (AS)
2023-07-20  5:22     ` Christoph Hellwig
2023-07-17 13:25 ` [PATCH 3/6] mm/page_io: use a folio in swap_writepage_bdev_sync() Peng Zhang
2023-07-17 14:00   ` Matthew Wilcox
2023-07-17 13:26 ` [PATCH 4/6] mm/page_io: use a folio in swap_writepage_bdev_async() Peng Zhang
2023-07-17 14:01   ` Matthew Wilcox
2023-07-17 13:26 ` [PATCH 5/6] mm/page_io: convert count_swpout_vm_event() to take in a folio Peng Zhang
2023-07-17 13:48   ` Matthew Wilcox
2023-07-18 12:56     ` zhangpeng (AS)
2023-07-17 13:26 ` [PATCH 6/6] mm/page_io: convert bio_associate_blkg_from_page() " Peng Zhang
2023-07-17 13:46   ` Matthew Wilcox
