linux-mm.kvack.org archive mirror
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
To: Andrew Morton <akpm@linux-foundation.org>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: [PATCH v2 15/57] mm: Convert do_swap_page()'s swapcache variable to a folio
Date: Fri,  2 Sep 2022 20:46:11 +0100	[thread overview]
Message-ID: <20220902194653.1739778-16-willy@infradead.org> (raw)
In-Reply-To: <20220902194653.1739778-1-willy@infradead.org>

The 'swapcache' variable is used to track whether the page is from the
swapcache or not.  It can do this equally well by being the folio of
the page rather than the page itself, and this saves a number of calls
to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
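Note for readers unfamiliar with folios: a folio is always a head page, so a
folio pointer can be compared and passed around directly, whereas an arbitrary
page pointer has to be folded through compound_head() first (explicitly, or
implicitly inside page-based helpers such as unlock_page() and put_page()).
The stand-alone C sketch below illustrates why keeping 'swapcache' as a folio
avoids those calls; the struct layouts and the compound_head()/page_folio()
stand-ins are deliberately simplified and are not the real kernel definitions.

	#include <stdio.h>
	#include <stdbool.h>

	/* Simplified stand-in: only the "which head page do I belong to" link. */
	struct page {
		struct page *head;	/* head of the compound page; itself if a head */
	};

	/* A folio is, by definition, a head page. */
	struct folio {
		struct page page;
	};

	/* Stand-in for the kernel's compound_head(): any page -> its head page. */
	static struct page *compound_head(struct page *page)
	{
		return page->head;
	}

	/* Stand-in for page_folio(): the head page, viewed as a folio. */
	static struct folio *page_folio(struct page *page)
	{
		return (struct folio *)compound_head(page);
	}

	int main(void)
	{
		struct folio f;
		f.page.head = &f.page;			/* a folio is its own head */
		struct page tail = { .head = &f.page };	/* a tail page of folio f */

		/*
		 * Page-based tracking: every comparison needs compound_head()
		 * on both sides before it means "same folio?".
		 */
		struct page *swapcache_page = &tail;
		bool same_page = compound_head(&tail) == compound_head(swapcache_page);

		/*
		 * Folio-based tracking: both sides are already head pages, so a
		 * plain pointer comparison suffices and no further
		 * compound_head() call is needed.
		 */
		struct folio *swapcache = page_folio(&tail);
		bool same_folio = (page_folio(&tail) == swapcache);

		printf("same page: %d, same folio: %d\n", same_page, same_folio);
		return 0;
	}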
 mm/memory.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index f172b148e29b..0184fe0ae736 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3718,8 +3718,8 @@ static vm_fault_t handle_pte_marker(struct vm_fault *vmf)
 vm_fault_t do_swap_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct folio *folio;
-	struct page *page = NULL, *swapcache;
+	struct folio *swapcache, *folio = NULL;
+	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
 	bool exclusive = false;
@@ -3762,11 +3762,11 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		goto out;
 
 	page = lookup_swap_cache(entry, vma, vmf->address);
-	swapcache = page;
 	if (page)
 		folio = page_folio(page);
+	swapcache = folio;
 
-	if (!page) {
+	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
 			/* skip swapcache */
@@ -3799,12 +3799,12 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		} else {
 			page = swapin_readahead(entry, GFP_HIGHUSER_MOVABLE,
 						vmf);
-			swapcache = page;
 			if (page)
 				folio = page_folio(page);
+			swapcache = folio;
 		}
 
-		if (!page) {
+		if (!folio) {
 			/*
 			 * Back out if somebody else faulted in this pte
 			 * while we released the pte lock.
@@ -3856,7 +3856,6 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		page = ksm_might_need_to_copy(page, vma, vmf->address);
 		if (unlikely(!page)) {
 			ret = VM_FAULT_OOM;
-			page = swapcache;
 			goto out_page;
 		}
 		folio = page_folio(page);
@@ -3867,7 +3866,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * owner. Try removing the extra reference from the local LRU
 		 * pagevecs if required.
 		 */
-		if ((vmf->flags & FAULT_FLAG_WRITE) && page == swapcache &&
+		if ((vmf->flags & FAULT_FLAG_WRITE) && folio == swapcache &&
 		    !folio_test_ksm(folio) && !folio_test_lru(folio))
 			lru_add_drain();
 	}
@@ -3908,7 +3907,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * without __HAVE_ARCH_PTE_SWP_EXCLUSIVE.
 		 */
 		exclusive = pte_swp_exclusive(vmf->orig_pte);
-		if (page != swapcache) {
+		if (folio != swapcache) {
 			/*
 			 * We have a fresh page that is not exposed to the
 			 * swapcache -> certainly exclusive.
@@ -3976,7 +3975,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	vmf->orig_pte = pte;
 
 	/* ksm created a completely new copy */
-	if (unlikely(page != swapcache && swapcache)) {
+	if (unlikely(folio != swapcache && swapcache)) {
 		page_add_new_anon_rmap(page, vma, vmf->address);
 		folio_add_lru_vma(folio, vma);
 	} else {
@@ -3989,7 +3988,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
 
 	folio_unlock(folio);
-	if (page != swapcache && swapcache) {
+	if (folio != swapcache && swapcache) {
 		/*
 		 * Hold the lock to avoid the swap entry to be reused
 		 * until we take the PT lock for the pte_same() check
@@ -3998,8 +3997,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		 * so that the swap count won't change under a
 		 * parallel locked swapcache.
 		 */
-		unlock_page(swapcache);
-		put_page(swapcache);
+		folio_unlock(swapcache);
+		folio_put(swapcache);
 	}
 
 	if (vmf->flags & FAULT_FLAG_WRITE) {
@@ -4023,9 +4022,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	folio_unlock(folio);
 out_release:
 	folio_put(folio);
-	if (page != swapcache && swapcache) {
-		unlock_page(swapcache);
-		put_page(swapcache);
+	if (folio != swapcache && swapcache) {
+		folio_unlock(swapcache);
+		folio_put(swapcache);
 	}
 	if (si)
 		put_swap_device(si);
-- 
2.35.1



Thread overview: 61+ messages
2022-09-02 19:45 [PATCH v2 00/57] MM folio changes for 6.1 Matthew Wilcox (Oracle)
2022-09-02 19:45 ` [PATCH v2 01/57] mm/vmscan: Fix a lot of comments Matthew Wilcox (Oracle)
2022-09-02 23:28   ` Andrew Morton
2022-09-02 19:45 ` [PATCH v2 02/57] mm: Add the first tail page to struct folio Matthew Wilcox (Oracle)
2022-09-02 23:28   ` Andrew Morton
2022-09-04  0:44     ` Matthew Wilcox
2022-09-02 19:45 ` [PATCH v2 03/57] mm: Reimplement folio_order() and folio_nr_pages() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 04/57] mm: Add split_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 05/57] mm: Add folio_add_lru_vma() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 06/57] shmem: Convert shmem_writepage() to use a folio throughout Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 07/57] shmem: Convert shmem_delete_from_page_cache() to take a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 08/57] shmem: Convert shmem_replace_page() to use folios throughout Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 09/57] mm/swapfile: Remove page_swapcount() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 10/57] mm/swapfile: Convert try_to_free_swap() to folio_free_swap() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 11/57] mm/swap: Convert __read_swap_cache_async() to use a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 12/57] mm/swap: Convert add_to_swap_cache() to take " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 13/57] mm/swap: Convert put_swap_page() to put_swap_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 14/57] mm: Convert do_swap_page() to use a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` Matthew Wilcox (Oracle) [this message]
2022-09-02 19:46 ` [PATCH v2 16/57] memcg: Convert mem_cgroup_swapin_charge_page() to mem_cgroup_swapin_charge_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 17/57] shmem: Convert shmem_mfill_atomic_pte() to use a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 18/57] shmem: Convert shmem_replace_page() to shmem_replace_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 19/57] swap: Add swap_cache_get_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 20/57] shmem: Eliminate struct page from shmem_swapin_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 21/57] shmem: Convert shmem_getpage_gfp() to shmem_get_folio_gfp() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 22/57] shmem: Convert shmem_fault() to use shmem_get_folio_gfp() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 23/57] shmem: Convert shmem_read_mapping_page_gfp() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 24/57] shmem: Add shmem_get_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 25/57] shmem: Convert shmem_get_partial_folio() to use shmem_get_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 26/57] shmem: Convert shmem_write_begin() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 27/57] shmem: Convert shmem_file_read_iter() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 28/57] shmem: Convert shmem_fallocate() to use a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 29/57] shmem: Convert shmem_symlink() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 30/57] shmem: Convert shmem_get_link() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 31/57] khugepaged: Call shmem_get_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 32/57] userfaultfd: Convert mcontinue_atomic_pte() to use a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 33/57] shmem: Remove shmem_getpage() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 34/57] swapfile: Convert try_to_unuse() to use a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 35/57] swapfile: Convert __try_to_reclaim_swap() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 36/57] swapfile: Convert unuse_pte_range() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 37/57] mm: Convert do_swap_page() to use swap_cache_get_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 38/57] mm: Remove lookup_swap_cache() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 39/57] swap_state: Convert free_swap_cache() to use a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 40/57] swap: Convert swap_writepage() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 41/57] mm: Convert do_wp_page() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 42/57] huge_memory: Convert do_huge_pmd_wp_page() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 43/57] madvise: Convert madvise_free_pte_range() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 44/57] uprobes: Use folios more widely in __replace_page() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 45/57] ksm: Use a folio in replace_page() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 46/57] mm: Convert do_swap_page() to use folio_free_swap() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 47/57] memcg: Convert mem_cgroup_swap_full() to take a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 48/57] mm: Remove try_to_free_swap() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 49/57] rmap: Convert page_move_anon_rmap() to use a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 50/57] migrate: Convert __unmap_and_move() to use folios Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 51/57] migrate: Convert unmap_and_move_huge_page() " Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 52/57] huge_memory: Convert split_huge_page_to_list() to use a folio Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 53/57] huge_memory: Convert unmap_page() to unmap_folio() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 54/57] mm: Convert page_get_anon_vma() to folio_get_anon_vma() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 55/57] rmap: Remove page_unlock_anon_vma_read() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 56/57] uprobes: Use new_folio in __replace_page() Matthew Wilcox (Oracle)
2022-09-02 19:46 ` [PATCH v2 57/57] mm: Convert lock_page_or_retry() to folio_lock_or_retry() Matthew Wilcox (Oracle)
