* + filemap-optimize-order0-folio-in-filemap_map_pages.patch added to mm-new branch
@ 2025-09-02 23:10 Andrew Morton
  2025-09-03  3:41 ` Matthew Wilcox
From: Andrew Morton @ 2025-09-02 23:10 UTC (permalink / raw)
  To: mm-commits, willy, wangkefeng.wang, fengwei.yin, david,
	tujinjiang, akpm


The patch titled
     Subject: filemap: optimize order0 folio in filemap_map_pages
has been added to the -mm mm-new branch.  Its filename is
     filemap-optimize-order0-folio-in-filemap_map_pages.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/filemap-optimize-order0-folio-in-filemap_map_pages.patch

This patch will later appear in the mm-new branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Note, mm-new is a provisional staging ground for work-in-progress
patches, and acceptance into mm-new is a notification for others to take
notice and to finish up reviews.  Please do not hesitate to respond to
review feedback and post updated versions to replace or incrementally
fixup patches in mm-new.

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Jinjiang Tu <tujinjiang@huawei.com>
Subject: filemap: optimize order0 folio in filemap_map_pages
Date: Tue, 19 Aug 2025 22:06:53 +0800

There are two redundant folio refcount updates for an order-0 folio in
filemap_map_pages().  First, filemap_map_order0_folio() increments the
folio refcount after the folio is mapped to the pte.  Then,
filemap_map_pages() drops the refcount that was grabbed by
next_uptodate_folio().  We can leave the refcount unchanged in this
case, letting the reference taken at lookup time cover the mapping.

With this patch, we get an 8% performance gain in the lmbench testcase
'lat_pagefault -P 1 file' with a 512M file.

Link: https://lkml.kernel.org/r/20250819140653.3229136-1-tujinjiang@huawei.com
Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Fengwei Yin <fengwei.yin@intel.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/filemap.c |   15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

--- a/mm/filemap.c~filemap-optimize-order0-folio-in-filemap_map_pages
+++ a/mm/filemap.c
@@ -3719,6 +3719,8 @@ skip:
 	}
 
 	vmf->pte = old_ptep;
+	folio_unlock(folio);
+	folio_put(folio);
 
 	return ret;
 }
@@ -3731,7 +3733,7 @@ static vm_fault_t filemap_map_order0_fol
 	struct page *page = &folio->page;
 
 	if (PageHWPoison(page))
-		return ret;
+		goto out;
 
 	/* See comment of filemap_map_folio_range() */
 	if (!folio_test_workingset(folio))
@@ -3743,15 +3745,19 @@ static vm_fault_t filemap_map_order0_fol
 	 * the fault-around logic.
 	 */
 	if (!pte_none(ptep_get(vmf->pte)))
-		return ret;
+		goto out;
 
 	if (vmf->address == addr)
 		ret = VM_FAULT_NOPAGE;
 
 	set_pte_range(vmf, folio, page, 1, addr);
 	(*rss)++;
-	folio_ref_inc(folio);
+	folio_unlock(folio);
+	return ret;
 
+out:
+	folio_unlock(folio);
+	folio_put(folio);
 	return ret;
 }
 
@@ -3809,9 +3815,6 @@ vm_fault_t filemap_map_pages(struct vm_f
 			ret |= filemap_map_folio_range(vmf, folio,
 					xas.xa_index - folio->index, addr,
 					nr_pages, &rss, &mmap_miss);
-
-		folio_unlock(folio);
-		folio_put(folio);
 	} while ((folio = next_uptodate_folio(&xas, mapping, end_pgoff)) != NULL);
 	add_mm_counter(vma->vm_mm, folio_type, rss);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
_

Patches currently in -mm which might be from tujinjiang@huawei.com are

mm-memory_hotplug-fix-hwpoisoned-large-folio-handling-in-do_migrate_range.patch
filemap-optimize-order0-folio-in-filemap_map_pages.patch



* Re: + filemap-optimize-order0-folio-in-filemap_map_pages.patch added to mm-new branch
  2025-09-02 23:10 + filemap-optimize-order0-folio-in-filemap_map_pages.patch added to mm-new branch Andrew Morton
@ 2025-09-03  3:41 ` Matthew Wilcox
From: Matthew Wilcox @ 2025-09-03  3:41 UTC (permalink / raw)
  To: Andrew Morton; +Cc: mm-commits, wangkefeng.wang, fengwei.yin, david, tujinjiang

On Tue, Sep 02, 2025 at 04:10:40PM -0700, Andrew Morton wrote:
>  	set_pte_range(vmf, folio, page, 1, addr);
>  	(*rss)++;
> -	folio_ref_inc(folio);
> +	folio_unlock(folio);
> +	return ret;

No, this is the wrong version.  The folio_unlock() should not be moved.

