From: Usama Arif <usama.arif@linux.dev>
To: Andrew Morton <akpm@linux-foundation.org>,
ryan.roberts@arm.com, david@kernel.org
Cc: ajd@linux.ibm.com, anshuman.khandual@arm.com, apopple@nvidia.com,
baohua@kernel.org, baolin.wang@linux.alibaba.com,
brauner@kernel.org, catalin.marinas@arm.com, dev.jain@arm.com,
jack@suse.cz, kees@kernel.org, kevin.brodsky@arm.com,
lance.yang@linux.dev, Liam.Howlett@oracle.com,
linux-arm-kernel@lists.infradead.org,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, lorenzo.stoakes@oracle.com,
npache@redhat.com, rmclure@linux.ibm.com,
Al Viro <viro@zeniv.linux.org.uk>,
will@kernel.org, willy@infradead.org, ziy@nvidia.com,
hannes@cmpxchg.org, kas@kernel.org, shakeel.butt@linux.dev,
kernel-team@meta.com, Usama Arif <usama.arif@linux.dev>
Subject: [PATCH 2/4] mm: bypass mmap_miss heuristic for VM_EXEC readahead
Date: Tue, 10 Mar 2026 07:51:15 -0700
Message-ID: <20260310145406.3073394-3-usama.arif@linux.dev>
In-Reply-To: <20260310145406.3073394-1-usama.arif@linux.dev>

The mmap_miss counter in do_sync_mmap_readahead() tracks whether
readahead is useful for mmap'd file access. It is incremented by 1 on
every page cache miss in that function, and decremented in two
places:
- filemap_map_pages(): decremented by one for each page successfully
  mapped via fault-around; pages found already in the cache are
  evidence that readahead was useful. Only pages not in the
  workingset count as hits.
- do_async_mmap_readahead(): decremented by 1 when a page with
PG_readahead is found in cache.

When the counter exceeds MMAP_LOTSAMISS (100), all readahead is
disabled, including the targeted VM_EXEC readahead [1] that requests
arch-preferred folio orders for contpte mapping.
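
For reference, this is roughly how the accounting looks in
mm/filemap.c (a condensed sketch, not the verbatim upstream code):

  /* do_sync_mmap_readahead(): every page cache miss bumps the counter */
  mmap_miss = READ_ONCE(ra->mmap_miss);
  if (mmap_miss < MMAP_LOTSAMISS * 10)
          WRITE_ONCE(ra->mmap_miss, ++mmap_miss);

  /* ... and once past the threshold, readahead is skipped entirely */
  if (mmap_miss > MMAP_LOTSAMISS)
          return fpin;

  /* do_async_mmap_readahead(): a PG_readahead hit is credited back */
  mmap_miss = READ_ONCE(ra->mmap_miss);
  if (mmap_miss)
          WRITE_ONCE(ra->mmap_miss, --mmap_miss);
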
On arm64 with 64K base pages, both decrement paths are inactive:
1. filemap_map_pages() is never called: with 64K pages, the default
   fault_around_pages is 65536 >> PAGE_SHIFT = 1, so
   should_fault_around(), which requires more than one page in the
   fault-around window, returns false. With only one page there is
   nothing "around" to map (see the condensed snippet after this
   list).
2. do_async_mmap_readahead() never fires for exec mappings because
exec readahead sets async_size = 0, so no PG_readahead markers
are placed.
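
The relevant defaults, condensed from mm/memory.c (paraphrased; only
the parts that matter here):

  /* fault-around window: 64KiB worth of base pages by default */
  static unsigned long fault_around_pages __read_mostly =
          65536 >> PAGE_SHIFT;            /* == 1 with 64K base pages */

  static inline bool should_fault_around(struct vm_fault *vmf)
  {
          ...
          /* A single page implies no faulting 'around' at all. */
          return fault_around_pages > 1;  /* always false here */
  }
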
With no decrements, mmap_miss increases monotonically, crossing
MMAP_LOTSAMISS after 100 page cache misses and disabling all
subsequent exec readahead.

Fix this by moving the VM_EXEC readahead block above the mmap_miss
check. The exec readahead path is targeted: it reads a single folio
at the fault location with async_size = 0 rather than speculatively
prefetching, so the mmap_miss heuristic, which exists to throttle
wasteful speculative readahead, should not gate it. The page would
need to be faulted in regardless; the only question is at what
order.
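
As a worked example with illustrative numbers only (assume
exec_folio_order() == 4, i.e. 16-page folios, ra_pages == 32, and a
VMA covering pgoff [64, 200)), a fault at pgoff 70 computes:

  ra->order = 4;                           /* 16-page folios */
  ra->start = round_down(70, 1UL << 4);    /* = 64, already >= VMA start */
  ra_end    = round_up(64 + 32, 1UL << 4); /* = 96, below the VMA end */
  ra->size  = 96 - 64;                     /* = 32 pages, two folios */
  ra->async_size = 0;                      /* no PG_readahead marker */
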
[1] https://lore.kernel.org/all/20250430145920.3748738-6-ryan.roberts@arm.com/
Signed-off-by: Usama Arif <usama.arif@linux.dev>
---
mm/filemap.c | 72 ++++++++++++++++++++++++++++------------------------
1 file changed, 39 insertions(+), 33 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index 6cd7974d4adab..c064f31ecec5a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -3331,6 +3331,37 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
}
}
+ if (vm_flags & VM_EXEC) {
+ /*
+ * Allow arch to request a preferred minimum folio order for
+ * executable memory. This can often be beneficial to
+ * performance if (e.g.) arm64 can contpte-map the folio.
+ * Executable memory rarely benefits from readahead, due to its
+ * random access nature, so set async_size to 0.
+ *
+ * Limit to the boundaries of the VMA to avoid reading in any
+ * pad that might exist between sections, which would be a waste
+ * of memory.
+ *
+ * This is targeted readahead (one folio at the fault location),
+ * not speculative prefetch, so bypass the mmap_miss heuristic
+ * which would otherwise disable it after MMAP_LOTSAMISS faults.
+ */
+ struct vm_area_struct *vma = vmf->vma;
+ unsigned long start = vma->vm_pgoff;
+ unsigned long end = start + vma_pages(vma);
+ unsigned long ra_end;
+
+ ra->order = exec_folio_order();
+ ra->start = round_down(vmf->pgoff, 1UL << ra->order);
+ ra->start = max(ra->start, start);
+ ra_end = round_up(ra->start + ra->ra_pages, 1UL << ra->order);
+ ra_end = min(ra_end, end);
+ ra->size = ra_end - ra->start;
+ ra->async_size = 0;
+ goto do_readahead;
+ }
+
if (!(vm_flags & VM_SEQ_READ)) {
/* Avoid banging the cache line if not needed */
mmap_miss = READ_ONCE(ra->mmap_miss);
@@ -3361,40 +3392,15 @@ static struct file *do_sync_mmap_readahead(struct vm_fault *vmf)
return fpin;
}
- if (vm_flags & VM_EXEC) {
- /*
- * Allow arch to request a preferred minimum folio order for
- * executable memory. This can often be beneficial to
- * performance if (e.g.) arm64 can contpte-map the folio.
- * Executable memory rarely benefits from readahead, due to its
- * random access nature, so set async_size to 0.
- *
- * Limit to the boundaries of the VMA to avoid reading in any
- * pad that might exist between sections, which would be a waste
- * of memory.
- */
- struct vm_area_struct *vma = vmf->vma;
- unsigned long start = vma->vm_pgoff;
- unsigned long end = start + vma_pages(vma);
- unsigned long ra_end;
-
- ra->order = exec_folio_order();
- ra->start = round_down(vmf->pgoff, 1UL << ra->order);
- ra->start = max(ra->start, start);
- ra_end = round_up(ra->start + ra->ra_pages, 1UL << ra->order);
- ra_end = min(ra_end, end);
- ra->size = ra_end - ra->start;
- ra->async_size = 0;
- } else {
- /*
- * mmap read-around
- */
- ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
- ra->size = ra->ra_pages;
- ra->async_size = ra->ra_pages / 4;
- ra->order = 0;
- }
+ /*
+ * mmap read-around
+ */
+ ra->start = max_t(long, 0, vmf->pgoff - ra->ra_pages / 2);
+ ra->size = ra->ra_pages;
+ ra->async_size = ra->ra_pages / 4;
+ ra->order = 0;
+do_readahead:
fpin = maybe_unlock_mmap_for_io(vmf, fpin);
ractl._index = ra->start;
page_cache_ra_order(&ractl, ra);
--
2.47.3