From: "Michael S. Tsirkin" <mst@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Vlastimil Babka <vbabka@kernel.org>,
Brendan Jackman <jackmanb@google.com>,
Michal Hocko <mhocko@suse.com>,
Suren Baghdasaryan <surenb@google.com>,
Jason Wang <jasowang@redhat.com>,
Andrea Arcangeli <aarcange@redhat.com>,
Gregory Price <gourry@gourry.net>,
linux-mm@kvack.org, virtualization@lists.linux.dev,
Johannes Weiner <hannes@cmpxchg.org>, Zi Yan <ziy@nvidia.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Mike Rapoport <rppt@kernel.org>
Subject: [PATCH RFC v4 02/22] mm: add vma_alloc_folio_user_addr
Date: Sun, 26 Apr 2026 17:47:36 -0400
Message-ID: <0652fa7c0ed9be9794f44cc1f10ad19ed5f3990a.1777223007.git.mst@redhat.com>
In-Reply-To: <cover.1777223007.git.mst@redhat.com>
Add vma_alloc_folio_user_addr(), which will be used in follow-up
patches. It takes a separate user_addr parameter for the
cache-friendly zeroing hint, independent of the addr used for NUMA
policy lookup.

The NUMA interleave index is computed as
(addr - vma->vm_start) >> (PAGE_SHIFT + order), so addr must be
folio-aligned for correct NUMA placement, but the zeroing hint wants
the exact fault address for cache locality.

vma_alloc_folio() becomes a thin wrapper that passes addr for both.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Assisted-by: Claude:claude-opus-4-6
---
include/linux/gfp.h | 4 ++++
mm/page_alloc.c | 17 +++++++++++++----
2 files changed, 17 insertions(+), 4 deletions(-)
diff --git a/include/linux/gfp.h b/include/linux/gfp.h
index 7ccbda35b9ad..7069b810f171 100644
--- a/include/linux/gfp.h
+++ b/include/linux/gfp.h
@@ -320,6 +320,9 @@ static inline struct page *alloc_pages_node_noprof(int nid, gfp_t gfp_mask,
struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order,
struct vm_area_struct *vma, unsigned long addr);
+struct folio *vma_alloc_folio_user_addr_noprof(gfp_t gfp, int order,
+ struct vm_area_struct *vma, unsigned long addr,
+ unsigned long user_addr);
#ifdef CONFIG_NUMA
struct page *alloc_pages_noprof(gfp_t gfp, unsigned int order);
struct folio *folio_alloc_noprof(gfp_t gfp, unsigned int order);
@@ -345,6 +348,7 @@ static inline struct folio *folio_alloc_mpol_noprof(gfp_t gfp, unsigned int orde
#define folio_alloc(...) alloc_hooks(folio_alloc_noprof(__VA_ARGS__))
#define folio_alloc_mpol(...) alloc_hooks(folio_alloc_mpol_noprof(__VA_ARGS__))
#define vma_alloc_folio(...) alloc_hooks(vma_alloc_folio_noprof(__VA_ARGS__))
+#define vma_alloc_folio_user_addr(...) alloc_hooks(vma_alloc_folio_user_addr_noprof(__VA_ARGS__))
#define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0e6ec7310087..6d31a5c99e93 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5298,8 +5298,9 @@ struct folio *__folio_alloc_noprof(gfp_t gfp, unsigned int order, int preferred_
EXPORT_SYMBOL(__folio_alloc_noprof);
#ifdef CONFIG_NUMA
-struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order,
- struct vm_area_struct *vma, unsigned long addr)
+struct folio *vma_alloc_folio_user_addr_noprof(gfp_t gfp, int order,
+ struct vm_area_struct *vma, unsigned long addr,
+ unsigned long user_addr)
{
struct mempolicy *pol;
pgoff_t ilx;
@@ -5314,8 +5315,9 @@ struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order,
return folio;
}
#else
-struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order,
- struct vm_area_struct *vma, unsigned long addr)
+struct folio *vma_alloc_folio_user_addr_noprof(gfp_t gfp, int order,
+ struct vm_area_struct *vma, unsigned long addr,
+ unsigned long user_addr)
{
if (vma->vm_flags & VM_DROPPABLE)
gfp |= __GFP_NOWARN;
@@ -5323,6 +5325,13 @@ struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order,
return folio_alloc_noprof(gfp, order);
}
#endif
+EXPORT_SYMBOL(vma_alloc_folio_user_addr_noprof);
+
+struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order,
+ struct vm_area_struct *vma, unsigned long addr)
+{
+ return vma_alloc_folio_user_addr_noprof(gfp, order, vma, addr, addr);
+}
EXPORT_SYMBOL(vma_alloc_folio_noprof);
/*
--
MST
Thread overview: 26+ messages
2026-04-26 21:47 [PATCH RFC v4 00/22] mm/virtio: skip redundant zeroing of host-zeroed reported pages Michael S. Tsirkin
2026-04-26 21:47 ` [PATCH RFC v4 01/22] mm: move vma_alloc_folio to page_alloc.c Michael S. Tsirkin
2026-04-26 21:47 ` Michael S. Tsirkin [this message]
2026-04-26 21:47 ` [PATCH RFC v4 03/22] mm: thread user_addr through page allocator for cache-friendly zeroing Michael S. Tsirkin
2026-04-26 21:47 ` [PATCH RFC v4 04/22] mm: add folio_zero_user stub for configs without THP/HUGETLBFS Michael S. Tsirkin
2026-04-26 21:47 ` [PATCH RFC v4 05/22] mm: page_alloc: move prep_compound_page before post_alloc_hook Michael S. Tsirkin
2026-04-26 21:47 ` [PATCH RFC v4 06/22] mm: use folio_zero_user for user pages in post_alloc_hook Michael S. Tsirkin
2026-04-26 21:47 ` [PATCH RFC v4 07/22] mm: use __GFP_ZERO in vma_alloc_zeroed_movable_folio Michael S. Tsirkin
2026-04-26 21:47 ` [PATCH RFC v4 08/22] mm: use __GFP_ZERO in alloc_anon_folio Michael S. Tsirkin
2026-04-26 21:47 ` [PATCH RFC v4 09/22] mm: use __GFP_ZERO in vma_alloc_anon_folio_pmd Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 10/22] mm: hugetlb: use __GFP_ZERO and skip zeroing for zeroed pages Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 11/22] mm: memfd: skip zeroing for zeroed hugetlb pool pages Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 12/22] mm: remove arch vma_alloc_zeroed_movable_folio overrides Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 13/22] mm: page_alloc: propagate PageReported flag across buddy splits Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 14/22] mm: page_reporting: skip redundant zeroing of host-zeroed reported pages Michael S. Tsirkin
2026-04-27 15:13 ` Zi Yan
2026-04-27 15:18 ` Michael S. Tsirkin
2026-04-27 15:43 ` David Hildenbrand (Arm)
2026-04-26 21:48 ` [PATCH RFC v4 15/22] mm: page_alloc: clear PG_zeroed on buddy merge if not both zero Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 16/22] mm: page_alloc: preserve PG_zeroed in page_del_and_expand Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 17/22] mm: page_reporting: add per-page zeroed bitmap for host feedback Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 18/22] virtio_balloon: a hack to enable host-zeroed page optimization Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 19/22] mm: page_reporting: add flush parameter with page budget Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 20/22] mm: add free_frozen_pages_zeroed Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 21/22] mm: add put_page_zeroed and folio_put_zeroed Michael S. Tsirkin
2026-04-26 21:48 ` [PATCH RFC v4 22/22] virtio_balloon: mark deflated pages as zeroed Michael S. Tsirkin