From: Shivank Garg <shivankg@amd.com>
To: <seanjc@google.com>, <david@redhat.com>, <vbabka@suse.cz>,
<willy@infradead.org>, <akpm@linux-foundation.org>,
<shuah@kernel.org>, <pbonzini@redhat.com>, <brauner@kernel.org>,
<viro@zeniv.linux.org.uk>
Cc: <ackerleytng@google.com>, <paul@paul-moore.com>,
<jmorris@namei.org>, <serge@hallyn.com>, <pvorel@suse.cz>,
<bfoster@redhat.com>, <tabba@google.com>, <vannapurve@google.com>,
<chao.gao@intel.com>, <bharata@amd.com>, <nikunj@amd.com>,
<michael.day@amd.com>, <shdhiman@amd.com>, <yan.y.zhao@intel.com>,
<Neeraj.Upadhyay@amd.com>, <thomas.lendacky@amd.com>,
<michael.roth@amd.com>, <aik@amd.com>, <jgg@nvidia.com>,
<kalyazin@amazon.com>, <peterx@redhat.com>, <shivankg@amd.com>,
<jack@suse.cz>, <rppt@kernel.org>, <hch@infradead.org>,
<cgzones@googlemail.com>, <ira.weiny@intel.com>,
<rientjes@google.com>, <roypat@amazon.co.uk>, <ziy@nvidia.com>,
<matthew.brost@intel.com>, <joshua.hahnjy@gmail.com>,
<rakie.kim@sk.com>, <byungchul@sk.com>, <gourry@gourry.net>,
<kent.overstreet@linux.dev>, <ying.huang@linux.alibaba.com>,
<apopple@nvidia.com>, <chao.p.peng@intel.com>,
<amit@infradead.org>, <ddutile@redhat.com>,
<dan.j.williams@intel.com>, <ashish.kalra@amd.com>,
<gshan@redhat.com>, <jgowans@amazon.com>, <pankaj.gupta@amd.com>,
<papaluri@amd.com>, <yuzhao@google.com>, <suzuki.poulose@arm.com>,
<quic_eberman@quicinc.com>, <aneeshkumar.kizhakeveetil@arm.com>,
<linux-fsdevel@vger.kernel.org>, <linux-mm@kvack.org>,
<linux-kernel@vger.kernel.org>,
<linux-security-module@vger.kernel.org>, <kvm@vger.kernel.org>,
<linux-kselftest@vger.kernel.org>, <linux-coco@lists.linux.dev>
Subject: [PATCH V9 2/7] mm/filemap: Add NUMA mempolicy support to filemap_alloc_folio()
Date: Sun, 13 Jul 2025 17:43:36 +0000
Message-ID: <20250713174339.13981-5-shivankg@amd.com>
In-Reply-To: <20250713174339.13981-2-shivankg@amd.com>
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Add a mempolicy parameter to filemap_alloc_folio() to enable NUMA-aware
page cache allocations. This will be used by upcoming changes to
support NUMA policies in guest-memfd, where guest memory needs to be
allocated according to the NUMA policy specified by the VMM.

All existing callers pass NULL, preserving the current behavior.
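
For illustration only (not part of this series), a caller that tracks
a mempolicy could thread it through the new parameter; the helper
below is a hypothetical sketch, and passing a NULL policy keeps the
existing cpuset page-spread allocation path:

  /*
   * Hypothetical caller sketch, not part of this patch: forward a
   * mempolicy to filemap_alloc_folio(). A NULL policy falls back to
   * the existing cpuset-spread logic in filemap_alloc_folio_noprof().
   */
  static struct folio *alloc_folio_with_policy(struct address_space *mapping,
  					     unsigned int order,
  					     struct mempolicy *policy)
  {
  	gfp_t gfp = mapping_gfp_mask(mapping);

  	return filemap_alloc_folio(gfp, order, policy);
  }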
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
fs/bcachefs/fs-io-buffered.c | 2 +-
fs/btrfs/compression.c | 4 ++--
fs/btrfs/verity.c | 2 +-
fs/erofs/zdata.c | 2 +-
fs/f2fs/compress.c | 2 +-
include/linux/pagemap.h | 8 +++++---
mm/filemap.c | 14 +++++++++-----
mm/readahead.c | 2 +-
8 files changed, 21 insertions(+), 15 deletions(-)
diff --git a/fs/bcachefs/fs-io-buffered.c b/fs/bcachefs/fs-io-buffered.c
index 66bacdd49f78..392344232b16 100644
--- a/fs/bcachefs/fs-io-buffered.c
+++ b/fs/bcachefs/fs-io-buffered.c
@@ -124,7 +124,7 @@ static int readpage_bio_extend(struct btree_trans *trans,
if (folio && !xa_is_value(folio))
break;
- folio = filemap_alloc_folio(readahead_gfp_mask(iter->mapping), order);
+ folio = filemap_alloc_folio(readahead_gfp_mask(iter->mapping), order, NULL);
if (!folio)
break;
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 48d07939fee4..a0808c8f897f 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -474,8 +474,8 @@ static noinline int add_ra_bio_pages(struct inode *inode,
continue;
}
- folio = filemap_alloc_folio(mapping_gfp_constraint(mapping,
- ~__GFP_FS), 0);
+ folio = filemap_alloc_folio(mapping_gfp_constraint(mapping, ~__GFP_FS),
+ 0, NULL);
if (!folio)
break;
diff --git a/fs/btrfs/verity.c b/fs/btrfs/verity.c
index b7a96a005487..c43a789ba6d2 100644
--- a/fs/btrfs/verity.c
+++ b/fs/btrfs/verity.c
@@ -742,7 +742,7 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
}
folio = filemap_alloc_folio(mapping_gfp_constraint(inode->i_mapping, ~__GFP_FS),
- 0);
+ 0, NULL);
if (!folio)
return ERR_PTR(-ENOMEM);
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index e3f28a1bb945..f9ce234e1a66 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -562,7 +562,7 @@ static void z_erofs_bind_cache(struct z_erofs_frontend *fe)
* Allocate a managed folio for cached I/O, or it may be
* then filled with a file-backed folio for in-place I/O
*/
- newfolio = filemap_alloc_folio(gfp, 0);
+ newfolio = filemap_alloc_folio(gfp, 0, NULL);
if (!newfolio)
continue;
newfolio->private = Z_EROFS_PREALLOCATED_FOLIO;
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index b3c1df93a163..7ef937dd7624 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1942,7 +1942,7 @@ void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
return;
}
- cfolio = filemap_alloc_folio(__GFP_NOWARN | __GFP_IO, 0);
+ cfolio = filemap_alloc_folio(__GFP_NOWARN | __GFP_IO, 0, NULL);
if (!cfolio)
return;
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e63fbfbd5b0f..78ea357d2077 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -646,9 +646,11 @@ static inline void *detach_page_private(struct page *page)
}
#ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
+struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order,
+ struct mempolicy *policy);
#else
-static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order,
+ struct mempolicy *policy)
{
return folio_alloc_noprof(gfp, order);
}
@@ -659,7 +661,7 @@ static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int o
static inline struct page *__page_cache_alloc(gfp_t gfp)
{
- return &filemap_alloc_folio(gfp, 0)->page;
+ return &filemap_alloc_folio(gfp, 0, NULL)->page;
}
static inline gfp_t readahead_gfp_mask(struct address_space *x)
diff --git a/mm/filemap.c b/mm/filemap.c
index bada249b9fb7..a30cd4dd085a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -989,11 +989,16 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
EXPORT_SYMBOL_GPL(filemap_add_folio);
#ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order,
+ struct mempolicy *policy)
{
int n;
struct folio *folio;
+ if (policy)
+ return folio_alloc_mpol_noprof(gfp, order, policy,
+ NO_INTERLEAVE_INDEX, numa_node_id());
+
if (cpuset_do_page_mem_spread()) {
unsigned int cpuset_mems_cookie;
do {
@@ -1977,7 +1982,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
err = -ENOMEM;
if (order > min_order)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
- folio = filemap_alloc_folio(alloc_gfp, order);
+ folio = filemap_alloc_folio(alloc_gfp, order, NULL);
if (!folio)
continue;
@@ -2516,7 +2521,7 @@ static int filemap_create_folio(struct kiocb *iocb, struct folio_batch *fbatch)
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
return -EAGAIN;
- folio = filemap_alloc_folio(mapping_gfp_mask(mapping), min_order);
+ folio = filemap_alloc_folio(mapping_gfp_mask(mapping), min_order, NULL);
if (!folio)
return -ENOMEM;
if (iocb->ki_flags & IOCB_DONTCACHE)
@@ -3853,8 +3858,7 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
repeat:
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
- folio = filemap_alloc_folio(gfp,
- mapping_min_folio_order(mapping));
+ folio = filemap_alloc_folio(gfp, mapping_min_folio_order(mapping), NULL);
if (!folio)
return ERR_PTR(-ENOMEM);
index = mapping_align_index(mapping, index);
diff --git a/mm/readahead.c b/mm/readahead.c
index 20d36d6b055e..0b2aec0231e6 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -183,7 +183,7 @@ static struct folio *ractl_alloc_folio(struct readahead_control *ractl,
{
struct folio *folio;
- folio = filemap_alloc_folio(gfp_mask, order);
+ folio = filemap_alloc_folio(gfp_mask, order, NULL);
if (folio && ractl->dropbehind)
__folio_set_dropbehind(folio);
--
2.43.0