* [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd
@ 2025-07-13 17:43 Shivank Garg
2025-07-13 17:43 ` [PATCH V9 1/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes Shivank Garg
` (7 more replies)
0 siblings, 8 replies; 24+ messages in thread
From: Shivank Garg @ 2025-07-13 17:43 UTC (permalink / raw)
To: seanjc, david, vbabka, willy, akpm, shuah, pbonzini, brauner,
viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, shivankg, jack, rppt, hch, cgzones,
ira.weiny, rientjes, roypat, ziy, matthew.brost, joshua.hahnjy,
rakie.kim, byungchul, gourry, kent.overstreet, ying.huang,
apopple, chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra,
gshan, jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
This series introduces NUMA-aware memory placement support for KVM guests
with guest_memfd memory backends. It builds upon Fuad Tabba's work that
enabled host-mapping for guest_memfd memory [1].
== Background ==
KVM's guest-memfd memory backend currently lacks support for NUMA policy
enforcement, so guest memory allocations are distributed across host
nodes according to the kernel's default behavior, irrespective of any policy
specified by the VMM. This limitation arises because conventional userspace
NUMA control mechanisms such as mbind(2) do not apply: the memory is not
mapped into userspace at the time allocations occur.
Fuad's work [1] provides the necessary mmap capability, and this series
leverages it to enable mbind(2).
== Implementation ==
This series implements proper NUMA policy support for guest-memfd by:
1. Adding mempolicy-aware allocation APIs to the filemap layer.
2. Introducing custom inodes (via a dedicated slab-allocated inode cache,
kvm_gmem_inode_info) to store NUMA policy and metadata for guest memory.
3. Implementing get/set_policy vm_ops in guest_memfd to support NUMA
policy.
With these changes, VMMs can now control guest memory placement by mapping
the guest_memfd file descriptor and using mbind(2) to specify:
- Policy modes: default, bind, interleave, or preferred
- Host NUMA nodes: List of target nodes for memory allocation
These policies affect only future allocations and do not migrate existing
memory. This matches mbind(2)'s default behavior, which affects only new
allocations unless overridden with the MPOL_MF_MOVE/MPOL_MF_MOVE_ALL flags
(not supported for guest_memfd, as its memory is unmovable by design).
== Upstream Plan ==
Phased approach as per David's guest_memfd extension overview [2] and
community calls [3]:
Phase 1 (this series):
1. Focuses on shared guest_memfd support (non-CoCo VMs).
2. Builds on Fuad's host-mapping work.
Phase 2 (future work):
1. NUMA support for private guest_memfd (CoCo VMs).
2. Depends on SNP in-place conversion support [4].
This series provides a clean integration path for NUMA-aware memory
management for guest_memfd and lays the groundwork for future confidential
computing NUMA capabilities.
Please review and provide feedback!
Thanks,
Shivank
== Changelog ==
- v1,v2: Extended the KVM_CREATE_GUEST_MEMFD IOCTL to pass mempolicy.
- v3: Introduced fbind() syscall for VMM memory-placement configuration.
- v4-v6: Current approach using shared_policy support and vm_ops (based on
suggestions from David [5] and guest_memfd bi-weekly upstream
call discussion [6]).
- v7: Use inodes to store NUMA policy instead of file [7].
- v8: Rebase on top of Fuad's V12: host mmap support for guest_memfd memory.
- v9: Rebase on top of Fuad's V13 and incorporate review comments.
[1] https://lore.kernel.org/all/20250709105946.4009897-1-tabba@google.com
[2] https://lore.kernel.org/all/c1c9591d-218a-495c-957b-ba356c8f8e09@redhat.com
[3] https://docs.google.com/document/d/1M6766BzdY1Lhk7LiR5IqVR8B8mG3cr-cxTxOrAosPOk/edit?tab=t.0#heading=h.svcbod20b5ur
[4] https://lore.kernel.org/all/20250613005400.3694904-1-michael.roth@amd.com
[5] https://lore.kernel.org/all/6fbef654-36e2-4be5-906e-2a648a845278@redhat.com
[6] https://lore.kernel.org/all/2b77e055-98ac-43a1-a7ad-9f9065d7f38f@amd.com
[7] https://lore.kernel.org/all/diqzbjumm167.fsf@ackerleytng-ctop.c.googlers.com
Ackerley Tng (1):
KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes
Matthew Wilcox (Oracle) (2):
mm/filemap: Add NUMA mempolicy support to filemap_alloc_folio()
mm/filemap: Extend __filemap_get_folio() to support NUMA memory
policies
Shivank Garg (4):
mm/mempolicy: Export memory policy symbols
KVM: guest_memfd: Add slab-allocated inode cache
KVM: guest_memfd: Enforce NUMA mempolicy using shared policy
KVM: guest_memfd: selftests: Add tests for mmap and NUMA policy
support
fs/bcachefs/fs-io-buffered.c | 2 +-
fs/btrfs/compression.c | 4 +-
fs/btrfs/verity.c | 2 +-
fs/erofs/zdata.c | 2 +-
fs/f2fs/compress.c | 2 +-
include/linux/pagemap.h | 18 +-
include/uapi/linux/magic.h | 1 +
mm/filemap.c | 23 +-
mm/mempolicy.c | 6 +
mm/readahead.c | 2 +-
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../testing/selftests/kvm/guest_memfd_test.c | 122 ++++++++-
virt/kvm/guest_memfd.c | 255 ++++++++++++++++--
virt/kvm/kvm_main.c | 7 +-
virt/kvm/kvm_mm.h | 10 +-
15 files changed, 408 insertions(+), 49 deletions(-)
--
2.43.0
---
== Earlier Postings ==
v8: https://lore.kernel.org/all/20250618112935.7629-1-shivankg@amd.com
v7: https://lore.kernel.org/all/20250408112402.181574-1-shivankg@amd.com
v6: https://lore.kernel.org/all/20250226082549.6034-1-shivankg@amd.com
v5: https://lore.kernel.org/all/20250219101559.414878-1-shivankg@amd.com
v4: https://lore.kernel.org/all/20250210063227.41125-1-shivankg@amd.com
v3: https://lore.kernel.org/all/20241105164549.154700-1-shivankg@amd.com
v2: https://lore.kernel.org/all/20240919094438.10987-1-shivankg@amd.com
v1: https://lore.kernel.org/all/20240916165743.201087-1-shivankg@amd.com
^ permalink raw reply [flat|nested] 24+ messages in thread
* [PATCH V9 1/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes
2025-07-13 17:43 [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd Shivank Garg
@ 2025-07-13 17:43 ` Shivank Garg
2025-07-22 15:18 ` David Hildenbrand
2025-07-13 17:43 ` [PATCH V9 2/7] mm/filemap: Add NUMA mempolicy support to filemap_alloc_folio() Shivank Garg
` (6 subsequent siblings)
7 siblings, 1 reply; 24+ messages in thread
From: Shivank Garg @ 2025-07-13 17:43 UTC (permalink / raw)
From: Ackerley Tng <ackerleytng@google.com>
guest_memfd's inode represents the memory that the guest_memfd
provides. guest_memfd's file represents a struct kvm's view of that
memory.
Using a custom inode allows customization of the inode teardown
process via callbacks. For example, ->evict_inode() allows
customization of the truncation process on file close, and
->destroy_inode() and ->free_inode() allow customization of the inode
freeing process.
Customizing the truncation process allows flexibility in the management of
guest_memfd memory, and customizing the inode freeing process allows
proper cleanup of memory metadata stored on the inode.
Memory metadata is more appropriately stored on the inode (as opposed
to the file), since the metadata is for the memory and is not unique
to a specific binding and struct kvm.
Co-developed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
include/uapi/linux/magic.h | 1 +
virt/kvm/guest_memfd.c | 134 +++++++++++++++++++++++++++++++------
virt/kvm/kvm_main.c | 7 +-
virt/kvm/kvm_mm.h | 10 ++-
4 files changed, 127 insertions(+), 25 deletions(-)
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index bb575f3ab45e..638ca21b7a90 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -103,5 +103,6 @@
#define DEVMEM_MAGIC 0x454d444d /* "DMEM" */
#define SECRETMEM_MAGIC 0x5345434d /* "SECM" */
#define PID_FS_MAGIC 0x50494446 /* "PIDF" */
+#define GUEST_MEMFD_MAGIC 0x474d454d /* "GMEM" */
#endif /* __LINUX_MAGIC_H__ */
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index d01bd7a2c2bd..dabcc2317291 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1,12 +1,16 @@
// SPDX-License-Identifier: GPL-2.0
+#include <linux/anon_inodes.h>
#include <linux/backing-dev.h>
#include <linux/falloc.h>
+#include <linux/fs.h>
#include <linux/kvm_host.h>
+#include <linux/pseudo_fs.h>
#include <linux/pagemap.h>
-#include <linux/anon_inodes.h>
#include "kvm_mm.h"
+static struct vfsmount *kvm_gmem_mnt;
+
struct kvm_gmem {
struct kvm *kvm;
struct xarray bindings;
@@ -388,9 +392,51 @@ static struct file_operations kvm_gmem_fops = {
.fallocate = kvm_gmem_fallocate,
};
-void kvm_gmem_init(struct module *module)
+static const struct super_operations kvm_gmem_super_operations = {
+ .statfs = simple_statfs,
+};
+
+static int kvm_gmem_init_fs_context(struct fs_context *fc)
+{
+ struct pseudo_fs_context *ctx;
+
+ if (!init_pseudo(fc, GUEST_MEMFD_MAGIC))
+ return -ENOMEM;
+
+ ctx = fc->fs_private;
+ ctx->ops = &kvm_gmem_super_operations;
+
+ return 0;
+}
+
+static struct file_system_type kvm_gmem_fs = {
+ .name = "kvm_guest_memory",
+ .init_fs_context = kvm_gmem_init_fs_context,
+ .kill_sb = kill_anon_super,
+};
+
+static int kvm_gmem_init_mount(void)
+{
+ kvm_gmem_mnt = kern_mount(&kvm_gmem_fs);
+
+ if (IS_ERR(kvm_gmem_mnt))
+ return PTR_ERR(kvm_gmem_mnt);
+
+ kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC;
+ return 0;
+}
+
+int kvm_gmem_init(struct module *module)
{
kvm_gmem_fops.owner = module;
+
+ return kvm_gmem_init_mount();
+}
+
+void kvm_gmem_exit(void)
+{
+ kern_unmount(kvm_gmem_mnt);
+ kvm_gmem_mnt = NULL;
}
static int kvm_gmem_migrate_folio(struct address_space *mapping,
@@ -472,11 +518,71 @@ static const struct inode_operations kvm_gmem_iops = {
.setattr = kvm_gmem_setattr,
};
+static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
+ loff_t size, u64 flags)
+{
+ struct inode *inode;
+
+ inode = anon_inode_make_secure_inode(kvm_gmem_mnt->mnt_sb, name, NULL);
+ if (IS_ERR(inode))
+ return inode;
+
+ inode->i_private = (void *)(unsigned long)flags;
+ inode->i_op = &kvm_gmem_iops;
+ inode->i_mapping->a_ops = &kvm_gmem_aops;
+ inode->i_mode |= S_IFREG;
+ inode->i_size = size;
+ mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+ mapping_set_inaccessible(inode->i_mapping);
+ /* Unmovable mappings are supposed to be marked unevictable as well. */
+ WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
+
+ return inode;
+}
+
+static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
+ u64 flags)
+{
+ static const char *name = "[kvm-gmem]";
+ struct inode *inode;
+ struct file *file;
+ int err;
+
+ err = -ENOENT;
+ if (!try_module_get(kvm_gmem_fops.owner))
+ goto err;
+
+ inode = kvm_gmem_inode_make_secure_inode(name, size, flags);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ goto err_put_module;
+ }
+
+ file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR,
+ &kvm_gmem_fops);
+ if (IS_ERR(file)) {
+ err = PTR_ERR(file);
+ goto err_put_inode;
+ }
+
+ file->f_flags |= O_LARGEFILE;
+ file->private_data = priv;
+
+out:
+ return file;
+
+err_put_inode:
+ iput(inode);
+err_put_module:
+ module_put(kvm_gmem_fops.owner);
+err:
+ file = ERR_PTR(err);
+ goto out;
+}
+
static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
{
- const char *anon_name = "[kvm-gmem]";
struct kvm_gmem *gmem;
- struct inode *inode;
struct file *file;
int fd, err;
@@ -490,32 +596,16 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
goto err_fd;
}
- file = anon_inode_create_getfile(anon_name, &kvm_gmem_fops, gmem,
- O_RDWR, NULL);
+ file = kvm_gmem_inode_create_getfile(gmem, size, flags);
if (IS_ERR(file)) {
err = PTR_ERR(file);
goto err_gmem;
}
- file->f_flags |= O_LARGEFILE;
-
- inode = file->f_inode;
- WARN_ON(file->f_mapping != inode->i_mapping);
-
- inode->i_private = (void *)(unsigned long)flags;
- inode->i_op = &kvm_gmem_iops;
- inode->i_mapping->a_ops = &kvm_gmem_aops;
- inode->i_mode |= S_IFREG;
- inode->i_size = size;
- mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
- mapping_set_inaccessible(inode->i_mapping);
- /* Unmovable mappings are supposed to be marked unevictable as well. */
- WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
-
kvm_get_kvm(kvm);
gmem->kvm = kvm;
xa_init(&gmem->bindings);
- list_add(&gmem->entry, &inode->i_mapping->i_private_list);
+ list_add(&gmem->entry, &file_inode(file)->i_mapping->i_private_list);
fd_install(fd, file);
return fd;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index f1ac872e01e9..9ccdedc9460a 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -6486,7 +6486,9 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
if (WARN_ON_ONCE(r))
goto err_vfio;
- kvm_gmem_init(module);
+ r = kvm_gmem_init(module);
+ if (r)
+ goto err_gmem;
r = kvm_init_virtualization();
if (r)
@@ -6507,6 +6509,8 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
err_register:
kvm_uninit_virtualization();
err_virt:
+ kvm_gmem_exit();
+err_gmem:
kvm_vfio_ops_exit();
err_vfio:
kvm_async_pf_deinit();
@@ -6538,6 +6542,7 @@ void kvm_exit(void)
for_each_possible_cpu(cpu)
free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
kmem_cache_destroy(kvm_vcpu_cache);
+ kvm_gmem_exit();
kvm_vfio_ops_exit();
kvm_async_pf_deinit();
kvm_irqfd_exit();
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index ec311c0d6718..089f87ed00dc 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -68,17 +68,23 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
#endif /* HAVE_KVM_PFNCACHE */
#ifdef CONFIG_KVM_GMEM
-void kvm_gmem_init(struct module *module);
+int kvm_gmem_init(struct module *module);
+void kvm_gmem_exit(void);
int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
unsigned int fd, loff_t offset);
void kvm_gmem_unbind(struct kvm_memory_slot *slot);
#else
-static inline void kvm_gmem_init(struct module *module)
+static inline int kvm_gmem_init(struct module *module)
{
+ return 0;
}
+static inline void kvm_gmem_exit(void) {}
+
static inline int kvm_gmem_bind(struct kvm *kvm,
struct kvm_memory_slot *slot,
unsigned int fd, loff_t offset)
--
2.43.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH V9 2/7] mm/filemap: Add NUMA mempolicy support to filemap_alloc_folio()
2025-07-13 17:43 [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd Shivank Garg
2025-07-13 17:43 ` [PATCH V9 1/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes Shivank Garg
@ 2025-07-13 17:43 ` Shivank Garg
2025-07-22 15:20 ` David Hildenbrand
2025-07-13 17:43 ` [PATCH V9 3/7] mm/filemap: Extend __filemap_get_folio() to support NUMA memory policies Shivank Garg
` (5 subsequent siblings)
7 siblings, 1 reply; 24+ messages in thread
From: Shivank Garg @ 2025-07-13 17:43 UTC (permalink / raw)
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Add a mempolicy parameter to filemap_alloc_folio() to enable NUMA-aware
page cache allocations. This will be used by upcoming changes to
support NUMA policies in guest_memfd, where guest memory needs to be
allocated according to the NUMA policy specified by the VMM.
All existing users pass NULL, maintaining the current behavior.
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
fs/bcachefs/fs-io-buffered.c | 2 +-
fs/btrfs/compression.c | 4 ++--
fs/btrfs/verity.c | 2 +-
fs/erofs/zdata.c | 2 +-
fs/f2fs/compress.c | 2 +-
include/linux/pagemap.h | 8 +++++---
mm/filemap.c | 14 +++++++++-----
mm/readahead.c | 2 +-
8 files changed, 21 insertions(+), 15 deletions(-)
diff --git a/fs/bcachefs/fs-io-buffered.c b/fs/bcachefs/fs-io-buffered.c
index 66bacdd49f78..392344232b16 100644
--- a/fs/bcachefs/fs-io-buffered.c
+++ b/fs/bcachefs/fs-io-buffered.c
@@ -124,7 +124,7 @@ static int readpage_bio_extend(struct btree_trans *trans,
if (folio && !xa_is_value(folio))
break;
- folio = filemap_alloc_folio(readahead_gfp_mask(iter->mapping), order);
+ folio = filemap_alloc_folio(readahead_gfp_mask(iter->mapping), order, NULL);
if (!folio)
break;
diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
index 48d07939fee4..a0808c8f897f 100644
--- a/fs/btrfs/compression.c
+++ b/fs/btrfs/compression.c
@@ -474,8 +474,8 @@ static noinline int add_ra_bio_pages(struct inode *inode,
continue;
}
- folio = filemap_alloc_folio(mapping_gfp_constraint(mapping,
- ~__GFP_FS), 0);
+ folio = filemap_alloc_folio(mapping_gfp_constraint(mapping, ~__GFP_FS),
+ 0, NULL);
if (!folio)
break;
diff --git a/fs/btrfs/verity.c b/fs/btrfs/verity.c
index b7a96a005487..c43a789ba6d2 100644
--- a/fs/btrfs/verity.c
+++ b/fs/btrfs/verity.c
@@ -742,7 +742,7 @@ static struct page *btrfs_read_merkle_tree_page(struct inode *inode,
}
folio = filemap_alloc_folio(mapping_gfp_constraint(inode->i_mapping, ~__GFP_FS),
- 0);
+ 0, NULL);
if (!folio)
return ERR_PTR(-ENOMEM);
diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
index e3f28a1bb945..f9ce234e1a66 100644
--- a/fs/erofs/zdata.c
+++ b/fs/erofs/zdata.c
@@ -562,7 +562,7 @@ static void z_erofs_bind_cache(struct z_erofs_frontend *fe)
* Allocate a managed folio for cached I/O, or it may be
* then filled with a file-backed folio for in-place I/O
*/
- newfolio = filemap_alloc_folio(gfp, 0);
+ newfolio = filemap_alloc_folio(gfp, 0, NULL);
if (!newfolio)
continue;
newfolio->private = Z_EROFS_PREALLOCATED_FOLIO;
diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
index b3c1df93a163..7ef937dd7624 100644
--- a/fs/f2fs/compress.c
+++ b/fs/f2fs/compress.c
@@ -1942,7 +1942,7 @@ void f2fs_cache_compressed_page(struct f2fs_sb_info *sbi, struct page *page,
return;
}
- cfolio = filemap_alloc_folio(__GFP_NOWARN | __GFP_IO, 0);
+ cfolio = filemap_alloc_folio(__GFP_NOWARN | __GFP_IO, 0, NULL);
if (!cfolio)
return;
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e63fbfbd5b0f..78ea357d2077 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -646,9 +646,11 @@ static inline void *detach_page_private(struct page *page)
}
#ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order);
+struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order,
+ struct mempolicy *policy);
#else
-static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order,
+ struct mempolicy *policy)
{
return folio_alloc_noprof(gfp, order);
}
@@ -659,7 +661,7 @@ static inline struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int o
static inline struct page *__page_cache_alloc(gfp_t gfp)
{
- return &filemap_alloc_folio(gfp, 0)->page;
+ return &filemap_alloc_folio(gfp, 0, NULL)->page;
}
static inline gfp_t readahead_gfp_mask(struct address_space *x)
diff --git a/mm/filemap.c b/mm/filemap.c
index bada249b9fb7..a30cd4dd085a 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -989,11 +989,16 @@ int filemap_add_folio(struct address_space *mapping, struct folio *folio,
EXPORT_SYMBOL_GPL(filemap_add_folio);
#ifdef CONFIG_NUMA
-struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order)
+struct folio *filemap_alloc_folio_noprof(gfp_t gfp, unsigned int order,
+ struct mempolicy *policy)
{
int n;
struct folio *folio;
+ if (policy)
+ return folio_alloc_mpol_noprof(gfp, order, policy,
+ NO_INTERLEAVE_INDEX, numa_node_id());
+
if (cpuset_do_page_mem_spread()) {
unsigned int cpuset_mems_cookie;
do {
@@ -1977,7 +1982,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
err = -ENOMEM;
if (order > min_order)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
- folio = filemap_alloc_folio(alloc_gfp, order);
+ folio = filemap_alloc_folio(alloc_gfp, order, NULL);
if (!folio)
continue;
@@ -2516,7 +2521,7 @@ static int filemap_create_folio(struct kiocb *iocb, struct folio_batch *fbatch)
if (iocb->ki_flags & (IOCB_NOWAIT | IOCB_WAITQ))
return -EAGAIN;
- folio = filemap_alloc_folio(mapping_gfp_mask(mapping), min_order);
+ folio = filemap_alloc_folio(mapping_gfp_mask(mapping), min_order, NULL);
if (!folio)
return -ENOMEM;
if (iocb->ki_flags & IOCB_DONTCACHE)
@@ -3853,8 +3858,7 @@ static struct folio *do_read_cache_folio(struct address_space *mapping,
repeat:
folio = filemap_get_folio(mapping, index);
if (IS_ERR(folio)) {
- folio = filemap_alloc_folio(gfp,
- mapping_min_folio_order(mapping));
+ folio = filemap_alloc_folio(gfp, mapping_min_folio_order(mapping), NULL);
if (!folio)
return ERR_PTR(-ENOMEM);
index = mapping_align_index(mapping, index);
diff --git a/mm/readahead.c b/mm/readahead.c
index 20d36d6b055e..0b2aec0231e6 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -183,7 +183,7 @@ static struct folio *ractl_alloc_folio(struct readahead_control *ractl,
{
struct folio *folio;
- folio = filemap_alloc_folio(gfp_mask, order);
+ folio = filemap_alloc_folio(gfp_mask, order, NULL);
if (folio && ractl->dropbehind)
__folio_set_dropbehind(folio);
--
2.43.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH V9 3/7] mm/filemap: Extend __filemap_get_folio() to support NUMA memory policies
2025-07-13 17:43 [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd Shivank Garg
2025-07-13 17:43 ` [PATCH V9 1/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes Shivank Garg
2025-07-13 17:43 ` [PATCH V9 2/7] mm/filemap: Add NUMA mempolicy support to filemap_alloc_folio() Shivank Garg
@ 2025-07-13 17:43 ` Shivank Garg
2025-07-22 15:21 ` David Hildenbrand
2025-07-13 17:43 ` [PATCH V9 4/7] mm/mempolicy: Export memory policy symbols Shivank Garg
` (4 subsequent siblings)
7 siblings, 1 reply; 24+ messages in thread
From: Shivank Garg @ 2025-07-13 17:43 UTC (permalink / raw)
From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Extend __filemap_get_folio() to support NUMA memory policies by
renaming the implementation to __filemap_get_folio_mpol() and adding
a mempolicy parameter. The original function becomes a static inline
wrapper that passes NULL for the mempolicy.
This infrastructure will enable future support for NUMA-aware page cache
allocations for KVM guests backed by guest_memfd memory.
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
include/linux/pagemap.h | 10 ++++++++--
mm/filemap.c | 11 ++++++-----
2 files changed, 14 insertions(+), 7 deletions(-)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 78ea357d2077..981ff97b4445 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -747,11 +747,17 @@ static inline fgf_t fgf_set_order(size_t size)
}
void *filemap_get_entry(struct address_space *mapping, pgoff_t index);
-struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
- fgf_t fgp_flags, gfp_t gfp);
+struct folio *__filemap_get_folio_mpol(struct address_space *mapping,
+ pgoff_t index, fgf_t fgf_flags, gfp_t gfp, struct mempolicy *policy);
struct page *pagecache_get_page(struct address_space *mapping, pgoff_t index,
fgf_t fgp_flags, gfp_t gfp);
+static inline struct folio *__filemap_get_folio(struct address_space *mapping,
+ pgoff_t index, fgf_t fgf_flags, gfp_t gfp)
+{
+ return __filemap_get_folio_mpol(mapping, index, fgf_flags, gfp, NULL);
+}
+
/**
* filemap_get_folio - Find and get a folio.
* @mapping: The address_space to search.
diff --git a/mm/filemap.c b/mm/filemap.c
index a30cd4dd085a..ec7de38c17c1 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1896,11 +1896,12 @@ void *filemap_get_entry(struct address_space *mapping, pgoff_t index)
}
/**
- * __filemap_get_folio - Find and get a reference to a folio.
+ * __filemap_get_folio_mpol - Find and get a reference to a folio.
* @mapping: The address_space to search.
* @index: The page index.
* @fgp_flags: %FGP flags modify how the folio is returned.
* @gfp: Memory allocation flags to use if %FGP_CREAT is specified.
+ * @policy: NUMA memory allocation policy to follow.
*
* Looks up the page cache entry at @mapping & @index.
*
@@ -1911,8 +1912,8 @@ void *filemap_get_entry(struct address_space *mapping, pgoff_t index)
*
* Return: The found folio or an ERR_PTR() otherwise.
*/
-struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
- fgf_t fgp_flags, gfp_t gfp)
+struct folio *__filemap_get_folio_mpol(struct address_space *mapping,
+ pgoff_t index, fgf_t fgp_flags, gfp_t gfp, struct mempolicy *policy)
{
struct folio *folio;
@@ -1982,7 +1983,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
err = -ENOMEM;
if (order > min_order)
alloc_gfp |= __GFP_NORETRY | __GFP_NOWARN;
- folio = filemap_alloc_folio(alloc_gfp, order, NULL);
+ folio = filemap_alloc_folio(alloc_gfp, order, policy);
if (!folio)
continue;
@@ -2029,7 +2030,7 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
folio_clear_dropbehind(folio);
return folio;
}
-EXPORT_SYMBOL(__filemap_get_folio);
+EXPORT_SYMBOL(__filemap_get_folio_mpol);
static inline struct folio *find_get_entry(struct xa_state *xas, pgoff_t max,
xa_mark_t mark)
--
2.43.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH V9 4/7] mm/mempolicy: Export memory policy symbols
2025-07-13 17:43 [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd Shivank Garg
` (2 preceding siblings ...)
2025-07-13 17:43 ` [PATCH V9 3/7] mm/filemap: Extend __filemap_get_folio() to support NUMA memory policies Shivank Garg
@ 2025-07-13 17:43 ` Shivank Garg
2025-07-13 17:43 ` [PATCH V9 5/7] KVM: guest_memfd: Add slab-allocated inode cache Shivank Garg
` (3 subsequent siblings)
7 siblings, 0 replies; 24+ messages in thread
From: Shivank Garg @ 2025-07-13 17:43 UTC (permalink / raw)
KVM guest_memfd wants to implement support for NUMA policies just like
shmem already does using the shared policy infrastructure. As
guest_memfd currently resides in KVM module code, we have to export the
relevant symbols.
In the future, guest_memfd might be moved to core-mm, at which point the
symbols no longer would have to be exported. When/if that happens is
still unclear.
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
mm/mempolicy.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 3b1dfd08338b..a502e06cfaa2 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -354,6 +354,7 @@ struct mempolicy *get_task_policy(struct task_struct *p)
return &default_policy;
}
+EXPORT_SYMBOL_GPL_FOR_MODULES(get_task_policy, "kvm");
static const struct mempolicy_operations {
int (*create)(struct mempolicy *pol, const nodemask_t *nodes);
@@ -487,6 +488,7 @@ void __mpol_put(struct mempolicy *pol)
return;
kmem_cache_free(policy_cache, pol);
}
+EXPORT_SYMBOL_GPL_FOR_MODULES(__mpol_put, "kvm");
static void mpol_rebind_default(struct mempolicy *pol, const nodemask_t *nodes)
{
@@ -2888,6 +2890,7 @@ struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
read_unlock(&sp->lock);
return pol;
}
+EXPORT_SYMBOL_GPL_FOR_MODULES(mpol_shared_policy_lookup, "kvm");
static void sp_free(struct sp_node *n)
{
@@ -3173,6 +3176,7 @@ void mpol_shared_policy_init(struct shared_policy *sp, struct mempolicy *mpol)
mpol_put(mpol); /* drop our incoming ref on sb mpol */
}
}
+EXPORT_SYMBOL_GPL_FOR_MODULES(mpol_shared_policy_init, "kvm");
int mpol_set_shared_policy(struct shared_policy *sp,
struct vm_area_struct *vma, struct mempolicy *pol)
@@ -3191,6 +3195,7 @@ int mpol_set_shared_policy(struct shared_policy *sp,
sp_free(new);
return err;
}
+EXPORT_SYMBOL_GPL_FOR_MODULES(mpol_set_shared_policy, "kvm");
/* Free a backing policy store on inode delete. */
void mpol_free_shared_policy(struct shared_policy *sp)
@@ -3209,6 +3214,7 @@ void mpol_free_shared_policy(struct shared_policy *sp)
}
write_unlock(&sp->lock);
}
+EXPORT_SYMBOL_GPL_FOR_MODULES(mpol_free_shared_policy, "kvm");
#ifdef CONFIG_NUMA_BALANCING
static int __initdata numabalancing_override;
--
2.43.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH V9 5/7] KVM: guest_memfd: Add slab-allocated inode cache
2025-07-13 17:43 [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd Shivank Garg
` (3 preceding siblings ...)
2025-07-13 17:43 ` [PATCH V9 4/7] mm/mempolicy: Export memory policy symbols Shivank Garg
@ 2025-07-13 17:43 ` Shivank Garg
2025-07-21 11:44 ` Vlastimil Babka
2025-07-13 17:43 ` [PATCH V9 6/7] KVM: guest_memfd: Enforce NUMA mempolicy using shared policy Shivank Garg
` (2 subsequent siblings)
7 siblings, 1 reply; 24+ messages in thread
From: Shivank Garg @ 2025-07-13 17:43 UTC (permalink / raw)
To: seanjc, david, vbabka, willy, akpm, shuah, pbonzini, brauner,
viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, shivankg, jack, rppt, hch, cgzones,
ira.weiny, rientjes, roypat, ziy, matthew.brost, joshua.hahnjy,
rakie.kim, byungchul, gourry, kent.overstreet, ying.huang,
apopple, chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra,
gshan, jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
Add a dedicated inode structure (kvm_gmem_inode_info) and a slab-allocated
inode cache for guest memory backing, similar to how shmem handles inodes.
This adds the necessary allocation/destruction functions and prepares
for the upcoming guest_memfd NUMA policy support changes.
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
virt/kvm/guest_memfd.c | 58 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 56 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index dabcc2317291..989e2b26b344 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -17,6 +17,15 @@ struct kvm_gmem {
struct list_head entry;
};
+struct kvm_gmem_inode_info {
+ struct inode vfs_inode;
+};
+
+static inline struct kvm_gmem_inode_info *KVM_GMEM_I(struct inode *inode)
+{
+ return container_of(inode, struct kvm_gmem_inode_info, vfs_inode);
+}
+
/**
* folio_file_pfn - like folio_file_page, but return a pfn.
* @folio: The folio which contains this index.
@@ -392,8 +401,33 @@ static struct file_operations kvm_gmem_fops = {
.fallocate = kvm_gmem_fallocate,
};
+static struct kmem_cache *kvm_gmem_inode_cachep;
+
+static struct inode *kvm_gmem_alloc_inode(struct super_block *sb)
+{
+ struct kvm_gmem_inode_info *info;
+
+ info = alloc_inode_sb(sb, kvm_gmem_inode_cachep, GFP_KERNEL);
+ if (!info)
+ return NULL;
+
+ return &info->vfs_inode;
+}
+
+static void kvm_gmem_destroy_inode(struct inode *inode)
+{
+}
+
+static void kvm_gmem_free_inode(struct inode *inode)
+{
+ kmem_cache_free(kvm_gmem_inode_cachep, KVM_GMEM_I(inode));
+}
+
static const struct super_operations kvm_gmem_super_operations = {
.statfs = simple_statfs,
+ .alloc_inode = kvm_gmem_alloc_inode,
+ .destroy_inode = kvm_gmem_destroy_inode,
+ .free_inode = kvm_gmem_free_inode,
};
static int kvm_gmem_init_fs_context(struct fs_context *fc)
@@ -426,17 +460,37 @@ static int kvm_gmem_init_mount(void)
return 0;
}
+static void kvm_gmem_init_inode(void *foo)
+{
+ struct kvm_gmem_inode_info *info = foo;
+
+ inode_init_once(&info->vfs_inode);
+}
+
int kvm_gmem_init(struct module *module)
{
- kvm_gmem_fops.owner = module;
+ int ret;
- return kvm_gmem_init_mount();
+ kvm_gmem_fops.owner = module;
+ kvm_gmem_inode_cachep = kmem_cache_create("kvm_gmem_inode_cache",
+ sizeof(struct kvm_gmem_inode_info),
+ 0, SLAB_ACCOUNT,
+ kvm_gmem_init_inode);
+ if (!kvm_gmem_inode_cachep)
+ return -ENOMEM;
+ ret = kvm_gmem_init_mount();
+ if (ret) {
+ kmem_cache_destroy(kvm_gmem_inode_cachep);
+ return ret;
+ }
+ return 0;
}
void kvm_gmem_exit(void)
{
kern_unmount(kvm_gmem_mnt);
kvm_gmem_mnt = NULL;
+ kmem_cache_destroy(kvm_gmem_inode_cachep);
}
static int kvm_gmem_migrate_folio(struct address_space *mapping,
--
2.43.0
* [PATCH V9 6/7] KVM: guest_memfd: Enforce NUMA mempolicy using shared policy
2025-07-13 17:43 [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd Shivank Garg
` (4 preceding siblings ...)
2025-07-13 17:43 ` [PATCH V9 5/7] KVM: guest_memfd: Add slab-allocated inode cache Shivank Garg
@ 2025-07-13 17:43 ` Shivank Garg
2025-07-21 13:30 ` Vlastimil Babka
2025-07-22 15:24 ` David Hildenbrand
2025-07-13 17:43 ` [PATCH V9 7/7] KVM: guest_memfd: selftests: Add tests for mmap and NUMA policy support Shivank Garg
2025-07-22 14:40 ` [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd David Hildenbrand
7 siblings, 2 replies; 24+ messages in thread
From: Shivank Garg @ 2025-07-13 17:43 UTC (permalink / raw)
To: seanjc, david, vbabka, willy, akpm, shuah, pbonzini, brauner,
viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, shivankg, jack, rppt, hch, cgzones,
ira.weiny, rientjes, roypat, ziy, matthew.brost, joshua.hahnjy,
rakie.kim, byungchul, gourry, kent.overstreet, ying.huang,
apopple, chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra,
gshan, jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
Previously, guest-memfd allocations fell back to the local NUMA node in the
absence of a process mempolicy, resulting in arbitrary memory placement.
Moreover, mbind() couldn't be used by the VMM because guest memory wasn't
mapped into userspace when the allocations occurred.
Enable NUMA policy support by implementing vm_ops for guest-memfd mmap
operation. This allows the VMM to map the memory and use mbind() to set the
desired NUMA policy. The policy is stored in the inode structure via
kvm_gmem_inode_info, as memory policy is a property of the memory (struct
inode) itself. The policy is then retrieved via mpol_shared_policy_lookup()
and passed to filemap_grab_folio_mpol() to ensure that allocations follow
the specified memory policy.
This enables the VMM to control guest memory NUMA placement by calling
mbind() on the mapped memory regions, providing fine-grained control over
guest memory allocation across NUMA nodes.
The policy change only affects future allocations and does not migrate
existing memory. This matches mbind(2)'s default behavior, which affects
only new allocations unless overridden with the MPOL_MF_MOVE/MPOL_MF_MOVE_ALL
flags; these are not supported for guest_memfd as its memory is unmovable.
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
virt/kvm/guest_memfd.c | 67 ++++++++++++++++++++++++++++++++++++++++--
1 file changed, 65 insertions(+), 2 deletions(-)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 989e2b26b344..5c9a5eb5c13f 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -4,6 +4,7 @@
#include <linux/falloc.h>
#include <linux/fs.h>
#include <linux/kvm_host.h>
+#include <linux/mempolicy.h>
#include <linux/pseudo_fs.h>
#include <linux/pagemap.h>
@@ -18,6 +19,7 @@ struct kvm_gmem {
};
struct kvm_gmem_inode_info {
+ struct shared_policy policy;
struct inode vfs_inode;
};
@@ -26,6 +28,9 @@ static inline struct kvm_gmem_inode_info *KVM_GMEM_I(struct inode *inode)
return container_of(inode, struct kvm_gmem_inode_info, vfs_inode);
}
+static struct mempolicy *kvm_gmem_get_pgoff_policy(struct kvm_gmem_inode_info *info,
+ pgoff_t index);
+
/**
* folio_file_pfn - like folio_file_page, but return a pfn.
* @folio: The folio which contains this index.
@@ -112,7 +117,25 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
{
/* TODO: Support huge pages. */
- return filemap_grab_folio(inode->i_mapping, index);
+ struct mempolicy *policy;
+ struct folio *folio;
+
+ /*
+ * Fast-path: See if folio is already present in mapping to avoid
+ * policy_lookup.
+ */
+ folio = __filemap_get_folio(inode->i_mapping, index,
+ FGP_LOCK | FGP_ACCESSED, 0);
+ if (!IS_ERR(folio))
+ return folio;
+
+ policy = kvm_gmem_get_pgoff_policy(KVM_GMEM_I(inode), index);
+ folio = __filemap_get_folio_mpol(inode->i_mapping, index,
+ FGP_LOCK | FGP_ACCESSED | FGP_CREAT,
+ mapping_gfp_mask(inode->i_mapping), policy);
+ mpol_cond_put(policy);
+
+ return folio;
}
static void kvm_gmem_invalidate_begin(struct kvm_gmem *gmem, pgoff_t start,
@@ -375,8 +398,45 @@ static vm_fault_t kvm_gmem_fault_user_mapping(struct vm_fault *vmf)
return ret;
}
+#ifdef CONFIG_NUMA
+static int kvm_gmem_set_policy(struct vm_area_struct *vma, struct mempolicy *mpol)
+{
+ struct inode *inode = file_inode(vma->vm_file);
+
+ return mpol_set_shared_policy(&KVM_GMEM_I(inode)->policy, vma, mpol);
+}
+
+static struct mempolicy *kvm_gmem_get_policy(struct vm_area_struct *vma,
+ unsigned long addr, pgoff_t *pgoff)
+{
+ struct inode *inode = file_inode(vma->vm_file);
+
+ *pgoff = vma->vm_pgoff + ((addr - vma->vm_start) >> PAGE_SHIFT);
+ return mpol_shared_policy_lookup(&KVM_GMEM_I(inode)->policy, *pgoff);
+}
+
+static struct mempolicy *kvm_gmem_get_pgoff_policy(struct kvm_gmem_inode_info *info,
+ pgoff_t index)
+{
+ struct mempolicy *mpol;
+
+ mpol = mpol_shared_policy_lookup(&info->policy, index);
+ return mpol ? mpol : get_task_policy(current);
+}
+#else
+static struct mempolicy *kvm_gmem_get_pgoff_policy(struct kvm_gmem_inode_info *info,
+ pgoff_t index)
+{
+ return NULL;
+}
+#endif /* CONFIG_NUMA */
+
static const struct vm_operations_struct kvm_gmem_vm_ops = {
- .fault = kvm_gmem_fault_user_mapping,
+ .fault = kvm_gmem_fault_user_mapping,
+#ifdef CONFIG_NUMA
+ .get_policy = kvm_gmem_get_policy,
+ .set_policy = kvm_gmem_set_policy,
+#endif
};
static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
@@ -411,11 +471,14 @@ static struct inode *kvm_gmem_alloc_inode(struct super_block *sb)
if (!info)
return NULL;
+ mpol_shared_policy_init(&info->policy, NULL);
+
return &info->vfs_inode;
}
static void kvm_gmem_destroy_inode(struct inode *inode)
{
+ mpol_free_shared_policy(&KVM_GMEM_I(inode)->policy);
}
static void kvm_gmem_free_inode(struct inode *inode)
--
2.43.0
^ permalink raw reply related [flat|nested] 24+ messages in thread
* [PATCH V9 7/7] KVM: guest_memfd: selftests: Add tests for mmap and NUMA policy support
2025-07-13 17:43 [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd Shivank Garg
` (5 preceding siblings ...)
2025-07-13 17:43 ` [PATCH V9 6/7] KVM: guest_memfd: Enforce NUMA mempolicy using shared policy Shivank Garg
@ 2025-07-13 17:43 ` Shivank Garg
2025-07-22 14:40 ` [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd David Hildenbrand
7 siblings, 0 replies; 24+ messages in thread
From: Shivank Garg @ 2025-07-13 17:43 UTC (permalink / raw)
To: seanjc, david, vbabka, willy, akpm, shuah, pbonzini, brauner,
viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, shivankg, jack, rppt, hch, cgzones,
ira.weiny, rientjes, roypat, ziy, matthew.brost, joshua.hahnjy,
rakie.kim, byungchul, gourry, kent.overstreet, ying.huang,
apopple, chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra,
gshan, jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
Add tests for NUMA memory policy binding and NUMA-aware allocation in
guest_memfd. This extends the existing selftests by adding proper
validation for:
- KVM gmem set_policy() and get_policy() vm_ops functionality using
mbind() and get_mempolicy()
- NUMA policy application before and after memory allocation
These tests help ensure that NUMA support for guest_memfd works correctly.
Signed-off-by: Shivank Garg <shivankg@amd.com>
---
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../testing/selftests/kvm/guest_memfd_test.c | 122 +++++++++++++++++-
2 files changed, 122 insertions(+), 1 deletion(-)
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index e11ed9e59ab5..f4bb02231d6a 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -273,6 +273,7 @@ pgste-option = $(call try-run, echo 'int main(void) { return 0; }' | \
$(CC) -Werror -Wl$(comma)--s390-pgste -x c - -o "$$TMP",-Wl$(comma)--s390-pgste)
LDLIBS += -ldl
+LDLIBS += -lnuma
LDFLAGS += -pthread $(no-pie-option) $(pgste-option)
LIBKVM_C := $(filter %.c,$(LIBKVM))
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 1252e74fbb8f..d8f3beccd5a0 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -7,6 +7,8 @@
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
+#include <numa.h>
+#include <numaif.h>
#include <errno.h>
#include <stdio.h>
#include <fcntl.h>
@@ -18,6 +20,7 @@
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/stat.h>
+#include <sys/syscall.h>
#include "kvm_util.h"
#include "test_util.h"
@@ -115,6 +118,122 @@ static void test_mmap_not_supported(int fd, size_t page_size, size_t total_size)
TEST_ASSERT_EQ(mem, MAP_FAILED);
}
+#define TEST_REQUIRE_NUMA_MULTIPLE_NODES() \
+ TEST_REQUIRE(numa_available() != -1 && numa_max_node() >= 1)
+
+static void test_mbind(int fd, size_t page_size, size_t total_size)
+{
+ unsigned long nodemask = 1; /* nid: 0 */
+ unsigned long maxnode = 8;
+ unsigned long get_nodemask;
+ int get_policy;
+ char *mem;
+ int ret;
+
+ TEST_REQUIRE_NUMA_MULTIPLE_NODES();
+
+ mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+ TEST_ASSERT(mem != MAP_FAILED, "mmap for mbind test should succeed");
+
+ /* Test MPOL_INTERLEAVE policy */
+ ret = syscall(__NR_mbind, mem, page_size * 2, MPOL_INTERLEAVE,
+ &nodemask, maxnode, 0);
+ TEST_ASSERT(!ret, "mbind with INTERLEAVE to node 0 should succeed");
+ ret = syscall(__NR_get_mempolicy, &get_policy, &get_nodemask,
+ maxnode, mem, MPOL_F_ADDR);
+ TEST_ASSERT(!ret && get_policy == MPOL_INTERLEAVE && get_nodemask == nodemask,
+ "Policy should be MPOL_INTERLEAVE and nodes match");
+
+ /* Test basic MPOL_BIND policy */
+ ret = syscall(__NR_mbind, mem + page_size * 2, page_size * 2, MPOL_BIND,
+ &nodemask, maxnode, 0);
+ TEST_ASSERT(!ret, "mbind with MPOL_BIND to node 0 should succeed");
+ ret = syscall(__NR_get_mempolicy, &get_policy, &get_nodemask,
+ maxnode, mem + page_size * 2, MPOL_F_ADDR);
+ TEST_ASSERT(!ret && get_policy == MPOL_BIND && get_nodemask == nodemask,
+ "Policy should be MPOL_BIND and nodes match");
+
+ /* Test MPOL_DEFAULT policy */
+ ret = syscall(__NR_mbind, mem, total_size, MPOL_DEFAULT, NULL, 0, 0);
+ TEST_ASSERT(!ret, "mbind with MPOL_DEFAULT should succeed");
+ ret = syscall(__NR_get_mempolicy, &get_policy, &get_nodemask,
+ maxnode, mem, MPOL_F_ADDR);
+ TEST_ASSERT(!ret && get_policy == MPOL_DEFAULT && get_nodemask == 0,
+ "Policy should be MPOL_DEFAULT and nodes zero");
+
+ /* Test with invalid policy */
+ ret = syscall(__NR_mbind, mem, page_size, 999, &nodemask, maxnode, 0);
+ TEST_ASSERT(ret == -1 && errno == EINVAL,
+ "mbind with invalid policy should fail with EINVAL");
+
+ TEST_ASSERT(munmap(mem, total_size) == 0, "munmap should succeed");
+}
+
+static void test_numa_allocation(int fd, size_t page_size, size_t total_size)
+{
+ unsigned long node0_mask = 1; /* Node 0 */
+ unsigned long node1_mask = 2; /* Node 1 */
+ unsigned long maxnode = 8;
+ void *pages[4];
+ int status[4];
+ char *mem;
+ int ret, i;
+
+ TEST_REQUIRE_NUMA_MULTIPLE_NODES();
+
+ /* Clean slate: deallocate all file space, if any */
+ ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, total_size);
+ TEST_ASSERT(!ret, "fallocate(PUNCH_HOLE) should succeed");
+
+ mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+ TEST_ASSERT(mem != MAP_FAILED, "mmap should succeed");
+
+ for (i = 0; i < 4; i++)
+ pages[i] = (char *)mem + page_size * i;
+
+ /* Set NUMA policy after allocation */
+ memset(mem, 0xaa, page_size);
+ ret = syscall(__NR_mbind, pages[0], page_size, MPOL_BIND, &node0_mask, maxnode, 0);
+ TEST_ASSERT(!ret, "mbind after allocation page 0 to node 0 should succeed");
+ ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, page_size);
+ TEST_ASSERT(!ret, "fallocate(PUNCH_HOLE) should succeed");
+
+ /* Set NUMA policy before allocation */
+ ret = syscall(__NR_mbind, pages[0], page_size * 2, MPOL_BIND, &node1_mask, maxnode, 0);
+ TEST_ASSERT(!ret, "mbind page 0, 1 to node 1 should succeed");
+ ret = syscall(__NR_mbind, pages[2], page_size * 2, MPOL_BIND, &node0_mask, maxnode, 0);
+ TEST_ASSERT(!ret, "mbind page 2, 3 to node 0 should succeed");
+ memset(mem, 0xaa, total_size);
+
+ /* Validate if pages are allocated on specified NUMA nodes */
+ ret = syscall(__NR_move_pages, 0, 4, pages, NULL, status, 0);
+ TEST_ASSERT(ret >= 0, "move_pages should succeed for status check");
+ TEST_ASSERT(status[0] == 1, "Page 0 should be allocated on node 1");
+ TEST_ASSERT(status[1] == 1, "Page 1 should be allocated on node 1");
+ TEST_ASSERT(status[2] == 0, "Page 2 should be allocated on node 0");
+ TEST_ASSERT(status[3] == 0, "Page 3 should be allocated on node 0");
+
+ /* Punch hole for all pages */
+ ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE, 0, total_size);
+ TEST_ASSERT(!ret, "fallocate(PUNCH_HOLE) should succeed");
+
+ /* Change NUMA policy nodes and reallocate */
+ ret = syscall(__NR_mbind, pages[0], page_size * 2, MPOL_BIND, &node0_mask, maxnode, 0);
+ TEST_ASSERT(!ret, "mbind page 0, 1 to node 0 should succeed");
+ ret = syscall(__NR_mbind, pages[2], page_size * 2, MPOL_BIND, &node1_mask, maxnode, 0);
+ TEST_ASSERT(!ret, "mbind page 2, 3 to node 1 should succeed");
+ memset(mem, 0xaa, total_size);
+
+ ret = syscall(__NR_move_pages, 0, 4, pages, NULL, status, 0);
+ TEST_ASSERT(ret >= 0, "move_pages should succeed after reallocation");
+ TEST_ASSERT(status[0] == 0, "Page 0 should be allocated on node 0");
+ TEST_ASSERT(status[1] == 0, "Page 1 should be allocated on node 0");
+ TEST_ASSERT(status[2] == 1, "Page 2 should be allocated on node 1");
+ TEST_ASSERT(status[3] == 1, "Page 3 should be allocated on node 1");
+
+ TEST_ASSERT(munmap(mem, total_size) == 0, "munmap should succeed");
+}
+
static void test_file_size(int fd, size_t page_size, size_t total_size)
{
struct stat sb;
@@ -275,7 +394,8 @@ static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
if (expect_mmap_allowed) {
test_mmap_supported(fd, page_size, total_size);
test_fault_overflow(fd, page_size, total_size);
-
+ test_mbind(fd, page_size, total_size);
+ test_numa_allocation(fd, page_size, total_size);
} else {
test_mmap_not_supported(fd, page_size, total_size);
}
--
2.43.0
* Re: [PATCH V9 5/7] KVM: guest_memfd: Add slab-allocated inode cache
2025-07-13 17:43 ` [PATCH V9 5/7] KVM: guest_memfd: Add slab-allocated inode cache Shivank Garg
@ 2025-07-21 11:44 ` Vlastimil Babka
2025-07-22 5:03 ` Shivank Garg
0 siblings, 1 reply; 24+ messages in thread
From: Vlastimil Babka @ 2025-07-21 11:44 UTC (permalink / raw)
To: Shivank Garg, seanjc, david, willy, akpm, shuah, pbonzini,
brauner, viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 7/13/25 19:43, Shivank Garg wrote:
> Add dedicated inode structure (kvm_gmem_inode_info) and slab-allocated
> inode cache for guest memory backing, similar to how shmem handles inodes.
>
> This adds the necessary allocation/destruction functions and prepares
> for upcoming guest_memfd NUMA policy support changes.
>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> ---
> virt/kvm/guest_memfd.c | 58 ++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 56 insertions(+), 2 deletions(-)
>
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index dabcc2317291..989e2b26b344 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -17,6 +17,15 @@ struct kvm_gmem {
> struct list_head entry;
> };
>
> +struct kvm_gmem_inode_info {
> + struct inode vfs_inode;
> +};
> +
> +static inline struct kvm_gmem_inode_info *KVM_GMEM_I(struct inode *inode)
> +{
> + return container_of(inode, struct kvm_gmem_inode_info, vfs_inode);
> +}
> +
> /**
> * folio_file_pfn - like folio_file_page, but return a pfn.
> * @folio: The folio which contains this index.
> @@ -392,8 +401,33 @@ static struct file_operations kvm_gmem_fops = {
> .fallocate = kvm_gmem_fallocate,
> };
>
> +static struct kmem_cache *kvm_gmem_inode_cachep;
> +
> +static struct inode *kvm_gmem_alloc_inode(struct super_block *sb)
> +{
> + struct kvm_gmem_inode_info *info;
> +
> + info = alloc_inode_sb(sb, kvm_gmem_inode_cachep, GFP_KERNEL);
> + if (!info)
> + return NULL;
> +
> + return &info->vfs_inode;
> +}
> +
> +static void kvm_gmem_destroy_inode(struct inode *inode)
> +{
> +}
> +
> +static void kvm_gmem_free_inode(struct inode *inode)
> +{
> + kmem_cache_free(kvm_gmem_inode_cachep, KVM_GMEM_I(inode));
> +}
> +
> static const struct super_operations kvm_gmem_super_operations = {
> .statfs = simple_statfs,
> + .alloc_inode = kvm_gmem_alloc_inode,
> + .destroy_inode = kvm_gmem_destroy_inode,
> + .free_inode = kvm_gmem_free_inode,
> };
>
> static int kvm_gmem_init_fs_context(struct fs_context *fc)
> @@ -426,17 +460,37 @@ static int kvm_gmem_init_mount(void)
> return 0;
> }
>
> +static void kvm_gmem_init_inode(void *foo)
> +{
> + struct kvm_gmem_inode_info *info = foo;
> +
> + inode_init_once(&info->vfs_inode);
> +}
> +
> int kvm_gmem_init(struct module *module)
> {
> - kvm_gmem_fops.owner = module;
> + int ret;
>
> - return kvm_gmem_init_mount();
> + kvm_gmem_fops.owner = module;
> + kvm_gmem_inode_cachep = kmem_cache_create("kvm_gmem_inode_cache",
> + sizeof(struct kvm_gmem_inode_info),
> + 0, SLAB_ACCOUNT,
> + kvm_gmem_init_inode);
Since this is new code, please use the new variant of kmem_cache_create()
that takes the args parameter.
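For reference, the args-based kmem_cache_create() variant being suggested
passes the constructor through a struct kmem_cache_args instead of a
positional parameter. A hedged sketch of what the call could become
(kernel code, not standalone-runnable; the wrapper function name is
assumed here, not part of the patch):

```c
static int kvm_gmem_create_inode_cache(void)
{
	struct kmem_cache_args args = {
		.ctor = kvm_gmem_init_inode,
	};

	kvm_gmem_inode_cachep = kmem_cache_create("kvm_gmem_inode_cache",
						  sizeof(struct kvm_gmem_inode_info),
						  &args, SLAB_ACCOUNT);
	return kvm_gmem_inode_cachep ? 0 : -ENOMEM;
}
```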
> + if (!kvm_gmem_inode_cachep)
> + return -ENOMEM;
> + ret = kvm_gmem_init_mount();
> + if (ret) {
> + kmem_cache_destroy(kvm_gmem_inode_cachep);
> + return ret;
> + }
> + return 0;
> }
>
> void kvm_gmem_exit(void)
> {
> kern_unmount(kvm_gmem_mnt);
> kvm_gmem_mnt = NULL;
> + kmem_cache_destroy(kvm_gmem_inode_cachep);
> }
>
> static int kvm_gmem_migrate_folio(struct address_space *mapping,
* Re: [PATCH V9 6/7] KVM: guest_memfd: Enforce NUMA mempolicy using shared policy
2025-07-13 17:43 ` [PATCH V9 6/7] KVM: guest_memfd: Enforce NUMA mempolicy using shared policy Shivank Garg
@ 2025-07-21 13:30 ` Vlastimil Babka
2025-07-22 15:24 ` David Hildenbrand
1 sibling, 0 replies; 24+ messages in thread
From: Vlastimil Babka @ 2025-07-21 13:30 UTC (permalink / raw)
To: Shivank Garg, seanjc, david, willy, akpm, shuah, pbonzini,
brauner, viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 7/13/25 19:43, Shivank Garg wrote:
> Previously, guest-memfd allocations followed local NUMA node id in absence
> of process mempolicy, resulting in arbitrary memory allocation.
> Moreover, mbind() couldn't be used by the VMM as guest memory wasn't
> mapped into userspace when allocation occurred.
>
> Enable NUMA policy support by implementing vm_ops for guest-memfd mmap
> operation. This allows the VMM to map the memory and use mbind() to set the
> desired NUMA policy. The policy is stored in the inode structure via
> kvm_gmem_inode_info, as memory policy is a property of the memory (struct
> inode) itself. The policy is then retrieved via mpol_shared_policy_lookup()
> and passed to filemap_grab_folio_mpol() to ensure that allocations follow
> the specified memory policy.
>
> This enables the VMM to control guest memory NUMA placement by calling
> mbind() on the mapped memory regions, providing fine-grained control over
> guest memory allocation across NUMA nodes.
>
> The policy change only affect future allocations and does not migrate
> existing memory. This matches mbind(2)'s default behavior which affects
> only new allocations unless overridden with MPOL_MF_MOVE/MPOL_MF_MOVE_ALL
> flags, which are not supported for guest_memfd as it is unmovable.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
* Re: [PATCH V9 5/7] KVM: guest_memfd: Add slab-allocated inode cache
2025-07-21 11:44 ` Vlastimil Babka
@ 2025-07-22 5:03 ` Shivank Garg
0 siblings, 0 replies; 24+ messages in thread
From: Shivank Garg @ 2025-07-22 5:03 UTC (permalink / raw)
To: Vlastimil Babka, seanjc, david, willy, akpm, shuah, pbonzini,
brauner, viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 7/21/2025 5:14 PM, Vlastimil Babka wrote:
>> + kvm_gmem_inode_cachep = kmem_cache_create("kvm_gmem_inode_cache",
>> + sizeof(struct kvm_gmem_inode_info),
>> + 0, SLAB_ACCOUNT,
>> + kvm_gmem_init_inode);
> Since this is new code, please use the new variant of kmem_cache_create()
> that takes the args parameter.
Thank you for the review and suggestion.
I'll update this in the next version.
* Re: [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd
2025-07-13 17:43 [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd Shivank Garg
` (6 preceding siblings ...)
2025-07-13 17:43 ` [PATCH V9 7/7] KVM: guest_memfd: selftests: Add tests for mmap and NUMA policy support Shivank Garg
@ 2025-07-22 14:40 ` David Hildenbrand
2025-07-22 14:45 ` Sean Christopherson
2025-07-22 15:49 ` Shivank Garg
7 siblings, 2 replies; 24+ messages in thread
From: David Hildenbrand @ 2025-07-22 14:40 UTC (permalink / raw)
To: Shivank Garg, seanjc, vbabka, willy, akpm, shuah, pbonzini,
brauner, viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 13.07.25 19:43, Shivank Garg wrote:
> This series introduces NUMA-aware memory placement support for KVM guests
> with guest_memfd memory backends. It builds upon Fuad Tabba's work that
> enabled host-mapping for guest_memfd memory [1].
>
> == Background ==
> KVM's guest-memfd memory backend currently lacks support for NUMA policy
> enforcement, causing guest memory allocations to be distributed across host
> nodes according to kernel's default behavior, irrespective of any policy
> specified by the VMM. This limitation arises because conventional userspace
> NUMA control mechanisms like mbind(2) don't work since the memory isn't
> directly mapped to userspace when allocations occur.
> Fuad's work [1] provides the necessary mmap capability, and this series
> leverages it to enable mbind(2).
>
> == Implementation ==
>
> This series implements proper NUMA policy support for guest-memfd by:
>
> 1. Adding mempolicy-aware allocation APIs to the filemap layer.
> 2. Introducing custom inodes (via a dedicated slab-allocated inode cache,
> kvm_gmem_inode_info) to store NUMA policy and metadata for guest memory.
> 3. Implementing get/set_policy vm_ops in guest_memfd to support NUMA
> policy.
>
> With these changes, VMMs can now control guest memory placement by mapping
> guest_memfd file descriptor and using mbind(2) to specify:
> - Policy modes: default, bind, interleave, or preferred
> - Host NUMA nodes: List of target nodes for memory allocation
>
> These policies affect only future allocations and do not migrate existing
> memory. This matches mbind(2)'s default behavior which affects only new
> allocations unless overridden with MPOL_MF_MOVE/MPOL_MF_MOVE_ALL flags (Not
> supported for guest_memfd as it is unmovable by design).
>
> == Upstream Plan ==
> Phased approach as per David's guest_memfd extension overview [2] and
> community calls [3]:
>
> Phase 1 (this series):
> 1. Focuses on shared guest_memfd support (non-CoCo VMs).
> 2. Builds on Fuad's host-mapping work.
Just to clarify: this is based on Fuad's stage 1 and should probably still be
tagged "RFC" until stage-1 is finally upstream.
(I was hoping stage-1 would go upstream in 6.17, but I am not sure yet if that is
still feasible looking at the never-ending review)
I'm surprised to see that
commit cbe4134ea4bc493239786220bd69cb8a13493190
Author: Shivank Garg <shivankg@amd.com>
Date: Fri Jun 20 07:03:30 2025 +0000
fs: export anon_inode_make_secure_inode() and fix secretmem LSM bypass
was merged with the kvm export
EXPORT_SYMBOL_GPL_FOR_MODULES(anon_inode_make_secure_inode, "kvm");
I thought I commented that this is something to be done separately and not really
"fix" material.
Anyhow, good for this series, no need to touch that.
--
Cheers,
David / dhildenb
* Re: [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd
2025-07-22 14:40 ` [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd David Hildenbrand
@ 2025-07-22 14:45 ` Sean Christopherson
2025-07-22 15:51 ` David Hildenbrand
2025-07-22 15:49 ` Shivank Garg
1 sibling, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2025-07-22 14:45 UTC (permalink / raw)
To: David Hildenbrand
Cc: Shivank Garg, vbabka, willy, akpm, shuah, pbonzini, brauner, viro,
ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On Tue, Jul 22, 2025, David Hildenbrand wrote:
> Just to clarify: this is based on Fuad's stage 1 and should probably still be
> tagged "RFC" until stage-1 is finally upstream.
>
> (I was hoping stage-1 would go upstream in 6.17, but I am not sure yet if that is
> still feasible looking at the never-ending review)
6.17 is very doable.
* Re: [PATCH V9 1/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes
2025-07-13 17:43 ` [PATCH V9 1/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes Shivank Garg
@ 2025-07-22 15:18 ` David Hildenbrand
2025-08-07 21:34 ` Ackerley Tng
0 siblings, 1 reply; 24+ messages in thread
From: David Hildenbrand @ 2025-07-22 15:18 UTC (permalink / raw)
To: Shivank Garg, seanjc, vbabka, willy, akpm, shuah, pbonzini,
brauner, viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 13.07.25 19:43, Shivank Garg wrote:
> From: Ackerley Tng <ackerleytng@google.com>
>
> guest_memfd's inode represents memory the guest_memfd is
> providing. guest_memfd's file represents a struct kvm's view of that
> memory.
>
> Using a custom inode allows customization of the inode teardown
> process via callbacks. For example, ->evict_inode() allows
> customization of the truncation process on file close, and
> ->destroy_inode() and ->free_inode() allow customization of the inode
> freeing process.
>
> Customizing the truncation process allows flexibility in management of
> guest_memfd memory and customization of the inode freeing process
> allows proper cleanup of memory metadata stored on the inode.
>
> Memory metadata is more appropriately stored on the inode (as opposed
> to the file), since the metadata is for the memory and is not unique
> to a specific binding and struct kvm.
>
> Co-developed-by: Fuad Tabba <tabba@google.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> ---
[...]
>
> #include "kvm_mm.h"
>
> +static struct vfsmount *kvm_gmem_mnt;
> +
> struct kvm_gmem {
> struct kvm *kvm;
> struct xarray bindings;
> @@ -388,9 +392,51 @@ static struct file_operations kvm_gmem_fops = {
> .fallocate = kvm_gmem_fallocate,
> };
>
> -void kvm_gmem_init(struct module *module)
> +static const struct super_operations kvm_gmem_super_operations = {
> + .statfs = simple_statfs,
> +};
> +
> +static int kvm_gmem_init_fs_context(struct fs_context *fc)
> +{
> + struct pseudo_fs_context *ctx;
> +
> + if (!init_pseudo(fc, GUEST_MEMFD_MAGIC))
> + return -ENOMEM;
> +
> + ctx = fc->fs_private;
> + ctx->ops = &kvm_gmem_super_operations;
Curious, why is that required? (secretmem doesn't have it, so I wonder)
> +
> + return 0;
> +}
> +
> +static struct file_system_type kvm_gmem_fs = {
> + .name = "kvm_guest_memory",
It's GUEST_MEMFD_MAGIC but here "kvm_guest_memory".
For secretmem it's SECRETMEM_MAGIC vs. "secretmem".
So naturally, I wonder if that is to be made consistent :)
> + .init_fs_context = kvm_gmem_init_fs_context,
> + .kill_sb = kill_anon_super,
> +};
> +
> +static int kvm_gmem_init_mount(void)
> +{
> + kvm_gmem_mnt = kern_mount(&kvm_gmem_fs);
> +
> + if (IS_ERR(kvm_gmem_mnt))
> + return PTR_ERR(kvm_gmem_mnt);
> +
> + kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC;
> + return 0;
> +}
> +
> +int kvm_gmem_init(struct module *module)
> {
> kvm_gmem_fops.owner = module;
> +
> + return kvm_gmem_init_mount();
> +}
> +
> +void kvm_gmem_exit(void)
> +{
> + kern_unmount(kvm_gmem_mnt);
> + kvm_gmem_mnt = NULL;
> }
>
> static int kvm_gmem_migrate_folio(struct address_space *mapping,
> @@ -472,11 +518,71 @@ static const struct inode_operations kvm_gmem_iops = {
> .setattr = kvm_gmem_setattr,
> };
>
> +static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
> + loff_t size, u64 flags)
> +{
> + struct inode *inode;
> +
> + inode = anon_inode_make_secure_inode(kvm_gmem_mnt->mnt_sb, name, NULL);
> + if (IS_ERR(inode))
> + return inode;
> +
> + inode->i_private = (void *)(unsigned long)flags;
> + inode->i_op = &kvm_gmem_iops;
> + inode->i_mapping->a_ops = &kvm_gmem_aops;
> + inode->i_mode |= S_IFREG;
> + inode->i_size = size;
> + mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
> + mapping_set_inaccessible(inode->i_mapping);
> + /* Unmovable mappings are supposed to be marked unevictable as well. */
> + WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
> +
> + return inode;
> +}
> +
> +static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
> + u64 flags)
> +{
> + static const char *name = "[kvm-gmem]";
> + struct inode *inode;
> + struct file *file;
> + int err;
> +
> + err = -ENOENT;
> + if (!try_module_get(kvm_gmem_fops.owner))
> + goto err;
Curious, shouldn't there be a module_put() somewhere after this function
returned a file?
> +
> + inode = kvm_gmem_inode_make_secure_inode(name, size, flags);
> + if (IS_ERR(inode)) {
> + err = PTR_ERR(inode);
> + goto err_put_module;
> + }
> +
> + file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR,
> + &kvm_gmem_fops);
> + if (IS_ERR(file)) {
> + err = PTR_ERR(file);
> + goto err_put_inode;
> + }
> +
> + file->f_flags |= O_LARGEFILE;
> + file->private_data = priv;
> +
>
Nothing else jumped at me.
--
Cheers,
David / dhildenb
* Re: [PATCH V9 2/7] mm/filemap: Add NUMA mempolicy support to filemap_alloc_folio()
2025-07-13 17:43 ` [PATCH V9 2/7] mm/filemap: Add NUMA mempolicy support to filemap_alloc_folio() Shivank Garg
@ 2025-07-22 15:20 ` David Hildenbrand
0 siblings, 0 replies; 24+ messages in thread
From: David Hildenbrand @ 2025-07-22 15:20 UTC (permalink / raw)
To: Shivank Garg, seanjc, vbabka, willy, akpm, shuah, pbonzini,
brauner, viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 13.07.25 19:43, Shivank Garg wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> Add a mempolicy parameter to filemap_alloc_folio() to enable NUMA-aware
> page cache allocations. This will be used by upcoming changes to
> support NUMA policies in guest-memfd, where guest memory needs to be
> allocated according to the NUMA policy specified by the VMM.
>
> All existing users pass NULL maintaining current behavior.
>
> Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> ---
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH V9 3/7] mm/filemap: Extend __filemap_get_folio() to support NUMA memory policies
2025-07-13 17:43 ` [PATCH V9 3/7] mm/filemap: Extend __filemap_get_folio() to support NUMA memory policies Shivank Garg
@ 2025-07-22 15:21 ` David Hildenbrand
0 siblings, 0 replies; 24+ messages in thread
From: David Hildenbrand @ 2025-07-22 15:21 UTC (permalink / raw)
To: Shivank Garg, seanjc, vbabka, willy, akpm, shuah, pbonzini,
brauner, viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 13.07.25 19:43, Shivank Garg wrote:
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> Extend __filemap_get_folio() to support NUMA memory policies by
> renaming the implementation to __filemap_get_folio_mpol() and adding
> a mempolicy parameter. The original function becomes a static inline
> wrapper that passes NULL for the mempolicy.
>
> This infrastructure will enable future support for NUMA-aware page cache
> allocations for KVM guests using the guest_memfd memory backend.
>
> Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
> Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> ---
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH V9 6/7] KVM: guest_memfd: Enforce NUMA mempolicy using shared policy
2025-07-13 17:43 ` [PATCH V9 6/7] KVM: guest_memfd: Enforce NUMA mempolicy using shared policy Shivank Garg
2025-07-21 13:30 ` Vlastimil Babka
@ 2025-07-22 15:24 ` David Hildenbrand
1 sibling, 0 replies; 24+ messages in thread
From: David Hildenbrand @ 2025-07-22 15:24 UTC (permalink / raw)
To: Shivank Garg, seanjc, vbabka, willy, akpm, shuah, pbonzini,
brauner, viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 13.07.25 19:43, Shivank Garg wrote:
> Previously, guest-memfd allocations followed the local NUMA node id in the
> absence of a process mempolicy, resulting in arbitrary memory allocation.
> Moreover, mbind() couldn't be used by the VMM as guest memory wasn't
> mapped into userspace when allocation occurred.
>
> Enable NUMA policy support by implementing vm_ops for guest-memfd mmap
> operation. This allows the VMM to map the memory and use mbind() to set the
> desired NUMA policy. The policy is stored in the inode structure via
> kvm_gmem_inode_info, as memory policy is a property of the memory (struct
> inode) itself. The policy is then retrieved via mpol_shared_policy_lookup()
> and passed to filemap_grab_folio_mpol() to ensure that allocations follow
> the specified memory policy.
>
> This enables the VMM to control guest memory NUMA placement by calling
> mbind() on the mapped memory regions, providing fine-grained control over
> guest memory allocation across NUMA nodes.
>
> The policy change affects only future allocations and does not migrate
> existing memory. This matches mbind(2)'s default behavior which affects
> only new allocations unless overridden with MPOL_MF_MOVE/MPOL_MF_MOVE_ALL
> flags, which are not supported for guest_memfd as it is unmovable.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Shivank Garg <shivankg@amd.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd
2025-07-22 14:40 ` [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd David Hildenbrand
2025-07-22 14:45 ` Sean Christopherson
@ 2025-07-22 15:49 ` Shivank Garg
1 sibling, 0 replies; 24+ messages in thread
From: Shivank Garg @ 2025-07-22 15:49 UTC (permalink / raw)
To: David Hildenbrand, seanjc, vbabka, willy, akpm, shuah, pbonzini,
brauner, viro
Cc: ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 7/22/2025 8:10 PM, David Hildenbrand wrote:
> On 13.07.25 19:43, Shivank Garg wrote:
>> This series introduces NUMA-aware memory placement support for KVM guests
>> with guest_memfd memory backends. It builds upon Fuad Tabba's work that
>> enabled host-mapping for guest_memfd memory [1].
>>
>> == Background ==
>> KVM's guest-memfd memory backend currently lacks support for NUMA policy
>> enforcement, causing guest memory allocations to be distributed across host
>> nodes according to kernel's default behavior, irrespective of any policy
>> specified by the VMM. This limitation arises because conventional userspace
>> NUMA control mechanisms like mbind(2) don't work since the memory isn't
>> directly mapped to userspace when allocations occur.
>> Fuad's work [1] provides the necessary mmap capability, and this series
>> leverages it to enable mbind(2).
>>
>> == Implementation ==
>>
>> This series implements proper NUMA policy support for guest-memfd by:
>>
>> 1. Adding mempolicy-aware allocation APIs to the filemap layer.
>> 2. Introducing custom inodes (via a dedicated slab-allocated inode cache,
>> kvm_gmem_inode_info) to store NUMA policy and metadata for guest memory.
>> 3. Implementing get/set_policy vm_ops in guest_memfd to support NUMA
>> policy.
>>
>> With these changes, VMMs can now control guest memory placement by mapping
>> guest_memfd file descriptor and using mbind(2) to specify:
>> - Policy modes: default, bind, interleave, or preferred
>> - Host NUMA nodes: List of target nodes for memory allocation
>>
>> These policies affect only future allocations and do not migrate existing
>> memory. This matches mbind(2)'s default behavior which affects only new
>> allocations unless overridden with MPOL_MF_MOVE/MPOL_MF_MOVE_ALL flags (not
>> supported for guest_memfd as it is unmovable by design).
>>
>> == Upstream Plan ==
>> Phased approach as per David's guest_memfd extension overview [2] and
>> community calls [3]:
>>
>> Phase 1 (this series):
>> 1. Focuses on shared guest_memfd support (non-CoCo VMs).
>> 2. Builds on Fuad's host-mapping work.
>
> Just to clarify: this is based on Fuad's stage 1 and should probably still be
> tagged "RFC" until stage-1 is finally upstream.
>
Sure.
> (I was hoping stage-1 would go upstream in 6.17, but I am not sure yet if that is
> still feasible looking at the never-ending review)
>
> I'm surprised to see that
>
> commit cbe4134ea4bc493239786220bd69cb8a13493190
> Author: Shivank Garg <shivankg@amd.com>
> Date: Fri Jun 20 07:03:30 2025 +0000
>
> fs: export anon_inode_make_secure_inode() and fix secretmem LSM bypass
> was merged with the kvm export
>
> EXPORT_SYMBOL_GPL_FOR_MODULES(anon_inode_make_secure_inode, "kvm");
>
> I thought I commented that this is something to be done separately and not really
> "fix" material.
>
> Anyhow, good for this series, no need to touch that.
>
Yeah, V2 got merged instead of V3.
https://lore.kernel.org/all/1ab3381b-1620-485d-8e1b-fff2c48d45c3@amd.com
but the backport did not cause any issues either.
Thank you for the reviews :)
Best Regards,
Shivank
* Re: [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd
2025-07-22 14:45 ` Sean Christopherson
@ 2025-07-22 15:51 ` David Hildenbrand
2025-07-22 23:07 ` Sean Christopherson
0 siblings, 1 reply; 24+ messages in thread
From: David Hildenbrand @ 2025-07-22 15:51 UTC (permalink / raw)
To: Sean Christopherson
Cc: Shivank Garg, vbabka, willy, akpm, shuah, pbonzini, brauner, viro,
ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 22.07.25 16:45, Sean Christopherson wrote:
> On Tue, Jul 22, 2025, David Hildenbrand wrote:
>> Just to clarify: this is based on Fuad's stage 1 and should probably still be
>> tagged "RFC" until stage-1 is finally upstream.
>>
>> (I was hoping stage-1 would go upstream in 6.17, but I am not sure yet if that is
>> still feasible looking at the never-ending review)
>
> 6.17 is very doable.
I like your optimism :)
--
Cheers,
David / dhildenb
* Re: [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd
2025-07-22 15:51 ` David Hildenbrand
@ 2025-07-22 23:07 ` Sean Christopherson
2025-07-23 8:20 ` David Hildenbrand
0 siblings, 1 reply; 24+ messages in thread
From: Sean Christopherson @ 2025-07-22 23:07 UTC (permalink / raw)
To: David Hildenbrand
Cc: Shivank Garg, vbabka, willy, akpm, shuah, pbonzini, brauner, viro,
ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On Tue, Jul 22, 2025, David Hildenbrand wrote:
> On 22.07.25 16:45, Sean Christopherson wrote:
> > On Tue, Jul 22, 2025, David Hildenbrand wrote:
> > > Just to clarify: this is based on Fuad's stage 1 and should probably still be
> > > tagged "RFC" until stage-1 is finally upstream.
> > >
> > > (I was hoping stage-1 would go upstream in 6.17, but I am not sure yet if that is
> > > still feasible looking at the never-ending review)
> >
> > 6.17 is very doable.
>
> I like your optimism :)
I'm not optimistic, just incompetent. I forgot what kernel we're on. **6.18**
is very doable, 6.17 not so much.
* Re: [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd
2025-07-22 23:07 ` Sean Christopherson
@ 2025-07-23 8:20 ` David Hildenbrand
0 siblings, 0 replies; 24+ messages in thread
From: David Hildenbrand @ 2025-07-23 8:20 UTC (permalink / raw)
To: Sean Christopherson
Cc: Shivank Garg, vbabka, willy, akpm, shuah, pbonzini, brauner, viro,
ackerleytng, paul, jmorris, serge, pvorel, bfoster, tabba,
vannapurve, chao.gao, bharata, nikunj, michael.day, shdhiman,
yan.y.zhao, Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik,
jgg, kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny,
rientjes, roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim,
byungchul, gourry, kent.overstreet, ying.huang, apopple,
chao.p.peng, amit, ddutile, dan.j.williams, ashish.kalra, gshan,
jgowans, pankaj.gupta, papaluri, yuzhao, suzuki.poulose,
quic_eberman, aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm,
linux-kernel, linux-security-module, kvm, linux-kselftest,
linux-coco
On 23.07.25 01:07, Sean Christopherson wrote:
> On Tue, Jul 22, 2025, David Hildenbrand wrote:
>> On 22.07.25 16:45, Sean Christopherson wrote:
>>> On Tue, Jul 22, 2025, David Hildenbrand wrote:
>>>> Just to clarify: this is based on Fuad's stage 1 and should probably still be
>>>> tagged "RFC" until stage-1 is finally upstream.
>>>>
>>>> (I was hoping stage-1 would go upstream in 6.17, but I am not sure yet if that is
>>>> still feasible looking at the never-ending review)
>>>
>>> 6.17 is very doable.
>>
>> I like your optimism :)
>
> I'm not optimistic, just incompetent.
Well, I wouldn't agree with that :)
> I forgot what kernel we're on. **6.18**
> is very doable, 6.17 not so much.
Yes, probably best to target 6.18 rather than rushing this into the upcoming MR.
--
Cheers,
David / dhildenb
* Re: [PATCH V9 1/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes
2025-07-22 15:18 ` David Hildenbrand
@ 2025-08-07 21:34 ` Ackerley Tng
2025-08-07 22:14 ` Ackerley Tng
2025-08-11 8:02 ` Garg, Shivank
0 siblings, 2 replies; 24+ messages in thread
From: Ackerley Tng @ 2025-08-07 21:34 UTC (permalink / raw)
To: David Hildenbrand, Shivank Garg, seanjc, vbabka, willy, akpm,
shuah, pbonzini, brauner, viro
Cc: paul, jmorris, serge, pvorel, bfoster, tabba, vannapurve,
chao.gao, bharata, nikunj, michael.day, shdhiman, yan.y.zhao,
Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik, jgg,
kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny, rientjes,
roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim, byungchul,
gourry, kent.overstreet, ying.huang, apopple, chao.p.peng, amit,
ddutile, dan.j.williams, ashish.kalra, gshan, jgowans,
pankaj.gupta, papaluri, yuzhao, suzuki.poulose, quic_eberman,
aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm, linux-kernel,
linux-security-module, kvm, linux-kselftest, linux-coco
David Hildenbrand <david@redhat.com> writes:
> On 13.07.25 19:43, Shivank Garg wrote:
>> From: Ackerley Tng <ackerleytng@google.com>
>>
>> guest_memfd's inode represents memory the guest_memfd is
>> providing. guest_memfd's file represents a struct kvm's view of that
>> memory.
>>
>> Using a custom inode allows customization of the inode teardown
>> process via callbacks. For example, ->evict_inode() allows
>> customization of the truncation process on file close, and
>> ->destroy_inode() and ->free_inode() allow customization of the inode
>> freeing process.
>>
>> Customizing the truncation process allows flexibility in management of
>> guest_memfd memory and customization of the inode freeing process
>> allows proper cleanup of memory metadata stored on the inode.
>>
>> Memory metadata is more appropriately stored on the inode (as opposed
>> to the file), since the metadata is for the memory and is not unique
>> to a specific binding and struct kvm.
>>
>> Co-developed-by: Fuad Tabba <tabba@google.com>
>> Signed-off-by: Fuad Tabba <tabba@google.com>
>> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
>> Signed-off-by: Shivank Garg <shivankg@amd.com>
>> ---
>
> [...]
>
>>
>> #include "kvm_mm.h"
>>
>> +static struct vfsmount *kvm_gmem_mnt;
>> +
>> struct kvm_gmem {
>> struct kvm *kvm;
>> struct xarray bindings;
>> @@ -388,9 +392,51 @@ static struct file_operations kvm_gmem_fops = {
>> .fallocate = kvm_gmem_fallocate,
>> };
>>
>> -void kvm_gmem_init(struct module *module)
>> +static const struct super_operations kvm_gmem_super_operations = {
>> + .statfs = simple_statfs,
>> +};
>> +
>> +static int kvm_gmem_init_fs_context(struct fs_context *fc)
>> +{
>> + struct pseudo_fs_context *ctx;
>> +
>> + if (!init_pseudo(fc, GUEST_MEMFD_MAGIC))
>> + return -ENOMEM;
>> +
>> + ctx = fc->fs_private;
>> + ctx->ops = &kvm_gmem_super_operations;
>
> Curious, why is that required? (secretmem doesn't have it, so I wonder)
>
Good point! pseudo_fs_fill_super() fills in a struct super_operations
which already does simple_statfs, so guest_memfd doesn't need this.
>> +
>> + return 0;
>> +}
>> +
>> +static struct file_system_type kvm_gmem_fs = {
>> + .name = "kvm_guest_memory",
>
> It's GUEST_MEMFD_MAGIC but here "kvm_guest_memory".
>
> For secretmem it's SECRETMEM_MAGIC vs. "secretmem".
>
> So naturally, I wonder if that is to be made consistent :)
>
I'll update this to "guest_memfd" to be consistent.
>> + .init_fs_context = kvm_gmem_init_fs_context,
>> + .kill_sb = kill_anon_super,
>> +};
>> +
>> +static int kvm_gmem_init_mount(void)
>> +{
>> + kvm_gmem_mnt = kern_mount(&kvm_gmem_fs);
>> +
>> + if (IS_ERR(kvm_gmem_mnt))
>> + return PTR_ERR(kvm_gmem_mnt);
>> +
>> + kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC;
>> + return 0;
>> +}
>> +
>> +int kvm_gmem_init(struct module *module)
>> {
>> kvm_gmem_fops.owner = module;
>> +
>> + return kvm_gmem_init_mount();
>> +}
>> +
>> +void kvm_gmem_exit(void)
>> +{
>> + kern_unmount(kvm_gmem_mnt);
>> + kvm_gmem_mnt = NULL;
>> }
>>
>> static int kvm_gmem_migrate_folio(struct address_space *mapping,
>> @@ -472,11 +518,71 @@ static const struct inode_operations kvm_gmem_iops = {
>> .setattr = kvm_gmem_setattr,
>> };
>>
>> +static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
>> + loff_t size, u64 flags)
>> +{
>> + struct inode *inode;
>> +
>> + inode = anon_inode_make_secure_inode(kvm_gmem_mnt->mnt_sb, name, NULL);
>> + if (IS_ERR(inode))
>> + return inode;
>> +
>> + inode->i_private = (void *)(unsigned long)flags;
>> + inode->i_op = &kvm_gmem_iops;
>> + inode->i_mapping->a_ops = &kvm_gmem_aops;
>> + inode->i_mode |= S_IFREG;
>> + inode->i_size = size;
>> + mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
>> + mapping_set_inaccessible(inode->i_mapping);
>> + /* Unmovable mappings are supposed to be marked unevictable as well. */
>> + WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
>> +
>> + return inode;
>> +}
>> +
>> +static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
>> + u64 flags)
>> +{
>> + static const char *name = "[kvm-gmem]";
>> + struct inode *inode;
>> + struct file *file;
>> + int err;
>> +
>> + err = -ENOENT;
>> + if (!try_module_get(kvm_gmem_fops.owner))
>> + goto err;
>
> Curious, shouldn't there be a module_put() somewhere after this function
> returned a file?
>
This was interesting indeed, but IIUC this is correct.
I think this flow was basically copied from __anon_inode_getfile(),
which does this try_module_get().
The corresponding module_put() is in __fput(), which calls fops_put()
and calls module_put() on the owner.
>> +
>> + inode = kvm_gmem_inode_make_secure_inode(name, size, flags);
>> + if (IS_ERR(inode)) {
>> + err = PTR_ERR(inode);
>> + goto err_put_module;
>> + }
>> +
>> + file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR,
>> + &kvm_gmem_fops);
>> + if (IS_ERR(file)) {
>> + err = PTR_ERR(file);
>> + goto err_put_inode;
>> + }
>> +
>> + file->f_flags |= O_LARGEFILE;
>> + file->private_data = priv;
>> +
>>
>
> Nothing else jumped at me.
>
Thanks for the review!
Since we're going to submit this patch through Shivank's mempolicy
support series, I'll follow up soon by sending a replacement patch in
reply to this series so Shivank can build on top of it?
> --
> Cheers,
>
> David / dhildenb
* Re: [PATCH V9 1/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes
2025-08-07 21:34 ` Ackerley Tng
@ 2025-08-07 22:14 ` Ackerley Tng
2025-08-11 8:02 ` Garg, Shivank
1 sibling, 0 replies; 24+ messages in thread
From: Ackerley Tng @ 2025-08-07 22:14 UTC (permalink / raw)
To: David Hildenbrand, Shivank Garg, seanjc, vbabka, willy, akpm,
shuah, pbonzini, brauner, viro
Cc: paul, jmorris, serge, pvorel, bfoster, tabba, vannapurve,
chao.gao, bharata, nikunj, michael.day, shdhiman, yan.y.zhao,
Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik, jgg,
kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny, rientjes,
roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim, byungchul,
gourry, kent.overstreet, ying.huang, apopple, chao.p.peng, amit,
ddutile, dan.j.williams, ashish.kalra, gshan, jgowans,
pankaj.gupta, papaluri, yuzhao, suzuki.poulose, quic_eberman,
aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm, linux-kernel,
linux-security-module, kvm, linux-kselftest, linux-coco
Ackerley Tng <ackerleytng@google.com> writes:
> David Hildenbrand <david@redhat.com> writes:
>
[snip]
>>
>> Nothing else jumped at me.
>>
>
> Thanks for the review!
>
> Since we're going to submit this patch through Shivank's mempolicy
> support series, I'll follow up soon by sending a replacement patch in
> reply to this series so Shivank can build on top of it?
>
>> --
>> Cheers,
>>
>> David / dhildenb
I hope sending a patch within a reply this way works!
---
From 11845fed725ff68c3bad07cd9c717ae968465bf4 Mon Sep 17 00:00:00 2001
Message-ID: <11845fed725ff68c3bad07cd9c717ae968465bf4.1754603750.git.ackerleytng@google.com>
From: Ackerley Tng <ackerleytng@google.com>
Date: Sun, 13 Jul 2025 17:43:35 +0000
Subject: [PATCH 1/1] KVM: guest_memfd: Use guest mem inodes instead of
anonymous inodes
guest_memfd's inode represents memory the guest_memfd is
providing. guest_memfd's file represents a struct kvm's view of that
memory.
Using a custom inode allows customization of the inode teardown
process via callbacks. For example, ->evict_inode() allows
customization of the truncation process on file close, and
->destroy_inode() and ->free_inode() allow customization of the inode
freeing process.
Customizing the truncation process allows flexibility in management of
guest_memfd memory and customization of the inode freeing process
allows proper cleanup of memory metadata stored on the inode.
Memory metadata is more appropriately stored on the inode (as opposed
to the file), since the metadata is for the memory and is not unique
to a specific binding and struct kvm.
Co-developed-by: Fuad Tabba <tabba@google.com>
Change-Id: I64925f069637323023fbff91fc8521f92b8561bd
Signed-off-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Shivank Garg <shivankg@amd.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
include/uapi/linux/magic.h | 1 +
virt/kvm/guest_memfd.c | 128 ++++++++++++++++++++++++++++++-------
virt/kvm/kvm_main.c | 7 +-
virt/kvm/kvm_mm.h | 9 +--
4 files changed, 118 insertions(+), 27 deletions(-)
diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index bb575f3ab45e5..638ca21b7a909 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -103,5 +103,6 @@
#define DEVMEM_MAGIC 0x454d444d /* "DMEM" */
#define SECRETMEM_MAGIC 0x5345434d /* "SECM" */
#define PID_FS_MAGIC 0x50494446 /* "PIDF" */
+#define GUEST_MEMFD_MAGIC 0x474d454d /* "GMEM" */
#endif /* __LINUX_MAGIC_H__ */
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 08a6bc7d25b60..0e93323fc8392 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1,12 +1,16 @@
// SPDX-License-Identifier: GPL-2.0
+#include <linux/anon_inodes.h>
#include <linux/backing-dev.h>
#include <linux/falloc.h>
+#include <linux/fs.h>
#include <linux/kvm_host.h>
+#include <linux/pseudo_fs.h>
#include <linux/pagemap.h>
-#include <linux/anon_inodes.h>
#include "kvm_mm.h"
+static struct vfsmount *kvm_gmem_mnt;
+
struct kvm_gmem {
struct kvm *kvm;
struct xarray bindings;
@@ -385,9 +389,45 @@ static struct file_operations kvm_gmem_fops = {
.fallocate = kvm_gmem_fallocate,
};
-void kvm_gmem_init(struct module *module)
+static int kvm_gmem_init_fs_context(struct fs_context *fc)
+{
+ if (!init_pseudo(fc, GUEST_MEMFD_MAGIC))
+ return -ENOMEM;
+
+ fc->s_iflags |= SB_I_NOEXEC;
+ fc->s_iflags |= SB_I_NODEV;
+
+ return 0;
+}
+
+static struct file_system_type kvm_gmem_fs = {
+ .name = "guest_memfd",
+ .init_fs_context = kvm_gmem_init_fs_context,
+ .kill_sb = kill_anon_super,
+};
+
+static int kvm_gmem_init_mount(void)
+{
+ kvm_gmem_mnt = kern_mount(&kvm_gmem_fs);
+
+ if (IS_ERR(kvm_gmem_mnt))
+ return PTR_ERR(kvm_gmem_mnt);
+
+ kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC;
+ return 0;
+}
+
+int kvm_gmem_init(struct module *module)
{
kvm_gmem_fops.owner = module;
+
+ return kvm_gmem_init_mount();
+}
+
+void kvm_gmem_exit(void)
+{
+ kern_unmount(kvm_gmem_mnt);
+ kvm_gmem_mnt = NULL;
}
static int kvm_gmem_migrate_folio(struct address_space *mapping,
@@ -463,11 +503,71 @@ bool __weak kvm_arch_supports_gmem_mmap(struct kvm *kvm)
return true;
}
+static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
+ loff_t size, u64 flags)
+{
+ struct inode *inode;
+
+ inode = anon_inode_make_secure_inode(kvm_gmem_mnt->mnt_sb, name, NULL);
+ if (IS_ERR(inode))
+ return inode;
+
+ inode->i_private = (void *)(unsigned long)flags;
+ inode->i_op = &kvm_gmem_iops;
+ inode->i_mapping->a_ops = &kvm_gmem_aops;
+ inode->i_mode |= S_IFREG;
+ inode->i_size = size;
+ mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+ mapping_set_inaccessible(inode->i_mapping);
+ /* Unmovable mappings are supposed to be marked unevictable as well. */
+ WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
+
+ return inode;
+}
+
+static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
+ u64 flags)
+{
+ static const char *name = "[kvm-gmem]";
+ struct inode *inode;
+ struct file *file;
+ int err;
+
+ err = -ENOENT;
+ if (!try_module_get(kvm_gmem_fops.owner))
+ goto err;
+
+ inode = kvm_gmem_inode_make_secure_inode(name, size, flags);
+ if (IS_ERR(inode)) {
+ err = PTR_ERR(inode);
+ goto err_put_module;
+ }
+
+ file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR,
+ &kvm_gmem_fops);
+ if (IS_ERR(file)) {
+ err = PTR_ERR(file);
+ goto err_put_inode;
+ }
+
+ file->f_flags |= O_LARGEFILE;
+ file->private_data = priv;
+
+out:
+ return file;
+
+err_put_inode:
+ iput(inode);
+err_put_module:
+ module_put(kvm_gmem_fops.owner);
+err:
+ file = ERR_PTR(err);
+ goto out;
+}
+
static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
{
- const char *anon_name = "[kvm-gmem]";
struct kvm_gmem *gmem;
- struct inode *inode;
struct file *file;
int fd, err;
@@ -481,32 +581,16 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
goto err_fd;
}
- file = anon_inode_create_getfile(anon_name, &kvm_gmem_fops, gmem,
- O_RDWR, NULL);
+ file = kvm_gmem_inode_create_getfile(gmem, size, flags);
if (IS_ERR(file)) {
err = PTR_ERR(file);
goto err_gmem;
}
- file->f_flags |= O_LARGEFILE;
-
- inode = file->f_inode;
- WARN_ON(file->f_mapping != inode->i_mapping);
-
- inode->i_private = (void *)(unsigned long)flags;
- inode->i_op = &kvm_gmem_iops;
- inode->i_mapping->a_ops = &kvm_gmem_aops;
- inode->i_mode |= S_IFREG;
- inode->i_size = size;
- mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
- mapping_set_inaccessible(inode->i_mapping);
- /* Unmovable mappings are supposed to be marked unevictable as well. */
- WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
-
kvm_get_kvm(kvm);
gmem->kvm = kvm;
xa_init(&gmem->bindings);
- list_add(&gmem->entry, &inode->i_mapping->i_private_list);
+ list_add(&gmem->entry, &file_inode(file)->i_mapping->i_private_list);
fd_install(fd, file);
return fd;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 18f29ef935437..301d48d6e00d0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -6489,7 +6489,9 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
if (WARN_ON_ONCE(r))
goto err_vfio;
- kvm_gmem_init(module);
+ r = kvm_gmem_init(module);
+ if (r)
+ goto err_gmem;
r = kvm_init_virtualization();
if (r)
@@ -6510,6 +6512,8 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
err_register:
kvm_uninit_virtualization();
err_virt:
+ kvm_gmem_exit();
+err_gmem:
kvm_vfio_ops_exit();
err_vfio:
kvm_async_pf_deinit();
@@ -6541,6 +6545,7 @@ void kvm_exit(void)
for_each_possible_cpu(cpu)
free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
kmem_cache_destroy(kvm_vcpu_cache);
+ kvm_gmem_exit();
kvm_vfio_ops_exit();
kvm_async_pf_deinit();
kvm_irqfd_exit();
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index 31defb08ccbab..9fcc5d5b7f8d0 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -68,17 +68,18 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
#endif /* HAVE_KVM_PFNCACHE */
#ifdef CONFIG_KVM_GUEST_MEMFD
-void kvm_gmem_init(struct module *module);
+int kvm_gmem_init(struct module *module);
+void kvm_gmem_exit(void);
int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
unsigned int fd, loff_t offset);
void kvm_gmem_unbind(struct kvm_memory_slot *slot);
#else
-static inline void kvm_gmem_init(struct module *module)
+static inline int kvm_gmem_init(struct module *module)
{
-
+ return 0;
}
-
+static inline void kvm_gmem_exit(void) {};
static inline int kvm_gmem_bind(struct kvm *kvm,
struct kvm_memory_slot *slot,
unsigned int fd, loff_t offset)
--
2.50.1.703.g449372360f-goog
^ permalink raw reply related [flat|nested] 24+ messages in thread
* Re: [PATCH V9 1/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes
2025-08-07 21:34 ` Ackerley Tng
2025-08-07 22:14 ` Ackerley Tng
@ 2025-08-11 8:02 ` Garg, Shivank
1 sibling, 0 replies; 24+ messages in thread
From: Garg, Shivank @ 2025-08-11 8:02 UTC (permalink / raw)
To: Ackerley Tng, David Hildenbrand, seanjc, vbabka, willy, akpm,
shuah, pbonzini, brauner, viro
Cc: paul, jmorris, serge, pvorel, bfoster, tabba, vannapurve,
chao.gao, bharata, nikunj, michael.day, shdhiman, yan.y.zhao,
Neeraj.Upadhyay, thomas.lendacky, michael.roth, aik, jgg,
kalyazin, peterx, jack, rppt, hch, cgzones, ira.weiny, rientjes,
roypat, ziy, matthew.brost, joshua.hahnjy, rakie.kim, byungchul,
gourry, kent.overstreet, ying.huang, apopple, chao.p.peng, amit,
ddutile, dan.j.williams, ashish.kalra, gshan, jgowans,
pankaj.gupta, papaluri, yuzhao, suzuki.poulose, quic_eberman,
aneeshkumar.kizhakeveetil, linux-fsdevel, linux-mm, linux-kernel,
linux-security-module, kvm, linux-kselftest, linux-coco
On 8/8/2025 3:04 AM, Ackerley Tng wrote:
> David Hildenbrand <david@redhat.com> writes:
>
>> On 13.07.25 19:43, Shivank Garg wrote:
>>> From: Ackerley Tng <ackerleytng@google.com>
>>>
>>> + ctx->ops = &kvm_gmem_super_operations;
>>
>> Curious, why is that required? (secretmem doesn't have it, so I wonder)
>>
>
> Good point! pseudo_fs_fill_super() fills in a struct super_operations
> which already does simple_statfs, so guest_memfd doesn't need this.
>
Right, simple_statfs isn't strictly needed in this patch, but the
super_operations is required for the subsequent patches in
the series, which add custom alloc_inode, destroy_inode, and free_inode
callbacks.
>>> + if (!try_module_get(kvm_gmem_fops.owner))
>>> + goto err;
>>
>> Curious, shouldn't there be a module_put() somewhere after this function
>> returned a file?
>>
>
> This was interesting indeed, but IIUC this is correct.
>
> I think this flow was basically copied from __anon_inode_getfile(),
> which does this try_module_get().
>
> The corresponding module_put() is in __fput(), which calls fops_put()
> and calls module_put() on the owner.
>
>>> +
>>>
>>
>> Nothing else jumped at me.
>>
>
> Thanks for the review!
>
> Since we're going to submit this patch through Shivank's mempolicy
> support series, I'll follow up soon by sending a replacement patch in
> reply to this series so Shivank could build on top of that?
>
Yes, I'll post the V10 soon.
Thanks,
Shivank
end of thread, other threads:[~2025-08-11 8:02 UTC | newest]
Thread overview: 24+ messages
2025-07-13 17:43 [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd Shivank Garg
2025-07-13 17:43 ` [PATCH V9 1/7] KVM: guest_memfd: Use guest mem inodes instead of anonymous inodes Shivank Garg
2025-07-22 15:18 ` David Hildenbrand
2025-08-07 21:34 ` Ackerley Tng
2025-08-07 22:14 ` Ackerley Tng
2025-08-11 8:02 ` Garg, Shivank
2025-07-13 17:43 ` [PATCH V9 2/7] mm/filemap: Add NUMA mempolicy support to filemap_alloc_folio() Shivank Garg
2025-07-22 15:20 ` David Hildenbrand
2025-07-13 17:43 ` [PATCH V9 3/7] mm/filemap: Extend __filemap_get_folio() to support NUMA memory policies Shivank Garg
2025-07-22 15:21 ` David Hildenbrand
2025-07-13 17:43 ` [PATCH V9 4/7] mm/mempolicy: Export memory policy symbols Shivank Garg
2025-07-13 17:43 ` [PATCH V9 5/7] KVM: guest_memfd: Add slab-allocated inode cache Shivank Garg
2025-07-21 11:44 ` Vlastimil Babka
2025-07-22 5:03 ` Shivank Garg
2025-07-13 17:43 ` [PATCH V9 6/7] KVM: guest_memfd: Enforce NUMA mempolicy using shared policy Shivank Garg
2025-07-21 13:30 ` Vlastimil Babka
2025-07-22 15:24 ` David Hildenbrand
2025-07-13 17:43 ` [PATCH V9 7/7] KVM: guest_memfd: selftests: Add tests for mmap and NUMA policy support Shivank Garg
2025-07-22 14:40 ` [PATCH V9 0/7] Add NUMA mempolicy support for KVM guest-memfd David Hildenbrand
2025-07-22 14:45 ` Sean Christopherson
2025-07-22 15:51 ` David Hildenbrand
2025-07-22 23:07 ` Sean Christopherson
2025-07-23 8:20 ` David Hildenbrand
2025-07-22 15:49 ` Shivank Garg