* [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs
@ 2025-05-27 18:02 Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 01/16] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM Fuad Tabba
` (15 more replies)
0 siblings, 16 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
Main changes since v9 [1]:
- Dropped best-effort validation that the userspace memory address range
matches the shared memory backed by guest_memfd
- Reworked the handling of faults for shared guest_memfd memory on arm64
- Track in the memslot whether it's backed by guest_memfd with shared
memory support
- Various fixes based on feedback from v9
- Rebased on Linux 6.15
The purpose of this series is to allow mapping guest_memfd backed memory
at the host. This support enables VMMs like Firecracker to run guests
backed completely by guest_memfd [2]. Combined with Patrick's series for
direct map removal in guest_memfd [3], this would allow running VMs that
offer additional hardening against Spectre-like transient execution
attacks.
This series will also serve as a base for _restricted_ mmap() support
for guest_memfd backed memory at the host for CoCo VMs that allow sharing
guest memory in-place with the host [4].
Patches 1 to 7 are mainly about decoupling the concept of guest memory
being private vs guest memory being backed by guest_memfd. They are
mostly refactoring and renaming.
Patches 8 and 9 add support for in-place shared memory, as well as the
ability for the host to map it as long as it is shared. This is gated by
a new configuration option, toggled by a new flag, and advertised to
userspace by a new capability (introduced in patch 15).
Patches 10 to 14 add x86 and arm64 support for in-place shared memory.
Patch 15 introduces the capability that advertises support for in-place
shared memory, and updates the documentation.
Patch 16 adds selftests for the new features.
For details on how to test this patch series, and on how to boot a guest
that uses the new features, please refer to v8 [5].
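As a rough illustration of the intended userspace flow once this series is
applied (a minimal sketch, not part of the series; it assumes the uapi names
introduced in patches 8 and 15, and omits error handling and memslot setup):

#include <stddef.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* Create a guest_memfd that allows shared (host-mappable) memory. */
static void *create_and_map_gmem(int vm_fd, size_t size)
{
        struct kvm_create_guest_memfd gmem = {
                .size  = size,
                .flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
        };
        int gmem_fd;
        void *mem;

        /* Only advertised when the arch/VM type supports shared gmem. */
        if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_GMEM_SHARED_MEM) <= 0)
                return NULL;

        gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
        if (gmem_fd < 0)
                return NULL;

        /*
         * The mapping must be MAP_SHARED; faults are then served from
         * guest_memfd folios rather than from a separate userspace range.
         */
        mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);
        return mem == MAP_FAILED ? NULL : mem;
}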
Cheers,
/fuad
[1] https://lore.kernel.org/all/20250513163438.3942405-1-tabba@google.com/
[2] https://github.com/firecracker-microvm/firecracker/tree/feature/secret-hiding
[3] https://lore.kernel.org/all/20250221160728.1584559-1-roypat@amazon.co.uk/
[4] https://lore.kernel.org/all/20250328153133.3504118-1-tabba@google.com/
[5] https://lore.kernel.org/all/20250430165655.605595-1-tabba@google.com/
Ackerley Tng (2):
KVM: x86/mmu: Handle guest page faults for guest_memfd with shared
memory
KVM: x86: Compute max_mapping_level with input from guest_memfd
Fuad Tabba (14):
KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM
KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to
CONFIG_KVM_GENERIC_GMEM_POPULATE
KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem()
KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem
KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem()
KVM: Fix comments that refer to slots_lock
KVM: Fix comment that refers to kvm uapi header path
KVM: guest_memfd: Allow host to map guest_memfd pages
KVM: guest_memfd: Track shared memory support in memslot
KVM: arm64: Refactor user_mem_abort() calculation of force_pte
KVM: arm64: Handle guest_memfd-backed guest page faults
KVM: arm64: Enable mapping guest_memfd in arm64
KVM: Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM
KVM: selftests: guest_memfd mmap() test when mapping is allowed
Documentation/virt/kvm/api.rst | 9 +
arch/arm64/include/asm/kvm_host.h | 5 +
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/mmu.c | 109 ++++++++++--
arch/x86/include/asm/kvm_host.h | 22 ++-
arch/x86/kvm/Kconfig | 4 +-
arch/x86/kvm/mmu/mmu.c | 135 +++++++++------
arch/x86/kvm/svm/sev.c | 4 +-
arch/x86/kvm/svm/svm.c | 4 +-
arch/x86/kvm/x86.c | 4 +-
include/linux/kvm_host.h | 80 +++++++--
include/uapi/linux/kvm.h | 2 +
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../testing/selftests/kvm/guest_memfd_test.c | 162 +++++++++++++++---
virt/kvm/Kconfig | 15 +-
virt/kvm/Makefile.kvm | 2 +-
virt/kvm/guest_memfd.c | 101 ++++++++++-
virt/kvm/kvm_main.c | 16 +-
virt/kvm/kvm_mm.h | 4 +-
19 files changed, 553 insertions(+), 127 deletions(-)
base-commit: 0ff41df1cb268fc69e703a08a57ee14ae967d0ca
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply [flat|nested] 62+ messages in thread
* [PATCH v10 01/16] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-05-31 19:05 ` Shivank Garg
2025-06-05 8:19 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 02/16] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE Fuad Tabba
` (14 subsequent siblings)
15 siblings, 2 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
The option KVM_PRIVATE_MEM enables guest_memfd in general. Subsequent
patches add shared memory support to guest_memfd. Therefore, rename it
to KVM_GMEM to make its purpose clearer.
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/x86/include/asm/kvm_host.h | 2 +-
include/linux/kvm_host.h | 10 +++++-----
virt/kvm/Kconfig | 8 ++++----
virt/kvm/Makefile.kvm | 2 +-
virt/kvm/kvm_main.c | 4 ++--
virt/kvm/kvm_mm.h | 4 ++--
6 files changed, 15 insertions(+), 15 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7bc174a1f1cb..52f6f6d08558 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2253,7 +2253,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
int tdp_max_root_level, int tdp_huge_page_level);
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
#else
#define kvm_arch_has_private_mem(kvm) false
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 291d49b9bf05..d6900995725d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -601,7 +601,7 @@ struct kvm_memory_slot {
short id;
u16 as_id;
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
struct {
/*
* Writes protected by kvm->slots_lock. Acquiring a
@@ -722,7 +722,7 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
* Arch code must define kvm_arch_has_private_mem if support for private memory
* is enabled.
*/
-#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_PRIVATE_MEM)
+#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
{
return false;
@@ -2504,7 +2504,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
{
- return IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) &&
+ return IS_ENABLED(CONFIG_KVM_GMEM) &&
kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
}
#else
@@ -2514,7 +2514,7 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
}
#endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
int *max_order);
@@ -2527,7 +2527,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
KVM_BUG_ON(1, kvm);
return -EIO;
}
-#endif /* CONFIG_KVM_PRIVATE_MEM */
+#endif /* CONFIG_KVM_GMEM */
#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 727b542074e7..49df4e32bff7 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -112,19 +112,19 @@ config KVM_GENERIC_MEMORY_ATTRIBUTES
depends on KVM_GENERIC_MMU_NOTIFIER
bool
-config KVM_PRIVATE_MEM
+config KVM_GMEM
select XARRAY_MULTI
bool
config KVM_GENERIC_PRIVATE_MEM
select KVM_GENERIC_MEMORY_ATTRIBUTES
- select KVM_PRIVATE_MEM
+ select KVM_GMEM
bool
config HAVE_KVM_ARCH_GMEM_PREPARE
bool
- depends on KVM_PRIVATE_MEM
+ depends on KVM_GMEM
config HAVE_KVM_ARCH_GMEM_INVALIDATE
bool
- depends on KVM_PRIVATE_MEM
+ depends on KVM_GMEM
diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index 724c89af78af..8d00918d4c8b 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -12,4 +12,4 @@ kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
-kvm-$(CONFIG_KVM_PRIVATE_MEM) += $(KVM)/guest_memfd.o
+kvm-$(CONFIG_KVM_GMEM) += $(KVM)/guest_memfd.o
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e85b33a92624..4996cac41a8f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4842,7 +4842,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
case KVM_CAP_MEMORY_ATTRIBUTES:
return kvm_supported_mem_attributes(kvm);
#endif
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
case KVM_CAP_GUEST_MEMFD:
return !kvm || kvm_arch_has_private_mem(kvm);
#endif
@@ -5276,7 +5276,7 @@ static long kvm_vm_ioctl(struct file *filp,
case KVM_GET_STATS_FD:
r = kvm_vm_ioctl_get_stats_fd(kvm);
break;
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
case KVM_CREATE_GUEST_MEMFD: {
struct kvm_create_guest_memfd guest_memfd;
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index acef3f5c582a..ec311c0d6718 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -67,7 +67,7 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
}
#endif /* HAVE_KVM_PFNCACHE */
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
void kvm_gmem_init(struct module *module);
int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
@@ -91,6 +91,6 @@ static inline void kvm_gmem_unbind(struct kvm_memory_slot *slot)
{
WARN_ON_ONCE(1);
}
-#endif /* CONFIG_KVM_PRIVATE_MEM */
+#endif /* CONFIG_KVM_GMEM */
#endif /* __KVM_MM_H__ */
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v10 02/16] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 01/16] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-05-31 19:07 ` Shivank Garg
2025-06-05 8:19 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 03/16] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem() Fuad Tabba
` (13 subsequent siblings)
15 siblings, 2 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
The option KVM_GENERIC_PRIVATE_MEM enables populating a GPA range with
guest data. Rename it to KVM_GENERIC_GMEM_POPULATE to make its purpose
clearer.
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/x86/kvm/Kconfig | 4 ++--
include/linux/kvm_host.h | 2 +-
virt/kvm/Kconfig | 2 +-
virt/kvm/guest_memfd.c | 2 +-
4 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fe8ea8c097de..b37258253543 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -46,7 +46,7 @@ config KVM_X86
select HAVE_KVM_PM_NOTIFIER if PM
select KVM_GENERIC_HARDWARE_ENABLING
select KVM_GENERIC_PRE_FAULT_MEMORY
- select KVM_GENERIC_PRIVATE_MEM if KVM_SW_PROTECTED_VM
+ select KVM_GENERIC_GMEM_POPULATE if KVM_SW_PROTECTED_VM
select KVM_WERROR if WERROR
config KVM
@@ -145,7 +145,7 @@ config KVM_AMD_SEV
depends on KVM_AMD && X86_64
depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
select ARCH_HAS_CC_PLATFORM
- select KVM_GENERIC_PRIVATE_MEM
+ select KVM_GENERIC_GMEM_POPULATE
select HAVE_KVM_ARCH_GMEM_PREPARE
select HAVE_KVM_ARCH_GMEM_INVALIDATE
help
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d6900995725d..7ca23837fa52 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2533,7 +2533,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
#endif
-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
/**
* kvm_gmem_populate() - Populate/prepare a GPA range with guest data
*
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 49df4e32bff7..559c93ad90be 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -116,7 +116,7 @@ config KVM_GMEM
select XARRAY_MULTI
bool
-config KVM_GENERIC_PRIVATE_MEM
+config KVM_GENERIC_GMEM_POPULATE
select KVM_GENERIC_MEMORY_ATTRIBUTES
select KVM_GMEM
bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b2aa6bf24d3a..befea51bbc75 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -638,7 +638,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
}
EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
kvm_gmem_populate_cb post_populate, void *opaque)
{
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v10 03/16] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem()
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 01/16] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 02/16] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-05-31 19:12 ` Shivank Garg
2025-06-05 8:20 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 04/16] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem Fuad Tabba
` (12 subsequent siblings)
15 siblings, 2 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
The function kvm_arch_has_private_mem() is used to indicate whether
guest_memfd is supported by the architecture, which until now has implied
that the memory is private. To decouple guest_memfd support from whether
the memory is private, rename this function to kvm_arch_supports_gmem().
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/x86/include/asm/kvm_host.h | 8 ++++----
arch/x86/kvm/mmu/mmu.c | 8 ++++----
include/linux/kvm_host.h | 6 +++---
virt/kvm/kvm_main.c | 6 +++---
4 files changed, 14 insertions(+), 14 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 52f6f6d08558..4a83fbae7056 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2254,9 +2254,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
#ifdef CONFIG_KVM_GMEM
-#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
+#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
#else
-#define kvm_arch_has_private_mem(kvm) false
+#define kvm_arch_supports_gmem(kvm) false
#endif
#define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
@@ -2309,8 +2309,8 @@ enum {
#define HF_SMM_INSIDE_NMI_MASK (1 << 2)
# define KVM_MAX_NR_ADDRESS_SPACES 2
-/* SMM is currently unsupported for guests with private memory. */
-# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_has_private_mem(kvm) ? 1 : 2)
+/* SMM is currently unsupported for guests with guest_memfd (esp private) memory. */
+# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_supports_gmem(kvm) ? 1 : 2)
# define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
# define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
#else
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8d1b632e33d2..b66f1bf24e06 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4917,7 +4917,7 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
if (r)
return r;
- if (kvm_arch_has_private_mem(vcpu->kvm) &&
+ if (kvm_arch_supports_gmem(vcpu->kvm) &&
kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa)))
error_code |= PFERR_PRIVATE_ACCESS;
@@ -7705,7 +7705,7 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
* Zapping SPTEs in this case ensures KVM will reassess whether or not
* a hugepage can be used for affected ranges.
*/
- if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+ if (WARN_ON_ONCE(!kvm_arch_supports_gmem(kvm)))
return false;
if (WARN_ON_ONCE(range->end <= range->start))
@@ -7784,7 +7784,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
* a range that has PRIVATE GFNs, and conversely converting a range to
* SHARED may now allow hugepages.
*/
- if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+ if (WARN_ON_ONCE(!kvm_arch_supports_gmem(kvm)))
return false;
/*
@@ -7840,7 +7840,7 @@ void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
{
int level;
- if (!kvm_arch_has_private_mem(kvm))
+ if (!kvm_arch_supports_gmem(kvm))
return;
for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7ca23837fa52..6ca7279520cf 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -719,11 +719,11 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
#endif
/*
- * Arch code must define kvm_arch_has_private_mem if support for private memory
+ * Arch code must define kvm_arch_supports_gmem if support for guest_memfd
* is enabled.
*/
-#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
-static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
+#if !defined(kvm_arch_supports_gmem) && !IS_ENABLED(CONFIG_KVM_GMEM)
+static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
{
return false;
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4996cac41a8f..2468d50a9ed4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1531,7 +1531,7 @@ static int check_memory_region_flags(struct kvm *kvm,
{
u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
- if (kvm_arch_has_private_mem(kvm))
+ if (kvm_arch_supports_gmem(kvm))
valid_flags |= KVM_MEM_GUEST_MEMFD;
/* Dirty logging private memory is not currently supported. */
@@ -2362,7 +2362,7 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm,
#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
static u64 kvm_supported_mem_attributes(struct kvm *kvm)
{
- if (!kvm || kvm_arch_has_private_mem(kvm))
+ if (!kvm || kvm_arch_supports_gmem(kvm))
return KVM_MEMORY_ATTRIBUTE_PRIVATE;
return 0;
@@ -4844,7 +4844,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
#endif
#ifdef CONFIG_KVM_GMEM
case KVM_CAP_GUEST_MEMFD:
- return !kvm || kvm_arch_has_private_mem(kvm);
+ return !kvm || kvm_arch_supports_gmem(kvm);
#endif
default:
break;
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v10 04/16] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (2 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 03/16] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem() Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-05-31 19:13 ` Shivank Garg
2025-06-05 8:21 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 05/16] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem() Fuad Tabba
` (11 subsequent siblings)
15 siblings, 2 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
The bool has_private_mem is used to indicate whether guest_memfd is
supported. Rename it to supports_gmem to make its meaning clearer and to
decouple memory being private from guest_memfd.
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/x86/include/asm/kvm_host.h | 4 ++--
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/svm/svm.c | 4 ++--
arch/x86/kvm/x86.c | 3 +--
4 files changed, 6 insertions(+), 7 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4a83fbae7056..709cc2a7ba66 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1331,7 +1331,7 @@ struct kvm_arch {
unsigned int indirect_shadow_pages;
u8 mmu_valid_gen;
u8 vm_type;
- bool has_private_mem;
+ bool supports_gmem;
bool has_protected_state;
bool pre_fault_allowed;
struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
@@ -2254,7 +2254,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
#ifdef CONFIG_KVM_GMEM
-#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
+#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
#else
#define kvm_arch_supports_gmem(kvm) false
#endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b66f1bf24e06..69bf2ef22ed0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3486,7 +3486,7 @@ static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault
* on RET_PF_SPURIOUS until the update completes, or an actual spurious
* case might go down the slow path. Either case will resolve itself.
*/
- if (kvm->arch.has_private_mem &&
+ if (kvm->arch.supports_gmem &&
fault->is_private != kvm_mem_is_private(kvm, fault->gfn))
return false;
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a89c271a1951..a05b7dc7b717 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5110,8 +5110,8 @@ static int svm_vm_init(struct kvm *kvm)
(type == KVM_X86_SEV_ES_VM || type == KVM_X86_SNP_VM);
to_kvm_sev_info(kvm)->need_init = true;
- kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
- kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
+ kvm->arch.supports_gmem = (type == KVM_X86_SNP_VM);
+ kvm->arch.pre_fault_allowed = !kvm->arch.supports_gmem;
}
if (!pause_filter_count || !pause_filter_thresh)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index be7bb6d20129..035ced06b2dd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12718,8 +12718,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
return -EINVAL;
kvm->arch.vm_type = type;
- kvm->arch.has_private_mem =
- (type == KVM_X86_SW_PROTECTED_VM);
+ kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
/* Decided by the vendor code for other VM types. */
kvm->arch.pre_fault_allowed =
type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v10 05/16] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem()
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (3 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 04/16] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-05-31 19:13 ` Shivank Garg
2025-06-05 8:22 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 06/16] KVM: Fix comments that refer to slots_lock Fuad Tabba
` (10 subsequent siblings)
15 siblings, 2 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
The function kvm_slot_can_be_private() is used to check whether a memory
slot is backed by guest_memfd. Rename it to kvm_slot_has_gmem() to make
that clearer and to decouple memory being private from guest_memfd.
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/x86/kvm/mmu/mmu.c | 4 ++--
arch/x86/kvm/svm/sev.c | 4 ++--
include/linux/kvm_host.h | 2 +-
virt/kvm/guest_memfd.c | 2 +-
4 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 69bf2ef22ed0..2b6376986f96 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3283,7 +3283,7 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
int kvm_mmu_max_mapping_level(struct kvm *kvm,
const struct kvm_memory_slot *slot, gfn_t gfn)
{
- bool is_private = kvm_slot_can_be_private(slot) &&
+ bool is_private = kvm_slot_has_gmem(slot) &&
kvm_mem_is_private(kvm, gfn);
return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
@@ -4496,7 +4496,7 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
{
int max_order, r;
- if (!kvm_slot_can_be_private(fault->slot)) {
+ if (!kvm_slot_has_gmem(fault->slot)) {
kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
return -EFAULT;
}
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index a7a7dc507336..27759ca6d2f2 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2378,7 +2378,7 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
mutex_lock(&kvm->slots_lock);
memslot = gfn_to_memslot(kvm, params.gfn_start);
- if (!kvm_slot_can_be_private(memslot)) {
+ if (!kvm_slot_has_gmem(memslot)) {
ret = -EINVAL;
goto out;
}
@@ -4688,7 +4688,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code)
}
slot = gfn_to_memslot(kvm, gfn);
- if (!kvm_slot_can_be_private(slot)) {
+ if (!kvm_slot_has_gmem(slot)) {
pr_warn_ratelimited("SEV: Unexpected RMP fault, non-private slot for GPA 0x%llx\n",
gpa);
return;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6ca7279520cf..d9616ee6acc7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -614,7 +614,7 @@ struct kvm_memory_slot {
#endif
};
-static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot)
+static inline bool kvm_slot_has_gmem(const struct kvm_memory_slot *slot)
{
return slot && (slot->flags & KVM_MEM_GUEST_MEMFD);
}
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index befea51bbc75..6db515833f61 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -654,7 +654,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
return -EINVAL;
slot = gfn_to_memslot(kvm, start_gfn);
- if (!kvm_slot_can_be_private(slot))
+ if (!kvm_slot_has_gmem(slot))
return -EINVAL;
file = kvm_gmem_get_file(slot);
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v10 06/16] KVM: Fix comments that refer to slots_lock
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (4 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 05/16] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem() Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-05-31 19:14 ` Shivank Garg
2025-06-05 8:22 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path Fuad Tabba
` (9 subsequent siblings)
15 siblings, 2 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
Fix comments so that they refer to slots_lock instead of slots_locks
(remove trailing s).
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
include/linux/kvm_host.h | 2 +-
virt/kvm/kvm_main.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d9616ee6acc7..ae70e4e19700 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -859,7 +859,7 @@ struct kvm {
struct notifier_block pm_notifier;
#endif
#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
- /* Protected by slots_locks (for writes) and RCU (for reads) */
+ /* Protected by slots_lock (for writes) and RCU (for reads) */
struct xarray mem_attr_array;
#endif
char stats_id[KVM_STATS_NAME_SIZE];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2468d50a9ed4..6289ea1685dd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -333,7 +333,7 @@ void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
* All current use cases for flushing the TLBs for a specific memslot
* are related to dirty logging, and many do the TLB flush out of
* mmu_lock. The interaction between the various operations on memslot
- * must be serialized by slots_locks to ensure the TLB flush from one
+ * must be serialized by slots_lock to ensure the TLB flush from one
* operation is observed by any other operation on the same memslot.
*/
lockdep_assert_held(&kvm->slots_lock);
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (5 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 06/16] KVM: Fix comments that refer to slots_lock Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-05-31 19:19 ` Shivank Garg
` (3 more replies)
2025-05-27 18:02 ` [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages Fuad Tabba
` (8 subsequent siblings)
15 siblings, 4 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
The comment that describes where the user-visible memslot flags are
defined refers to an outdated path and has a typo. Make it refer to the
correct path.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
include/linux/kvm_host.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ae70e4e19700..80371475818f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -52,7 +52,7 @@
/*
* The bit 16 ~ bit 31 of kvm_userspace_memory_region::flags are internally
* used in kvm, other bits are visible for userspace which are defined in
- * include/linux/kvm_h.
+ * include/uapi/linux/kvm.h.
*/
#define KVM_MEMSLOT_INVALID (1UL << 16)
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (6 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-05-28 23:17 ` kernel test robot
` (6 more replies)
2025-05-27 18:02 ` [PATCH v10 09/16] KVM: guest_memfd: Track shared memory support in memslot Fuad Tabba
` (7 subsequent siblings)
15 siblings, 7 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
This patch enables support for shared memory in guest_memfd, including
mapping that memory from host userspace. This support is gated by the
configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
guest_memfd instance.
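For clarity, the mmap() behaviour introduced here can be summarized as
follows (an illustrative sketch based on the error paths in this patch,
assuming a gmem_fd returned by KVM_CREATE_GUEST_MEMFD):

/*
 * - guest_memfd created without GUEST_MEMFD_FLAG_SUPPORT_SHARED:
 *       mmap() fails with ENODEV.
 * - guest_memfd created with the flag but mapped without MAP_SHARED:
 *       mmap() fails with EINVAL (only shared mappings are supported).
 * - guest_memfd created with the flag and mapped with MAP_SHARED:
 *       mmap() succeeds, and faults are served directly from guest_memfd
 *       folios by kvm_gmem_fault_shared().
 */
void *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);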
Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/x86/include/asm/kvm_host.h | 10 ++++
arch/x86/kvm/x86.c | 3 +-
include/linux/kvm_host.h | 13 ++++++
include/uapi/linux/kvm.h | 1 +
virt/kvm/Kconfig | 5 ++
virt/kvm/guest_memfd.c | 81 +++++++++++++++++++++++++++++++++
6 files changed, 112 insertions(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 709cc2a7ba66..ce9ad4cd93c5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
#ifdef CONFIG_KVM_GMEM
#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
+
+/*
+ * CoCo VMs with hardware support that use guest_memfd only for backing private
+ * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
+ */
+#define kvm_arch_supports_gmem_shared_mem(kvm) \
+ (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \
+ ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \
+ (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
#else
#define kvm_arch_supports_gmem(kvm) false
+#define kvm_arch_supports_gmem_shared_mem(kvm) false
#endif
#define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 035ced06b2dd..2a02f2457c42 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
return -EINVAL;
kvm->arch.vm_type = type;
- kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
+ kvm->arch.supports_gmem =
+ type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
/* Decided by the vendor code for other VM types. */
kvm->arch.pre_fault_allowed =
type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 80371475818f..ba83547e62b0 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
}
#endif
+/*
+ * Returns true if this VM supports shared mem in guest_memfd.
+ *
+ * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
+ * guest_memfd is enabled.
+ */
+#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
+static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
+{
+ return false;
+}
+#endif
+
#ifndef kvm_arch_has_readonly_mem
static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
{
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index b6ae8ad8934b..c2714c9d1a0e 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
#define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
#define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
+#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0)
struct kvm_create_guest_memfd {
__u64 size;
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 559c93ad90be..df225298ab10 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
config HAVE_KVM_ARCH_GMEM_INVALIDATE
bool
depends on KVM_GMEM
+
+config KVM_GMEM_SHARED_MEM
+ select KVM_GMEM
+ bool
+ prompt "Enable support for non-private (shared) memory in guest_memfd"
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 6db515833f61..5d34712f64fc 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
return gfn - slot->base_gfn + slot->gmem.pgoff;
}
+static bool kvm_gmem_supports_shared(struct inode *inode)
+{
+ u64 flags;
+
+ if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
+ return false;
+
+ flags = (u64)inode->i_private;
+
+ return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+}
+
+
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
+{
+ struct inode *inode = file_inode(vmf->vma->vm_file);
+ struct folio *folio;
+ vm_fault_t ret = VM_FAULT_LOCKED;
+
+ folio = kvm_gmem_get_folio(inode, vmf->pgoff);
+ if (IS_ERR(folio)) {
+ int err = PTR_ERR(folio);
+
+ if (err == -EAGAIN)
+ return VM_FAULT_RETRY;
+
+ return vmf_error(err);
+ }
+
+ if (WARN_ON_ONCE(folio_test_large(folio))) {
+ ret = VM_FAULT_SIGBUS;
+ goto out_folio;
+ }
+
+ if (!folio_test_uptodate(folio)) {
+ clear_highpage(folio_page(folio, 0));
+ kvm_gmem_mark_prepared(folio);
+ }
+
+ vmf->page = folio_file_page(folio, vmf->pgoff);
+
+out_folio:
+ if (ret != VM_FAULT_LOCKED) {
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+
+ return ret;
+}
+
+static const struct vm_operations_struct kvm_gmem_vm_ops = {
+ .fault = kvm_gmem_fault_shared,
+};
+
+static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ if (!kvm_gmem_supports_shared(file_inode(file)))
+ return -ENODEV;
+
+ if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
+ (VM_SHARED | VM_MAYSHARE)) {
+ return -EINVAL;
+ }
+
+ vma->vm_ops = &kvm_gmem_vm_ops;
+
+ return 0;
+}
+#else
+#define kvm_gmem_mmap NULL
+#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
+
static struct file_operations kvm_gmem_fops = {
+ .mmap = kvm_gmem_mmap,
.open = generic_file_open,
.release = kvm_gmem_release,
.fallocate = kvm_gmem_fallocate,
@@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
u64 flags = args->flags;
u64 valid_flags = 0;
+ if (kvm_arch_supports_gmem_shared_mem(kvm))
+ valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+
if (flags & ~valid_flags)
return -EINVAL;
@@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
offset + size > i_size_read(inode))
goto err;
+ if (kvm_gmem_supports_shared(inode) &&
+ !kvm_arch_supports_gmem_shared_mem(kvm))
+ goto err;
+
filemap_invalidate_lock(inode->i_mapping);
start = offset >> PAGE_SHIFT;
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v10 09/16] KVM: guest_memfd: Track shared memory support in memslot
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (7 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-06-04 12:25 ` David Hildenbrand
2025-05-27 18:02 ` [PATCH v10 10/16] KVM: x86/mmu: Handle guest page faults for guest_memfd with shared memory Fuad Tabba
` (6 subsequent siblings)
15 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
Track whether a guest_memfd-backed memslot supports shared memory within
the memslot itself, using the flags field. The top half of memslot flags
is reserved for internal use in KVM. Add a flag there to track shared
memory support.
This saves the caller from having to check the guest_memfd-backed file
for this support, a potentially more expensive operation due to the need
to get/put the file.
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
include/linux/kvm_host.h | 11 ++++++++++-
virt/kvm/guest_memfd.c | 8 ++++++--
2 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ba83547e62b0..edb3795a64b9 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -54,7 +54,8 @@
* used in kvm, other bits are visible for userspace which are defined in
* include/uapi/linux/kvm.h.
*/
-#define KVM_MEMSLOT_INVALID (1UL << 16)
+#define KVM_MEMSLOT_INVALID (1UL << 16)
+#define KVM_MEMSLOT_SUPPORTS_SHARED (1UL << 17)
/*
* Bit 63 of the memslot generation number is an "update in-progress flag",
@@ -2502,6 +2503,14 @@ static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
}
+static inline bool kvm_gmem_memslot_supports_shared(const struct kvm_memory_slot *slot)
+{
+ if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
+ return false;
+
+ return slot->flags & KVM_MEMSLOT_SUPPORTS_SHARED;
+}
+
#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn)
{
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 5d34712f64fc..9ded8d5139ee 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -555,6 +555,7 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
loff_t size = slot->npages << PAGE_SHIFT;
unsigned long start, end;
struct kvm_gmem *gmem;
+ bool supports_shared;
struct inode *inode;
struct file *file;
int r = -EINVAL;
@@ -578,8 +579,9 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
offset + size > i_size_read(inode))
goto err;
- if (kvm_gmem_supports_shared(inode) &&
- !kvm_arch_supports_gmem_shared_mem(kvm))
+ supports_shared = kvm_gmem_supports_shared(inode);
+
+ if (supports_shared && !kvm_arch_supports_gmem_shared_mem(kvm))
goto err;
filemap_invalidate_lock(inode->i_mapping);
@@ -600,6 +602,8 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
*/
WRITE_ONCE(slot->gmem.file, file);
slot->gmem.pgoff = start;
+ if (supports_shared)
+ slot->flags |= KVM_MEMSLOT_SUPPORTS_SHARED;
xa_store_range(&gmem->bindings, start, end - 1, slot, GFP_KERNEL);
filemap_invalidate_unlock(inode->i_mapping);
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v10 10/16] KVM: x86/mmu: Handle guest page faults for guest_memfd with shared memory
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (8 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 09/16] KVM: guest_memfd: Track shared memory support in memslot Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 11/16] KVM: x86: Compute max_mapping_level with input from guest_memfd Fuad Tabba
` (5 subsequent siblings)
15 siblings, 0 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
From: Ackerley Tng <ackerleytng@google.com>
For memslots backed by guest_memfd with shared memory support, the KVM MMU
always faults in pages from guest_memfd, and not from the userspace_addr.
Function names have also been updated for accuracy:
kvm_mem_is_private() returns true only when the current private/shared
state (in the CoCo sense) of the memory is private, and returns false if
the current state is shared, whether explicitly or implicitly, e.g., when
the memory belongs to a non-CoCo VM.
kvm_mmu_faultin_pfn_gmem() is updated to indicate that it can be used
to fault in not just private memory, but more generally, from
guest_memfd.
Co-developed-by: Fuad Tabba <tabba@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Co-developed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
---
arch/x86/kvm/mmu/mmu.c | 38 +++++++++++++++++++++++---------------
include/linux/kvm_host.h | 25 +++++++++++++++++++++++--
2 files changed, 46 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2b6376986f96..5b7df2905aa9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3289,6 +3289,11 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
}
+static inline bool fault_from_gmem(struct kvm_page_fault *fault)
+{
+ return fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot);
+}
+
void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
{
struct kvm_memory_slot *slot = fault->slot;
@@ -4465,21 +4470,25 @@ static inline u8 kvm_max_level_for_order(int order)
return PG_LEVEL_4K;
}
-static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
- u8 max_level, int gmem_order)
+static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm,
+ struct kvm_page_fault *fault,
+ int order)
{
- u8 req_max_level;
+ u8 max_level = fault->max_level;
if (max_level == PG_LEVEL_4K)
return PG_LEVEL_4K;
- max_level = min(kvm_max_level_for_order(gmem_order), max_level);
+ max_level = min(kvm_max_level_for_order(order), max_level);
if (max_level == PG_LEVEL_4K)
return PG_LEVEL_4K;
- req_max_level = kvm_x86_call(private_max_mapping_level)(kvm, pfn);
- if (req_max_level)
- max_level = min(max_level, req_max_level);
+ if (fault->is_private) {
+ u8 level = kvm_x86_call(private_max_mapping_level)(kvm, fault->pfn);
+
+ if (level)
+ max_level = min(max_level, level);
+ }
return max_level;
}
@@ -4491,10 +4500,10 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
r == RET_PF_RETRY, fault->map_writable);
}
-static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
- struct kvm_page_fault *fault)
+static int kvm_mmu_faultin_pfn_gmem(struct kvm_vcpu *vcpu,
+ struct kvm_page_fault *fault)
{
- int max_order, r;
+ int gmem_order, r;
if (!kvm_slot_has_gmem(fault->slot)) {
kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
@@ -4502,15 +4511,14 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
}
r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
- &fault->refcounted_page, &max_order);
+ &fault->refcounted_page, &gmem_order);
if (r) {
kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
return r;
}
fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
- fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn,
- fault->max_level, max_order);
+ fault->max_level = kvm_max_level_for_fault_and_order(vcpu->kvm, fault, gmem_order);
return RET_PF_CONTINUE;
}
@@ -4520,8 +4528,8 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
{
unsigned int foll = fault->write ? FOLL_WRITE : 0;
- if (fault->is_private)
- return kvm_mmu_faultin_pfn_private(vcpu, fault);
+ if (fault_from_gmem(fault))
+ return kvm_mmu_faultin_pfn_gmem(vcpu, fault);
foll |= FOLL_NOWAIT;
fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index edb3795a64b9..b1786ef6d8ea 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2524,10 +2524,31 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
struct kvm_gfn_range *range);
+/*
+ * Returns true if the given gfn's private/shared status (in the CoCo sense) is
+ * private.
+ *
+ * A return value of false indicates that the gfn is explicitly or implicitly
+ * shared (i.e., non-CoCo VMs).
+ */
static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
{
- return IS_ENABLED(CONFIG_KVM_GMEM) &&
- kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
+ struct kvm_memory_slot *slot;
+
+ if (!IS_ENABLED(CONFIG_KVM_GMEM))
+ return false;
+
+ slot = gfn_to_memslot(kvm, gfn);
+ if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot)) {
+ /*
+ * Without in-place conversion support, if a guest_memfd memslot
+ * supports shared memory, then all the slot's memory is
+ * considered not private, i.e., implicitly shared.
+ */
+ return false;
+ }
+
+ return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
}
#else
static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
--
2.49.0.1164.gab81da1b16-goog
^ permalink raw reply related [flat|nested] 62+ messages in thread
* [PATCH v10 11/16] KVM: x86: Compute max_mapping_level with input from guest_memfd
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (9 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 10/16] KVM: x86/mmu: Handle guest page faults for guest_memfd with shared memory Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-06-04 13:09 ` David Hildenbrand
2025-05-27 18:02 ` [PATCH v10 12/16] KVM: arm64: Refactor user_mem_abort() calculation of force_pte Fuad Tabba
` (4 subsequent siblings)
15 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
From: Ackerley Tng <ackerleytng@google.com>
This patch adds kvm_gmem_max_mapping_level(), which always returns
PG_LEVEL_4K since guest_memfd only supports 4K pages for now.
When guest_memfd supports shared memory, max_mapping_level (especially
when recovering huge pages - see call to __kvm_mmu_max_mapping_level()
from recover_huge_pages_range()) should take input from
guest_memfd.
Input from guest_memfd should be taken in these cases:
+ if the memslot supports shared memory (guest_memfd is used for
shared memory, or in future both shared and private memory) or
+ if the memslot is only used for private memory and that gfn is
private.
If the memslot doesn't use guest_memfd, figure out the
max_mapping_level using the host page tables like before.
This patch also refactors and inlines the other call to
__kvm_mmu_max_mapping_level().
In kvm_mmu_hugepage_adjust(), guest_memfd's input is already
provided (if applicable) in fault->max_level. Hence, there is no need
to query guest_memfd.
lpage_info is queried as before; then, if the fault is not from
guest_memfd, fault->req_level is adjusted based on input from the host
page tables.
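For example, assuming x86 with 4KiB base pages (so the 2M and 1G orders are
9 and 18 respectively), the clamping done by kvm_gmem_max_mapping_level()
works out roughly as follows:
	/*
	 * order = kvm_gmem_mapping_order(slot, gfn);
	 *
	 *   order  0 -> kvm_max_level_for_order(order) == PG_LEVEL_4K
	 *   order  9 -> PG_LEVEL_2M
	 *   order 18 -> PG_LEVEL_1G
	 *
	 * The result is min(max_level, kvm_max_level_for_order(order)). Since
	 * kvm_gmem_mapping_order() always returns 0 for now, the effective
	 * mapping level for guest_memfd-backed gfns remains PG_LEVEL_4K.
	 */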
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/x86/kvm/mmu/mmu.c | 87 +++++++++++++++++++++++++---------------
include/linux/kvm_host.h | 11 +++++
virt/kvm/guest_memfd.c | 12 ++++++
3 files changed, 78 insertions(+), 32 deletions(-)
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5b7df2905aa9..9e0bc8114859 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3256,12 +3256,11 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
return level;
}
-static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
- const struct kvm_memory_slot *slot,
- gfn_t gfn, int max_level, bool is_private)
+static int kvm_lpage_info_max_mapping_level(struct kvm *kvm,
+ const struct kvm_memory_slot *slot,
+ gfn_t gfn, int max_level)
{
struct kvm_lpage_info *linfo;
- int host_level;
max_level = min(max_level, max_huge_page_level);
for ( ; max_level > PG_LEVEL_4K; max_level--) {
@@ -3270,28 +3269,61 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
break;
}
- if (is_private)
- return max_level;
+ return max_level;
+}
+
+static inline u8 kvm_max_level_for_order(int order)
+{
+ BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
+
+ KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
+ order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
+ order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
+
+ if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
+ return PG_LEVEL_1G;
+
+ if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
+ return PG_LEVEL_2M;
+
+ return PG_LEVEL_4K;
+}
+
+static inline int kvm_gmem_max_mapping_level(const struct kvm_memory_slot *slot,
+ gfn_t gfn, int max_level)
+{
+ int max_order;
if (max_level == PG_LEVEL_4K)
return PG_LEVEL_4K;
- host_level = host_pfn_mapping_level(kvm, gfn, slot);
- return min(host_level, max_level);
+ max_order = kvm_gmem_mapping_order(slot, gfn);
+ return min(max_level, kvm_max_level_for_order(max_order));
}
int kvm_mmu_max_mapping_level(struct kvm *kvm,
const struct kvm_memory_slot *slot, gfn_t gfn)
{
- bool is_private = kvm_slot_has_gmem(slot) &&
- kvm_mem_is_private(kvm, gfn);
+ int max_level;
- return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
+ max_level = kvm_lpage_info_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM);
+ if (max_level == PG_LEVEL_4K)
+ return PG_LEVEL_4K;
+
+ if (kvm_slot_has_gmem(slot) &&
+ (kvm_gmem_memslot_supports_shared(slot) ||
+ kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
+ return kvm_gmem_max_mapping_level(slot, gfn, max_level);
+ }
+
+ return min(max_level, host_pfn_mapping_level(kvm, gfn, slot));
}
static inline bool fault_from_gmem(struct kvm_page_fault *fault)
{
- return fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot);
+ return fault->is_private ||
+ (kvm_slot_has_gmem(fault->slot) &&
+ kvm_gmem_memslot_supports_shared(fault->slot));
}
void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -3314,12 +3346,20 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
* Enforce the iTLB multihit workaround after capturing the requested
* level, which will be used to do precise, accurate accounting.
*/
- fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
- fault->gfn, fault->max_level,
- fault->is_private);
+ fault->req_level = kvm_lpage_info_max_mapping_level(vcpu->kvm, slot,
+ fault->gfn, fault->max_level);
if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
return;
+ if (!fault_from_gmem(fault)) {
+ int host_level;
+
+ host_level = host_pfn_mapping_level(vcpu->kvm, fault->gfn, slot);
+ fault->req_level = min(fault->req_level, host_level);
+ if (fault->req_level == PG_LEVEL_4K)
+ return;
+ }
+
/*
* mmu_invalidate_retry() was successful and mmu_lock is held, so
* the pmd can't be split from under us.
@@ -4453,23 +4493,6 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
vcpu->stat.pf_fixed++;
}
-static inline u8 kvm_max_level_for_order(int order)
-{
- BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
-
- KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
- order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
- order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
-
- if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
- return PG_LEVEL_1G;
-
- if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
- return PG_LEVEL_2M;
-
- return PG_LEVEL_4K;
-}
-
static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm,
struct kvm_page_fault *fault,
int order)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index b1786ef6d8ea..629915b2bb8d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2551,6 +2551,10 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
}
#else
+static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn)
+{
+ return 0;
+}
static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
{
return false;
@@ -2561,6 +2565,7 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
int *max_order);
+int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot, gfn_t gfn);
#else
static inline int kvm_gmem_get_pfn(struct kvm *kvm,
struct kvm_memory_slot *slot, gfn_t gfn,
@@ -2570,6 +2575,12 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
KVM_BUG_ON(1, kvm);
return -EIO;
}
+static inline int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot,
+ gfn_t gfn)
+{
+ BUG();
+ return 0;
+}
#endif /* CONFIG_KVM_GMEM */
#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 9ded8d5139ee..0cd12f94958b 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -723,6 +723,18 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
}
EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
+/*
+ * Returns the mapping order for this @gfn in @slot.
+ *
+ * This is equal to max_order that would be returned if kvm_gmem_get_pfn() were
+ * called now.
+ */
+int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot, gfn_t gfn)
+{
+ return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_mapping_order);
+
#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
kvm_gmem_populate_cb post_populate, void *opaque)
--
2.49.0.1164.gab81da1b16-goog
* [PATCH v10 12/16] KVM: arm64: Refactor user_mem_abort() calculation of force_pte
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (10 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 11/16] KVM: x86: Compute max_mapping_level with input from guest_memfd Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-06-04 6:05 ` Gavin Shan
2025-05-27 18:02 ` [PATCH v10 13/16] KVM: arm64: Handle guest_memfd-backed guest page faults Fuad Tabba
` (3 subsequent siblings)
15 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
To simplify the code and to make the assumptions clearer,
refactor user_mem_abort() by immediately setting force_pte to
true if the conditions are met. Also, remove the comment about
logging_active being guaranteed to never be true for VM_PFNMAP
memslots, since it's not actually correct.
No functional change intended.
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/kvm/mmu.c | 13 ++++---------
1 file changed, 4 insertions(+), 9 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index eeda92330ade..9865ada04a81 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1472,7 +1472,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
bool fault_is_perm)
{
int ret = 0;
- bool write_fault, writable, force_pte = false;
+ bool write_fault, writable;
bool exec_fault, mte_allowed;
bool device = false, vfio_allow_any_uc = false;
unsigned long mmu_seq;
@@ -1484,6 +1484,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
gfn_t gfn;
kvm_pfn_t pfn;
bool logging_active = memslot_is_logging(memslot);
+ bool force_pte = logging_active || is_protected_kvm_enabled();
long vma_pagesize, fault_granule;
enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
struct kvm_pgtable *pgt;
@@ -1536,16 +1537,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
return -EFAULT;
}
- /*
- * logging_active is guaranteed to never be true for VM_PFNMAP
- * memslots.
- */
- if (logging_active || is_protected_kvm_enabled()) {
- force_pte = true;
+ if (force_pte)
vma_shift = PAGE_SHIFT;
- } else {
+ else
vma_shift = get_vma_page_shift(vma, hva);
- }
switch (vma_shift) {
#ifndef __PAGETABLE_PMD_FOLDED
--
2.49.0.1164.gab81da1b16-goog
* [PATCH v10 13/16] KVM: arm64: Handle guest_memfd-backed guest page faults
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (11 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 12/16] KVM: arm64: Refactor user_mem_abort() calculation of force_pte Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-06-04 13:17 ` David Hildenbrand
2025-05-27 18:02 ` [PATCH v10 14/16] KVM: arm64: Enable mapping guest_memfd in arm64 Fuad Tabba
` (2 subsequent siblings)
15 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
Add arm64 support for handling guest page faults on guest_memfd backed
memslots. Until guest_memfd supports huge pages, the fault granule is
restricted to PAGE_SIZE.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
Note: This patch introduces a new function, gmem_abort(), rather than
expanding user_mem_abort() as earlier attempts did. This is because there
are many differences in how faults are handled when backed by guest_memfd
vs regular memslots with anonymous memory, e.g., the lack of a VMA and,
for now, the lack of huge page support for guest_memfd. The function
user_mem_abort() is already big and unwieldy, and adding more complexity
to it made things more difficult to understand.
Once larger page size support is added to guest_memfd, we could factor
out the common code between these two functions.
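In condensed form, the flow of gmem_abort() below is roughly (locking
details and error paths elided):
	/*
	 * 1. Unless this is a permission fault, top up the stage-2 memcache
	 *    (pkvm_memcache in protected mode, mmu_page_cache otherwise).
	 * 2. kvm_gmem_get_pfn() resolves the faulting gfn to a pfn/page; on
	 *    failure, prepare a memory fault exit to userspace.
	 * 3. Compute the stage-2 prot: R, plus W if the slot is writable and
	 *    either logging is off or this is a write fault, plus X for exec
	 *    faults or when the CPU has the ARM64_HAS_CACHE_DIC capability.
	 * 4. Under the fault lock, relax permissions for a permission fault,
	 *    otherwise map the page at PAGE_SIZE granularity.
	 * 5. Release the faultin page and, if the mapping is writable and the
	 *    update succeeded, mark the page dirty in the slot.
	 */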
---
arch/arm64/kvm/mmu.c | 89 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 87 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9865ada04a81..896c56683d88 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1466,6 +1466,87 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
return vma->vm_flags & VM_MTE_ALLOWED;
}
+static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+ struct kvm_memory_slot *memslot, bool is_perm)
+{
+ enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
+ enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
+ bool logging, write_fault, exec_fault, writable;
+ struct kvm_pgtable *pgt;
+ struct page *page;
+ struct kvm *kvm;
+ void *memcache;
+ kvm_pfn_t pfn;
+ gfn_t gfn;
+ int ret;
+
+ if (!is_perm) {
+ int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
+
+ if (!is_protected_kvm_enabled()) {
+ memcache = &vcpu->arch.mmu_page_cache;
+ ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
+ } else {
+ memcache = &vcpu->arch.pkvm_memcache;
+ ret = topup_hyp_memcache(memcache, min_pages);
+ }
+ if (ret)
+ return ret;
+ }
+
+ kvm = vcpu->kvm;
+ gfn = fault_ipa >> PAGE_SHIFT;
+
+ logging = memslot_is_logging(memslot);
+ write_fault = kvm_is_write_fault(vcpu);
+ exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
+ VM_BUG_ON(write_fault && exec_fault);
+
+ if (is_perm && !write_fault && !exec_fault) {
+ kvm_err("Unexpected L2 read permission error\n");
+ return -EFAULT;
+ }
+
+ ret = kvm_gmem_get_pfn(vcpu->kvm, memslot, gfn, &pfn, &page, NULL);
+ if (ret) {
+ kvm_prepare_memory_fault_exit(vcpu, fault_ipa, PAGE_SIZE,
+ write_fault, exec_fault, false);
+ return ret;
+ }
+
+ writable = !(memslot->flags & KVM_MEM_READONLY) &&
+ (!logging || write_fault);
+
+ if (writable)
+ prot |= KVM_PGTABLE_PROT_W;
+
+ if (exec_fault || cpus_have_final_cap(ARM64_HAS_CACHE_DIC))
+ prot |= KVM_PGTABLE_PROT_X;
+
+ pgt = vcpu->arch.hw_mmu->pgt;
+
+ kvm_fault_lock(kvm);
+ if (is_perm) {
+ /*
+ * Drop the SW bits in favour of those stored in the
+ * PTE, which will be preserved.
+ */
+ prot &= ~KVM_NV_GUEST_MAP_SZ;
+ ret = KVM_PGT_FN(kvm_pgtable_stage2_relax_perms)(pgt, fault_ipa, prot, flags);
+ } else {
+ ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, PAGE_SIZE,
+ __pfn_to_phys(pfn), prot,
+ memcache, flags);
+ }
+ kvm_release_faultin_page(kvm, page, !!ret, writable);
+ kvm_fault_unlock(kvm);
+
+ if (writable && !ret)
+ mark_page_dirty_in_slot(kvm, memslot, gfn);
+
+ return ret != -EAGAIN ? ret : 0;
+}
+
static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
struct kvm_s2_trans *nested,
struct kvm_memory_slot *memslot, unsigned long hva,
@@ -1944,8 +2025,12 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
goto out_unlock;
}
- ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
- esr_fsc_is_permission_fault(esr));
+ if (kvm_slot_has_gmem(memslot))
+ ret = gmem_abort(vcpu, fault_ipa, memslot,
+ esr_fsc_is_permission_fault(esr));
+ else
+ ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
+ esr_fsc_is_permission_fault(esr));
if (ret == 0)
ret = 1;
out:
--
2.49.0.1164.gab81da1b16-goog
* [PATCH v10 14/16] KVM: arm64: Enable mapping guest_memfd in arm64
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (12 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 13/16] KVM: arm64: Handle guest_memfd-backed guest page faults Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-06-04 13:17 ` David Hildenbrand
2025-05-27 18:02 ` [PATCH v10 15/16] KVM: Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed Fuad Tabba
15 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
Enable mapping guest_memfd backed memory at the host in arm64. For now,
this applies to all VM types in arm64 that use guest_memfd. In the future,
new VM types can restrict this via kvm_arch_supports_gmem_shared_mem().
Signed-off-by: Fuad Tabba <tabba@google.com>
---
arch/arm64/include/asm/kvm_host.h | 5 +++++
arch/arm64/kvm/Kconfig | 1 +
arch/arm64/kvm/mmu.c | 7 +++++++
3 files changed, 13 insertions(+)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 08ba91e6fb03..8add94929711 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1593,4 +1593,9 @@ static inline bool kvm_arch_has_irq_bypass(void)
return true;
}
+#ifdef CONFIG_KVM_GMEM
+#define kvm_arch_supports_gmem(kvm) true
+#define kvm_arch_supports_gmem_shared_mem(kvm) IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM)
+#endif
+
#endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 096e45acadb2..8c1e1964b46a 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -38,6 +38,7 @@ menuconfig KVM
select HAVE_KVM_VCPU_RUN_PID_CHANGE
select SCHED_INFO
select GUEST_PERF_EVENTS if PERF_EVENTS
+ select KVM_GMEM_SHARED_MEM
help
Support hosting virtualized guest machines.
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 896c56683d88..03da08390bf0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2264,6 +2264,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
if ((new->base_gfn + new->npages) > (kvm_phys_size(&kvm->arch.mmu) >> PAGE_SHIFT))
return -EFAULT;
+ /*
+ * Only support guest_memfd backed memslots with shared memory, since
+ * there aren't any CoCo VMs that support only private memory on arm64.
+ */
+ if (kvm_slot_has_gmem(new) && !kvm_gmem_memslot_supports_shared(new))
+ return -EINVAL;
+
hva = new->userspace_addr;
reg_end = hva + (new->npages << PAGE_SHIFT);
--
2.49.0.1164.gab81da1b16-goog
* [PATCH v10 15/16] KVM: Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (13 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 14/16] KVM: arm64: Enable mapping guest_memfd in arm64 Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-06-05 9:59 ` Gavin Shan
2025-05-27 18:02 ` [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed Fuad Tabba
15 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
This patch introduces the KVM capability KVM_CAP_GMEM_SHARED_MEM, which
indicates that guest_memfd supports shared memory (when the
GUEST_MEMFD_FLAG_SUPPORT_SHARED flag is set at guest_memfd creation). This
support is limited to certain VM types, determined per architecture.
This patch also updates the KVM documentation with details on the new
capability, flag, and other information about support for shared memory
in guest_memfd.
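For illustration, a minimal userspace sketch of how a VMM might consume the
new capability and flag (vm_fd and guest_mem_size are assumed to already
exist; includes and error handling omitted):
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_GMEM_SHARED_MEM) > 0) {
		struct kvm_create_guest_memfd gmem = {
			.size  = guest_mem_size,
			.flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
		};
		int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
		/* With the flag set, the host can now mmap() the fd. */
		void *mem = mmap(NULL, guest_mem_size, PROT_READ | PROT_WRITE,
				 MAP_SHARED, gmem_fd, 0);
	}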
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
Documentation/virt/kvm/api.rst | 9 +++++++++
include/uapi/linux/kvm.h | 1 +
virt/kvm/kvm_main.c | 4 ++++
3 files changed, 14 insertions(+)
diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 47c7c3f92314..59f994a99481 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6390,6 +6390,15 @@ most one mapping per page, i.e. binding multiple memory regions to a single
guest_memfd range is not allowed (any number of memory regions can be bound to
a single guest_memfd file, but the bound ranges must not overlap).
+When the capability KVM_CAP_GMEM_SHARED_MEM is supported, the 'flags' field
+supports GUEST_MEMFD_FLAG_SUPPORT_SHARED. Setting this flag on guest_memfd
+creation enables mmap() and faulting of guest_memfd memory to host userspace.
+
+When the KVM MMU performs a PFN lookup to service a guest fault and the backing
+guest_memfd has the GUEST_MEMFD_FLAG_SUPPORT_SHARED set, then the fault will
+always be consumed from guest_memfd, regardless of whether it is a shared or a
+private fault.
+
See KVM_SET_USER_MEMORY_REGION2 for additional details.
4.143 KVM_PRE_FAULT_MEMORY
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index c2714c9d1a0e..5aa85d34a29a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -930,6 +930,7 @@ struct kvm_enable_cap {
#define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
#define KVM_CAP_X86_GUEST_MODE 238
#define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239
+#define KVM_CAP_GMEM_SHARED_MEM 240
struct kvm_irq_routing_irqchip {
__u32 irqchip;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6289ea1685dd..64ed4da70d2f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4845,6 +4845,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
#ifdef CONFIG_KVM_GMEM
case KVM_CAP_GUEST_MEMFD:
return !kvm || kvm_arch_supports_gmem(kvm);
+#endif
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+ case KVM_CAP_GMEM_SHARED_MEM:
+ return !kvm || kvm_arch_supports_gmem_shared_mem(kvm);
#endif
default:
break;
--
2.49.0.1164.gab81da1b16-goog
* [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
` (14 preceding siblings ...)
2025-05-27 18:02 ` [PATCH v10 15/16] KVM: Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM Fuad Tabba
@ 2025-05-27 18:02 ` Fuad Tabba
2025-06-04 9:19 ` Gavin Shan
15 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-05-27 18:02 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, tabba
Expand the guest_memfd selftests to cover mapping guest memory at the host
for VM types that support it.
Also, build the guest_memfd selftest for arm64.
Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
tools/testing/selftests/kvm/Makefile.kvm | 1 +
.../testing/selftests/kvm/guest_memfd_test.c | 162 +++++++++++++++---
2 files changed, 142 insertions(+), 21 deletions(-)
diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index f62b0a5aba35..ccf95ed037c3 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
TEST_GEN_PROGS_arm64 += arch_timer
TEST_GEN_PROGS_arm64 += coalesced_io_test
TEST_GEN_PROGS_arm64 += dirty_log_perf_test
+TEST_GEN_PROGS_arm64 += guest_memfd_test
TEST_GEN_PROGS_arm64 += get-reg-list
TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
TEST_GEN_PROGS_arm64 += memslot_perf_test
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ce687f8d248f..3d6765bc1f28 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
"pwrite on a guest_mem fd should fail");
}
-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
+{
+ const char val = 0xaa;
+ char *mem;
+ size_t i;
+ int ret;
+
+ mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+ TEST_ASSERT(mem != MAP_FAILED, "mmap() of guest memory should succeed.");
+
+ memset(mem, val, total_size);
+ for (i = 0; i < total_size; i++)
+ TEST_ASSERT_EQ(mem[i], val);
+
+ ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+ page_size);
+ TEST_ASSERT(!ret, "fallocate the first page should succeed");
+
+ for (i = 0; i < page_size; i++)
+ TEST_ASSERT_EQ(mem[i], 0x00);
+ for (; i < total_size; i++)
+ TEST_ASSERT_EQ(mem[i], val);
+
+ memset(mem, val, page_size);
+ for (i = 0; i < total_size; i++)
+ TEST_ASSERT_EQ(mem[i], val);
+
+ ret = munmap(mem, total_size);
+ TEST_ASSERT(!ret, "munmap should succeed");
+}
+
+static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
{
char *mem;
mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+ mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+ TEST_ASSERT_EQ(mem, MAP_FAILED);
}
static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
}
}
-static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
+static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
+ uint64_t guest_memfd_flags,
+ size_t page_size)
{
- size_t page_size = getpagesize();
- uint64_t flag;
size_t size;
int fd;
for (size = 1; size < page_size; size++) {
- fd = __vm_create_guest_memfd(vm, size, 0);
- TEST_ASSERT(fd == -1 && errno == EINVAL,
+ fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
+ TEST_ASSERT(fd < 0 && errno == EINVAL,
"guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
size);
}
-
- for (flag = BIT(0); flag; flag <<= 1) {
- fd = __vm_create_guest_memfd(vm, page_size, flag);
- TEST_ASSERT(fd == -1 && errno == EINVAL,
- "guest_memfd() with flag '0x%lx' should fail with EINVAL",
- flag);
- }
}
static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
@@ -170,30 +197,123 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
close(fd1);
}
-int main(int argc, char *argv[])
+#define GUEST_MEMFD_TEST_SLOT 10
+#define GUEST_MEMFD_TEST_GPA 0x100000000
+
+static bool check_vm_type(unsigned long vm_type)
{
- size_t page_size;
+ /*
+ * Not all architectures support KVM_CAP_VM_TYPES. However, those that
+ * support guest_memfd have that support for the default VM type.
+ */
+ if (vm_type == VM_TYPE_DEFAULT)
+ return true;
+
+ return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
+}
+
+static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
+ bool expect_mmap_allowed)
+{
+ struct kvm_vm *vm;
size_t total_size;
+ size_t page_size;
int fd;
- struct kvm_vm *vm;
- TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+ if (!check_vm_type(vm_type))
+ return;
page_size = getpagesize();
total_size = page_size * 4;
- vm = vm_create_barebones();
+ vm = vm_create_barebones_type(vm_type);
- test_create_guest_memfd_invalid(vm);
test_create_guest_memfd_multiple(vm);
+ test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
- fd = vm_create_guest_memfd(vm, total_size, 0);
+ fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
test_file_read_write(fd);
- test_mmap(fd, page_size);
+
+ if (expect_mmap_allowed)
+ test_mmap_allowed(fd, page_size, total_size);
+ else
+ test_mmap_denied(fd, page_size, total_size);
+
test_file_size(fd, page_size, total_size);
test_fallocate(fd, page_size, total_size);
test_invalid_punch_hole(fd, page_size, total_size);
close(fd);
+ kvm_vm_release(vm);
+}
+
+static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
+ uint64_t expected_valid_flags)
+{
+ size_t page_size = getpagesize();
+ struct kvm_vm *vm;
+ uint64_t flag = 0;
+ int fd;
+
+ if (!check_vm_type(vm_type))
+ return;
+
+ vm = vm_create_barebones_type(vm_type);
+
+ for (flag = BIT(0); flag; flag <<= 1) {
+ fd = __vm_create_guest_memfd(vm, page_size, flag);
+
+ if (flag & expected_valid_flags) {
+ TEST_ASSERT(fd >= 0,
+ "guest_memfd() with flag '0x%lx' should be valid",
+ flag);
+ close(fd);
+ } else {
+ TEST_ASSERT(fd < 0 && errno == EINVAL,
+ "guest_memfd() with flag '0x%lx' should fail with EINVAL",
+ flag);
+ }
+ }
+
+ kvm_vm_release(vm);
+}
+
+static void test_gmem_flag_validity(void)
+{
+ uint64_t non_coco_vm_valid_flags = 0;
+
+ if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
+ non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+
+ test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
+
+#ifdef __x86_64__
+ test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
+ test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
+ test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
+ test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
+ test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
+#endif
+}
+
+int main(int argc, char *argv[])
+{
+ TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+
+ test_gmem_flag_validity();
+
+ test_with_type(VM_TYPE_DEFAULT, 0, false);
+ if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+ test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
+ true);
+ }
+
+#ifdef __x86_64__
+ test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
+ if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+ test_with_type(KVM_X86_SW_PROTECTED_VM,
+ GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
+ }
+#endif
}
--
2.49.0.1164.gab81da1b16-goog
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-05-27 18:02 ` [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages Fuad Tabba
@ 2025-05-28 23:17 ` kernel test robot
2025-06-02 10:05 ` Fuad Tabba
` (5 subsequent siblings)
6 siblings, 0 replies; 62+ messages in thread
From: kernel test robot @ 2025-05-28 23:17 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: oe-kbuild-all, pbonzini, chenhuacai, mpe, anup, paul.walmsley,
palmer, aou, seanjc, viro, brauner, willy, akpm, xiaoyao.li,
yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata,
mic, vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang
Hi Fuad,
kernel test robot noticed the following build errors:
[auto build test ERROR on 0ff41df1cb268fc69e703a08a57ee14ae967d0ca]
url: https://github.com/intel-lab-lkp/linux/commits/Fuad-Tabba/KVM-Rename-CONFIG_KVM_PRIVATE_MEM-to-CONFIG_KVM_GMEM/20250528-020608
base: 0ff41df1cb268fc69e703a08a57ee14ae967d0ca
patch link: https://lore.kernel.org/r/20250527180245.1413463-9-tabba%40google.com
patch subject: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
config: powerpc-allmodconfig (https://download.01.org/0day-ci/archive/20250529/202505290736.HR4GYiOF-lkp@intel.com/config)
compiler: powerpc64-linux-gcc (GCC) 14.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250529/202505290736.HR4GYiOF-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202505290736.HR4GYiOF-lkp@intel.com/
All errors (new ones prefixed by >>):
arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c: In function '__kvm_gmem_create':
arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c:487:14: error: implicit declaration of function 'get_unused_fd_flags' [-Wimplicit-function-declaration]
487 | fd = get_unused_fd_flags(0);
| ^~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c:524:9: error: implicit declaration of function 'fd_install'; did you mean 'fs_initcall'? [-Wimplicit-function-declaration]
524 | fd_install(fd, file);
| ^~~~~~~~~~
| fs_initcall
arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c:530:9: error: implicit declaration of function 'put_unused_fd'; did you mean 'put_user_ns'? [-Wimplicit-function-declaration]
530 | put_unused_fd(fd);
| ^~~~~~~~~~~~~
| put_user_ns
arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c: In function 'kvm_gmem_create':
>> arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c:540:13: error: implicit declaration of function 'kvm_arch_supports_gmem_shared_mem' [-Wimplicit-function-declaration]
540 | if (kvm_arch_supports_gmem_shared_mem(kvm))
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c: In function 'kvm_gmem_bind':
arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c:564:16: error: implicit declaration of function 'fget'; did you mean 'sget'? [-Wimplicit-function-declaration]
564 | file = fget(fd);
| ^~~~
| sget
arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c:564:14: error: assignment to 'struct file *' from 'int' makes pointer from integer without a cast [-Wint-conversion]
564 | file = fget(fd);
| ^
arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c:614:9: error: implicit declaration of function 'fput'; did you mean 'iput'? [-Wimplicit-function-declaration]
614 | fput(file);
| ^~~~
| iput
vim +/kvm_arch_supports_gmem_shared_mem +540 arch/powerpc/kvm/../../../virt/kvm/guest_memfd.c
533
534 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
535 {
536 loff_t size = args->size;
537 u64 flags = args->flags;
538 u64 valid_flags = 0;
539
> 540 if (kvm_arch_supports_gmem_shared_mem(kvm))
541 valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
542
543 if (flags & ~valid_flags)
544 return -EINVAL;
545
546 if (size <= 0 || !PAGE_ALIGNED(size))
547 return -EINVAL;
548
549 return __kvm_gmem_create(kvm, size, flags);
550 }
551
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
* Re: [PATCH v10 01/16] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM
2025-05-27 18:02 ` [PATCH v10 01/16] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM Fuad Tabba
@ 2025-05-31 19:05 ` Shivank Garg
2025-06-05 8:19 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Shivank Garg @ 2025-05-31 19:05 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
On 5/27/2025 11:32 PM, Fuad Tabba wrote:
> The option KVM_PRIVATE_MEM enables guest_memfd in general. Subsequent
> patches add shared memory support to guest_memfd. Therefore, rename it
> to KVM_GMEM to make its purpose clearer.
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 2 +-
> include/linux/kvm_host.h | 10 +++++-----
> virt/kvm/Kconfig | 8 ++++----
> virt/kvm/Makefile.kvm | 2 +-
> virt/kvm/kvm_main.c | 4 ++--
> virt/kvm/kvm_mm.h | 4 ++--
> 6 files changed, 15 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 7bc174a1f1cb..52f6f6d08558 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -2253,7 +2253,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> int tdp_max_root_level, int tdp_huge_page_level);
>
>
> -#ifdef CONFIG_KVM_PRIVATE_MEM
> +#ifdef CONFIG_KVM_GMEM
> #define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
> #else
> #define kvm_arch_has_private_mem(kvm) false
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 291d49b9bf05..d6900995725d 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -601,7 +601,7 @@ struct kvm_memory_slot {
> short id;
> u16 as_id;
>
> -#ifdef CONFIG_KVM_PRIVATE_MEM
> +#ifdef CONFIG_KVM_GMEM
> struct {
> /*
> * Writes protected by kvm->slots_lock. Acquiring a
> @@ -722,7 +722,7 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
> * Arch code must define kvm_arch_has_private_mem if support for private memory
> * is enabled.
> */
> -#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_PRIVATE_MEM)
> +#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
> {
> return false;
> @@ -2504,7 +2504,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
>
> static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
> {
> - return IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) &&
> + return IS_ENABLED(CONFIG_KVM_GMEM) &&
> kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
> }
> #else
> @@ -2514,7 +2514,7 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
> }
> #endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */
>
> -#ifdef CONFIG_KVM_PRIVATE_MEM
> +#ifdef CONFIG_KVM_GMEM
> int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
> int *max_order);
> @@ -2527,7 +2527,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
> KVM_BUG_ON(1, kvm);
> return -EIO;
> }
> -#endif /* CONFIG_KVM_PRIVATE_MEM */
> +#endif /* CONFIG_KVM_GMEM */
>
> #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
> int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index 727b542074e7..49df4e32bff7 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -112,19 +112,19 @@ config KVM_GENERIC_MEMORY_ATTRIBUTES
> depends on KVM_GENERIC_MMU_NOTIFIER
> bool
>
> -config KVM_PRIVATE_MEM
> +config KVM_GMEM
> select XARRAY_MULTI
> bool
>
> config KVM_GENERIC_PRIVATE_MEM
> select KVM_GENERIC_MEMORY_ATTRIBUTES
> - select KVM_PRIVATE_MEM
> + select KVM_GMEM
> bool
>
> config HAVE_KVM_ARCH_GMEM_PREPARE
> bool
> - depends on KVM_PRIVATE_MEM
> + depends on KVM_GMEM
>
> config HAVE_KVM_ARCH_GMEM_INVALIDATE
> bool
> - depends on KVM_PRIVATE_MEM
> + depends on KVM_GMEM
> diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
> index 724c89af78af..8d00918d4c8b 100644
> --- a/virt/kvm/Makefile.kvm
> +++ b/virt/kvm/Makefile.kvm
> @@ -12,4 +12,4 @@ kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
> kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
> kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
> kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
> -kvm-$(CONFIG_KVM_PRIVATE_MEM) += $(KVM)/guest_memfd.o
> +kvm-$(CONFIG_KVM_GMEM) += $(KVM)/guest_memfd.o
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index e85b33a92624..4996cac41a8f 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -4842,7 +4842,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
> case KVM_CAP_MEMORY_ATTRIBUTES:
> return kvm_supported_mem_attributes(kvm);
> #endif
> -#ifdef CONFIG_KVM_PRIVATE_MEM
> +#ifdef CONFIG_KVM_GMEM
> case KVM_CAP_GUEST_MEMFD:
> return !kvm || kvm_arch_has_private_mem(kvm);
> #endif
> @@ -5276,7 +5276,7 @@ static long kvm_vm_ioctl(struct file *filp,
> case KVM_GET_STATS_FD:
> r = kvm_vm_ioctl_get_stats_fd(kvm);
> break;
> -#ifdef CONFIG_KVM_PRIVATE_MEM
> +#ifdef CONFIG_KVM_GMEM
> case KVM_CREATE_GUEST_MEMFD: {
> struct kvm_create_guest_memfd guest_memfd;
>
> diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
> index acef3f5c582a..ec311c0d6718 100644
> --- a/virt/kvm/kvm_mm.h
> +++ b/virt/kvm/kvm_mm.h
> @@ -67,7 +67,7 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
> }
> #endif /* HAVE_KVM_PFNCACHE */
>
> -#ifdef CONFIG_KVM_PRIVATE_MEM
> +#ifdef CONFIG_KVM_GMEM
> void kvm_gmem_init(struct module *module);
> int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
> int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> @@ -91,6 +91,6 @@ static inline void kvm_gmem_unbind(struct kvm_memory_slot *slot)
> {
> WARN_ON_ONCE(1);
> }
> -#endif /* CONFIG_KVM_PRIVATE_MEM */
> +#endif /* CONFIG_KVM_GMEM */
>
> #endif /* __KVM_MM_H__ */
Reviewed-by: Shivank Garg <shivankg@amd.com>
* Re: [PATCH v10 02/16] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE
2025-05-27 18:02 ` [PATCH v10 02/16] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE Fuad Tabba
@ 2025-05-31 19:07 ` Shivank Garg
2025-06-05 8:19 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Shivank Garg @ 2025-05-31 19:07 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
On 5/27/2025 11:32 PM, Fuad Tabba wrote:
> The option KVM_GENERIC_PRIVATE_MEM enables populating a GPA range with
> guest data. Rename it to KVM_GENERIC_GMEM_POPULATE to make its purpose
> clearer.
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/x86/kvm/Kconfig | 4 ++--
> include/linux/kvm_host.h | 2 +-
> virt/kvm/Kconfig | 2 +-
> virt/kvm/guest_memfd.c | 2 +-
> 4 files changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
> index fe8ea8c097de..b37258253543 100644
> --- a/arch/x86/kvm/Kconfig
> +++ b/arch/x86/kvm/Kconfig
> @@ -46,7 +46,7 @@ config KVM_X86
> select HAVE_KVM_PM_NOTIFIER if PM
> select KVM_GENERIC_HARDWARE_ENABLING
> select KVM_GENERIC_PRE_FAULT_MEMORY
> - select KVM_GENERIC_PRIVATE_MEM if KVM_SW_PROTECTED_VM
> + select KVM_GENERIC_GMEM_POPULATE if KVM_SW_PROTECTED_VM
> select KVM_WERROR if WERROR
>
> config KVM
> @@ -145,7 +145,7 @@ config KVM_AMD_SEV
> depends on KVM_AMD && X86_64
> depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
> select ARCH_HAS_CC_PLATFORM
> - select KVM_GENERIC_PRIVATE_MEM
> + select KVM_GENERIC_GMEM_POPULATE
> select HAVE_KVM_ARCH_GMEM_PREPARE
> select HAVE_KVM_ARCH_GMEM_INVALIDATE
> help
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index d6900995725d..7ca23837fa52 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2533,7 +2533,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
> int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
> #endif
>
> -#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
> +#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
> /**
> * kvm_gmem_populate() - Populate/prepare a GPA range with guest data
> *
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index 49df4e32bff7..559c93ad90be 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -116,7 +116,7 @@ config KVM_GMEM
> select XARRAY_MULTI
> bool
>
> -config KVM_GENERIC_PRIVATE_MEM
> +config KVM_GENERIC_GMEM_POPULATE
> select KVM_GENERIC_MEMORY_ATTRIBUTES
> select KVM_GMEM
> bool
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index b2aa6bf24d3a..befea51bbc75 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -638,7 +638,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
> }
> EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
>
> -#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
> +#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
> long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
> kvm_gmem_populate_cb post_populate, void *opaque)
> {
Reviewed-by: Shivank Garg <shivankg@amd.com>
* Re: [PATCH v10 03/16] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem()
2025-05-27 18:02 ` [PATCH v10 03/16] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem() Fuad Tabba
@ 2025-05-31 19:12 ` Shivank Garg
2025-06-05 8:20 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Shivank Garg @ 2025-05-31 19:12 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
On 5/27/2025 11:32 PM, Fuad Tabba wrote:
> The function kvm_arch_has_private_mem() is used to indicate whether
> guest_memfd is supported by the architecture, which until now implies
> that it is private. To decouple guest_memfd support from whether the
> memory is private, rename this function to kvm_arch_supports_gmem().
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 8 ++++----
> arch/x86/kvm/mmu/mmu.c | 8 ++++----
> include/linux/kvm_host.h | 6 +++---
> virt/kvm/kvm_main.c | 6 +++---
> 4 files changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 52f6f6d08558..4a83fbae7056 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -2254,9 +2254,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
>
>
> #ifdef CONFIG_KVM_GMEM
> -#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
> +#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
> #else
> -#define kvm_arch_has_private_mem(kvm) false
> +#define kvm_arch_supports_gmem(kvm) false
> #endif
>
> #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> @@ -2309,8 +2309,8 @@ enum {
> #define HF_SMM_INSIDE_NMI_MASK (1 << 2)
>
> # define KVM_MAX_NR_ADDRESS_SPACES 2
> -/* SMM is currently unsupported for guests with private memory. */
> -# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_has_private_mem(kvm) ? 1 : 2)
> +/* SMM is currently unsupported for guests with guest_memfd (esp private) memory. */
> +# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_supports_gmem(kvm) ? 1 : 2)
> # define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
> # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
> #else
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 8d1b632e33d2..b66f1bf24e06 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4917,7 +4917,7 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
> if (r)
> return r;
>
> - if (kvm_arch_has_private_mem(vcpu->kvm) &&
> + if (kvm_arch_supports_gmem(vcpu->kvm) &&
> kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa)))
> error_code |= PFERR_PRIVATE_ACCESS;
>
> @@ -7705,7 +7705,7 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
> * Zapping SPTEs in this case ensures KVM will reassess whether or not
> * a hugepage can be used for affected ranges.
> */
> - if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
> + if (WARN_ON_ONCE(!kvm_arch_supports_gmem(kvm)))
> return false;
>
> if (WARN_ON_ONCE(range->end <= range->start))
> @@ -7784,7 +7784,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
> * a range that has PRIVATE GFNs, and conversely converting a range to
> * SHARED may now allow hugepages.
> */
> - if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
> + if (WARN_ON_ONCE(!kvm_arch_supports_gmem(kvm)))
> return false;
>
> /*
> @@ -7840,7 +7840,7 @@ void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
> {
> int level;
>
> - if (!kvm_arch_has_private_mem(kvm))
> + if (!kvm_arch_supports_gmem(kvm))
> return;
>
> for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 7ca23837fa52..6ca7279520cf 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -719,11 +719,11 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
> #endif
>
> /*
> - * Arch code must define kvm_arch_has_private_mem if support for private memory
> + * Arch code must define kvm_arch_supports_gmem if support for guest_memfd
> * is enabled.
> */
> -#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> -static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
> +#if !defined(kvm_arch_supports_gmem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> +static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> {
> return false;
> }
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 4996cac41a8f..2468d50a9ed4 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -1531,7 +1531,7 @@ static int check_memory_region_flags(struct kvm *kvm,
> {
> u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
>
> - if (kvm_arch_has_private_mem(kvm))
> + if (kvm_arch_supports_gmem(kvm))
> valid_flags |= KVM_MEM_GUEST_MEMFD;
>
> /* Dirty logging private memory is not currently supported. */
> @@ -2362,7 +2362,7 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm,
> #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
> static u64 kvm_supported_mem_attributes(struct kvm *kvm)
> {
> - if (!kvm || kvm_arch_has_private_mem(kvm))
> + if (!kvm || kvm_arch_supports_gmem(kvm))
> return KVM_MEMORY_ATTRIBUTE_PRIVATE;
>
> return 0;
> @@ -4844,7 +4844,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
> #endif
> #ifdef CONFIG_KVM_GMEM
> case KVM_CAP_GUEST_MEMFD:
> - return !kvm || kvm_arch_has_private_mem(kvm);
> + return !kvm || kvm_arch_supports_gmem(kvm);
> #endif
> default:
> break;
Reviewed-by: Shivank Garg <shivankg@amd.com>
* Re: [PATCH v10 04/16] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem
2025-05-27 18:02 ` [PATCH v10 04/16] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem Fuad Tabba
@ 2025-05-31 19:13 ` Shivank Garg
2025-06-05 8:21 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Shivank Garg @ 2025-05-31 19:13 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
On 5/27/2025 11:32 PM, Fuad Tabba wrote:
> The bool has_private_mem is used to indicate whether guest_memfd is
> supported. Rename it to supports_gmem to make its meaning clearer and to
> decouple memory being private from guest_memfd.
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 4 ++--
> arch/x86/kvm/mmu/mmu.c | 2 +-
> arch/x86/kvm/svm/svm.c | 4 ++--
> arch/x86/kvm/x86.c | 3 +--
> 4 files changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 4a83fbae7056..709cc2a7ba66 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -1331,7 +1331,7 @@ struct kvm_arch {
> unsigned int indirect_shadow_pages;
> u8 mmu_valid_gen;
> u8 vm_type;
> - bool has_private_mem;
> + bool supports_gmem;
> bool has_protected_state;
> bool pre_fault_allowed;
> struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
> @@ -2254,7 +2254,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
>
>
> #ifdef CONFIG_KVM_GMEM
> -#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
> +#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> #else
> #define kvm_arch_supports_gmem(kvm) false
> #endif
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index b66f1bf24e06..69bf2ef22ed0 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3486,7 +3486,7 @@ static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault
> * on RET_PF_SPURIOUS until the update completes, or an actual spurious
> * case might go down the slow path. Either case will resolve itself.
> */
> - if (kvm->arch.has_private_mem &&
> + if (kvm->arch.supports_gmem &&
> fault->is_private != kvm_mem_is_private(kvm, fault->gfn))
> return false;
>
> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
> index a89c271a1951..a05b7dc7b717 100644
> --- a/arch/x86/kvm/svm/svm.c
> +++ b/arch/x86/kvm/svm/svm.c
> @@ -5110,8 +5110,8 @@ static int svm_vm_init(struct kvm *kvm)
> (type == KVM_X86_SEV_ES_VM || type == KVM_X86_SNP_VM);
> to_kvm_sev_info(kvm)->need_init = true;
>
> - kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
> - kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
> + kvm->arch.supports_gmem = (type == KVM_X86_SNP_VM);
> + kvm->arch.pre_fault_allowed = !kvm->arch.supports_gmem;
> }
>
> if (!pause_filter_count || !pause_filter_thresh)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index be7bb6d20129..035ced06b2dd 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12718,8 +12718,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> return -EINVAL;
>
> kvm->arch.vm_type = type;
> - kvm->arch.has_private_mem =
> - (type == KVM_X86_SW_PROTECTED_VM);
> + kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
> /* Decided by the vendor code for other VM types. */
> kvm->arch.pre_fault_allowed =
> type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
Reviewed-by: Shivank Garg <shivankg@amd.com>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 05/16] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem()
2025-05-27 18:02 ` [PATCH v10 05/16] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem() Fuad Tabba
@ 2025-05-31 19:13 ` Shivank Garg
2025-06-05 8:22 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Shivank Garg @ 2025-05-31 19:13 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
On 5/27/2025 11:32 PM, Fuad Tabba wrote:
> The function kvm_slot_can_be_private() is used to check whether a memory
> slot is backed by guest_memfd. Rename it to kvm_slot_has_gmem() to make
> that clearer and to decouple memory being private from guest_memfd.
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/x86/kvm/mmu/mmu.c | 4 ++--
> arch/x86/kvm/svm/sev.c | 4 ++--
> include/linux/kvm_host.h | 2 +-
> virt/kvm/guest_memfd.c | 2 +-
> 4 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 69bf2ef22ed0..2b6376986f96 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3283,7 +3283,7 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
> int kvm_mmu_max_mapping_level(struct kvm *kvm,
> const struct kvm_memory_slot *slot, gfn_t gfn)
> {
> - bool is_private = kvm_slot_can_be_private(slot) &&
> + bool is_private = kvm_slot_has_gmem(slot) &&
> kvm_mem_is_private(kvm, gfn);
>
> return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
> @@ -4496,7 +4496,7 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
> {
> int max_order, r;
>
> - if (!kvm_slot_can_be_private(fault->slot)) {
> + if (!kvm_slot_has_gmem(fault->slot)) {
> kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> return -EFAULT;
> }
> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
> index a7a7dc507336..27759ca6d2f2 100644
> --- a/arch/x86/kvm/svm/sev.c
> +++ b/arch/x86/kvm/svm/sev.c
> @@ -2378,7 +2378,7 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
> mutex_lock(&kvm->slots_lock);
>
> memslot = gfn_to_memslot(kvm, params.gfn_start);
> - if (!kvm_slot_can_be_private(memslot)) {
> + if (!kvm_slot_has_gmem(memslot)) {
> ret = -EINVAL;
> goto out;
> }
> @@ -4688,7 +4688,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code)
> }
>
> slot = gfn_to_memslot(kvm, gfn);
> - if (!kvm_slot_can_be_private(slot)) {
> + if (!kvm_slot_has_gmem(slot)) {
> pr_warn_ratelimited("SEV: Unexpected RMP fault, non-private slot for GPA 0x%llx\n",
> gpa);
> return;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 6ca7279520cf..d9616ee6acc7 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -614,7 +614,7 @@ struct kvm_memory_slot {
> #endif
> };
>
> -static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot)
> +static inline bool kvm_slot_has_gmem(const struct kvm_memory_slot *slot)
> {
> return slot && (slot->flags & KVM_MEM_GUEST_MEMFD);
> }
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index befea51bbc75..6db515833f61 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -654,7 +654,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
> return -EINVAL;
>
> slot = gfn_to_memslot(kvm, start_gfn);
> - if (!kvm_slot_can_be_private(slot))
> + if (!kvm_slot_has_gmem(slot))
> return -EINVAL;
>
> file = kvm_gmem_get_file(slot);
Reviewed-by: Shivank Garg <shivankg@amd.com>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 06/16] KVM: Fix comments that refer to slots_lock
2025-05-27 18:02 ` [PATCH v10 06/16] KVM: Fix comments that refer to slots_lock Fuad Tabba
@ 2025-05-31 19:14 ` Shivank Garg
2025-06-05 8:22 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Shivank Garg @ 2025-05-31 19:14 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
On 5/27/2025 11:32 PM, Fuad Tabba wrote:
> Fix comments so that they refer to slots_lock instead of slots_locks
> (remove trailing s).
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> include/linux/kvm_host.h | 2 +-
> virt/kvm/kvm_main.c | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index d9616ee6acc7..ae70e4e19700 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -859,7 +859,7 @@ struct kvm {
> struct notifier_block pm_notifier;
> #endif
> #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
> - /* Protected by slots_locks (for writes) and RCU (for reads) */
> + /* Protected by slots_lock (for writes) and RCU (for reads) */
> struct xarray mem_attr_array;
> #endif
> char stats_id[KVM_STATS_NAME_SIZE];
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 2468d50a9ed4..6289ea1685dd 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -333,7 +333,7 @@ void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
> * All current use cases for flushing the TLBs for a specific memslot
> * are related to dirty logging, and many do the TLB flush out of
> * mmu_lock. The interaction between the various operations on memslot
> - * must be serialized by slots_locks to ensure the TLB flush from one
> + * must be serialized by slots_lock to ensure the TLB flush from one
> * operation is observed by any other operation on the same memslot.
> */
> lockdep_assert_held(&kvm->slots_lock);
Reviewed-by: Shivank Garg <shivankg@amd.com>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path
2025-05-27 18:02 ` [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path Fuad Tabba
@ 2025-05-31 19:19 ` Shivank Garg
2025-06-02 10:10 ` Fuad Tabba
2025-06-02 10:23 ` David Hildenbrand
` (2 subsequent siblings)
3 siblings, 1 reply; 62+ messages in thread
From: Shivank Garg @ 2025-05-31 19:19 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
On 5/27/2025 11:32 PM, Fuad Tabba wrote:
> The comment that refers to the path where the user-visible memslot flags
> are refers to an outdated path and has a typo. Make it refer to the
> correct path.
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> include/linux/kvm_host.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index ae70e4e19700..80371475818f 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -52,7 +52,7 @@
> /*
> * The bit 16 ~ bit 31 of kvm_userspace_memory_region::flags are internally
> * used in kvm, other bits are visible for userspace which are defined in
> - * include/linux/kvm_h.
> + * include/uapi/linux/kvm.h.
Reviewed-by: Shivank Garg <shivankg@amd.com>
> */
> #define KVM_MEMSLOT_INVALID (1UL << 16)
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-05-27 18:02 ` [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages Fuad Tabba
2025-05-28 23:17 ` kernel test robot
@ 2025-06-02 10:05 ` Fuad Tabba
2025-06-02 10:43 ` Shivank Garg
` (4 subsequent siblings)
6 siblings, 0 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-06-02 10:05 UTC (permalink / raw)
To: kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
Hi,
On Tue, 27 May 2025 at 19:03, Fuad Tabba <tabba@google.com> wrote:
>
> This patch enables support for shared memory in guest_memfd, including
> mapping that memory at the host userspace. This support is gated by the
> configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> guest_memfd instance.
>
> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 10 ++++
> arch/x86/kvm/x86.c | 3 +-
> include/linux/kvm_host.h | 13 ++++++
> include/uapi/linux/kvm.h | 1 +
> virt/kvm/Kconfig | 5 ++
> virt/kvm/guest_memfd.c | 81 +++++++++++++++++++++++++++++++++
> 6 files changed, 112 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 709cc2a7ba66..ce9ad4cd93c5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
>
> #ifdef CONFIG_KVM_GMEM
> #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> +
> +/*
> + * CoCo VMs with hardware support that use guest_memfd only for backing private
> + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
> + */
> +#define kvm_arch_supports_gmem_shared_mem(kvm) \
> + (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \
> + ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \
> + (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
> #else
> #define kvm_arch_supports_gmem(kvm) false
> +#define kvm_arch_supports_gmem_shared_mem(kvm) false
> #endif
>
> #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 035ced06b2dd..2a02f2457c42 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> return -EINVAL;
>
> kvm->arch.vm_type = type;
> - kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
> + kvm->arch.supports_gmem =
> + type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> /* Decided by the vendor code for other VM types. */
> kvm->arch.pre_fault_allowed =
> type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 80371475818f..ba83547e62b0 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> }
> #endif
>
> +/*
> + * Returns true if this VM supports shared mem in guest_memfd.
> + *
> + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
> + * guest_memfd is enabled.
> + */
> +#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
> +{
> + return false;
> +}
> +#endif
> +
> #ifndef kvm_arch_has_readonly_mem
> static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> {
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index b6ae8ad8934b..c2714c9d1a0e 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
> #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
>
> #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
> +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0)
>
> struct kvm_create_guest_memfd {
> __u64 size;
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index 559c93ad90be..df225298ab10 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> config HAVE_KVM_ARCH_GMEM_INVALIDATE
> bool
> depends on KVM_GMEM
> +
> +config KVM_GMEM_SHARED_MEM
> + select KVM_GMEM
> + bool
> + prompt "Enable support for non-private (shared) memory in guest_memfd"
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 6db515833f61..5d34712f64fc 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> return gfn - slot->base_gfn + slot->gmem.pgoff;
> }
>
> +static bool kvm_gmem_supports_shared(struct inode *inode)
> +{
> + u64 flags;
> +
> + if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
> + return false;
> +
> + flags = (u64)inode->i_private;
> +
> + return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> +}
> +
> +
> +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> +{
> + struct inode *inode = file_inode(vmf->vma->vm_file);
> + struct folio *folio;
> + vm_fault_t ret = VM_FAULT_LOCKED;
> +
This was mentioned in a different thread [*], but it should also be
mentioned here. This is missing a bounds check. I have added the
following for the next spin:
+ if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+ return VM_FAULT_SIGBUS;
+
I've also added a selftest to test this.
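The idea is roughly the following standalone sketch (simplified and untested as shown,
not the actual selftest; it assumes a kernel with this series applied, so that
linux/kvm.h has struct kvm_create_guest_memfd and the guest_memfd fd can be mmap()ed):

#include <fcntl.h>
#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

#ifndef GUEST_MEMFD_FLAG_SUPPORT_SHARED
#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0)
#endif

static sigjmp_buf sigbus_jmp;

static void sigbus_handler(int sig)
{
        siglongjmp(sigbus_jmp, 1);
}

int main(void)
{
        struct kvm_create_guest_memfd gmem = {
                .size  = 0x2000,        /* two pages */
                .flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
        };
        int kvm_fd, vm_fd, gmem_fd;
        char *mem;

        kvm_fd = open("/dev/kvm", O_RDWR);
        vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, 0);
        gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
        if (kvm_fd < 0 || vm_fd < 0 || gmem_fd < 0) {
                perror("setup");
                return 1;
        }

        /* Map more than the file size so an access past EOF stays in the VMA. */
        mem = mmap(NULL, 2 * gmem.size, PROT_READ | PROT_WRITE, MAP_SHARED,
                   gmem_fd, 0);
        if (mem == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        mem[0] = 1;                     /* in-bounds access should just work */

        signal(SIGBUS, sigbus_handler);
        if (sigsetjmp(sigbus_jmp, 1) == 0) {
                mem[gmem.size] = 1;     /* first byte past i_size */
                printf("no SIGBUS past EOF: bounds check missing?\n");
                return 1;
        }

        printf("got SIGBUS past EOF, as expected\n");
        return 0;
}

With the bounds check in place the access past i_size gets SIGBUS; without it,
the fault handler can instantiate a folio beyond EOF.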
Cheers,
/fuad
[*] https://lore.kernel.org/all/CA+EHjTyv9KsRsbf6ec9u5mLjBbcnNti9k8hPi9Do59Mw7ayYqw@mail.gmail.com/
> + folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> + if (IS_ERR(folio)) {
> + int err = PTR_ERR(folio);
> +
> + if (err == -EAGAIN)
> + return VM_FAULT_RETRY;
> +
> + return vmf_error(err);
> + }
> +
> + if (WARN_ON_ONCE(folio_test_large(folio))) {
> + ret = VM_FAULT_SIGBUS;
> + goto out_folio;
> + }
> +
> + if (!folio_test_uptodate(folio)) {
> + clear_highpage(folio_page(folio, 0));
> + kvm_gmem_mark_prepared(folio);
> + }
> +
> + vmf->page = folio_file_page(folio, vmf->pgoff);
> +
> +out_folio:
> + if (ret != VM_FAULT_LOCKED) {
> + folio_unlock(folio);
> + folio_put(folio);
> + }
> +
> + return ret;
> +}
> +
> +static const struct vm_operations_struct kvm_gmem_vm_ops = {
> + .fault = kvm_gmem_fault_shared,
> +};
> +
> +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> +{
> + if (!kvm_gmem_supports_shared(file_inode(file)))
> + return -ENODEV;
> +
> + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> + (VM_SHARED | VM_MAYSHARE)) {
> + return -EINVAL;
> + }
> +
> + vma->vm_ops = &kvm_gmem_vm_ops;
> +
> + return 0;
> +}
> +#else
> +#define kvm_gmem_mmap NULL
> +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> +
> static struct file_operations kvm_gmem_fops = {
> + .mmap = kvm_gmem_mmap,
> .open = generic_file_open,
> .release = kvm_gmem_release,
> .fallocate = kvm_gmem_fallocate,
> @@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> u64 flags = args->flags;
> u64 valid_flags = 0;
>
> + if (kvm_arch_supports_gmem_shared_mem(kvm))
> + valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> +
> if (flags & ~valid_flags)
> return -EINVAL;
>
> @@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> offset + size > i_size_read(inode))
> goto err;
>
> + if (kvm_gmem_supports_shared(inode) &&
> + !kvm_arch_supports_gmem_shared_mem(kvm))
> + goto err;
> +
> filemap_invalidate_lock(inode->i_mapping);
>
> start = offset >> PAGE_SHIFT;
> --
> 2.49.0.1164.gab81da1b16-goog
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path
2025-05-31 19:19 ` Shivank Garg
@ 2025-06-02 10:10 ` Fuad Tabba
0 siblings, 0 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-06-02 10:10 UTC (permalink / raw)
To: Shivank Garg
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
On Sat, 31 May 2025 at 20:20, Shivank Garg <shivankg@amd.com> wrote:
>
>
>
> On 5/27/2025 11:32 PM, Fuad Tabba wrote:
> > The comment that refers to the path where the user-visible memslot flags
> > are refers to an outdated path and has a typo. Make it refer to the
> > correct path.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > include/linux/kvm_host.h | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index ae70e4e19700..80371475818f 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -52,7 +52,7 @@
> > /*
> > * The bit 16 ~ bit 31 of kvm_userspace_memory_region::flags are internally
> > * used in kvm, other bits are visible for userspace which are defined in
> > - * include/linux/kvm_h.
> > + * include/uapi/linux/kvm.h.
>
> Reviewed-by: Shivank Garg <shivankg@amd.com>
Thanks for the reviews!
/fuad
> > */
> > #define KVM_MEMSLOT_INVALID (1UL << 16)
> >
>
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path
2025-05-27 18:02 ` [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path Fuad Tabba
2025-05-31 19:19 ` Shivank Garg
@ 2025-06-02 10:23 ` David Hildenbrand
2025-06-04 9:00 ` Gavin Shan
2025-06-05 8:22 ` Vlastimil Babka
3 siblings, 0 replies; 62+ messages in thread
From: David Hildenbrand @ 2025-06-02 10:23 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 27.05.25 20:02, Fuad Tabba wrote:
> The comment that refers to the path where the user-visible memslot flags
> are refers to an outdated path and has a typo. Make it refer to the
> correct path.
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> include/linux/kvm_host.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index ae70e4e19700..80371475818f 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -52,7 +52,7 @@
> /*
> * The bit 16 ~ bit 31 of kvm_userspace_memory_region::flags are internally
> * used in kvm, other bits are visible for userspace which are defined in
> - * include/linux/kvm_h.
> + * include/uapi/linux/kvm.h.
> */
> #define KVM_MEMSLOT_INVALID (1UL << 16)
>
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-05-27 18:02 ` [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages Fuad Tabba
2025-05-28 23:17 ` kernel test robot
2025-06-02 10:05 ` Fuad Tabba
@ 2025-06-02 10:43 ` Shivank Garg
2025-06-02 11:13 ` Fuad Tabba
2025-06-04 6:02 ` Gavin Shan
` (3 subsequent siblings)
6 siblings, 1 reply; 62+ messages in thread
From: Shivank Garg @ 2025-06-02 10:43 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny, qemu-devel, qemu-discuss, linux-coco@lists.linux.dev,
nikunj, Bharata B Rao
On 5/27/2025 11:32 PM, Fuad Tabba wrote:
> This patch enables support for shared memory in guest_memfd, including
> mapping that memory at the host userspace. This support is gated by the
> configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> guest_memfd instance.
>
> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 10 ++++
> arch/x86/kvm/x86.c | 3 +-
> include/linux/kvm_host.h | 13 ++++++
> include/uapi/linux/kvm.h | 1 +
> virt/kvm/Kconfig | 5 ++
> virt/kvm/guest_memfd.c | 81 +++++++++++++++++++++++++++++++++
> 6 files changed, 112 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 709cc2a7ba66..ce9ad4cd93c5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
>
> #ifdef CONFIG_KVM_GMEM
> #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> +
> +/*
> + * CoCo VMs with hardware support that use guest_memfd only for backing private
> + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
> + */
> +#define kvm_arch_supports_gmem_shared_mem(kvm) \
> + (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \
> + ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \
> + (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
> #else
> #define kvm_arch_supports_gmem(kvm) false
> +#define kvm_arch_supports_gmem_shared_mem(kvm) false
> #endif
>
> #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 035ced06b2dd..2a02f2457c42 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> return -EINVAL;
>
> kvm->arch.vm_type = type;
> - kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
> + kvm->arch.supports_gmem =
> + type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
I've been testing this patch series. I did not see a failure with the guest_memfd selftests, but I encountered a regression on my system with KVM_X86_DEFAULT_VM.
I'm getting the error below in QEMU:
Issue #1 - QEMU fails to start with KVM_X86_DEFAULT_VM, showing:
qemu-system-x86_64: kvm_set_user_memory_region: KVM_SET_USER_MEMORY_REGION2 failed, slot=65536, start=0x0, size=0x80000000, flags=0x0, guest_memfd=-1, guest_memfd_offset=0x0: Invalid argument
kvm_set_phys_mem: error registering slot: Invalid argument
I did some digging to find out:
In kvm_set_memory_region(), the check as_id >= kvm_arch_nr_memslot_as_ids(kvm) now evaluates to true.
(as_id:1 kvm_arch_nr_memslot_as_ids(kvm):1 id:0 KVM_MEM_SLOTS_NUM:32767)
/* SMM is currently unsupported for guests with guest_memfd (esp private) memory. */
# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_supports_gmem(kvm) ? 1 : 2)
which now evaluates to 1.
I'm still debugging to find answers to these questions:
Why is slot=65536 (as_id = mem->slot >> 16 = 1) requested in the KVM_X86_DEFAULT_VM case,
which makes it fail the above check?
Was this change intentional for KVM_X86_DEFAULT_VM? Should this be considered a KVM regression or a QEMU [1] compatibility issue?
---
Issue #2: Testing challenges with the QEMU changes [2] and the mmap implementation:
Currently, QEMU only enables guest_memfd for SEV_SNP_GUEST (KVM_X86_SNP_VM) by setting require_guest_memfd=true. However, the new mmap implementation doesn't support SNP guests per kvm_arch_supports_gmem_shared_mem().
static void
sev_snp_guest_instance_init(Object *obj)
{
ConfidentialGuestSupport *cgs = CONFIDENTIAL_GUEST_SUPPORT(obj);
SevSnpGuestState *sev_snp_guest = SEV_SNP_GUEST(obj);
cgs->require_guest_memfd = true;
To work around this, I tried two things, both of which failed:
1. Enabling guest_memfd for KVM_X86_DEFAULT_VM in QEMU: hits Issue #1 above.
2. Adding KVM_X86_SNP_VM to kvm_arch_supports_gmem_shared_mem(): mmap() succeeds but QEMU gets stuck during boot.
My patch adding NUMA policy support for guest_memfd [3] depends on mmap() support and extends
kvm_gmem_vm_ops with get_policy/set_policy operations.
Since NUMA policy applies to both shared and private memory scenarios, what checks should
be included in the mmap() implementation, and what's the recommended approach for
integrating with your shared memory restrictions?
[1] https://github.com/qemu/qemu
[2] Snippet of the QEMU changes to add mmap support
+ new_block->guest_memfd = kvm_create_guest_memfd(
+ new_block->max_length, /*0 */GUEST_MEMFD_FLAG_SUPPORT_SHARED, errp);
+ if (new_block->guest_memfd < 0) {
+ qemu_mutex_unlock_ramlist();
+ goto out_free;
+ }
+ new_block->ptr_memfd = mmap(NULL, new_block->max_length,
+ PROT_READ | PROT_WRITE,
+ MAP_SHARED,
+ new_block->guest_memfd, 0);
+ if (new_block->ptr_memfd == MAP_FAILED) {
+ error_report("Failed to mmap guest_memfd");
+ qemu_mutex_unlock_ramlist();
+ goto out_free;
+ }
+ printf("mmap successful\n");
+ }
[3] https://lore.kernel.org/linux-mm/20250408112402.181574-1-shivankg@amd.com
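For context, the intended userspace flow in [3] is essentially an mbind() on the mapping
returned by mmap(). A rough illustrative sketch (map_gmem_on_node() and its parameters are
made up for this example, and it needs libnuma for the mbind() wrapper):

        #include <stdio.h>
        #include <sys/mman.h>
        #include <numaif.h>

        /* Map a shared guest_memfd and bind the mapping to a single NUMA node. */
        static void *map_gmem_on_node(int gmem_fd, size_t size, int node)
        {
                unsigned long nodemask = 1UL << node;
                void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                                 gmem_fd, 0);

                if (ptr == MAP_FAILED)
                        return NULL;

                /* Only takes effect once kvm_gmem_vm_ops wires up ->set_policy. */
                if (mbind(ptr, size, MPOL_BIND, &nodemask, 8 * sizeof(nodemask), 0))
                        perror("mbind");

                return ptr;
        }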
> /* Decided by the vendor code for other VM types. */
> kvm->arch.pre_fault_allowed =
> type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 80371475818f..ba83547e62b0 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> }
> #endif
>
> +/*
> + * Returns true if this VM supports shared mem in guest_memfd.
> + *
> + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
> + * guest_memfd is enabled.
> + */
> +#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
> +{
> + return false;
> +}
> +#endif
> +
> #ifndef kvm_arch_has_readonly_mem
> static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> {
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index b6ae8ad8934b..c2714c9d1a0e 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
> #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
>
> #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
> +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0)
>
> struct kvm_create_guest_memfd {
> __u64 size;
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index 559c93ad90be..df225298ab10 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> config HAVE_KVM_ARCH_GMEM_INVALIDATE
> bool
> depends on KVM_GMEM
> +
> +config KVM_GMEM_SHARED_MEM
> + select KVM_GMEM
> + bool
> + prompt "Enable support for non-private (shared) memory in guest_memfd"
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 6db515833f61..5d34712f64fc 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> return gfn - slot->base_gfn + slot->gmem.pgoff;
> }
>
> +static bool kvm_gmem_supports_shared(struct inode *inode)
> +{
> + u64 flags;
> +
> + if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
> + return false;
> +
> + flags = (u64)inode->i_private;
> +
> + return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> +}
> +
> +
> +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> +{
> + struct inode *inode = file_inode(vmf->vma->vm_file);
> + struct folio *folio;
> + vm_fault_t ret = VM_FAULT_LOCKED;
> +
> + folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> + if (IS_ERR(folio)) {
> + int err = PTR_ERR(folio);
> +
> + if (err == -EAGAIN)
> + return VM_FAULT_RETRY;
> +
> + return vmf_error(err);
> + }
> +
> + if (WARN_ON_ONCE(folio_test_large(folio))) {
> + ret = VM_FAULT_SIGBUS;
> + goto out_folio;
> + }
> +
> + if (!folio_test_uptodate(folio)) {
> + clear_highpage(folio_page(folio, 0));
> + kvm_gmem_mark_prepared(folio);
> + }
> +
> + vmf->page = folio_file_page(folio, vmf->pgoff);
> +
> +out_folio:
> + if (ret != VM_FAULT_LOCKED) {
> + folio_unlock(folio);
> + folio_put(folio);
> + }
> +
> + return ret;
> +}
> +
> +static const struct vm_operations_struct kvm_gmem_vm_ops = {
> + .fault = kvm_gmem_fault_shared,
> +};
> +
> +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> +{
> + if (!kvm_gmem_supports_shared(file_inode(file)))
> + return -ENODEV;
> +
> + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> + (VM_SHARED | VM_MAYSHARE)) {
> + return -EINVAL;
> + }
> +
> + vma->vm_ops = &kvm_gmem_vm_ops;
> +
> + return 0;
> +}
> +#else
> +#define kvm_gmem_mmap NULL
> +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> +
> static struct file_operations kvm_gmem_fops = {
> + .mmap = kvm_gmem_mmap,
> .open = generic_file_open,
> .release = kvm_gmem_release,
> .fallocate = kvm_gmem_fallocate,
> @@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> u64 flags = args->flags;
> u64 valid_flags = 0;
>
> + if (kvm_arch_supports_gmem_shared_mem(kvm))
> + valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> +
> if (flags & ~valid_flags)
> return -EINVAL;
>
> @@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> offset + size > i_size_read(inode))
> goto err;
>
> + if (kvm_gmem_supports_shared(inode) &&
> + !kvm_arch_supports_gmem_shared_mem(kvm))
> + goto err;
> +
> filemap_invalidate_lock(inode->i_mapping);
>
> start = offset >> PAGE_SHIFT;
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-06-02 10:43 ` Shivank Garg
@ 2025-06-02 11:13 ` Fuad Tabba
0 siblings, 0 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-06-02 11:13 UTC (permalink / raw)
To: Shivank Garg
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny, qemu-devel, qemu-discuss,
linux-coco@lists.linux.dev, nikunj, Bharata B Rao
Hi Shivank,
On Mon, 2 Jun 2025 at 11:44, Shivank Garg <shivankg@amd.com> wrote:
>
>
>
> On 5/27/2025 11:32 PM, Fuad Tabba wrote:
> > This patch enables support for shared memory in guest_memfd, including
> > mapping that memory at the host userspace. This support is gated by the
> > configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> > flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> > guest_memfd instance.
> >
> > Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > arch/x86/include/asm/kvm_host.h | 10 ++++
> > arch/x86/kvm/x86.c | 3 +-
> > include/linux/kvm_host.h | 13 ++++++
> > include/uapi/linux/kvm.h | 1 +
> > virt/kvm/Kconfig | 5 ++
> > virt/kvm/guest_memfd.c | 81 +++++++++++++++++++++++++++++++++
> > 6 files changed, 112 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 709cc2a7ba66..ce9ad4cd93c5 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> >
> > #ifdef CONFIG_KVM_GMEM
> > #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> > +
> > +/*
> > + * CoCo VMs with hardware support that use guest_memfd only for backing private
> > + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
> > + */
> > +#define kvm_arch_supports_gmem_shared_mem(kvm) \
> > + (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \
> > + ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \
> > + (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
> > #else
> > #define kvm_arch_supports_gmem(kvm) false
> > +#define kvm_arch_supports_gmem_shared_mem(kvm) false
> > #endif
> >
> > #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 035ced06b2dd..2a02f2457c42 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> > return -EINVAL;
> >
> > kvm->arch.vm_type = type;
> > - kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
> > + kvm->arch.supports_gmem =
> > + type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
>
>
> I've been testing this patch series. I did not see a failure with the guest_memfd selftests, but I encountered a regression on my system with KVM_X86_DEFAULT_VM.
>
> I'm getting the error below in QEMU:
> Issue #1 - QEMU fails to start with KVM_X86_DEFAULT_VM, showing:
>
> qemu-system-x86_64: kvm_set_user_memory_region: KVM_SET_USER_MEMORY_REGION2 failed, slot=65536, start=0x0, size=0x80000000, flags=0x0, guest_memfd=-1, guest_memfd_offset=0x0: Invalid argument
> kvm_set_phys_mem: error registering slot: Invalid argument
>
> I did some digging to find out:
> In kvm_set_memory_region(), the check as_id >= kvm_arch_nr_memslot_as_ids(kvm) now evaluates to true.
> (as_id:1 kvm_arch_nr_memslot_as_ids(kvm):1 id:0 KVM_MEM_SLOTS_NUM:32767)
>
> /* SMM is currently unsupported for guests with guest_memfd (esp private) memory. */
> # define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_supports_gmem(kvm) ? 1 : 2)
> which now evaluates to 1.
>
> I'm still debugging to find answers to these questions:
> Why is slot=65536 (as_id = mem->slot >> 16 = 1) requested in the KVM_X86_DEFAULT_VM case,
> which makes it fail the above check?
> Was this change intentional for KVM_X86_DEFAULT_VM? Should this be considered a KVM regression or a QEMU [1] compatibility issue?
Yes, this was intentional. We talked about this during the guest_memfd
biweekly sync on May 15 [*]. We came to the conclusion that we cannot
support SMM with private memory. KVM_X86_DEFAULT_VM cannot have
private memory, but it can use guest_memfd with shared memory.
[*] https://docs.google.com/document/d/1M6766BzdY1Lhk7LiR5IqVR8B8mG3cr-cxTxOrAosPOk/edit?tab=t.0#heading=h.b4x45fcfgzvo
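For reference, slot=65536 in the failing call is slot 0 in address space 1, i.e. the SMM
address space on x86: KVM packs the address space id into the upper 16 bits of the slot
field of KVM_SET_USER_MEMORY_REGION(2). Roughly (illustrative names only):

        /*
         * Bits 0-15 of 'slot' are the slot id, bits 16-31 the address
         * space id; address space 1 is SMM on x86.
         */
        region.slot = (as_id << 16) | slot_id;  /* 65536 == as_id 1, slot 0 */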
> ---
> Issue #2: Testing challenges with the QEMU changes [2] and the mmap implementation:
> Currently, QEMU only enables guest_memfd for SEV_SNP_GUEST (KVM_X86_SNP_VM) by setting require_guest_memfd=true. However, the new mmap implementation doesn't support SNP guests per kvm_arch_supports_gmem_shared_mem().
>
> static void
> sev_snp_guest_instance_init(Object *obj)
> {
> ConfidentialGuestSupport *cgs = CONFIDENTIAL_GUEST_SUPPORT(obj);
> SevSnpGuestState *sev_snp_guest = SEV_SNP_GUEST(obj);
>
> cgs->require_guest_memfd = true;
>
>
> To work around this, I tried two things, both of which failed:
> 1. Enabling guest_memfd for KVM_X86_DEFAULT_VM in QEMU: hits Issue #1 above.
> 2. Adding KVM_X86_SNP_VM to kvm_arch_supports_gmem_shared_mem(): mmap() succeeds but QEMU gets stuck during boot.
>
>
>
> My patch adding NUMA policy support for guest_memfd [3] depends on mmap() support and extends
> kvm_gmem_vm_ops with get_policy/set_policy operations.
> Since NUMA policy applies to both shared and private memory scenarios, what checks should
> be included in the mmap() implementation, and what's the recommended approach for
> integrating with your shared memory restrictions?
KVM_X86_SNP_VM doesn't support in-place shared memory yet, so I think
this is to be expected for now.
Thanks,
/fuad
>
> [1] https://github.com/qemu/qemu
> [2] Snippet of the QEMU changes to add mmap support
>
> + new_block->guest_memfd = kvm_create_guest_memfd(
> + new_block->max_length, /*0 */GUEST_MEMFD_FLAG_SUPPORT_SHARED, errp);
> + if (new_block->guest_memfd < 0) {
> + qemu_mutex_unlock_ramlist();
> + goto out_free;
> + }
> + new_block->ptr_memfd = mmap(NULL, new_block->max_length,
> + PROT_READ | PROT_WRITE,
> + MAP_SHARED,
> + new_block->guest_memfd, 0);
> + if (new_block->ptr_memfd == MAP_FAILED) {
> + error_report("Failed to mmap guest_memfd");
> + qemu_mutex_unlock_ramlist();
> + goto out_free;
> + }
> + printf("mmap successful\n");
> + }
> [3] https://lore.kernel.org/linux-mm/20250408112402.181574-1-shivankg@amd.com
>
>
>
> > /* Decided by the vendor code for other VM types. */
> > kvm->arch.pre_fault_allowed =
> > type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 80371475818f..ba83547e62b0 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> > }
> > #endif
> >
> > +/*
> > + * Returns true if this VM supports shared mem in guest_memfd.
> > + *
> > + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
> > + * guest_memfd is enabled.
> > + */
> > +#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> > +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
> > +{
> > + return false;
> > +}
> > +#endif
> > +
> > #ifndef kvm_arch_has_readonly_mem
> > static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> > {
> > diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> > index b6ae8ad8934b..c2714c9d1a0e 100644
> > --- a/include/uapi/linux/kvm.h
> > +++ b/include/uapi/linux/kvm.h
> > @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
> > #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
> >
> > #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
> > +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0)
> >
> > struct kvm_create_guest_memfd {
> > __u64 size;
> > diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> > index 559c93ad90be..df225298ab10 100644
> > --- a/virt/kvm/Kconfig
> > +++ b/virt/kvm/Kconfig
> > @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> > config HAVE_KVM_ARCH_GMEM_INVALIDATE
> > bool
> > depends on KVM_GMEM
> > +
> > +config KVM_GMEM_SHARED_MEM
> > + select KVM_GMEM
> > + bool
> > + prompt "Enable support for non-private (shared) memory in guest_memfd"
> > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > index 6db515833f61..5d34712f64fc 100644
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> > return gfn - slot->base_gfn + slot->gmem.pgoff;
> > }
> >
> > +static bool kvm_gmem_supports_shared(struct inode *inode)
> > +{
> > + u64 flags;
> > +
> > + if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
> > + return false;
> > +
> > + flags = (u64)inode->i_private;
> > +
> > + return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +}
> > +
> > +
> > +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> > +{
> > + struct inode *inode = file_inode(vmf->vma->vm_file);
> > + struct folio *folio;
> > + vm_fault_t ret = VM_FAULT_LOCKED;
> > +
> > + folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> > + if (IS_ERR(folio)) {
> > + int err = PTR_ERR(folio);
> > +
> > + if (err == -EAGAIN)
> > + return VM_FAULT_RETRY;
> > +
> > + return vmf_error(err);
> > + }
> > +
> > + if (WARN_ON_ONCE(folio_test_large(folio))) {
> > + ret = VM_FAULT_SIGBUS;
> > + goto out_folio;
> > + }
> > +
> > + if (!folio_test_uptodate(folio)) {
> > + clear_highpage(folio_page(folio, 0));
> > + kvm_gmem_mark_prepared(folio);
> > + }
> > +
> > + vmf->page = folio_file_page(folio, vmf->pgoff);
> > +
> > +out_folio:
> > + if (ret != VM_FAULT_LOCKED) {
> > + folio_unlock(folio);
> > + folio_put(folio);
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +static const struct vm_operations_struct kvm_gmem_vm_ops = {
> > + .fault = kvm_gmem_fault_shared,
> > +};
> > +
> > +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> > +{
> > + if (!kvm_gmem_supports_shared(file_inode(file)))
> > + return -ENODEV;
> > +
> > + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> > + (VM_SHARED | VM_MAYSHARE)) {
> > + return -EINVAL;
> > + }
> > +
> > + vma->vm_ops = &kvm_gmem_vm_ops;
> > +
> > + return 0;
> > +}
> > +#else
> > +#define kvm_gmem_mmap NULL
> > +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> > +
> > static struct file_operations kvm_gmem_fops = {
> > + .mmap = kvm_gmem_mmap,
> > .open = generic_file_open,
> > .release = kvm_gmem_release,
> > .fallocate = kvm_gmem_fallocate,
> > @@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> > u64 flags = args->flags;
> > u64 valid_flags = 0;
> >
> > + if (kvm_arch_supports_gmem_shared_mem(kvm))
> > + valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +
> > if (flags & ~valid_flags)
> > return -EINVAL;
> >
> > @@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> > offset + size > i_size_read(inode))
> > goto err;
> >
> > + if (kvm_gmem_supports_shared(inode) &&
> > + !kvm_arch_supports_gmem_shared_mem(kvm))
> > + goto err;
> > +
> > filemap_invalidate_lock(inode->i_mapping);
> >
> > start = offset >> PAGE_SHIFT;
>
>
>
>
>
>
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-05-27 18:02 ` [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages Fuad Tabba
` (2 preceding siblings ...)
2025-06-02 10:43 ` Shivank Garg
@ 2025-06-04 6:02 ` Gavin Shan
2025-06-04 8:37 ` Fuad Tabba
2025-06-04 12:26 ` David Hildenbrand
` (2 subsequent siblings)
6 siblings, 1 reply; 62+ messages in thread
From: Gavin Shan @ 2025-06-04 6:02 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
Hi Fuad,
On 5/28/25 4:02 AM, Fuad Tabba wrote:
> This patch enables support for shared memory in guest_memfd, including
> mapping that memory at the host userspace. This support is gated by the
> configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> guest_memfd instance.
>
> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 10 ++++
> arch/x86/kvm/x86.c | 3 +-
> include/linux/kvm_host.h | 13 ++++++
> include/uapi/linux/kvm.h | 1 +
> virt/kvm/Kconfig | 5 ++
> virt/kvm/guest_memfd.c | 81 +++++++++++++++++++++++++++++++++
> 6 files changed, 112 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 709cc2a7ba66..ce9ad4cd93c5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
>
> #ifdef CONFIG_KVM_GMEM
> #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> +
> +/*
> + * CoCo VMs with hardware support that use guest_memfd only for backing private
> + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
> + */
> +#define kvm_arch_supports_gmem_shared_mem(kvm) \
> + (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \
> + ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \
> + (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
> #else
> #define kvm_arch_supports_gmem(kvm) false
> +#define kvm_arch_supports_gmem_shared_mem(kvm) false
> #endif
>
> #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 035ced06b2dd..2a02f2457c42 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> return -EINVAL;
>
> kvm->arch.vm_type = type;
> - kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
> + kvm->arch.supports_gmem =
> + type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> /* Decided by the vendor code for other VM types. */
> kvm->arch.pre_fault_allowed =
> type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 80371475818f..ba83547e62b0 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> }
> #endif
>
> +/*
> + * Returns true if this VM supports shared mem in guest_memfd.
> + *
> + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
> + * guest_memfd is enabled.
> + */
> +#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
> +{
> + return false;
> +}
> +#endif
> +
> #ifndef kvm_arch_has_readonly_mem
> static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> {
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index b6ae8ad8934b..c2714c9d1a0e 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
> #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
>
> #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
> +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0)
>
> struct kvm_create_guest_memfd {
> __u64 size;
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index 559c93ad90be..df225298ab10 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> config HAVE_KVM_ARCH_GMEM_INVALIDATE
> bool
> depends on KVM_GMEM
> +
> +config KVM_GMEM_SHARED_MEM
> + select KVM_GMEM
> + bool
> + prompt "Enable support for non-private (shared) memory in guest_memfd"
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 6db515833f61..5d34712f64fc 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> return gfn - slot->base_gfn + slot->gmem.pgoff;
> }
>
> +static bool kvm_gmem_supports_shared(struct inode *inode)
> +{
> + u64 flags;
> +
> + if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
> + return false;
> +
> + flags = (u64)inode->i_private;
> +
> + return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> +}
> +
> +
> +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> +{
> + struct inode *inode = file_inode(vmf->vma->vm_file);
> + struct folio *folio;
> + vm_fault_t ret = VM_FAULT_LOCKED;
> +
> + folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> + if (IS_ERR(folio)) {
> + int err = PTR_ERR(folio);
> +
> + if (err == -EAGAIN)
> + return VM_FAULT_RETRY;
> +
> + return vmf_error(err);
> + }
> +
> + if (WARN_ON_ONCE(folio_test_large(folio))) {
> + ret = VM_FAULT_SIGBUS;
> + goto out_folio;
> + }
> +
> + if (!folio_test_uptodate(folio)) {
> + clear_highpage(folio_page(folio, 0));
> + kvm_gmem_mark_prepared(folio);
> + }
> +
> + vmf->page = folio_file_page(folio, vmf->pgoff);
> +
> +out_folio:
> + if (ret != VM_FAULT_LOCKED) {
> + folio_unlock(folio);
> + folio_put(folio);
> + }
> +
> + return ret;
> +}
> +
> +static const struct vm_operations_struct kvm_gmem_vm_ops = {
> + .fault = kvm_gmem_fault_shared,
> +};
> +
> +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> +{
> + if (!kvm_gmem_supports_shared(file_inode(file)))
> + return -ENODEV;
> +
> + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> + (VM_SHARED | VM_MAYSHARE)) {
> + return -EINVAL;
> + }
> +
> + vma->vm_ops = &kvm_gmem_vm_ops;
> +
> + return 0;
> +}
> +#else
> +#define kvm_gmem_mmap NULL
> +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> +
nit: This hunk of code doesn't have to be guarded by CONFIG_KVM_GMEM_SHARED_MEM.
With the guard removed, kvm_gmem_mmap() still returns an error (-ENODEV) for a
non-shareable (non-mappable) file, which has the same effect as "kvm_gmem_fops.mmap = NULL".
I may have missed other reasons for keeping the guard here.
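Something like the below (untested) is what I had in mind: drop the #ifdef and rely on
kvm_gmem_supports_shared() already returning false when CONFIG_KVM_GMEM_SHARED_MEM is
disabled. kvm_gmem_fault_shared() and kvm_gmem_vm_ops would then have to be built
unconditionally as well, and ".mmap = kvm_gmem_mmap" set unconditionally in kvm_gmem_fops:

        static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
        {
                /* false when CONFIG_KVM_GMEM_SHARED_MEM is off or the flag isn't set */
                if (!kvm_gmem_supports_shared(file_inode(file)))
                        return -ENODEV;

                if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
                    (VM_SHARED | VM_MAYSHARE))
                        return -EINVAL;

                vma->vm_ops = &kvm_gmem_vm_ops;

                return 0;
        }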
> static struct file_operations kvm_gmem_fops = {
> + .mmap = kvm_gmem_mmap,
> .open = generic_file_open,
> .release = kvm_gmem_release,
> .fallocate = kvm_gmem_fallocate,
> @@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> u64 flags = args->flags;
> u64 valid_flags = 0;
>
> + if (kvm_arch_supports_gmem_shared_mem(kvm))
> + valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> +
> if (flags & ~valid_flags)
> return -EINVAL;
>
> @@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> offset + size > i_size_read(inode))
> goto err;
>
> + if (kvm_gmem_supports_shared(inode) &&
> + !kvm_arch_supports_gmem_shared_mem(kvm))
> + goto err;
> +
This check looks unnecessary, if I'm not missing anything. The file (inode) can't be created
by kvm_gmem_create(GUEST_MEMFD_FLAG_SUPPORT_SHARED) when !kvm_arch_supports_gmem_shared_mem(),
which means "kvm_gmem_supports_shared(inode) == true" implies "kvm_arch_supports_gmem_shared_mem(kvm) == true".
In that case, we can never fail this check, can we? :-)
> filemap_invalidate_lock(inode->i_mapping);
>
> start = offset >> PAGE_SHIFT;
Thanks,
Gavin
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 12/16] KVM: arm64: Refactor user_mem_abort() calculation of force_pte
2025-05-27 18:02 ` [PATCH v10 12/16] KVM: arm64: Refactor user_mem_abort() calculation of force_pte Fuad Tabba
@ 2025-06-04 6:05 ` Gavin Shan
0 siblings, 0 replies; 62+ messages in thread
From: Gavin Shan @ 2025-06-04 6:05 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
On 5/28/25 4:02 AM, Fuad Tabba wrote:
> To simplify the code and to make the assumptions clearer,
> refactor user_mem_abort() by immediately setting force_pte to
> true if the conditions are met. Also, remove the comment about
> logging_active being guaranteed to never be true for VM_PFNMAP
> memslots, since it's not actually correct.
>
> No functional change intended.
>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/arm64/kvm/mmu.c | 13 ++++---------
> 1 file changed, 4 insertions(+), 9 deletions(-)
>
Reviewed-by: Gavin Shan <gshan@redhat.com>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-06-04 6:02 ` Gavin Shan
@ 2025-06-04 8:37 ` Fuad Tabba
0 siblings, 0 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-06-04 8:37 UTC (permalink / raw)
To: Gavin Shan
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
Hi Gavin,
On Wed, 4 Jun 2025 at 07:02, Gavin Shan <gshan@redhat.com> wrote:
>
> Hi Fuad,
>
> On 5/28/25 4:02 AM, Fuad Tabba wrote:
> > This patch enables support for shared memory in guest_memfd, including
> > mapping that memory at the host userspace. This support is gated by the
> > configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> > flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> > guest_memfd instance.
> >
> > Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > arch/x86/include/asm/kvm_host.h | 10 ++++
> > arch/x86/kvm/x86.c | 3 +-
> > include/linux/kvm_host.h | 13 ++++++
> > include/uapi/linux/kvm.h | 1 +
> > virt/kvm/Kconfig | 5 ++
> > virt/kvm/guest_memfd.c | 81 +++++++++++++++++++++++++++++++++
> > 6 files changed, 112 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 709cc2a7ba66..ce9ad4cd93c5 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> >
> > #ifdef CONFIG_KVM_GMEM
> > #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> > +
> > +/*
> > + * CoCo VMs with hardware support that use guest_memfd only for backing private
> > + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
> > + */
> > +#define kvm_arch_supports_gmem_shared_mem(kvm) \
> > + (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \
> > + ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \
> > + (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
> > #else
> > #define kvm_arch_supports_gmem(kvm) false
> > +#define kvm_arch_supports_gmem_shared_mem(kvm) false
> > #endif
> >
> > #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 035ced06b2dd..2a02f2457c42 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> > return -EINVAL;
> >
> > kvm->arch.vm_type = type;
> > - kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
> > + kvm->arch.supports_gmem =
> > + type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> > /* Decided by the vendor code for other VM types. */
> > kvm->arch.pre_fault_allowed =
> > type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 80371475818f..ba83547e62b0 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> > }
> > #endif
> >
> > +/*
> > + * Returns true if this VM supports shared mem in guest_memfd.
> > + *
> > + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
> > + * guest_memfd is enabled.
> > + */
> > +#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> > +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
> > +{
> > + return false;
> > +}
> > +#endif
> > +
> > #ifndef kvm_arch_has_readonly_mem
> > static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> > {
> > diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> > index b6ae8ad8934b..c2714c9d1a0e 100644
> > --- a/include/uapi/linux/kvm.h
> > +++ b/include/uapi/linux/kvm.h
> > @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
> > #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
> >
> > #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
> > +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0)
> >
> > struct kvm_create_guest_memfd {
> > __u64 size;
> > diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> > index 559c93ad90be..df225298ab10 100644
> > --- a/virt/kvm/Kconfig
> > +++ b/virt/kvm/Kconfig
> > @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> > config HAVE_KVM_ARCH_GMEM_INVALIDATE
> > bool
> > depends on KVM_GMEM
> > +
> > +config KVM_GMEM_SHARED_MEM
> > + select KVM_GMEM
> > + bool
> > + prompt "Enable support for non-private (shared) memory in guest_memfd"
> > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > index 6db515833f61..5d34712f64fc 100644
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> > return gfn - slot->base_gfn + slot->gmem.pgoff;
> > }
> >
> > +static bool kvm_gmem_supports_shared(struct inode *inode)
> > +{
> > + u64 flags;
> > +
> > + if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
> > + return false;
> > +
> > + flags = (u64)inode->i_private;
> > +
> > + return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +}
> > +
> > +
> > +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> > +{
> > + struct inode *inode = file_inode(vmf->vma->vm_file);
> > + struct folio *folio;
> > + vm_fault_t ret = VM_FAULT_LOCKED;
> > +
> > + folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> > + if (IS_ERR(folio)) {
> > + int err = PTR_ERR(folio);
> > +
> > + if (err == -EAGAIN)
> > + return VM_FAULT_RETRY;
> > +
> > + return vmf_error(err);
> > + }
> > +
> > + if (WARN_ON_ONCE(folio_test_large(folio))) {
> > + ret = VM_FAULT_SIGBUS;
> > + goto out_folio;
> > + }
> > +
> > + if (!folio_test_uptodate(folio)) {
> > + clear_highpage(folio_page(folio, 0));
> > + kvm_gmem_mark_prepared(folio);
> > + }
> > +
> > + vmf->page = folio_file_page(folio, vmf->pgoff);
> > +
> > +out_folio:
> > + if (ret != VM_FAULT_LOCKED) {
> > + folio_unlock(folio);
> > + folio_put(folio);
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +static const struct vm_operations_struct kvm_gmem_vm_ops = {
> > + .fault = kvm_gmem_fault_shared,
> > +};
> > +
> > +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> > +{
> > + if (!kvm_gmem_supports_shared(file_inode(file)))
> > + return -ENODEV;
> > +
> > + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> > + (VM_SHARED | VM_MAYSHARE)) {
> > + return -EINVAL;
> > + }
> > +
> > + vma->vm_ops = &kvm_gmem_vm_ops;
> > +
> > + return 0;
> > +}
> > +#else
> > +#define kvm_gmem_mmap NULL
> > +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> > +
>
> nit: This hunk of code doesn't have to be guarded by CONFIG_KVM_GMEM_SHARED_MEM.
> With the guard removed, kvm_gmem_mmap() returns -ENODEV for a non-shareable
> (non-mappable) file, which has the same effect as "kvm_gmem_fops.mmap = NULL".
>
> I may have missed other intentions to have this guard here.
You're right. This guard is here because it was needed before, but not
anymore. I'll remove it.
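For the record, a rough (untested) sketch of what I have in mind: the body
stays as in this patch, just without the #ifdef. kvm_gmem_supports_shared()
already compiles to false when CONFIG_KVM_GMEM_SHARED_MEM is off, so mmap()
still fails with -ENODEV in that case:

  static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
  {
          /* False if CONFIG_KVM_GMEM_SHARED_MEM is off or the flag isn't set. */
          if (!kvm_gmem_supports_shared(file_inode(file)))
                  return -ENODEV;

          if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
              (VM_SHARED | VM_MAYSHARE)) {
                  return -EINVAL;
          }

          vma->vm_ops = &kvm_gmem_vm_ops;

          return 0;
  }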
> > static struct file_operations kvm_gmem_fops = {
> > + .mmap = kvm_gmem_mmap,
> > .open = generic_file_open,
> > .release = kvm_gmem_release,
> > .fallocate = kvm_gmem_fallocate,
> > @@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> > u64 flags = args->flags;
> > u64 valid_flags = 0;
> >
> > + if (kvm_arch_supports_gmem_shared_mem(kvm))
> > + valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +
> > if (flags & ~valid_flags)
> > return -EINVAL;
> >
> > @@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> > offset + size > i_size_read(inode))
> > goto err;
> >
> > + if (kvm_gmem_supports_shared(inode) &&
> > + !kvm_arch_supports_gmem_shared_mem(kvm))
> > + goto err;
> > +
>
> This check looks unnecessary, if I'm not missing anything. The file (inode) can't be created
> by kvm_gmem_create(GUEST_MEMFD_FLAG_SUPPORT_SHARED) when !kvm_arch_supports_gmem_shared_mem().
> That means "kvm_gmem_supports_shared(inode) == true" implies "kvm_arch_supports_gmem_shared_mem(kvm) == true",
> so we can never trip this check? :-)
You're right here as well. This check predates the flag, and I should have
removed it when I added the flag. Consider it gone!
Thanks!
/fuad
> > filemap_invalidate_lock(inode->i_mapping);
> >
> > start = offset >> PAGE_SHIFT;
>
> Thanks,
> Gavin
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path
2025-05-27 18:02 ` [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path Fuad Tabba
2025-05-31 19:19 ` Shivank Garg
2025-06-02 10:23 ` David Hildenbrand
@ 2025-06-04 9:00 ` Gavin Shan
2025-06-05 8:22 ` Vlastimil Babka
3 siblings, 0 replies; 62+ messages in thread
From: Gavin Shan @ 2025-06-04 9:00 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
On 5/28/25 4:02 AM, Fuad Tabba wrote:
> The comment that refers to the path where the user-visible memslot flags
> are defined points to an outdated path and has a typo. Make it refer to the
> correct path.
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> include/linux/kvm_host.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
Reviewed-by: Gavin Shan <gshan@redhat.com>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed
2025-05-27 18:02 ` [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed Fuad Tabba
@ 2025-06-04 9:19 ` Gavin Shan
2025-06-04 9:48 ` Fuad Tabba
0 siblings, 1 reply; 62+ messages in thread
From: Gavin Shan @ 2025-06-04 9:19 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
Hi Fuad,
On 5/28/25 4:02 AM, Fuad Tabba wrote:
> Expand the guest_memfd selftests to include testing mapping guest
> memory for VM types that support it.
>
> Also, build the guest_memfd selftest for arm64.
>
> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> tools/testing/selftests/kvm/Makefile.kvm | 1 +
> .../testing/selftests/kvm/guest_memfd_test.c | 162 +++++++++++++++---
> 2 files changed, 142 insertions(+), 21 deletions(-)
>
The test case fails on a 64KB host: the file sizes in test_create_guest_memfd_multiple()
should be page_size and (2 * page_size), but the fixed sizes 4096 and 8192 aren't aligned to 64KB.
# ./guest_memfd_test
Random seed: 0x6b8b4567
==== Test Assertion Failure ====
guest_memfd_test.c:178: fd1 != -1
pid=7565 tid=7565 errno=22 - Invalid argument
1 0x000000000040252f: test_create_guest_memfd_multiple at guest_memfd_test.c:178
2 (inlined by) test_with_type at guest_memfd_test.c:231
3 0x00000000004020c7: main at guest_memfd_test.c:306
4 0x0000ffff8cec733f: ?? ??:0
5 0x0000ffff8cec7417: ?? ??:0
6 0x00000000004021ef: _start at ??:?
memfd creation should succeed
> diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
> index f62b0a5aba35..ccf95ed037c3 100644
> --- a/tools/testing/selftests/kvm/Makefile.kvm
> +++ b/tools/testing/selftests/kvm/Makefile.kvm
> @@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
> TEST_GEN_PROGS_arm64 += arch_timer
> TEST_GEN_PROGS_arm64 += coalesced_io_test
> TEST_GEN_PROGS_arm64 += dirty_log_perf_test
> +TEST_GEN_PROGS_arm64 += guest_memfd_test
> TEST_GEN_PROGS_arm64 += get-reg-list
> TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
> TEST_GEN_PROGS_arm64 += memslot_perf_test
> diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> index ce687f8d248f..3d6765bc1f28 100644
> --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> @@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
> "pwrite on a guest_mem fd should fail");
> }
>
> -static void test_mmap(int fd, size_t page_size)
> +static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
> +{
> + const char val = 0xaa;
> + char *mem;
> + size_t i;
> + int ret;
> +
> + mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> + TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
> +
If you agree, I think it would be nice to ensure guest-memfd doesn't support
copy-on-write, more details are provided below.
> + memset(mem, val, total_size);
> + for (i = 0; i < total_size; i++)
> + TEST_ASSERT_EQ(mem[i], val);
> +
> + ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
> + page_size);
> + TEST_ASSERT(!ret, "fallocate the first page should succeed");
> +
> + for (i = 0; i < page_size; i++)
> + TEST_ASSERT_EQ(mem[i], 0x00);
> + for (; i < total_size; i++)
> + TEST_ASSERT_EQ(mem[i], val);
> +
> + memset(mem, val, page_size);
> + for (i = 0; i < total_size; i++)
> + TEST_ASSERT_EQ(mem[i], val);
> +
> + ret = munmap(mem, total_size);
> + TEST_ASSERT(!ret, "munmap should succeed");
> +}
> +
> +static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
> {
> char *mem;
>
> mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> TEST_ASSERT_EQ(mem, MAP_FAILED);
> +
> + mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> + TEST_ASSERT_EQ(mem, MAP_FAILED);
> }
Add one more argument to test_mmap_denied as the flags passed to mmap().
static void test_mmap_denied(int fd, size_t page_size, size_t total_size, int mmap_flags)
{
mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, mmap_flags, fd, 0);
}
>
> static void test_file_size(int fd, size_t page_size, size_t total_size)
> @@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
> }
> }
>
> -static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
> +static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
> + uint64_t guest_memfd_flags,
> + size_t page_size)
> {
> - size_t page_size = getpagesize();
> - uint64_t flag;
> size_t size;
> int fd;
>
> for (size = 1; size < page_size; size++) {
> - fd = __vm_create_guest_memfd(vm, size, 0);
> - TEST_ASSERT(fd == -1 && errno == EINVAL,
> + fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
> + TEST_ASSERT(fd < 0 && errno == EINVAL,
> "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
> size);
> }
> -
> - for (flag = BIT(0); flag; flag <<= 1) {
> - fd = __vm_create_guest_memfd(vm, page_size, flag);
> - TEST_ASSERT(fd == -1 && errno == EINVAL,
> - "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> - flag);
> - }
> }
>
> static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> @@ -170,30 +197,123 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> close(fd1);
> }
>
> -int main(int argc, char *argv[])
> +#define GUEST_MEMFD_TEST_SLOT 10
> +#define GUEST_MEMFD_TEST_GPA 0x100000000
> +
> +static bool check_vm_type(unsigned long vm_type)
> {
> - size_t page_size;
> + /*
> + * Not all architectures support KVM_CAP_VM_TYPES. However, those that
> + * support guest_memfd have that support for the default VM type.
> + */
> + if (vm_type == VM_TYPE_DEFAULT)
> + return true;
> +
> + return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
> +}
> +
> +static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
> + bool expect_mmap_allowed)
> +{
> + struct kvm_vm *vm;
> size_t total_size;
> + size_t page_size;
> int fd;
> - struct kvm_vm *vm;
>
> - TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> + if (!check_vm_type(vm_type))
> + return;
>
> page_size = getpagesize();
> total_size = page_size * 4;
>
> - vm = vm_create_barebones();
> + vm = vm_create_barebones_type(vm_type);
>
> - test_create_guest_memfd_invalid(vm);
> test_create_guest_memfd_multiple(vm);
> + test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
>
> - fd = vm_create_guest_memfd(vm, total_size, 0);
> + fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
>
> test_file_read_write(fd);
> - test_mmap(fd, page_size);
> +
> + if (expect_mmap_allowed)
> + test_mmap_allowed(fd, page_size, total_size);
> + else
> + test_mmap_denied(fd, page_size, total_size);
> +
if (expect_mmap_allowed) {
test_mmap_denied(fd, page_size, total_size, MAP_PRIVATE);
test_mmap_allowed(fd, page_size, total_size);
} else {
test_mmap_denied(fd, page_size, total_size, MAP_SHARED);
}
> test_file_size(fd, page_size, total_size);
> test_fallocate(fd, page_size, total_size);
> test_invalid_punch_hole(fd, page_size, total_size);
>
> close(fd);
> + kvm_vm_release(vm);
> +}
> +
> +static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
> + uint64_t expected_valid_flags)
> +{
> + size_t page_size = getpagesize();
> + struct kvm_vm *vm;
> + uint64_t flag = 0;
> + int fd;
> +
> + if (!check_vm_type(vm_type))
> + return;
> +
> + vm = vm_create_barebones_type(vm_type);
> +
> + for (flag = BIT(0); flag; flag <<= 1) {
> + fd = __vm_create_guest_memfd(vm, page_size, flag);
> +
> + if (flag & expected_valid_flags) {
> + TEST_ASSERT(fd >= 0,
> + "guest_memfd() with flag '0x%lx' should be valid",
> + flag);
> + close(fd);
> + } else {
> + TEST_ASSERT(fd < 0 && errno == EINVAL,
> + "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> + flag);
> + }
> + }
> +
> + kvm_vm_release(vm);
> +}
> +
> +static void test_gmem_flag_validity(void)
> +{
> + uint64_t non_coco_vm_valid_flags = 0;
> +
> + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
> + non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> +
> + test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
> +
> +#ifdef __x86_64__
> + test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
> + test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
> + test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
> + test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
> + test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
> +#endif
> +}
> +
> +int main(int argc, char *argv[])
> +{
> + TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> +
> + test_gmem_flag_validity();
> +
> + test_with_type(VM_TYPE_DEFAULT, 0, false);
> + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> + test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
> + true);
> + }
> +
> +#ifdef __x86_64__
> + test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
> + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> + test_with_type(KVM_X86_SW_PROTECTED_VM,
> + GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
> + }
> +#endif
> }
Thanks,
Gavin
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed
2025-06-04 9:19 ` Gavin Shan
@ 2025-06-04 9:48 ` Fuad Tabba
2025-06-04 10:05 ` Gavin Shan
0 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-06-04 9:48 UTC (permalink / raw)
To: Gavin Shan
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
Hi Gavin,
On Wed, 4 Jun 2025 at 10:20, Gavin Shan <gshan@redhat.com> wrote:
>
> Hi Fuad,
>
> On 5/28/25 4:02 AM, Fuad Tabba wrote:
> > Expand the guest_memfd selftests to include testing mapping guest
> > memory for VM types that support it.
> >
> > Also, build the guest_memfd selftest for arm64.
> >
> > Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > tools/testing/selftests/kvm/Makefile.kvm | 1 +
> > .../testing/selftests/kvm/guest_memfd_test.c | 162 +++++++++++++++---
> > 2 files changed, 142 insertions(+), 21 deletions(-)
> >
>
> The test case fails on a 64KB host: the file sizes in test_create_guest_memfd_multiple()
> should be page_size and (2 * page_size), but the fixed sizes 4096 and 8192 aren't aligned to 64KB.
Yes, however, this patch didn't introduce or modify this test. I think
it's better to fix it in a separate patch independent of this series.
> # ./guest_memfd_test
> Random seed: 0x6b8b4567
> ==== Test Assertion Failure ====
> guest_memfd_test.c:178: fd1 != -1
> pid=7565 tid=7565 errno=22 - Invalid argument
> 1 0x000000000040252f: test_create_guest_memfd_multiple at guest_memfd_test.c:178
> 2 (inlined by) test_with_type at guest_memfd_test.c:231
> 3 0x00000000004020c7: main at guest_memfd_test.c:306
> 4 0x0000ffff8cec733f: ?? ??:0
> 5 0x0000ffff8cec7417: ?? ??:0
> 6 0x00000000004021ef: _start at ??:?
> memfd creation should succeed
>
> > diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
> > index f62b0a5aba35..ccf95ed037c3 100644
> > --- a/tools/testing/selftests/kvm/Makefile.kvm
> > +++ b/tools/testing/selftests/kvm/Makefile.kvm
> > @@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
> > TEST_GEN_PROGS_arm64 += arch_timer
> > TEST_GEN_PROGS_arm64 += coalesced_io_test
> > TEST_GEN_PROGS_arm64 += dirty_log_perf_test
> > +TEST_GEN_PROGS_arm64 += guest_memfd_test
> > TEST_GEN_PROGS_arm64 += get-reg-list
> > TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
> > TEST_GEN_PROGS_arm64 += memslot_perf_test
> > diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> > index ce687f8d248f..3d6765bc1f28 100644
> > --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> > +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> > @@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
> > "pwrite on a guest_mem fd should fail");
> > }
> >
> > -static void test_mmap(int fd, size_t page_size)
> > +static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
> > +{
> > + const char val = 0xaa;
> > + char *mem;
> > + size_t i;
> > + int ret;
> > +
> > + mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > + TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
> > +
>
> If you agree, I think it would be nice to ensure guest-memfd doesn't support
> copy-on-write, more details are provided below.
Good idea. I think we can do this without adding much more code. I'll
add a check in test_mmap_allowed(), since the idea is, even if mmap()
is supported, we still can't COW. I'll rename the functions to make
this a bit clearer (i.e., supported instead of allowed).
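Something like the following fragment in the renamed test_mmap_supported()
(untested sketch; it relies on the mmap handler rejecting mappings that
lack VM_SHARED):

  /* guest_memfd should refuse copy-on-write (MAP_PRIVATE) mappings. */
  mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
  TEST_ASSERT_EQ(mem, MAP_FAILED);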
Thank you for this and thank you for the reviews!
/fuad
> > + memset(mem, val, total_size);
> > + for (i = 0; i < total_size; i++)
> > + TEST_ASSERT_EQ(mem[i], val);
> > +
> > + ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
> > + page_size);
> > + TEST_ASSERT(!ret, "fallocate the first page should succeed");
> > +
> > + for (i = 0; i < page_size; i++)
> > + TEST_ASSERT_EQ(mem[i], 0x00);
> > + for (; i < total_size; i++)
> > + TEST_ASSERT_EQ(mem[i], val);
> > +
> > + memset(mem, val, page_size);
> > + for (i = 0; i < total_size; i++)
> > + TEST_ASSERT_EQ(mem[i], val);
> > +
> > + ret = munmap(mem, total_size);
> > + TEST_ASSERT(!ret, "munmap should succeed");
> > +}
> > +
> > +static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
> > {
> > char *mem;
> >
> > mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > TEST_ASSERT_EQ(mem, MAP_FAILED);
> > +
> > + mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> > + TEST_ASSERT_EQ(mem, MAP_FAILED);
> > }
>
> Add one more argument to test_mmap_denied as the flags passed to mmap().
>
> static void test_mmap_denied(int fd, size_t page_size, size_t total_size, int mmap_flags)
> {
> mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, mmap_flags, fd, 0);
> }
>
> >
> > static void test_file_size(int fd, size_t page_size, size_t total_size)
> > @@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
> > }
> > }
> >
> > -static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
> > +static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
> > + uint64_t guest_memfd_flags,
> > + size_t page_size)
> > {
> > - size_t page_size = getpagesize();
> > - uint64_t flag;
> > size_t size;
> > int fd;
> >
> > for (size = 1; size < page_size; size++) {
> > - fd = __vm_create_guest_memfd(vm, size, 0);
> > - TEST_ASSERT(fd == -1 && errno == EINVAL,
> > + fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
> > + TEST_ASSERT(fd < 0 && errno == EINVAL,
> > "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
> > size);
> > }
> > -
> > - for (flag = BIT(0); flag; flag <<= 1) {
> > - fd = __vm_create_guest_memfd(vm, page_size, flag);
> > - TEST_ASSERT(fd == -1 && errno == EINVAL,
> > - "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> > - flag);
> > - }
> > }
> >
> > static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> > @@ -170,30 +197,123 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> > close(fd1);
> > }
> >
> > -int main(int argc, char *argv[])
> > +#define GUEST_MEMFD_TEST_SLOT 10
> > +#define GUEST_MEMFD_TEST_GPA 0x100000000
> > +
> > +static bool check_vm_type(unsigned long vm_type)
> > {
> > - size_t page_size;
> > + /*
> > + * Not all architectures support KVM_CAP_VM_TYPES. However, those that
> > + * support guest_memfd have that support for the default VM type.
> > + */
> > + if (vm_type == VM_TYPE_DEFAULT)
> > + return true;
> > +
> > + return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
> > +}
> > +
> > +static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
> > + bool expect_mmap_allowed)
> > +{
> > + struct kvm_vm *vm;
> > size_t total_size;
> > + size_t page_size;
> > int fd;
> > - struct kvm_vm *vm;
> >
> > - TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> > + if (!check_vm_type(vm_type))
> > + return;
> >
> > page_size = getpagesize();
> > total_size = page_size * 4;
> >
> > - vm = vm_create_barebones();
> > + vm = vm_create_barebones_type(vm_type);
> >
> > - test_create_guest_memfd_invalid(vm);
> > test_create_guest_memfd_multiple(vm);
> > + test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
> >
> > - fd = vm_create_guest_memfd(vm, total_size, 0);
> > + fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
> >
> > test_file_read_write(fd);
> > - test_mmap(fd, page_size);
> > +
> > + if (expect_mmap_allowed)
> > + test_mmap_allowed(fd, page_size, total_size);
> > + else
> > + test_mmap_denied(fd, page_size, total_size);
> > +
>
> if (expect_mmap_allowed) {
> test_mmap_denied(fd, page_size, total_size, MAP_PRIVATE);
> test_mmap_allowed(fd, page_size, total_size);
> } else {
> test_mmap_denied(fd, page_size, total_size, MAP_SHARED);
> }
>
> > test_file_size(fd, page_size, total_size);
> > test_fallocate(fd, page_size, total_size);
> > test_invalid_punch_hole(fd, page_size, total_size);
> >
> > close(fd);
> > + kvm_vm_release(vm);
> > +}
> > +
> > +static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
> > + uint64_t expected_valid_flags)
> > +{
> > + size_t page_size = getpagesize();
> > + struct kvm_vm *vm;
> > + uint64_t flag = 0;
> > + int fd;
> > +
> > + if (!check_vm_type(vm_type))
> > + return;
> > +
> > + vm = vm_create_barebones_type(vm_type);
> > +
> > + for (flag = BIT(0); flag; flag <<= 1) {
> > + fd = __vm_create_guest_memfd(vm, page_size, flag);
> > +
> > + if (flag & expected_valid_flags) {
> > + TEST_ASSERT(fd >= 0,
> > + "guest_memfd() with flag '0x%lx' should be valid",
> > + flag);
> > + close(fd);
> > + } else {
> > + TEST_ASSERT(fd < 0 && errno == EINVAL,
> > + "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> > + flag);
> > + }
> > + }
> > +
> > + kvm_vm_release(vm);
> > +}
> > +
> > +static void test_gmem_flag_validity(void)
> > +{
> > + uint64_t non_coco_vm_valid_flags = 0;
> > +
> > + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
> > + non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +
> > + test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
> > +
> > +#ifdef __x86_64__
> > + test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
> > + test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
> > + test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
> > + test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
> > + test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
> > +#endif
> > +}
> > +
> > +int main(int argc, char *argv[])
> > +{
> > + TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> > +
> > + test_gmem_flag_validity();
> > +
> > + test_with_type(VM_TYPE_DEFAULT, 0, false);
> > + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> > + test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
> > + true);
> > + }
> > +
> > +#ifdef __x86_64__
> > + test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
> > + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> > + test_with_type(KVM_X86_SW_PROTECTED_VM,
> > + GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
> > + }
> > +#endif
> > }
>
> Thanks,
> Gavin
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed
2025-06-04 9:48 ` Fuad Tabba
@ 2025-06-04 10:05 ` Gavin Shan
2025-06-04 10:25 ` Fuad Tabba
0 siblings, 1 reply; 62+ messages in thread
From: Gavin Shan @ 2025-06-04 10:05 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
Hi Fuad,
On 6/4/25 7:48 PM, Fuad Tabba wrote:
> On Wed, 4 Jun 2025 at 10:20, Gavin Shan <gshan@redhat.com> wrote:
>>
>> On 5/28/25 4:02 AM, Fuad Tabba wrote:
>>> Expand the guest_memfd selftests to include testing mapping guest
>>> memory for VM types that support it.
>>>
>>> Also, build the guest_memfd selftest for arm64.
>>>
>>> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
>>> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
>>> Signed-off-by: Fuad Tabba <tabba@google.com>
>>> ---
>>> tools/testing/selftests/kvm/Makefile.kvm | 1 +
>>> .../testing/selftests/kvm/guest_memfd_test.c | 162 +++++++++++++++---
>>> 2 files changed, 142 insertions(+), 21 deletions(-)
>>>
>>
>> The test case fails on a 64KB host: the file sizes in test_create_guest_memfd_multiple()
>> should be page_size and (2 * page_size), but the fixed sizes 4096 and 8192 aren't aligned to 64KB.
>
> Yes, however, this patch didn't introduce or modify this test. I think
> it's better to fix it in a separate patch independent of this series.
>
Yeah, it can be a separate patch, or a preparatory patch before PATCH[16/16]
of this series, because x86 doesn't have a 64KB page size: the currently fixed
sizes (4096 and 8192) are aligned to the page size on x86, and it is this
series that enables 'guest_memfd_test' on arm64.
>> # ./guest_memfd_test
>> Random seed: 0x6b8b4567
>> ==== Test Assertion Failure ====
>> guest_memfd_test.c:178: fd1 != -1
>> pid=7565 tid=7565 errno=22 - Invalid argument
>> 1 0x000000000040252f: test_create_guest_memfd_multiple at guest_memfd_test.c:178
>> 2 (inlined by) test_with_type at guest_memfd_test.c:231
>> 3 0x00000000004020c7: main at guest_memfd_test.c:306
>> 4 0x0000ffff8cec733f: ?? ??:0
>> 5 0x0000ffff8cec7417: ?? ??:0
>> 6 0x00000000004021ef: _start at ??:?
>> memfd creation should succeed
>>
>>> diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
>>> index f62b0a5aba35..ccf95ed037c3 100644
>>> --- a/tools/testing/selftests/kvm/Makefile.kvm
>>> +++ b/tools/testing/selftests/kvm/Makefile.kvm
>>> @@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
>>> TEST_GEN_PROGS_arm64 += arch_timer
>>> TEST_GEN_PROGS_arm64 += coalesced_io_test
>>> TEST_GEN_PROGS_arm64 += dirty_log_perf_test
>>> +TEST_GEN_PROGS_arm64 += guest_memfd_test
>>> TEST_GEN_PROGS_arm64 += get-reg-list
>>> TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
>>> TEST_GEN_PROGS_arm64 += memslot_perf_test
>>> diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
>>> index ce687f8d248f..3d6765bc1f28 100644
>>> --- a/tools/testing/selftests/kvm/guest_memfd_test.c
>>> +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
>>> @@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
>>> "pwrite on a guest_mem fd should fail");
>>> }
>>>
>>> -static void test_mmap(int fd, size_t page_size)
>>> +static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
>>> +{
>>> + const char val = 0xaa;
>>> + char *mem;
>>> + size_t i;
>>> + int ret;
>>> +
>>> + mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>>> + TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
>>> +
>>
>> If you agree, I think it would be nice to ensure guest-memfd doesn't support
>> copy-on-write, more details are provided below.
>
> Good idea. I think we can do this without adding much more code. I'll
> add a check in test_mmap_allowed(), since the idea is, even if mmap()
> is supported, we still can't COW. I'll rename the functions to make
> this a bit clearer (i.e., supported instead of allowed).
>
> Thank you for this and thank you for the reviews!
>
Sounds good to me :)
>
>>> + memset(mem, val, total_size);
>>> + for (i = 0; i < total_size; i++)
>>> + TEST_ASSERT_EQ(mem[i], val);
>>> +
>>> + ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
>>> + page_size);
>>> + TEST_ASSERT(!ret, "fallocate the first page should succeed");
>>> +
>>> + for (i = 0; i < page_size; i++)
>>> + TEST_ASSERT_EQ(mem[i], 0x00);
>>> + for (; i < total_size; i++)
>>> + TEST_ASSERT_EQ(mem[i], val);
>>> +
>>> + memset(mem, val, page_size);
>>> + for (i = 0; i < total_size; i++)
>>> + TEST_ASSERT_EQ(mem[i], val);
>>> +
>>> + ret = munmap(mem, total_size);
>>> + TEST_ASSERT(!ret, "munmap should succeed");
>>> +}
>>> +
>>> +static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
>>> {
>>> char *mem;
>>>
>>> mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>>> TEST_ASSERT_EQ(mem, MAP_FAILED);
>>> +
>>> + mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>>> + TEST_ASSERT_EQ(mem, MAP_FAILED);
>>> }
>>
>> Add one more argument to test_mmap_denied as the flags passed to mmap().
>>
>> static void test_mmap_denied(int fd, size_t page_size, size_t total_size, int mmap_flags)
>> {
>> mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, mmap_flags, fd, 0);
>> }
>>
>>>
>>> static void test_file_size(int fd, size_t page_size, size_t total_size)
>>> @@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
>>> }
>>> }
>>>
>>> -static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
>>> +static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
>>> + uint64_t guest_memfd_flags,
>>> + size_t page_size)
>>> {
>>> - size_t page_size = getpagesize();
>>> - uint64_t flag;
>>> size_t size;
>>> int fd;
>>>
>>> for (size = 1; size < page_size; size++) {
>>> - fd = __vm_create_guest_memfd(vm, size, 0);
>>> - TEST_ASSERT(fd == -1 && errno == EINVAL,
>>> + fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
>>> + TEST_ASSERT(fd < 0 && errno == EINVAL,
>>> "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
>>> size);
>>> }
>>> -
>>> - for (flag = BIT(0); flag; flag <<= 1) {
>>> - fd = __vm_create_guest_memfd(vm, page_size, flag);
>>> - TEST_ASSERT(fd == -1 && errno == EINVAL,
>>> - "guest_memfd() with flag '0x%lx' should fail with EINVAL",
>>> - flag);
>>> - }
>>> }
>>>
>>> static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
>>> @@ -170,30 +197,123 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
>>> close(fd1);
>>> }
>>>
>>> -int main(int argc, char *argv[])
>>> +#define GUEST_MEMFD_TEST_SLOT 10
>>> +#define GUEST_MEMFD_TEST_GPA 0x100000000
>>> +
>>> +static bool check_vm_type(unsigned long vm_type)
>>> {
>>> - size_t page_size;
>>> + /*
>>> + * Not all architectures support KVM_CAP_VM_TYPES. However, those that
>>> + * support guest_memfd have that support for the default VM type.
>>> + */
>>> + if (vm_type == VM_TYPE_DEFAULT)
>>> + return true;
>>> +
>>> + return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
>>> +}
>>> +
>>> +static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
>>> + bool expect_mmap_allowed)
>>> +{
>>> + struct kvm_vm *vm;
>>> size_t total_size;
>>> + size_t page_size;
>>> int fd;
>>> - struct kvm_vm *vm;
>>>
>>> - TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
>>> + if (!check_vm_type(vm_type))
>>> + return;
>>>
>>> page_size = getpagesize();
>>> total_size = page_size * 4;
>>>
>>> - vm = vm_create_barebones();
>>> + vm = vm_create_barebones_type(vm_type);
>>>
>>> - test_create_guest_memfd_invalid(vm);
>>> test_create_guest_memfd_multiple(vm);
>>> + test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
>>>
>>> - fd = vm_create_guest_memfd(vm, total_size, 0);
>>> + fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
>>>
>>> test_file_read_write(fd);
>>> - test_mmap(fd, page_size);
>>> +
>>> + if (expect_mmap_allowed)
>>> + test_mmap_allowed(fd, page_size, total_size);
>>> + else
>>> + test_mmap_denied(fd, page_size, total_size);
>>> +
>>
>> if (expect_mmap_allowed) {
>> test_mmap_denied(fd, page_size, total_size, MAP_PRIVATE);
>> test_mmap_allowed(fd, page_size, total_size);
>> } else {
>> test_mmap_denied(fd, page_size, total_size, MAP_SHARED);
>> }
>>
>>> test_file_size(fd, page_size, total_size);
>>> test_fallocate(fd, page_size, total_size);
>>> test_invalid_punch_hole(fd, page_size, total_size);
>>>
>>> close(fd);
>>> + kvm_vm_release(vm);
>>> +}
>>> +
>>> +static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
>>> + uint64_t expected_valid_flags)
>>> +{
>>> + size_t page_size = getpagesize();
>>> + struct kvm_vm *vm;
>>> + uint64_t flag = 0;
>>> + int fd;
>>> +
>>> + if (!check_vm_type(vm_type))
>>> + return;
>>> +
>>> + vm = vm_create_barebones_type(vm_type);
>>> +
>>> + for (flag = BIT(0); flag; flag <<= 1) {
>>> + fd = __vm_create_guest_memfd(vm, page_size, flag);
>>> +
>>> + if (flag & expected_valid_flags) {
>>> + TEST_ASSERT(fd >= 0,
>>> + "guest_memfd() with flag '0x%lx' should be valid",
>>> + flag);
>>> + close(fd);
>>> + } else {
>>> + TEST_ASSERT(fd < 0 && errno == EINVAL,
>>> + "guest_memfd() with flag '0x%lx' should fail with EINVAL",
>>> + flag);
>>> + }
>>> + }
>>> +
>>> + kvm_vm_release(vm);
>>> +}
>>> +
>>> +static void test_gmem_flag_validity(void)
>>> +{
>>> + uint64_t non_coco_vm_valid_flags = 0;
>>> +
>>> + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
>>> + non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
>>> +
>>> + test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
>>> +
>>> +#ifdef __x86_64__
>>> + test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
>>> + test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
>>> + test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
>>> + test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
>>> + test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
>>> +#endif
>>> +}
>>> +
>>> +int main(int argc, char *argv[])
>>> +{
>>> + TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
>>> +
>>> + test_gmem_flag_validity();
>>> +
>>> + test_with_type(VM_TYPE_DEFAULT, 0, false);
>>> + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
>>> + test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
>>> + true);
>>> + }
>>> +
>>> +#ifdef __x86_64__
>>> + test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
>>> + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
>>> + test_with_type(KVM_X86_SW_PROTECTED_VM,
>>> + GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
>>> + }
>>> +#endif
>>> }
>>
Thanks,
Gavin
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed
2025-06-04 10:05 ` Gavin Shan
@ 2025-06-04 10:25 ` Fuad Tabba
0 siblings, 0 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-06-04 10:25 UTC (permalink / raw)
To: Gavin Shan
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
Hi Gavin,
On Wed, 4 Jun 2025 at 11:05, Gavin Shan <gshan@redhat.com> wrote:
>
> Hi Fuad,
>
> On 6/4/25 7:48 PM, Fuad Tabba wrote:
> > On Wed, 4 Jun 2025 at 10:20, Gavin Shan <gshan@redhat.com> wrote:
> >>
> >> On 5/28/25 4:02 AM, Fuad Tabba wrote:
> >>> Expand the guest_memfd selftests to include testing mapping guest
> >>> memory for VM types that support it.
> >>>
> >>> Also, build the guest_memfd selftest for arm64.
> >>>
> >>> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> >>> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> >>> Signed-off-by: Fuad Tabba <tabba@google.com>
> >>> ---
> >>> tools/testing/selftests/kvm/Makefile.kvm | 1 +
> >>> .../testing/selftests/kvm/guest_memfd_test.c | 162 +++++++++++++++---
> >>> 2 files changed, 142 insertions(+), 21 deletions(-)
> >>>
> >>
> >> The test case fails on a 64KB host: the file sizes in test_create_guest_memfd_multiple()
> >> should be page_size and (2 * page_size), but the fixed sizes 4096 and 8192 aren't aligned to 64KB.
> >
> > Yes, however, this patch didn't introduce or modify this test. I think
> > it's better to fix it in a separate patch independent of this series.
> >
>
> Yeah, it can be a separate patch, or a preparatory patch before PATCH[16/16]
> of this series, because x86 doesn't have a 64KB page size: the currently fixed
> sizes (4096 and 8192) are aligned to the page size on x86, and it is this
> series that enables 'guest_memfd_test' on arm64.
You're right. This patch enables the selftest for arm64, so it should be
fixed in conjunction with that. As you suggested, I'll add a separate
patch before this one that fixes this and enables the selftest for arm64.
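Roughly what I have in mind for that preparatory patch (untested sketch;
only the hard-coded sizes in test_create_guest_memfd_multiple() change,
the rest of the test stays as is):

  size_t page_size = getpagesize();

  fd1 = __vm_create_guest_memfd(vm, page_size, 0);
  TEST_ASSERT(fd1 != -1, "memfd creation should succeed");
  /* ... and page_size * 2 for the second guest_memfd ... */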
Thanks again!
/fuad
> >> # ./guest_memfd_test
> >> Random seed: 0x6b8b4567
> >> ==== Test Assertion Failure ====
> >> guest_memfd_test.c:178: fd1 != -1
> >> pid=7565 tid=7565 errno=22 - Invalid argument
> >> 1 0x000000000040252f: test_create_guest_memfd_multiple at guest_memfd_test.c:178
> >> 2 (inlined by) test_with_type at guest_memfd_test.c:231
> >> 3 0x00000000004020c7: main at guest_memfd_test.c:306
> >> 4 0x0000ffff8cec733f: ?? ??:0
> >> 5 0x0000ffff8cec7417: ?? ??:0
> >> 6 0x00000000004021ef: _start at ??:?
> >> memfd creation should succeed
> >>
> >>> diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
> >>> index f62b0a5aba35..ccf95ed037c3 100644
> >>> --- a/tools/testing/selftests/kvm/Makefile.kvm
> >>> +++ b/tools/testing/selftests/kvm/Makefile.kvm
> >>> @@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
> >>> TEST_GEN_PROGS_arm64 += arch_timer
> >>> TEST_GEN_PROGS_arm64 += coalesced_io_test
> >>> TEST_GEN_PROGS_arm64 += dirty_log_perf_test
> >>> +TEST_GEN_PROGS_arm64 += guest_memfd_test
> >>> TEST_GEN_PROGS_arm64 += get-reg-list
> >>> TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
> >>> TEST_GEN_PROGS_arm64 += memslot_perf_test
> >>> diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
> >>> index ce687f8d248f..3d6765bc1f28 100644
> >>> --- a/tools/testing/selftests/kvm/guest_memfd_test.c
> >>> +++ b/tools/testing/selftests/kvm/guest_memfd_test.c
> >>> @@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
> >>> "pwrite on a guest_mem fd should fail");
> >>> }
> >>>
> >>> -static void test_mmap(int fd, size_t page_size)
> >>> +static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
> >>> +{
> >>> + const char val = 0xaa;
> >>> + char *mem;
> >>> + size_t i;
> >>> + int ret;
> >>> +
> >>> + mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> >>> + TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
> >>> +
> >>
> >> If you agree, I think it would be nice to ensure guest-memfd doesn't support
> >> copy-on-write, more details are provided below.
> >
> > Good idea. I think we can do this without adding much more code. I'll
> > add a check in test_mmap_allowed(), since the idea is, even if mmap()
> > is supported, we still can't COW. I'll rename the functions to make
> > this a bit clearer (i.e., supported instead of allowed).
> >
> > Thank you for this and thank you for the reviews!
> >
>
> Sounds good to me :)
>
> >
> >>> + memset(mem, val, total_size);
> >>> + for (i = 0; i < total_size; i++)
> >>> + TEST_ASSERT_EQ(mem[i], val);
> >>> +
> >>> + ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
> >>> + page_size);
> >>> + TEST_ASSERT(!ret, "fallocate the first page should succeed");
> >>> +
> >>> + for (i = 0; i < page_size; i++)
> >>> + TEST_ASSERT_EQ(mem[i], 0x00);
> >>> + for (; i < total_size; i++)
> >>> + TEST_ASSERT_EQ(mem[i], val);
> >>> +
> >>> + memset(mem, val, page_size);
> >>> + for (i = 0; i < total_size; i++)
> >>> + TEST_ASSERT_EQ(mem[i], val);
> >>> +
> >>> + ret = munmap(mem, total_size);
> >>> + TEST_ASSERT(!ret, "munmap should succeed");
> >>> +}
> >>> +
> >>> +static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
> >>> {
> >>> char *mem;
> >>>
> >>> mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> >>> TEST_ASSERT_EQ(mem, MAP_FAILED);
> >>> +
> >>> + mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
> >>> + TEST_ASSERT_EQ(mem, MAP_FAILED);
> >>> }
> >>
> >> Add one more argument to test_mmap_denied as the flags passed to mmap().
> >>
> >> static void test_mmap_denied(int fd, size_t page_size, size_t total_size, int mmap_flags)
> >> {
> >> mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, mmap_flags, fd, 0);
> >> }
> >>
> >>>
> >>> static void test_file_size(int fd, size_t page_size, size_t total_size)
> >>> @@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
> >>> }
> >>> }
> >>>
> >>> -static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
> >>> +static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
> >>> + uint64_t guest_memfd_flags,
> >>> + size_t page_size)
> >>> {
> >>> - size_t page_size = getpagesize();
> >>> - uint64_t flag;
> >>> size_t size;
> >>> int fd;
> >>>
> >>> for (size = 1; size < page_size; size++) {
> >>> - fd = __vm_create_guest_memfd(vm, size, 0);
> >>> - TEST_ASSERT(fd == -1 && errno == EINVAL,
> >>> + fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
> >>> + TEST_ASSERT(fd < 0 && errno == EINVAL,
> >>> "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
> >>> size);
> >>> }
> >>> -
> >>> - for (flag = BIT(0); flag; flag <<= 1) {
> >>> - fd = __vm_create_guest_memfd(vm, page_size, flag);
> >>> - TEST_ASSERT(fd == -1 && errno == EINVAL,
> >>> - "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> >>> - flag);
> >>> - }
> >>> }
> >>>
> >>> static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> >>> @@ -170,30 +197,123 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
> >>> close(fd1);
> >>> }
> >>>
> >>> -int main(int argc, char *argv[])
> >>> +#define GUEST_MEMFD_TEST_SLOT 10
> >>> +#define GUEST_MEMFD_TEST_GPA 0x100000000
> >>> +
> >>> +static bool check_vm_type(unsigned long vm_type)
> >>> {
> >>> - size_t page_size;
> >>> + /*
> >>> + * Not all architectures support KVM_CAP_VM_TYPES. However, those that
> >>> + * support guest_memfd have that support for the default VM type.
> >>> + */
> >>> + if (vm_type == VM_TYPE_DEFAULT)
> >>> + return true;
> >>> +
> >>> + return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
> >>> +}
> >>> +
> >>> +static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
> >>> + bool expect_mmap_allowed)
> >>> +{
> >>> + struct kvm_vm *vm;
> >>> size_t total_size;
> >>> + size_t page_size;
> >>> int fd;
> >>> - struct kvm_vm *vm;
> >>>
> >>> - TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> >>> + if (!check_vm_type(vm_type))
> >>> + return;
> >>>
> >>> page_size = getpagesize();
> >>> total_size = page_size * 4;
> >>>
> >>> - vm = vm_create_barebones();
> >>> + vm = vm_create_barebones_type(vm_type);
> >>>
> >>> - test_create_guest_memfd_invalid(vm);
> >>> test_create_guest_memfd_multiple(vm);
> >>> + test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
> >>>
> >>> - fd = vm_create_guest_memfd(vm, total_size, 0);
> >>> + fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
> >>>
> >>> test_file_read_write(fd);
> >>> - test_mmap(fd, page_size);
> >>> +
> >>> + if (expect_mmap_allowed)
> >>> + test_mmap_allowed(fd, page_size, total_size);
> >>> + else
> >>> + test_mmap_denied(fd, page_size, total_size);
> >>> +
> >>
> >> if (expect_mmap_allowed) {
> >> test_mmap_denied(fd, page_size, total_size, MAP_PRIVATE);
> >> test_mmap_allowed(fd, page_size, total_size);
> >> } else {
> >> test_mmap_denied(fd, page_size, total_size, MAP_SHARED);
> >> }
> >>
> >>> test_file_size(fd, page_size, total_size);
> >>> test_fallocate(fd, page_size, total_size);
> >>> test_invalid_punch_hole(fd, page_size, total_size);
> >>>
> >>> close(fd);
> >>> + kvm_vm_release(vm);
> >>> +}
> >>> +
> >>> +static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
> >>> + uint64_t expected_valid_flags)
> >>> +{
> >>> + size_t page_size = getpagesize();
> >>> + struct kvm_vm *vm;
> >>> + uint64_t flag = 0;
> >>> + int fd;
> >>> +
> >>> + if (!check_vm_type(vm_type))
> >>> + return;
> >>> +
> >>> + vm = vm_create_barebones_type(vm_type);
> >>> +
> >>> + for (flag = BIT(0); flag; flag <<= 1) {
> >>> + fd = __vm_create_guest_memfd(vm, page_size, flag);
> >>> +
> >>> + if (flag & expected_valid_flags) {
> >>> + TEST_ASSERT(fd >= 0,
> >>> + "guest_memfd() with flag '0x%lx' should be valid",
> >>> + flag);
> >>> + close(fd);
> >>> + } else {
> >>> + TEST_ASSERT(fd < 0 && errno == EINVAL,
> >>> + "guest_memfd() with flag '0x%lx' should fail with EINVAL",
> >>> + flag);
> >>> + }
> >>> + }
> >>> +
> >>> + kvm_vm_release(vm);
> >>> +}
> >>> +
> >>> +static void test_gmem_flag_validity(void)
> >>> +{
> >>> + uint64_t non_coco_vm_valid_flags = 0;
> >>> +
> >>> + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
> >>> + non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> >>> +
> >>> + test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
> >>> +
> >>> +#ifdef __x86_64__
> >>> + test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
> >>> + test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
> >>> + test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
> >>> + test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
> >>> + test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
> >>> +#endif
> >>> +}
> >>> +
> >>> +int main(int argc, char *argv[])
> >>> +{
> >>> + TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
> >>> +
> >>> + test_gmem_flag_validity();
> >>> +
> >>> + test_with_type(VM_TYPE_DEFAULT, 0, false);
> >>> + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> >>> + test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
> >>> + true);
> >>> + }
> >>> +
> >>> +#ifdef __x86_64__
> >>> + test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
> >>> + if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
> >>> + test_with_type(KVM_X86_SW_PROTECTED_VM,
> >>> + GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
> >>> + }
> >>> +#endif
> >>> }
> >>
>
> Thanks,
> Gavin
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 09/16] KVM: guest_memfd: Track shared memory support in memslot
2025-05-27 18:02 ` [PATCH v10 09/16] KVM: guest_memfd: Track shared memory support in memslot Fuad Tabba
@ 2025-06-04 12:25 ` David Hildenbrand
2025-06-04 12:31 ` Fuad Tabba
0 siblings, 1 reply; 62+ messages in thread
From: David Hildenbrand @ 2025-06-04 12:25 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 27.05.25 20:02, Fuad Tabba wrote:
> Track whether a guest_memfd-backed memslot supports shared memory within
> the memslot itself, using the flags field. The top half of memslot flags
> is reserved for internal use in KVM. Add a flag there to track shared
> memory support.
>
> This saves the caller from having to check the guest_memfd-backed file
> for this support, a potentially more expensive operation due to the need
> to get/put the file.
>
> Suggested-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> include/linux/kvm_host.h | 11 ++++++++++-
> virt/kvm/guest_memfd.c | 8 ++++++--
> 2 files changed, 16 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index ba83547e62b0..edb3795a64b9 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -54,7 +54,8 @@
> * used in kvm, other bits are visible for userspace which are defined in
> * include/uapi/linux/kvm.h.
> */
> -#define KVM_MEMSLOT_INVALID (1UL << 16)
> +#define KVM_MEMSLOT_INVALID (1UL << 16)
> +#define KVM_MEMSLOT_SUPPORTS_SHARED (1UL << 17)
Should there be a "GMEM" in there?
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-05-27 18:02 ` [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages Fuad Tabba
` (3 preceding siblings ...)
2025-06-04 6:02 ` Gavin Shan
@ 2025-06-04 12:26 ` David Hildenbrand
2025-06-04 12:32 ` Fuad Tabba
2025-06-05 6:40 ` Gavin Shan
2025-06-05 8:28 ` Vlastimil Babka
6 siblings, 1 reply; 62+ messages in thread
From: David Hildenbrand @ 2025-06-04 12:26 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 27.05.25 20:02, Fuad Tabba wrote:
> This patch enables support for shared memory in guest_memfd, including
> mapping that memory at the host userspace. This support is gated by the
> configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> guest_memfd instance.
>
> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 10 ++++
> arch/x86/kvm/x86.c | 3 +-
Nit: I would split off the x86 bits. Meaning, this patch would only
introduce the infrastructure and a x86 KVM patch would enable it for
selected x86 VMs.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 62+ messages in thread
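(For illustration of the flow described in the commit message quoted above: a
minimal userspace sketch of creating a shared-capable guest_memfd and mapping
it at the host. The ioctl and flag come from the quoted uAPI changes; the
helper name and error handling are made up, and the snippet is untested.)

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/*
 * Sketch only: create a guest_memfd that allows shared (host-mappable)
 * memory and map it at the host. Assumes a valid VM fd and headers that
 * already define GUEST_MEMFD_FLAG_SUPPORT_SHARED.
 */
static void *map_shared_gmem(int vm_fd, uint64_t size)
{
	struct kvm_create_guest_memfd args = {
		.size  = size,
		.flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &args);

	if (gmem_fd < 0)
		return MAP_FAILED;

	/* kvm_gmem_mmap() only accepts shared mappings. */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);
}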
* Re: [PATCH v10 09/16] KVM: guest_memfd: Track shared memory support in memslot
2025-06-04 12:25 ` David Hildenbrand
@ 2025-06-04 12:31 ` Fuad Tabba
0 siblings, 0 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-06-04 12:31 UTC (permalink / raw)
To: David Hildenbrand
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
On Wed, 4 Jun 2025 at 13:25, David Hildenbrand <david@redhat.com> wrote:
>
> On 27.05.25 20:02, Fuad Tabba wrote:
> > Track whether a guest_memfd-backed memslot supports shared memory within
> > the memslot itself, using the flags field. The top half of memslot flags
> > is reserved for internal use in KVM. Add a flag there to track shared
> > memory support.
> >
> > This saves the caller from having to check the guest_memfd-backed file
> > for this support, a potentially more expensive operation due to the need
> > to get/put the file.
> >
> > Suggested-by: David Hildenbrand <david@redhat.com>
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > include/linux/kvm_host.h | 11 ++++++++++-
> > virt/kvm/guest_memfd.c | 8 ++++++--
> > 2 files changed, 16 insertions(+), 3 deletions(-)
> >
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index ba83547e62b0..edb3795a64b9 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -54,7 +54,8 @@
> > * used in kvm, other bits are visible for userspace which are defined in
> > * include/uapi/linux/kvm.h.
> > */
> > -#define KVM_MEMSLOT_INVALID (1UL << 16)
> > +#define KVM_MEMSLOT_INVALID (1UL << 16)
> > +#define KVM_MEMSLOT_SUPPORTS_SHARED (1UL << 17)
>
> Should there be a "GMEM" in there?
I'll change it to KVM_MEMSLOT_SUPPORTS_GMEM_SHARED. I thought of
adding _MEM as well, but it starts getting too long :)
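Roughly (a sketch of the rename only; the helper is the one used later in the
series, and its body here is just a guess at the flag check):

#define KVM_MEMSLOT_INVALID			(1UL << 16)
#define KVM_MEMSLOT_SUPPORTS_GMEM_SHARED	(1UL << 17)

static inline bool kvm_gmem_memslot_supports_shared(const struct kvm_memory_slot *slot)
{
	return slot->flags & KVM_MEMSLOT_SUPPORTS_GMEM_SHARED;
}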
Thanks,
/fuad
> --
> Cheers,
>
> David / dhildenb
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-06-04 12:26 ` David Hildenbrand
@ 2025-06-04 12:32 ` Fuad Tabba
2025-06-04 13:02 ` David Hildenbrand
0 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-06-04 12:32 UTC (permalink / raw)
To: David Hildenbrand
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
Hi David,
On Wed, 4 Jun 2025 at 13:26, David Hildenbrand <david@redhat.com> wrote:
>
> On 27.05.25 20:02, Fuad Tabba wrote:
> > This patch enables support for shared memory in guest_memfd, including
> > mapping that memory at the host userspace. This support is gated by the
> > configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> > flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> > guest_memfd instance.
> >
> > Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > arch/x86/include/asm/kvm_host.h | 10 ++++
> > arch/x86/kvm/x86.c | 3 +-
>
> Nit: I would split off the x86 bits. Meaning, this patch would only
> introduce the infrastructure and a x86 KVM patch would enable it for
> selected x86 VMs.
Will do.
Thanks,
/fuad
>
> --
> Cheers,
>
> David / dhildenb
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-06-04 12:32 ` Fuad Tabba
@ 2025-06-04 13:02 ` David Hildenbrand
0 siblings, 0 replies; 62+ messages in thread
From: David Hildenbrand @ 2025-06-04 13:02 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
On 04.06.25 14:32, Fuad Tabba wrote:
> Hi David,
>
> On Wed, 4 Jun 2025 at 13:26, David Hildenbrand <david@redhat.com> wrote:
>>
>> On 27.05.25 20:02, Fuad Tabba wrote:
>>> This patch enables support for shared memory in guest_memfd, including
>>> mapping that memory at the host userspace. This support is gated by the
>>> configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
>>> flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
>>> guest_memfd instance.
>>>
>>> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
>>> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
>>> Signed-off-by: Fuad Tabba <tabba@google.com>
>>> ---
>>> arch/x86/include/asm/kvm_host.h | 10 ++++
>>> arch/x86/kvm/x86.c | 3 +-
>>
>> Nit: I would split off the x86 bits. Meaning, this patch would only
>> introduce the infrastructure and a x86 KVM patch would enable it for
>> selected x86 VMs.
>
> Will do.
And probably that patch should come after/with the x86 mmu bits.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 11/16] KVM: x86: Compute max_mapping_level with input from guest_memfd
2025-05-27 18:02 ` [PATCH v10 11/16] KVM: x86: Compute max_mapping_level with input from guest_memfd Fuad Tabba
@ 2025-06-04 13:09 ` David Hildenbrand
0 siblings, 0 replies; 62+ messages in thread
From: David Hildenbrand @ 2025-06-04 13:09 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 27.05.25 20:02, Fuad Tabba wrote:
> From: Ackerley Tng <ackerleytng@google.com>
>
> This patch adds kvm_gmem_max_mapping_level(), which always returns
> PG_LEVEL_4K since guest_memfd only supports 4K pages for now.
>
> When guest_memfd supports shared memory, max_mapping_level (especially
> when recovering huge pages - see call to __kvm_mmu_max_mapping_level()
> from recover_huge_pages_range()) should take input from
> guest_memfd.
>
> Input from guest_memfd should be taken in these cases:
>
> + if the memslot supports shared memory (guest_memfd is used for
> shared memory, or in future both shared and private memory) or
> + if the memslot is only used for private memory and that gfn is
> private.
>
> If the memslot doesn't use guest_memfd, figure out the
> max_mapping_level using the host page tables like before.
>
> This patch also refactors and inlines the other call to
> __kvm_mmu_max_mapping_level().
>
> In kvm_mmu_hugepage_adjust(), guest_memfd's input is already
> provided (if applicable) in fault->max_level. Hence, there is no need
> to query guest_memfd.
>
> lpage_info is queried like before, and then if the fault is not from
> guest_memfd, adjust fault->req_level based on input from host page
> tables.
>
> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Should there be a Co-developed-by?
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
LGTM, but I am not particularly an expert on that code. Having some
feedback from Sean et al. would be great :)
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 62+ messages in thread
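To make the policy described in the quoted commit message concrete, here is a
rough sketch (the helper names and signatures are assumptions pieced together
from the series, not the actual diff, which is not quoted in this reply):

/*
 * Sketch, not the actual patch: consult guest_memfd for the max mapping
 * level when the memslot is gmem-backed and either supports shared memory
 * or the gfn is private; otherwise fall back to the host page tables.
 */
static int max_mapping_level_sketch(struct kvm *kvm,
				    const struct kvm_memory_slot *slot, gfn_t gfn)
{
	if (kvm_slot_has_gmem(slot) &&
	    (kvm_gmem_memslot_supports_shared(slot) ||
	     kvm_mem_is_private(kvm, gfn)))
		return kvm_gmem_max_mapping_level(slot, gfn); /* PG_LEVEL_4K for now */

	return host_pfn_mapping_level(kvm, gfn, slot);
}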
* Re: [PATCH v10 13/16] KVM: arm64: Handle guest_memfd-backed guest page faults
2025-05-27 18:02 ` [PATCH v10 13/16] KVM: arm64: Handle guest_memfd-backed guest page faults Fuad Tabba
@ 2025-06-04 13:17 ` David Hildenbrand
2025-06-04 13:30 ` Fuad Tabba
0 siblings, 1 reply; 62+ messages in thread
From: David Hildenbrand @ 2025-06-04 13:17 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 27.05.25 20:02, Fuad Tabba wrote:
> Add arm64 support for handling guest page faults on guest_memfd backed
> memslots. Until guest_memfd supports huge pages, the fault granule is
> restricted to PAGE_SIZE.
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
>
> ---
>
> Note: This patch introduces a new function, gmem_abort() rather than
> previous attempts at trying to expand user_mem_abort(). This is because
> there are many differences in how faults are handled when backed by
> guest_memfd vs regular memslots with anonymous memory, e.g., lack of
> VMA, and for now, lack of huge page support for guest_memfd. The
> function user_mem_abort() is already big and unwieldy, adding more
> complexity to it made things more difficult to understand.
>
> Once larger page size support is added to guest_memfd, we could factor
> out the common code between these two functions.
>
> ---
> arch/arm64/kvm/mmu.c | 89 +++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 87 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 9865ada04a81..896c56683d88 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1466,6 +1466,87 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> return vma->vm_flags & VM_MTE_ALLOWED;
> }
>
> +static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> + struct kvm_memory_slot *memslot, bool is_perm)
TBH, I have no idea why the existing function is called "_abort". I am
sure there is a good reason :)
> +{
> + enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
> + enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
> + bool logging, write_fault, exec_fault, writable;
> + struct kvm_pgtable *pgt;
> + struct page *page;
> + struct kvm *kvm;
> + void *memcache;
> + kvm_pfn_t pfn;
> + gfn_t gfn;
> + int ret;
> +
> + if (!is_perm) {
> + int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
> +
> + if (!is_protected_kvm_enabled()) {
> + memcache = &vcpu->arch.mmu_page_cache;
> + ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
> + } else {
> + memcache = &vcpu->arch.pkvm_memcache;
> + ret = topup_hyp_memcache(memcache, min_pages);
> + }
> + if (ret)
> + return ret;
> + }
> +
> + kvm = vcpu->kvm;
> + gfn = fault_ipa >> PAGE_SHIFT;
These two can be initialized directly above.
> +
> + logging = memslot_is_logging(memslot);
> + write_fault = kvm_is_write_fault(vcpu);
> + exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
> + VM_BUG_ON(write_fault && exec_fault);
No VM_BUG_ON please.
VM_WARN_ON_ONCE() maybe. Or just handle it cleanly along with the "Unexpected
L2 read permission error" check below.
> +
> + if (is_perm && !write_fault && !exec_fault) {
> + kvm_err("Unexpected L2 read permission error\n");
> + return -EFAULT;
> + }
> +
> + ret = kvm_gmem_get_pfn(vcpu->kvm, memslot, gfn, &pfn, &page, NULL);
> + if (ret) {
> + kvm_prepare_memory_fault_exit(vcpu, fault_ipa, PAGE_SIZE,
> + write_fault, exec_fault, false);
> + return ret;
> + }
> +
> + writable = !(memslot->flags & KVM_MEM_READONLY) &&
> + (!logging || write_fault);
> +
> + if (writable)
> + prot |= KVM_PGTABLE_PROT_W;
> +
> + if (exec_fault || cpus_have_final_cap(ARM64_HAS_CACHE_DIC))
> + prot |= KVM_PGTABLE_PROT_X;
> +
> + pgt = vcpu->arch.hw_mmu->pgt;
Can probably also initialize directly above.
> +
> + kvm_fault_lock(kvm);
> + if (is_perm) {
> + /*
> + * Drop the SW bits in favour of those stored in the
> + * PTE, which will be preserved.
> + */
> + prot &= ~KVM_NV_GUEST_MAP_SZ;
> + ret = KVM_PGT_FN(kvm_pgtable_stage2_relax_perms)(pgt, fault_ipa, prot, flags);
> + } else {
> + ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, PAGE_SIZE,
> + __pfn_to_phys(pfn), prot,
> + memcache, flags);
> + }
> + kvm_release_faultin_page(kvm, page, !!ret, writable);
> + kvm_fault_unlock(kvm);
> +
> + if (writable && !ret)
> + mark_page_dirty_in_slot(kvm, memslot, gfn);
> +
> + return ret != -EAGAIN ? ret : 0;
> +}
> +
Nothing else jumped at me. But just like on the x86 code, I think we
need some arch experts to take a look at this one ...
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 14/16] KVM: arm64: Enable mapping guest_memfd in arm64
2025-05-27 18:02 ` [PATCH v10 14/16] KVM: arm64: Enable mapping guest_memfd in arm64 Fuad Tabba
@ 2025-06-04 13:17 ` David Hildenbrand
2025-06-04 13:31 ` Fuad Tabba
0 siblings, 1 reply; 62+ messages in thread
From: David Hildenbrand @ 2025-06-04 13:17 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 27.05.25 20:02, Fuad Tabba wrote:
> Enable mapping guest_memfd backed memory at the host in arm64. For now,
> it applies to all VM types in arm64 that use guest_memfd. In the
> future, new VM types can restrict this via
> kvm_arch_gmem_supports_shared_mem().
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/arm64/include/asm/kvm_host.h | 5 +++++
> arch/arm64/kvm/Kconfig | 1 +
> arch/arm64/kvm/mmu.c | 7 +++++++
> 3 files changed, 13 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 08ba91e6fb03..8add94929711 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -1593,4 +1593,9 @@ static inline bool kvm_arch_has_irq_bypass(void)
> return true;
> }
>
> +#ifdef CONFIG_KVM_GMEM
> +#define kvm_arch_supports_gmem(kvm) true
> +#define kvm_arch_supports_gmem_shared_mem(kvm) IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM)
> +#endif
> +
> #endif /* __ARM64_KVM_HOST_H__ */
> diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> index 096e45acadb2..8c1e1964b46a 100644
> --- a/arch/arm64/kvm/Kconfig
> +++ b/arch/arm64/kvm/Kconfig
> @@ -38,6 +38,7 @@ menuconfig KVM
> select HAVE_KVM_VCPU_RUN_PID_CHANGE
> select SCHED_INFO
> select GUEST_PERF_EVENTS if PERF_EVENTS
> + select KVM_GMEM_SHARED_MEM
> help
> Support hosting virtualized guest machines.
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 896c56683d88..03da08390bf0 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -2264,6 +2264,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> if ((new->base_gfn + new->npages) > (kvm_phys_size(&kvm->arch.mmu) >> PAGE_SHIFT))
> return -EFAULT;
>
> + /*
> + * Only support guest_memfd backed memslots with shared memory, since
> + * there aren't any CoCo VMs that support only private memory on arm64.
> + */
> + if (kvm_slot_has_gmem(new) && !kvm_gmem_memslot_supports_shared(new))
> + return -EINVAL;
> +
Right, that can get lifted once we have such VMs.
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 13/16] KVM: arm64: Handle guest_memfd-backed guest page faults
2025-06-04 13:17 ` David Hildenbrand
@ 2025-06-04 13:30 ` Fuad Tabba
2025-06-04 13:33 ` David Hildenbrand
0 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-06-04 13:30 UTC (permalink / raw)
To: David Hildenbrand
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
Hi David
On Wed, 4 Jun 2025 at 14:17, David Hildenbrand <david@redhat.com> wrote:
>
> On 27.05.25 20:02, Fuad Tabba wrote:
> > Add arm64 support for handling guest page faults on guest_memfd backed
> > memslots. Until guest_memfd supports huge pages, the fault granule is
> > restricted to PAGE_SIZE.
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> >
> > ---
> >
> > Note: This patch introduces a new function, gmem_abort() rather than
> > previous attempts at trying to expand user_mem_abort(). This is because
> > there are many differences in how faults are handled when backed by
> > guest_memfd vs regular memslots with anonymous memory, e.g., lack of
> > VMA, and for now, lack of huge page support for guest_memfd. The
> > function user_mem_abort() is already big and unwieldy, adding more
> > complexity to it made things more difficult to understand.
> >
> > Once larger page size support is added to guest_memfd, we could factor
> > out the common code between these two functions.
> >
> > ---
> > arch/arm64/kvm/mmu.c | 89 +++++++++++++++++++++++++++++++++++++++++++-
> > 1 file changed, 87 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 9865ada04a81..896c56683d88 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -1466,6 +1466,87 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
> > return vma->vm_flags & VM_MTE_ALLOWED;
> > }
> >
> > +static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> > + struct kvm_memory_slot *memslot, bool is_perm)
>
> TBH, I have no idea why the existing function is called "_abort". I am
> sure there is a good reason :)
>
The reason is ARM. They're called "memory aborts", see D8.15 Memory
aborts in the ARM ARM:
https://developer.arm.com/documentation/ddi0487/latest/
Warning: PDF is 100MB+ with almost 15k pages :)
> > +{
> > + enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
> > + enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
> > + bool logging, write_fault, exec_fault, writable;
> > + struct kvm_pgtable *pgt;
> > + struct page *page;
> > + struct kvm *kvm;
> > + void *memcache;
> > + kvm_pfn_t pfn;
> > + gfn_t gfn;
> > + int ret;
> > +
> > + if (!is_perm) {
> > + int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
> > +
> > + if (!is_protected_kvm_enabled()) {
> > + memcache = &vcpu->arch.mmu_page_cache;
> > + ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
> > + } else {
> > + memcache = &vcpu->arch.pkvm_memcache;
> > + ret = topup_hyp_memcache(memcache, min_pages);
> > + }
> > + if (ret)
> > + return ret;
> > + }
> > +
> > + kvm = vcpu->kvm;
> > + gfn = fault_ipa >> PAGE_SHIFT;
>
> These two can be initialized directly above.
>
I was trying to go with reverse christmas tree order of declarations,
but I'll do that.
> > +
> > + logging = memslot_is_logging(memslot);
> > + write_fault = kvm_is_write_fault(vcpu);
> > + exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
> > + VM_BUG_ON(write_fault && exec_fault);
>
> No VM_BUG_ON please.
>
> VM_WARN_ON_ONCE() maybe. Or just handle it cleanly along with the "Unexpected
> L2 read permission error" check below.
I'm following the same pattern as the existing user_mem_abort(), but
I'll change it.
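Probably something along these lines (just a sketch of one option, not
necessarily what v11 will do):

	/* A write fault and an exec fault should never happen together. */
	VM_WARN_ON_ONCE(write_fault && exec_fault);

	if (is_perm && !write_fault && !exec_fault) {
		kvm_err("Unexpected L2 read permission error\n");
		return -EFAULT;
	}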
> > +
> > + if (is_perm && !write_fault && !exec_fault) {
> > + kvm_err("Unexpected L2 read permission error\n");
> > + return -EFAULT;
> > + }
> > +
> > + ret = kvm_gmem_get_pfn(vcpu->kvm, memslot, gfn, &pfn, &page, NULL);
> > + if (ret) {
> > + kvm_prepare_memory_fault_exit(vcpu, fault_ipa, PAGE_SIZE,
> > + write_fault, exec_fault, false);
> > + return ret;
> > + }
> > +
> > + writable = !(memslot->flags & KVM_MEM_READONLY) &&
> > + (!logging || write_fault);
> > +
> > + if (writable)
> > + prot |= KVM_PGTABLE_PROT_W;
> > +
> > + if (exec_fault || cpus_have_final_cap(ARM64_HAS_CACHE_DIC))
> > + prot |= KVM_PGTABLE_PROT_X;
> > +
> > + pgt = vcpu->arch.hw_mmu->pgt;
>
> Can probably also initialize directly above.
Ack.
> > +
> > + kvm_fault_lock(kvm);
> > + if (is_perm) {
> > + /*
> > + * Drop the SW bits in favour of those stored in the
> > + * PTE, which will be preserved.
> > + */
> > + prot &= ~KVM_NV_GUEST_MAP_SZ;
> > + ret = KVM_PGT_FN(kvm_pgtable_stage2_relax_perms)(pgt, fault_ipa, prot, flags);
> > + } else {
> > + ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, PAGE_SIZE,
> > + __pfn_to_phys(pfn), prot,
> > + memcache, flags);
> > + }
> > + kvm_release_faultin_page(kvm, page, !!ret, writable);
> > + kvm_fault_unlock(kvm);
> > +
> > + if (writable && !ret)
> > + mark_page_dirty_in_slot(kvm, memslot, gfn);
> > +
> > + return ret != -EAGAIN ? ret : 0;
> > +}
> > +
>
> Nothing else jumped at me. But just like on the x86 code, I think we
> need some arch experts to take a look at this one ...
Thanks!
/fuad
> --
> Cheers,
>
> David / dhildenb
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 14/16] KVM: arm64: Enable mapping guest_memfd in arm64
2025-06-04 13:17 ` David Hildenbrand
@ 2025-06-04 13:31 ` Fuad Tabba
0 siblings, 0 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-06-04 13:31 UTC (permalink / raw)
To: David Hildenbrand
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
On Wed, 4 Jun 2025 at 14:17, David Hildenbrand <david@redhat.com> wrote:
>
> On 27.05.25 20:02, Fuad Tabba wrote:
> > Enable mapping guest_memfd backed memory at the host in arm64. For now,
> > it applies to all VM types in arm64 that use guest_memfd. In the
> > future, new VM types can restrict this via
> > kvm_arch_gmem_supports_shared_mem().
> >
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > arch/arm64/include/asm/kvm_host.h | 5 +++++
> > arch/arm64/kvm/Kconfig | 1 +
> > arch/arm64/kvm/mmu.c | 7 +++++++
> > 3 files changed, 13 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 08ba91e6fb03..8add94929711 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -1593,4 +1593,9 @@ static inline bool kvm_arch_has_irq_bypass(void)
> > return true;
> > }
> >
> > +#ifdef CONFIG_KVM_GMEM
> > +#define kvm_arch_supports_gmem(kvm) true
> > +#define kvm_arch_supports_gmem_shared_mem(kvm) IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM)
> > +#endif
> > +
> > #endif /* __ARM64_KVM_HOST_H__ */
> > diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
> > index 096e45acadb2..8c1e1964b46a 100644
> > --- a/arch/arm64/kvm/Kconfig
> > +++ b/arch/arm64/kvm/Kconfig
> > @@ -38,6 +38,7 @@ menuconfig KVM
> > select HAVE_KVM_VCPU_RUN_PID_CHANGE
> > select SCHED_INFO
> > select GUEST_PERF_EVENTS if PERF_EVENTS
> > + select KVM_GMEM_SHARED_MEM
> > help
> > Support hosting virtualized guest machines.
> >
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index 896c56683d88..03da08390bf0 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -2264,6 +2264,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> > if ((new->base_gfn + new->npages) > (kvm_phys_size(&kvm->arch.mmu) >> PAGE_SHIFT))
> > return -EFAULT;
> >
> > + /*
> > + * Only support guest_memfd backed memslots with shared memory, since
> > + * there aren't any CoCo VMs that support only private memory on arm64.
> > + */
> > + if (kvm_slot_has_gmem(new) && !kvm_gmem_memslot_supports_shared(new))
> > + return -EINVAL;
> > +
>
> Right, that can get lifted once we have such VMs.
>
> Acked-by: David Hildenbrand <david@redhat.com>
Thanks!
/fuad
> --
> Cheers,
>
> David / dhildenb
>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 13/16] KVM: arm64: Handle guest_memfd-backed guest page faults
2025-06-04 13:30 ` Fuad Tabba
@ 2025-06-04 13:33 ` David Hildenbrand
0 siblings, 0 replies; 62+ messages in thread
From: David Hildenbrand @ 2025-06-04 13:33 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
On 04.06.25 15:30, Fuad Tabba wrote:
> Hi David
>
> On Wed, 4 Jun 2025 at 14:17, David Hildenbrand <david@redhat.com> wrote:
>>
>> On 27.05.25 20:02, Fuad Tabba wrote:
>>> Add arm64 support for handling guest page faults on guest_memfd backed
>>> memslots. Until guest_memfd supports huge pages, the fault granule is
>>> restricted to PAGE_SIZE.
>>>
>>> Signed-off-by: Fuad Tabba <tabba@google.com>
>>>
>>> ---
>>>
>>> Note: This patch introduces a new function, gmem_abort() rather than
>>> previous attempts at trying to expand user_mem_abort(). This is because
>>> there are many differences in how faults are handled when backed by
>>> guest_memfd vs regular memslots with anonymous memory, e.g., lack of
>>> VMA, and for now, lack of huge page support for guest_memfd. The
>>> function user_mem_abort() is already big and unwieldy, adding more
>>> complexity to it made things more difficult to understand.
>>>
>>> Once larger page size support is added to guest_memfd, we could factor
>>> out the common code between these two functions.
>>>
>>> ---
>>> arch/arm64/kvm/mmu.c | 89 +++++++++++++++++++++++++++++++++++++++++++-
>>> 1 file changed, 87 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
>>> index 9865ada04a81..896c56683d88 100644
>>> --- a/arch/arm64/kvm/mmu.c
>>> +++ b/arch/arm64/kvm/mmu.c
>>> @@ -1466,6 +1466,87 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
>>> return vma->vm_flags & VM_MTE_ALLOWED;
>>> }
>>>
>>> +static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>>> + struct kvm_memory_slot *memslot, bool is_perm)
>>
>> TBH, I have no idea why the existing function is called "_abort". I am
>> sure there is a good reason :)
>>
>
> The reason is ARM. They're called "memory aborts", see D8.15 Memory
> aborts in the ARM ARM:
>
> https://developer.arm.com/documentation/ddi0487/latest/
>
> Warning: PDF is 100MB+ with almost 15k pages :)
>
>>> +{
>>> + enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
>>> + enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
>>> + bool logging, write_fault, exec_fault, writable;
>>> + struct kvm_pgtable *pgt;
>>> + struct page *page;
>>> + struct kvm *kvm;
>>> + void *memcache;
>>> + kvm_pfn_t pfn;
>>> + gfn_t gfn;
>>> + int ret;
>>> +
>>> + if (!is_perm) {
>>> + int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
>>> +
>>> + if (!is_protected_kvm_enabled()) {
>>> + memcache = &vcpu->arch.mmu_page_cache;
>>> + ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
>>> + } else {
>>> + memcache = &vcpu->arch.pkvm_memcache;
>>> + ret = topup_hyp_memcache(memcache, min_pages);
>>> + }
>>> + if (ret)
>>> + return ret;
>>> + }
>>> +
>>> + kvm = vcpu->kvm;
>>> + gfn = fault_ipa >> PAGE_SHIFT;
>>
>> These two can be initialized directly above.
>>
>
> I was trying to go with reverse christmas tree order of declarations,
> but I'll do that.
Can still do that, no? vcpu and fault_ipa are input parameters, so no
dependency between them.
>
>>> +
>>> + logging = memslot_is_logging(memslot);
>>> + write_fault = kvm_is_write_fault(vcpu);
>>> + exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
>> > + VM_BUG_ON(write_fault && exec_fault);
>>
>> No VM_BUG_ON please.
>>
>> VM_WARN_ON_ONCE() maybe. Or just handle it cleanly along with the "Unexpected
>> L2 read permission error" check below.
>
> I'm following the same pattern as the existing user_mem_abort(), but
> I'll change it.
Yeah, there are a lot of BUG_ON thingies that should be reworked.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-05-27 18:02 ` [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages Fuad Tabba
` (4 preceding siblings ...)
2025-06-04 12:26 ` David Hildenbrand
@ 2025-06-05 6:40 ` Gavin Shan
2025-06-05 8:25 ` Fuad Tabba
2025-06-05 8:28 ` Vlastimil Babka
6 siblings, 1 reply; 62+ messages in thread
From: Gavin Shan @ 2025-06-05 6:40 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
Hi Fuad,
On 5/28/25 4:02 AM, Fuad Tabba wrote:
> This patch enables support for shared memory in guest_memfd, including
> mapping that memory at the host userspace. This support is gated by the
> configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> guest_memfd instance.
>
> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> arch/x86/include/asm/kvm_host.h | 10 ++++
> arch/x86/kvm/x86.c | 3 +-
> include/linux/kvm_host.h | 13 ++++++
> include/uapi/linux/kvm.h | 1 +
> virt/kvm/Kconfig | 5 ++
> virt/kvm/guest_memfd.c | 81 +++++++++++++++++++++++++++++++++
> 6 files changed, 112 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 709cc2a7ba66..ce9ad4cd93c5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
>
> #ifdef CONFIG_KVM_GMEM
> #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> +
> +/*
> + * CoCo VMs with hardware support that use guest_memfd only for backing private
> + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
> + */
> +#define kvm_arch_supports_gmem_shared_mem(kvm) \
> + (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \
> + ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \
> + (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
> #else
> #define kvm_arch_supports_gmem(kvm) false
> +#define kvm_arch_supports_gmem_shared_mem(kvm) false
> #endif
>
> #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 035ced06b2dd..2a02f2457c42 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> return -EINVAL;
>
> kvm->arch.vm_type = type;
> - kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
> + kvm->arch.supports_gmem =
> + type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> /* Decided by the vendor code for other VM types. */
> kvm->arch.pre_fault_allowed =
> type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 80371475818f..ba83547e62b0 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> }
> #endif
>
> +/*
> + * Returns true if this VM supports shared mem in guest_memfd.
> + *
> + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
> + * guest_memfd is enabled.
> + */
> +#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
> +{
> + return false;
> +}
> +#endif
> +
> #ifndef kvm_arch_has_readonly_mem
> static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> {
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index b6ae8ad8934b..c2714c9d1a0e 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
> #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
>
> #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
> +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0)
>
> struct kvm_create_guest_memfd {
> __u64 size;
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index 559c93ad90be..df225298ab10 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> config HAVE_KVM_ARCH_GMEM_INVALIDATE
> bool
> depends on KVM_GMEM
> +
> +config KVM_GMEM_SHARED_MEM
> + select KVM_GMEM
> + bool
> + prompt "Enable support for non-private (shared) memory in guest_memfd"
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 6db515833f61..5d34712f64fc 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> return gfn - slot->base_gfn + slot->gmem.pgoff;
> }
>
> +static bool kvm_gmem_supports_shared(struct inode *inode)
> +{
> + u64 flags;
> +
> + if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
> + return false;
> +
> + flags = (u64)inode->i_private;
> +
> + return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> +}
> +
> +
> +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> +{
> + struct inode *inode = file_inode(vmf->vma->vm_file);
> + struct folio *folio;
> + vm_fault_t ret = VM_FAULT_LOCKED;
> +
> + folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> + if (IS_ERR(folio)) {
> + int err = PTR_ERR(folio);
> +
> + if (err == -EAGAIN)
> + return VM_FAULT_RETRY;
> +
> + return vmf_error(err);
> + }
> +
> + if (WARN_ON_ONCE(folio_test_large(folio))) {
> + ret = VM_FAULT_SIGBUS;
> + goto out_folio;
> + }
> +
> + if (!folio_test_uptodate(folio)) {
> + clear_highpage(folio_page(folio, 0));
> + kvm_gmem_mark_prepared(folio);
> + }
> +
> + vmf->page = folio_file_page(folio, vmf->pgoff);
> +
> +out_folio:
> + if (ret != VM_FAULT_LOCKED) {
> + folio_unlock(folio);
> + folio_put(folio);
> + }
> +
> + return ret;
> +}
> +
> +static const struct vm_operations_struct kvm_gmem_vm_ops = {
> + .fault = kvm_gmem_fault_shared,
> +};
> +
> +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> +{
> + if (!kvm_gmem_supports_shared(file_inode(file)))
> + return -ENODEV;
> +
> + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> + (VM_SHARED | VM_MAYSHARE)) {
> + return -EINVAL;
> + }
> +
> + vma->vm_ops = &kvm_gmem_vm_ops;
> +
> + return 0;
> +}
> +#else
> +#define kvm_gmem_mmap NULL
> +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> +
> static struct file_operations kvm_gmem_fops = {
> + .mmap = kvm_gmem_mmap,
> .open = generic_file_open,
> .release = kvm_gmem_release,
> .fallocate = kvm_gmem_fallocate,
> @@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> u64 flags = args->flags;
> u64 valid_flags = 0;
>
It seems there is an uncovered corner case, which exists in the current code (not directly
caused by this patch): after .mmap is hooked, the address space (inode->i_mapping) is
exposed to user space for further requests like madvise(). madvise(MADV_COLLAPSE) can
potentially collapse the pages to a huge page (folio) under the following assumptions.
This isn't the expected behavior since huge pages aren't supported yet.
- CONFIG_READ_ONLY_THP_FOR_FS = y
- the folios in the pagecache have been fully populated, it can be done by kvm_gmem_fallocate()
or kvm_gmem_get_pfn().
- mmap(0x00000f0100000000, ..., MAP_FIXED_NOREPLACE) on the guest-memfd, and then do
madvise(buf, size, MADV_COLLAPSE).
sys_madvise
do_madvise
madvise_do_behavior
madvise_vma_behavior
madvise_collapse
thp_vma_allowable_order
file_thp_enabled // need to return false to bail from the path earlier at least
hpage_collapse_scan_file
collapse_pte_mapped_thp
The fix would be to increase inode->i_writecount using allow_write_access() in
__kvm_gmem_create() to break the check done by file_thp_enabled().
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 0cd12f94958b..fe706c9f21cf 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -502,6 +502,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
}
file->f_flags |= O_LARGEFILE;
+ allow_write_access(file);
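For reference, the userspace sequence described above would look roughly like
this (sketch only; gmem_fd and size are assumed to be set up already, and the
fixed address is the one from the example):

	void *buf = mmap((void *)0x00000f0100000000UL, size,
			 PROT_READ | PROT_WRITE,
			 MAP_SHARED | MAP_FIXED_NOREPLACE, gmem_fd, 0);

	if (buf != MAP_FAILED)
		madvise(buf, size, MADV_COLLAPSE); /* may collapse to a huge page */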
> + if (kvm_arch_supports_gmem_shared_mem(kvm))
> + valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> +
> if (flags & ~valid_flags)
> return -EINVAL;
>
> @@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> offset + size > i_size_read(inode))
> goto err;
>
> + if (kvm_gmem_supports_shared(inode) &&
> + !kvm_arch_supports_gmem_shared_mem(kvm))
> + goto err;
> +
> filemap_invalidate_lock(inode->i_mapping);
>
> start = offset >> PAGE_SHIFT;
Thanks,
Gavin
^ permalink raw reply related [flat|nested] 62+ messages in thread
* Re: [PATCH v10 01/16] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM
2025-05-27 18:02 ` [PATCH v10 01/16] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM Fuad Tabba
2025-05-31 19:05 ` Shivank Garg
@ 2025-06-05 8:19 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Vlastimil Babka @ 2025-06-05 8:19 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vannapurve, ackerleytng, mail, david, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 5/27/25 20:02, Fuad Tabba wrote:
> The option KVM_PRIVATE_MEM enables guest_memfd in general. Subsequent
> patches add shared memory support to guest_memfd. Therefore, rename it
> to KVM_GMEM to make its purpose clearer.
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 02/16] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE
2025-05-27 18:02 ` [PATCH v10 02/16] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE Fuad Tabba
2025-05-31 19:07 ` Shivank Garg
@ 2025-06-05 8:19 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Vlastimil Babka @ 2025-06-05 8:19 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vannapurve, ackerleytng, mail, david, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 5/27/25 20:02, Fuad Tabba wrote:
> The option KVM_GENERIC_PRIVATE_MEM enables populating a GPA range with
> guest data. Rename it to KVM_GENERIC_GMEM_POPULATE to make its purpose
> clearer.
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 03/16] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem()
2025-05-27 18:02 ` [PATCH v10 03/16] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem() Fuad Tabba
2025-05-31 19:12 ` Shivank Garg
@ 2025-06-05 8:20 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Vlastimil Babka @ 2025-06-05 8:20 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vannapurve, ackerleytng, mail, david, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 5/27/25 20:02, Fuad Tabba wrote:
> The function kvm_arch_has_private_mem() is used to indicate whether
> guest_memfd is supported by the architecture, which until now implies
> that it's private. To decouple guest_memfd support from whether the
> memory is private, rename this function to kvm_arch_supports_gmem().
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 04/16] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem
2025-05-27 18:02 ` [PATCH v10 04/16] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem Fuad Tabba
2025-05-31 19:13 ` Shivank Garg
@ 2025-06-05 8:21 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Vlastimil Babka @ 2025-06-05 8:21 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vannapurve, ackerleytng, mail, david, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 5/27/25 20:02, Fuad Tabba wrote:
> The bool has_private_mem is used to indicate whether guest_memfd is
> supported. Rename it to supports_gmem to make its meaning clearer and to
> decouple memory being private from guest_memfd.
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 05/16] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem()
2025-05-27 18:02 ` [PATCH v10 05/16] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem() Fuad Tabba
2025-05-31 19:13 ` Shivank Garg
@ 2025-06-05 8:22 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Vlastimil Babka @ 2025-06-05 8:22 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vannapurve, ackerleytng, mail, david, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 5/27/25 20:02, Fuad Tabba wrote:
> The function kvm_slot_can_be_private() is used to check whether a memory
> slot is backed by guest_memfd. Rename it to kvm_slot_has_gmem() to make
> that clearer and to decouple memory being private from guest_memfd.
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Co-developed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 06/16] KVM: Fix comments that refer to slots_lock
2025-05-27 18:02 ` [PATCH v10 06/16] KVM: Fix comments that refer to slots_lock Fuad Tabba
2025-05-31 19:14 ` Shivank Garg
@ 2025-06-05 8:22 ` Vlastimil Babka
1 sibling, 0 replies; 62+ messages in thread
From: Vlastimil Babka @ 2025-06-05 8:22 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vannapurve, ackerleytng, mail, david, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 5/27/25 20:02, Fuad Tabba wrote:
> Fix comments so that they refer to slots_lock instead of slots_locks
> (remove trailing s).
>
> Reviewed-by: Gavin Shan <gshan@redhat.com>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Ira Weiny <ira.weiny@intel.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path
2025-05-27 18:02 ` [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path Fuad Tabba
` (2 preceding siblings ...)
2025-06-04 9:00 ` Gavin Shan
@ 2025-06-05 8:22 ` Vlastimil Babka
3 siblings, 0 replies; 62+ messages in thread
From: Vlastimil Babka @ 2025-06-05 8:22 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vannapurve, ackerleytng, mail, david, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 5/27/25 20:02, Fuad Tabba wrote:
> The comment that refers to the path where the user-visible memslot flags
> are defined refers to an outdated path and has a typo. Make it refer to the
> correct path.
>
> Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
^ permalink raw reply [flat|nested] 62+ messages in thread
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-06-05 6:40 ` Gavin Shan
@ 2025-06-05 8:25 ` Fuad Tabba
2025-06-05 9:53 ` Gavin Shan
0 siblings, 1 reply; 62+ messages in thread
From: Fuad Tabba @ 2025-06-05 8:25 UTC (permalink / raw)
To: Gavin Shan
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
Hi Gavin,
On Thu, 5 Jun 2025 at 07:41, Gavin Shan <gshan@redhat.com> wrote:
>
> Hi Fuad,
>
> On 5/28/25 4:02 AM, Fuad Tabba wrote:
> > This patch enables support for shared memory in guest_memfd, including
> > mapping that memory at the host userspace. This support is gated by the
> > configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> > flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> > guest_memfd instance.
> >
> > Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Fuad Tabba <tabba@google.com>
> > ---
> > arch/x86/include/asm/kvm_host.h | 10 ++++
> > arch/x86/kvm/x86.c | 3 +-
> > include/linux/kvm_host.h | 13 ++++++
> > include/uapi/linux/kvm.h | 1 +
> > virt/kvm/Kconfig | 5 ++
> > virt/kvm/guest_memfd.c | 81 +++++++++++++++++++++++++++++++++
> > 6 files changed, 112 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> > index 709cc2a7ba66..ce9ad4cd93c5 100644
> > --- a/arch/x86/include/asm/kvm_host.h
> > +++ b/arch/x86/include/asm/kvm_host.h
> > @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
> >
> > #ifdef CONFIG_KVM_GMEM
> > #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
> > +
> > +/*
> > + * CoCo VMs with hardware support that use guest_memfd only for backing private
> > + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
> > + */
> > +#define kvm_arch_supports_gmem_shared_mem(kvm) \
> > + (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \
> > + ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \
> > + (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
> > #else
> > #define kvm_arch_supports_gmem(kvm) false
> > +#define kvm_arch_supports_gmem_shared_mem(kvm) false
> > #endif
> >
> > #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > index 035ced06b2dd..2a02f2457c42 100644
> > --- a/arch/x86/kvm/x86.c
> > +++ b/arch/x86/kvm/x86.c
> > @@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
> > return -EINVAL;
> >
> > kvm->arch.vm_type = type;
> > - kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
> > + kvm->arch.supports_gmem =
> > + type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> > /* Decided by the vendor code for other VM types. */
> > kvm->arch.pre_fault_allowed =
> > type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
> > diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> > index 80371475818f..ba83547e62b0 100644
> > --- a/include/linux/kvm_host.h
> > +++ b/include/linux/kvm_host.h
> > @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
> > }
> > #endif
> >
> > +/*
> > + * Returns true if this VM supports shared mem in guest_memfd.
> > + *
> > + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
> > + * guest_memfd is enabled.
> > + */
> > +#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
> > +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
> > +{
> > + return false;
> > +}
> > +#endif
> > +
> > #ifndef kvm_arch_has_readonly_mem
> > static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
> > {
> > diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> > index b6ae8ad8934b..c2714c9d1a0e 100644
> > --- a/include/uapi/linux/kvm.h
> > +++ b/include/uapi/linux/kvm.h
> > @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
> > #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
> >
> > #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
> > +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0)
> >
> > struct kvm_create_guest_memfd {
> > __u64 size;
> > diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> > index 559c93ad90be..df225298ab10 100644
> > --- a/virt/kvm/Kconfig
> > +++ b/virt/kvm/Kconfig
> > @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> > config HAVE_KVM_ARCH_GMEM_INVALIDATE
> > bool
> > depends on KVM_GMEM
> > +
> > +config KVM_GMEM_SHARED_MEM
> > + select KVM_GMEM
> > + bool
> > + prompt "Enable support for non-private (shared) memory in guest_memfd"
> > diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> > index 6db515833f61..5d34712f64fc 100644
> > --- a/virt/kvm/guest_memfd.c
> > +++ b/virt/kvm/guest_memfd.c
> > @@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
> > return gfn - slot->base_gfn + slot->gmem.pgoff;
> > }
> >
> > +static bool kvm_gmem_supports_shared(struct inode *inode)
> > +{
> > + u64 flags;
> > +
> > + if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
> > + return false;
> > +
> > + flags = (u64)inode->i_private;
> > +
> > + return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +}
> > +
> > +
> > +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
> > +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
> > +{
> > + struct inode *inode = file_inode(vmf->vma->vm_file);
> > + struct folio *folio;
> > + vm_fault_t ret = VM_FAULT_LOCKED;
> > +
> > + folio = kvm_gmem_get_folio(inode, vmf->pgoff);
> > + if (IS_ERR(folio)) {
> > + int err = PTR_ERR(folio);
> > +
> > + if (err == -EAGAIN)
> > + return VM_FAULT_RETRY;
> > +
> > + return vmf_error(err);
> > + }
> > +
> > + if (WARN_ON_ONCE(folio_test_large(folio))) {
> > + ret = VM_FAULT_SIGBUS;
> > + goto out_folio;
> > + }
> > +
> > + if (!folio_test_uptodate(folio)) {
> > + clear_highpage(folio_page(folio, 0));
> > + kvm_gmem_mark_prepared(folio);
> > + }
> > +
> > + vmf->page = folio_file_page(folio, vmf->pgoff);
> > +
> > +out_folio:
> > + if (ret != VM_FAULT_LOCKED) {
> > + folio_unlock(folio);
> > + folio_put(folio);
> > + }
> > +
> > + return ret;
> > +}
> > +
> > +static const struct vm_operations_struct kvm_gmem_vm_ops = {
> > + .fault = kvm_gmem_fault_shared,
> > +};
> > +
> > +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
> > +{
> > + if (!kvm_gmem_supports_shared(file_inode(file)))
> > + return -ENODEV;
> > +
> > + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
> > + (VM_SHARED | VM_MAYSHARE)) {
> > + return -EINVAL;
> > + }
> > +
> > + vma->vm_ops = &kvm_gmem_vm_ops;
> > +
> > + return 0;
> > +}
> > +#else
> > +#define kvm_gmem_mmap NULL
> > +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
> > +
> > static struct file_operations kvm_gmem_fops = {
> > + .mmap = kvm_gmem_mmap,
> > .open = generic_file_open,
> > .release = kvm_gmem_release,
> > .fallocate = kvm_gmem_fallocate,
> > @@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
> > u64 flags = args->flags;
> > u64 valid_flags = 0;
> >
>
> It seems there is an uncovered corner case, which exists in the current code (not
> directly caused by this patch): after .mmap is hooked, the address space
> (inode->i_mapping) is exposed to user space for further requests like madvise().
> madvise(MADV_COLLAPSE) can potentially collapse the pages into a huge page (folio)
> under the following assumptions. That's not the expected behavior, since huge
> pages aren't supported yet.
>
> - CONFIG_READ_ONLY_THP_FOR_FS = y
> - the folios in the pagecache have been fully populated, which can be done by
>   kvm_gmem_fallocate() or kvm_gmem_get_pfn().
> - mmap(0x00000f0100000000, ..., MAP_FIXED_NOREPLACE) on the guest-memfd, and then do
> madvise(buf, size, MADV_COLLAPSE).
>
> sys_madvise
> do_madvise
> madvise_do_behavior
> madvise_vma_behavior
> madvise_collapse
> thp_vma_allowable_order
> file_thp_enabled // need to return false to bail from the path earlier at least
> hpage_collapse_scan_file
> collapse_pte_mapped_thp
>
> The fix would be to increase inode->i_writecount using allow_write_access() in
> __kvm_gmem_create() to break the check done by file_thp_enabled().
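For context, a rough sketch of the check that the suggested fix targets, based on
the mainline helpers (approximate; exact bodies may differ between kernel versions):

/* include/linux/huge_mm.h (approximate) */
static inline bool file_thp_enabled(struct vm_area_struct *vma)
{
	struct inode *inode;

	if (!IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS))
		return false;

	if (!vma->vm_file)
		return false;

	inode = file_inode(vma->vm_file);

	/* Only regular files with no open writers qualify for read-only THP. */
	return !inode_is_open_for_write(inode) && S_ISREG(inode->i_mode);
}

/* include/linux/fs.h */
static inline bool inode_is_open_for_write(const struct inode *inode)
{
	return atomic_read(&inode->i_writecount) > 0;
}

Bumping i_writecount on the guest_memfd inode, as the suggested allow_write_access()
call does, makes inode_is_open_for_write() return true, so file_thp_enabled() bails
out and madvise(MADV_COLLAPSE) cannot collapse the pages.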
Thanks for catching this. Even though it's not an issue until huge
page support is added, we might as well handle it now.
Out of curiosity, how did you spot this?
Cheers,
/fuad
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 0cd12f94958b..fe706c9f21cf 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -502,6 +502,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
> }
>
> file->f_flags |= O_LARGEFILE;
> + allow_write_access(file);
>
>
> > + if (kvm_arch_supports_gmem_shared_mem(kvm))
> > + valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
> > +
> > if (flags & ~valid_flags)
> > return -EINVAL;
> >
> > @@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
> > offset + size > i_size_read(inode))
> > goto err;
> >
> > + if (kvm_gmem_supports_shared(inode) &&
> > + !kvm_arch_supports_gmem_shared_mem(kvm))
> > + goto err;
> > +
> > filemap_invalidate_lock(inode->i_mapping);
> >
> > start = offset >> PAGE_SHIFT;
>
> Thanks,
> Gavin
>
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-05-27 18:02 ` [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages Fuad Tabba
` (5 preceding siblings ...)
2025-06-05 6:40 ` Gavin Shan
@ 2025-06-05 8:28 ` Vlastimil Babka
2025-06-05 8:44 ` Fuad Tabba
6 siblings, 1 reply; 62+ messages in thread
From: Vlastimil Babka @ 2025-06-05 8:28 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vannapurve, ackerleytng, mail, david, michael.roth, wei.w.wang,
liam.merwick, isaku.yamahata, kirill.shutemov, suzuki.poulose,
steven.price, quic_eberman, quic_mnalajal, quic_tsoni,
quic_svaddagi, quic_cvanscha, quic_pderrin, quic_pheragu,
catalin.marinas, james.morse, yuzenghui, oliver.upton, maz, will,
qperret, keirf, roypat, shuah, hch, jgg, rientjes, jhubbard, fvdl,
hughd, jthoughton, peterx, pankaj.gupta, ira.weiny
On 5/27/25 20:02, Fuad Tabba wrote:
> This patch enables support for shared memory in guest_memfd, including
> mapping that memory at the host userspace. This support is gated by the
> configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> guest_memfd instance.
>
> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
<snip>
> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> index 559c93ad90be..df225298ab10 100644
> --- a/virt/kvm/Kconfig
> +++ b/virt/kvm/Kconfig
> @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> config HAVE_KVM_ARCH_GMEM_INVALIDATE
> bool
> depends on KVM_GMEM
> +
> +config KVM_GMEM_SHARED_MEM
> + select KVM_GMEM
> + bool
> + prompt "Enable support for non-private (shared) memory in guest_memfd"
Due to this "prompt" line, the toggle for this appears on the front page of
make menuconfig, and is asked about during make oldconfig etc.
That seems unintended: no other options in this Kconfig have prompts, and a
later patch selects this option. So the prompt should be removed, otherwise
it's a Linus yelling hazard :)
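For illustration, a sketch of the entry with the prompt line dropped (just a
sketch; whether it also wants a help text or extra dependencies is up to the
series):

config KVM_GMEM_SHARED_MEM
	select KVM_GMEM
	bool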
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-06-05 8:28 ` Vlastimil Babka
@ 2025-06-05 8:44 ` Fuad Tabba
0 siblings, 0 replies; 62+ messages in thread
From: Fuad Tabba @ 2025-06-05 8:44 UTC (permalink / raw)
To: Vlastimil Babka
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vannapurve, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
Hi Vlastimil
On Thu, 5 Jun 2025 at 09:28, Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 5/27/25 20:02, Fuad Tabba wrote:
> > This patch enables support for shared memory in guest_memfd, including
> > mapping that memory at the host userspace. This support is gated by the
> > configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
> > flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
> > guest_memfd instance.
> >
> > Co-developed-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Ackerley Tng <ackerleytng@google.com>
> > Signed-off-by: Fuad Tabba <tabba@google.com>
>
> <snip>
>
> > diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
> > index 559c93ad90be..df225298ab10 100644
> > --- a/virt/kvm/Kconfig
> > +++ b/virt/kvm/Kconfig
> > @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
> > config HAVE_KVM_ARCH_GMEM_INVALIDATE
> > bool
> > depends on KVM_GMEM
> > +
> > +config KVM_GMEM_SHARED_MEM
> > + select KVM_GMEM
> > + bool
> > + prompt "Enable support for non-private (shared) memory in guest_memfd"
>
> Due to this "prompt" line, the toggle for this appears on the front page of
> make menuconfig, and is asked about during make oldconfig etc.
> That seems unintended: no other options in this Kconfig have prompts, and a
> later patch selects this option. So the prompt should be removed, otherwise
> it's a Linus yelling hazard :)
Ack, and thanks for the reviews!
Cheers,
/fuad
* Re: [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages
2025-06-05 8:25 ` Fuad Tabba
@ 2025-06-05 9:53 ` Gavin Shan
0 siblings, 0 replies; 62+ messages in thread
From: Gavin Shan @ 2025-06-05 9:53 UTC (permalink / raw)
To: Fuad Tabba
Cc: kvm, linux-arm-msm, linux-mm, pbonzini, chenhuacai, mpe, anup,
paul.walmsley, palmer, aou, seanjc, viro, brauner, willy, akpm,
xiaoyao.li, yilun.xu, chao.p.peng, jarkko, amoorthy, dmatlack,
isaku.yamahata, mic, vbabka, vannapurve, ackerleytng, mail, david,
michael.roth, wei.w.wang, liam.merwick, isaku.yamahata,
kirill.shutemov, suzuki.poulose, steven.price, quic_eberman,
quic_mnalajal, quic_tsoni, quic_svaddagi, quic_cvanscha,
quic_pderrin, quic_pheragu, catalin.marinas, james.morse,
yuzenghui, oliver.upton, maz, will, qperret, keirf, roypat, shuah,
hch, jgg, rientjes, jhubbard, fvdl, hughd, jthoughton, peterx,
pankaj.gupta, ira.weiny
Hi Fuad,
On 6/5/25 6:25 PM, Fuad Tabba wrote:
> On Thu, 5 Jun 2025 at 07:41, Gavin Shan <gshan@redhat.com> wrote:
>> On 5/28/25 4:02 AM, Fuad Tabba wrote:
>>> This patch enables support for shared memory in guest_memfd, including
>>> mapping that memory at the host userspace. This support is gated by the
>>> configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
>>> flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
>>> guest_memfd instance.
>>>
>>> Co-developed-by: Ackerley Tng <ackerleytng@google.com>
>>> Signed-off-by: Ackerley Tng <ackerleytng@google.com>
>>> Signed-off-by: Fuad Tabba <tabba@google.com>
>>> ---
>>> arch/x86/include/asm/kvm_host.h | 10 ++++
>>> arch/x86/kvm/x86.c | 3 +-
>>> include/linux/kvm_host.h | 13 ++++++
>>> include/uapi/linux/kvm.h | 1 +
>>> virt/kvm/Kconfig | 5 ++
>>> virt/kvm/guest_memfd.c | 81 +++++++++++++++++++++++++++++++++
>>> 6 files changed, 112 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>>> index 709cc2a7ba66..ce9ad4cd93c5 100644
>>> --- a/arch/x86/include/asm/kvm_host.h
>>> +++ b/arch/x86/include/asm/kvm_host.h
>>> @@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
>>>
>>> #ifdef CONFIG_KVM_GMEM
>>> #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
>>> +
>>> +/*
>>> + * CoCo VMs with hardware support that use guest_memfd only for backing private
>>> + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
>>> + */
>>> +#define kvm_arch_supports_gmem_shared_mem(kvm) \
>>> + (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \
>>> + ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \
>>> + (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
>>> #else
>>> #define kvm_arch_supports_gmem(kvm) false
>>> +#define kvm_arch_supports_gmem_shared_mem(kvm) false
>>> #endif
>>>
>>> #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index 035ced06b2dd..2a02f2457c42 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
>>> return -EINVAL;
>>>
>>> kvm->arch.vm_type = type;
>>> - kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
>>> + kvm->arch.supports_gmem =
>>> + type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
>>> /* Decided by the vendor code for other VM types. */
>>> kvm->arch.pre_fault_allowed =
>>> type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
>>> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
>>> index 80371475818f..ba83547e62b0 100644
>>> --- a/include/linux/kvm_host.h
>>> +++ b/include/linux/kvm_host.h
>>> @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
>>> }
>>> #endif
>>>
>>> +/*
>>> + * Returns true if this VM supports shared mem in guest_memfd.
>>> + *
>>> + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
>>> + * guest_memfd is enabled.
>>> + */
>>> +#if !defined(kvm_arch_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
>>> +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
>>> +{
>>> + return false;
>>> +}
>>> +#endif
>>> +
>>> #ifndef kvm_arch_has_readonly_mem
>>> static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
>>> {
>>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>>> index b6ae8ad8934b..c2714c9d1a0e 100644
>>> --- a/include/uapi/linux/kvm.h
>>> +++ b/include/uapi/linux/kvm.h
>>> @@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
>>> #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3)
>>>
>>> #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
>>> +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0)
>>>
>>> struct kvm_create_guest_memfd {
>>> __u64 size;
>>> diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
>>> index 559c93ad90be..df225298ab10 100644
>>> --- a/virt/kvm/Kconfig
>>> +++ b/virt/kvm/Kconfig
>>> @@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
>>> config HAVE_KVM_ARCH_GMEM_INVALIDATE
>>> bool
>>> depends on KVM_GMEM
>>> +
>>> +config KVM_GMEM_SHARED_MEM
>>> + select KVM_GMEM
>>> + bool
>>> + prompt "Enable support for non-private (shared) memory in guest_memfd"
>>> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
>>> index 6db515833f61..5d34712f64fc 100644
>>> --- a/virt/kvm/guest_memfd.c
>>> +++ b/virt/kvm/guest_memfd.c
>>> @@ -312,7 +312,81 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
>>> return gfn - slot->base_gfn + slot->gmem.pgoff;
>>> }
>>>
>>> +static bool kvm_gmem_supports_shared(struct inode *inode)
>>> +{
>>> + u64 flags;
>>> +
>>> + if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
>>> + return false;
>>> +
>>> + flags = (u64)inode->i_private;
>>> +
>>> + return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
>>> +}
>>> +
>>> +
>>> +#ifdef CONFIG_KVM_GMEM_SHARED_MEM
>>> +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
>>> +{
>>> + struct inode *inode = file_inode(vmf->vma->vm_file);
>>> + struct folio *folio;
>>> + vm_fault_t ret = VM_FAULT_LOCKED;
>>> +
>>> + folio = kvm_gmem_get_folio(inode, vmf->pgoff);
>>> + if (IS_ERR(folio)) {
>>> + int err = PTR_ERR(folio);
>>> +
>>> + if (err == -EAGAIN)
>>> + return VM_FAULT_RETRY;
>>> +
>>> + return vmf_error(err);
>>> + }
>>> +
>>> + if (WARN_ON_ONCE(folio_test_large(folio))) {
>>> + ret = VM_FAULT_SIGBUS;
>>> + goto out_folio;
>>> + }
>>> +
>>> + if (!folio_test_uptodate(folio)) {
>>> + clear_highpage(folio_page(folio, 0));
>>> + kvm_gmem_mark_prepared(folio);
>>> + }
>>> +
>>> + vmf->page = folio_file_page(folio, vmf->pgoff);
>>> +
>>> +out_folio:
>>> + if (ret != VM_FAULT_LOCKED) {
>>> + folio_unlock(folio);
>>> + folio_put(folio);
>>> + }
>>> +
>>> + return ret;
>>> +}
>>> +
>>> +static const struct vm_operations_struct kvm_gmem_vm_ops = {
>>> + .fault = kvm_gmem_fault_shared,
>>> +};
>>> +
>>> +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
>>> +{
>>> + if (!kvm_gmem_supports_shared(file_inode(file)))
>>> + return -ENODEV;
>>> +
>>> + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
>>> + (VM_SHARED | VM_MAYSHARE)) {
>>> + return -EINVAL;
>>> + }
>>> +
>>> + vma->vm_ops = &kvm_gmem_vm_ops;
>>> +
>>> + return 0;
>>> +}
>>> +#else
>>> +#define kvm_gmem_mmap NULL
>>> +#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
>>> +
>>> static struct file_operations kvm_gmem_fops = {
>>> + .mmap = kvm_gmem_mmap,
>>> .open = generic_file_open,
>>> .release = kvm_gmem_release,
>>> .fallocate = kvm_gmem_fallocate,
>>> @@ -463,6 +537,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
>>> u64 flags = args->flags;
>>> u64 valid_flags = 0;
>>>
>>
>> It seems there is an uncovered corner case, which exists in the current code (not
>> directly caused by this patch): after .mmap is hooked, the address space
>> (inode->i_mapping) is exposed to user space for further requests like madvise().
>> madvise(MADV_COLLAPSE) can potentially collapse the pages into a huge page (folio)
>> under the following assumptions. That's not the expected behavior, since huge
>> pages aren't supported yet.
>>
>> - CONFIG_READ_ONLY_THP_FOR_FS = y
>> - the folios in the pagecache have been fully populated, which can be done by
>>   kvm_gmem_fallocate() or kvm_gmem_get_pfn().
>> - mmap(0x00000f0100000000, ..., MAP_FIXED_NOREPLACE) on the guest-memfd, and then do
>> madvise(buf, size, MADV_COLLAPSE).
>>
>> sys_madvise
>> do_madvise
>> madvise_do_behavior
>> madvise_vma_behavior
>> madvise_collapse
>> thp_vma_allowable_order
>> file_thp_enabled // need to return false to bail from the path earlier at least
>> hpage_collapse_scan_file
>> collapse_pte_mapped_thp
>>
>> The fix would be to increase inode->i_writecount using allow_write_access() in
>> __kvm_gmem_create() to break the check done by file_thp_enabled().
>
> Thanks for catching this. Even though it's not an issue until huge
> page support is added, we might as well handle it now.
>
> Out of curiosity, how did you spot this?
>
No worries. Yeah, I think it's better to hide guest-memfd from huge pages for
now, since huge pages aren't supported yet. I have a kselftest scenario,
something like the one below. I expected it to fail on madvise(MADV_COLLAPSE),
but it didn't.
static void test_extra(unsigned long vm_type)
{
	struct kvm_vm *vm;
	size_t page_size = getpagesize();
	size_t total_size = page_size * (page_size / 8); /* one huge page */
	int i, fd, ret;
	void *buf;

	/* Create VM and guest-memfd */
	vm = vm_create_barebones_type(vm_type);
	fd = vm_create_guest_memfd(vm, total_size, GUEST_MEMFD_FLAG_SUPPORT_SHARED);
	buf = mmap((void *)0x00000f0100000000, total_size, PROT_READ | PROT_WRITE,
		   MAP_SHARED | MAP_FIXED_NOREPLACE, fd, 0);
	TEST_ASSERT(buf != MAP_FAILED, "mmap() guest memory should pass.");
	fprintf(stdout, "0x%lx mmapped at 0x%lx\n", total_size, (unsigned long)buf);

	/* fault-in without huge pages */
	ret = madvise(buf, total_size, MADV_NOHUGEPAGE);
	TEST_ASSERT(ret == 0, "madvise(NOHUGEPAGE) should pass.");
	ret = madvise(buf, total_size, MADV_POPULATE_READ);
	TEST_ASSERT(ret == 0, "madvise(POPULATE_READ) should pass.");

	/* collapse to a huge page */
	ret = madvise(buf, total_size, MADV_HUGEPAGE);
	TEST_ASSERT(ret == 0, "madvise(HUGEPAGE) should pass.");
	ret = madvise(buf, total_size, MADV_COLLAPSE);
	TEST_ASSERT(ret != 0, "madvise(COLLAPSE) should fail.");
}

int main(int argc, char *argv[])
{
	test_extra(VM_TYPE_DEFAULT);
}
Thanks,
Gavin
>> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
>> index 0cd12f94958b..fe706c9f21cf 100644
>> --- a/virt/kvm/guest_memfd.c
>> +++ b/virt/kvm/guest_memfd.c
>> @@ -502,6 +502,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>> }
>>
>> file->f_flags |= O_LARGEFILE;
>> + allow_write_access(file);
>>
>>
>>> + if (kvm_arch_supports_gmem_shared_mem(kvm))
>>> + valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
>>> +
>>> if (flags & ~valid_flags)
>>> return -EINVAL;
>>>
>>> @@ -501,6 +578,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
>>> offset + size > i_size_read(inode))
>>> goto err;
>>>
>>> + if (kvm_gmem_supports_shared(inode) &&
>>> + !kvm_arch_supports_gmem_shared_mem(kvm))
>>> + goto err;
>>> +
>>> filemap_invalidate_lock(inode->i_mapping);
>>>
>>> start = offset >> PAGE_SHIFT;
>>
>> Thanks,
>> Gavin
>>
>
* Re: [PATCH v10 15/16] KVM: Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM
2025-05-27 18:02 ` [PATCH v10 15/16] KVM: Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM Fuad Tabba
@ 2025-06-05 9:59 ` Gavin Shan
0 siblings, 0 replies; 62+ messages in thread
From: Gavin Shan @ 2025-06-05 9:59 UTC (permalink / raw)
To: Fuad Tabba, kvm, linux-arm-msm, linux-mm
Cc: pbonzini, chenhuacai, mpe, anup, paul.walmsley, palmer, aou,
seanjc, viro, brauner, willy, akpm, xiaoyao.li, yilun.xu,
chao.p.peng, jarkko, amoorthy, dmatlack, isaku.yamahata, mic,
vbabka, vannapurve, ackerleytng, mail, david, michael.roth,
wei.w.wang, liam.merwick, isaku.yamahata, kirill.shutemov,
suzuki.poulose, steven.price, quic_eberman, quic_mnalajal,
quic_tsoni, quic_svaddagi, quic_cvanscha, quic_pderrin,
quic_pheragu, catalin.marinas, james.morse, yuzenghui,
oliver.upton, maz, will, qperret, keirf, roypat, shuah, hch, jgg,
rientjes, jhubbard, fvdl, hughd, jthoughton, peterx, pankaj.gupta,
ira.weiny
On 5/28/25 4:02 AM, Fuad Tabba wrote:
> This patch introduces the KVM capability KVM_CAP_GMEM_SHARED_MEM, which
> indicates that guest_memfd supports shared memory (when enabled by the
> flag). This support is limited to certain VM types, determined per
> architecture.
>
> This patch also updates the KVM documentation with details on the new
> capability, flag, and other information about support for shared memory
> in guest_memfd.
>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Fuad Tabba <tabba@google.com>
> ---
> Documentation/virt/kvm/api.rst | 9 +++++++++
> include/uapi/linux/kvm.h | 1 +
> virt/kvm/kvm_main.c | 4 ++++
> 3 files changed, 14 insertions(+)
>
Reviewed-by: Gavin Shan <gshan@redhat.com>
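To make the intended userspace flow concrete, here is a rough sketch of how a VMM
might probe the new capability and map a shared guest_memfd. It assumes updated
uapi headers and an already-created VM fd; the constant names come from this
series, and the helper itself (map_shared_gmem) is hypothetical, not part of any
patch:

#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

static void *map_shared_gmem(int vm_fd, size_t size)
{
	struct kvm_create_guest_memfd gmem = {
		.size  = size,
		.flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
	};
	int gmem_fd;

	/* Only request a mappable guest_memfd if the VM advertises support. */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_GMEM_SHARED_MEM) <= 0)
		return MAP_FAILED;

	/* KVM_CREATE_GUEST_MEMFD returns the new file descriptor on success. */
	gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
	if (gmem_fd < 0)
		return MAP_FAILED;

	/* kvm_gmem_mmap() rejects non-shared mappings, so use MAP_SHARED. */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);
}

Callers would check the result against MAP_FAILED as with any mmap() wrapper.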
end of thread
Thread overview: 62+ messages
2025-05-27 18:02 [PATCH v10 00/16] KVM: Mapping guest_memfd backed memory at the host for software protected VMs Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 01/16] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM Fuad Tabba
2025-05-31 19:05 ` Shivank Garg
2025-06-05 8:19 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 02/16] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE Fuad Tabba
2025-05-31 19:07 ` Shivank Garg
2025-06-05 8:19 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 03/16] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem() Fuad Tabba
2025-05-31 19:12 ` Shivank Garg
2025-06-05 8:20 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 04/16] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem Fuad Tabba
2025-05-31 19:13 ` Shivank Garg
2025-06-05 8:21 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 05/16] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem() Fuad Tabba
2025-05-31 19:13 ` Shivank Garg
2025-06-05 8:22 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 06/16] KVM: Fix comments that refer to slots_lock Fuad Tabba
2025-05-31 19:14 ` Shivank Garg
2025-06-05 8:22 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 07/16] KVM: Fix comment that refers to kvm uapi header path Fuad Tabba
2025-05-31 19:19 ` Shivank Garg
2025-06-02 10:10 ` Fuad Tabba
2025-06-02 10:23 ` David Hildenbrand
2025-06-04 9:00 ` Gavin Shan
2025-06-05 8:22 ` Vlastimil Babka
2025-05-27 18:02 ` [PATCH v10 08/16] KVM: guest_memfd: Allow host to map guest_memfd pages Fuad Tabba
2025-05-28 23:17 ` kernel test robot
2025-06-02 10:05 ` Fuad Tabba
2025-06-02 10:43 ` Shivank Garg
2025-06-02 11:13 ` Fuad Tabba
2025-06-04 6:02 ` Gavin Shan
2025-06-04 8:37 ` Fuad Tabba
2025-06-04 12:26 ` David Hildenbrand
2025-06-04 12:32 ` Fuad Tabba
2025-06-04 13:02 ` David Hildenbrand
2025-06-05 6:40 ` Gavin Shan
2025-06-05 8:25 ` Fuad Tabba
2025-06-05 9:53 ` Gavin Shan
2025-06-05 8:28 ` Vlastimil Babka
2025-06-05 8:44 ` Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 09/16] KVM: guest_memfd: Track shared memory support in memslot Fuad Tabba
2025-06-04 12:25 ` David Hildenbrand
2025-06-04 12:31 ` Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 10/16] KVM: x86/mmu: Handle guest page faults for guest_memfd with shared memory Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 11/16] KVM: x86: Compute max_mapping_level with input from guest_memfd Fuad Tabba
2025-06-04 13:09 ` David Hildenbrand
2025-05-27 18:02 ` [PATCH v10 12/16] KVM: arm64: Refactor user_mem_abort() calculation of force_pte Fuad Tabba
2025-06-04 6:05 ` Gavin Shan
2025-05-27 18:02 ` [PATCH v10 13/16] KVM: arm64: Handle guest_memfd-backed guest page faults Fuad Tabba
2025-06-04 13:17 ` David Hildenbrand
2025-06-04 13:30 ` Fuad Tabba
2025-06-04 13:33 ` David Hildenbrand
2025-05-27 18:02 ` [PATCH v10 14/16] KVM: arm64: Enable mapping guest_memfd in arm64 Fuad Tabba
2025-06-04 13:17 ` David Hildenbrand
2025-06-04 13:31 ` Fuad Tabba
2025-05-27 18:02 ` [PATCH v10 15/16] KVM: Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM Fuad Tabba
2025-06-05 9:59 ` Gavin Shan
2025-05-27 18:02 ` [PATCH v10 16/16] KVM: selftests: guest_memfd mmap() test when mapping is allowed Fuad Tabba
2025-06-04 9:19 ` Gavin Shan
2025-06-04 9:48 ` Fuad Tabba
2025-06-04 10:05 ` Gavin Shan
2025-06-04 10:25 ` Fuad Tabba