public inbox for linux-mm@kvack.org
* [PATCH v11 00/16] Direct Map Removal Support for guest_memfd
@ 2026-03-17 14:10 Kalyazin, Nikita
  2026-03-17 14:10 ` [PATCH v11 01/16] set_memory: set_direct_map_* to take address Kalyazin, Nikita
                   ` (15 more replies)
  0 siblings, 16 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:10 UTC (permalink / raw)
  To: kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Kalyazin, Nikita

[ based on kvm/next ]

Unmapping virtual machine guest memory from the host kernel's direct map
is an effective mitigation against Spectre-style transient execution
issues: if the kernel page tables contain no entries pointing to guest
memory, any attempted speculative read through the direct map is
necessarily blocked by the MMU before any observable microarchitectural
side effects occur.  This means that Spectre gadgets and similar
techniques cannot be used to target virtual machine memory.  Roughly 60%
of speculative execution issues fall into this category [1, Table 1].

This patch series extends guest_memfd with the ability to remove its
memory from the host kernel's direct map, so that KVM guests whose
memory is backed by guest_memfd attain the above protection.

Additionally, a Firecracker branch with support for these VMs can be
found on GitHub [2].

For more details, please refer to the v5 cover letter.  No substantial
changes in design have taken place since.

See also the related write() syscall support in guest_memfd [3], which
describes how the two features interoperate.

Changes since v10:
 - David: use a generic implementation for
   folio_{zap,restore}_direct_map instead of per-arch ones, and make
   folio_restore_direct_map return void instead of int.  Ackerley, I
   dropped your "Reviewed-by:" as patch 02/16 has changed significantly.
   Could you have another look when you have time?
 - David: fix: kvm_gmem_folio_zap_direct_map: do not set
   KVM_GMEM_FOLIO_NO_DIRECT_MAP on failure
 - David: minor readability fixes

v10: https://lore.kernel.org/kvm/20260126164445.11867-1-kalyazin@amazon.com
v9: https://lore.kernel.org/kvm/20260114134510.1835-1-kalyazin@amazon.com
v8: https://lore.kernel.org/kvm/20251205165743.9341-1-kalyazin@amazon.com
v7: https://lore.kernel.org/kvm/20250924151101.2225820-1-patrick.roy@campus.lmu.de
v6: https://lore.kernel.org/kvm/20250912091708.17502-1-roypat@amazon.co.uk
v5: https://lore.kernel.org/kvm/20250828093902.2719-1-roypat@amazon.co.uk
v4: https://lore.kernel.org/kvm/20250221160728.1584559-1-roypat@amazon.co.uk
RFCv3: https://lore.kernel.org/kvm/20241030134912.515725-1-roypat@amazon.co.uk
RFCv2: https://lore.kernel.org/kvm/20240910163038.1298452-1-roypat@amazon.co.uk
RFCv1: https://lore.kernel.org/kvm/20240709132041.3625501-1-roypat@amazon.co.uk

[1] https://download.vusec.net/papers/quarantine_raid23.pdf
[2] https://github.com/firecracker-microvm/firecracker/tree/feature/secret-hiding
[3] https://lore.kernel.org/kvm/20251114151828.98165-1-kalyazin@amazon.com

Nikita Kalyazin (4):
  set_memory: set_direct_map_* to take address
  set_memory: add folio_{zap,restore}_direct_map helpers
  mm/secretmem: make use of folio_{zap,restore}_direct_map
  mm/gup: drop local variable in gup_fast_folio_allowed

Patrick Roy (12):
  mm/gup: drop secretmem optimization from gup_fast_folio_allowed
  mm: introduce AS_NO_DIRECT_MAP
  KVM: guest_memfd: Add stub for kvm_arch_gmem_invalidate
  KVM: x86: define kvm_arch_gmem_supports_no_direct_map()
  KVM: arm64: define kvm_arch_gmem_supports_no_direct_map()
  KVM: guest_memfd: Add flag to remove from direct map
  KVM: selftests: load elf via bounce buffer
  KVM: selftests: set KVM_MEM_GUEST_MEMFD in vm_mem_add() if guest_memfd
    != -1
  KVM: selftests: Add guest_memfd based vm_mem_backing_src_types
  KVM: selftests: cover GUEST_MEMFD_FLAG_NO_DIRECT_MAP in existing
    selftests
  KVM: selftests: stuff vm_mem_backing_src_type into vm_shape
  KVM: selftests: Test guest execution from direct map removed gmem

 Documentation/virt/kvm/api.rst                | 21 +++---
 arch/arm64/include/asm/kvm_host.h             | 13 ++++
 arch/arm64/include/asm/set_memory.h           |  7 +-
 arch/arm64/mm/pageattr.c                      | 19 +++--
 arch/loongarch/include/asm/set_memory.h       |  8 ++-
 arch/loongarch/mm/pageattr.c                  | 25 +++----
 arch/riscv/include/asm/set_memory.h           |  7 +-
 arch/riscv/mm/pageattr.c                      | 17 ++---
 arch/s390/include/asm/set_memory.h            |  7 +-
 arch/s390/mm/pageattr.c                       | 13 ++--
 arch/x86/include/asm/kvm_host.h               |  6 ++
 arch/x86/include/asm/set_memory.h             |  7 +-
 arch/x86/kvm/x86.c                            |  5 ++
 arch/x86/mm/pat/set_memory.c                  | 23 +++---
 include/linux/kvm_host.h                      | 14 ++++
 include/linux/pagemap.h                       | 16 +++++
 include/linux/secretmem.h                     | 18 -----
 include/linux/set_memory.h                    | 22 ++++--
 include/uapi/linux/kvm.h                      |  1 +
 kernel/power/snapshot.c                       |  4 +-
 lib/buildid.c                                 |  8 ++-
 mm/execmem.c                                  |  6 +-
 mm/gup.c                                      | 41 +++++------
 mm/memory.c                                   | 42 +++++++++++
 mm/mlock.c                                    |  2 +-
 mm/secretmem.c                                | 18 ++---
 mm/vmalloc.c                                  | 11 +--
 .../testing/selftests/kvm/guest_memfd_test.c  | 17 ++++-
 .../testing/selftests/kvm/include/kvm_util.h  | 37 +++++++---
 .../testing/selftests/kvm/include/test_util.h |  8 +++
 tools/testing/selftests/kvm/lib/elf.c         |  8 +--
 tools/testing/selftests/kvm/lib/io.c          | 23 ++++++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 59 ++++++++-------
 tools/testing/selftests/kvm/lib/test_util.c   |  8 +++
 tools/testing/selftests/kvm/lib/x86/sev.c     |  1 +
 .../selftests/kvm/pre_fault_memory_test.c     |  1 +
 .../selftests/kvm/set_memory_region_test.c    | 52 ++++++++++++--
 .../kvm/x86/private_mem_conversions_test.c    |  7 +-
 virt/kvm/guest_memfd.c                        | 71 ++++++++++++++++---
 39 files changed, 474 insertions(+), 199 deletions(-)


base-commit: d2ea4ff1ce50787a98a3900b3fb1636f3620b7cf
-- 
2.50.1




* [PATCH v11 01/16] set_memory: set_direct_map_* to take address
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
@ 2026-03-17 14:10 ` Kalyazin, Nikita
  2026-03-23 17:44   ` David Hildenbrand (Arm)
  2026-03-23 18:00   ` Ackerley Tng
  2026-03-17 14:10 ` [PATCH v11 02/16] set_memory: add folio_{zap,restore}_direct_map helpers Kalyazin, Nikita
                   ` (14 subsequent siblings)
  15 siblings, 2 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:10 UTC (permalink / raw)

From: Nikita Kalyazin <kalyazin@amazon.com>

This avoids excessive folio->page->address conversions when adding
helpers on top of set_direct_map_valid_noflush() in the next patch.

Acked-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 arch/arm64/include/asm/set_memory.h     |  7 ++++---
 arch/arm64/mm/pageattr.c                | 19 +++++++++----------
 arch/loongarch/include/asm/set_memory.h |  7 ++++---
 arch/loongarch/mm/pageattr.c            | 25 +++++++++++--------------
 arch/riscv/include/asm/set_memory.h     |  7 ++++---
 arch/riscv/mm/pageattr.c                | 17 +++++++++--------
 arch/s390/include/asm/set_memory.h      |  7 ++++---
 arch/s390/mm/pageattr.c                 | 13 +++++++------
 arch/x86/include/asm/set_memory.h       |  7 ++++---
 arch/x86/mm/pat/set_memory.c            | 23 ++++++++++++-----------
 include/linux/set_memory.h              |  9 +++++----
 kernel/power/snapshot.c                 |  4 ++--
 mm/execmem.c                            |  6 ++++--
 mm/secretmem.c                          |  6 +++---
 mm/vmalloc.c                            | 11 +++++++----
 15 files changed, 89 insertions(+), 79 deletions(-)

diff --git a/arch/arm64/include/asm/set_memory.h b/arch/arm64/include/asm/set_memory.h
index 90f61b17275e..c71a2a6812c4 100644
--- a/arch/arm64/include/asm/set_memory.h
+++ b/arch/arm64/include/asm/set_memory.h
@@ -11,9 +11,10 @@ bool can_set_direct_map(void);
 
 int set_memory_valid(unsigned long addr, int numpages, int enable);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
+int set_direct_map_invalid_noflush(const void *addr);
+int set_direct_map_default_noflush(const void *addr);
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+				 bool valid);
 bool kernel_page_present(struct page *page);
 
 int set_memory_encrypted(unsigned long addr, int numpages);
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 358d1dc9a576..5aff94e1f8b2 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -245,7 +245,7 @@ int set_memory_valid(unsigned long addr, int numpages, int enable)
 					__pgprot(PTE_VALID));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(const void *addr)
 {
 	pgprot_t clear_mask = __pgprot(PTE_VALID);
 	pgprot_t set_mask = __pgprot(0);
@@ -253,11 +253,11 @@ int set_direct_map_invalid_noflush(struct page *page)
 	if (!can_set_direct_map())
 		return 0;
 
-	return update_range_prot((unsigned long)page_address(page),
-				 PAGE_SIZE, set_mask, clear_mask);
+	return update_range_prot((unsigned long)addr, PAGE_SIZE, set_mask,
+				 clear_mask);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(const void *addr)
 {
 	pgprot_t set_mask = __pgprot(PTE_VALID | PTE_WRITE);
 	pgprot_t clear_mask = __pgprot(PTE_RDONLY);
@@ -265,8 +265,8 @@ int set_direct_map_default_noflush(struct page *page)
 	if (!can_set_direct_map())
 		return 0;
 
-	return update_range_prot((unsigned long)page_address(page),
-				 PAGE_SIZE, set_mask, clear_mask);
+	return update_range_prot((unsigned long)addr, PAGE_SIZE, set_mask,
+				 clear_mask);
 }
 
 static int __set_memory_enc_dec(unsigned long addr,
@@ -349,14 +349,13 @@ int realm_register_memory_enc_ops(void)
 	return arm64_mem_crypt_ops_register(&realm_crypt_ops);
 }
 
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+				 bool valid)
 {
-	unsigned long addr = (unsigned long)page_address(page);
-
 	if (!can_set_direct_map())
 		return 0;
 
-	return set_memory_valid(addr, nr, valid);
+	return set_memory_valid((unsigned long)addr, numpages, valid);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/arch/loongarch/include/asm/set_memory.h b/arch/loongarch/include/asm/set_memory.h
index 55dfaefd02c8..5e9b67b2fea1 100644
--- a/arch/loongarch/include/asm/set_memory.h
+++ b/arch/loongarch/include/asm/set_memory.h
@@ -15,8 +15,9 @@ int set_memory_ro(unsigned long addr, int numpages);
 int set_memory_rw(unsigned long addr, int numpages);
 
 bool kernel_page_present(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
+int set_direct_map_invalid_noflush(const void *addr);
+int set_direct_map_default_noflush(const void *addr);
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+				 bool valid);
 
 #endif /* _ASM_LOONGARCH_SET_MEMORY_H */
diff --git a/arch/loongarch/mm/pageattr.c b/arch/loongarch/mm/pageattr.c
index f5e910b68229..9e08905d3624 100644
--- a/arch/loongarch/mm/pageattr.c
+++ b/arch/loongarch/mm/pageattr.c
@@ -198,32 +198,29 @@ bool kernel_page_present(struct page *page)
 	return pte_present(ptep_get(pte));
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(const void *addr)
 {
-	unsigned long addr = (unsigned long)page_address(page);
-
-	if (addr < vm_map_base)
+	if ((unsigned long)addr < vm_map_base)
 		return 0;
 
-	return __set_memory(addr, 1, PAGE_KERNEL, __pgprot(0));
+	return __set_memory((unsigned long)addr, 1, PAGE_KERNEL, __pgprot(0));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(const void *addr)
 {
-	unsigned long addr = (unsigned long)page_address(page);
-
-	if (addr < vm_map_base)
+	if ((unsigned long)addr < vm_map_base)
 		return 0;
 
-	return __set_memory(addr, 1, __pgprot(0), __pgprot(_PAGE_PRESENT | _PAGE_VALID));
+	return __set_memory((unsigned long)addr, 1, __pgprot(0),
+			    __pgprot(_PAGE_PRESENT | _PAGE_VALID));
 }
 
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+				 bool valid)
 {
-	unsigned long addr = (unsigned long)page_address(page);
 	pgprot_t set, clear;
 
-	if (addr < vm_map_base)
+	if ((unsigned long)addr < vm_map_base)
 		return 0;
 
 	if (valid) {
@@ -234,5 +231,5 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
 		clear = __pgprot(_PAGE_PRESENT | _PAGE_VALID);
 	}
 
-	return __set_memory(addr, 1, set, clear);
+	return __set_memory((unsigned long)addr, 1, set, clear);
 }
diff --git a/arch/riscv/include/asm/set_memory.h b/arch/riscv/include/asm/set_memory.h
index 87389e93325a..a87eabd7fc78 100644
--- a/arch/riscv/include/asm/set_memory.h
+++ b/arch/riscv/include/asm/set_memory.h
@@ -40,9 +40,10 @@ static inline int set_kernel_memory(char *startp, char *endp,
 }
 #endif
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
+int set_direct_map_invalid_noflush(const void *addr);
+int set_direct_map_default_noflush(const void *addr);
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+				 bool valid);
 bool kernel_page_present(struct page *page);
 
 #endif /* __ASSEMBLER__ */
diff --git a/arch/riscv/mm/pageattr.c b/arch/riscv/mm/pageattr.c
index 3f76db3d2769..0a457177a88c 100644
--- a/arch/riscv/mm/pageattr.c
+++ b/arch/riscv/mm/pageattr.c
@@ -374,19 +374,20 @@ int set_memory_nx(unsigned long addr, int numpages)
 	return __set_memory(addr, numpages, __pgprot(0), __pgprot(_PAGE_EXEC));
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(const void *addr)
 {
-	return __set_memory((unsigned long)page_address(page), 1,
-			    __pgprot(0), __pgprot(_PAGE_PRESENT));
+	return __set_memory((unsigned long)addr, 1, __pgprot(0),
+			    __pgprot(_PAGE_PRESENT));
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(const void *addr)
 {
-	return __set_memory((unsigned long)page_address(page), 1,
-			    PAGE_KERNEL, __pgprot(_PAGE_EXEC));
+	return __set_memory((unsigned long)addr, 1, PAGE_KERNEL,
+			    __pgprot(_PAGE_EXEC));
 }
 
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+				 bool valid)
 {
 	pgprot_t set, clear;
 
@@ -398,7 +399,7 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
 		clear = __pgprot(_PAGE_PRESENT);
 	}
 
-	return __set_memory((unsigned long)page_address(page), nr, set, clear);
+	return __set_memory((unsigned long)addr, numpages, set, clear);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/arch/s390/include/asm/set_memory.h b/arch/s390/include/asm/set_memory.h
index 94092f4ae764..3e43c3c96e67 100644
--- a/arch/s390/include/asm/set_memory.h
+++ b/arch/s390/include/asm/set_memory.h
@@ -60,9 +60,10 @@ __SET_MEMORY_FUNC(set_memory_rox, SET_MEMORY_RO | SET_MEMORY_X)
 __SET_MEMORY_FUNC(set_memory_rwnx, SET_MEMORY_RW | SET_MEMORY_NX)
 __SET_MEMORY_FUNC(set_memory_4k, SET_MEMORY_4K)
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
+int set_direct_map_invalid_noflush(const void *addr);
+int set_direct_map_default_noflush(const void *addr);
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+				 bool valid);
 bool kernel_page_present(struct page *page);
 
 #endif
diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c
index bb29c38ae624..8e90ff5cf50d 100644
--- a/arch/s390/mm/pageattr.c
+++ b/arch/s390/mm/pageattr.c
@@ -383,17 +383,18 @@ int __set_memory(unsigned long addr, unsigned long numpages, unsigned long flags
 	return rc;
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(const void *addr)
 {
-	return __set_memory((unsigned long)page_to_virt(page), 1, SET_MEMORY_INV);
+	return __set_memory((unsigned long)addr, 1, SET_MEMORY_INV);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(const void *addr)
 {
-	return __set_memory((unsigned long)page_to_virt(page), 1, SET_MEMORY_DEF);
+	return __set_memory((unsigned long)addr, 1, SET_MEMORY_DEF);
 }
 
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+				 bool valid)
 {
 	unsigned long flags;
 
@@ -402,7 +403,7 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
 	else
 		flags = SET_MEMORY_INV;
 
-	return __set_memory((unsigned long)page_to_virt(page), nr, flags);
+	return __set_memory((unsigned long)addr, numpages, flags);
 }
 
 bool kernel_page_present(struct page *page)
diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
index 4362c26aa992..b6a4173ff249 100644
--- a/arch/x86/include/asm/set_memory.h
+++ b/arch/x86/include/asm/set_memory.h
@@ -86,9 +86,10 @@ int set_pages_wb(struct page *page, int numpages);
 int set_pages_ro(struct page *page, int numpages);
 int set_pages_rw(struct page *page, int numpages);
 
-int set_direct_map_invalid_noflush(struct page *page);
-int set_direct_map_default_noflush(struct page *page);
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid);
+int set_direct_map_invalid_noflush(const void *addr);
+int set_direct_map_default_noflush(const void *addr);
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+				 bool valid);
 bool kernel_page_present(struct page *page);
 
 extern int kernel_set_to_readonly;
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 40581a720fe8..6aea1f470fd5 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2587,9 +2587,9 @@ int set_pages_rw(struct page *page, int numpages)
 	return set_memory_rw(addr, numpages);
 }
 
-static int __set_pages_p(struct page *page, int numpages)
+static int __set_pages_p(const void *addr, int numpages)
 {
-	unsigned long tempaddr = (unsigned long) page_address(page);
+	unsigned long tempaddr = (unsigned long)addr;
 	struct cpa_data cpa = { .vaddr = &tempaddr,
 				.pgd = NULL,
 				.numpages = numpages,
@@ -2606,9 +2606,9 @@ static int __set_pages_p(struct page *page, int numpages)
 	return __change_page_attr_set_clr(&cpa, 1);
 }
 
-static int __set_pages_np(struct page *page, int numpages)
+static int __set_pages_np(const void *addr, int numpages)
 {
-	unsigned long tempaddr = (unsigned long) page_address(page);
+	unsigned long tempaddr = (unsigned long)addr;
 	struct cpa_data cpa = { .vaddr = &tempaddr,
 				.pgd = NULL,
 				.numpages = numpages,
@@ -2625,22 +2625,23 @@ static int __set_pages_np(struct page *page, int numpages)
 	return __change_page_attr_set_clr(&cpa, 1);
 }
 
-int set_direct_map_invalid_noflush(struct page *page)
+int set_direct_map_invalid_noflush(const void *addr)
 {
-	return __set_pages_np(page, 1);
+	return __set_pages_np(addr, 1);
 }
 
-int set_direct_map_default_noflush(struct page *page)
+int set_direct_map_default_noflush(const void *addr)
 {
-	return __set_pages_p(page, 1);
+	return __set_pages_p(addr, 1);
 }
 
-int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
+int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
+				 bool valid)
 {
 	if (valid)
-		return __set_pages_p(page, nr);
+		return __set_pages_p(addr, numpages);
 
-	return __set_pages_np(page, nr);
+	return __set_pages_np(addr, numpages);
 }
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index 3030d9245f5a..1a2563f525fc 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -25,17 +25,18 @@ static inline int set_memory_rox(unsigned long addr, int numpages)
 #endif
 
 #ifndef CONFIG_ARCH_HAS_SET_DIRECT_MAP
-static inline int set_direct_map_invalid_noflush(struct page *page)
+static inline int set_direct_map_invalid_noflush(const void *addr)
 {
 	return 0;
 }
-static inline int set_direct_map_default_noflush(struct page *page)
+static inline int set_direct_map_default_noflush(const void *addr)
 {
 	return 0;
 }
 
-static inline int set_direct_map_valid_noflush(struct page *page,
-					       unsigned nr, bool valid)
+static inline int set_direct_map_valid_noflush(const void *addr,
+					       unsigned long numpages,
+					       bool valid)
 {
 	return 0;
 }
diff --git a/kernel/power/snapshot.c b/kernel/power/snapshot.c
index 6e1321837c66..6eddfb22c0ff 100644
--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -88,7 +88,7 @@ static inline int hibernate_restore_unprotect_page(void *page_address) {return 0
 static inline void hibernate_map_page(struct page *page)
 {
 	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
-		int ret = set_direct_map_default_noflush(page);
+		int ret = set_direct_map_default_noflush(page_address(page));
 
 		if (ret)
 			pr_warn_once("Failed to remap page\n");
@@ -101,7 +101,7 @@ static inline void hibernate_unmap_page(struct page *page)
 {
 	if (IS_ENABLED(CONFIG_ARCH_HAS_SET_DIRECT_MAP)) {
 		unsigned long addr = (unsigned long)page_address(page);
-		int ret  = set_direct_map_invalid_noflush(page);
+		int ret  = set_direct_map_invalid_noflush(page_address(page));
 
 		if (ret)
 			pr_warn_once("Failed to remap page\n");
diff --git a/mm/execmem.c b/mm/execmem.c
index 810a4ba9c924..220298ec87c8 100644
--- a/mm/execmem.c
+++ b/mm/execmem.c
@@ -119,7 +119,8 @@ static int execmem_set_direct_map_valid(struct vm_struct *vm, bool valid)
 	int err = 0;
 
 	for (int i = 0; i < vm->nr_pages; i += nr) {
-		err = set_direct_map_valid_noflush(vm->pages[i], nr, valid);
+		err = set_direct_map_valid_noflush(page_address(vm->pages[i]),
+						   nr, valid);
 		if (err)
 			goto err_restore;
 		updated += nr;
@@ -129,7 +130,8 @@ static int execmem_set_direct_map_valid(struct vm_struct *vm, bool valid)
 
 err_restore:
 	for (int i = 0; i < updated; i += nr)
-		set_direct_map_valid_noflush(vm->pages[i], nr, !valid);
+		set_direct_map_valid_noflush(page_address(vm->pages[i]), nr,
+					     !valid);
 
 	return err;
 }
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 11a779c812a7..fd29b33c6764 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -72,7 +72,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 			goto out;
 		}
 
-		err = set_direct_map_invalid_noflush(folio_page(folio, 0));
+		err = set_direct_map_invalid_noflush(folio_address(folio));
 		if (err) {
 			folio_put(folio);
 			ret = vmf_error(err);
@@ -87,7 +87,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 			 * already happened when we marked the page invalid
 			 * which guarantees that this call won't fail
 			 */
-			set_direct_map_default_noflush(folio_page(folio, 0));
+			set_direct_map_default_noflush(folio_address(folio));
 			folio_put(folio);
 			if (err == -EEXIST)
 				goto retry;
@@ -151,7 +151,7 @@ static int secretmem_migrate_folio(struct address_space *mapping,
 
 static void secretmem_free_folio(struct folio *folio)
 {
-	set_direct_map_default_noflush(folio_page(folio, 0));
+	set_direct_map_default_noflush(folio_address(folio));
 	folio_zero_segment(folio, 0, folio_size(folio));
 }
 
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 61caa55a4402..8822f73957d9 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3342,14 +3342,17 @@ struct vm_struct *remove_vm_area(const void *addr)
 }
 
 static inline void set_area_direct_map(const struct vm_struct *area,
-				       int (*set_direct_map)(struct page *page))
+				       int (*set_direct_map)(const void *addr))
 {
 	int i;
 
 	/* HUGE_VMALLOC passes small pages to set_direct_map */
-	for (i = 0; i < area->nr_pages; i++)
-		if (page_address(area->pages[i]))
-			set_direct_map(area->pages[i]);
+	for (i = 0; i < area->nr_pages; i++) {
+		const void *addr = page_address(area->pages[i]);
+
+		if (addr)
+			set_direct_map(addr);
+	}
 }
 
 /*
-- 
2.50.1




* [PATCH v11 02/16] set_memory: add folio_{zap,restore}_direct_map helpers
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
  2026-03-17 14:10 ` [PATCH v11 01/16] set_memory: set_direct_map_* to take address Kalyazin, Nikita
@ 2026-03-17 14:10 ` Kalyazin, Nikita
  2026-03-23 17:51   ` David Hildenbrand (Arm)
  2026-03-23 18:43   ` Ackerley Tng
  2026-03-17 14:11 ` [PATCH v11 03/16] mm/secretmem: make use of folio_{zap,restore}_direct_map Kalyazin, Nikita
                   ` (13 subsequent siblings)
  15 siblings, 2 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:10 UTC (permalink / raw)

From: Nikita Kalyazin <kalyazin@amazon.com>

Let's provide folio_{zap,restore}_direct_map helpers as preparation for
supporting removal of the direct map for guest_memfd folios.
In folio_zap_direct_map(), flush the TLB so that the zapped data can no
longer be reached through stale translations.

The new helpers need to be accessible to KVM on architectures that
support guest_memfd (x86 and arm64).

Direct map removal gives guest_memfd the same protection that
memfd_secret does, such as hardening against Spectre-like attacks
through in-kernel gadgets.

Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 include/linux/set_memory.h | 13 ++++++++++++
 mm/memory.c                | 42 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 55 insertions(+)

diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
index 1a2563f525fc..24caea2931f9 100644
--- a/include/linux/set_memory.h
+++ b/include/linux/set_memory.h
@@ -41,6 +41,15 @@ static inline int set_direct_map_valid_noflush(const void *addr,
 	return 0;
 }
 
+static inline int folio_zap_direct_map(struct folio *folio)
+{
+	return 0;
+}
+
+static inline void folio_restore_direct_map(struct folio *folio)
+{
+}
+
 static inline bool kernel_page_present(struct page *page)
 {
 	return true;
@@ -57,6 +66,10 @@ static inline bool can_set_direct_map(void)
 }
 #define can_set_direct_map can_set_direct_map
 #endif
+
+int folio_zap_direct_map(struct folio *folio);
+void folio_restore_direct_map(struct folio *folio);
+
 #endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
 
 #ifdef CONFIG_X86_64
diff --git a/mm/memory.c b/mm/memory.c
index 07778814b4a8..cab6bb237fc0 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -78,6 +78,7 @@
 #include <linux/sched/sysctl.h>
 #include <linux/pgalloc.h>
 #include <linux/uaccess.h>
+#include <linux/set_memory.h>
 
 #include <trace/events/kmem.h>
 
@@ -7478,3 +7479,44 @@ void vma_pgtable_walk_end(struct vm_area_struct *vma)
 	if (is_vm_hugetlb_page(vma))
 		hugetlb_vma_unlock_read(vma);
 }
+
+#ifdef CONFIG_ARCH_HAS_SET_DIRECT_MAP
+/**
+ * folio_zap_direct_map - remove a folio from the kernel direct map
+ * @folio: folio to remove from the direct map
+ *
+ * Removes the folio from the kernel direct map and flushes the TLB.  This may
+ * require splitting huge pages in the direct map, which can fail due to memory
+ * allocation.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+int folio_zap_direct_map(struct folio *folio)
+{
+	const void *addr = folio_address(folio);
+	int ret;
+
+	ret = set_direct_map_valid_noflush(addr, folio_nr_pages(folio), false);
+	flush_tlb_kernel_range((unsigned long)addr,
+			       (unsigned long)addr + folio_size(folio));
+
+	return ret;
+}
+EXPORT_SYMBOL_FOR_MODULES(folio_zap_direct_map, "kvm");
+
+/**
+ * folio_restore_direct_map - restore the kernel direct map entry for a folio
+ * @folio: folio whose direct map entry is to be restored
+ *
+ * This may only be called after a prior successful folio_zap_direct_map() on
+ * the same folio.  Because the zap will have already split any huge pages in
+ * the direct map, restoration here only updates protection bits and cannot
+ * fail.
+ */
+void folio_restore_direct_map(struct folio *folio)
+{
+	WARN_ON_ONCE(set_direct_map_valid_noflush(folio_address(folio),
+						  folio_nr_pages(folio), true));
+}
+EXPORT_SYMBOL_FOR_MODULES(folio_restore_direct_map, "kvm");
+#endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v11 03/16] mm/secretmem: make use of folio_{zap,restore}_direct_map
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
  2026-03-17 14:10 ` [PATCH v11 01/16] set_memory: set_direct_map_* to take address Kalyazin, Nikita
  2026-03-17 14:10 ` [PATCH v11 02/16] set_memory: add folio_{zap,restore}_direct_map helpers Kalyazin, Nikita
@ 2026-03-17 14:11 ` Kalyazin, Nikita
  2026-03-23 17:53   ` David Hildenbrand (Arm)
  2026-03-23 18:46   ` Ackerley Tng
  2026-03-17 14:11 ` [PATCH v11 04/16] mm/gup: drop secretmem optimization from gup_fast_folio_allowed Kalyazin, Nikita
                   ` (12 subsequent siblings)
  15 siblings, 2 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:11 UTC (permalink / raw)
  To: kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Kalyazin, Nikita

From: Nikita Kalyazin <kalyazin@amazon.com>

Switch secretmem to the new folio_{zap,restore}_direct_map() helpers.
folio_zap_direct_map() flushes the TLB itself, so the open-coded
flush_tlb_kernel_range() call in secretmem_fault() can be dropped.

Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 mm/secretmem.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/mm/secretmem.c b/mm/secretmem.c
index fd29b33c6764..27b176af8fc4 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -53,7 +53,6 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 	struct inode *inode = file_inode(vmf->vma->vm_file);
 	pgoff_t offset = vmf->pgoff;
 	gfp_t gfp = vmf->gfp_mask;
-	unsigned long addr;
 	struct folio *folio;
 	vm_fault_t ret;
 	int err;
@@ -72,7 +71,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 			goto out;
 		}
 
-		err = set_direct_map_invalid_noflush(folio_address(folio));
+		err = folio_zap_direct_map(folio);
 		if (err) {
 			folio_put(folio);
 			ret = vmf_error(err);
@@ -87,7 +86,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 			 * already happened when we marked the page invalid
 			 * which guarantees that this call won't fail
 			 */
-			set_direct_map_default_noflush(folio_address(folio));
+			folio_restore_direct_map(folio);
 			folio_put(folio);
 			if (err == -EEXIST)
 				goto retry;
@@ -95,9 +94,6 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
 			ret = vmf_error(err);
 			goto out;
 		}
-
-		addr = (unsigned long)folio_address(folio);
-		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
 	}
 
 	vmf->page = folio_file_page(folio, vmf->pgoff);
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v11 04/16] mm/gup: drop secretmem optimization from gup_fast_folio_allowed
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (2 preceding siblings ...)
  2026-03-17 14:11 ` [PATCH v11 03/16] mm/secretmem: make use of folio_{zap,restore}_direct_map Kalyazin, Nikita
@ 2026-03-17 14:11 ` Kalyazin, Nikita
  2026-03-23 18:31   ` David Hildenbrand (Arm)
  2026-03-17 14:11 ` [PATCH v11 05/16] mm/gup: drop local variable in gup_fast_folio_allowed Kalyazin, Nikita
                   ` (11 subsequent siblings)
  15 siblings, 1 reply; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:11 UTC (permalink / raw)
  To: kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Kalyazin, Nikita, Vlastimil Babka

From: Patrick Roy <patrick.roy@linux.dev>

This drops an optimization in gup_fast_folio_allowed() where
secretmem_mapping() was only called if CONFIG_SECRETMEM=y. secretmem is
enabled by default since commit b758fe6df50d ("mm/secretmem: make it on
by default"), so in practice the secretmem check was no longer elided
in most configurations anyway.

This prepares for generalizing the handling of mappings whose folios
have their direct map entries set to not present.  Currently, the only
mappings matching this description are secretmem mappings
(memfd_secret()).  Later, some guest_memfd configurations will also fall
into this category.

Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 mm/gup.c | 11 +----------
 1 file changed, 1 insertion(+), 10 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 8e7dc2c6ee73..5856d35be385 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2739,7 +2739,6 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
 {
 	bool reject_file_backed = false;
 	struct address_space *mapping;
-	bool check_secretmem = false;
 	unsigned long mapping_flags;
 
 	/*
@@ -2751,14 +2750,6 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
 		reject_file_backed = true;
 
 	/* We hold a folio reference, so we can safely access folio fields. */
-
-	/* secretmem folios are always order-0 folios. */
-	if (IS_ENABLED(CONFIG_SECRETMEM) && !folio_test_large(folio))
-		check_secretmem = true;
-
-	if (!reject_file_backed && !check_secretmem)
-		return true;
-
 	if (WARN_ON_ONCE(folio_test_slab(folio)))
 		return false;
 
@@ -2800,7 +2791,7 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
 	 * At this point, we know the mapping is non-null and points to an
 	 * address_space object.
 	 */
-	if (check_secretmem && secretmem_mapping(mapping))
+	if (secretmem_mapping(mapping))
 		return false;
 	/* The only remaining allowed file system is shmem. */
 	return !reject_file_backed || shmem_mapping(mapping);
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v11 05/16] mm/gup: drop local variable in gup_fast_folio_allowed
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (3 preceding siblings ...)
  2026-03-17 14:11 ` [PATCH v11 04/16] mm/gup: drop secretmem optimization from gup_fast_folio_allowed Kalyazin, Nikita
@ 2026-03-17 14:11 ` Kalyazin, Nikita
  2026-03-23 17:55   ` David Hildenbrand (Arm)
  2026-03-17 14:11 ` [PATCH v11 06/16] mm: introduce AS_NO_DIRECT_MAP Kalyazin, Nikita
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:11 UTC (permalink / raw)
  To: kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Kalyazin, Nikita

From: Nikita Kalyazin <kalyazin@amazon.com>

Move the check for long-term writable pins closer to where its result
is used.  No functional change intended.

Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 mm/gup.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 5856d35be385..869d79c8daa4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2737,18 +2737,9 @@ EXPORT_SYMBOL(get_user_pages_unlocked);
  */
 static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
 {
-	bool reject_file_backed = false;
 	struct address_space *mapping;
 	unsigned long mapping_flags;
 
-	/*
-	 * If we aren't pinning then no problematic write can occur. A long term
-	 * pin is the most egregious case so this is the one we disallow.
-	 */
-	if ((flags & (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE)) ==
-	    (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE))
-		reject_file_backed = true;
-
 	/* We hold a folio reference, so we can safely access folio fields. */
 	if (WARN_ON_ONCE(folio_test_slab(folio)))
 		return false;
@@ -2793,8 +2784,18 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
 	 */
 	if (secretmem_mapping(mapping))
 		return false;
-	/* The only remaining allowed file system is shmem. */
-	return !reject_file_backed || shmem_mapping(mapping);
+
+	/*
+	 * If we aren't pinning then no problematic write can occur. A writable
+	 * long term pin is the most egregious case, so this is the one we
+	 * allow only for ...
+	 */
+	if ((flags & (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE)) !=
+	    (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE))
+		return true;
+
+	/* ... hugetlb (which we allowed above already) and shared memory. */
+	return shmem_mapping(mapping);
 }
 
 #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v11 06/16] mm: introduce AS_NO_DIRECT_MAP
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (4 preceding siblings ...)
  2026-03-17 14:11 ` [PATCH v11 05/16] mm/gup: drop local variable in gup_fast_folio_allowed Kalyazin, Nikita
@ 2026-03-17 14:11 ` Kalyazin, Nikita
  2026-03-17 14:11 ` [PATCH v11 07/16] KVM: guest_memfd: Add stub for kvm_arch_gmem_invalidate Kalyazin, Nikita
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:11 UTC (permalink / raw)
  To: kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Kalyazin, Nikita, Vlastimil Babka

From: Patrick Roy <patrick.roy@linux.dev>

Add AS_NO_DIRECT_MAP for mappings where direct map entries of folios are
set to not present. Currently, mappings that match this description are
secretmem mappings (memfd_secret()). Later, some guest_memfd
configurations will also fall into this category.

Reject this new type of mappings in all locations that currently reject
secretmem mappings, on the assumption that if secretmem mappings are
rejected somewhere, it is precisely because of an inability to deal with
folios without direct map entries, and then make memfd_secret() use
AS_NO_DIRECT_MAP on its address_space to drop its special
vma_is_secretmem()/secretmem_mapping() checks.

Use a new flag instead of overloading AS_INACCESSIBLE (which is already
set by guest_memfd) because not all guest_memfd mappings will end up
being direct map removed (e.g. in pKVM setups, parts of guest_memfd that
can be mapped to userspace should also be GUP-able, and generally
should not have restrictions on who can access them).

Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 include/linux/pagemap.h   | 16 ++++++++++++++++
 include/linux/secretmem.h | 18 ------------------
 lib/buildid.c             |  8 ++++++--
 mm/gup.c                  |  9 ++++-----
 mm/mlock.c                |  2 +-
 mm/secretmem.c            |  8 ++------
 6 files changed, 29 insertions(+), 32 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index ec442af3f886..68c075502d91 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -211,6 +211,7 @@ enum mapping_flags {
 	AS_KERNEL_FILE = 10,	/* mapping for a fake kernel file that shouldn't
 				   account usage to user cgroups */
 	AS_NO_DATA_INTEGRITY = 11, /* no data integrity guarantees */
+	AS_NO_DIRECT_MAP = 12,	/* Folios in the mapping are not in the direct map */
 	/* Bits 16-25 are used for FOLIO_ORDER */
 	AS_FOLIO_ORDER_BITS = 5,
 	AS_FOLIO_ORDER_MIN = 16,
@@ -356,6 +357,21 @@ static inline bool mapping_no_data_integrity(const struct address_space *mapping
 	return test_bit(AS_NO_DATA_INTEGRITY, &mapping->flags);
 }
 
+static inline void mapping_set_no_direct_map(struct address_space *mapping)
+{
+	set_bit(AS_NO_DIRECT_MAP, &mapping->flags);
+}
+
+static inline bool mapping_no_direct_map(const struct address_space *mapping)
+{
+	return test_bit(AS_NO_DIRECT_MAP, &mapping->flags);
+}
+
+static inline bool vma_has_no_direct_map(const struct vm_area_struct *vma)
+{
+	return vma->vm_file && mapping_no_direct_map(vma->vm_file->f_mapping);
+}
+
 static inline gfp_t mapping_gfp_mask(const struct address_space *mapping)
 {
 	return mapping->gfp_mask;
diff --git a/include/linux/secretmem.h b/include/linux/secretmem.h
index e918f96881f5..0ae1fb057b3d 100644
--- a/include/linux/secretmem.h
+++ b/include/linux/secretmem.h
@@ -4,28 +4,10 @@
 
 #ifdef CONFIG_SECRETMEM
 
-extern const struct address_space_operations secretmem_aops;
-
-static inline bool secretmem_mapping(struct address_space *mapping)
-{
-	return mapping->a_ops == &secretmem_aops;
-}
-
-bool vma_is_secretmem(struct vm_area_struct *vma);
 bool secretmem_active(void);
 
 #else
 
-static inline bool vma_is_secretmem(struct vm_area_struct *vma)
-{
-	return false;
-}
-
-static inline bool secretmem_mapping(struct address_space *mapping)
-{
-	return false;
-}
-
 static inline bool secretmem_active(void)
 {
 	return false;
diff --git a/lib/buildid.c b/lib/buildid.c
index c4b737640621..ba79bf28f7e6 100644
--- a/lib/buildid.c
+++ b/lib/buildid.c
@@ -47,6 +47,10 @@ static int freader_get_folio(struct freader *r, loff_t file_off)
 
 	freader_put_folio(r);
 
+	/* reject folios without direct map entries (e.g. from memfd_secret() or guest_memfd()) */
+	if (mapping_no_direct_map(r->file->f_mapping))
+		return -EFAULT;
+
 	/* only use page cache lookup - fail if not already cached */
 	r->folio = filemap_get_folio(r->file->f_mapping, file_off >> PAGE_SHIFT);
 
@@ -87,8 +91,8 @@ const void *freader_fetch(struct freader *r, loff_t file_off, size_t sz)
 		return r->data + file_off;
 	}
 
-	/* reject secretmem folios created with memfd_secret() */
-	if (secretmem_mapping(r->file->f_mapping)) {
+	/* reject folios without direct map entries (e.g. from memfd_secret() or guest_memfd()) */
+	if (mapping_no_direct_map(r->file->f_mapping)) {
 		r->err = -EFAULT;
 		return NULL;
 	}
diff --git a/mm/gup.c b/mm/gup.c
index 869d79c8daa4..a5a753da66aa 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -11,7 +11,6 @@
 #include <linux/rmap.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
-#include <linux/secretmem.h>
 
 #include <linux/sched/signal.h>
 #include <linux/rwsem.h>
@@ -1216,7 +1215,7 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
 	if ((gup_flags & FOLL_SPLIT_PMD) && is_vm_hugetlb_page(vma))
 		return -EOPNOTSUPP;
 
-	if (vma_is_secretmem(vma))
+	if (vma_has_no_direct_map(vma))
 		return -EFAULT;
 
 	if (write) {
@@ -2724,7 +2723,7 @@ EXPORT_SYMBOL(get_user_pages_unlocked);
  * This call assumes the caller has pinned the folio, that the lowest page table
  * level still points to this folio, and that interrupts have been disabled.
  *
- * GUP-fast must reject all secretmem folios.
+ * GUP-fast must reject all folios without direct map entries (such as secretmem).
  *
  * Writing to pinned file-backed dirty tracked folios is inherently problematic
  * (see comment describing the writable_file_mapping_allowed() function). We
@@ -2744,7 +2743,7 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
 	if (WARN_ON_ONCE(folio_test_slab(folio)))
 		return false;
 
-	/* hugetlb neither requires dirty-tracking nor can be secretmem. */
+	/* hugetlb neither requires dirty-tracking nor can be without direct map. */
 	if (folio_test_hugetlb(folio))
 		return true;
 
@@ -2782,7 +2781,7 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
 	 * At this point, we know the mapping is non-null and points to an
 	 * address_space object.
 	 */
-	if (secretmem_mapping(mapping))
+	if (mapping_no_direct_map(mapping))
 		return false;
 
 	/*
diff --git a/mm/mlock.c b/mm/mlock.c
index 2f699c3497a5..a6f4b3df4f3f 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -474,7 +474,7 @@ static int mlock_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
 
 	if (newflags == oldflags || (oldflags & VM_SPECIAL) ||
 	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
-	    vma_is_dax(vma) || vma_is_secretmem(vma) || (oldflags & VM_DROPPABLE))
+	    vma_is_dax(vma) || vma_has_no_direct_map(vma) || (oldflags & VM_DROPPABLE))
 		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
 		goto out;
 
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 27b176af8fc4..d32e1be1eb35 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -129,11 +129,6 @@ static int secretmem_mmap_prepare(struct vm_area_desc *desc)
 	return 0;
 }
 
-bool vma_is_secretmem(struct vm_area_struct *vma)
-{
-	return vma->vm_ops == &secretmem_vm_ops;
-}
-
 static const struct file_operations secretmem_fops = {
 	.release	= secretmem_release,
 	.mmap_prepare	= secretmem_mmap_prepare,
@@ -151,7 +146,7 @@ static void secretmem_free_folio(struct folio *folio)
 	folio_zero_segment(folio, 0, folio_size(folio));
 }
 
-const struct address_space_operations secretmem_aops = {
+static const struct address_space_operations secretmem_aops = {
 	.dirty_folio	= noop_dirty_folio,
 	.free_folio	= secretmem_free_folio,
 	.migrate_folio	= secretmem_migrate_folio,
@@ -200,6 +195,7 @@ static struct file *secretmem_file_create(unsigned long flags)
 
 	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
 	mapping_set_unevictable(inode->i_mapping);
+	mapping_set_no_direct_map(inode->i_mapping);
 
 	inode->i_op = &secretmem_iops;
 	inode->i_mapping->a_ops = &secretmem_aops;
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v11 07/16] KVM: guest_memfd: Add stub for kvm_arch_gmem_invalidate
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (5 preceding siblings ...)
  2026-03-17 14:11 ` [PATCH v11 06/16] mm: introduce AS_NO_DIRECT_MAP Kalyazin, Nikita
@ 2026-03-17 14:11 ` Kalyazin, Nikita
  2026-03-17 14:12 ` [PATCH v11 08/16] KVM: x86: define kvm_arch_gmem_supports_no_direct_map() Kalyazin, Nikita
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:11 UTC (permalink / raw)
  To: kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Kalyazin, Nikita, Vlastimil Babka

From: Patrick Roy <patrick.roy@linux.dev>

Add a no-op stub for kvm_arch_gmem_invalidate when
CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE=n. This allows defining
kvm_gmem_free_folio without ifdef-ery, so that guest_memfd's free_folio
callback can be used more cleanly for code unrelated to arch
invalidation.

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 include/linux/kvm_host.h | 2 ++
 virt/kvm/guest_memfd.c   | 4 ----
 2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6b76e7a6f4c2..e8aa3d676c31 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2587,6 +2587,8 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t gfn, void __user *src, long npages
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
 void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end);
+#else
+static inline void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end) { }
 #endif
 
 #ifdef CONFIG_KVM_GENERIC_PRE_FAULT_MEMORY
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 017d84a7adf3..651649623448 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -522,7 +522,6 @@ static int kvm_gmem_error_folio(struct address_space *mapping, struct folio *fol
 	return MF_DELAYED;
 }
 
-#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
 static void kvm_gmem_free_folio(struct folio *folio)
 {
 	struct page *page = folio_page(folio, 0);
@@ -531,15 +530,12 @@ static void kvm_gmem_free_folio(struct folio *folio)
 
 	kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order));
 }
-#endif
 
 static const struct address_space_operations kvm_gmem_aops = {
 	.dirty_folio = noop_dirty_folio,
 	.migrate_folio	= kvm_gmem_migrate_folio,
 	.error_remove_folio = kvm_gmem_error_folio,
-#ifdef CONFIG_HAVE_KVM_ARCH_GMEM_INVALIDATE
 	.free_folio = kvm_gmem_free_folio,
-#endif
 };
 
 static int kvm_gmem_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v11 08/16] KVM: x86: define kvm_arch_gmem_supports_no_direct_map()
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (6 preceding siblings ...)
  2026-03-17 14:11 ` [PATCH v11 07/16] KVM: guest_memfd: Add stub for kvm_arch_gmem_invalidate Kalyazin, Nikita
@ 2026-03-17 14:12 ` Kalyazin, Nikita
  2026-03-17 14:12 ` [PATCH v11 09/16] KVM: arm64: " Kalyazin, Nikita
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:12 UTC (permalink / raw)
  To: kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Kalyazin, Nikita

From: Patrick Roy <patrick.roy@linux.dev>

x86 supports GUEST_MEMFD_FLAG_NO_DIRECT_MAP whenever direct map
modifications are possible, which on x86 is always the case. TDX VMs are
the exception: for them the flag is not offered.

Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Reviewed-by: Ackerley Tng <ackerleytng@google.com>
Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 arch/x86/include/asm/kvm_host.h | 6 ++++++
 arch/x86/kvm/x86.c              | 5 +++++
 include/linux/kvm_host.h        | 9 +++++++++
 3 files changed, 20 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 6e4e3ef9b8c7..171ce8b84137 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -28,6 +28,7 @@
 #include <linux/sched/vhost_task.h>
 #include <linux/call_once.h>
 #include <linux/atomic.h>
+#include <linux/set_memory.h>
 
 #include <asm/apic.h>
 #include <asm/pvclock-abi.h>
@@ -2504,4 +2505,9 @@ static inline bool kvm_arch_has_irq_bypass(void)
 	return enable_device_posted_irqs;
 }
 
+#ifdef CONFIG_KVM_GUEST_MEMFD
+bool kvm_arch_gmem_supports_no_direct_map(struct kvm *kvm);
+#define kvm_arch_gmem_supports_no_direct_map kvm_arch_gmem_supports_no_direct_map
+#endif /* CONFIG_KVM_GUEST_MEMFD */
+
 #endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fd1c4a36b593..6a4dcf449a37 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -14079,6 +14079,11 @@ void kvm_arch_gmem_invalidate(kvm_pfn_t start, kvm_pfn_t end)
 	kvm_x86_call(gmem_invalidate)(start, end);
 }
 #endif
+
+bool kvm_arch_gmem_supports_no_direct_map(struct kvm *kvm)
+{
+	return can_set_direct_map() && kvm->arch.vm_type != KVM_X86_TDX_VM;
+}
 #endif
 
 int kvm_spec_ctrl_test_value(u64 value)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index e8aa3d676c31..ce8c5fdf2752 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -742,6 +742,15 @@ static inline u64 kvm_gmem_get_supported_flags(struct kvm *kvm)
 }
 #endif
 
+#ifdef CONFIG_KVM_GUEST_MEMFD
+#ifndef kvm_arch_gmem_supports_no_direct_map
+static inline bool kvm_arch_gmem_supports_no_direct_map(struct kvm *kvm)
+{
+	return false;
+}
+#endif
+#endif /* CONFIG_KVM_GUEST_MEMFD */
+
 #ifndef kvm_arch_has_readonly_mem
 static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
 {
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v11 09/16] KVM: arm64: define kvm_arch_gmem_supports_no_direct_map()
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (7 preceding siblings ...)
  2026-03-17 14:12 ` [PATCH v11 08/16] KVM: x86: define kvm_arch_gmem_supports_no_direct_map() Kalyazin, Nikita
@ 2026-03-17 14:12 ` Kalyazin, Nikita
  2026-03-17 14:12 ` [PATCH v11 10/16] KVM: guest_memfd: Add flag to remove from direct map Kalyazin, Nikita
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:12 UTC (permalink / raw)
  To: kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Kalyazin, Nikita

From: Patrick Roy <patrick.roy@linux.dev>

Support for GUEST_MEMFD_FLAG_NO_DIRECT_MAP on arm64 depends on 1) direct
map manipulations at 4k granularity being possible, and 2) FEAT_S2FWB.

1) is met whenever the direct map is set up at 4k granularity (i.e. not
with huge/gigantic pages) at boot time: due to ARM's break-before-make
semantics, breaking huge mappings in the direct map down into 4k
mappings is not possible (BBM would require temporarily invalidating the
entire huge mapping, even if only a 4k subrange should be zapped, which
would likely crash the kernel). Note that the current default for
rodata_full is true, which forces a 4k direct map.

2) is required to allow KVM to elide cache coherency operations when
installing stage 2 page tables; without FEAT_S2FWB, those operations
require the direct map entry for the newly mapped memory to be present
(which it will not be, as guest_memfd will have removed the direct map
entries in kvm_gmem_get_pfn()).

Cc: Will Deacon <will@kernel.org>
Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Reviewed-by: David Hildenbrand (Arm) <david@kernel.org>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 arch/arm64/include/asm/kvm_host.h | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 70cb9cfd760a..fbdd43e7e94e 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -19,6 +19,7 @@
 #include <linux/maple_tree.h>
 #include <linux/percpu.h>
 #include <linux/psci.h>
+#include <linux/set_memory.h>
 #include <asm/arch_gicv3.h>
 #include <asm/barrier.h>
 #include <asm/cpufeature.h>
@@ -1682,6 +1683,18 @@ static __always_inline enum fgt_group_id __fgt_reg_to_group_id(enum vcpu_sysreg
 									\
 		p;							\
 	})
+#ifdef CONFIG_KVM_GUEST_MEMFD
+static inline bool kvm_arch_gmem_supports_no_direct_map(struct kvm *kvm)
+{
+	/*
+	 * Without FWB, direct map access is needed in kvm_pgtable_stage2_map(),
+	 * as it calls dcache_clean_inval_poc().
+	 */
+	return can_set_direct_map() && cpus_have_final_cap(ARM64_HAS_STAGE2_FWB);
+}
+#define kvm_arch_gmem_supports_no_direct_map kvm_arch_gmem_supports_no_direct_map
+#endif /* CONFIG_KVM_GUEST_MEMFD */
+
 
 long kvm_get_cap_for_kvm_ioctl(unsigned int ioctl, long *ext);
 
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v11 10/16] KVM: guest_memfd: Add flag to remove from direct map
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (8 preceding siblings ...)
  2026-03-17 14:12 ` [PATCH v11 09/16] KVM: arm64: " Kalyazin, Nikita
@ 2026-03-17 14:12 ` Kalyazin, Nikita
  2026-03-23 18:05   ` David Hildenbrand (Arm)
  2026-03-23 21:15   ` Ackerley Tng
  2026-03-17 14:12 ` [PATCH v11 11/16] KVM: selftests: load elf via bounce buffer Kalyazin, Nikita
                   ` (5 subsequent siblings)
  15 siblings, 2 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:12 UTC (permalink / raw)
  To: kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Kalyazin, Nikita

From: Patrick Roy <patrick.roy@linux.dev>

Add the GUEST_MEMFD_FLAG_NO_DIRECT_MAP flag for the
KVM_CREATE_GUEST_MEMFD() ioctl. When set, guest_memfd folios will be
after preparation, with direct map entries only restored when the folios
are freed.

To ensure these folios do not end up in places where the kernel cannot
deal with them, set AS_NO_DIRECT_MAP on the guest_memfd's struct
address_space if GUEST_MEMFD_FLAG_NO_DIRECT_MAP is requested.

Note that this flag causes removal of direct map entries for all
guest_memfd folios independent of whether they are "shared" or "private"
(although current guest_memfd only supports either all folios in the
"shared" state, or all folios in the "private" state if
GUEST_MEMFD_FLAG_MMAP is not set). The use case for also removing
direct map entries for the shared parts of guest_memfd is a special type
of non-CoCo VM where host userspace is trusted to have access to all of
guest memory, but where Spectre-style transient execution attacks
through the host kernel's direct map should still be mitigated. In this
setup, KVM retains access to guest memory via userspace mappings of
guest_memfd, which are reflected back into KVM's memslots via
userspace_addr. This is needed for things like MMIO emulation on x86_64
to work.

Direct map entries are zapped right before guest or userspace mappings
of gmem folios are set up, e.g. in kvm_gmem_fault_user_mapping() or
kvm_gmem_get_pfn() [called from the KVM MMU code]. The only place where
a gmem folio can be allocated without being mapped anywhere is
kvm_gmem_populate(), where handling potential failures of direct map
removal is not possible (by the time direct map removal is attempted,
the folio is already marked as prepared, meaning attempting to re-try
kvm_gmem_populate() would just result in -EEXIST without fixing up the
direct map state). These folios are then removed from the direct map
upon kvm_gmem_get_pfn(), e.g. when they are mapped into the guest later.

Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 Documentation/virt/kvm/api.rst | 21 ++++++-----
 include/linux/kvm_host.h       |  3 ++
 include/uapi/linux/kvm.h       |  1 +
 virt/kvm/guest_memfd.c         | 67 ++++++++++++++++++++++++++++++++--
 4 files changed, 79 insertions(+), 13 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 032516783e96..8feec77b03fe 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6439,15 +6439,18 @@ a single guest_memfd file, but the bound ranges must not overlap).
 The capability KVM_CAP_GUEST_MEMFD_FLAGS enumerates the `flags` that can be
 specified via KVM_CREATE_GUEST_MEMFD.  Currently defined flags:
 
-  ============================ ================================================
-  GUEST_MEMFD_FLAG_MMAP        Enable using mmap() on the guest_memfd file
-                               descriptor.
-  GUEST_MEMFD_FLAG_INIT_SHARED Make all memory in the file shared during
-                               KVM_CREATE_GUEST_MEMFD (memory files created
-                               without INIT_SHARED will be marked private).
-                               Shared memory can be faulted into host userspace
-                               page tables. Private memory cannot.
-  ============================ ================================================
+  ============================== ================================================
+  GUEST_MEMFD_FLAG_MMAP          Enable using mmap() on the guest_memfd file
+                                 descriptor.
+  GUEST_MEMFD_FLAG_INIT_SHARED   Make all memory in the file shared during
+                                 KVM_CREATE_GUEST_MEMFD (memory files created
+                                 without INIT_SHARED will be marked private).
+                                 Shared memory can be faulted into host userspace
+                                 page tables. Private memory cannot.
+  GUEST_MEMFD_FLAG_NO_DIRECT_MAP The guest_memfd instance will unmap the memory
+                                 backing it from the kernel's address space
+                                 before passing it off to userspace or the guest.
+  ============================== ================================================
 
 When the KVM MMU performs a PFN lookup to service a guest fault and the backing
 guest_memfd has the GUEST_MEMFD_FLAG_MMAP set, then the fault will always be
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ce8c5fdf2752..c95747e2278c 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -738,6 +738,9 @@ static inline u64 kvm_gmem_get_supported_flags(struct kvm *kvm)
 	if (!kvm || kvm_arch_supports_gmem_init_shared(kvm))
 		flags |= GUEST_MEMFD_FLAG_INIT_SHARED;
 
+	if (!kvm || kvm_arch_gmem_supports_no_direct_map(kvm))
+		flags |= GUEST_MEMFD_FLAG_NO_DIRECT_MAP;
+
 	return flags;
 }
 #endif
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 80364d4dbebb..d864f67efdb7 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1642,6 +1642,7 @@ struct kvm_memory_attributes {
 #define KVM_CREATE_GUEST_MEMFD	_IOWR(KVMIO,  0xd4, struct kvm_create_guest_memfd)
 #define GUEST_MEMFD_FLAG_MMAP		(1ULL << 0)
 #define GUEST_MEMFD_FLAG_INIT_SHARED	(1ULL << 1)
+#define GUEST_MEMFD_FLAG_NO_DIRECT_MAP	(1ULL << 2)
 
 struct kvm_create_guest_memfd {
 	__u64 size;
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 651649623448..c9344647579c 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -7,6 +7,7 @@
 #include <linux/mempolicy.h>
 #include <linux/pseudo_fs.h>
 #include <linux/pagemap.h>
+#include <linux/set_memory.h>
 
 #include "kvm_mm.h"
 
@@ -76,6 +77,35 @@ static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slo
 	return 0;
 }
 
+#define KVM_GMEM_FOLIO_NO_DIRECT_MAP BIT(0)
+
+static bool kvm_gmem_folio_no_direct_map(struct folio *folio)
+{
+	return ((u64)folio->private) & KVM_GMEM_FOLIO_NO_DIRECT_MAP;
+}
+
+static int kvm_gmem_folio_zap_direct_map(struct folio *folio)
+{
+	u64 gmem_flags = GMEM_I(folio_inode(folio))->flags;
+	int r = 0;
+
+	if (kvm_gmem_folio_no_direct_map(folio) || !(gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP))
+		goto out;
+
+	r = folio_zap_direct_map(folio);
+	if (!r)
+		folio->private = (void *)((u64)folio->private | KVM_GMEM_FOLIO_NO_DIRECT_MAP);
+
+out:
+	return r;
+}
+
+static void kvm_gmem_folio_restore_direct_map(struct folio *folio)
+{
+	folio_restore_direct_map(folio);
+	folio->private = (void *)((u64)folio->private & ~KVM_GMEM_FOLIO_NO_DIRECT_MAP);
+}
+
 /*
  * Process @folio, which contains @gfn, so that the guest can use it.
  * The folio must be locked and the gfn must be contained in @slot.
@@ -388,11 +418,17 @@ static bool kvm_gmem_supports_mmap(struct inode *inode)
 	return GMEM_I(inode)->flags & GUEST_MEMFD_FLAG_MMAP;
 }
 
+static bool kvm_gmem_no_direct_map(struct inode *inode)
+{
+	return GMEM_I(inode)->flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP;
+}
+
 static vm_fault_t kvm_gmem_fault_user_mapping(struct vm_fault *vmf)
 {
 	struct inode *inode = file_inode(vmf->vma->vm_file);
 	struct folio *folio;
 	vm_fault_t ret = VM_FAULT_LOCKED;
+	int err;
 
 	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
 		return VM_FAULT_SIGBUS;
@@ -418,6 +454,14 @@ static vm_fault_t kvm_gmem_fault_user_mapping(struct vm_fault *vmf)
 		folio_mark_uptodate(folio);
 	}
 
+	if (kvm_gmem_no_direct_map(folio_inode(folio))) {
+		err = kvm_gmem_folio_zap_direct_map(folio);
+		if (err) {
+			ret = vmf_error(err);
+			goto out_folio;
+		}
+	}
+
 	vmf->page = folio_file_page(folio, vmf->pgoff);
 
 out_folio:
@@ -528,6 +572,9 @@ static void kvm_gmem_free_folio(struct folio *folio)
 	kvm_pfn_t pfn = page_to_pfn(page);
 	int order = folio_order(folio);
 
+	if (kvm_gmem_folio_no_direct_map(folio))
+		kvm_gmem_folio_restore_direct_map(folio);
+
 	kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order));
 }
 
@@ -591,6 +638,9 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	/* Unmovable mappings are supposed to be marked unevictable as well. */
 	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
 
+	if (flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
+		mapping_set_no_direct_map(inode->i_mapping);
+
 	GMEM_I(inode)->flags = flags;
 
 	file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR, &kvm_gmem_fops);
@@ -803,13 +853,22 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 	}
 
 	r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio);
+	if (r)
+		goto out_unlock;
 
+	if (kvm_gmem_no_direct_map(folio_inode(folio))) {
+		r = kvm_gmem_folio_zap_direct_map(folio);
+		if (r)
+			goto out_unlock;
+	}
+
+	*page = folio_file_page(folio, index);
 	folio_unlock(folio);
+	return 0;
 
-	if (!r)
-		*page = folio_file_page(folio, index);
-	else
-		folio_put(folio);
+out_unlock:
+	folio_unlock(folio);
+	folio_put(folio);
 
 	return r;
 }
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v11 11/16] KVM: selftests: load elf via bounce buffer
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (9 preceding siblings ...)
  2026-03-17 14:12 ` [PATCH v11 10/16] KVM: guest_memfd: Add flag to remove from direct map Kalyazin, Nikita
@ 2026-03-17 14:12 ` Kalyazin, Nikita
  2026-03-17 14:12 ` [PATCH v11 12/16] KVM: selftests: set KVM_MEM_GUEST_MEMFD in vm_mem_add() if guest_memfd != -1 Kalyazin, Nikita
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:12 UTC (permalink / raw)
  To: kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Kalyazin, Nikita

From: Patrick Roy <patrick.roy@linux.dev>

If guest memory is backed using a VMA that does not allow GUP (e.g. a
userspace mapping of guest_memfd when the fd was allocated using
GUEST_MEMFD_FLAG_NO_DIRECT_MAP), then directly loading the test ELF
binary into it via read(2) potentially does not work. To nevertheless
support loading binaries in such cases, do the read(2) syscall using a
bounce buffer, and then memcpy from the bounce buffer into guest memory.

Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 .../testing/selftests/kvm/include/test_util.h |  1 +
 tools/testing/selftests/kvm/lib/elf.c         |  8 +++----
 tools/testing/selftests/kvm/lib/io.c          | 23 +++++++++++++++++++
 3 files changed, 28 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index b4872ba8ed12..8140e59b59e5 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -48,6 +48,7 @@ do {								\
 
 ssize_t test_write(int fd, const void *buf, size_t count);
 ssize_t test_read(int fd, void *buf, size_t count);
+ssize_t test_read_bounce(int fd, void *buf, size_t count);
 int test_seq_read(const char *path, char **bufp, size_t *sizep);
 
 void __printf(5, 6) test_assert(bool exp, const char *exp_str,
diff --git a/tools/testing/selftests/kvm/lib/elf.c b/tools/testing/selftests/kvm/lib/elf.c
index f34d926d9735..e829fbe0a11e 100644
--- a/tools/testing/selftests/kvm/lib/elf.c
+++ b/tools/testing/selftests/kvm/lib/elf.c
@@ -31,7 +31,7 @@ static void elfhdr_get(const char *filename, Elf64_Ehdr *hdrp)
 	 * the real size of the ELF header.
 	 */
 	unsigned char ident[EI_NIDENT];
-	test_read(fd, ident, sizeof(ident));
+	test_read_bounce(fd, ident, sizeof(ident));
 	TEST_ASSERT((ident[EI_MAG0] == ELFMAG0) && (ident[EI_MAG1] == ELFMAG1)
 		&& (ident[EI_MAG2] == ELFMAG2) && (ident[EI_MAG3] == ELFMAG3),
 		"ELF MAGIC Mismatch,\n"
@@ -79,7 +79,7 @@ static void elfhdr_get(const char *filename, Elf64_Ehdr *hdrp)
 	offset_rv = lseek(fd, 0, SEEK_SET);
 	TEST_ASSERT(offset_rv == 0, "Seek to ELF header failed,\n"
 		"  rv: %zi expected: %i", offset_rv, 0);
-	test_read(fd, hdrp, sizeof(*hdrp));
+	test_read_bounce(fd, hdrp, sizeof(*hdrp));
 	TEST_ASSERT(hdrp->e_phentsize == sizeof(Elf64_Phdr),
 		"Unexpected physical header size,\n"
 		"  hdrp->e_phentsize: %x\n"
@@ -146,7 +146,7 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
 
 		/* Read in the program header. */
 		Elf64_Phdr phdr;
-		test_read(fd, &phdr, sizeof(phdr));
+		test_read_bounce(fd, &phdr, sizeof(phdr));
 
 		/* Skip if this header doesn't describe a loadable segment. */
 		if (phdr.p_type != PT_LOAD)
@@ -187,7 +187,7 @@ void kvm_vm_elf_load(struct kvm_vm *vm, const char *filename)
 				"  expected: 0x%jx",
 				n1, errno, (intmax_t) offset_rv,
 				(intmax_t) phdr.p_offset);
-			test_read(fd, addr_gva2hva(vm, phdr.p_vaddr),
+			test_read_bounce(fd, addr_gva2hva(vm, phdr.p_vaddr),
 				phdr.p_filesz);
 		}
 	}
diff --git a/tools/testing/selftests/kvm/lib/io.c b/tools/testing/selftests/kvm/lib/io.c
index fedb2a741f0b..60613dce6cfd 100644
--- a/tools/testing/selftests/kvm/lib/io.c
+++ b/tools/testing/selftests/kvm/lib/io.c
@@ -155,3 +155,26 @@ ssize_t test_read(int fd, void *buf, size_t count)
 
 	return num_read;
 }
+
+/* Test read via intermediary buffer
+ *
+ * Same as test_read, except read(2)s happen into a bounce buffer that is memcpy'd
+ * to buf. For use with buffers that cannot be GUP'd (e.g. guest_memfd VMAs if
+ * guest_memfd was created with GUEST_MEMFD_FLAG_NO_DIRECT_MAP).
+ */
+ssize_t test_read_bounce(int fd, void *buf, size_t count)
+{
+	void *bounce_buffer;
+	ssize_t num_read;
+
+	TEST_ASSERT(count > 0, "Unexpected count, count: %zu", count);
+
+	bounce_buffer = malloc(count);
+	TEST_ASSERT(bounce_buffer != NULL, "Failed to allocate bounce buffer");
+
+	num_read = test_read(fd, bounce_buffer, count);
+	memcpy(buf, bounce_buffer, num_read);
+	free(bounce_buffer);
+
+	return num_read;
+}
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH v11 12/16] KVM: selftests: set KVM_MEM_GUEST_MEMFD in vm_mem_add() if guest_memfd != -1
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (10 preceding siblings ...)
  2026-03-17 14:12 ` [PATCH v11 11/16] KVM: selftests: load elf via bounce buffer Kalyazin, Nikita
@ 2026-03-17 14:12 ` Kalyazin, Nikita
  2026-03-17 14:13 ` [PATCH v11 13/16] KVM: selftests: Add guest_memfd based vm_mem_backing_src_types Kalyazin, Nikita
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:12 UTC (permalink / raw)

From: Patrick Roy <patrick.roy@linux.dev>

Have vm_mem_add() always set KVM_MEM_GUEST_MEMFD in the memslot flags if
a guest_memfd is passed in as an argument. This eliminates the
possibility that a guest_memfd instance passed to vm_mem_add() ends up
being ignored because the flags argument does not also specify
KVM_MEM_GUEST_MEMFD.

This makes it easier to support more scenarios in which vm_mem_add() is
not passed a guest_memfd instance but is expected to allocate one.
Currently, this only happens if guest_memfd == -1 while flags &
KVM_MEM_GUEST_MEMFD != 0, but later vm_mem_add() will gain support for
loading the test code itself into guest_memfd (via
GUEST_MEMFD_FLAG_MMAP) when requested via a special
vm_mem_backing_src_type, at which point having to keep the src_type and
flags in sync becomes cumbersome.

Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 tools/testing/selftests/kvm/lib/kvm_util.c | 24 +++++++++++++---------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 1959bf556e88..5b0865683047 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1090,21 +1090,25 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 
 	region->backing_src_type = src_type;
 
-	if (flags & KVM_MEM_GUEST_MEMFD) {
-		if (guest_memfd < 0) {
+	if (guest_memfd < 0) {
+		if (flags & KVM_MEM_GUEST_MEMFD) {
 			uint32_t guest_memfd_flags = 0;
 			TEST_ASSERT(!guest_memfd_offset,
 				    "Offset must be zero when creating new guest_memfd");
 			guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags);
-		} else {
-			/*
-			 * Install a unique fd for each memslot so that the fd
-			 * can be closed when the region is deleted without
-			 * needing to track if the fd is owned by the framework
-			 * or by the caller.
-			 */
-			guest_memfd = kvm_dup(guest_memfd);
 		}
+	} else {
+		/*
+		 * Install a unique fd for each memslot so that the fd
+		 * can be closed when the region is deleted without
+		 * needing to track if the fd is owned by the framework
+		 * or by the caller.
+		 */
+		guest_memfd = kvm_dup(guest_memfd);
+	}
+
+	if (guest_memfd >= 0) {
+		flags |= KVM_MEM_GUEST_MEMFD;
 
 		region->region.guest_memfd = guest_memfd;
 		region->region.guest_memfd_offset = guest_memfd_offset;
-- 
2.50.1




* [PATCH v11 13/16] KVM: selftests: Add guest_memfd based vm_mem_backing_src_types
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (11 preceding siblings ...)
  2026-03-17 14:12 ` [PATCH v11 12/16] KVM: selftests: set KVM_MEM_GUEST_MEMFD in vm_mem_add() if guest_memfd != -1 Kalyazin, Nikita
@ 2026-03-17 14:13 ` Kalyazin, Nikita
  2026-03-17 14:13 ` [PATCH v11 14/16] KVM: selftests: cover GUEST_MEMFD_FLAG_NO_DIRECT_MAP in existing selftests Kalyazin, Nikita
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:13 UTC (permalink / raw)

From: Patrick Roy <patrick.roy@linux.dev>

Allow selftests to configure their memslots such that userspace_addr is
set to a MAP_SHARED mapping of the guest_memfd that's associated with
the memslot. This is the intended configuration for non-CoCo VMs, where all
guest memory is backed by a guest_memfd whose folios are all marked
shared, but KVM is still able to access guest memory to provide
functionality such as MMIO emulation on x86.

Add backing types for normal guest_memfd, as well as direct map removed
guest_memfd.

Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  | 18 ++++++
 .../testing/selftests/kvm/include/test_util.h |  7 +++
 tools/testing/selftests/kvm/lib/kvm_util.c    | 61 ++++++++++---------
 tools/testing/selftests/kvm/lib/test_util.c   |  8 +++
 4 files changed, 65 insertions(+), 29 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 8b39cb919f4f..056a003a63c0 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -664,6 +664,24 @@ static inline bool is_smt_on(void)
 
 void vm_create_irqchip(struct kvm_vm *vm);
 
+static inline uint32_t backing_src_guest_memfd_flags(enum vm_mem_backing_src_type t)
+{
+	uint32_t flags = 0;
+
+	switch (t) {
+	case VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP:
+		flags |= GUEST_MEMFD_FLAG_NO_DIRECT_MAP;
+		fallthrough;
+	case VM_MEM_SRC_GUEST_MEMFD:
+		flags |= GUEST_MEMFD_FLAG_MMAP | GUEST_MEMFD_FLAG_INIT_SHARED;
+		break;
+	default:
+		break;
+	}
+
+	return flags;
+}
+
 static inline int __vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
 					uint64_t flags)
 {
diff --git a/tools/testing/selftests/kvm/include/test_util.h b/tools/testing/selftests/kvm/include/test_util.h
index 8140e59b59e5..ea6de20ce8ef 100644
--- a/tools/testing/selftests/kvm/include/test_util.h
+++ b/tools/testing/selftests/kvm/include/test_util.h
@@ -152,6 +152,8 @@ enum vm_mem_backing_src_type {
 	VM_MEM_SRC_ANONYMOUS_HUGETLB_16GB,
 	VM_MEM_SRC_SHMEM,
 	VM_MEM_SRC_SHARED_HUGETLB,
+	VM_MEM_SRC_GUEST_MEMFD,
+	VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP,
 	NUM_SRC_TYPES,
 };
 
@@ -184,6 +186,11 @@ static inline bool backing_src_is_shared(enum vm_mem_backing_src_type t)
 	return vm_mem_backing_src_alias(t)->flag & MAP_SHARED;
 }
 
+static inline bool backing_src_is_guest_memfd(enum vm_mem_backing_src_type t)
+{
+	return t == VM_MEM_SRC_GUEST_MEMFD || t == VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP;
+}
+
 static inline bool backing_src_can_be_huge(enum vm_mem_backing_src_type t)
 {
 	return t != VM_MEM_SRC_ANONYMOUS && t != VM_MEM_SRC_SHMEM;
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index 5b0865683047..fa4a2fc236fe 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1046,6 +1046,33 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 	alignment = 1;
 #endif
 
+	if (guest_memfd < 0) {
+		if ((flags & KVM_MEM_GUEST_MEMFD) || backing_src_is_guest_memfd(src_type)) {
+			uint32_t guest_memfd_flags = backing_src_guest_memfd_flags(src_type);
+
+			TEST_ASSERT(!guest_memfd_offset,
+				    "Offset must be zero when creating new guest_memfd");
+			guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags);
+		}
+	} else {
+		/*
+		 * Install a unique fd for each memslot so that the fd
+		 * can be closed when the region is deleted without
+		 * needing to track if the fd is owned by the framework
+		 * or by the caller.
+		 */
+		guest_memfd = kvm_dup(guest_memfd);
+	}
+
+	if (guest_memfd >= 0) {
+		flags |= KVM_MEM_GUEST_MEMFD;
+
+		region->region.guest_memfd = guest_memfd;
+		region->region.guest_memfd_offset = guest_memfd_offset;
+	} else {
+		region->region.guest_memfd = -1;
+	}
+
 	/*
 	 * When using THP mmap is not guaranteed to returned a hugepage aligned
 	 * address so we have to pad the mmap. Padding is not needed for HugeTLB
@@ -1061,10 +1088,13 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 	if (alignment > 1)
 		region->mmap_size += alignment;
 
-	region->fd = -1;
-	if (backing_src_is_shared(src_type))
+	if (backing_src_is_guest_memfd(src_type))
+		region->fd = guest_memfd;
+	else if (backing_src_is_shared(src_type))
 		region->fd = kvm_memfd_alloc(region->mmap_size,
 					     src_type == VM_MEM_SRC_SHARED_HUGETLB);
+	else
+		region->fd = -1;
 
 	region->mmap_start = kvm_mmap(region->mmap_size, PROT_READ | PROT_WRITE,
 				      vm_mem_backing_src_alias(src_type)->flag,
@@ -1089,33 +1119,6 @@ void vm_mem_add(struct kvm_vm *vm, enum vm_mem_backing_src_type src_type,
 	}
 
 	region->backing_src_type = src_type;
-
-	if (guest_memfd < 0) {
-		if (flags & KVM_MEM_GUEST_MEMFD) {
-			uint32_t guest_memfd_flags = 0;
-			TEST_ASSERT(!guest_memfd_offset,
-				    "Offset must be zero when creating new guest_memfd");
-			guest_memfd = vm_create_guest_memfd(vm, mem_size, guest_memfd_flags);
-		}
-	} else {
-		/*
-		 * Install a unique fd for each memslot so that the fd
-		 * can be closed when the region is deleted without
-		 * needing to track if the fd is owned by the framework
-		 * or by the caller.
-		 */
-		guest_memfd = kvm_dup(guest_memfd);
-	}
-
-	if (guest_memfd >= 0) {
-		flags |= KVM_MEM_GUEST_MEMFD;
-
-		region->region.guest_memfd = guest_memfd;
-		region->region.guest_memfd_offset = guest_memfd_offset;
-	} else {
-		region->region.guest_memfd = -1;
-	}
-
 	region->unused_phy_pages = sparsebit_alloc();
 	if (vm_arch_has_protected_memory(vm))
 		region->protected_phy_pages = sparsebit_alloc();
diff --git a/tools/testing/selftests/kvm/lib/test_util.c b/tools/testing/selftests/kvm/lib/test_util.c
index 8a1848586a85..ce9fe0271515 100644
--- a/tools/testing/selftests/kvm/lib/test_util.c
+++ b/tools/testing/selftests/kvm/lib/test_util.c
@@ -306,6 +306,14 @@ const struct vm_mem_backing_src_alias *vm_mem_backing_src_alias(uint32_t i)
 			 */
 			.flag = MAP_SHARED,
 		},
+		[VM_MEM_SRC_GUEST_MEMFD] = {
+			.name = "guest_memfd",
+			.flag = MAP_SHARED,
+		},
+		[VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP] = {
+			.name = "guest_memfd_no_direct_map",
+			.flag = MAP_SHARED,
+		}
 	};
 	_Static_assert(ARRAY_SIZE(aliases) == NUM_SRC_TYPES,
 		       "Missing new backing src types?");
-- 
2.50.1




* [PATCH v11 14/16] KVM: selftests: cover GUEST_MEMFD_FLAG_NO_DIRECT_MAP in existing selftests
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (12 preceding siblings ...)
  2026-03-17 14:13 ` [PATCH v11 13/16] KVM: selftests: Add guest_memfd based vm_mem_backing_src_types Kalyazin, Nikita
@ 2026-03-17 14:13 ` Kalyazin, Nikita
  2026-03-17 14:13 ` [PATCH v11 15/16] KVM: selftests: stuff vm_mem_backing_src_type into vm_shape Kalyazin, Nikita
  2026-03-17 14:13 ` [PATCH v11 16/16] KVM: selftests: Test guest execution from direct map removed gmem Kalyazin, Nikita
  15 siblings, 0 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:13 UTC (permalink / raw)

From: Patrick Roy <patrick.roy@linux.dev>

Extend the memory conversion selftests to cover the scenario where the
guest can fault in and write gmem-backed guest memory even if its direct
map entries have been removed. Also cover the new flag in the
guest_memfd_test.c tests.

Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 tools/testing/selftests/kvm/guest_memfd_test.c  | 17 ++++++++++++++++-
 .../kvm/x86/private_mem_conversions_test.c      |  7 ++++---
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index cc329b57ce2e..64c1200c182e 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -403,6 +403,17 @@ static void test_guest_memfd(unsigned long vm_type)
 		__test_guest_memfd(vm, GUEST_MEMFD_FLAG_MMAP |
 				       GUEST_MEMFD_FLAG_INIT_SHARED);
 
+	if (flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP) {
+		__test_guest_memfd(vm, GUEST_MEMFD_FLAG_NO_DIRECT_MAP);
+		if (flags & GUEST_MEMFD_FLAG_MMAP)
+			__test_guest_memfd(vm, GUEST_MEMFD_FLAG_NO_DIRECT_MAP |
+					       GUEST_MEMFD_FLAG_MMAP);
+		if (flags & GUEST_MEMFD_FLAG_INIT_SHARED)
+			__test_guest_memfd(vm, GUEST_MEMFD_FLAG_NO_DIRECT_MAP |
+					       GUEST_MEMFD_FLAG_MMAP |
+					       GUEST_MEMFD_FLAG_INIT_SHARED);
+	}
+
 	kvm_vm_free(vm);
 }
 
@@ -445,10 +456,14 @@ static void test_guest_memfd_guest(void)
 	TEST_ASSERT(vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS) & GUEST_MEMFD_FLAG_INIT_SHARED,
 		    "Default VM type should support INIT_SHARED, supported flags = 0x%x",
 		    vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS));
+	TEST_ASSERT(vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS) & GUEST_MEMFD_FLAG_NO_DIRECT_MAP,
+		    "Default VM type should support NO_DIRECT_MAP, supported flags = 0x%x",
+		    vm_check_cap(vm, KVM_CAP_GUEST_MEMFD_FLAGS));
 
 	size = vm->page_size;
 	fd = vm_create_guest_memfd(vm, size, GUEST_MEMFD_FLAG_MMAP |
-					     GUEST_MEMFD_FLAG_INIT_SHARED);
+					     GUEST_MEMFD_FLAG_INIT_SHARED |
+					     GUEST_MEMFD_FLAG_NO_DIRECT_MAP);
 	vm_set_user_memory_region2(vm, slot, KVM_MEM_GUEST_MEMFD, gpa, size, NULL, fd, 0);
 
 	mem = kvm_mmap(size, PROT_READ | PROT_WRITE, MAP_SHARED, fd);
diff --git a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
index 1969f4ab9b28..8767cb4a037e 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_conversions_test.c
@@ -367,7 +367,7 @@ static void *__test_mem_conversions(void *__vcpu)
 }
 
 static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t nr_vcpus,
-				 uint32_t nr_memslots)
+				 uint32_t nr_memslots, uint64_t gmem_flags)
 {
 	/*
 	 * Allocate enough memory so that each vCPU's chunk of memory can be
@@ -394,7 +394,7 @@ static void test_mem_conversions(enum vm_mem_backing_src_type src_type, uint32_t
 
 	vm_enable_cap(vm, KVM_CAP_EXIT_HYPERCALL, (1 << KVM_HC_MAP_GPA_RANGE));
 
-	memfd = vm_create_guest_memfd(vm, memfd_size, 0);
+	memfd = vm_create_guest_memfd(vm, memfd_size, gmem_flags);
 
 	for (i = 0; i < nr_memslots; i++)
 		vm_mem_add(vm, src_type, BASE_DATA_GPA + slot_size * i,
@@ -474,7 +474,8 @@ int main(int argc, char *argv[])
 		}
 	}
 
-	test_mem_conversions(src_type, nr_vcpus, nr_memslots);
+	test_mem_conversions(src_type, nr_vcpus, nr_memslots, 0);
+	test_mem_conversions(src_type, nr_vcpus, nr_memslots, GUEST_MEMFD_FLAG_NO_DIRECT_MAP);
 
 	return 0;
 }
-- 
2.50.1




* [PATCH v11 15/16] KVM: selftests: stuff vm_mem_backing_src_type into vm_shape
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (13 preceding siblings ...)
  2026-03-17 14:13 ` [PATCH v11 14/16] KVM: selftests: cover GUEST_MEMFD_FLAG_NO_DIRECT_MAP in existing selftests Kalyazin, Nikita
@ 2026-03-17 14:13 ` Kalyazin, Nikita
  2026-03-17 14:13 ` [PATCH v11 16/16] KVM: selftests: Test guest execution from direct map removed gmem Kalyazin, Nikita
  15 siblings, 0 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:13 UTC (permalink / raw)

From: Patrick Roy <patrick.roy@linux.dev>

Use one of the padding fields in struct vm_shape to carry an enum
vm_mem_backing_src_type value, to give the option to overwrite the
default of VM_MEM_SRC_ANONYMOUS in __vm_create().

Overwriting this default will allow tests to create VMs where the test
code is backed by mmap'd guest_memfd instead of anonymous memory.

Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 .../testing/selftests/kvm/include/kvm_util.h  | 19 ++++++++++---------
 tools/testing/selftests/kvm/lib/kvm_util.c    |  2 +-
 tools/testing/selftests/kvm/lib/x86/sev.c     |  1 +
 .../selftests/kvm/pre_fault_memory_test.c     |  1 +
 4 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 056a003a63c0..48b6ee8223aa 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -215,7 +215,7 @@ enum vm_guest_mode {
 struct vm_shape {
 	uint32_t type;
 	uint8_t  mode;
-	uint8_t  pad0;
+	uint8_t  src_type;
 	uint16_t pad1;
 };
 
@@ -223,14 +223,15 @@ kvm_static_assert(sizeof(struct vm_shape) == sizeof(uint64_t));
 
 #define VM_TYPE_DEFAULT			0
 
-#define VM_SHAPE(__mode)			\
-({						\
-	struct vm_shape shape = {		\
-		.mode = (__mode),		\
-		.type = VM_TYPE_DEFAULT		\
-	};					\
-						\
-	shape;					\
+#define VM_SHAPE(__mode)				\
+({							\
+	struct vm_shape shape = {			\
+		.mode	  = (__mode),			\
+		.type	  = VM_TYPE_DEFAULT,		\
+		.src_type = VM_MEM_SRC_ANONYMOUS	\
+	};						\
+							\
+	shape;						\
 })
 
 extern enum vm_guest_mode vm_mode_default;
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index fa4a2fc236fe..824c94c64864 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -500,7 +500,7 @@ struct kvm_vm *__vm_create(struct vm_shape shape, uint32_t nr_runnable_vcpus,
 	if (is_guest_memfd_required(shape))
 		flags |= KVM_MEM_GUEST_MEMFD;
 
-	vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, 0, 0, nr_pages, flags);
+	vm_userspace_mem_region_add(vm, shape.src_type, 0, 0, nr_pages, flags);
 	for (i = 0; i < NR_MEM_REGIONS; i++)
 		vm->memslots[i] = 0;
 
diff --git a/tools/testing/selftests/kvm/lib/x86/sev.c b/tools/testing/selftests/kvm/lib/x86/sev.c
index c3a9838f4806..d920880e4fc0 100644
--- a/tools/testing/selftests/kvm/lib/x86/sev.c
+++ b/tools/testing/selftests/kvm/lib/x86/sev.c
@@ -164,6 +164,7 @@ struct kvm_vm *vm_sev_create_with_one_vcpu(uint32_t type, void *guest_code,
 	struct vm_shape shape = {
 		.mode = VM_MODE_DEFAULT,
 		.type = type,
+		.src_type = VM_MEM_SRC_ANONYMOUS,
 	};
 	struct kvm_vm *vm;
 	struct kvm_vcpu *cpus[1];
diff --git a/tools/testing/selftests/kvm/pre_fault_memory_test.c b/tools/testing/selftests/kvm/pre_fault_memory_test.c
index 93e603d91311..8a4d5af53fab 100644
--- a/tools/testing/selftests/kvm/pre_fault_memory_test.c
+++ b/tools/testing/selftests/kvm/pre_fault_memory_test.c
@@ -165,6 +165,7 @@ static void __test_pre_fault_memory(unsigned long vm_type, bool private)
 	const struct vm_shape shape = {
 		.mode = VM_MODE_DEFAULT,
 		.type = vm_type,
+		.src_type = VM_MEM_SRC_ANONYMOUS,
 	};
 	struct kvm_vcpu *vcpu;
 	struct kvm_run *run;
-- 
2.50.1




* [PATCH v11 16/16] KVM: selftests: Test guest execution from direct map removed gmem
  2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
                   ` (14 preceding siblings ...)
  2026-03-17 14:13 ` [PATCH v11 15/16] KVM: selftests: stuff vm_mem_backing_src_type into vm_shape Kalyazin, Nikita
@ 2026-03-17 14:13 ` Kalyazin, Nikita
  15 siblings, 0 replies; 29+ messages in thread
From: Kalyazin, Nikita @ 2026-03-17 14:13 UTC (permalink / raw)

From: Patrick Roy <patrick.roy@linux.dev>

Add a selftest that loads itself into guest_memfd (via
GUEST_MEMFD_FLAG_MMAP) and triggers an MMIO exit when executed. This
exercises x86 MMIO emulation code inside KVM for guest_memfd-backed
memslots where the guest_memfd folios are direct map removed.
In particular, it validates that the MMIO emulation path (guest page
table walks + instruction fetch) correctly accesses gmem through the VMA
that has been reflected into the memslot's userspace_addr field (instead
of attempting direct map accesses).

Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
---
 .../selftests/kvm/set_memory_region_test.c    | 52 +++++++++++++++++--
 1 file changed, 48 insertions(+), 4 deletions(-)

diff --git a/tools/testing/selftests/kvm/set_memory_region_test.c b/tools/testing/selftests/kvm/set_memory_region_test.c
index 7fe427ff9b38..cb445d420e8c 100644
--- a/tools/testing/selftests/kvm/set_memory_region_test.c
+++ b/tools/testing/selftests/kvm/set_memory_region_test.c
@@ -602,6 +602,41 @@ static void test_mmio_during_vectoring(void)
 
 	kvm_vm_free(vm);
 }
+
+static void guest_code_trigger_mmio(void)
+{
+	/*
+	 * Read some GPA that is not backed by a memslot. KVM considers this
+	 * MMIO and tells userspace to emulate the read.
+	 */
+	READ_ONCE(*((uint64_t *)MEM_REGION_GPA));
+
+	GUEST_DONE();
+}
+
+static void test_guest_memfd_mmio(void)
+{
+	struct kvm_vm *vm;
+	struct kvm_vcpu *vcpu;
+	struct vm_shape shape = {
+		.mode = VM_MODE_DEFAULT,
+		.src_type = VM_MEM_SRC_GUEST_MEMFD_NO_DIRECT_MAP,
+	};
+	pthread_t vcpu_thread;
+
+	pr_info("Testing MMIO emulation for instructions in gmem\n");
+
+	vm = __vm_create_shape_with_one_vcpu(shape, &vcpu, 0, guest_code_trigger_mmio);
+
+	virt_map(vm, MEM_REGION_GPA, MEM_REGION_GPA, 1);
+
+	pthread_create(&vcpu_thread, NULL, vcpu_worker, vcpu);
+
+	/* If the MMIO read was successfully emulated, the vcpu thread will exit */
+	pthread_join(vcpu_thread, NULL);
+
+	kvm_vm_free(vm);
+}
 #endif
 
 int main(int argc, char *argv[])
@@ -625,10 +660,19 @@ int main(int argc, char *argv[])
 	test_add_max_memory_regions();
 
 #ifdef __x86_64__
-	if (kvm_has_cap(KVM_CAP_GUEST_MEMFD) &&
-	    (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM))) {
-		test_add_private_memory_region();
-		test_add_overlapping_private_memory_regions();
+	if (kvm_has_cap(KVM_CAP_GUEST_MEMFD)) {
+		uint64_t valid_flags = kvm_check_cap(KVM_CAP_GUEST_MEMFD_FLAGS);
+
+		if (kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM)) {
+			test_add_private_memory_region();
+			test_add_overlapping_private_memory_regions();
+		}
+
+		if ((valid_flags & GUEST_MEMFD_FLAG_MMAP) &&
+		    (valid_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP))
+			test_guest_memfd_mmio();
+		else
+			pr_info("Skipping tests requiring GUEST_MEMFD_FLAG_MMAP | GUEST_MEMFD_FLAG_NO_DIRECT_MAP\n");
 	} else {
 		pr_info("Skipping tests for KVM_MEM_GUEST_MEMFD memory regions\n");
 	}
-- 
2.50.1



^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 01/16] set_memory: set_direct_map_* to take address
  2026-03-17 14:10 ` [PATCH v11 01/16] set_memory: set_direct_map_* to take address Kalyazin, Nikita
@ 2026-03-23 17:44   ` David Hildenbrand (Arm)
  2026-03-23 18:00   ` Ackerley Tng
  1 sibling, 0 replies; 29+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-23 17:44 UTC (permalink / raw)
  To: Kalyazin, Nikita, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

On 3/17/26 15:10, Kalyazin, Nikita wrote:
> From: Nikita Kalyazin <kalyazin@amazon.com>
> 

Just a nit while reading over it once more: restate what the patch
subject says.

Like "Let's convert set_direct_map_*() to take an address instead of a
page to prepare for adding helpers that operate on folios; it will be
more efficient to convert from a folio directly to an address without
going through a page first."

> This is to avoid excessive conversions folio->page->address when adding
> helpers on top of set_direct_map_valid_noflush() in the next patch.
> 
> Acked-by: David Hildenbrand (Arm) <david@kernel.org>
> Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
> ---

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 02/16] set_memory: add folio_{zap,restore}_direct_map helpers
  2026-03-17 14:10 ` [PATCH v11 02/16] set_memory: add folio_{zap,restore}_direct_map helpers Kalyazin, Nikita
@ 2026-03-23 17:51   ` David Hildenbrand (Arm)
  2026-03-23 18:43   ` Ackerley Tng
  1 sibling, 0 replies; 29+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-23 17:51 UTC (permalink / raw)
  To: Kalyazin, Nikita, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

On 3/17/26 15:10, Kalyazin, Nikita wrote:
> From: Nikita Kalyazin <kalyazin@amazon.com>
> 
> Let's provide folio_{zap,restore}_direct_map helpers as preparation for
> supporting removal of the direct map for guest_memfd folios.
> In folio_zap_direct_map(), flush TLB to make sure the data is not
> accessible.
> 
> The new helpers need to be accessible to KVM on architectures that
> support guest_memfd (x86 and arm64).
> 
> Direct map removal gives guest_memfd the same protection that
> memfd_secret does, such as hardening against Spectre-like attacks
> through in-kernel gadgets.

Maybe mention that there might be a double TLB flush on some
architectures, but that that is something to figure out later. Same
behavior in secretmem code where this will be used next.

> 
> Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
> ---
>  include/linux/set_memory.h | 13 ++++++++++++
>  mm/memory.c                | 42 ++++++++++++++++++++++++++++++++++++++
>  2 files changed, 55 insertions(+)
> 
> diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
> index 1a2563f525fc..24caea2931f9 100644
> --- a/include/linux/set_memory.h
> +++ b/include/linux/set_memory.h
> @@ -41,6 +41,15 @@ static inline int set_direct_map_valid_noflush(const void *addr,
>  	return 0;
>  }
>  
> +static inline int folio_zap_direct_map(struct folio *folio)
> +{
> +	return 0;

Should we return -ENOSYS here or similar?

> +}
> +
> +static inline void folio_restore_direct_map(struct folio *folio)
> +{
> +}
> +
>  static inline bool kernel_page_present(struct page *page)
>  {
>  	return true;
> @@ -57,6 +66,10 @@ static inline bool can_set_direct_map(void)
>  }
>  #define can_set_direct_map can_set_direct_map
>  #endif
> +
> +int folio_zap_direct_map(struct folio *folio);
> +void folio_restore_direct_map(struct folio *folio);
> +
>  #endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
>  
>  #ifdef CONFIG_X86_64
> diff --git a/mm/memory.c b/mm/memory.c
> index 07778814b4a8..cab6bb237fc0 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -78,6 +78,7 @@
>  #include <linux/sched/sysctl.h>
>  #include <linux/pgalloc.h>
>  #include <linux/uaccess.h>
> +#include <linux/set_memory.h>
>  
>  #include <trace/events/kmem.h>
>  
> @@ -7478,3 +7479,44 @@ void vma_pgtable_walk_end(struct vm_area_struct *vma)
>  	if (is_vm_hugetlb_page(vma))
>  		hugetlb_vma_unlock_read(vma);
>  }
> +
> +#ifdef CONFIG_ARCH_HAS_SET_DIRECT_MAP
> +/**
> + * folio_zap_direct_map - remove a folio from the kernel direct map
> + * @folio: folio to remove from the direct map
> + *
> + * Removes the folio from the kernel direct map and flushes the TLB.  This may
> + * require splitting huge pages in the direct map, which can fail due to memory
> + * allocation.

Best to mention

"So far, only order-0 folios are supported." and then ...

> + *
> + * Return: 0 on success, or a negative error code on failure.
> + */
> +int folio_zap_direct_map(struct folio *folio)
> +{
> +	const void *addr = folio_address(folio);
> +	int ret;
> +

if (folio_test_large(folio))
	return -EINVAL;


With that,

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 03/16] mm/secretmem: make use of folio_{zap,restore}_direct_map
  2026-03-17 14:11 ` [PATCH v11 03/16] mm/secretmem: make use of folio_{zap,restore}_direct_map Kalyazin, Nikita
@ 2026-03-23 17:53   ` David Hildenbrand (Arm)
  2026-03-23 18:46   ` Ackerley Tng
  1 sibling, 0 replies; 29+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-23 17:53 UTC (permalink / raw)
  To: Kalyazin, Nikita, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

On 3/17/26 15:11, Kalyazin, Nikita wrote:
> From: Nikita Kalyazin <kalyazin@amazon.com>
> 

Describe your change :)

And it's also worth mentioning that we now flush the TLB even though
filemap_add_folio() failed -- which shouldn't matter in practice I guess.

With that

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 05/16] mm/gup: drop local variable in gup_fast_folio_allowed
  2026-03-17 14:11 ` [PATCH v11 05/16] mm/gup: drop local variable in gup_fast_folio_allowed Kalyazin, Nikita
@ 2026-03-23 17:55   ` David Hildenbrand (Arm)
  2026-03-23 20:22     ` Ackerley Tng
  0 siblings, 1 reply; 29+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-23 17:55 UTC (permalink / raw)
  To: Kalyazin, Nikita, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

On 3/17/26 15:11, Kalyazin, Nikita wrote:
> From: Nikita Kalyazin <kalyazin@amazon.com>
> 
> Move the check for pinning closer to where the result is used.
> No functional changes.
> 
> Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
> ---
>  mm/gup.c | 23 ++++++++++++-----------
>  1 file changed, 12 insertions(+), 11 deletions(-)
> 
> diff --git a/mm/gup.c b/mm/gup.c
> index 5856d35be385..869d79c8daa4 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2737,18 +2737,9 @@ EXPORT_SYMBOL(get_user_pages_unlocked);
>   */
>  static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
>  {
> -	bool reject_file_backed = false;
>  	struct address_space *mapping;
>  	unsigned long mapping_flags;
>  
> -	/*
> -	 * If we aren't pinning then no problematic write can occur. A long term
> -	 * pin is the most egregious case so this is the one we disallow.
> -	 */
> -	if ((flags & (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE)) ==
> -	    (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE))
> -		reject_file_backed = true;
> -
>  	/* We hold a folio reference, so we can safely access folio fields. */
>  	if (WARN_ON_ONCE(folio_test_slab(folio)))
>  		return false;
> @@ -2793,8 +2784,18 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
>  	 */
>  	if (secretmem_mapping(mapping))
>  		return false;
> -	/* The only remaining allowed file system is shmem. */
> -	return !reject_file_backed || shmem_mapping(mapping);
> +
> +	/*
> +	 * If we aren't pinning then no problematic write can occur. A writable
> +	 * long term pin is the most egregious case, so this is the one we
> +	 * allow only for ...
> +	 */
> +	if ((flags & (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE)) !=
> +	    (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE))
> +		return true;
> +
> +	/* ... hugetlb (which we allowed above already) and shared memory. */
> +	return shmem_mapping(mapping);

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

I'm wondering if it would be a good idea to check for a hugetlb mapping
here instead of having the folio_test_hugetlb() check above.

Something to ponder about :)

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 01/16] set_memory: set_direct_map_* to take address
  2026-03-17 14:10 ` [PATCH v11 01/16] set_memory: set_direct_map_* to take address Kalyazin, Nikita
  2026-03-23 17:44   ` David Hildenbrand (Arm)
@ 2026-03-23 18:00   ` Ackerley Tng
  1 sibling, 0 replies; 29+ messages in thread
From: Ackerley Tng @ 2026-03-23 18:00 UTC (permalink / raw)
  To: Kalyazin, Nikita, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	yosry@kernel.org, ajones@ventanamicro.com, maobibo@loongson.cn,
	tabba@google.com, prsampat@amd.com, wu.fei9@sanechips.com.cn,
	mlevitsk@redhat.com, jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

"Kalyazin, Nikita" <kalyazin@amazon.co.uk> writes:

> From: Nikita Kalyazin <kalyazin@amazon.com>
>
> This is to avoid excessive conversions folio->page->address when adding
> helpers on top of set_direct_map_valid_noflush() in the next patch.
>

I can't take credit for what Sashiko [1] spotted.

> Acked-by: David Hildenbrand (Arm) <david@kernel.org>
> Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
> ---
>  arch/arm64/include/asm/set_memory.h     |  7 ++++---
>  arch/arm64/mm/pageattr.c                | 19 +++++++++----------
>  arch/loongarch/include/asm/set_memory.h |  7 ++++---
>  arch/loongarch/mm/pageattr.c            | 25 +++++++++++--------------
>  arch/riscv/include/asm/set_memory.h     |  7 ++++---
>  arch/riscv/mm/pageattr.c                | 17 +++++++++--------
>  arch/s390/include/asm/set_memory.h      |  7 ++++---
>  arch/s390/mm/pageattr.c                 | 13 +++++++------
>  arch/x86/include/asm/set_memory.h       |  7 ++++---
>  arch/x86/mm/pat/set_memory.c            | 23 ++++++++++++-----------
>  include/linux/set_memory.h              |  9 +++++----
>  kernel/power/snapshot.c                 |  4 ++--
>  mm/execmem.c                            |  6 ++++--
>  mm/secretmem.c                          |  6 +++---
>  mm/vmalloc.c                            | 11 +++++++----
>  15 files changed, 89 insertions(+), 79 deletions(-)
>
>
> [...snip...]
>
> diff --git a/arch/loongarch/mm/pageattr.c b/arch/loongarch/mm/pageattr.c
> index f5e910b68229..9e08905d3624 100644
> --- a/arch/loongarch/mm/pageattr.c
> +++ b/arch/loongarch/mm/pageattr.c
> @@ -198,32 +198,29 @@ bool kernel_page_present(struct page *page)
>  	return pte_present(ptep_get(pte));
>  }
>
> -int set_direct_map_default_noflush(struct page *page)
> +int set_direct_map_default_noflush(const void *addr)
>  {
> -	unsigned long addr = (unsigned long)page_address(page);
> -
> -	if (addr < vm_map_base)
> +	if ((unsigned long)addr < vm_map_base)
>  		return 0;
>
> -	return __set_memory(addr, 1, PAGE_KERNEL, __pgprot(0));
> +	return __set_memory((unsigned long)addr, 1, PAGE_KERNEL, __pgprot(0));
>  }
>
> -int set_direct_map_invalid_noflush(struct page *page)
> +int set_direct_map_invalid_noflush(const void *addr)
>  {
> -	unsigned long addr = (unsigned long)page_address(page);
> -
> -	if (addr < vm_map_base)
> +	if ((unsigned long)addr < vm_map_base)
>  		return 0;
>
> -	return __set_memory(addr, 1, __pgprot(0), __pgprot(_PAGE_PRESENT | _PAGE_VALID));
> +	return __set_memory((unsigned long)addr, 1, __pgprot(0),
> +			    __pgprot(_PAGE_PRESENT | _PAGE_VALID));
>  }
>
> -int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
> +int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
> +				 bool valid)
>  {
> -	unsigned long addr = (unsigned long)page_address(page);
>  	pgprot_t set, clear;
>
> -	if (addr < vm_map_base)
> +	if ((unsigned long)addr < vm_map_base)
>  		return 0;
>
>  	if (valid) {
> @@ -234,5 +231,5 @@ int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
>  		clear = __pgprot(_PAGE_PRESENT | _PAGE_VALID);
>  	}
>
> -	return __set_memory(addr, 1, set, clear);
> +	return __set_memory((unsigned long)addr, 1, set, clear);

Sashiko also spotted that there is a hard-coded 1 here. Before this
change, it was already hard-coded to 1. Not sure if this is a
bug.

Could this be addressed in a separate patch series?

>  }
>
> [...snip...]
>
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 40581a720fe8..6aea1f470fd5 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -2587,9 +2587,9 @@ int set_pages_rw(struct page *page, int numpages)
>  	return set_memory_rw(addr, numpages);
>  }
>
> -static int __set_pages_p(struct page *page, int numpages)
> +static int __set_pages_p(const void *addr, int numpages)
>  {
> -	unsigned long tempaddr = (unsigned long) page_address(page);
> +	unsigned long tempaddr = (unsigned long)addr;
>  	struct cpa_data cpa = { .vaddr = &tempaddr,
>  				.pgd = NULL,
>  				.numpages = numpages,
> @@ -2606,9 +2606,9 @@ static int __set_pages_p(struct page *page, int numpages)
>  	return __change_page_attr_set_clr(&cpa, 1);
>  }
>
> -static int __set_pages_np(struct page *page, int numpages)
> +static int __set_pages_np(const void *addr, int numpages)
>  {
> -	unsigned long tempaddr = (unsigned long) page_address(page);
> +	unsigned long tempaddr = (unsigned long)addr;
>  	struct cpa_data cpa = { .vaddr = &tempaddr,
>  				.pgd = NULL,
>  				.numpages = numpages,
> @@ -2625,22 +2625,23 @@ static int __set_pages_np(struct page *page, int numpages)
>  	return __change_page_attr_set_clr(&cpa, 1);
>  }
>

I agree that in arch/x86/mm/pat/set_memory.c, __kernel_map_pages() has
calls to __set_pages_p() and __set_pages_np() that seem to have been
missed in this patch. Those calls still pass struct page *. Maybe
that's because __kernel_map_pages() is guarded by
CONFIG_DEBUG_PAGEALLOC, so if you were using an LSP-guided refactoring,

Should probably try a grep to see what else needs replacing :)

[1] https://sashiko.dev/#/patchset/20260317141031.514-1-kalyazin%40amazon.com

> -int set_direct_map_invalid_noflush(struct page *page)
> +int set_direct_map_invalid_noflush(const void *addr)
>  {
> -	return __set_pages_np(page, 1);
> +	return __set_pages_np(addr, 1);
>  }
>
> -int set_direct_map_default_noflush(struct page *page)
> +int set_direct_map_default_noflush(const void *addr)
>  {
> -	return __set_pages_p(page, 1);
> +	return __set_pages_p(addr, 1);
>  }
>
> -int set_direct_map_valid_noflush(struct page *page, unsigned nr, bool valid)
> +int set_direct_map_valid_noflush(const void *addr, unsigned long numpages,
> +				 bool valid)
>  {
>  	if (valid)
> -		return __set_pages_p(page, nr);
> +		return __set_pages_p(addr, numpages);
>
> -	return __set_pages_np(page, nr);
> +	return __set_pages_np(addr, numpages);
>  }
>
>  #ifdef CONFIG_DEBUG_PAGEALLOC
>
> [...snip...]
>


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 10/16] KVM: guest_memfd: Add flag to remove from direct map
  2026-03-17 14:12 ` [PATCH v11 10/16] KVM: guest_memfd: Add flag to remove from direct map Kalyazin, Nikita
@ 2026-03-23 18:05   ` David Hildenbrand (Arm)
  2026-03-23 20:47     ` Ackerley Tng
  2026-03-23 21:15   ` Ackerley Tng
  1 sibling, 1 reply; 29+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-23 18:05 UTC (permalink / raw)
  To: Kalyazin, Nikita, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

On 3/17/26 15:12, Kalyazin, Nikita wrote:
> From: Patrick Roy <patrick.roy@linux.dev>
> 
> Add GUEST_MEMFD_FLAG_NO_DIRECT_MAP flag for KVM_CREATE_GUEST_MEMFD()
> ioctl. When set, guest_memfd folios will be removed from the direct map
> after preparation, with direct map entries only restored when the folios
> are freed.
> 
> To ensure these folios do not end up in places where the kernel cannot
> deal with them, set AS_NO_DIRECT_MAP on the guest_memfd's struct
> address_space if GUEST_MEMFD_FLAG_NO_DIRECT_MAP is requested.
> 
> Note that this flag causes removal of direct map entries for all
> guest_memfd folios independent of whether they are "shared" or "private"
> (although current guest_memfd only supports either all folios in the
> "shared" state, or all folios in the "private" state if
> GUEST_MEMFD_FLAG_MMAP is not set). The use case for removing direct map
> entries for the shared parts of guest_memfd as well is a special type of
> non-CoCo VM where host userspace is trusted to have access to all of
> guest memory, but where Spectre-style transient execution attacks
> through the host kernel's direct map should still be mitigated.  In this
> setup, KVM retains access to guest memory via userspace mappings of
> guest_memfd, which are reflected back into KVM's memslots via
> userspace_addr. This is needed for things like MMIO emulation on x86_64
> to work.
> 
> Direct map entries are zapped right before guest or userspace mappings
> of gmem folios are set up, e.g. in kvm_gmem_fault_user_mapping() or
> kvm_gmem_get_pfn() [called from the KVM MMU code]. The only place where
> a gmem folio can be allocated without being mapped anywhere is
> kvm_gmem_populate(), where handling potential failures of direct map
> removal is not possible (by the time direct map removal is attempted,
> the folio is already marked as prepared, meaning attempting to re-try
> kvm_gmem_populate() would just result in -EEXIST without fixing up the
> direct map state). These folios are then removed from the direct map
> upon kvm_gmem_get_pfn(), e.g. when they are mapped into the guest later.
> 
> Signed-off-by: Patrick Roy <patrick.roy@linux.dev>

If you changed this patch significantly, you should likely add a

Co-developed-by: Nikita Kalyazin <kalyazin@amazon.com>

above your sob.

(applies to other patches as well, please double check)

> Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
> ---
>  Documentation/virt/kvm/api.rst | 21 ++++++-----
>  include/linux/kvm_host.h       |  3 ++
>  include/uapi/linux/kvm.h       |  1 +
>  virt/kvm/guest_memfd.c         | 67 ++++++++++++++++++++++++++++++++--
>  4 files changed, 79 insertions(+), 13 deletions(-)
> 
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 032516783e96..8feec77b03fe 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -6439,15 +6439,18 @@ a single guest_memfd file, but the bound ranges must not overlap).
>  The capability KVM_CAP_GUEST_MEMFD_FLAGS enumerates the `flags` that can be
>  specified via KVM_CREATE_GUEST_MEMFD.  Currently defined flags:
>  
> -  ============================ ================================================
> -  GUEST_MEMFD_FLAG_MMAP        Enable using mmap() on the guest_memfd file
> -                               descriptor.
> -  GUEST_MEMFD_FLAG_INIT_SHARED Make all memory in the file shared during
> -                               KVM_CREATE_GUEST_MEMFD (memory files created
> -                               without INIT_SHARED will be marked private).
> -                               Shared memory can be faulted into host userspace
> -                               page tables. Private memory cannot.
> -  ============================ ================================================
> +  ============================== ================================================
> +  GUEST_MEMFD_FLAG_MMAP          Enable using mmap() on the guest_memfd file
> +                                 descriptor.
> +  GUEST_MEMFD_FLAG_INIT_SHARED   Make all memory in the file shared during
> +                                 KVM_CREATE_GUEST_MEMFD (memory files created
> +                                 without INIT_SHARED will be marked private).
> +                                 Shared memory can be faulted into host userspace
> +                                 page tables. Private memory cannot.
> +  GUEST_MEMFD_FLAG_NO_DIRECT_MAP The guest_memfd instance will unmap the memory
> +                                 backing it from the kernel's address space
> +                                 before passing it off to userspace or the guest.
> +  ============================== ================================================
>  
>  When the KVM MMU performs a PFN lookup to service a guest fault and the backing
>  guest_memfd has the GUEST_MEMFD_FLAG_MMAP set, then the fault will always be
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index ce8c5fdf2752..c95747e2278c 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -738,6 +738,9 @@ static inline u64 kvm_gmem_get_supported_flags(struct kvm *kvm)
>  	if (!kvm || kvm_arch_supports_gmem_init_shared(kvm))
>  		flags |= GUEST_MEMFD_FLAG_INIT_SHARED;
>  
> +	if (!kvm || kvm_arch_gmem_supports_no_direct_map(kvm))
> +		flags |= GUEST_MEMFD_FLAG_NO_DIRECT_MAP;
> +
>  	return flags;
>  }
>  #endif
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 80364d4dbebb..d864f67efdb7 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1642,6 +1642,7 @@ struct kvm_memory_attributes {
>  #define KVM_CREATE_GUEST_MEMFD	_IOWR(KVMIO,  0xd4, struct kvm_create_guest_memfd)
>  #define GUEST_MEMFD_FLAG_MMAP		(1ULL << 0)
>  #define GUEST_MEMFD_FLAG_INIT_SHARED	(1ULL << 1)
> +#define GUEST_MEMFD_FLAG_NO_DIRECT_MAP	(1ULL << 2)
>  
>  struct kvm_create_guest_memfd {
>  	__u64 size;
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 651649623448..c9344647579c 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -7,6 +7,7 @@
>  #include <linux/mempolicy.h>
>  #include <linux/pseudo_fs.h>
>  #include <linux/pagemap.h>
> +#include <linux/set_memory.h>
>  
>  #include "kvm_mm.h"
>  
> @@ -76,6 +77,35 @@ static int __kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slo
>  	return 0;
>  }
>  
> +#define KVM_GMEM_FOLIO_NO_DIRECT_MAP BIT(0)
> +
> +static bool kvm_gmem_folio_no_direct_map(struct folio *folio)
> +{
> +	return ((u64)folio->private) & KVM_GMEM_FOLIO_NO_DIRECT_MAP;
> +}
> +
> +static int kvm_gmem_folio_zap_direct_map(struct folio *folio)
> +{
> +	u64 gmem_flags = GMEM_I(folio_inode(folio))->flags;
> +	int r = 0;
> +
> +	if (kvm_gmem_folio_no_direct_map(folio) || !(gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP))

The function is only called when

	kvm_gmem_no_direct_map(folio_inode(folio))

Does it really make sense to check for GUEST_MEMFD_FLAG_NO_DIRECT_MAP again?

If anything, it should be a warning if GUEST_MEMFD_FLAG_NO_DIRECT_MAP is
not set.

Further, kvm_gmem_folio_zap_direct_map() uses the folio lock to
synchronize, right? Might be worth pointing that out somehow (e.g.,
lockdep check if possible).

> +		goto out;
> +
> +	r = folio_zap_direct_map(folio);
> +	if (!r)
> +		folio->private = (void *)((u64)folio->private | KVM_GMEM_FOLIO_NO_DIRECT_MAP);
> +
> +out:
> +	return r;
> +}
> +
> +static void kvm_gmem_folio_restore_direct_map(struct folio *folio)
> +{

kvm_gmem_folio_zap_direct_map() is allowed to be called on folios that
already have the direct map removed, but
kvm_gmem_folio_restore_direct_map() cannot be called if the direct map
was already restored.

Should we make that more consistent?


Hoping Sean can find some time to review

-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 04/16] mm/gup: drop secretmem optimization from gup_fast_folio_allowed
  2026-03-17 14:11 ` [PATCH v11 04/16] mm/gup: drop secretmem optimization from gup_fast_folio_allowed Kalyazin, Nikita
@ 2026-03-23 18:31   ` David Hildenbrand (Arm)
  0 siblings, 0 replies; 29+ messages in thread
From: David Hildenbrand (Arm) @ 2026-03-23 18:31 UTC (permalink / raw)
  To: Kalyazin, Nikita, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	ackerleytng@google.com, yosry@kernel.org, ajones@ventanamicro.com,
	maobibo@loongson.cn, tabba@google.com, prsampat@amd.com,
	wu.fei9@sanechips.com.cn, mlevitsk@redhat.com,
	jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek, Vlastimil Babka, Dan Williams, Alistair Popple

On 3/17/26 15:11, Kalyazin, Nikita wrote:
> From: Patrick Roy <patrick.roy@linux.dev>
> 
> This drops an optimization in gup_fast_folio_allowed() where
> secretmem_mapping() was only called if CONFIG_SECRETMEM=y. secretmem is
> enabled by default since commit b758fe6df50d ("mm/secretmem: make it on
> by default"), so in most configurations the secretmem check was not
> actually elided anymore anyway.
> 
> This is in preparation of the generalization of handling mappings where
> direct map entries of folios are set to not present.  Currently,
> mappings that match this description are secretmem mappings
> (memfd_secret()).  Later, some guest_memfd configurations will also fall
> into this category.
> 
> Signed-off-by: Patrick Roy <patrick.roy@linux.dev>
> Acked-by: Vlastimil Babka <vbabka@suse.cz>
> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
> ---
>  mm/gup.c | 11 +----------
>  1 file changed, 1 insertion(+), 10 deletions(-)
> 
> diff --git a/mm/gup.c b/mm/gup.c
> index 8e7dc2c6ee73..5856d35be385 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2739,7 +2739,6 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
>  {
>  	bool reject_file_backed = false;
>  	struct address_space *mapping;
> -	bool check_secretmem = false;
>  	unsigned long mapping_flags;
>  
>  	/*
> @@ -2751,14 +2750,6 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
>  		reject_file_backed = true;
>  
>  	/* We hold a folio reference, so we can safely access folio fields. */
> -
> -	/* secretmem folios are always order-0 folios. */
> -	if (IS_ENABLED(CONFIG_SECRETMEM) && !folio_test_large(folio))
> -		check_secretmem = true;
> -
> -	if (!reject_file_backed && !check_secretmem)
> -		return true;
> -

The AI review says that this will force all small folios through the
mapping check (which we obviously need later :) ).

It brings up two cases where page->mapping is not set up:

1) ZONE_DEVICE pages (like Device DAX and PCI P2PDMA)

2) large shmem folios in the swap cache


2) doesn't make sense, because the folio cannot be mapped in user space
when that happens.

I am also skeptical about 1), especially as large folios are also
supported for device dax and would be problematic here.
__dev_dax_pte_fault() clearly sets folio->mapping through dax_set_mapping().


If 1) is ever a real case, we could allow such folios by checking for
folio_is_zone_device(). But I am not sure that is really required.
Sounds weird.


-- 
Cheers,

David


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 02/16] set_memory: add folio_{zap,restore}_direct_map helpers
  2026-03-17 14:10 ` [PATCH v11 02/16] set_memory: add folio_{zap,restore}_direct_map helpers Kalyazin, Nikita
  2026-03-23 17:51   ` David Hildenbrand (Arm)
@ 2026-03-23 18:43   ` Ackerley Tng
  1 sibling, 0 replies; 29+ messages in thread
From: Ackerley Tng @ 2026-03-23 18:43 UTC (permalink / raw)
  To: Kalyazin, Nikita, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	yosry@kernel.org, ajones@ventanamicro.com, maobibo@loongson.cn,
	tabba@google.com, prsampat@amd.com, wu.fei9@sanechips.com.cn,
	mlevitsk@redhat.com, jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

"Kalyazin, Nikita" <kalyazin@amazon.co.uk> writes:

> From: Nikita Kalyazin <kalyazin@amazon.com>
>
> Let's provide folio_{zap,restore}_direct_map helpers as preparation for
> supporting removal of the direct map for guest_memfd folios.
> In folio_zap_direct_map(), flush TLB to make sure the data is not
> accessible.
>
> The new helpers need to be accessible to KVM on architectures that
> support guest_memfd (x86 and arm64).
>
> Direct map removal gives guest_memfd the same protection that
> memfd_secret does, such as hardening against Spectre-like attacks
> through in-kernel gadgets.
>
> Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
> ---
>  include/linux/set_memory.h | 13 ++++++++++++
>  mm/memory.c                | 42 ++++++++++++++++++++++++++++++++++++++
>  2 files changed, 55 insertions(+)
>
> diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
> index 1a2563f525fc..24caea2931f9 100644
> --- a/include/linux/set_memory.h
> +++ b/include/linux/set_memory.h
> @@ -41,6 +41,15 @@ static inline int set_direct_map_valid_noflush(const void *addr,
>  	return 0;
>  }
>
> +static inline int folio_zap_direct_map(struct folio *folio)
> +{
> +	return 0;
> +}
> +
> +static inline void folio_restore_direct_map(struct folio *folio)
> +{
> +}
> +
>  static inline bool kernel_page_present(struct page *page)
>  {
>  	return true;
> @@ -57,6 +66,10 @@ static inline bool can_set_direct_map(void)
>  }
>  #define can_set_direct_map can_set_direct_map
>  #endif
> +
> +int folio_zap_direct_map(struct folio *folio);
> +void folio_restore_direct_map(struct folio *folio);
> +
>  #endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
>
>  #ifdef CONFIG_X86_64
> diff --git a/mm/memory.c b/mm/memory.c
> index 07778814b4a8..cab6bb237fc0 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -78,6 +78,7 @@
>  #include <linux/sched/sysctl.h>
>  #include <linux/pgalloc.h>
>  #include <linux/uaccess.h>
> +#include <linux/set_memory.h>
>
>  #include <trace/events/kmem.h>
>
> @@ -7478,3 +7479,44 @@ void vma_pgtable_walk_end(struct vm_area_struct *vma)
>  	if (is_vm_hugetlb_page(vma))
>  		hugetlb_vma_unlock_read(vma);
>  }
> +
> +#ifdef CONFIG_ARCH_HAS_SET_DIRECT_MAP
> +/**
> + * folio_zap_direct_map - remove a folio from the kernel direct map
> + * @folio: folio to remove from the direct map
> + *
> + * Removes the folio from the kernel direct map and flushes the TLB.  This may
> + * require splitting huge pages in the direct map, which can fail due to memory
> + * allocation.
> + *
> + * Return: 0 on success, or a negative error code on failure.
> + */
> +int folio_zap_direct_map(struct folio *folio)
> +{
> +	const void *addr = folio_address(folio);
> +	int ret;
> +
> +	ret = set_direct_map_valid_noflush(addr, folio_nr_pages(folio), false);
> +	flush_tlb_kernel_range((unsigned long)addr,
> +			       (unsigned long)addr + folio_size(folio));
> +
> +	return ret;
> +}
> +EXPORT_SYMBOL_FOR_MODULES(folio_zap_direct_map, "kvm");
> +
> +/**
> + * folio_restore_direct_map - restore the kernel direct map entry for a folio
> + * @folio: folio whose direct map entry is to be restored
> + *
> + * This may only be called after a prior successful folio_zap_direct_map() on
> + * the same folio.  Because the zap will have already split any huge pages in
> + * the direct map, restoration here only updates protection bits and cannot
> + * fail.
> + */
> +void folio_restore_direct_map(struct folio *folio)
> +{
> +	WARN_ON_ONCE(set_direct_map_valid_noflush(folio_address(folio),
> +						  folio_nr_pages(folio), true));
> +}
> +EXPORT_SYMBOL_FOR_MODULES(folio_restore_direct_map, "kvm");
> +#endif /* CONFIG_ARCH_HAS_SET_DIRECT_MAP */
> --
> 2.50.1

Reviewed-by: Ackerley Tng <ackerleytng@google.com>

I also took a look at Sashiko's [1] comments and I think that the
highmem folio issues should be the responsibility of the caller to
check.

[1] https://sashiko.dev/#/patchset/20260317141031.514-1-kalyazin%40amazon.com


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 03/16] mm/secretmem: make use of folio_{zap,restore}_direct_map
  2026-03-17 14:11 ` [PATCH v11 03/16] mm/secretmem: make use of folio_{zap,restore}_direct_map Kalyazin, Nikita
  2026-03-23 17:53   ` David Hildenbrand (Arm)
@ 2026-03-23 18:46   ` Ackerley Tng
  1 sibling, 0 replies; 29+ messages in thread
From: Ackerley Tng @ 2026-03-23 18:46 UTC (permalink / raw)
  To: Kalyazin, Nikita, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	yosry@kernel.org, ajones@ventanamicro.com, maobibo@loongson.cn,
	tabba@google.com, prsampat@amd.com, wu.fei9@sanechips.com.cn,
	mlevitsk@redhat.com, jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

"Kalyazin, Nikita" <kalyazin@amazon.co.uk> writes:

> From: Nikita Kalyazin <kalyazin@amazon.com>
>
> Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
> ---
>  mm/secretmem.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/mm/secretmem.c b/mm/secretmem.c
> index fd29b33c6764..27b176af8fc4 100644
> --- a/mm/secretmem.c
> +++ b/mm/secretmem.c
> @@ -53,7 +53,6 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
>  	struct inode *inode = file_inode(vmf->vma->vm_file);
>  	pgoff_t offset = vmf->pgoff;
>  	gfp_t gfp = vmf->gfp_mask;
> -	unsigned long addr;
>  	struct folio *folio;
>  	vm_fault_t ret;
>  	int err;
> @@ -72,7 +71,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
>  			goto out;
>  		}
>
> -		err = set_direct_map_invalid_noflush(folio_address(folio));
> +		err = folio_zap_direct_map(folio);
>  		if (err) {
>  			folio_put(folio);
>  			ret = vmf_error(err);
> @@ -87,7 +86,7 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
>  			 * already happened when we marked the page invalid
>  			 * which guarantees that this call won't fail
>  			 */
> -			set_direct_map_default_noflush(folio_address(folio));
> +			folio_restore_direct_map(folio);
>  			folio_put(folio);
>  			if (err == -EEXIST)
>  				goto retry;
> @@ -95,9 +94,6 @@ static vm_fault_t secretmem_fault(struct vm_fault *vmf)
>  			ret = vmf_error(err);
>  			goto out;
>  		}
> -
> -		addr = (unsigned long)folio_address(folio);
> -		flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>  	}
>
>  	vmf->page = folio_file_page(folio, vmf->pgoff);
> --
> 2.50.1

Reviewed-by: Ackerley Tng <ackerleytng@google.com>


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 05/16] mm/gup: drop local variable in gup_fast_folio_allowed
  2026-03-23 17:55   ` David Hildenbrand (Arm)
@ 2026-03-23 20:22     ` Ackerley Tng
  0 siblings, 0 replies; 29+ messages in thread
From: Ackerley Tng @ 2026-03-23 20:22 UTC (permalink / raw)
  To: David Hildenbrand (Arm), Kalyazin, Nikita, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	yosry@kernel.org, ajones@ventanamicro.com, maobibo@loongson.cn,
	tabba@google.com, prsampat@amd.com, wu.fei9@sanechips.com.cn,
	mlevitsk@redhat.com, jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

"David Hildenbrand (Arm)" <david@kernel.org> writes:

> On 3/17/26 15:11, Kalyazin, Nikita wrote:
>> From: Nikita Kalyazin <kalyazin@amazon.com>
>>
>> Move the check for pinning closer to where the result is used.
>> No functional changes.
>>
>> Signed-off-by: Nikita Kalyazin <kalyazin@amazon.com>
>> ---
>>  mm/gup.c | 23 ++++++++++++-----------
>>  1 file changed, 12 insertions(+), 11 deletions(-)
>>
>> diff --git a/mm/gup.c b/mm/gup.c
>> index 5856d35be385..869d79c8daa4 100644
>> --- a/mm/gup.c
>> +++ b/mm/gup.c
>> @@ -2737,18 +2737,9 @@ EXPORT_SYMBOL(get_user_pages_unlocked);
>>   */
>>  static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
>>  {
>> -	bool reject_file_backed = false;
>>  	struct address_space *mapping;
>>  	unsigned long mapping_flags;
>>
>> -	/*
>> -	 * If we aren't pinning then no problematic write can occur. A long term
>> -	 * pin is the most egregious case so this is the one we disallow.
>> -	 */
>> -	if ((flags & (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE)) ==
>> -	    (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE))
>> -		reject_file_backed = true;
>> -
>>  	/* We hold a folio reference, so we can safely access folio fields. */
>>  	if (WARN_ON_ONCE(folio_test_slab(folio)))
>>  		return false;
>> @@ -2793,8 +2784,18 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
>>  	 */
>>  	if (secretmem_mapping(mapping))
>>  		return false;
>> -	/* The only remaining allowed file system is shmem. */
>> -	return !reject_file_backed || shmem_mapping(mapping);
>> +
>> +	/*
>> +	 * If we aren't pinning then no problematic write can occur. A writable
>> +	 * long term pin is the most egregious case, so this is the one we
>> +	 * allow only for ...
>> +	 */
>> +	if ((flags & (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE)) !=
>> +	    (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE))
>> +		return true;
>> +
>> +	/* ... hugetlb (which we allowed above already) and shared memory. */
>> +	return shmem_mapping(mapping);
>
> Acked-by: David Hildenbrand (Arm) <david@kernel.org>
>
> I'm wondering if it would be a good idea to check for a hugetlb mapping
> here instead of having the folio_test_hugetlb() check above.
>

I think it's nice that hugetlb folios are determined immediately to be
eligible for GUP-fast regardless of whether the folio is file-backed or
not.

> Something to ponder about :)
>
> --
> Cheers,
>
> David


^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH v11 10/16] KVM: guest_memfd: Add flag to remove from direct map
  2026-03-23 18:05   ` David Hildenbrand (Arm)
@ 2026-03-23 20:47     ` Ackerley Tng
  0 siblings, 0 replies; 29+ messages in thread
From: Ackerley Tng @ 2026-03-23 20:47 UTC (permalink / raw)
  To: David Hildenbrand (Arm), Kalyazin, Nikita, kvm@vger.kernel.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	yosry@kernel.org, ajones@ventanamicro.com, maobibo@loongson.cn,
	tabba@google.com, prsampat@amd.com, wu.fei9@sanechips.com.cn,
	mlevitsk@redhat.com, jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

"David Hildenbrand (Arm)" <david@kernel.org> writes:

>
> [...snip...]
>
>> +static int kvm_gmem_folio_zap_direct_map(struct folio *folio)
>> +{
>> +	u64 gmem_flags = GMEM_I(folio_inode(folio))->flags;
>> +	int r = 0;
>> +
>> +	if (kvm_gmem_folio_no_direct_map(folio) || !(gmem_flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP))
>
> The function is only called when
>
> 	kvm_gmem_no_direct_map(folio_inode(folio))
>
> Does it really make sense to check for GUEST_MEMFD_FLAG_NO_DIRECT_MAP again?
>

Good point that GUEST_MEMFD_FLAG_NO_DIRECT_MAP was already checked in
the caller. I think we can drop this second check.

> If, at all, it should be a warning if GUEST_MEMFD_FLAG_NO_DIRECT_MAP is
> not set?
>
> Further, kvm_gmem_folio_zap_direct_map() uses the folio lock to
> synchronize, right? Might be worth pointing that out somehow (e.g.,
> lockdep check if possible).
>
>> +		goto out;
>> +
>> +	r = folio_zap_direct_map(folio);
>> +	if (!r)
>> +		folio->private = (void *)((u64)folio->private | KVM_GMEM_FOLIO_NO_DIRECT_MAP);
>> +
>> +out:
>> +	return r;
>> +}
>> +
>> +static void kvm_gmem_folio_restore_direct_map(struct folio *folio)
>> +{
>
> kvm_gmem_folio_zap_direct_map() is allowed to be called on folios that
> already have the direct map removed, but kvm_gmem_folio_restore_direct_map()
> cannot be called if the direct map was already restored.
>

This inconsistency was probably introduced by my comments [1] (sorry!).

I think the inconsistency here is mostly because
kvm_gmem_folio_zap_direct_map() is called from two places but restore is
only called from one place :P

[1] https://lore.kernel.org/all/CAEvNRgEzVhEzr-3GWTsE7GSBsPdvVLq7WFEeLHzcmMe=R9S51w@mail.gmail.com/

> Should we make that more consistent?
>
>
> Hoping Sean can find some time to review
>
> --
> Cheers,
>
> David



* Re: [PATCH v11 10/16] KVM: guest_memfd: Add flag to remove from direct map
  2026-03-17 14:12 ` [PATCH v11 10/16] KVM: guest_memfd: Add flag to remove from direct map Kalyazin, Nikita
  2026-03-23 18:05   ` David Hildenbrand (Arm)
@ 2026-03-23 21:15   ` Ackerley Tng
  1 sibling, 0 replies; 29+ messages in thread
From: Ackerley Tng @ 2026-03-23 21:15 UTC (permalink / raw)
  To: Kalyazin, Nikita, kvm@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, kvmarm@lists.linux.dev,
	linux-fsdevel@vger.kernel.org, linux-mm@kvack.org,
	bpf@vger.kernel.org, linux-kselftest@vger.kernel.org,
	kernel@xen0n.name, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, loongarch@lists.linux.dev,
	linux-pm@vger.kernel.org
  Cc: pbonzini@redhat.com, corbet@lwn.net, maz@kernel.org,
	oupton@kernel.org, joey.gouly@arm.com, suzuki.poulose@arm.com,
	yuzenghui@huawei.com, catalin.marinas@arm.com, will@kernel.org,
	seanjc@google.com, tglx@kernel.org, mingo@redhat.com,
	bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
	hpa@zytor.com, luto@kernel.org, peterz@infradead.org,
	willy@infradead.org, akpm@linux-foundation.org, david@kernel.org,
	lorenzo.stoakes@oracle.com, vbabka@kernel.org, rppt@kernel.org,
	surenb@google.com, mhocko@suse.com, ast@kernel.org,
	daniel@iogearbox.net, andrii@kernel.org, martin.lau@linux.dev,
	eddyz87@gmail.com, song@kernel.org, yonghong.song@linux.dev,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@fomichev.me,
	haoluo@google.com, jolsa@kernel.org, jgg@ziepe.ca,
	jhubbard@nvidia.com, peterx@redhat.com, jannh@google.com,
	pfalcato@suse.de, skhan@linuxfoundation.org, riel@surriel.com,
	ryan.roberts@arm.com, jgross@suse.com, yu-cheng.yu@intel.com,
	kas@kernel.org, coxu@redhat.com, kevin.brodsky@arm.com,
	yosry@kernel.org, ajones@ventanamicro.com, maobibo@loongson.cn,
	tabba@google.com, prsampat@amd.com, wu.fei9@sanechips.com.cn,
	mlevitsk@redhat.com, jmattson@google.com, jthoughton@google.com,
	agordeev@linux.ibm.com, alex@ghiti.fr, aou@eecs.berkeley.edu,
	borntraeger@linux.ibm.com, chenhuacai@kernel.org,
	dev.jain@arm.com, gor@linux.ibm.com, hca@linux.ibm.com,
	palmer@dabbelt.com, pjw@kernel.org, shijie@os.amperecomputing.com,
	svens@linux.ibm.com, thuth@redhat.com, wyihan@google.com,
	yang@os.amperecomputing.com, Jonathan.Cameron@huawei.com,
	Liam.Howlett@oracle.com, urezki@gmail.com,
	zhengqi.arch@bytedance.com, gerald.schaefer@linux.ibm.com,
	jiayuan.chen@shopee.com, lenb@kernel.org, osalvador@suse.de,
	pavel@kernel.org, rafael@kernel.org, vannapurve@google.com,
	jackmanb@google.com, aneesh.kumar@kernel.org,
	patrick.roy@linux.dev, Thomson, Jack, Itazuri, Takahiro,
	Manwaring, Derek

"Kalyazin, Nikita" <kalyazin@amazon.co.uk> writes:

>
> [...snip...]
>
>  static vm_fault_t kvm_gmem_fault_user_mapping(struct vm_fault *vmf)
>  {
>  	struct inode *inode = file_inode(vmf->vma->vm_file);
>  	struct folio *folio;
>  	vm_fault_t ret = VM_FAULT_LOCKED;
> +	int err;
>
>  	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
>  		return VM_FAULT_SIGBUS;
> @@ -418,6 +454,14 @@ static vm_fault_t kvm_gmem_fault_user_mapping(struct vm_fault *vmf)
>  		folio_mark_uptodate(folio);
>  	}
>
> +	if (kvm_gmem_no_direct_map(folio_inode(folio))) {
> +		err = kvm_gmem_folio_zap_direct_map(folio);
> +		if (err) {
> +			ret = vmf_error(err);
> +			goto out_folio;
> +		}
> +	}
> +
>  	vmf->page = folio_file_page(folio, vmf->pgoff);
>

Sashiko pointed out that kvm_gmem_populate() might try to write to
direct-map-removed folios, but I think that's handled: populate will
first try to GUP the folios, which is already blocked for
direct-map-removed folios.

>  out_folio:
> @@ -528,6 +572,9 @@ static void kvm_gmem_free_folio(struct folio *folio)
>  	kvm_pfn_t pfn = page_to_pfn(page);
>  	int order = folio_order(folio);
>
> +	if (kvm_gmem_folio_no_direct_map(folio))
> +		kvm_gmem_folio_restore_direct_map(folio);
> +
>  	kvm_arch_gmem_invalidate(pfn, pfn + (1ul << order));
>  }
>

Sashiko suggests invalidating first and then restoring the direct map.
I think the order doesn't matter in this case: if the folio needed
invalidation, it must be private, and the host shouldn't be writing to
private pages anyway.

One benefit of retaining this order (restore, then invalidate) is that
it leaves the invalidate hook free to do something with the memory
contents.

Or perhaps we should just take the suggestion (invalidate, then
restore) and align on invalidation not touching memory contents.

> @@ -591,6 +638,9 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
>  	/* Unmovable mappings are supposed to be marked unevictable as well. */
>  	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
>
> +	if (flags & GUEST_MEMFD_FLAG_NO_DIRECT_MAP)
> +		mapping_set_no_direct_map(inode->i_mapping);
> +
>  	GMEM_I(inode)->flags = flags;
>
>  	file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR, &kvm_gmem_fops);
> @@ -803,13 +853,22 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
>  	}
>
>  	r = kvm_gmem_prepare_folio(kvm, slot, gfn, folio);
> +	if (r)
> +		goto out_unlock;
>
> +	if (kvm_gmem_no_direct_map(folio_inode(folio))) {
> +		r = kvm_gmem_folio_zap_direct_map(folio);
> +		if (r)
> +			goto out_unlock;
> +	}
> +
>
> [...snip...]
>

Preparing a folio used to involve zeroing, but that has since been
refactored out, so I believe zapping can come before preparing.

Similar to the above point on invalidation: perhaps we should take the
suggestion to zap and then prepare. That would

+ align on preparation not touching memory contents, and
+ avoid needing to undo the preparation on zapping failure (.free_folio
  is not called on folio_put(); it is only called on removal from the
  filemap).



end of thread, other threads:[~2026-03-23 21:15 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-17 14:10 [PATCH v11 00/16] Direct Map Removal Support for guest_memfd Kalyazin, Nikita
2026-03-17 14:10 ` [PATCH v11 01/16] set_memory: set_direct_map_* to take address Kalyazin, Nikita
2026-03-23 17:44   ` David Hildenbrand (Arm)
2026-03-23 18:00   ` Ackerley Tng
2026-03-17 14:10 ` [PATCH v11 02/16] set_memory: add folio_{zap,restore}_direct_map helpers Kalyazin, Nikita
2026-03-23 17:51   ` David Hildenbrand (Arm)
2026-03-23 18:43   ` Ackerley Tng
2026-03-17 14:11 ` [PATCH v11 03/16] mm/secretmem: make use of folio_{zap,restore}_direct_map Kalyazin, Nikita
2026-03-23 17:53   ` David Hildenbrand (Arm)
2026-03-23 18:46   ` Ackerley Tng
2026-03-17 14:11 ` [PATCH v11 04/16] mm/gup: drop secretmem optimization from gup_fast_folio_allowed Kalyazin, Nikita
2026-03-23 18:31   ` David Hildenbrand (Arm)
2026-03-17 14:11 ` [PATCH v11 05/16] mm/gup: drop local variable in gup_fast_folio_allowed Kalyazin, Nikita
2026-03-23 17:55   ` David Hildenbrand (Arm)
2026-03-23 20:22     ` Ackerley Tng
2026-03-17 14:11 ` [PATCH v11 06/16] mm: introduce AS_NO_DIRECT_MAP Kalyazin, Nikita
2026-03-17 14:11 ` [PATCH v11 07/16] KVM: guest_memfd: Add stub for kvm_arch_gmem_invalidate Kalyazin, Nikita
2026-03-17 14:12 ` [PATCH v11 08/16] KVM: x86: define kvm_arch_gmem_supports_no_direct_map() Kalyazin, Nikita
2026-03-17 14:12 ` [PATCH v11 09/16] KVM: arm64: " Kalyazin, Nikita
2026-03-17 14:12 ` [PATCH v11 10/16] KVM: guest_memfd: Add flag to remove from direct map Kalyazin, Nikita
2026-03-23 18:05   ` David Hildenbrand (Arm)
2026-03-23 20:47     ` Ackerley Tng
2026-03-23 21:15   ` Ackerley Tng
2026-03-17 14:12 ` [PATCH v11 11/16] KVM: selftests: load elf via bounce buffer Kalyazin, Nikita
2026-03-17 14:12 ` [PATCH v11 12/16] KVM: selftests: set KVM_MEM_GUEST_MEMFD in vm_mem_add() if guest_memfd != -1 Kalyazin, Nikita
2026-03-17 14:13 ` [PATCH v11 13/16] KVM: selftests: Add guest_memfd based vm_mem_backing_src_types Kalyazin, Nikita
2026-03-17 14:13 ` [PATCH v11 14/16] KVM: selftests: cover GUEST_MEMFD_FLAG_NO_DIRECT_MAP in existing selftests Kalyazin, Nikita
2026-03-17 14:13 ` [PATCH v11 15/16] KVM: selftests: stuff vm_mem_backing_src_type into vm_shape Kalyazin, Nikita
2026-03-17 14:13 ` [PATCH v11 16/16] KVM: selftests: Test guest execution from direct map removed gmem Kalyazin, Nikita

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox