From: Paolo Bonzini <pbonzini@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>,
	Marc Zyngier <maz@kernel.org>,
	Oliver Upton <oliver.upton@linux.dev>,
	Huacai Chen <chenhuacai@kernel.org>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Anup Patel <anup@brainfault.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>,
	Sean Christopherson <seanjc@google.com>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Christian Brauner <brauner@kernel.org>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	Andrew Morton <akpm@linux-foundation.org>
Cc: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	"Xiaoyao Li" <xiaoyao.li@intel.com>,
	"Xu Yilun" <yilun.xu@intel.com>,
	"Chao Peng" <chao.p.peng@linux.intel.com>,
	"Fuad Tabba" <tabba@google.com>,
	"Jarkko Sakkinen" <jarkko@kernel.org>,
	"Anish Moorthy" <amoorthy@google.com>,
	"David Matlack" <dmatlack@google.com>,
	"Yu Zhang" <yu.c.zhang@linux.intel.com>,
	"Isaku Yamahata" <isaku.yamahata@intel.com>,
	"Mickaël Salaün" <mic@digikod.net>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	"Vishal Annapurve" <vannapurve@google.com>,
	"Ackerley Tng" <ackerleytng@google.com>,
	"Maciej Szmigiero" <mail@maciej.szmigiero.name>,
	"David Hildenbrand" <david@redhat.com>,
	"Quentin Perret" <qperret@google.com>,
	"Michael Roth" <michael.roth@amd.com>,
	"Wei Wang" <wei.w.wang@intel.com>,
	"Liam Merwick" <liam.merwick@oracle.com>,
	"Isaku Yamahata" <isaku.yamahata@gmail.com>,
	"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: [PATCH 18/34] KVM: x86/mmu: Handle page fault for private memory
Date: Sun,  5 Nov 2023 17:30:21 +0100
Message-ID: <20231105163040.14904-19-pbonzini@redhat.com>
In-Reply-To: <20231105163040.14904-1-pbonzini@redhat.com>

From: Chao Peng <chao.p.peng@linux.intel.com>

Add support for resolving page faults on guest private memory for VMs
that differentiate between "shared" and "private" memory.  For such VMs,
KVM_MEM_PRIVATE memslots can include both fd-based private memory and
hva-based shared memory, and KVM needs to map in the "correct" variant,
i.e. KVM needs to map the gfn shared/private as appropriate based on the
current state of the gfn's KVM_MEMORY_ATTRIBUTE_PRIVATE flag.
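
For illustration, a minimal userspace sketch (not part of this patch) of
how a VMM flips that state via KVM_SET_MEMORY_ATTRIBUTES, which was added
earlier in this series; vm_fd, gpa and size are placeholders and error
handling is omitted:

  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int set_range_private(int vm_fd, __u64 gpa, __u64 size)
  {
  	struct kvm_memory_attributes attr = {
  		.address    = gpa,
  		.size       = size,
  		/* Clearing the attribute later converts the range back to shared. */
  		.attributes = KVM_MEMORY_ATTRIBUTE_PRIVATE,
  	};

  	/* VM-scoped ioctl; returns 0 on success, -1 with errno on failure. */
  	return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attr);
  }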

For AMD's SEV-SNP and Intel's TDX, the guest effectively gets to request
shared vs. private via a bit in the guest page tables, i.e. what the guest
wants may conflict with the current memory attributes.  To support such
"implicit" conversion requests, exit to user with KVM_EXIT_MEMORY_FAULT
to forward the request to userspace.  Add a new flag for memory faults,
KVM_MEMORY_EXIT_FLAG_PRIVATE, to communicate whether the guest wants to
map memory as shared vs. private.
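
A hedged sketch of how a VMM might consume that exit (per the api.rst note
below, the exit pairs with a -1/EFAULT return from KVM_RUN), assuming
kernel headers that already carry this series' uAPI additions;
handle_conversion() is a placeholder for the VMM's own attribute update
(e.g. the KVM_SET_MEMORY_ATTRIBUTES call sketched above):

  #include <errno.h>
  #include <stdbool.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Placeholder: the VMM's shared<->private conversion handler. */
  extern void handle_conversion(__u64 gpa, __u64 size, bool to_private);

  static void run_vcpu_once(int vcpu_fd, struct kvm_run *run)
  {
  	if (ioctl(vcpu_fd, KVM_RUN, 0) < 0 && errno == EFAULT &&
  	    run->exit_reason == KVM_EXIT_MEMORY_FAULT) {
  		/* The guest wants the range private iff the flag is set. */
  		bool to_private = run->memory_fault.flags &
  				  KVM_MEMORY_EXIT_FLAG_PRIVATE;

  		handle_conversion(run->memory_fault.gpa,
  				  run->memory_fault.size, to_private);
  		/* ...then re-enter the guest to retry the access. */
  	}
  }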

Like KVM_MEMORY_ATTRIBUTE_PRIVATE, use bit 3 for flagging private memory
so that KVM can use bits 0-2 for capturing RWX behavior if/when userspace
needs such information, e.g. a likely user of KVM_EXIT_MEMORY_FAULT is to
exit on missing mappings when handling guest page fault VM-Exits.  In
that case, userspace will want to know RWX information in order to
correctly/precisely resolve the fault.

Note, private memory *must* be backed by guest_memfd, i.e. shared mappings
always come from the host userspace page tables, and private mappings
always come from a guest_memfd instance.
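
For context, a fragment sketching the memslot setup this implies, using
KVM_CREATE_GUEST_MEMFD and KVM_SET_USER_MEMORY_REGION2 from earlier in
this series; vm_fd, gpa_base, mem_size and the shared_hva mapping are
placeholders, and return values are unchecked:

  	/* Private backing: an fd-based guest_memfd instance. */
  	struct kvm_create_guest_memfd gmem = { .size = mem_size };
  	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

  	/* One slot carrying both variants; the attribute picks which is mapped. */
  	struct kvm_userspace_memory_region2 region = {
  		.slot               = 0,
  		.flags              = KVM_MEM_PRIVATE,
  		.guest_phys_addr    = gpa_base,
  		.memory_size        = mem_size,
  		.userspace_addr     = (__u64)shared_hva,   /* shared pages */
  		.guest_memfd        = gmem_fd,             /* private pages */
  		.guest_memfd_offset = 0,
  	};
  	ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);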

Co-developed-by: Yu Zhang <yu.c.zhang@linux.intel.com>
Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Reviewed-by: Fuad Tabba <tabba@google.com>
Tested-by: Fuad Tabba <tabba@google.com>
Message-Id: <20231027182217.3615211-21-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 Documentation/virt/kvm/api.rst  |   8 ++-
 arch/x86/kvm/mmu/mmu.c          | 101 ++++++++++++++++++++++++++++++--
 arch/x86/kvm/mmu/mmu_internal.h |   1 +
 include/linux/kvm_host.h        |   8 ++-
 include/uapi/linux/kvm.h        |   1 +
 5 files changed, 110 insertions(+), 9 deletions(-)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 6d681f45969e..4a9a291380ad 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6953,6 +6953,7 @@ spec refer, https://github.com/riscv/riscv-sbi-doc.
 
 		/* KVM_EXIT_MEMORY_FAULT */
 		struct {
+  #define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1ULL << 3)
 			__u64 flags;
 			__u64 gpa;
 			__u64 size;
@@ -6961,8 +6962,11 @@ spec refer, https://github.com/riscv/riscv-sbi-doc.
 KVM_EXIT_MEMORY_FAULT indicates the vCPU has encountered a memory fault that
 could not be resolved by KVM.  The 'gpa' and 'size' (in bytes) describe the
 guest physical address range [gpa, gpa + size) of the fault.  The 'flags' field
-describes properties of the faulting access that are likely pertinent.
-Currently, no flags are defined.
+describes properties of the faulting access that are likely pertinent:
+
+ - KVM_MEMORY_EXIT_FLAG_PRIVATE - When set, indicates the memory fault occurred
+   on a private memory access.  When clear, indicates the fault occurred on a
+   shared access.
 
 Note!  KVM_EXIT_MEMORY_FAULT is unique among all KVM exit reasons in that it
 accompanies a return code of '-1', not '0'!  errno will always be set to EFAULT
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index f5c6b0643645..754a5aaebee5 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3147,9 +3147,9 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 	return level;
 }
 
-int kvm_mmu_max_mapping_level(struct kvm *kvm,
-			      const struct kvm_memory_slot *slot, gfn_t gfn,
-			      int max_level)
+static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
+				       const struct kvm_memory_slot *slot,
+				       gfn_t gfn, int max_level, bool is_private)
 {
 	struct kvm_lpage_info *linfo;
 	int host_level;
@@ -3161,6 +3161,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			break;
 	}
 
+	if (is_private)
+		return max_level;
+
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
@@ -3168,6 +3171,16 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 	return min(host_level, max_level);
 }
 
+int kvm_mmu_max_mapping_level(struct kvm *kvm,
+			      const struct kvm_memory_slot *slot, gfn_t gfn,
+			      int max_level)
+{
+	bool is_private = kvm_slot_can_be_private(slot) &&
+			  kvm_mem_is_private(kvm, gfn);
+
+	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, is_private);
+}
+
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
@@ -3188,8 +3201,9 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 	 * Enforce the iTLB multihit workaround after capturing the requested
 	 * level, which will be used to do precise, accurate accounting.
 	 */
-	fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot,
-						     fault->gfn, fault->max_level);
+	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
+						       fault->gfn, fault->max_level,
+						       fault->is_private);
 	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
 		return;
 
@@ -4269,6 +4283,55 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true, NULL);
 }
 
+static inline u8 kvm_max_level_for_order(int order)
+{
+	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
+
+	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
+		return PG_LEVEL_1G;
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
+		return PG_LEVEL_2M;
+
+	return PG_LEVEL_4K;
+}
+
+static void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
+					      struct kvm_page_fault *fault)
+{
+	kvm_prepare_memory_fault_exit(vcpu, fault->gfn << PAGE_SHIFT,
+				      PAGE_SIZE, fault->write, fault->exec,
+				      fault->is_private);
+}
+
+static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
+				   struct kvm_page_fault *fault)
+{
+	int max_order, r;
+
+	if (!kvm_slot_can_be_private(fault->slot)) {
+		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		return -EFAULT;
+	}
+
+	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
+			     &max_order);
+	if (r) {
+		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		return r;
+	}
+
+	fault->max_level = min(kvm_max_level_for_order(max_order),
+			       fault->max_level);
+	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
+
+	return RET_PF_CONTINUE;
+}
+
 static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
@@ -4301,6 +4364,14 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
 			return RET_PF_EMULATE;
 	}
 
+	if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) {
+		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
+		return -EFAULT;
+	}
+
+	if (fault->is_private)
+		return kvm_faultin_pfn_private(vcpu, fault);
+
 	async = false;
 	fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
 					  fault->write, &fault->map_writable,
@@ -7188,6 +7259,26 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
 }
 
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
+bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
+					struct kvm_gfn_range *range)
+{
+	/*
+	 * Zap SPTEs even if the slot can't be mapped PRIVATE.  KVM x86 only
+	 * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM
+	 * can simply ignore such slots.  But if userspace is making memory
+	 * PRIVATE, then KVM must prevent the guest from accessing the memory
+	 * as shared.  And if userspace is making memory SHARED and this point
+	 * is reached, then at least one page within the range was previously
+	 * PRIVATE, i.e. the slot's possible hugepage ranges are changing.
+	 * Zapping SPTEs in this case ensures KVM will reassess whether or not
+	 * a hugepage can be used for affected ranges.
+	 */
+	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+		return false;
+
+	return kvm_unmap_gfn_range(kvm, range);
+}
+
 static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
 				int level)
 {
diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
index decc1f153669..86c7cb692786 100644
--- a/arch/x86/kvm/mmu/mmu_internal.h
+++ b/arch/x86/kvm/mmu/mmu_internal.h
@@ -201,6 +201,7 @@ struct kvm_page_fault {
 
 	/* Derived from mmu and global state.  */
 	const bool is_tdp;
+	const bool is_private;
 	const bool nx_huge_page_workaround_enabled;
 
 	/*
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index a6de526c0426..67dfd4d79529 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2357,14 +2357,18 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
 #define  KVM_DIRTY_RING_MAX_ENTRIES  65536
 
 static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
-						 gpa_t gpa, gpa_t size)
+						 gpa_t gpa, gpa_t size,
+						 bool is_write, bool is_exec,
+						 bool is_private)
 {
 	vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
 	vcpu->run->memory_fault.gpa = gpa;
 	vcpu->run->memory_fault.size = size;
 
-	/* Flags are not (yet) defined or communicated to userspace. */
+	/* RWX flags are not (yet) defined or communicated to userspace. */
 	vcpu->run->memory_fault.flags = 0;
+	if (is_private)
+		vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
 }
 
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 2802d10aa88c..8eb10f560c69 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -535,6 +535,7 @@ struct kvm_run {
 		} notify;
 		/* KVM_EXIT_MEMORY_FAULT */
 		struct {
+#define KVM_MEMORY_EXIT_FLAG_PRIVATE	(1ULL << 3)
 			__u64 flags;
 			__u64 gpa;
 			__u64 size;
-- 
2.39.1



