From: Fuad Tabba <tabba@google.com>
To: Sean Christopherson <seanjc@google.com>
Cc: "Paolo Bonzini" <pbonzini@redhat.com>,
	"Marc Zyngier" <maz@kernel.org>,
	"Oliver Upton" <oliver.upton@linux.dev>,
	"Huacai Chen" <chenhuacai@kernel.org>,
	"Michael Ellerman" <mpe@ellerman.id.au>,
	"Anup Patel" <anup@brainfault.org>,
	"Paul Walmsley" <paul.walmsley@sifive.com>,
	"Palmer Dabbelt" <palmer@dabbelt.com>,
	"Albert Ou" <aou@eecs.berkeley.edu>,
	"Alexander Viro" <viro@zeniv.linux.org.uk>,
	"Christian Brauner" <brauner@kernel.org>,
	"Matthew Wilcox (Oracle)" <willy@infradead.org>,
	"Andrew Morton" <akpm@linux-foundation.org>,
	kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.linux.dev, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, kvm-riscv@lists.infradead.org,
	linux-riscv@lists.infradead.org, linux-fsdevel@vger.kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	"Xiaoyao Li" <xiaoyao.li@intel.com>,
	"Xu Yilun" <yilun.xu@intel.com>,
	"Chao Peng" <chao.p.peng@linux.intel.com>,
	"Jarkko Sakkinen" <jarkko@kernel.org>,
	"Anish Moorthy" <amoorthy@google.com>,
	"David Matlack" <dmatlack@google.com>,
	"Yu Zhang" <yu.c.zhang@linux.intel.com>,
	"Isaku Yamahata" <isaku.yamahata@intel.com>,
	"Mickaël Salaün" <mic@digikod.net>,
	"Vlastimil Babka" <vbabka@suse.cz>,
	"Vishal Annapurve" <vannapurve@google.com>,
	"Ackerley Tng" <ackerleytng@google.com>,
	"Maciej Szmigiero" <mail@maciej.szmigiero.name>,
	"David Hildenbrand" <david@redhat.com>,
	"Quentin Perret" <qperret@google.com>,
	"Michael Roth" <michael.roth@amd.com>,
	"Wei W Wang" <wei.w.wang@intel.com>,
	"Liam Merwick" <liam.merwick@oracle.com>,
	"Isaku Yamahata" <isaku.yamahata@gmail.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: [PATCH v13 20/35] KVM: x86/mmu: Handle page fault for private memory
Date: Thu, 2 Nov 2023 14:34:40 +0000
Message-ID: <CA+EHjTyRwz7rnqy_qRa2ayPWAu_tjW5OeieZJEsaAVvXu1XeMw@mail.gmail.com>
In-Reply-To: <20231027182217.3615211-21-seanjc@google.com>

On Fri, Oct 27, 2023 at 7:23 PM Sean Christopherson <seanjc@google.com> wrote:
>
> From: Chao Peng <chao.p.peng@linux.intel.com>
>
> Add support for resolving page faults on guest private memory for VMs
> that differentiate between "shared" and "private" memory.  For such VMs,
> KVM_MEM_PRIVATE memslots can include both fd-based private memory and
> hva-based shared memory, and KVM needs to map in the "correct" variant,
> i.e. KVM needs to map the gfn shared/private as appropriate based on the
> current state of the gfn's KVM_MEMORY_ATTRIBUTE_PRIVATE flag.
>
> For AMD's SEV-SNP and Intel's TDX, the guest effectively gets to request
> shared vs. private via a bit in the guest page tables, i.e. what the guest
> wants may conflict with the current memory attributes.  To support such
> "implicit" conversion requests, exit to user with KVM_EXIT_MEMORY_FAULT
> to forward the request to userspace.  Add a new flag for memory faults,
> KVM_MEMORY_EXIT_FLAG_PRIVATE, to communicate whether the guest wants to
> map memory as shared vs. private.
>
> Like KVM_MEMORY_ATTRIBUTE_PRIVATE, use bit 3 for flagging private memory
> so that KVM can use bits 0-2 for capturing RWX behavior if/when userspace
> needs such information, e.g. a likely use of KVM_EXIT_MEMORY_FAULT is to
> exit on missing mappings when handling guest page fault VM-Exits.  In
> that case, userspace will want to know RWX information in order to
> correctly/precisely resolve the fault.
>
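A drive-by note for VMM authors following along: a minimal sketch (mine,
not from this series; error handling and headers elided, needs
<linux/kvm.h> and <sys/ioctl.h>) of how userspace could react to this
exit by flipping the range to whatever flavor the guest asked for:

  static int handle_memory_fault(int vm_fd, struct kvm_run *run)
  {
          struct kvm_memory_attributes attrs = {
                  .address = run->memory_fault.gpa,
                  .size = run->memory_fault.size,
                  /* Honor the guest's request: private iff the flag is set. */
                  .attributes = (run->memory_fault.flags &
                                 KVM_MEMORY_EXIT_FLAG_PRIVATE) ?
                                        KVM_MEMORY_ATTRIBUTE_PRIVATE : 0,
          };

          return ioctl(vm_fd, KVM_SET_MEMORY_ATTRIBUTES, &attrs);
  }
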
> Note, private memory *must* be backed by guest_memfd, i.e. shared mappings
> always come from the host userspace page tables, and private mappings
> always come from a guest_memfd instance.
>
> Co-developed-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> Signed-off-by: Chao Peng <chao.p.peng@linux.intel.com>
> Co-developed-by: Sean Christopherson <seanjc@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---

With my limited understanding of the kvm-x86 MMU code:
Reviewed-by: Fuad Tabba <tabba@google.com>

Tested the x86 code (as part of this patch series) on QEMU. The x86
fault-handling code also served as a guide for the equivalent pKVM/arm64
handling (where similar code was added):
Tested-by: Fuad Tabba <tabba@google.com>

Cheers,
/fuad

>  Documentation/virt/kvm/api.rst  |   8 ++-
>  arch/x86/kvm/mmu/mmu.c          | 101 ++++++++++++++++++++++++++++++--
>  arch/x86/kvm/mmu/mmu_internal.h |   1 +
>  include/linux/kvm_host.h        |   8 ++-
>  include/uapi/linux/kvm.h        |   1 +
>  5 files changed, 110 insertions(+), 9 deletions(-)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 7f00c310c24a..38dc1fda4f45 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -6837,6 +6837,7 @@ spec refer, https://github.com/riscv/riscv-sbi-doc.
>
>                 /* KVM_EXIT_MEMORY_FAULT */
>                 struct {
> +  #define KVM_MEMORY_EXIT_FLAG_PRIVATE (1ULL << 3)
>                         __u64 flags;
>                         __u64 gpa;
>                         __u64 size;
> @@ -6845,8 +6846,11 @@ spec refer, https://github.com/riscv/riscv-sbi-doc.
>  KVM_EXIT_MEMORY_FAULT indicates the vCPU has encountered a memory fault that
>  could not be resolved by KVM.  The 'gpa' and 'size' (in bytes) describe the
>  guest physical address range [gpa, gpa + size) of the fault.  The 'flags' field
> -describes properties of the faulting access that are likely pertinent.
> -Currently, no flags are defined.
> +describes properties of the faulting access that are likely pertinent:
> +
> + - KVM_MEMORY_EXIT_FLAG_PRIVATE - When set, indicates the memory fault occurred
> +   on a private memory access.  When clear, indicates the fault occurred on a
> +   shared access.
>
>  Note!  KVM_EXIT_MEMORY_FAULT is unique among all KVM exit reasons in that it
>  accompanies a return code of '-1', not '0'!  errno will always be set to EFAULT
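Since the -1-with-a-valid-exit-reason contract is unusual, here is a
sketch of the check a VMM's run loop needs (reusing the hypothetical
handle_memory_fault() helper sketched above):

  ret = ioctl(vcpu_fd, KVM_RUN, 0);
  if (ret < 0 && errno == EFAULT &&
      run->exit_reason == KVM_EXIT_MEMORY_FAULT) {
          /* kvm_run::memory_fault is valid despite the -1 return. */
          ret = handle_memory_fault(vm_fd, run);
  }
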
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 4167d557c577..c4e758f0aebb 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3147,9 +3147,9 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
>         return level;
>  }
>
> -int kvm_mmu_max_mapping_level(struct kvm *kvm,
> -                             const struct kvm_memory_slot *slot, gfn_t gfn,
> -                             int max_level)
> +static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
> +                                      const struct kvm_memory_slot *slot,
> +                                      gfn_t gfn, int max_level, bool is_private)
>  {
>         struct kvm_lpage_info *linfo;
>         int host_level;
> @@ -3161,6 +3161,9 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
>                         break;
>         }
>
> +       if (is_private)
> +               return max_level;
> +
>         if (max_level == PG_LEVEL_4K)
>                 return PG_LEVEL_4K;
>
> @@ -3168,6 +3171,16 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
>         return min(host_level, max_level);
>  }
>
> +int kvm_mmu_max_mapping_level(struct kvm *kvm,
> +                             const struct kvm_memory_slot *slot, gfn_t gfn,
> +                             int max_level)
> +{
> +       bool is_private = kvm_slot_can_be_private(slot) &&
> +                         kvm_mem_is_private(kvm, gfn);
> +
> +       return __kvm_mmu_max_mapping_level(kvm, slot, gfn, max_level, is_private);
> +}
> +
>  void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>         struct kvm_memory_slot *slot = fault->slot;
> @@ -3188,8 +3201,9 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>          * Enforce the iTLB multihit workaround after capturing the requested
>          * level, which will be used to do precise, accurate accounting.
>          */
> -       fault->req_level = kvm_mmu_max_mapping_level(vcpu->kvm, slot,
> -                                                    fault->gfn, fault->max_level);
> +       fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
> +                                                      fault->gfn, fault->max_level,
> +                                                      fault->is_private);
>         if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
>                 return;
>
> @@ -4261,6 +4275,55 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
>         kvm_mmu_do_page_fault(vcpu, work->cr2_or_gpa, 0, true, NULL);
>  }
>
> +static inline u8 kvm_max_level_for_order(int order)
> +{
> +       BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
> +
> +       KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
> +                       order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
> +                       order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
> +
> +       if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
> +               return PG_LEVEL_1G;
> +
> +       if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
> +               return PG_LEVEL_2M;
> +
> +       return PG_LEVEL_4K;
> +}
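(Aside, no change needed: if I'm reading KVM_HPAGE_GFN_SHIFT right, on
x86 this maps a guest_memfd folio of order 9, i.e. 2^9 contiguous 4KiB
pages = 2MiB, to PG_LEVEL_2M, and order 18 (1GiB) to PG_LEVEL_1G;
anything else rounds down to the next supported level, which the
KVM_MMU_WARN_ON above flags as unexpected.)
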
> +
> +static void kvm_mmu_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
> +                                             struct kvm_page_fault *fault)
> +{
> +       kvm_prepare_memory_fault_exit(vcpu, fault->gfn << PAGE_SHIFT,
> +                                     PAGE_SIZE, fault->write, fault->exec,
> +                                     fault->is_private);
> +}
> +
> +static int kvm_faultin_pfn_private(struct kvm_vcpu *vcpu,
> +                                  struct kvm_page_fault *fault)
> +{
> +       int max_order, r;
> +
> +       if (!kvm_slot_can_be_private(fault->slot)) {
> +               kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> +               return -EFAULT;
> +       }
> +
> +       r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
> +                            &max_order);
> +       if (r) {
> +               kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> +               return r;
> +       }
> +
> +       fault->max_level = min(kvm_max_level_for_order(max_order),
> +                              fault->max_level);
> +       fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
> +
> +       return RET_PF_CONTINUE;
> +}
> +
>  static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>  {
>         struct kvm_memory_slot *slot = fault->slot;
> @@ -4293,6 +4356,14 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
>                         return RET_PF_EMULATE;
>         }
>
> +       if (fault->is_private != kvm_mem_is_private(vcpu->kvm, fault->gfn)) {
> +               kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
> +               return -EFAULT;
> +       }
> +
> +       if (fault->is_private)
> +               return kvm_faultin_pfn_private(vcpu, fault);
> +
>         async = false;
>         fault->pfn = __gfn_to_pfn_memslot(slot, fault->gfn, false, false, &async,
>                                           fault->write, &fault->map_writable,
> @@ -7173,6 +7244,26 @@ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
>  }
>
>  #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
> +bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
> +                                       struct kvm_gfn_range *range)
> +{
> +       /*
> +        * Zap SPTEs even if the slot can't be mapped PRIVATE.  KVM x86 only
> +        * supports KVM_MEMORY_ATTRIBUTE_PRIVATE, and so it *seems* like KVM
> +        * can simply ignore such slots.  But if userspace is making memory
> +        * PRIVATE, then KVM must prevent the guest from accessing the memory
> +        * as shared.  And if userspace is making memory SHARED and this point
> +        * is reached, then at least one page within the range was previously
> +        * PRIVATE, i.e. the slot's possible hugepage ranges are changing.
> +        * Zapping SPTEs in this case ensures KVM will reassess whether or not
> +        * a hugepage can be used for affected ranges.
> +        */
> +       if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
> +               return false;
> +
> +       return kvm_unmap_gfn_range(kvm, range);
> +}
> +
>  static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
>                                 int level)
>  {
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index decc1f153669..86c7cb692786 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -201,6 +201,7 @@ struct kvm_page_fault {
>
>         /* Derived from mmu and global state.  */
>         const bool is_tdp;
> +       const bool is_private;
>         const bool nx_huge_page_workaround_enabled;
>
>         /*
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 7de93858054d..e3223cafd7db 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -2358,14 +2358,18 @@ static inline void kvm_account_pgtable_pages(void *virt, int nr)
>  #define  KVM_DIRTY_RING_MAX_ENTRIES  65536
>
>  static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
> -                                                gpa_t gpa, gpa_t size)
> +                                                gpa_t gpa, gpa_t size,
> +                                                bool is_write, bool is_exec,
> +                                                bool is_private)
>  {
>         vcpu->run->exit_reason = KVM_EXIT_MEMORY_FAULT;
>         vcpu->run->memory_fault.gpa = gpa;
>         vcpu->run->memory_fault.size = size;
>
> -       /* Flags are not (yet) defined or communicated to userspace. */
> +       /* RWX flags are not (yet) defined or communicated to userspace. */
>         vcpu->run->memory_fault.flags = 0;
> +       if (is_private)
> +               vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
>  }
>
>  #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 33d542de0a61..29e9eb51dec9 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -527,6 +527,7 @@ struct kvm_run {
>                 } notify;
>                 /* KVM_EXIT_MEMORY_FAULT */
>                 struct {
> +#define KVM_MEMORY_EXIT_FLAG_PRIVATE   (1ULL << 3)
>                         __u64 flags;
>                         __u64 gpa;
>                         __u64 size;
> --
> 2.42.0.820.g83a721a137-goog
>

