public inbox for kvm@vger.kernel.org
From: Jon Kohler <jon@nutanix.com>
To: David Riley <d.riley@proxmox.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
	Nikunj A Dadhania <nikunj@amd.com>,
	"Shah, Amit" <amit.shah@amd.com>,
	Sean Christopherson <seanjc@google.com>,
	Marcelo Tosatti <mtosatti@redhat.com>
Subject: Re: [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support
Date: Tue, 21 Apr 2026 02:07:08 +0000	[thread overview]
Message-ID: <818FF8AD-9DF7-4100-AA46-6E84253D7B72@nutanix.com> (raw)
In-Reply-To: <c91391f4-57b8-4bad-aba8-2c47c285ab27@proxmox.com>



> On Apr 15, 2026, at 3:06 AM, David Riley <d.riley@proxmox.com> wrote:
> 
> Hi Paolo, Jon,
> 
> Thanks to Paolo for sending the new patch series (v3), and to Jon
> for the feedback on my previous test.
> 
> I have once again tested this patchset (v3) on both Intel and AMD
> platforms using Proxmox VE (based on Debian Trixie) with a Windows
> Server guest (24H2, Build 26100.1742).
> 
> The focus of the tests was live migration between different hosts
> (Intel <-> Intel and AMD <-> AMD).
> 
> All tests used the same base setup:
> 
> Kernel: mainline 7.0.0-rc7 (with MBEC/GMET v3 patches applied)
> QEMU: our downstream QEMU build based on 10.2.1, plus Jon's patches
> virtio-win: 0.1.271
> 
> Windows Guest:
> For the guest setup I enabled Virtualization-Based Security (VBS)
> and Hypervisor-Protected Code Integrity (HVCI).
> 
> I set the following in the Group Policy Editor (DeviceGuard):
> * Select Platform Security Level: Secure Boot
> * Virtualization Based Protection of Code Integrity: Enabled without
>   lock
> * Require UEFI Memory Attributes Table: Checked
> 
> Hosts:
> Intel Nodes:
>    CPU: Intel(R) Xeon(R) Gold 6426Y
> 
> AMD Nodes:
>    CPU: AMD EPYC 7302P
> 
> 
> I tested the following:
> 
> 1. Intel without Hyper-V Enlightenments:
> 
> QEMU CPU options: -cpu 'host,+kvm_pv_eoi,+kvm_pv_unhalt,level=30'
> AvailableSecurityProperties [0]: 1,2,4,5,7
> 
> Security Property 7 indicates MBEC/GMET support. [0]
> 
> I migrated the virtual guest between the two Intel hosts whilst
> running Cinebench R32.200. No issues were found, but the VM does not
> perform well without Hyper-V Enlightenments.
> 
> 2. Intel with Hyper-V Enlightenments:
> 
> QEMU CPU options: -cpu 'host,+hv-evmcs,+hv-ipi,+hv-relaxed,
>   +hv-runtime,hv-spinlocks=0x1fff,+hv-stimer,+hv-synic,+hv-time,
>   +hv-tlbflush,+hv-tlbflush-ext,+hv-vapic,+hv-vpindex,+hv-xmm-input,
>   +kvm_pv_eoi,+kvm_pv_unhalt,level=30,+vmx-mbec'
> 
> AvailableSecurityProperties [0]: 1,2,4,5,7
> 
> I again migrated the virtual machine between the two Intel hosts
> whilst running Cinebench R32.200. No issues were found, but the VM
> performs significantly better with Hyper-V Enlightenments set.
> 
> 3. AMD without Hyper-V Enlightenments:
> 
> QEMU CPU options: -cpu 'host,+kvm_pv_eoi,+kvm_pv_unhalt,level=30'
> 
> AvailableSecurityProperties [0]: 1,2,4,5,7
> 
> I migrated the virtual machine between the two AMD hosts whilst
> running Cinebench R32.200. No issues were found.
> 
> 4. AMD with Hyper-V Enlightenments:
> 
> QEMU CPU options: -cpu 'host,+gmet,+hv-emsr-bitmap,+hv-ipi,
>   +hv-relaxed,+hv-runtime,hv-spinlocks=0x1fff,+hv-stimer,+hv-synic,
>   +hv-time,+hv-tlbflush,+hv-tlbflush-ext,+hv-vapic,+hv-vpindex,
>   +hv-xmm-input,+kvm_pv_eoi,+kvm_pv_unhalt,level=30'
> 
> AvailableSecurityProperties [0]: 1,2,4,5,7
> 
> I again migrated the virtual machine between the two AMD hosts whilst
> running Cinebench R32.200. I have not found any issues.
> 
> Tested-by: David Riley <d.riley@proxmox.com>

Great! Thanks for testing these various permutations out; that's
a very helpful data point.

For posterity, we've also done a similar round of testing on both
AMD and Intel and, knock on wood, things are holding up nicely, with
no trouble reports from QA as of yet (more knocking on wood).
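For anyone decoding the AvailableSecurityProperties values quoted in
the test report above, here is a minimal sketch of the mapping as we
understand it from Microsoft's Win32_DeviceGuard class documentation
(the property names are abbreviated and should be treated as
assumptions, not authoritative strings):

```python
# Hedged sketch: interpret the AvailableSecurityProperties list that
# Windows reports via the Win32_DeviceGuard CIM class. The numeric
# encoding below is assumed from Microsoft's DeviceGuard documentation;
# names are abbreviated for readability.
SECURITY_PROPERTIES = {
    1: "Base virtualization support",
    2: "Secure Boot",
    3: "DMA protection",
    4: "Secure Memory Overwrite (MOR)",
    5: "NX protections",
    6: "SMM mitigations",
    7: "Mode Based Execution Control (MBEC/GMET)",
}

def mbec_available(props):
    """True if the guest reports property 7 (MBEC/GMET) as available."""
    return 7 in set(props)

# Values as quoted in the test report above:
reported = [1, 2, 4, 5, 7]
print(", ".join(SECURITY_PROPERTIES[p] for p in reported))
print("MBEC/GMET available:", mbec_available(reported))
```

Property 7 showing up on both the Intel and AMD runs is the
interesting bit here, since that is what lets HVCI use MBEC/GMET
instead of falling back to software enforcement.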



Thread overview: 34+ messages
2026-04-08 15:41 [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support Paolo Bonzini
2026-04-08 15:41 ` [PATCH 01/27] KVM: TDX/VMX: rework EPT_VIOLATION_EXEC_FOR_RING3_LIN into PROT_MASK Paolo Bonzini
2026-04-08 15:41 ` [PATCH 02/27] KVM: x86/mmu: remove SPTE_PERM_MASK Paolo Bonzini
2026-04-08 15:41 ` [PATCH 03/27] KVM: x86/mmu: free up bit 10 of PTEs in preparation for MBEC Paolo Bonzini
2026-04-08 21:29   ` Huang, Kai
2026-04-08 15:41 ` [PATCH 04/27] KVM: x86/mmu: shuffle high bits of SPTEs " Paolo Bonzini
2026-04-17  1:23   ` Jon Kohler
2026-04-08 15:41 ` [PATCH 05/27] KVM: x86/mmu: remove SPTE_EPT_* Paolo Bonzini
2026-04-08 15:41 ` [PATCH 06/27] KVM: x86/mmu: merge make_spte_{non,}executable Paolo Bonzini
2026-04-08 15:41 ` [PATCH 07/27] KVM: x86/mmu: rename and clarify BYTE_MASK Paolo Bonzini
2026-04-08 15:41 ` [PATCH 08/27] KVM: x86/mmu: introduce ACC_READ_MASK Paolo Bonzini
2026-04-08 15:41 ` [PATCH 09/27] KVM: x86/mmu: separate more EPT/non-EPT permission_fault() Paolo Bonzini
2026-04-08 15:42 ` [PATCH 10/27] KVM: x86/mmu: pass PFERR_GUEST_PAGE/FINAL_MASK to kvm_translate_gpa Paolo Bonzini
2026-04-08 15:42 ` [PATCH 11/27] KVM: x86/mmu: pass pte_access for final nGPA->GPA walk Paolo Bonzini
2026-04-08 15:42 ` [PATCH 12/27] KVM: x86: make translate_nested_gpa vendor-specific Paolo Bonzini
2026-04-08 15:42 ` [PATCH 13/27] KVM: x86/mmu: split XS/XU bits for EPT Paolo Bonzini
2026-04-08 15:42 ` [PATCH 14/27] KVM: x86/mmu: move cr4_smep to base role Paolo Bonzini
2026-04-08 15:42 ` [PATCH 15/27] KVM: VMX: enable use of MBEC Paolo Bonzini
2026-04-17  1:19   ` Jon Kohler
2026-04-08 15:42 ` [PATCH 16/27] KVM: nVMX: pass advanced EPT violation vmexit info to guest Paolo Bonzini
2026-04-17  1:18   ` Jon Kohler
2026-04-08 15:42 ` [PATCH 17/27] KVM: nVMX: pass PFERR_USER_MASK to MMU on EPT violations Paolo Bonzini
2026-04-08 15:42 ` [PATCH 18/27] KVM: x86/mmu: add support for MBEC to EPT page table walks Paolo Bonzini
2026-04-08 15:42 ` [PATCH 19/27] KVM: nVMX: advertise MBEC to nested guests Paolo Bonzini
2026-04-08 15:42 ` [PATCH 20/27] KVM: nVMX: allow MBEC with EVMCS Paolo Bonzini
2026-04-08 15:42 ` [PATCH 21/27] KVM: x86/mmu: propagate access mask from root pages down Paolo Bonzini
2026-04-08 15:42 ` [PATCH 22/27] KVM: x86/mmu: introduce cpu_role bit for availability of PFEC.I/D Paolo Bonzini
2026-04-08 15:42 ` [PATCH 23/27] KVM: SVM: add GMET bit definitions Paolo Bonzini
2026-04-08 15:42 ` [PATCH 24/27] KVM: x86/mmu: add support for GMET to NPT page table walks Paolo Bonzini
2026-04-08 15:42 ` [PATCH 25/27] KVM: SVM: enable GMET and set it in MMU role Paolo Bonzini
2026-04-08 15:42 ` [PATCH 26/27] KVM: SVM: work around errata 1218 Paolo Bonzini
2026-04-08 15:42 ` [PATCH 27/27] KVM: nSVM: enable GMET for guests Paolo Bonzini
2026-04-15  7:06 ` [PATCH v3 00/27] KVM: combined patchset for MBEC/GMET support David Riley
2026-04-21  2:07   ` Jon Kohler [this message]
